Dataset fields: id (int64, 39 to 79M); url (string, 31–227 chars); text (string, 6–334k chars); source (string, 1–150 chars); categories (list, 1–6 items); token_count (int64, 3–71.8k); subcategories (list, 0–30 items).
53,316,084
https://en.wikipedia.org/wiki/Flore%20de%20la%20Nouvelle-Cal%C3%A9donie
Flore de la Nouvelle-Calédonie is an ongoing multi-volume flora describing the vascular plants of New Caledonia (including the Loyalty Islands, the Isle of Pines and the Belep Islands) in the South-West Pacific, published by the National Museum of Natural History in Paris since 1967. Each species treatment typically includes taxonomic information, a morphological description, a line drawing and a distribution map. Originally published as Flore de la Nouvelle-Calédonie et Dépendances, the series was shortened to Flore de la Nouvelle-Calédonie in 2014 and has since been co-published with the Institut de Recherche pour le Développement in a full-colour format. Flore de la Nouvelle-Calédonie currently consists of 27 volumes, covering a little over 50% of the approximately 3,400 species native to the New Caledonian archipelago. Major botanical families awaiting treatment include Rubiaceae, Cyperaceae, Rutaceae, and Poaceae.

List of published volumes
Volume 1 (1967) – Sapotaceae by A. Aubréville
Volume 2 (1968) – Proteaceae by R. Virot
Volume 3 (1969) – Pteridophytes (ferns and lycopods) by G. Brownlie
Volume 4 (1972) – Gymnosperms by D.J. de Laubenfels
Volume 5 (1974) – Lauraceae by A.J.G. Kostermans
Volume 6 (1975) – Epacridaceae (genera now included in Ericaceae) by R. Virot
Volume 7 (1976) – Acanthaceae, Bignoniaceae, Boraginaceae & Solanaceae by H. Heine
Volume 8 (1977) – Orchidaceae by N. Hallé
Volume 9 (1980) – Flacourtiaceae (genera now included in Salicaceae) by M. Lescot, Symplocaceae by H.P. Nooteboom, Icacinaceae (genera now placed in Cardiopteridaceae, Metteniusaceae and Stemonuraceae), Corynocarpaceae & Olacaceae by J.F. Villiers
Volume 10 (1981) – Apocynaceae (excluding former Asclepiadaceae) by P. Boiteau
Volume 11 (1982) – Elaeocarpaceae by C. Tirel; Monimiaceae, Amborellaceae, Atherospermataceae, Trimeniaceae and Chloranthaceae by J. Jérémie
Volume 12 (1983) – Fabaceae-Mimosoideae by I. Nielsen, Chrysobalanaceae by G. Prance, Plumbaginaceae by J. Edmondson
Volume 13 (1984) – Convolvulaceae by H. Heine
Volume 14 (1987) – Euphorbiaceae part I (including genera now placed in Picrodendraceae) by G. McPherson & C. Tirel
Volume 15 (1988) – Hernandiaceae by J. Jérémie, Meliaceae by D.J. Mabberley, Oncothecaceae by P. Morat & J.-M. Veillon, Santalaceae by N. Hallé
Volume 16 (1990) – Dilleniaceae by J.-M. Veillon, Goodeniaceae by I.H. Müller, Iridaceae by P. Goldblatt, Campynemataceae by P. Goldblatt
Volume 17 (1991) – Euphorbiaceae-Phyllanthoideae (genera now placed in Phyllanthaceae and Putranjivaceae) by G. McPherson & M. Schmid
Volume 18 (1992) – Myrtaceae-Leptospermoideae (Myrtaceae excluding Myrteae and Syzygieae) by J.W. Dawson
Volume 19 (1993) – Ebenaceae by F. White, Winteraceae by W. Vink
Volume 20 (1996) – Celastraceae by I.H. Müller, Loranthaceae & Viscaceae (Korthalsella now placed in Santalaceae) by B.A. Barlow, Alseuosmiaceae (Periomphale) by C. Tirel & J. Jérémie, Paracryphiaceae (Paracryphia) by J. Jérémie, Tiliaceae (genera now placed in Malvaceae-Grewioideae) by C. Tirel
Volume 21 (1997) – Sphenostemonaceae (Sphenostemon now placed in Paracryphiaceae) by J. Jérémie, Anacardiaceae by M. Hoff, Cruciferae by B. Jonsell
Volume 22 (1998) – Menispermaceae by L. Forman, Oleaceae & Passifloraceae by P.S. Green
Volume 23 (1999) – Myrtaceae-Syzygium by J.W. Dawson
Volume 24 (2002) – Pittosporaceae by C. Tirel & J.-M. Veillon
Volume 25 (2004) – Hippocrateaceae (Dicarpellum now placed in Celastraceae) by M.P. Simmons, Labiatae by D.J. Mabberley & R.P.J. de Kok, Vitaceae by D.J. Mabberley
Volume 26 (2014) – Cunoniaceae by H.C.F. Hopkins, Y. Pillon & R.D. Hoogland
Volume 27 (2020) – Apocynaceae p.p. (Periplocoideae, Secamonoideae and Asclepiadoideae) by S. Liede-Schumann, U. Meve & G. Gâteblé; Phellinaceae by G. Barriera; Capparaceae by S. Fici

Off series: Catalogue of introduced and cultivated plants in New Caledonia by H.S. McKee (first edition 1985; second edition 1994)

References

New Caledonia
Botany in Oceania
Books about Oceania
Flore de la Nouvelle-Calédonie
[ "Biology" ]
1,236
[ "Flora", "Florae (publication)" ]
53,316,670
https://en.wikipedia.org/wiki/Disk%20footprint
Disk footprint (or storage footprint) of a software application refers to its size in an inactive state, that is, when it is not executing but is stored on secondary media or transmitted over a network connection. It gives a sense of the size of an application, typically expressed in units of computer bytes (kilobytes, megabytes, etc.), that would be required to store the application on a storage device or to transmit it over a network. Due to the organization of modern software applications, disk footprint may not be a good indicator of actual runtime memory requirements: a tiny application with huge memory requirements, or one that loads a large number of dynamically linked libraries, may have a disk footprint that bears little relation to its runtime footprint.

See also
Computer data storage
Disk storage

References

Software optimization
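As a rough illustration (an added sketch, not part of the original article), the disk footprint of an installed application can be estimated by summing the sizes of its files on disk; the path below is a hypothetical example:

    import os

    def disk_footprint(path):
        """Sum the sizes of all regular files under path, in bytes."""
        total = 0
        for root, _dirs, files in os.walk(path):
            for name in files:
                fp = os.path.join(root, name)
                if not os.path.islink(fp):  # skip symlinks to avoid double counting
                    total += os.path.getsize(fp)
        return total

    # Example with a made-up install directory:
    # print(disk_footprint("/opt/myapp") / 1024**2, "MiB")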
Disk footprint
[ "Technology" ]
171
[ "Computing stubs" ]
53,318,462
https://en.wikipedia.org/wiki/Iota%20Crateris
Iota Crateris (ι Crateris) is the Bayer designation for a binary star system in the southern constellation of Crater. It is faintly visible to the naked eye with an apparent visual magnitude of 5.48. According to the Bortle scale, this means it can be viewed from suburban skies at night. Based upon an annual parallax shift of 37.41 mas, Iota Crateris is located 87 light years from the Sun. This is an astrometric binary system with an estimated orbital period of roughly 79,000 years. The primary, component A, is an F-type main sequence star with a stellar classification of F6.5 V, which is generating energy through the thermonuclear fusion of hydrogen in its core region. It is around 4.45 billion years old with 1.19 times the mass of the Sun. The star is radiating energy from its outer atmosphere at an effective temperature of 6,230 K. The companion, component B, is a red dwarf star with a probable classification of M3, although its mass estimate of 0.57 solar would be more consistent with an M0 class star. As of 2014, this magnitude 11.0 star had an angular separation of 1.10 arc seconds along a position angle of 248°. It has a projected separation of 25 AU, which means it is positioned at least this distance away from the primary. References F-type main-sequence stars Crater (constellation) Crateris, Iota Durchmusterung objects Crateris, 24 101198 056802 4488 3677 Astrometric binaries
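As a worked check of the quoted figures (an added illustration, not part of the original article), the distance in parsecs is the reciprocal of the parallax in arcseconds:

\( d = \frac{1}{p} = \frac{1}{0.03741''} \approx 26.7\ \mathrm{pc} \approx 26.7 \times 3.26\ \mathrm{ly/pc} \approx 87\ \mathrm{ly} \)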
Iota Crateris
[ "Astronomy" ]
328
[ "Crater (constellation)", "Constellations" ]
53,319,015
https://en.wikipedia.org/wiki/Official%20Medicines%20Control%20Laboratory
Official Medicines Control Laboratory (OMCL) is the term coined in Europe for a public institute in charge of controlling the quality of medicines and, depending on the country, other similar products (for example, medical devices). They are part of or report to national competent authorities (NCAs). By testing medicines independently of manufacturers (that is, without any conflict of interest and with guaranteed impartiality), OMCLs play a fundamental role in ensuring the quality and contributing to the safety and efficacy of medicines, whether already on the market or not, for human and veterinary use. OMCLs assess human and veterinary medicines to determine whether they meet the relevant requirements for content, purity, etc., as specified in the marketing authorisation dossier or an official pharmacopoeia. They can also check whether packaging and labelling comply with legal requirements, and provide support during quality assessment, good manufacturing practice (GMP) inspections and investigations of quality defects and pharmacovigilance. Investigations may also be carried out on products suspected of being falsified, in support of police, customs, health or judicial authorities. OMCLs also actively contribute to the development and verification of pharmacopoeial methods. To take into account the cross-border and global dimension of medicines markets, OMCLs co-operate actively at the European level and beyond. They do so through the General European OMCL Network (GEON), which was set up jointly by the Council of Europe and the European Commission (EC) in 1995. A number of non-European OMCLs have joined the network as associate members. The GEON, which comprises over 70 OMCLs from over 40 different countries, is co-ordinated by the Strasbourg-based European Directorate for the Quality of Medicines & HealthCare (EDQM) of the Council of Europe, an international organisation upholding human rights, democracy and the rule of law in Europe. A list of network members is publicly available on the EDQM homepage. The network supports laboratories across Europe in making the best use of their expertise, technical capacity and financial resources, in order to ensure the appropriate control of medicines in Europe. This is done by organising co-ordinated testing programmes, meetings, training, audits and tailored Proficiency Testing Schemes (PTSs) and by providing the necessary (IT) infrastructure. The activities of the GEON are co-funded by the Council of Europe and the European Union. OMCLs play an essential role in the Official Control Authority Batch Release (OCABR) procedure, which is foreseen in EU legislation. Under this procedure, each batch of vaccine for human use, medicinal product derived from human blood or plasma (e.g. clotting factors, human albumin) or immunological veterinary medicinal product (e.g. veterinary vaccine) undergoes independent quality control, including testing, by an OMCL after release by the manufacturer and before it reaches the patient. The legislation requires mutual recognition of test results among the member states (EU/EEA), so the OMCLs involved work together as a network to ensure that any batch is tested in only one OMCL, under agreed conditions, for the benefit of all. See also Public health laboratory External links General European OMCL Network, Council of Europe OMCL - Official Medicines Control Laboratories, Health Canada References Laboratory types
Official Medicines Control Laboratory
[ "Chemistry" ]
689
[ "Laboratory types" ]
53,320,238
https://en.wikipedia.org/wiki/White%20Space%20Internet
White Space Internet uses a part of the radio spectrum known as white spaces: unused frequencies lying in the gaps between broadcast television channels. These frequencies can provide broadband internet access with performance similar to that of 4G mobile.

Wilmington, North Carolina
In a 2012 test of the technology, the city of Wilmington, North Carolina deployed white-space systems "to connect the city's infrastructure, allowing public officials to remotely turn lights on and off in parks, provide public wireless broadband to certain areas of the city, and monitor water levels." The initial tests of this internet showed that white space signals travel further and with less interference than Wi-Fi and Bluetooth. White space can help to alleviate some of the problems that occur when networks become overcrowded. In 2013 the system was still in use.

Carlson Wireless Technologies
Carlson Wireless Technologies users are utilizing white space to access broadband internet. Research by Carlson Wireless Technologies found that a white-space link can cover an area around 10 kilometers in diameter, roughly 100 times the reach of Wi-Fi. White space is also a non-line-of-sight (NLOS) technology, unlike microwave links, which require line-of-sight. White-space devices connect over lower-frequency UHF signals, and NLOS operation lets them cover areas containing obstacles with little degradation.

Wireless Reach
White space technology has been suggested for several countries. Microsoft maintains white space databases and is advancing white space technology in Jamaica, Namibia, the Philippines, Tanzania, Taiwan, Colombia, the United Kingdom, and the United States. Google has also moved to bring white space technology to Cape Town, South Africa. An argument against white space Internet is that it uses a radio frequency range still employed by television broadcasting, and that, despite the "Super Wi-Fi" label, it is not a form of Wi-Fi.

See also
White spaces (database)
White spaces (radio)

References

External links
https://www.microsoft.com/en-us/research/project/dynamic-spectrum-and-tv-white-spaces/
https://www.microsoft.com/africa/4afrika/tv-white-spaces.aspx
https://www.microsoft.com/empowering-countries/en-us/decent-work-and-economic-growth/tv-white-space/

Wireless networking
White Space Internet
[ "Technology", "Engineering" ]
480
[ "Wireless networking", "Computer networks engineering" ]
53,320,788
https://en.wikipedia.org/wiki/Henry%20Royce%20Institute
The Henry Royce Institute (often referred to as ‘Royce’) is the UK’s national institute for advanced materials research and innovation.

Vision
Royce's vision is to identify challenges and stimulate innovation in advanced materials research to support sustainable growth and development. Royce aims to be a "single front door" to the UK’s materials research community. Its stated mission is to “support world-recognised excellence in UK materials research, accelerating commercial exploitation of innovations, and delivering positive economic and societal impact for the UK.” Operating from its Hub at the University of Manchester, Royce is a partnership of eleven leading UK institutions. Royce operates as a hub-and-spoke collaboration between the University of Manchester (the hub) and the spokes of the founding Partners: the National Nuclear Laboratory, UK Atomic Energy Authority, Imperial College London, University of Cambridge, University of Leeds, University of Liverpool, University of Oxford and the University of Sheffield. Royce also has two Associate Partners, Cranfield University and the University of Strathclyde.

Aims
Royce aims to fulfil its mission by:
Enabling national materials research foresighting, collaboration and strategy
Providing access to the latest equipment, facilities and capabilities
Catalysing industrial collaboration and exploitation of materials research
Fostering materials science skills development, innovation training, and outreach.

History
In his 2014 Autumn Statement, Chancellor George Osborne announced the launch of the Henry Royce Institute for advanced materials science. He pledged "a quarter of a billion" pounds in support of his June 2014 proposals for creating a Northern Powerhouse. Royce was then established through a grant from the Engineering and Physical Sciences Research Council (EPSRC), which has been used to fund construction and refurbishment of buildings, equipment, and research and technical staff. Royce now coordinates over 900 academics and over £300 million in facilities, "providing a joined-up framework that can deliver beyond the current capabilities of individual partners or research teams.” Royce is one of the EPSRC's four major research institutes, the other three being the Alan Turing Institute in data science; the Faraday Institution in battery science and technology; and the Rosalind Franklin Institute, which focuses on transforming life science through interdisciplinary research and technology development. These institutes represent a total financial investment of around £478 million and reflect the EPSRC’s vision and objectives ("to deliver economic impact and social prosperity; to realise the potential of engineering and physical sciences research; and to enable the UK engineering and physical sciences landscape to deliver"). In 2022, the Secretary of State for Business, Energy and Industrial Strategy, Grant Shapps, announced a further £95 million investment into Royce to deliver Phase 2 of its operations.

Name
The Henry Royce Institute is named after Sir Frederick Henry Royce OBE, a British engineer famous for his designs of car and aircraft engines. Henry Royce manufactured his first car in Manchester in 1904, and in 1906 co-founded Rolls-Royce.
Research strategy
Royce's strategy currently focuses on research in five areas:
Low carbon power: new modes of energy generation, energy storage, and efficient energy use – from hydrogen to fusion power and energy-efficient devices
Infrastructure and mobility: efficient housing, clean transport, and transforming foundation industries for clean manufacturing
Digital and Communications: low-loss digital processes, quantum technologies for computing, sensors, and data storage
Circular economy: rethinking the way we use plastic and engage with waste streams, developing truly degradable materials
Health and wellbeing: reducing carbon emissions and enabling clean water production, delivering personalised medicine, and supporting the ageing population.

In September 2020, Royce published five technology roadmaps to "set out how materials science can be harnessed to deliver net-zero targets". These roadmaps were the product of a collaboration with the Institute of Physics and the Institute for Manufacturing, convening the UK's academic and industrial materials research communities to explore how novel materials and processes can contribute to more sustainable, affordable, and reliable energy production. The roadmaps cover photovoltaics, hydrogen, thermoelectrics, calorics, and low-loss electronics.

Structure
Royce operates as a hub-and-spoke model, with the hub at the University of Manchester and spokes at the other founding Partners, comprising the universities of Sheffield, Leeds, Liverpool, Cambridge, Oxford and Imperial College London, as well as UKAEA and NNL. The hub and spokes collaborate on research in the following areas:
2-Dimensional materials – led by the University of Manchester
Advanced metals processing – led by the University of Sheffield and the University of Manchester
Atoms to devices – led jointly by the University of Leeds, Imperial College London, the University of Cambridge and the University of Manchester
Biomedical materials – led by the University of Manchester
Chemical materials design – led by the University of Liverpool and the University of Manchester
Electrochemical Systems – led by the University of Oxford
Materials systems for demanding environments – led by the University of Manchester and Cranfield University
Nuclear materials – led by the University of Manchester, National Nuclear Laboratory and UKAEA

Location
The Henry Royce Institute’s hub in Manchester is in line with the Northern Powerhouse policy and the UK government’s aim to support centres of excellence outside of the "Golden Triangle" of research institutions in London, Cambridge and Oxford. New buildings funded or part-funded by the Royce grant include:

Royce Hub Building, Manchester
The newly constructed £105m Royce Hub Building draws together research facilities and meeting spaces to drive collaboration and industry engagement. Research undertaken here encompasses biomedical, metals processing, digital fabrication, and sustainable materials themes. The nine-storey Hub building in the heart of the University of Manchester campus is the second-tallest current building on the campus, after the Maths and Social Sciences Building. It is located next to the Alan Turing Building, and is close to the National Graphene Institute, the School of Physics and Astronomy, the School of Chemistry, and the Manchester Engineering Campus Development. The building was due to open in autumn 2020, but the ceremony was delayed due to the COVID-19 pandemic.
Planning permission was granted in February 2017 and construction started in December 2017. The building was originally going to be constructed on the site of the BBC's New Broadcasting House, but the location was changed to the main campus of the university.

Sir Michael Uren Hub, Imperial
Royce funding was invested in Imperial’s new Sir Michael Uren Hub building, in which Royce occupies the eighth floor. Final works are still ongoing on some floors of the building, which has not yet been officially opened. Royce facilities here focus on the production and characterisation of thin films and devices composed of a broad spectrum of materials.

Bragg Centre for Materials Research, Leeds
Housed within the new Sir William Henry Bragg Building, the Bragg Centre for Materials Research will become operational in 2021, with a formal opening following in 2022. Royce equipment at the Bragg Centre focuses on enabling the discovery, creation, characterisation, and exploitation of materials engineered at the atomic level.

Royce Discovery Centre, Sheffield
Located in the new Harry Brearley Building in the centre of Sheffield, construction on the Royce Discovery Centre finished in 2020, with a formal opening in 2022. Housing equipment worth over £20m, the building features specialist laboratories, workshops and office spaces focused on early-stage materials discovery and processing.

Materials Innovation Factory, Liverpool
Royce invested £10.9m in Liverpool’s new Materials Innovation Factory, an £81m facility dedicated to the research and development of advanced materials. Officially opened in 2018, the site includes a Royce Open Access Area which houses one of the highest concentrations of materials science robotics in the world, as well as a suite of advanced analytical equipment.

Royce Translational Centre, Sheffield
Part of the University of Sheffield’s Advanced Manufacturing Park, the Royce Translational Centre officially opened in October 2018. Its purpose is to evolve novel materials and processing techniques developed by research teams, making them accessible for trial by industry collaborators.

Royce@Cambridge
Royce@Cambridge is based within Cambridge’s contemporary Maxwell Centre, with labs at various locations on the University's West Cambridge site. Royce’s £10m investment in open access facilities at the University of Cambridge addresses energy generation, energy storage, and efficient energy use, with equipment for fabrication of new battery structures, X-ray photoelectron spectroscopy, X-ray tomography, and electrochemical characterisation.

Outreach
Alongside funding research and facilities, Royce has funded outreach and skills programmes aimed at encouraging children and young people to consider careers in materials science and engineering. Working with the Discover Materials group, it facilitated a virtual open week in July/August 2020 aimed at 16-18 year olds considering university degrees. Royce also has a regular stand at the annual Bluedot festival, engaging children and their families in interactive materials science challenges.

Collaborations
Royce partners have collaborated with other institutions to win grants from the Faraday Institution to develop new energy storage technologies. Royce has also collaborated with the Alan Turing Institute on data-centric engineering challenges, and with the Rosalind Franklin Institute on procuring characterisation capabilities.
Leadership
The Royce leadership team comprises:
Chair: Professor Mark Smith
CEO: Professor David Knowles FREng
Chief Scientist: Professor Phil Withers FRS, FREng
Chief Scientific Officer: Professor Ian Kinloch

Connection to UK Government Policy
Advanced materials innovation is a key focus area for the UK for several reasons. UK businesses that depend on the production and processing of materials represent 15% of UK GDP, have a turnover of approximately £200bn, generate exports of £50bn, and employ over 2.6 million people. Research in advanced materials is an area of national strength, one of the "Eight Great" technologies and a major contributor to most of the other seven, with over 150,000 published patent applications between 2004 and 2013. Across the UK’s sector strategies, advanced materials is a critical component in securing the full economic benefit for the energy sector, transportation, construction, a growing digital economy, life sciences, and agricultural technology.

References

Buildings at the University of Manchester
Engineering research institutes
Engineering and Physical Sciences Research Council
Materials science institutes
Research institutes in Manchester
Henry Royce Institute
[ "Materials_science", "Engineering" ]
2,035
[ "Materials science organizations", "Engineering research institutes", "Materials science institutes" ]
53,322,500
https://en.wikipedia.org/wiki/Kappa%20Crateris
Kappa Crateris (κ Crateris) is the Bayer designation for a star in the southern constellation of Crater. It has an apparent visual magnitude of 5.94, which, according to the Bortle scale, can be seen with the naked eye under dark suburban skies. The distance to this star, as determined from an annual parallax shift of 14.27 mas, is around 229 light years. This is an evolved F-type giant star with a stellar classification of F5/6 III, where the F5/6 indicates the spectrum lies intermediate between types F5 and F6. It is an estimated 1.74 billion years old and is spinning with a projected rotational velocity of 39 km/s. Kappa Crateris has 1.74 times the mass of the Sun, and radiates 17 times the solar luminosity from its outer atmosphere at an effective temperature of 6,545 K. Kappa Crateris has a visual companion: a magnitude 13.0 star located at an angular separation of 24.6 arc seconds along a position angle of 343°, as of 2000. References F-type giants Crater (constellation) Crateris, Kappa Durchmusterung objects Crateris, 16 099564 055874 4416
Kappa Crateris
[ "Astronomy" ]
257
[ "Crater (constellation)", "Constellations" ]
53,323,405
https://en.wikipedia.org/wiki/254%20%28number%29
254 (two hundred [and] fifty-four) is the natural number following 253 and preceding 255.

In mathematics
It is a deficient number, since the sum of its divisors (excluding the number itself) is 130 < 254.
It is a semiprime number. Moreover, in American English, its name has a semiprime number of syllables.
It is a square-free integer.
It is a nontotient.
It is a lazy caterer number.
It is a congruent number.

References

Integers
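Several of these properties are easy to verify directly; a minimal Python sketch (an added illustration, not part of the original article):

    def proper_divisor_sum(n):
        """Sum of the divisors of n, excluding n itself."""
        return sum(d for d in range(1, n) if n % d == 0)

    def prime_factorization(n):
        """Prime factorization of n as a dict {prime: exponent}."""
        factors, d = {}, 2
        while d * d <= n:
            while n % d == 0:
                factors[d] = factors.get(d, 0) + 1
                n //= d
            d += 1
        if n > 1:
            factors[n] = factors.get(n, 0) + 1
        return factors

    n = 254
    f = prime_factorization(n)              # {2: 1, 127: 1}
    print(proper_divisor_sum(n) < n)        # True: 130 < 254, so deficient
    print(sum(f.values()) == 2)             # True: semiprime (2 x 127)
    print(all(e == 1 for e in f.values()))  # True: square-free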
254 (number)
[ "Mathematics" ]
110
[ "Mathematical objects", "Number stubs", "Elementary mathematics", "Integers", "Numbers" ]
53,324,696
https://en.wikipedia.org/wiki/K%E2%80%93Ca%20dating
Potassium–calcium dating, abbreviated K–Ca dating, is a radiometric dating method used in geochronology. It is based upon measuring the ratio of a parent isotope of potassium (⁴⁰K) to a daughter isotope of calcium (⁴⁰Ca). This form of radioactive decay is accomplished through beta decay. Calcium is common in many minerals, with ⁴⁰Ca being the most abundant naturally occurring isotope of calcium (96.94%), so use of this dating method to determine the ratio of daughter calcium produced from parent potassium is generally not practical. However, recent advancements in mass spectrometric techniques [e.g., thermal ionization mass spectrometry (TIMS) and collision-cell inductively-coupled plasma mass spectrometry (CC-ICP-MS)] are allowing radiogenic Ca isotope variations to be measured at unprecedented precisions in an increasing variety of materials, including high-Ca minerals (e.g., plagioclase, garnet, clinopyroxene) and aqueous (e.g., seawater and riverine) samples. In earlier studies, this technique was especially useful in minerals with low calcium contents (under 1/50th of the potassium content), so that radiogenic ingrowth of ⁴⁰Ca could be more easily quantified. Examples of such minerals include lepidolite, potassium-feldspar, and late-formed muscovite or biotite from pegmatites (preferably older). This method is also useful for zircon-poor, felsic-to-intermediate igneous rocks, various metamorphic rocks, and evaporite minerals (i.e. sylvite).

Method
Potassium has three naturally occurring isotopes: stable ³⁹K and ⁴¹K, and radioactive ⁴⁰K. ⁴⁰K exhibits dual decay: through β-decay (E = 1.33 MeV), 89% of ⁴⁰K decays to ⁴⁰Ca, and the rest decays to ⁴⁰Ar via electron capture (E = 1.46 MeV). While ⁴⁰K comprises only 0.001167% of total potassium mass, ⁴⁰Ca makes up 96.9821% of total calcium mass; thus, decay of ⁴⁰K leads to significantly greater enrichment in ⁴⁰Ca than in any other daughter isotope. The decay constant for the decay to ⁴⁰Ca is denoted as λβ and equals 4.962 × 10⁻¹⁰ yr⁻¹; the decay constant to ⁴⁰Ar is denoted as λEC and equals 0.581 × 10⁻¹⁰ yr⁻¹. The general equation for the decay time of a radioactive nucleus that decays to a single product is:

\( t = \frac{1}{\lambda} \ln\!\left(\frac{N_0}{N}\right), \qquad \lambda = \frac{\ln 2}{t_{1/2}} \)

where λ is the decay constant, t₁/₂ is the half-life, N₀ is the initial concentration of the parent isotope, and N is the final concentration of the parent isotope. Similarly, the equation for the decay time of a radioactive nucleus that decays to more than one product is:

\( t = \frac{1}{\lambda_t} \ln\!\left(1 + \frac{\lambda_t}{\lambda_a}\,\frac{a}{N}\right) \)

where a is the daughter product of interest, λa is the decay constant for daughter product a, and λt is the sum of the decay constants for daughter products a and b. This approach is taken in potassium–calcium dating, where argon and calcium are both products of decay, and can be expressed as:

\( {}^{40}\mathrm{Ca}^{*} = \frac{\lambda_\beta}{\lambda_\beta + \lambda_{EC}}\; {}^{40}\mathrm{K}_0 \left(1 - e^{-(\lambda_\beta + \lambda_{EC})\,t}\right) \)

where ⁴⁰Ca* is the measured amount of radiogenic ⁴⁰Ca in terms of the parent isotope ⁴⁰K, and ⁴⁰K₀ is the initial concentration of ⁴⁰K.

Age equation
Age determination using potassium–calcium dating is best done using the isochron technique. The isochron constructed for Pikes Peak in Colorado yielded a K–Ca age for the granites in the area consistent with Rb–Sr dating of the same batholith, supporting the practicality of this method of dating. For comparison, the isochron method uses non-radiogenic ⁴²Ca to develop an isochron.
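A minimal numerical sketch of the multi-product age equation above, rearranged for t (an added illustration; the measured ratio below is a made-up example value):

    import math

    LAMBDA_BETA = 4.962e-10  # /yr, decay constant of 40K -> 40Ca (beta decay)
    LAMBDA_EC   = 0.581e-10  # /yr, decay constant of 40K -> 40Ar (electron capture)
    LAMBDA_TOT  = LAMBDA_BETA + LAMBDA_EC

    def k_ca_age(ca_over_k):
        """Age in years from the measured radiogenic 40Ca*/40K ratio."""
        return math.log(1.0 + (LAMBDA_TOT / LAMBDA_BETA) * ca_over_k) / LAMBDA_TOT

    # Hypothetical sample: radiogenic 40Ca* equal to 0.8 of the remaining 40K
    print(k_ca_age(0.8) / 1e9)  # ~1.15, i.e. about 1.15 billion years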
The following equation is used in the construction of the isochron plot:

\( \frac{{}^{40}\mathrm{Ca}}{{}^{42}\mathrm{Ca}} = \left(\frac{{}^{40}\mathrm{Ca}}{{}^{42}\mathrm{Ca}}\right)_{\!0} + \xi\,\frac{{}^{40}\mathrm{K}}{{}^{42}\mathrm{Ca}}\left(e^{\lambda_{\mathrm{total}}\,t} - 1\right) \)

where:
t is the time elapsed
ξ is the branching ratio (= λβ / λtotal) = 0.8952
(⁴⁰Ca/⁴²Ca)₀ is the initial ⁴⁰Ca/⁴²Ca isotope ratio
⁴⁰Ca/⁴²Ca is the measured isotope ratio
⁴⁰K/⁴²Ca is the parent–daughter isotope ratio

Applications

Chronological applications
This technique's primary application is towards determining the crystallization age of minerals or rocks enriched in potassium and depleted in calcium. Due to the long half-life of ⁴⁰K (~1.25 billion years), K–Ca dating is most useful on samples older than 100,000 years. Given that the chosen sample has a relatively high current K/Ca ratio, and that the initial concentration of ⁴⁰Ca can be determined, any error in this initial concentration can be considered negligible when determining the sample's age. K–Ca dating is not a common radioactive dating method for metamorphic rocks. However, this system is considered more stable than both the K–Ar and Rb–Sr dating methods. This fact, combined with advances in the precision of mass spectrometry, makes K–Ca dating a viable option for igneous and metamorphic rocks containing little to no zircon. Potassium–calcium dating is especially useful for diagenetic minerals and marine sediments, which are both assumed to have had the same initial calcium isotopic composition as Earth's seawater at the time of their formation. As such, being able to assume the initial ⁴⁰Ca/⁴²Ca ratio as a constant, this dating method proves particularly fruitful for these respective samples.

Non-chronological applications
Aside from radioactive dating, the K–Ca system is the only isotopic system capable of detecting elemental signatures in magmatic processes. Normalizing the radiogenic calcium ratio to non-radiogenic isotopes, it was found that the isotopic composition of calcium was similar across meteorites, lunar samples, and Earth's mantle.

Advantages and disadvantages

Disadvantages
The primary disadvantage to K–Ca dating is the abundance of calcium in most minerals; this dating method cannot be used on minerals with a high preexisting calcium content, as the radioactively added calcium will increase calcium abundance in the sample only very slightly. As such, K–Ca dating is effective only in circumstances where K/Ca > 50 (in a potassium-enriched, calcium-depleted sample). Examples of such minerals include lepidolite, potassium-feldspar, and late-formed muscovite or biotite from pegmatites (preferably older). This method is also useful for zircon-poor, felsic-to-intermediate igneous rocks, various metamorphic rocks, and evaporite minerals (i.e. sylvite). Another disadvantage to K–Ca dating is that the isotopic composition of calcium (⁴⁰Ca compared to ⁴²Ca) is difficult to determine using mass spectrometry. Calcium is not easily ionized using a thermionic source, and tends to isotopically fractionate during ionization. As such, this dating method does not yield satisfactory results unless performed with extremely high precision. Until recently, K–Ca dating was not considered useful for samples younger than the Precambrian, in which radiogenic enrichment of ⁴⁰Ca is extremely small.

Advantages
However, if used effectively on the aforementioned minerals, the K–Ca dating method provides high-precision dating comparable to other isotopic dating methods. It is also most effective, comparatively, at providing major element abundances for crustal magma sources, if used with high precision.

See also
K–Ar dating
Rb–Sr dating

References

Radiometric dating
K–Ca dating
[ "Chemistry" ]
1,463
[ "Radiometric dating", "Radioactivity" ]
53,325,526
https://en.wikipedia.org/wiki/%C4%8Cech%20complex
In algebraic topology and topological data analysis, the Čech complex is an abstract simplicial complex constructed from a point cloud in any metric space which is meant to capture topological information about the point cloud or the distribution it is drawn from. Given a finite point cloud X and an ε > 0, we construct the Čech complex Čε(X) as follows: take the elements of X as the vertex set of Čε(X); then, for each σ ⊆ X, include σ in Čε(X) if the set of ε-balls centered at the points of σ has a nonempty intersection. In other words, the Čech complex is the nerve of the set of ε-balls centered at the points of X. By the nerve lemma, the Čech complex is homotopy equivalent to the union of the balls, also known as the offset filtration.

Relation to Vietoris–Rips complex
The Čech complex is a subcomplex of the Vietoris–Rips complex. While the Čech complex is more computationally expensive than the Vietoris–Rips complex, since we must check for higher-order intersections of the balls in the complex, the nerve theorem provides a guarantee that the Čech complex is homotopy equivalent to the union of the balls in the complex; the Vietoris–Rips complex may not be.

See also
Vietoris–Rips complex
Topological data analysis
Čech cohomology
Computational geometry
Abstract simplicial complex
Simplicial complex
Simplicial homology

References

Algebraic topology
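A brute-force Python sketch of this construction (an added illustration; it uses the fact that a family of ε-balls has a common point exactly when the minimum enclosing ball of their centers has radius at most ε, estimated here numerically):

    import itertools
    import numpy as np
    from scipy.optimize import minimize

    def balls_intersect(pts, eps):
        """True if the eps-balls centered at pts share a common point."""
        # Equivalent to: radius of the minimum enclosing ball of pts <= eps.
        radius = lambda c: max(np.linalg.norm(p - c) for p in pts)
        res = minimize(radius, pts.mean(axis=0), method="Nelder-Mead")
        return radius(res.x) <= eps + 1e-9

    def cech_complex(X, eps, max_dim=2):
        """All simplices of the Cech complex of X up to dimension max_dim."""
        X = np.asarray(X, dtype=float)
        simplices = [(i,) for i in range(len(X))]
        for k in range(2, max_dim + 2):  # k vertices form a (k-1)-simplex
            for idx in itertools.combinations(range(len(X)), k):
                if balls_intersect(X[list(idx)], eps):
                    simplices.append(idx)
        return simplices

    # Example: three points forming a near-equilateral triangle
    print(cech_complex([[0, 0], [1, 0], [0.5, 0.9]], eps=0.6))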
Čech complex
[ "Mathematics" ]
296
[ "Computational topology", "Computational mathematics", "Algebraic topology", "Fields of abstract algebra", "Topology" ]
53,325,759
https://en.wikipedia.org/wiki/Environmental%20Politics%20%28journal%29
Environmental Politics is a peer-reviewed academic journal, published seven times per year, which provides a forum for the study of environmental politics, particularly in relation to environmental social movements, NGOs, and parties; analysis of environmental policy-making; and environmental political thought. The journal publishes articles on politics at all scales (local, national, and global) and studies from all regions of the world. The journal's editor-in-chief is John M. Meyer (Humboldt State University).

Abstracting and indexing
According to the Journal Citation Reports, the journal has a 2019 impact factor of 4.320, ranking it 3rd out of 180 journals in the category "Political Science" and 21st out of 123 journals in the category "Environmental Studies".

See also
List of environmental social science journals
List of political science journals

References

External links

Bimonthly journals
English-language journals
Environmental health journals
Political science journals
Academic journals established in 1992
Taylor & Francis academic journals
Environmental Politics (journal)
[ "Environmental_science" ]
188
[ "Environmental science journals", "Environmental health journals" ]
70,345,105
https://en.wikipedia.org/wiki/Internal%20wave%20breaking
Internal wave breaking is a process during which internal gravity waves attain a large amplitude compared to their length scale, become nonlinearly unstable and finally break. This process is accompanied by turbulent dissipation and mixing. As internal gravity waves carry energy and momentum from the environment of their inception, breaking and subsequent turbulent mixing affects the fluid characteristics in locations of breaking. Consequently, internal wave breaking influences even the large-scale flows and composition in both the ocean and the atmosphere. In the atmosphere, momentum deposition by internal wave breaking plays a key role in atmospheric phenomena such as the Quasi-Biennial Oscillation and the Brewer-Dobson Circulation. In the deep ocean, mixing induced by internal wave breaking is an important driver of the meridional overturning circulation. On smaller scales, breaking-induced mixing is important for sediment transport and for nutrient supply to the photic zone. Most breaking of oceanic internal waves occurs on continental shelves, well below the ocean surface, which makes it a difficult phenomenon to observe. The contribution of breaking internal waves to many atmospheric and ocean processes makes it important to parametrize their effects in weather and climate models.

Breaking mechanisms
Similar to what happens to surface gravity waves near a coastline, when internal waves enter shallow waters and encounter steep topography, they steepen and grow in amplitude in a nonlinear process known as shoaling. As the wave travels over topography with increasing height, bed friction leads to internal waves becoming asymmetrical with an increasing steepness. These nonlinear internal waves on a shallow slope are generally referred to as internal bores. Wave height and energy increase until a critical steepness is reached, whereafter the wave breaks by convective, Kelvin-Helmholtz or parametric subharmonic instability. Due to the relatively small density differences (and thus small restoring forces) over the ocean depth, ocean internal waves may reach amplitudes up to around 100 m. Analogous to surface wave breaking in the region known as the surf zone, internal breaking waves dissipate energy in what is known as the internal surf zone.

Internal tide breaking
Internal tidal waves are internal waves at tidal frequency in the ocean, which are generated by the interaction of the tide with the ocean topography. Alongside internal inertial waves, they constitute the majority of the ocean internal wavefield. The internal tides consist of so-called low modes and high modes with varying vertical wavelengths. As these waves propagate, the high modes tend to dissipate their energy quickly, leading the low modes to dominate further away from the location of their generation. Low-mode internal waves, with wavelengths exceeding 100 km, generated by either tides or winds acting on the sea surface, can travel thousands of kilometers from their regions of generation, where they will eventually encounter sloping topography and break. When this happens, isopycnals become progressively steeper, and the wavefront is followed by a sharp temperature drop. This leads to an unstable density profile that eventually overturns and breaks. The magnitude of the topographic slope and the slope of the internal wave beam dictate where internal waves break.
The slope of an internal wave beam (s) can be expressed as the ratio between its horizontal (k_H) and vertical (k_z) wavenumbers:

\( s = \frac{k_H}{k_z} = \sqrt{\frac{\omega^2 - f^2}{N^2 - \omega^2}} \)

where N is the buoyancy frequency (or Brunt-Väisälä frequency), f is the Coriolis frequency and ω is the wave frequency in the dispersion relation that governs the propagation of internal waves in a continuously stratified and rotating medium:

\( \omega^2 = \frac{N^2 k_H^2 + f^2 k_z^2}{k_H^2 + k_z^2} \)

In the case that the slope of a downgoing incident internal wave beam is larger than the topographic slope (supercritical slope), waves will be reflected downward. In the case that the slope of a downgoing incident internal wave beam is smaller than the topographic slope (subcritical slope), however, waves will be reflected upward with reduced wavelength and lower group velocity. Because the energy flux is conserved during reflection, the energy density, and therefore the wave amplitude, in the reflected wave must increase with respect to the incident wave. This increase in amplitude and wave steepness results in the waves being subject to breaking. These effects increase the closer the slope of the internal wave beam is to the magnitude of the topographic slope. When the slope of the beam of the incoming internal wave is equal to the topographic slope, the slope of the topography is referred to as the critical slope. Critical slopes and near-critical slopes are important locations for both wave breaking and wave generation via tide-topography interactions.

Internal solitary wave breaking
Owing to the generally long distances traveled by internal tidal waves, they may steepen and form trains of internal solitary waves, or internal solitons. These internal solitons have much shorter wavelengths, on the order of hundreds of meters, making them much steeper than internal tides. The ratio of the topographic slope to the wave steepness can be characterized by the internal Iribarren number:

\( \xi = \frac{s}{\sqrt{a / \lambda}} \)

where s is the topographic slope, a the internal wave amplitude and λ the wavelength of the internal wave. The internal Iribarren number can be used to classify internal bores into two categories: canonical bores and non-canonical bores. For a gentle slope, as is typical for the continental shelf and nearshore areas, the internal Iribarren number is low, such that canonical bores occur. In this case, an incoming internal solitary wave can convert to a packet of solitary waves or boluses as it travels up the slope in a process referred to as fission. This is also called a fission breaker. Canonical bores are generally accompanied by an intense drop in temperature as the wavefront passes by, followed by a gradual increase over time. In rarer cases, non-canonical bores may occur. In these cases, for an increasing internal Iribarren number (that is, steeper waves or a steeper topographic slope), wave breaking can be classified successively as surging, collapsing and plunging breakers (see Breaking wave). Contrary to canonical bores, temperature gradually decreases as the wavefront passes by, followed by a sharp increase in temperature. Due to the steeper topographic slopes associated with non-canonical bores, a larger part of the wave energy is reflected back, meaning there is less turbulent energy that leads to mixing.

Mixing
Breaking internal waves are regarded as playing an important role in ocean mixing, based on laboratory experiments and remote sensing. The effect of internal waves on mixing is also studied extensively in direct numerical simulations.
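The criticality condition and the internal Iribarren number defined above can be evaluated directly; a minimal Python sketch (an added illustration; all numerical values are arbitrary examples):

    import math

    def beam_slope(omega, f, N):
        """Internal-wave beam slope s = sqrt((w^2 - f^2) / (N^2 - w^2))."""
        return math.sqrt((omega**2 - f**2) / (N**2 - omega**2))

    def classify_reflection(beam, topo):
        """Reflection regimes as described in the text above."""
        if beam > topo:
            return "supercritical slope: reflected downward"
        if beam < topo:
            return "subcritical slope: reflected upward, amplified, prone to breaking"
        return "critical slope: favours wave breaking and generation"

    def internal_iribarren(topo, amplitude, wavelength):
        """xi = s / sqrt(a / lambda); low xi suggests a canonical bore (fission)."""
        return topo / math.sqrt(amplitude / wavelength)

    # Illustrative values: M2 tidal frequency, mid-latitude stratification
    omega, f, N = 1.41e-4, 1.0e-4, 2.0e-3        # all in rad/s
    s = beam_slope(omega, f, N)                   # ~0.05
    print(classify_reflection(s, 0.03))           # supercritical in this example
    print(internal_iribarren(0.03, 30.0, 300.0))  # ~0.09: low, canonical bore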
Even though research indicates that internal wave breaking is important for local turbulence, there remains uncertainty in global estimates. Breaking internal tidal waves can result in turbulent water columns several hundred meters high, and the turbulent kinetic energy may reach levels up to 10,000 times higher than in the open ocean.

Quantifying mixing efficiency
The intensity of the turbulence caused by breaking internal waves depends mainly on the ratio between the topographic steepness and the wave steepness, known as the internal Iribarren number. A smaller internal Iribarren number correlates with a larger intensity of the resulting turbulence due to internal wave breaking. That means that a small internal Iribarren number predicts that a lot of the wave energy will be transferred to mixing and turbulence, while a large internal Iribarren number predicts that the wave energy will reflect offshore. Studies express the mixing efficiency as the ratio between the total amount of mixing and the total irreversible energy loss. In other words, the mixing efficiency can generally be defined as the following ratio:

\( \gamma = \frac{\Delta E_{mix}}{\Delta E_{tot}} \)

where γ is the mixing efficiency, ΔE_mix the change in background potential energy due to mixing and ΔE_tot the total energy expended. Because ΔE_mix and ΔE_tot are not directly observable, studies use different definitions to determine the mixing efficiency. It is notoriously hard to estimate the mixing efficiency in the ocean, due to practical limitations in measuring ocean dynamics. Besides measurements of ocean dynamics, the mixing efficiency can also be obtained from lab experiments and numerical simulations, but these also have their limitations. Therefore, these three different approaches have slightly different definitions of mixing efficiency. In theory the three approaches should give the same estimates for the mixing efficiency, but there remain discrepancies between them. Therefore, there are varying estimates of and disagreements on mixing efficiency, and comparisons are difficult due to the different definitions. Studies that quantify the mixing properties of breaking internal solitary waves report a split range of mixing-efficiency estimates, with values between 5% and 25% for laboratory experiments and between 13% and 21% for numerical simulations, depending on the internal Iribarren number.

Mass and sediment transport
Breaking and shoaling of internal waves have been shown to cause the transport of mass and energy in the form of sediment and heat, but also of nutrients, plankton and other forms of marine life.

Sediment transport
Wave breaking causes mass and sediment transport that is important for ocean biology and the shaping of the continental shelves due to erosion. The erosion caused by internal wave breaking can result in sediment being suspended and transported off-shore. This off-shore sediment transport may give rise to the emergence of nepheloid layers, which are in turn important for ocean biology. Direct numerical simulations show that breaking internal waves are also responsible for on-shore sediment transport, after which sediment can be deposited or transported elsewhere. Although many studies show that internal wave breaking leads to sediment transport, their traces in the geologic record remain uncertain. Their sedimentary structures may coexist with turbidites on continental slopes and canyons.

Transport of nutrients
The mixing and transport of nutrients in the ocean is affected largely by internal wave breaking.
The arrival of internal tidal bores has been shown to cause a 10- to 40-fold increase of nutrients on Conch Reef. Here it has been shown that the appearance of internal bores provides a predictable and periodic source of transport that can be important for a diversity of marine life. Large-amplitude internal tidal waves can cause sediments to be resuspended for as long as 5 hours per tidal wave, and internal bores have been shown to play a vital role in the onshore transport of planktonic larvae. Internal wave breaking may also cause ecological hazards, such as red tides and low dissolved-oxygen levels.

References

Water waves
Internal wave breaking
[ "Physics", "Chemistry" ]
2,040
[ "Water waves", "Waves", "Physical phenomena", "Fluid dynamics" ]
70,345,470
https://en.wikipedia.org/wiki/National%20Security%20Commission%20on%20Emerging%20Biotechnology
The National Security Commission on Emerging Biotechnology (NSCEB) is a bipartisan U.S. legislative commission established by Congress in the 2022 National Defense Authorization Act. The NSCEB reviews how emerging biotechnology affects U.S. Department of Defense activities. It submits an interim report within one year and a final unclassified report within two years, including recommendations for Congress and the federal government. The National Security Commission on Emerging Biotechnology was announced in March 2022 and will issue a final report to Congress by 2026.

Commissioners
The NSCEB is composed of appointed commissioners:
Dr. Michelle Rozo, Vice Chair
Senator Alex Padilla (D-CA)
Senator Todd Young (R-IN), Chair
Representative Stephanie Bice (R-OK)
Representative Ro Khanna (D-CA)
The Honorable Dov S. Zakheim, former Under Secretary of Defense (Comptroller)
Mr. Paul Arcangeli
Dr. Alexander Titus, former Assistant Director for Biotechnology in the Office of the Under Secretary of Defense for Research and Engineering
Dr. Eric Schmidt, billionaire philanthropist and former CEO of Google
Dr. Angela Belcher
Ms. Dawn Meyerriecks

Reports and policy recommendations
In December 2023, the Commission submitted its interim report to Congress. The report discusses the potential applications of biotechnology in fields such as human health, food security, energy production, and economic development, includes policy recommendations related to agricultural policy for potential inclusion in the Farm Bill, and stresses the need for the U.S. to stay ahead in biotechnology as international competition increases. In January 2024, the Commission published white papers on the integration of artificial intelligence and biotechnology ("AIxBio"). In May 2024, members of Congress introduced several pieces of legislation recommended by the Commission focused on food security and agricultural security. In December 2024, several NSCEB policy ideas were passed into law as part of the FY25 NDAA.

References

External links
Official website

Biotechnology in the United States
United States national commissions
National Security Commission on Emerging Biotechnology
[ "Biology" ]
416
[ "Biotechnology in the United States", "Biotechnology by country" ]
70,346,758
https://en.wikipedia.org/wiki/Sea%20surface%20skin%20temperature
The sea surface skin temperature (SSTskin), or ocean skin temperature, is the temperature of the sea surface as determined through its infrared spectrum (3.7–12 μm) and represents the temperature of the sublayer of water at a depth of 10–20 μm. High-resolution data of skin temperature gained by satellites in passive infrared measurements is a crucial constituent in determining the sea surface temperature (SST). Since the skin layer is in radiative equilibrium with the atmosphere and the sun, its temperature follows a daily cycle. Even small changes in the skin temperature can lead to large changes in atmospheric circulation. This makes skin temperature a widely used quantity in weather forecasting and climate science.

Remote sensing
Large-scale sea surface skin temperature measurements started with the use of satellites in remote sensing. The underlying principle of this kind of measurement is to determine the surface temperature via its black-body spectrum. Different measurement devices are installed, where each device measures a different wavelength. Every wavelength corresponds to a different sublayer in the upper 500 μm of the ocean water column. Since this layer shows a strong temperature gradient, the observed temperature depends on the wavelength used. Therefore, the measurements are often indicated with their wavelength band instead of their depths.

History
First satellite measurements of the sea surface were conducted as early as 1964 by Nimbus-I. Further satellites were deployed in 1966 and the early 1970s. Early measurements suffered from contamination by atmospheric disturbances. The first satellite to carry a sensor operating on multiple infrared bands was launched late in 1978, which enabled atmospheric correction. This class of sensors is called Advanced very-high-resolution radiometers (AVHRR) and provides information that is also relevant for the tracking of clouds. The current, third generation features six channels at wavelength ranges important for cloud observation, cloud/snow differentiation, surface temperature observation and atmospheric correction. The modern satellite array is able to give global coverage with a resolution of 10 km every ~6 h.

Conversion to SST
Sea surface skin temperature measurements are complemented by SSTsubskin measurements in the microwave regime to estimate the sea surface temperature. These measurements have the advantage of being independent of cloud cover and show less variation. The conversion to SST is done via elaborate retrieval algorithms. These algorithms take additional information like the current wind, cloud cover, precipitation and water vapor content into account and model the heat transfer between the layers. The determined SST is validated by in-situ measurements from ships, buoys and profilers. On average, the skin temperature is estimated to be systematically cooler by 0.15 ± 0.1 K compared to the temperature at 5 m depth.

Vertical temperature profile of the sea surface
The vertical temperature profile of the surface layer of the ocean is determined by different heat transport processes. At the very interface, the ocean is in thermal equilibrium with the atmosphere, and heat transfer is dominated by conduction and diffusion. Also, evaporation takes place at the interface and thus cools the skin layer. Below the skin layer lies the subskin layer; this layer is defined as the layer where molecular and viscous heat transfer dominates.
At larger scales, as in the much bigger foundation layer, turbulent heat transport through eddies contributes most to the vertical heat transfer. During the day, there is additional heating by the sun. The solar radiation entering the ocean heats the water column following the Beer-Lambert law. Here, approximately five percent of the incoming radiation is absorbed in the upper 1 mm of the ocean. Since the heating from above leads to a stable stratification, other processes dominate the heat transport, depending on the considered scale. Within the skin layer, with thickness δ, the turbulent diffusion term is negligible. For the stationary case without external heating, the vertical temperature profile obeys the following energy budget:

\( Q = \rho\, c_p\, \kappa\, \frac{\partial T}{\partial z} \)

Here, ρ and c_p denote the density and heat capacity of water, κ the molecular thermal conductivity and ∂T/∂z the vertical partial derivative of the temperature. The vertical heat flux Q consists of latent heat release, sensible heat fluxes and the net longwave thermal radiation. The observed ∂T/∂z in the skin layer is positive, which corresponds to a temperature increasing with depth (note that the z-axis points downward into the ocean). This leads to a cool skin layer. A common empirical description of the vertical temperature profile within the skin layer of depth δ is:

\( T(z) = T_0 + \left(T_\delta - T_0\right)\frac{z}{\delta} \)

Here, T_0 and T_δ denote the temperature of the surface and the lower boundary of the skin layer. When including the diurnal heating, we have to include an additional heating term, depending on the absorbed shortwave radiation. Integrating over depth, we can express the temperature at depth z as:

\( T(z) = T_0 + \frac{1}{\rho\, c_p\, \kappa} \int_0^z \left( Q - f(z')\, R_N \right) \mathrm{d}z' \)

where R_N is the net shortwave solar radiation at the ocean interface and f(z) is its fraction absorbed up to depth z. The diurnal heating reduces the cool-skin effect. The maximum temperature can be found in the subskin layer, where the external heating per depth is lower than in the skin layer, but where the surface cooling has a smaller effect. With further increasing depth, the temperature declines, as the proportional heating is smaller and the layer is mixed via turbulent processes.

Variation of skin temperature

Daily cycle
The ocean skin temperature is defined as the temperature of the water at 20 μm depth. This means that the SSTskin is very dependent on the heat flux from the ocean to the atmosphere. This results in diurnal warming of the sea surface: high temperatures occur during the day and low temperatures during the night (especially with clear skies and low wind speed conditions). Because the SSTskin can be measured by satellites and is the temperature almost at the interface of the ocean and the atmosphere, it is a very useful measure to find the heat flux from the ocean. The increased heat flux due to diurnal warming can reach as high as 50-60 W/m² and has a temporal mean of 10 W/m². These amounts of heat flux cannot be neglected in atmospheric processes.

Wind and interaction with the atmosphere
The sea surface temperature is also highly dependent on wind and waves. Both processes cause mixing and therefore cooling or heating of the SSTskin. For example, when rough seas occur during the day, colder water from lower layers is mixed with the ocean skin. When gravity waves are present at the sea surface, there is a modulation of the ocean skin temperature. In this modulation, the wind plays an important role. The magnitude of this modulation depends on wind speed; the phase is determined by the direction of the wind relative to the waves.
When the wind and wave direction are similar, maximum temperatures occur on the forward side of the wave; when the wind blows from the opposite direction to the waves, maximum temperatures are found at the rear face of the wave.

Interaction with marine lifeforms
On a global scale, skin temperature is an indicator of plankton concentrations. In areas where a relatively cold SSTskin is measured, the abundance of phytoplankton is high. This effect is caused by the rise of cold, nutrient-rich water from the sea bottom in these regions. This increase in nutrients causes phytoplankton to thrive. On the other hand, a relatively high SSTskin is an indication of higher zooplankton concentrations. These plankton depend on organic matter to thrive, and higher temperatures increase production. On more local scales, surface accumulations of cyanobacteria can cause local increases in SSTskin of up to 1.5 degrees Celsius. Cyanobacteria are bacteria that photosynthesize and therefore contain chlorophyll. The increased chlorophyll concentration causes more absorption of incoming radiation, which causes the temperature of the sea surface to rise. This increased temperature is most likely only apparent in the first meter, and certainly only in the first five meters, after which no increased temperatures are measured.

See also
Sea surface temperature
Remote sensing
Remote sensing (oceanography)
Thermal radiation
Skin temperature of an atmosphere
Sea surface interface temperature
Sea surface subskin temperature
Group for High Resolution Sea Surface Temperature (GHRSST)
Weather modification

References

Oceans
Temperature
Sea surface skin temperature
[ "Physics", "Chemistry" ]
1,637
[ "Scalar physical quantities", "Temperature", "Thermodynamic properties", "Physical quantities", "SI base quantities", "Intensive quantities", "Thermodynamics", "Wikipedia categories named after physical quantities" ]
70,347,258
https://en.wikipedia.org/wiki/Eddy%20pumping
Eddy pumping is a component of mesoscale eddy-induced vertical motion in the ocean. It is a physical mechanism through which vertical motion is created from variations in an eddy's rotational strength. Cyclonic (anticyclonic) eddies lead primarily to upwelling (downwelling). It is a key mechanism driving biological and biogeochemical processes in the ocean, such as algal blooms and the carbon cycle. The mechanism Eddies have a re-stratifying effect, which means they tend to organise the water in layers of different density. These layers are separated by surfaces called isopycnals. The re-stratification of the mixed layer is strongest in regions with large horizontal density gradients, also known as "fronts", where the geostrophic shear and potential energy provide an energy source from which baroclinic and symmetric instabilities can grow. Below the mixed layer, a region of rapid density change (or pycnocline) separates the upper and lower water, hindering vertical transport. Eddy pumping is a component of mesoscale eddy-induced vertical motion. Such vertical motion is caused by the deformation of the pycnocline. It can be conceptualised by assuming that ocean water has a density surface with a mean depth averaged over time and space. This surface separates the upper ocean, corresponding to the euphotic zone, from the lower, deep ocean. When an eddy transits through, this density surface is deformed. Depending on the phase of the eddy's lifespan, this creates vertical perturbations in different directions. Eddy lifespans are divided into formation, evolution and destruction. Eddy-pumping perturbations are of three types: Cyclones Anticyclones Mode-water eddies Eddy-centric approach Mode-water eddies have a complex density structure. Due to their shape, they cannot be distinguished from regular anticyclones in an eddy-centric (focused on the core of the eddy) analysis based on sea level height. Nonetheless, eddy pumping induced vertical motion in the euphotic zone of mode-water eddies is comparable to cyclones. For this reason, only the cyclonic and anticyclonic mechanisms of eddy-pumping perturbations are explained. Conceptual explanation based on sea-surface level An intuitive description of this mechanism is what is defined as eddy-centric analysis based on sea-surface level. In the Northern hemisphere, anticlockwise rotation in cyclonic eddies creates a divergence of horizontal surface currents due to the Coriolis effect, leading to a depressed water surface. To compensate for the inhomogeneity of surface elevation, isopycnal surfaces are uplifted toward the euphotic zone, and incorporation of deep-ocean, nutrient-rich waters can occur. Physical explanation Conceptually, eddy pumping associates the vertical motion in the interior of eddies to temporal changes in eddy relative vorticity. The vertical motion created by the change in vorticity is understood from the characteristics of the water contained in the core of the eddy. Cyclonic eddies rotate anticlockwise (clockwise) in the Northern (Southern) hemisphere and have a cold core. Anticyclonic eddies rotate clockwise (anticlockwise) in the Northern (Southern) hemisphere and have a warm core. The temperature and salinity difference between the eddy core and the surrounding waters is the key element driving vertical motion. While propagating horizontally, cyclones and anticyclones "bend" the pycnocline upwards and downwards, respectively, driven by this temperature and salinity discrepancy. 
The extent of the vertical perturbation of the density surface inside the eddy (compared to the mean ocean density surface) is determined by the changes in rotational strength (relative vorticity) of the eddy. Ignoring horizontal advection in the density conservation equation, the density changes due to changes in vorticity can be directly related to vertical transport. This assumption is consistent with the idea of vertical motion occurring at the eddy centre, in correspondence with variations of a perfectly circular flow. Through this mechanism, eddy pumping generates upwelling of cold, nutrient-rich deep waters in cyclonic eddies and downwelling of warm, nutrient-poor surface water in anticyclonic eddies. Dependency on the phase of lifespan Eddies weaken over time due to kinetic energy dissipation. As eddies form and intensify, the mechanisms mentioned above will strengthen and, as an increase in relative vorticity generates perturbations of the isopycnal surfaces, the pycnocline deforms. On the other hand, when eddies have aged and carry low kinetic energy, their vorticity diminishes, leading to eddy destruction. This process opposes eddy formation and intensification, as the pycnocline returns to its original position prior to the eddy-induced deformation. This means that the pycnocline will relax upward in anticyclones and downward in cyclones, leading to upwelling and downwelling, respectively. Eddy pumping characteristics The direction of vertical motion in cyclonic and anticyclonic eddies is independent of the hemisphere. Observed vertical velocities of eddy pumping are on the order of one meter per day. However, there are regional differences. In regions where kinetic energy is higher, such as in western boundary currents, eddies are found to generate stronger vertical currents than eddies in the open ocean. Limitations When describing vertical motion in eddies it is important to note that eddy pumping is only one component of a complex mechanism. Another important factor to take into account, especially when considering ocean-wind interaction, is the role played by eddy-induced Ekman pumping. Some other limitations of the explanation above are due to the idealised, quasi-circular linear dynamical response to perturbations, which neglects the vertical displacement that a particle can experience moving along a sloping neutral surface. Vertical motion in eddies is a fairly recent research topic whose theory still has limitations, due both to its complexity and to a lack of sufficient observations. Nonetheless, the explanation presented above is a simplification that helps to partially explain the important role that eddies play in biological productivity, as well as their biogeochemical role in the carbon cycle. Biological impact Recent findings suggest that mesoscale eddies are likely to play a key role in nutrient transport, such as the spatial distribution of chlorophyll concentration, in the open ocean. Lack of knowledge on the impact of eddy activity is however still notable, as eddies' contribution has been argued not to be sufficient to maintain the observed primary production through nitrogen supply in parts of the subtropical gyre. Although the mechanisms through which eddies shape ecosystems are not yet fully understood, eddies transport nutrients through a combination of horizontal and vertical processes. Stirring and trapping transport nutrients horizontally, whereas eddy pumping, eddy-induced Ekman pumping, and eddy impacts on mixed-layer depth modulate the vertical nutrient supply; a rough numerical illustration of the vertical component follows below. 
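As a rough illustration of the density-conservation argument above (our reading of it, not a method from the literature cited here), one can estimate the eddy-pumping velocity at the eddy centre by neglecting horizontal advection, so that the local density tendency balances vertical advection. All numbers below are assumed, order-of-magnitude values.

```python
# Sketch: vertical velocity implied by density conservation at the eddy centre,
#   d(rho)/dt + w * d(rho)/dz = 0   =>   w = -(d rho/dt) / (d rho/dz).
# Illustrative, assumed numbers; z is positive upward.

drho_dz = -5e-5                  # background stratification, kg/m^4 (density falls upward)

# An intensifying cyclone raising an isopycnal by ~10 m over ~30 days makes the
# water at a fixed depth denser with time:
drho_dt = 5e-5 * 10.0 / (30 * 86400)     # kg/(m^3 s)

w = -drho_dt / drho_dz           # positive w = upwelling
print(f"w = {w:.2e} m/s = {w * 86400:.2f} m/day")   # about 0.3 m/day
```

The result is a few tenths of a metre per day, consistent with the order-one-metre-per-day velocities quoted above.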
Here, the role played by eddy pumping is discussed. Cyclonic eddy pumping drives new primary production by lifting nutrient-rich waters into the euphotic zone. Complete utilisation of the upwelled nutrients is guaranteed by two main factors. Firstly, biological uptake takes place on timescales that are much shorter than the average lifetime of eddies. Secondly, because the nutrient enhancement takes place in the eddy's interior, isolated from the surrounding waters, biomass can accumulate until the upwelled nutrients are fully consumed. Main examples Evidence of the biological impacts of the eddy pumping mechanism is present in various publications based on observations and modelling of multiple locations worldwide. Eddy-centric chlorophyll anomalies have been observed in the Gulf Stream region and off the west coast of British Columbia (Haida eddies), as well as eddy-induced enhanced biological production in the Weddell-Scotia Confluence in the Southern Ocean, in the northern Gulf of Alaska, in the South China Sea, in the Bay of Bengal, in the Arabian Sea and in the north-western Alboran Sea, to name a few. Estimates of eddy pumping in the Sargasso Sea resulted in a flux between 0.24 and 0.5 mol of nitrogen per square metre per year. These quantities have been deemed sufficient to sustain a rate of new primary production consistent with estimates for this region. On a wider ecological scale, eddy-driven variations in productivity influence the trade-off between phytoplankton-supported larval survival and the abundance of predators. These concepts partially explain mesoscale variations in the distribution of larval bluefin tuna, sailfish, marlin, swordfish, and other species. Distributions of adult fishes have also been associated with the presence of cyclonic eddies. In particular, higher abundances of bluefin tuna and cetaceans in the Gulf of Mexico and of blue marlin in the proximity of Hawaii are linked to cyclonic eddy activity. Such spatial patterns extend to seabirds spotted in the vicinity of eddies, including great frigatebirds in the Mozambique Channel and albatrosses, terns, and shearwaters in the South Indian Ocean. North Atlantic Algal Bloom The North Sea is an ideal basin for the formation of algal blooms, or spring blooms, due to the combination of abundant nutrients and intense Arctic winds that favour the mixing of waters. Blooms are important indicators of the health of a marine ecosystem. Springtime phytoplankton blooms have long been thought to be initiated by the seasonal light increase and near-surface stratification. Recent observations from the sub-polar North Atlantic experiment and biophysical models suggest that the bloom may instead result from eddy-induced stratification, taking place 20 to 30 days earlier than it would occur through seasonal changes. These findings substantially revise the understanding of spring blooms. Moreover, eddy pumping and eddy-induced Ekman pumping have been shown to dominate late-bloom and post-bloom biological fields. Biogeochemistry Phytoplankton absorbs carbon dioxide through photosynthesis. When such organisms die and sink to the seafloor, the carbon they absorbed gets stored in the deep ocean through what is known as the biological pump. Recent research has been investigating the role of eddy pumping and, more generally, of vertical motion in mesoscale eddies in the carbon cycle. Evidence has shown that eddy pumping-induced upwelling and downwelling may play a significant role in shaping the way that carbon is stored in the ocean. 
Although research in this field has begun only recently, first results show that eddies contribute less than 5% of the total annual export of phytoplankton to the ocean interior. Plastic pollution Eddies play an important role in the sea surface distribution of microplastics in the ocean. Due to their convergent nature, anticyclonic eddies trap and transport microplastics at the sea surface, along with nutrients, chlorophyll and zooplankton. In the North Atlantic subtropical gyre, the first direct comparison of sea surface concentrations of microplastics between a cyclonic and an anticyclonic mesoscale eddy showed an increased accumulation in the latter. Accumulation of microplastics has environmental impacts through its interaction with the biota. Initially buoyant plastic particles (between 0.01 and 1 mm) are submerged below the climatological mixed layer depth mainly due to biofouling. In regions with very low productivity, particles remain within the upper part of the mixed layer and can only sink below it if a spring bloom occurs. See also Algal bloom - a rapid increase or accumulation in the population of algae in freshwater or marine water systems Baroclinic instability - fluid dynamical instability of fundamental importance in the atmosphere and ocean Ekman pumping - the component of Ekman transport that results in areas of downwelling due to the convergence of water Haida Eddies - episodic, clockwise rotating ocean eddies that form during the winter off the west coast of British Columbia Mesoscale ocean eddies - swirling features in the ocean created by its turbulent nature Spring bloom – strong increase in phytoplankton abundance that typically occurs in the early spring References Water physics
Eddy pumping
[ "Physics", "Materials_science" ]
2,529
[ "Water physics", "Condensed matter physics" ]
70,347,574
https://en.wikipedia.org/wiki/Exclusion%20zone%20%28physics%29
The exclusion zone is a large stratum (typically on the order of a few microns to a millimeter) observed in pure liquid water, from which particles of other materials in suspension are repelled. It is observed next to the surface of solid materials, e.g. the walls of the container in which the liquid water is held, or solid specimens immersed in it, and also at the water/air interface. Several independent research groups have reported observations of the exclusion zone next to hydrophilic surfaces. Some research groups have reported the observation of the exclusion zone next to metal surfaces. The exclusion zone has been observed using different techniques, e.g. birefringence, neutron radiography, nuclear magnetic resonance, and others, and it has potentially high importance in biology, and in engineering applications such as filtration and microfluidics. Historical background The first observations of a different behavior of water molecules close to the walls of their container date back to the late 1960s and early 1970s, when Drost-Hansen, upon reviewing many experimental articles, came to the conclusion that interfacial water shows structural differences. In 1986 Deryagin and his colleagues observed an exclusion zone next to the walls of cells. In 2006 the group of Gerald Pollack reported their observation of what they called an exclusion zone. They observed that the particles of colloidal and molecular solutes suspended in aqueous solution are profoundly and extensively excluded from the vicinity of various hydrophilic surfaces. The exclusion zone has been observed and characterized by several independent groups since those early observations. Theoretical models Since the early observations, several theoretical models have been proposed to explain the experimental observation of the exclusion zone. Mechanical model: Change in geometrical structure Some researchers suggest that the exclusion zone is due to a change in the geometrical structure of water, induced by the surface of the hydrophilic (or metal) solid. In this model, the water in the exclusion zone has a structure of hexagonal sheets, where the hydrogen atoms are positioned between oxygen atoms. Moreover, hydrogen atoms bond to the oxygen atoms lying in the layers above and below, so that in total each hydrogen forms three bonds. This structure can be considered as an intermediate between ice and water. However, the hexagonal sheet hypothesis does not account for all aspects of the exclusion zone, and it is not supported by the majority of physicists. Quantum Electrodynamical model: quantum confinement Another calculation describes the molecules of the exclusion zone using quantum mechanics and quantum electrodynamics. In this model, the molecules of bulk liquid water behave initially as in a gaseous state. Then, above a certain density threshold and below a specific critical temperature, those molecules go to another quantum state, with lower energy. In this lower-energy, coherent state, the electron clouds oscillate between two quantum states: a ground state, and an excited state where one electron per molecule is almost free (the binding energy is about 0.5 eV). In this coherent state the quantum superposition has a component with coefficient 0.9 of the ground state, and a component with 0.1 of the excited state. 
The electrons in this quantum state oscillate between the ground state and the excited state with a certain frequency, and this oscillation creates an electromagnetic field, which is confined within the super-molecular structure, so that no radiation is observed. The molecules of the structure, together with the confined electromagnetic field, constitute in this model the exclusion zone. References Water Fluid dynamics
Exclusion zone (physics)
[ "Chemistry", "Engineering", "Environmental_science" ]
719
[ "Hydrology", "Water", "Chemical engineering", "Piping", "Fluid dynamics" ]
70,347,595
https://en.wikipedia.org/wiki/Phil%20Bagwell
Phil Bagwell (died 6 October 2012) was a computer scientist known for his work and influence in the area of persistent data structures. He is best known for his 2000 invention of hash array mapped tries. Bagwell was probably the most influential researcher in the field of persistent data structures from 2000 until his death. His work is now a standard part of the runtimes of functional programming languages including Clojure, Scala, and Haskell. His contributions to building the Scala community are remembered in the Phil Bagwell Memorial Scala Community Award. Publications "Ideal Hash Trees" (2000), EPFL Technical Report "Fast Functional Lists, Hash-Lists, Deques and Variable Length Arrays" (2002), EPFL Technical Report References 2012 deaths Year of birth missing Computer scientists
Phil Bagwell
[ "Technology" ]
157
[ "Computing stubs", "Computer science", "Computer scientists", "Computer science stubs" ]
70,347,828
https://en.wikipedia.org/wiki/Papiliotrema%20mangaliensis
Papiliotrema mangaliensis (synonym Cryptococcus mangaliensis) is a fungal species in the family Rhynchogastremataceae. The species was first found in its yeast state in the Florida Everglades. References Further reading Jones, E. B. Gareth, and Jack W. Fell. "4 Basidiomycota." Marine Fungi and Fungal-like Organisms (2012): 49. Fell, Jack W. "6 Yeasts in marine environments." Marine Fungi and Fungal-like Organisms (2012): 91. Tremellomycetes Fungus species
Papiliotrema mangaliensis
[ "Biology" ]
124
[ "Fungi", "Fungus species" ]
70,348,415
https://en.wikipedia.org/wiki/Standard%20linear%20array
In the context of phased arrays, a standard linear array (SLA) is a uniform linear array (ULA) of interconnected transducer elements, e.g. microphones or antennas, where the individual elements are arranged in a straight line spaced at one half of the smallest wavelength of the intended signal to be received and/or transmitted. Therefore, an SLA is a subset of the ULA category. The reason for this spacing is that it prevents grating lobes in the visible region of the array. Intuitively one can think of a ULA as spatial sampling of a signal in the same sense as time sampling of a signal. Grating lobes are identical to the aliasing that occurs in time series analysis for an under-sampled signal. Per Shannon's sampling theorem, the sampling rate must be at least twice the highest frequency of the desired signal in order to preclude spectral aliasing. Because the beam pattern (or array factor) of a linear array is the Fourier transform of the element pattern, the sampling theorem directly applies, but in the spatial instead of the spectral domain. The discrete-time Fourier transform (DTFT) of a sampled signal is always periodic, producing "copies" of the spectrum at intervals of the sampling frequency. In the spatial domain, these copies are the grating lobes. The analog of radian frequency in the time domain is wavenumber, in radians per meter, in the spatial domain. Therefore, the spatial sampling rate, in samples per meter, must be at least $2/\lambda_{\min}$. The sampling interval, which is the inverse of the sampling rate, in meters per sample, must therefore be no more than $\lambda_{\min}/2$, which is exactly the half-wavelength element spacing that defines the SLA. References Antennas (radio) Phased arrays
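As a quick sketch of this design rule (the function names, the electromagnetic propagation speed and the example frequency are our assumptions, not part of the article):

```python
# Element spacing of a standard linear array: half the smallest design wavelength.
C = 3.0e8  # propagation speed, m/s; for a microphone array use the speed of sound

def sla_spacing(f_max_hz: float) -> float:
    """Half of the smallest wavelength, i.e. that of the highest design frequency."""
    return (C / f_max_hz) / 2.0

def max_sampling_interval(f_max_hz: float) -> float:
    """Spatial sampling-theorem bound: the interval must not exceed lambda_min / 2."""
    return 1.0 / (2.0 * f_max_hz / C)

f_max = 10e9  # e.g. a 10 GHz upper design frequency
print(sla_spacing(f_max), max_sampling_interval(f_max))  # both 0.015 m = 15 mm
```

The two functions agree by construction, which is the point of the sampling-theorem argument: the SLA spacing is exactly the largest spatial sampling interval that avoids grating lobes over the whole visible region.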
Standard linear array
[ "Technology" ]
338
[ "Phased arrays", "Wireless locating" ]
70,348,496
https://en.wikipedia.org/wiki/Naganishia%20bhutanensis
Naganishia bhutanensis is a species of fungus in the family Filobasidiaceae. It was isolated in its yeast state from soil in Bhutan. The cell is encapsulated, with an extended ovoid shape. When the cell buds, it creates birth scars, and the neck of the new yeast fits inside of the bud scar neck. The new cell typically only buds from the birth scar present from where it budded off the parent cell. In over half of the dividing cells in N. bhutanensis cultures, the cell walls are holoblastic, meaning that the new cell wall is continuous with the old cell wall on the parent cell; the remaining dividing cells in N. bhutanensis cultures divide enteroblastically, meaning that only the inner layer of the new cell wall is continuous with the inner layer of the parental cell wall. After the cells bud off, they produce a collar on the parent cell. Notably, mitosis in N. bhutanensis is not intranuclear. This species does not produce urease. References Tremellomycetes Biota of Bhutan Fungus species
Naganishia bhutanensis
[ "Biology" ]
229
[ "Biota by country", "Fungi", "Fungus species", "Biota of Bhutan" ]
70,348,606
https://en.wikipedia.org/wiki/Cutaneotrichosporon%20curvatum
Cutaneotrichosporon curvatum is a species of fungus in the family Trichosporonaceae. It is an extremophile found in cold-seep sites. It is oleaginous, using the sugars in cellulose for growth and for the production of storage triglycerides. This species has been extensively studied in relation to lipids. It can take up both glucose and xylose simultaneously. When grown in old oil with high levels of polymerized triglyceride, the cell wall transforms from being smooth to having hairy or wart-like protuberances, which are believed to assist in lipid uptake. References Tremellomycetes Fungus species
Cutaneotrichosporon curvatum
[ "Biology" ]
148
[ "Fungi", "Fungus species" ]
70,348,795
https://en.wikipedia.org/wiki/Real%20Time%20%28art%20series%29
Real Time is an art installation series by Dutch designer Maarten Baas. It consists of works in which people manually create and erase the hands of a clock each minute. The motions of the painter are filmed separately and repeated, with portions of the time depiction completed using CGI, to cover the full 24 hours. The first works in the series were launched in April 2009. They consist of videos in which sweepers move around trash to create the analog clock hands ("Sweeper's clock"), a person behind a translucent screen paints a digital clock, and grandfather clocks in which a man behind a screen paints the analog hands. In 2016, Baas continued the series with the "Schiphol clock" at Schiphol Airport in Amsterdam, Netherlands. It depicts a man behind a translucent screen painting the minutes. Baas recorded an actor (Tiago Sá da Costa) for 12 hours to create the video used. References External links Clocks Installation art works
Real Time (art series)
[ "Physics", "Technology", "Engineering" ]
196
[ "Physical systems", "Machines", "Clocks", "Measuring instruments" ]
70,349,840
https://en.wikipedia.org/wiki/1%2C1%27-Dilithioferrocene
1,1'-Dilithioferrocene is the organoiron compound with the formula Fe(C5H4Li)2. It is exclusively generated and isolated as a solvate, using either ether or tertiary amine ligands bound to the lithium centers. Regardless of the solvate, dilithioferrocene is used commonly to prepare derivatives of ferrocene. Synthesis and reactions Treatment of ferrocene with butyl lithium gives 1,1'-dilithioferrocene, regardless of the stoichiometry (monolithioferrocene requires special conditions for its preparation). Typically the lithiation reaction is conducted in the presence of tetramethylethylenediamine (tmeda). The adduct [Fe(C5H4Li)2]3(tmeda)2 has been crystallized from such solutions. Recrystallization of this adduct from thf gives [Fe(C5H4Li)2]3(thf)6. 1,1'-Dilithioferrocene reacts with a variety of electrophiles to afford disubstituted derivatives of ferrocene. These electrophiles include S8 (to give 1,1'-ferrocenetrisulfide), chlorophosphines, and chlorosilanes. The diphosphine ligand 1,1'-bis(diphenylphosphino)ferrocene (dppf) is prepared by treating dilithioferrocene with chlorodiphenylphosphine. Monolithioferrocene The reaction of ferrocene with one equivalent of butyllithium mainly affords dilithioferrocene. Monolithioferrocene can be obtained using tert-butyllithium. References Ferrocenes Sandwich compounds Cyclopentadienyl complexes Organolithium compounds
1,1'-Dilithioferrocene
[ "Chemistry" ]
406
[ "Organolithium compounds", "Cyclopentadienyl complexes", "Sandwich compounds", "Reagents for organic chemistry", "Organometallic chemistry" ]
70,353,036
https://en.wikipedia.org/wiki/James%20Duncan%20Hague
James Duncan Hague (1836–1908) was an American mining engineer, mineralogist, and geologist. Early years Hague was born in Boston, Massachusetts, to the Rev. William Hague and Mary Bowditch Moriarty. He attended school in Boston and Newark, New Jersey, before enrolling at the Lawrence Scientific School at Harvard University in 1854. The following year, he headed to the Georg-August University of Göttingen in Germany to study chemistry and mineralogy for a year before studying mining engineering at the Royal Saxon Mining Academy in Freiberg for two years. Career After returning to New York, Hague was selected by financier William H. Webb to explore several equatorial coral islands in the Pacific Ocean. Webb was involved in the guano business, and Hague examined and documented phosphate deposits on Baker, Howland, and Jarvis Islands. During the American Civil War, he spent 1862 and 1863 in Port Royal, South Carolina, as a judge advocate for the U.S. Navy, handling negotiations involving the Atlantic Blockading Squadron. He then worked for Edwin J. Hulbert, developing the Calumet and Hecla copper mines in Michigan, before joining Clarence King in 1867 as an assistant geologist on the Geological Exploration of the Fortieth Parallel. In 1871, he headed to California to work as a consulting expert on mining engineering, working with both private and governmental clients throughout the western U.S. and Mexico. In 1878, he was a member of the U.S. delegation to the Exposition Universelle in Paris, writing a report on the mining companies and innovations on display. In 1887, Hague acquired the North Star Mining Co. on Lafayette Hill near Grass Valley, California, which he had helped develop during his time in the state. He reorganized the company in 1889 and acquired several other mines, including Gold Hill. Working with his brother-in-law, Arthur De Wint Foote, he grew North Star's operations, eventually deciding to commission North Star House as an event space for the company and home for the company's supervisor, Foote. Affiliations Hague was a fellow of the American Association for the Advancement of Science and in 1904 he became a member of the American Academy of Arts and Sciences. From 1906, Hague was vice president of the American Institute of Mining Engineers. In 1887, he became a fellow of the American Geographical Society before becoming a councillor in 1907 and the society's vice president a year later. Personal life In April 1872, Hague married Mary Ward Foote (1846–1898). They had three children: Marian (1873–1971), Eleanor (1875–1954), and William (1882–1918). He died August 3, 1908, at his summer home in Stockbridge, Massachusetts, and was buried in Albany Rural Cemetery in Colonie, New York. References 1908 deaths 1836 births Fellows of the American Academy of Arts and Sciences Fellows of the American Association for the Advancement of Science Fellows of the American Institute of Mining, Metallurgical, and Petroleum Engineers American Geographical Society 19th-century American geologists Scientists from Boston American mining engineers American mineralogists Mining engineers
James Duncan Hague
[ "Engineering" ]
634
[ "Mining engineering", "Mining engineers" ]
70,353,559
https://en.wikipedia.org/wiki/Grating%20lobes
For discrete aperture antennas (such as phased arrays) in which the element spacing is greater than a half wavelength, a spatial aliasing effect allows plane waves incident to the array from visible angles other than the desired direction to be coherently added, causing grating lobes. Grating lobes are undesirable and identical to the main lobe. The perceived difference seen in the grating lobes is because of the radiation pattern of non-isotropic antenna elements, which affects main and grating lobes differently. For isotropic antenna elements, the main and grating lobes are identical. Definition In antenna or transducer arrays, a grating lobe is defined as "a lobe other than the main lobe, produced by an array antenna when the inter-element spacing is sufficiently large to permit the in-phase addition of radiated fields in more than one direction." Derivation To illustrate the concept of grating lobes, we will use a simple uniform linear array. The beam pattern (or array factor) of any array can be defined as the dot product of the steering vector and the array manifold vector. For a uniform linear array, the manifold vector can be written as $v_n(\psi) = e^{j\left(n - \frac{N-1}{2}\right)\psi}$, where $\psi$ is the phase difference between adjacent elements created by an impinging plane wave from an arbitrary direction, $n$ is the element number, and $N$ is the total number of elements. The term $\frac{N-1}{2}$ merely centers the point of reference for phase to the physical center of the array. From simple geometry, $\psi$ can be shown to be $\psi = \frac{2\pi d}{\lambda}\sin\theta$, where $\theta$ is defined as the plane-wave incident angle and $\theta = 0$ is a plane wave incident orthogonal to the array (from boresight). For a uniformly weighted (un-tapered) uniform linear array, the steering vector is of similar form to the manifold vector, but is "steered" to a target phase, $\bar\psi$, that may differ from the actual phase, $\psi$, of the impinging signal. The resulting normalized array factor is a function of the phase difference $\psi - \bar\psi$: $AF(\psi) = \frac{1}{N}\left|\frac{\sin\bigl(N(\psi - \bar\psi)/2\bigr)}{\sin\bigl((\psi - \bar\psi)/2\bigr)}\right|$. The array factor is therefore periodic and maximized whenever the numerator and denominator both equal zero, by L'Hôpital's rule. Thus, a maximum of unity is obtained for all integers $m$ where $\psi - \bar\psi = 2\pi m$. Returning to our definition of $\psi$, we wish to be able to steer the array electronically over the entire visible region, which extends from $\theta = -90°$ to $\theta = +90°$, without incurring a grating lobe. This requires that the grating lobes be separated by at least the width of the visible region, $\frac{4\pi d}{\lambda}$. From the definition of $\psi$, we see that maxima will occur whenever $\psi = \bar\psi + 2\pi m$. The first grating lobe will occur at $\psi = \bar\psi \pm 2\pi$. For a beam steered to endfire, $\bar\psi = \frac{2\pi d}{\lambda}$, we require the grating lobe to be no closer than the opposite edge of the visible region, $\psi = -\frac{2\pi d}{\lambda}$. Thus $2\pi \geq \frac{4\pi d}{\lambda}$, which gives $d \leq \frac{\lambda}{2}$ (a short numerical check of this condition is given below). Relationship to sampling theorem Alternatively, one can think of a uniform linear array (ULA) as spatial sampling of a signal in the same sense as time sampling of a signal. Grating lobes are identical to the aliasing that occurs in time series analysis for an under-sampled signal. Per Shannon's sampling theorem, the sampling rate must be at least twice the highest frequency of the desired signal in order to preclude spectral aliasing. Because the beam pattern (or array factor) of a linear array is the Fourier transform of the element pattern, the sampling theorem directly applies, but in the spatial instead of the spectral domain. The discrete-time Fourier transform (DTFT) of a sampled signal is always periodic, producing "copies" of the spectrum at intervals of the sampling frequency. In the spatial domain, these copies are the grating lobes. The analog of radian frequency in the time domain is wavenumber, in radians per meter, in the spatial domain. Therefore the spatial sampling rate, in samples per meter, must be at least $2/\lambda$. 
The sampling interval, which is the inverse of the sampling rate, in meters per sample, must therefore be no more than $\lambda/2$. References Antennas
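A small sketch of the maxima condition derived above: solving $\psi - \bar\psi = 2\pi m$ for the incidence angle gives $\sin\theta_m = \sin\theta_0 - m\lambda/d$, and a grating lobe is visible whenever $|\sin\theta_m| \leq 1$ for some $m \neq 0$. The function name and test values below are ours.

```python
import numpy as np

def grating_lobe_angles(d_over_lambda: float, steer_deg: float) -> list:
    """Angles (degrees) of visible grating lobes for spacing d and steering angle."""
    s0 = np.sin(np.radians(steer_deg))
    lobes = []
    for m in range(-10, 11):
        if m == 0:
            continue                      # m = 0 is the main lobe itself
        s = s0 - m / d_over_lambda        # sin(theta_m) = sin(theta_0) - m*lambda/d
        if abs(s) <= 1.0:                 # keep only lobes in the visible region
            lobes.append(float(np.degrees(np.arcsin(s))))
    return lobes

print(grating_lobe_angles(0.5, 0.0))    # []       -- half-wavelength spacing
print(grating_lobe_angles(0.5, 90.0))   # [-90.0]  -- endfire: a lobe just appears
print(grating_lobe_angles(1.0, 30.0))   # [-30.0]  -- one-wavelength spacing
```

With half-wavelength spacing the first grating lobe only reaches the edge of the visible region at extreme steering, which is exactly the boundary case of the derivation.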
Grating lobes
[ "Engineering" ]
757
[ "Antennas", "Telecommunications engineering" ]
70,355,064
https://en.wikipedia.org/wiki/Sana%20%28dairy%20product%29
Sana (Sa-na) is a fermented milk drink originating from Romania. It is not certain when it was first created, but it has been a staple of Romanian cuisine for centuries. Sana is similar to kefir, buttermilk or soured milk, but unlike these it is slightly sweeter in taste. It is made by fermenting unpasteurised raw milk, a process that lowers the milk's pH. Sana usually has at least 3.6% fat content. Sana is usually made from cow's milk, but in recent years goat milk sana has increased in popularity, mainly due to the perceived health benefits of goat milk. There is also sheep's milk sana and even buffalo milk sana. Similar to other fermented dairy products, sana has health and gut benefits, contributing to a healthier gut microbiome. Sana is also used as an ingredient in various recipes for baking breads or other types of pastries. It is only found in Romania and Moldova. References Fermented dairy products Fermented drinks Milk-based drinks Sour foods Romanian drinks Moldovan drinks
Sana (dairy product)
[ "Biology" ]
236
[ "Fermented drinks", "Biotechnology products" ]
63,114,331
https://en.wikipedia.org/wiki/Romanov%27s%20theorem
In mathematics, specifically additive number theory, Romanov's theorem is a mathematical theorem proved by Nikolai Pavlovich Romanov. It states that given a fixed base $a$, the set of numbers that are the sum of a prime and a positive integer power of $a$ has a positive lower asymptotic density. Statement Romanov initially stated that he had proven the statements "In jedem Intervall (0, x) liegen mehr als ax Zahlen, welche als Summe von einer Primzahl und einer k-ten Potenz einer ganzen Zahl darstellbar sind, wo a eine gewisse positive, nur von k abhängige Konstante bedeutet" and "In jedem Intervall (0, x) liegen mehr als bx Zahlen, welche als Summe von einer Primzahl und einer Potenz von a darstellbar sind. Hier ist a eine gegebene ganze Zahl und b eine positive Konstante, welche nur von a abhängt". These statements translate to "In every interval (0, x) there are more than ax numbers which can be represented as the sum of a prime number and a k-th power of an integer, where a is a certain positive constant that is only dependent on k" and "In every interval (0, x) there are more than bx numbers which can be represented as the sum of a prime number and a power of a. Here a is a given integer and b is a positive constant that only depends on a", respectively. The second statement is generally accepted as Romanov's theorem, for example in Nathanson's book. Precisely, let $a \geq 2$ be a fixed integer and let $d(x) = \tfrac{1}{x}\,\#\{n \leq x : n = p + a^k \text{ for some prime } p \text{ and integer } k \geq 1\}$. Then Romanov's theorem asserts that $\liminf_{x \to \infty} d(x) > 0$. History Alphonse de Polignac wrote in 1849 that every odd number larger than 3 can be written as the sum of an odd prime and a power of 2. (He soon noticed a counterexample, namely 959.) This corresponds to the case $a = 2$ in the original statement. The counterexample of 959 was, in fact, also mentioned in Euler's letter to Christian Goldbach, but they were working in the opposite direction, trying to find odd numbers that cannot be expressed in the form. In 1934, Romanov proved the theorem. The positive constant in the case $a = 2$ later became known as Romanov's constant. Various estimates of the constant, as well as of the corresponding upper density, have since been made. In particular, since the upper density has been shown to be less than 0.5, it follows that the odd numbers that cannot be expressed this way also have positive lower asymptotic density. Generalisations Analogues of Romanov's theorem have been proven in number fields by Riegel in 1961. In 2015, the theorem was also proven for polynomials in finite fields. Also in 2015, an arithmetic progression of Gaussian integers that are not expressible as the sum of a Gaussian prime and a power of a fixed Gaussian integer was given. References Theorems in number theory Additive number theory
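A small empirical illustration of the theorem for base $a = 2$, counting how many $n \leq N$ are expressible as $p + 2^k$. This only estimates the density for one value of $N$; it proves nothing, and the printed proportion is an observation of the script, not one of the theorem's constants.

```python
def prime_sieve(n: int) -> list:
    """Boolean sieve of Eratosthenes up to n."""
    is_p = [True] * (n + 1)
    is_p[0] = is_p[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if is_p[i]:
            for j in range(i * i, n + 1, i):
                is_p[j] = False
    return is_p

N = 10 ** 6
is_p = prime_sieve(N)
primes = [i for i in range(2, N + 1) if is_p[i]]

representable = set()
power = 2                        # 2^1; the theorem uses positive powers of the base
while power < N:
    for p in primes:             # primes are in increasing order
        if p + power > N:
            break
        representable.add(p + power)
    power *= 2

print(len(representable) / N)    # empirical lower-density estimate for this N
```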
Romanov's theorem
[ "Mathematics" ]
632
[ "Mathematical theorems", "Mathematical problems", "Number theory", "Theorems in number theory" ]
63,115,295
https://en.wikipedia.org/wiki/Imageability
Imageability is a measure of how easily a physical object, word or environment will evoke a clear mental image in the mind of any person observing it. It is used in architecture and city planning, in psycholinguistics, and in automated computer vision research. In automated image recognition, training models to connect images with concepts that have low imageability can lead to biased and harmful results. History and components Kevin A. Lynch first introduced the term "imageability" in his 1960 book, The Image of the City. In the book, Lynch argues cities contain a key set of physical elements that people use to understand the environment, orient themselves inside of it, and assign it meaning. Lynch argues the five key elements that impact the imageability of a city are Paths, Edges, Districts, Nodes, and Landmarks. Paths: channels in which people travel. Examples: streets, sidewalks, trails, canals, railroads. Edges: objects that form boundaries around space. Examples: walls, buildings, shoreline, curbstone, streets, and overpasses. Districts: medium to large areas people can enter into and out of that have a common set of identifiable characteristics. Nodes: large areas people can enter, that serve as the foci of the city, neighborhood, district, etc. Landmarks: memorable points of reference people cannot enter into. Examples: signs, mountains and public art. In 1914, half a century before The Image of the City was published, Paul Stern discussed a concept similar to imageability in the context of art. Stern, in Susan Langer's Reflections on Art, uses the name "apparency" for the attribute that describes how vividly and intensely an artistic object can be experienced. In computer vision Automated image recognition was developed by using machine learning to find patterns in large, annotated datasets of photographs, like ImageNet. Images in ImageNet are labelled using concepts in WordNet. Concepts that are easily expressed verbally, like "early", are seen as less "imageable" than nouns referring to physical objects like "leaf". Training AI models to associate low-imageability concepts with specific images can lead to problematic bias in image recognition algorithms. This has particularly been critiqued as it relates to the "person" category of WordNet and therefore also ImageNet. Trevor Paglen and Kate Crawford demonstrated in their essay "Excavating AI" and their art project ImageNet Roulette how this leads to photos of ordinary people being labelled by AI systems as "terrorists" or "sex offenders". Images in datasets are often labelled as having a certain level of imageability. As described by Kaiyu Yang, Fei-Fei Li and co-authors, this is often done following criteria from Allan Paivio and collaborators' 1968 psycholinguistic study of nouns. Yang et al. write that dataset annotators tasked with labelling imageability "see a list of words and rate each word on a 1-7 scale from 'low imagery' to 'high imagery'". To avoid biased or harmful image recognition and image generation, Yang et al. recommend not training vision recognition models on concepts with low imageability, especially when the concepts are offensive (such as sexual or racial slurs) or sensitive (their examples for this category include "orphan", "separatist", "Anglo-Saxon" and "crossover voter"). Even "safe" concepts with low imageability, like "great-niece" or "vegetarian", can lead to misleading results and should be avoided. 
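The curation rule that Yang et al. describe can be sketched as a simple filter over annotated concepts. The concept list, the ratings, the field names and the 4.0 cutoff below are all invented for illustration; only the 1-7 rating scale comes from the text above.

```python
from dataclasses import dataclass

@dataclass
class Concept:
    name: str
    imageability: float   # mean annotator rating on the 1 (low) .. 7 (high) scale
    unsafe: bool          # flagged as offensive or sensitive in manual review

# Toy data, not from any real dataset:
concepts = [
    Concept("leaf", 6.5, False),
    Concept("early", 2.1, False),
    Concept("vegetarian", 3.0, False),    # "safe" but hard to depict visually
    Concept("separatist", 2.4, True),     # sensitive and low-imageability
]

IMAGEABILITY_CUTOFF = 4.0  # hypothetical threshold

usable = [c.name for c in concepts
          if not c.unsafe and c.imageability >= IMAGEABILITY_CUTOFF]
print(usable)  # ['leaf'] -- only concrete, safe concepts keep their images
```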
See also Wayfinding Mental mapping Environmental psychology Speech perception Experimental psychology Further reading Holahan, Charles J.; Sorenson, Paul F. (1985). "The role of figural organization in city imageability: An information processing analysis". Journal of Environmental Psychology. Smolík, Filip (2019). "Imageability and Neighborhood Density Facilitate the Age of Word Acquisition in Czech". Journal of Speech, Language, and Hearing Research. Paivio, Allan; Yuille, John C.; Madigan, Stephen A. (1968). "Concreteness, imagery, and meaningfulness values for 925 nouns". Journal of Experimental Psychology. Hansen, Pernille; Holm, Elisabeth; Lind, Marianne; Simonsen, Hanne Gram (2012). "Name relatedness and imageability". Richardson, John T. E. (1975). "Concreteness and Imageability". Quarterly Journal of Experimental Psychology. Silva, Kapila Dharmasena (2015). "Developing Alternative Methods for Urban Imageability Research". McCunn, Lindsay J.; Gifford, Robert (2018). "Spatial navigation and place imageability in sense of place". Cities. Caplan, Jeremy B.; Madan, Christopher R. (2016). "Word Imageability Enhances Association-memory by Increasing Hippocampal Engagement". Journal of Cognitive Neuroscience. Chmielewski, S.; Bochniak, A.; Natapov, A.; Wezyk, P. (2020). "Introducing GEOBIA to Landscape Imageability Assessment". Remote Sensing. References Psychogeography Environmental psychology Knowledge representation
Imageability
[ "Environmental_science" ]
1,072
[ "Environmental social science", "Environmental psychology" ]
63,115,362
https://en.wikipedia.org/wiki/Filter%20quantifier
In mathematics, a filter on a set informally gives a notion of which subsets are "large". Filter quantifiers are a type of logical quantifier which, informally, say whether or not a statement is true for "most" elements of a set. Such quantifiers are often used in combinatorics, model theory (such as when dealing with ultraproducts), and in other fields of mathematical logic where (ultra)filters are used. Background Here we will use the set theory convention, where a filter $F$ on a set $X$ is defined to be an order-theoretic proper filter in the poset $(\mathcal{P}(X), \subseteq)$, that is, a subset $F \subseteq \mathcal{P}(X)$ such that: $X \in F$ and $\emptyset \notin F$; for all $A, B \in F$ we have $A \cap B \in F$; for all $A \in F$, if $A \subseteq B \subseteq X$ then $B \in F$. Recall a filter $F$ on $X$ is an ultrafilter if, for every $A \subseteq X$, either $A \in F$ or $X \setminus A \in F$. Given a filter $F$ on a set $X$, we say a subset $S \subseteq X$ is $F$-stationary if, for all $A \in F$, we have $S \cap A \neq \emptyset$. Definition Let $F$ be a filter on a set $X$. We define the filter quantifiers $\forall_F$ and $\exists_F$ as formal logical symbols with the following interpretation: $(\forall_F x)\,\varphi(x) \iff \{x \in X : \varphi(x)\} \in F$, and $(\exists_F x)\,\varphi(x) \iff \{x \in X : \varphi(x)\}$ is $F$-stationary, for every first-order formula $\varphi(x)$ with one free variable. These also admit alternative definitions as $(\forall_F x)\,\varphi(x) \iff \neg(\exists_F x)\,\neg\varphi(x)$ and $(\exists_F x)\,\varphi(x) \iff \neg(\forall_F x)\,\neg\varphi(x)$. When $F$ is an ultrafilter, the two quantifiers defined above coincide, and we will often use the notation $(Fx)\,\varphi(x)$ instead. Verbally, we might pronounce $(Fx)\,\varphi(x)$ as "for $F$-almost all $x$", "for $F$-most $x$", "for the majority of $x$ (according to $F$)", or "for most $x$ (according to $F$)". In cases where the filter is clear, we might omit mention of $F$. Properties The filter quantifiers $\forall_F$ and $\exists_F$ satisfy the following logical identities, for all formulae $\varphi$ and $\psi$: Duality: $\neg(\forall_F x)\,\varphi \iff (\exists_F x)\,\neg\varphi$. Weakening: $(\forall x)\,\varphi \implies (\forall_F x)\,\varphi \implies (\exists_F x)\,\varphi \implies (\exists x)\,\varphi$. Conjunction: $(\forall_F x)(\varphi \wedge \psi) \iff (\forall_F x)\,\varphi \wedge (\forall_F x)\,\psi$. Disjunction: $(\exists_F x)(\varphi \vee \psi) \iff (\exists_F x)\,\varphi \vee (\exists_F x)\,\psi$. If $F \subseteq G$ are filters on $X$, then: $(\forall_F x)\,\varphi \implies (\forall_G x)\,\varphi$ and $(\exists_G x)\,\varphi \implies (\exists_F x)\,\varphi$. Additionally, if $U$ is an ultrafilter, the two filter quantifiers coincide: $\forall_U = \exists_U$. Renaming this quantifier $(Ux)$, the following properties hold: Negation: $\neg(Ux)\,\varphi \iff (Ux)\,\neg\varphi$. Weakening: $(\forall x)\,\varphi \implies (Ux)\,\varphi \implies (\exists x)\,\varphi$. Conjunction: $(Ux)(\varphi \wedge \psi) \iff (Ux)\,\varphi \wedge (Ux)\,\psi$. Disjunction: $(Ux)(\varphi \vee \psi) \iff (Ux)\,\varphi \vee (Ux)\,\psi$. In general, filter quantifiers do not commute with each other, nor with the usual $\forall$ and $\exists$ quantifiers. Examples If $F = \{X\}$ is the trivial filter on $X$, then unpacking the definition, we have $(\forall_F x)\,\varphi \iff (\forall x)\,\varphi$ and $(\exists_F x)\,\varphi \iff (\exists x)\,\varphi$. This recovers the usual $\forall$ and $\exists$ quantifiers. Let $F$ be the Fréchet filter on an infinite set $X$. Then, $(\forall_F x)\,\varphi$ holds iff $\varphi(x)$ holds for cofinitely many $x$, and $(\exists_F x)\,\varphi$ holds iff $\varphi(x)$ holds for infinitely many $x$. The quantifiers $\forall_F$ and $\exists_F$ are more commonly denoted $\forall^\infty$ and $\exists^\infty$, respectively. Let $F$ be the "measure filter" on $[0,1]$ generated by all subsets with full Lebesgue measure. The above construction gives us "measure quantifiers": $(\forall_F x)\,\varphi$ holds iff $\varphi$ holds almost everywhere, and $(\exists_F x)\,\varphi$ holds iff $\varphi$ holds on a set of positive measure. Suppose $F$ is the principal filter generated by some set $Y \subseteq X$, that is, $F = \{A \subseteq X : Y \subseteq A\}$. Then, we have $(\forall_F x)\,\varphi \iff (\forall x \in Y)\,\varphi$ and $(\exists_F x)\,\varphi \iff (\exists x \in Y)\,\varphi$. If $F$ is the principal ultrafilter of an element $a \in X$, then we have $(Fx)\,\varphi \iff \varphi(a)$. Use The utility of filter quantifiers is that they often give a more concise or clear way to express certain mathematical ideas. For example, take the definition of convergence of a real-valued sequence: a sequence $(x_n)$ converges to a point $\ell$ if $(\forall \varepsilon > 0)(\exists N \in \mathbb{N})(\forall n \geq N)\;|x_n - \ell| < \varepsilon$. Using the Fréchet quantifier as defined above, we can give a nicer (equivalent) definition: $(\forall \varepsilon > 0)(\forall^\infty n)\;|x_n - \ell| < \varepsilon$. Filter quantifiers are especially useful in constructions involving filters. As an example, suppose that the set $X$ has a binary operation $+$ defined on it. There is a natural way to extend $+$ to $\beta X$, the set of ultrafilters on $X$: $A \in U + V \iff (Ux)(Vy)\;x + y \in A$. With an understanding of the ultrafilter quantifier, this definition is reasonably intuitive. It says that $U + V$ is the collection of subsets $A$ such that, for most $x$ (according to $U$) and for most $y$ (according to $V$), the sum $x + y$ is in $A$. Compare this to the equivalent definition without ultrafilter quantifiers: $U + V = \{A \subseteq X : \{x \in X : \{y \in X : x + y \in A\} \in V\} \in U\}$. The meaning of this is much less clear. This increased intuition is also evident in proofs involving ultrafilters. For example, if $+$ is associative on $X$, using the first definition of $+$ it trivially follows that $+$ is associative on $\beta X$. Proving this using the second definition takes a lot more work. 
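Since most filters on infinite sets are not directly computable, a finite toy model can make the definitions above concrete. The sketch below implements the two quantifiers for filters on a small finite set and checks the duality and ultrafilter properties; the function names are ours, and principal filters are used because on a finite set every filter is principal.

```python
from itertools import combinations

X = frozenset(range(6))

def powerset(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def principal_filter(A):
    """The filter generated by A: all supersets of A within X."""
    return {B for B in powerset(X) if A <= B}

def forall_F(F, phi):
    """(forall_F x) phi(x)  iff  {x : phi(x)} is a member of F."""
    return frozenset(x for x in X if phi(x)) in F

def exists_F(F, phi):
    """(exists_F x) phi(x)  iff  {x : phi(x)} is F-stationary (meets every A in F)."""
    S = {x for x in X if phi(x)}
    return all(S & A for A in F)

F = principal_filter(frozenset({0, 1, 2}))
phi = lambda x: x < 4

assert forall_F(F, phi)                    # {x : x < 4} contains {0, 1, 2}
# Duality:  not (forall_F x) phi  <=>  (exists_F x) not phi
assert (not forall_F(F, phi)) == exists_F(F, lambda x: not phi(x))
# On an ultrafilter the two quantifiers coincide (principal ultrafilter at 3):
U = principal_filter(frozenset({3}))
assert forall_F(U, phi) == exists_F(U, phi) == phi(3)
```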
See also References Quantifier (logic)
Filter quantifier
[ "Mathematics" ]
818
[ "Predicate logic", "Mathematical logic", "Quantifier (logic)", "Basic concepts in set theory" ]
63,115,993
https://en.wikipedia.org/wiki/Barnard%2030
Barnard 30 is a dark cloud in the Lambda Orionis ring, north of Lambda Orionis, also called Meissa. The region is about 1300 light-years from Earth. The Barnard 30 cloud is one of the regions in the Lambda Orionis ring where the population of young stars is concentrated, together with the Lambda Orionis cluster and Barnard 35. It contains Herbig-Haro objects, young stars, brown dwarfs and multiple T Tauri stars. The young population includes HK Orionis, a Herbig Ae/Be star, and HI Orionis, a T Tauri star. The stellar population in Barnard 30 is about 2-3 million years old and is therefore significantly younger than the central Lambda Orionis cluster. The cloud is likely shaped by the massive star Meissa, which is also responsible for triggering star formation in the cloud. A supernova about one million years ago, which may have formed the Lambda Orionis ring, might be an additional trigger for star formation in this region. The region contains a reflection nebula. Observations The emission region associated with Barnard 30 has a low surface brightness and covers a large region of the sky. Because Barnard 30 shares its constellation with the famous Orion Nebula, it is rarely imaged. Gallery References Orion molecular cloud complex Orion (constellation) Dark nebulae 30
Barnard 30
[ "Astronomy" ]
263
[ "Constellations", "Orion (constellation)" ]
63,116,041
https://en.wikipedia.org/wiki/Mozenavir
Mozenavir (DMP-450) is an antiviral drug which was developed as a treatment for HIV/AIDS. It acts as an HIV protease inhibitor and binds to this target with high affinity; however, despite promising results in early testing, mozenavir was unsuccessful in human clinical trials. Studies continue into related derivatives. References Anti–RNA virus drugs Antiviral drugs
Mozenavir
[ "Biology" ]
82
[ "Antiviral drugs", "Biocides" ]
63,116,479
https://en.wikipedia.org/wiki/Crossness%20Sewage%20Treatment%20Works
The Crossness Sewage Treatment Works is a sewage treatment plant located at Crossness in the London Borough of Bexley. It was opened in 1865 and is Europe's second largest sewage treatment works, after its counterpart Beckton Sewage Treatment Works located north of the river. Crossness treats the waste water from the Southern Outfall Sewer serving South and South East London, and is operated by Thames Water. The treated effluent from the plant is discharged into the River Thames at the eastern end of the site. History As originally conceived the works comprised reservoirs covering 2.6 hectares designed to retain six hours’ flow of sewage. No sewage treatment was provided and the sewage was discharged untreated into the River Thames on the ebb tide. Following the Princess Alice disaster in 1878 a Royal Commission was appointed in 1882 to examine Metropolitan Sewage Disposal. It recommended that a precipitation process should be deployed to separate solids from the liquid and that the solids should be burned, applied to land or dumped at sea. A precipitation works using lime and iron sulphate was installed at Crossness in 1888–91. Sludge was disposed of in the Barrow Deep and later in the Black Deep in the outer Thames estuary. In the year 1912/13 the Crossness works received and treated 49,534 million gallons (225.2 million m3) of sewage, and disposed of 880,000 tons of sludge. The cost of operating the Crossness works was £44,269. In 1919/20 the corresponding figures were 41,209 million gallons (187.3 million m3) of sewage and 767,000 tons of sludge sent to sea, entailing 767 sludge vessel voyages, and the costs were £52,282. Advanced treatment Work began in the early 1960s to install a modern treatment plant capable of treating 450,000 cubic metres per day of sewage. The cost of the works was £9 million at 1963 prices. The plant comprised storm tanks, detritus channels, primary sedimentation, mechanical aeration, final sedimentation and sludge digestion. Following the 1964 upgrade the works at Crossness began to produce a nitrifying effluent, whereupon sulphide disappeared from the tideway; an excess of nitrate provided a safeguard against sulphide formation in the river. The practice of dumping sewage sludge at sea was banned in 1998. In that year a sludge incineration plant was commissioned. This provides 6 MW of power for use at the treatment works. New processes In 2010–14 the Crossness works were upgraded at a cost of £220 million, increasing capacity by 44% to reduce storm sewage flowing into the Thames during heavy rainfall. The upgrade involved the installation of new renewable energy sources including a 2.3 MW wind turbine, a thermal hydrolysis plant, an advanced digestion plant, and an odour control treatment system. The project enabled the plant to treat 13 cubic metres of sewage per second and incorporated new inlet works, primary settlement tanks, secondary biological treatment implementing the activated sludge process and final settlement tanks. It also included the installation of associated sludge thickening and odour treatment facilities. The thermal hydrolysis plant treats sludge produced after waste water treatment at 160 °C, producing 50 per cent more biogas than the conventional anaerobic digestion process. The project included the installation of eight new primary settlement tanks, where sewage is collected and primary sludge is removed, passing through two 1.2 km-long culverts of 2 m diameter. 
Sewage passes through a pair of new aeration lanes into twelve final settlement tanks of 40 m diameter. The activated sludge plant includes six 69 m long aeration lanes with a total volume of 86,000 cubic metres and a treatment capacity of 564,000 cubic metres per day. It includes anoxic zone mixers, a fine bubble diffused aeration system and five centrifugal blowers giving an air flow of up to 21,000 cubic metres per hour. Additional sludge storage and thickening facilities were also provided. The five raw sludge gravity belt thickeners each have a capacity of 6,055 cubic metres per day. Crossness Pumping Station The original sewage pumping station on the site of the treatment plant, constructed between 1859 and 1865 and featuring spectacular Victorian architecture, has been restored and is now open as a museum. See also London sewer system References Source Wood, Leslie B. (1982). The Restoration of the Tidal Thames. Bristol: Adam Hilger. Thames Water London water infrastructure Sewage treatment plants in the United Kingdom Sewerage
Crossness Sewage Treatment Works
[ "Chemistry", "Engineering", "Environmental_science" ]
927
[ "Sewerage", "Environmental engineering", "Water pollution" ]
63,118,618
https://en.wikipedia.org/wiki/Acoustic%20panel
Acoustic panels (also sound absorption panels, soundproof panels or sound panels) are sound-absorbing fabric-wrapped boards designed to control echo and reverberation in a room. They are most commonly used to resolve speech intelligibility issues in commercial soundproofing treatments. Most panels are constructed with a wooden frame, filled with sound absorption material (mineral wool, fiber glass, cellulose, open cell foam, or a combination of these) and wrapped with fabric. An acoustic board is a board made from sound absorbing materials, designed to provide sound insulation. Sound absorbing material is inserted between two outer walls, and the wall is porous. Thus, when sound passes through an acoustic board, the intensity of sound is decreased. The lost sound energy is converted into heat. Acoustic boards are used in auditoriums, halls, seminar rooms, libraries, courts and wherever sound insulation is needed. They are also used in speaker boxes. See also Acoustics Architectural acoustics Room acoustics Absorption (acoustics) References Acoustics Building engineering
Acoustic panel
[ "Physics", "Engineering" ]
208
[ "Building engineering", "Classical mechanics", "Acoustics", "Civil engineering", "Architecture" ]
63,119,437
https://en.wikipedia.org/wiki/Transplant%20engineering
Transplant engineering (or allograft engineering) is a variant of genetic organ engineering which comprises allograft, autograft and xenograft engineering. In allograft engineering the graft is substantially modified by altering its genetic composition. The genetic modification can be permanent or transient. The aim of modifying the allograft is usually the mitigation of immunological graft rejection. History Transient genetic allograft engineering has been pioneered by Shaf Keshavjee and Marcelo Cypel at University Health Network in Toronto by adenoviral transduction for transgenic expression of the IL-10 gene. Permanent genetic allograft engineering has first been done by Rainer Blasczyk and Constanca Figueiredo at Hannover Medical School in Hanover by lentiviral transduction for knocking down MHC expression in pigs (lung) and rats (kidney). References Genetic engineering Transplantation medicine
Transplant engineering
[ "Chemistry", "Engineering", "Biology" ]
193
[ "Biological engineering", "Bioengineering stubs", "Biotechnology stubs", "Genetic engineering", "Molecular biology" ]
63,120,085
https://en.wikipedia.org/wiki/HH%201/2
The Herbig-Haro objects HH 1/2 are the first such objects to be recognized as Herbig-Haro objects and were discovered by George Herbig and Guillermo Haro. They are located at a distance of about 1343 light-years (412 parsecs) in the constellation Orion, near NGC 1999. HH 1/2 are among the brightest Herbig-Haro objects in the sky and consist of a pair of oppositely oriented bow shocks, separated by 2.5 arcminutes (a projected separation of about 1.1 light-years). The HH 1/2 pair were the first Herbig-Haro objects with detected proper motion, and HH 2 was the first Herbig-Haro object to be detected in X-rays. Some of the structures in the Herbig-Haro objects move with a speed of 400 km/s. The central region The central region contains an opaque cloud core with an astrophysical jet and a highly embedded multiple-star system that remains invisible below 3 microns. These sources were first detected with the Very Large Array and are therefore named VLA 1 and 2. The source HH 1-2 VLA 1 drives the HH 1/2 pair and the source VLA 2 drives the Herbig-Haro objects HH 144/145. There might even be a third outflow in the central region of HH 1/2, indicating a third member. The jet towards HH 1 is visible in optical images, but the counterjet towards HH 2 was detected in the infrared with the Spitzer Space Telescope. Gallery References Herbig–Haro objects Orion (constellation) Orion molecular cloud complex
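The quoted projected separation can be checked with the small-angle approximation. This arithmetic sketch is ours, and the closing comment about an older distance estimate is an assumption offered only as a possible explanation for the small mismatch.

```python
import math

PC_IN_LY = 3.2616                     # light-years per parsec
theta = math.radians(2.5 / 60.0)      # 2.5 arcminutes in radians

sep_ly = 412 * theta * PC_IN_LY       # small-angle approximation at 412 pc
print(f"{sep_ly:.2f} light-years")    # ~0.98 ly at 412 pc; the quoted ~1.1 ly
                                      # would follow from an older, slightly
                                      # larger distance estimate (~460 pc)
```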
HH 1/2
[ "Astronomy" ]
345
[ "Constellations", "Orion (constellation)" ]
63,120,368
https://en.wikipedia.org/wiki/Sociology%20of%20quantification
The sociology of quantification is the investigation of quantification as a sociological phenomenon in its own right. Content According to a review published in 2018, the sociology of quantification is an expanding field which includes the literature on the quantified self, on algorithms, and on various forms of metrics and indicators. A prior review in 2016 names a similar range of topics: "quantification processes in the sciences, quantification in society driven by the sciences, quantification processes driven by other social processes, including for example implementations of numeric technologies, standardization procedures, bureaucratic management, political decision-taking and newer trends as self-quantification." Older works which can be classified under the heading of the sociology of quantification are Theodore Porter's Trust in Numbers, the works of French sociologists Pierre Bourdieu and Alain Desrosières, and the classic works on probability by Ian Hacking and Lorraine Daston. The discipline gained traction due to the increasing importance and scope of quantification, its relation to the economics of conventions, and the perception of its dangers as a weapon of oppression or as a means to undesirable ends. For Sally Engle Merry, quantification is a technology of control, but whether it is reformist or authoritarian depends on who harnesses it and for what purpose. 'Governance by numbers' is seen by the jurist Alain Supiot as repudiating the goal of governing by just laws, advocating in its stead the attainment of measurable objectives. For Supiot, the normative use of economic quantification leaves countries and economic actors no option but to ride roughshod over social legislation and pledge allegiance to stronger powers. The French movement of 'statactivisme' suggests fighting numbers with numbers, under the slogan "a new number is possible". On the other extreme, algorithmic automation is seen as an instrument of liberation by Aaron Bastani, spurring a debate on digital socialism. According to Espeland and Stevens, an ethics of quantification would naturally descend from a sociology of quantification, especially in an age where democracy, merit, participation, accountability and even "fairness" are assumed to be best discovered and appreciated via numbers. Andrea Mennicken and Wendy Espeland provide a review (2019) of the main concerns about the "increasing expansion of quantification into all realms, including into people's personal lives". These authors discuss the new patterns of visibility and obscurity created by quantitative technologies, how these influence relations of power, and how neoliberal regimes of quantification favour 'economization', where "individuals, activities, and organizations are constituted or framed as economic actors and entities." In 2022, Mennicken and Robert Salais curated a multi-author volume titled The New Politics of Numbers: Utopia, Evidence and Democracy, with contributions encompassing Foucauldian studies of governmentality, which first flourished in the English-speaking world, and studies of state statistics known as 'economics of convention', developed mostly at INSEE in France. A theme treated by several authors is the relationship between quantification and democracy, with regimes of algorithmic governmentality and artificial intelligence posing a threat to democracy and to democratic agency. 
Mathematical modelling is a field of interest for the sociology of quantification, and the intensified use of mathematical models in relation to the COVID-19 pandemic has spurred a debate on how society uses models. Rhodes and Lancaster speak of 'models as public troubles' and, starting from models as boundary objects, call for a better relationship between models and society. Other authors propose five principles for making models serve society, on the premise that modelling is a social activity. Models as mediators between 'theories' and 'the world' are discussed in a multi-author book edited by Mary S. Morgan and Margaret Morrison that offers several examples from physics and economics. The volume provides a historical and philosophical discussion of what models are and of what models do, with contributions from the editors as well as from scholars such as Ursula Klein, Marcel Boumans, R.I.G. Hughes, Mauricio Suárez, Geert Reuten, Nancy Cartwright, Adrienne van den Boogard, and Stephan Hartmann. A later work by Morgan offers elements of the history, sociology and epistemology of modelling in economics and econometrics. Relevant material for a sociology of mathematical models can be found in the works of Ian Scoones and Andy Stirling, in Philip Mirowski’s Machine Dreams, in Evelyn Fox Keller's Making Sense of Life, in Jean Baudrillard's Simulacra and Simulation, and in Bruno Latour and Steve Woolgar's Laboratory Life. The role of quantification in historiography and macrohistory is the subject of The Measure of Reality: Quantification in Western Europe, 1250-1600, a 1997 nonfiction book by Alfred W. Crosby. The book examines the origins and effects of quantitative thinking in post-medieval European history, suggesting it as a major factor in the ensuing development of European arts and techniques. Links Algorithmic Justice League. Cardiff University: “Data Justice Lab”, School of Journalism, Media and Culture. French National Research Institute for Sustainable Development: “Project SSSQ - Society for the Social Studies of Quantification”. References Quantification (science) Science and technology studies
Sociology of quantification
[ "Mathematics", "Technology" ]
1,100
[ "Quantity", "Science and technology studies", "Quantification (science)" ]
63,120,463
https://en.wikipedia.org/wiki/Lobucavir
Lobucavir (previously known as BMS-180194 or Cyclobut-G) is an antiviral drug that shows broad-spectrum activity against herpesviruses, hepatitis B and other hepadnaviruses, HIV/AIDS and cytomegalovirus. It initially demonstrated positive results in human clinical trials against hepatitis B with minimal adverse effects, but was discontinued from further development following the discovery of an increased risk of cancer associated with long-term use in mice. Although a comparable carcinogenic risk is present in other antiviral drugs that have been approved for clinical use, such as zidovudine and ganciclovir, its manufacturer, Bristol-Myers Squibb, halted development. Medical use Lobucavir has been shown to exhibit antiviral activity against herpesvirus, hepatitis B, HIV/AIDS, and human cytomegalovirus. It reached phase III clinical trials for hepatitis B and herpesvirus, phase II clinical trials for cytomegalovirus, and underwent a pilot study for use in treating AIDS prior to discontinuation. Adverse effects In early clinical trials, lobucavir was relatively well tolerated, and no subjects discontinued treatment because of adverse effects. Commonly reported effects included headache, fatigue, diarrhea, abdominal pain, and the flu-like symptoms common with other nucleoside analogs. Later studies, however, identified lobucavir-induced carcinogenesis associated with long-term use in mice, which led to the drug's discontinuation from clinical trials in 1999. Mechanism of action Lobucavir is a guanine analog that interferes with viral DNA polymerase. It must be phosphorylated into its triphosphate form within infected cells by intracellular enzymes before it can exert its antiviral activity. In hepatitis B studies, lobucavir has been found to inhibit viral primer synthesis, reverse transcription, and DNA-dependent DNA polymerization by acting as a non-obligate chain terminator of the viral polymerase. Unlike traditional chain terminators, which lack a 3'-OH group to prevent further DNA replication, lobucavir is thought to cause a conformational change that blocks optimal polymerase activity two to three nucleotides downstream of its incorporation. Its mechanism of action has been found to be similar against human cytomegalovirus. Pharmacokinetics Lobucavir's bioavailability is 30-40% of the oral dose, and its half-life is approximately 10 hours, as demonstrated in preclinical testing. References Antiviral drugs Purines Cyclobutanes Diols Abandoned drugs
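As a rough illustration of what the quoted ~10-hour half-life implies, the sketch below assumes simple one-compartment, first-order elimination; the article does not specify a pharmacokinetic model, so this is only a back-of-the-envelope reading of the number, not a description of lobucavir's actual kinetics.

```python
# Illustrative only: first-order decay under an assumed one-compartment
# model, using the ~10 h half-life quoted above.
HALF_LIFE_H = 10.0

def fraction_remaining(t_hours: float) -> float:
    """Fraction of drug remaining after t_hours of first-order elimination."""
    return 0.5 ** (t_hours / HALF_LIFE_H)

print(round(fraction_remaining(24.0), 3))  # ~0.189 of the dose left after a day
```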
Lobucavir
[ "Chemistry", "Biology" ]
559
[ "Antiviral drugs", "Biocides", "Drug safety", "Abandoned drugs" ]
63,120,759
https://en.wikipedia.org/wiki/Corbomycin
Corbomycin is a member of the glycopeptide family of antibiotics, which are produced by soil bacteria. Mechanism of action Corbomycin blocks autolysins by attaching to the peptidoglycan cell wall. As a result, the bacterium cannot divide, as division requires the wall to be broken down and remodeled. Ordinary glycopeptides instead block cell wall formation. Applications It can block serious infections caused by drug-resistant strains of Staphylococcus aureus. As of 2020 it had not been approved by any regulatory body for human use. History The antibiotic was discovered in 2020. Researchers found the substance while studying the biosynthetic genes of glycopeptides that lacked self-resistance mechanisms. Researcher Beth Culp worked with Yves Brun and his team to image the cells and identify the site of action. Culp's team later found other antibiotics that employ the same mode of action; complestatin, an existing antibiotic, was shown to share this mechanism. References Glycopeptide antibiotics
Corbomycin
[ "Chemistry" ]
225
[ "Glycopeptide antibiotics", "Glycopeptides" ]
63,121,461
https://en.wikipedia.org/wiki/Apoptozole
Apoptozole is a drug that acts as a potent and selective inhibitor of the heat shock protein Hsp70, and was one of the first compounds developed to act at this target. It induces apoptosis in susceptible cells, and displays anti-cancer, anti-malarial and antiviral activity. The promising results in animal studies make it likely that either apoptozole or compounds with similar modes of action will be further researched as potential therapeutics in the future. References Antiviral drugs Trifluoromethyl compounds Imidazoles Amides
Apoptozole
[ "Chemistry", "Biology" ]
120
[ "Antiviral drugs", "Amides", "Biocides", "Functional groups" ]
63,122,178
https://en.wikipedia.org/wiki/Cytochrome%20P450%20aromatic%20O-demethylase
Cytochrome P450 aromatic O-demethylase is a bacterial enzyme that catalyzes the demethylation of lignin and various lignols. The net reaction has the following stoichiometry, illustrated with a generic methoxy arene: ArOCH3 + O2 + 2e− + 2H+ → ArOH + CH2O + H2O The enzyme is notable for its promiscuity, effecting the O-demethylation of a range of substrates, including lignin. It is a heterodimeric protein derived from the products of two genes. The component proteins are a cytochrome P450 enzyme (encoded by the gcoA gene from the family CYP255A) and a three-domain reductase (encoded by the gcoB gene) complexed with three cofactors (2Fe-2S, FAD, and NADH). Mechanism GcoA and GcoB form a dimer complex in solution. GcoA processes the substrate while GcoB provides the electrons to support the mixed-function oxidase. As with other P450s, monooxygenation of the substrate proceeds concomitantly with reduction of half an equivalent of O2 to water. An oxygen rebound mechanism can be assumed. GcoA positions the aromatic ring within the hydrophobic active site cavity where the heme is located. Structure GcoA has a typical P450 structure: a thiolate-ligated heme next to a buried active site. GcoB, however, is unusual. Cytochrome P450s are normally complemented by either a cytochrome P450 reductase or a ferredoxin and ferredoxin reductase, with electrons carried by NAD+ or NADP+. GcoB, by contrast, is a single polypeptide with an N-terminal ferredoxin and both an NAD(P)+- and an FAD-binding region. GcoA and GcoB are closely interlinked, acting as a heterodimer in solution. The surface of GcoB has an acidic patch that must interact with the matching basic region in GcoA. It is assumed that the part of GcoB interacting with GcoA is at the intersection between the FAD-binding domain and the ferredoxin domain. To achieve this, GcoB would have to undergo some structural change, which would represent a new class of P450 systems (family N). Potential applications Cytochrome P450 aromatic O-demethylase assists in the partial O-demethylation of lignin. The resulting 1,2-diols are well suited for oxidative degradation via intra- and extra-diol dioxygenases. Thus O-demethylated lignins are potentially susceptible to partial depolymerization. With fewer crosslinks, the modified lignin is potentially more useful than the precursor, for applications such as fuels. References Cytochrome P450 EC 1.14.14 EC 1.6.2 Prokaryote genes
Cytochrome P450 aromatic O-demethylase
[ "Biology" ]
659
[ "Prokaryotes", "Prokaryote genes" ]
63,122,422
https://en.wikipedia.org/wiki/The%20Higher%20Infinite
The Higher Infinite: Large Cardinals in Set Theory from their Beginnings is a monograph in set theory by Akihiro Kanamori, concerning the history and theory of large cardinals, infinite sets characterized by such strong properties that their existence cannot be proven in Zermelo–Fraenkel set theory (ZFC). This book was published in 1994 by Springer-Verlag in their series Perspectives in Mathematical Logic, with a second edition in 2003 in their Springer Monographs in Mathematics series, and a paperback reprint of the second edition in 2009. Topics Not counting introductory material and appendices, there are six chapters in The Higher Infinite, arranged roughly in chronological order by the history of the development of the subject. The author writes that he chose this ordering "both because it provides the most coherent exposition of the mathematics and because it holds the key to any epistemological concerns". In the first chapter, "Beginnings", the material includes inaccessible cardinals, Mahlo cardinals, measurable cardinals, compact cardinals and indescribable cardinals. The chapter covers the constructible universe and inner models, elementary embeddings and ultrapowers, and a result of Dana Scott that measurable cardinals are inconsistent with the axiom of constructibility. The second chapter, "Partition properties", includes the partition calculus of Paul Erdős and Richard Rado, trees and Aronszajn trees, the model-theoretic study of large cardinals, and the existence of the set 0# of true formulae about indiscernibles. It also includes Jónsson cardinals and Rowbottom cardinals. Next are two chapters on "Forcing and sets of reals" and "Aspects of measurability". The main topic of the first of these chapters is forcing, a technique introduced by Paul Cohen for proving consistency and inconsistency results in set theory; it also includes material in descriptive set theory. The second of these chapters covers the application of forcing by Robert M. Solovay to prove the consistency of measurable cardinals, and related results using stronger notions of forcing. Chapter five is "Strong hypotheses". It includes material on supercompact cardinals and their reflection properties, on huge cardinals, on Vopěnka's principle, on extendible cardinals, on strong cardinals, and on Woodin cardinals. The book concludes with the chapter "Determinacy", involving the axiom of determinacy and the theory of infinite games. Reviewer Frank R. Drake views this chapter, and the proof in it by Donald A. Martin of the Borel determinacy theorem, as central for Kanamori, "a triumph for the theory he presents". Although quotations expressing the philosophical positions of researchers in this area appear throughout the book, more detailed coverage of issues in the philosophy of mathematics regarding the foundations of mathematics are deferred to an appendix. Audience and reception Reviewer Pierre Matet writes that this book "will no doubt serve for many years to come as the main reference for large cardinals", and reviewers Joel David Hamkins, Azriel Lévy and Philip Welch express similar sentiments. Hamkins writes that the book is "full of historical insight, clear writing, interesting theorems, and elegant proofs". Because this topic uses many of the important tools of set theory more generally, Lévy recommends the book "to anybody who wants to start doing research in set theory", and Welch recommends it to all university libraries. 
References External links The Higher Infinite (1st edition) at the Internet Archive Large cardinals Mathematics books 1994 non-fiction books 2003 non-fiction books
The Higher Infinite
[ "Mathematics" ]
733
[ "Large cardinals", "Mathematical objects", "Infinity" ]
63,122,597
https://en.wikipedia.org/wiki/Pyrazofurin
Pyrazofurin (pyrazomycin) is a natural product found in Streptomyces candidus; it is a nucleoside analogue related to ribavirin. It has antibiotic, antiviral and anti-cancer properties but was not successful in human clinical trials due to severe side effects. Nevertheless, it continues to be the subject of ongoing research as a potential drug of last resort, or as a template for improved synthetic derivatives. See also Acadesine EICAR (antiviral) Sangivamycin References Antiviral drugs Tetrahydrofurans Pyrazolecarboxamides Polyols
Pyrazofurin
[ "Biology" ]
134
[ "Antiviral drugs", "Biocides" ]
63,122,726
https://en.wikipedia.org/wiki/List%20of%20megaprojects%20in%20Bangladesh
This is a list of megaprojects in Bangladesh, i.e. projects "characterized by: large investment commitment, vast complexity (especially in organizational terms), and long-lasting impact on the economy, the environment, and society". The number of such projects is so large that the list may never be fully completed. The Finance Minister of Bangladesh has unveiled an extensive roster of ambitious megaprojects encompassing various sectors. These projects primarily focus on the construction of hospitals, schools, colleges, and other essential infrastructure. Consequently, this development surge is expected to generate substantial demand for cement within the country. Terms Explanation Airports Bridges Roads and highways Railways Energy projects Ports Defense Buildings and Housing Sports Barrages Delta Plan Satellites Special Economic Zone References Megaprojects
List of megaprojects in Bangladesh
[ "Engineering" ]
161
[ "Megaprojects" ]
60,981,737
https://en.wikipedia.org/wiki/Roland%20SP-606
The Roland SP-606 is a music sampler manufactured by Roland Corporation. It is part of the SP family, which includes Roland’s popular SP-303 and SP-404 models. Released in 2004, the sampler was succeeded a year later by the SP-404. Features Unlike its predecessors, the SP-606 was designed in collaboration with Cakewalk. As a result, the sampler was bundled with the then-new P606 software from Cakewalk for enhanced integration and functionality with a PC. The SP-606 has over 40 effects, including Isolator, Filter + Drive, Slicer, Reverb, and Tape Echo. It also has the D Beam feature, which lets the user physically control three effect types: synth, trigger, and filter. The maximum number of samples that can be internally stored is 128. Samples are recorded in 'SP606 original format', with roughly 16 MB of memory available. Audio samples are imported and exported in WAV/AIF file format by way of CompactFlash. Additionally, CompactFlash is used to update the sampler itself. MIDI can be transferred via USB connection for both Mac and PC. Users Despite not being as popular or successful as the SP-303 and SP-404 upon release, the digital sampler gained a small following over the years, being utilized by notable producers such as Madlib and Conductor Williams. References External links https://www.roland.com/global/products/sp-606/ SP-606 Samplers (musical instrument) Grooveboxes D-Beam Music sequencers Sound modules Music workstations Hip-hop production Japanese inventions
Roland SP-606
[ "Engineering" ]
357
[ "Music sequencers", "Automation" ]
60,982,324
https://en.wikipedia.org/wiki/Hybrid%20macaw
Hybrid macaws are the product of cross-breeding more than one species of macaw. They are often characterized and bred for their unique and distinct coloring, and for this reason are highly sought after and valued in the exotic pet trade. Macaws are native to tropical North and South America. Hybridization of macaws occurs both in nature and in captivity; macaws are among the few groups of species whose interspecies crosses can produce viable, fertile offspring, unlike many other hybrids, such as the liger and the mule, which are sterile or face other factors limiting their chances of survival. Hybrid macaws do not hold any scientific names, and are often labeled by the two macaw species they are produced from (e.g. scarlet macaw × green-winged macaw). There are 19 species of macaw, many of which can produce up to three generations (potentially more) of hybrids. Generation F1, the most common, has the widest variety of hybrids and includes the most popular and well-known crosses. Hybrid macaws are also often fertile in generation F2, meaning they are able to reproduce, unlike generation F3 and later, in which sterility rates rise. The most popular hybrids include crosses with the blue-and-gold macaw, military macaw or scarlet macaw. Although the hyacinth macaw belongs to a different genus, hybrids between it and Ara species have also been produced. Because macaw species are able to hybridize and produce viable offspring, scientists study and breed them in captivity to better understand hybridization and its importance in preserving endangered macaw species. A study of the hybridization between the last wild Spix's macaw and an Illiger's macaw provides evidence and information that could potentially help re-establish wild populations of the endangered Spix's macaw, demonstrating the potential conservation value of hybrid macaws. Hybrid macaws in nature The hybridization of macaws in the wild is less common than in captivity due to natural barriers and mating behaviors, although a few rare cases have been recorded. One example was the natural hybridization of a Spix's and an Illiger's macaw recorded in Conservation Genetics (2001), which demonstrated two species of macaws producing offspring. This discovery was a major breakthrough for the preservation of the species, and of macaws as a whole, as the Spix's macaw is now understood to be possibly fully extinct in the wild. Hybrid macaws in captivity The hybridization of macaws in captivity is usually due to the placement of multiple macaw species in the same enclosure. Breeders may choose to pair different species to intentionally produce hybrid offspring, or the parrots themselves may select such a partner due to a lack of a suitable conspecific of the opposite sex. Due to rising interest in hybrid macaws in the exotic-pet trade, production has increased. Their distinct coloring makes them highly sought after by competitive and exotic-bird breeders and traders. They are also bred for their "pet quality" and for the personality traits that result from the mixing of two species of birds. One example is the Catalina macaw, bred for its intelligence and ability to respond to training, and the harlequin macaw, bred for its relaxed and calm personality. However, behavior, temperament and coloring can vary from hybrid to hybrid. Recently there has been an overabundance of female blue-and-yellow macaws in captivity, and they have been highly hybridized. 
Some bird breeders consider intentionally breeding hybrid macaws, particularly from endangered species, to be unethical, as doing so dilutes bloodlines and can produce hybrids that appear identical to a parent species yet contain genes from a supposedly separate species. This may prove detrimental to conservation efforts if the day ever comes when (as occurred with the Spix's macaw) captive macaws are required to maintain the existence of a pure species. Hybrid macaws bred in captivity, despite having little conservation value in themselves, have successfully been used in zoological settings as surrogate parents for the eggs and chicks of endangered macaw species, rearing the offspring without human intervention. In addition, a 2021 study exploring free-flight techniques developed by aviculturists, adapted with the potential goal of returning captive-bred parrots to the wild, featured several hybrid macaws as experimental participants. Macaw hybrid breeding types/generations First-generation macaw – F1 First-generation hybrid macaws are the most popular and abundant macaw hybrids. Examples: Maui sunset = red-fronted macaw × blue-and-gold macaw Corrientes macaw = military macaw × blue-throated macaw Second-generation macaw – F2 Examples: Aqua blush macaw = blue-and-gold macaw × verde macaw Miliquin macaw = military macaw × harlequin macaw Starlight macaw = scarlet macaw × miligold macaw Sunburst macaw = scarlet macaw × verde macaw Third-generation macaw – F3 Examples: Flamingold = flame macaw (F2) × blue-and-gold macaw, bred in Australia in 2024 by Peter Barnes. References Macaws Hybrid animals Intergeneric hybrids
Hybrid macaw
[ "Biology" ]
1,111
[ "Intergeneric hybrids", "Hybrid animals", "Animals", "Hybrid organisms" ]
60,982,538
https://en.wikipedia.org/wiki/Eye%20contact%20effect
The eye-contact effect is a psychological phenomenon in human selective attention and cognition. It is the effect that the perception of eye contact with another human face has on certain mechanisms in the brain. This contact has been shown to increase activation in certain areas of what has been termed the ‘social brain’. This social brain network processes social information such as faces, theory of mind, empathy, and goal-directedness. Activation in the brain When gaze is direct, the eye-contact effect produces activation in the social brain. Studies of the six regions below demonstrate that perceived eye contact increases activation of elements within this network, with the area of activation depending on task demands and the social context. Fusiform gyrus Increases in regional cerebral blood flow (rCBF) show larger activation for direct than for averted gaze in this area. It has been suggested that this increased activation is related to increased initial face encoding. However, these effects are absent when one has already been presented with a face and its gaze then shifts towards the participant. This indicates that when attending to face identity, face encoding effects can be masked. Anterior, right side of superior temporal sulcus When gaze is directed specifically towards the eye area, the anterior right side of the superior temporal sulcus is activated, indicating facilitation of gaze direction encoding in this region when eye contact is present. As with the fusiform gyrus, this effect can also be masked in this area. Posterior, right side of superior temporal sulcus The activation in this region of the brain during the eye-contact effect is not always consistent. Although it has been demonstrated in several studies in which dynamic stimuli were used, activation is not demonstrated consistently across the literature. The studies that demonstrate activation provided social or communicative context in their experiments, suggesting that the eye-contact effect only activates the posterior right side of the superior temporal sulcus in these instances. Medial prefrontal cortex and orbitofrontal cortex These two areas activate when dynamic facial expressions are presented, as well as in a communication context when participants are required to decode the intention of the presented face. As with the posterior right side of the superior temporal sulcus, this suggests that context could be a factor in this activation. However, some studies have shown higher activation in the medial prefrontal cortex when averted gaze is perceived than when gaze is direct. This activation was in a slightly posterior position compared to the areas which had higher activation for direct gaze. Amygdala The activation in the amygdala, like that in the posterior right side of the superior temporal sulcus, is not entirely consistent either. While three studies have found activation for direct gaze in this area, several studies have found no effect. It is speculated that due to the small size of the amygdala, neural imaging methods are not sensitive enough to correctly detect activation. Underlying mechanisms Three models have been developed to explain the mechanisms underlying the eye-contact effect. These models show the areas of the brain that are activated by direct eye contact and where they overlap with areas related to the social brain network. The affective arousal model This model proposes that eye contact directly activates brain arousal systems and emotional responses, which in turn influence perceptual and cognitive processing. 
Because reciprocated eye contact elevates emotional arousal, and emotional arousal is often associated with the amygdala, activation becomes widespread across cortical structures. As gaze is directed towards the perceiver, activation increases in a region of the right amygdala. However, this model fails to account for the selective nature of the activation effects: activation would have to be more widespread across the network if general arousal were the main effect at work. The communicative intention detector model Eye contact signals communicative intent, and the social significance of eye gaze engages theory-of-mind computations. Because activation in structures involved in theory-of-mind computation overlaps with regions associated with eye-contact detection, this model proposes that this overlap is the mechanism that causes the eye-contact effect. However, only parts of the theory-of-mind network (medial prefrontal cortex, posterior right side of the superior temporal sulcus, and sometimes precuneus and amygdala) are activated, depending on task demands and context. The fast-track modulator model Proposed by Senju and Johnson, this model argues that the eye-contact effect is facilitated by the subcortical face detection pathway. This pathway involves the superior colliculus, pulvinar, and amygdala. This route is fast, operates on low spatial frequencies, and modulates cortical face processing. Development Sensitivity to eye contact is present in newborns. Cortical activation in response to eye contact from as early as four months of age suggests that infants are able to detect and orient towards faces that make eye contact with them. This sensitivity to eye contact remains, as the presence of eye contact affects the processing of social stimuli in slightly older infants. For example, a 9-month-old infant will shift its gaze towards an object in response to another face shifting its gaze towards the same object. As humans get older, the eye-contact effect develops as well. Accurate face recognition facilitated by direct gaze improves over the period of development from 6 to 11 years of age. Atypical development Autism spectrum disorders Autism spectrum disorders (ASDs), which include autism and Asperger syndrome, are characterized by difficulties with social interaction and communication. Atypical responses to direct gaze, a characteristic of ASD, have been demonstrated to manifest in infancy, suggesting that these responses are present from early in development. Due to these difficulties, the development of the eye-contact effect may be obstructed. However, studies addressing eye contact in individuals with ASD can yield mixed results. Neurophysiological responses to eye contact have been identified as stronger for direct gaze than for indirect gaze. This may be because individuals with ASD respond to eye contact based on the detection of features, rather than on the facial context. See also Stare-in-the-crowd effect References Interpersonal relationships Attention Cognition Theory of mind
Eye contact effect
[ "Biology" ]
1,228
[ "Behavior", "Interpersonal relationships", "Human behavior" ]
60,983,275
https://en.wikipedia.org/wiki/Amplicon%20sequence%20variant
An amplicon sequence variant (ASV) is any one of the inferred single DNA sequences recovered from a high-throughput analysis of marker genes. Because ASVs are inferred after the removal of erroneous sequences generated during PCR and sequencing, using ASVs makes it possible to distinguish sequence variation down to a single nucleotide change. The uses of ASVs include classifying groups of species based on DNA sequences, finding biological and environmental variation, and determining ecological patterns. ASVs were first described in 2013 by Eren and colleagues. Before that, for many years the standard unit for marker-gene analysis was the operational taxonomic unit (OTU), which is generated by clustering sequences based on a threshold of similarity. Compared to ASVs, OTUs reflect a coarser notion of similarity. Though there is no single threshold, the most commonly chosen value is 3%, which means these units share 97% of the DNA sequence. ASV methods, on the other hand, are able to resolve sequence differences by as little as a single nucleotide change, thus avoiding similarity-based operational clustering units altogether. ASVs therefore represent a finer distinction between sequences. ASVs are also referred to as exact sequence variants (ESVs), zero-radius OTUs (ZOTUs), sub-OTUs (sOTUs), haplotypes, or oligotypes. Uses of ASVs versus OTUs The introduction of ASV methods was marked by a debate about their utility. Although OTUs do not provide such precise and accurate measurements of sequence variation, they are still an acceptable and valuable approach. In one research study, Glassman and Martiny confirmed the suitability of OTUs for investigating broad-scale ecological diversity. They concluded that OTUs and ASVs provided similar results, with ASVs enabling a slightly stronger detection of fungal and bacterial diversity. Their work indicated that even though species diversification can be measured more accurately with ASVs, the use of OTUs in well-constructed studies is generally valid for demonstrating diversification at broad scales. Some have argued that ASVs should replace OTUs in marker-gene analysis. Their arguments focus on the precision, tractability, reproducibility, and comprehensiveness that ASVs can bring to marker-gene analysis. For these researchers, the utility of finer sequence resolution (precision) and the advantage of being able to easily compare sequences between different studies (tractability and reproducibility) make ASVs the better option for analyzing sequence differences. By contrast, since OTUs depend on the specifics of the similarity thresholds used to generate them, the units within any OTU can vary across researchers, experiments, and databases. Thus comparison across OTU-based studies and datasets can be very challenging. ASV methods Popular methods for resolving ASVs include DADA2, Deblur, MED, and UNOISE. These methods work broadly by generating an error model tailored to an individual sequencing run and employing algorithms that use the model to distinguish between true biological sequences and those generated by error. References DNA
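The operational difference between OTU clustering at a 97% threshold and ASV-style exact resolution can be sketched in a few lines of Python. This toy example is not how real tools such as DADA2 work (they fit run-specific error models rather than using greedy identity clustering); the reads and the single-pass clusterer are invented purely for illustration.

```python
# Toy contrast between OTU-style 97% clustering and ASV-style exact grouping.
from collections import Counter

def identity(a: str, b: str) -> float:
    """Fraction of matching positions between two equal-length sequences."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cluster_otus(seqs, threshold=0.97):
    """Greedy clustering: each read joins the first centroid it matches."""
    centroids, clusters = [], []
    for s in seqs:
        for i, c in enumerate(centroids):
            if identity(s, c) >= threshold:
                clusters[i].append(s)
                break
        else:
            centroids.append(s)
            clusters.append([s])
    return clusters

def group_asvs(denoised_seqs):
    """ASV-style grouping: every distinct denoised sequence is its own unit."""
    return Counter(denoised_seqs)

base = "ACGT" * 10            # 40 bp toy read
variant = base[:-1] + "A"     # one substitution -> 97.5% identity to base
reads = [base, variant, base]
print(len(cluster_otus(reads)))  # 1: the 97% threshold merges the variant
print(len(group_asvs(reads)))    # 2: the single-base variant stays distinct
```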
Amplicon sequence variant
[ "Engineering", "Biology" ]
637
[ "Bioinformatics", "Biological engineering" ]
60,983,768
https://en.wikipedia.org/wiki/Power%20Distribution%20Equipment%20Identification
The Power Distribution Equipment Identification (PDEID) is a unique identification label used for exclusively identifying equipment and customers of the power distribution network of Iran, which has been in use since 1997. PDEID is used to simplify identifying equipment and their approximate address, updating the electrical network information, and transferring information to computers. Etymology The first unique identification code for equipment was introduced in Iran's Power Distribution Network Standard in 1969. Only three types of equipment (medium- and low-voltage poles, medium- and low-voltage branching nodes, and distribution substations) were suggested to have equipment identification labels. In 1996, at the 6th Conference on Electrical Power Distribution Networks, an article entitled "Application of the Integrated Equipment Identification Label for Distribution Networks in the Iran" by Gholamreza Saffarpour and Ali Mamdoohi was presented, in which a method for uniquely identifying all equipment and subscribers of distribution networks was introduced. This method was selected in 1997, with minor modifications, by Tavanir to integrate the identification of equipment and subscribers of distribution networks. Structure of Power Distribution Equipment Identification Label Every code in the Power Distribution Equipment Identification label consists of 12 numbers and letters. The labels are grouped into two categories: network equipment and customers. Distribution Equipment Identification for network equipment The equipment label contains 12 numbers and characters. The first five digits give the postcode of the area where the equipment is located. The 5-digit postcode contains the location information provided by Iran Post for the whole country. The next two characters are letters which identify the equipment type-ID. The last five digits are an assigned sequence (or serial) number for the equipment within the postcode area. The sequence or serial number used in the integrated PDEID is an arbitrary number that is unique within the area of the postcode. For example, the first distribution substation in the 13457 postcode should have the serial number 00001 and the second substation in the same postcode area (13457) should have serial number 00002, and so on. In this system, determining which equipment (substation in the above example) is first and which one is second is entirely arbitrary. Distribution Equipment Identification for customers (subscribers) For customers, the Power Distribution Equipment Identification label is composed of 12 digits; similar to the equipment labels, the five leftmost digits are the 5-digit postcode of the area where the customer's electricity meter is located. The right seven digits of the PDEID are the customer-id used in the billing system of power distribution utilities. In some parts of Iran where the customer-id is more than seven digits, the PDEID has 14 digits, and the nine rightmost digits contain the customer-id number. Equipment Type ID In the Power Distribution Equipment Identification label, not all equipment is assigned an identifier. However, by identifying and labeling 23 types of equipment, all equipment which is important for engineering calculations or information statistics can be uniquely identified. Equipment type-IDs do not include the letters I (i), O (o), and Q (q), to avoid confusion with the numerals 1 and 0. 
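A minimal sketch of how the 12-character structure described above could be parsed, assuming only the layout given in this article (5-digit postcode, 2-letter type-ID excluding I, O and Q, 5-digit serial for equipment; 12 or 14 digits for customers). The postcode and serial values are invented; 'CH' is one of the type-IDs named in the following paragraph.

```python
import re

# Equipment label: 5-digit postcode + 2-letter type-ID + 5-digit serial.
# The character class excludes I, O and Q, per the standard described above.
EQUIPMENT = re.compile(r"^(\d{5})([A-HJ-NPR-Z]{2})(\d{5})$")

def parse_equipment_label(label: str) -> dict:
    m = EQUIPMENT.match(label)
    if not m:
        raise ValueError(f"not a valid equipment PDEID: {label!r}")
    postcode, type_id, serial = m.groups()
    return {"postcode": postcode, "type_id": type_id, "serial": int(serial)}

def parse_customer_label(label: str) -> dict:
    # Customer labels: 5-digit postcode + 7-digit customer id, or 14 digits
    # in regions where the customer id runs to 9 digits.
    if len(label) not in (12, 14) or not label.isdigit():
        raise ValueError(f"not a valid customer PDEID: {label!r}")
    return {"postcode": label[:5], "customer_id": label[5:]}

print(parse_equipment_label("13457CH00001"))  # hypothetical HV cable termination
print(parse_customer_label("134570012345"))   # hypothetical customer label
```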
In assigning PDEIDs to distribution equipment, a single-line diagram is used, so when a three-phase system is used, the insulators, cable terminations, and cable joints of all three phases take just one PDEID each. Changes to the original design There are three differences between the final implemented PDEID and what was proposed in the article entitled "Application of the Integrated Equipment Identification Label for Distribution Networks in the Iran", as follows: The suggested type-ID for outdoor HV cable terminations was C1, and for indoor HV cable terminations C3; in the final PDEID, both outdoor and indoor HV cable terminations use the type-ID CH. The suggested type-ID for outdoor LV cable terminations was C2, and for indoor LV cable terminations C4; in the final PDEID, both outdoor and indoor LV cable terminations use the type-ID CL. The JN type-ID was added for virtual nodes. Also, to improve the readability of the labels, the type-ID is placed between the postcode (zip code) and the sequence number (or customer-id number for customers). References Electrical engineering Electric power distribution network operators
Power Distribution Equipment Identification
[ "Engineering" ]
865
[ "Electrical engineering" ]
60,985,069
https://en.wikipedia.org/wiki/Construction%20costs%20%28biology%29
Construction cost is a concept in biology that conveys how much glucose is required to construct a unit of plant biomass, given the biosynthetic pathways and starting from glucose and mineral constituents. It includes the sugars required to provide the carbon skeletons for the formation of e.g. lipids, lignin and proteins, but also the glucose required to produce energy (ATP) and reducing power (NAD(P)H) to drive the metabolic pathways. Rationale The concept of construction costs comes from microbiology, where it was used to study the different metabolic pathways by which heterotrophic microorganisms produce biomass. It has subsequently been used by plant biologists to calculate how much glucose would be required to build leaves, stems and roots. The metabolic costs to maintain cells or organs are generally not included in these estimates. Plant ecologists have also calculated the payback time of leaves, the time required for a leaf to fix as much carbon as it cost to construct the leaf. For plants to be successful, payback time must be shorter than average leaf longevity; otherwise the plant has a negative carbon balance and loses out on its investment. However, leaves can only function with stems that help expose them to the light, and roots that take up the necessary nutrients and water, so the concept of payback time can also be applied to whole plants. Payback time then approaches the doubling time of biomass, which is inversely related to the relative growth rate of plants. Units and measurements The unit of construction costs is g g−1 (g glucose required / g biomass produced). Theoretically, if the biochemical pathways to construct all of the thousands of different compounds of an organism were known, as well as the concentrations of all those compounds, construction costs could simply be calculated as the product of each compound's concentration and its construction cost, summed over all constituents present. However, as there are so many different compounds, this is not really feasible. Alternatively, chemical constituents can be grouped into a number of classes with relatively similar construction costs per unit compound. These groups and their estimated construction costs are given in the table below. Using approaches that determine the concentrations of these groups of compounds then enables calculation of the construction costs of plant organs. The construction costs given in the table are strongly related to the oxidation state of these compounds. That is, the highest construction costs are for highly reduced compounds like lipids and lignin, and the lowest for highly oxidized organic compounds such as organic acids, and for the minerals. This is the basis for short-cut methods that estimate construction costs on the basis of elemental composition or energy content. They may be less precise, but do not require extended measurements of the proximate chemical composition. A consequence of using shortcut methods is that less insight is obtained into the underlying reasons why construction costs differ between organisms. Normal values Generally, construction costs of leaves are in the order of 1.3 – 1.7 g g−1, with slightly lower values for stems and roots. The reason for this is that across most vegetative plant tissue there is a strong negative correlation between the level of (expensive) protein and the level of (expensive) lignin, and a strong positive correlation between protein and the concentration of (cheap) minerals. 
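The weighted-sum calculation described above can be sketched as follows. The per-class glucose costs used here are rough illustrative figures of the kind reported in the literature, not the values from this article's table (which is not reproduced here), and the leaf composition is likewise invented.

```python
# Minimal sketch: tissue construction cost = sum over compound classes of
# (mass fraction of the class) x (glucose cost per gram of that class).
ILLUSTRATIVE_COSTS = {      # g glucose per g compound (assumed values)
    "protein":      2.5,
    "lipid":        3.0,
    "lignin":       2.1,
    "carbohydrate": 1.2,
    "organic_acid": 0.9,
    "minerals":     0.0,
}

def construction_cost(composition: dict) -> float:
    """composition: mass fractions per compound class, summing to ~1."""
    assert abs(sum(composition.values()) - 1.0) < 1e-6
    return sum(frac * ILLUSTRATIVE_COSTS[cls] for cls, frac in composition.items())

leaf = {"protein": 0.20, "lipid": 0.05, "lignin": 0.05,
        "carbohydrate": 0.55, "organic_acid": 0.05, "minerals": 0.10}
print(round(construction_cost(leaf), 2))   # 1.46, within the ~1.3-1.7 g/g range
```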
Seeds and fruits may have very high concentrations of sugars, protein and/or lipids, and therefore show a much wider range in construction costs (1.1 - 2.3 g g−1). Payback times of leaves vary from a few days for fast-growing herbaceous species under optimal conditions, to three months or more for plants growing in low-light environments. Environmental effects Although concentrations of chemical constituents change with the environment, the effects on leaf chemical composition are generally small. Leaves of plants grown at elevated CO2 may have a few percent higher construction costs, as do leaves that have been growing at higher light intensities. Functional groups Differences in leaf construction costs between deciduous and evergreen species are small, and so are the differences between inherently fast- and slow-growing herbaceous species. References Plant ecology Glucose Biomass
Construction costs (biology)
[ "Biology" ]
824
[ "Plant ecology", "Plants" ]
60,986,001
https://en.wikipedia.org/wiki/National%20Poo%20Museum
The National Poo Museum on the Isle of Wight, southern England, is a museum dedicated to the collection, conservation and display of faeces. The museum, which opened on 25 March 2016, originally as a mobile museum, is now permanently located at Sandown Barrack Battery. Overview The faeces are displayed in resin spheres, in which they can be viewed and held. The process involves drying the poo, which can take up to two weeks, before it is encapsulated and placed in a vacuum chamber so that air bubbles are removed. The main aim of the museum is to break down the 'taboo' surrounding poo in human life, and the museum hopes to do this by receiving donations of poo from celebrities. The museum also aims to educate people about issues related to poo, including dog fouling and sanitation. The museum was founded by members of Eccleston George, who are "a collection of creative people who work together on many different kinds of projects", based on the Isle of Wight. Poo at the Zoo The first public exhibition, named Poo at the Zoo, opened on 25 March 2016 at the Isle of Wight Zoo, where 20 specimens of excrement from different animals were on display. The animals and faeces included: Lesser Madagascan Tenrec Tawny Owl Lion Meerkat Cow Fox Human baby 38 million-year-old poo A 140 million-year-old coprolite A poo with teeth and bones in it A poo that looks like a cereal bar A child's shoe which a cat has marked by pooing in it The faeces came from animals at the zoo, faeces collected elsewhere and faeces donated by the Dinosaur Isle museum. Sandown Barrack Battery Sandown Barrack Battery is a 19th-century fort built on the southwest coast of the Isle of Wight. The National Poo Museum is converting two of the derelict buildings at the battery in order to house exhibits permanently and build a cafe. This is being done with £15,000 from the local authority and a further £2,500 from a crowdfunding campaign. Reception The crowdfunding campaign received money from 76 donors over 42 days. The campaign was supported by Kate Humble, the presenter of Curious Creatures, a nature quiz TV series on BBC Two. The series used faeces provided by the museum for a round called 'Whose poo?', in which contestants guessed the animal the faeces belonged to. Humble, a wildlife presenter, said that "The world would be a much poorer place without the National Poo Museum". Gallery See also Shit Museum References External links National Poo Museum Website 2016 establishments in England Museums established in 2016 Museums on the Isle of Wight Natural history museums in England Poo Museum Feces
National Poo Museum
[ "Biology" ]
561
[ "Excretion", "Feces", "Animal waste products" ]
60,986,128
https://en.wikipedia.org/wiki/NGC%204900
NGC 4900 is a barred spiral galaxy in the constellation Virgo. It was discovered by William Herschel on April 30, 1786. It is a member of the NGC 4753 Group of galaxies, which is a member of the Virgo II Groups, a series of galaxies and galaxy clusters strung out from the southern edge of the Virgo Supercluster. One supernova has been observed in NGC 4900: SN 1999br (Type II, mag. 17.5). See also List of NGC objects (4001–5000) References External links Barred spiral galaxies Virgo (constellation) 4900
NGC 4900
[ "Astronomy" ]
126
[ "Virgo (constellation)", "Constellations" ]
60,986,996
https://en.wikipedia.org/wiki/Better%20Environmentally%20Sound%20Transportation
Better Environmentally Sound Transportation (BEST) is a Vancouver-based charity focused on promoting walking, cycling, public transit, and other forms of sustainable transportation in Metro Vancouver. The organization was founded in 1991 and has several active projects as of 2019. History BEST has undergone numerous changes since its formation. It started as a non-profit organization focused on promoting cycling as a sustainable form of transportation. Over time, it evolved to advocate for and promote all forms of sustainable transportation. The organization started Our Community Bikes, Vancouver’s first do-it-yourself bike store, which includes a full-service mechanic and retail bike shop, a club for youth to learn bike skills, as well as Pedals for the People, which provides refurbished bikes to people in need. BEST is well known for its role in the development of the Central Valley Greenway, a 24-km long pedestrian and cyclist route running from Vancouver to New Westminster through Burnaby. The Central Valley Greenway was kickstarted when Vancity Credit Union awarded BEST a $1 million grant to develop the project. Programs The Bicycle Valet Founded in 2006, The Bicycle Valet operates free, safe bicycle parking services at festivals, events, Vancouver Whitecaps games, and many other happenings around the Metro Vancouver area. The program promotes cycling as an environmentally friendly transportation option and as an alternative to driving. Senior Transportation Access and Resources (STAR) STAR (Seniors Transportation Access and Resources) started in 2011. It collaborates with senior-focused organizations across B.C. to improve and develop transportation services for older adults. Its largest project is Seniors on the Move, a project focused on addressing the social isolation that many senior citizens experience. Coming out of this project, the Seniors Transportation Hub and Hotline was created in collaboration with bc211. Parkbus Parkbus expanded to British Columbia in 2016. Parkbus provides accessible transportation options to various national and provincial parks across Canada. In B.C., destinations include Golden Ears Provincial Park, Garibaldi Provincial Park, Joffre Lakes Provincial Park, and Cypress Provincial Park. One of its most popular programs, ActiveDays, leads group hikes in nature and aspires to create outdoor communities. The Commuter Challenge The Commuter Challenge in BC was created in 1997 as part of Canadian Environment Week. The competition runs annually during the first week of June and encourages people commuting to and from work to walk, cycle, take transit, rideshare, or work from home. Participants can track their commutes and achievements online. Living Streets Living Streets is an environmental education program aimed at increasing civic participation among youth and new Canadians. Participants in the educational presentation focus on daily transportation choices, safety, and the sustainability of communities. As community ambassadors, participants go on "street audits" to analyze infrastructure and identify barriers to walking, cycling, and public transportation. References Non-profit organizations based in Vancouver Charities based in Canada Sustainable transport 1991 establishments in Canada
Better Environmentally Sound Transportation
[ "Physics" ]
592
[ "Physical systems", "Transport", "Sustainable transport" ]
60,987,204
https://en.wikipedia.org/wiki/Trine%204%3A%20The%20Nightmare%20Prince
Trine 4: The Nightmare Prince is a 2019 video game developed by Frozenbyte and published by Modus Games for Microsoft Windows, Nintendo Switch, PlayStation 4, and Xbox One in October 2019, and released for Stadia in March 2021. It is the fourth installment of the Trine series and features the return of the series' three protagonists and its medieval fantasy setting. Gameplay Trine 4 refines the gameplay of the previous installments. It is a 2D platformer with a focus on physics puzzles. The abilities available to the player for solving puzzles include conjuring boxes, attaching ropes to objects, and telekinesis. The player character in the game is, story-wise, three characters in one. The player can transform into one of these three characters at a time: Amadeus the wizard, Zoya the thief, and Pontius the knight. Amadeus can conjure boxes, spheres, and planks, and he can telekinetically move some objects. Zoya can affix ropes to objects either to swing from them, immobilize them, or create rope bridges. She is also armed with a bow. Pontius has a shield with which he can deflect certain projectiles at other targets, and he can jump-stomp. Pontius is also armed with a sword. As with previous entries, the gameplay is a mix of physics puzzles and combat. While Trine 3 experimented with 3D platforming, Trine 4 returns to the strictly two-dimensional movement of the first two Trine games. In another return to form, Trine 4 restores character progression, which was missing from the third game. While certain core abilities (such as Zoya's "fairy rope") are unlocked at fixed points in the game, there are optional abilities that the player can purchase using tokens collected over the course of the game. These abilities give the player new options for solving puzzles, or new ways to fight monsters. If the player wishes, they can replay earlier levels with unlocked abilities to solve those earlier puzzles in new ways. Plot Prince Selius (introduced in Nine Parchments) had an aptitude for magic, but misused his father's spellbook, releasing the shadow of his soul and destroying his family's castle. His parents therefore sent him away to the Astral Academy, but the Academy wizards were unable to control the prince's nightmare magic and he eventually escaped. Amadeus the wizard, Pontius the knight and Zoya the thief are tasked by the Academy headmaster with bringing Selius back. At first the trio tries to convince him to return, but Selius refuses, revealing that the Academy simply locked him in the dungeons. His unstable shadow magic causes his own nightmares, those of the heroes, and even those of the inhabitants of the wilderness to manifest physically, and Selius flees. While tracking the prince, the trio helps the creatures of the wilderness fight their nightmares, and in return three nature spirits give them a potion of light, which should help Selius. It does, but it also empowers Selius's shadow, since the brighter the light, the deeper the shadow it casts. The trio assists Selius in fighting his shadow; ultimately, the prince is able to use the power of light to seal his shadow back into his soul. His magic now stabilized, he agrees to return to the Astral Academy. Melody of Mystery Some time after Prince Selius's return to the Astral Academy, some of the students (all first seen in Nine Parchments) began rehearsing for an autumn play. One of them, Cornelius, stumbles upon a magic music box, accidentally releasing a spirit known as Melody, who traps the students in a magical dream where their wishes come true. 
Selius, unaffected thanks to his own dream magic, summons the heroes of Trine into the dream world, seeking their assistance. The heroes traverse the students' dreams, and learn that Melody is a genie-in-training who uses dreams as a way to grant wishes, not understanding that it is harmful for humans to sleep forever. The heroes successfully wake up all the students except Cornelius, who is so attached to his dream that he must be forced to awaken. Reception The game received "generally favorable" reviews, according to Metacritic. IGN gave the game an 8.5/10, saying, "Trine 4: The Nightmare Prince is a sequel that plays it very safe – which, in this particular case, is for the better. Coming back to the traditional style of co-op gameplay and puzzle solving that made the first two games so delightful is exactly the kind of refocusing that the Trine series needed after the misfire of Trine 3. Some lackluster puzzle designs, technical issues, and a lack of difficulty stand in the way of it overtaking Trine 2 as the best of the series, but Trine 4 still remains a shining example of how cooperative gaming should be, and is one of the most gorgeous looking 2.5D games of 2019." Eurogamer stated that the game was "Outrageously pretty and newly refined, Frozenbyte's series finally strikes gold." Destructoid also reviewed the game, saying, "It's an easy recommendation for platform fans, but it's also just a plain fun time. It's not revolutionary or trailblazing, but it does what it needs to prove that Frozenbyte hasn't lost its touch. I wouldn't necessarily expect a Trine 5 or anything, but clearly, this series has some life left in it." References External links 2019 video games Action-adventure games Asymmetrical multiplayer video games Cooperative video games Fantasy video games Frozenbyte games Multiplayer and single-player video games Nintendo Switch games PlayStation 4 games Puzzle-platformers Side-scrolling platformers Video game sequels Video games featuring female protagonists Video games developed in Finland Video games scored by Ari Pulkkinen Video games with 2.5D graphics Windows games Xbox One games Modus Games games
Trine 4: The Nightmare Prince
[ "Physics" ]
1,223
[ "Asymmetrical multiplayer video games", "Symmetry", "Asymmetry" ]
60,987,584
https://en.wikipedia.org/wiki/Jason%20S.%20Lewis
Jason S. Lewis is a British radiochemist whose work relates to oncologic therapy and diagnosis. His research centers on a molecular imaging program focused on radiopharmaceutical development, as well as the study of multimodality (PET, CT & MRI) small- and biomolecule-based agents and their clinical translation. He has worked on the development of small molecules as well as radiolabeled peptides and antibodies probing the overexpression of receptors and antigens on tumors. Education Jason S. Lewis was born and raised in Horndean, Hampshire, England. Lewis received his Bachelor of Science (B.Sc.) in chemistry from the University of Essex in 1992. He received his Master of Science in chemistry from the University of Essex in 1993. In 1996, he received his Doctor of Philosophy in biochemistry from the University of Kent. Career Lewis was an instructor of radiology at the Washington University School of Medicine, Mallinckrodt Institute of Radiology, from 2000 to 2002. From 2003 to 2008, he was an assistant professor of radiology at the Washington University School of Medicine, Mallinckrodt Institute of Radiology. He is the Emily Tow Jackson Chair in Oncology, the Vice Chairman for Research (Radiology) and Chief Attending of the Radiochemistry and Imaging Sciences Service at Memorial Sloan Kettering Cancer Center. He also heads a laboratory in the Sloan Kettering Institute's Molecular Pharmacology Program and is a professor at the Gerstner Sloan Kettering Graduate School of Biomedical Sciences. Lewis holds joint appointments in the Department of Radiology and the Department of Pharmacology at Weill Cornell Medical School, NY. Research focus Lewis is a proponent of developing imaging tools for use in personalized medicine. He designs and develops radiochemical probes for use in nuclear medicine as well as multi-modality molecular imaging. The uses of these probes span from oncological metabolic detection to understanding the biological processes of cancer and pharmacological modification. These probes can serve as biomarkers in clinical trials as well as agents for oncological diagnostics. He has developed multiple new small molecules that target tumor metabolism, as well as radiolabeled peptides and antibodies for use in probing overexpression of receptors and antigens on tumors for research, clinical trials and in the clinic. Recognition 2014 Distinguished Investigator Award from the Academy of Radiology Research Fellow, World Molecular Imaging Society (FWMIS) 2017 Michael J. Welch Award, Society of Nuclear Medicine and Molecular Imaging (SNMMI) 2019 Paul C. Aebersold Award, Educational and Research Fund for Nuclear Medicine and Molecular Imaging (ERF) & Society of Nuclear Medicine and Molecular Imaging 2019 Fellow, Society of Nuclear Medicine and Molecular Imaging (FSNMMI) Editorial board memberships Molecular Imaging and Biology (Editor-in-Chief) Journal of Nuclear Medicine (Associate Editor) ACS Bioconjugate Chemistry (Editorial Advisory Board) Nuclear Medicine and Biology (Associate Editor-in-Chief) References 1970 births Alumni of the University of Kent Alumni of the University of Essex Living people Radiochemistry Washington University School of Medicine faculty Weill Medical College of Cornell University faculty
Jason S. Lewis
[ "Chemistry" ]
659
[ "Radiochemistry", "Radioactivity" ]
60,987,695
https://en.wikipedia.org/wiki/C23H22N2O
The molecular formula C23H22N2O (molar mass: 342.434 g/mol, exact mass: 342.1732 u) may refer to: BIM-018 THJ-018 (SGT-17) Molecular formulas
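The quoted molar mass can be reproduced from the atomic weights of the constituent elements. A minimal sketch, assuming the older IUPAC atomic-weight values shown below (an assumption; newer tables give 342.44 g/mol rather than 342.434 g/mol):

```python
# Recompute the molar mass of C23H22N2O from per-element atomic weights.
# The weight table is an assumed (older IUPAC) set of values.
ATOMIC_WEIGHT = {"C": 12.0107, "H": 1.00794, "N": 14.0067, "O": 15.9994}
FORMULA = {"C": 23, "H": 22, "N": 2, "O": 1}  # element -> atom count

molar_mass = sum(ATOMIC_WEIGHT[el] * n for el, n in FORMULA.items())
print(f"{molar_mass:.3f} g/mol")  # prints 342.434 g/mol
```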
C23H22N2O
[ "Physics", "Chemistry" ]
72
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
60,987,806
https://en.wikipedia.org/wiki/Stretch%20sensor
A stretch sensor is a sensor which can be used to measure deformation and stretching forces such as tension or bending. They are usually made from a material that is itself soft and stretchable. Most stretch sensors fall into one of three categories. The first type consists of an electrical conductor whose electrical resistance changes (usually increases) substantially when the sensor is deformed. The second type consists of a capacitor whose capacitance changes under deformation. Known properties of the sensor can then be used to deduce the deformation from the resistance or capacitance. Both the resistive and capacitive types often take the form of a cord, tape, or mesh. The third type uses piezoelectric materials in soft, flexible and stretchable formats, exploiting the capability of piezoelectric materials to interconvert mechanical and electrical forms of energy. Applications Wearable stretch sensors can be used for tasks such as measuring body posture or movement. In 2018, the New Zealand-based company StretchSense began making a motion capture glove (data glove) using stretch sensors. Unlike gloves that use inertial or optical sensors, stretchable sensors do not suffer from drift or occlusion. They can also be used in robotics, particularly in soft robots. Stretch sensors are also widely used in medical fields for analyzing and measuring the dielectric properties of human skin. See also Flexible electronics Force gauge Thermistor Stretchable electronics Stretch receptor References Sensors Resistive components Measuring instruments Transducers
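As an illustration of how deformation is deduced from resistance for the first (resistive) type, here is a minimal read-out sketch. The supply voltage, divider resistor, rest resistance, and gauge factor are all hypothetical placeholder values, not taken from any particular product:

```python
# Estimate strain from a resistive stretch sensor read through a simple
# voltage divider. All numeric constants are illustrative assumptions.
V_SUPPLY = 3.3        # supply voltage (V)
R_FIXED = 10_000.0    # fixed divider resistor (ohms)
R_REST = 10_000.0     # sensor resistance at rest (ohms)
GAUGE_FACTOR = 2.5    # (dR/R) / strain, assumed constant over the range

def strain_from_voltage(v_measured: float) -> float:
    """Convert the divider mid-point voltage to an estimated strain."""
    # Divider relation: v = V_SUPPLY * R_sensor / (R_sensor + R_FIXED)
    r_sensor = R_FIXED * v_measured / (V_SUPPLY - v_measured)
    delta_r = r_sensor - R_REST
    return (delta_r / R_REST) / GAUGE_FACTOR

print(f"strain ≈ {strain_from_voltage(1.8):.3f}")  # ~0.080, i.e. 8% elongation
```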
Stretch sensor
[ "Physics", "Technology", "Engineering" ]
318
[ "Physical quantities", "Measuring instruments", "Resistive components", "Sensors", "Electrical resistance and conductance" ]
60,989,642
https://en.wikipedia.org/wiki/Summer%20Haven%2C%20Florida
Summer Haven is an unincorporated community in southeast St. Johns County, Florida, United States, located just south of the Matanzas Inlet, on a narrow strip of land between a shallow marsh and the Atlantic Ocean. Summer Haven is a small beach resort which began with cottages used by farmers. It consists of approximately 60 structures, mostly single family homes. Around 1890, Thomas A. Mellon started vacationing in Summer Haven. Eventually the community attracted other well known people, including Cornelius Vanderbilt Whitney, Owen D. Young, and author Marjorie Kinnan Rawlings. After severe nor’easters in 1962, Summer Haven was declared a disaster area, and the U.S. Army Corps of Engineers constructed an 1,800-foot granite revetment along the northern portion. After Hurricane Dora in 1964, the existing revetment, which fronts the majority of the upland development in the reach, was extended by an additional 1,070 linear feet. South of the revetment, development is limited to one row of single-family residences. When possible, St. Johns County has been purchasing structures and lands in this southern area and not allowing further development. Road access Until the construction of the Old A1A road in the 1920s, Summer Haven was only accessible by boat, or by driving along the beach. Because this section of Florida State Road A1A washed out frequently during storms, it was moved inland in the 1970s after Hurricane Dora, and the Old A1A became a county road. The road was rebuilt after washouts caused by Hurricane Jeanne in 2004, and Tropical Storm Fay in 2008. Local residents filed suit to ensure ongoing maintenance of the road by the county, and a decision was issued on May 20, 2011. Attorney Thomas Ruppert of the Florida Sea Grant and Peter Byrne of Georgetown University have expressed concerns about the broader consequences of the precedents set by this case, which may enable more affluent property owners to sue in order to fund rebuilding on the grounds that engineers have failed to anticipate sea level rise. The precedents set have the potential to direct public funding towards expensive, unrealistic projects, and away from coastal mitigation measures for less affluent communities. References External links U.S. Army Corps of Engineers, Jacksonville District. St. Johns County, Florida, South Ponte Vedra Beach, Vilano Beach, and Summer Haven Reaches. Coastal Storm Risk Management Project Final Integrated Feasibility Study and Environmental Assessment. March 2017. Unincorporated communities in St. Johns County, Florida Unincorporated communities in the Jacksonville metropolitan area Unincorporated communities in Florida Beaches of Florida Seaside resorts in Florida Seaside resorts in the United States Populated coastal places in Florida on the Atlantic Ocean Coastal engineering Sea level
Summer Haven, Florida
[ "Engineering" ]
540
[ "Coastal engineering", "Civil engineering" ]
60,991,538
https://en.wikipedia.org/wiki/Group%20living
In ethology and evolutionary biology, group living is defined as individuals of the same species (conspecifics) maintaining spatial proximity with one another over time through mechanisms of social attraction. Solitary life is considered to be the ancestral state in animals, and group living has thus evolved independently in many species. Species that form groups through social interaction can therefore gain evolutionary advantages, such as increased protection against predators, access to potential mates, increased foraging efficiency and access to social information. Important aspects of group living include the frequency and type of social interactions (egoistic, cooperative, altruistic, revengeful) between individuals of a group (social life), the group size, and the organization of group members within the group. Terminology for animal groups also varies among taxonomic groups: groups of sheep are termed herds, whilst groups of birds are referred to as colonies or flocks. Most studies on group living focus strictly on groups comprising a single species. However, many mixed-species groups commonly occur in nature. Examples of mixed-species groups include wildebeests forming groups with zebras, and different species of birds that form large foraging flocks. Group living may sometimes be confused with collective animal behavior. Collective animal behavior is the study of how the interactions between individuals of a group give rise to group-level patterns and how these patterns have evolved. Examples include the marching of locusts and flocks of migrating birds. Group living, however, focuses on the long-term social interactions between individuals of a group and how animals have evolved from solitary living. Definition of group living It is extremely difficult to distinguish between solitary living and group living, and distinctions between the two are relatively artificial. This is because many species of animals that spend a majority of their life alone will, at some point in their life, join a group or engage in social behavior. Examples include mating, parental care of offspring, or aggregations of conspecifics in an area to exploit resources such as food or shelter. Therefore, multiple definitions of group living have been proposed. Definitions of group living vary depending on the frequency and type of social interactions that members of a group display and on the level of coordination and cohesion of group members. For example, Wilson (2000) defines a group as "any set of organisms, belonging to the same species, that remain together for a period of time while interacting with one another to a distinctly greater degree than with other conspecific organisms." This definition cannot be applied to situations such as moths drawn to a lamp, or animals aggregating around a watering hole, as these are not examples of social aggregation. Most definitions agree, however, that a fundamental characteristic of group living is that individuals need to show spatial proximity over time to be considered a group. Therefore, the working definition of group living is where two or more individuals display a degree of spatial proximity over time, emphasizing the importance of mechanisms of social attraction in maintaining these groups. Evolution of group living Multiple hypotheses have been proposed to explain how group living evolved in animals.
Research shows that grouping habits may differ between individuals, and this tendency to group can be inherited. Research also shows that grouping tendency depends heavily on the interaction of many genes, as well as on the experiences gained by an individual and the environmental conditions surrounding the individual. Other studies argue that the main driving force of the evolution of social grouping is phylogenetic inertia alongside ecological pressure. However, it is still unclear how exactly animals evolved from the ancestral state of solitary life. Benefits of group living Information access and transfer A key advantage of group living is the ability for individuals in a group to access information gained by other group members. This ability to share information can benefit many aspects of a group's success, such as increased foraging efficiency and increased defenses against predators. Foraging efficiency An advantage of information access from group living is increased foraging efficiency. When individuals form a group, they can more effectively locate high-quality resources in their environment. Foraging efficiency can be increased by the sheer area of space individuals occupy as well as by a greater number of individuals searching for food. Once a high-quality resource is found, individuals may produce signals or cues that guide other members of the group to the location of the resource. The cues and signals produced thus help individuals of a group discriminate between low- and high-quality resources. An example of this information transfer benefiting foraging efficiency can be seen in honey-bee (Apis mellifera) colonies, in which waggle dances performed by honey-bees convey information on where the dancing bees foraged nectar. The waggle dance thus guides other bees to the location of highly productive flowers. In some species, for instance the forest tent caterpillar (Malacosoma disstria), foraging behaviors change depending on the food source. On less favorable food sources, caterpillar groups tend to splinter, thereby potentially increasing the risk of predation but increasing the potential of finding a more favorable food source. Increased defense from predators Another advantage of living in a group, seen in many prey species, is the ability to increase defenses against predatory animals. One way a group may increase its defenses against predators is through the 'many-eyes effect'. This effect states that larger groups of animals are better at detecting predators than smaller groups. This allows the individuals within a group to identify predators more effectively, allowing them to flee or adopt postures that alert the predators that their presence is known. An example can be seen in a study conducted by Siegfried and Underhill (1975) on laughing doves, in which large groups reacted to a mock predator much more rapidly than smaller groups. Another way in which a group may have decreased risk of predation is through the dilution effect. The dilution effect is the idea that an individual in a large group has a reduced risk of predation compared to an individual in a small group or a solitary individual; the risk is 'diluted' among the other members of the group. It is important to note, however, that this effect only occurs where predators are unable to capture all individuals in a group.
For example, a flock of birds preying on a large group of caterpillars will not produce any dilution effect, as the birds can rapidly consume all the caterpillars at once. Not all individuals in a large group may benefit equally from the dilution effect, however, and thus the selfish herd theory was developed. The selfish herd theory states that individuals in the periphery of a group are more likely to be preyed upon than those in the center of the group. Breeding It is hypothesized that the reproductive success of a female is determined by the number of eggs she can produce, while the reproductive success of a male is determined by the number of females he mates with. Furthermore, according to Bateman's principle, it is expected that females of a population will select mates that result in the best-quality offspring, while males compete among each other to mate with a female. Group living provides the presence of social information within the group, allowing both male and female members to find and select potential mating partners. Alongside this, living in a group allows for higher reproductive success as individuals have access to a greater number of potential mates, and the possibility to choose between them. Therefore, individuals living in groups have a higher chance of finding a mate and successfully reproducing, on the basis that larger groups present a greater number of accessible mates nearby. Costs of group living Despite the many benefits of living in groups, individuals may also incur costs when forming groups. Ectoparasitism and disease When individuals of the same species aggregate to form groups, there is an increased risk of diseases and parasites spreading throughout the group. Because individuals of a group live together in close proximity, when one individual is infected with a disease or parasite, it brings this disease or parasite into a habitat full of susceptible individuals. Also, larger groups of animals produce larger amounts of waste material, creating a favorable environment for pathogens that may spread to individuals. Thus, transmission of diseases and parasites is more likely to occur, and to occur more rapidly, than if an individual lived alone. A clear example was shown in colonies of cliff swallows by Brown and Brown (1986). Cliff swallows are commonly parasitized by swallow bugs, and this study showed that the number of swallow bugs per nest increased significantly with the number of cliff swallows per colony, which reduced the survivability of the nests' offspring by up to 50%. Another study shows that bank swallows have an increased likelihood of flea infestations per burrow with increasing colony size, which also increased mortality rates of offspring in infected burrows. Intraspecific competition A consequence that may arise from forming large groups is increased intraspecific competition between group members. If resources in a group's environment become limited, group members must compete with one another for the available resources. The increased competition then results in reduced nutritional intake in some individuals compared to others. An example of this can be seen in a study conducted on leaf monkeys. This study showed that females in a larger group of leaf monkeys had a reduced energetic intake compared with females in smaller groups. The reduction in energy gain seen in females of the larger group also negatively affected the development rates of any infant offspring.
Therefore, despite the benefits of forming groups that increase foraging efficiency due to the presence of social information, large groups of animals may also incur the cost of having to compete for the resources available in the environment. Reproduction Another cost of group living is the effect that a larger group size has on the reproductive success of individuals. While forming groups may benefit the reproductive success of individuals because there are more potential mates, individuals may consequently face increased competition with one another to successfully find a mate and reproduce. This means some individuals will have reduced reproductive success as they now compete with other group members. An example can be seen in a study conducted on the Eurasian badger (Meles meles). This study showed that females belonging to a large group of badgers had a higher rate of reproductive failure than badgers living solitarily. Therefore, some individuals may actually show reduced reproductive success while living in a group despite the increased presence of potential mates. Stress Animals that form groups need to maintain a group size around an optimal level. Individual group members in groups much larger or smaller than the optimum may have increased stress levels. Individuals in groups much larger than their optimum group size may have increased stress levels due to competition for food resources or mates. In contrast, individuals in groups smaller than their optimum have increased stress levels arising from inadequate defense against predators. An example of this can be seen in a study conducted on ring-tailed lemurs (Lemur catta). This study predicted that the optimum group size of ring-tailed lemurs is 10-20 individuals. The study then showed that groups within the optimum group size produced the lowest level of cortisol (an indicator of stress), while groups larger or smaller than the optimum group size had significantly increased cortisol production. Therefore, group sizes that are not maintained within their optimum size may incur the cost of increased stress levels for individuals within those groups. Inbreeding Another proposed cost of group living is the increased risk of inbreeding. As members of the group remain in close proximity to one another over long periods of time, the chances increase that offspring of the group may mate with related individuals. Offspring resulting from inbreeding have an increased chance of being affected by recessive or deleterious traits, reducing their survivability and ability to reproduce. The risk of inbreeding, however, is only prevalent in smaller, isolated groups, as larger group sizes dilute the chance of an individual mating with its relatives. Further reading Ward, A. and Webster, M., 2016. Sociality: the behaviour of group-living animals. Berlin, Germany: Springer. References Evolutionary biology Ethology
Group living
[ "Biology" ]
2,479
[ "Evolutionary biology" ]
60,991,858
https://en.wikipedia.org/wiki/ASASSN-V%20J213939.3-702817.4
ASASSN-V J213939.3-702817.4 (also known as ASAS-SN-V J213939.3-702817.4 and J213939.3-702817.4) is a star, previously non-variable, found to be associated with an unusual, deep dimming event that was uncovered by the All Sky Automated Survey for SuperNovae (ASAS-SN) project, and first reported on 4 June 2019 in The Astronomer's Telegram. The star, in the constellation of Indus, about away, was first observed on 15 May 2014 (UT) by ASAS-SN and, as of 4 June 2019, has yielded more than 1,780 data points, including a quiescent mean magnitude of g~12.95. On 4 June 2019, the star was reported to have dimmed gradually from g~11.96 at HJD 2458635.78 to g~14.22 at 4458837.45, and, as of 4 June 2019, seems to be returning to its quiescent state at g~13.29 at HJD 2851634.89. According to astronomer Tharindu Jayasinghe, one of the discoverers of the deep dimming event, "[The star has] been quiescent for so long and then suddenly decreased in brightness by a huge amount ... Why that happened, we don't know yet." See also Disrupted planet List of stars that have unusual dimming periods References External links A presentation by Tabetha S. Boyajian (2016). A presentation by Isaac Arthur (2016). Star with unusual light fluctuations (2017). Up to 80% dimming (2019). Astronomical objects discovered in 2019 Indus (constellation) 2019 in science Unsolved problems in astronomy Unexplained phenomena F-type main-sequence stars
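The depth of the dimming can be expressed as a flux ratio using the standard Pogson magnitude relation, F2/F1 = 10^(-0.4 (m2 - m1)). A minimal sketch using the g-band magnitudes reported above (the only inputs taken from the article; the helper function name is illustrative):

```python
# Convert the reported magnitude change into a fractional loss of flux,
# via the Pogson relation: F2/F1 = 10**(-0.4 * (m2 - m1)).
def dimming_fraction(m_bright: float, m_faint: float) -> float:
    """Fraction of flux lost when the star fades from m_bright to m_faint."""
    flux_ratio = 10 ** (-0.4 * (m_faint - m_bright))
    return 1.0 - flux_ratio

# Magnitudes reported for ASASSN-V J213939.3-702817.4 (g band):
print(f"{dimming_fraction(11.96, 14.22):.1%}")  # ≈ 87.5% of the flux lost
```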
ASASSN-V J213939.3-702817.4
[ "Physics", "Astronomy" ]
406
[ "Indus (constellation)", "Unsolved problems in astronomy", "Concepts in astronomy", "Constellations", "Astronomical controversies" ]
60,992,857
https://en.wikipedia.org/wiki/Federated%20learning
Federated learning (also known as collaborative learning) is a machine learning technique focusing on settings in which multiple entities (often referred to as clients) collaboratively train a model while ensuring that their data remains decentralized. This stands in contrast to machine learning settings in which data is centrally stored. One of the primary defining characteristics of federated learning is data heterogeneity. Due to the decentralized nature of the clients' data, there is no guarantee that data samples held by each client are independently and identically distributed. Federated learning is generally concerned with and motivated by issues such as data privacy, data minimization, and data access rights. Its applications involve a variety of research areas including defence, telecommunications, the Internet of things, and pharmaceuticals. Definition Federated learning aims at training a machine learning algorithm, for instance deep neural networks, on multiple local datasets contained in local nodes without explicitly exchanging data samples. The general principle consists of training local models on local data samples and exchanging parameters (e.g. the weights and biases of a deep neural network) between these local nodes at some frequency to generate a global model shared by all nodes. The main difference between federated learning and distributed learning lies in the assumptions made on the properties of the local datasets, as distributed learning originally aims at parallelizing computing power whereas federated learning originally aims at training on heterogeneous datasets. While distributed learning also aims at training a single model on multiple servers, a common underlying assumption is that the local datasets are independent and identically distributed (i.i.d.) and roughly of the same size. None of these hypotheses are made for federated learning; instead, the datasets are typically heterogeneous and their sizes may span several orders of magnitude. Moreover, the clients involved in federated learning may be unreliable as they are subject to more failures or dropping out, since they commonly rely on less powerful communication media (e.g. Wi-Fi) and battery-powered systems (e.g. smartphones and IoT devices), compared to distributed learning where nodes are typically datacenters with powerful computational capabilities connected to one another by fast networks. Mathematical formulation The objective function for federated learning is f(x_1, \dots, x_K) = \frac{1}{K} \sum_{i=1}^{K} f_i(x_i), where K is the number of nodes, x_i are the weights of the model as viewed by node i, and f_i is node i's local objective function, which describes how the model weights x_i conform to node i's local dataset. The goal of federated learning is to train a common model on all of the nodes' local datasets, in other words: Optimizing the objective function f(x_1, \dots, x_K). Achieving consensus on x_1, \dots, x_K. In other words, x_1, \dots, x_K converge to some common x at the end of the training process. Centralized federated learning In the centralized federated learning setting, a central server is used to orchestrate the different steps of the algorithms and coordinate all the participating nodes during the learning process. The server is responsible for the node selection at the beginning of the training process and for the aggregation of the received model updates. Since all the selected nodes have to send updates to a single entity, the server may become a bottleneck of the system.
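Restating the objective from the Mathematical formulation section above in display form, together with a sample-count-weighted variant commonly used by FedAvg-style methods (the second line is a standard instantiation added here for illustration; the weighting n_k/n is not spelled out in the text above):

```latex
% Global federated objective over K nodes, each with local objective f_i:
f(x_1, \dots, x_K) = \frac{1}{K} \sum_{i=1}^{K} f_i(x_i),
\qquad x_1 = \dots = x_K = x \ \text{at convergence (consensus)}.
% Common sample-weighted variant (FedAvg-style), where node k holds
% n_k of the n total training samples:
\min_{x} \; F(x) = \sum_{k=1}^{K} \frac{n_k}{n} \, F_k(x)
```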
Decentralized federated learning In the decentralized federated learning setting, the nodes are able to coordinate themselves to obtain the global model. This setup prevents single points of failure as the model updates are exchanged only between interconnected nodes without the orchestration of a central server. Nevertheless, the specific network topology may affect the performance of the learning process. See blockchain-based federated learning and the references therein. Heterogeneous federated learning An increasing number of application domains involve a large set of heterogeneous clients, e.g., mobile phones and IoT devices. Most existing federated learning strategies assume that local models share the same global model architecture. Recently, a new federated learning framework named HeteroFL was developed to address heterogeneous clients equipped with very different computation and communication capabilities. The HeteroFL technique can enable the training of heterogeneous local models with dynamically varying computation and non-IID data complexities while still producing a single accurate global inference model. Main features Iterative learning To ensure good task performance of a final, central machine learning model, federated learning relies on an iterative process broken up into an atomic set of client-server interactions known as a federated learning round. Each round of this process consists of transmitting the current global model state to participating nodes, training local models on these local nodes to produce a set of potential model updates at each node, and then aggregating and processing these local updates into a single global update and applying it to the global model. In the methodology below, a central server is used for aggregation, while local nodes perform local training depending on the central server's orders. However, other strategies lead to the same results without central servers, in a peer-to-peer approach, using gossip or consensus methodologies. Assuming a federated round composed of one iteration of the learning process, the learning procedure can be summarized as follows (a code sketch of one such round is given after this list): Initialization: according to the server inputs, a machine learning model (e.g., linear regression, neural network, boosting) is chosen to be trained on local nodes and initialized. Then, nodes are activated and wait for the central server to assign the calculation tasks. Client selection: a fraction of local nodes is selected to start training on local data. The selected nodes acquire the current statistical model while the others wait for the next federated round. Configuration: the central server orders selected nodes to undergo training of the model on their local data in a pre-specified fashion (e.g., for some mini-batch updates of gradient descent). Reporting: each selected node sends its local model to the server for aggregation. The central server aggregates the received models and sends back the model updates to the nodes. It also handles failures for disconnected nodes or lost model updates. The next federated round starts by returning to the client selection phase. Termination: once a pre-defined termination criterion is met (e.g., a maximum number of iterations is reached or the model accuracy is greater than a threshold), the central server aggregates the updates and finalizes the global model. The procedure considered above assumes synchronized model updates.
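The round structure just listed can be made concrete with a short, self-contained simulation. This is an editorial sketch rather than the protocol of any particular framework: the model is a plain weight vector, local training is a few gradient steps on a least-squares loss, and all dataset sizes and constants are made-up placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_train(w, X, y, lr=0.1, steps=5):
    """Configuration step: a few local gradient steps on a least-squares loss."""
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Toy setup: 10 clients, each holding a small private dataset.
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(10)]
w_global = np.zeros(3)  # initialization of the shared model

for round_ in range(50):                              # federated rounds
    selected = rng.choice(10, size=4, replace=False)  # client selection
    updates, sizes = [], []
    for i in selected:                                # local training
        X, y = clients[i]
        updates.append(local_train(w_global.copy(), X, y))
        sizes.append(len(y))
    # Reporting: the server aggregates, weighting by local dataset size.
    weights = np.array(sizes) / sum(sizes)
    w_global = sum(wk * upd for wk, upd in zip(weights, updates))

print("final global weights:", w_global)
```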
Recent federated learning developments have introduced novel techniques to tackle asynchronicity during the training process, or training with dynamically varying models. Compared to synchronous approaches, where local models are exchanged once the computations have been performed for all layers of the neural network, asynchronous ones leverage the properties of neural networks to exchange model updates as soon as the computations of a certain layer are available. These techniques are also commonly referred to as split learning, and they can be applied both at training and inference time, regardless of centralized or decentralized federated learning settings. Non-IID data In most cases, the assumption of independent and identically distributed samples across local nodes does not hold for federated learning setups. Under this setting, the performance of the training process may vary significantly according to the unbalanced local data samples as well as the particular probability distribution of the training examples (i.e., features and labels) stored at the local nodes. To further investigate the effects of non-IID data, the following description considers the main categories presented in the preprint by Peter Kairouz et al. from 2019. The description of non-IID data relies on the analysis of the joint probability between features and labels for each node. This allows decoupling of each contribution according to the specific distribution available at the local nodes. The main categories of non-IID data can be summarized as follows: Covariate shift: local nodes may store examples that have different statistical distributions compared to other nodes. An example occurs in natural language processing datasets where people typically write the same digits/letters with different stroke widths or slants. Prior probability shift: local nodes may store labels that have different statistical distributions compared to other nodes. This can happen if datasets are regional and/or demographically partitioned. For example, datasets containing images of animals vary significantly from country to country. Concept drift (same label, different features): local nodes may share the same labels but some of them correspond to different features at different local nodes. For example, images that depict a particular object can vary according to the weather condition in which they were captured. Concept shift (same features, different labels): local nodes may share the same features but some of them correspond to different labels at different local nodes. For example, in natural language processing, the sentiment analysis may yield different sentiments even if the same text is observed. Unbalanced: the amount of data available at the local nodes may vary significantly in size. The loss in accuracy due to non-IID data can be bounded by using more sophisticated means of data normalization, rather than batch normalization. Algorithmic hyper-parameters Network topology The way the statistical local outputs are pooled and the way the nodes communicate with each other can change from the centralized model explained in the previous section. This leads to a variety of federated learning approaches: for instance, no central orchestrating server, or stochastic communication. In particular, orchestrator-less distributed networks are one important variation. In this case, there is no central server dispatching queries to local nodes and aggregating local models.
Each local node sends its outputs to several randomly selected others, which aggregate their results locally. This restricts the number of transactions, thereby sometimes reducing training time and computing cost. Federated learning parameters Once the topology of the node network is chosen, one can control different parameters of the federated learning process (in addition to the machine learning model's own hyperparameters) to optimize learning: Number of federated learning rounds: T Total number of nodes used in the process: K Fraction of nodes used at each iteration for each node: C Local batch size used at each learning iteration: B Other model-dependent parameters can also be tinkered with, such as: Number of iterations for local training before pooling: N Local learning rate: η Those parameters have to be optimized depending on the constraints of the machine learning application (e.g., available computing power, available memory, bandwidth). For instance, stochastically choosing a limited fraction of nodes for each iteration diminishes computing cost and may prevent overfitting, in the same way that stochastic gradient descent can reduce overfitting. Technical limitations Federated learning requires frequent communication between nodes during the learning process. Thus, it requires not only enough local computing power and memory, but also high-bandwidth connections to be able to exchange the parameters of the machine learning model. However, the technology also avoids data communication, which can require significant resources before starting centralized machine learning. Nevertheless, the devices typically employed in federated learning are communication-constrained; for example, IoT devices or smartphones are generally connected to Wi-Fi networks, so, even if the models are commonly less expensive to transmit compared to raw data, federated learning mechanisms may not be suitable in their general form. Federated learning raises several statistical challenges: Heterogeneity between the different local datasets: each node may have some bias with respect to the general population, and the size of the datasets may vary significantly; Temporal heterogeneity: each local dataset's distribution may vary with time; Interoperability of each node's dataset is a prerequisite; Each node's dataset may require regular curation; Hiding training data might allow attackers to inject backdoors into the global model; Lack of access to global training data makes it harder to identify unwanted biases entering the training, e.g. age, gender, sexual orientation; Partial or total loss of model updates due to node failures affecting the global model; Lack of annotations or labels on the client side; Heterogeneity between processing platforms. Federated learning variations A number of different algorithms for federated optimization have been proposed. Federated stochastic gradient descent (FedSGD) Deep learning training mainly relies on variants of stochastic gradient descent, where gradients are computed on a random subset of the total dataset and then used to make one step of the gradient descent. Federated stochastic gradient descent is the direct transposition of this algorithm to the federated setting, but using a random fraction of the nodes and all the data on those nodes. The gradients are averaged by the server proportionally to the number of training samples on each node, and used to make a gradient descent step.
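In code, the FedSGD update just described reduces to a single server-side descent step on the sample-size-weighted average of client gradients. A minimal sketch, assuming the same toy least-squares setup as the round simulation above (the objective, shapes, and learning rate are illustrative assumptions):

```python
import numpy as np

def fedsgd_round(w, client_data, lr=0.1):
    """One FedSGD round: average client gradients, then one descent step."""
    grads, sizes = [], []
    for X, y in client_data:
        # Each client computes a full-batch gradient on its own data only.
        grads.append(X.T @ (X @ w - y) / len(y))
        sizes.append(len(y))
    total = sum(sizes)
    # Server averages gradients proportionally to local sample counts.
    g = sum((n / total) * grad for n, grad in zip(sizes, grads))
    return w - lr * g

# Example usage with two tiny synthetic clients:
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(8, 3)), rng.normal(size=8)) for _ in range(2)]
w = np.zeros(3)
for _ in range(100):
    w = fedsgd_round(w, clients)
print("weights after 100 rounds:", w)
```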
Federated averaging Federated averaging (FedAvg) is a generalization of FedSGD which allows local nodes to perform more than one batch update on local data and exchanges the updated weights rather than the gradients. The rationale behind this generalization is that in FedSGD, if all local nodes start from the same initialization, averaging the gradients is strictly equivalent to averaging the weights themselves. Further, averaging tuned weights coming from the same initialization does not necessarily hurt the resulting averaged model's performance. Variations of FedAvg based on adaptive optimizers such as ADAM and AdaGrad have been proposed, and generally outperform FedAvg. Federated Learning with Dynamic Regularization (FedDyn) Federated learning methods suffer when the device datasets are heterogeneously distributed. The fundamental dilemma in the heterogeneously distributed device setting is that minimizing the device loss functions is not the same as minimizing the global loss objective. In 2021, Acar et al. introduced the FedDyn method as a solution to the heterogeneous dataset setting. FedDyn dynamically regularizes each device's loss function so that the modified device losses converge to the actual global loss. Since the local losses are aligned, FedDyn is robust to different heterogeneity levels and can safely perform full minimization in each device. Theoretically, FedDyn converges to the optimum (a stationary point for nonconvex losses) while being agnostic to the heterogeneity levels. These claims are verified with extensive experiments on various datasets. Minimizing the number of communications is the gold standard for comparison in federated learning. It may also be desirable to decrease the local computation per device in each round. FedDynOneGD is an extension of FedDyn with lower local computation requirements. FedDynOneGD calculates only one gradient per device in each round and updates the model with a regularized version of the gradient. Hence, the computation complexity is linear in the local dataset size. Moreover, the gradient computation can be parallelized within each device, which is different from successive SGD steps. Theoretically, FedDynOneGD achieves the same convergence guarantees as FedDyn with less local computation. Personalized Federated Learning by Pruning (Sub-FedAvg) Federated learning methods often cannot achieve good global performance under non-IID settings, which motivates the participating clients to yield personalized models in federation. Recently, Vahidian et al. introduced Sub-FedAvg, opening a new personalized FL algorithm paradigm by proposing hybrid pruning (structured + unstructured pruning) with averaging on the intersection of clients' drawn subnetworks, which simultaneously handles communication efficiency, resource constraints and personalized model accuracies. Sub-FedAvg is the first work to show the existence of personalized winning tickets for clients in federated learning through experiments. Moreover, it also proposes two algorithms on how to effectively draw the personalized subnetworks. Sub-FedAvg tries to extend the "lottery ticket hypothesis", originally stated for centrally trained neural networks, to neural networks trained with federated learning, leading to this open research problem: "Do winning tickets exist for clients' neural networks being trained in federated learning?
If yes, how to effectively draw the personalized subnetworks for each client?" Dynamic Aggregation - Inverse Distance Aggregation IDA (Inverse Distance Aggregation) is a novel adaptive weighting approach for clients based on meta-information which handles unbalanced and non-IID data. It uses the distance of the model parameters as a strategy to minimize the effect of outliers and improve the model's convergence rate. Hybrid Federated Dual Coordinate Ascent (HyFDCA) Very few methods for hybrid federated learning, where clients only hold subsets of both features and samples, exist. Yet, this scenario is very important in practical settings. Hybrid Federated Dual Coordinate Ascent (HyFDCA) is a novel algorithm proposed in 2024 that solves convex problems in the hybrid FL setting. This algorithm extends CoCoA, a primal-dual distributed optimization algorithm introduced by Jaggi et al. (2014) and Smith et al. (2017), to the case where both samples and features are partitioned across clients. HyFDCA claims several improvements over existing algorithms: HyFDCA is a provably convergent primal-dual algorithm for hybrid FL in at least the following settings. Hybrid Federated Setting with Complete Client Participation Horizontal Federated Setting with Random Subsets of Available Clients The authors show HyFDCA enjoys a convergence rate of O(1/t), which matches the convergence rate of FedAvg (see below). Vertical Federated Setting with Incomplete Client Participation The authors show HyFDCA enjoys a convergence rate of O(1/t) whereas FedBCD exhibits a slower O(1/√t) convergence rate and requires full client participation. HyFDCA provides the privacy steps that ensure privacy of client data in the primal-dual setting. These principles apply to future efforts in developing primal-dual algorithms for FL. HyFDCA empirically outperforms HyFEM and FedAvg in loss function value and validation accuracy across a multitude of problem settings and datasets (see below for more details). The authors also introduce a hyperparameter selection framework for FL with competing metrics using ideas from multiobjective optimization. There is only one other algorithm that focuses on hybrid FL, HyFEM, proposed by Zhang et al. (2020). This algorithm uses a feature matching formulation that balances clients building accurate local models and the server learning an accurate global model. This requires a matching regularizer constant that must be tuned based on user goals and results in disparate local and global models. Furthermore, the convergence results provided for HyFEM only prove convergence of the matching formulation, not of the original global problem. This work is substantially different from HyFDCA's approach, which uses data on local clients to build a global model that converges to the same solution as if the model were trained centrally. Furthermore, the local and global models are synchronized and do not require the adjustment of a matching parameter between local and global models. However, HyFEM is suitable for a vast array of architectures including deep learning architectures, whereas HyFDCA is designed for convex problems like logistic regression and support vector machines. HyFDCA is empirically benchmarked against the aforementioned HyFEM as well as the popular FedAvg in solving convex problems (specifically classification problems) for several popular datasets (MNIST, Covtype, and News20).
The authors found HyFDCA converges to a lower loss value and higher validation accuracy in less overall time in 33 of 36 comparisons examined, and in 36 of 36 comparisons with respect to the number of outer iterations. Lastly, HyFDCA only requires tuning of one hyperparameter, the number of inner iterations, as opposed to FedAvg (which requires tuning three) or HyFEM (which requires tuning four). Since the hyperparameters of FedAvg and HyFEM are quite difficult to optimize, which in turn greatly affects convergence, HyFDCA's single hyperparameter allows for simpler practical implementations and hyperparameter selection methodologies. Current research topics Federated learning started to emerge as an important research topic in 2015 and 2016, with the first publications on federated averaging in telecommunication settings. Before that, in a thesis titled "A Framework for Multi-source Prefetching Through Adaptive Weight", an approach to aggregate predictions from multiple models trained at three locations of a request-response cycle was proposed. Another important aspect of active research is the reduction of the communication burden during the federated learning process. In 2017 and 2018, publications emphasized the development of resource allocation strategies, especially to reduce communication requirements between nodes with gossip algorithms, as well as the characterization of robustness to differential privacy attacks. Other research activities focus on the reduction of bandwidth during training through sparsification and quantization methods, where the machine learning models are sparsified and/or compressed before being shared with other nodes. Developing ultra-light DNN architectures is essential for device- and edge-learning, and recent work recognises both the energy efficiency requirements of future federated learning and the need to compress deep learning, especially during learning. Recent research advancements are starting to consider real-world propagating channels, as previous implementations assumed ideal channels. Another active direction of research is the development of federated learning for training heterogeneous local models with varying computation complexities while producing a single powerful global inference model. A learning framework named Assisted learning was recently developed to improve each agent's learning capabilities without transmitting private data, models, or even learning objectives. Compared with federated learning, which often requires a central controller to orchestrate the learning and optimization, Assisted learning aims to provide protocols for the agents to optimize and learn among themselves without a global model. Use cases Federated learning typically applies when individual actors need to train models on larger datasets than their own, but cannot afford to share the data itself with others (e.g., for legal, strategic or economic reasons). The technology nevertheless requires good connections between local servers and minimum computational power for each node. Transportation: self-driving cars Self-driving cars encapsulate many machine learning technologies to function: computer vision for analyzing obstacles, machine learning for adapting their pace to the environment (e.g., bumpiness of the road). Due to the potentially high number of self-driving cars and the need for them to quickly respond to real-world situations, the traditional cloud approach may generate safety risks.
Federated learning can represent a solution for limiting the volume of data transfer and accelerating learning processes. Industry 4.0: smart manufacturing In Industry 4.0, there is widespread adoption of machine learning techniques to improve the efficiency and effectiveness of industrial processes while guaranteeing a high level of safety. Nevertheless, the privacy of sensitive data for industries and manufacturing companies is of paramount importance. Federated learning algorithms can be applied to these problems as they do not disclose any sensitive data. In addition, FL has also been implemented for PM2.5 prediction to support smart city sensing applications. Medicine: digital health Federated learning seeks to address the problem of data governance and privacy by training algorithms collaboratively without exchanging the data itself. Today's standard approach of centralizing data from multiple centers comes at the cost of critical concerns regarding patient privacy and data protection. To solve this problem, the ability to train machine learning models at scale across multiple medical institutions without moving the data is a critical technology. Nature Digital Medicine published the paper "The Future of Digital Health with Federated Learning" in September 2020, in which the authors explore how federated learning may provide a solution for the future of digital health, and highlight the challenges and considerations that need to be addressed. Recently, a collaboration of 20 different institutions around the world validated the utility of training AI models using federated learning. In a paper published in Nature Medicine, "Federated learning for predicting clinical outcomes in patients with COVID-19", they showcased the accuracy and generalizability of a federated AI model for the prediction of oxygen needs in patients with COVID-19 infections. Furthermore, in the published paper "A Systematic Review of Federated Learning in the Healthcare Area: From the Perspective of Data Properties and Applications", the authors provide a set of FL challenges from a medical data-centric perspective. A coalition from industry and academia has developed MedPerf, an open source platform that enables validation of medical AI models in real-world data. The platform relies technically on federated evaluation of AI models, aiming to alleviate concerns of patient privacy, and conceptually on diverse benchmark committees to build the specifications of neutral, clinically impactful benchmarks. Robotics Robotics includes a wide range of applications of machine learning methods: from perception and decision-making to control. As robotic technologies have been increasingly deployed from simple and repetitive tasks (e.g. repetitive manipulation) to complex and unpredictable tasks (e.g. autonomous navigation), the need for machine learning grows. Federated learning provides a solution to improve over conventional machine learning training methods. In one paper, mobile robots learned navigation over diverse environments using an FL-based method, helping generalization. In another, federated learning was applied to improve multi-robot navigation under limited-communication-bandwidth scenarios, which is a current challenge in real-world learning-based robotic tasks. And in a third, federated learning was used to learn vision-based navigation, helping better sim-to-real transfer.
Biometrics Federated Learning (FL) is transforming biometric recognition by enabling collaborative model training across distributed data sources while preserving privacy. By eliminating the need to share sensitive biometric templates like fingerprints, facial images, and iris scans, FL addresses privacy concerns and regulatory constraints, allowing for improved model accuracy and generalizability. It mitigates challenges of data fragmentation by leveraging scattered datasets, making it particularly effective for diverse biometric applications such as facial and iris recognition. However, FL faces challenges, including model and data heterogeneity, computational overhead, and vulnerability to security threats like inference attacks. Future directions include developing personalized FL frameworks, enhancing system efficiency, and expanding FL applications to biometric presentation attack detection (PAD) and quality assessment, fostering innovation and robust solutions in privacy-sensitive environments. References External links "Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016" at eur-lex.europa.eu. Retrieved October 18, 2019. "Data minimisation and privacy-preserving techniques in AI systems" at UK Information Commissioners Office. Retrieved July 22, 2020 "Realising the Potential of Data Whilst Preserving Privacy with EyA and Conclave from R3" at eya.global. Retrieved March 31, 2022. Machine learning Distributed artificial intelligence
Federated learning
[ "Technology", "Engineering" ]
5,508
[ "Artificial intelligence engineering", "Information systems", "Distributed artificial intelligence", "Machine learning" ]
60,992,969
https://en.wikipedia.org/wiki/Muyoba%20Macwani
Muyoba Macwani is a nuclear physicist and Zambian political figure who has been actively involved in the fight for equal rights for all Zambians and has taken an active role in the fight for the restoration of the Barotseland Agreement in the Zambian constitution. Education Macwani did his undergraduate studies at the University of Zambia. He obtained his master's degree from the University of Surrey, after which he obtained his PhD in nuclear physics in 1982 from Imperial College London, with a focus on neutron activation analysis without multi-element standards. Career Macwani was the party president of the Caucus for National Unity (CNN) in 1997, which was a splinter group from the MMD. He has been actively involved in the fight for the restoration of the Barotseland Agreement in the constitution, which was terminated by the Zambian government even though the government still benefits from the contents of the agreement. Macwani has been involved in the fight for equity for all Zambians and took an active role in promoting the slogan of One Zambia, One Nation, in trying to unite all the people of Zambia as equal and one. The former University of Barotseland Vice-Chancellor has also been instrumental in fighting for equal opportunity for rural students, and has furthermore called for the Zambian government to make education a priority for the development of Zambia. Macwani was also involved in the site investigations and risk assessments carried out for the proposed rehabilitation projects on the Ndola - Mufulira Mwambashi (M4), Mufulira and border with DRC (M5) and Kafulafuta-Luanshya (M6) roads, to fully ascertain the impacts of the rehabilitation works and to develop mitigation measures. References Year of birth missing (living people) Living people Alumni of the University of Surrey Alumni of Imperial College London Nuclear physicists 20th-century Zambian politicians
Muyoba Macwani
[ "Physics" ]
388
[ "Nuclear physicists", "Nuclear physics" ]
76,133,802
https://en.wikipedia.org/wiki/Nasrat%20Canal
The Nasrat Canal (often transliterated as Nusrat Canal) also locally known as Sada Wah, is a major irrigation canal located in the Sindh province of Pakistan. It originates from the Rohri Canal near Sukkur and flows southward for approximately 260 kilometers, irrigating vast agricultural lands in the districts of Sukkur, Khairpur, Naushero, and Shaheed Benazirabad. History and significance Construction of the Nasrat Canal began in the early 20th century under the British Raj and was completed in 1923. It played a crucial role in transforming the arid landscape of Sindh into a fertile agricultural region. The canal serves as a vital source of water for various crops, including cotton, wheat, rice, sugarcane, and fruits. References Irrigation canals Irrigation projects Irrigation in Pakistan 1923 establishments in British India
Nasrat Canal
[ "Engineering" ]
167
[ "Irrigation projects" ]
76,135,218
https://en.wikipedia.org/wiki/Ion%20network
An ion network is an interconnected network or structure composed of ions in a solution. The term "ion network" was coined by Cho and coworkers in 2014. The notion of extended ion aggregates in electrolyte solutions, however, can be found in an earlier report. The ion network is particularly relevant in high-salt solutions where ions can aggregate and interact strongly and it has been investigated in an increasing number of research and review articles. In high-salt solutions, ions can form clusters or aggregates due to their electrostatic interactions. These aggregates may further organize into spatially more extensive networks, where ions are connected through electrostatic forces and possibly other types of interactions, such as hydrogen bonding. The formation of percolating ion networks can significantly affect the surrounding solvent molecules, particularly the water hydrogen-bonding networks in aqueous solutions that become intertwined with morphologically complementary ion networks. The presence of ion networks can disrupt the hydrogen-bonding network of water molecules, altering the structure and properties of the solution. This disruption in water structure may have implications for various phenomena, including solvation dynamics, ion transport, and chemical reactions occurring in the solution. Overall, the concept of an ion network highlights the complex and dynamic interactions between ions and solvent molecules in solution, and its understanding is crucial for elucidating the behavior of electrolyte solutions in various contexts, ranging from biological systems to industrial processes, including lithium-ion batteries. Research The study of ion networks and their implications in solution chemistry is an active and interdisciplinary field that has attracted attention from researchers across various disciplines, including chemistry, physics, materials science, and biology. Here are some key research subjects and activities in this field: Electrolyte Solutions and Ionic Liquids: Electrolyte solutions, which contain dissolved ions, and ionic liquids, which are essentially molten salts at room temperature, are important systems for studying ion networks. Researchers have investigated the structure and dynamics of ion networks in these systems using a variety of experimental and theoretical techniques. Molecular Dynamics (MD) Simulations: Molecular dynamics simulations play a crucial role in understanding ion networks at the molecular level. By simulating the behavior of individual ions and solvent molecules over time, researchers can explore the formation, structure, and dynamics of ion networks in solution. Spectroscopic Techniques: Experimental techniques such as infrared spectroscopy, nuclear magnetic resonance (NMR) spectroscopy, and X-ray scattering are commonly used to study ion networks in solution. These techniques provide valuable information about the structure, composition, and dynamics of ion networks. Hofmeister Effect: The Hofmeister effect refers to the phenomenon where the addition of specific ions to a solution can significantly alter the solubility, stability, and other properties of solutes. Understanding the Hofmeister effect is essential for elucidating the role of ion networks in solution chemistry. Soft Matter Physics: Ion networks in solution are also of interest in the field of soft matter physics, where researchers study the behavior of complex fluids and materials. 
Understanding the structure and dynamics of ion networks is crucial for designing new materials with tailored properties. Graph Theory Analysis: Ions often self-assemble into large and polydisperse aggregates in solution. Graph-theoretical approaches have been applied to quantitatively study morphological characteristics of these structural patterns including ion networks. In this approach, the aggregate structures taken from MD trajectories are treated as mathematical structures called graphs, and their properties, such as graph spectrum, degree distribution, clustering coefficient, minimum path length, and graph entropy, are calculated and analyzed. For example, this approach has been used to identify two morphologically different ion aggregates, namely localized clusters and extended networks, in high-salt solutions of the Hofmeister series of ions. References Electrolytes Liquids
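A minimal sketch of the graph-theoretical approach described above, in Python with networkx. The ion coordinates here are random placeholders standing in for a single MD snapshot, the contact cutoff is an assumed value, and periodic boundary conditions are ignored for simplicity:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
positions = rng.uniform(0.0, 30.0, size=(200, 3))  # placeholder ion coordinates (Å)
CUTOFF = 3.5  # ion-ion contact distance cutoff in Å (an assumed value)

# Build the ion-contact graph: nodes are ions, edges are close contacts.
G = nx.Graph()
G.add_nodes_from(range(len(positions)))
for i in range(len(positions)):
    d = np.linalg.norm(positions - positions[i], axis=1)
    for j in np.nonzero((d < CUTOFF) & (np.arange(len(positions)) > i))[0]:
        G.add_edge(i, int(j))

# Morphological descriptors of the aggregates (connected components):
components = sorted(nx.connected_components(G), key=len, reverse=True)
largest = G.subgraph(components[0]) if components else G
print("number of aggregates:", len(components))
print("largest aggregate size:", largest.number_of_nodes())
print("mean clustering coefficient:", nx.average_clustering(G))
```

Descriptors such as the degree distribution or graph entropy could be computed on the same graph; distinguishing localized clusters from extended networks then amounts to comparing these statistics across snapshots and salt concentrations.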
Ion network
[ "Physics", "Chemistry" ]
765
[ "Electrolytes", "Phases of matter", "Electrochemistry", "Matter", "Liquids" ]
76,135,255
https://en.wikipedia.org/wiki/Thulium%20nitride
Thulium nitride is a binary inorganic compound of thulium and nitrogen with the chemical formula TmN. It can be prepared by reacting thulium amalgam with nitrogen at high temperature. Physical properties Thulium nitride crystallises in the cubic crystal system with the Fm3m space group. It has a sodium chloride structure. Chemical properties It reacts with Tm2In at 1220 K to form (Tm3N)In. References Nitrides Thulium compounds Nitrogen compounds
Thulium nitride
[ "Chemistry" ]
102
[ "Inorganic compounds", "Inorganic compound stubs" ]
76,136,188
https://en.wikipedia.org/wiki/Northern%20green%20anaconda
The northern green anaconda (Eunectes akayima) is a disputed boa species found in northern South America and the Caribbean island of Trinidad. It is closely related to Eunectes murinus, the (southern) green anaconda, from which it was claimed to be genetically distinct in 2024. It is one of the heaviest and longest snakes in the world, with one specimen reported by a newspaper to have been long. Like all boas, it is a non-venomous constrictor. E. akayima is estimated to have diverged from its closest relative between 5 and 20 million years ago, originally separated by the Vaupés Arch. Today, specimens of E. akayima are known from the Orinoco basin and surrounding regions, ranging from Ecuador to Trinidad. Its range is still mostly separate from that of E. murinus, although they partially overlap around French Guiana, with no clear geographical barrier. While allegedly separated through mitochondrial DNA markers, the two species are indistinguishable in morphology. The anacondas in the region have been known for centuries as akayima by the local Carib people, and this became the formal scientific name proposed for the species. Later studies raised nomenclatural issues about the description, while also calling into question the validity of the clades recovered through mitochondrial DNA, casting doubt on the status of E. akayima as a separate species. History Previous proposals Before the discovery of E. akayima, several proposals were made to split a new species or subspecies from the green anaconda (Eunectes murinus), such as Eunectes gigas (Latreille, 1801), or Eunectes barbouri (Dunn and Conant, 1936). Boa gigas, later Eunectes gigas or Eunectes murinus gigas, was proposed in 1801, at first described from specimens in Guyana and Trinidad but later believed to range from Colombia to French Guiana. It was distinguished by a lighter postocular coloration, as well as differences in scale count in the ventral and subcaudal regions. The coloration was later found to be uniformly distributed throughout the green anaconda's range, while the scale count was found not to differ significantly between the putative subspecies. This showed E. m. gigas to be a color variation rather than a true subspecies. Eunectes barbouri was described in 1936 from an individual donated to Philadelphia Zoo, believed to originate from the island of Marajó in northern Brazil. The holotype was distinguished from E. murinus specimens by the coloration pattern of its dorsal spots. In the presumed E. barbouri holotype, these spots had a dark border with a light center, while they were uniformly dark in E. murinus. Beyond the holotype, E. barbouri was estimated to have a wider range throughout Brazil, with another attributed specimen originating from Barreiras. The validity of E. barbouri was debated by some contemporary authors, although others continued to maintain the distinctness of the species. A 1970 expedition on Marajó failed to find any E. barbouri specimens, with the only individuals encountered being E. murinus. Later studies have cast doubt on the diagnostic characteristics used to define the species, with the spot patterns being found to be part of the natural color variation in E. murinus, while a 2022 genetic study upheld the synonymy of the two species. Discovery In 2024, research claimed through genetic testing that populations of Eunectes murinus found in northern South America formed a clade distinct from those occurring farther south.
The approximately 5.5 percent difference in mitochondrial DNA prompted the northern populations to be described as a separate species, Eunectes akayima, by the researchers. Both wild and captive specimens were tested, with blood samples taken. While nuclear DNA failed to show measurable differences between the two species, the authors attributed this to the lack of appropriate markers with high enough variation rates, with nuclear genomic studies barely separating E. murinus from the more distant E. notaeus. Samples were taken in multiple locations throughout South America over the course of 20 years in a study led by New Mexico Highlands University professor Jesús Rivas, collecting both genetic data through blood and tissue samples and anatomical characteristics like scale count, as well as habitat data. An earlier specimen, MCNG 1042, was designated as the holotype. This specimen, collected by Jesús Rivas in 1993 at Hato El Cedral in the Venezuelan Llanos, is now preserved in the Museo de Ciencias Naturales de Guanare. Several of the specimens were found in a 2022 expedition by researchers working with indigenous Huaorani people, who consider the anaconda sacred, in the Ecuadorian Amazon. The 10-day expedition, organized with Huaorani leader Penti Baihua, was accompanied by the filming of the Pole to Pole with Will Smith series for National Geographic, with actor Will Smith taking part in the expedition. The Huaorani collaborators were acknowledged as co-authors in the discovery paper. Reactions The description of Eunectes akayima has been favorably received by Böhme, who acknowledged the validity of the genetic divide between the two green anaconda species. However, the study's other result of merging the yellow anaconda species Eunectes deschauenseei and E. beniensis into E. notaeus was criticized as premature by Böhme, whose own study upheld E. beniensis as distinct two years earlier. Two reviews published in the month following the discovery criticized the original publication. The first critique, by Vásquez-Restrepo et al., only discussed the validity of the name Eunectes akayima on procedural grounds, pointing out violations of the International Code of Zoological Nomenclature. While also raising nomenclatural criticism, the second study by Dubois et al. additionally cast doubt on the reliability of the genetic markers used, and on the existence of a separate northern green anaconda species. Future studies Follow-up studies of E. akayima have been considered by the discovering team, aiming to understand the green anaconda's evolutionary history and the split between the two species. The differences in genital configurations of the two species are also an open topic of research. These often vary even among related species of snakes, as non-compatible genitals are often a factor preventing interbreeding between nearby populations and leading to further divergence between species. Nomenclature Type specimen While Carl Linnaeus originally described Boa murina in 1758, the precise location of the species' syntypes is not known, with the most likely locations of Suriname and French Guiana being contact zones between both populations. As the required genetic testing is not possible on samples of that age, it is uncertain which species the type specimens belonged to, making the choice of which species to refer to as Eunectes murinus ultimately arbitrary.
With the southern clade being the most widespread, the discoverers opted to keep the original name for the latter in the name of taxonomic stability. They designated as lectotype for E. murinus a specimen found in the Xingu River of Pará, Brazil in 2011 and now housed in the Museu Paraense Emílio Goeldi, and established the northern population as a new species, Eunectes akayima. Etymology The species' name akayima comes from the local Cariban languages, with akayi meaning "snake" and the suffix -ima describing largeness in a way that elevates the term to a separate category, giving a literal meaning of "The Great Snake". The word akayima and variants (okoyimo, okoimo) have been used by the local Carib people to refer to the northern green anaconda for centuries before its formal scientific description. The term also refers in Cariban languages to the rainbow, likely associated with a feathered serpent in Carib beliefs. The Arekuna, for instance, speak of a rainbow serpent in their creation narrative, believed to be the source of all birds' plumage. Among the Carib peoples, the Wai-wai also give mythological significance to the Okoimo-Yenna or "anaconda people", understood either as relatives or as opposite counterparts to human beings. The continued use of the name by indigenous people prior to the advent of the ICZN, along with the invalidity of previous proposed splits of E. murinus due to inconsistencies in differentiation, prompted the researchers to accept akayima as the senior synonym for the species. Nomenclatural availability and validity A review by Vásquez-Restrepo et al. argued against the validity of E. akayima, pointing out several purported violations of the ICZN in the Rivas et al. article. Notably, they concluded that the oldest available name for this taxon was Eunectes gigas (Latreille, 1801), historically considered the northern subspecies of E. murinus, but for which no evidence of distinctiveness had been provided prior to Rivas et al.'s molecular analyses. Separately, Dubois et al. criticized the publication for lacking a well-defined species concept, while questioning the validity of the clades recovered through mitochondrial DNA. Their review claimed that the name itself was not validly published under the rules of the Zoological Code (ICZN) as it violated Article 13 of the Code, and labeled the designation as a nomen nudum. This would make the proposed name unavailable, rather than only invalid as Vásquez-Restrepo et al. concluded. Both studies also considered the designation of the Eunectes murinus lectotype by Rivas et al. to be invalid, with Dubois et al. pointing out that the specimen (only collected in 2011) was not part of the syntypes designated by Linnaeus, and thus not valid as a lectotype. The latter study instead designated as lectotype a specimen collected by Albertus Seba in the Spanish West Indies, referenced by Linnaeus in his original description. Genetic history The divergence between Eunectes akayima and E. murinus has been estimated to have occurred between 5 and 20 million years ago, during the Miocene. Molecular clock analyses were calibrated using the earlier split between the geographically isolated Sanziniinae (found in Madagascar) and the rest of Boidae, along with fossil evidence.
Approaches vary depending on the scenario considered for the aforementioned split, with scenarios involving one land bridge, two, or none providing different hard minima for the Boidae divergence, and thus different calibrations for the split between the two species. Another method, only considering fossil evidence, led to the most recent estimate for the split between E. akayima and E. murinus, placing it between 5 and 11 million years ago. The split between the two green anaconda species has been claimed by the discovering team to parallel other such north-south splits in South American fauna, such as between the northern caiman lizard (Dracaena guianensis) and the Paraguay caiman lizard (Dracaena paraguayensis), or between the Orinoco mata mata (Chelus orinocensis) and the Amazon mata mata (Chelus fimbriata). This was attributed by the authors to the rise of the Vaupés Arch between the Andes and the Guyana Shield, which created a geographical barrier between populations in the Proto-Orinoco and Proto-Amazon river basins. Modern E. akayima populations can be found farther south than the Vaupés Arch, and there is currently no geographical barrier between the two species. A more recent divergence inside E. akayima has been dated to , splitting populations in Venezuela south of the Orinoco Delta. This is believed to coincide with the beginning of the Quaternary glaciation, as the sequestration of water by the ice caps led to wetlands receding, while forests grew and separated the Orinoco and Amazon river basins once again. Description Eunectes akayima has been described as one of the world's heaviest and longest snakes, with one specimen described in a newspaper article as being long. Unconfirmed reports from native Huaorani people speak of individuals reaching and . While the two are allegedly distinguished by their mitochondrial genomes, no morphological differences have yet been recognized between E. akayima and E. murinus, making them cryptic species. Morphological measurements such as scale count in several locations have been found to lie in the same ranges in both species. Ecology and behavior Northern green anacondas are ambush predators, and are among the apex predators in the swamps, rivers and other wetlands of northern South America, spending most of their time submerged in shallow waters. They hunt by waiting for prey to come nearby, with the buoyancy of the water helping them to rapidly leap out and take hold of the prey with their strong jaws. Like other boids, they are non-venomous constrictors, subduing their prey by wrapping themselves around and asphyxiating it, often crushing the prey before swallowing it whole. Prey of the northern green anaconda include large animals such as capybaras, caimans and deer. It is a keystone species in its ecosystem, whose presence impacts the habits and migration patterns of other species in the surrounding environment. Despite popular beliefs, there have been no confirmed records of E. akayima hunting or eating humans. Distribution and habitat Eunectes akayima is found in northern South America. The precise distribution area is not yet known, but based on samples taken, the species is known to occur in Venezuela, French Guiana, Suriname, Guyana, Ecuador, and the island of Trinidad. Its range is also believed to include parts of Colombia and Peru, as well as northern Brazil. Contact regions have been discovered, such as French Guiana and likely Suriname, where populations of both green anaconda species overlap with each other.
While specimens of both have been found in nearby localities, notably on opposite riverbanks from each other, the two species have not been found to interbreed. Conservation The possibility of E. akayima being a separate species implies a higher conservation risk than previously believed, despite the green anaconda being originally assessed as Least Concern by the IUCN due to its broad range. Lack of precise knowledge about population distribution makes the assessment of both species' true status difficult, while the different ecological habitats and threats faced by both species mean specific conservation programs must be established for each of them. Notably, the northern green anaconda's smaller range makes it much more vulnerable than its southern neighbor. As with other anaconda species, main threats include conflict with humans, as well as habitat degradation and fragmentation caused by agriculture, climate change and oil extraction in the region. Researchers noted the importance of monitoring population numbers for the species, as well as studying the effects of oil spill-related petrochemicals on the snake's reproductive biology. Their high sensitivity to changes makes them an indicator species for environmental health, highlighting the importance of assessing their populations. It is believed that 20 to 31 percent of the northern green anaconda's habitat has been lost to deforestation, with the number estimated to reach 40 percent by 2050. Notes References Apex predators Eunectes Fauna of the Amazon Reptiles described in 2024 Reptiles of the Caribbean Reptiles of Colombia Reptiles of Ecuador Reptiles of Guyana Reptiles of Trinidad and Tobago Reptiles of Venezuela Snakes of South America Controversial taxa
Northern green anaconda
[ "Biology" ]
3,199
[ "Biological hypotheses", "Controversial taxa" ]
76,136,329
https://en.wikipedia.org/wiki/Institutiones%20rei%20herbariae
Institutiones rei herbariae, originally published in French as Eléments de botanique, is a 1700 Latin-language botanical compendium. The book was the principal work of Joseph Pitton de Tournefort, a French botanist credited with establishing the modern concept of the genus. Contents As part of the book's introduction, Tournefort included what may be the first recorded history of botany, titled Isagoge in rem herbariam. In it, some of the most important botanical authors are noted, and brief biographies are given for each. In the 1694 edition Eléments de botanique, Tournefort argued against John Ray's conception of the genus, to which Ray responded twice in 1696. However, in Institutiones rei herbariae in 1700, criticisms towards Ray were removed and replaced with praise. The main portion of the book contains an exhaustive list of plant names, organized in a system of "classes", "sections", "genera", and "species". Furthermore, myriad images of plant leaves and flowers are included throughout the volume, engraved on copper plates. Publication While Institutiones rei herbariae was published in 1700 (and again in 1719), the book was originally written in French in 1694 as Eléments de botanique. Beginning in 1716, an English-language version of Institutiones was published monthly under the title Botanical institutions. Rather than being translated from the original French work, Botanical institutions was adapted from the Latin Institutiones rei herbariae. The edition included a direct translation of the original, additional commentary from English contributors, two alphabetical indices, and a brief biography of Tournefort. Legacy Tournefort's central work has been praised for its simplicity of organization, and for its role as a foundational document for later botanists. One biographer of Tournefort noted that the work was highly influenced by the societal thinking of the time. Eléments de botanique was a strictly utilitarian work: it was solely designed to facilitate plant identification in order that those plants might be put to use for their various purposes. As such, every name had to be clearly linked to one species only; there was as little ambiguity as possible. Many French, English, Italian, and German botanists continued to use Tournefort's system throughout the first half of the 18th century, much in the same way that later taxonomists would model their works on the system of Carl Linnaeus. The book also reached outside of botanical circles. For example, Charles De Geer (who would later become a prominent entomologist) purchased three volumes of the 1719 edition of Institutiones rei herbariae. De Geer used the book to identify plants in his own garden, and also made use of Tournefort's classification system in his publications. However, some 18th-century naturalists, following the principles of John Locke, argued against the nominalism of Tournefort. Where Tournefort argued that the "essence of the plant" could be tied to specific and generic names, botanists like Georges-Louis Leclerc and Jean-Baptiste Lamarck did not believe an organized science should be burdened by arbitrary nominal distinctions. Notes References Bibliography External links 1700 non-fiction books 1700 in science 1700 in France Botany books 17th-century books in Latin Biological classification Botanical nomenclature
Institutiones rei herbariae
[ "Biology" ]
680
[ "Botanical nomenclature", "Botanical terminology", "Biological nomenclature", "nan" ]
76,136,734
https://en.wikipedia.org/wiki/William%20H.%20Welch%20Medal
The William H. Welch Medal is an annual award given by the American Association for the History of Medicine (AAHM) to the author or co-authors of an outstanding book in medical history. According to the current rules, the award is not for editorial work. The book must be published during the five years preceding the award, which is presented at the AAHM's annual meeting. Any author who is awarded the William H. Welch Medal is ineligible for subsequent awards of the medal — this rule of ineligibility was instituted in 1973, after Erwin Ackerknecht received the medal in 1953 and in 1972. The medal is named in honor of William H. Welch, M.D., a pathologist, bacteriologist, and first dean of the Johns Hopkins School of Medicine. The inaugural medal was awarded in 1950 to Henry E. Sigerist. He grew up in Paris and Zurich and in 1932 moved to the United States as the successor to William H. Welch as director of the Johns Hopkins University Institute of the History of Medicine. Past recipients The medal has been awarded every year since 1971. Before 1971, there were some years in which the medal was not awarded. 2023 — Yan Liu, Healing with Poisons: Potent Medicines in Medieval China (University of Washington Press, 2021) 2022 — Jaipreet Virdi, Hearing Happiness: Deafness Cures in History (University of Chicago Press, 2020) 2021 — Benjamin Breen, The Age of Intoxication: Origins of the Global Drug Trade (University of Pennsylvania Press, 2019) 2020 — Nicole Barnes, Intimate Communities: Wartime Healthcare and the Birth of Modern China, 1937–1945 (University of California Press, 2018) 2019 — Pablo Gómez, The Experiential Caribbean: Creating Knowledge and Healing in the Early Modern Atlantic (University of North Carolina Press, 2017) 2018 — Cristian Berco, From Body to Community: Venereal Disease and Society in Baroque Spain (University of Toronto Press, 2016) 2017 — Johanna Schoen, Abortion After Roe: Abortion After Legalization (University of North Carolina Press, 2015) 2016 — Sean Hsiang-Lin Lei, Neither Donkey Nor Horse: Medicine in the Struggle Over China's Modernity (University of Chicago Press, 2014) 2015 — Leslie J. Reagan, Dangerous Pregnancies: Mothers, Disabilities, and Abortion in Modern America (University of California Press, 2010) 2014 — Julie Livingston, Improvising Medicine: An African Oncology Ward in an Emerging Cancer Epidemic (Duke University Press, 2012) 2013 — Michael Willrich, Pox: An American History (Penguin Press, 2011) 2012 — Gregg Mitman, Breathing Space: How Allergies Shape Our Lives and Landscapes (Yale University Press, 2007) 2011 — Allan M. Brandt, The Cigarette Century: The Rise, Fall, and Deadly Persistence of the Product That Defined America (Basic Books, 2007) 2010 — Warwick Anderson, The Collectors of Lost Souls: Turning Kuru Scientists into Whitemen (The Johns Hopkins University Press, 2008) 2009 — Katharine Park, Secrets of Women: Gender, Generation, and the Origins of Human Dissection (Zone Books, 2006) 2008 — Frank M. Snowden III, The Conquest of Malaria: Italy, 1900-1962 (New Haven: Yale University Press, 2006) 2007 — Ruth Rogaski, Hygienic Modernity: Meaning of Health and Disease in Treaty-Port China (Berkeley: University of California Press, 2004) 2006 — Barron H.
Lerner, Breast Cancer Wars: Hope, Fear, and the Pursuit of a Cure in Twentieth Century America (New York: Oxford University Press, 2001) 2005 — Keith Wailoo, Dying in the City of the Blues: Sickle Cell Anemia and the Politics of Race and Health (Chapel Hill: University of North Carolina Press, 2001) 2004 — Kenneth Ludmerer, Time to Heal: American Medical Education from the Turn of the Century to the Era of Managed Care (New York: Oxford University Press, 1999) 2003 — Roy Porter, The Greatest Benefit to Mankind: A Medical History of Humanity from Antiquity to the Present (London: HarperCollins, 1997) 2002 — Nancy Tomes, The Gospel of Germs: Men, Women and the Microbe in American Life (Cambridge, MA: Harvard University Press, 1998) 2001 — Shigehisa Kuriyama, The Expressiveness of the Body and the Divergence of Greek and Chinese Medicine (NY: Zone Books, 1999) 2000 — W. Bruce Fye, American Cardiology: The History of a Specialty and its College (Johns Hopkins University Press, 1996) 1999 — Jack D. Pressman, Last Resort: Psychosurgery and the Limits of Medicine (Cambridge University Press, 1998) 1998 — Mary Lindemann, Health and Healing in Eighteenth-Century Germany (Johns Hopkins University Press, 1995) 1997 — Harold J. Cook, Trials of an Ordinary Doctor: Joannes Groenevelt in Seventeenth-Century London (Johns Hopkins University Press, 1994) 1996 — Gerald L. Geison, The Private Science of Louis Pasteur (Princeton University Press, 1995) 1995 — Laurel Thatcher Ulrich, A Midwife's Tale (NY: Knopf, distributed by Random House, 1990) 1994 — Michael R. McVaugh, Medicine Before the Plague: Practitioners and Their Patients in the Crown of Aragon, 1285–1345 (Cambridge University Press, 1993) 1993 — Heinrich von Staden, Herophilus: The Art of Medicine in Ancient Alexandria (Cambridge University Press, 1989) 1992 — Philip Curtin, Death by Migration (Cambridge University Press, 1989) 1991 — John Harley Warner, The Therapeutic Perspective: Medical Knowledge and Identity in America, 1820–1855 (Harvard University Press, 1986) 1990 — Rosemary Stevens, In Sickness and in Wealth: American Hospitals in the Twentieth Century (NY: Basic Books, 1989) 1989 — Richard J. Evans, Death in Hamburg: Society and Politics in the Cholera Years, 1830–1910 (Oxford University Press, 1987) 1988 — Guenter B. Risse, Hospital Life in Enlightenment Scotland: Care and Teaching at the Royal Infirmary of Edinburgh (Oxford University Press, 1987) 1987 — James H. Cassedy (1919–2007), American Medicine and Statistical Thinking, 1800–1860 (Harvard University Press, 1984); and Medicine and American Growth, 1800–1860 (University of Wisconsin Press, 1986) 1986 — Gerald N. Grob, The State and the Mentally Ill: A History of the Worcester State Hospital (University of North Carolina Press, 1966); and Mental Institutions in America (NY: Free Press, 1972); and Mental Illness and American Society, 1897–1940 (Princeton University Press, 1983) 1985 — Nancy G.
Siraisi, Taddeo Alderotti and His Pupils: Two Generations of Italian Medical Learning (Princeton University Press, 1981) 1984 — Michael Bliss, The Discovery of Insulin (University of Chicago Press, 1982) 1983 — Robert Gregg Frank, Harvey and the Oxford Physiologists: Scientific Ideas and Social Interaction (University of California Press, 1980) 1982 — James Harvey Young, "for scholarly contributions to the history of medicine" 1981 — Erna Lesky, "for significant contributions to the history of medicine" 1980 — John Ballard Blake (1922–2006), "for his valuable scholarly contributions to the history of medicine" 1979 — Charles Webster, The Great Instauration: Medicine and Reform, 1626–1660 (NY: Holmes and Meier Publishers, 1976, c1975) 1978 — Frederic L. Holmes, Claude Bernard and Animal Chemistry: The Emergence of a Scientist (Harvard University Press, 1974) 1977 — Lester S. King, "for his scholarly contributions to the history of medicine" 1976 — Lelland J. Rather, Addison and the White Corpuscles (University of California Press, 1972); and Mind and Body in Eighteenth-Century Medicine (University of California Press, 1965); and for "his important continuing studies in the history of medicine" 1975 — George W. Corner, "for invaluable contributions" 1974 — Walter Pagel, "for extensive and most valuable publications" 1973 — Margaret Tallmadge May, Galen on the Usefulness of the Parts of the Body (Cornell University Press, 1968) 1972 — Erwin H. Ackerknecht, Medicine at the Paris Hospital, 1794–1848 (Johns Hopkins University Press, 1962) 1971 — Charles Donald O'Malley (1907–1970), (posthumously) "for scholarly contributions" 1970 — No award 1969 — Charles E. Rosenberg, The Cholera Years (University of Chicago Press, 1962) 1968 — Saul Benison, Tom Rivers: Reflections on a Life in Medicine and Science (M.I.T. Press, 1967) 1967 — Howard B. Adelman, Marcello Malpighi and the Evolution of Embryology (Cornell University Press, 1966) 1966 — Whitfield J. Bell Jr., John Morgan: Continental Doctor (University of Pennsylvania Press, 1965) 1965 — No award 1964 — No award 1963 — Saul Jarcho, "for scholarly contributions" 1962 — Genevieve Miller, The Adoption of Inoculation for Smallpox in England and France (University of Kentucky Press, 1960) 1961 — George Rosen, "for contributions to the social history of medicine" 1960 — Richard H. Shryock, "for scholarly contributions" 1959 — No award 1958 — Charles F. Mullett (1901–1994), The Bubonic Plague and England: An Essay in the History of Preventive Medicine (University of Kentucky Press, 1956) 1957 — No award 1956 — Lyman Henry Butterfield (editor), Letters of Benjamin Rush (Princeton University Press, 1951) 1955 — No award 1954 — Jerome Pierce Webster and Martha Teach Gnudi, The Life and Times of Gaspare Tagliacozzi (New York: Reichner, 1950) 1953 — Erwin H. Ackerknecht, "for scholarly contributions" 1952 — Owsei Temkin, "for scholarly contributions" 1951 — No award 1950 — Henry E. Sigerist, "for scholarly contribution" References History of science awards Awards established in 1950
William H. Welch Medal
[ "Technology" ]
2,092
[ "Science and technology awards", "History of science awards" ]
76,139,996
https://en.wikipedia.org/wiki/QSO%20J0529-4351
QSO J0529−4351 (SMSS J052915.80–435152.0) is a quasar, 12 billion light-years away in the Pictor constellation, notable for being the most luminous object ever observed at roughly 500 trillion times the luminosity of the Sun. The black hole at its centre has a mass of approximately 17 billion solar masses, and accretes around one solar mass per day. In a Gaia DR3 data set published on 13 June 2022, QSO J0529−4351 was assigned a 99.98% probability of being a star in the Milky Way via an automated analysis. However, the quasar was identified as one using the Very Large Telescope of the European Southern Observatory; the discovery was announced on 19 February 2024. Detection and identification The object itself was detected in ESO images dating back to 1980, but its identification as a quasar occurred only several decades later. An automated analysis of 2022 data from the European Space Agency's Gaia satellite dismissed J0529−4351 as too bright to be a quasar, suggesting instead that it was a 16th-magnitude star with a 99.98% probability. In 2023, using observations from the 2.3-meter ANU telescope at Siding Spring Observatory in Australia, it was identified as a distant quasar. Establishing that it was the most luminous quasar ever observed, however, required a larger instrument—the X-shooter spectrograph on the European Southern Observatory's VLT in Chile's Atacama Desert. Additional observations are still needed to definitively exclude gravitational lensing as a possible explanation for such a high apparent brightness of the quasar. Characteristics The redshift of J0529−4351 is 3.962. The object itself is classified as a radio-quiet quasar. Fitting accretion models to the spectra yields an accretion rate of matter onto the black hole of 280 to 490 solar masses per year for an accretion disk around the black hole observed at an angle of zero to 60 degrees, with accretion occurring near the Eddington limit. The mass of the black hole is estimated at 17 billion solar masses, and the total bolometric luminosity is 10^48.37 ergs per second. See also List of quasars References Pictor
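As a quick sanity check on the figures quoted above, the bolometric luminosity of 10^48.37 ergs per second can be converted to solar luminosities. The short Python sketch below, using the IAU nominal solar luminosity, gives roughly 6 × 10^14, the same order of magnitude as the quoted ~500 trillion Suns.

```python
# Convert the quasar's bolometric luminosity to solar luminosities.
L_quasar = 10 ** 48.37   # erg/s, bolometric luminosity quoted above
L_sun = 3.828e33         # erg/s, IAU nominal solar luminosity

ratio = L_quasar / L_sun
print(f"{ratio:.2e} solar luminosities")  # ~6e14, i.e. hundreds of trillions of Suns
```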
QSO J0529-4351
[ "Astronomy" ]
510
[ "Pictor", "Constellations" ]
76,140,624
https://en.wikipedia.org/wiki/Calocera%20pallidospathulata
Calocera pallidospathulata is a species of fungus in the family Dacrymycetaceae. In the UK, it has the recommended English name of pale stagshorn. Basidiocarps (fruit bodies) are gelatinous, pale yellow, and spathulate (widening towards the apex). It typically grows on logs and dead wood of both broadleaved trees and conifers. It is mainly found in Great Britain, but has also been recorded from continental Europe. Taxonomy The species was originally described from Yorkshire, England in 1974 by British mycologist Derek Reid. Description Calocera pallidospathulata forms pale yellowish, gelatinous fruit bodies up to 1 cm tall, comprising a whitish or pallid stalk and a pale yellowish, fertile head that is typically thin, flattened, and spathulate (widening towards the apex). The fruit bodies typically grow gregariously, but do not coalesce. Microscopic characters Hyphae lack clamp connections. The basidia are two-spored and typical of the Dacrymycetaceae. The spores are weakly allantoid (sausage-shaped), 10 to 13 by 3.5 to 4 μm, thin-walled, becoming tardily 1 to 3-septate. Habitat and distribution Calocera pallidospathulata is a wood-rotting species, typically found on logs and dead wood of both broadleaved trees and conifers. It was originally described from England and is locally common in Great Britain, but has also been recorded from Belgium, the Netherlands and Norway. Since its initial discovery in Yorkshire, Calocera pallidospathulata has spread rapidly through much of England and into Wales and Scotland. Since the species is conspicuous and would be unlikely to have gone unnoticed had it been present earlier, it seems probable that it is an invasive introduction from another continent, possibly North America. References Dacrymycetes Fungi of Europe Fungi described in 1974 Taxa named by Derek Reid Fungus species
Calocera pallidospathulata
[ "Biology" ]
400
[ "Fungi", "Fungus species" ]
76,140,824
https://en.wikipedia.org/wiki/Criminal%20Justice%20%28Offences%20Relating%20to%20Information%20Systems%29%20Act%202017
The Criminal Justice (Offences Relating to Information Systems) Act 2017 is an act of the Oireachtas dealing with cybercrime. Previous legislation Previous laws dealing with computer crime in Ireland were the Criminal Damage Act 1991 and the Criminal Justice (Theft and Fraud Offences) Act 2001. Neither of these was specifically intended to deal with computer crime. Then Tánaiste and Minister for Justice Frances Fitzgerald brought forward the legislation in May 2017. Offences The act introduces new offences relating to: Unauthorised access to data systems Interference with information systems or data on said systems Interception of transmission of data to or from an information system The use of tools to facilitate such offences The Act amends the Criminal Damage Act 1991, the Bail Act 1997 and the Criminal Justice Act 2011. See also General Data Protection Regulation NIS Directive References External links Computer Crime in Ireland: a Critical Assessment of the Substantive Law - T. J. McIntyre - article originally appeared at 15(1) Irish Criminal Law Journal 13 Criminal Justice (Offences Relating to Information Systems) Act 2017 Bail Act, 1997 Criminal Damage Act, 1991 Criminal Justice Act 2011 Acts of the Oireachtas of the 2010s 2017 in Irish law Computer law Irish criminal law
Criminal Justice (Offences Relating to Information Systems) Act 2017
[ "Technology" ]
239
[ "Computer law", "Computing and society" ]
76,141,489
https://en.wikipedia.org/wiki/Osmosensing
In biology, osmosensing is a biological mechanism for detecting changes in environmental salinity. An osmosensor is a biological molecule involved in the process. In cell biology, osmosensing is the detection of changes in the activity of water outside the cell (direct osmosensing) or the structure and composition of the cell itself (indirect osmosensing). References Aquatic ecology
Osmosensing
[ "Biology" ]
84
[ "Aquatic ecology", "Ecosystems" ]
76,142,154
https://en.wikipedia.org/wiki/Vibrational%20spectroscopic%20map
Vibrational spectroscopic maps are a series of ab initio, semiempirical, or empirical models tailored to specific IR probes to describe vibrational solvatochromic effects on molecular spectra quantitatively. Coherent multidimensional spectroscopy, a nonlinear spectroscopy utilizing multiple time-delayed pulses, is a technique that enables the measurement of solvation-induced frequency shifts and the time-correlations of the fluctuating frequencies. Researchers employ various organic and biochemical methods to introduce small vibrational probes into a variety of molecular systems, including chemicals, proteins, and nucleic acids. These probes, labeled with infrared (IR) markers, are subjected to spectroscopic investigations to obtain quantitative insights into various features of chemical and biological systems. In general, interpreting the experimental multidimensional spectra to get information on the underlying molecular processes requires theoretical modeling. The vibrational frequency shifts observed due to complex intermolecular interactions of small IR probes with surroundings in the condensed phase are minute, often representing fractions of thermal energy. The numerical accuracy associated with advanced quantum mechanical calculations is not sufficient to model these shifts accurately. Consequently, researchers commonly resort to mapping procedures, which correlate certain physical variables calculated for the probe molecule with spectroscopic properties such as vibrational frequencies. These mapping procedures are referred to as vibrational spectroscopic maps within the field. Typically, the physical variables employed in vibrational frequency maps include electric potentials, electric fields, distributed higher multipole moments, and other relevant factors evaluated at specific points surrounding the molecule. As an example, the vibrational frequency associated with a localized vibrational mode is correlated with the electrostatic potential and electric field values at a designated set of points known as distributed sites within the infrared (IR) chromophore. Theoretical foundation The vibrational frequency shift, denoted as δω_j, for the jth normal mode of a given probe molecule is defined as the difference between the actual vibrational frequency ω_j of the mode in a solution and the frequency ω_j^0 in the gas phase. From an effective Hamiltonian for the solute in the presence of the molecular environment, one can derive an approximate expression for the effective vibrational force constant (or Hessian) matrix, in which the subscript 0 indicates that a quantity is evaluated at the gas-phase geometry. In the limiting case that the vibrational couplings of the normal mode of interest with other vibrational modes are relatively weak, the vibrational frequency shift in solution from the gas-phase frequency can be written, under such a weak coupling approximation (WCA), in terms of two contributions: the electric anharmonicity (EA) operator and the mechanical anharmonicity (MA) operator. By substituting a relevant expression for the intermolecular interaction potential into the WCA expression for δω_j, one can derive the vibrational frequency shift based on the specific theoretical potential model under consideration.
Semiempirical approaches While several rigorous theories for vibrational solvatochromism based on physical approximations have been proposed, these sophisticated models often necessitate extensive quantum chemistry calculations performed at elevated levels of precision with a large basis set. Current electronic structure simulation methods fall short of providing vibrational frequencies directly comparable to experimentally measured frequency shifts, especially when the shifts are on the order of a few wavenumbers. To accurately determine the coefficients in vibrational solvatochromism expressions, researchers frequently turn to multivariate least-squares fitting. This technique involves fitting a sufficiently extensive set of training data obtained from quantum chemistry calculations of vibrational frequency shifts for numerous clusters containing a solute and multiple solvent molecules. An early approach aimed to express the solvation-induced vibrational frequency shift in terms of the solvent electric potentials evaluated at distributed atomic sites on the target solute molecule. This method involves calculating the solvent electric potentials at these specific solute sites using the atomic partial charges of the surrounding solvent molecules. The vibrational frequency shift of the solute molecule, denoted as δω_j, for the jth vibrational mode can be represented as δω_j = ω_j − ω_j^0 = Σ_{k=1}^{N} l_k φ_k. Here, ω_j represents the vibrational frequency of the jth normal mode in solution, ω_j^0 signifies the vibrational frequency in the gas phase, N denotes the number of distributed sites on the solute molecule, φ_k denotes the solvent electric potential at the kth site of the solute molecule, and l_k are the parameters to be determined through least-squares fitting to a training database comprising clusters containing a solute and multiple solvent molecules. This method provides a means to quantify the impact of solvation on the vibrational frequencies of the solute molecule. Another widely used model for characterizing vibrational solvatochromic frequency shifts involves expressing the frequency shift in terms of solvent electric fields evaluated at distributed sites on the target solute molecule. Developments Vibrational spectroscopic maps have been developed for a diverse range of vibrational modes, including various molecular systems and functional groups. Some of the notable vibrational modes include: Amide I mode of NMA (N-Methylacetamide) Amide I mode of peptide molecules Amide I vibration of isotope-labeled proteins Amide II vibration Nitrile (CN) stretch Thiocyanato (SCN) stretch Selenothiocyanato (SeCN) stretch Azido (N3) stretch Carbonmonoxy (CO) stretch Ester carbonyl (O-C=O) stretch Carbonate carbonyl (C=O) stretch Water OH and OD stretch C-D stretch S=O stretch Phosphate (PO2) stretch Nucleic acid base modes OH and OD stretch mode in alcohols Water bending mode References External links Frequency map repository Spectroscopy
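To make the distributed-site potential map above concrete, the following Python sketch evaluates δω = Σ_k l_k φ_k for a toy configuration. The site positions, solvent point charges, and map coefficients l_k are invented placeholders chosen for illustration, not values from any published map.

```python
import numpy as np

def solvent_potentials(sites, charges, positions):
    """Electric potential at each solute map site from solvent point charges.

    Uses the bare Coulomb sum phi_k = sum_i q_i / |r_i - r_k| in atomic units.
    """
    phi = np.zeros(len(sites))
    for k, r_site in enumerate(sites):
        d = np.linalg.norm(positions - r_site, axis=1)
        phi[k] = np.sum(charges / d)
    return phi

sites = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]])  # distributed solute sites
charges = np.array([-0.8, 0.4, 0.4])                  # one solvent molecule's partial charges
positions = np.array([[3.0, 0.0, 0.0],
                      [3.6, 0.8, 0.0],
                      [3.6, -0.8, 0.0]])

l = np.array([450.0, -300.0])    # hypothetical fitted map coefficients (cm^-1 per a.u.)
phi = solvent_potentials(sites, charges, positions)
delta_omega = float(l @ phi)     # solvation-induced frequency shift, cm^-1
print(delta_omega)
```

In an actual application, φ_k would be accumulated over all solvent molecules in each MD snapshot, and the coefficients l_k would come from least-squares fitting against quantum chemistry training data as described above.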
Vibrational spectroscopic map
[ "Physics", "Chemistry" ]
1,157
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
76,142,629
https://en.wikipedia.org/wiki/ESASky
ESASky is a web-based tool developed by the European Space Agency (ESA) to provide access to astronomical data. It aims to offer a user-friendly interface for exploring various datasets, including images, catalogues, and spectra, collected from ESA missions like Planck, Herschel, Gaia, HST, XMM-Newton, and INTEGRAL, among others. Additionally, ESASky incorporates data from other projects such as NASA's Chandra and JAXA's Suzaku, enhancing its scope and utility. Details ESASky can generate all-sky maps using the Hierarchical Progressive Surveys (HiPS) technology. These maps are constructed from actual observations gathered by different missions and allow users to visualize and compare imaging observations across multiple wavelengths. This capability enables astronomers to gain insights into celestial objects and phenomena spanning a wide range of wavelengths. ESASky is also designed to facilitate data exploration by allowing users to overlay footprints of different datasets, providing a comprehensive view of multiwavelength observations for specific celestial sources or regions of interest. This feature enables researchers to analyze data from various missions and projects simultaneously, enhancing their ability to study astrophysical phenomena comprehensively. ESASky was designed to simplify data discovery and retrieval by offering efficient search functionalities and allowing users to filter and preview data before downloading. It is intended to streamline the process of accessing astronomical data, aiming to reduce the need to navigate through individual mission archives. References Free astronomy software Astronomy websites European Space Agency
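For programmatic access of the kind described above, the astroquery Python package provides an ESASky module. The sketch below assumes its documented list_maps and query_object_maps methods and a working network connection; exact method names and return types may vary between astroquery versions.

```python
from astroquery.esasky import ESASky  # requires the astroquery package

# Missions for which ESASky serves HiPS imaging maps.
print(ESASky.list_maps())

# Observation footprints from available missions around a named target.
maps = ESASky.query_object_maps(position="M51")
print(maps)
```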
ESASky
[ "Astronomy" ]
302
[ "Works about astronomy", "Astronomy stubs", "Astronomy websites" ]
76,142,905
https://en.wikipedia.org/wiki/Plastivore
A plastivore is an organism capable of degrading and metabolising plastic. While plastic is normally thought of as non-biodegradable, a variety of bacteria, fungi and insects have been found to degrade it. Definition Plastivores are "organisms that use plastic as their primary carbon and energy source". This does not necessarily mean being able to fulfill all biological needs from plastic alone. For example, mealworms fed only on plastic show very little weight gain, unlike mealworms fed on a normal diet of bran. This is due to plastic lacking the water and nutrients needed for growth. Plastic-fed mealworms can still derive energy from their diet, so they do not lose weight like starved mealworms do. Mechanisms For both bacterial and fungal plastivores, the first step is adhesion of spores to the plastic surface via hydrophobic interactions. Bacterial plastivores, when cultured on plastic, form biofilms on the surface as the second step. Using enzymes, they increase the roughness of the surface and oxidize the plastic. Oxidation forms oxygenated groups such as carbonyl groups, used by the bacteria for carbon and energy, and also converts the plastic into smaller molecules (depolymerization). For fungal plastivores, the second step is growth of mycelia (root-like structures of fungi, composed of thread-like hyphae) on the surface, while the third step is secretion of enzymes. Both the enzymes and the mechanical force produced by fungal hyphae degrade the plastic. The same basic steps of oxidation and depolymerization also occur in insect plastivores. For insects, the bacteria in their guts play a role in digesting plastic. In mealworms, inhibiting these bacteria by giving antibiotics removes the ability to digest polystyrene, but low-density polyethylene can still be digested to an extent. The insects themselves also play a role: the saliva of waxworms contains enzymes that oxidize and depolymerize polyethylene. Examples The following is not an exhaustive list. Plastivorous activity seems to be quite common in nature, with a 2011 sampling of endophytic fungi in the Amazon finding that almost half of the fungi showed some activity. Bacteria The plastic pollution in the oceans supports many species of bacteria. The alkaliphilic bacteria Bacillus pseudofirmus and Salipaludibacillus agaradhaerens can degrade low-density polyethylene (LDPE). These bacteria can degrade LDPE on their own but work more quickly as a consortium of both species, and degradation is faster still when iron oxide nanoparticles are added. Exiguobacterium sibiricum and E. undae, isolated from a wetland in India, can degrade polystyrene. Similarly, Exiguobacterium sp. strain YT2 has been isolated from the gut of mealworms, which are themselves plastivores, and can degrade polystyrene on its own, though less quickly than mealworms. Acinetobacter sp. AnTc-1, isolated from the gut of plastivorous red flour beetle larvae, can likewise degrade polystyrene on its own. Ideonella sakaiensis and Comamonas testosteroni can degrade polyethylene terephthalate. Fungi Aspergillus tubingensis and several isolates of Pestalotiopsis are capable of degrading polyurethane. Polycarbonate, the main material in CDs, is attacked by a range of fungi: Bjerkandera adusta (initially misidentified as Geotrichum sp.), Chaetomium globosum, Trichoderma atroviride, Coniochaeta sp., Cladosporium cladosporioides and Penicillium chrysogenum. Insects Mealworms (Tenebrio molitor), a species commonly used as animal feed, can consume polyethylene and polystyrene.
Its congener T. obscurus can also consume polystyrene, as can the superworm (Zophobas morio) and the red flour beetle (Tribolium castaneum), which belong to different genera in the same family. Plastivory also occurs in Lepidoptera, with waxworms (Galleria mellonella) able to consume polyethylene. Even homogenising waxworms and applying the homogenate to polyethylene can cause degradation. This species is the fastest known organism to chemically modify polyethylene, with oxidation occurring within one hour of exposure. See also Plastisphere References Eating behaviors
Plastivore
[ "Biology" ]
994
[ "Behavior", "Plastivores", "Biological interactions", "Organisms by adaptation", "Eating behaviors" ]
76,142,988
https://en.wikipedia.org/wiki/Category%20of%20matrices
In mathematics, the category of matrices, often denoted Mat, is the category whose objects are natural numbers and whose morphisms are matrices, with composition given by matrix multiplication. Construction Let A be an n × m real matrix, i.e. a matrix with n rows and m columns. Given a p × q matrix B, we can form the matrix multiplication BA or B ∘ A only when q = n, and in that case the resulting matrix is of dimension p × m. In other words, we can only multiply matrices B and A when the number of rows of A matches the number of columns of B. One can keep track of this fact by declaring an n × m matrix to be of type m → n, and similarly a p × q matrix to be of type q → p. This way, when the two arrows have matching source and target, q = n, and they can hence be composed to an arrow of type m → p. This is precisely captured by the mathematical concept of a category, where the arrows, or morphisms, are the matrices, and they can be composed only when their domain and codomain are compatible (similar to what happens with functions). In detail, the category Mat is constructed as follows: It has natural numbers as objects; Given numbers n and m, a morphism n → m is an m × n matrix, i.e. a matrix with m rows and n columns; The identity morphism at each object n is given by the n × n identity matrix; The composition of morphisms A : n → m and B : m → p (i.e. of the m × n matrix A and the p × m matrix B) is given by the matrix multiplication BA. More generally, one can define the category Mat_K of matrices over a fixed field K, such as the one of complex numbers. Properties The category of matrices Mat is equivalent to the category of finite-dimensional real vector spaces and linear maps. This is witnessed by the functor mapping the number n to the vector space R^n, and an m × n matrix A to the corresponding linear map R^n → R^m. A possible interpretation of this fact is that, as mathematical theories, abstract finite-dimensional vector spaces and concrete matrices have the same expressive power. More generally, the category of matrices Mat_K is equivalent to the category of finite-dimensional vector spaces over the field K and K-linear maps. A linear row operation on an m × n matrix A can be equivalently obtained by applying the same operation to the m × m identity matrix, and then multiplying the resulting matrix with A. In particular, elementary row operations correspond to elementary matrices. This fact can be seen as an instance of the Yoneda lemma for the category of matrices. The transpose operation makes the category of matrices a dagger category. The same can be said about the conjugate transpose in the case of complex numbers. Particular subcategories For every fixed n, the morphisms n → n of Mat are the n × n matrices, and form a monoid, canonically isomorphic to the monoid of linear endomorphisms of R^n. In particular, the invertible n × n matrices form a group. The same can be said for a generic field K. A stochastic matrix is a real matrix of nonnegative entries, such that the sum of each column is one. Stochastic matrices include the identity matrix and are closed under composition, and so they form a subcategory of Mat. Citations References External links The Yoneda lemma in the category of matrices, tutorial video. Categories in category theory Linear algebra
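The typing discipline in the construction above can be mirrored in a few lines of code. In this sketch (an illustration, not a standard library), a morphism n → m is stored as an m × n NumPy array, composition checks that source and target match, and the identity arrows are identity matrices; the unit laws are checked at the end.

```python
import numpy as np

def compose(B, A):
    """Compose arrows A : n -> m and B : m -> p into an arrow n -> p.

    An arrow n -> m is represented by an m x n matrix, so the source of
    an arrow is its number of columns and the target its number of rows.
    """
    if B.shape[1] != A.shape[0]:
        raise ValueError("source of B must equal target of A")
    return B @ A  # composition is matrix multiplication

identity = np.eye  # identity arrow at object n: the n x n identity matrix

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])   # 2 x 3 matrix: an arrow 3 -> 2
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])        # 3 x 2 matrix: an arrow 2 -> 3

C = compose(B, A)                 # an arrow 3 -> 3
assert C.shape == (3, 3)
assert np.allclose(compose(A, identity(3)), A)  # unit law: A after id_3 is A
assert np.allclose(compose(identity(2), A), A)  # unit law: id_2 after A is A
```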
Category of matrices
[ "Mathematics" ]
635
[ "Mathematical structures", "Category theory", "Categories in category theory", "Linear algebra", "Algebra" ]
76,145,496
https://en.wikipedia.org/wiki/Cresomycin
Cresomycin is an experimental antibiotic. It binds to the bacterial ribosome in both Gram-negative and Gram-positive bacteria, and it has been found to be effective against multi-drug-resistant strains of Staphylococcus aureus, Escherichia coli, and Pseudomonas aeruginosa. It belongs to the bridged macrobicyclic oxepanoprolinamide antibiotics, which have similarities with lincosamide antibiotics. Cresomycin has been specially designed to bind in a preorganised way with the bacterial ribosome, resulting in improved binding. This allows cresomycin to overcome the ribosomal methylase genes that are responsible for bacterial resistance against other antibiotics that bind to the peptidyl transferase center of the ribosome, such as lincosamides. Cresomycin was synthesized based on iboxamycin, another oxepanoprolinamide antibiotic, with the addition of a 10-membered ring. Cresomycin has been found to be effective against bacteria that are resistant to multiple antibiotics, including lincosamides, both in vitro and in vivo, and is more potent than iboxamycin. The antibiotic was found in time-kill studies to be bacteriostatic against S. aureus. In vitro safety experiments with human cells indicated low cytotoxicity. Cresomycin was developed by a research group led by Andrew G. Myers at the Harvard University Department of Chemistry and Chemical Biology and the University of Illinois at Chicago, and received funding from CARB-X for further development. References Antibiotics Pyrrolidines Carboxamides Thioethers Macrocycles Oxepanes Experimental drugs Isobutyl compounds
Cresomycin
[ "Chemistry", "Biology" ]
366
[ "Biotechnology products", "Organic compounds", "Antibiotics", "Macrocycles", "Biocides" ]
76,145,702
https://en.wikipedia.org/wiki/Single-cell%20multi-omics%20integration
Single-cell multi-omics integration describes a suite of computational methods used to harmonize information from multiple "omes" to jointly analyze biological phenomena. This approach allows researchers to discover intricate relationships between different chemical-physical modalities by drawing associations across various molecular layers simultaneously. Multi-omics integration approaches can be categorized into three broad categories: early integration, intermediate integration, and late integration methods. Multi-omics integration can enhance experimental robustness by providing independent sources of evidence to address hypotheses, leveraging modality-specific strengths to compensate for another's weaknesses through imputation, and offering cell-type clustering and visualizations that are more aligned with reality. Background The emergence of single-cell sequencing technologies has revolutionized our understanding of cellular heterogeneity, uncovering a nuanced landscape of cell types and their associations with biological processes. Single-cell omics technologies have extended beyond the transcriptome to profile diverse physical-chemical properties at single-cell resolution, including whole genomes/exomes, DNA methylation, chromatin accessibility, histone modifications, epitranscriptome (e.g., mRNAs, microRNAs, tRNAs, lncRNAs), proteome, phosphoproteome, metabolome, and more. In fact, there is an expanding repository of publicly available single-cell datasets, exemplified by growing databases such as the Human Cell Atlas Project (HCA), the Cancer Genome Atlas (TCGA), and the ENCODE project. With the increasing diversity in both available datasets and data types, multi-omics data integration and multimodal data analysis represent pivotal trajectories for the future of systems biology. Single-cell multi-omics integration can reveal underappreciated relationships between chemical-physical modalities, broaden our definition of cell states beyond single-modality feature profiles, and provide independent evidence during analysis to support testing of biological hypotheses. However, the high dimensionality (features > observations), the high degree of stochastic technical and biological variability, and the sparsity of single-cell data (low molecule recovery efficiency) make computational integration a challenging problem. Furthermore, different solutions for multi-omics integration are available depending on factors such as whether the data is matched (simultaneous measurements derived from the same cell) or unmatched (measurements derived from different cells), whether cell-type annotations are available, or whether modality feature conversion is available, with different implementations tailored to suit the specific use case. As such, there are multiple approaches to single-cell data integration, each with a distinct use case, and each with its own set of advantages and disadvantages. Approaches to multi-omics integration Early integration Early integration is a method that concatenates (by binding rows and columns) two or more omics datasets into a single data matrix. Some advantages of early integration are that the approach is simple, highly interpretable, and capable of capturing relationships between features from different modalities. Early integration is primarily employed to merge datasets of the same datatype (e.g., integrating two distinct scRNA-seq datasets). This is because integrating datasets from different modalities may lead to a combined feature set with variable feature value ranges.
Intermediate integration Intermediate integration describes a class of approaches which aim to analyze multiple omic datasets simultaneously without the need for prior data transformation (as this occurs during data integration). Several examples of intermediate integration include similarity-based integration, joint dimension reduction, and statistical modeling. Similarity-based integration Similarity-based integration aims to identify patterns across multi-omic datasets through the use of spectral clustering (e.g., Spectrum and PC-MSC), which clusters cells based on similarity matrices derived from a multi-omic dataset, or graph fusion algorithms (e.g., Seurat v4), which construct graphs from individual omics layers and merge them into a single graph. Joint dimension reduction Joint dimension reduction aims to reduce the complexity of multi-omics data by projecting observations onto a lower-dimensional latent space such that the different omics layers can be analyzed together. Canonical correlation analysis (CCA), non-negative matrix factorization (NMF), and manifold alignment are popular approaches for joint dimensionality reduction. Tools that use CCA or its derivative, sparse CCA, such as Seurat v3 and bindSC, identify linear relationships between datasets by finding linear combinations of variables that maximize feature correlation. Tools which use NMF (e.g., LIGER and coupledNMF) extract low-dimensional representations of high-dimensional data such that both shared and dataset-specific factors across the multiple omics datasets can be identified. Manifold alignment (e.g., MATCHER and MAGAN) refers to an approach where low-dimensional representations of various multi-omic datasets are computed individually and then aligned within a common latent space. Statistical modeling Various statistical approaches, including the probabilistic Bayesian modeling framework (which allows for the incorporation of prior knowledge and uncertainties into the analysis), can be used to integrate multi-omic datasets. For instance, BREM-SC employs a Bayesian clustering framework to jointly cluster multi-omic datasets, while other tools like clonealign utilize Bayesian methods to integrate gene expression and copy number profiles for studying cancer clones.
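The shared-factor idea behind NMF-based joint dimension reduction can be sketched as follows. This is a deliberately simplified illustration (plain joint NMF on row-normalized, column-bound matrices), not the actual LIGER or coupledNMF algorithms, which additionally model dataset-specific factors; all data and parameters are hypothetical:

```python
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical matched, non-negative matrices for the same 500 cells.
rng = np.random.default_rng(1)
rna = rng.poisson(2.0, size=(500, 2000)).astype(float)
protein = rng.poisson(5.0, size=(500, 100)).astype(float)

# Row-normalize each modality, bind columns, and factorize X ~ W @ H:
# W holds one shared low-dimensional representation per cell, while H
# holds the loadings of every feature (from both modalities) on each factor.
X = np.hstack([rna / rna.sum(axis=1, keepdims=True),
               protein / protein.sum(axis=1, keepdims=True)])
model = NMF(n_components=15, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)   # cells x factors (shared latent space)
H = model.components_        # factors x features (modality loadings)
```

Rows of W can then be clustered or embedded in place of either single-modality matrix.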
Late integration Late integration aims to preprocess and model omics modalities separately, and then combine the resulting models at the end. The advantage of late integration is that tools tailored to each omics modality can be applied per modality. While late integration approaches are commonly used in the context of bulk multi-omics studies (e.g., Cluster-of-clusters analysis and Kernel Learning Integrative Clustering), late integration approaches to single-cell integration are still an emerging field. For example, ensemble learning techniques such as ensemble clustering (e.g., SAME-clustering, Sc-GPE, and EC-PGMGR) have demonstrated potential in aggregating clustering results from different sources. These methods combine the clustering results from different omics datasets to create a consensus clustering, which models the relationships between the individual clustering results to find an improved global clustering solution across the different modalities. As late integration involves analyzing each individual omics layer separately before integrating the results into a consensus result, it may fail to capture interactions and relationships across different omics modalities. As such, some groups argue that late integration represents multiple parallel single-omics analyses conducted on multiple data types, rather than fulfilling the "true goal" of multi-omics integration, which is to discover inter-omics relationships present in multi-omics data.
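The consensus step of late integration can be sketched with a co-association matrix, a common device in ensemble clustering; the snippet below is a hypothetical illustration rather than the procedure of the tools named above, and it assumes a recent scikit-learn release (which exposes the metric argument on AgglomerativeClustering):

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def consensus_cluster(label_sets, n_clusters):
    """Combine per-modality cluster labels into one consensus clustering."""
    n = len(label_sets[0])
    # Co-association matrix: fraction of modalities in which cells i, j co-cluster.
    coassoc = np.zeros((n, n))
    for labels in label_sets:
        labels = np.asarray(labels)
        coassoc += (labels[:, None] == labels[None, :]).astype(float)
    coassoc /= len(label_sets)
    # Re-cluster the cells, treating 1 - co-association as a distance.
    model = AgglomerativeClustering(n_clusters=n_clusters,
                                    metric="precomputed", linkage="average")
    return model.fit_predict(1.0 - coassoc)

# Hypothetical cluster assignments of the same six cells from two modalities.
rna_labels = [0, 0, 1, 1, 2, 2]
atac_labels = [0, 0, 0, 1, 1, 1]
print(consensus_cluster([rna_labels, atac_labels], n_clusters=3))
```

Because each modality is clustered on its own before this step, any cross-modal feature interaction is invisible to the consensus, which is exactly the limitation noted above.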
Considerations for multi-omics data integration Noise As single-cell data is prone to noise from both biological and technical sources, developing robust de-noising methods may be necessary. In the context of single-cell experiments, biological variation arising from factors such as transcriptional bursts, differences in cell cycle, and the cell microenvironment can introduce noise into the dataset. Additionally, technical variability resulting from factors like poor sequence quality, uneven sequence coverage, and sample contamination must also be addressed. Dataset compatibility Integrating different omic modalities can be challenging due to differences in the structure of the datasets. For example, scRNA-seq features are expressed on a continuous scale, whereas chromatin accessibility data (i.e., scATAC-seq) is essentially bounded between 0 and 2 (two copies of each region per cell). As such, integration of different modalities may require additional steps to transform the datasets into a common latent space. Even then, integration strategies such as early integration may still be prone to bias if the resulting matrix is disproportionately represented by features from one specific modality. Dimensionality Analyzing large-scale single-cell multi-omics datasets can be computationally intensive because of the high dimensionality of the datasets. Hence, the tools employed for integrating datasets must be computationally efficient, or computational methods should first be used to reduce the dimensionality of the datasets (refer to dimensionality reduction). Interpretability and validation Many integration methods focus on statistical associations rather than detailed causal modeling. As such, interpreting and validating the results can be particularly challenging, especially if a neural network was utilized, as these methods are black boxes. The utility and validity of integration methods need to be assessed based on practical applications, such as accurately identifying biologically relevant multi-omic relationships. Matched and unmatched data The integration of single-cell multi-omic data presents different challenges depending on whether the datasets are matched or unmatched. Matched datasets refer to multiple omic layers measured from the same individual cells, whereas unmatched datasets are measured from different sets of cells. While matched datasets enable direct comparisons between the different omics layers within the same cell, they may not be as readily available as unmatched datasets. On the other hand, while unmatched datasets allow for the integration of different sources and conditions, they require consideration of potential biases and confounding factors (e.g., differences in cell populations, experimental conditions, or sample preparation methods between datasets). Several approaches to multi-omics integration for unmatched data include matching by cell group (which requires cell-type annotations), matching by shared features, or statistical approaches such as NMF. Applications and uses While single-modality datasets have proven to be a mainstay in systems biology, combining biological information across multiple modalities has the potential to address biological questions that cannot be answered by a single data type alone. Modelling biological networks For example, the integration of transcriptome and DNA accessibility data has enabled the development of bioinformatic tools to infer cell-type-specific gene regulatory networks. This is achieved by leveraging transcription factor and target gene expression along with cis-regulatory information to impute relevant transcription factors and their regulatory partners. Expanding definitions of cell state Another application of multi-omics integration is in expanding definitions of cell states by incorporating features observed across multiple modalities. For instance, integrating protein marker detection with transcriptome profiling using a multi-omics sequencing technology such as CITE-seq can resolve cell state signatures based on joint gene regulatory and surface marker expression. This enables more robust inferences regarding cellular phenotypes, which are akin to and directly comparable with results from classical flow cytometry. Moreover, defining cell states based on clustering analysis within an integrated latent space may offer more stable estimations of cellular phenotypes compared to analysis within a single-modality latent space. Imputation Furthermore, multi-omics integration can overcome modality-specific limitations through imputation. For example, most spatial transcriptomic sequencing technologies suffer from limited spatial resolution (pixels comprising a mixture of local cells) and low feature complexity. Integration of spatial transcriptomics with scRNA-seq can help overcome these limitations by supporting the spatial deconvolution of low-resolution readouts and estimating the frequencies of each cell type. References Omics
Single-cell multi-omics integration
[ "Biology" ]
2,473
[ "Bioinformatics", "Omics" ]
76,146,348
https://en.wikipedia.org/wiki/5%CE%B1-Pregnane-3%CE%B1%2C11%CE%B2-diol-20-one
5α-Pregnane-3α,11β-diol-20-one, abbreviated as 3,11diOH-DHP4, also known as 3α,11β-dihydroxy-5α-pregnan-20-one, is an endogenous steroid. The steroid 5α-pregnane-3α,11β-diol-20-one (3,11diOH-DHP4) plays a role in the 11-oxygenated steroid backdoor pathway to androgens as a metabolic intermediate. This pathway involves the metabolism of C21 steroids (pregnanes) via enzymes such as steroid 11β-hydroxylase (CYP11B1), steroid 5α-reductase (SRD5A1), and 17α-hydroxylase/17,20-lyase (CYP17A1), resulting in the production of androgen precursors. Docking studies have shown that the C11-oxy group of 3,11diOH-DHP4 and alfaxalone does not significantly affect their binding to CYP17A1. However, it has been observed that the lyase activity of CYP17A1 is impaired by the C11-hydroxyl (-OH) and keto (=O) moieties present in these steroids. The hydroxylase activity of CYP17A1 converts intermediates like 3,11diOH-DHP4 to androgen precursors such as 5α-pregnane-3α,11β,17α-triol-20-one (11OH-Pdiol). These findings indicate that CYP17A1 plays a role in the metabolism of this steroid through both hydroxylation and lyase reactions in the 11-oxygenated steroid backdoor pathway to androgens. This pathway is important for regulating adrenal and gonadal steroid hormone biosynthesis and can contribute to elevated levels of androgens in certain conditions. References Metabolic intermediates Steroids
5α-Pregnane-3α,11β-diol-20-one
[ "Chemistry" ]
431
[ "Metabolism", "Metabolic intermediates", "Biomolecules" ]
76,146,659
https://en.wikipedia.org/wiki/Iboxamycin
Iboxamycin is a synthetic lincosamide, or oxepanoprolinamide, antibiotic. It binds to the bacterial ribosome in both Gram-negative and Gram-positive bacteria, and it has been found to be effective against bacteria which are resistant to other antibiotics that target the large ribosomal subunit. It was developed by combining an oxepanoproline unit with the aminooctose residue of clindamycin. Iboxamycin is effective against ESKAPE bacteria, methicillin-resistant Staphylococcus aureus (MRSA), Enterococcus, Clostridioides difficile, and Listeria monocytogenes, indicating an extended spectrum when compared to clindamycin. Isotopic labeling of iboxamycin with tritium indicated that it binds 70 times more tightly to the ribosome than clindamycin. Iboxamycin can be administered orally and is safe when administered to mice. It is a bacteriostatic antibiotic. See also Cresomycin - a similar antibiotic developed from iboxamycin References Antibiotics Pyrrolidines Oxepanes Thioethers Carboxamides Isobutyl compounds
Iboxamycin
[ "Chemistry", "Biology" ]
253
[ "Pharmacology", "Biotechnology products", "Medicinal chemistry stubs", "Antibiotics", "Pharmacology stubs", "Biocides" ]
62,199,433
https://en.wikipedia.org/wiki/Histopathologic%20diagnosis%20of%20prostate%20cancer
A histopathologic diagnosis of prostate cancer is the discernment of whether there is a cancer in the prostate, as well as specifying any subdiagnosis of prostate cancer if possible. The histopathologic subdiagnosis of prostate cancer has implications for the possibility and methodology of any subsequent Gleason scoring. The most common histopathological subdiagnosis of prostate cancer is acinar adenocarcinoma, constituting 93% of prostate cancers. The most common form of acinar adenocarcinoma, in turn, is "adenocarcinoma, not otherwise specified", also termed conventional or usual acinar adenocarcinoma. Sampling The main sources of tissue sampling are prostatectomy and prostate biopsy. Subdiagnoses - overview In uncertain cases, a diagnosis of malignancy can be excluded by immunohistochemical detection of basal cells (or confirmed by absence thereof), such as using the PIN-4 cocktail of stains, which targets p63, CK-5, CK-14 and AMACR (the latter also known as P504S). Other prostate cancer tumor markers may be necessary in cases that remain uncertain after microscopy. Acinar adenocarcinoma These constitute 93% of prostate cancers. Microscopic characteristics Specific but relatively rare:
Collagenous micronodules
Glomerulations, epithelial proliferations into one or more gland lumina, typically a cribriform tuft with a single attachment to the gland wall
Perineural invasion (which should be circumferential)
Angiolymphatic invasion
Extraprostatic extension
Relatively common and highly specific:
Multiple nucleoli
Eccentric nucleoli
Less specific findings:
Mitoses (also seen in, for example, high-grade prostatic intraepithelial neoplasia (HGPIN) and prostate inflammation)
Prominent nucleoli
Intraluminal eosinophilic secretion
Intraluminal blue mucin
In uncertain cases, a diagnosis of malignancy can be discarded by immunohistochemical detection of basal cells. Intraductal carcinoma Intraductal carcinoma of the prostate gland (IDCP), which is now categorised as a distinct entity by WHO 2016, includes two biologically distinct diseases. IDCP associated with invasive carcinoma (IDCP-inv) generally represents a growth pattern of invasive prostatic adenocarcinoma, while the rarely encountered pure IDCP is a precursor of prostate cancer. The diagnostic criterion of nuclear size at least 6 times normal is ambiguous, as size could refer to either nuclear area or diameter. If area, then this criterion could be re-defined as nuclear diameter at least three times normal, as it is difficult to visually compare the area of nuclei. It is also unclear whether IDCP could also include tumors with ductal morphology. There is no consensus on whether pure IDCP in needle biopsies should be managed with re-biopsy or radical therapy. A pragmatic approach would be to recommend radical therapy only for extensive pure IDCP that is morphologically unequivocal for high-grade prostate cancer. Active surveillance is not appropriate when low-grade invasive cancer is associated with IDCP, as such patients usually have unsampled high-grade prostatic adenocarcinoma. It is generally recommended that the IDCP component of IDCP-inv should be included in tumor extent but not grade. However, there are good arguments in favor of grading IDCP associated with invasive cancer. WHO 2016 recommends that IDCP should not be graded, but it is unclear whether this applies to both pure IDCP and IDCP-inv.
Ductal adenocarcinoma may have a prominent cribriforming architecture, with glands appearing relatively round, and may thereby mimic intraductal adenocarcinoma, but can be distinguished by a number of histological features. Further workup Further workup of a diagnosis of prostate cancer mainly includes: Gleason score Prostate cancer staging Notes Reference list Sources Prostate cancer Histopathology
Histopathologic diagnosis of prostate cancer
[ "Chemistry" ]
835
[ "Histopathology", "Microscopy" ]
62,201,270
https://en.wikipedia.org/wiki/Arrest%20of%20Robert%20Seacat
The arrest of Robert Jonathan Seacat was the culmination of a destructive 19.48-hour standoff with American police in June 2015. After being chased by police for stealing clothing from a Walmart, Seacat barricaded himself in a house at 4219 South Alton Street in Greenwood Village, Colorado. By the time Seacat was finally extracted from the premises, the house had been destroyed by law enforcement in their efforts to flush him out. The homeowner—Leo Lech—filed a lawsuit against the municipality for compensation, but was ruled against by the United States Court of Appeals for the Tenth Circuit; he appealed to the Supreme Court of the United States, but the court declined to hear the case. Background Robert Jonathan Seacat (also Robert Seakat) was born in Kansas on May 6, 1982. In June 2015, he was tall, weighed , was residing in Aurora, Colorado, and was married to Ramona Vitalyevna Grabchenko. With previous convictions for drug possession, aggravated motor vehicle theft, and burglary, on June 3, 2015, Seacat also had three outstanding warrants: two for illegal drugs and one for aggravated motor vehicle theft. Leo and Alfonsia Lech bought the house at 4219 South Alton Street for their son, John. In June 2015, John Lech was living in the house with his girlfriend—Anna Mumzhiyan—and her nine-year-old son, and paid his father monthly rent of . John Lech kept an unloaded pistol and 20-gauge shotgun in the master closet, while ammunition was kept elsewhere in the house. Standoff According to a police affidavit, on June 3, 2015, Seacat shoplifted two belts and a shirt from a Walmart in Greenwood Village. After assaulting a uniformed Aurora Police Department (APD) officer—John Reiter—with his gold-colored 1999 Lexus GS300, Seacat fled in the vehicle. Reiter soon found the abandoned car at the nearby Dayton station and radioed for a tow. A bystander told Reiter that she saw Seacat holster a .380 ACP semi-automatic pistol when he crossed the rail platform before running across northbound Interstate 225. Reiter lost track of Seacat and returned to the Lexus to inventory its contents: in cash, of psilocybin mushrooms, of cannabis, and an unidentified blue pill. From 1:43–1:54pm, a perimeter was established by Greenwood Village (GVPD) and Aurora police. At 1:54pm, Seacat entered the home at 4219 South Alton Street in Greenwood Village, twice tripping the security alarm as he did so. The nine-year-old son of Anna Mumzhiyan was home alone, and when he exited the house at 2:17pm, he described Seacat as the man currently inside the house. After Seacat began opening the garage doors, APD officer William Woods drove a marked departmental sport utility vehicle (SUV) into the garage door to block egress. Seacat responded by blindly firing his pistol through the garage door, striking the SUV approximately from Woods. GVPD Commander Dustin Varney took command of the situation, and summoned GVPD SWAT as well as further assistance from the APD, the Arapahoe County sheriff's office, and the Douglas County sheriff's office. GVPD officer Mic Smith, a SWAT negotiator, engaged with Seacat over the telephone; by 6:08pm, Seacat ceased communicating with police, despite the application of inductive irritants. At 10:38pm, SWAT entered the house and used a stun grenade to conceal their movements, but were driven back outside by gunshots (though criminalists would later establish that they were not fired upon).
During the next 10.2 hours, a Lenco BearCat was driven through the front door, tear gas and 40 mm grenades were repeatedly launched inside, shots were fired upon the house, and explosives were detonated to destroy several exterior walls. Ultimately, "the home was utterly destroyed" by the time Seacat was apprehended in the upstairs bathroom. According to police, at the time of his arrest, Seacat was carrying a loaded-and-chambered Glock 19 and of suspected methamphetamine. Aftermath In their subsequent search of the house, police found Lech's over and under shotgun unloaded and still in its case. In the master bathroom, police found a loaded Glock 17, of suspected methamphetamine, of methadone, of diazepam, of suspected heroin, 17.5 pills of methylphenidate, of clonazepam, of cannabis, 61 packets of buprenorphine/naloxone, digital weighing scales, dozens of single-use baggies, used syringes and pipes, cash, and multiple cell phones. Another of suspected methamphetamine was found in the bathroom where Seacat was captured. Anthony Costarella, a GVPD officer specialized in narcotics, argued that this cache evidenced Seacat as a drug trafficker. It was determined that, during the standoff with police, not only was Seacat experiencing the effects of intentionally-ingested methamphetamine, but also from packets of drugs he had swallowed: at Swedish Hospital on June 6, Seacat defecated seven baggies that, with the addition of a Mecke reagent, tested positive for of heroin and of methamphetamine. On June 9, GVPD Detective John J. Carr applied for an arrest warrant on Seacat. The same day, the Arapahoe County, Colorado district attorney filed 32 charges against Seacat: 17 counts of attempted murder in the first degree, one count of first-degree burglary, one count of menacing, one count of attempted motor vehicle theft, four counts of criminal possession of a weapon, three counts of drug possession, one count of trespass, and four counts of being a habitual criminal. In August 2016, Seacat's jury trial was scheduled for that October 4. , Seacat (inmate number 145189) was imprisoned in the Sterling Correctional Facility for 20 separate convictions, with an estimated mandatory release date of March 3, 2061. He is next eligible for parole on October 4, 2035. Lechs' litigation When he returned to his house, Leo Lech compared the damage to Osama bin Laden's compound in Abbottabad after Operation Neptune Spear: "[p]rojectiles were still lodged in the walls. Glass and wooden paneling crumbled on the ground below the gaping holes, and inside, the family's belongings and furniture appeared thrashed in a heap of insulation and drywall." Varney defended the police's actions by saying, "My mission is to get that individual out unharmed and make sure my team and everyone else around including the community goes home unharmed […] Sometimes that means property gets damaged, and I am sorry for that." The National Tactical Officers Association supported Varney's assertion that appropriate force was used in the Seacat standoff. Due to the extensive damage, the house was eventually condemned by Greenwood Village, and the remains were razed. John Lech, his girlfriend, and her son, moved into his father's home from Greenwood Village; the distance of the relocation forced John Lech to change jobs. The city refused to compensate the Lechs, and instead offered "in temporary rental assistance and for the [home] insurance deductible." 
Alleging that the house's value was , the Lechs refused the offer, calling it "insulting"; rebuilding the house cost Leo Lech . In a 2019 response to an NPR inquiry, Greenwood Village spokesperson Melissa Gallegos said it was Leo Lech's decision to demolish rather than repair the house, replace the undamaged foundation, and build a larger house than the one that was damaged. In 2016, the Lechs filed a lawsuit in the United States District Court for the District of Colorado against the participating police officers and Greenwood Village under the takings clause of the Fifth Amendment to the United States Constitution and Article II, Section 15 of the Constitution of Colorado. They claimed that they deserved compensation for "their property [being] seized by the government for public use" (Lech v. City of Greenwood Village), and were willing to settle with the city for . Instead, the defendants requested and were granted summary judgment by the district court, which ruled that "when a state acts pursuant to its police power, rather than the power of eminent domain, its actions do not constitute a taking". In the past, both the Minnesota and Texas Supreme Courts have sided with litigants whose houses were damaged or destroyed by police actions. The Lechs appealed the ruling to the United States Court of Appeals for the Tenth Circuit (10th Cir.). In 2019, the three-judge panel of the 10th Cir. ruled against the Lechs, saying unanimously that the destruction of the house fell under police power and that eminent domain was not undertaken. The court sympathized with the Lechs, calling their circumstances "unfair", but ruling that police cannot be "burdened" with the consideration of collateral property damage when performing their duties. The 10th Cir. also noted that "if police officers 'willfully or wantonly' destroy property", then they can be subject to tort law; Leo Lech was also unsuccessful in pursuing that avenue with the courts of Colorado. , Leo Lech had incurred in attorney's fees. On March 11, 2020, the Institute for Justice filed a petition for writ of certiorari with the Supreme Court of the United States. The Supreme Court denied certiorari on June 29, 2020, letting the lower court's ruling stand. References Further reading 2015 in Colorado arrests of individual people destruction of buildings Greenwood Village, Colorado law enforcement in Colorado law enforcement operations in the United States law enforcement scandals
Arrest of Robert Seacat
[ "Engineering" ]
2,065
[ "Destruction of buildings", "Architecture" ]
62,201,276
https://en.wikipedia.org/wiki/Isaac%20Newton%20Vail
Isaac Newton Vail (1840 – January 26, 1912) was an American Quaker, schoolteacher, and pseudoscientist supporting the theory of catastrophism. His ideas were taken up by creationists including Jehovah's Witnesses. Life Isaac Newton Vail was born to John Vail and Abigail (née Edgerton) in Barnesville, Ohio in 1840. He was trained at and then taught at the Quaker Seminary in Westtown Township, Pennsylvania, leaving to pursue his independent study of flood geology. He married Rachel D. Wilson in the fall of 1864; they had two daughters (Alice and Lydia). In 1876 Rachel died, and on 26 July 1880 Vail married his second wife Mary M. Cope in Salem, Ohio. The 1900 census records his occupation as a farmer. Vail argued that the Earth once had rings like Saturn's, in what became known as the "Vailan theory" or "annular theory". His 1886 "Canopy Theory" proposed that the Earth had been ringed by a toroidal mass of ice, which he named the "firmament", following the usage in Genesis 1:6-8. Vail supposed that this could explain Noah's Flood, as he described in his 1874 book The Earth's Aqueous Ring: or The Deluge and its Cause. Vail died on 26 January 1912 in Pasadena, California. Reception The geologist Donald U. Wise writes that most creationist theories of Noah's Flood derive from Vail. Wise writes that Vail's "Canopy Theory" model consisted of "a series of Saturn-like aqueous rings, the progressive collapse of which caused successive cataclysms to bury and create fossils. Collapse of the last remnant ring caused the Noachian flood." Tom McIver similarly notes in Skeptic that the "Water Canopy Theory has long been a mainstay of creationists", who invoke it to account for both the conditions before the Genesis flood and the cause of the flood itself. The historian of science Ronald Numbers, in his book on creationism, writes that the founders of the Jehovah's Witnesses "borrowed their geology" from Vail, as it was even referenced in their 1912 multi-media production The Photo Drama of Creation. However, the Witnesses have in recent decades distanced themselves from creationist teachings on the basis that such are not in harmony with Scripture nor scientific truths. The Encyclopedia of Pseudoscience notes that, a century later, "members of the Fortean Society" support Vail's theory. The mathematician and science writer Martin Gardner in his book Fads and Fallacies in the Name of Science wrote that Vail's theories were still being popularized in the 20th century by the Annular World Association of Azusa, California. The engineer Jane Albright notes several scientific failings of the canopy theory. Among these are that enough water to create a flood of even of rain would form a vapor blanket thick enough to make the earth too hot for life, since water vapor is a greenhouse gas; the same blanket would effectively obscure all incoming starlight. References 1840 births 1912 deaths Catastrophism American Quakers Creationism
Isaac Newton Vail
[ "Biology" ]
650
[ "Creationism", "Biology theories", "Obsolete biology theories" ]
62,205,700
https://en.wikipedia.org/wiki/Androgen%20conjugate
An androgen conjugate is a conjugate of an androgen, such as testosterone. They occur naturally in the body as metabolites of androgens. Androgen conjugates include sulfate esters and glucuronide conjugates and are formed by sulfotransferase and glucuronosyltransferase enzymes, respectively. In contrast to androgens, conjugates of androgens do not bind to the androgen receptor and are hormonally inactive. However, androgen conjugates can be converted back into active androgens through enzymes like steroid sulfatase. Examples of androgen conjugates include the sulfates testosterone sulfate, dehydroepiandrosterone sulfate, androstenediol sulfate, dihydrotestosterone sulfate, and androsterone sulfate, and the glucuronides testosterone glucuronide, dihydrotestosterone glucuronide, androsterone glucuronide, and androstanediol glucuronide. Androgen conjugates are conjugated at the C3 and/or C17β positions, where hydroxyl groups are available. See also Androgen ester Estrogen conjugate References Androstanes Human metabolites Steroid hormones Testosterone
Androgen conjugate
[ "Chemistry", "Biology" ]
290
[ "Biochemistry stubs", "Biotechnology stubs", "Biochemistry" ]
62,208,263
https://en.wikipedia.org/wiki/Karen%20Helen%20Wiltshire
Karen Helen Wiltshire (born 1962) is an Irish environmental scientist. She is professor of shelf ecosystems and one of the vice directors of the Alfred Wegener Institute for Polar and Marine Research (AWI). Born in Dublin, Wiltshire studied at Trinity College Dublin and graduated with a master's degree in environmental science. She received her Ph.D. and habilitation (2001) in hydrobiology at the University of Hamburg. She worked as a postdoctoral fellow at the , the University of St. Andrews, Scotland, and was appointed Professor of Geosciences at Jacobs University in 2006. Wiltshire supports Scientists for Future in Germany and took part in a statement at the Bundespressekonferenz in March 2019. Publications The fractionation of phosphorus in sediments of the river Elbe under anaerobic conditions, 1988 Experimental procedures for the fractionation of phosphorus in sediments with emphasis on anaerobic techniques, 1992 The influence of microphytobenthos on oxygen and nutrient fluxes between eulittoral sediments and associated water phases in the Elbe estuary, 1992 References Irish women academics Academic staff of the University of Bremen University of Hamburg alumni Living people 1962 births Alumni of Trinity College Dublin
Karen Helen Wiltshire
[ "Environmental_science" ]
243
[ "Environmental scientists" ]
62,208,717
https://en.wikipedia.org/wiki/Halanay%20inequality
Halanay inequality is a comparison theorem for differential equations with delay. This inequality and its generalizations have been applied to analyze the stability of delayed differential equations, and in particular, the stability of industrial processes with dead-time and of delayed neural networks. Statement Let $t_0$ be a real number and $\tau$ be a non-negative number. If $v \colon [t_0 - \tau, \infty) \to [0, \infty)$ satisfies $v'(t) \le -\alpha v(t) + \beta \sup_{t - \tau \le s \le t} v(s)$ for $t \ge t_0$, where $\alpha$ and $\beta$ are constants with $\alpha > \beta > 0$, then $v(t) \le K e^{-\eta (t - t_0)}$ for $t \ge t_0$, where $K = \sup_{t_0 - \tau \le s \le t_0} v(s)$ and $\eta$ is the unique positive solution of $\eta = \alpha - \beta e^{\eta \tau}$. For example, with $\alpha = 2$, $\beta = 1$, and $\tau = 1$, the decay rate $\eta$ is the root of $\eta = 2 - e^{\eta}$, namely $\eta \approx 0.443$. See also Grönwall's inequality References Control theory Lemmas in analysis Ordinary differential equations
Halanay inequality
[ "Mathematics" ]
100
[ "Theorems in mathematical analysis", "Mathematical analysis", "Mathematical analysis stubs", "Applied mathematics", "Control theory", "Lemmas in mathematical analysis", "Lemmas", "Dynamical systems" ]
62,209,510
https://en.wikipedia.org/wiki/Internet%20of%20Military%20Things
The Internet of Military Things (IoMT) is a class of Internet of things for combat operations and warfare. It is a complex network of interconnected entities, or "things", in the military domain that continually communicate with each other to coordinate, learn, and interact with the physical environment to accomplish a broad range of activities in a more efficient and informed manner. The concept of IoMT is largely driven by the idea that future military battles will be dominated by machine intelligence and cyber warfare and will likely take place in urban environments. By creating a miniature ecosystem of smart technology capable of distilling sensory information and autonomously governing multiple tasks at once, the IoMT is conceptually designed to offload much of the physical and mental burden that warfighters encounter in a combat setting. Over time, several different terms have been introduced to describe the use of IoT technology for reconnaissance, environment surveillance, unmanned warfare and other combat purposes. These terms include the Military Internet of Things (MIoT), the Internet of Battle Things, and the Internet of Battlefield Things (IoBT). Overview The Internet of Military Things encompasses a large range of devices that possess intelligent physical sensing, learning, and actuation capabilities through virtual or cyber interfaces that are integrated into systems. These devices include items such as sensors, vehicles, robots, UAVs, human-wearable devices, biometrics, munitions, armor, weapons, and other smart technology. In general, IoMT devices can be classified into one of four categories (but the devices are meant to be ubiquitous enough to form a data fabric): Data-carrying device: A device attached to a physical thing that indirectly connects it to the larger communication network. Data-capturing device: A reader/writer device capable of interacting with physical things. Sensing and actuating device: A device that can detect or measure information related to the surrounding environment and converts it into a digital electronic signal or a physical operation. General device: A device embedded with processing and communication capabilities that can exchange information with the larger network. In addition to connecting different electronic devices to a unified network, researchers have also suggested the possibility of incorporating inanimate and innocuous objects like plants and rocks into the system by fitting them with sensors that will turn them into information gathering points. Such efforts fall in line with projects related to the development of electronic plants, or e-Plants. Proposed examples of IoMT applications include tactical reconnaissance, smart management of resources, logistics support (i.e. equipment and supply tracking), smart city monitoring, and data warfare. Several nations, as well as NATO officials, have expressed interest in the potential military benefits of IoT technology. History Advancements in IoMT technology largely stemmed from military efforts to bolster the development of sensor networks and low-power computing platforms during the 1960s for defense applications. During the Cold War, the U.S. military pioneered the use of wireless sensor network technologies to detect and track Soviet submarines. One example was the Sound Surveillance System (SOSUS), a network of underwater acoustic sensors, i.e. hydrophones, placed throughout the Atlantic and Pacific Oceans to act as underwater listening posts for above-ground facilities.
Much of the sensor and networking technologies that the U.S. Department of Defense (DoD) developed during this time period ultimately served as the foundation for modern IoT systems. Critically, the DoD helped set the stage for future IoT research in the late 1960s with the creation of ARPANET, an early precursor to the Internet that geographically-dispersed military scientists used to share data. In the 1980s, the Defense Advanced Research Projects Agency (DARPA) formally partnered with academic researchers at the Massachusetts Institute of Technology (MIT) and Carnegie Mellon University to further develop distributed, wireless sensor networks. From there, research into wireless sensor technologies spread throughout the civilian research community and eventually found use for industrial applications such as power distribution, wastewater treatment, and factory automation. During this time period, the DoD also invested heavily in the miniaturization of integrated circuits in order to embed various objects with tiny computer chips. This funding helped the commercial microelectronics industry recover at a time when it faced potential decline. By the late 1990s, the Department of Defense had announced plans for “network-centric” warfare that integrated the physical, information, and cognitive domains to enhance information sharing and collaboration. Examples of projects guided by this goal include the Nett Warrior (formerly known as the Ground Soldier System or Mounted Soldier System) and the Force XXI Battle Command Brigade and Below communication platform, both of which were prevalent in the early 2000s. However, interest in IoT research in the military started to wane as commercial industry surged ahead with new technology. While the DoD continued research into advanced sensors, intelligent information processing systems, and communication networks, few military systems took full advantage of the IoT stack, such as networked sensors and automated-response technology, largely due to security concerns. As of 2019, research in modern IoT technology within the military started to regain a considerable amount of support from the U.S. Army, Navy, and Air Force. Programs Several initiatives were formed by the Department of Defense in order to bolster IoT research in the military domain as well as to reduce the current gap in progress between military and industry applications. The Connected Soldier The Connected Soldier project was a research initiative supported by the U.S. Army Natick Soldier Research, Development and Engineering Center (NSRDEC) that focused on creating intelligent body gear. The project aimed to establish an internet of things for each soldier by integrating wideband radio, biosensors, and smart wearable systems as standard equipment. These devices served not only to monitor the soldier's physiological status but also to communicate mission data, surveillance intelligence, and other important information to nearby military vehicles, aircraft, and other troops. Internet of Battlefield Things (IoBT) In 2016, the U.S. Army Research Laboratory (ARL) created the Internet of Battlefield Things (IoBT) project in response to the U.S. Army's operational outline for 2020 to 2040, titled “Winning in a Complex World.” In the outline, the Department of Defense announced its goals to keep up with the technological advances of potential adversaries by turning its attention away from low-tech wars and instead focusing on combat in more urban areas.
Acting as a detailed blueprint for what ARL suspected future warfare may entail, the IoBT project pushed for better integration of IoT technology in military operations in order to better prepare for techniques such as electronic warfare that may lie ahead. In 2017, ARL established the Internet of Battlefield Things Collaborative Research Alliance (IoBT-CRA) to bring together industry, university, and government researchers to advance the theoretical foundations of IoBT systems. According to ARL, the IoBT was primarily designed to interact with the surrounding environment by acquiring information about the environment, acting upon it, and continually learning from these interactions. As a consequence, research efforts focused on sensing, actuation, and learning challenges. In order for the IoBT to function as intended, the following prerequisite conditions must first be met in regard to technological capability, structural organization, and military implementation. Communication All entities in the IoBT must be able to properly communicate information to one another even with differences in architectural design and makeup. While future commercial internet of things may exhibit a lack of uniform standards across different brands and manufacturers, entities in the IoBT must remain compatible despite displaying extreme heterogeneity. In other words, all electronic equipment, technology, or other commercial offerings accessed by military personnel must share the same language or at least have “translators” that make the transfer and processing of different types of information possible. In addition, the IoBT must be capable of temporarily incorporating available networked devices and channels that it does not own for its own use, especially if doing so is advantageous to the system (e.g. making use of existing civilian networking infrastructure in military operations in a megacity). At the same time, the IoBT must take into consideration the varying degree of trustworthiness of all the networks it leverages. Timing will be critical to the success of the IoBT. The speed of communication, computation, machine learning, inference, and actuation between entities is vital to many mission tasks, as the system must know which type of information to prioritize. Scalability will also serve as an important factor in the operation since the network must be flexible enough to function at any size. Learning The success of the IoBT framework often hinges on the effectiveness of the mutual collaboration between the human agents and the electronic entities in the network. In a tactical environment, the electronic entities will be tasked with a wide range of objectives from collecting information to executing cyber actions against enemy systems. In order for these technologies to perform those functions effectively, they must be able to not only ascertain the goals of the human agents as they change but also demonstrate a significant level of autonomous self-organization to adjust to the rapidly changing environment. Unlike commercial network infrastructures, the adoption of IoT in the military domain must take into consideration the extreme likelihood that the environment may be intentionally hostile or unstable, which will require a high degree of intelligence to navigate.
As a result, the IoBT technology must be capable of incorporating predictive intelligence, machine learning, and neural networks in order to understand the intent of the human users and determine how to fulfill that intent without the process of micromanaging each and every component of the system. According to ARL, maintaining information dominance will rely on the development of autonomous systems that can operate beyond the current state of total dependence on human control. A key focus of IoBT research is the advancement of machine learning algorithms to provide the network with decision-making autonomy. Rather than having one system at the core of the network functioning as the central intelligence component dictating the actions of the network, the IoBT will have intelligence distributed throughout the network. Therefore, individual components can learn, adapt, and interact with each other locally as well as update behaviors and characteristics automatically and dynamically on a global scale to suit the operation as the landscape of warfare constantly evolves. In the context of IoT, the incorporation of artificial intelligence into the sheer volume of data and entities involved in the network will provide an almost infinite number of possibilities for behavior and technological capability in the real world. In a tactical environment, the IoBT must be able to perform various types of learning behaviors to adapt to the rapidly changing conditions. One area that received considerable attention is the concept of meta-learning, which strives to determine how machines can learn how to learn. Having such a skill would allow the system to avoid fixating on pretrained absolute notions of how it should perceive and act whenever it enters a new environment. Uncertainty quantification models have also generated interest in IoBT research, since the system's ability to determine its level of confidence in its own predictions based on its machine learning algorithms may provide some much-needed context whenever important tactical decisions need to be made. The IoBT should also demonstrate a sophisticated level of situation awareness and artificial intelligence that will allow the system to autonomously perform work based on limited information. A primary goal is to teach the network how to correctly infer the complete picture of a situation while measuring relatively few variables. As a result, the system must be capable of integrating the vast amount and variety of data that it regularly collects into its collective intelligence while functioning in a continuous state of learning at multiple time scales, simultaneously learning from past actions while acting in the present and anticipating future events. The network must also account for unforeseen circumstances, errors, or breakdowns and be able to reconfigure its resources to recover at least a limited level of functionality. However, some components must be prioritized and structured to be more resilient to failure than others. For instance, networks that carry important information such as medical data must never be at risk of shutdown. Cognitive Accessibility For semi-autonomous components, the human cognitive bandwidth serves as a notable constraint for the IoBT due to its limitations in processing and deciphering the flood of information generated by the other entities in the network.
In order to obtain truly useful information in a tactical environment, semi-autonomous IoBT technologies must collect an unprecedented volume of data of immense complexity in levels of abstraction, trustworthiness, value, and other attributes. Due to serious limitations in human mental capacity, attention, and time, the network must be able to easily reduce and transform large flows of information produced and delivered by the IoBT into reasonably-sized packets of essential information that is significantly relevant to army personnel, such as signals or warnings that pertain to their current situation and mission. A key risk of the IoBT is the possibility that devices could communicate negligibly useful information that eats up the human's valuable time and attention or even propagate inappropriate information that misleads human individuals into performing actions that lead to adverse or unfavorable outcomes. At the same time, the system will stagnate if the human entities doubt the accuracy of the information provided by the IoBT technology. As a result, the IoBT must operate in a manner that is extremely convenient and easy to understand for the humans without compromising the quality of the information it provides them. Mosaic Warfare Mosaic Warfare is a term coined by former DARPA Strategic Technology Office director Tom Burns and former deputy director Dan Patt to describe a “systems of systems” approach to military warfare that focuses on re-configuring defense systems and technologies so that they can be fielded rapidly in a variety of different combinations for different tasks. Designed to emulate the adaptable nature of Lego blocks and the mosaic art form, Mosaic Warfare was promoted as a strategy to confuse and overwhelm adversary forces by deploying low-cost, adaptable, expendable weapon systems that can play multiple roles and coordinate actions with one another, complicating the decision-making process for the enemy. This method of warfare arose as a response to the current monolithic system in the military, which relies on a centralized command-and-control structure fraught with vulnerable single-point communications and the development of a few highly capable systems that are too important to risk losing in combat. The concept of Mosaic Warfare has existed within DARPA since 2017 and contributed to the development of various technology programs such as the System of Systems Integration Technology and Experimentation (SoSIT), which led to the development of a network system that allows previously disjointed ground stations and platforms to transmit and translate data between one another. Ocean of Things In 2017, DARPA announced the creation of a new program called the Ocean of Things, which planned to apply IoT technology on a grand scale in order to establish a persistent maritime situational awareness over large ocean areas. According to the announcement, the project would involve the deployment of thousands of small, commercially available floats. Each float would contain a suite of sensors that collect environmental data—like sea surface temperature and sea state—and activity data, such as the movement of commercial vessels and aircraft. All the data collected from these floats would then be transmitted periodically to a cloud network for storage and real-time analysis.
Through this approach, DARPA aimed to create an extensive sensor network that can autonomously detect, track, and identify military, commercial, and civilian vessels as well as indicators of other maritime activity. The Ocean of Things project had two main objectives: the design of the sensor floats and the analytic techniques involved in organizing and interpreting the incoming data. For the float design, each vessel had to be able to withstand harsh ocean conditions for at least a year while being made out of commercially available components costing less than $500 in total. In addition, the floats could not pose any danger to passing vessels and had to be made out of environmentally safe materials so that they could be safely disposed of in the ocean after completing their mission. In regard to the data analytics, the project concentrated on developing cloud-based software that could collect, process, and transmit data about the environment and the floats' own condition using a dynamic display. Security concerns One of the largest potential dangers of IoMT technology is the risk of both adversarial threats and system failures that could compromise the entire network. Since the crux of the IoMT concept is to have every component of the network—sensors, actuators, software, and other electronic devices—connected together to collect and exchange data, poorly protected IoT devices are vulnerable to attacks which may expose large amounts of confidential information. Furthermore, a compromised IoMT network is capable of causing serious, irreparable damage in the form of corrupted software, disinformation, and leaked intelligence. According to the U.S. Department of Defense, security remains a top priority in IoT research. The IoMT must be able to foresee, avoid, and recover from attempts by adversary forces to attack, impair, hijack, manipulate, or destroy the network and the information that it holds. The use of jamming devices, electronic eavesdropping, or cyber malware may pose a serious risk to the confidentiality, integrity, and availability of the information within the network. Furthermore, the human entities may also be targeted by disinformation campaigns in order to foster distrust in certain elements of the IoMT. Since IoMT technology may be used in an adversarial setting, researchers must account for the possibility that a large number of sources may become compromised to the point where threat-assessing algorithms may use some of those compromised sources to falsely corroborate the veracity of potentially malicious entities. Minimizing the risks associated with IoT devices will likely require a large-scale effort by the network to maintain impenetrable cybersecurity defenses as well as employ counterintelligence measures that thwart, subvert, or deter potential threats. Examples of possible strategies include the use of “disposable” security, where devices that are believed to be potentially compromised by the enemy are simply discarded or disconnected from the IoMT, and honeynets that mislead enemy eavesdroppers. Since adversary forces are expected to adapt and evolve their strategies for infiltrating the IoMT, the network must also undergo a continuous learning process that autonomously improves anomaly detection, pattern monitoring, and other defensive mechanisms. Secure data storage serves as one of the key points of interest for IoMT research.
Since the IoMT system is predicted to produce an immense volume of information, attention was directed toward new approaches to maintaining data properly and regulating protected access that do not allow for leaks or other vulnerabilities. One potential solution proposed by the Pentagon was Comply to Connect (C2C), a network security platform that autonomously monitored device discovery and access control in order to keep pace with the exponentially-growing network of entities. In addition to the risks of digital interference and manipulation by hackers, concerns have also been expressed regarding the availability of strong wireless signals in remote combat locations. The lack of a constant internet connection was shown to limit the utility and usability of certain military devices that depend on reliable reception. See also Internet of Autonomous Things Internet of Things Smart munitions Edge computing Biometrics Cyberwarfare Further reading References Military technology Internet of things Ambient intelligence Technology assessments Computing and society Digital technology 21st-century inventions
Internet of Military Things
[ "Technology" ]
3,949
[ "Information and communications technology", "Technology assessments", "Digital technology", "Computing and society", "Ambient intelligence" ]
62,210,354
https://en.wikipedia.org/wiki/IoBT-CRA
The Internet of Battlefield Things Collaborative Research Alliance (IoBT-CRA), also known as the Internet of Battlefield Things Research on Evolving Intelligent Goal-driven Networks (IoBT REIGN), is a collaborative research alliance between government, industry, and university researchers for the purposes of developing a fundamental understanding of a dynamic, goal-driven Internet of Military Things (IoMT) known as the Internet of Battlefield Things (IoBT). It was first established by the U.S. Army Research Laboratory (ARL) to investigate the use of machine intelligence and smart technology on the battlefield, as well as strengthen the collaboration between autonomous agents and human soldiers in combat. An initial grant of $25 million was provided by ARL in October 2017 to fund the first five years of this potential 10-year research program. The research effort is a collaboration between ARL and Carnegie Mellon University, the University of California, Berkeley, the University of California, Los Angeles, the University of Massachusetts, the University of Southern California, Georgetown University, and SRI International with the University of Illinois at Urbana-Champaign (UIUC) acting as the consortium lead. Goals The IoBT-CRA was created as part of the U.S. Army’s long-term plans to keep up with technological advances in commercial industry and better prepare for future electronic warfare against more technologically sophisticated adversaries. In light of this objective, the IoBT-CRA focuses on exploring the capabilities of intelligent battlefield systems and large-scale heterogeneous sensor networks that dynamically evolve in real-time in order to adapt to Army mission needs. Part of the CRA research is dedicated to enhancing modern intelligent sensor and actuator capacity, allowing them to be compatible with secure military-owned networks, less trustworthy civilian networks, and adversarial networks. ARL identified six areas of research that the IoBT-CRA should strive to develop as part of its program: Agile Synthesis: Theoretical models and methods of autonomic complex systems that provide the capacity to enable fast and effective command over military, adversary, and civilian networks. Reflexes: Theoretical models and methods for structuring dynamic IoBTs that perform adaptive, autonomic, and self-aware behavior at varying ranges of scale, distribution, resource constraints, and heterogeneity. Intelligent Battlefield Services: Scientific theories that will help improve the fundamental run-time capabilities of IoBTs with tasks such as information collection, predictive processing, and data anomaly detection. Security: Methods of increasing the defenses of IoBTs such that the system is resilient to attacks and tampering from adversaries and is able to continue operating under less-than-ideal situations. Dependability: Fundamental models related to asset composition, system adaptation, and intelligent services that are all aimed to increase the reliability of IoBTs in largely uncertain environments. Experimentation: The architectural foundations for the IoBT seek to evaluate how well theories, algorithms, and technologies perform under various military-related scenarios for the purpose of addressing issues regarding scale, composability, and compatibility. External links IoBT REIGN Homepage List of IoBT-CRA publications U.S. Army Research Laboratory IoBT webpage References Military technology Internet of things Digital technology
IoBT-CRA
[ "Technology" ]
662
[ "Information and communications technology", "Digital technology" ]
73,224,878
https://en.wikipedia.org/wiki/Xiaomi%20Mi%20MIX%20Fold
Xiaomi Mi MIX Fold is an Android-based foldable smartphone manufactured by Xiaomi. Unveiled on March 30, 2021, and released on April 16, 2021, it was Xiaomi's first foldable smartphone. Specifications The Mi MIX Fold has two displays: an outer 6.52-inch AMOLED display used when the device is closed and an inner 8.01-inch AMOLED display used when it is unfolded. Xiaomi rated the hinge to last at least 200,000 bends and the flexible screen to last 1 million folds. The phone is powered by the Qualcomm Snapdragon 888 and comes with either 12 or 16 GB of RAM. It also has a 5020 mAh battery. References Mi MIX Fold Foldable smartphones Mobile phones with multiple rear cameras Mobile phones with infrared transmitter Mobile phones with 8K video recording Mobile phones introduced in 2021
Xiaomi Mi MIX Fold
[ "Technology" ]
184
[ "Crossover devices", "Mobile technology stubs", "Foldable smartphones", "Mobile phone stubs" ]
73,225,621
https://en.wikipedia.org/wiki/1992%20Nemadji%20River%20train%20derailment
On June 30, 1992, a Burlington Northern Railroad freight train derailed on a bridge over the Nemadji River at the southern edge of the town of Superior, Wisconsin. The derailment resulted in a liquid benzene spill into the river. The fumes from the spill led to an evacuation of an estimated 80,000 residents from the town of Superior, the city of Superior and Duluth, Minnesota, apparently the largest evacuation in U.S. history resulting from a train accident. Background The derailment of the southbound train happened at about 2:50 am June 30, 1992, at the intersection of Wisconsin Highway 35, the rail line and the Nemadji River. One of the train's tank cars fell 75 feet from the bridge into the Nemadji River. The ruptured car released nearly 22,000 gallons of aromatic concentrates, including liquid benzene and toluene, into the river. Thirteen other derailed cars fell onto the banks of the river. Two of these cars were carrying propane; other cars were carrying lumber. The water at the location of the BN bridge over the Nemadji River was seven feet deep. NOAA's initial statement that day estimated the water flow rate at 830,000 gallons per minute and predicted that the water would accordingly soon flush away the spilled product. The same statement, however, noted that the water in Superior Bay was stagnant, with high turbidity. A toxic cloud of benzene formed over Duluth and Superior. Benzene is a clear, flammable liquid used in the production of lacquers, varnishes and other admixtures. Government offices were closed on both sides of the Minnesota-Wisconsin border. Inmates in the St. Louis County, Minnesota jail were moved elsewhere. The Duluth Transit Authority was used to evacuate residents from senior apartments and nursing homes. Roads into Superior were closed. Superior police captain Doug Osell said, "It looked like a ghost town. Cars were leaving in droves." About 50,000 residents were evacuated from Duluth and about 35,000 from Superior. Around 205 Army and Air National Guard members from Minnesota and Wisconsin assisted with the evacuations and security. Twenty-six people were treated at area hospitals for irritation after breathing benzene fumes. The benzene gases created a visible haze. The benzene cloud moved west of Duluth and dissipated on account of rain. The evacuation order was lifted from Duluth at 3:30 pm and from Superior at 6:00 pm. Responses by government officials Wisconsin Governor Tommy Thompson declared a state of emergency for Douglas County, the county of Superior, on the day of the accident. Thompson and Minnesota governor Arne Carlson conferred late in the day. Environmental effects On August 3, 1992, the Wisconsin Department of Natural Resources reported that the spill from the derailment killed thousands of fish and an unspecified number of other animals. The DNR's report said that most of the dead fish were carp, suckers, redhorse, shiners, and minnows. The report went on to say that the rains subsequent to the spill "helped to dilute the chemical and probably reduced the potential magnitude of the fish kill." The report said that 16 species of wild animals and six species of domestic animals died. The DNR report also stated that vegetation along the Nemadji River suffered damage. Residual oil from the spill traveled downstream north to Superior Bay, Allouez Bay and Lake Superior.
Legal settlement and pledges by Burlington Northern On April 4, 1995, it was announced that Burlington Northern had agreed to make payments in a settlement over this spill and two other spills in Wyoming. In the settlement, Burlington Northern agreed to pay $1.5 million, of which $1.1 million was a civil penalty under the Oil Pollution Act of 1990. The consent decree also obliged the railroad to spend $1.2 million on technology to prevent derailments. The railroad agreed to buy three ultrasonic rail inspection cars, which would improve its ability to find rail defects and prevent derailments. Additionally, the railroad agreed to pay the US Environmental Protection Agency and other federal agencies for costs incurred in responding to the oil spill. The railroad also agreed to pay $250,000 to a fund managed cooperatively by the U.S. Department of the Interior, the Bad River Band of Lake Superior Chippewa and the Red Cliff Band of Lake Superior Chippewa. Lastly, Burlington Northern committed to pay $100,000 to a fund for studying the type of rail defects involved in the Nemadji River train derailment. References 1992 disasters in the United States Train derailment Wisconsin train derailment Accidents and incidents involving Burlington Northern Railroad Chemical disasters Derailments in the United States Douglas County, Wisconsin Environmental disasters in the United States 1992 train derailment
1992 Nemadji River train derailment
[ "Chemistry" ]
998
[ "Chemical accident", "Chemical disasters" ]
73,226,078
https://en.wikipedia.org/wiki/Software%20load%20testing
The term load testing or stress testing is used in different ways in the professional software testing community. Load testing generally refers to the practice of modeling the expected usage of a software program by simulating multiple users accessing the program concurrently. As such, this testing is most relevant for multi-user systems, often ones built using a client/server model, such as web servers. However, other types of software systems can also be load tested. For example, a word processor or graphics editor can be forced to read an extremely large document, or a financial package can be forced to generate a report based on several years' worth of data. The most accurate load testing simulates actual use, as opposed to testing using theoretical or analytical modeling. Load testing lets you measure your website's quality of service (QoS) performance based on actual customer behavior. Nearly all load testing tools and frameworks follow the classical load testing paradigm: when customers visit your website, a script recorder records the communication and then creates related interaction scripts. A load generator replays the recorded scripts, which may be modified with different test parameters before replay. During replay, hardware and software statistics are monitored and collected by the conductor; these statistics include the CPU, memory, and disk I/O of the physical servers, as well as the response time and throughput of the system under test (SUT). Finally, these statistics are analyzed and a load testing report is generated. Load and performance testing analyzes software intended for a multi-user audience by subjecting the software to different numbers of virtual and live users while monitoring performance measurements under these different loads. Load and performance testing is usually conducted in a test environment identical to the production environment before the software system is permitted to go live. Objectives of load testing: - To ensure that the system meets performance benchmarks; - To determine the breaking point of the system; - To test the way the product reacts to load-induced downtimes. As an example, a website with shopping cart capability is required to support 100 concurrent users broken out into the following activities: 25 virtual users (VUsers) log in, browse through items and then log off 25 VUsers log in, add items to their shopping cart, check out and then log off 25 VUsers log in, return items previously purchased and then log off 25 VUsers just log in without any subsequent activity A test analyst can use various load testing tools to create these VUsers and their activities; a minimal sketch of this scenario is shown below. Once the test has started and reached a steady state, the application is being tested at the 100-VUser load as described above, and the application's performance can be monitored and captured. The specifics of a load test plan or script will generally vary across organizations. For example, in the list above, the first item could represent 25 VUsers browsing unique items, random items, or a selected set of items depending upon the test plan or script developed. However, all load test plans attempt to simulate system performance across a range of anticipated peak workflows and volumes. The criteria for passing or failing a load test (pass/fail criteria) are generally different across organizations as well. There are no standards specifying acceptable load testing performance metrics.
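The following Python sketch illustrates the four-group scenario above. The host name, URL paths, and the bare urllib-based virtual-user loop are hypothetical placeholders invented for this example; a real load test would typically use a dedicated tool and would measure far more than this sketch does:

```python
import threading
import time
import urllib.request

BASE = "https://shop.example.com"  # hypothetical system under test

def vuser(actions, results):
    """One virtual user: log in, perform its activity mix, log off."""
    start = time.monotonic()
    for path in ["/login"] + actions + ["/logout"]:
        try:
            with urllib.request.urlopen(BASE + path, timeout=10) as resp:
                resp.read()
        except OSError:
            results.append(("error", path))
    results.append(("elapsed", time.monotonic() - start))

# 100 concurrent VUsers split into the four activity groups above.
groups = [(["/browse"], 25),            # browse items
          (["/cart", "/checkout"], 25), # add to cart and check out
          (["/returns"], 25),           # return previous purchases
          ([], 25)]                     # log in only
results = []
threads = [threading.Thread(target=vuser, args=(actions, results))
           for actions, n in groups for _ in range(n)]
for t in threads:
    t.start()
for t in threads:
    t.join()
errors = [r for r in results if r[0] == "error"]
print(f"{len(errors)} errors out of {len(threads)} virtual users")
```

Ramp-up, steady-state measurement, and pass/fail evaluation are deliberately omitted; the point is only the structure of a scripted multi-user scenario.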
A common misconception is that load testing software provides record and playback capabilities like regression testing tools. Load testing tools analyze the entire OSI protocol stack, whereas most regression testing tools focus on GUI performance. For example, a regression testing tool will record and play back a mouse click on a button on a web browser, but a load testing tool will send out the hypertext that the web browser sends after the user clicks the button. In a multiple-user environment, load testing tools can send out hypertext for multiple users, with each user having a unique login ID, password, etc. The popular load testing tools available also provide insight into the causes of slow performance. There are numerous possible causes of slow system performance, including, but not limited to, the following: Application server(s) or software Database server(s) Network – latency, congestion, etc. Client-side processing Load balancing between multiple servers Load testing is especially important if the application, system, or service will be subject to a service level agreement or SLA. Load testing is performed to determine a system's behavior under both normal and anticipated peak load conditions. It helps to identify the maximum operating capacity of an application as well as any bottlenecks, and to determine which element is causing degradation. When the load placed on the system is raised beyond normal usage patterns to test the system's response at unusually high or peak loads, it is known as stress testing. The load is usually so great that error conditions are the expected result, although there is no clear boundary at which an activity ceases to be a load test and becomes a stress test. The term "load testing" is often used synonymously with concurrency testing, software performance testing, reliability testing, and volume testing for specific scenarios. All of these are types of non-functional testing that are not part of the functionality testing used to validate suitability for use of any given software. User experience under load test In the example above, while the device under test (DUT) is under a production load of 100 VUsers, the target application is run; how fast or slow the DUT responds, and how satisfied the user actually is with the perceived performance, constitutes the user experience under load. Browser-level vs. protocol-level users Historically, all load testing was performed with automated API tests that simulated traffic through concurrent interactions at the protocol layer (often called protocol-level users or PLUs). With the advance of containers and cloud infrastructure, the option is now present to test with real browsers (often called browser-level users or BLUs). Each approach has its merits for different types of applications, but generally, browser-level users will be more akin to the real traffic that a website will experience and will provide a more realistic load profile and response time measurement. BLUs are, however, a more expensive way of running tests and cannot work with all types of applications, specifically those that are not accessible through a web browser, such as a desktop client or API-based application. Load testing tools References Software testing
Software load testing
[ "Engineering" ]
1,283
[ "Software engineering", "Software testing" ]
73,226,386
https://en.wikipedia.org/wiki/Studio%20Swine
Studio Swine is a British-Japanese art collective and design studio founded in 2011 by Azusa Murakami and Alexander Groves. Swine is an acronym for "Super Wide Interdisciplinary New Explorers". They are known for artistic works in design that combine narrative, film, and process-based object-making with an emphasis on sustainability. Background and work Azusa Murakami and Alexander Groves met while studying at the Royal College of Art in London, from which they both received master's degrees in Product Design. Upon graduation, they founded Studio Swine. Murakami is an architect who was trained at the Bartlett School of Architecture. Groves holds an undergraduate degree in fine art from Oxford University. Studio Swine explores themes of regional identity and the future of resources. Their work combines contextual research with the experimental use of sustainable materials, manifesting in objects, films and immersive installations. Their sensory installations are an ongoing series of works they describe as "Ephemeral Tech", in which the boundaries between digital technology and natural forces are dissolved to create unnatural phenomena. Ephemeral Tech looks to a future where technology uses the senses to transcend familiar interfaces, moving beyond the standard visual stimuli of flat screens, projections and LED arrays to become inseparable from both built and natural environments. They explore the concept of Ephemeral Tech in installations such as New Spring (2017). Known for their films, they are inspired by the filmmaking of Ray and Charles Eames and have said that they are designers of "mass communication rather than mass production". Their films have won awards at Cannes and have been featured on National Geographic and Discovery Channel, as well as in museums and at film festivals globally. The collective's work has been exhibited at institutions such as the Victoria and Albert Museum in London and 21_21 Design Sight in Tokyo, and shown in both the Venice Art and Architecture Biennales. Examples of their work are held in the collections of the Museum of Modern Art (MoMA), Centre Pompidou, M+, the Vitra Design Museum, and the Design Museum Gent. In 2022, Murakami and Groves founded A.A.Murakami, which focuses on ephemeral experiences through sensory installations and Web3; its inaugural project was entitled Floating World. Studio Swine is represented by Pace Gallery, Pearl Lam Gallery and Superblue. Selected works and projects Sea Chair (2011) First presented in the Royal College of Art show in 2011, Sea Chair is an open-source design and film that explores the issue of plastic waste. In Sea Chair, Studio Swine demonstrate how waste plastic picked up by fishing trawlers can be transformed into chairs on board the boats. Hair Highway (2014) Having developed a technique to infuse hair in natural resin as an alternative to wood while studying at London's Royal College of Art, Studio Swine travelled to China to visit a hair market in Shandong and film parts of the hair trade, exploring human hair as a future resource and reflecting on the global human hair industry in the context of China's past and present trade relationship with the world. As part of the project, they created a series of decorative pieces and accessories influenced by the art-deco architecture and design found in Shanghai; all pieces were made from coloured resin and human hair. Hair Highway was presented at Design Miami/Basel in 2014.
Can City (2014) Studio Swine created "Can City", a mobile foundry that operates around São Paulo's streets, smelting aluminium cans using waste vegetable oil collected from local cafes as fuel. The moulds and the finished pieces are all made on location, turning the street into an improvised manufacturing line. In a city with some 20 million residents, waste is generated on a massive scale, yet over 80% of the recycling is collected by an informal system of independent waste collectors known as Catadores, who pull their handmade carts around the streets. "Can City" creates a system in which their livelihoods can extend beyond rubbish collection. In Can City, Catadores mine the streets for materials to create objects with a vernacular aesthetic, providing a portrait of the streets. The stools were the first items to be produced; inspired by vernacular design, the seating was made for the food market that provided the waste materials. "Can City" was commissioned by the Coletivo Amor de Madre Gallery, São Paulo and was supported by Heineken. Fordlandia (2017) "Fordlandia" is an immersive art installation created by Studio Swine that is inspired by a ghost town deep in the Amazon Rainforest built by the American industrialist Henry Ford in the late 1920s to secure a supply of rubber for his automobile empire. Through the construction of a fictional domestic space made entirely of Amazonian rubber and other materials from the rainforest, the installation explores the idea of synthesis between nature and industry, questioning Henry Ford's attempt to tame nature for industrial gain. New Spring (2017) "New Spring" is an immersive art installation created by Studio Swine with the support of COS. The installation consists of a six-metre-high aluminium tree that releases mist-filled bubbles which break upon human contact but can be held by visitors wearing special gloves. Visitors are invited to interact with the bubbles, triggering the release of scent and mist, while experiencing a unique sensory journey. Shown at Milan Design Week in 2017, the work, Studio Swine have said, is inspired by the ephemerality of cherry blossom and re-examines how we can interact with technology through our senses. Infinity Blue (2018) Infinity Blue is an installation by Studio Swine that celebrates cyanobacteria, one of the world's smallest living beings. They describe their work in the following terms: "At almost 9 metres tall and weighing 20 tonnes, ∞ Blue (Infinity Blue) is an immersive installation that pays homage to the cyanobacteria, one of the world's smallest living beings. Around 3 billion years ago, cyanobacteria first developed oxygenic photosynthesis. In doing so, they changed the nature of our planet. The sculpture is a monument to their vital creation, which continues to provide the oxygen in every breath we take." On the surface of the monument, Cornish clay and oxide glazes reflect local mining history. The textural pattern on the ceramic tiles is generated by a reaction-diffusion algorithm found in nature, from zebras to coral. From the sculpture, 32 vortex cannons fire smoke rings whose scents tell a layered history of the earth's atmosphere. Studio Swine collaborated with the Paris perfume house Givaudan to develop fragrances inspired by the aromas of primordial worlds. Metropolis. I (2024) In 2024, Studio Swine's collaboration with Sendai-tansu cabinet makers from Miyagi Prefecture was unveiled at an exhibition called Craft x Tech Tohoku Project at Kudan House in Tokyo.
Their contribution to the initiative, titled Metropolis. I, is a geometric chest of drawers made of lacquer-coated wood and iron fittings that Azusa Murakami described as a type of "time travel device [...] employing age-old techniques and traditions that traverse the hands of artisans across centuries." The show was curated by Maria Cristina Didero and also included pieces by Ini Archibong, Sabine Marcelis, Yoichi Ochiai, Hideki Yoshimoto, and Michael Young. The work was subsequently exhibited in the Prince Consort Gallery of the Victoria and Albert Museum in London during the London Design Festival. Films Sea Chair (2011) Can City (2013) Buttons (2013) Hair Highway (2014) Terraforming (2016) St James Market (2016) Infinity Blue (2018) Floating World (2022) Under a Flowing Field (2023) References External links Official website A.A.Murakami website Design companies Artist groups and collectives Arts organizations established in 2011 Alumni of the Royal College of Art Conceptual artists Furniture designers
Studio Swine
[ "Engineering" ]
1,621
[ "Design", "Engineering companies", "Design companies" ]
73,226,462
https://en.wikipedia.org/wiki/List%20of%20Brutalist%20architecture%20in%20the%20United%20States
This is a list of buildings that are examples of the Brutalist architectural style in the United States. Alabama University Chapel, Tuskegee University, Tuskegee Alaska Z.J. Loussac Public Library, Anchorage (1986) Arizona Phoenix Symphony Hall, Phoenix (1969-1972) Regency on Central, 2323 N. Central Ave., Phoenix (1964) Arkansas Bank of America Plaza, Little Rock California All original Bay Area Rapid Transit (BART) stations, San Francisco Bay Area (1972–73) Berkeley Art Museum and Pacific Film Archive (former campus on Bancroft Way), UC Berkeley, (Mario Ciampi, 1970) Briggs Hall, University of California, Davis (unknown, 1971) (Smith Barker Hanssen, architects) Cal Poly Pomona College of Environmental Design Campus of the University of California, Irvine Claire Trevor School of the Arts Crawford Hall (Irvine) Cathedral of St. Mary of the Assumption, San Francisco Cathedral of Our Lady of the Angels, Los Angeles Crafton Hills Community College, Yucaipa Earl Warren College Embarcadero Substation, San Francisco Embarcadero Center, San Francisco (John C. Portman Jr., 1968) Evans Hall (UC Berkeley) Geisel Library, University of California, San Diego, San Diego (William Pereira, 1970) Hilton San Francisco Financial District Huntington Beach Public Library Hyatt Regency San Francisco Airport Inglewood City Hall, Inglewood, California Irvine High School Oakland Museum of California, Oakland (Kevin Roche, 1969) Portsmouth Square pedestrian bridge Salk Institute for Biological Studies, La Jolla Sam Bell Pavilion, La Jolla Samitaur, Los Angeles San Diego Stadium, San Diego, (Frank L. Hope & Associates, 1967) (demolished) Sears, Roebuck and Company Pacific Coast Territory Administrative Offices, Alhambra Sheats Goldstein House, Los Angeles St. Basil's Catholic Church, Los Angeles UC Berkeley College of Environmental Design, Baur-Wurster Hall Berkeley, Vernon DeMars, (1964) Vaillancourt Fountain, Justin Herman Plaza, San Francisco (Armand Vaillancourt, 1971) Yosemite Hall, Cal Poly, San Luis Obispo, San Luis Obispo (Falk & Booth, 1969) Colorado Arapahoe Community College, Littleton (1974) Federal Reserve Bank of Kansas City Denver Branch, Denver Mesa Laboratory, Boulder (1966) Engineering Center, University of Colorado at Boulder, Boulder (1965) Connecticut Becton Engineering and Applied Science Center, Yale University, New Haven Beinecke Rare Book & Manuscript Library, New Haven Community Services Building, New Haven Crawford Manor Dixwell Avenue Congregational United Church of Christ Ezra Stiles and Samuel Morse Colleges, Yale University, New Haven Homer D. Babbidge Library Hotel Marcel (former Pirelli Tire Building), New Haven (Marcel Breuer & Robert F. Gatje, 1969) Kline Biology Tower Knights of Columbus Building, New Haven Louis Micheels House New Haven Central Fire Station, New Haven New Haven Coliseum, New Haven (Kevin Roche / John Dinkeloo & Associates, 1972) (demolished 2006–2007) Rudolph Hall, New Haven (Paul Rudolph, 1963) Temple Street Parking Garage, New Haven Delaware I. M. Pei Building, Wilmington Florida 1111 Lincoln Road, Miami Disney's Contemporary Resort, Walt Disney World (Welton Becket, 1971) Mailman Center for Child Development, Miami Metrorail stations, early 1980s heavy metro system (1984) Miami-Dade County School Board Administration Building South Tower, Miami Office in the Grove, Coconut Grove, Miami (Kenneth Treister, 1972) Orlando Public Library, Orlando (John M. 
Johansen, 1966) The University of Florida Levin College of Law National Hurricane Center headquarters, Miami Georgia AmericasMart Building 3, Atlanta Atlanta Central Library, Atlanta Atlanta Marriott Marquis, Atlanta CNN Center, Atlanta Colony Square, Atlanta Park Place on Peachtree condominiums, Atlanta (Ted Levy, 1984–1987) DeKalb County Structures include: Southern Bell Telephone & Telegraph Co., 2204 LaVista Road NE (ca. 1970) Robert T. "Bobby" Burgess Building, DeKalb County Police Department, 3610 Camp Drive (1972) First National Bank of Atlanta, 2849 N. Druid Hills Road NE (ca. 1973) Clairemont Oaks, 441 Clairemont Avenue (1973–1975) DeKalb County Parking Deck, 125 W. Trinity Place (1974) Brevard Professional Building, 246 Sycamore Street (1974) Woodruff Health Sciences Center Administrative Building (WHSCAB) at Emory University, 1440 Clifton Road (1976) Emory Rehabilitation Hospital, 1441 Clifton Road (1976) Coan Recreation Center, 1530 Woodbine Avenue SE (1976) Bank of America, 155 Clairemont Avenue (ca. 1982) Kensington Marta Station, 3350 Kensington Road (1993) Hawaii Hawaii State Capitol Hawai'i Hochi Building Jefferson Hall, East–West Center Idaho Intermountain Gas Building, Boise Whiting House, Sun Valley Illinois Arthur J. Schmitt Academic Center, DePaul University, Chicago (C.F. Murphy and Associates, 1968) Blue Cross-Blue Shield Building Cummings Life Sciences Center, Chicago Faner Hall (SIUC), Southern Illinois University Carbondale, Carbondale (1974) Henry Hinds Laboratory, Chicago Joseph Regenstein Library, University of Chicago (Walter Netsch, 1970) Kirsch Residence, Oak Park Lincoln Executive Plaza, Chicago Marina City, Chicago Metropolitan Correctional Center, Chicago Norris University Center Northwestern University Library, Evanston, (Walter Netsch, 1966–70) Old Prentice Women's Hospital Building Raymond Hilliard Homes, Chicago Thomas Rees Memorial Carillon, Washington Park, Springfield, Bill Turley, (1962) University Hall (University of Illinois Chicago) Will County Courthouse, Joliet (1969) Indiana Bracken Library Clowes Memorial Hall, Butler University, Indianapolis, Evans Woollen III and John M. Johansen, (1963) College Life Insurance Company of America Headquarters, Indianapolis Eskenazi Museum of Art, Bloomington (I. M. Pei, 1982) Herman B. Wells Library, Indiana University, Bloomington (Eggers & Higgins, 1966–69) Indiana University Musical Arts Center, Indiana University, Bloomington (Woollen, Molzan and Partners, 1972) Minton–Capehart Federal Building, Indianapolis (Evans Woollen III, Woollen, Molzan and Partners, 1976) Southside Junior High School, Columbus Iowa Carver Hall Civic Center of Greater Des Moines Iowa State Center Kansas Wichita Central Library, Wichita (Schaefer, Schirmer & Eflin, 1965–1967) Kansas Judicial Center, Topeka Kentucky Kentucky International Convention Center Patterson Office Tower Louisiana Structures include Baton Rouge River Center Hale Boggs Memorial Bridge Lafayette Parish courthouse Louisiana National Bank Building Tangipahoa Parish courthouse Maine Franklin Towers University of Maine School of Law Building Maryland Baltimore County Circuit Courthouses, Towson Morris A. Mechanic Theatre, Baltimore (John M.
Johansen, 1967) (demolished 2014) Massachusetts 177 Huntington 320 Newbury Street (Boston Architectural College), Boston (Ashley, Myer & Associates, 1966) Alewife station, Cambridge (Ellenzweig, 1985) Boston City Hall, Boston (Kallmann McKinnell & Knowles/Campbell, Aldrich & Nulty, 1969) Boston Government Service Center, Boston (Paul Rudolph, 1962–71) Braintree High School, Braintree (1972) Campus of the Massachusetts Institute of Technology Carpenter Center for the Visual Arts, Harvard University, Cambridge (Le Corbusier, (1962) Countway Library - Harvard University, Boston Fall River Government Center, Fall River (1976) Fine Arts Center, University of Massachusetts, Amherst (Kevin Roche, 1975) The First Church of Christ, Scientist George Gund Hall, Harvard Graduate School of Design, Cambridge (John Andrews, 1972) George Sherman Union Harbor Towers Larsen Hall, Harvard University, Cambridge Law and Education Tower, Boston University, Boston Lawrence Public Library, Lawrence (Henneberg & Henneberg Architects, 1973) Lincoln House, Lincoln Mather House, Cambridge (Shepley, Bulfinch, Richardson and Abbot, 1971) Murray D. Lincoln Campus Center One Western Avenue, Harvard Business School, Boston Peabody Terrace Robert H. Goddard Library, Clark University, Worcester (John M. Johansen, 1969) Simmons Hall, Cambridge Smith Campus Center Solomon Carter Fuller Mental Health Center, Boston (1974) Technology Square Tisch Library University of Massachusetts Boston, Boston University of Massachusetts Dartmouth, Dartmouth (Paul Rudolph) Wollaston station, Quincy (1971) Michigan Blue Cross/Blue Shield Service Center, Detroit, Michigan (Ginno Rossetti, 1971) Grand Traverse Performing Arts Center at Interlochen Center for the Arts, Interlochen, Michigan (1975) St. Francis de Sales Church, Norton Shores Minnesota Arvonne Fraser Library Malcolm Moos Health Sciences Tower, University of Minnesota, Minneapolis (c. 1970) Peavey Plaza Phillips-Wangensteen Building, University of Minnesota Hospital, Minneapolis (1976) Rarig Center Riverside Plaza, Minneapolis (Ralph Rapson, 1973) Saint John's Abbey, Collegeville (1958-1961) Mississippi Lamar Law Center, University of Mississippi St. Richard's Catholic Church, Jackson Tougaloo College, Jackson A.A. Branch Hall L. Zenobia Coleman Library Renner Hall Missouri Nestlé Purina PetCare Headquarters Pointe400 Luxury Apartments, formerly the Pet Milk Building, St. Louis, MO Roy Blunt Hall at Missouri State University Montana Montana State University Billings Liberal Arts Building Nebraska Robert V. Denney Federal Building Wells Fargo Center, Lincoln, NE (I.M. Pei, 1975) Nevada William D. Carlson Education Building, University of Nevada, Las Vegas, Las Vegas New Hampshire Christensen Hall, University of New Hampshire, Durham (Ulrich Franzen, 1970) Phillips Exeter Academy Library, Exeter New Jersey 550 Broad Street Galaxy Towers Journal Square Transportation Center, Jersey City, New Jersey (1973–1975) New Mexico Farris Engineering Center, University of New Mexico, Albuquerque Humanities Building, University of New Mexico, Albuquerque Main Library, Albuquerque (George Pearl, 1978) – listed on the National Register of Historic Places in 2019. 
Spaceport America, Truth or Consequences New York Bradfield Hall, Cornell University, Ithaca Buffalo City Court Building, Buffalo (1974) Cube House, Ithaca Empire State Plaza, Albany Cultural Education Center (1976-1978) The Egg Endo Pharmaceuticals Building, Garden City Engineering Building, Binghamton University, Vestal (1976) Erie Basin Observation Tower, Buffalo Everson Museum of Art, Syracuse First Unitarian Church, Rochester Folsom Library, Rensselaer Polytechnic Institute, Troy (Pierik Quinlivan & Krause, 1976) Herbert F. Johnson Museum of Art, Cornell University, Ithaca (I.M. Pei, 1973) Hudson River Museum, Yonkers J. W. Chorley Elementary School, Middletown Orange County Government Center, Goshen (Paul Rudolph, 1967) Palisades Center, West Nyack (1998) New York City 1 Police Plaza (Gruzen and Partners, 1973) 811 Tenth Avenue 945 Madison Avenue museum building (Marcel Breuer, 1966) 33 Thomas Street (AT&T Long Lines Building) (John Carl Warnecke, 1974) Adam Clayton Powell Jr. State Office Building (Ifill, Johnson & Hanchard, 1974) Boston Road Apartments, Bronx Carman Hall, Lehman College, Bronx (1970) Chatham Towers, Manhattan Edward Durell Stone Townhouse, Manhattan Elmer Holmes Bobst Library, New York University (Philip Johnson, Richard Foster) Five Manhattan West Joseph Curran Building, Manhattan Kips Bay Towers Lincoln Square Synagogue, Manhattan Morrisania Air Rights New Museum, Manhattan New York Presbyterian Church, Queens New York Marriott Marquis North Central Bronx Hospital River Park Towers St. Frances de Chantal's Church St. John the Baptist Church St. Jude Church Temple Israel of the City of New York Tracey Towers Trinity Chapel, New York University University Village Waterside Plaza Weiss Research Building, Manhattan North Carolina Bath Building, Raleigh Elion-Hitchings Building (Burroughs Wellcome headquarters), Durham (1971, demolished) Hiram H. Ward Federal Courthouse, Winston-Salem North Dakota Our Lady of the Annunciation Chapel at Annunciation Priory, Bismarck, North Dakota Ohio Bricker Federal Building, Columbus Cleveland Museum of Art Education Wing, Cleveland Continental Center, Columbus Crosley Tower, Cincinnati Hamilton County Justice Center, Cincinnati Huntington Plaza, Columbus Hyatt Regency Columbus, Columbus John F. Seiberling Federal Building and United States Courthouse, Akron Justice Center Complex, Cleveland Maag Library, Youngstown State University, Youngstown (1976) The Ohio History Center, Columbus, (W. Byron Ireland & Associates, 1966) Rhodes State Office Tower, Columbus Rhodes Tower, Cleveland Seeley G. Mudd Learning Center, Oberlin College Library, Oberlin (Warner, Burns, Toan & Lunde, 1974) Sheraton Columbus Hotel at Capitol Square, Columbus The 9 Cleveland, Cleveland Oklahoma Mummers Theater, Oklahoma City John M. Johansen (demolished) Oregon Salem Public Library Mount Hood Community College, Gresham Pennsylvania 1700 Market Benedum Hall Carnegie Library of Pittsburgh – Knoxville Branch, Pittsburgh (Paul Schweikher, 1965) Century III Mall, West Mifflin (1979) Charles Patterson Van Pelt Library, University of Pennsylvania, Philadelphia Jennie King Mellon Library, Chatham University, Pittsburgh Main Hall, West Chester University, West Chester (1974) Penn Mutual Tower, Philadelphia University of Pittsburgh, Pittsburgh Barco Law Building (1976) David Lawrence Hall (1968) Hillman Library (1968) Litchfield Towers (Deeter & Ritchey, 1963) School of Information Sciences Building (Tasso Katselas, 1965) Wesley W. 
Posvar Hall (1975–1978) Wean Hall, Carnegie Mellon University, Pittsburgh (1971) Puerto Rico Bayamón City Hall, Bayamón, Puerto Rico (1980) Rhode Island Classical High School, Providence (1970) Community College of Rhode Island Knight Campus, Warwick (1972) John D. Rockefeller Jr. Library, Providence (Warner, Burns, Toan & Lunde, 1962–1964) John E. Fogarty Memorial Building, Providence Sciences Library (Brown University), Providence (1971) South Carolina Strom Thurmond Federal Building and United States Courthouse, Columbia South Dakota McKennan Hospital additions, Sioux Falls Northwestern Auto Bank, Sioux Falls Stanley J. Marshall HPER Center, South Dakota State University Tennessee Chattanooga Public Library Hunter Museum of American Art Lawson McGhee Library Sheraton Nashville Downtown University of Tennessee Art and Architecture Building Texas Alkek Library, Texas State University, San Marcos, Texas (1990) Alley Theatre, Houston, Texas (1968) Dallas City Hall, Dallas, Texas (I.M. Pei, 1978) Lyndon Baines Johnson Presidential Library, Austin Lovett College, Rice University, Houston, Texas (1968) Perry–Castañeda Library (PCL), University of Texas at Austin, (1977) Webb Chapel Park Pavilion, Dallas Utah Mountain Bell data processing center, Salt Lake City University of Utah College of Architecture and Planning University of Utah - Social & Behavioral Science University of Utah - Student Services Building Vermont Cathedral Church of St. Paul (Burlington, Vermont) Charterhouse of the Transfiguration Elliott Pratt Center, Goddard College, Plainfield, Vermont Virginia American Press Institute (demolished) FBI Academy, Quantico, Virginia (1972) The National Conference Center Sydney Lewis Hall, Washington and Lee University School of Law, Lexington, Virginia (1977) Unitarian Universalist Church of Arlington Washington Alhadeff Sanctuary of Temple De Hirsch, Seattle Bellevue Arts Museum, Bellevue Freeway Park, Seattle, Washington (Lawrence Halprin, 1972–1976) Kane Hall, University of Washington, Seattle (Walker & McGough, 1971) Meany Hall, University of Washington, Seattle (Kirk, Wallace & McKinley, 1974) Nuclear Reactor Building, University of Washington, The Architect Artist Group/TAAG, (1961) Odegaard Undergraduate Library, University of Washington, Seattle (Kirk, Wallace & McKinley, 1972) Rainier Tower, Seattle Seattle Central Library, Seattle Schmitz Hall, University of Washington, Seattle St. Joseph's Hospital, Tacoma Temple Beth Shalom, Spokane Washington, D.C. Gelman Library, George Washington University Hirshhorn Museum and Sculpture Garden (Gordon Bunshaft, 1974) Hubert H. Humphrey Building, the United States Department of Health and Human Services headquarters (1977) J. Edgar Hoover Building (FBI national headquarters) (C.F. Murphy, 1974) James V. Forrestal Building L'Enfant Plaza – a plaza containing many US Government buildings Lauinger Library, Georgetown University (John Carl Warnecke, 1970) Robert C. Weaver Federal Building Third Church of Christ, Scientist (Araldo Cossutta, 1971; demolished 2014) Washington Metro stations (1970–2001) Washington Hilton West Virginia Wisconsin Marcus Center for the Performing Arts, Milwaukee, Wisconsin (Harry Weese, 1966–69) Milwaukee County War Memorial, Milwaukee Sentry Insurance Headquarters, Stevens Point University of Wisconsin, Madison Curtin Hall, (Maynard W. Mayer & Assoc., 1974) George L. 
Mosse Humanities Building Vilas Communication Hall Wingspread, Racine Wyoming University of Wyoming dormitories and American Heritage Center See also List of Brutalist structures Further reading References External links Architecture lists Brutalist architecture in the United States Brutalist architecture
List of Brutalist architecture in the United States
[ "Engineering" ]
3,574
[ "Architecture lists", "Architecture" ]
73,226,836
https://en.wikipedia.org/wiki/Buffered%20probability%20of%20exceedance
Buffered probability of exceedance (bPOE) is a function of a random variable used in statistics and risk management, including financial risk. The bPOE at threshold $x$ is the probability of a tail whose mean value equals $x$. Therefore, by definition, bPOE is equal to one minus the confidence level at which the Conditional Value at Risk (CVaR) is equal to $x$. bPOE is similar to the probability of exceedance of the threshold $x$, but the tail is defined by its mean rather than by the lowest point of the tail. bPOE has its origins in the concept of buffered probability of failure (bPOF), developed by R. Tyrrell Rockafellar and Johannes Royset to measure failure risk. It was further developed and defined as the inverse CVaR by Matthew Norton, Stan Uryasev, and Alexander Mafusalov. Similar to CVaR, bPOE considers not only the probability that outcomes (losses) exceed the threshold $x$, but also the magnitude of these outcomes (losses). Formal definition There are two slightly different definitions of bPOE, the so-called Lower bPOE and Upper bPOE. For a random variable $X$, the Lower bPOE, $\bar{p}_x(X)$, at threshold $x$ is given by: $\bar{p}_x(X) = \min_{a \ge 0} E\left[\, [a(X - x) + 1]^+ \right]$, where $[\cdot]^+ = \max\{\cdot, 0\}$. bPOE can be expressed as the inverse function of CVaR: $\bar{p}_x(X) = 1 - \alpha$ whenever $\bar{q}_\alpha(X) = x$, where $\bar{q}_\alpha(X)$ is the CVaR of $X$ with confidence level $\alpha$. References Risk management Extreme value data Reliability analysis Stochastic processes Survival analysis
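The minimization formula above lends itself to a direct empirical estimate. The Python sketch below is illustrative only, assuming the stated formula; the function name, grid bounds, and grid search over the scaling variable $a$ are arbitrary choices, not part of any published reference implementation:

```python
import numpy as np

def bpoe(samples, x, grid_size=2000):
    """Empirical bPOE of a sample at threshold x.

    Approximates min over a >= 0 of E[ max(a*(X - x) + 1, 0) ]
    by scanning a = 0 plus a logarithmic grid of positive values.
    """
    s = np.asarray(samples, dtype=float)
    a_grid = np.concatenate(([0.0], np.logspace(-6, 6, grid_size)))
    vals = [np.maximum(a * (s - x) + 1.0, 0.0).mean() for a in a_grid]
    return float(min(vals))  # equals 1 for x <= sample mean, ~0 beyond the max

rng = np.random.default_rng(0)
z = rng.standard_normal(100_000)
print(bpoe(z, 1.0))  # larger than the plain exceedance P(Z > 1) ≈ 0.16
```

Consistent with the definition, the returned value always dominates the ordinary probability of exceedance at the same threshold.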
Buffered probability of exceedance
[ "Engineering" ]
300
[ "Reliability analysis", "Reliability engineering" ]
73,227,568
https://en.wikipedia.org/wiki/Narave%20pig
The Narave or Naravé pig is a type of domestic pig native to northern Vanuatu. Narave pigs are pseudohermaphrodite (intersex) male individuals that are kept for ceremonial purposes. Etymology The term narave is from Bislama. Clark (2009) reconstructs Proto-North-Central Vanuatu *raβʷe ‘hermaphrodite pig, intersex pig’. Reflexes documented in Clark (2009) include Mota rawe ‘an hermaphrodite pig, female’; Raga ravwe ‘hermaphrodite (usually of pig)’; Nokuku rawe ‘boar’ (MacDonald 1889), rav ‘intersex pig’ (Clark 2005–2007 field notes); Vara Kiai rave; Tamambo ravue; Sakao e-re ‘intersex pig’; Suñwadaga na-raghwe; Araki dave; Vao na-rav ‘intersex pigs’, bò-rav ‘sow’. François (2021) documents Araki rave [ɾaβe] ‘hermaphrodite pig, of great customary value’. Endocrinology In these pigs, a deficiency of the cytochrome P450 enzyme 17α-hydroxylase (CYP17A1) causes very low 17α-hydroxypregnenolone levels, leading to pseudohermaphroditism. Genetics An analysis of Narave pig mitochondrial DNA by Lum et al. (2006) found that they are descended from Southeast Asian pigs and were brought to the islands by Lapita seafarers about 3,000 years ago. Distribution Although formerly widespread across northern Vanuatu, intersex pigs have been most common on Malo Island in the 21st century. Intersex pigs are kept for use in Nimangki grade-taking ceremonies at Patani village on the northwestern coast of Espiritu Santo. Some intersex pigs are kept on Gaua and northeastern Ambae islands, although they are not as prevalent as in the early 20th century. Old carvings of intersex pigs can also still be found on Vao Island. An intersex pig was also found on Aore Island by McIntyre (1997). John R. Baker (1928) reported large numbers of intersex pigs on Espiritu Santo, where he claimed to have seen "no fewer than 125 intersexes in one single day at Hog Harbour" during an event when younger and older pigs were brought together from different islands for trading. He also received reports that such pigs were present on the islands of Gaua, Vanua Lava, Mota, Ambae, Ambrym (not in large numbers), and Tongoa. He reports that intersex-producing females were also brought from the other islands to breed intersex pigs on Merelava and Merig islands. Baker's dissections showed that all the animals he examined had only male internal sexual organs; that is, despite the appearance of their external genitalia, internally there was no question that they were male pigs. Baker (1928) reported the following names for intersex pigs in various parts of Vanuatu. rauoē in Mota rau or rolas in Gaua ndrē or nerē in the northeast peninsula of Espiritu Santo ra or ravē in southeast Espiritu Santo teret in Ambrym pulpul in Tongoa In the 21st century, Narave pigs can be found in Avunatari village on Malo Island. There is also another population in Nasulnun village on Espiritu Santo Island, whose residents recently relocated there from Malo Island, which has an exclusively Tamambo-speaking native population. References Pig breeds Sanma Province Intersex topics Bislama words and phrases
Narave pig
[ "Biology" ]
799
[ "Intersex topics", "Sex" ]
73,227,716
https://en.wikipedia.org/wiki/Metal%20tetranorbornyl
In organometallic chemistry, metal tetranorbornyls are compounds with the formula M(nor)4 (M = a metal in a +4 oxidation state; 1-nor = bicyclo[2.2.1]hept-1-yl) and are one of the largest series of tetraalkyl complexes derived from identical ligands. Metal tetranorbornyls display uniform stoichiometry, low-spin configurations, and high stability, which can be attributed to their +4 oxidation state metal center. The stability of metal tetranorbornyls is predominantly attributed to the unfavorability of β-hydride elimination. Computational calculations have determined that London dispersion effects contribute significantly to the stability of metal tetranorbornyls; specifically, Fe(nor)4 has a dispersion stabilization of 45.9 kcal/mol. Notable metal tetranorbornyls are those synthesized with metal centers of cobalt, manganese, or iron. Preparation Traditionally, metal tetranorbornyls are prepared by a reaction of alkyllithiums, such as 1-norbornyllithium, with transition-metal halides while tumbling with glass beads in pentane. This is followed by a filtration step using a column of alumina to remove pentane byproducts, and lastly a recrystallization step from pentane to obtain the crystalline compound. Alternative methods for the preparation of metal tetranorbornyls have been proposed. Specifically, the tetrakis(1-norbornyl)chromium complex can be prepared under inert atmosphere conditions with 1-norbornyllithium dissolved in hexane. CrCl3(THF)3 is added and the mixture is allowed to stir for 48 hours. The solution is then centrifuged to remove LiCl. The resulting supernatant is applied to an alumina column with hexane as the elution solvent, allowing the collection of a purple fraction that undergoes solvent evaporation and sublimation to give the desired Cr(nor)4 complex. The tetrakis(1-norbornyl)cobalt(IV) complex can be prepared analogously from a cobalt halide and 1-norbornyllithium. The tetrakis(1-norbornyl)molybdenum(IV) complex was prepared by William M. Davis, Richard R. Schrock, and Richard M. Kolodziej as follows: MoCl3(THF)3 was stirred with 1-norbornyllithium in a mixture of THF and diethyl ether at low temperature. The reaction mixture was then warmed, and after approximately 90 minutes it appeared red with a blue precipitate. The reaction mixture was then filtered to remove the blue precipitate, and the red filtrate was concentrated under vacuum to yield red crystals of Mo(nor)4. Structure and bonding The stability of metal tetranorbornyls is generally considered to be a result of unfavorable β-hydrogen elimination. Metal alkyl species with β-hydrogen atoms on the alkyl group are disfavored because β-hydrogen migration to the metal center results in the elimination of an olefin and the production of the corresponding metal hydride. 1-norbornyl does not undergo β-hydrogen migration, even though it possesses six β-hydrogen atoms, because formation of the corresponding olefin, 1-norbornene, is unfavorable: according to Bredt's rule, one of the sp2 carbons of the double bond would be located at the bridgehead, making 1-norbornene highly strained. β-hydrogen elimination does not, however, explain the formation of metal tetranorbornyl complexes synthesized from lower-valent metal center precursors, the shortened bond lengths between the metal center and the 1-norbornyl ligand carbons, or the resulting low-spin tetrahedral molecular geometry.
Quantum mechanical calculations have elucidated that London dispersion forces between the norbornyl ligands account for the stability and molecular geometry of the homoleptic tetranorbornyl metal complexes. Metal tetranorbornyl complexes made from the divalent and trivalent metal halides of Cr, Mn, Fe and Co form negatively charged complexes that then undergo oxidation induced by other transition-metal species in the reaction. Factors that lead to disproportionation are traditionally considered to derive from the tertiary carbanion ligand, 1-norbornyllithium, and from the inability of the pentane solvent to act as a ligand. Because of the crowded coordination sphere of the metal center, metal tetranorbornyls composed of first-row transition metals cannot be penetrated by small reagents. Tetrakis(1-norbornyl)cobalt(IV) Tetrakis(1-norbornyl)cobalt(IV) is a thermally stable homoleptic complex with σ-bonding ligands. It was the first isolated low-spin complex with tetrahedral molecular geometry. The tetrakis(1-norbornyl)cobalt(IV) complex was first synthesized by Barton K. Bower and Howard G. Tennent in 1972. Oxidation of tetrakis(1-norbornyl)cobalt(IV) is reversible when O2 is used as the oxidizing agent. The coordination environment of the cobalt metal center has a distorted tetrahedral structure. When examined by X-ray crystallography, the metal tetranorbornyl has crystallographic Cs symmetry due to the presence of six carbons lying on the mirror plane. However, the four carbon atoms bonded to the cobalt metal center resemble a tetragonally compressed tetrahedron, which appears as a pseudo-D2d symmetry. The cobalt metal center in the +4 oxidation state has a d5 configuration. A d5 configuration would typically give a high-spin complex containing five unpaired electrons, whereas the low-spin tetrahedral complex contains only one unpaired electron. The single unpaired electron resides in the antibonding t2 orbital, which would cause the structure to experience a Jahn-Teller distortion. However, Theopold and co-workers speculated that the slight tetragonal compression could instead be a result of steric interactions between norbornyl ligands and crystal packing forces. Tetrakis(1-norbornyl)iron(IV) The tetrakis(1-norbornyl)iron(IV) complex was first synthesized by Barton K. Bower and Howard G. Tennent in 1972. The 1-norbornyl ligands on the complex have a strong dispersion attraction and high ring strain, which consequently hinder the α- and β-hydride elimination reactions. Additionally, the identical ligands reduce chemical reactivity by creating a crowded chemical environment that impedes the interaction of small molecules with the Fe-C bonds. Synthesized complexes Barton K. Bower and Howard G. Tennent successfully synthesized and characterized the following metal tetranorbornyls derived from the first-, second-, and third-row transition metals: tetrakis(1-norbornyl)hafnium tetrakis(1-norbornyl)zirconium tetrakis(1-norbornyl)titanium tetrakis(1-norbornyl)vanadium tetrakis(1-norbornyl)chromium tetrakis(1-norbornyl)manganese tetrakis(1-norbornyl)iron tetrakis(1-norbornyl)molybdenum The metal tetranorbornyl complexes of hafnium, zirconium, titanium, and vanadium display a tetrahedral molecular geometry, analogous to the tetrachloride forms of these metals. In comparison, the cobalt, manganese, and iron complexes display a tetragonal molecular geometry.
A combination of London dispersion forces and steric effects from the 1-norbornyl ligands results in the stability observed for the metal center. Characterization Magnetic measurements The molecular geometry of the metal tetranorbornyl complexes reflects the pairing of the d electrons. Magnetic measurements have indicated that the d electrons of tetrakis(1-norbornyl)chromium (d2) and tetrakis(1-norbornyl)manganese (d3) are not spin-paired, whereas the four d electrons of tetrakis(1-norbornyl)iron and of tetrakis(1-norbornyl)cobalt are spin-paired. Electron paramagnetic resonance spectroscopy Metal tetranorbornyls are commonly characterized via electron paramagnetic resonance (EPR) spectroscopy. Tetrakis(1-norbornyl)molybdenum showed a room-temperature EPR signal originating from a d2 metal center, which was considered to have two unpaired electrons in the eg orbital; the EPR signal of tetrakis(1-norbornyl)chromium was comparable. Cyclic voltammetry In 1988, Klaus H. Theopold and Erin K. Byrne performed cyclic voltammetry to determine how oxidizing the metal center of the tetrakis(1-norbornyl)cobalt(IV) complex was. Two reversible electron transfer waves at -0.65 and -2.02 V were observed in THF; the difference in peak potentials was consistent with two one-electron transfer processes when compared to the ferricenium/ferrocene couple. In the same year, William M. Davis, Richard R. Schrock, and Richard M. Kolodziej produced a cyclic voltammogram for tetrakis(1-norbornyl)molybdenum. Two oxidation waves were observed at -0.15 and +1.25 V in DCM. The oxidation at -0.15 V was considered reversible, whereas the second oxidation at +1.25 V was considered irreversible. References Norbornanes Organometallic compounds
Metal tetranorbornyl
[ "Chemistry" ]
2,201
[ "Organic compounds", "Organometallic compounds", "Organometallic chemistry", "Inorganic compounds" ]
73,228,040
https://en.wikipedia.org/wiki/Potassium%20stearate
Potassium stearate is a metal-organic compound, the potassium salt of stearic acid, with the chemical formula C17H35COOK. The compound is classified as a metallic soap, i.e. a metal derivative of a fatty acid. Synthesis Potassium stearate may be prepared by saturating a hot alcoholic solution of stearic acid with alcoholic potash. Physical properties The compound forms colorless crystals. It is slightly soluble in cold water, soluble in hot water and ethanol, and insoluble in ether, chloroform, and carbon disulfide. It is a component of liquid soap. Uses The compound is primarily used as an emulsifier in cosmetics and in food products. It is also used as a cleansing ingredient and lubricant. Hazards It causes skin irritation and serious eye irritation. References Stearates Potassium compounds
Potassium stearate
[ "Chemistry" ]
167
[ "Inorganic compounds", "Inorganic compound stubs" ]
73,229,635
https://en.wikipedia.org/wiki/Leucocoprinus%20antillarum
Leucocoprinus antillarum is a species of mushroom-producing fungus in the family Agaricaceae. Taxonomy It was described in 2021 by the mycologists Alfredo Justo, Claudio Angelini and Alberto Bizzi, who classified it as Leucocoprinus antillarum. Description Leucocoprinus antillarum is a small dapperling mushroom with thin flesh. Cap: 3.5-4.5 cm wide, starting ovate before expanding to conical-campanulate and then flattening to convex or plano-convex, finally developing a slight depression with maturity, with or without a small umbo. The surface is completely white with velvety scales when immature, but the umbo develops a brownish-ochre or yellowish-ochre colour as it matures, and white to brownish-grey or ochre fibrillose scales are scattered sparsely across the rest of the cap surface, becoming sparser towards the margins. The surface background remains white but with a pinkish-grey tint around the centre fading to white towards the margins, which have striations that reach around halfway up the cap surface. Gills: Free, fairly crowded and white. Stem: 4.5–6 cm tall and 2.5-5mm thick, cylindrical with an only slightly wider base. The surface is whitish with white fibrillose scales on the bottom half and white rhizomorphs at the base. The ascending, membranous stem ring is white but sometimes develops a greyish-ochre edge. Spores: Ovoid to amygdaliform without a germ pore. Dextrinoid and metachromatic. (5) 5.5-7 x 3.5-5 (5.5) μm. Basidia: 15-27 x 8-12 μm. Smell: Indistinct. Etymology The specific epithet antillarum derives from the Antilles, in which the species was found. Habitat and distribution The species was discovered in the Dominican Republic, where it grows gregariously on the ground amongst leaf litter in deciduous woodland from August to January. Phylogenetic data has also suggested that the species is present in Brazil, Panama and Guadeloupe, based on comparison with other specimens collected. Similar species Leucocoprinus fuligineopunctatus is similar but is distinguished by finer, darker scales that form a more defined central disc, as well as a yellowish colour towards the bottom of the stem. Lepiota phaeosticta produces smaller mushrooms with darker grey-black scales and grows on wood. Lepiota nigropunctata is distinguished by smaller mushrooms and smaller spores. Lepiota tepeitensis also has smaller spores. Lepiota subclypeolaria has similar macroscopic features but has white scales on the cap surface that extend beyond the umbo. It is also distinguished by microscopic details. These similar Lepiota species may actually be Leucocoprinus or Leucoagaricus species that have yet to be reclassified. References antillarum Fungi described in 2021 Fungi of the Caribbean Fungus species
Leucocoprinus antillarum
[ "Biology" ]
633
[ "Fungi", "Fungus species" ]
73,230,022
https://en.wikipedia.org/wiki/Leucocoprinus%20fuligineopunctatus
Leucocoprinus fuligineopunctatus is a species of mushroom-producing fungus in the family Agaricaceae. Taxonomy It was described in 2021 by the mycologists Alfredo Justo, Claudio Angelini and Alberto Bizzi, who classified it as Leucocoprinus fuligineopunctatus. Description Leucocoprinus fuligineopunctatus is a small dapperling mushroom with thin white flesh. Cap: 2–4 cm wide, starting cylindrical to ovate before expanding to conical-campanulate and then maturing to convex or flat, sometimes with a slight depression in the centre. The surface is white with a dark brown to blackish-brown velvety umbo and scattered small brown or sooty specks radiating out from the centre, becoming more sparse towards the margins. These scales are more distinct on the ridges of the striations, which radiate from the margins almost to the centre of the cap. Gills: Free, white, close to subdistant. Subventricose with an entire edge that is also white. Stem: 2.5-5.5 cm tall and 2-3mm thick. It is slender and cylindrical with a slight curve towards the base, which is not significantly wider than the rest of the stem. The surface is smooth and whitish in the top half but with a yellowish colour below the ring, becoming more intense towards the base, where white rhizomorphs may also be present. The membranous stem ring is small and white. Spores: Ovoid to ellipsoidal or amygdaliform, without a germ pore. Dextrinoid and metachromatic. (5.0) 5.5-7 (7.5) x 3.5-4.5 (5) μm. Smell: Indistinct. Etymology The specific epithet fuligineopunctatus derives from the Latin fuliginosus, meaning a dirty brown or sooty colour, and punctatus, meaning with spots. This is in reference to the brown-grey colour of the punctate scales that are scattered across the cap. Habitat and distribution The species was discovered in the Dominican Republic, where it was found growing gregariously on leaf litter in deciduous woodland close to the beach during November. It has also been found in botanical gardens. Similar species Leucocoprinus microlepis is very similar in appearance and likewise has a yellowish stem towards the base, with similar scales on the cap; these may appear lighter but could still be confused. Its cap is smaller, at 1–2 cm wide, but could be mistaken for that of a small specimen of L. fuligineopunctatus. The spore size is also in a similar range, so microscopic study of the cheilocystidia or genetic sequencing may be required to confidently distinguish these species. Lepiota nigropunctata is distinguished by smaller mushrooms and smaller spores. Leucocoprinus cristatulus and Leucocoprinus revolutus are distinguished by smaller mushrooms (1 cm wide), black or grey scales and smaller spores. Leucocoprinus phaeopus likewise has small (1 cm wide) mushrooms but has larger spores. However, there still exist numerous undescribed species which could appear similar. References fuligineopunctatus Fungi described in 2021 Fungi of the Caribbean Fungus species
Leucocoprinus fuligineopunctatus
[ "Biology" ]
708
[ "Fungi", "Fungus species" ]
73,232,072
https://en.wikipedia.org/wiki/Tuza%27s%20conjecture
Tuza's conjecture is an unsolved problem in graph theory, a branch of mathematics, concerning triangles in undirected graphs. Statement In any graph $G$, one can define two quantities $\nu(G)$ and $\tau(G)$ based on the triangles in $G$. The quantity $\nu(G)$ is the "triangle packing number", the largest number of edge-disjoint triangles that it is possible to find in $G$. It can be computed in polynomial time as a special case of the matroid parity problem. The quantity $\tau(G)$ is the size of the smallest "triangle-hitting set", a set of edges that includes at least one edge from each triangle. Clearly, $\nu(G) \le \tau(G) \le 3\nu(G)$. For the first inequality, $\nu(G) \le \tau(G)$, any triangle-hitting set must include at least one edge from each triangle of the optimal packing, and none of these edges can be shared between two or more of these triangles because the triangles are disjoint. For the second inequality, $\tau(G) \le 3\nu(G)$, one can construct a triangle-hitting set of size $3\nu(G)$ by choosing all edges of the triangles of an optimal packing. This must hit all triangles in $G$, even the ones not in the packing, because otherwise the packing could be made larger by adding any unhit triangle. Tuza's conjecture asserts that the second inequality is not tight, and can be replaced by $\tau(G) \le 2\nu(G)$. That is, according to this unproven conjecture, every undirected graph has a triangle-hitting set whose size is at most twice the number of triangles in an optimal packing. History and partial results Zsolt Tuza formulated Tuza's conjecture in 1981. If true, it would be best possible: there are infinitely many graphs for which $\tau(G) = 2\nu(G)$, including all of the block graphs whose blocks are cliques of 2, 4, or 5 vertices. The conjecture is known to hold for planar graphs, and more generally for sparse graphs of degeneracy at most six. (Planar graphs have degeneracy at most five.) It is also known to hold for graphs of treewidth at most six, for threshold graphs, for sufficiently dense graphs, and for chordal graphs that contain a large clique. For random graphs in the Erdős–Rényi–Gilbert model, it is true with high probability. Although Tuza's conjecture remains unproven, the bound can be improved, for all graphs, to $\tau(G) \le \tfrac{66}{23}\nu(G) \approx 2.87\nu(G)$. See also Mantel's theorem Triangle removal lemma References External links Unsolved problems in graph theory
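For small graphs, both quantities can be computed by brute force, which makes the inequalities easy to verify experimentally. The following Python sketch is illustrative only (its running time is exponential, and the function names are arbitrary); it computes $\nu$ and $\tau$ by exhaustive search:

```python
from itertools import combinations

def triangles(edges):
    """All triangles of an undirected graph, each as a tuple of its 3 edges."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return [(frozenset({u, v}), frozenset({u, w}), frozenset({v, w}))
            for u, v, w in combinations(sorted(adj), 3)
            if v in adj[u] and w in adj[u] and w in adj[v]]

def nu(edges):
    """Triangle packing number: maximum number of edge-disjoint triangles."""
    tris = triangles(edges)
    for k in range(len(tris), 0, -1):
        for packing in combinations(tris, k):
            used = [e for t in packing for e in t]
            if len(used) == len(set(used)):  # pairwise edge-disjoint
                return k
    return 0

def tau(edges):
    """Size of the smallest triangle-hitting set of edges."""
    tris = triangles(edges)
    E = [frozenset(e) for e in edges]
    for k in range(len(E) + 1):
        for hitting in map(set, combinations(E, k)):
            if all(any(e in hitting for e in t) for t in tris):
                return k

K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(nu(K4), tau(K4))  # 1 2 -- tau = 2*nu, so the conjectured bound is tight here
```

The complete graph on four vertices used in the example is one of the block-graph cases mentioned above for which the conjectured factor of two cannot be improved.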
Tuza's conjecture
[ "Mathematics" ]
489
[ "Unsolved problems in mathematics", "Mathematical problems", "Unsolved problems in graph theory" ]
73,233,518
https://en.wikipedia.org/wiki/Algorithmic%20curation
Algorithmic curation is the selection of online media by recommendation algorithms and personalized searches. Examples include search engine and social media products such as the Twitter feed, Facebook's News Feed, and Google Personalized Search. Curation algorithms are typically proprietary "black box" systems, leading to concern about algorithmic bias and the creation of filter bubbles. See also Algorithmic radicalization Ambient awareness Influence-for-hire Social bot Social data revolution Social influence bias Social media bias Social media intelligence Social profiling Virtual collective consciousness References Social media Mass media monitoring Social influence
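As a toy illustration of the idea (not any platform's actual algorithm; the field names, weights, and scoring rule below are invented for the example), a curation step can be as simple as ranking candidate posts by a personalized score:

```python
from datetime import datetime, timezone

def curate(posts, user_interests, feed_size=10):
    """Rank candidate posts by topic match plus a recency penalty."""
    now = datetime.now(timezone.utc)

    def score(post):
        match = len(set(post["topics"]) & user_interests)  # personalization signal
        age_hours = (now - post["posted_at"]).total_seconds() / 3600
        return match - 0.1 * age_hours  # newer, on-topic posts rank higher

    return sorted(posts, key=score, reverse=True)[:feed_size]

example = [{"topics": {"cats", "tech"},
            "posted_at": datetime(2023, 1, 1, tzinfo=timezone.utc)}]
print(curate(example, user_interests={"tech"}))
```

Production systems combine many more engagement signals inside models whose weights are not public, which is exactly what drives the "black box" concern noted above.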
Algorithmic curation
[ "Technology" ]
114
[ "Computing and society", "Social media" ]