Dataset columns:
id: int64 (39 – 79M)
url: string (lengths 31 – 227)
text: string (lengths 6 – 334k)
source: string (lengths 1 – 150)
categories: list (lengths 1 – 6)
token_count: int64 (3 – 71.8k)
subcategories: list (lengths 0 – 30)
67,367,760
https://en.wikipedia.org/wiki/NGC%204179
NGC 4179 is a lenticular galaxy located in the constellation Virgo. It was discovered by William Herschel on January 14, 1784. It is a member of the NGC 4179 Group of galaxies, which is a member of the Virgo II Groups, a series of galaxies and galaxy clusters strung out from the southern edge of the Virgo Supercluster. References External links Virgo (constellation) 4179 Lenticular galaxies Virgo Cluster 038950
NGC 4179
[ "Astronomy" ]
96
[ "Virgo (constellation)", "Constellations" ]
67,368,474
https://en.wikipedia.org/wiki/Freshwater%20shoreline%20management
Freshwater shoreline management involves assessing and protecting lakes, rivers, and other freshwater shorelines from excessive development and other anthropogenic disturbances. Shoreline management involves the long-term monitoring of watershed and shoreline revitalisation projects. Freshwater shoreline management is frequently run by local conservation authorities through state, provincial, and federal lake partner programs. These programs have been used as a method of tracking shoreline change over time, determining areas of concern, and educating shoreline property owners. History The concept of freshwater shoreline management evolved from ideas developed for Integrated Coastal Zone Management (ICZM), which emerged from the 1992 United Nations Conference on Environment and Development. In Canada, a coastal zone management plan was completed by 1996 using the ICZM framework. Freshwater management programs then built on the coastal zone management plan to create freshwater management plans addressing the environmental concerns that had been voiced in Canadian society since the 1960s. Anthropogenic effects on watersheds increased globally through the twentieth century, with nutrient loading of phosphorus, nitrogen, and sulfur causing eutrophication and acidification of water bodies. These effects are primarily caused by the human development of shorelines, agricultural runoff of chemicals and fertilizers, human litter, and sewage/wastewater. To manage these impacts, local and regional organizations began conducting watershed monitoring programs to detect long-term environmental changes and establish their causes. Usage Anthropogenic effects on lakes, such as freshwater usage, shoreline development, recreational use, agriculture, and retaining walls, can negatively impact aquatic and terrestrial organisms that rely on the shoreline of a lake for habitat.
These anthropogenic effects can also cause eutrophication and acidification of lakes, which harms organisms within the water itself and can also harm human health. They can decrease property values and tourism in lake communities when pollutants make beaches unsafe for swimming. Because it can be adapted to the needs of a watershed and applied to the current land use nearby, freshwater shoreline management lends itself to community-based monitoring. The Lake Ontario Shoreline Management Plan is an example of how communities can use freshwater shoreline management. Programs such as this were developed by conservation authorities and citizens alongside regional and provincial governments to perform shoreline mapping and assessment, conduct public consultation and education, and implement long-term monitoring of the watershed and shoreline. The Muskoka Watershed Council has also performed shoreline assessments using the Love Your Lakes Program to survey the shoreline of Lake Bella in the Muskoka District. The survey showed that the natural shoreline decreased from 96% in 2002 to 80% in 2007, degrading overall water quality by allowing increased nutrient runoff and reducing biodiversity by removing habitat for fish, insects, and birds. This program has increased local education on lake health and stewardship of revitalizing shorelines. Climate Change Impacts Climate change has been found to affect freshwater shoreline communities. Warming water bodies, increased storm runoff, earlier yearly ice melt, reduced winter ice cover, and higher waves during storms, which increase the potential for erosion, were all found to affect lake shorelines. Shoreline management has been identified as a method to mitigate climate change impacts such as potential flooding and nutrient loading from frequent and higher-intensity storms.
This mitigation can occur as shorelines naturalize, which can increase filtration and decrease sediment and nutrient runoff. Example: Love Your Lakes Program The Love Your Lakes Program is an example of a shoreline assessment and revitalization program used in Canada. It was developed under the Canadian Ministry of Environment and Climate Change (MECC) Lake Partner Program as a joint effort between Watersheds Canada, MECC, and the Canadian Wildlife Federation. The program allows lake owners and organizations to apply to have their shorelines assessed and discusses methods that individuals and the community can use to revitalize their shorelines. Naturalization, the use of native plant species along the shoreline to create a buffer, is often recommended as it limits erosion from wake action and can decrease nutrient runoff from lawn maintenance or farming activities. To date, almost 200 lakes have been assessed by the program. This has led to increased community awareness and shoreline naturalization, which has transformed up to 300 shoreline properties. References Coastal geography Coastal engineering Environmental impact in the United States Environmental impact in Canada
Freshwater shoreline management
[ "Engineering" ]
876
[ "Coastal engineering", "Civil engineering" ]
67,371,207
https://en.wikipedia.org/wiki/Elly%20Schwab-Agallidis
Elly Schwab-Agallidis (born Elly Agallidis, , ; – ) was a Greek physicist/physical chemist and one of the first women in Greece to be awarded a PhD in the field. She was the wife of Georg-Maria Schwab, who met her in Munich as the supervisor of the experimental work for her doctoral thesis; the couple then worked together as researchers in the Kanellopoulos Institute after they emigrated to Greece. Her most famous work concerned the properties and reactivity of parahydrogen. Biography Elly Agallidis was born in 1914 to a middle-class family of Athens; she was the first child of Ioannis Agallidis and Maria-Edith Agallidis (née Zannou). She graduated with a degree in Physics from the University of Athens in 1934 and continued with postgraduate studies in the Physical Chemistry Laboratory of the University of Munich, then under the direction of Heinrich Otto Wieland. It was there that she met Georg-Maria Schwab, her future husband, who suggested that she examine parahydrogen and supervised her experimental work. Schwab was banned from teaching in Nazi Germany due to his half-Jewish origin. With the increasing fear of persecution, he decided in 1939 to emigrate to Elly's homeland, Greece. Agallidis and Schwab married in Athens the same year. Schwab-Agallidis was able to find work for them both in the chemical laboratory of the Kanellopoulos Institute of Chemistry and Agriculture, where the couple collaborated on various topics of physico-chemical research for the next ten years (1939–1949). Among those topics Schwab-Agallidis continued her work on the properties of parahydrogen, for which she received her PhD from the Department of Physics of the University of Athens in 1939, and she published multiple relevant papers in the following years. During the same period she also delivered lectures on Physical Chemistry at the University of Athens.
After a difficult period for the couple during the Axis occupation of Greece and the resumption of their research after the liberation of Greece, the two scientists eventually returned to West Germany when Schwab was offered the Professorship of Physical Chemistry at the University of Munich in 1951. Elly Schwab-Agallidis died in Essen at the age of 92 in 2006. References Greek chemists Greek women chemists 20th-century Greek physicists Greek women physicists 1914 births 2006 deaths Physical chemists Greek emigrants to Germany Scientists from Athens
Elly Schwab-Agallidis
[ "Chemistry" ]
519
[ "Physical chemists" ]
67,371,380
https://en.wikipedia.org/wiki/Mar%C3%ADa%20Teresa%20Miras%20Portugal
María Teresa Miras Portugal (19 February 1948 – 27 May 2021) was a Spanish scientist, pharmacist, biochemist, molecular biologist and Emeritus professor at the Complutense University of Madrid. She was a member of the Spanish "Real Academia Nacional de Farmacia" and served as President of this institution from 2007 to 2013, becoming the first woman to be elected to this position in a Spanish "Real Academia"; she was later named Honorary President. She was also a member of several scientific institutions such as the Spanish Biophysical Society, the Spanish Society of Biochemistry and Molecular Biology, the Spanish Society of Neuroscience, the European Society for Neurochemistry, the International Society for Neurochemistry, the Advisory Board of Chromaffin Cells, the Purinergic Club, the editorial board of the Journal of Neurochemistry, the IUPHAR sub-committee for the nomenclature of P2Y nucleotide receptors and the Scientific Panel of NATO. Biography María Teresa Miras Portugal was born in 1948 in O Carballiño (Orense), where she completed her primary and secondary studies. She began her university studies in Pharmacy at the University of Santiago de Compostela and continued them at the Complutense University of Madrid. Her marks in her licentiate degree earned her a Special Mention at the national level. She completed PhDs in Sciences at the University of Strasbourg and in Pharmacy at the Complutense University of Madrid. She later became a professor of Biochemistry and Molecular Biology at the University of Oviedo, the University of Murcia and the Complutense University of Madrid. In her more than 40 years engaged in research, she focused on the study of nucleotide receptors and their impact on neurodegenerative diseases and published more than 350 research articles in specialized journals, combining this research with teaching and institutional work.
In 2012, she was appointed president of the Committee of Experts for the study of the need for reforms in the Spanish university system. Awards In 2005 she received the Alberto Sols Medal for Research in Biochemistry; in 2008, the María Josefa Wonenburger Planells Prize from the Xunta de Galicia; in 2011, the Miguel Catalán Research Award from the Community of Madrid for her professional career; and in 2016, the Castelao Medal from Galicia. She received honorary doctorates from the University of Murcia and the King Juan Carlos University and was an honorary member of the Academia Nacional de Farmacia y Bioquímica de Argentina, the Académie Nationale de Pharmacie de France, the Académie Européenne des Sciences des Arts et des Lettres and the European Academy (physiology and medicine). References 1948 births 2021 deaths Educators from Galicia (Spain) Spanish women scientists Spanish molecular biologists Spanish biochemists Spanish pharmacists Neuropharmacology Academic staff of the Complutense University of Madrid Scientists from Galicia (Spain)
María Teresa Miras Portugal
[ "Chemistry" ]
598
[ "Pharmacology", "Neuropharmacology" ]
67,371,587
https://en.wikipedia.org/wiki/Kirkhill%20Astronomical%20Pillar
The Kirkhill Astronomical Pillar was constructed in 1776 by David Stewart Erskine, 11th Earl of Buchan and erected in the grounds of his estate at Kirkhill House, near Broxburn, Scotland. The pillar fell into disrepair and eventually collapsed in the 1970s, but the stones were preserved and the pillar was reconstructed in 1988 in Almondell Country Park on land once owned by the Erskine family. The pillar records the details of an adjacent scale model of the Solar System constructed by Erskine following the measurements of the size of the Solar System deduced from observations of the transits of Venus in 1761 and 1769. The model, centred on a Sun of stone six feet in diameter with planets at distances and sizes to scale, has long since disappeared; only the pillar remains. Erskine and science As a young child Erskine was taught at home by his parents, both of whom had studied (and met each other) in the classes of the famous mathematician Colin Maclaurin at Edinburgh University. They also employed a private tutor, James Buchanan, a graduate of Glasgow University, well versed in mathematics and languages. Under the guidance of this trio he developed a lifelong interest in mathematics and astronomy. At the age of 13, Erskine entered St. Andrews University (1755–1759) and then continued to Edinburgh University (1760–1762) and finally Glasgow University (1762–63). Although Erskine's later intellectual activities were dominated by his investigation of Scottish antiquities, he remained interested in science and mathematics. He was honoured by election to the Royal Society of London in 1765. At that time he was living in London, and at meetings of the society he would have heard much of the following topical astronomical problem. How far is the Sun?
By the beginning of the eighteenth century the Copernican model of a heliocentric Solar System was well established and astronomers such as Tycho Brahe and Johannes Kepler had made it possible to describe the motions of the planets with ever greater precision. However, no one knew the absolute size in miles (or any other units) of the Solar System, although the solar distances of the planets could all be expressed as definite ratios of the Earth–Sun distance by using Kepler's laws. This fundamental distance is termed the Astronomical Unit (AU). The breakthrough came in 1639 when Jeremiah Horrocks made the first scientific observation of a transit of Venus and used his results to derive an approximate value for the AU. A second method, proposed in 1663 by the Scottish mathematician James Gregory, was promoted by Edmond Halley in a paper published in 1691 (revised 1716). He demonstrated how the AU could be measured very accurately by comparing the duration of the Venus transit across the face of the Sun as measured by two observers spaced at latitudes a few thousand kilometres apart. The next opportunities of observing such a transit were in 1761 and 1769, but Halley had died in 1742 and it was left to others to organise observations in the first ever major international scientific collaboration. The event of 1761 produced sparse results because travel overseas was greatly hindered by the Seven Years' War, but in 1769 many observers were again despatched all over the world, amongst them Captain James Cook on behalf of the Royal Society of London. Various pairs of observation results were input into Halley's calculations, giving many slightly different values, and a mean value of the AU was published shortly afterwards in the Philosophical Transactions of the Royal Society. The result was 93,726,900 miles, within one per cent of the presently accepted value of 92,955,807 miles.
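As a quick arithmetic check of the one-per-cent claim, a minimal sketch using only the two figures quoted above (the variable names are illustrative, not from any source):

```python
# Compare the mean AU published after the 1769 transit with the
# presently accepted value, both in miles as quoted in the text.
TRANSIT_AU_MILES = 93_726_900  # mean value from the 1769 transit campaign
MODERN_AU_MILES = 92_955_807   # presently accepted value

# Relative error of the eighteenth-century result against the modern value.
error = abs(TRANSIT_AU_MILES - MODERN_AU_MILES) / MODERN_AU_MILES
print(f"relative error: {error:.2%}")  # relative error: 0.83%
```

The difference of roughly 771,000 miles works out to about 0.83%, confirming the "within one per cent" statement.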
In Scotland, both transits were observed by Erskine's friend and neighbour, Reverend Alexander Bryce, minister of the church at Kirknewton, only 3 miles from Kirkhill. Bryce was a competent mathematician and he calculated the AU and the other distance parameters of the Solar System: it is these values that Erskine used to create his scale model of the Solar System. The epitome In his 'Account of the Parish of Uphall', Erskine writes: The scale appears unusual but it followed simply from Bryce's calculation of the diameter of the Sun as 884,396 miles and Erskine's arbitrary choice of a representation of the Sun by a freestone spheroid 6 feet, or 72 inches, in diameter. Dividing 884,396 by 72 gives 12,283.28 miles to one inch, or 778,268,621:1. Of the six planets known in the eighteenth century Jupiter and Saturn were modelled in stone, the latter having an iron band, and the smaller planets were made of bronze: all were mounted on plinths or pillars in the grounds of the Kirkhill estate at the correct scaled distance from the Sun. Primrose, writing in 1898, says that only a few of the plinths remained in his day. The table giving the dimensions of his representation is carved into the east face of the stone pillar, or belfry; it is barely legible now, but the details are preserved in the Uphall account. Planet diameters and distances on the pillar are reproduced here, along with the values obtained by scaling inches up to miles, by a factor of 12,283.28. Modern values are shown for comparison. Details for the moons of Jupiter and Saturn have been omitted. Calculation of the values in the table starts from the new value of the AU calculated by Bryce. Kepler's Laws then give the solar distance (in miles) for every planet and therefore, given the actual dimensions of the orbits, it is straightforward to calculate the distance of any planet from Earth at the time of any observation. 
Then, using the observed angular sizes of the Sun and the planets he could deduce their diameters in miles. To fit the data on the table Bryce must have calculated the value for the AU to be 95,072,587 miles. This value is greater than the modern (average) value of 93,000,000 miles. This largely accounts for the discrepancies in Erskine's data for distances and diameters. The third, fourth and fifth columns of the pillar are reproduced in a second table below. It shows that the eccentricities of the planets and their inclinations to the ecliptic were quite well known at the time. (In the table Erskine's eccentricity value 80)387( is simply the fraction 80/387 and this has been replaced by the decimal 0.207 etc.). Eccentricity and inclination are the essential parameters for working out the motions of the planets. No values are given for the orbit inclinations to the ecliptic for Mars and Jupiter, the space on the table having been utilised for a comment on the moons of Jupiter. The last pair of columns refers to what Erskine terms the inclination: the angle between the planet's rotation axis and the plane of the orbit. Nowadays the term axial tilt is used by astronomers: it defines the angle between the rotation axis and the normal to the plane of the orbit and it is equal to 90 degrees minus Erskine's inclination. The values for Mercury and Venus are omitted on the pillar. The final column on the pillar is a prediction of where the planets will be on May 20, 2255. The heliocentric places within the zodiac constellations define an angle now termed the heliocentric ecliptic longitude. Both are measured from the point in the sky where Aries begins. Each constellation covers 30 degrees whereas the longitude covers the whole 360 degrees spanned by all 12 constellations. The order of zodiac constellations is Aries, Taurus, Gemini, Cancer, Leo, Virgo, Libra, Scorpio, Sagittarius, Capricorn, Aquarius, and Pisces.
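The sign-plus-degrees scheme described above can be sketched as a small conversion routine; the function name and structure are illustrative assumptions, not anything recorded on the pillar:

```python
# Convert a heliocentric place given as "degrees and minutes within a zodiac
# sign" into a decimal ecliptic longitude, using the 30-degrees-per-sign rule.
SIGNS = ["Aries", "Taurus", "Gemini", "Cancer", "Leo", "Virgo",
         "Libra", "Scorpio", "Sagittarius", "Capricorn", "Aquarius", "Pisces"]

def place_to_longitude(sign: str, degrees: int, minutes: int) -> float:
    """Each sign spans 30 degrees, measured from the start of Aries."""
    return SIGNS.index(sign) * 30 + degrees + minutes / 60

# Erskine's entry for Mercury: 9 degrees 40 minutes in Sagittarius.
print(round(place_to_longitude("Sagittarius", 9, 40), 3))  # 249.667
```

Sagittarius is the ninth sign, so eight full signs (240 degrees) precede it; adding 9°40′ gives the 249.667° quoted in the text.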
Therefore 9°40′ in Sagittarius for Mercury becomes a (decimal) longitude of 249.667°, etc. The significance of the year 2255 specified in the prediction is that it is a year in which a transit of Venus occurs; the eighth after that of 1769. During such a transit the Earth, Venus and the Sun must be closely aligned, in other words the heliocentric places (longitudes) of the planets must be very close, as shown by the predictions for the actual transit on 9 June 2255. Therefore, since Erskine gives heliocentric places for Venus and Earth differing by about 35°, he was clearly not predicting a transit for 20 May. There is no astronomical phenomenon associated with that day but it must have had some significance for Erskine, as yet unexplained. Other inscriptions on the pillar There are inscriptions on the four sides of the pillar but they are now difficult to read. Fortunately some are recorded in Erskine's history of Uphall and others in the account of the same parish by James Primrose. Most are in Latin, often abbreviated, but translations have been given by James Primrose in his chapter on Kirkhill. East Face This face has the table described in the previous section. Above the table is the quotation given at the beginning of the previous section where Erskine (Buchan) describes his construction and its scale. West Face An inscription in Latin: Jacobo Buchanano, Matheseos P. Glasg. Adolescentiae meae Custod. incorruptissimo has Amoenitates Academicas Manibus propriis dedicavi, inscripsi, sacraque esse volui. Anno ab ejus excessu XV. et a Christo natu MDCCLXXVII. Ille ego qui quondam patriae perculsus amore, Civibus oppressis, libertati succurrere ausim, Nunc Arva paterna colo, fugioque limina regum. Primrose gives the translation: To James Buchanan, Professor of Mathematics at Glasgow, the most incorruptible guardian of my youth, have I dedicated, inscribed with my own hands, these Academic Amenities, and I wish them to be sacred.
On the 15th year of his death and from the birth of Christ 1771, I who formerly animated by love of country, dared to succour liberty and oppressed citizens, now cultivate my paternal fields and shun the threshold of Kings. James Buchanan was the tutor and mentor of Erskine's early years. He died in 1761. South Face A quotation from Vergil's Georgics: which may be translated as "Pay homage to the heavenly sent land" or "The worthy glory of the Divine Country is abiding" Underneath the inscription is a large bow and arrow the significance of which is unknown, the sign for Scorpius, and an unidentified sign. North Face A long inscription gives abbreviated details of the location of the pillar and other points. Erskine gives a fuller version in his account of Uphall Parish. "The latitude of Kirkhill is 55°56'17" north, the west longitude in time from Greenwich Observatory is 13′ 59′′10′′′. The variation of the compass 1778 in June was 22°, the dip of the north end of the needle at the same time was 71°33'. The elevation above high water mark at Lieth (sic) when there is 12 feet of water in the harbour 273 feet; it is lower than the top of Arthurs Seat, 546 feet, lower than the Observatory on Calton Hill 83, than the top of the Castle Rock 290. West longitude in time from Edinburgh Observatory, 1°8"; east longitude in time from Glasgow Observatory, 3′11′′50′′′ - distance from Kirknewton Manse in Midlothian, 20,108 feet; north from Kirknewton Manse, 17,005 feet or 2′47′′ (arc); west from Kirknewton Manse, 10,680 feet or 12′′30′′′ in time." The mention of Kirknewton Manse links this inscription to its resident, Alexander Bryce, who provided the details of the epitome table. The latitude is in a conventional notation but the longitudes are defined in terms of time: 15 degrees of longitude corresponding to one hour. The Greenwich time separation from Kirkhill given as 13′ 59′′ 10′′′ (minutes, seconds, sixtieths) corresponds to longitude 3.496°W: the modern value is 3.46°W. 
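The time-to-longitude arithmetic used in the inscription (the Earth turns 360° in 24 hours, so 15° of longitude corresponds to one hour, or 0.25° to one minute, of time) can be sketched as follows; the function name is an illustrative assumption:

```python
# Convert a longitude expressed as a time offset in the pillar's notation
# (minutes, seconds, and sixtieths of a second of time) into degrees.
def time_to_degrees(minutes: int, seconds: int, sixtieths: int) -> float:
    total_minutes = minutes + seconds / 60 + sixtieths / 3600
    return total_minutes * 0.25  # 0.25 degrees of longitude per minute of time

# Kirkhill's offset west of Greenwich: 13 min 59 s 10 sixtieths of time.
print(f"{time_to_degrees(13, 59, 10):.4f}")  # 3.4965
```

This reproduces the 3.496°W quoted in the text for Kirkhill's longitude from Greenwich.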
Similarly time displacements of the observatories at Edinburgh and Glasgow should be read as 1′8′′ (not 1°8") and 3′11′′50′′′ respectively, corresponding to 17 and 48 arc minutes of longitude, or 11 and 31 miles. The distances from Kirknewton Manse to the pillar are direct, north and west: the latitude difference is 2′47″ (arc) and the longitude difference in time is 12′′30′′′, corresponding to 3.12 arc minutes of longitude. The height differences between the pillar and locations in Edinburgh are an interesting by-product of Bryce's survey of a canal from the city, past Kirkhill and on to Falkirk. Since there were to be no locks between the city and Broxburn, the height of the pillar was easily related to that of the canal terminus and hence to other known Edinburgh locations. Other inscriptions There are a number of other inscriptions which were close to the pillar. The globe representing the sun was engraved, in large Hebrew letters, with the question "What is man?" A plinth showing the Moon orbiting the Earth was inscribed "Newtono Magno". A small building near the pillar was inscribed "Keplero Felici". The approach to Kirkhill was guarded by pillars inscribed "Libertate quietate". On a triangular equilateral stone in Erskine's garden was the inscription, "Great are thy works, Jehovah, infinite thy power!" The model re-imagined In the years leading up to the 2012 transit a group of Scottish artists collaborated on an artistic realisation of Erskine's Solar System model. The Kirkhill Pillar Project was commissioned under the auspices of Artlink Edinburgh. The Sun is represented by a light box on the top of Broxburn Academy, within a few hundred metres of Erskine's own house. The artefacts representing the nine planets are distributed around the county of West Lothian at distances given by Erskine's scale. Mars and Jupiter are represented by small spheres mounted on plinths.
Mercury is represented by a cast iron replica of the cratered surface of the predominantly iron planet. Venus is represented by a schematic version of its transit over the face of the Sun. Earth, inspired by the blue and white image seen on early space missions, is represented by two planters containing blue and white flowers. Mars is a distinctive red sculpture in community woodland. A clear cast-acrylic block houses a painted model of the planet Jupiter. Saturn is represented by a technical image used by James Clerk Maxwell in his explanation of the structure and stability of the rings. Uranus is represented by a band suspended from two trees: it houses seven apertures which allow light to shine through. Neptune is captured as a blue orb in a lantern above the doors of Kingscavil church. Pluto is carved into black polished granite placed in Beecraigs Country Park. Images, further details and a map of locations may be found on the website of the Kirkhill project. See also Solar System models References Notes Citations Sources External links Pillar, Kirkhill and Erskine (Buchan) Photographs at https://holeousia.com/ Solar System Solar System models
Kirkhill Astronomical Pillar
[ "Astronomy" ]
3,191
[ "Solar System models", "Space art", "Outer space", "Solar System" ]
67,371,936
https://en.wikipedia.org/wiki/Epis
Epis (, ) is a blend of peppers, garlic, and herbs that is used as a flavor base for many foods in Haitian cuisine. Some refer to it as a pesto sauce. It is also known as epise and zepis. It is essential to Haitian cuisine. Background Epis has Taino and African origins. It also has similarities to sofrito, which is used in Hispanic cuisine. This use of a flavor base is common in Caribbean cuisine. Ingredients Epis often contains parsley, scallions, garlic, citrus juice, and Scotch bonnet peppers. Numerous recipes for epis exist, as traditionally, Haitian women would cook and keep their own personal epis recipe. Various regions also have different recipes. Preparation Traditionally, epis is made with a large wooden mortar and pestle (called munsh pilon). Today, it is often made with a blender. The ingredients are blended until the consistency is as smooth as desired. Use Epis can be used as a marinade for meat and fish, and it is added to flavor a number of Haitian dishes, including rice and beans, soups, and stews. It is a convenient way to bring the flavors of fresh herbs and spices into everyday cooking. Many Haitians keep epis on hand to be used for various dishes. Dishes Griot Haitian spaghetti Storage Epis can last up to three months in the refrigerator, but this time will vary depending on the ingredients that are used. The acidity helps keep the ingredients from spoiling. Epis will last indefinitely in the freezer and will not transfer its odor to other freezer items. The epis can be distributed in an ice cube tray and frozen, so that the frozen cubes can be used in future cooking. See also Sofrito Holy trinity Tempering (spices) Mirepoix Sauce References Cooking techniques Food ingredients Haitian cuisine Marinades
Epis
[ "Technology" ]
389
[ "Food ingredients", "Components" ]
67,373,736
https://en.wikipedia.org/wiki/Green%20transport%20hierarchy
The green transport hierarchy (Canada), street user hierarchy (US), sustainable transport hierarchy (Wales), urban transport hierarchy or road user hierarchy (Australia, UK) is a hierarchy of modes of passenger transport prioritising green transport. It is a concept used in transport reform groups worldwide and in policy design. In 2020, the UK government consulted about adding to the Highway Code a road user hierarchy prioritising pedestrians. It is a key characteristic of Australian transport planning. History The Green Transportation Hierarchy: A Guide for Personal & Public Decision-Making by Chris Bradshaw was first published September 1994 and revised June 2004. As part of a pedestrian advocacy group in the United States, he proposed the hierarchy ranking passenger transport based on environmental emissions. The reviewed ranking listed, in order: walking, cycling, public transport, car sharing, and finally private car. It was first prepared for Ottawalk and the Transportation Working Committee of the Ottawa-Carleton Round-table on the Environment in January 1992, only stating 'Walk, Cycle, Bus, Truck, Car'. Factors Mode Energy source Trip length Trip speed Vehicle size Passenger load factor Trip segment Trip purpose Traveller Adoption The author directed the hierarchy at both individual lifestyle choices and public authorities who should officially direct their resources – funds, moral suasion, and formal sanctions – based on the factors. Bradshaw described the hierarchy to be logical, but the effect of applying it to seem radical. The model rejects the concept of the balanced transportation system, where users are assumed to be free to choose from amongst many different yet ‘equally valid’ modes. This is because choices incorporating factors that are ranked low (walking, cycling, public transport) are seen as generally having a high impact on other choices. 
See also Alternatives to car use Bicycle-friendly Bill Boaks campaigned for pedestrian priority everywhere Car-free movement Complete streets Cycling advocacy Cyclability Health and environmental impact of transport Health impact of light rail systems Induced demand Jaywalking Peak car Planetizen Priority (right of way) Reclaim the Streets Road hierarchy Road traffic safety Settlement hierarchy Street hierarchy Street reclamation Sustainable transport Traffic bottleneck Traffic code Traffic conflict Traffic flow Transportation demand management Walkability Walking audit References External links Original 1992 paper Climate change policy Rules of the road Sustainable transport 1992 documents 1994 books 1992 in transport Hierarchy
Green transport hierarchy
[ "Physics" ]
458
[ "Physical systems", "Transport", "Sustainable transport" ]
67,374,419
https://en.wikipedia.org/wiki/Ritu%20Agarwal
Ritu Agarwal is an Indian-American management scientist specializing in management information systems. She is the Wm Polk Carey Distinguished Professor of Information Systems at Johns Hopkins University. Previously, she was the Senior Associate Dean for Faculty and Research and the Robert H. Smith Dean’s Chair of Information Systems at the Robert H. Smith School of Business. Agarwal was the Editor-in-Chief of Information Systems Research and the founder and director of the Center for Health Information and Decision Systems at the Smith School. Early life and education Agarwal earned her Bachelor of Arts degree in mathematics from St. Stephen's College, Delhi before moving to the United States for her graduate degrees at Syracuse University. Career Upon completing her PhD, Agarwal joined the faculty at the University of Dayton as an associate professor of management intelligence systems. In this role, she received a grant from Dayton's Intelligence Systems Applications Center to assist in the development of an artificial intelligence system to help small business owners craft strategic marketing plans. Agarwal eventually joined the faculty at the University of Maryland, College Park's Robert H. Smith School of Business in 1999. In 2010, Agarwal started the annual Conference on Health IT and Analytics (CHITA). She was also named the editor-in-chief of the journal Information Systems Research beginning January 1, 2011. In her first year as editor-in-chief, Agarwal became a Fellow of the Association for Information Systems, and also received the University of Maryland Distinguished Scholar-Teacher Award. By 2013, she was recognized as "one of the most widely-cited scholars in the field" and was elected a 2013 Distinguished Fellow for her outstanding intellectual contributions to the information systems field. Following the departure of Alexander Triantis in 2019, Agarwal was appointed interim dean of the Robert H. Smith School of Business.
While serving in this role, she was a 2019 recipient of the Association for Information Systems' Lyons Electronic Office (LEO) Award for her lifetime achievements in the field of information systems. During the COVID-19 pandemic, Agarwal and colleague Margrét Bjarnadóttir conducted a study titled Precision Therapy for Neonatal Opioid Withdrawal Syndrome, part of an effort to "solve big health care challenges through joint research that draws on the institutions’ world-leading expertise in medicine and artificial intelligence." The study sought to improve clinical decision making in the treatment of neonatal opioid withdrawal syndrome. She was subsequently appointed to serve on the Division of Acquired Immunodeficiency Syndrome (DAIDS) subcommittee of the National Institute of Allergy and Infectious Diseases and recognized as being in the top 2% of the most-cited scholars and scientists worldwide. Recognition Agarwal is a Fellow of the Institute for Operations Research and the Management Sciences, elected in the 2021 class of fellows. References External links Living people Management scientists University of Maryland, College Park faculty American academic journal editors St. Stephen's College, Delhi alumni Florida State University faculty Information systems researchers Year of birth missing (living people) Syracuse University College of Engineering and Computer Science alumni Martin J. Whitman School of Management alumni University of Dayton faculty Indian Institute of Management Calcutta alumni Fellows of the Institute for Operations Research and the Management Sciences Indian academic journal editors
Ritu Agarwal
[ "Technology" ]
652
[ "Information systems", "Information systems researchers" ]
47,555,253
https://en.wikipedia.org/wiki/Former%20headquarters%20of%20Banca%20Monte%20Parma
The former Sede Centrale della Banca del Monte di Parma was the headquarters of Banca Monte Parma, located at the corner of Piazzale Battisti and Strada Cavour in central Parma, in the Emilia-Romagna region of Italy. In 1978 the headquarters was moved to 1 Palazzo Sanvitale. The location at 3/A Strada Cavour remained the main branch in the city. In 2015 the building became a branch of Intesa Sanpaolo. The Modernist-style building, with an exterior of white marble and vertical windows, was built from 1968 to 1974. It was designed collaboratively by Pier Luigi Nervi, Giovanni Ponti, Antonio Fornaroli, and Alberto Rosselli. References Modernist architecture in Italy Buildings and structures in Parma Intesa Sanpaolo buildings and structures Pier Luigi Nervi buildings Office buildings completed in 1974
Former headquarters of Banca Monte Parma
[ "Engineering" ]
175
[ "Architecture stubs", "Architecture" ]
47,555,513
https://en.wikipedia.org/wiki/The%20Audience%20Engine
The Audience Engine is an announced open-source, customizable suite of fundraising tools for public radio being developed by the Congera Corporation, a subsidiary of WFMU Radio. It was conceived by and is being developed under the supervision of WFMU management, but as of November 2020 no product had been announced, demoed or released, effectively rendering the project vaporware. The platform is based on WFMU's own model of fundraising and listener-community relations, which began development in 1998 and which WFMU claims helps raise 70% of its annual $2.5 million operating budget via its website. The developers explain that "by pairing online content, real-time playlist information, social media, and community interaction tools directly with crowdfunding campaigns, WFMU has not only built a positive and intelligent online community, but also a sustainable model that can be adopted by other organizations." Besides radio, Audience Engine has potential uses in online television and journalism. The goal is to "enable organizations ... to build audiences and become self sufficient." A large part of Audience Engine's potential appeal is its tightly integrated fundraising capabilities. "Audience Engine comes with a set of tools that integrates crowdfunding-inspired donation tools throughout a publisher's site, with on and off-site widgets for donations as well as gift reward management, and a full suite of analytics underlying it all for that publisher to gain insight on what is and isn't raising money," noted Flanagan. Freedman observed that "Kickstarter did a great job of borrowing or stealing the concept of the pledge drive, and vastly improved it as well. Public media hasn't borrowed it back yet! That's what we're trying to do." Although aimed primarily at small and mid-sized radio stations, larger public radio stations such as WBUR and WNYC have considered harnessing the platform's possible uses in their operations. 
A draft of the platform was publicly debuted at a launch event held on November 5, 2015. Platform The platform is supposedly being built as modular APIs that utilize JavaScript and XML feeds, but will include modules that integrate into Drupal, which is used by many small news organizations. Part of the Audience Engine's philosophy is to retain the listener's or reader's attention on the station website, rather than redirect them to external social media. "Community based radio stations have to start thinking about online platforms that don’t effectively abandon discussion and networking to Twitter, Facebook, Reddit, or LinkedIn, and the rest of the usual suspects," said Matthew Lasar at Radio Survivor. "[O]nce your listeners and/or website readers are off to Twitter/Facebook-land, they’re all but gone. They’re not commenting on your podcast or stream or blog post in your house. They’re far far away, helping Mark Zuckerberg bring in that advertisement and audience data cash." Radio World described the mocked-up Audience Engine dashboard as featuring "a responsively designed social content page for radio and news sites, engineered for live, positive audience feedback and created with self-sustaining crowdfunding in mind. Both Web and mobile pages have a built-in, interactive second screen, with incentives for positive contributions, and tools for stopping disruptive behavior." The project’s proposed first module, a crowdfunding app called Mynte, was scheduled to launch in 2018, but nothing had appeared as of November 2020. Besides WFMU, potential early adopters of Audience Engine include WWOZ-FM, a New Orleans–based jazz and blues station; WSOU-FM (Seton Hall University); and WPRB-FM (Princeton University). Development team Early development of Audience Engine was undertaken by Bocoup, a developer of open-source web technologies which has collaborated with Google, Microsoft, Walmart, eBay, and Apple. 
Bocoup's involvement ended in January 2016, and the project was turned over to a team of independent developers under the supervision of WFMU. WordPress developers Joey Dehnert and Andrew Nealon at InsertCulture, a now-defunct development firm, helped develop the foundation of Audience Engine’s web platform. The Audience Engine project has received $500,000 in grant money over several years from the Geraldine R. Dodge Foundation to fund development of the software. As of March 2021, WFMU remains the sole user of Audience Engine, as development has gone "much slower than expected" and the software remains incomplete, despite its original target release date of 2020. Spinitron In 2016, Audience Engine's parent company Congera merged with Boston-based Spinitron LLC, a music-tracking software company providing SoundExchange, airplay playlist, and other copyright-compliance reporting. References External links Official Website Vaporware Social information processing Web development software Crowdfunding platforms of the United States Mass media companies of the United States
The Audience Engine
[ "Technology" ]
1,023
[ "Computer industry", "Vaporware" ]
47,556,172
https://en.wikipedia.org/wiki/NGC%2095
NGC 95 is a spiral galaxy located in the Pisces constellation. It was discovered by the astronomer William Herschel on October 18, 1784. The galaxy has several blue spiral arms surrounding a bright yellow nucleus, and is approximately 120,000 light years in diameter, making it only slightly larger than the Milky Way. References External links 0095 Intermediate spiral galaxies Pisces (constellation) 00214 001426 Astronomical objects discovered in 1784 Discoveries by John Herschel
NGC 95
[ "Astronomy" ]
99
[ "Pisces (constellation)", "Constellations" ]
47,556,282
https://en.wikipedia.org/wiki/Agaricus%20nebularum
Agaricus nebularum is a species of fungus in the genus Agaricus. Found in Chile, it was described as new to science in 1969 by mycologist Rolf Singer. See also List of Agaricus species References External links nebularum Fungi described in 1969 Fungi of Chile Taxa named by Rolf Singer Fungus species
Agaricus nebularum
[ "Biology" ]
66
[ "Fungi", "Fungus species" ]
47,556,310
https://en.wikipedia.org/wiki/Sacituzumab%20govitecan
Sacituzumab govitecan, sold under the brand name Trodelvy by Gilead Sciences, is a Trop-2-directed antibody and topoisomerase inhibitor drug conjugate used for the treatment of metastatic triple-negative breast cancer and metastatic urothelial cancer. The most common side effects include nausea, neutropenia, diarrhea, fatigue, anemia, vomiting, alopecia (hair loss), constipation, decreased appetite, rash and abdominal pain. Sacituzumab govitecan has a boxed warning about the risk of severe neutropenia (abnormally low levels of white blood cells) and severe diarrhea. Sacituzumab govitecan may cause harm to a developing fetus or newborn baby. Sacituzumab govitecan was approved for medical use in the United States in April 2020, and in the European Union in November 2021. The U.S. Food and Drug Administration (FDA) and the European Medicines Agency (EMA) consider it to be a first-in-class medication. Medical uses Sacituzumab govitecan is indicated for the treatment of adults with metastatic triple-negative breast cancer who received at least two prior therapies for metastatic disease; people with unresectable locally advanced or metastatic triple-negative breast cancer (mTNBC) who have received two or more prior systemic therapies, at least one of them for metastatic disease; and for people with locally advanced or metastatic urothelial cancer (mUC) who previously received a platinum-containing chemotherapy and either a programmed death receptor-1 (PD-1) or a programmed death-ligand 1 (PD-L1) inhibitor. It is also indicated for the treatment of people with unresectable locally advanced or metastatic hormone receptor (HR)-positive, human epidermal growth factor receptor 2 (HER2)-negative (IHC 0, IHC 1+ or IHC 2+/ISH-) breast cancer who have received endocrine-based therapy and at least two additional systemic therapies in the metastatic setting. 
Mechanism Sacituzumab govitecan is a conjugate of a humanized anti-Trop-2 monoclonal antibody linked with SN-38, the active metabolite of irinotecan; each antibody has on average 7.6 molecules of SN-38 attached. Linkage to an antibody allows the drug to specifically target cells expressing Trop-2. Sacituzumab govitecan is a Trop-2-directed antibody and topoisomerase inhibitor drug conjugate, meaning that the drug targets the Trop-2 receptor that helps the cancer grow, divide and spread, and is linked to a topoisomerase inhibitor, which is a chemical compound that is toxic to cancer cells. Approximately two of every ten breast cancer diagnoses worldwide are triple-negative. Triple-negative breast cancer is a type of breast cancer that tests negative for estrogen receptors, progesterone receptors and human epidermal growth factor receptor 2 (HER2) protein. Therefore, triple-negative breast cancer does not respond to hormonal therapy medicines or medicines that target HER2. Development Immunomedics announced in 2013 that it had received fast track designation from the US Food and Drug Administration (FDA) for the compound as a potential treatment for non-small cell lung cancer, small cell lung cancer, and metastatic triple-negative breast cancer. Orphan drug status was granted for small cell lung cancer and pancreatic cancer. In February 2016, Immunomedics announced that sacituzumab govitecan had received an FDA breakthrough therapy designation (a classification designed to expedite the development and review of drugs that are intended, alone or in combination with one or more other drugs, to treat a serious or life-threatening disease or condition) for the treatment of people with triple-negative breast cancer who have failed at least two other prior therapies for metastatic disease. History Sacituzumab govitecan was added to the proposed International nonproprietary name (INN) list in 2015, and to the recommended list in 2016. 
Sacituzumab govitecan-hziy was approved for medical use in the United States in April 2020. Sacituzumab govitecan-hziy was approved based on the results of IMMU-132-01, a multicenter, single-arm clinical trial (NCT01631552) of 108 participants with metastatic triple-negative breast cancer who had received at least two prior treatments for metastatic disease. Of the 108 participants in the study, 107 were female and 1 was male. Participants received sacituzumab govitecan-hziy at a dose of 10 milligrams per kilogram of body weight intravenously on days one and eight every 21 days. Treatment with sacituzumab govitecan-hziy was continued until disease progression or unacceptable toxicity. Tumor imaging was obtained every eight weeks. The efficacy of sacituzumab govitecan-hziy was based on the overall response rate (ORR), which reflects the percentage of participants that had a certain amount of tumor shrinkage. The ORR was 33.3% (95% confidence interval [CI], 24.6 to 43.1). Included in the 33.3% of study participants who achieved a response were the 2.8% of participants who experienced complete responses. The median time to response in participants was 2.0 months (range, 1.6 to 13.5), the median duration of response was 7.7 months (95% confidence interval [CI], 4.9 to 10.8), the median progression-free survival was 5.5 months, and the median overall survival was 13.0 months. Of the participants who achieved an objective response to sacituzumab govitecan-hziy, 55.6% maintained their response for six or more months and 16.7% maintained their response for twelve or more months. Sacituzumab govitecan-hziy was granted accelerated approval along with priority review, breakthrough therapy, and fast track designations. The U.S. Food and Drug Administration (FDA) granted approval of Trodelvy to Immunomedics, Inc. 
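The headline efficacy figures can be checked from the raw counts: a 33.3% ORR among 108 participants implies 36 responders, and a binomial confidence interval on 36/108 lands close to the reported 24.6–43.1% range. A minimal sketch using the Wilson score interval — note this is an assumption on my part; the trial most likely used an exact (Clopper–Pearson) method, so the bounds differ slightly:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (z = 1.96 for ~95%)."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = p + z ** 2 / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return (centre - margin) / denom, (centre + margin) / denom

# 36 responders is inferred from the reported 33.3% ORR among 108 participants.
lo, hi = wilson_ci(36, 108)
print(f"ORR = {36 / 108:.1%}, 95% CI approx ({lo:.1%}, {hi:.1%})")
```

The Wilson interval gives roughly 25–43%, consistent with (though not identical to) the published 24.6–43.1%.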
In April 2021, the FDA granted regular approval to sacituzumab govitecan for people with unresectable locally advanced or metastatic triple-negative breast cancer (mTNBC) who have received two or more prior systemic therapies, at least one of them for metastatic disease. Efficacy and safety were evaluated in a multicenter, open-label, randomized trial (ASCENT; NCT02574455) conducted in 529 participants with unresectable locally advanced or mTNBC who had relapsed after at least two prior chemotherapies, one of which could be in the neoadjuvant or adjuvant setting if progression occurred within twelve months. Participants were randomized (1:1) to receive sacituzumab govitecan, 10 mg/kg as an intravenous infusion, on days 1 and 8 of a 21-day cycle (n=267), or physician's choice of single-agent chemotherapy (n=262). In April 2021, the FDA granted accelerated approval to sacituzumab govitecan for people with locally advanced or metastatic urothelial cancer (mUC) who previously received a platinum-containing chemotherapy and either a programmed death receptor-1 (PD-1) or a programmed death-ligand 1 (PD-L1) inhibitor. Efficacy and safety were evaluated in TROPHY (IMMU-132-06; NCT03547973), a single-arm, multicenter trial that enrolled 112 participants with locally advanced or mUC who received prior treatment with a platinum-containing chemotherapy and either a PD-1 or PD-L1 inhibitor. In February 2023, the FDA approved sacituzumab govitecan for people with unresectable locally advanced or metastatic hormone receptor (HR)-positive, human epidermal growth factor receptor 2 (HER2)-negative (IHC 0, IHC 1+ or IHC 2+/ISH-) breast cancer who have received endocrine-based therapy and at least two additional systemic therapies in the metastatic setting. 
Society and culture Legal status On 14 October 2021, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Trodelvy, intended for the treatment of unresectable or metastatic triple-negative breast cancer. The applicant for this medicinal product is Gilead Sciences Ireland UC. Sacituzumab govitecan was approved for medical use in the European Union in November 2021. References Further reading External links Antibody-drug conjugates Cancer treatments Monoclonal antibodies for tumors Orphan drugs
Sacituzumab govitecan
[ "Biology" ]
1,896
[ "Antibody-drug conjugates" ]
47,556,865
https://en.wikipedia.org/wiki/European%20environmental%20research%20and%20innovation%20policy
The European environmental research and innovation policy is a set of strategies, actions and programmes to promote more and better research and innovation for building a resource-efficient and climate-resilient society and economy in sync with the natural environment. It is based on the Europe 2020 strategy for a smart, sustainable and inclusive economy, and it realises the European Research Area (ERA) and the Innovation Union in the field of the environment. The aim of the European environmental research and innovation policy is to contribute to a transformative agenda for Europe in the coming years, in which the quality of life of citizens and the environment are steadily improved, in step with the competitiveness of businesses, societal inclusion and the management of resources. Main features The European environmental research and innovation policy has a multidisciplinary character and involves efforts across many different sectors to provide safe, economically feasible, environmentally sound and socially acceptable solutions along the entire value chain of human activities. Reducing resource use and environmental impacts while increasing competitiveness requires a decisive societal and technological transition to an economy based on a sustainable relationship between nature and human well-being. The availability of sufficient raw materials is addressed, as well as the creation of opportunities for growth and new jobs. Innovative options are developed in policies ranging across science, technology, the economy, regulation, society and citizens’ behaviour, and governance. Research and innovation activities improve the understanding and forecasting of climate and environmental change from a systemic and cross-sectoral perspective, reduce uncertainties, identify and assess vulnerabilities, risks, costs, mitigation measures and opportunities, and expand the range and improve the effectiveness of societal and policy responses and solutions. 
International context The European environmental research and innovation policy was placed in the context of the United Nations process to develop a set of Sustainable Development Goals (SDGs), which were agreed at the Rio+20 Conference on Sustainable Development in 2012 and are now integrated into the United Nations development agenda beyond 2015. These goals succeeded the Millennium Development Goals and are universally applicable to all nations, hence also to the European Union and its Member States. Implementation through Framework Programmes The implementation of the European environmental research and innovation policy relies on a systemic approach to innovation for a system-wide transformation. To a large extent, it is carried out through the Framework Programmes for Research and Technological Development. The current Framework Programme is called Horizon 2020, and environmental research and innovation is envisaged across the entire programme with an interdisciplinary approach. Current estimates suggest that more than €6.5 billion per year could be made available for activities related to sustainable development over the duration of Horizon 2020, addressing both research and innovation, in contrast to previous Framework Programmes. Horizon 2020 is open to cooperation with researchers and innovators worldwide in order to foster co-design and co-creation of solutions that may have a global impact. New calls for research and innovation proposals were opened on 14 October 2015. Information is contained in the Horizon 2020 participant portal. References External links "European environmental research and innovation" "Horizon 2020 Participant portal" "European Commission - Innovation Union" Environmental research Policies of the European Union Innovation 2010s in the European Union
European environmental research and innovation policy
[ "Environmental_science" ]
629
[ "Environmental research" ]
47,557,074
https://en.wikipedia.org/wiki/Oppo%20R7
The Oppo R7 is a mid-range phablet smartphone based on Android 4.4 with Oppo Electronics’ own operating system, ColorOS 2.1, which was unveiled on 20 May 2015. The model comes in three versions supporting different frequency bands, and is sold in six countries worldwide, as well as through the third-party retailer Oppostyle and other e-commerce platforms. Specifications The Oppo R7 contains a Qualcomm MSM8939 octa-core processor with 3 GB of RAM and 16 GB of expandable storage, and makes use of a 5-inch, 1080p AMOLED 2.5D arc-edge display. The device weighs about 147 g and is composed of 92.3% metal. It also includes a 13-megapixel main camera and an 8-megapixel front camera, with phase-detection autofocus and anti-shake optimization contributing towards higher image quality. Like other flagships, the Oppo R7 is equipped with VOOC Flash Charging technology. At present, the Oppo R7 has been officially released in Indonesia, Sri Lanka, Taiwan, Singapore and Australia. See also Phones of OPPO References Mobile phones introduced in 2015 Oppo smartphones Android (operating system) devices Discontinued smartphones
Oppo R7
[ "Technology" ]
267
[ "Mobile technology stubs", "Mobile phone stubs" ]
47,558,532
https://en.wikipedia.org/wiki/XTE%20J1550%E2%80%93564
XTE J1550-564, sometimes abbreviated to J1550 and also known as V381 Normae, is a low-mass X-ray binary in the constellation Norma. It is composed of a black hole around 10 times as massive as the Sun, and a star of spectral type K3III. The black hole fires out jets of matter that are thought to arise from an accretion disk, and is hence known as a microquasar. References Norma (constellation) Normae, V381 X-ray binaries K-type giants Stellar black holes
XTE J1550–564
[ "Physics", "Astronomy" ]
120
[ "Black holes", "Stellar black holes", "Unsolved problems in physics", "Constellations", "Norma (constellation)" ]
47,559,813
https://en.wikipedia.org/wiki/True%20muonium
In particle physics, true muonium is a theoretically predicted exotic atom representing a bound state of a muon and an antimuon (μ+μ−). The existence of true muonium is well established theoretically within the Standard Model. Its properties within the Standard Model are determined by quantum electrodynamics, and may be modified by physics beyond the Standard Model. True muonium is yet to be observed experimentally, though it may have been produced in experiments involving collisions of electron and positron beams. The ortho-state of true muonium (i.e. the state with parallel alignment of the muon and antimuon spins) is expected to be relatively long-lived and to decay predominantly to an e+e− pair, which makes it possible for the LHCb experiment at CERN to observe it with the dataset collected by 2025. Experimental research There are several experimental projects searching for true muonium. One of them is the μμ-tron experiment (Mumutron) planned at the Budker Institute of Nuclear Physics of the Siberian Branch of the Russian Academy of Sciences (INP SB RAS), which has been under development since 2017. The experiment involves the creation of a special low-energy electron–positron collider, which will make it possible to observe the production of true muonium in collisions of electron and positron beams with an intersection angle of 75° and beam energies of 408 MeV. Thus, the invariant mass of the colliding particles will be equal to twice the mass of the muon (2 × 105.658 MeV ≈ 211.3 MeV). To register the exotic atom (in the decay channel into an electron–positron pair), it is planned to create a specialized detector. In addition to the actual detection of true muonium, it is planned to isolate its various states and measure their lifetimes. 
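The collider kinematics quoted above follow from the standard two-beam invariant-mass relation. For beams of energies $E_+$ and $E_-$ meeting with angle $\theta$ between their momenta (how $\theta$ maps onto the quoted 75° intersection angle depends on the design's angle convention, which is not specified here):

```latex
s = (p_+ + p_-)^2
  = 2 m_e^2 + 2\left( E_+ E_- - |\vec{p}_+|\,|\vec{p}_-| \cos\theta \right)
  \approx 2 E_+ E_- \,(1 - \cos\theta), \qquad E_\pm \gg m_e .
```

Producing the bound state requires $\sqrt{s} \gtrsim 2 m_\mu \approx 211.3~\text{MeV}$; and because the beams are not back-to-back, the (μ+μ−) atom is produced with a net laboratory momentum.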
In addition to experiments in the field of elementary particle physics, the collider created within the framework of the experiment is also of interest from the point of view of developing accelerator technologies for the Super Charm-Tau factory planned at the INP SB RAS. The experiment was proposed in 2017 by A. I. Milshtein and colleagues at the INP SB RAS. See also Muonium Positronium Onium References External links Low-energy electron-positron collider to search and study (μ+μ−) bound state. A.V. Bogomyagkov, V.P. Druzhinin, E.B. Levichev, A.I. Milstein, S.V. Sinyatkin. BINP, Novosibirsk. Hypothetical composite particles Onia
True muonium
[ "Physics" ]
565
[ "Particle physics stubs", "Particle physics" ]
47,561,162
https://en.wikipedia.org/wiki/History%20of%20Sulzer%20diesel%20engines
This article covers the history of Sulzer diesel engines from 1898 to 1997. The Sulzer Brothers foundry was established in Winterthur, Switzerland, in 1834 by Johann Jakob Sulzer-Neuffert and his two sons, Johann Jakob and Salomon. Products included cast iron, firefighting pumps and textile machinery. Rudolf Diesel was educated in Augsburg and Munich and received his works training with Sulzer; his later co-operation with Sulzer led to the construction of the first Sulzer diesel engine in 1898. As of 2015, the Sulzer company lives on, but it no longer manufactures diesel engines, having sold the diesel engine business to Wärtsilä in 1997. Overview Sulzer built diesel engines for stationary, road, rail and marine use. The engine types usually comprise a number, then some letters, then another number. For example, 6LDA28 indicates a six-cylinder engine in the "LDA" series with a 28 cm cylinder bore. Road In 1937, Sulzer introduced an opposed-piston two-stroke diesel engine for road use. This used a single crank with con-rods operating levers that moved the opposed pistons, the same layout as the 1905 Arrol-Johnston petrol engine. The post-war Commer TS3 was similar, but the Sulzer had a piston-type blower instead of a Roots blower. It was made in two sizes: 69 mm bore × 101.6 mm stroke or 89 mm bore × 120 mm stroke. The smaller version had two cylinders, produced 35 hp, and was intended for tractors. The larger version was available with two, three or four cylinders and was intended for trucks. Rail Sulzer supplied the main and auxiliary engines for a large diesel locomotive built by A. Borsig of Berlin in 1912, the first large locomotive of its kind. Like many modern diesels, this had a cab at each end, and the Sulzer 4LV38 two-stroke diesel engine (of 1,200 bhp) could be run in either direction of rotation. Transmission was direct from the transverse crankshaft via coupling rods to the two driving axles. A Sulzer auxiliary engine provided the air for starting. 
In tests in 1913 on the Frauenfeld-Winterthur line, where an express steam train took 21 minutes, this locomotive did it in 17 minutes. The direct drive was a problem; starting the engine meant starting the locomotive and its load moving, requiring a lot of compressed air. Sulzer moved to a range of smaller locomotive engines, including railcar engines, in the 1920s, but in the mid-1930s they came out with their LD series, designed just for railway locomotives, which they would produce for many years. The top of this range (in 1945) was the 12LDA31, a 12-cylinder engine with two parallel crankshafts geared together, effectively two straight-six engines in a common crankcase. These were 2-stroke engines with four blowers, each serving three cylinders. In 1945 the engines were in use in France and Romania, and their output of 2,200 bhp at 700 rpm made them among the most powerful locomotive engines of the time. There was also an LVA range of V-format locomotive engines from 1960. 12LVA24 versions of these engines were installed in D1702–D1706 in the UK, but there were some reliability problems and the locomotives were reverted to the LDA engine. The refurbished 12LVA24 engines were sold to SNCF. At the end of the 1960s the Sulzer traction division was merged into the far larger marine division. Some examples of Sulzer-powered rail locomotives: Large numbers of Sulzer-engined locomotives were supplied to rail companies all over the world, particularly using the LDA engine. 
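The designation scheme described in the Overview (cylinder count, series letters, bore in centimetres, as in 6LDA28) is regular enough to parse mechanically. A small illustrative sketch; the function name and regex are my own assumptions, not anything published by Sulzer:

```python
import re

# Splits a Sulzer-style designation such as "6LDA28" into its three parts:
# cylinder count, series letters, and cylinder bore in centimetres.
DESIGNATION = re.compile(r"^(\d+)([A-Z]+)(\d+)$")

def parse_designation(code):
    m = DESIGNATION.match(code)
    if not m:
        raise ValueError(f"unrecognised designation: {code}")
    cylinders, series, bore_cm = m.groups()
    return int(cylinders), series, int(bore_cm)

print(parse_designation("6LDA28"))    # -> (6, 'LDA', 28): six-cylinder LDA, 28 cm bore
print(parse_designation("12RND105"))  # -> (12, 'RND', 105): the large marine RND engine
```

The same pattern covers the marine types listed later, e.g. 12RND105 is a twelve-cylinder RND-series engine with a 105 cm bore.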
Type LDA28 Example applications, 6 cylinders British Rail Class 24 British Rail Class 25 British Rail Class 26 British Rail Class 27 CIE 101 Class CIE 113 Class Commonwealth Railways NSU class Commonwealth Railways NT class Example applications, 8 cylinders British Rail Class 33 Example applications, 12 cylinders British Rail Class 44 British Rail Class 45 British Rail Class 46 British Rail Class 47 SNCF Class CC 65500 PKP class ST43 (manufacturer type: LDE 1200) Type LVA24 Example applications, 12 cylinders British Rail Class 48 SNCF Class A1AA1A 68000 Example applications, 16 cylinders British Rail HS4000 Type LV31 Example applications, 8 cylinders Russian locomotive class E el-8 Marine Sulzer marine engines were well engineered, and so various trials in the early days of oil engines paid dividends. In 1910 there was an icebreaker tug equipped with a Sulzer diesel at Hamburg. The 4-cylinder two-stroke diesel engine gave an indicated 210 bhp and 9.75 knots, had an engine room a third smaller than the steam equivalent, and the engine weighed just a quarter as much as the equivalent steam plant. It did well in all trials, and the simple controls meant the engine could be reversed faster than a steam engine. In 1911 the British Admiralty purchased a diesel motor launch they had been evaluating for some time. By Sulzer standards this had a very small 4-cylinder two-stroke engine of just 100 bhp. The boat was only 60 feet long, and as part of its trials the engine had been successfully run at full power for a period of 24 hours, reaching over 10 knots. The vessel was bought for £3,000 to be the subject of further experimental work. The MV Monte Penedo, Germany's first sea-going motor ship (in 1912), was fitted with two Sulzer 4S47 two-stroke crosshead diesel engines. These were replaced in 1949 with new Sulzer 7TS36 diesel engines. 
In 1912 Sulzer also provided the main diesel propulsion engine and the two diesel generator engines on the new outer Elbe lightship, the 'Burgermeister Oswald'. The US Navy also chose Sulzer for some of their submarines; following discussions held in Switzerland in 1915, a design for submarines was developed and tested. These were built by the Lake Torpedo Boat company as the US L Class, and the engines involved the US licensee Busch-Sulzer. Sulzer licensed their engine builds to many companies worldwide, without apparent compromise to the quality of their engines. In the mid-1920s Sulzer started to advertise their airless diesel engines, meaning they were using liquid injection rather than injecting the fuel using an air blast (as used by Rudolf Diesel). They exhibited their 300 hp 2-stroke airless engine, suitable for yachts, tugs, barges, etc., at the Olympia Shipping Exhibition of 1925. Sulzer's marine engine range in 1969 extended from the 3-cylinder version of the A25 auxiliary engine at 550 bhp to the 12-cylinder 48,000 bhp 12RND105 engine, 10 metres high, with cylinders over a metre in diameter. More recently these main engines have been replaced by the RTA series of engines, still two-stroke. The RD type marine two-stroke crosshead engine ranged from the 5-cylinder 5RD44 to the 12-cylinder 12RD90. This was developed into the RND engine, with numerous improvements including the elimination of mechanically operated exhaust valves, with sizes ranging from the 5RND68 (8,250 bhp) to the 12RND105 (48,000 bhp). A new marine range of engines in the later 1960s was the Z and ZV range, covering 2,600 bhp to 6,600 bhp. These were again two-strokes, but in 1969 four-stroke versions were being planned. They were available either as in-line engines or as 50-degree V-engines. Trunk pistons were used instead of crossheads, and both long- and short-stroke engines were available. These are examples, not a full list. 
Type RTA76 Example applications, 5 cylinders USNS Paul Buck (T-AOT-1122) Example applications, 8 cylinders MV Rena Type TADS56 Example applications, 5 cylinders Galați-class cargo ship Stationary While Sulzer was well known for its two-stroke engines, in 1931 there was a brief résumé of their four-stroke diesel engines, which had been built alongside the two-stroke engines. In 1903 they offered four-stroke low-speed A-frame-style engines from 20 to 800 hp. Enquiries for lighter high-speed engines led by 1911 to forced-lubrication four-stroke engines built with closed box-frames, low-set camshafts and pushrod valve operation. The airless (injection) four-stroke engines were then developed from this type and by 1931 were available with from 2 to 8 cylinders under the DD engine designation. At the start of 1912 it was claimed that the largest diesel engines built for stationary work (of 2,000 and 2,400 bhp) were built by Sulzers, though this had been surpassed by an order received by Sulzers for four 4,000 hp 6-cylinder engines from Chile for electricity generation. These two-stroke crosshead engines were described as being basically the same as their marine engines without the reversing gear. In 1915 a 4,500 bhp engine was installed at Harland & Wolff, Belfast. This installation had been delayed due to the war complicating commercial arrangements. A second engine of the same size had been installed at a Zurich electricity works, and another was being installed at a French electricity station - the engines were described as being "identical in essential principles and also external construction with large marine engines by the same firm". Sulzer could provide engines from 550 bhp upwards for electricity generation, pumping plant, and industrial applications. These designs were essentially versions of their wide range of marine engines, although without the thrust block, and normally without the reversing gear. 
The stationary versions of the marine RD and RND crosshead range were designated RF and RNF, but Z engines and the A25/AL25 auxiliary engine were designated for either marine or industrial use. The AL25 was a higher-output version of the A25 developed for emergency generating sets. Examples: Two twin-cylinder engines at King Edward Mine, Camborne (http://kingedwardmine.co.uk/). They were originally installed for the Falmouth Water Company around 1926/27; one of them was kept as standby until the early 1970s. King Edward Mine removed the engines in 1989 and erected the better one at KEM around 1994. Licences Licences to build diesel engines to Sulzer's design were granted to Vickers-Armstrongs, and George Clark of Sunderland (after WW2), Wallsend Slipway and Engineering Co (c. 1925) in the United Kingdom; to Busch-Sulzer in the United States, to Reșița works in Romania, to Messrs Werkspoor in Holland, to Messrs. Workman, Clark and Company Ltd of Ireland, to Mitsubishi Heavy Industries in Japan, and to H. Cegielski – Poznań in Poland. A new Vickers-Armstrong licence in 1957 also allowed associates of the licensee, Cockatoo Docks and Engineering Co Pty Ltd of Sydney, Australia and Canadian Vickers Ltd of Montreal, Canada, to build Sulzer engines under licence. References Sulzer Diesel engines Marine engines
History of Sulzer diesel engines
[ "Technology" ]
2,266
[ "Engines", "Science and technology studies", "Marine engines", "History of technology", "History of science and technology" ]
47,561,739
https://en.wikipedia.org/wiki/Penicillium%20shennonghianum
Penicillium shennonghianum is a species of fungus in the genus Penicillium. References shennonghianum Fungi described in 1988 Fungus species
Penicillium shennonghianum
[ "Biology" ]
34
[ "Fungi", "Fungus species" ]
47,561,855
https://en.wikipedia.org/wiki/Citrix%20Workspace
Citrix Workspace (formerly Citrix Workspace Suite) is a digital workspace software platform developed by Citrix Systems. Launched in 2018, it is Citrix Systems' flagship product. Citrix Workspace is an information retrieval service where users can access programs and files from a variety of sources through a central application or a Web browser. In addition to Citrix Virtual Apps and Desktops (formerly XenApp and XenDesktop), Citrix Workspace services include Citrix Endpoint Management (formerly XenMobile), Citrix Content Collaboration (formerly ShareFile), Citrix Access Control, microapp capabilities, usage analytics, and single sign-on capabilities to SaaS and Web apps. Its central application, Citrix Workspace app (formerly Citrix Receiver), is client software that allows access to all of a user's files and apps from one interface. This includes mobile files and desktops, in addition to SaaS and virtual apps. Citrix Workspace app replaced Citrix Receiver, which was the client component of Citrix products XenDesktop and XenApp, now Citrix Virtual Apps and Desktops. It was released initially in 2009; devices with Receiver installed were able to access full desktops via XenDesktop or individual applications via XenApp from a centralized host, such as a server or cloud infrastructure. The product's intended users were employees of businesses and organizations. References Business software Centralized computing Citrix Systems Remote desktop
Citrix Workspace
[ "Technology" ]
305
[ "Centralized computing", "IT infrastructure", "Computer systems" ]
47,562,544
https://en.wikipedia.org/wiki/Norman%20Hackerman%20Young%20Author%20Award
The Norman Hackerman Young Author Award was established in 1982 by The Electrochemical Society (ECS). The award is presented annually for the best paper published in the Journal of the Electrochemical Society for a topic in the field of electrochemical science and technology by a young author or authors. (This award incorporates the Turner Book Prize.) Recipients of the award are presented with a scroll, cash prize (divided equally among eligible authors), and travel assistance to enable winner(s) to attend the ECS meeting where the award is presented. This award is named after the chemist Norman Hackerman. Notable Recipients As listed by ECS: 1994 Hubert A. Gasteiger 1988 Jennifer A. Bardwell 1987 Joachim Maier 1975 Larry R. Faulkner 1971 M. Stanley Whittingham 1966 John Newman 1960 A. C. Makrides 1953 Jack Halpern 1948 Michael Streicher 1941 Edward Adler 1938 Nathaniel B. Nichols 1929 William C. Gardiner See also List of chemistry awards References Chemistry awards Electrochemistry
Norman Hackerman Young Author Award
[ "Chemistry", "Technology" ]
207
[ "Chemistry awards", "Electrochemistry", "Science award stubs", "Electrochemistry stubs", "Science and technology awards", "Physical chemistry stubs" ]
47,562,547
https://en.wikipedia.org/wiki/MicroLED
MicroLED, also known as micro-LED, mLED, or μLED, is an emerging flat-panel display technology consisting of arrays of microscopic LEDs forming the individual pixel elements. Inorganic semiconductor microLED (μLED) technology was invented in 2000 by the research group of Hongxing Jiang and Jingyu Lin of Texas Tech University (TTU) while they were at Kansas State University (KSU). The first high-resolution and video-capable InGaN microLED microdisplay in VGA format was realized in 2009 by Jiang, Lin and their colleagues at Texas Tech University and III-N Technology, Inc. via active driving of a microLED array by a complementary metal-oxide semiconductor (CMOS) IC. Compared to widespread LCD technology, microLED displays offer better contrast, response times, and energy efficiency. MicroLED offers greatly reduced energy requirements when compared to conventional LCD displays while also offering pixel-level light control and a high contrast ratio. The inorganic nature of microLEDs gives them a longer lifetime advantage over OLEDs and allows them to display brighter images with minimal risk of screen burn-in. The sub-nanosecond response time of μLED has a huge advantage over other display technologies for 3D/AR/VR displays, since these devices need more frames per second and fast response times to minimise ghosting. MicroLEDs are capable of high-speed modulation, and have been proposed for chip-to-chip interconnect applications. , Sony, Samsung, and Konka started to sell microLED video walls. LG, Tianma, PlayNitride, TCL/CSoT, Jasper Display, Jade Bird Display, Plessey Semiconductors Ltd, and Ostendo Technologies, Inc. have demonstrated prototypes. Sony already sells microLED displays as a replacement for conventional cinema screens. BOE, Epistar, and Leyard have plans for microLED mass production. MicroLED can be made flexible and transparent, just like OLEDs. 
According to a report by Market Research Future, the MicroLED display market will reach around USD 24.3 billion by 2027. Custom Market Insights reported that the MicroLED display market is expected to reach around USD 182.7 billion by 2032. Research Following the first report of electrical injection microLEDs based on indium gallium nitride (InGaN) semiconductors in 2000 by the research group of Hongxing Jiang and Jingyu Lin, several groups quickly engaged in pursuing this concept. Many related potential applications have been identified. Various on-chip connection schemes of microLED pixel arrays have been employed by AC LED Lighting, LLC (a company funded by Jiang and Lin), allowing for the development of single-chip high-voltage DC/AC-LEDs to address the compatibility issue between the high-voltage electrical infrastructure and low-voltage operation nature of LEDs, and of high-brightness self-emissive microdisplays. The microLED array has also been explored as a light source for optogenetic applications and for visible light communications. Early InGaN-based microLED arrays and microdisplays were primarily passively driven. The first actively driven video-capable self-emissive InGaN microLED microdisplay in VGA format ( pixels, each 12μm in size with 15μm between them) possessing low voltage requirements was patented and realized in 2009 by Jiang, Lin and their colleagues at Texas Tech and III-N Technology, Inc. (a company funded by Jiang and Lin) via integration between a microLED array and a CMOS integrated circuit (IC), and the work was also published in the following years. The first microLED products were demonstrated by Sony in 2012. These displays, however, were very expensive. There are several methods to manufacture microLED displays. The flip-chip method manufactures the LEDs on a conventional sapphire substrate, while the transistor array and solder bumps are deposited on silicon wafers using conventional manufacturing and metallization processes. 
Mass transfer is used to pick and place several thousand LEDs from one wafer to another at the same time, and the LEDs are bonded to the silicon substrate using reflow ovens. The flip-chip method is used for micro displays used on virtual reality headsets. Another microLED manufacturing method involves bonding the LEDs to an IC layer on a silicon substrate and then removing the LED bonding material using conventional semiconductor manufacturing techniques. The current bottleneck in the manufacturing process is the need to individually test every LED and replace faulty ones using an excimer laser lift-off apparatus, which uses a laser to weaken the bond between the LED and its substrate. Faulty LED replacement must be performed using high accuracy pick-and-place machines and the test and repair process takes several hours. The mass transfer process alone can take 18 days, for a smartphone screen with a glass substrate. Special LED manufacturing techniques can be used to increase yield and reduce the amount of faulty LEDs that need to be replaced. Each LED can be as small as 5μm across. LED epitaxy techniques need to be improved to increase LED yields. Excimer lasers are used for several steps: laser lift-off to separate LEDs from their sapphire substrate and to remove faulty LEDs, for manufacturing the LTPS-TFT backplane, and for laser cutting of the finished LEDs. Special mass transfer techniques using elastomer stamps are also being researched. Other companies are exploring the possibility of packaging 3 LEDs: one red, one green and one blue LED into a single package to reduce mass transfer costs. Quantum dots are being researched as a way to shrink the size of microLED pixels, while other companies are exploring the use of phosphors and quantum dots to eliminate the need for different-colored LEDs. Sensors can be embedded in microLED displays. Over 130 companies are involved in microLED research and development. 
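To make the test-and-repair burden above concrete, a back-of-envelope sketch; the 99.99% per-LED yield figure is an assumed illustrative number, not a value quoted in the text:

```python
# Rough repair burden for a microLED panel: every pixel needs three
# individually placed LEDs, so even a very high per-LED yield leaves
# thousands of defective subpixels to find and replace.
width, height = 3840, 2160         # 4K pixel grid
subpixels = width * height * 3     # one red, one green, one blue LED per pixel
yield_rate = 0.9999                # hypothetical transfer/bond yield per LED
expected_defects = subpixels * (1 - yield_rate)
# subpixels is about 24.9 million; expected_defects is on the order of
# a few thousand LEDs needing laser removal and re-placement.
```

This is why even small yield improvements matter: the repair count scales linearly with panel resolution and with (1 − yield).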
MicroLED light panels are also being made, and are an alternative to conventional OLED and LED light panels. Digital pulse-width modulation is well-suited to driving microLED displays. MicroLEDs experience a color shift as the current magnitude changes. Analog schemes change current to change brightness. With a digital pulse, only one current value is used for the on state. Thus, there is no color shift that occurs as brightness changes. Current microLED display offerings by Samsung and Sony consist of "cabinets" that can be tiled to create a large display of any size, with the display's resolution increasing with size. They also contain mechanisms to protect the display against water and dust. Each cabinet is diagonally with a resolution of . Commercialization MicroLEDs have already demonstrated performance advantages over LCD and OLED displays, including higher brightness, lower latency, higher contrast ratio, greater color saturation, intrinsic self-illumination, better efficiency and longer lifetime. Compared with OLED displays and LCDs, microLED displays stand out for their combination of high performance, durability, and energy efficiency. Ultrahigh brightness is particularly relevant for applications in augmented-reality displays that compete with the Sun’s brightness in outdoor environments. Glo and Jasper Display Corporation demonstrated the world's first RGB microLED microdisplay, measuring diagonally, at SID Display Week 2017. Glo transferred their microLEDs to the Jasper Display backplane. Sony launched a "Crystal LED Display" in 2012 with resolution, as a demonstration product. Sony announced its CLEDIS (Crystal LED Integrated Structure) brand which used surface mounted LEDs for large display production. , Sony offers CLEDIS in , and displays. On 12 September 2019, Sony announced Crystal LED availability to consumers ranging from 1080p to 16K displays. Samsung demonstrated a microLED display called The Wall at CES 2018. 
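The digital pulse-width-modulation drive described above can be sketched in a few lines. This is a toy model, not a real display controller; the 8-bit frame layout and the `pwm_frame` helper are illustrative assumptions:

```python
def pwm_frame(level, bits=8):
    """One PWM frame for a brightness level in [0, 2**bits].

    The LED current takes a single fixed 'on' value; only the fraction
    of 'on' time slots changes. Brightness therefore scales with duty
    cycle, avoiding the colour shift that varying the current (analog
    dimming) would cause.
    """
    period = 2 ** bits                       # time slots per frame
    return [1 if slot < level else 0 for slot in range(period)]

frame = pwm_frame(64)                        # level 64 of 256
duty = sum(frame) / len(frame)               # 0.25 -> 25% brightness
```

The key point the sketch illustrates: every emitted photon comes from the same operating current, so the chromaticity stays constant across the whole brightness range.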
In July 2018, Samsung announced plans to bring a 4K microLED TV to the consumer market in 2019. At CES 2019, Samsung demonstrated a 4K microLED display and a 6K microLED display. On June 12 at InfoComm 2019, Samsung announced the global launch of The Wall Luxury microLED display configurable from in 2K to in 8K. On October 4, 2019, Samsung announced that The Wall Luxury microLED display shipments had begun. In March 2018, Bloomberg reported Apple to have about 300 engineers devoted to in-house development of microLED screens. At IFA 2018 in August, LG Display demonstrated a microLED display. At SID's Display Week 2019 in May, Tianma and PlayNitride demonstrated their co-developed microLED display with over 60% transparency. China Star Optoelectronics Technology (CSoT) demonstrated a transparent microLED display with around 45% transparency, also co-developed with PlayNitride. Plessey Semiconductors Ltd demonstrated a monolithic monochrome blue GaN-on-silicon wafer bonded to a Jasper Display CMOS backplane active-matrix microLED display with an 8μm pixel pitch. At SID's Display Week 2019 in May, Jade Bird Display demonstrated their 720p and 1080p microLED microdisplays with 5μm and 2.5μm pitch respectively, achieving luminance in the millions of candelas per square metre. In 2021, Jade Bird Display and Vuzix entered a joint manufacturing agreement to make microLED-based projectors for smart glasses and augmented reality glasses. At Touch Taiwan 2019 on September 4, 2019, AU Optronics demonstrated a microLED display and indicated that microLED was 12 years from mass commercialization. At IFA 2019 on September 13, 2019, TCL Corporation demonstrated their Cinema Wall featuring a 4K microLED display with maximum brightness of 1,500 cd/m² and contrast ratio, produced by their subsidiary China Star Optoelectronics Technology (CSoT). As of 2024, Samsung has already launched microLED display products including The Wall. 
Samsung’s microLED display technology transfers micrometer-scale LEDs into LED modules, resulting in what resembles wall tiles composed of mass-transferred clusters of almost microscopic lights. Samsung also debuted its Transparent MicroLED display at CES 2024. LG also debuted its microLED display, LG MAGNIT, at CES 2024. In terms of microLED microdisplays, Jade Bird Display launched its 0.13" series of MicroLED displays, which have an active area of 0.13" (3.3 mm) in diagonal and a resolution of 640×480 for AR and VR display products. Apple reportedly invested billions of dollars in development of microLED displays in the years leading up to 2024, intending to transition its products to the technology beginning with the Apple Watch Ultra, before ultimately abandoning the effort after deciding it was unviable. However, the company is reportedly still "eyeing microLED for other projects down the road". See also OLED AMOLED Mini LED List of flat panel display manufacturers References External links First actively driven video-capable high-resolution microLED microdisplay in VGA format MicroLED review (2013) Putting microLED technology on display (2024) Crystal LED - Sony The Wall LG MAGNIT MicroLED "The Long View With John Doerr", John Doerr of KPC&B describes the microLED concept, starts around the 5 minute mark. LED screens are significantly different from microLED Display technology Light-emitting diodes
MicroLED
[ "Engineering" ]
2,304
[ "Electronic engineering", "Display technology" ]
74,521,801
https://en.wikipedia.org/wiki/Agroathelia%20rolfsii
Agroathelia rolfsii is a corticioid fungus in the order Amylocorticiales. It is a facultative plant pathogen and is the causal agent of "southern blight" disease in crops. Taxonomy The species was first described in 1911 by Italian mycologist Pier Andrea Saccardo, based on a specimen collected by Peter Henry Rolfs, sent by John A. Stevenson at the US national mycological collection. Rolfs first considered the unnamed fungus to be the cause of tomato blight in Florida; it was subsequently found to cause disease on multiple hosts. The specimens sent to Saccardo were sterile, consisting of hyphae and sclerotia. Saccardo placed the species in the old form genus Sclerotium, naming it Sclerotium rolfsii. It is, however, not a species of Sclerotium in the modern sense. In 1932, Mario Curzi discovered that the teleomorph (spore-bearing state) was a corticioid fungus and accordingly placed the species in the genus Corticium. Uncertainty over its classification when the broadly defined genus Corticium was being partitioned by taxonomists led to placement in Pellicularia, then Botryobasidium and finally Athelia. Subsequently, it has been shown via phylogenetic analyses of DNA sequences that Agroathelia rolfsii is a member of the Amylocorticiales and not the Atheliales, where it was classified until recently. Description The fungus produces effused basidiocarps (fruit bodies) that are smooth and white. Microscopically, they consist of ribbon-like hyphae with clamp connections. Basidia are club-shaped, bearing four smooth, ellipsoid basidiospores, measuring 4–7 by 3–5 μm. Small, brownish sclerotia (hyphal propagules) are also formed, arising from the hyphae. Diseases Southern blight Agroathelia rolfsii occurs in soil as a saprotroph, but can also attack living plants. It has an almost indiscriminate host range, but its capacity to form sclerotia (propagules that remain in the soil) means that it particularly attacks seasonal crops. 
It mostly occurs in warm soils (above ) and can be a serious pest of vegetables in tropical and subtropical regions (including Florida, where it was first recognized), causing "southern blight". Mustard seed fungus It can also be called mustard seed fungus. Root rot It causes a root rot of cassava. Disease cycle The soil-borne fungal pathogen Agroathelia rolfsii is a basidiomycete that typically exists only as mycelium and sclerotia (anamorph, or asexual state: Sclerotium rolfsii). It causes the disease Southern Blight and typically overwinters as sclerotia. A sclerotium is a survival structure composed of a hard rind and a cortex containing hyphae, and is typically considered the primary inoculum. The pathogen has a very large host range, affecting over 500 plant species (including tomato, onion, snapbean and pea) in the United States of America. The fungus attacks the host crown and stem tissues at the soil line by producing a number of compounds such as oxalic acid, in addition to enzymes that are pectinolytic and cellulolytic. These compounds effectively kill plant tissue and allow the fungus to enter other areas of the plant. After gaining entry, the pathogen uses the plant tissues to produce mycelium (often forming mycelial mats), as well as additional sclerotia. Sclerotia formation occurs when conditions are especially warm and humid, primarily in the summer months in the United States of America. Susceptible plants exhibit stem lesions near the soil line, and thus often wilt and eventually die. Infection caused by Southern Blight is not considered systemic. Environment Agroathelia rolfsii typically prefers warm, humid climates (whence the name of the disease, Southern Blight), which are required for optimal growth (i.e. to produce mycelium and sclerotia). This makes the disease an important issue in regions such as the Southern United States of America, especially for solanaceous crops. 
In addition, oxygen rich and acidic soils have also been found to favor growth of the pathogen. Southern Blight can be spread (by way of sclerotia and mycelium) by contaminated farm tools and implements, irrigation systems and infected soil and plant material. Management Thus, management of the disease is critical, especially in agricultural regions. Although historically management has been difficult, there are several practical ways to reduce disease pressure. Simply avoiding infected fields is perhaps the most straightforward management technique given the large host range and durability of survival structures (i.e. sclerotia). However, when this is not possible, practicing proper sanitation and implementing effective crop rotations can help. Deep tillage has also been shown to reduce Southern Blight occurrence by burying infected plant tissues and creating an anaerobic environment that hinders pathogen growth. Soil solarization and certain organic amendments (e.g. composted chicken manure and rye-vetch green manure), as well as introducing certain Trichoderma spp. have also been shown to reduce plant death and number of sclerotia produced in the field in tomatoes. In addition to these cultural methods, chemical methods (e.g. fungicides) can also be employed. These methods all disrupt the production of mycelium and sclerotia, thus reducing the spread of disease. See also List of soybean diseases References External links "Kudzu of the Fungal World" at NC State University Southern blight, Southern stem blight, White mold The Plant Health Instructor Fungal plant pathogens and diseases Amylocorticiales Soybean diseases Fungus species
Agroathelia rolfsii
[ "Biology" ]
1,240
[ "Fungi", "Fungus species" ]
74,522,492
https://en.wikipedia.org/wiki/Ann%20Sandifur
Ann Elizabeth Sandifur (born 14 May 1949) is an American business woman, composer, teacher and writer. She has produced several electronic and multimedia works. Biography Sandifur was born in Spokane, Washington. She earned a BA at Eastern Washington University as well as a BA in composition and a MFA in electronic music and recording media at Mills College in California. She also studied at the University of California (Berkeley). Her teachers included Robert Ashley, Charles Bestor, Paul Creston, Alden Jenks, and Stanley Lunetta. After graduating, Sandifur taught music, broadcast engineering, and radio and television communications. In 1969, Sandifur received third prize in a Mu Phi Epsilon competition for her composition Prenatal. She received a grant from the National Center for Experiments in Television, based in San Francisco. During the early 1970s, Sandifur’s works were performed by MAFISHCO at the Cat’s Paw Palace of Performing Arts, a major alternative theatre in East Bay, California. Later in the 1970s, she worked with David Tudor’s performance group, Composers Inside Electronics (CIE). As a business woman, Sandifur founded the Rosonant Communications Network. She was vice president and in charge of data processing at the Metropolitan Mortgage and Securities Company from 1980 to 1982. In 1983, she completed a 5-year commission from Metropolitan Mortgage and Securities to create Cosmography, a multimedia sculpture. In a 1999 joint interview with composer Janice Giteck, Sandifur and Giteck described themselves as “life-oriented, not career oriented,” noting that they sought to be “versatile rather than specialized.” Sandifur belongs to the Phi Lambda Chapter of Mu Phi Epsilon. Her music is published by Arsciene Publishing. 
Her compositions include: Chamber Double Chamber Music Prenatal (chamber ensemble) Still, Still (mixed chorus, flute, oboe, electronic piano and double bass) Suite for Oboe Electronic Big Belly Biorhythms of Performance Bridging Space Columbia River (synthesizer, electronic and acoustic pianos) Five Part Fugue for the Collective Consciousness Fugue for Touch In Celebration of Movement (piano and tape) Jona One Learning to Talk (voice and synthesizer) Poetry is Sleeping (voice and tape) P.P.G. Rite of Birth TVCOMOO1.TEL (synthesizer) Keyboard Scored Improvisations (acoustic or electric keyboard) Shared Improvisations (four hand piano) Multimedia Cosmography (multimedia statue) Letting Go of Home (musical sculpture) Word Park (musical Sculpture) Theatre I Extol the Nestle and Cuddle Sequence II References American women composers Electronic music Multimedia 1949 births Living people
Ann Sandifur
[ "Technology" ]
543
[ "Multimedia" ]
74,522,612
https://en.wikipedia.org/wiki/HD%20173047
HD 173047 is a solitary, bluish-white hued star located in the southern constellation Telescopium. It has an apparent magnitude of 6.24, placing it near the limit for naked eye visibility, even under ideal conditions. The object is located relatively far at a distance of 1,050 light-years based on Gaia DR3 parallax measurements, but it is drifting closer with a heliocentric radial velocity of . At its current distance, HD 173047's brightness is heavily diminished by 0.44 magnitudes due to interstellar extinction and it has an absolute magnitude of −1.52. HD 173047 has a stellar classification of B8/9 II, indicating that it is an evolved B-type star with the characteristics of a B8 and B9 bright giant. It has 4 times the mass of the Sun and a slightly enlarged radius 5.54 times that of the Sun. It radiates a bolometric luminosity 692 times that of the Sun from its photosphere at an effective temperature of . HD 173047 is metal enriched with an iron abundance 126% that of the Sun's ([Fe/H] = +0.10) and it is estimated to be 194 million years old. References B-type bright giants Telescopium Telescopii, 28 CD-50 12122 173047 092072
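Quoted absolute magnitudes like the one above follow from the standard distance-modulus relation M = m − A − 5·log₁₀(d / 10 pc). A minimal sketch; the numeric check uses round illustrative values, not this star's exact catalogue inputs:

```python
import math

def absolute_magnitude(m, distance_pc, extinction=0.0):
    """Distance-modulus relation: M = m - A - 5*log10(d / 10 pc),
    where m is apparent magnitude, A the interstellar extinction in
    magnitudes, and d the distance in parsecs."""
    return m - extinction - 5 * math.log10(distance_pc / 10.0)

# Round-number check: with no extinction, a star at exactly 100 pc has
# an absolute magnitude 5 magnitudes brighter than its apparent one.
M = absolute_magnitude(5.0, 100.0)   # m = 5, d = 100 pc -> M = 0
```

Extinction enters as a simple subtraction because it dims the apparent magnitude without changing the star's intrinsic output.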
HD 173047
[ "Astronomy" ]
293
[ "Telescopium", "Constellations" ]
74,523,207
https://en.wikipedia.org/wiki/HD%20177365
HD 177365 is a visual binary located in the southern constellation Telescopium. It has an apparent magnitude of 6.27, placing it near the limit for naked eye visibility, even under ideal conditions. Gaia DR3 parallax measurements imply a distance of 373 light-years and it is currently receding with a heliocentric radial velocity of . At its current distance, HD 177365's brightness is diminished by two-tenths of a magnitude due to interstellar extinction and it has an absolute magnitude of +0.16. The binarity of the system was first noticed in a 1996 United States Naval Observatory survey. A Hipparcos proper motion survey published in 2006 catalogued the primary as a probable astrometric binary with an 89.6% chance. HD 177365 B, the companion, is a 16th magnitude star located 101.3" away along a position angle of 218° as of 2015. The visible component has a stellar classification of B9 V, indicating that it is an ordinary B-type main-sequence star that is generating energy via hydrogen fusion. It has 3.05 times the mass of the Sun and 3.33 times the radius of the Sun. It radiates 119 times the luminosity of the Sun from its photosphere at an effective temperature of . HD 177365 A is slightly metal deficient with an iron abundance 78% that of the Sun's ([Fe/H] = −0.11) and it is estimated to be 226 million years old. References B-type main-sequence stars Binary stars Astrometric binaries Telescopium Telescopii, 46 CD-50 12326 177365 093860
HD 177365
[ "Astronomy" ]
351
[ "Telescopium", "Constellations" ]
74,524,119
https://en.wikipedia.org/wiki/Christine%20Floss
Christine Floss (1961–2018) was a German-born American cosmochemist whose research involved studying the atomic composition of meteorites, interplanetary dust, and moon rocks in order to understand the formation of the Solar System. She was a research professor of physics at Washington University in St. Louis, affiliated with the university's Laboratory for Space Sciences and McDonnell Center for the Space Sciences. Early life and education Floss was born in Munich, but moved to the US with her family as a child of five. She majored in German at Purdue University, graduating in 1983, but cast around in many directions for a career, eventually finding her life interest in a geology class she took to fulfil a general education requirement. Floss earned a second bachelor's degree in geology from Indiana University Bloomington, in 1987, with a senior thesis on moon rocks advised by Abhijit Basu. She completed a Ph.D. in geochemistry at Washington University in St. Louis in 1991, under the supervision of Ghislaine Crozaz. Her dissertation was Rare earth element and other trace element microdistributions in two unusual extraterrestrial igneous systems: The enstatite achondrite (aubrite) meteorites and the lunar ferroan anorthosites. She entered the doctoral program already married, with two children; the marriage ended during her graduate studies. Crozaz later wrote: "She was definitely one of our best students, and I wondered how she managed to complete her PhD in only four years while at the same time raising two young girls". Career and later life She became a postdoctoral researcher at the Max Planck Institute for Nuclear Physics in Heidelberg, Germany, "mostly for personal reasons": following her future husband, Frank Stadermann, a German researcher in the same specialty whom she had met when he was a visiting student at Washington University. They married in 1993, and had another child before returning together to Washington University in 1996. 
Floss became a research scientist in the Laboratory for Space Sciences. Eventually she became a research professor. Her husband died at age 48, in 2010, of a cerebral hemorrhage. She was found dead on April 19, 2018 of a heroin overdose. At the time of her death, she was in the process of becoming a regular-rank full professor at Washington University. Recognition Asteroid 6689 Floss, discovered in 1981 by Schelte J. Bus, was named for Floss. A special issue of the journal Meteoritics & Planetary Science was published in her memory in 2020. A lunar crater was named after her in 2023. References Further reading External links 1961 births 2018 deaths American geochemists Women geochemists American planetary scientists American women planetary scientists Scientists from Munich Emigrants from West Germany to the United States Purdue University alumni Indiana University Bloomington alumni Washington University in St. Louis alumni Washington University in St. Louis faculty Deaths by heroin overdose in the United States
Christine Floss
[ "Chemistry" ]
605
[ "Geochemists", "American geochemists", "Women geochemists" ]
74,525,897
https://en.wikipedia.org/wiki/Cyaarside
Cyaarside, also called cyarside, is the As≡C− anion. Featuring a triple bond between arsenic and carbon, it is the arsenic analogue of cyanide and cyaphide. Preparation An actinide cyaarside complex can be prepared by C−O bond cleavage of the arsaethynolate anion, the arsenic analogue of cyanate and phosphaethynolate. Reaction of the uranium complex [] with one molar equivalent of [ in the presence of 2.2.2-cryptand results in the formation of a dinuclear, oxo-bridged uranium complex featuring a C≡As ligand. See also arsaalkyne (As≡CR) References Anions Arsenic compounds
Cyaarside
[ "Physics", "Chemistry" ]
154
[ "Ions", "Matter", "Anions" ]
74,527,567
https://en.wikipedia.org/wiki/Hand%27s%20paradox
In statistics, Hand's paradox arises from ambiguity when comparing two treatments. It shows that a comparison of the effects of the treatments applied to two independent groups can contradict a comparison between the effects of both treatments applied to a single group. Paradox Comparisons of two treatments often involve comparing the responses of a random sample of patients receiving one treatment with an independent random sample receiving the other. One commonly used measure of the difference is then the probability that a randomly chosen member of one group will have a higher score than a randomly chosen member of the other group. However, in many situations, interest really lies in which of the two treatments will give a randomly chosen patient the greater probability of doing better. These two measures, a comparison between two randomly chosen patients, one from each group, and a comparison of treatment effects on a randomly chosen patient, can lead to different conclusions. This has been called Hand's paradox, and appears to have first been described by David J. Hand. Examples Example 1 Label the two treatments A and B and suppose that: Patient 1 would have response values 2 and 3 to A and B respectively. Patient 2 would have response values 4 and 5 to A and B respectively. Patient 3 would have response values 6 and 1 to A and B respectively. Then the probability that the response to A of a randomly chosen patient is greater than the response to B of a randomly chosen patient is 6/9 = 2/3. But the probability that a randomly chosen patient will have a greater response to A than B is 1/3. Thus a simple comparison of two independent groups may suggest that patients have a higher probability of doing better under A, whereas in fact patients have a higher probability of doing better under B. Example 2 Suppose we have two random variables, and , corresponding to the effects of two treatments. 
If we assume that and are independent, then , suggesting that A is more likely to benefit a patient than B. In contrast, the joint distribution which minimizes leads to . This means that it is possible that in up to 62% of cases treatment B is better than treatment A. References Statistical paradoxes
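Example 1 above can be checked with a short enumeration (a minimal sketch; the variable names are illustrative, not from the article):

```python
from itertools import product

# Example 1: each patient's responses to treatments A and B (from the article).
responses_a = [2, 4, 6]
responses_b = [3, 5, 1]

# Between-group comparison: P(response of a randomly chosen A-patient >
# response of an independently chosen B-patient), over all 9 pairs.
between = sum(a > b for a, b in product(responses_a, responses_b)) / 9

# Within-patient comparison: P(a randomly chosen patient responds better to A).
within = sum(a > b for a, b in zip(responses_a, responses_b)) / 3

print(between)  # 2/3: the between-group comparison favours A
print(within)   # 1/3: yet most individual patients do better under B
```

The two probabilities disagree about which treatment looks better, which is exactly the paradox described above.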
Hand's paradox
[ "Mathematics" ]
429
[ "Mathematical problems", "Statistical paradoxes", "Mathematical paradoxes" ]
74,527,935
https://en.wikipedia.org/wiki/Literalism%20%28music%29
Literalism in music is a compositional technique that emerged in the late 20th century, in which tangible physical objects are used to represent musical elements. With this approach, composers have crafted a diverse range of compositions, from classical orchestral works to seemingly structureless instances of noise. The technique was first developed in the 1960s and 1970s by composers such as Alvin Lucier, John Cage, and Pauline Oliveros. Interpretations Stephen Davies wrote a paper in defence of literalism that considers the emotional descriptions of music. He believes that literalism posits that when a piece of music is described as 'sad,' 'happy,' or with other emotion words, it actually possesses the expressive qualities we attribute to it. Davies's literalist approach leverages the concept of polysemy, whereby the meaning of emotion words in descriptions of expressive music is connected to their primary psychological sense. Davies identifies this connection through music's presentation of emotion-characteristics-in-appearance. Examples Alvin Lucier's "I Am Sitting in a Room" (1969) uses the physical properties of a room to create a soundscape. John Cage's "4′33″" (1952) is a composition consisting of four minutes and 33 seconds of silence. Pauline Oliveros' "Tuning Meditations" (1974) uses the physical properties of tuning forks to create a meditative soundscape. References Sources Emotion Polysemy Musical terminology
Literalism (music)
[ "Biology" ]
308
[ "Emotion", "Behavior", "Human behavior" ]
74,529,398
https://en.wikipedia.org/wiki/Nishith%20Gupta
Nishith Gupta (born January 3, 1977) is an Indian-German molecular biologist and parasitologist known for his pioneering work in the field of host–pathogen interaction and cell signalling. He is currently a Professor and Senior Fellow of the Wellcome Trust-DBT (India Alliance) at the Hyderabad campus of Birla Institute of Technology and Science, Pilani. Early life and education Nishith Gupta was born in Shahjahanpur, Uttar Pradesh, India. He pursued his early education at Rohilkhand University (BS) and Banaras Hindu University (MS), and later obtained his PhD in biochemistry from Leipzig University (Germany) in 2003. Afterward, he was a post-doctoral fellow at the National Jewish Medical Center in Denver, United States. In 2009, he became a research group leader at the Humboldt University and affiliate scientist at the Max Planck Institute for Infection Biology in Berlin. In 2017, he was awarded Doctor habilitatus (DSc) by Humboldt University in the field of biochemistry and molecular parasitology. In 2020, he returned to India, joining BITS Pilani as a Professor in the Department of Biological Sciences at the Hyderabad campus. Career and research Nishith Gupta's research focuses on metabolic interactions between intracellular parasites and host cells. His group has published several research articles focusing on carbon metabolism, membrane biogenesis and signaling pathways, primarily in the widespread model parasite Toxoplasma gondii, but also in Plasmodium and Eimeria species. Gupta's research on membrane biogenesis in apicomplexan parasites, involving the discovery of unique phospholipids, holds potential for advancing the development of vaccines against parasitic diseases. Notably, his work on phosphatidylcholine, phosphatidylinositol and phosphatidylethanolamine identified potential therapeutic targets, while phosphatidylthreonine and a novel phosphatidylserine decarboxylase could serve as toxoplasmosis biomarkers. 
In collaboration with researchers at the University of Melbourne and National Key Laboratory of Agricultural Microbiology in Wuhan, Gupta's group has revealed Toxoplasma's metabolic adaptability and identified proteins crucial for the parasite's development, suggesting potential drug targets for toxoplasmosis and related infections. In addition, his group uncovered Plasmodium's reliance on host-derived sugars, offering new directions for the development of anti-malarial drugs. In a pioneering collaboration with Peter Hegemann, Gupta's work on optogenetic regulation of cyclic nucleotide signaling within Toxoplasma gondii has paved the way for broader applications of light-activated proteins and biosensors in other intracellular pathogens, including parasites, bacteria, and viruses. Leadership and initiatives Throughout his career, Gupta has held leadership positions, including Heisenberg Fellow of the German Research Foundation (DFG) at the Humboldt University and Senior Fellow of the Wellcome Trust-DBT (India Alliance). He currently serves as an editorial board member of Animal Diseases, Microbial Cell, and Communications Biology, published by Nature Portfolio. In 2021, he founded the Intracellular Parasite Education and Research Labs (iPEARL) in the Biological Science division of BITS Pilani and an integrated One Health initiative – Veterinary And Medical Parasite Infection Research Ensemble (VAMPIRE). These initiatives seek to support the development of future scientists and leaders and contribute to the advancement of our knowledge about parasitic infections in clinical and veterinary settings. Awards and recognition In 2010, he was awarded an ESCMID research grant for his contribution to the field of clinical microbiology and infectious diseases. In 2014, Nishith Gupta received the Karl Asmund Rudolphi Medal, an award from the German Society for Parasitology. In 2015, he received the young scientist award in Microbiology from the Robert Koch Foundation. 
In 2020, he received the Outstanding Potential for Excellence in Research & Academics (OPERA) award from BITS Pilani. Banaras Hindu University honoured him with a distinguished alumni award in recognition of his achievements in academics and research. In 2023, the Indian Council of Medical Research awarded him an International Fellowship for Senior Biomedical Scientists in collaboration with SickKids Hospital and the University of Toronto. References External links Official Profile 21st-century German biologists Molecular biologists Microbiologists Living people 1977 births Leipzig University alumni German parasitologists Banaras Hindu University alumni 21st-century Indian scientists 21st-century biologists
Nishith Gupta
[ "Chemistry" ]
928
[ "Biochemists", "Molecular biology", "Molecular biologists" ]
74,531,061
https://en.wikipedia.org/wiki/Potential%20renal%20acid%20load
Potential renal acid load (PRAL) is a measure of the acid that the body produces after ingesting a food. This is different from pH, which is the acidity of a food before it is consumed. PRAL is also distinct from the food ash measurement of acidity. Some acidic foods actually have a negative PRAL value, meaning they reduce acidity in the stomach. A low PRAL diet (not to be confused with an alkaline diet) can lower acidity in the stomach, which can be helpful for people suffering from GERD (acid reflux). However, it does not lower the pH of blood and therefore cannot treat osteoporosis or other conditions. References Medical scales Metabolism Digestive system
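The PRAL of a food is conventionally estimated from its nutrient content. The following is a minimal sketch using the widely cited Remer and Manz (1995) coefficients; the coefficients, the function name, and the sample nutrient values are assumptions drawn from the general literature, not from the text above:

```python
# PRAL estimate per the commonly cited Remer & Manz (1995) coefficients.
# Negative values indicate a base-forming (alkalizing) food.
def pral_meq(protein_g, phosphorus_mg, potassium_mg, magnesium_mg, calcium_mg):
    """Potential renal acid load in mEq for the given nutrient amounts."""
    return (0.49 * protein_g
            + 0.037 * phosphorus_mg
            - 0.021 * potassium_mg
            - 0.026 * magnesium_mg
            - 0.013 * calcium_mg)

# Roughly banana-like nutrient values per 100 g (illustrative only):
print(round(pral_meq(1.1, 22, 358, 27, 5), 1))  # -6.9, i.e. base-forming
```

High-potassium foods dominate the negative terms, which is why many fruits and vegetables come out with a negative PRAL despite tasting acidic.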
Potential renal acid load
[ "Chemistry", "Biology" ]
148
[ "Digestive system", "Organ systems", "Cellular processes", "Biochemistry", "Metabolism" ]
74,532,253
https://en.wikipedia.org/wiki/Ryan%20Doyle%20%28artist%29
Ryan C. Doyle is a visual artist known for his large-scale fabricated sculptures, parade floats, and art cars, sometimes involving robotics, animatronics, pyrotechnics, and military technologies. He is from the Twin Cities, Minnesota, and resides in Detroit, Michigan, where he has contributed to permanent installations at The Lincoln Street Art Park and Recycle Here! recycling center. Ryan Doyle attended the Minneapolis College of Art and Design, and majored in 3D Design and Kinetic Sculpture. He was an apprentice for prop and animatronics artist Christian Ristow after college. Doyle previously lived and worked in New York, NY and Oakland, CA. Doyle has presented work at: Burning Man, Maker Faire, Coachella, AND festival, The Influencers, Big Day Out, Device Art, RoboDock, and Performa. He has also appeared on TV shows including: Junk Yard Wars, Monster Garage, JUNKies, and The Rock’n Roll Acid Test, where he was also on the concept team. Work Along with other noted projects, Doyle has contributed fabricated art to Burning Man since 2000, led the Burning Man Department of Public Works (DPW) Metal Shop in previous years, built art cars, and assisted with artist fabrication at the festival. Notable works Freak Beacon Freak Beacon is a permanent installation located at Lincoln Street Art Park. It was made by Ryan Doyle, Ben Wolfe, Jon Isbell, and Zeph Alcala in 2017. Carcroach Carcroach is a road-legal 2004 Honda Civic EX art car, also referred to as a Burning Man "mutant vehicle". Doyle led the construction of the art car with the "Detroitus" artist group. Starting in 2020 during the United States racial unrest, the Carcroach has been used as a vehicle in protests and demonstrations. It is also used in parades and other public events. Gon KiRin (GKR) Gon KiRin (GKR) is an art project created by Teddy Lo and Ryan Doyle. This "art car" was designed using metal and LED fixtures to create a dragon onto a deconstructed 1963 Dodge dump truck with a 318 engine. 
It weighs 8 tons, measuring approximately long and tall. The dragon is lit with of linear RGB LED lighting fixtures and multiple Traxon wall-washer units. Gon KiRin has two levels of climbing space with seating for 20+ people in the dragon's mouth and on a couch on its back where riders can move its tail back and forth. A 1,500-pound DJ booth mounted on a Marine Zodiac attack boat sits on the second story. The dragon features a hydraulic neck and a massive flamethrower in its mouth. Gon KiRin was built in five months by a dedicated 15-person team. It debuted at the 2010 Burning Man, was featured at the Maker Faire and the New York Halloween Parade in 2011, and returned to Burning Man in 2012. Swimming Cities of Serenissima Ryan Doyle was a contributing artist and fabricator for Swimming Cities of Serenissima (2009). Exhibitions Lead Artist/Fabricator "UFO san" ADHOC gallery, Wynwood District, Miami, 2009 "Williamsburg guy" collaboration with street artist UFO 907, ADA Gallery, Richmond VA, 2009 "Hand of Man tour" with Robochrist Ind, Gold Coast, Adelaide, Melbourne, Sydney, Perth, Australia, 2009 "Maker's Faire" with Robochrist Ind, San Mateo fairgrounds, San Mateo, California, 2008 “A Complete Mastery of Sinister Forces: Employed with Callous Disregard to Produce Catastrophic Changes in the Natural Order of Events” with SRL, Robodock, Amsterdam, Netherlands, 2007 “Movement” with Johnny Amerika, Stagecoach Country Music Festival, Palm Springs, California, 2007 “Movement” with Johnny Amerika, Coachella Valley Music and Arts Festival, Palm Springs, California, 2007 “Stairway to hell” with Robochrist, Device Art, Zagreb, Croatia, 2006 “Bad Kid’s Club#7” Blasthaus Gallery, San Francisco, California, 2006 Supporting Artist/Fabricator "Triptych: Trippy Threesome", Superchief Gallery, August 2016 "The Regurgitator" Gadgetoff, Staten Island New York, 2010 "The Influencers 2010" Center of Contemporary Culture of Barcelona, Catalonia, Spain, 2010 "Bici Muerte" Barcelona, Spain, 
2010 "Skype Video Booth" Reno, Nevada, 2010 "Squishy Universe" RipIons, Winwood District, Miami, 2009 "Swimming Cities of Serenissima" with SWOON. Adriatic sea through to Venice Biennalle, Venice Italy, 2009 "Secret Project Robot" group show fundraiser for the swimming cities. Brooklyn, New York, 2009 "Anonymous" group show fundraiser for the swimming cities. Lower East Side, New York, 2009 "Scope NY: Lincoln Center, New York City, 2009 "Punk Rock Doesn't Dead" Burning Man, Black Rock City, Nevada, 2009 "Distance Don't Matter" Space 538, Portland, Maine "Alleyoup Tour" Carnegie Mellon, Seneca Nation Reservation, and the Watermill Center, 2008 "Swimming Cities of the Switchbback Seas" with SWOON. Hudson River, NY, 2009 "Transformazium" group show, Secret Project Robot, Brooklyn, NY, 2009 “F’ck Art/Let’s Dance” with Japanther, Art Basel, Miami Beach, Florida, 2007 “Haute Living” with MSTRKRFT, Art Basel, Miami Beach, Florida, 2007 “Dinosaur Death Dance” with Japanther, ps122, NY, NY, 2007 “Ex Digitalis Machina” with Robochrist, Robodock, Amsterdam, Netherlands, 2007 “ Bloodbath” with Robochrist, Virgin Festival, Baltimore, Maryland, 2007 “Wheel of Power” with Derevo, Mannheim, Germany, 2007 “Power Tool Drag Races” Maker's Faire, San Mateo, California, 2007 “Art!” with Chris Hackett, 3rd Ward, Brooklyn, NY, 2006 “Bike Kill” with Black Label Bike Club, Brooklyn, NY, 2006 “The Regurgitator” Device Art, Zagreb, Croatia, 2006 “Pulse Jet Wheelchair” N.I.M.B.Y. Oakland, California, 2006 “Spider’s Ride” Coachella Valley Music and Arts Festival, Palm Springs, California, 2006 “Dangerous Curve” with SRL, Los Angeles, California, 2006 References American contemporary artists Living people 21st-century American artists Multimedia artists People from Minnesota Artists from Minnesota Year of birth missing (living people)
Ryan Doyle (artist)
[ "Technology" ]
1,371
[ "Multimedia", "Multimedia artists" ]
74,532,850
https://en.wikipedia.org/wiki/Russula%20amoenolens
Russula amoenolens, also known by its common name camembert brittlegill, is a member of the genus Russula. The species has a greyish-brown cap, with clear scoring along the edge. While inedible, the mushroom is known for its distinctive smell like camembert cheese. The mushroom often appears under oak trees from summer to autumn. Taxonomy The species was first described by French mycologist Henri Romagnesi in 1952. Distribution The species is primarily found in Europe, but has also been reported in the United States, Costa Rica, Morocco and New Zealand. See also List of Russula species References amoenolens Fungus species
Russula amoenolens
[ "Biology" ]
138
[ "Fungi", "Fungus species" ]
74,532,862
https://en.wikipedia.org/wiki/Albert%20Strickler
Albert Strickler (25 July 1887 – 1 February 1963) was a Swiss mechanical engineer recognized for contributions to our understanding of hydraulic roughness in open channel and pipe flow. Strickler proposed that hydraulic roughness could be characterized as a function of measurable surface roughness and described the concept of relative roughness, the ratio of hydraulic radius to surface roughness. He applied these concepts to the development of a dimensionally homogeneous form of the Manning formula. Life Albert Strickler was the only child of Albert Strickler Sr. (1853–1936) and Maria Auguste Flentjen (1863–1945) of Wädenswil, Canton of Zürich, Switzerland. He was married twice, the second time as a widower. Neither marriage produced children. Strickler graduated from ETH Zurich as a mechanical engineer in 1911. He earned a Ph.D. in 1917 while serving as the principal assistant to Professor Franz Prasil (1857–1929). Throughout his career, he was involved in the development of hydropower with interests ranging from hydraulic machinery to the regulation of river flows for inland navigation. Prior to World War II, he was the vice president of the Association of Exporting Electricity and a member of the board of directors of the Gotthard Electricity Mains AG, Altdorf, Uri. He subsequently worked as an engineering consultant until illness forced his withdrawal from practice in 1950. Strickler's Equation In 1923, Strickler published a report examining 34 formulas for the computation of flow in pipes and open channels and related experimental data. The report validated the Gauckler formula and, by inference, the Manning formula. Strickler proposed that the Ganguillet-Kutter n-value, used to characterize hydraulic roughness in the Manning formula, could be defined as a function of surface roughness, k_s. Strickler's equation introduces a new empirical coefficient which must be determined experimentally to define n-value. 
However, unlike n-value, which has units of T/L^(1/3), the surface roughness k_s has units of length and, at least in theory, is a measurable quantity. A measurable quantity is potentially useful for channel design and stream restoration engineering, where the design value of hydraulic roughness may be unknown. Strickler proposed that for a fixed boundary, surface roughness could be defined by the median grain size of a river's bed material. He also noted that the onset of sediment transport, the mobile boundary condition, increased the observed hydraulic roughness. For fixed-boundary, gravel-bed rivers, Strickler's equation can be quantified as n = d50^(1/6)/21.1, where d50 is the median grain size in meters. Later researchers produced variations on Strickler's equation, proposing different measures of surface roughness and corresponding variations in the empirical coefficient. For example, Strickler's equation has been used to estimate n-values for riprap-lined channels from stone gradation. The equation also describes the scaling of hydraulic roughness in Froude-scaled physical hydraulic models. In 1933, Johann Nikuradse published a study of hydraulic roughness in pipes that validated Strickler's observations of the influence of surface roughness in turbulent flows. Dimensionally Homogeneous Gauckler–Manning–Strickler Formula The Gauckler–Manning–Strickler formula is V = (1/n) R^(2/3) S^(1/2), where V is velocity in meters per second, n is n-value in seconds per meter^(1/3), k = 1/n is the Strickler coefficient, R is hydraulic radius in meters, and S is the dimensionless water surface slope. Substituting Strickler's equation for n-value and rearranging terms produces a dimensionally homogeneous form of the Manning formula: V = C (R/k_s)^(1/6) (g R S)^(1/2), where g is acceleration due to gravity in meters per second squared and C is a dimensionless coefficient. Apart from C, the first term on the right-hand side of the equation is the dimensionless ratio of hydraulic radius to roughness height, commonly referred to as relative roughness. 
The remaining term, (g R S)^(1/2), known as the boundary shear velocity, approximates the flow of water downhill under the influence of gravity and has units of velocity, i.e., L/T. From experimental data, Strickler proposed that the dimensionally homogeneous form of the Manning formula could be quantified as V = 6.74 (R/k_s)^(1/6) (g R S)^(1/2). In civil engineering practice, the Manning formula is more widely used than Strickler's dimensionally homogeneous form of the equation. However, Strickler's observations on the influence of surface roughness and the concept of relative roughness are common features of a variety of formulas used to estimate hydraulic roughness. Publications Source: Strickler, A. (1923). “Contributions to the question of velocity formula and the roughness numbers for rivers, channels and pipes.” Mitteilung 16, C. Mutzner, ed., Amt für Wasserwirtschaft, Bern, Switzerland (in German). Strickler, A. (1924). “Drag resistance of propeller boats, and their performance in inland navigation.” Mitteilung 17, Amt für Wasserwirtschaft, Bern, Switzerland (in German). Strickler, A. (1925). “The regulation of Rhine River between Strassburg and Basle.” Schweizerische Techniker-Zeitung, 22(33), 389–394 (in German). Strickler, A. (1926). “Studies on measurement of discharge.” Mitteilung 18, C. Mutzner, ed., Amt für Wasserwirtschaft, Bern, Switzerland (in German). Strickler, A. (1926). “Relation between the Swiss hydropower development and inland navigation.” Werft, Reederei, Hafen, 7(14), 345–346 (in German). Strickler, A. (1930). “The question of the coefficient in Chézy’s formula.” Gesamtbericht der 2. Weltkraftkonferenz, Berlin, 2, 137–152 (in German). References Notes See also Hager, W. H. (2014). “Albert Strickler: His life and work.” Wasser, Energie, Luft, 106(4), 297–302 (in German) Vischer, D. (1987). “Strickler formula, a Swiss contribution to hydraulics.” Wasser, Energie, Luft, 79(7/8), 139–142. 
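As a numerical illustration, the sketch below estimates an n-value from median grain size via the commonly cited form of Strickler's relation, n = d50^(1/6)/21.1, and then computes a mean velocity from the Manning formula. The coefficient 21.1 and the sample channel values are assumptions from the general literature, not figures taken from Strickler's report:

```python
# Strickler's relation: Manning's n from median grain size d50 (metres).
# The 21.1 coefficient is the value commonly quoted in the literature.
def manning_n_from_d50(d50_m):
    return d50_m ** (1 / 6) / 21.1

# Manning formula: mean velocity (m/s) from n, hydraulic radius R (m),
# and dimensionless water-surface slope S.
def manning_velocity(n, hydraulic_radius_m, slope):
    return (1 / n) * hydraulic_radius_m ** (2 / 3) * slope ** 0.5

n = manning_n_from_d50(0.05)         # 50 mm gravel bed (illustrative)
v = manning_velocity(n, 1.2, 0.002)  # R = 1.2 m, S = 0.002 (illustrative)

print(round(n, 3))  # 0.029
print(round(v, 2))  # 1.76
```

Note the weak one-sixth-power dependence on grain size: doubling d50 raises n by only about 12 percent, which is why a rough grain-size estimate often suffices for a first-pass roughness value.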
https://www.research-collection.ethz.ch/handle/20.500.11850/487522 Fluid dynamics Hydrology Piping Civil engineering Hydraulic engineering Sedimentology Geomorphology 1887 births 1963 deaths ETH Zurich alumni
Albert Strickler
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
1,392
[ "Hydrology", "Building engineering", "Chemical engineering", "Physical systems", "Construction", "Hydraulics", "Civil engineering", "Mechanical engineering", "Environmental engineering", "Piping", "Hydraulic engineering", "Fluid dynamics" ]
60,897,141
https://en.wikipedia.org/wiki/International%20Biodeterioration%20and%20Biodegradation%20Society
The International Biodeterioration and Biodegradation Society (IBBS) is a scientific society with an international membership. It is a charity registered in the UK. IBBS belongs to the Federation of European Microbiological Societies (FEMS), along with national organizations from European countries and appears in the Yearbook of International Organisations On-line, published by the Union of International Associations. The aim of IBBS is to promote and spread knowledge of Biodeterioration and Biodegradation. Conferences are arranged on specific topics and every three years an International Symposium covering a wide range of research in these scientific areas is organized; the last (IBBS17) was held in Manchester, UK. Members can apply for various grants or bursaries. The Society's journal, International Biodeterioration and Biodegradation, is published by Elsevier. Aims and early history The International Biodeterioration and Biodegradation Society (IBBS) is a learned scientific society with a worldwide membership coming from academia and industry. Its aims are to promote the sciences of Biodeterioration and Biodegradation by means of international meetings, conferences and publications. It appears in the Yearbook of International Organisations On-line, published by the Union of International Associations, in cooperation with the United Nations Economic and Social Council. It began as the Biodeterioration Society. The draft constitution of the Society was agreed in 1969 and the first annual general meeting was held on 9 July 1971. The aim of the Society was to promote the science of Biodeterioration, which is defined as any undesirable change in the properties of a material caused by the vital activities of living organisms. The economic importance of biodeterioration was discussed in an article by Dennis Allsopp, a former president and secretary of the Society. 
The first Biodeterioration Symposium was held prior to the inauguration of the Society, in Southampton, UK, in 1968. A copy of the abstracts is available at . The Second International Biodeterioration Symposium, and the first to be held under the auspices of the newly-formed Society, was held in Lunteren, The Netherlands, in September, 1971. The Third International Symposium, held at the University of Rhode Island, USA, in 1975, was designated the "Third International Biodegradation Symposium", this being the more recognized word in the USA. It was not until the 8th Symposium, however, in Windsor, Ontario, in 1990, that the term was reintroduced. Since then, all triennial events have been entitled "International Biodeterioration and Biodegradation Symposia" and the Society adopted the word into its name, becoming the International Biodeterioration and Biodegradation Society, or IBBS. Governance and publications IBBS is a charity registered in the UK. It has an executive body, the Council, with elected honorary officers, which meets three times each year. The Honorary Scientific Programme Officers collaborate on the organization of conferences and small meetings suggested by members. A Newsletter is produced under the aegis of its Honorary Managing Editor and emailed to members three times each year. IBBS has no physical headquarters, any physical records and publications being kept by Council members. Back issues of the Society's first publication, International Biodeterioration Bulletin (1965-1986, now discontinued) have been converted into digital format and made freely available on the website. From 1984, the Journal was published by the Commonwealth Agricultural Bureaux (CAB) in the UK, under ISSN 0265-3036. In 1987, the Society agreed with Elsevier that the journal "International Biodeterioration and Biodegradation" (ISSN 0964-8305) would be published by them and acknowledged as the Official Journal of IBBS. 
Reduced subscriptions are available to IBBS members. Membership and meetings The Society is a member of FEMS (Federation of European Microbiological Societies), but its members are not restricted to Europe. IBBS has a diverse membership with scientists from all over the world and with approximately equal numbers of male and female members. "Country Representatives" have the role of promoting IBBS in their countries and acting as a focal point for members in that area. Meetings have been held in the UK, USA, Austria, Canada, Czech Republic, France, Germany, Holland, India, Italy, Poland and Spain, with overarching international symposia held every 3 years. The last triennial International Biodeterioration and Biodegradation Symposium (IBBS17) was held in Manchester, UK, in September, 2017. The 2020 Symposium was delayed because of the COVID outbreak and was held on-line in September, 2021, www.ibbs18.org. References Biodegradation British biology societies International scientific organizations Microbiology societies
International Biodeterioration and Biodegradation Society
[ "Chemistry" ]
998
[ "Biodegradation" ]
60,897,406
https://en.wikipedia.org/wiki/Peccot%20Lectures
The Peccot Lecture (Cours Peccot in French) is a semester-long mathematics course given at the Collège de France. Each course is given by a mathematician under 30 years old who has distinguished themselves by their promising work. The course consists of a series of lectures in which the laureate presents their recent research. Being a Peccot lecturer is a distinction that often foreshadows an exceptional scientific career. Several future recipients of the Fields Medal and the Abel Prize, members of the French Academy of Sciences, and professors at the Collège de France are among the laureates. Some of the most illustrious recipients include Émile Borel and the Fields medalists Laurent Schwartz, Jean-Pierre Serre, and Alain Connes. Some Peccot lecturers may additionally be granted – exceptionally and irregularly – the Peccot prize or the Peccot–Vimont prize. History The Peccot lectures are among several events organized at the Collège de France that are funded and managed by bequests from the family of Claude-Antoine Peccot, a young mathematician who died at the age of 20. Several successive donations to the foundation (in 1886, 1894, and 1897) by Julie Anne Antoinette Peccot and Claudine Henriette Marguerite Lafond (widow Vimont) – respectively the mother and the godmother of Claude-Antoine Peccot – first funded an annual stipend and later annual lectureship appointments, awarded to promising mathematicians under 30. Since 1918, the Peccot lectures have been extended to two or three mathematicians each year. 
Laureates Laureates of the Peccot lecture and prize who subsequently obtained the Fields medal Laurent Schwartz: Peccot lecture and prize 1945–1946, Fields medal 1950 Jean-Pierre Serre: Peccot lecture and prize 1954–1955, Fields medal 1954 Alexandre Grothendieck: Peccot lecture 1957–1958, Fields medal 1966 Pierre Deligne: Peccot lecture 1971–1972, Fields medal 1978 Alain Connes: Peccot lecture and prize 1975–76, Fields medal 1982 Pierre-Louis Lions: Peccot lecture 1983–1984, Fields medal 1994 Jean-Christophe Yoccoz: Peccot lecture 1987–1988, Fields medal 1994 Laurent Lafforgue: Peccot lecture and prize 1995–1996, Fields medal 2002 Wendelin Werner: Peccot lecture 1998–1999, Fields medal 2006 Cédric Villani: Peccot lecture and prize 2002–2003, Fields medal 2010 Artur Avila: Peccot lecture 2004–2005, Fields medal 2014 Alessio Figalli: Peccot lecture 2011–2012, Fields medal 2018 Peter Scholze: Peccot lecture and prize 2012–2013, Fields medal 2018 Hugo Duminil-Copin: Peccot lecture 2014–2015, Fields medal 2022 All Peccot lectures See also List of mathematics awards References Collège de France Awards with age limits Early career awards Mathematics awards Mathematics education 1899 establishments in France University and college lecture series Recurring events established in 1899
Peccot Lectures
[ "Technology" ]
625
[ "Science and technology awards", "Mathematics awards" ]
60,898,046
https://en.wikipedia.org/wiki/Geolitica
Geolitica, formerly known as PredPol, Inc., is a predictive policing company that attempts to predict property crimes using predictive analytics. PredPol is also the name of the software the company produces. PredPol began as a project of the Los Angeles Police Department (LAPD) and University of California, Los Angeles professor Jeff Brantingham. PredPol has produced a patented algorithm, which is based on a model used to predict earthquake aftershocks. As of 2020, PredPol's algorithm is the most commonly used predictive policing algorithm in the U.S. Police departments that use PredPol are given printouts of jurisdiction maps that denote areas where crime has been predicted to occur throughout the day. The Los Angeles Times reported that officers are expected to patrol these areas during their shifts, as the system tracks their movements via the GPS in their patrol cars. Scholar Ruha Benjamin called PredPol a "crime production algorithm," as police officers then more heavily patrol these predicted crime zones, expecting to see crime, which leads to a self-fulfilling prophecy. In an August 2023 earnings call, the CEO of SoundThinking announced that the company had begun the process of absorbing parts of Geolitica, including its engineering team, patents, and customers. According to SoundThinking, Geolitica would cease operations at the end of 2023. Controversies PredPol was created in 2010 and was a leading vendor of predictive policing technology by 2012. Smithsonian magazine remarked in 2018 that no independent published research had ever confirmed PredPol's claims of its software's accuracy. In March 2019, the LAPD's internal audit concluded that there were insufficient data to determine if PredPol software helped reduce crime. In October 2018, Cory Doctorow described the secrecy around identifying which police departments use PredPol; the company does not share this information, and it is not accessible to the public. 
In February 2019 Vice followed up to report that many police departments secretly use PredPol. According to PredPol in 2019, 60 police departments in the U.S. used PredPol, most of which were mid-size agencies of 100 to 200 officers. In 2019, several cities reported cancelling PredPol contracts due to cost. The city of Mountain View, California spent more than $60,000 on the program between 2013 and 2018, and Hagerstown, Maryland spent $15,000 a year on the service until 2018. In 2016 Mic reported that PredPol inappropriately directs police to minority neighborhoods. In 2017 Santa Cruz placed a moratorium on the use of predictive policing technology. In 2020, the Santa Cruz City Council banned the use of predictive policing, a move that was supported by a coalition of civil liberties and racial justice groups. Institutions like the Brennan Center have urged transparency from police departments employing the technology, because in order for policymakers and auditors to evaluate these algorithms, audit logs of who creates and accesses the predictions need to be kept and disclosed. In April 2020, the Los Angeles Police Department, one of the oldest customers of PredPol, ended its program without being able to measure its effectiveness in reducing crime. In December 2021, a report was published by Gizmodo and The Markup indicating that PredPol perpetuated racial biases by targeting Latino and Black neighborhoods, while crime predictions for white middle- to upper-class areas were absent. In October 2023, an investigation by The Markup revealed the crime predictions generated by PredPol's algorithm for the Plainfield Police Department had an accuracy rate less than half of 1%. References External links Law enforcement organizations Crime prevention Big data Predictive analytics Government by algorithm Companies based in Santa Cruz, California Privately held companies based in California
Geolitica
[ "Technology", "Engineering" ]
771
[ "Government by algorithm", "Data", "Automation", "Big data" ]
60,899,807
https://en.wikipedia.org/wiki/Single%20particle%20extinction%20and%20scattering
Single Particle Extinction and Scattering (SPES) is a technique in physics that is used to characterise micro- and nanoparticles suspended in a fluid through two independent parameters, the diameter and the effective refractive index. A laser generates a Gaussian beam that is focused inside a flow cell. A particle passing through the focal region interferes with the beam, and the resulting signal is collected with a sensor. From this signal it is possible to derive the real part and the imaginary part of the forward-scattering field of each particle. The technique was developed to measure aerosols in air and particles in liquid. References Experimental physics Laser science
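The real part of the forward-scattering field mentioned above connects directly to extinction through the optical theorem. The snippet below is a generic scattering-theory illustration (van de Hulst convention), not the SPES inversion procedure itself:

```python
import math

def extinction_cross_section(S0_real, wavelength):
    """Optical theorem (van de Hulst convention):
    C_ext = (4*pi / k**2) * Re{S(0)}, where k = 2*pi / wavelength
    is the wavenumber in the surrounding medium."""
    k = 2 * math.pi / wavelength
    return (4 * math.pi / k ** 2) * S0_real

# Illustrative values: Re{S(0)} = 2.0 for a particle in a 632.8 nm beam
# (all lengths in metres).
C_ext = extinction_cross_section(2.0, 632.8e-9)
```

Equivalently, C_ext = (wavelength**2 / pi) * Re{S(0)}, so the measured real part of the forward field fixes how much light the particle removes from the beam.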
Single particle extinction and scattering
[ "Physics" ]
128
[ "Experimental physics" ]
60,899,825
https://en.wikipedia.org/wiki/Neptunian%20desert
The Neptunian desert or sub-Jovian desert is broadly defined as the region close to a star where no Neptune-sized exoplanets are found. This zone receives strong irradiation from the star, meaning the planets cannot retain their gaseous atmospheres: they evaporate, leaving just a rocky core. Neptune-sized planets should be easier to find in short-period orbits, and many sufficiently massive planets have been discovered with longer orbits from surveys such as CoRoT and Kepler. The physical mechanisms that result in the observed Neptunian desert are currently unknown, but have been suggested to be due to a different formation mechanism for short-period super-Earth and Jovian exoplanets, similar to the reasons for the brown-dwarf desert. Candidates NGTS-4b The exoplanet NGTS-4b, with mass of 20 , and a radius 20% smaller than Neptune, was found to still have an atmosphere while orbiting every 1.3 days within the Neptunian desert of NGTS-4, a K-dwarf star located 922 light-years from Earth. The atmosphere may have survived due to the planet's unusually high core mass, or it might have migrated to its current close-in orbit after the epoch of maximum stellar activity. LTT 9779 b LTT 9779 b is an ultra-hot Neptune in the Neptunian desert. It has an unusually high albedo of 0.8, and likely has a metal-rich atmosphere. Vega b Vega b, reported in 2021, is a candidate ultra-hot Neptune with a mass of ≥21.9 that revolves around Vega every 2.43 days, a mere from its luminous host star. The equilibrium temperature of the planet is a white-hot assuming a Bond albedo of 0.25, which, if confirmed, would make it the second-hottest exoplanet after KELT-9b. See also Brown-dwarf desert Planetary migration List of exoplanets discovered in 2019 Notes Planetary science
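The equilibrium temperature quoted for Vega b follows from a standard energy-balance formula. The stellar parameters below are rough Vega-like values chosen purely for illustration, not the article's figures:

```python
def equilibrium_temperature(T_star, R_star, a, bond_albedo):
    """Planet equilibrium temperature with full heat redistribution:
    T_eq = T_star * sqrt(R_star / (2*a)) * (1 - A)**0.25.
    T_star in kelvin; R_star and a in the same length unit."""
    return T_star * (R_star / (2 * a)) ** 0.5 * (1 - bond_albedo) ** 0.25

# Assumed Vega-like inputs: T_star ~ 9600 K, R_star ~ 2.8 R_sun,
# a ~ 0.05 au, with 1 au ~ 215 R_sun, and Bond albedo 0.25.
T_eq = equilibrium_temperature(9600.0, 2.8, 0.05 * 215.0, 0.25)
```

With these assumed inputs the formula gives a temperature of order 3,000 K; as a sanity check, Sun-Earth values (5772 K, a = 215 R_sun, zero albedo) return the familiar ~278 K.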
Neptunian desert
[ "Astronomy" ]
423
[ "Planetary science", "Astronomical sub-disciplines" ]
60,899,832
https://en.wikipedia.org/wiki/Secondary%20electrospray%20ionization
Secondary electro-spray ionization (SESI) is an ambient ionization technique for the analysis of trace concentrations of vapors, where a nano-electrospray produces charging agents that collide with the analyte molecules directly in the gas phase. In the subsequent reaction the charge is transferred and the vapors are ionized; most molecules get protonated (in positive mode) or deprotonated (in negative mode). SESI works in combination with mass spectrometry or ion-mobility spectrometry. History The fact that trace concentrations of gases in contact with an electrospray plume were efficiently ionized was first observed by Fenn and colleagues when they noted that tiny concentrations of plasticizers produced intense peaks in their mass spectra. However, it was not until 2000 that this problem was reframed as a solution, when Hill and coworkers used an electrospray to ionize molecules in the gas phase, and named the technique Secondary Electrospray Ionization. In 2007, the almost simultaneous works of Zenobi and Pablo Sinues applied SESI to breath analysis for the first time, marking the beginning of a fruitful field of research. With sensitivities in the low pptv range (10⁻¹²), SESI has been used in other applications where the detection of low-volatility vapors is important. Detecting low-volatility species in the gas phase is important because larger molecules tend to have higher biological significance. Low-volatility species have been overlooked because it is technically difficult to detect them, as they are present in very low concentrations and they tend to condense in the inner piping of instruments. However, as this problem is solved, and new instruments are able to handle larger and more specific molecules, the ability to perform on-line, real-time analysis of molecules naturally released into the air, even at minute concentrations, is attracting attention to this ionization technique. 
Principle of operation In the early days of SESI, two ionization mechanisms were under debate: the droplet-vapor interaction model postulates that vapors are adsorbed in the electrospray ionization (ESI) droplets and then re-emitted as the droplet shrinks, just as regular liquid-phase analytes are produced in electrospray ionization; on the other hand, the ion-vapor interaction model postulates that molecules and ions or small clusters collide, and the charge is transferred in this collision. Currently available commercial SESI sources operate at high temperature so as to better handle low-volatility species. In this regime, nanodroplets from the electrospray evaporate very quickly to form ion clusters in equilibrium. This results in ion-vapor reactions dominating the majority of the ionization region. Since the charging ions originate from nano-droplets, and no high-energy species are involved at any point in the ionization process or in the creation of the ionizing agents, fragmentation in SESI is remarkably low and the resulting spectra are very clean. This allows for a very high dynamic range, where low-intensity peaks are not affected by more abundant species. Some related techniques are laser ablation electrospray ionization, proton-transfer-reaction mass spectrometry and selected-ion flow-tube mass spectrometry. Applications The main feature of SESI is that it can detect minuscule concentrations of low-volatility species in real time, with molecular masses as high as 700 Da, falling in the realm of metabolomics. These molecules are naturally released by living organisms and are commonly detected as odors, which means that they can be analyzed non-invasively. SESI, combined with high-resolution mass spectrometry, provides time-resolved, biologically relevant information about living systems without the need to interfere with them. This makes it possible to seamlessly capture the time evolution of their metabolism and their response to controlled stimuli. 
SESI has been widely used for breath gas analysis, both for biomarker discovery and for in vivo pharmacokinetic studies: Biomarker discovery Bacterial infection The identification of bacteria by their volatile organic compound fingerprints has been widely reported. SESI-MS has proven to be a robust technique for the identification of bacteria from cell cultures, and of infections in vivo from breath samples, after the development of libraries of vapor profiles. Other studies include in vivo differentiation between the critical pathogens Staphylococcus aureus and Pseudomonas aeruginosa, and differential detection between antibiotic-resistant S. aureus and its non-resistant strains. Bacterial infection detection from other fluids such as saliva has also been reported. Respiratory diseases Many chronic respiratory diseases lack an appropriate method for monitoring and for differentiating among disease stages. SESI-MS has been used to diagnose and distinguish exacerbations from breath samples in chronic obstructive pulmonary disease. Metabolic profiling of breath samples has accurately differentiated healthy individuals from idiopathic pulmonary fibrosis or obstructive sleep apnea patients. Cancer SESI-MS is being studied as a non-invasive detection system for cancer biomarkers in breath. A preliminary study differentiated patients suffering from breast neoplasia. Skin Volatiles released from the skin can be detected by sampling the ambient gas surrounding it, providing a fast method for detecting metabolic changes in fatty acid composition patterns. Pharmacokinetics To study pharmacokinetics, a robust technique is necessary because of the complex nature of the sample matrix, be it plasma, urine, or breath. Recent studies show that secondary electrospray ionization (SESI) is a powerful technique for monitoring drug kinetics via breath analysis. Because breath is naturally produced, many datapoints can be readily collected. 
This greatly increases the number of data points that can be collected. In animal studies, this SESI approach can reduce animal sacrifice while yielding pharmacokinetic curves with unmatched time resolution. In humans, SESI-MS non-invasive analysis of breath can help study the kinetics of drugs at a personalized level. Monitoring exogenously introduced species allows their specific metabolic pathways to be tracked, which reduces the risk of picking up confounding factors. Time-resolved metabolic analysis Introducing known stimuli, such as specific metabolites, isotopically labeled compounds, or other sources of stress, triggers metabolic changes which can be easily monitored with SESI-MS. Some examples of this include cell culture volatile compound profiling, and metabolic studies in plants or the tracing of human metabolic pathways. Other applications Other applications developed with SESI-MS include: detection of illicit drugs; detection of explosives; food quality control monitoring. References Mass spectrometry Ion source External links Deep Breath Initiative Fossiliontech Sinueslab Breath Research Breath tests Mathematical and theoretical biology Mathematical modeling
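The breath-based pharmacokinetic monitoring described above amounts to fitting elimination kinetics to a densely sampled time series. A minimal sketch with simulated data, assuming simple first-order elimination (this is not any specific published SESI analysis):

```python
import math

def fit_elimination_rate(times, intensities):
    """Fit ln(I) = ln(I0) - k*t by ordinary least squares and
    return the first-order elimination rate constant k."""
    logs = [math.log(i) for i in intensities]
    n = len(times)
    t_mean = sum(times) / n
    y_mean = sum(logs) / n
    slope = (sum((t - t_mean) * (y - y_mean) for t, y in zip(times, logs))
             / sum((t - t_mean) ** 2 for t in times))
    return -slope

# Simulated noiseless breath signal with true k = 0.3 per hour:
ts = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0]
signal = [100.0 * math.exp(-0.3 * t) for t in ts]
k_hat = fit_elimination_rate(ts, signal)
```

With real, noisy breath data the same log-linear fit (or a nonlinear compartmental model) would be applied to the time trace of each mass-spectral feature.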
Secondary electrospray ionization
[ "Physics", "Chemistry", "Mathematics" ]
1,384
[ "Mathematical modeling", "Spectrum (physical sciences)", "Mathematical and theoretical biology", "Instrumental analysis", "Applied mathematics", "Mass", "Ion source", "Mass spectrometry", "Matter" ]
60,901,165
https://en.wikipedia.org/wiki/Q-Gaussian%20process
q-Gaussian processes are deformations of the usual Gaussian distribution. There are several different versions of this; here we treat a multivariate deformation, also addressed as q-Gaussian process, arising from free probability theory and corresponding to deformations of the canonical commutation relations. For other deformations of Gaussian distributions, see q-Gaussian distribution and Gaussian q-distribution. History The q-Gaussian process was formally introduced in a paper by Frisch and Bourret under the name of parastochastics, and also later by Greenberg as an example of infinite statistics. It was mathematically established and investigated in papers by Bozejko and Speicher and by Bozejko, Kümmerer, and Speicher in the context of non-commutative probability. It is given as the distribution of sums of creation and annihilation operators in a q-deformed Fock space. The calculation of moments of those operators is given by a q-deformed version of a Wick formula or Isserlis formula. The specification of a special covariance in the underlying Hilbert space leads to the q-Brownian motion, a special non-commutative version of classical Brownian motion. q-Fock space In the following q ∈ [−1, 1] is fixed. Consider a Hilbert space H. On the algebraic full Fock space F_alg(H) = ⊕_{n≥0} H^{⊗n}, where H^{⊗0} = CΩ with a norm-one vector Ω, called the vacuum, we define a q-deformed inner product as follows: ⟨f_1 ⊗ ⋯ ⊗ f_n, g_1 ⊗ ⋯ ⊗ g_m⟩_q = δ_{nm} Σ_{σ ∈ S_n} q^{inv(σ)} Π_{k=1}^{n} ⟨f_k, g_{σ(k)}⟩, where inv(σ) is the number of inversions of the permutation σ ∈ S_n. The q-Fock space F_q(H) is then defined as the completion of the algebraic full Fock space with respect to this inner product. For −1 < q < 1 the q-inner product is strictly positive. For q = 1 and q = −1 it is positive, but has a kernel, which leads in these cases to the symmetric and anti-symmetric Fock spaces, respectively. 
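The combinatorial ingredient of the q-inner product, the weighting of each permutation by q raised to its number of inversions, is easy to illustrate; the function names below are illustrative:

```python
def inversions(perm):
    """Number of pairs (i, j) with i < j but perm[i] > perm[j]."""
    return sum(1 for i in range(len(perm))
                 for j in range(i + 1, len(perm))
                 if perm[i] > perm[j])

def q_weight(perm, q):
    """Weight q**inv(perm), the factor a permutation contributes
    to the q-deformed inner product of permuted basis tensors."""
    return q ** inversions(perm)

# The identity has no inversions; the reversal of (0, 1, 2) has three:
w_id = q_weight((0, 1, 2), 0.5)   # 0.5**0 == 1.0
w_rev = q_weight((2, 1, 0), 0.5)  # 0.5**3 == 0.125
```

Summing q_weight over all of S_n yields the q-deformed factorial [n]_q! = Π_{k=1}^{n} (1 + q + ⋯ + q^{k−1}), the squared q-norm of the n-fold tensor power of a unit vector.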
For f ∈ H we define the q-creation operator a*(f), given by a*(f) f_1 ⊗ ⋯ ⊗ f_n = f ⊗ f_1 ⊗ ⋯ ⊗ f_n (and a*(f) Ω = f). Its adjoint (with respect to the q-inner product), the q-annihilation operator a(f), is given by a(f) f_1 ⊗ ⋯ ⊗ f_n = Σ_{k=1}^{n} q^{k−1} ⟨f, f_k⟩ f_1 ⊗ ⋯ ⊗ f_{k−1} ⊗ f_{k+1} ⊗ ⋯ ⊗ f_n (and a(f) Ω = 0). q-commutation relations Those operators satisfy the q-commutation relations a(f) a*(g) − q a*(g) a(f) = ⟨f, g⟩ · 1. For q = 1, q = 0, and q = −1 this reduces to the CCR-relations, the Cuntz relations, and the CAR-relations, respectively. With the exception of the case q = 1, the operators a(f) are bounded. q-Gaussian elements and definition of multivariate q-Gaussian distribution (q-Gaussian process) Operators of the form s(f) = a(f) + a*(f) for f ∈ H are called q-Gaussian (or q-semicircular) elements. On F_q(H) we consider the vacuum expectation state τ(T) = ⟨Ω, TΩ⟩, for operators T on the q-Fock space. The (multivariate) q-Gaussian distribution or q-Gaussian process is defined as the non-commutative distribution of a collection of q-Gaussians s(f_i), i ∈ I, with respect to the vacuum expectation state. The joint distribution of the s(f_i) with respect to τ can be described in the following way: for any i(1), …, i(k) ∈ I we have τ(s(f_{i(1)}) ⋯ s(f_{i(k)})) = Σ_{π ∈ P_2(k)} q^{cr(π)} Π_{{r,s} ∈ π} ⟨f_{i(r)}, f_{i(s)}⟩, where P_2(k) denotes the pair-partitions of {1, …, k} (so the moment vanishes for odd k) and cr(π) denotes the number of crossings of the pair-partition π. This is a q-deformed version of the Wick/Isserlis formula. q-Gaussian distribution in the one-dimensional case For p = 1, the q-Gaussian distribution is a probability measure on the interval [−2/√(1−q), 2/√(1−q)], with analytic formulas for its density. For the special cases q = 1, q = 0, and q = −1, this reduces to the classical Gaussian distribution, the Wigner semicircle distribution, and the symmetric Bernoulli distribution on ±1. The determination of the density follows from old results on corresponding orthogonal polynomials. Operator algebraic questions The von Neumann algebra generated by the s(f_i), for f_i running through an orthonormal system of vectors in H, reduces for q = 0 to the famous free group factors. Understanding the structure of those von Neumann algebras for general q has been a source of many investigations. It is now known, by work of Guionnet and Shlyakhtenko, that at least for finite I and for small values of q, the von Neumann algebra is isomorphic to the corresponding free group factor. References Probability distributions
Q-Gaussian process
[ "Mathematics" ]
821
[ "Functions and mappings", "Mathematical relations", "Mathematical objects", "Probability distributions" ]
60,901,743
https://en.wikipedia.org/wiki/Environmental%20impact%20of%20illicit%20drug%20production
The environmental impacts caused by the production of illicit drugs are an often neglected topic when analysing the effects of such substances. Due to the clandestine nature of illicit drug production, its effects can be highly destructive yet difficult to detect and measure. The consequences differ depending upon the drug being produced but can be largely categorised into impacts caused by natural drugs and impacts caused by synthetic or semi-synthetic drugs. Natural drugs refer to drugs which are primarily extracted from a natural source, such as cocaine or cannabis. Synthetic drugs, such as methamphetamine and MDMA, are produced from materials that cannot be found in nature, while semi-synthetic drugs are made from both natural and synthetic materials. Drug policy is a large determinant of how organisations produce drugs and, thereby, of how their processes affect the environment, prompting government bodies to analyse current drug policy. Solutions to these environmental impacts are inevitably synonymous with solutions to overall illicit drug production; however, many have noted the reactionary measures undertaken by government bodies and emphasize the need for preventative measures instead. Environmental impacts of natural drugs Natural drugs are those whose constituents are primarily extracted from natural sources such as cocaine or marijuana. The environmental impacts associated with such drugs include deforestation, watershed depletion and greenhouse gas emissions. Marijuana With the ease of access to marijuana increasing due to legalisation in parts of the United States and in Canada, many have noted the increasing importance of measuring its possible environmental ramifications. As marijuana was previously illegal in these areas, there is now an opportunity to measure these outcomes. However, there is already a variety of known consequences caused by the production of marijuana. 
Watershed depletion is a serious issue that can be caused by marijuana production. Marijuana cultivation requires large amounts of water: a single plant can require 8-10 gallons of water per day. This sparks concern, especially in areas susceptible to water shortages such as California. California is the largest producer of marijuana in the U.S. yet has had issues surrounding water supply and sanitation for a number of years. In 2012, it was estimated that at least 3,177,241,050 gallons of water were used in the production of marijuana in California. Thus, marijuana production can have severe implications for watershed levels, with a number of organisations calling for stricter regulations as marijuana becomes more widespread. The production of marijuana also requires large amounts of energy due to the controlling of environmental conditions. This further causes high levels of greenhouse gas emissions and energy consumption. "In 2015, the average electricity consumption of a 5,000-square-foot indoor facility in Boulder County was 41,808 kilowatt-hours per month, while an average household in the county used about 630 kilowatt-hours". Such high levels of energy consumption, in turn, result in high greenhouse gas emissions. In 2016, it was estimated that on average the production of one kilogram of marijuana produced 4,600 kilograms of carbon dioxide. Thus, marijuana cultivation produces 15 million metric tons of carbon dioxide in the United States in a single year. Cocaine Most of the world's cocaine is produced in South America, particularly in the Andean region. The environmental destruction caused by the production of cocaine has been well documented, with reports made by the UN and other government bodies. Due to the illegal nature of coca production, farmers make little effort at soil conservation and sustainability practices, as seen in the high mobility and short life of coca plots in Colombia. 
One of the major implications of cocaine production is deforestation, as large areas of forest are cleared for coca cultivation. The UNODC approximated that 97,622 hectares of primary forest were cleared for coca cultivation during 2001-2004 in the Andean region. This further causes habitat destruction, especially in biodiversity hotspots, areas rich in a variety of species. Such areas are chosen for coca cultivation due to their remote locations, minimising the chances of detection. Deforestation has the further impact of soil erosion, which further inhibits the survival of native species. The use of pesticides can also severely affect the environment. Farmers are able to use unregulated and highly toxic pesticides due to the clandestine nature of drug production. The use of such pesticides can have both direct and indirect effects on the ecosystem: lethal levels of exposure directly cause the death of fauna, and the poison is carried up the food chain, where secondary feeders that consume the poisoned animals are also affected. Furthermore, non-lethal levels of exposure can cause weaker immune system development and neurological issues, further increasing mortality rates. Environmental impacts of synthetic/semi-synthetic drugs Synthetic drugs are those which are primarily derived from inorganic substances. Semi-synthetic drugs are a hybrid of both synthetic and natural drugs; however, as both synthetic and semi-synthetic drugs undergo an array of chemical processes during production, their environmental impacts are quite similar. Methamphetamine Methamphetamine or meth is a synthetic drug which can be produced on a domestic scale. The dumping of toxic waste is a major issue associated with the production of meth. It has been approximated that for each pound of meth produced, five pounds of toxic waste are also generated. 
The methods of disposal of these substances can be extremely damaging to the environment, as producers may simply pour them down the sink or toilet. However, such methods allow producers to be more easily detected; thus producers sometimes adopt more environmentally destructive methods, such as leaving waste in remote locations like forests, or burying it underground, where the waste can harm flora and fauna. Producers have also used specialised trucks or vans, equipped with pumps and hoses, to drain waste onto the road as the vehicle moves. This decreases their chance of detection yet spreads the damage caused by the toxic waste. The production of meth also produces a number of toxic gases that can harm the human respiratory system and devastate the environment. High levels of phosphine gas can be produced during meth production, which can cause headaches, convulsions and death. The production of meth further produces hydrogen chloride gas, which when released into the environment can damage metal structures and buildings. Hydrogen chloride is also highly soluble and readily dissolves into water bodies, where it can harm aquatic life. This high solubility also causes it to be quickly washed out of the atmosphere by rain, contributing to acid rain, which at high levels can have drastic impacts on the environment. Environmental impacts of drug policy Drug policy is a determining factor in drug production, as it partially dictates the methods through which illicit drugs are produced and transported. Thus, when determining such policies, the environmental consequences are sometimes overlooked, resulting in effects which magnify the damage done to the environment. This is apparent in coca cultivation in the Andean Region, where drug policy has forced producers into more remote locations to avoid detection. In such ungoverned areas, producers maximise their damage through deforestation and toxic pesticide use, destroying these resource-rich areas. 
These effects of drug policy have been noted by a number of government bodies, including the UNDP, which stated that some eradication campaigns “have not eradicated illicit production but rather displaced it to new areas of greater environmental significance.” Policies involving drug trafficking have also had adverse effects on the environment. One key aspect of drug trafficking is the need to establish landing areas, usually by clearing land and deforestation. Once established, such areas further accelerate other illegal trafficking activities such as wildlife, marine and timber trafficking, as drug traffickers may diversify their operations to expand their networks. Furthermore, as government policies restrict the movement of traffickers, they must find alternate and more remote routes to transport their materials. These alternate routes typically require further land clearing and habitat destruction, thus further harming the environment. Drug policy can further inhibit biodiversity conservation. As drug policy can displace the actions of traffickers and producers into more biodiverse locations, their impact on global biodiversity is magnified. As producers relocate to more remote locations, their deforestation and dumping of toxic materials such as kerosene and hydrochloric acid can greatly damage biodiversity. Furthermore, anti-drug initiatives and policies can drain funding and diminish the resources available for environmental protection initiatives. Areas known for illicit drug production can further discourage tourism, conservation activists and local law enforcement. This allows drug producers to conduct themselves with more freedom and thereby increase their damage. Furthermore, the lack of tourism in such areas limits the revenue of local conservation efforts and the transparency of these issues. 
Possible solutions Due to the nature of illicit drug production, solutions to these environmental issues are inevitably synonymous with overall drug production prevention. However, by taking environmental impacts into account when formulating drug policies, it is possible to better mitigate this damage. Changes in approach have been highlighted as a key method of targeting these environmental concerns. This involves analysing the environmental impacts when assessing the effects of illicit drugs and informing the illicit drug consumer base and law-makers of these impacts. Improved cooperation between international, national and regional-level organisations allows for a better-informed and more sustainable solution to drug production. Previous collaborative efforts have involved more reactionary responses, which displaced drug operations rather than preventing them. A more integrated response between different organisations allows for more preventative measures to be implemented. Furthermore, as much of the environmental impact occurs in transit countries, not just countries of origin, greater integration between different organisations could allow preventative policies to be established in transit countries. An example of this improved cooperation can be seen in Plan Colombia, a collaboration between the U.S. and the Colombian Government to combat drug production. The project saw a decrease in coca cultivation in Colombia from 160,000 hectares to 48,000 hectares and a decrease in the drug-related economy from US$7.5 billion to US$4.5 billion from 2008-2013. Quelling the demand for illicit drugs has also been considered as a solution to the environmental impacts involved with drug production. That is, by reassessing current anti-drug propaganda and intertwining drug-related health issues with the environmental impacts of illicit drug production, a decrease in demand may be achieved. 
Shifting the approach of current advertisements to focus on such issues may better inform the public, and consumers of illicit drugs, of these environmental problems. This notion can be carried further into children's drug education, where placing greater emphasis on the environmental effects alongside the traditional and well-known health effects may incite a greater reaction. It has also been suggested that, besides just revealing these issues, it is important for advertising bodies to communicate the contribution individuals make to them by consuming illicit drugs, thereby increasing their sense of self-value and lessening their dependence on illicit drugs. Enlightening more consumers about such problems may also accrue a larger audience and support for anti-drug solutions. However, even if such a response fails to stem demand, shedding light on these issues may foster concern among voters, who may still appeal to legislators. Sources External links Pollution Environmental science Water pollution Environmental impact of products
Environmental impact of illicit drug production
[ "Chemistry", "Environmental_science" ]
2,217
[ "nan", "Water pollution" ]
60,902,291
https://en.wikipedia.org/wiki/Protein%20detection
Protein detection is used for clinical diagnosis, treatment and biological research. It evaluates the concentration and amount of different proteins in a particular specimen, and different methods and techniques exist to detect protein in different organisms. Protein detection techniques have been used to detect protein in different categories of food, such as soybean (bean), walnut (nut), and beef (meat); the appropriate detection method varies with the properties of the food. Protein detection also has different applications in different fields. Protein Detection in Soybeans, Walnuts, Beef Purpose for protein detection in food Food allergies have become a common disease. Clinically, food allergies present a range of signs, from mild symptoms such as itching in the mouth and swelling of the lips to critical anaphylactic reactions with fatal consequences. According to statistics, about 2% of adults and 8% of children in industrialized countries experience food hypersensitivity. To reduce potentially life-threatening reactions, strictly avoiding consumption of the allergenic food is the only valid therapy. Therefore, sufficient labelling of potentially allergenic ingredients in food products is crucial and indispensable, and it can be monitored through protein detection. Rationale for protein detection in soybeans The soybean is consumed in processed foods all over the world because of its high nutrient content and ease of processing, for example in soybean milk, tofu, meat alternatives, and brewed soybean products. Microorganisms are used in the brewing process for brewed soybean products such as miso, soy sauce, natto and tempeh, and allergenicity persists in these brewed products. 
In Asian countries, these brewed soybean products are popular and traditional. The number of patients with soybean allergy and the range of uses for soybean have both increased in recent years. Previous method for protein detection in soybeans During the last 30 years, a broad range of methods and techniques were tested to detect soybean protein. These methods and techniques can be transferred to a laboratory environment easily. The original and traditional methods were designed and tested in the molecular biology field. The enzyme-linked immunosorbent assay (ELISA), with its high sensitivity and specificity, is a reliable method for investigating soybean proteins; it applies an antibody that recognizes the target molecule, which has been identified as a vacuolar protein with a molecular mass of 34 kDa. ELISA shows sufficient repeatability and reproducibility in laboratory assessment, but it cannot detect soybean protein present in brewed soybean products. Various studies have assessed soybean protein through ELISA; however, poor reproducibility, cross-reactivity and low repeatability make the measurements difficult to rely on for processed foods, and these methods cannot detect soybean protein remaining in brewed soybean products. Current method for protein detection in soybeans Compared with the previous method, the current extraction technique involves a heating process to detect soybean protein in brewed products. Since the heating process inactivates the microbial proteolytic enzymes, the current extraction technique can be used to detect soybean protein in brewed soybean products. The heating extraction technique proceeds as follows. To produce good dispersibility of the specimen in the extraction buffer for the heating process, 19 mL of extraction buffer is mixed with five glass beads of five millimeter diameter and 1 g of food homogenate. 
The mixture is extracted by heating in a water bath at test temperatures of 25, 40, 60, 80 or 100 °C for 5, 15 or 60 minutes, with vortexing every 5 minutes. Food extracts produced by the previous and current techniques are centrifuged for 20 minutes at 3,000 × g, and the supernatant is passed through filter paper. The filtrate is collected and analyzed immediately as the food specimen extract. Calibration standard solutions must be prepared to detect soybean proteins by ELISA. A 300 mg soybean powder specimen is mixed with 20 mL of a solution containing 0.5 M NaCl, 0.5% SDS, 20 mM Tris-HCl (pH 7.5), and 2% 2-ME. The mixture is shaken at room temperature for 16 hours for extraction. The extract is centrifuged for 30 minutes at 20,000 × g, and the supernatant is filtered through a 0.8-μm microfilter. The protein content of the initial extract is measured with a 2-D Quant Kit. The initial extract is then diluted to 50 ng/mL in a solution of 0.1% SDS, 0.1% 2-ME, 0.1 M PBS (pH 7.4), 0.1% BSA, and 0.1% Tween 20, and stored at 4 °C as the calibration standard solution for ELISA. Conclusion for the current protein detection method in soybeans The detection limit of the ELISA is 1 μg/g, yet it cannot assess soybean proteins in brewed soybean products, because the soybean proteins are degraded by the microbial proteolytic enzymes present in those products; these enzymes likely prevent the detection of soybean protein in brewed soybean products. The current extraction technique controls this protein degradation: microbial proteolytic enzymes can generally be inhibited by heating, pH, and protease inhibitors.
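As a rough illustration of the calibration dilution step described above, the volumes follow from the standard relation C1·V1 = C2·V2. This is a minimal sketch, not part of the published protocol; the stock concentration and final volume in the example are hypothetical values chosen for illustration.

```python
def dilution(stock_ng_per_ml: float, target_ng_per_ml: float, final_ml: float):
    """Return (stock_volume_ml, diluent_volume_ml) needed to reach the
    target concentration in the given final volume, using C1*V1 = C2*V2."""
    if target_ng_per_ml > stock_ng_per_ml:
        raise ValueError("cannot dilute to a higher concentration")
    stock_ml = target_ng_per_ml * final_ml / stock_ng_per_ml
    return stock_ml, final_ml - stock_ml

# Hypothetical example: a 1 mg/mL (1e6 ng/mL) extract diluted to the
# 50 ng/mL calibration standard in a 10 mL final volume.
stock, diluent = dilution(1e6, 50.0, 10.0)  # 0.0005 mL stock, 9.9995 mL diluent
```

In practice such a large dilution factor would be done in serial steps; the single-step calculation here only shows the arithmetic.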
Variable heating temperatures and extraction times were examined to determine the ideal conditions for controlling microbial proteolytic enzymes; the optimal heating conditions proved to be 80 °C for 15 minutes, so the heating temperature and time for the current extraction technique are set to 80 °C and 15 minutes. The current technique restrains the degradation of soybean proteins by microbial proteolytic enzymes and can detect soybean protein in most brewed soybean products. Combined with heating, it is a useful and sensitive tool for detecting soybean protein in processed foods and brewed soybean products, and, being unaffected by microbial proteolytic enzymes, it is appropriate for quantifying soybean protein in processed foods. The proposed extraction and ELISA technique can reliably support labeling systems for soybean ingredients. Rationale for protein detection in walnuts English walnuts (Juglans regia) and black walnuts (Juglans nigra) are the two main types of walnuts on the market worldwide. Walnuts are a valuable ingredient because of their favorable health attributes, sensory properties and consumer appeal. Shelled walnuts are widely used as ingredients in foods such as salads, ice cream, bread and meat alternatives. Walnut oil is a good source of mono- and polyunsaturated fatty acids and tocopherols, and it is used as a food ingredient, particularly in salad dressings. Walnut hull extract serves as a dietary supplement and a seasoning in the food industry. In addition, ground walnut shells are used industrially as extenders, carriers, fillers and abrasives, for example in jet cleaners. Tree nuts are regarded as among the most common allergenic foods around the world.
Allergic reactions to tree nuts can be severe and life-threatening. Individuals with walnut allergies can suffer fatal or near-fatal reactions from the unintended ingestion of walnuts, other tree nuts, or food contaminated with walnut ingredients. The only effective way to prevent walnut allergic reactions is to avoid walnuts in the diet, so appropriate labeling of processed foods containing walnut ingredients is critical for protecting walnut-allergic consumers. Several circumstances can lead to undeclared walnut residues, such as equipment shared between walnut-containing and other formulations, or undeclared walnuts in ingredients. Because walnut-allergic individuals can react to low (milligram) amounts of walnut, the enzyme-linked immunosorbent assay (ELISA) can be used to detect walnut residues with great sensitivity and specificity. Several other techniques can also detect walnut residues, such as the polymerase chain reaction (PCR) and an ELISA based on polyclonal antisera raised against a particular 2S albumin walnut protein. Current method for protein detection in walnuts The sandwich-type walnut ELISA is the current method for detecting walnut protein. It can serve as a critical analytical technique for food manufacturers and regulatory agencies in hygiene validation and the assessment of allergen control strategies. Immunogen preparation A mixture of several brands of English walnuts is used to produce the immunogen. The mixed walnuts are washed six times with deionized distilled water and air-dried. A portion of the walnuts is dry-roasted for 10 minutes at 270 °F. The roasted or raw walnuts are split, frozen, and ground to a fine particle size in a blender. The ground roasted and ground raw walnuts are defatted and filtered, and the powdered raw or roasted walnuts are then air-dried thoroughly.
Both the defatted powdered raw and roasted walnuts can be used as immunogens. Protein concentrations of the defatted powdered immunogens, determined by the Kjeldahl method, are 46.4% for raw defatted walnut and 34.9% for roasted defatted walnut. Polyclonal antibody production and titer determination Polyclonal antibodies are generated in one sheep, one goat, and three New Zealand white rabbits for each immunogen. The initial subcutaneous injections are given to the ten animals at multiple sites with the defatted powdered immunogen and Freund's Complete Adjuvant. Titer values of the collected antisera are evaluated by a noncompetitive ELISA with walnut protein extracted from the appropriate raw or roasted immunogen. Cross-reactivity study and ELISA method A variety of tree nuts, seeds, legumes, fruits and food ingredients are assessed for cross-reactivity in the walnut ELISA. The modified sandwich ELISA detects walnut residues using sheep anti-roasted-walnut and rabbit anti-roasted-walnut antisera as the capture and detector antibodies, respectively. Conclusion for the current protein detection method in walnuts Walnut residues can be detected with a 1 ppm quantitation limit in a variety of foods such as ice cream, muffins, cookies and chocolate. The walnut ELISA can be used to detect possible walnut residues in other foods produced on shared equipment, and to evaluate sanitation procedures aimed at removing walnut residues from shared equipment in the food industry. Rationale for protein detection in beef Animal feedingstuffs containing processed animal protein (PAP) contaminated with prions have been reported to cause BSE infection in cattle. Processed animal protein is currently prohibited as a feed material for all farmed animals, with the exception of fish meal.
In addition, infection from the consumption of undercooked raw beef has been reported, with Enterohemorrhagic Escherichia coli O157:H7 an important pathogen. Methods for protein detection in beef For processed animal protein, a specific polymerase chain reaction (PCR) based procedure, run in parallel with a microscopic method, is used to detect PAP in feedingstuffs. The detection limit of the PCR has been evaluated at 0.05% for beef, 0.1% for pork and 0.2% for poultry meat-and-bone meal. The microscopic method can identify 66.13% of doubtful feedingstuff samples. Combining the results of the microscopic and PCR methods shows that molecular biology methods can serve as a supplementary approach for PAP detection. For undercooked raw beef, sensitive and rapid detection techniques for E. coli O157:H7 are important to the meat industry to ensure a safe beef supply. Three techniques can be used on raw ground beef to detect E. coli O157:H7: the VIDAS ultraperformance E. coli test (ECPT UP), a noncommercial real-time (RT) PCR method, and the U.S. Department of Agriculture, Food Safety and Inspection Service (USDA-FSIS) reference method. Individual 25 g raw beef samples and 375 g raw beef composites can be examined to determine optimal enrichment times and testing efficacy. Six hours of enrichment is sufficient for both the VIDAS ECPT UP and RT-PCR methods with 25 g samples of each type of raw ground beef, but 24 hours of enrichment is required for 375 g samples. Both the VIDAS ECPT UP and RT-PCR methods produce results comparable to those of the USDA-FSIS reference method after 18 to 24 hours of enrichment. Low levels of E. coli O157:H7 can be detected in 25 g of various types of raw ground beef with these methods, and E. coli O157:H7 can also be detected in composite raw ground beef of up to 375 g.
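The enrichment-time rule reported above (6 hours for 25 g samples, 24 hours for 375 g composites) can be sketched as a simple decision function. Treating 25 g as the exact cutoff between the two regimes is an assumption made for this sketch; the studies only report the two sample sizes tested.

```python
def enrichment_hours(sample_g: float) -> int:
    """Minimum enrichment time (hours) before testing by the VIDAS ECPT UP
    or RT-PCR methods: 6 h suffices for 25 g samples, while larger
    composites (e.g. 375 g) need 24 h. The 25 g cutoff is an assumption."""
    return 6 if sample_g <= 25 else 24

hours_sample = enrichment_hours(25)      # 6
hours_composite = enrichment_hours(375)  # 24
```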
Implications of protein detection Protein detection in cells of the human rectal mucous membrane can indicate colorectal diseases such as colon tumours and inflammatory bowel disease. In astrobiology, protein detection based on antibody microarrays can point to signatures of life, for example organic and biochemical compounds elsewhere in the solar system. Protein detection can reliably support the soybean protein labeling system for processed foods and thus protect consumers; labeling backed by protein detection has proved to be the most important safeguard, and detailed labeling of soybean ingredients in refined foods is required to protect the consumer. References External links Fermentation improves nutritional value of beans Research Collaboratory for Structural Bioinformatics (see also Molecule of the Month, presenting short accounts on selected proteins from the PDB)
Protein detection
[ "Chemistry", "Biology" ]
3,012
[ "Biochemistry methods", "Protein methods", "Protein biochemistry" ]
60,902,985
https://en.wikipedia.org/wiki/Pok%C3%A9mon%20Sleep
Pokémon Sleep is a sleep-tracking game that rewards the user with Pokémon depending on the quality of their sleep. The app was first released in Australia, Canada, New Zealand, and Latin American countries for Android and iOS on July 17, 2023. Gameplay Pokémon Sleep is based around tracking the sleep of the player, who earns rewards for sleeping longer. The game uses the microphone and accelerometer of the player's phone to track sleep. Players can optionally use the Pokémon Go Plus+ to play sounds reminding them of their bedtime, as well as to track sleep without needing a phone on. When players start playing for the first time, they are given a Snorlax by Professor Neroli so they can observe its sleeping patterns. The player can give Snorlax berries and meals throughout the day. The more the player feeds Snorlax, the higher its power, and the more types of Pokémon it can attract. After reaching certain milestones with Snorlax's score, its rank increases, allowing rarer Pokémon to visit after sleep. After each sleep session, the player is given a report based on how they slept. In this report, sleep is categorized into three different types: Dozing, Snoozing, and Slumbering. The type of sleep that makes up most of the total sleeping time is the overall sleep type, unless the types are equally represented, which makes it a "Balanced Type." The report also states how much noise the player made during sleep. Based on these factors, the player is given a Sleep Score. When the player has finished reading the report, the Sleep Score is multiplied by Snorlax's score to give a final Drowsy Power. The higher the Drowsy Power, the more Pokémon will visit the player. Pokémon are classified into the same three types as sleep: Dozing, Snoozing, and Slumbering. Based on the type of sleep the player received, they will be visited by that type of Pokémon. Pokémon have multiple sleep styles that are recorded in the Sleep-Style Dex.
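The scoring described above (Drowsy Power as the product of the Sleep Score and Snorlax's score) can be sketched as follows. The value ranges and example numbers are illustrative assumptions, not the game's actual internals.

```python
def drowsy_power(sleep_score: int, snorlax_score: int) -> int:
    """Final Drowsy Power: the Sleep Score multiplied by Snorlax's score.
    (The 0..100 range for the Sleep Score is an assumption.)"""
    if not 0 <= sleep_score <= 100:
        raise ValueError("sleep score assumed to lie in 0..100")
    return sleep_score * snorlax_score

# Hypothetical example: a Sleep Score of 85 with a Snorlax score of 1200.
power = drowsy_power(85, 1200)  # 102000
```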
After recording the sleep styles of the visiting Pokémon, the player can feed the Pokémon PokéBiscuits to gain friendship with them. If the player gains enough friendship with a Pokémon, it will join the player and can be put on a team to collect berries and ingredients. Development Pokémon Sleep is developed by Japanese studio Select Button and published by The Pokémon Company. It was announced on May 28, 2019, by Tsunekazu Ishihara, CEO of The Pokémon Company, during a press conference held in Tokyo, as well as on the company's official Twitter page. The app was set to be released in 2020 but did not appear, leading to speculation that it had been cancelled. However, on 5 April 2021, a fan reported on Twitter that an SSL certificate had been registered for dev-eotxawxn.sleep.pokemon.co.jp. A Pokémon Go APK data mine in January 2022 revealed several additions to the Pokémon Go app to integrate with Pokémon Sleep, including connecting to the Pokémon Go Plus + accessory, reviewing sleep patterns, and earning in-app rewards. These were confirmed by the Pokémon Day 2023 Pokémon Presents, in which the Pokémon Go Plus + (Plus Plus) accessory was officially announced. On February 27, 2023, Pokémon Sleep was showcased during the Pokémon Presents and set for release sometime in the summer of 2023. From July 6, 2023, Android users could pre-register for the app. On July 17, 2023, Pokémon Sleep was rolled out in Australia, Canada, New Zealand, and Latin American countries. References External links Pokemon Sleep Grader - Calculator (Fan Site) 2023 video games Android (operating system) games IOS games Sleep Video games developed in Japan Video games developed in the United States Single-player video games
Pokémon Sleep
[ "Biology" ]
793
[ "Behavior", "Sleep" ]
60,904,181
https://en.wikipedia.org/wiki/NGC%205331
NGC 5331 is a pair of interacting spiral galaxies in the constellation Virgo. They were discovered by William Herschel on May 13, 1793. One supernova, SN 2020abir (type Ia, mag. 17.3), was discovered in NGC 5331 on 1 December 2020. References External links Spiral galaxies Interacting galaxies Luminous infrared galaxies Virgo (constellation) 5331 8774 49264
NGC 5331
[ "Astronomy" ]
86
[ "Virgo (constellation)", "Constellations" ]
60,907,013
https://en.wikipedia.org/wiki/Hass%E2%80%93Bender%20oxidation
In organic chemistry, the Hass–Bender oxidation (also called the Hass–Bender carbonyl synthesis) is an organic oxidation reaction that converts benzyl halides into benzaldehydes using the sodium salt of 2-nitropropane as the oxidant. This name reaction is named for Henry B. Hass and Myron L. Bender, who first reported it in 1949. The reaction begins with the deprotonation of 2-nitropropane at the α carbon to form a nitronate. This compound then undergoes an SN2 reaction that displaces the halide from the benzyl halide. Unlike in the nitroaldol reaction, where the deprotonated carbon of the nitroalkyl group is the nucleophilic atom, here it is an oxygen of the nitro group itself that attacks the benzylic carbon. The O-benzyl structure then undergoes a pericyclic reaction to produce a benzaldehyde, with dimethyloxime as a byproduct. Although originally developed for benzyl compounds, the reaction also works for allyl halides, giving the corresponding α,β-enones and enals. References Name reactions Organic oxidation reactions
Hass–Bender oxidation
[ "Chemistry" ]
257
[ "Name reactions", "Organic oxidation reactions", "Organic chemistry stubs", "Organic reactions" ]
60,909,058
https://en.wikipedia.org/wiki/Broadleaf%20weeds
Broadleaf weeds are dicotyledonous weeds that may grow in lawns, gardens or yards. They can be easy to spot when growing among grasses. They are tougher than grassy monocot weeds, multiply with ease, and can be very hard to eradicate. Basic characteristics Broadleaf weeds can emerge annually, biennially, or perennially, making consistent management difficult. Perennial weeds are often very difficult to control as the weeds regenerate faster than they can be eradicated. Broadleaf weeds, as their name suggests, often have wide leaves and grow from a stem. Most broadleaf weeds develop clusters of blossoms or single flowers as they mature that can be considered undesirable. The roots of most broadleaf weeds are fibrous in nature. The roots can be thin, a large taproot, or a combination. Many broadleaf weeds spread through their seeds and rhizomes, although some only spread through seeds. Popular broadleaf weeds are chickweed, clover, dandelion, wild geranium, ivy, milkweed, plantain (broadleaf), and thistle. Contrast with grassy weeds The differences in broadleaf weeds' structure and growth habits make them easy to distinguish from narrow-leaved weedy grasses. Most broadleaf weeds have leaves with net-like veins and nodes that contain one or more leaves, and they may have showy flowers, while grassy weeds appear as a single leaf from a germinated seed. Furthermore, grassy weeds are different because they may initially appear like desirable grasses. Control methods Although broadleaf weeds can grow aggressively, they can be controlled via different methods. When there are few broadleaf weeds present, an effective approach is hand-pulling. This should be carried out regularly to check the spread of weeds. In a thick lawn that is overgrown with broadleaf weeds, a lawn mower may be necessary. When there are abundant broadleaf weeds, a chemical herbicide (weed killer) can be useful. There are chemical herbicides meant for controlling broadleaf weeds. 
Perennial broadleaf weeds are often controlled with chemical herbicides, although they can sometimes return after several months. Broadleaf weeds can be controlled by shading them out. This involves covering the affected area with flat materials such as boards, nylon, or plastic sheets, blocking the weeds' access to sunlight and water and killing them. References Garden pests Plants and humans Agricultural pests
Broadleaf weeds
[ "Biology" ]
490
[ "Plants and humans", "Garden pests", "Plants", "Pests (organism)", "Humans and other species", "Agricultural pests" ]
54,512,835
https://en.wikipedia.org/wiki/NGC%205011
NGC 5011 is an elliptical galaxy in the constellation of Centaurus. It was discovered on 3 June 1834 by John Herschel. It was described as "pretty bright, considerably small, round, among 4 stars" by John Louis Emil Dreyer, the compiler of the New General Catalogue. Optical companions Several galaxies are not physically associated with NGC 5011, but appear close to it in the night sky. PGC 45847 is a spiral galaxy that is also known as NGC 5011A. PGC 45918 is a lenticular galaxy some 156 million light-years away from the Earth, in the Centaurus Cluster, and is designated NGC 5011B. PGC 45917 is a dwarf galaxy, also designated NGC 5011C. Although NGC 5011B and 5011C appear close together, there are no signs of them interacting. NGC 5011C is actually much closer and lies in the Centaurus A/M83 Group, 13 million light-years away. References Notes External links Elliptical galaxies 5011 045898 Centaurus 18340603
NGC 5011
[ "Astronomy" ]
223
[ "Centaurus", "Constellations" ]
54,513,154
https://en.wikipedia.org/wiki/Gilles%20Holst
Gilles Holst (20 March 1886 – 11 October 1968) was a Dutch physicist, known worldwide for his invention of the low-pressure sodium lamp in 1932. Early life His father was a manager of a shipyard. In 1904 he went to ETH Zurich to study mechanical engineering, switching to mathematics and physics after a year. Career He worked with Balthasar van der Pol, known for the Van der Pol oscillator, and Frans Michel Penning, known for Penning ionization and the Penning mixture. In 1908 he became a geprüfter Fachlehrer, or qualified teacher. In 1909 he became an assistant to Heike Kamerlingh Onnes at Leiden University; at Leiden, it is believed that he was the first to witness the phenomenon of superconductivity. Most importantly, he later became the science director of the Philips Physics Laboratory in Eindhoven. In 1926 he became a member of the Royal Netherlands Academy of Arts and Sciences. The Gilles Holst Award was first awarded in 1939. Personal life He died in the Netherlands at the age of 82. References External links Holst Centre 1886 births 1968 deaths 20th-century Dutch physicists ETH Zurich alumni Academic staff of Leiden University Members of the Royal Netherlands Academy of Arts and Sciences Superconductivity
Gilles Holst
[ "Physics", "Materials_science", "Engineering" ]
269
[ "Physical quantities", "Superconductivity", "Materials science", "Condensed matter physics", "Electrical resistance and conductance" ]
54,513,789
https://en.wikipedia.org/wiki/J%C3%BCrgen%20Gehrels
Jürgen Carlos Gehrels FIET (born 24 July 1935) is a German businessman, and a former Chief Executive and Chairman of Siemens UK (Siemens Holdings plc). Early life He was the son of Dr Hans Gehrels and Ursula da Rocha. He attended the Technical University of Munich and Technische Universität Berlin. Career Siemens From 1965 to 1979 he worked for Siemens AG in Germany. Siemens UK He became Chief Executive of Siemens UK in 1986. In 1995 he was responsible for opening the Siemens Semiconductors plant on North Tyneside, a £1.1bn inward investment, the largest-ever inward investment in the UK. The plant was opened by the Queen in May 1997. The site is now the Cobalt Business Park, off the A19. He left as Chairman of Siemens UK in September 2007. At the time, Siemens employed around 20,000 people in the UK, turning over around £3.5bn. Personal life He lives at Porlezza in Italy. He married Sigrid Kausch in 1963, and they had a son and a daughter. He is an Anglophile. References External links 2005 photography Siemens UK 1935 births Fellows of the Institution of Engineering and Technology German chief executives German electronics engineers Siemens people Technische Universität Berlin alumni Technical University of Munich alumni Living people Honorary Knights Commander of the Order of the British Empire
Jürgen Gehrels
[ "Engineering" ]
279
[ "Institution of Engineering and Technology", "Fellows of the Institution of Engineering and Technology" ]
54,514,043
https://en.wikipedia.org/wiki/List%20of%20smallest%20exoplanets
Below is a list of the smallest exoplanets so far discovered, in terms of physical size, ordered by radius. List The sizes are listed in units of Earth radii (). All planets listed are smaller than Earth and Venus, up to 0.7 Earth radii. The NASA Exoplanet Archive is used as the main data source. Excluded objects Kepler-37e is listed with a radius of in the Exoplanet Archive based on KOI data, but the existence of this planet is doubtful, and assuming its existence, a 2023 study found a mass of , inconsistent with such a small radius. KOI-6705.01, listed as a potential very small planet in the KOI dataset, was shown to be a false positive in 2016. Candidate planets Below is a list of candidate planets below . These planets have yet to be confirmed. See also List of largest exoplanets List of exoplanet extremes Lists of exoplanets References Lists of exoplanets E
List of smallest exoplanets
[ "Physics", "Mathematics" ]
212
[ "Smallest things", "Quantity", "Physical quantities", "Size" ]
54,514,552
https://en.wikipedia.org/wiki/NGC%201573
NGC 1573 is an elliptical galaxy in the constellation of Camelopardalis. It was discovered on 1 August 1883 by Wilhelm Tempel. It was described as "very faint, small" by John Louis Emil Dreyer, the compiler of the New General Catalogue. It is located about 190 million light-years (58 megaparsecs) away. The galaxy PGC 16052 is not an NGC object, nor is it physically associated with NGC 1573, but it is often called NGC 1573A. It is an intermediate spiral galaxy with an apparent magnitude of about 14.0. In 2010, a supernova was discovered in PGC 16052 and was designated SN 2010X. References Notes Elliptical galaxies 1573 03077 015570 Camelopardalis
NGC 1573
[ "Astronomy" ]
160
[ "Camelopardalis", "Constellations" ]
54,514,634
https://en.wikipedia.org/wiki/NGC%201077
NGC 1077 is a spiral galaxy in the constellation Perseus. It was discovered on 16 August 1886 by Lewis A. Swift. It was described as "very faint, pretty large, extended" by John Louis Emil Dreyer, the compiler of the New General Catalogue. NGC 1077 forms a galaxy pair with another galaxy that appears close to it. This galaxy, NGC 1077B, has a recessional velocity of 8529 km/s. This is similar to NGC 1077's recessional velocity of 8964 km/s, so the two are assumed to be physically related. References Spiral galaxies Galaxies discovered in 1885 1077 +07-06-069 010468 Perseus (constellation) 02230
NGC 1077
[ "Astronomy" ]
149
[ "Perseus (constellation)", "Constellations" ]
54,516,626
https://en.wikipedia.org/wiki/Direction%20%28geometry%29
In geometry, direction, also known as spatial direction or vector direction, is the common characteristic of all rays which coincide when translated to share a common endpoint; equivalently, it is the common characteristic of vectors (such as the relative position between a pair of points) which can be made equal by scaling (by some positive scalar multiplier). Two vectors sharing the same direction are said to be codirectional or equidirectional. All codirectional line segments sharing the same size (length) are said to be equipollent. Two equipollent segments are not necessarily coincident; for example, a given direction can be evaluated at different starting positions, defining different unit directed line segments (as a bound vector instead of a free vector). A direction is often represented as a unit vector, the result of dividing a vector by its length. A direction can alternately be represented by a point on a circle or sphere, the intersection between the sphere and a ray in that direction emanating from the sphere's center; the tips of unit vectors emanating from a common origin point lie on the unit sphere. A Cartesian coordinate system is defined in terms of several oriented reference lines, called coordinate axes; any arbitrary direction can be represented numerically by finding the direction cosines (a list of cosines of the angles) between the given direction and the directions of the axes; the direction cosines are the coordinates of the associated unit vector. A two-dimensional direction can also be represented by its angle, measured from some reference direction, the angular component of polar coordinates (ignoring or normalizing the radial component). A three-dimensional direction can be represented using a polar angle relative to a fixed polar axis and an azimuthal angle about the polar axis: the angular components of spherical coordinates. 
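The ideas above can be illustrated numerically: normalizing a vector yields its unit vector, whose components are exactly the direction cosines with respect to the coordinate axes, and the sign of the dot product distinguishes acute from obtuse pairs of directions. A minimal sketch:

```python
import math

def unit(v):
    """Unit vector of v; its components are the direction cosines
    (cosines of the angles between v and the coordinate axes)."""
    n = math.sqrt(sum(x * x for x in v))
    if n == 0:
        raise ValueError("the zero vector has no direction")
    return [x / n for x in v]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

u = unit([3.0, 4.0, 0.0])  # [0.6, 0.8, 0.0]
# A positively scaled copy is codirectional: it has the same unit vector.
same = unit([6.0, 8.0, 0.0]) == u
# Acute vs. obtuse pairs of directions, by the sign of the dot product:
acute = dot(u, unit([1.0, 0.0, 0.0])) > 0
obtuse = dot(u, unit([-1.0, 0.0, 0.0])) < 0
```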
Non-oriented straight lines can also be considered to have a direction, the common characteristic of all parallel lines, which can be made to coincide by translation to pass through a common point. The direction of a non-oriented line in a two-dimensional plane, given a Cartesian coordinate system, can be represented numerically by its slope. A direction is used to represent linear objects such as axes of rotation and normal vectors. A direction may be used as part of the representation of a more complicated object's orientation in physical space (e.g., axis–angle representation). Two directions are said to be opposite if the unit vectors representing them are additive inverses, or if the points on a sphere representing them are antipodal, at the two opposite ends of a common diameter. Two directions are parallel (as in parallel lines) if they can be brought to lie on the same straight line without rotations; parallel directions are either codirectional or opposite. Two directions are obtuse or acute if they form, respectively, an obtuse angle (greater than a right angle) or acute angle (smaller than a right angle); equivalently, obtuse directions and acute directions have, respectively, negative and positive scalar product (or scalar projection). See also Body relative direction Euclidean vector Tangent direction Notes References Elementary mathematics Euclidean geometry
Direction (geometry)
[ "Mathematics" ]
656
[ "Elementary mathematics" ]
54,517,522
https://en.wikipedia.org/wiki/NGC%207069
NGC 7069 is a lenticular galaxy located about 400 million light-years away in the constellation of Aquarius. NGC 7069 is also classified as a LINER galaxy. NGC 7069 was discovered by astronomer Albert Marth on October 12, 1863. See also NGC 7033 References External links Lenticular galaxies Aquarius (constellation) 7069 66807 Astronomical objects discovered in 1863 11747 LINER galaxies
NGC 7069
[ "Astronomy" ]
86
[ "Constellations", "Aquarius (constellation)" ]
54,517,916
https://en.wikipedia.org/wiki/Holmium%20titanate
Holmium titanate is an inorganic compound with the chemical formula Ho2Ti2O7. Holmium titanate is a spin ice material like dysprosium titanate and holmium stannate. References Holmium compounds Titanates Inorganic compounds
Holmium titanate
[ "Chemistry" ]
52
[ "Inorganic compounds", "Inorganic compound stubs" ]
54,517,958
https://en.wikipedia.org/wiki/Barrier%20pointing
Barrier pointing (or "edge pointing") is a term used in human–computer interaction to describe a design technique in which targets are placed on the peripheral borders of touchscreen interfaces to aid in motor control. Where targets are placed alongside raised edges on mobile devices, the user has a physical barrier to aid navigation, useful for situational impairments such as walking; similarly, screen edges that stop the cursor mean that targets placed along screen edges require less precise movements to select. This allows the most common or important functions to be placed on the edge of a user interface, while other functions that may require more precision can utilise the interface's 'open space'. Barrier pointing is also a term used in accessible design, as a design technique that makes targets easier to press. For example, barrier pointing using raised edges on touchscreens, alongside a stylus and a 'lift-off' or 'take-off' selection mode, can improve usability for a user with cerebral palsy. One example of assistive technology focused on barrier pointing is the SUPPLE system, which redesigns the size, shape, and arrangement of interfaces based on its measurement of motor articulation input. References Bibliography Farris, J. S., Jones, K. S. and Anders, B. A. (2001). "Acquisition speed with targets on the edge of the screen: An application of Fitts' Law to commonly used web browser controls." Proceedings of the Human Factors and Ergonomics Society 45th Annual Meeting (HFES '01). Santa Monica, California: Human Factors and Ergonomics Society, pp. 1205–1209. Johnson, B. R., Farris, J. S. and Jones, K. S. (2003). "Selection of web browser controls with and without impenetrable borders: Does width make a difference?" Proceedings of the Human Factors and Ergonomics Society 47th Annual Meeting (HFES '03). Santa Monica, California: Human Factors and Ergonomics Society, pp. 1380–1384. Wobbrock, J. O. (2003). 
"The benefits of physical edges in gesture-making: Empirical support for an edge-based unistroke alphabet." Extended Abstracts of the ACM Conference on Human Factors in Computing Systems (CHI '03). New York: ACM Press, pp. 942–943. Walker, N. and Smelcer, J. B. (1990). "A comparison of selection time from walking and bar menus." Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '90). New York: ACM Press, pp. 221–225. Human–computer interaction Surface computing
Barrier pointing
[ "Engineering" ]
570
[ "Human–computer interaction", "Human–machine interaction" ]
54,519,462
https://en.wikipedia.org/wiki/Eduard%20von%20Stackelberg
Eduard Otto Emil Karl Adam Freiherr von Stackelberg (6 November 1867 in Sillamäe, Estonia – 7 April 1943 in Munich, Germany) was an Estonian chemist, landowner and politician who belonged to the Stackelberg family. As a chemist, he proposed a model for the periodic table in 1911. He was among the Baltic German landowners deported to Siberia, first by the Tsarist authorities and later by the Bolsheviks. Following World War I he lived in Germany. In 1927, he published a memoir. Early life and education Eduard von Stackelberg was the son of Otto Ferdinand Wolter von Stackelberg (1837-1909) and Sophie Marie Elizabeth von Seydlitz (also Seidlitz) (1837-1920). He was born at the manor of Sillamäggi, near the village of Repnik, Kreis Wierland, Governorate of Estonia (now Sillamäe and Hiiemetsa, Ida-Viru County, Estonia). His father was a younger son in a large family, while his mother inherited the manor and lands of Sillamäggi. Eduard von Stackelberg attended Friedrich Kollmann's Gymnasium in Dorpat (now Tartu) from 1881 to 1884. He also studied from 1885 to 1886 at the Nicolai Gymnasium in Reval (now Tallinn). Chemistry From 1886 to 1892, Eduard von Stackelberg studied mathematics, chemistry and physics at the Imperial University of Dorpat (now the University of Tartu), graduating with a degree in chemistry in 1893. He also studied natural sciences (Naturwissenschaften) in Leipzig from 1892 to 1893; with Gabriel Lippmann at the Sorbonne in Paris from 1893 to 1894; and in the laboratory of the Akademie der Wissenschaften in St. Petersburg from 1894 to 1895. From 1895 to 1896, he worked in the laboratory of Wilhelm Ostwald at the University of Leipzig. After returning to Livonia, he worked as an assistant and then as an assistant professor with Gustav Tammann at the University of Dorpat from 1896 to 1898, and taught at the Riga Technical University from 1898 to 1899.
In 1911, Eduard von Stackelberg published a paper discussing a possible model for the periodic table, Versuch einer neuen tabellarischen Gruppierung der Elemente auf Grund des periodischen Systems ("An Attempt at a New Tabular Grouping of the Elements on the Basis of the Periodic System"). It was positively reviewed in Chemical Abstracts: "The author gives a form of periodic table, which possesses certain advantages, especially that it aids in enabling one to remember the variation of certain physical and chem. properties of related elements in passing from group to group of the table." Marriage In 1896 in St. Petersburg, Eduard von Stackelberg married Elisabeth (Else) Marie von Sievers (also Sivers). They had three children: Nicolai Mark Otto August von Stackelberg (1896-1971), Brigitte, and Elisabeth. Eduard von Stackelberg's sister Sophie Amelie von Stackelberg married the brother of the chemist Andreas von Antropoff (1878-1956). Eduard's son Mark von Stackelberg studied chemistry with his uncle Andreas von Antropoff, completing his dissertation at the Rheinische Friedrich-Wilhelms-Universität, and co-authoring an extensive discussion of the periodic table in the Atlas der physikalischen und anorganischen Chemie ("Atlas of Physical and Inorganic Chemistry: The Properties of the Elements and Their Compounds", 1929). Mark von Stackelberg later taught at the University of Bonn, working with polarography and voltammetry. Land and politics As a landowner, Eduard Baron Stackelberg held the manors of Sutlem, Limmat and Mäeküla (Mähküll) (now Sutlema, Lümandu and Mäeküla, Rapla County, Estonia) in Kreis Harrien. He was a proponent of Baltic German patriotism and a leader of the Baltic Constitutional Party. Eduard von Stackelberg served as secretary of the Estonian Knighthood from 1899 to 1911 and as deputy chief captain of the knighthood from 1912 to 1918.
Stackelberg was one of the founders of the Deutscher Verein in Estland (German Association of Estonia) in 1905, and served as its chairman. The association promoted a pan-Baltic organization, in sympathy with pan-Germanic ideals, while emphasizing that it still supported the Russian Tsar and constitution. When von Stackelberg attempted to organize a conference in Reval to bring together similar organizations, he was fiercely attacked by the Russian press, the Duma and the Russian authorities. From 1915 to 1917, during World War I, Stackelberg and his family were sent to Jeniseisk in Siberia by the Tsarist authorities, exiled for their pro-German political position. Stackelberg remained in Siberia until the 1917 revolution, when he was allowed to return to Estonia. He quarreled severely with Count Hermann von Keyserling, leader of a more cosmopolitan group of exiled Baltic Germans. In 1918, Stackelberg was deported again, when the Bolsheviks exiled German landlords. This time he was sent to Krasnoyarsk, Siberia. His baronial lands were confiscated and turned over to the state to become collective farms. After the signing of the Brest-Litovsk Peace Treaty, which allowed deportees to return, he moved to Germany, where his wife still owned land in Lochen. As of autumn 1918, Eduard von Stackelberg lived in Upper Bavaria near the outskirts of Munich. He later moved to Berlin, where he served on the Baltic Confidence Council from 1919 to 1920. He worked with the association of Christian charities in Schleswig-Holstein until 1927. Between 1927 and 1934, Stackelberg wrote a two-part memoir, A Life in the Baltic Struggle: Looking Back on What Was Striven For, Lost and Won. Eduard von Stackelberg died in Munich on 7 April 1943. Bibliography References 1867 births 1943 deaths People from Sillamäe Estonian chemists 20th-century Estonian politicians Estonian people of Baltic German descent People involved with the periodic table
Eduard von Stackelberg
[ "Chemistry" ]
1,278
[ "Periodic table", "People involved with the periodic table" ]
54,520,002
https://en.wikipedia.org/wiki/Magellanic%20moorland
The Magellanic moorland or Magellanic tundra () is an ecoregion on the Patagonian archipelagos south of latitude 48° S. It is characterized by high rainfall with a vegetation of scrubs, bogs and patches of forest in more protected areas. Cushion plants, grass-like plants and bryophytes are common. At present there are outliers of Magellanic moorland as far north as the highlands of Cordillera del Piuchén (latitude 42° 22' S) in Chiloé Island. During the Llanquihue glaciation Magellanic moorland extended to the non-glaciated lowlands of Chiloé Island and further north to the lowlands of the Chilean lake district (latitude 41° S). The classification of Magellanic moorland has proven problematic, as substrate, low temperatures and exposure to the ocean all influence its development. It may thus qualify either as polar tundra or heathland. Flora and plant communities Edmundo Pisano identifies the following plant communities for the Magellanic moorland: Bogs Sphagnum bogs Magellanic sphagnum tundra Juncus bogs Non-sphagniferous bryophytic tundra Non-sphagnum moss bog Hepatica bogs Pulvinar mires Hygrophytic mire tundra Montane pulvinar tundra Bryophyte and dwarf shrub tundra Gramineous mires Tufty sedge tundra Subantarctic gramineous mire Woody synusia tundras Tundras with Pilgerodendron uviferum Association Pilgerodendretum uviferae Sub-association Pilgerodendro-Nothofagetum betuloidis Sub-association Nano-Pilgerodendretum uviferae Interior nanophanerophytic tundras Interior heath of low to medium elevation Montane nanophanerophytic tundra Where forests occur they are made up of the following trees: Nothofagus betuloides (coigüe de Magallanes), Drimys winteri (canelo), Pseudopanax laetevirens (sauco del diablo), Embothrium coccineum (notro), Maytenus magellanica (maitén), Pilgerodendron uviferum (ciprés de las Guaitecas) and Tepualia stipularis (tepú). Soils and climate Soils are usually rich in turf and organic matter and poor in bases.
Often they are also water-saturated. Granitoids, schists and ancient volcanic rocks make up the basement on which soils develop. Any previously existing regolith has been eroded by the Quaternary glaciations. It is not rare for bare rock surfaces to be exposed in the interior of islands. The climate where Magellanic moorland grows can be defined as oceanic, snowy and isothermal with cool and windy summers. In the Köppen climate classification it has a tundra climate ET. References Bibliography Shrublands Ecology of Patagonia Temperate broadleaf and mixed forests Temperate rainforests Ecoregions of Chile Andean forests Ecoregions of South America Neotropical ecoregions Magellanic subpolar forests
Magellanic moorland
[ "Biology" ]
679
[ "Ecosystems", "Shrublands" ]
54,520,246
https://en.wikipedia.org/wiki/Damoiseau%20Rhum
Damoiseau is a rhum agricole distillery located in Le Moule, Guadeloupe. It is one of five distilleries in the Guadeloupe archipelago, and the only one in the Grande-Terre region. It has roots back to the 19th century and was originally founded as an agricultural estate. Damoiseau is the leading rum producer in Guadeloupe, producing more than 8 million litres per year and exported to more than 40 countries worldwide. History The Bellevue estate, upon which the Damoiseau distillery is built, was originally founded as a sugar plantation by the Rimbaud family of Martinique at the end of the 19th century, after the emancipation of slaves in the French colonies. The estate was purchased by Roger Damoiseau in 1942 and transformed into a rhum agricole distillery, and is currently run by Roger's grandson, Hervé Damoiseau. Production Damoiseau rhums are produced from sugar cane that is harvested and crushed on the same day before fermenting for 24–36 hours. It is distilled in a continuous column still to between 86% and 88% alcohol, and then aged in charred oak barrels previously used for aging bourbon. After aging for six months or more than six years, the rum is diluted with local spring water to a consistent commercial proof and bottled. In 2011, Damoiseau opened a bottling and storage facility near Pointe-à-Pitre International Airport in Les Abymes, Pointe-à-Pitre that is capable of storing up to 5 million bottles. Products Damoiseau Virgin Cane Rhum Agricole Blanc: (40% ABV) aged for six months in oak vats. Damoiseau Pure Cane Rhum 110: (55% ABV) aged for six months in oak vats. Damoiseau VSOP Rhum Vieux: (42% ABV) aged for a minimum of four years in ex-bourbon barrels. Damoiseau XO Rhum Vieux: (42% ABV) aged for a minimum of six years in ex-bourbon barrels. Recognition In 2017, Damoiseau Virgin Cane won the gold medal for Rums Pure Juice & Blanc Agricole <50% ABV. Damoiseau XO Rhum Agricole received the Chairman's Trophy in the 2016 Ultimate Spirits Challenge. 
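The dilution step described above, bringing the high-strength distillate down to bottling strength with spring water, can be sketched with a simple volume balance. This is an illustrative model only (the volumes are hypothetical and it ignores the slight volume contraction when ethanol and water mix), not Damoiseau's actual process:

```python
def water_to_add(spirit_litres: float, abv_still: float, abv_target: float) -> float:
    """Litres of water needed to dilute a spirit from abv_still to abv_target,
    using a simple volume-fraction balance (ignores mixing contraction)."""
    if not 0 < abv_target <= abv_still:
        raise ValueError("target ABV must be positive and not above still strength")
    return spirit_litres * (abv_still / abv_target - 1.0)

# e.g. bringing 100 L of 87% ABV distillate down to 40% ABV bottling strength
print(round(water_to_add(100, 87, 40), 1))  # 117.5 L of water
```

In practice, distillers use alcoholometry tables to correct for the ethanol–water mixing contraction, so slightly less water is needed than this linear model suggests.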
See also List of rum producers Ti' Punch References External links Damoiseau Rum official website Rum brands Distilleries Sugar industry of France
Damoiseau Rhum
[ "Chemistry" ]
512
[ "Distilleries", "Distillation" ]
51,652,535
https://en.wikipedia.org/wiki/30th%20Street%20Station%20District
The 30th Street Station District, also referred to as the 30th Street District, is a master planned urban development centered around 30th Street Station located in West Philadelphia in Philadelphia, Pennsylvania, United States. The area, if approved and built, would be home to eight modern skyscrapers or high rises ranging in height from 405 to 1,200 feet, with four other buildings under 400 feet. The property will be owned by Amtrak. The project, expected to cost between seven and eleven billion dollars, would be a major addition to the city, bringing some of its tallest buildings outside Center City and expanding downtown west of the Schuylkill River. Aside from adding new buildings to the skyline, the architects' plans include connecting West Philadelphia to Center City with new walking paths, a pedestrian bridge, and improved connections to make traveling by car or bus from 30th Street Station to downtown Philadelphia easier and faster. In addition, the placement of the current phase of construction would allow expansion north towards the Philadelphia Museum of Art and the Philadelphia Zoo. Project history The 30th Street Station District Plan is a long-range, joint master planning effort led by Amtrak, Brandywine Realty Trust, Drexel University, the Pennsylvania Department of Transportation, and the Southeastern Pennsylvania Transportation Authority to develop a comprehensive vision for the future of the 30th Street Station District in the year 2050 and beyond. A $2 billion investment in roads, utilities, parks, bridges, and extension of transit services will unlock $4.5 billion in private real estate investment, in addition to an estimated $3.5 billion for Drexel's Schuylkill Yards project, which is located within the district.
These investments in the District will have robust and widespread economic development benefits, with the potential to generate $3.8 billion in City and State taxes and 40,000 jobs when complete. The project launched in the summer of 2014, and had its first public meeting by that winter. The project had four additional public meetings: one in the summer of 2015, one in the fall of 2015, and two in the spring of 2016. The planning process was officially completed in June 2016, and the plan is set to be implemented in various phases. As of January 2024, three projects are underway. The renovation of 30th Street Station began in 2023, and will conclude in 2027. These renovations include building maintenance such as façade cleaning and restoration, improvements to Market Street Plaza, expanded retail offerings, station modernization, and corporate office renovation. There is a project underway to improve the SEPTA 30th Street subway and trolley station, including new elevators at 31st Street and a new head house at 30th Street, of which the former is complete. Additionally, a ramp modification study is underway for I-76. Plan The District Plan lays out a vision for the next 35 years and beyond to: Accommodate a projected 20 to 25 million passenger trips per year – double the current capacity – circulating through an enhanced 30th Street Station; Build 18 million square feet of new development; House between 8,000 and 10,000 new residents; Support up to 40,000 jobs; and Create 40 acres of new open space or green space for the city, including a new civic space at the station's front door. Reconnecting the station Passenger volume at 30th Street Station is projected to more than double over the next 25 years and beyond. Travel to the District is easily achieved by a number of modes, with nearly 100,000 trips made daily by train, subway, bus, trolley, car, bicycle, or on foot.
However, the modes do not clearly connect, creating a confusing and sometimes precarious experience for visitors. For almost 30 years, passengers transferring between 30th Street Station and the trolley and subway lines below Market Street have lacked a covered, climate-controlled route, forced instead to leave the station and cross a busy 30th Street. The Plan proposes to re-establish a convenient and safe connection between these stations, via a new stairway within 30th Street Station’s Main Hall and through an active and day-lit below-grade retail concourse. The Plan also envisions a permanent home for intercity buses (BoltBus, Megabus, and others) on the north side of Arch Street as part of an integrated, multimodal transportation facility. The new intercity bus terminal connects directly via pedestrian bridge to 30th Street Station and provides an indoor waiting area along with bus queuing. In the long-term, an additional Amtrak concourse could anchor this new transit center. Funding and Cost Amtrak and its partners in the proposed redevelopment of a massive swath around 30th Street Station in University City say the decades-long plan, including partially capping the adjacent rail yard, will involve $6.5 billion in infrastructure funding and private investment. The financial projection is part of the planning team's final blueprint for the 175-acre site extending northeast from 30th Street Station. Publication of the 30th Street Station District Plan ends a two-year, $5.25 million study led by Amtrak, Drexel University, Brandywine Realty Trust, SEPTA, and PennDot for the area between Walnut and Spring Garden Streets east of Drexel's campus and Powelton Village. Buildings The 35-year plan to build a dense urban neighborhood, largely over what are now 88 acres of rail yards, will require about $2 billion in infrastructure investment, according to a plan summary. 
That spending on roads, bridges, parks, and transit would enable about $4.5 billion in private investment by developers of office towers, residential buildings, hotels, and other projects. Anticipated is about 18 million square feet of new development, including enough housing to accommodate up to 10,000 residents. The commercial space includes about 1.2 million square feet planners hope will be occupied by a single corporate, commercial, or institutional tenant that will anchor the development, though none has yet been secured. Project partners As of January 2024 the following are partners of the project: Amtrak Brandywine Realty Trust Drexel University Pennsylvania Department of Transportation SEPTA City of Philadelphia CSX Corporation Delaware Valley Regional Planning Commission Philadelphia Industrial Development Corporation NJ Transit Schuylkill Banks University City District University of Pennsylvania Awards See also Schuylkill Yards List of tallest buildings in Philadelphia References External links 30th Street Station District Website 30th Street Station District - Drexel University Proposed buildings and structures in Pennsylvania Proposed skyscrapers in the United States Urban planning Multi-building developments in Philadelphia
30th Street Station District
[ "Engineering" ]
1,311
[ "Urban planning", "Architecture" ]
51,652,568
https://en.wikipedia.org/wiki/Quadruple%20glazing
Quadruple glazing (quadruple-pane insulating glazing) is a type of insulated glazing comprising four glass panes, commonly equipped with low emissivity coating and insulating gases in the cavities between the glass panes. Quadruple glazing is a subset of multipane (multilayer) glazing systems. Multipane glazing with up to six panes is commercially available. Multipane glazing improves thermal comfort (by reducing downdraft convection currents adjacent to the windowpane), and it can reduce greenhouse gas emissions by minimising heating and cooling demand. Quadruple glazing may be required to achieve desired energy efficiency levels in Arctic regions, or to allow for higher glazing ratios in curtain walling without increasing winter heat loss. Quadruple glazing allows building glazing elements to be designed without modulated external sun-shading, given that the low thermal transmittance of having four or more glazing layers enables solar gain to be adequately managed directly by the window glazing itself. In Nordic countries, some existing buildings with triple glazing are being upgraded to glazing with four or more layers. Features With quadruple glazing, a center-of-panel U-value (Ug) of 0.33 W/(m2K) [R-value 17] is readily achievable. With six-pane glazing, a Ug value as low as 0.24 W/(m2K) [R-value 24] was reported. This brings several advantages, such as: Energy efficient buildings without modulated sun shading A desired overall window thermal transmittance of lower than about 0.4 W/(m2K) is possible without having to depend on modulated external shading. A study by Svendsen et al. showed that at such low window U-values, glazing with moderate solar gain performs comparably to glazing of the same U-value with variable external shading and high solar gain. This is so because with improved overall U-values, a building's heating demand diminishes, to the point that wintertime solar heat gain alone may be enough to heat the building.
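The bracketed R-values quoted with the U-values above are in imperial units (h·ft²·°F/BTU). Assuming the standard conversion factor of roughly 5.678 between SI and imperial thermal resistance, the relationship can be sketched as:

```python
def u_to_r_imperial(u_si: float) -> float:
    """Convert an SI U-value (W/m2K) to an imperial R-value (h.ft2.F/BTU).

    R is the reciprocal of U; 1 m2K/W is approximately 5.678 h.ft2.F/BTU.
    """
    SI_TO_IMPERIAL_R = 5.678
    return (1.0 / u_si) * SI_TO_IMPERIAL_R

print(round(u_to_r_imperial(0.33)))  # quadruple glazing: R-17
print(round(u_to_r_imperial(0.24)))  # six-pane glazing: R-24
```

Both results reproduce the bracketed figures quoted for quadruple and six-pane glazing.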
Pronounced seasonal-dependence of the solar gain Due to incidence-angle-dependent Fresnel reflections, the optical characteristics of multipane glazing also vary notably with the seasons. As the sun's average elevation varies throughout the year, the effective solar gain tends to be meaningfully less in the summer. The effect is, to an extent, also visible to the naked eye. Comfort for occupants When compared to traditional double-pane or triple-pane windows with mechanical or structural shading arrangements, multipane glazing enables easier viewing between indoor and outdoor environments. A low U-value maintains inside glass temperatures at a more uniform level throughout the year. During the winter, downwards convection currents (downdrafts) are very small, enabling people seated near such a multipane window to feel as comfortable adjacent to the window as they would adjacent to a solid wall. However, occlusion or shading might still be wanted for purposes of privacy. Nearly zero heating building In 1995, it was predicted that with a glazing U-value of 0.3 W/(m2K) a zero-heating building could be attained. It has also been shown that the heating demand might be decreased to nearly zero for glazed buildings with system U-values as low as 0.3 W/(m2K). Theoretically, in the summer, the remaining cooling demand could be satisfied by photovoltaic generation alone, with the greatest need for cooling nearly coinciding with the strongest sunlight incident on solar panels. However, in practice, temporal lags between cooling demand and the output from solar panels could occur due to factors such as ambient humidity and the need for dehumidification, as well as the thermal inertia of the building and its contents.
With more than three glass panes, special care must be taken with spacer and sealant temperatures, as intermediate glass panes in contact with these glazing elements can readily exceed the design temperature limits of the respective materials due to solar radiation (irradiance) heating. Solar irradiance heating of intermediate glass panes increases substantially with an increased number of glass panes. Multipane glazing must be carefully designed to account for the expansion of the insulating gases that are placed between the glass layers, because such gaseous expansion becomes an increasingly important consideration as the number of glass panes is increased. Special breather vents, as well as small vents communicating between the layer spaces, can be incorporated in order to manage this glass-bulging effect. Finite element analysis is often used to calculate the required strength of the glass sheets. Calculating static equilibrium with the thin glass panes used in multipane glazing may involve nonlinear plate mechanics. Performance Double-pane windows have been the industry standard for decades. They represent a vast improvement over single-pane windows, but the potential for even greater energy savings with more highly insulating windows has been elusive. Recent price reductions in the thin glass used in both smartphones and flat-screen TVs, as well as in the krypton gas used in halogen lights, however, have made it possible to build lighter, high-efficiency quad-pane windows at a lower cost. Researchers from the National Renewable Energy Laboratory evaluated two configurations of Alpen High Performance (an American manufacturer) quad-pane windows at an office building at the Denver Federal Center. Both configurations have the same thickness as, and a weight comparable to, a standard commercial double-pane window; one model uses two layers of film suspended between two panes of standard glass, the other replaces the film with two panes of ultra-thin glass.
Researchers found that on average, quad-pane windows saved 24% heating and cooling energy compared with a high-performing double-pane window. For new construction and window replacements, the quad-pane windows have a payback period of between one and six years, depending on climate zone and utility rates. See also Passive house Curtain wall (architecture) Window Passive solar building design History of passive solar building design References External links quadruple glazing seasonal energy transmittance video 6-pane glazing destructive test video Reflex - Q-Air multipane glazing EN 1279:2018 Glass in building — Insulating glass unit EN 16612:2019 Glass in building — Determination of the lateral load resistance of glass panes by calculation Glass Building materials Energy efficiency Heat transfer Thermal protection Building insulation materials Low-energy building
Quadruple glazing
[ "Physics", "Chemistry", "Engineering" ]
1,421
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Glass", "Building engineering", "Unsolved problems in physics", "Architecture", "Homogeneous chemical mixtures", "Construction", "Materials", "Thermodynamics", "Amorphous solids", "Matter", "Building materials" ]
51,652,572
https://en.wikipedia.org/wiki/Post-assault%20treatment%20of%20sexual%20assault%20victims
After a sexual assault or rape, victims are often subjected to scrutiny and, in some cases, mistreatment. Victims undergo medical examinations and are interviewed by police. If there is a criminal trial, victims suffer a loss of privacy, and their credibility may be challenged. Victims may also become the target of slut-shaming, abuse, social stigmatization, sexual slurs and cyberbullying. These factors, which contribute to a rape culture, are among the reasons that up to 80% of all rapes may go unreported in the U.S., according to a 2016 study by the U.S. Department of Justice. Various laws have been created to protect victims. During criminal proceedings, publication bans and rape shield laws protect victims from excessive public scrutiny. Laws may also prohibit defense lawyers from obtaining a victim's medical, psychiatric or therapeutic records. Statutory rape laws set the age of legal consent for sexual activity and prohibit perpetrators from alleging the victim consented to the activity. Victims in some jurisdictions can seek damages from police and institutions if warnings were not issued. Numerous victims' rights groups operate to improve the treatment of victims. Examination and investigation of victim Medical examination and "rape kits" Following a rape, a victim may seek medical care and obtain a medical examination that obtains and preserves physical evidence. Examiners may collect fingernail scrapings and pluck head and pubic hairs. If the facility has the means, the examiner will also take photographs of genital injuries. A 'rape kit' contains the items used by medical personnel for gathering and preserving physical evidence. Many "rape kits" are untested because they are never submitted to crime labs or because crime labs have insufficient resources to test all of the submitted kits.
In the United States, national surveys of law enforcement agencies suggested there were upwards of 200,000 untested rape kits as of 2015. The Survivors' Bill of Rights Act of 2016 enacted by the United States federal government gives victims the right to: request the preservation of a sexual assault evidence collection kit or its probative contents; have the sexual assault evidence collection kit preserved, without charge, for the duration of the maximum applicable statute of limitations or 20 years, whichever is shorter; and be informed of any result of a sexual assault evidence collection kit, including a DNA profile match, toxicology report, or other information collected as part of a medical forensic examination, if the disclosure would not impede or compromise an ongoing investigation. In 2019, the Debbie Smith Reauthorization Act was enacted to provide funding to help eliminate backlogs in rape kit testing. More than 100,000 rape kits in the United States remain untested. Interview by police As part of police investigations, victims are interviewed by police. The interview is usually recorded, and a transcript may be prepared. If police believe the reported victim is making a false accusation of rape, they may interrogate that person as a suspect rather than a victim. In some cases, harsh questioning and social stigmatization have caused even actual victims of rape to recant and then be charged with making a false report. Treatment of victims during court hearings A study done in Australia by the Crime Statistics Agency revealed that most reports of sexual assault to the police do not make it past the initial stage, with "police identifying an offender for half of incidents reported to them and charging an offender for one in four". If rape accusations reported to the police head to trial, further questioning and public scrutiny may impact on a victim's recovery and compound their trauma.
Typically, the cross-examination will scrutinize the "reliability of the witness, the plausibility of an alleged assault or the credibility or consistency of evidence". Australian criminologist Dr. Andy Kaladelfos says that the treatment of sexual assault survivors during trials is worse than it was during the 1950s in Australia. Dr. Kaladelfos says this treatment is one of the reasons many drop their cases before or during the trial, with many describing the experience as akin to reliving the assault, or "another rape". Victims are asked to describe the experience in detail, as well as personal information such as what clothes they were wearing, and other unrelated information not directly involved in the incident, such as their background and relationship history. Questions like these are typically only directed at the victims, and can confuse, mislead, or intimidate victims, Dr. Kaladelfos says. The length of defendant cross-examination questioning has risen by approximately 30% since the 1950s; in child cases, examinations are now three times as long. He cites one child victim who was asked 1,000 questions during a cross-examination. Confidentiality, harassment, smear campaigns and silencing In Canada, pursuant to sections 278.1 to 278.91 of the Criminal Code, defence counsel are not allowed to obtain a victim's medical, psychiatric or therapeutic records. The prohibition was found to be constitutional by the Supreme Court of Canada even though it limited the accused's ability to provide a full answer and defence; see R. v. Mills. An accused may hire a private investigator to investigate and harass a victim. For example, according to the complaint in the civil lawsuit against Penn State University, Nate Parker hired a private investigator to investigate and harass an 18-year-old student. The student alleged she was raped by Parker and another student.
The harassment included exposing the victim's identity by posting enlarged photos of the victim around the university campus. The lawsuit against Penn State was settled for $17,500. (Parker was found not guilty of the rape charges. The student attempted suicide three months after the assault and died by suicide at age 30.) Institutions may attempt to cover up sexual abuse, as seen in the Catholic Church sexual abuse cases. The movie Spotlight deals with the coverup of sexual abuse by Catholic priests. Dennis Hastert allegedly promised to pay $5 million to his victim in exchange for the victim's agreement not to discuss the sexual assaults he committed. Authorities may fail to investigate allegations of sexual assault because the victims are perceived to be unreliable witnesses. The BBC One miniseries Three Girls deals with the systematic sexual abuse of young girls by a group of older Asian men in Rochdale. Authorities were slow to respond even though more than 100 referrals were made to the police and social services. Harvey Weinstein dispatched defense lawyers and publicists to undermine the credibility of his accusers. Campus sexual assault victims In the United States, various laws, government agencies, and initiatives have been created to deal with campus sexual assault. These include: White House Task Force to Protect Students from Sexual Assault Title IX requires schools receiving federal funds to protect students from gender-based violence and harassment, including sexual assault. The Office for Civil Rights investigates Title IX sexual violence complaints. The Campus Accountability and Safety Act (CASA) is a United States bill to reform the sexual assault investigation process and protect victims. CASA includes a requirement that colleges and universities publish statistics relating to crime on their campuses. The Safe Campus Act is proposed U.S.
legislation that would require schools to report allegations of sexual assault to law enforcement, but only if the victim consents. However, if the victim does not consent, the statute prohibits schools from initiating their own internal disciplinary procedures. The Hunting Ground is a critically acclaimed American documentary film about the incidence of sexual assault on college campuses in the United States. Its creators say college administrations are failing to respond adequately to claims of sexual assault, while its critics have countered that the film was based on misleading statistics and biased against the alleged perpetrator. Sexual assault victims in the military Sexual assault in the military is more prevalent than outside of the military, and it is frequently underreported. In the United States, Sexual Assault Prevention and Response is a training program designed to educate military service members and provide support and treatment to service members and their families. Individuals are assigned to an advocate, who assists them with the different treatment options that are available to them and educates them about their rights. Sexual assault of prisoners The Prison Rape Elimination Act is United States legislation enacted for the purpose of preventing sexual assaults in U.S. prisons. The legislation requires reporting and information gathering. Publication bans, duty to warn and self-censorship Bans on publishing the victim's name In the United States, laws prohibiting the publication of a victim's name have been found to be unconstitutional and struck down. In Cox Broadcasting Corp. v. Cohn, the U.S. Supreme Court ruled a Georgia statute that imposed civil liability on media for publishing a rape victim's name was unconstitutional.
The Court stated: "the First and Fourteenth Amendments command nothing less than that the States may not impose sanctions on the publication of truthful information contained in official court records open to public inspection." Also see Florida Star v. B. J. F., and State of Florida v. Globe Communications Corp., 648 So.2d 110 (Fla. 1994). In Canada, the Canadian Press generally does not publish the names of alleged sexual assault victims without their consent. In addition, a court order may operate to prohibit publication. Pursuant to paragraph 486.4(1)(a) of the Criminal Code, the court may make an order "directing that any information that could identify the complainant or a witness" not be published, broadcast or transmitted for any sexual offences. The victim has the option of waiving the publication ban. Bans on publishing names of accused juveniles In cases where the accused is under 18 years of age, the court may impose a publication ban to protect the privacy of the offender. After 17-year-old Savannah Dietrich pursued a case against two teenage boys for sexual assault, the court initially prohibited anyone from discussing the case. Defence lawyers sought to have Dietrich held in contempt when she published the names of the perpetrators on social media, but the court then clarified that no "gag order" was in place. In Canada, section 110 of the Youth Criminal Justice Act generally prohibits the publication of the name of the offender if the offender was under the age of 18 at the time of the offence. Duty to warn the public The police, colleges and universities may be required to warn the public that a sexual assault has occurred. Women should be warned of the risk they face and have the opportunity to take any specific measures to protect themselves from future attacks. Victims of sexual assault may sue for damages if warnings are not issued.
In 1998, a sexual assault victim successfully sued the Toronto police for their failure to warn her that a serial rapist was active in her neighbourhood. The complainant in the Nate Parker case sued Pennsylvania State University for violating her Title IX rights. In the United States, the Clery Act imposes fines on colleges and universities that fail to warn students of criminal activity on or near campuses. The law is named after Jeanne Clery, a 19-year-old Lehigh University student who was raped and murdered in her campus hall of residence in 1986. In relation to the Jerry Sandusky sexual abuse scandal, U.S. federal investigators are seeking to fine Penn State University $2.4 million for failing to warn of threats to the campus. Censorship by travel websites TripAdvisor apologized for repeatedly deleting forum posts about a resort where the victim had been raped by a security guard. Self-censorship Self-censorship is when publications or people censor themselves in order to withhold information to protect their image or avoid receiving backlash. Some instances include: Self-censorship because subject is distasteful – Newspapers may self-censor, choosing not to publish material about a sexual assault on the grounds that the subject is distasteful. For example, there were complaints when The Des Moines Register published stories about the rape of Nancy Ziegenmeyer. One reader complained: "The disgusting and degrading details of Nancy Ziegenmeyer's rape have no place in a family newspaper the caliber of The Register." Self-censorship because of ethnicity of the accused – Publication of information regarding sexual assaults may be suppressed because of fears of being thought racist. A 2006 report regarding the child sexual assaults in Rotherham, England, stated: "It is believed by a number of workers that one of the difficulties that prevent this issue [child sexual exploitation] being dealt with effectively is the ethnicity of the main perpetrators." 
With respect to the Rochdale child sex abuse ring, it was suggested that police and social work departments failed to act when details of the gang emerged for fear of appearing racist, and vulnerable white teenagers being groomed by Pakistani men were ignored. Self-censorship because "women would become hysterical" – In 1998, the Toronto police did not issue warnings regarding a serial rapist "because women would become hysterical or panic" and "the rapist would flee and the investigation would be compromised". The Toronto police were ultimately found liable for damages for failing to issue warnings. Self-censorship to avoid harming international relations – South Korea has agreed to refrain from criticising Japan over the issue of comfort women. During World War II, many Korean women were forced to be the sex slaves of members of the Japanese military. Self-censorship to avoid deportation and intimidation using "rape trees" – Sexual assaults of illegal immigrants are rarely reported as the victims fear they may be deported after coming forward. Women often seek out birth control methods to prevent pregnancy from anticipated sexual assaults. Rape trees are trees or bushes that mark where sexual assaults have occurred; the victim's undergarments are arranged on or around the tree's branches or on the ground. "Rape trees" are reportedly found with increasing frequency along the border between the United States and Mexico. The rape trees serve to intimidate illegal immigrants by suggesting the perpetrators are able to commit acts of violence with impunity. Financial costs Victims may incur various financial costs because of the sexual assault; for example, costs related to moving (to get away from the perpetrator), a new mattress, legal fees, lost wages, lost tuition fees, new clothing, therapy and medication.
Conduct of legal proceedings Statutes of limitations A statute of limitations may preclude a victim from pressing criminal charges if too many years have passed since the assault occurred. Limitations periods vary from jurisdiction to jurisdiction. For example, in Pennsylvania charges must be filed within 12 years of the assault. Statutory rape Statutory rape is sexual activity in which one of the individuals is below the age of consent. The age of consent is the age at which a young person is capable of consenting to sexual activity. These laws were enacted to recognize the inherent power imbalance between adults and children and the fact that children are incapable of giving true consent to sexual acts with adults. The laws protect children from undue influence, persuasion and manipulation. In 2016, the Government of Turkey proposed legislation that would overturn a man's conviction for child sex assault if he married his victim. Consent Defence counsel may argue the victim consented to the sexual activity. Determining if the victim consented can be problematic when the victim is intoxicated. A Nova Scotia court found there was no sexual assault because the prosecution could not prove the sexual activity was non-consensual. In that case, the sexual activity occurred in the back of a taxi cab between a taxi driver and an intoxicated customer. In 2011, Cindy Gladue bled to death from an 11-centimetre cut in her vagina. At trial, the accused said Gladue died after a night of consensual, rough sex in an Edmonton motel. He was acquitted. The case is being appealed to the Supreme Court of Canada. Discrepancies between the victim's statements to police and other evidence are grounds for the defence lawyer to impeach the credibility of the victim. In investigations involving acquaintance or date rape, the electronic communications between the accused and the victim may be reviewed in order to determine if the victim consented to the sexual activity. In People v.
Jovanovic, the New York appeals court determined that emails from the alleged victim should be included in evidence and that the rape shield law did not apply. The alleged victim had written about her sadomasochistic interests and experiences. In Canada, a defense lawyer may be allowed to obtain copies of the victim's e-mails and other private documents using a legal procedure called a third-party records application. Rape shield laws In Australia, Canada and the United States, the prior sexual history of the victim is generally not admissible as evidence during a criminal proceeding. These laws are referred to as rape shield laws. In Canada, the constitutionality of the rape shield law was challenged on the grounds that it hampers a defendant's ability to present a defence. The law was found to be constitutional by the Supreme Court of Canada: see R v Darrach. Misconduct by prosecutors In the prosecution of Anthony Broadwater, the man convicted of assaulting Alice Sebold, the prosecutor lied to Sebold. Sebold was unable to identify Broadwater in a police lineup. In response, the prosecutor told her that Broadwater had come to the police lineup with a friend for the purpose of confusing her. Broadwater was convicted of the assault. Subsequently, attorneys working to exonerate Broadwater argued that the falsehood by the prosecutor had influenced Sebold's testimony. In her book Lucky, Sebold wrote that the prosecutor had coached her into changing her identification. Broadwater was exonerated in 2021. Victim representation In Canadian criminal proceedings, the Crown prosecutor does not act on behalf of the victim and is not, and can never function as, the victim's lawyer.
Although the Crown may appear to represent the interests of the victim, the Crown is the lawyer for the Queen and the government during the trial. In Canadian criminal cases, the harm is perceived to have been committed against the State. This is why cases are referred to as Regina v. Smith (or R. v. Smith), Regina being Latin for Queen. The Crown represents society as a whole, of which the victim is only a part. Other Sexual assault cases commonly involve multiple complainants. In these cases, the defence is likely to apply for separate trials for each offence. Joinder of multiple counts is only permitted in certain circumstances. The decision to have separate trials is discretionary and is exercised in order to prevent prejudice to the defendant. The threshold question for holding a joint trial is whether or not each complainant's evidence will be admissible in respect of the charges involving the other complainants (that is, whether such evidence is 'cross-admissible'). Decisions to hold separate trials or refuse to admit relevant tendency or propensity evidence about a defendant's sexual behaviour can be seen as barriers to the successful prosecution of sex offences. The sexual assault charges against Jian Ghomeshi involved multiple complainants and were dealt with in two separate proceedings. The second trial did not proceed after Ghomeshi agreed to issue an apology. A victim of sexual assault may be subjected to speculative allegations of wrongdoing during cross-examination. For example, in R. v. Sofyan Boalag the defence lawyer asserted during the cross-examination that the victim was in fact asking for money in exchange for sex. In that case, the accused was convicted of 13 criminal offences involving six victims including multiple sexual assaults at knifepoint. A victim's alcohol consumption at the time of the assault may undermine the victim's ability to remember the assault and his or her ability to provide credible testimony.
As noted by Justice Zuker of the Ontario Court of Justice: [414] What makes someone a perfect target in the context of alcohol-facilitated sexual assaults may make that person a poor witness in any ensuing proceedings due to his or her inability to remember part or all of what happened. [415] One of the more challenging (and ever-present) issues to evaluate in sexual assault is the question of consent. Sexual assaults may involve alcohol consumption by one or both parties. Frequently, parties to an incident will have limited or no memory of the events in question, and the court will be required to obtain and evaluate information provided by other witnesses or other corroborating evidence. During cross-examination, a victim's attire may be scrutinized and they may be accused of wearing revealing clothing. For example, a Toronto woman, who alleged she was sexually assaulted by three police officers after a party, was asked about her choice of wearing a top described as "really low cut with open sleeves" to a party mostly attended by "young men ... who would be drinking." During cross-examination, a victim's criminal record may be noted in order to impeach their credibility. This tactic was unsuccessfully used in the trial of Daniel Holtzclaw. Post-assault social stigma, mistreatment and abuse Victims of sexual assault are often subjected to social stigma. The victim may be subject to inappropriate post-assault behaviour or language by medical personnel or other organizations. Slut-shaming is the practice of blaming the victim for rape and other forms of sexual assault. It rests on the idea that sexual assault is caused (either in part or in full) by the woman wearing revealing clothing or acting in a sexually provocative manner, before refusing consent to sex, and thereby absolving the perpetrator of guilt. In 2016, Indian politicians suggested that women's clothing choices could invite sexual assault.
The comments were made after a number of women were molested at a New Year's Eve party in Bangalore. On the U.S. television program The View, Joy Behar referred to Bill Clinton's sexual assault accusers as "tramps"; Behar apologized for the sexual slur shortly afterwards. The SlutWalk challenges the idea of explaining or excusing rape by referring to any aspect of a woman's appearance. The SlutWalk was created in response to comments by a Toronto Police officer who said: "I've been told I'm not supposed to say this—however, women should avoid dressing like sluts in order not to be victimized." Sexual assault victims may stay away from work in order to avoid contact with the perpetrator. For example, Caroline Lamarre, an office worker who was sexually assaulted at work, has not returned to work in order to avoid contact with her assailant. The assailant was convicted of sexual assault. Victims of sexual assault may be subjected to cyberbullying, as in the cases of the suicide of Audrie Pott, suicide of Rehtaeh Parsons and sexual assault of Savannah Dietrich. The Netflix documentary Audrie & Daisy is about two American high school students who were sexually assaulted by other students. At the time they were assaulted, Audrie Pott was 15 and Daisy Coleman was 14 years old. After the assaults, the victims and their families were subjected to abuse and cyberbullying. The movie Taking Back My Life: The Nancy Ziegenmeyer Story is the true story of Nancy Ziegenmeyer, an Iowa resident who was raped in 1991. Her story was groundbreaking because she spoke openly about her experiences including her interactions with hospital staff, the police, prosecutors, the accused, and the criminal justice system. The Des Moines Register won the Pulitzer Prize for Public Service in 1991 for publishing the story of Ziegenmeyer's sexual assault. At the time the article was published it was unusual to publish the victim's name. 
Ziegenmeyer wrote a book, Taking Back My Life, to encourage women to seek help after they have been victimized. The movie Share is a story about a 16-year-old girl who is sexually assaulted by boys from her high school and a video of the assault is circulated. She is incessantly harassed with anonymous text messages. The film premiered at the 2019 Sundance Film Festival. Many celebrities have publicly supported and contributed to making a difference in how victims of sexual assault are perceived. Law & Order: SVU actress Mariska Hargitay has dedicated her life to making a difference in society's views. After gaining fame playing Olivia Benson, she began receiving letters from fans disclosing their sexual assault stories. This inspired her to start her own nonprofit, the Joyful Heart Foundation. The organization provides resources and support to survivors while working to change society's response to assault and to end the violence altogether. Hargitay emphasizes that anyone can be targeted and that, no matter who they are, victims deserve to be treated the same. She has stated that while the United States has made progress, society must not place blame on victims, no matter their gender. In many cultures, rape victims are at very high risk of suffering additional violence or threats of violence after the rape. These acts may be perpetrated by the rapist or by friends and relatives of the rapist as a way of preventing the victims from reporting the rape, of punishing them for reporting it, or of forcing them to withdraw the complaint. In addition, the relatives of the victim may threaten the victim as a punishment for "bringing shame" to the family. This is especially the case in cultures where female virginity is highly valued and considered mandatory before marriage; in extreme cases, rape victims are killed in honor killings. In some countries, girls and women who are raped are forced by their families to marry their rapist.
A marriage is arranged because victims are deemed to have their "reputation" tarnished. There is extreme social stigma related to being the victim of rape and the loss of virginity. Marriage is claimed to be advantageous both for the victim, who does not remain unmarried and does not lose social status, and for the rapist, who avoids punishment. In 2012, the suicide of a 16-year-old Moroccan girl, who had been forced by her family to marry her rapist at the suggestion of the prosecutor and who subsequently endured abuse from him, sparked protests from activists against the law which allows a rapist to escape criminal sanctions by marrying his victim, and against this social practice, which is common in Morocco. In 2016, the government of Turkey introduced legislation which would overturn a man's conviction for child sex assault if he married his victim. Victim support Victims often receive support from the community and other victims. For example, when Nancy Ziegenmeyer went public regarding her experience, "she was inundated with letters and phone calls of support, thanks and praise for her courage in going public. Many of the letters were from women who had been raped and had lived with their secrets, often telling no one, often blaming themselves." Victims' rights In the United Kingdom, the Giving Victims a Voice report was published in response to allegations of sexual abuse made against English DJ and BBC Television presenter Jimmy Savile. The Director of Public Prosecutions described the report as marking a "watershed moment" and apologised for "shortcomings" in the handling of prior abuse claims. The report's publication resulted in some highlighting what could be systemic failure because of the number of complainants and institutions identified, but others criticised it for treating allegations as facts.
In the United States, the Crime Victims' Rights Act grants rights to victims including the right to be reasonably protected from the accused. Similarly, in Canada the Canadian Victims Bill of Rights provides rights to victims. There are victims' rights groups. Rise is an NGO working to implement a bill of rights for sexual assault victims. The Rape, Abuse & Incest National Network (RAINN) is an American group which carries out programs to prevent sexual assault, help victims, and to ensure that rapists are brought to justice. The Victim Rights Law Center is an American non-profit organization that provides free legal services to victims of rape and sexual assault. In the United States, the Office on Violence Against Women works to administer justice and strengthen services for victims of domestic violence, dating violence, sexual assault, and stalking. In South Africa, People Opposing Women Abuse (POWA) is a women's rights organisation that provides services and engages in advocacy to promote women's rights and improve women's quality of life. See also Effects and aftermath of rape Laws regarding rape Rape investigation References External links Clery Centre for Security on Campus People Opposing Women Abuse - POWA Abuse Rape Sex crimes Sexual abuse Sexual violence Social stigma
Post-assault treatment of sexual assault victims
[ "Biology" ]
5,804
[ "Abuse", "Behavior", "Aggression", "Human behavior" ]
51,661,210
https://en.wikipedia.org/wiki/Probabilistic%20semantics
One of the most severe limitations of the Semantic Web is its inability to deal with uncertain knowledge. Probabilistic semantics extend current semantic technology to overcome that limitation. However, due to their probabilistic approach, probabilistic semantics can describe only those uncertainties that can be quantified; in particular, they cannot model conceptual uncertainty. References Semantic Web
Probabilistic semantics
[ "Technology" ]
77
[ "Computing stubs", "World Wide Web stubs" ]
51,663,564
https://en.wikipedia.org/wiki/NGC%20236
NGC 236 is a spiral galaxy located in the constellation Pisces. It was discovered on August 3, 1864 by Albert Marth. References External links 0236 Spiral galaxies Pisces (constellation) 002596
NGC 236
[ "Astronomy" ]
46
[ "Pisces (constellation)", "Constellations" ]
51,663,612
https://en.wikipedia.org/wiki/NGC%20237
NGC 237 is a spiral galaxy located in the constellation Cetus. It was discovered on September 27, 1867 by Truman Safford. Image gallery References External links 0237 Barred spiral galaxies Cetus Discoveries by Truman Safford 002597
NGC 237
[ "Astronomy" ]
51
[ "Cetus", "Constellations" ]
51,663,815
https://en.wikipedia.org/wiki/Mischa%20Dohler
Mischa Dohler is a Fellow of the Royal Academy of Engineering, Fellow of the Institute of Electrical and Electronics Engineers (IEEE) and Fellow of the Royal Society of Arts (RSA). He was a Chair Professor of Wireless Communications at King's College London, where he worked on 6G and the Internet of Skills. He has been appointed to the Spectrum Advisory Board of Ofcom. Career He has served as CTO at Worldsensing and at Sirius Insight. References External links Faculty profile Personal website 1975 births Living people Alumni of King's College London Academics of King's College London Electronics engineers
Mischa Dohler
[ "Engineering" ]
128
[ "Electronics engineers", "Electronic engineering" ]
51,663,914
https://en.wikipedia.org/wiki/Tlalocite
Tlalocite is a rare and complex tellurate mineral with the formula Cu10Zn6(TeO4)2(TeO3)(OH)25Cl · 27 H2O. It has a Mohs hardness of 1, and a cyan color. It was named after Tlaloc, the Aztec god of rain, in allusion to the high amount of water contained within the crystal structure. It is not to be confused with quetzalcoatlite, which often looks similar in color and habit. Occurrence Tlalocite was first identified in the Bambollite mine (La Oriental), Moctezuma, Municipio de Moctezuma, Sonora, Mexico and it was approved by the IMA in 1974. It often occurs together with tenorite, azurite, malachite and tlapallite. It is found in partially oxidized portions of tellurium-bearing hydrothermal veins. References Copper(II) minerals Zinc minerals Tellurite minerals Tellurate minerals 25 Orthorhombic minerals Minerals described in 1974
Tlalocite
[ "Chemistry" ]
225
[ "Hydrate minerals", "Hydrates" ]
51,665,202
https://en.wikipedia.org/wiki/Software%20monetization
Software monetization is a strategy employed by software companies and device vendors to maximize the profitability of their software. The software licensing component of this strategy enables software companies and device vendors to simultaneously protect their applications and embedded software from unauthorized copying, distribution, and use, and capture new revenue streams through creative pricing and packaging models. Whether a software application is hosted in the cloud, embedded in hardware, or installed on premises, software monetization solutions can help businesses extract the most value from their software. Another way to achieve software monetization is through paid advertising and the various compensation methods available to software publishers. Pay-per-install (PPI), for example, generates revenue by bundling third-party applications, also known as adware, with either freeware or shareware applications. History The exact origin of the term 'software monetization' is unknown; however, it has been in use in the information security industry since 2010. It was first used to articulate the value of licensing for cloud-hosted applications, but later came to encompass applications embedded in hardware and installed on premises. Today, software monetization broadly applies to software licensing, protection, and entitlement management solutions. In the digital advertising space, the term refers to solutions that increase revenue through installs, traffic, display ads, and search. Key areas of software monetization IP protection Software constitutes a significant part of a software company or device vendor's intellectual property (IP) and, as such, may benefit from strong security, encryption, and digital rights management (DRM).
Depending on a company's particular use case, it can choose to implement a hardware, software, or cloud-based licensing solution, or to open source the software and rely on donations and/or compensation for support, customization or enhancements. A hardware-based protection key, or dongle, is best suited to software publishers concerned about the security of their product, as it offers the highest level of copy protection and IP protection. Although a key must be physically connected in order to access or run an application, end users are not required to install any device drivers on their machines. A software-based protection key is ideal for software publishers who require flexible license delivery. The virtual nature of software keys eliminates the need to ship a physical product, thus enabling end users to quickly install and use an application with minimal fuss. Cloud-based licensing, on the other hand, provides automatic and immediate license enablement, so users can access software from any device, including virtual machines and mobile devices. It is in the best interests of software companies and device vendors to take the necessary measures to protect their code from software piracy, a problem that costs the global software industry more than $100 billion annually. However, software protection is not just about preventing revenue loss; it is also about an organization's ability to protect the integrity of its product or service and brand reputation. Pricing and packaging An independent report by Vanson Bourne found that software vendors are losing revenue due to rigid licensing and delivery options. Since the demands of enterprise and end users are constantly evolving, software companies and device vendors must be able to adapt their pricing and packaging strategies on the fly. Separating an application's features and selling them individually at a premium is a highly effective way to reach new market segments.
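The software-based protection key and feature-based packaging ideas described above can be sketched in a few lines of code. The following is an illustrative example only, not any vendor's actual scheme; all names and the signing secret are hypothetical. The vendor issues a license key that encodes the customer's entitled features and signs it with an HMAC, and the application verifies the signature before unlocking those features:

```python
import hashlib
import hmac

# Hypothetical signing secret, held by the vendor's license server.
SECRET = b"vendor-signing-secret"

def issue_license(customer_id: str, features: list[str]) -> str:
    """Encode the customer's entitled features and sign the payload."""
    payload = customer_id + "|" + ",".join(features)
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "|" + sig

def entitled_features(license_key: str) -> list[str]:
    """Verify the signature; return the feature list, or [] if tampered."""
    payload, _, sig = license_key.rpartition("|")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return []
    return payload.split("|", 1)[1].split(",")

key = issue_license("acme-corp", ["reporting", "export"])
assert entitled_features(key) == ["reporting", "export"]
# A key tampered with to upgrade entitlements fails verification:
assert entitled_features(key.replace("export", "premium")) == []
```

Because the feature list travels inside the signed key, the same binary can be sold in different packages simply by issuing different keys, which is the essence of the feature-separation strategy described above.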
Customers have come to expect the freedom to consume a software offering on their own terms, which is why software companies and device vendors are increasingly turning to flexible licensing solutions. Entitlement management An entitlement management solution makes it possible to activate and provision cloud, on-premises, and embedded software applications from a single platform. Having the ability to manage homegrown or third-party licensing systems from one, centralized interface is conducive to an operationally efficient back office. With such a solution in place, time-consuming manual tasks can be automated for greater accuracy and reduced costs. Self-service web portals allow end users to perform a variety of tasks themselves, cutting down on support calls and improving customer satisfaction. Usage tracking Usage tracking provides essential business insight into end-user entitlements, as well as the consumption of products and features. Advanced data collection and reporting tools help optimize investment in the product roadmap and drive future business strategies. Furthermore, making usage data accessible to users helps them stay in compliance with their license agreements. Advertising The use of commercial advertisements and contextual advertisements has been a foundation of software monetization since free software first hit the market. Advertisements can take many forms, such as text ads, banners, short commercial videos and other types of software advertisements. Software monetization strategies Software monetization is a critical aspect for developers and companies seeking to generate revenue from their software products. Strategies range from traditional models like one-time purchases, where users pay a single fee for perpetual access, to more dynamic approaches like subscriptions and freemium models, where users pay for ongoing access or advanced features. Other methods include advertising, licensing, and pay-per-use systems.
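The usage tracking described above can be illustrated with a minimal sketch (all names are hypothetical; real entitlement platforms provide far richer reporting): each feature invocation is recorded as an event, and per-customer consumption is then aggregated, which can drive compliance reports or pay-per-use billing.

```python
from collections import Counter

class UsageTracker:
    """Records feature invocations and aggregates consumption per customer."""

    def __init__(self) -> None:
        self.events: Counter = Counter()

    def record(self, customer: str, feature: str) -> None:
        """Log one invocation of a feature by a customer."""
        self.events[(customer, feature)] += 1

    def report(self, customer: str) -> dict:
        """Per-feature usage counts for a single customer."""
        return {f: n for (c, f), n in self.events.items() if c == customer}

tracker = UsageTracker()
tracker.record("acme", "export")
tracker.record("acme", "export")
tracker.record("acme", "dashboard")
tracker.record("globex", "export")
assert tracker.report("acme") == {"export": 2, "dashboard": 1}
```

Under a pay-per-use model, the report for a billing period would simply be multiplied by a per-invocation rate; under a compliance model, it would be checked against the entitlements in the customer's license.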
The choice of monetization strategy depends on various factors including the software type, target audience, and desired revenue model. Emerging trends also indicate a move towards blockchain and value-based pricing. Effective monetization is essential for long-term success and sustainability in the software industry. Emerging trends Many traditional device vendors still see themselves as hardware providers, first and foremost, even though the most valuable component of their offering is the embedded software driving it. However, since the advent of the Internet of Things (IoT), that paradigm is shifting toward a more software-centric focus, as device vendors large and small make the inevitable business transformation into software companies. The need to license software, manage entitlements, and protect trade secrets cuts across all industries; from medtech to industrial automation and telecommunications. Antitrust compliance of software monetization Several software companies are among the most profitable businesses in the world. For example, Amazon is the dominant market leader in e-commerce, with 50% of all online sales in the United States going through the platform. Another highly successful software company, Apple, shares a duopoly with Alphabet in the field of mobile operating systems: 27% of the market share belongs to Apple (iOS) and 72% to Google (Android). Alphabet, Facebook and Amazon have been referred to as the "Big Three" of digital advertising. In most jurisdictions around the world, it is an essential legal obligation for any software company to pursue its software monetization strategy in compliance with antitrust laws. Unfortunately, e-commerce is highly susceptible to antitrust violations that often have to do with improper software monetization.
Some software companies systematically utilize price fixing, kickbacks, dividing territories, tying agreements and anticompetitive product bundling (although not all product bundling is anticompetitive), refusal to deal and exclusive dealing, vertical restraints, horizontal territorial allocation, and similar anticompetitive practices to limit competition and increase the opportunity for monetization. In 2019 and 2020, the Big Tech industry became the center of antitrust attention from the United States Department of Justice and the United States Federal Trade Commission, which included requests to provide information about prior acquisitions and potentially anticompetitive practices. Some Democratic candidates running for president proposed plans to break up Big Tech companies and regulate them as utilities. "The role of technology in the economy and in our lives grows more important every day," said FTC Chairman Joseph Simons. "As I've noted in the past, it makes sense for us to closely examine technology markets to ensure consumers benefit from free and fair competition." In June 2020, the European Union opened two new antitrust investigations into practices by Apple. The first focuses on issues including whether Apple is using its dominant position in the market to stifle competition via its music and book streaming services. The second focuses on Apple Pay, which allows payment by Apple devices to brick-and-mortar vendors; Apple limits the ability of banks and other financial institutions to use the iPhone's near-field radio frequency technology. Fines are insufficient to deter anti-competitive practices by high-tech giants, according to European Commissioner for Competition Margrethe Vestager. Commissioner Vestager explained, "fines are not doing the trick. And fines are not enough because fines are a punishment for illegal behaviour in the past. What is also in our decision is that you have to change for the future.
You have to stop what you're doing." Gig economy online marketplaces like Uber, Lyft, Handy, Amazon Home Services, DoorDash, and Instacart have perfected a process where workers deal bilaterally with gigs whose employers have none of the standard obligations of employers, while the platform operates the entire labor market to its own benefit, what some antitrust experts call a "for-profit hiring hall." Gig workers such as Uber drivers are not employees, and hence Uber setting the terms on which they transact with customers, including fixing the prices charged to customers, arguably constitutes a violation of the ban on restraints of trade in the Sherman Antitrust Act of 1890. In the United States, the question of whether pricing by companies such as Uber amounts to a price-fixing conspiracy, and whether that price fixing is horizontal, has yet to be resolved at trial. In response to price-fixing allegations, Uber publicly stated: "we believe the law is on our side and that's why in four years no anti-trust agency has raised this as an issue and there has been no similar litigation like it in the U.S." The spirit of antitrust law is to protect consumers from the anticompetitive behavior of businesses that either hold monopoly power in their market or have banded together to exert cartel market behavior. Monopoly or cartel collusion creates market disadvantages for consumers. However, antitrust law clearly distinguishes between purposeful monopolies and businesses that found themselves in a monopoly position purely as the result of business success. The purpose of antitrust law is to stop businesses from deliberately creating monopoly power. Discussions of antitrust policy in software are often clouded by common myths about this widely misunderstood area of the law. For example, the United States federal Sherman Antitrust Act of 1890 criminalizes monopolistic business practices, specifically agreements that restrain trade or commerce.
At the same time, the Sherman Act allows the organic creation of legitimately successful businesses that earn honest profits from consumers. The Act's main function is to preserve a competitive marketplace. The Big Tech companies are large and successful, but success alone is not reason enough for antitrust action; a legitimate breach of antitrust law must be the cause of any action against a business. Antitrust law doesn't condemn a firm for developing a universally popular search engine, such as Google, even if that success leads to market dominance. It's how a monopoly is obtained or preserved that matters, not its mere existence.

See also

Business models for open-source software
License manager
Software protection dongle
Big Tech
Gig economy
Competition law

References
Aspergillus clavatus is a species of fungus in the genus Aspergillus with conidia measuring 3–4.5 × 2.5–4.5 μm. It is found in soil and animal manure. The fungus was first described scientifically in 1834 by the French mycologist Jean-Baptiste Henri Joseph Desmazières. The fungus can produce the toxin patulin, which may be associated with disease in humans and animals. The species is only occasionally pathogenic. Many species of Aspergillus produce dry, hydrophobic spores that are easily inhaled by humans and animals; due to their small size, about 70% of spores of A. fumigatus are able to penetrate into the trachea and primary bronchi, and close to 1% into the alveoli. Inhalation of Aspergillus spores is therefore a health risk. A. clavatus is allergenic, causing the occupational hypersensitivity pneumonitis known as malt-worker's lung.

History and taxonomy

Aspergillus clavatus is characterized by elongated, club-shaped vesicles and blue-green uniseriate conidia. The fungus was first described scientifically in 1834 by the French mycologist Jean-Baptiste Henri Joseph Desmazières. It belongs to the Aspergillus section Clavati (formerly known as the Aspergillus clavatus group), which as recognized by Charles Thom and Margaret Church (1926) comprised two species, Aspergillus clavatus and Aspergillus giganteus. In the succeeding years, four more species were assigned to section Clavati: Aspergillus rhizopodus, Aspergillus longivesica, Neocarpenteles acanthosporus and Aspergillus clavatonanicus. Later, Aspergillus pallidus was concluded by Samson (1979) to be a white variant (synonym) of A. clavatus, a conclusion supported by the identical DNA sequences of the two species. A sexual stage with a Neocarpenteles teleomorph was described in 2018, but under the one fungus-one name convention the original A. clavatus epithet was retained.
Growth and morphology

Aspergillus clavatus grows rapidly, forming a velvety and fairly dense felt that is bluish-grey green in colour. The emerging conidial heads are large and clavate when very young, quickly splitting into conspicuous, compact, divergent columns. The conidia-bearing conidiophores are generally coarse, smooth-walled, uncoloured, hyaline, and can grow to be very long. The elongated, club-shaped (clavate) vesicles bear phialides over their entire surface, contributing to the short and densely packed structure. The sterigmata are usually uniseriate, numerous and crowded. The conidia formed on them are elliptical, smooth and comparatively thick-walled. A. clavatus usually expresses conidiophores 1.5–3.00 mm in length, which arise from specialized, widened hyphal cells that eventually become the branching foot cells. The conidia of A. clavatus measure 3.0–4.5 × 2.5–3.5 μm. Cleistothecia are produced in crosses after approximately 4–10 weeks of incubation on suitable growth media at 25 °C. Cleistothecia are yellowish-brown (fawn) to dark brown in colour, range from 315–700 μm in diameter, and have a relatively hard outer wall (peridium). At maturity the cleistothecia contain asci that themselves contain ascospores, which are clear, lenticular (with ridges evident) and 6.0–7.0 μm in diameter.

Growth on Czapek's solution agar

Aspergillus clavatus colonies grow rapidly on Czapek's solution agar, reaching 3.0–3.5 cm in 10 days at 24–26 °C. Growth is usually plane or moderately furrowed, with the occasional appearance of floccose strains, but generally a comparatively thin surface layer of mycelial felt is observed, which produces a copious number of erect conidiophores. The reverse is usually uncoloured but becomes brown with time in some strains. While odor is not prominent in some strains, it can be extremely unpleasant in others.
The large conidial heads extend from 300 to 400 μm by 150 to 200 μm when young. With time, however, they split into two or more divergent, compressed conidial chains reaching 1.00 mm, with a colour ranging from artemisia green to slate olive. The conidiophores grow up to 1.5–3.00 mm in length and 20–30 μm in diameter. They slowly enlarge at the apex into a clavate vesicle with a fertile area 200 to 250 μm in length and 40–60 μm wide. The sterigmata usually range from 2.5–3.5 × 2.0–3.0 μm at the base of the vesicle to 7.0–8.0 (occasionally 10) × 2.5–3.0 μm at the apex. The conidia are comparatively thick-walled and measure 3.0–4.5 × 2.5–3.5 μm; while they can be larger in some strains, in others their appearance may be irregular.

Growth on malt extract agar

On malt extract agar, the morphology of A. clavatus differs from that on Czapek's solution agar. Typical strains grown on malt media produce less abundant conidial structures, which can be larger in size; in other (non-typical) strains, the conidial heads increase in number but decrease in size. The conidiophores range from 300 to 500 μm and bear loose, columnar heads. Typical strains may have a strong and unpleasant odor, whereas non-typical strains are characterized as odorless. Colonies arising from a single conidium on malt extract agar contained 25×10^7 conidia after six days.

Examination

Phialide development and conidium formation in A. clavatus have been examined using TEM, and SEM showed that the first-formed conidium and phialide share a continuous wall. Additionally, recombination with an albino mutant led to the production of heterokaryotic conidial heads with mixed conidial colours. A GC-content of 52.5–55% was detected upon DNA analysis, and its soluble wall carbohydrates consist of mannitol and arabitol.
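A GC-content figure like the 52.5–55% quoted above is simply the percentage of G and C bases in a sequence. The helper below is a generic illustration; the sequence used is made up, not A. clavatus genome data:

```python
def gc_content(seq: str) -> float:
    """Percentage of bases in a DNA sequence that are G or C."""
    seq = seq.upper()
    gc = seq.count("G") + seq.count("C")
    return 100.0 * gc / len(seq)

print(gc_content("GGCCATGCAT"))  # 6 of 10 bases are G or C -> 60.0
```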
Physiology

Light stimulates the elongation of conidiophores in A. clavatus. The more favourable carbon sources include starch, dextrin, glycogen and especially fructose. A substantial degree of lipid synthesis occurs, whereas cellulose and usnic acid are degraded. A. clavatus also produces riboflavin, ribonuclease, acid phosphodiesterase and acid phosphatase in liquid culture. A. clavatus can oxidize tryptamine to indole acetic acid. It can absorb and accumulate hydrocarbons from fuel oil, incorporate metaphosphate, and synthesize ethylene, clavatol and kojic acid. It is also responsible for the production of the mycotoxins patulin and sterigmatocystin, and has an extremely high capacity for alcohol fermentation. Bioinformatic analysis revealed that A. clavatus contains a full complement of identified euascomycete sex genes, and a heterothallic sexual cycle involving outcrossing between MAT1-1 and MAT1-2 isolates was subsequently described. A. clavatus can also be a food source for Collembola and has been found to be parasitized by Fusarium solani.

Habitat and ecology

Aspergillus clavatus is often described as a spoilage organism occurring on dung and in soil, and can also grow in strongly alkaline conditions. Geographically, A. clavatus has been found in tropical, subtropical and Mediterranean areas. It has been recorded at low frequencies in the soils of India, and is also found in Bangladesh, Sri Lanka, Hong Kong, Jamaica, Brazil, Argentina, South Africa, the Ivory Coast, Egypt, Libya, Turkey, Greece, Italy, the United States of America, Japan, the USSR and Czechoslovakia. It has been detected in rocks of a karst cave and in stratigraphic core samples descending to 1200 m in central Japan. However, it is usually collected from cultivated soils, including those bearing cotton, potatoes, sugar cane, legumes, paddy and Artemisia herba-alba.
It has also been obtained from soil under burnt steppe vegetation, desert soils, and the rhizospheres of banana, groundnuts and wheat. A. clavatus has also been detected in the ripe compost of municipal waste, and nitrogen and NPK fertilizers are found to play an important role in stimulating it. A. clavatus is also referred to as a cosmopolitan fungus. Besides soil and dung, it can be found in stored products with high levels of entrapped moisture, such as stored cereals, rice, corn and millet. It has further been isolated from insects, especially from dead adult bees and honeycombs, and has been collected from the feathers and droppings of free-living birds. A. clavatus is also common in decomposing materials; its ability to withstand strongly alkaline conditions allows it to act as a decomposition catalyst in situations where other fungi usually cannot function.

Applications and medical uses

Weisner in March 1942 first noted the production of an antibiotic by strains of A. clavatus, and the active substance was known as clavatin. The antibiotic was later named clavacin, in August 1942, by Waksman, Horning and Spencer. Clavacin is also known as patulin. Patulin is receiving significant attention today because of its occurrence in apple juices. Clavacin was noted to be valuable in the treatment of the common cold and exerts a fungistatic or fungicidal effect on certain dermatophytes. A. clavatus together with Phytophthora cryptogea in soil provided protection against damping-off of tomato seedlings by decreasing the spread of pathogens. Conversely, A. clavatus with the addition of glucose increased the pathogenicity of Verticillium albo-atrum to tomatoes. A. clavatus also produces the following: cytochalasin E, cytochalasin K, tryptoquivaline, nortryptoquivalone, nortryptoquivaline, deoxytryptoquivaline, deoxynortryptoquivaline, tryptoquivaline E, and tryptoquivaline N. Furthermore, A.
clavatus isolates produce ribotoxins, which can help develop immunotherapy processes for cancer. A. clavatus has also been used in the formation of extracellular bionanoparticles from silver nitrate solutions. These nanoparticles display antimicrobial properties effective against MRSA and MRSE.

Pathogenicity

Aspergillus clavatus is known as an agent of allergic aspergillosis and has been implicated in multiple pulmonary infections. It has also been labelled an opportunistic fungus, as it is responsible for causing aspergillosis in compromised patients. A. clavatus can also cause neurotoxicosis in sheep and otomycosis. In Scotland and elsewhere, A. clavatus is reported to cause the mould allergy known as "maltster's lung". Extrinsic allergic alveolitis (EAA) caused by Aspergillus clavatus involves a Type 1 immune reaction. It is described as a true hypersensitivity pneumonia, which usually occurs among malt workers, with symptoms of fever, chills, cough and dyspnea. In severe cases, glucocorticoids are used. Microgranulomatous hypersensitivity pneumonitis, in which interstitial granulomatous infiltration occurs, usually in malt workers, is caused by allergy to antigens of Aspergillus clavatus. EAA is caused by allergy to Aspergillus conidia, usually in non-atopic individuals who have been heavily exposed to organic dust laden with conidia and mycelial debris; the condition involves the lung parenchyma. A strain of A. clavatus has also caused hyperkeratosis in calves. Spore walls of a sputum-derived isolate of Aspergillus clavatus were extracted and treated with ethanol following alkaline hydrolysis, and yielded mutagens. The extracts were given to unimmunised mice, causing a lung reaction and leading to cases of pulmonary mycotoxicosis. A rising incidence of lung tumours was also observed. This study revealed that an isolate of A.
clavatus, which is able to produce metabolites highly toxic in bacterial and mammalian cells, will cause an inflammatory response in the lungs of unimmunized mice.

References
Jiří Linhart (13 April 1924 – 6 January 2011) was a nuclear fusion physicist and Czech Olympic swimmer. He competed in the men's 200 metre breaststroke at the 1948 Summer Olympics in London, and stayed on in London afterwards, taking his PhD under the supervision of Dennis Gabor. He was a pioneer of nuclear fusion, the author of Plasma Physics (1960), the first textbook on plasma science, and of many academic papers and early patents on nuclear reactors. In 1956 he became head of the acceleration group at CERN, and in 1960 he became head of the EURATOM group in Frascati. He was also a very keen chess player, playing in the Haifa Olympiad in 1976.

References

External links
LG V20 is an Android phone manufactured by LG Electronics in its LG V series, succeeding the LG V10 released in 2015. Unveiled on September 6, 2016, it was the first phone released with the Android Nougat operating system. Like the V10, the V20 has a secondary display panel near the top of the device that can display additional messages and controls, and a quad DAC for audio. The V20 has a user-replaceable battery, unlike its successor, the LG V30, unveiled on 31 August 2017.

Specifications

The LG V20 was released in 2016 as LG's second V-series flagship smartphone. Its specifications include the Qualcomm Snapdragon 820 system-on-chip, 4 GB of RAM and 64 GB of storage, a 5.7-inch Quad HD (2560×1440) IPS LCD with an additional secondary display, dual 16 MP (75°, f/1.8) + 8 MP (135°, f/2.4) rear cameras, a 5 MP (120°, f/1.9) front-facing camera, and a 3,200 mAh removable battery.

Hardware

The LG V20 continues the user-friendly hardware access design of the LG G5: it has a removable back chassis of aluminum alloy, making battery removal and access to internal components for repairs significantly easier, with polycarbonate plastic top and bottom caps, a USB-C connector compliant with Qualcomm's Quick Charge 3.0, and a rear-mounted power button with an integrated fingerprint reader. It is available in Dark Grey (named "Titan"), Pink, and Silver color finishes. The V20 features a 5.7-inch 1440p IPS LCD display with up to 500 nits of brightness, coated in Gorilla Glass 4, and uses the Qualcomm Snapdragon 820 processor with 4 GB of LPDDR4 RAM. The device includes 64 GB of internal storage, expandable via microSD card up to 2 TB, and a 3,200 mAh removable battery.
The removable aluminum alloy cover and battery are designed to act as shock and impact dissipation if the V20 is dropped: both pop out from the main body and absorb the impact, dispersing it over the battery and cover and leaving the main components and screen less affected by drop damage than in other smartphones. This made the LG V20 one of the more drop-resistant and durable consumer smartphones available at release. Similar to the V10, a second, supplemental display is located at the top of the device to the right of the 120° wide-angle front-facing camera. The secondary display can be used to show notifications, access controls, and apps, as well as display the time and incoming messages. Both screens were made larger and brighter than those found on the V10. Additional features include an IR blaster, FM radio, a dedicated 24-bit high-fidelity audio recorder able to record up to 24-bit/192 kHz with manual channel controls, claimed to eliminate up to 50% more noise in audio/video recording than other smartphone audio recorders, Bluetooth 4.2, NFC, and dual-SIM support on the H990N/H990DS international versions, which, unlike most other dual-SIM smartphones, does not occupy the microSD card slot. The V20 shipped with Bang & Olufsen H3 in-ear headphones for a limited time, and the phone's audio is tuned by the same company in some countries, including the international variants (indicated by the B&O logo on the back of the cover). Every model of the V20 includes the dedicated ESS Sabre ES9218 32-bit Hi-Fi Quad DAC, able to drive headphones of up to 600 ohms to enhance wired headphone sound quality, with specifications of 130 dB SNR, 124 dB DNR and −112 dB THD+N. The LG V20 was the most powerful smartphone with a removable battery at the time, later superseded by the newer Fairphone devices.
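The raw data rate implied by the 24-bit/192 kHz recorder mentioned above follows directly from sample rate, bit depth and channel count. This is a back-of-the-envelope calculation, not an LG specification:

```python
def pcm_bitrate_mbps(sample_rate_hz: int, bit_depth: int, channels: int) -> float:
    """Uncompressed PCM data rate in megabits per second."""
    return sample_rate_hz * bit_depth * channels / 1e6

# Stereo recording at the V20's maximum quality setting:
print(pcm_bitrate_mbps(192_000, 24, 2))  # -> 9.216 Mbit/s before any compression
```

For comparison, CD-quality audio (16-bit/44.1 kHz stereo) works out to about 1.41 Mbit/s, roughly a sixth of that rate.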
Videos can be recorded with FLAC (lossless) audio tracks.

Software

The V20 ships with Android 7.0 Nougat and LG UX 5.0+ software. It was the first Android device to ship with Nougat. Updates to Android 8.0 Oreo for various models were released, but later versions are not supported. LG supports unlocking the bootloader only on the US996 variant, allowing it to be rooted and custom ROM images to be installed if available. There is no LG support for unlocking the bootloader on other variants; it is reported to be possible through means of DirtySanta (except on the H918, which instead relies on Lafsploit), but difficult and with the risk of damaging the phone, and custom ROM images such as LineageOS have been unofficially produced.

Unique features

The V20 released with a strong combination of features, including a user-replaceable battery, higher-quality audio than the competition, and strong camera hardware and software. It also had an infrared (IR) blaster that allowed it to control televisions and other remote-controlled devices. As part of its focus on audio and video, it had several strong points. It was one of the few phones at the time with an ultrawide camera, as well as laser autofocus. It had high acoustic overload point (AOP) microphones that allowed recording in very loud concert settings. It also had configurable-bitrate video and audio recording, with lossless audio, steerable sound focus and a waveform display while recording. As of Q1 2021, the V20 remained the only phone combining a user-replaceable battery, a dedicated DAC, a 3.5 mm headphone jack and an IR transmitter. The phone developed a cult following, despite the bloatware and the lack of an easily unlocked bootloader.
References

External links

Phone specifications
Measured technical specifications of the ESS Sabre ES9218 32-bit DAC
Audiophile DAC Reddit community discussion
The electric-pump-fed engine is a bipropellant rocket engine in which the fuel pumps are electrically powered, so all of the input propellant is burned directly in the main combustion chamber and none is diverted to drive the pumps. This differs from traditional rocket engine designs, in which the pumps are driven by a portion of the input propellants. An electric cycle engine uses electric pumps to pressurize the propellants from low-pressure fuel tank levels to high-pressure combustion chamber levels, generally from to . The pumps are powered by an electric motor, with electricity from a battery bank. Electrical pumps had been used in the secondary propulsion system of the Agena upper stage vehicle. As of December 2020, the only rocket engines to use electric propellant pump systems are the Rutherford engine, ten of which power the Electron rocket, and the Delphin engine, five of which power the first stage of Astra Space's Rocket 3. On 21 January 2018, Electron became the first electric-pump-fed rocket to reach orbit. In comparison to turbopump-fed rocket cycles such as staged combustion and gas generator, an electric cycle engine has potentially worse performance due to the added mass of batteries, but may have lower development and manufacturing costs due to its mechanical simplicity, its lack of high-temperature turbomachinery, and its easier controllability.

See also

Combustion tap-off cycle
Expander cycle
Gas-generator cycle
Pressure-fed engine
Rocket engine
Solid-propellant rocket
Staged combustion cycle

References
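The battery-mass penalty mentioned above comes from the shaft power the motor must supply, which for an incompressible propellant is roughly the volumetric flow times the pressure rise divided by pump efficiency. The sketch below uses that textbook relation with illustrative numbers only; the flow rate, pressure rise, density and efficiency are assumptions, not Rutherford or Delphin figures:

```python
def pump_power_kw(mdot_kg_s: float, delta_p_pa: float,
                  density_kg_m3: float, efficiency: float) -> float:
    """Electric shaft power needed, in kW: P = (mdot / rho) * delta_p / eta."""
    return (mdot_kg_s / density_kg_m3) * delta_p_pa / efficiency / 1e3

# 1 kg/s of kerosene-like fuel (~810 kg/m^3) raised by 10 MPa at 70% efficiency:
print(round(pump_power_kw(1.0, 10e6, 810.0, 0.70), 1))  # -> 17.6 kW
```

Multiplying that power by the burn time and dividing by the battery pack's specific energy gives the battery mass on which the performance trade-off hinges.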
Soupeur is a sexual practice involving attraction to other men's secretions, specifically to bread soaked in urine, or to semen.

Bread soaked in urine

This meaning refers to individuals who take pleasure in consuming food soaked in the urine of others, in particular bread abandoned and later retrieved at public urinals. The practice was popular in Paris and Marseille up until the 1960s and 1970s, and there were numerous contemporary references to it in popular culture. In an alternative form, a public urinal was stopped up and allowed to fill; a person would then enter it and submerge his penis in the urine of previous users. This was alternatively called dipping.

Semen in brothels

The term alternatively describes the act of individuals visiting brothels to consume the semen left on the prostitutes by the customers. This act is also named "do dinette." In her autobiography One two two, former prostitute Fabienne Jamet evokes this practice: "Back when I ruled the 122, I had a soupeur who could take thirty to forty loads at a time." Sometimes prostitutes "faked" their performance by brushing their pubic hair with ersatz sperm made from a mixture of egg white, urine and a few drops of bleach.

Mentions in popular culture

These practices, both of them extreme, often adorned otherwise innocuous descriptions of the slums of Paris in the literature of the mid-twentieth century:

"There were fairies still too green for the Bois... One of them came around every day, his specialty was the urinals and especially the crusts of bread soaking in the drains... He told us his adventures... He knew an old Jew who loved the stuff, a butcher on the rue des Archives... They'd go and eat it together... One day they got caught..." Louis-Ferdinand Céline, Death on Credit, 1936.

"And I will not cause my old pervert funds bogeymen nor soupeurs... I had nothing to complain about it." Albert Simonin, Hands Off the Loot, 1953.
"Not far from the subway, two or three soupeurs were waiting" Auguste Le Breton, Raid on chnouf, 1954. "I know a soupeur ... one of those guys that put bread in public urinals ... which revert to eat urine-soaked" Silvio Fanti, Man micropsychoanalysis, 1981. "Some drunks, prostitutes, and even a soupeur" Joann Sfar, Pascin 2005. "I was twelve. [...] Martial, my boyfriend of Clos Street, had teamed up with a guy who lived Orteaux Street, just above the urinal where diners began to dip their piece of bread in piss. They put the whole loaves coming and resume the gentle evening. We had spotted them, we were naive, we did not realize they ate the bread swollen with urine. "Nan Aurousseau, District carrion, 2012 Stock p. 100. References Bibliography Brenda B. Love Dictionnaire des fantasmes et perversions, Éditions Blanche, 2000. Fabienne Jamet One two two, éditions Olivier Orban, 1975. Laud Humphreys Le Commerce des pissotières, Pratiques anonymes dans l’Amérique des années 1960, La Découverte, 2005. Marc Lemonier et Alexandre Dupouy, Histoire(s) du Paris libertin, La Musardine, 2003. Robert Stoller La perversion, forme érotique de la haine, Payot, coll. "Petite Bibliothèque Payot", 2007, . Véronique Willemin, La Mondaine, histoire et archives de la Police des Mœurs, Hoëbeke, 2009. Urine Sexual fetishism
38 Virginis is an F-type main-sequence star in the constellation of Virgo. It is around 108 light-years distant from the Earth.

Nomenclature

The name 38 Virginis derives from the star being the 38th star in order of right ascension catalogued in the constellation Virgo by Flamsteed in his star catalogue. The designation b, as in 38 Virginis b, derives from the order of discovery and is given to the first planet orbiting a given star, followed by the other lowercase letters of the alphabet. In the case of 38 Virginis, only one planet was discovered, which was designated b.

Stellar characteristics

38 Virginis is an F-type main-sequence star with approximately 118% of the mass and 145% of the radius of the Sun. It has a temperature of 6557 K and is about 1.9 billion years old. In comparison, the Sun is about 4.6 billion years old and has a temperature of 5778 K. The star is metal-rich, with a metallicity ([Fe/H]) of 0.07 dex, or 117% of the solar amount. Its luminosity is 3.48 times that of the Sun. A companion star is cataloged in the CCDM at a separation of half an arcsecond.

Planetary system

The star is known to host one exoplanet, 38 Virginis b, discovered in 2016. The planet has one of the lowest eccentricities of any long-period giant exoplanet discovered, at 0.03. It has a mass of around 4.5 times that of Jupiter. Its orbit very likely puts it, and any moons it may have, in the habitable zone of its star.

Notes

References
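The habitable-zone claim can be sanity-checked from the stellar luminosity alone: the distance at which a planet receives Earth's insolation scales as the square root of L/L_sun. This is a rough one-line estimate, ignoring the detailed habitable-zone models used in the discovery paper:

```python
import math

def earth_flux_distance_au(luminosity_solar: float) -> float:
    """Distance where a planet gets Earth's insolation: d = sqrt(L / L_sun), in AU."""
    return math.sqrt(luminosity_solar)

print(round(earth_flux_distance_au(3.48), 2))  # -> ~1.87 AU for 38 Virginis
```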
The International Network Working Group (INWG) was a group of prominent computer science researchers in the 1970s who studied and developed standards and protocols for interconnection of computer networks. Set up in 1972 as an informal group to consider the technical issues involved in connecting different networks, its goal was to develop an international standard protocol for internetworking. INWG became a subcommittee of the International Federation for Information Processing (IFIP) the following year. Concepts developed by members of the group contributed to the Protocol for Packet Network Intercommunication proposed by Vint Cerf and Bob Kahn in 1974 and the Transmission Control Protocol and Internet Protocol (TCP/IP) that emerged later.

History

Founding and IFIP affiliation

The International Network Working Group was formed by Steve Crocker, Louis Pouzin, Donald Davies, and Peter Kirstein in June 1972 in Paris at a networking conference organised by Pouzin. Crocker saw that it would be useful to have an international version of the Network Working Group, which developed the Network Control Program for the ARPANET. At the International Conference on Computer Communication (ICCC) in Washington D.C. in October 1972, Vint Cerf was approved as INWG's Chair on Crocker's recommendation. The group included American researchers representing the ARPANET and the Merit network, the French CYCLADES and RCP networks, and British teams working on the NPL network, EPSS, and European Informatics Network. During early 1973, Pouzin arranged affiliation with the International Federation for Information Processing (IFIP). INWG became IFIP Working Group 1 under Technical Committee 6 (Data Communication) with the title "International Packet Switching for Computer Sharing" (WG6.1). This standing, although informal, enabled the group to provide technical input on packet networking to CCITT and ISO.
Its purpose was to study and develop "international standard protocols for internetworking". INWG published a series of numbered notes, some of which were also RFCs. Gateways/routers The idea for a router (called a gateway at the time) initially came about through INWG. These gateway devices were different from most previous packet switching schemes in two ways. First, they connected dissimilar kinds of networks, such as serial lines and local area networks. Second, they were connectionless devices, which had no role in assuring that traffic was delivered reliably, leaving that function entirely to the hosts. This particular idea, the end-to-end principle, had been pioneered in the CYCLADES network. Proposal for an international end-to-end protocol INWG met in New York in June 1973. Attendees included Cerf, Bob Kahn, Alex McKenzie, Bob Metcalfe, Roger Scantlebury, John Shoch and Hubert Zimmermann, among others. They discussed a first draft of an International Transmission Protocol (ITP). Zimmermann and Metcalfe dominated the discussions; Zimmermann had been working with Pouzin on the CYCLADES network while Metcalfe, Shoch and others at Xerox PARC had been developing the idea of Ethernet and the PARC Universal Packet (PUP) for internetworking. Notes from the meetings were recorded by Cerf and McKenzie and circulated after the meeting (INWG 28). There was a follow-up meeting in July. Gérard Le Lann and G. Grossman made contributions after the June meeting. Building on this work, in September 1973, Kahn and Cerf presented a paper, Host and Process Level Protocols for Internetwork Communication, at the next INWG meeting at the University of Sussex in England (INWG 39). Their ideas were refined further in long discussions with Davies, Scantlebury, Pouzin and Zimmermann. Pouzin circulated a paper on Interconnection of Packet Switching Networks in October 1973 (INWG 42), in which he introduced the term catenet for an interconnected network. 
Zimmermann and Michel Elie wrote a Proposed Standard Host-Host Protocol for Heterogenous Computer Networks: Transport Protocol in December 1973 (INWG 43). Pouzin updated his paper with A Proposal for Interconnecting Packet Switching Networks in March 1974 (INWG 60), published two months later in May. Zimmermann and Elie circulated a Standard host-host protocol for heterogeneous computer networks in April 1974 (INWG 61). Pouzin published An integrated approach to network protocols in May 1975. Kahn and Cerf published a significantly updated and refined version of their proposal in May 1974, A Protocol for Packet Network Intercommunication. A later version of the paper acknowledged several people including members of INWG and attendees at the June 1973 meeting. It was updated in INWG 72/RFC 675 in December 1974 by Cerf, Yogen Dalal and Carl Sunshine, which introduced the term internet as a shorthand for internetwork. Two competing proposals had evolved, the early Transmission Control Program (TCP), originally proposed by Kahn and Cerf, and the CYCLADES transport station (TS) protocol, proposed by Pouzin, Zimmermann and Elie. There were two sticking points: whether there should be fragmentation of datagrams (as in TCP) or standard-sized datagrams (as in TS); and whether the data flow was an undifferentiated stream or maintained the integrity of the units sent. These were not major differences. After "hot debate", McKenzie proposed a synthesis in December 1974, Internetwork Host-to-Host Protocol (INWG 74), which he refined the following year with Cerf, Scantlebury and Zimmermann (INWG 96). After the wider group reached agreement, a Proposal for an international end-to-end protocol was published by Cerf, McKenzie, Scantlebury, and Zimmermann in 1976. It was presented to the CCITT and ISO by Derek Barber, who became INWG chair earlier that year. Although the protocol was adopted by networks in Europe, it was not adopted by the CCITT, the ISO, or the ARPANET. 
The CCITT went on to adopt the X.25 standard in 1976, based on virtual circuits. ARPA began testing TCP in 1975 at Stanford, BBN and University College London. Ultimately, ARPA developed the Internet protocol suite, including the Internet Protocol as a connectionless layer and the Transmission Control Protocol as a reliable connection-oriented service, which reflects concepts in Pouzin's CYCLADES project. Email Ray Tomlinson, well known as the creator of network mail (i.e., email), described it in INWG Protocol Note 2 (a separate series of INWG notes) in September 1974. Derek Barber proposed an electronic mail protocol in 1979 in INWG 192 and implemented it on the European Informatics Network. This was referenced by Jon Postel in his early work on Internet email, published in the Internet Experiment Note series. Later Alex McKenzie served as chair from 1979 to 1982 and as Secretary beginning in 1983. Carl Sunshine, who had worked with Vint Cerf and Yogen Dalal at Stanford on the first TCP specification, subsequently served as INWG chair until 1987, when Harry Rudin took over. Later international work led to the OSI model in 1984, of which many members of the INWG became advocates. During the Protocol Wars of the late 1980s and early 1990s, engineers, organizations and nations became polarized over the issue of which standard, the OSI model or the Internet protocol suite, would result in the best and most robust computer networks. ARPA partnerships with the telecommunication and computer industry led to widespread private sector adoption of the Internet protocol suite as a communication protocol. The INWG continued to work on protocol design and formal specification until the 1990s, when it disbanded as the Internet grew rapidly. Nonetheless, issues with the Internet protocol suite remain, and alternatives building on INWG ideas, such as the Recursive Internetwork Architecture, have been proposed. 
Legacy The work of INWG was a significant step in the creation of the Transmission Control Program and ultimately the Internet.
Members The group had about 100 members, including the following:
Derek Barber
B. Barker
Vint Cerf
W. Clipsham
Donald Davies
Rémi Despres
V. Detwiler
Frank Heart
Alex McKenzie
Louis Pouzin
O. Riml
Harry Rudin
K. Samuelson
K. Sandum
Roger Scantlebury
B. Sexton
P. Shanks
C.D. Shepard
Carl Sunshine
J. Tucker
Barry Wessler
Hubert Zimmermann
See also
Coloured Book protocols
History of email
History of the Internet
List of Internet pioneers
Protocol Wars
Public data network
Notes References Primary sources In chronological order:
Cerf, Vinton (editor) (June 1973), International Transmission Protocol, IFIP WG6.1, INWG 28.
Cerf, Vinton; Kahn, Robert (September 1973), Host and Process Level Protocols for Internetwork Communication, IFIP WG6.1, INWG 39.
Pouzin, Louis (October 1973), Interconnection of Packet Switching Networks, IFIP WG6.1, INWG 42.
Zimmermann, Hubert; Elie, Michel (December 1973), Proposed Standard Host-Host Protocol for Heterogenous Computer Networks: Transport Protocol, IFIP WG6.1, INWG 43.
Pouzin, Louis (March 1974), A Proposal for Interconnecting Packet Switching Networks, IFIP WG6.1, INWG 60.
Zimmermann, Hubert; Elie, Michel (April 1974), Transport Protocol: Standard Host-Host Protocol for Heterogeneous Computer Networks, IFIP WG6.1, INWG 61.
Pouzin, Louis (May 1974), A Proposal for Interconnecting Packet Switching Networks, Proceedings of EUROCOMP, Brunel University, pp. 1023-36.
McKenzie, Alex (December 1974), Internetwork Host-to-Host Protocol, IFIP WG6.1, INWG 74.
Cerf, Vinton; McKenzie, Alex; Scantlebury, Roger; Zimmermann, Hubert (July 1975), Proposal for an Internetwork End-to-End Protocol, IFIP WG6.1, INWG 96.
Further reading External links
International Packet Network Working Group (INWG), Charles Babbage Institute Archives, University of Minnesota Archival Collection
Communications protocols Network protocols
International Network Working Group
[ "Technology" ]
2,119
[ "Computer standards", "Communications protocols" ]
63,033,543
https://en.wikipedia.org/wiki/Complemented%20subspace
In the branch of mathematics called functional analysis, a complemented subspace of a topological vector space $X$ is a vector subspace $M$ for which there exists some other vector subspace $N$ of $X$, called its (topological) complement in $X$, such that $X$ is the direct sum $M \oplus N$ in the category of topological vector spaces. Formally, topological direct sums strengthen the algebraic direct sum by requiring certain maps be continuous; the result retains many nice properties from the operation of direct sum in finite-dimensional vector spaces. Every finite-dimensional subspace of a Banach space is complemented, but other subspaces may not be. In general, classifying all complemented subspaces is a difficult problem, which has been solved only for some well-known Banach spaces. The concept of a complemented subspace is analogous to, but distinct from, that of a set complement. The set-theoretic complement of a vector subspace is never a complementary subspace. Preliminaries: definitions and notation If $X$ is a vector space and $M$ and $N$ are vector subspaces of $X$, then there is a well-defined addition map $S : M \times N \to X$, $S(m, n) = m + n$. The map $S$ is a morphism in the category of vector spaces, that is to say, linear. Algebraic direct sum The vector space $X$ is said to be the algebraic direct sum $M \oplus N$ (or direct sum in the category of vector spaces) when any of the following equivalent conditions are satisfied: The addition map $S : M \times N \to X$ is a vector space isomorphism. The addition map is bijective. $M \cap N = \{0\}$ and $M + N = X$; in this case $N$ is called an algebraic complement or supplement to $M$ in $X$ and the two subspaces are said to be complementary or supplementary. When these conditions hold, the inverse $S^{-1} : X \to M \times N$ is well-defined and can be written in terms of coordinates as $S^{-1} = (P_M, P_N)$. The first coordinate $P_M$ is called the canonical projection of $X$ onto $M$; likewise the second coordinate $P_N$ is the canonical projection of $X$ onto $N$. Equivalently, $P_M(x)$ and $P_N(x)$ are the unique vectors in $M$ and $N$, respectively, that satisfy $x = P_M(x) + P_N(x)$. As maps, $P_M + P_N = \operatorname{Id}_X$, $\ker P_M = N$, and $\ker P_N = M$, where $\operatorname{Id}_X$ denotes the identity map on $X$. Motivation Suppose that the vector space $X$ is the algebraic direct sum of $M$ and $N$. 
In the category of vector spaces, finite products and coproducts coincide: algebraically, $M \oplus N$ and $M \times N$ are indistinguishable. Given a problem involving elements of $X$, one can break the elements down into their components in $M$ and $N$, because the projection maps defined above act as inverses to the natural inclusion of $M$ and $N$ into $X$. Then one can solve the problem in the vector subspaces and recombine to form an element of $X$. In the category of topological vector spaces, that algebraic decomposition becomes less useful. The definition of a topological vector space requires the addition map $S$ to be continuous; its inverse $S^{-1}$ may not be. The categorical definition of direct sum, however, requires $P_M$ and $P_N$ to be morphisms, that is, continuous linear maps. The space $X$ is the topological direct sum of $M$ and $N$ if (and only if) any of the following equivalent conditions hold: The addition map $S : M \times N \to X$ is a TVS-isomorphism (that is, a surjective linear homeomorphism). $X$ is the algebraic direct sum of $M$ and $N$ and also any of the following equivalent conditions: $X$ is the direct sum of $M$ and $N$ in the category of topological vector spaces. The map $S$ is bijective and open. When considered as additive topological groups, $X$ is the topological direct sum of the subgroups $M$ and $N$. The topological direct sum is also written $X = M \oplus N$; whether the sum is in the topological or algebraic sense is usually clarified through context. Definition Every topological direct sum is an algebraic direct sum; the converse is not guaranteed. Even if both $M$ and $N$ are closed in $X$, $S^{-1}$ may still fail to be continuous. $N$ is a (topological) complement or supplement to $M$ if it avoids that pathology, that is, if, topologically, $X = M \oplus N$. (Then $M$ is likewise complementary to $N$.) The conditions above imply that any topological complement of $M$ is isomorphic, as a topological vector space, to the quotient vector space $X / M$. $M$ is called complemented if it has a topological complement (and uncomplemented if not). 
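A concrete finite-dimensional example may help fix the notation; the choice of $\mathbb{R}^2$, the $x$-axis, and the diagonal below is purely illustrative and not taken from the article.

```latex
% Illustrative example: X = R^2, M the x-axis, N the diagonal.
X = \mathbb{R}^2, \qquad
M = \{(x, 0) : x \in \mathbb{R}\}, \qquad
N = \{(t, t) : t \in \mathbb{R}\}.
% Every (a, b) in X decomposes uniquely into a part in M and a part in N:
(a, b) = \underbrace{(a - b,\ 0)}_{\in M} + \underbrace{(b,\ b)}_{\in N},
% so the canonical projections associated with this pair are
P_M(a, b) = (a - b,\ 0), \qquad P_N(a, b) = (b,\ b),
% and one checks directly that
P_M + P_N = \operatorname{Id}_X, \qquad \ker P_M = N, \qquad \ker P_N = M.
% In finite dimensions every linear map is continuous, so this algebraic
% direct sum is automatically a topological direct sum.
```

Replacing $N$ by the $y$-axis changes both projections, which illustrates that $P_M$ depends on the pair $(M, N)$, not on $M$ alone.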
The choice of $N$ can matter quite strongly: every complemented vector subspace $M$ has algebraic complements that do not complement $M$ topologically. Because a linear map between two normed (or Banach) spaces is bounded if and only if it is continuous, the definition in the categories of normed (resp. Banach) spaces is the same as in topological vector spaces. Equivalent characterizations The vector subspace $M$ is complemented in $X$ if and only if any of the following holds: There exists a continuous linear map $P : X \to X$ with image $M$ such that $P \circ P = P$. That is, $P$ is a continuous linear projection onto $M$. (In that case, $X = M \oplus \ker P$, and it is the continuity of $P$ that implies that this is a complement.) For every TVS $Y$, the restriction map $R : L(X; Y) \to L(M; Y)$, $R(u) = u|_M$, is surjective. If in addition $X$ is Banach, then an equivalent condition is: $M$ is closed in $X$, there exists another closed subspace $N \subseteq X$, and $S$ is an isomorphism from the abstract direct sum $M \oplus N$ to $X$. Examples If $X$ is a measure space and $A \subseteq X$ has positive measure, then $L^p(A)$ is complemented in $L^p(X)$. $c_0$, the space of sequences converging to $0$, is complemented in $c$, the space of convergent sequences. By Lebesgue decomposition, $L^1([0,1])$ is complemented in $\operatorname{rca}([0,1]) \cong C([0,1])^*$. Sufficient conditions For any two topological vector spaces $X$ and $Y$, the subspaces $X \times \{0\}$ and $\{0\} \times Y$ are topological complements in $X \times Y$. Every algebraic complement of $\overline{\{0\}}$, the closure of $0$, is also a topological complement. This is because $\overline{\{0\}}$ has the indiscrete topology, and so the algebraic projection is continuous. If $X = M \oplus N$ and the linear homeomorphism $A : X \to Y$ is surjective, then $Y = A(M) \oplus A(N)$. Finite dimension Suppose $X$ is Hausdorff and locally convex and $Y$ a free topological vector subspace: for some set $I$, we have $Y \cong \mathbb{K}^{(I)}$ (as a t.v.s.). Then $Y$ is a closed and complemented vector subspace of $X$. In particular, any finite-dimensional subspace of $X$ is complemented. In arbitrary topological vector spaces, a finite-dimensional vector subspace $Y$ is topologically complemented if and only if for every non-zero $y \in Y$, there exists a continuous linear functional on $X$ that separates $y$ from $0$. For an example in which this fails, see . 
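The first characterization above (a continuous linear projection $P$ with $P \circ P = P$ and image $M$) can be made concrete in finite dimensions, where every projection is continuous. The following sketch uses a deliberately non-orthogonal (oblique) projection in $\mathbb{R}^2$; the matrix and vectors are invented for illustration.

```python
# Sketch: M is complemented iff there is a continuous linear projection P
# with image M; then X = M (+) ker P.  Here X = R^2, M = the x-axis, and P
# projects onto M along the line N = span{(1, 1)}, so P is not orthogonal.
import numpy as np

P = np.array([[1.0, -1.0],
              [0.0,  0.0]])     # oblique projection onto the x-axis along (1,1)

assert np.allclose(P @ P, P)    # idempotent: P is a projection

x = np.array([4.0, 3.0])
m = P @ x                       # component in M = image of P
n = x - m                       # component in N = kernel of P

assert m[1] == 0.0              # m lies on the x-axis, i.e. in M
assert np.allclose(P @ n, 0.0)  # n lies in ker P
assert np.allclose(m + n, x)    # x = m + n, so X = M (+) ker P
```

Changing $P$ to another idempotent with the same image changes the complement $\ker P$, matching the remark that the choice of complement matters.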
Finite codimension Not all finite-codimensional vector subspaces of a TVS are closed, but those that are do have complements. Hilbert spaces In a Hilbert space, the orthogonal complement $M^{\bot}$ of any closed vector subspace $M$ is always a topological complement of $M$. This property characterizes Hilbert spaces within the class of Banach spaces: every infinite-dimensional, non-Hilbert Banach space contains a closed uncomplemented subspace, a deep theorem of Joram Lindenstrauss and Lior Tzafriri. Fréchet spaces Let $X$ be a Fréchet space over the field $\mathbb{K}$. Then the following are equivalent: $X$ is not normable (that is, no continuous norm generates the topology); $X$ contains a vector subspace TVS-isomorphic to $\mathbb{K}^{\mathbb{N}}$; $X$ contains a complemented vector subspace TVS-isomorphic to $\mathbb{K}^{\mathbb{N}}$. Properties; examples of uncomplemented subspaces A complemented (vector) subspace of a Hausdorff space $X$ is necessarily a closed subset of $X$, as is its complement. From the existence of Hamel bases, every infinite-dimensional Banach space contains unclosed linear subspaces. Since any complemented subspace is closed, none of those subspaces is complemented. Likewise, if $X$ is a complete TVS and $X / M$ is not complete, then $M$ has no topological complement in $X$. Applications If $u : X \to Y$ is a continuous linear surjection, then the following conditions are equivalent: The kernel of $u$ has a topological complement. There exists a "right inverse": a continuous linear map $v : Y \to X$ such that $u \circ v = \operatorname{Id}_Y$, where $\operatorname{Id}_Y$ is the identity map. (Note: This claim is an erroneous exercise given by Trèves. Let $X$ and $Y$ both be $\mathbb{R}$, where $X$ is endowed with the usual topology, but $Y$ is endowed with the trivial topology. The identity map $X \to Y$ is then a continuous, linear bijection but its inverse is not continuous, since $X$ has a finer topology than $Y$. The kernel $\{0\}$ has $X$ as a topological complement, but we have just shown that no continuous right inverse can exist. If $u$ is also open (and thus a TVS homomorphism) then the claimed result holds.) 
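In a Hilbert space the complement can be computed explicitly via orthogonal projection. The following numpy sketch works in $\mathbb{R}^3$ as a finite-dimensional stand-in for a Hilbert space; the matrix $A$ and vector $x$ are invented for illustration.

```python
# Sketch: in a Hilbert space, the orthogonal complement of a closed
# subspace M is a topological complement.  Here M = column space of A,
# and the orthogonal projection P onto M has M-perp as its kernel.
import numpy as np

def orthogonal_projection(A):
    """Return the orthogonal projection matrix onto the column space of A."""
    Q, _ = np.linalg.qr(A)       # orthonormal basis for M (reduced QR)
    return Q @ Q.T               # P = Q Q^T

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])       # M = span of two vectors in R^3
P = orthogonal_projection(A)

x = np.array([3.0, -1.0, 2.0])
m = P @ x                        # component in M
n = x - m                        # component in M-perp

assert np.allclose(P @ P, P)     # P is idempotent: a projection
assert np.allclose(m + n, x)     # x decomposes as m + n
assert np.allclose(A.T @ n, 0.0) # n is orthogonal to every column of A
```

Unlike the oblique case, this decomposition is determined by $M$ alone, which is exactly what fails in a general Banach space by the Lindenstrauss-Tzafriri theorem quoted above.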
The Method of Decomposition Topological vector spaces admit the following Cantor-Schröder-Bernstein-type theorem: Let $X$ and $Y$ be TVSs such that $X = X \oplus X$ and $Y = Y \oplus Y$. Suppose that $X$ contains a complemented copy of $Y$ and $Y$ contains a complemented copy of $X$. Then $X$ is TVS-isomorphic to $Y$. The "self-splitting" assumptions that $X = X \oplus X$ and $Y = Y \oplus Y$ cannot be removed: Tim Gowers showed in 1996 that there exist non-isomorphic Banach spaces $X$ and $Y$, each complemented in the other. In classical Banach spaces Understanding the complemented subspaces of an arbitrary Banach space up to isomorphism is a classical problem that has motivated much work in basis theory, particularly the development of absolutely summing operators. The problem remains open for a variety of important Banach spaces, most notably the space $L_1[0,1]$. For some Banach spaces the question is closed. Most famously, if $1 \leq p < \infty$ then the only complemented infinite-dimensional subspaces of $\ell_p$ are isomorphic to $\ell_p$, and the same goes for $c_0$. Such spaces are called prime (when their only infinite-dimensional complemented subspaces are isomorphic to the original). These are not the only prime spaces, however. The spaces $L_p[0,1]$ are not prime whenever $p \in (1, 2) \cup (2, \infty)$; in fact, they admit uncountably many non-isomorphic complemented subspaces. The spaces $L_2[0,1]$ and $L_\infty[0,1]$ are isomorphic to $\ell_2$ and $\ell_\infty$, respectively, so they are indeed prime. The space $L_1[0,1]$ is not prime, because it contains a complemented copy of $\ell_1$. No other complemented subspaces of $L_1[0,1]$ are currently known. Indecomposable Banach spaces An infinite-dimensional Banach space is called indecomposable whenever its only complemented subspaces are either finite-dimensional or finite-codimensional. Because a finite-codimensional subspace of a Banach space $X$ is always isomorphic to $X$, indecomposable Banach spaces are prime. The most well-known examples of indecomposable spaces are in fact hereditarily indecomposable, which means every infinite-dimensional subspace is also indecomposable. See also Proofs References Bibliography Functional analysis
Complemented subspace
[ "Mathematics" ]
2,026
[ "Functional analysis", "Functions and mappings", "Mathematical relations", "Mathematical objects" ]
63,033,632
https://en.wikipedia.org/wiki/Protocol%20Wars
The Protocol Wars were a long-running series of debates in computer science that occurred from the 1970s to the 1990s, when engineers, organizations and nations became polarized over the issue of which communication protocol would result in the best and most robust networks. This culminated in the Internet-OSI Standards War of the 1980s and early 1990s, ultimately "won" by the Internet protocol suite (TCP/IP) by the mid-1990s, when it became dominant through the rapid adoption of the Internet. In the late 1960s and early 1970s, the pioneers of packet switching technology built computer networks providing data communication, that is, the ability to transfer data between points or nodes. As more of these networks emerged in the mid to late 1970s, the debate about communication protocols became a "battle for access standards". An international collaboration between several national postal, telegraph and telephone (PTT) providers and commercial operators led to the X.25 standard in 1976, which was adopted on public data networks providing global coverage. Separately, proprietary data communication protocols emerged, most notably IBM's Systems Network Architecture in 1974 and Digital Equipment Corporation's DECnet in 1975. The United States Department of Defense (DoD) developed TCP/IP during the 1970s in collaboration with universities and researchers in the US, UK and France. IPv4 was released in 1981 and was made the standard for all DoD computer networking. By 1984, the international OSI reference model, which was not compatible with TCP/IP, had been agreed upon. Many European governments (particularly France, West Germany and the UK) and the United States Department of Commerce mandated compliance with the OSI model, while the US Department of Defense planned to transition from TCP/IP to OSI. 
Meanwhile, the development of a complete Internet protocol suite by 1989, and partnerships with the telecommunication and computer industry to incorporate TCP/IP software into various operating systems laid the foundation for the widespread adoption of TCP/IP as a comprehensive protocol suite. While OSI developed its networking standards in the late 1980s, TCP/IP came into widespread use on multi-vendor networks for internetworking and as the core component of the emerging Internet. Early computer networking Packet switching vs circuit switching Computer science was an emerging discipline in the late 1950s that began to consider time-sharing between computer users and, later, the possibility of achieving this over wide area networks. In the early 1960s, J. C. R. Licklider proposed the idea of a universal computer network while working at Bolt Beranek & Newman (BBN) and, later, leading the Information Processing Techniques Office (IPTO) at the Advanced Research Projects Agency (ARPA, later, DARPA) of the US Department of Defense (DoD). Independently, Paul Baran at RAND in the US and Donald Davies at the National Physical Laboratory (NPL) in the UK invented new approaches to the design of computer networks. Baran published a series of papers between 1960 and 1964 about dividing information into "message blocks" and dynamically routing them over distributed networks. Davies conceived of and named the concept of packet switching using high-speed interface computers for data communication in 1965–1966. He proposed a national commercial data network in the UK, and designed the local-area NPL network to demonstrate and research his ideas. The first use of the term protocol in a modern data-communication context occurs in an April 1967 memorandum A Protocol for Use in the NPL Data Communications Network written by two members of Davies' team, Roger Scantlebury and Keith Bartlett. 
Licklider, Baran and Davies all found it hard to convince incumbent telephone companies of the merits of their ideas. AT&T held a monopoly on communications infrastructure in the United States, as did the General Post Office (GPO) in the United Kingdom, which was the national postal, telegraph and telephone service (PTT). They both believed speech traffic would continue to dominate and continued to invest in traditional telegraphic techniques. Telephone companies were operating on the basis of circuit switching, alternatives to which are message switching or packet switching. Bob Taylor became the director of the IPTO in 1966 and set out to achieve Licklider's vision to enable resource sharing between remote computers. Taylor hired Larry Roberts to manage the programme. Roberts brought Leonard Kleinrock into the project; Kleinrock had applied mathematical methods to study communication networks in his doctoral thesis. At the October 1967 Symposium on Operating Systems Principles, Roberts presented the early "ARPA Net" proposal, based on Wesley Clark's idea for a message switching network using Interface Message Processors (IMPs). Roger Scantlebury presented Davies' work on a digital communication network and referenced the work of Paul Baran. At this seminal meeting, the NPL paper articulated how the data communications for such a resource-sharing network could be implemented. Larry Roberts incorporated Davies' and Baran's ideas on packet switching into the proposal for the ARPANET. The network was built by BBN. Designed principally by Bob Kahn, it departed from the NPL's connectionless network model in an attempt to avoid the problem of network congestion. The service offered to hosts by the network was connection oriented. It enforced flow control and error control (although this was not end-to-end). With the constraint that, for each connection, only one message may be in transit in the network, the sequential order of messages is preserved end-to-end. 
This made the ARPANET what would come to be called a virtual circuit network. Datagrams vs virtual circuits Packet switching can be based on either a connectionless or connection-oriented mode, which are different approaches to data communications. A connectionless datagram service transports data packets between two hosts independently of any other packet. Its service is best effort (meaning out-of-order packet delivery and data losses are possible). With a virtual circuit service, data can be exchanged between two host applications only after a virtual circuit has been established between them in the network. After that, flow control is imposed on sources, as much as needed by destinations and intermediate network nodes. Data are delivered to destinations in their original sequential order. Both concepts have advantages and disadvantages depending on their application domain. Where a best effort service is acceptable, an important advantage of datagrams is that a subnetwork may be kept very simple. A drawback is that, under heavy traffic, no subnetwork is per se protected against congestion collapse. In addition, for users of the best effort service, use of network resources does not enforce any definition of "fairness", that is, of relative delay among user classes. Datagram services include in every packet the information needed to look up the next link in the network. In these systems, routers examine each arriving packet, look at its routing information, and decide where to route it. This approach has the advantage that there is no inherent overhead in setting up the circuit, meaning that a single packet can be transmitted as efficiently as a long stream. Generally, this makes routing around problems simpler as only the single routing table needs to be updated, not the information for every virtual circuit. It also requires less memory, as only one route needs to be stored for any destination, not one per virtual circuit. 
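The contrast between per-packet routing and per-connection state can be sketched at a single switch; the topology, table entries and field names below are invented for illustration.

```python
# Toy contrast between datagram and virtual-circuit forwarding at one switch.

# Datagram mode: every packet carries its destination; the switch consults
# its routing table on every packet, and keeps no connection state.
routing_table = {"hostB": "link1", "hostC": "link2"}

def forward_datagram(packet):
    return routing_table[packet["dst"]]         # lookup on every packet

# Virtual-circuit mode: a setup phase stores per-connection state once;
# data packets then carry only a short circuit identifier.
circuit_table = {}

def setup_circuit(circuit_id, dst):
    circuit_table[circuit_id] = routing_table[dst]   # route chosen at setup

def forward_vc(packet):
    return circuit_table[packet["vc"]]          # no routing decision per packet

setup_circuit(7, "hostB")
assert forward_datagram({"dst": "hostB", "data": b"hi"}) == "link1"
assert forward_vc({"vc": 7, "data": b"hi"}) == "link1"
```

The sketch shows the trade-off described in the text: the datagram switch stores one route per destination, while the virtual-circuit switch stores one entry per live connection but skips the routing decision on data packets.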
On the downside, there is a need to examine every datagram, which makes them (theoretically) slower. On the ARPANET, the starting point in 1969 for connecting a host computer (i.e., a user) to an IMP (i.e., a packet switch) was the 1822 protocol, which was written by Bob Kahn. Steve Crocker, a graduate student at the University of California, Los Angeles (UCLA), formed a Network Working Group (NWG) that year. He said "While much of the development proceeded according to a grand plan, the design of the protocols and the creation of the RFCs was largely accidental." Under the auspices of Leonard Kleinrock at UCLA, Crocker led other graduate students, including Jon Postel, in designing a host-host protocol known as the Network Control Program (NCP). They planned to use separate protocols, Telnet and the File Transfer Protocol (FTP), to run functions across the ARPANET. After approval by Barry Wessler at ARPA, who had ordered certain more exotic elements to be dropped, the NCP was finalized and deployed in December 1970 by the NWG. NCP codified the ARPANET network interface, making connections easier to establish and enabling more sites to join the network. Roger Scantlebury was seconded from the NPL to the British Post Office Telecommunications division (BPO-T) in 1969. There, engineers developed a packet-switching protocol from basic principles for an Experimental Packet Switched Service (EPSS) based on a virtual call capability. However, the protocols were complex and limited; Davies described them as "esoteric". Rémi Després started work in 1971, at the CNET (the research center of the French PTT), on the development of an experimental packet switching network, later known as RCP. Its purpose was to put into operation a prototype packet switching service to be offered on a future public data network. Després simplified and improved on the virtual call approach, introducing the concept of "graceful saturated operation" in 1972. 
He coined the term "virtual circuit" and validated the concepts on the RCP network. Once a virtual circuit is set up, the data packets do not have to contain any routing information, which can simplify the packet structure and improve channel efficiency. The routers are also faster as the route setup is only done once; from then on, packets are simply forwarded down the existing link. One downside is that the equipment has to be more complex as the routing information has to be stored for the length of the connection. Another disadvantage is that the virtual connection may take some time to set up end-to-end, and for small messages, this time may be significant. TCP vs CYCLADES and INWG vs X.25 Davies had conceived and described datagram networks, done simulation work on them, and built a single packet switch with local lines. Louis Pouzin thought it looked technically feasible to employ a simpler approach to wide-area networking than that of the ARPANET. In 1972, Pouzin launched the CYCLADES project, with cooperation provided by the French PTT, including free lines and modems. He began to research what would later be called internetworking; at the time, he coined the term "catenet" for concatenated network. The name "datagram" was coined by Halvor Bothner-By. Hubert Zimmermann was one of Pouzin's principal researchers and the team included Michel Elie, Gérard Le Lann, and others. While building the network, they were advised by BBN as consultants. Pouzin's team was the first to tackle the highly complex problem of providing user applications with a reliable virtual circuit while using a best-effort service. The network used unreliable, standard-sized datagrams in the packet-switched network and virtual circuits for the transport layer. First demonstrated in 1973, it pioneered the use of the datagram model, functional layering, and the end-to-end principle. Le Lann proposed the sliding window scheme for achieving reliable error and flow control on end-to-end connections. 
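A sliding-window scheme of the kind Le Lann proposed can be sketched as a minimal go-back-N transfer; the window size, loss pattern and framing below are invented for illustration and are not taken from the historical protocol.

```python
# Minimal go-back-N sliding-window sketch: the sender keeps up to `window`
# unacknowledged items in flight; the receiver accepts only in order, and a
# loss forces retransmission from the window base (cumulative ACK).
def sliding_window_transfer(data, window=3, lost_once=frozenset()):
    """Deliver `data` reliably over a channel that drops each sequence
    number in `lost_once` exactly one time."""
    delivered, base, dropped = [], 0, set()
    while base < len(data):
        # Send every item inside the current window.
        for seq in range(base, min(base + window, len(data))):
            if seq in lost_once and seq not in dropped:
                dropped.add(seq)   # channel loses this copy once; later
                break              # items in the window arrive out of order
                                   # and are discarded (modeled by breaking)
            delivered.append(data[seq])
            base += 1              # cumulative ACK slides the window forward
    return delivered

msg = list("CYCLADES")
assert sliding_window_transfer(msg, window=3, lost_once={2, 5}) == msg
```

Even this toy version shows the two functions the text mentions: error control (lost items are resent from the window base) and flow control (at most `window` items are outstanding at once).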
However, the sliding window scheme was never implemented on the CYCLADES network and it was never interconnected with other networks (except for limited demonstrations using traditional telegraphic techniques). Louis Pouzin's ideas to facilitate large-scale internetworking caught the attention of ARPA researchers through the International Network Working Group (INWG), an informal group established by Steve Crocker, Pouzin, Davies, and Peter Kirstein in June 1972 in Paris, a few months before the International Conference on Computer Communication (ICCC) in Washington demonstrated the ARPANET. At the ICCC, Pouzin first presented his ideas on internetworking, and Vint Cerf was approved as INWG's Chair on Steve Crocker's recommendation. INWG grew to include other American researchers, members of the French CYCLADES and RCP projects, and the British teams working on the NPL network, EPSS and the proposed European Informatics Network (EIN), a datagram network. As had happened with Baran in the mid-1960s, AT&T declined when Roberts approached the company about taking over the ARPANET to offer a public packet-switched service. Bob Kahn joined the IPTO in late 1972. Although initially expecting to work in another field, he began work on satellite packet networks and ground-based radio packet networks, and recognized the value of being able to communicate across both. In spring 1973, Vint Cerf moved to Stanford University. With funding from DARPA, he began collaborating with Kahn on a new protocol to replace NCP and enable internetworking. Cerf built a research team at Stanford studying the use of fragmentable datagrams. Gérard Le Lann joined the team during 1973 and 1974, and Cerf incorporated his sliding window scheme into the research work. Also in the United States, Bob Metcalfe and others at Xerox PARC outlined the idea of Ethernet and the PARC Universal Packet (PUP) for internetworking. INWG met in Stanford in June 1973. Zimmermann and Metcalfe dominated the discussions. 
Notes from the meetings were recorded by Cerf and Alex McKenzie, from BBN, and published as numbered INWG Notes (some of which were also RFCs). Building on this, Kahn and Cerf presented a paper at a networking conference at the University of Sussex in England in September 1973. Their ideas were refined further in long discussions with Davies, Scantlebury, Pouzin and Zimmermann. Most of the work was done by Kahn and Cerf working closely together. Peter Kirstein put internetworking into practice at University College London (UCL) in June 1973, connecting the ARPANET to British academic networks, the first international heterogeneous computer network. By 1975, there were 40 British academic and research groups using the link. The seminal paper, A Protocol for Packet Network Intercommunication, published by Cerf and Kahn in 1974, addressed the fundamental challenges involved in interworking across datagram networks with different characteristics, including routing in interconnected networks, and packet fragmentation and reassembly. The paper drew upon and extended their prior research, developed in collaboration and competition with other American, British and French researchers. DARPA sponsored work to formulate the first version of the Transmission Control Program (TCP) later that year. At Stanford, its specification, RFC 675, was written in December by Cerf with Yogen Dalal and Carl Sunshine as a monolithic (single-layer) design. The following year, testing began through concurrent implementations at Stanford, BBN and University College London, but it was not installed on the ARPANET at this time. A protocol for internetworking was also being pursued by INWG. There were two competing proposals, one based on the early Transmission Control Program proposed by Cerf and Kahn (using fragmentable datagrams), and the other based on the CYCLADES transport protocol proposed by Pouzin, Zimmermann and Elie (using standard-sized datagrams). 
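The "fragmentable datagrams" at issue can be sketched as a simple split-and-reassemble scheme; the field names and MTU value below are invented for illustration and are not the actual TCP header layout.

```python
# Sketch of datagram fragmentation and reassembly, the kind of mechanism
# the 1974 Cerf-Kahn paper addressed for crossing networks whose maximum
# packet sizes (MTUs) differ.
import random

def fragment(payload, msg_id, mtu=8):
    """Split one message into fragments small enough for a network's MTU."""
    frags = []
    for offset in range(0, len(payload), mtu):
        chunk = payload[offset:offset + mtu]
        frags.append({
            "id": msg_id,                         # groups fragments of one message
            "offset": offset,                     # position within the message
            "more": offset + mtu < len(payload),  # more-fragments flag
            "data": chunk,
        })
    return frags

def reassemble(frags):
    """Rebuild the message from fragments arriving in any order."""
    frags = sorted(frags, key=lambda f: f["offset"])
    assert not frags[-1]["more"], "final fragment missing"
    return b"".join(f["data"] for f in frags)

pieces = fragment(b"internetworking across nets", msg_id=1)
random.shuffle(pieces)            # simulate reordering in transit
assert reassemble(pieces) == b"internetworking across nets"
```

The CYCLADES alternative sidestepped this machinery by fixing one standard datagram size across all participating networks, which is exactly the sticking point the compromise below had to resolve.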
A compromise was agreed and Cerf, McKenzie, Scantlebury and Zimmermann authored an "international" end-to-end protocol. It was presented to the CCITT by Derek Barber in 1975 but was adopted neither by the CCITT nor by the ARPANET.

The fourth biennial Data Communications Symposium later that year included presentations from Davies, Pouzin, Derek Barber, and Ira Cotten about the current state of packet-switched networking. The conference was covered by Computerworld magazine, which ran a story on the "battle for access standards" between datagrams and virtual circuits, as well as a piece reporting that the lack of standard access interfaces for emerging public packet-switched communication networks was creating "some kind of monster" for users.

At the conference, Pouzin said pressure from European PTTs forced the Canadian DATAPAC network to change from a datagram to a virtual circuit approach, although historians attribute this to IBM's rejection of their request for modification to its proprietary protocol. Pouzin was outspoken in his advocacy for datagrams and in his attacks on virtual circuits and monopolies. He spoke about the "political significance of the [datagram versus virtual circuit] controversy," which he saw as "initial ambushes in a power struggle between carriers and the computer industry. Everyone knows in the end, it means IBM vs. Telecommunications, through mercenaries."

After Larry Roberts and Barry Wessler left ARPA in 1973 to found Telenet, a commercial packet-switched network in the US, they joined the international effort to standardize a protocol for packet switching based on virtual circuits shortly before it was finalized. With contributions from the French, British, and Japanese PTTs, particularly the work of Rémi Després on RCP and TRANSPAC, along with concepts from DATAPAC in Canada and Telenet in the US, the X.25 standard was agreed by the CCITT in 1976. X.25 virtual circuits were easily marketed because they permit simple host protocol support.
They also satisfy the INWG expectation of 1972 that each subnetwork can exercise its own protection against congestion (a feature missing with datagrams).

Larry Roberts adopted X.25 on Telenet and found that "datagram packets are now more expensive than VC packets" in 1978. Vint Cerf said Roberts turned down his suggestion to use TCP when he built Telenet, saying that people would only buy virtual circuits and he could not sell datagrams. Roberts predicted that "As part of the continuing evolution of packet switching, controversial issues are sure to arise." Pouzin remarked that "the PTT's are just trying to drum up more business for themselves by forcing you to take more service than you need."

Common host protocol vs translating between protocols

Internetworking protocols were still in their infancy. Various groups, including ARPA researchers, the CYCLADES team, and others participating in INWG, were researching the issues involved, including the use of gateways to connect between two networks.

At the National Physical Laboratory in the UK, Davies' team studied the "basic dilemma" involved in interconnecting networks: a common host protocol would require restructuring existing networks that use different protocols. To explore this dilemma, the NPL network connected with the EIN by translating between two different host protocols, that is, using a gateway. Concurrently, the NPL connection to the EPSS used a common host protocol in both networks. NPL research confirmed that establishing a common host protocol would be more reliable and efficient.

The CYCLADES project, however, was shut down in the late 1970s for budgetary, political and industrial reasons, and Pouzin was "banished from the field he had inspired and helped to create".

DoD model vs X.25/X.75 vs proprietary standards

The design of the Transmission Control Program incorporated both connection-oriented links and datagram services between hosts.
A DARPA internetworking experiment in July 1977 linking the ARPANET, SATNET and PRNET demonstrated its viability. Subsequently, DARPA and collaborating researchers at Stanford, UCL and BBN, among others, began work on the Internet, publishing a series of Internet Experiment Notes.

Bob Kahn's efforts led to the absorption of MIT's proposal by Dave Clark and Dave Reed for a Data Stream Protocol (DSP) into version 3 of TCP in January 1978, written by Cerf, now at DARPA, and Jon Postel at the Information Sciences Institute of the University of Southern California (USC). Following discussions with Yogen Dalal and Bob Metcalfe at Xerox PARC, in version 4 of TCP, first drafted in September 1978, Postel split the Transmission Control Program into two distinct protocols: the Transmission Control Protocol (TCP) as a reliable connection-oriented service and the Internet Protocol (IP) as a connectionless service. For applications that did not want the services of TCP, an alternative called the User Datagram Protocol (UDP) was added in order to provide direct access to the basic service of IP.

Referred to as TCP/IP from December 1978, version 4 was made the standard for all military computer networking in March 1982. It was installed on SATNET and adopted by NORSAR/NDRE in March and by Peter Kirstein's group at UCL in November. On January 1, 1983, known as "flag day", TCP/IP was installed on the ARPANET. This resulted in a networking model that became known as the DoD internet architecture model (DoD model for short) or DARPA model. Leonard Kleinrock's theoretical work published in the mid-1970s on the performance of the ARPANET was referred to during the development of the protocol.

The Coloured Book protocols, developed by British Post Office Telecommunications and the academic community at UK universities, gained some acceptance internationally as the first complete X.25 standard.
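The TCP/UDP/IP split described above is still visible in modern socket APIs: TCP provides a reliable, connection-oriented byte stream, while UDP is a thin datagram wrapper over IP's basic service. A minimal present-day illustration using Python's standard library over the loopback interface (a sketch, not period code):

```python
import socket

# UDP: connectionless datagrams, the thin layer over IP's basic service.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))                 # OS picks a free port
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"datagram", rx.getsockname())  # no handshake, no delivery guarantee
data, _ = rx.recvfrom(1024)

# TCP: connection-oriented, reliable, ordered byte stream.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())            # three-way handshake happens here
conn, _ = srv.accept()
cli.sendall(b"stream")
stream = conn.recv(1024)

for s in (tx, rx, cli, conn, srv):
    s.close()
```

The contrast mirrors the 1978 design decision: an application that needs only IP's delivery semantics uses UDP and handles loss itself, while one that needs ordering and reliability lets TCP do that work.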
First defined in 1975, they gave the UK "several years lead over other countries" but were intended as "interim standards" until international agreement was reached.

The X.25 standard gained political support in European countries and from the European Economic Community (EEC). The EIN, which was based on datagrams, was replaced with Euronet, which used X.25.

Peter Kirstein wrote that European networks tended to be short-term projects with smaller numbers of computers and users. As a result, the European networking activities did not lead to any strong standards except X.25, which became the main European data protocol for fifteen to twenty years. Kirstein said his group at University College London was widely involved, partly because they were one of the groups with the most expertise, and partly to try to ensure that the British activities, such as the JANET NRS, did not diverge too far from the US.

The construction of public data networks based on the X.25 protocol suite continued through the 1980s; international examples included the International Packet Switched Service (IPSS) and the SITA network. Complemented by the X.75 standard, which enabled internetworking across national PTT networks in Europe and commercial networks in North America, this led to a global infrastructure for commercial data transport.

Computer manufacturers developed proprietary protocol suites such as IBM's Systems Network Architecture (SNA), Digital Equipment Corporation's (DEC's) DECnet, Xerox's Xerox Network Systems (XNS, based on PUP) and Burroughs' BNA. By the end of the 1970s, IBM's networking activities were, by some measures, two orders of magnitude larger in scale than the ARPANET. During the late 1970s and most of the 1980s, there remained a lack of open networking options.
Therefore, proprietary standards, particularly SNA and DECnet, as well as some variants of XNS (e.g., Novell NetWare and Banyan VINES), were commonly used on private networks, becoming somewhat "de facto" industry standards. Ethernet, promoted by DEC, Intel, and Xerox, outcompeted MAP/TOP, promoted by General Motors and Boeing. DEC was an exception among the computer manufacturers in supporting the peer-to-peer approach. In the US, the National Science Foundation (NSF), NASA, and the United States Department of Energy (DoE) all built networks variously based on the DoD model, DECnet, and IP over X.25.

Internet–OSI Standards War

The early research and development of standards for data networks and protocols culminated in the Internet–OSI Standards War in the 1980s and early 1990s. Engineers, organizations and nations became polarized over the issue of which standard would result in the best and most robust computer networks. Both standards are open and non-proprietary in addition to being incompatible, although "openness" may have worked against OSI while being successfully employed by Internet advocates.

OSI reference model

Researchers in the UK and elsewhere identified the need for defining higher-level protocols. The UK National Computing Centre publication "Why Distributed Computing", which was based on extensive research into future potential configurations for computer systems, resulted in the UK presenting the case for an international standards committee to cover this area at the ISO meeting in Sydney in March 1977.

Hubert Zimmermann, with Charles Bachman as chairman, played a key role in the development of the Open Systems Interconnection reference model. They considered it too early to define a set of binding standards while technology was still developing, since irreversible commitment to a particular standard might prove sub-optimal or constraining in the long run.
Although dominated by computer manufacturers, the committee had to contend with many competing priorities and interests. The rate of technological change made it necessary to define a model that new systems could converge to rather than standardizing procedures after the fact; the reverse of the traditional approach to developing standards. Although not a standard itself, it was an architectural framework that could accommodate existing and future standards.

Beginning in 1978, international work led to a draft proposal in 1980. In developing the proposal, there were clashes of opinion between computer manufacturers and PTTs, and of both against IBM. The final OSI model was published in 1984 by the International Organization for Standardization (ISO) in alliance with the International Telecommunication Union Telecommunication Standardization Sector (ITU-T), which was dominated by the PTTs.

The most fundamental idea of the OSI model was that of a "layered" architecture. The layering concept was simple in principle but very complex in practice. The OSI model redefined how engineers thought about network architectures.

Internet protocol suite

The DoD model and other existing protocols, such as X.25 and SNA, all quickly adopted a layered approach in the late 1970s. Although the OSI model shifted power away from the PTTs and IBM towards smaller manufacturers and users, the "strategic battle" remained the competition between the ITU's X.25 and proprietary standards, particularly SNA. Neither was fully OSI compliant. Proprietary protocols were based on closed standards and struggled to adopt layering, while X.25 was limited in terms of speed and the higher-level functionality that would become important for applications. As early as 1982, an RFC criticised "zealous" advocates of the OSI reference model and criticised the functionality of the X.25 protocol and its use as an "end-to-end" protocol "in the sense of a Transport or Host-to-Host protocol".
Vint Cerf formed the Internet Configuration Control Board (ICCB) in 1979 to oversee the network's architectural evolution and field technical questions. However, DARPA was still in control and, outside the nascent Internet community, TCP/IP was not even a candidate for universal adoption.

The implementation in 1985 of the Domain Name System proposed by Paul Mockapetris at USC, which enabled network growth by facilitating cross-network access, and the development of TCP congestion control by Van Jacobson in 1986–88, led to a complete protocol suite, as outlined in two RFCs in 1989. This laid the foundation for the growth of TCP/IP as a comprehensive protocol suite, which became known as the Internet protocol suite.

DARPA studied and implemented gateways, which helped to neutralize X.25 as a rival networking paradigm. The computer science historian Janet Abbate explained: "by running TCP/IP over X.25, [D]ARPA reduced the role of X.25 to providing a data conduit, while TCP took over responsibility for end-to-end control. X.25, which had been intended to provide a complete networking service, would now be merely a subsidiary component of [D]ARPA's own networking scheme. The OSI model reinforced this reinterpretation of X.25's role. Once the concept of a hierarchy of protocols had been accepted, and once TCP, IP, and X.25 had been assigned to different layers in this hierarchy, it became easier to think of them as complementary parts of a single system, and more difficult to view X.25 and the Internet protocols as distinct and competing systems."

The DoD reduced research funding for networks, responsibilities for governance shifted to the National Science Foundation, and the ARPANET was shut down in 1990.

Philosophical and cultural aspects

Historian Andrew L. Russell wrote that Internet engineers such as Danny Cohen and Jon Postel were accustomed to continual experimentation in a fluid organizational setting through which they developed TCP/IP.
They viewed OSI committees as overly bureaucratic and out of touch with existing networks and computers. This alienated the Internet community from the OSI model.

A dispute broke out within the Internet community after the Internet Architecture Board (IAB) proposed replacing the Internet Protocol in the Internet with the OSI Connectionless Network Protocol (CLNP). In response, Vint Cerf performed a striptease in a three-piece suit while presenting to the 1992 Internet Engineering Task Force (IETF) meeting, revealing a T-shirt emblazoned with "IP on Everything". According to Cerf, his intention was to reiterate that a goal of the IAB was to run IP on every underlying transmission medium. At the same meeting, David Clark summarized the IETF approach with the famous saying "We reject: kings, presidents, and voting. We believe in: rough consensus and running code." The Internet Society (ISOC) was chartered that year.

Cerf later said the social culture (group dynamics) that first evolved during the work on the ARPANET was as important as the technical developments in enabling the governance of the Internet to adapt to the scale and challenges involved as it grew. François Flückiger wrote that "firms that win the Internet market, like Cisco, are small. Simply, they possess the Internet culture, are interested in it and, notably, participate in IETF."

Furthermore, the Internet community was opposed to a homogeneous approach to networking, such as one based on a proprietary standard like SNA. They advocated for a pluralistic model of internetworking where many different network architectures could be joined into a network of networks.

Technical aspects

Russell notes that Cohen, Postel and others were frustrated with technical aspects of OSI. The model defined seven layers of computer communications, from physical media in layer 1 to applications in layer 7, which was more layers than the network engineering community had anticipated.
In 1987, Steve Crocker said that although they envisaged a hierarchy of protocols in the early 1970s, "If we had only consulted the ancient mystics, we would have seen immediately that seven layers were required." Some sources say this was an acknowledgement that the four layers of the Internet protocol suite were inadequate. Strict layering in OSI was viewed by Internet advocates as inefficient, and as disallowing the trade-offs ("layer violations") needed to improve performance. The OSI model allowed what some saw as too many transport protocols (five compared with two for TCP/IP). Furthermore, OSI allowed for both the datagram and the virtual circuit approach at the network layer, which are non-interoperable options.

By the early 1980s, the conference circuit had become more acrimonious. Carl Sunshine summarized in 1989: "In hindsight, much of the networking debate has resulted from differences in how to prioritize the basic network design goals such as accountability, reliability, robustness, autonomy, efficiency, and cost effectiveness. Higher priority on robustness and autonomy led to the DoD Internet design, while the PDNs have emphasized accountability and controllability." Richard des Jardins, an early contributor to the OSI reference model, captured the intensity of the rivalry in a 1992 article by saying "Let's continue to get the people of good will from both communities to work together to find the best solutions, whether they are two-letter words or three-letter words, and let's just line up the bigots against a wall and shoot them."

In 1996, an RFC describing the "Architectural Principles of the Internet" said that "in very general terms, the community believes that the goal is connectivity, the tool is the Internet Protocol, and the intelligence is end to end rather than hidden in the network."
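The layer counts at issue above can be made concrete. The mapping below is the conventional textbook correspondence between the four-layer Internet (DoD) model and the seven OSI layers; it is an approximation, since the two models were never defined in terms of each other:

```python
# Conventional (approximate) mapping between the four-layer Internet
# (DoD) model and the seven-layer OSI reference model.
OSI = ["Physical", "Data Link", "Network", "Transport",
       "Session", "Presentation", "Application"]           # OSI layers 1-7

INTERNET_TO_OSI = {
    "Link":        ["Physical", "Data Link"],
    "Internet":    ["Network"],                            # IP
    "Transport":   ["Transport"],                          # TCP, UDP
    "Application": ["Session", "Presentation", "Application"],
}

# Each OSI layer is covered exactly once by the four Internet layers.
covered = [layer for group in INTERNET_TO_OSI.values() for layer in group]
```

The disagreement was not merely about the count: the Internet suite folds OSI's session and presentation functions into applications, which is exactly the kind of collapsed layering that strict OSI advocates resisted.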
Practical and commercial aspects

Beginning in the early 1980s, DARPA pursued commercial partnerships with the telecommunication and computer industry which enabled the adoption of TCP/IP. In Europe, CERN purchased UNIX machines with TCP/IP for their intranet between 1984 and 1988.

Nonetheless, Paul Bryant, the UK representative on the European Academic and Research Network (EARN) Board of Directors, said "By the time JNT [the UK academic network JANET] came along [in 1984] we could demonstrate X25… and we firmly believed that BT [British Telecom] would provide us with the network infrastructure and we could do away with leased lines and experimental work. If we had gone with DARPA then we would not have expected to be able to use a public service. In retrospect the flaws in that argument are clear but not at the time. Although we were fairly proud of what we were doing, I don't think it was national pride or anti USA that drove us, it was a belief that we were doing the right thing. It was the latter that translated to religious dogma." JANET was a free X.25-based network for academic use, not research; experiments and other protocols were forbidden. The DARPA Internet was still a research project that did not allow commercial traffic or for-profit services.

The NSFNET initiated operations in 1986 using TCP/IP but, two years later, the US Department of Commerce mandated compliance with the OSI model and the Department of Defense planned to transition away from TCP/IP to OSI. Carl Sunshine wrote in 1989 that "by the mid-1980s ... serious performance problems were emerging [with TCP/IP], and it was beginning to look like the critics of 'stateless' datagram networking might have been right on some points".

The major European countries and the EEC endorsed OSI. They founded RARE and associated national network operators (such as DFN, SURFnet, SWITCH) to promote OSI protocols, and restricted funding for non-OSI-compliant protocols.
However, by 1988, the Internet community had defined the Simple Network Management Protocol (SNMP) to enable management of network devices (such as routers) on multi-vendor networks, and the Interop '88 trade show showcased new products for implementing networks based on TCP/IP. The same year, EUnet, the European UNIX Network, announced its conversion to Internet technology. By 1989, the OSI advocate Brian Carpenter made a speech at a technical conference entitled "Is OSI Too Late?" which received a standing ovation.

OSI was formally defined, but vendor products from computer manufacturers and network services from PTTs were still to be developed. TCP/IP, by comparison, was not an official standard (it was defined in unofficial RFCs), but UNIX workstations with both Ethernet and TCP/IP included had been available since 1983 and now served as a de facto interoperability standard. Carl Sunshine noted that "research is underway on how to optimize TCP/IP performance over variable delay and/or very-high-speed networks". However, Bob Metcalfe said "it has not been worth the ten years wait to get from TCP to TP4, but OSI is now inevitable" and Sunshine expected "OSI architecture and protocols ... will dominate in the future."

The following year, in 1990, Cerf said: "You can't pick up a trade press article anymore without discovering that somebody is doing something with TCP/IP, almost in spite of the fact that there has been this major effort to develop international standards through the international standards organization, the OSI protocol, which eventually will get there. It's just that they are taking a lot of time."

By the beginning of the 1990s, some smaller European countries had adopted TCP/IP. In February 1990, RARE stated "without putting into question its OSI policy, [RARE] recognizes the TCP/IP family of protocols as an open multivendor suite, well adapted to scientific and technical applications."
In the same month, CERN established a transatlantic TCP/IP link with Cornell University in the United States. Conversely, starting in August 1990, the NSFNET backbone supported the OSI CLNP in addition to TCP/IP. CLNP was demonstrated in production on NSFNET in April 1991, and OSI demonstrations, including interconnections between US and European sites, were planned at the Interop '91 conference in October that year.

At the Rutherford Appleton Laboratory (RAL) in the United Kingdom in January 1991, DECnet represented 75% of traffic, attributed to Ethernet between VAXes. IP was the second most popular set of protocols with 20% of traffic, attributed to UNIX machines for which "IP is the natural choice". Paul Bryant, Head of Communications and Small Systems at RAL, wrote "Experience has shown that IP systems are very easy to mount and use, in contrast to such systems as SNA and to a lesser extent X.25 and Coloured Books where the systems are rather more complex." The author continued "The principal network within the USA for academic traffic is now based on IP. IP has recently become popular within Europe for inter-site traffic and there are moves to try and coordinate this activity. With the emergence of such a large combined USA/Europe network there are great attractions for UK users to have good access to it. This can be achieved by gatewaying Coloured Book protocols to IP or by allowing IP to penetrate the UK. Gateways are well known to be a cause of loss of quality and frustration. Allowing IP to penetrate may well upset the networking strategy of the UK." Similar views were shared by others at the time, including Louis Pouzin.

At CERN, Flückiger reflected "The technology is simple, efficient, is integrated into UNIX-type operating systems and costs nothing for the users' computers. The first companies that commercialize routers, such as Cisco, seem healthy and supply good products.
Above all, the technology used for local campus networks and research centres can also be used to interconnect remote centers in a simple way."

Beginning in March 1991, the JANET IP Service (JIPS) was set up as a pilot project to host IP traffic on the existing network. Within eight months, the IP traffic had exceeded the levels of X.25 traffic, and the IP support became official in November. Also in 1991, Dai Davies introduced Internet technology over X.25 into the pan-European NREN, EuropaNet, although he experienced personal opposition to this approach. EARN and RARE adopted IP around the same time, and the European Internet backbone EBONE became operational in 1992. OSI usage on the NSFNET remained low when compared to TCP/IP. In the UK, the JANET community talked about a transition to OSI protocols, which was to begin with moving to X.400 mail as the first step, but this never happened. The X.25 service was closed in August 1997.

Mail was commonly delivered via the Unix to Unix Copy Program (UUCP) in the 1980s, which was well suited for handling message transfers between machines that were intermittently connected. The Government Open Systems Interconnection Profile (GOSIP), developed in the late 1980s and early 1990s, would have led to X.400 adoption. Proprietary commercial systems offered an alternative. In practice, use of the Internet suite of email protocols (SMTP, POP and IMAP) grew rapidly.

The invention of the World Wide Web in 1989 by Tim Berners-Lee at CERN, as an application on the Internet, brought many social and commercial uses to what was previously a network of networks for academic and research institutions. The Web began to enter everyday use in 1993–94.

The US National Institute of Standards and Technology proposed in 1994 that GOSIP should incorporate TCP/IP and drop the requirement for compliance with OSI, which was adopted into Federal Information Processing Standards the following year.
NSFNET had altered its policies to allow commercial traffic in 1991, and was shut down in 1995, removing the last restrictions on the use of the Internet to carry commercial traffic. Subsequently, the Internet backbone was provided by commercial Internet service providers and Internet connectivity became ubiquitous.

Legacy

As the Internet evolved and expanded exponentially, an enhanced protocol, IPv6, was developed to address IPv4 address exhaustion. In the 21st century, the Internet of things is leading to the connection of new types of devices to the Internet, bringing reality to Cerf's vision of "IP on Everything". Nonetheless, shortcomings exist with today's Internet; for example, insufficient support for multihoming. Alternatives have been proposed, such as Recursive Network Architecture and Recursive InterNetwork Architecture.

The seven-layer OSI model is still used as a reference for teaching and documentation; however, the OSI protocols conceived for the model did not gain popularity. Some engineers argue the OSI reference model is still relevant to cloud computing. Others say the original OSI model doesn't fit today's networking protocols and have suggested instead a simplified approach. Other standards such as X.25 and SNA remain niche players.

Historiography

Katie Hafner and Matthew Lyon published one of the earliest in-depth and comprehensive histories of the ARPANET and how it led to the Internet. Where Wizards Stay Up Late: The Origins of the Internet (1996) explores the "human dimension" of the development of the ARPANET, covering the "theorists, computer programmers, electronic engineers, and computer gurus who had the foresight and determination to pursue their ideas and affect the future of technology and society". Roy Rosenzweig suggested in 1998 that no single account of the history of the Internet is sufficient and that a more adequate history will need to be written that includes aspects of many books.
Janet Abbate's 1999 book Inventing the Internet was widely reviewed as an important work on the history of computing and networking, particularly in highlighting the role of social dynamics and of non-American participation in early networking development. The book was also praised for its use of archival resources to tell the history. She has since written about the need for historians to be aware of the perspectives they take in writing about the history of the Internet, and has explored the implications of defining the Internet in terms of "technology, use and local experience" rather than through the lens of the spread of technologies from the United States.

In his many publications on the "histories of networking", Andrew L. Russell argues scholars could and should look differently at the history of the Internet. His work shifts scholarly and popular understanding about the origins of the Internet and contemporary work in Europe that both competed and cooperated with the push for TCP/IP.

James Pelkey conducted interviews with Internet pioneers in the late 1980s and completed his book with Andrew Russell in 2022. Martin Campbell-Kelly and Valérie Schafer have focused on British and French contributions as well as global and international considerations in the development of packet switching, internetworking and the Internet.

See also

History of the Internet
History of email
History of the World Wide Web
List of Internet pioneers
Protocol Wars
https://en.wikipedia.org/wiki/Bromperidol%20decanoate
Bromperidol decanoate, sold under the brand names Bromidol Depot, Bromodol Decanoato, and Impromen Decanoas, is an antipsychotic which has been marketed in Europe and Latin America. It is an antipsychotic ester and a long-acting prodrug of bromperidol which is administered by depot intramuscular injection once every 4 weeks.

See also

List of antipsychotics § Antipsychotic esters
Bromperidol decanoate
https://en.wikipedia.org/wiki/Behavioural%20archaeology
Behavioural archaeology is an archaeological theory that expands upon the nature and aims of archaeology in regard to human behaviour and material culture. The theory was first published in 1975 by American archaeologist Michael B. Schiffer and his colleagues J. Jefferson Reid and William L. Rathje. The theory proposes four strategies that answer questions about past and present cultural behaviour. It is also a means for archaeologists to observe human behaviour and the archaeological consequences that follow.

The theory was developed as a reaction to changes in archaeological thought and expanding archaeological practice during the mid-to-late 20th century. It reacted to the increasing number of sub-disciplines emerging within archaeology, as each came with its own unique methodologies. The theory was also a reaction to the processual thought process that had emerged within the discipline some years prior.

In recent years the use of behavioural archaeology has been regarded as a significant contribution to the archaeological community. The strategies outlined by Schiffer and his colleagues have developed into sub-disciplines or methodologies that are used and well regarded in contemporary archaeological practice. Behavioural archaeology has had a positive effect on the methods archaeologists use to reconstruct human behaviour.

Background

"Behavioural Archaeology" was first published by Michael B. Schiffer, J. Jefferson Reid, and William L. Rathje in 1975 in the journal American Anthropologist. Leading up to the publication, archaeology as a discipline was expanding in its practice and theory due to the specialisation of various areas and new ideas that were being presented to the community. Archaeology was beginning to break up into various sub-disciplines such as ethnoarchaeology, experimental archaeology, and industrial archaeology.
Furthermore, Michael Schiffer challenged notions of processual archaeology (or "New Archaeology"), which had been introduced to the discipline earlier. The paper aimed to address the gaps within the processualist tradition and improve on ideas presented in processual archaeology, particularly those of James N. Hill and William A. Longacre. Rather than producing a paradigm shift toward a new standard way of thinking within archaeology, behavioural archaeology became one of many ideas within a vast and expanding theoretical landscape.

Through behavioural archaeology, Michael Schiffer and his colleagues explain the aims and nature of archaeology in relation to the new theories and forms of archaeology that were emerging at the time. They show that the fundamental concepts of archaeology can be represented as the relations between material culture and human behaviour. By examining these relationships and asking questions about them, archaeologists can answer questions about human behavioural change in the past, present, and future.

Theory

The theory of behavioural archaeology outlines four strategies by which human behaviour and material culture can be examined in order to answer questions associated with archaeological inquiry. Behavioural archaeology also defines archaeology as a discipline that transcends time and space, as it is the study not only of the past but also of the present and future. It distinguishes between systemic and archaeological contexts and examines how the archaeological record can be distorted through cultural and non-cultural transformation processes.

Michael Schiffer stresses the importance of analysing the formation processes at various sites. This allows archaeologists to discern the most appropriate line of questioning regarding the material culture and how it relates to human behaviour.
Strategies Strategy 1 Strategy 1 as outlined by Michael Schiffer and his colleagues examines how material culture from a past society or cultural group can be used to answer questions about past behaviour. These questions can include ones that involve the population of specific peoples, the occupation of a certain site, or the resources that were used by humans at a certain location. For example, when studying changes in the technology of past societies, inferences regarding changes in the diet of individuals can be made. Strategy 2 Strategy 2 looks at how present material culture can provide archaeologists with information regarding past human behaviour. Questions within this strategy become experimentally charged as they are not confined to a specific time. Due to the nature of this questioning, this strategy relates to the sub-disciplines of experimental archaeology and ethnoarchaeology. At the time this theory was developed, experimental archaeology was still being tested. In the 21st century, however, experimental archaeology has undergone further testing and is seen as a useful means of enquiry about the past within archaeological practice. It is often used to recreate the practices and technologies of past societies in order to understand how they operated and the strategic decisions they made. Strategy 3 Strategy 3 concerns itself with studying past material culture in order to answer questions about present human behaviour. Questions include how humans adapt to population changes, for example through storage facilities and societal organisation. The past is often seen as separate from the present; however, Michael Schiffer challenges this by examining how ancient cultures are relevant to modern social problems and issues. This theme of social relevance to contemporary society is inspired by the writings of Paul S. Martin. 
Most notably, Martin is credited with the theory known as the 'overkill hypothesis', which holds that humans led to the rapid extinction of prehistoric animals. Although this theory is considered controversial, it can be seen as an example of how humans adapt to a rising population, a situation that plagues modern society. This strategy can be seen today through the archaeological practice of ethnoarchaeology. Strategy 4 Strategy 4 examines present-day material culture to understand contemporary human behaviour. This strategy seeks to ask specific questions about ongoing societies, such as the consumption of goods by certain groups of people. The strategy can be applied to both industrial and non-industrial societies, but is particularly useful for industrial ones. Additionally, by studying present material culture, archaeologists may also be able to look into future human behaviour. Strategy 4 is able to explain many modern behavioural patterns and also promotes the relevance of archaeology in a 21st-century society. Debates With the introduction of a new theory within the archaeological community comes a series of debates around how its ideas should be interpreted. Michael Schiffer and his colleagues initially believed that behavioural archaeology would become a unifying principle for archaeological practice. However, it has become one of many theories within archaeology. Behavioural archaeology has often been compared to other theories such as processual and evolutionary archaeology, as it reacts to ideas within these theories and is often compared to them when analysed in practice. In this sense not all archaeologists believe it is a revolutionary practice, and many believe that, like other archaeological theories, it should be used in conjunction with others when practising archaeology. In 2010 the Society for American Archaeology held a forum concerning 'Assessing Michael B. 
Schiffer and his Behavioral Archaeology'. At this forum, researchers such as Michael J. O'Brien, Alexander Bentley, Robert L. Kelly, Linda S. Cordell, Stephen Plog, and Diane Gifford-Gonzales discussed and raised issues about behavioural archaeology. In 2011 Michael Schiffer responded to these issues after they were published, clarifying and addressing the points raised. Applications in archaeology Behavioural archaeology can be applied in many different contexts and situations in archaeological practice. It encourages archaeologists to examine ideas that may not be concrete, such as belief systems, gender relations, or power relations. When these ideas are studied in conjunction with material culture, human behaviour and experience within various societies are revealed. For example, when examining changes in technology in the archaeological record, inferences can be made surrounding diet and environmental and social factors within human society. Strategies 2 and 4 in particular have significant applications within modern archaeology, although Strategies 1 and 3 are also generally applied. Strategy 2 Strategy 2, also known as experimental archaeology, has developed into a sub-discipline within archaeological practice. Experimental archaeology allows an assumption about what occurred in the past to become an inference of what may have actually occurred. Although this concept is not a new idea in archaeological thought, since Michael Schiffer's 1975 paper experimental archaeology has increasingly become an important sub-discipline within archaeology. Schiffer himself conducted research in 1987 and 1990 on the properties of ceramics in order to understand the decisions of craftsmen when creating these objects. Experimental archaeology surrounding ceramics can involve recreating furnaces and vessels in order to see how craftsmen made decisions surrounding the manufacture of ceramic products. 
Experiments such as this allow archaeologists to have a greater understanding of past human behaviour. Strategy 4 Strategy 4 is being used in practice today, particularly in America by William Rathje, one of the original authors of the theory. In the 1970s Rathje began the Garbage Project in Tucson, Arizona. In this project Rathje and his students examined the waste of Tucson locals in order to answer questions regarding human consumption and the decomposition of waste. Through this, they were able to examine human behaviour and make comparisons between what people claim their behaviour is and their actual consumption behaviour. For example, individuals claimed they drank less beer when they were actually consuming more of the substance. This analysis of human behaviour and consumption is useful when examining consumption in industrial societies and predicting future consumption behaviours. Pompeii Premise The 'Pompeii Premise' is an idea first proposed by Robert Ascher in 1961: that the remains an archaeologist uncovers represent a group of people frozen at a certain point in time, and that inferences can only be made by the archaeologist when a site has assemblages like those at Pompeii. However, rather than a 'preserved past', the archaeological record is a combination of material culture deposited at various points in time. Lewis Binford suggests using the methods of behavioural archaeology in order to avoid viewing material culture in this stagnant way. One such method is understanding the formation processes and context surrounding the creation of the archaeological record. In this respect, it is important for the archaeologist to remember the difference between the archaeological context and the systemic context of the archaeological record. 
In this way, cultural and non-cultural transformation processes can be determined, aiding the archaeologist in determining whether there is any distortion of context within the record. Within cultural transformation processes, human behaviour can be determined as it directly affects the formation of the material culture at a site. Behavioural archaeology and memory The concept of memory is pivotal in archaeology. It is through memory itself that an artifact can be understood. Laurent Olivier wrote "[t]he subject of archaeology is nothing other than this imprint of the past inscribed in matter." If that is all archaeology is, then the goal becomes how to properly find, and later portray, this particular "imprint" so that everyone can know about the artifact. In behavioural archaeology, the imprint in question is how one or more humans reacted to and interacted with the artifact being analyzed. Olivier also states that "[f]undamentally, [archaeology] is an investigation into archives of memory, which is what remains are." Behavioural archaeology takes the remains found by individuals and further analyzes their meanings and the possible meanings they held for the humans who interacted with them. For example, in Bonna D. Wescoat's book, lamps found in different archaeological excavation sites "have been taken to confirm nocturnal timing". There was much discussion and deliberation before the academic community as a whole agreed that what was found was a lamp and that its function was to act as a light-bringer during the night. As such, some artifacts hold a singular, clear meaning, while others found in excavation may hold multiple uses or were used in ways that the excavators cannot fathom, as they were not there in the time when the artifact held relevance. Memory should always be used in conjunction with behavioural archaeology, for memory dictates how an object is seen. 
Contributions to archaeology The introduction of behavioural archaeology in 1975, followed by the work of Michael Schiffer and his students, has been seen as a significant contribution to the field of archaeology. All four strategies have been significant in expanding the thought process surrounding material culture and human behaviour in various contexts. Furthermore, due to its significance, behavioural archaeology is often used alongside other archaeological schools of thought when analysing the archaeological record. The act of looking at the relationships between material culture and human behaviour is in itself a significant thought process. In 2010 the Society for American Archaeology held a forum in which archaeologists significant to the American archaeological community discussed the contributions of Michael Schiffer and behavioural archaeology. Behavioural archaeology is significant as it explores concepts that allow the archaeological record to be characterised in terms of context and formation processes. This allows archaeologists to understand variations between different contexts in order to answer questions of inquiry. It has also made contributions to archaeology as it looks at the creation of the archaeological record over time. This emphasises the fundamental idea of understanding a variety of contexts when examining material culture. This is an idea that was overlooked by processual thinking, as processualism did not define specific contexts. Behavioural archaeology fills this gap in order to allow a more thorough understanding of the archaeological record. Behavioural archaeology supports the idea that the scientific process is a fundamental part of archaeological practice. This comes as a reaction to the introduction of postmodern ideas to archaeology and archaeological thought. 
As the idea of forming a narrative from the archaeological record became common, behavioural archaeology stressed the importance of using the scientific process in order to construct a sound analysis. Additionally, it is significant to archaeology as it places importance on creating principles and establishing relationships between human behaviour and material culture. This process is vital to archaeological practice as it allows archaeologists to identify patterns within material culture and examine the archaeological record across cultures. Overall, behavioural archaeology challenges archaeologists to reconsider how they conduct archaeological practice and how they think about the nature and aims of archaeology. References Archaeological theory Human behavior
Behavioural archaeology
[ "Biology" ]
2,777
[ "Behavior", "Human behavior" ]
63,035,486
https://en.wikipedia.org/wiki/PSR%20J1141%E2%88%926545
PSR J1141−6545 is a pulsar in the constellation of Musca (the fly). Located at 11h 41m 07.02s −65° 45′ 19.1″, it is a binary pair composed of a white dwarf star orbiting a pulsar. Because of this unusual configuration and the close proximity of the two stars, it has been used to test several predictions of Einstein's theories. PSR J1141−6545 is notable because it has shown several relativistic predictions to have real-world results. The system is emitting gravitational waves, and the process of time dilation appears to be affecting the orbit of the white dwarf. In January 2020 it was announced that the stars were also showing the Lense–Thirring effect, whereby a rotating mass drags the surrounding spacetime with it. See also Hulse–Taylor pulsar PSR J0737−3039 References Binary stars Pulsars Musca White dwarfs
PSR J1141−6545
[ "Astronomy" ]
204
[ "Musca", "Constellations" ]
63,035,870
https://en.wikipedia.org/wiki/Prime%20Suspects%3A%20The%20Anatomy%20of%20Integers%20and%20Permutations
Prime Suspects: The Anatomy of Integers and Permutations is a graphic novel by Andrew Granville, Jennifer Granville, and Robert J. Lewis, released on August 6, 2019 and published by Princeton University Press. Plot Prime Suspects: The Anatomy of Integers and Permutations is a unique graphic novel that blurs the boundaries between pure visual art, deep mathematics, film noir and police procedurals whilst exploring the nature of scientific research and the role of women in mathematics, and paying homage to the titans of mathematical history. Reception Judith Reveal of the New York Journal of Books said that "Prime Suspects will appeal to... the mathematician who eats, sleeps, and drinks numbers, start on page one and just enjoy the story... the book is fun, and interesting, and a challenge on many levels." Benjamin Linowitz of MAA Reviews stated that "... It's very difficult to write a book on an advanced topic in mathematics that's accessible to math students and enthusiasts yet touches on contemporary research that is of interest to a broad swath of practicing mathematicians. Prime Suspects is such a book. And it's entertaining to boot. I recommend it in the strongest terms." Paolo Mancosu of the Journal of Humanistic Mathematics said the book "does a terrific job at presenting readers with a fascinating and realistic picture of how mathematical research is conducted. It does so in a deep way and yet with a light hand without falling into the trap of transforming the novel into a lecture on advanced mathematics or on methodology. Both the story and the illustrations are a delight." 
References External links Prime Suspects documentary videos by Tommy Britt Prime Suspects trailer by Hasan Abdulla 90.5 WICN Public Radio podcast interview with Andrew and Jennifer Granville Reviews New York Journal Of Books by Judith Reveal, European Mathematical Society review by Adhemar Bultheel Rogues Portal review by Anelise Farris Sondrabooks review by Sondra Eklund Mathematical Association Of America review by Benjamin Linowitz Mathematical Fiction review by Alex Kasman MathemAfrica.org review by Jonathan Shock Journal Of Humanistic Mathematics review by Paolo Mancosu 2019 comics debuts 2019 graphic novels Mathematics fiction books Princeton University Press books
Prime Suspects: The Anatomy of Integers and Permutations
[ "Mathematics" ]
445
[ "Recreational mathematics", "Mathematics fiction books" ]
63,036,352
https://en.wikipedia.org/wiki/American%20eccentric%20cinema
American eccentric cinema is a mode of contemporary American filmmaking that emerged in what has been termed the metamodern or new sincerity. Its attachment to indie cinema has led some to consider it a movement and genre of cinema in the United States. Its key filmmakers, including Wes Anderson, Charlie Kaufman, and Spike Jonze, are at times referred to as the "American Eccentrics". It occurred during the 1990s and 2000s, when indie directors sought to create films that diverged from the style and content of Hollywood franchise films. American eccentric cinema came in opposition to the mainstream ideas of formulaic narratives and the digitisation within films and new technologies that came about during the time period. American eccentric cinema is marked by films that are "deeply concerned with ethics and morality, the obligations of the individual, the effects of family breakdown, and social alienation." Background American eccentric cinema was critically conceived in response to traditional Hollywood and films of popular culture, which often had clear, predictable characters and narratives. American eccentric cinema has been framed as influenced by the New Hollywood era. Both traditions have similar themes and narratives of existentialism and the need for human interaction. New Hollywood focuses on the darker elements of humanity and society within the context of the American Dream from the mid-1960s to the early 1980s, with themes that were reflective of sociocultural issues and were centered on the potential meaninglessness of pursuing the American Dream as generation upon generation was motivated to possess it. In comparison, American eccentric cinema does not have a distinct context; its films show characters who are very individual, and their concerns are very distinct to their own personalities. American eccentric cinema is considered a shift in contemporary American cinema in the 1990s and 2000s driven by metamodern philosophical and moral beliefs. 
Far Out Magazine critic Swapnil Dhruv Bose writes that "As a response to the suffocating excesses of the mainstream, many directors sought to examine the alienation imposed by modernity through fresh perspectives and unconventional methods. Although the creative consciousnesses of the artists vary to a great extent, their works have been collectively labelled as the 'American Eccentric Cinema' movement". American eccentric cinema is also known as "American smart cinema", a tradition delineated by Jeffrey Sconce and Claire Perkins. Both film traditions consist mostly of American indie films from the 1990s and 2000s and have similar aesthetic strategies, particularly a focus on irony. However, as Kim Wilkins notes, despite the crossovers between the two forms of cinema, American eccentricity "uses irony not primarily for its tonal qualities but, rather, for dramatic and thematic functions". At a period of American history when postwar communities were growing older, many ideals were being re-evaluated and looked at critically in American eccentric cinema. The genre's films look at emotions and where they come from, as well as expectations for what a happy life should be. American eccentric cinema sees happiness as not necessarily residing in a domestic life, in a marriage or family. The filmmakers of the genre were influenced by their own lives, where their perception of the domestic world could include feelings and emotions of "abandonment, alienation and frustration". The film tradition also takes influences from postmodernism through the movement's attitudes of irony and sensations of "detachment" from society. But while filmmakers of American eccentric cinema position themselves in a critical manner, they also strive to create significant and unique art forms through the various techniques and film features employed. 
Films of American eccentric cinema were also made before, during and after the September 11 attacks, when social desires were primarily concerned with being safe and secure. At this time the rights and principles of American liberalism were being challenged and ideas of existentialism were common within art. Characteristics While American eccentric cinema films have distinct individual and stylistic visions, they share common themes and textual practices. American eccentric cinema is concerned with an individual's internal dilemmas and existentialism as a human being, regardless of the context. The film techniques of the genre use aspects of mainstream cinema but alter the mainstream conventions slightly through characterisation, tone and style. American eccentric cinema is also known for its use of intertextual references, quotations and irony. By doing this, an audience's expectation of what may happen is subverted. Filmmakers go into the depth of a protagonist's journey of finding their sense of self as the narrative. American eccentric cinema falls within independent cinema culture. Independent cinema comprises films whose conventions oppose Hollywood mainstream sensibilities, with characteristics such as no "forward-moving narrative drive", where the structure is not as ordered or bound by a sense of needed fast pace. Characters in eccentric cinema diverge from those in mainstream Hollywood, which are comprehensible, with journeys having a distinct beginning, complicated middle and happy ending. "Indie" characters, as well as American eccentric cinema characters, do not necessarily have goals to achieve in the films, or feel defined by them, and a sense of strength of morality may not be as present. This type of cinema has been called "quirky", "cute" and "smart". There are many alternate methods of exploring romantic love and sexuality within American eccentric cinema. The films explore gender roles as changing and often take a postfeminist stance. 
Characters often challenge and explore the expectations of marriage prior to the 1990s within the narrative, as well as the complexities of sex and how society views it. Most characters are heterosexual, and the complications of love are dealt with from the man-woman relationship perspective. However, director Todd Haynes, whom Swapnil Dhruv Bose labels a pioneer of the American eccentric movement, comes as an exception, as he explores LGBT relationships in films that were part of the beginning of the new queer cinema movement. In his 1995 acclaimed drama, Safe, he commented on self-help culture as a metaphor for the AIDS crisis in the late 1980s. Race is not explored as prevalently within American eccentric films. According to Jesse Fox Mayshark, portrayals of characters of a different race, that is, not white, are categorised within a "comic ethnic type". Mayshark perceives this lack of diversity as a direct correlation with the genre's directors being primarily white Americans, who may think of "other races and cultures" as only outsiders, alien in their comedic nature. Kim Wilkins states that to date the politics and style of American eccentric cinema have been informed by the overwhelmingly white male middle-upper demographic of its key filmmakers. She writes "The focus in American eccentric films (like those in the 'smart' tendency) on 'white male urban sophisticates' situates them as a form of 'men's cinema', in Stella Bruzzi's terms. While neither existential anxiety nor irony is, in reality, the sovereign domain of white men, their cinematic articulation in the key films of the American eccentric mode, such as P.T. Anderson's Magnolia or Punch Drunk Love, David O. Russell's I Heart Huckabees or films by Wes Anderson or Charlie Kaufman, has largely evolved as a form 'grounded in the relationship between, masculinity—its ideology as well as its representation—and aesthetics.' 
Indeed, many of these films position masculinity as bound to the inability to directly articulate anxiety. Thus, the use of irony in these films—both by characters and through aesthetic and formal strategies—is conveyed as a particularly masculine strategy; a means by which 'ugly' feelings can be repackaged as intellectual gameplay while simultaneously begging to be recognized for what it truly is." She goes on to note that the focus on white, urban, heterosexual men in American eccentric cinema adds to its relationship with neoliberalism: "It cannot be ignored that the protagonists of American Eccentric films are not only male but, on the whole, tend [to] come from socioeconomically privileged backgrounds. The socioeconomic (and gendered) status of these characters situates them as those most likely to succeed within capitalist systems. Unlike indie films within realist modes, such as the neorealist works of Kelly Reichardt or Sean Baker, American Eccentric Cinema does not tend to portray characters at crossroads where decisions made or changing circumstances have the capacity to fundamentally affect their livelihoods, safety, or personal agency. Often the absurd narrative goals of characters are only possible within these films because these characters are not beholden to the financial imperatives that drive more naturalistic characters toward what may be considered more realistic goals." Politics are explored within the films through the characters and their journeys. Rather than preaching political messages and creating controversial debates about political issues, the films create subtle means to explore politics. Major events such as the September 11 attacks meant that the sense of American uncertainty that was pervading the nation was reflected in themes such as self-doubt and insecurity within the characters. 
Interpretations on defining the genre Scholars Kim Wilkins and Jesse Fox Mayshark have written extensively on American eccentric cinema and its place within film genre in their books American Eccentric Cinema and Post-Pop Cinema: The Search for Meaning in New American Film. Wilkins, a film scholar at the University of Oslo, maintains that American eccentricity is a mode rather than a genre. She demarcates five criteria for the American eccentric mode: "1: The presence of allusion, parody, and intertextuality formally (in terms of genre and meta-cinematic depiction) and playfulness/cinephilia; 2: Sincere thematic underpinnings that are presented at a distance due to the film's perceived 'quirkiness', amusing occurrences, and/or absurd aesthetic; 3: A form of ironic expression that is both reflexive and sincere; 4: Characters and cinematic worlds that are designed to encourage audience alignment despite being clearly constructed; and, above all, 5: Effective and intellectual engagement with an experience of existential anxiety". Wilkins analyses these textual tactics through four analytic lenses: genre (with a focus on the road film), characterization, hyper-dialogue, and eccentric worlds. Although Wilkins states that American eccentricity is not an auteurist demarcation, she pays particular attention to the films of Spike Jonze and Wes Anderson. Wilkins explains that there is a distinct relationship between neoliberalism and American eccentric cinema. Neoliberalism is a set of principles proposing "that human well-being can be best advanced by liberating entrepreneurial freedoms and skills". In a neoliberal world, the person will constantly be shifting and altering facets of their life, such as expertise and abilities and even their own sense of self, to keep up with what is happening within the economy. 
Within American eccentric films, this idea of neoliberalism aligns with the films' desire to portray individuals as their own selves rather than purely as "national or community citizens". Wilkins states that because of this, and because such individuals do not belong to a community, a foundation exists for many of the existential tensions and anxieties explored in the films. Thus, she concludes that American eccentric cinema responds to neoliberalism, as well as to the existential concerns that were present during the New Hollywood era, through means that are presented as ahistoric and primarily concerned with the characters' own experiences rather than broader socio-cultural or political concerns. Mayshark, an editor at the New York Times, writes on American eccentric cinema filmmakers and analyses specific films within the genre. Mayshark says that the group of filmmakers were not explicitly categorised within any genre at the beginning of the movement because their films were extremely niche and individual, with varying styles and conventions. Their work defied convention and saw a new-found exploration of dark human themes through being idiosyncratic and individual. They wanted the audience to feel like they were a part of the stories and have a "transcendent connection". As technologies emerge, so too have discussions surrounding the expansion of the independent cinema genre and, subsequently, American eccentric cinema. In a 1999 keynote address at the Independent Spirit Awards in California, screenwriter and film producer James Schamus "voiced the common concern that" commercial and major studios' "empires would ultimately threaten the existence" of independent cinema. "In Schamus' evaluation, independent film is...in decline"; however, other commentators see evolution and cultural "transition...to give way to new and different possibilities." List of notable films List of notable figures Filmmakers: Wes Anderson, Paul Thomas Anderson, Richard Linklater, Todd Haynes, David O. Russell, Charlie Kaufman, Spike Jonze, Michel Gondry, David Fincher, Sofia Coppola, Richard Kelly, Neil LaBute, Todd Solondz, Noah Baumbach, Alexander Payne, Peter Berg, Hal Hartley, Ang Lee, John Herzfeld, Whit Stillman, Miranda July, Lena Dunham, Greta Gerwig, Alex Ross Perry, Mike Mills, Jared Hess, Jason Reitman, Kevin Smith. Actors: Ben Stiller, Mark Wahlberg, Bill Murray, Julianne Moore, Jason Schwartzman, Owen Wilson, Luke Wilson, Elliot Page, Michael Cera, Zooey Deschanel, Frances McDormand, Tilda Swinton, Angelina Jolie, Anjelica Huston, Adrien Brody, Adam Sandler, Scarlett Johansson. Legacy Rushmore, Slacker and Clerks were each inducted into the National Film Registry. Three films by Wes Anderson (The Grand Budapest Hotel, The Royal Tenenbaums and Moonrise Kingdom), alongside Before Sunset, Lost in Translation, Eternal Sunshine of the Spotless Mind, Far From Heaven and Synecdoche, New York, were listed on the BBC's 100 Greatest Films of the 21st Century. See also Alternative rock Frat Pack Generation X Indiewood Music videos Slow cinema Toronto New Wave Transnational cinema References Film genres 1990s in film 2000s in film 2010s in film Film genres particular to the United States Postmodern art Eccentricity (behavior)
American eccentric cinema
[ "Biology" ]
2,774
[ "Behavior", "Human behavior", "Eccentricity (behavior)" ]
63,036,934
https://en.wikipedia.org/wiki/Reverse%20Mathematics%3A%20Proofs%20from%20the%20Inside%20Out
Reverse Mathematics: Proofs from the Inside Out is a book by John Stillwell on reverse mathematics, the process of examining proofs in mathematics to determine which axioms are required by the proof. It was published in 2018 by the Princeton University Press. Topics The book begins with a historical overview of the long struggles with the parallel postulate in Euclidean geometry, and of the foundational crisis of the late 19th and early 20th centuries. Then, after reviewing background material in real analysis and computability theory, the book concentrates on the reverse mathematics of theorems in real analysis, including the Bolzano–Weierstrass theorem, the Heine–Borel theorem, the intermediate value theorem and extreme value theorem, the Heine–Cantor theorem on uniform continuity, the Hahn–Banach theorem, and the Riemann mapping theorem. These theorems are analyzed with respect to three of the "big five" subsystems of second-order arithmetic, namely arithmetical comprehension, recursive comprehension, and weak Kőnig's lemma. Audience The book is aimed at a "general mathematical audience", including undergraduate mathematics students with an introductory-level background in real analysis. It is intended both to excite mathematicians, physicists, and computer scientists about the foundational issues in their fields, and to provide an accessible introduction to the subject. However, it is not a textbook; for instance, it has no exercises. One theme of the book is that many theorems in this area require axioms in second-order arithmetic that encompass infinite processes and uncomputable functions. Reception and related reading Jeffry Hirst criticizes the book, writing that "if one is not too obsessive about the details, Proofs from the Inside Out is an interesting introduction," while finding details that he would prefer to be handled differently, in a topic for which details are important. 
In particular, in this area, there are multiple choices for how to build up the arithmetic on real numbers from simpler data types such as the natural numbers, and while Stillwell discusses three of them (decimal numerals, Dedekind cuts, and nested intervals), converting between them itself requires nontrivial axiomatic assumptions. However, James Case calls the book "very readable", and Roman Kossak calls it "a stellar example of expository writing on mathematics". Several other reviewers agree that this book could be helpful as a non-technical way to create interest in this topic in mathematicians who are not already familiar with it, and lead them to more in-depth material in this area. As additional reading on reverse mathematics in combinatorics, Hirst suggests Slicing the Truth by Denis Hirschfeldt. Another book suggested by reviewer Reinhard Kahle is Stephen G. Simpson's Subsystems of Second Order Arithmetic. References Mathematical logic Proof theory Computability theory Real analysis Mathematics books 2018 non-fiction books Princeton University Press books
Reverse Mathematics: Proofs from the Inside Out
[ "Mathematics" ]
599
[ "Computability theory", "Mathematical logic", "Proof theory" ]
63,037,627
https://en.wikipedia.org/wiki/John%20P.%20Peters
John Punnett Peters (December 4, 1887 – December 29, 1955) was an American physician, the John Slade Ely Professor of Medicine at Yale University from 1928 until his death in 1955. He was "one of the founders of modern clinical chemistry". His 1932 textbook Quantitative Clinical Chemistry, coauthored with Donald Van Slyke, established clinical chemistry as a distinct discipline. His research articles and textbooks advanced the use of the laboratory in the diagnosis and management of disease. The book advanced the study of clinical chemistry, i.e. the measurement of blood levels of sodium, potassium, glucose, etc., as essential components of understanding health and disease. Along with other physicians and scientists, he "established the basis of our present scientific understanding of body fluids." Peters was born in Philadelphia and spent much of his youth in New York City. He did his undergraduate education at Yale University and earned his medical degree at Columbia University. During World War I he served as a medical doctor in the U.S. Army. Loyalty/McCarthyism/Supreme Court Case John P. Peters was known not only for his pioneering work in human metabolism, but also for his passionate efforts to bring national health insurance to the United States at the close of the Second World War. He was accused of being a Communist because of this cause, and his loyalty was questioned by a review board of the National Institutes of Health, where he served in a $50-per-year consultant position on a Public Health Service peer review panel. He fought to clear his name and for the right to face those who had accused him, in a case that ultimately came before the Supreme Court in 1955 (Peters v. Hobby, 349 U.S. 331). He won the case, but not the precedent he had hoped for: the right to face his accuser. Instead, the Supreme Court ruled that his dismissal by the Loyalty Review Board was illegal. 
The Board had reopened his case in 1953 after he had been cleared by two other NIH loyalty boards, and the Court decided it had no cause to address his loyalty yet again. (Interestingly, the chairman of this Loyalty Review Board was also a Yalie, Hiram Bingham, the man who "discovered" Machu Picchu.) He was deeply disappointed that he was unable to get the Supreme Court to decide that he had a right to face his accuser. He knew it was a secret informant, not under oath and not present at the loyalty hearings, and he died without knowing who this person was. His third child, a son and also a physician, Richard Morse Peters, M.D., tried for many years to obtain the government records through Freedom of Information Act requests, without success even at his death in 2006. This tactic was used frequently during McCarthyism, and John Punnett Peters, M.D. was quoted as saying, "It doesn't matter much to an old fellow like myself, but it is the principle of the thing that counts." In 2007, an undergraduate history major at Yale, Jonathan Bressler, was able to find out the name of his accuser: Louis Budenz, a former active member of the Communist Party USA and managing editor of the Daily Worker, who had recanted his beliefs and become well paid as an informant. He had testified publicly against Linus Pauling in 1948, and "Unfortunately, Budenz was granted immunity from prosecution for perjury by the very same congressional committee Pauling was attacking. The 'rat' was allowed to scurry away in silence." Early life and education John P. Peters, Jr. was born on December 4, 1887 in Philadelphia, PA, the third child of John Punnett Peters and Gabriella Brooke Forman Peters. His oldest brother, Thomas McClure Peters, had died in 1885 at less than two years of age, and his older sister, Brooke, had been born on August 1, 1885. He was one of six surviving siblings, who included Frazier Forman Peters, a well-known architect especially noted for his signature stone houses in Westport, CT. 
John P. Peters lived abroad with his family as a young child while his father conducted excavations in Babylonia; the family then moved to Dresden, Germany, and later to Beirut, Lebanon. His father, John Punnett Peters, was then elected rector of St. Michael's Episcopal Church in New York, the same church his own father had served as rector. At the age of 13, in 1900, John P. Peters, Jr. (known as "Jack") enrolled in St. John Manlius School (a military academy) in upstate New York, and four years later entered Yale College just shy of his 17th birthday. He competed as a diver while in college, graduated in 1908, and then returned to St. John Manlius School for one year to teach English and Latin. He started medical school in 1909 at the College of Physicians and Surgeons, New York City (now Columbia University College of Physicians and Surgeons). While in medical school, he met his future wife, Charlotte Morse Hodge, but they were not allowed to marry until he had completed the first two years of his postgraduate training. This was a standard policy of medical training at the time, because marriage was deemed a distraction to aspiring doctors; this vow of celibacy was not abandoned at all U.S. medical schools until after World War II. Jack and Charlotte married in June 1915, the same month he finished his two-year internship at Presbyterian Hospital in New York. He then began a two-year fellowship for research in clinical medicine at the same hospital, and while there also held appointments as instructor in clinical medicine at Columbia University and as assistant visiting physician at Presbyterian Hospital. When the United States entered World War I, he was ordered abroad in May 1917 with the Presbyterian Base Hospital Unit, which took over British General Hospital No. 1 at Etretat, France. 
While there, he was initially able to do some "research work in war medicine", but this fell by the wayside as his "administrative work increased to such an extent as to stop all investigation." Career He returned to New York after the war and held positions as instructor of medicine at Cornell Medical College and adjunct assistant physician at Bellevue Hospital, New York. He received the appointment of associate professor of medicine at Vanderbilt University in July 1920, but was granted a leave of absence for the first year while the medical school was being reorganized. He spent this year under the tutelage of Dr. D. D. Van Slyke at the Hospital of the Rockefeller Institute. He continued to collaborate with Dr. Van Slyke, and together they published the landmark book Quantitative Clinical Chemistry in two volumes in 1931. The job at Vanderbilt was delayed another year, so he resigned his position there and, in July 1921, accepted a position as associate professor of medicine at Yale, where he remained until his death in 1955. In 1927 he was promoted to full professor, and in January 1928 he was appointed John Slade Ely Professor of Medicine, a position he held until his death in 1955. He established a research program in metabolism, built up the clinical laboratory, and was at the forefront of clinicians who "understood the basic knowledge of physics and chemistry that underlay their usefulness in the treatment of human disease". He published over 200 articles spanning a wide range of "contributions to the understanding of diseases of metabolism; electrolyte and acid base equilibrium; nephritis; water exchange; the interrelation of proteins, carbohydrates, and lipids in metabolism; the role of the thyroid in health and disease; medical education; and the role of the government in medical care". He co-authored Quantitative Clinical Chemistry, Volumes 1 and 2, with D. D. Van Slyke in 1931, and authored Body Water in 1935. 
"Every medical student, whether he knows it or not, is a student of Peters," wrote Donald D. Van Slyke. "His contributions to the pathologic physiology of the circulatory, respiratory, excretory and endocrine systems, of the metabolism of proteins, fats, carbohydrates, electrolytes and water have been integrated into the science of medicine." Each year, the American Society of Nephrology presents the John P. Peters Award, which "recognizes individuals who have made substantial research contributions to the discipline of nephrology and have sustained achievements in one or more domains of academic medicine including clinical care, education and leadership." Death John P. Peters, M.D. died on December 29, 1955, at the age of 68, of the complications of a heart attack he had suffered several months earlier, less than a year after his Supreme Court case was adjudicated. References External links John Punnett Peters papers (MS 897). Manuscripts and Archives, Yale University Library. 1887 births 1955 deaths 20th-century American biochemists Clinical chemists Yale University alumni Columbia University Vagelos College of Physicians and Surgeons alumni
John P. Peters
[ "Chemistry" ]
1,819
[ "Biochemists", "Clinical chemists" ]
63,039,495
https://en.wikipedia.org/wiki/Maldon%20Sea%20Salt
Maldon Crystal Salt Company Limited, trading as Maldon Salt, is a salt-producing company in Maldon on the high-salinity banks of the River Blackwater in Essex, England. The river is favoured by flat tide-washed salt marshes and low rainfall. History Sea salt production in the coastal town of Maldon dates back to the time of Roman Britain when clay-lined salt evaporation ponds were constructed, and according to the Domesday Book, 45 lead pans were used to manufacture salt there in 1086. The Maldon Salt Company was founded under its current name in 1882, having previously been part of a local coal firm. In the 1990s and early 2000s, Maldon's salt grew in popularity after being used by prominent chefs including Ruth Rogers, Delia Smith, and Jamie Oliver. Salt Maldon Sea Salt is made on a large scale by evaporating brine over an elaborate network of modern gas-fired brick flues. At one time crystal drying was by woodburning stove and later by industrial oven before the use of an oscillator was introduced. In 2017 it was said that inverted pyramid-shaped crystals prevented the salt from caking, and the resulting flakes from their breakdown were used as a finishing salt. The company claims that the salt's low magnesium content means it has less of a bitter aftertaste than other salts. Salt gained from evaporating seawater has a higher magnesium ion content than some table salts. References External links Companies based in Essex Salt production Food and drink companies of England Maldon, Essex 1882 establishments in England
Maldon Sea Salt
[ "Chemistry" ]
323
[ "Salt production", "Salts" ]
63,041,061
https://en.wikipedia.org/wiki/Leishmaniasis%20vaccine
A leishmaniasis vaccine is a vaccine which would prevent leishmaniasis. As of 2017, no vaccine for humans was available, although several effective leishmaniasis vaccines for dogs exist. The parasite which causes leishmaniasis is Leishmania, a genus of the Trypanosomatida. The disease is spread by sandflies. Animals such as dogs can act as reservoirs, carrying the parasite, spreading it to sandflies, and from sandflies to humans. A vaccination strategy to control or eliminate leishmaniasis might therefore include a vaccine for humans and further vaccines for animals. A human vaccine remains an active subject of research; it has also been suggested that public health practices could control or eliminate leishmaniasis without one. Leishmanization People who recover from leishmaniasis gain immunity from reinfection. "Leishmanization" is the practice of inoculation with live Leishmania to induce mild cutaneous leishmaniasis (CL) and thereby prevent future, more dangerous infection. Some Bedouin and Kurdish cultures practiced leishmanization as traditional medicine. There are historic accounts and decades of medical research describing the efficacy of this practice. Traditional knowledge about leishmanization has informed the development of a leishmaniasis vaccine. Vaccine development A 2015 paper claimed that the development and use of a vaccine would be the best way to eliminate leishmaniasis from South Asia. Attempts to create a vaccine with live, inactivated or attenuated Leishmania have failed. Attempts to create a peptide, DNA, or protein vaccine have shown efficacy in animal models but have not been effective in humans. A series of challenges, with explanations rooted in molecular biology, account for the difficulty of vaccine development. Vaccine development is difficult partly because the parasites live in humans, sandflies, and other animals, so a vaccine in humans alone would not eliminate the protozoan from insects and animals. 
There is a challenge in interpreting data from animal models and applying it to humans. Another challenge is effectively transferring knowledge from laboratory settings to field practice. There is also a basic lack of scientific understanding of how an antiparasitic vaccine should generate and maintain immunological memory during parasitic infection. A vaccine developed using CRISPR-Cas9 technology, published in 2020, showed that inoculation with a live attenuated Leishmania major strain induces durable protection, analogous to leishmanization. Another gene deletion mutant was created in a Leishmania mexicana strain in 2022, showing complete inhibition of the typical cutaneous lesions in mouse models thanks to a diminished induction of the Th2 cytokines. Clinical trials As of 2016 there were several vaccines in development, three of which had gone to clinical trials. One clinical trial in Brazil used an inactivated vaccine for human immunotherapy. Another, in Uzbekistan, used an attenuated vaccine for human immunotherapy. A third, in Brazil, involved vaccination of dogs to prevent those animals from spreading the disease. The dog vaccines are successful in providing immunity. In 2008, a recombinant protein based vaccine for dogs known as LeishTech was launched in Brazil. In 2011 CaniLeish, a vaccine made with antigens from L. infantum, was licensed in Europe. Efforts are ongoing to develop further vaccines for dogs, with one second-generation vaccine up for approval in Brazil as of 2017. History In the early 1900s scientists learned how to culture the parasite, and work in the 1940s led by Saul Adler made the practice of leishmanization widespread in Israel and Russia until the 1980s, when large-scale clinical trials showed that the practice led to long-term skin lesions, exacerbation of psoriasis, and immunosuppression in some people. During the Iran–Iraq War over 2 million people in Iran were vaccinated this way. 
As of 2006 such vaccines were still licensed and used in Uzbekistan. Clinical trials with killed parasites had conflicting results in the 1940s, and work on such vaccines did not resume until the 1970s, when there were promising small clinical trials; this work continued with extensive clinical trials in Ecuador, Brazil, and Iran through the 1990s. Preclinical work with genetically modified live attenuated parasite vaccines was conducted in the 1990s and 2000s, as was work with synthetic peptides, recombinant proteins, glycoproteins and glycolipids from Leishmania species, and naked DNA. As of 2016, none of these second-generation vaccine candidates had reached the market, and few had been tested in clinical trials. References Leishmaniasis Vaccines Animal vaccines
Leishmaniasis vaccine
[ "Biology" ]
943
[ "Vaccination", "Vaccines" ]
63,042,435
https://en.wikipedia.org/wiki/Institution%20of%20Municipal%20Engineers
The Institution of Municipal Engineers (IMunE) was a professional association for municipal engineers in the United Kingdom. Founded in 1873, the institution grew following the expansion in municipal engineering roles under the Public Health Act 1875 (38 & 39 Vict. c. 55). It was incorporated in 1890 and received a royal charter in 1948. The IMunE was a founding member of the Council of Engineering Institutions in 1964 and by the late 20th century was responsible for examining most British building inspectors. The IMunE merged with the Institution of Civil Engineers in 1984. History The Institution of Municipal Engineers has its origins in 1871, when the surveyor for West Ham, Lewis Angell, held discussions with other professionals about founding a society to promote the need for municipal engineers to work "according to principle and not policy". The Association of Municipal and Sanitary Engineers and Surveyors was founded in 1873, with Angell as its first president. The association's first conference was held in 1874 and was attended by Joseph Chamberlain, who was then mayor of Birmingham. This conference became an annual four-day event during which technical papers were presented and discussed by members. There was a large increase in the number of municipal engineers in this period, particularly after the Public Health Act 1875 (38 & 39 Vict. c. 55), which required local authorities to appoint engineers with responsibilities for water supply, sewerage, waste disposal, street sweeping and general surveying. In 1886 the association introduced its first examinations, known as testamurs, the syllabus of which included the principles of building construction and the law. The membership expanded to include civil engineers and civil engineering surveyors following the Local Government Act 1888, which introduced county councils with responsibilities in these areas. 
Membership of the IMunE was not a prerequisite for employment by a local authority, and as such there was wide variation in the competence of practising municipal engineers in this period. The association was incorporated in 1890 and changed its name to the "Institution of Municipal and County Engineers" in 1908. Beginning in 1913 the institution published a monthly Journal of the Institution of Municipal and County Engineers; other publications included The Presentation of Evidence - a series of monologues by GS Short - and the Proceedings of its annual conference. In 1922 the institution tightened its membership criteria, restricting admission to those who held an academic degree, membership of the Institution of Civil Engineers, or a pass in the testamur. The IMunE introduced an examination for building inspectors in 1937, though passing this was not mandatory to secure employment in the field. In 1948 the institution received a royal charter and shortened its name to the "Institution of Municipal Engineers"; at the same time the testamur exam became the only means to secure membership. After receiving the charter the IMunE was able to award the title of chartered municipal engineer to members. In 1962 the institution introduced diplomas in traffic engineering and town planning; it later introduced a diploma in administration. The institution registered as a charity on 6 June 1963. In 1964 the IMunE became a founder member of the Council of Engineering Institutions. By the following year it had 8,500 members, the majority of whom worked for local authorities in the UK or overseas. During this period it held more than 150 district-level meetings each year and maintained a scientific advisory service, a library and a number of science committees. The membership grew to 9,175 in 1967. The IMunE's headquarters were at 25 Eccleston Square in London. By the late 20th century the IMunE was responsible for examining the majority of building inspectors in the UK. 
It supported reform and modernisation in this area but opposed any relaxation in the requirement for examination. Merger The IMunE merged with the Institution of Civil Engineers (ICE) in 1984. The IMunE abandoned its royal charter but retained its constitution for members now within the ICE. The IMunE's registration as a charity ceased on 25 May 1993. The modern-day ICE caters for municipal engineering by means of its Municipal Engineering Expert Panel. This acts to support municipal engineers within the ICE and is the UK representative body at the International Federation of Municipal Engineers. The ICE maintains the records of the IMunE, including the first minute book (from July 1871), complete sets of minutes from 1921-77, and membership lists from 1873-1978. The publications of the IMunE are held in the ICE library. Heraldry The Institution of Municipal and County Engineers was awarded heraldic arms in 1931, which it retained throughout its existence. Its blazon (formal description) is: "Barry wavy of six Argent and Vert, a Pale Sable, thereon in chief a Torch Or, fired, and enfiled with a Mural Crown proper, and in base a Winged Wheel Gold". The arms bore the motto "Amenity, Progress, Stability" in English. The design of the arms represented the activities of its members: A green and white wavy field Green representing parks and open spaces White representing rivers and water supplies A black vertical dividing line representing highways and bridges A winged wheel representing traffic and machinery A torch representing education and public lighting A battlemented mural crown representing local government and building construction; gold coloured to represent the wealth generated by engineering works. The layout of the overall arms represented town planning References Civil engineering professional associations Organizations established in 1873 Organizations disestablished in 1984
Institution of Municipal Engineers
[ "Engineering" ]
1,070
[ "Civil engineering professional associations", "Civil engineering organizations" ]
63,043,301
https://en.wikipedia.org/wiki/Oxyprothepin%20decanoate
Oxyprothepin decanoate, sold under the brand name Meclopin, is a typical antipsychotic which was used in the treatment of schizophrenia in the Czech Republic but is no longer marketed. It is administered by depot injection into muscle. The medication has an approximate duration of 2 to 3 weeks. The history of oxyprothepin decanoate has been reviewed. References Abandoned drugs Antipsychotic esters Decanoate esters Dibenzothiepines Piperazines
Oxyprothepin decanoate
[ "Chemistry" ]
105
[ "Drug safety", "Abandoned drugs" ]
63,044,039
https://en.wikipedia.org/wiki/Semiorthogonal%20decomposition
In mathematics, a semiorthogonal decomposition is a way to divide a triangulated category into simpler pieces. One way to produce a semiorthogonal decomposition is from an exceptional collection, a special sequence of objects in a triangulated category. For an algebraic variety X, it has been fruitful to study semiorthogonal decompositions of the bounded derived category of coherent sheaves, $D^{\mathrm{b}}(X)$. Semiorthogonal decomposition Alexei Bondal and Mikhail Kapranov (1989) defined a semiorthogonal decomposition of a triangulated category $\mathcal{T}$ to be a sequence $\mathcal{A}_1,\ldots,\mathcal{A}_n$ of strictly full triangulated subcategories such that: for all $i<j$ and all objects $A_i\in\mathcal{A}_i$ and $A_j\in\mathcal{A}_j$, every morphism from $A_j$ to $A_i$ is zero. That is, there are "no morphisms from right to left". $\mathcal{T}$ is generated by $\mathcal{A}_1,\ldots,\mathcal{A}_n$. That is, the smallest strictly full triangulated subcategory of $\mathcal{T}$ containing $\mathcal{A}_1,\ldots,\mathcal{A}_n$ is equal to $\mathcal{T}$. The notation $\mathcal{T}=\langle\mathcal{A}_1,\ldots,\mathcal{A}_n\rangle$ is used for a semiorthogonal decomposition. Having a semiorthogonal decomposition implies that every object of $\mathcal{T}$ has a canonical "filtration" whose graded pieces are (successively) in the subcategories $\mathcal{A}_1,\ldots,\mathcal{A}_n$. That is, for each object T of $\mathcal{T}$, there is a sequence of morphisms $0=T_n\to T_{n-1}\to\cdots\to T_0=T$ in $\mathcal{T}$ such that the cone of $T_i\to T_{i-1}$ is in $\mathcal{A}_i$, for each i. Moreover, this sequence is unique up to a unique isomorphism. One can also consider "orthogonal" decompositions of a triangulated category, by requiring that there are no morphisms from $\mathcal{A}_i$ to $\mathcal{A}_j$ for any $i\neq j$. However, that property is too strong for most purposes. For example, for an (irreducible) smooth projective variety X over a field, the bounded derived category $D^{\mathrm{b}}(X)$ of coherent sheaves never has a nontrivial orthogonal decomposition, whereas it may have a semiorthogonal decomposition, by the examples below. A semiorthogonal decomposition of a triangulated category may be considered as analogous to a finite filtration of an abelian group. 
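To make the canonical filtration concrete, the case n = 2 can be written out explicitly (a standard reformulation, added here for illustration). Writing $\mathcal{T}=\langle\mathcal{A},\mathcal{B}\rangle$, every object $T$ sits in a distinguished triangle:

```latex
% T = <A, B>: no morphisms from B to A ("right to left").
% Every object T of the category fits in a distinguished triangle
T_{\mathcal{B}} \longrightarrow T \longrightarrow T_{\mathcal{A}}
  \longrightarrow T_{\mathcal{B}}[1],
\qquad T_{\mathcal{B}} \in \mathcal{B}, \quad
       T_{\mathcal{A}} \in \mathcal{A},
% and semiorthogonality forces this triangle to be unique
% up to a unique isomorphism.
```

Here $T_{\mathcal{B}}$ plays the role of the "largest piece of $T$ coming from $\mathcal{B}$", and $T_{\mathcal{A}}$ that of the quotient lying in $\mathcal{A}$, mirroring a two-step filtration of an abelian group.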
Alternatively, one may consider a semiorthogonal decomposition $\mathcal{T}=\langle\mathcal{A},\mathcal{B}\rangle$ as closer to a split exact sequence, because the exact sequence $0\to\mathcal{A}\to\mathcal{T}\to\mathcal{T}/\mathcal{A}\to 0$ of triangulated categories is split by the subcategory $\mathcal{B}$, mapping isomorphically to $\mathcal{T}/\mathcal{A}$. Using that observation, a semiorthogonal decomposition $\mathcal{T}=\langle\mathcal{A}_1,\ldots,\mathcal{A}_n\rangle$ implies a direct sum splitting of Grothendieck groups: $K_0(\mathcal{T})\cong K_0(\mathcal{A}_1)\oplus\cdots\oplus K_0(\mathcal{A}_n)$. For example, when $\mathcal{T}=D^{\mathrm{b}}(X)$ is the bounded derived category of coherent sheaves on a smooth projective variety X, $K_0(\mathcal{T})$ can be identified with the Grothendieck group $K_0(X)$ of algebraic vector bundles on X. In this geometric situation, using that $D^{\mathrm{b}}(X)$ comes from a dg-category, a semiorthogonal decomposition actually gives a splitting of all the algebraic K-groups of X: $K_i(X)\cong K_i(\mathcal{A}_1)\oplus\cdots\oplus K_i(\mathcal{A}_n)$ for all i. Admissible subcategory One way to produce a semiorthogonal decomposition is from an admissible subcategory. By definition, a full triangulated subcategory $\mathcal{A}\subseteq\mathcal{T}$ is left admissible if the inclusion functor $i\colon\mathcal{A}\to\mathcal{T}$ has a left adjoint functor, written $i^{*}$. Likewise, $\mathcal{A}\subseteq\mathcal{T}$ is right admissible if the inclusion has a right adjoint, written $i^{!}$, and it is admissible if it is both left and right admissible. A right admissible subcategory $\mathcal{B}\subseteq\mathcal{T}$ determines a semiorthogonal decomposition $\mathcal{T}=\langle\mathcal{B}^{\perp},\mathcal{B}\rangle$, where $\mathcal{B}^{\perp}$ is the right orthogonal of $\mathcal{B}$ in $\mathcal{T}$. Conversely, every semiorthogonal decomposition $\mathcal{T}=\langle\mathcal{A},\mathcal{B}\rangle$ arises in this way, in the sense that $\mathcal{B}$ is right admissible and $\mathcal{A}=\mathcal{B}^{\perp}$. Likewise, for any semiorthogonal decomposition $\mathcal{T}=\langle\mathcal{A},\mathcal{B}\rangle$, the subcategory $\mathcal{A}$ is left admissible, and $\mathcal{B}={}^{\perp}\mathcal{A}$, where ${}^{\perp}\mathcal{A}$ is the left orthogonal of $\mathcal{A}$. If $\mathcal{T}$ is the bounded derived category $D^{\mathrm{b}}(X)$ of a smooth projective variety over a field k, then every left or right admissible subcategory of $\mathcal{T}$ is in fact admissible. By results of Bondal and Michel Van den Bergh, this holds more generally for any regular proper triangulated category that is idempotent-complete. Moreover, for a regular proper idempotent-complete triangulated category $\mathcal{T}$, a full triangulated subcategory of $\mathcal{T}$ is admissible if and only if it is regular and idempotent-complete. These properties are intrinsic to the subcategory. 
For example, for X a smooth projective variety and Y a subvariety not equal to X, the subcategory of $D^{\mathrm{b}}(X)$ of objects supported on Y is not admissible. Exceptional collection Let k be a field, and let $\mathcal{T}$ be a k-linear triangulated category. An object E of $\mathcal{T}$ is called exceptional if Hom(E,E) = k and Hom(E,E[t]) = 0 for all nonzero integers t, where [t] is the shift functor in $\mathcal{T}$. (In the derived category of a smooth complex projective variety X, the first-order deformation space of an object E is $\operatorname{Ext}^1(E,E)$, and so an exceptional object is in particular rigid. It follows, for example, that there are at most countably many exceptional objects in $D^{\mathrm{b}}(X)$, up to isomorphism. That helps to explain the name.) The triangulated subcategory generated by an exceptional object E is equivalent to the derived category $D^{\mathrm{b}}(k)$ of finite-dimensional k-vector spaces, the simplest triangulated category in this context. (For example, every object of that subcategory is isomorphic to a finite direct sum of shifts of E.) Alexei Gorodentsev and Alexei Rudakov (1987) defined an exceptional collection to be a sequence of exceptional objects $E_1,\ldots,E_m$ such that $\operatorname{Hom}(E_j,E_i[t])=0$ for all i < j and all integers t. (That is, there are "no morphisms from right to left".) In a proper triangulated category $\mathcal{T}$ over k, such as the bounded derived category of coherent sheaves on a smooth projective variety, every exceptional collection generates an admissible subcategory, and so it determines a semiorthogonal decomposition: $\mathcal{T}=\langle\mathcal{A},E_1,\ldots,E_m\rangle$, where $\mathcal{A}=\langle E_1,\ldots,E_m\rangle^{\perp}$, and $\langle E\rangle$ denotes the full triangulated subcategory generated by the object $E$. An exceptional collection is called full if the subcategory $\mathcal{A}$ is zero. (Thus a full exceptional collection breaks the whole triangulated category up into finitely many copies of $D^{\mathrm{b}}(k)$.) 
In particular, if X is a smooth projective variety such that $D^{\mathrm{b}}(X)$ has a full exceptional collection $E_1,\ldots,E_m$, then the Grothendieck group of algebraic vector bundles on X is the free abelian group on the classes of these objects: $K_0(X)=\bigoplus_{i=1}^{m}\mathbb{Z}\cdot[E_i]$. A smooth complex projective variety X with a full exceptional collection must have trivial Hodge theory, in the sense that $h^{p,q}(X)=0$ for all $p\neq q$; moreover, the cycle class map $CH^{*}(X)\otimes\mathbb{Q}\to H^{*}(X,\mathbb{Q})$ must be an isomorphism. Examples The original example of a full exceptional collection was discovered by Alexander Beilinson (1978): the derived category of projective space $\mathbf{P}^n$ over a field has the full exceptional collection $\mathcal{O},\mathcal{O}(1),\ldots,\mathcal{O}(n)$, where O(j) for integers j are the line bundles on projective space. Full exceptional collections have also been constructed on all smooth projective toric varieties, del Pezzo surfaces, many projective homogeneous varieties, and some other Fano varieties. More generally, if X is a smooth projective variety of positive dimension such that the coherent sheaf cohomology groups $H^{i}(X,\mathcal{O}_X)$ are zero for i > 0, then the object $\mathcal{O}_X$ in $D^{\mathrm{b}}(X)$ is exceptional, and so it induces a nontrivial semiorthogonal decomposition $D^{\mathrm{b}}(X)=\langle\mathcal{O}_X^{\perp},\mathcal{O}_X\rangle$. This applies to every Fano variety over a field of characteristic zero, for example. It also applies to some other varieties, such as Enriques surfaces and some surfaces of general type. A source of examples is Orlov's blowup formula concerning the blowup $X=\operatorname{Bl}_Z(Y)$ of a scheme Y at a codimension k locally complete intersection subscheme Z with exceptional locus $j\colon E\hookrightarrow X$. There is a semiorthogonal decomposition $D^{\mathrm{b}}(X)=\langle D^{\mathrm{b}}(Z)_{k-1},\ldots,D^{\mathrm{b}}(Z)_{1},D^{\mathrm{b}}(Y)\rangle$, where $D^{\mathrm{b}}(Z)_{i}$ is the image of the functor $Rj_{*}(\mathcal{O}_E(i)\otimes Lp^{*}(-))\colon D^{\mathrm{b}}(Z)\to D^{\mathrm{b}}(X)$, with $p\colon E\to Z$ the natural map. While these examples encompass a large number of well-studied derived categories, many naturally occurring triangulated categories are "indecomposable". In particular, for a smooth projective variety X whose canonical bundle is basepoint-free, every semiorthogonal decomposition $D^{\mathrm{b}}(X)=\langle\mathcal{A},\mathcal{B}\rangle$ is trivial in the sense that $\mathcal{A}$ or $\mathcal{B}$ must be zero. For example, this applies to every variety which is Calabi–Yau in the sense that its canonical bundle is trivial. 
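As a concrete check on Beilinson's example in the smallest case $X=\mathbf{P}^1$ (a standard computation, stated here for illustration), exceptionality and semiorthogonality of the pair $\mathcal{O},\mathcal{O}(1)$ follow directly from sheaf cohomology:

```latex
% Each line bundle O(j) on P^1 is exceptional:
\operatorname{Hom}(\mathcal{O}(j),\mathcal{O}(j)) = H^0(\mathbf{P}^1,\mathcal{O}) = k,
\qquad
\operatorname{Ext}^1(\mathcal{O}(j),\mathcal{O}(j)) = H^1(\mathbf{P}^1,\mathcal{O}) = 0.
% No morphisms from right to left:
\operatorname{Ext}^t(\mathcal{O}(1),\mathcal{O}) = H^t(\mathbf{P}^1,\mathcal{O}(-1)) = 0
\quad \text{for all } t.
% The collection is full, so the Grothendieck group splits:
K_0(\mathbf{P}^1) \cong \mathbb{Z}\,[\mathcal{O}] \oplus \mathbb{Z}\,[\mathcal{O}(1)]
\cong \mathbb{Z}^2.
```

The same cohomology computations, applied to $\mathcal{O}(j-i)$ for $0\le i<j\le n$, verify Beilinson's full exceptional collection on $\mathbf{P}^n$ in general.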
See also Derived noncommutative algebraic geometry Notes References Algebraic geometry
Semiorthogonal decomposition
[ "Mathematics" ]
1,631
[ "Mathematical structures", "Fields of abstract algebra", "Category theory", "Algebraic geometry", "Homological algebra" ]
63,044,352
https://en.wikipedia.org/wiki/99%20Points%20of%20Intersection
99 Points of Intersection: Examples—Pictures—Proofs is a book on constructions in Euclidean plane geometry in which three or more lines or curves meet in a single point of intersection. This book was originally written in German by Hans Walser as 99 Schnittpunkte (Eagle / Ed. am Gutenbergplatz, 2004), translated into English by Peter Hilton and Jean Pedersen, and published by the Mathematical Association of America in 2006 in their MAA Spectrum series (). Topics and organization The book is organized into three sections. The first section provides introductory material, describing different mathematical situations in which multiple curves might meet, and providing different possible explanations for this phenomenon, including symmetry, geometric transformations, and membership of the curves in a pencil of curves. The second section shows the 99 points of intersection of the title. Each is given on its own page, as a large figure with three smaller figures showing its construction, with a one-line caption but no explanatory text. The third section provides background material and proofs for some of these points of intersection, as well as extending and generalizing some of these results. Some of these points of intersection are standard; for instance, these include the construction of the centroid of a triangle as the point where its three median lines meet, the construction of the orthocenter as the point where the three altitudes meet, and the construction of the circumcenter as the point where the three perpendicular bisectors of the sides meet, as well as two versions of Ceva's theorem. However, others are new to this book, and include intersections related to silver rectangles, tangent circles, the Pythagorean theorem, and the nine-point hyperbola. Audience John Jensen writes that "the clear and uncluttered illustrations of intersection make for a rich source for geometric investigation by high school geometry students". 
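Such investigation can even be scripted. As a quick illustration (code not from the book), the simplest standard concurrence, that the three medians of a triangle meet in a single point (the centroid), can be verified with exact rational arithmetic in Python:

```python
from fractions import Fraction

# Vertices of an arbitrary example triangle (exact rational coordinates).
A = (Fraction(0), Fraction(0))
B = (Fraction(6), Fraction(0))
C = (Fraction(2), Fraction(5))

def midpoint(P, Q):
    return ((P[0] + Q[0]) / 2, (P[1] + Q[1]) / 2)

def line_through(P, Q):
    # Return (a, b, c) with a*x + b*y = c for the line through P and Q.
    a = Q[1] - P[1]
    b = P[0] - Q[0]
    return (a, b, a * P[0] + b * P[1])

def intersect(l1, l2):
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# The three medians: each vertex joined to the midpoint of the opposite side.
m_A = line_through(A, midpoint(B, C))
m_B = line_through(B, midpoint(A, C))
m_C = line_through(C, midpoint(A, B))

G = intersect(m_A, m_B)           # meet of two medians
a, b, c = m_C
assert a * G[0] + b * G[1] == c   # the third median passes through the same point

# The common point is the centroid, the average of the three vertices.
assert G == ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)
print(G)
```

Because the arithmetic is exact, the equality tests are genuine proofs of concurrence for this particular triangle, in the spirit of the book's "pictures before proofs" presentation.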
And although Gerry Leversha calls the book "eccentric" and states that it "is clearly nothing to do with any syllabus anywhere", Jensen suggests that its examples would make a good complement to coursework both in exploratory geometry using interactive geometry software and in a geometry course focused on the formal proof of geometry propositions. He adds that the book itself is a proof of the possibility of presenting geometry without detailed explanations, and of introducing students to the beauty of the subject. References External links Schnittpunkte, web site with a larger collection of points of intersection, by Hans Walser Euclidean plane geometry Mathematics books 2004 non-fiction books 2006 non-fiction books
99 Points of Intersection
[ "Mathematics" ]
525
[ "Planes (geometry)", "Euclidean plane geometry" ]
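The concurrences the book treats as standard (medians, altitudes, perpendicular bisectors) are easy to verify numerically. A minimal sketch in Python, checking that the three medians of an example triangle meet in one point, the centroid; the triangle and helper functions are purely illustrative:

```python
# Numerically verify that the three medians of a triangle are concurrent
# and meet at the centroid. The example triangle is arbitrary.

def line_intersection(p1, p2, p3, p4):
    """Intersection of the line through p1, p2 with the line through p3, p4
    (assumes the lines are not parallel)."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / denom
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / denom
    return (px, py)

def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

A, B, C = (0.0, 0.0), (6.0, 0.0), (1.0, 4.0)

# Intersect two medians, then check the third passes through the same point.
P = line_intersection(A, midpoint(B, C), B, midpoint(A, C))  # medians from A and B
Q = line_intersection(A, midpoint(B, C), C, midpoint(A, B))  # medians from A and C
centroid = ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)

assert abs(P[0] - Q[0]) < 1e-9 and abs(P[1] - Q[1]) < 1e-9
assert abs(P[0] - centroid[0]) < 1e-9 and abs(P[1] - centroid[1]) < 1e-9
```

The same two-intersections-then-check pattern works for the altitudes and perpendicular bisectors mentioned in the description.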
63,045,034
https://en.wikipedia.org/wiki/Paleobiota%20of%20Burmese%20amber
Burmese amber is fossil resin dating to the early Late Cretaceous Cenomanian age recovered from deposits in the Hukawng Valley of northern Myanmar. It is known for being one of the most diverse Cretaceous age amber paleobiotas, containing rich arthropod fossils, along with uncommon vertebrate fossils and even rare marine inclusions. A mostly complete list of all taxa described up to the end of 2023 can be found in Ross (2024).
Amoebozoa: Dictyostelia; Myxogastria; incertae sedis
Apicomplexa: Aconoidasida (Haemosporida); Conoidasida (Eugregarinorida)
Euglenozoa: Kinetoplastea (Trypanosomatida)
Metamonada: Anaeromonadea (Oxymonadida); Trichonymphea (Trichonymphida); Trichomonadea (Cristamonadida, Spirotrichonymphida, Trichomonadida)
"Opisthokonta": Mesomycetozoea (Eccrinales)
Proteobacteria: Alphaproteobacteria (Rickettsiales)
Plants: chlorophyte green algae (Chaetophorales, Chlamydomonadales); bryophyte true mosses (Dicranales, Hypnodendrales); lycopods and spike mosses; marchantiophyte liverworts (Porellales); polypodiopsid ferns (Cyatheales, Polypodiales); angiosperm flowering plants (Cornales, Laurales, Liliales, Nymphaeales, Oxalidales, Poales, Rosales, incertae sedis)
Fungi: Ascomycota (Hypocreales, Ophiostomatales); Basidiomycota (Agaricales, Boletales, incertae sedis); "Zygomycetes" (Priscadvenales)
Echinodermata: Crinoidea (Isocrinida)
Arthropoda: Arachnida (Amblypygi, Araneae, Ixodida, Opilioacariformes, Opiliones, Palpigradi, Pseudoscorpiones, Ricinulei, Sarcoptiformes, Schizomida, Scorpiones, Solifugae, Uropygi (Thelyphonida), Trombidiformes, incertae sedis); Chilopoda; Diplopoda (Callipodida, Chordeumatida, Platydesmida, Polydesmida, Polyxenida, Siphoniulida, Siphonophorida, Spirostreptida); "Entognatha" (Entomobryomorpha, Poduromorpha, Symphypleona); Insecta; Malacostraca (Decapoda, Isopoda, Tanaidacea); Ostracoda; Symphyla (Scolopendrellidae)
Mollusca: Cephalopoda (Ammonitida); Gastropoda ("Architaenioglossa", Heterobranchia, Littorinimorpha, Neritimorpha); Bivalvia
Nematoda: Chromadorea (Rhabditida); Enoplea (Mermithida); Secernentea (Aphelenchida, Oxyurida)
Nematomorpha: Gordioidea
Onychophora
Vertebrata: Amphibia (Allocaudata, Anura); Reptilia (Archosauria, Squamata)
Ichnotaxa: Insecta (Dictyoptera); Mollusca (Bivalvia)
References Burmese amber
Paleobiota of Burmese amber
[ "Biology" ]
692
[ "Mesozoic paleobiotas", "Prehistoric biotas" ]
63,045,799
https://en.wikipedia.org/wiki/Equal%20detour%20point
In Euclidean geometry, the equal detour point is a triangle center denoted by X(176) in Clark Kimberling's Encyclopedia of Triangle Centers. It is characterized by the equal detour property: if one travels from any vertex of a triangle ABC to another by taking a detour through some inner point P, then the additional distance traveled is constant. This means the following equation has to hold: |AP| + |PB| − |AB| = |BP| + |PC| − |BC| = |CP| + |PA| − |CA|. The equal detour point is the only point with the equal detour property if and only if the following inequality holds for the angles α, β, γ of ABC: tan(α/2) + tan(β/2) + tan(γ/2) ≤ 2. If the inequality does not hold, then the isoperimetric point possesses the equal detour property as well. The equal detour point, isoperimetric point, the incenter and the Gergonne point of a triangle are collinear, that is all four points lie on a common line. Furthermore, they form a harmonic range. The equal detour point is the center of the inner Soddy circle of a triangle and the additional distance traveled by the detour is equal to the diameter of the inner Soddy circle. The barycentric coordinates of the equal detour point are a + Δ/(s − a) : b + Δ/(s − b) : c + Δ/(s − c), where Δ denotes the area and s the semiperimeter of the triangle, and the trilinear coordinates are: 1 + sec(α/2) cos(β/2) cos(γ/2) : 1 + sec(β/2) cos(γ/2) cos(α/2) : 1 + sec(γ/2) cos(α/2) cos(β/2). References External links isoperimetric and equal detour points – interactive illustration on Geogebratube Triangle centers
Equal detour point
[ "Physics", "Mathematics" ]
270
[ "Point (geometry)", "Triangle centers", "Points defined for a triangle", "Geometric centers", "Symmetry" ]
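The equal detour property lends itself to a quick numerical check. A minimal Python sketch, assuming the barycentric representation a + Δ/(s − a) : b + Δ/(s − b) : c + Δ/(s − c) for X(176) (Δ the area, s the semiperimeter) and an arbitrary 3-4-5 right triangle as the example:

```python
from math import dist, sqrt

# Verify the equal detour property of X(176) for a 3-4-5 right triangle,
# constructing the point from its barycentric coordinates.
A, B, C = (0.0, 0.0), (4.0, 0.0), (0.0, 3.0)
a, b, c = dist(B, C), dist(C, A), dist(A, B)   # side lengths opposite A, B, C
s = (a + b + c) / 2                            # semiperimeter
area = sqrt(s * (s - a) * (s - b) * (s - c))   # Heron's formula

# Barycentric weights a + Δ/(s-a) etc., normalized to get Cartesian coords.
w = (a + area / (s - a), b + area / (s - b), c + area / (s - c))
P = (sum(wi * V[0] for wi, V in zip(w, (A, B, C))) / sum(w),
     sum(wi * V[1] for wi, V in zip(w, (A, B, C))) / sum(w))

# The detour incurred between each pair of vertices: all three must coincide.
d1 = dist(P, A) + dist(P, B) - c
d2 = dist(P, B) + dist(P, C) - a
d3 = dist(P, C) + dist(P, A) - b
assert abs(d1 - d2) < 1e-9 and abs(d2 - d3) < 1e-9
```

For this triangle the common detour works out to exactly 12/23, which per the text also equals the diameter of the inner Soddy circle.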
63,046,013
https://en.wikipedia.org/wiki/Nathalie%20Wahl
Nathalie Wahl (born 1976) is a Belgian mathematician specializing in topology, including algebraic topology, homotopy theory, and geometric topology. She is a professor of mathematics at the University of Copenhagen, where she directs the Copenhagen Center for Geometry and Topology. Education and career Wahl was born in Brussels, and earned a license in mathematics in 1998 at the Université libre de Bruxelles, advised by Jean-Paul Doignon. Her undergraduate thesis concerned infinite antimatroids, and she published the same material in 2001 as her first journal paper. She completed a Ph.D. at the University of Oxford in 2001, with a dissertation Ribbon Graphs and Related Operads in algebraic topology supervised by Ulrike Tillmann. After short-term positions at Northwestern University, Aarhus University, and the University of Chicago, she joined the Department of Mathematical Sciences at the University of Copenhagen in 2006, and was promoted to full professor there in 2010. In 2020 she became Center Leader of the Copenhagen Center for Geometry and Topology. Recognition In 2008, Wahl won the Young Elite Researcher Award (Ung Eliteforskerprisen) of the Independent Research Fund Denmark (Danmarks Frie Forskningsfond). In 2016, she was elected to the Danish Academy of Natural Sciences. References External links Home page 1976 births Living people Belgian mathematicians Belgian women mathematicians Topologists Université libre de Bruxelles alumni Alumni of the University of Oxford Academic staff of the University of Copenhagen
Nathalie Wahl
[ "Mathematics" ]
296
[ "Topologists", "Topology" ]
63,046,895
https://en.wikipedia.org/wiki/Silvestrol
Silvestrol is a natural product from the flavagline family, with a cyclopenta[b]benzofuran core structure and an unusual dioxane ether side chain, which is found in the bark of trees from the genus Aglaia, especially Aglaia silvestris and Aglaia foveolata. Bioactivity It acts as a potent and selective inhibitor of the RNA helicase enzyme eIF4A, and has both broad-spectrum antiviral activity against diseases such as Ebola and coronaviruses, and anti-cancer properties, which makes it of considerable interest in medical research. However, as it cannot be extracted from tree bark in commercial amounts and is prohibitively complex to produce synthetically, practical applications have focused more on structurally simplified analogues such as CR-31-B. See also Rocaglamide References Antiviral drugs Dioxanes Benzofuran ethers at the benzene ring
Silvestrol
[ "Biology" ]
202
[ "Antiviral drugs", "Biocides" ]
53,219,227
https://en.wikipedia.org/wiki/Prepro-alpha-factor
Prepro-alpha-factor is a precursor to alpha-factor secreted by MAT alpha Saccharomyces cerevisiae, which is a small peptide mating pheromone. Prepro-alpha-factor is translocated into the endoplasmic reticulum and glycosylated at three sites as part of the chemical reaction leading to the formation of the alpha-factor. References Pheromones Fungal proteins
Prepro-alpha-factor
[ "Chemistry" ]
92
[ "Molecular biology stubs", "Chemical ecology", "Pheromones", "Molecular biology" ]
53,219,445
https://en.wikipedia.org/wiki/Electronic%20Transactions%20on%20Numerical%20Analysis
Electronic Transactions on Numerical Analysis is a peer-reviewed scientific open access journal publishing original research in applied mathematics with a focus on numerical analysis and scientific computing. It is published by Kent State University and the Johann Radon Institute for Computational and Applied Mathematics (RICAM). Articles for this journal are published in electronic form on the journal's web site. The journal is one of the oldest scientific open access journals in mathematics. The Electronic Transactions on Numerical Analysis was founded in 1992 by Richard S. Varga, Arden Ruttan, and Lothar Reichel (all Kent State University) as a fully open access journal (no fee for readers or authors). The first issue appeared in September 1993. The current editors-in-chief are Lothar Reichel and Ronny Ramlau. Editors-in-chief 1993–2008: Richard S. Varga 1993–1998: Arden Ruttan 2005–2013: Daniel Szyld since 1993: Lothar Reichel since 2010: Ronny Ramlau No-fee open access and copyright Since its foundation, the journal has followed an open access policy that allows free access to readers and charges no fee for authors ("diamond open access"). Authors transfer the copyright of published articles to the editors. This publication model rests on the one hand on support from the editing institutions and on donations; on the other hand, the editing process is carried out by volunteers from the scientific community. Abstracting and indexing The journal is abstracted and indexed in the Science Citation Index Expanded, Mathematical Reviews, and Zentralblatt MATH. According to the Journal Citation Reports, the journal has a 2015 impact factor of 0.671 (highest: 1.261 in 2012). External links Official website (RICAM) Official website (Kent) References Academic journals established in 1992 Applied mathematics journals Open access journals English-language journals
Electronic Transactions on Numerical Analysis
[ "Mathematics" ]
375
[ "Applied mathematics", "Applied mathematics journals" ]
53,222,035
https://en.wikipedia.org/wiki/Bertram%20Martin%20Wilson
Prof Bertram Martin Wilson FRSE (14 November 1896, London – 18 March 1935, Dundee, Scotland) was an English mathematician, remembered primarily as a co-editor, along with G. H. Hardy and P. V. Seshu Aiyar, of Srinivasa Ramanujan's Collected Papers. (It seems probable that Wilson did not know about Ramanujan's lost notebook, which was probably passed by G. H. Hardy to G. N. Watson some years after Wilson's death.) Life He was born in London on 14 November 1896, the son of Rev Alfred Henry Wilson and his wife, Ellen Elizabeth Vincent. Wilson was educated at King Edward's School, Birmingham and then studied Mathematics at Trinity College, Cambridge, graduating MA. In 1920 he was appointed as a Lecturer in Mathematics at the University of Liverpool, and was promoted to Senior Lecturer in 1926. He remained there for slightly more than thirteen years, working under three professors, Frank Stanton Carey (1860–1928), J. C. Burkill, and E. C. Titchmarsh. In 1933 Wilson was appointed Professor of Pure and Applied Mathematics at University College, Dundee as successor to John Edward Aloysius Steggall, who retired. On 5 March 1934 Wilson was elected a Fellow of the Royal Society of Edinburgh. His proposers were Sir Edmund Taylor Whittaker, James Hartley Ashworth, Nicholas Lightfoot and Edward Thomas Copson. In 1934 he gave a talk, Ramanujan's Note-Books and their Place in Modern Mathematics, at the third Colloquium of the Edinburgh Mathematical Society at the University of St Andrews. Wilson died on 18 March 1935 following a brief illness. Family In 1930 he married Margaret Fancourt Mitchell. Subsequent history for Ramanujan's Notebooks G. N. Watson and B. M. Wilson never completed their project of editing Ramanujan's notebooks (not including the "lost" notebook), but Bruce C. Berndt completed their project in a 5-volume publication Ramanujan's Notebooks, Parts I–V. 
The following quote refers to the three notebooks involved in Watson and Wilson's project: Berndt benefited substantially from Wilson's considerable efforts in editing Ramanujan's second notebook. Because some journals require the permission of each author when an article is to be published, for some of Berndt's work he was not permitted to put Wilson or Watson as a coauthor. However, Berndt published several articles with Wilson as a coauthor. Selected publications References 1896 births 1935 deaths 20th-century English mathematicians Mathematical analysts People educated at King Edward's School, Birmingham Academics of the University of Liverpool Academics of the University of Dundee Fellows of the Royal Society of Edinburgh Alumni of Trinity College, Cambridge
Bertram Martin Wilson
[ "Mathematics" ]
572
[ "Mathematical analysis", "Mathematical analysts" ]
53,223,464
https://en.wikipedia.org/wiki/Pharyngeal%20aspiration
Pharyngeal aspiration is the introduction of a substance into the pharynx and its subsequent aspiration into the lungs. It is used to test the respiratory toxicity of a substance in animal testing. It began to be used in the late 1990s. Pharyngeal aspiration is widely used to study the toxicity of a wide variety of substances, including nanomaterials such as carbon nanotubes. Pharyngeal aspiration has benefits over the alternative methods of inhalation and intratracheal instillation, the introduction of the substance directly into the trachea. Inhalation studies have the disadvantages that they are expensive and technically difficult, the dose and location of the substance has poor reproducibility, they require large amounts of material, and they potentially allow exposure to laboratory workers and to the skin of laboratory animals. Intratracheal instillation overcomes some of these difficulties, but because a needle or tube is needed to access the trachea, it remains technically challenging and causes trauma to the animal, which can be a confounding factor. It also results in a less uniform distribution of the substance than inhalation, and bypasses effects from the upper respiratory tract. In pharyngeal aspiration, the substance is placed in the pharynx, which is higher in the respiratory tract, avoiding the major source of technical difficulty and trauma to the animal. The deposition pattern of pharyngeal aspiration is also more dispersed than that of intratracheal instillation, making it more similar to inhalation, and the lung responses are qualitatively similar. Nevertheless, pharyngeal aspiration still leads to more particle agglomeration than inhalation, making its effects less potent. Methodology Pharyngeal aspiration is often performed on mice and rats. 
Prior to introduction of the substance, the animal is anesthetized and its tongue extended, preventing the animal from swallowing the material and allowing it to be aspirated into the lungs over the course of at least two deep breaths. A liquid suspension of particles in saline solution is usually used, in a typical volume of 50 μL. Sometimes the substance is introduced into the larynx instead of the pharynx to avoid contamination from food particles and other contaminants present in the mouth. References Occupational safety and health Toxicology Pharynx
Pharyngeal aspiration
[ "Environmental_science" ]
491
[ "Toxicology" ]
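Because dosing in such studies is set by the bolus (typically 50 μL), the delivered dose is simple arithmetic on the suspension concentration. A minimal sketch in Python; the concentration and body-weight values are hypothetical, chosen only for illustration, not recommendations:

```python
# Delivered dose from a pharyngeal aspiration bolus:
# mass = suspension concentration x instilled volume,
# usually normalized to the animal's body weight.

volume_ul = 50.0        # typical bolus volume (μL), per the text
conc_ug_per_ul = 0.8    # hypothetical particle concentration (μg/μL)
body_weight_g = 25.0    # hypothetical mouse body weight (g)

dose_ug = conc_ug_per_ul * volume_ul        # total mass delivered (μg)
dose_ug_per_g = dose_ug / body_weight_g     # weight-normalized dose (μg/g)

print(f"{dose_ug:.1f} ug total, {dose_ug_per_g:.2f} ug/g body weight")
```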