List of women botanists
https://en.wikipedia.org/wiki/List%20of%20women%20botanists
This is a list of women botanists.
See also
List of botanists
Lists of women
References
External links
Women in Botany - Interactive database containing biographical and bibliographical information on more than 10,000 women in all fields of botany
Lists of botanists
Botanists
Rhizopus oryzae
https://en.wikipedia.org/wiki/Rhizopus%20oryzae
Rhizopus oryzae is a filamentous heterothallic microfungus that occurs as a saprotroph in soil, dung, and rotting vegetation. This species is very similar to Rhizopus stolonifer, but it can be distinguished by its smaller sporangia and air-dispersed sporangiospores. It differs from R. oligosporus and R. microsporus by its larger columellae and sporangiospores. The many strains of R. oryzae produce a wide range of enzymes such as carbohydrate-digesting enzymes and polymers, along with a number of organic acids, ethanol and esters, giving it useful properties in the food, biodiesel and pharmaceutical industries. It is also an opportunistic pathogen of humans, causing mucormycosis.
History and taxonomy
Rhizopus oryzae was discovered by Frits Went and Hendrik Coenraad Prinsen Geerligs in 1895. The genus Rhizopus (family Mucoraceae) was erected in 1821 by the German mycologist Christian Gottfried Ehrenberg to accommodate Mucor stolonifer and Rhizopus nigricans as distinct from the genus Mucor. The genus Rhizopus is characterized by stolons, rhizoids, sporangiophores arising from the points where rhizoids are attached, globose sporangia with columellae, and striated sporangiospores. In the mid-1960s, researchers divided the genus based on temperature tolerance; numerical methods applied in the early 1970s arrived at similar conclusions. R. oryzae was placed in a distinct section because it grew well at 37 °C but failed to grow at 45 °C. In the past, strains were identified by isolating active components of the species that were commonly found in food and alcoholic drinks in Indonesia, China, and Japan. There are approximately 30 synonyms, the most common being R. arrhizus. Scholer popularized the name R. oryzae because he considered R. arrhizus an extreme form of R. oryzae.
Growth and morphology
Rhizopus oryzae grows quickly at optimal temperatures, at 1.6 mm per hour (nearly 0.5 μm per second, fast enough for hyphal elongation to be observed directly in real time under the microscope). R. oryzae can grow at temperatures from 7 °C to 44 °C, with an optimum of 37 °C. Growth is very poor from 10 °C to 15 °C and negligible at 45 °C. There is substantial growth in media containing 1% NaCl, very poor growth at 3% NaCl, and none at 5% NaCl. R. oryzae favors slightly acidic media: good growth is observed at pH 6.8, while in the range 7.7-8.1 growth is very poor. Most amino acids—with the exception of L-valine—promote R. oryzae growth, with L-tryptophan and L-tyrosine being the most effective. It also grows well on mineral nitrogen sources, except nitrate, and can utilize urea.
Rhizopus oryzae has variable sporangiophores. They can be straight or curved, swollen or branched, and their walls can be smooth or slightly rough. The colour of the sporangiophores ranges from pale brown to brown. Sporangiophores grow 210-2500 μm in length and 5-18 μm in diameter. The sporangia of R. oryzae are globose or subglobose with spinous walls, black when mature, and 60-180 μm in diameter. They can be distinguished from those of Rhizopus stolonifer by their smaller sporangia and spores. The optimal conditions for sporangium production are temperatures between 30 °C and 35 °C and low water levels. Sporulation is stimulated by amino acids (except L-valine) when grown in light, while in darkness only L-tryptophan and L-methionine are stimulatory. The columellae are globose, subglobose, or oval in shape; the wall is usually smooth and pale brown in colour, and the diameter ranges from 30-110 μm. Sporangiospores are elliptical, globose, or polygonal; they are striated and grow 5-8 μm in length. Dormant and germinated sporangiospores show deep furrows and prominent ridges in a pattern that distinguishes them from those of R. stolonifer. The germination of sporangiospores can be induced by the combined action of L-proline and phosphate ions; L-ornithine, L-arginine, D-glucose and D-mannose are also effective. Optimal germination occurs on media containing D-glucose and mineral salts.
R. oryzae has abundant, root-shaped rhizoids. Zygospores are produced by diploid cells when sexual reproduction occurs under nutrient-poor conditions. They range in colour from red to brown, are spherical or laterally flattened, and measure 60-140 μm. At high nutrient levels, R. oryzae reproduces asexually, producing azygospores. The stolons of R. oryzae are smooth or slightly rough, almost colourless or pale brown, and 5-18 μm in diameter.
The chlamydospores are abundant; they are globose, elliptical, or cylindrical, ranging from 10-24 μm in diameter. Colonies of R. oryzae are white initially, becoming brownish with age, and can grow to about 1 cm thick.
Habitat and ecology
Rhizopus oryzae can be found in various soils across the world. For example, it has been found in India, Pakistan, New Guinea, Taiwan, Central America, Peru, Argentina, Namibia, South Africa, Iraq, Somalia, Egypt, Libya, Tunisia, Israel, Turkey, Spain, Italy, Hungary, Czech Republic, Slovakia, Germany, Ukraine, British Isles, and the USA. The soils where R. oryzae has been isolated are varied ranging from grassland, cultivated soils under lupin, corn, wheat, groundnuts, other legumes, sugar canes, rice, citrus plantations, steppe type vegetation, alkaline soils, salt-marshes, farm manure soils, to sewage filled soils. The pH of the soils where the species has been isolated typically range from 6.3 to 7.2.
Rhizopus oryzae is often identified as R. arrhizus when isolated from foods. It is found in rotting fruits and vegetables, where it is often called R. stolonifer. Unlike species such as R. stolonifer, R. oryzae is common under tropical conditions. In East Asia, it is common in peanuts; for instance, it was isolated from 21% of peanut kernels from Indonesia. It is present in maize, beans, sorghum, cowpeas, pecans, hazelnuts, pistachios, wheat, barley, potatoes, sapodillas, and various other tropical foods. Maize meal on which isolates of R. oryzae had been grown was found to be toxic to ducklings and rats, causing growth depression.
Pathogenicity
Rhizopus oryzae is one of the most common causes of mucormycosis, a disease characterized by hyphae growing within and around blood vessels. The causal agents of mucormycosis may also produce toxins such as agroclavine, which is toxic to humans, sheep and cattle. The infection is rare and usually occurs in immunocompromised individuals. Common risk factors associated with primary cutaneous mucormycosis include ketoacidosis, neutropenia, acute lymphoblastic leukemia, lymphomas, systemic steroids, chemotherapy, and dialysis. Treatment includes amphotericin B, posaconazole, itraconazole, and fluconazole. The majority of infections are rhinocerebral. At the same time, R. oryzae has been reported in the literature to exhibit antibiotic activity against some bacteria.
Its pathogenicity towards plants is attributed to its large number of carbohydrate-digesting enzymes.
Physiology and industrial uses
Rhizopus oryzae is involved in steroid transformations and produces 4-desmethyl steroids, which has been useful in the fermentation industry. The carbon source influences the ratio of polar to neutral lipids. The mycelium of R. oryzae contains lipids, and the highest lipid content occurs when it is grown on fructose. The highest unsaturated fatty acid content is observed at 30 °C and the lowest at 15 °C. Proteolytic activity is observed best at pH 7 and 35 °C. Pyridoxine and thiamine favor proteinase production. R. oryzae can degrade aflatoxin A1 to isomeric hydroxy compounds and aflatoxin G1 to the fluorescent metabolite aflatoxin A1. Various factors influence the production of dextro-lactic acid, the production of fumaric acid, and the metabolism of R. oryzae. For example, growth at 40 °C favors glucose consumption but negatively affects d-lactic acid production. A glucose concentration of 15% is needed for optimal production of d-lactic acid. Fumaric acid production was suppressed in media containing more than 6 grams of NH4NO3 per liter, a condition favorable to d-lactic acid production.
Rhizopus oryzae is considered GRAS (generally recognized as safe) by the FDA and is thus recognized as safe to use industrially; it can consume a range of carbon sources. During fermentation, R. oryzae produces amylase, lipase, and protease activity, increasing its ability to use many compounds as energy and carbon sources. Historically, it has been used in fermentation, specifically to ferment soybeans into tempeh in Malaysia and Indonesia. Using the same methods as for traditional tempeh, R. oryzae can be inoculated into other cooked legumes such as peas, beans, and fava beans. As in tempeh making, there is an initial bacterial fermentation in the legumes while they are soaked before being cooked. Fermentation incubation lasts 48 hours at 33 °C. After incubation, mycelium can be observed between the legumes, creating a larger, uniform product. Overall, mold fermentation of fruits, grains, nuts, and legumes with R. oryzae produces sensory changes in foods, such as creating acidity, sweetness and bitterness. R. oryzae can produce lactate from glucose at high levels; lactate is used as a food additive and can also degrade plastics. In enzyme-modified cheese products, R. oryzae provides microbial enzymes that break down milk fat and proteins to create powder and paste forms of cheese; specifically, it breaks down cheese curds and acid casein.
Besides cellulases and hemicellulases, other enzymes such as protease, urease, ribonuclease, pectate lyase, and polygalacturonase are found in culture media of R. oryzae. In addition to these enzymes, it can also produce a number of organic acids, alcohols, and esters. Cellulases from R. oryzae can be applied in biotechnology: in the food, brewery and wine, animal feed, textile and laundry, and pulp and paper industries, and in agriculture. R. oryzae can convert both glucose and xylose under aerobic conditions into pure L(+)-lactic acid, with by-products such as xylitol, glycerol, ethanol, carbon dioxide and fungal biomass. Endo-xylanase is a key enzyme for xylan depolymerization and has been produced by R. oryzae fermentation of various xylan-containing agricultural by-products such as wheat straw, wheat stems, cotton bagasse, hazelnut shells, corn cobs, and oat sawdust. Pectinases are required for the extraction and clarification of fruit juices and wines; the extraction of oils, flavors and pigments from plant material; the preparation of cellulose fibers for linen, jute and hemp manufacture; and coffee and tea fermentations.
R. oryzae can break down the starch content of rice plants and therefore shows amylolytic activity. It has also been reported to produce extracellular isoamylase, which is used in the food industries. Isoamylase was found to saccharify potato starch, arrowroot, tamarind kernel, tapioca, and oat; this saccharifying ability is highly applicable in the sugar production industries. The proteases found in R. oryzae are highly useful commercially, with increasing application in the food, pharmaceutical, detergent, leather and tanning industries, as well as in silver recovery and peptide synthesis. One strain of R. oryzae was found to secrete an alkaline serine protease that shows high pH stability from 3 to 6 but poor thermostability. Lipases extracted from R. oryzae have been consumed as digestive aids without adverse reactions. Lipases hydrolyze fats and oils, releasing free fatty acids along with diacylglycerols, monoacylglycerols and glycerol. Lipases have found biotechnology applications because of their ability to catalyze synthetic reactions in non-aqueous solutions. One study reported the expression of a fungal 11 alpha-steroid hydroxylase from R. oryzae, which can be used to perform the 11 alpha-hydroxylation of the steroid skeleton and has simplified steroid drug production.
R. oryzae can produce intracellular ribonuclease in a metal ion-regulated liquid medium, with the addition of calcium and molybdenum stimulating ribonuclease production. R. oryzae strain ENHE, isolated from contaminated soil, was found to be capable of tolerating and removing pentachlorophenol.
R. oryzae is favoured for producing L(+)-lactic acid because the fungal cells better resist high concentrations of accumulated lactic acid and require fewer nutrients than the commonly used bacterial processes. R. oryzae thus offers an efficient approach to improving the lactic acid production process, facilitating repeated reuse of fungal cells for long-term lactic acid production. Ethanol is the main by-product of the R. oryzae fermentation process during L-lactic acid production. R. oryzae can also be used as a biocatalyst for ester production in organic solvents. Dry mycelium of four R. oryzae strains proved effective for catalysing the synthesis of different flavor esters; for example, the pineapple flavour ester butyl acetate was produced by the esterification of acetic acid with butanol. This flavor compound can be used in the food, cosmetic and pharmaceutical industries. In the biodiesel industry, biodiesel fuel in the form of fatty acid methyl esters is produced by the esterification of plant oil or animal fat with methanol, giving a renewable fuel resource compared to traditional petroleum-based fuels. Production of biodiesel from plant oils using cells of R. oryzae immobilized within biomass support particles was investigated for the methanolysis of soybean oil; olive oil or oleic acid was found to be effective for enhancing methanolysis activity, a promising result for the biodiesel industry.
R. oryzae has also been investigated as a bioremediation agent and fluoride sequestrant.
References
Fungal fruit diseases
Carrot diseases
Mango tree diseases
Mucoraceae
Fungi described in 1895
Fungal pathogens of humans
Fungus species
Peter Bossaerts
https://en.wikipedia.org/wiki/Peter%20Bossaerts
Peter L. Bossaerts (10 January 1960 in Antwerp, Belgium) is a Belgian-American economist. He is considered one of the pioneers and leading researchers in neuroeconomics and experimental finance.
He is Professor of Neuroeconomics at the University of Cambridge.
Life
Bossaerts grew up in Belgium and studied at the Universitaire Faculteiten Sint-Ignatius Antwerpen (today the University of Antwerp) from 1977 to 1982, where he obtained a Licentiate (bachelor's) and Doctorandus (master's) in applied economics. After coursework towards a PhD in statistics at the Vrije Universiteit Brussel, he earned a PhD in financial economics at the University of California under the supervision of Richard Roll.
He began his academic career as a research associate at Carnegie Mellon University, then worked as an assistant professor in finance from 1986 to 1990. He joined the California Institute of Technology (Caltech) in 1990 as an assistant professor and became an associate professor in 1994, full professor in 1998, William D. Hacker Professor of Economics and Management in 2003, before being appointed as Dean ("Division Chair") of the Humanities and Social Sciences. From 2007 to 2009, he was Swiss Finance Institute professor at the Swiss Federal Institute of Technology, Lausanne (EPFL). In 2013, he moved from Caltech to the Eccles School of Business of the University of Utah, and in 2016 on to the University of Melbourne (Australia), where he was professor in experimental finance and decision neuroscience, and was awarded a Redmond Barry Distinguished Professorship. He was co-head of the Brain, Mind and Markets Laboratory and was Honorary Professor at the Florey Institute of Neuroscience and Mental Health. In 2022, he moved to the University of Cambridge, UK, where he is now the Leverhulme International Professor of Neuroeconomics at the Faculty of Economics.
Bossaerts is an elected Fellow of the Econometric Society, the Society for the Advancement of Economic Theory, and the Academy of the Social Sciences in Australia. He was president of the Society for Neuroeconomics and the Society for Experimental Finance.
He has published numerous scientific articles in well-known field journals such as Econometrica, Journal of Political Economy, Journal of Finance, Review of Financial Studies, Neuron, Journal of Neuroscience, Econometric Theory, Mathematical Finance, as well as general science journals such as Science and Proceedings of the National Academy of Sciences. He summarised his earlier research on asset pricing and experimental finance in the 2002 book "The Paradox of Asset Pricing".
He is the father of two children and lives in Eltham, Victoria (Australia).
Research
Bossaerts is one of the pioneers of experimental finance: the use of controlled experiments to test theories in finance and designs for better allocation of risk and/or aggregation of information. He advanced the approach to test the core dynamic model used in finance, macroeconomics and central banking to understand the link between asset prices, aggregate income, aggregate consumption, and business cycles (the "Lucas" model). This allowed him to test some of the major asset-pricing models (the CAPM, the Lucas model, and DSGE models) that are used widely throughout academia, industry and government, in teaching, in the analysis of historical field data, and in setting policy and regulation. It also allowed him to try novel market designs, such as combinatorial double auctions, to improve the allocation of risk, and to initiate a unique program of research and teaching on algorithmic (automated, robot) trading.
Bossaerts pioneered neuroeconomics, in which decision theory and game theory are brought to bear on interpreting computational signals in the brain. This has led to the emergence of the new fields of decision neuroscience and computational neuropsychiatry.
In the past, his work has focused on decision making under uncertainty, where uncertainty is understood as in probability theory. Recently, he has been studying uncertainty that is generated by computational complexity.
References
External links
Resume (PDF; 202 kB) on the website of the California Institute of Technology
Resume on the website of the University of Melbourne
1960 births
Living people
Belgian economists
21st-century American economists
20th-century American economists
Fellows of the Econometric Society
American financial economists
Corporate finance theorists
Behavioral finance
Experimental economics
Klafter
https://en.wikipedia.org/wiki/Klafter
The klafter is a historical unit of length, volume and area that was used in Central Europe.
Unit of length
As a unit of length, the klafter was derived from the span of a man's outstretched arms and was traditionally about 1.80 metres (m). In Austria, its length was, for example, 1.8965 m, in Prussia 1.88 m. In Bavaria, however, a klafter was only 1.751155 m, in Hesse it was significantly larger at 2.50 m. The Viennese or Lower Austrian klafter was fixed by Rudolf II as a measure of length as of 19 August 1588. When, in 1835, the Swiss units were defined using the metric system, 1 Swiss klafter (of 6 Swiss feet each of 0.30 m) corresponded exactly to 1.80 m.
In Aachen, Baden, Bavaria, Bohemia, Hamburg, Leipzig, Poland, Trier and Zürich the klafter was exactly six feet, but in the Canton of Fribourg it measured 10 feet.
In nautical units of depth, the klafter corresponds to the fathom.
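Since the klafter's length varies by region, any conversion needs a per-region factor. A minimal Python sketch using the values quoted above (the region labels are illustrative, not standard identifiers):

```python
# Regional klafter lengths in metres, taken from the values quoted above.
KLAFTER_M = {
    "Vienna": 1.8965,      # Viennese / Lower Austrian klafter
    "Prussia": 1.88,
    "Bavaria": 1.751155,
    "Hesse": 2.50,
    "Switzerland": 1.80,   # 6 Swiss feet of 0.30 m (1835 definition)
}

def klafter_to_metres(n: float, region: str) -> float:
    """Convert n klafters of the given region to metres."""
    return n * KLAFTER_M[region]

def metres_to_klafter(m: float, region: str) -> float:
    """Convert metres to klafters of the given region."""
    return m / KLAFTER_M[region]
```

For example, 6 Swiss klafters come out to 10.8 m, and the Swiss klafter divided by its 6 feet recovers the 0.30 m Swiss foot.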
Baseline
The survey of Austria-Hungary began in 1762 with the construction of the Vienna Neustadt Baseline (Wiener Neustädter Grundlinie) which was 6,410, later 5,000, klafters long, represented by 5 measuring rods of 1 klafter in length made of varnished wood.
Unit of volume
The old unit of dry volume for split firewood, or Scheitholz, was based on this unit of length. A klafter of wood corresponded to a stack of wood one klafter long and one klafter high; the depth of the pile corresponded to the length of the logs, which as a rule was 3 feet, that is, 0.5 klafters. The volume of a pile of logs was therefore only 0.5 cubic klafters. This in turn corresponded, depending on the region, to 3 to 4 steres or approximately 2 to 3 m³ of wood. The old Prussian klafter corresponded to 3.339 m³; in Austria, a klafter was equivalent to 3.386 m³. By comparison, the North American cord, used to measure firewood and pulpwood, is slightly larger at 3.62 m³.
In Switzerland, Werdenfelser Land and parts of Lower Franconia, a klafter of logs corresponds to 3.0 m³ (steres) of stacked firewood since the introduction of the metric system. Usually the logs are 1 m long. One klafter of firewood is thus equivalent to about 2.2 m³.
Hay was also sometimes measured in klafters in the 19th century.
The cubic klafter was not standardised as the length of a foot varied depending on the region. The cubic klafter used for wood could also differ. Here is an example of the Austrian units.
1 foot (Viennese) = 140.131 Paris lines = 0.3161 metres
1 cubic klafter (Viennese) = 216 cubic feet = 6.8224 cubic metres
The cubic foot generally had 1,728 cubic inches, each with 1,728 cubic lines, each with 1,728 cubic points.
1 cubic klafter (Viennese) = 6 cubic klafter feet of 12 cubic klafter inches of 12 cubic klafter lines of 12 cubic klafter points = 216 cubic feet of 1,728 cubic inches of 1,728 cubic lines, etc.
1 Viennese cubic klafter = 6.82234457176 cubic metres
1 cubic klafter foot = 1.1370574286 cubic metres
1 cubic klafter inch = 94,754,785.7 cubic millimetres
1 cubic klafter line = 7,896,232.1 cubic millimetres
1 cubic klafter point = 658,019.3 cubic millimetres
1 cubic foot = 0.0315749412 cubic metres
1 cubic inch = 18,249.3 cubic millimetres
1 cubic line = 10.56 cubic millimetres
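The factor-of-12 subdivision chain listed above (6 cubic klafter feet to the cubic klafter, then 12 at each further step down to points) can be checked directly; a short Python sketch:

```python
CUBIC_KLAFTER_M3 = 6.82234457176   # 1 Viennese cubic klafter in cubic metres
M3_TO_MM3 = 1e9                    # cubic metres -> cubic millimetres

cubic_klafter_foot = CUBIC_KLAFTER_M3 / 6                  # m³; listed as 1.1370574286
cubic_klafter_inch = cubic_klafter_foot / 12 * M3_TO_MM3   # mm³; listed as 94,754,785.7
cubic_klafter_line = cubic_klafter_inch / 12               # mm³; listed as 7,896,232.1
cubic_klafter_point = cubic_klafter_line / 12              # mm³; listed as 658,019.3
```

Each successive division reproduces the listed figures to within rounding.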
The Rahmklafter, as the unit of timber measurement was called in Austria, was defined for long and short firewood as follows:
1 Rahmklafter of long firewood = 6 feet long and 6 feet high, 1 1/4 ells of log length, about 111 cubic feet
1 Rahmklafter of short firewood = 6 feet long and 6 feet high, 1 ell of log length, about 90 cubic feet
Two klafters were counted for one Stoß or livestock unit.
Unit of area
In Austria, 1 yoke (Joch), with which the size of fields was measured, comprised 1,600 square klafters (for example, a field of 8 by 200 klafters), i.e. about 5,754 m2 or 0.575 ha. 1 square klafter (Viennese) was equivalent to 3.5979 square metres.
In Croatia, the square klafter was used as unit of area and equalled 3.596652 m2. It is sometimes still used today.
In the Swiss Chur Rhine Valley and the Prättigau, the meadowland was measured in klafters.
In the adjoining Principality of Liechtenstein, the square klafter is still used today for the measurement of land areas. 1 m2 equals 0.27804 square klafters, 1 square klafter equals 3.59665 m2. The klafter as a unit of length was consequently about 1.8965 metres long.
In Darmstadt, 1 square klafter = 100 square feet = 10,000 square inches = 6.25 square metres.
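The area figures above are mutually consistent, which a few lines of Python make explicit (a sketch using the Liechtenstein and Darmstadt values quoted above):

```python
import math

SQ_KLAFTER_M2 = 3.59665   # square klafter in m², as quoted for Liechtenstein

# One square metre in square klafters (the text gives 0.27804):
m2_in_sq_klafter = 1 / SQ_KLAFTER_M2

# The klafter length implied by the square klafter (the text gives about 1.8965 m):
klafter_len_m = math.sqrt(SQ_KLAFTER_M2)

# Darmstadt: 1 square klafter = 100 square feet = 6.25 m², so the local foot was 0.25 m.
darmstadt_foot_m = math.sqrt(6.25 / 100)
```

The square root of the Liechtenstein square klafter indeed recovers the Viennese klafter length of about 1.8965 m.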
Similar units in other countries
Conversion table of 1838
Canton of Neuchâtel, Canton of Bern (French-speaking part)
1 toise = 10 feet (pieds)
Canton of Valais (French-speaking part)
1 klafter = 6 French feet (pieds de roi)
Canton of Vaud (metric based from 1822)
1 toise = 10 feet (pieds) = 3.00 metres
1 toise carrée (square) = 100 square feet = 9.00 square metres
1 toise cube or toise courante (cubic) = 1,000 cubic feet = 27 cubic metres
Canton of Ticino
1 = 6 feet (piedi) = 1.808 metres
1 tesa = 6 feet (piedi) = 1.80 metres (introduced in 1851)
France
1 toise usuelle = 6 pieds = 2 metres = 1.026148 Parisian toise (old)
Piedmont during the French reign
1 tesa = 5 piedi manuali = 759.17 Paris lines = 3.0826 metres
Milan under French occupation
1 Cavese di Modena, Modenese klafter = 6 Modenese feet (1 foot = 281.2 Paris lines = 0.63433 m)
Russia
1 sazhen = 3 arshin = 2.13356 metres
Poland
1 sążeń = 8 feet
Spain and former colonies
1 toesa, braza, estado = 2 varas = 6 pies
Portugal, also Brazil (value deviating)
1 braça = 2 varas = 8 polegadas
Italy, Spain, France, North Coast Africa
1 cañe, xanne (great ell)
See also
Cord (unit), North American measure of timber volume
Hoppus, British unit of timber volume
Orders of magnitude (length)
Toise, French six-foot unit of length
Yoke (unit of measurement)
References
External links
Obsolete units of measurement
Units of length
Human-based units of measurement
Obsolete Croatian units of measurement
Units of measurement of the Holy Roman Empire
Jacobi transform
https://en.wikipedia.org/wiki/Jacobi%20transform
In mathematics, the Jacobi transform is an integral transform named after the mathematician Carl Gustav Jacob Jacobi, which uses Jacobi polynomials as kernels of the transform.
The Jacobi transform of a function f(x) is
J\{f\}(n) = \int_{-1}^{1} (1-x)^{\alpha} (1+x)^{\beta} \, P_n^{(\alpha,\beta)}(x) \, f(x) \, dx
The inverse Jacobi transform is given by
f(x) = \sum_{n=0}^{\infty} \frac{1}{\delta_n} \, J\{f\}(n) \, P_n^{(\alpha,\beta)}(x), \qquad \delta_n = \frac{2^{\alpha+\beta+1}\,\Gamma(n+\alpha+1)\,\Gamma(n+\beta+1)}{(2n+\alpha+\beta+1)\,\Gamma(n+\alpha+\beta+1)\,n!}
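A small numerical sketch of the transform pair, assuming the standard definition J{f}(n) = ∫₋₁¹ (1−x)^α (1+x)^β P_n^(α,β)(x) f(x) dx and its orthogonality-based inversion. The Jacobi polynomials are generated by their standard three-term recurrence and the integral is approximated by composite Simpson quadrature (an illustration, not a library implementation):

```python
import math

def jacobi_poly(n, a, b, x):
    """P_n^{(a,b)}(x) by the standard three-term recurrence."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, (a + 1) + (a + b + 2) * (x - 1) / 2
    for k in range(2, n + 1):
        c1 = 2 * k * (k + a + b) * (2 * k + a + b - 2)
        c2 = (2 * k + a + b - 1) * ((2 * k + a + b) * (2 * k + a + b - 2) * x
                                    + a * a - b * b)
        c3 = 2 * (k + a - 1) * (k + b - 1) * (2 * k + a + b)
        p_prev, p = p, (c2 * p - c3 * p_prev) / c1
    return p

def jacobi_transform(f, n, a, b, steps=4000):
    """J{f}(n): weighted integral of f against P_n^{(a,b)} on [-1, 1] (Simpson)."""
    h = 2.0 / steps
    total = 0.0
    for i in range(steps + 1):
        x = -1.0 + i * h
        w = 1 if i in (0, steps) else (4 if i % 2 else 2)
        total += w * (1 - x) ** a * (1 + x) ** b * jacobi_poly(n, a, b, x) * f(x)
    return total * h / 3

def delta(n, a, b):
    """Normalisation constant delta_n appearing in the inversion formula."""
    return (2 ** (a + b + 1) * math.gamma(n + a + 1) * math.gamma(n + b + 1)
            / ((2 * n + a + b + 1) * math.gamma(n + a + b + 1)
               * math.factorial(n)))

# Round trip: expand f(x) = x² in P_n^{(1,1)} and evaluate the series at x = 0.3.
f = lambda x: x * x
coeffs = [jacobi_transform(f, n, 1.0, 1.0) / delta(n, 1.0, 1.0) for n in range(4)]
value = sum(c * jacobi_poly(n, 1.0, 1.0, 0.3) for n, c in enumerate(coeffs))
# value ≈ 0.09 = f(0.3): the finite expansion reconstructs the polynomial exactly.
```

Since x² is a polynomial of degree 2, only the coefficients for n = 0 and n = 2 are nonzero and the truncated inverse series reproduces f exactly (up to quadrature error).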
Some Jacobi transform pairs
References
Integral transforms
Mathematical physics
Dicarbonyl(acetylacetonato)rhodium(I)
https://en.wikipedia.org/wiki/Dicarbonyl%28acetylacetonato%29rhodium%28I%29
Dicarbonyl(acetylacetonato)rhodium(I) is an organorhodium compound with the formula Rh(O2C5H7)(CO)2. The compound consists of two CO ligands and an acetylacetonate. It is a dark green solid that dissolves in acetone and benzene, giving yellow solutions. The compound is used as a precursor to homogeneous catalysts.
It is prepared by treating rhodium carbonyl chloride with sodium acetylacetonate in the presence of base:
[(CO)2RhCl]2 + 2 NaO2C5H7 → 2 Rh(O2C5H7)(CO)2 + 2 NaCl
The complex adopts square planar molecular geometry. The molecules stack with Rh---Rh distances of about 326 pm. As such, it is representative of a linear chain compound.
References
Organorhodium compounds
Homogeneous catalysis
Carbonyl complexes
Acetylacetonate complexes
Rhodium(I) compounds
Laguerre transform
https://en.wikipedia.org/wiki/Laguerre%20transform
In mathematics, the Laguerre transform is an integral transform named after the mathematician Edmond Laguerre, which uses generalized Laguerre polynomials as kernels of the transform.
The Laguerre transform of a function f(x) is
L\{f\}(n) = \int_{0}^{\infty} e^{-x} \, x^{\alpha} \, L_n^{(\alpha)}(x) \, f(x) \, dx
The inverse Laguerre transform is given by
f(x) = \sum_{n=0}^{\infty} \frac{n!}{\Gamma(n+\alpha+1)} \, L\{f\}(n) \, L_n^{(\alpha)}(x)
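A numerical sketch in the same spirit, assuming the standard definition L{f}(n) = ∫₀^∞ e^(−x) x^α L_n^(α)(x) f(x) dx and its orthogonality-based inversion. The generalized Laguerre polynomials come from their three-term recurrence, and the integral is truncated and approximated by composite Simpson quadrature (an illustration only):

```python
import math

def laguerre_poly(n, a, x):
    """Generalized Laguerre polynomial L_n^{(a)}(x) by the three-term recurrence."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, 1.0 + a - x
    for k in range(2, n + 1):
        p_prev, p = p, ((2 * k - 1 + a - x) * p - (k - 1 + a) * p_prev) / k
    return p

def laguerre_transform(f, n, a, upper=60.0, steps=6000):
    """L{f}(n): integral of e^{-x} x^a L_n^{(a)}(x) f(x) over [0, inf),
    truncated at `upper` and approximated by composite Simpson quadrature."""
    h = upper / steps
    total = 0.0
    for i in range(steps + 1):
        x = i * h
        w = 1 if i in (0, steps) else (4 if i % 2 else 2)
        total += w * math.exp(-x) * x ** a * laguerre_poly(n, a, x) * f(x)
    return total * h / 3

# Round trip with a = 0: expand f(x) = x² and evaluate the series at x = 1.5.
f = lambda x: x * x
a = 0.0
coeffs = [laguerre_transform(f, n, a) * math.factorial(n) / math.gamma(n + a + 1)
          for n in range(4)]
value = sum(c * laguerre_poly(n, a, 1.5) for n, c in enumerate(coeffs))
# value ≈ 2.25 = f(1.5): x² = 2·L_0 − 4·L_1 + 2·L_2, so the series terminates.
```

The truncation at x = 60 is harmless here because the e^(−x) weight makes the neglected tail vanishingly small.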
Some Laguerre transform pairs
References
Integral transforms
Mathematical physics
Helen Freedhoff
https://en.wikipedia.org/wiki/Helen%20Freedhoff
Helen Sarah Freedhoff (January 9, 1940 – June 10, 2017) was a Canadian theoretical physicist who studied the interaction of light with atoms. She gained her doctorate at the University of Toronto in 1965 and completed a postdoctoral fellowship at Imperial College in London. Freedhoff was the first woman appointed as a physics professor at York University in Toronto, and is believed to have been the only woman professor of theoretical physics in Canada at the time.
Early life and education
Helen Freedhoff was born Helen Sarah Goodman in Toronto on 9 January 1940. Her parents were Ethel (Kohl) and Sholom Goodman and she had two brothers, David and Irving. Her nickname was "Henchy".
In 1957 she graduated from Harbord Collegiate Institute, a downtown public high school with predominantly Jewish students and a history of many earlier notable alumni. Pursuing an academic career in science was unusual for a woman in North America in the post-war 1950s, where young men entered science in great numbers and women were pressured to make way. At Harbord, however, Freedhoff did not face opposition, recalling "In high school it never occurred to me that I would have to play dumb to get dates. Nobody ever really discouraged me. The teachers really encouraged me, and nobody taught me that there was anything wrong with having a career".
Freedhoff enrolled in the Mathematics, Physics, and Chemistry stream at the University of Toronto, one of around 10-15 women among 120 first year students. Originally intending to study mathematics, she found that she preferred physics. Freedhoff was the only woman in her year to major in physics, graduating with the highest marks and being awarded the Governor General's Gold Medal. She did not feel professionally disadvantaged by being the only woman, and felt it could be an advantage to stand out.
Freedhoff had summer jobs in Harold Johns' biophysics lab. Johns was a pioneer of medical biophysics, developing cobalt radiation therapy for cancer in the 1940s. Although she enjoyed her time there, and was interested in the work Harry Welsh was doing on lasers, laboratory work was not her forte. Freedhoff was inspired by Jan Van Kranendonk, a theoretical physicist, who encouraged her to undertake postgraduate studies under his supervision. From then on, she dedicated her career to what she has described as "the exhilaration of scientific research" and teaching. "Basic science," she wrote, "is indeed a high form of culture, no less so than music or literature because it is also useful".
Career and research
Although women gained nearly 20% of the doctoral degrees awarded in physics by the University of Toronto between 1890 and 1933, Freedhoff was only the second woman to gain a PhD in physics after 1934 at the University of Toronto, following Olga Mracek Mitchell in 1962. Freedhoff earned her PhD in 1965 with a dissertation titled Theory of dipole-dipole interaction in coherent radiation processes. Women were awarded only 5% of the physics doctorates at the University of Toronto between 1960 and 1975.
Freedhoff was awarded a postdoctoral fellowship by the National Research Council of Canada, working at Imperial College, London, from 1965 to 1967. She studied means of identifying molecular features of atoms trapped in metals with spectroscopy, work which was partly sponsored by the United States Air Force Office of Scientific Research.
While in London, she wrote to the physics department at York University in Toronto enquiring about job opportunities. In 1967, she was appointed assistant professor in physics there, the university's first woman professor in physics and believed to be Canada's only woman professor in theoretical physics at that time.
Other than a sabbatical year at the Department of Physics of Technion, the Israel Institute of Technology in Haifa from 1986, Freedhoff remained at York University until her retirement in 2005, having published over 40 research papers. She also collaborated with physicists in Australia, which led to Terry Rudolph undertaking his doctoral studies under Freedhoff's supervision in the 1990s. He is a professor of physics at Imperial College, and together with Matthew Pusey and Jonathan Barrett, one of the developers of the PBR theorem, an important development in quantum mechanics named for its three authors. Rudolph, who is Erwin Schrödinger's grandson, delivered one of the eulogies at Freedhoff's funeral.
Personal life
Freedhoff married Stephen Freedhoff when she was around 20. Stephen Freedhoff had graduated with a bachelor of commerce from the University of Toronto in 1957, going on to a career as a chartered accountant and consultant. They had a daughter, Michal Ilana Freedhoff, a son, Yoni Freedhoff, and seven grandchildren. Michal Freedhoff gained a doctorate in solid state chemistry, and went on to serve as a US Congressional Science and Engineering Fellow in the office of Ed Markey. She subsequently worked in a variety of government environmental protection roles, and was appointed Assistant Administrator for the Office of Chemical Safety and Pollution Prevention (OCSPP) of the US Environmental Protection Agency (EPA) in 2021. Yoni Freedhoff is an associate professor of Family Medicine at the University of Ottawa and author. Helen Freedhoff's personal pastimes included reading, playing piano, solving KenKen puzzles, and yoga.
Helen Freedhoff died suddenly on 10 June 2017 at the family's cottage in Muskoka, Ontario, a lakeside area near Toronto.
Selected publications
W.R. Bruce, M.L. Pearson, Helen S. Freedhoff. The Linear Energy Transfer Distributions Resulting from Primary and Scattered X-Rays and Gamma Rays with Primary HVL's from 1.25 mm Cu to 11 mm Pb. Radiation Research, 19 (4): 606-620.
Helen Freedhoff, J. Van Kranendonk (1967). Theory of coherent resonant absorption and emission at infrared and optical frequencies. Can. J. Physics, 45(5): 1833-1859.
Helen S. Freedhoff (1979). Collective atomic effects in resonance fluorescence: Dipole-dipole interaction. Phys. Rev. A 19, 1132.
Helen S. Freedhoff (1982). Collective atomic effects in resonance fluorescence: The "scaling factor". Phys. Rev. A 26, 684.
Helen Freedhoff, Zhidang Chen (1990). Resonance fluorescence of a two-level atom in a strong bichromatic field. Phys. Rev. A 41, 6013.
Tran Quang, Helen Freedhoff (1993). Index of refraction of a system of strongly driven two-level atoms. Phys. Rev. A 48, 3216.
Helen Freedhoff (2004). Evolution in time of an N-atom system. I. A physical basis set for the projection of the master equation. Physical Review A. 69 (1).
References
1940 births
2017 deaths
Canadian women physicists
Jewish Canadian scientists
Jewish physicists
Scientists from Toronto
Theoretical physicists
University of Toronto alumni | Helen Freedhoff | [
"Physics"
] | 1,404 | [
"Theoretical physics",
"Theoretical physicists"
] |
54,329,238 | https://en.wikipedia.org/wiki/Lawbot | Lawbots are a broad class of customer-facing legal AI applications that are used to automate specific legal tasks, such as document automation and legal research. The terms robot lawyer and lawyer bot are used as synonyms to lawbot. A robot lawyer or a robo-lawyer refers to a legal AI application that can perform tasks that are typically done by paralegals or young associates at law firms. However, there is some debate on the correctness of the term. Some commentators say that legal AI is technically speaking neither a lawyer nor a robot and should not be referred to as such. Other commentators believe that the term can be misleading and note that the robot lawyer of the future won't be one all-encompassing application but a collection of specialized bots for various tasks.
Lawbots use various artificial intelligence techniques or other intelligent systems to limit humans' direct ongoing involvement in certain steps of a legal matter. The user interfaces on lawbots vary from smart searches and step-by-step forms to chatbots. Consumer and enterprise-facing lawbot solutions often do not require direct supervision from a legal professional. Depending on the task, some client-facing solutions used at law firms operate under attorney supervision.
Levels of autonomy
The following levels of autonomy (LoA) are suggested for automated AI legal reasoning:
Level 0 (LoA0): No automation for AI legal reasoning
Level 1 (LoA1): Simple assistance automation
Level 2 (LoA2): Advanced assistance automation
Level 3 (LoA3): Semi-autonomous automation
Level 4 (LoA4): Domain automation
Level 5 (LoA5): Fully-autonomous automation
Level 6 (LoA6): Superhuman automation
Examples
Some legal AI solutions are developed and marketed directly to the customers or consumers, whereas other applications are tools for the attorneys at law firms. There are already hundreds of legal AI solutions that operate in a multitude of ways, varying in sophistication and dependence on scripted algorithms.
One notable legal technology chatbot application is DoNotPay. It had started off as an app for contesting parking tickets, but has since expanded to include features that help users with many different types of legal issues, ranging from consumer protection to immigration rights and other social issues.
Impact on the legal industry
In a 2016 report, Deloitte estimated that more than 110,000 law jobs in the United Kingdom alone could disappear within the next twenty years due to automation. This change could result in the creation of more highly skilled jobs and in the reduction of paralegal and temporary positions. Deloitte's report asserts that "there is significant potential for high-skilled roles that involve repetitive processes to be automated by smart and self-learning algorithms". According to Lawyers to Engage, 22% of a lawyer's work and 35% of a legal assistant's work can be automated in the US. Top law schools like Harvard have already begun to integrate artificial intelligence into the curriculum.
Legal tech start-up companies have begun developing applications that assist law firms with completing low-risk legal processes. These applications can enable lawyers to focus on more work that requires their specific expertise.
The automation of processes like contract reviewing, enforcement of negotiations (smart contracts) and client intake (expert systems) allows law firms to streamline their procedures and improve efficiency. In addition, automation benefits small-to-medium law firms that do not have the resources to utilize junior talent on such routine tasks.
The increase of law firms utilizing automated applications could result in legal tech becoming a necessity in the industry. Digital Reasoning CEO Tim Estes stated that those who refuse the opportunity to integrate AI in their workflow are "most at risk."
In 2018, Forbes reported a 713% increase in investments in legal tech. This rapid growth is reflective of law firms beginning to “cede business to… new model legal providers… that meld technological, business and legal expertise.”
Access to law and justice
It has been widely estimated for at least the last generation that all the programs and resources devoted to ensuring access to justice address only 20% of the civil legal needs of low-income people in the United States. Drawing on this experience, in late 2011, the U.S. government-funded Legal Services Corporation decided to convene a summit of leaders to explore how best to use technology in the access-to-justice community. The group adopted a mission for The Summit on the Use of Technology to Expand Access to Justice (Summit) consistent with the magnitude of the challenge: "to explore the potential of technology to move the United States toward providing some form of effective assistance to 100% of persons otherwise unable to afford an attorney for dealing with essential civil legal needs".
In April 2017, joined by Microsoft and Pro Bono Net, the Legal Services Corporation (LSC) announced a pilot program to develop online, statewide legal portals to direct individuals with civil legal needs to the most appropriate forms of assistance.
Technological limitations
Current research in subjects such as computational privacy, explainable machine learning, Bayesian deep learning, knowledge-intensive machine learning, and transfer learning reveals that we do not yet have the technology to enable Level 4 to 6 AI lawbots.
In 2023, OpenLaw began developing a model called Law Bot, which interacts in a conversational way as an attorney. The dialogue format makes it possible for Law Bot to answer follow-up questions, challenge incorrect premises, and reject inappropriate requests. Currently, they try to ensure it is in full compliance with all laws and regulations while conducting further beta testing before releasing it to the general public.
See also
Automation
Artificial intelligence and law
Computational law
Document automation
DoNotPay
Government by algorithm
Legal expert systems
Legal informatics
Legal technology
Robo-advisor
References
External links
CodeX Techindex, Stanford Law School Legal Tech List
LawSites List of Legal Tech Startups
Argument technology
Automation
Practice of law
American inventions
Parallel computing | Lawbot | [
"Engineering"
] | 1,192 | [
"Control engineering",
"Automation"
] |
54,330,143 | https://en.wikipedia.org/wiki/Dichomitus%20hubeiensis | Dichomitus hubeiensis is a crust fungus that was described as a new species in 2013. The fungus is characterized by the cream to straw-yellow pore surface and large pores numbering 1–2 per millimetre. Microscopic features include both inamyloid and indextrinoid skeletal hyphae, the presence of cystidioles and dendrohyphidia in the hymenium, and roughly ellipsoid spores that measure 10–14 by 5.6–7.0 μm. The specific epithet refers to the type locality in Hubei, central China.
References
Fungi described in 2013
Fungi of China
Polyporaceae
Taxa named by Bao-Kai Cui
Fungus species | Dichomitus hubeiensis | [
"Biology"
] | 145 | [
"Fungi",
"Fungus species"
] |
54,330,259 | https://en.wikipedia.org/wiki/Dichomitus%20eucalypti | Dichomitus eucalypti is a crust fungus that was described as a new species in 1985 by Norwegian mycologist Leif Ryvarden. The fruit body of the fungus measures 1–2 cm in diameter, and has a white to pale cream pore surface with small round pores numbering 2–3 per millimetre. D. eucalypti has a dimitic hyphal structure, containing both generative and binding hyphae. Generative hyphae are thin walled with clamps, and measure 2.5–4 μm in diameter. Found in the context, the binding hyphae are solid, hyaline, and measure 2–5 μm. Spores are more or less cylindrical, thin-walled and hyaline, and have dimensions of 7–8.5 by 3–4 μm.
The type was collected in George Gill Range (Northern Territory, Australia), where it was growing on river red gum (Eucalyptus camaldulensis). At the time of its description, D. eucalypti was, in addition to D. epitephrus and D. leucoplacus, the third species of Dichomitus found in Australia.
References
Fungi described in 1985
Fungi of Australia
Polyporaceae
Taxa named by Leif Ryvarden
Fungus species | Dichomitus eucalypti | [
"Biology"
] | 274 | [
"Fungi",
"Fungus species"
] |
54,330,521 | https://en.wikipedia.org/wiki/Pentamethyltantalum | Pentamethyltantalum is a homoleptic organotantalum compound.
It has a propensity to explode when it is melted. Its discovery was part of a sequence that led to Richard R. Schrock's Nobel Prize-winning work on olefin metathesis.
Production
Pentamethyltantalum can be made from the reaction of methyllithium with dichlorotrimethyltantalum. Ta(CH3)3Cl2 is in turn made from tantalum pentachloride and dimethylzinc.
The preparation was inspired by the existence of pentaphenylphosphorus, and the discovery of hexamethyltungsten. The discoverer Richard R. Schrock considered tantalum to be a metallic phosphorus, and thus tried the use of methyllithium.
Properties
Pentamethyltantalum adopts a square pyramidal shape. Ignoring the C-H bonds, the molecule has C4v symmetry. The four carbon atoms at the base of the pyramid are called basal, and the carbon atom at the top is called apical or apex. The distance from tantalum to the apical carbon atom is 2.11 Å, and to the basal carbon atoms is 2.180 Å. The distance from hydrogen to carbon in the methyl groups is 1.106 Å. The angle subtended by two basal carbon bonds is 82.2°, and the angle between the bonds to the apex and a carbon on the base is about 111.7°.
At room temperature pentamethyltantalum can spontaneously explode, so samples are usually stored in a −20 °C freezer.
Reactions
With many carbon-hydrogen bonds near Ta, analogues of pentamethyltantalum are susceptible to alpha elimination.
Excess methyllithium reacts to yield higher coordinated methyl tantalum ions [Ta(CH3)6]− and [Ta(CH3)7]2−.
Pentamethyltantalum in solution forms a stable, insoluble complex when mixed with dmpe, (CH3)2PCH2CH2P(CH3)2.
With nitric oxide it gives a white coloured dimer with formula {TaMe3[ON(Me)NO]2}2 (Me=CH3).
References
Organometallic compounds
Tantalum compounds
Methyl complexes
Organotantalum compounds
Explosive chemicals | Pentamethyltantalum | [
"Chemistry"
] | 502 | [
"Inorganic compounds",
"Organometallic compounds",
"Organic compounds",
"Explosive chemicals",
"Organometallic chemistry"
] |
54,330,661 | https://en.wikipedia.org/wiki/Good%20spanning%20tree | In the mathematical field of graph theory, a good spanning tree of an embedded planar graph is a rooted spanning tree of whose non-tree edges satisfy the following conditions.
there is no non-tree edge where and lie on a path from the root of to a leaf,
the edges incident to a vertex can be divided by three sets and , where,
is a set of non-tree edges, they terminate in red zone
is a set of tree edges, they are children of
is a set of non-tree edges, they terminate in green zone
Formal definition
Source:
Let G be a plane graph. Let T be a rooted spanning tree of G. Let P(v) = (r = u_1, u_2, ..., u_k = v) be the path in T from the root r to a vertex v. The path P(v) divides the children of u_i (1 ≤ i < k), except u_{i+1}, into two groups; the left group L and the right group R. A child x of u_i is in group L and denoted by x_L if the edge (u_i, x) appears before the edge (u_i, u_{i+1}) in clockwise ordering of the edges incident to u_i when the ordering is started from the edge (u_{i-1}, u_i). Similarly, a child x of u_i is in the group R and denoted by x_R if the edge (u_i, x) appears after the edge (u_i, u_{i+1}) in clockwise order of the edges incident to u_i when the ordering is started from the edge (u_{i-1}, u_i). The tree T is called a good spanning tree of G if every vertex v of G satisfies the following two conditions with respect to P(v).
[Cond1] G does not have a non-tree edge (v, u_i), i < k − 1; and
[Cond2] the edges of G incident to the vertex v excluding (u_{k-1}, v) can be partitioned into three disjoint (possibly empty) sets X_v, Y_v and Z_v satisfying the following conditions (a)-(c)
(a) Each of X_v and Z_v is a set of consecutive non-tree edges and Y_v is a set of consecutive tree edges.
(b) Edges of set X_v, Y_v and Z_v appear clockwise in this order from the edge (u_{k-1}, v).
(c) For each edge (v, w) in X_v, w is contained in T(x_L) for some left-group child x_L of a vertex u_i, i < k, and for each edge (v, w) in Z_v, w is contained in T(x_R) for some right-group child x_R of a vertex u_i, i < k.
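Condition [Cond1] forbids non-tree edges that connect a vertex to one of its ancestors in the tree. As an illustration, this single condition can be checked with depth-first-search entry/exit times; the following sketch (function and variable names are illustrative, not from the literature) assumes the tree is given as a child-list mapping:

```python
def violates_cond1(tree_children, root, non_tree_edges):
    """Return True if some non-tree edge joins a vertex to an ancestor in T."""
    tin, tout, clock = {}, {}, 0
    stack = [(root, False)]
    while stack:                          # iterative DFS: record entry/exit times
        v, done = stack.pop()
        if done:
            tout[v] = clock
        else:
            tin[v] = clock
            stack.append((v, True))
            for child in tree_children.get(v, []):
                stack.append((child, False))
        clock += 1

    def is_ancestor(a, b):                # a lies on the root-to-b path in T
        return tin[a] <= tin[b] and tout[b] <= tout[a]

    return any(is_ancestor(u, v) or is_ancestor(v, u) for u, v in non_tree_edges)

# Example: root 0 with children 1 and 2; vertex 1 has child 3.
print(violates_cond1({0: [1, 2], 1: [3]}, 0, [(3, 0)]))  # True: 0 is an ancestor of 3
print(violates_cond1({0: [1, 2], 1: [3]}, 0, [(3, 2)]))  # False: 3 and 2 share no root-to-leaf path
```

Checking [Cond2] additionally requires the cyclic order of edges around each vertex from the planar embedding, which this sketch does not model.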
Applications
Good spanning trees are used in the monotone drawing of graphs and in 2-visibility representations of graphs.
Finding good spanning tree
Every planar graph has an embedding that contains a good spanning tree. A good spanning tree and a suitable embedding can be found in linear time. Not all embeddings of a planar graph contain a good spanning tree.
See also
Spanning tree
Schnyder realizer
References
Computational problems in graph theory
Spanning tree
Planar graphs | Good spanning tree | [
"Mathematics"
] | 466 | [
"Computational problems in graph theory",
"Planar graphs",
"Graph theory",
"Computational problems",
"Computational mathematics",
"Mathematical relations",
"Planes (geometry)",
"Mathematical problems"
] |
54,332,547 | https://en.wikipedia.org/wiki/Data%20blending | Data blending is a process whereby big data from multiple sources are merged into a single data warehouse or data set.
Data blending allows business analysts to cope with the expansion of data that they need to make critical business decisions based on good quality business intelligence. Data blending has been described as different from data integration due to the requirements of data analysts to merge sources very quickly, too quickly for any practical intervention by data scientists. A study done by Forrester Consulting in 2015 found that 52 percent of companies are blending 50 or more data sources and 12 percent are blending over 1,000 sources.
Extract, transform, load
Data blending is similar to extract, transform, load (ETL). Both ETL and data blending take data from various sources and combine them. However, ETL is used to merge and structure data into a target database, often a data warehouse. Data blending differs slightly in that it joins data for a specific use case at a specific time. With some software, such as Google Data Studio, the blended data is not written into a database at all, which is very different from ETL.
Software products
Reflecting the increased demand for analysts to combine data sources, multiple software companies have seen large growth and raised millions of dollars, with some early entrants into the market now public companies. Examples include AWS, Alteryx, Microsoft Power Query, and Incorta, which enable combining data from many different data sources, for example, text files, databases, XML, JSON, and many other forms of structured and semi-structured data.
Tableau
In Tableau, data blending is a technique to combine data from multiple data sources in a data visualization. A key differentiator is the granularity of the data join. Blending data into a single data set would use a SQL database join, which would usually join at the most granular level, using an ID field where possible; a data blend in Tableau should instead happen at the least granular level.
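The difference in join granularity can be sketched with pandas (a hypothetical example; the column names and figures are invented for illustration). The secondary source is aggregated up to the shared dimension before joining, mirroring a blend at the least granular level rather than a row-level SQL join:

```python
import pandas as pd

# Primary source: one row per (region, month) of sales
sales = pd.DataFrame({
    "region":  ["East", "East", "West"],
    "month":   ["Jan", "Feb", "Jan"],
    "revenue": [100, 150, 200],
})

# Secondary source: row-level support tickets (finer granularity)
tickets = pd.DataFrame({
    "region":  ["East", "East", "West", "West"],
    "tickets": [3, 5, 2, 4],
})

# Blend: aggregate the secondary source to the shared dimension first,
# then join, analogous to blending at the least granular level.
blended = sales.merge(
    tickets.groupby("region", as_index=False)["tickets"].sum(),
    on="region", how="left",
)
print(blended)
```

A row-level join of the same two tables would instead duplicate each sales row for every matching ticket row, inflating the revenue totals.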
Looker Studio
In Google's Looker Studio, data sources are combined by joining the records of one data source with the records of up to 4 other data sources.
Similar to Tableau, the data blend only happens on the reporting layer. The blended data is never stored as a separate combined data source.
Challenges with data blending
The most common custom metadata question is: "How can this dataset blend with (join or union to) my other datasets?"
See also
Data preparation
Data fusion
Data wrangling
Data cleansing
Data editing
Data scraping
Data curation
Data preprocessing
References
Data management | Data blending | [
"Technology"
] | 526 | [
"Data management",
"Data"
] |
54,332,568 | https://en.wikipedia.org/wiki/Chemical%20Database%20Service | The Chemical Database Service is an EPSRC-funded mid-range facility that provides UK academic institutions with access to a number of chemical databases. It has been hosted by the Royal Society of Chemistry since 2013, before which it was hosted by Daresbury Laboratory (part of the Science and Technology Facilities Council).
Currently, the included databases are:
ACD/I-Lab, a tool for prediction of physicochemical properties and NMR spectra from a chemical structure
Available Chemicals Directory, a structure-searchable database of commercially available chemicals
Cambridge Structural Database (CSD), a crystallographic database of organic and organometallic structures
Inorganic Crystal Structure Database (ICSD), a crystallographic database of inorganic structures
CrystalWorks, a database combining data from CSD, ICSD and CrystMet
DETHERM, a database of thermophysical data for chemical compounds and mixtures
SPRESIweb, a database of organic compounds and reactions
References
Chemical databases
Engineering and Physical Sciences Research Council
Information technology organisations based in the United Kingdom
Royal Society of Chemistry
Science and technology in Cheshire | Chemical Database Service | [
"Chemistry"
] | 223 | [
"Chemical databases",
"Royal Society of Chemistry"
] |
54,333,425 | https://en.wikipedia.org/wiki/List%20of%20Bangladeshi%20engineers | This is a list of notable Bangladeshi engineers.
F
Fazlur Rahman Khan
I
Iqbal Mahmud
J
Jamilur Reza Choudhury
K
Khondkar Siddique-e-Rabbani
M
Muhammad M. Hussain
M. Rezwan Khan
Mahmudur Rahman
Muhammad Shahid Sarwar
S
Sunny Sanwar
engineers
Bangladeshi | List of Bangladeshi engineers | [
"Technology"
] | 69 | [
"Lists of engineers",
"Lists of people in STEM fields"
] |
54,334,250 | https://en.wikipedia.org/wiki/Proper%20reference%20frame%20%28flat%20spacetime%29 | A proper reference frame in the theory of relativity is a particular form of accelerated reference frame, that is, a reference frame in which an accelerated observer can be considered as being at rest. It can describe phenomena in curved spacetime, as well as in "flat" Minkowski spacetime in which the spacetime curvature caused by the energy–momentum tensor can be disregarded. Since this article considers only flat spacetime—and uses the definition that special relativity is the theory of flat spacetime while general relativity is a theory of gravitation in terms of curved spacetime—it is consequently concerned with accelerated frames in special relativity. (For the representation of accelerations in inertial frames, see the article Acceleration (special relativity), where concepts such as three-acceleration, four-acceleration, proper acceleration, hyperbolic motion etc. are defined and related to each other.)
A fundamental property of such a frame is the employment of the proper time of the accelerated observer as the time of the frame itself. This is connected with the clock hypothesis (which is experimentally confirmed), according to which the proper time of an accelerated clock is unaffected by acceleration, thus the measured time dilation of the clock only depends on its momentary relative velocity. The related proper reference frames are constructed using concepts like comoving orthonormal tetrads, which can be formulated in terms of spacetime Frenet–Serret formulas, or alternatively using Fermi–Walker transport as a standard of non-rotation. If the coordinates are related to Fermi–Walker transport, the term Fermi coordinates is sometimes used, or proper coordinates in the general case when rotations are also involved. A special class of accelerated observers follow worldlines whose three curvatures are constant. These motions belong to the class of Born rigid motions, i.e., the motions at which the mutual distance of constituents of an accelerated body or congruence remains unchanged in its proper frame. Two examples are Rindler coordinates or Kottler-Møller coordinates for the proper reference frame of hyperbolic motion, and Born or Langevin coordinates in the case of uniform circular motion.
In the following, Greek indices run over 0,1,2,3, Latin indices over 1,2,3, and bracketed indices are related to tetrad vector fields. The signature of the metric tensor is (-1,1,1,1).
History
Some properties of Kottler-Møller or Rindler coordinates were anticipated by Albert Einstein (1907) when he discussed the uniformly accelerated reference frame. While introducing the concept of Born rigidity, Max Born (1909) recognized that the formulas for the worldline of hyperbolic motion can be reinterpreted as transformations into a "hyperbolically accelerated reference system". Born himself, as well as Arnold Sommerfeld (1910) and Max von Laue (1911) used this frame to compute the properties of charged particles and their fields (see Acceleration (special relativity)#History and Rindler coordinates#History). In addition, Gustav Herglotz (1909) gave a classification of all Born rigid motions, including uniform rotation and the worldlines of constant curvatures. Friedrich Kottler (1912, 1914) introduced the "generalized Lorentz transformation" for proper reference frames or proper coordinates () by using comoving Frenet–Serret tetrads, and applied this formalism to Herglotz' worldlines of constant curvatures, particularly to hyperbolic motion and uniform circular motion. Herglotz' formulas were also simplified and extended by Georges Lemaître (1924). The worldlines of constant curvatures were rediscovered by several authors, for instance, by Vladimír Petrův (1964), as "timelike helices" by John Lighton Synge (1967) or as "stationary worldlines" by Letaw (1981). The concept of proper reference frame was later reintroduced and further developed in connection with Fermi–Walker transport in the textbooks by Christian Møller (1952) or Synge (1960). An overview of proper time transformations and alternatives was given by Romain (1963), who cited the contributions of Kottler. In particular, Misner & Thorne & Wheeler (1973) combined Fermi–Walker transport with rotation, which influenced many subsequent authors. Bahram Mashhoon (1990, 2003) analyzed the hypothesis of locality and accelerated motion.
The relations between the spacetime Frenet–Serret formulas and Fermi–Walker transport was discussed by Iyer & C. V. Vishveshwara (1993), Johns (2005) or Bini et al. (2008) and others. A detailed representation of "special relativity in general frames" was given by Gourgoulhon (2013).
Comoving tetrads
Spacetime Frenet–Serret equations
For the investigation of accelerated motions and curved worldlines, some results of differential geometry can be used. For instance, the Frenet–Serret formulas for curves in Euclidean space have already been extended to arbitrary dimensions in the 19th century, and can be adapted to Minkowski spacetime as well. They describe the transport of an orthonormal basis attached to a curved worldline, so in four dimensions this basis can be called a comoving tetrad or vierbein (also called vielbein, moving frame, frame field, local frame, repère mobile in arbitrary dimensions):
Here, τ is the proper time along the worldline, the timelike field e_(0) is called the tangent that corresponds to the four-velocity, the three spacelike fields are orthogonal to e_(0) and are called the principal normal e_(1), the binormal e_(2) and the trinormal e_(3). The first curvature κ_1 corresponds to the magnitude of four-acceleration (i.e., proper acceleration), the other curvatures κ_2 and κ_3 are also called torsion and hypertorsion.
Fermi–Walker transport and proper transport
While the Frenet–Serret tetrad can be rotating or not, it is useful to introduce another formalism in which non-rotational and rotational parts are separated. This can be done using the following equation for proper transport or generalized Fermi transport of tetrad , namely
where
or together in simplified form:
with as four-velocity and as four-acceleration, and "" indicates the dot product and "" the wedge product. The first part represents Fermi–Walker transport, which is physically realized when the three spacelike tetrad fields do not change their orientation with respect to the motion of a system of three gyroscopes. Thus Fermi–Walker transport can be seen as a standard of non-rotation. The second part consists of an antisymmetric second rank tensor with as the angular velocity four-vector and as the Levi-Civita symbol. It turns out that this rotation matrix only affects the three spacelike tetrad fields, thus it can be interpreted as the spatial rotation of the spacelike fields of a rotating tetrad (such as a Frenet–Serret tetrad) with respect to the non-rotating spacelike fields of a Fermi–Walker tetrad along the same world line.
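For intuition, the Fermi–Walker part can be integrated numerically. The sketch below is an illustration, not taken from the source; it assumes c = 1, signature (−, +) restricted to the t-x plane, a hyperbolic worldline with proper acceleration α, and the transport law dV/dτ = (V·a)u − (V·u)a. For hyperbolic motion the transported spacelike vector should coincide with the Frenet–Serret principal normal, since both torsions vanish:

```python
import math

def fw_rhs(tau, V, alpha):
    """Right-hand side of Fermi-Walker transport, dV/dtau = (V.a)u - (V.u)a,
    for the hyperbolic worldline x0 = sinh(a*tau)/a, x1 = cosh(a*tau)/a (c = 1)."""
    u = (math.cosh(alpha * tau), math.sinh(alpha * tau))                   # four-velocity
    a = (alpha * math.sinh(alpha * tau), alpha * math.cosh(alpha * tau))   # four-acceleration
    dot = lambda p, q: -p[0] * q[0] + p[1] * q[1]                          # signature (-, +)
    va, vu = dot(V, a), dot(V, u)
    return (va * u[0] - vu * a[0], va * u[1] - vu * a[1])

def transport(alpha, tau_end, n=2000):
    """Classical RK4 integration of the spacelike frame vector V(0) = (0, 1)."""
    h, tau, V = tau_end / n, 0.0, (0.0, 1.0)
    for _ in range(n):
        k1 = fw_rhs(tau, V, alpha)
        k2 = fw_rhs(tau + h / 2, (V[0] + h / 2 * k1[0], V[1] + h / 2 * k1[1]), alpha)
        k3 = fw_rhs(tau + h / 2, (V[0] + h / 2 * k2[0], V[1] + h / 2 * k2[1]), alpha)
        k4 = fw_rhs(tau + h, (V[0] + h * k3[0], V[1] + h * k3[1]), alpha)
        V = (V[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
             V[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
        tau += h
    return V

V = transport(alpha=1.0, tau_end=1.0)
print(V)  # close to (sinh 1, cosh 1), the Frenet-Serret principal normal at tau = 1
```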
Deriving Fermi–Walker tetrads from Frenet–Serret tetrads
Since and on the same worldline are connected by a rotation matrix, it is possible to construct non-rotating Fermi–Walker tetrads using rotating Frenet–Serret tetrads, which not only works in flat spacetime but for arbitrary spacetimes as well, even though the practical realization can be hard to achieve. For instance, the angular velocity vector between the respective spacelike tetrad fields and can be given in terms of torsions and :
Assuming that the curvatures are constant (which is the case in helical motion in flat spacetime, or in the case of stationary axisymmetric spacetimes), one then proceeds by aligning the spacelike Frenet–Serret vectors in the plane by constant counter-clockwise rotation, then the resulting intermediary spatial frame is constantly rotated around the axis by the angle , which finally gives the spatial Fermi–Walker frame (note that the timelike field remains the same):
For the special case and , it follows and and , therefore () is reduced to a single constant rotation around the -axis:
Proper coordinates or Fermi coordinates
In flat spacetime, an accelerated object is at any moment at rest in a momentary inertial frame , and the sequence of such momentary frames which it traverses corresponds to a successive application of Lorentz transformations , where is an external inertial frame and the Lorentz transformation matrix. This matrix can be replaced by the proper time dependent tetrads defined above, and if is the time track of the particle indicating its position, the transformation reads:
Then one has to put by which is replaced by and the timelike field vanishes, therefore only the spacelike fields are present anymore. Subsequently, the time in the accelerated frame is identified with the proper time of the accelerated observer by . The final transformation has the form
These are sometimes called proper coordinates, and the corresponding frame is the proper reference frame. They are also called Fermi coordinates in the case of Fermi–Walker transport (even though some authors use this term also in the rotational case). The corresponding metric has the form in Minkowski spacetime (without Riemannian terms):
However, these coordinates are not globally valid, but are restricted to
Proper reference frames for timelike helices
In case all three Frenet–Serret curvatures are constant, the corresponding worldlines are identical to those that follow from the Killing motions in flat spacetime. They are of particular interest since the corresponding proper frames and congruences satisfy the condition of Born rigidity, that is, the spacetime distance of two neighbouring worldlines is constant. These motions correspond to "timelike helices" or "stationary worldlines", and can be classified into six principal types: two with zero torsions (uniform translation, hyperbolic motion) and four with non-zero torsions (uniform rotation, catenary, semicubical parabola, general case):
Case produces uniform translation without acceleration. The corresponding proper reference frame is therefore given by ordinary Lorentz transformations. The other five types are:
Hyperbolic motion
The curvatures , where is the constant proper acceleration in the direction of motion, produce hyperbolic motion because the worldline in the Minkowski diagram is a hyperbola:
The corresponding orthonormal tetrad is identical to an inverted Lorentz transformation matrix with hyperbolic functions as Lorentz factor and as proper velocity and as rapidity (since the torsions and are zero, the Frenet–Serret formulas and Fermi–Walker formulas produce the same tetrad):
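A quick numerical check (a sketch assuming c = 1 and the standard parametrization x0 = sinh(ατ)/α, x1 = cosh(ατ)/α, which is an assumption of this example) confirms that the tangent and principal normal of hyperbolic motion form an orthonormal pair under the (−, +) Minkowski product:

```python
import math

def tetrad_hyperbolic(alpha, tau):
    """Tangent e0 and principal normal e1 (t, x components) for hyperbolic
    motion with proper acceleration alpha, assuming c = 1."""
    e0 = (math.cosh(alpha * tau), math.sinh(alpha * tau))   # four-velocity
    e1 = (math.sinh(alpha * tau), math.cosh(alpha * tau))   # four-acceleration / alpha
    return e0, e1

def minkowski_dot(p, q):
    """Signature (-1, +1), restricted to the t-x plane."""
    return -p[0] * q[0] + p[1] * q[1]

e0, e1 = tetrad_hyperbolic(alpha=2.0, tau=0.7)
print(minkowski_dot(e0, e0))   # -1 up to rounding: unit timelike vector
print(minkowski_dot(e1, e1))   #  1 up to rounding: unit spacelike vector
print(minkowski_dot(e0, e1))   #  0 up to rounding: orthogonal pair
```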
Inserted into the transformations () and using the worldline () for , the accelerated observer is always located at the origin, so the Kottler-Møller coordinates follow
which are valid within , with the metric
.
Alternatively, by setting the accelerated observer is located at at time , thus the Rindler coordinates follow from () and (, ):
which are valid within , with the metric
Uniform circular motion
The curvatures , produce uniform circular motion, with the worldline
where
with as orbital radius, as coordinate angular velocity, as proper angular velocity, as tangential velocity, as proper velocity, as Lorentz factor, and as angle of rotation. The tetrad can be derived from the Frenet–Serret equations (), or more simply be obtained by a Lorentz transformation of the tetrad of ordinary rotating coordinates:
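As a numerical sanity check (a sketch; it assumes c = 1 and the parametrization t = γτ, x = r cos(Ωτ), y = r sin(Ωτ) with proper angular velocity Ω = γω, none of which is taken verbatim from the source), the four-velocity of the circular worldline is a unit timelike vector:

```python
import math

def circular_four_velocity(r, omega, tau):
    """Four-velocity (t, x, y components) of uniform circular motion, c = 1.

    Assumed worldline: t = g*tau, x = r*cos(omega*g*tau), y = r*sin(omega*g*tau),
    with Lorentz factor g = 1/sqrt(1 - (r*omega)**2) and tangential velocity r*omega.
    """
    g = 1.0 / math.sqrt(1.0 - (r * omega) ** 2)
    phase = omega * g * tau                     # proper angular velocity times tau
    return (g,
            -r * omega * g * math.sin(phase),
            r * omega * g * math.cos(phase))

def minkowski_norm2(u):
    """Squared norm with signature (-1, +1, +1)."""
    return -u[0] ** 2 + u[1] ** 2 + u[2] ** 2

u = circular_four_velocity(r=0.5, omega=0.8, tau=1.3)
print(minkowski_norm2(u))  # close to -1: u is a unit timelike vector
```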
The corresponding non-rotating Fermi–Walker tetrad on the same worldline can be obtained by solving the Fermi–Walker part of equation (). Alternatively, one can use () together with (), which gives
The resulting angle of rotation together with () can now be inserted into (), by which the Fermi–Walker tetrad follows
In the following, the Frenet–Serret tetrad is used to formulate the transformation. Inserting () into the transformations () and using the worldline () for gives the coordinates
which are valid within , with the metric
If an observer resting in the center of the rotating frame is chosen with , the equations reduce to the ordinary rotational transformation
which are valid within , and the metric
.
The last equations can also be written in rotating cylindrical coordinates (Born coordinates):
which are valid within , and the metric
Frames (, , ) can be used to describe the geometry of rotating platforms, including the Ehrenfest paradox and the Sagnac effect.
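For the Sagnac effect just mentioned, the standard lowest-order result is that counter-propagating beams around a rotating circular loop acquire a time difference Δt = 4Aω/c², where A is the enclosed area and ω the angular velocity. A minimal numerical sketch (the function name is an illustrative choice):

```python
import math

def sagnac_delay(radius_m, omega_rad_s, c=299_792_458.0):
    """Lowest-order Sagnac time difference for counter-propagating
    beams around a circular loop of the given radius rotating at
    angular velocity omega: dt = 4*A*omega/c**2 with A = pi*r**2."""
    area = math.pi * radius_m ** 2
    return 4.0 * area * omega_rad_s / c ** 2
```

The delay scales with the enclosed area, so doubling the radius quadruples the effect — the geometric fact exploited by fiber-optic gyroscopes.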
Catenary
The curvatures , produce a catenary, i.e., hyperbolic motion combined with a spacelike translation
where
where is the velocity, the proper velocity, as rapidity, is the Lorentz factor. The corresponding Frenet–Serret tetrad is:
The corresponding non-rotating Fermi–Walker tetrad on the same worldline can be obtained by solving the Fermi–Walker part of equation (). The same result follows from (), which gives
which together with () can now be inserted into (), resulting in the Fermi–Walker tetrad
The proper coordinates or Fermi coordinates follow by inserting or into ().
Semicubical parabola
The curvatures , produce a semicubical parabola or cusped motion
The corresponding Frenet–Serret tetrad with is:
The corresponding non-rotating Fermi–Walker tetrad on the same worldline can be obtained by solving the Fermi–Walker part of equation (). The same result follows from (), which gives
which together with () can now be inserted into (), resulting in the Fermi–Walker tetrad (note that in this case):
The proper coordinates or Fermi coordinates follow by inserting or into ().
General case
The curvatures , , produce hyperbolic motion combined with uniform circular motion. The worldline is given by
where
with as tangential velocity, as proper tangential velocity, as rapidity, as orbital radius, as coordinate angular velocity, as proper angular velocity, as angle of rotation, is the Lorentz factor. The Frenet–Serret tetrad is
The corresponding non-rotating Fermi–Walker tetrad on the same worldline is as follows: First inserting () into () gives the angular velocity, which together with () can now be inserted into (, left), and finally inserted into (, right) produces the Fermi–Walker tetrad. The proper coordinates or Fermi coordinates follow by inserting or into () (the resulting expressions are not indicated here because of their length).
Overview of historical formulas
In addition to the topics described in the History section above, the contributions of Herglotz, Kottler, and Møller are described in more detail here, since these authors gave extensive classifications of accelerated motion in flat spacetime.
Herglotz
Herglotz (1909) argued that the metric
where
satisfies the condition of Born rigidity when . He pointed out that the motion of a Born rigid body is in general determined by the motion of one of its point (class A), with the exception of those worldlines whose three curvatures are constant, thus representing a helix (class B). For the latter, Herglotz gave the following coordinate transformation corresponding to the trajectories of a family of motions:
(H1) ,
where and are functions of proper time . By differentiation with respect to , and assuming as constant, he obtained
(H2)
Here, represents the four-velocity of the origin of , and is a six-vector (i.e., an antisymmetric four-tensor of second order, or bivector, having six independent components) representing the angular velocity of around . As any six-vector, it has two invariants:
When is constant and is variable, any family of motions described by (H1) forms a group and is equivalent to an equidistant family of curves, thus satisfying Born rigidity because they are rigidly connected with . To derive such a group of motion, (H2) can be integrated with arbitrary constant values of and . For rotational motions, this results in four groups depending on whether the invariants or are zero or not. These groups correspond to four one-parameter groups of Lorentz transformations, which were already derived by Herglotz in a previous section on the assumption, that Lorentz transformations (being rotations in ) correspond to hyperbolic motions in . The latter have been studied in the 19th century, and were categorized by Felix Klein into loxodromic, elliptic, hyperbolic, and parabolic motions (see also Möbius group).
Kottler
Friedrich Kottler (1912) followed Herglotz, and derived the same worldlines of constant curvatures using the following Frenet–Serret formulas in four dimensions, with as comoving tetrad of the worldline, and as the three curvatures
corresponding to (). Kottler pointed out that the tetrad can be seen as a reference frame for such worldlines. Then he gave the transformation for the trajectories
(with )
in agreement with (). Kottler also defined a tetrad whose basis vectors are fixed in normal space and therefore do not share any rotation. He differentiated two cases: if the tangent (i.e., the timelike) tetrad field is constant, then the spacelike tetrad fields can be replaced by ones that are "rigidly" connected with the tangent, thus
The second case is a vector "fixed" in normal space by setting . Kottler pointed out that this corresponds to class B given by Herglotz (which Kottler calls "Born's body of second kind")
,
and class (A) of Herglotz (which Kottler calls "Born's body of first kind") is given by
which both correspond to formula ().
In (1914a), Kottler showed that the transformation
,
describes the non-simultaneous coordinates of the points of a body, while the transformation with
,
describes the simultaneous coordinates of the points of a body. These formulas become "generalized Lorentz transformations" by inserting
thus
in agreement with (). He introduced the terms "proper coordinates" and "proper frame" () for a system whose time axis coincides with the respective tangent of the worldline. He also showed that the Born rigid body of second kind, whose worldlines are defined by
,
is particularly suitable for defining a proper frame. Using this formula, he defined the proper frames for hyperbolic motion (free fall) and for uniform circular motion:
In (1916a) Kottler gave the general metric for acceleration-relative motions based on the three curvatures
In (1916b) he gave it the form:
where are free from , and , and , and linear in .
Møller
Møller (1952) defined the following transport equation
in agreement with Fermi–Walker transport by (, without rotation). The Lorentz transformation into a momentary inertial frame was given by him as
in agreement with (). By setting , and , he obtained the transformation into the "relativistic analogue of a rigid reference frame"
in agreement with the Fermi coordinates (), and the metric
in agreement with the Fermi metric () without rotation. He obtained the Fermi–Walker tetrads and Fermi frames of hyperbolic motion and uniform circular motion (some formulas for hyperbolic motion were already derived by him in 1943):
Worldlines of constant curvatures by Herglotz and Kottler
References
Bibliography
Textbooks
First edition 1911, second expanded edition 1913, third expanded edition 1919.
New edition 2013: Editor: Domenico Giulini, Springer, 2013 .
Journal articles
Historical sources
External links
Physics FAQ: Acceleration in Special Relativity
Eric Gourgoulhon (2010): Special relativity from an accelerated observer perspective
Special relativity
Acceleration
Frames of reference | Proper reference frame (flat spacetime) | [
"Physics",
"Mathematics"
] | 4,162 | [
"Physical quantities",
"Acceleration",
"Coordinate systems",
"Frames of reference",
"Quantity",
"Classical mechanics",
"Special relativity",
"Theory of relativity",
"Wikipedia categories named after physical quantities"
] |
54,334,451 | https://en.wikipedia.org/wiki/Unit%20of%20volume | A unit of volume is a unit of measurement for measuring volume or capacity, the extent of an object or space in three dimensions. Units of capacity may be used to specify the volume of fluids or bulk goods, for example water, rice, sugar, grain or flour.
Units
According to the SI system, the base unit for measuring length is the metre. The SI unit of volume is thus the cubic metre, which is a derived unit, where:
1 m3 = 1 m × 1 m × 1 m.
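Since every unit of volume reduces to a multiple of the cubic metre, converting between any two units is a single factor-table lookup. A minimal sketch (the unit names and helper are illustrative choices; the litre and cubic-foot factors follow exactly from the SI definitions of the litre and the foot):

```python
# How many cubic metres one unit of each kind represents.
CUBIC_METRES_PER = {
    "cubic_metre": 1.0,
    "litre": 1e-3,                 # 1 L = 0.001 m3 exactly
    "cubic_foot": 0.028316846592,  # (0.3048 m)**3 exactly
}

def convert_volume(value, src, dst):
    """Convert a volume between units by routing through the cubic metre."""
    return value * CUBIC_METRES_PER[src] / CUBIC_METRES_PER[dst]
```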
Comparison
Forestry and timber industry
British Commonwealth
Hoppus, a cubic foot measure used in the British Empire and, nowadays, some Commonwealth countries for timber.
Germany
Festmeter (fm), a unit of volume for logs
Erntefestmeter (Efm), a unit of volume for trees or forests which assumes a 10% loss due to bark and 10% during the felling process.
Vorratsfestmeter (Vfm), a unit of volume for trees or forests based on measurements including the bark.
Raummeter (rm), or stere (stacked firewood) = 0.7 m3 (stacked woodpile with air spaces)
Schüttmeter, or Schüttraummeter (piled wood with air spaces)
USA and Canada
Board foot, unit of lumber
Cord, a unit of dry volume used to measure firewood and pulpwood
Cubic yard, equal to
See also
Faggot (unit)
History of measurement
List of unusual units of measurement#Volume
Orders of magnitude (volume)
Metre Convention
Physical quantity
References
External links | Unit of volume | [
"Mathematics"
] | 313 | [
"Units of volume",
"Quantity",
"Units of measurement"
] |
54,335,434 | https://en.wikipedia.org/wiki/Timeline%20of%20the%20G%C3%B6kt%C3%BCrks | This is a timeline of the Göktürks from the origins of the Turkic Khaganate to the end of the Second Turkic Khaganate.
5th century
6th century
7th century
8th century
References
Bibliography
Timeline
Göktürks | Timeline of the Göktürks | [
"Physics"
] | 61 | [
"Wikipedia timelines",
"Spacetime",
"Physical quantities",
"Time"
] |
54,337,393 | https://en.wikipedia.org/wiki/Precoloring%20extension | In graph theory, precoloring extension is the problem of extending a graph coloring of a subset of the vertices of a graph, with a given set of colors, to a coloring of the whole graph that does not assign the same color to any two adjacent vertices.
Complexity
Precoloring extension has the usual graph coloring problem as a special case, in which the initially colored subset of vertices is empty; therefore, it is NP-complete.
However, it is also NP-complete for some other classes of graphs on which the usual graph coloring problem is easier. For instance it is NP-complete on the rook's graphs, for which it corresponds to the problem of completing a partially filled-in Latin square.
The problem may be solved in polynomial time for graphs of bounded treewidth, but the exponent of the polynomial depends on the treewidth.
It may be solved in linear time for precoloring extension instances in which both the number of colors and the treewidth are bounded.
Related problems
Precoloring extension may be seen as a special case of list coloring, the problem of coloring a graph in which no vertices have been colored, but each vertex has an assigned list of available colors.
To transform a precoloring extension problem into a list coloring problem, assign each uncolored vertex in the precoloring extension problem a list of the colors not yet used by its initially-colored neighbors,
and then remove the colored vertices from the graph.
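The reduction described in the preceding paragraph can be sketched directly. This is a minimal illustrative sketch: the function name and the dict-of-neighbor-sets graph representation are choices made here, not notation from the literature:

```python
def precoloring_to_list_coloring(graph, colors, precoloring):
    """Reduce a precoloring extension instance to list coloring:
    each uncolored vertex gets the colors not already used by its
    colored neighbors, and colored vertices are removed."""
    lists = {
        v: set(colors) - {precoloring[u] for u in graph[v] if u in precoloring}
        for v in graph if v not in precoloring
    }
    reduced = {
        v: {u for u in graph[v] if u not in precoloring}
        for v in graph if v not in precoloring
    }
    return reduced, lists
```

Any proper coloring of the reduced instance that respects the lists extends the original precoloring, and conversely.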
Sudoku puzzles may be modeled mathematically as instances of the precoloring extension problem on Sudoku graphs.
References
External links
Bibliography on precoloring extension, Dániel Marx
Graph coloring
NP-complete problems | Precoloring extension | [
"Mathematics"
] | 340 | [
"Graph coloring",
"Graph theory",
"Computational problems",
"Mathematical relations",
"Mathematical problems",
"NP-complete problems"
] |
57,641,853 | https://en.wikipedia.org/wiki/Lisocabtagene%20maraleucel | Lisocabtagene maraleucel, sold under the brand name Breyanzi, is a cell-based gene therapy used to treat B-cell lymphomas, including follicular lymphoma.
Side effects include hypersensitivity reactions, serious infections, low blood cell counts, and a weakened immune system. The most common side effects include decreases in neutrophils (a type of white blood cell that fights infections), in red blood cells or in blood platelets (components that help the blood to clot), as well as cytokine release syndrome (a potentially life-threatening condition that can cause fever, vomiting, shortness of breath, pain and low blood pressure) and tiredness. The most common adverse reactions for treating follicular lymphoma include cytokine release syndrome, headache, musculoskeletal pain, fatigue, constipation, and fever.
Lisocabtagene maraleucel, a chimeric antigen receptor (CAR) T cell (CAR-T) therapy, is the third gene therapy approved by the US Food and Drug Administration (FDA) for certain types of non-Hodgkin lymphoma, including diffuse large B-cell lymphoma. Lisocabtagene maraleucel was approved for medical use in the United States in February 2021.
Medical uses
In the US, lisocabtagene maraleucel is indicated for the treatment of adults with large B-cell lymphoma, including diffuse large B-cell lymphoma not otherwise specified (including diffuse large B-cell lymphoma arising from indolent lymphoma), high-grade B cell lymphoma, primary mediastinal large B-cell lymphoma, and follicular lymphoma grade 3B, who have refractory disease to first-line chemoimmunotherapy or relapse within 12 months of first-line chemoimmunotherapy; or refractory disease to first-line chemoimmunotherapy or relapse after first-line chemoimmunotherapy and are not eligible for hematopoietic stem cell transplantation (HSCT) due to comorbidities or age; or relapsed or refractory disease after two or more lines of systemic therapy. It is also indicated for adults with relapsed or refractory chronic lymphocytic leukemia or small lymphocytic lymphoma who have received at least two prior lines of therapy, including a Bruton tyrosine kinase inhibitor and a B-cell lymphoma 2 inhibitor.
In the EU, lisocabtagene maraleucel is indicated for the treatment of adults with diffuse large B-cell lymphoma, high grade B-cell lymphoma, primary mediastinal large B-cell lymphoma and follicular lymphoma grade 3B, who relapsed within 12 months from completion of, or are refractory to, first-line chemoimmunotherapy.
Lisocabtagene maraleucel is not indicated for the treatment of people with primary central nervous system lymphoma.
In May 2024, the US Food and Drug Administration (FDA) expanded the indication for lisocabtagene maraleucel to include adults with relapsed or refractory follicular lymphoma who have received two or more prior lines of systemic therapy; and the treatment of adults with relapsed or refractory mantle cell lymphoma who have received at least two prior lines of systemic therapy, including a Bruton tyrosine kinase inhibitor.
Adverse effects
The US Food and Drug Administration (FDA) prescription label carries a boxed warning for cytokine release syndrome (CRS), which is a systemic response to the activation and proliferation of CAR-T cells, causing high fever and flu-like symptoms and neurologic toxicities.
In April 2024, the FDA prescription label boxed warning was expanded to include T cell malignancies.
History
Lisocabtagene maraleucel's safety and efficacy were established in a multicenter clinical trial of more than 250 adults with refractory or relapsed large B-cell lymphoma. The complete remission rate after treatment was 54%.
The US Food and Drug Administration (FDA) granted lisocabtagene maraleucel priority review, orphan drug, regenerative medicine advanced therapy (RMAT), and breakthrough therapy designations. Lisocabtagene maraleucel is the first regenerative medicine therapy with RMAT designation to be licensed by the FDA. The FDA granted approval of Breyanzi to Juno Therapeutics Inc., a Bristol-Myers Squibb Company.
Efficacy was evaluated in TRANSFORM (NCT03575351), a randomized, open-label, multicenter trial in adults with primary refractory large B-cell lymphoma or relapse within twelve months of achieving complete response (CR) to first-line therapy. Participants had not yet received treatment for relapsed or refractory lymphoma and were potential candidates for autologous HSCT. A total of 184 participants were randomized 1:1 to receive a single infusion of lisocabtagene maraleucel following fludarabine and cyclophosphamide lymphodepleting chemotherapy or to receive second-line standard therapy, consisting of three cycles of chemoimmunotherapy followed by high-dose therapy and autologous HSCT in participants who attained CR or partial response (PR).
Efficacy was also evaluated in PILOT (NCT03483103), a single-arm, open-label, multicenter trial in transplant-ineligible patients with relapsed or refractory large B-cell lymphoma after one line of chemoimmunotherapy. The study enrolled participants who were ineligible for high-dose therapy and HSCT due to organ function or age, but who had adequate organ function for CAR-T cell therapy. Efficacy was based on CR rate and duration of response (DOR) as determined by an IRC. Of 74 participants who underwent leukapheresis (median age, 73 years), 61 (82%) received lisocabtagene maraleucel of whom 54% (95% CI: 41, 67) achieved CR. The median DOR was not reached (95% CI: 11.2 months, not reached) in participants who achieved CR and 2.1 months (95% CI: 1.4, 2.3) in participants with a best response of PR. Among all leukapheresed participants, the CR rate was 46% (95% CI: 34, 58).
For the treatment of follicular lymphoma, efficacy was evaluated in TRANSCEND-FL (NCT04245839), a phase II, open-label, multicenter, single-arm trial in adults with relapsed or refractory follicular lymphoma after two or more lines of systemic therapy (including an anti-CD20 antibody and an alkylating agent). Participants were eligible to enroll in the study if they had adequate bone marrow function to receive lymphodepleting chemotherapy and an ECOG performance status of 1 or less.
Society and culture
Legal status
In January 2022, the Committee for Medicinal Products for Human Use of the European Medicines Agency adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Breyanzi, intended for the treatment of adults with relapsed or refractory diffuse large B cell lymphoma, primary mediastinal large B-cell lymphoma and follicular lymphoma grade 3B, after at least two previous lines of treatments. The applicant for this medicinal product is Bristol-Myers Squibb Pharma EEIG. Lisocabtagene maraleucel was approved for medical use in the European Union in April 2022.
Names
Lisocabtagene maraleucel is the international nonproprietary name (INN).
References
External links
Drugs developed by Bristol Myers Squibb
Cancer treatments
Drugs that are a gene therapy
Approved gene therapies
CAR T-cell therapy
Orphan drugs
Antineoplastic drugs | Lisocabtagene maraleucel | [
"Biology"
] | 1,786 | [
"Cell therapies",
"CAR T-cell therapy"
] |
57,641,874 | https://en.wikipedia.org/wiki/Benzylidene%20acetal | In organic chemistry, a benzylidene acetal is the functional group with the structural formula C6H5CH(OR)2 (R = alkyl, aryl). Benzylidene acetals are used as protecting groups in glycochemistry. These compounds can also be oxidized to carboxylic acids in order to open important biological molecules, such as glycosaminoglycans, to other routes of synthesis. They arise from the reaction of a 1,2- or 1,3-diols with benzaldehyde. Other aromatic aldehydes are also used.
References
Acetals
Functional groups
Protecting groups | Benzylidene acetal | [
"Chemistry"
] | 140 | [
"Protecting groups",
"Acetals",
"Functional groups",
"Reagents for organic chemistry"
] |
57,641,927 | https://en.wikipedia.org/wiki/Research%20in%20Human%20Development | Research in Human Development is a quarterly peer-reviewed interdisciplinary scientific journal that publishes research on all aspects of human development. Its scope includes the perspectives of biology, psychology, and sociology, among other disciplines. It was established in 2004 and is published by Taylor & Francis. It is the official journal of the Society for the Study of Human Development. The editor-in-chief is Michael Cunningham (Tulane University). According to the Journal Citation Reports, the journal has a 2020 impact factor of 1.222.
References
External links
Academic journals established in 2004
Quarterly journals
Developmental psychology journals
Taylor & Francis academic journals
English-language journals
Human development | Research in Human Development | [
"Biology"
] | 127 | [
"Behavioural sciences",
"Behavior",
"Human development"
] |
57,642,066 | https://en.wikipedia.org/wiki/Society%20for%20the%20Study%20of%20Human%20Development | The Society for the Study of Human Development (SSHD) is a professional society formed by a group of scholars from multiple disciplines (e.g., medicine, biology, psychology, sociology, economics, and history). The central focus of SSHD is to provide an organization that moves beyond age-segmented scholarly organizations to take an integrative, interdisciplinary approach to ages/stages across the life span, generational and ecological contexts of human development, and research and applications to human development policies and programs. It was founded in 1998 when a group of scholars met at the Radcliffe Institute for Advanced Study. Its first meeting was held in November 1999. The current president of the society is Carolyn Aldwin (Oregon State University), and the president-elect is Lynn S. Liben (The Pennsylvania State University). The society's official journal is Research in Human Development.
Past presidents
Former presidents of the SSHD include:
David Henry Feldman (Tufts University)
Kristine Ajrouch (Eastern Michigan University)
Willis Overton (Temple University)
Cynthia García Coll (Brown University/University of Puerto Rico)
Lawrence Schiamberg (Michigan State University)
Toni Antonucci (University of Michigan)
Susan Whitbourne (University of Massachusetts)
Jacquelyn James (Boston College)
Richard M. Lerner (Tufts University)
References
External links
International learned societies
Learned societies of the United States
Scientific organizations established in 1998
Human development | Society for the Study of Human Development | [
"Biology"
] | 290 | [
"Behavioural sciences",
"Behavior",
"Human development"
] |
57,643,049 | https://en.wikipedia.org/wiki/Fancy%20Nancy%20%28TV%20series%29 | Fancy Nancy, titled Fancy Nancy Clancy internationally, is an American animated family comedy children's television series developed by Jamie Mitchell and Krista Tucker and produced by Disney Television Animation for Disney Junior based on the eponymous children's picture book series by Jane O'Connor with illustrations by Robin Preiss Glasser. The show follows the adventures of Nancy Clancy, a 6 (and then later 7) year-old girl who loves everything fancy and French, while living with her family and friends in a fictional version of Plainfield, Ohio.
The series premiered on July 13, 2018, in the United States and Canada the following day. Disney Junior renewed the series for a second season, which premiered on October 4, 2019, in the United States. On September 18, 2019, a third season was commissioned, and Krista Tucker confirmed that it would be the last for the entire series. The third season began simulcasting on Disney Junior, DisneyNOW and Disney+ on November 12, 2021. The series finale aired on February 18, 2022. Fancy Nancy received generally positive reviews from critics.
Premise
Six-year-old Nancy Clancy enjoys fancy and French things that range from her outfit to her creative elaborate attire as she plans to teach some fanciness to her ordinary family and her friends.
Characters
Nancy Margaret Clancy (voiced by Mia Sinclair Jenness) is a little girl who enjoys fancy things and is a bit of a Francophile; she loves France and can speak French. She was 6 years old until the episode "Nancy's Parfait Birthday!", where she turned 7. Her favorite color is fuchsia, and she adores butterflies; she has an adorable Golden Doodle named Frenchy, her own secret delivery mailbox for sharing party invitations with her best friend Bree, and her own playhouse. She also has a doll named Marabelle and sometimes carries her everywhere. Her mother sometimes uses her full name when she is in trouble. Her middle name is Margaret, after her late maternal grandmother. In "Paris, Adieu!", Nancy starts saving money in hopes of eventually affording to visit Paris; she finally reaches her goal in the series finale.
Josephine Jane "JoJo" Clancy (voiced by Spencer Moss) is Nancy's little sister. She has an imaginary friend named Dudley, is a PIT (Pirate in Training), loves to help, laughs a lot, and loves her stuffed animal "Mr. Monkey". She was 3 years old until she turned 4 in "Big Top Nancy".
Douglas "Doug" Clancy (voiced by Rob Riggle) and Claire Clancy (voiced by Alyson Hannigan) are Nancy and Jojo's parents.
Mrs. Dolores Devine (voiced by Christine Baranski) is Nancy's elderly widowed neighbor. Her name is a play on the word "divine". Her late husband's name was Ronnie.
Franklin "Frank" Anderson (voiced by George Wendt) is Nancy's widowed maternal grandfather. His late wife was named "Margaret."
Frenchy (voiced by Fabio Tassone) is Nancy's Golden Doodle.
Poppy (Sid Clancy) (voiced by John Ratzenberger) is Nancy's paternal grandfather. He is a professor of geology. Grammy and Poppy live in Chicago.
Grammy (Fay Clancy) (voiced by Miriam Flynn) is Nancy's paternal grandmother. Nancy often thinks she's a spy when she's actually a librarian who works for the government.
Briana Rose "Bree" James (voiced by Dana Heath) is Nancy's best friend. She's also Nancy's next-door neighbor, loves nature, is a fantastique ice skater; she has a dog named Waffles and has a doll named Chiffon.
Frederick "Freddy" James (voiced by Blake Moore) is Bree's younger brother and JoJo's best friend. He is 3 years old.
Calvin and Gloria James (voiced by Geno Henderson and Tatyana Ali respectively) are Bree's parents.
Mrs. Priya Singh (voiced by Aparna Nancherla) is Doug's accountant agency boss.
Mr. Ravi Singh (voiced by Kal Penn) is Priya's husband.
Jonathan (voiced by Ian Chen) is Nancy's cousin who also enjoys fancy stuff. Prior to the series, he went by "Johnny", but now he goes by "Jonathan". He's a magician and loves clothes.
Gus (voiced by Chi McBride) is a local courier who makes deliveries. He can be quite goofy at times and a bit clumsy. He also hates lying and being dishonest. In the episode "Parcel Pursuit" he loses his kitten Parcel while on his mail route.
Lionel (voiced by Malachi Barton) is a boy who is a bit of a comedian. Lionel has curly blond hair and has blue eyes. He also has a dog named Flash and a rubber chicken named Bok-Bok, which he carries with everywhere he goes, that he got to get over the chicken incident that happened when he was younger. He has an autistic cousin named Sean that he cares about in the episode "Nancy's New Friend", which he teaches Nancy to be calm around him. In the episode "Love, Lionel", he has a huge crush on Wanda and had trouble revealing his feelings to her, as she and her other friends already know him for his cracking of jokes and where Nancy tries to help him get over his fear and come clean.
Sean (voiced by George Yionoulis) is Lionel's cousin who's autistic. He loves trains and is really educated about them.
Brigitte (voiced by Madison Pettis) is Nancy's favorite waitress, who always serves her and her friends and family their favorite pizza at the pizza parlor. She was also Nancy and JoJo's babysitter. In the third season, Brigitte moves to Chicago to go to college.
Grace White (voiced by Hannah Nordberg) is Nancy's frenemy. She is from a wealthy family. Grace often brags about what she has, which can sometimes annoy the other kids. In the Season 3 episode, "Grace Gets Real", however, Grace finally learns how to be humbler and not brag.
Rhonda and Wanda (both voiced by Ruby Jay) are identical twin sisters and friends of Nancy whom Lionel has trouble telling apart. They both wear bows on their heads, each with its own pattern: Rhonda's bow has stripes and Wanda's has polka dots. They also have the first letter of their names at the top of their shirts to tell them apart. Both are tomboys who love to play sports in their backyard and at practice.
Roberto (voiced by Nathan Arenas) is Nancy's friend and Lionel's new buddy who just moved to their neighborhood from Paris, Texas in the episode "Le Boy Next Door."
Daisy (voiced by Darci Lynne) is Nancy's friend that she met at a Food Drive. In Season 3, she moves closer to Nancy's neighborhood.
Lucille (voiced by Rachael MacFarlane) is Nancy's dance teacher.
Mr. Chen (voiced by James Sie) is the shoe store owner and judge for the Plainfield ballroom dance competition.
Flash is Lionel's dog.
Waffles is Bree's dog.
Serena and Venus are Rhonda and Wanda's pet hamsters. They are named after the famous tennis-playing sisters, Serena and Venus Williams.
Fritters is Grace's pet bunny.
Pepper is Grace's pet pony.
Dumpling is Grace's pet Silkie chicken.
Jean Claude is JoJo's fish that died and had a funeral in the episode "Au Revoir Jean Claude".
Jean Claude Jr. is Nancy and JoJo's fish that they got in the episode "Au Revoir Jean Claude" after Jean Claude died.
Marabelle is Nancy's favorite doll.
Chiffon is Bree's favorite doll.
Bok Bok is Lionel's favorite toy chicken, which he jokes around with and carries everywhere; he got it when he was younger to get over the chicken incident.
Penelope is Grace's favorite doll who looks identical to her.
Flower Shop Owner is a kindly middle-aged woman who owns the Flower Shop and invites Nancy and Bree to participate in the poetry contest.
Episodes
Release
Fancy Nancy premiered in the United States and in Canada on July 13, 2018. The series was later made available to stream on Disney+.
Home media
Reception
Critical response
Alex Reif of LaughingPlace.com called Fancy Nancy "full of bright colors, fun characters, and musical numbers," writing, "Like the beloved book series, kids are going to think Disney’s Fancy Nancy is très magnifique. Nancy will inspire viewers to be themselves and to make every day extra special. Her interests gravitate towards things that are pink and sparkly, but her personality is optimistic and friendly. Not only will kids expand their vocabulary, but they will also see a great role model for overcoming personal struggles and being a better friend." Emily Ashby of Common Sense Media gave Fancy Nancy a grade of four out of five stars, complimented the educational value, citing self-expression and individuality, and praised the depiction of positive messages and role models, stating that the show promotes respect and positivity across its characters.
Dave Trumbore of Collider included Fancy Nancy in their "2018's Best New Animated Series for Kids" list, saying that the series celebrates "uniqueness, diversity, and individuality." Azure Hall and Casey Suglia of Romper included Fancy Nancy in their "Great Shows Your Kids Will Love To Stream On Disney+" list, stating, "Although Nancy likes everything fancy, the TV series promotes individuality and self expression, rather than materialism. Throughout the show, Nancy learns about the beauty of people’s differences and learns to appreciate all that makes her friends unique. This mirrors itself in the show’s positive messages about family members, as Nancy loves on her younger sister and her supportive parents. While Nancy might not necessarily be your cup of tea, you have to give credit to a show that teaches kids to embrace their truest selves."
Nuray Bulbul of WalesOnline included Fancy Nancy in their "6 Best Disney Plus animated films and TV shows in 2021" list, asserting, "From her vast vocabulary to her creative attire, six-year-old Nancy is one to look out for. A high-spirited young girl whose imagination and enthusiasm transforms the ordinary into the extraordinary - showcases it’s important to make the most of each day and encourage others to do the same." Charles Curtis of USA Today ranked Fancy Nancy 6th in their "20 Best Shows For Kids Right Now (March 2020)" list.
Accolades
References
External links
Disney Jr. original programming
2010s American animated television series
2020s American animated television series
2010s preschool education television series
2020s preschool education television series
2018 American television series debuts
2018 animated television series debuts
2022 American television series endings
American English-language television shows
American preschool education television series
Animated preschool education television series
Television shows set in Columbus, Ohio
Television series by Disney Television Animation
American television shows based on children's books
American children's animated adventure television series
American children's animated fantasy television series
American computer-animated television series
Animated television series about children
Computers | Fancy Nancy (TV series) | [
"Technology"
] | 2,352 | [] |
57,644,254 | https://en.wikipedia.org/wiki/NGC%20527 | NGC 527, also occasionally referred to as PGC 5128 or PGC 5141, is a lenticular galaxy located approximately 259 million light-years from the Solar System in the constellation Sculptor. It was discovered on 1 September 1834 by astronomer John Herschel.
Observation history
Herschel discovered the object along with NGC 526. The object was later catalogued by John Louis Emil Dreyer in the New General Catalogue, where the galaxy was described as "faint, small, a little extended, brighter middle, the following (eastern) of 2" with the other one being NGC 526.
Description
The galaxy has an apparent visual magnitude of 13.2 and can be classified as type SB0-a using the Hubble Sequence. The object's distance of roughly 260 million light-years from the Solar System can be estimated using its redshift and Hubble's law.
Companion galaxy PGC 5142
NGC 527 has a much dimmer magnitude 14 companion galaxy (PGC 5142). Although this galaxy is not an NGC object, it is sometimes referred to as NGC 527B. The galaxy has an apparent size of 1.6' × 0.3' and a recessional velocity of approximately 5880 km/s.
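The relation between the quoted recessional velocity and distance can be sketched with Hubble's law, d = v/H0. The Hubble constant of 70 km/s/Mpc used below is an assumed round value, not taken from the article:

```python
H0 = 70.0           # km/s per Mpc (assumed round value)
MPC_TO_MLY = 3.262  # million light-years per megaparsec

def hubble_distance_mly(velocity_km_s):
    """Distance in millions of light-years implied by a recessional velocity."""
    return velocity_km_s / H0 * MPC_TO_MLY

# The companion's quoted ~5880 km/s gives a distance broadly consistent
# with NGC 527's ~259 Mly, within the uncertainty of H0.
print(round(hubble_distance_mly(5880)))  # 274
```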
See also
List of NGC objects (1–1000)
References
External links
SEDS
Lenticular galaxies
Sculptor (constellation)
0527
5128
Astronomical objects discovered in 1834
Discoveries by John Herschel | NGC 527 | [
"Astronomy"
] | 291 | [
"Constellations",
"Sculptor (constellation)"
] |
57,645,046 | https://en.wikipedia.org/wiki/Byssomerulius%20psittacinus | Byssomerulius psittacinus is a species of crust fungus in the family Irpicaceae. It was described as new to science in 2000 by mycologists Peter Buchanan, Leif Ryvarden, and Masana Izawa. The type was found in Fiordland National Park, where it was growing on the dead wood of Nothofagus. The specific epithet psittacinus ("parrot-like") refers to the wide range of colours observed in the fruit bodies. Initially a striking reddish-purple when fresh, it dries to brownish orange, pale orange yellow, or pale orange.
References
Fungi of New Zealand
Irpicaceae
Fungi described in 2000
Taxa named by Leif Ryvarden
Fungus species | Byssomerulius psittacinus | [
"Biology"
] | 151 | [
"Fungi",
"Fungus species"
] |
57,645,380 | https://en.wikipedia.org/wiki/Vancosamine | Vancosamines are aminosugars that are a part of vancomycin and other molecules within the vancomycin family of antibiotics. Vancosamine synthesis is encoded by the vancomycin (vps) biosynthetic cluster. Epivancosamine, a closely related aminosugar, is encoded by the chloroeremomycin (cep) biosynthetic cluster.
History
Vancosamine was first isolated by Lomakina et al. in 1968. In 1972, Johnson et al. were the first to identify and completely characterize vancosamine. Epivancosamine was subsequently isolated in 1988 by Hunt et al. at Eli Lilly.
Biosynthesis
The biosynthetic pathways of vancosamine and epivancosamine are identical, except in the last step. The enzymes that catalyze the reactions have been designated EvaA-E. A molecule of TDP-D-glucose enters the pathway via conversion to molecule 1 by an oxidoreductase enzyme and then a dehydratase enzyme. In the next step, EvaA dehydrates molecule 1 by deprotonating at 3-C to form an enolate, which then eliminates the 2-OH to form molecule 2. Molecule 2 is transformed into molecule 3 by tautomerizing to its keto form and then being transaminated by EvaB, using L-Glu as the ammonia source and PLP as a cofactor.
EvaC then methylates molecule 3 at the 3-C to form molecule 4, deprotonating to form an enolate intermediate that attacks a SAM methyl group in the active site of EvaC. EvaD then epimerizes molecule 4 at 5-C to form molecule 5. Finally, EvaE forms either vancosamine or epivancosamine by using NADH or NADPH to reduce the carbonyl at 4-C. The stereochemical outcome depends on the EvaE encoded in the biosynthetic cluster: vancomycin vps EvaE yields vancosamine, whereas chloroeremomycin cep EvaE yields epivancosamine.
The vancosamines are then used by the cell to synthesize vancomycin and related molecules. A glycosyltransferase attaches the amino sugar through α-1 ether linkages.
Additional modifications are possible at the 3-C amino group to create N-alkyl or N-acyl derivatives of this sugar.
Total syntheses
Several syntheses of vancosamine have been published.
See also
Vancomycin
Oritavancin
Glycopeptide antibiotics
References
Amino sugars | Vancosamine | [
"Chemistry"
] | 556 | [
"Amino sugars",
"Carbohydrates"
] |
57,645,524 | https://en.wikipedia.org/wiki/Quercus%20%C3%97%20saulii | Quercus × saulii is a hybrid oak tree in the genus Quercus. The tree is a hybrid of Quercus montana (chestnut oak) and Quercus alba (white oak).
References
saulii
Hybrid plants | Quercus × saulii | [
"Biology"
] | 45 | [
"Hybrid plants",
"Plants",
"Hybrid organisms"
] |
57,645,549 | https://en.wikipedia.org/wiki/CH%20Cygni | CH Cygni (CH Cyg / HIP 95413 / BD +49 2999) is a red giant, variable, symbiotic binary in the constellation Cygnus. It is the nearest symbiotic star to Earth, and one of the brightest, making it an ideal candidate for study.
Properties
CH Cygni has a mass of and a radius of . Its white-dwarf companion has a mass of , and the orbital period of the two stars is 5689 days. CH Cygni is classified as M7IIIab + Be.
Observation history
The earliest observations of CH Cygni were made in 1890 by Pickering and Wendel using a wedge photometer, and the star was classified as an M6III variable star in 1924. In 1963, strong H I emissions were observed, indicating CH Cygni was likely in a symbiotic relationship with a white dwarf. Similar emissions were observed in 1965, 1967, 1977, 1992, and 1998. The system was briefly thought to contain a third star, but this was later disproved.
In 1984 bipolar jets were detected coming from CH Cygni, which were suspected to be due to accretion from its companion star. The luminosity of the system decreased significantly in 1986, likely owing to dust thrown out of the system by the jets or a concurrent helium flash. This dust had dissipated by 2002, with subsequent luminosities returning to pre-1985 levels.
References
Cygnus (constellation)
M-type giants
182917
095413
Durchmusterung objects
Z Andromedae variables
Cygni, CH
Semiregular variable stars | CH Cygni | [
"Astronomy"
] | 329 | [
"Cygnus (constellation)",
"Constellations"
] |
57,646,061 | https://en.wikipedia.org/wiki/Cerocorticium%20molle | Cerocorticium molle is a species of crust fungus in the family Meruliaceae.
Taxonomy
The fungus was first described by Miles Berkeley and Moses Ashley Curtis in 1868 as Corticium molle. They described the fruit body of the type specimen as resembling "a thin coating of wax poured over the surface". It was transferred to genus Cerocorticium by Walter Jülich in 1975.
Habitat and distribution
Cerocorticium molle grows on the dead bark and wood of a variety of angiosperms, and it has occasionally been recorded growing on or under the bark of living trees. It is found in tropical and subtropical regions of Africa, Asia, North America, and South America.
References
Meruliaceae
Fungi described in 1868
Fungi of Asia
Fungi of Africa
Fungi of North America
Fungi of South America
Taxa named by Miles Joseph Berkeley
Taxa named by Moses Ashley Curtis
Fungus species | Cerocorticium molle | [
"Biology"
] | 183 | [
"Fungi",
"Fungus species"
] |
57,647,283 | https://en.wikipedia.org/wiki/Hong%20Kong%20ICT%20Awards | Hong Kong ICT Awards is a technology-related award in Hong Kong, organised annually by the Office of the Government Chief Information Officer of the Innovation and Technology Bureau of the Government of Hong Kong.
The eight categories of the award include Smart Business, Digital Entertainment, FinTech, ICT Startup, Smart Living, Smart People, Smart Mobility, and Student Innovation. The recipients attract widespread media coverage every year.
See also
List of computer science awards
References
Information science awards | Hong Kong ICT Awards | [
"Technology"
] | 92 | [
"Science and technology awards",
"Information science awards"
] |
57,647,913 | https://en.wikipedia.org/wiki/Hirao%20coupling | The Hirao coupling (also called the Hirao reaction or the Hirao cross-coupling) is the chemical reaction involving the palladium-catalyzed cross-coupling of a dialkyl phosphite and an aryl halide to form a phosphonate.
This reaction is named after Toshikazu Hirao and is related to the Michaelis-Arbuzov reaction. In contrast to the classic Michaelis-Arbuzov reaction, which is limited to alkyl phosphonates, the Hirao coupling can also deliver aryl phosphonates.
References
Organic reactions
Name reactions | Hirao coupling | [
"Chemistry"
] | 130 | [
"Coupling reactions",
"Chemical reaction stubs",
"Name reactions",
"Organic reactions"
] |
57,649,303 | https://en.wikipedia.org/wiki/List%20of%20exoplanets%20observed%20during%20Kepler%27s%20K2%20mission | This is a list of exoplanets observed during the Kepler space telescope's K2 mission.
On 31 March 2022, K2-2016-BLG-0005Lb was reported to be the most distant exoplanet found by Kepler to date.
List
References
Kepler space telescope
Lists of exoplanets
Transiting exoplanets | List of exoplanets observed during Kepler's K2 mission | [
"Astronomy"
] | 75 | [
"Space telescopes",
"Kepler space telescope"
] |
57,649,922 | https://en.wikipedia.org/wiki/Steccherinum%20straminellum | Steccherinum straminellum is a toothed crust fungus of the family Steccherinaceae. It was first described by Giacomo Bresadola in 1902 as Odontia straminella. The type collection was made in Portugal by Camille Torrend. After examining the type specimen, Ireneia Melo transferred the species to the genus Steccherinum in 1995.
References
Fungi described in 1902
Fungi of Europe
Steccherinaceae
Fungus species | Steccherinum straminellum | [
"Biology"
] | 98 | [
"Fungi",
"Fungus species"
] |
57,650,229 | https://en.wikipedia.org/wiki/Urethral%20resistance%20pressure | Urethral resistance pressure is the pressure existing in urethra during urination or other conditions generated by the detrusor muscle. It forces urine into and through the urethra in order for micturition. In the urethra, part of that pressure is converted to dynamic (forward) pressure which helps voiding happen. On the other hand, static (lateral) pressure helps preventing involuntary dribbling. Decline in urethral resistance pressure is one of the contributing factors is some forms of incontinence for example stress incontinence as a result of atrophy in menopause.
Decline in urethral resistance pressure is commonly associated with a decline in bladder outlet resistance.
Urethral retro-resistance pressure (URP) is a new clinical measure of urethral function measured by a new urodynamic measurement system. URP is the pressure required to achieve and maintain an open sphincter.
References
Urine | Urethral resistance pressure | [
"Biology"
] | 193 | [
"Urine",
"Excretion",
"Animal waste products"
] |
57,650,474 | https://en.wikipedia.org/wiki/Loop%20sectioning | In geometry and the mathematical discipline of topology, loop strip-mining, or sectioning, is a special case of tiling, namely 1-dimensional tiling: a loop is transformed into a depth-2 loop nest, where the outer loop is called tile/block loop and the innermost loop is called element loop.
Strip-mining was introduced for vector processors. It is a loop-transformation technique for enabling vectorization of loops and improving memory performance.
The term strip-mine is inspired by strip mining for coal, in which, for example, an excavator uses a bucket (or bucket wheel) to "strip" away the coal.
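The transformation described above can be illustrated with a toy sketch (Python here for readability; in practice the compiler performs this rewrite on vectorizable loops):

```python
N, TILE = 10, 4
a = [0] * N

# Original single loop:
#   for i in range(N): a[i] = 2 * i
#
# Strip-mined into a depth-2 loop nest:
for start in range(0, N, TILE):                    # tile/block loop
    for i in range(start, min(start + TILE, N)):   # element loop
        a[i] = 2 * i

print(a)  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

The `min(...)` bound handles the final partial tile when `N` is not a multiple of the tile size.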
Tiling | Loop sectioning | [
"Mathematics"
] | 134 | [
"Topology stubs",
"Topology"
] |
61,899,876 | https://en.wikipedia.org/wiki/K2-32 | K2-32 is a G9-type main sequence star slightly smaller and less massive than the sun. Four confirmed transiting exoplanets are known to orbit this star. A study of atmospheric escape from the planet K2-32b caused by high-energy stellar irradiation indicates that the star has always been a very slow rotator.
Planetary system
Discovery
The star K2-32 was initially found to have three transiting planet candidates by Andrew Vanderburg and collaborators in 2016. The innermost planet candidate at that time, K2-32b, was confirmed using radial velocity measurements made with the Keck telescope. Confirmation of planets c and d was made by Sinukoff et al. using adaptive optics imaging and computer analysis to eliminate possible false positives.
The Earth-sized planet K2-32e was discovered and validated by René Heller and team in 2019.
Characteristics
With periods of 4.34, 8.99, 20.66, and 31.71 days, the four planets' orbits are very close to a 1:2:5:7 orbital resonance chain. The densities of planets b, c, and d are between those of Saturn and Neptune, which suggests large and massive atmospheres. The planet K2-32e, with a radius almost identical to that of the Earth, is almost certainly a terrestrial planet. All four planets are well inside even the optimistic inner boundary of the habitable zone, which is located at 0.58 astronomical units.
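The near-resonance can be checked directly from the quoted periods: dividing each by the innermost period gives ratios within about five percent of the integers in the 1:2:5:7 chain.

```python
periods = [4.34, 8.99, 20.66, 31.71]  # days, as quoted in the article
targets = [1, 2, 5, 7]                # the 1:2:5:7 resonance chain

for p, t in zip(periods, targets):
    ratio = p / periods[0]
    # Each ratio lands within ~5% of its integer in the chain.
    print(f"{ratio:.2f} vs {t} (off by {abs(ratio - t) / t:.1%})")
```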
References
External links
The Extrasolar Planets Encyclopaedia entry for K2-32b
The Extrasolar Planets Encyclopaedia entry for K2-32c
The Extrasolar Planets Encyclopaedia entry for K2-32d
The Extrasolar Planets Encyclopaedia entry for K2-32e
G-type main-sequence stars
Ophiuchus
Planetary systems with four confirmed planets
Planetary transit variables | K2-32 | [
"Astronomy"
] | 389 | [
"Ophiuchus",
"Constellations"
] |
61,900,168 | https://en.wikipedia.org/wiki/ASASSN-19bt | ASASSN-19bt was a tidal disruption event (TDE) discovered by the All Sky Automated Survey for SuperNovae (ASAS-SN) project, with early-time, detailed observations by the TESS satellite. It was first detected on January 21, 2019, and reached peak brightness on March 4. The black hole which caused the TDE is in the 16th magnitude galaxy 2MASX J07001137-6602251 in the constellation Volans at a redshift of 0.0262, around 375 million light years away.
Observations in UV light made with NASA's Neil Gehrels Swift Observatory showed a drop in the temperature of the tidal disruption from around 71,500 to 35,500 degrees Fahrenheit (40,000 to 20,000 degrees Celsius) over a few days. This is the first time such an early temperature drop has been seen in a tidal disruption event. The transient resulting from the tidal disruption event has been cataloged as AT 2019ahk.
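The quoted Fahrenheit and Celsius figures are consistent under the standard conversion C = (F − 32) × 5/9, once rounded to the nearest thousand degrees:

```python
def f_to_c(fahrenheit):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (fahrenheit - 32) * 5 / 9

# The article's round-number pairs: 71,500 F ~ 40,000 C and 35,500 F ~ 20,000 C.
for fahrenheit in (71_500, 35_500):
    celsius = f_to_c(fahrenheit)
    print(f"{fahrenheit} F -> {round(celsius, -3):.0f} C")
```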
References
Black holes
Volans
2019 in outer space
Tidal disruption events | ASASSN-19bt | [
"Physics",
"Astronomy"
] | 224 | [
"Black holes",
"Tidal disruption events",
"Physical phenomena",
"Physical quantities",
"Concepts in astronomy",
"Astronomical events",
"Unsolved problems in physics",
"Astronomy stubs",
"Astrophysics",
"Constellations",
"Volans",
"Stellar astronomy stubs",
"Astrophysics stubs",
"Density",
... |
61,900,989 | https://en.wikipedia.org/wiki/Magic%20wavelength | The magic wavelength (also known as a related quantity, magic frequency) is the wavelength of an optical lattice where the polarizabilities of two atomic clock states have the same value, such that the AC Stark shift caused by the laser intensity fluctuation has no effect on the transition frequency between the two clock states.
AC Stark shift by optical lattice
The laser field in an optical lattice induces an electric dipole moment in the atoms to exert forces on them and hence confine them. However, the difference in polarizabilities of the atomic states leads to an AC Stark shift in the transition frequency between the two states, a shift that is dependent on the laser optical intensity at the particular atom location in the lattice. When it comes to precise measurements of transition frequency such as atomic clocks, the temporal fluctuations of the laser optical intensity would then deteriorate the clock accuracy. Furthermore, due to the spatial variation of laser intensity in the lattice, the atom's motion within the lattice would also be coupled into the uncertainty of the internal transition frequency of the atom.
Polarizability depends on wavelength
Despite having different functional forms, the polarizabilities of the two atomic states both depend on the wavelength of the laser field. In some cases, it is then possible to find a particular wavelength at which the two atomic states happen to have exactly the same polarizability. This particular wavelength, where the AC Stark shift vanishes for the transition frequency, is called the magic wavelength, and the frequency that corresponds to this wavelength is called the magic frequency. This idea was first introduced in a calculation by Hidetoshi Katori in 2003, and was experimentally realized by Katori's group in 2005.
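The crossing point can be illustrated with a toy model: give each clock state a single-resonance polarizability of the form α(ω) = A/(ω₀² − ω²) and find the frequency where the two curves meet by bisection. All numbers below (resonance frequencies, strengths, units) are invented for illustration, not real atomic data.

```python
# Toy single-resonance polarizabilities for two clock states
# (arbitrary units; all values are made-up illustrative numbers).
def alpha_ground(w):
    return 1.0 / (1.0**2 - w**2)

def alpha_excited(w):
    return 0.5 / (0.8**2 - w**2)

def magic_frequency(lo=0.1, hi=0.7, tol=1e-9):
    """Bisect for the frequency where the two polarizabilities are equal."""
    f = lambda w: alpha_ground(w) - alpha_excited(w)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

w_magic = magic_frequency()
# At w_magic the differential AC Stark shift of this toy clock transition vanishes.
print(f"magic frequency = {w_magic:.4f}")  # analytic answer: sqrt(0.28) = 0.5292
```

For this toy pair the crossing can also be solved by hand (set the two expressions equal and collect terms), which is how the expected value sqrt(0.28) was obtained.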
References
Physical quantities
Atomic clocks
Atomic physics | Magic wavelength | [
"Physics",
"Chemistry",
"Mathematics"
] | 350 | [
"Physical phenomena",
"Physical quantities",
"Time",
"Time stubs",
"Quantity",
"Quantum mechanics",
"Atomic physics",
" molecular",
"Atomic",
"Spacetime",
"Physical properties",
" and optical physics"
] |
61,902,400 | https://en.wikipedia.org/wiki/Invisible%20ships | The invisible ships (or ships not seen) myth claims that when European explorers' ships approached either North America, South America, or Australia, the appearance of their large ships was so foreign to the native people that they could not even see the vessels in front of them. It is likely based on a passage of Joseph Banks' diary describing the HMS Endeavour's arrival in Botany Bay. Banks wrote that the natives did not appear surprised or concerned from a distance, but unlike the myth, once the ships approached land they were confronted by armed men. Though the common versions of the myth are apocryphal and not based in science, it has been promoted by New Age personalities, prominently in the 2004 film What the Bleep Do We Know!?
Variations
There are several apocryphal variations on the myth, all of which involve native people being unable to see ships approaching due to perceptual blindness. In some versions, the explorer is not Captain Cook but Ferdinand Magellan or Christopher Columbus, or the land is the coast of North or South America.
The story has come to be associated with New Age works. A prominent example is the 2004 film What the Bleep Do We Know!?, created by the New Age sect Ramtha's School of Enlightenment. During a discussion in the film of the influence of experience on perception, neuroscientist Candace Pert relays a version of the myth whereby Native Americans were unable to see Columbus's ships because they were outside the natives' experience. The movie goes on to add a shaman to the narrative, who began to see ripples in the water and eventually could see the ship. Once the shaman started to tell people about them, others began to be able to see as well.
Historical basis
The invisible ships myth is likely based on the diary of botanist Joseph Banks, who traveled with Captain Cook on the HMS Endeavour and documented his account of the natives when entering Botany Bay in Australia in April 1770:
Banks goes on to say that while he was surprised the 106-foot ship did not receive more attention from a distance, when they came a bit closer they were confronted by armed men. This passage is also preceded by his observation that ten people had gone up to a hill to see the ship. Contrary to the myth, there was no reason to think natives did not see the ship apart from Banks' surprise at their reception from afar.
Explanation
According to various versions of the myth, Native Americans or Australians could not see the ships because they did not have a concept for such an object or because they did not fit into their experience. The large sailing ships did not resemble the smaller canoe-like ships that were more familiar. Philosopher J. R. Hustwit wrote that if these premises of the myth were true, "that unfamiliar objects are coated in some sort of cognitive Teflon ... learning would not be possible".
In the case of Banks and other versions of the myth, the natives' inability to see the ships is not based on native people describing their perception but on the perception of the explorers who expected a different reception. Bernie Hobbs of ABC Science, writing about the version in What the Bleep Do We Know!?, points out there is no known historical documentation of the Native Americans' perspective, that Native Americans at the time did not have a written language to document the event, and Columbus did not know the language even if the myth did originate with him.
Barry Evans of North Coast Journal suggests the more likely explanation is that "anything that wasn't a threat or didn't contribute to their well-being could be safely ignored" and that when it was perceived as a threat, they engaged directly. Hobbs of ABC Science likens the natives' likely experience to the inattentional blindness and selective attention demonstrated by the Invisible Gorilla Test produced by Christopher Chabris and Daniel Simons. The test takes the form of a video that includes several people passing a basketball back and forth while moving around the frame. The viewer is asked to count the number of times people wearing white shirts pass the ball. In the middle of the video a person in a gorilla suit walks from one side of the frame to the other, but many people who watch the video do not see the gorilla because they are focused on their task. Similarly, David Hambling wrote in Fortean Times that Europeans were "used to being the star attraction wherever they go", that it should not be surprising that they were perceived as hostile and so not warmly greeted, and that perhaps "the aborigines did not think that this outsize canoe was quite so 'remarkable' as Banks himself did".
According to an interviewee in a National Museum of Australia oral history project, the natives Banks wrote about may have ignored the explorers because "in Dharawal culture, contact with strangers or spirits from the afterlife caused spiritual consequences and was mostly avoided by the general community."
References
Perception
European folklore
Age of Discovery
Ships | Invisible ships | [
"Physics"
] | 1,009 | [
"Optical phenomena",
"Physical phenomena",
"Invisibility"
] |
61,902,722 | https://en.wikipedia.org/wiki/Interspirituality | Interspirituality, also known as interspiritual, is an interfaith concept where a diversity of spiritual practices are embraced for common respect for the individual and shared aspects across a variety of spiritual paths.
History
Interspirituality originates in the work of Wayne Teasdale, who developed this term to reflect commonalities between religious traditions, specifically those that are spiritual in nature. These commonalities across religious practices do not erase differences in beliefs, rather they build community and sharing across practices, leading to the ultimate goal of more human responsibility to one another and the planet as a whole. At its core, this is an "assimilation of insights, values, and spiritual practices" drawn from many different traditions that can be applied to one's own life to further personal, spiritual development.
Critique
While interspirituality is involved with common spiritual practices, these are not synonymous with how religious traditions practice. As such, interspirituality should not be considered synonymous with interfaith work, in part because some spiritual practices may be considered antithetic to certain religious practice, thereby including elements that would not be accepted by some conservative approaches. New insights that can be gained through aspects of other spiritual practices can be threatening to some faiths, as postmodern approaches to beliefs and practices can be challenging when individuals are encouraged to explore other practices to deepen one's own.
Interspiritual meditation
One way interspirituality is practiced is through interspiritual meditation. This was originally developed by Edward Bastian from the Snowmass Conferences convened by Thomas Keating, who organized gatherings of people from other spiritual practices, including the Dalai Lama. Through these gatherings, interspiritual meditation grew to incorporate insights in meditative and contemplative practices across many spiritual traditions, primarily through engaging in shared spiritual practices and then discussing them, rather than through lectures or formal teachings about them. These practices continued to develop and expand beyond Keating's death.
See also
Interfaith
Spirituality
Thomas Keating
Wayne Teasdale
References
Spirituality
Interfaith dialogue | Interspirituality | [
"Biology"
] | 424 | [
"Behavior",
"Human behavior",
"Spirituality"
] |
61,902,844 | https://en.wikipedia.org/wiki/IC%201993 | IC 1993 is an unbarred spiral galaxy in the constellation Fornax. It was discovered by Lewis Swift on November 19, 1897. At a distance of about 50 million light-years, and redshift of 1057 km/s, it is one of the closest to us of the 200 galaxies in the Fornax Cluster.
IC 1993 is a galaxy with several spiral arms in its disc, and it has a Hubble classification of (R')SA(s)bc, indicating it is an intermediate spiral galaxy with a ring on its outer edges. It is a remote galaxy, far from the center of the Fornax Cluster, lying at the cluster's edge. A bright foreground star near the galaxy makes deep observations more difficult; the galaxy's apparent magnitude is 12.6. Its size in the night sky is 2.5' x 2.2', and it has a diameter of 45,000 light-years.
IC 1993 is one of the 25 galaxies known to have rings or partial rings. Most resemble local collisional ring galaxies in morphology, size, and clumpy star formation. Clump ages range from to yr, and clump masses go up to several × solar masses, based on color evolution models. The clump ages are consistent with the expected lifetimes of ring structures if they are formed by collisions.
There are 15 other galaxies that resemble the arcs in partial ring galaxies but have no evident disk emission. Their clumps have bluer colors at all redshifts compared to the clumps in the ring and partial ring sample, and their clump ages are younger than in rings and partial rings by a factor of ~10. In most respects they resemble chain galaxies, except for their curvature.
Several rings are symmetric with centered nuclei and no obvious companions. They could be outer Lindblad resonance rings, although some have no obvious bars or spirals to drive them. If these symmetric cases are resonance rings, then they could be the precursors of modern resonance rings, which are only ~30% larger on average. This similarity in radius suggests that the driving pattern speed has not slowed by more than ~30% during the last ~7 Gyr. Those without bars could be examples of dissolved bars.
See also
NGC 1425, a similar spiral galaxy, also in the Fornax Cluster
NGC 1532, another spiral galaxy in the Fornax Cluster
References
Unbarred spiral galaxies
Fornax
Astronomical objects discovered in 1897
1993 | IC 1993 | [
"Astronomy"
] | 516 | [
"Fornax",
"Constellations"
] |
61,903,668 | https://en.wikipedia.org/wiki/Alice%20Christine%20Stickland | Alice Christine Stickland (16 March 1906 – 16 April 1987) was an applied mathematician and astrophysics engineer with interests in radar and radiowave propagation.
Early life
Alice Christine Stickland was born in Camberwell, London, on 16 March 1906. Her father was a publisher's clerk.
Education
Stickland studied mathematics at King's College, London, and graduated with a BSc in 1927. She then studied privately while working at the Radio Research Station, Ditton Park, first receiving an MSc in mathematical physics in 1929 and then a PhD in mathematical physics from the University of London in 1943. Her dissertation was titled 'The Propagation of the Magnetic Field of the Electron Magnetic Wave along the Ground and in the Lower Atmosphere'.
Career
Stickland worked as a scientific civil servant at the Radio Research Station between 1928 and 1947. She worked with radar pioneer Robert Watson-Watt on long-wave propagation, with Reginald Smith-Rose on short-wave propagation, and with Edward Appleton on the properties of the ionosphere.
Stickland, along with Smith-Rose, read a paper entitled 'Ultra-Short Wave Propagation - Comparison Between Theory and Experimental data' at the Institution of Electrical Engineers. The paper described the results of field intensity measurements obtained between 1937 and 1939 using the Post Office radio-telephone link between Guernsey and Chaldon.
She officially retired in 1968 but continued to work as General Editor of the Annals of the International Years of the Quiet Sun (1964-65), and with the International Council for Science’s Committee on Space Research (COSPAR). She was heavily involved in the Girl Guides’ Association.
Selected publications
Ultra-Short Wave Propagation - Comparison Between Theory and Experimental data - Dr. R. L. Smith-Rose, Miss A. C. Stickland
References
1906 births
1987 deaths
British mathematicians
Applied mathematicians
British electrical engineers
Alumni of King's College London
British women engineers
People from Camberwell
British women mathematicians | Alice Christine Stickland | [
"Mathematics"
] | 391 | [
"Applied mathematics",
"Applied mathematicians"
] |
61,905,432 | https://en.wikipedia.org/wiki/Cig%C3%A9o | Cigéo (an acronym for "centre industriel de stockage géologique", or "Industrial Centre for Geological Disposal") is a French project to construct a geological disposal facility for radioactive waste. It is conceived for the disposal of High-level waste (HLW) produced by French nuclear facilities, including during their decommissioning, and by nuclear reprocessing of spent fuel.
The Agence nationale pour la gestion des déchets radioactifs (Andra) is the French national radioactive waste management agency and is responsible for delivery of Cigéo. After more than thirty years of research, including at the Meuse/Haute Marne Underground Research Laboratory, Andra applied in 2023 to ASN, the French nuclear safety authority, for permission to construct the facility.
The Cigéo project is planned for a site several kilometres to the north, at the boundary of the departments of Meuse and Haute-Marne, within the bounds of Ribeaucourt, Bure, Mandres-en-Barrois, and Bonnet, in the drainage basin of the Seine, at the boundary with that of the Meuse. It is intended to site the waste – comprising approximately of long-lived HLW and intermediate-level waste (ILW) – in a layer of clay.
The principle of geological disposal was put into French law in 2006. After a public debate, which took place in 2013, the commission concluded that it was not urgent to begin disposal and that the timescale for implementation envisaged at the time should be revised. The law defines, in parallel, alternative disposal routes: long-term storage of radioactive waste, pending final disposal; or separation and transmutation of nuclear wastes into radioisotopes with weaker activity or shorter half-lives.
The estimated cost of the project varies between 15 and 36 billion Euros. The financing, theoretically the responsibility of waste producers, rests partly on the government budget. Social acceptability is one of the major parameters of the project – one billion Euros have been spent to this effect.
Two departmental Public Interest Groups have been created. The Haute-Marne group is presided over by Nicolas Lacroix, president of the Haute-Marne departmental council, and the Meuse group is presided over by Jérôme Dumont, president of the Meuse departmental council.
Since 1996, the project has provoked controversies concerning the financing, the reversibility of the process, uncertainties regarding the capability to guarantee containment of the waste for , the volume of waste requiring disposal, and whether the public debate has been genuine or illusory.
Storage of long-lived radioactive waste
Objectives of storage
The activities of nuclear facilities generate fission products with very high levels of radioactivity and lifetimes in the tens of millennia. Additionally, there are actinides that are less radioactive but have lifetimes in the millions of years, such as neptunium-237, which has a half-life of 2.1 million years, fission products with lower activity such as iodine-129 (half-life of 16 million years), and activation products such as chlorine-36 (half-life of ). These elements are non-reusable nuclear wastes. In nuclear reprocessing, they are separated from uranium and plutonium, which are potentially reusable.
The strategy for management of long-lived HLW (whether fission products, actinides or activation products) consists of isolating them in places inaccessible to humans for long enough for their radiotoxicity to decline, the principal challenge residing in the capacity of the facility to contain the radionuclides for a sufficiently long time by means of different barriers placed between the waste and ecosystems on the surface. One of the options currently retained consists of storing the waste at a depth of 300–500 metres in vaults dug out in a geological layer that is stable, dense and as impervious as possible (e.g. granite, volcanic tuff, or clay, as is envisaged in France). The hazard of the wastes diminishes as their radioactivity decays; the activity of the majority of these wastes will reach background levels in roughly a thousand years.
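The decay arithmetic behind these timescales can be sketched directly. A minimal illustration follows; the isotopes and half-lives used are standard published values, not figures from the Cigéo dossier:

```python
def remaining_fraction(elapsed_years: float, half_life_years: float) -> float:
    """Fraction of an isotope's initial activity left after `elapsed_years`,
    from the half-life law N(t)/N0 = 2 ** (-t / T_half)."""
    return 2.0 ** (-elapsed_years / half_life_years)

# Caesium-137 (half-life ~30.1 years), a typical shorter-lived fission product:
# after ten half-lives (~301 years) under 0.1% of the activity remains.
print(remaining_fraction(301, 30.1))    # ~9.8e-4

# Iodine-129 (half-life ~16 million years) barely decays over a millennium,
# which is why containment by the host rock, not decay, governs its hazard.
print(remaining_fraction(1_000, 16e6))  # ~0.99996
```

Ten half-lives reduce activity by roughly a factor of a thousand, which is why shorter-lived fission products dominate the early heat and dose, while the very long-lived, mobile isotopes dominate the long-term containment case.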
The dangers of irradiation are poorly quantified at low doses, but according to the international authorities on radiation protection (the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) and the International Commission on Radiological Protection (ICRP)), the effect is negligible for doses at levels similar to natural background radiation (on the order of a microsievert per hour, or 5 mSv/year). On the other hand, according to the IRSN, the radiological impact on people and ecosystems must be evaluated in both the short and the very long term. Underground storage allows the containment of radioactivity over the very long term: groundwater flow being very weak in the impermeable region, only certain mobile radionuclides are able to migrate over a period of tens of millennia, potentially reaching the surface only in extremely small quantities.
Two doctoral theses in 2008 and 2011 on archeological glasses and obsidians estimate that the vitrification process used to immobilise HLW should by itself be capable of assuring containment of radioactive materials for .
Nevertheless, for the evaluation of performance of geological disposal, migration models do not take credit for artificial confinement (the containers); only the natural rock is considered. The example of the natural nuclear fission reactors in the Oklo Mine, where non-volatile fission products have only migrated some centimetres in nearly 2 billion years, was used in preparatory works for Yucca Mountain nuclear waste repository to show that confinement over such timescales is possible.
According to a 2017 thesis in the history of science at the School for Advanced Studies in the Social Sciences (EHESS), Andra has had to retreat little by little from attempting to produce a formal proof of the absolute safety of disposal to instead presenting a body of arguments demonstrating that the evolution of Cigéo is controlled in the very long-term.
Study of Callovo-Oxfordian Clay
The region proposed by Andra for the location of Cigéo is in the East of France, at the boundary of the departments of Meuse and Haute-Marne.
Safety performance of a geological disposal site is dependent, among other factors, on the characteristics of the host rock. The geological layer planned for the location of the wastes is the "Callovo-Oxfordian". It consists of a layer of clay rock, about 160 million years old, situated at a depth of around in the East of the Paris basin (between 420 and depth at the site of the Bure laboratory). The argillites (a mix of clay and quartz from the Callovo-Oxfordian stages of the Jurassic) possess physico-chemical characteristics which tend to limit the migration of radionuclides. The clay layer, with a thickness of more than and at a depth of, has revealed excellent containment properties: stable for at least 100 million years and homogeneous over several hundred km, the formation has very low permeability and is resistant to groundwater flow (the principal cause of degradation of waste containers and dispersion of radionuclides), and the clay has an elevated retention capacity (capacity for sorption of radioactive elements).
During operation of the facility, Andra is aiming at a maximum acceptable dose of for the public and for monitored workers, which is a quarter of the current regulatory limit. For the long term, the objective is that the committed dose must remain lower than for the most affected reference group. Modelled estimates of the dose peak at the end of (dominated by iodine-129 and chlorine-36, which are both soluble); while staying significantly under the objective, the dose would be higher () in the case of hypothetical disposal of "CU1" and "CU2" spent fuel from EdF.
The goal of the Meuse/Haute Marne Underground Research Laboratory was the study of the clay layer, with a view to determining if its characteristics are consistent with the safety objectives of a disposal facility located in the transposition zone.
Andra's work has provided evidence that the properties of the Callovo-Oxfordian argillites will strongly reduce the mobility of actinides, and thus the activity flux out of the host rock formation, by confining them in the near field. ASN nevertheless underlined the necessity of taking into account the residual uncertainties regarding the homogeneity of the clay layer. These uncertainties have been cited by the France Nature Environnement association in justifying its opposition to the project.
Description of the project
General description
The planned facility is composed of surface facilities, notably for receipt and preparation of waste packages or support services for excavation and construction works.
It is envisaged that the wastes will be placed in underground stores situated at a depth of around , in a layer of clay rock expected to be impermeable and to have properties which support confinement over the very long term. A funicular railway should enable the waste packages to be taken underground or returned to the surface. The design and eventual construction, maintenance and operation of the funicular have been entrusted to the Grenoble-based Poma, a specialist in cable lifts, for a cost of €68 million and with a potential start of operations in 2025 (if the facility is constructed by that point).
Having entered the pre-industrial phase in 2011, the Cigéo project could accept its first waste packages in 2025, after a series of stages and a calendar defined by law. It is planned to operate Cigéo for at least 100 years. The underground disposal tunnels will be constructed progressively, as and when needed. Their footprint will extend to around after about 100 years of operations.
The law requires that the waste disposal be reversible for a minimum of 100 years, in order to allow future generations the possibility of modifying or adjusting the disposal process, for example by removal of the stored packages if another "mode of management" is planned or if the safety of the site is called into question. It is not, however, planned to make financial provision to cover some or all of the cost of such a reversal operation.
Wastes destined for Cigéo
Cigéo was conceived for the disposal of high-level waste, and long-lived intermediate-level waste (LL-ILW) which could not be disposed of in surface or near-surface disposal facilities for reasons of nuclear safety or radiation protection. For high-level wastes, the dose rate at 1 metre from an unshielded package can be several sieverts per hour at the time of disposal.
The wastes are conditioned in "parcels" by their producer before being placed in a disposal container. The estimated volumes of wastes for disposal at Cigéo are:
approximately conditioned HLW (approx. parcels), on the order of containers;
approximately LL-ILW (approx. parcels), on the order of containers.
The inventory considered by Andra for the conception of the Cigéo project only takes into account nuclear installations that were authorised (or were on the point of being authorised) on 31 December 2010, for a projected operational period of 50 years. However, for waste coming from the current operation of the fleet of nuclear power stations, Andra's reference inventory assumes that all spent fuels will ultimately be completely recycled (including MOX and enriched reprocessed uranium, which are not currently recycled). The abandonment of complete recycling of all spent fuels would therefore have a strong impact on the nature of the waste to be stored, but only towards the end of the century. If it were ultimately decided to dispose of untreated spent fuel in Cigéo, the design would have to be adapted accordingly and the footprint would be expected to increase to around (from around 15). Additionally, in the case of complete cessation of nuclear operations, separated plutonium (which could no longer be considered a recyclable nuclear material but would rather be a waste) would add to the inventory to be taken into account. According to Hervé Kempf, of Reporterre, reprocessing, which produces 5 types of wastes (minor actinides, plutonium, spent MOX, reprocessing uranium and spent uranium fuel), should be stopped, and both the storage conditions at the La Hague site and the project for a spent MOX fuel pool at Belleville-sur-Loire should be revisited.
The disposal of wastes from future nuclear installations at Cigéo would be possible, provided they are compatible with the site authorisation (in terms of volume, nature and level of activity). If the inventory to be taken into account exceeds Cigéo's authorised limits, these would have to be changed by a modification of the authorisation, the procedure for which would include a public inquiry.
The volumes to be stored are closely dependent on energy policy, with an increase in volume expected in the case of early closure of some power stations. Opponents of the project in the 2013 public debate demanded the adjournment of the debate until after the law on the energy transition programme, while ASN recommended, because of these uncertainties, that these "expansion hypotheses" be taken into account.
Reversibility of emplacement
In order to allow future generations the possibility of revisiting the choice for disposal, the law on the radioactive waste programme states that disposal shall be reversible, as a means of precaution.
The conditions of reversibility are not fixed a priori; they must be discussed during the public debate. After the public debate,
the Government will present a bill setting out these conditions, leading to a parliamentary debate; only then can the authorisation to construct the storage centre be issued. This authorisation will fix the minimum period during which the reversibility of storage must be ensured, and this duration may not be less than one hundred years
The notion of "reversibility" is relative: it depends on the containers retaining their integrity and the stores being left accessible, but also on the price one is willing to pay for a retrieval operation. Containers which have been placed hundreds of metres below ground and left there for decades or even centuries could perhaps remain technically recoverable, but the cost of doing so under acceptable safety conditions might be prohibitive. Thus, reversibility is considered in progressive stages, including conditioning in containers, emplacement, closure of cells, end of active operations in a storage gallery, backfilling of the gallery, through to the definitive closure of the centre. Each step taken makes reversal a little more difficult and expensive.
Reversibility must be taken into account in the design of the facility, which must facilitate the safe recovery of waste packages, despite the depth, for as long as the facility is not fully closed. To make this recovery possible "in complete safety":
containers and storage facilities must be so constructed as to be durable for at least the entire life of the storage facility, to allow easy access to waste packages;
the automated devices designed to place waste containers in storage facilities must be equally durable but also capable of removing these containers.
These devices and their maintenance obviously have a cost, which will be all the greater the more onerous the reversibility requirements are. The question of financing this reversibility is part of the broader reflection on intergenerational responsibility. The option taken by the actors of the project is for current generations to finance the laboratory, construction, operation and closure of Cigéo, since only they have chosen this storage method.
For Andra,
Some commentators, such as Jean-Marc Jancovici, believe that reversibility leads to undue complexity.
Cost of deep geological storage and sources of financing
Evaluation of the total cost of Cigéo must take into account all the costs of storage over more than 100 years: studies, construction of the first structures (surface buildings, shafts, declines (sloped tunnels)), operation (staff, maintenance, energy...), the gradual construction of underground structures, then their closure, their monitoring, etc. Part of these costs/investments will be the salaries of the workforce employed in the digging, construction and storage work, who, according to Andra, will number to for at least a hundred years.
In 2003, Andra published a first estimate of the cost, based on technical concepts from 2002. Several scenarios were selected, with costs ranging from €15.9 billion to €55 billion depending on the reprocessing options chosen.
In 2009, Andra sent producers a new design dossier and a new estimate (known as "SI 2009") of the cost of deep storage, then estimated at €33.8 billion in 2008 Euros (€35.9 billion in 2010 Euros). The 2009 file includes an increase in the inventory to be stored, and technical developments to better take into account the requirements of safety and reversibility.
In 2013, Andra had to make a new estimate. On the basis of the technical outline refined by Andra at the beginning of 2013, and after an initial optimisation exercise, the estimate amounted to €28 billion (in 2013 Euros) at the end of 2013, excluding research, insurance and tax expenditure, i.e. a substantially identical amount on a like-for-like basis. Avenues for optimisation still need to be investigated between Andra and the producers to refine this costing.
In November 2013, Andra stated during a public debate that this re-evaluation would not be submitted to the government until 2014. After collecting the comments of waste producers and the opinion of the Nuclear Safety Authority, the Minister responsible for energy must adopt the assessment of the costs and make it public.
In January 2016, the cost was officially set at 25 billion euros by the Ministry of Ecology and Sustainable Development, in charge of energy.
The cost will theoretically be financed by the waste producers (EdF, the CEA and Areva (now Orano)), through agreements with Andra, which will constitute a "fund intended to finance the construction, operation, permanent shutdown, maintenance and monitoring of disposal or storage facilities for high- or intermediate-level long-lived waste". For a new nuclear reactor over its entire operating life, this cost represents on the order of 1 to 2% of the total cost of electricity production.
Safety expectations of the Nuclear Safety Authority
In France, any entity planning to establish or operate a nuclear installation must file a "Safety Options Case".
ASN published a safety guide for final geological disposal of radioactive waste in 2008 and issued several opinions on the file before the 2013 public inquiry (whose conclusions were issued at the beginning of 2014).
After the public debate on the project (end of 2013), Andra announced that it wanted to start operating the storage in 2025, with a "pilot industrial phase" "of 5 to 10 years" preceding a long phase of current operation. On this occasion, it announced that it would submit a safety options file to ASN in 2015, prior to the application for authorisation to construct. This file will include "documents relating to technical recoverability options, draft preliminary package acceptance specifications and a master plan for operations".
On 20 January 2015, ASN replied to Andra by informing it by letter of 19 December 2014 of its expectations regarding the safety options case:
full coverage of the site, including all installations (surface, underground and surface-underground connections);
self-supporting structure of the installations;
clear presentation of the objectives, concepts and principles chosen for safety (in operation and in the long term, and at all phases of the life of the installation: design, construction, operation, shutdown, dismantling or closure, maintenance and monitoring, as applicable depending on the sub-assemblies of the installation concerned);
reversibility (in the broad sense of the OECD), with dual requirements:
the requirement for the adaptability of the installation (so that uses can be reallocated at the time of construction or operation, in order to be able to develop the installations), and
the requirement for waste recoverability "for a specified period of time", addressing common problems of difficulty in the accessibility of waste packages (including after closure of storage cells and access galleries, or in case of loss of integrity of containment of waste containers, and taking into account aging or structural damage).
ASN also insists on knowing Andra's subcontracting policy and on seeing in the file an initial draft of the notice provided for in paragraph II Article 8 of the Decree of 2 November 2007 [7] presenting Andra's technical capabilities for the construction and operation of this facility as defined in Article 2.1.1 of the Decree of 7 February 2012, and lists other requirements in an annex to the letter.
ASN's opinion on the safety options case, published on 15 January 2018, confirms the analysis of its technical expert, judging that the project has reached "satisfactory technological maturity". However, it takes up the concerns expressed in the summer of 2017 by the Institute for Radiological Protection and Nuclear Safety (IRSN) on bituminous waste, which represent 16% of the volumes and 18% by number of the packages that Andra plans to store, which would present fire risks. Two solutions are therefore available to Andra with respect to bituminous wastes: treat them to make them inert, for example by a pyrolysis process, or modify the design of Cigéo to avoid a chain reaction in the event of a fire in a package.
The problem of discounting and the stability of financing
In accordance with the 2006 law on radioactive wastes, producers are legally obliged to evaluate the long-term costs posed by their wastes and to set aside funds to meet those costs. These expenses are not accounted for at "gross value", but are discounted: dedicated assets are invested and earn financial interest. If, for example, the interest rate is 3.04%, a euro invested today will theoretically yield 1.0304^100 ≈ €20 after a century, which makes it possible to balance an expenditure twenty times higher in a hundred years' time.
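This discounting arithmetic can be sketched as follows; the rates and amounts are the article's illustrative figures, not actual provision data:

```python
def provision_today(future_cost: float, years: float, rate: float) -> float:
    """Amount to set aside now so that, compounded annually at `rate`,
    it covers `future_cost` after `years`: P = C / (1 + r) ** years."""
    return future_cost / (1.0 + rate) ** years

# The article's example: at 3.04%, one euro grows roughly 20-fold in a
# century, so a 20 EUR expense in 100 years needs only ~1 EUR today.
print(provision_today(20.0, 100, 0.0304))  # ~1.0

# Sensitivity: if the achieved return is only 2%, the same 20 EUR expense
# needs nearly three times more today, showing how strongly the assumed
# rate drives the size of the provisions.
print(provision_today(20.0, 100, 0.02))    # ~2.76
```

This is why the choice of discount rate is contested: a small change in the assumed long-term return multiplies or divides the provisions required of the waste producers.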
One difficulty raised by opponents of the project is that, as a result of the discounting, the provisions made by waste producers only very partially cover the future costs of the storage centre, with the balance to be made up by the expected return on investments. The high discount rates (5% and/or 3%) applied to long-term charges allow operators to set aside only €5 billion for the Cigéo project, whereas the project is expected to cost at least seven times more. If the cost is under-estimated or the return on investments over-estimated, the fund would be insufficient to cover the cost.
This objection rests on the ability of financial investments to perform over the long term. However, the discount rate used by waste producers is not in fact fixed, but is itself constrained: "it cannot exceed the rate of return, as expected with a high degree of confidence, of the hedging assets, managed with a degree of security and liquidity sufficient to meet their purpose", and must be assessed annually: if the financial return on the provisions is lower than expected, producers must reassess their charges (upwards), which unbalances their accounts. In this case, "the administrative authority notes an insufficiency or inadequacy in the assessment of the charges, the calculation of the provisions or the amount, [and may] prescribe the measures necessary to regularize its situation by setting the deadlines within which it must implement them". Operators are then required to increase provisions to rebalance their long-term expense accounts.
The State has decided not to cover the CEA's expenses from its own assets, but will ensure their financing through the budget; for operators whose costs are mainly long-term, the deadline for complying with this coverage rule has been extended from 2011 to 2014.
History
Law of 30 December 1991
The Law of 30 December 1991 on research into the management of radioactive waste organises, over a period of 15 years, research on the management of high-level and long-lived radioactive waste and work according to three families of possible methods:
separation and transmutation of long-lived radioactive elements present in this waste;
reversible or irreversible storage in deep geological formations, in particular through the construction of underground laboratories;
processes for conditioning and long-term surface storage of the waste.
This law provides that, at the end of a period which may not exceed fifteen years, the government will submit to parliament a global report evaluating this research, accompanied by a draft law authorizing, if necessary, the creation of a storage centre for high-level and long-lived radioactive waste.
Developments from 1992 to 2005
In 1992, a call for applications was launched for the choice of departments to host underground laboratories. Thirty applications were received from 11 departments. At the end of 1993, four departments were selected by the government: Gard, Vienne, Meuse and Haute-Marne.
In 1998, after geological and public investigations, the Government of Lionel Jospin opted to build a single laboratory in Bure.
From 1999 to 2004, the Bure underground laboratory was built. In 2005, Andra published the "Argile 2005" (Clay 2005) dossier, which took stock of 15 years of research supplemented by experiments carried out in the underground laboratory, and concluded that it was feasible in principle for the waste to be stored in a geological clay layer, subject to a certain amount of additional research.
In January 2006, the National Commission for the Evaluation of Research on the Management of Radioactive Waste (CNE), created by the 1991 law, published a global report on the results of 15 years of work in preparation for a future bill "authorising, if necessary, the creation of a storage centre for high-level and long-lived radioactive waste". In particular, the CNE recommends "reversible disposal in deep geological situations" which represents the "reference route" for the definitive management of final waste. It also proposes the continuation of research in the underground laboratory located in Bure.
Law of 28 June 2006
The 2006 law stipulated that the decision whether or not to authorise Cigéo would be preceded by:
the organisation of a public debate. The debate was opened on 15 May 2013 by the National Commission for Public Debate, with 15 public meetings announced (to take place from 15 May to 15 October 2013, with "interventions by various experts on the subject"), organised by the Special Committee on Public Debate (CPDP). During this time, the public was also able to express itself via a "participatory website". This debate was to:
inform the public about Cigéo, its industrial design, safety, reversibility, location and monitoring;
collect opinions on the objectives, modalities, characteristics and impacts of Cigéo according to the actors and people wishing to express themselves on this subject;
inform the State on the decision to be made.
Before mid-December, the CPDP was to publish a report of the debates, and the CNDP (National Commission for Public Debate) was to draw up the record. Andra would then have three months to indicate, by means of a reasoned act, the follow-up it intended to give to its project in the light of the lessons learned from the public debate.
Submission of the application for authorisation to construct (by the Agence nationale pour la gestion des déchets radioactifs) in 2015;
From 2015 to 2018: examination of this request by the competent authorities and collection of opinions from local authorities; a law on the conditions of reversibility of storage; the opening of a public inquiry; and, based on the results of the previous steps, permission to carry out storage.
The solutions proposed by Andra will be subject to independent control:
The National Evaluation Commission (CNE) carries out a scientific and technical control to ensure the technical feasibility and performance of the storage method. It reports annually on this control to Parliament and the government;
The French Nuclear Safety Authority (ASN) monitors the project's compliance with regulatory requirements (radiation protection and safety). It relies on the scientific and technical expertise of the Institute for Radiation Protection and Nuclear Safety (IRSN) and on Permanent Groups of Experts;
a local information and monitoring committee (CLIS) has the role of reviewing information and consultation processes in general about the storage site.
Finally, Parliament is monitoring the progress of the project through the Parliamentary Office for the Evaluation of Scientific and Technological Options (OPECST).
The application for authorisation to create Cigéo, which was due to be sent to ASN in 2018, was postponed in 2017 until mid-2019.
Debates and controversies
At the beginning of 2013, the National Commission for Public Debate (CNDP) prepared the debate on the storage site project. On 4 February 2013, the Minister of Ecology, Delphine Batho, went to Bure to visit the underground laboratory. On 6 February, she validated the dossier prepared by Andra to present the project during the public debate, which was to be held from 15 May to 31 July and from 31 August to 15 October 2013.
For the director of Andra, "the decision to create a storage site in Meuse and Haute-Marne has not yet been taken. [...] On the one hand, [...] it will require the green light from the French Nuclear Safety Authority (ASN). On the other hand, [...] both departments have agreed to the underground laboratory, but they have not yet said 'yes' to the storage centre, and we are perfectly aware of this."
Boycott of the debates
On 15 May 2013, around 40 organisations called for a boycott of the debate, in particular many local groups including Bure Zone Libre, the national federation of Friends of the Earth and the Sortir du nucléaire Network.
On 23 May and again on 18 June, opponents of the project prevented the debates from taking place, believing that the decisions had already been taken. The chairman of the committee for the debate on this project, Claude Bernet, suspended the session after a quarter of an hour, to the regret of the CNDP, which noted that "many participants had been deprived of their rights to information and expression on the project." Similarly, the Haut comité pour la transparence et l'information sur la sécurité nucléaire (HCTISN) announced that it deplored "these obstacles to the proper conduct of public meetings of the debate, a debate organized precisely within the framework of the laws of the Republic in order to guarantee a real exercise of democracy."
A poll of residents of Meuse and Haute-Marne showed that 83% of them were in favour of opponents of the project participating in the public debate, but 68% agreed with the statement that "the debate will be useless, as the conclusions are known in advance", while still considering the debate useful for raising the level of information.
However, Andra declared that "there is no statutory instrument that says that the debate must take the form of public meetings. The National Commission for Public Debate (CNDP) has just proposed alternative solutions, such as adversarial forums on the Internet or a citizens' conference."
On 12 February 2014, the President of the CNDP, Christian Leyrit, proposed to phase the creation of the industrial centre for the geological disposal of nuclear waste (Cigéo) by starting with a "significant stage" of "pilot storage".
Law of 2016
In June 2015, the Conseil constitutionnel (constitutional council) censured the inclusion in the Macron law of an article on reversibility. This was finally included in the law setting the framework for the Cigéo project adopted in July 2016.
On 8 November 2017, at the request of Andra, the CNDP announced the appointment of two guarantors to support it in the process of informing and involving civil society in the project (Pierre Guinot-Delery and Jean-Michel Stievenard). Given the complexity of the case and the resignation of one of the two guarantors, the CNDP decided on 6 June 2018 to appoint three guarantors (Jean-Michel Stievenard, Marie-Line Meaux and Jean-Daniel Vazelle).
Intensification of protests and judicial response
From 2016, the Lejuc wood in Mandres-en-Barrois, on which Cigéo's facilities could be built, became a symbol of opposition to the project. It was occupied by activists while the secret ballot of the municipal council of Mandres authorising its transfer to Andra was challenged for formal defects. The ballot was annulled on 28 February 2017 by the Administrative Court of Nancy, which led the municipal council to meet again on 18 May to confirm its first decision. However, the forest remained occupied by opponents of the project, who were evicted by the gendarmes on 22 February 2018. As the legal remedies concerning the transfer of the Lejuc wood had not been exhausted, the legality of this eviction was contested by the lawyers of the opponents of the project. In the days that followed, the materials that the opponents had installed in the woods to prevent access and facilitate occupation were removed.
The protests, moreover, sometimes took a violent turn (an attempted arson of the hotel-restaurant located near the laboratory, damage to the court of Bar-le-Duc, threats against parliamentarians and journalists), which led the judiciary to open an investigation for criminal association into several anti-nuclear activists who had come to settle in Bure and neighbouring villages. To this end, it has used criminal analysis methods. The telephone tapping carried out in this context is presented by Reporterre and Mediapart as part of an "inordinate intelligence machine" directed at the anti-nuclear movement, the cost of which is estimated at around one million euros.
Prior Opinions and Authorisations and Preparatory Works
The public inquiry file was filed on August 3, 2020. On 13 January 2021, the Environmental Authority issued its opinion, in which it recommended the presentation of a detailed programme of additional risk management and monitoring studies, while the National Commission for Public Debate (CNDP) stressed the importance of in-depth consultation on the rehabilitation of the Nançois-Tronville-Gondrecourt railway line. In February 2021, the General Secretariat for Investment published a favourable opinion on the Cigéo project, highlighting the "strong prudential and insurance value" of the project, while pointing out the "significant and serious risk of cost drift".
As the public inquiry file was updated to reflect these recommendations, the inquiry was launched on 9 August 2021 and ran from 15 September to 23 October. On 20 December, the investigating commissioners gave an opinion "unreservedly" in favour of the declaration of public utility and the compatibility of the urban planning documents.
On July 8, 2022, the declaration of public utility (DUP) for the Cigéo project was published by decree. This DUP will allow the urban planning documents to be brought into compliance and the acquisition by the National Agency for the Management of Radioactive Waste (Andra) of the necessary land by expropriation. The decree specifies that the expropriations of land necessary for the realization of the project will be "carried out before 31 December 2037", and those "concerning only the subterranean [aspects] [...] no later than 31 December 2050".
On 17 January 2023, Andra submitted the application for authorisation to create the Cigéo site to the Ministry of Energy Transition. The Nuclear Safety Authority has five years to examine the file and decide whether or not to authorise the creation of the site.
See also
Agence nationale pour la gestion des déchets radioactifs (Andra)
Geology of France
Radioactive waste disposal
Meuse/Haute Marne Underground Research Laboratory
France Nature Environnement
Long-term nuclear waste warning messages
Notes and references
Bibliography
External links
Andra's Cigéo website (English page)
, prepared for The Local Committee for Information and Monitoring of the Bure Laboratory (CLIS)
Dossier presented for the public debate in 2013.
Radioactive waste
Waste management in France
Nuclear technology in France
Proposed infrastructure in France
Meuse (department)
Haute-Marne | Cigéo | [
"Chemistry",
"Technology"
] | 7,484 | [
"Radioactive waste",
"Environmental impact of nuclear power",
"Radioactivity",
"Hazardous waste"
] |
61,906,019 | https://en.wikipedia.org/wiki/Vanadyl%20isopropoxide | Vanadyl isopropoxide is the metal alkoxide with the formula VO(O-iPr)3 (iPr = CH(CH3)2). A yellow volatile liquid, it is a common alkoxide of vanadium. It is used as a reagent and as a precursor to vanadium oxides. The compound is diamagnetic. It is prepared by alcoholysis of vanadyl trichloride:
VOCl3 + 3 HOCH(CH3)2 → VO(OCH(CH3)2)3 + 3 HCl
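As a quick sanity check on the stoichiometry of VO(O-iPr)3 (the atomic weights below are standard values supplied for this sketch, not taken from the article):

```python
# Molar mass of VO(OCH(CH3)2)3: 1 V, 4 O, 9 C, 21 H
W = {"V": 50.942, "O": 15.999, "C": 12.011, "H": 1.008}
mass = 1 * W["V"] + 4 * W["O"] + 9 * W["C"] + 21 * W["H"]
print(round(mass, 1))  # ≈ 244.2 g/mol
```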
The related cyclopentanoxide VO(O-CH(CH2)4)3 is a dimer, in which one pair of alkoxide ligands binds weakly trans to the vanadyl oxygens.
References
Vanadium(V) compounds
Alkoxides
Vanadyl compounds
Isopropyl compounds | Vanadyl isopropoxide | [
"Chemistry"
] | 188 | [
"Functional groups",
"Bases (chemistry)",
"Alkoxides"
] |
61,906,365 | https://en.wikipedia.org/wiki/Librem%205 | The Librem 5 is a smartphone manufactured by Purism that is part of their Librem line of products. The phone is designed with the goal of using free software whenever possible and includes PureOS, a Linux operating system, by default. Like other Librem products, the Librem 5 focuses on privacy and freedom and includes features like hardware kill switches and easily-replaceable components. Its name, with a numerical "5", refers to its screen size, not a release version. After an announcement on 24 August 2017, the distribution of developer kits and limited pre-release models occurred throughout 2019 and most of 2020. The first mass-production version of the Librem 5 was shipped on 18 November 2020.
History
On August 24, 2017, Purism started a crowdfunding campaign for the Librem 5, a smartphone aimed not only to run purely on free software provided in PureOS but to "[focus] on security by design and privacy protection by default". Purism claimed that the phone would become "the world's first ever IP-native mobile handset, using end-to-end encrypted decentralized communication". Purism has cooperated with GNOME in its development of the Librem 5 software. It is planned that KDE and Ubuntu Touch will also be offered as optional interfaces.
The release of the Librem 5 was delayed several times. It was originally planned to launch in January 2019. Purism announced on September 4, 2018 that the launch date would be postponed until April 2019, due to two power management bugs in the silicon and the Europe/North America holiday season. Development kits for software developers, which were shipped out in December 2018, were unaffected by the bugs, since developers normally connect the device to a power outlet rather than rely on the phone battery. In February, the launch date was postponed again to the third quarter of 2019, because of the necessity of further CPU tests.
Specifications and pre-orders, at a price of $649 that would later increase to $699, were announced in July 2019. On September 5, 2019, Purism announced that shipping was scheduled to occur later that month, but that it would be done as an "iterative" process. The iterative release plan included the announcement of six different "batches" of Librem 5 releases, of which the first four would be limited pre-production models. Each consecutive batch, with its own arboreal-themed code name and release date, would feature hardware, mechanical, and software improvements. Purism contacted each customer that had pre-ordered to allow them to choose which batch they'd prefer to receive. The pre-mass-production batches, in order of release, were code-named "Aspen", "Birch", "Chestnut", and "Dogwood". The fifth batch, "Evergreen", would be the official mass-production model, while the sixth batch, "Fir", would be the second mass-production model.
On September 24, 2019, Purism announced that the first batch of limited-production Librem 5 phones (Aspen) had started shipping. A video of an early phone was produced and a shipping and status update was released soon after. However, it was later reported that the Aspen batch had been shipped only to employees and developers. On November 22, 2019, it was reported that the second batch (Birch) would consist of around 100 phones and would be in the hands of backers by the first week of December. In December 2019, Jim Salter of Ars Technica reported "prototype" devices were being received; however, they were not really a "phone" yet. There was no audio when attempting to place a phone call (which was fixed with a software update a few weeks later), and cameras didn't work yet. Reports of the third batch of limited pre-mass-production models (Chestnut) being received by customers and reviewers occurred in January 2020. By May 2020, TechRadar reported that the call quality was fine, though the speaker mode was "a bit quiet", and volume adjustment did not work. According to TechRadar, the 3 to 5-hour battery time and the inability of the phone to charge while turned on was "A stark reminder of the Librem 5's beta status".
On November 18, 2020, Purism announced via press release that they had begun shipping the finished version of the Librem 5, known as "Evergreen". Earlier, in December 2019, Purism had announced that it would offer a "Librem 5 USA" version of the phone for the price of $1999, assembled in the United States for extra supply chain security. According to Purism CEO Todd Weaver, "having a secure auditable US based supply chain including parts procurement, fabrication, testing, assembly, and fulfillment all from within the same facility is the best possible security story."
Hardware
The Librem 5 features an i.MX 8M Quad Core processor with an integrated GPU which supports OpenGL 3.0, OpenGL ES 3.1, Vulkan 1.0 and OpenCL 1.2 with default drivers; however, since the driver used is the open source Etnaviv driver, it currently only supports OpenGL 2.1 and OpenGL ES 2.0. It has 3 GB of RAM, 32 GB of eMMC storage, a 13 MP rear camera, and an 8 MP front camera. The left side of the phone features three hardware kill switches, which cut power to the camera and microphone, Wi-Fi and Bluetooth modem, and the baseband modem. The device uses a USB-C connector for charging. The 144 mm (5.7-inch) IPS display has a resolution of 1440×720 pixels. It also has a 3.5 mm TRRS headphone/mic jack, a single SIM slot, and a microSD card slot.
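As a quick arithmetic aside (assuming, as is conventional, that the 5.7-inch figure refers to the screen diagonal), the pixel density follows from the stated resolution:

```python
import math

# Pixel density of a 1440x720 panel with a 5.7-inch diagonal
# (treating 5.7 inches as the diagonal is an assumption of this sketch)
w_px, h_px, diag_in = 1440, 720, 5.7
diag_px = math.hypot(w_px, h_px)  # diagonal length in pixels
ppi = diag_px / diag_in
print(round(ppi))  # ≈ 282 pixels per inch
```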
Battery
The Librem 5 is powered by a lithium-ion battery. The capacity of the battery was 2000 mAh in the earliest development batches, which was increased to 4500 mAh in the mass-production batch. The battery is designed to be user-replaceable. The battery is unique to the Librem 5 and cannot be replaced by any other battery type. In addition, Purism ships replacement batteries only within the US unless combined with another device.
Mobile security
The hardware features three hardware kill switches that physically cut off power from both cameras and the microphone, Wi-Fi and Bluetooth, and baseband processor, respectively. Further precautionary measures can be used with Lockdown Mode, which, in addition to powering off the cameras, microphone, WiFi, Bluetooth and cellular baseband, also cuts power to the GNSS, IMU, ambient light and proximity sensor. This is possible due to the fact that these components are not integrated into the system on a chip (SoC) like they are in conventional smartphones. Instead, the cellular baseband and Wi-Fi/Bluetooth components are located on two replaceable M.2 cards, which means that they can be changed to support different wireless standards. The kill switch to cut the circuit to the microphone will prevent the 3.5 mm audio jack being used for acoustic cryptanalysis.
In place of an integrated mobile SoC found in most smartphones, the Librem 5 uses six separate chips: i.MX 8M Quad, Silicon Labs RS9116, Broadmobi BM818 / Gemalto PLS8, STMicroelectronics Teseo-LIV3F, Wolfson Microelectronics WM8962, and Texas Instruments bq25895.
The downside to having dedicated chips instead of an integrated system-on-chip is that it takes more energy to operate separate chips, and the phone's circuit boards are much larger. On the other hand, using separate components means longer support from the manufacturers than with mobile SoCs, which have short support timelines. According to Purism, the Librem 5 is designed to avoid planned obsolescence and will receive lifetime software updates.
The Librem 5 is the first phone to contain a smartcard reader, in which an OpenPGP card can be inserted for secure cryptographic operations. Purism plans to use OpenPGP cards to implement storage of GPG keys, disk unlocking, secure authentication, a local password vault, protection of sensitive files, user persons, and travel persons.
To promote better security, all the source code in the root file system is free/open source software and can be reviewed by the user. Purism publishes the schematics of the Librem 5's printed circuit boards (PCBs) under the GPL 3.0+ license, and publishes x-rays of the phone, so that the user can verify that there haven't been any changes to the hardware, such as inserted spy chips.
Software
The Librem 5 ships with Purism's PureOS, a Debian GNU/Linux derivative. The operating system uses a new mobile user interface developed by Purism called Phosh, a portmanteau from "phone shell". It is based on Wayland, wlroots, GTK 3, and GNOME. Unlike other mobile Linux interfaces, such as Ubuntu Touch and KDE Plasma Mobile, Phosh is based on tight integration with the desktop Linux software stack, which Purism developers believe will make it easier to maintain in the long-term and incorporate into existing desktop Linux distributions. Phosh has been packaged in a number of desktop distros (Debian, Arch, Manjaro, Fedora and openSUSE) and is used by eight of the sixteen Linux ports for the PinePhone.
The phone is a convergence device: if connected to a keyboard, monitor, and mouse, it can run Linux applications as a desktop computer would. Many desktop Linux applications can run on the phone as well, albeit possibly without a touch-friendly UI.
Purism is taking a unique approach to convergence by downsizing existing desktop software to reuse it in a mobile environment. Purism has developed the libhandy library (now replaced with Libadwaita) to make GTK software adaptive so its interface elements adjust to smaller mobile screens. In contrast, other companies such as Microsoft and Samsung, and previously Canonical with Ubuntu's Unity8, tried to achieve convergence by having separate sets of software for the mobile and desktop PC environments. Most iOS apps, Android apps and Plasma Mobile's Kirigami implement convergence by upsizing existing mobile apps to use them in a desktop interface.
Purism claims that the "Librem 5 will be the first ever Matrix-powered smartphone, natively using end-to-end encrypted decentralised communication in its dialer and messaging app".
Purism was unable to find a free/open-source cellular modem, so the phone uses a modem with proprietary hardware, but isolates it from the rest of the components rather than having it integrated with the system on a chip (SoC). This prevents code on the modem from being able to read or modify data going to and from the SoC.
See also
Comparison of open-source mobile phones
List of open-source mobile phones
Microphone blocker
Modular smartphone
PinePhone
Libadwaita
References
External links
Librem 5
Linux-based devices
Mobile Linux
Mobile security
Mobile/desktop convergence
Modular smartphones
Open-source mobile phones
Secure communication
Mobile phones introduced in 2020
Mobile phones with user-replaceable battery
Right to repair | Librem 5 | [
"Technology",
"Engineering"
] | 2,387 | [
"Mobile security",
"Cybersecurity engineering",
"Modular design",
"Modular smartphones"
] |
61,907,040 | https://en.wikipedia.org/wiki/U-JIN%20Tech%20Corp. | U-JIN Tech Corp. is a South Korean manufacturer of friction welding machines and automated manufacturing cells.
History
U-JIN Tech Corp. was founded in February 2009. It established its own R&D center within the Korea Industrial Technology Association (KOITA) in 2010, with the objective of developing new products.
U-JIN initially developed and manufactured hydraulic friction welding machines, and built Korea's first CNC friction welding machine in 2012.
In 2015 the company was recognized as Contributor for Development of Excellent Capital Goods by the Minister of Trade, Industry, and Energy. In November 2016 it received the European CE Certificate and started exporting machines to Europe. On Trade Day in December 2016, it received the 10 Million Dollar Export Tower Award.
Friction welding machines
CNC technology is used by U-JIN Tech Corp. both for automatic material transport and in cases where high accuracy is required. Due to the position measuring devices known from CNC milling machines, the length tolerance of the components can be maintained more accurately than with conventional hydraulic machines. It is even possible to bring the spindle to a standstill in a given position so that the two eyes of a drive shaft can be positioned at an angle to each other.
The two spindles of U-JIN's computer numerical controlled double-head friction welding machines are driven by servo motors that allow the angular position of their motor shaft to be controlled, as well as the speed of rotation and acceleration, since they are equipped with position sensors. If the spindles are controlled in the same way as CNC-controlled servo motors, angular accuracies of ±0.5° can be achieved, e.g. at both ends of a cardan shaft.
Friction welded products
As friction welding operates below the melting point of the materials, even dissimilar material joints can be produced with high tensile strength. In many cases, the tensile strength of the bimetallic joint is higher than that of the softer base material.
U-JIN's friction welding machines are used industrially for a wide variety of products:
Gear shafts made of chrome-molybdenum steel
Electric terminals and cable lugs made of pure copper and pure aluminum
Long screws and bolts made of structural or high-speed steel
Stainless steel and aluminum adapters for refrigerants or coolants in superconductors
Shaft-hub connections in hollow shafts for the drive train of cars
Carbon steel (Advanced High Strength Steel, AHSS) and stainless steel pump shafts
Transition pieces in carbon steel S25C and stainless steel SUS304 (tensile strength 443 N/mm²)
Transition pieces made of carbon steel S45C and stainless steel SUS304 (tensile strength 639 N/mm²)
Motor shafts made of structural and stainless steel
References
Industrial machine manufacturers
Manufacturing companies established in 2009
Engineering companies of South Korea
South Korean brands | U-JIN Tech Corp. | [
"Engineering"
] | 594 | [
"Industrial machine manufacturers",
"Industrial machinery"
] |
61,908,255 | https://en.wikipedia.org/wiki/Marine%20heatwave | A marine heatwave is a period of abnormally high sea surface temperatures compared to the typical temperatures in the past for a particular season and region. Marine heatwaves are caused by a variety of drivers. These include shorter term weather events such as fronts, intraseasonal events (30 to 90 days) , annual, and decadal (10-year) modes like El Niño events, and human-caused climate change. Marine heatwaves affect ecosystems in the oceans. For example, marine heatwaves can lead to severe biodiversity changes such as coral bleaching, sea star wasting disease, harmful algal blooms, and mass mortality of benthic communities. Unlike heatwaves on land, marine heatwaves can extend over vast areas, persist for weeks to months or even years, and occur at subsurface levels.
Major marine heatwaves have occurred for example in the Great Barrier Reef in 2002, in the Mediterranean Sea in 2003, in the Northwest Atlantic in 2012, and in the Northeast Pacific during 2013–2016. These events have had drastic and long-term impacts on the oceanographic and biological conditions in those areas.
Scientists predict that the frequency, duration, scale (or area) and intensity of marine heatwaves will continue to increase. This is because sea surface temperatures will continue to increase with global warming. The IPCC Sixth Assessment Report in 2022 has summarized research findings to date and stated that "marine heatwaves are more frequent [...], more intense and longer [...] since the 1980s, and since at least 2006 very likely attributable to anthropogenic climate change". This confirms earlier findings in a report by the IPCC in 2019, which had found that "marine heatwaves [...] have doubled in frequency and have become longer lasting, more intense and more extensive (very likely)". The extent of ocean warming depends on greenhouse gas emission scenarios, and thus humans' climate change mitigation efforts. Scientists predict that marine heatwaves will become "four times more frequent in 2081–2100 compared to 1995–2014" under the lower greenhouse gas emissions scenario, or eight times more frequent under the higher emissions scenario.
Definition
The IPCC Sixth Assessment Report defines marine heatwave as follows: "A period during which water temperature is abnormally warm for the time of the year relative to historical temperatures, with that extreme warmth persisting for days to months. The phenomenon can manifest in any place in the ocean and at scales of up to thousands of kilometres."
Another publication defined it as follows: an anomalously warm event is a marine heatwave "if it lasts for five or more days, with temperatures warmer than the 90th percentile based on a 30-year historical baseline period".
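That operational definition — at least five consecutive days warmer than the 90th-percentile threshold — lends itself to a simple run-length scan over daily temperatures. A minimal sketch, with hypothetical function and variable names, that takes the threshold as given rather than computing a day-of-year climatology as a real analysis would:

```python
def detect_marine_heatwaves(sst, threshold_90, min_days=5):
    """Return (start, end) index pairs for runs of at least `min_days`
    consecutive days where daily SST exceeds the 90th-percentile threshold."""
    hot = [s > t for s, t in zip(sst, threshold_90)]
    events, start = [], None
    for i, h in enumerate(hot):
        if h and start is None:
            start = i                      # a warm run begins
        elif not h and start is not None:
            if i - start >= min_days:      # run long enough to qualify
                events.append((start, i - 1))
            start = None
    if start is not None and len(hot) - start >= min_days:
        events.append((start, len(hot) - 1))  # run extends to the end
    return events

# Six consecutive days above threshold -> one event spanning indices 3..8
sst = [20, 20, 20, 25, 25, 25, 25, 25, 25, 20, 20]
t90 = [22] * len(sst)
print(detect_marine_heatwaves(sst, t90))  # [(3, 8)]
```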
The term marine heatwave was coined following an unprecedented warming event off the west coast of Australia in the austral summer of 2011, which led to a rapid dieback of kelp forests and associated ecosystem shifts along hundreds of kilometers of coastline.
Categories
The quantitative and qualitative categorization of marine heatwaves establishes a naming system, typology, and characteristics for marine heatwave events. The naming system is applied by location and year: for example Mediterranean 2003. This allows researchers to compare the drivers and characteristics of each event, geographical and historical trends of marine heatwaves, and easily communicate marine heatwave events as they occur in real-time.
The categorization system is on a scale from 1 to 4. Category 1 is a moderate event, Category 2 is a strong event, Category 3 is a severe event, and Category 4 is an extreme event. The category applied to each event in real-time is defined primarily by sea surface temperature anomalies (SSTA), but over time it comes to include typology and characteristics.
The types of marine heatwaves are symmetric, slow onset, fast onset, low intensity, and high intensity. Marine heatwave events may have multiple categories such as slow onset, high intensity. The characteristics of marine heatwave events include duration, intensity (max, average, cumulative), onset rate, decline rate, region, and frequency.
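The text does not give the numeric cutoffs behind the 1–4 categories; in the commonly used scheme they are multiples of the gap between the local climatology and the 90th-percentile threshold. A hedged sketch under that assumption:

```python
def mhw_category(sst, climatology, threshold_90):
    """Category 1 (moderate) .. 4 (extreme), or 0 below the heatwave threshold.
    Cutoffs are multiples of the climatology-to-90th-percentile gap
    (an assumption of this sketch, not stated in the text)."""
    gap = threshold_90 - climatology
    multiple = (sst - climatology) / gap
    if multiple < 1:
        return 0                 # not a heatwave day
    return min(4, int(multiple))  # cap at Category 4 (extreme)

print(mhw_category(22.5, climatology=20.0, threshold_90=21.0))  # 2 (strong)
print(mhw_category(26.0, climatology=20.0, threshold_90=21.0))  # 4 (extreme)
```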
While marine heatwaves have been studied at the sea surface for more than a decade, they can also occur at the sea floor.
Drivers
Local processes and regional climate patterns
The drivers for marine heatwave events can be broken into local processes, teleconnection processes, and regional climate patterns. Two quantitative measurements of these drivers have been proposed to identify marine heatwave, mean sea surface temperature and sea surface temperature variability.
At the local level marine heatwave events are dominated by ocean advection, air-sea fluxes, thermocline stability, and wind stress. Teleconnection processes refer to climate and weather patterns that connect geographically distant areas. For marine heatwaves, the teleconnection processes that play a dominant role are atmospheric blocking/subsidence, jet-stream position, oceanic Kelvin waves, regional wind stress, warm surface air temperature, and seasonal climate oscillations. These processes contribute to regional warming trends that disproportionately affect western boundary currents.
Regional climate patterns such as interdecadal oscillations like El Niño Southern Oscillation (ENSO) have contributed to marine heatwave events such as "The Blob" in the Northeastern Pacific.
Drivers that operate on the scale of biogeographical realms or the Earth as a whole are decadal oscillations, like Pacific decadal oscillations (PDO), and anthropogenic ocean warming due to climate change.
Ocean areas of carbon sinks in the mid-latitudes of both hemispheres and carbon outgassing areas in upwelling regions of the tropical Pacific have been identified as places where persistent marine heatwaves occur; the air-sea gas exchange is being studied in these areas.
Climate change
Scientists predict that the frequency, duration, scale (or area) and intensity of marine heatwaves will continue to increase. This is because sea surface temperatures will continue to increase with global warming, and therefore the frequency and intensity of marine heatwaves will also increase. The extent of ocean warming depends on emission scenarios, and thus humans' climate change mitigation efforts. Simply put, the more greenhouse gas emissions (or the less mitigation), the more the sea surface temperature will rise. Scientists have calculated this as follows: there would be a relatively small (but still significant) increase of 0.86 °C in the average sea surface temperature for the low emissions scenario (called SSP1-2.6). But for the high emissions scenario (called SSP5-8.5) the temperature increase would be as high as 2.89 °C.
The prediction for marine heatwaves is that they may become "four times more frequent in 2081–2100 compared to 1995–2014" under the lower emissions scenario, or eight times more frequent under the higher emissions scenario. The emissions scenarios are called SSPs, for Shared Socioeconomic Pathways. These projections are based on CMIP6, the sixth phase of the Coupled Model Intercomparison Project, an ensemble of climate models. The predictions compare the average of the future period (years 2081 to 2100) with the average of the past period (years 1995 to 2014).
Global warming is projected to push the tropical Indian Ocean into a basin-wide near-permanent heatwave state by the end of the 21st century, where marine heatwaves are projected to increase from 20 days per year (during 1970–2000) to 220–250 days per year.
Many species already experience these temperature shifts during the course of marine heatwave events. There are many increased risk factors and health impacts to coastal and inland communities as global average temperature and extreme heat events increase.
List of events
Sea surface temperatures have been recorded since 1904 in Port Erin, Isle of Man, and measurements continue through global organizations such as NOAA, NASA, and many more. Events can be identified from 1925 to the present day. The list below is not a complete representation of all marine heatwave events that have ever been recorded.
Impacts
On marine ecosystems
Changes in the thermal environment of terrestrial and marine organisms can have drastic effects on their health and well-being. Marine heatwave events have been shown to increase habitat degradation, change species range dispersion, complicate management of environmentally and economically important fisheries, contribute to mass mortality of species, and in general reshape ecosystems.
Habitat degradation occurs through alterations of the thermal environment and subsequent restructuring and sometimes complete loss of biogenic habitats such as seagrass beds, corals, and kelp forests. These habitats contain a significant proportion of the oceans' biodiversity. Changes in ocean current systems and local thermal environments have shifted many tropical species' ranges northward, while temperate species have lost their southern limits. Large range shifts, along with outbreaks of toxic algal blooms, have impacted many species across taxa. Management of these affected species becomes increasingly difficult as they migrate across management boundaries and the food web dynamics shift.
Increases in sea surface temperature have been linked to a decline in species abundance such as the mass mortality of 25 benthic species in the Mediterranean in 2003, sea star wasting disease, and coral bleaching events. Climate change-related exceptional marine heatwaves in the Mediterranean Sea during 2015–2019 resulted in widespread mass sealife die-offs in five consecutive years. Repeated marine heatwaves in the Northeast Pacific led to dramatic changes in animal abundances, predator-prey relationships, and energy flux throughout the ecosystem. The impact of more frequent and prolonged marine heatwave events will have drastic implications for the distribution of species.
Coral bleaching
On weather patterns
Research on how marine heatwaves influence atmospheric conditions is emerging. Marine heatwaves in the tropical Indian Ocean are found to result in dry conditions over the central Indian subcontinent. At the same time, there is an increase in rainfall over south peninsular India in response to marine heatwaves in the northern Bay of Bengal. These changes are in response to the modulation of the monsoon winds by the marine heatwaves.
Options for reducing impacts
To address the root cause of more frequent and more intense marine heatwaves, climate change mitigation methods are needed to curb the increase in global temperature and in ocean temperatures.
Better forecasts of marine heatwaves and improved monitoring can also help to reduce impacts of these heatwaves.
See also
References
External links
Marine Heatwaves International Working Group
Climate change and the environment
Physical oceanography
Ocean pollution
Heat waves
Effects of climate change | Marine heatwave | [
"Physics",
"Chemistry",
"Environmental_science"
] | 2,147 | [
"Ocean pollution",
"Applied and interdisciplinary physics",
"Physical oceanography",
"Water pollution"
] |
61,909,148 | https://en.wikipedia.org/wiki/Dona%20Strauss | Dona Anschel Papert Strauss (born April 1934) is a South African mathematician working in topology and functional analysis. Her doctoral thesis was one of the initial sources of pointless topology. She has also been active in the political left, lost one of her faculty positions over her protests of the Vietnam War, and became a founder of European Women in Mathematics.
Mathematician Neil Hindman, with whom Strauss wrote a book on the Stone–Čech compactification of topological semigroups, has stated the following as advice for other mathematicians: "Find someone who is smarter than you are and get them to put your name on their papers", writing that for him, that someone was Dona Strauss.
Education and career
Strauss is originally from South Africa, the descendant of Jewish immigrants from Eastern Europe. Her father was a physicist at the University of Cape Town. She grew up in the Eastern Cape, and earned a master's degree in mathematics at the University of Cape Town.
She completed her Ph.D. at the University of Cambridge in 1958. Her dissertation, Lattices of Functions, Measures, and Open Sets, was supervised by Frank Smithies.
After completing her doctorate, she took a faculty position at the University of London. Following her husband's dream of living on a farm in Vermont, she moved to Dartmouth College in 1966. By 1972, she was working at the University of Hull and circa 2008 she became a professor at the University of Leeds. After retiring, she has been listed by Leeds as an honorary visiting fellow.
Activism
In South Africa, Strauss developed a strong antipathy to racial discrimination from a combination of being a Jew at the time of the Holocaust and her own observations of South African society. At the University of Cape Town, she became a member of the Non-European Unity Movement. After completing her degree, she left the country in protest over apartheid; her parents also left South Africa, after her father's retirement, for Israel. In the 1950s, she regularly published editorial works in Socialist Review, and in the 1960s she was active in Solidarity (UK).
As an assistant professor at Dartmouth College in 1969, Strauss took part in a student anti-war protest that occupied Parkhurst Hall, the building that housed the college administration. In response, Dartmouth announced that Strauss and another faculty protester would not have their contracts renewed, and that they would be suspended from the faculty and "denied all rights and privileges of membership on the Dartmouth faculty", the first time in the college's history that it had taken this step.
In 1986, Strauss became one of the five founders of European Women in Mathematics, together with Bodil Branner, Caroline Series, Gudrun Kalmbach, and Marie-Françoise Roy.
Books
Strauss is the co-author of:
Algebra in the Stone-Čech compactification: Theory and applications (with Neil Hindman, De Gruyter Expositions in Mathematics 27, Walter de Gruyter & Co., 1998; 2nd ed., 2012)
Banach algebras on semigroups and on their compactifications (with H. Garth Dales and Anthony T.-M. Lau, Memoirs of the American Mathematical Society 205, 2010)
Banach spaces of continuous functions as dual spaces (with H. Garth Dales, Frederick K. Dashiell Jr., and Anthony T.-M. Lau, CMS Books in Mathematics, Springer, 2016)
Recognition
In 2009 the University of Cambridge hosted a meeting, "Algebra and Analysis around the Stone-Cech Compactification", in honour of Strauss's 75th birthday.
Personal life
Strauss married (as the first of his four wives) Seymour Papert. Papert was also South African, and became a co-author and fellow student of Frank Smithies with Strauss at Cambridge. She met her second husband, Edmond Strauss, at the University of London.
She is a strong amateur chess player, and was director of the Brighton and Hove Progressive Synagogue for 2014–2015.
References
1934 births
Living people
South African mathematicians
South African Jews
British women mathematicians
20th-century women mathematicians
Functional analysts
University of Cape Town alumni
Alumni of the University of Cambridge
Academics of the University of London
Dartmouth College faculty
Academics of the University of Hull
Academics of the University of Leeds
21st-century British mathematicians
21st-century women mathematicians
20th-century South African mathematicians
South African emigrants to the United Kingdom
21st-century South African mathematicians
20th-century British mathematicians
Topologists | Dona Strauss | [
"Mathematics"
] | 897 | [
"Topologists",
"Topology"
] |
70,063,207 | https://en.wikipedia.org/wiki/Bernoulli%20quadrisection%20problem | In triangle geometry, the Bernoulli quadrisection problem asks how to divide a given triangle into four equal-area pieces by two perpendicular lines. Its solution by Jacob Bernoulli was published in 1687. Leonhard Euler formulated a complete solution in 1779.
As Euler proved, in a scalene triangle, it is possible to find a subdivision of this form so that two of the four crossings of the lines and the triangle lie on the middle edge of the triangle, cutting off a triangular area from that edge and leaving the other three areas as quadrilaterals. It is also possible for some triangles to be subdivided differently, with two crossings on the shortest of the three edges; however, it is never possible for two crossings to lie on the longest edge. Among isosceles triangles, the one whose height at its apex is 8/9 of its base length is the only one with exactly two perpendicular quadrisections. One of the two uses the symmetry axis as one of the two perpendicular lines, while the other has two lines of slope , each crossing the base and one side.
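The symmetric quadrisection of an isosceles triangle can be verified numerically. In the sketch below (coordinates and code are illustrative, not taken from Bernoulli's or Euler's solutions), the apex is placed at (0, h) and the base runs from (−1, 0) to (1, 0); the symmetry axis together with the perpendicular horizontal line at height h(1 − 1/√2) — the height at which the similar triangle above the line has half the total area — cuts the triangle into four pieces of equal area:

```python
import math

def shoelace(pts):
    """Unsigned polygon area via the shoelace formula."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# Isosceles triangle with base from (-1, 0) to (1, 0) and apex (0, h).
# h = 16/9 gives the triangle whose apex height is 8/9 of its base
# length (base = 2); the symmetric construction itself works for any h > 0.
h = 16.0 / 9.0
y0 = h * (1 - 1 / math.sqrt(2))  # height of the horizontal cut: the
                                 # similar triangle above it has half
                                 # of the total area
w = 1 / math.sqrt(2)             # half-width of the triangle at y0

regions = [
    [(0, h), (-w, y0), (0, y0)],           # upper left
    [(0, h), (0, y0), (w, y0)],            # upper right
    [(0, 0), (0, y0), (-w, y0), (-1, 0)],  # lower left
    [(0, 0), (1, 0), (w, y0), (0, y0)],    # lower right
]

total = shoelace([(0, h), (-1, 0), (1, 0)])
assert all(math.isclose(shoelace(r), total / 4) for r in regions)
print("quarter area:", total / 4)
```

Each region has area h/4, a quarter of the triangle's total area h.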
This subdivision of a triangle is a special case of a theorem of Richard Courant and Herbert Robbins that any plane area can be subdivided into four equal parts by two perpendicular lines, a result that is related to the ham sandwich theorem. Although the triangle quadrisection has a solution involving the roots of low-degree polynomials, the more general quadrisection of Courant and Robbins can be significantly more difficult: for any computable number x there exist convex shapes whose boundaries can be accurately approximated to within any desired error in polynomial time, with a unique perpendicular quadrisection whose construction computes x.
In 2022, the first place in an Irish secondary school science competition, the Young Scientist and Technology Exhibition, went to a project by Aditya Joshi and Aditya Kumar using metaheuristic methods to find numerical solutions to the Bernoulli quadrisection problem.
Notes and references
Area
Triangle geometry | Bernoulli quadrisection problem | [
"Physics",
"Mathematics"
] | 405 | [
"Scalar physical quantities",
"Physical quantities",
"Quantity",
"Size",
"Geometry",
"Geometry stubs",
"Wikipedia categories named after physical quantities",
"Area"
] |
70,063,966 | https://en.wikipedia.org/wiki/George%20Mullen | George Mullen is an astronomer who co-authored several peer-reviewed articles with Carl Sagan. He, along with Sagan, pointed out the faint young Sun paradox. In addition to studying the early Earth atmosphere, he studied the atmosphere of Jupiter.
References
Living people
Year of birth missing (living people)
Place of birth missing (living people)
Astrobiologists
Astrochemists | George Mullen | [
"Chemistry",
"Astronomy"
] | 78 | [
"Astronomers",
"Astronomer stubs",
"Astrochemists",
"Astronomy stubs"
] |
70,064,140 | https://en.wikipedia.org/wiki/Cilofexor | Cilofexor (also known as GS-9674) is a nonsteroidal farnesoid X receptor (FXR) agonist in clinical trials for the treatment of non-alcoholic fatty liver disease (NAFLD), non-alcoholic steatohepatitis (NASH), and primary sclerosing cholangitis (PSC). It is being investigated for use alone or in combination with firsocostat, selonsertib, or semaglutide. In rat models and human clinical trials of NASH it has been shown to reduce fibrosis and steatosis, and in human clinical trials of PSC it improved cholestasis and reduced markers of liver injury.
It is being developed by the pharmaceutical company Gilead Sciences.
References
Pyridines
Chlorobenzene derivatives
Cyclopropyl compounds
Oxazoles
Azetidines
Carboxylic acids
Farnesoid X receptor agonists | Cilofexor | [
"Chemistry"
] | 200 | [
"Carboxylic acids",
"Functional groups"
] |
70,065,399 | https://en.wikipedia.org/wiki/Materials%20Science%20and%20Engineering%20B | Materials Science and Engineering: B — Advanced Functional Solid-State Materials is a peer-reviewed scientific journal. It is the section of Materials Science and Engineering dedicated to "calculation, synthesis, processing, characterization, and understanding of advanced quantum materials" and is published monthly by Elsevier. It aims at providing a leading international forum for material researchers across the disciplines of theory, experiment, and device applications. The current editor-in-chief is Jing Xia (University of California Irvine).
According to the Journal Citation Reports, the journal has a 2021 impact factor of 3.407.
References
External links
Physics review journals
Materials science journals
Elsevier academic journals
Academic journals established in 1993
English-language journals
Monthly journals | Materials Science and Engineering B | [
"Materials_science",
"Engineering"
] | 142 | [
"Materials science journals",
"Materials science"
] |
70,065,445 | https://en.wikipedia.org/wiki/Boba%20liberal | Boba liberal is a term mostly used within the Asian diaspora communities in the West, especially in the United States. It describes someone of East or Southeast Asian descent living in the West who has a shallow, surface-level liberal outlook. It is also occasionally used to describe conservatives who weaponize their East or Southeast Asian identity. The neologism emerged among the Asian American leftist community on Twitter who accused "boba liberals" of only holding their liberal beliefs to appear more White adjacent, by engaging in progressive social movements or viewpoints, while at the same time disregarding and trivializing issues concerning Asians.
Mary Chao, writing for The North Jersey Record, said that "Asians call peers boba liberals when they aspire to liberal whiteness." An article in The Yale Herald described it as a term "used to describe the ethnocentric politics of Asian Americans, usually of East Asian descent, who exclusively advocate for issues that benefit themselves, without acknowledging problematic dimensions of their own history and working to support other people of color." The feminist magazine Fem said that "the faces of boba liberalism are Asian Americans that are part of the middle and upper economic class. As a result, boba liberals disregard the negative effects of capitalism because they profit from it. For instance, boba liberals tend to focus on advocating for Asian representation in white spaces, or discussing whether or not wearing chopsticks in one's hair is culture appropriation. These topics are popular within boba liberal circles, all while dialogue regarding inequality, globalization, and racial injustice are purposely neglected."
UnHerd notes that conservative Asian Americans have used the term not to critique capitalism, but to "aim at a small but influential group of progressive Asian-American activists who are supposedly selling out other Asians, especially working-class Asians, in order to win brownie points from elite, generally white liberals."
The Asian identity of boba liberals has often been accused of being shallow and superficial. Boba liberals are accused of using surface-level stereotypical Asian traits such as liking boba tea to bolster their Asian credentials.
Plan A Magazine, an Asian diaspora magazine, described the film Crazy Rich Asians and the sitcom Fresh Off the Boat as "boba liberal media", calling them the result of "a specific kind of atomized identity politics". Other media outlets have connected the Crazy Rich Asians film to boba liberalism.
Controversy
The term "boba liberal" was coined in 2019 by Vietnamese American Twitter user Redmond (@diaspora_is_red) to analyze a form of Asian American liberalism through a Marxist lens. Redmond has criticized the misappropriation of their neologism: users strip away the Marxist framework, fail to discuss "socialism, communism, the capitalist system, imperialism, and the diaspora bourgeoisie", and conflate "boba liberalism" with the flawed concept of "East Asian privilege". In 2024, Redmond criticized misuse of the term by conservatives and liberals, and said "The term boba liberalism can go away for all I care. It's corny and stale".
United States
One commentator described boba liberals as supporting policies that primarily benefit upper-income Asian-Americans, and not necessarily the Asian-American community as a whole. Therefore, while the word "liberal" is used in the term, it is not tied to one specific ideology; it may also extend to conservative-aligned Asians in some areas, who often take advantage of the "model minority" label by defending such measures.
See also
Acting white
Baizuo
Chinilpa
Crab mentality
Inferiority complex
Internalized oppression
Internalized racism
Hanjian
Limousine liberal
Makapili
Model minority
Sarong party girl
Tall poppy syndrome
Race traitor
Uncle Tom
Liberal elite
References
Further reading
External links
Why I Hate Subtle Asian Traits by Sarah Mae Dizon (30 August 2020).
Asian-American culture
Asian-American history
Asian-American issues
Asian-American-related controversies
Canadian people of Asian descent
Cultural studies
Cultural assimilation
Liberalism in the United States
Political neologisms
Politics and race in the United States
2019 neologisms
Social inequality
Social media
Asian-Australian issues | Boba liberal | [
"Technology"
] | 848 | [
"Computing and society",
"Social media"
] |
70,071,645 | https://en.wikipedia.org/wiki/SN%202020tlf | SN 2020tlf was a Type II supernova that occurred 120 million light years away in the galaxy NGC 5731. The supernova marked the first time that a red supergiant star had been observed before, during, and after the event, being observed up to 130 days before. The progenitor star was between 10 and 12 solar masses.
Observations
The star was first observed by the Pan-STARRS telescope in the summer of 2020, with other telescopes such as ATLAS also observing it. It had previously been believed that red supergiants were quiet before their demise; however, the progenitor of SN 2020tlf was observed emitting bright, intense radiation and ejecting massive amounts of gaseous material. Observations were also made across the electromagnetic spectrum, including the X-ray, ultraviolet, infrared, and radio bands.
References
Supernovae
Astronomical objects discovered in 2020
Boötes | SN 2020tlf | [
"Chemistry",
"Astronomy"
] | 181 | [
"Supernovae",
"Astronomical events",
"Boötes",
"Constellations",
"Explosions"
] |
70,072,648 | https://en.wikipedia.org/wiki/Time%20Travel%3A%20A%20History | Time Travel: A History is a book by science history writer James Gleick, published in 2016, which covers time travel, the origin of idea and of its usage in literature. The book received mostly positive reviews.
Synopsis
In the book Gleick researches time travel, the emergence of this idea, its usage in literature, and how it shapes the life of a modern person. In an interview for National Geographic Gleick said:
At some point during the four years I worked on this book, I also realized that, in one way or another, every time travel story is about death. Death is either explicitly there in the foreground or lurking in the background because time is a bastard, right? Time is brutal. What does time do to us? It kills us. Time travel is our way of flirting with immortality. It's the closest we’re going to come to it.
Reception
The book received mostly positive reviews. Nicola Davis of The Guardian wrote that "Time Travel is intoxicating, but that is only in part down to Gleick's execution. Much of this is well trodden ground, our enduring fascination with the notion sown long ago by many adroit hands. At times, Gleick seems to get lost in his own, sometimes opaque, musings. Parts of the book are frustratingly repetitive, while his practice of paraphrasing obscure time travel stories before analysing their finer points too often feels like the dinner party anecdote that rather feebly concludes 'Well, you had to be there really'." Nick D Burton wrote for Wired that the book "quantum leaps from HG Wells's The Time Machine – the original – via Proust and alt-history right up to your Twitter timeline. Until we get the DeLorean working for real, fellow travellers, consider it the next best thing". Anthony Doerr wrote for The New York Times that "Time Travel, like all of Gleick's work, is a fascinating mash-up of philosophy, literary criticism, physics and cultural observation. It's witty ("Regret is the time traveler's energy bar"), pithy ("What is time? Things change, and time is how we keep track") and regularly manages to twist its reader's mind into those Gordian knots I so loved as a boy."
Will Mann, reviewing the book for International Policy Digest, praised it, though pointed out that
However, despite these praises, Gleick’s argument about the intersection between scientific discovery and art starts to dissipate towards the final third of the book. Some chapters, such as the penultimate one, entitled "What is Time?" less resembles a history of a subgenre within the greater science fiction canon and more resembles a heavy philosophic dissertation. Certainly, time travel is a concept that philosophers have tried to grasp and theorize about ever since its invention.
Dave Goldberg wrote for Nature Physics that "As to the practical possibility of time travel, Gleick is something of a sceptic. Common sense, he argues, suggests that the past really is immutable, no matter how clever the theoretical models that imply otherwise. And despite the apparent symmetry of the microscopic laws of physics, there really is, he argues, something different about the future and the past. 'The future hasn't been written yet. When did that become controversial?'"
References
External links
Presentation by Gleick on Time Travel, October 15, 2016, C-SPAN
Presentation by Gleick on Time Travel, November 19, 2016, C-SPAN
2016 non-fiction books
Popular science books
Works about time
Pantheon Books books
Fourth Estate books
Time travel | Time Travel: A History | [
"Physics"
] | 759 | [
"Physical quantities",
"Time",
"Time travel",
"Works about time",
"Spacetime"
] |
70,073,044 | https://en.wikipedia.org/wiki/%C3%97%20Holcosia%20taiwaniana | × Holcosia taiwaniana is a natural hybrid of the orchid species Holcoglossum quasipinifolium and Luisia teres. It is an epiphyte endemic to Taiwan.
Description
The occasionally branched, pendulous, terete stems are 30 to 60 cm in length and 4 to 4.5 mm wide. The terete, 14 to 21 cm long and 2 to 3.5 mm wide leaves are not strictly distichously arranged, but rather laxly alternate. Two 4–5 cm wide, yellowish flowers are produced on 2 to 3 cm long inflorescences. The labellum bears brown-red longitudinal stripes.
References
Aeridinae
Orchid hybrids
Endemic flora of Taiwan
Orchids of Taiwan
Intergeneric hybrids
Plants described in 1989 | × Holcosia taiwaniana | [
"Biology"
] | 158 | [
"Intergeneric hybrids",
"Hybrid organisms"
] |
70,073,971 | https://en.wikipedia.org/wiki/Period%20room | A period room is a display that represents the interior design and decorative art of a particular historical social setting usually in a museum. Though it may incorporate elements of an individual real room that once existed somewhere, it is usually by its nature a composite and fictional piece. Period rooms at encyclopedic museums may represent different countries and cultures, while those at historic house museums may represent different eras of the same structure. As with the glamorization of luxury in costume drama, this can be considered as a conservative genre that traditionally privileges Eurocentric elite views.
In the 21st century, the focus has shifted toward using period rooms in new ways or in diversifying them.
References
External links
Decorative arts
Heritage interpretation
Interior design
Museology
Period pieces
Relocated buildings and structures
Rooms | Period room | [
"Engineering"
] | 154 | [
"Rooms",
"Architecture"
] |
70,073,990 | https://en.wikipedia.org/wiki/Queen%27s%20graph | In mathematics, a queen's graph is an undirected graph that represents all legal moves of the queen—a chess piece—on a chessboard. In the graph, each vertex represents a square on a chessboard, and each edge is a legal move the queen can make, that is, a horizontal, vertical or diagonal move by any number of squares. If the chessboard has dimensions , then the induced graph is called the queen's graph.
Independent sets of the graphs correspond to placements of multiple queens where no two queens are attacking each other. They are studied in the eight queens puzzle, where eight non-attacking queens are placed on a standard chessboard. Dominating sets represent arrangements of queens where every square is attacked or occupied by a queen; five queens, but no fewer, can dominate the chessboard.
Colourings of the graphs represent ways to colour each square so that a queen cannot move between any two squares of the same colour; at least n colours are needed for an n × n chessboard, but 9 colours are needed for the 8 × 8 board.
Properties
There is a Hamiltonian cycle for each queen's graph, and the graphs are biconnected (they remain connected if any single vertex is removed). The special cases of the and queen's graphs are complete.
Independence
An independent set of the graph corresponds to a placement of several queens on a chessboard such that no two queens are attacking each other. In an n × n chessboard, the largest independent set contains at most n vertices, as no two queens can be in the same row or column. This upper bound can be achieved for all n except n = 2 and n = 3. In the case of n = 8, this is the traditional eight queens puzzle.
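The correspondence between independent sets and non-attacking queens can be checked directly by brute force. The following sketch (illustrative code, not from the article) counts placements of n mutually non-attacking queens — maximum independent sets of size n in the n × n queen's graph — by backtracking one row at a time:

```python
def count_n_queens(n):
    """Count placements of n mutually non-attacking queens on an
    n x n board, i.e. independent sets of size n in the queen's
    graph, by placing one queen per row."""
    def place(row, cols, diag1, diag2):
        if row == n:
            return 1
        total = 0
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue  # square attacked by an earlier queen
            total += place(row + 1, cols | {col},
                           diag1 | {row - col}, diag2 | {row + col})
        return total
    return place(0, frozenset(), frozenset(), frozenset())

# Classic solution counts: 2 for n=4, 10 for n=5, 4 for n=6,
# and 92 for the eight queens puzzle (n=8).
print([count_n_queens(n) for n in (4, 5, 6)])  # prints [2, 10, 4]
```

The sets of occupied columns and diagonals (indexed by row − col and row + col) make the attack check constant-time per square.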
Domination
A dominating set of the queen's graph corresponds to a placement of queens such that every square on the chessboard is either attacked or occupied by a queen. On the standard 8 × 8 chessboard, five queens can dominate, and this is the minimum number possible (four queens leave at least two squares unattacked). There are 4,860 such placements of five queens, including ones in which the queens also control all occupied squares, i.e. each queen is attacked (and thus protected) by another. This subgroup includes positions in which the queens occupy squares on the main diagonal only (e.g. from a1 to h8), or all lie on a subdiagonal (e.g. from a2 to g8).
Modifying the graph by replacing the non-looping rectangular chessboard with a torus or cylinder reduces the minimum dominating set size to four.
The 3 × 3 queen's graph is dominated by the single vertex at the centre of the board. The centre vertex of the 5 × 5 queen's graph is adjacent to all but 8 vertices: those vertices that are adjacent to the centre vertex of the 5 × 5 knight's graph.
Domination numbers
Define the domination number d(n) of an n × n queen's graph to be the size of the smallest dominating set, and the diagonal domination number dd(n) to be the size of the smallest dominating set that is a subset of the long diagonal. Since a diagonal dominating set is in particular a dominating set, d(n) ≤ dd(n) for all n. The bound is attained for , but not for .
The domination number grows linearly in n.
Initial values of d(n), for n = 1 to 11, are 1, 1, 1, 2, 3, 3, 4, 5, 5, 5, 5.
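The first few of these values are small enough to recompute by exhaustive search. A brute-force sketch (illustrative code, feasible only for small n — the search space grows combinatorially):

```python
from itertools import combinations

def attacks(q, s):
    """True if a queen on square q attacks or occupies square s."""
    (r1, c1), (r2, c2) = q, s
    return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

def domination_number(n):
    """Smallest k such that some k queens attack or occupy every
    square of the n x n board, found by trying all k-subsets."""
    squares = [(r, c) for r in range(n) for c in range(n)]
    for k in range(1, n + 1):
        for queens in combinations(squares, k):
            if all(any(attacks(q, s) for q in queens) for s in squares):
                return k

# Matches the quoted initial values d(1..5) = 1, 1, 1, 2, 3.
print([domination_number(n) for n in range(1, 6)])  # prints [1, 1, 1, 2, 3]
```

Because `attacks(q, q)` is true, an occupied square counts as dominated, matching the definition above.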
Let Kn be the maximum size of a subset of {1, 2, ..., n} such that every number has the same parity and no three numbers form an arithmetic progression (the set is "midpoint-free"). The diagonal domination number of an n × n queen's graph is .
Define the independent domination number ID(n) to be the size of the smallest independent dominating set in an n × n queen's graph. It is known that .
Colouring
A colouring of the queen's graph is an assignment of colours to each vertex such that no two adjacent vertices are given the same colour. For instance, if a8 is coloured red then no other square on the a-file, eighth rank or long diagonal can be coloured red, as a queen can move from a8 to any of these squares. The chromatic number of the graph is the smallest number of colours that can be used to colour it.
In the case of an n × n queen's graph, at least n colours are required, as each square in a rank or file needs a different colour (i.e. the rows and columns are cliques). The chromatic number is exactly n if n ≡ ±1 (mod 6), i.e. n is one more or one less than a multiple of 6.
The chromatic number of the 8 × 8 queen's graph is 9.
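One way to exhibit an n-colouring in the n ≡ ±1 (mod 6) case is a modular-arithmetic construction (a sketch; the specific formula here is an assumption, not necessarily the construction used in the literature): colour square (r, c) with (r + 2c) mod n. Two same-coloured squares then never share a row, column, or diagonal precisely when n is coprime to 6, which is equivalent to n ≡ ±1 (mod 6):

```python
def queens_adjacent(a, b):
    """Two distinct squares are adjacent in the queen's graph if
    they share a row, a column, or a diagonal."""
    (r1, c1), (r2, c2) = a, b
    return a != b and (r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2))

def is_proper_n_colouring(n):
    """Check whether colour(r, c) = (r + 2c) mod n properly n-colours
    the n x n queen's graph (no adjacent squares share a colour)."""
    squares = [(r, c) for r in range(n) for c in range(n)]
    return all((a[0] + 2 * a[1]) % n != (b[0] + 2 * b[1]) % n
               for a in squares for b in squares if queens_adjacent(a, b))

# The construction succeeds exactly for n coprime to 6:
print([n for n in range(2, 12) if is_proper_n_colouring(n)])  # prints [5, 7, 11]
```

Checking the three clique types shows why: equal colours in a row force 2(c1 − c2) ≡ 0, on a diagonal 3(r1 − r2) ≡ 0, and on an anti-diagonal (r1 − r2) ≡ 0 (mod n), all of which force equality when gcd(n, 6) = 1.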
Irredundance
A set of vertices is irredundant if removing any vertex from the set changes the neighbourhood of the set, i.e. for each vertex, there is an adjacent vertex that is not adjacent to any other vertex in the set. This corresponds to a set of queens which each uniquely control at least one square. The maximum size IR(n) of an irredundant set on the n × n queen's graph is difficult to characterise; known values include
Pursuit–evasion game
Consider the pursuit–evasion game on an queen's graph played according to the following rules: a white queen starts in one corner and a black queen in the opposite corner. Players alternate moves, which consist of moving the queen to an adjacent vertex that can be reached without passing over (horizontally, vertically or diagonally) or landing on a vertex that is adjacent to the opposite queen. This game can be won by white with a pairing strategy.
See also
King's graph
Knight's graph
Rook's graph
Bishop's graph
References
Mathematical chess problems
Parametric families of graphs | Queen's graph | [
"Mathematics"
] | 1,170 | [
"Recreational mathematics",
"Mathematical chess problems"
] |
70,074,170 | https://en.wikipedia.org/wiki/Xylaria%20culleniae | Xylaria culleniae is a species of fungus in the family Xylariaceae. This species known to grow on dried fruits and seeds.
Taxonomy
Xylaria culleniae belongs to the family Xylariaceae. Species that grow on fruits and seeds are generally considered host-specific. This species was collected in Sri Lanka during July 1868 by George Gardner (botanist) and George Henry Kendrick Thwaites, who was superintendent of the botanical gardens at Peradeniya, Ceylon. The specimens were sent for identification to the Royal Botanic Gardens, Kew in 1872, where the English botanists and mycologists Miles Joseph Berkeley and Christopher Edmund Broome described the species in 1873.
Distribution
This species is reported from Sri Lanka, China, Thailand and Anaimalai Hills Southern Western Ghats, India. This species is also known to occur in Central America, South America and Africa.
Description
The fruit bodies are erect, elongated black branches, whitened from midway to the tips. The hairs of the stem are septate. The ascospores of X. culleniae are relatively small, and the stromata are generally less robust. Spore dimensions are 8.5–9.5 × 3.5–4.5 μm; sporidia .016 × .005–.006 mm. The spores are brown and ellipsoid or inequilateral in shape. The germ slit is straight and long. The stroma is up to 7 cm long; stromata are unbranched or branched, cylindrical to long-conical, with a soft texture. Perithecia are 0.1–0.3 mm in diameter, with minutely papillate ostioles.
Hosts plants
X. culleniae is recorded growing on Cullenia exarillata pods, hence the species epithet culleniae. It was assumed to be host-specific; however, it has also been recorded growing on fruits of Inga sp., a legume, so its host specificity is uncertain.
See also
Cullenia exarillata
References
External links
GBIF
Xylariales
Fungi of Asia
Inedible fungi
Fungi described in 1873
Fungus species | Xylaria culleniae | [
"Biology"
] | 441 | [
"Fungi",
"Fungus species"
] |
70,074,257 | https://en.wikipedia.org/wiki/Staci%20Simonich | Staci Simonich is an American environmental scientist who is a professor and dean for the College of Agricultural Sciences at Oregon State University. Her research considers how chemicals move through the environment. She was appointed Fellow of the American Association for the Advancement of Science in 2021.
Family
Simonich has two children, Noah, a sophomore at Oregon State University, and Grace, a senior at Crescent Valley High School. Grace was adopted from South Korea and has been an outstanding student throughout high school.
Early life and education
Simonich grew up in Green Bay, Wisconsin. Her father worked in a paper mill. Her house was near the Fox River, which suffered from issues with pollution. These experiences inspired Simonich to work on environmental issues. Simonich was the first in her family to attend college. She studied chemistry at the University of Wisconsin–Green Bay. As part of her undergraduate research, she studied polychlorinated biphenyls in Green Bay. After graduating she moved to Indiana University Bloomington, where she studied the role of vegetation in removing organic pollutants from the atmosphere. During her doctoral research she studied polycyclic aromatic hydrocarbons in the atmosphere. Her research combines lab-based studies with field experiments and computational modelling. Simonich earned a Master of Business Administration at Oregon State University in 2020.
Research and career
Simonich joined Procter & Gamble, where she spent six years working on consumer food products. She investigated the environmental impacts of P&G ingredients.
Simonich joined Oregon State University in 2001 and continued her work on polycyclic aromatic hydrocarbons (PAHs). Elevated levels of combustion mean that emissions of PAHs are high in Asia. Simonich collected PAHs before, during, and after the 2008 Summer Olympics and analyzed them for various different forms of hydrocarbons. She established a series of remote sites across the Pacific Northwest to monitor atmospheric transport of the PAHs from Beijing to North America. She has shown that PAHs persist over long distances, that they react with other chemicals, and that they make use of various transport pathways.
Simonich has studied several different types of PAH and monitored their environmental impact. She is particularly interested in environmental remediation and ways to remove PAHs from soil. Some forms of bioremediation, however, can lead to breakdown products that are more toxic than the original compounds.
Simonich was made Executive Associate Dean in 2020.
Awards and honors
2003 National Science Foundation CAREER Award
2011 Scientific and Technological Achievement Award Level III for Innovative Design, Implementation and Synthesis Assessing Impact of Airborne Contaminants on Western National Parks, United States Environmental Protection Agency
2013 Oregon State University Impact Award for Outstanding Scholarship
2013 Super Reviewer Award, Environmental Science & Technology (Journal)
2015 Oregon State University Excellence in Graduate Mentoring Award
2015 James and Mildred Oldfield/E.R. Jackman Team Award, in recognition of Oregon State University Superfund Research Program
2021 Elected Fellow of the American Association for the Advancement of Science
Selected publications
References
Year of birth missing (living people)
Living people
21st-century American scientists
21st-century American women scientists
Indiana University Bloomington alumni
University of Wisconsin–Green Bay alumni
Oregon State University alumni
Oregon State University faculty
Procter & Gamble people
People from Green Bay, Wisconsin
Environmental scientists
American women scientists | Staci Simonich | [
"Environmental_science"
] | 654 | [
"American environmental scientists",
"Environmental scientists"
] |
70,075,619 | https://en.wikipedia.org/wiki/Floating%20Freedom%20School | The Floating Freedom School was an educational facility for free and enslaved African Americans on a steamboat on the Mississippi River. It was established in 1847 by the Baptist minister John Berry Meachum. After Meachum's death in 1854, the Freedom School was taken over by Reverend John R. Anderson, a former student, and closed sometime after 1860.
History
In 1847, John Berry Meachum was forced to close the school he had been operating in a St. Louis church basement. Earlier that year, the Missouri legislature had passed a law that made it illegal to provide "the instruction of negroes or mulattoes, in reading or writing". Meachum and one of his teachers were arrested by the sheriff and threatened.
To circumvent the new state law in Missouri, Reverend Meachum bought a steamboat which he anchored in the middle of the Mississippi River, thus placing it under the authority of the federal government. The new floating "Freedom School" was outfitted with desks, chairs, and a library. Students were ferried back and forth between St. Louis and the Freedom School in small skiffs. The school eventually attracted teachers from the East.
Hundreds of black children were educated at the Freedom School in the 1840s and 1850s. Those who could pay were charged one dollar a month. One of the early students was James Milton Turner, who would go on to establish 30 new schools for African Americans in Missouri after the Civil War. Another was John R. Anderson, who received much of his reading and religious training from the school. Reverend Anderson later took over management of the school after Meachum's death in 1854. School attendance dropped off just before the Civil War, with only 155 black children enrolled in 1860.
Notes
References
Further reading
Steamboats of the Mississippi River
Floating architecture
Former school buildings in the United States
African-American history of Missouri
Historically segregated African-American schools in the United States
Anti-black racism in Missouri
1847 establishments in the United States | Floating Freedom School | [
"Technology",
"Engineering"
] | 402 | [
"Structural system",
"Floating architecture",
"Architecture"
] |
70,076,218 | https://en.wikipedia.org/wiki/HM%20Sagittae | HM Sagittae is a dusty-type symbiotic nova in the northern constellation of Sagitta. It was discovered by O. D. Dokuchaeva and colleagues in 1975 when it increased in brightness by six magnitudes (a factor of around 250 brighter). The object displays an emission line spectrum similar to a planetary nebula and was detected in the radio band in 1977. Unlike a classical nova, the optical brightness of this system did not rapidly decrease with time, although it showed some variation. It displays activity in every band of the electromagnetic spectrum from X-ray to radio.
Observations in the infrared during 1978 showed this to be a very strong source with a spectrum that is consistent with a binary symbiotic system similar to V1016 Cyg. The cooler stellar component is emitting material that is then ionized by a hot component, with the emission spectrum coming from heated dust generated by the cooler star. By 1983, the infrared emission of the system was shown to vary by 1.5 magnitudes in the K-band with a time scale of about 500 days. High resolution spectral examination of the system in 1984 showed a bipolar outflow of matter with a velocity of . A series of knots extend outward on both sides of the central star to an angular distance of . The nebula surrounding the system shows a bipolar, S-shaped morphology, similar to R Aqr.
The features of the system are consistent with a central red giant star being orbited by a compact object that is accreting matter from the giant. The pair have an angular separation of , with the axis aligned along a position angle of . Their physical separation is estimated at . The giant component is most likely a Mira variable and measurements up to 1989 found a period of 527 days. It is surrounded by a dusty shell that is mostly composed of silicates. The compact object is a hot white dwarf with 70% of the mass of the Sun, which is orbited by an accretion disk. The nova-like outburst of 1975 may have been generated by a burst of mass transfer from the giant to the white dwarf during the periastron passage of an eccentric orbit, leading to a thermonuclear outburst.
Winds from both stars are colliding to produce a shock region that is a source of ultraviolet emission. By 1985, a fading of the brightness and an increase in redness were observed, caused by dust obscuration. The hot component may be inhibiting dust formation around the giant except in the shadow region behind the star. This could explain observed individual dust obscuration events.
References
Further reading
Cataclysmic variable stars
Mira variables
M-type giants
White dwarfs
Binary stars
Sagitta
Sagittae, HM | HM Sagittae | [
"Astronomy"
] | 562 | [
"Sagitta",
"Constellations"
] |
70,076,585 | https://en.wikipedia.org/wiki/High%20integrity%20software | High-integrity software is software whose failure may cause serious damage with possible "life-threatening consequences." "Integrity is important as it demonstrates the safety, security, and maintainability of... code." Examples of high-integrity software are nuclear reactor control, avionics software, automotive safety-critical software and process control software.
A number of standards are applicable to high-integrity software, including:
DO-178C, Software Considerations in Airborne Systems and Equipment Certification
CENELEC EN 50128, Railway applications - Communication, signalling and processing systems - Software for railway control and protection systems
IEC 61508, Functional Safety of Electrical/Electronic/Programmable Electronic Safety-related Systems (E/E/PE, or E/E/PES)
ISO 26262, Road Vehicles - Functional Safety (especially 'part 6' of the standard, which is titled "Product development at the software level")
See also
Safety-critical system
High availability software
Formal methods
Software of unknown pedigree
References
External links
Software by type
Software quality
Safety engineering | High integrity software | [
"Technology",
"Engineering"
] | 214 | [
"Systems engineering",
"Safety engineering",
"Software by type",
"Software engineering stubs",
"Software engineering"
] |
70,076,628 | https://en.wikipedia.org/wiki/Kiritimatiellota | The Kiritimatiellota are a phylum of bacteria.
References
Bacteria phyla
Bacteria | Kiritimatiellota | [
"Biology"
] | 22 | [
"Bacteria stubs",
"Prokaryotes",
"Microorganisms",
"Bacteria"
] |
70,076,933 | https://en.wikipedia.org/wiki/Edward%20David%20Hughes | Edward David Hughes (June 18, 1906 – June 30, 1963) was a British organic chemist. He was a professor first at University College, Bangor and then at University College London, eventually rising to the rank of dean at each. He was elected as a Fellow of the Royal Society in 1949.
Hughes studied organic reaction mechanisms and reaction kinetics, including being one of the first chemists to use isotopes to understand them. He collaborated with Christopher Kelk Ingold, leading to development of the eponymous Hughes–Ingold rules and Hughes–Ingold symbols.
References
Fellows of the Royal Society
20th-century British chemists
Academics of Bangor University
Academics of University College London
British organic chemists
1906 births
1963 deaths | Edward David Hughes | [
"Chemistry"
] | 145 | [
"Organic chemists",
"British organic chemists"
] |
68,581,881 | https://en.wikipedia.org/wiki/Manifold%20hypothesis | The manifold hypothesis posits that many high-dimensional data sets that occur in the real world actually lie along low-dimensional latent manifolds inside that high-dimensional space. As a consequence of the manifold hypothesis, many data sets that appear to initially require many variables to describe, can actually be described by a comparatively small number of variables, likened to the local coordinate system of the underlying manifold. It is suggested that this principle underpins the effectiveness of machine learning algorithms in describing high-dimensional data sets by considering a few common features.
The manifold hypothesis is related to the effectiveness of nonlinear dimensionality reduction techniques in machine learning. Many techniques of dimensional reduction make the assumption that data lies along a low-dimensional submanifold, such as manifold sculpting, manifold alignment, and manifold regularization.
The major implications of this hypothesis are that
Machine learning models only have to fit relatively simple, low-dimensional, highly structured subspaces within their potential input space (latent manifolds).
Within one of these manifolds, it’s always possible to interpolate between two inputs, that is to say, morph one into another via a continuous path along which all points fall on the manifold.
The ability to interpolate between samples is the key to generalization in deep learning.
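As an illustrative sketch (my own construction, not from the article), the simplest linear case of this idea can be seen numerically: data generated from a 2-D latent space but observed in a 50-D ambient space has only two significant singular values, revealing its low intrinsic dimensionality.

```python
import numpy as np

# Sample 2-D latent coordinates and embed them linearly in 50 dimensions.
# Real data sets are hypothesized to lie near *nonlinear* manifolds; the
# linear case shown here is only the simplest instance of the idea.
rng = np.random.default_rng(0)
latent = rng.normal(size=(1000, 2))   # 2-D latent coordinates
embed = rng.normal(size=(2, 50))      # linear map into the 50-D space
data = latent @ embed                 # observed high-dimensional data

# Center the data and inspect its singular value spectrum: only two
# directions carry (numerically) nonzero variance.
data = data - data.mean(axis=0)
s = np.linalg.svd(data, compute_uv=False)
explained = s**2 / (s**2).sum()
print(f"variance captured by top 2 directions: {explained[:2].sum():.4f}")  # 1.0000
```

Nonlinear dimensionality reduction methods generalize this diagnostic to curved manifolds, where no single linear subspace captures the structure.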
The information geometry of statistical manifolds
An empirically-motivated approach to the manifold hypothesis focuses on its correspondence with an effective theory for manifold learning under the assumption that robust machine learning requires encoding the dataset of interest using methods for data compression. This perspective gradually emerged using the tools of information geometry thanks to the coordinated effort of scientists working on the efficient coding hypothesis, predictive coding and variational Bayesian methods.
The argument for reasoning about the information geometry on the latent space of distributions rests upon the existence and uniqueness of the Fisher information metric. In this general setting, we are trying to find a stochastic embedding of a statistical manifold. From the perspective of dynamical systems, in the big data regime this manifold generally exhibits certain properties such as homeostasis:
We can sample large amounts of data from the underlying generative process.
Machine Learning experiments are reproducible, so the statistics of the generating process exhibit stationarity.
In a sense made precise by theoretical neuroscientists working on the free energy principle, the statistical manifold in question possesses a Markov blanket.
References
Further reading
Machine learning
Theoretical computer science | Manifold hypothesis | [
"Mathematics",
"Engineering"
] | 493 | [
"Theoretical computer science",
"Applied mathematics",
"Artificial intelligence engineering",
"Machine learning"
] |
68,582,708 | https://en.wikipedia.org/wiki/Ad%C3%A9lia%20Sequeira | Adélia da Costa Sequeira is a Portuguese applied mathematician specializing in the mathematical modeling of blood flow and the circulatory system. She is a professor of mathematics at the Instituto Superior Técnico, part of the University of Lisbon, where she is coordinator for the Scientific Area on Numerical Analysis and Applied Analysis and director of the Research Center for Computational and Stochastic Mathematics.
Education
Sequeira earned a doctorat de troisième cycle in France in 1981, at Pierre and Marie Curie University, in numerical analysis. Her dissertation, Couplage entre la méthode des éléments finis et la méthode des équations integrales: application au problème de Stokes stationnaire dans le plan, was supervised by Jean-Claude Nédélec. She has a second doctorate in mathematics, earned in 1985 at the University of Lisbon, where she also earned a habilitation in 2001.
Books
Sequeira is a coauthor of the book Hemomath: the Mathematics of Blood (Springer, 2017), and is the editor of several edited volumes.
Recognition
She is a corresponding member of the Lisbon Academy of Sciences, elected in 2018.
References
External links
Adélia Sequeira, Mulheres na ciência, Portuguese Agency of Science and Technology "Ciência Viva"
Year of birth missing (living people)
Living people
20th-century Portuguese mathematicians
Women mathematicians
Applied mathematicians
University of Lisbon alumni
Academic staff of the University of Lisbon
21st-century Portuguese mathematicians | Adélia Sequeira | [
"Mathematics"
] | 298 | [
"Applied mathematics",
"Applied mathematicians"
] |
68,583,037 | https://en.wikipedia.org/wiki/Time%20in%20Djibouti | Time in Djibouti is given by a single time zone, officially denoted as East Africa Time (EAT; UTC+03:00). Djibouti does not observe daylight saving time.
IANA time zone database
In the IANA time zone database, Djibouti is given one zone in the file zone.tab – Africa/Djibouti, which is an alias to Africa/Nairobi. "DJ" refers to the country's ISO 3166-1 alpha-2 country code. Data for Djibouti directly from zone.tab of the IANA time zone database; columns marked with * are the columns from zone.tab itself:
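The alias and fixed offset can be confirmed with Python's standard-library zoneinfo module, which reads the IANA time zone database (a quick check, assuming the system's tzdata is available):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library since Python 3.9

# Africa/Djibouti is a link (alias) to Africa/Nairobi in the IANA database,
# so both names resolve to East Africa Time: a fixed UTC+03:00 with no DST.
dt = datetime(2022, 6, 1, 12, 0, tzinfo=ZoneInfo("Africa/Djibouti"))
print(dt.tzname(), dt.utcoffset())  # EAT 3:00:00
```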
See also
Time in Africa
List of time zones by country
List of UTC time offsets
References
External links
Current time in Djibouti at Time.is
Time in Djibouti at TimeAndDate.com
Time by country
Geography of Djibouti
Time in Africa | Time in Djibouti | [
"Physics"
] | 194 | [
"Spacetime",
"Physical quantities",
"Time",
"Time by country"
] |
68,585,633 | https://en.wikipedia.org/wiki/Laundry-folding%20machine | A laundry-folding machine or laundry-folding robot is a machine or domestic robot that folds items of apparel so that they can be stored compactly and neatly.
A laundry-folding machine can be part of, or integrated with, a washing machine, clothes dryer, ironing machine and/or wardrobe. Some perform these processes autonomously, while others require varying degrees of manual intervention.
Industrial use
For industrial use, there are several types and sizes of laundry-folding machines, some of which are highly specialized for certain types of clothing, and others large enough to fold large textiles such as bedding.
Domestic use
There have been several attempts to produce commercial clothes folding machines for home use.
FoldiMate was an American company founded in 2010, which presented a prototype of a clothes-folding machine first in 2016, then at the Consumer Electronics Show in 2017, and an updated prototype at CES in 2018. The garments had to be fed manually into the machine from the top one at a time, and after a few minutes the user could pick up neatly folded clothes at the bottom. Their goal was for FoldiMate to enter the market by the end of 2019, but in July 2021 it became clear that the company would cease its operations.
Laundroid was a Japanese combined washing and folding machine that aimed to wash, dry, iron and fold clothes, and then transport them to an integrated wardrobe, completely autonomously, with the full cycle estimated to run overnight. It was first shown at the consumer electronics exhibition CEATEC in 2015, and was marketed as the world's first robot that could wash and fold clothes. The goal was for Laundroid to enter the market in 2017 (later adjusted to 2019), but in 2019 the company behind Laundroid, Seven Dreamers, announced that it had gone bankrupt. The development had been supported by, amongst others, Daiwa House and Panasonic.
See also
Clothes dryer
Clothes horse
Clothes hanger
Smart home
Walk-in closet
Washing machine
References
Rotating machines
Folding machine
Domestic robots | Laundry-folding machine | [
"Physics",
"Technology"
] | 409 | [
"Home automation",
"Machines",
"Physical systems",
"Rotating machines",
"Domestic robots"
] |
68,586,247 | https://en.wikipedia.org/wiki/Stewart%20Andrew%20McDowall | Stewart Andrew McDowall (1882 – 13 January 1935) was an English biologist, eugenicist and philosopher.
McDowall was born in Bedford. He was educated at St Paul's School and University College London and graduated in natural science from Trinity College, Cambridge in 1904. He worked in the zoological laboratory at Cambridge and was assistant superintendent of the university's Museum of zoology. In 1905, he was appointed as Professor of Biology at the Christian College in Madras. In 1906, he became assistant master at Winchester College where he later became senior science master.
McDowall was a Christian and was ordained in the Church of England in 1908. He was the chaplain at Winchester College. He was a fellow of the Physical Society of London and the Cambridge Philosophical Society. McDowall was a theistic evolutionist and wrote several works on this topic. He held the view that evolution was non-materialistic, progressive and supported the values of Christian cosmology. He was influenced by Henri Bergson. In 1923–1924, McDowall gave the Hulsean Lectures on Evolution, Knowledge and Revelation, which were described as "an extreme form of metaphysical idealism".
He was an active eugenicist and was the author of Biology and Mankind (1931), which advocated sterilization of the feeble-minded.
McDowall died in Winchester on 13 January 1935.
Selected publications
Evolution and Spiritual Life (1915)
Seven Doubts of a Biologist (1917)
Evolution and the Doctrine of the Trinity (1918)
Beauty and the Beast: An Essay in Evolutionary Aesthetic (1920)
Evolution, Knowledge and Revelation (1924)
Creative Personality and Evolution (1928)
Biology and Mankind (1931)
Is Sin Our Fault? (1932)
References
1882 births
1935 deaths
20th-century British biologists
20th-century English zoologists
British Christian writers
English eugenicists
Idealists
People from Bedford
Teachers at Winchester College
Theistic evolutionists | Stewart Andrew McDowall | [
"Biology"
] | 387 | [
"Non-Darwinian evolution",
"Theistic evolutionists",
"Biology theories"
] |
68,588,557 | https://en.wikipedia.org/wiki/Verrucotoxin | Verrucotoxin (VTX) is a lethal venom produced in the dorsal fins of Synanceia verrucosa, a species of reef stonefish belonging to the family Synanceiidae. The venom of this species is a tetrameric glycoprotein with cardiovascular and cytolytic effects.
Structure
Verrucotoxin is a tetrameric protein with a molecular weight of 322 kDa; it consists of two α subunits (83 kDa each) and two β subunits (78 kDa each). Verrucotoxin shares 96% homology with the β subunit of the closely related venom stonustoxin, which is produced by Synanceia horrida.
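A quick arithmetic check of the quoted subunit masses (assuming the total is simply additive):

```python
# Two alpha subunits (83 kDa) plus two beta subunits (78 kDa)
# account for the stated 322 kDa tetramer mass.
alpha_kda, beta_kda = 83, 78
total_kda = 2 * alpha_kda + 2 * beta_kda
print(total_kda)  # 322
```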
Function and mechanism
Verrucotoxin has been shown to interact with both calcium ion channels and ATP-sensitive potassium (KATP) channels. The calcium ion channel is modulated by the activation of β-adrenoceptors when verrucotoxin binds. There are three subtypes of β-adrenoceptor (β1, β2, and β3), but only β2-adrenoceptors are responsible for activating the cyclic adenosine monophosphate (cAMP)-protein kinase A (PKA) pathway. cAMP-dependent phosphorylation of muscle regulatory proteins modulates intracellular calcium concentrations, and it is through this pathway that verrucotoxin alters the calcium concentration in the cell. The effect is concentration-dependent; the presence of verrucotoxin can raise the intracellular calcium concentration to three times its normal level. Additionally, verrucotoxin has been observed to cause a reversible prolongation of the action potential duration, with no change in resting membrane potential, in ventricular myocytes from guinea pigs.
The second way verrucotoxin disrupts cells is through potassium ion channels, particularly the ATP-sensitive potassium (KATP) channel. The KATP channel operates by moving potassium ions out across the cell membrane, and its activity is regulated by phosphorylation with adenosine triphosphate (ATP). Verrucotoxin is able to inhibit the KATP channel through the activation of the muscarinic M3 receptor-protein kinase C (PKC) pathway; it is the activated PKC, presumably phosphorylating the KATP channel instead of ATP, that inhibits the potassium ion channel.
Adverse effects
The stonefish Synanceia verrucosa produces a diverse set of toxins that disrupt basic human physiology. Individuals injected with the toxins found in the dorsal fins of the fish suffer skeletal muscle paralysis, extreme pain, seizures, convulsions, respiratory arrest, and damage to the cardiovascular system. Verrucotoxin has been identified as the cause of the cardiovascular damage, convulsions, seizures, and paralysis. The cardiovascular damage results from the sudden change in intracellular calcium concentration, which leads to arrhythmia. The direct cause of the seizures, convulsions, and paralysis is still being investigated.
References
Ichthyotoxins
Potassium channel blockers
Calcium channel openers
Glycoproteins | Verrucotoxin | [
"Chemistry"
] | 698 | [
"Glycoproteins",
"Glycobiology"
] |
68,589,447 | https://en.wikipedia.org/wiki/Fumarranol | Fumarranol is a drug which acts as an inhibitor of the type 2 methionine aminopeptidase enzyme METAP2. It was derived by structural modification of the natural product fumagillin. It was originally developed as an anti-angiogenesis drug for the treatment of cancer, but it was subsequently found to bind with high affinity to the METAP2 enzyme in malaria parasites and has been investigated as a potential treatment for malaria.
See also
Beloranib
References
Enzyme inhibitors | Fumarranol | [
"Chemistry"
] | 103 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
68,589,911 | https://en.wikipedia.org/wiki/2C-T-28 | 2C-T-28 is a lesser-known psychedelic drug related to compounds such as 2C-T-7 and 2C-T-21. It was named by Alexander Shulgin but was never made or tested by him, and was instead first synthesised by Daniel Trachsel some years later. It has a binding affinity of 75 nM at 5-HT2A and 28 nM at 5-HT2C. It is reportedly a potent psychedelic drug with an active dose in the 8–20 mg range, and a duration of action of 8–10 hours, with prominent visual effects. 2C-T-28 is the 3-fluoropropyl instead of 2-fluoroethyl chain-lengthened homologue of 2C-T-21 and has very similar properties, although unlike 2C-T-21 it will not form toxic fluoroacetate as a metabolite.
See also
2C-T-16
2C-TFE
3C-DFE
DOPF
Trifluoromescaline
2C-x
DOx
25-NB
References
2C (psychedelics)
Entheogens
Thioethers
Amines
Methoxy compounds | 2C-T-28 | [
"Chemistry"
] | 250 | [
"Amines",
"Bases (chemistry)",
"Functional groups"
] |
68,590,385 | https://en.wikipedia.org/wiki/Origamics | Origamics: Mathematical Explorations Through Paper Folding is a book on the mathematics of paper folding by Haga, a retired Japanese biology professor. It was edited and translated into English by Josefina C. Fonacier and Masami Isoda, based on material published in several Japanese-language books by Haga, and published in 2008 by World Scientific. The title is a portmanteau of "origami" and "mathematics", coined in the 1990s by Haga to describe the type of paper-folding mathematical exploration that would later be described in this book.
Topics
Although much of its content involves folding square sheets of origami paper, the book focuses on mathematical explorations developing from folding and unfolding paper rather than on the traditional use of origami to create paper figures and artworks. It is divided into ten chapters, exploring concepts in paper folding that are "so simple that they could be discovered by middle- or high-school students".
The book begins with the exploration of a single fold of a corner of a square to a midpoint of an opposite edge, and its analysis involving the geometry of the 3–4–5 right triangle. Later explorations (sometimes presented with colorful stories of knights and princesses as motivation) concern folding one or more corners of the square to other points on the square, similar folds on paper with the shape of a silver rectangle (such as A4 paper), the interactions of the fold lines produced in this way, and the use of these folds to obtain subdivisions of the interval into different numbers of parts.
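The opening exploration can be verified with exact rational arithmetic. A fold is a reflection across the crease, i.e. the perpendicular bisector of the segment from the folded corner to its target point; the helper below is my own sketch, not code from the book:

```python
from fractions import Fraction as F

def reflect(p, a, b):
    """Reflect point p across the perpendicular bisector of segment a-b
    (the crease line of a fold that carries a onto b)."""
    nx, ny = b[0] - a[0], b[1] - a[1]
    c = (b[0] ** 2 + b[1] ** 2 - a[0] ** 2 - a[1] ** 2) / 2
    d = (nx * p[0] + ny * p[1] - c) / (nx ** 2 + ny ** 2)
    return (p[0] - 2 * d * nx, p[1] - 2 * d * ny)

# Fold the corner (0,0) of the unit square onto the midpoint (1/2, 1) of
# the opposite (top) edge, and track the image of the bottom edge.
corner, target = (F(0), F(0)), (F(1, 2), F(1))
p0 = reflect(corner, corner, target)          # image of (0,0): the target itself
p1 = reflect((F(1), F(0)), corner, target)    # image of (1,0)

# The folded bottom edge crosses the right edge x = 1 at height y:
t = (1 - p0[0]) / (p1[0] - p0[0])
y = p0[1] + t * (p1[1] - p0[1])

# The corner triangle at the top right has legs 1/2 and 2/3 and
# hypotenuse 5/6: a 3-4-5 right triangle.
legs = (1 - target[0], 1 - y)
print(f"legs {legs[0]} and {legs[1]}, hypotenuse {F(5, 6)}: ratio 3:4:5")
```

Using Fraction keeps every coordinate exact, so the 3:4:5 ratio appears exactly rather than only up to floating-point error.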
Audience and reception
The book is primarily aimed at secondary-school mathematics teachers, and reviewer Gertraud Ehrig suggests that this book would be particularly helpful for them in providing inspiration for activities for their students.
Although the many activities discussed throughout the book are suitable for discovery learning by students, it also includes more technical material proving the mathematical insights found through these activities. These parts use only elementary methods in Euclidean geometry, such as the Pythagorean theorem and the use of triangle centers, and may be best omitted when presenting this material to students.
References
Mathematics books
Paper folding
2008 non-fiction books | Origamics | [
"Mathematics"
] | 442 | [
"Recreational mathematics",
"Paper folding"
] |
60,739,676 | https://en.wikipedia.org/wiki/Tropical%20desert | Tropical deserts are located in regions between 15 and 30 degrees latitude. The environment is very extreme, and they have the highest average monthly temperatures on Earth. Rainfall is sporadic; in some years no precipitation is observed at all. In addition to these extreme environmental and climate conditions, most tropical deserts are covered with sand and rocks, and are thus too flat and lacking in vegetation to block out the wind. Wind may erode and transport sand, rocks and other materials; these are known as eolian processes. Landforms caused by wind erosion vary greatly in characteristics and size. Representative landforms include depressions and pans, yardangs, inverted topography and ventifacts. No significant populations can survive in tropical deserts due to extreme aridity, heat and the paucity of vegetation; only specific flora and fauna with special behavioral and physical mechanisms are supported. Although tropical deserts are considered to be harsh and barren, they are in fact important sources of natural resources and play a significant role in economic development. Besides the equatorial deserts, there are many hot deserts situated in the tropical zone.
Distribution
Geographical distribution
Tropical deserts are located in both continental interiors and coastal areas between the Tropic of Cancer and Tropic of Capricorn. Representative deserts include the Sahara Desert in North Africa, the Australian Desert in Western and Southern Australia, Arabian Desert and Syrian Desert in Western Asia, the Kalahari Desert in Southern Africa, Sonoran Desert in the United States and Mexico, Mojave Desert in the United States, Thar Desert in India and Pakistan, Dasht-e Margo and Registan Desert in Afghanistan and Dasht-e Kavir and Dasht-e Loot in Iran.
Controlling factor
The tropics form a belt around the equator from latitude 3 degrees north to latitude 3 degrees south, known as the Intertropical Convergence Zone. Tropical heat generates unstable air in this area, and air masses become extremely dry due to the loss of moisture during tropical ascent.
Another significant determinant of tropical desert climate are Hadley cells. Hadley cells concentrate all precipitations in the hotter humid lower pressure equator, leaving colder higher pressure deserts with no precipitation.
Characteristics
Temperature
Tropical deserts have the highest average daily temperature on the planet, as both the energy input during the day and the loss of heat at night are large. This phenomenon causes an extremely large daily temperature range. Specifically, temperatures in a low elevation inland desert can reach 40°C to 50°C during the day, and drop to approximately 5°C at night; the daily range is around 30 to 40°C.
There are some other reasons for the significant changes in temperature in tropical deserts. For instance, a lack of water and vegetation on the ground can enhance the absorption of heat from insolation. Subsiding air from dominant high-pressure areas in a cloud-free sky also leads to large amounts of insolation, and a cloudless sky allows the day's heat to escape rapidly at night.
Precipitation
Precipitation is very irregular in tropical deserts. The average annual precipitation in low-latitude deserts is less than 250 mm. Relative humidity is very low, only 10% to 30% in interior locations, and even the dewpoints are typically very low, often being well below the freezing mark. Some deserts lack rainfall all year round because they are located far from the ocean. High-pressure cells and high temperatures can also increase the level of aridity.
Wind
Wind greatly contributes to aridity in tropical deserts. If wind speed exceeds 80 km/h, it can generate dust storms and sandstorms and erode the rocky surface. Therefore, wind plays an important role in shaping various landforms. This phenomenon is known as the eolian process. There are two types of eolian process: deflation and abrasion.
First, deflation may cause the light lowering of ground surface, leading to deflation hollows, plains, basins, blowouts, wind-eroded plains and parabolic dunes. Second, the eolian process leads to abrasion, which forms special landforms with a significant undercut.
Landforms
Various landforms are found in tropical deserts due to different kinds of eolian process. The major landforms are dunes, depressions and pans, yardangs, and inverted topography.
Dunes
There are various kinds of dune in tropical deserts. Representative dunes include dome dunes, transverse dunes, barchans, star dunes, shadow dunes, linear dunes and longitudinal dunes.
Depression
A desert depression is caused by polygenetic factors such as wind erosion, broad shallow warping and block faulting, stream erosion, karst activity, salt weathering, mass wasting, and zoogenic processes; representative examples are the large enclosed basins in Africa, such as Farafra, Baharia, Dakhla, Qattara, Siwa and Kargha.
Pans
Pans are widespread in southern and western Australia, southern Africa and the high plains of the United States deserts. The factors responsible for pans include a vegetation-free surface and low humidity, a low water table and poorly consolidated sediment, and a huge amount of fine-grained sandstone and shale. Feedback mechanisms also play a significant role in the process of enlarging the pan; salts are left as water accumulates in depressions, which inhibits sedimentation due to weather and the growth of vegetation in the future. This affects both erosional processes and depositional processes in pans.
Yardangs
Yardangs can be observed in orbital and aerial images of Mars and Earth. Yardangs usually develop in arid regions, predominantly due to wind processes. The classic forms are streamlined and elongated ridges; they may also appear with flat tops or with stubby, short profiles. Their length-to-width ratios range from 3:1 to 10:1; this is determined by the wind direction, duration of exposure to wind, and rock material.
Inverted topography
Inverted topography forms in areas previously at a low elevation, such as deltaic distributary systems and river systems. They are left at higher relief due to their relative resistance to wind erosion. Inverted topography is frequently observed in yardang fields, such as raised channels in Egypt, Oman and China and on Mars.
Biogeography
The environment in tropical deserts is harsh as well as barren; only certain plants and animals with special behavioral and physical mechanisms can live there.
Biological adaptation to aridity
For flora, general adaptations include transforming leaves into spines for protection. With the reduction in leaf area, the stem develops as the major photosynthetic structure, which is also responsible for storing water. A common example is the cactus, which has specific means of storing and conserving water, along with few or no leaves to minimize transpiration.
In addition to the protection provided by spines, chemical defences are also very common. Desert plants grow slowly as less photosynthesis takes place, allowing them to invest more in defence.
Another adaptation is the development of extremely long roots that allow the flora to acquire moisture at the water table. Furthermore, some desert plants exhibit behavioural adaptations; for instance, some flora live for only one season or one year, and desert perennials can survive by staying dormant during extremely dry periods; when the environment receives more moisture, they become active again.
For fauna, the simplest strategy is to stay away from the surface of the tropical desert as much as possible, avoiding the heat and aridity. As a result of the scarcity of water, most animals in these regions get their water from eating succulent plants and seeds, or from the tissues and blood of their prey. They also have specific ways to store water and prevent water from leaving their bodies. Some animals live in burrows under the ground, which are not too hot and are relatively humid; they stay in their burrows during the heat of the day, and only come out to seek food at night. Examples of these animals include kangaroo rats and lizards. Other animals, such as wolf spiders and scorpions, have a thick outer covering that minimizes moisture loss. Animals in tropical deserts have also been found to concentrate their urine in their kidneys to excrete less water.
Flora
Representative desert plants include the barrel cactus, brittlebush, chain fruit cholla and creosote. It is also common to see the crimson hedgehog cactus, common saltbush, desert ironwood, fairy duster and Joshua tree. In some deserts the Mojave aster, ocotillo, organ pipe cactus and pancake prickly pear cactus can be found. Furthermore, paloverde, saguaro cactus, soaptree yucca, cholla guera, triangle-leaf bursage, tumbleweed and velvet mesquite can also be found in these regions.
Fauna
Representative fauna in tropical deserts include the armadillo lizard, banded Gila monster, bobcat, cactus wren and cactus ferruginous pygmy owl. Other desert animals include the coyote, desert bighorn sheep, desert kangaroo rat, desert tortoise, javelina, Mojave rattlesnake and cougar. Overall, different tropical deserts have different species; for example, the Sonoran Desert toad and Sonoran pronghorn antelope are typical animals of the Sonoran Desert.
Natural resources
Rich and sometimes unique mineral resources are located in tropical deserts. Representative minerals include borax, sodium nitrate, sodium, iodine, calcium, bromine, and strontium compounds. These minerals are created when the water in desert lakes evaporates.
Borax
Borax is a natural cleaner and freshener, also known as a detergent booster. Boric acid is derived from borax and can be used to manufacture agricultural chemicals such as herbicides and insecticides. It is also used widely in fire retardants, glass, ceramics, water softeners, pharmaceuticals, paint, enamel, cosmetics and coated paper. Billions of dollars' worth of borax has been mined in the northern Mojave Desert since 1881.
Borax is also a key ingredient for slime-making, the trend that was popular during the 2016-2017 period.
Sodium nitrate
Sodium nitrate forms through the evaporation of water in desert areas. The richest cache of sodium nitrate is located in South America; approximately 3 million metric tons were mined during World War I. It was the earliest food preservative, and is still used today to cure fish and meat to produce bacon, ham, sausage and deli meats. It is also used in the manufacturing of pharmaceuticals, fertilizers, dyes, explosives, flares and enamels.
Fossil fuels
Natural gas and oil are complex hydrocarbons that formed millions of years ago from the decomposition of animals and plants. They are the world's primary energy source and exist in viscous, solid, liquid or gaseous forms. The five largest oil fields are in Saudi Arabia, Iraq and Kuwait. The largest petroleum-producing region in the world is the Arabian Desert.
Metallic minerals
Most major kinds of mineral deposits formed by groundwater are located in deserts. For example, some valuable metallic minerals, such as gold, silver, iron, zinc, and uranium, are found in the Western Desert in Australia. This is due to special geological processes, and climate factors in the desert can preserve and enhance the mineral deposits.
Gemstones
Tropical deserts have various semi-precious and precious gemstones. Some common semi-precious gemstones include chalcedony, opal, quartz, turquoise, jade, amethyst, petrified wood, and topaz. Precious gemstones such as diamonds are used in jewellery and decoration. Although some gemstones can also be found in temperate zones throughout the world, turquoise can only be found in tropical deserts. Turquoise is a very valuable and popular opaque gemstone, with a beautiful blue-green or sky-blue colour and exquisite veins.
References
Deserts
Ecosystems
Geomorphology | Tropical desert | [
"Biology"
] | 2,423 | [
"Deserts",
"Symbiosis",
"Ecosystems"
] |
60,742,954 | https://en.wikipedia.org/wiki/Trimeresurus%20arunachalensis | Trimeresurus arunachalensis, the Arunachal pitviper, is a species of venomous pit viper endemic to the Indian state of Arunachal Pradesh. It is only known from the village of Ramda in the West Kameng district, where a single specimen was discovered during biodiversity surveys. It can physically be distinguished by its scalation, its acutely pointed snout reminiscent of the hump-nosed viper (Hypnale hypnale), and its brownish dorsal coloration with glossy orange-reddish-brown sides and belly. The last new species of (green) pit viper was described from India 70 years before the discovery of T. arunachalensis. Genetic analysis indicates that the closest relative of this species is the Tibetan bamboo pit viper (T. tibetanus). The single specimen known of this species makes it one of the rarest known pit vipers in the world, though further surveys of the forest habitat will likely reveal more individuals.
References
arunachalensis
Endemic fauna of India
Reptiles of India
Reptiles described in 2019
Taxa named by Veerappan Deepak
Species known from a single specimen | Trimeresurus arunachalensis | [
"Biology"
] | 226 | [
"Individual organisms",
"Species known from a single specimen"
] |
60,742,988 | https://en.wikipedia.org/wiki/Myriodontium%20keratinophilum | Apinisia keratinophila, formerly Myriodontium keratinophilum, is a fungus widespread in nature, most abundantly found in keratin-rich environments such as feathers, nails and hair. Despite its ability to colonize keratinous surfaces of the human body, the species has been known to be non-pathogenic in man and is phylogenetically distant from other human pathogenic species, such as anthropophilic dermatophytes (e.g., Trichophyton rubrum, Trichophyton interdigitale). However, its occasional isolation from clinical specimens along with its keratinolytic properties suggests the possibility that it may contribute to disease.
History and taxonomy
Apinisia keratinophila was first isolated in Italy in 1978, during the screening of soil microbes for their ability to produce antibiotic and antiviral substances. The fungus was described by Luciano Polonelli and Robert A. Samson, and was classified into a separate taxon for its ability to produce multiple asexual spores (conidia) at different points across the surface of the hypha. Later, in 1997, its sexual state (teleomorph) was independently described and named Neoarachnotheca keratinophilia by Josef Cano and Krzysztof Ulfig. M. keratinophilum is the only member of the genus Myriodontium.
Growth and morphology
The vegetative and fertile hyphae are branched, hyaline, septate, and smooth-walled, and range from 2.5 μm to 6 μm in width. The sexual fruiting structure (gymnothecium) consists of loosely interwoven hyphae that are hyaline to yellow in colour, and typically matures within a month, attaining a diameter of 150–700 μm. The gymnothecium encloses millions of round, hyaline asci that each bear 8 ascospores. The defining feature of A. keratinophila is that its ascospores are randomly wrinkled and pitted with an irregular sheath. The asexual spores (conidia) are one-celled and emerge at multiple different points across the surface of fertile hyphae. Species of the genera Chromelosporium and Pulchromyces morphologically resemble Myriodontium.
Physiology
Apinisia keratinophila exhibits keratinolytic properties and is able to grow on and degrade keratinous surfaces such as feathers. The fungus shows moderate to rapid growth on YpSs agar, attaining a diameter of 6 cm in 14 days at ; the colonies are white with a felt base obscured by floccose mycelium. On hay-infusion agar, the growth rate is slightly reduced, with a diameter of 5 cm in 14 days at . A similar growth rate is exhibited on oatmeal agar. On phytone extract agar at , its colony shows faster growth relative to other media, spreading up to 2.7 mm per day. On potato dextrose agar at , it exhibits a mean daily spread of 1.8–2.3 mm. On Sabouraud agar at , the mean daily spread is slightly reduced, ranging from 1.4 to 2.0 mm. It shows no growth at .
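The growth figures above are reported in two forms: colony diameter reached over an interval, and mean daily spread. A small arithmetic sketch of the conversion between them, assuming the quoted "spread" refers to radial extension from a central inoculation point (an assumption; the source does not state this):

```python
# Convert a colony diameter reached over an interval into a mean daily
# radial spread, assuming growth outward from a central inoculation point.
def radial_spread_mm_per_day(diameter_cm: float, days: float) -> float:
    radius_mm = diameter_cm * 10 / 2   # cm diameter -> mm radius
    return radius_mm / days

# 6 cm in 14 days on YpSs agar works out to roughly 2.1 mm/day,
# comparable to the 1.8-2.3 mm/day range quoted for potato dextrose agar.
print(round(radial_spread_mm_per_day(6, 14), 1))  # 2.1
```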
Habitat and ecology
Apinisia keratinophila is commonly found in soils and river sediments. M. keratinophilum has been isolated from many different keratinous materials, such as the penis of a bull, the hair of shrews and cats, and the feathers of pigeons. The fungus also inhabits human hair as an ectothrix that does not penetrate the cortex.
Isolation from human clinical specimens
Due to its uncertain pathogenic potential, A. keratinophila is classified as safe to be handled with Biosafety Level 1-equivalent containment. However, a 1985 case of frontal sinusitis in a Nigerian patient reported its isolation from the mucosal tissue lining the sinus. Considering the elevated levels of plasma cells and eosinophils, M. keratinophilum was determined to be the causal agent and the patient was treated with ketoconazole. The morphological features of the species isolated were described as similar to those of agents of allergic aspergillosis of the paranasal sinuses, such as Aspergillus flavus and Aspergillus niger. However, it has been proposed that this case may have been misdiagnosed in confusion with Schizophyllum commune. M. keratinophilum is also regularly isolated from dermatophytic and psoriatic lesions infected by non-dermatophytes, and is identified as one of the agents of hyalohyphomycosis. A report of spectrum analysis of patients with fungal infection also documents its isolation from the nails of patients.
Isolation from other clinical materials
A. keratinophila has been identified as entomopathogenic and causes mycosis and mortality in the oak lace bug (Corythucha arcuata) when the insect is exposed to conidia. Its isolates have also been found on mole cricket nymphs (Gryllotalpa gryllotalpa), and conidial application of A. keratinophila induced significant mortality and mycosis.
References
Fungi described in 1978
Onygenales
Fungus species | Myriodontium keratinophilum | [
"Biology"
] | 1,130 | [
"Fungi",
"Fungus species"
] |
60,743,304 | https://en.wikipedia.org/wiki/Fred%20P.%20Lossing%20Award | The Fred P. Lossing Award is awarded by the Canadian Society for Mass Spectrometry to a distinguished Canadian mass spectrometrist. The award is named after Frederick Lossing, a Canadian mass spectrometrist.
The award has been made annually since 1994. Recipients of the award receive framed prints of Lake Louise by the local Canmore artist, Marilyn Kinsella.
Recipients
Past recipients of the award were:
See also
List of chemistry awards
References
Academic awards
Canadian science and technology awards
Mass spectrometry awards
Awards established in 1994 | Fred P. Lossing Award | [
"Physics"
] | 110 | [
"Mass spectrometry",
"Spectrum (physical sciences)",
"Mass spectrometry awards"
] |
60,743,354 | https://en.wikipedia.org/wiki/Canadian%20Society%20for%20Mass%20Spectrometry | The Canadian Society for Mass Spectrometry is an organization that promotes mass spectrometry in Canada. The goal of the society is to stimulate interest and collaborations in the Canadian mass spectrometry community. The society organizes conferences, awards prizes and runs an online job board. The society is an affiliate society of the International Mass Spectrometry Foundation. Its current president is Derek Wilson.
The society awards the annual Fred P. Lossing Award.
References
External links
CSMS - Canadian Society for Mass Spectrometry
Chemistry education
Chemistry societies
Learned societies of Canada
Mass spectrometry
Science and technology in Canada
Scientific societies based in Canada | Canadian Society for Mass Spectrometry | [
"Physics",
"Chemistry"
] | 131 | [
"Chemistry organization stubs",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"nan",
"Chemistry societies",
"Matter"
] |
60,745,023 | https://en.wikipedia.org/wiki/Symbol%20Nomenclature%20For%20Glycans | The Symbol Nomenclature For Glycans (SNFG) is a community-curated standard for the depiction of simple monosaccharides and complex carbohydrates (glycans) using color-coded geometric shapes, along with defined text additions. It is hosted by the National Center for Biotechnology Information at the NCBI-Glycans Page. It is curated by an international group of researchers in the field, collectively called the SNFG Discussion Group. The overall goals of the SNFG are to:
Facilitate communications and presentations of monosaccharides and glycans for researchers in the Glycosciences, and for scientists and students less familiar with the field.
Ensure uniform usage of the nomenclature in the literature, thus helping to ensure scientific accuracy in journal and online publications.
Continue to develop the SNFG and its applications to aid wider use by the scientific community.
Description and examples
The SNFG consists of a table that provides color coded symbols for various monosaccharides that are commonly found in nature. It also includes a set of footnotes that describe rules for rendering glycans, including guidelines on how to modify the base set of symbols depicted in the table. These footnotes are organized into 10 themes that provide streamlined recommendations for: i. general usage of the SNFG; ii. CMYK / RGB color codes; iii. symbol colors and shapes; iv. ring configurations; v. bond linkage presentation; vi. sialic acids; vii. glycan modifications; viii. amino substitutions; ix. handling ambiguous or partially defined glycans; and x. depicting non-glycan entities using SNFG renderings. More details are available at the main SNFG webpage, which is periodically updated with additional directions.
The monosaccharides can be linked together to describe complex carbohydrate structures or glycans. More exhaustive cases for mammalian species, other eukaryotes, plants and microbes are considered at the main SNFG page.
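As a concrete illustration of the color-coded scheme, a minimal Python sketch of a residue-to-symbol lookup (this covers only a handful of common monosaccharides; the full, authoritative assignments are in the table at the NCBI-Glycans page):

```python
# Partial SNFG shape/colour lookup for a few common monosaccharides.
# Note how stereoisomers share a shape and differ only in colour
# (e.g. the hexoses Glc/Gal/Man are all circles).
SNFG_SYMBOLS = {
    "Glc":    ("circle",   "blue"),
    "Gal":    ("circle",   "yellow"),
    "Man":    ("circle",   "green"),
    "GlcNAc": ("square",   "blue"),
    "GalNAc": ("square",   "yellow"),
    "Fuc":    ("triangle", "red"),
    "Neu5Ac": ("diamond",  "purple"),
}

def describe(residue: str) -> str:
    """Render one residue as '<name>: <colour> <shape>'."""
    shape, colour = SNFG_SYMBOLS[residue]
    return f"{residue}: {colour} {shape}"

# e.g. the residues of the conserved N-glycan core
for residue in ("GlcNAc", "GlcNAc", "Man", "Man", "Man"):
    print(describe(residue))
```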
Software
Several software tools have been developed to support SNFG implementation by the community including:
GlycoGlyph: An open source glycan drawing and naming tool which enables drawing glycans in SNFG format using either a graphical user interface or from names in CFG linear nomenclature format. When structures are drawn, the application produces both the CFG linear name and the GlycoCT which in turn can be used to get the GlyTouCan ID numbers for the glycan.
3D-SNFG: For the cartoon representation of the SNFG in atomic models of carbohydrates. Here, the monosaccharides are depicted as large shapes, or icons centered within the rings.
DrawGlycan-SNFG: For the conversion of IUPAC-condensed string inputs to generate glycan and glycopeptide drawings. Bond fragmentation, glycan descriptors and other carbohydrate modifications can be included using string inputs.
GlycanBuilder2: A standalone version of GlycanBuilder that supports the expanded SNFG nomenclature.
Sugarsketcher: An intuitive web-based drag and drop tool for rendering SNFG images.
The SNFG nomenclature has also been adopted as a standard by major databases and journals in the Biomedical Sciences.
History
In 1978, Stuart Kornfeld and colleagues at the Washington University School of Medicine presented a system for symbolic representation of vertebrate glycans. This system gained popularity when it was implemented as a core method for glycan representation in the NCBI text book Essentials of Glycobiology edited by Ajit Varki (University of California, San Diego) and colleagues. While the first edition of this text published in 1999 used black-and-white symbols similar to the Kornfeld system, color was introduced in the second edition of the text (2009). The advantage of color is that different monosaccharide stereoisomers could now be depicted using the same shape, only with different colors. The system of carbohydrate representation was adopted and widely disseminated by many including the NIGMS-funded Consortium for Functional Glycomics, and thus was often referred to as "CFG Nomenclature". This color representation was vastly expanded in the third edition of the text to include 49 new monosaccharides that appear mostly in non-vertebrates, microbes and plants. Inputs and recommendations from a number of scientists beyond the editors of the Essentials textbook were included in this implementation, and the release of the expanded glycan symbol system was coordinated with the IUPAC Carbohydrate Nomenclature committee. For long-term development of this symbol nomenclature and standardization of glycan representation in the Glycosciences, in 2015, the Essentials editors suggested that the representation be formally called SNFG ('Symbol Nomenclature For Glycans'), and that future development be entrusted to a global community of scientists. To aid this development, each of the SNFG monosaccharide symbols was linked to PubChem entries at NCBI/NLM and a dedicated website at NCBI was established for future SNFG updates. Thus, the development of the SNFG is currently undertaken by an international community of scientists called the SNFG Discussion Group.
References
External links
Symbol Nomenclature For Glycans (SNFG) site at NCBI-Glycans
Chemical nomenclature
Carbohydrates | Symbol Nomenclature For Glycans | [
"Chemistry"
] | 1,166 | [
"Biomolecules by chemical classification",
"Carbohydrates",
"Organic compounds",
"Carbohydrate chemistry",
"nan"
] |
60,745,322 | https://en.wikipedia.org/wiki/Imperial%20Iranian%20Air%20Force%20Flight%2048 | Imperial Iranian Air Force Flight 48 was a military cargo flight from Tehran, Iran, to McGuire Air Force Base in the United States with a stopover in Madrid, Spain. On May 9, 1976, the Boeing 747-131 freighter operating the flight crashed during its approach to Madrid, killing all 17 people on board.
Aircraft
The aircraft involved was a five-year-old Boeing 747-131 (serial number 19677 and line number 73) which made its first flight on September 15, 1970. On September 26, the aircraft was delivered to Trans World Airlines (TWA) with registration N53111.
On October 15, 1975, the aircraft was returned to the Boeing factory in Wichita, Kansas. It was converted into a freighter cargo model (747-131F), during which time a large cargo door was added on the left side.
In October 1975, the aircraft was sold to the Imperial Iranian Air Force with serial number 5-283. The IIAF received the aircraft on November 1. The aircraft was powered by four Pratt & Whitney JT9D-3B turbofan engines.
The aircraft's last maintenance check was performed by the Imperial Iranian Air Force on May 4, 1976, after which it flew for 16 hours. During the subsequent investigation, it would be determined that American specialists were unaware of the check's results.
Accident
Flight ULF48 took off from Mehrabad airport in Tehran at 08:20 GMT bound for New Jersey, via Madrid. There were 10 crew members and seven passengers on board. The aircraft climbed to flight level FL330, meaning roughly . At take off, the aircraft's weight was , including of fuel. The fuel was a mixture of type JP-4 and Jet-A. The aircraft's weight and centering were within required limits.
At 14:15, Flight 48 contacted the Madrid Air Route Traffic Control Center and reported that the estimated landing time would be 14:40. At 14:19, the Madrid ARTCC controller told the flight that they were identified on the radar screens and cleared the flight to descend to the CPL VOR via the Castejon radio beacon. At 14:22, the crew received the weather conditions at the airport; at 14:25, they were cleared to descend to FL100. The crew acknowledged and began descent.
A cyclone had passed over Spain earlier in the day, along with strong thunderstorms. However, visibility was good, and no dangerous weather alerts were issued by the weather service. At 14:30, the crew diverted to the left of their assigned route due to bad weather. At 14:32, the Madrid ARTCC controller cleared the flight to descend to and contact Madrid approach. At 14:33, the crew contacted Madrid approach and reported more bad weather ahead, subsequently requesting to deviate away from it.
The approach controller reported that he had established radar contact, and then asked the crew to confirm their instructions. The crew confirmed, and reported passing the Castejon radio beacon. The controller instructed them to maintain a heading of 260°. The crew acknowledged the transmission and reported their descent to . This was the last transmission from Flight ULF48.
At the same time, south of the town of Valdemoro, locals noticed the aircraft flying at around on a 220° heading. The crew was aware that they were flying into poor weather conditions, but none of them expressed any concern until 14:34, when a crew member said, "We're in the soup!" Three seconds later, two witnesses on the ground reported seeing lightning strike the aircraft, followed by an explosion on the left wing near engine #1 (outer left). The left wing exploded into three large parts, and then disintegrated into 15 fragments.
At this time, the Flight Data Recorder stopped recording, but the Cockpit Voice Recorder continued to record. The autopilot disconnect warning was then heard. Unaware of the loss of the left wing, the crew tried to regain control of the crippled aircraft in vain. The aircraft dove rapidly and it crashed onto a farm at a height above sea level at 14:35 (15:35 local time), 54 seconds after the moment of the lightning strike. All 17 people on board were killed and the aircraft was destroyed.
Investigation
The Imperial Iranian Air Force and the United States National Transportation Safety Board investigated the accident.
The Spanish government gave the Iranian government the primary responsibility to investigate, and the NTSB also successfully argued that it should help investigating as the aircraft type originated from the US.
It was established that a bolt of lightning struck the fuselage near the cockpit and exited the left wing's static discharger located at the wingtip. This created a spark in fuel tank number 1 (which contained fuel), igniting fuel vapor in the tank. The blast wave from the explosion, at more than , caused the tank walls to collapse.
It is most likely that the ignition spark originated from an open circuit in a fuel valve's wiring. The explosion led to part of the wing trim separating and damage to the side members; as a result, the air flow deteriorated sharply and the wings began to bend significantly. As the flight was passing through an area of turbulence at high speed, the wing experienced major mechanical stress. The entire left wing separated just seconds later.
The NTSB could not determine if the wing separated due to the explosion or the stress.
See also
Similar accidents
TWA Flight 891 – lightning strike
Pan Am Flight 214 – lightning strike and fuel tank explosion
LANSA Flight 508 – lightning strike
TWA Flight 800 – fuel tank explosion
Notes and References
Notes
References
External links
The accident aircraft
Aviation accidents and incidents caused by lightning strikes
Accidents and incidents involving the Boeing 747
Aviation accidents and incidents in Spain
Aviation accidents and incidents in 1976
Aviation accidents and incidents involving in-flight explosions
1976 meteorology
1976 in Spain
May 1976 events in Europe
History of Madrid
1970s in Madrid
Accidents and incidents involving military aircraft
Aviation accidents and incidents caused by in-flight structural failure
Aviation accidents and incidents caused by clear air turbulence | Imperial Iranian Air Force Flight 48 | [
"Chemistry"
] | 1,229 | [
"Aviation accidents and incidents involving in-flight explosions",
"Explosions"
] |
60,745,629 | https://en.wikipedia.org/wiki/Dome%20over%20Manhattan | The Dome over Manhattan was a 1959 proposal for a 3-kilometer-diameter geodesic domed city covering Midtown Manhattan by the architects Buckminster Fuller and Thomas C. Howard of Synergetics, Inc.
Fuller expanded on his earlier work designing geodesic domes and advocating for decreased use of resources, and made a variety of claims to support the "Dome Over Manhattan," such as that it would reduce energy usage in NYC to 20% of what it was in 1960.
The concept inspired the science fiction writer Ben Bova's story "Manhattan Dome" in the September 1968 issue of Amazing Stories, subsequently expanded into the 1976 novella City of Darkness. A Fuller dome over Manhattan also appeared in John Brunner's 1968 novel Stand on Zanzibar.
References
1960 architecture
1960 in New York City
1960s in Manhattan
Buckminster Fuller
Geodesic domes
Midtown Manhattan
Unbuilt buildings and structures in New York City
Proposed arcologies | Dome over Manhattan | [
"Technology",
"Engineering"
] | 190 | [
"Architecture stubs",
"Exploratory engineering",
"Proposed arcologies",
"Architecture"
] |
60,746,548 | https://en.wikipedia.org/wiki/Coded%20exposure%20photography | Coded exposure photography, also known as a flutter shutter, is the name given to any mathematical algorithm that reduces the effects of motion blur in photography. The key element of the coded exposure process is the mathematical formula that affects the shutter frequency. This involves the calculation of the relationship between the photon exposure of the light sensor and the randomized code. The camera is made to take a series of snapshots at random time intervals using a simple computer; this creates a blurred image that can be reconstructed into a clear image using the algorithm.
Motion de-blurring technology grew due to increasing demand for clearer images in sporting events and other digital media. The relative inexpensiveness of the coded exposure technology makes it a viable alternative to expensive cameras and equipment that are built to take millions of images per second.
History
Photography was developed to enable imaging of the visible world. Early cameras used film made of plastic coated with compounds of silver. The film is highly sensitive to light. When photons (light) hit the film, a reaction occurs which semi-permanently stores the data on its surface. This film is then developed by exposing it to several chemicals to create the image. The film is highly sensitive and the process is complicated. It must be stored away from light to prevent spoilage.
Digital cameras use digital technologies to create images. This process involves exposing light-sensitive material to photons, creating electrical signals that are recorded in computer files. This process is simple and has improved the availability of photography. One problem that digital cameras have faced is motion blur. Motion blur occurs when the camera or the subject is in motion. When motion blur happens, the resulting image is blurry, with fuzzy edges and indistinct features. One solution to remove motion blur in photography is to increase the shutter speed of the camera. Unlike the coded exposure process, shutter speed is a purely physical process where the camera shutter is opened and closed more quickly, resulting in a short exposure time. This reduces the amount of motion that occupies each frame. However, shorter exposure times increase the 'noise', which can affect image quality.
Coded exposure
Coded exposure solves the motion blur problem without the negative effects of shorter exposure times. It is an algorithm designed to open the camera's shutter in a pattern that enables the image to be processed in such a way that motion blur and noise are almost completely removed. Contrary to other methods of de-blurring, coded exposure does not require additional hardware beyond a digital camera.
The key element of the coded exposure process is the formula that affects the shutter frequency. The process calculates the relationship between the exposure of the light sensor and the randomized code. The digital camera takes a series of snapshots at random intervals. This creates a blurred image that can be clarified given the code or the algorithm. Together with compressed sensing, this technique can be effective.
Application
The relative inexpensiveness of the coded exposure technology makes it a viable alternative to expensive cameras and equipment that take millions of images per second. However, the algorithm and subsequent de-blurring is a complicated process that requires specialists who can write the programs and create templates for companies to work from. Ownership of the technology is subject to dispute; no patent covers it.
Coded exposure could have application on live television. Accurate footage of sporting events requires a clear image and detail. Short exposure cameras have been used, but coded exposure is typically available at a lower cost. As of October 2019, the technology had not been widely used outside of a research environment.
References
Photographic techniques | Coded exposure photography | [
"Mathematics"
] | 706 | [
"Applied mathematics",
"Algorithms",
"Mathematical logic"
] |
60,746,878 | https://en.wikipedia.org/wiki/Write%20Once%20Read%20Forever | Write Once Read Forever (WORF) is a data storage method which allows users to write data once and stores the user's data without it ever being refreshed. This differs from common digital storage techniques such as drives that need to be re-written often to prevent loss or corruption of data. WORF was tested on the International Space Station in 2019.
Method
WORF uses a novel, high-density, patented data storage mechanism based on silver halide, which after substantial testing has been determined to last for more than a century under conventional ambient environmental conditions. WORF digital data is stored as microscopic, metallic, interference-created standing waves (representing narrowband "colors") embedded in a modern, super-resolution, dye-free, photosensitive emulsion. Wavelengths encode multiple superimposed states, allowing complex data permutations to be stored per data region. Permutations enable extremely high data density to be stored on WORF media. Multi-state data architecture within each domain also enhances data integrity and error-checking, and accelerates parallel writing and reading for the entire media module. Once data is written to WORF, energy is needed only for reading; no periodic refresh is necessary, and data is both immutable and truly permanent. Human-readable text and images are embedded in the WORF module adjacent to the digital data. This text and imagery contain meta-information about the media's content, and instructions for decoding for future generations.
NASA Experiment
The WORF payload was delivered and docked to the International Space Station (ISS) via SpaceX CRS-17 on May 9, 2019. NASA's ISS test will determine if WORF media can survive a hostile space environment during long-term space missions, such as Lunar and Mars missions, and beyond. The WORF media payload will stay on the ISS for up to one year.
Due to a previous NASA mission already named WORF, NASA renamed the experiment HELIOS (Hardened Extremely Long Life Information Optical Storage). The HELIOS mission returned to Earth and was deemed a success with the stored data showing no significant decay after six months of space conditions and solar radiation. WORF technology for the HELIOS experiment uses a proven archival media, redesigned, re-purposed and patented by CTech to store digital data for long periods, measured in decades and possibly centuries.
References
External links
WORF @ Creative Technology
Data storage
Computer data storage
Solid-state computer storage
Computer storage devices
Science experiments
International Space Station experiments
Space science experiments
Space exposure experiments | Write Once Read Forever | [
"Technology"
] | 513 | [
"Computer storage devices",
"Recording devices"
] |
60,746,996 | https://en.wikipedia.org/wiki/Small%20box%20respirator | The Small Box Respirator (SBR) was a British gas mask of the First World War and a successor to the Large Box Respirator. In late 1916, the respirator was introduced by the British with the aim of providing reliable protection against chlorine and phosgene gases. The respirator offered a first line of defence against these. The Germans then began the use of mustard gas, a vesicant ("blister agent") that burnt the skin of individuals exposed to it. Death rates were high with exposure to the mixed phosgene, chlorine and mustard gases; however, with soldiers having readily available access to the small box respirator, death rates lowered significantly. Light and reasonably fitting, the respirator was a key piece of equipment to protect soldiers on the battlefield.
Materials and construction
The small box respirator consists of a face mask made of rubberized fabric connected by a rubber fabric hose to a canister made of tinplate containing a chemical absorbent. The respirator mask is light in weight and is made from khaki cotton fabric plated with a thin layer of black rubber. Khaki cotton tape, located in the middle of the forehead region of the mask, connects to black elastic strips from the cheeks to ensure a suitable fit for the wearer. The circular eyepieces are set in metal rims and consist of celluloid sealed on with rubber sealant. A circular wired nose clip with rubber-covered jaws sits between the eyepieces on the inside of the mask. The mask contains an internal mouthpiece with an exhale valve made of black rubber, consisting of a flange to fit both mouth and teeth. The mouthpiece is joined by a brass tube to the rubberized hose leading to the canister. The rubber hose is around 30 cm in length and is made of vulcanized stockinette fabric, making the hose flexible and strong.
The canister, which was oval in cross section, contained cotton and wire gauze filters (introduced in April 1917 to catch chlorarsine compound particulates) with charcoal and quicklime, later charcoal and soda lime to absorb the poison gases.
History of use
Chemical gas attacks
The Small Box Respirators were introduced into British and Imperial forces on the Western Front in 1916 and issue was complete early in 1917. The first use of phosgene and chlorine gas in combination had been on 19 December 1915, when it was used against French and Canadian units in the Second Battle of Ypres. It was used in six attacks up to August 1916. British anti-gas helmets - P, then PH and PHG - were issued to repel the chlorine gas; problems later arose when the helmets could not withstand the effects of the phosgene gas. Chlorine was readily detected in battle as the gas formed a yellowish-green cloud and had a pungent odour. The situation became problematic on the introduction of mixed phosgene and chlorine, as phosgene is colourless and smells of freshly cut hay. Phosgene was up to six times as potent as chlorine and did not produce the urgent symptoms, such as coughing and discomfort, that chlorine did. Psychological impacts of the gas resulted in unexplained anxiety attacks, which would cause men to tear off their gas masks to breathe more freely, exposing them to the gas. Soldiers affected by the gas did not recall feeling symptoms until hours later. 85% of the fatalities that occurred due to chemical weapons were from the phosgene-chlorine mixture. Small Box Respirators lowered mortality rates significantly; for this reason mustard gas, a vesicant that burned the skin, was introduced as the new weapon of chemical warfare in 1917.
Canadian Usage
Canadian troops began to receive small box respirators in late November 1916. While the respirators acted as the first line of defence for some British troops, other Canadian and some British troops were still using the earlier and less effective gas mask, the PH helmet. The PH helmet was used throughout early 1916 by British troops and was designed to be tucked under the shirt of the wearer. The masks were an evolution of the P Helmet, made effective against phosgene gas by adding hexamine to the sodium phenate solution, which acted as an absorbent of the phosgene. Both pieces of equipment were to be carried by troops during battle. It became an increasing issue that PH helmets were being dropped and lost during battle; an estimated 9 million PH helmets were dropped while barely any respirators were lost. Canadian and British troops were not convinced that double the protection was needed. Both masks were liable to damage, and it therefore remained necessary to carry both.
Complications of the small box respirator
The Small Box Respirator was criticised by troops. The respirator restricted performance, imposing a very unnatural way of breathing during heavy activity on the battlefield. It came in six different sizes and had to be individually fitted to each man; to remain effective, the fit had to be checked constantly. The eyepieces were very prone to fogging and misting, obstructing vision, and the nose clip caused extreme discomfort. The flexible hose was vulnerable to damage, which could then let gas in. Adjusting the gas mask was problematic, and death could result if it was not worn correctly; soldiers had compulsory practice with the mask before using it in combat. The respirator caused intensive wheezing, and extreme heat and exhaustion could result in suffocation-like symptoms.
Evolution of the small box respirator
The first proper respirator developed was the Black Veil Respirator, by John Scott Haldane. It was used on the evening of 22 April 1915 in Belgium, close to Ypres, by British troops. These home-made respirators, known as the black veil, comprised cotton wool wrapped in either muslin or flannelette. The mask was almost completely useless when dry; when moistened by being soaked in the absorbent solution, it formed an airtight fit over the wearer's mouth and nose. The loosely woven cotton absorbed the solution well and allowed troop members to breathe effectively. A long piece of black veil cotton was folded to form a large sheath pocket to retain the chemical absorbents. The cotton veil was then wrapped around the user's head and tied. The chemical absorbents, consisting of anti-gas chemicals such as sodium hyposulfite, washing soda, glycerine acetate and water, allowed for consistent and dense moistening of the respirator. The respirator could be worn above the eyes to protect against tear gas. Its structure and material made it effective for about five minutes against the usual concentrations of a chlorine attack. The mask was issued on 20 May 1915. A more effective respirator that could last longer was needed; the hypo helmet was created in the hope of replacing this inferior respirator.
Earlier versions of the gas mask, prior to the development of the small box respirator, were crude and ineffective, as no troops had yet experienced poison gas warfare. One of the first gas masks seen in the early part of the war was the British hypo helmet, introduced after the failure and ineffectiveness of the black veil respirator. The helmet was intended to replace the black veil and protect effectively against chlorine attacks. Yet the mask provided unreliable protection, as the eyepieces, made from brittle and damage-prone mica, were extremely fragile, and the protective valve of the hypo helmet was vulnerable and prone to breaking. The helmet, much like the black veil, was dipped in anti-gas chemicals such as sodium hyposulphite, washing soda, glycerine and water; the choice of chemicals was refined over time to develop better and more effective masks. The helmet was made from viyella (a cotton-wool blend) and flannel fabric. It was hot and uncomfortable, as the fitting required users to tuck it inside their uniforms. The helmet was a large improvement on the black veil, but it was difficult for soldiers to use weapons with the helmet on, and because no expiration valve was present, carbon dioxide accumulated in the uniforms of the users. It was issued to troops by 6 June 1915.
The later and more refined gas mask in the form of the Large Box Respirator was developed and issued by April 1916 to specialist troops such as machine gunners, signallers and artillery. This was followed by the Small Box Respirator.
References
Military personal equipment
Gas masks of the United Kingdom
British inventions
1916 introductions | Small box respirator | [
"Chemistry"
] | 1,808 | [
"Gas masks of the United Kingdom",
"Gas masks"
] |
60,747,437 | https://en.wikipedia.org/wiki/Methyl%20phenyldiazoacetate | Methyl phenyldiazoacetate is the organic compound with the formula C6H5C(N2)CO2Me. It is a diazo derivative of methyl phenylacetate. Colloquially referred to as "phenyldiazoacetate", it is typically generated and used in situ, although it can be isolated as a yellow oil.
Methyl phenyldiazoacetate and many related derivatives are precursors to donor-acceptor carbenes, which can be used for cyclopropanation or to insert into C-H bonds of organic substrates. These reactions are catalyzed by dirhodium tetraacetate or related chiral complexes. Methyl phenyldiazoacetate is prepared by treating methyl phenylacetate with p-acetamidobenzenesulfonyl azide in the presence of base.
References
Diazo compounds
Reagents for organic chemistry | Methyl phenyldiazoacetate | [
"Chemistry"
] | 187 | [
"Reagents for organic chemistry"
] |
60,747,466 | https://en.wikipedia.org/wiki/Endangerment%20of%20orangutans | There are three species of orangutan. The Bornean orangutan, the most common, can be found in Kalimantan, Indonesia and Sarawak and Sabah in Malaysia. The Sumatran orangutan and the Tapanuli orangutan are both only found in Sumatra, Indonesia. The conservation status of all three of these species is critically endangered, according to the International Union for Conservation of Nature (IUCN) Red List.
Population decline
Over the past 60 years, the populations of all three species have been steeply declining. The current population of orangutans cannot be accurately calculated; however, it is estimated that the numbers of individuals remaining are: 104,000 Bornean orangutans, 14,000 Sumatran orangutans, and 800 Tapanuli orangutans. The number of Bornean orangutans has decreased by more than 60% in 60 years, and the population of the Sumatran orangutan has decreased by 80% in the last 75 years. It is estimated that between 1999 and 2015 the population of Bornean orangutans decreased by over 100,000.
The primary reason for population decline is habitat loss resulting from unsustainable timber extraction and land clearing for palm oil production in areas that orangutans inhabit, notably Indonesia and Malaysia. Orangutans cannot survive without forests, which are both their home and their food source: they build nests in trees for sleeping and live largely on tree fruits. Additionally, orangutans are killed by poaching, where often mothers are killed and their infants seized and sold on the black market as pets.
There are numerous conservation sites and not-for-profit organisations that have been created in an effort to prevent further decline of the orangutan population; however, in 2016, it was predicted by experts that unless drastic changes are made to the current deforestation laws, orangutans face extinction within the next ten years.
Reasons for endangerment
Deforestation
Deforestation in Sumatra and Borneo is the primary reason for the endangerment of all species of orangutans. Timber is extracted from these areas for the production of palm oil, paper, and pulp. The majority of the logging is illegal, and with the rapid expansion of the palm oil industry, extraction rates have increased exponentially over the past 40 years. Deforestation is extremely harmful to orangutans because the forest is their habitat. As deforestation continues, orangutans are exposed to humans more often, leaving them vulnerable to poaching.
Logging first began in the 1970s for the production of furniture and commercial products. During this time, Indonesian president Suharto introduced a transmigration program in which 18,000 poor transmigrants were sent to Kalimantan, Borneo, many of whom turned to illegal logging to earn money. Additionally, President Suharto gave out large amounts of forest in order to solidify political relationships.
When the production of palm oil was first introduced, the rate of deforestation grew significantly. Producers soon realised that one hectare of oil palm could yield over 5,000 kg of oil, making the plant highly profitable to those who grew and processed it. Today, palm oil makes up nearly 60% of the oils and fats trade and is the most consumed vegetable oil in the world. A large volume of palm oil is used in India and China as cooking oil, and use is increasing in European countries for the production of biodiesel in response to climate change. In 1974/1975, global crude palm oil output was less than 3 million tonnes; owing to a heavy escalation of demand for the product, output grew to 40 million tonnes in 2006/2007, an annual growth rate of about 8%. Indonesia and Malaysia account for 87% of palm oil output, with Indonesia producing 18.3 million tonnes and Malaysia 17.4 million tonnes in 2007/2008. Output from these countries is high because production is highly cost-efficient: production costs and wages are very low compared with other countries, and the climate in these countries is ideal for growing oil palm, keeping growth and production rates high.
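As a rough sanity check, the quoted ~8% figure can be reproduced with a compound-growth calculation (a sketch in Python; the start value of 3 million tonnes, the end value of 40 million tonnes, and the 32-year span are the rounded figures from the text, so the result is only approximate):

```python
# Compound annual growth rate (CAGR) implied by the palm oil output figures:
# about 3 million tonnes in 1974/75 rising to 40 million tonnes in 2006/07.
start_output = 3.0    # million tonnes, 1974/75 (rounded; the text says "less than 3")
end_output = 40.0     # million tonnes, 2006/07
years = 2006 - 1974   # 32-year span between the two seasons

cagr = (end_output / start_output) ** (1 / years) - 1
print(f"Implied annual growth rate: {cagr:.1%}")  # roughly 8.4%, consistent with the ~8% quoted
```

Because the text says output was "less than" 3 million tonnes, the true starting value would push the implied rate slightly above 8.4%; either way it agrees with the ~8% the text quotes.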
By 1985, the annual rate of deforestation in Kalimantan, where the majority of the orangutan population lives, was 180,000 hectares. The rate increased further between the late 1980s and 2000, with the amount of land logged annually increasing by 44% between 1997 and 2000 alone. In the 2000s the rate decreased slightly; however, by 2007 the annual deforestation rate had reached 1.3 million hectares. In 2006, Indonesia overtook Malaysia as the world's largest palm oil exporter, having exported over 20.9 million tonnes of palm oil. Presently, only 50% of the original forest cover remains in Borneo, and it is expected to fall to 24% by 2020 if production rates continue. As orangutans cannot survive outside forest areas, the extremely high rate of deforestation has caused the population to decrease significantly, resulting in the conservation status of critically endangered.
Deforestation also occurs as a result of fires that wipe out large areas of land and, with them, orangutan populations. Fires are set deliberately by palm oil companies in peat swamp forests. Orangutans in these habitats often die in the fires; survivors are either left to starve without a habitat or flee, leaving them at risk of capture by residents, who may kill them for meat, keep them as pets, or sell them on the black market to wealthier countries.
Poaching
The illegal poaching of orangutans is the second largest factor contributing to population decline. According to hunters, orangutans are viewed as easy targets because of their typically large size and lack of speed. Sumatran, Tapanuli and Bornean orangutans are killed at a high rate for many reasons, the most common being the trade of meat or farmers' belief that the animals threaten their crops. A survey conducted by experts in the field reported that orangutans were killed for both conflict and non-conflict related reasons: 56% of people who reported having previously killed an orangutan did so to eat it, and among conflict-related reasons the most common was killing out of fear or in self-defence. The study states that other reasons for poaching include being paid to kill, traditional medicine, killing mothers to take infants for sale on the black market, sport hunting, and accidental killing by hunters who intended to poach other animals. A National Geographic survey revealed that "between 750 and 1,790 Bornean orangutans are killed each year in Kalimantan", which far exceeds the annual birth rate. Poaching is directly related to deforestation: those who grow and maintain palm oil plantations kill orangutans at a high rate when the animals enter their crops, so as deforestation rises, poaching rises with it. Orangutans often raid these crops, however, because they can no longer find enough food in the forest.
Over the past few decades, the rate of orangutan poaching has increased significantly due to more efficient weapons and killing methods, such as poisons, AK-47s and explosives. Poaching is predominantly conducted by plantation workers or villagers who consume and sell orangutan meat, which many believe has medicinal benefits.
Illegal pet trading
Behind the illegal drug trade, the trade in wildlife is the second most profitable illegal trade in the world, with a combined annual value of 10 billion dollars. Orangutans are among the most expensive animals in this trade. Poaching is often linked with illegal pet trading: it is common for poachers to kill adult females and take the infants to sell on the black market. According to a survey, hunters are paid approximately US$80 to $200 for an infant orangutan. The infants are then often sent to Jakarta, Indonesia, to be sold to wealthy Indonesians or Chinese buyers who keep them as pets. Additionally, some infants are sent by ship to Thailand, where they are sold on the black market for up to $55,000.
The illegal trade of orangutans as pets contributes to the severe population decline, as mothers are often killed solely so that the infant can be sold. Additionally, the orphans frequently do not survive the conditions they are kept in as pets, especially during transportation to other countries. It has been estimated that for every infant sold, between one and six adult orangutans are killed.
History of endangerment
Decline of population
Due to expanding global demand for timber in the 1980s, the rate of deforestation increased; according to a satellite study, 56%, or 2.9 million hectares, of tropical rainforest in Kalimantan, Borneo was extracted between 1985 and 2001, with a rapid increase in deforestation rates in the late 1990s. The rate of deforestation during this time directly correlates with the decrease in orangutan population, as the species cannot survive in other areas. It is estimated that since 1950 the orangutan population has declined by 60%. Between 1999 and 2015, the population of Bornean orangutans decreased by 100,000 individuals.
Although the current population of orangutans is not precisely known, it is estimated that currently there are about 104,000 Bornean orangutans, 14,000 Sumatran orangutans, and 800 Tapanuli orangutans remaining in the wild, and 1,000 are being held in conservation sites.
Future predictions
It is predicted that the current rate of forest loss, poaching and illegal pet trading in Borneo will continue; it is therefore presumed that over the next 35 years the population of orangutans will decline by an additional 45,000 individuals. By 2025, it is estimated that there will be 47,000 Bornean orangutans left in the wild.
Conservation
Due to the dramatic decrease of the orangutan population, a number of conservation sites and not-for-profit organisations have been developed in an effort to prevent the extinction of orangutans. Two main strategies have been put in place: rehabilitation of abandoned individuals or those previously held illegally, and the protection of forest areas to prevent deforestation in orangutan habitats. A Geographic Information System (GIS) analysis found that neither strategy was highly effective; however, the cost of preventing deforestation is one twelfth of the cost of reintroducing individuals. It was concluded that, for long-term protection, preventing logging is more efficient than attempting to maintain current populations.
There are other methods that have been put in place to conserve the current orangutan population, including research and monitoring, land and water protection, species management, education to create awareness, international legislation, and international management and trade controls. Additionally, some organisations working to conserve the orangutan population have made efforts to work alongside palm oil companies and local governments to prevent further habitat loss. For example, in 2011 a tri-party agreement was signed by one of the world's largest palm oil producers, Wilmar International, the Central Kalimantan government, and the Borneo Orangutan Survival Foundation (BOSF). The agreement was formed with the aim of providing long-term protection for Bornean orangutans, including monitoring palm oil plantation methods, establishing areas where orangutans can be protected, relocating abandoned individuals, and training plantation workers to manage orangutans and avoid conflict. The World Wildlife Fund (WWF) is collaborating with TRAFFIC in attempts to stop orangutan trafficking and trading by enforcing strict rules and regulations through governments, as well as rescuing trafficked orangutans and releasing them back into the wild once they have been rehabilitated in refuges.
Scientists have estimated that the only way to reduce the high rate of population decline is to cease deforestation in orangutan habitats and to put extensive protection measures in place for current populations. However, due to the high demand for palm oil and a lack of government funding, it is considered extremely unlikely that the rapid decline and eventual extinction of orangutans can be prevented.
The tropical rainforests of Sumatra, home to the Sumatran orangutan and Tapanuli orangutan, have been a UNESCO World Heritage Site since 2004.
Conservation actions required to prevent extinction
According to the IUCN Redlist, there are many conservation actions in place that have been somewhat successful; however, there are numerous actions that are required in order to prevent the further endangerment and eventual extinction of orangutans. These include more area protection, species recovery, habitat and natural process restoration, resource protection and legislation. Additionally, IUCN suggests that more research is required, surrounding areas such as taxonomy, population size, distribution and trends, threats to orangutans, and area-based management plans.
References
External links
IUCN Red List: Bornean Orangutan (Pongo pygmaeus)
IUCN Red List: Sumatran Orangutan (Pongo abelii)
IUCN Red List: Tapanuli Orangutan (Pongo tapanuliensis)
Orangutan conservation
Endangered species
Endangered fauna of Asia
Wildlife conservation | Endangerment of orangutans | [
"Biology"
] | 2,858 | [
"Wildlife conservation",
"Biota by conservation status",
"Biodiversity",
"Endangered species"
] |
60,748,702 | https://en.wikipedia.org/wiki/Vectorette%20PCR | Vectorette PCR is a variation of polymerase chain reaction (PCR) designed in 1988. The original PCR was created, and also patented, during the 1980s. Vectorette PCR was first described in a 1990 article by John H. Riley and his team. Since then, multiple variants of PCR have been created. Vectorette PCR focuses on amplifying a specific sequence starting from a known internal sequence and extending to the unknown end of the fragment. Many researchers have taken this method as an opportunity to conduct experiments in order to uncover its potential uses.
Introduction
Vectorette PCR is similar to standard PCR, with the difference that it can amplify a desired sequence from a single known primer site. While PCR needs sequence information at both ends of the target, Vectorette PCR requires prior knowledge of only one end; it can therefore be applied to DNA fragments whose sequence is known at only one end and not the other. To achieve this, the method goes through a specific series of steps, which have been studied in order to discover the scientific uses of Vectorette PCR and how they can be applied.
Steps
Vectorette PCR provides a strategy for unidirectional PCR amplification and comprises three main steps. The first step is digestion of the sample DNA with a restriction enzyme; the DNA under investigation must be digestible by an enzyme appropriate for that gene, otherwise the population of DNA fragments cannot be created. Next, a vectorette library is assembled by ligating vectorette units to the digested DNA fragments. Ligation is the act of binding two molecules together. A vectorette unit is only partially double-stranded, with a mismatched section located at its centre. Because of this mismatch, the vectorette primer cannot anneal to the original unit; it can only prime on the complementary strand generated after first-strand synthesis from the initiating primer, which avoids nonspecific priming. The ligation joins the double-stranded ends of the vectorette to the ends of the restriction fragments made in the first step. In this way a known sequence, used to prime the PCR reaction from one side, is introduced, while the other side is primed on the genomic sequence already known to the user. The third and last step has two parts, because the two primers, the initiating primer (IP) and the vectorette primer (VP), act at different stages. During the first cycles, only the IP primes extension, producing the strand to which the VP can hybridize; thus no background amplification is carried out at this stage. This changes in the following cycles of PCR, when priming is performed by both the IP and the VP.
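The specificity mechanism described above can be illustrated with a toy string model (a sketch only; the sequences below are invented for demonstration and are not real vectorette sequences). Because the vectorette primer matches the bottom strand of the mismatched region, neither original strand contains its complement, so it can only anneal to the strand synthesised by extension from the initiating primer:

```python
# Toy model of vectorette primer (VP) specificity. A primer can anneal where a
# template contains its reverse complement; the mismatched central region
# ensures no such site exists until the initiating primer (IP) makes a strand.

def revcomp(seq):
    """Return the reverse complement of a DNA string."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def can_anneal(primer, template):
    """True if the template contains the primer's reverse complement."""
    return revcomp(primer) in template

# Invented mismatched central blocks: the top and bottom strands deliberately
# do NOT complement each other in this region.
top_mismatch = "TTTTGGTT"
vp = "AGAGAGAG"  # the VP is identical to the bottom strand's central block

# One strand of the ligated molecule: unknown genomic DNA joined to the
# bottom vectorette strand (which contains the VP sequence itself).
template = "GGGCCCAAATTT" + vp

assert not can_anneal(vp, top_mismatch)  # no VP site on the top strand
assert not can_anneal(vp, template)      # none on the original bottom strand either
first_strand = revcomp(template)         # strand produced by extension from the IP
assert can_anneal(vp, first_strand)      # the VP site appears only after synthesis
print("VP can prime only on the IP-generated strand")
```

The model reduces annealing to exact string matching, which is enough to show why amplification is unidirectional until a first strand has been made.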
Research
A great deal of research has been conducted on Vectorette PCR and its applications in biology. Scientists used Vectorette PCR to isolate transgene-flanking DNA, applying the technique to mouse DNA adjacent to transgene insertions. From this the scientists were able to show that the use of vectorettes can facilitate the recovery and mapping of sequences in complex genomes. They have also found that Vectorette PCR can help in sequence analysis by subvectoretting when large PCR products are the subject at hand.
Other work has looked at developing a Vectorette PCR method for genomic walking. Using Vectorette PCR, scientists were able to acquire single-stranded DNA from PCR products in order to sequence it. From this an approach was identified for amplifying previously uncharacterised sequences, demonstrating how novel sequences can be rapidly obtained starting from only a known DNA sequence.
Further research has experimented with a method to speed up the isolation of microsatellite repeats. Using Vectorette PCR, researchers found a rapid technique for isolating novel microsatellite repeats, and succeeded in using the technique to isolate six of them.
Vectorette PCR has also been used not only to identify the genomic positions of insertion sequences (IS) but also to map them. Research on this has shed light on a way to type microbial strains and to identify and map features such as IS insertion sites in microbial genomes. Vectorette PCR proves useful for rapidly and simply surveying the IS elements of a genome.
A transposable element (transposon, or TE) is a genetic element capable of changing its location in a genome by a process called "jumping". TE display is designed to reveal variation in TE insertion sites, which helps generate numerous dominant markers. A problem with the original method was finding a PCR approach specific and efficient enough in amplifying the transposon within the genome. Researchers solved this problem by using Vectorette PCR as the PCR step. Since Vectorette PCR is capable of specific isolation and amplification of genes, it aided their research and improved the method of TE display, saving both time and costs. The researchers were then able to produce numerous dominant markers with Vectorette PCR based on a nonradioactive TE display.
Thyroid lymphoma is an illness in which the lymphocytes of the thyroid transform into cancerous cells. Researchers have tested a new method to aid the diagnosis of this condition: Vectorette PCR was combined with restriction enzyme digestion, and Vectorette PCR proved useful in their study, aiding the diagnosis of thyroid lymphoma.
Researchers have also looked into the potential use of Vectorette PCR in examining disease genes. They combined two methods, trinucleotide repeat targeting (used specifically for transcribed regions) and Vectorette PCR, to obtain simple sequence repeats (SSRs), from which it is believed genetic markers can be made. The outcome of this research is hoped to help researchers derive genetic markers that are transferable from unknown genomes. Vectorette PCR was used to uncover SSRs flanking the trinucleotide repeat targeted for testing; this is also known as trinucleotide repeat (TNR) Vectorette PCR. The researchers believe that their TNR method, combined with the amplification provided by Vectorette PCR, can be used in eukaryotes to create molecular markers based on simple repeat sequences, and that the method will be of value when attempting to isolate disease-causing genes.
Uses
The uses derived from Vectorette PCR are many and have been valuable to biology. For example, it gives rise to methods that can help during disease outbreaks by making it easier to subtype closely related pathogens, and it can be used to help diagnose certain diseases. As noted above, Vectorette PCR enables multiple operations on novel DNA sequences located near a known sequence. These operations of isolating DNA, amplifying it, and analysing it underlie the uses of Vectorette PCR, which include genome walking, DNA sequencing of the termini of yeast artificial chromosome (YAC) and cosmid inserts, mapping introns, promoters and mutated regions in genomic DNA, facilitating the sequencing of large clones, and filling in gaps that arise during genome mapping.
An intron is a DNA sequence flanked by exons and therefore located between them. It is the region that is cut out while exons are expressed, so introns do not affect the amino acid code. Gene expression can be affected by only some intronic sequences. Vectorette PCR has been found to be beneficial for characterising these intronic sequences when they lie next to known sequences.
cDNA, or complementary DNA, is a DNA sequence complementary to the RNA that serves as the template when DNA is synthesised during reverse transcription. Vectorette PCR using primers derived from cDNA provides a method for acquiring intron sequences located adjacent to exons and for working out the structure of genes. It can achieve this starting from a cDNA sequence and a genomic clone.
Vectorette PCR also gives the user advantages over other existing technologies. The user is able to carry out tasks such as cell-free gene manipulation, Vectorette PCR with minimal starting material, and Vectorette PCR with DNA that need not be of high purity. These advantages allow the user to save time and resources while increasing the range of DNA that can be targeted.
Chromosome Walking
Chromosome walking can be used to clone a gene. It does this by using the closest known markers of the gene, and it supports techniques such as isolating DNA sequences and aiding the sequencing and cloning of organismal DNA. Chromosome walking is also useful for filling gaps in genomes by locating clones that overlap a library clone end. This means that chromosome walking requires a genomic clone library, and Vectorette PCR is one of the methods that can be used to create this library. Vectorette PCR comes in handy when it is necessary to obtain the upstream and downstream regions flanking a known sequence; obtaining these regions provides the genomic library that chromosome walking requires.
Yeast Artificial Chromosome
A yeast artificial chromosome, or YAC, is a human-engineered DNA molecule used to clone DNA sequences in yeast cells. Fragments of DNA from an organism of interest can be inserted into a yeast artificial chromosome, which yeast cells then take up. As the yeast cells multiply, the incorporated DNA is amplified and can then be isolated for purposes such as sequencing and mapping of the DNA originally inserted into the yeast artificial chromosome. Vectorette PCR helps with this process by enabling both the isolation and the amplification of the yeast artificial chromosome's ends.
References
Polymerase chain reaction
Laboratory techniques
Molecular biology
DNA profiling techniques | Vectorette PCR | [
"Chemistry",
"Biology"
] | 2,332 | [
"Biochemistry methods",
"Genetics techniques",
"DNA profiling techniques",
"Polymerase chain reaction",
"nan",
"Molecular biology",
"Biochemistry"
] |
60,748,891 | https://en.wikipedia.org/wiki/Philosophy%20of%20ecology | Philosophy of ecology is a concept under the philosophy of science, which is a subfield of philosophy. Its main concerns centre on the practice and application of ecology, its moral issues, and the relationship between the position of humans and that of other entities. The topic also overlaps with metaphysics, ontology, and epistemology, for example, as it attempts to answer metaphysical, epistemic and moral issues surrounding environmental ethics and public policy.
The aim of the philosophy of ecology is to clarify and critique the 'first principles', the fundamental assumptions present in science and the natural sciences. Although there is not yet a consensus on what the philosophy of ecology comprises, and the definition of ecology itself is debated, there are some central issues that philosophers of ecology consider when examining the role and purpose of what ecologists practice. For example, the field considers the 'nature of nature', the methodological and conceptual issues surrounding ecological research, and the problems associated with these studies within their contextual environment.
Philosophy addresses the questions that make up ecological studies, and presents a different perspective into the history of ecology, environmental ethics in ecological science, and the application of mathematical models.
Background
History
Ecology is considered as a relatively new scientific discipline, having been acknowledged as a formal scientific field in the late nineteenth and early twentieth century. Although an established definition of ecology has yet to be presented, there are some commonalities in the questions proposed by ecologists.
Ecology was considered “the science of the economy [and] habits,” according to Stauffer, and was concerned with understanding the external interrelations between organisms. It was recognised formally as a field of science in 1866 by the German zoologist Ernst Haeckel (1834-1919). Haeckel coined the term 'ecology' in his book Generelle Morphologie der Organismen (1866), in an attempt to present a synthesis of morphology, taxonomy, and the evolution of animals.
Haeckel aimed to refine the notion of ecology and propose a new area of study to investigate population growth and stability, influenced by Charles Darwin and his work in Origin of Species (1859). He first presented ecology as a branch of biology and an aspect of the 'physiology of relationships'. In Stauffer's English translation, Haeckel defined ecology as “the whole science of the relationship of organism to environment including, in the broad sense, all the ‘conditions for existence.'” The neologism was used to distinguish studies conducted in the field from those conducted in the laboratory. He expanded upon this definition of ecology after considering the Darwinian theory of evolution and natural selection.
Defining ecology
There is not yet an established consensus amongst philosophers about the exact definition of ecology; however, there are commonalities in the research agendas that help differentiate this discipline from other natural sciences.
Ecology underlies an ecological worldview, wherein interaction and connectedness are emphasized and developed through several themes:
The idea that living and non-living beings are related and interconnected components in the biospherical web.
Living entities possess an identity that expresses their relatedness.
It is essential to understand the system of the biosphere and the components as a whole, rather than as their parts (also known as holism).
Occurrence of naturalism, whereby all living organisms are governed by the same natural laws.
Non-anthropocentrism, the rejection of anthropocentrism, that is, of the belief that humans are the central entity and that the non-human world has value only insofar as it serves human interests. On this view, the non-human world retains value of its own.
Anthropogenic degradation of the environment dictates a necessity for environmental ethics.
There are three main disciplinary categories of ecology: Romantic ecology, political ecology, and scientific ecology. Romantic ecology, also called aesthetic or literary ecology, was a counter-movement to the increasingly anthropocentric and mechanistic ideology of nineteenth-century Europe and America, especially during the Industrial Revolution. Notable figures of this period include William Wordsworth (1770-1850), John Muir (1838-1914), and Ralph Waldo Emerson (1803-1882). The influence of romantic ecology also extends into politics, where its interrelation with ethics underlies political ecology.
Political ecology, also known as axiological or values-based ecology, considers the socio-political implications surrounding the ecological landscape. Fundamental questions political ecologists ask generally focus on the ethics between nature and society. The American environmentalist Aldo Leopold (1887-1948) affirmed that ethics should be extended to encompass the land and biotic communities as well, rather than pertaining exclusively to individuals. In this sense, political ecology can be regarded as a form of environmental ethics.
Finally, scientific ecology, or commonly known as ecology, addresses central concerns, such as understanding the role of the ecologists and what they study, and the types of methodology and conceptual issues that surround the development of these studies and what type of problem this may present.
Contemporary ecology
Defining contemporary ecology requires looking at certain fundamental principles, namely those of system and evolution. System entails understanding processes whose interconnected parts establish a holistic identity that is neither separable from nor predictable from its components. Evolution results from the 'generation of variety' as a means of producing change. Entities that interact with their environments drive evolution through differential survival, and it is the production of such changes that shapes ecological systems. This evolutionary process is central to ecology and biology.
There are three main concerns that ecologists generally concur with: naturalism, scientific realism, and the comprehensive scope of ecology.
The philosopher Frederick Ferré defines two primary meanings of nature in Being and Value: Toward a Constructive Postmodern Metaphysics (1996). The first definition excludes 'artifacts of human manipulation'; nature, in this sense, comprises only what is not of artificial origin. The second definition takes nature to be whatever is not supernatural, which does include artefacts of human manipulation. Confusion of meaning arises because different ecologists use the two connotations interchangeably in different contexts.
Naturalism
There is not yet a settled definition of naturalism within the philosophy of ecology; its current usage connotes a system in which reality is subsumed by nature, independent of any 'supernatural' world or existence. Naturalism asserts that scientific methodology is sufficient to obtain knowledge about reality. Naturalists who support this perspective view mental, biological, and social operations as physical entities. A pebble and a human being, for example, both exist within the same space and time. Scientific methods remain relevant and sufficient here because they explain the spatiotemporal processes that physical entities, as spatiotemporal beings, undergo.
Methodology
Holism vs reductionism
The holism-reductionism debate encompasses ontological, methodological and epistemic concerns. Common questions involve examining whether the means to understanding an object is through critical analysis of its constituents (reductionism) or 'contextualisation' of its components (holism) to retain phenomenological value. Holists maintain that certain unique properties are attributed to the abiotic or biotic entity, such as an ecosystem, and that these characteristics are not intrinsically applicable to its separate components. Analysis of just the parts is insufficient for obtaining knowledge of the entire unit. At the other end of the spectrum, reductionists argue that these parts are independent of each other, and that knowledge of the components provides understanding of the composite entity. This approach, however, has been criticised, as the entity denotes not just the unity of its aggregates but a synthesis between the whole and its parts.
Rationalism vs empiricism
Rationalism within scientific ecology holds that such methodologies remain necessary and relevant in establishing ecological theory as a guide. Rationalist methodology became pronounced in the 1920s with the logistic models of Alfred Lotka (1956) and Vito Volterra (1926), known as the Lotka-Volterra equations. Empiricism, by contrast, establishes the need for observational and empirical testing. An obvious consequence of this split is the use of a pluralistic methodology, although no unifying model adequate for application in ecology has yet been found, nor has a pluralistic theory been established.
Environmental ethics
Environmental ethics emerged in the 1970s in response to traditional anthropocentrism. It studies the moral implications of interactions between society and the environment, prompted by concerns over environmental degradation, and challenges the ethical position of humans. A common belief among environmental philosophers is that biological entities are morally valuable independently of human standards. Within this field there is the shared assumption that environmental issues are predominantly anthropogenic and stem from an anthropocentric outlook. The basis for rejecting anthropocentrism is to refute the belief that non-human entities are not worthy of value.
A main concern in environmental ethics is anthropogenically induced mass extinction within the biosphere, and interpreting extinction non-anthropocentrically is vital to the foundations of the field. Paleontology, for example, describes mass extinctions as pivotal precursors to major radiations; reading the death of the dinosaurs as a boon to later biodiversity shows how such interpretations can rest on anthropocentric values, which non-anthropocentric views resist. As ecology is closely entwined with ethics, understanding environmental approaches requires understanding the world, which is the role of ecology and environmental ethics. A central task is to include natural entities, whether conscious, sentient, living or merely existing, within the scope of ethical concern.
Mathematical models
Mathematical models play a role in examining the issues presented in ecology and conservation biology. Two main types of model are used to explore the relationship between mathematics and practice within ecology. The first are descriptive models, which detail, for example, single-species population growth, and multi-species models such as the Lotka-Volterra predator-prey model or the Nicholson-Bailey host-parasitoid model. These models explain behavioural activity through idealisation of the intended target. The second type are normative models, which prescribe how certain variables should behave rather than merely describing their current state.
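The Lotka-Volterra predator-prey equations mentioned above can be illustrated with a minimal numerical sketch. The parameter values and the forward-Euler integration below are illustrative assumptions, not drawn from any study cited in this article:

```python
# Minimal sketch of the Lotka-Volterra predator-prey model.
# dx/dt = a*x - b*x*y (prey), dy/dt = d*x*y - g*y (predator).
# All parameter values are illustrative assumptions.

def lotka_volterra(prey, pred, alpha=1.0, beta=0.1, delta=0.075, gamma=1.5,
                   dt=0.001, steps=20000):
    """Integrate the coupled equations with a simple forward-Euler scheme."""
    history = []
    for _ in range(steps):
        dprey = (alpha * prey - beta * prey * pred) * dt
        dpred = (delta * prey * pred - gamma * pred) * dt
        prey, pred = prey + dprey, pred + dpred
        history.append((prey, pred))
    return history

history = lotka_volterra(prey=10.0, pred=5.0)
prey_series = [p for p, _ in history]
# Both populations cycle around the equilibrium (gamma/delta, alpha/beta)
# rather than settling to a fixed value.
print(min(prey_series), max(prey_series))
```

With these parameters the equilibrium prey level is gamma/delta = 20, and the simulated prey population oscillates above and below it, the qualitative behaviour that Weiner's criticism (discussed below) targets.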
In ecology, complicated biological interactions require explanation, and models are used to investigate hypotheses about them. For example, identifying and explaining particular organisms and their population abundances is essential for understanding the role of ecology and biodiversity. Applying equations yields predictions, or models that suggest answers to such questions. Mathematical models in particular also provide supporting contextual information about factors on a wider, more global scale.
Because normative and descriptive models answer to different standards, they have different applications. These models aid in illustrating the outcomes of decisions, and also in tackling group decisions. For example, a mathematical model can holistically incorporate the environmental decisions of people within a group: it represents the values of each member, together with their respective weightings, in a matrix, and then delivers a final result. Where there is conflict over proceedings or over how to represent certain quantities, the model may be of limited use. The idealisations built into the model must also be made explicit.
Criticisms
The process of mathematical modelling presents a distinction between reality and theory, or more specifically between the application of models and the genuine phenomena those models aim to represent. Critics of the use of mathematical models within ecology question their relevance, pointing to an imbalance between investigative procedure and theoretical proposition. According to Weiner (1995), deterministic models have been ineffectual within ecology. The Lotka-Volterra models, Weiner argues, have not yielded testable predictions, and in cases where theoretical models within ecology have produced testable predictions, those predictions have been refuted.
The Lotka-Volterra models track predator-prey interaction and the resulting population cycles. The usual pattern is that the predator population follows the prey population's fluctuations: as the prey population increases, so does the predator population, and as the prey population decreases, the predator population decreases. Weiner argues, however, that in reality prey populations maintain their oscillating cycles even when the predator is removed, so the model misrepresents the natural phenomenon. Critics also argue that idealisation is inherent in modelling and that its application is therefore methodologically deficient. They maintain that mathematical modelling within ecology oversimplifies reality and misrepresents, or insufficiently represents, the biological system.
The choice between simple and complex models is also debated. There is concern that the complexities of a system cannot be replicated or adequately captured even by a complicated model.
See also
Chemical ecology
Circles of Sustainability
Cultural ecology
Dialectical naturalism
Ecological death
Ecological psychology
Ecology movement
Ecosophy
Ecopsychology
Industrial ecology
Information ecology
Landscape ecology
Natural resource
Normative science
Political ecology
Sensory ecology
Spiritual ecology
Sustainable development
References
ecology
Ecology | Philosophy of ecology | [
"Biology"
] | 2,738 | [
"Ecology"
] |
60,749,151 | https://en.wikipedia.org/wiki/Crocodile%20skin | Crocodile skin either refers to the skin of a live crocodile or a leather made from dead crocodile hide. It has multiple applications across the fashion industry such as use for bags, shoes, and upholstery after being farmed and treated in specialist farms and tanneries.
Crocodile leather
Crocodile leather is the processed hide of one of 23 crocodile species in the world. Crocodile leather is an exotic leather which as a group, makes up less than 1% of the world's leather production. It is rare compared to other hides such as sheep or cow and requires high levels of craftsmanship to prepare it for use in the consumer industry. Crocodile leather is considered a luxury item utilized by high fashion brands such as Hermes, Moet Hennessy Louis Vuitton (LVMH) and Gucci. As a material, crocodile leather is rare and expensive because of limited numbers of crocodiles, their relatively small size and the scarcity of dependable farms and tanning facilities to process and prepare the product for market.
Applications and Uses
Crocodile skin is primarily used in the production of handbags and other luxury items such as shoes, belts, wallets, upholstery, and furniture. For these products, Freshwater, Saltwater, Nile and Caiman are used because of the superior quality of skin which when tanned has an aesthetic finish. Not all these skins are valued the same. As one of the largest crocodile species, the Australian Saltwater Crocodile has a reputation for having the most desirable and high-quality hide. This makes it more popular than the smaller Caiman skins which, as a more common species, is a cheaper option. The value of a skin is dependent on what it will be used for. Freshwater Crocodile, particularly from New Guinea, is known for its flexibility which allows processors to skive it down to a thinness suitable for clothing whereas Nile crocodile, mostly available across Africa, is durable, making it desirable for heavy-duty items such as footwear and belts.
Farming
Crocodiles are either farmed or wild-caught. In Northern and Western Australia crocodile farms carry out ranching which includes captive breeding and harvesting of eggs from the wild. Eggs are collected and landowners sell the eggs to local farms to breed. In 2018 this method also became legal in Queensland.
On a crocodile farm, crocodiles are grown and prepared for slaughter before their skin is removed, treated, and sent to be tanned in specialist tanneries and used in the manufacture of commercial goods.
Ranching - this is the collection of wild eggs. Collection usually occurs in February and March.
Hatching - the eggs are incubated and protected to ensure the highest yield.
Growth - the crocodiles are grown to certain sizes depending on what the skin will be used for. For example, most bags require a 40 cm belly skin, which requires a crocodile of generally 1.5 years old or 1.2 m long. The requirements vary depending on what is fashionable at the time: if there is a trend for small handbags, a farm will shorten the growth stage and instigate slaughter earlier, as smaller skins are required by the fashion industry. It is a case of supply and demand; if crocodile skin suits are "in fashion", crocodile farmers will need to provide the fashion industry with larger skins suitable for such production.
Stunning - Once the crocodiles have reached the desired size, the crocodile is stunned with a rod and its eyes are covered to calm it. They are then sent to abattoirs where skins and meat are removed for sale.
Slaughter - Humane slaughter is carried out by the severing of the spinal cord.
Disinfection - According to food-safety guidelines the skin is disinfected.
Chilling - Before skinning, the carcass is left in a cold room to bleed. This often takes place overnight.
Skinning - Skin is carefully removed.
Meat Processing - meat is removed and packaged according to food safety requirements.
Skin Processing - the processing of the removed skin involves short and long term preservation, grading and measurement and storage until dispatch.
The main farm income comes from crocodile skin for the fashion industry, so it is important that the skin is of good quality to achieve the highest possible revenue. Preservation is essential, as skin quality degrades substantially in the warm conditions where the farms tend to be situated. To add value to skins, some farms include fleshing at the stage of short-term preservation. Fleshing is the trimming, scraping and removal of remaining muscle tissue using sharp equipment and high-power water jets. It is often considered risky for farms to complete the fleshing process themselves, as the skin may be damaged, a costly mistake; fleshing a single skin costs $12 in labor, not including operating or capital costs. Fleshing is therefore usually carried out by tanners.
Value, quality and measurements
The skin is the most valuable part of a crocodile, followed by the meat and other body parts such as teeth. Value is decided in two ways: size and grade. Greater width increases the value of the skin and is measured across the third raised scute. The grade is measured on a scale of damage to the skin and value is deducted by 25% at each level. Therefore, skin value can drop significantly if the quality is not maintained by careful handling.
The value of first-grade skin is $9 (US) per cm, so a 40 cm skin costs $360. Value decreases with every imperfection, which is why crocodile farmers take precautionary measures such as covering the corners of enclosures with plastic to keep their crocodiles in good condition. Crocodiles are kept in smaller groups to prevent fights and the spread of infections, which are known to lead to scarring and skin damage that reduces the value of the leather.
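The grading arithmetic described above can be sketched in a few lines. Treating the 25% deduction as compounding with each grade level is an assumption (the text does not say whether the deduction compounds or is taken from the first-grade price each time); the $9-per-centimetre figure is the one quoted in the text:

```python
# Sketch of crocodile-skin valuation: first-grade skin is priced per
# centimetre of belly width, and each grade below first deducts 25%.
# Compounding the deduction per level is an assumption, not from the source.

PRICE_PER_CM_USD = 9.0  # first-grade price quoted in the text

def skin_value(width_cm, grade):
    """Value in USD of a skin of a given belly width and grade (1 = best)."""
    if grade < 1:
        raise ValueError("grade starts at 1 (first grade)")
    return width_cm * PRICE_PER_CM_USD * (0.75 ** (grade - 1))

print(skin_value(40, 1))  # the 40 cm first-grade example from the text: 360.0
print(skin_value(40, 2))  # one grade lower: 270.0
```

The 40 cm first-grade case reproduces the $360 figure given above; each grade drop then removes a quarter of the remaining value.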
The value of a skin is dependent on how much it is desired by fashion houses such as Louis Vuitton, Yves Saint Laurent and Hermes. Premium skins are usually transported to countries such as France, Italy and the United States of America where the most reputable tanneries treat the skins according to the designer's wishes and make them ready for manufacture into commercial goods such as bags, shoes and accessories. In Australia, (both a producer and manufacturer of crocodile hide) businesses like Di Croco offer custom products to customers and also use lesser skins and by-products to minimize waste.
Quality can be improved up to the point of slaughter and from here only maintained or reduced. Skin must be preserved carefully as after slaughter there is a loss of immune response and it becomes susceptible to microbial contamination such as scale slip, staining and discoloration and biological damage, e.g. bacterial or fungal infection. In short term preservation, a 60% brine solution is used for up to five days. In long term preservation, a commercial biocide is required which allows the skin to be kept for up to four months. The skins are kept in sealed individual bags, though not vacuum packed, to minimize exposure, prevent creasing and simplify handling. Farmers and tanners use specific methods of folding or rolling skins to prevent creases forming across the scales.
There are 2 main cuts of crocodile skin:
Back Cut - Scaly cut with a rough texture and mainly used in trimmings.
Belly Cut - Highly popular cut due to smooth texture and close, small scale structure which makes it pliable and suitable for many items such as handbags and clothing.
The largest width of the belly is measured to gauge the value of the hide. When designers are purchasing crocodile leather, they must take into consideration the measurements are for the overall size of the hide and not a pattern width. As a result, it can take several skins to produce a single item.
Treatment after Farming and Production
Development programs were set up to support the growth of crocodile populations during harvest in the 1960s and 1970s in the Americas and Rhodesia. Papua New Guinea put similar management programs in place which made the trading of crocodile skin economically and commercially viable as it prevented over hunting and depletion in numbers. Maintenance of these farms relies on skin-producing countries to export their products elsewhere for tanning and manufacture.
Often, it is impossible to tell whether a skin has been preserved adequately until after tanning, as there may be no visible signs of biological damage. A damaged skin results in a dull, discolored or scuffed finish, which ultimately devalues the leather.
The Australian Saltwater Crocodile is one of the most sought-after skins because its flexibility makes it good for handbag production. Bonier hides of Caiman crocodiles are more difficult to dye and work with, making them a less popular option. For items such as bags, suits or trousers, large panels of skin are required. With large areas of leather on display, damage is obvious, which is why precautions are taken to ensure high-grade skins come out of the crocodile farms. Small bags require a hide of 30 to 34 cm; larger bags need skins of 40 to 50 cm. Manufacturers should use the maximum amount of hide to avoid waste. Scraps are used for straps, gussets and interior details. It is necessary for the designer to mark the skin with preparatory sewing lines using a rotary tool to thin the line where the stitches will run. This reduces the risk of the needle hitting calcium deposits, which may break the needle.
It takes an average of two artisan days to make a crocodile skin handbag. Timing depends on the glazing technique used on the hide, as certain glazes affect the pliability of the leather, making it stiffer and prone to cracking. A longer, more complicated process is required when the leather has been treated in this way, as it cannot be turned inside out in the traditional manner.
Legalities
The crocodile skin trade is legally complicated because it is important that the leather is sourced reliably from farms where crocodiles are treated in humane conditions. Unregulated commercial hunting has resulted in a decline of many crocodile populations so governments have put protection over many reptiles. CITES is an international agreement between 164 countries to protect endangered species from extinction. Established in 1973, it stands for "Convention on International Trade in Endangered Species of wild fauna and flora". Legally imported crocodile skin must come from reputable farms with CITES certification to prove legal possession. Any uncertificated skins are confiscated by customs and sale of an inherited (pre-CITES) or illegally imported skin is a criminal offense.
The laws on crocodile trade are different around the world. In America, it is legal to import sustainably sourced crocodile leather as long as it complies with the restrictions imposed by CITES. Crocodile leather trade for Freshwater Siamese Crocodile with Thailand, Vietnam or Cambodia is forbidden even if the skin is accompanied by a CITES certificate. In certain cases, illegal trade occurs when buyers are unaware of restrictions so companies or businesses purchasing crocodile hide must be sure of the origin of the skin they are purchasing.
Trade
Crocodile leather trade was established in the Caribbean, Mexico and Central America when it became a popular material in the 1800s. Since then, demand for skins has increased to the extent that hunting and production spread to Africa, Asia and Australia, where the majority of crocodile skins are sustainably sourced today.
In the Northern Territory, crocodile farms generate $107 million per year. This is a crucial form of income for a community lacking viable industry. Crocodile farming was valued as providing 264 jobs (2017), as well as encouraging harmony within communities, with Indigenous and local people carrying out egg hunting and crocodile rearing.
Crocodile farming is not limited to the production of skins for the fashion industry. Tourism and on-farm breeding help maintain the state of farms and educate the public about the role of crocodile farming in certain communities. In the Northern Territory tourists can visit Crocodylus Park and Crocosaurus Cove to learn about the crocodiles and the trade.
Conservation
Within certain societies, the crocodile trade is extremely important. From 1945 to 1971, Northern Australians generated significant income at the expense of the crocodile, as uncontrolled trade severely affected the populations of both saltwater and freshwater crocodiles. Full protection of the Australian Saltwater Crocodile was established in 1971 to allow the species to recover. When crocodile numbers increased, co-habitation with local people became a problem, and fatal and non-fatal attacks on people and fishing boats were reported in 1979-1980. In response, the Northern Territory established an 'incentive-driven conservation strategy' which encouraged people to protect crocodiles through commercial activity such as farming, tourism, and ranching. Saltwater Crocodiles are seen as a commercial resource by communities who generate wealth and employment through the crocodile industry. This also promotes crocodile conservation, which would otherwise be difficult because of the animals' predatory nature.
Brands who use crocodile skins are encouraged to support conservation efforts. Australian brand Croc Stock and Barra use unwanted sections of skin to handcraft luxury items and ensure waste is limited. Other brands such as Roje Exotics American Leathers claim to use leather that is the byproduct of the international exotic cuisine industry which also ensures fewer skins are wasted within the system.
Animal Welfare
The Management Program within the Northern Territory maintains that the crocodiles are farmed in a humane way. It makes assessments on farming limits and population dynamics to ensure the numbers of Saltwater Crocodile are maintained and never reach the lows of 1972 again.
See also
Crocodile industry
Kangaroo industry
Alligator leather
Bibliography
Leathermaking
Leather
Industry in Australia
Skin
Fashion
Nature conservation
Crocodylidae
Leather goods
Leather industry
Bags (fashion)
Crocodiles of Australia
Animal products | Crocodile skin | [
"Chemistry"
] | 2,726 | [
"Animal products",
"Natural products"
] |
60,752,073 | https://en.wikipedia.org/wiki/Frederick%20Lossing | Frederick Pettit Lossing (1915-1998) was a Canadian chemist at the National Research Council in Ottawa. He was a prolific scientist, known mainly for his contributions to mass spectrometry; the Fred P. Lossing Award, presented by the Canadian Society for Mass Spectrometry, is named after him.
Lossing was born in Norwich and studied at the University of Western Ontario and obtained a PhD from McGill University in 1942. In 1946 he joined the National Research Council in Ottawa where he worked until his retirement in 1980. His work included measurements of the ionization energies of free radicals and thermochemistry.
Awards and honors
Fellow of the Royal Society of Canada (1956)
The Fred P. Lossing Award is named after him.
References
External links
Remembering Fred P. Lossing
Profile at the Royal Astronomical Society
Fellows of the Royal Society of Canada
Mass spectrometrists
McGill University alumni
20th-century Canadian chemists
University of Western Ontario alumni
1915 births
1998 deaths | Frederick Lossing | [
"Physics",
"Chemistry"
] | 200 | [
"Biochemists",
"Mass spectrometry",
"Spectrum (physical sciences)",
"Mass spectrometrists"
] |
60,752,454 | https://en.wikipedia.org/wiki/Minoan%20Moulds%20of%20Palaikastro | The Minoan Moulds of Palaikastro () are two double-sided pieces of schist, formed in the Minoan period as casting moulds for plaques with figures and symbols. These include female figures with raised arms, labrys double axes (Λάβρυες, labryes) and opium poppy flowers or capsules, two double axes with indented edges, the Horns of Consecration symbol, and a sun-like disc with complex markings, which has been claimed by some researchers to be for making objects to use in astronomical predictions of solar and lunar eclipses.
They were found in 1899 near Palaikastro in the eastern part of Crete, and are now in the Herakleion Archeological Museum in Crete.
Description
Stefanos Xanthoudidis, who published the find in 1900 described the two moulds, which were made from relatively soft and brittle schist as Plate Α and Plate Β. His plaster casts, which are also reproduced on the right hand side, are mirror images of the original moulds. Both moulds are wide, high and thick, while the width of the plaster casts is .
The front of Plate Α shows a large disc with rectangular spokes and a serrated edge (which some are keen to interpret as "geared"), a female figure with raised arms, who holds flowers in her hands and a small disc with a cross in the centre on top of a bell-shaped and horizontally striped base, above a crescent. Double horns, the 'Horns of Consecration' of the Minoan culture, and a trident are shown on the rear. A small piece of the lower edge of the mould is broken-off.
The front of Plate B shows engravings of a pair of double axes, dissimilar in size, with toothed edges. The double axe or labrys was a cultural, almost certainly religious, symbol of the Minoan culture, often used for votive offerings, as were goddess figures with uplifted hands. The rear of the plate shows a female figure with raised arms holding two double axes. A small piece of the lower edge of this mould is broken off as well. Both plates are exhibited side by side in the Heraklion Archaeological Museum; visitors can see only their front sides. The captions in the museum date them to 1370-1200 BCE.
Iconography
Very interesting objects are shown on the front of Plate Α, as recognised by Arthur Evans, who described them in his book The Palace of Minos at Knossos in 1921. On pages 478 and 479, he compares the base of an ivory object, of the Knossos board game, with the geared object on the mould of Palaikastro. On page 514 he shows drawings of the objects left and right of the female figurine of Plate Α, however, not very precisely. Evans refers to the isosceles cross being used in many cultures as the most simple representation of a star, and concludes that the geared object is a combination of a Morning Star with the disc of the sun. He interprets that the smaller object is a symbol for the goddess as the queen of the underworld and as the stars of the night. In combination with the crescent, the cross is then an Evening Star.
Possible historical astronomy function
In 2013, five scientists published a paper in the journal Mediterranean Archaeology and Archaeometry in which they described the sun-like form on Plate Α as a casting mould for manufacturing a spoked disc, which they argue was used in Minoan times of the 15th century BC as a sundial, for establishing geographical latitude and for predicting solar and lunar eclipses. They interpret the straight gashes beside the sun shape as moulds for two pins and a compass- or tweezers-like object, to be used in conjunction with it. They claimed that, using it, eclipses can be predicted with some accuracy even in the modern era.
Minas Tsikritsis made similar claims publicly in April 2011. Together with Efstratios Theodosiou, he described the smaller round image to the right of the female figure as a Minoan cosmology model with the planetary system above the Flat Earth, in which the cross symbolising the sun is surrounded by 18 dots, and these together with the crescent-shaped moon symbol are surrounded by 28 dots, an indication hinting at the Saros cycle with 28 lunar eclipses in 18 years. This is approximately long and high. They interpret the spoked disc on the other side of the female figurine, which is associated with the Titaness Rhea, as a portable analog calculator, created 1400 years before the Antikythera mechanism.
Dating
Chronological dating of the moulds is difficult, because the precise original location of the find and its surroundings are not known. Stratigraphy or the assessment of age-equivalent stratigraphic markers are, therefore, not applicable. In 1927, Martin P. Nilsson compared the style of the female figurine of Plate Α with those on various Minoan-Mycenaean gold rings and a relief on the Hagia Triada sarcophagus.
In 1941, Luisa Banti classified both female figurines as variations of the type "goddess with raised hands", similar to the terracotta figurines found in Knossos, Gazi, Karphi and other places in Crete, which belong to the Late Minoan III phase. In 1958, Stylianos Alexiou endorsed the Late Minoan III dating, but he noted differences in the gesture, as these female figurines hold something in their hands.
In 2016, based on a stylistic and iconographic assessment, the casting moulds were dated as being older by Jan G. Velsink, who dates them as belonging to the Middle Minoan phases MM II or III.
Discovery and publication
The two moulds were discovered in October 1899 by a farmer from Karydi, northeast of the village of Palaikastro. The Gendarmerie sent the finds to the then Cretan capital Chania, where they were assessed and kept by the archaeologist and historian Stefanos Xanthoudidis. He recognised their importance as evidence of ancient craftsmanship and delivered the moulds to the museum in Heraklion, which had been set up in 1883. Xanthoudidis described the objects in March 1900 in an article ("Ancient moulds from Sitia in Crete") in the journal of the Archaeological Society of Athens. This publication included photos of plaster casts of all four sides of the moulds.
References
External links
Minoan culture
Minoan art
Heraklion Archaeological Museum
17th-century BC works
16th-century BC works
1899 archaeological discoveries
Ancient Greek astronomy
Ancient Greek science
Ancient Greek technology
Mechanical calculators
Mechanical computers
Analog computers
Archaeoastronomy
Archaeological artifacts
Archaeological discoveries in Crete
Astronomical instruments | Minoan Moulds of Palaikastro | [
"Physics",
"Astronomy",
"Technology"
] | 1,437 | [
"Machines",
"Archaeoastronomy",
"Physical systems",
"Mechanical computers",
"Astronomical instruments",
"Astronomical sub-disciplines"
] |
60,752,925 | https://en.wikipedia.org/wiki/Present%20bias | Present bias is the tendency to settle for a smaller present reward rather than wait for a larger future reward, in a trade-off situation. It describes the trend of overvaluing immediate rewards, while putting less worth in long-term consequences. The present bias can be used as a measure for self-control, which is a trait related to the prediction of secure life outcomes.
In the field of behavioral economics, present bias is related to hyperbolic discounting; both describe discounting that is not time-consistent.
History
Although the term present bias was not introduced until the 1950s, the core idea of immediate gratification was already addressed in Ancient Greece. An early record of concern about procrastination comes from the Greek poet Hesiod, who wrote:
"Do not put your work off till to-morrow and the day after; for a sluggish worker does not fill his barn, nor one who puts off his work: industry makes work go well, but a man who puts off work is always at hand-grips with ruin."
Present bias and economics
The term present bias was coined in the second half of the 20th century. In the 1930s, economic research started investigating time preferences. The findings led to the model of exponential discounting, which is time-consistent. However, later research concluded that time preferences are in fact inconsistent: people prefer immediate advantages to future advantages, in that their discounting falls rapidly over short delays but more slowly the further the rewards lie in the future. Therefore, people are biased towards the present. As a result, Phelps and Pollak introduced the quasi-hyperbolic model in 1968. In economics, present bias is therefore a model of discounting.
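The quasi-hyperbolic ("beta-delta") model can be sketched as follows; the beta and delta values here are illustrative assumptions, not estimates from any particular study.

```python
# Quasi-hyperbolic discounting: a reward t periods away is valued at
# beta * delta**t times its face value (and at face value for t = 0).
# beta < 1 introduces the present bias; beta = 1 recovers the
# time-consistent exponential model.

def discounted_value(reward, t, beta=0.7, delta=0.95):
    """Present value of `reward` received t periods from now."""
    return reward if t == 0 else beta * (delta ** t) * reward

# Preference reversal (time inconsistency): 100 now is preferred to
# 110 tomorrow, yet 110 in 366 days is preferred to 100 in 365 days,
# even though both pairs differ by the same one-day delay.
assert discounted_value(100, 0) > discounted_value(110, 1)
assert discounted_value(110, 366) > discounted_value(100, 365)
```

Setting beta = 1 removes the reversal and recovers the time-consistent exponential model.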
Only when the preference for the present is time inconsistent do we call it biased. In recent years, the concept of present bias has also found its way into research concerning law and criminal justice.
Brain areas
Decisions concerning the choice between an immediate or a future reward are mediated by two separate systems, one dealing with impulsive decisions and the other with self-control.
Brain areas associated with emotion and reward processing are activated much more strongly by the availability of immediate rewards than by future rewards, even if the future rewards are larger. Hence individuals tend to make decisions in favor of immediate outcomes rather than future outcomes.
The brain areas involved in present-biased decisions can be dissociated into three main groups. The medial prefrontal cortex and the medial orbitofrontal cortex respond to both the presence and the gain of an immediate reward, whereas the ventral striatum is sensitive to the availability and gain of a reward. The pregenual anterior cingulate cortex on the other hand is only responsive to the presence of an immediate reward. All these areas are associated with activity in response to an immediate reward.
McClure's dual-system model claims that these brain areas are impulsively triggered by immediate benefits and not so much by future rewards. Future rewards do not activate emotion- and reward-processing areas as much, because people tend to downgrade future benefits in respect of available immediate benefits.
The medial prefrontal cortex, pregenual anterior cingulate cortex and ventral striatum show different activity patterns, depending on whether the choices lead to an immediate reward or a future reward for oneself. This is not the case if these decisions affect another individual, which implies that more patience and less self-focus is involved in self-irrelevant decision-making. People who consider their present and future self as more alike also exhibit more patience when choosing a reward.
Activity in the ventral striatum, medial prefrontal cortex, orbitofrontal cortex, pregenual anterior cingulate cortex and posterior cingulate cortex is associated with an immediate reward merely being available for oneself. All these areas, which are also part of the rostral limbic system, build a network connected to the expectation and the gain of immediate gratification.
The dorsal anterior cingulate cortex, posterior cingulate cortex and precuneus are activated more when the reward is immediate and less when the reward is available in the future, regardless of whether it affects the individual themselves or another person.
Ventral striatum
The ventral striatum is activated both when an individual personally decides for an immediate reward and when an individual watches someone else making that decision for them. It is responsive both to the likelihood of getting an anticipated reward and to its size. It also plays a role in evaluating the outcome after a choice has been made.
Medial prefrontal cortex
The medial prefrontal cortex is responsible for self-related attention and judgement, for example comparing the self to someone else. These evaluations take place even if the individual has not made the choice themselves. The ventral part of the medial prefrontal cortex, just like the ventral striatum, also evaluates the outcome after the decision has been made.
Pregenual anterior cingulate cortex
The pregenual anterior cingulate cortex is a structure located close to the corpus callosum, which plays a role in positive emotions and responds to successful rewards when gambling.
Ventral posterior cingulate cortex
This brain area plays a role in reflection on the self and on emotions.
Delayed gratification
Delayed gratification is the ability to not give in to immediate rewards and instead strive for the more beneficial future rewards.
Stanford Marshmallow Experiment
The first Marshmallow Experiment was conducted at Stanford University by Walter Mischel and Ebbe B. Ebbesen in 1970. It led to a series of Marshmallow Experiments, which all tested children's ability to delay gratification. The children were offered an immediate reward and were told that if they managed not to eat the reward right away, but instead waited for a certain period of time (approximately 15 minutes), they would get another treat. Age correlated positively with the capability for delayed gratification. A correlation has also been found between the ability to delay gratification as a child and the child's success in follow-up studies several years later.
Political elections
Present bias is also reflected in the choice of whether an individual participates in political elections. Political elections are usually characterized by an immediate effort, for example making a political decision and casting the vote on election day, whereas the benefits of voting, such as favored political changes, often occur only later in the future. Patience is therefore a relevant factor that influences people's decision whether to take part in elections. Individuals who exhibit more patience for future political changes show a greater willingness to take part in political elections, whereas those who focus more on the effort required are less likely to take part.
Brain areas
The ability to delay gratification increases as the lateral prefrontal cortex and the medial prefrontal cortex mature. In particular, the left dorsolateral prefrontal cortex shows increased activity during delayed gratification. The thickness of these cortical areas, as well as the volume of the left caudate nucleus, is also linked to a better ability to delay gratification and suppress impulsivity. The frontal cortex's involvement in self-regulation and self-control also plays an important role.
Procrastination
Present-biased preferences often result in procrastination.
Procrastination mostly occurs when actions are followed by immediate costs. However, when actions are instead followed by immediate rewards, people tend to perform their tasks faster in order to get the reward.
The findings of a study in which students had to set deadlines for completing certain tasks for themselves suggested that an interaction of present bias and personal characteristics, e.g. overconfidence, may appear as "procrastination". However, internal self-control and sophistication regarding the tasks may reduce present bias, whereas it has the opposite effect for naïve people.
Brain areas
Another study further investigates the common hypothesis that self-regulatory failure results in procrastination.
Furthermore, there appears to be a decrease in functional correspondence between the VMPC and DLPFC, between the dACC and the caudate, and in the right VLPFC. They posited that self-regulatory failure is associated with procrastination, although a body of replicated results would lend more credibility to this hypothesis.
Health
Present bias has an impact on people's individual health care decision-making. It affects a range of health-related behaviors: taking precautions against potential illnesses such as breast cancer, living an unhealthy lifestyle (smoking, drinking alcohol, drug use), and showing risky behavior such as drunk driving.
Present bias often occurs when the negative consequences of a certain behavior are believed to lie in the distant future. It is characterized by short-term impatience. This impatience with benefits that occur only in the future minimizes the motivation for people to take unpleasant actions for their health, like maintaining a diet, refraining from cigarettes or regularly visiting a professional for check-ups.
Present biased decision-making often underlies the notion that a certain health care behavior induces costs first, while benefits occur only quite some time later. People are often more focused on the short-term benefits than on long-term consequences. For example, drunk-drivers exhibit less long-term concern than non-drunk drivers.
The lack of adherence to health care can also be explained by naïve thinking about one's own present bias. People overestimate the extent to which they will take care of their behavior's consequences in the future, which is often not the case. They tend to overestimate their own self-control and underestimate the effects of their present behavior on their future well-being, and therefore postpone taking action until it is urgent. Many people procrastinate because they underestimate how much their future selves are affected by the present bias.
Present bias can explain failure to adhere to effective health care guidelines, such as mammography. People tend to forget that precaution with their own health can maximize their lifespan and minimize their lifetime medical spending. Many people who are already diagnosed with an illness underestimate the importance of following health care guidelines, even though these are beneficial for their own health. Mostly, increasing age and nearing death eventually lead individuals to focus more on their own health.
Overcoming the present bias could lead to earlier detection of illnesses, such as breast cancer, so that treatment can be started in time. Individual decisions not to take care early negatively affect health care systems, whose costs could be minimized by greater precaution on the part of their clients.
Visceral states
The educator and economist George Loewenstein described how strongly visceral states (e.g. hunger, thirst, strong emotions, sexual desire, mood or physical pain) can influence decision-making in ways that are not in one's long-term interest. According to Loewenstein, visceral factors have a direct hedonic impact and influence how much one desires different rewards and actions. When visceral factors influence one strongly, they can lead to self-destructive behavior such as overeating. Visceral factors lead one to focus on the present more than on some time in the future when making decisions that are associated with the visceral factor. In Loewenstein's opinion, visceral states have the greatest impact on the following behaviors: drug addiction, sexual behavior, motivation and effort, and self-control.
Those factors are known as "hot states", because temporary emotions can have a strong effect on behavior. Therefore, there are "cooling off" periods for many important purchases. Other factors such as age, gender, cultural background, education and self-control also play a role in discounting decisions, but those can be dealt with more easily than visceral states.
Wealth distribution
Economic models use present bias, also referred to as dynamic inconsistency, to explain the distribution of wealth. If everybody were present-biased, wealth distribution would be unaffected. As this is only possible in an ideal economy, wealth inequality arises from time-consistent individuals benefiting from the irrational monetary decisions of their present-biased economic rivals. Indeed, present bias in economics is often linked to a lack of self-control in monetary decisions. It is associated with strong desires to spend money and failure to commit to a saving plan.
A present-biased society is one whose individuals reach an earlier peak in their mean wealth and tend to lose accumulated wealth as they reach retirement. Loss of wealth can be attributed to the tendency to bend under the temptation to over-consume and under-save. Such irrational behavioral biases lead to lower average wealth and shortened planning horizons. Present-biased people who fail to complete a consumption-saving plan are more likely to consistently re-optimize the factors influencing their wealth accumulation. An association between deciding to obtain less education, lower lifetime earnings, and lower retirement consumption has been observed in present-biased individuals.
Tourism
Present bias plays a role in tourism concerning travel costs and the impulsivity of tourists' decision-making. Impulsivity is reasoned to be triggered by the escape from daily routines and a feeling of living in the moment; hence present bias would especially apply while traveling. Although reference prices frame expenses, tourists tend to overspend through present bias, which is influenced by prospect theory (which grades the value of gains) and the attachment effect. Individual differences such as risk averseness also play into overspending caused by present bias. Group decisions and a prior commitment not to overspend can reduce the bias and inhibit impulsivity.
See also
Cognitive bias
List of cognitive biases
Behavioral economics
Hyperbolic discounting
Procrastination
Delayed gratification
Stanford marshmallow experiment
References
Behavioral economics
Cognitive biases
Cognitive neuroscience
Psychology experiments | Present bias | [
"Biology"
] | 2,815 | [
"Behavior",
"Behavioral economics",
"Behaviorism"
] |
41,419,738 | https://en.wikipedia.org/wiki/Cobordism%20hypothesis | In mathematics, the cobordism hypothesis, due to John C. Baez and James Dolan, concerns the classification of extended topological quantum field theories (TQFTs). In 2008, Jacob Lurie outlined a proof of the cobordism hypothesis, though the details of his approach have yet to appear in the literature as of 2022. In 2021, Daniel Grady and Dmitri Pavlov claimed a complete proof of the cobordism hypothesis, as well as a generalization to bordisms with arbitrary geometric structures.
Formulation
For a symmetric monoidal (∞, n)-category C which is fully dualizable and every k-morphism of which is adjointable, for 1 ≤ k ≤ n − 1, there is a bijection between the C-valued symmetric monoidal functors of the cobordism category and the objects of C.
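In Lurie's formulation, the bijection is induced by evaluating a field theory at the point; a common way to write this (the notation Bord_n^fr for the framed cobordism (∞, n)-category and C^fd for the fully dualizable objects is an assumption here, not taken from the text above) is:

```latex
Z \;\longmapsto\; Z(\mathrm{pt}) \colon \quad
\operatorname{Fun}^{\otimes}\!\bigl(\mathrm{Bord}_n^{\mathrm{fr}},\,
\mathcal{C}\bigr) \;\xrightarrow{\;\simeq\;}\;
\bigl(\mathcal{C}^{\mathrm{fd}}\bigr)^{\simeq}
```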
Motivation
Symmetric monoidal functors from the cobordism category correspond to topological quantum field theories. The cobordism hypothesis for topological quantum field theories is the analogue of the Eilenberg–Steenrod axioms for homology theories. The Eilenberg–Steenrod axioms state that a homology theory is uniquely determined by its value for the point, so analogously what the cobordism hypothesis states is that a topological quantum field theory is uniquely determined by its value for the point. In other words, the bijection between C-valued symmetric monoidal functors and the objects of C is given by evaluating each functor at the point.
See also
Cobordism
References
Further reading
Seminar on the Cobordism Hypothesis and (Infinity,n)-Categories, 2013-04-22
Jacob Lurie (4 May 2009). On the Classification of Topological Field Theories
External links
Quantum field theory | Cobordism hypothesis | [
"Physics",
"Mathematics"
] | 354 | [
"Quantum field theory",
"Quantum mechanics",
"Topology stubs",
"Topology",
"Quantum physics stubs"
] |
41,419,857 | https://en.wikipedia.org/wiki/Simplicial%20space | In mathematics, a simplicial space is a simplicial object in the category of topological spaces. In other words, it is a contravariant functor from the simplex category Δ to the category of topological spaces.
References
Homotopy theory
Topological spaces | Simplicial space | [
"Mathematics"
] | 58 | [
"Topological spaces",
"Mathematical structures",
"Topology",
"Space (mathematics)"
] |
41,419,956 | https://en.wikipedia.org/wiki/Description-experience%20gap | The description-experience gap is a phenomenon in experimental behavioral studies of decision making. The gap refers to the observed differences in people's behavior depending on whether their decisions are made towards clearly outlined and described outcomes and probabilities or whether they simply experience the alternatives without having any prior knowledge of the consequences of their choices.
In both described and experienced choice tasks, the experimental task usually involves selecting one of two possible choices that lead to certain outcomes. The outcome could be a gain or a loss, and the probabilities of these outcomes vary. Of the two choices, one is probabilistically safer than the other; the other choice then offers a comparably improbable outcome. The specific payoffs or outcomes of the choices, in terms of the magnitude of their potential gains and losses, vary from study to study.
Description
Description-based alternatives or prospects are those where much of the information regarding each choice is clearly stated. That is, the participant is shown the potential outcomes for both choices as well as the probabilities of all the outcomes within each choice. Typically, feedback is not given after a choice is selected. That is, the participant is not shown what consequences their selections led to. Prospect theory guides much of what is currently known regarding described choices.
According to prospect theory, the decision weight of described prospects are considered differently depending on whether the prospects have a high or low probability and the nature of the outcomes. Specifically, people's decisions differ depending on whether the described prospects are framed as gains or losses, and whether the outcomes are sure or probable.
Prospects are termed as gains when the two possible choices both offer a chance to receive a certain reward. Losses are those where the two possible choices both result in a reduction of a certain resource. An outcome is said to be sure when its probability is absolutely certain, or very close to 1. A probable outcome is one that is comparably more unlikely than the sure outcome. For described prospects, people tend to assign a higher value to sure or more probable outcomes when the choices involve gains; this is known as the certainty effect. When the choices involve losses, people tend to assign a higher value to the more improbable outcome; this is called the reflection effect because it leads to the opposite result of the certainty effect.
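The certainty and reflection effects can be reproduced with prospect theory's value and probability-weighting functions. The functional forms below follow Kahneman and Tversky, but the parameter values are illustrative, and as a simplifying assumption the same weighting curvature is used for losses as for gains.

```python
# Prospect-theory valuation of a simple prospect "x with probability p".
# ALPHA, LAMBDA and GAMMA are illustrative parameter values.

ALPHA = 0.88   # curvature of the value function
LAMBDA = 2.25  # loss aversion
GAMMA = 0.61   # probability-weighting curvature

def value(x):
    """S-shaped value function: concave for gains, convex for losses."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

def weight(p):
    """Inverse-S probability weighting function."""
    return p ** GAMMA / (p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA)

def prospect(p, x):
    """Subjective value of receiving x with probability p, else nothing."""
    return weight(p) * value(x)

# Certainty effect: a sure gain of 3000 beats an 80% chance of 4000.
assert prospect(1.0, 3000) > prospect(0.8, 4000)
# Reflection effect: for losses the preference reverses; the sure loss
# is judged worse than the 80% chance of a larger loss.
assert prospect(1.0, -3000) < prospect(0.8, -4000)
```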
Experience
Previous studies focusing on description-based prospects suffered from one drawback: the lack of external validity. In the natural environment, people's decisions must be made without a clear description of the probabilities of the alternatives. Instead, decisions must be made by drawing upon past experiences. In experience-based studies, then, the outcomes and probabilities of the two possible choices are not initially presented to the participants. Instead, participants must sample from these choices, and they can only learn the outcomes from feedback after making their choices. Participants can only estimate the probabilities of the outcomes based on experiencing the outcomes.
Contrary to the results obtained by prospect theory, people tended to underweight the probabilities of rare outcomes when they made decisions from experience. That is, they in general tended to choose the more probable outcome much more often than the rare outcomes; they behaved as if the rare outcomes were more unlikely than they really were. The effect has been observed in studies involving repeated and small samples of choices. However, people tended to choose the riskier choice when deciding from experience for tasks that are framed in terms of gains, and this, too, is in contrast with decisions made from description.
As demonstrated above, decisions appear to be made very differently depending on whether choices are made from experience or description; that is, a description-experience gap has been demonstrated in decision making studies. The example of the reverse reflection effect aptly demonstrates the nature of the gap. Recall that description-based prospects lead to the reflection effect: people are risk averse for gains and risk seeking for losses. However, experience-based prospects results in a reversal of the reflection effect such that people become risk seeking for gains and risk averse for losses. More specifically, the level of risk-taking behavior towards gains for participants in the experience task is virtually identical to the level of risk-taking towards losses for participants in the description task. The same effect is observed for gains versus losses in experience and description tasks. There are a few explanations and factors that contribute to the gap; some of which will be discussed below.
One factor that may contribute to the gap is the nature of the sampling task. In a sampling paradigm, people are allowed to respond to a number of prospects. Presumably, they form their own estimates of the probabilities of the outcomes through sampling. However, some studies rely on people making decisions from a small sample of prospects. Due to the small samples, people may not even experience the low-probability event, and this might influence people's underweighting of rare events. Description-based studies, by contrast, make the exact probabilities known to the participant. Since participants here are immediately made aware of the rareness of an event, they are unlikely to undersample rare events.
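The undersampling point can be made concrete: the probability that an event of probability p never appears in n independent draws is (1 − p)^n, so with small samples rare outcomes are frequently never experienced at all. A small illustrative simulation (all numbers are arbitrary choices for the demonstration):

```python
import random

def prob_never_seen(p, n):
    """Analytic probability that an event of probability p is missed
    in n independent draws: (1 - p) ** n."""
    return (1 - p) ** n

def estimate_never_seen(p, n, trials=10_000, seed=0):
    """Monte Carlo estimate of the same quantity."""
    rng = random.Random(seed)
    missed = sum(
        all(rng.random() >= p for _ in range(n)) for _ in range(trials)
    )
    return missed / trials

# A 5%-probability outcome sampled 10 times is missed in roughly 60%
# of experiments, so many participants never encounter it at all.
print(prob_never_seen(0.05, 10))      # about 0.599
print(estimate_never_seen(0.05, 10))  # close to the analytic value
```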
The results from experience-based studies may be the result of a recency effect. The recency effect shows that greater weight or value is assigned to more recent events. Given that rare events are uncommon, the more common events are more likely to take recency and therefore be weighted more than rare events. The recency effect, then, may be responsible for the underweighting of rare events in decisions made from experience. Given that description-based studies usually involving responding to a limited number of trials or only one trial, recency effects likely do not have as much of an influence on decision making in these studies or may even be entirely irrelevant.
Another variable which may be driving the results for the experience-based decisions paradigm is a basic tendency to avoid delayed outcomes: alternatives with positive rare events are on average advantageous only in the long term, while alternatives with negative rare events are on average disadvantageous in the long term. Hence, focusing on short-term outcomes produces underweighting of rare events. Consistent with this notion, it has been found that increasing the short-term temptation (e.g., by showing outcomes from all options, or foregone payoffs) increases the underweighting of rare events in decisions from experience.
Since experience-based studies include multiple trials, participants must learn about the outcomes of the available choices. The participants must base their decisions on previous outcomes, so they must therefore rely on memory when learning the outcomes and their probabilities. Biases for more salient memories, then, may be the reason for greater risk seeking in gains choices in experience-based studies. The assumption here is that a more improbable but greater reward may produce a more salient memory.
To reiterate, prospect theory offers sound explanations for how people behave towards description-based prospects. However, the results from experience-based prospects tend to show opposite forms of responding. In described prospects, people tend to overweight the extreme outcomes such that they expect these probabilities to be more likely than they really are. Whereas in experienced prospects, people tend to underweight the probability of the extreme outcomes and therefore judge them as being even less likely to occur.
A highly relevant example of the description-experience gap has been illustrated: the difference in opinions on vaccination between doctors and patients. Patients who learn about vaccination are usually exposed to only information regarding the probabilities of the side effects of the vaccines so they are likely to overweight the likelihood of these side effects. Although doctors learn about the same probabilities and descriptions of the side effects, their perspective is also shaped by experience: doctors have the direct experience of vaccinating patients and they are more likely to recognize the unlikelihood of the side effects. Due to the different ways in which doctors and patients learn about the side effects, there is potential disagreement on the necessity and safety of vaccination.
Typically in natural settings, however, people's awareness of the probabilities of certain outcomes and their prior experience cannot be separated when they make decisions that involve risk. In gambling settings, for instance, players can participate in a game with some level of understanding of the probabilities of the possible outcomes and what specifically the outcomes lead to. For example, players know that there are six sides to a die, and that each side has a one in six chance of being rolled. However, a player's decisions in the game must also be influenced by his or her past experiences of playing the game.
See also
Decision theory
Prospect theory
References
External links
An introduction to Prospect Theory (econport.com)
Prospect Theory (behaviouralfiance.net)
Behavioral economics
Prospect theory
Decision theory | Description-experience gap | [
"Biology"
] | 1,775 | [
"Behavior",
"Behavioral economics",
"Behaviorism"
] |
41,420,172 | https://en.wikipedia.org/wiki/Beta%20attenuation%20monitoring | Beta attenuation monitoring (BAM) is an air monitoring technique employing the absorption of beta radiation by solid particles extracted from air flow. The technique allows for the detection of PM10 and PM2.5, which are monitored by most air pollution regulatory agencies. The main principle is based on a kind of Bouguer (Lambert–Beer) law: the amount by which the flow of beta radiation (electrons) is attenuated by a solid matter is exponentially dependent on its mass and not on any other feature (such as density, chemical composition or some optical or electrical properties) of this matter. So, the air is drawn from outside of the detector through an "infinite" (cycling) ribbon made from some filtering material so that the particles are collected on it. There are two sources of beta radiation placed one before and one after the region where air flow passes through the ribbon leaving particles on it; and there are also two detectors on the opposite side of the ribbon, facing the detectors. The sources' intensity and detectors' sensitivity being the same (or corrected with appropriate calibration lookup table), the intensity of beta rays detected by one of detectors is compared to that of the other. Thus one can deduce how much mass has the ribbon acquired upon being exposed to air flow; knowing the drain velocity, actual particle mass concentration in air could be assessed.
The radiation source can be a gas chamber filled with 85Kr gas, or a piece of 14C-rich polymer plastic such as PMMA. The detector is simply a Geiger–Müller counter. Unfortunately, the particulate matter content measured is affected by the moisture content of the air.
To discriminate between particles of different sizes (e.g., between PM10 and PM2.5), some preliminary separation can be performed, for example by a cyclone battery.
A similar method exists in which, instead of a beta particle flow, X-ray fluorescence spectroscopic monitoring is applied on either side of the air flow's contact with the ribbon. This makes it possible to obtain not only a cumulative measurement of particle mass but also the particles' average chemical composition (the technique works for potassium and heavier elements).
References
Literature
List of Designated Reference and Equivalent Methods. EPA: Research Triangle Park, 2013. Online: http://www.epa.gov/ttn/amtic/criteria.html.
Air pollution
Aerosols
Detectors
Measuring instruments
Radioactivity
Meteorological instrumentation and equipment | Beta attenuation monitoring | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 500 | [
"Meteorological instrumentation and equipment",
"Measuring instruments",
"Colloids",
"Aerosols",
"Nuclear physics",
"Radioactivity"
] |
41,420,328 | https://en.wikipedia.org/wiki/Interactions%20between%20the%20emotional%20and%20executive%20brain%20systems | The neurocircuitry that underlies executive function processes and the circuitry that underlies emotional and motivational processes are known to be distinct in the brain. However, some brain regions show overlap in function between the two cognitive systems. Brain regions that exist in both systems are interesting mainly for studies on how one system affects the other. Examples of such cross-modal functions are emotional regulation strategies such as emotional suppression and emotional reappraisal, the effect of mood on cognitive tasks, and the effect of emotional stimulation on cognitive tasks.
A variety of methods can be used to examine the relationship between executive function and emotion, including behavioural studies, functional brain activity, and neuroanatomy. Some of the most prominent results are listed here.
Behavioural studies
Mood affects style of information processing
A large body of research has looked at the effects of positive or negative mood manipulations on performance in tasks of executive function. In most cases, positive mood inductions impair executive function, whereas negative mood has little effect. Overall, the best supported explanation for the observed effects is that mood affects processing style, with positive mood facilitating more heuristic methods of solving problems, and negative mood facilitating more algorithmic methods. Research in this area is incomplete, as negative mood inductions are less thoroughly studied.
Effects of mood on working memory and planning
In word span tasks, positive mood caused greater deficits in complex tasks than in simpler tasks, whereas negative mood had no effect. In a Tower of London planning task, positive mood caused poorer planning performance compared to neutral mood. In both cases, researchers suggested that the lack of effect could be explained by insufficient mood manipulation methods.
Effects of mood on fluency and creativity
In word fluency tasks, one study has shown that positive mood results in better fluency over negative mood, while another has shown that negative mood results in higher word production. A third study did not find any effect of either mood manipulation. However, there is some evidence that positive mood can result in increased performance in some tasks requiring creative thinking. No evidence of negative mood on creative thinking is available.
Effects of mood on inhibition and switching
In the Stroop task, a near-significant trend toward increased Stroop costs was found in positive mood conditions. In two switching tasks, positive mood resulted in impaired switching compared to a neutral condition. Little evidence is found for an effect of negative mood.
Interpretation
Taken together, positive mood impairs working memory, planning, word production, inhibition, and switching, and facilitates word fluency. Negative mood impairs fluency but facilitates planning tasks and word production, and has shown no effect on tasks of working memory, creativity, inhibition, or switching. The results, while incomplete, are consistent with the interpretation that mood influences style of processing.
Prefrontal cortex regions involved in emotional regulation
Some of the more significant cortical areas involved in emotional regulation include the ventrolateral prefrontal cortex, medial prefrontal cortex, dorsolateral prefrontal cortex and dorsomedial prefrontal cortex.
Ventrolateral prefrontal cortex (vlPFC)
The ventrolateral prefrontal cortex (vlPFC) is a subdivision of the prefrontal cortex. Its involvement in modulating existing behavior and emotional output given contextual demands has been studied extensively using cognitive reappraisal studies and emotion-attention tasks. Cognitive reappraisal studies indicate the vlPFC's role in reinterpreting stimuli, and reducing or augmenting responses. Studies using emotion-attention tasks demonstrate the vlPFC's function in ignoring emotional distractions while the brain is engaged in performing other tasks.
Medial prefrontal cortex (mPFC)
The medial prefrontal cortex (mPFC) is a subdivision of the prefrontal cortex. It encodes expected outcomes, both positive and negative, and signals when the expected outcomes do not occur. The mPFC, mediated by the amygdala, is also involved in the extinction and modulation of conditioned responses, including emotional ones, and the augmentation of emotional states. The function of the mPFC in higher order emotional processing is still unclear.
Dorsal prefrontal cortex
The dorsolateral prefrontal cortex (dlPFC) and the dorsomedial prefrontal cortex (dmPFC) are implicated in the enhancement of representations of stimuli relevant to current decisions, behaviors, or tasks. These areas also play a role in modulating emotions and dealing with emotional distractions during demanding tasks, and are implicated in facilitating decision making and resolving perceptual conflict by augmenting representations of stimuli relevant to the decision or behavior. The dmPFC in particular plays a role in decision making under conflict, i.e., at high levels of indecision, for example when picking between similar items or acting in novel situations. There is also evidence of an inverse relationship between activation in the dPFC areas and activation in emotionally activated brain areas.
Ventral and dorsal streams
Ventral stream
The ventral stream primarily involves the vlPFC and mPFC. Signals of expected outcomes trigger the mPFC to update stimulus associations through exchanges with the amygdala and the nucleus accumbens. When a response change is needed, the mPFC interacts with the vlPFC. Then, the vlPFC modulates the emotional response to stimuli through interactions with the dorsal striatum. Preliminary findings indicate that the vlPFC may also modulate activity in the nucleus accumbens, temporal cortex, anterior insula and amygdala.
Dorsal stream
The dorsal stream is activated by the presence of response conflict. The dmPFC relays information on past reinforcement to the dlPFC, which initiates selective attention. The dlPFC influences action and emotion by weighing the importance of competing goals/representations in the temporal cortex. Representations opposite to what the stimulus originally elicited are rendered more salient and compete with the original representations. These competitions influence the modulation of activity in the amygdala and the mPFC.
Adolescent development
An imbalance in the relative influence of the emotional and executive systems is posited to be responsible for the heightened levels of risk-taking and emotionality observed in adolescents. Specifically, dopamine-rich regions related to motivation, including the ventral striatum, which has been shown to represent the appetitive value of a stimulus, show increased signaling in the adolescent years; this is suggested to be indicative of maturation in this region. In contrast, regions of the brain known to be involved in modulating emotional effects on executive function, including the vlPFC and the entire ventrolateral frontostriatal network, do not fully mature until late adolescence to early adulthood. Recent research has shown that adolescents are less capable of inhibiting responses to prepotent stimuli. Additionally, the ventral striatum and frontolateral prefrontal cortex show patterns of activity that are more connected with each other during adolescence than in early adulthood. While it is accepted that adolescents are less able to inhibit responses to tempting stimuli, the specific neural mechanism that modulates this phenomenon remains unclear.
Other research
The emotional-oddball paradigm is a variation on the traditional oddball paradigm used in neuroscience. Studies show emotionally enhanced memory during trials depicting negative imagery when people participate in visual, simultaneous attention-emotional tasks. Emotional arousal has also been shown to cause augmentation in memory, and enhanced processing and information consolidation when paired with stimuli. This effect has been explained by the arousal-biased competition (ABC) model, which postulates that bottom-up sensory preference to arousing stimuli and top-down relevance to current activity or goal pursuit both influence how priority is determined for an event. More simply, if an event is paired with a particularly emotionally arousing stimulus, it will be more salient to processing and have greater resources devoted to it.
References
Neuropsychology
Emotion | Interactions between the emotional and executive brain systems | [
"Biology"
] | 1,611 | [
"Emotion",
"Behavior",
"Human behavior"
] |