51,513,135
https://en.wikipedia.org/wiki/Graph%20%28topology%29
In topology, a branch of mathematics, a graph is a topological space which arises from a usual graph by replacing vertices by points and each edge by a copy of the unit interval [0, 1], where 0 is identified with the point associated to one endpoint of the edge and 1 with the point associated to the other. That is, as topological spaces, graphs are exactly the simplicial 1-complexes and also exactly the one-dimensional CW complexes. Thus, in particular, such a space X bears the quotient topology of the set X0 ⊔ (⊔_e I_e) under the quotient map used for gluing. Here X0 is the 0-skeleton (consisting of one point for each vertex), the I_e are the closed intervals glued to it, one for each edge e, and ⊔ is the disjoint union. The topology on this space is called the graph topology. Subgraphs and trees A subgraph of a graph X is a subspace Y which is also a graph and whose nodes are all contained in the 0-skeleton of X. Y is a subgraph if and only if it consists of vertices and edges from X and is closed. A subgraph is called a tree if it is contractible as a topological space. This can be shown to be equivalent to the usual definition of a tree in graph theory, namely a connected graph without cycles. Properties The associated topological space of a graph is connected (with respect to the graph topology) if and only if the original graph is connected. Every connected graph X contains at least one maximal tree T, that is, a tree that is maximal with respect to the order induced by set inclusion on the subgraphs of X which are trees. If X is a graph and T a maximal tree, then the fundamental group of X equals the free group generated by one element for each edge of X not in T; in fact, X is homotopy equivalent to a wedge sum of circles. Forming the topological space associated to a graph as above amounts to a functor from the category of graphs to the category of topological spaces. Every covering space projecting to a graph is also a graph. See also Graph homology Topological graph theory Nielsen–Schreier theorem, whose standard proof makes use of this concept. 
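The free-group property above has a concrete numeric consequence: for a connected graph, the edges left outside any maximal (spanning) tree number |E| − |V| + 1, which is the rank of the fundamental group. A minimal sketch in Python (the graph representation and function name are illustrative assumptions, not from the article):

```python
from collections import deque

def fundamental_group_rank(vertices, edges):
    """Rank of the fundamental group of a connected graph:
    the number of edges outside a maximal (spanning) tree."""
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    # Grow a spanning tree by breadth-first search from an arbitrary vertex.
    start = next(iter(vertices))
    seen = {start}
    queue = deque([start])
    tree_edges = 0
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                tree_edges += 1
                queue.append(w)
    assert len(seen) == len(vertices), "graph must be connected"
    # Each edge outside the tree contributes one circle to the wedge sum.
    return len(edges) - tree_edges

# A triangle deformation-retracts to one circle: the group is free of rank 1.
print(fundamental_group_rank({0, 1, 2}, [(0, 1), (1, 2), (2, 0)]))  # 1
```

A theta graph (two vertices joined by three parallel edges) gives rank 2, matching a wedge of two circles, while any tree gives rank 0.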
References Topological spaces
Graph (topology)
[ "Mathematics" ]
425
[ "Topological spaces", "Mathematical structures", "Topology", "Space (mathematics)" ]
51,513,238
https://en.wikipedia.org/wiki/Contact%20graph
In the mathematical area of graph theory, a contact graph or tangency graph is a graph whose vertices are represented by geometric objects (e.g. curves, line segments, or polygons), and whose edges correspond to two objects touching (but not crossing) according to some specified notion. It is similar to the notion of an intersection graph but differs from it in restricting the ways that the underlying objects are allowed to intersect each other. The circle packing theorem states that every planar graph can be represented as a contact graph of circles. The contact graphs of unit circles are called penny graphs. Representations as contact graphs of triangles, rectangles, squares, line segments, or circular arcs have also been studied. References Geometric graph theory Graph families Planar graphs
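For the circle case, two circles are externally tangent exactly when the distance between their centers equals the sum of their radii, so a contact graph can be assembled by pairwise tests. A minimal sketch in Python (the function name and tolerance are assumptions for illustration):

```python
import math

def contact_graph(circles, tol=1e-9):
    """Edges between circles that touch (externally) without crossing.

    circles: list of (x, y, r) triples.
    """
    edges = []
    for i in range(len(circles)):
        for j in range(i + 1, len(circles)):
            x1, y1, r1 = circles[i]
            x2, y2, r2 = circles[j]
            d = math.hypot(x2 - x1, y2 - y1)
            # Tangency: center distance equals the sum of the radii.
            if abs(d - (r1 + r2)) <= tol:
                edges.append((i, j))
    return edges

# Three unit circles in a row ("pennies"): only neighbours touch.
pennies = [(0, 0, 1), (2, 0, 1), (4, 0, 1)]
print(contact_graph(pennies))  # [(0, 1), (1, 2)]
```

With all radii equal to 1 this builds exactly a penny graph; the tolerance guards against floating-point error in the distance test.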
Contact graph
[ "Mathematics" ]
155
[ "Graph theory stubs", "Planar graphs", "Graph theory", "Mathematical relations", "Planes (geometry)", "Geometric graph theory" ]
51,514,596
https://en.wikipedia.org/wiki/Tissue%20growth
Tissue growth is the process by which a tissue increases its size. In animals, tissue growth occurs during embryonic development, post-natal growth, and tissue regeneration. The fundamental cellular basis for tissue growth is the process of cell proliferation, which involves both cell growth and cell division occurring in parallel. How cell proliferation is controlled during tissue growth to determine final tissue size is an open question in biology. Uncontrolled tissue growth is a cause of cancer. Differential rates of cell proliferation within an organ can influence proportions, as can the orientation of cell divisions, and thus tissue growth contributes to shaping tissues along with other mechanisms of tissue morphogenesis. Mechanisms of tissue growth control in animals Mechanical control of tissue growth in animal skin For some animal tissues, such as mammalian skin, it is clear that the growth of the skin is ultimately determined by the size of the body whose surface area the skin covers. This suggests that cell proliferation in skin stem cells within the basal layer is likely to be mechanically controlled to ensure that the skin covers the surface of the entire body. Growth of the body causes mechanical stretching of the skin, which is sensed by skin stem cells within the basal layer and consequently leads both to an increased rate of cell proliferation and to a planar orientation of stem cell divisions that produces new skin stem cells, rather than only differentiating supra-basal daughter cells. Cell proliferation in skin stem cells within the basal layer can be driven by the mechanically regulated YAP/TAZ family of transcriptional co-activators, which bind to TEAD-family DNA binding transcription factors in the nucleus to activate target gene expression and thereby drive cell proliferation. 
For other animal tissues, such as the bones of the skeleton or internal mammalian organs such as the intestine, pancreas, kidney or brain, it remains unclear how developmental gene regulatory networks encoded in the genome lead to organs of such different sizes and proportions. Hormonal control of tissue growth in the entire animal body Although different animal tissues grow at different rates and produce organs of very different proportions, the overall growth rate of the entire animal body can be modulated by circulating hormones of the Insulin/IGF-1 family, which activate the PI3K/AKT/mTOR pathway in many cells of the body to increase the average rate of both cell growth and cell division, leading to increased cell proliferation rates in many tissues. In mammals, production of IGF-1 is induced by another circulating hormone called Growth Hormone. Excessive production of Growth Hormone or IGF-1 is responsible for gigantism while insufficient production of these hormones is responsible for dwarfism. Developmental control of tissue growth during adult tissue homeostasis Adult animal tissues such as skin or intestine maintain their size but undergo constant turnover of cells by proliferation of stem cells and progenitor cells while undergoing an equivalent loss of differentiated daughter cells via sloughing off. Gradients of Wnt signaling pathway activity appear to have a fundamental role in maintaining proliferation of stem and progenitor cells, at least in the intestine, and possibly also in skin. Regenerative tissue growth after wounding or other types of damage Upon tissue damage, there is an upregulation in the activity of many pathways that control tissue growth, including the YAP/TAZ pathway, Wnt signaling pathway, and growth factors that activate the PI3K/AKT/mTOR pathway. References Developmental biology Cell biology Cell cycle Cellular processes
Tissue growth
[ "Biology" ]
699
[ "Behavior", "Cell biology", "Developmental biology", "Reproduction", "Cellular processes", "Cell cycle" ]
51,514,659
https://en.wikipedia.org/wiki/Influenza%20and%20Other%20Respiratory%20Viruses
Influenza and Other Respiratory Viruses is a peer-reviewed scientific journal covering virology, published by John Wiley & Sons for the International Society for Influenza and other Respiratory Virus Diseases. As of 2018, the editor is Benjamin Cowling. According to the Journal Citation Reports, the journal has a 2020 impact factor of 4.380. Influenza and Other Respiratory Viruses is the first journal to specialise exclusively in influenza and other respiratory viruses, and strives to play a key role in the dissemination of information in this broad and challenging field. It is aimed at laboratory and clinical scientists, public health professionals, and others around the world involved in a broad range of activities in this field. Topics covered include: surveillance epidemiology prevention by vaccines prevention and treatment by antivirals clinical studies public health & pandemic preparedness basic scientific research transmission between animals and humans References Virology journals Wiley (publisher) academic journals
Influenza and Other Respiratory Viruses
[ "Biology" ]
187
[ "Virus stubs", "Viruses" ]
51,515,150
https://en.wikipedia.org/wiki/Terse
TERSE is an IBM archive file format that supports lossless compression. A TERSE file may contain a sequential data set, a partitioned data set (PDS), partitioned data set extended (PDSE), or a large format dataset (DSNTYPE=LARGE). Any record format (RECFM) is allowed as long as the record length is less than 32 K (64 K for RECFM=VBS). Records may contain printer control characters. Terse files are compressed using a modification of the Lempel–Ziv compression algorithm developed by Victor S. Miller and Mark Wegman at the Thomas J. Watson Research Center in Yorktown Heights, New York. The Terse algorithm was proprietary to IBM; however, IBM has released an open source Java decompressor under the Apache 2 license. The compression/decompression program (called terse and unterse)—AMATERSE or TRSMAIN—is available from IBM for z/OS; the z/VM equivalents are the TERSE and DETERSE commands, for sequential datasets only. Versions for PC DOS, OS/2, AIX, Windows (2000, XP, 2003), Linux, and Mac OS X are available online. AMATERSE The following JCL can be used to invoke AMATERSE on z/OS (TRSMAIN uses INFILE and OUTFILE instead of SYSUT1 and SYSUT2): //jobname JOB ... //stepname EXEC PGM=AMATERSE,PARM=ppppp //SYSPRINT DD SYSOUT=* //SYSUT1 DD DISP=SHR,DSN=input.dataset //SYSUT2 DD DISP=(NEW,CATLG),DCB=ddd,DSN=output.dataset, // SPACE=space_parameters //SYSUT3 DD DISP=(NEW,DELETE),SPACE=space_parameters Optional temporary dataset Uses Terse can be used as a general-purpose compression/decompression tool. IBM also distributes downloadable program temporary fixes (PTFs) as tersed datasets. Terse is also used by IBM customers to package diagnostic information such as z/OS dumps and traces, for transmission to IBM. References External links Terse PC versions at Vetusware IBM software Archive formats Data compression American inventions
Terse
[ "Technology" ]
514
[ "Computing stubs" ]
51,516,730
https://en.wikipedia.org/wiki/Gould%27s%20sequence
Gould's sequence is an integer sequence named after Henry W. Gould that counts how many odd numbers are in each row of Pascal's triangle. It consists only of powers of two, and begins: 1, 2, 2, 4, 2, 4, 4, 8, 2, 4, 4, 8, 4, 8, 8, 16, 2, 4, ... For instance, the sixth number in the sequence is 4, because there are four odd numbers in the sixth row of Pascal's triangle (the four bold numbers in the sequence 1, 5, 10, 10, 5, 1). Gould's sequence is also a fractal sequence. Additional interpretations The nth value in the sequence (starting from n = 0) gives the highest power of 2 that divides the central binomial coefficient binomial(2n, n), and it gives the numerator of 2^n/n! (expressed as a fraction in lowest terms). Gould's sequence also gives the number of live cells in the nth generation of the Rule 90 cellular automaton starting from a single live cell. It has a characteristic growing sawtooth shape that can be used to recognize physical processes that behave similarly to Rule 90. Related sequences The binary logarithms (exponents in the powers of two) of Gould's sequence themselves form an integer sequence, 0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4, ... in which the nth value gives the number of nonzero bits in the binary representation of the number n, sometimes written in mathematical notation as s2(n). Equivalently, the nth value in Gould's sequence is 2^s2(n). Taking the sequence of exponents modulo two gives the Thue–Morse sequence. The partial sums of Gould's sequence, 0, 1, 3, 5, 9, 11, 15, 19, 27, 29, 33, 37, 45, ... count all odd numbers in the first n rows of Pascal's triangle. These numbers grow proportionally to n^(log2 3), but with a constant of proportionality that oscillates between 0.812556... and 1, periodically as a function of log n. Recursive construction and self-similarity The first 2^i values in Gould's sequence may be constructed by recursively constructing the first 2^(i−1) values, and then concatenating the doubles of the first 2^(i−1) values. 
For instance, concatenating the first four values 1, 2, 2, 4 with their doubles 2, 4, 4, 8 produces the first eight values. Because of this doubling construction, the first occurrence of each power of two 2^i in this sequence is at position 2^i − 1. Gould's sequence, the sequence of its exponents, and the Thue–Morse sequence are all self-similar: they have the property that the subsequence of values at even positions in the whole sequence equals the original sequence, a property they also share with some other sequences such as Stern's diatomic sequence. In Gould's sequence, the values at odd positions are double their predecessors, while in the sequence of exponents, the values at odd positions are one plus their predecessors. History The sequence is named after Henry W. Gould, who studied it in the early 1960s. However, the fact that these numbers are powers of two, with the exponent of the nth number equal to the number of ones in the binary representation of n, was already known to J. W. L. Glaisher in 1899. Proving that the numbers in Gould's sequence are powers of two was given as a problem in the 1956 William Lowell Putnam Mathematical Competition. References Integer sequences Factorial and binomial topics Fractals Scaling symmetries
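The closed form described in the article (2 raised to the number of 1 bits of n) can be checked directly against the odd entries of Pascal's triangle and against the doubling construction. A minimal sketch in Python (function names are illustrative assumptions):

```python
from math import comb

def gould(n):
    """Gould's sequence: 2 to the power of the number of 1 bits in n."""
    return 2 ** bin(n).count("1")

def odd_in_row(n):
    """Count the odd entries in row n of Pascal's triangle."""
    return sum(comb(n, k) % 2 for k in range(n + 1))

first16 = [gould(n) for n in range(16)]
print(first16)  # [1, 2, 2, 4, 2, 4, 4, 8, 2, 4, 4, 8, 4, 8, 8, 16]

# The closed form agrees with the row-by-row count of odd entries...
assert all(gould(n) == odd_in_row(n) for n in range(64))

# ...and with the doubling construction: the second half of the first
# 2**i values is the elementwise double of the first half.
assert first16[8:] == [2 * v for v in first16[:8]]
```

Taking `bin(n).count("1") % 2` for successive n likewise reproduces the Thue–Morse sequence mentioned above.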
Gould's sequence
[ "Physics", "Mathematics" ]
763
[ "Sequences and series", "Symmetry", "Functions and mappings", "Integer sequences", "Mathematical structures", "Mathematical analysis", "Factorial and binomial topics", "Recreational mathematics", "Mathematical objects", "Fractals", "Number theory", "Combinatorics", "Mathematical relations", ...
51,516,805
https://en.wikipedia.org/wiki/NGC%20183
NGC 183 is an elliptical galaxy located in the constellation Andromeda. It was discovered on November 5, 1866, by Truman Safford. References External links 0183 Elliptical galaxies Discoveries by Truman Safford Andromeda (constellation) 002298
NGC 183
[ "Astronomy" ]
55
[ "Andromeda (constellation)", "Constellations" ]
51,516,877
https://en.wikipedia.org/wiki/NGC%20184
NGC 184 is a spiral galaxy located in the constellation Andromeda. It was discovered on October 6, 1883, by Édouard Stephan. References External links 0184 Lenticular galaxies Andromeda (constellation) Astronomical objects discovered in 1883 Discoveries by Édouard Stephan 002309
NGC 184
[ "Astronomy" ]
57
[ "Andromeda (constellation)", "Constellations" ]
51,516,895
https://en.wikipedia.org/wiki/Meizu%20PRO%205
The Meizu PRO 5 is a smartphone designed and produced by the Chinese manufacturer Meizu, which runs on Flyme OS, Meizu's modified Android operating system. It is the company's first model of the flagship PRO series. It was unveiled on September 23, 2015, in Beijing. History First rumors about Meizu releasing a new flagship device featuring a Samsung Exynos SoC appeared in September 2015, after the device was listed on the AnTuTu benchmark. It was also mentioned that the upcoming flagship device would be called “Niux”. On September 9, 2015, Meizu officially announced that it would release a new flagship device on September 23, 2015. On September 11, 2015, Meizu VP Li Nan announced that the name of the new flagship device would be PRO 5. Release Pre-orders for the PRO 5 began after the launch event on September 23, 2015. The release of the device was delayed until November due to flooding of the factory. Features Flyme The Meizu PRO 5 was released with an updated version of Flyme OS, a modified operating system based on Android Lollipop. It features an alternative, flat design and improved one-handed usability. Hardware and design The Meizu PRO 5 features a Samsung Exynos 7420 Octa with an array of eight ARM Cortex CPU cores, an ARM Mali-T760 MP8 GPU and 3 GB or 4 GB of RAM, which scores 85,652 points on the AnTuTu benchmark. The PRO 5 was ranked first on the AnTuTu benchmark rating for Q3 2015. Meizu Global Brand Manager Ard Boudeling explained in November 2015 that Meizu decided to use the Samsung Exynos SoC because it is “currently [..] the only option if you want to build a genuine premium device”. The Meizu PRO 5 has a full-metal body, which measures x x and weighs . It has a slate form factor, being rectangular with rounded corners and has only one central physical button at the front. Unlike most other Android smartphones, the PRO 5 has neither capacitive buttons nor on-screen buttons. 
The functionality of these keys is implemented using a technology called mBack, which makes use of gestures with the physical button. This button also includes a fingerprint sensor called mTouch. The PRO 5 is available in four different color variants (grey body with black front, champagne gold body with white front and white body with black or white front) and comes with 32 GB or 64 GB of internal storage. The PRO 5 features a 5.7-inch AMOLED multi-touch capacitive touchscreen display with an FHD resolution of 1080 by 1920 pixels. The pixel density of the display is 387 ppi. In addition to the touchscreen input and the front key, the device has a volume/zoom control and the power/lock button on the right side and a 3.5mm TRS audio jack, which is powered by a dedicated Hi-Fi amplifier supporting 32-bit audio at sampling rates of up to 192 kHz. The PRO 5 uses a USB-C connector for both data connectivity and charging. The Meizu PRO 5 has two cameras. The rear camera has a resolution of 21.16 MP, a ƒ/2.2 aperture and a 6-element lens. Furthermore, the phase-detection autofocus of the rear camera is laser-supported. The front camera has a resolution of 5 MP, a ƒ/2.0 aperture and a 5-element lens. Reception The PRO 5 received mostly favorable reviews. Android Authority gave an overall rating of 8.8 out of 10 points, concluding that the PRO 5 “is easily the best flagship Meizu has released to date [..] and should certainly not be overlooked”. Android Headlines noted that it “is significantly more affordable than other similarly specced offerings out there”, concluding that the “Meizu PRO 5 is one of the best devices [..] reviewed to date”. See also Meizu Meizu PRO 5 Ubuntu Edition Meizu PRO 6 Comparison of smartphones References External links Official product page Meizu Android (operating system) devices Mobile phones introduced in 2015 Meizu smartphones Discontinued flagship smartphones
Meizu PRO 5
[ "Technology" ]
875
[ "Discontinued flagship smartphones", "Flagship smartphones" ]
51,516,985
https://en.wikipedia.org/wiki/NGC%20186
NGC 186 is a barred lenticular galaxy located 3.4 million light-years away in the constellation Pisces. It was discovered by Bindon Blood Stoney in 1852. References External links 0186 Barred lenticular galaxies Pisces (constellation) 0390 2291 +00-02-098
NGC 186
[ "Astronomy" ]
64
[ "Pisces (constellation)", "Constellations" ]
51,517,157
https://en.wikipedia.org/wiki/Eric%20Grunsky
Eric Christopher Grunsky is a Canadian mathematical geoscientist specialized in statistical petrology. Grunsky received the Felix Chayes Prize in 2005 from the International Association for Mathematical Geosciences and served as Editor-in-Chief of the journal Computers & Geosciences from 2006 to 2011. He was awarded the Krumbein Medal in 2012 by the International Association for Mathematical Geosciences. He is currently serving the International Association for Mathematical Geosciences (IAMG) as its appointed Secretary General. Education PhD University of Ottawa, 1988 MSc University of Toronto, 1978 BSc University of Toronto, 1973 Early professional career Awards and honors William Christian Krumbein Medal (2012) IAMG Distinguished Lectureship (2014) Books References Living people Scientists from Ontario Canadian statisticians Canadian geochemists 20th-century Canadian geologists 20th-century Canadian mathematicians 21st-century Canadian scientists University of Toronto alumni University of Ottawa alumni University of Waterloo alumni Academic staff of the University of Waterloo Geological Survey of Canada personnel Year of birth missing (living people)
Eric Grunsky
[ "Chemistry" ]
214
[ "Geochemists", "Canadian geochemists" ]
51,517,289
https://en.wikipedia.org/wiki/NGC%20187
NGC 187 is a barred spiral galaxy located around 3.2 million light-years away in the constellation Cetus. It was discovered in 1893 by William Herschel. References 0187 Barred spiral galaxies Cetus 002380
NGC 187
[ "Astronomy" ]
47
[ "Cetus", "Constellations" ]
51,517,341
https://en.wikipedia.org/wiki/NGC%20190
NGC 190 is a pair of interacting galaxies located in the constellation Pisces. The pair is the result of a collision between the two galaxies around 30 million years ago. It was discovered in 1894. References External links 0190 Interacting galaxies Pisces (constellation)
NGC 190
[ "Astronomy" ]
54
[ "Pisces (constellation)", "Constellations" ]
51,517,357
https://en.wikipedia.org/wiki/Nintendo%20hard
"Nintendo hard" is an informal term used to describe extreme difficulty in video games. It often refers to games with trial-and-error gameplay and limited or nonexistent saving of progress. The enduring term originated with Nintendo Entertainment System (NES) games from the mid-1980s to early 1990s, such as Ghosts 'n Goblins (1986), Contra (1988), Ninja Gaiden (1988), and Battletoads (1991). History The Nintendo hard difficulty of many games released for the Nintendo Entertainment System (NES) was influenced by the popularity of arcade games in the mid-1980s, a period in which players put countless coins in machines trying to beat a game that was brutally hard yet very enjoyable. The difficulty of many games released in the 1980s and 1990s has also been attributed to the hardware limitations affecting gameplay. Former Nintendo president Satoru Iwata said in an interview regarding how NES games were made: "Everyone involved in the production would spend all night playing it, and because they made games, they became good at them. So these expert gamers make the games, saying 'This is too easy'". Also, Damiano Gerli of Ars Technica observed that extreme difficulty made it possible for a game with little actual content (in terms of number of levels or opponents) to provide a long period of gameplay. This specific method of increasing length through difficulty was also employed to combat video game rentals, with some games being made more difficult to prevent them from being beaten within a rental period and thus costing the developer potential sales. The number of current games considered Nintendo hard decreased significantly with the fourth-generation 16-bit period of video gaming, with exceptions such as Super Star Wars (1992). According to Michael Enger, indie games like I Wanna Be the Guy (2007) and Super Meat Boy (2010) are an "obvious homage" to the Nintendo hard games of the NES era, labeled as "masocore". 
Analysis Arcade conversions and 2D platform games are commonly called Nintendo hard. The Houston Press described the Nintendo hard era as a period where games "universally felt like they hated us for playing them". GamesRadar journalist Maxwell McGee noted the variety of types of "Nintendo hard" games in the NES library: "A game can be difficult because it's genuinely hard, or because it demands you finish the entire adventure in one sitting. It can litter the playing field with spikes and bottomless pits ... or be so hopelessly obtuse you have no idea how to advance". He wrote that several NES games, such as Yo! Noid (1990), Silver Surfer (1990), and Teenage Mutant Ninja Turtles (1989) garnered their Nintendo hard difficulty "for all the wrong reasons". Journalist Michael Enger did not qualify games with challenges that came from poorly-designed gameplay as Nintendo hard, but rather only games that were well made and are replayable but still extremely hard. Examples The games in the following list have been recognized as being some of the hardest NES games and for some, all platforms. References Nintendo Entertainment System Video game terminology
Nintendo hard
[ "Technology" ]
629
[ "Computing terminology", "Video game terminology" ]
51,517,440
https://en.wikipedia.org/wiki/Pomeranchuk%27s%20theorem
Pomeranchuk's theorem, named after Soviet physicist Isaak Pomeranchuk, states that the difference of the cross sections for the interactions ab and āb (i.e., of particle a with particle b, and of particle b with the antiparticle ā) approaches 0 as s → ∞, where √s is the energy in the center-of-mass system. See also Pomeron References Eponymous theorems of physics Scattering theory
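Restoring standard notation (the symbols here are supplied for readability, not present in the extracted text), the statement can be written as:

```latex
\lim_{s \to \infty} \left[ \sigma_{ab}(s) - \sigma_{\bar{a}b}(s) \right] = 0
```

where σ denotes the total cross section and √s the center-of-mass energy.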
Pomeranchuk's theorem
[ "Physics", "Chemistry" ]
78
[ "Scattering theory", "Equations of physics", "Eponymous theorems of physics", "Scattering", "Particle physics", "Particle physics stubs", "Physics theorems" ]
54,365,986
https://en.wikipedia.org/wiki/Luigi%20Di%20Lella
Luigi Di Lella (born in Naples, 7 December 1937) is an Italian experimental particle physicist. He has been a staff member at CERN for over 40 years, and has played an important role in major experiments at CERN such as CAST and UA2. From 1986 to 1990 he acted as spokesperson for the UA2 Collaboration, which, together with the UA1 Collaboration, discovered the W and Z bosons in 1983. Education After moving from his childhood home in Naples, Italy, Di Lella studied physics at the University of Pisa and Scuola Normale Superiore in Pisa. Di Lella obtained his doctoral degree in 1959 from the University of Pisa on the subject of muon capture. Written under the supervision of Marcello Conversi, his thesis was on the measurement of longitudinal polarization of neutrons emitted from muon capture in nuclei (in Italian, unpublished). Career and Research Following his degree, Di Lella continued his work with Marcello Conversi, now at the University of Rome. He commuted between Rome and CERN, using the Synchrocyclotron at CERN as an accelerator for his experiments, before, in 1961, he secured a two-year position as a Fellow at CERN. In the 1950s physicists had started to wonder why processes like the decay of a positive muon to a positron and a photon (μ+ → e+ + γ), or electron emission from nuclear capture of a negative muon (μ− + N → e− + N), were not observed. Given the knowledge of that time, there was no reason why these reactions could not exist – energy, charge and spin are conserved. Di Lella took part in two consecutive experiments with increased sensitivity on the search for electron emission from nuclear capture of negative muons, strengthening the hypothesis that muon and electron have different quantum numbers (today named "lepton flavour"). 
The definitive experimental proof of this hypothesis was achieved in 1962 in the first high-energy neutrino experiment at the Brookhaven 30 GeV Alternating Gradient Synchrotron (AGS), by showing that neutrinos from pion decays only produced muons, and not electrons, when interacting in the detector, a result for which Leon Lederman, Mel Schwartz and Jack Steinberger shared the 1988 Nobel Prize in Physics. From 1964 to 1968 Di Lella held a position as a Research Physicist at CERN. During this time he took part in experiments at the Proton Synchrotron (PS), on high-energy elastic scattering of hadrons from polarized targets, discovering unexpected spin effects in the diffractive region, with opposite sign for π+ and π− projectiles. The following year, Di Lella became an Associate Professor of Physics at Columbia University, New York, a position he held for two years, until 1970. After receiving an offer from CERN for an indefinite appointment as a Research Physicist, Di Lella returned to CERN in 1970. The construction of the Intersecting Storage Rings (ISR) at CERN, the world's first hadron collider, had recently been completed. While still at Columbia University, Di Lella, together with physicists from CERN, Columbia and Rockefeller University, wrote a proposal for an ISR experiment to search for high-mass electron-positron pairs. The experiment, known as R-103, had two large detectors at 90 degrees to the beam directions at opposite azimuth angles, to detect electrons, positrons and photons and to measure their energies and angles. It soon found an unexpectedly high rate of high-energy photons from the decay of neutral mesons (π0) emitted at large angles to the beams. Because in the early 1970s there were no high-capacity hard disks, nor sophisticated data acquisition systems, data were written onto magnetic tapes at a rate that could not exceed 10 events per second (even so, a magnetic tape became full after 15 minutes of data taking). 
To keep the event rate below this limit, the electron detection threshold used in the event trigger was raised above 1.5 GeV, thus excluding from detection the yet undiscovered J/ψ particle with 3.1 GeV mass (this particle, a bound state of a charmed quark-antiquark pair, was discovered in 1974 at the Brookhaven AGS and at the electron-positron collider SPEAR at Stanford, and for this discovery the 1976 Nobel Prize in Physics was awarded to B. Richter and S.C.C. Ting). The production of high-energy π0 mesons at large angles was soon understood as due to the strong interaction of point-like constituents of the proton (quarks, antiquarks and gluons). Evidence for electrically charged, point-like proton constituents, interacting electromagnetically with electrons, had already been found in 1968 at the Stanford Linear Accelerator Center (SLAC) from deep-inelastic electron scattering experiments, for which J. Friedman, H. Kendall and R. Taylor received the 1990 Nobel Prize in Physics. The R-103 experiment found that these constituents behaved as point-like particles also when interacting strongly. The R-103 results were in contrast with earlier theories of proton-proton collisions, which predicted that only low-energy mesons would be produced at large angles. The experiment was a step towards understanding the strong interaction between hadron constituents. Unfortunately, the high rate of high-energy π0 production at large angles prevented the more important discovery of the J/ψ particle. In 1978 Di Lella was one of four senior physicists who proposed the UA2 experiment. The purpose of the experiment was to detect the production and decay of the W and Z bosons at the Proton–Antiproton Collider (SppS) — a modification of the Super Proton Synchrotron (SPS). UA2, together with the UA1 collaboration, succeeded in discovering these particles in 1983, leading to the 1984 Nobel Prize in Physics being awarded to Carlo Rubbia and Simon van der Meer. 
UA2 was also the first experiment to observe hadronic jet production at high transverse momentum from hadronic collisions. Di Lella was the spokesperson of the UA2 experiment from 1986 to 1990, when high-luminosity operation of the SppS was discontinued. During the 1990s Di Lella became interested in neutrino oscillations. He was among the proponents of the WA96/NOMAD experiment, which aimed at searching for νμ-ντ oscillations using high-energy neutrinos (predominantly νμ) from the CERN SPS, and he became the spokesperson for the experiment in 1995. Guided by a theoretical conjecture that the third neutrino might be the main component of dark matter in the Universe, they looked for oscillations over an average distance of ~650 m. They found no oscillations, and when these oscillations were first observed by the Super-Kamiokande experiment in Japan using neutrinos produced by cosmic rays in the Earth's atmosphere, they were found to occur over distances of the order of 1000 km (T. Kajita and A. McDonald shared the 2015 Nobel Prize in Physics for the discovery of neutrino oscillations). From 2000 until his retirement Di Lella took part in the CAST experiment (CERN Axion Solar Telescope experiment), searching for axions produced in the core of the Sun. After retiring in 2004, Di Lella has been a research associate in the schools where he took his first strides as a physicist: Scuola Normale Superiore in Pisa and the University of Pisa. He is still actively working at CERN, doing experiments on charged K-meson decay at the NA62 experiment. From 1991 to 2006 Di Lella was a supervisory editor of the journal Nuclear Physics B. Most Cited Publications UA2 Collaboration, 1983, 'Evidence for Z0 ---> e+ e- at the CERN anti-p p Collider', Phys. Lett. B, vol. 129, no. 1-2, pp. 130-140 UA2 Collaboration, 1983, 'Observation of Single Isolated Electrons of High Transverse Momentum in Events with Missing Transverse Energy at the CERN anti-p p Collider', Phys. Lett. B, vol. 122, no. 5-6, pp. 
476-485 CAST Collaboration, 2007, 'An Improved limit on the axion-photon coupling from the CAST experiment', JCAP, vol. 2007, no. 10 CAST Collaboration, 2004, 'First results from the CERN Axion Solar Telescope (CAST)', Phys. Rev. Lett., vol. 94, no. 12, pp. 1-5 UA2 Collaboration, 1992, 'An Improved determination of the ratio of W and Z masses at the CERN antiproton-proton collider', Phys. Lett. B, vol. 276, pp. 354-364 UA2 Collaboration, 1987, 'Measurement of the Standard Model Parameters from a Study of W and Z Bosons', Phys. Lett. B, vol. 186, pp. 440-451 UA2 Collaboration, 1982, 'Observation of Very Large Transverse Momentum Jets at the CERN anti-p p Collider', Phys. Lett. B, vol. 118, pp. 203-210 F.W. Büsser et al., 1973, 'Observation of pi0 mesons with large transverse momentum in high-energy proton proton collisions', Phys. Lett. B, vol. 46, pp. 471-476 References External links CERN interview Scientific publications of Luigi Di Lella on INSPIRE-HEP 1937 births People associated with CERN Living people Experimental physicists 20th-century Italian physicists Particle physicists University of Pisa alumni 21st-century Swiss physicists
Luigi Di Lella
[ "Physics" ]
2,014
[ "Particle physicists", "Particle physics" ]
54,366,204
https://en.wikipedia.org/wiki/Rod%20Smallwood%20%28medical%20engineer%29
Professor Rodney Harris Smallwood FREng, HonFRCP, FIET, FInstP, FIPEM (born 1945), known as Rod, is a British medical engineer and computer scientist. Smallwood graduated in Physics from University College London, then studied solid-state physics at Lancaster University, before working for the National Health Service in Sheffield and gaining a PhD from the University of Sheffield. He was appointed Professor of Medical Engineering and Head of the academic Medical Physics and Clinical Engineering Department at the University of Sheffield in 1995, took a computer science post in 2002, and subsequently became Professor of Computational Systems Biology and the Director of Research for Engineering. He has served as president of the Institute of Physics and Engineering in Medicine. References External links 1945 births Place of birth missing (living people) Fellows of the Institute of Physics and Engineering in Medicine Fellows of the Royal Academy of Engineering Fellows of the Institution of Engineering and Technology Living people Alumni of the University of Sheffield
Rod Smallwood (medical engineer)
[ "Engineering" ]
191
[ "Institution of Engineering and Technology", "Fellows of the Institution of Engineering and Technology" ]
54,366,296
https://en.wikipedia.org/wiki/Pierre%20Darriulat
Pierre Darriulat (born 17 February 1938) is a French experimental particle physicist. As a staff member at CERN, he contributed to several prestigious experiments. He was the spokesperson of the UA2 collaboration from 1981 to 1986, during which time the UA2 collaboration, together with the UA1 collaboration, discovered the W and Z bosons in 1983. Education Darriulat studied at École Polytechnique. He performed his military service in the French Navy, and between 1962 and 1964 he spent two years at Berkeley, United States, before receiving a PhD from the University of Orsay in 1965 for research done at Berkeley. Career and research Until the mid-1960s, Darriulat did his research on nuclear physics and took part in several experiments on the scattering of deuterons and alpha particles. Darriulat was employed at the Saclay Nuclear Research Centre, France. After a few years at CERN as a visiting physicist and CERN fellow, Darriulat was offered a tenured position in 1971. For six years he was a member of the research group of Carlo Rubbia, which made essential contributions to the physics of CP violation in the neutral kaon sector. He then took part in experiments conducted at the Intersecting Storage Rings (ISR), the world's first hadron collider. Using his experience from the ISR, Darriulat and collaborators proposed the UA2 experiment in 1978 at the commissioned Proton–Antiproton Collider, a modification of the Super Proton Synchrotron. Darriulat acted as the spokesperson for the experiment from 1981 to 1986. In 1983 the UA2 collaboration, together with the UA1 collaboration, discovered the W and Z bosons, an important milestone in modern particle physics, as it confirmed the electroweak theory. The discovery led to the 1984 Nobel Prize in Physics being awarded to Carlo Rubbia and Simon van der Meer for their decisive contributions to the design and construction of the proton-antiproton collider.
Prior to the discovery, the UA2 collaboration made the first observation of the emission of quarks and gluons in the form of hadronic jets, an important piece of experimental support for the theory of quantum chromodynamics. From 1987 to 1994 Darriulat held the position of Research Director at CERN, during which time the Large Electron–Positron Collider (LEP) began its operation. Subsequently, Darriulat turned to solid-state physics, conducting research in the field of superconductivity on the properties of niobium films. In 2000, Darriulat launched a research group in Vietnam, in which he is still active. The group does research in the field of astrophysics. They first did research on extreme energy cosmic rays in collaboration with the Pierre Auger Observatory. Subsequently, the group turned to millimeter/submillimeter radio astronomy, studying stellar physics and galaxies of the early Universe. The group is now the Department of AstroPhysics (DAP) of the Vietnam National Space Center, all part of the larger Vietnam Academy of Science and Technology. In 2011 Darriulat gave a talk, The ISR Legacy, at the international symposium on subnuclear physics held in Vatican City. Awards and honors 1973 Joliot-Curie Award 1986 Member of the French Academy of Sciences 1987 Grand Prix de l'Académie des Sciences: prix du Commissariat à l'énergie atomique 1985 Award from the French Academy of Sciences 1997 Nominated for the French Legion of Honour 2008 André Lagarrigue Award 2014 Vietnamese Friendship Medal 2016 Phan Chau Trinh Prize for education and culture Honorary degree from the University of Pavia Most notable publications Darriulat, P. (2007). Réflexions sur la science contemporaine. Les Ulis: EDP Sciences Darriulat, P. and Chohan, V. (2017). The CERN Antiproton Programme: Imagination and Audacity Rewarded. In: Technology Meets Research. Hamburg: World Scientific, pp. 179-215 Darriulat, P. and Di Lella, L. (2015). Revealing Partons in Hadrons: From the ISR to the SPS Collider.
In: 60 years of CERN experiments and discoveries. World Scientific, pp. 313-341 UA2 Collaboration (1982), 'Observation of single isolated electrons of high transverse momentum in events with missing transverse energy at the CERN ppbar collider', Phys. Lett. B, vol. 122, no. 5, pp. 476-485 UA2 Collaboration (1983), 'Evidence for Z0 ---> e+ e- at the CERN anti-p p Collider', Phys. Lett. B, vol. 129, no. 1-2, pp. 130-140 UA2 Collaboration, 1987, 'Measurement of the Standard Model Parameters from a Study of W and Z Bosons', Phys. Lett. B, vol. 186, pp. 440-451 UA2 Collaboration, 1982, 'Observation of Very Large Transverse Momentum Jets at the CERN anti-p p Collider', Phys. Lett. B, vol. 118, pp. 203-210 Darriulat, P. (2016), Looking at science and education in my second homeland, The Gioi, Viet Nam References External links The W and Z particles: a personal recollection (2004) Scientific publications of Pierre Darriulat on INSPIRE-HEP Current institution 1938 births People associated with CERN Living people Experimental physicists French physicists Particle physicists Members of the French Academy of Sciences
Pierre Darriulat
[ "Physics" ]
1,162
[ "Particle physicists", "Particle physics" ]
54,366,399
https://en.wikipedia.org/wiki/Sasanian%20defense%20lines
The defense lines of the Sasanians were part of their military strategy and tactics. They were networks of fortifications, walls, and/or ditches built opposite the territory of their enemies. These defense lines are known from tradition and archaeological evidence. The fortress systems of the Western, Arabian, and Central Asian fronts served both defensive and offensive functions. Mesopotamia The rivers Euphrates, Great Zab, and Little Zab acted as natural defenses for Mesopotamia (Asoristan). Sasanian development of irrigation systems in Mesopotamia further acted as water defense lines, notably the criss-crossing trunk canals in Khuzestan and the northern extension of the Nahrawan Canal, known as the Cut of Khusrau, which made the Sasanian capital Ctesiphon virtually impregnable in the late Sasanian period. In the early period of the Sasanian Empire, a number of buffer states existed between Persia and the Roman Empire, which played a major role in Roman-Persian relations. Both empires gradually absorbed these states, and replaced them with an organized defense system run by the central government and based on a line of fortifications (the limes) and the fortified frontier cities, such as Dara, Nisibis (Nusaybin), Amida, Singara, Hatra, Edessa, Bezabde, Circesium, Rhesaina (Theodosiopolis), Sergiopolis (Resafa), Callinicum (Raqqa), Dura-Europos, Zenobia (Halabiye), Sura, Theodosiopolis (Erzurum), Sisauranon, etc. According to R. N. Frye, the expansion of the Persian defensive system by Shapur II () was probably in imitation of Diocletian's construction of the limes of the Syrian and Mesopotamian frontiers of the Roman Empire over the previous decades. The defense line ran along the edge of the cultivated land facing the Syrian Desert. Along the Euphrates (in Arbayistan), there was a series of heavily fortified cities serving as a line of defense. During the early years of Shapur II (), nomadic Arabian tribesmen made incursions into Persia from the south.
After his successful campaign in Arabia (325) and having secured the coasts around the Persian Gulf, Shapur II established a defensive system in southern Mesopotamia to prevent raids via land. The defensive line, called the Wall of the Arabs (Middle Persian: War ī Tāzīgān; in Arabic, Khandaq Sābūr, literally "Ditch of Shapur", also possibly "Wall of Shapur"), consisted of a large moat, probably also with an actual wall on the Persian side, with watchtowers and a network of fortifications, at the edge of the Arabian Desert, located between modern-day Basra and the Persian Gulf. The defense line ran from Hit to Basra, on the margin of fertile lands west of the Euphrates. It included small forts at key spots, acting as outliers for larger fortifications, some of which have been uncovered. The region and its defense line were apparently governed by a marzban. In the second half of Sasanian history, the Lakhmid/Nasrid chiefs also became its rulers. They would have protected the area against the Romans and against the Romans' Arab clients, the Ghassanids, sheltering the agricultural lands of Sasanian Mesopotamia from the nomadic Arabs. The Sasanians eventually discontinued the maintenance of this defense line, since they perceived that the main threats to the empire lay elsewhere. However, in 633, the empire's ultimate conquerors actually came from this direction. In the Caucasus Massive fortification activity was conducted in the Caucasus during the reign of Kavad I () and later his son Khosrow I (), in response to pressure from peoples in the north, such as the Alans. Key components of this defensive system were the strategic passes of Darial in the Central Caucasus and Derbent just west of the Caspian Sea, the only two practicable crossings of the Caucasus ridge through which land traffic between the Eurasian Steppe and the Middle East was conducted. A formal system of rulership was also created in the region by Khosrow I, and the fortifications were assigned to local rulers.
This is reflected in titles like "Sharvān-shāh" ("King of Shirvan"), "Tabarsarān-shāh", "Alān-shāh/Arrānshāh", and "Lāyzān-shāh". Pass of Derbent The pass of Derbent (its Middle Iranian name is uncertain) was located on a narrow, three-kilometer strip of land in the North Caucasus between the Caspian Sea and the Caucasus mountains. It was in the Sasanian sphere of influence after the victory over the Parthians and the conquest of Caucasian Albania by Shapur I (). During periods when the Sasanians were distracted by war with the Byzantines or conflicts with the Hephthalites in the east, the northern tribes succeeded in advancing into the Caucasus. A mud-brick wall (maximum thickness 8 m, maximum height ca. 16 m) near Torpakh-Kala has been attributed to Yazdegerd II () as the first Sasanian attempt to block the Derbent pass, though it may have been a reconstruction of earlier defenses. It was destroyed in a rebellion in 450. With a length of 3,650 m on the north side and 3,500 m on the south, and featuring seven gates, massive rectangular and round towers, and outworks, the Wall of Derbent connected 30 already existing fortifications. Today the northern wall and the main city walls remain, but most of the southern wall is lost. The construction techniques used resemble those of Takht-e Soleymān, also built in the same period. Derbent was also the seat of a Sasanian marzban. The Derbent Wall was the most prominent Sasanian defensive structure in the Caucasus. Later Muslim Arab historians tended to attribute the entire defense line to Khosrow I, and included it among the seven wonders of the world. In the Middle Ages, Alexander the Great was credited with having sealed off the Derbent pass against the tribes of Gog and Magog advancing from the north; whence the names "Gate of Alexander" and the "Caspian Gates" for the Derbent pass. Apzut Kawat (Gilgilchay)
The second known Sasanian reconstruction of the fortifications in the Caucasus is attributed to the second reign of Kavad I (), who constructed the long fortification walls at Besh Barmak (recorded as the Barmaki Wall in Islamic sources), Shabran and Gilgilchay (recorded in Arabic as Sur al-Tin in Islamic sources), also called the Apzut Kawat (recorded in Armenian sources, from Middle Persian *Abzūd Kawād, literally "Kavad increased [in Glory]" or "has prospered"). The lines were constructed using a combination of mud brick, stone blocks, and baked bricks. The construction was carried out in three phases, extending to the end of the reign of Khosrow I, but was never actually completed. The defensive line is about 60 km in length, from the Caspian Sea to the foot of Mount Babadagh. In 1980, the Gilgilchay wall was excavated by an expedition of Azeri archaeologists from the Institute of History of Azerbaijan. Not far from the Gilgilchay wall is the Shabran wall, located near Shabran village. Darial Gorge Darial Gorge (Arrānān dar, meaning "Gate of the Alans"), located in the Caucasus, fell into Sasanian hands in 252/253 as the Sasanian Empire conquered and annexed Iberia. It was fortified by both Romans and Persians. The fortification was known as the Gate of the Alans, the Iberian Gates, and the Caucasian Gates. South-east Caspian For the defense of the Central Asian border, a different strategy was needed: the maximum concentration of forces in large strongholds, with Marv as the outer bulwark, backed by Nishapur. The defense line was based on a three-tier system that allowed the enemy to penetrate deep into Sasanian territory and to be channeled into designated kill zones between the tiers of forts. The mobile aswaran cavalry would then carry out counter-attacks from strategically positioned bases, notably Nev-Shapur (Nishapur). Kaveh Farrokh likens the strategy to the Central Asian tactic of the Parthian shot: a feigned retreat followed by a counter-attack.
Great Wall of Gorgan The Great Wall of Gorgan (or simply the Gorgan Wall) was located north of the Gorgan River in Hyrcania, at a geographic narrowing between the Caspian Sea and the mountains of northeastern Persia. It is widely attributed to Khosrow I, though it may date back to the Parthian period. It was on the nomadic route from the northern steppes to the Gorgan Plain and the Persian heartland, probably protecting the empire from the peoples to the north, in particular the Hephthalites. The defensive line was long and wide, featuring over 30 fortresses spaced at intervals of between . It is described as "amongst the most ambitious and sophisticated frontier walls" ever built in the world, and the most important fortification in Persia. The garrison size for the wall is estimated to be 30,000 strong. Wall of Tammisha The Wall of Tammisha (also Tammishe), with a length of around 11 km, stretched from the Gorgan Bay to the Alborz mountains, in particular to the ruined town of Tammisha at the foot of the mountains. There is another fortified wall 22 km to the west running parallel to it, between the modern cities of Bandar-e Gaz and Behshahr. The Wall of Tammisha is considered to be the second line of defense after the Gorgan Wall. Other defense lines The limes of Sistan; the Khurasan Wall, a defense line west of modern-day Afghanistan; and the Gawri Wall, a wall near the modern-day Iran–Iraq border, possibly built in the Parthian or Sasanian period. Interpretation Recently, Touraj Daryaee has suggested the defensive walls may have had symbolic, ideological and psychological dimensions as well, connecting the practice of enclosing the Iranian (ēr) lands against non-Iranian (anēr) barbarians to the cultural elements and ideas present among Iranians since ancient times, such as the idea of walled paradise gardens. See also Roman military frontiers and fortifications Marzban Gog and Magog References Further reading R. N.
Frye, “The Sasanian System of Walls for Defense,” Studies in Memory of Gaston Wiet, Jerusalem, 1977. Military history of the Sasanian Empire Geography of the Sasanian Empire Walls Persian-Caucasian architecture
Sasanian defense lines
[ "Engineering" ]
2,253
[ "Fortification lines", "Sasanian defense lines" ]
54,366,493
https://en.wikipedia.org/wiki/Skeletocutis%20subvulgaris
Skeletocutis subvulgaris is a species of poroid, white-rot fungus in the family Polyporaceae. Found in China, it was described as a new species in 1998 by mycologist Yu-Cheng Dai. It was named for its resemblance to Skeletocutis vulgaris. The type collection was made in Hongqi District, Jilin Province, where it was found growing on the rotting wood of Korean pine (Pinus koraiensis). Description The fungus has a soft, thin, crust-like fruit body forming strips that measure long by wide; these strips are sometimes joined to make larger patches. The pore surface is whitish, with small pores numbering 6–8 per millimetre. S. subvulgaris has a dimitic hyphal system. Some of the hyphae of the dissepiment edges (the tissue between the pores) are encrusted with spiny crystals. The skeletal hyphae have a distinct lumen, which helps distinguish this species from the similar S. vulgaris. Spores of S. subvulgaris are roughly cylindrical, thin-walled and hyaline, and measure 3.1–4.1 by 1.1–1.6 μm. References Fungi described in 1998 Fungi of China subvulgaris Taxa named by Yu-Cheng Dai Fungus species
Skeletocutis subvulgaris
[ "Biology" ]
280
[ "Fungi", "Fungus species" ]
54,366,688
https://en.wikipedia.org/wiki/Local%20linearization%20method
In numerical analysis, the local linearization (LL) method is a general strategy for designing numerical integrators for differential equations based on a local (piecewise) linearization of the given equation on consecutive time intervals. The numerical integrators are then iteratively defined as the solution of the resulting piecewise linear equation at the end of each consecutive interval. The LL method has been developed for a variety of equations such as ordinary, delayed, random and stochastic differential equations. The LL integrators are a key component in the implementation of inference methods for the estimation of unknown parameters and unobserved variables of differential equations given time series of (potentially noisy) observations. The LL schemes are well suited to complex models in a variety of fields such as neuroscience, finance, forestry management, control engineering, mathematical statistics, etc. Background Differential equations have become an important mathematical tool for describing the time evolution of many phenomena, e.g., the rotation of the planets around the Sun, the dynamics of asset prices in the market, the firing of neurons, the propagation of epidemics, etc. However, since the exact solutions of these equations are usually unknown, numerical approximations to them obtained by numerical integrators are necessary. Currently, many applications in engineering and applied sciences focused on dynamical studies demand the development of efficient numerical integrators that preserve, as much as possible, the dynamics of these equations. With this main motivation, the Local Linearization integrators have been developed. High-order local linearization method The high-order local linearization (HOLL) method is a generalization of the Local Linearization method oriented to obtaining high-order integrators for differential equations that preserve the stability and dynamics of the linear equations.
The integrators are obtained by splitting, on consecutive time intervals, the solution x of the original equation into two parts: the solution z of the locally linearized equation plus a high-order approximation of the residual . Local linearization scheme A Local Linearization (LL) scheme is the final recursive algorithm that allows the numerical implementation of a discretization derived from the LL or HOLL method for a class of differential equations. LL methods for ODEs Consider the d-dimensional Ordinary Differential Equation (ODE) with initial condition , where is a differentiable function. Let be a time discretization of the time interval with maximum stepsize h such that and . After the local linearization of the equation (4.1) at the time step the variation of constants formula yields where results from the linear approximation, and is the residual of the linear approximation. Here, and denote the partial derivatives of f with respect to the variables x and t, respectively, and Local linear discretization For a time discretization , the Local Linear discretization of the ODE (4.1) at each point is defined by the recursive expression The Local Linear discretization (4.3) converges with order 2 to the solution of nonlinear ODEs, but it matches the solution of linear ODEs exactly. The recursion (4.3) is also known as the Exponential Euler discretization. High-order local linear discretizations For a time discretization , a high-order local linear (HOLL) discretization of the ODE (4.1) at each point is defined by the recursive expression where is an order (> 2) approximation to the residual r. The HOLL discretization (4.4) converges with order to the solution of nonlinear ODEs, but it matches the solution of linear ODEs exactly.
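The Exponential Euler recursion (4.3) can be sketched in code. For a scalar autonomous ODE x' = f(x), one LL step reduces to x_{n+1} = x_n + h phi_1(hJ) f(x_n), with J = f'(x_n) and phi_1(z) = (e^z - 1)/z. The following Python sketch applies it to the logistic equation; the test problem, stepsize and function names are illustrative choices, not from the source:

```python
import math

def phi1(z):
    """phi_1(z) = (exp(z) - 1)/z, with a Taylor fallback near z = 0."""
    if abs(z) < 1e-8:
        return 1.0 + z / 2.0 + z * z / 6.0
    return math.expm1(z) / z

def ll_step(f, df, x, h):
    """One LL (Exponential Euler) step for a scalar autonomous ODE
    x' = f(x): linearize at x, then advance the linear problem exactly."""
    J = df(x)                         # Jacobian (here a scalar) at x
    return x + h * phi1(h * J) * f(x)

# Logistic test equation x' = x(1 - x); the exact solution is a sigmoid.
f = lambda x: x * (1.0 - x)
df = lambda x: 1.0 - 2.0 * x

x, h = 0.5, 0.1
for _ in range(10):                   # integrate from t = 0 to t = 1
    x = ll_step(f, df, x, h)

exact = 1.0 / (1.0 + math.exp(-1.0))  # x(1) for x(0) = 0.5
print(x, abs(x - exact))
```

Note that for a linear right-hand side the step is exact, consistent with the statement above that the discretization matches linear ODEs.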
HOLL discretizations can be derived in two ways: 1) (quadrature-based) by approximating the integral representation (4.2) of r; and 2) (integrator-based) by using a numerical integrator for the differential representation of r defined by for all , where HOLL discretizations are, for instance, the following: Locally Linearized Runge–Kutta discretization, which is obtained by solving (4.5) via an s-stage explicit Runge–Kutta (RK) scheme with coefficients . Local linear Taylor discretization, which results from the approximation of in (4.2) by its order-p truncated Taylor expansion. Multistep-type exponential propagation discretization, which results from the interpolation of in (4.2) by a polynomial of degree p on , where denotes the j-th backward difference of . Runge–Kutta-type exponential propagation discretization, which results from the interpolation of in (4.2) by a polynomial of degree p on . Linearized exponential Adams discretization, which results from the interpolation of in (4.2) by a Hermite polynomial of degree p on . Local linearization schemes All numerical implementations of the LL (or of a HOLL) discretization involve approximations to integrals of the form , where A is a d × d matrix. Every numerical implementation of the LL (or of a HOLL) of any order is generically called a Local Linearization scheme. Computing integrals involving the matrix exponential Among the algorithms available to compute the integrals , those based on rational Padé and Krylov subspace approximations of the matrix exponential are preferred. For this, a central role is played by the expression , where are d-dimensional vectors, , , being the d-dimensional identity matrix. If denotes the (p; q)-Padé approximation of and k is the smallest natural number such that , then . If denotes the (m; p; q; k) Krylov–Padé approximation of , then , where is the dimension of the Krylov subspace. Order-2 LL schemes where the matrices , L and r are defined as and with .
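The integrals of the form int_0^h e^{As} ds r that every LL scheme needs can be obtained from a single exponential of an augmented matrix, which is the quantity a Padé-type expm routine is then applied to. Below is a minimal pure-Python sketch; the naive Taylor-plus-scaling-and-squaring expm and the diagonal test matrix are illustrative stand-ins for a production Padé implementation such as SciPy's scipy.linalg.expm:

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def expm(A, terms=20):
    """Naive matrix exponential: scaling and squaring with a truncated
    Taylor series.  Adequate for the small, well-scaled matrices used
    here; production codes use a rational Pade approximation instead."""
    n = len(A)
    norm = max(sum(abs(a) for a in row) for row in A)
    s = max(0, math.ceil(math.log2(norm)) + 1) if norm > 0 else 0
    B = [[a / 2.0 ** s for a in row] for row in A]        # scale down
    E = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in E]
    for k in range(1, terms):                             # Taylor sum
        term = [[t / k for t in row] for row in mat_mul(term, B)]
        E = [[e + t for e, t in zip(re, rt)] for re, rt in zip(E, term)]
    for _ in range(s):                                    # square back up
        E = mat_mul(E, E)
    return E

# Augmented-matrix identity: exp(h * [[A, r], [0, 0]]) carries
# int_0^h exp(A s) r ds  (= phi_1(hA) h r) in its top-right column.
A = [[-1.0, 0.0], [0.0, -2.0]]   # diagonal, so the answer is known
r = [1.0, 1.0]
h = 1.0
M = [A[0] + [r[0]], A[1] + [r[1]], [0.0, 0.0, 0.0]]
E = expm([[h * m for m in row] for row in M])
integral = [E[0][2], E[1][2]]    # components (1 - e^{-h k}) / k, k = 1, 2
print(integral)
```

For the diagonal test matrix the integral can be checked component-wise against (1 - e^{-h k})/k, which is what makes this toy problem a convenient sanity check.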
For large systems of ODEs Order-3 LL-Taylor schemes where for autonomous ODEs the matrices and are defined as . Here, denotes the second derivative of f with respect to x, and p + q > 2. For large systems of ODEs Order-4 LL-RK schemes where and with and p + q > 3. For large systems of ODEs, the vector in the above scheme is replaced by with Locally linearized Runge–Kutta scheme of Dormand and Prince Naranjo-Noda; Jimenez J.C. (2021). "Locally linearized Runge–Kutta method of Dormand and Prince for large systems of initial value problems". J. Comput. Phys. 426: 109946. doi:10.1016/j.jcp.2020.109946. where s = 7 is the number of stages, with , and are the Runge–Kutta coefficients of Dormand and Prince and p + q > 4. The vector in the above scheme is computed by a Padé or Krylov–Padé approximation for small or large systems of ODEs, respectively. Stability and dynamics By construction, the LL and HOLL discretizations inherit the stability and dynamics of the linear ODEs, but this is not the case for the LL schemes in general. With , the LL schemes (4.6)-(4.9) are A-stable. With q = p + 1 or q = p + 2, the LL schemes (4.6)-(4.9) are also L-stable. For linear ODEs, the LL schemes (4.6)-(4.9) converge with order p + q. In addition, with p = q = 6 and = d, all the LL schemes described above yield the "exact computation" (up to the precision of floating-point arithmetic) of linear ODEs on current personal computers. This includes stiff and highly oscillatory linear equations. Moreover, the LL schemes (4.6)-(4.9) are regular for linear ODEs and inherit the symplectic structure of Hamiltonian harmonic oscillators. These LL schemes are also linearization preserving, and display a better reproduction of the stable and unstable manifolds around hyperbolic equilibrium points and periodic orbits than other numerical schemes with the same stepsize. For instance, Figure 1 shows the phase portrait of the ODEs with , and , and its approximation by various schemes.
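The A-stability of the LL schemes on linear problems can be illustrated on the classic stiff test equation x' = lam * x: the LL step applies the exact propagator e^{h*lam} and is stable for any stepsize, while the explicit Euler amplification factor 1 + h*lam has magnitude greater than 1 whenever h*lam < -2. (lam = -50, h = 0.1 and the step count below are illustrative choices.)

```python
import math

# Stiff linear test problem x' = lam * x, x(0) = 1, with lam = -50.
lam, h, x0 = -50.0, 0.1, 1.0

x_euler = x_ll = x0
for _ in range(20):
    x_euler += h * lam * x_euler   # explicit Euler: amplification 1 + h*lam = -4
    x_ll *= math.exp(h * lam)      # LL step: exact propagator e^{h*lam}

print(x_euler, x_ll)               # Euler blows up; LL decays toward 0
```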
This system has two stable stationary points and one unstable stationary point in the region . LL methods for DDEs Consider the d-dimensional Delay Differential Equation (DDE) with m constant delays and initial condition for all , where f is a differentiable function, is the segment function defined as for all , is a given function, and Local linear discretization For a time discretization , the Local Linear discretization of the DDE (5.1) at each point is defined by the recursive expression where is the segment function defined as and is a suitable approximation to for all such that Here, are constant matrices and are constant vectors. denote, respectively, the partial derivatives of f with respect to the variables t and x, and . The Local Linear discretization (5.2) converges to the solution of (5.1) with order if approximates with order for all . Local linearization schemes Depending on the approximations and on the algorithm to compute , different Local Linearization schemes can be defined. Every numerical implementation of a Local Linear discretization is generically called a local linearization scheme. Order-2 polynomial LL schemes where the matrices and are defined as and , and . Here, the matrices , , and are defined as in (5.2), but replacing by and where with , is the Local Linear Approximation to the solution of (5.1) defined through the LL scheme (5.3) for all and by for . For large systems of DDEs with and . Fig. 2 illustrates the stability of the LL scheme (5.3) and that of an explicit scheme of similar order in the integration of a stiff system of DDEs. LL methods for RDEs Consider the d-dimensional Random Differential Equation (RDE) with initial condition , where is a k-dimensional separable finite continuous stochastic process, and f is a differentiable function. Suppose that a realization (path) of is given.
Local Linear discretization For a time discretization , the Local Linear discretization of the RDE (6.1) at each point is defined by the recursive expression where and is an approximation to the process for all . Here, and denote the partial derivatives of with respect to and , respectively. Local linearization schemes Depending on the approximations to the process and the algorithm to compute , different Local Linearization schemes can be defined. Every numerical implementation of the local linear discretization is generically called a local linearization scheme. LL schemes Jimenez J.C.; Carbonell F. (2009). "Rate of convergence of local linearization schemes for random differential equations". BIT Numer. Math. 49 (2): 357–373. doi:10.1007/s10543-009-0225-0. where the matrices are defined as , , and p + q > 1. For large systems of RDEs, . The convergence rate of both schemes is , where is the exponent of the Hölder condition of . Figure 3 presents the phase portrait of the RDE and its approximation by two numerical schemes, where denotes a fractional Brownian process with Hurst exponent H=0.45. Strong LL methods for SDEs Consider the d-dimensional Stochastic Differential Equation (SDE) with initial condition , where the drift coefficient and the diffusion coefficient are differentiable functions, and is an m-dimensional standard Wiener process. Local linear discretization For a time discretization , the order- (=1,1.5) Strong Local Linear discretization of the solution of the SDE (7.1) is defined by the recursive relation where and Here, denote the partial derivatives of with respect to the variables and t, respectively, and the Hessian matrix of with respect to . The strong Local Linear discretization converges with order (= 1, 1.5) to the solution of (7.1).
High-order local linear discretizations After the local linearization of the drift term of (7.1) at , the equation for the residual is given by for all , where A high-order local linear discretization of the SDE (7.1) at each point is then defined by the recursive expression where is a strong approximation to the residual of order higher than 1.5. The strong HOLL discretization converges with order to the solution of (7.1). Local linearization schemes Depending on the way of computing , and , different numerical schemes can be obtained. Every numerical implementation of a strong Local Linear discretization of any order is generically called a Strong Local Linearization (SLL) scheme. Order 1 SLL schemes where the matrices , and are defined as in (4.6), is an i.i.d. zero-mean Gaussian random variable with variance , and p + q > 1. For large systems of SDEs, in the above scheme is replaced by . Order 1.5 SLL schemes where the matrices , and are defined as , is an i.i.d. zero-mean Gaussian random variable with variance and covariance , and p + q > 1. For large systems of SDEs, in the above scheme is replaced by . Order 2 SLL-Taylor schemes where , , and are defined as in the order-1 SLL schemes, and is an order-2 approximation to the multiple Stratonovich integral . Order 2 SLL-RK schemes For SDEs with a single Wiener noise (m=1) where with . Here, for low-dimensional SDEs, and for large systems of SDEs, where , , , and are defined as in the order-2 SLL-Taylor schemes, p + q > 1 and . Stability and dynamics By construction, the strong LL and HOLL discretizations inherit the stability and dynamics of the linear SDEs, but this is not the case for the strong LL schemes in general. LL schemes (7.2)-(7.5) with are A-stable, including stiff and highly oscillatory linear equations.
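A minimal sketch of the order-1 SLL step for a scalar linear SDE with additive noise (an Ornstein-Uhlenbeck process): for linear drift the LL part e^{a h} x_n of the step is exact, and the order-1 scheme drives the noise with a plain Gaussian increment. All parameter values, the seed and the Monte Carlo tolerance are illustrative choices, not from the source:

```python
import math
import random

# Ornstein-Uhlenbeck test SDE: dx = a x dt + sigma dW, with a < 0.
a, sigma, x0 = -1.0, 0.5, 1.0
h, n_steps, n_paths = 0.05, 20, 4000   # integrate each path to T = 1

random.seed(1)
total = 0.0
for _ in range(n_paths):
    x = x0
    for _ in range(n_steps):
        # Order-1 SLL step: the drift is advanced by the exact linear
        # propagator e^{a h}; the noise enters as a Gaussian increment.
        x = math.exp(a * h) * x + sigma * math.sqrt(h) * random.gauss(0.0, 1.0)
    total += x
mean_T = total / n_paths
print(mean_T)                          # should be close to x0 * e^{a T}
```

Because the drift propagator is exact, the sample mean decays like x0 e^{aT} regardless of stepsize, one instance of the stability properties claimed above for linear SDEs.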
Moreover, for linear SDEs with random attractors, these schemes also have a random attractor that converges in probability to the exact one as the stepsize decreases and preserve the ergodicity of these equations for any stepsize. These schemes also reproduce essential dynamical properties of simple and coupled harmonic oscillators such as the linear growth of energy along the paths, the oscillatory behavior around 0, the symplectic structure of Hamiltonian oscillators, and the mean of the paths. For nonlinear SDEs with small noise (i.e., (7.1) with ), the paths of these SLL schemes are basically the nonrandom paths of the LL scheme (4.6) for ODEs plus a small disturbance related to the small noise. In this situation, the dynamical properties of that deterministic scheme, such as the linearization preserving and the preservation of the exact solution dynamics around hyperbolic equilibrium points and periodic orbits, become relevant for the paths of the SLL scheme. For instance, Fig 4 shows the evolution of domains in the phase plane and the energy of the stochastic oscillator and their approximations by two numerical schemes. Weak LL methods for SDEs Consider the d-dimensional stochastic differential equation with initial condition , where the drift coefficient and the diffusion coefficient are differentiable functions, and is an m-dimensional standard Wiener process. Local Linear discretization For a time discretization , the order- Weak Local Linear discretization of the solution of the SDE (8.1) is defined by the recursive relation where with and is a zero mean stochastic process with variance matrix Here, , denote the partial derivatives of with respect to the variables and t, respectively, the Hessian matrix of with respect to , and . The weak Local Linear discretization converges with order (=1,2) to the solution of (8.1). Local Linearization schemes Depending on the way of computing and different numerical schemes can be obtained. 
Every numerical implementation of the Weak Local Linear discretization is generically called a Weak Local Linearization (WLL) scheme. Order 1 WLL scheme Carbonell F.; Jimenez J.C.; Biscay R.J. (2006). "Weak local linear discretizations for stochastic differential equations: convergence and numerical schemes". J. Comput. Appl. Math. 197: 578–596. doi:10.1016/j.cam.2005.11.032. where, for SDEs with autonomous diffusion coefficients, , and are the submatrices defined by the partitioned matrix , with and is a sequence of d-dimensional independent two-point distributed random vectors satisfying . Order 2 WLL scheme where , and are the submatrices defined by the partitioned matrix with and Stability and dynamics By construction, the weak LL discretizations inherit the stability and dynamics of the linear SDEs, but this is not the case for the weak LL schemes in general. WLL schemes, with preserve the first two moments of the linear SDEs, and inherit the mean-square stability or instability that such solutions may have. This includes, for instance, the equations of coupled harmonic oscillators driven by random force, and large systems of stiff linear SDEs that result from the method of lines for linear stochastic partial differential equations. Moreover, these WLL schemes preserve the ergodicity of the linear equations, and are geometrically ergodic for some classes of nonlinear SDEs. For nonlinear SDEs with small noise (i.e., (8.1) with ), the solutions of these WLL schemes are basically the nonrandom paths of the LL scheme (4.6) for ODEs plus a small disturbance related to the small noise. In this situation, the dynamical properties of that deterministic scheme, such as the linearization preserving and the preservation of the exact solution dynamics around hyperbolic equilibrium points and periodic orbits, become relevant for the mean of the WLL scheme. For instance, Fig. 5 shows the approximate mean of the SDE computed by various schemes.
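The "two-point distributed random vectors" mentioned above are a standard device in weak schemes: each Gaussian Wiener increment is replaced by a cheap random variable that matches its first moments. The ±√h scaling in this sketch is the usual choice for such two-point variables, assumed here rather than taken from the elided formulas.

```python
import math, random

def two_point_increment(h, rng):
    """A two-point substitute for a Wiener increment over step h.

    Takes the values +sqrt(h) and -sqrt(h) with probability 1/2 each,
    matching E[dW] = 0 and E[dW^2] = h, which is all a weak order-1
    scheme requires of its driving noise.
    """
    return math.sqrt(h) if rng.random() < 0.5 else -math.sqrt(h)

# The moment matching can be checked by enumeration, no sampling needed:
h = 0.01
values = [math.sqrt(h), -math.sqrt(h)]
mean = sum(values) / 2
second = sum(v * v for v in values) / 2
print(mean, second)   # 0 and (approximately) h
```

Because only moments of the solution are targeted, weak schemes may use such coarse noise and still converge in distribution, which is why they are cheaper than strong schemes of the same order.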
Historical notes Below is a timeline of the main developments of the Local Linearization (LL) method. Pope D.A. (1963) introduces the LL discretization for ODEs and the LL scheme based on Taylor expansion. Ozaki T. (1985) introduces the LL method for the integration and estimation of SDEs. The term "Local Linearization" is used for the first time. Biscay R. et al. (1996) reformulate the strong LL method for SDEs. Shoji I. and Ozaki T. (1997) reformulate the weak LL method for SDEs. Hochbruck M. et al. (1998) introduce the LL scheme for ODEs based on Krylov subspace approximation. Jimenez J.C. (2002) introduces the LL scheme for ODEs and SDEs based on rational Padé approximation. Carbonell F.M. et al. (2005) introduce the LL method for RDEs. Jimenez J.C. et al. (2006) introduce the LL method for DDEs. De la Cruz H. et al. (2006, 2007) and Tokman M. (2006) introduce the two classes of HOLL integrators for ODEs: the integrator-based and the quadrature-based. De la Cruz H. et al. (2010) introduce the strong HOLL method for SDEs. References Numerical analysis Numerical integration (quadrature)
Local linearization method
[ "Mathematics" ]
4,108
[ "Computational mathematics", "Mathematical relations", "Approximations", "Numerical analysis" ]
54,367,239
https://en.wikipedia.org/wiki/Abell%201201%20BCG
Abell 1201 BCG (short for Abell 1201 Brightest Cluster Galaxy) is a type-cD massive elliptical galaxy residing as the brightest cluster galaxy (BCG) of the Abell 1201 galaxy cluster. At a redshift of 0.169, this system is around 2.7 billion light-years from Earth, and offset about 11 kiloparsecs from the X-ray peak of the intracluster gas. With an ellipticity of 0.32±0.02, the stellar distribution is far from spherical. In solar units, the total stellar luminosity is 4×1011  in SDSS r-band, and 1.6×1012  in 2MASS K-band. Half the stars orbit within an effective radius of 15 kpc, and their central velocity dispersion is about 285 km s−1 within 5 kpc, rising to 360 km s−1 at 20 kpc distance. The BCG also acts as a gravitational lens, bending the light of a more distant background galaxy (at redshift 0.451) into an apparent tangential arc about 6 kpc to one side. This makes the galaxy an important case in investigations of the intrinsic properties of dark matter. Detailed models of the lens mass distribution, starlight and stellar kinematics indicate that the galaxy cluster's dark halo has a shallow inner density gradient and perhaps a soft dark matter core. At face value, this is incompatible with the dark matter cusp predicted by collisionless cold dark matter theories, and adds to evidence that dark matter experiences additional non-gravitational forces. Years later, a faint smaller counterimage to the arc was discovered at a closer radius. Explaining the position and brightness of this counterimage requires a dark central concentration of unseen mass. Based on lens modelling, it could be a supermassive black hole equivalent to 13 billion suns: (1.3±0.6)×1010 . At the time of measurement, this was one of the most massive black hole candidates (without relying on assumptions about quasar luminosities and efficiencies). This UMBH may be ten times larger than expected from the usual scaling relations between black holes and host galaxies.
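A rough order-of-magnitude check of the lensing geometry is possible with the point-mass Einstein radius formula. The angular-diameter distances below are rough illustrative values for the quoted redshifts, not numbers from the article, so the result is only indicative.

```python
import math

# Physical constants (SI)
G = 6.674e-11          # m^3 kg^-1 s^-2
C = 2.998e8            # m/s
MSUN = 1.989e30        # kg
MPC = 3.086e22         # m

def einstein_radius_arcsec(m_kg, d_l, d_s, d_ls):
    """Point-mass Einstein radius: theta_E = sqrt(4GM/c^2 * D_ls / (D_l * D_s))."""
    theta = math.sqrt(4 * G * m_kg / C**2 * d_ls / (d_l * d_s))
    return math.degrees(theta) * 3600

# Rough assumed angular-diameter distances for z = 0.169 and z = 0.451
# (illustrative values only, not from the article):
d_l, d_s, d_ls = 600 * MPC, 1250 * MPC, 750 * MPC
m = 1.3e10 * MSUN      # the proposed UMBH mass
theta = einstein_radius_arcsec(m, d_l, d_s, d_ls)
print(theta)           # a few tenths of an arcsecond
```

A sub-arcsecond Einstein radius for the black hole alone, compared with the few-arcsecond arc produced by the whole cluster, is consistent with the counterimage being a subtle perturbation that only detailed lens modelling could pick out.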
However, alternative methods of modelling the stellar velocity dispersion maps (accounting for an aggregate constraint on the lens mass) reveal an ambiguity between the UMBH mass and the dark halo profile. In solutions where the UMBH is more massive, the dark matter is more cuspy. In solutions where the UMBH is smaller or absent, the dark matter is more cored. The dark halo's ellipticity and the mass-to-light ratio of stars also enter the ambiguity. Thus, if the standard computational models are robust, then Abell 1201 BCG presents a dilemma and a challenge to either the conventional ideas of black hole growth or the simplest theories about dark matter, or both. In 2023, a follow-up study of higher resolution images with greater signal-to-noise ratio, using lens modelling techniques, strongly supported the presence of a central supermassive black hole and provided a lower as well as an upper limit and an upward revision of its mass to . If correct, this would be the first identification and mass determination of a supermassive black hole using analysis of a gravitational lens, a method the authors suggest could be useful for the discovery of more supermassive black holes at higher redshift, i.e., outside the local universe, which previously had been limited to actively accreting black holes. See also Scalar field dark matter References External links "NED" 1430978 Elliptical galaxies Gravitational lensing Leo (constellation)
Abell 1201 BCG
[ "Astronomy" ]
754
[ "Leo (constellation)", "Constellations" ]
54,368,509
https://en.wikipedia.org/wiki/European%20Young%20Engineers
The European Young Engineers (EYE) is a European non-profit organisation, listed in the register of Engineering associations of the World Federation of Engineering Associations. History Young members of the European engineers’ organisations created a pan-European platform and founded the European Young Engineers (EYE) in 1994. During the following years several engineering associations in Europe were invited to join. EYE became an organisation consisting of more than 23 associations and representing more than 250,000 young engineers in Europe. EYE started to offer its member organisations and their students and young engineers access to a Europe-wide network by linking the engineering associations. EYE offers a member-hosted conference. Between these events, the community stays in contact via their website. In 2007 the European Young Engineers signed a Memorandum of Cooperation with FEANI. List of member organisations References Engineering societies International scientific organizations based in Europe Organisations based in Brussels Organizations established in 1994 Youth science
European Young Engineers
[ "Engineering" ]
182
[ "Engineering societies" ]
54,368,685
https://en.wikipedia.org/wiki/NGC%20473
NGC 473 is a lenticular galaxy in the constellation of Pisces. Its velocity with respect to the cosmic microwave background is 1,819 ± 22 km/s, which corresponds to a Hubble distance of . In addition, one non-redshift measurement gives a distance of . It was discovered on December 20, 1786 by William Herschel. See also List of NGC objects (1–1000) References External links 0473 Pisces (constellation) Lenticular galaxies 17861220 004785 00859 Discoveries by William Herschel 01172+1616 +03-04-022
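The Hubble distance figure itself was lost in extraction, but the conversion from the quoted CMB-frame velocity is a one-line application of Hubble's law. The value of H0 below is an assumed round number, not from the article.

```python
# Hubble's law: d = v / H0 (a good approximation at low redshift).
v_kms = 1819            # CMB-frame velocity quoted in the article, km/s
H0 = 70.0               # km/s/Mpc, an assumed round value for the Hubble constant
d_mpc = v_kms / H0
d_mly = d_mpc * 3.2616  # 1 Mpc is about 3.2616 million light-years
print(round(d_mpc, 1), "Mpc, about", round(d_mly), "million light-years")
```

With a different adopted H0 (catalogues use values roughly between 67 and 74), the distance shifts by a few percent, which is why published Hubble distances always state the assumed constant.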
NGC 473
[ "Astronomy" ]
124
[ "Pisces (constellation)", "Constellations" ]
54,369,419
https://en.wikipedia.org/wiki/NGC%207035%20and%20NGC%207035A
NGC 7035 and NGC 7035A are a pair of interacting lenticular galaxies located around 400 to 430 million light-years away in the constellation of Capricornus. The main galaxy, NGC 7035 was discovered by astronomer Frank Muller in 1886. See also Arp 272 List of NGC objects (7001–7840) References External links Interacting galaxies Lenticular galaxies Capricornus 7035 66258 Astronomical objects discovered in 1886
NGC 7035 and NGC 7035A
[ "Astronomy" ]
91
[ "Capricornus", "Constellations" ]
54,369,532
https://en.wikipedia.org/wiki/Facebook%20Stories
Facebook Stories are short user-generated photo or video collections that can be uploaded to the user's Facebook. Facebook Stories were created on March 28, 2017. They are considered a second news feed for the social media website. It is focused around Facebook's in-app camera, which allows users to add fun filters and Snapchat-like lenses to their content as well as add visual geolocation tags to their photos and videos. The content can be posted publicly on the Facebook app for only 24 hours or can be sent as a direct message to a Facebook friend. "As people mostly post photos and videos, Stories is the way they’re going to want to do it," says Facebook Camera product manager Connor Hayes, noting Facebook's shift away from text status updates after ten years as its primary sharing option. "Obviously we’ve seen this doing very well in other apps. Snapchat has really pioneered this," explained Hayes. Facebook has seen similar features succeed in other applications like Snapchat and Instagram, especially since Facebook bought Instagram for $1 billion in 2012. History After many failed attempts to incorporate Snapchat-like features on Facebook, the company decided to test run Messenger Day. In 2016, Facebook created a feature called Messenger Day, which allowed users to post videos and pictures with filters for 24 hours only. This project was only used in Poland because of the unpopularity of Snapchat in that region. Users are able to add text and colorful graphics. However, this was only a test for Facebook, later turned into a feature on Facebook's app. Facebook's introduction of the Story function may have been in response to the wider success of Instagram Story advertising over the advertising on Facebook Wall; Instagram Story ads were found to be more successful than Facebook Wall advertising in all demographics aside from non-millennial men.
Popularity and criticism Facebook Stories is much less popular among social media users than Snapchat and Instagram. Instagram Stories, part of the Facebook-owned Instagram, launched in August 2016 and, as of June 2017, had 250 million active users. Mark Zuckerberg stated, "It is important to release products that people are familiar with, but (Facebook Stories) is going to have the first mainstream augmented reality platform." In a campaign to get more Facebook users to use Facebook Stories, "Facebook is turning friends into ghosts who aren’t using stories. So, instead of the blank space that used to be there above the news feed, Facebook will show grayed-out icons of some frequently contacted friends, regardless of whether they’ve ever posted to their Facebook story before." Plugin extensions to Chrome and Firefox have been written specifically to hide Facebook Stories. As of September 2019, Facebook itself had not created an option for its users to turn Stories off. Features Access Stories There are two ways that a user can view Facebook Stories: first, by scrolling to the top of the feed, where users are able to view their friends' Stories and create a story; second, by swiping right from any screen on the Facebook app. Users can "like" Stories and reply to them. Saving Stories Before uploading content to a story, users are able to save it to the camera roll. Once users are done creating the story, they press the down arrow to save to the camera roll, or the center arrow to share. Users are able to send a direct message to any friend, post to a timeline or add to a Story. Views If users post a story to a timeline it will appear at the top of the profile as if it were any other picture or video. And just like posting to a Timeline, users can decide who sees it (Public, Friends and so on). But posting to a "Story" will make it available to all friends for a 24-hour period and will appear as a bubble at the top of their feeds.
There is currently no way to select who sees, or doesn't see, a Story. To delete a story, go to the bottom right of the screen, tap the views icon, and delete the story via the three dots at the top. Tools Facebook is the first app to have animated face filters. The company worked with artists Hattie Stewart and Douglas Coupland to design original filters for the Facebook app. To access lenses, swipe up and down, but users have to apply them before recording or taking a picture, which is a key difference between Facebook Stories and Snapchat. Video stories can be up to 20 seconds long, and a friend's direct message can be replayed. List of what is included in Facebook Camera: Drawing with resizable marker and chalk brushes Emoji stickers Colored captions Animated selfie lenses and masks Environmental effects like highlight lines and funhouse mirrors Reactive filters that respond to movement like lava lamp colors Alternative filters that surprise you with new effects if you get more people in frame Fine-art-style transfers that make your images look like line drawings or impressionist paintings Professional artist filters like Hattie Stewart's doodle bombs and Douglas Coupland's psychedelia Licensed filters from six movie studios, including a Minions filter Cause-supporting filters like rainbows for gay pride Geotagged location filters for certain places Country-specific filters for around ten initial markets References Facebook Stories - Did Facebook Copied Snapchat's Main Feature? Software features Internet properties established in 2017 Social software Facebook
Facebook Stories
[ "Technology" ]
1,126
[ "Mobile content", "Software features", "Social software" ]
54,369,991
https://en.wikipedia.org/wiki/KELT-18b
KELT-18b is a hot Jupiter orbiting the F-type main sequence star KELT-18 approximately 1,058 light years away in the northern circumpolar constellation Ursa Major. The planet was discovered using the transit method, and was announced in June 2017. Discovery KELT-18b was discovered in 2017 by scientists using the KELT-North telescope at the Winer Observatory. The paper states that this planet is the most "inflated" of its type due to its low mass and density and its large radius. Properties KELT-18b has 1.18 times Jupiter's mass, and is 57% larger than Jupiter. Despite the high mass, its density is lower than Saturn's, and it has a high equilibrium temperature of 2,085 K due to orbiting close to a hot star. The planet orbits at a distance about 10 times smaller than Mercury's, completing an orbit in almost 3 days. References Ursa Major Transiting exoplanets Hot Jupiters Exoplanets discovered in 2017 Exoplanets discovered by KELT
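The "10 times closer than Mercury" figure can be sanity-checked with Kepler's third law. The stellar mass and exact period below are rough assumed inputs (a generic F-type star mass and "almost 3 days"), not values quoted in this article, so the result is only an order-of-magnitude check.

```python
import math

GM_SUN = 1.327e20        # m^3/s^2, gravitational parameter of the Sun
AU = 1.496e11            # m
MERCURY_A = 0.387        # Mercury's semi-major axis in AU

# Assumed illustrative values (not stated in the article text):
m_star = 1.5             # solar masses, rough for an F-type star
period_s = 2.87 * 86400  # "almost 3 days", in seconds

# Kepler's third law: a^3 = G*M * P^2 / (4 * pi^2)
a_m = (GM_SUN * m_star * period_s**2 / (4 * math.pi**2)) ** (1 / 3)
a_au = a_m / AU
print(round(a_au, 3), "AU; Mercury's orbit is", round(MERCURY_A / a_au, 1), "times wider")
```

The ratio lands near the article's "about 10 times" claim to within the roughness of these inputs, which is the most such a back-of-the-envelope check can show.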
KELT-18b
[ "Astronomy" ]
216
[ "Ursa Major", "Constellations" ]
54,370,150
https://en.wikipedia.org/wiki/NGC%207038
NGC 7038 is an intermediate spiral galaxy located about 210 million light-years away in the constellation of Indus. Astronomer John Herschel discovered NGC 7038 on September 30, 1834. NGC 7038 along with NGC 7014 are the brightest members of Abell 3742. Abell 3742 is located near the center of the Pavo–Indus Supercluster. Supernovae Three supernovae have been observed in NGC 7038: SN 1983L (type unknown, mag. 17.1) was discovered by H. Schild and M. Pizarro on 14 June 1983. SN 2010dx (type II, mag. 17.4) was discovered by CHASE (CHilean Automatic Supernovas sEarch) on 8 June 2010. SN 2018hsa (type Ia, mag. 16) was discovered by the Backyard Observatory Supernova Search on November 1, 2018. See also NGC 4725 NGC 7001 List of NGC objects (7001–7840) References External links Intermediate spiral galaxies Indus (constellation) 7038 66414 Astronomical objects discovered in 1834 Abell 3742
NGC 7038
[ "Astronomy" ]
227
[ "Indus (constellation)", "Constellations" ]
54,371,260
https://en.wikipedia.org/wiki/IGap
iGap is a free Iranian instant messaging application for smart phones and personal computers. iGap allows users to interact with each other and exchange information through text, image, video, audio and other types of messages. iGap also supports P2P-based voice calls over the internet. iGap is developed for Android, iOS and Windows. Open-source clients iGap has published the source code of its Android and iOS clients on GitHub. However, the back-end source code is proprietary software. Response Supreme Leader of Iran Ali Khamenei referred to this messenger on his personal website, and Mohammad-Javad Azari Jahromi, the ICT Minister of Iran, joined this messaging application in September 2017 in order to support local social networking. Earlier, the minister had pledged to support local messengers when he was deputy of the ICT Minister. External links iGap in App Store iGap in Google Play References 2015 software Communication software Cross-platform software Instant messaging clients IOS software Secure communication Free security software Free instant messaging clients Free and open-source Android software 2015 establishments in Iran Communications in Iran
IGap
[ "Technology" ]
229
[ "Instant messaging", "Instant messaging clients" ]
54,372,951
https://en.wikipedia.org/wiki/GESTIS%20Substance%20Database
GESTIS Substance Database is a freely accessible online information system on chemical compounds. It is maintained by the Institut für Arbeitsschutz der Deutschen Gesetzlichen Unfallversicherung (IFA, Institute for Occupational Safety and Health of the German Social Accident Insurance). Information on occupational medicine and first aid is compiled by Henning Heberer and his team (TOXICHEM, Leuna). The database contains information for the safe handling of hazardous substances and other chemical substances at work: toxicology/ecotoxicology important physical and chemical properties application and handling health effects protective measures and such in case of danger (incl. first aid) special regulations e.g. GHS classification and labelling according to CLP Regulation (pictograms, H phrases, P phrases). The available information relates to about 9,400 substances. Data are updated immediately after publication of new official regulations or after the issue of new scientific results. A mobile version of the GESTIS Substance Database, suitable for smartphones and tablets, is also available. References Literature External links GESTIS Substance Database Biochemistry databases Online databases Occupational safety and health
GESTIS Substance Database
[ "Chemistry", "Biology" ]
234
[ "Biochemistry", "Biochemistry databases" ]
54,373,564
https://en.wikipedia.org/wiki/Fortifications%20of%20Derbent
The Fortifications of Derbent (Darband) form one of the fortified defense lines built to protect the eastern passage of the Caucasus Mountains (the "Caspian Gates") against the attacks of the nomadic peoples of the Pontic–Caspian steppe, with some parts dating back as early as the Persian Sasanian Empire. With the first parts built in the 6th century during the reign of Persian emperor Khosrow I and maintained by various later Arab, Turkish and Persian regimes, the fortifications comprise three distinct elements: the citadel of Naryn-Kala at Derbent, the twin long walls connecting it with the Caspian Sea in the east, and the "mountain wall" of Dagh-Bary, running from Derbent to the Caucasus foothills in the west. The immense wall, with a height of up to twenty meters, a thickness of about , and thirty north-looking towers, stretched for forty kilometers between the Caspian Sea and the Caucasus Mountains, effectively blocking the passage across the Caucasus. The fortification complex was made a UNESCO World Heritage Site in 2003. History Already in Classical Antiquity, the settlement of Derbent and its wider region (the "Caspian Gates") were known for their strategic location between the Caspian Sea and the eastern foothills of the Caucasus Mountains, separating the settled regions south of the Caucasus from the nomadic peoples dominating the Pontic–Caspian steppe to the north. Archaeological evidence points to the establishment of a fortified settlement on the Derbent hill as early as the late 8th century BCE, probably under the impact of Scythian raids. This settlement initially covered only the more protected northeastern side of the hill (some 4–5 hectares), but over the 6th–4th centuries BCE expanded to cover its entire surface ( hectares).
The walls of that settlement were some high and maximally thick, with evidence of repeated destruction and rebuilding throughout the period. From the 4th century BCE, the settlement began to expand beyond the hill fortress, which became the citadel of an expanding city. In the 1st century BCE, Derbent became incorporated in the kingdom of Caucasian Albania, probably as its northernmost possession. Derbent experienced a period of considerable prosperity in the first three centuries of the Common Era, but the resumption of nomad raids in the 4th century (the Alans and later the Huns) meant that it quickly reverted to its role as a frontier post and a "symbolic boundary between nomadic and agrarian ways of life". In the late 4th century CE, Albania passed under Sasanian influence and control; in the 5th century, it was a Sasanian border fortress and the seat of a march-warden (marzban). During the reign of Khosrow I the fortress was built. There are also various Middle Persian (Pahlavi) inscriptions on the walls of the fortress and the northern and southern walls inside the city. After the Arab conquest of Persia, various Arabic inscriptions were also made. The Citadel of Derbent is one of the most popular tourist attractions in the city of Derbent and the Republic of Dagestan. Documentary film In 2022, Pejman Akbarzadeh made the documentary film "Derbent: What Persia Left Behind". The film, which explores the history and architecture of the Derbent fortifications, was screened at various academic conferences, including the German Orientalists Day in Berlin and the biennial of the Iranian Studies Association in Salamanca.
References Sources BBC: Dagestan gunmen kill one at south Russia fortress UNESCO: Citadel, Ancient City and Fortress Buildings of Derbent External links Derbent Online Military history of Derbent Buildings and structures in Derbent Derbent Sasanian defense lines Fortifications in Russia Border barriers World Heritage Sites in Russia Buildings and structures in Dagestan Persian-Caucasian architecture 6th century in Iran Cultural heritage monuments in Derbent History of Derbent Cultural heritage monuments in Dagestan Cultural heritage monuments of federal significance in Dagestan
Fortifications of Derbent
[ "Engineering" ]
803
[ "Border barriers", "Separation barriers", "Fortification lines", "Sasanian defense lines" ]
54,374,854
https://en.wikipedia.org/wiki/OMS%20encoding
OMS (aka TeX math symbol) is a 7-bit TeX encoding developed by Donald E. Knuth. It encodes mathematical symbols with variable sizes like for capital Pi notation, brackets, braces and radicals. Character set See also OML encoding OT1 encoding References Character sets TeX
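How the OMS encoding is used in practice can be sketched with LaTeX's standard math setup, which binds a math symbol group to an OMS-encoded font (Computer Modern Symbols, cmsy) and then addresses symbols by their slot numbers. This is a sketch modeled on LaTeX's fontmath.ltx; slot numbers are from the cmsy layout and should be checked against that file before reuse.

```latex
% The 'symbols' math group is taken from an OMS-encoded font,
% as in LaTeX's standard math setup:
\DeclareSymbolFont{symbols}{OMS}{cmsy}{m}{n}
% Individual math symbols are then addressed by their OMS slot:
\DeclareMathSymbol{\infty}{\mathord}{symbols}{"31}
```

Because the encoding fixes which glyph lives in which slot, any OMS-encoded font (not just cmsy) can be dropped into the same declarations.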
OMS encoding
[ "Mathematics" ]
60
[ "TeX", "Mathematical markup languages" ]
54,375,950
https://en.wikipedia.org/wiki/The%20Resilience%20Project
The Resilience Project is a project undertaken by the Icahn School of Medicine at Mount Sinai in collaboration with Sage Bionetworks. Overview The project seeks to identify protective factors against disease through collaboration with people who have significant risk factors for disease but nevertheless do not manifest typical signs and symptoms. In a pilot study, big data was used to identify individuals with apparent resistance to severe genetic disease. This approach may seem counterintuitive, since a gene known to cause a genetic disorder could also be tackled head-on by overwriting the faulty gene with "good code" using gene therapy. However, there is never just one version of "good code": even in people who do not have a disorder, the gene otherwise known to cause the defect can be present with different code. Rather than having to deal with these problems, Stephen Friend decided to use the workaround described above. Diseases Initially, the diseases the project looked at were 170 severe Mendelian disorders. However, the genetic data gathered from 600,000 people was not enough (resilient individuals were found for only 8 of the targeted diseases). The list of diseases it now looks at is the following: Cystic fibrosis Smith–Lemli–Opitz syndrome Familial dysautonomia Epidermolysis bullosa simplex Pfeiffer syndrome Autoimmune polyendocrine syndrome type 1 (APECED) Acampomelic campomelic dysplasia Atelosteogenesis Data DNA sequences from 589,306 people were used, obtained from 23andMe, Beijing Genomics Institute, Broad Institute and others. Criticism Critics have argued that the researchers could not contact any of the people to positively ensure that they were indeed healthy, despite having the disease mutation. Human geneticist Daniel MacArthur of the Broad Institute in Cambridge, Massachusetts still regards the study as “important as a proof-of-principle”.
In response to this criticism, Friend and Schadt have modified their Resilience Project by inviting new volunteers who agree to be recontacted to participate through a website. Participatory Study In April 2020, the Resilience Project launched a participatory research study open to individuals in the USA. Similar projects The 100000 Genomes Project U.S. Precision Medicine Initiative's 1 million person cohort study Notes References Applied genetics Big data Databases in the United States Medical genetics Technology forecasting
The Resilience Project
[ "Technology" ]
516
[ "Data", "Big data" ]
61,948,910
https://en.wikipedia.org/wiki/Surface%20Duo
The Surface Duo is a discontinued dual-touchscreen Android smartphone manufactured by Microsoft. Announced during a hardware-oriented event on October 2, 2019, and officially released on September 10, 2020, it is part of the Microsoft Surface series of touchscreen hardware devices, and the first device in the line that does not run Windows. It also marks Microsoft's first smartphone since the dissolution of Microsoft Mobile and the Windows Phone platform. The Surface Duo received mixed reviews, with critics praising its design and battery life, expressing mixed opinions on its multitasking features and software quality, and finding its hardware (including its RAM, camera, and wireless network support) outdated and underpowered for its class. A successor, the Surface Duo 2, was unveiled in September 2021. Microsoft released the final security update for the Surface Duo on September 10, 2023. Specifications Hardware The Surface Duo is a folio-styled device, with two 5.6-inch OLED displays with a 4:3 aspect ratio. When unfolded, they form an 8.1-inch surface with a 3:2 aspect ratio and total resolution of 2700×1800. The 360-degree hinge allows the device to be used in several "postures", including being fully unfolded as a flat surface, a landscape mode where the virtual keyboard occupies the entirety of the bottom screen, and the other screen folded backwards for single-screen use. It is compatible with Surface Pen styluses. It uses a Qualcomm Snapdragon 855 system-on-chip with 6 GB of RAM. Microsoft stated that it worked with Qualcomm to optimize the device for multitasking. The device contains two batteries with a total capacity of 3577 mAh, which are split between the two halves. The Surface Duo is sold in models with 128 and 256 GB of non-expandable internal storage. The Surface Duo includes an 11-megapixel camera; nominally front-facing, it can be used like a rear-facing camera when the other screen is folded backwards.
When unveiling the Surface Duo, Microsoft avoided referring to the device as a "phone", with chief product officer Panos Panay primarily referring to it as a "Surface device". Software The Surface Duo ships with Android 10 and Google Mobile Services, and is pre-loaded with both Google and Microsoft-developed apps. The two displays are designed to act like a multi-monitor configuration on a PC; individual apps can be displayed across the displays, an app on one display can open external links on the other, and supported apps can display different views on each display. Microsoft stated that it would contribute the associated APIs for these features to the upstream Android Open Source Project (AOSP), so that this functionality could be leveraged by other dual-screen and foldable smartphones. Microsoft committed to upgrading the Surface Duo to Android 11 by the end of 2021. The update was released in February 2022. Android 12L was released for the device in October 2022, seven months after the operating system's own release. Microsoft stated that the update would include visual elements of Windows 11 and the Fluent Design System. The ARM version of Windows 11 was adapted to function on the Surface Duo by a third-party developer. On September 10, 2023, Microsoft stopped releasing all updates for the device, including security patches. Reception The Verge noted that the Surface Duo was thin and had a relatively light weight for its class, and observed that its design "encourages you to be intentional with your use of the device and to be intentional in not using it." It was felt that the dual-screen configuration of the Surface Duo made multitasking more natural and less "tacked on[to]" Android than other devices, and that apps such as Amazon Kindle and Microsoft Outlook were well-optimized to the dual-screen setup.
The camera was criticized for its placement on the device and its low quality (being described as akin to a webcam and poor for a $300 phone, let alone one that cost $1,400), and the device was also panned for being underpowered for its class (including a lack of support for 5G and NFC, and an insufficient amount of RAM for a multitasking-oriented device), though it was noted that Microsoft had likely used the time since the 2019 announcement to optimize its software for the Snapdragon 855. In conclusion, it was felt that "there are more than a few glimmers of vision and potential in the Surface Duo", but that "the execution is bad in places, and a lot of people aren’t going to get what Microsoft is going for." Mary Jo Foley of ZDNet described the Duo's hardware as "premium and drool-worthy", and found it to be the first Surface-branded device able to fulfill Microsoft's promise of "all-day" battery life, but felt that its multitasking systems were not intuitive (drawing comparisons to Windows 8, which "assume[d] users can easily figure out how to do basic things and errs on the side of providing too little information"). iFixit gave the Surface Duo a repairability score of 2/10, citing that although its displays and back glass "can be replaced without disassembling any other components", its construction made extensive use of adhesive and its USB port was soldered directly to the mainboard.

See also

Microsoft Courier
Nokia X family
Microsoft Lumia 650
https://en.wikipedia.org/wiki/Direct%20function
A direct function (dfn, pronounced "dee fun") is an alternative way to define a function and operator (a higher-order function) in the programming language APL. A direct operator can also be called a dop (pronounced "dee op"). They were invented by John Scholes in 1996. They are a unique combination of array programming, higher-order function, and functional programming, and are a major distinguishing advance of early 21st century APL over prior versions.

A dfn is a sequence of possibly guarded expressions (or just a guard) between { and }, separated by ⋄ or new-lines, wherein ⍺ denotes the left argument and ⍵ the right, and ∇ denotes recursion (function self-reference). For example, the function PT tests whether each row of ⍵ is a Pythagorean triplet (by testing whether the sum of squares equals twice the square of the maximum).

 PT← {(+/⍵*2)=2×(⌈/⍵)*2}
 PT 3 4 5
1
 x
 4  5  3
 3 11  6
 5 13 12
17 16  8
11 12  4
17 15  8
 PT x
1 0 1 0 0 1

The factorial function as a dfn:

 fact← {0=⍵:1 ⋄ ⍵×∇ ⍵-1}
 fact 5
120
 fact¨ ⍳10 ⍝ fact applied to each element of 0 to 9
1 1 2 6 24 120 720 5040 40320 362880

Description

The rules for dfns are summarized by the following "reference card": A dfn is a sequence of possibly guarded expressions (or just a guard) between { and }, separated by ⋄ or new-lines:

 expression
 guard: expression
 guard:

The expressions and/or guards are evaluated in sequence. A guard must evaluate to a 0 or 1; its associated expression is evaluated if the value is 1. A dfn terminates after the first unguarded expression which does not end in assignment, or after the first guarded expression whose guard evaluates to 1, or if there are no more expressions. The result of a dfn is that of the last evaluated expression. If that last evaluated expression ends in assignment, the result is "shy"—not automatically displayed in the session. Names assigned in a dfn are local by default, with lexical scope. ⍺ denotes the left function argument and ⍵ the right; ⍺⍺ denotes the left operand and ⍵⍵ the right.
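The PT and fact dfns above can be mirrored in Python (an illustrative translation for readers unfamiliar with APL; the function names are kept for comparison, and the list x reproduces the session data above):

```python
# Python sketch of the PT and fact dfns (illustrative translation).
# PT tests whether a triple is Pythagorean: the sum of squares equals
# twice the square of the maximum element.

def PT(row):
    return int(sum(v * v for v in row) == 2 * max(v for v in row) ** 2)

x = [(4, 5, 3), (3, 11, 6), (5, 13, 12), (17, 16, 8), (11, 12, 4), (17, 15, 8)]
print([PT(r) for r in x])  # mirrors PT x -> [1, 0, 1, 0, 0, 1]

# fact mirrors {0=⍵:1 ⋄ ⍵×∇ ⍵-1}: a guard, then the recursive case.
def fact(w):
    return 1 if w == 0 else w * fact(w - 1)

print(fact(5))                       # 120
print([fact(n) for n in range(10)])  # fact applied to each of 0..9
```

The guard `0=⍵:1` becomes the conditional expression, and ∇ (self-reference) becomes an ordinary recursive call by name.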
If ⍵⍵ occurs in the definition, then the dfn is a dyadic operator; if only ⍺⍺ occurs but not ⍵⍵, then it is a monadic operator; if neither ⍺⍺ nor ⍵⍵ occurs, then the dfn is a function.

The special syntax ⍺← is used to give a default value to the left argument if a dfn is called monadically, that is, called with no left argument. The ⍺← expression is not evaluated otherwise.

∇ denotes recursion or self-reference by the function, and ∇∇ denotes self-reference by the operator. Such denotation permits anonymous recursion.

Error trapping is provided through error-guards, errnums::expression. When an error is generated, the system searches dynamically through the calling functions for an error-guard that matches the error. If one is found, the execution environment is unwound to its state immediately prior to the error-guard's execution and the associated expression of the error-guard is evaluated as the result of the dfn.

Additional descriptions, explanations, and tutorials on dfns are available in the cited articles.

Examples

The examples here illustrate different aspects of dfns. Additional examples are found in the cited articles.

Default left argument

The function {⍺+0j1×⍵} adds ⍺ to 0j1 (the imaginary unit i) times ⍵.

 3 {⍺+0j1×⍵} 4
3J4
 ∘.{⍺+0j1×⍵}⍨ ¯2+⍳5
¯2J¯2 ¯2J¯1 ¯2 ¯2J1 ¯2J2
¯1J¯2 ¯1J¯1 ¯1 ¯1J1 ¯1J2
 0J¯2  0J¯1  0  0J1  0J2
 1J¯2  1J¯1  1  1J1  1J2
 2J¯2  2J¯1  2  2J1  2J2

The significance of this function is that a j b constructs the complex number a + b×i. Moreover, analogous to how monadic -⍵ ⇔ 0-⍵ (negate) and monadic ÷⍵ ⇔ 1÷⍵ (reciprocal), a monadic definition of the function is useful, effected by specifying a default value of 0 for ⍺: if ⍺ is not given, then j ⍵ ⇔ 0 j ⍵.

 j←{⍺←0 ⋄ ⍺+0j1×⍵}
 3 j 4 ¯5.6 7.89
3J4 3J¯5.6 3J7.89
 j 4 ¯5.6 7.89
0J4 0J¯5.6 0J7.89
 sin← 1∘○
 cos← 2∘○
 Euler← {(*j ⍵) = (cos ⍵) j (sin ⍵)}
 Euler (¯0.5+?10⍴0) j (¯0.5+?10⍴0)
1 1 1 1 1 1 1 1 1 1

The last expression illustrates Euler's formula on ten random numbers with real and imaginary parts in the interval (¯0.5, 0.5).
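The default-left-argument idiom ⍺←0 corresponds naturally to a default parameter in Python. A sketch of the j function and the Euler check (illustrative only; the names j and euler are not part of the original APL):

```python
# Python analogue of j←{⍺←0 ⋄ ⍺+0j1×⍵}: the default parameter a=0 plays
# the role of the default left argument ⍺←0.
import cmath

def j(w, a=0):
    return a + 1j * w

print(j(4, 3))                          # 3 j 4  ->  (3+4j)
print([j(w) for w in (4, -5.6, 7.89)])  # monadic use: purely imaginary results

def euler(z):
    # Euler's formula: e^(iz) = cos z + i sin z, checked to float tolerance.
    return cmath.isclose(cmath.exp(1j * z), cmath.cos(z) + 1j * cmath.sin(z))

print(all(euler(complex(a, b)) for a, b in [(0.3, -0.4), (-0.1, 0.25)]))
```

As in the dfn, supplying the left argument gives the dyadic meaning and omitting it falls back to the default.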
Single recursion

The ternary construction of the Cantor set starts with the interval [0,1] and at each stage removes the middle third from each remaining subinterval. The Cantor set of order ⍵ defined as a dfn:

 Cantor← {0=⍵:,1 ⋄ ,1 0 1 ∘.∧ ∇ ⍵-1}
 Cantor 0
1
 Cantor 1
1 0 1
 Cantor 2
1 0 1 0 0 0 1 0 1
 Cantor 3
1 0 1 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 1 0 1

Cantor 0 to Cantor 6 depicted as black bars:

The function sieve computes a bit vector of length ⍵ so that bit i (for 0≤i<⍵) is 1 if and only if i is a prime.

sieve←{
 4≥⍵:⍵⍴0 0 1 1
 r←⌊0.5*⍨n←⍵
 p←2 3 5 7 11 13 17 19 23 29 31 37 41 43
 p←(1+(n≤×⍀p)⍳1)↑p
 b← 0@1 ⊃ {(m⍴⍵)>m⍴⍺↑1 ⊣ m←n⌊⍺×≢⍵}⌿ ⊖1,p
 {r<q←b⍳1:b⊣b[⍵]←1 ⋄ b[q,q×⍸b↑⍨⌈n÷q]←0 ⋄ ∇ ⍵,q}p
}

 10 10 ⍴ sieve 100
0 0 1 1 0 1 0 1 0 0
0 1 0 1 0 0 0 1 0 1
0 0 0 1 0 0 0 0 0 1
0 1 0 0 0 0 0 1 0 0
0 1 0 1 0 0 0 1 0 0
0 0 0 1 0 0 0 0 0 1
0 1 0 0 0 0 0 1 0 0
0 1 0 1 0 0 0 0 0 1
0 0 0 1 0 0 0 0 0 1
0 0 0 0 0 0 0 1 0 0

 b←sieve 1e9
 ≢b
1000000000
 (10*⍳10) (+⌿↑)⍤0 1 ⊢b
0 4 25 168 1229 9592 78498 664579 5761455 50847534

The last sequence, the number of primes less than powers of 10, is an initial segment of OEIS A006880. The last number, 50847534, is the number of primes less than 10⁹. It is called Bertelsen's number, memorably described by MathWorld as "an erroneous name erroneously given the erroneous value of" 50847478.

sieve uses two different methods to mark composites with 0s, both effected using local anonymous dfns: The first uses the sieve of Eratosthenes on an initial mask of 1s and a prefix of the primes 2 3...43, using the insert operator ⌿ (right fold). (The length of the prefix obtains by comparison with the primorial function ×⍀p.) The second finds the smallest new prime q remaining in b (q←b⍳1), and sets to 0 bit q itself and bits at q times the numbers at remaining 1 bits in an initial segment of b (b[q,q×⍸b↑⍨⌈n÷q]←0). This second dfn uses tail recursion.
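The singly recursive Cantor construction translates directly: each step takes the outer AND of 1 0 1 with the previous bit vector and ravels it row-major, exactly as ,1 0 1 ∘.∧ ∇ ⍵-1 does above (an illustrative Python sketch, not part of the original):

```python
# Python sketch of the Cantor dfn: the nested comprehension plays the
# role of the raveled outer product ,1 0 1 ∘.∧ ∇ ⍵-1.

def cantor(n):
    if n == 0:
        return [1]                  # base case: ,1
    prev = cantor(n - 1)
    # Outer AND of (1, 0, 1) with the previous vector, raveled row-major.
    return [a & b for a in (1, 0, 1) for b in prev]

print(cantor(1))  # [1, 0, 1]
print(cantor(2))  # [1, 0, 1, 0, 0, 0, 1, 0, 1]
```

Each order triples the length, reproducing the middle-third removal at bit-vector resolution.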
Tail recursion

Typically, the factorial function is defined recursively (as above), but it can be coded to exploit tail recursion by using an accumulator left argument:

 fac←{⍺←1 ⋄ ⍵=0:⍺ ⋄ (⍺×⍵) ∇ ⍵-1}

Similarly, the determinant of a square complex matrix using Gaussian elimination can be computed with tail recursion:

det←{ ⍝ determinant of a square complex matrix
 ⍺←1 ⍝ product of co-factor coefficients so far
 0=≢⍵:⍺ ⍝ result for 0-by-0
 (i j)←(⍴⍵)⊤⊃⍒|,⍵ ⍝ row and column index of the maximal element
 k←⍳≢⍵
 (⍺×⍵[i;j]ׯ1*i+j) ∇ ⍵[k~i;k~j] - ⍵[k~i;j] ∘.× ⍵[i;k~j]÷⍵[i;j]
}

Multiple recursion

A partition of a non-negative integer n is a vector of positive integers whose sum is n, where the order of the elements is not significant. For example, 4 and 1 1 2 are partitions of 4, and 1 1 2, 1 2 1, and 2 1 1 are considered to be the same partition. The partition function pn counts the number of partitions. The function is of interest in number theory, studied by Euler, Hardy, Ramanujan, Erdős, and others. The recurrence relation derived from Euler's pentagonal number theorem is pn(n) = Σ(k≥1) (¯1)^(k+1) × (pn(n−k×(3k−1)÷2) + pn(n−k×(3k+1)÷2)). Written as a dfn:

 pn ← {1≥⍵:0≤⍵ ⋄ -⌿+⌿∇¨rec ⍵}
 rec ← {⍵ - (÷∘2 (×⍤1) ¯1 1 ∘.+ 3∘×) 1+⍳⌈0.5*⍨⍵×2÷3}
 pn 10
42
 pn¨ ⍳13 ⍝ OEIS A000041
1 1 2 3 5 7 11 15 22 30 42 56 77

The basis step states that for 1≥⍵, the result of the function is 0≤⍵: 1 if ⍵ is 0 or 1 and 0 otherwise. The recursive step is highly multiply recursive. For example, pn 200 would result in the function being applied to each element of rec 200, which are:

 rec 200
199 195 188 178 165 149 130 108 83 55 24 ¯10
198 193 185 174 160 143 123 100 74 45 13 ¯22

and, without caching, requires longer than the age of the universe to compute, owing to the enormous number of calls the function makes to itself. The compute time can be reduced by memoization, here implemented as the direct operator (higher-order function) M:

M←{
 f←⍺⍺
 i←2+'⋄'⍳⍨t←2↓,⎕cr 'f'
 ⍎'{T←(1+⍵)⍴¯1 ⋄ ',(i↑t),'¯1≢T[⍵]:⊃T[⍵] ⋄ ⊃T[⍵]←⊂',(i↓t),'⍵}⍵'
}

 pn M 200
3.973E12
 0 ⍕ pn M 200 ⍝ format to 0 decimal places
3972999029388

This value of pn 200 agrees with that computed by Hardy and Ramanujan in 1918.
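The same pentagonal-number recurrence, with memoization standing in for the M operator, can be sketched in Python (an illustrative translation; `lru_cache` here plays the role of the cache T that M builds):

```python
# Memoized partition counting via Euler's pentagonal number theorem,
# sketching what the memo operator M does for the pn dfn above.
from functools import lru_cache

@lru_cache(maxsize=None)
def pn(n):
    if n < 0:
        return 0
    if n <= 1:
        return 1  # basis step: pn 0 = pn 1 = 1
    total, k = 0, 1
    while k * (3 * k - 1) // 2 <= n:        # generalized pentagonal numbers
        sign = -1 if k % 2 == 0 else 1
        total += sign * (pn(n - k * (3 * k - 1) // 2)
                         + pn(n - k * (3 * k + 1) // 2))
        k += 1
    return total

print([pn(i) for i in range(13)])  # 1 1 2 3 5 7 11 15 22 30 42 56 77
print(pn(200))                     # 3972999029388
```

Without the cache the call tree explodes multiply-recursively, as the article describes; with it, pn(200) is computed in well under a second.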
The memo operator M defines a variant of its operand function to use a cache and then evaluates it. With the operand pn the variant is:

 {T←(1+⍵)⍴¯1 ⋄ {1≥⍵:0≤⍵ ⋄ ¯1≢T[⍵]:⊃T[⍵] ⋄ ⊃T[⍵]←⊂-⌿+⌿∇¨rec ⍵}⍵}

Direct operator (dop)

Quicksort on an array works by choosing a "pivot" at random among its major cells, then catenating the sorted major cells which strictly precede the pivot, the major cells equal to the pivot, and the sorted major cells which strictly follow the pivot, as determined by a comparison function ⍺⍺ (the operand). Defined as a direct operator (dop) Q:

 Q←{1≥≢⍵:⍵ ⋄ (∇ ⍵⌿⍨0>s)⍪(⍵⌿⍨0=s)⍪∇ ⍵⌿⍨0<s←⍵ ⍺⍺ ⍵⌷⍨?≢⍵}

 2 (×-) 8 ⍝ precedes
¯1
 8 (×-) 2 ⍝ follows
1
 8 (×-) 8 ⍝ equals
0
 x← 2 19 3 8 3 6 9 4 19 7 0 10 15 14
 (×-) Q x
0 2 3 3 4 6 7 8 9 10 14 15 19 19

Q3 is a variant that catenates the three parts enclosed (boxed) instead of the parts per se. The three parts generated at each recursive step are apparent in the structure of the final result. Applying the function derived from Q3 to the same argument multiple times gives different results because the pivots are chosen at random. In-order traversal of the results does yield the same sorted array.
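The three-way partition performed by Q can be sketched in Python (an illustrative translation; the dop's operand ⍺⍺ becomes an explicit comparison-function parameter, and `sign_of_difference` plays the role of (×-)):

```python
# Python sketch of the dop Q: quicksort parameterized by a three-way
# comparison function, with a randomly chosen pivot.
import random

def quicksort(cmp, items):
    if len(items) <= 1:
        return list(items)
    pivot = random.choice(items)
    s = [cmp(v, pivot) for v in items]
    before = [v for v, c in zip(items, s) if c < 0]   # strictly precede pivot
    equal  = [v for v, c in zip(items, s) if c == 0]  # equal to pivot
    after  = [v for v, c in zip(items, s) if c > 0]   # strictly follow pivot
    return quicksort(cmp, before) + equal + quicksort(cmp, after)

sign_of_difference = lambda a, b: (a > b) - (a < b)   # ¯1, 0, or 1, like (×-)
x = [2, 19, 3, 8, 3, 6, 9, 4, 19, 7, 0, 10, 15, 14]
print(quicksort(sign_of_difference, x))
# [0, 2, 3, 3, 4, 6, 7, 8, 9, 10, 14, 15, 19, 19]
```

As with Q, the ordering is a parameter: any function returning negative, zero, or positive works as the operand.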
 Q3←{1≥≢⍵:⍵ ⋄ (⊂∇ ⍵⌿⍨0>s)⍪(⊂⍵⌿⍨0=s)⍪⊂∇ ⍵⌿⍨0<s←⍵ ⍺⍺ ⍵⌷⍨?≢⍵}

 (×-) Q3 x
┌────────────────────────────────────────────┬─────┬┐
│┌──────────────┬─┬─────────────────────────┐│19 19││
││┌──────┬───┬─┐│6│┌──────┬─┬──────────────┐││ ││
│││┌┬─┬─┐│3 3│4││ ││┌┬─┬─┐│9│┌┬──┬────────┐│││ ││
││││││0│2││ │ ││ ││││7│8││ │││10│┌──┬──┬┐││││ ││
│││└┴─┴─┘│ │ ││ ││└┴─┴─┘│ │││ ││14│15││││││ ││
││└──────┴───┴─┘│ ││ │ │││ │└──┴──┴┘││││ ││
││ │ ││ │ │└┴──┴────────┘│││ ││
││ │ │└──────┴─┴──────────────┘││ ││
│└──────────────┴─┴─────────────────────────┘│ ││
└────────────────────────────────────────────┴─────┴┘
 (×-) Q3 x
┌───────────────────────────┬─┬─────────────────────────────┐
│┌┬─┬──────────────────────┐│7│┌────────────────────┬─────┬┐│
│││0│┌┬─┬─────────────────┐││ ││┌──────┬──┬────────┐│19 19│││
│││ │││2│┌────────────┬─┬┐│││ │││┌┬─┬─┐│10│┌──┬──┬┐││ │││
│││ │││ ││┌───────┬─┬┐│6│││││ │││││8│9││ ││14│15││││ │││
│││ │││ │││┌┬───┬┐│4│││ │││││ │││└┴─┴─┘│ │└──┴──┴┘││ │││
│││ │││ │││││3 3│││ │││ │││││ ││└──────┴──┴────────┘│ │││
│││ │││ │││└┴───┴┘│ │││ │││││ │└────────────────────┴─────┴┘│
│││ │││ ││└───────┴─┴┘│ │││││ │ │
│││ │││ │└────────────┴─┴┘│││ │ │
│││ │└┴─┴─────────────────┘││ │ │
│└┴─┴──────────────────────┘│ │ │
└───────────────────────────┴─┴─────────────────────────────┘

The above formulation is not new; see for example Figure 3.7 of the classic The Design and Analysis of Computer Algorithms. However, unlike the pidgin ALGOL program in Figure 3.7, Q is executable, and the partial order used in the sorting is an operand, the (×-) in the examples above.

Dfns with operators and trains

Dfns, especially anonymous dfns, work well with operators and trains. The following snippet solves a "Programming Pearls" puzzle: given a dictionary of English words, here represented as the character matrix a, find all sets of anagrams.
 a
pats
spat
teas
sate
taps
etas
past
seat
eats
tase
star
east
seta
 {⍵[⍋⍵]}⍤1 ⊢a
apst
apst
aest
aest
apst
aest
apst
aest
aest
aest
arst
aest
aest
 ({⍵[⍋⍵]}⍤1 {⊂⍵}⌸ ⊢) a
┌────┬────┬────┐
│pats│teas│star│
│spat│sate│    │
│taps│etas│    │
│past│seat│    │
│    │eats│    │
│    │tase│    │
│    │east│    │
│    │seta│    │
└────┴────┴────┘

The algorithm works by sorting the rows individually ({⍵[⍋⍵]}⍤1), and these sorted rows are used as keys ("signature" in the Programming Pearls description) to the key operator ⌸ to group the rows of the matrix. The expression on the right is a train, a syntactic form employed by APL to achieve tacit programming. Here, it is an isolated sequence of three functions such that (f g h) ⍵ ⇔ (f ⍵) g (h ⍵), whence the expression on the right is equivalent to ({⍵[⍋⍵]}⍤1 a) {⊂⍵}⌸ (⊢a).

Lexical scope

When an inner (nested) dfn refers to a name, it is sought by looking outward through enclosing dfns rather than down the call stack. This regime is said to employ lexical scope instead of APL's usual dynamic scope. The distinction becomes apparent only if a call is made to a function defined at an outer level. For the more usual inward calls, the two regimes are indistinguishable. For example, in the following function which, the variable ty is defined both in which itself and in the inner function f1. When f1 calls outward to f2 and f2 refers to ty, it finds the outer one (with value 'lexical') rather than the one defined in f1 (with value 'dynamic'):

which←{
 ty←'lexical'
 f1←{ty←'dynamic' ⋄ f2 ⍵}
 f2←{ty,⍵}
 f1 ⍵
}

 which ' scope'
lexical scope

Error-guard

The following function illustrates use of error guards:

plus←{
 tx←'catch all' ⋄ 0::tx
 tx←'domain' ⋄ 11::tx
 tx←'length' ⋄ 5::tx
 ⍺+⍵
}

 2 plus 3 ⍝ no errors
5
 2 3 4 5 plus 'three' ⍝ argument lengths don't match
length
 2 3 4 5 plus 'four' ⍝ can't add characters
domain
 2 3 plus 3 4⍴5 ⍝ can't add vector to matrix
catch all

In APL, error number 5 is "length error"; error number 11 is "domain error"; and error number 0 is a "catch all" for error numbers 1 to 999.
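The lexical-scope behaviour of the which dfn has a direct parallel in Python, whose closures also resolve names in the enclosing definition rather than in the caller (an illustrative sketch; the names mirror the dfn above):

```python
# Python, like dfns, uses lexical scope: the inner function f2 sees the
# ty defined in the enclosing which, not the one local to its caller f1.

def which(arg):
    ty = 'lexical'
    def f2(w):
        return ty + w      # resolves ty in the enclosing (lexical) scope
    def f1(w):
        ty = 'dynamic'     # local to f1; invisible to f2
        return f2(w)
    return f1(arg)

print(which(' scope'))  # lexical scope
```

Under dynamic scope, f2 would instead find f1's binding and print "dynamic scope"; the output shows that both languages look outward through the definitions, not down the call stack.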
The example shows the unwinding of the local environment before an error-guard's expression is evaluated. The local name tx is set to describe the purview of its following error-guard. When an error occurs, the environment is unwound to expose tx's statically correct value.

Dfns versus tradfns

Since direct functions are dfns, APL functions defined in the traditional manner are referred to as tradfns, pronounced "trad funs". Here, dfns and tradfns are compared by consideration of the function sieve: on the left is a dfn (as defined above); in the middle is a tradfn using control structures; on the right is a tradfn using gotos (→) and line labels.

A dfn can be anonymous; a tradfn must be named.
A dfn is named by assignment (←); a tradfn is named by embedding the name in the representation of the function and applying ⎕FX (a system function) to that representation.
A dfn is handier than a tradfn as an operand (see preceding items: a tradfn must be named; a tradfn is named by embedding ...).
Names assigned in a dfn are local by default; names assigned in a tradfn are global unless specified in a locals list.
Locals in a dfn have lexical scope; locals in a tradfn have dynamic scope, visible in called functions unless shadowed by their locals list.
The arguments of a dfn are named ⍺ and ⍵ and the operands of a dop are named ⍺⍺ and ⍵⍵; the arguments and operands of a tradfn can have any name, specified on its leading line.
The result (if any) of a dfn is unnamed; the result (if any) of a tradfn is named in its header.
A default value for ⍺ is specified more neatly than for the left argument of a tradfn.
Recursion in a dfn is effected by invoking ∇ or ∇∇ or its name; recursion in a tradfn is effected by invoking its name.
Flow control in a dfn is effected by guards and function calls; that in a tradfn is by control structures and → (goto) and line labels.
Evaluating an expression in a dfn not ending in assignment causes return from the dfn; evaluating a line in a tradfn not ending in assignment or goto displays the result of the line.
A dfn returns on evaluating an expression not ending in assignment, on evaluating a guarded expression, or after the last expression; a tradfn returns on → (goto) line 0 or a non-existing line, or on evaluating a return control structure, or after the last line.
The simpler flow control in a dfn makes it easier to detect and implement tail recursion than in a tradfn.
A dfn may call a tradfn and vice versa; a dfn may be defined in a tradfn, and vice versa.

History

Kenneth E. Iverson, the inventor of APL, was dissatisfied with the way user functions (tradfns) were defined. In 1974, he devised "formal function definition" or "direct definition" for use in exposition. A direct definition has two or four parts, separated by colons:

 name : expression
 name : expression0 : proposition : expression1

Within a direct definition, ⍺ denotes the left argument and ⍵ the right argument. In the first instance, the result of expression is the result of the function; in the second instance, the result of the function is that of expression0 if proposition evaluates to 0, or expression1 if it evaluates to 1. Assignments within a direct definition are dynamically local. Examples of using direct definition are found in the 1979 Turing Award Lecture and in books and application papers. Direct definition was too limited for use in larger systems. The ideas were further developed by multiple authors in multiple works, but the results were unwieldy. Of these, the "alternative APL function definition" of Bunda in 1987 came closest to current facilities, but was flawed in conflicts with existing symbols and in error handling which would have caused practical difficulties, and was never implemented.
The main distillates from the different proposals were that (a) the function being defined is anonymous, with subsequent naming (if required) being effected by assignment; (b) the function is denoted by a symbol, thereby enabling anonymous recursion.

In 1996, John Scholes of Dyalog Limited invented direct functions (dfns). The ideas originated in 1989 when he read a special issue of The Computer Journal on functional programming. He then proceeded to study functional programming and became strongly motivated ("sick with desire", like Yeats) to bring these ideas to APL. He initially operated in stealth because he was concerned the changes might be judged too radical and an unnecessary complication of the language; other observers say that he operated in stealth because Dyalog colleagues were not so enamored and thought he was wasting his time and causing trouble for people. Dfns were first presented in the Dyalog Vendor Forum at the APL '96 Conference and released in Dyalog APL in early 1997. Acceptance and recognition were slow in coming. As late as 2008, in Dyalog at 25, a publication celebrating the 25th anniversary of Dyalog Limited, dfns were barely mentioned (mentioned twice as "dynamic functions" and without elaboration).

Dfns are implemented in Dyalog APL, NARS2000, and ngn/apl. They also play a key role in efforts to exploit the computing abilities of a graphics processing unit (GPU).
https://en.wikipedia.org/wiki/KaVo%20Kerr
KaVo Kerr was a dental equipment manufacturer group that was later folded into Envista. The group stemmed from a joint venture set up in 2016 between KaVo (KaVo Dental GmbH), which was established in 1909 in Berlin, Germany, and Kerr Corporation, which was founded in 1891 in Detroit, Michigan, and was a division of Danaher Corporation headquartered in Brea, California. In December 2019, Danaher spun off its dental segment into an independent publicly traded company, Envista Holdings Corporation, which was expected to employ 12,000 people worldwide.

History

Kerr

Kerr was established in 1891 in Detroit, Michigan by brothers Robert and John Kerr as The Detroit Dental Manufacturing Company and started to offer its products and services to the European market in 1893. The company officially changed its name to The KERR Manufacturing Company in 1939. The company established its first factory in Europe in Scafati, Italy in 1959. Kerr acquired part of the McShirley line of products in 1971. Later, in 1978, the Sybron Dental Product Division was formed. In 2001, Kerr acquired the Hawe Neos company with the aim of enhancing its offering of prophylaxis consumables. In 2006, Kerr became part of Danaher Corporation. In 2014, Kerr acquired DUX Dental and Vettec Inc. In 2015, Total Care, Axis SybronEndo and Kerr reorganized into a single organization: Kerr Dental.

KaVo

KaVo was established in 1909 in Berlin, Germany by Alois Kaltenbach as KaVo Dental GmbH. By 1919, Richard Voigt had joined KaVo, and the number of employees had grown to 300 by 1939. In 1946, the headquarters were moved from Potsdam to the Upper Swabian town of Biberach an der Riss. In 1959, the company opened a dental technology factory in Leutkirch. In 2004, it was purchased by Danaher Corporation. In the same year, KaVo acquired Gendex. In 2005, KaVo acquired Pelton & Crane, a dental operatory equipment manufacturer with a 100-year history in North America, which joined the KaVo Kerr family along with DEXIS.
KaVo acquired i-CAT in 2007, followed by the Soredex imaging brands in 2009. In 2012, Aribex, best known for the NOMAD handheld and portable X-ray systems, was acquired by the KaVo Dental Group. In September 2021, Envista announced that KaVo would be sold to Planmeca for $455 million.

See also

PaloDEx
KaVo Dental (German Wikipedia)
https://en.wikipedia.org/wiki/BED%20%28file%20format%29
The BED (Browser Extensible Data) format is a text file format used to store genomic regions as coordinates and associated annotations. The data are presented in the form of columns separated by spaces or tabs. This format was developed during the Human Genome Project and then adopted by other sequencing projects. As a result of this increasingly wide use, the format had already become a de facto standard in bioinformatics before a formal specification was written. One of the advantages of this format is the manipulation of coordinates instead of nucleotide sequences, which reduces the computational cost of comparing all or part of genomes. In addition, its simplicity makes it easy to manipulate and parse coordinates or annotations using text-processing and scripting languages such as Python, Ruby, or Perl, or more specialized tools such as BEDTools.

History

The end of the 20th century saw the emergence of the first projects to sequence complete genomes. Among these projects, the Human Genome Project was the most ambitious at the time, aiming to sequence for the first time a genome of several gigabases. This required the sequencing centres to carry out major methodological development in order to automate the processing of sequences and their analyses. Thus, many formats were created, such as FASTQ, GFF, and BED. However, no official specifications were published at the time, which affected some formats such as FASTQ when sequencing projects multiplied at the beginning of the 21st century. The format's wide use within genome browsers allowed it to stabilize in practice, as the browsers' description of the format was adopted by many tools.

Format

Initially the BED format did not have any official specification. Instead, the description provided by the UCSC Genome Browser has been widely used as a reference. A formal BED specification was published in 2021 under the auspices of the Global Alliance for Genomics and Health.
Description

A BED file consists of a minimum of three columns, to which nine optional columns can be added for a total of twelve columns. The first three columns contain the name of the chromosome or scaffold, and the start and end coordinates of the sequence considered. The next nine columns contain annotations related to these sequences. The columns must be separated by spaces or tabs, the latter being recommended for reasons of compatibility between programs. Each row of a file must have the same number of columns. The order of the columns must be respected: if columns of high numbers are used, the columns of intermediate numbers must be filled in.

Header

A BED file can optionally contain a header. However, there is no official description of the format of the header. It may contain one or more lines and be signified by different words or symbols, depending on whether its role is functional or simply descriptive. Thus, a header line can begin with these words or symbol:

"browser": functional header used by the UCSC Genome Browser to set options related to it,
"track": functional header used by genome browsers to specify display options related to it,
"#": descriptive header to add comments such as the name of each column.

Coordinate system

Unlike the coordinate system used by other standards such as GFF, the system used by the BED format is zero-based for the coordinate start and one-based for the coordinate end. Thus, the nucleotide with the coordinate 1 in a genome will have a value of 0 in column 2 and a value of 1 in column 3. A thousand-base BED interval with the following start and end:

chr7 0 1000

would convert to the following 1-based "human" genome coordinates, as used by a genome browser such as UCSC:

chr7 1 1000

This choice is justified by the method of calculating the lengths of the genomic regions considered, this calculation being based on the simple subtraction of the end coordinate (column 3) by that of the start (column 2): length = end − start.
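The coordinate conventions above reduce to two one-line calculations (an illustrative sketch; the helper names are not part of any BED library):

```python
# Half-open, zero-based BED coordinates make the length of a region a
# plain subtraction (end - start), as described above.

def bed_length(start, end):
    return end - start

# chr7 0 1000  ->  1000 bases
assert bed_length(0, 1000) == 1000

def to_one_based(start, end):
    """Convert a BED interval to 1-based, fully closed 'browser' coordinates."""
    return start + 1, end

print(to_one_based(0, 1000))  # (1, 1000), i.e. chr7:1-1000
```

Note that only the start coordinate shifts during conversion; the end value is the same in both systems.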
When the coordinate system is based on the use of 1 to designate the first position, the calculation becomes slightly more complex: length = end − start + 1. This slight difference can have a relatively large impact in terms of computation time when data sets with several thousand to hundreds of thousands of lines are used. Alternatively, we can view both coordinates as zero-based, where the end position is non-inclusive. In other words, the zero-based end position denotes the index of the first position after the feature. For the example above, the zero-based end position of 1000 marks the first position after the feature, which includes positions 0 through 999.

Examples

Here is a minimal example:

chr7 127471196 127472363
chr7 127472363 127473530
chr7 127473530 127474697

Here is a typical example with nine columns from the UCSC Genome Browser. The first three lines are settings for the UCSC Genome Browser and are unrelated to the data specified in BED format:

browser position chr7:127471196-127495720
browser hide all
track name="ItemRGBDemo" description="Item RGB demonstration" visibility=2 itemRgb="On"
chr7 127471196 127472363 Pos1 0 + 127471196 127472363 255,0,0
chr7 127472363 127473530 Pos2 0 + 127472363 127473530 255,0,0
chr7 127473530 127474697 Pos3 0 + 127473530 127474697 255,0,0
chr7 127474697 127475864 Pos4 0 + 127474697 127475864 255,0,0
chr7 127475864 127477031 Neg1 0 - 127475864 127477031 0,0,255
chr7 127477031 127478198 Neg2 0 - 127477031 127478198 0,0,255
chr7 127478198 127479365 Neg3 0 - 127478198 127479365 0,0,255
chr7 127479365 127480532 Pos5 0 + 127479365 127480532 255,0,0
chr7 127480532 127481699 Neg4 0 - 127480532 127481699 0,0,255

File extension

There is currently no standard file extension for BED files, but the ".bed" extension is the most frequently used. The number of columns is sometimes noted in the file extension, for example: ".bed3", ".bed4", ".bed6", ".bed12".
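The simplicity of the format noted earlier can be seen in a minimal parser for the examples above (an illustrative sketch, not an official library; it assumes no data line begins with "browser", "track", or "#"):

```python
# Minimal BED parser: skips browser/track/# header lines and returns
# (chrom, start, end) plus any optional columns as strings.

def parse_bed(text):
    records = []
    for line in text.splitlines():
        line = line.strip()
        # Header lines may start with "browser", "track", or "#".
        if not line or line.startswith(('browser', 'track', '#')):
            continue
        fields = line.split()  # handles both spaces and tabs
        chrom, start, end = fields[0], int(fields[1]), int(fields[2])
        records.append((chrom, start, end, *fields[3:]))
    return records

minimal = """chr7 127471196 127472363
chr7 127472363 127473530
chr7 127473530 127474697"""
print(parse_bed(minimal))
```

The same function handles the nine-column example, since the optional columns are simply carried along after the mandatory three.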
Usage

The use of BED files has spread rapidly with the emergence of new sequencing techniques and the manipulation of larger and larger sequence files. Comparing genomic sequences or even entire genomes by comparing the sequences themselves can quickly require significant computational resources and become time-consuming. Handling BED files makes this work more efficient by using coordinates to extract sequences of interest from sequencing sets, or to directly compare and manipulate two sets of coordinates. To perform these tasks, various programs can be used to manipulate BED files, including but not limited to the following:

Genome browsers: loading BED files allows the visualization and extraction of sequences of the currently sequenced mammalian genomes (e.g. the Manage Custom Tracks function in the UCSC Genome Browser).
Galaxy: a web-based platform.
Command-line tools:
BEDTools: a program allowing the manipulation of coordinate sets and the extraction of sequences from a BED file.
BEDOPS: a suite of tools for fast boolean operations on BED files.
BedTk: a faster alternative to BEDTools for a limited and specialized sub-set of operations.
covtobed: a tool to convert a BAM file into a BED coverage track.

.genome files

BEDTools also uses .genome files to determine chromosomal boundaries and ensure that padding operations do not extend past chromosome boundaries. Genome files are formatted as shown below: a two-column tab-separated file with a one-line header.

chrom size
chr1 248956422
chr2 242193529
chr3 198295559
chr4 190214555
chr5 181538259
chr6 170805979
chr7 159345973
...
https://en.wikipedia.org/wiki/Transpacific%20crossing
Transpacific crossings are voyages of passengers and cargo across the Pacific Ocean between Asia, Oceania, and the Americas. Transpacific voyages frequently cross the International Date Line. The first recorded crossing of the Pacific was made in 1521 by a Spanish expedition led by the Portuguese explorer Ferdinand Magellan. Commercial transpacific flights have been available since 1935.

History

The Spanish expedition of the Portuguese explorer Magellan was the first to cross the Pacific, in 1521, and the one to give the ocean its name. After discovering and crossing the Strait of Magellan in November 1520, the expedition sailed northwest across the Pacific for over three months and reached the Philippines in March 1521. Juan Sebastian Elcano would continue the expedition to complete the first world circumnavigation in 1522. The first navigator to cross the Pacific from west to east was Andres de Urdaneta, who discovered the easterly route across the Pacific from the Philippines to Mexico in 1565. The first transpacific trade route in history was the Spanish Manila galleon route, which lasted from 1565 to 1815 and followed Urdaneta's discovery of the easterly route or tornaviaje in 1565. It ended two and a half centuries later, when most Pacific ports became open to world trade. Other early transpacific voyages include those of the Spanish navigators García Jofre de Loaísa in 1526, Álvaro de Saavedra Cerón in 1527, Alvaro de Mendaña in 1567 and 1595, and Pedro Fernandes de Queirós in 1606. Another early navigator to cross the Pacific from Asia to the Americas was Francisco Gali, who completed this journey in 1584.

In the 19th century, the first liners built specially for the transpacific ocean service were the "Empress" vessels of the Canadian Pacific Railway. After the railway reached the Pacific seaboard in 1885, the liners began operation in 1891. In 1928, Charles Kingsford Smith and his crew were the first to cross the Pacific by flight.
Smith and Australian aviator Charles Ulm arrived in the United States and began to search for an aircraft. Famed Australian polar explorer Sir Hubert Wilkins sold them a Fokker F.VII/3m monoplane, which they named the Southern Cross. Ulm was the relief pilot. The other crewmen were Americans: James Warner, the radio operator, and Captain Harry Lyon, the navigator and engineer. In 1935, commercial transpacific flights to and from California began operation. On November 22, 1935, "Pan American Airlines' China Clipper launched its first transpacific flight, covering a distance of 8,000 miles". The route was ready for passenger service by October 1936. Between March and April 2019, blind sailor Matsuhiro Iwamoto of Japan and Doug Smith of the United States sailed from San Diego, United States, to Fukushima, Japan, arriving by April 24 and making Iwamoto the first blind sailor to cross the Pacific non-stop. Iwamoto's first attempt in 2013 failed when his boat hit a whale. See also Transpacific flight Exploration of the Pacific Asia-Pacific Pre-Columbian trans-oceanic contact theories References International transport Crossing
Transpacific crossing
[ "Physics" ]
653
[ "Physical systems", "Transport", "International transport" ]
61,952,795
https://en.wikipedia.org/wiki/Probe%20electrospray%20ionization
Probe electrospray ionization (PESI) is an electrospray-based ambient ionization technique which is coupled with mass spectrometry for sample analysis. Unlike traditional mass spectrometry ion sources which must be maintained in a vacuum, ambient ionization techniques permit sample ionization under ambient conditions, allowing for the high-throughput analysis of samples in their native state, often with minimal or no sample pre-treatment. The PESI ion source simply consists of a needle to which a high voltage is applied following sample pick-up, initiating electrospray directly from the solid needle. History Probe electrospray ionization is an ambient ionization mass spectrometry technique developed by Kenzo Hiraoka et al. at the University of Yamanashi, Japan. The technique was developed to address some of the issues associated with traditional electrospray ionization (ESI), including clogging of the capillary and contamination, whilst providing a means of rapid and direct sample analysis. Since its initial conception, various modified forms of the PESI ion source have been developed, and the PESI-MS system has been commercialized by instrument manufacturing company Shimadzu. Principle of operation The PESI ion source consists of a solid needle or wire which acts as both the sampling probe and electrospray emitter. The needle is moved up and down along a vertical axis, a process which can be either automated or manual. When the needle is lowered to the sampling stage, the tip of the needle briefly touches the surface of the sample, which is typically a liquid. During this stage, the needle is held at ground potential. The needle is then raised to be level with the mass spectrometer inlet, where a high voltage of 2–3 kV is applied. Electrospray is induced at the tip of the needle, producing analyte ions which are drawn into the mass spectrometer for analysis. The mechanism by which ions are formed is believed to be identical to traditional electrospray ionization. 
As a result, in positive ion mode analytes are often observed as the protonated, sodiated and potassiated ions, depending on the sample and analyte type. Although the amount of sample picked up by the needle is largely dependent on sample viscosity, it has been estimated that just a few picolitres of the sample solution are typically used. Because of this, the technique can be applied to small sample sizes, which is particularly useful when limited sample amounts are available. As such a small sample amount is picked up and completely exhausted during the ionization process, issues of contamination are severely reduced. Furthermore, the process of sampling and ionization takes just a few seconds, so PESI-MS is suitable for high-throughput analysis. Sequential ionization A phenomenon observed with probe electrospray ionization is the sequential and exhaustive ionization of analytes with different surface activities. During the development of PESI, it was discovered that analytes could be sequentially ionized throughout the electrospray, thus enabling a temporal separation of components within a sample. In normal ESI, the sample solution is typically continuously supplied through a capillary and the charged droplets contain all sample components, with more surface-active analytes being constantly preferentially ionized. In PESI, surface-active analytes are also preferentially ionized. However, as a finite droplet exists on the tip of the needle, following the depletion of surface-active analytes, the remaining components in the droplet can then be ionized and observed. This can result in the production of distinctively different mass spectra from a single sample over the application of the high voltage for just a few seconds. This effect offers a particular advantage in the analysis of analytes suffering from ion suppression effects. 
The presence of surface-active analytes or charged solvent additives can result in the suppressed ionization of analytes of interest, resulting in low sensitivity or the complete absence of the analyte. The effects of ion suppression can be minimized by reducing the complexity of the sample, for instance through sample extraction techniques such as solid phase extraction, or by separation of analytes of interest using chromatographic separation. However, these sample preparation steps can be laborious, time-consuming and expensive. PESI enables a reduction in ion suppression without the need for sample pre-treatment. By separating the ionization of different analytes, components causing ion suppression can be exhausted before enabling the ionization of components of interest. This has been demonstrated in a number of scenarios, including in the analysis of raw urine, with concentrated components such as creatinine ionizing initially, followed by the appearance of previously undetected metabolites. Sheath-flow PESI As the PESI needle is only applicable to liquid or penetrable solid samples, it cannot be used for the analysis of the majority of dry solid materials. To circumvent this limitation, sheath-flow probe electrospray ionization (sfPESI) was developed, a modification of the traditional PESI technique. The sfPESI ion source consists of a solid needle housed within a plastic sheath (typically a gel-loading tip) filled with a small amount of solvent. The needle protrudes from the base of the sheath by approximately 0.1 mm, where a minute solvent droplet is held. The base of the probe is briefly touched to the sample surface, where a convex solvent meniscus forms between the probe and the sample, wetting the sample and enabling analyte extraction. The chemistry of the solvent can be modified to induce the extraction of particular analytes of interest. 
After application to the sample, the sfPESI probe is then raised to be level with the mass spectrometer inlet, with solubilised analytes held in the droplet at the tip of the needle, and a high voltage applied. sfPESI offers the same advantages as standard PESI, including the sequential and exhaustive ionization phenomenon, whilst enabling the direct analysis of dry samples. Applications PESI-MS has proven to be particularly effective in the metabolic analysis of biological materials, having been applied to the analysis of cancerous and non-cancerous breast tissue, as well as brain and liver tissue removed from mice. Interestingly, PESI-MS has recently been applied to the direct analysis of living animals for real-time metabolic profiling. Due to the narrow diameter of the PESI needle and brief sample introduction time, PESI is reasonably non-invasive. As a result, the technique has been used to sample from the organs of living anaesthetized animals, specifically to analyse metabolites in the brain, spleen, liver and kidney of a living mouse. In addition to this, PESI-MS has been applied to the on-site analysis of food products for the purpose of quality control, to the detection of herbicides in body fluids to demonstrate exposure, and finally to the detection of illicit drugs in bodily fluids to indicate drug use. Several groups have also harnessed the small size of the PESI probe to achieve single-cell analysis, demonstrating the capability of rapidly detecting metabolites at cellular and subcellular levels. The PESI modification known as sheath-flow PESI has been applied to the analysis of various solid samples in their native state, including pharmaceutical tablets, illicit drugs, food and agricultural products, and pesticides. In addition, sfPESI has been utilised in the field of forensic science for the analysis and identification of fresh and dried body fluids of forensic interest. 
In this work, sfPESI was also coupled with tandem mass spectrometry (MS/MS), demonstrating the capability of ion fragmentation for identification of unknown components. See also Ambient ionization Desorption electrospray ionization Electrospray ionization References Ionization
Probe electrospray ionization
[ "Physics", "Chemistry" ]
1,602
[ "Ionization", "Physical phenomena" ]
61,953,437
https://en.wikipedia.org/wiki/EPSG%20Geodetic%20Parameter%20Dataset
EPSG Geodetic Parameter Dataset (also EPSG registry) is a public registry of geodetic datums, spatial reference systems, Earth ellipsoids, coordinate transformations and related units of measurement, originated by a member of the European Petroleum Survey Group (EPSG) in 1985. Each entity is assigned an EPSG code between 1024 and 32767, along with a standard machine-readable well-known text (WKT) representation. The dataset is maintained by the IOGP Geomatics Committee. Most geographic information systems (GIS) and GIS libraries use EPSG codes as Spatial Reference System Identifiers (SRIDs) and EPSG definition data for identifying coordinate reference systems, projections, and performing transformations between these systems, while some also support SRIDs issued by other organizations (such as Esri). Common EPSG codes EPSG:4326 - WGS 84, latitude/longitude coordinate system based on the Earth's center of mass, used by the Global Positioning System among others. EPSG:3857 - Web Mercator projection used for display by many web-based mapping tools, including Google Maps and OpenStreetMap. EPSG:7789 - International Terrestrial Reference Frame 2014 (ITRF2014), an Earth-fixed system that is independent of continental drift. History The dataset was created in 1985 by Jean-Patrick Girbig of Elf, to "standardize, improve and share spatial data between members of the European Petroleum Survey Group". It was made public in 1993. In 2005, the EPSG organisation was merged into International Association of Oil & Gas Producers (IOGP), and became the Geomatics Committee. However, the name of the EPSG registry was kept to avoid confusion. Since then, the acronym "EPSG" became increasingly synonymous with the dataset or registry itself. See also List of map projections References External links Official website Spatial databases Spatial analysis Geodesy Catalogues Geomatics Geographic coordinate systems
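The role an EPSG code plays in practice can be sketched with a short, self-contained example. The conversion below hard-codes the spherical Mercator math that EPSG:3857 standardizes, purely as an illustration; real GIS code would normally pass the EPSG codes ("EPSG:4326", "EPSG:3857") to a PROJ-based library and let it look up the definitions, rather than reimplement the projection.

```python
import math

R = 6378137.0  # WGS 84 semi-major axis, used as the sphere radius in EPSG:3857

def wgs84_to_web_mercator(lon_deg, lat_deg):
    """Project a WGS 84 (EPSG:4326) lon/lat pair onto Web Mercator (EPSG:3857).

    This is the spherical Mercator formula that EPSG:3857 standardizes;
    production code would normally delegate to a PROJ-based library keyed
    by the EPSG codes instead of hard-coding the math.
    """
    x = R * math.radians(lon_deg)
    y = R * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y

print(wgs84_to_web_mercator(0.0, 0.0))       # the lon/lat origin maps to (0.0, 0.0)
print(wgs84_to_web_mercator(180.0, 0.0)[0])  # ~20037508.34, the projection's x extent
```

The hard-coded constant 6378137 m is exactly what makes a registry useful: every parameter of the projection is pinned down by the EPSG entry, so two systems exchanging data only need to agree on the code.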
EPSG Geodetic Parameter Dataset
[ "Physics", "Mathematics" ]
410
[ "Applied mathematics", "Spatial analysis", "Geographic coordinate systems", "Space", "Coordinate systems", "Spacetime", "Geodesy" ]
61,953,513
https://en.wikipedia.org/wiki/Bootstrap%20Studio
Bootstrap Studio is a proprietary web design and development application. It offers a large number of components for building responsive pages including headers, footers, galleries and slideshows, along with basic elements such as spans and divs. The program can be used for building websites and prototypes. It is built on the popular Electron framework, and is cross-platform. History Bootstrap Studio was launched on October 19, 2015 with a post on Product Hunt, where it reached number 4 in the Product of the Day category. Version 2.0 of the software was released on January 22, 2016 and brought JavaScript editing, multi-page support and improved CSS support. Version 4.0 was launched on November 1, 2017. The release added support for the Bootstrap 4 framework and for CSS grid, as well as the filter, position: sticky and blend-mode CSS properties. On August 22, 2019, Bootstrap Studio was officially introduced into the GitHub Student Pack, making it available to students from around the world. Bootstrap Studio v6.7.0 updated Bootstrap to v5.3.3 on May 30, 2024. References External links HTML editors Web development software Responsive web design Web design MacOS software Windows software Cross-platform software 2015 software
Bootstrap Studio
[ "Engineering" ]
259
[ "Design", "Web design" ]
61,954,047
https://en.wikipedia.org/wiki/Anderson%20function
Anderson functions describe the projection of a magnetic dipole field in a given direction at points along an arbitrary line. They are useful in the study of magnetic anomaly detection, with historical applications in submarine hunting and underwater mine detection. They approximately describe the signal detected by a total field sensor as the sensor passes by a target (assuming the target's signature is small compared to the Earth's magnetic field). Definition The magnetic field from a magnetic dipole along a given line, and in any given direction, can be described by the basis functions f_n(ε) = ε^n / (1 + ε^2)^(5/2), n = 0, 1, 2, which are known as Anderson functions. The relevant quantities are: the dipole's strength and direction; the projected direction (often the Earth's magnetic field in a region); the position x along the line; the unit vector pointing in the direction of the line; the vector r0 from the dipole to the point of closest approach (CPA) of the line; and ε = x/|r0|, a dimensionless quantity for simplification. The total magnetic field along the line is given by a weighted sum of the three basis functions, with a prefactor involving the magnetic constant μ0 and weights called the Anderson coefficients, which depend on the geometry of the system. The coefficients are built from the unit vectors of the dipole direction and the projected direction. Note that the antisymmetric portion of the signal is represented by the second function, f_1. Correspondingly, its sign depends on how the direction along the line is defined (e.g. which direction is 'forward'). Total field measurements The total field measurement resulting from a dipole field in the presence of a background field (such as the Earth's magnetic field) is, to a good approximation when the background field is much larger than contributions from the dipole, the sum of the background field and the projection of the dipole field onto the background field. This means that the total field can be accurately described as an Anderson function with an offset. References Functions and mappings
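The description above can be sketched numerically. The functional form f_n(ε) = ε^n / (1 + ε^2)^(5/2) and all names below (anderson, anomaly, the weights a) are assumptions for illustration, reconstructed from the surrounding text (three basis functions of a scaled along-track coordinate, the second of which is antisymmetric); this is a sketch, not a transcription of the original equations.

```python
def anderson(n, eps):
    """Assumed Anderson basis function: f_n(eps) = eps**n / (1 + eps**2)**2.5,
    for n in {0, 1, 2}; eps is the along-track position scaled by the
    closest-approach (CPA) distance."""
    if n not in (0, 1, 2):
        raise ValueError("Anderson functions are defined for n = 0, 1, 2")
    return eps ** n / (1.0 + eps ** 2) ** 2.5

def anomaly(eps, a):
    """Total-field anomaly as a weighted sum of the basis functions; the
    weights a (hypothetical here) would encode dipole strength, orientation
    and the geometry of the pass."""
    return sum(a_n * anderson(n, eps) for n, a_n in enumerate(a))

# f_0 and f_2 are even in eps while f_1 is odd, matching the note that the
# antisymmetric portion of the signal is carried by the second function.
for eps in (0.5, 1.25):
    assert anderson(0, eps) == anderson(0, -eps)
    assert anderson(1, eps) == -anderson(1, -eps)
    assert anderson(2, eps) == anderson(2, -eps)

print(anomaly(0.0, (1.0, 2.0, 3.0)))  # only f_0 is nonzero at the CPA -> 1.0
```

The parity check is the practical payoff: fitting the even and odd parts of a measured signal separately constrains the coefficients, which is how a dipole signature is matched against a recorded pass.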
Anderson function
[ "Mathematics" ]
375
[ "Mathematical analysis", "Functions and mappings", "Mathematical relations", "Mathematical objects" ]
61,956,101
https://en.wikipedia.org/wiki/Energy%20Transitions%20Commission
The Energy Transitions Commission (ETC) is an international think tank, focusing on economic growth and climate change mitigation. It was created in September 2015 and is based in London. The commission currently contains 32 commissioners from a selection of individuals and company and government leaders. Activities The primary activity of the commission is publishing reports and position papers. They are typically supported by a body of readily available or explicitly commissioned data sets provided by various independent or industry-related organizations. The findings of reports are then reviewed through a broad consultation process within and outside of the commission. Finally, the report or position paper is finalized and generally understood to constitute the collective view of the ETC commission. Although individual commissioners may disagree with particular findings or recommendations, the general direction of the arguments developed in the publications is guided by consensus. Publications Since its founding in 2015, the commission has published two extensive reports and half a dozen papers. For example, Pathways from Paris – Assessing the INDC Opportunity is a 25-page study of INDCs (i.e. the plans developed by individual countries and submitted at the 2015 UN Climate Change Conference in Paris). This investigation highlighted the mechanisms various countries utilize in order to reduce emissions and identified opportunities for further reductions. News outlets of general interest and the specialized press reported summaries of these reports. Both reports outlined below were cited as references in several articles in a 2018 special report edition of The Economist magazine. Better Energy, Greater Prosperity This 120-page report recognized the opportunity to halve global carbon emissions by 2040. 
According to the report, it is possible to simultaneously ensure economic development and access to affordable, sustainable energy for all, while reducing carbon emissions by half the current output. The report suggested four strategies to be concurrently implemented: Accelerate clean electricity access. Decarbonize beyond power generation, using bioenergy, hydrogen, and carbon capture for industrial activities and transport modes which cannot be electrified in an economical fashion. Improve energy productivity by targeting 3% energy productivity growth per year (compared to 1.5% currently). Optimize remaining fossil fuel use. According to the report, the strategies listed above would reduce fossil fuel consumption by 30%, but fossil fuels would still need to meet 50% of energy needs. This, the report explained, could be addressed by optimizing usage of these sources by switching from coal to gas, by preventing methane leakages, and by stopping routine flaring. Another area of optimization would come from carbon capture or sequestration such as underground storage, and finally a decrease in fossil fuel use. The report suggested two solutions for energy policy: Increased investment, keeping in mind that the investment required by the transition is estimated to be between $300 and $600 billion USD annually. At this level, the cost would not cause a significant macroeconomic challenge, relative to the approximately $20 trillion in anticipated savings and investments annually. The issue is more one of a shift in the mix of investments: moving away from fossil fuels and toward low-carbon technologies and energy-efficient equipment and infrastructure. Public governance, with the introduction of coherent and predictable policies which favour the energy transition, along with the phasing out of fossil fuel subsidies and the introduction of carbon pricing. 
Mission Possible This 172-page report focused on the "hard to abate sectors", namely: Heavy industry: cement, steel and plastics Heavy duty transport: heavy road transport, maritime shipping, and aviation Collectively, these sectors currently represent approximately 30% of energy emissions, with the potential to increase to 60% by 2050 (due to the reduction of the share owed to other sectors, and to the demand growth in these hard to abate sectors). The report concluded that full decarbonization of these sectors is feasible and that the cost to the global economy would be less than 0.5% of GDP by 2050. It also identified cement, plastics and shipping as the most challenging sectors, due to process emissions, end-of-life emissions and the fragmented nature of the maritime industry, respectively. The feasibility, if not inevitability, of some of these transitions, for example those concerning the industrial production of ammonia, is echoed by (or in some cases originates from) the respective industry sectors. Funding The ETC is funded by various businesses and organizations, including major oil and gas companies, which has been a source of concern for many observers. Current or past sponsors include Bank of America Merrill Lynch, BHP Billiton, Energy Systems Catapult, CO2 Sciences, the European Climate Foundation, the Grantham Foundation and the UN Foundation. Regardless of funding, every Commissioner has an equal voice and participation in ETC activities. List of commissioners References Think tanks based in the United Kingdom Emissions reduction
Energy Transitions Commission
[ "Chemistry" ]
949
[ "Greenhouse gases", "Emissions reduction" ]
61,956,186
https://en.wikipedia.org/wiki/Consumer%20green%20energy%20program
A consumer green energy program is a program that enables households to buy energy from renewable sources. By allowing consumers to purchase renewable energy, it both reduces the use of fossil fuels and promotes the use of renewable energy sources such as solar and wind. In several countries with common carrier arrangements, electricity retailing arrangements make it possible for consumers to purchase "green" electricity from either their utility or a green power provider. Electricity is considered to be green if it is produced from a source that produces relatively little pollution, and the concept is often considered equivalent to renewable energy. Although electricity is the most common green energy, biomethane is sold as "green gas" in some locations. In many countries, green energy currently provides a very small amount of electricity, generally contributing less than 2 to 5% to the overall pool of electricity offered by most utility companies, electric companies, or state power pools. In some U.S. states, local governments have formed regional power purchasing pools using Community Choice Aggregation and Solar Bonds to achieve a 51% renewable mix or higher, such as in the City of San Francisco. By participating in a green energy program a consumer may be having an effect on the energy sources used and ultimately might be helping to promote and expand the use of green energy. They are also making a statement to policy makers that they are willing to pay a price premium to support renewable energy. Green energy consumers either obligate the utility companies to increase the amount of green energy that they purchase from the pool (so decreasing the amount of non-green energy they purchase), or directly fund the green energy through a green power provider. If insufficient green energy sources are available, the utility must develop new ones or contract with a third party energy supplier to provide green energy, causing more to be built. 
However, there is no way for the consumer to check whether the electricity bought is "green" or not. In some countries such as the Netherlands, electricity companies guarantee to buy an equal amount of 'green power' as is being used by their green power customers. The Dutch government exempts green power from pollution taxes, which means green power is hardly any more expensive than other power. Green energy and labeling by region European Union Directive 2004/8/EC of the European Parliament and of the Council of 11 February 2004 on the promotion of cogeneration based on a useful heat demand in the internal energy market includes article 5 (Guarantee of origin of electricity from high-efficiency cogeneration). European environmental NGOs have launched an ecolabel for green power. The ecolabel is called EKOenergy. It sets criteria for sustainability, additionality, consumer information and tracking. Only part of the electricity produced by renewables fulfills the EKOenergy criteria. United Kingdom The Green Energy Supply Certification Scheme was launched in 2010: it implements guidelines from the Energy Regulator, Ofgem, and sets requirements on transparency, the matching of sales by renewable energy supplies, and additionality. Green electricity in the United Kingdom is widespread, and green gas is supplied to over a million homes. United States The United States Department of Energy (DOE), the Environmental Protection Agency (EPA), and the Center for Resource Solutions (CRS) recognize the voluntary purchase of electricity from renewable energy sources (also called renewable electricity or green electricity) as green power. The most popular way to purchase renewable energy, as revealed by NREL data, is through purchasing Renewable Energy Certificates (RECs). According to a Natural Marketing Institute (NMI) survey, 55 percent of American consumers want companies to increase their use of renewable energy. 
DOE selected six companies for its 2007 Green Power Supplier Awards, including Constellation NewEnergy; 3Degrees; Sterling Planet; SunEdison; Pacific Power and Rocky Mountain Power; and Silicon Valley Power. The combined green power provided by those six winners equals more than 5 billion kilowatt-hours per year, which is enough to power nearly 465,000 average U.S. households. In 2014, Arcadia Power made RECs available to homes and businesses in all 50 states, allowing consumers to use "100% green power" as defined by the EPA's Green Power Partnership. The U.S. Environmental Protection Agency (USEPA) Green Power Partnership is a voluntary program that supports the organizational procurement of renewable electricity by offering expert advice, technical support, tools and resources. This can help organizations lower the transaction costs of buying renewable power, reduce their carbon footprint, and communicate their leadership to key stakeholders. Throughout the country, more than half of all U.S. electricity customers now have an option to purchase some type of green power product from a retail electricity provider. Roughly one-quarter of the nation's utilities offer green power programs to customers, and voluntary retail sales of renewable energy in the United States totaled more than 12 billion kilowatt-hours in 2006, a 40% increase over the previous year. In the United States, one of the main problems with purchasing green energy through the electrical grid is the current centralized infrastructure that supplies the consumer's electricity. This infrastructure has led to increasingly frequent brownouts and blackouts, high CO2 emissions, higher energy costs, and power quality issues. An additional $450 billion will be invested to expand this fledgling system over the next 20 years to meet increasing demand. In addition, this centralized system is now being further overtaxed with the incorporation of renewable energies such as wind, solar, and geothermal energies. 
Renewable resources, due to the amount of space they require, are often located in remote areas where there is a lower energy demand. The current infrastructure would make transporting this energy to high demand areas, such as urban centers, highly inefficient and in some cases impossible. In addition, regardless of the amount of renewable energy produced or the economic viability of such technologies, only about 20 percent can be incorporated into the grid. To have a more sustainable energy profile, the United States must move towards implementing changes to the electrical grid that will accommodate a mixed-fuel economy. Several initiatives are being proposed to mitigate distribution problems. First and foremost, the most effective way to reduce the United States' CO2 emissions and slow global warming is through conservation efforts. Opponents of the current US electrical grid have also advocated for decentralizing the grid. This system would increase efficiency by reducing the amount of energy lost in transmission. It would also be economically viable, as it would reduce the number of power lines that need to be constructed in the future to keep up with demand. Merging heat and power in this system would create added benefits and help to increase its efficiency by up to 80–90%, a significant increase over current fossil fuel plants, which have an efficiency of only 34%. Asia India India's Ministry of Power notified 'Green Energy Open Access' Rules to accelerate its ambitious renewable energy programmes, with provisions to incentivize common consumers to get green power at reasonable rates, through the Electricity (Promoting Renewable Energy Through Green Energy Open Access) Rules, 2022, on 06.06.2022. Small-scale green energy systems Those not satisfied with the third-party grid approach to green energy via the power grid can install their own locally based renewable energy system. 
Renewable energy electrical systems, from solar to wind to even local hydro-power in some cases, are some of the many types of renewable energy systems available locally. Additionally, for those interested in heating and cooling their dwelling via renewable energy, geothermal heat pump systems that tap the constant temperature of the earth, which is around 7 to 15 degrees Celsius a few feet underground and increases dramatically at greater depths, are an option over conventional natural gas and petroleum-fueled heat approaches. Also, in geographic locations where the Earth's crust is especially thin, or near volcanoes (as is the case in Iceland), there exists the potential to generate even more electricity than would be possible at other sites, thanks to a more significant temperature gradient at these locales. The advantage of this approach in the United States is that many states offer incentives to offset the cost of installation of a renewable energy system. In California, Massachusetts and several other U.S. states, a new approach to community energy supply called Community Choice Aggregation has provided communities with the means to solicit a competitive electricity supplier and use municipal revenue bonds to finance development of local green energy resources. Individuals are usually assured that the electricity they are using is actually produced from a green energy source that they control. Once the system is paid for, the owner of a renewable energy system will be producing their own renewable electricity for essentially no cost and can sell the excess to the local utility at a profit. In household power systems, organic matter such as cow dung and other spoilable material can be converted to biochar. To eliminate emissions, carbon capture and storage is then used. References Sustainable energy Emissions reduction
Consumer green energy program
[ "Chemistry" ]
1,788
[ "Greenhouse gases", "Emissions reduction" ]
61,956,232
https://en.wikipedia.org/wiki/Surface%20Pro%20X
The Surface Pro X is a 2-in-1 detachable tablet computer developed by Microsoft. It was announced on 2 October 2019 alongside the Surface Pro 7 and Surface Laptop 3. Updated hardware was announced alongside the Surface Laptop Go and Surface accessories on October 1, 2020 and September 22, 2021. The device starts at $899.99 USD / £849.99. The Surface Pro X comes with a Microsoft SQ1 or SQ2 ARM processor, which the company claimed has three times the performance of an x86 MacBook Air, whilst also having a 13-hour battery life. This is due to the increased power efficiency of ARM processors compared to traditional x86 processors. Microsoft has previously used ARM processors in the discontinued Surface RT and Windows Phone devices. Microsoft now offers a Wi-Fi-only version of the device, as announced at its Surface event on September 22, 2021. Configuration The Surface Pro X starts at US$899.99 / £849.99 for the least expensive model with 8 GB RAM and 128 GB storage. The device can be bought with either 8 GB or 16 GB RAM. Users can also choose between 128 GB, 256 GB and 512 GB of storage. Hardware and design The Surface Pro X is the 7th addition to the Surface Pro lineup alongside the Surface Pro 7. Microsoft markets the tablet as a "go-anywhere, do-anything PC". Microsoft claims the Surface Pro X's battery can last up to 13 hours of use. Compared to the Surface Pro 6, the Surface Pro X is slimmer and has rounder edges, featuring a matte finish construction available in platinum and black. The device contains 2 USB-C ports, an eSIM and a SIM card slot for LTE, a removable SSD, and the Surface Connect port for charging. There is no microSD card slot or headphone jack on the tablet, requiring its users to use dongles and USB-C or Bluetooth enabled headphones. The device's screen is a 13-inch touchscreen display, with smaller bezels compared to other Surface Pro devices. 
The device uses Microsoft SQ1 or SQ2 ARM processors co-developed by Qualcomm, based on the Snapdragon 8cx Gen 1 and Gen 2 processors respectively. A Qualcomm X24 LTE modem is also featured in the device for both processors. Software The Surface Pro X comes pre-installed with an ARM-based version of Windows 10, which supports ARM32 and ARM64 UWP and desktop apps from the Microsoft Store or from other sources. x86 applications can be run through emulation, addressing a major issue of Windows RT. Emulation of x64 applications is an upcoming feature that is already available to Windows Insiders for testing. In addition, Hyper-V can be installed on ARM64 devices such as the Surface Pro X running the Pro or Enterprise editions of Windows 10. Timeline References Microsoft Surface 2-in-1 PCs Tablet computers introduced in 2019
Surface Pro X
[ "Technology" ]
610
[ "Crossover devices", "2-in-1 PCs" ]
61,956,494
https://en.wikipedia.org/wiki/Unified%20scattering%20function
The unified scattering function was proposed in 1995 as a universal approach to describe small-angle X-ray and neutron scattering (and in some cases light scattering) from disordered systems that display hierarchical structure. Concept The concept of universal descriptions of scattering, that is, scattering functions that do not depend on a specific structural model but whose parameters can be related back to specific structures, has existed since about 1950. The prominent examples of universal scattering functions are Guinier's Law, I(q) = G exp(−q²Rg²/3), and Porod's Law, I(q) = Bq⁻⁴, where G, Rg, and B are constants related to the scattering contrast, structural volume, surface area, and radius of gyration. q is the magnitude of the scattering vector, which is related to the Bragg spacing, d, by q = 2π/d = (4π/λ) sin(θ/2). λ is the wavelength and θ is the scattering angle (2θ in diffraction). Both Guinier's Law and Porod's Law refer to an aspect of a single structural level. A structural level is composed of a size that can be expressed in Rg, and a structure as reflected in a power-law decay, −4 in the case of Porod's Law for solid objects with smooth, sharp interfaces. For other structures the power-law decay yields the mass-fractal dimension, df, which relates the mass and size of the object, thereby partially defining the object. For instance, a rod has df = 1 and a disk has df = 2. The prefactor to the power-law yields other details of the structure such as the surface-to-volume ratio for solid objects, the branch content for chain structures, and the convolution or crumpled-ness of various objects. The prefactor to Guinier's Law yields the mass and volume fraction under dilute conditions. Above the overlap concentration (generally 1 to 5 volume percent) structural screening must be considered. 
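The relation between scattering angle, scattering-vector magnitude, and Bragg spacing described above can be sketched in Python (a minimal illustration; the function names and the Cu Kα wavelength are assumptions for the example, not part of the source):

```python
import math

def q_magnitude(wavelength, theta):
    """Scattering-vector magnitude q = (4*pi/lambda)*sin(theta/2),
    with theta the full scattering angle in radians."""
    return 4.0 * math.pi / wavelength * math.sin(theta / 2.0)

def bragg_spacing(q):
    """Real-space Bragg spacing d = 2*pi/q."""
    return 2.0 * math.pi / q

# Illustrative values: Cu K-alpha wavelength (1.54 Angstrom), 2-degree scattering angle
q = q_magnitude(1.54, math.radians(2.0))
d = bragg_spacing(q)
```

Note that combining the two functions recovers Bragg's law, λ = 2d sin(θ/2), which is a useful internal consistency check.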
In addition to these universal functions that describe only a part of a structural level, a number of scattering functions that can describe a single structural level have been proposed for some disordered systems, most interestingly Debye's scattering function for a Gaussian polymer chain, derived during World War II, I(q) = (2G/x²)(e⁻ˣ − 1 + x), where x = q²Rg². The Debye function reverts to Guinier's Law at low-q and to a power-law, I(q) = Bq⁻², at high-q, reflecting the two-dimensional nature of a random walk or a diffusion path. It refers to a single structural level, corresponding to a Guinier regime and a power-law regime. The Guinier regime reflects the overall size of the object without reference to the internal or surface structure of the object, and the power-law reflects the details of the structure, in this case a linear (unbranched), mass-fractal object with mass-fractal dimension df = 2 (connectivity dimension of 1 reflecting a linear structure; and minimum dimension of 2 indicating a random conformation in 3d space). In the 1990s it became apparent that single structural level functions similar to the Debye function would be of great use in describing complex, disordered structures such as branched mass-fractal aggregates, linear polymers in good solvents (df ~ 5/3), branched polymers (df > 2), cyclic polymers, and macromolecules of complex topology such as star, dendrimer, and comb polymers, as well as polyelectrolytes, and micellar and colloidal materials such as worm-like micelles. Further, no analytically derived scattering functions could describe multiple structural levels in hierarchical materials. The observation of multiple structural levels is extremely common even in the case of a simple linear Gaussian polymer chain described by the Debye function, which is statistically composed of rod-like Kuhn units (level 1) that follow I(q) = Bq⁻¹ at the highest-q. 
Common examples of hierarchical materials are silica, titania, and carbon black nano-aggregates composed of solid primary particles (level 1) displaying Porod scattering, I(q) = Bq⁻⁴, at highest q, which aggregate into fairly rigid mass-fractal structures at intermediate nanoscales (level 2), and which agglomerate into micron-scale solid or network structures (level 3). Since these structural levels overlap in a small-angle scattering pattern, it was not possible to accurately model these materials using Guinier's Law and various power-law functions such as Porod's Law. For these reasons, a global scattering function that could be expanded to multiple structural levels was of interest. In 1995 Beaucage derived the Unified Scattering Function, I(q) = Σi [Gi exp(−q²Rg,i²/3) + Bi exp(−q²Rg,i−1²/3) (1/qi*)^Pi], where "i" refers to the structural level starting with the smallest size, highest q. qi* is defined by qi* = q/[erf(k q Rg,i/√6)]³, and k has a value of 1 for solid structural levels (Pi = 4) and approximately 1.06 for mass-fractal structural levels (Pi = df). The Unified Function recognizes that all structures display the behavior of Guinier's Law at largest sizes, that is, all structures exhibit a size, and if the structure is randomly arranged that size manifests as a Gaussian function in small-angle scattering governed by the radius of gyration, with larger objects displaying a smaller standard deviation, or larger Rg. At high-q Guinier's Law fails to describe the structure because it reflects an object with no surface or internal structure [8]. The second term in the Unified Function gives the missing information concerning the surface or internal structure of the object by way of the power Pi and the prefactor Bi (as well as how Pi and Bi relate to Gi and Rg,i). Beaucage realized that the problem of obtaining a generic multi-level scattering function lay in the power-law term, since a power-law could not extend infinitely to low-q and yield a finite intensity at q => 0. Also, such a function would overpower the Guinier term in the range of q where Guinier's Law is appropriate. The reference provides one of several possible derivations, using Porod's Law as an example of a power-law regime. 
A vector, r, can be visualized as the vector connecting interference points between an incident beam and the scattered beam. r = 2π/q, where q = (4π/λ) sin(θ/2) is the scattering vector in inverse space. Scattering occurs when two fringe points separated by r contain scattering material. If material is located at |r|/2, destructive interference occurs. So within a solid object there is always material at a position |r|/2 that negates scattering from material separated at |r|. Only at the surface do conditions of contrast occur. Porod's Law describes scattering from a smooth sharp interface, which results in scattering that is proportional to the surface area and decays with q⁻⁴. The volume of a scattering element in this case scales with V ~ r³. Scattering involves binary interference so is proportional to (ρV)² ~ r⁶. The number of these V domains is proportional to the surface area divided by the area of a domain, N ~ S/r². So the scattering intensity follows I(q) ~ SV²/r² ~ Sr⁴ ~ Sq⁻⁴. At small size scales, at high q, for an oddly shaped object with a smooth/sharp interface, the structure appears to be a flat surface and the described approach is appropriate. As the size scale of observation, r, approaches Rg at low q, this model fails because the surface is no longer planar. That is, the scattering event in Figure 1 relies on both ends of the vector, r, being coplanar and arranged as indicated (the specular condition) with respect to the incident and scattered beams. In the absence of this orientation no scattering occurs. The curvature of the particle, which is related to the radius of gyration, extinguishes surface scattering at low-q in the Guinier regime. Incorporating this observation in Porod's law in the original derivation is not possible since it relies on a Fourier transform of a correlation function for surface scattering. Beaucage arrived at the Unified Function through a new derivation of Guinier's Law based on randomly placed particles and adoption of this approach to the modification of the power-law term. 
Beaucage derivation of Guinier's Law Consider a randomly placed vector r such that both ends of the vector are in the particle. If the vector were held constant in space, while the particle were translated and rotated to any position meeting this condition, and an average of the structures were taken, any object would result in a Gaussian mass distribution that would display a Gaussian correlation function, and would appear as an average cloud with no surface. The Fourier transform of this Gaussian correlation function results in Guinier's Law. Limitations to power-law scattering at low-q Power-law scattering is restricted to sizes smaller than the object. For example, within a mass-fractal object such as a polymer chain, the normalized mass of the chain, z, scales with the normalized size, R ~ Rete/lk (the end-to-end distance divided by the Kuhn length), with a scaling power of the mass-fractal dimension, df, z ~ Rdf. Considering scattering elements of size r, the number of such elements in a particle scales with N ~ z/rdf, and the mass of such a particle n ~ rdf, so the scattering is proportional to Nn2 ~ z rdf ~ q−df. At low-q the vector r ~ 1/q approaches the size of the particle. For this reason the power-law regime ends at low-q. One way to consider this is to think of the vector ra beginning and ending in the particle, Figure 2 (a). This vector meets the mass-fractal condition if the particle is a mass-fractal. In Figure 2 (b) the vector rb, separating two points, does not meet the mass-fractal condition, but with a translation of the particle by d the mass-fractal condition can be met for both ends of rb, (c). In scattering we are considering all possible translations of the particle relative to one end of the vector r being located within the mass-fractal particle. The probability of moving the particle to meet the mass-fractal condition for both ends of the vector is less than 1 if r is close to the particle size. If the particle were of infinite size this probability would always be 1. 
For a finite particle, Figure 2 shows that the reduction in probability for a scattering event at large sizes can be viewed as a reduction in the length of the vector r. This is the basis of the Unified Function. Rather than directly determining the scattering function, the reduction in r related to this translation is calculated. Since r is related to 2π/q, we consider an effective increase in scattering vector q to q*. The relationship between q and q* is determined by first considering the consequence of the translation in Figure 2 on the correlation function, based on the Gaussian derivation of Guinier's Law [8]. This analysis results in a modifying factor to the correlation function. Following the Debye relationship, this factor can be incorporated into q, yielding a transform from q to q*, shown in Figure 2 in terms of q* = 2π/r*. The references demonstrate that for strong power-law decays this transform is equivalent to the substitution q* = q/[erf(qRg/√6)]³, which allows for the direct use of the power-law form with q* in place of q. For mass-fractal power-laws this approximation is not perfect due to the shape of the correlation function at low-q, as described in the references. A good approximation is to include a constant k, whose value is about 1.06 for df = 2, so that q* is replaced by q* = q/[erf(kqRg/√6)]³. In general for mass fractals it is found that k ~ 1.06 is a good approximation, and k = 1 for surface-fractal scattering. With this modification, power-law scattering is compatible with Guinier scattering and the two terms can be summed in a Unified Equation. A single level of the Unified Equation can closely replicate the Debye function and equations for polydisperse spheres, rods, sheets, good-solvent polymers, branched polymers, and cyclic polymers, as demonstrated in the related publications. A wide range of disordered materials including mass- and surface-fractal structures can therefore be described using the Unified Approach. 
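A single structural level of the unified function described above can be sketched in Python (a minimal illustration under the standard single-level form; the function and parameter names are assumptions, not code from the original authors):

```python
import math

def unified_level(q, G, Rg, B, P, k=1.0):
    """One structural level of the unified (Beaucage) scattering function:
    a Guinier term plus a power-law term whose low-q divergence is suppressed
    by the substitution q -> q* = q / erf(k*q*Rg/sqrt(6))**3."""
    guinier = G * math.exp(-(q * Rg) ** 2 / 3.0)
    q_star = q / math.erf(k * q * Rg / math.sqrt(6.0)) ** 3
    return guinier + B * q_star ** (-P)

# Low q: the Guinier term dominates; high q: the power-law term dominates.
low = unified_level(1e-3, G=100.0, Rg=10.0, B=1e-4, P=4.0)   # approaches G
high = unified_level(1.0, G=100.0, Rg=10.0, B=1e-4, P=4.0)   # approaches B*q**-P
```

With k = 1 and P = 4 this reproduces Porod behavior at high q; for a multi-level fit each level's sum is added, with the power-law term of level i additionally damped at high q by the Rg of level i−1.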
For hierarchical materials with multiple structural levels, the Unified Function can be extended using a Gaussian cutoff at high-q for the power-law function, which is common to equations for rods, disks and other simple scattering functions such as described in Guinier and Fournet, where it is taken that Rg,0 = 0. This function has been used to describe persistence in polymer chains in good and theta solvents, branched polymers, polymers of complex topology such as star polymers, mass-fractal primary particles/aggregates/agglomerates, rod diameter/length, disk thickness/width, and other complex hierarchical structures. The lead cutoff term assumes that structural level i is composed of structural levels i−1. If this is not true, a free parameter can substitute for Rg,i−1 as described in the references. The Unified Function is quite flexible and has been extended as a Hybrid Unified Function for micellar systems where the local structure is a perfect cylinder or other structure. Implementation of Unified Function Jan Ilavsky of Argonne National Laboratory's Advanced Photon Source (USA) has provided open user code to perform fits using the Unified Function in the Igor Pro programming environment, including video tutorials and an instruction manual. References Scattering theory
Unified scattering function
[ "Chemistry" ]
2,715
[ "Scattering", "Scattering theory" ]
61,957,505
https://en.wikipedia.org/wiki/Lenka%20Zdeborov%C3%A1
Lenka Zdeborová (born 24 November 1980) is a Czech physicist and computer scientist who applies methods from statistical physics to machine learning and constraint satisfaction problems. She is a professor of physics and of computer science and communication systems at EPFL (École Polytechnique Fédérale de Lausanne). Life Zdeborová was born in Plzeň and attended a local grammar school where she excelled in math and physics. After living in France with her family and working at the Centre National de la Recherche Scientifique (CNRS), she and her partner moved to Switzerland in 2020, where they are raising their two children. Education and career Zdeborová earned a master's degree in physics at Charles University in 2004 and, in 2008, completed an international dual doctorate ("en cotutelle") at both Charles University and the University of Paris-Sud. Her doctoral advisors were Václav Janiš at Charles University and Marc Mézard at Paris-Sud. After postdoctoral research at the Center for Nonlinear Studies of Los Alamos National Laboratory, she became a researcher for the French Centre National de la Recherche Scientifique (CNRS) in 2010, posted at the French Alternative Energies and Atomic Energy Commission's Institut de physique théorique (IPhT) in Paris-Saclay. She also earned a habilitation in 2015 at the École normale supérieure (Paris). Since 2020, she has been working at EPFL (École Polytechnique Fédérale de Lausanne) as an associate professor of physics and of computer science and communication systems in the Schools of Basic Sciences and of Computer and Communication Sciences (IC), and is the head of the Laboratory of Statistical Physics of Computation. Recognition Zdeborová won the CNRS Bronze Medal in 2014. 
In 2016, the École normale supérieure (Paris) gave her the Philippe Meyer Prize in theoretical physics for her work in statistical physics of disordered systems. She is the 2018 winner of the Irène Joliot-Curie Prize for young female scientists. She also received the Josiah Willard Gibbs Lectureship of the American Mathematical Society and gave her Gibbs lecture in 2021. References External links Home page at Charles University 1980 births Living people Czech physicists Czech women computer scientists French physicists French computer scientists Women physicists Charles University alumni Paris-Sud University alumni Network scientists Statistical physicists
Lenka Zdeborová
[ "Physics" ]
517
[ "Statistical physicists", "Statistical mechanics" ]
61,960,057
https://en.wikipedia.org/wiki/Triller%20TV
Triller TV (stylized as TrillerTV, formerly known as FITE, and currently marketed as TrillerTV powered by FITE) is a Bulgarian-based American digital video streaming service. Owned by Flipps Media, Inc. and operated by its parent company, Triller, Inc., TrillerTV is dedicated to combat sports-related programming (including boxing, kickboxing, mixed martial arts, professional wrestling, and submission grappling). The service distributes free-to-air content, pay-per-view events, and SVOD packages. As of December 2023, the service has over 8 million registered users worldwide. Notable wrestling content available on TrillerTV has included NWA All Access from the National Wrestling Alliance (NWA), AEW Plus from All Elite Wrestling (AEW), and Total Nonstop Action Wrestling (TNA)'s own streaming service TNA+ (then known as Impact Plus), among other subscription packages. History As FITE Founded in 2012, Flipps Media and their namesake app focused on online streaming content, including entertainment and sporting events. On January 4, 2015, Flipps streamed Global Force Wrestling's presentation of New Japan Pro-Wrestling's Wrestle Kingdom 9 event. On February 9, 2016, Flipps Media launched FITE (also known as FITE TV), a dedicated combat sports-oriented streaming platform. In April 2019, FITE officially began providing international streaming for the professional wrestling company All Elite Wrestling (AEW), including its weekly televised shows and pay-per-view events. By September of that year, AEW announced a new subscription package on FITE called "AEW Plus" for viewers outside the U.S., giving them access to AEW's library for US$5 a month. In April 2020, a new monthly subscription option called FITE+ was launched, including access to live pay-per-view events and back catalogues of various combat sports promotions. In October 2020, FITE began adding coverage of soccer events, acquiring rights to CONMEBOL qualifiers for the 2022 FIFA World Cup. 
On April 14, 2021, Triller, Inc acquired the service for an undisclosed amount, ahead of the first Triller Fight Club event, featuring the Jake Paul vs. Ben Askren boxing match. In 2021, FITE began streaming professional grappling events after Third Coast Grappling moved to the platform from FloGrappling. In January 2023, it was announced that the North American-based professional grappling promotion Fight 2 Win was leaving FloGrappling and had signed an exclusive two-year deal with FITE. In 2022, FITE aired the pandemic-delayed 2021 Rugby League World Cup tournaments in several territories including Germany, Italy, Spain and the Americas, with most men's tournament matches available on a pay-per-view basis and all other games airing on FITE+. In February 2023, FITE announced that all Bare Knuckle Fighting Championship (BKFC) events would be included with FITE+. Previous BKFC events had been offered as standalone pay-per-view events on the service. A majority stake in the promotion was acquired by Triller the previous year. On May 2, 2023, FITE announced a new partnership with Major League Wrestling (MLW) to air live events through FITE+, with its first event being MLW Never Say Never on July 8 of that year. On June 30, 2023, Colosseum Tournament was relaunched on FITE after a four-year hiatus. As TrillerTV In December 2023, FITE was rebranded as Triller TV (stylized as TrillerTV and marketed as TrillerTV powered by FITE) to closer align the service with its parent company. In May 2024, Eric Winter (formerly of Rivals.com and UFC Fight Pass) was named President and Chief Operating Officer of TrillerTV, ahead of its parent company, Triller, going public on NASDAQ. 
See also List of professional wrestling streaming services FightBox References External links Internet properties established in 2012 Internet properties established in 2016 Internet properties established in 2023 Internet television channels Subscription video streaming services Professional wrestling streaming services Streaming media systems Triller (company)
Triller TV
[ "Technology" ]
867
[ "Streaming media systems", "Telecommunications systems", "Computer systems" ]
62,887,627
https://en.wikipedia.org/wiki/C-ImmSim
C-ImmSim started in 1995 as the C-language version of IMMSIM, the IMMune system SIMulator, a program written back in 1991 in APL2 (APL2 is a registered trademark of IBM Corp.) by the astrophysicist Phil E. Seiden together with the immunologist Franco Celada to implement the Celada-Seiden model. The porting was mainly conducted, and the code further developed, by Filippo Castiglione with the help of a few other people. The Celada-Seiden model The Celada-Seiden model is a logical description of the mechanisms making up the adaptive immune humoral and cellular response to a generic antigen at the mesoscopic level. The computational counterpart of the Celada-Seiden model is the IMMSIM code. The Celada-Seiden model, as well as C-ImmSim, is best viewed as a collection of models in a single program. In fact, there are various components realising a particular function which can be turned on or off. At its current stage, C-ImmSim incorporates the principal "core facts" of today's immunological knowledge, e.g. the diversity of specific elements, MHC restriction, clonal selection by antigen affinity, thymic education of T cells, antigen processing and presentation (both the cytosolic and endocytic pathways are implemented), cell-cell cooperation, homeostasis of cells created by the bone marrow, hypermutation of antibodies, maturation of the cellular and humoral response, and memory. Moreover, an antigen can represent a bacterium, a virus, an allergen or a tumour cell. The high degree of complexity of the Celada-Seiden model makes it suitable to simulate different immunological phenomena, e.g., the hypermutation of antibodies, the germinal centre reaction (GCR), immunization, thymus selection, viral infections, hypersensitivity, etc. Since the first release of C-ImmSim, the code has been modified many times. The current version includes features that were not in the original Celada-Seiden model. C-ImmSim has recently been customised to simulate HIV-1 infection. 
Moreover, it can simulate immunotherapy of generic solid tumours. These features are all present in the code and users can choose to turn them on and off at compile time. However, the present user guide deals with the description of the standard immune system response and gives no indication of the HIV-1 and cancer features. The latest version of C-ImmSim allows for the simulation of SARS-CoV-2 infection. Contributors The porting was possible thanks to the aid of Seiden, especially during the initial validation phase. Massimo Bernaschi contributed to the development of C-ImmSim starting with the beta release. Most of the optimization of the memory usage and I/O has been possible thanks to Bernaschi, in particular concerning the development of the parallel version. A few other people contributed to the further development of the code in subsequent years. Related projects There are other computational models developed on the tracks of the Celada-Seiden model which derive (to a certain extent) from the first porting to the C language of IMMSIM by F. Castiglione. They are IMMSIM++ developed by S. Kleinstein, IMMSIM-C developed by R. Puzone, LImmSim developed by J. Textor and SimTriplex developed by Pappalardo. IMMSIM++, http://www.cs.princeton.edu/immsim/software.html IMMSIM-C, http://www.immsim.org/ LImmSim, http://johannes-textor.name/limmsim.html SimTriplex C-ImmSim has been partially described in a series of publications but never extensively, in part because of the availability of other references for the IMMSIM code which could serve as manuals for C-ImmSim as well, and in part because it is impractical to compress a full description of C-ImmSim into a regular paper. IMMSIM, in the authors' minds, was built around the idea of developing a computerized system to perform experiments similar to real laboratory in vitro and in vivo experiments; a tool developed and maintained to help biologists test theories and hypotheses about how the immune system works. 
They called it "in machina" or "in silico" experiments. IMMSIM was in part developed keeping an eye on the educational potential of these kinds of tools, in order to provide students of biology/immunology courses with a way to play with the immune mechanisms and get a grasp of the fundamental concepts of the cellular and/or molecular interactions in the immune response. For this purpose, IMMSIM++ was developed for Microsoft Windows® and offers the chance to explore various (but not all) features of the Celada-Seiden model. However, since only the executable is available, that code is not open for testing/development. LImmSim is available under the GNU GPL. SimTriplex is a customized version of the same model and derives from version 6 of C-ImmSim. It has been developed to simulate cancer immunoprevention. References F. Pappalardo, M. Pennisi, F. Castiglione, S. Motta. Vaccine protocols optimization: in silico experiences. Biotechnology Advances. 28: 82–93 (2010). doi:10.1016/j.biotechadv.2009.10.001 P. Paci, R. Carello, M. Bernaschi, G. D'Offizi and F. Castiglione. Immune control of HIV-1 infection after therapy interruption: immediate versus deferred antiretroviral therapy. BMC Infectious Diseases. 9: 172 (2009). doi:10.1186/1471-2334-9-172 D. Santoni, M. Pedicini, F. Castiglione. Implementation of a regulatory gene network to simulate the TH1/2 differentiation in an agent-based model of hypersensitivity reactions. Bioinformatics, 24(11):1374–1380 (2008). doi:10.1093/bioinformatics/btn135 F. Castiglione, F. Pappalardo, M. Bernaschi, S. Motta. Optimization of HAART with genetic algorithms and agent-based models of HIV infection. Bioinformatics, 23(24): 3350–3355 (2007). doi:10.1093/bioinformatics/btm408 F. Castiglione, K.A. Duca, A. Jarrah, R. Laubenbacher, K. Luzuriaga, D. Hochberg and D.A. Thorley-Lawson. Simulating Epstein Barr Virus Infection with C-ImmSim. Bioinformatics, 23: 1371–1377 (2007). doi:10.1093/bioinformatics/btm044 F. Pappalardo, P.-L. Lollini, F. 
Castiglione, S. Motta. Modelling and Simulation of Cancer Immunoprevention Vaccine. Bioinformatics, 2005 Jun 15;21(12): 2891–7. doi:10.1093/bioinformatics/bti426 F. Castiglione, F. Toschi, M. Bernaschi, S. Succi, R. Benedetti, B. Falini and A. Liso. Computational modelling of the immune response to tumour antigens: implications for vaccination. J Theo Biol, 237(4):390-400 (2005) F. Castiglione, V. Sleitser and Z. Agur. Analyzing hypersensitivity to chemotherapy in a Cellular Automata model of the immune system, in Cancer Modeling and Simulation, Preziosi L. (ed.), Chapman & Hall/CRC Press (UK), London, June 26, 2003, pp 333–365. External links http://www.cs.princeton.edu/immsim/software.html Immunology
C-ImmSim
[ "Biology" ]
1,793
[ "Immunology" ]
62,888,604
https://en.wikipedia.org/wiki/Pterobilin
Pterobilin, also called biliverdin IXγ in the Fischer nomenclature, is a blue bile pigment found in Nessaea spp., Graphium agamemnon, G. antiphates, G. doson, and G. sarpedon. It is one of only a few blue pigments found in any animal species, as most animals use iridescence to create blue coloration. Other blue pigments of animal origin include phorcabilin, used by other butterflies in Graphium and Papilio (specifically P. phorcas and P. weiskei), and sarpedobilin, which is used by Graphium sarpedon. Synthetic pathways Pterobilin is a chemical precursor to sarpedobilin in the larvae of the fourth instar of G. sarpedon through a double cyclisation of the central vinyl groups of the adjacent nitrogens. In the butterfly species Pieris brassicae, it is produced starting with acetate, proceeding to glycine, then δ-aminolevulinic acid, then coproporphyrinogen III, then protoporphyrin IX, and finally pterobilin. Pterobilin can be phototransformed into phorcabilin and sarpedobilin in vitro. Pterobilin can also be thermally rearranged in vitro into phorcabilin. Biochemical roles Pterobilin in P. brassicae is thought to play a role in photoreception for the different instars for metering diapause. In adult P. brassicae butterflies the compound is thought to have a role in heat transfer, as the wing scales where pterobilin accumulates differ morphologically in a way that would facilitate photoreception. See also Basics of blue flower colouration Biliverdin References Graphium (butterfly) Biological pigments Biblidinae Tetrapyrroles Dicarboxylic acids
Pterobilin
[ "Biology" ]
407
[ "Biological pigments", "Pigmentation" ]
62,889,191
https://en.wikipedia.org/wiki/Hanoch%20Gutfreund
Hanoch Gutfreund (Hebrew: חנוך גוטפרוינד; born 1935) is the Andre Aisenstadt Chair in theoretical physics and a former president of the Hebrew University of Jerusalem. Prior to his presidency, he was a professor at the university. Biography Gutfreund received a Ph.D. in theoretical physics from the Hebrew University of Jerusalem in 1966. Gutfreund is the Andre Aisenstadt Chair in Theoretical Physics and has been a professor at the university since 1985. Gutfreund earlier served as Head of the Physics Institute, Head of the Advanced Studies Institute, Rector, and President of the university from 1992 to 1997 (following Yoram Ben-Porat and succeeded by Menachem Magidor). Gutfreund is the Director of the Einstein Center and is the Hebrew University's appointee responsible for Albert Einstein's intellectual property. He heads the executive committee of the Israel Science Foundation. His writings include The Formative Years of Relativity: The History and Meaning of Einstein's Princeton Lectures (with Jürgen Renn, Princeton University Press, 2017), The Road to Relativity: The History and Meaning of Einstein's "The Foundation of General Relativity", Featuring the Original Manuscript of Einstein's Masterpiece (with Jürgen Renn, Princeton University Press, 2017), and Einstein on Einstein: Autobiographical and Scientific Reflections (with Jürgen Renn, Princeton University Press, 2020). Gutfreund lives in Jerusalem. References Academic staff of the Hebrew University of Jerusalem Israeli physicists Theoretical physics Hebrew University of Jerusalem alumni 20th-century Israeli educators 21st-century Israeli educators Presidents of universities in Israel Living people 1935 births Israeli people of Polish-Jewish descent
Hanoch Gutfreund
[ "Physics" ]
351
[ "Theoretical physics" ]
62,889,984
https://en.wikipedia.org/wiki/Lean%20%28proof%20assistant%29
Lean is a proof assistant and a functional programming language. It is based on the calculus of constructions with inductive types. It is an open-source project hosted on GitHub. It was developed primarily by Leonardo de Moura while employed by Microsoft Research and now Amazon Web Services, and has had significant contributions from other coauthors and collaborators during its history. Development is currently supported by the non-profit Lean Focused Research Organization (FRO). History Lean was launched by Leonardo de Moura at Microsoft Research in 2013. The initial versions of the language, later known as Lean 1 and 2, were experimental and contained features such as support for homotopy type theory–based foundations that were later dropped. Lean 3 (first released January 20, 2017) was the first moderately stable version of Lean. It was implemented primarily in C++ with some features written in Lean itself. After version 3.4.2, Lean 3 was officially end-of-lifed while development of Lean 4 began. In this interim period, members of the Lean community developed and released unofficial versions up to 3.51.1. In 2021, Lean 4 was released, which was a reimplementation of the Lean theorem prover capable of producing C code which is then compiled, enabling the development of efficient domain-specific automation. Lean 4 also contains a macro system and improved type class synthesis and memory management procedures over the previous version. Another benefit compared to Lean 3 is the ability to avoid touching C++ code in order to modify the frontend and other key parts of the core system, as they are now all implemented in Lean and available to the end user to be overridden as needed. Lean 4 is not backwards-compatible with Lean 3. In 2023, the Lean FRO was formed, with the goals of improving the language's scalability and usability, and implementing proof automation. 
Overview Libraries The official Lean package includes a standard library, Batteries, which implements common data structures that may be used for both mathematical research and more conventional software development. In 2017, a community-maintained project to develop a Lean library, mathlib, began, with the goal of digitizing as much of pure mathematics as possible in one large cohesive library, up to research-level mathematics. As of September 2024, mathlib had formalised over 165,000 theorems and 85,000 definitions in Lean. Editor integration Lean integrates with: Visual Studio Code Neovim Emacs Interfacing is done via a client extension and a Language Server Protocol server. It has native support for Unicode symbols, which can be typed using LaTeX-like sequences, such as "\times" for "×". Lean can also be compiled to JavaScript and accessed in a web browser, and it has extensive support for metaprogramming. Examples (Lean 4) The natural numbers can be defined as an inductive type. This definition is based on the Peano axioms and states that every natural number is either zero or the successor of some other natural number.

inductive Nat : Type
  | zero : Nat
  | succ : Nat → Nat

Addition of natural numbers can be defined recursively, using pattern matching. 
def Nat.add : Nat → Nat → Nat
  | n, Nat.zero => n                          -- n + 0 = n
  | n, Nat.succ m => Nat.succ (Nat.add n m)   -- n + succ(m) = succ(n + m)

This is a simple proof of p ∧ q → q ∧ p for two propositions p and q (where ∧ is conjunction and → is implication) in Lean using tactic mode:

theorem and_swap (p q : Prop) : p ∧ q → q ∧ p := by
  intro h          -- assume p ∧ q with proof h; the goal is q ∧ p
  apply And.intro  -- the goal is split into two subgoals, one is q and the other is p
  · exact h.right  -- the first subgoal is exactly the right part of h : p ∧ q
  · exact h.left   -- the second subgoal is exactly the left part of h : p ∧ q

This same proof in term mode:

theorem and_swap (p q : Prop) : p ∧ q → q ∧ p :=
  fun ⟨hp, hq⟩ => ⟨hq, hp⟩

Usage Mathematics Lean has received attention from mathematicians such as Thomas Hales, Kevin Buzzard, and Heather Macbeth. Hales is using it for his project, Formal Abstracts. Buzzard uses it for the Xena project. One of the Xena Project's goals is to rewrite every theorem and proof in the undergraduate math curriculum of Imperial College London in Lean. Macbeth is using Lean to teach students the fundamentals of mathematical proof with instant feedback. In 2021, a team of researchers used Lean to verify the correctness of a proof by Peter Scholze in the area of condensed mathematics. The project garnered attention for formalizing a result at the cutting edge of mathematical research. In 2023, Terence Tao used Lean to formalize a proof of the Polynomial Freiman–Ruzsa (PFR) conjecture, a result published by Tao and collaborators in the same year. Artificial intelligence In 2022, OpenAI and Meta AI independently created AI models to generate proofs of various high-school-level olympiad problems in Lean. Meta AI's model is available for public use with the Lean environment. In 2023, Vlad Tenev and Tudor Achim co-founded the startup Harmonic, which aims to reduce AI hallucinations by generating and checking Lean code. 
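Returning to the recursive definition of Nat.add above: because the recursion is on the second argument, n + 0 = n holds definitionally, while 0 + n = n must be proved by induction. A sketch of both proofs (assuming the article's Nat and Nat.add declarations, placed in a fresh namespace so they do not clash with Lean's built-in Nat):

```lean
-- Holds by the first defining equation of Nat.add, so rfl suffices.
theorem add_zero' (n : Nat) : Nat.add n Nat.zero = n := rfl

-- Needs induction: Nat.add pattern-matches only on its second argument.
theorem zero_add' (n : Nat) : Nat.add Nat.zero n = n := by
  induction n with
  | zero => rfl
  | succ m ih =>
    -- one step of Nat.add unfolds definitionally, then the hypothesis closes the goal
    show Nat.succ (Nat.add Nat.zero m) = Nat.succ m
    rw [ih]
```

This asymmetry is a standard first exercise when building arithmetic from the Peano axioms, and the same pattern recurs in larger mathlib-style developments.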
In 2024, Google DeepMind created AlphaProof, which proves mathematical statements in Lean at the level of a silver medalist at the International Mathematical Olympiad. This was the first AI system to achieve a medal-worthy performance on problems from a math olympiad. See also Dependent type List of proof assistants mimalloc Type theory References External links Lean Website Lean Community Website Lean FRO The Natural Number Game - an interactive tutorial to learn Lean Moogle.ai - a semantic search engine for finding theorems in mathlib Programming languages created in 2013 Proof assistants Dependently typed languages Educational math software Functional languages Free and open-source software Free software programmed in C++ Microsoft free software Microsoft programming languages Microsoft Research Software using the Apache license Theorem provers Theorem proving software systems
Lean (proof assistant)
[ "Mathematics" ]
1,294
[ "Automated theorem proving", "Free mathematics software", "Theorem proving software systems", "Educational math software", "Mathematical software" ]
62,891,072
https://en.wikipedia.org/wiki/Parioscorpio
Parioscorpio is an extinct genus of arthropod containing the species P. venator, known from the Silurian-aged Waukesha Biota of the Brandon Bridge Formation near Waukesha, Wisconsin. This animal has gone through a confusing taxonomic history, having been identified as an arachnid, a crustacean, and an artiopodan arthropod at various points. It is one of the more famous fossil finds from Wisconsin, owing to the media coverage it received after its original description in 2020 as a basal scorpion. Taxonomy The fossils were originally discovered in 1985 and tentatively identified as a branchiopod or remipede crustacean, but were then neglected for decades. In 2016, some of the fossils now assigned to Parioscorpio were given the name Latromirus and assigned to an extinct group of early Paleozoic arthropods known as cheloniellids in a Ph.D. dissertation, but the name was never published in a peer-reviewed journal and is therefore not valid under the International Code of Zoological Nomenclature. The fossils known as “Latromirus” were also mistakenly named “Xus yus” in a preprint of a separate paper. Upon initial publication in 2020, Parioscorpio was considered the world's oldest and most primitive known scorpion, older than Dolichophonus from Scotland by several million years. In 2021, the fossils were reanalysed, and Parioscorpio was found not to be a scorpion, but an arthropod of uncertain placement, outside of Mandibulata, Chelicerata and all other groups of extinct arthropods (e.g. Megacheira, Fuxianhuiida, Artiopoda and so on). Another 2021 paper stated that Parioscorpio venator, including the fossils previously called Latromirus, might be a cheloniellid. If this is correct, it means that P. venator is related to trilobites, nektaspids, aglaspidids, xenopods, and xandarellids. In 2022, however, its cheloniellid affinity was questioned and firmly rejected; the most resolved tree in that paper considers P. venator an enigmatic stem-group arthropod. Also in 2022, a study was published describing Acheronauta stimulapis, a new species of possible mandibulate arthropod from the biota. While coding the phylogenetic trees for this arthropod, the authors also included Parioscorpio, and all of the trees produced placed this creature as a basal arthropod sitting between the groups Artiopoda and Mandibulata. This result is consistent with the rejection of P. venator as a cheloniellid. As of 2023, P. venator is regarded as a basal euarthropod. Morphology The animal is around long. It is characterized by a trapezoidal head with a pair of eyes located antero-medially, a pair of enlarged raptorial appendages (previously thought to be a scorpion's clawed pedipalps), as well as another pair of small appendages. Central to the head was a mouth-covering hypostome and a pair of muscular blocks articulated to the raptorial appendages. The trunk is composed of 14 segments, each associated with a pair of thin pleurae (lateral extensions of the tergites) and appendages. The first segment is covered by the head, while the posterior segments may have lateral spines. The anterior 12 pairs of trunk appendages are multiramous (each composed of 4 bundles of setae and a segmented endopod), while the last two pairs are simple fan-like structures. The trunk ends with 3 spines. Paleoecology Parioscorpio may have been a marine or brackish-water predator, using an ambush prey-capture method similar to that of extant water bugs (Nepomorpha). It would have lived alongside many other bizarre organisms, such as the conodont Panderodus, the enigmatic Butterfly Animal, the thylacocephalan Thylacares, early synziphosurans, and trilobites. References Silurian life of North America Telychian first appearances Fossil taxa described in 2020 Prehistoric arthropod genera Controversial taxa Silurian arthropods of North America
Parioscorpio
[ "Biology" ]
935
[ "Biological hypotheses", "Controversial taxa" ]
62,891,333
https://en.wikipedia.org/wiki/Abelian%20Lie%20group
In geometry, an abelian Lie group is a Lie group that is an abelian group. A connected abelian real Lie group is isomorphic to ℝ^k × (S^1)^h for some k, h ≥ 0. In particular, a connected abelian (real) compact Lie group is a torus; i.e., a Lie group isomorphic to (S^1)^n. A connected complex Lie group that is a compact group is abelian, and a connected compact complex Lie group is a complex torus; i.e., a quotient of ℂ^n by a lattice. Let A be a compact abelian Lie group with identity component A₀. If A/A₀ is a cyclic group, then A is topologically cyclic; i.e., A has an element that generates a dense subgroup. (In particular, a torus is topologically cyclic.) See also Cartan subgroup Citations Works cited Abelian group theory Geometry Lie groups
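The classification statements in the article above can be written compactly as follows (standard facts; the symbols k, h, n and Λ are notation introduced here, not taken from the text):

```latex
% connected abelian real Lie group
G \;\cong\; \mathbb{R}^{k} \times (S^{1})^{h}, \qquad k, h \ge 0
% connected compact abelian real Lie group (a torus)
G \;\cong\; (S^{1})^{n} = T^{n}
% connected compact complex Lie group (a complex torus)
G \;\cong\; \mathbb{C}^{n} / \Lambda, \qquad
  \Lambda \subset \mathbb{C}^{n} \text{ a lattice of rank } 2n
```

The real and complex cases differ in that a complex torus is determined not just by its dimension but by the choice of lattice Λ up to complex-linear equivalence.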
Abelian Lie group
[ "Mathematics" ]
171
[ "Lie groups", "Mathematical structures", "Algebraic structures", "Geometry", "Geometry stubs" ]
62,893,752
https://en.wikipedia.org/wiki/Marine%20microbiome
All animals on Earth form associations with microorganisms, including protists, bacteria, archaea, fungi, and viruses. In the ocean, animal–microbial relationships were historically explored in single host–symbiont systems. However, new explorations into the diversity of marine microorganisms associating with diverse marine animal hosts are moving the field into studies that address interactions between the animal host and a more multi-member microbiome. The potential for microbiomes to influence the health, physiology, behavior, and ecology of marine animals could alter current understandings of how marine animals adapt to change, especially the growing climate-related and anthropogenically induced changes already impacting the ocean environment. In the oceans, it is challenging to find eukaryotic organisms that do not live in close relationship with a microbial partner. Host-associated microbiomes also influence biogeochemical cycling within ecosystems with cascading effects on biodiversity and ecosystem processes. The microbiomes of diverse marine animals are currently under study, from simple organisms such as sponges and ctenophores to more complex organisms such as sea squirts and sharks. Background Within the vast biological diversity that inhabits the world's oceans, it would be challenging to find a eukaryotic organism that does not live in close relationship with a microbial partner. Such symbioses, i.e., persistent interactions between host and microbe in which none of the partners gets harmed and at least one of them benefits, are ubiquitous from shallow reefs to deep-sea hydrothermal vents. Studies on corals, sponges, and mollusks have revealed some of the profoundly important symbiotic roles microbes play in the lives of their hosts. These studies, however, have tended to focus on a small number of specific microbial taxa. 
In contrast, most hosts retain groups of many hundreds of different microbes (i.e., a microbiome), which can vary throughout the ontogeny of the host and as a result of environmental perturbations. Rather than host-associated microbes functioning independently, complex multi-assemblage microbiomes have a major impact on the fitness and function of their hosts. Studying these complex interactions and biological outcomes is difficult, but to understand the origin and evolution of organisms and populations and the structure and function of communities and ecosystems, the understanding of symbioses in host–microbiome systems needs advancing. There are many outstanding questions in ecology and evolution that could be addressed by expanding the phylogenetic and ecological breadth of host-associated microbiome studies, including all possible interactions throughout the microbiome. There is strong empirical evidence and new consensus that biodiversity (i.e., the richness of species and their interactions) pervasively influences the functioning of Earth's ecosystems, including ecosystem productivity. However, this research has focused almost exclusively on macroorganisms. Because microbial symbionts are integral parts of most living organisms, the understanding of how microbial symbionts contribute to host performance and adaptability needs broadening. Foundations of productive ecosystems Ecosystem engineers, such as many types of corals, deep-sea mussels, and hydrothermal vent tubeworms, contribute to primary productivity and create the structural habitats and nutrient resources that are the foundation of their respective ecosystems. All of these taxa engage in mutualistic nutritional symbioses with microbes. There are many examples of marine nutritional mutualisms in which microbes enable hosts to utilize resources or substrates otherwise unavailable to the host alone. 
Such symbioses have been described in detail in reduced and anoxic sediments (e.g., lucinid clams, stilbonematid nematodes, and gutless oligochaetes) and hydrothermal vents (e.g., the giant tube worm or deep-sea mussels). Moreover, many foundational species of marine macroalgae are vitamin auxotrophs (for example, half of more than 300 surveyed species were unable to synthesize cobalamin), and their productivity depends on provisioning from their epiphytic bacteria. Reefs often consist of stony corals, one of the most well-known examples of a mutualistic symbiosis, in which dinoflagellate algae of the family Symbiodiniaceae supply the coral with glucose, glycerol, and amino acids, while the coral provides the algae with a protected environment and limiting compounds (e.g., nitrogen species) needed for photosynthesis. However, this is a classic example of a mutualistic symbiosis that is sensitive to environmental disturbances, which can disrupt the fragile interactions between host and microbe. When reefs become warm and eutrophic, mutualistic Symbiodiniaceae may induce cellular damage to the host and/or sequester more resources for their own growth, thereby injuring and parasitizing their hosts. Reef fishes, which seek homes on coral reefs, are important in fostering coral recovery in the wake of disturbance. Epulopiscium bacteria in the guts of surgeonfishes produce enzymes that allow their hosts to digest complex polysaccharides, enabling the host fish to feed on tough, leathery red and brown macroalgae. This trophic innovation has facilitated niche diversification among coral reef herbivores. Surgeonfishes are critical to the functioning of Indo-Pacific coral reefs, as they are among the only fishes capable of consuming large macroalgae that bloom in the wake of ecosystem disturbance and suppress coral recovery. 
Along with more standard examples of nutritional symbioses in animals, recent advances in genome sequencing technology have led to the discovery of many endosymbiotic associations in marine protists (a protist is a general term used to refer to a non-monophyletic collection of unicellular eukaryotes that are not fungi or in the Plantae group). These illustrate the incorporation of various new biochemical functions, such as photosynthesis, nitrogen fixation and recycling, and methanogenesis, into protist hosts by endosymbionts. Endosymbiosis in protists is widespread and represents an important source of innovation. Previously unrecognized metabolic innovations of marine microbial symbioses that are ecologically important are discovered regularly. For example, Candidatus Kentron (a clade of Gammaproteobacteria found in association with ciliates) nourish their ciliate hosts of the genus Kentrophoros by recycling acetate and propionate, low-value cellular waste products from their hosts, into biomass. Another example is the anaerobic marine ciliate Strombidium purpureum. The ciliate lives under anaerobic conditions and harbors endosymbiotic purple nonsulfur bacteria that contain both bacteriochlorophyll a and spirilloxanthin. The endosymbionts are photosynthetically active; hence, this symbiosis represents an evolutionary transition of an aerobic organism to an anaerobic one while incorporating organelles. Reproduction and host development Extending beyond nutritional symbioses, microbial symbionts can alter the reproduction, development, and growth of their hosts. Specific bacterial strains in marine biofilms often directly control the recruitment of planktonic larvae and propagules, either by inhibiting settlement or by serving as a settlement cue. 
For example, the settlement of zoospores from the green alga Ulva intestinalis onto the biofilms of specific bacteria is mediated by their attraction to the quorum-sensing molecule, acyl-homoserine lactone, secreted by the bacteria. Classic examples of marine host–microbe developmental dependence include the observation that algal cultures grown in isolation exhibited abnormal morphologies and the subsequent discovery of morphogenesis-inducing compounds, such as thallusin, secreted by epiphytic bacterial symbionts. Bacteria are also known to influence the growth of marine plants, macroalgae, and phytoplankton by secreting phytohormones such as indole acetic acid and cytokinin-type hormones. In the marine choanoflagellate Salpingoeca rosetta, both multicellularity and reproduction are triggered by specific bacterial cues, offering a view into the origins of bacterial control over animal development (reviewed by Woznica and King). The benefit to the bacteria, in return, is that they receive physical space to colonize at particular points in the water column typically accessible only to planktonic microbes. Perhaps the best-studied example of intimate host–microbe interactions controlling animal development is the Hawaiian bobtail squid Euprymna scolopes. It lives in a mutualistic symbiosis with the bioluminescent bacteria Aliivibrio fischeri. The bacteria are fed a solution of sugars and amino acids by the host and, in return, provide bioluminescence for countershading and predator avoidance. This mutualism with microbes provides a selective advantage for the squid in predator–prey interactions. Another invertebrate example can be found in tubeworms, in which Hydroides elegans metamorphosis is mediated by a bacterial inducer and mitogen-activated protein kinase (MAPK) signaling in biofilms. 
Biofouling and microbial community assembly Some host-associated microbes produce compounds that prevent biofouling and regulate microbiome assembly and maintenance in many marine organisms, including sponges, macroalgae, and corals. For example, tropical corals harbor diverse bacteria in their surface mucus layer that produce quorum-sensing inhibitors and other antibacterial compounds as a defense against colonization and infection by potential microbial pathogens. Epiphytic bacteria of marine macroalgae excrete a diverse chemical arsenal capable of selectively shaping further bacterial colonization and deterring the settlement of biofouling marine invertebrates such as bryozoans. As in corals, these diverse, microbially secreted compounds include not only bactericidal and bacteriostatic antibiotics but also compounds like halogenated furanones, cyclic dipeptides, and acyl-homoserine lactone mimics that disrupt bacterial quorum sensing and inhibit biofilm formation. The bacteria likely are able to utilize the carbon-rich exudates from their hosts. For example, in the case of giant kelp, the alga emits approximately 20% of primary production as dissolved organic carbon. Whereas these prior examples illustrate how the microbiomes can protect hosts from surface colonization, a similar phenomenon has also been observed internally in the shipworm Bankia setacea, in which symbionts produce a boronated tartrolon antibiotic thought to keep the wood-digesting cecum clear of bacterial foulants. By producing antimicrobial compounds, these microbes are able to defend their niche space to prevent other organisms from crowding them out. Biogeochemical cycling Host-associated microbiomes also influence biogeochemical cycling within ecosystems with cascading effects on biodiversity and ecosystem processes. For example, microbial symbionts comprise up to 40% of the biomass of their sponge hosts. 
Through a process termed the "sponge-loop," they convert dissolved organic carbon released by reef organisms into particulate organic carbon that can be consumed by heterotrophic organisms. Along with the coral–Symbiodiniaceae mutualism, this sponge-bacterial symbiosis helps explain Darwin's paradox, i.e., how highly productive coral reef ecosystems exist within otherwise oligotrophic tropical seas. Some sponge symbionts play a significant role in the marine phosphorus cycle by sequestering nutrients in the form of polyphosphate granules in the tissue of their host, and in nitrogen cycling, e.g., through nitrification, denitrification, and ammonia oxidation. Many macroalgal-associated bacteria are specifically adapted to degrade complex algal polysaccharides (e.g., fucoidan, porphyran, and laminarin) and modify both the quality and quantity of organic carbon supplied to the ecosystem. The sulfur-oxidizing gill endosymbionts of lucinid clams contribute to primary productivity through chemosynthesis and facilitate the growth of seagrasses (important foundation species) by lowering sulfide concentrations in tropical sediments. Gammaproteobacterial symbionts of lucinid clams and stilbonematid nematodes were also recently shown to be capable of nitrogen fixation (their genomes encode and express nitrogenase genes), highlighting the role of symbiotic microbes in nutrient cycling in shallow marine systems. These examples demonstrate the importance of microbial symbioses for the functioning of ocean ecosystems. Understanding symbioses with this same level of detail in the context of complex communities (i.e., whole microbiomes) remains ripe for exploration and, indeed, requires a more integrated framework from the fields of microbiology, evolutionary biology, community ecology, and oceanography. Individual taxa within the microbiome may help hosts withstand a wide range of environmental conditions, including those predicted under scenarios of climate change. 
Examples The microbiomes of diverse marine animals are currently under study, from simple organisms such as sponges and ctenophores to more complex organisms such as sea squirts and sharks. The relationship between the Hawaiian bobtail squid and the bioluminescent bacterium Aliivibrio fischeri is one of the best-studied symbiotic relationships in the sea and a model system of choice for general symbiosis research. This relationship has provided insight into fundamental processes in animal–microbial symbioses, especially biochemical interactions and signaling between the host and bacterium. The gutless marine oligochaete worm Olavius algarvensis is another relatively well-studied marine host to microbes. These three-centimetre-long worms reside within shallow marine sediments of the Mediterranean Sea. The worms lack a mouth, digestive system, and excretory system, and are instead nourished by a suite of extracellular bacterial endosymbionts through the coordinated use of sulfur present in the environment. This system has benefited from some of the most sophisticated 'omics and visualization tools. For example, multi-labeled probing has improved visualization of the microbiome, and transcriptomics and proteomics have been applied to examine host–microbiome interactions, including energy transfer between the host and microbes and recognition of the consortia by the worm's innate immune system. The major strength of this system is that it offers the ability to study host–microbiome interactions with a low-diversity microbial consortium, and it also offers a number of host and microbial genomic resources. Corals Corals are one of the more common examples of an animal host whose symbiosis with microalgae can turn to dysbiosis, which is visibly detected as bleaching. 
Coral microbiomes have been examined in a variety of studies, which demonstrate how variations in the ocean environment, most notably temperature, light, and inorganic nutrients, affect the abundance and performance of the microalgal symbionts, as well as calcification and physiology of the host. Studies have also suggested that resident bacteria, archaea, and fungi additionally contribute to nutrient and organic matter cycling within the coral, with viruses also possibly playing a role in structuring the composition of these members, thus providing one of the first glimpses at a multi-domain marine animal symbiosis. The gammaproteobacterium Endozoicomonas is emerging as a central member of the coral's microbiome, with flexibility in its lifestyle. Given the recent mass bleaching occurring on reefs, corals will likely continue to be a useful and popular system for symbiosis and dysbiosis research. Astrangia poculata, the northern star coral, is a temperate stony coral, widely documented along the eastern coast of the United States. The coral can live with and without zooxanthellae (algal symbionts), making it an ideal model organism to study microbial community interactions associated with symbiotic state. However, the ability to develop primers and probes to more specifically target key microbial groups has been hindered by the lack of full-length 16S rRNA sequences, since sequences produced by the Illumina platform are of insufficient length (approximately 250 base pairs) for the design of primers and probes. In 2019, Goldsmith et al. demonstrated that Sanger sequencing was capable of reproducing the biologically relevant diversity detected by deeper next-generation sequencing, while also producing longer sequences useful to the research community for probe and primer design (see diagram on right). 
Sponges Sponges are common members of the ocean's diverse benthic habitats, and their abundance and ability to filter large volumes of seawater have led to the awareness that these organisms play critical roles in influencing benthic and pelagic processes in the ocean. They are one of the oldest lineages of animals, and have a relatively simple body plan that commonly associates with bacteria, archaea, algal protists, fungi, and viruses. Sponge microbiomes are composed of specialists and generalists, and the complexity of their microbiome appears to be shaped by host phylogeny. Studies have shown that the sponge microbiome contributes to nitrogen cycling in the oceans, especially through the oxidation of ammonia by archaea and bacteria. Most recently, microbial symbionts of tropical sponges were shown to produce and store polyphosphate granules, perhaps enabling the host to survive periods of phosphate depletion in oligotrophic marine environments. The microbiomes of some sponge species do appear to change in community structure in response to changing environmental conditions, including temperature and ocean acidification, as well as their synergistic impacts. Cetaceans Access to microbial samples from the gut of marine mammals is limited because most species are rare, endangered, and deep divers. There are different techniques for sampling a cetacean's gut microbiome. The most common is collecting fecal samples from the environment and taking a probe from the uncontaminated center. There are also studies based on rectal swabs, and rare studies sampling directly from the intestine of stranded animals, dead or alive. The outermost epidermal layer, i.e. 
the skin, is the first barrier that protects the individual from the outside world, and the epidermal microbiome on it is considered an indicator not only of the health of the animal but also an ecological indicator of the state of the surrounding environment. Knowing the microbiome of the skin of marine mammals under normal conditions has made it possible to understand how these communities differ from the free-living microbial communities found in the sea, how they can change with abiotic and biotic variations, and how they vary between healthy and sick individuals. Cetaceans are in danger because they are affected by multiple stress factors that make them more vulnerable to various diseases. These animals have been noted to show high susceptibility to airway infections, but very little is known about their respiratory microbiome. Therefore, sampling the exhaled breath or "blow" of cetaceans can provide an assessment of their state of health. Blow is composed of a mixture of microorganisms and organic material, including lipids, proteins, and cellular debris derived from the linings of the airways, which, when released into the relatively cooler outdoor air, condenses to form a visible mass of vapor that can be collected. There are various methods for collecting exhaled breath samples; one of the most recent is through the use of aerial drones. This method provides a safer, quieter, and less invasive alternative, and often a cost-effective option, for monitoring fauna and flora. Once obtained, the blow samples are taken to the laboratory, where the respiratory tract microbiota are amplified and sequenced. The use of aerial drones has been more successful with large cetaceans due to their slow swim speeds and larger blow sizes. 
Marine holobionts Reef-building corals are holobionts that include the coral itself (a eukaryotic invertebrate within class Anthozoa), photosynthetic dinoflagellates called zooxanthellae (Symbiodinium), and associated bacteria and viruses. Co-evolutionary patterns exist for coral microbial communities and coral phylogeny. References Further references Stal, L. J. and Cretoiu, M. S. (Eds.) (2016) The Marine Microbiome: An Untapped Source of Biodiversity and Biotechnological Potential. Springer. Microbiomes
Marine microbiome
[ "Environmental_science" ]
4,464
[ "Microbiomes", "Environmental microbiology" ]
62,894,376
https://en.wikipedia.org/wiki/Global%20stocktake
The Global Stocktake is a fundamental component of the Paris Agreement, used to monitor its implementation and evaluate the collective progress made in achieving the agreed goals. The Global Stocktake thus links implementation of nationally determined contributions (NDCs) with the overarching goals of the Paris Agreement, and has the ultimate aim of raising climate ambition. The synthesis report was published in 2023 before COP28. Background The Paris Agreement marked a turning point in international climate policy. Binding under international law and global in scope, it not only sets out ambitious global goals, such as limiting the rise in average global temperature to well below 2 °C compared with pre-industrial levels, but also introduces an innovative architecture that gives Parties considerable leeway in setting their own climate change targets. In contrast to common practice under international environmental law, states' individual contributions are not negotiated at international level and achievement of set targets is not binding. To ensure that the targets are implemented nonetheless, international-level review and transparency mechanisms have been made integral to the Agreement. Role as part of the Paris regime The Paris Agreement requires its signatory states (known as Parties) to regularly formulate their own climate action plans, so-called nationally determined contributions (NDCs), and to implement measures that help them achieve their climate action goals. There is, however, no obligation under international law for Parties to achieve their NDCs. Parties are nonetheless required to regularly report on their progress in implementing their NDCs, and the reports are subject to international peer review. 
In addition to this Enhanced Transparency Framework, the Paris Agreement stipulates that Parties must regularly update their NDCs, that the updated NDCs must not fall short of the targets applicable prior to the update and that they should reflect the highest possible level of ambition. In addition, a Global Stocktake is carried out once every five years to assess the collective progress made towards achieving the long-term goals. The outcomes of the stocktake are to be taken into account when developing nationally determined contributions. The Global Stocktake is thus a fundamental component of the Paris Agreement in that it regularly takes stock of progress made and provides a basis for use in updating Parties' NDCs. Role in raising ambition The Global Stocktake is designed to raise ambition by helping Parties to: See what they have achieved so far in implementing their NDCs. Identify what still needs to be done to achieve their NDC targets. Identify the approaches that can be taken to enhance their own efforts at national and international level. In this way, it is hoped that the Global Stocktake will become a driver of ambition. However, the Global Stocktake takes a collective rather than an individual approach. This means that individual countries are not singled out and the outcomes of the stocktaking process should not allow conclusions to be drawn about the state of implementation in individual states. Scope The question of whether the Global Stocktake should be limited to mitigation or should also include other aspects such as adaptation and the provision of climate finance has been the subject of controversial debate. In the run-up to the Climate Change Conference in Paris, however, the view prevailed that the Global Stocktake should take in all three. As part of the Global Stocktake, Article 14 of the Paris Agreement lists adaptation and the means of implementation and support. 
Three-phase Global Stocktake The modalities for implementation agreed at the Climate Change Conference in Katowice provide for three stocktake phases: Phase 1: Information collection and preparation Phase 1 involves collecting and preparing information needed to conduct the stocktake. Information is taken from various sources. In addition to Parties' nationally determined contributions (NDCs) and the associated reports submitted under the Paris Agreement, the most recent scientific findings of the Intergovernmental Panel on Climate Change (IPCC) as well as inputs from non-governmental stakeholders and observer organisations are also used. The information gathered is published in the public domain and also collated in the form of synthesis reports. Individual reports are also prepared on various focus topics – mitigation, adaptation, means of implementation, and cross-cutting issues – and on issues such as the status of global greenhouse gas emissions, the overall contribution made by NDCs and the status concerning action taken to adapt to climate change. Phase 2: Technical assessment of information In Phase 2, the information is assessed for collective progress in implementing the Paris Agreement and its long-term goals. This sees various stakeholders entering into a series of technical dialogues to discuss the information gathered in Phase 1. Phase 2 is also used to highlight the opportunities to strengthen and enhance response measures in dealing with climate change. The results are documented in a series of reports, including summary reports of each technical dialogue and the final synthesis report. Phase 3: Political messages derived from the technical assessment In Phase 3, the outcomes of the assessment flow into the policy process. The aim here is to support Parties to the Paris Agreement in enhancing both their climate change policies and the action they take to support other Parties. The outcomes are also used to promote international cooperation. 
On this point, it is unclear as to how the outcomes are to be documented – perhaps a political declaration or even a formal decision by the Conference of the Parties. Outlook The first Global Stocktake will take place in 2023. However, the transparency framework established by the Paris Agreement, which requires each individual state to report on the status of implementation of its NDC targets, and its national emissions, will not come into effect until 2024. Since Parties' reports compiled under the transparency framework are a vital source of information in conducting the Global Stocktake, the first Global Stocktake will have to build on earlier reporting requirements. These have numerous informational gaps, however, and it is uncertain as to what extent those gaps can be filled using other sources of information. For example, it is conceivable that greater use could be made of analyses and recommendations from non-governmental stakeholders, including civil society initiatives, companies and city administrations. Another aspect that still needs to be worked out concerns the exact timing of the three Global Stocktake phases. In particular, it must be ensured that the outputs of the process are completed in time and prepared in such a way that they can be taken into account appropriately when developing Parties' NDCs. See also Pledge and review References Environmental terminology Climate change Environmental social science Political economy 21st-century treaties
Global stocktake
[ "Environmental_science" ]
1,286
[ "Environmental social science" ]
62,894,846
https://en.wikipedia.org/wiki/Sommerfeld%20effect
In mechanics, Sommerfeld effect is a phenomenon arising from feedback in the energy exchange between vibrating systems: for example, when for the rocking table, under given conditions, energy transmitted to the motor resulted not in higher revolutions but in stronger vibrations of the table. It is named after Arnold Sommerfeld. In 1902, A. Sommerfeld analyzed the vibrations caused by a motor driving an unbalanced weight and wrote that "This experiment corresponds roughly to the case in which a factory owner has a machine set on a poor foundation running at 30 horsepower. He achieves an effective level of just 1/3, however, because only 10 horsepower are doing useful work, while 20 horsepower are transferred to the foundational masonry". First mathematical descriptions of Sommerfeld effect were suggested by I. Blekhman and V. Konenko. Hidden attractors in Sommerfeld effect In the theory of hidden oscillations, Sommerfeld effect is explained by the multistability and presence in the phase space of dynamical model without stationary states of two coexisting hidden attractors, one of which attracts trajectories from vicinity of zero initial data (which correspond to the typical start up of the motor), and the other attractor corresponds to the desired mode of operation with a higher frequency of rotation. Depending on the model under consideration, coexisting hidden attractors in the model may be either periodic or chaotic; such dynamical models with Sommerfeld effect are the earliest known mechanical example of a system without equilibria and with hidden attractors. For example, the Sommerfeld effect with hidden attractors can be observed in dynamic models of drilling rigs, where the electric motor may excite torsional vibrations of the drill. References Dynamical systems Physical phenomena Hidden oscillation
Sommerfeld effect
[ "Physics", "Mathematics" ]
371
[ "Physical phenomena", "Mechanics", "Hidden oscillation", "Dynamical systems" ]
62,894,863
https://en.wikipedia.org/wiki/Platform%20capitalism
Platform capitalism is an economic and business model in which digital platforms play a central role in facilitating interactions, transactions, and services between different user groups, typically consumers and producers. This model of capitalism has emerged and expanded with the rise of the Internet and digital technologies, transforming various sectors of the economy from retail and transportation to media and labor markets. Four main facets of platform capitalism are: crowdsourcing, sharing economy, gig economy and platform economy. Key characteristics of platform capitalism include: Network effects: the value of the platform increases exponentially as more users join, attracting even more users in a self-reinforcing cycle. This creates a dynamic where leading platforms can dominate markets, benefiting from economies of scale and scope; Data driven marketing and monetization: platforms collect vast amounts of user data, which is used to personalize experiences, target advertising, develop new products and services and refine algorithms. This data-centric approach enhances efficiency and user engagement; 'Asset-Light' business models: many platforms don't own the physical assets necessary to provide the services they offer, instead, they rely on the resources of their users and partners; Disruption of traditional industries: platforms are disrupting traditional industries (taxi industry, hospitality industry, old media industries such as television, music, radio and film, brick and mortar retails, banking and financial services etc.) by cutting out intermediaries and directly connecting producers with consumers; Algorithmic Governance: platforms use algorithms to manage and regulate interactions, determine rankings, and set prices. 
These algorithms play a crucial role in shaping the platform's ecosystem and can influence market dynamics and user behavior significantly; Regulatory challenges: the rapid growth and novel business models of platforms often outpace existing regulation. Thus, this leads to debates over issues like worker classification, data privacy, and market power; Global Reach and scalability: platforms enable businesses to scale rapidly and reach a global audience with relatively low marginal costs. Examples of platform capitalism include: e-commerce platforms (Amazon, Alibaba, eBay), social media platforms (Facebook, Twitter, Instagram, X), ride-hailing platforms (Uber, Lyft), short-term rental platforms (Airbnb), online travel booking platforms (Expedia, Booking.com, Kayak), video-sharing platforms (YouTube, TikTok), search engine platforms (Google Search, Microsoft Bing), web mapping platforms (Google Maps, Apple Maps, Petal Maps), app marketplaces platforms (Google Play, App Store) streaming platforms (Netflix, Disney+, Apple TV+, Amazon Prime Video), music streaming platforms (Spotify, Apple Music, Deezer), fintech platforms (PayPal), food delivery platforms (Just Eat, DoorDash, Deliveroo), crowdfunding platforms (GoFundMe, Patreon), freelancing platforms (Upwork, Fiverr), online learning platforms (Coursera, Udemy, Khan Academy, edX), voice and video calling platforms (Skype, Zoom), e-book hosting platforms (Kindle, Apple Books), career and job search platforms (Indeed, Monster.com, Glassdoor), manual work platforms (Helpling, Taskrabbit, MyHammer), video games marketplace platforms (Steam, Epic Games Store), dating platforms (Tinder, Bumble, OkCupid), pornographic platforms (Pornhub, XVideos, xHamster), subscription-based content platforms (OnlyFans), telemedicine platforms (WebMD, Teladoc Health), and generative artificial intelligence platforms (GPT-4o, Claude 3.5, Gemini, Llama, Copilot, Grok). 
In this business model both hardware and software are used as a foundation (platform) for other actors to conduct their own business. Platform capitalism has been both praised for its innovation, user empowerment and market efficiency and criticized for its potential for exploitation, market concentration, algorithmic bias and privacy concerns by various authors. The trends identified in platform capitalism have similarities with those described under the heading of surveillance capitalism. Technology companies build platforms that entire industries rely on, and those industries can easily collapse due to the decisions of those technology companies. The possible effect of platform capitalism on open science has been discussed. Platform capitalism has been contrasted with platform cooperativism. Companies that try to focus on fairness and sharing, instead of just profit motive, are described as cooperatives, whereas more traditional and common companies that focus solely on profit, like Airbnb and Uber, are platform capitalists (or cooperativist platforms vs capitalist platforms). In turn, projects like Wikipedia, which rely on unpaid labor of volunteers, can be classified as commons-based peer-production initiatives. See also Enshittification Platform economy References Business models Social sciences Computing and society Information Age Science and technology studies Capitalism
Platform capitalism
[ "Technology" ]
1,011
[ "Information Age", "Science and technology studies", "Computing and society" ]
62,894,871
https://en.wikipedia.org/wiki/PDBe-KB
Protein Data Bank in Europe – Knowledge Base (PDBe-KB) is a community-driven, open-access, integrated resource whose mission is to place macromolecular structure data in their biological context and to make them accessible to the scientific community in order to support fundamental and translational research and education. It is part of the European Bioinformatics Institute (EMBL-EBI), based at the Wellcome Genome Campus, Hinxton, Cambridgeshire, England. References Medical databases Science and technology in Cambridgeshire South Cambridgeshire District
PDBe-KB
[ "Chemistry", "Biology" ]
111
[ "Biochemistry stubs", "Biotechnology stubs", "Biochemistry" ]
62,895,972
https://en.wikipedia.org/wiki/List%20of%20general%20science%20and%20technology%20awards
This list of general science and technology awards is an index to articles about notable awards for general contributions to science and technology. These awards typically have broad scope, and may apply to many or all areas of science and/or technology. The list is organized by region and country of the sponsoring organization, but awards are not necessarily limited to people from that country. International Africa Americas Asia Europe Oceania See also Lists of awards Lists of science and technology awards List of years in science References Science and technology
List of general science and technology awards
[ "Technology" ]
99
[ "Science and technology awards", "Lists of science and technology awards" ]
62,896,097
https://en.wikipedia.org/wiki/6GH8
The 6GH8 (More commonly labeled as the 6GH8A)is a nine pin miniature vacuum tube, produced as a combination of medium-mu Triode and sharp-cutoff Pentode. It follows that the tube is divided into two sections - anode and cathode. Each of the sections had separate cathode. Commonly this type of tubes are used in radio and television receiver. The basic application of the tube is in multivibrator-type horizontal-deflection circuits, AGC-amplifier and sync-separator. They were most common and popular in RCA's CTC line of color television sets, as color demodulators. They were infamous for their failure, and therefore, often needed replacing. Other RETMA triode/pentode combinations with a 6-volts heater include: 6F7, 6P7, 6U8, 6X8, 6X9, 6AG9, 6AH9, 6AN8, 6AT8, 6AU8, 6AW8, 6AX8, 6AZ8, 6BE8, 6BH8, 6BL8, 6BR8, 6CG8, 6CH8, 6CM8, 6CS8, 6CU8, 6CX8, 6DZ8, 6EA8, 6EB8, 6EH8, 6EU8, 6FG7, 6FV8, 6GE8, 6GJ7, 6GJ8, 6GN8, 6GV7, 6GV8, 6GW8, 6GX7, 6HB7, 6HG8, 6HL8, 6HZ8, 6JA8, 6JC8, 6JN8, 6JV8, 6JW8, 6KA8, 6KD8, 6KE8, 6KR8, 6KT8, 6KV8, 6KY8, 6KZ8, 6LC8, 6LF8, 6LJ8, 6LM8, 6LN8, 6LR8, 6LX8, 6MG8, 6MQ8, 6MU8, 6MV8 References Telecommunications-related introductions in 1973 Audiovisual introductions in 1973 Vacuum tubes
6GH8
[ "Physics" ]
477
[ "Vacuum tubes", "Vacuum", "Matter" ]
62,897,234
https://en.wikipedia.org/wiki/Penis%20clamp
Penis clamp is an external penis compression device that treats male urinary incontinence. Incontinence clamps for men are applied to compress the urethra to compensate for the malfunctioning of the natural urinary sphincter, preventing leakage from the bladder with minimal restriction of blood flow. Description These devices are crafted to block or compress the urethra, thus preventing urine leakage. They are applied externally and are typically user-friendly and comfortable to use. Compression devices may vary in shape and size, but they are generally made of flexible and soft materials that adapt to the anatomical contour. Some models come with adjustable settings to customize the level of compression according to individual needs. There were models of urethra clamping devices that date back from the 1920s. They are most commonly made from stainless steel and plastic on the outer surface and silicone or rubber on the inner surface. They are usually applied as a cost-effective solution to urinary incontinence. Types of devices Cunningham Penile Clamp: This type of clamp is placed around the penis. It helps to stop urine leakage by compressing the urethra, the tube through which urine exits. This is achieved by controlled squeezing, preventing urine from escaping. Flexible Uriclak Device: Another type of clamp that is also placed on the penis. It functions similarly to the Cunningham penile clamp by compressing the urethra, but in this case, it does not have closures. It is flexible and comfortably adapts to halt urine flow. Advantages and benefits Non-invasive: These devices do not require surgery or invasive procedures, making them an attractive option for individuals looking to avoid surgical interventions. Customization: Penile clamps allow for personalized adjustments to cater to individual needs and preferences. 
Independence and Freedom: By offering greater control over urinary incontinence, incontinence devices help individuals maintain an active lifestyle and engage in various activities without worries. Effectiveness: Many users have experienced a significant reduction in urine leaks and an improvement in their quality of life after using penile devices. Risks Usually, these devices are safe and effective. However, none of the penile compression devices cause sustained irritation or impaired blood flow, and generally, patients yield good recovery around 40 minutes after the removal of the devices. It is recommended to de-clamp these devices at a regular interval of four hours. In the instruction manuals, manufacturers allow continuous use of penis clamps and recommend repositioning them every 2-3 hours (each time after urination). Customers should not buy this type of health products on non-specialized websites. Patients may incur serious risks. Urinary clamps imported directly from Asia may not have passed the necessary medical checks. See also Urinary incontinence management Artificial urinary sphincter (AUS) Urinary catheterization Intermittent catheterisation References Urology Urological conditions Medical devices Urologic procedures
Penis clamp
[ "Biology" ]
618
[ "Medical devices", "Medical technology" ]
62,897,500
https://en.wikipedia.org/wiki/Nordre%20Follo
Nordre Follo is a municipality in Akershus county. Norway. Nordre Follo was established on 1 January 2020 by the merging of Ski and Oppegård municipalities. Environment Contaminants from thousands of truckloads of slate (that were dumped on two properties that were not prepared as waste sites), are constantly leaking into stream Snipetjernsbekken, which flows into Lake Gjersjøen—the fresh water source for around 49,500 people in Oppegård and the municipality Ås; the stream has [a high concentration, or] much heavy metals and the pH is low. The magnitude of the problem has been known since 2006. In 2022, the government started its case-work thru Environment Agency and Radiation Protection Authority. Notable people References Municipalities of Akershus Oppegård Water pollution Water in Norway
Nordre Follo
[ "Chemistry", "Environmental_science" ]
177
[ "Water pollution" ]
62,897,843
https://en.wikipedia.org/wiki/JAMstack
JAMstack (also stylized as Jamstack) is a web development architecture pattern and solution stack. The acronym "JAM" stands for JavaScript, API and Markup (generated by a static site generator) and was coined by Matt Biilmann in 2015. The idea of combining the use of JavaScript, APIs and markup has existed since the beginnings of HTML5. In JAMstack websites, the application logic typically resides on the client side (for example, an embedded e-commerce checkout service that interacts with pre-rendered static content), without being tightly coupled to a backend server. JAMstack sites are usually served with a Git-based or headless CMS. See also Named "stacks" LAMP (software bundle) MEAN (solution stack) LYME (software bundle) References External links Web design JavaScript software
JAMstack
[ "Technology", "Engineering" ]
177
[ "Computing stubs", "Design", "Web design", "Computer network stubs" ]
62,899,094
https://en.wikipedia.org/wiki/Kristi%20Kiick
Kristi Lynn Kiick is the Blue and Gold Distinguished Professor of Materials Science and Engineering at the University of Delaware. She studies polymers, biomaterials and hydrogels for drug delivery and regenerative medicine. She is a Fellow of the American Chemical Society, the American Institute for Medical and Biological Engineering, and of the National Academy of Inventors. She served for nearly eight years as the deputy dean of the college of engineering at the University of Delaware. Early life and education Kiick first became interested in a career in the chemical sciences when she was at high school. She studied chemistry at the University of Delaware, from which she graduated summa cum laude as a Eugene du Pont memorial distinguished scholar. She was a Master's student at the University of Georgia, where she was awarded a National Science Foundation (NSF) predoctoral fellowship, and joined Kimberly-Clark as a research scientist in 1992. Kiick returned to academia for a second master's degree in polymer science and engineering at the University of Massachusetts Amherst. She completed her doctoral research at the California Institute of Technology, as a National Defense Science and Engineering Graduate (NDSEG) fellow. She completed her PhD from the University of Massachusetts Amherst on templated macromolecular synthesis in 2001 under the supervision of David A. Tirrell, prior to starting her faculty position at the University of Delaware in 2001. Research and career Kiick designs polymer nanostructures for targeted therapies and hydrogel matrices for regenerative medicine. She makes use of biomimetic self-assembly, bioconjugation and biosynthesis. In particular, Kiick has worked on polymer-peptide macromolecular structures that can engage cellular targets. These include the use of polyethylene glycol (PEG) in click chemistry to form hydrogels that degrade selectively in response to molecules present in tissues and extracellular matrix. 
Kiick has shown it is possible to selectively release small molecule cargo with a tuned release for applications in targeted drug-delivery and vascular grafts. She has developed resilin-like polypeptides (RLP), elastomeric materials that can be cross-linked using small molecules, as well as hydrogels that contain nanoparticles for targeting tumors and inflammatory conditions. Resilin is a primary elastomeric protein that is found in insects, and helps them to jump long distances and produce sound. She joined the faculty at the University of Delaware in 2001, and earned the rank of associate professor in 2007. In 2011 Kiick was promoted to the rank of professor of materials science and engineering and also named deputy dean of the University of Delaware’s college of engineering. In 2019-2020 she was awarded a Leverhulme Visiting Professorship from the Leverhulme Trust and a Fulbright Scholarship from the Fulbright Program to the University of Nottingham, to develop protocols for fabricating bioelastomeric materials. Awards and honours Her awards and honours include: 2003 National Science Foundation CAREER Award 2004 University of Delaware Francis Alison Young Scholar Award 2010 University of Minnesota Etter Memorial Lectureship in Chemistry 2012 University of Delaware Trabant Award for Women's Equity 2014 University of Southern Mississippi Bayer Distinguished Lectureship 2014 Elected a fellow of the American Chemical Society (ACS) 2014 Elected a fellow of the American Institute for Medical and Biological Engineering (AIMBE) 2015 University of Southern Mississippi Covestro Distinguished Lectureship 2019 Fulbright Program Scholar 2019 Elected a fellow of the National Academy of Inventors Selected publications Her publications include: Personal life Kiick is married with two children. 
References Living people American women chemists University of Delaware alumni University of Delaware faculty University of Georgia alumni University of Massachusetts Amherst College of Engineering alumni Supramolecular chemistry 1967 births American women academics 21st-century American women Academics of the University of Nottingham
Kristi Kiick
[ "Chemistry", "Materials_science" ]
779
[ "Nanotechnology", "nan", "Supramolecular chemistry" ]
62,900,324
https://en.wikipedia.org/wiki/Gelfand%E2%80%93Fuks%20cohomology
In mathematics, Gelfand–Fuks cohomology, introduced in , is a cohomology theory for Lie algebras of smooth vector fields. It differs from the Lie algebra cohomology of Chevalley-Eilenberg in that its cochains are taken to be continuous multilinear alternating forms on the Lie algebra of smooth vector fields where the latter is given the topology. References Further reading Cohomology theories Lie algebras Homological algebra
Gelfand–Fuks cohomology
[ "Mathematics" ]
95
[ "Fields of abstract algebra", "Mathematical structures", "Category theory", "Homological algebra" ]
62,902,993
https://en.wikipedia.org/wiki/AquaSalina
AquaSalina is a salt de-icer made from produced water (or brine) at Duck Creek Energy's vertical oil and gas wells. It is then filtered in Cleveland, Ohio and Mogadore, Ohio. The Ohio Department of Transportation approved AquaSalina in 2004, and it has been sold at Lowe's and elsewhere. In the winter of 2017–2018, the Ohio Department of Transportation sprayed over 500,000 gallons of AquaSalina deicer on highways. In the 2018–2019 winter they applied over 620,000 gallons of it. In the winter of 2018–2019, they applied nearly 800,000 gallons. In 2017, the Ohio Department of Natural Resources (ODNR) tested samples and found high radium levels, as has a Duquesne University scientist, who called it "a nightmare". While ODNR's tests indicated the results were 300 times higher than allowed in drinking water and above the levels allowed for the discharge of radioactive waste, it met their standards to be used as a deicer. Specifically, 0.005 picocuries per liter of radium is allowed for disposal, but there is no limit for spreading on roadways. The ODNR samples contained between 66 and 9602 picocuries per liter, including one sample that was higher than raw brine. Several bills have been introduced in the Ohio legislatures from 2017 to 2019 to consider brine deicers a commodity, rather than toxic waste, to exempt them from ODNR testing. Fracking water lawsuit Duck Creek Energy won a defamation lawsuit in 2013 against two individuals who said AquaSalina was "frac waste" or "fracking water". AquaSalina's source is vertical oil and gas wells, not fracking wells. They were allowed to continue describing it as "toxic". The ruling made a distinction stating AquaSalina "is" versus "contains" fracking water. References Further reading Water pollution Radioactive contamination Radiation health effects Ice in transportation Economy of Ohio
AquaSalina
[ "Physics", "Chemistry", "Materials_science", "Technology", "Environmental_science" ]
413
[ "Ice in transportation", "Radiation health effects", "Radioactive contamination", "Water pollution", "Physical systems", "Transport", "Environmental impact of nuclear power", "Radiation effects", "Radioactivity" ]
67,235,750
https://en.wikipedia.org/wiki/Chemosensory%20speciation
Chemosensory speciation (chemosensory isolation) is the evolution of a population to become distinct species that is driven by chemical stimuli (i.e., chemical signals, recognition). These chemical signals may create premating or other isolating behavioral barriers that prevent gene flow among subpopulations that eventually lead to two separate species. Chemosensory pathways are vital for exogenous and endogenous recognition and processing of volatile organic compounds (VOCs); therefore, they are viewed as active attributors to an organism's behavior. Chemosensory pathways involving: odorant binding proteins (OBP), chemosensory proteins (CSP), gustatory receptors (GR), and sensory neuron membrane proteins (SNMP), have been investigated in numerous biological systems as genetic barriers. These chemosensory genes are utilized for identification of candidate loci that are under positive selection. Experiments are commonly tailored to studying an organism's response to alterations in their chemosensory pathways, or using molecular phylogenetics to analyze the divergence of these systems in sister taxa. Sensory pathways allow integration of environmental stimuli that strongly influence an organism's behavior and are hypothesized to have broad implications as a module of behavioral selection; nevertheless, here it will be reviewed briefly in three well-supported modules: Resource Identification (foraging behaviors), Conspecific Interactions (sexual selection), and Host Recognition. Resource identification In vivo, a complex of chemical signals and pheromones are commonly perceived together instead of as chemical isolates. Identification of specific chemical cues in the myriad of surrounding compounds commonly leads to behavioral changes in the recipient organism. 
For instance, in a recent publication studying behavioral phenotypes, the researchers identify a foraging behavioral shift that is caused by an insensitivity to one of several pheromone isoforms detected in the mutant and wildtype; the pheromone insensitivity stimulates a constant foraging behavior, despite regular activity performed by the other individuals in the population. The expansion and evolution of chemosensory systems are especially evident in the insect lineage due to their life history. For example, a recent publication highlights the importance of odorant receptors (OR) for proper antennal lobe (AL) function; a gene knockout of an OR co-receptor drastically impaired AL development in the clonal raider ant, Ooceraea biroi. Chemosensory systems, and their regulatory repertoire, influence the development and behavior of foraging organisms. Host recognition Regardless of the relationship (symbiont, commensal, or parasitic), host recognition is determined by deciphering the surrounding chemical cocktails for specific signals that excite their sensory receptors. Host race formation in Aphid species has recently been used as a model system to understand divergent selection in the face of gene flow. For populations that depend on a host, their ability to identify and locate host signals are recognized as being under a post-mating selection pressure. The host-dependent organisms harbor sensory adaptations that allow them to appropriately process complex mixtures of chemical cues. Intermating populations that are host specific may have a higher chance of allopatric isolation and divergence due to a narrowed niche and host’s ecological variability. Another evolutionary model that supports chemosensory importance is the antagonistic Red Queen hypothesis that elaborates on coevolution adaptations. Conspecific interaction Olfactory cues are widely used for conspecific and mate selection but are also commonly used in avoidance strategies. 
The role of chemosensory systems in conspecific recognition recapitulates their place in natural and sexual selection. Chemosensory cues offer initial information on potential mates, such as size, age, and environment. An organism’s integration of these cues allows them to choose mates that provide increased fitness. Divergence of pheromones and their receptors lead to various expression patterns within populations that may then be selected upon. This variation of hormone expression has been studied in numerous biological systems. For example, two populations that are not spatially isolated, yet form a species complex with several monophyletic types may require specific chemical identification for proper intraspecific communication and mate selection. An example of this intraspecific communication barrier is seen in two co-habitual populations of the Iberian wall lizard, Podarcis hispanica; the males of the two species emit different pheromone assemblages and can discriminate the types of cues, however, the females were unable to discriminate between the two pheromone cues. The behavioral and evolutionary impacts of these relationships are commonly underlined in more recent sexual selection models, such as the good genes (sexy son hypothesis) and the sensory bias model. References Speciation Evolution Species
Chemosensory speciation
[ "Biology" ]
986
[ "Evolutionary processes", "Speciation" ]
67,236,309
https://en.wikipedia.org/wiki/H4R3me2
H4R3me2 is an epigenetic modification to the DNA packaging protein histone H4. It is a mark that indicates the di-methylation at the 3rd arginine residue of the histone H4 protein. In epigenetics, arginine methylation of histones H3 and H4 is associated with a more accessible chromatin structure and thus higher levels of transcription. The existence of arginine demethylases that could reverse arginine methylation is controversial.

Nomenclature

The name of this modification indicates dimethylation of arginine 3 on the histone H4 protein subunit.

Arginine

Arginine can be methylated once (monomethylated arginine) or twice (dimethylated arginine). Methylation of arginine residues is catalyzed by three different classes of protein arginine methyltransferases (PRMTs). Arginine methylation affects the interactions between proteins and has been implicated in a variety of cellular processes, including protein trafficking, signal transduction, and transcriptional regulation. Arginine methylation plays a major role in gene regulation because of the ability of the PRMTs to deposit key activating (H4R3me2, H3R2me2, H3R17me2, H3R26me2) or repressive (H3R2me2, H3R8me2, H4R3me2) histone marks.

Histone modifications

The genomic DNA of eukaryotic cells is wrapped around special protein molecules known as histones. The complexes formed by the looping of the DNA are known as chromatin.

Mechanism and function of modification

JMJD6, a Jumonji domain-containing protein, was reported to demethylate H4R3me2. H4R3me2 is a major mark deposited by Prmt5. H4R8me2s is associated with transcriptional repression and is tightly linked with H4R3me2s methylation.

Epigenetic implications

The post-translational modification of histone tails by either histone-modifying complexes or chromatin remodeling complexes is interpreted by the cell and leads to complex, combinatorial transcriptional output.
It is thought that a histone code dictates the expression of genes through a complex interaction between the histone marks in a particular region. The current understanding and interpretation of histones comes from two large-scale projects: ENCODE and the Epigenomic Roadmap. The purpose of the epigenomic study was to investigate epigenetic changes across the entire genome. This led to chromatin states, which define genomic regions by grouping different proteins and/or histone modifications together. Chromatin states were investigated in Drosophila cells by looking at the binding location of proteins in the genome. Use of ChIP-sequencing revealed regions in the genome characterized by different banding. Different developmental stages were profiled in Drosophila as well, with an emphasis placed on the relevance of histone modifications. Examination of the data obtained led to the definition of chromatin states based on histone modifications. Certain modifications were mapped, and their enrichment was seen to localize in certain genomic regions. The human genome is annotated with chromatin states. These annotated states can be used as new ways to annotate a genome independently of the underlying genome sequence. This independence from the DNA sequence underscores the epigenetic nature of histone modifications. Chromatin states are also useful in identifying regulatory elements that have no defined sequence, such as enhancers. This additional level of annotation allows for a deeper understanding of cell-specific gene regulation.

Clinical significance

There is evidence of crosstalk between H4R3me2 and H3K9ac and H3K14ac in cell differentiation and the response to cocaine.

Methods

The histone mark can be detected in a variety of ways:

1. Chromatin immunoprecipitation sequencing (ChIP-sequencing) measures the amount of DNA enrichment once bound to a targeted protein and immunoprecipitated. It is well optimized and is used in vivo to reveal DNA–protein binding occurring in cells.
ChIP-Seq can be used to identify and quantify various DNA fragments for different histone modifications along a genomic region.

2. Micrococcal nuclease sequencing (MNase-seq) is used to investigate regions that are bound by well-positioned nucleosomes. The micrococcal nuclease enzyme is employed to identify nucleosome positioning. Well-positioned nucleosomes show enrichment of sequences.

3. Assay for transposase-accessible chromatin sequencing (ATAC-seq) is used to look into regions that are nucleosome-free (open chromatin). It uses the hyperactive Tn5 transposase to highlight nucleosome localisation.

See also

Histone methylation Histone methyltransferase

References

Epigenetics Post-translational modification
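The shorthand used throughout this entry (H4R3me2, H3K9ac, H4R3me2s, and so on) is regular enough to be unpacked mechanically into histone, residue, position, and modification. A minimal sketch in Python; the regex and the set of modification suffixes are illustrative simplifications, not an official grammar:

```python
import re

# Parse histone-mark shorthand like "H4R3me2":
#   histone subunit (H4), residue letter (R = arginine), position (3),
#   modification (me1/me2/me3, ac, ph, ub), optional symmetry suffix (s/a).
MARK = re.compile(r"^(H[A-Z0-9]+?)([A-Z])(\d+)(me\d|ac|ph|ub)(s|a)?$")

def parse_mark(name):
    m = MARK.match(name)
    if not m:
        raise ValueError(f"unrecognized mark: {name}")
    histone, residue, pos, mod, sym = m.groups()
    return {"histone": histone, "residue": residue,
            "position": int(pos), "mod": mod, "symmetry": sym}

print(parse_mark("H4R3me2"))   # dimethylation of residue 3 on histone H4
print(parse_mark("H4R3me2s"))  # the symmetric variant mentioned above
```

The same parser accepts the other marks cited in this entry, such as H3K9ac and H3K14ac.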
H4R3me2
[ "Chemistry" ]
1,057
[ "Post-translational modification", "Gene expression", "Biochemical reactions" ]
67,236,631
https://en.wikipedia.org/wiki/Victor%20Gevers
Victor Gevers is a Dutch security hacker.

Career

He has been hacking since 1998 and runs the GDI Foundation. In 2019 he discovered a large data breach at the Chinese surveillance company SenseNets. He is known for hacking the Twitter account of U.S. President Donald Trump; he was not convicted for it.

References

External links

Twitter profile of Victor Gevers Living people Hackers Year of birth missing (living people)
Victor Gevers
[ "Technology" ]
87
[ "Lists of people in STEM fields", "Hackers" ]
67,237,637
https://en.wikipedia.org/wiki/Dragvanti
Dragvanti (stylized DragVanti) is a web portal dedicated to drag performers based in India.

History

DragVanti was launched on June 20, 2020 by Patruni Sastry. The platform also connects emerging drag artists to the entertainment industry. Originally, DragVanti was only a website. From 2019 to 2021 it was also a monthly publication circulated online at no cost. The drag directory was launched in June 2020. When asked about the intent of creating such a platform, founder Patruni Sastry said: "When I started performing drag in 2019, there was no content about Indian drag available; the only content coming in was that from the West. However, drag is present in classical Indian culture, with a mention of it occurring in the Nātya Śāstra, a record of Indian performance art estimated to be around 2,000 years old. Yet today, we don't acknowledge what drag artists are doing within India."

Events

In June 2020, DragVanti co-hosted the Pride Online fest in collaboration with Social Samosa, which featured curated drag panel discussions and performances. In August 2020, DragVanti hosted a TED circle for drag performers. In March 2021, DragVanti hosted open online mic evenings via its social media handles. In June 2021, as part of Pride Month celebrations, DragVanti organized India's first drag conference, with more than 6 drag queens, to initiate academic discussions in the field of drag. In August 2021, DragVanti hosted India's first BI/PAN festival to create awareness of the bisexuality and pansexuality spectrums. DragVanti also hosts an annual celebration of queer Halloween.

References

External links

Website Indian websites LGBTQ-related websites LGBTQ-related Internet forums
Dragvanti
[ "Technology" ]
357
[ "Computing stubs", "World Wide Web stubs" ]
67,239,625
https://en.wikipedia.org/wiki/KELT-3
KELT-3 is a star in the zodiac constellation Leo. With an apparent magnitude of 9.82, it is too faint to be seen with the naked eye, but it can be detected using a telescope. It is located around 681 light years away, based on parallax measurements.

Properties

KELT-3 is an early F-type main-sequence star with 27.7% more mass than the Sun, and it is slightly larger than the Sun. It radiates 3 times the Sun's luminosity and has a metallicity similar to the Sun's. It has an effective temperature of 6,304 K, which gives KELT-3 a yellow-white hue. It is also slightly younger than the Sun, with an age of 3 billion years. There is uncertainty about the star's age and about whether or not it is an evolved star. Since 2015, the star has been suspected to have a stellar companion at an angular separation of 3.762 arcseconds.

Planetary system

In 2013, KELT discovered an eccentric hot Jupiter transiting the star. The discovery paper describes it as one of the brightest transiting hosts. The light curves of the star have been observed during transits.

See also

KELT

References

Planetary transit variables Planetary systems with one confirmed planet F-type main-sequence stars BD+41 2024 J09543439+4023170 TIC objects Leo (constellation)
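The quoted luminosity and effective temperature can be cross-checked against the Stefan–Boltzmann relation, L/L☉ = (R/R☉)²·(T/T☉)⁴. A quick sketch; the solar effective temperature below is the IAU nominal value, and the derived radius is only a consistency check, not a measured quantity:

```python
# Stefan–Boltzmann consistency check for KELT-3:
#   L/L_sun = (R/R_sun)^2 * (T/T_sun)^4  =>  R/R_sun = sqrt(L / (T/T_sun)^4)
T_eff = 6304.0  # K, effective temperature quoted in the article
T_sun = 5772.0  # K, IAU nominal solar effective temperature (assumed)
L = 3.0         # solar luminosities, as quoted

R = (L / (T_eff / T_sun) ** 4) ** 0.5
print(round(R, 2))  # ≈ 1.45 solar radii — larger than the Sun, as the article states
```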
KELT-3
[ "Astronomy" ]
295
[ "Leo Minor", "Constellations" ]
67,239,644
https://en.wikipedia.org/wiki/Website%20footer
In web design, a footer is the bottom section of a website. It is used across many websites around the internet. Footers can contain any type of HTML content, including text, images and links. HTML5 introduced the <footer> element. Common items included in or linked to from footers are copyright notices, sitemaps, privacy policies, terms of use, contact details and directions. Infinite scrolling cannot be used in combination with footers, because newly loading content keeps pushing the footer out of reach. References Web design
Website footer
[ "Engineering" ]
105
[ "Software engineering", "Software engineering stubs", "Design", "Web design" ]
67,239,921
https://en.wikipedia.org/wiki/KELT-3b
KELT-3b is an extrasolar planet orbiting the F-type main-sequence star KELT-3, 690 light years away in the zodiac constellation Leo. It was discovered in 2013 by KELT's telescope in Arizona.

Properties

This planet has 44% more mass than Jupiter but has expanded to 1.34 times Jupiter's radius. It has a temperature of 1,811 K, which places it in the hot Jupiter class. KELT-3b has a lower density than Jupiter and completes a revolution in less than 3 days. This corresponds to an orbital distance of 0.04 AU, about one tenth of the distance at which Mercury orbits the Sun. The planetary equilibrium temperature is 1829 K, but the measured temperature is hotter, at 2132 K. The radiation of the moderately active host star KELT-3 does not produce a detectable ionization, and consequent Lyman-alpha line emission, in the atmosphere of KELT-3b.

Discovery

KELT-3b was discovered in 2013. The light curves and parameters of both the planet and the star were observed. The discovery paper also states that there is uncertainty about the system's age.

References

Exoplanets discovered in 2012 Exoplanets discovered by KELT Hot Jupiters Leo (constellation)
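The quoted orbital distance follows from Kepler's third law, which in solar units reads (a/AU)³ = (M/M☉)·(P/yr)². A quick check; the period and stellar mass below are assumptions consistent with the figures quoted in these entries ("less than 3 days", a star with ~28% more mass than the Sun):

```python
# Kepler's third law in solar units: (a/AU)^3 = (M/M_sun) * (P/yr)^2
M_star = 1.28          # stellar mass in solar masses (assumed from the KELT-3 entry)
P_days = 2.7           # orbital period in days (assumed, "less than 3 days")
P_yr = P_days / 365.25

a_au = (M_star * P_yr ** 2) ** (1 / 3)
print(round(a_au, 3))  # ≈ 0.041 AU, matching the quoted 0.04 AU

mercury_au = 0.387     # Mercury's semi-major axis
print(round(mercury_au / a_au, 1))  # roughly 10x closer to its star than Mercury
```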
KELT-3b
[ "Astronomy" ]
262
[ "Leo (constellation)", "Constellations" ]
67,240,166
https://en.wikipedia.org/wiki/Jane%20Kister
Jane Elizabeth Kister (born and also published as Jane Bridge, 18 October 1944 – 1 December 2019) was a British and American mathematical logician and mathematics editor who served for many years as an editor of Mathematical Reviews. Early life and education Jane Bridge was originally from Weybridge, England, where she was born on 18 October 1944; her father was a lawyer and later a judge. Her family moved to London when she was four, and she studied at St Paul's Girls' School in London. She matriculated at Somerville College, Oxford in 1963, but her studies were interrupted by a diagnosis of lupus; she resumed reading mathematics there in 1964, tutored by Anne Cobbe. She earned a first, won a Junior Mathematical Prize, and continued at Oxford for graduate study. She was given the Mary Somerville Research Fellowship in 1969, and completed her doctorate (D.Phil.) at Oxford in 1972. Her dissertation, Some Problems in Mathematical Logic: Systems of Ordinal Functions and Ordinal Notations, was supervised by Robin Gandy. She then became a tutorial fellow in mathematics at Somerville College, taking Anne Cobbe's position after Cobbe's retirement, and a member of the Mathematical Institute, University of Oxford, working among others there with Dana Scott. Marriage and later life In 1977, mathematician James Kister from the University of Michigan visited Oxford on sabbatical; they married in 1978 and she returned with him to the US, giving up her position at Oxford and in 1992 taking US citizenship. She obtained a visiting professorship at the Massachusetts Institute of Technology, and then in 1979 began working at Mathematical Reviews, where she would remain for the rest of her career. She became associate executive editor in 1984, and executive editor in 1998, the first woman to hold that position. When Mathematical Reviews shifted from being a paper review journal to an online electronic database, MathSciNet, in 1996, Kister was heavily involved in this advance. 
She also held an adjunct professorship at the University of Michigan. She retired in 2004, and died of a heart attack on 1 December 2019. Books As Jane Bridge, she was the author of the book Beginning Model Theory: The Completeness Theorem and Some Consequences (Clarendon Press, 1977), the first volume in the Oxford Logic Guides book series. She also co-edited the Ω-Bibliography of Mathematical Logic, Volume VI: Proof Theory, Constructive Mathematics (Perspectives in Mathematical Logic, Springer, 1987). References 1944 births 2019 deaths 20th-century American mathematicians 21st-century American mathematicians British mathematicians British women mathematicians Mathematical logicians Women logicians Alumni of Somerville College, Oxford Fellows of Somerville College, Oxford 20th-century American women mathematicians 21st-century American women mathematicians
Jane Kister
[ "Mathematics" ]
553
[ "Mathematical logic", "Mathematical logicians" ]
67,242,981
https://en.wikipedia.org/wiki/Gorgerin
In architecture, a gorgerin (from the French gorge, meaning 'throat') is the necking or portion of a capital of a column, or a feature forming the junction between a shaft and its capital. References Citations Sources Architectural elements
Gorgerin
[ "Technology", "Engineering" ]
45
[ "Building engineering", "Architectural elements", "Components", "Architecture" ]
67,243,270
https://en.wikipedia.org/wiki/DHL%20MoonBox
DHL MoonBox was a box of mementos that was launched to the Moon on Astrobotic Technology's Peregrine lunar lander in 2024. 151 MoonBox capsules, also known as "Moonpods", were made by DHL, each containing items intended to be shipped to the lunar surface. The capsules measured up to 1 inch wide and 2 inches high (2.5 by 5.1 cm) and contained items from the USA, UK, Canada, Nepal, Germany and Belgium. The items included stories written by children and a rock from Mount Everest. DHL also included a data stick containing 100,000 images from those who responded to its "Who do you love to the moon and back?" campaign. Landing of the Peregrine on the Moon was later abandoned due to a propellant leak; the spacecraft re-entered Earth's atmosphere and was destroyed on 18 January 2024. References External links Astrobotic - MoonBox Message artifacts Peregrine Payloads
DHL MoonBox
[ "Astronomy" ]
216
[ "Message artifacts", "Outer space" ]
67,245,035
https://en.wikipedia.org/wiki/Bahareque
Bahareque, also referred to in Spanish as bajareque or fajina, is a traditional building technique used in the construction of housing by indigenous peoples. The constructions are developed using a system of interwoven sticks or reeds with a covering of mud, similar to the wattle-and-daub structures seen in Europe. This technique is primarily used in regions such as Caldas, one of the 32 departments of Colombia.

Origin

Bahareque is an ancient construction system used within the Americas. The name is said to come from an old Spanish term for walls made of bamboo (guadua in Spanish) and soil. Guadua is a common woody grass found in Colombia. While its exact origin is uncertain, some authors have attributed the word to Caribbean-Taíno culture and written it as 'bajareque'. Similar homophonies are found in other Native American languages. Pedro José Ramírez Sendoya (1897-1966), a Colombian priest and anthropologist, mentioned its use in his writings, noting that it was used to construct "good buildings with walls of clay and wood almost as wide as one of our walls, tall and whitewashed with very white clay".

Construction and materials

Based on Jorge Enrique Robledo's book, Muñoz points out that this traditional building technique evolved in Caldas from the first buildings constructed during the 1840s through the introduction of new materials, creating four typologies, all of which typically use stone foundations. Each typology has a different structural design. For instance, one typology uses bamboo in both the frame and the structural panels, with a plaster that, according to Sarmiento, is made from a mixture of earth and cattle dung. Another uses wood in the frame and bamboo (guadua) in its structural panels, with a plaster that acts as a kind of "reinforced cement" because of the steel mat placed between the bamboo panels and the cement plaster.
In the 1840s, the first settlers of Manizales, the capital city of Caldas, used bahareque in buildings that were usually single story. At the same time, in the rural areas, some farmers mixed traditional building styles: tapia, a pre-Hispanic construction technique, for the first floor, and bahareque for the second. The first floor was based on compacted earth using wooden forms. In 1993, Robledo called this variation the estilo temblorero. The name derives from the fact that this variation of bahareque performed better in earthquakes (temblores in Spanish), since the first floor, which was rigid, absorbed the seismic energy, and the second floor, which was flexible, dissipated it. Consequently, the estilo temblorero, which was used in a few farms and occasionally in the city of Manizales as temporary housing, gained favor after people saw that earthquakes were destroying buildings built with other construction techniques, while those built in the estilo temblorero remained standing. Because of the materials' flammability, and after the great fires of Manizales between 1925 and 1926, trust in bahareque was lost. After these great fires and the introduction of new construction techniques, such as reinforced concrete, new variations of the technique were introduced, though more trust was placed in reinforced concrete than in bahareque. These new variations, which used concrete frames and bahareque facades and structural panels, were the most common structural designs in the reconstruction of the downtown that had been swept by the great fires.

See also

Adobe

Footnotes

Works cited

Indigenous architecture of the Americas Architecture in Colombia Building materials Sustainable building
Bahareque
[ "Physics", "Engineering" ]
749
[ "Sustainable building", "Building engineering", "Architecture", "Construction", "Materials", "Matter", "Building materials" ]
67,245,304
https://en.wikipedia.org/wiki/Judith%20Harrison
Judith A. Harrison is an American physical chemist and tribologist known for pioneering numerical methods that incorporate chemical reactions into modeling studies. She is a professor in the Department of Chemistry at the United States Naval Academy in Annapolis, Maryland.

Education

Harrison attended the University of New Hampshire, where she completed a Ph.D in computational quantum chemistry on gas-phase reaction dynamics under the supervision of H.R. Mayne in 1989.

Career

After a yearlong post-doctoral appointment at Duke University, Harrison joined the research group of Carter White and Richard Colton at the Naval Research Laboratory (NRL) as an American Society for Engineering Education postdoctoral associate. She joined the chemistry department at the United States Naval Academy as an assistant professor and was subsequently promoted to the ranks of associate professor and full professor. She has also held visiting scientist appointments at Johns Hopkins University and the University of Pennsylvania. Harrison serves on the editorial board of Tribology Letters. She has held a variety of leadership, educational, and advisory roles in the AVS and STLE, and was the Vice-Chair for the 2022 Gordon Research Conference on Tribology.

Research and publications

Harrison studies friction at the molecular level and runs simulations to unravel the molecular origins of friction and wear. She has published on topics that include nanotribology, molecular dynamics, the AIREBO potential, nanomechanics, wear, nanoindentation, friction, diamond, and mechanical properties. At NRL she began working with D.W. Brenner, who had recently reported a formalism for a potential energy surface now known as the "Brenner potential". The potential was initially formulated for hydrocarbons in order to model diamond film deposition, but soon proved to be applicable to other scientific problems.
Utilizing this potential, and with a focus on diamond surfaces, Harrison was the lead author on a number of pioneering reports with Brenner and co-workers at NRL while she was at the Naval Academy. These include the first use of molecular dynamics to study atomic-scale friction and adhesion between sliding solids, the first reported simulation of a tribochemical reaction, and the first molecular dynamics study of frictional energy dissipation. Her two most cited works have been cited over 3400 times each:

A second-generation reactive empirical bond order (REBO) potential energy expression for hydrocarbons. DW Brenner, OA Shenderova, JA Harrison, SJ Stuart, B Ni, SB Sinnott, Journal of Physics: Condensed Matter, 2002

A reactive potential for hydrocarbons with intermolecular interactions. SJ Stuart, AB Tutein, JA Harrison, The Journal of Chemical Physics, 2000

Honors and awards

2000 Naval Academy's Research Excellence Award
2000, 2018 Navy Meritorious Civilian Service Award
2011 Fellow of the American Vacuum Society, "For unraveling the complex mechanics of atomic scale friction through modeling and simulations, and developing reactive empirical bond-order potentials that take into account chemical reactions."
2014 Navy Superior Civilian Service Award
2009 George Braude Award
2018 Fellow, Society of Tribologists and Lubrication Engineers
2020 Kinnear endowed Fellow of Chemistry

References

External links

United States Naval Academy faculty University of New Hampshire alumni American women chemists Tribologists Living people Year of birth missing (living people) 20th-century American engineers 20th-century American women engineers 20th-century American chemists 21st-century American engineers 21st-century American women engineers 21st-century American chemists American women academics
Judith Harrison
[ "Materials_science" ]
707
[ "Tribology", "Tribologists" ]
67,245,675
https://en.wikipedia.org/wiki/Indy%20Autonomous%20Challenge
The Indy Autonomous Challenge (IAC) is the main racing series for autonomous race cars and, between July 2023 and April 2024, was the only active one. The vehicles participating in the IAC are SAE level 4 autonomous, as they are capable of completing circuit laps and overtaking maneuvers without any human intervention. Exclusively made up of student/university teams, each team participating in the competition uses the same vehicle, a custom-built Dallara AV-21 single-seater. The AV-21 was derived from Dallara's IL-15 model with the addition of the sensors, actuators and computing hardware necessary for fully autonomous driving. By 2024, the series was using the Dallara AV-24 specification, with the same base Dallara chassis but an entirely re-engineered compute hardware, sensor suite, and software stack. The participating teams are made up of university researchers from universities worldwide, including the Massachusetts Institute of Technology, Carnegie Mellon University, University of Pittsburgh, KAIST, Politecnico di Milano, TUM, ETH Zurich, University of Virginia and Purdue University. The first race took place in October 2021 on the Indianapolis Motor Speedway (IMS), after an initial period of simulator-only challenges, which started in November 2019 as a proving ground allowing competing teams to develop and demonstrate the ability to race autonomously before receiving the physical race car. Since then, the competition has raced on several notable oval circuits, such as Las Vegas Motor Speedway and Texas Motor Speedway, and in June 2023 on its first road course, at the Monza Circuit. Over the four years of on-track IAC competitions, the challenge has advanced to include two competitive events. Beginning in 2021, individual time trials are run by all teams, scored by the fastest lap achieved within five minutes on an oval track.
Later, a multicar event was added: a two-car scripted passing competition, with increasingly higher speeds assigned to the lead car, in which the two cars "keep passing each other like a game of cat and mouse until one of them has to give up, or they have an accident." By 2024, the autonomous racecars were achieving top speeds on oval circuits of and the two-car passing races were achieving successful passes of a fixed-speed vehicle maintaining 160 mph.

Motivation and summary of achievements

As a successor of the DARPA Grand Challenge, the IAC aimed to provide a challenging environment for the development of autonomous vehicles. University teams were invited to develop software for solving the autonomous driving task in the challenging environment of a racetrack, constrained by IAC rules that through 2024 limited the track to only one or two cars at a time and limited the autonomous control approach to only six camera sensors on the vehicle. During the competition, teams used simulation environments and cloud computing to test and prove the maturity of their algorithms. As the IAC race cars were to drive on track at up to 290 km/h (180 mph) with high lateral and longitudinal accelerations, the software needed to plan a path in an adversarial environment and to drive safely and reliably with low computation times. Overall, three main goals were set for the IAC in 2021:

Defining and solving edge case scenarios for autonomous vehicles;
Catalyzing new autonomous driving technologies and innovations;
Engaging the public in the competition to help ensure acceptance.

The efforts of the IAC were initially led by Energy Systems Network, an Indianapolis-based nonprofit. The goal of the IAC was to focus on the development of a full autonomous driving software stack that enabled perception, planning and control on the racetrack.
During its multiple years of operation, the IAC achieved a number of records, beyond the speed records for an autonomous vehicle on every racetrack the competition visited. The autonomous land speed record was set on 28 April 2022 on the Kennedy Space Center runway, where a Dallara AV-21 reached a speed of 309.3 km/h (192.2 mph). The scientific research from the IAC teams has led to several academic publications, mostly on the topics of automatic control, path planning and robotic perception.

History

IAC Simulation Race

In order to qualify for participation in the real championship, the teams first had to show their autonomous driving capabilities in a simulator by completing a series of hackathon challenges of increasing difficulty, starting from a solo lap and simple obstacle avoidance and building up to full 1-to-1 races. The simulation environment provided the teams with a perfect replica of the Indianapolis Motor Speedway and of the Dallara AV-21 racecar. The simulation-only competition peaked with the IAC Simulation Race, which took place on June 30, 2021. It consisted of a qualification round, where the teams had to complete their fastest solo lap, with time penalties for violating the track limits. The teams were then split into two 8-vehicle semifinal races to qualify for the final. In the semifinals and in the final, many vehicles were disqualified for causing collisions with other vehicles, with the race stopped and re-started every time a collision happened. The finishing times of the semifinals were used to determine the starting order of the final race, which concluded with the victory of team PoliMOVE, which started from pole position and defended its place over the 10 laps.
The winning team was awarded US$100,000, while the second-place team (TUM Autonomous Motorsport) received US$50,000.

Indy Autonomous Challenge Simulation Race Final Standings

DSQ = Disqualified DNQ = Did Not Qualify DNF = Did Not Finish

IAC autonomous vehicle races

IAC at the Indianapolis Motor Speedway, 2021

After the simulation race, 9 teams purchased the vehicle and were admitted to the first physical race. Some of the teams participating in the simulation race merged in order to split the financial burden. The Indy Autonomous Challenge at the Indianapolis Motor Speedway (IMS) took place on October 23, 2021. Although the organizers originally intended a full 10-vehicle traditional race, as in the IAC Simulation Race, the scope of the competition was eventually reduced to a time trial event with an obstacle avoidance test. The race, together with its US$1,000,000 prize, was won by TUM Autonomous Motorsport after many teams had to retire from the competition due to crashes. Euroracing took second place on the podium, while PoliMOVE crashed against the wall but, as it had already scored its time, was granted third place.

IAC at the Las Vegas CES, 2022

The next Indy Autonomous Challenge competition took place on January 7, 2022, at the Las Vegas Motor Speedway (LVMS), as the final event of the 2022 edition of the Consumer Electronics Show. The event itself was limited to CES attendees but was live streamed. After the single-vehicle time trials of the Indianapolis event, it was decided to hold another competition with the autonomous race cars, this time with more than one vehicle on the track. To simplify the task for the teams, the "Overtaking Game" format was chosen for the race, in which a defender car had to keep a constant speed while an attacker vehicle had to complete an overtake before the end of the lap.
Once the overtake had been completed, the roles would swap and the defender speed would be increased. The teams had to perform a complete and safe overtake on track in the test days before the event in order to qualify for the race matches, which were held as a tennis-style elimination tournament. To further simplify the environment, the defender was forced to stay on the inside of the turns. After winning the semi-final race against KAIST, PoliMOVE won the 2022 IAC Las Vegas Race by successfully completing an autonomous overtake of TUM Autonomous Motorsport defending at 150 mph (240 km/h). The German team spun out of control after performing an overly aggressive obstacle avoidance manoeuvre while car #5 (PoliMOVE) was completing its overtake.

IAC 2022-2023 Season

The success of the 2022 IAC competition in Las Vegas encouraged the series to expand to new circuits. The Consumer Technology Association renewed the IAC's contract to perform at the 2023 Consumer Electronics Show in Las Vegas. After a summer break and a vehicle refresh, which included an increase of the engine power and new sensors and computing equipment, the cars were brought to Fort Worth, Texas for the next challenge. The Indy Autonomous Challenge at the Texas Motor Speedway (TMS) took place in November 2022. A major change in the rules with respect to the 2022 Las Vegas edition was the increased freedom of the defender vehicle to choose its trajectory, although right-of-way and minimum longitudinal and lateral separation rules were introduced to increase the safety of the competition. The race took place on a cold and wet racetrack after a morning of heavy rain. Similar to what had happened during the Simulation Race, many teams were disqualified for either causing a collision or simply violating the minimum distance between the cars, as their algorithms could not safely handle the increased opponent freedom. Team PoliMOVE won the final race against AI Racing Tech.
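The speed-ladder logic of the Overtaking Game described above can be sketched as a toy model; the actual speed schedule and increments are set by race control, and the function below, with its starting speed and step, is purely hypothetical:

```python
def overtaking_game(max_pass_speed_mph, start_mph=80, step_mph=10):
    """Return the highest defender speed at which a pass is completed.

    Toy model of the Overtaking Game ladder: after each completed pass the
    roles swap and the defender's fixed speed is raised by `step_mph`; the
    match ends when the attacker can no longer pass (capability given here
    by the hypothetical `max_pass_speed_mph`).
    """
    last_completed = None
    speed = start_mph
    while speed <= max_pass_speed_mph:
        last_completed = speed  # pass completed at this defender speed
        speed += step_mph       # defender speed raised for the next round
    return last_completed

# A team able to pass up to 155 mph tops out at the 150 mph rung of a 10 mph ladder
print(overtaking_game(155))
```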
The 2023 Indy Autonomous Challenge at the Las Vegas Motor Speedway took place on January 7, 2023, following the same ruleset as the 2022 TMS race. As the teams' software progressed, there was more advanced vehicle interaction and fewer incidents than at the TMS race. Team PoliMOVE won again, reaching a top speed of around 180 mph (290 km/h) during the event and beating its own speed record from the previous year.

IAC at MIMO 2023

At the 2023 Las Vegas event, the IAC announced its intention of coming to Europe, at the Autodromo Nazionale di Monza in Italy, as part of the 2023 MIMO event. Five IAC teams participated in the event, which took place on June 16–18, 2023. Due to the increased difficulty of running on road courses compared to ovals, and the lack of complete GPS coverage of the track, the event format was once again a single-vehicle time trial competition. Team PoliMOVE scored the fastest lap, while TUM Autonomous Motorsport took second place and TII Unimore Racing (formerly Euroracing) took the lowest step of the podium.

IAC at Las Vegas CES 2024

At CES 2024, the Indy Autonomous Challenge unveiled the IAC AV-24, the next-generation autonomous vehicle platform in the racing series. Teams PoliMOVE, TII Unimore Racing, and AI Racing Tech demonstrated autonomous laps with the AV-24 cars at the Las Vegas Motor Speedway on January 11, 2024. The remaining teams, with the older AV-21 autonomous cars, participated in the autonomous passing challenge. TUM Autonomous Motorsport faced Cavalier Autonomous Racing from the University of Virginia in the final round. The Cavalier race car accelerated while defending in the final round, which is prohibited by the competition rules, so the round ended with TUM Autonomous Motorsport declared the winner and Cavalier Autonomous Racing finishing second.
The first semi-final took place between MIT-PITT-RW and Cavalier Autonomous Racing, with Cavalier Autonomous Racing passing the MIT-PITT-RW car at 143 mph before the round ended. The second semi-final saw TUM Autonomous Motorsport go head-to-head against KAIST, with TUM emerging as the winner.

IAC at the Indianapolis Motor Speedway, 2024

The second IAC race at the Indianapolis Motor Speedway was run on 6 September 2024, three years after the first Indy Autonomous Challenge race on this track. Ten teams of students, almost all graduate students, from 19 engineering schools competed. Even though this was the fourth year of the IAC competition, the races were still restricted to just two competitive events. Individual time trials were run by all teams, with the event scored by the fastest lap achieved in five minutes on the track. The only multicar event was a two-car scripted passing competition, with increasingly higher speeds assigned to the lead car, where the two cars "keep passing each other like a game of cat and mouse until one of them has to give up, or they have an accident." The time trial winner was Cavalier Autonomous Racing from the University of Virginia (car #9). The winner of the passing competition was the Italian team PoliMOVE-MSU (car #5), which passed the speed-limited lead car, briefly achieving a speed 10–20 miles per hour faster. The second-place team in the passing competition was unable to pass car #5 at the 160 mph increment, but had successfully passed at the 155 mph increment.

Indy Autonomous Challenge series results

The results from most of the actual car races, 2021 through 2024, are summarized in the following table.
DSQ = Disqualified
DNA = Did Not Attend
DNF = Did Not Finish
DNQ = Did Not Qualify
DNS = Did Not Start

Dallara AV racecar

For the IAC, a special autonomous race car was developed, initially in 2021 by Clemson University in the Deep Orange project, and the vehicle was presented at CES 2021. The race car is based on a Dallara Indy Lights chassis enhanced with computation hardware, sensors and actuators to support autonomous operation on the racetrack. The vehicle is named the "Dallara AV". The specific model introduced for the initial 2021 IAC challenge was the Dallara AV-21, a rear-wheel-drive car powered by an internal combustion engine with a 6-speed sequential gearbox. To perceive the external environment, the AV-21 vehicle was equipped with six monochrome cameras, four radars, three lidars, and an RTK GPS. The cars are assembled, serviced and maintained by an external company. The development of the physical vehicle was performed in parallel with the simulation challenges and race, in order to allow the teams to develop team-specific software using a simulator without needing the hardware. The teams were required to purchase the race cars in order to take part in the first IAC race at the Indy Motor Speedway in October 2021. By 2024, the IAC racecar specification had changed to the Dallara AV-24, also known as the IAC AV-24, the next-generation autonomous vehicle platform for the IAC racing series. The AV-24 has the same base Dallara AV chassis as the AV-21, but the IAC has entirely re-engineered the compute hardware, sensors, and software system to support autonomous racecar operation.
New equipment includes six Allied Vision Mako G-319C cameras (2064 x 1544 px resolution, 12-bit color depth, 37.5 frames per second), a Luminar Iris 360-degree long-range lidar system, a Continental ARS 548 radar sensor, a New Eagle/IAC custom drive-by-wire system (steer-by-wire and brake-by-wire, including independent actuation of front and rear brakes), a Marelli race control and real-time data interface, and an improved GPS interface. An updated AV-24 simulation tool was released to allow potential competitors to train and test their AI driver without having to buy a physical car and test it in the real world. The previous AV-21 hardware/software platform had suffered from maintenance and troubleshooting issues, especially the fragility of the wiring harnesses, with numerous teams reporting problems. References External links Technological races Motorsport in Indianapolis 2020s in Indianapolis Robotics competitions Self-driving cars Autonomous Auto Racing
Indy Autonomous Challenge
[ "Engineering" ]
3,225
[ "Automotive engineering", "Self-driving cars" ]
67,246,222
https://en.wikipedia.org/wiki/Babler%20oxidation
The Babler oxidation, also known as the Babler-Dauben oxidation, is an organic reaction for the oxidative transposition of tertiary allylic alcohols to enones using pyridinium chlorochromate (PCC): It is named after James Babler, who first reported the reaction in 1976, and William Dauben, who extended the scope to cyclic systems in 1977, thereby significantly increasing its synthetic utility: The reaction produces the desired enone product in high yield (typically >75%), is operationally simple, and does not require air-free techniques or heating. It suffers, however, from the very high toxicity and environmental hazard posed by the hexavalent chromium PCC oxidising reagent. The solvent of choice is usually dry dichloromethane (DCM) or chloroform (CHCl3). The reaction has been utilised as a step in the total syntheses of various compounds, e.g. of morphine. Mechanism The reaction proceeds through the formation of a chromate ester (1) from nucleophilic attack of the chlorochromate by the allylic alcohol. The ester then undergoes a [3,3]-sigmatropic shift to create the isomeric chromate ester (2). Finally, oxidation of this intermediate yields the α,β-unsaturated aldehyde or ketone product (3). Alternative reagents Concerns about the high toxicity and carcinogenicity of the PCC oxidant, as well as the role of chromium(VI) species as environmental pollutants in groundwater, have led to investigations into the replacement of PCC in the reaction. One successful alternative reported by multiple sources involves the use of N-oxoammonium salts derived from TMP: The oxoammonium salts with non-coordinating anions are used (such as tetrafluoroborate, perchlorate, hexafluorophosphate or hexafluoroantimonate). The oxidiser is added in stoichiometric amounts, usually 1.5 equivalents relative to the alcohol.
A different approach to minimising toxic chromium(VI) use involves performing the reaction with only a catalytic amount of PCC and an excess of another oxidant, to re-oxidise the chromium species as part of the catalytic cycle. Commonly reported stoichiometric reagents for this purpose include di-tert-butyl peroxide, 2-iodoxybenzoic acid or periodates. Secondary alcohols The Babler-Dauben oxidation of secondary allylic alcohols proves more difficult to control than that of tertiary analogues, as along with the desired product (a), a mixture with a high proportion of the side products (b) and (c) is obtained: The yield of a is found to be maximised when PCC is used not in stoichiometric quantities but as a co-oxidant; the best effect (50–70% yield of a) is achieved with orthoperiodic acid as the main oxidiser and 5 mol% PCC. Acetonitrile (MeCN) is used as the solvent rather than the usual DCM. Notably, in contrast to the general oxidation of tertiary alcohols, the secondary alcohol case only works with aromatic substrates (Ar-: an aryl group). This, along with the strongly acidic conditions due to the stoichiometric amount of periodic acid, suggests that the initially formed chromate ester isomerises through a carbocationic route rather than a sigmatropic reaction as for tertiary alcohols. See also Oxidation with chromium(VI) complexes Oxoammonium-catalyzed oxidation Other reactions of PCC References Organic oxidation reactions Name reactions
Babler oxidation
[ "Chemistry" ]
803
[ "Name reactions", "Organic oxidation reactions", "Organic reactions" ]
67,247,164
https://en.wikipedia.org/wiki/Miracidium
The miracidium is the second stage in the life cycle of trematodes. When trematode eggs are laid and come into contact with fresh water, they hatch and release miracidia. In this phase, miracidia are ciliated and free-swimming. This stage is completed upon coming in contact with, and entering into, a suitable intermediate host for the purposes of asexual reproduction. Many different species of Trematoda exist, expressing some variation in the physiology and appearance of the miracidia. The various trematode species implement similar strategies to increase their chances of locating and colonizing a new host. Anatomy Hirudinella ventricosa The trematode Hirudinella ventricosa releases eggs in strings. Each egg contains a single miracidium, while the string contains living spermatozoa. Miracidia have cilia that are only present in the upper portion of the body, near an apical gland with 12 hook-like spines in the opening. Echinostoma paraensei Miracidia usually need to enter a molluscan host before they can start growing and begin reproduction; however, certain species can use other animals as intermediate or main hosts. Echinostoma paraensei miracidia have 18 plates along the outside of their body. Even when about to hatch, their eggs show no signs of specialization such as projections or spine-like structures. They have elongated bodies with one intraepidermal ridge in the anterior row. They display a single "excretory vesicle". The miracidia are oval-shaped and their body is almost entirely covered in cilia, except for the most anterior portions, which are taken up by the "apical papilla". The miracidia have four papillae on each side, which contain sensory hairs. They each have an apical gland that leads to the apical papilla. They have four rows of epidermal plates, with row two made up of eight plates, while the other three rows each have six. Their eyespots are dark brown and shaped like an inverted capital letter L, located between the first and second rows of plates.
A single "large cephalic ganglion", along with several smaller nuclei, makes up the nervous system. Physiology Miracidia do not feed. Their sole purpose is to locate and colonize a host. The ability and efficiency of miracidia in finding a host is a crucial factor in the growth and success of later life stages. Schistosome miracidia follow a three-phase process when searching for a host. In phase one, the miracidia use light and gravitational stimuli to concentrate in areas that are likely attractive to snail hosts. The second phase consists of moving around randomly. In phase three, miracidia begin approaching their host target and preparing to penetrate it while responding to chemical stimuli. Chemosensitivity plays a large role in the search for a host, but it is not specific enough to single out only those species that are suitable hosts. Carbohydrates along the surface of the miracidia interact with the lectins produced by gastropods. The organization and number of these carbohydrates shift as the miracidia begin their transition to the next step in their development. Certain carbohydrates are bound all over the body of the sporocyst stage but have only been found to be present on the "intercellular ridges" of the miracidia. Three glands within the apical papilla assist them in this process. They use glandular secretions that collect in an indented area of the papilla as a means both of sticking to the host they are attempting to invade and of breaking down the cells on the outside of the host organism to gain entry into it. As the miracidium develops, germ cells begin to form and then replicate into germ balls. Each of the germ balls grows and eventually contributes to the next asexual generation. The miracidium itself can differentiate into a replicative primary sporocyst as it sheds its epidermal plates within the snail intermediate host. Trematodes may have varying numbers of asexual generations and larval forms, but share a cercarial stage.
References Trematoda Reproduction in animals
Miracidium
[ "Biology" ]
873
[ "Reproduction in animals", "Behavior", "Reproduction" ]
49,192,307
https://en.wikipedia.org/wiki/Ofill%20Echevarria
Ofill Echevarria (born 1972 in Havana, Cuba) is a painter and multimedia artist based in New York City. Ofill graduated from the Elementary School of Fine Arts '20 de Octubre' in 1986 and from the San Alejandro Academy of Fine Arts in 1991. As a founding member of the controversial Havana art collective Arte Calle (1986-1988), he became part of the Cuban art scene of the late eighties. In 1991 he traveled to Mexico City to pursue an art scholarship, and lived there for ten years. In 2002, represented by the renowned Praxis International Art - Mexico, which later became the Alfredo Ginocchio Gallery, both in Mexico City, Ofill moved to Miami. In 2005 he moved to New York City, where he currently lives and works. Since 2001, Ofill has exhibited his work widely throughout Latin America, the United States of America and Europe, both individually and in many international art fairs and group shows. His works are part of major public and private collections, among which are: Museo Nacional de Bellas Artes de La Habana; Museo Nacional de Arte de Mexico; Museum of Latin American Art; Frederick R. Weisman Art Foundation; Carnegie Art Museum (Oxnard, California); American Museum of the Cuban Diaspora. In 2013 he launched his book 'El Mundo de Los Vivos I The Real World', which contains several essays by recognized American and Cuban art experts and includes artworks from 2001 to 2012. Work Ofill Echevarria's fascination with motion in cityscapes has its origins in the period when the artist was still living in Mexico City, where he developed a series of oil paintings on canvas about urban life that was later exhibited at the Multicultural Center of the URI, Kingston.
In 2002 a more specific series of paintings about life in the city and its inhabitants, with a focus on business people, was exhibited in his show 'Iconos / Reflections'. In 'City Escapes' (2004) this iconography of stress becomes sharper, with pictures such as 'Soñar Is Forbidden' or 'Ritual de Identidad / The Lost Identity'; "works that bordered on abstraction, and whose titles, often bilingual, alluded in a parallel way to another multitudinous movement: that of the human masses coming from the south to insert themselves in the megalopolises of the north". Ofill Echevarria's style and technique draw from the traditions of photography, documentary film and painting. Additionally, particularly in those works where motion matters, the artist has been identified as an exponent of the wet-on-wet painting technique. In 2013, Ofill returned to Mexico with 'Momentum', an exhibition in which "the pace of city life could only be trapped by the fragments that constitute its temporality through a diligent observation". A more abstract series of paintings on urban life "defined by brushwork" was unveiled at the Gabarron Foundation New York in September of the same year. The show included several Pictures-In-Motion about the city of New York, a project that has been part of the artist's pictorial exhibitions since 2011, although it was officially presented throughout 2013. References External links Artist's Website The Gabarron Foundation New York Un Gyve Limited Living people 1972 births Cuban painters Cuban contemporary artists Multimedia artists Artists from Havana
Ofill Echevarria
[ "Technology" ]
685
[ "Multimedia", "Multimedia artists" ]
49,192,676
https://en.wikipedia.org/wiki/WavePad%20Audio%20Editor
WavePad Audio Editor Software is a multi-platform digital audio editor and recorder. It supports VST and integrates a stock audio library.

Features

The primary functions and tools of WavePad are:
Sound editing functions: cut, copy, paste, delete, insert, silence, auto-trim and more
Audio effects: amplify, normalize, equalize, envelope, reverb, echo, reverse and many more, with VST plugin compatibility
Batch processing allows users to apply effects and/or convert thousands of files as a single function
Scrub, search, and bookmark audio to find, recall and assemble segments of audio files
Spectral analysis (FFT), speech synthesis (text-to-speech), and voice changer
Audio restoration tools including noise reduction and click/pop removal
Supports sample rates from 6 to 96 kHz, stereo or mono, 8, 16, 24 or 32 bits
Remove vocals from music tracks
Create ready-to-use ringtones for mobile phones

Controversy

Previously, WavePad and other NCH products came bundled with optional browser plugins like the Ask and Chrome toolbars, which sparked complaints from users and triggered malware warnings from antivirus software companies like Norton and McAfee. NCH has since unbundled all toolbars in all program versions released after July 2015. See also Comparison of digital audio editors Audacity (audio editor) References External links Official Site Audio editors Multimedia software C++ software Proprietary software Windows multimedia software MacOS multimedia software
WavePad Audio Editor
[ "Technology" ]
301
[ "Multimedia", "Multimedia software" ]
49,194,353
https://en.wikipedia.org/wiki/Sarcodon%20thwaitesii
Sarcodon thwaitesii is a species of tooth fungus in the family Bankeraceae. It is found in Asia, Europe, and New Zealand, where it fruits on the ground in mixed forest. Taxonomy The fungus was first described in 1873 by Miles Berkeley and Christopher Edmund Broome as Hydnum thwaitesii, from collections made in Sri Lanka. Paul Christoph Hennings moved it to the now-defunct genus Phaeodon in 1898. Dutch mycologist Rudolph Arnold Maas Geesteranus transferred it to the genus Sarcodon in 1964, noting "To judge from the hyphal structure and the spore characters, this is a true Sarcodon". Gordon Herriot Cunningham's species Hydnum carbonarium, described from New Zealand in 1958, is a synonym of S. thwaitesii. The specific epithet thwaitesii honors English botanist and entomologist George Henry Kendrick Thwaites, who was superintendent of the botanical gardens at Peradeniya, Sri Lanka. Maas Geesteranus placed S. thwaitesii in the section Virescentes, along with S. atroviridis and S. conchyliatus. In all of these species, the flesh dries to a deep olive green color. Description The fruit bodies of Sarcodon thwaitesii have flattened, depressed, or rounded caps. Initially pale pink in color, they change to pale reddish-brown, and ultimately to blackish-brown. The flesh, roughly the same color as the cap, has a bitter taste. The stipe is centrally attached to the cap. The spines on the cap underside are at first purple or purple-brown, drying to blackish brown in age, and measure 2–4 mm long. Spores are brown in mass; microscopically, they are roughly spherical, covered with moderately sized growths (tubercules), and measure 6–8 by 6–7 μm. Habitat and distribution Sarcodon thwaitesii fruits on the ground in mixed forest. It is found in Asia, Europe, and New Zealand.
References External links Fungi described in 1873 Fungi of Asia Fungi of Europe Fungi of New Zealand thwaitesii Taxa named by Miles Joseph Berkeley Taxa named by Christopher Edmund Broome Fungus species
Sarcodon thwaitesii
[ "Biology" ]
473
[ "Fungi", "Fungus species" ]
49,197,559
https://en.wikipedia.org/wiki/Galilean%20electromagnetism
Galilean electromagnetism is a formal electromagnetic field theory that is consistent with Galilean invariance. Galilean electromagnetism is useful for describing the electric and magnetic fields in the vicinity of charged bodies moving at non-relativistic speeds relative to the frame of reference. The resulting mathematical equations are simpler than the fully relativistic forms because certain coupling terms are neglected. In electrical networks, Galilean electromagnetism provides possible tools to derive the equations used in low-frequency approximations in order to quantify the current crossing a capacitor or the voltage induced in a coil. As such, Galilean electromagnetism can be used to regroup and explain the somewhat dynamic but non-relativistic quasistatic approximations of Maxwell's equations. Overview In 1905 Albert Einstein made use of the non-Galilean character of Maxwell's equations to develop his theory of special relativity. The special property embedded in Maxwell's equations is known as Lorentz invariance. In the framework of Maxwell's equations, assuming that the speed of moving charges is small compared to the speed of light, it is possible to derive approximations that fulfill Galilean invariance. This approach enables the rigorous definition of two main mutually exclusive limits known as quasi-electrostatics (electrostatics with displacement currents or ohmic currents) and quasi-magnetostatics (magnetostatics with an electric field caused by variation of the magnetic field according to Faraday's law, or by ohmic currents). Quasi-static approximations are often poorly introduced in the literature, as stated for instance in Hermann A. Haus and James R. Melcher's book. They are often presented as a single approximation, whereas Galilean electromagnetism shows that the two regimes are in general mutually exclusive.
According to Germain Rousseaux, the existence of these two exclusive limits explains why electromagnetism has long been thought to be incompatible with Galilean transformations. However, Galilean transformations applying in both cases (the magnetic limit and the electric limit) were known by engineers before the topic was discussed by Jean-Marc Lévy-Leblond. These transformations are found in H. H. Woodson and Melcher's 1968 book. If the transit time of the electromagnetic wave passing through the system is much less than a typical time scale of the system, then the Maxwell equations can be reduced to one of the Galilean limits. For instance, for dielectric liquids it is quasi-electrostatics, and for highly conducting liquids quasi-magnetostatics. History Electromagnetism followed a reverse path compared to mechanics. In mechanics, the laws were first derived by Isaac Newton in their Galilean form. They had to wait for Albert Einstein and his special relativity theory to take a relativistic form. Einstein then allowed a generalization of Newton's laws of motion to describe the trajectories of bodies moving at relativistic speeds. In the electromagnetic frame, James Clerk Maxwell directly derived the equations in their relativistic form, although this property had to wait for Hendrik Lorentz and Einstein to be discovered. As late as 1963, Edward Mills Purcell's Electricity and Magnetism offered low-velocity transformations as suitable for calculating the electric field experienced by a jet plane travelling in the Earth's magnetic field. In 1973 Michel Le Bellac and Jean-Marc Lévy-Leblond stated that these equations are incorrect or misleading because they do not correspond to any consistent Galilean limit.
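In SI units, the transformations at issue can be sketched as follows (a standard textbook rendering of these limits, not a verbatim quotation of the sources above; Purcell's own text uses Gaussian units). The naive low-velocity pair keeps both field-coupling terms at once, whereas the two consistent Galilean limits each keep only one:

```latex
% Naive low-velocity pair (criticised by Le Bellac and Lévy-Leblond):
\mathbf{E}' = \mathbf{E} + \mathbf{v}\times\mathbf{B}, \qquad
\mathbf{B}' = \mathbf{B} - \frac{1}{c^{2}}\,\mathbf{v}\times\mathbf{E}

% Magnetic limit (quasi-magnetostatics, magnetic effects dominant):
\mathbf{E}' = \mathbf{E} + \mathbf{v}\times\mathbf{B}, \qquad
\mathbf{B}' = \mathbf{B}

% Electric limit (quasi-electrostatics, electric effects dominant):
\mathbf{E}' = \mathbf{E}, \qquad
\mathbf{B}' = \mathbf{B} - \frac{1}{c^{2}}\,\mathbf{v}\times\mathbf{E}
```

Composing the naive pair for two successive boosts does not reproduce a single boost at the combined velocity, which is the inconsistency at issue, while each of the two limits composes consistently on its own.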
Germain Rousseaux gives a simple example showing that a transformation from an initial inertial frame to a second frame moving with speed v0 with respect to the first, and then to a third frame moving with speed v1 with respect to the second, would give a result different from going directly from the first frame to the third using a relative speed of (v0 + v1). Le Bellac and Lévy-Leblond offer two transformations that do have consistent Galilean limits: The electric limit applies when electric field effects are dominant, such as when Faraday's law of induction is insignificant. The magnetic limit applies when magnetic field effects are dominant. John David Jackson's Classical Electrodynamics introduces a Galilean transformation for Faraday's equation and gives an example of a quasi-electrostatic case that also fulfills a Galilean transformation. Jackson states that the wave equation is not invariant under Galilean transformations. In 2013, Rousseaux published a review and summary of Galilean electromagnetism. Further reading Electromagnetism Galilean invariance Lorentz invariance Principle of relativity Quasistatic approximation Electrostatics Magnetostatics Lévy-Leblond equation Notes References External links Example of Galilean invariance applied to Faraday's law Electromagnetism Electrodynamics
Galilean electromagnetism
[ "Physics", "Mathematics" ]
1,008
[ "Electromagnetism", "Physical phenomena", "Fundamental interactions", "Electrodynamics", "Dynamical systems" ]
49,200,527
https://en.wikipedia.org/wiki/Mxparser
mXparser is an open-source mathematical expression parser/evaluator that calculates expressions supplied at run time. Expression definitions are given as plain text, verified for grammar/syntax, and finally evaluated. The library source code is maintained separately for Java and C#, providing the same API for Java/JVM, Android, .NET and Mono (Common Language Specification compliant).

Main features / usage examples

mXparser delivers functionalities such as: basic calculations, implied multiplication, built-in constants and functions, numerical calculus operations, iterated operators, user-defined constants, user-defined functions, user-defined recursion, and Unicode mathematical symbols support.

Basic operators

mXparser supports basic operators such as: addition '+', subtraction '-', multiplication '*', division '/', factorial '!', power '^', modulo '#'.

Expression e = new Expression("2+3/(4+5)^4");
double v = e.calculate();

Implied multiplication

Expression e = new Expression("2(3+4)3");
double v = e.calculate();

Expression e = new Expression("2pi(3+4)2sin(3)e");
double v = e.calculate();

Binary relations

It is possible to combine typical expressions with binary relations (such as: greater than '>', less than '<', equality '=', inequality '<>', greater or equal '>=', lower or equal '<='), as each relation evaluates to either '1' for a true outcome or '0' for false.

Expression e = new Expression("(2<3)+5");
double v = e.calculate();

Boolean logic

Boolean logic also operates assuming the equivalence of '1 as true' and '0 as false'. Supported Boolean operators include: AND conjunction, OR disjunction, NAND Sheffer stroke, NOR, XOR exclusive or, IMP implication, CIMP converse implication, NIMP material nonimplication, CNIMP converse nonimplication, EQV logical biconditional, and negation.
Expression e = new Expression("1 --> 0");
double v = e.calculate();

Built-in mathematical functions

Supported common mathematical functions (unary, binary and with a variable number of arguments) include: trigonometric functions, inverse trigonometric functions, logarithm functions, the exponential function, hyperbolic functions, inverse hyperbolic functions, Bell numbers, Lucas numbers, Stirling numbers, the prime-counting function, the exponential integral function, the logarithmic integral function, the offset logarithmic integral, the binomial coefficient and others.

Expression e = new Expression("sin(0)+ln(2)+log(3,9)");
double v = e.calculate();

Expression e = new Expression("min(1,2,3,4)+gcd(1000,100,10)");
double v = e.calculate();

Expression e = new Expression("if(2<1, 3, 4)");
double v = e.calculate();

Expression e = new Expression("iff(2<1, 1; 3<4, 2; 10<2, 3; 5<10, 4)");
double v = e.calculate();

Built-in math constants

Built-in mathematical constants, with high precision.

Expression e = new Expression("sin(pi)+ln(e)");
double v = e.calculate();

Iterated operators

Iterated summation and product operators.

Expression e = new Expression("sum(i, 1, 10, ln(i))");
double v = e.calculate();

Expression e = new Expression("prod(i, 1, 10, sin(i))");
double v = e.calculate();

Numerical differentiation and integration

mXparser delivers implementations of the following calculus operations: differentiation and integration.
Expression e = new Expression("der( sin(x), x )");
double v = e.calculate();

Expression e = new Expression("int( sqrt(1-x^2), x, -1, 1)");
double v = e.calculate();

Prime numbers support

Expression e = new Expression("ispr(21)");
double v = e.calculate();

Expression e = new Expression("Pi(1000)");
double v = e.calculate();

Unicode mathematical symbols support

Expression e = new Expression("√2");
double v = e.calculate();

Expression e = new Expression("∜16 + ∛27 + √16");
double v = e.calculate();

Expression e = new Expression("∑(i, 1, 5, i^2)");
double v = e.calculate();

Elements defined by user

The library provides an API for the creation of user-defined objects, such as: constants, arguments, functions.

User-defined constants

Constant t = new Constant("t = 2*pi");
Expression e = new Expression("sin(t)", t);
double v = e.calculate();

User-defined arguments

Argument x = new Argument("x = 5");
Argument y = new Argument("y = 2*x", x);
Expression e = new Expression("sin(x)+y", x, y);
double v = e.calculate();

User-defined functions

Function f = new Function("f(x, y) = sin(x)+cos(y)");
Expression e = new Expression("f(1,2)", f);
double v = e.calculate();

User-defined variadic functions

Function f = new Function("f(...) = sum( i, 1, [npar], par(i) )");
Expression e = new Expression("f(1,2,3,4)", f);
double v = e.calculate();

User-defined recursion

Function fib = new Function("fib(n) = iff( n>1, fib(n-1)+fib(n-2); n=1, 1; n=0, 0 )");
Expression e = new Expression("fib(10)", fib);
double v = e.calculate();

Requirements

Java: JDK 1.5 or higher
.NET/Mono: framework 2.0 or higher

Documentation

Tutorial
Javadoc API specification

mXparser - source code

Source code is maintained and shared on GitHub.
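The tokenize/verify/evaluate pipeline that mXparser implements for plain-text expressions can be illustrated with a minimal recursive-descent evaluator. This is an illustrative sketch in Python, not mXparser's actual implementation; it covers only numbers, '+', '-', '*', '/', right-associative '^' and parentheses:

```python
# Minimal recursive-descent expression evaluator: tokenize the plain-text
# input, parse it against a small grammar (raising SyntaxError on bad
# input), and evaluate as the parse proceeds.
import re

TOKEN = re.compile(r"\s*(?:(\d+\.?\d*)|(.))")

def tokenize(text):
    tokens = []
    for number, op in TOKEN.findall(text):
        if number:
            tokens.append(float(number))
        elif op.strip():  # skip whitespace matched by the catch-all group
            if op not in "+-*/^()":
                raise SyntaxError(f"unexpected character: {op!r}")
            tokens.append(op)
    tokens.append(None)  # end-of-input marker
    return tokens

class Parser:
    def __init__(self, tokens):
        self.tokens, self.pos = tokens, 0

    def peek(self):
        return self.tokens[self.pos]

    def next(self):
        tok = self.tokens[self.pos]
        self.pos += 1
        return tok

    def expr(self):  # expr := term (('+'|'-') term)*
        value = self.term()
        while self.peek() in ("+", "-"):
            value = value + self.term() if self.next() == "+" else value - self.term()
        return value

    def term(self):  # term := factor (('*'|'/') factor)*
        value = self.factor()
        while self.peek() in ("*", "/"):
            value = value * self.factor() if self.next() == "*" else value / self.factor()
        return value

    def factor(self):  # factor := base ('^' factor)?  -- '^' right-associative
        value = self.base()
        if self.peek() == "^":
            self.next()
            value **= self.factor()
        return value

    def base(self):  # base := number | '(' expr ')' | '-' base
        tok = self.next()
        if isinstance(tok, float):
            return tok
        if tok == "(":
            value = self.expr()
            if self.next() != ")":
                raise SyntaxError("missing closing parenthesis")
            return value
        if tok == "-":
            return -self.base()
        raise SyntaxError(f"unexpected token: {tok!r}")

def calculate(text):
    parser = Parser(tokenize(text))
    value = parser.expr()
    if parser.peek() is not None:
        raise SyntaxError("trailing input")
    return value
```

For example, calculate("2+3/(4+5)^4") evaluates the first mXparser expression shown above, and calculate("2^3^2") returns 512.0 because '^' associates to the right.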
See also List of numerical libraries List of numerical analysis software Mathematical software Exp4j References External links MathParser.org mXparser on NuGet mXparser on Apache Maven Scalar powered by mXparser ScalarMath.org powered by mXparser Free mathematics software Parsing 2010 software Free software programmed in Java (programming language) Free software programmed in C Sharp Software using the BSD license Free mobile software Software that uses Mono (software) Free and open-source Android software .NET Framework software Computer algebra systems
Mxparser
[ "Mathematics" ]
1,575
[ "Computer algebra systems", "Free mathematics software", "Mathematical software" ]
49,200,995
https://en.wikipedia.org/wiki/NGC%204490
NGC 4490, also known as the Cocoon Galaxy, is a barred spiral galaxy in the constellation Canes Venatici. William Herschel discovered it in 1788. It is known to be one of the closest interacting/merging galactic systems. The galaxy lies at a distance of 25 million light years from Earth, placing it in the local universe. It interacts with its smaller companion NGC 4485 and as a result is a starburst galaxy. NGC 4490 and NGC 4485 are collectively known in the Atlas of Peculiar Galaxies as Arp 269. The two galaxies have already made their closest approach and are rushing away from each other. NGC 4490 has also been found to have a double nucleus. NGC 4490 is located 3/4° northwest of Beta Canum Venaticorum and, with an apparent visual magnitude of 9.8, can be observed with 15x100 binoculars. It is a member of the Herschel 400 Catalogue. It belongs to the Canes II Group. NGC 4490 has a system of satellite galaxies oriented roughly in a plane. Stellar stream A stellar stream 25,000 light years long connects the two interacting galaxies. The stellar stream is made of bright knots and large gas-rich pockets. Young, hot, massive blue stars form in this region. Supernovae Two supernovae have been observed in NGC 4490: SN 1982F (type unknown, mag. 16) was discovered by Paul Wild on 15 April 1982. SN 2008ax (type II, mag. 13) was discovered by the Lick Observatory Supernova Search (LOSS) on 3 March 2008 and by Kōichi Itagaki on 4 March 2008. Gallery See also Barred spiral galaxy Canes Venatici (constellation) References External links SEDS Canes II Group Canes Venatici Barred spiral galaxies Starburst galaxies 4490 07651 41333 Discoveries by William Herschel 269
NGC 4490
[ "Astronomy" ]
382
[ "Canes Venatici", "Constellations" ]
49,201,597
https://en.wikipedia.org/wiki/Aliso%20Canyon%20Oil%20Field
The Aliso Canyon Oil Field (also Aliso Canyon Natural Gas Storage Field, Aliso Canyon Underground Storage Facility) is an oil field and natural gas storage facility in the Santa Susana Mountains in Los Angeles County, California, north of the Porter Ranch neighborhood of the City of Los Angeles. Discovered in 1938 and quickly developed afterward, the field peaked as an oil producer in the 1950s, but has remained active since its discovery. One of its depleted oil and gas producing formations, the Sesnon-Frew zone, was converted into a gas storage reservoir in 1973 by the Southern California Gas Company, the gas utility servicing the southern half of California. This reservoir is the second-largest natural gas storage site in the western United States, with a capacity of over 86 billion cubic feet of natural gas. Currently it is one of four gas storage facilities owned by Southern California Gas, the others being the La Goleta Gas Field west of Santa Barbara, Honor Rancho near Newhall, and Playa del Rey. Oil production on the field continues from 32 active wells as of 2016. The gas storage reservoir is accessed through 115 gas injection wells, along with approximately 38 miles of pipeline internal to the field. Three operators were active on the field: Southern California Gas Company, The Termo Company, and Crimson Resource Management Corp. Geographic setting The field is on the southern slope of the Santa Susana Mountains, an east-west trending range dividing the San Fernando Valley on the south from the Santa Clarita Valley on the north-northeast. With some of its productive wells set at an elevation over 3,000 feet, it is one of the highest and most rugged oil fields in California. The main entrance to the oil field is on Limekiln Canyon Trail where it intersects Sesnon Boulevard. Vehicles must pass a guard station and locked gate to enter. 
Land uses in the vicinity of the field include industrial (for the oil and gas field itself), open space, parkland, and residential to the south. Areas to the west, north, and east in the Santa Susana Mountains have been identified as Significant Ecological Areas. The Michael D. Antonovich Open Space Preserve abuts the field on the northeast, and numerous parks in Porter Ranch are adjacent on the south. Since the field is on the south slope of the Santa Susana Mountains, drainage is to the south into the San Fernando Valley, with runoff into Mormon Canyon, Limekiln Canyon, and Aliso Canyon, all of which flow into the Los Angeles River; the river then flows south through the Los Angeles Basin and out to the ocean at Long Beach. Vegetation on the field includes a mix of native habitat types, including oak woodlands and Venturan sage scrub, as well as non-native grassland, with many disturbed areas around roads and drilling and production pads. Climate in the area is Mediterranean, with warm, almost rainless summers, and mild and rainy winters. Snow is rare although it can fall at the higher elevations. Wildfires are common, particularly in the summer and fall, and some of the storage field was burned over in the 14,000-acre Sesnon Fire of October 2008. 
The uppermost oil-bearing stratigraphic unit is the Pliocene-age Pico Formation, which contains the Aliso, Porter, and Upper, Middle, and Lower Del Aliso zones, from top to bottom, ranging in depth from about 4,500 to 8,000 feet. Underneath the Pico is the Middle Miocene Modelo Formation, and beneath that, bounded by an unconformity, the Eocene-age Llajas Formation. Since these units are both permeable and in direct contact, they form a single productive zone, the Sesnon-Frew, the largest of the field's zones and the one used by SoCalGas for gas storage. This unit has an average depth of about 9,000 feet, and averages about 160 feet thick. Beneath the Sesnon-Frew are marine sediments of Cretaceous age, not known to contain oil, and below that crystalline basement rocks of Cretaceous age or older. History and production The Santa Susana Mountains are one of several anticlinal formations within the Ventura Basin, and as such have long been of interest to those looking for oil. The oldest oil well in California, and the oldest commercially viable oil well in the western United States, was at the Pico Canyon Area of the Newhall Oil Field less than five miles northwest of the Aliso field boundary, also in the Santa Susana Mountains. J. Paul Getty's Tidewater Associated Oil Company drilled the discovery well for the Aliso field in 1938, finding oil in the Porter zone, 5,393 feet below ground surface. Other producing zones were discovered not long after, including Del Aliso zone in 1938, and the Sesnon-Frew zone in 1940. Several companies operated the field in the early years, including Tidewater, Standard Oil of California, Porter Sesnon et al., Porter Oil Co., Carlton Beal and Associates, and M.L. Orcutt. By the middle of 1959, there were 118 producing wells on the entire field, and over 32 million barrels of oil had been withdrawn. 
Early in production, the Sesnon-Frew zone had been identified as having a strong gas cap, with some wells completed in gas-only portions of the reservoir needing to be deepened. The overproduction of gas led to accusations of waste, and litigation commenced, with Standard Oil and Tidewater accusing Carlton Beal of wasting gas (lacking a modern pipeline transport system, natural gas at this time was not always retained for use – it was commonly flared or simply vented to the atmosphere). The State Oil and Gas Supervisor ruled in favor of Standard and Tidewater and limited production on the Sesnon pool to reduce the waste. One enhanced recovery technique, waterflooding, was used on the field, beginning in 1976. The Del Aliso zone was produced this way as conventional oil production began to decline. In this method, water pumped up with oil is disposed of by being pumped back into the same formation from which it came, restoring reservoir pressure and pushing the remaining reservoir fluid toward other recovery wells, even though it becomes more and more diluted with time. Suburban developments of the San Fernando Valley began approaching the field after it had already been fully developed, with some of the first residential housing in Porter Ranch appearing in the 1960s, but the main buildout started in the 1970s. Development continued into the first decade of the 21st century, expanding into the foothills right up to the SoCalGas property line. Many of these projects were master-planned developments, including gated communities, in one of the San Fernando Valley's most affluent areas. Conversion to gas storage By the early 1970s the Sesnon zone was depleted of oil. As it was an enormous and structurally sound reservoir, with an average depth of about 9,000 feet, and centrally located in the distribution area of Pacific Lighting (an ancestor of Southern California Gas Company), it was ideal to use as a storage reservoir for gas for the local utility. 
Pacific Lighting bought rights to that portion of the field from Getty's Tidewater, and worked over the old oil production wells, many dating from the 1940s and 1950s, to turn them into gas injection wells. The Aliso Canyon Natural Gas Storage Facility, as this repurposed part of the oil field became known, became the largest gas storage reservoir owned by SoCalGas and the second largest in the western United States. Storage fields such as the four maintained by SoCalGas are necessary to balance the load between summer and winter months; gas can be withdrawn during the winter, when it is in high demand, and injected back into the reservoir during the warmer months. Aliso Canyon was ideally placed near the center of SoCalGas's service region, and connected to the system by an extensive pipeline network. In 2009 SoCalGas proposed an expansion and upgrade of the storage facility involving replacement of the obsolete gas turbine compressors with more up-to-date electric versions. This project would increase the gas injection capacity of the site from 300 million to 450 million cubic feet per day, and remove the compressors which had been installed in 1971 when the storage facility was first being developed. It would also move guard houses and some other structures, build a substation on the field, and upgrade various transmission and telecommunications lines. After environmental review through draft and final Environmental Impact Reports as required by the California Environmental Quality Act (CEQA), the project was approved and construction began in 2014. Gas wells on the site are old, and have required considerable maintenance in recent years. Of 229 storage wells on the site, half were more than 57 years old as of July 2014. Casing, tubing, and wellhead leaks have occurred in recent years. For example, in 2013, two wells were found with casing leaks, four with tubing leaks, and two with leaks at the wellhead. 
In 2008, one well – "Porter 50A" – was found to have a gas pressure of 400 pounds per square inch on the surface annulus, an indication of a serious underground leak and potential safety hazard; this well was immediately removed from service, and on investigation corrosion was discovered along a 600-foot stretch of the production casing, ending more than 1,000 feet below ground surface. SoCalGas designed a Storage Integrity Management Program to address these deficiencies, along with a budget, and presented it to the State Public Utility Commission in 2014. Two other oil companies continue to operate on the field, outside of the SoCalGas facility boundary: The Termo Company and Crimson Resource Management Corp. These companies produce oil from other, shallower zones than the Sesnon-Frew zone that SoCalGas uses for gas storage. The Termo Company proposed an expansion of their operation, adding another 12 wells to the 15 they already had at the end of 2015, but put their plans on hold after the methane gas eruption from SoCalGas well Standard Sesnon 25 that began on October 23, 2015. 2015–2016 methane gas blowout A dramatic break somewhere along the length of an 8,750-foot injection well casing resulted in a gigantic methane eruption from the field on October 23, 2015, allowing the escape of around 60 million cubic feet of methane per day at first, before the pressure was reduced. The well, Standard Sesnon 25 ("SS 25"), had originally been installed in 1953, and reworked as a gas injection well in 1973, but lacked a blowout prevention valve, as it had not been considered a priority given the well's position, at the time, far from a populated area. Fallout from the methane cloud, in the form of oily droplets and persistent noxious odors, caused the evacuation of over 6,000 families, who relocated to hotels and other rentals at SoCalGas's expense throughout the region. Another 10,000 homes received air-purification systems at the company's expense. On Dec. 
4, 2015, SoCalGas commenced drilling a relief well to stop the natural gas blowout by plugging the damaged well at its base. The relief well intercepted the base of the well on Feb. 11, 2016, and the company began pumping heavy fluids to temporarily control the flow of gas out of the well. SoCalGas was able to plug the leak permanently on February 18, 2016. Overall the well is estimated to have released over 100,000 metric tons of natural gas, the largest such release in U.S. history. In March 2016, Termo Company was fined $75,000 for piping in methane emissions from another natural gas leak in what the Division of Oil, Gas, and Geothermal Resources called "brazen and intentional violations of state law". Governor of California Jerry Brown issued an executive order banning natural gas injection until all of the wells were thoroughly tested for corrosion and leaks. On May 10, 2016, Governor Brown signed Senate Bill 380, introduced in the California State Senate by Fran Pavley, into law. The bill extended the moratorium on gas injection and required the state to consider permanently shutting down the gas storage facility. References Environment of Greater Los Angeles Natural gas fields in the United States Natural gas storage Oil fields in Los Angeles County, California Petroleum in California Geography of Los Angeles County, California Porter Ranch, Los Angeles San Fernando Valley Santa Susana Mountains Economy of Los Angeles Sempra Energy
Aliso Canyon Oil Field
[ "Chemistry" ]
2,667
[ "Natural gas storage", "Natural gas technology" ]
49,202,119
https://en.wikipedia.org/wiki/Diffusion%20bonding
Diffusion bonding or diffusion welding is a solid-state welding technique used in metalworking, capable of joining similar and dissimilar metals. It operates on the principle of solid-state diffusion, wherein the atoms of two solid, metallic surfaces intersperse themselves over time. This is typically accomplished at an elevated temperature, approximately 50-75% of the absolute melting temperature of the materials. A weak bond can also be achieved at room temperature. Diffusion bonding is usually implemented by applying high pressure, in conjunction with necessarily high temperature, to the materials to be welded; the technique is most commonly used to weld "sandwiches" of alternating layers of thin metal foil, and metal wires or filaments. Currently, the diffusion bonding method is widely used in the joining of high-strength and refractory metals within the aerospace and nuclear industries. History The act of diffusion welding is centuries old. This can be found in the form of "gold-filled," a technique used to bond gold and copper for use in jewelry and other applications. In order to create filled gold, smiths would begin by hammering out an amount of solid gold into a thin sheet of gold foil. This film was then placed on top of a copper substrate and weighted down. Finally, using a process known as "hot-pressure welding" or HPW, the weight/copper/gold-film assembly was placed inside an oven and heated until the gold film was sufficiently bonded to the copper substrate. Modern methods were described by the Soviet scientist N.F. Kazakov in 1953. Characteristics Diffusion bonding involves no liquid fusion, and often no filler metal. No weight is added to the total, and the join tends to exhibit both the strength and temperature resistance of the base metal(s). The materials endure no, or very little, plastic deformation. Very little residual stress is introduced, and there is no contamination from the bonding process. 
It may theoretically be performed on a join surface of any size with no increase in processing time; practically speaking, however, the surface tends to be limited by the pressure required and physical limitations. Diffusion bonding may be performed with similar and dissimilar metals, reactive and refractory metals, or pieces of varying thicknesses. Due to its relatively high cost, diffusion bonding is most often used for jobs either difficult or impossible to weld by other means. Examples include welding materials normally impossible to join via liquid fusion, such as zirconium and beryllium; materials with very high melting points such as tungsten; alternating layers of different metals which must retain strength at high temperatures; and very thin, honeycombed metal foil structures. Titanium alloys will often be diffusion bonded, as the thin oxide layer can be dissolved and diffused away from the bonding surfaces at temperatures over 850 °C. Temperature Dependence Steady state diffusion is determined by the amount of diffusion flux that passes through the cross-sectional area of the mating surfaces. Fick's first law of diffusion states: J = −D(dC/dx), where J is the diffusion flux, D is a diffusion coefficient, and dC/dx is the concentration gradient through the materials in question. The negative sign reflects the direction of the gradient. Another form of Fick's law states: J = M/(A·t), where M is defined as either the mass or amount of atoms being diffused, A is the cross-sectional area, and t is the time required. Equating the two expressions and rearranging, we achieve the following result: t = −M/(A·D·(dC/dx)). As mass and area are constant for a given joint, the time required is largely dependent on the concentration gradient, which changes by only incremental amounts through the joint, and the diffusion coefficient. 
The diffusion coefficient is determined by the equation: D = D0 exp(−Qd/(RT)), where Qd is the activation energy for diffusion, R is the universal gas constant, T is the thermodynamic temperature experienced during the process, and D0 is a temperature-independent preexponential factor that depends on the materials being joined. For a given joint, the only term in this equation under practical control is the temperature. Processes When joining two materials of similar crystalline structure, diffusion bonding is performed by clamping the two pieces to be welded with their surfaces abutting each other. Prior to welding, these surfaces must be machined to as smooth a finish as economically viable, and kept as free from chemical contaminants or other detritus as possible. Any intervening material between the two metallic surfaces may prevent adequate diffusion of material. Specific tooling is made for each welding application to mate the welder to the workpieces. Once clamped, pressure and heat are applied to the components, usually for many hours. The surfaces are heated either in a furnace, or via electrical resistance. Pressure can be applied using a hydraulic press at temperature; this method allows for exact measurements of load on the parts. In cases where the parts must have no temperature gradient, differential thermal expansion can be used to apply load. By fixturing parts using a low-expansion metal (e.g. molybdenum) the parts will supply their own load by expanding more than the fixture metal at temperature. Alternative methods for applying pressure include the use of dead weights, differential gas pressure between the two surfaces, and high-pressure autoclaves. Diffusion bonding must be done in a vacuum or inert gas environment when using metals that have strong oxide layers (e.g. copper). Surface treatment, including polishing, etching, and cleaning, as well as diffusion pressure and temperature, are important factors in the diffusion bonding process. 
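The Arrhenius expression for the diffusion coefficient above can be evaluated numerically; the sketch below uses illustrative, hypothetical values for D0 and Qd (not data for any particular alloy) to show why elevated temperature so strongly accelerates bonding:

```python
import math

R = 8.314  # universal gas constant, J/(mol·K)

def diffusion_coefficient(d0, qd, temp_k):
    """D = D0 * exp(-Qd / (R*T)); d0 in m^2/s, qd in J/mol, temp_k in K."""
    return d0 * math.exp(-qd / (R * temp_k))

# Illustrative values only: D0 = 1e-4 m^2/s, Qd = 150 kJ/mol
d_900 = diffusion_coefficient(1e-4, 150e3, 900.0)
d_1200 = diffusion_coefficient(1e-4, 150e3, 1200.0)

# A 300 K temperature increase multiplies D by roughly two orders of
# magnitude here, which is why bonding is done at 50-75% of the
# absolute melting temperature.
print(d_1200 / d_900)  # → ≈ 150
```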
At the microscopic level, diffusion bonding occurs in three simplified stages: Microasperity deformation – before the surfaces completely contact, asperities (very small surface defects) on the two surfaces contact and plastically deform. As these asperities deform, they interlink, forming interfaces between the two surfaces. Diffusion-controlled mass transport – elevated temperature and pressure cause accelerated creep in the materials; grain boundaries and raw material migrate, and gaps between the two surfaces are reduced to isolated pores. Interface migration – material begins to diffuse across the boundary of the abutting surfaces, blending this material boundary and creating a bond. Benefits The bonded surface has the same physical and mechanical properties as the base material. Once bonding is complete, the joint may be tested, for example using tensile testing. The diffusion bonding process is able to produce high-quality joints in which no discontinuity or porosity exists at the interface; the bonded material can subsequently be sanded, machined, and heat-treated like the base material. Diffusion bonding enables the manufacture of high-precision components with complex shapes. The method is also flexible: it can be used to join either similar or dissimilar materials, and it is important in processing composite materials. The process is not especially difficult to carry out, and its cost is not high. It also introduces very little plastic deformation in the materials being joined. Applicability Diffusion bonding is primarily used to create intricate forms for the electronics, aerospace, nuclear, and microfluidics industries. Since this form of bonding takes a considerable amount of time compared to other joining techniques such as explosion welding, parts are made in small quantities, and often fabrication is mostly automated. However, depending on requirements, the required time can be reduced. 
In an attempt to reduce fastener count, labor costs, and part count, diffusion bonding, in conjunction with superplastic forming, is also used when creating complex sheet metal forms. Multiple sheets are stacked atop one another and bonded in specific sections. The stack is then placed into a mold and gas pressure expands the sheets to fill the mold. This is often done using titanium or aluminum alloys for parts needed in the aerospace industry. Typical materials that are welded include titanium, beryllium, and zirconium. In many military aircraft, diffusion bonding helps to conserve expensive strategic materials and reduce manufacturing costs. Some aircraft have over 100 diffusion-bonded parts, including fuselages, outboard and inboard actuator fittings, landing gear trunnions, and nacelle frames. References Further reading Kalpakjian, Serope, Schmid, Steven R. "Manufacturing Engineering and Technology, Fifth Edition", pp. 771-772 External links "Cast Nonferrous: Solid State Welding," at Key to Metals An excellent discussion of diffusion bonding by Amir Shirzadi for the UK Centre for Materials Education Welding Materials science
Diffusion bonding
[ "Physics", "Materials_science", "Engineering" ]
1,712
[ "Welding", "Applied and interdisciplinary physics", "Materials science", "Mechanical engineering", "nan" ]
49,203,606
https://en.wikipedia.org/wiki/Innermost%20stable%20circular%20orbit
The innermost stable circular orbit (often called the ISCO) is the smallest marginally stable circular orbit in which a test particle can stably orbit a massive object in general relativity. The location of the ISCO, the ISCO-radius (r_isco), depends on the mass and angular momentum (spin) of the central object. The ISCO plays an important role in black hole accretion disks since it marks the inner edge of the disk. The ISCO should not be confused with the Roche limit, the innermost point where a physical object can orbit before tidal forces break it up. The ISCO is concerned with theoretical test particles, not real objects. In general terms, the ISCO will be far closer to the central object than the Roche limit. Basic concept In classical mechanics, an orbit is achieved when a test particle's angular momentum is enough to resist the gravitational force of the central object. As the test particle approaches the central object, the required amount of angular momentum grows, due to the inverse-square nature of gravitation. This can be seen in practical terms in artificial satellite orbits; in geostationary orbit, at an altitude of about 35,786 km, the orbital speed is about 3.07 km/s, whereas in low Earth orbit it is about 7.8 km/s. Orbits can be achieved at any altitude, as there is no upper limit to velocity in classical mechanics. General relativity (GR) introduces an upper limit to the speed of any object: the speed of light. If a test particle is lowered in orbit toward a central object in GR, the test particle will eventually require a speed greater than light to maintain an orbit. This defines the innermost possible instantaneous orbit, known as the innermost circular orbit, which lies at 1.5 times the Schwarzschild radius (for a black hole governed by the Schwarzschild metric). This distance is also known as the photon sphere. In GR, gravity is not treated as a central force that pulls on objects; it instead operates by warping spacetime, thus bending the path that any test particle may travel. 
The ISCO is the result of an attractive term in the equation representing the energy of a test particle near the central object. This term cannot be offset by additional angular momentum, and any particle within this radius will spiral into the center. The precise nature of the term depends on the conditions of the central object (i.e. whether a black hole has angular momentum). Non-rotating black holes For a non-spinning massive object, where the gravitational field can be expressed with the Schwarzschild metric, the ISCO is located at r_isco = 6GM/c² = 3r_s, where r_s = 2GM/c² is the Schwarzschild radius of the massive object with mass M. Thus, even for a non-spinning object, the ISCO radius is only three times the Schwarzschild radius, r_s, suggesting that only black holes and neutron stars have innermost stable circular orbits outside of their surfaces. As the angular momentum of the central object increases, r_isco decreases. Bound circular orbits are still possible between the ISCO and the so-called marginally bound orbit, which has a radius of r_mb = 2r_s, but they are unstable. Between r_mb and the photon sphere, so-called unbound orbits are possible, which are extremely unstable and which afford a total energy of more than the rest mass at infinity. For a massless test particle like a photon, the only possible but unstable circular orbit is exactly at the photon sphere. Inside the photon sphere, no circular orbits exist. Its radius is r_ph = 1.5r_s = 3GM/c². The lack of stability inside the ISCO is explained by the fact that lowering the orbit does not free enough potential energy for the orbital speed necessary: the acceleration gained is too little. This is usually shown by a graph of the orbital effective potential, which is lowest at the ISCO. Rotating black holes The case for rotating black holes is somewhat more complicated. The equatorial ISCO in the Kerr metric depends on whether the orbit is prograde (negative sign below) or retrograde (positive sign below): r_isco = (GM/c²)·(3 + Z₂ ∓ √((3 − Z₁)(3 + Z₁ + 2Z₂))), where Z₁ = 1 + (1 − χ²)^(1/3) [(1 + χ)^(1/3) + (1 − χ)^(1/3)] and Z₂ = √(3χ² + Z₁²), with the rotation parameter χ = Jc/(GM²). 
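The standard Kerr equatorial ISCO expression (in units G = c = 1, radius in units of M) can be evaluated directly; a minimal sketch, with function and variable names chosen here for illustration:

```python
import math

def isco_radius(chi, prograde=True):
    """Equatorial ISCO radius in units of GM/c^2 for spin parameter chi in [0, 1]."""
    z1 = 1 + (1 - chi**2) ** (1/3) * ((1 + chi) ** (1/3) + (1 - chi) ** (1/3))
    z2 = math.sqrt(3 * chi**2 + z1**2)
    sign = -1 if prograde else +1  # minus: prograde, plus: retrograde
    return 3 + z2 + sign * math.sqrt((3 - z1) * (3 + z1 + 2 * z2))

print(isco_radius(0.0))                  # → 6.0 (Schwarzschild: 3 Schwarzschild radii)
print(isco_radius(1.0))                  # → 1.0 (maximal spin, prograde)
print(isco_radius(1.0, prograde=False))  # → 9.0 (maximal spin, retrograde)
```

The limiting cases reproduce the values quoted in the text: a non-spinning hole gives 6GM/c², while maximal spin drives the prograde ISCO down to GM/c² and the retrograde ISCO out to 9GM/c².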
As the rotation rate of the black hole increases to the maximum of χ = 1, the prograde ISCO, marginally bound radius and photon sphere radius all decrease down to the event horizon radius at the so-called gravitational radius GM/c², still logically and locally distinguishable though. The retrograde radii correspondingly increase, the ISCO towards 9GM/c² and the photon sphere towards 4GM/c². If the particle is also spinning, there is a further split in ISCO radius depending on whether the spin is aligned with or against the black hole rotation. References External links Leo C. Stein, Kerr calculator V2 General relativity Black holes Orbits
Innermost stable circular orbit
[ "Physics", "Astronomy" ]
908
[ "Black holes", "Physical phenomena", "Physical quantities", "Unsolved problems in physics", "Astrophysics", "General relativity", "Density", "Theory of relativity", "Stellar phenomena", "Astronomical objects" ]
49,203,897
https://en.wikipedia.org/wiki/Peter%20Jost
(Hans) Peter Israel Jost, CBE (25 January 1921 – 7 June 2016) was a British mechanical engineer. He was the founder of the discipline of tribology, the science and engineering of interacting surfaces in relative motion. In 1966, Jost published a report which highlighted the cost of friction, wear and corrosion to the United Kingdom economy (1.1-1.4% of GDP). It was in this eponymous report that he coined the term tribology, which has now been widely adopted. Early life and education Jost was born in Berlin, son of merchant Leo Jost, and Margot (née Jacoby), both of Jewish descent. He was educated at Liverpool Technical College and Manchester College of Technology. Career Jost was an apprentice at Associated Metal Works, Glasgow, then at Napier and Sons in Liverpool, where he won the Sir John Larking medal for a paper on measurement of surface finish. At 29, he became general manager of Trier Brothers, an international lubricants company, of which he subsequently became director. Here he developed an innovative method of lubricating steam machinery, which saved energy and water by preventing scaling of boiler tubes. By 1960, he was lubrication consultant to iron, steel and tinplate producers Richard Thomas and Baldwins; he went on to serve as a director and chairman of several technology and engineering companies, including K. S. Paul Ltd, producers of solid lubricants, and Engineering & General Equipment Ltd. He served on numerous industry councils, and until his death was president of the International Tribology Council and a life member of the council of the Parliamentary and Scientific Committee. Awards and recognition The Royal Academy of Engineering noted that "there can hardly be another British engineer with more worldwide honours and decorations". He was appointed a CBE in 1969, and was also honoured by the heads of state of France, Germany, Poland, Austria and Japan, and in 1992 became the first honorary foreign member of the Russian Academy of Engineering. 
He received the Order of the Rising Sun, Gold Rays with Neck Ribbon honour from Japan in 2011, the Austrian Cross of Honour for Science and Art, First Class in 2001, and the Order of Merit Cross, First Class from Germany. He held two honorary professorships and 11 honorary doctorates including, in January 2000, the first Millennium honorary science doctorate. He was an honorary fellow of the Institution of Engineering and Technology, the Institution of Mechanical Engineers and of the Institute of Materials. He was awarded the International Award from the Society of Tribologists and Lubrication Engineers in 1997. Shortly before his death, he was elected an Honorary Fellow of the Royal Academy of Engineering but he died before the Academy's AGM at which this was announced. He established The Peter Jost Charitable Foundation which promotes the advancement of public education in science and technology through teaching and research, particularly the increase of public knowledge in tribology. In 2020, the International Tribology Council established the Peter Jost Tribology Award for mid-career tribologists. The inaugural winner of the award was Professor Daniele Dini from Imperial College London, who will be presented the award at the 7th World Tribology Congress in 2022. Personal life In 1948, Jost married Margaret Josephine, daughter of Michael Kadesh. They had two daughters. References 1921 births 2016 deaths 20th-century British engineers British mechanical engineers Tribologists Commanders of the Order of the British Empire
Peter Jost
[ "Materials_science" ]
703
[ "Tribology", "Tribologists" ]
71,547,611
https://en.wikipedia.org/wiki/Bail%20or%20Jail
Bail or Jail (originally titled Obakeidoro!) is an asymmetrical multiplayer action game developed by Free Style and published by Konami Digital Entertainment. The game was originally released on the Nintendo Switch in 2019, and ported to Windows PC on July 21, 2022 under its new name. Gameplay Bail or Jail is an asymmetrical multiplayer tag game in which up to four players are divided into three Humans and one Monster. The Monster must capture all Humans within a three-minute time limit to win; the Humans win by keeping at least one of their number out of jail. Humans can run and hide, use lanterns to stun the Monster for a short time, and press switches to unlock the jail and rescue captured teammates. Meanwhile, the Monster uses its own skills, such as slipping through walls or detecting footprints, to capture the Humans. There are several varieties of lantern; each can be used only once, but a use is restored by rescuing a teammate from jail. The drawback is that a lantern's light makes it easier for the Monster to find the Human carrying it. Bail or Jail offers five modes: Quick Match, Friend Match and Join Match online, plus single-player and local multiplayer offline. Quick Match matches players with random opponents from around the world. Friend Match can be played with Steam friends, with remaining slots filled by either friends or the CPU. Join Match allows players to hop into games that friends are currently playing in Quick Match or Friend Match. Single player lets one player choose a stage and play it as many times as they like, while local multiplayer supports up to four players on a single screen without an internet connection. Release The Steam version of Bail or Jail has increased resolution, frame rates of up to 144 FPS and additional sound effects. 
In addition, players can use Discord to invite other players from the same server. The game was released for Nintendo Switch in July 2022. It was globally released for Microsoft Windows by Konami Digital Entertainment on July 21, 2022. In October 2022, the company released two free downloadable content packs which introduce Alucard and Leon Belmont from the Castlevania franchise to the game. Reception Shaun Musgrave of TouchArcade rated the game 2.5/5 points, saying there was a good idea behind it but criticizing its execution. Calling it "not very enjoyable to play", he compared it to Friday the 13th: The Video Game, saying it was worse. However, Nintendo Force Magazine rated the game 8.5/10 points, calling it "family-frightening fun". Ken Allsop of PCGamesN called the game "a very cutesy take on the Dead by Daylight formula" and recommended it to people who were intimidated by the former game's "grisly nature". References External links Official website Multiplayer video games Indie games Asymmetrical multiplayer video games 2019 video games Konami games Nintendo Switch games Windows games Action games Video games developed in Japan
Bail or Jail
[ "Physics" ]
639
[ "Asymmetrical multiplayer video games", "Symmetry", "Asymmetry" ]
71,548,487
https://en.wikipedia.org/wiki/Vuk%20Mandi%C4%87
Vuk Mandić (born April 20, 1975, in Priboj, Serbia) is a Serbian-American astrophysicist and professor of physics and astronomy at the University of Minnesota. In 2017 he was elected a Fellow of the American Physical Society (APS). Biography He grew up in Podgorica, where he received his elementary and secondary education. For his university education, he went to the United States. He graduated in 1998 with a B.S. in physics and mathematics from the California Institute of Technology (Caltech) and in 2004 with a Ph.D. in physics from the University of California, Berkeley. His Ph.D. thesis advisor was Bernard Sadoulet. From 2004 to 2007 Mandić was supported by a Millikan Postdoctoral Fellowship at Caltech to work on the LIGO project to search for gravitational waves. In 2007 he became a faculty member in the department of physics and astronomy at the University of Minnesota, where he is now a Distinguished McKnight University Professor. In August 2017 he was part of the team that detected the GW170817 gravitational wave signal. He has chaired or co-chaired several committees for LIGO and for the Super Cryogenic Dark Matter Search. Mandić's research combines general relativity theory, astrophysics, and astronomy. His collaborations with various scientific teams have resulted in more than 350 publications with cumulative citations over 92,000. His 2017 APS Fellowship citation is for "significant contributions to searches for primordial gravitational waves using LIGO data and for pioneering studies of the ultimate limits to low frequency sensitivity of ground based gravitational wave detectors". References External links (See Sanford Underground Research Facility#History.) 
Stochastic Astrophysical Foreground from Compact Binary Mergers (Lectures 1 to 5) by Vuk Mandić, ICTS (International Centre for Theoretical Sciences) Summer School on Gravitational-Wave Astronomy, 5–16 July 2021, ICTS Bangalore Online Lectures, posted on YouTube, September 25, 2021 1975 births Living people Serbian physicists 20th-century American physicists 21st-century American physicists Astrophysicists University of California, Berkeley alumni California Institute of Technology alumni University of Minnesota faculty Fellows of the American Physical Society People from Priboj People from Podgorica
Vuk Mandić
[ "Physics" ]
456
[ "Astrophysicists", "Astrophysics" ]
71,548,827
https://en.wikipedia.org/wiki/Kojima-1Lb
Kojima-1Lb (TCP J05074264+2447555) is an exoplanet discovered through the microlensing method. The host star lens was discovered by the amateur astronomer Tadashi Kojima (小嶋正). At the time of its discovery it was the microlensing planet with the brightest host star, and consequently one of the nearest, in contrast to most microlensing planets, which are found around distant and inaccessible host stars. Kojima-1Lb is a mildly cold Neptune around a red dwarf located from the solar system. Naming The microlensing event was first reported on CBAT as TCP J05074264+2447555 by Kojima. Conventionally, microlensing planets are named after the discoverer of the microlensing event and not after the discoverer of the planetary feature. The discovery group of the planetary feature nicknamed the microlensing event Feynman-01 in honour of the Osservatorio Astrofisico R.P. Feynman, which discovered the planetary feature. The star now appears as Kojima-1 in SIMBAD. Discovery The microlensing event caused by the star Kojima-1L moving in front of a background star was first observed by Tadashi Kojima from Gunma prefecture in Japan with a Canon EOS 6D + 135-mm f3.2 lens on 2 and 25 October 2017. ASAS-SN confirmed it as a microlensing event, but described it as a single-lens event. Nucita et al. 2017 used the photometry by AAVSO and the R.P. Feynman Observatory to first establish that TCP J05074264+2447555 was a binary lens with a hint of a new planetary system. Nucita et al. 2018 finally announced the discovery of the planet. Lensed background star The lensed background star is a single late F-type main-sequence star with a temperature of about 6400 K and a radius of . It is about 800 parsecs from Earth. Lensing system The star The star has a mass of about 0.5 and a proper motion of . It is the second-brightest microlensing host star, with Ks = 13.7 mag. 
The brightest microlensing host star is Gaia22dkvL, with V ≈ 14 as of September 2023. Together with Gaia22dkvL, Kojima-1L is located outside the bulge of the Milky Way, representing a small but growing sample of microlensing planets discovered in less crowded regions. The planet The planet has a mass of about 20 Earth masses and a projected separation of 0.8 or 0.9 astronomical units from its host star. This translates into a semi-major axis of about 1.1 astronomical units, which was inside and near the ice line at a younger age of the system. The planet might have first formed while the snow line was at a distance larger than the orbit of the planet. As the snow line moved inward, it might have crossed the orbit of the planet at around 2.2 Myr after the Kojima-1L system had formed. Circumstellar disks around low-mass stars have lifetimes of a few tens of Myr. During planet formation, the planet might have experienced a period in a gas-rich but ice-poor environment before its surroundings became more ice-rich in a later period. It is difficult to form planets as massive as Kojima-1Lb if the ice-rich period coincided with a gas-poor period; it is more likely that some gas remained during the ice-rich period. This way Kojima-1Lb could have grown quickly during the ice-rich period, first accreting solid material and then accreting the remaining gas. Future observations In the future it might be possible to observe the star causing the lens. With current adaptive optics instruments it is predicted that the background star and the star causing the lens can be resolved in 2021. This will enable an independent characterization of the host star by taking a spectrum. VLT/ESPRESSO might even be able to detect the planet Kojima-1Lb, which has a 1.3-year orbital period, with the radial velocity method. A follow-up with, for example, Subaru/IRD might even be able to detect additional inner and/or massive planets around Kojima-1L. References External links Discovery image by T. 
Kojima Article about Kojima-1Lb by Sky & Telescope Osservatorio Astrofisico R.P.Feynman Taurus (constellation) Exoplanets detected by microlensing Giant planets Exoplanets discovered in 2018
Kojima-1Lb
[ "Astronomy" ]
973
[ "Taurus (constellation)", "Constellations" ]
71,550,155
https://en.wikipedia.org/wiki/Time%20in%20Chad
Time in Chad is given by a single time zone, denoted as West Africa Time (WAT; UTC+01:00). Chad shares this time zone with several other countries, including fourteen in western Africa. Chad does not observe daylight saving time (DST). IANA time zone database In the IANA time zone database, Chad is given one zone in the file zone.tab—Africa/Ndjamena. "TD" refers to the country's ISO 3166-1 alpha-2 country code. The data for Chad comes directly from zone.tab of the IANA time zone database. See also Time in Africa List of time zones by country References External links Current time in Chad at Time.is Time in Chad at TimeAndDate.com Time by country Geography of Chad Time in Africa
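The zone's behavior described above (UTC+01:00 year-round, no DST shift) can be verified with a short script. A minimal sketch using Python's standard-library zoneinfo module (Python 3.9+), assuming a system tz database that includes Africa/Ndjamena:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

# Chad's single IANA time zone entry
ndjamena = ZoneInfo("Africa/Ndjamena")

# Compare a winter and a summer date: because Chad does not
# observe DST, the UTC offset is the same in January and July.
jan = datetime(2023, 1, 15, 12, 0, tzinfo=ndjamena)
jul = datetime(2023, 7, 15, 12, 0, tzinfo=ndjamena)

assert jan.utcoffset() == timedelta(hours=1)   # WAT = UTC+01:00
assert jan.utcoffset() == jul.utcoffset()      # no DST shift
print(jan.tzname())  # zone abbreviation, e.g. "WAT"
```

The same check works for any of the other fourteen western African countries sharing WAT by substituting their IANA zone names.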
Time in Chad
[ "Physics" ]
178
[ "Spacetime", "Physical quantities", "Time", "Time by country" ]
71,550,747
https://en.wikipedia.org/wiki/Lanthanum%20diiodide
Lanthanum diiodide is an iodide of lanthanum, with the chemical formula LaI2. It is an electride, actually having a chemical formula of La3+[(I−)2e−]. Preparation Lanthanum diiodide can be obtained from the reduction of lanthanum(III) iodide with lanthanum metal under a vacuum at 800 to 900 °C: 2 LaI3 + La → 3 LaI2 It can also be obtained by reacting lanthanum and mercury(II) iodide: La + HgI2 → LaI2 + Hg It was first created by John D. Corbett in 1961. Properties Lanthanum diiodide is a blue-black solid with metallic lustre, which is easily hydrolyzed into the iodide oxide. It has a MoSi2-type structure, with the space group I4/mmm (No. 139). References Lanthanum compounds Iodides Electrides Substances discovered in the 1960s
Lanthanum diiodide
[ "Chemistry" ]
193
[ "Electron", "Electrides", "Salts" ]
71,550,825
https://en.wikipedia.org/wiki/Bretagnolle%E2%80%93Huber%20inequality
In information theory, the Bretagnolle–Huber inequality bounds the total variation distance TV(P, Q) between two probability distributions P and Q by a concave and bounded function of the Kullback–Leibler divergence D(P ∥ Q). The bound can be viewed as an alternative to the well-known Pinsker's inequality: when D(P ∥ Q) is large (larger than 2, for instance), Pinsker's inequality is vacuous, while Bretagnolle–Huber remains bounded and hence non-vacuous. It is used in statistics and machine learning to prove information-theoretic lower bounds relying on hypothesis testing. (The Bretagnolle–Huber–Carol inequality is a variation of a concentration inequality for multinomially distributed random variables which bounds the total variation distance.) Formal statement Preliminary definitions Let P and Q be two probability distributions on a measurable space. Recall that the total variation between P and Q is defined by TV(P, Q) = sup_A |P(A) − Q(A)|. The Kullback–Leibler divergence is defined as follows: D(P ∥ Q) = ∫ log(dP/dQ) dP if P ≪ Q, and +∞ otherwise. In the above, the notation P ≪ Q stands for absolute continuity of P with respect to Q, and dP/dQ stands for the Radon–Nikodym derivative of P with respect to Q. General statement The Bretagnolle–Huber inequality says: TV(P, Q) ≤ √(1 − exp(−D(P ∥ Q))) ≤ 1 − (1/2) exp(−D(P ∥ Q)). Alternative version The following version is directly implied by the bound above, but some authors prefer stating it this way. Let A be any event. Then P(A) + Q(A^c) ≥ (1/2) exp(−D(P ∥ Q)), where A^c is the complement of A. Indeed, by definition of the total variation, for any event A, P(A) + Q(A^c) ≥ 1 − TV(P, Q). Rearranging, we obtain the claimed lower bound on P(A) + Q(A^c). Proof We prove the main statement following the ideas in Tsybakov's book (Lemma 2.6, page 89), which differ from the original proof (see C. Canonne's note for a modernized retranscription of their argument). The proof is in two steps: 1. Prove using Cauchy–Schwarz that the total variation is related to the Bhattacharyya coefficient (right-hand side of the inequality): ∫ √(dP dQ) ≤ √(1 − TV(P, Q)²). 2. Prove by a clever application of Jensen's inequality that (∫ √(dP dQ))² ≥ exp(−D(P ∥ Q)). Step 1: First notice that 1 − TV(P, Q) = ∫ min(dP, dQ) and 1 + TV(P, Q) = ∫ max(dP, dQ); to see this, split the space according to whether dP ≤ dQ holds and compare the integrals on each part. 
Then, by the Cauchy–Schwarz inequality, ∫ √(dP dQ) = ∫ √(min(dP, dQ)) √(max(dP, dQ)) ≤ √(∫ min(dP, dQ) · ∫ max(dP, dQ)) = √((1 − TV(P, Q))(1 + TV(P, Q))) = √(1 − TV(P, Q)²). Step 2: We write ∫ √(dP dQ) = E_P[exp((1/2) log(dQ/dP))] and apply Jensen's inequality: E_P[exp((1/2) log(dQ/dP))] ≥ exp((1/2) E_P[log(dQ/dP)]) = exp(−(1/2) D(P ∥ Q)). Combining the results of steps 1 and 2 leads to the claimed bound on the total variation. Examples of applications Sample complexity of biased coin tosses Source: The question is: how many coin tosses do I need to distinguish a fair coin from a biased one? Assume you have two coins, a fair coin (Bernoulli distributed with mean 1/2) and an ε-biased coin (Bernoulli with mean 1/2 + ε). Then, in order to identify the biased coin with probability at least 1 − δ (for some δ), at least of the order of ε^(−2) log(1/δ) coin tosses are required. In order to obtain this lower bound we impose that the total variation distance between the two sequences of samples is at least . This is because the total variation upper bounds the probability of under- or over-estimating the coins' means. Denote by P^n and Q^n the respective joint distributions of the n coin tosses for each coin; since the tosses are independent, D(P^n ∥ Q^n) = n D(P ∥ Q). The result is obtained by rearranging the terms. Information-theoretic lower bound for k-armed bandit games In multi-armed bandit problems, a lower bound on the minimax regret of any bandit algorithm can be proved using Bretagnolle–Huber and its consequence on hypothesis testing (see Chapter 15 of Bandit Algorithms). History The result was first proved in 1979 by Jean Bretagnolle and Catherine Huber, and published in the proceedings of the Strasbourg Probability Seminar. Alexandre Tsybakov's book features an early re-publication of the inequality and its attribution to Bretagnolle and Huber, which is presented as an early and less general version of Assouad's lemma (see notes 2.8). A constant improvement on Bretagnolle–Huber was proved in 2014 as a consequence of an extension of Fano's inequality. See also Total variation for a list of upper bounds Bretagnolle–Huber–Carol Inequality in Concentration inequality References Information theory Probabilistic inequalities
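The gap between Pinsker's inequality and the Bretagnolle–Huber bound is easy to see numerically. The following sketch (plain Python; the helper names are illustrative, not from any library) compares the two bounds for a pair of Bernoulli distributions whose divergence exceeds 2, where Pinsker's bound becomes vacuous:

```python
import math

def kl_bernoulli(p: float, q: float) -> float:
    """Kullback-Leibler divergence D(Bern(p) || Bern(q))."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def pinsker_bound(kl: float) -> float:
    """Pinsker: TV <= sqrt(KL / 2). Exceeds 1 (vacuous) once KL > 2."""
    return math.sqrt(kl / 2)

def bretagnolle_huber_bound(kl: float) -> float:
    """Bretagnolle-Huber: TV <= sqrt(1 - exp(-KL)). Always below 1."""
    return math.sqrt(1 - math.exp(-kl))

p, q = 0.5, 0.999
kl = kl_bernoulli(p, q)   # about 2.76, larger than 2
tv = abs(p - q)           # exact TV between two Bernoullis: 0.499

print(pinsker_bound(kl))            # about 1.17: vacuous, exceeds 1
print(bretagnolle_huber_bound(kl))  # about 0.97: still informative
assert tv <= bretagnolle_huber_bound(kl) < 1 < pinsker_bound(kl)
```

Both quantities bound the same total variation distance of 0.499; only the Bretagnolle–Huber bound remains below the trivial ceiling of 1 in this regime.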
Bretagnolle–Huber inequality
[ "Mathematics", "Technology", "Engineering" ]
836
[ "Telecommunications engineering", "Applied mathematics", "Theorems in probability theory", "Computer science", "Probabilistic inequalities", "Information theory", "Inequalities (mathematics)" ]
71,550,828
https://en.wikipedia.org/wiki/Cerium%20diiodide
Cerium diiodide is an iodide of cerium, with the chemical formula CeI2. Preparation Cerium diiodide can be obtained from the reduction of cerium(III) iodide with metallic cerium under vacuum at 800 °C to 900 °C: 2 CeI3 + Ce → 3 CeI2 It can also be formed from the reaction of cerium and ammonium iodide in liquid ammonia at −78 °C. The reaction forms an ammonia complex of cerium diiodide, which decomposes to cerium diiodide under vacuum at 200 °C. It was first created by John D. Corbett in 1961. Properties Cerium diiodide is an opaque dark solid with a metal-like appearance and properties. There is no cerium(II) in cerium diiodide, and its real structure is Ce3+(I−)2e−. It is easily hydrolyzed to form the corresponding iodide oxide. Like lanthanum diiodide and praseodymium diiodide, cerium diiodide forms in the MoSi2-type structure, with space group I4/mmm (No. 139). References Cerium compounds Iodides Substances discovered in the 1960s Electrides Lanthanide halides
Cerium diiodide
[ "Chemistry" ]
267
[ "Electrides", "Electron", "Salts" ]
71,551,262
https://en.wikipedia.org/wiki/Promethium%28III%29%20iodide
Promethium(III) iodide is an inorganic compound with the chemical formula PmI3. It is a red radioactive solid with a melting point of 695 °C. Preparation Promethium(III) iodide is obtained by reacting anhydrous hydrogen iodide and promethium(III) chloride at a high temperature: PmCl3 + 3 HI → PmI3 + 3 HCl Promethium(III) iodide cannot be produced from the reaction of an HI–H2 mixture with promethium oxide (Pm2O3); only promethium oxyiodide (PmOI) is obtained. Promethium oxide does, however, react with molten aluminum iodide at 500 °C to form promethium iodide: Pm2O3 + 2 AlI3 → 2 PmI3 + Al2O3 References Iodides Promethium compounds Lanthanide halides
Promethium(III) iodide
[ "Chemistry" ]
178
[ "Inorganic compounds", "Inorganic compound stubs" ]