id: int64 (39 to 79M)
url: string (lengths 31 to 227)
text: string (lengths 6 to 334k)
source: string (lengths 1 to 150)
categories: list (lengths 1 to 6)
token_count: int64 (3 to 71.8k)
subcategories: list (lengths 0 to 30)
62,414,339
https://en.wikipedia.org/wiki/Phaeocollybia%20festiva
Phaeocollybia festiva is a species of fungus in the family Cortinariaceae. References Cortinariaceae Fungi described in 1942 Fungus species
Phaeocollybia festiva
[ "Biology" ]
33
[ "Fungi", "Fungus species" ]
62,415,050
https://en.wikipedia.org/wiki/European%20Structural%20Integrity%20Society
The European Society for Structural Integrity (ESIS) is an international non-profit engineering and scientific society. Its purpose is to create, expand, and disseminate knowledge about all aspects of structural integrity, with the goal of improving the safety and performance of structures and components. History The origins of the European Structural Integrity Society date back to November 1978, during a summer school in Darmstadt (Germany). At the time, the name was European Group on Fracture. Between 1979 and 1988 the first technical committees were created; the first of these carried the designation Elasto-Plastic Fracture Mechanics. The initial idea was to reproduce in Europe the committee structure of ASTM. The first president of the European Structural Integrity Society was Dr. L.H. Larsson (European Commission Joint Research Centre). ESIS has a total of 24 technical committees and national groups in each European country. The current president of ESIS is Prof. Aleksandar Sedmak from the University of Belgrade (Serbia). 
Scientific Journals ESIS is institutionally responsible for the following scientific journals: Engineering Failure Analysis Engineering Fracture Mechanics International Journal of Fatigue Theoretical and Applied Fracture Mechanics Procedia Structural Integrity Organization of International Conferences ESIS is the organizer or supporter of various international conference series: ECF, European Conference on Fracture (biennial) ICSI, International Conference on Structural Integrity (biennial) IRAS, International Conference on Risk Analysis and Safety of Complex Structures and Components (biennial) Awards ESIS, at its events, confers the following awards: The Griffith Medal The August Wöhler Medal The Award of Merit Honorary Membership The Young Scientist Award Robert Moskovic Award (ESIS TC12) The August Wöhler Medal Winners 2022: Youshi Hong, Chinese Academy of Sciences, China 2020: Filippo Berto, Sapienza University of Rome, Italy 2016: Paul C. Paris, Washington University in St. Louis, USA 2014: Reinhard Pippan, Austrian Academy of Sciences, Austria 2010: Ashok Saxena, University of Arkansas, USA 2008: Morris Sonsino, Technische Universität Darmstadt, Germany 2006: Robert O. Ritchie, University of California, Berkeley, USA 2004: Leslie Pook, University College London, UK 2002: Michael W. Brown, The University of Sheffield, UK 2000: Darrell F. Socie, University of Illinois, USA The Award of Merit Winners 2022: José A.F.O. 
Correia, University of Porto, Portugal 2020: Uwe Zerbst, Federal Institute for Materials Research and Testing, Germany 2018: Filippo Berto, Sapienza University of Rome, Italy 2016: Laszlo Toth, University of Miskolc, Hungary 2014: Wolfgang Dietzel, Helmholtz-Zentrum Hereon, Germany 2010: Jaroslav Pokluda, Brno University of Technology, Czech Republic 2008: Emmanuel Gdoutos, Democritus University of Thrace, Greece 2006: Andrzej Neimitz, Kielce University of Technology, Poland 2004: Keith Miller, University of Sheffield, UK 2002: Dietrich Munz, Karlsruhe Institute of Technology, Germany 2000: Ian Milne, Integrity Management Services, UK The Robert Moskovic Award Winners 2023: Aleksandar Grbovic, University of Belgrade, Serbia; Marc A. Meyers, University of California, San Diego, USA; Motomichi Koyama, Tohoku University, Japan 2022: Hryhoriy Nykyforchyn, National Academy of Sciences of Ukraine, Ukraine; John Michopoulos, United States Naval Research Laboratory, USA; Grzegorz Lesiuk, Wrocław University of Science and Technology, Poland 2021: Maria Feng, Columbia University, USA; Filippo Berto, Norwegian University of Science and Technology, Norway; Milan Veljkovic, Delft University of Technology, Netherlands 2020: Neil James, Plymouth University, UK; Rui Calçada, University of Porto, Portugal; Vladimir Moskvichev, Russian Academy of Sciences, Russia 2019: Hojjat Adeli, Ohio State University, USA; Alfonso Fernández-Canteli, University of Oviedo, Spain; Aleksandar Sedmak, University of Belgrade, Serbia References External links European Structural Integrity Society Official Website Engineering organizations Organizations established in 1978 Scientific organizations established in 1978 Materials science organizations
European Structural Integrity Society
[ "Materials_science", "Engineering" ]
879
[ "Materials science organizations", "Materials science", "nan" ]
62,415,639
https://en.wikipedia.org/wiki/Cache%20Creek%20Ocean
The Cache Creek Ocean, formerly called the Anvil Ocean, is an inferred ancient ocean which existed between western North America and offshore continental terranes between the Devonian and the Middle Jurassic. Evolution of the concept First proposed in the 1970s and referred to as the Anvil Ocean, the oceanic crust between the Yukon composite terranes and North America was later renamed the Cache Creek Sea in 1987 by Monger and Berg, before being renamed the Cache Creek Ocean by Plafker and Berg in 1994. Other researchers in 1998 proposed the name Slide Mountain Ocean. The geology of Yukon and geology of Alaska formed in part due to the accretion of island arcs and continental terranes onto the western margin of North America. Many of these island arcs arrived onshore during and after the Devonian. The Cache Creek Belt (also referred to as the Cache Creek suture zone or Cache Creek terrane) is an extensive area of mélange and oceanic rocks in the Canadian province of British Columbia. Sedimentary rocks contain fossils from the Carboniferous through the Middle Jurassic, and isotopic dating of blueschist gives ages between 230 and 210 million years ago, in the Late Triassic. The Cache Creek Belt is bordered by the Quesnellia Terrane in the east and by the large Stikinia Terrane in the west. The accretion of the landmasses and the closing of the Cache Creek Ocean likely happened in the Middle Jurassic. References Historical oceans Oceanography Geology of British Columbia Devonian North America Carboniferous North America Permian North America Triassic North America Early Jurassic North America Middle Jurassic North America
Cache Creek Ocean
[ "Physics", "Environmental_science" ]
316
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
62,415,641
https://en.wikipedia.org/wiki/H4K16ac
H4K16ac is an epigenetic modification to the DNA packaging protein Histone H4. It is a mark that indicates the acetylation at the 16th lysine residue of the histone H4 protein. H4K16ac is unusual in that it has both transcriptional activation and repression activities. The loss of H4K20me3 along with a reduction of H4K16ac is a strong indicator of cancer. Lysine acetylation and deacetylation Proteins are typically acetylated on lysine residues, and this reaction relies on acetyl-coenzyme A as the acetyl group donor. In histone acetylation and deacetylation, histone proteins are acetylated and deacetylated on lysine residues in the N-terminal tail as part of gene regulation. Typically, these reactions are catalyzed by enzymes with histone acetyltransferase (HAT) or histone deacetylase (HDAC) activity, although HATs and HDACs can modify the acetylation status of non-histone proteins as well. The regulation of transcription factors, effector proteins, molecular chaperones, and cytoskeletal proteins by acetylation and deacetylation is a significant post-translational regulatory mechanism. These regulatory mechanisms are analogous to phosphorylation and dephosphorylation by the action of kinases and phosphatases. Not only can the acetylation state of a protein modify its activity, but recent work suggests that this post-translational modification may also crosstalk with phosphorylation, methylation, ubiquitination, sumoylation, and others for dynamic control of cellular signaling. In the field of epigenetics, histone acetylation (and deacetylation) have been shown to be important mechanisms in the regulation of gene transcription. Histones, however, are not the only proteins regulated by posttranslational acetylation. Nomenclature H4K16ac indicates acetylation of lysine 16 on the histone H4 protein subunit. Histone modifications The genomic DNA of eukaryotic cells is wrapped around special protein molecules known as histones. 
The complexes formed by the looping of the DNA are known as chromatin. The basic structural unit of chromatin is the nucleosome: this consists of the core octamer of histones (H2A, H2B, H3 and H4) as well as a linker histone and about 180 base pairs of DNA. These core histones are rich in lysine and arginine residues. The carboxyl (C) terminal ends of these histones contribute to histone-histone interactions, as well as histone-DNA interactions. The amino (N) terminal charged tails are the site of post-translational modifications, such as H4K16ac. Epigenetic implications The post-translational modification of histone tails by either histone modifying complexes or chromatin remodeling complexes is interpreted by the cell and leads to complex, combinatorial transcriptional output. It is thought that a histone code dictates the expression of genes by a complex interaction between the histones in a particular region. The current understanding and interpretation of histones comes from two large scale projects: ENCODE and the Epigenomic roadmap. The purpose of the epigenomic study was to investigate epigenetic changes across the entire genome. This led to chromatin states, which define genomic regions by grouping the interactions of different proteins and/or histone modifications together. Chromatin states were investigated in Drosophila cells by looking at the binding location of proteins in the genome. Use of ChIP-sequencing revealed regions in the genome characterised by different banding. Different developmental stages were profiled in Drosophila as well, with an emphasis placed on histone modification relevance. A look into the data obtained led to the definition of chromatin states based on histone modifications. The human genome was annotated with chromatin states. These annotated states can be used as new ways to annotate a genome independently of the underlying genome sequence. 
This independence from the DNA sequence enforces the epigenetic nature of histone modifications. Chromatin states are also useful in identifying regulatory elements that have no defined sequence, such as enhancers. This additional level of annotation allows for a deeper understanding of cell-specific gene regulation. Importance Acetylation at this site has several effects: it can block the function of chromatin remodelers, and it neutralizes the positive charge on lysines. Acetylation of histone H4 on lysine 16 (H4K16ac) is especially important for chromatin structure and function in a variety of eukaryotes and is catalyzed by specific histone lysine acetyltransferases (HATs). H4K16 is particularly interesting because it is the only acetylatable site of the H4 N-terminal tail, and it can influence the formation of a compact higher-order chromatin structure. Hypoacetylation of H4K16 appears to cause delayed recruitment of DNA repair proteins to sites of DNA damage in a mouse model of premature aging (Hutchinson–Gilford progeria syndrome). H4K16ac also has roles in transcriptional activation and the maintenance of euchromatin. Activation and repression H4K16ac is unusual in that it is associated with both transcriptional activation and repression. The bromodomain of TIP5, part of NoRC, binds to H4K16ac, and the NoRC complex then silences rDNA with HATs and DNMTs. There is also a reduction in the levels of H3K56ac during aging and an increase in the levels of H4K16ac. Increased H4K16ac in old yeast cells is associated with the decline in levels of the HDAC Sir2, which can increase the life span when overexpressed. Cancer marker The loss of the repressive H4K20me3 mark, along with a reduction of the activating H4K16ac mark, defines cancer. It is not clear why the joint loss of a repressive and an activating mark is an indicator of cancer, but the reduction happens at repetitive sequences, along with generally reduced DNA methylation. 
Methods The histone mark acetylation can be detected in a variety of ways: 1. Chromatin Immunoprecipitation Sequencing (ChIP-sequencing) measures the amount of DNA enrichment once bound to a targeted protein and immunoprecipitated. It results in good optimization and is used in vivo to reveal DNA-protein binding occurring in cells. ChIP-Seq can be used to identify and quantify various DNA fragments for different histone modifications along a genomic region. 2. Micrococcal Nuclease sequencing (MNase-seq) is used to investigate regions that are bound by well positioned nucleosomes. The micrococcal nuclease enzyme is employed to identify nucleosome positioning. Well positioned nucleosomes are seen to have enrichment of sequences. 3. Assay for transposase accessible chromatin sequencing (ATAC-seq) is used to look into regions that are nucleosome free (open chromatin). It uses the hyperactive Tn5 transposase to highlight nucleosome localisation. See also Histone acetylation References Epigenetics Post-translational modification
H4K16ac
[ "Chemistry" ]
1,597
[ "Post-translational modification", "Gene expression", "Biochemical reactions" ]
62,415,848
https://en.wikipedia.org/wiki/Bostwick%20Historic%20District
The Bostwick Historic District, in Bostwick, Georgia, is a historic district which was listed on the National Register of Historic Places in 2002. The listing included 64 contributing buildings, a contributing structure, and four contributing sites. It is centered on the intersection of Bostwick Rd. (Georgia State Route 83) and Fairplay Rd. in Bostwick. The oldest historic resource is the Bostwick Cemetery, established around 1859. It was deemed significant partly as it is a "good example of a rural town in Georgia which developed from the cultivation and processing of cotton. The district is significant in the areas of agriculture and industry for its excellent collection of industrial buildings associated with the processing of cotton as well as for the remaining cotton fields located within the district. John Bostwick, Sr. (1859-1929), considered the founder of the town, started Bostwick Supply Company in 1892. In 1901, he started the Bostwick Manufacturing Company that consisted of a cottonseed oil mill and other buildings (cotton gin, granary, grist mill, warehouse, guano/fertilizer building) associated with the manufacturing of goods from cotton. (All these buildings still remain.) Historically, the region surrounding the small town of Bostwick was primarily planted in cotton. Currently, much of this land has been planted with other crops, such as pine trees and peanuts, or left open to be used as pasture land for grazing by cattle. The fields planted in cotton within the district still convey the historic significant pattern in Georgia of agricultural fields abutting the town development." It was deemed significant also for its architecture, specifically "for its excellent examples of historic residences, commercial, and community landmark buildings representing architectural types and styles popular in Georgia from the late 19th century into the early 20th century. 
The significant architectural types include Georgian cottage, gabled ell cottage, Queen Anne cottage, hall-parlor cottage, and bungalow. The significant architectural styles include Colonial Revival, Neoclassical Revival, Craftsman, and Folk Victorian. The John Bostwick, Sr. House, built in 1902, is an excellent representative example of a Georgian House, a two-story house with a central hallway on each floor with two rooms on either side, representing the Neoclassical Revival style. The character-defining features of the house include a full-height entry porch with lower full-width porch, truncated hipped roof, and wide cornice band. The historic stores are good examples of attached and freestanding buildings representing the Folk Victorian style. The character-defining features include a stepped parapet roof, recessed brick panels, and decorative arches over the windows and doors. The historic community landmark resources include two churches, the Susie Agnes Hotel, and the Bostwick Cemetery." References GA Stepped gables National Register of Historic Places in Morgan County, Georgia Historic districts on the National Register of Historic Places in Georgia (U.S. state) Buildings and structures completed in 1859
Bostwick Historic District
[ "Engineering" ]
596
[ "Stepped gables", "Architecture" ]
62,416,628
https://en.wikipedia.org/wiki/Design%20system
A design system is a comprehensive set of standards, documentation, and reusable components that guide the development of digital products within an organization. It serves as a single source of truth for designers and developers, ensuring consistency and efficiency across projects. A design system may comprise pattern and component libraries; style guides for font, color, spacing, component dimensions, and placement; design languages, coded components, brand languages, and documentation. Design systems aid in the design and development of digital products such as mobile applications or websites. A design system serves as a reference to establish a common understanding between design, engineering, and product teams. This understanding ensures smooth communication and collaboration between the different teams involved in designing and building a product, and ultimately results in a consistent user experience. Notable design systems include Lightning Design System (by Salesforce), Material Design (by Google), Carbon Design System (by IBM), and Fluent Design System (by Microsoft). Advantages Some of the advantages of a design system are: Streamlined design-to-production workflow. Creates a unified language between and within cross-functional teams. Faster builds, through reusable components and shared rationale. Better products, through more cohesive user experiences and a consistent design language. Improved maintenance and scalability, through the reduction of design and technical debt. Stronger focus for product teams, through tackling common problems so teams can concentrate on solving user needs. Origins Design systems have long been practiced under different names, and they have evolved considerably since their origin. 
The use of systems, or patterns as they were then called, was first raised in the 1960s at the NATO Software Engineering Conference (a discussion of how software should be developed), with Christopher Alexander's work gaining the industry's attention. In the 1970s, Alexander published A Pattern Language with Murray Silverstein and Sara Ishikawa, which discussed the interconnected patterns in architecture in an accessible and democratic way, giving birth to what we know today as design systems. Interest in the digital field surged again in the latter half of the 1980s, as patterns were applied to software development, leading to the notion of the software design pattern. Because patterns are best maintained in a collaborative editing environment, they led to the invention of the first wiki, which later led to the creation of Wikipedia itself. Regular conferences were held, and even then, patterns were used to build user interfaces. The surge continued well into the 1990s, with Jennifer Tidwell's research closing the decade. Scientific interest continued well into the 2000s. Mainstream interest in pattern languages for UI design surged again with the opening of the Yahoo! Design Pattern Library in 2006, alongside the simultaneous introduction of the Yahoo! User Interface Library (YUI Library for short). The simultaneous introduction was meant to allow more systematic design than the mere components the UI library provided. Google's Material Design in 2014 was the first to be called a "design language" by the firm (the previous version was called "Holo Theme"). Soon, others followed suit. Technical challenges of large-scale web projects led to the invention of systematic approaches in the 2010s, most notably BEM and Atomic Design. The book about Atomic Design has helped popularize the term "design system" since 2016. The book describes an approach to designing the layouts of digital products in a component-based way, making them future-friendly and easy to update. 
Difference between pattern languages and design systems and UI kits A pattern language allows its patterns to exist in many different shapes and forms – for example, a login form, with an input field for username, password, and buttons to log in, register and retrieve a lost password, is a pattern, no matter if the buttons are green or purple. Patterns are called patterns exactly because their exact nature might differ, but the similarities providing the relationship between them (called a configuration) remain the same. A design language, however, always has a set of visual guidelines to contain specific colors and typography. Most design systems allow elements of a design language to be configured (via its patterns) according to need. A UI kit is simply a set of UI components, with no explicit rules provided on its usage. Design tokens A design token is a named variable that stores a specific design attribute, such as a color, typography setting, spacing value, or other design decision. Design tokens serve as a single source of truth for these attributes across an entire brand or system, and provide a wide array of benefits such as abstraction, flexibility, scalability, and consistency to large design systems. Design tokens, which are essentially design decisions expressed in code, also improve collaboration between designers and developers. The concept of design tokens exists within a variety of well-known design systems such as Google's Material Design, Amazon's Style Dictionary, Adobe's Spectrum and the Atlassian Design System. The W3C Design Tokens Community Group is working to provide open standards for design tokens. Summary A design system comprises various components, patterns, styles, and guidelines that aid in streamlining and optimizing design efforts. The critical factors to consider when creating a design system include its scope, the reproducibility of your projects, and the availability of resources and time. 
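The design-token idea described above can be sketched in a few lines of code. This is a minimal illustration, not the API of any real tool: the token names and hex values are invented, and real systems (such as Style Dictionary) typically store tokens in JSON or YAML and export them per platform. The sketch shows the two-layer pattern of base tokens (raw decisions) and alias tokens (semantic names that reference them):

```python
# Base tokens: raw design decisions, named once.
# (All names and values here are invented for illustration.)
base_tokens = {
    "color.blue.500": "#0a66c2",
    "space.2": "8px",
}

# Alias tokens: semantic names that reference base tokens, so a
# rebrand only needs to change the base layer.
alias_tokens = {
    "color.action.primary": "{color.blue.500}",
    "button.padding": "{space.2}",
}

def resolve(name):
    """Follow {references} until a raw value is reached."""
    value = alias_tokens.get(name, base_tokens.get(name))
    while isinstance(value, str) and value.startswith("{") and value.endswith("}"):
        value = resolve(value[1:-1])
    return value

print(resolve("color.action.primary"))  # "#0a66c2"
print(resolve("button.padding"))        # "8px"
```

Changing the single base value `color.blue.500` would automatically update every semantic token that references it, which is the "single source of truth" property the text describes.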
If design systems are not appropriately implemented and maintained, they can become disorganized, making the design process less efficient. When implemented well, however, they can simplify work, make the end products more cohesive, and empower designers to address intricate UX challenges. References External links What is a Design System? by Robert Gourley Design Systems Handbook by Marco Suarez, Jina Anne, Katie Sylor-Miller, Diana Mounter, and Roy Stanfield. (Design Better by InVision) Post (in French): Why set up a design system? Design Patterns Example Design Systems system Product design Systems architecture
Design system
[ "Engineering" ]
1,195
[ "Systems engineering", "Product design", "Design", "Systems architecture" ]
62,417,498
https://en.wikipedia.org/wiki/Truthful%20resource%20allocation
Truthful resource allocation is the problem of allocating resources among agents with different valuations over the resources, such that agents are incentivized to reveal their true valuations. Model There are m resources that are assumed to be homogeneous and divisible. Examples are: Materials, such as wood or metal; Virtual resources, such as CPU time or computer memory; Financial resources, such as shares in firms. There are n agents. Each agent has a function that attributes a numeric value to each "bundle" (combination of resources). It is often assumed that the agents' value functions are linear, so that if the agent receives a fraction r_j of each resource j, then his/her value is the sum over j of r_j · v_j. Design goals The goal is to design a truthful mechanism that will induce the agents to reveal their true value functions, and then calculate an allocation that satisfies some fairness and efficiency objectives. The common efficiency objectives are: Pareto efficiency (PE); Utilitarian social welfare, defined as the sum of the agents' utilities. An allocation maximizing this sum is called utilitarian or max-sum; it is always PE. Nash social welfare, defined as the product of the agents' utilities. An allocation maximizing this product is called Nash-optimal or max-product or proportionally-fair; it is always PE. When agents have additive utilities, it is equivalent to the competitive equilibrium from equal incomes. The most common fairness objectives are: Equal treatment of equals (ETE): if two agents have exactly the same utility function, then they should get exactly the same utility. Envy-freeness: no agent should envy another agent. It implies ETE. 
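The linear-valuation model and the two welfare objectives can be made concrete with a small sketch. The valuations and allocations below are invented for illustration; the functions simply implement the definitions in the text (utility as the sum of r_j · v_j, utilitarian welfare as the sum of utilities, Nash welfare as their product):

```python
import math

def utility(fractions, values):
    """Linear utility: sum over resources j of r_j * v_j."""
    return sum(r * v for r, v in zip(fractions, values))

def utilitarian_welfare(allocation, valuations):
    """Sum of the agents' utilities (the max-sum objective)."""
    return sum(utility(allocation[i], valuations[i]) for i in range(len(valuations)))

def nash_welfare(allocation, valuations):
    """Product of the agents' utilities (the max-product objective)."""
    return math.prod(utility(allocation[i], valuations[i]) for i in range(len(valuations)))

# Example: n = 2 agents, m = 2 resources.
# Agent 0 values the resources (3, 1); agent 1 values them (1, 3).
valuations = [(3, 1), (1, 3)]

# Equal split (each agent gets 1/2 of each resource): envy-free but inefficient.
equal_split = [(0.5, 0.5), (0.5, 0.5)]

# Giving each agent the resource they value more is Pareto efficient here.
efficient = [(1.0, 0.0), (0.0, 1.0)]

print(utilitarian_welfare(equal_split, valuations))  # 4.0
print(utilitarian_welfare(efficient, valuations))    # 6.0
print(nash_welfare(efficient, valuations))           # 9.0
```

In this example the efficient allocation maximizes both objectives at once; in general, the max-sum and max-product allocations can differ, which is why the two objectives are stated separately.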
Trivial algorithms Two trivial truthful algorithms are: The equal split algorithm, which gives each agent exactly 1/n of each resource. This allocation is envy-free (and obviously ETE), but usually very inefficient. The serial dictatorship algorithm, which orders the agents arbitrarily and lets each agent in turn take all the resources that he wants from among the remaining ones. This allocation is PE, but usually unfair. It is possible to mix these two mechanisms and get a truthful mechanism that is partly fair and partly efficient. But the ideal mechanism would satisfy all three properties simultaneously: truthfulness, efficiency and fairness. At most one object per agent In a variant of the resource allocation problem, sometimes called one-sided matching or assignment, the total amount of objects allocated to each agent must be at most 1. When there are 2 agents and 2 objects, the following mechanism satisfies all three properties: if each agent prefers a different object, give each agent his preferred object; if both agents prefer the same object, give each agent 1/2 of each object (this is PE due to the capacity constraints). However, when there are 3 or more agents, it may be impossible to attain all three properties. Zhou proved that, in the setting with 3 or more agents where each agent must get at most 1 object and each object must be given to at most 1 agent, no truthful mechanism satisfies both PE and ETE. When there are multiple units of each object (but each agent must still get at most 1 object), there is a weaker impossibility result: no PE and ETE mechanism satisfies group strategyproofness. Zhou leaves open the more general resource allocation setting, in which each agent may get more than one object. There are analogous impossibility results for agents with ordinal utilities: For agents with strict ordinal utilities, Bogomolnaia and Moulin prove that no mechanism satisfies possible-PE, necessary-truthfulness, and ETE. 
For agents with weak ordinal utilities, Katta and Sethuraman prove that no mechanism satisfies possible-PE, possible-truthfulness, and necessary-envy-freeness. See also: Truthful one-sided matching. Approximation Algorithms There are several truthful algorithms that find a constant-factor approximation of the maximum utilitarian or Nash welfare. Guo and Conitzer studied the special case of n=2 agents. For the case of m=2 resources, they showed a truthful mechanism attaining 0.828 of the maximum utilitarian welfare, and showed an upper bound of 0.841. For the case of many resources, they showed that all truthful mechanisms of the same kind approach 0.5 of the maximum utilitarian welfare. Their mechanisms are complete - they allocate all the resources. Cole, Gkatzelis and Goel studied mechanisms of a different kind - based on the max-product allocation. For many agents, with valuations that are homogeneous functions, they show a truthful mechanism called Partial Allocation that guarantees to each agent at least 1/e ≈ 0.368 of his/her utility in the max-product allocation. Their mechanism is envy-free when the valuations are additive linear functions. They show that no truthful mechanism can guarantee to all agents more than 0.5 of their max-product utility. For the special case of n=2 agents, they show a truthful mechanism that attains at least 0.622 of the utilitarian welfare. They also show that the mechanism running the equal-split mechanism and the partial-allocation mechanism, and choosing the outcome with the highest social welfare, is still truthful, since both agents always prefer the same outcome. Moreover, it attains at least 2/3 of the optimal welfare. They also show an algorithm for computing the max-product allocation, and show that the Nash-optimal allocation itself attains at least 0.933 of the utilitarian welfare. 
They also show a mechanism called Strong Demand Matching, which is tailored to a setting with many agents and few resources (such as the privatization auction in the Czech Republic). The mechanism guarantees to each agent at least p/(p+1) of the max-product utility, where p is the smallest equilibrium price of a resource when each agent has a unit budget. When there are many more agents than resources, the price of each resource is usually high, so the approximation factor approaches 1. In particular, when there are two resources, this fraction is at least n/(n+1). This mechanism assigns each agent a fraction of a single resource. Cheung improved the competitive ratios of previous works: The ratio for two agents and two resources improved from 0.828 to 5/6 ≈ 0.833 with a complete-allocation mechanism, and to strictly more than 5/6 with a partial-allocation mechanism. The upper bound improved from 0.841 to 5/6+ε for a complete-allocation mechanism, and to 0.8644 for a partial mechanism. The ratio for two agents and many resources improved from 2/3 to 0.67776, by using a weighted average of two mechanisms: partial-allocation, and max(partial-allocation, equal-split). Related problems Truthful cake-cutting - a variant of the problem in which there is a single heterogeneous resource ("cake"), and each agent has a personal value-measure over the resource. Strategic fair division - the study of equilibria of fair division games when the agents act strategically rather than sincerely. Truthful allocation of two kinds of resources - plentiful and scarce. Truthful fair division of indivisible items. Relation between truthful fair division and wagering strategies. References Mechanism design Fair division protocols
Truthful resource allocation
[ "Mathematics" ]
1,616
[ "Game theory", "Mechanism design" ]
62,417,732
https://en.wikipedia.org/wiki/Kaddish%20%282019%20film%29
Kaddish () is the second feature film directed by Konstantin Fam. Like Fam's previous work, the film is dedicated to the memory of the victims of the Holocaust. Plot The testament of a former concentration camp prisoner upends the lives of two young people from different worlds, shedding light on the tragic history of their family. Cast Lenn Kudrjawizki – Leonid Masha King – Rachel Vladimir Koshevoi – Leo Mikhail Gorevoy – Richard Vyacheslav Chepurchenko – Kurt Anzhelika Kashirina – Katya Alim Kandur – Shlomo Vyacheslav Ganenko – Moshe Production The film was shot as a Russian-Belarusian co-production with the participation of Sasha Klein Production (Israel). Filming took place in Moscow, New York, Prague, Brest and Minsk, and ended in Israel. The film was created with the financial support of the Ministry of Culture of Russia, as well as private philanthropists. Recognition The film premiered as part of the competition program of the 1st ECG Film Festival in London in June 2019, where it won the Grand Prix. The film was also a contender for the Golden Globe in the category "Best Foreign Film". Konstantin Fam also made the longlist of the Golden Eagle Award of the National Academy of Motion Pictures Arts and Sciences of Russia for 2019 in the category "Best Director". Kaddish was also considered by the Oscar Committee of Belarus, but was not selected. 
Accolades Awards 1st ECG Film Festival in London, Grand Prix Amur Autumn Film Festival, Blagoveshchensk, Russia - 2019, Best Screenplay, Media Selection Sochi International Film Festival and Awards (Russia) 2019, Best Music Official partners Federation of Jewish Communities of Russia Russian Jewish Congress Chabad Odessa References External links 2019 films 2010s war films 2010s war drama films 2010s Russian films 2010s Russian-language films Holocaust films War epic films Russian epic films Russian historical drama films Russian war drama films Russian-language war drama films Russian World War II films Belarusian war films Belarusian World War II films Epic films based on actual events Rescue of Jews during the Holocaust
Kaddish (2019 film)
[ "Biology" ]
437
[ "Rescue of Jews during the Holocaust", "Behavior", "Altruism" ]
62,417,780
https://en.wikipedia.org/wiki/Atriplex%20hollowayi
Atriplex hollowayi, also known as Holloway's crystalwort, is a species of annual herbaceous plant in the genus Atriplex. This species is endemic to New Zealand. It has the "Nationally Critical" conservation status under the New Zealand Threat Classification System. Description Atriplex hollowayi is a soft, succulent annual plant. The stem and leaves look like they are coated with sugar crystals. The plant is a many-branched shrub that grows in sand to a diameter of , the yellow branches themselves long. The smooth, roughly oval leaves are long by wide with irregular dentate (toothed) margins. Taxonomy Atriplex hollowayi was described in 2000 by New Zealand botanists Peter de Lange and David Norton, who distinguished it from the widespread Atriplex billardierei. On reviewing the herbarium specimens of the latter species, de Lange noted there were two distinct forms: one with larger leaves with entire margins, and larger bracteoles and seeds, and one with smaller leaves with irregular sinuate-dentate margins and smaller bracteoles and seeds. This latter form was restricted to the North Island, and following field work was described as a new species, its name honouring botanist and conservationist John Stevenson Holloway, who had died in 1999. Distribution It inhabits the strand line, the unvegetated stable sand above the high tide line. It was once found on beaches from Northland to Wellington. Now it is only found on one beach, with a population estimated at fewer than 50 plants. The reason for its decline is unknown. References hollowayi Endemic flora of New Zealand Endangered flora of New Zealand Halophytes Annual plants Succulent plants Plants described in 2000 Taxa named by Peter James de Lange
Atriplex hollowayi
[ "Chemistry" ]
362
[ "Halophytes", "Salts" ]
52,054,307
https://en.wikipedia.org/wiki/Social%20physics
Social physics or sociophysics is a field of science which uses mathematical tools inspired by physics to understand the behavior of human crowds. In a modern commercial use, it can also refer to the analysis of social phenomena with big data. Social physics is closely related to econophysics, which uses physics methods to describe economics. History The earliest mentions of a concept of social physics began with the English philosopher Thomas Hobbes. In 1636 he traveled to Florence, Italy, and met physicist-astronomer Galileo Galilei, known for his contributions to the study of motion. It was here that Hobbes began to outline the idea of representing the "physical phenomena" of society in terms of the laws of motion. In his treatise De Corpore, Hobbes sought to relate the movement of "material bodies" to the mathematical terms of motion outlined by Galileo and similar scientists of the time period. Although there was no explicit mention of "social physics", the sentiment of examining society with scientific methods began before the first written mention of social physics. Later, French social thinker Henri de Saint-Simon's first book, the 1803 Lettres d’un Habitant de Geneve, introduced the idea of describing society using laws similar to those of the physical and biological sciences. His student and collaborator was Auguste Comte, a French philosopher widely regarded as the founder of sociology, who first defined the term in an essay appearing in Le Producteur, a journal project by Saint-Simon. Comte defined social physics:Social physics is that science which occupies itself with social phenomena, considered in the same light as astronomical, physical, chemical, and physiological phenomena, that is to say as being subject to natural and invariable laws, the discovery of which is the special object of its researches. 
After Saint-Simon and Comte, Belgian statistician Adolphe Quetelet proposed that society be modeled using mathematical probability and social statistics. Quetelet's 1835 book, Essay on Social Physics: Man and the Development of his Faculties, outlines the project of a social physics characterized by measured variables that follow a normal distribution, and collected data about many such variables. A frequently repeated anecdote is that when Comte discovered that Quetelet had appropriated the term "social physics", he found it necessary to invent a new term, "sociologie" ("sociology") because he disagreed with Quetelet's collection of statistics. There have been several “generations” of social physicists. The first generation began with Saint-Simon, Comte, and Quetelet, and ended in the late 1800s with historian Henry Adams. In the middle of the 20th century, researchers such as the American astrophysicist John Q. Stewart and Finnish geographer Reino Ajo showed that the spatial distribution of social interactions could be described using gravity models. Physicists such as Arthur Iberall use a homeokinetics approach to study social systems as complex self-organizing systems. For example, a homeokinetics analysis of society shows that one must account for flow variables such as the flow of energy, of materials, of action, reproduction rate, and value-in-exchange. More recently there have been a large number of social science papers that use mathematics broadly similar to that of physics, and described as “computational social science”. In the late 1800s, Adams separated “human physics” into the subsets of social physics or social mechanics (sociology of interactions using physics-like mathematical tools) and social thermodynamics or sociophysics (sociology described using mathematical invariances similar to those in thermodynamics). This dichotomy is roughly analogous to the difference between microeconomics and macroeconomics. 
Examples Ising model and voter dynamics One of the most well-known examples in social physics is the relationship of the Ising model and the voting dynamics of a finite population. The Ising model, as a model of ferromagnetism, is represented by a grid of spaces, each of which is occupied by a spin, numerically ±1. Mathematically, the final energy state of the system depends on the interactions of the spaces and their respective spins. For example, if two adjacent spaces share the same spin, the surrounding neighbors will begin to align, and the system will eventually reach a state of consensus. In social physics, it has been observed that voter dynamics in a finite population obey the same mathematical properties as the Ising model. In the social physics model, each spin denotes an opinion, e.g. yes or no, and each space represents a "voter". If two adjacent spaces (voters) share the same spin (opinion), their neighbors begin to align with their spin value; if two adjacent spaces do not share the same spin, then their neighbors remain the same. Eventually, the remaining voters will reach a state of consensus as the "information flows outward". The Sznajd model is an extension of the Ising model and is classified as an econophysics model. It emphasizes the alignment of the neighboring spins in a phenomenon called "social validation". It follows the same properties as the Ising model and is extended to observe the patterns of opinion dynamics as a whole, rather than focusing on just voter dynamics. Potts model and cultural dynamics The Potts model is a generalization of the Ising model and has been used to examine the concept of cultural dissemination as described by American political scientist Robert Axelrod. Axelrod's model of cultural dissemination states that individuals who share cultural characteristics are more likely to interact with each other, thus increasing the number of overlapping characteristics and expanding their interaction network. 
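The voter dynamics described above can be sketched in a few lines. This is a minimal illustrative simulation on a ring of voters, with topology and parameters chosen arbitrarily rather than taken from any study cited here:

```python
import random

def voter_model(n=10, steps=20000, seed=1):
    """Voter dynamics on a ring: a randomly chosen voter adopts the
    opinion (spin, +1 or -1) of a random neighbour, repeated until
    the population reaches consensus."""
    random.seed(seed)
    spins = [random.choice((-1, 1)) for _ in range(n)]
    for _ in range(steps):
        if len(set(spins)) == 1:
            break                      # consensus reached
        i = random.randrange(n)
        j = (i + random.choice((-1, 1))) % n
        spins[i] = spins[j]            # align with a neighbour's opinion
    return spins
```

On a finite ring this process always reaches consensus eventually; the step budget here is far above the typical consensus time for ten voters.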
The Potts model has the caveat that each spin can hold multiple values, unlike the Ising model that could only hold one value. Each spin, then, represents an individual's "cultural characteristics... [or] in Axelrod's words, 'the set of individual attributes that are subject to social influence'". It is observed that, using the mathematical properties of the Potts model, neighbors whose cultural characteristics overlap tend to interact more frequently than with unlike neighbors, thus leading to a self-organizing grouping of similar characteristics. Simulations of the Potts model show that Axelrod's model of cultural dissemination agrees with the Potts model as an Ising-class model. Recent work In modern use “social physics” refers to using “big data” analysis and the mathematical laws to understand the behavior of human crowds. The core idea is that data about human activity (e.g., phone call records, credit card purchases, taxi rides, web activity) contain mathematical patterns that are characteristic of how social interactions spread and converge. These mathematical invariances can then serve as a filter for analysis of behavior changes and for detecting emerging behavioral patterns. Social physics has recently been applied to analyze the COVID-19 pandemic. It has been demonstrated that the large difference in the spread of COVID-19 between countries is due to differences in responses to social stress. The combination of traditional epidemic models with social physics models of the classical general adaptation syndrome triad, "anxiety-resistance-exhaustion", accurately describes the first two waves of the COVID-19 epidemic for 13 countries. The differences between countries are concentrated in two kinetic constants: the rate of mobilization and the rate of exhaustion. Recent books about social physics include MIT Professor Alex Pentland's book Social Physics and Nature editor Mark Buchanan's book The Social Atom. 
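Axelrod's cultural-dissemination dynamics discussed in the Examples section can likewise be sketched. The ring topology, feature/trait counts, and step budget below are arbitrary illustrative choices, not parameters from Axelrod's paper:

```python
import random

def axelrod_step(culture, f=3, rng=random):
    """One interaction of Axelrod's model on a ring of agents, each a
    tuple of f cultural features. A random agent interacts with a
    random neighbour with probability equal to their similarity, and
    on interaction adopts one of the neighbour's differing traits."""
    n = len(culture)
    i = rng.randrange(n)
    j = (i + rng.choice((-1, 1))) % n
    a, b = list(culture[i]), culture[j]
    shared = sum(x == y for x, y in zip(a, b))
    if 0 < shared < f and rng.random() < shared / f:
        k = rng.choice([t for t in range(f) if a[t] != b[t]])
        a[k] = b[k]                    # adopt one differing trait
        culture[i] = tuple(a)

random.seed(2)
F, Q, N = 3, 4, 12                     # features, traits per feature, agents
culture = [tuple(random.randrange(Q) for _ in range(F)) for _ in range(N)]
for _ in range(5000):
    axelrod_step(culture, F)
```

Running the loop drives neighbouring agents toward shared cultural vectors, the self-organizing grouping of similar characteristics described above.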
Popular reading about sociophysics includes English physicist Philip Ball's Why Society is a Complex Matter, Dirk Helbing's The Automation of Society is Next, and American physicist Laszlo Barabasi's book Linked. See also Historic recurrence Logology (science) References Further reading Arnopoulos, Paris, Sociophysics, Cosmos and Chaos in Nature and Culture, New York, Nova Science Publishers Inc., 1st ed. 1995, 2nd ed. 2005. Ball, Philip, Critical Mass: How One Thing Leads to Another, 2004. Big data
Social physics
[ "Physics", "Technology" ]
1,641
[ "Social physics", "Applied and interdisciplinary physics", "Data", "Big data" ]
52,055,632
https://en.wikipedia.org/wiki/Hironaka%20decomposition
In mathematics, a Hironaka decomposition is a representation of an algebra over a field as a finitely generated free module over a polynomial subalgebra or a regular local ring. Such decompositions are named after Heisuke Hironaka, who used this in his unpublished master's thesis at Kyoto University. Hironaka's criterion, sometimes called miracle flatness, states that a local ring R that is a finitely generated module over a regular Noetherian local ring S is Cohen–Macaulay if and only if it is a free module over S. There is a similar result for rings that are graded over a field rather than local. Explicit decomposition of an invariant algebra Let V be a finite-dimensional vector space over an algebraically closed field of characteristic zero, K, carrying a representation of a group G, and consider the polynomial algebra on V, K[V]. The algebra K[V] carries a grading with K[V]_0 = K, which is inherited by the invariant subalgebra K[V]^G. A famous result of invariant theory, which provided the answer to Hilbert's fourteenth problem, is that if G is a linearly reductive group and V is a rational representation of G, then K[V]^G is finitely-generated. Another important result, due to Noether, is that any finitely-generated graded algebra A with A_0 = K admits a (not necessarily unique) homogeneous system of parameters (HSOP). A HSOP (also termed primary invariants) is a set of homogeneous polynomials, θ_1, …, θ_m, which satisfy two properties: The θ_i are algebraically independent. The zero set of the θ_i coincides with the nullcone of V. Importantly, this implies that the algebra can then be expressed as a finitely-generated module over the subalgebra generated by the HSOP, K[θ_1, …, θ_m]. In particular, one may write K[V]^G = Σ_k η_k K[θ_1, …, θ_m], where the η_k are called secondary invariants. Now if K[V]^G is Cohen–Macaulay, which is the case if G is linearly reductive, then it is a free (and as already stated, finitely-generated) module over any HSOP. Thus, one in fact has a Hironaka decomposition K[V]^G = ⊕_k η_k K[θ_1, …, θ_m]. 
In particular, each element p in K[V]^G can be written uniquely as p = Σ_k η_k p_k, where p_k ∈ K[θ_1, …, θ_m], and the product of any two secondaries is uniquely given by η_i η_j = Σ_k η_k p_k^{(ij)}, where p_k^{(ij)} ∈ K[θ_1, …, θ_m]. This specifies the multiplication in K[V]^G unambiguously. See also Rees decomposition Stanley decomposition References Commutative algebra
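As a toy illustration (not taken from the article): for the Z/2 action (x, y) → (−x, −y), the invariant ring has HSOP {x², y²} and secondary invariants {1, xy}, so every invariant splits uniquely as p = a(x², y²) + xy·b(x², y²). The split can be computed on monomial exponents; the function name and dict encoding are illustrative choices:

```python
def hironaka_split(poly):
    """Split an invariant of the Z/2 action (x, y) -> (-x, -y) as
    p = a(x^2, y^2) + x*y * b(x^2, y^2): the Hironaka decomposition
    with HSOP {x^2, y^2} and secondary invariants {1, x*y}.
    poly: dict mapping exponent pairs (i, j) to coefficients."""
    a, b = {}, {}
    for (i, j), c in poly.items():
        assert (i + j) % 2 == 0, "not invariant under simultaneous negation"
        if i % 2 == 0:                        # i, j share parity; both even
            a[(i // 2, j // 2)] = c           # monomial of a in (x^2, y^2)
        else:                                 # both odd: factor out x*y
            b[((i - 1) // 2, (j - 1) // 2)] = c
    return a, b
```

For example, x⁴ + 3x³y + 5xy + 2y² splits as (x⁴ + 2y²) + xy·(3x² + 5), and the split is unique because a monomial has either both exponents even or both odd.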
Hironaka decomposition
[ "Mathematics" ]
470
[ "Fields of abstract algebra", "Commutative algebra" ]
52,057,576
https://en.wikipedia.org/wiki/NGC%20311
NGC 311 is a lenticular galaxy in the constellation Pisces. Its velocity with respect to the cosmic microwave background is 4636 ± 25 km/s, which corresponds to a Hubble distance of . However, one non-redshift measurement gives a distance of . It was discovered on September 18, 1828, by British astronomer John Herschel. According to A.M. Garcia, NGC 311 is a member of the NGC 315 Group (also known as LGG 14). This group contains 42 galaxies, including NGC 226, NGC 243, NGC 262, NGC 266, NGC 315, NGC 338, IC 43, IC 66, and IC 69, among others. See also List of NGC objects (1–1000) References External links 0311 18280918 Pisces (constellation) Lenticular galaxies Discoveries by John Herschel 003434 +05-03-028 00592
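The Hubble distance quoted above follows from Hubble's law, d = v/H0. The value of H0 below is an illustrative assumption (the article's source may use a slightly different value, which would shift the result):

```python
def hubble_distance_mpc(v_km_s, h0=70.0):
    """Hubble's law: distance d = v / H0, with v in km/s and
    H0 in km/s/Mpc, giving d in megaparsecs."""
    return v_km_s / h0

MPC_TO_MLY = 3.2616   # megaparsecs to millions of light-years

d_mpc = hubble_distance_mpc(4636)   # NGC 311's CMB-frame velocity
d_mly = d_mpc * MPC_TO_MLY
```

With H0 = 70 km/s/Mpc this gives roughly 66 Mpc, about 216 million light-years.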
NGC 311
[ "Astronomy" ]
185
[ "Pisces (constellation)", "Constellations" ]
52,057,614
https://en.wikipedia.org/wiki/NGC%20312
NGC 312 is an elliptical galaxy in the constellation Phoenix. It was discovered on September 5, 1836, by John Herschel. NGC 312 is situated south of the celestial equator and, as such, it is more easily visible from the southern hemisphere. Given its B magnitude of 13.4, NGC 312 is visible with the help of a telescope having an aperture of 10 inches (250mm) or more. References 0312 18360905 Phoenix (constellation) Elliptical galaxies 003343
NGC 312
[ "Astronomy" ]
99
[ "Phoenix (constellation)", "Constellations" ]
52,058,358
https://en.wikipedia.org/wiki/Alistair%20Lawrence
Alistair B. Lawrence (born 1954) is an ethologist. He currently holds a joint chair in animal behaviour and welfare at Scotland's Rural College and the University of Edinburgh. Education Lawrence graduated from the University of St Andrews with a degree in zoology. He then studied for his PhD at the University of Edinburgh under the direction of David Wood-Gush. His 1985 thesis is entitled "The social organization of Scottish blackface sheep". Career In 1995 he received the RSPCA/BSAS award for innovative developments in animal welfare for his 'outstanding contribution to animal welfare research'. He has published extensively throughout his career. Lawrence is a past secretary of the International Society for Applied Ethology and is a supporter of Compassion in World Farming. He has served on the UK Farm Animal Welfare Committee and has been appointed to the council of the Universities Federation for Animal Welfare. With Aubrey Manning he oversees the David Wood-Gush Trust Fund that set up and supports the annual Wood-Gush lecture. References External links Scotland's Rural College homepage University of Edinburgh homepage 1954 births 20th-century Scottish scientists 21st-century scientists Academics of the University of Edinburgh Alumni of the University of St Andrews Alumni of the University of Edinburgh Ethologists Living people People educated at Strathallan School People from Perthshire Scottish animal welfare scholars 20th-century British zoologists
Alistair Lawrence
[ "Biology" ]
277
[ "Ethology", "Behavior", "Ethologists" ]
52,058,583
https://en.wikipedia.org/wiki/Energy%20system
An energy system is a system primarily designed to supply energy-services to end-users. The intent behind energy systems is to minimise energy losses to a negligible level, as well as to ensure the efficient use of energy. The IPCC Fifth Assessment Report defines an energy system as "all components related to the production, conversion, delivery, and use of energy". The first two definitions allow for demand-side measures, including daylighting, retrofitted building insulation, and passive solar building design, as well as socio-economic factors, such as aspects of energy demand management and remote work, while the third does not. Neither does the third account for the informal economy in traditional biomass that is significant in many developing countries. The analysis of energy systems thus spans the disciplines of engineering and economics. Merging ideas from both areas to form a coherent description, particularly where macroeconomic dynamics are involved, is challenging. The concept of an energy system is evolving as new regulations, technologies, and practices enter into service – for example, emissions trading, the development of smart grids, and the greater use of energy demand management, respectively. Treatment From a structural perspective, an energy system is like any system and is made up of a set of interacting component parts, located within an environment. These components derive from ideas found in engineering and economics. Taking a process view, an energy system "consists of an integrated set of technical and economic activities operating within a complex societal framework". The identification of the components and behaviors of an energy system depends on the circumstances, the purpose of the analysis, and the questions under investigation. The concept of an energy system is therefore an abstraction which usually precedes some form of computer-based investigation, such as the construction and use of a suitable energy model. 
Viewed in engineering terms, an energy system lends itself to representation as a flow network: the vertices map to engineering components like power stations and pipelines and the edges map to the interfaces between these components. This approach allows collections of similar or adjacent components to be aggregated and treated as one to simplify the model. Once described thus, flow network algorithms, such as minimum cost flow, may be applied. The components themselves can be treated as simple dynamical systems in their own right. Economic modeling Conversely, relatively pure economic modeling may adopt a sectoral approach with only limited engineering detail present. The sector and sub-sector categories published by the International Energy Agency are often used as a basis for this analysis. A 2009 study of the UK residential energy sector contrasts the use of the technology-rich Markal model with several UK sectoral housing stock models. Data International energy statistics are typically broken down by carrier, sector and sub-sector, and country. Energy carriers (or energy products) are further classified as primary energy and secondary (or intermediate) energy and sometimes final (or end-use) energy. Published energy datasets are normally adjusted so that they are internally consistent, meaning that all energy stocks and flows must balance. The IEA regularly publishes energy statistics and energy balances with varying levels of detail and cost and also offers mid-term projections based on this data. The notion of an energy carrier, as used in energy economics, is distinct and different from the definition of energy used in physics. Scopes Energy systems can range in scope, from local, municipal, national, and regional, to global, depending on issues under investigation. Researchers may or may not include demand side measures within their definition of an energy system. 
The Intergovernmental Panel on Climate Change (IPCC) does so, for instance, but covers these measures in separate chapters on transport, buildings, industry, and agriculture. Household consumption and investment decisions may also be included within the ambit of an energy system. Such considerations are not common because consumer behavior is difficult to characterize, but the trend is to include human factors in models. Household decision-taking may be represented using techniques from bounded rationality and agent-based behavior. The American Association for the Advancement of Science (AAAS) specifically advocates that "more attention should be paid to incorporating behavioral considerations other than price- and income-driven behavior into economic models [of the energy system]". Energy-services The concept of an energy-service is central, particularly when defining the purpose of an energy system: Energy-services can be defined as amenities that are either furnished through energy consumption or could have been thus supplied. More explicitly: A consideration of energy-services per capita and how such services contribute to human welfare and individual quality of life is paramount to the debate on sustainable energy. People living in poor regions with low levels of energy-services consumption would clearly benefit from greater consumption, but the same is not generally true for those with high levels of consumption. The notion of energy-services has given rise to energy-service companies (ESCo) who contract to provide energy-services to a client for an extended period. The ESCo is then free to choose the best means to do so, including investments in the thermal performance and HVAC equipment of the buildings in question. International standards ISO13600, ISO13601, and ISO13602 form a set of international standards covering technical energy systems (TES). 
Although withdrawn prior to 2016, these documents provide useful definitions and a framework for formalizing such systems. The standards depict an energy system broken down into supply and demand sectors, linked by the flow of tradable energy commodities (or energywares). Each sector has a set of inputs and outputs, some intentional and some harmful byproducts. Sectors may be further divided into subsectors, each fulfilling a dedicated purpose. The demand sector is ultimately present to supply energyware-based services to consumers (see energy-services). Energy system redesign and transformation Energy system design includes the redesigning of energy systems to ensure sustainability of the system and its dependents and for meeting requirements of the Paris Agreement for climate change mitigation. Researchers are designing energy systems models and transformational pathways for renewable energy transitions towards 100% renewable energy, often in the form of peer-reviewed text documents created once by small teams of scientists and published in a journal. Considerations include the system's intermittency management, air pollution, various risks (such as for human safety, environmental risks, cost risks and feasibility risks), stability for prevention of power outages (including grid dependence or grid-design), resource requirements (including water and rare minerals and recyclability of components), technology/development requirements, costs, feasibility, other affected systems (such as land-use that affects food systems), carbon emissions, available energy quantity and transition-concerning factors (including costs, labor-related issues and speed of deployment). Energy system design can also consider energy consumption, such as in terms of absolute energy demand, waste and consumption reduction (e.g. via reduced energy-use, increased efficiency and flexible timing), process efficiency enhancement and waste heat recovery. 
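The requirement noted earlier that all energy stocks and flows must balance can be sketched for a simple conversion chain. The efficiencies and quantities below are illustrative assumptions, not figures from any published energy balance:

```python
def energy_balance(primary, efficiencies):
    """Trace an energy flow through a chain of conversion steps and
    check that input equals useful output plus losses overall."""
    flow, losses = primary, []
    for eff in efficiencies:
        out = flow * eff
        losses.append(flow - out)      # conversion loss at this step
        flow = out
    # the balance must close: primary = final energy + total losses
    assert abs(primary - (flow + sum(losses))) < 1e-9
    return flow, losses

# 100 units of primary energy through a plant (38%) and a grid (95%)
final, losses = energy_balance(100.0, [0.38, 0.95])
```

Of the 100 primary units, 36.1 reach the end-user; the remainder is accounted for step by step as conversion and transmission losses, mirroring how published balances are made internally consistent.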
A study noted significant potential for a type of energy systems modelling to "move beyond single disciplinary approaches towards a sophisticated integrated perspective". See also Control volume – a concept from mechanics and thermodynamics Electric power system – a network of electrical components used to generate, transfer, and use electric power Energy development – the effort to provide societies with sufficient energy under the reduced social and environmental impact Energy modeling – the process of building computer models of energy systems Energy industry – the supply-side of the energy sector Mathematical model – the representation of a system using mathematics and often solved using computers Object-oriented programming – a computer programming paradigm suited to the representation of energy systems as networks Network science – the study of complex networks Open energy system databases – database projects which collect, clean, and republish energy-related datasets Open energy system models – a review of energy system models that are also open source Sankey diagram – used to show energy flows through a system Notes References External links Energy Energy development Energy economics Networks Energy infrastructure Systems science
Energy system
[ "Physics", "Environmental_science" ]
1,623
[ "Physical quantities", "Energy economics", "Energy (physics)", "Energy", "Environmental social science" ]
52,060,075
https://en.wikipedia.org/wiki/DDoS%20attacks%20on%20Dyn
On October 21, 2016, three consecutive distributed denial-of-service attacks were launched against the Domain Name System (DNS) provider Dyn. The attack caused major Internet platforms and services to be unavailable to large swathes of users in Europe and North America. The groups Anonymous and New World Hackers claimed responsibility for the attack, but scant evidence was provided. As a DNS provider, Dyn provides to end-users the service of mapping an Internet domain name—when, for instance, entered into a web browser—to its corresponding IP address. The distributed denial-of-service (DDoS) attack was accomplished through numerous DNS lookup requests from tens of millions of IP addresses. The activities are believed to have been executed through a botnet consisting of many Internet-connected devices—such as printers, IP cameras, residential gateways and baby monitors—that had been infected with the Mirai malware. Affected services Services affected by the attack included: Airbnb Amazon.com Ancestry.com The A.V. Club BBC The Boston Globe Box Business Insider CNN Comcast CrunchBase DirecTV The Elder Scrolls Online Electronic Arts Etsy Evergreen ILS FiveThirtyEight Fox News The Guardian GitHub Grubhub HBO Heroku HostGator iHeartRadio Imgur Indiegogo Mashable National Hockey League Netflix The New York Times Overstock.com PayPal Pinterest Pixlr PlayStation Network Qualtrics Quora Reddit Roblox Ruby Lane RuneScape SaneBox Seamless Second Life Shopify Slack SoundCloud Squarespace Spotify Starbucks Storify Swedish Civil Contingencies Agency Swedish Government Tumblr Twilio Twitter Verizon Communications Visa Vox Media Walgreens The Wall Street Journal Wikia Wired Wix.com WWE Network Xbox Live Yammer Yelp Zillow Investigation The US Department of Homeland Security started an investigation into the attacks, according to a White House source. No group of hackers claimed responsibility during or in the immediate aftermath of the attack. 
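The name-to-address mapping that Dyn provided, as described above, can be illustrated with the operating system's stub resolver. This sketch simply queries whichever resolver the local system is configured with; the function name is an illustrative choice:

```python
import socket

def resolve(hostname):
    """Ask the system's DNS resolver to map a hostname to its
    IP addresses, the service a provider like Dyn performs for
    the domains it is authoritative for."""
    infos = socket.getaddrinfo(hostname, None, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})
```

When an authoritative provider is overwhelmed, as Dyn was, such lookups time out, which is why the sites listed above became unreachable even though their own servers were up.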
Dyn's chief strategist said in an interview that the assaults on the company's servers were very complex and unlike everyday DDoS attacks. Barbara Simons, a member of the advisory board of the United States Election Assistance Commission, said such attacks could affect electronic voting for overseas military or civilians. Dyn disclosed that, according to business risk intelligence firm FlashPoint and Akamai Technologies, the attack was a botnet coordinated through numerous Internet of Things-enabled (IoT) devices, including cameras, residential gateways, and baby monitors, that had been infected with Mirai malware. The attribution of the attack to the Mirai botnet had been previously reported by BackConnect Inc., another security firm. Dyn stated that they were receiving malicious requests from tens of millions of IP addresses. Mirai is designed to brute-force the security on an IoT device, allowing it to be controlled remotely. Cybersecurity investigator Brian Krebs noted that the source code for Mirai had been released onto the Internet in an open-source manner some weeks prior, which made the investigation of the perpetrator more difficult. On 25 October 2016, US President Obama stated that the investigators still had no idea who carried out the cyberattack. On 13 December 2017, the Justice Department announced that three men (Paras Jha, 21, Josiah White, 20, and Dalton Norman, 21) had entered guilty pleas in cybercrime cases relating to the Mirai and clickfraud botnets. Perpetrators In correspondence with the website Politico, hacktivist groups SpainSquad, Anonymous, and New World Hackers claimed responsibility for the attack in retaliation against Ecuador's rescinding Internet access to WikiLeaks founder Julian Assange, at their embassy in London, where he had been granted asylum. This claim has yet to be confirmed. WikiLeaks alluded to the attack on Twitter, tweeting "Mr. Assange is still alive and WikiLeaks is still publishing. 
We ask supporters to stop taking down the US internet. You proved your point." New World Hackers has claimed responsibility in the past for similar attacks targeting sites like BBC and ESPN.com. On October 26, FlashPoint stated that the attack was most likely done by script kiddies. A November 17, 2016, Forbes article reported that the attack was likely carried out by "an angry gamer". On December 9, 2020, one of the perpetrators pleaded guilty to taking part in the attack. The perpetrator's name was withheld due to his or her age. See also WannaCry ransomware attack Mirai (malware) Vulnerability (computing) References 2016 in computing Denial-of-service attacks October 2016 crimes in Europe October 2016 crimes in the United States Internet of things WikiLeaks Botnets Malware Domain Name System Hacking in the 2010s Cloud infrastructure attacks and failures 2010s internet outages
DDoS attacks on Dyn
[ "Technology", "Engineering" ]
1,015
[ "Malware", "Cybersecurity engineering", "Denial-of-service attacks", "Cloud infrastructure attacks and failures", "Computer security exploits" ]
52,060,822
https://en.wikipedia.org/wiki/Dirty%20COW
Dirty COW (Dirty copy-on-write) is a computer security vulnerability of the Linux kernel that affected all Linux-based operating systems, including Android devices, that used older versions of the Linux kernel created before 2018. It is a local privilege escalation bug that exploits a race condition in the implementation of the copy-on-write mechanism in the kernel's memory-management subsystem. Computers and devices that still use the older kernels remain vulnerable. The original exploit sample leveraging this vulnerability was discovered by Phil Oester during the investigation of a compromised machine. The author of this sample is still unknown. Because of the race condition, with the right timing, a local attacker can exploit the copy-on-write mechanism to turn a read-only mapping of a file into a writable mapping. Although it is a local privilege escalation, remote attackers can use it in conjunction with other exploits that allow remote execution of non-privileged code to achieve remote root access on a computer. The attack itself does not leave traces in the system log. The vulnerability has the Common Vulnerabilities and Exposures designation CVE-2016-5195. Dirty Cow was one of the first security issues transparently fixed in Ubuntu by the Canonical Live Patch service. It has been demonstrated that the vulnerability can be utilized to root any Android device before Android version 7 (Nougat). History The vulnerability has existed in the Linux kernel since version 2.6.22 released in September 2007, and there is information about it being actively exploited at least since October 2016. The vulnerability has been patched in Linux kernel versions 4.8.3, 4.7.9, 4.4.26 and newer. The patch produced in 2016 did not fully address the issue and a revised patch was released on November 27, 2017, before public dissemination of the vulnerability. 
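The copy-on-write semantics at the heart of the bug, a private mapping whose writes go to the process's own copy of the pages and never reach the underlying file, can be illustrated safely with Python's mmap module. This demonstrates normal copy-on-write behavior, not the exploit that subverts it:

```python
import mmap
import tempfile

def cow_demo():
    """Show copy-on-write mapping semantics: writes to an
    ACCESS_COPY (private) mapping modify the in-memory copy
    of the pages, while the file on disk stays untouched."""
    with tempfile.TemporaryFile() as f:
        f.write(b"read-only data")
        f.flush()
        view = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_COPY)
        view[:4] = b"HACK"             # write lands in the private copy
        in_memory = view[:14]
        f.seek(0)
        on_disk = f.read()             # the file itself is unchanged
        view.close()
    return in_memory, on_disk
```

Dirty COW's race condition tricked the kernel into writing through such a private mapping to the underlying read-only file, which is exactly the boundary this demo shows being respected under normal operation.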
Applications The Dirty COW vulnerability has many perceived use cases, including proven examples, such as obtaining root permissions in Android devices, as well as several speculated implementations. There are many binaries used in Linux which are read-only, and can only be modified or written to by a user of higher permissions, such as the root user. When privileges are escalated, whether by genuine or malicious means – such as by using the Dirty COW exploit – the user can modify usually unmodifiable binaries and files. If a malicious individual could use the Dirty COW vulnerability to escalate their permissions, they could change a file, such as /bin/bash, so that it performs additional, unexpected functions, such as a keylogger. When a user starts a program which has been infected, they will inadvertently allow the malicious code to run. If the exploit targets a program which is run with root privileges, the exploit will have those same privileges. Remedies and recourse At the time of its discovery, anyone using a machine running Linux was susceptible to the exploit. The exploit has no preventive workaround; the only cure is a patch or running a newer kernel version that is no longer vulnerable. Linus Torvalds committed a patch on October 18, 2016, acknowledging that it was an old vulnerability he had attempted to fix eleven years prior. Some distributors provide patches, such as Canonical, who provided a live patch. In the absence of a patch, there are a few mitigation technologies including SystemTap, and very little security from SELinux or AppArmor. Antivirus software has the potential to detect elevated permissions attacks, but it cannot prevent the attack. When given the opportunity, the safest route is to upgrade the Linux kernel to one of the patched versions noted above (4.8.3, 4.7.9 or 4.4.26, or newer). References External links CVE-2016-5195 at Red Hat CVE-2016-5195 at SUSE 2016 in computing Internet security Software bugs Linux Privilege escalation exploits Computer security exploits
Dirty COW
[ "Technology" ]
778
[ "Privilege escalation exploits", "Computer security exploits" ]
52,060,880
https://en.wikipedia.org/wiki/Paolo%20Farinella%20Prize
The Paolo Farinella Prize is named after Paolo Farinella. The prize recognizes significant contributions in the fields of planetary sciences, space geodesy, fundamental physics, science popularization, security in space, weapons control, and disarmament. Recipients must be under the age of 47 (the age at which Farinella died) to qualify for the prize. Paolo Farinella Prize Winners See also List of astronomy awards References Astronomy prizes Awards established in 2010
Paolo Farinella Prize
[ "Astronomy", "Technology" ]
95
[ "Science and technology awards", "Science award stubs", "Astronomy prizes" ]
52,062,473
https://en.wikipedia.org/wiki/Spinning%20bee
Spinning bees were 18th-century public events where women in the American Colonies produced homespun cloth to help the colonists reduce their dependence on British goods. They emerged in the decade prior to the American Revolution as a way for women to protest British policies and taxation. Historical background Great Britain enforced the 1765 Stamp Act on its American colonies, which taxed official documents throughout the colonies. The British Crown viewed these measures as a legitimate way to raise revenue. In contrast, many colonists viewed these acts as tyrannical, arguing that taxation without consent violated their rights as Englishmen. One common way that colonists protested this act of Parliament was through non-importation agreements and boycotts. Though the Stamp Act 1765 was repealed in 1766, the following year Parliament passed the Townshend Acts, imposing a new tax on goods such as glass and paper. Non-importation movements and boycotts resumed in protest of these additional taxes. Spinning bees were among these acts of defiance, encouraging local production of cloth in place of imported English textiles that bore the new tax. Political significance The homespun cloth and garments that these spinning bees produced became a political symbol as well as a material boycott. Wearing homespun showed other colonists that the wearer was protesting the British by refusing to buy British clothes. In addition to average colonists, prominent colonial leaders and politicians also donned homespun clothing as a show of rebellion against the British Crown. One year prior to the outbreak of the Revolution, the entirety of Harvard's graduating class wore homespun garments. Spinning bees also held personal importance for women, involving them in the resistance to Great Britain when they had previously been excluded from public displays of resistance against the Crown. 
Process of spinning bees The spinning bees sponsored by rebel groups such as the Daughters of Liberty represented one way that colonial women could get involved in the protest of imperial policies. The colonies relied on Great Britain for textiles, meaning that a successful boycott would require alternate sources for many goods that colonists imported. The task of enacting the boycott fell to women, providing them with an opportunity to enter the public side of the protest alongside men against the British Crown. Women began to compete publicly against one another to see who could make the most homemade cloth, known as homespun. These contests became known as spinning bees. The Sons of Liberty often co-hosted these events with the Daughters of Liberty as a way to publicly support the Patriot cause against the British. Like other local festivities of the time, spinning bees included songs, picnics, and friendly competitions. Newspaper accounts, for example those from Rhode Island, also demonstrate that spinning bees attempted to use the spirit of competition to bridge the gap between married and unmarried women as well as lower- and upper-class women. The spinning bees would often be community events, taking place in the center of town or in the town minister's home, depending upon the class status of the women involved. It was more likely for poorer women to spin as part of a bigger festivity than upper-class women, who spun at their minister's house. Legacy Spinning bees were a predecessor to women's paid work outside the home. Since the spinning bees required women to spin and weave out in public, they presented an opportunity for women to participate in the colonial economy in a public setting. The ability for women to spin as well as weave in public paved the way for women's eventual role in the United States factory system. Factory work became one of the few occupations open to women in the 19th century. 
In other countries Before the advent of electric lighting in Europe, rural and urban women in Germany would gather to do their spinning and other handicrafts in a single house or room in order to preserve firewood, candles, and lantern oil, thus collectively saving supplies for heating and lighting. This was variably referred to depending on the dialect as a (), (light room), or (distaff room), among other terms. While the spinning rooms were nominally segregated by gender, it was common for young men to visit the spinning rooms to accompany young women home in the evenings. As such, it was one of the few places that a relationship could be started away from the watchful eyes of church authorities and family members. From the 16th century onwards, this practice later drew outrage from Catholics and Protestants alike due to accusations of sexual debauchery. In response, a ('light man') could be assigned to a spinning room to hold the members responsible to spiritual authorities. Ernest Borneman mentions the following obscene terms from spinning room jargon: ('naughty bride'), (flax queen), (commercial bride), (rough bride): The prettiest girl was chosen to be the "naughty bride" at the time of the flax breaking. (shaggy bush): A distaff coated in flax. The resembled a fir tree decorated with ribbons, which a girl threw under the boys so that they could fight for it: Whoever conquered it won the favor of the . : On the back of her smock, the wore a flax wreath, which the boys tried to soak with a bucket of water to get the girl to hang up her skirt and petticoats to dry. : The flax waste () was stuffed into the boys' waistbands by the girls, which served as a playful excuse to quickly grope the male genitals. (meat pile): After dancing, all participants dropped to the floor, creating the largest possible crowd, in which there was an opportunity for mutual contact. This custom was particularly offensive and was condemned in numerous sermons. 
(flax break): "tell nonsense, make stupid jokes". (hair drying): drying flax or coitus. : Children born in autumn who may have been conceived in the spinning room during flax crumbling in the previous winter months. References American Revolutionary War Quilting Weaving Civil disobedience in the United States Protest tactics Manufacturing 18th century in economic history History of fashion Textile and clothing labor disputes in the United States
Spinning bee
[ "Engineering" ]
1,238
[ "Manufacturing", "Mechanical engineering" ]
73,432,245
https://en.wikipedia.org/wiki/Mar%C3%ADa%20J.%20Carro
María Jesús Carro Rossell (born 1961) is a Spanish mathematician specializing in mathematical analysis, including Fourier analysis, functional analysis, harmonic analysis, operator theory and the analysis of Lorentz spaces. She is a professor at the Complutense University of Madrid, in the Department of Mathematical Analysis and Applied Mathematics. Education and career Carro was born in 1961, and motivated to work in mathematics by her father, who was prevented from studying science by the Spanish Civil War. She earned a degree in mathematical sciences in 1984 from the University of Extremadura. Next, she went to the University of Barcelona for doctoral study in mathematics, completing her Ph.D. in 1988 under the supervision of Joan Cerdà, with the dissertation Interpolación compleja de operadores lineales. After postdoctoral study at Washington University in St. Louis with Guido Weiss, she obtained a faculty position at the Autonomous University of Barcelona in 1991, and then returned to the University of Barcelona in 1992. There, she held a professorial chair from 1993 to 2019, when she moved to the Complutense University of Madrid. Recognition Carro received the medal of the Royal Spanish Mathematical Society in 2020. She was elected as a corresponding member of the Spanish Royal Academy of Sciences in 2021. References External links Home page 1961 births Living people Spanish mathematicians Spanish women mathematicians Mathematical analysts University of Extremadura alumni University of Barcelona alumni Academic staff of the Autonomous University of Barcelona Academic staff of the University of Barcelona Academic staff of the Complutense University of Madrid Washington University in St. Louis fellows
María J. Carro
[ "Mathematics" ]
319
[ "Mathematical analysis", "Mathematical analysts" ]
73,432,376
https://en.wikipedia.org/wiki/Clitocybula%20azurea
Clitocybula azurea is a species of mushroom in the genus Clitocybula. It is native to Central and South America. Clitocybula azurea has subglobose to broadly ellipsoid, smooth, inequilateral, amyloid spores which measure 4–6.5 × 3–5 μm. The cheilocystidia and pleurocystidia measure 18–30 × 3.5–8 μm and are versiform, cylindrical, clavate or utriform, subcapitate or with an apical papilla. Phylogenetically Clitocybula azurea is in Clitocybula sensu stricto, and is a sister species to Clitocybula familia. Gallery References Marasmiaceae Fungus species
Clitocybula azurea
[ "Biology" ]
164
[ "Fungus stubs", "Fungi", "Fungus species" ]
73,435,810
https://en.wikipedia.org/wiki/Cardiovascular%20disease%20in%20Nepal
Cardiovascular disease is rising in Nepal. More than 27% of total mortality in Nepal is attributed to cardiovascular disease. The problem is worsened by a lack of health infrastructure, geographical difficulties, a weak economy, and a low literacy rate. The prevalence of cardiovascular risk factors is particularly high among healthy adults in urban areas, according to a study conducted by Om Murti Anil in Kathmandu in 2014. According to the Nepal Health Research Council and the World Health Organization, cardiovascular diseases are one of the major components of the non-communicable diseases that account for two-thirds of deaths among Nepalese people. Improvements in lifestyle and behavior may reduce the death rate. Cardiovascular health awareness in Nepal has improved and shows promise in mitigating the burden of disease. References Aging-associated diseases Cardiovascular diseases Health in Nepal
Cardiovascular disease in Nepal
[ "Biology" ]
156
[ "Senescence", "Aging-associated diseases" ]
73,435,829
https://en.wikipedia.org/wiki/List%20of%20least-polluted%20cities%20by%20particulate%20matter%20concentration
Below is a list of 526 cities sorted by their annual mean concentration of PM2.5 (μg/m3) in 2022. By default, the least polluted cities, which have the fewest particulates in the air, come first. Click on the arrows next to the table's headers to rank the most polluted cities first. Please note that constraints exist in this type of list. For instance, some regions, such as Africa and South America, lack air pollution reporting tools, so their pollution levels are probably not reflected in this list. Moreover, the fact that many cities from a certain country are featured in the list may only mean that the country has a large and wide air pollution monitoring network, which may or may not be an indicator of heavy pollution. See also List of most-polluted cities by particulate matter concentration List of countries by air pollution Air quality monitoring Air purifier References Particulates Pollutants Visibility Air pollution Pollution Pollution by city Pollution Pollution-related lists
List of least-polluted cities by particulate matter concentration
[ "Physics", "Chemistry", "Mathematics" ]
198
[ "Visibility", "Physical quantities", "Quantity", "Particulates", "Particle technology", "Wikipedia categories named after physical quantities" ]
73,438,573
https://en.wikipedia.org/wiki/Dark-field%20X-ray%20microscopy
Dark-field X-ray microscopy (DFXM or DFXRM) is an imaging technique used for multiscale structural characterisation. It is capable of mapping deeply embedded structural elements with nm-resolution using synchrotron X-ray diffraction-based imaging. The technique works by using scattered X-rays to create a high degree of contrast, and by measuring the intensity and spatial distribution of the diffracted beams, it is possible to obtain a three-dimensional map of the sample's structure, orientation, and local strain. History The first experimental demonstration of dark-field X-ray microscopy was reported in 2006 by a group at the European Synchrotron Radiation Facility in Grenoble, France. Since then, the technique has been rapidly evolving and has shown great promise in multiscale structural characterization. Its development is largely due to advances in synchrotron X-ray sources, which provide highly collimated and intense beams of X-rays. The development of dark-field X-ray microscopy has been driven by the need for non-destructive imaging of bulk crystalline samples at high resolution, and it continues to be an active area of research today. However, dark-field microscopy, dark-field scanning transmission X-ray microscopy, and soft dark-field X-ray microscopy have long been used to map deeply embedded structural elements. Principles and instrumentation In this technique, a synchrotron light source is used to generate an intense and coherent X-ray beam, which is then focused onto the sample using a specialized objective lens. The objective lens acts as a collimator to select and focus the scattered light, which is then detected by the 2D detector to create a diffraction pattern. The specialized objective lens in DFXM, referred to as an X-ray objective lens, is a crucial component of the instrumentation required for the technique. It can be made from different materials such as beryllium, silicon, and diamond, depending on the specific requirements of the experiment. 
The objective enables one to enlarge or reduce the spatial resolution and field of view within the sample by varying the number of individual lenses and adjusting and (as in the figure) correspondingly. The diffraction angle is typically 10–30°. The sample is positioned at an angle such that the direct beam is blocked by a beam stop or aperture, and the diffracted beams from the sample are allowed to pass through to a detector. An embedded crystalline element (for example, a grain or domain) of choice (green) is aligned such that the detector is positioned at a Bragg angle that corresponds to a particular diffraction peak of interest, which is determined by the crystal structure of the sample. The objective magnifies the diffracted beam by a factor and generates an inverted 2D projection of the grain. Through repeated exposures during a 360° rotation of the element around an axis parallel to the diffraction vector, several 2D projections of the grain are obtained from various angles. A 3D map is then obtained by combining these projections using reconstruction algorithms similar to those developed for CT scanning. If the lattice of the crystalline element exhibits an internal orientation spread, this procedure is repeated for a number of sample tilts, indicated by the angles and . The current implementation of DFXM at ID06 uses a compound refractive lens (CRL) as the objective, giving spatial resolution of 100 nm and angular resolution of 0.001°. Applications, limitations and alternatives Current and potential applications DFXM has been used for the non-destructive investigation of polycrystalline materials and composites, revealing the 3D microstructure, phases, orientation of individual grains, and local strains. It has also been used for in situ studies of materials recrystallisation, dislocations and other defects, and the deformation and fracture mechanisms in materials, such as metals and composites. 
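As a concrete illustration of the Bragg-angle geometry described above, the sketch below computes the scattering angle 2θ from Bragg's law for an assumed beam energy and reflection. The 17 keV energy and the aluminium (111) planes are illustrative choices, not values from the text.

```python
import math

# Bragg's law: lambda = 2 d sin(theta)
E_keV = 17.0                              # assumed photon energy
wavelength_A = 12.3984 / E_keV            # hc = 12.3984 keV·Å
a_Al = 4.0495                             # Al lattice parameter, Å
d_111 = a_Al / math.sqrt(3)               # (111) interplanar spacing, Å

theta = math.asin(wavelength_A / (2 * d_111))
two_theta = math.degrees(2 * theta)       # full scattering (diffraction) angle
print(f"2-theta = {two_theta:.1f} deg")   # ~18°, within the 10–30° range quoted
```

Harder X-rays (higher energy, shorter wavelength) give smaller scattering angles, which is why DFXM geometries sit at the modest 10–30° diffraction angles quoted above.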
DFXM can provide insights into the 3D microstructure and deformation of geological materials such as minerals and rocks, and irradiated materials. DFXM has the potential to revolutionise the field of nanotechnology by providing non-destructive, high-resolution 3D imaging of nanostructures and nanomaterials. It has been used to investigate the 3D morphology of nanowires and to detect structural defects in nanotubes. DFXM has shown potential for imaging biological tissues and organs with high contrast and resolution. It has been used to visualize the 3D microstructure of cartilage and bone, as well as to detect early-stage breast cancer in a mouse model. Limitations The intense X-ray beams used in DFXM can damage delicate samples, particularly biological specimens. DFXM can suffer from imaging artefacts such as ring artefacts, which can affect image quality and limit interpretation. The instrumentation required for DFXM is expensive and typically only available at synchrotron facilities, making it inaccessible to many researchers. Although DFXM can achieve high spatial resolution, it is still not as high as the resolution achieved by other imaging techniques such as transmission electron microscopy (TEM) or X-ray crystallography. Preparation of samples for DFXM imaging can be challenging, especially for samples that are not crystalline. There are also limitations on the sample size that can be imaged as the technique works best with thin samples, typically less than 100 microns thick, due to the attenuation of the X-ray beam by thicker samples. DFXM still suffers from long integration times, which can limit its practical applications. This is due to the low flux density of X-rays emitted by synchrotron sources and the high sensitivity required to detect scattered X-rays. 
Alternatives There are several alternative techniques to DFXM, depending on the application, some of which are: Differential-aperture X-ray structural microscopy (DAXM): DAXM is a synchrotron X-ray method capable of delivering precise information about the local structure and crystallographic orientation in three dimensions at a spatial resolution of less than one micron. It also provides angular precision and local elastic strain with high accuracy in a wide range of materials, including single crystals, polycrystals, composites, and materials with varying properties. Bragg coherent diffraction imaging (BCDI): BCDI is an advanced microscopy technique introduced in 2006 to study crystalline nanomaterials' 3D structure. BCDI has applications in diverse areas, including in situ studies of corrosion, probing dissolution processes, and simulating diffraction patterns to understand atomic displacement. Ptychography: Ptychography is a computational imaging method used in microscopy to generate images by processing multiple coherent interference patterns. It provides advantages such as high-resolution imaging, phase retrieval, and lensless imaging capabilities. Diffraction contrast tomography (DCT): DCT is a method that uses coherent X-rays to generate three-dimensional grain maps of polycrystalline materials. DCT enables visualization of crystallographic information within samples, aiding in the analysis of materials' structural properties, defects, and grain orientations. Three-dimensional X-ray diffraction (3DXRD): 3DXRD is a synchrotron-based technique that provides information about the crystallographic orientation of individual grains in polycrystalline materials. It can be used to study the evolution of microstructure during deformation and recrystallization processes and provides submicron resolution. 
Electron backscatter diffraction (EBSD): EBSD is a scanning electron microscopy (SEM) technique that can be used to map crystallographic orientation and strain at the sample surface at the submicron scale. It works by detecting the diffraction pattern of backscattered electrons, which provides information about the crystal structure of the material. EBSD can be used on a variety of materials, including metals, ceramics, and semiconductors, and can be extended to the third dimension, i.e., 3D EBSD, and can be combined with digital image correlation, i.e., EBSD-DIC. Digital image correlation (DIC): DIC is a non-contact optical method used to measure the displacement and deformation of a material by analysing the digital images captured before and after the application of load. This technique can measure strain with sub-pixel accuracy and is widely used in materials science and engineering. Transmission electron microscopy (TEM): TEM is a high-resolution imaging technique that provides information about the microstructure and crystallographic orientation of materials. It can be used to study the evolution of microstructure during deformation and recrystallization processes and provides submicron resolution. Micro-Raman spectroscopy: Micro-Raman spectroscopy is a non-destructive technique that can be used to measure the strain of a material at the submicron scale. It works by illuminating a sample with a laser beam and analysing the scattered light. The frequency shift of the scattered light provides information about the crystal deformation, and thus the strain of the material. Neutron diffraction: Neutron diffraction is a technique that uses a beam of neutrons to study the structure of materials. It is particularly useful for studying the crystal structure and magnetic properties of materials. Neutron diffraction can provide sub-micron resolution. References Further reading Diffraction Materials science Microscopes Microscopy Nanotechnology Scientific techniques
Dark-field X-ray microscopy
[ "Physics", "Chemistry", "Materials_science", "Technology", "Engineering" ]
1,945
[ "Applied and interdisciplinary physics", "Spectrum (physical sciences)", "Materials science", "Measuring instruments", "Diffraction", "Crystallography", "Microscopes", "nan", "Microscopy", "Nanotechnology", "Spectroscopy" ]
73,439,301
https://en.wikipedia.org/wiki/Wilma%20Dierkes
Wilma K. Dierkes is a University of Twente Associate Professor and chair of the Elastomer Technology and Engineering group, known for her research on elastomer sustainability. Education Dierkes completed an undergraduate degree in chemistry at Leibniz University in Hannover, Germany, in 1990. After a period working in industry, she returned to study for a PhD in polymer science at the University of Twente, completing her doctorate in 2010. She completed postgraduate study in environmental science at Foundation Universitaire Luxembourgeoise, Arlon, Belgium. Career Dierkes entered the rubber industry in 1991. She worked on elastomer recycling at the company Rubber Resources in Maastricht. Here, she was responsible for development and technical service, and implemented recycling of production waste. She later joined Degussa working on carbon black research, and Bosch working on windshield wiper development. She joined the University of Twente, the Netherlands, in 2001. She is currently an associate professor. From 2009 to 2013, she held a visiting professorship at Tampere University of Technology. From 2005 to 2014, Dierkes served as chairman of the Dutch Association of Plastics and Rubber Technologists (VKRT). She is also a founding member of the Female Faculty Network at the University of Twente (FFNT) and has served on its board. She serves on the expert committee for the Recircle Awards, which is composed of "individuals from the global tyre retreading and recycling industries selected according to their independent status and their acknowledged expertise". Dierkes' most cited works address the topics of recycling of natural rubber based latex products and silica filler technology for application in tire tread compounding. She has advocated for an open source approach to research and development in the tire industry. 
Awards and recognition 2013 - Sparks–Thomas award from the ACS Rubber Division References Polymer scientists and engineers Living people Women materials scientists and engineers Year of birth missing (living people)
Wilma Dierkes
[ "Chemistry", "Materials_science", "Technology" ]
403
[ "Polymer scientists and engineers", "Physical chemists", "Materials scientists and engineers", "Polymer chemistry", "Women materials scientists and engineers", "Women in science and technology" ]
73,440,001
https://en.wikipedia.org/wiki/HD%20190422
HD 190422, also known as HR 7674 or rarely 77 G. Telescopii, is a solitary star located in the southern constellation Telescopium. It has an apparent magnitude of +6.25, placing it near the limit for naked-eye visibility, even under ideal conditions. At its current distance, HD 190422's brightness is diminished by 0.11 magnitudes due to extinction from interstellar dust and it has an absolute magnitude of +4.41. The star is located relatively close, at a distance of 79 light years based on Gaia DR3 parallax measurements, but it is receding with a heliocentric radial velocity of . Approximately 1.6 million years ago, HD 190422 was located away from the Sun. HD 190422 has a stellar classification of F9 V CH−0.4, indicating that it is an F-type main-sequence star with a mild underabundance of the CH radical in its spectrum. It has 125% the mass of the Sun and 109% of the Sun's radius. It radiates 1.534 times the luminosity of the Sun from its photosphere at an effective temperature of , giving it a whitish-yellow hue. HD 190422 is slightly metal deficient with a metallicity of [Fe/H] = −0.13 and it spins modestly with a projected rotational velocity of . The star is estimated to be 400 million years old, less than a tenth the age of the Sun. References F-type main-sequence stars Telescopium Telescopii, 77 CD-55 08393 190422 099137 7674
HD 190422
[ "Astronomy" ]
346
[ "Telescopium", "Constellations" ]
73,440,849
https://en.wikipedia.org/wiki/Pressure%20gain%20combustion
Pressure gain combustion (PGC) is the unsteady state process used in gas turbines in which gas expansion caused by heat release is constrained. First developed in the early 20th century as one of the earliest gas turbine designs, the concept was mostly abandoned following the advent of isobaric jet engines in WWII. As an alternative to conventional gas turbines, pressure gain combustion prevents the expansion of gas by holding it at constant volume during the reaction, causing an increase in stagnation pressure. The subsequent combustion produces a detonation, rather than the deflagration used in most turbines. Doing so allows for extra work extraction rather than a loss of energy due to pressure loss across the turbine. Several different variations of turbines use this process, the most prominent being the pulse detonation engine and the rotating detonation engine. In recent years, pressure gain combustion has once again gained relevance and is currently being researched for use in propulsion systems and power generation due to its potential for improved efficiency and performance over conventional turbines. History Early history Gas-powered turbines have been researched since the late 18th century, starting with John Barber's 1791 patent. Over a century later, Ægidius Elling built a turbine in 1903 which generated 11 bhp (8.2 kW), the first gas turbine to produce net positive work. In 1909, the first pressure gain combustion turbine was built by Hans Holzwarth. Initially operating at 200 bhp (147 kW), subsequent improvements to the engine increased its power output to 5000 bhp (3728 kW) by 1939. However, the aptly named Explosion Turbine would lose popularity among engineers and inventors as continuous combustion designs gained traction due to their use in jet engine prototypes. Renewed Interest The concept of pulsed propulsion is neither new, nor exclusive to pressure gain combustion. In fact, the German V1 missile utilized a pulse jet operating at 45 Hz. 
During the space race, NASA's Project Orion concept utilized force from nuclear explosions ignited behind the spacecraft to generate thrust. This process is known as nuclear pulse propulsion and is stylistically similar to the pulse detonation engine. In the mid-20th century, US aeronautical scientists and engineers were trying to study the properties of detonation waves. To do this, a primitive rotating detonation chamber was created. This development became the basis for the rotating detonation engine, one of the leading PGC engine concepts, although it was largely ignored at the time due to its instability. However, as gas turbines are becoming more and more optimized, PGC research is now gaining traction in aircraft propulsion, power generation, and even rocket propulsion. In January 2008, a pulse detonation-powered plane completed its first flight as a cooperative project between the Air Force Research Laboratory and Innovative Scientific Solutions, a research and product development company. Currently, various organizations have developed working PGC engines (mostly RDEs), but none have been put to commercial use due to developmental challenges. Concept & Comparison to Conventional Turbines Overview of Conventional Turbines The majority of gas turbines consist of an intake through which atmospheric air enters the turbine. The air is then pressurized through a compressor before mixing with fuel. The air-fuel mixture, also known as the working fluid, is combusted in a deflagration (a combustion reaction propagating at subsonic speed), which causes the mixture to expand in volume while maintaining constant pressure. Finally, the combustion product is ejected out of the exhaust to produce thrust. This process is known as the Brayton Cycle and has been used as the standard method of jet propulsion and turbine design for about a century. Humphrey Cycle Contrasting against the Brayton Cycle used in most turbines, Pressure Gain Combustion is based on the Humphrey Cycle. 
Instead of an isobaric system in which gas volume expands as heat is added to the combustion chamber, the volume of the working fluid stays constant as its pressure increases during combustion. While the Brayton Cycle describes a subsonic deflagration, the Humphrey Cycle occurs in a detonation (a combustion reaction propagating at supersonic speed). The reaction occurs so quickly that the mixture does not have time to expand, causing a pressure gain before the products are ejected through the exhaust to produce thrust. The whole process occurs rapidly, and turbines will produce anywhere from 20 to 200 detonations per second. Because the working fluid combusts at a constant volume, there is no pressure loss across the turbine, which increases the net work generated by each cycle. However, since work is done by a series of detonations rather than a constant reaction generating thrust, the process is naturally more unsteady than in a conventional turbine. Designs & Variations Pulse Detonation Engine The simplest modern PGC turbine is the Pulse Detonation Engine. Consisting of almost no moving parts, the PDE is externally similar to a ramjet, a type of jet engine without compressor fans that is viable only at supersonic speeds. First, air enters the intake nozzle and travels directly to the combustion chamber to be mixed with injected fuel. There, the mixture is ignited while the front of the chamber closes, producing a detonation wave that both compresses and combusts the mixture before the working fluid is ejected at supersonic speed through the exhaust. Because of the engine's simplicity and anatomical similarity to ramjets and scramjets, pulse detonation engines can be implemented as combined-cycle engines, which can improve the performance and reliability of ramjets. Conventional combined-cycle engines have complex moving parts that are essentially rendered useless at high speeds, an issue that PDE/ramjet drives will not have.
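The pressure/volume contrast between the idealized Brayton and Humphrey combustion steps described above can be sketched with the ideal-gas law. The snippet below is an illustration of the thermodynamic idea only; the temperature ratio is a made-up example, not data from any engine:

```python
# Illustrative sketch: contrast constant-pressure (Brayton) and
# constant-volume (Humphrey) heat addition using the ideal-gas law p*V = n*R*T.
# Temperatures below are hypothetical, chosen only to show the scaling.

def brayton_combustion(p1, v1, t1, t2):
    """Constant-pressure heat addition: volume grows in proportion to T."""
    return p1, v1 * (t2 / t1)  # pressure unchanged, volume expands

def humphrey_combustion(p1, v1, t1, t2):
    """Constant-volume heat addition: pressure grows in proportion to T."""
    return p1 * (t2 / t1), v1  # pressure gain, volume unchanged

p1, v1 = 101_325.0, 1.0   # Pa, m^3 (arbitrary initial state)
t1, t2 = 600.0, 1800.0    # K before/after combustion (hypothetical)

p_b, v_b = brayton_combustion(p1, v1, t1, t2)
p_h, v_h = humphrey_combustion(p1, v1, t1, t2)

print(f"Brayton:  pressure stays {p_b:.0f} Pa, volume x{v_b / v1:.1f}")
print(f"Humphrey: volume stays {v_h:.1f} m^3, pressure x{p_h / p1:.1f}")
```

The same heat addition that triples the volume in the constant-pressure idealization instead triples the pressure in the constant-volume one, which is the "pressure gain" the cycle's name refers to.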
Rotating Detonation Engine Apart from PDEs, there exist multiple other PGC engine concepts, including Resonant Pulse Combustors and Internal Combustion Wave Rotors, to name a few. However, the majority of modern PGC research is concentrated on the rotating detonation engine (RDE), which aims to solve many of the issues encountered by PDEs. The main drawback of pulse detonation is the intermittent nature of the combustion. Not only is the reaction hard to control, but the intermittent combustion also loses power due to the time it takes to refuel the combustion chamber after purging, during which no thrust is produced. The rotating detonation engine aims to address both of these problems. While PDEs involve a series of repeating detonations to ignite batches of air that enter the combustion chamber, RDEs circumvent this by utilizing a single detonation wave that rotates around the space between concentric cylinders. A continuous air intake flows through the cylinders, which compresses and combusts as it passes through the rotating detonation wave. This eliminates the need to constantly produce detonations, since only a single cyclic detonation is used, and it allows for a steadier constant flow instead of the pulsing thrust produced by PDEs. Applications & Technical Challenges Propulsion Modern chemical rockets still utilize deflagration reactions to generate thrust, and these are being optimized ever closer to their limits. As a result, pressure gain combustion engines, mostly RDEs, have garnered significant attention as a possible method of improving rocket performance. Currently, pressure gain rocket engines are being researched by space agencies in multiple countries, including NASA and JAXA, as well as numerous universities and private companies. Detonation propulsion, which is more energy efficient than conventional deflagration, may increase efficiency by 5-10%, which can both reduce rocket mass and increase payload size.
As mentioned previously, pressure gain turbines have also been researched and developed extensively for use in aircraft propulsion. Pressure gain combustion engines can both improve the performance and reduce the complexity of combined ramjet/scramjet engines through their shared design similarities. Furthermore, this may even allow PDE/RDE combined ramjets to be utilized under conditions unsuitable for conventional ramjets. In addition, pressure gain turbojets require significantly less complexity, especially in the compressors, than regular turbines. This will not only save resources in manufacturing but also allow designs that produce higher thrust in smaller engines. Energy Generation Apart from nuclear fission, natural gas has the highest energy density of widely used fuels. As such, to reduce carbon emissions, electricity-generating plants are increasingly turning from crude oil and coal to gas turbines. While conventional turbines generate large amounts of energy more efficiently than other fossil fuels, just as in aerospace, they are beginning to reach their limits. Similar to its potential use in propulsion, pressure gain combustion turbines can offer an improvement to gas power plants. In addition to better efficiency, RDEs can operate at much higher hydrogen concentrations, further improving performance because of hydrogen's higher energy density compared to petrochemicals. The relative simplicity of RDEs can also improve reliability and ease of maintenance, though this may be counterbalanced by the increased stress the process puts on the engine. Engineering and Implementation Challenges While PGC offers improved performance and efficiency, there are serious flaws and challenges that researchers were initially unable to solve, preventing the technology from being widely used. Since PDEs are effectively intermittent explosion drives, the cycle they run on is far more unsteady and harder to control than that of conventional turbines.
This makes PDEs very difficult to integrate into airframes, as the high-energy pulsing of the engine can cause the inlet to unstart and stop the reaction, in addition to putting high stress on the nacelle and any other adjacent parts. The noise from the exhaust is also a concern: in testing, the highly energetic detonations of a 20 Hz PDE produced up to 122 dB at a distance of 3 m. For scaled-up commercial units operating at higher power and frequency, noise pollution will be a serious issue if effective damping measures are not implemented. Moreover, due to the high energy required to initiate detonations, PDEs with shorter combustion chambers need to use deflagration combustion at initial ignition and accelerate the pressure waves through a process called Deflagration to Detonation Transition (DDT). This requires placing obstacles in the path of the deflagration wave to induce turbulent flow, which speeds up the wave but adds complexity to the engine structure. While RDEs solve many of the problems encountered in PDEs, they are not without flaws. The constant flow of the engine, coupled with the need to sustain the detonation, requires a tremendous intake of air to be rapidly mixed with the fuel over a shorter distance than in most PDEs, which are normally quite elongated. In addition, the stress placed on the engine by the detonation process was simply too much for early engines to withstand. However, advances in materials science and manufacturing have improved the feasibility of RDEs to the point where many organizations consider research and development worthwhile. See also Nuclear Pulse Propulsion Pulse Jet Pulse Detonation Engine Rotating Detonation Engine Schramjet References Combustion
Pressure gain combustion
[ "Chemistry" ]
2,239
[ "Combustion" ]
73,442,312
https://en.wikipedia.org/wiki/Clitocybula%20ellipsospora
Clitocybula ellipsospora is a species of mushroom in the genus Clitocybula that was described in 2022. Currently it has only been identified at a few locations on the Iberian Peninsula. Description It grows in small cespitose groups, occasionally producing lone specimens. The cap is roughly with a plano-convex shape and a depression in the centre; it starts off circular but becomes irregular with age. The margin is curved, not striated, becoming very irregular and lacerated with maturity. The colour is uniformly silvery with beige and greyish-brown undertones in young and wet specimens. The stem is × , central, curved, fistulous, very fibrous, wider at the junction with the cap and often flattened and/or cleft. Basidiospores are ellipsoid to narrowly ellipsoid, hyaline, guttulate and distinctly amyloid, producing a white spore print. Habitat Clitocybula ellipsospora grows in small clusters alongside partially buried deadwood in Pinus sylvestris forests within the supra-mediterranean belt, on acid soils, so far exclusively in the vicinity of peat bogs. References Marasmiaceae Fungus species
Clitocybula ellipsospora
[ "Biology" ]
247
[ "Fungus stubs", "Fungi", "Fungus species" ]
73,444,592
https://en.wikipedia.org/wiki/Eutrema%20salsugineum
Eutrema salsugineum (syn. Thellungiella salsuginea), the saltwater cress or salt-lick mustard, is a species of flowering plant in the family Brassicaceae. A petite annual or biennial, it is native to Central Asia, Siberia, Mongolia, northern and eastern China, northwestern and western Canada, Montana and Colorado in the United States, and Nuevo León in Mexico. An extremophile halophyte, it is a close relative of the model organism Arabidopsis thaliana and has been adopted to study salt, drought, and cold stress resistance in plants, including having its genome sequenced. References salsugineum Halophytes Flora of East European Russia Flora of Siberia Flora of Central Asia Flora of Mongolia Flora of Xinjiang Flora of Inner Mongolia Flora of Manchuria Flora of North-Central China Flora of Southeast China Flora of the Northwest Territories Flora of Yukon Flora of Western Canada Flora of Montana Flora of Colorado Flora of Nuevo León Plants described in 2005
Eutrema salsugineum
[ "Chemistry" ]
206
[ "Halophytes", "Salts" ]
73,445,802
https://en.wikipedia.org/wiki/FragDenStaat
FragDenStaat is a Berlin and Brussels-based NGO run by the Open Knowledge Foundation Germany focused on the right to information. It operates an Internet platform to facilitate freedom of information requests to both German and EU public authorities. The technical platform is supplemented by issue-related campaigns, investigative journalism and strategic lawsuits, which are organized and operated by a project team and often in cooperation with other NGOs or news outlets. FragDenStaat was founded by Stefan Wehrmeyer in August 2011 as a similar project to MySociety's WhatDoTheyKnow. So far, 120,000 users have sent more than 230,000 requests using the platform. Notable campaigns and scoops NSU Files: A leak of a classified report on neo-Nazi terror attacks that confirmed suspicions about the authorities' failures Frontex Files: Investigation into secret meetings between EU agency Frontex and weapon lobbyists Prisons in Paradise: Report on refugee detentions in Greece References External links Non-profit technology Politics and technology Freedom of information activists Open government
FragDenStaat
[ "Technology" ]
209
[ "Information technology", "Non-profit technology" ]
58,070,834
https://en.wikipedia.org/wiki/Non%20functional%20pad
A non-functional pad is a pad in a printed circuit board that is not connected to a track on the layer it is on. Removal Non-functional pads can be removed at any phase of the design process. Some software allows precise control during the design process and can also remove non-functional pads during output file creation. Furthermore, some board manufacturers remove non-functional pads during data preparation. Occasionally, this process of non-functional pad removal is also called unused pad suppression. The benefits of removing non-functional pads are limited. Electrically, non-functional pads add needless extra capacitance in certain designs, which removal eliminates. Removing non-functional pads can improve the drilling process, as it lessens drill wear. Non-functional pad removal can also influence reliability (e.g., the barrel cracking failure mode); removal can increase or decrease reliability. Depending on design parameters, removing non-functional pads can free up routing space. Non-functional pads naturally also affect thermal characteristics. Sometimes non-functional pads (or their removal) are used for copper balancing, which affects etching, bow and twist, and other effects. Bibliography Non-functional Pads: Should They Stay or Should They Go Pads/Nopads Printed circuit board manufacturing
Non functional pad
[ "Engineering" ]
256
[ "Electrical engineering", "Electronic engineering", "Printed circuit board manufacturing" ]
58,071,309
https://en.wikipedia.org/wiki/Unitary%20transformation%20%28quantum%20mechanics%29
In quantum mechanics, the Schrödinger equation describes how a system changes with time. It does this by relating changes in the state of the system to the energy in the system (given by an operator called the Hamiltonian). Therefore, once the Hamiltonian is known, the time dynamics are in principle known. All that remains is to plug the Hamiltonian into the Schrödinger equation and solve for the system state as a function of time. Often, however, the Schrödinger equation is difficult to solve (even with a computer). Therefore, physicists have developed mathematical techniques to simplify these problems and clarify what is happening physically. One such technique is to apply a unitary transformation to the Hamiltonian. Doing so can result in a simplified version of the Schrödinger equation which nonetheless has the same solution as the original. Transformation A unitary transformation (or frame change) can be expressed in terms of a time-dependent Hamiltonian \(H(t)\) and unitary operator \(U(t)\). Under this change, the Hamiltonian transforms as \(H \to \tilde{H} = UHU^\dagger + i\hbar\,\frac{\partial U}{\partial t}U^\dagger\). The Schrödinger equation applies to the new Hamiltonian. Solutions to the untransformed and transformed equations are also related by \(\tilde{\psi}(t) = U(t)\psi(t)\). Specifically, if the wave function \(\psi(t)\) satisfies the original equation, then \(U(t)\psi(t)\) will satisfy the new equation. Derivation Recall that by the definition of a unitary matrix, \(U^\dagger U = 1\). Beginning with the Schrödinger equation, \(i\hbar\,\frac{d\psi}{dt} = H\psi\), we can therefore insert the identity at will. In particular, inserting it after \(H\) and also premultiplying both sides by \(U\), we get \(i\hbar\,U\frac{d\psi}{dt} = UHU^\dagger(U\psi)\) (1). Next, note that by the product rule, \(i\hbar\,\frac{d}{dt}(U\psi) = i\hbar\,\frac{dU}{dt}\psi + i\hbar\,U\frac{d\psi}{dt}\). Inserting another \(U^\dagger U\) and rearranging, we get \(i\hbar\,U\frac{d\psi}{dt} = i\hbar\,\frac{d}{dt}(U\psi) - i\hbar\,\frac{dU}{dt}U^\dagger(U\psi)\) (2). Finally, combining (1) and (2) above results in the desired transformation: \(i\hbar\,\frac{d}{dt}(U\psi) = \left(UHU^\dagger + i\hbar\,\frac{dU}{dt}U^\dagger\right)(U\psi)\). If we adopt the notation \(\tilde{\psi} \equiv U\psi\) to describe the transformed wave function, this can be rewritten as \(i\hbar\,\frac{d\tilde{\psi}}{dt} = \tilde{H}\tilde{\psi}\), which has the form of the original Schrödinger equation. The original wave function can be recovered as \(\psi = U^\dagger\tilde{\psi}\).
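As a concrete check, the transformation law can be verified numerically. In this sketch (with \(\hbar = 1\) and arbitrarily chosen Hermitian matrices, purely for illustration), finite differences confirm that \(U\psi\) satisfies the Schrödinger equation generated by the transformed Hamiltonian:

```python
# Numerical sanity check: for a constant Hamiltonian H and a frame change
# U(t) = exp(iWt) generated by a Hermitian W, verify that psi'(t) = U(t) psi(t)
# obeys i d(psi')/dt = H' psi' with H' = U H U^dag + i (dU/dt) U^dag.
# Units with hbar = 1; the matrices below are arbitrary examples.
import numpy as np

def exp_iht(a, t):
    """exp(i*a*t) for a Hermitian matrix a, via eigendecomposition."""
    lam, v = np.linalg.eigh(a)
    return v @ np.diag(np.exp(1j * lam * t)) @ v.conj().T

H = np.array([[1.0, 0.3], [0.3, -1.0]])   # arbitrary Hermitian Hamiltonian
W = np.array([[0.5, 0.0], [0.0, -0.5]])   # Hermitian generator of the frame change
psi0 = np.array([1.0, 0.0], dtype=complex)

t, dt = 0.7, 1e-6
psi = lambda s: exp_iht(-H, s) @ psi0     # exp(-iHs) psi0 solves the original equation
U = lambda s: exp_iht(W, s)

psi_p = U(t) @ psi(t)                     # transformed wave function
dpsi_p = (U(t + dt) @ psi(t + dt) - U(t - dt) @ psi(t - dt)) / (2 * dt)

dU = (U(t + dt) - U(t - dt)) / (2 * dt)
H_p = U(t) @ H @ U(t).conj().T + 1j * dU @ U(t).conj().T

# The residual of i*dpsi'/dt - H'*psi' should vanish up to finite-difference error.
residual = np.linalg.norm(1j * dpsi_p - H_p @ psi_p)
print(f"residual = {residual:.2e}")
```

The residual is limited only by the finite-difference step, confirming that the extra \(i\hbar\,\dot{U}U^\dagger\) term is exactly what is needed for the transformed state to satisfy the transformed equation.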
Relation to the interaction picture Unitary transformations can be seen as a generalization of the interaction (Dirac) picture. In the latter approach, a Hamiltonian is broken into a time-independent part and a time-dependent part, \(H(t) = H_0 + H_1(t)\). In this case, the Schrödinger equation becomes \(i\hbar\,\frac{d\psi_I}{dt} = H_{1,I}(t)\,\psi_I\), with \(H_{1,I}(t) = e^{iH_0 t/\hbar} H_1(t)\, e^{-iH_0 t/\hbar}\) and \(\psi_I = e^{iH_0 t/\hbar}\psi\). The correspondence to a unitary transformation can be shown by choosing \(U(t) = e^{iH_0 t/\hbar}\). As a result, \(\tilde{H} = e^{iH_0 t/\hbar}\,(H_0 + H_1(t))\, e^{-iH_0 t/\hbar} + i\hbar\,\frac{dU}{dt}U^\dagger\). First note that since \(U\) is a function of \(H_0\), the two must commute. Then \(e^{iH_0 t/\hbar} H_0\, e^{-iH_0 t/\hbar} = H_0\), which takes care of the first term in the transformation. Next use the chain rule to calculate \(i\hbar\,\frac{dU}{dt}U^\dagger = i\hbar\,\frac{iH_0}{\hbar}\,e^{iH_0 t/\hbar} e^{-iH_0 t/\hbar} = -H_0\), which cancels with the other \(H_0\). Evidently we are left with \(\tilde{H} = e^{iH_0 t/\hbar} H_1(t)\, e^{-iH_0 t/\hbar} = H_{1,I}(t)\), yielding the interaction picture as shown above. When applying a general unitary transformation, however, it is not necessary that \(H\) be broken into parts, or even that \(U\) be a function of any part of the Hamiltonian. Examples Rotating frame Consider an atom with two states, ground \(|g\rangle\) and excited \(|e\rangle\). The atom has a Hamiltonian \(H_{\text{atom}} = \hbar\omega\,|e\rangle\langle e|\), where \(\omega\) is the frequency of light associated with the ground-to-excited transition. Now suppose we illuminate the atom with a drive at frequency \(\omega_d\) which couples the two states, and that the time-dependent driven Hamiltonian is \(H_1 = \hbar W e^{-i\omega_d t}|e\rangle\langle g| + \hbar W^* e^{i\omega_d t}|g\rangle\langle e|\) for some complex drive strength \(W\). Because of the competing frequency scales (\(\omega\), \(\omega_d\), and \(|W|\)), it is difficult to anticipate the effect of the drive (see driven harmonic motion). Without a drive, the phase of \(|e\rangle\) would oscillate relative to \(|g\rangle\). In the Bloch sphere representation of a two-state system, this corresponds to rotation around the z-axis. Conceptually, we can remove this component of the dynamics by entering a rotating frame of reference defined by the unitary transformation \(U = e^{i\omega_d t\,|e\rangle\langle e|}\). Under this transformation, the Hamiltonian becomes \(\tilde{H} = \hbar(\omega - \omega_d)|e\rangle\langle e| + \hbar W|e\rangle\langle g| + \hbar W^*|g\rangle\langle e|\). If the driving frequency is equal to the g-e transition's frequency, \(\omega_d = \omega\), resonance will occur and the equation above reduces to \(\tilde{H} = \hbar W|e\rangle\langle g| + \hbar W^*|g\rangle\langle e|\).
From this it is apparent, even without getting into details, that the dynamics will involve an oscillation between the ground and excited states at a frequency set by the drive strength. As another limiting case, suppose the drive is far off-resonant, with a detuning much larger than the drive strength. We can figure out the dynamics in that case without solving the Schrödinger equation directly. Suppose the system starts in the ground state. Initially, the Hamiltonian will populate some component of the excited state. A small time later, however, it will populate roughly the same amount of the excited state, but with a completely different phase. Thus the effect of an off-resonant drive will tend to cancel itself out. This can also be expressed by saying that an off-resonant drive is rapidly rotating in the frame of the atom. These concepts are illustrated in the table below, where the sphere represents the Bloch sphere, the arrow represents the state of the atom, and the hand represents the drive. Displaced frame The example above could also have been analyzed in the interaction picture. The following example, however, is more difficult to analyze without the general formulation of unitary transformations. Consider two harmonic oscillators, between which we would like to engineer a beam splitter interaction, \(H_{\text{BS}} = g(a^\dagger b + a b^\dagger)\), where \(a\) and \(b\) are the annihilation operators of the two oscillators. This was achieved experimentally with two microwave cavity resonators serving as the two modes. Below, we sketch the analysis of a simplified version of this experiment. In addition to the microwave cavities, the experiment also involved a transmon qubit coupled to both modes. The qubit is driven simultaneously at two frequencies whose difference matches the frequency difference of the two cavity modes. In addition, there are many fourth-order terms coupling the modes, but most of them can be neglected; two such terms will become important here. We can apply a displacement transformation to the qubit mode. For carefully chosen amplitudes, this transformation will cancel the qubit drive while also displacing the qubit's ladder operator. This leaves us with the transformed coupling terms.
Expanding this expression and dropping the rapidly rotating terms, we are left with the desired beam splitter Hamiltonian. Relation to the Baker–Campbell–Hausdorff formula It is common for the operators involved in unitary transformations to be written as exponentials of operators, \(U = e^{-iAt}\), as seen above. Further, the operators in the exponentials commonly obey the relation \(A^\dagger = A\) (so that \(U\) is unitary), and the transform of an operator \(B\) is then \(e^{iAt} B\, e^{-iAt}\). By now introducing the iterated commutator \(\mathrm{ad}_A^n(B) = [A, \mathrm{ad}_A^{n-1}(B)]\), with \(\mathrm{ad}_A^0(B) = B\), we can use a special result of the Baker–Campbell–Hausdorff formula to write this transformation compactly as \(e^{iAt} B\, e^{-iAt} = \sum_{n=0}^{\infty} \frac{(it)^n}{n!}\,\mathrm{ad}_A^n(B)\), or, in long form for completeness, \(e^{iAt} B\, e^{-iAt} = B + it[A,B] + \frac{(it)^2}{2!}[A,[A,B]] + \cdots\). References Quantum mechanics
Unitary transformation (quantum mechanics)
[ "Physics" ]
1,314
[ "Theoretical physics", "Quantum mechanics" ]
58,071,400
https://en.wikipedia.org/wiki/Marlin%20%28firmware%29
Marlin is open source firmware originally designed for RepRap project FDM (fused deposition modeling) 3D printers using the Arduino platform. Marlin supports many different types of 3D printing robot platforms, including basic Cartesian, Core XY, Delta, and SCARA printers, as well as some other less conventional designs like Hangprinter and Beltprinter. In addition to 3D printers, Marlin is generally adaptable to any machine requiring control and interaction. It has been used to drive SLA and SLS 3D printers, custom CNC mills, laser engravers (or laser beam machining), laser cutters, vinyl cutters, pick-and-place machines, foam cutters, and egg painting robots. History Marlin was first created in 2011 for the RepRap and Ultimaker printers by combining elements from the open source Grbl and Sprinter projects. Development continued at a slow pace while gaining in popularity and acceptance as a superior alternative to the other available firmware. By 2015, companies were beginning to introduce commercial 3D printers with Marlin pre-installed and contributing their improvements to the project. Early machines included the Ultimaker 1, the TAZ series by Aleph Objects and the Prusa i3 by Prusa Research. By 2018 manufacturers had begun to favor boards with more powerful and efficient ARM processors, often at a lower cost than the AVR boards they supplant. After extensive refactoring Marlin 2.0 was officially released in late 2019 with full support for 32-bit ARM-based controller boards through a lightweight extensible hardware access layer. While Marlin 1.x had only supported 8-bit AVR (e.g., ATMega) and ATSAM3X8E (Due) platforms, the HAL added ATSAMD51 (Grand Central), Espressif ESP32, NXP LPC176x, and STMicro STM32. Marlin also acquired HAL code to run natively on Linux, Mac, and Windows, but only within a simulation for debugging purposes. 
As of October 2022, Marlin was still under active development and continues to be very popular, claiming to be "the most widely used 3D printing firmware in the world." Some of the most successful companies using Marlin today are Ultimaker, LulzBot, Prusa Research, and Creality. Marlin firmware is not alone in the field of open source 3D printer firmware. Other popular open source firmware offerings include RepRap Firmware by Duet3D, Buddy Firmware by Prusa Research, and Klipper by the Klipper Foundation. These alternatives take advantage of extra processing power to offer advanced features like input shaping, which has only recently been added to Marlin (in a limited form: Marlin does not support hardware accelerometers, which are the best way to take full advantage of input shaping). Technical Marlin firmware is hosted on GitHub, where it is developed and maintained by a community of contributors. Marlin's lead developer is Scott Lahteine (aka Thinkyhead), an independent shareware and former Amiga game developer who joined the project in 2014. His work is entirely supported by crowdfunding. Marlin is written in optimized C++ for the Arduino API in a mostly embedded-C++ style, which avoids the use of dynamic memory allocation. The firmware can be built with the Arduino IDE, PlatformIO, or the Auto Build Marlin extension for Visual Studio Code. The latter method is recommended for its ease of use, although, being a Visual Studio Code extension, it requires Visual Studio Code to be installed on the build system first. Once the firmware has been compiled from C++ source code, it is installed and runs on a mainboard with onboard components and general-purpose I/O pins to control and communicate with other components. For control, the firmware receives input from a USB port or attached media in the form of G-code commands instructing the machine what to do. For example, the command "G1 X10" tells the machine to perform a smooth linear move of the X axis to position 10.
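The structure of such a command can be illustrated with a minimal parser. The sketch below is a simplified illustration of the G-code word format, not Marlin's actual parser:

```python
# Illustrative sketch (not Marlin's real parser): split a G-code line such as
# "G1 X10 F3000" into its command word and letter-prefixed parameters, the way
# a motion command is interpreted before being queued.
def parse_gcode(line: str):
    # Strip an end-of-line comment introduced by ';'
    line = line.split(";", 1)[0].strip()
    if not line:
        return None
    words = line.split()
    command = words[0].upper()                             # e.g. "G1"
    params = {w[0].upper(): float(w[1:]) for w in words[1:]}
    return command, params

print(parse_gcode("G1 X10 F3000 ; move X to 10 at 3000 mm/min"))
# → ('G1', {'X': 10.0, 'F': 3000.0})
```

A real firmware parser also handles line numbers, checksums, string arguments, and flag parameters with no value, but the command-word/parameter-letter split shown here is the core of the format.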
The main loop manages all of the machine's real-time activities, such as commanding the stepper motors through stepper drivers, controlling heaters, sensors, and lights, and managing the display and user interface. License Marlin is distributed under the GPL license, which requires that organizations and individuals share their source code if they distribute the firmware in binary form, including firmware that comes pre-installed on the mainboard. Vendors have occasionally failed to comply with the license, leading some distributors to drop their products. In 2018 the US distributor Printed Solid ended its relationship with Creality due to GPL violations and quality issues. As of 2022, some vendors are still spotty in their compliance, deflecting customer requests for the source code for an extended period, or in perpetuity, after a product release. Usage and license compliance Marlin firmware is used by several 3D printer manufacturers, most of which are fully compliant with the license. Compliance is tracked by Tim Hoogland of TH3D Studio, et al. The following table may be out of date by the time you read this. See also RepRap Project G-code RAMPS 3D printing Applications of 3D printing List of 3D printer manufacturers List of 3D printing software Comparison of 3D printers 3D printing processes 3D Manufacturing Format 3D printing speed Fused filament fabrication Construction 3D printing References External links Marlin official website Marlin GitHub repository Marlin Patreon page How it's Made: The Marlin Firmware!, an interview with Scott Lahteine, YouTube 3D printing Firmware
Marlin (firmware)
[ "Engineering", "Biology" ]
1,183
[ "RepRap project", "Self-replication" ]
58,073,834
https://en.wikipedia.org/wiki/Elaeagnus%20%C3%97%20submacrophylla
Elaeagnus × submacrophylla, formerly known as Elaeagnus × ebbingei, is a hybrid between Elaeagnus macrophylla and Elaeagnus pungens. Several cultivars, including 'Gilt Edge', are grown in gardens as ornamental plants. Both the hybrid and 'Gilt Edge' have gained the Royal Horticultural Society's Award of Garden Merit. Description Elaeagnus × submacrophylla is an evergreen shrub, ultimately growing to about . The upper surfaces of the leaves are dark green, sometimes appearing metallic; the lower surfaces are silvery and scaly. Small fragrant tubular white flowers appear in autumn. Taxonomy The hybrid was first discovered in 1929 by Simon Doorenbos, a Dutch horticulturalist, while he was director of the Parks Department in The Hague. He sowed seed from several Elaeagnus species growing next to one another. These included E. macrophylla and E. pungens. Doorenbos gave the hybrid the epithet ebbingei, honouring J.W.E. Ebbinge, and the name E. × ebbingei was widely used until it was realized that the name with priority was E. × submacrophylla, published by Camille Servettaz in 1909. E. × ebbingei is now regarded as an illegitimate name. Cultivars Cultivars include: 'Albert Doorenbos' – large green leaves 'Compacta' – dwarf plant 'Costal Gold' – broad leaves, pale yellow in the centre when mature 'Gilt Edge' AGM – leaves with dark green centres, golden yellow margins 'Limelight' – leaves with yellow and pale green central areas when mature 'The Hague' – small green leaves References submacrophylla Hybrid plants Plants described in 1909
Elaeagnus × submacrophylla
[ "Biology" ]
372
[ "Hybrid plants", "Plants", "Hybrid organisms" ]
58,074,429
https://en.wikipedia.org/wiki/The%20Association%20for%20Women%20with%20Large%20Feet
The Association for Women with Large Feet was a British organization founded by Airton resident Mrs. Phyllis Crone in 1949. It lobbied for clothing manufacturers to better provide for tall women. Members had to be at least 5 feet 8 inches tall. The Association was originally formed to convince footwear manufacturers to provide more attractive options for women with larger shoe sizes. However, it expanded to lobby clothing manufacturers more generally to provide for tall women, with some success. The Association made direct approaches to manufacturers, and if they agreed to provide for tall women, their details were circulated to all members. The group changed its name to The Association of Tall Women in October 1951 to increase its appeal and membership. There were reportedly 10 branches across the UK by this point, with members joining at a rate of around 50 per week. Membership fees were 3 shillings and 6 pence (3s 6d) per year. By 1952 the group was reported to have 2,000 members in London alone. As part of their campaign, in 1952 the Association held a public display of clothing for taller women at a London department store. Successes included convincing stocking manufacturers to produce nylons with longer foot and leg measurements, "ending a nightmare for taller women", and the establishment of a London shop called "Tall Girls". References Women's organisations based in the United Kingdom Clothing-related organizations Campaigning Feminism and history Feminism and society History of fashion Women's clothing 1949 establishments in the United Kingdom Sizes in clothing Women in London History of women in the United Kingdom
The Association for Women with Large Feet
[ "Physics", "Mathematics" ]
308
[ "Sizes in clothing", "Quantity", "Physical quantities", "Size" ]
58,076,899
https://en.wikipedia.org/wiki/Toshiko%20Mayeda
Toshiko K. Mayeda (née Kuki) (1923–13 February 2004) was a Japanese American chemist who worked at the Enrico Fermi Institute in the University of Chicago. She worked on climate science and meteorites from 1958 to 2004. Early life and education Toshiko Mayeda was born in Tacoma, Washington. She grew up in Yokkaichi, Mie, and Osaka. When the United States entered World War II after the Japanese attack on Pearl Harbor, she and her father Matsusaburo Kuki were sent to the Tule Lake War Relocation Center. Whilst there she met her future husband, Harry Mayeda. After the war, she graduated with a bachelor's degree in chemistry from the University of Chicago in 1949. Research Mayeda worked initially as a laboratory assistant to Harold Urey at the University of Chicago, where she was originally hired to wash glassware. They used mass spectrometry to measure oxygen isotopes in the shells of marine molluscs, which gave information on the prehistoric temperatures of ocean waters and hence paleoclimates. Urey developed the field of cosmochemistry and with Mayeda studied primitive meteorites, also by using oxygen isotope analysis. Later, she worked with Cesare Emiliani on isotopic evaluation of the ice age. When Urey retired from the university in 1958, Mayeda was persuaded to remain there by Robert N. Clayton, and collaborate with him on applications of mass spectrometry. She was described as an indomitable research assistant. Mayeda and Clayton's first research paper considered the use of bromine pentafluoride to extract isotopes of oxygen from rocks and minerals. It remains their most cited work. From the 1970s until the late 1990s Mayeda and Clayton became famous for their use of oxygen isotopes to classify meteorites. They developed several tests that were used across the field of meteorite and lunar sample analysis.
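Oxygen-isotope results of the kind Clayton and Mayeda reported are conventionally expressed in delta notation. The following sketch illustrates that convention only; the sample ratio is a made-up example, not a measured value from their work:

```python
# Illustrative sketch of standard delta notation for oxygen isotopes:
# the ratio R = 18O/16O of a sample is reported as a per-mil deviation
# from a reference standard. The sample ratio below is hypothetical.
def delta_18o(r_sample, r_standard):
    """delta-18O in per mil relative to the given standard."""
    return (r_sample / r_standard - 1.0) * 1000.0

R_VSMOW = 2005.20e-6     # 18O/16O of the VSMOW reference standard
r_sample = 2010.00e-6    # hypothetical measured ratio

print(f"delta-18O = {delta_18o(r_sample, R_VSMOW):+.2f} per mil")
```

Small differences in this per-mil value between the oxygen-16, -17, and -18 systems are what allow meteorites to be grouped and classified, as described below.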
They studied variations in the abundances of the stable isotopes of oxygen, oxygen-16, oxygen-17 and oxygen-18, and deduced differences in the formation temperatures of the meteorites. They also worked on the mass spectrometry and chemistry of the Allende meteorite. They published many scientific papers on the "oxygen thermometer" and analysed approximately 300 lunar samples that had been collected during NASA's Apollo Program. In 1992, a new type of meteorite, the Brachinite, was identified. Clayton and Mayeda studied the Achondrite meteorites and showed that variations in the oxygen-17 isotope ratios within a planet are due to inhomogeneities in the Solar Nebula. They analysed Shergotty meteorites, proposing that there could have been a water-rich atmosphere on Mars, and studied the Bocaiuva meteorite, finding that the Eagle Station meteorite was formed due to impact heating. In 2002 Mayeda was awarded the Society Merit Prize from the Geochemical Society of Japan. In the same year, an asteroid was named after her. Mayeda's husband, Harry, died in 2003. Mayeda suffered from cancer and died on February 13, 2004. In 2008, the book Oxygen in the Solar System was dedicated to Clayton and Mayeda. References Further reading American people of Japanese descent American women chemists University of Chicago alumni University of Chicago faculty Geochemists 1923 births 2004 deaths People from Tacoma, Washington People from Yokkaichi People from Osaka Japanese-American internees 20th-century American women American women academics 21st-century American women
Toshiko Mayeda
[ "Chemistry" ]
709
[ "Geochemists" ]
58,077,092
https://en.wikipedia.org/wiki/Fosravuconazole
Fosravuconazole (trade name Nailin) is a triazole antifungal agent. In Japan, it is approved for the treatment of onychomycosis, a fungal infection of the nail. It is a prodrug that is converted into ravuconazole. Drugs for Neglected Diseases Initiative (DNDi) and the Japanese pharmaceutical company Eisai found that fosravuconazole works as a treatment for mycetoma, a serious condition. The Phase II clinical trial found that oral fosravuconazole was safe, patient-friendly, and effective in treating eumycetoma. Eumycetoma mainly affects young adults in poorer, rural areas; the standard treatment is itraconazole, which, at about US$2,000 per year, is much more expensive than fosravuconazole, unaffordable for many patients, and not available in all endemic countries. References Triazole antifungals Fluoroarenes Thiazoles Phosphinates Nitriles Prodrugs Japanese inventions
Fosravuconazole
[ "Chemistry" ]
229
[ "Chemicals in medicine", "Nitriles", "Functional groups", "Prodrugs" ]
58,077,499
https://en.wikipedia.org/wiki/Louis%20Vialleton
Louis Marius Vialleton (December 22, 1859 – December 18, 1929) was a French zoologist and writer, best known for his advocacy of non-Darwinian evolution. Career Vialleton was born in Vienne, Isère. He was the first professor of histology in the faculty of medicine at the University of Montpellier. Vialleton rejected any form of continuous evolution and favoured saltationism. Vialleton attempted to refute gradual transformism from a morphological perspective in his work Morphologie générale Membres et ceintures des vertébrés tétrapodes: Critique morphologique du transformisme (1924). Zoologist Étienne Rabaud responded with a critical article. He contributed the chapter Morphologie et transformisme to the book Le Transformisme (1927). Vialleton's views were often misrepresented by creationists as anti-evolutionary. His writings were influential among creationists such as Douglas Dewar. However, he did not reject evolution. He was also incorrectly described as a critic of evolution by A. Morley Davies. Vialleton was a vitalist. Publications Un monstre double humain du genre Ectopage (1892) Un Problème de l'Évolution: La Théorie de la Récapitulation des Formes Ancestrales au Cours du Développement Embryonnaire (Loi Biogénétique Fondamentale de Haeckel) (1908) Éléments de Morphologie des Vertébrés Anatomie et Embryologie Comparées, Paléontologie et Classification (1911) Membres et ceintures des Vertébrés Tétrapodes: Critique morphologique du transformisme (1924) Morphologie générale Membres et ceintures des vertébrés tétrapodes: Critique morphologique du transformisme (1924) Le Transformisme (1927) [with Élie Gagnebin, Lucien Cuénot, William Robin Thompson, Roland Dalbiez] L'origine Des Etres Vivants, L'illusion Transformiste (1929) See also The eclipse of Darwinism References 1859 births 1929 deaths French zoologists Non-Darwinian evolution Writers from Vienne, Isère University of Montpellier alumni Vitalists
Louis Vialleton
[ "Biology" ]
473
[ "Non-Darwinian evolution", "Biology theories" ]
58,078,797
https://en.wikipedia.org/wiki/HD%20133131
HD 133131 is a binary star in the constellation of Libra. It is roughly 168 light-years (51.5 parsecs) away from the Sun. It consists of two G-type main-sequence stars; neither is bright enough to be seen with the naked eye. Both components, HD 133131 A and B, are very similar to the Sun but are far older, about 6 billion years old. They also have low metallicities (50% of solar abundance), and HD 133131 A is additionally depleted in heavy elements compared to HD 133131 B, indicating a possible past planetary engulfment event for HD 133131 B. Planetary system In 2016 two planets orbiting HD 133131 A and one planet orbiting HD 133131 B were discovered using the radial velocity method. References Durchmusterung objects 073674 133131 G-type main-sequence stars Binary stars Libra (constellation) Planetary systems with three confirmed planets Multi-star planetary systems
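The two distance figures quoted above are related by the standard parsec-to-light-year conversion; a minimal sketch (the conversion factor is the standard IAU-derived value, not stated in the article):

```python
LY_PER_PARSEC = 3.26156  # standard IAU-derived conversion factor

def parsecs_to_light_years(parsecs):
    """Convert a distance in parsecs to light-years."""
    return parsecs * LY_PER_PARSEC

# 51.5 parsecs rounds to the article's figure of 168 light-years
print(round(parsecs_to_light_years(51.5)))  # 168
```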
HD 133131
[ "Astronomy" ]
207
[ "Libra (constellation)", "Constellations" ]
58,080,546
https://en.wikipedia.org/wiki/Sewer%20Murders
The Sewer Murders or "Sewage Plant Murders" were an unsolved series of murders of male adolescents in the Frankfurt Rhine-Main area during the 1970s and 1980s. Victims The killings took place from 1976 to 1983. The victims were seven boys and male adolescents aged between 11 and 18 from Frankfurt (likely Baseler Platz at the "Tivoli" arcade) or the Offenbach station area, where some of them may have worked as prostitutes and met the culprit. The boys' hands were tied behind their backs with a rope or cord, and they were then killed by apparent blunt force. For some, however, death presumably occurred by drowning in the sewerage. Due to long submersion in the sewage and, in part, severe damage to the corpses by screw conveyors, the victims were identified relatively late, and clear signs of blunt force trauma to the head were found on only one. Victim list 7 September 1976: Unidentified male (15–18 years), found in Stangenrod, Giessen. The naked corpse of a young man, wearing only socks, was found near a footpath in a forest between Atzenhain and Lehnheim during the military manoeuvre "Gordian Shield". The body was heavily mummified with partial skeletonization after lying there for at least six weeks. A violent skull fracture was found to be the probable cause of death. Since the identity of the decedent could not be clarified, the police assumed that he might have been a foreigner in transit through West Germany. 23 May 1982: Erik (17), Dreieich, Offenbach. Found in an oblique position behind an inflow. The body had significant damage, such as the right thigh being torn off, the pelvis and skull being smashed, and exposed bones. According to the autopsy report, the corpse was in an advanced state of decomposition with extensive adipocere growth. He had probably been lying there for over six months, and the cause of death could no longer be determined. 19 September 1982: Bernd Michel (17–18 years), Darmstadt-Erzhausen. The collecting rake was blocked by a clothed body. 
Michel was probably still alive when he was thrown into a manhole, and most likely drowned. The identification of the almost unrecognizable corpse was difficult. The young man was around 17 years old and was characterized by a clear overbite. He was a prostitute in Frankfurt. 2 July 1983: Markus Hildebrandt (17 years), Darmstadt-Erzhausen. A tattooed body was discovered in the sump of the Dreieich-Buchschlag sewage plant. According to the Offenbach police, the decedent was washed ashore by a sewage pipe. His hands were handcuffed, but there were no other externally visible injuries. The tattoos on the upper arms showed different motifs and the word "Fuck". Hildebrandt came from Hanau and had been involved in the Frankfurt heroin scene since 1981. Hildebrandt, who had spent much of his youth in congregate care, was in an apprenticeship and lived a "restless life" in Frankfurt. He is said to have occasionally engaged in prostitution. He was last seen in January 1983, accompanied by three men, and allegedly claimed to be travelling to Saarbrücken. 9 September 1983: Fuad Rahou (14 years), Niederrad. The body of the 14-year-old Moroccan boy was found in the Niederrad sewage treatment plant. At first, it was assumed that Rahou had drowned accidentally or inhaled marsh gases. Only later did it become clear that he must have been murdered. Rahou had been reported missing by his parents since 1 September 1983. 11 October 1983: Oliver Tupikas (11 years), Niederrad. The youngest victim was also found in Niederrad's sewage treatment plant. He was probably pushed down a manhole after being murdered. Traces of legcuffs were found on the body. Oliver had run away from home and had not been seen alive since. 21 June 1989: Daniel Schaub (14 years), Offenbach-Rosenhöhe. Bones and pieces of clothing from the presumed last victim were found in a tributary of the drainage system. The teenager had been missing since 1983. 
Possible motive The criminal psychologist Rudolf Egg suggested that the suspect might be a single person of about 50 years of age without family ties or friends. It is possible that the culprit himself had been a victim of sexual abuse and may therefore have developed a disturbed relationship with his own homosexuality or with other same-sex people. His inclinations apparently include sadistic bondage. The suspect likely moved from Giessen to Frankfurt at the end of the 1970s and lived out his fetishes in the local milieu. He was also familiar with the area and was highly mobile. The fact that he threw his victims into the sewerage after violating them is probably a hint of a deep-rooted hatred. Modus operandi The first murder is believed to have happened at the body's site of discovery. Only when he resumed killing could the suspect have figured out that throwing a dead or dying victim down the sewers was a more effective way to get rid of them. The quick disposal of the bodies allowed him to carry out his murders even within the densely populated Frankfurt area, without a risk of being caught. The victims were tied up, then the killer abused them and "disposed of them like garbage". Over weeks or even months, the bodies decomposed in the sewers. The dead usually remained undetected in the sewage system for a long time until they were eventually flushed into the sewage treatment plants, where they often blocked the screw pumps that separate the solid particles. The advanced decomposition of the bodies made the identification and the clarification of the factual circumstances in the investigation much more difficult. The first victim, for instance, was identified 2.5 years after discovery. Investigation Horst Kropp and the "AG 229" were entrusted with the investigation of sexually motivated murders of young people. 
For some time, a 40-year-old storeman from Offenbach, who had been convicted mainly of multiple counts of sexual misconduct toward minors and molestation, had been the prime suspect. He was known for enticing homeless teens to his summer house in Riederwald, where he performed sadistic sex games with them. He is said to have acted very brutally during these but bribed his victims with money to keep quiet about what he did to them. Investigators found out that the prime suspect and Markus Hildebrandt had visited the same gay bars in Frankfurt. However, this was not sufficient evidence, as the traces of blood found in the summer house did not match Hildebrandt's. In the home of the suspect, who had known two of the other victims alongside Hildebrandt, police secured a gas pistol, several knives, including a butcher knife, and handcuffs. Due to lack of evidence, however, no charges were filed. See also List of fugitives from justice who disappeared List of German serial killers List of unsolved murders Literature Stephan Harbort: Mörderisches Profil: Phänomen Serienkiller, Heyne Verlag, 2006, . References External links Film contribution, Kriminalreport Hessen. Abominable murder series in the Rhine-Main area Text contribution, Kriminalreport Hessen. Abominable murder series in the Rhine-Main area Mortuary finds in sewage treatment plant. Aktenzeichen XY, 24 February 1984 from 37:47 1976 murders in Germany 1982 murders in Germany 1983 murders in Germany 1989 murders in Germany Crimes against sex workers Fugitives Serial murders in Germany Sewerage Unidentified serial killers Unsolved murders in Germany Violence against men in Europe
Sewer Murders
[ "Chemistry", "Engineering", "Environmental_science" ]
1,614
[ "Sewerage", "Environmental engineering", "Water pollution" ]
58,081,140
https://en.wikipedia.org/wiki/Actino-ugpB%20RNA%20motif
The Actino-ugpB RNA motif is a conserved RNA structure that was discovered by bioinformatics. Actino-ugpB motifs are found in strains of the species Gardnerella vaginalis, within the phylum Actinomycetota. It is ambiguous whether Actino-ugpB RNAs function as cis-regulatory elements or whether they operate in trans. Many of the RNAs are upstream of the gene 'ugpB', which encodes a protein putatively involved in sugar transport. However, several of the RNAs are not located upstream of a protein-coding gene. Structurally, the motif consists of two hairpins with conserved nucleotides located in the stems and outside of the hairpins, but not in their terminal loops. References RNA Bioinformatics
Actino-ugpB RNA motif
[ "Chemistry", "Engineering", "Biology" ]
168
[ "Biological engineering", "Bioinformatics stubs", "Biotechnology stubs", "Biochemistry stubs", "Bioinformatics" ]
58,081,544
https://en.wikipedia.org/wiki/Clostridiales-2%20RNA%20motif
The Clostridiales-2 RNA motif is a conserved RNA structure that was discovered by bioinformatics. Clostridiales-2 motifs are found in Clostridiales. Clostridiales-2 RNAs likely function in trans as sRNAs, and are often (but not always) preceded and also followed by Rho-independent transcription terminators. References Non-coding RNA
Clostridiales-2 RNA motif
[ "Chemistry" ]
87
[ "Biochemistry stubs", "Molecular and cellular biology stubs" ]
58,081,550
https://en.wikipedia.org/wiki/Clostridiales-3%20RNA%20motif
The Clostridiales-3 RNA motif is a conserved RNA structure that was discovered by bioinformatics. Clostridiales-3 motifs are found in Clostridiales. Clostridiales-3 RNAs likely function in trans as sRNAs and, structurally, largely consist of several hairpins. References Non-coding RNA
Clostridiales-3 RNA motif
[ "Chemistry" ]
75
[ "Biochemistry stubs", "Molecular and cellular biology stubs" ]
53,561,725
https://en.wikipedia.org/wiki/Eurypelmella
Eurypelmella is a nomen dubium (doubtful name) for a genus of spiders in the family Theraphosidae. It has been regarded as a synonym for Schizopelma, but this was disputed in 2016. References Theraphosidae Historically recognized spider taxa Nomina dubia
Eurypelmella
[ "Biology" ]
64
[ "Biological hypotheses", "Nomina dubia", "Controversial taxa" ]
53,562,736
https://en.wikipedia.org/wiki/Universal%20Orbital%20Support%20System
A Universal Orbital Support System is a concept for suspending an object from a tether orbiting in space. Explanation Background The concept provides space-based support to objects suspended above an astronomical body. It is envisioned as a type of non-rotating tethered satellite system. The orbital system is a coupled mass system wherein the upper supporting mass (A) is placed in an orbit around a given celestial body such that it can support a suspended mass (B) at a specific height above the surface of the celestial body, but lower than (A). The relationship between (A) and (B) is such that (A) moves higher as (B) is lowered towards the surface; the distances each mass moves are inversely proportional to their masses. Example This system has been proposed for the Analemma Tower concept, which employs the system to suspend a building from a cable supported by an asteroid orbiting Earth. See also Space elevator Space tether Asteroid List of orbits Natural satellite Quasi-satellite References Space elevator Spaceflight concepts
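The inverse-mass proportion described above follows if the coupled system's center of mass is assumed to stay on a fixed orbit; a minimal sketch under that assumption, with purely hypothetical masses:

```python
def rise_of_support_mass(m_a, m_b, lowering_of_b):
    """Distance the supporting mass (A) rises when the suspended mass (B)
    is lowered by `lowering_of_b`, keeping the system's center of mass
    at a fixed orbital radius: m_a * rise = m_b * lowering."""
    return (m_b / m_a) * lowering_of_b

# Hypothetical: a 1e9 kg supporting asteroid and a 5e7 kg suspended load
print(rise_of_support_mass(1e9, 5e7, 100.0))  # A rises 5.0 m when B drops 100 m
```

The heavier the supporting mass relative to the load, the less it shifts as the load is lowered, which is why a massive asteroid is attractive as the upper anchor.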
Universal Orbital Support System
[ "Astronomy", "Technology" ]
208
[]
53,563,061
https://en.wikipedia.org/wiki/Kousha%20Etessami
Kousha Etessami is a professor of computer science at the University of Edinburgh, Scotland, UK. He received his Ph.D. from the University of Massachusetts Amherst in 1995. He works on theoretical computer science, in particular on computational complexity theory, game theory and probabilistic systems. References External links Year of birth missing (living people) Living people Computer scientists Academics of the University of Edinburgh Theoretical computer scientists
Kousha Etessami
[ "Technology" ]
88
[ "Computer science", "Computer scientists" ]
53,563,270
https://en.wikipedia.org/wiki/Ebadollah%20S.%20Mahmoodian
Ebadollah S. Mahmoodian (Seyyed Ebadollah Mahmudian; 18 May 1943 – 28 December 2024) was an Iranian academic and mathematician who was professor of mathematics at the Mathematical Sciences Department of Sharif University of Technology. Life and career Mahmoodian was a professor of mathematics at the Mathematical Sciences Department of Sharif University of Technology for 41 years starting from 1983. He co-edited Combinatorics Advances. Mahmoodian contributed to graph theory, in particular graph colouring. He also worked on combinatorial designs, in particular defining sets, and on the relations between these areas. Mahmoodian died on 28 December 2024, at the age of 81. References Further reading External links Publications on Google scholar Webpage at Sharif University 1943 births 2024 deaths 20th-century Iranian mathematicians Graph theorists Academic staff of Sharif University of Technology People from Zanjan, Iran Iranian Science and Culture Hall of Fame recipients in Mathematics and Physics
Ebadollah S. Mahmoodian
[ "Mathematics" ]
201
[ "Mathematical relations", "Graph theory", "Graph theorists" ]
53,564,323
https://en.wikipedia.org/wiki/Bioclogging
Bioclogging or biological clogging refers to the blockage of pore space in soil by microbial biomass, including active cells and their byproducts such as extracellular polymeric substance (EPS). The microbial biomass obstructs pore spaces, creating an impermeable layer in the soil and significantly reducing water infiltration rates. Bioclogging occurs under continuous ponded infiltration at various field conditions such as artificial recharge ponds, percolation trenches, irrigation channels, sewage treatment systems, constructed wetlands, landfill liners and natural systems such as riverbeds and soils. It also affects groundwater flow in aquifers, in systems such as ground source heat pumps, permeable reactive barriers, and microbial enhanced oil recovery. Bioclogging is a significant problem where water infiltration is hampered; countermeasures such as regular drying of the system can reduce the level of bioclogging. However, bioclogging can also serve beneficial purposes in specific conditions. For instance, bioclogging can be utilized to create an impermeable layer to minimize the rate of infiltration or to enhance soil mechanical properties. General description Change in permeability with time Bioclogging is observed as a decrease in the infiltration rate. A decrease in the infiltration rate under ponded infiltration was observed in the 1940s in studies of the infiltration of artificial recharge ponds and water-spreading on agricultural soils. Allison described that when soils are continuously submerged, permeability or saturated hydraulic conductivity changes in three key stages: After initiating field or laboratory tests, the permeability decreases to a minimum. On highly permeable soils this initial decrease is small, or nonexistent, but for relatively impermeable soils, permeability decreases for 10 to 20 days, possibly due to physical changes in the structure of the soil. 
Permeability increases due to the dissolution of entrapped air in the soil into the percolating water. Permeability decreases for 2 to 4 weeks due to the disintegration of aggregates and biological clogging of soil pores with microbial cells and their synthesized products, slimes, or polysaccharides. This description is based on experiments conducted at that time, and the actual process of bioclogging depends on system conditions, such as nutrient and electron acceptor availability, microbial biofilm formation propensity, initial conditions, etc. In particular, the three stages are not necessarily distinct in every field condition of bioclogging; the second stage may be absent, with permeability simply continuing to decrease. Various types of bioclogging The change in permeability with time is dependent on the field condition, and there are various causes for the change in the hydraulic conductivity, including physical (suspended solids, disintegration of aggregate structure, etc.), chemical (dispersion and swelling of clay particles), and biological causes (as listed below). Usually bioclogging means the first of the following, while bioclogging in a broader sense means all of the following. Microbial cell bodies (such as bacteria, algae and fungi) and their synthesized byproducts such as extracellular polymeric substance (EPS, also referred to as slime), which form biofilms or microcolony aggregations on soil particles, are direct biological causes of the decrease in hydraulic conductivity. Entrapment of gas bubbles, such as methane produced by methane-producing microorganisms, clogs the soil pores and contributes to decreasing hydraulic conductivity. As gas is also a microbial byproduct, it can also be considered to be bioclogging. Iron bacteria stimulate ferric oxyhydroxide deposition, which may cause clogging of soil pores. This is an indirect biological cause of the decrease in hydraulic conductivity. 
Bioclogging is mostly observed in saturated conditions, but bioclogging in unsaturated conditions has also been studied. Field observation Field problems and countermeasures Bioclogging is a significant issue in various environmental and artificial water systems. Here are some specific field problems related to bioclogging and their potential countermeasures. Bioclogging commonly occurs during continuous ponded infiltration in such places as artificial recharge ponds and percolation trenches. Reduction of the infiltration rate due to bioclogging at the infiltrating surface reduces the efficiency of such systems. To minimize the bioclogging effects, pretreatment of the water to reduce suspended solids, nutrients, and organic carbon might be necessary. Regular drying and physical removal of the clogging layer can be an effective countermeasure. Similarly, septic drain fields are prone to bioclogging, primarily due to the continuous flow of nutrient-rich wastewater. The organic material causing bioclogging in the septic tank is sometimes called biomat. Pretreatment of water by filtration or reducing the load on the system could delay the failure of the system by bioclogging. Slow sand filter systems also suffer from bioclogging. Besides the countermeasures mentioned above, cleaning or backwashing of the sand may be performed to remove biofilm and recover the permeability of the sand. In river systems, bioclogging can significantly impact aquifer recharge, particularly in dry regions where losing rivers are prevalent. As a result of bioclogging, the connection between surface water and groundwater in riverine systems is affected. The development of a biofilm-induced clogging layer can lead to disconnection, changing the natural water flow patterns between rivers and aquifers. Bioclogging is also a concern in aquifers, particularly when water is extracted through water wells below the groundwater table. 
Over months and years of continued operation of water wells, they may show a gradual reduction in performance due to bioclogging or other clogging mechanisms. Bioclogging may also affect the sustainable operation of ground source heat pumps. Common approaches to treating bioclogging include utilizing phosphate, a critical nutrient for iron-bacteria biofilms, and employing chlorine and fungicides to address bacterial issues. Backwashing is a common method to deal with clogging in general, including bioclogging. Benefits In certain environments, bioclogging positively influences hydrological processes. Here are some examples. Bioclogging plays a crucial role in sealing the bottoms of stabilization ponds for dairy farm wastewater treatment. Similarly, irrigation channels for seepage control may be inoculated with algae and bacteria to promote bioclogging for reducing water loss. Turning to landfill liners, such as compacted clay liners, bioclogging emerges as a beneficial factor. Clay liners are usually used in landfills to minimize pollution from landfill leachate to the surrounding soil environment. The hydraulic conductivity of clay liners becomes lower than the original value due to bioclogging, which is caused by microorganisms in the leachate clogging the pore spaces in the clay. Bioclogging is a common occurrence in constructed wetlands, which are engineered for treating various contaminated waters. Notably, in wetlands with subsurface horizontal flow, preferential flow paths avoiding the clogged part can improve the system treatment efficiency. Biofilm formation plays a crucial role in bioremediation, particularly in treating biodegradable groundwater pollution. A permeable reactive barrier is formed to contain the groundwater flow by bioclogging and also to degrade pollution by microbes. Contaminant flow should be carefully analyzed, because a preferential flow path in the barrier may reduce the efficiency of the remediation. 
In the extraction of petroleum, enhanced oil recovery techniques are applied to maximize oil extraction from oil fields. The injected water displaces the oil in the reservoir, which is transported to recovery wells. As the reservoir is not uniform in permeability, injected water tends to go through a highly permeable zone and does not go through the zone where oil remains. In this situation, the bacterial profile modification technique, which injects bacteria into the highly permeable zone to promote bioclogging, can be employed. It is a type of microbial enhanced oil recovery. The potential of bioclogging in geotechnical engineering is under exploration, particularly for improving soil mechanical properties. This involves strategies like reducing porosity and hydraulic conductivity, and enhancing shear strength through biocementation, thereby optimizing the soil for construction and environmental applications. See also Biofilm Hydraulic conductivity Landfill liner Microbial enhanced oil recovery Septic tank Slow sand filter References Soil Environmental soil science Soil physics Microbiology
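The permeability changes discussed throughout this article are commonly quantified with Darcy's law, q = K·i; as an illustrative sketch (all values hypothetical), a drop in hydraulic conductivity K caused by clogging translates directly into a lower infiltration flux:

```python
def darcy_flux(hydraulic_conductivity, hydraulic_gradient):
    """Darcy flux q = K * i (m/s); K in m/s, i dimensionless."""
    return hydraulic_conductivity * hydraulic_gradient

gradient = 1.0    # unit hydraulic gradient, typical of ponded infiltration
k_clean = 1e-4    # hypothetical clean-sand conductivity, m/s
k_clogged = 1e-6  # hypothetical conductivity after biofilm growth, m/s

reduction = darcy_flux(k_clean, gradient) / darcy_flux(k_clogged, gradient)
print(reduction)  # ~100-fold reduction in infiltration flux
```

Because flux scales linearly with K under a fixed gradient, even a modest biofilm layer with much lower conductivity can dominate the overall infiltration rate of a recharge pond or filter bed.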
Bioclogging
[ "Physics", "Chemistry", "Biology", "Environmental_science" ]
1,797
[ "Applied and interdisciplinary physics", "Microbiology", "Soil physics", "Microscopy", "Environmental soil science" ]
63,316,464
https://en.wikipedia.org/wiki/Bhutan%20Biodiversity%20Portal
Bhutan Biodiversity Portal (འབྲུག་སྐྱེ་ལྡན་རིགས་སྣ་འཆར་སྒོ།) is a consortium-based citizen science website run by key biodiversity data-generating agencies and open to anyone. The portal is an official online repository of data on Bhutanese biodiversity. History Access to updated and reliable information on the biodiversity of Bhutan for effective conservation was a major problem. The Bhutan Biodiversity Portal was created to address this issue. In 1992 in Rio de Janeiro, a project agreement was initiated under the framework of the United Nations Conference on Environment and Development, which was formalized in 1994. The Bhutan Integrated Biodiversity Information System (BIBIS) was created subsequently in 2002. The aim of the information system was to create a biodiversity platform that would be accessible to anyone interested in the biodiversity of Bhutan. BIBIS was later upgraded and developed into a web-based portal in 2008. Since 2011 the portal has been upgraded and developed into its present form, the present Bhutan Biodiversity Portal. The present portal was officially launched on 17 December 2013, coinciding with the National Day of Bhutan, by the then Minister of Agriculture and Forests, Lyonpo Yeshi Dorji. Features Species pages The species page feature of the Portal provides curated and updated information on various taxa found in the country. Editing and creation of the species pages is limited to approved curators and admins only. Observations The observations feature of the Portal provides a platform for users to record observations of various taxa from within the country. This section of the Portal promotes user participation in documenting the biodiversity of Bhutan in the form of images, audio and videos. As of 8 March 2020, the portal had a total of 64,585 observations. Users can add to the observations through the contribute link in the menu. 
Besides the image, sound and video observations, users can add checklists, documents and datasets. Maps The Maps module of the Portal provides various geo-spatial information through an interactive user interface, displayed in the form of layers. Currently the Portal provides several map layers. Documents Discussions Datasets Groups Contribution Contributions to the portal can be made in the following ways: Adding/editing a species page (needs special permission) Adding an observation (multiple observations can be added at once) Adding a list Adding documents Adding a dataset Adding a trait/value Adding a fact Adding a data package Technology The portal uses the open-source Biodiversity Informatics Platform codebase developed and maintained by Strand Life Sciences. Strand Life Sciences and the India Biodiversity Portal provide all the technical backstopping, assisted by the Information and Communication Services Division under the Ministry of Agriculture and Forests. Platforms Users of the portal can interact with the portal in various ways. They can access or contribute to the portal through: the web-based portal accessible at https://biodiversity.bt the Android app available on the Play Store the iOS app available on the App Store The Consortium The consortium is supported by the following national and international organizations. National Biodiversity Centre (Bhutan) The National Biodiversity Centre (Bhutan) is the secretariat for the consortium. The National Biodiversity Centre is a government institution under the Ministry of Agriculture and Forests of Bhutan. Information and Communication Services Division The Information and Communication Services Division under the Ministry of Agriculture and Forests was established under the Ministry in 1992. The Division currently provides technological backstopping to the Portal. College of Natural Resources The College of Natural Resources (Bhutan) is a college under the Royal University of Bhutan. 
Department of Forests and Park Services The Department of Forests and Park Services of Bhutan is represented by the following institutions. Ugyen Wangchuck Institute for Conservation and Environmental Research The Ugyen Wangchuck Institute for Conservation and Environmental Research is a research-based training institute under the Department of Forests and Park Services of Bhutan. Nature Conservation Division The Nature Conservation Division is one of the five functional divisions under the Department of Forests and Park Services, established in 1992. It currently acts as the focal point for protected areas and biodiversity conservation in Bhutan. WWF Bhutan Awards The National Biodiversity Centre (Bhutan), as the secretariat to the consortium, received the Information and Communications Technology (ICT) for Mountain Development Award 2018 on the occasion of International Mountain Day on 11 December 2018. References External links National Biodiversity Centre Ministry of Agriculture and Forests Biology websites Citizen science Internet properties established in 2013 Biodiversity databases Science and technology in Bhutan Biodiversity of Bhutan Wild animals identification
Bhutan Biodiversity Portal
[ "Biology", "Environmental_science" ]
876
[ "Biodiversity databases", "Environmental science databases", "Biodiversity" ]
63,317,783
https://en.wikipedia.org/wiki/Oppo%20Find%20X2
The Oppo Find X2 and Find X2 Pro are Android-based smartphones manufactured by Oppo, unveiled on 6 March 2020. Specifications Design The Find X2 and Find X2 Pro are constructed using an anodized aluminum frame and curved Gorilla Glass 6 on the front. The volume buttons are located on the left side opposite the power button, which has a green accent. Oppo offers two choices of material for the back panel on both phones: the Find X2 is available with either glass or ceramic, and the Find X2 Pro is available with either ceramic or artificial leather. The leather model has a metallic plate with a vertically positioned Oppo logo in the lower-left-hand corner. Unlike the Find X, there is no pop-up camera mechanism; instead both have a circular cutout for the front-facing camera. The rear camera module houses a rectangular array above the LED flash, and protrudes slightly. The Find X2 is splash-proof with an IP54 rating, while the Find X2 Pro has full IP68 water resistance. The Find X2 has an Ocean finish for the glass model and a Black finish for the ceramic model. The Find X2 Pro likewise has a Black finish for the ceramic model, but receives unique Orange, Gray and Green color options for the leather model. Hardware Both the Find X2 and Find X2 Pro use the Snapdragon 865 processor with the Adreno 650 GPU. Storage is non-expandable UFS 3.0; the Find X2 has 128 or 256 GB, while the Find X2 Pro has 256 or 512 GB. The Find X2 has 8 or 12 GB of RAM, while the Find X2 Pro has 12 GB of RAM; both have LPDDR5. The display is an OLED panel with HDR10+ support, identical to the OnePlus 8 Pro's. Both use a 6.7-inch (170 mm) 1440p screen with a 19.8:9 aspect ratio and a 120 Hz refresh rate. The display is also capable of showing 1 billion colors. Both have stereo speakers with active noise cancellation, and there is no audio jack. The Find X2's battery capacity is 4200 mAh, while the Find X2 Pro's is marginally larger at 4260 mAh. 
Both smartphones support wired fast charging at 65W enabled by a dual-cell design, although wireless charging is not supported. Finally, biometric options include an optical (under-screen) fingerprint sensor and facial recognition. Camera The Find X2's camera array consists of a 48 MP wide sensor, a 12 MP ultrawide sensor, and a 13 MP telephoto sensor, while the Find X2 Pro's camera array consists of a 48 MP wide sensor, a 48 MP ultrawide sensor and a 13 MP telephoto sensor. The telephoto camera differs between the Find X2 and Find X2 Pro although they have the same resolution. The Find X2's sensor is a conventional lens with 2x optical zoom, whereas the Find X2 Pro's sensor is a "periscope" lens with 5x optical zoom. The Find X2's wide sensor is the Sony IMX586, while the Find X2 Pro's wide sensor is the newer Sony IMX689. The front camera on both uses a 32 MP sensor. Software The Find X2 and Find X2 Pro run on ColorOS 7.1, which is based on Android 10. Reception The Find X2 Pro received an overall score of 124 from DXOMARK, with a photo score of 134 and video score of 104, the third-highest ranking as of May 2020. References External links Find X2 Android (operating system) devices Mobile phones introduced in 2020 Mobile phones with multiple rear cameras Mobile phones with 4K video recording Discontinued flagship smartphones
Oppo Find X2
[ "Technology" ]
802
[ "Phablets", "Crossover devices", "Discontinued flagship smartphones", "Flagship smartphones" ]
63,319,956
https://en.wikipedia.org/wiki/Korean%20Genome%20Project
Korean Genome Project (Korea1K) is the largest genome sequencing project in Korea, first launched in 2015 as part of the Genome Korea in Ulsan. As of 2021, the project has sequenced over 10,000 human genomes and is the first large-scale database for constructing a genetic map and performing diversity analysis of Koreans. History KGP originated from the national initiative of sequencing the reference Korean and whole population genomes in 2006 by KOBIC, KRIBB and NCSRD, KRISS, Daejeon in Korea. From 2009, KGP was supported by the Genome Research Foundation and TheragenEtex to build the Variome of Koreans as well as the Korean Reference Genome (KOREF). Building on KOREF, a consensus variome reference providing information on millions of variants from 40 additional ethnically homogeneous genomes from the Korean Personal Genome Project was completed in 2017. An improved version of KOREF, constructed using long-read sequencing data produced by Oxford Nanopore PromethION and PacBio technologies, was then released, showcasing newer assembly technologies and techniques. In 2022 a new chromosome-level haploid assembly of KOREF was published, assembled using Oxford Nanopore Technologies PromethION, Pacific Biosciences HiFi-CCS, and Hi-C technology. Since 2014, KGP has been supported by Ulsan National Institute of Science and Technology, Clinomics, and Ulsan City, Ulsan, Korea. Science & development Korea1K has used sequencing technologies such as MGI DNBSEQ-T7 and Illumina HiSeq2000, HiSeq2500, HiSeq4000, HiSeqX10, and NovaSeq6000. The variome data has served as a reference for studying the origin and composition of Korean ethnicity in comparison with ancient DNA sequences. Korea1K released 1,094 Korean whole genome sequences on 27 May 2020, published in Science Advances. 
In April 2024, Korea4K was published, making whole genome sequences of 4,157 Koreans publicly accessible alongside an imputation reference panel and 107 phenotypes derived from extensive health check-ups. References External links KoreanGenome.org Opengenome.net www.srd.re.kr 1000genomes.kr Genome projects
Korean Genome Project
[ "Biology" ]
488
[ "Genome projects" ]
63,320,152
https://en.wikipedia.org/wiki/Snaptube
Snaptube is a free Android app that downloads video and audio and also works as a social media aggregator. It provides video downloads at resolutions of 144p, 720p, 1080p, 2K and 4K, and audio in MP3 and M4A formats. With Snaptube, users can search for content across platforms such as Facebook, Instagram and TikTok without using numerous separate apps. As of June 2020, the application is used by over 100 million users. In 2019, Upstream warned that users are served invisible ads without their knowledge that run silently on the device, allowing the app maker to generate ad revenue at the expense of the user's mobile data and battery power. According to Upstream, their Secure-D platform detected and blocked "more than 70 million suspicious mobile transaction requests" from SnapTube installs on 4.4 million devices. After Google pulled the application from the Play Store, Snaptube blamed a third-party software development kit called Mango SDK, with the developer claiming to have removed the offending SDK. The company took immediate action and released an update which removed Mango SDK from subsequent versions. Mango was also found to engage in fraudulent behavior in other apps. According to Upstream, this third-party SDK downloads additional components from a central server to engage in this fraudulent ad activity and uses chains of redirection and obfuscation to hide its activity. References 2014 software Mobile applications
Snaptube
[ "Technology" ]
300
[ "Mobile software stubs", "Mobile technology stubs" ]
63,321,233
https://en.wikipedia.org/wiki/Stratingh%20Institute%20for%20Chemistry
The Stratingh Institute for Chemistry is a research institute of the Faculty of Science and Engineering of the University of Groningen (The Netherlands). It is named after Sibrandus Stratingh, who is known for being the inventor of the first battery-powered electric car. As of 2020, about 150 people (from over 30 nationalities) are employed within the Stratingh Institute for Chemistry. The staff members include Ben Feringa, who won the 2016 Nobel Prize in Chemistry "for the design and synthesis of molecular machines", Nathalie Katsonis and Sijbren Otto. The institute is currently located on the Zernike Campus in Groningen, in the Feringa Building and Linnaeusborg. Research topics The research carried out within the institute falls within the following research areas: chemistry of life: the study of biological phenomena and medicinally relevant problems from a molecular perspective. This comprises the chemical synthesis of complex natural products, the design and synthesis of small molecules to study and steer biochemical processes, and steps towards the creation of new life (mimicking abiogenesis). chemical conversion: the development and synthesis of new catalysts. This includes asymmetric catalysis and catalytic oxidation, designing (artificial) enzymes, the use of bio-based raw material and development of sustainable processes, and homogeneous catalysis methods using earth-abundant metals. chemistry of materials: the development of molecular switches and motors, photovoltaics, functional polymers, molecular electronics, supramolecular materials, functional surfaces and synthetic membranes. External links University of Groningen References Research institutes in the Netherlands Chemical research institutes University of Groningen
Stratingh Institute for Chemistry
[ "Chemistry" ]
330
[ "Chemical research institutes" ]
63,321,569
https://en.wikipedia.org/wiki/Boris%20Kadomtsev
Boris Borisovich Kadomtsev (; 9 November 1928 – 19 August 1998) was a Soviet and Russian plasma physicist who worked on controlled fusion problems (e.g. tokamaks). He developed a theory of transport phenomena in turbulent plasmas and a theory of the so-called anomalous behavior of plasmas in magnetic fields. In 1966, he discovered plasma instability with trapped particles. In 1970, Kadomtsev and Vladimir Petviashvili introduced into plasma physics the Kadomtsev–Petviashvili equation (KP equation), a nonlinear partial differential equation with applications in theoretical physics and complex analysis. The exact solution to the KP equation was later found by Vladimir Zakharov and , which helped solve the Schottky problem. Early life and career Kadomtsev graduated from Moscow State University in 1951. He then worked at the Institute for Physics and Energy in Obninsk. Starting in 1956, he worked at the Institute for Atomic Energy. From 1976 to 1998, he was the chief editor of the journal Physics-Uspekhi ("Successes in Physics"). From 1973 until his death in 1998, he was the chair of the plasma physics section of the state committee for the use of nuclear energy. He died on 19 August 1998 and was buried at Troyekurovskoye Cemetery. Honors and awards In 1970, he received the USSR State Prize for the study of the instability of a high-temperature plasma in a magnetic field and the creation of a method for its stabilization with a “magnetic well”. In 1984, he was awarded the Lenin Prize for his work on the "Theory of Thermonuclear Toroidal Plasma". He was also honored with the Order of the Red Banner of Labour. He was elected a corresponding member of the Soviet Academy of Sciences in 1962 and a full member in 1970. 
In 1998, he received the American Physical Society's James Clerk Maxwell Prize for Plasma Physics for "fundamental contributions to plasma turbulence theory, stability and nonlinear theory of MHD and kinetic instabilities in plasmas, and for international leadership in research and teaching of plasma physics and controlled thermonuclear fusion physics". Books References 1928 births 1998 deaths Full Members of the Russian Academy of Sciences Full Members of the USSR Academy of Sciences Members of the Royal Swedish Academy of Sciences Moscow State University alumni Recipients of the Lenin Prize Recipients of the Order of the Red Banner of Labour Recipients of the USSR State Prize Plasma physicists Russian physicists Soviet physicists Burials in Troyekurovskoye Cemetery
Boris Kadomtsev
[ "Physics" ]
522
[ "Plasma physicists", "Plasma physics" ]
63,321,991
https://en.wikipedia.org/wiki/Gregor%20Morfill
Gregor Eugen Morfill (born 23 July 1945 in Oberhausen, Germany) is a German physicist who works in basic astrophysical research and deals with complex plasmas and plasma medicine. Early life and career Gregor Morfill moved to England in 1961. There, he completed his school education and began studying physics at Imperial College London in 1964. In 1967, he graduated with a Bachelor of Science. In 1968, he received a diploma from Imperial College London and in 1971 he received a PhD with his work Satellite studies of energetic particles above the atmosphere. He then went to the Max Planck Institute for Extraterrestrial Physics in Garching. In 1977, he did his post-doctorate at Heidelberg University. In 1975, he received a professorship at the Max Planck Institute for Nuclear Physics in Heidelberg. In 1983, he headed the Theoretical Astronomy Program at the University of Arizona. In 1984, he became director of the Max Planck Institute for Extraterrestrial Physics. Since 2011, he has been on the scientific advisory board of Bauman University in Moscow. In the same year, he was co-founder of the company terraplasma in Garching near Munich, which develops devices and processes that use cold plasmas for wound healing, among other things. Scientific contributions Morfill is the author or co-author of over 500 scientific publications and a popular science book on chaos theory. In addition to his astrophysical work, Gregor Morfill has made important contributions to the subject of "dusty complex plasmas" (with applications to space plasmas and the explanation of the structure of Saturn's rings), to the discovery in 1994 of plasma crystals as a solid state of aggregation of dusty plasmas, and to the microscopic analysis of the melting process in plasma crystals. 
He also participates in space plasma experiments with the International Space Station (ISS), such as the experiment PKE-Nefedov (2001–2005) in cooperation with the Russian space agency and the Institute for High Energy Densities (IHED, JIHT) in Moscow. Morfill also researches applications of plasma in medicine such as in the treatment of chronic wounds. Honors and awards 1998: Bavarian Prime Minister's recognition award 1998: of the Stifterverband für die Deutsche Wissenschaft 1999: Member of the Russian Academy of Sciences 2003: Honorary doctorate from the Technische Universität Berlin 2007: of the Russian space agency Roscosmos 2010: Fellow of the Institute of Physics 2011: James Clerk Maxwell Prize for Plasma Physics from the American Physical Society for "pioneering, and seminal contributions to, the field of dusty plasmas, including work leading to the discovery of plasma crystals, to an explanation for the complicated structure of Saturn's rings, and to microgravity dusty plasma experiments conducted first on parabolic-trajectory flights and then on the International Space Station." Patten Prize Gagarin Medal URGO Foundation for Advances in Dermatology Award References 1945 births Living people 20th-century German physicists Plasma physicists Alumni of Imperial College London People from Oberhausen 21st-century German physicists
Gregor Morfill
[ "Physics" ]
626
[ "Plasma physicists", "Plasma physics" ]
63,322,305
https://en.wikipedia.org/wiki/Keeper%20%28chemistry%29
Keepers are substances (typically solvents, but sometimes adsorbent solids) added in relatively small quantities during an evaporative procedure in analytical chemistry, such as concentration of an analyte-solvent mixture by rotary evaporation. The purpose of a keeper is to reduce losses of a target analyte during the procedure. Keepers typically have reduced volatility and are added to a more volatile solvent. In the case of volatile target analytes, it is difficult to totally avoid loss of the analyte in an evaporative procedure, but the presence of a keeper solvent or solid is intended to preferentially solvate or adsorb the analyte, so that the volatility of the analyte is reduced as the evaporative procedure continues. In the case of non-volatile target analytes, the presence of the keeper solvent or solid is intended to prevent all the solvent from being evaporated off, thereby preventing the loss of analytes which might irreversibly adsorb to the container walls when completely dried, or if it is totally dried (in the case of a solid keeper), provide a surface where the analyte can be reversibly rather than irreversibly adsorbed. A solid keeper of sodium sulfate has been shown to be effective for reducing losses of polycyclic aromatic hydrocarbons (PAHs) in evaporative procedures. Solvents commonly used as keepers The following solvents are commonly used as keepers: References Analytical chemistry
Keeper (chemistry)
[ "Chemistry" ]
306
[ "nan", "Analytical chemistry stubs" ]
63,325,324
https://en.wikipedia.org/wiki/Cobalt%28II%29%20hydride
Cobalt(II) hydride is an inorganic compound with a chemical formula CoH2. It has dark grey crystals. It oxidizes slowly in air and reacts with water. Two forms of cobalt(II) hydride exist under high pressure. From 4 to 45 GPa there is a face-centred cubic form with formula CoH. This can be decompressed at low temperatures to form a metastable compound at atmospheric pressure. Over 45 GPa a cobalt(II) hydride CoH2 also crystallises in a face-centred cubic form. Preparation Cobalt(II) hydride can be prepared by reacting phenylmagnesium bromide and cobalt(II) chloride in hydrogen gas: CoCl2 + 2 C6H5MgBr + 2 H2 → CoH2 + 2 C6H6 + MgBr2 + MgCl2 References Cobalt(II) compounds Metal hydrides
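As a quick sanity check, the preparation equation above is balanced. A short script (an illustration, not part of the source) can verify the element counts on each side:

```python
import re
from collections import Counter

def parse(formula):
    """Count atoms in a simple formula such as 'C6H5MgBr' (no parentheses)."""
    counts = Counter()
    for elem, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[elem] += int(num) if num else 1
    return counts

def side(terms):
    """Sum atom counts over (coefficient, formula) pairs."""
    total = Counter()
    for coeff, formula in terms:
        for elem, n in parse(formula).items():
            total[elem] += coeff * n
    return total

reactants = side([(1, "CoCl2"), (2, "C6H5MgBr"), (2, "H2")])
products = side([(1, "CoH2"), (2, "C6H6"), (1, "MgBr2"), (1, "MgCl2")])
print(reactants == products)  # True: the equation is balanced
```

Both sides come to Co1 Cl2 C12 H14 Mg2 Br2, confirming the stoichiometry given above.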
Cobalt(II) hydride
[ "Chemistry" ]
195
[ "Metal hydrides", "Inorganic compounds", "Reducing agents" ]
63,326,139
https://en.wikipedia.org/wiki/Integrally%20convex%20set
An integrally convex set is the discrete geometry analogue of the concept of convex set in geometry. A subset X of the integer grid is integrally convex if any point y in the convex hull of X can be expressed as a convex combination of the points of X that are "near" y, where "near" means that the distance between each two coordinates is less than 1. Definitions Let X be a subset of the integer grid Z^n. Denote by ch(X) the convex hull of X. Note that ch(X) is a subset of R^n, since it contains all the real points that are convex combinations of the integer points in X. For any point y in R^n, denote near(y) := {z in Z^n | |z_i - y_i| < 1 for all i in {1,...,n} }. These are the integer points that are considered "nearby" to the real point y. A subset X of Z^n is called integrally convex if every point y in ch(X) is also in ch(X ∩ near(y)). Example Let n = 2 and let X = { (0,0), (1,0), (2,0), (2,1) }. Its convex hull ch(X) contains, for example, the point y = (1.2, 0.5). The integer points nearby y are near(y) = {(1,0), (2,0), (1,1), (2,1) }. So X ∩ near(y) = {(1,0), (2,0), (2,1)}. But y is not in ch(X ∩ near(y)). See image at the right. Therefore X is not integrally convex. In contrast, the set Y = { (0,0), (1,0), (2,0), (1,1), (2,1) } is integrally convex. Properties Iimura, Murota and Tamura have shown the following property of integrally convex sets. Let X be a finite integrally convex subset of Z^n. There exists a triangulation of ch(X) that is integral, i.e.: The vertices of the triangulation are the vertices of X; The vertices of every simplex of the triangulation lie in the same "cell" (hypercube of side-length 1) of the integer grid Z^n. The example set X is not integrally convex, and indeed ch(X) does not admit an integral triangulation: every triangulation of ch(X) either has to add vertices not in X, or has to include simplices that are not contained in a single cell. In contrast, the set Y = { (0,0), (1,0), (2,0), (1,1), (2,1) } is integrally convex, and indeed admits an integral triangulation, e.g. 
with the three simplices {(0,0),(1,0),(1,1)} and {(1,0),(2,0),(2,1)} and {(1,0),(1,1),(2,1)}. See image at the right. References Discrete geometry
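The worked example above can be checked mechanically. The sketch below is an illustration (not from the source); it uses a brute-force Carathéodory-style hull-membership test that only works in two dimensions, and confirms that y = (1.2, 0.5) lies outside ch(X ∩ near(y)) but inside ch(Y ∩ near(y)):

```python
import math
from itertools import combinations, product

def near(y):
    """Integer points z with |z_i - y_i| < 1 in every coordinate."""
    axes = []
    for c in y:
        axes.append({z for z in (math.floor(c), math.ceil(c)) if abs(z - c) < 1})
    return set(product(*axes))

def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def in_triangle(p, a, b, c):
    # p is inside the triangle iff the three cross products do not change sign
    d = [cross(p, a, b), cross(p, b, c), cross(p, c, a)]
    return not (any(x < 0 for x in d) and any(x > 0 for x in d))

def in_hull_2d(p, pts):
    # By Caratheodory's theorem in the plane, p is in ch(pts) iff
    # p lies in some triangle whose vertices are points of pts.
    return any(in_triangle(p, *tri) for tri in combinations(pts, 3))

X = [(0,0), (1,0), (2,0), (2,1)]
Y = [(0,0), (1,0), (2,0), (1,1), (2,1)]
y = (1.2, 0.5)
print(in_hull_2d(y, [p for p in X if p in near(y)]))  # False: X is not integrally convex at y
print(in_hull_2d(y, [p for p in Y if p in near(y)]))  # True
```

Here near((1.2, 0.5)) evaluates to {(1,0), (2,0), (1,1), (2,1)}, matching the example in the text.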
Integrally convex set
[ "Mathematics" ]
700
[ "Discrete geometry", "Discrete mathematics" ]
63,326,424
https://en.wikipedia.org/wiki/Discrete%20fixed-point%20theorem
In discrete mathematics, a discrete fixed-point is a fixed-point for functions defined on finite sets, typically subsets of the integer grid Z^n. Discrete fixed-point theorems were developed by Iimura, Murota and Tamura, Chen and Deng, and others. Yang provides a survey. Basic concepts Continuous fixed-point theorems often require a continuous function. Since continuity is not meaningful for functions on discrete sets, it is replaced by conditions such as a direction-preserving function. Such conditions imply that the function does not change too drastically when moving between neighboring points of the integer grid. There are various direction-preservation conditions, depending on whether neighboring points are considered points of a hypercube (HGDP), of a simplex (SGDP) etc. See the page on direction-preserving function for definitions. Continuous fixed-point theorems often require a convex set. The analogue of this property for discrete sets is an integrally-convex set. A fixed point of a discrete function f is defined exactly as for continuous functions: it is a point x for which f(x)=x. For functions on discrete sets We focus on functions f, where the domain X is a nonempty subset of the Euclidean space R^n. ch(X) denotes the convex hull of X. Iimura-Murota-Tamura theorem: If X is a finite integrally-convex subset of Z^n, and f is a hypercubic direction-preserving (HDP) function, then f has a fixed-point. Chen-Deng theorem: If X is a finite subset of Z^n, and f is simplicially direction-preserving (SDP), then f has a fixed-point. Yang's theorems: [3.6] If X is a finite integrally-convex subset of Z^n, f is simplicially gross direction preserving (SGDP), and for all x in X there exists some g(x)>0 such that , then f has a zero point. [3.7] If X is a finite hypercubic subset of Z^n, with minimum point a and maximum point b, f is SGDP, and for any x in X: and , then f has a zero point. This is a discrete analogue of the Poincaré–Miranda theorem. It is a consequence of the previous theorem. 
[3.8] If X is a finite integrally-convex subset of Z^n, and f is such that f(x) − x is SGDP, then f has a fixed-point. This is a discrete analogue of the Brouwer fixed-point theorem. [3.9] If X = Z^n, f is bounded and f(x) − x is SGDP, then f has a fixed-point (this follows easily from the previous theorem by taking X to be a subset of Z^n that bounds f). [3.10] If X is a finite integrally-convex subset of Z^n, F is a point-to-set mapping, and for all x in X: , and there is a function f such that and f(x) − x is SGDP, then there is a point y in X such that y is in F(y). This is a discrete analogue of the Kakutani fixed-point theorem, and the function f is an analogue of a continuous selection function. [3.12] Suppose X is a finite integrally-convex subset of Z^n, and it is also symmetric in the sense that x is in X iff -x is in X. If f is SGDP w.r.t. a weakly-symmetric triangulation of ch(X) (in the sense that s is a simplex on the boundary of the triangulation iff -s is), and for every pair of simplicially-connected points x, y in the boundary of ch(X), then f has a zero point. See the survey for more theorems. For discontinuous functions on continuous sets Discrete fixed-point theorems are closely related to fixed-point theorems on discontinuous functions. These, too, use the direction-preservation condition instead of continuity. Herings-Laan-Talman-Yang fixed-point theorem: Let X be a non-empty convex compact subset of R^n. Let f: X → X be a locally gross direction preserving (LGDP) function: at any point x that is not a fixed point of f, the direction of f(x) − x is grossly preserved in some neighborhood of x, in the sense that for any two points y, z in this neighborhood, the inner product of their displacements is non-negative, i.e.: (f(y) − y) · (f(z) − z) ≥ 0. Then f has a fixed point in X. The theorem is originally stated for polytopes, but Philippe Bich extends it to convex compact sets. Note that every continuous function is LGDP, but an LGDP function may be discontinuous. An LGDP function may even be neither upper nor lower semi-continuous. 
Moreover, there is a constructive algorithm for approximating this fixed point. Applications Discrete fixed-point theorems have been used to prove the existence of a Nash equilibrium in a discrete game, and the existence of a Walrasian equilibrium in a discrete market. References Discrete mathematics Fixed-point theorems
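On a finite set, the existence guaranteed by these theorems can always be confirmed by exhaustive search. A toy sketch (an illustration, not from the source; the map below, which steps one grid unit toward a fixed target, is a hypothetical example of a direction-preserving function on a hypercubic set):

```python
from itertools import product

def f(x, t=(2, 3)):
    # Move one grid step toward the target t in each coordinate;
    # the target itself is the unique point with f(x) = x.
    return tuple(xi + (ti > xi) - (ti < xi) for xi, ti in zip(x, t))

grid = list(product(range(5), repeat=2))  # the finite set {0,...,4}^2
fixed_points = [x for x in grid if f(x) == x]
print(fixed_points)  # [(2, 3)]
```

Brute force costs |X| evaluations; the constructive algorithms referenced in the survey find a fixed point far more efficiently on large grids.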
Discrete fixed-point theorem
[ "Mathematics" ]
1,048
[ "Theorems in mathematical analysis", "Discrete mathematics", "Fixed-point theorems", "Theorems in topology" ]
63,326,444
https://en.wikipedia.org/wiki/National%20Chemical%20Emergency%20Centre
The National Chemical Emergency Centre (NCEC) is a former UK government agency, now privately owned as part of Ricardo plc, providing information related to chemical accidents (spillages and fires) to emergency services in the United Kingdom and other countries. The NCEC is headquartered on the Harwell Science and Innovation Campus in the Vale of White Horse in Oxfordshire. History The NCEC was formed in 1973 as a government agency. On 1 March 1979 the Centre launched, in cooperation with the Home Office, its Hazfile computer database, made available to fifteen British fire services, listing over 10,000 chemical compounds; this was later replaced by the Chemdata system. A similar system in the USA is called RTECS (Registry of Toxic Effects of Chemical Substances). Function Most chemical safety legislation in the UK covers the transport of hazardous chemicals by road. Companies carrying dangerous substances must comply with the legislation. The NCEC worked with the European Chemical Industry Council (CEFIC) to develop a set of safety codes for carrying dangerous chemicals for National Intervention in Chemical Transport Emergencies Centres across Europe. Chemdata In the 1980s the NCEC developed the Chemdata hazardous material database, which was provided to British fire services for use in case of chemical accidents. Chemdata lists over 61,600 safety data sheets (SDS) for dangerous substances. It is published in six languages. 
See also European chemical Substances Information System (ESIS) and European Chemicals Agency History of fire safety legislation in the United Kingdom International Maritime Dangerous Goods Code, developed by the London-based International Maritime Organization (IMO) National Poisons Information Service (NPIS), provides much the same function, but for pharmaceutical products References External links NCEC official website 1973 establishments in the United Kingdom Chemical accident Chemical industry in the United Kingdom Chemical safety Cheminformatics Defunct public bodies of the United Kingdom Emergency management in the United Kingdom Government agencies established in 1973 Health and safety in the United Kingdom Organisations based in Oxfordshire Safety organisations based in the United Kingdom Science and technology in Oxfordshire Scientific organizations established in 1973 Toxic effects of substances chiefly nonmedicinal as to source Toxicology in the United Kingdom Toxicology organizations Vale of White Horse
National Chemical Emergency Centre
[ "Chemistry", "Environmental_science" ]
440
[ "Chemical accident", "Toxicology", "Toxicology in the United Kingdom", "Environmental chemistry", "Computational chemistry", "Cheminformatics", "nan", "Toxic effects of substances chiefly nonmedicinal as to source", "Toxicology organizations", "Chemical safety" ]
63,327,214
https://en.wikipedia.org/wiki/VAD1%20analog%20of%20StAR-related%20lipid%20transfer
VAD1 analog of StAR-related lipid transfer (VASt) is a steroidogenic acute regulatory protein‐related lipid transfer (StART)-like lipid-binding domain first identified in the vad1 (vascular associated death1) protein in Arabidopsis thaliana (mouse-ear cress). Proteins containing these domains are found in eukaryotes and usually contain another lipid-binding domain, typically the GRAM domain and sometimes the C2 domain in plants and the integral peroxisomal membrane peroxin Pex24p domain in oomycetes. Structure The VASt domain structurally resembles a truncated form of a START domain, but with limited sequence similarity. While VASt is a member of the Bet v1-like superfamily, it is unclear if it evolved from the same ancestral domain as the START domain or is an example of convergent evolution. The domain is highly conserved across all eukaryotes and is typically present in only one copy in VASt domain-containing proteins. Like the START domain, the VASt domain consists of a helix-grip fold structure. The pocket formed is large enough to bind one lipid such as cholesterol, 25-hydroxycholesterol or ergosterol. Analysis of the crystal structure of unbound and bound forms of VASt domains in lipid transfer proteins anchored at a membrane contact site (LAMs) from yeast revealed that the domain contains an accessible hydrophobic cavity. Upon sterol binding of the cavity, the entry point is closed or partially closed to the outside. Human proteins containing the VASt domain The sole proteins containing this domain identified in humans are GRAMD1A/Aster-A, GRAMD1B/Aster-B and GRAMD1C/Aster-C (with the VASt domain referred to as an Aster domain). These sterol transfer proteins together with GRAMD2A and GRAMD2B are LAM family proteins, although the latter two lack the VASt domain. Like LAM proteins, GRAMD1 proteins preferentially transfer sterols. References Protein domains
VAD1 analog of StAR-related lipid transfer
[ "Biology" ]
423
[ "Protein domains", "Protein classification" ]
63,327,266
https://en.wikipedia.org/wiki/Lunar%20penetrometer
The lunar penetrometer was a spherical electronic tool that served to measure the load-bearing characteristics of the Moon in preparation for spacecraft landings. It was designed by NASA to be dropped onto the surface from a vehicle orbiting overhead and transmit information to the spacecraft. However, despite it being proposed for several lunar and planetary missions, the device was never actually fielded by NASA. History The lunar penetrometer was first developed in the early 1960s as part of NASA Langley Research Center's Lunar Penetrometer Program. At the time, immense pressures from the ongoing Space Race caused NASA to shift its focus from conducting purely scientific lunar expeditions to landing a man on the Moon before the Russians. As a result, the Jet Propulsion Laboratory's lunar flight projects, Ranger and Surveyor, were reconfigured to provide direct support to Project Apollo. One of the major problems that NASA faced in preparation for the Apollo Moon landing was the inability to determine the surface characteristics of the Moon with regard to spacecraft landings and post-landing locomotion of exploratory vehicles and personnel. While radio and optical technology situated on Earth at the time could make out large-scale characteristics such as the size and distribution of mountains and craters, there was no Earth-based method of measuring small-scale features, such as the lunar surface texture and topographical details, with adequate resolution. In 1961, NASA's chief engineer Abe Silverstein proposed to the U.S. Congress that Project Ranger would help provide important data on the Moon's surface topography to facilitate the Apollo lunar landing. Once funding was provided to the Ranger program, Silverstein directed NASA laboratories to investigate potential instruments that could return information on the hardness of the lunar surface. 
Introduced shortly after Silverstein's directive, the Lunar Penetrometer Program called for the development of an impact-measuring instrumented projectile, or penetrometer, that would provide preliminary information about the Moon's surface. The lunar penetrometer housed an impact accelerometer that measured the deceleration time history of the projectile as it made contact with the lunar surface to measure its hardness, bearing strength, and penetrability as well as a radio telemeter that could transmit the impact information to a remote receiver. Knowledge of the complete impact acceleration time history would have also made it possible for NASA researchers to ascertain the physical composition of the soil and whether it was granular, powdery, or brittle. If successful, the lunar penetrometer was planned for deployment for uncrewed landings in the Ranger and Surveyor programs as well as for the Apollo mission. However, the Jet Propulsion Laboratory Space Sciences Division Manager Robert Meghreblian decided in August 1963 that the use of the lunar penetrometer to provide information on the lunar surface in situ was too risky. Instead, it was decided that the lunar surface composition would be determined by using gamma-ray spectrometry and surface topography via television photography and radar probing. In 1966, the lunar penetrometer was investigated as a potential sounding device for the Apollo missions, but no information exists on whether it was used in that manner. Design In order to function properly, the lunar penetrometer was designed to sense the accelerations encountered by the projectile body during the impact process and telemeter the collected information to a nearby receiving station. Doing so required the penetrometer to package an acceleration sensing device as well as an independent telemetry system with a power supply, transmitter, and antenna system. 
The components also needed to be housed within a casing that could withstand a wide range of impact loads. The lunar penetrometer came in the form of a spherical omnidirectional penetrometer that did not have to account for the orientation of the penetrometer during impact, which was difficult to factor in an environment with little to no atmosphere like the lunar surface. The omnidirectional design packaged the accelerometer, computer, power supply, and the telemetry system within a 3-inch diameter sphere. The lunar penetrometer's spherical instrumentation compartment had an omnidirectional acceleration sensor located at the center surrounded by concentrically placed batteries and electronic modules. The components were enclosed within an electromagnetic shield that provided a uniform metallic reference for the omnidirectional antenna encircling the instrumentation compartment. Outside the compartment, an impact limiter made out of balsa wood provided shock absorption to limit the impact forces on the internal components to tolerable levels and provided a low overall penetrometer density to assure sensitivity to soft, weak target surfaces. The balsa impact limiter was coated in a thin outer shell made out of fiber-glass epoxy. Accelerometer As part of the Lunar Penetrometer Program, the NASA Langley Research Center tasked the Harry Diamond Laboratories (later consolidated to form the U.S. Army Research Laboratory) with the development of the omnidirectional accelerometer for the lunar penetrometer. The omnidirectional accelerometer, or the omnidirectional acceleration sensor, was an accelerometer capable of measuring the acceleration time histories independent of its angular acceleration or orientation at impact. The researchers at Harry Diamond Laboratories originally employed a hollow piezoelectric sphere but later transitioned to modifying a conventional triaxial accelerometer. 
The instantaneous magnitude of the acceleration was computed by taking the square root of the sum of the squares of the three orthogonal acceleration-time signatures. The omnidirectional accelerometer withstood a maximum of 40,000 G during shock testing and operated from a 20 V power supply drawing 10 mA. Telemetry system The telemetry system for the lunar penetrometer was contracted by NASA to the Canadian defence contractor Computing Devices of Canada (now known as General Dynamics Mission Systems). It consisted of a network that fed the output of the accelerometer to a radio frequency power amplifier that was also connected to a master oscillator and a buffer amplifier. The amplifiers and the oscillator functioned together as a transmitter, whose output was fed to a spherical antenna embedded in the outer skin of the penetrometer. Relay craft Due to limitations in available power, antenna efficiency, and other factors, the impact acceleration information from the lunar penetrometers could not be transmitted over long distances. As a result, a relay craft needed to be placed within the transmission field of the lunar penetrometers to intercept the lunar penetrometer signals and transmit them to a distant receiving station. When located within moderate range of a receiving station such as a parent spacecraft, the relay craft served simply to amplify and redirect the lunar penetrometer signals. At greater distances, the relay craft would perform data signal processing, exchanging the peak power requirement of instantaneous data transmission for a longer transmission time to decrease the demands placed upon the power supply. The relay craft functioned so that it would receive the lunar penetrometer signals and transmit them to the receiving station only after the lunar penetrometers landed on the surface and before the relay craft itself crashed onto the ground. 
As a result, a strict time limit would be imposed on the relay craft to deliver the necessary data sent by the penetrometers. Operation During lunar reconnaissance, a payload containing the lunar penetrometer and the relay station structure would be mounted on the spacecraft as it traveled to its destination. Above the lunar surface, the spacecraft would release the payload, which would spin for axis attitude stability and use the main retrorocket motor to reduce the descent velocity. At approximately 5,600 feet above the target area, the second retrorocket would fire once the main retrorocket was jettisoned from the payload. The centrifugal force resulting from the spin stabilization technique would cause a salvo of lunar penetrometers to disperse and free fall toward the lunar surface. The payload carriage would hold 16 lunar penetrometers in total that would be released in salvos of four at about 2 second intervals. The impact of the lunar penetrometers would be categorized as elastic, plastic, or penetration depending on the target surface. After the secondary retrorocket burns out, the payload would free fall to the lunar surface as well. Once the penetrometers make contact with the lunar surface, the impact information would then be transmitted to the descending payload relay station, which would then be relayed to a transmitting antenna system on Earth. In short, this chain of communication would take place within the time interval between the release of the lunar penetrometers and the moment the payload relay station lands on the lunar surface. Testing Shock testing Harry Diamond Laboratories was tasked with developing a high-energy shock testing method that monitored the omnidirectional accelerometer's behavior during acceleration peaking at 20,000 G. Components of the omnidirectional accelerometer, such as the resistors, capacitors, oscillators, and magnetic cores, were subjected to a modified air gun test. 
The component being tested was placed within a target body inside an extension tube in front of an air gun. The air gun would fire a projectile, impacting the target body and accelerating it to a peak of 20,000 G until it hit the lead target only a short distance away inside the extension tube. The results of the shock test showed that the resistors and capacitors changed very little during shock, while the commercial subcarrier oscillator and the tape-wound magnetic cores were affected considerably. Impact testing More than 200 impact tests were conducted with the spherical lunar penetrometer to investigate its soil penetration characteristics. Most consisted of impacting the penetrometers against a wide range of target materials at velocities ranging from 6 to 76 m/s and then recording the measured impact characteristics. Several experiments investigated the penetrometer's ability to predict the depth to which a lunar module would penetrate the surface of the landing zone. The results of these studies found that the lunar penetrometers were successful not only in identifying the nature of the impacted surface, i.e. whether the surface was rigid or collapsible, but also in distinguishing between particulate materials of different bearing strengths from peak impact accelerations. The lunar penetrometers were able to accurately predict the conditions of the landing pad penetrations. Sounding device application The lunar penetrometer was studied as a potential sounding device for a crewed Apollo lunar module landing in 1966. The device was suggested to assist astronauts in on-the-spot decision making regarding whether a safe landing of the lunar module could be made. Once dropped individually or in salvo within the landing zone, the lunar penetrometers could autonomously transmit an acceleration-time profile upon impact and characterize the surface hardness of the landing zone. 
A short study on the feasibility of this application was conducted to determine the flight, trajectory, and impact parameters of the lunar penetrometers once launched from a lunar module. The study found that the lunar penetrometer's impact velocities would be limited to between 120 ft/s and 200 ft/s, meaning that the impact angles would have to vary between 54 and 62 degrees from the vertical. The earliest that a lunar penetrometer had to be launched was at a range of 3,400 feet and an altitude of 1,075 feet, which would grant the crew in the lunar module 16 seconds to analyze the penetrometer data. References Measuring instruments Impactor spacecraft Exploration of the Moon
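The root-sum-of-squares reduction used by the omnidirectional accelerometer, described in the design section above, can be sketched as follows. This is a minimal illustration; the channel values are hypothetical, not flight data:

```python
import math

def impact_magnitude(ax, ay, az):
    """Instantaneous acceleration magnitude from the three orthogonal
    accelerometer channels: the square root of the sum of the squares."""
    return math.sqrt(ax ** 2 + ay ** 2 + az ** 2)

# Hypothetical channel readings, in units of g:
a = impact_magnitude(3.0, 4.0, 12.0)  # 13.0 g
```

Because the magnitude is orientation-independent, the same result is obtained however the sphere happens to be rotated at impact.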
Lunar penetrometer
[ "Technology", "Engineering" ]
2,409
[ "Measuring instruments" ]
63,327,285
https://en.wikipedia.org/wiki/Autogenous%20pressurization
Autogenous pressurization is the use of self-generated gaseous propellant to pressurize liquid propellant in rockets. Traditional liquid-propellant rockets have most often been pressurized with other gases, such as helium, which necessitates carrying the pressurant tanks along with the plumbing and control system to use it. Autogenous pressurization has been operationally used on the Titan 34D, Space Shuttle, Space Launch System, and Starship. Autogenous pressurization is planned to be used on the New Glenn, Terran 1 and Rocket Lab's Neutron rocket. Background As propellant is drained from its tank, something must fill the vacated ullage space to maintain pressure inside the tanks. This is for two reasons: first, rocket engines require a minimum inlet pressure to prevent cavitation in their turbopumps, and second, rockets usually require that their tanks be pressurized for structural strength. In autogenous pressurization, a small amount of propellant is heated until it turns to gas. That gas is then fed back into the liquid propellant tank it was sourced from. This helps keep the liquid propellant at the pressure required to feed a rocket's engines. The gas can be produced in several ways: tapped off from a gas generator in the engine system, vaporized by passing propellant through a heat exchanger, or generated via electric heaters. Autogenous pressurization was already in use in the Titan booster by 1968 and had been tested with the RL10 engine, demonstrating its suitability for upper stage engines. Traditionally, tank pressurization has been provided by a high pressure inert gas such as helium or nitrogen. Autogenous pressurization has been described as both less and more complex than using helium or nitrogen, but it does provide significant advantages. The first is for long-term spaceflight and interplanetary missions such as going to and landing on Mars. Removing inert gases from usage allows engine firing in a non-pumping mode. 
The same vaporized gases can be used for mono- or bi-propellant attitude control. The reuse of onboard oxidizer and fuel also reduces the contamination of combustibles by inert gases. Risk reduction benefits come from eliminating the need for high-pressure storage vessels and from completely isolating the fuel and oxidizer systems, removing a possible failure path through the pressurization subsystem (e.g. SpaceX CRS-7). The system also increases payload capacity by reducing component and propellant weight and by allowing increased chamber pressure. A major risk of autogenous pressurization is that it is prone to ullage collapse if the propellant sloshes. If the ullage gas mixes with the liquid propellant, such as during spacecraft maneuvers, it will be cooled and can condense to liquid, causing a sudden loss of pressure. Thus, autogenous pressurization is suited to booster engines, which operate under constant acceleration in a single direction, but is difficult to use when there are multiple engine burns separated by zero-g maneuvers. The RS-25 engines used autogenous pressurization to maintain fuel pressure in the Space Shuttle external tank. References Aerospace engineering
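The amount of vaporized propellant needed to keep the ullage at pressure can be estimated with the ideal gas law. The sketch below is illustrative only; the tank volume, pressure, and gas temperature are assumed values, not figures from any flight vehicle:

```python
# Ideal-gas estimate of pressurant mass: m = p * V * M / (R * T)
R = 8.314  # universal gas constant, J/(mol*K)

def pressurant_mass(p_pa, volume_m3, molar_mass_kg_mol, temp_k):
    """Mass of gas (kg) required to fill an ullage volume at pressure p."""
    return p_pa * volume_m3 * molar_mass_kg_mol / (R * temp_k)

# Hypothetical numbers: 50 m^3 of emptied LOX-tank ullage held at 3 bar
# with gaseous oxygen (M = 0.032 kg/mol) delivered at 250 K.
m_gox = pressurant_mass(3e5, 50.0, 0.032, 250.0)  # roughly 231 kg
```

The same relation shows why a warmer pressurant is attractive: raising the delivery temperature lowers the mass of gas, and hence of propellant, that must be diverted for pressurization.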
Autogenous pressurization
[ "Engineering" ]
653
[ "Aerospace engineering" ]
63,327,417
https://en.wikipedia.org/wiki/Smart%20Cities%20EMC%20Network%20for%20Training
The Smart Cities EMC Network for Training (SCENT) is a project funded by the European Union's Horizon 2020 research program under the Marie Skłodowska-Curie grant agreement No 812391. It is a Ph.D. training network program in the field of Electromagnetic Compatibility (EMC), especially in smart-city applications. Three universities (the University of Twente, the University of Nottingham, the University of Zielona Góra) and twelve industrial partners collaborate in the SCENT project. Supported by the IEEE EMC Society Technical Committee 7 ("Low-Frequency EMC"), the SCENT project also runs scientific training programs and social outreach programs in addition to its Ph.D. training program. History The initiative began in 2018 with the aim of finding solutions to power quality (PQ) problems arising from conducted electromagnetic interference (EMI) from integrated electrical power system equipment that is becoming smart (Smart Cities). The European Union started the SCENT project on 1 September 2018 as part of H2020-EU.1.3.1 under the ITN scheme. H2020-EU.1.3.1 is an EU collaborative program to foster new skills by means of initial training of researchers. According to the European Commission's CORDIS service, as part of the Horizon 2020 program the SCENT project is designated to support the optimization of power distribution networks inside buildings and industrial plants and transport networks with respect to compatibility (no interference) and efficiency. The SCENT project ran for four years and ended on 31 August 2022. Objectives The SCENT project was established to carry out doctorate training, scientific training, and social activities in collaboration with universities, research institutions, and non-academic organisations. In order to achieve this general objective, the SCENT project divides its work into several detailed projects called Work Packages. Management Objective: Efficient project execution, including finance, reporting and maintaining the relationships between the SCENT stakeholders. 
Training Objective: Create a European Doctoral/Graduate School for EMC, especially on conducted emissions, susceptibility and harmonics. Dissemination and exploration Objective: Disseminate SCENT's findings to European academic, educational, industrial, and public groups. Integrate the SCENT doctoral program into established graduate programs. Outreach Objective: Disseminate SCENT's findings to the public. Behavioral modelling and simulation of connected devices Objective: Develop new models and simulate several load conditions connected to grids. Statistical and probabilistic modelling and simulation Objective: Develop validated models and simulations of power distribution grids for EMC and power quality assessment. Measurement and monitoring, experimental evaluation of equipment and network Objective: Characterize and evaluate the distribution network and its interconnected equipment through experiments and measurements. System topology and interaction, compensation and corrective Objective: Provide a verified set of tools and determine the risk of electromagnetic interference in complex systems by employing a statistical approach. Activities All of the SCENT activities are the result of collaboration between the University of Twente (UT), the University of Zielona Góra (UZ), the University of Nottingham (UN) and industry partners. Based on the MSCA-ITN-EJD (European Joint Doctorate) scheme, each Early Stage Researcher (ESR) of SCENT must study and take courses at these three universities. 
Besides academic activities, SCENT has also been involved in several events, including the ICT COST Action in Prague in 2019, APEMC in Sapporo, Japan in 2019, EMC Europe in Barcelona, Spain in 2019, a Summer School at the University of Twente in 2019, the SENE conference in Łódź in 2019, APEMC in Sydney, Australia in 2020, the ETOPIA Workshop in 2020, the IEEE EMC+SIPI virtual conference in 2020, the ENEA workshop in Poland in 2020, IEEE EMC+SIPI in Glasgow in 2021, SCENT Summer School II in Zielona Góra, Poland in 2021, ETOPIA Summer School II at Politecnico di Milano, Italy in 2021, APEMC in Bali, Indonesia in 2021 and, as an outreach activity, Girls' Day at the University of Twente in 2022. Partners European Union Universities The University of Twente The University of Nottingham The University of Zielona Góra Industries Siemens Motion Control Ursus Lambda Engineering Eko Energetyka Thales Enea Operator Network Rail Solaris RH Marine Atkins Tauron Jaguar Land Rover Other partners IEEE EMC Society, Technical Committee (TC7) Nederlandse EMC-ESD Vereniging (EMC-ESD) References College and university associations and consortia in Europe Electromagnetic compatibility Engineering university associations and consortia European Union and science and technology International educational organizations 2018 establishments in Europe Smart cities
Smart Cities EMC Network for Training
[ "Engineering" ]
931
[ "Radio electronics", "Electrical engineering", "Electromagnetic compatibility" ]
63,328,300
https://en.wikipedia.org/wiki/Igacovirus
Igacovirus is a subgenus of viruses in the genus Gammacoronavirus. Species The subgenus consists of the following three species: Avian coronavirus Avian coronavirus 9203 Duck coronavirus 2714 References Virus subgenera Gammacoronaviruses
Igacovirus
[ "Biology" ]
55
[ "Virus stubs", "Viruses" ]
61,177,253
https://en.wikipedia.org/wiki/C12H16N4O4
{{DISPLAYTITLE:C12H16N4O4}} The molecular formula C12H16N4O4 (molar mass: 280.28 g/mol, exact mass: 280.1172 u) may refer to: 4,4'-Azobis(4-cyanopentanoic acid) (ACPA) MK-608
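The quoted molar mass can be reproduced from standard atomic weights as a quick sanity check. This is a minimal sketch using rounded IUPAC values:

```python
# Rounded IUPAC standard atomic weights, g/mol
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def molar_mass(composition):
    """Molar mass (g/mol) from an element -> atom-count mapping."""
    return sum(ATOMIC_WEIGHTS[el] * n for el, n in composition.items())

# C12H16N4O4:
m = molar_mass({"C": 12, "H": 16, "N": 4, "O": 4})  # ~280.28 g/mol
```

The exact mass (280.1172 u) differs because it is computed from the masses of the most abundant isotopes rather than from abundance-weighted atomic weights.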
C12H16N4O4
[ "Chemistry" ]
82
[ "Isomerism", "Set index articles on molecular formulas" ]
61,179,687
https://en.wikipedia.org/wiki/Eucrates
Eucrates was a hybrid teaching and learning analog computer created by Gordon Pask in 1956, in response to a request by the Solartron Electronic Group for a machine to exhibit at the Physical Society Exhibition in London. Its operation was based on simulating the functioning of neurons. The Solartron EUCRATES II was created by C.E.G. Bailey, T. Robin McKinnon Wood and Gordon Pask. References Analog computers Neurons Early computers History of computing
Eucrates
[ "Technology" ]
96
[ "Computing stubs", "Computers", "Computer hardware stubs", "History of computing" ]
61,180,729
https://en.wikipedia.org/wiki/Arum%20cylindraceum
Arum cylindraceum is a woodland plant species of the family Araceae. It is found in most of Europe except the UK, Russia, Ukraine, Belarus, the Baltic States and Scandinavia (although it is found in Denmark), and in Turkey. It is also missing in northwestern France and southern Italy. Description The plain green leaves of A. cylindraceum appear in early spring (late March–early May) followed by the flowers borne on a poker-shaped inflorescence called a spadix, which is partially enclosed in a grass-green spathe or leaf-like hood. The flowers are hidden from sight, clustered at the base of the spadix with a ring of female flowers at the bottom and a ring of male flowers above them. Above the male flowers is a ring of hairs forming an insect trap. Insects are attracted to the spadix by its faecal odour and a temperature warmer than the ambient temperature. The insects are trapped beneath the ring of hairs and are dusted with pollen by the male flowers before escaping and carrying the pollen to the spadices of other plants, where they pollinate the female flowers. The spadix is pale chocolate brown to dark purple. In autumn, the lower ring of (female) flowers forms a cluster of bright red berries which remain after the spathe and other leaves have withered away. These attractive red to orange berries are extremely poisonous. The berries contain oxalates of saponins which have needle-shaped crystals which irritate the skin, mouth, tongue, and throat, and result in swelling of the throat, difficulty breathing, burning pain, and upset stomach. However, their acrid taste, coupled with the almost immediate tingling sensation in the mouth when consumed, means that large amounts are rarely taken and serious harm is unusual. All parts of the plant can produce allergic reactions in many people and the plant should be handled with care. 
Many small rodents appear to find the spadix particularly attractive; finding examples of the plant with much of the spadix eaten away is common. The spadix produces heat and probably scent as the flowers mature, and this may attract the rodents. In areas where both A. cylindraceum and A. maculatum are found, they are easily confused. A. cylindraceum, however, does not usually occur in the wild in the UK, but in Central Europe both species are found, often growing in the same locations. The only characteristic that sets the two species apart with certainty all year is the tuber, which is horizontal with A. maculatum but vertical with A. cylindraceum. Other differences are: The spadix is around 2/3 as long as the spathe; with A. maculatum it is only 1/2 as long. The spadix of A. cylindraceum is never yellow, which it can be with A. maculatum. A. cylindraceum never has spotted leaves (except when hybridizing with A. maculatum). Note that A. maculatum, despite the name, does not always have spotted leaves (e.g. A. maculatum ssp. immaculatum). The cluster of berries is up to 7 cm with A. cylindraceum, and up to 4 cm with A. maculatum. The spathe is greener with A. cylindraceum, not quite as pale as with A. maculatum. Subspecies Two subspecies are accepted. Arum cylindraceum subsp. cylindraceum – Sweden, Denmark, middle Europe, southeastern Europe, Turkey, Corsica, and the Iberian Peninsula Arum cylindraceum subsp. pitsyllianum – Cyprus Habitat Throughout the area it is found in deciduous woodland or on the edges of coniferous woodland, preferring partial shade and somewhat moist conditions. It is found at elevations up to 1,700 m, lower in the northern part of its range. In the southern part it is also found on grassy or rocky slopes and pastures. Taxonomy Within the genus, it belongs to subgenus Arum, section Alpina. A. cylindraceum has a chromosome count of 2n = 28. A. alpinum is now considered a synonym of A. cylindraceum, but certain subspecies such as A. alpinum ssp. 
danicum were long held to be a representative of another species. The name A. alpinum is, however, now considered obsolete in all cases. Uses In medieval Denmark, then including parts of Germany and Sweden, starch from the tubers (also from A. maculatum) was used to stiffen clerical collars, but as the tubers contain a caustic sap that caused blistering of the hands, this was abandoned when starch from wheat became available. Today colonies of A. cylindraceum are still found close to church sites, although the species seems to have died out in southern Sweden. References cylindraceum Medicinal plants of Europe Flora of Middle Europe Flora of Southeastern Europe Flora of Corsica Flora of Cyprus Flora of Denmark Flora of Portugal Flora of Spain Flora of Sweden Flora of Turkey Plant toxins Neurotoxins Plants described in 1829 Taxa named by Guglielmo Gasparrini
Arum cylindraceum
[ "Chemistry" ]
1,080
[ "Neurochemistry", "Neurotoxins", "Chemical ecology", "Plant toxins" ]
61,180,813
https://en.wikipedia.org/wiki/C13H14N2O2
{{DISPLAYTITLE:C13H14N2O2}} The molecular formula C13H14N2O2 (molar mass: 230.26 g/mol, exact mass: 230.1055 u) may refer to: Batoprazine Metomidate Molecular formulas
C13H14N2O2
[ "Physics", "Chemistry" ]
64
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
61,181,985
https://en.wikipedia.org/wiki/Isolated%20organ%20perfusion%20technique
The isolated organ perfusion technique is employed to establish perfusion and circulation of an organ independently of the body's systemic circulation, for purposes such as organ-localized chemotherapy; organ-targeted delivery of drugs, genes, or other agents; organ transplantation; and organ injury recovery. The technique has been widely studied in animals and humans for decades. Before implementation, a perfusion system is selected; the process can be similar to an organ bath. The isolated organ perfusion technique, nevertheless, is usually conducted in vivo, without removing the whole organ from the body. See also ECMO References Oncology Organ donation Organ systems Organ transplantation
Isolated organ perfusion technique
[ "Biology" ]
140
[ "Organ systems" ]
61,182,081
https://en.wikipedia.org/wiki/List%20of%20Xbox%20Wireless%20Controller%20special%20editions
This is a list of all the special editions of the Xbox Wireless Controller, the primary controller of the Xbox One and Xbox Series X and Series S home video game consoles. Besides standard colors, "special" and "limited edition" Xbox Wireless Controllers have also been sold by Microsoft with special color and design schemes, sometimes tying into specific games. List Xbox One era Xbox Series X and Series S era References Controller Game controllers Xbox Wireless Controller special editions
List of Xbox Wireless Controller special editions
[ "Technology" ]
90
[ "Computing-related lists", "Video game lists" ]
61,182,504
https://en.wikipedia.org/wiki/Earth%20phase
The Earth phase, Terra phase, terrestrial phase, or phase of Earth, is the shape of the directly sunlit portion of Earth as viewed from the Moon (or elsewhere extraterrestrially). From the Moon, the Earth phases gradually and cyclically change over the period of a synodic month (about 29.53 days), as the orbital positions of the Moon around Earth and of Earth around the Sun shift. Overview Among the most prominent features of the Moon's sky is Earth. Earth's angular diameter (1.9°) is four times the Moon's as seen from Earth, although because the Moon's orbit is eccentric, Earth's apparent size in the sky varies by about 5% either way (ranging between 1.8° and 2.0° in diameter). Earth shows phases, just like the Moon does for terrestrial observers. The phases, however, are opposite; when the terrestrial observer sees the full Moon, the lunar observer sees a "new Earth", and vice versa. Earth's albedo is three times as high as that of the Moon (due in part to its whitish cloud cover), and coupled with the wider area, the full Earth glows over 50 times brighter than the full Moon at zenith does for the terrestrial observer. This Earth light reflected on the Moon's un-sunlit half is bright enough to be visible from Earth, even to the unaided eye – a phenomenon known as earthshine. As a result of the Moon's synchronous rotation, one side of the Moon (the "near side") is permanently turned towards Earth, and the other side, the "far side", mostly cannot be seen from Earth. This means, conversely, that Earth can be seen only from the near side of the Moon and would always be invisible from the far side. The Earth is seen from the lunar surface to rotate, with a period of approximately one Earth day (differing slightly due to the Moon's orbital motion). If the Moon's rotation were purely synchronous, Earth would not have any noticeable movement in the Moon's sky. However, due to the Moon's libration, Earth does perform a slow and complex wobbling movement. 
Once a month, as seen from the Moon, Earth traces out an approximate oval 18° in diameter. The exact shape and orientation of this oval depend on one's location on the Moon. As a result, near the boundary of the near and far sides of the Moon, Earth is sometimes below the horizon and sometimes above it. Note that although genuine photographs of Earth viewed from the Moon exist, many taken by NASA, some photographs shared on social media that purport to show Earth viewed from the Moon may not be real. Phases of the Earth See also Earthrise Extraterrestrial sky List of first images of Earth from space List of notable images of Earth from space Lunar phase Overview effect Pale Blue Dot Pale Orange Dot (Early Earth) Planetary phase The Blue Marble The Day the Earth Smiled References External links Earth phases – model simulation program phase Observational astronomy
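The ~1.9° angular diameter quoted above follows from simple geometry. The sketch below assumes mean values for Earth's radius and the Earth–Moon distance; the actual figure varies by about 5% either way over the Moon's eccentric orbit:

```python
import math

R_EARTH = 6371.0    # mean Earth radius, km
D_MOON = 384400.0   # mean Earth-Moon distance, km

def angular_diameter_deg(radius_km, distance_km):
    """Full angular diameter, in degrees, of a sphere of the given
    radius as seen from the given distance."""
    return math.degrees(2.0 * math.atan(radius_km / distance_km))

earth_from_moon = angular_diameter_deg(R_EARTH, D_MOON)  # ~1.9 degrees
```

Running the same function with the Moon's radius (1737 km) and the same distance gives roughly 0.5°, recovering the factor-of-four ratio stated in the text.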
Earth phase
[ "Astronomy" ]
640
[ "Observational astronomy", "Astronomical sub-disciplines" ]
61,182,913
https://en.wikipedia.org/wiki/C15H17Cl2NO2
{{DISPLAYTITLE:C15H17Cl2NO2}} The molecular formula C15H17Cl2NO2 (molar mass: 314.207 g/mol) may refer to: Bemesetron (MDL-72222) Dichloropane Molecular formulas
C15H17Cl2NO2
[ "Physics", "Chemistry" ]
65
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
61,183,286
https://en.wikipedia.org/wiki/C7H6Cl2
{{DISPLAYTITLE:C7H6Cl2}} The molecular formula C7H6Cl2 may refer to: Benzal chloride Dichlorotoluene
C7H6Cl2
[ "Chemistry" ]
38
[ "Isomerism", "Set index articles on molecular formulas" ]
61,183,305
https://en.wikipedia.org/wiki/C17H19NO
{{DISPLAYTITLE:C17H19NO}} The molecular formula C17H19NO (molar mass: 253.34 g/mol, exact mass: 253.1467 u) may refer to: Benzedrone 3-Benzhydrylmorpholine Diphenylprolinol Nefopam Molecular formulas
C17H19NO
[ "Physics", "Chemistry" ]
74
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
61,183,422
https://en.wikipedia.org/wiki/C20H14O3
{{DISPLAYTITLE:C20H14O3}} The molecular formula C20H14O3 (molar mass: 302.329 g/mol) may refer to: (+)-Benzo[a]pyrene-7,8-dihydrodiol-9,10-epoxide Florantyrone Molecular formulas
C20H14O3
[ "Physics", "Chemistry" ]
76
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
61,183,442
https://en.wikipedia.org/wiki/C18H19N
{{DISPLAYTITLE:C18H19N}} The molecular formula C18H19N (molar mass: 249.35 g/mol, exact mass: 249.1517 u) may refer to: Benzoctamine 4-Cyano-4'-pentylbiphenyl Alpha-D2PV Molecular formulas
C18H19N
[ "Physics", "Chemistry" ]
75
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
61,183,655
https://en.wikipedia.org/wiki/C7H6N2S
{{DISPLAYTITLE:C7H6N2S}} The molecular formula C7H6N2S (molar mass: 150.20 g/mol) may refer to: Mercaptobenzimidazole Benzothiadiazine
C7H6N2S
[ "Chemistry" ]
58
[ "Isomerism", "Set index articles on molecular formulas" ]
61,183,726
https://en.wikipedia.org/wiki/C15H18N2O3
{{DISPLAYTITLE:C15H18N2O3}} The molecular formula C15H18N2O3 (molar mass: 274.314 g/mol) may refer to: Benzylbutylbarbiturate Methoxyetomidate Terbequinil Molecular formulas
C15H18N2O3
[ "Physics", "Chemistry" ]
67
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
61,183,765
https://en.wikipedia.org/wiki/C7H7I
{{DISPLAYTITLE:C7H7I}} The molecular formula C7H7I (molar mass: 218.03 g/mol) may refer to: Benzyl iodide Iodotoluene
C7H7I
[ "Chemistry" ]
50
[ "Isomerism", "Set index articles on molecular formulas" ]
61,184,028
https://en.wikipedia.org/wiki/C14H16ClNO
{{DISPLAYTITLE:C14H16ClNO}} The molecular formula C14H16ClNO (molar mass: 249.73594 g/mol, exact mass: 249.0920 u) may refer to: Bexlosteride Sercloremine (CGP-4718A) Molecular formulas
C14H16ClNO
[ "Physics", "Chemistry" ]
72
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
61,184,477
https://en.wikipedia.org/wiki/C28H28P2
{{DISPLAYTITLE:C28H28P2}} The molecular formula C28H28P2 (molar mass: 426.47 g/mol, exact mass: 426.1666 u) may refer to: 1,4-Bis(diphenylphosphino)butane (dppb) Chiraphos
C28H28P2
[ "Chemistry" ]
76
[ "Isomerism", "Set index articles on molecular formulas" ]
61,186,398
https://en.wikipedia.org/wiki/GigaDB
GigaDB (GigaScience DataBase) is a disciplinary repository launched in 2011 with the aim of ensuring long-term access to massive multidimensional datasets from life science and biomedical science studies. The datasets are diverse and include genomic, transcriptomic, and imaging data. The datasets are curated by GigaDB biocurators who are employed by BGI and China National GeneBank. At its inception, GigaDB was designed as the supporting archive for large-scale research data submitted to the GigaScience Press data journals GigaScience and Gigabyte, whose focus is on ensuring reproducibility and reusability of biological and biomedical research. The scope of GigaDB has broadened to include computational research objects such as synthetic data, software and workflows. The database uses Genomics Standard Consortium (GSC)-approved sample attributes and standards, also collaborating with the GSC to ensure data are comprehensive and discoverable. Datasets hosted in GigaDB are defined as a group of files and metadata that support a specific article or study. For each published GigaDB dataset, a DataCite digital object identifier is assigned and the data are indexed and discoverable in NCBI Datamed and the Clarivate Analytics Data Citation Index. GigaDB has also collaborated with Repositive to boost the discoverability of their human datasets. References External links Biological databases Discipline-oriented digital libraries Open data Data publishing Institutional repository software
GigaDB
[ "Technology", "Biology" ]
309
[ "Bioinformatics", "Data", "Biological databases", "Data publishing" ]
61,186,410
https://en.wikipedia.org/wiki/First%20International%20Congress%20on%20Cybernetics
The First International Congress on Cybernetics was held in Namur, Belgium, 26–29 June 1956. It led to the formation of the International Association for Cybernetics which was incorporated in Belgium on 6 January 1957. William Grey Walter was involved in organising the congress. Attendees included: Stafford Beer: "The impact of cybernetics on the concept of industrial organization" Albert Uttley: "A theory on the mechanism of learning based on the computation of conditional probabilities" References Cybernetics 1956 conferences
First International Congress on Cybernetics
[ "Technology" ]
105
[ "Computing stubs", "Computer conference stubs" ]
61,186,777
https://en.wikipedia.org/wiki/Low-energy%20plasma-enhanced%20chemical%20vapor%20deposition
Low-energy plasma-enhanced chemical vapor deposition (LEPECVD) is a plasma-enhanced chemical vapor deposition technique used for the epitaxial deposition of thin semiconductor (silicon, germanium and SiGe alloy) films. A remote low-energy, high-density DC argon plasma is employed to efficiently decompose the gas-phase precursors while leaving the epitaxial layer undamaged, resulting in high quality epilayers and high deposition rates (up to 10 nm/s). Working principle The substrate (typically a silicon wafer) is inserted in the reactor chamber, where it is heated by a graphite resistive heater from the backside. An argon plasma is introduced into the chamber to ionize the precursor molecules, generating highly reactive radicals which result in the growth of an epilayer on the substrate. Moreover, the bombardment by Ar ions removes the hydrogen atoms adsorbed on the surface of the substrate while introducing no structural damage. The high reactivity of the radicals and the removal of hydrogen from the surface by ion bombardment prevent the typical problems of Si, Ge and SiGe alloy growth by thermal chemical vapor deposition (CVD), which are: dependence of the growth rate on the substrate temperature, due to the thermal energy needed for precursor decomposition and hydrogen desorption from the substrate; the high temperatures (>1000 °C for silicon) required to obtain a significant growth rate, which is strongly limited by the aforementioned effects; and the strong dependence of the deposition rate on the SiGe alloy composition, due to the large difference between the hydrogen desorption rates from Si and Ge surfaces. Thanks to these effects, the growth rate in a LEPECVD reactor depends only on the plasma parameters and the gas fluxes, and it is possible to obtain epitaxial deposition at much lower temperatures compared to a standard CVD tool. 
LEPECVD reactor The LEPECVD reactor is divided into three main parts: a loadlock, to load the substrates into the chamber without breaking the vacuum; the main chamber, which is kept in UHV at a base pressure of ~10 mbar; the plasma source, where the plasma is generated. The substrate is placed at the top of the chamber, facing down toward the plasma source. Heating is provided from the back side by thermal radiation from a resistive graphite heater encapsulated between two boron nitride discs, which improve the temperature uniformity across the heater. Thermocouples are used to measure the temperature above the heater, which is then correlated to that of the substrate by a calibration done with an infrared pyrometer. Typical substrate temperatures for monocrystalline films are 400 °C to 760 °C, for germanium and silicon respectively. The potential of the wafer stage can be controlled by an external power supply, influencing the amount and the energy of radicals impinging on the surface, and is typically kept at 10-15 V with respect to the chamber walls. The process gases are introduced into the chamber through a gas dispersal ring placed below the wafer stage. The gases used in a LEPECVD reactor are silane (SiH4) and germane (GeH4) for silicon and germanium deposition respectively, together with diborane (B2H6) and phosphine (PH3) for p- and n-type doping. Plasma source The plasma source is the most critical component of a LEPECVD reactor, as the low-energy, high-density plasma is the key difference from a typical PECVD deposition system. The plasma is generated in a source which is attached to the bottom of the chamber. Argon is fed directly into the source, where tantalum filaments are heated to create an electron-rich environment by thermionic emission. The plasma is then ignited by a DC discharge from the heated filaments to the grounded walls of the source. 
Thanks to the high electron density in the source, the voltage required to obtain a discharge is around 20-30 V, resulting in an ion energy of about 10-20 eV, while the discharge current is of the order of several tens of amperes, giving a high ion density. The DC discharge current can be tuned to control the ion density, thus changing the growth rate: in particular, at a larger discharge current the ion density is higher, therefore increasing the rate. Plasma confinement The plasma enters the growth chamber through an anode electrically connected to the grounded chamber walls, which is used to focus and stabilize the discharge and the plasma. Further focusing is provided by a magnetic field directed along the chamber's axis, provided by external copper coils wrapped around the chamber. The current flowing through the coils (i.e. the intensity of the magnetic field) can be controlled to change the ion density at the substrate's surface, thus changing the growth rate. Additional coils ("wobblers") are placed around the chamber, with their axes perpendicular to the magnetic field, to continuously sweep the plasma over the substrate, improving the homogeneity of the deposited film. Applications Thanks to the possibility of changing the growth rate (through the plasma density or gas fluxes) independently of the substrate temperature, both thin films with sharp interfaces and nanometer-scale precision at rates as low as 0.4 nm/s and thick layers (up to 10 μm or more) at rates as high as 10 nm/s can be grown using the same reactor and in the same deposition process. This has been exploited to grow low-loss composition-graded waveguides for NIR and MIR and integrated nanostructures (i.e. quantum well stacks) for NIR optical amplitude modulation. The capability of LEPECVD to grow very sharp quantum wells on thick buffers in the same deposition step has also been employed to realize high-mobility strained Ge channels. 
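To make the dynamic range quoted above concrete, a back-of-the-envelope sketch of deposition times at the two rate extremes (0.4 nm/s and 10 nm/s); the helper function is hypothetical and purely illustrative, assuming a constant rate:

```python
def deposition_time(thickness_nm: float, rate_nm_per_s: float) -> float:
    """Time (seconds) to grow a layer of given thickness at a constant
    deposition rate.  Illustrative only: real rates depend on plasma
    density and gas fluxes, as described in the text."""
    return thickness_nm / rate_nm_per_s

# A 10 um (10 000 nm) buffer at the high-rate end (10 nm/s): ~1000 s.
t_buffer = deposition_time(10_000, 10.0)
# A 10 nm quantum well at the low-rate end (0.4 nm/s): ~25 s.
t_well = deposition_time(10, 0.4)
```

The point of the example is that both regimes sit comfortably within one process run, which is why sharp wells and thick buffers can be combined in a single deposition step.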
Another promising application of the LEPECVD technique is the possibility of growing high aspect ratio, self-assembled silicon and germanium microcrystals on deeply patterned Si substrates. This solves many problems related to heteroepitaxy (i.e. thermal expansion coefficient and crystal lattice mismatch), leading to very high crystal quality, and is possible thanks to the high rates and low temperatures found in a LEPECVD reactor. See also Chemical vapor deposition Plasma-enhanced chemical vapor deposition References External links LEPECVD page on the website of L-NESS laboratory of Politecnico di Milano, in Como, Italy. Chemical vapor deposition Plasma processing Semiconductor device fabrication Thin film deposition
Low-energy plasma-enhanced chemical vapor deposition
[ "Chemistry", "Materials_science", "Mathematics" ]
1,335
[ "Microtechnology", "Thin film deposition", "Coatings", "Thin films", "Semiconductor device fabrication", "Chemical vapor deposition", "Planes (geometry)", "Solid state engineering" ]
61,186,810
https://en.wikipedia.org/wiki/Pan%E2%80%93Tompkins%20algorithm
The Pan–Tompkins algorithm is commonly used to detect QRS complexes in electrocardiographic signals (ECG). The QRS complex represents the ventricular depolarization and is the main spike visible in an ECG signal (see figure). This feature makes it particularly suitable for measuring heart rate, the most basic way to assess the state of heart health. In lead I of Einthoven's system, for a physiological heart, the QRS complex is composed of a downward deflection (Q wave), a high upward deflection (R wave) and a final downward deflection (S wave). The Pan–Tompkins algorithm applies a series of filters to highlight the frequency content of this rapid heart depolarization and removes the background noise. Then, it squares the signal to amplify the QRS contribution, which makes identifying the QRS complex more straightforward. Finally, it applies adaptive thresholds to detect the peaks of the filtered signal. The algorithm was proposed by Jiapu Pan and Willis J. Tompkins in 1985, in the journal IEEE Transactions on Biomedical Engineering. The performance of the method was tested on an annotated arrhythmia database (MIT/BIH) and also evaluated in the presence of noise. Pan and Tompkins reported that 99.3 percent of QRS complexes were correctly detected. Pre-processing Noise cancellation As a first step, a band-pass filter is applied to increase the signal-to-noise ratio. A filter bandwidth of 5-15 Hz is suggested to maximize the QRS contribution and reduce muscle noise, baseline wander, powerline interference and the P wave/T wave frequency content. In the original algorithm proposed in 1985, the band-pass filter was obtained with a low-pass filter and a high-pass filter in cascade to reduce the computational cost and allow real-time detection, while ensuring a 3 dB passband in the 5–12 Hz frequency range, reasonably close to the design goal. 
For a signal sampled at a frequency of 200 Hz, Pan and Tompkins suggested the filters with the following transfer functions in an updated version of their article: for a second-order low-pass filter with a gain of 36 and a processing delay of 5 samples; for a high-pass filter with unity gain and a processing delay of 16 samples. Derivative step As a third step, a derivative filter is applied to provide information about the slope of the QRS. For a signal sampled at 200 Hz, Pan and Tompkins suggested the following transfer function: for a 5-point derivative filter with a gain of 0.1 and a processing delay of 2 samples. Squaring and integration The filtered signal is squared to enhance the dominant peaks (QRSs) and reduce the possibility of erroneously recognizing a T wave as an R peak. Then, a moving average filter is applied to provide information about the duration of the QRS complex. The number of samples to average is chosen so as to average over windows of 150 ms. The signal so obtained is called the integrated signal. Decision rules Fiducial mark In order to detect a QRS complex, the local peaks of the integrated signal are found. A peak is defined as the point in which the signal changes direction (from an increasing direction to a decreasing direction). After each peak, no peak can be detected in the next 200 ms (i.e. the lockout time). This is a physiological constraint due to the refractory period, during which ventricular depolarization cannot occur even in the presence of a stimulus. Thresholds Each fiducial mark is considered a potential QRS. To reduce the possibility of wrongly selecting a noise peak as a QRS, each peak amplitude is compared to a threshold (ThresholdI) that takes into account the available information about already detected QRSs and the noise level: where NoiseLevelI is the running estimate of the noise level in the integrated signal and SignalLevelI is the running estimate of the signal level in the integrated signal. 
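The squaring, 150 ms moving-window integration and lockout-constrained peak search described above can be sketched in pure Python (the band-pass and derivative stages are assumed to have already been applied; function names are hypothetical):

```python
def square_and_integrate(x, fs=200, window_ms=150):
    """Square the (already band-passed and differentiated) signal and
    apply a moving average over ~150 ms windows, producing the
    'integrated signal' of the text.  Pure-Python sketch; a real
    implementation would vectorise this with NumPy."""
    sq = [v * v for v in x]
    n = max(1, int(round(window_ms * fs / 1000)))  # samples per window
    out = []
    for i in range(len(sq)):
        lo = max(0, i - n + 1)
        out.append(sum(sq[lo:i + 1]) / n)
    return out

def fiducial_marks(y, fs=200, lockout_ms=200):
    """Local maxima of the integrated signal, honouring the 200 ms
    lockout time (the physiological refractory period of the text)."""
    lockout = int(round(lockout_ms * fs / 1000))
    peaks, last = [], -lockout
    for i in range(1, len(y) - 1):
        if y[i - 1] < y[i] >= y[i + 1] and i - last >= lockout:
            peaks.append(i)
            last = i
    return peaks
```

At 200 Hz the lockout is 40 samples, so a second local maximum closer than 40 samples to the previous one is discarded, exactly as the fiducial-mark rule prescribes.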
The threshold is automatically updated after detecting a new peak, based on its classification as a signal or noise peak: (if PEAKI is a signal peak) (if PEAKI is a noise peak) where PEAKI is the new peak found in the integrated signal. At the beginning of the QRS detection, a 2-second learning phase is needed to initialize SignalLevelI and NoiseLevelI as a percentage of the maximum and average amplitude of the integrated signal, respectively. If a new PEAKI is below ThresholdI, the noise level is updated. If PEAKI is above ThresholdI, the algorithm implements a further check before confirming the peak as a true QRS, taking into consideration the information provided by the bandpass-filtered signal. In the filtered signal, the peak corresponding to the one evaluated on the integrated signal is searched for and compared with a threshold, calculated in a similar way to the previous step: (if PEAKF is a signal peak) (if PEAKF is a noise peak) where the final F stands for the filtered signal. Search back for missed QRS complexes The algorithm takes into account the possibility of setting too high values of ThresholdI and ThresholdF. A check is performed to continuously assess the RR intervals (namely the temporal intervals between consecutive QRS peaks) to overcome this issue. The average RR is computed in two ways to cover both regular and irregular heart rhythms. In the first method, RRaverage1 is computed as the mean of the last RR intervals. In the second method, RRaverage2 is computed as the mean of the last RR intervals that fell between the limits specified as: If no QRS is detected in a window of 166% of the average RR (RRaverage1 or RRaverage2, if the heart rhythm is regular or irregular, respectively), the algorithm adds the maximal peak in the window as a potential QRS and classifies it using half the values of the thresholds (both ThresholdI and ThresholdF). 
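The update equations themselves did not survive in this copy; the coefficients commonly quoted for Pan–Tompkins (0.125/0.875 running-average smoothing and a threshold blending the two levels with a factor 0.25) are used in the sketch below as an assumption, not as values recovered from this text:

```python
class AdaptiveThreshold:
    """Running signal/noise levels for one signal stream (integrated or
    filtered).  The 0.125/0.875 smoothing and the 0.25 blend are the
    commonly cited Pan-Tompkins coefficients -- an assumption here,
    since the original equations were lost from this page."""
    def __init__(self, signal_level, noise_level):
        self.signal_level = signal_level   # initialised in the learning phase
        self.noise_level = noise_level

    @property
    def threshold(self):
        return self.noise_level + 0.25 * (self.signal_level - self.noise_level)

    def update(self, peak):
        """Classify `peak` against the current threshold, update the
        corresponding running level, and return True for a signal peak."""
        if peak > self.threshold:
            self.signal_level = 0.125 * peak + 0.875 * self.signal_level
            return True
        self.noise_level = 0.125 * peak + 0.875 * self.noise_level
        return False
```

One such object per stream (integrated and filtered) reproduces the two-threshold check of the text; halving `threshold` gives the relaxed criterion used during search-back.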
This check is implemented because the temporal distance between two consecutive beats cannot physiologically change more quickly than this. T wave discrimination The algorithm pays particular attention to the possibility of falsely detecting T waves. If a potential QRS falls within a 160 ms window after the refractory period from the last correctly detected QRS complex, the algorithm evaluates whether it could be a T wave with particularly high amplitude. In this case, its slope is compared to that of the preceding QRS complex. If the slope is less than half the previous one, the current candidate is recognized as a T wave and discarded, and the algorithm also updates the NoiseLevel (both in the filtered signal and the integrated signal). Application Once the QRS complex is successfully recognized, the heart rate is computed as a function of the distance in seconds between two consecutive QRS complexes (or R peaks): where bpm stands for beats per minute. The HR is often used to compute the heart rate variability (HRV), a measure of the variability of the time interval between heartbeats. HRV is often used in the clinical field to diagnose and monitor pathological conditions and their treatment, but also in affective computing research to study new methods of assessing people's emotional state. See also Electrophysiology QRS Heart rate Heart rate variability Affective computing References Algorithms Cardiac electrophysiology
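The heart-rate computation described in words above (HR in bpm from the RR interval in seconds) can be written directly; the function name is hypothetical:

```python
def heart_rate_bpm(r_peaks, fs):
    """Instantaneous heart rate (beats per minute) from R-peak sample
    indices: HR = 60 / RR, with RR the peak-to-peak interval in
    seconds, as stated in the text."""
    rates = []
    for a, b in zip(r_peaks, r_peaks[1:]):
        rr_seconds = (b - a) / fs
        rates.append(60.0 / rr_seconds)
    return rates
```

The resulting beat-to-beat series is precisely what HRV analyses operate on.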
Pan–Tompkins algorithm
[ "Mathematics" ]
1,496
[ "Applied mathematics", "Algorithms", "Mathematical logic" ]
61,186,816
https://en.wikipedia.org/wiki/Variational%20multiscale%20method
The variational multiscale method (VMS) is a technique used for deriving models and numerical methods for multiscale phenomena. The VMS framework has been mainly applied to design stabilized finite element methods in which stability of the standard Galerkin method is not ensured, both in terms of singular perturbations and of compatibility conditions with the finite element spaces. Stabilized methods have received increasing attention in computational fluid dynamics because they are designed to overcome drawbacks typical of the standard Galerkin method: advection-dominated flow problems and problems in which an arbitrary combination of interpolation functions may yield unstable discretized formulations. A milestone among stabilized methods for this class of problems is the Streamline Upwind Petrov–Galerkin (SUPG) method, designed during the 1980s for convection-dominated flows of the incompressible Navier–Stokes equations by Brooks and Hughes. The variational multiscale method (VMS) was introduced by Hughes in 1995. Broadly speaking, VMS is a technique used to derive mathematical models and numerical methods able to capture multiscale phenomena; in fact, it is usually adopted for problems with a huge range of scales, which are separated into a number of scale groups. The main idea of the method is to design a sum decomposition of the solution as , where is denoted as the coarse-scale solution and is solved numerically, whereas represents the fine-scale solution and is determined analytically, eliminating it from the coarse-scale equation. The abstract framework Abstract Dirichlet problem with variational formulation Consider an open bounded domain with smooth boundary , being the number of space dimensions. Denoting with a generic, second-order, nonsymmetric differential operator, consider the following boundary value problem: being and given functions. 
Let be the Hilbert space of square-integrable functions with square-integrable derivatives: Consider the trial solution space and the weighting function space defined as follows: The variational formulation of the boundary value problem defined above reads: , being the bilinear form satisfying , a bounded linear functional on and the inner product. Furthermore, the dual operator of is defined as the differential operator such that . Variational multiscale method In the VMS approach, the function spaces are decomposed through a multiscale direct sum decomposition for both and into coarse- and fine-scale subspaces as: and Hence, an overlapping sum decomposition is assumed for both and as: , where represents the coarse (resolvable) scales and the fine (subgrid) scales, with , , and . In particular, the following assumptions are made on these functions: With this in mind, the variational form can be rewritten as and, by using the bilinearity of and the linearity of , The last equation yields a coarse-scale and a fine-scale problem: or, equivalently, considering that and : By rearranging the second problem as , the corresponding Euler–Lagrange equation reads: which shows that the fine-scale solution depends on the strong residual of the coarse-scale equation . The fine-scale solution can be expressed in terms of through the Green's function : Let be the Dirac delta function; by definition, the Green's function is found by solving Moreover, it is possible to express in terms of a new differential operator that approximates the differential operator as with . In order to eliminate the explicit dependence of the coarse-scale equation on the sub-grid scale terms, considering the definition of the dual operator, the last expression can be substituted in the second term of the coarse-scale equation: Since is an approximation of , the variational multiscale formulation will consist in finding an approximate solution instead of . 
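Most display equations of this derivation were lost in this copy. As a hedged reconstruction in standard VMS notation (consistent with the surrounding prose, but not recovered verbatim from the original), the scale split and the two sub-problems read:

```latex
% Sum decomposition of solution and weighting function
u = \bar{u} + u', \qquad w = \bar{w} + w'
% Coarse-scale problem: for all \bar{w},
a(\bar{w}, \bar{u}) + a(\bar{w}, u') = (\bar{w}, f)
% Fine-scale problem: for all w',
a(w', \bar{u}) + a(w', u') = (w', f)
% Euler--Lagrange form of the fine-scale problem: the fine scales are
% driven by the strong residual of the coarse scales,
\mathcal{L} u' = -\bigl(\mathcal{L}\bar{u} - f\bigr) \quad \text{in } \Omega,
% and, through the Green's function g(x, y) of \mathcal{L},
u'(y) = -\int_{\Omega} g(x, y)\,\bigl(\mathcal{L}\bar{u} - f\bigr)(x)\,dx .
```

This matches the statement in the text that the fine-scale solution depends on the strong residual of the coarse-scale equation through the Green's function.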
The coarse problem is therefore rewritten as: being Introducing the form and the functional , the VMS formulation of the coarse-scale equation is rearranged as: Since it is commonly not possible to determine both and , one usually adopts an approximation. In this sense, the coarse-scale spaces and are chosen as finite-dimensional spaces of functions as: and being the finite element space of Lagrangian polynomials of degree over the mesh built in . Note that and are infinite-dimensional spaces, while and are finite-dimensional spaces. Let and be respectively approximations of and , and let and be respectively approximations of and . The VMS problem with finite element approximation reads: or, equivalently: VMS and stabilized methods Consider an advection–diffusion problem: where is the diffusion coefficient, with , and is a given advection field. Let and , , . Let , being and . The variational form of the problem above reads: being Consider a finite element approximation in space of the problem above by introducing the space over a grid made of elements, with . The standard Galerkin formulation of this problem reads Consider a strongly consistent stabilization method of the problem above in a finite element framework: for a suitable form that satisfies: The form can be expressed as , being a differential operator such that: and is the stabilization parameter. A stabilized method with is typically referred to as a multiscale stabilized method. In 1995, Thomas J.R. Hughes showed that a stabilized method of multiscale type can be viewed as a sub-grid scale model where the stabilization parameter is equal to or, in terms of the Green's function, as which yields the following definition of : Stabilization Parameter Properties For the 1-D advection–diffusion problem, with an appropriate choice of basis functions and , VMS provides a projection in the approximation space. 
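The expressions for the stabilization parameter were lost from this copy. As a representative (assumed, not Hughes' exact expression from this text) choice, the classical "optimal" 1-D SUPG parameter for advection–diffusion can be computed as follows:

```python
import math

def supg_tau(b, kappa, h):
    """Classical 1-D SUPG stabilization parameter for the
    advection-diffusion equation  -kappa u'' + b u' = f :

        tau = h / (2|b|) * (coth(Pe) - 1/Pe),  Pe = |b| h / (2 kappa).

    Standard textbook formula, offered here as an assumption in place of
    the stripped expressions of the original article."""
    pe = abs(b) * h / (2.0 * kappa)          # element Peclet number
    xi = 1.0 / math.tanh(pe) - 1.0 / pe      # the 'upwind function'
    return h / (2.0 * abs(b)) * xi
```

In the advection-dominated limit (large Peclet number) the upwind function tends to 1 and tau approaches h/(2|b|); in the diffusion-dominated limit tau vanishes, so the stabilized method reverts to plain Galerkin.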
Further, an adjoint-based expression for can be derived, where is the element-wise stabilization parameter, is the element-wise residual and the adjoint problem solves, In fact, one can show that the thus-calculated allows one to compute the linear functional exactly. VMS turbulence modeling for large-eddy simulations of incompressible flows The idea of VMS turbulence modeling for large eddy simulations (LES) of the incompressible Navier–Stokes equations was introduced by Hughes et al. in 2000, and the main idea was to use variational projections instead of classical filtering techniques. Incompressible Navier–Stokes equations Consider the incompressible Navier–Stokes equations for a Newtonian fluid of constant density in a domain with boundary , being and portions of the boundary where respectively a Dirichlet and a Neumann boundary condition is applied (): being the fluid velocity, the fluid pressure, a given forcing term, the outward directed unit normal vector to , and the viscous stress tensor defined as: Let be the dynamic viscosity of the fluid, the second-order identity tensor and the strain-rate tensor defined as: The functions and are given Dirichlet and Neumann boundary data, while is the initial condition. Global space-time variational formulation In order to find a variational formulation of the Navier–Stokes equations, consider the following infinite-dimensional spaces: Furthermore, let and . The weak form of the unsteady incompressible Navier–Stokes equations reads: given , where represents the inner product and the inner product. Moreover, the bilinear forms , and the trilinear form are defined as follows: Finite element method for space discretization and VMS-LES modeling In order to discretize the Navier–Stokes equations in space, consider the function space of finite elements of piecewise Lagrangian polynomials of degree over the domain , triangulated with a mesh made of tetrahedrons of diameters , . 
Following the approach shown above, let us introduce a multiscale direct-sum decomposition of the space , which represents both and : being the finite-dimensional function space associated to the coarse scale, and the infinite-dimensional fine-scale function space, with , and . An overlapping sum decomposition is then defined as: By using the decomposition above in the variational form of the Navier–Stokes equations, one gets a coarse- and a fine-scale equation; the fine-scale terms appearing in the coarse-scale equation are integrated by parts and the fine-scale variables are modeled as: In the expressions above, and are the residuals of the momentum equation and continuity equation in strong form, defined as: while the stabilization parameters are set equal to: where is a constant depending on the polynomial degree , is a constant equal to the order of the backward differentiation formula (BDF) adopted as the temporal integration scheme and is the time step. The semi-discrete variational multiscale formulation (VMS-LES) of the incompressible Navier–Stokes equations reads: given , being and The forms and are defined as: From the expressions above, one can see that the form contains the standard terms of the Navier–Stokes equations in variational formulation, while the form contains four terms: the first term is the classical SUPG stabilization term; the second term represents a stabilization term additional to the SUPG one; the third term is a stabilization term typical of the VMS modeling; the fourth term is peculiar to the LES modeling, describing the Reynolds cross-stress. See also Navier–Stokes equations Large eddy simulation Finite element method Backward differentiation formula Computational fluid dynamics Streamline upwind Petrov–Galerkin pressure-stabilizing Petrov–Galerkin formulation for incompressible Navier–Stokes equations References Mathematical modeling Numerical analysis Computational fluid dynamics
Variational multiscale method
[ "Physics", "Chemistry", "Mathematics" ]
1,893
[ "Mathematical modeling", "Computational fluid dynamics", "Applied mathematics", "Computational mathematics", "Computational physics", "Mathematical relations", "Numerical analysis", "Approximations", "Fluid dynamics" ]
61,186,826
https://en.wikipedia.org/wiki/Bueno-Orovio%E2%80%93Cherry%E2%80%93Fenton%20model
The Bueno-Orovio–Cherry–Fenton model, also simply called the Bueno-Orovio model, is a minimal ionic model for human ventricular cells. It belongs to the category of phenomenological models, because it describes the electrophysiological behaviour of cardiac muscle cells without accounting in detail for the underlying physiology and the specific mechanisms occurring inside the cells. This mathematical model reproduces both single-cell and important tissue-level properties, accounting for physiological action potential development and conduction velocity estimations. It also provides specific parameter choices, derived from parameter-fitting algorithms of the MATLAB Optimization Toolbox, for the modeling of epicardial, endocardial and mid-myocardial tissues. In this way it is possible to match the action potential morphologies, observed from experimental data, in the three different regions of the human ventricles. The Bueno-Orovio–Cherry–Fenton model is also able to describe reentrant and spiral wave dynamics, which occur for instance during tachycardia or other types of arrhythmias. From the mathematical perspective, it consists of a system of four differential equations: one PDE, similar to the monodomain model, for a dimensionless version of the transmembrane potential, and three ODEs that define the evolution of the so-called gating variables, i.e. probability density functions whose aim is to model the fraction of open ion channels across a cell membrane. Mathematical modeling The system of four differential equations reads as follows: where is the spatial domain and is the final time. The initial conditions are , , , . refers to the Heaviside function centered in . The dimensionless transmembrane potential can be rescaled in mV by means of the affine transformation . 
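The display equations of the system did not survive in this copy. As a hedged reconstruction from the published minimal model of Bueno-Orovio, Cherry and Fenton (supplied from the literature, not recovered from this page), the commonly cited form is:

```latex
% H is the Heaviside function; u is the dimensionless potential,
% v, w, s the gating variables; D the diffusion coefficient.
\partial_t u = \nabla\cdot(D\,\nabla u) - (J_{fi} + J_{so} + J_{si})
J_{fi} = -\,v\,H(u-\theta_v)\,(u-\theta_v)\,(u_u-u)/\tau_{fi}
J_{so} = (u-u_o)\,\bigl(1-H(u-\theta_w)\bigr)/\tau_o + H(u-\theta_w)/\tau_{so}
J_{si} = -\,H(u-\theta_w)\,w\,s/\tau_{si}
\partial_t v = \bigl(1-H(u-\theta_v)\bigr)(v_\infty - v)/\tau_v^- - H(u-\theta_v)\,v/\tau_v^+
\partial_t w = \bigl(1-H(u-\theta_w)\bigr)(w_\infty - w)/\tau_w^- - H(u-\theta_w)\,w/\tau_w^+
\partial_t s = \Bigl(\bigl(1+\tanh(k_s(u-u_s))\bigr)/2 - s\Bigr)/\tau_s
```

The thresholds, time "constants" and asymptotic values carry the region-specific parameter choices (epicardial, endocardial, mid-myocardial) referred to in the text.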
, and refer to gating variables, where in particular can also be used as an indication of intracellular calcium concentration (given in the dimensionless range [0, 1] instead of molar concentration). and are the fast inward, slow outward and slow inward currents respectively, given by the following expressions: All the above-mentioned ionic density currents are partially dimensionless and are expressed in . Different parameter sets, as shown in Table 1, can be used to reproduce the action potential development of epicardial, endocardial and mid-myocardial human ventricular cells. There are some constants of the model, not listed in Table 1, that can be deduced with the following formulas: where the temporal constants, i.e. , are expressed in seconds, whereas and are dimensionless. The diffusion coefficient results in a value of , which comes from experimental tests on human ventricular tissues. In order to trigger the action potential development in a certain position of the domain , a forcing term , which represents an externally applied density current, is usually added to the right-hand side of the PDE and acts for a short time interval only. Weak formulation Assume that refers to the vector containing all the gating variables, i.e. , and contains the corresponding three right-hand sides of the ionic model. The Bueno-Orovio–Cherry–Fenton model can be rewritten in the compact form: Let and be two generic test functions. To obtain the weak formulation: multiply by the first equation of the model and by the equations for the evolution of the gating variables. Integrating both sides of all the equations in the domain : perform an integration by parts of the diffusive term of the first monodomain-like equation: Finally the weak formulation reads: Find and , , such that Numerical discretization There are several methods to discretize this system of equations in space, such as the finite element method (FEM) or isogeometric analysis (IGA). 
Time discretization can be performed in several ways as well, such as using a backward differentiation formula (BDF) of order or a Runge–Kutta method (RK). Space discretization with FEM Let be a tessellation of the computational domain by means of a certain type of elements (such as tetrahedra or hexahedra), with representing a chosen measure of the size of a single element . Consider the set of polynomials with degree less than or equal to over an element . Define as the finite-dimensional space, whose dimension is . The set of basis functions of is referred to as . The semidiscretized formulation of the first equation of the model reads: find the projection of the solution on , , such that with , the semidiscretized version of the three gating variables, and the total ionic density current. The space-discretized version of the first equation can be rewritten as a system of non-linear ODEs by setting and : where , and . The non-linear term can be treated in different ways, such as using state variable interpolation (SVI) or ionic currents interpolation (ICI). In the framework of SVI, by denoting with and the quadrature nodes and weights of a generic element of the mesh , both and are evaluated at the quadrature nodes: The equations for the three gating variables, which are ODEs, are directly solved in all the degrees of freedom (DOF) of the tessellation separately, leading to the following semidiscrete form: Time discretization with BDF (implicit scheme) With reference to the time interval , let be the chosen time step, with the number of subintervals. A uniform partition in time is finally obtained. At this stage, the full discretization of the Bueno-Orovio ionic model can be performed in either a monolithic or a segregated fashion. 
With the first methodology (the monolithic one), at time , the full problem is solved entirely in one step in order to get by means of either Newton's method or fixed-point iterations: where and are extrapolations of the transmembrane potential and gating variables at previous timesteps with respect to , considering as many time instants as the order of the BDF scheme. is a coefficient which depends on the BDF order . If a segregated method is employed, the equation for the evolution in time of the transmembrane potential and those of the gating variables are numerically solved separately: Firstly, is calculated, using an extrapolation at previous timesteps for the transmembrane potential on the right-hand side: Secondly, is computed, exploiting the value of that has just been calculated: Another possible segregated scheme would be the one in which is calculated first, and then used in the equations for . See also Monodomain model Bidomain model References Mathematical modeling Numerical analysis Cardiac electrophysiology
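The segregated stepping order (gating variables first, then the potential) can be made concrete in a minimal 0-D sketch, i.e. a single cell with no diffusion and BDF of order 1 (explicit Euler with previous-step extrapolation). The placeholder reaction terms below are FitzHugh–Nagumo-like stand-ins, NOT the Bueno-Orovio currents, which need the three gating variables and the parameters of Table 1:

```python
def segregated_step(u, z, dt, ion_current, gate_rhs):
    """One segregated BDF1 step: the gating variable(s) z are advanced
    first using the extrapolated (here: previous) potential u, and the
    potential is then advanced with the fresh gating state."""
    z_new = z + dt * gate_rhs(u, z)            # step 1: gating variables
    u_new = u + dt * (-ion_current(u, z_new))  # step 2: potential
    return u_new, z_new

# Placeholder "ionic model" -- an assumption used only to exercise the
# stepping order, not the Bueno-Orovio right-hand sides.
def ion_current(u, z):
    return -(u * (1.0 - u) * (u - 0.1) - z)

def gate_rhs(u, z):
    return 0.01 * (u - 0.5 * z)

u, z = 0.3, 0.0          # supra-threshold initial condition: u rises
for _ in range(200):     # toward the excited state, then recovers
    u, z = segregated_step(u, z, 0.5, ion_current, gate_rhs)
```

Swapping the two lines inside `segregated_step` gives the alternative segregated ordering mentioned at the end of the section; a monolithic scheme would instead solve for `u_new` and `z_new` simultaneously with a nonlinear solver.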
Bueno-Orovio–Cherry–Fenton model
[ "Mathematics" ]
1,403
[ "Mathematical modeling", "Applied mathematics", "Computational mathematics", "Mathematical relations", "Numerical analysis", "Approximations" ]
61,186,827
https://en.wikipedia.org/wiki/Electro-oxidation
Electro-oxidation (EO or EOx), also known as anodic oxidation or electrochemical oxidation (EC), is a technique used for wastewater treatment, mainly for industrial effluents, and is a type of advanced oxidation process (AOP). The most general layout comprises two electrodes, operating as anode and cathode, connected to a power source. When an energy input and sufficient supporting electrolyte are provided to the system, strong oxidizing species are formed, which interact with the contaminants and degrade them. The refractory compounds are thus converted into reaction intermediates and, ultimately, into water and CO2 by complete mineralization. Electro-oxidation has recently grown in popularity thanks to its ease of set-up and its effectiveness in treating harmful and recalcitrant organic pollutants, which are typically difficult to degrade with conventional wastewater remediation processes. Also, it does not require any external addition of chemicals (in contrast to other processes), as the required reactive species are generated at the anode surface. Electro-oxidation has been applied to treat a wide variety of harmful and non-biodegradable contaminants, including aromatics, pesticides, drugs and dyes. Due to its relatively high operating costs, it is often combined with other technologies, such as biological remediation. Electro-oxidation can additionally be paired with other electrochemical technologies such as electrocoagulation, consecutively or simultaneously, to further reduce operational costs while achieving high degradation standards. Apparatus The set-up for performing an electro-oxidation treatment consists of an electrochemical cell. An external electric potential difference (i.e. a voltage) is applied to the electrodes, resulting in the formation of reactive species, namely hydroxyl radicals, in the proximity of the electrode surface. To assure a reasonable rate of generation of radicals, the voltage is adjusted to provide a current density of 10-100 mA/cm2. 
While the cathode materials are mostly the same in all cases, the anodes can vary greatly according to the application, as the reaction mechanism is strongly influenced by the material selection. Cathodes are mostly made of stainless steel plates, platinum mesh or carbon-felt electrodes. Depending on the nature of the effluent, an increase of the conductivity of the solution may be required: the value of 1000 mS/cm is commonly taken as a threshold. Salts like sodium chloride or sodium sulfate can be added to the solution, acting as electrolytes, thus raising the conductivity. Typical values of salt concentration are in the range of a few grams per liter, but the addition has a significant impact on power consumption and can reduce it by up to 30%. As the main cost associated with the electro-oxidation process is the consumption of electricity, its performance is typically assessed through two main parameters, namely current efficiency and specific energy consumption. Current efficiency is generally defined as the charge required for the oxidation of the considered species over the total charge passed during electrolysis. Although some expressions have been proposed to evaluate the instantaneous current efficiency, they have several limitations due to the presence of volatile intermediates or the need for specialized equipment. Thus, it is much easier to define a general current efficiency (GCE), an average of the current efficiency over the entire process, formulated as follows: Where COD0 and CODt are the chemical oxygen demand (g/dm3) at time 0 and after the treatment time t, F is the Faraday constant (96,485 C/mol), V is the electrolyte volume (dm3), I is the current (A), t is the treatment time (h) and 8 is the oxygen equivalent mass. Current efficiency is a time-dependent parameter and decreases monotonically with treatment time. 
Instead, the specific energy consumption measures the energy required to remove a unit mass of COD from the solution and is typically expressed in kWh/kgCOD. It can be calculated as (EC I t) / ((ΔCOD)t Vs), where EC is the cell voltage (V), I is the current (A), t is the treatment time (h), (ΔCOD)t is the COD decay at the end of the process (g/L) and Vs is the solution volume (L). As the current efficiency may vary significantly depending on the treated solution, one should always find the optimal compromise between current density, treatment time and the resulting specific energy consumption, so as to meet the required removal efficiency. Working principle Direct oxidation When voltage is applied to the electrodes, intermediates of oxygen evolution are formed near the anode, notably hydroxyl radicals. Hydroxyl radicals are known to have one of the highest redox potentials, allowing the degradation of many refractory organic compounds. A reaction mechanism has been proposed for the formation of the hydroxyl radical at the anode through oxidation of water: S + H2O -> S[*OH] + H+ + e- Where S represents the generic surface site for adsorption on the electrode surface. Then, the radical species can interact with the contaminants through two different reaction mechanisms, according to the anode material. The surface of "active" anodes strongly interacts with hydroxyl radicals, leading to the production of higher state oxides or superoxides. The higher oxide then acts as a mediator in the selective oxidation of organic pollutants. Due to the radicals being strongly chemisorbed onto the electrode surface, the reactions are limited to the proximity of the anode surface, according to the mechanism: S[*OH] -> SO + H+ + e- SO + R -> S + RO Where R is the generic organic compound, while RO is the partially oxidized product. If the electrode interacts weakly with the radicals, it is classified as a "non-active" anode.
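The specific energy consumption lends itself to a direct calculation. The following sketch uses the quantities as defined above (cell voltage in V, current in A, time in h, COD decay in g/L, volume in L); the numbers in the example are arbitrary and purely illustrative.

```python
def specific_energy_consumption(cell_voltage_v, current_a, time_h,
                                delta_cod_g_per_l, volume_l):
    """Energy per kg of COD removed, in kWh/kgCOD.

    V * A * h = Wh of electrical energy; dividing by the grams of COD
    removed gives Wh/g, which is numerically equal to kWh/kg.
    """
    energy_wh = cell_voltage_v * current_a * time_h
    cod_removed_g = delta_cod_g_per_l * volume_l
    return energy_wh / cod_removed_g

# Hypothetical run: 5 V cell, 2 A, 4 h, removing 0.5 g/L of COD from 1 L
sec = specific_energy_consumption(5.0, 2.0, 4.0, 0.5, 1.0)  # -> 80 kWh/kgCOD
```

Comparing this figure across operating conditions is how the compromise between current density, treatment time and energy cost mentioned above is typically explored.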
Hydroxyl radicals are physisorbed on the electrode surface by means of weak interaction forces and are thus available for reaction with contaminants. The organic pollutants are converted to fully oxidized products, such as CO2, and reactions occur in a much less selective way than on active anodes: S[*OH] + R -> S + mCO2 + nH2O + H+ + e- Both chemisorbed and physisorbed radicals can undergo the competitive oxygen evolution reaction. For this reason, the distinction between active and non-active anodes is made according to their oxygen evolution overpotential. Electrodes with low oxygen overpotential show an active behavior, as in the case of platinum, graphite or mixed metal oxide electrodes. Conversely, electrodes with high oxygen overpotential are non-active. Typical examples of non-active electrodes are lead dioxide or boron-doped diamond electrodes. A higher oxygen overpotential implies a lower yield of the oxygen evolution reaction, thus raising the anodic process efficiency. Mediated oxidation When appropriate oxidizing agents are dissolved into the solution, the electro-oxidation process not only leads to organics oxidation at the electrode surface, but it also promotes the formation of other oxidant species within the solution. Such oxidizing chemicals are not bound to the anode surface and can extend the oxidation process to the entire bulk of the system. Chlorides are the most widespread species for mediated oxidation. This is due to the chlorides being very common in most wastewater effluents and being easily converted into hypochlorite, according to the global reaction: Cl- + H2O -> ClO- + 2H+ + 2e- Although hypochlorite is the main product, chlorine and hypochlorous acid are also formed as reaction intermediates. Such species are strongly reactive with many organic compounds, promoting their mineralization, but they can also produce several unwanted intermediates and final products.
These chlorinated by-products can sometimes be even more harmful than the raw effluent contaminants and require additional treatments to be removed. To avoid this issue, sodium sulfate is preferred as an electrolyte over sodium chloride, so that chloride ions are not available for the mediated oxidation reaction. Although sulfates can be involved in mediated oxidation as well, electrodes with a high oxygen evolution overpotential are required for this to happen. Electrode materials Carbon and graphite Electrodes based on carbon or graphite are common due to their low cost and high surface area. Also, they are able to promote adsorption of contaminants on their surface while at the same time generating the radicals for electro-oxidation. However, they are not suited for working at high potentials, as in such conditions they experience surface corrosion, resulting in reduced efficiency and progressive degradation of the exposed area. In fact, the overpotential for oxygen evolution is quite low for graphite (1.7 V vs SHE). Platinum Platinum electrodes provide good conductivity and are inert and stable at high potentials. At the same time, the oxygen evolution overpotential is low (1.6 V vs SHE) and comparable to that of graphite. As a result, electro-oxidation with platinum electrodes usually provides low yields due to partial oxidation of the compounds. The contaminants are converted into stable intermediates that are difficult to break down, thus reducing the current efficiency for complete mineralization. Mixed metal oxides (MMOs) Mixed metal oxides, also known as dimensionally stable anodes, are very popular in the electrochemical process industry, because they are very effective in promoting both chlorine and oxygen evolution. In fact, they have been used extensively in the chlor-alkali industry and for the water electrolysis process. In the case of wastewater treatment, they provide low current efficiency, because they favor the competitive reaction of oxygen evolution.
Similarly to platinum electrodes, the formation of stable intermediates is favored over complete mineralization of the contaminants, resulting in reduced removal efficiency. Due to their ability to promote the chlorine evolution reaction, dimensionally stable anodes are the most common choice for processes relying on the mediated oxidation mechanism, especially in the case of chlorine and hypochlorite production. Lead dioxide Lead dioxide electrodes have long been exploited in industrial applications, as they show high stability, large surface area, good conductivity and are quite cheap. In addition, lead dioxide has a very high oxygen evolution overpotential (1.9 V vs SHE), which implies a high current efficiency for complete mineralization. Lead dioxide electrodes have also been found to generate ozone, another strong oxidizer, at high potentials, according to the following mechanism: PbO2[*OH] -> PbO2[O*] + H+ + e- PbO2[O*] + O2 -> PbO2 + O3 Also, the electrochemical properties and the stability of these electrodes can be improved by selecting the proper crystal structure: the highly crystalline beta-phase of lead dioxide shows improved performance in the removal of phenols, due to the increased active surface provided by its porous structure. Moreover, the incorporation of metallic species, such as Fe, Bi or As, within the film was found to increase the current efficiency for mineralization. Boron-doped diamond (BDD) Synthetic diamond is doped with boron to raise its conductivity, making it feasible as an electrochemical electrode. Once doped, BDD electrodes show high chemical and electrochemical stability, good conductivity, great resistance to corrosion even in harsh environments and a remarkably wide potential window (2.3 V vs SHE). For this reason, BDD is generally considered the most effective electrode for complete mineralization of organics, providing high current efficiency as well as lower energy consumption compared to all other electrodes.
At the same time, the manufacturing processes for this electrode, usually based on high-temperature CVD technologies, are very costly. Reaction kinetics Once the hydroxyl radicals are formed on the electrode surface, they rapidly react with organic pollutants, resulting in a lifetime of a few nanoseconds. However, a transfer of ions from the bulk of the solution to the proximity of the electrode surface is required for the reaction to occur. Above a certain potential, the active species formed near the electrode are immediately consumed and diffusion through the boundary layer near the electrode surface becomes the limiting step of the process. This explains why the observed rate of some fast electrode reactions can be low due to transport limitations. Evaluation of the limiting current density can be used as a tool to assess whether the electrochemical process is in diffusion control or not. If the mass transfer coefficient for the system is known, the limiting current density can be defined for a generic organic pollutant according to the relation jL = F kd COD / 8, where jL is the limiting current density (A/m2), F is Faraday's constant (96,485 C/mol), kd is the mass transfer coefficient (m/s), COD is the chemical oxygen demand of the organic pollutant (g/dm3) and 8 is the oxygen equivalent mass. According to this equation, the lower the COD, the lower the corresponding limiting current. Hence, systems with low COD are likely to be operating in diffusion control, exhibiting pseudo-first-order kinetics with exponential decrease. Conversely, for high COD concentrations (roughly above 4000 mg/L) pollutants are degraded under kinetic control (actual current below the limiting value), following a linear trend according to zero-order kinetics. For intermediate values, the COD initially decreases linearly, under kinetic control, but below a critical COD value diffusion becomes the limiting step, resulting in an exponential trend.
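The regime check described above can be sketched numerically. In the snippet below, the COD is converted from g/dm3 to g/m3 so that the limiting current density comes out in A/m2; the mass transfer coefficient and the applied current density are hypothetical values chosen only for illustration.

```python
F = 96485.0  # Faraday's constant, C/mol

def limiting_current_density(kd_m_per_s, cod_g_per_dm3):
    """j_L = F * kd * COD / 8, with COD converted to g/m3 -> result in A/m2."""
    cod_g_per_m3 = cod_g_per_dm3 * 1000.0
    return F * kd_m_per_s * cod_g_per_m3 / 8.0

def control_regime(applied_j_a_per_m2, kd_m_per_s, cod_g_per_dm3):
    """Kinetic control if the applied current density is below the limit,
    diffusion (mass transport) control otherwise."""
    j_lim = limiting_current_density(kd_m_per_s, cod_g_per_dm3)
    return "kinetic control" if applied_j_a_per_m2 < j_lim else "diffusion control"

# Hypothetical system: kd = 2e-5 m/s, COD = 0.5 g/dm3, applied 100 A/m2 (10 mA/cm2)
jl = limiting_current_density(2e-5, 0.5)
mode = control_regime(100.0, 2e-5, 0.5)
```

As the treatment proceeds and the COD drops, the limiting current density decreases with it, which is why a run that starts under kinetic control typically ends under diffusion control.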
If the limiting current density is obtained with other analytical procedures, such as cyclic voltammetry, the proposed equation can be used to retrieve the corresponding mass transfer coefficient for the investigated system. Applications Given the thorough investigations on process design and electrode formulation, electro-oxidation has already been applied in both pilot-scale and full-scale commercially available plants. Some relevant cases are listed below: Oxineo and Sysneo are dedicated products for the disinfection of public and private pools, where radicals are generated through electro-oxidation with BDD electrodes in order to destroy the microorganisms in the water. Compared to other disinfection methods, these systems do not require chemical dosing, do not produce any chlorine smell and prevent algae formation and accumulation. CONDIAS and Advanced Diamond Technologies Inc. supply equipment for anodic oxidation with BDD electrodes, sold under the trademarks CONDIACELL and Diamonox, which can be used either for water disinfection or for the treatment of industrial effluents. A pilot plant was installed in 2007 in Cantabria (Spain), featuring electro-oxidation with BDD electrodes as a final stage after aerobic remediation and chemical Fenton oxidation. The overall removal efficiency for organic pollutants was 99% for the combined processes. See also Wastewater treatment List of wastewater treatment technologies Advanced oxidation process
5G network slicing is a network architecture that enables the multiplexing of virtualized and independent logical networks on the same physical network infrastructure. Each network slice is an isolated end-to-end network tailored to fulfill the diverse requirements requested by a particular application. For this reason, this technology assumes a central role in supporting 5G mobile networks, which are designed to efficiently embrace a plethora of services with very different service level requirements (SLR). The realization of this service-oriented view of the network leverages the concepts of software-defined networking (SDN) and network function virtualization (NFV), which allow the implementation of flexible and scalable network slices on top of a common network infrastructure. From a business model perspective, each network slice is administered by a mobile virtual network operator (MVNO). The infrastructure provider (the owner of the telecommunication infrastructure) leases its physical resources to the MVNOs that share the underlying physical network. According to the availability of the assigned resources, an MVNO can autonomously deploy multiple network slices that are customized to the various applications provided to its own users. History The history of network slicing can be traced back to the late 1980s with the introduction of the concept of a "slice" in the networking field. Overlay networks provided the first form of network slicing, since heterogeneous network resources were combined to create virtual networks over a common infrastructure. However, they lacked a mechanism that could enable their programmability. In the early 2000s, PlanetLab introduced a virtualization framework that allowed groups of users to program network functions in order to obtain isolated and application-specific slices.
The advent of SDN technologies in 2009 further extended these programmability capabilities via open interfaces that enabled the realization of fully configurable and scalable network slices. In the context of mobile networks, network slicing evolved from the concept of RAN sharing that was initially introduced in the LTE standard. Examples of this technology are multi-operator radio access networks (MORAN) and multi-operator core networks (MOCN), which allow network operators to share common LTE resources within the same radio access network (RAN). Key concepts The "one-size-fits-all" network paradigm employed in past mobile networks (2G, 3G and 4G) is no longer suited to efficiently address a market model composed of very different applications such as machine-type communication, ultra-reliable low-latency communication and enhanced mobile broadband content delivery. Network slicing emerges as an essential technique in 5G networks to accommodate such different and possibly contrasting quality of service (QoS) requirements while exploiting a single physical network infrastructure. The basic idea of network slicing is to "slice" the original network architecture into multiple logical and independent networks that are configured to effectively meet the various service requirements. To realize this concept in practice, several techniques are employed: Network functions: they express elementary network functionalities that are used as "building blocks" to create every network slice. Virtualization: it provides an abstract representation of the physical resources under a unified and homogeneous scheme. In addition, it enables scalable slice deployment relying on NFV, which allows the decoupling of each network function instance from the network hardware it runs on. Orchestration: it is a process that allows the coordination of all the different network components involved in the life-cycle of each network slice.
In this context, SDN is employed to enable a dynamic and flexible slice configuration. Impact and applications In commercial terms, network slicing allows a mobile operator to create specific virtual networks that cater to particular clients and use cases. Certain applications - such as mobile broadband, machine-to-machine communications (e.g. in manufacturing or logistics), or smart cars - will benefit from leveraging different aspects of 5G technology. One might require higher speeds, another low latency, and yet another access to edge computing resources. By creating separate slices that prioritise specific resources, a 5G operator can offer tailored solutions to particular industries. Some sources insist this will revolutionise industries like marketing, augmented reality, or mobile gaming, while others are more cautious, pointing to unevenness in network coverage and poor reach of advantages beyond increased speed. Slicing will be very useful to MVNOs, as different use cases can be supported per slice based on parameters such as low latency and high speed for video streaming for OTT-focused MVNOs; similarly, telemetry operations could be assigned a lower speed parameter, and so on. Slicing can also enhance service continuity via improved roaming across networks, by creating a virtual network running on physical infrastructure that spans multiple local or national networks, or by allowing a host network to create an optimised virtual network which replicates the one offered by a roaming device's home network. Architecture overview Although there are different proposals of network slice architectures, it is possible to define a general architecture that maps the common elements of each solution into a general and unified framework. From a high-level perspective, the network slicing architecture can be considered as composed of two main blocks, one dedicated to the actual slice implementation and the other dedicated to slice management and configuration.
The first block is designed as a multi-tier architecture composed of three layers (service layer, network function layer, infrastructure layer), where each one contributes to the slice definition and deployment with distinct tasks. The second block is designed as a centralized network entity, generically denoted as the network slice controller, that monitors and manages the functionalities across the three layers in order to efficiently coordinate the coexistence of multiple slices. Service layer The service layer interfaces directly with the network business entities (e.g. MVNOs and 3rd party service providers) that share the underlying physical network, and it provides a unified vision of the service requirements. Each service is formally represented as a service instance, which embeds all the network characteristics in the form of SLA requirements that are expected to be fully satisfied by a suitable slice creation. Network function layer The network function layer is in charge of the creation of each network slice according to the service instance requests coming from the upper layer. It is composed of a set of network functions that embody well-defined behaviors and interfaces. Multiple network functions are placed over the virtual network infrastructure and chained together to create an end-to-end network slice instance that reflects the network characteristics requested by the service. The configuration of the network functions is performed by means of a set of network operations that allow management of their full life-cycle (from their placement when a slice is created to their de-allocation when the provided function is no longer needed). To increase resource usage efficiency, the same network function can be simultaneously shared by different slices, at the cost of increased complexity in operations management.
Conversely, a one-to-one mapping between each network function and each slice eases the configuration procedures, but can lead to poor and inefficient resource usage. Infrastructure layer The infrastructure layer represents the actual physical network topology (radio access network, transport network and core network) upon which every network slice is multiplexed, and it provides the physical network resources that host the several network functions composing each slice. The network domain of the available resources includes a heterogeneous set of infrastructure components like data centers (storage and computation capacity resources), devices enabling network connectivity such as routers (networking resources), and base stations (radio bandwidth resources). Network slice controller The network slice controller is defined as a network orchestrator, which interfaces with the various functionalities performed by each layer to coherently manage each slice request. The benefit of such a network element is that it enables efficient and flexible slice creation that can be reconfigured during its life-cycle. Operationally, the network slice controller oversees several tasks that provide effective coordination between the aforementioned layers: End-to-end service management: mapping of the various service instances, expressed in terms of SLA requirements, onto suitable network functions capable of satisfying the service constraints. Virtual resources definition: virtualization of the physical network resources in order to simplify the resource management operations performed to allocate network functions. Slice life-cycle management: slice performance monitoring across all three layers in order to dynamically reconfigure each slice to accommodate possible modifications of the SLA requirements.
Due to the complexity of the performed tasks, which address different purposes, the network slice controller can be composed of multiple orchestrators that independently manage a subset of the functionalities of each layer. To fulfill the service requirements, the various orchestration entities need to coordinate with each other by exchanging high-level information about the state of the operations involved in slice creation and deployment. Slice isolation Slice isolation is an important requirement that allows enforcing the core concept of network slicing, namely the simultaneous coexistence of multiple slices sharing the same infrastructure. This property is achieved by imposing that each slice's performance must not have any impact on the performance of the other slices. The benefit of this design choice is that it enhances the network slice architecture in two main aspects: Slice security: cyber-attacks or fault occurrences affect only the target slice and have limited impact on the life-cycle of other existing slices. Slice privacy: private information related to each slice (e.g. user statistics, MVNO business model) is not shared among other slices. Guaranteeing QoS Slicing has become an important part of 5G networks, but QoS must still be guaranteed. Some studies have demonstrated that formulating the QoS problem as a stochastic optimization problem makes it possible to maximize the average throughput of the access point (AP) while satisfying the QoS-related constraints. Monetizing 5G network slicing Monetizing 5G services faster is one of the topics that most interests network operators, because the costs of building and maintaining 5G networks are high and it is difficult to predict the demand for 5G services. 5G network slicing is one of the effective ways to offer customized services for different industries such as manufacturing, transportation, and healthcare.
Combined with AIOps, ML/AI-driven automation and 5G life-cycle optimization, it can reduce OpEx and increase revenues for network operators. 5G core network slicing In the 3GPP 5G core architecture, the user plane and control plane functions are separated. Control plane capabilities, for instance session management, access authentication, policy management, and user data storage, are independent of the user plane functionality. The user plane handles packet forwarding, encapsulation or decapsulation, and associated transport-level specifics. This separation allows the user plane functions to be distributed close to the edge of network slices (e.g., so as to reduce latency) and to be independent of the control plane. The main 5G core network entities are the Authentication server function (AUSF), Unstructured data storage network function (UDSF), Network exposure function (NEF), NF repository function (NRF), Policy control function (PCF), Unified data management (UDM), Network Slice Selection Function (NSSF), Communication Service Management Function (CSMF), AMF, SMF, and UPF. The AMF (as a function of the CP) controls UEs that have been authenticated to use the services of the operator and manages the mobility of the UEs across the gNBs. The SMF (again part of the CP) manages the sessions of UEs, while the AMF transmits the session management messages between the UEs and the SMF. The UPF (as part of the UP) performs the processing and forwarding of the user data. The NSSF (as a function of the CP) is responsible for the management and orchestration of network slices. The CSMF (as a function of the CP) translates the requirements of services into requirements relating to network slices. 5G core network functions can be sliced to support specific services for different UEs. Thanks to the modular nature of the 5G core, its network functions can be split and shared between different network slices to reduce management complexity.
In general, 5G core network slicing can be performed in two ways. The first is to implement dedicated core network functions per network slice. In this architecture, each network slice has a completely dedicated set of core network functions (e.g., AUSF, AMF, SMF, and UDM). The UEs can access various services from network slices and different core networks. Alternatively, some control plane functions can be shared between the network slices while others, such as the user plane functions (e.g., UPF), are slice-specific. The AMF is usually shared by several network slices, while the SMF and UPF are usually dedicated to specific network slices. The AMF is shared between different network slices in order to reduce the mobility management signaling when the UE uses the services of different network slices simultaneously. For example, UE location management or the control signaling between the UE and the old AMF is reduced when the UE is connected to the new AMF of another network slice. Also, the UDM and NSSF are typically shared by all network slices to reduce the management complexity of network slices. Network slicing security The emergence of network slicing also exposes novel security and privacy challenges, primarily related to aspects such as network slicing life-cycle security, inter-slice security, intra-slice security, slice broker security, zero-touch network and management security, and blockchain security. Therefore, enhancing the security, privacy, and trust of network slicing has become a key research area toward realizing the true capabilities of 5G. Various security solutions have been proposed for resolving the security threats, challenges, and issues of network slicing.
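The sharing pattern just described can be pictured with a small toy model: AMF, UDM and NSSF instances are shared across slices, while each slice gets its own SMF and UPF. The function names follow the text, but the slice names and the data structure are illustrative only, not any 3GPP-defined interface.

```python
# Toy model of 5G core NF sharing between slices (illustrative, not 3GPP API).

SHARED_NFS = {"AMF", "UDM", "NSSF"}  # control-plane functions shared by all slices

def build_slice(name):
    """Return the set of core network functions serving a slice:
    the shared control-plane NFs plus dedicated per-slice SMF and UPF."""
    dedicated = {f"SMF-{name}", f"UPF-{name}"}
    return SHARED_NFS | dedicated

embb = build_slice("eMBB")
urllc = build_slice("URLLC")

# The slices share AMF/UDM/NSSF but keep fully separate session management
# and user planes, which is what keeps their traffic paths isolated.
common = embb & urllc
```

Even in this toy form, the model makes the trade-off visible: every shared element reduces management overhead, while every dedicated element strengthens isolation between slices.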
These solutions include artificial intelligence-based solutions, security orchestration, blockchain-based solutions, Security Service Level Agreement (SSLA) and policy-based solutions, security monitoring-based solutions, slice isolation, security-by-design and privacy-by-design, and offering security as a service. See also APN NGAP 5G Network virtualization Software-defined networking Network orchestration Network service 5G NR frequency bands
The durability design of reinforced concrete structures has recently been introduced in national and international regulations. Structures are required to be designed to preserve their characteristics during the service life, avoiding premature failure and the need for extraordinary maintenance and restoration works. Considerable efforts have therefore been made in the last decades to define useful models describing the degradation processes affecting reinforced concrete structures, to be used during the design stage to assess the material characteristics and the structural layout of the structure. Service life of a reinforced concrete structure Initially, the chemical reactions that normally occur in the cement paste generate an alkaline environment, bringing the solution in the cement paste pores to pH values around 13. In these conditions, passivation of the steel rebar occurs, due to the spontaneous formation of a thin film of oxides able to protect the steel from corrosion. Over time, this thin film can be damaged, and corrosion of the steel rebar starts. The corrosion of steel rebar is one of the main causes of premature failure of reinforced concrete structures worldwide, mainly as a consequence of two degradation processes: carbonation and the penetration of chlorides. With regard to the corrosion degradation process, a simple and accredited model for the assessment of the service life is the one proposed by Tuutti in 1982. According to this model, the service life of a reinforced concrete structure can be divided into two distinct phases. ti, the initiation time: from the moment the structure is built to the moment corrosion initiates on the steel rebar. In particular, it is the time required for aggressive agents (carbon dioxide and chlorides) to penetrate the concrete cover thickness, reach the embedded steel rebar, alter the initial passivation condition on the steel surface and cause corrosion initiation.
tp, the propagation time: defined as the time from the onset of active corrosion until an ultimate limit state is reached, i.e. corrosion propagation reaches a limit value corresponding to unacceptable structural damage, such as cracking and detachment of the concrete cover. The identification of the initiation time and the propagation time is useful to further identify the main variables and processes influencing the service life of the structure, which are specific to each service-life phase and to the degradation process considered. Carbonation-induced corrosion The initiation time is related to the rate at which carbonation propagates in the concrete cover thickness. Once carbonation reaches the steel surface, altering the local pH value of the environment, the protective thin film of oxides on the steel surface becomes unstable, and corrosion initiates, involving an extended portion of the steel surface. One of the most simplified and accredited models describing the propagation of carbonation in time considers the penetration depth proportional to the square root of time, following the correlation x = K √t, where x is the carbonation depth, t is time, and K is the carbonation coefficient. The corrosion onset takes place when the carbonation depth reaches the concrete cover thickness, and the initiation time can therefore be evaluated as ti = (c/K)^2, where c is the concrete cover thickness. K is the key design parameter to assess the initiation time in the case of carbonation-induced corrosion. It is expressed in mm/year1/2 and depends on the characteristics of the concrete and the exposure conditions. The penetration of gaseous CO2 in a porous medium such as concrete occurs via diffusion. The humidity content of concrete is one of the main factors influencing CO2 diffusion in concrete. If the concrete pores are completely and permanently saturated (for instance in submerged structures), CO2 diffusion is prevented. On the other hand, for completely dry concrete, the chemical reaction of carbonation cannot occur.
Another factor influencing the CO2 diffusion rate is concrete porosity. Concrete obtained with a higher w/c ratio, or with an incorrect curing process, presents higher porosity in the hardened state and is therefore subjected to a higher carbonation rate. The influencing factors concerning the exposure conditions are the environmental temperature, humidity and concentration of CO2. The carbonation rate is higher in environments with higher humidity and temperature, and increases in polluted environments such as urban centres and inside closed spaces such as tunnels. To evaluate the propagation time in the case of carbonation-induced corrosion, several models have been proposed. In a simplified but commonly accepted method, the propagation time is evaluated as a function of the corrosion propagation rate. If the corrosion rate is considered constant, tp can be estimated as tp = xlim / vcorr, where xlim is the limit corrosion penetration in steel and vcorr is the corrosion propagation rate. xlim must be defined as a function of the limit state considered. Generally, for carbonation-induced corrosion the cracking of the concrete cover is considered as the limit state, and in this case an xlim equal to 100 μm is assumed. vcorr depends on the environmental factors in the proximity of the corrosion process, such as the availability of oxygen and water at the concrete cover depth. Oxygen is generally available at the steel surface, except for submerged structures. If the pores are constantly fully saturated, a very low amount of oxygen reaches the steel surface and the corrosion rate can be considered negligible. For very dry concrete, vcorr is negligible due to the absence of water, which prevents the chemical reaction of corrosion. For intermediate concrete humidity contents, the corrosion rate increases with increasing humidity content. Since the humidity content in concrete can vary significantly along the year, it is generally not possible to define a constant vcorr. One possible approach is to consider a mean annual value of vcorr.
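A service-life estimate for carbonation-induced corrosion combines the two phases above: the initiation time from the square-root carbonation model and the propagation time from a constant corrosion rate. The helper functions below follow those relations directly; the cover depth, carbonation coefficient and corrosion rate in the example are hypothetical design values.

```python
def initiation_time_years(cover_mm, k_mm_per_sqrt_year):
    """t_i from x = K * sqrt(t): time for carbonation to reach the rebar.
    Inverting the model gives t_i = (c / K)^2."""
    return (cover_mm / k_mm_per_sqrt_year) ** 2

def propagation_time_years(limit_penetration_um, corrosion_rate_um_per_year):
    """t_p = x_lim / v_corr, assuming a constant corrosion rate."""
    return limit_penetration_um / corrosion_rate_um_per_year

# Hypothetical design: 30 mm cover, K = 5 mm/year^0.5,
# 100 um limit penetration (cover cracking) and 5 um/year corrosion rate
t_i = initiation_time_years(30.0, 5.0)    # (30/5)^2 = 36 years
t_p = propagation_time_years(100.0, 5.0)  # 100/5 = 20 years
service_life = t_i + t_p                  # 56 years in total
```

The quadratic dependence of t_i on the cover-to-coefficient ratio is why even a modest increase of the concrete cover, or a denser concrete with a lower K, extends the service life disproportionately.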
Chloride-induced corrosion The presence of chlorides at the steel surface, above a certain critical amount, can locally break the protective thin film of oxides on the steel surface even while the concrete is still alkaline, causing a very localized and aggressive form of corrosion known as pitting. Current regulations forbid the use of chloride-contaminated raw materials, so the main factor governing the initiation time is the rate of chloride penetration from the environment. Modelling this is a complex task, because chloride solutions penetrate concrete through a combination of several transport phenomena, such as diffusion, capillary suction and hydrostatic pressure. Chloride binding is another phenomenon affecting the kinetics of chloride penetration: part of the total chloride ions can be adsorbed or can chemically react with some constituents of the cement paste, reducing the chlorides in the pore solution (the free chlorides, which are still able to penetrate the concrete). The chloride-binding capacity of a concrete depends on the cement type, being higher for blended cements containing silica fume, fly ash or furnace slag. Since the modelling of chloride penetration in concrete is particularly complex, a simplified correlation is generally adopted, first proposed by Collepardi in 1972: C(x,t) = Cs·[1 − erf(x / (2·√(D·t)))], where Cs is the chloride concentration at the exposed surface, x is the chloride penetration depth, D is the chloride diffusion coefficient, and t is time. This equation is a solution of Fick's second law of diffusion under the hypotheses that the initial chloride content is zero, that Cs is constant in time over the whole surface, and that D is constant in time and through the concrete cover. With Cs and D known, the equation can be used to evaluate the temporal evolution of the chloride concentration profile in the concrete cover and to evaluate the initiation time as the moment at which the critical chloride threshold Ccrit is reached at the depth of the steel rebar.
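A minimal numerical sketch of the Collepardi model follows. The values assumed for Cs, Ccrit and D are purely illustrative (they are not values prescribed by any standard), and the initiation-time search uses a simple one-year time step.

```python
import math

def chloride_content(x, t, Cs, D):
    """Collepardi solution of Fick's 2nd law:
    C(x, t) = Cs * (1 - erf(x / (2 * sqrt(D * t))))."""
    return Cs * (1.0 - math.erf(x / (2.0 * math.sqrt(D * t))))

def chloride_initiation_time(cover, Cs, Ccrit, D, t_max=200):
    """First year at which the content at rebar depth reaches Ccrit."""
    for t in range(1, t_max + 1):
        if chloride_content(cover, t, Cs, D) >= Ccrit:
            return t
    return None

# Illustrative values: Cs = 3% by cement weight, Ccrit = 0.6%,
# D = 10 mm^2/year, concrete cover = 40 mm
t_i = chloride_initiation_time(40.0, 3.0, 0.6, 10.0)
```

At the exposed surface (x = 0) the model returns C = Cs, and the content at rebar depth rises monotonically with time, so the search terminates once the threshold is crossed.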
However, there are several critical issues related to the practical use of this model. For existing reinforced concrete structures in chloride-bearing environments, Cs and D can be identified by calculating the best-fit curve through measured chloride concentration profiles. From concrete samples retrieved in the field it is therefore possible to define the values of Cs and D for the evaluation of residual service life. For new structures, on the other hand, it is more complicated to define Cs and D. These parameters depend on the exposure conditions, on the properties of the concrete such as porosity (and therefore on the w/c ratio and curing process), and on the type of cement used. Furthermore, for the evaluation of the long-term behaviour of a structure, a critical issue is that Cs and D cannot be considered constant in time, and that the transport of chlorides can be treated as pure diffusion only for submerged structures. A further issue is the assessment of the critical chloride threshold Ccrit, which is affected by various factors such as the potential of the steel rebar and the pH of the solution in the concrete pores. Moreover, the initiation of pitting corrosion is a phenomenon of stochastic nature, so Ccrit too can be defined only on a statistical basis. Corrosion prevention Durability assessment was introduced into European design codes at the beginning of the 1990s. Designers are required to consider the effects of long-term corrosion of the steel rebar at the design stage, in order to avoid unacceptable damage during the service life of the structure. Different approaches are then available for durability design. Standard approach This is the standardized method of dealing with durability, also known as the deem-to-satisfy approach, provided by the current European standard EN 206. The designer is required to identify the environmental exposure conditions and the expected degradation process, assigning the correct exposure class.
Once this is defined, the design code gives standard prescriptions for the w/c ratio, the cement content, and the thickness of the concrete cover. This approach represented a step forward for the durability design of reinforced concrete structures, and it is suitable for ordinary structures designed with traditional materials (Portland cement, carbon steel rebar) and with an expected service life of 50 years. Nevertheless, it is not considered fully satisfactory in some cases. The simple prescriptions do not allow the design to be optimized for different parts of the structure with different local exposure conditions, nor do they allow the effect on service life of special measures, such as additional protections, to be taken into account. Performance-based approach Performance-based approaches provide a true design for durability, based on models describing the evolution of the degradation processes in time and on the definition of the times at which defined limit states will be reached. To account for the wide variety of factors influencing service life and for their variability, performance-based approaches address the problem from a probabilistic or semi-probabilistic point of view. The performance-based service life model proposed by the European project DuraCrete, and by the FIB Model Code for Service Life Design, is based on a probabilistic approach similar to the one adopted for structural design. Environmental factors are treated as loads S(t), while material properties such as chloride penetration resistance are treated as resistances R(t), as shown in Figure 2. For each degradation process, design equations are set up to evaluate the probability of failure of predefined performances of the structure, where the acceptable probability is selected on the basis of the limit state considered.
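A minimal sketch of the probabilistic idea: treat the carbonation coefficient K and the cover depth as random variables and estimate, by Monte Carlo sampling, the probability that corrosion initiates before the design life. The distributions and parameters below are illustrative assumptions for the example, not DuraCrete values.

```python
import math
import random

def failure_probability(n, design_life, seed=1):
    """Fraction of samples whose initiation time t_i = (c / K)**2
    falls below the design life (carbonation-induced corrosion)."""
    random.seed(seed)
    failures = 0
    for _ in range(n):
        K = random.lognormvariate(math.log(5.0), 0.3)  # mm/year^0.5, assumed
        cover = max(random.gauss(40.0, 5.0), 1.0)      # mm, assumed
        if (cover / K) ** 2 < design_life:
            failures += 1
    return failures / n

p_f = failure_probability(20_000, 50.0)
```

The designer would compare p_f against the acceptable probability for the limit state considered; requiring a longer design life, or a more aggressive environment, pushes p_f up.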
The degradation processes are still described with the models previously defined for carbonation-induced and chloride-induced corrosion, but, to reflect the statistical nature of the problem, the variables are treated as probability distributions over time. To assess some of the durability design parameters, the use of accelerated laboratory tests is suggested, such as the so-called Rapid Chloride Migration Test for evaluating the chloride penetration resistance of concrete. Through the application of corrective parameters, the long-term behaviour of the structure under real exposure conditions may be evaluated. The use of probabilistic service life models allows a true durability design to be implemented at the design stage of structures. This approach is of particular interest when an extended service life is required (>50 years) or when the environmental exposure conditions are particularly aggressive. However, the applicability of this kind of model is still limited. The main open issues concern, for instance, the identification of accelerated laboratory tests able to characterize concrete performance, reliable corrective factors for the evaluation of long-term durability performance, and the validation of these models against real long-term durability data. See also Concrete Concrete degradation Reinforced concrete References Cement Concrete Corrosion Concrete buildings and structures Reinforced concrete Structural engineering
Reinforced concrete structures durability
[ "Chemistry", "Materials_science", "Engineering" ]
2,381
[ "Structural engineering", "Metallurgy", "Corrosion", "Construction", "Electrochemistry", "Civil engineering", "Concrete", "Materials degradation" ]
61,186,948
https://en.wikipedia.org/wiki/Harmonic%20quadrilateral
In Euclidean geometry, a harmonic quadrilateral, or harmonic quadrangle, is a quadrilateral that can be inscribed in a circle (cyclic quadrilateral) in which the products of the lengths of opposite sides are equal. It has several important properties. Properties Let be a harmonic quadrilateral and the midpoint of diagonal . Then: Tangents to the circumscribed circle at points and and the straight line either intersect at one point or are parallel. Therefore, the pole of each diagonal lies on the other diagonal. Angles and are equal. The bisectors of the angles at and intersect on the diagonal . A diagonal of the quadrilateral is a symmedian of the angles at and in the triangles ∆ and ∆. The point of intersection of the diagonals lies at distances from the sides of the quadrilateral that are proportional to the lengths of those sides. The point of intersection of the diagonals minimizes the sum of squares of distances from a point inside the quadrilateral to the quadrilateral sides. Considering the points , , , as complex numbers, the cross-ratio . References Further reading Gallatly, W. "The Harmonic Quadrilateral." §124 in The Modern Geometry of the Triangle, 2nd ed. London: Hodgson, pp. 90 and 92, 1913. Types of quadrilaterals
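The defining side-product property and the cross-ratio statement can be checked numerically for a simple example. The vertex labels A, B, C, D (in cyclic order) and the cross-ratio convention used below are assumptions made for illustration; a square inscribed in the unit circle satisfies the harmonic condition.

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio (a, c; b, d) = ((a - b)*(c - d)) / ((a - d)*(c - b))
    for complex numbers (one common convention)."""
    return ((a - b) * (c - d)) / ((a - d) * (c - b))

# A square inscribed in the unit circle, vertices in cyclic order:
A, B, C, D = 1 + 0j, 1j, -1 + 0j, -1j

# Products of opposite side lengths are equal -> harmonic quadrilateral
assert abs(abs(A - B) * abs(C - D) - abs(B - C) * abs(D - A)) < 1e-12

# With this convention, the cross-ratio of the four vertices equals -1
assert abs(cross_ratio(A, B, C, D) + 1) < 1e-12
```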
Harmonic quadrilateral
[ "Mathematics" ]
287
[ "Geometry", "Geometry stubs" ]
61,186,989
https://en.wikipedia.org/wiki/Two-dimensional%20electronic%20spectroscopy
Two-dimensional electronic spectroscopy (2DES) is an ultrafast laser spectroscopy technique that allows the study of ultrafast phenomena in systems in the condensed phase. The term electronic refers to the fact that optical frequencies in the visible spectral range are used to excite electronic energy states of the system; the same approach is also used in the IR optical range (excitation of vibrational states), in which case the method is called two-dimensional infrared spectroscopy (2DIR). The technique records the signal emitted by a system after its interaction with a sequence of three laser pulses. Such pulses usually have a duration of a few hundred femtoseconds (1 fs = 10⁻¹⁵ s), and this high time resolution allows the capture of dynamics inside the system that evolve on the same time scale. The main result of this technique is a two-dimensional absorption spectrum that shows the correlation between excitation and detection frequencies. The first 2DES spectra were recorded in 1998. 2DES has been combined with photoelectrochemical recordings (PEC2DES) to study charge separation in the photosynthetic complex photosystem I, which is the physiological output signal, in contrast to fluorescence. This method provides experimental access to the action spectra of the complexes. Basic concepts about 2DES Pulse sequence The pulse sequence in this experiment is the same as in 2DIR: the delay between the first and the second pulse is called the coherence time, and the delay between the second and the third pulse is called the population time. The time after the third pulse corresponds to the detection time, which is usually Fourier transformed by a spectrometer. The interaction with the pulses creates a third-order nonlinear response function of the system, from which it is possible to extract two-dimensional spectra as a function of excitation and detection frequencies.
Although third-order two-dimensional spectroscopy is historically the first and most popular, higher-order two-dimensional spectroscopy approaches have also been developed. 2D Signal A possible way to recover an analytical expression for the response function is to treat the system as an ensemble and describe the light–matter interaction with the density-matrix approach. The result shows that the response function is proportional to the product of the three pulses' electric fields. Considering the wave vectors k1, k2 and k3 of the three pulses, the nonlinear signal is emitted in several directions given by linear combinations of the three wave vectors: ks = ±k1 ± k2 ± k3. For this technique, two signals propagating in different directions are usually considered. When ks = −k1 + k2 + k3 the signal is called rephasing, and when ks = +k1 − k2 + k3 it is called non-rephasing. These signals can be interpreted by considering the system as composed of many electric dipoles. When the first pulse interacts with the system, the dipoles start to oscillate in phase. The signal generated by each dipole rapidly dephases because of the different interaction each dipole experiences with the environment. In the rephasing case, the interaction with the third pulse generates a signal whose temporal evolution is opposite to that of the previous one, so the dephasing accumulated during the coherence time is compensated during the detection time. When the two intervals are equal, the oscillations are in phase again and the newly generated signal is called a photon echo. In the other case no photon echo is created, and the signal is called non-rephasing. From these signals it is possible to extract the purely absorptive and dispersive spectra that are usually shown in the literature. The real part of the sum of these two signals represents the absorption of the system, and the imaginary part contains the dispersion contribution. In the absorptive 2D spectra, the sign of a peak indicates different effects.
If the transmitted signal is plotted, a positive peak can be associated with bleaching of the ground state or with stimulated emission. If the sign is negative, the peak in the 2D spectrum is associated with a photoinduced absorption. Acquisition of the 2DES spectra The first and second pulses act as a pump and the third as a probe. The time-domain nonlinear response of the system is interfered with another pulse, called the local oscillator (LO), which allows the measurement of both amplitude and phase. This signal is usually acquired with a spectrometer, which separates the contributions of the spectral components (the detection frequencies). The acquisition proceeds by scanning the coherence-time delay for a fixed population-time delay. Once the scan ends, the detector has acquired a signal as a function of the coherence time for each detection frequency. Applying a Fourier transform along the coherence-time axis recovers the excitation spectrum for each detection frequency. The result of this procedure is a 2D map showing the correlation between excitation and detection frequency at a fixed population time. The time evolution of the system can be measured by repeating this procedure for different values of the population time. There are several ways to implement this technique, all based on different configurations of the pulses. Two examples of possible implementations are the "boxcar geometry" and the "partially collinear geometry". The boxcar geometry is a configuration in which all the pulses reach the system from different directions; this property allows the rephasing and non-rephasing signals to be acquired separately. The partially collinear geometry is another implementation in which the first and second pulses come from the same direction. In this case the rephasing and non-rephasing signals are emitted in the same direction, and it is possible to directly recover the absorptive and dispersive spectra of the system.
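A minimal numerical sketch of the acquisition step described above, for a single detection-frequency pixel: the detected signal oscillates in the coherence time at the excitation frequency, and a Fourier transform along that axis recovers it. All parameters are toy values in arbitrary units.

```python
import numpy as np

# Toy single-transition signal: oscillation at the excitation frequency
# during the coherence time, with exponential dephasing (assumed values).
n_tau = 256
dt = 1.0                               # coherence-time step (arb. units)
tau = np.arange(n_tau) * dt
w_ex = 0.25 * 2 * np.pi                # excitation frequency (rad / time unit)

signal = np.exp(1j * w_ex * tau) * np.exp(-tau / 80.0)

# Fourier transform along the coherence-time axis -> excitation axis;
# repeating this for every detection-frequency pixel builds the 2D map.
spectrum = np.abs(np.fft.fft(signal))
freqs = np.fft.fftfreq(n_tau, d=dt) * 2 * np.pi
peak_freq = freqs[np.argmax(spectrum)]   # recovers w_ex
```

Stacking one such transformed trace per detection frequency, and repeating for several population times, yields the sequence of 2D maps discussed in the text.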
Information acquired from 2DES 2D spectra contain a great deal of information about the system; in particular, the amplitude, position and lineshape of the peaks are related to different effects occurring inside the system. Position of the peaks Diagonal Peaks The peaks lying along the diagonal line of the 2D spectra are called diagonal peaks. These peaks appear when the system emits a signal oscillating at the same frequency as the excitation, and they reflect the information contained in the linear absorption spectrum. Cross Peaks The peaks lying off the diagonal line are called cross peaks. These appear when the system emits a signal oscillating at a frequency different from the one used to excite it. The appearance of a cross peak means that two electronic states of the system are coupled, because when the pulses pump one electronic state, the system responds with emission from a different energy level. This coupling can be related to an energy-transfer or charge-transfer process between molecules. Lineshape of the peaks Short population times Thanks to its high spectral resolution, this technique also acquires information from the two-dimensional shape of the peaks. When the population time is close to zero, the diagonal peaks show an elliptical lineshape, as shown in the figure on the right. The width along the diagonal line represents the inhomogeneous broadening, which contains information about interactions between the environment and the system. If the system is composed of a large number of identical molecules, each interacts with the environment in a different way; this implies that the same electronic state of each molecule undergoes a different small variation. The value of this linewidth is close to the one obtained from the linear absorption spectrum. On the other hand, the linewidth along the off-diagonal direction is smaller than the diagonal one.
In this case the spectral broadening contains the contribution of local interactions inside the system; for this reason, this width reflects the homogeneous broadening. Long population times At long population times, the shape of the peaks becomes circular and the widths along the diagonal and off-diagonal lines are similar. This happens because each molecule of the system has by then experienced different local environments, and the entire system loses memory of its initial condition. This effect is called spectral diffusion. Temporal lineshape evolution The temporal evolution of the lineshape can be evaluated with several methods. One method evaluates the linewidths along the diagonal and off-diagonal lines separately. From the two widths it is possible to calculate the flattening as F = (Ld − La)/Ld, where Ld is the linewidth along the diagonal line and La is the linewidth along the off-diagonal line. The flattening curve as a function of population time assumes a value close to 1 near zero population time and then decreases towards zero at long population times. Another method is called Central Line Slope (CLS). In this case the positions of the maximum values of the 2D spectrum for each detection frequency are considered. These points are then interpolated with a linear function, from which it is possible to extract the slope with respect to the detection axis (x axis). From a theoretical point of view, this value should be 45° when the population time is close to zero, because the peak is elongated along the diagonal line; when the peak assumes a circular lineshape, the slope goes to zero. The same approach can also be used by considering the positions of the maxima for each excitation frequency (y axis); in this case the slope is 45° at short population times and 90° when the shape becomes circular. See also Two-dimensional correlation analysis Two-dimensional infrared spectroscopy References Absorption spectroscopy Ultrafast spectroscopy
Two-dimensional electronic spectroscopy
[ "Physics", "Chemistry" ]
1,882
[ "Spectroscopy", "Spectrum (physical sciences)", "Absorption spectroscopy" ]
61,187,067
https://en.wikipedia.org/wiki/Craterellus%20cinereus
Craterellus cinereus, commonly known as the black chanterelle or ashen chanterelle, is a species of fungus found growing in woodland in Europe. Description Craterellus cinereus is a greyish-black chanterelle mushroom with thin, dark grey flesh that fades when dry. Cap: 2–4 cm. Irregular funnel shape/infundibuliform. Irregularly wavy at the edges with an inrolled margin. Stem: 2–4 cm. Smooth to lightly velvety in texture, sometimes with a white woolly base. Veins/Ridges: Dark grey irregular forks which are distant and decurrent. Spore print: White. Spores: Broadly elliptical, smooth, non-amyloid. 7.5–10 x 5–6 μm. Taste: Mild. Smell: Indistinct. Habitat and distribution As a mycorrhizal species it grows on soil with leaf litter in broad-leaved woods, usually in small groups, and may be trooping. It is only rarely found with conifers. It has a widespread distribution but is an uncommon find, with mushrooms appearing during autumn. Edibility C. cinereus is an edible mushroom with a mild taste. It can be used similarly to black trumpets (Craterellus cornucopioides) but has a milder taste. Possible lookalikes include Craterellus cornucopioides, Pseudocraterellus undulatus and Faerberia carbonaria, all of which are edible. References External links Cantharellaceae Fungus species
Craterellus cinereus
[ "Biology" ]
320
[ "Fungi", "Fungus species" ]
61,187,256
https://en.wikipedia.org/wiki/Charge%20modulation%20spectroscopy
Charge modulation spectroscopy is an electro-optical spectroscopy technique. It is used to study the charge-carrier behavior of organic field-effect transistors. It measures the charge-induced variation of the optical transmission by directly probing the accumulated charge at the buried interface between the semiconductor and the dielectric layer, where the conduction channel forms. Principles Unlike ultraviolet–visible spectroscopy, which measures absorbance, charge modulation spectroscopy measures the charge-induced variation of the optical transmission; in other words, it reveals the new features in the optical transmission introduced by charges. The setup has four main components: a lamp, a monochromator, a photodetector and a lock-in amplifier. The lamp and monochromator are used to generate and select the wavelength. The selected wavelength passes through the transistor, and the transmitted light is recorded by the photodiode. When the signal-to-noise ratio is very low, the signal can be modulated and recovered with a lock-in amplifier. In the experiment, a direct-current plus an alternating-current bias is applied to the organic field-effect transistor. Charge carriers accumulate at the interface between the dielectric and the semiconductor (within a few nanometers). With the appearance of the accumulated charge, the intensity of the transmitted light changes. The variation of the light intensity is then collected through the photodetector and lock-in amplifier, with the charge-modulation frequency given to the lock-in amplifier as the reference. Charge modulation at the organic field-effect transistor There are four typical organic field-effect transistor architectures: top-gate, bottom-contact; bottom-gate, top-contact; bottom-gate, bottom-contact; top-gate, top-contact.
To create the charge accumulation layer, a DC voltage is applied to the gate of the organic field-effect transistor (negative for a p-type transistor, positive for an n-type transistor). To modulate the charge, an AC voltage is applied between the gate and the source. It is important to note that only mobile charges can follow the modulation, and that the reference given to the lock-in amplifier must be synchronous with the modulation frequency. Charge modulation spectra The charge modulation spectroscopy signal can be defined as the differential transmission divided by the total transmission, ΔT/T. By modulating the mobile carriers, both increased-transmission and decreased-transmission features can be observed: the former relates to bleaching, the latter to charge absorption and electrically induced absorption (electro-absorption). The charge modulation spectrum is an overlap of charge-induced and electro-absorption features. In transistors, the electro-absorption is more significant where the voltage drop is high. There are several ways to identify the electro-absorption contribution, such as measuring the second harmonic or probing in the depletion region. Bleaching and charge absorption When the accumulated charge carriers deplete the ground state of the neutral polymer, more light is transmitted at the ground-state transition energies; this is called bleaching. With excess holes or electrons on the polymer, new transitions appear at low energies, so the transmission intensity is reduced; this is related to charge absorption. Electro-absorption The electro-absorption is a type of Stark effect in the neutral polymer; it is predominant at the electrode edge, where there is a strong voltage drop. Electro-absorption can be observed in the second-harmonic charge modulation spectra.
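A minimal sketch of the lock-in recovery of ΔT/T described above, with an assumed modulation frequency, modulation depth and noise level (all illustrative, not from the article): the photodetector signal is multiplied by the synchronous reference and low-pass filtered (here, simply averaged) to pull the small modulated component out of the noise.

```python
import numpy as np

fs = 100_000.0                        # sampling rate (Hz), illustrative
f_mod = 987.0                         # gate modulation frequency (Hz)
t = np.arange(0.0, 1.0, 1.0 / fs)
ref = np.cos(2 * np.pi * f_mod * t)   # lock-in reference, synchronous

T0 = 1.0                              # static transmission (arb. units)
dT = 1e-4                             # charge-induced modulation depth
noise = np.random.default_rng(0).normal(0.0, 1e-3, t.size)

# Photodetector signal: static transmission + modulated component + noise
pd = T0 + dT * ref + noise

# Lock-in detection: multiply by the reference and average (low-pass);
# the factor 2 compensates the mean of cos^2 being 1/2.
dT_over_T = 2.0 * np.mean(pd * ref) / T0   # recovers dT / T0
```

Note that the noise here is ten times larger than the modulated component, yet the demodulated ΔT/T comes out close to the true 1e-4, which is exactly why a lock-in amplifier is used at low signal-to-noise ratio.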
Charge modulation microscopy Charge modulation microscopy is a newer technique that combines confocal microscopy with charge modulation spectroscopy. Unlike charge modulation spectroscopy, which probes the whole transistor, charge modulation microscopy gives local spectra and maps. Thanks to this technique, the channel spectra and electrode spectra can be obtained individually, and a more local measurement of the charge modulation spectra (on the sub-micrometre scale) can be obtained without a significant electro-absorption feature. Of course, this depends on the resolution of the optical microscope. The high resolution of charge modulation microscopy allows mapping of the charge-carrier distribution in the active channel of the organic field-effect transistor; in other words, a functional carrier morphology can be observed. It is well known that the local carrier density can be related to the polymer microstructure. Based on density functional theory calculations, polarized charge modulation microscopy can selectively map the charge transport associated with a given direction of the transition dipole moment. The local direction can be correlated with the orientational order of the polymer domains; more ordered domains show a higher carrier mobility in the organic field-effect transistor device. See also Confocal microscopy Organic field-effect transistor Stark effect Ultraviolet–visible spectroscopy References Further reading Spectroscopy Scientific techniques Microscopy
Charge modulation spectroscopy
[ "Physics", "Chemistry" ]
955
[ "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Microscopy", "Spectroscopy" ]
61,188,336
https://en.wikipedia.org/wiki/Marcel%20Golay%20%28astronomer%29
Marcel Golay (6 September 1927 – 9 April 2015) was a Swiss astronomer, professor at Geneva University and the eighth director of the Geneva Observatory from 1956 to 1992. Golay was a member of the International Astronomical Union and president of several of its commissions, including "Stellar Classification" and "Astronomical Photometry and Polarimetry". In 1991, the University of Basel awarded him an honorary professorship. Asteroid 3329 Golay is named after him. Honors and awards Asteroid 3329 Golay – an 18-kilometer-sized member of the Eos family, discovered by Paul Wild at Zimmerwald Observatory in 1985 – was named in his honor. The official naming citation was published by the Minor Planet Center on 1 September 1993. References External links Golay, Marcel, in the Historical Dictionary of Switzerland Marcel Golay (1927–2015), Geneva University, Geneva Observatory 1927 births 2015 deaths Scientists from Geneva 20th-century Swiss astronomers
Marcel Golay (astronomer)
[ "Astronomy" ]
191
[ "Astronomers", "Astronomer stubs", "Astronomy stubs" ]
61,189,517
https://en.wikipedia.org/wiki/C30H44O3
{{DISPLAYTITLE:C30H44O3}} The molecular formula C30H44O3 (molar mass: 452.67 g/mol) may refer to: Boldenone undecylenate, or boldenone undecenoate Ethandrostate, also known as ethinylandrostenediol 3β-cyclohexanepropionate Molecular formulas
C30H44O3
[ "Physics", "Chemistry" ]
88
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
61,190,120
https://en.wikipedia.org/wiki/Mitchell%20v.%20Wisconsin
Mitchell v. Wisconsin, 588 U.S. ___ (2019), is a United States Supreme Court case in which the Court held that "when a driver is unconscious and cannot be given a breath test, the exigent-circumstances doctrine generally permits a blood test without a warrant." Background In May 2013, Gerald Mitchell crashed his car near a lake in Sheboygan, Wisconsin. When police arrived, they used a breathalyzer to test his blood alcohol content. Mitchell registered a 0.24% BAC and was subsequently arrested for OWI. As police were driving him to the police station, he fell unconscious, so the officers changed plans and drove him to a local hospital to have his blood drawn intravenously. This test registered his BAC at 0.22%, and prosecutors formally charged Mitchell with violating several Wisconsin drunk driving laws. Lower court proceedings At the trial court, Mitchell made a motion to suppress the results of the hospital blood draw on the grounds that it was a warrantless search and thus unconstitutional under the Fourth Amendment. The prosecutor argued that Wisconsin's state laws constitute implied consent to blood draws once someone begins driving a vehicle. Sheboygan County Judge Terence Bourke sided with the prosecutor, denying Mitchell's motion to suppress. A jury then convicted Mitchell of all charges. Mitchell appealed his conviction to the state appellate court on the basis that the evidence gained from his blood draw should have been suppressed. The appellate court declined to hear the case, and instead certified two questions to the Wisconsin Supreme Court – whether the "implied consent" rule was constitutional, and whether a warrantless blood draw from an unconscious person was a violation of the Fourth Amendment. In a 5–2 decision written by Chief Justice Roggensack, the Wisconsin Supreme Court upheld Mitchell's conviction, answering that the "implied consent" rule was constitutional, and thus the blood draw was permissible under the Fourth Amendment. 
Justice Kelly wrote a concurring opinion that was joined by Justice Rebecca Bradley. In it, he argued that the "implied consent" rule is unconstitutional, but that the exigent circumstances doctrine, along with United States Supreme Court precedent, allows for a warrantless blood draw from an unconscious driver who is suspected of being intoxicated. Justice Ann Walsh Bradley wrote a dissent joined by Justice Abrahamson, which argued that "implied consent" is not the same as actual consent, and that a blood draw is such an invasive type of search that exigent circumstances do not apply. Thus, nothing the officers did was constitutional, and the blood draw should have been thrown out as evidence. Supreme Court Mitchell applied for certiorari before the United States Supreme Court, which accepted the case to decide "[w]hether a statute authorizing a blood draw from an unconscious motorist provides an exception to the Fourth Amendment warrant requirement." Oral argument was held on April 23, 2019. On June 27, 2019, the Court announced its decision. In a plurality opinion written by Justice Samuel Alito and joined by Chief Justice Roberts and Justices Breyer and Kavanaugh, the United States Supreme Court reversed the judgment of the Wisconsin Supreme Court. Justice Thomas wrote an opinion concurring in the judgment. In opposition, Justice Sotomayor wrote a dissenting opinion that was joined by Justices Ginsburg and Kagan. Justice Gorsuch wrote a lone one-paragraph dissenting opinion, arguing that the Court did not properly decide the question presented. He said that he would have dismissed the case as improvidently granted. See also Birchfield v. North Dakota References External links United States Supreme Court cases United States Supreme Court cases of the Roberts Court 2019 in United States case law United States Fourth Amendment case law Legal history of Wisconsin Sheboygan, Wisconsin Blood tests Breathalyzer Driving under the influence
Mitchell v. Wisconsin
[ "Chemistry" ]
775
[ "Blood tests", "Chemical pathology" ]
61,190,210
https://en.wikipedia.org/wiki/C16H24O4
{{DISPLAYTITLE:C16H24O4}} The molecular formula C16H24O4 (molar mass: 280.36 g/mol, exact mass: 280.1675 u) may refer to: Brefeldin A Fumarranol Molecular formulas
C16H24O4
[ "Physics", "Chemistry" ]
61
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
61,190,423
https://en.wikipedia.org/wiki/C14H16BrNO2
{{DISPLAYTITLE:C14H16BrNO2}} The molecular formula C14H16BrNO2 may refer to: Brofaromine DOB-2-DRAGONFLY-5-BUTTERFLY
C14H16BrNO2
[ "Chemistry" ]
46
[ "Isomerism", "Set index articles on molecular formulas" ]