75,080,123
https://en.wikipedia.org/wiki/Non-exercise%20activity%20thermogenesis
Non-exercise activity thermogenesis (NEAT), also known as non-exercise physical activity (NEPA), is energy expenditure during activities that are not part of a structured exercise program. NEAT includes physical activity at the workplace, hobbies, standing instead of sitting, walking around, climbing stairs, doing chores, and fidgeting. Besides differences in body composition, it represents most of the variation in energy expenditure across individuals and populations, accounting for anywhere from 6-10 percent of energy expenditure to as much as 50 percent in highly active individuals. Relationship with obesity NEAT is the main component of activity-related energy expenditure in obese individuals, as most do not do any physical exercise. NEAT is also lower in obese individuals than in the general population. NEAT may be reduced in individuals who have lost weight, which some hypothesize contributes to difficulties in achieving and sustaining weight loss. In Western countries, occupations have shifted from physical labor to sedentary work, which results in a loss of energy expenditure. Strenuous physical labor can expend 1,500 or more calories per day beyond what desk work requires. Relationship with exercise It is debated whether there is a significant reduction in NEAT after beginning a structured exercise program. Health benefits Lack of NEAT is posited as an explanation for the health harms of prolonged sitting. Measurement Accelerometers and questionnaires can be used to estimate NEAT. References Human physiology Metabolism
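Because NEAT is defined by what it excludes, it is usually estimated indirectly from the other components of total energy expenditure. A minimal sketch of this arithmetic, with illustrative numbers that are not measurements from the article:

```python
# Illustrative decomposition of total daily energy expenditure (TEE).
# All numbers are made-up examples, not data from the article.
def estimate_neat(tee_kcal, bmr_kcal, exercise_kcal, tef_fraction=0.10):
    """NEAT = TEE - basal metabolic rate - thermic effect of food - exercise."""
    tef_kcal = tef_fraction * tee_kcal  # TEF is commonly taken as ~10% of TEE
    return tee_kcal - bmr_kcal - tef_kcal - exercise_kcal

tee = 2600.0       # total expenditure, e.g. measured via doubly labeled water
bmr = 1600.0       # basal metabolic rate
exercise = 150.0   # structured exercise
neat = estimate_neat(tee, bmr, exercise)
print(f"NEAT ~ {neat:.0f} kcal/day ({100 * neat / tee:.0f}% of TEE)")
# -> NEAT ~ 590 kcal/day (23% of TEE)
```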
Non-exercise activity thermogenesis
[ "Chemistry", "Biology" ]
285
[ "Biochemistry", "Metabolism", "Cellular processes" ]
75,082,412
https://en.wikipedia.org/wiki/Eta2%20Fornacis
Eta2 Fornacis (η2 Fornacis) is an orange giant in the constellation of Fornax. The star has a spectral type of K0III and an apparent magnitude of 6.02. The star is visually close to, but unrelated to, the similar stars η1 Fornacis and η3 Fornacis. It is located approximately 425 light-years away, and forms a visual binary system with a 10th magnitude companion. References Fornax Fornacis, Eta2 K-type giants 017829 0851 013265 Binary stars
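The quoted apparent magnitude and distance determine the star's absolute magnitude through the standard distance modulus. A quick check (values from the article, formula standard):

```python
import math

m = 6.02              # apparent magnitude (from the article)
d_ly = 425            # distance in light-years (from the article)
d_pc = d_ly / 3.2616  # light-years per parsec

# Distance modulus: M = m - 5*log10(d / 10 pc)
M = m - 5 * math.log10(d_pc / 10)
print(f"Absolute magnitude ~ {M:+.2f}")  # ~ +0.45, plausible for a K0III giant
```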
Eta2 Fornacis
[ "Astronomy" ]
131
[ "Fornax", "Constellations" ]
66,365,028
https://en.wikipedia.org/wiki/Kit%20Miyamoto
Dr. Hideki "Kit" Miyamoto (born 1963) is a Japanese-American structural engineer known for being the founder-CEO of Miyamoto International, a global structural engineering and disaster risk reduction organization. He is also the chairman of California's Alfred E. Alquist Seismic Safety Commission, which investigates earthquakes and recommends policies for risk reduction. Early life and education Miyamoto was born and raised in Tokyo and studied earthquake engineering at the Tokyo Institute of Technology and California State University. He lives in Los Angeles. Career Miyamoto started his career in structural engineering and later focused on disaster resiliency, response, and reconstruction. He provides policy consultation to the World Bank, USAID, UN agencies, governments and private sector. He has led teams of professionals on response and reconstruction projects after the 2008 Sichuan earthquake, 2010 Haiti earthquake, 2011 Japan earthquake, 2015 Nepal earthquake, 2020 Puerto Rico earthquakes and other seismic risk reduction programs along with disaster risk mitigation policy work. Miyamoto was elected as a chair of the California Seismic Safety Commission in October 2020. He has formerly served as a seismic safety commissioner for eight years where he has advocated for increased resiliency in California. Innovations Dr. Miyamoto was responsible for the seismic retrofit of the Theme Building, an iconic Space Age structure at Los Angeles International Airport (LAX). The innovative retrofit consisted of adding a tuned mass damper (TMD) to the top of the building's core. The TMD option was selected because it was less expensive, protected the building's architectural features, and minimized building closure. This was the first time this retrofit had been achieved in the United States. Awards and recognitions Earthquake Response Dr. Kit Miyamoto plays a key role in earthquake damage assessment, building safety, capacity building, and reconstruction strategies to improve seismic resilience around the world. His work focuses on failure mechanisms and improved construction practices to reduce future earthquake risks. Publications Haiti earthquake 2021: Findings from the repair and damage assessment of 179,800 buildings, International Journal of Disaster Risk Reduction (2024) Seismic Risk Assessment and Retrofit of School Buildings In Developing Countries, 11th U.S. National Conference on Earthquake Engineering, Los Angeles, California (2018) Seismic Collapse Probability of Structures with Viscous Dampers per ASCE 7–16: Effect of Large Earthquake, 11th U.S. National Conference on Earthquake Engineering, Los Angeles, California (2018) Damage Assessment and Seismic Retrofit of Heritage and Modern Buildings in the Aftermath of 2015 Nepal Earthquake, 11th U.S. National Conference on Earthquake Engineering, Los Angeles, California (2018) Design of Structures with Dampers per ASCE 7–16 and Performance for Large Earthquakes, Structures Congress, Houston, Texas (2018) Cost-Effective Seismic Isolation Retrofit of Heritage Cathedrals in Haiti, 16th World Conference on Earthquake, Santiago, Chile (2017) Transparent Global Earthquake Risk And Loss Estimation, Tokyo, Japan (2013) Media Major media such as CNN, LA Times, NY Times and Rolling Stone have mentioned, represented, or interviewed him. He was also featured in the “Designing for Disaster” exhibit at the National Building Museum. References Structural engineers Earthquake engineering 1963 births Living people
Kit Miyamoto
[ "Engineering" ]
659
[ "Earthquake engineering", "Civil engineering", "Structural engineering", "Structural engineers" ]
66,367,117
https://en.wikipedia.org/wiki/Materials%20Project
The Materials Project is an open-access database offering material properties to accelerate the development of technology by predicting how new materials, both real and hypothetical, can be used. The project was established in 2011 with an emphasis on battery research, but includes property calculations for many areas of clean energy systems such as photovoltaics, thermoelectric materials, and catalysts. The database covers some 35,000 molecules and over 130,000 inorganic compounds, most of those known. Dr. Kristin Persson of Lawrence Berkeley National Laboratory founded and leads the initiative, which uses supercomputers at Berkeley, among other institutions, to run calculations using density functional theory (DFT). Commonly computed values include enthalpy of formation, crystal structure, and band gap. The assembled databases of computed structures and properties are freely available to anyone under a CC 4.0 license and were developed with ease of use in mind. The data have been used to predict new materials that should be synthesizable, and to screen existing materials for useful properties. The project can be traced back to Persson's postdoctoral research at MIT in 2004, during which she was given access to a supercomputer to do DFT calculations. After joining Berkeley Lab in 2008, Persson received the necessary funding to make the data from her research freely available. References Materials science Internet properties established in 2011 Scientific databases
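The database can be queried programmatically. A hedged sketch using the pymatgen client ("YOUR_API_KEY" is a placeholder, and the exact client interface has varied across versions):

```python
# Sketch of querying the Materials Project; requires `pip install pymatgen`
# and a free API key. Interface details differ between client versions.
from pymatgen.ext.matproj import MPRester

with MPRester("YOUR_API_KEY") as mpr:
    # mp-149 is the Materials Project entry for diamond-cubic silicon.
    structure = mpr.get_structure_by_material_id("mp-149")
    print(structure.composition.reduced_formula)  # "Si"
    print(structure.lattice.a)                    # lattice parameter, angstroms
```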
Materials Project
[ "Physics", "Materials_science", "Engineering" ]
282
[ "Materials science stubs", "Applied and interdisciplinary physics", "Materials science", "nan" ]
66,367,931
https://en.wikipedia.org/wiki/Spatial%20embedding
Spatial embedding is a feature learning technique used in spatial analysis, in which points, lines, polygons or other spatial data types representing geographic locations are mapped to vectors of real numbers. Conceptually it involves a mathematical embedding from a space with many dimensions per geographic object to a continuous vector space with a much lower dimension. Such embedding methods allow complex spatial data to be used in neural networks and have been shown to improve performance in spatial analysis tasks. Embedded data types Geographic data can take many forms: text, images, graphs, trajectories, polygons. Depending on the task, there may be a need to combine multimodal data from different sources. The next section describes examples of different types of data and their uses. Text Geolocated posts on social media can be used to acquire a library of documents bound to a given place that can later be transformed into embedded vectors using word embedding techniques. Image Satellites and aircraft collect digital spatial data acquired from remotely sensed images, which can be used in machine learning. They are sometimes hard to analyse using basic image analysis methods, and convolutional neural networks can be used to acquire an embedding of images bound to a given geographical object or region. Point A single point of interest (POI) can be assigned multiple features that can be used in machine learning. These could be demographic, transportation, meteorological, or economic data, for example. When embedding single points, it is common to consider the entire set of available points as nodes in a graph. Line / multiline Among other things, motion trajectories are represented as lines (multilines). Individual trajectories are embedded taking into account travel time, distances, and features of points visited along the way. Embedding trajectories improves the performance of tasks such as clustering and categorization. Polygon The geographic areas analyzed in machine learning are defined both by administrative boundaries and by top-down division into grids of regular shapes such as rectangles. Both types are represented as polygons and, like points, can be assigned different demographic, transportation, or economic features. A polygon can also have features related to the size of the area or the shape it represents. Graph An example domain where graph representation is used is the street layout of a city, where vertices can be intersections and edges can be roads. The vertices can also be destination points like public transport stops or important points in the city, with the edges representing the flow between them. Embedding graphs or single vertices improves the accuracy of analysis methods in which the treated geographical domain can be represented as a network. Usage POI recommendation - generating personalized point of interest recommendations based on user preferences. Next/future location prediction - prediction of the next location a person will go to based on their historical trajectory. Zone function classification - based on the differing mobility of people or the POI distribution, the function of a given area in a city can be predicted. Crime prediction - estimation of crime rates in different regions of a city. Local event detection - studying spatio-temporal changes in embeddings can provide valuable information for detecting local events occurring in specific locations. 
Regional mobility popularity prediction - analysis of mobility can show patterns in the popularity of different regions in a city. Shape matching - finding a shape similar to a given polygon, for example finding a building with the same shape as an input building. Travel time estimation - predicting travel time given current traffic conditions and special events. Time estimation for on-demand food delivery - estimating delivery time when placing an order through a website. Temporal aspect Some of the data analyzed has a timestamp associated with it. In some cases of data analysis this information is omitted, and in others it is used to divide the set into groups. The most common division is the separation of weekdays from weekends or division into hours of the day. This is particularly important in the analysis of mobility data, because the characteristics of mobility during the week and at different times of the day differ considerably. Another area in which time division, for example into individual months, can be used is the analysis of tourism in a given region. In order to take such a split into account, embedding methods either treat the timestamp specially or separate versions of the model are developed for different subgroups of the analyzed set. References Machine learning Data mining Spatial analysis
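Several of the point and trajectory methods above reduce to learning embeddings from sequences of visited locations. A minimal sketch treating trajectories as "sentences" of POI identifiers, using gensim's Word2Vec; the trajectories here are synthetic placeholders:

```python
# Trajectories as sequences of POI ids; co-visited POIs get nearby vectors.
# Synthetic data for illustration only; requires `pip install gensim`.
from gensim.models import Word2Vec

trajectories = [
    ["home", "cafe", "office", "gym", "home"],
    ["home", "cafe", "office", "restaurant", "home"],
    ["hotel", "museum", "restaurant", "hotel"],
    ["hotel", "museum", "park", "hotel"],
]

model = Word2Vec(
    sentences=trajectories,
    vector_size=16,  # dimension of the embedding space
    window=2,        # context: POIs visited shortly before/after
    min_count=1,
    sg=1,            # skip-gram, a usual choice for small corpora
    epochs=200,
    seed=42,
)

print(model.wv["cafe"].shape)           # (16,) -- the embedding vector
print(model.wv.most_similar("museum"))  # POIs visited in similar contexts
```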
Spatial embedding
[ "Physics", "Engineering" ]
900
[ "Machine learning", "Spatial analysis", "Space", "Artificial intelligence engineering", "Spacetime" ]
66,373,584
https://en.wikipedia.org/wiki/Zerologon
Zerologon (formally CVE-2020-1472) is a privilege elevation vulnerability in Microsoft's authentication protocol Netlogon Remote Protocol (MS-NRPC), as implemented in the Windows Client Authentication Architecture and Samba. The vulnerability was first reported to Microsoft by security researcher Tom Tervoort from Secura on 17 August 2020 and dubbed "Zerologon". Zerologon was given a Common Vulnerability Scoring System v3.1 severity ranking of 10 by the U.S. National Institute of Standards and Technology and a 5.5 by Microsoft. CrowdStrike classifies it as the most severe Active Directory vulnerability of 2020. The vulnerability allows an unauthenticated user with access to the network to establish an unsafe connection to a Domain Controller (DC) and then impersonate the DC to elevate to domain admin privileges. It allows attackers to access all valid usernames and passwords in each Microsoft network that they breach. This in turn allows them to access additional credentials necessary to assume the privileges of any legitimate user of the network, which in turn can let them compromise Microsoft 365 email accounts. Background The Netlogon Remote Protocol (MS-NRPC) is a Microsoft protocol used for authentication and secure communication between clients and DCs in a Windows network environment. It facilitates the exchange of authentication data and the establishment of secure channels for communication, enabling clients to authenticate against Active Directory and other network services. The protocol plays a key role in domain join operations, password changes, and other security-related tasks within a Windows domain. Behavior The original report by Secura explains the exploit in five steps. Bypassing the authentication The attack focuses on the DC of a network. MS-NRPC relies on a challenge–response authentication to generate a session key from the shared secret (such as a passphrase). To authenticate a client, the MS-NRPC client credentials are computed from the session key, an initialization vector (IV), and the client challenge using a less common Advanced Encryption Standard (AES) block cipher mode, namely 8-bit Cipher Feedback mode (AES-CFB8). This is where the vulnerability lies. Because the protocol uses an all-zero IV, for 1 out of 256 randomly generated session keys the first byte of the AES encryption of the all-zero block is zero, in which case AES-CFB8 encrypts an all-zero client challenge to an all-zero client credential, byte after byte. An attacker therefore submits an all-zero client challenge and all-zero client credentials; the server-computed client credentials are compared to those the client sent, and in 1 out of 256 attempts they match. The client is now authenticated. Disabling signing and encryption To circumvent signing and encryption with the session key (which the attacker does not know) that is performed by MS-NRPC, an attacker can disable them by not setting a flag in the authentication RPC call. Spoofing RPC calls Another obstacle the attacker must overcome is the so-called authenticator value used by Netlogon, which is required for some calls. This value is computed from an incrementing value held by the client, the client credentials, and a timestamp. If the incrementing value is set to all-zero by the client and the timestamp is also set to all-zero when an RPC call is invoked, the server will set the authenticator to all-zero as well, allowing the attacker to carry out the call. 
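The 1-in-256 property behind the authentication bypass can be checked directly: with an all-zero IV, an all-zero plaintext encrypts to all zeros exactly when the first byte of the AES encryption of the zero block is zero. A small experiment using pycryptodome:

```python
# Empirically confirm the AES-CFB8 property exploited by Zerologon:
# with an all-zero IV, roughly 1/256 of random keys encrypt an all-zero
# 8-byte "challenge" to an all-zero "credential".
# Requires `pip install pycryptodome`.
import os
from Crypto.Cipher import AES

TRIALS = 100_000
hits = 0
for _ in range(TRIALS):
    key = os.urandom(16)  # stand-in for a randomly derived session key
    cipher = AES.new(key, AES.MODE_CFB, iv=bytes(16), segment_size=8)
    if cipher.encrypt(bytes(8)) == bytes(8):
        hits += 1

print(f"all-zero output for {hits}/{TRIALS} keys "
      f"(expected ~{TRIALS / 256:.0f})")
```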
Setting the password In the penultimate step, the password is set to an empty one, allowing the attacker to follow the normal protocol procedure from this point on. Elevating to domain admin It is possible for the attacker to impersonate not just any user on the domain, but the domain controller itself. Once logged in, the attacker can retrieve hashed credentials from the DC, enabling a pass-the-hash attack and ultimately elevating to domain administrator. Mitigation Microsoft addressed the Zerologon vulnerability through two security updates: a less strict one in August 2020, and a later one in February 2021 that enforces signing and encryption for MS-NRPC calls by default, with the ability to allow certain devices to handle legacy support. Response and impact In 2020, Zerologon started to be used by sophisticated cyberespionage campaigns of threat groups such as Red Apollo in global attacks against the automotive, engineering and pharmaceutical industries. Zerologon was also used to hack the municipal wireless network of Austin, Texas. Unusually, Zerologon was the subject of an emergency directive from the United States Cybersecurity and Infrastructure Security Agency. See also 2020 United States federal government data breach References 2020 in computing Computer security exploits
Zerologon
[ "Technology" ]
996
[ "Computer security exploits" ]
66,374,616
https://en.wikipedia.org/wiki/Convergence%20space
In mathematics, a convergence space, also called a generalized convergence, is a set together with a relation called a that satisfies certain properties relating elements of X with the family of filters on X. Convergence spaces generalize the notions of convergence that are found in point-set topology, including metric convergence and uniform convergence. Every topological space gives rise to a canonical convergence but there are convergences, known as , that do not arise from any topological space. An example of convergence that is in general non-topological is almost everywhere convergence. Many topological properties have generalizations to convergence spaces. Besides its ability to describe notions of convergence that topologies are unable to, the category of convergence spaces has an important categorical property that the category of topological spaces lacks. The category of topological spaces is not an exponential category (or equivalently, it is not Cartesian closed) although it is contained in the exponential category of pseudotopological spaces, which is itself a subcategory of the (also exponential) category of convergence spaces. Definition and notation Preliminaries and notation Denote the power set of a set by The or in of a family of subsets is defined as and similarly the of is If (respectively ) then is said to be (respectively ) in For any families and declare that if and only if for every there exists some such that or equivalently, if then if and only if The relation defines a preorder on If which by definition means then is said to be and also and is said to be The relation is called . Two families and are called ( ) if and A is a non-empty subset that is upward closed in closed under finite intersections, and does not have the empty set as an element (i.e. ). A is any family of sets that is equivalent (with respect to subordination) to filter or equivalently, it is any family of sets whose upward closure is a filter. A family is a prefilter, also called a , if and only if and for any there exists some such that A is any non-empty family of sets with the finite intersection property; equivalently, it is any non-empty family that is contained as a subset of some filter (or prefilter), in which case the smallest (with respect to or ) filter containing is called () . The set of all filters (respectively prefilters, filter subbases, ultrafilters) on will be denoted by (respectively ). 
The or filter on at a point is the filter Definition of (pre)convergence spaces For any if then define and if then define so if then if and only if The set is called the of and is denoted by A on a non-empty set is a binary relation with the following property: : if then implies In words, any limit point of is necessarily a limit point of any finer/subordinate family and if in addition it also has the following property: : if then In words, for every the principal/discrete ultrafilter at converges to then the preconvergence is called a on A or a (respectively a ) is a pair consisting of a set together with a convergence (respectively preconvergence) on A preconvergence can be canonically extended to a relation on also denoted by by defining for all This extended preconvergence will be isotone on meaning that if then implies Examples Convergence induced by a topological space Let be a topological space with If then is said to to a point in written in if where denotes the neighborhood filter of in The set of all such that in is denoted by or simply and elements of this set are called of in The () or is the convergence on denoted by defined for all and all by: if and only if in Equivalently, it is defined by for all A (pre)convergence that is induced by some topology on is called a ; otherwise, it is called a . Power Let and be topological spaces and let denote the set of continuous maps The is the coarsest topology on that makes the natural coupling into a continuous map The problem of finding the power has no solution unless is locally compact. However, if searching for a convergence instead of a topology, then there always exists a convergence that solves this problem (even without local compactness). In other words, the category of topological spaces is not an exponential category (i.e. or equivalently, it is not Cartesian closed) although it is contained in the exponential category of pseudotopologies, which is itself a subcategory of the (also exponential) category of convergences. Other named examples Standard convergence on The is the convergence on defined for all and all by: if and only if Discrete convergence The on a non-empty set is defined for all and all by: if and only if A preconvergence on is a convergence if and only if Empty convergence The on set non-empty is defined for all by: Although it is a preconvergence on it is a convergence on The empty preconvergence on is a non-topological preconvergence because for every topology on the neighborhood filter at any given point necessarily converges to in Chaotic convergence The on set non-empty is defined for all by: The chaotic preconvergence on is equal to the canonical convergence induced by when is endowed with the indiscrete topology. Properties A preconvergence on set non-empty is called or if is a singleton set for all It is called if for all and it is called if for all distinct Every preconvergence on a finite set is Hausdorff. Every convergence on a finite set is discrete. While the category of topological spaces is not exponential (i.e. Cartesian closed), it can be extended to an exponential category through the use of a subcategory of convergence spaces. See also Citations References Mathematical structures
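In standard notation, with $\mathbb{F}(X)$ the set of filters on $X$ and $\{x\}^{\uparrow}$ the principal (discrete) ultrafilter at $x$, the definitions above can be stated compactly as follows (a conventional formulation from the literature, not verbatim from this article):

```latex
% A preconvergence \xi on a non-empty set X relates filters to limit
% points, written x \in \lim_\xi \mathcal{F}, and is required to be isotone:
\mathcal{F} \leq \mathcal{G} \;\Longrightarrow\;
  \lim\nolimits_{\xi} \mathcal{F} \subseteq \lim\nolimits_{\xi} \mathcal{G},
  \qquad \mathcal{F}, \mathcal{G} \in \mathbb{F}(X),
% where \mathcal{F} \leq \mathcal{G} means \mathcal{G} is finer than
% (subordinate to) \mathcal{F}. It is a convergence if, in addition,
x \in \lim\nolimits_{\xi} \{x\}^{\uparrow} \quad \text{for every } x \in X.
```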
Convergence space
[ "Physics", "Mathematics" ]
1,194
[ "Mathematical structures", "Mathematical objects", "Topology", "Space", "Geometry", "Spacetime" ]
61,387,167
https://en.wikipedia.org/wiki/Ind-completion
In mathematics, the ind-completion or ind-construction is the process of freely adding filtered colimits to a given category C. The objects in this ind-completed category, denoted Ind(C), are known as direct systems; they are functors from a small filtered category I to C. The dual concept is the pro-completion, Pro(C). Definitions Filtered categories Direct systems depend on the notion of filtered categories. For example, the category N, whose objects are natural numbers, and with exactly one morphism from n to m whenever , is a filtered category. Direct systems A direct system or an ind-object in a category C is defined to be a functor from a small filtered category I to C. For example, if I is the category N mentioned above, this datum is equivalent to a sequence of objects in C together with morphisms as displayed. The ind-completion Ind-objects in C form a category ind-C. Two ind-objects and determine a functor Iop x J Sets, namely the functor The set of morphisms between F and G in Ind(C) is defined to be the colimit of this functor in the second variable, followed by the limit in the first variable; the resulting formula is displayed below. More colloquially, this means that a morphism consists of a collection of maps for each i, where is (depending on i) large enough. Relation between C and Ind(C) The final category I = {*} consisting of a single object * and only its identity morphism is an example of a filtered category. In particular, any object X in C gives rise to a functor and therefore to a functor This functor is, as a direct consequence of the definitions, fully faithful. Therefore Ind(C) can be regarded as a larger category than C. Conversely, there need not in general be a natural functor However, if C possesses all filtered colimits (also known as direct limits), then sending an ind-object (for some filtered category I) to its colimit does give such a functor, which however is not in general an equivalence. Thus, even if C already has all filtered colimits, Ind(C) is a strictly larger category than C. Objects in Ind(C) can be thought of as formal direct limits, so that some authors also denote such objects by This notation is due to Pierre Deligne. Universal property of the ind-completion The passage from a category C to Ind(C) amounts to freely adding filtered colimits to the category. This is why the construction is also referred to as the ind-completion of C. This is made precise by the following assertion: any functor taking values in a category D that has all filtered colimits extends to a functor that is uniquely determined by the requirements that its value on C is the original functor F and such that it preserves all filtered colimits. Basic properties of ind-categories Compact objects Essentially by design of the morphisms in Ind(C), any object X of C is compact when regarded as an object of Ind(C), i.e., the corepresentable functor preserves filtered colimits. This holds true no matter what C or the object X is, in contrast to the fact that X need not be compact in C. Conversely, any compact object in Ind(C) arises as the image of an object of C. A category C is called compactly generated if it is equivalent to Ind(C′) for some small category C′. The ind-completion of the category FinSet of finite sets is the category of all sets. Similarly, if C is the category of finitely generated groups, ind-C is equivalent to the category of all groups. 
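The morphism-set formula described above is conventionally written as:

```latex
\operatorname{Hom}_{\operatorname{Ind}(C)}(F, G)
  \;=\; \varprojlim_{i \in I} \; \varinjlim_{j \in J} \;
        \operatorname{Hom}_{C}\bigl(F(i),\, G(j)\bigr),
```

that is, the colimit over J (the second variable) is taken first, followed by the limit over I (the first variable).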
Recognizing ind-completions These identifications rely on the following facts: as was mentioned above, any functor taking values in a category D that has all filtered colimits has an extension that preserves filtered colimits. This extension is unique up to equivalence. First, this functor is essentially surjective if any object in D can be expressed as a filtered colimit of objects of the form for appropriate objects c in C. Second, is fully faithful if and only if the original functor F is fully faithful and if F sends arbitrary objects in C to compact objects in D. Applying these facts to, say, the inclusion functor the equivalence expresses the fact that any set is the filtered colimit of finite sets (for example, any set is the union of its finite subsets, which is a filtered system) and moreover, that any finite set is compact when regarded as an object of Set. The pro-completion Like other categorical notions and constructions, the ind-completion admits a dual known as the pro-completion: the category Pro(C) is defined in terms of ind-objects as (The definition of pro-C is due to Grothendieck.) Therefore, the objects of Pro(C) are inverse systems (or pro-objects) in C. By definition, these are direct systems in the opposite category or, equivalently, functors from a small category I. Examples of pro-categories While Pro(C) exists for any category C, several special cases are noteworthy because of connections to other mathematical notions. If C is the category of finite groups, then pro-C is equivalent to the category of profinite groups and continuous homomorphisms between them. The process of endowing a preordered set with its Alexandrov topology yields an equivalence of the pro-category of the category of finite preordered sets with the category of spectral topological spaces and quasi-compact morphisms. Stone duality asserts that the pro-category of the category of finite sets is equivalent to the category of Stone spaces. The appearance of topological notions in these pro-categories can be traced to the equivalence, which is itself a special case of Stone duality, which sends a finite set to the power set (regarded as a finite Boolean algebra). The duality between pro- and ind-objects and the known descriptions of ind-completions also give rise to descriptions of certain opposite categories. For example, such considerations can be used to show that the opposite category of the category of vector spaces (over a fixed field) is equivalent to the category of linearly compact vector spaces and continuous linear maps between them. Applications Pro-completions are less prominent than ind-completions, but applications include shape theory. Pro-objects also arise via their connection to pro-representable functors, for example in Grothendieck's Galois theory, and also in Schlessinger's criterion in deformation theory. Related notions Tate objects are a mixture of ind- and pro-objects. Infinity-categorical variants The ind-completion (and, dually, the pro-completion) has been extended to ∞-categories by Lurie. See also completions in category theory Notes References Functors Limits (category theory)
Ind-completion
[ "Mathematics" ]
1,427
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Mathematical relations", "Functors", "Category theory", "Limits (category theory)" ]
61,387,397
https://en.wikipedia.org/wiki/Codensity%20monad
In mathematics, especially in category theory, the codensity monad is a fundamental construction associating a monad to a wide class of functors. Definition The codensity monad of a functor is defined to be the right Kan extension of along itself, provided that this Kan extension exists. Thus, by definition it is in particular a functor The monad structure on stems from the universal property of the right Kan extension. The codensity monad exists whenever is a small category (has only a set, as opposed to a proper class, of morphisms) and possesses all (small, i.e., set-indexed) limits. It also exists whenever has a left adjoint. By the general formula computing right Kan extensions in terms of ends, the codensity monad is given by the following formula: where denotes the set of morphisms in between the indicated objects and the integral denotes the end. The codensity monad therefore amounts to considering maps from to an object in the image of and maps from the set of such morphisms to compatible for all the possible Thus, as is noted by Avery, codensity monads share some kinship with the concept of integration and double dualization. Examples Codensity monads of right adjoints If the functor admits a left adjoint the codensity monad is given by the composite together with the standard unit and multiplication maps. Concrete examples for functors not admitting a left adjoint In several interesting cases, the functor is an inclusion of a full subcategory not admitting a left adjoint. For example, the codensity monad of the inclusion of FinSet into Set is the ultrafilter monad associating to any set the set of ultrafilters on This was proven by Kennison and Gildenhuys, though without using the term "codensity". In this formulation, the statement is reviewed by Leinster. A related example is discussed by Leinster: the codensity monad of the inclusion of finite-dimensional vector spaces (over a fixed field ) into all vector spaces is the double dualization monad given by sending a vector space to its double dual Thus, in this example, the end formula mentioned above simplifies to considering (in the notation above) only one object namely a one-dimensional vector space, as opposed to considering all objects in Adámek and Sousa show that, in a number of situations, the codensity monad of the inclusion of finitely presented objects (also known as compact objects) is a double dualization monad with respect to a sufficiently nice cogenerating object. This recovers both the inclusion of finite sets in sets (where a cogenerator is the set of two elements), and also the inclusion of finite-dimensional vector spaces in vector spaces (where the cogenerator is the ground field). Sipoş showed that the algebras over the codensity monad of the inclusion of finite sets (regarded as discrete topological spaces) into topological spaces are equivalent to Stone spaces. Avery shows that the Giry monad arises as the codensity monad of natural forgetful functors between certain categories of convex vector spaces to measurable spaces. 
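The end formula referred to in the definition above is standardly written, for a functor G : C → D and an object d of D, as:

```latex
T^{G}(d) \;=\; \int_{c \in C} G(c)^{\operatorname{Hom}_{D}(d,\, G(c))},
```

that is, the end over c of the powers of G(c) by the hom-sets Hom_D(d, G(c)), which is the precise sense in which the construction considers maps from d to objects in the image of G.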
Relation to Isbell duality Di Liberti shows that the codensity monad is closely related to Isbell duality: for a given small category Isbell duality refers to the adjunction between the category of presheaves on (that is, functors from the opposite category of to sets) and the opposite category of copresheaves on The monad induced by this adjunction is shown to be the codensity monad of the Yoneda embedding Conversely, the codensity monad of a full small dense subcategory in a cocomplete category is shown to be induced by Isbell duality. See also References Footnotes Further reading Codensity Monads at the n-category café. Category theory
Codensity monad
[ "Mathematics" ]
829
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Fields of abstract algebra", "Mathematical relations", "Category theory" ]
60,018,232
https://en.wikipedia.org/wiki/NGC%203729
NGC 3729 is a barred spiral galaxy located in the constellation Ursa Major. It is located at a distance of about 65 million light-years from Earth, which, given its apparent dimensions, means that NGC 3729 is about 60,000 light-years across. It was discovered by William Herschel on April 12, 1789. NGC 3729 has a bright nucleus embedded in a bar which measures 0.5 x 0.1 arcminutes. At the end of the bar lies a ring with knots. The outer part of the galaxy is formed by an asymmetric faint nebulosity with condensations. It is possible that the condensation is a disturbed satellite galaxy. In the centre of NGC 3729 an intermediate-mass black hole is predicted to lie, whose mass is estimated to be between 4,000 and 400,000 solar masses (10^(4.6 ± 1.0)) based on Ks-band bulge luminosity. The galaxy has an inner ring which emits in far ultraviolet and H-alpha, which are considered to be markers of recent star formation activity. NGC 3729 is a member of the M109 Group, which is part of the south Ursa Major groups, part of the Virgo Supercluster. It forms a pair with NGC 3718, which lies 11.5 arcminutes to the west. It is possible the two galaxies interacted in the past. Although no supernovae have been observed in NGC 3729 yet, a luminous red nova, designated AT 2018hso, was discovered on 31 October 2018 (type LRN, mag. 19.4). References External links Barred spiral galaxies Peculiar galaxies Ursa Major Ursa Major Cluster 3729 06547 35711 Astronomical objects discovered in 1789 Discoveries by William Herschel
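The quoted physical size follows from the distance and the galaxy's angular size by small-angle arithmetic. A quick sketch in which the ~3.2 arcminute angular diameter is an assumed value, not one stated in the article:

```python
import math

d_ly = 65e6          # distance from the article, light-years
theta_arcmin = 3.2   # assumed apparent diameter (not given in the article)

theta_rad = math.radians(theta_arcmin / 60)  # arcmin -> degrees -> radians
diameter_ly = d_ly * theta_rad               # small-angle approximation
print(f"~{diameter_ly:,.0f} light-years across")  # ~60,500 light-years
```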
NGC 3729
[ "Astronomy" ]
359
[ "Ursa Major", "Constellations" ]
60,023,274
https://en.wikipedia.org/wiki/Plastic%20limit%20theorems
Plastic limit theorems in continuum mechanics provide two bounds that can be used to determine whether material failure is possible by means of plastic deformation for a given external loading scenario. According to the theorems, to find the range within which the true solution must lie, it is necessary to find both a stress field that balances the external forces and a velocity field or flow pattern that corresponds to those stresses. If the upper and lower bounds provided by the velocity field and stress field coincide, the exact value of the collapse load is determined. Limit theorems The two plastic limit theorems apply to any elastic-perfectly plastic body or assemblage of bodies. Lower limit theorem: If an equilibrium distribution of stress can be found which balances the applied load and nowhere violates the yield criterion, the body (or bodies) will not fail, or will be just at the point of failure. Upper limit theorem: The body (or bodies) will collapse if there is any compatible pattern of plastic deformation for which the rate of work done by the external loads exceeds the internal plastic dissipation. References Continuum mechanics Physics theorems
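In load-multiplier form, the two theorems bracket the true collapse multiplier. A standard statement, with notation assumed rather than taken from the article:

```latex
% Lower bound: any statically admissible stress field (in equilibrium
% with load \lambda_{LB} P, yield criterion nowhere violated) gives
% \lambda_{LB} \le \lambda_c. Upper bound: any kinematically admissible
% velocity field v gives
\lambda_{\mathrm{UB}}
  \;=\; \frac{\displaystyle\int_{V} D\bigl(\dot{\boldsymbol{\varepsilon}}(\mathbf{v})\bigr)\,\mathrm{d}V}
             {\displaystyle\int_{S} \mathbf{t}\cdot\mathbf{v}\,\mathrm{d}S}
  \;\geq\; \lambda_{c} \;\geq\; \lambda_{\mathrm{LB}},
% where D is the plastic dissipation rate per unit volume and t the
% applied surface traction. Coincident bounds give the exact collapse load.
```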
Plastic limit theorems
[ "Physics" ]
226
[ "Continuum mechanics", "Equations of physics", "Classical mechanics", "Physics theorems" ]
63,583,623
https://en.wikipedia.org/wiki/Greater%20Dublin%20Area%20Cycle%20Network
The Greater Dublin Area Cycle Network is a proposed cycle network for the Greater Dublin Area. The plan was launched in 2013. A target, endorsed by the Irish government, proposed that the number of people cycling into Dublin would reach "75,000 each morning by 2021", representing a "three-fold increase in cycling over 2011 levels". A significant part of the proposed plan, as published in 2013, expected that the Greater Dublin Area's cycle network would increase "five fold" from 500 km in length to over 2,800 km by 2020. The planned targets were not met. In August 2018, 78 companies and third-level education institutions called on the government to build a network of segregated cycle routes in Dublin. This call was reiterated by the National Children's Hospital and St. James's Hospital in 2019; the letter from St James's Hospital to the Minister for Transport cited worrying levels of air pollution. As of mid-2021, the National Transport Authority (NTA) website noted that "the NTA [..was then..] in the process of updating the GDA Cycle Network Plan" and that it planned to publish this update "later in 2021". References Cycling infrastructure Cycling in Ireland Transport infrastructure Transport in Dublin (city) Dublin Transport in County Kildare Transport in County Meath Transport in County Wicklow
Greater Dublin Area Cycle Network
[ "Physics" ]
277
[ "Physical systems", "Transport", "Transport infrastructure" ]
63,584,718
https://en.wikipedia.org/wiki/Richards%27%20theorem
Richards' theorem is a mathematical result due to Paul I. Richards in 1947. The theorem states that for, if is a positive-real function (PRF) then is a PRF for all real, positive values of . The conventional form of the function involved is displayed after the bibliography below. The theorem has applications in electrical network synthesis. The PRF property of an impedance function determines whether or not a passive network can be realised having that impedance. Richards' theorem led to a new method of realising such networks in the 1940s. Proof where is a PRF, is a positive real constant, and is the complex frequency variable, can be written as, where, Since is PRF then is also PRF. The zeroes of this function are the poles of . Since a PRF can have no zeroes in the right-half s-plane, then can have no poles in the right-half s-plane and hence is analytic in the right-half s-plane. Let Then the magnitude of is given by, Since the PRF condition requires that for all then for all . The maximum magnitude of occurs on the axis because is analytic in the right-half s-plane. Thus for . Let , then the real part of is given by, Because for then for and consequently must be a PRF. Richards' theorem can also be derived from Schwarz's lemma. Uses The theorem was introduced by Paul I. Richards as part of his investigation into the properties of PRFs. The term PRF was coined by Otto Brune, who proved that the PRF property was a necessary and sufficient condition for a function to be realisable as a passive electrical network, an important result in network synthesis. Richards gave the theorem in his 1947 paper in the reduced form, that is, the special case where The theorem (with the more general case of being able to take on any value) formed the basis of the network synthesis technique presented by Raoul Bott and Richard Duffin in 1949. In the Bott-Duffin synthesis, represents the electrical network to be synthesised and is another (unknown) network incorporated within it ( is unitless, but has units of impedance and has units of admittance). Making the subject gives Since is merely a positive real number, can be synthesised as a new network proportional to in parallel with a capacitor all in series with a network proportional to the inverse of in parallel with an inductor. By a suitable choice for the value of , a resonant circuit can be extracted from leaving a function two degrees lower than . The whole process can then be applied iteratively to until the degree of the function is reduced to something that can be realised directly. The advantage of the Bott-Duffin synthesis is that, unlike other methods, it is able to synthesise any PRF. Other methods have limitations such as only being able to deal with two kinds of element in any single network. Its major disadvantage is that it does not result in the minimal number of elements in a network. The number of elements grows exponentially with each iteration. After the first iteration there are two and associated elements, after the second, there are four and so on. Hubbard notes that Bott and Duffin appeared not to know the relationship of Richards' theorem to Schwarz's lemma and offers it as his own discovery, but it was certainly known to Richards, who used it in his own proof of the theorem. References Bibliography Bott, Raoul; Duffin, Richard, "Impedance synthesis without use of transformers", Journal of Applied Physics, vol. 20, iss. 8, p. 816, August 1949. 
Cauer, Emil; Mathis, Wolfgang; Pauli, Rainer, "Life and Work of Wilhelm Cauer (1900 – 1945)", Proceedings of the Fourteenth International Symposium of Mathematical Theory of Networks and Systems (MTNS2000), Perpignan, June, 2000. Hubbard, John H., "The Bott-Duffin synthesis of electrical circuits", pp. 33–40 in, Kotiuga, P. Robert (ed), A Celebration of the Mathematical Legacy of Raoul Bott, American Mathematical Society, 2010 . Hughes, Timothy H.; Morelli, Alessandro; Smith, Malcolm C., "Electrical network synthesis: A survey of recent work", pp. 281–293 in, Tempo, R.; Yurkovich, S.; Misra, P. (eds), Emerging Applications of Control and Systems Theory, Springer, 2018 . Richards, Paul I., "A special class of functions with positive real part in a half-plane", Duke Mathematical Journal, vol. 14, no. 3, 777–786, 1947. Wing, Omar, Classical Circuit Theory, Springer, 2008 . Theorems in complex analysis Electronic engineering Network synthesis Circuit theorems
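The function at the heart of the theorem is conventionally written in the network-synthesis literature as:

```latex
R(s) \;=\; \frac{k\,Z(s) - s\,Z(k)}{k\,Z(k) - s\,Z(s)},
```

so that Richards' theorem asserts: if Z(s) is a PRF and k is a real, positive constant, then R(s) is a PRF. Richards' original 1947 statement is the special case k = 1, and the Bott-Duffin synthesis solves this relation for Z(s) in terms of R(s).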
Richards' theorem
[ "Physics", "Mathematics", "Technology", "Engineering" ]
983
[ "Theorems in mathematical analysis", "Equations of physics", "Computer engineering", "Theorems in complex analysis", "Electronic engineering", "Circuit theorems", "Electrical engineering", "Physics theorems" ]
67,886,008
https://en.wikipedia.org/wiki/Alabay%20Statue
Alabay Statue () is a gilded statue that stands in Ashgabat, Turkmenistan. The 49-foot (15 m) monument depicts an Alabay dog; the dog figure itself is about 20 feet (6 m) tall and stands on a pedestal about 29 feet (9 m) high. The statue was created by Turkmen artist Sargart Babaev at the initiative of President Gurbanguly Berdimuhamedow in 2020. Appearance The height of the dog's figure is 6 meters, and it is installed on a pedestal 9 meters high. The 15-meter monument is located on an area with a diameter of 36 meters. It is installed at a roundabout along Magtymguly Avenue, in a vast area between Taslama and Tehran streets. History The idea to erect a monument appeared in 2017, with several designs presented in October. Construction began in 2019 and was completed in autumn 2020. The official opening of the statue took place in November 2020. References External links Berdimuhamedov's poem dedicated to Alabay Buildings and structures in Ashgabat 2020 sculptures Asian sculpture Colossal statues Sculptures of dogs
Alabay Statue
[ "Physics", "Mathematics" ]
221
[ "Quantity", "Colossal statues", "Physical quantities", "Size" ]
67,889,146
https://en.wikipedia.org/wiki/Allotropes%20of%20silicon
Allotropes of silicon are structurally varied forms of silicon. Amorphous silicon Amorphous silicon takes the form of a brown powder. Crystalline silicon Crystalline silicon has a metallic luster and a grayish color. Single crystals can be grown with the Czochralski process. Crystalline silicon can be doped with elements such as boron, gallium, germanium, phosphorus or arsenic. Doped silicon is used in solid-state electronic devices, such as solar cells, rectifiers and computer chips. Silicon crystallizes in the same pattern as diamond, viewable as two interpenetrating face-centered cubic primitive lattices. The cube measures 0.543 nm on a side. Silicene Silicene is a two-dimensional system with a hexagonal honeycomb structure similar to that of graphene. Silicene has different characteristics than graphene. It has a periodically buckled topology; interlayer coupling is much stronger; and its oxidized form, 2D silica, has a different chemical structure from graphene oxide. It was first created in 2010. Penta-silicene is a two-dimensional system with a pentagonal structure similar to that of penta-graphene. The structure was first synthesized in 2005. Si24 Si24 is an orthorhombic crystalline Si allotrope. It was first synthesized in 2014. Creating the allotrope involved forming Na4Si24, a polycrystalline precursor compound, with help from a tantalum capsule, high temperature, and a 1,500-ton multi-anvil press that gradually built up pressure. Next it was "degassed" in a vacuum for eight days. The result was a zeolite-type structure. Si24 has a quasi-direct band gap (specifically a small and almost flat indirect band gap). It can conduct electricity more efficiently than diamond-structured silicon. It can absorb and emit light. It is composed of five-, six-, and eight-membered rings. Small atoms and molecules can pass through the associated holes. Si24 can be doped as both p- and n-type, and the dopants are readily ionized. Boron and phosphorus are the most likely dopants. Potential applications include energy storage and filtering. 4H silicon 4H silicon is a bulk, highly ordered hexagonal 4-layer crystalline form of silicon. Optical absorption measurements revealed an indirect band gap near 1.2 eV, in agreement with first-principles calculations. Silicyne 1-dimensional silicyne is analogous to the carbon allotrope carbyne, being a long chain of silicon atoms instead of carbons. 2-dimensional silicyne is analogous to the carbon allotrope graphyne. References Allotropes of silicon
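The quoted cubic cell size fixes the atomic density of crystalline silicon, since the diamond structure places 8 atoms in each conventional cell. A quick check:

```python
a_cm = 0.543e-7      # cell edge from the article: 0.543 nm, in cm
atoms_per_cell = 8   # diamond structure: 8 atoms per conventional cubic cell

density = atoms_per_cell / a_cm**3
print(f"{density:.2e} atoms/cm^3")  # ~5.00e+22, the textbook value for Si
```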
Allotropes of silicon
[ "Chemistry" ]
565
[ "Allotropes of silicon", "Allotropes" ]
67,892,328
https://en.wikipedia.org/wiki/Long%20Lived%20In-situ%20Solar%20System%20Explorer
Long Lived In-situ Solar System Explorer (LLISSE) is a possible NASA payload on the Russian Venera-D mission to Venus. LLISSE uses new materials and heat-resistant electronics that would enable independent operation for about 90 Earth days. This endurance may allow it to obtain periodic measurements of weather data to update global circulation models and quantify near-surface atmospheric chemistry variability. Its anticipated instruments include wind speed/direction sensors, temperature sensors, pressure sensors, and a chemical multi-sensor array. LLISSE is a small cube. The Venera-D lander may carry two LLISSE units; one would be battery-powered (3,000 h), and the other would be wind-powered. References Venera program NASA Spacecraft instruments Meteorological instrumentation and equipment Space science experiments
Long Lived In-situ Solar System Explorer
[ "Astronomy", "Technology", "Engineering" ]
163
[ "Meteorological instrumentation and equipment", "Astronomy stubs", "Measuring instruments", "Spacecraft stubs" ]
67,893,191
https://en.wikipedia.org/wiki/Bergman%27s%20diamond%20lemma
In mathematics, specifically the field of abstract algebra, Bergman's Diamond Lemma (after George Bergman) is a method for confirming whether a given set of monomials of an algebra forms a -basis. It is an extension of Gröbner bases to non-commutative rings. The proof of the lemma gives rise to an algorithm for obtaining a non-commutative Gröbner basis of the algebra from its defining relations. However, in contrast to Buchberger's algorithm, in the non-commutative case, this algorithm may not terminate. Preliminaries Let be a commutative associative ring with identity element 1, usually a field. Take an arbitrary set of variables. In the finite case one usually has . Then is the free semigroup with identity 1 on . Finally, is the free associative -algebra over . Elements of will be called words, since elements of can be seen as letters. Monomial Ordering The reductions below require a choice of ordering on the words i.e. monomials of . This has to be a total order and satisfy the following: For all words and , we have that if then . For each word , the collection is finite. We call such an order admissible. An important example is the degree lexicographic order, where if has smaller degree than ; or in the case where they have the same degree, we say if comes earlier in the lexicographic order than . For example the degree lexicographic order on monomials of is given by first assuming . Then the above rule implies that the monomials are ordered in the following way: Every element has a leading word which is the largest word under the ordering which appears in with non-zero coefficient. In if , then the leading word of under degree lexicographic order is . Reduction Assume we have a set which generates a 2-sided ideal of . Then we may scale each such that its leading word has coefficient 1. Thus we can write , where is a linear combination of words such that . A word is called reduced with respect to the relations if it does not contain any of the leading words . Otherwise, for some and some . Then there is a reduction , which is an endomorphism of that fixes all elements of apart from and sends this to . By the choice of ordering there are only finitely many words less than any given word, hence a finite composition of reductions will send any to a linear combination of reduced words. Any element shares an equivalence class modulo with its reduced form. Thus the canonical images of the reduced words in form a -spanning set. The idea of non-commutative Gröbner bases is to find a set of generators of the ideal such that the images of the corresponding reduced words in are a -basis. Bergman's Diamond Lemma lets us verify if a set of generators has this property. Moreover, in the case where it does not have this property, the proof of Bergman's Diamond Lemma leads to an algorithm for extending the set of generators to one that does. An element is called reduction-unique if given two finite compositions of reductions and such that the images and are linear combinations of reduced words, then . In other words, if we apply reductions to transform an element into a linear combination of reduced words in two different ways, we obtain the same result. Ambiguities When performing reductions there might not always be an obvious choice for which reduction to do. This is called an ambiguity and there are two types which may arise. Firstly, suppose we have a word for some non-empty words and assume that and are leading words for some . 
This is called an overlap ambiguity, because there are two possible reductions, namely and . This ambiguity is resolvable if and can be reduced to a common expression using compositions of reductions. Secondly, one leading word may be contained in another i.e. for some words and some indices . Then we have an inclusion ambiguity. Again, this ambiguity is resolvable if , for some compositions of reductions and . Statement of the Lemma The statement of the lemma is simple but involves the terminology defined above. This lemma is applicable as long as the underlying ring is associative. Let generate an ideal of , where with the leading words under some fixed admissible ordering of . Then the following are equivalent: All overlap and inclusion ambiguities among the are resolvable. All elements of are reduction-unique. The images of the reduced words in form a -basis. Here the reductions are done with respect to the fixed set of generators of . When any of the above hold we say that is a Gröbner basis for . Given a set of generators, one usually checks the first or second condition to confirm that the set is a -basis. Examples Resolving ambiguities Take , which is the quantum polynomial ring in 3 variables, and assume . Take to be degree lexicographic order, then the leading words of the defining relations are , and . There is exactly one overlap ambiguity which is and no inclusion ambiguities. One may resolve via or via first. The first option gives us the following chain of reductions, whereas the second possibility gives, Since are commutative the above are equal. Thus the ambiguity resolves and the Lemma implies that is a Gröbner basis of . Non-resolving ambiguities Let . Under the same ordering as in the previous example, the leading words of the generators of the ideal are , and . There are two overlap ambiguities, namely and . Let us consider . If we resolve first we get, which contains no leading words and is therefore reduced. Resolving first we obtain, Since both of the above are reduced but not equal we see that the ambiguity does not resolve. Hence is not a Gröbner basis for the ideal it generates. Algorithm The following short algorithm follows from the proof of Bergman's Diamond Lemma. It is based on adding new relations which resolve previously unresolvable ambiguities. Suppose that is an overlap ambiguity which does not resolve. Then, for some compositions of reductions and , we have that and are distinct linear combinations of reduced words. Therefore, we obtain a new non-zero relation . The leading word of this relation is necessarily different from the leading words of existing relations. Now scale this relation by a non-zero constant such that its leading word has coefficient 1 and add it to the generating set of . The process is analogous for inclusion ambiguities. Now, the previously unresolvable overlap ambiguity resolves by construction of the new relation. However, new ambiguities may arise. This process may terminate after a finite number of iterations producing a Gröbner basis for the ideal or never terminate. The infinite set of relations produced in the case where the algorithm never terminates is still a Gröbner basis, but it may not be useful unless a pattern in the new relations can be found. Example Let us continue with the example from above where . We found that the overlap ambiguity does not resolve. This gives us and . The new relation is therefore whose leading word is with coefficient 1. 
Hence we do not need to scale it and can add it to our set of relations which is now . The previous ambiguity now resolves to either or . Adding the new relation did not add any ambiguities so we are left with the overlap ambiguity we identified above. Let us try and resolve it with the relations we currently have. Again, resolving first we obtain, On the other hand resolving twice first and then we find, Thus we have and and the new relation is with leading word . Since the coefficient of the leading word is -1 we scale the relation and then add to the set of defining relations. Now all ambiguities resolve and Bergman's Diamond Lemma implies that is a Gröbner basis for the ideal it defines. Further generalisations The importance of the diamond lemma can be seen by how many other mathematical structures it has been adapted for: For power series algebras. For certain quiver Hecke algebras. For category algebras. For small categories. For shuffle operads. The lemma has been used to prove the Poincaré–Birkhoff–Witt theorem. References Lemmas in algebra
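The reduction process described above lends itself to a direct implementation. A minimal Python sketch (data structures and helper names are hypothetical, with a numeric stand-in for q) that reduces elements of the quantum polynomial ring example to normal form:

```python
# Elements are dicts {word: coefficient}; a rule maps a leading word to
# the linear combination it reduces to. Illustrative sketch only.
from fractions import Fraction

Q = Fraction(1, 2)  # illustrative numeric value for the parameter q

# Relations of the quantum polynomial ring in x, y, z:
#   yx = q*xy,  zx = q*xz,  zy = q*yz
# (leading words under degree lexicographic order are yx, zx, zy).
RULES = {
    "yx": {"xy": Q},
    "zx": {"xz": Q},
    "zy": {"yz": Q},
}

def reduce_once(element):
    """Apply one reduction to the first reducible word found."""
    for word, coeff in element.items():
        for lw, replacement in RULES.items():
            pos = word.find(lw)
            if pos == -1:
                continue
            prefix, suffix = word[:pos], word[pos + len(lw):]
            out = dict(element)
            del out[word]  # replace this term by its reduction
            for rep_word, rep_coeff in replacement.items():
                new_word = prefix + rep_word + suffix
                out[new_word] = out.get(new_word, 0) + coeff * rep_coeff
                if out[new_word] == 0:
                    del out[new_word]
            return out, True
    return element, False

def normal_form(element):
    """Compose reductions until no leading word occurs in any term."""
    changed = True
    while changed:
        element, changed = reduce_once(element)
    return element

# zyx reduces to q^3 * xyz regardless of the order of reductions,
# reflecting that the overlap ambiguity zyx resolves.
print(normal_form({"zyx": Fraction(1)}))  # {'xyz': Fraction(1, 8)}
```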
Bergman's diamond lemma
[ "Mathematics" ]
1,698
[ "Theorems in algebra", "Lemmas in algebra", "Lemmas" ]
67,894,577
https://en.wikipedia.org/wiki/RAC%20421-II
RAC 421-II, also referred to simply as RAC 421, is a quaternary local anesthetic that acts through intracellular blockage of NaKATPase. Function As a quaternary ammonium analogue of another local anesthetic, RAC 109, RAC 421-II is permanently charged and so cannot cross the hydrophobic phospholipid cell membrane. Because it cannot diffuse across the membrane, it cannot reach the intracellular surface of NaKATPase from outside the cell, and it therefore exerts its anesthetic properties only if it is injected directly into the cytosol of the nerve fibre. Inhibition occurs by allowing the sodium and potassium gradients across the cell membrane to dissipate. NaKATPase blockage preferentially inhibits firing of nociceptive nerve fibres because of their relatively small diameter and consequently low tolerance to NaKATPase inhibitors. This is in contrast to non-quaternary anesthetics such as benzocaine and tetracaine, which cross the cell membrane in their uncharged states and so can induce anesthetic effects upon application to the extracellular side of the membrane. They subsequently become charged, and thus activated, within the cytosol to exert their inhibitory effects on NaKATPase (NaKATPase-inhibiting anesthetics must be in their charged state to be active). References Local anesthetics Pyrrolidones Spiro compounds Tetralins
RAC 421-II
[ "Chemistry" ]
311
[ "Organic compounds", "Spiro compounds" ]
67,895,670
https://en.wikipedia.org/wiki/Maleic%20hydrazide
Maleic hydrazide, often known by the brand name Fazor, is a plant growth regulator that reduces growth by preventing cell division but not cell enlargement. It is applied to the foliage of potato, onion, garlic and carrot crops to prevent sprouting during storage. It can also be used to control volunteer potatoes that are left in the field during harvesting. It was first identified in the 1940s but was not used commercially in the United Kingdom until 1984. The banning of chlorpropham as a sprout suppressant in 2019 has led to renewed interest in how maleic hydrazide can be used in potatoes. References Hydrazides Plant growth regulators
Maleic hydrazide
[ "Chemistry" ]
137
[ "Organic chemistry stubs" ]
70,796,522
https://en.wikipedia.org/wiki/Chemical%20defenses%20in%20Cannabis
Cannabis (/ˈkænəbɪs/) is commonly known as marijuana or hemp and has two known strains: Cannabis sativa and Cannabis indica, both of which produce chemicals to deter herbivory. The chemical composition includes specialized terpenes and cannabinoids, mainly tetrahydrocannabinol (THC) and cannabidiol (CBD). These substances play a role in defending the plant from pests and pathogens, including insects, fungi, viruses and bacteria. THC and CBD are stored mostly in the trichomes of the plant, and can cause psychological and physical impairment in the user via the endocannabinoid system and unique receptors. THC increases dopamine levels in the brain, which contributes to the euphoric and relaxed feelings cannabis provides. As THC is a secondary metabolite, it has no known effects on plant development, growth, and reproduction. However, some studies show that secondary metabolites such as cannabinoids, flavonoids, and terpenes are used as defense mechanisms against biotic and abiotic environmental stressors. Biosynthesis pathways Cannabinoids The production of the cannabinoids THC and CBD is the result of a series of chemical reactions, and these are just two of over a hundred known cannabinoid types. The pathway for cannabinoid production takes place inside the transcriptomes of glandular trichomes in the cannabis plant. Beginning with the formation of 3,5,7-trioxododecanoyl-CoA by the condensation reaction between hexanoyl-CoA and malonyl-CoA, catalyzed by a type III polyketide synthase (PKS), the product is then used to form olivetolic acid. After the geranylation of olivetolic acid, cannabigerolic acid (CBGA) or cannabigerovarinic acid (CBGVA) is formed. The decarboxylation of these acids yields what we recognize as THC and CBD. Terpenes Terpenes are a key component in the chemotaxonomic classification of cannabis strains, as terpene composition is a phenotypic trait. The majority of terpenes found in cannabis are hydrocarbons, which are a direct product of terpene synthase (TPS) enzymes. The molecular makeup of terpenes in a cannabis plant involves the linking and elongation of hydrocarbon chains built from isoprene units, formed by isopentenyl pyrophosphate and dimethylallyl pyrophosphate. Terpenoids are terpenes with the addition of oxygen, among other structural additions. There are numerous types of unique functional terpenes in green plants, and they are formed via many differing pathways: the methylerythritol phosphate (MEP), cytosolic mevalonate (MEV), or deoxyxylulose phosphate (DOXP) pathways, to name a few. In addition, mevalonic acid's (MVA) involvement in the biosynthesis of complex terpenoids, such as steroids, was demonstrated in 1983. Once produced, specifically within the disk cells, terpenes are stored within the trichomes of the plant. There are several types of terpenes in cannabis, composed of varying numbers of isoprene units. They contribute to the signature aroma and insecticidal properties via their emission as volatile organic compounds. Different cannabis strains synthesize different terpenes through their biochemical pathways, and the diversity of the terpenes depends upon the diversity of the TPS enzymes present in the cannabis plant's TPS gene pool. The causes of variation in the TPS enzymes, though, are still unknown. The monoterpene myrcene and the sesquiterpenes β-caryophyllene (which binds to the human CB2 cannabinoid receptor) and α-humulene are the most common terpene compounds, and are present in most varieties of cannabis strains.
The lack of exact standards sometimes makes it difficult for scientists to classify new terpenes. Terpene profiles are subject to change under different environmental conditions, which may lead to variation in TPS gene expression, ultimately leading to variation in the synthesized terpenes. Terpenes have unique, distinct aromas, which is why each strain smells different. Cannabis plants, like many others, biochemically synthesize terpenes with intense aromas as a method of chemical defense, in attempts to repel predators and invite pollinators. Because terpenes and terpenoids are biologically active molecules, it is possible that variations in terpenes elicit different biological and psychoactive responses in humans. This is why people claim to experience different psychological effects from different strains. Chemical biotic stress defense One form of Cannabis defense is the up-regulation of cannabinoids and specialized terpenes in response to differing biotic stressors in the environment, such as pests and predation. In a study from 2019, tobacco hornworm larvae were fed an artificial diet of wheat germ containing a cannabis agent. The results showed that, on average, significantly high dosages of CBD in the new diet may have decreased survival rates of the larvae. In addition, Manduca sexta larvae avoid eating plants containing high amounts of CBD, indicating that CBD may be a natural pest deterrent. However, research has also shown that when the plant is subjected to mechanical wounding by certain insects, CBD levels remain unchanged or even decrease. This observation may be due to differences in insect species and their chemical secretions, suggesting the hypothesis that CBD levels vary in response to certain species, or even have no effect. Phytocannabinoid and terpene content in the leaves and flowers of C. sativa rises when under attack by Tetranychus urticae, a common pest of the genus. When compared to a control of Cannabis sativa without any pest damage, research from 2022 demonstrated an overall increase of secondary metabolites in plants exposed to Tetranychus urticae infestation, measuring this metabolite rise using liquid and gas chromatography–mass spectrometry. The increase was found to be significant, and is attributed to a defense mechanism in the plant. The up-regulation of cannabinoids as defense compounds in Cannabis can be induced by elicitors. In a study from 2019, salicylic acid (SA) was used with GABA as an elicitor to determine its effects on the expression of metabolites involved in THC and CBD biosynthesis. SA and GABA were demonstrated to effectively up-regulate the expression of THCAS, the synthase that converts cannabigerolic acid on the pathway to THC, which resulted in higher levels of THC. These results support a mechanism in which cannabis elicitors such as salicylic acid and GABA trigger a signal cascade for increased expression of defense genes in response to stress. One line of defense is the release of volatile organic compounds (VOCs) into the air to defend against herbivores by warning neighboring plants. The release of VOCs may begin with the jasmonic acid (JA) pathway, which up-regulates defensive genes. Jasmonic acid, also called jasmonate, is a hormone linked to wound signaling in plants. Rapid wound signaling involves an influx of calcium after the arrival of an action potential.
The increase of calcium triggers a regulatory protein, calmodulin, to turn on a protein kinase, releasing JASMONATE-ASSOCIATED VQ-MOTIF GENE1 (JAV1) by phosphorylating it. In a study from 2020, JA-mediated markers were up-regulated throughout infection in leaves infected with the necrotrophic pathogen gray mold. Through a series of signals, the plant detects the presence of fungal elicitors/pathogens; then, through the JA pathway, the expression of defense genes is increased. Chemical abiotic stress defense Drought resistance Drought negatively impacts the growth and yield of hemp; hemp has therefore evolved survival mechanisms for abiotic stress. When exposed to drought stress, plant cells discontinue normal growth rates along with other physiological processes such as photosynthesis. Down-regulating the expression of certain genes or transcription factors can assist in this response. For example, photosynthesis–antenna proteins and differentially expressed genes (DEGs) in the jasmonic acid pathway were demonstrated to be down-regulated during drought stress. In a 2018 research article, Gao and colleagues attribute the down-regulation to reduced photosynthesis in the plant. Up-regulation of genes can also occur: transcription factors from the NAC gene family were demonstrated to be overexpressed in response to drought treatments, possibly contributing to tolerance. Numerous regulatory genes involved in the biosynthesis of abscisic acid (ABA), a plant hormone linked to stress response, are overexpressed during times of drought stress. Some of these genes are from the PP2C and SnPK gene families, linked to drought tolerance because of their intrinsic roles in ABA signaling. ABA signaling, controlled by changes in ABA metabolic pathways, assists in stomatal closure and changes in the photosynthetic processes of hemp plants to combat water loss during drought stress. Another stress hormone, auxin, may be important in drought tolerance by means of the gene GH3. A hemp GH3 homolog gene has been shown to increase drought resistance in rice by decreasing expression of indole-3-acetic acid (IAA), which decreases photosynthesis and cell growth. Salt stress Ion balance is a key factor in plant development and yield. Excessively high salt concentrations in soil lower the water potential in root tissue and become toxic, stunting growth and inhibiting flowering by dehydrating the plant. Stomatal closure is also a response to high salinity, leading to lowered sugar production and transpiration rates. Plants respond to high-salinity soils by accumulating sodium and chloride and reducing uptake of macronutrients and other ions. This accumulation results in inhibition of calcium signaling. In order to combat this type of stress, plants must have strategies and adaptations in place for survival, such as osmotic stress pathways. RNA sequencing and qRT-PCR analysis have made it possible to identify these gene expression pathways, such as the MAPK pathway, allowing scrutiny of candidate genes responsible for greater tolerance to salinity. Candidate genes for this type of stress response have also been found in plant hormone signal transduction pathways. Different species of Cannabis carry unique variations in gene expression, with some having a greater ability to tolerate salt by keeping potassium levels high enough to limit sodium uptake.
Removing sodium from the cytoplasm by means of sodium/hydrogen antiporters is another mechanism for resisting desiccation in high-salinity environments, using the salt overly sensitive (SOS) regulation pathway. The SOS pathway exchanges excess sodium for hydrogen, and it is set into action by calcium signal flux. Modifying the cytoskeleton and utilizing an osmotic stress pathway are two other physiological defenses plants use to handle salinity. The ability to sense excess salinity is a valuable tool, as excess sodium can trigger an influx of calcium and reactive oxygen species (ROS). Without the ability to sense sodium, calcium would not be triggered into a signaling cascade flowing into the cytosol. This flow notifies the system to block salt ions from entering the roots by using any available defenses, such as modifying the cell wall. A plant without these sensing and signaling capabilities is considered salt-sensitive, and is known as a glycophyte. Metal toxicity Metal pollution in soil induces high toxicity in hemp plants. Cadmium (Cd) toxicity has been shown to be long-term and irreversible in plants. Cd specifically results in oxidative stress and an increase in free radicals. Free radicals have been found to cause oxidative stress, cell damage, and death. Hemp plants in Cd-polluted soils were found to help detoxify the metal while the plants themselves were preserved. While growth in these hemp plants was slightly reduced when planted in Cd-contaminated soil, continued plant growth indicated that hemp plants were able to detoxify some Cd. Specifically, transporter proteins would move Cd into the cell wall, and differentially expressed genes (DEGs) would activate, bind, and defend against Cd stress. These DEGs were found to be involved in cell wall metabolism and were most active when in contact with Cd. The plant hormone ABA plays an important role in activating signal transduction cascades and cell cycling and growth. An increase of ABC transporters in hemp plants contributes to increases in calcium concentration, indicating that calcium-binding proteins can control Cd concentration and absorption. References Cannabis Chemical ecology
Chemical defenses in Cannabis
[ "Chemistry", "Biology" ]
2,686
[ "Biochemistry", "Chemical ecology" ]
70,797,998
https://en.wikipedia.org/wiki/Wolfgang%20Lechner
Wolfgang Lechner (born 14 May 1981 in Kufstein) is a theoretical physicist from Austria. He is the co-founder and co-CEO of the company ParityQC (Parity Quantum Computing GmbH) and a professor at the Institute for Theoretical Physics of the University of Innsbruck. Academic career Wolfgang Lechner earned his master's degree and PhD in physics from the University of Vienna with Christoph Dellago as supervisor. He completed postdoctoral studies under the direction of Peter Bolhuis at the University of Amsterdam from 2009 to 2011, followed by postdoctoral positions at IQOQI Innsbruck, with Peter Zoller from 2011 to 2013 and continuing from 2013 to 2016. Since December 2020 he has been an associate professor at the Institute for Theoretical Physics, University of Innsbruck. Research Together with his colleagues Philipp Hauke and Peter Zoller, Wolfgang Lechner developed a quantum computing scheme that mitigates the fundamental connectivity limitations of quantum computers and solves general optimization problems through a software architecture. In 2017, Lechner set up a research team in the field of quantum optimization at the University of Innsbruck. The research group is dedicated to theoretical quantum physics, with the aim of solving computationally challenging problems efficiently on near-term quantum devices. The research group has published several papers, including "Quantum Approximate Optimization With Parallelizable Gates" and "Quantum Optimization via Four-Body Rydberg Gates". ParityQC In January 2020 Wolfgang Lechner co-founded the company ParityQC together with Magdalena Hauser, as a spin-off from the University of Innsbruck, with Hermann Hauser as mentor. ParityQC is a quantum architecture company that develops blueprints for quantum computers to solve optimization problems, as well as the accompanying operating system, ParityOS. The ParityQC architecture is a generalisation of the LHZ architecture for both digital and analog quantum devices. Awards Wolfgang Lechner has received a number of awards for his contributions to the field of quantum optimization. He was awarded the 2011 Loschmidt Prize of Austria's Chemisch Physikalische Gesellschaft, the 2015 Wallnöfer Prize of the Austrian Industrialists Association (IV), the 2017 Thirring Prize of the Austrian Academy of Sciences and the 2017 START Prize of the Austrian Science Fund. Lechner also received the Houska Prize 2019, which was awarded jointly to him and his research group, the Google Faculty Research Award for Quantum Computing, and the 2020 "Spinoff Prize" Nature Research Award for ParityQC. In 2020, he was nominated among the "22 Innovators Building a Better Future" by Wired UK. References 1981 births Living people 21st-century Austrian physicists People from Kufstein University of Vienna alumni Theoretical physicists Academic staff of the University of Innsbruck
Wolfgang Lechner
[ "Physics" ]
563
[ "Theoretical physics", "Theoretical physicists" ]
70,798,598
https://en.wikipedia.org/wiki/Aurantimycin%20A
Aurantimycin A is a depsipeptide antibiotic with the molecular formula C38H64N8O14. It is produced by the bacterium Streptomyces aurantiacus. Aurantimycin A also shows cytotoxic properties. References Further reading Antibiotics Depsipeptides Cyclic peptides
Aurantimycin A
[ "Chemistry", "Biology" ]
75
[ "Biotechnology products", "Organic compounds", "Antibiotics", "Biocides", "Organic compound stubs", "Organic chemistry stubs" ]
70,800,230
https://en.wikipedia.org/wiki/Felix%20Boehm
Felix Hans Boehm (June 9, 1924, Basel – May 25, 2021, Altadena, California) was a Swiss-American experimental physicist, known for his research on weak interactions, parity violation, and neutrino physics. Biography He had four brothers, and both his father and his paternal grandfather were in the publishing business. Felix Boehm completed his Matura in 1943 and was drafted into the Swiss army, which allowed him to study physics part-time at the University of Geneva. In the autumn of 1943 he matriculated at ETH Zurich. There he took several classes from Wolfgang Pauli and graduated in physics with his Diplom in 1948 and his doctorate in 1951, with Paul Scherrer as doctoral advisor. Boehm worked as an assistant to Scherrer from 1951 to March 1952 and then went as a Boese Fellow to Columbia University, where he studied with C. S. Wu for a year and a half. As a postdoctoral research fellow he went in July 1953 to Caltech, where he studied with Jesse DuMond and Charles Lauritsen. In 1957 Boehm married Ruth Sommerhalder, whom he had met in 1956 at a social occasion at the Swiss consulate in Los Angeles. At Caltech he became an assistant professor in 1958, a full professor in 1961, William L. Valentine Professor of Physics in 1985, and professor emeritus in retirement in 1995. In 1960 he played an essential role in bringing Rudolf Mössbauer to the California Institute of Technology. In 1961 Boehm was awarded a 2-year Sloan Research Fellowship. He held visiting positions in 1957/58 at the University of Heidelberg (at the invitation of Jensen), in 1965/66 at the University of Copenhagen, in 1971/72 at CERN, and in 1979/80 at the Institut Laue-Langevin in Grenoble, where he also worked with scientists from the Paul Scherrer Institute. He was a visiting professor in 1980 at the Ludwig Maximilian University of Munich and in 1981 at ETH Zurich. (Years earlier he had turned down an offer of a professorship at ETH in favor of Caltech.) In the 1950s Boehm worked on experiments on parity violation and experimentally confirmed the violation first reported by C. S. Wu. In 1956 Boehm and Aaldert Wapstra made the confirmation by measuring the circular polarization of gamma rays in beta decay. At Caltech Boehm came into contact with the theorists Richard Feynman and Murray Gell-Mann. Boehm did research on X-ray spectroscopy in nuclear physics, specifically the isotope shift of K-shell electrons, and then experiments involving muons at CERN and at the Los Alamos Meson Physics Facility (LAMPF). He collaborated with French and Swiss scientists on neutrino detection with an experiment set up in the Gotthard Tunnel. For a number of years, he and his group also searched in vain for violations of time reversal invariance in nuclear physics (but found upper bounds for such violations). At Caltech he did research on double beta decay. In 1969 and 1970 he and J. C. Vanderleeden found parity non-conservation in nuclear forces by measuring the circular polarization of gamma rays from unpolarized atomic nuclei. Beginning in 1970 he collaborated extensively with the theorist Petr Vogel. In 1980 Boehm received the Humboldt Research Award. In 1983 he was elected a member of the National Academy of Sciences. In 1995 he received the Tom W. Bonner Prize in Nuclear Physics. In 2006 he was elected a Fellow of the American Association for the Advancement of Science. Upon his death in 2021 he was survived by his widow and their two sons.
Selected publications Articles Books 1st edition 1988 References 1924 births 2021 deaths 20th-century American physicists 20th-century Swiss physicists ETH Zurich alumni California Institute of Technology faculty Scientists from Basel-Stadt Experimental physicists Nuclear physicists Members of the United States National Academy of Sciences Fellows of the American Association for the Advancement of Science Swiss emigrants to the United States People associated with CERN
Felix Boehm
[ "Physics" ]
835
[ "Experimental physics", "Experimental physicists" ]
70,802,355
https://en.wikipedia.org/wiki/Spectral%20dimension
The spectral dimension is a real-valued quantity that characterizes a spacetime geometry and topology. It characterizes a spread into space over time, e.g. an ink drop diffusing in a water glass or the evolution of a pandemic in a population. Its definition is as follows: if a phenomenon spreads as t^n, with t the time, then the spectral dimension is 2n. The spectral dimension depends on the topology of the space, e.g., the distribution of neighbors in a population, and the diffusion rate. In physics, the concept of spectral dimension is used, among other things, in quantum gravity, percolation theory, superstring theory, and quantum field theory. Examples The diffusion of ink in an isotropic homogeneous medium like still water evolves as t^(3/2), giving a spectral dimension of 3. Ink in a 2D Sierpiński triangle diffuses along a more complicated path and thus more slowly, as t^0.6826, giving a spectral dimension of 1.3652. See also Dimension Fractal dimension Hausdorff dimension References Geometry Diffusion Quantum gravity Power laws
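As a concrete illustration of the definition, the following Python sketch (illustrative, not from the article) estimates the spectral dimension of the ordinary square lattice from the return probability of a random walk, which scales as p(t) ~ t^(-d_s/2); the walker counts and step numbers are arbitrary choices.

import math
import random

def return_probability(t, walkers=100_000):
    """Fraction of t-step simple random walks on Z^2 that end at the origin."""
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    hits = 0
    for _ in range(walkers):
        x = y = 0
        for _ in range(t):
            dx, dy = random.choice(steps)
            x += dx
            y += dy
        hits += (x == 0 and y == 0)
    return hits / walkers

# p(t) ~ t^(-d_s/2), so d_s = -2 * log(p2/p1) / log(t2/t1).
t1, t2 = 64, 256   # even step counts (odd-length walks never return)
p1, p2 = return_probability(t1), return_probability(t2)
d_s = -2 * math.log(p2 / p1) / math.log(t2 / t1)
print(f"estimated spectral dimension of Z^2: {d_s:.2f}")  # close to 2

Running the same estimator on a graph approximating the Sierpiński triangle would instead yield a value near 1.3652, the fractal case mentioned above.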
Spectral dimension
[ "Physics", "Chemistry", "Mathematics" ]
219
[ "Transport phenomena", "Physical phenomena", "Diffusion", "Unsolved problems in physics", "Quantum gravity", "Geometry", "Physics beyond the Standard Model" ]
72,283,961
https://en.wikipedia.org/wiki/Privileged%20access%20management
Privileged Access Management (PAM) is a type of identity management and branch of cybersecurity that focuses on the control, monitoring, and protection of privileged accounts within an organization. Accounts with privileged status grant users enhanced permissions, making them prime targets for attackers due to their extensive access to vital systems and sensitive data. Implementation and models PAM can be implemented as a Software-as-a-Service (SaaS) solution or an on-premises offering, providing organizations with the flexibility to choose the model that best fits their needs. The objective is to safeguard, regulate, observe, examine, and manage privileged access across diverse environments and platforms. PAM solutions adopt Zero Trust and least-privilege frameworks, guaranteeing that users receive only the essential computer access control needed for their roles, thereby minimizing the likelihood of unauthorized entry or security incidents. PAM focuses on securing and overseeing privileged accounts to prevent unauthorized access to critical resources, while SNMP is used for monitoring and managing network devices. These two components can work together to enhance overall network security by ensuring that SNMP configurations and access controls are protected and only accessible to authorized personnel, thus safeguarding against potential security breaches and unauthorized modifications to network settings. In July 2023, the Keeper Security survey revealed that only 43% of SMBs have deployed Privileged Access Management (PAM) solutions, significantly lower than other leading security technologies such as network, email, endpoint security, and SIEM tools, which all exceed 75% deployment. Key features PAM solutions play a crucial role in reducing security vulnerabilities, adhering to information security standards, and protecting an organization's IT infrastructure. They establish a comprehensive system for handling privileged accounts, encompassing the gathering, safeguarding, administration, verification, documentation, and examination of privileged access: Privileged Session Management controls and records high-risk user sessions, aiding in audit and compliance with searchable session recordings. Privileged Password Vault secures credential granting with role-based management and automated workflows. Privileged Threat Analytics check privileged session recordings to identify high-risk users and monitor for questionable behavior and anomalies. This helps in early detection of internal and external threats, allowing for immediate action to prevent breaches. Least Privileged Access: PAM safeguards the organization and thwarts security breaches by granting administrators precisely the access they need. This method employs a least-privilege security strategy, meticulously allocating administrative permissions across different systems. UNIX Identity Consolidation replaces native UNIX systems' individual authentication and authorization with a more secure, integrated identity management via Active Directory (AD). This approach broadens AD's authentication and authorization scope to include UNIX, Linux, and Mac systems. When combined with customer identity access management, Privileged Access Governance enhances governance features. This integration offers cohesive policies, automated and role-specific attestation, and provisioning. It guarantees a consistent governance framework for every employee, irrespective of their position or access level. 
Unified access management is an essential component of Privileged Access Management (PAM), encompassing user permissions, privileged access control, and identity management within a Unified Identity Security Platform. It efficiently addresses identity sprawl, streamlining cybersecurity efforts while promoting governance and operational efficiency. By integrating user data across various platforms, it centralizes management and enhances situational awareness, making it a pivotal tool in modern cybersecurity and identity management. According to the book Security-First Compliance for Small Businesses, best practices for managing privileged access (PAM) encompass: Distinguishing between privileged and non-privileged access for users with elevated permissions. Constraining the count of users possessing privileged rights. Restricting privileged rights solely to in-house staff. Mandating Multi-Factor Authentication (MFA) for accessing privileged accounts. See also List of ISO standards 28000–29999 Cybersecurity information technology list References Identity management Computer security procedures
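As a toy illustration of how two of the ideas above, least-privilege checks and privileged session auditing, fit together, consider the following Python sketch; the roles, permission strings, and class names are hypothetical and do not describe any particular PAM product.

import time
from dataclasses import dataclass, field

# Hypothetical role-to-permission policy: each role gets only the
# permissions it needs (least privilege).
ROLE_PERMISSIONS = {
    "dba":      {"db:read", "db:write"},
    "helpdesk": {"user:reset_password"},
    "auditor":  {"log:read"},
}

@dataclass
class Session:
    user: str
    role: str
    audit_log: list = field(default_factory=list)

    def request(self, permission: str) -> bool:
        """Grant or deny a privileged action, recording it either way."""
        allowed = permission in ROLE_PERMISSIONS.get(self.role, set())
        # Every privileged request is logged for later review,
        # mirroring privileged session management.
        self.audit_log.append((time.time(), self.user, permission, allowed))
        return allowed

s = Session(user="alice", role="helpdesk")
print(s.request("user:reset_password"))  # True  - within the role
print(s.request("db:write"))             # False - denied by least privilege

Real PAM deployments add credential vaulting, session recording, and anomaly detection on top of this basic grant-and-audit loop.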
Privileged access management
[ "Engineering" ]
789
[ "Cybersecurity engineering", "Computer security procedures" ]
72,285,945
https://en.wikipedia.org/wiki/L%C3%B6vheim%20Cube%20of%20Emotions
The Lövheim Cube of Emotion is a theoretical model of the relationship between the monoamine neurotransmitters serotonin, dopamine and noradrenaline and emotions. The model was presented in 2012 by the Swedish researcher Hugo Lövheim. Lövheim classifies emotions according to Silvan Tomkins, and orders the basic emotions in a three-dimensional coordinate system in which the levels of the monoamine neurotransmitters form orthogonal axes. The model is regarded as a dimensional model of emotion. The main concepts of the hypothesis are, firstly, that the monoamine neurotransmitters are orthogonal variables, meaning that their levels vary independently of one another; and, secondly, the proposed one-to-one relationship between combinations of the monoamine neurotransmitter levels and the basic emotions. References Emotion Mathematical psychology Affective science
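The geometry of the model can be sketched in a few lines of code. The Python snippet below is purely illustrative: the 0.5 threshold and 0–1 scaling are arbitrary assumptions, and the corner-to-emotion labels are deliberately omitted rather than guessed.

# Each axis is the level of one monoamine; each of the 2**3 = 8 cube
# corners corresponds to one of Tomkins' basic emotions in the model.
AXES = ("serotonin", "dopamine", "noradrenaline")

def corner(levels):
    """Quantize measured levels (each scaled to 0..1) to a cube corner.

    The model assigns one basic emotion to each corner; see Lovheim (2012)
    for the actual one-to-one assignment, which is not reproduced here."""
    return tuple(int(level >= 0.5) for level in levels)

print(corner((0.9, 0.8, 0.1)))  # -> (1, 1, 0): high serotonin and dopamine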
Lövheim Cube of Emotions
[ "Mathematics", "Biology" ]
176
[ "Emotion", "Behavior", "Mathematical psychology", "Applied mathematics", "Human behavior" ]
72,287,044
https://en.wikipedia.org/wiki/Grimpoteuthis%20angularis
Grimpoteuthis angularis is a species of octopus in the family Grimpoteuthidae. It was first described by Tristan J Verhoeff and Steve O'Shea in 2022, based on a single specimen found in New Zealand. Taxonomy The species was given the name angularis, referring to the octopus' angled shell. Verhoeff & O'Shea proposed that the common name of the species should be angle-shelled dumbo octopus. This species (as well as other Grimpoteuthis) may belong in its own family, the Grimpoteuthididae. Description and habitat The shell of Grimpoteuthis angularis is V-shaped, notably different to other Grimpoteuthis; the relatively elongate cirri are also distinctive. The holotype was discovered on the Chatham Rise to the east of New Zealand, at a depth of 628 metres. References Cephalopods of Oceania Endemic fauna of New Zealand Endemic molluscs of New Zealand Fauna of the Chatham Islands Cephalopods described in 2022 Molluscs of New Zealand Molluscs of the Pacific Ocean Octopuses Species known from a single specimen
Grimpoteuthis angularis
[ "Biology" ]
242
[ "Individual organisms", "Species known from a single specimen" ]
72,287,653
https://en.wikipedia.org/wiki/CYP109E1
Cytochrome P450 family 109 subfamily E member 1 (abbreviated CYP109E1) is a prokaryotic monooxygenase of the CYP109 family, originally identified in Bacillus megaterium, which can act as a 24- and 25-hydroxylase for cholesterol. References Cytochrome P450 Prokaryote genes
CYP109E1
[ "Biology" ]
78
[ "Prokaryotes", "Prokaryote genes" ]
75,093,073
https://en.wikipedia.org/wiki/Azomethane
Azomethane is an organic compound with the chemical formula CH3-N=N-CH3. It exhibits cis-trans isomerism. It can be produced by the reaction of 1,2-dimethylhydrazine dihydrochloride with copper(II) chloride in sodium acetate solution. The reaction produces the azomethane complex of copper(I) chloride, which yields free azomethane on thermal decomposition. Azomethane is a laboratory source of methyl radicals: CH3-N=N-CH3 → 2 CH3· + N2 References Further reading Azo compounds Organic compounds with 2 carbon atoms Methyl compounds
Azomethane
[ "Chemistry" ]
138
[ "Organic compounds", "Organic compounds with 2 carbon atoms" ]
78,056,837
https://en.wikipedia.org/wiki/Erdogan%E2%80%93Chatwin%20equation
In fluid dynamics, the Erdogan–Chatwin equation is a nonlinear diffusion equation for a scalar field that accounts for shear-induced dispersion due to horizontal buoyancy forces. The equation is named after M. Emin Erdogan and Phillip C. Chatwin, who derived it in 1967. The equation for the scalar field reads where is a positive constant. For , the equation reduces to the linear heat equation, and for , the equation reduces to . References Equations of fluid dynamics Fluid dynamics Partial differential equations
Erdogan–Chatwin equation
[ "Physics", "Chemistry", "Engineering" ]
114
[ "Equations of fluid dynamics", "Equations of physics", "Chemical engineering", "Piping", "Fluid dynamics" ]
78,060,184
https://en.wikipedia.org/wiki/Baffle%20blocks
Baffle blocks are concrete structures of various shapes, for example trapezoidal, placed in flowing water to dissipate its energy. They are arranged in several rows in stilling basins. They are used in spillways and dams, in irrigation systems, to protect fish in rivers with hydrotechnical installations, to improve sediment deposition, as a solution to scouring in hydraulic projects, and in flood control. See also Slosh baffle References Fluid dynamics Stormwater management Soil erosion
Baffle blocks
[ "Chemistry", "Engineering", "Environmental_science" ]
96
[ "Water treatment", "Stormwater management", "Chemical engineering", "Water pollution", "Piping", "Fluid dynamics stubs", "Fluid dynamics" ]
78,069,556
https://en.wikipedia.org/wiki/%CE%91-Methyltryptophan
α-Methyltryptophan (αMTP or α-MTP) is a synthetic tryptamine derivative, an artificial amino acid, and a prodrug of α-methylserotonin (αMS). It is the α-methylated derivative of tryptophan, while αMS is the α-methylated analogue of serotonin. αMTP has been suggested for potential therapeutic use in the treatment of conditions thought by some authors to be related to serotonin deficiency, such as depression. In labeled forms, αMTP is also used as a radiotracer in positron emission tomography (PET) imaging to assess serotonin synthesis and certain other processes. αMS is a non-selective serotonin receptor agonist, including of the serotonin 5-HT2 receptors, and has been described as a "substitute neurotransmitter" of serotonin. However, whereas αMS itself is too hydrophilic to efficiently cross the blood–brain barrier, thus being peripherally selective, αMTP is able to cross the blood–brain barrier and, following transformation, deliver αMS into the brain. Besides αMS, αMTP is also metabolized into α-methyltryptamine (αMT). αMT is a serotonin–norepinephrine–dopamine releasing agent, a non-selective serotonin receptor agonist, and a serotonergic psychedelic. However, αMT levels are much lower than those of αMS with αMTP and αMT is described as a minor metabolite of αMTP. In accordance, the behavioral effects of αMTP and αMT in animals are described as strikingly different. α-Methylmelatonin can also be formed in small amounts from αMTP, but the formation of this compound with αMTP in vivo appears to be negligible. αMTP and αMS remain in the body for long amounts of time following a single dose of αMTP, whereas tryptophan results in only a short-lasting increase in brain serotonin levels. This is attributed to the resistance to metabolism of these compounds afforded by their α-methyl group. As such, αMTP might be advantageous for therapeutic purposes relative to tryptophan. αMTP is useful over tryptophan in PET imaging because αMTP, unlike tryptophan, is not incorporated as an amino acid into brain proteins, and because, unlike serotonin, αMS is not a substrate for monoamine oxidase (MAO) and hence remains in the brain for a much longer amount of time. The preceding limitations of tryptophan make its use in PET imaging in humans impossible, whereas αMTP is a viable agent for such purposes. αMTP is first converted by tryptophan hydroxylase into α-methyl-5-hydroxytryptophan (αM-5-HTP or α-methyl-5-HTP), the α-methylated analogue of 5-hydroxytryptophan (5-HTP), prior to being decarboxylated by aromatic L-amino acid decarboxylase (AAAD) into αMS. αM-5-HTP has also been suggested for potential therapeutic use. However, αM-5-HTP is also a tyrosine hydroxylase inhibitor similarly to α-methyltyrosine, as well as an AAAD inhibitor, and has been found to deplete levels of brain norepinephrine in animals, although not levels of brain dopamine. See also O-Acetylbufotenine (O-acetyl-N,N-dimethylserotonin) α-Methylphenylalanine Metirosine (α-methyltyrosine) Methyldopa (α-methyl-DOPA) Neurotransmitter prodrug References Alpha-Alkyltryptamines Alpha-Amino acids Prodrugs Serotonin receptor agonists
Α-Methyltryptophan
[ "Chemistry" ]
852
[ "Chemicals in medicine", "Prodrugs" ]
65,074,394
https://en.wikipedia.org/wiki/LEAPER%20gene%20editing
LEAPER (Leveraging endogenous ADAR for programmable editing of RNA) is a genetic engineering technique in molecular biology by which RNA can be edited. The technique relies on engineered strands of RNA to recruit native ADAR enzymes to swap out different compounds in RNA. Developed by researchers at Peking University in 2019, the technique has been claimed by some to be more efficient than the CRISPR gene editing technique, with initial studies reporting editing efficiencies of up to 80%. Synopsis As opposed to DNA gene editing techniques (e.g., using CRISPR-Cas proteins to make modifications directly to a defective gene), LEAPER targets the messenger RNA (mRNA) transcribed from the same gene, which is then translated into a protein. Post-transcriptional RNA modification typically involves the strategy of converting adenosine to inosine (A-to-I), since inosine (I) demonstrably mimics guanosine (G) during translation into a protein. A-to-I editing is catalyzed by adenosine deaminase acting on RNA (ADAR) enzymes, whose substrates are double-stranded RNAs. Three human ADAR genes have been identified, with activity profiles developed for the ADAR1 (official symbol ADAR) and ADAR2 (ADARB1) proteins. LEAPER achieves this targeted RNA editing through the use of short engineered ADAR-recruiting RNAs (arRNAs). The arRNAs, between 100 and 150 nt in length for high editing efficiency, are designed to recruit endogenous ADAR1 protein, with its several RNA binding domains (RBDs), to a target site, in contrast to approaches in which ADAR is fused to a peptide or to a CRISPR-Cas13b protein together with a guide RNA (gRNA). This results in a change in which protein is synthesized during translation. History The technique was discovered by a team of researchers at Peking University in Beijing, China. The discovery was announced in the journal Nature Biotechnology in July 2019. Applications Chinese researchers have utilized LEAPER to restore functional enzyme activity in cells from patients with Hurler syndrome. They have claimed that LEAPER has the potential to treat almost half of all known hereditary disorders. Highly specific editing efficiencies of up to 80% can be achieved when LEAPER editing using arRNA151 is delivered via a plasmid or viral vector or as a synthetic oligonucleotide, though this efficiency varies significantly across cell types. Based on these preliminary results, LEAPER may hold the most therapeutic promise in conditions where no functional protein is produced but a partial restoration of protein expression would provide therapeutic benefit. For example, LEAPER restored α-L-iduronidase (IDUA) activity in cells from patients with IDUA-defective Hurler syndrome, and a W53X truncation mutant of p53 was edited using arRNA151 to achieve a "normal" p53 translation and functional p53-mediated transcriptional responses. Comparison to CRISPR LEAPER is analogous to CRISPR Cas-13 in that it targets RNA before proteins are synthesized. However, LEAPER is simpler and more efficient, as it only requires arRNAs, rather than a Cas protein and a guide RNA. According to the developers of LEAPER, it has the potential to be easier and more precise than any CRISPR technique. LEAPER also eliminates health concerns and technical barriers arising from the introduction of exogenous proteins. It has also been called more ethical, as it does not change DNA and thus does not result in heritable changes, unlike methods using CRISPR Cas-9. See also CRISPR gene editing Gene knockout NgAgo Prime editing References Genetic engineering Genome editing Biotechnology RNA Biochemistry
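The core design rule reported for arRNAs can be sketched in code: the guide is the reverse complement of a window around the target adenosine, with a cytidine placed opposite the target A (the A–C mismatch that directs ADAR to deaminate that adenosine). The Python function below is an illustrative assumption of how such a design step might look, not the published design software; the window size and names are invented.

COMPLEMENT = {"A": "U", "U": "A", "C": "G", "G": "C"}

def design_arrna(mrna: str, target_index: int, flank: int = 55):
    """Return a toy antisense arRNA for the adenosine at mrna[target_index].

    The guide pairs with the surrounding window, except that a cytidine
    sits opposite the target A; reported arRNAs are roughly 70-151 nt."""
    assert mrna[target_index] == "A", "target must be an adenosine"
    start = max(0, target_index - flank)
    window = mrna[start: target_index + flank + 1]
    offset = target_index - start
    paired = [COMPLEMENT[base] for base in window]
    paired[offset] = "C"                  # mismatch opposite the target A
    return "".join(reversed(paired))      # antisense orientation, 5'->3'

mrna = "GGCAUCAGUAAGCUUGG"  # toy sequence; real targets are full mRNAs
print(design_arrna(mrna, mrna.index("A", 5), flank=6))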
LEAPER gene editing
[ "Chemistry", "Engineering", "Biology" ]
749
[ "Genetics techniques", "Biological engineering", "Genome editing", "Genetic engineering", "Biotechnology", "nan", "Molecular biology", "Biochemistry" ]
65,074,511
https://en.wikipedia.org/wiki/Zakya%20Kafafi
Zakya H. Ismail (born 1948) is an Egyptian scientist who is professor of Electrical Engineering at Lehigh University. Her research considers printed electronics and photonics. She was the first woman to be appointed to the National Science Foundation Director of the Division of Materials Research. Early life and education Kafafi was born in Cairo, Egypt. She has said that she became interested in chemistry whilst she was at high school, and that her science teacher frequently referred to her as The Chemist. She started her undergraduate degree in chemistry at the University of Houston, where she minored in mathematics. She moved to Rice University for her graduate studies and gained her MA and PhD in chemistry, and worked on low-temperature spectroscopy. At Rice University Kafafi was friends with Marilyn E. Jacox. After completing her doctorate, Kafafi moved to Cairo, where she was appointed Assistant Professor. Research and career In 1986, while on a sabbatical, Kafafi visited the United States, where she learned about a job that was open in the Optical Sciences Division at the United States Naval Research Laboratory (NRL). Kafafi eventually joined NRL, where she established the organic optoelectronics section. Here she worked on nonlinear optical materials and colour centre lasers. She transitioned from chemistry to materials science and eventually ended up in physics, studying the properties of OLEDs. Kafafi spent over twenty years working at the NRL, during which time OLED displays found their way into televisions and mobile phones. In 2007 Kafafi was appointed to the National Science Foundation Director of the Division of Materials Research, during which time she oversaw a billion dollar budget. She was the first woman to hold such a position. In 2010 Kafafi returned to Egypt, where she looked to develop partnerships that promoted solar energy across the country. Kafafi joined the faculty at Lehigh University in 2008, where she was made Distinguished Research Fellow in the Department of Electrical Engineering. Here she has developed metallic plasmonic nanostructures that can increase light absorption and the efficiency of photovoltaics. These nanostructures make it possible to increase the optical absorption of the active layer of photovoltaics without increasing the layer thickness, allowing for improved device performance without compromising the flexibility or weight. From 2011 to 2016 Kafafi served as Editor-in-Chief of the Journal of Photonics for Energy. In 2014 Kafafi became the inaugural editor of the journal Science Advances. Awards and honours 2004 NRL Edison Patent Award 2005 Elected Fellow of the American Association for the Advancement of Science 2007 Elected Fellow of The Optical Society 2007 Elected Fellow of the American Association for the Advancement of Science 2015 Elected Fellow of the Materials Research Society 2017 American Chemical Society Hillebrand Prize 2018 Kuwait Foundation for the Advancement of Sciences Kuwait Prize in Applied Sciences 2021 Elected Member of the National Academy of Engineering Select publications References 1948 births Living people Egyptian scientists Rice University alumni University of Houston alumni Fellows of the American Association for the Advancement of Science Fellows of Optica (society) Fellows of SPIE Members of the United States National Academy of Engineering Women materials scientists and engineers Women in optics Egyptian women scientists
Zakya Kafafi
[ "Materials_science", "Technology" ]
639
[ "Women materials scientists and engineers", "Materials scientists and engineers", "Women in science and technology" ]
66,379,796
https://en.wikipedia.org/wiki/Ferraris%27%20motor
In 1885, Galileo Ferraris demonstrated an induction motor that also involved using two pairs of electromagnets to create a rotating magnetic field, though he did this independently of Baily. His motor more closely resembled modern ones in that the electromagnets surrounded a cylinder. More significantly, however, he proposed creating a true rotating magnetic field for it by supplying two sine wave alternating currents 90° apart. He gave his first public demonstration of the motor in 1888. History and description Professor Galileo Ferraris, of Turin, had already in 1885, arrived at the same fundamental ideas as those of Baily and of Deprez. But the result was more fruitful, inasmuch as he, without knowing of the work of either, united both sets of ideas. Like Baily he proposed to produce rotation of a copper conductor by means of eddy-currents induced in it by a progressively shifted magnetic field; and this progressively shifted magnetic field he proposed to generate as a true rotating field by combining at right angles to one another two alternate currents which differed by a quarter-period from one another. In 1885, Professor Ferraris constructed the motor depicted in plan in Fig 1, which was not, however, publicly shown till 1888. It was exhibited in 1893 at the World's Fair at Chicago. It consisted of two pairs of electromagnets A A and B B', having a common yoke made by winding iron wire around the exterior. Two alternate currents differing in phase were led into these two circuits, and the pivoted central body was observed to revolve. Ferraris's first publication was in March 1888, entitled Electrodynamic rotations produced by means of alternate currents. After expounding the geometric theory of the rotatory magnetic field, he suggested that a simple way of procuring the desired phase-currents would be to branch the circuit of an alternate current into two parts, into one of which should be inserted a resistance without self-induction, into the other a coil of much self-induction but of small resistance. The two windings of the motor should be respectively introduced into these two branches. The difference of phase thus produced would be sufficiently near to 90° to be effective. He expressed the opinion that in this way one might obtain all the effects that can be obtained by the rotation of a magnet. He then described the following experiments which were made in the autumn of 1885. Two flat coils, one of thick wire, the other of thinner wire, represented diagrammatically at A A and B B of Fig. 1, were set at right angles to one another. Into the first was brought a current from the primary of a Gaulard's transformer, and into the second the current from the secondary, with more or less non-inductive resistance. In the central space was suspended a small hollow closed cylinder of copper. If the current was turned on in one only of the two windings the cylinder remained immovable, but on turning on the second current it at once began to rotate. The sense of the rotation could be reversed by simply changing, with a reversing-switch, the connections of the second coil. The same results were found to follow when a cylinder of iron was substituted for that of copper. A laminated iron cylinder built up of insulated disks also turned. 
Then followed suggestions for constructing alternate current motors on this principle but of modified form; for, as Professor Ferraris remarked, it was evident that a motor thus made could not have any importance as a means of industrial transformation of power. He therefore designed a larger model, having as its rotating part a copper cylinder weighing 10 lbs, having a length of 18 cm and a diameter of 8.9 cm, borne on a horizontal shaft 1 cm in diameter. It was surrounded by two sets of coils A A and B B at right angles to one another, as in Fig. 2. It was, however, of but small power. Ferraris discussed the elementary theory of the apparatus, pointing out that the inductive action would be proportional to the slip, that is to say to the difference between the angular velocity of the magnetic field and that of the rotating cylinder; that the induced current in the rotating metal would also be proportional to this; and that the power of the motor is proportional jointly to the slip and to the velocity of the rotating part. Ferraris also suggested measuring instruments for alternate currents based on this principle. Lastly he succeeded in producing rotation in a mass of mercury placed in a vessel in the rotatory field. In 1894 Ferraris published another discussion of the theory of these motors. See also Arago's Rotations Timeline of the Electric Motor Rotating magnetic Field Induction Motor References External links Inventing The Induction Motor Electric motors Electromagnetism 19th-century inventions
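Ferraris's key idea, that two equal sine currents a quarter period apart in perpendicular coils yield a field of constant magnitude whose direction rotates uniformly, is easy to verify numerically. The following short Python sketch uses modern notation and illustrative parameter values (a 50 Hz supply and unit field amplitude):

import math

def field(t, omega=2 * math.pi * 50):
    """Net field of two perpendicular coils fed with equal-amplitude sine
    currents 90 degrees out of phase (field taken proportional to current)."""
    bx = math.cos(omega * t)   # coil pair A-A'
    by = math.sin(omega * t)   # coil pair B-B', a quarter period behind
    return bx, by

for k in range(4):
    t = k / (4 * 50)           # quarter-period steps of a 50 Hz supply
    bx, by = field(t)
    angle = math.degrees(math.atan2(by, bx)) % 360
    mag = math.hypot(bx, by)
    print(f"t={t:.4f}s  angle={angle:5.1f} deg  |B|={mag:.2f}")

At each quarter period the printed angle advances by 90 degrees while the magnitude stays fixed at 1, which is precisely the true rotating magnetic field described above.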
Ferraris' motor
[ "Physics", "Technology", "Engineering" ]
977
[ "Electromagnetism", "Physical phenomena", "Engines", "Electric motors", "Fundamental interactions", "Electrical engineering" ]
66,387,043
https://en.wikipedia.org/wiki/Rolling%20hairpin%20replication
Rolling hairpin replication (RHR) is a unidirectional, strand displacement form of DNA replication used by parvoviruses, a group of viruses that constitute the family Parvoviridae. Parvoviruses have linear, single-stranded DNA (ssDNA) genomes in which the coding portion of the genome is flanked by telomeres at each end that form hairpin loops. During RHR, these hairpin loops repeatedly unfold and refold to change the direction of DNA replication so that replication progresses in a continuous manner back and forth across the genome. RHR is initiated and terminated by an endonuclease encoded by parvoviruses that is variously called NS1 or Rep, and RHR is similar to rolling circle replication, which is used by ssDNA viruses that have circular genomes. Before RHR begins, a host cell DNA polymerase converts the genome to a duplex form in which the coding portion is double-stranded and connected to the terminal hairpins. From there, messenger RNA (mRNA) that encodes the viral initiator protein is transcribed and translated to synthesize the protein. The initiator protein commences RHR by binding to and nicking the genome in a region adjacent to a hairpin called the origin and establishing a replication fork with its helicase activity. Nicking leads to the hairpin unfolding into a linear, extended form. The telomere is then replicated and both strands of the telomere refold back in on themselves to their original turn-around forms. This repositions the replication fork to switch templates to the other strand and move in the opposite direction. Upon reaching the other end, the same process of unfolding, replication, and refolding occurs. Parvoviruses vary in whether both hairpins are the same or different. Homotelomeric parvoviruses such as adeno-associated viruses (AAV), i.e. those that have identical or similar telomeres, have both ends replicated by terminal resolution, the previously described process. Heterotelomeric parvoviruses such as minute virus of mice (MVM), i.e. those that have different telomeres, have one end replicated by terminal resolution and the other by an asymmetric process called junction resolution. During asymmetric junction resolution, the duplex extended form of the telomere reorganizes into a cruciform-shaped junction, and the correct orientation of the telomere is replicated off the lower arm of the cruciform. As a result of RHR, a replicative molecule that contains numerous copies of the genomes is synthesized. The initiator protein periodically excises progeny ssDNA genomes from this replicative concatemer. Background Parvoviruses are a family of DNA viruses that have single-stranded DNA (ssDNA) genomes enclosed in rugged, icosahedral protein capsids 18–26 nanometers (nm) in diameter. Unlike most other ssDNA viruses, which have circular genomes that form a loop, parvoviruses have linear genomes with short terminal sequences at each end of the genome. These termini are capable of being formed into structures called hairpins or hairpin loops and consist of short, imperfect palindromes. Varying from virus to virus, the coding region of the genome is 4–6 kilobases (kb) in length, and the termini are 116–550 nucleotides (nt) in length each. The hairpin sequences provide most of the cis-acting information needed for DNA replication and packaging. Parvovirus genomes may be either positive-sense or negative-sense. 
Some species, such as adeno-associated viruses (AAV) like AAV2, package a roughly equal number of positive-sense and negative-sense strands into virions, others, such as minute virus of mice (MVM), show preference toward packaging negative-sense strands, and others have varying proportions. Because of this disparity, the 5′-end (usually pronounced "five prime end") of the strand that encodes the non-structural proteins is called the "left end", and the 3′-end (usually pronounced "three prime end") is called the "right end". In reference to the negative-sense strand, the 3′-end is the left side and the 5′-end is the right side. Parvoviruses replicate their genomes through a process called rolling hairpin replication (RHR), which is a unidirectional, strand displacement form of DNA replication. Before replication, the coding portion of the ssDNA genome is converted to a double-strand DNA (dsDNA) form, which is then cleaved by a viral protein to initiate replication. Sequential unfolding and refolding of the hairpin termini acts to reverse the direction of synthesis, which allows replication to go back and forth along the genome to synthesize a continuous duplex replicative form (RF) DNA intermediate. Progeny ssDNA genomes are then excised from the RF intermediate. While the general aspects of RHR are conserved across genera and species, the exact details likely vary. Parvovirus genomes have distinct starting points of replication that contain palindromic DNA sequences. These sequences are able to alternate between inter- and intrastrand basepairing throughout replication, and they serve as self-priming telomeres at each end of the genome. They also contain two key sites necessary for replication used by the initiator protein: a binding site and a cleavage site. Telomere sequences have significant complexity and diversity, suggesting that they perform additional functions for many species. In MVM, for example, the left-end hairpin contains binding sites for transcription factors that modulate gene expression from an adjacent promoter. For AAV, the hairpins can bind to MRE11/Rad50/NBS1 (MRN) complexes and Ku70/80 heterodimers, which are involved in sensing and repairing DNA. In general, however, they have the same basic structure: imperfect palindromes in which a fully or primarily basepaired region terminates into an axial symmetry. These palindromes can fold into a variety of structures such as a Y-shaped structure and a cruciform-shaped structure. During replication, the termini act as hinges in which the imperfectly basepaired or partial cruciform regions surrounding the axis provide a favorable environment for unfolding and refolding of the hairpin. Some parvoviruses, such as AAV2, are homotelomeric, meaning the two palindromic telomeres are similar or identical and form part of larger (inverted) terminal repeat sequences. Replication at each terminal ending is therefore similar. Other parvoviruses, such as MVM, are heterotelomeric, meaning they have two physically different telomeres. As a result, heterotelomeric parvoviruses tend to have a more complex replication process since the two telomeres have different replication processes. In general, homotelomeric parvoviruses replicate both ends via a process called terminal resolution, whereas heterotelomeric parvoviruses replicate one end by terminal resolution and the other end by an asymmetric process called junction resolution. 
Whether a genus is hetero- or homotelomeric, along with other genomic characteristics, is shown in the following table. General process The entire process of rolling hairpin replication, which has distinct, sequential stages, can be summarized as follows: 1. The coding portion of the genome is replicated: synthesis starts from the 3′-end of the 3′ hairpin, which acts as a primer, and continues until the newly synthesized strand is connected to the 5′-end of the 5′ hairpin, producing a duplex DNA molecule that contains two strands of the coding portion of the genome. 2. mRNA that encodes the viral replication initiator protein is transcribed and subsequently translated to synthesize the protein. 3. The initiator protein binds to and cleaves the DNA within a region called the origin, which results in the hairpin unfolding into a linear, extended form. At the same time, the initiator protein establishes a replication fork with its helicase activity. 4. The extended-form hairpin is replicated to create an inverted copy of the telomere on the newly synthesized strand. 5. The two strands of that end refold back into two hairpins, which repositions the replication fork to switch templates and move in the opposite direction. 6. DNA replication continues in a linear manner from one end to the other using the opposite strand as a template. 7. Upon reaching the other end, that end's hairpin is unfolded and refolded to replicate the terminus and once again swap templates and change the direction of replication. This back-and-forth replication is continually repeated, producing a concatemer of multiple copies of the genome. 8. The viral initiator protein periodically excises individual genomic strands of DNA from the replicative concatemer. 9. Excised ssDNA genomes are packaged into newly constructed viral capsids. Preparation for replication Upon cell entry, a tether about 24 nucleotides in length that attaches the viral protein NS1, essential in replication, to the virion is cleaved off the virion, to be reattached later. After cell entry, virions accumulate in the cell nucleus while the genome is still contained within the capsid. These capsids may be reconfigured to an open or transitioned state during entry. The exact mechanism by which the genome leaves the capsid is unclear. For AAV, it has been suggested that nuclear factors disassemble the capsid, whereas for MVM, it appears as if the genome is ejected in a 3′-to-5′ direction from an opening in the capsid called a portal. Parvoviruses lack genes capable of inducing resting cells to enter their DNA synthesis phase (S-phase). Additionally, naked ssDNA is likely to be unstable, perceived as foreign by the host cell, or improperly replicated by host DNA repair. For these reasons, the genome must either be converted rapidly to its less obstructive, more stable duplex form or retained within the capsid until it is uncoated during S-phase. Typically, the latter occurs, and the virion remains silent in the nucleus until the host cell enters S-phase by itself. During this waiting period, virions may make use of certain strategies to evade host defense mechanisms and protect their hairpins and DNA until S-phase is reached, though it is unclear how this occurs. Since the genome is packaged as ssDNA, creation of a complementary strand is necessary before gene expression. DNA polymerases are only able to synthesize DNA in a 5′ to 3′ direction, and they require a basepaired primer to begin synthesis.
Parvoviruses address these limitations by using their termini as primers for complementary strand synthesis. A 3′ hydroxyl end of the left-hand (3′) terminus pairs with an internal base to prime initial DNA synthesis, resulting in the conversion of the ssDNA genome to its first duplex form. This is a monomeric double-stranded DNA molecule in which the two strands are covalently cross-linked to each other at the left end by a single copy of the viral telomere. Synthesis of the duplex form precedes NS1 expression, so when the replication fork of initial complementary strand synthesis reaches the right (5′) end, it does not displace and copy the right-end hairpin. This allows the 3′-end of the new DNA strand to be covalently ligated to the 5′-end of the right hairpin by a host ligase, thereby creating the duplex molecule. During this step, the tether sequence that was present before viral entry into the cell is resynthesized. Essential viral proteins and initiation Once an infected cell enters S-phase, parvovirus genomes are converted to their duplex form by host replication machinery, and mRNA that encodes non-structural (NS) proteins is transcribed starting from a viral promoter (P4 for MVM). One of these NS proteins is usually called NS1, but also Rep1 or Rep68/78 in the genus Dependoparvovirus, to which AAV belongs. NS1 is a site-specific DNA-binding protein that acts as the replication initiator protein via its nickase activity. It also mediates excision of both ends of the genome from duplex RF intermediates via a transesterification reaction that introduces a nick into specific duplex origin sequences. Key components of NS1 include an HUH endonuclease domain toward the N-terminus of the protein and a superfamily 3 (SF3) helicase toward the C-terminus, as well as ATPase activity. It binds to ssDNA, RNA, and, site-specifically, duplex DNA at reiterations of the tetranucleotide sequence 5′-ACCA-3′. These sequences are present in the viral replication origins and repeated at multiple sites throughout the genome in more or less degenerate forms. NS1 nicks the covalently closed right-end telomere via a transesterification reaction that liberates a basepaired 3′ nucleotide as a free hydroxyl (-OH). This reaction is assisted by a host DNA-binding protein of the high mobility group 1/2 (HMG1/2) family and occurs at the replication origin, OriR, which is created by sequences in and immediately adjacent to the right hairpin. The left-end telomere of MVM, a heterotelomeric parvovirus, contains sequences that can give rise to replication origins in higher-order duplex intermediates, but these sequences are inactive in the hairpin terminus of the monomeric molecule, so NS1 always initiates replication at the right end. The 3′-OH that is freed by nicking acts as a primer for the DNA polymerase to start complementary strand synthesis while NS1 remains covalently attached to the 5′-end via a tyrosine residue. Consequently, a copy of NS1 remains attached to the 5′-end of all RF and progeny DNA throughout replication, packaging, and virion release. NS1 is only able to bind to this specific site by assembling into homodimers or higher-order multimers, which happens in the presence of adenosine triphosphate (ATP) and is likely mediated by NS1's helicase domain. In vivo studies have shown that NS1 can assemble into a variety of oligomeric states, but it most likely assembles into hexamers to fulfill the functions of both the endonuclease domain and the helicase domain.
Starting from the location of the nick, it is thought that NS1 organizes a replication fork and acts as the replicative 3′-to-5′ helicase. Near its C-terminus, NS1 contains an acidic transcriptional activation domain. This domain acts to upregulate transcription starting from a viral promoter (P38 for MVM) when NS1 is bound to a series of 5′-ACCA-3′ motifs, called the tar sequence, positioned upstream (toward the 5′-end) of the promoter unit, and via interactions between NS1 and various transcription factors. NS1 also recruits the cellular replication protein A (RPA) complex, which is essential for establishing the new replication fork and for binding and stabilizing displaced single strands. While NS1 is the only non-structural protein essential for all parvoviruses, some have other individual proteins that are essential for replication. For MVM, NS2 appears to reprogram the host cell for efficient DNA amplification, single-strand progeny synthesis, capsid assembly, and virion export, though it seems to lack direct involvement in these processes. NS2 initially accumulates up to three times more quickly than NS1 in early S-phase but is turned over rapidly by a proteasome-mediated pathway. As the infectious cycle progresses, NS2 becomes less common as P38-driven transcription becomes more prominent. Another example is the nuclear phosphoprotein NP1 of bocaviruses, which, if not synthesized, results in non-viable progeny genomes. As viral NS proteins accumulate, they commandeer the host cell replication apparatus, terminating host cell DNA synthesis and causing viral DNA amplification to begin. Interference with host DNA replication may be due to direct effects on host replication proteins that are not essential for viral replication, to extensive nicking of host DNA, or to restructuring of the nucleus during viral infection. Early in infection, parvoviruses establish replication foci in the nucleus that are termed autonomous parvovirus-associated replication (APAR) bodies. NS1 co-localizes with replicating viral DNA in these structures with other cellular proteins necessary for viral DNA synthesis, while other complexes not required for replication are sequestered from APAR bodies. The exact manner by which proteins are included or excluded from APAR bodies is unclear and appears to vary from species to species and between cell types. As infection progresses, APAR microdomains begin to coalesce with other, formerly distinct, nuclear bodies to form progressively larger nuclear inclusions where viral replication and virion assembly occur. After S-phase begins, the host cell is forced to synthesize viral DNA and cannot leave S-phase. MVM right-end origin The right-end hairpin of MVM contains 248 nucleotides organized into a cruciform shape. This region is almost perfectly basepaired, with just three unpaired bases at the axis and a mismatched region positioned 20 nucleotides from the axis. A three-nucleotide insertion, AGA or TCT, on one strand separates opposing pairs of NS1-binding sites, creating a 36-basepair palindrome that can assume an alternate cruciform configuration. This configuration is expected to destabilize the duplex, which facilitates its ability to function as a hinge. The mismatch of the unpaired bases, rather than the three-nucleotide sequence itself, may help to promote instability of duplex DNA. Fully duplex linear forms of the right-end hairpin sequence also function as NS1-dependent origins.
For many parvoviral telomeres, however, only an initiator binding site next to the nick site is required for the origin function so that the minimal sequences required for nicking are less than 40 basepairs in length. For MVM, the minimal right-end origin is around 125 basepairs in length and includes most of the hairpin sequence because at least three recognition elements are involved: the nick site 5′-CTWWTCA-3′ (element 1), positioned seven nucleotides upstream from a duplex NS1-binding site (element 2) that is oriented to have the attached NS1 complex extending over the nick site, and a second NS1-binding site (element 3), which is adjacent to the hairpin axis. The second binding site is over 100 basepairs away from the nick site but is required for NS1-mediated cleavage. In vivo, there is slight variation in the position of the nick, plus or minus one nucleotide, with one position preferred. During nicking, this site is likely exposed as a single strand and is potentially stabilized as a minimal stem-loop by the tetranucleotide inverted repeats to the sides of the site. Optimal forms of the NS1-binding site contain at least three tandem copies of the 5′-ACCA-3′ sequence. Modest alterations to these motifs only have a small effect on affinity, which suggests that each tetranucleotide motif is recognized by different molecules in the NS1 complex. The NS1-binding site that positions NS1 over the nick site in the right-end origin is a high affinity site. With ATP, NS1 binds asymmetrically over the aforementioned sequence, protecting a region 41 basepairs in length from digestion. This footprint extends just five nucleotides beyond the 3′-end of the ACCA repeat but 22 nucleotides beyond the 5′-end so that the footprint ends 15 nucleotides beyond the nick site, placing NS1 in position to nick the origin. Nicking only occurs if the second, distant NS1-binding site is also present in the origin and the entire complex is activated by addition of HMG1. In the absence of NS1, HMG1 binds the hairpin sequence independently, causing it to bend, without protecting any region from digestion. HMG1 can also directly bind to NS1 and mediates interactions between NS1 molecules bound to their recognition elements in the origin, so it is essential for formation of the cleavage complex. The ability of the axis region to reconfigure into a cruciform does not appear to be important in this process. Cleavage is dependent on the correct spacing of the elements of the origin, so additions and deletions can be lethal, whereas substitutions can be tolerated. Addition of HMG1 appears to only slightly adjust the sequences protected by NS1, but the conformation of the intervening DNA changes, folding into a double helical loop that extends about 30 basepairs through a guanine-rich element in the hairpin stem. Between this element and the nick site there are five thymidine residues included in the loop, and the site has a region to its side containing many alternating adenine and thymine residues, which likely increases flexibility. The creation of the loop likely allows the terminus to assume a specific 3-dimensional structure required to activate the nickase since origins that fail to reconfigure into a double-helical loop once HMG1 is added are not nicked. Terminal resolution Following nicking, a replication fork is established at the newly exposed 3′ nucleotide that proceeds to unfold and copy the right-end hairpin through a series of melting and reannealing reactions. 
This process begins once NS1 nicks the inboard end of the original hairpin. The terminal sequence is then copied in the opposite direction, which produces an inverted copy of the original sequence. The end result is a duplex extended-form terminus that contains two copies of the terminal sequence. While NS1 is required for this, it is unclear if unfolding is mediated by its helicase activity in front of the fork or by destabilization of the duplex following DNA binding at one of its 5′-(ACCA)-3′ recognition sites. This process is usually called terminal resolution but also hairpin transfer or hairpin resolution. Terminal resolution occurs with each round of replication, so progeny genomes contain an equal number of each terminal orientation. The two orientations are termed "flip" and "flop", and may be represented as R and r, or B and b, for the flip and flop of the right-end telomere and L and l, or A and a, for the flip and flop of the left-end telomere. Since parvoviral terminal palindromes are imperfect, it is easy to identify which orientation is which. The extended-form duplex telomeres generated during terminal resolution are melted in a reaction mediated by NS1 with ATP hydrolysis, causing individual strands to fold back on themselves to create hairpin "rabbit ear" structures that have the flip and flop of the termini. This requires the NS1 helicase activity as well as its site-specific binding activity, the latter of which enables NS1 to bind to symmetrical copies of NS1-binding sites that surround the axis of the extended-form terminus. Rabbit ear formation allows the 3′ nucleotide of the newly synthesized DNA strand to pair with an internal base, which repositions the replication fork in a strand-switching maneuver that primes synthesis of additional linear sequences. Switching from DNA synthesis to rabbit-ear formation at the end of terminal resolution may require different types of NS1 complexes. Alternatively, the NS1 complex may remain intact during this switch, being ready to start strand displacement synthesis following refolding into rabbit ears. After the replication fork is repositioned, replication continues toward the left end, using the newly synthesized DNA strand as a template. At the left end of the genome, NS1 is probably required to unfold the hairpin. NS1 appears to be directly involved in melting-out and reconfiguring the resulting extended-form left-end duplexes into rabbit ear structures, though this reaction seems to be less efficient than at the right-end terminus. Dimeric and tetrameric concatemers of the genome are generated successively for MVM. In these concatemers, alternating unit-length genomes are fused through a palindromic junction in left-end to left-end and right-end to right-end orientations. In total, RHR results in coding sequences of the genome being copied twice as often as the termini. Both linear and hairpin configurations of the right-end telomere support initiation of RHR, so resolution of duplex right-end to right-end junctions can occur symmetrically on the basepaired duplex sequence or after this complex is melted and reconfigured into two hairpins. It is unclear which of these two reactions is more common since both appear to produce identical results. For AAV, each telomere is 125 bases in length and capable of folding into a T-shaped hairpin.
AAV contains a Rep gene that encodes four Rep proteins, two of which, Rep68 and Rep78, act as replication initiator proteins and fulfill the same functions, such as the nickase and helicase activities, as NS1. They recognize and bind to a repeated (GAGC) motif in the stem region of the terminus and nick a site 20 bases away termed the terminal resolution site (trs). AAV undergoes the same terminal resolution process as MVM, but at both ends. The other two Rep proteins, Rep52 and Rep40, are not involved in DNA replication but are implicated in the synthesis of progeny genomes. AAV replication is dependent on a helper virus that is either an adenovirus or a herpesvirus that coinfects the cell. In the absence of coinfection, the AAV genome is integrated into the host cell's DNA until coinfection occurs. A general rule is that parvoviruses with identical termini, i.e. homotelomeric parvoviruses such as AAV and B19, replicate both ends by terminal resolution, generating equal numbers of flips and flops of each telomere. Parvoviruses that have different termini, i.e. heterotelomeric parvoviruses like MVM, replicate one end by terminal resolution and the other end by asymmetric junction resolution, which conserves a single-sequence orientation and requires different structural arrangements and cofactors to activate NS1's nickase. AAV DNA intermediates containing covalently linked sense and antisense strands yield genomic concatemers under denaturing conditions, indicating that AAV replication also synthesizes duplex concatemers that require some form of junction resolution. MVM left-end origin In negative-sense MVM genomes, the left-end hairpin is 121 nucleotides in length and exists in a single flip sequence orientation. This telomere is Y-shaped and contains small internal palindromes that fold into the "ears" of the Y, a duplex stem region 43 nucleotides in length that is interrupted by an asymmetric thymidine residue, and a mismatched "bubble" sequence in which the 5′-GAA-3′ sequence on the inboard arm is opposite 5′-GA-3′ in the outboard strand. Sequences in this hairpin are involved in both replication and regulation of transcription. The elements involved in these two functions are distributed between the two arms of the hairpin. The left-end telomere of MVM, and likely of all heterotelomeric parvoviruses, cannot function as a replication origin in its hairpin configuration. Instead, a single origin on the lower strand is created when the hairpin is unfolded, extended, and copied to form a duplex basepaired sequence that spans adjacent genomes in the dimer RF. Within this structure, the sequence from the outboard arm that surrounds a GA/TC dinucleotide serves as an origin, OriLTC. The equivalent GAA/TTC sequence on the inboard arm that contains the bubble trinucleotide, called OriLGAA, does not serve as an origin. The inboard arm and hairpin configuration of the terminus instead appear to function as upstream control elements for the viral transcriptional promoter P4. Additionally, the ability to segregate one arm from nicking appears essential for replication. The minimal linear left-end origin is about 50 basepairs long and extends from two 5′-ACGT-3′ motifs, spaced five nucleotides apart at one end, to a position seven basepairs beyond the nick site. The bubble's GA sequence itself is relatively unimportant, but the space that it occupies is necessary for the origin to function.
Within the origin, there are three recognition sequences: an NS1-binding site that orients the NS1 complex over the nick site 5′-CTWWTCA-3′, which is located 17 nucleotides downstream (toward the 3′-end), and the two ACGT motifs. These motifs bind a heterodimeric cellular factor called either parvovirus initiation factor (PIF) or glucocorticoid modulating element-binding protein (GMEB). PIF is a site-specific DNA-binding heterodimeric complex that contains two subunits, p96 and p79, and functions as a transcription modulator in the host cell. It binds DNA via a KDWK fold and recognizes two ACGT half-sites. The spacing between these sites can vary significantly for PIF, from one to nine nucleotides, with an optimal spacing of six. PIF stabilizes the binding of NS1 on the active form of the left-end origin, OriLTC, but not on the inactive form, OriLGAA, because the two complexes are able to establish contact over the bubble dinucleotide. The left-end hairpins of all other species in the genus Protoparvovirus, to which MVM belongs, have bubble asymmetries and PIF-binding sites, though with slight variation in spacing. This suggests that they all share a similar origin segregation mechanism. Asymmetric junction resolution Due to the location of the active origin OriLTC in the dimer junction, synthesis of new copies of the left-end hairpin in the correct, i.e. flip, orientation is not straightforward since a replication fork moving from this site through the linear bridge structure should synthesize new DNA in the flop orientation. Instead, the left-hand MVM dimer junction is resolved asymmetrically in a process that creates a cruciform intermediate. This maneuver accomplishes two things: it allows synthesis of the new DNA in the correct sequence orientation, and it creates a structure that can be resolved by NS1. This "heterocruciform" model of synthesis suggests that resolution is driven by the NS1 helicase activity and depends on the inherent instability of the duplex palindrome, a property that allows it to switch between its linear and cruciform configurations. NS1 initially introduces a single-strand nick in OriLTC in the B ("right") arm of the junction and becomes covalently attached to the DNA on the 5′ side of the nick, exposing a basepaired 3′ nucleotide. Two outcomes can then occur, depending on the speed with which a replication fork is assembled. If assembly is rapid, then while the junction is in its linear configuration, "read-through" synthesis copies the upper strand, which regenerates the duplex junction and displaces a positive-sense strand that feeds back into the replicative pool. This promotes MVM DNA amplification but does not lead to synthesis of new terminal sequences in the correct orientation or to junction resolution. To create a resolvable structure, the initial nicking must be followed by melting and rearrangement of the dimer junction into a cruciform. This is driven by the 3′-to-5′ helicase activity of the 5′-linked NS1 complex. Once this cruciform extends to include sequences beyond the nick site, the exposed primer at the nick site in OriLTC undergoes template switching by annealing with its complement in the lower arm of the cruciform. If a fork assembles after this point, then the subsequent synthesis unfolds and copies the lower cruciform arm. This creates a heterocruciform intermediate that contains the newly synthesized telomere in the flip sequence orientation attached to the lower strand of the B arm. This modified junction is called MJ2.
The lower arm of MJ2 is an extended-form duplex palindrome that is essentially identical to those generated during terminal resolution. Once MJ2 is synthesized, the lower arm becomes susceptible to rabbit-ear formation. This repositions the 3′ nucleotide of the newly synthesized copy of the lower arm so that it pairs with inboard sequences on the junction's B arm to prime strand displacement synthesis. If a replication fork is created at this 3′ nucleotide, then the lower strand of the B arm is copied, creating an intermediate junction called MJ1 and progressively displacing the upper strand. This leads to the release of the newly synthesized B turn-around (B-ta) sequence. The residual cruciform, called δJ, is partially single-stranded at the upper part of the B arm and contains the intact upper strand of the junction paired to the lower strand of the A ("left") arm, with an intact copy of the left-end hairpin, ending in a 5′ NS1 complex. Since δJ carries the NS1 helicase, it is presumed to periodically alter configuration. The next step is less certain but can be inferred based on what is known about the process thus far. The NS1 helicase is expected to create a dynamic structure in which the nick site in δJ in the normally inactive A side is temporarily but repeatedly exposed in a single-stranded form during duplex-to-hairpin rearrangements, which allows NS1 to engage the nick site in the origin OriLGAA without the help of a cofactor. The nick would leave NS1 covalently attached to the positive-sense "B" strand of δJ and lead to the release of this strand. Nicking also leaves open a basepaired 3′ nucleotide on the "A" strand of δJ to prime DNA synthesis. If a replication fork is established here, then the A strand is unfolded and copied to create its duplex extended form. When MVM genomes replicate in vivo, the aforementioned nick may not occur because both ends of the dimer replicative form contain efficient right-end hairpin origins. Therefore, replication forks may progress back toward the dimer junction from the genome's right end, copying the top strand of the B arm before the final resolution nick. This bypasses dimer bridge resolution and recycles the top strand into a replicating duplex dimer pool. In a closely related virus, LuIII, the single-strand nick releases a positive-sense strand with its left-end hairpin in the flop orientation. Unlike MVM, LuIII packages strands of both senses with equal frequency. In the negative-sense strands, the left-end hairpins are all in the flip orientation, while in the positive-sense strands, there are an equal number of flip and flop orientations. Compared to MVM, LuIII contains a two-base insertion immediately 3′ of the nick site in the right origin, which impairs its efficiency. Because of this, the reduced efficiency of replication fork assembly at the genome's right end may favor single-strand nicking by giving it more time to occur. Synthesis of progeny Individual progeny genomes are excised from genomic replicative concatemers, starting with the introduction of breaks in replication origins, usually by the replication initiator protein. This results in the establishment of new replication forks that replicate the telomeres in a combination of terminal resolution and junction resolution and displace individual ssDNA genomes from the replicative molecule. At the end of this process, the telomeres are folded back inwards to form hairpins on excised genomes.
The extended-form termini created during excision resemble the extended-form molecules prior to terminal resolution, so they can be melted out and refolded into rabbit ears for additional rounds of replication. Within an infected cell, numerous replicative concatemers are therefore able to arise. Displacement of progeny ssDNA genomes occurs predominantly or exclusively during active DNA replication or when cells are assembling viral particles. Displacement of single strands may therefore be associated with packaging viral DNA into capsids. Earlier research suggested that the preassembled viral particle may sequester the genome in a 5′-to-3′ direction as it is displaced from the fork, but more recent research suggests that packaging is performed in a 3′-to-5′ direction driven by the NS1 helicase using newly synthesized single strands. It is not clear if these single strands are released into the nucleoplasm so that packaging complexes are physically separate from replication complexes or if the replication intermediates serve as both replication and packaging substrates. In the latter case, newly displaced progeny genomes would be kept in the replication complex via interactions between their 5′-linked NS1 molecules and NS1 or capsid proteins that are physically associated with replicating DNA. Genomes are inserted into the capsid via an entrance called a portal situated at one of the icosahedral 5-fold axes of the capsid, which is possibly opposite the opening from which genomes are expelled early in the replication cycle. Strand selection for encapsidation likely does not involve specific packaging signals but may be predicted by the Kinetic Hairpin Transfer (KHT) mathematical model, which explains the distribution of the strands and terminal conformations of packaged genomes in terms of the efficiency with which each terminus type can undergo reactions that allow it to be copied and reformed. In other words, the KHT model postulates that the relative efficiency with which two genomic termini are resolved and replicated determines the distribution of amplified replication intermediates created during infection and ultimately the efficiency with which ssDNAs of characteristic polarity and terminal orientations are excised, which will then be packaged with equal efficiency. Preferential excision of particular genomes is only apparent during packaging. Therefore, among parvoviruses that package strands of one sense, replication appears to be biphasic. At early times, both sense strands are excised. This is followed by a switch in the replication mode that allows for exclusive synthesis of a single sense for packaging. A modified form of the KHT model, called the preferential strand displacement model, proposes that the aforementioned switch in replication is caused by the onset of packaging because the substrate for packaging is probably a newly displaced DNA molecule. For heterotelomeric parvoviruses, imbalance of origin firing leads to preferential displacement of negative-sense strands from the right-end origin. The relative frequency of sense strands in packaged virions can therefore be used to infer the type of resolution mechanism used during excision. Shortly after the start of S-phase, translation of viral mRNA leads to the accumulation of capsid proteins in the nucleus. These proteins assemble into oligomers that are assembled into intact empty capsids.
After encapsidation, complete virions may be exported from the nucleus to the exterior of the cell before disintegration of the nucleus. Disruption of the host cell environment may also occur later on in infection. This results in cell lysis via necrosis or apoptosis, which releases virions to the outside of the cell. Comparison to rolling circle replication Many small replicons that have circular genomes, such as circular ssDNA viruses and circular plasmids, replicate via rolling circle replication (RCR), which is a unidirectional, strand displacement form of DNA replication similar to RHR. In RCR, successive rounds of replication, which proceed in a loop around the genome, are initiated and terminated by site-specific single-strand nicks made by a replicon-encoded endonuclease, variously called the nickase, relaxase, mobilization protein (mob), transesterase, or replication protein (Rep). The replication initiator protein of parvoviruses is genetically related to these other endonucleases. RCR initiator proteins contain three motifs considered to be important for replication. Two of these are retained within parvovirus initiator proteins: an HUHUUU cluster, which is presumed to bind a divalent metal ion required for nicking, and a YxxxK motif that contains the active-site tyrosine residue that attacks the phosphodiester bond of target DNA. In contrast to RCR initiator proteins, which can join together DNA strands, RHR initiator proteins have only vestigial traces of being able to perform ligation. RCR begins when the initiator protein nicks a DNA strand at a specific sequence in the replication origin region. This is done through a transesterification reaction that forms a 5′-phosphate bond that connects the DNA to the active-site tyrosine and frees the 3′-end hydroxyl (3′-OH) adjacent to the nick site. The 3′-end is then used as a primer for the host DNA polymerase to begin replication while the initiator protein remains attached to the 5′-end of the "original" strand. After one loop of replication around the circular genome, the nick site returns to the original initiator complex; the initiator protein, while still attached to the parent strand, attacks the regenerated duplex nick site, or a nearby second site in some cases, by means of a topoisomerase-like nicking-joining reaction. During the aforementioned reaction, the initiator protein cleaves a new nick site and is transferred across the analogous phosphodiester bond. It thereby becomes attached to the new 5′-end while ligating the 5′-end of the first strand to which it was originally attached to the 3′-end of the same strand. This second mechanism varies depending on the replicon. Some replicons such as the virus ΦX174 contain a second active tyrosine residue in the initiator protein. Others use the analogous active-site tyrosine in a second initiator protein that is present as part of a multimeric nickase complex. This second nicking reaction may occur after one loop, or successive loops may occur in which a concatemer containing multiple copies of the genome is created. The result of this nick is that displaced genomes become detached from the replicative molecule. These copies of the genome are ligated and may either be encapsidated into progeny capsids, provided they are monomeric, or converted to a covalently closed double-stranded form by a host DNA polymerase for further replication.
While RHR generally involves replication of both sense strands in a continuous process, in RCR complementary strand synthesis and genomic strand synthesis occur separately. The strategies used in RHR to engage the nick site are also present in RCR. Most RCR origins are in the form of duplex DNA that has to be melted before nicking. RCR initiators accomplish this by binding to specific DNA-binding sequences in the origin next to the initiation site. The latter site is then melted in a process that consumes ATP and which is assisted by the ability of the separated strands to reconfigure into stem-loop structures. In these structures, the nick site is presented on an exposed loop. Like RHR initiator proteins, many RCR initiator proteins contain helicase activity, which allows them to melt the DNA prior to nicking and serve as the 3′-to-5′ helicase in the replication fork. Notes References Bibliography DNA replication Molecular biology Parvoviruses
Rolling hairpin replication
[ "Chemistry", "Biology" ]
9,366
[ "Genetics techniques", "DNA replication", "Molecular genetics", "Molecular biology", "Biochemistry" ]
66,387,301
https://en.wikipedia.org/wiki/6-Chloronicotine
6-Chloronicotine is a drug which acts as an agonist at neural nicotinic acetylcholine receptors. It substitutes for nicotine in animal studies with around twice the potency, and shows antinociceptive effects. See also ABT-418 Altinicline Epibatidine Tebanicline References Nicotinic agonists Chloropyridines Pyrrolidines
6-Chloronicotine
[ "Chemistry" ]
90
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
66,390,800
https://en.wikipedia.org/wiki/Guillotine%20partition
Guillotine partition is the process of partitioning a rectilinear polygon, possibly containing some holes, into rectangles, using only guillotine cuts. A guillotine cut (also called an edge-to-edge cut) is a straight bisecting line going from one edge of an existing polygon to the opposite edge, similarly to a paper guillotine. Guillotine partition is particularly common in designing floorplans in microelectronics. An alternative term for a guillotine partition in this context is a slicing partition or a slicing floorplan. Guillotine partitions are also the underlying structure of binary space partitions. There are various optimization problems related to guillotine partition, such as minimizing the number of rectangles or the total length of cuts. These are variants of polygon partitioning problems, where the cuts are constrained to be guillotine cuts. A related but different problem is guillotine cutting. In that problem, the original sheet is a plain rectangle without holes. The challenge comes from the fact that the dimensions of the small rectangles are fixed in advance. The optimization goals are usually to maximize the area of the produced rectangles or their value, or minimize the waste or the number of required sheets. Computing a guillotine partition with a smallest edge-length In the minimum edge-length rectangular-partition problem, the goal is to partition the original rectilinear polygon into rectangles such that the total edge length is a minimum. This problem can be solved in O(n^5) time even if the raw polygon has holes. The algorithm uses dynamic programming based on the following observation: there exists a minimum-length guillotine rectangular partition in which every maximal line segment contains a vertex of the boundary. Therefore, in each iteration, there are O(n) possible choices for the next guillotine cut, and there are altogether O(n^4) subproblems (a sketch of this dynamic program for the special case of point holes is given below). In the special case in which all holes are degenerate (single points), the minimum-length guillotine rectangular partition is at most 2 times the minimum-length rectangular partition. By a more careful analysis, it can be proved that the approximation factor is in fact at most 1.75. It is not known if the 1.75 is tight, but there is an instance in which the approximation factor is 1.5. Therefore, the guillotine partition provides a constant-factor approximation to the general problem, which is NP-hard. These results can be extended to a d-dimensional box: a guillotine partition with minimum edge-length can be found in time polynomial in n for any fixed d, and the total (d−1)-volume in the optimal guillotine partition is at most a constant factor, depending on d, times that of an optimal d-box partition. Arora and Mitchell used the guillotine-partitioning technique to develop polynomial-time approximation schemes for various geometric optimization problems. Number of guillotine partitions Besides the computational problems, guillotine partitions were also studied from a combinatorial perspective. Suppose a given rectangle should be partitioned into smaller rectangles using guillotine cuts only. Obviously, there are infinitely many ways to do this, since even a single cut can take infinitely many values. However, the number of structurally-different guillotine partitions is bounded. In two dimensions, an upper bound attributed to Knuth is known; the exact number of structurally different guillotine partitions into n rectangles is the (n−1)th (large) Schröder number. In d dimensions, Ackerman, Barequet, Pinter and Romik give an exact summation formula and prove that the count grows exponentially, like c_d^n up to a polynomial factor, for a constant c_d depending on the dimension. When d = 2 the growth constant is 3 + 2√2 ≈ 5.83, matching the asymptotics of the Schröder numbers.
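The following is a minimal sketch of the dynamic program described above, specialised to a plain rectangle containing degenerate (point) holes. It is illustrative only: the function names are invented, and candidate cuts are restricted to the coordinates of the holes, which is justified by the quoted observation that some optimal guillotine partition has every maximal cut segment pass through a hole or a boundary vertex.

from functools import lru_cache

def min_guillotine_length(width, height, holes):
    # holes: list of (x, y) points strictly inside the rectangle;
    # every hole must end up on a cut line of the final partition.
    holes = tuple(sorted(holes))

    @lru_cache(maxsize=None)
    def solve(x1, y1, x2, y2):
        inside = [(hx, hy) for hx, hy in holes
                  if x1 < hx < x2 and y1 < hy < y2]
        if not inside:
            return 0.0          # no interior hole: no cut needed
        best = float("inf")
        for hx in {h[0] for h in inside}:   # vertical cut at x = hx
            best = min(best, (y2 - y1)
                       + solve(x1, y1, hx, y2) + solve(hx, y1, x2, y2))
        for hy in {h[1] for h in inside}:   # horizontal cut at y = hy
            best = min(best, (x2 - x1)
                       + solve(x1, y1, x2, hy) + solve(x1, hy, x2, y2))
        return best

    return solve(0.0, 0.0, float(width), float(height))

# A single central hole forces one full-height or full-width cut:
print(min_guillotine_length(4, 3, [(2, 1.5)]))   # -> 3.0

Because subrectangles are identified by four coordinates drawn from the hole set, the number of memoised states is O(n^4) with O(n) cut choices each, mirroring the complexity argument above.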
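The Schröder numbers mentioned above satisfy a simple convolution recurrence, so the counts are easy to tabulate. A short sketch follows, assuming the indexing used above, where guillotine partitions into n rectangles are counted by the (n−1)th large Schröder number:

def schroder(n_max):
    # Large Schroeder numbers r_0, r_1, ... via the convolution
    # recurrence r_n = r_{n-1} + sum_{k=0}^{n-1} r_k * r_{n-1-k}.
    r = [1]
    for n in range(1, n_max + 1):
        r.append(r[n - 1] + sum(r[k] * r[n - 1 - k] for k in range(n)))
    return r

print(schroder(8))  # [1, 2, 6, 22, 90, 394, 1806, 8558, 41586]
# e.g. r[1] = 2: a rectangle splits into 2 rectangles by one
# horizontal or one vertical cut; r[2] = 6 three-rectangle slicings.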
Asinowski, Barequet, Mansour and Pinter also study the number of cut-equivalence classes of guillotine partitions. Coloring guillotine partitions A polychromatic coloring of a planar graph is a coloring of its vertices such that, in each face of the graph, each color appears at least once. Several researchers have tried to find the largest k such that a polychromatic k-coloring always exists. An important special case is when the graph represents a partition of a rectangle into rectangles. Dinitz, Katz and Krakovski proved that there always exists a polychromatic 3-coloring. Aigner-Horev, Katz, Krakovski and Loffler proved that, in the special sub-case in which the graph represents a guillotine partition, a strong polychromatic 4-coloring always exists. Keszegh extended this result to d-dimensional guillotine partitions, and provided an efficient coloring algorithm. Dimitrov, Aigner-Horev and Krakovski finally proved that there always exists a strong polychromatic 4-coloring. See also Binary space partitioning References Optimization algorithms and methods Discrete geometry Rectangular subdivisions
Guillotine partition
[ "Physics", "Mathematics" ]
997
[ "Discrete mathematics", "Tessellation", "Discrete geometry", "Rectangular subdivisions", "Symmetry" ]
69,274,191
https://en.wikipedia.org/wiki/InterPore
The International Society for Porous Media (InterPore) is a nonprofit independent scientific organization established in 2008. It aims to advance and disseminate knowledge for the understanding, description, and modeling of natural and industrial porous medium systems. It acts as a platform for researchers active in modeling of flow and transport in natural, biological, and technical porous media, such as soils, aquifers, oil and gas reservoirs, biological tissues, plants, fuel cells, wood, ceramics, concrete, textiles, paper, polymer composites, hygienic materials, food, foams, membranes, etc. History In the course of 2006, researchers from the Department of Earth Sciences, Utrecht University and the Institute for Modelling Hydraulic and Environmental Systems, University of Stuttgart, under the leadership of Professor Majid Hassanizadeh and Professor Rainer Helmig, respectively, developed a proposal for setting up a joint international graduate research program. The proposal was submitted to the German Research Foundation (DFG) and the Netherlands Organisation for Scientific Research (NWO), and was successfully funded. The research school started its activities on January 1, 2007, under the name NUPUS (Non-linearities and Upscaling in PoroUS Media). This project led to the idea of creating an international center for porous media wherein scientists from diverse disciplines who study porous media could exchange ideas and research activities. The European Society for Porous Media (Europore) was established in spring 2008. By summer 2008, the geographical scope was expanded beyond Europe and the name was changed to the International Society for Porous Media (InterPore). Bylaws were approved and the society was officially registered in fall 2008. InterPore Academy was established in 2020 to promote educational activities, mainly to serve industrial and/or younger researchers. The academy organizes short courses, webinars, and workshops. National Chapters National chapters are country-wide activity groups of InterPore. They form platforms for bringing together porous media researchers from academia and industry of a given country or region. A variety of activities, such as porous media workshops, conferences, and short courses, are organized by national chapters. National chapters compile a list of porous media companies in their countries to be able to interact with institutions and industries. As of 2021, InterPore has active national chapters in Australia, the Benelux countries, Brazil, China, Colombia, France, Germany, Greece, India, Iran, Israel, Italy, Mexico, Norway, Saudi Arabia, Spain, the United Kingdom, and the (southern) United States. InterPore Annual Meetings InterPore has organized the International Conference on Porous Media annually since 2009. General themes include: fundamentals of porous media; computational challenges in porous media simulation; experimental studies and applications involving porous media. Previous conferences have been hosted by Fraunhofer ITWM in Kaiserslautern, Germany; Texas A&M University in College Station, Texas, USA; I2M-Dept TREFLE (CNRS, ENSAM, University of Bordeaux), France; Purdue University in West Lafayette, Indiana, USA; the Technical University of Prague, Czech Republic; the University of Wisconsin in Milwaukee, USA; the University of Padova, Italy; the University of Cincinnati, Ohio, USA; the Technical University of Delft in Rotterdam, Netherlands; Louisiana State University in New Orleans, USA; and Universitat Politecnica de Valencia, Spain; plus two online conferences (2020 and 2021).
InterPore2022 is scheduled for May 30 - June 2, 2022 at Khalifa University in Abu Dhabi, UAE. References Materials science organizations
InterPore
[ "Materials_science", "Engineering" ]
713
[ "Materials science organizations", "Materials science" ]
76,674,682
https://en.wikipedia.org/wiki/Sonocatalysis
Sonocatalysis is a field of sonochemistry which is based on the use of ultrasound to change the reactivity of a catalyst in homogeneous or heterogeneous catalysis. It is generally used to support catalysis. This method of catalysis has been known since the creation of sonochemistry in 1927 by Alfred Lee Loomis (1887–1975) and Robert Williams Wood (1868–1955). Sonocatalysis depends on ultrasound, whose effects were first observed in 1794 by the Italian biologist Lazzaro Spallanzani (1729–1799). Principle General concept Sonocatalysis is not a self-sufficient catalysis technique but instead supports a catalyst in the reaction. Sonocatalysis and sonochemistry both derive from a phenomenon called "acoustic cavitation", which happens when a liquid is irradiated by ultrasound. Ultrasound creates large local variations of pressure and temperature, affecting the liquid's relative density and creating cavitation bubbles when the liquid pressure drops below its vapor pressure. When these bubbles collapse, energy is released, which comes from the transformation of kinetic energy into heat. Sonocatalysis may happen in the homogeneous phase or the heterogeneous phase, depending on whether the catalyst is in the same phase as the reaction mixture. The collapse of cavitation bubbles can create intense local conditions, reaching pressures of about 1000 atm and temperatures of about 5000 K. This may provoke the creation of highly energetic radicals. In a water-based environment, bubble collapse causes the formation of the hydroxyl radical HO• and the hydrogen radical H•. These radicals may then combine to produce different molecules, such as water H2O, hydroperoxyl HO2•, hydrogen peroxide H2O2 and dioxygen O2. Radical formation reactions due to the decomposition of water by ultrasound can be described this way:
H2O → HO• + H• (under ultrasound)
HO• + H• → H2O
H• + O2 → HO2•
2 HO• → H2O2
2 HO2• → H2O2 + O2
H2O + HO• → H2O2 + H•
Energy from ultrasonic irradiation differs from thermal or electromagnetic radiation energy in its duration, pressure, and the energy received per molecule. For example, 20 kHz ultrasound delivers an energy of 8.34 × 10−11 eV, while a 300 nm laser delivers 4.13 eV (a numerical check of these figures is given at the end of this article). Ultrasound nevertheless allows shorter reaction times and better yields. Direct and indirect irradiation There are two types of irradiation in sonocatalysis and sonochemistry: direct irradiation and indirect irradiation. In direct irradiation, the solution is in contact with a sound wave emitter (generally a transducer), while in indirect irradiation these two elements are separated by an irradiated bath. The bath transmits the radiation to the solution by convection. While indirect irradiation is the most used irradiation technique, direct irradiation is possible too, especially when the irradiated bath can itself serve as the container for the solution. Catalysts Homogeneous catalysts Metal carbonyls, such as Fe(CO)5, Fe3(CO)12, Cr(CO)6, Mo(CO)6 and W(CO)6, are very often used in homogeneous catalysis, because their structures make them stable species at standard temperature and pressure. Furthermore, their catalytic capacities are well known and efficient. Heterogeneous catalysts Carbon-based species like carbon nanotubes, graphene, graphene oxide, activated carbon, biochar, g-C3N4, carbon-doped materials, Buckminsterfullerene (C60), and mesoporous carbons are very often used in heterogeneous sonocatalysis. These species are effective sonocatalysts because they favour degradation processes during sonocatalysis.
They also show high activity and stability in sonocatalysis and provide a nucleation effect. These properties come from features such as optical activity, electrical resistivity and conductivity, chemical stability, mechanical strength, and their porous structures. These species are becoming more frequently used. Materials Transducers Sonocatalysis needs equipment other than catalysts to generate ultrasound, namely transducers, which create ultrasound by transforming electrical energy into mechanical energy. There are two types of transducers: piezoelectric transducers and magnetostrictive transducers. Piezoelectric transducers are used more often because they are cheaper, lighter, and less bulky. These transducers consist of single crystals or ceramics with two electrodes fixed to their sides. The electrodes receive an alternating voltage whose frequency is at most the transducer's resonance frequency. The crystals are then alternately compressed and dilated, creating a wave. Some examples of transducers The ultrasonic cleaner is a bath full of liquid. The liquid can transmit acoustic energy from the bottom of the bath to the solution in the container. This cleaner often generates ultrasound at low frequencies (from 20 to 60 kHz) and is inexpensive. However, it has some inconveniences, such as the difficulty of controlling the liquid temperature in the bath and the fact that irradiation isn't uniform everywhere in the bath. The cup-horn sonicator is similar to the ultrasonic cleaner, but it can irradiate using both direct and indirect irradiation. While the ultrasonic cleaner only generates ultrasound at low frequencies, the cup-horn sonicator can also generate ultrasound at high frequencies, and with a higher intensity. However, this equipment is very expensive due to its design. The "whistle" reactor is a reactor in which the reaction mix is continuously pumped through an adjustable-width opening into a delimited area where cavitation happens. Ultrasonic waves are generated in this area by the vibration of blades as the pumped solution passes through. This reactor is mainly used for homogeneous reaction mixes, as the solid part of heterogeneous mixes cannot pass through the whistle. This type of reactor is less frequently used. Applications The use of sonocatalysis has risen. Today, sonocatalysis is used in many fields, such as medicine, pharmacology, metallurgy, the environment, nanotechnology, and wastewater treatment. Health Active ingredient synthesis The example of pyrazole Several studies showed that sonocatalysis could increase the synthesis yields of pyrazoles, compounds that have antimicrobial, antihypertensive, anti-inflammatory and anticonvulsant activities. One study used sonocatalysis to develop a new synthesis route for these molecules, using ecological and economical reactants while keeping a high yield. An example is the synthesis of 3-methyl-5-phenyl-4,5-dihydro-1H-pyrazole-1-carbothioamide. Environment Pollutants degradation An example of the use of sonocatalysis is the degradation of pollutants. Ultrasound can generate the HO• radical from a water molecule. This radical is a strong oxidizing agent, which can degrade persistent organic pollutants. However, the reaction speed for hydrophobic compounds is low, so ultrasound is often paired with a solid catalyst.
The added solid catalyst provides nucleation sites that amplify the cavitation phenomenon and thus the ultrasonic efficiency. Near the solid-liquid contact surface, pressure is applied on one side of the bubble, causing a more violent collapse. Cationic Red 46 bleaching This principle can be applied to the oxidative bleaching of the dye Cationic Red 46 by zinc oxide supported on bentonite. An estimated 10% to 20% of organic dyes are lost during use and released into the environment. Finding new ways to improve dye bleaching is an important topic, as these dyes may be toxic and carcinogenic. The oxidation comes from the HO• radical, whose oxidizing capacity is well known. Indeed, a higher concentration of the HO• radical gives better bleaching of Cationic Red 46: the bleaching yield is 17.8% without ultrasound and 81.6% with ultrasound. However, the efficiency of sonocatalysis mainly comes from the combination of catalyst and ultrasound; for example, applying ultrasound alone bleaches only 25.4% of the dye. Tetracycline elimination Another example of pollutant degradation is the elimination of tetracycline, an antibiotic that is frequently found as a pollutant in wastewater. When tetracycline is dissolved in aqueous solution, ultrasound alone degrades it inefficiently because the process is kinetically unfavourable. Adding catalysts such as titanium dioxide TiO2 or hydrogen peroxide H2O2 to the ultrasound treatment speeds up degradation: thirty minutes are enough when ultrasound and both catalysts are used. Rhodamine B degradation Sonocatalysis is also used in rhodamine B degradation. Rhodamine B is a synthetic dye that may be harmful to aquatic plants when released in wastewater. Application to reactions Fenton's reaction Sonocatalysis can be applied to reactions like Fenton's reaction. By combining sonocatalysis (at a 20 kHz frequency) with Fenton's reaction, with a 5.0 mg/L iron chloride (FeCl2) mass concentration and a pH of 4, degradation efficiency is about 80% after 12 minutes. References Ultrasound Physical chemistry Catalysis
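As a quick numerical check of the energy comparison quoted in the General concept section, assuming the standard relations E = hf for the ultrasound and E = hc/λ for the photon, the following short sketch reproduces the quoted values to within about one percent:

h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electronvolt

print(h * 20e3 / eV)        # 20 kHz ultrasound -> ~8.3e-11 eV
print(h * c / 300e-9 / eV)  # 300 nm photon     -> ~4.13 eV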
Sonocatalysis
[ "Physics", "Chemistry" ]
2,098
[ "Catalysis", "Applied and interdisciplinary physics", "nan", "Chemical kinetics", "Physical chemistry" ]
76,686,234
https://en.wikipedia.org/wiki/Join%20count%20statistic
Join count statistics are a method of spatial analysis used to assess the degree of association, in particular the autocorrelation, of categorical variables distributed over a spatial map. They were originally introduced by the Australian statistician P. A. P. Moran. Join count statistics have found widespread use in econometrics, remote sensing and ecology. Join count statistics can be computed in a number of software packages including PASSaGE, GeoDA, PySAL and spdep. Binary data Given binary data x_i ∈ {0, 1} distributed over N spatial sites, where the neighbour relations between regions i and j are encoded in the spatial weight matrix w_ij, the join count statistics are defined as
J_BB = (1/2) Σ_i Σ_j w_ij x_i x_j
J_WW = (1/2) Σ_i Σ_j w_ij (1 − x_i)(1 − x_j)
J_BW = (1/2) Σ_i Σ_j w_ij (x_i − x_j)²
where the subscripts B and W refer to 'black' = 1 and 'white' = 0 sites. The relation J_BB + J_WW + J_BW = J, where J = (1/2) Σ_i Σ_j w_ij is the total number of joins, implies only three of the four numbers are independent. Generally speaking, large values of J_BB and J_WW relative to J_BW imply autocorrelation, and relatively large values of J_BW imply anti-correlation. To assess the statistical significance of these statistics, the expectation under various null models has been computed. For example, if the null hypothesis is that each sample is chosen at random according to a Bernoulli process with probability p, then Cliff and Ord show that
E[J_BB] = (1/2) S_0 p²
where S_0 = Σ_i Σ_j w_ij. However, in practice an approach based on random permutations is preferred, since it requires fewer assumptions. Local join count statistic Anselin and Li introduced the idea of the local join count statistic, following Anselin's general idea of a Local Indicator of Spatial Association (LISA). The local join count is defined by, e.g.,
BB_i = x_i Σ_j w_ij x_j
with similar definitions for WW_i and BW_i. This is equivalent to the Getis-Ord statistics computed with binary data. Some analytic results for the expectation of the local statistics are available based on the hypergeometric distribution, but due to the multiple comparisons problem a permutation-based approach is again preferred in practice. Extension to multiple categories When there are more than two categories, the join count statistics have been generalised to, e.g.,
J_rr = (1/2) Σ_i Σ_j w_ij I_r(x_i) I_r(x_j)
where I_r(x_i) is an indicator function for the variable x_i belonging to the category r, with analogous definitions for joins between two different categories. Analytic results are available or a permutation approach can be used to test for significance as in the binary case. Spatial analysis Covariance and correlation References
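To make the definitions above concrete, here is a small illustrative computation, written directly from the formulas and not taken from the cited software packages; the function name and data are invented for the example. It computes the binary join counts on a 3 × 3 grid under rook contiguity:

import numpy as np

def join_counts(x, W):
    # x: 0/1 data; W: symmetric binary weight matrix with zero diagonal.
    # The factor 1/2 counts each join once.
    x = np.asarray(x, dtype=float)
    bb = 0.5 * x @ W @ x
    ww = 0.5 * (1 - x) @ W @ (1 - x)
    bw = 0.5 * np.sum(W * (x[:, None] - x[None, :]) ** 2)
    return bb, ww, bw

n = 3
W = np.zeros((n * n, n * n))
for r in range(n):
    for c in range(n):
        i = r * n + c
        if c + 1 < n:
            W[i, i + 1] = W[i + 1, i] = 1   # east-west joins
        if r + 1 < n:
            W[i, i + n] = W[i + n, i] = 1   # north-south joins

x = [1, 1, 0, 1, 0, 0, 0, 0, 0]             # a 'black' cluster in one corner
bb, ww, bw = join_counts(x, W)
print(bb, ww, bw, bb + ww + bw)             # 2.0 6.0 4.0 12.0

The three counts sum to the 12 joins of the grid, as required by the identity above, and the clustered pattern gives relatively few black-white joins.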
Join count statistic
[ "Physics" ]
424
[ "Spacetime", "Space", "Spatial analysis" ]
76,689,273
https://en.wikipedia.org/wiki/Spatial%20weight%20matrix
The concept of a spatial weight is used in spatial analysis to describe neighbor relations between regions on a map. If location j is a neighbor of location i, then w_ij = 1; otherwise w_ij = 0. Usually (though not always) we do not consider a site to be a neighbor of itself, so w_ii = 0. These coefficients are encoded in the N × N spatial weight matrix W = (w_ij), where N is the number of sites under consideration. The spatial weight matrix is a key quantity in the computation of many spatial indices like Moran's I, Geary's C, Getis-Ord statistics and Join Count Statistics. Contiguity-Based Weights This approach considers spatial sites as nodes in a graph with links determined by a shared boundary or vertex. The elements of the spatial weight matrix are determined by setting w_ij = 1 for all connected pairs of nodes i and j, with all the other elements set to 0. This makes the spatial weight matrix equivalent to the adjacency matrix of the corresponding network. It is common to row-normalize the matrix W, replacing each element by w_ij / Σ_k w_ik. In this case the sum of all the elements of W equals the number of sites. There are three common methods for linking sites, named after the chess pieces which make similar moves:
Rook: sites are neighbors if they share an edge
Bishop: sites are neighbours if they share a vertex
Queen: sites are neighbours if they share an edge or a vertex
In some cases statistics can be quite different depending on the definition used, especially for discrete data on a grid. There are also other cases where the choice of neighbors is not obvious and can affect the outcome of the analysis. Bivand and Wong describe a situation where the value of spatial indices of association (like Moran's I) depends on the inclusion or exclusion of a ferry crossing between counties. There are also cases where regions meet in a tripoint or quadripoint where Rook and Queen neighborhoods can differ. Distance-Based Weights Another way to define spatial neighbors is based on the distance between sites. One simple choice is to set w_ij = 1 for every pair separated by a distance less than some threshold d. Cliff and Ord suggest the general form w_ij = g(d_ij) β_ij, where g is some function of the distance d_ij between sites i and j and β_ij is the proportion of the perimeter of i in contact with j. The function g(d_ij) = d_ij^(−α) is then suggested. Often the β term is not included and the most common values for α are 1 and 2. Another common choice for the distance decay function is w_ij = exp(−α d_ij), though a number of different kernel functions can be used. The exponential and other kernel functions typically set w_ii = 1, which must be considered in applications. It is possible to make the spatial weight matrix a function of 'distance class', writing w_ij(k), where k denotes the 'distance class', for example k = 1, 2, 3, ... corresponding to first, second, third etc. neighbors. In this case, functions of the spatial weight matrix become distance class dependent. For example, Moran's I becomes
I(k) = (N / Σ_i Σ_j w_ij(k)) × Σ_i Σ_j w_ij(k)(x_i − x̄)(x_j − x̄) / Σ_i (x_i − x̄)²
This defines a type of spatial correlogram; in this case, since Moran's I measures spatial autocorrelation, I(k) measures how the autocorrelation of the data changes as a function of distance class. Remembering Tobler's first law of geography, "everything is related to everything else, but near things are more related than distant things", it usually decreases with distance. Common distance functions include Euclidean distance, Manhattan distance and Great-circle distance. Spatial Lag One application of the spatial weight matrix is to compute the spatial lag
[Wx]_i = Σ_j w_ij x_j
For row-standardised weights initially set to w_ij = 1 for neighbours, and with w_ii = 0, [Wx]_i is simply the average value of x observed at the neighbors of site i.
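As an illustration of the definitions above (illustrative only; the variable names are invented), the following sketch builds a rook-contiguity weight matrix for a 4 × 4 grid, row-normalises it, and computes the spatial lag:

import numpy as np

n = 4
N = n * n
W = np.zeros((N, N))
for r in range(n):
    for c in range(n):
        i = r * n + c
        if c + 1 < n:
            W[i, i + 1] = W[i + 1, i] = 1   # east-west neighbours
        if r + 1 < n:
            W[i, i + n] = W[i + n, i] = 1   # north-south neighbours

W_row = W / W.sum(axis=1, keepdims=True)    # row-normalisation
x = np.arange(N, dtype=float)               # an attribute observed at each site
lag = W_row @ x                             # [Wx]_i = mean of x over i's neighbours
print(lag[0], (x[1] + x[4]) / 2)            # corner site 0: both print 2.5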
These lagged variables can then be used in regression analysis to incorporate the dependence of the outcome variable on the values at neighboring sites. The standard regression equation is
y = Xβ + ε
The spatial lag model adds the spatial lag vector to this:
y = ρWy + Xβ + ε
where ρ is a parameter which controls the degree of autocorrelation of y. This is similar to an autoregressive model in the analysis of time series. See Also Spatial Analysis Moran's I Geary's C Join Count Statistics Spatial analysis Covariance and correlation References
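A minimal sketch of the spatial lag model in action, assuming a row-normalised ring-shaped W and invented parameter values; it simulates from the model by solving y = (I − ρW)^(−1)(Xβ + ε) directly rather than estimating the parameters:

import numpy as np

N, rho, beta = 30, 0.6, 2.0
W = np.zeros((N, N))
for i in range(N):
    # each site's two ring neighbours; rows sum to 1 by construction
    W[i, (i - 1) % N] = W[i, (i + 1) % N] = 0.5

rng = np.random.default_rng(0)
X = rng.normal(size=N)                 # one regressor
eps = rng.normal(scale=0.1, size=N)
y = np.linalg.solve(np.eye(N) - rho * W, beta * X + eps)

print(np.corrcoef(y, W @ y)[0, 1])     # outcomes correlate with their spatial lag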
Spatial weight matrix
[ "Physics" ]
780
[ "Spacetime", "Space", "Spatial analysis" ]
63,589,365
https://en.wikipedia.org/wiki/Archaeal%20translation
Archaeal translation is the process by which messenger RNA is translated into proteins in archaea. Little is known about this subject, but at the protein level it appears to resemble eukaryotic translation. Most of the initiation, elongation, and termination factors in archaea have homologs in eukaryotes. In many phyla, Shine-Dalgarno sequences are found in only a minority of genes, and many leaderless mRNAs are probably initiated by scanning. The process of ABCE1 ATPase-based recycling is also shared with eukaryotes. As prokaryotes without a nucleus, archaea perform transcription and translation at the same time, as bacteria do. References Further reading Molecular biology Protein biosynthesis Gene expression
Archaeal translation
[ "Chemistry", "Biology" ]
156
[ "Protein biosynthesis", "Gene expression", "Molecular biology stubs", "Molecular genetics", "Biosynthesis", "Cellular processes", "Molecular biology", "Biochemistry" ]
63,589,689
https://en.wikipedia.org/wiki/Archaeal%20transcription
Archaeal transcription is the process in which a segment of archaeal DNA is copied into a newly synthesized strand of RNA using the sole Pol II-like RNA polymerase (RNAP). The process occurs in three main steps: initiation, elongation, and termination; the end result is a strand of RNA that is complementary to a single strand of DNA. A number of transcription factors govern this process, with homologs in both bacteria and eukaryotes, though the core machinery is more similar to that of eukaryotic transcription. Because archaea, like bacteria, lack a membrane-enclosed nucleus, transcription and translation can happen at the same time on a newly generated piece of mRNA. Operons are widespread in archaea. Initiation Initiation in archaea is governed by TATA-binding protein (TBP), archaeal transcription factor B (TFB), and archaeal transcription factor E (TFE), which are homologous to eukaryotic TBP, TFIIB, and TFIIE respectively. These factors recognize the promoter core sequence (TATA box, B recognition element) upstream of the coding region and recruit the RNAP to form a closed transcription preinitiation complex (PIC). The PIC is turned into an open state with the local DNA helix "melting" to load the template strand of DNA. The RNAP undergoes "abortive initiation": it makes and releases many short (2-15 nt) segments before generating a transcript of significant length. This continues until it moves past the promoter (promoter escape), loosening TBP's grasp on the DNA and swapping TFE out for the elongation factors Spt4/5. How this escape happens exactly remains to be studied. Elongation After getting out of the promoter region, the RNAP moves into the elongation state, where it keeps growing the new RNA strand processively. Double-stranded DNA that enters from the front of the enzyme is unzipped to make the template strand available for RNA synthesis. For every DNA base pair separated by the advancing polymerase, one hybrid RNA:DNA base pair is immediately formed. The DNA strands and the nascent RNA chain exit from separate channels; the two DNA strands reunite at the trailing end of the transcription bubble while the single-stranded RNA emerges alone. A number of elongation factors help with the rate and processivity of the RNAP. Factors of the Spt4/Spt5 family (the bacterial homolog of Spt5 is called NusG) stimulate transcription by binding to the RNAP clamp on one side of the DNA channel and to the gate loop on the other. The resultant DSIF locks the clamp into a closed state to prevent the elongation complex (EC) from dissociating. Spt5 also has an NGN domain that helps the two strands separate. A KOW domain probably hooks the RNAP up to a ribosome so that translation and transcription happen together. Some archaea have an Elf1 homolog that might also act as an elongation factor. Backtracking The RNAP occasionally stops and starts moving backwards when it encounters a roadblock or certain difficult sequences. When this happens, the EC gets stuck because the reactive 3' edge of the RNA is out of the active site. The transcript cleavage factor TFS (a TFIIS homolog) helps resolve this issue by generating a cut so that a new 3' end is available in the active site. Some archaea have up to 4 paralogs of TFS with divergent functions. Termination Not much is known about archaeal termination. Euryarchaeal RNAPs seem to terminate on their own when poly-U stretches appear. References Gene expression Archaea
Archaeal transcription
[ "Chemistry", "Biology" ]
770
[ "Archaea", "Gene expression", "Prokaryotes", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry", "Microorganisms" ]
63,591,823
https://en.wikipedia.org/wiki/Simen%20%C3%85dn%C3%B8y%20Ellingsen
Simen Andreas Ådnøy Ellingsen (born 14 May 1981) is a Norwegian engineering physicist specializing in fluid mechanics, especially waves, turbulence, and quantum mechanics. He is a full professor at the Norwegian University of Science and Technology, at the Department of Energy and Process Engineering. He is known for having expanded Lord Kelvin's work known as the Kelvin angle. He received the Royal Norwegian Society of Sciences and Letters Prize for Young Researchers in the Natural Sciences in 2011 and became a member of the Young Academy of Norway in 2019. He received a European Research Council Consolidator Grant in 2022. He plays several instruments and has published music with the band Shamblemaths. Education Ellingsen has two doctoral degrees. The first, from 2009, is Nuclear Terrorism and Rational Choice from King's College London. The second, from 2011, is Dispersion forces in Micromechanics: Casimir and Casimir-Polder forces affected by geometry and non-zero temperature from the Norwegian University of Science and Technology. Publications (selection) (The Norwegian Scientific Index) Membership and honours In 2011 he was the winner of the Royal Norwegian Society of Sciences and Letters Prize for Young Researchers in the Natural Sciences. Ellingsen became one of 12 new members of the Young Academy of Norway in 2019, and is a member of the Royal Norwegian Society of Sciences and Letters. References External links Home page 1981 births Engineering academics Engineering educators Fluid dynamicists Living people Norwegian physicists Academic staff of the Norwegian University of Science and Technology Quantum physicists Royal Norwegian Society of Sciences and Letters
Simen Ådnøy Ellingsen
[ "Physics", "Chemistry" ]
312
[ "Fluid dynamicists", "Quantum physicists", "Quantum mechanics", "Fluid dynamics" ]
63,593,401
https://en.wikipedia.org/wiki/Japanese%20amber
Japanese amber is a type of amber that can be found in Japan. The largest sources of this substance are located in Honshu. It is similar to Baltic amber and has similar general uses. However, Japanese amber is softer and much more difficult to treat than the Baltic type. Its treatment requires special care and precision because stones can be easily damaged. Its color range varies from many shades of orange to brown. It is characterized by dark spots that can be found on its surface. The opacity of Japanese amber varies from clear to opaque pieces. Location Sources of Japanese amber can be found in many different locations all over Japan. They span a region about 2,800 km across, from Hokkaido in the north to Kyūshū in the south. The only mine still open is the Fuji mine, where amber has been recovered since the 6th century AD. In 1938 up to 13 tons of amber were recovered there. Two pieces recovered in the mine can be found as part of a private collection (mass: 19 kg, size 40x40x2 cm, recovered in 1927) and as part of an exhibition in the National Museum of Nature and Science in Tokyo (mass: 16 kg, size 40x23x23 cm, recovered in 1941). Use Due to its soft and easily damaged surface, Japanese amber is not widely used. It can be found in jewellery as a decorative gemstone or as decoration on clothes and utility items. A recovered decorative pillow from the 6th century, ornamented with Japanese amber, was part of an exhibition in Kaliningrad. Modern artists prefer to use Baltic amber, as it is easier to work with and has similar aesthetic values. References Amber
Japanese amber
[ "Physics" ]
333
[ "Amorphous solids", "Unsolved problems in physics", "Amber" ]
63,597,263
https://en.wikipedia.org/wiki/Kuphus%20polythalamius
Kuphus polythalamius (known as giant tamilok) is a species of shipworm, a marine bivalve mollusc in the family Teredinidae. Description The tube of Kuphus polythalamius is known as a crypt and is a calcareous secretion designed to enable the animal to live in its preferred habitat, the mud of mangrove swamps. A typical specimen measures in length and is shaped like a truncated elephant's tusk. The wider, anterior end is closed, has a rounded tip, and is about in diameter. From there the tube tapers to an open, posterior end about in diameter, with a central septum. Siphons project through this end for feeding and respiration. They can be withdrawn inside the tube and the end can be sealed with a set of specialised plates or "pallets". The two small valves of the mollusc are inside the tube along with the mantle, gut and other soft organs. In the intact but otherwise empty tube found on the strandline, they can be seen by X-ray photography. Longest bivalve The giant clam (Tridacna gigas) is generally considered to be the largest bivalve mollusc. It is indeed the heaviest species, growing to over and measuring up to in length, but Kuphus polythalamius holds the record for the largest bivalve by length. A specimen owned by Victor Dan in the United States has a length of , which is considerably longer than the largest giant clam. Distribution Today, Kuphus polythalamius is found in the western Pacific Ocean, the western and eastern Indian Ocean and the Indo-Malaysian area. The range includes the Philippines, Indonesia and Mozambique. However, the only thoroughly studied natural habitat of the species is in Kalamansig, Sultan Kudarat in the Philippines. Evolution Marine biologist Ruth Turner studied shipworms and considered that their common ancestor would have been very like Kuphus polythalamius, the most primitive of the teredinids. She believed that the anatomy of the tube was such that the animal would not have been able to burrow in wood as other modern teredinids do, but would instead have lived buried in soft sediments. Live specimen In April 2017, the species became the focus of international attention when the announcement of a scientific study conducted in the Philippines was misinterpreted by foreign news reporters as the discovery of a rare live specimen. The sample was gunmetal black and very muscular. While other shipworms feed on submerged wood, K. polythalamius was found to rely on bacteria in its gills that use hydrogen sulphide in the water as an energy source to convert carbon dioxide into nutrients. In this respect it resembles the unrelated giant tube worm, which actually is a worm. Videos uploaded to YouTube, however, already show Philippine scientists dissecting specimens as far back as 2010, after a news feature on a giant tamilok, the local name for the common shipworm, was broadcast on a local TV network. The report by local media celebrity Jessica Soho suggests that local residents in the province of Sultan Kudarat, Mindanao island, were familiar enough with the creature to treat it as a delicacy. After the discovery of the species in Sultan Kudarat, various environmental groups launched a campaign to protect the species and its habitat from further destruction and human consumption. Currently, the municipal waters where the species thrives are protected by the local government. References polythalamius Chemosynthetic symbiosis Molluscs of the Indian Ocean Molluscs of the Pacific Ocean Bivalves described in 1758 Taxa named by Carl Linnaeus
Kuphus polythalamius
[ "Biology" ]
753
[ "Biological interactions", "Chemosynthetic symbiosis", "Behavior", "Symbiosis" ]
63,598,605
https://en.wikipedia.org/wiki/The%20Strange%20Logic%20of%20Random%20Graphs
The Strange Logic of Random Graphs is a book on zero-one laws for random graphs. It was written by Joel Spencer and published in 2001 by Springer-Verlag as volume 22 of their book series Algorithms and Combinatorics. Topics The random graphs of the book are generated from the Erdős–Rényi–Gilbert model $G(n,p)$, in which $n$ vertices are given and a random choice is made whether to connect each pair of vertices by an edge, independently for each pair, with probability $p$ of making a connection. A zero-one law is a theorem stating that, for certain properties of graphs, and for certain choices of $p$, the probability of generating a graph with the property tends to zero or one in the limit as $n$ goes to infinity. A fundamental result in this area, proved independently by Glebskiĭ et al. and by Ronald Fagin, is that there is a zero-one law for $G(n,1/2)$ for every property that can be described in the first-order logic of graphs. Moreover, the limiting probability is one if and only if the infinite Rado graph has the property. For instance, a random graph in this model contains a triangle with probability tending to one; it contains a universal vertex with probability tending to zero. For other choices of $p$, other outcomes can occur. For instance, the limiting probability of containing a triangle is between 0 and 1 when $p = c/n$ for a constant $c$; it tends to 0 for smaller choices of $p$ and to 1 for larger choices. The function $n^{-1}$ is said to be a threshold for the property of containing a triangle, meaning that it separates the values of $p$ with limiting probability 0 from the values of $p$ with limiting probability 1. The main result of the book (proved by Spencer with Saharon Shelah) is that irrational powers of $n$ are never threshold functions. That is, whenever $\alpha$ is an irrational number, there is a zero-one law for the first-order properties of the random graphs $G(n, n^{-\alpha})$. A key tool in the proof is the Ehrenfeucht–Fraïssé game. Audience and reception Although it is essentially the proof of a single theorem, aimed at specialists in the area, the book is written in a readable style that introduces the reader to many important topics in finite model theory and the theory of random graphs. Reviewer Valentin Kolchin, himself the author of another book on random graphs, writes that the book is "self-contained, easily read, and is distinguished by elegant writing", recommending it to probability theorists and logicians. Reviewer Alessandro Berarducci calls the book "beautifully written" and its subject "fascinating". References Random graphs Finite model theory Mathematics books 2001 non-fiction books
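As a rough numerical illustration of this threshold behaviour (a Monte Carlo sketch, not from the book; the values of n, c, and the trial count are arbitrary choices, and at small n the estimates only approximate the limiting behaviour):

```python
import itertools, random

def has_triangle(n, p, rng):
    """Sample G(n, p) and report whether it contains a triangle."""
    edge = {pair: rng.random() < p
            for pair in itertools.combinations(range(n), 2)}
    return any(edge[(i, j)] and edge[(j, k)] and edge[(i, k)]
               for i, j, k in itertools.combinations(range(n), 3))

rng = random.Random(0)
n, trials = 40, 100
for c in (0.3, 1.0, 3.0):   # p = c/n straddles the threshold n**-1
    freq = sum(has_triangle(n, c / n, rng) for _ in range(trials)) / trials
    print(f"c = {c}: estimated P(triangle) = {freq:.2f}")
```

Estimates should climb from near 0 through an intermediate value toward 1 as c grows, mirroring the threshold at $n^{-1}$.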
The Strange Logic of Random Graphs
[ "Mathematics" ]
524
[ "Graph theory", "Finite model theory", "Mathematical relations", "Model theory", "Random graphs" ]
73,729,878
https://en.wikipedia.org/wiki/Polyisobuteneamine
Polyisobuteneamine (PIBA) is a polymer derived from the reaction of polyisobutylene (PIB) with ammonia or primary amines. This polymeric compound is known for its excellent adhesive and dispersant properties and is commonly used as an additive in lubricants, fuel, and other industrial applications. History of discovery The history of polyisobuteneamine dates back to the early development and study of polyisobutylene. The first synthesis of polyisobutylene was reported in 1931 by the German chemists Hermann Staudinger and Leonidas Zechmeister, who obtained the polymer through the cationic polymerization of isobutylene. The discovery of polyisobuteneamine followed as researchers began to explore the potential applications of polyisobutylene and its derivatives. Synthesis Polyisobuteneamine is synthesized through the reaction of polyisobutylene with ammonia or primary amines in the presence of a catalyst. The reaction takes place at elevated temperatures and pressures. The molecular weight of the resulting polymer can be controlled by adjusting the reaction conditions and the choice of catalyst. Polyisobutylene (PIB): [-CH2-C(CH3)2-]n, from the monomer isobutylene, CH2=C(CH3)2 Ammonia (NH3) or Primary amine (RNH2) Polyisobuteneamine (PIBA): [-(CH2-C(CH3)2)N(H)-]m In the chemical formulas above, n represents the degree of polymerization of PIB, R represents a hydrogen atom (in the case of ammonia) or an alkyl group (in the case of primary amines), and m is the degree of substitution of the amine group on the polyisobutylene backbone. Properties Polyisobuteneamine is a viscous liquid with a yellow to amber color. It has excellent adhesion and dispersant properties, which are attributed to its polar amine groups and nonpolar polyisobutylene backbone. The unique combination of polar and nonpolar groups allows PIBA to interact with a wide range of materials, making it a versatile additive. Applications Polyisobuteneamine is commonly used as an additive in lubricants, fuel, and other industrial applications. Its adhesive and dispersant properties make it particularly useful in enhancing the performance of engine oils, gear oils, and hydraulic fluids. PIBA is also used in fuel additives to improve the combustion process and reduce deposits in the engine. Other applications include the use of PIBA as a corrosion inhibitor, an emulsifier, and a demulsifier in various industrial processes. References Staudinger, H., & Zechmeister, L. (1931). Über Polymerisation. Berichte der deutschen chemischen Gesellschaft (A and B Series), 64(9), 2157-2160. Legge, N. R., Holden, G., & Schroeder, H. (eds.). (2005). Thermoplastic Elastomers: A Comprehensive Review. iSmithers Rapra Publishing. Mart, L. (ed.). (2013). Handbook of Plasticizers, 2nd Edition. Elsevier. Notes External links JP4197298B2 - Polyisobuteneamine - Google Patents Synthesis of 1-polyisobuteneamine-(2-14C) Effect of Multifunctional Fuel Additives on Octane Number Requirement of Internal Combustion Engines 932813 Polymers Plasticizers
Polyisobuteneamine
[ "Chemistry", "Materials_science" ]
737
[ "Polymers", "Polymer chemistry" ]
73,730,090
https://en.wikipedia.org/wiki/Snowy%202.0%20Pumped%20Storage%20Power%20Station
Snowy 2.0 Pumped Storage Power Station or Snowy Hydro 2.0 or simply Snowy 2.0 is a pumped-hydro battery megaproject in New South Wales, Australia. The dispatchable generation project expands upon the original Snowy Mountains Scheme (ex post facto Snowy 1.0), connecting two existing dams through an underground tunnel and a new, underground pumped-hydro power station. It is expected to supply 2.2 gigawatts of capacity and about 350,000 megawatt hours of large-scale storage to the national electricity market. It is the largest renewable energy project under construction in Australia. It includes one of the largest and deepest cavern excavations ever undertaken. It also includes the longest tunnels, at 27 kilometres in length, of any pumped-hydro station ever built. It is designed for grid stabilization: to be a backup at times of peak demand and for when solar and wind energy are not providing sufficient power. It provides valuable firming capability. Snowy Hydro acts like a giant battery by absorbing, storing, and dispatching energy. Snowy 2.0 can be "switched on" very quickly. The battery is designed to operate for up to 175 hours of temporary supply. It is Australia's largest energy project, estimated to cost 12 billion Australian dollars and projected to generate 10% of the nation's energy. Construction began in 2019. By 2023, AU$4.3 billion had been spent. Snowy 2.0 has been described as a white elephant. The project is led by public company Snowy Hydro Limited. Snowy 2.0 is expected to last for at least 100 years. When complete it is expected to have a large impact on the price and reliability of electric power. History Initial plans for a power station at the location were discussed in 1966. Further studies were undertaken in 1980 and 1990. The current project originated as the centrepiece of Malcolm Turnbull's climate change policy in 2017. The original cost estimate for the project was around $2 billion. A feasibility study carried out in 2017 found the project to be both technically and financially feasible. The study was released on 21 December 2017 and put the project cost at between $3.8 and 4.5 billion. The first tunnel to be completed, by October 2022, was a 2.85-kilometre section that provided main access at Lobs Hole. It was 10 metres in diameter and provides pedestrian and vehicle access into the power station. By May 2023 the emergency, cable and ventilation tunnel had been excavated. It is 2.93 kilometres long, 10 metres in diameter and will be used for power station ventilation and high-voltage cables. Excavation of the transformer and machine halls began in June 2023. By February 2024, half of the construction required was complete. The project was originally expected to be completed by 2024. Snowy Hydro 2.0 has been beset by delays and cost blowouts. Delays have been caused by the COVID-19 pandemic, global supply chain disruptions, complex design elements and variable site and geological conditions. The delays have raised concerns that Snowy Hydro will not be ready in time for new solar and wind projects coming online as five coal-fired power stations close. AEMO warns that supply gaps will emerge from 2025. The project is currently expected to be fully operational by the end of 2028 and generating power as early as late 2027. The project is using three tunnel boring machines to dig tunnels. One of the machines, called Florence, was stuck for 19 months after encountering soft rock near Tantangara.
Florence is excavating the 16 km headrace tunnel, which will connect the underground power station to the upper Tantangara reservoir. Florence was launched in March 2022 and was named in honour of Australia's first female electrical engineer, Florence Violet McKenzie. Eight weeks later the machine was bogged in wet, soft ground. The machine is capable of digging 30 to 50 metres a day. In December 2022, a sinkhole opened up above the tunnel. Florence was moving at a pace of six metres a day by early December 2023. In May 2024, the tunnel boring machine was stuck in hard rock. A complex fault zone caused the delay. By 11 July 2024, Florence was clear of the hard rock after using ultra-high-pressure water jetting. A fourth boring machine was required due to the delays caused by Florence. Drilling and blasting were used to dig the caverns. The company managing underground blasting operations was Orica. Rock bolts and shotcrete support the exposed solid rock face. The main cavern was excavated between June 2023 and January 2024. Design and location The station is located remotely within the Kosciuszko National Park in the Snowy Mountains. Snowy Hydro 2.0 will use water from the Talbingo Reservoir (bottom storage) and Tantangara Reservoir (top storage). The dams have a height differential of 700 metres. The new power station is being built by the Italian firm Webuild. It will be located in a cavern 800 metres underground. The underground location allows for reduced environmental impacts within the national park. The operational footprint of the facility is less than 0.01% of the total size of the park. The Inclined Pressure Shaft (IPS), through which the water will pass, is the largest of its kind in the world and facilitates the water's return to the upper reservoir when the pump-turbines operate in reverse. The IPS is 10 metres in diameter, 1.6 kilometres long and at a 25-degree incline. Pre-cast concrete segments for the shaft are produced at a factory in the town of Cooma. Fatigue resistance is a key design element in the IPS. The power station will measure 22 metres wide, 50 m high and 250 m long. The station will house six reversible Francis pump-turbine and motor-generator units. Three units will be of variable speed, with the remaining three of synchronous speed. Each turbine will have a rated output of 333 megawatts. Power generating equipment is being supplied by Voith. The station will be connected to the grid via the HumeLink transmission line. Construction costs for the project total $4.8 billion. The construction of overhead power lines by TransGrid has been opposed by community advocacy groups. Landholders' desire to see the transmission line built underground has been opposed due to prohibitive costs. See also List of megaprojects List of pumped-storage hydroelectric power stations List of power stations in New South Wales Renewable energy in Australia References External links https://www.snowyhydro.com.au/ Snowy Mountains Scheme Economic history of New South Wales Engineering projects Hydroelectric power stations in New South Wales Murray River River regulation in Australia Snowy Mountains Underground power stations
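As a quick consistency check on the figures quoted in this article (a worked restatement of numbers already given in the text, not new data), the rated turbine output and the quoted storage figure imply the stated supply duration:

\[
6 \times 333\ \text{MW} \approx 2000\ \text{MW}, \qquad \frac{350\,000\ \text{MWh}}{2000\ \text{MW}} = 175\ \text{h},
\]

matching the quoted "up to 175 hours of temporary supply".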
Snowy 2.0 Pumped Storage Power Station
[ "Engineering" ]
1,348
[ "nan" ]
73,732,459
https://en.wikipedia.org/wiki/UWI%20Seismic%20Research%20Centre
The University of the West Indies Seismic Research Centre (UWI-SRC) is a centre for volcanological, seismic and geophysical research in Trinidad, which has the responsibility for monitoring and studying earthquakes, volcanoes and tsunamis across the Eastern Caribbean. Part of the University of the West Indies, it is also responsible for providing formal advice and information on the volcanic, seismic and tsunami hazards and events across the region, to reduce risk and protect lives and livelihoods. In recent years, UWI-SRC has managed ongoing volcanic unrest at the Soufriere Hills Volcano through the running of the Montserrat Volcano Observatory, and the 2020–2021 eruptions of La Soufrière on St Vincent. History UWI-SRC was established in 1953, as the Volcanological Research Department of the Imperial College of Tropical Agriculture in Trinidad. In the early 1960s the department became the Seismic Research Unit of the University of the West Indies, and in 2008 it was formally established as a research centre within the university and took on the name Seismic Research Centre. Regional seismic network UWI-SRC manages the largest network of seismometers in the Caribbean, extending across all of the islands of the English-speaking Caribbean, and seventeen known or active volcanoes. The origins of the modern network go back to the early 1950s, when geophysicist Patrick Willmore was sent by the British Colonial Office to investigate a seismic crisis on St Kitts and Nevis which had begun in late December 1950. Willmore arrived in February 1951, but soon realised he had already missed the most significant earthquakes of the crisis. To prevent this happening again, Willmore recommended that a regional network of instruments be established by placing one seismograph on 'each of the major British islands', with data collected at a central office. The first seismograph was installed in Trinidad; others followed on St Vincent and Dominica, and by 1959 there were stations on eight islands. As of 2022, the network extended to more than 60 stations. Over time, the instruments used in the seismic network have changed radically. The first seismometers installed were analogue seismographs designed by Patrick Willmore, which recorded onto photographic paper. During the 1970s, radio-telemetry was introduced, so that signals could be transmitted from the analogue field stations to the UWI-SRC headquarters. Tools were developed to digitise and time-stamp the analogue data, and then to record and process the digitised data using an in-house algorithm called "WurstMachine" to calculate the earthquake parameters: hypocentre and magnitude. The current generation of seismometers are fully digital, and networked so that they can stream data to UWI-SRC headquarters. The network includes both broadband, three-component and one-component instruments. Many of the seismic stations are co-located with other monitoring instruments (including accelerometers and continuous GPS receivers), and some are shared with regional monitoring agencies run by UNAVCO, IPGP and others.
Directors Directors of UWI-SRC include Geoffrey Robson John Tomblin (1968-1980) John Shepherd (1980-1989) Keith Rowley (1989-1991) Lloyd Lynch William Ambeh John Shepherd (1999-2004) Richard Robertson (2004-2011) Joan Latchman (2011-2013) Richard Robertson (2013-2019) Erouscilla Joseph (2019-) Awards 2022 Volcanic Surveillance and Crisis Management Award of the International Association of Volcanology and Chemistry of the Earth's Interior (IAVCEI) References Seismic Research Centre Geology organizations Volcano observatories 1953 establishments in Trinidad and Tobago Earthquake and seismic risk mitigation Tsunami Volcano seismology Seismic networks Seismological observatories, organisations and projects Volcano monitoring
UWI Seismic Research Centre
[ "Engineering" ]
776
[ "Structural engineering", "Earthquake and seismic risk mitigation" ]
73,732,597
https://en.wikipedia.org/wiki/Coiled-coil%20drug%20delivery
Coiled-coil drug delivery systems refer to drug delivery systems utilizing coiled-coil motifs capable of delivering disease-treating therapies, imaging agents, and vaccines to patients systemically or specifically. These systems are a form of peptide therapeutics and can be engineered and finely tuned into different types of drug delivery vehicles (such as liposomes, nanoparticle drug carriers, polymer hybrid drug carriers, micelles, etc.) based on the specific application required. The goal of a coiled-coil drug delivery system is to deliver cargo such as medication, imaging agents, biological molecules, or vaccines efficiently and specifically, in order to maximize the therapeutic efficacy and minimize unwanted side effects. This is achieved through fine-tuning the factors affecting the coiled coil's oligomerization, resulting in modular systems that are highly specific for the intended application. Coiled-coil motifs make up 10% of all protein sequences, and are utilized naturally by various proteins in both prokaryotes and eukaryotes to achieve diverse cellular functions. Coupled with the simple helical structure of coiled coils, which has been widely studied and reported on in the literature, engineered coiled-coil drug delivery systems are capable of improving drug pharmacokinetics, reducing unintentional toxicity during delivery, delivering drugs in a specific manner, controlling cargo release, and maintaining high stability through transport in the body. History Coiled-coil research began in 1953, when Dr. Francis Crick first reported a theory for the packing of α-helices in the fibrous proteins known at the time: he proposed that they consist of alpha helices built from heptad repeats, or seven-residue repeats (a-b-c-d-e-f-g), with two or more alpha helices twisting around each other like the strands of a rope. In 1972, Dr. Robert Hodges and his colleagues confirmed Dr. Crick's hypothesis upon sequencing tropomyosin, further discovering that the heptad repeat contains two hydrophobic residues, at the a and d positions, which stabilize coiled coils and form the basis for their assembly. This confirmation formed the basis for designing engineered coiled-coil proteins to further investigate and better understand coiled-coil interactions, structures, functions, oligomerization, and other properties. Later, in 1991, Dr. O'Shea and colleagues obtained the first high-resolution (1.8 Å) structure of a two-stranded coiled coil. Dr. Hodges was the first to suggest the use of coiled coils in a drug delivery system, in 1996, when he proposed a two-stage targeting and delivery system based on heterodimerization, whereby a drug would be conjugated to chain 1 and an antibody would be conjugated to chain 2, such that chains 1 and 2 would form a heterodimeric coiled coil. In this system, the antibody conjugate would hypothetically be delivered first such that it binds to the target location, followed by the administration of the drug conjugate, whose chain 1 would dimerize with the antibody chain 2, resulting in targeted drug delivery. Since then, hundreds of investigations have been reported in the literature discussing novel drug delivery systems consisting of various coiled-coil supramolecular assemblies, such as fibers, hydrogels, and nanostructures. Design factors Typically, a coiled-coil motif consists of 2-7 alpha helix strands coiled together, each of which consists of a 7-residue repeat (a-b-c-d-e-f-g) called a heptad.
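As a purely illustrative sketch of this heptad bookkeeping (the peptide sequence, register offset, and hydrophobic residue set below are assumptions invented for the example, not data from the article), one can assign register positions to a sequence and inspect the residues at the core a/d positions:

```python
# Hypothetical hydrophobic set: Leu, Ile, Val, the residues named above.
HYDROPHOBIC = set("LIV")

def heptad_register(seq, offset=0):
    """Pair each residue with its heptad position a-b-c-d-e-f-g."""
    return [(res, "abcdefg"[(i + offset) % 7]) for i, res in enumerate(seq)]

seq = "LKALEKELAQLKKELQALEKELAQ"  # invented coiled-coil-like peptide
core = [(res, pos) for res, pos in heptad_register(seq) if pos in "ad"]
print(core)                                        # residues at a/d positions
print(all(res in HYDROPHOBIC for res, _ in core))  # hydrophobic core check
```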
Heptads are unique in that positions a, d are occupied by hydrophobic residues, typically Leu, Ile, or Val. Positions e, g are usually occupied by charged or polar residues, typically Lys or Glu. Through this pattern, individual helices become amphipathic, such that when they oligomerize, a hydrophobic core forms between the a, d residues of the helices, while interhelical ionic interactions between the e and g residues aid in stabilizing the oligomer. The number of heptads in a molecule is variable and can be modified based on specific applications of coiled-coil systems. For example, sequences with fewer heptads consisting of a, d hydrophobic residues can prove to be more stable than sequences with more heptads containing a mixture of polar and non-polar residues at the same positions. Thus, the hydrophobic core of a coiled-coil motif is considered a dominant factor affecting the stability of the motif. Additionally, the hydrophobic core residues affect the specificity of the coiled-coil motif, such that the specific pairs of a, d residues determine the number of alpha helices that compose the coiled-coil system. For example, in the case of the GCN4 leucine zipper protein, mutants with the a, d pair of I, L resulted in a two-stranded coiled coil, while a pair of I, I resulted in a three-stranded coiled coil and a pair of L, I resulted in a four-stranded coiled coil. Thus, the oligomerization selectivity of a coiled-coil motif can be tuned by choosing the appropriate amino acid residues in positions a, d. The polar residues at positions e, g of a heptad also contribute to the stability and specificity of the coiled-coil motif through electrostatic interactions such as salt bridges with e, g residues on other heptads, though to a lesser extent than the residues in the a, d positions. However, e, g residues are capable of conferring heterospecific properties on a coiled-coil motif, such that a system can be designed whereby strands prefer to hetero-oligomerize rather than homo-oligomerize. Coiled coils may be either left-handed or right-handed, although the majority of coiled-coil proteins found in nature consist of heptads and are left-handed, since the handedness of coiled coils opposes the handedness of the alpha helices that comprise them. Right-handed coils have been reported in the literature to contain 11-residue repeats known as undecad repeats (a-b-c-d-e-f-g-h-i-j-k) or 15-residue repeats known as pentadecad repeats (a-b-c-d-e-f-g-h-i-j-k-l-m-n-o), both of which could feature larger hydrophobic cores and larger cavities that would be useful in drug delivery systems for loading larger cargo. Polymer-hybrid delivery systems Coiled coils are used as non-covalent polymer-drug conjugates to link drugs to polymer backbones. The goal of these types of systems is to attach multiple drugs to a non-toxic backbone such that drugs can be stably transported throughout the body and released at a controlled rate once at the target location. Doxorubicin, paclitaxel, and camptothecin are examples of drugs typically used with polymer-drug conjugate systems. Heterodimeric coiled-coil motifs can be utilized in such systems, whereby one strand would be conjugated to the polymer backbone network, while the other strand would be conjugated to the drug of choice.
The coiled coils would then oligomerize, followed by the administration of the drug system into the body, whereby the stability of the coiled coil in physiological conditions would ensure the intact delivery of the drug to the target. Upon cellular uptake at the target site, the coiled-coil system would be exposed to a decrease in pH associated with the acidic environments of endosomes and lysosomes, triggering the dissociation of the coiled coils and resulting in drug release. Dr. Harm-Anton Klok and colleagues were the first to investigate the use of coiled coils as linkers in polymer-drug conjugate systems, whereby they utilized the parallel heterodimeric E3/K3 coiled-coil system (known for its stability at physiological pH and dissociation at pH 5, resulting in E3 homotrimers along with K3 unimers) to link cargo to a poly(N-(2-hydroxypropyl)methacrylamide) (PHPMA)-based polymer backbone. Klok et al. demonstrated the intracellular uptake of cargo via endocytosis, along with cargo release as a result of coiled-coil dissociation. Dr. Ondřej Vaněk and colleagues utilized the same E3/K3-PHPMA system to attach an antibody to the polymer backbone to target the delivery of the drug system, which was successful in vitro. Coiled-coil polymer hybrid drug delivery systems can also be used in drug-free macromolecular therapeutic (DFMT) applications, whereby a coiled-coil-based system would be used to induce apoptosis in target cells. Specifically, Dr. Jindřich Kopeček and colleagues attempted to induce apoptosis in CD20-positive non-Hodgkin's lymphoma B-cells by mimicking the induction of apoptosis typically caused by the recognition of secondary antibodies to the CD20 antigen. In this case, apoptosis was induced upon the oligomerization of a PHPMA copolymer-conjugated coil with the anti-CD20 Fab fragment-conjugated coil (which would recognize and bind CD20). The coiled-coil motifs used in this system were the anti-parallel heterodimeric CCE/CCK coiled coils, which consist of pentaheptad repeats. This system was found to be successful at inducing apoptosis in those cells in vitro, providing an alternative to the anti-CD20 antibody drug Rituximab. Further studies have shown the efficacy of this system in vivo, whereby malignant B-cells implanted in the bone marrow of mice were eradicated completely. Nanoparticle systems Coiled coils can be used to create nanoparticle drug delivery systems capable of delivering drugs or other biological molecules with increased targeting and controlled release due to their biocompatibility, stability, and targeting properties. Self-assembled cage-like particles (SAGE) utilize coiled coils along with disulfide linkers to create hollow nanoparticles with diameters in the range of 100 nm. SAGE consists of two separate coiled-coil motifs: a ~20 residue heptad homotrimer motif (CC-Tri3) and a ~20 residue heptad heterodimer motif (CC-Di-A / CC-Di-B). Each CC-Tri3 would be bound to either a CC-Di-A or a CC-Di-B via a disulfide linker, such that each time CC-Di-A and CC-Di-B oligomerize together, hexagonal networks form with pores of 5-6 nm in diameter: CC-Di-A – CC-Tri3 – CC-Di-A – CC-Di-B – CC-Tri3 – CC-Di-B. Self-assembly would result in further oligomerization between the heterodimer motifs, which would eventually result in the formation of a hollow nanoparticle sphere. The final diameter of the nanoparticle would depend on the length of the linker used, along with the size of the coiled-coil motifs used. SAGE has been applied in the field of antigen delivery, whereby Dr.
Andrew Davidson and colleagues modified three of the SAGE systems described above with the antigenic peptides tetanus toxoid, ovalbumin, and hemagglutinin individually. The investigators found that the SAGE systems were nontoxic in vivo and were capable of eliciting CD4 T cell and B cell responses in the case of the tetanus toxoid and ovalbumin systems, while eliciting a CD8 T cell response with the hemagglutinin system. Some advantages of using SAGE systems for antigen presentation include the ability to remain stable and functional after functionalization with cargo, the ability to modify and tune cellular uptake properties, and the modularity of the platform, which could potentially be used to present multiple antigens at the same time, resulting in increased antigen immunogenicity. Another type of coiled-coil nanoparticle system is the self-assembling protein nanoparticle (SAPN). SAPN differs from SAGE in that SAPN utilizes trimeric and pentameric coiled-coil motifs. This change results in the self-assembly of a symmetrical polyhedral 16 nm nanoparticle composed of 60 monomer building blocks. The small size of SAPN allows the nanoparticle system to resemble viruses in shape and size, which is beneficial to antigen presentation. Specifically, SAPN has been utilized by Dr. David Lanar and colleagues to develop a P. falciparum malaria vaccine, whereby B and CD8 T cell epitopes of the disease were incorporated into the SAPN coiled-coil motifs. In vivo results showed that a long-lasting immune response was generated in the mice, persisting for up to 13 months and preventing malaria infection in vaccine-treated mice. References Drug delivery devices
Coiled-coil drug delivery
[ "Chemistry" ]
2,766
[ "Pharmacology", "Drug delivery devices" ]
73,734,121
https://en.wikipedia.org/wiki/Nek5000
Nek5000 is a highly scalable spectral element computational fluid dynamics code for solving the incompressible Navier-Stokes equations on 2D quadrilateral and 3D hexahedral meshes. Nek5000 was awarded the 1999 Gordon Bell Prize and a 2016 R&D 100 Award. History Related and derived codes Gslib Nekbone Neko NekCEM NekLBM NekROM NekRS ParRSB References Computational fluid dynamics Free science software Free computer-aided design software Scientific simulation software
Nek5000
[ "Physics", "Chemistry" ]
108
[ "Computational fluid dynamics", "Fluid dynamics", "Computational physics" ]
69,290,671
https://en.wikipedia.org/wiki/Linnett%20double-quartet%20theory
Linnett double-quartet theory (LDQ) is a method of describing the bonding in molecules which involves separating the electrons depending on their spin, placing them into separate 'spin tetrahedra' to minimise the Pauli repulsions between electrons of the same spin. Introduced by J. W. Linnett in his 1961 monograph and 1964 book, this method expands on the electron dot structures pioneered by G. N. Lewis. While the theory retains the requirement for fulfilling the octet rule, it dispenses with the need to force electrons into coincident pairs. Instead, the theory stipulates that the four electrons of a given spin should maximise the distances between each other, resulting in a net tetrahedral electronic arrangement that is the fundamental molecular building block of the theory. By taking cognisance of both the charge and the spin of the electrons, the theory can describe bonding situations beyond those invoking electron pairs, for example two-centre one-electron bonds. This approach thus facilitates the generation of molecular structures which accurately reflect the physical properties of the corresponding molecules, for example molecular oxygen, benzene, nitric oxide or diborane. Additionally, the method has enjoyed some success in generating the molecular structures of excited states, radicals, and reaction intermediates. The theory has also facilitated a more complete understanding of chemical reactivity, hypervalent bonding and three-centre bonding. Historical background The cornerstone of classical bonding theories is the Lewis structure, published by G. N. Lewis in 1916 and continuing to be widely taught and disseminated to this day. In this theory, the electrons in bonds are believed to pair up, forming electron pairs which result in the binding of nuclei. While Lewis' model could explain the structures of many molecules, Lewis himself could not rationalise why electrons, negatively charged particles which should repel each other, were able to form electron pairs in molecules, or even why electrons can form a bond between atoms. Lewis' theory has been seminal in the understanding of the chemical bond. Yet despite this, it was formulated before the discovery of electron spin, a key intrinsic property of electrons which manifests itself through inter-electronic interactions. Although spin had been known ever since the publication of Stern and Gerlach's results in 1922, and the Pauli exclusion principle was formulated in 1925, the importance of 'spin correlation' for understanding when and why electrons form pairs in molecules was not understood until the work of Lennard-Jones in the 1950s. During the latter decade, J. W. Linnett and his students began to explicitly study the role of spin in determining the electronic structures of various molecules. This resulted in Linnett's landmark 1961 publication, and subsequent 1964 book, in which he outlined what became known as "Linnett double-quartet" theory. Linnett continued to expand on his theory through a number of publications until his death in 1975. In these writings, Linnett recognised the continued importance of the Lewis model of bonding and the importance of satisfying the octet rule. However, he also argued that this view overemphasises the importance of electron pairing in the formation of chemical bonds. Hence, his theory sought to introduce spin into the conventional model of bonding and thereby rectify some of the problems associated with Lewis' theory.
While LDQ theory is a relatively simple extension of Lewis’ bonding theory, the additional freedom of the electrons to separate into two sets, differentiated by their spins, has bestowed upon the theory exquisite agreement with the results of many experiments. In its nascent years, LDQ theory attracted the interest of many researchers, furnishing greater insights into the structures of many molecules. However, LDQ theory began to fade from the spotlight in the 1970s and was mostly abandoned by researchers in the United States, Great Britain and Europe by the mid-1980s. Formulation of Linnett double-quartet theory Basic principles A key trait of LDQ theory that is shared with Lewis theory is the importance of using formal charges to determine the most important electronic structure. LDQ theory produces the spatial distributions of the electrons by considering the two fundamental physical properties of said electrons: The mutual repulsion of electrons with like spins, in accordance with the Pauli exclusion principle. Hence, electrons with like (parallel) spins tend to keep as far away from each other as possible by refusing to occupy the same spatial region, while electrons with unlike (antiparallel) spins can occupy the same spatial region. This effect is known as ‘spin correlation’. The mutual Coulombic repulsion between electrons. This effect tends to keep electrons as far away from each other as possible, regardless of their relative spins. This is known as ‘charge correlation’. In Linnett's interpretation, correlation is “the mutual effect the electrons have on one another’s spatial positions”. In the absence of charge correlation, the situation would be as follows: If an equal number of both spins is present, the electrons will tend to pair up. If an unequal number of spins is present, then the probability distribution of the possible structures is independent of the mutual disposition of the two spin sets. When one adds the effects of charge correlation, the situation is modified somewhat: For electrons with the same spin, charge correlation works in tandem with spin correlation to yield a strong repulsion between the electrons. For electrons of opposite spin, charge correlation effects will work against spin correlation effects. Given these rules, it is found that: The four electrons in the same spin set will always keep apart as they experience a negative charge correlation and a negative spin correlation. Electrons in different spin sets can pair up (occupy the same spatial region) as they experience a negative charge correlation (which tends to keep them apart) but a positive spin correlation (which favours the spatial proximity of electrons with unlike spins). Consequences of electron correlation effects An octet is any arrangement which results in a given nucleus having a total of eight valence electrons around it. In Lewis' bonding model, the electrons tend to pair up in bonds such that an atom has a total of four chemical bonds and lone pairs associated with it: thus, the atom can satisfy its octet. LDQ theory also acknowledges that the elements in the ‘first short period’ of the periodic table tend to attain an octet of electrons surrounding them. However, in contrast with Lewis' view, Linnett argued that due to the combined effects of charge correlation and spin correlation, it is physically more meaningful to consider the octet as the sum of two tetrahedral quartets of electrons. 
Each quartet consists of electrons of one spin only, and these electrons can act and orient themselves independently. One can then obtain molecular structures by arranging the electrons in such a way as to maximise the separations between the electrons, hence minimising the mutual inter-electronic repulsions, while simultaneously ensuring that the basic geometry of the spin sets is not altered. Additionally, Linnett stressed that due to the Pauli exclusion principle, one should prioritise separating electrons of the same spin when considering the overall electronic structure. Influence of nearby nuclei In chemical bonding, the presence of additional nuclei causes the electrons to seek to maximise their attractive electrostatic interactions with all nearby nuclei. This can result in the formation of coincident or ‘close-paired’ electron pairs, in accordance with Lewis’ bonding model. Thus, it has previously been argued that the following should also be included in the basic postulates of LDQ theory: The attraction between the nuclei and the electrons tends to distort the electronic geometry. This distortion acts to force a maximum number of electrons into the internuclear (bond axis) region, helping to efficiently bind the nuclei together. The presence of any additional nearby nuclei can partially relax the influence of correlation effects on the electronic geometry. Therefore, it is possible for two electrons of opposite spin to come together and occupy the same spatial region, effectively forming the classical Lewis electron pair. This can serve to strengthen the binding between the nuclei by increasing the net electron density in the internuclear region. The exact disposition of the electrons is determined by the relative electronegativities of the constituent elements. The electron pairing can result in a greater net binding between the nuclei, but this is not necessarily the case in all molecules. In his discussions, Linnett notes that due to the opposing effects of charge and spin, the correlation between the two spin quartets should be small and so the individual spin tetrahedra can be treated as being partly independent from each other. This then facilitates electron pairing since nearby nuclei can easily force the two electrons together. Linnett also argues that a relatively small deviation from the strictly regular tetrahedra of the rigorous LDQ theory approach could be energetically favourable in some cases. Balancing the intramolecular interactions The structure obtained from applying LDQ theory balances the three principal interactions in the molecule: electron-electron, electron-nuclear and nuclear-nuclear. Much like Lewis’ bonding model, LDQ theory assumes that the dominant contributions result from electron-electron and electron-nuclear interactions. However, it has previously been shown that the introduction of nuclear-nuclear interactions into LDQ theory can explain some trends in bond angles and bond lengths. In particular, Firestone produced an extensive discussion of the effects of moving bonding electron density out of the internuclear region and highlighted that sometimes such a distortion is necessary to produce a more satisfactory arrangement of the spin sets. Due to the decreased shielding of the nuclear-nuclear interactions and the decreased electron-nuclear interactions associated with this change, the net energy of the molecule tends to increase: this is known as “L-strain” (see section on reactivity later). 
Examples of the application of the theory Understanding structures using LDQ As an example of the application of LDQ theory to molecular bonding, take the case of the fluoride ion. By using LDQ theory, the electronic structure shown below is obtained. The two spin sets are under the action of only one nucleus and so there is no net interaction which will cause the electrons to pair up. Hence, unlike the Lewis model which predicts four lone pairs, all electrons in the fluoride ion are spatially separated. Therefore, the following statement by Luder is found to be true for all mononuclear species:“In an isolated atom, no valence electron is close-paired with another”. If a proton then approaches the fluoride ion, the proton's attractive potential can distort the electronic geometry. Two electrons of opposite spin (necessary to complete the duplet of the hydrogen atom) are attracted to the proton and this attractive potential pulls them together to yield an electron pair localised to the internuclear region. This is illustrated in the LDQ structure of hydrogen fluoride shown below. Again, while the Lewis picture would predict four coincident electron pairs, the LDQ theory treatment yields only one close pair and two staggered spin tetrahedra that share a vertex. This makes sense as the other six electrons, unlike the two bonding electrons, do not significantly experience the attractive influence of the proton and hence their inter-electronic repulsions keep them separated. Example: molecular oxygen One of the major triumphs of LDQ theory over the traditional Lewis view is the ability of the former to generate an electronic structure which explains the paramagnetism of the ground state (3Σg− state) of molecular oxygen (O2). The LDQ structure of the ground state of O2 does not involve any electron pairs, in contrast with the Lewis structure of the molecule. Instead, the electrons are arranged as shown below. There are seven valence electrons of one spin which occupy two tetrahedra that share a common vertex (purple spheres), and the remaining five valence electrons of the other spin occupy two tetrahedra which share a common face (green spheres). Linnett postulated that this electronic arrangement reduces the magnitude of the inter-electronic repulsions in comparison with the case where the two spin sets have six electrons each. This arrangement results in a bond order of 2 and an excess of one electron spin, giving rise to the molecule's paramagnetism: both observations are in agreement with molecular orbital theory treatments of the molecule. In effect, the LDQ structure is equivalent to the combination of a two-centre one-electron bond (purple spin set) and a two-centre three-electron bond (green spin set). Example: methane Not all LDQ structures differ from those produced using Lewis’ bonding model. For example, an alkane such as methane has both spin tetrahedra totally coincident, resulting in four close-pairs of electrons as in the Lewis picture. Simplification of the theory: 2D structures The above three-dimensional LDQ structures are useful for visualising the molecular structures, but they can be laborious to construct. Hence, Linnett introduced two-dimensional structures, analogous to Lewis structures, that used dots and crosses to represent the relative spin states of electrons. An example is shown on the right for molecular oxygen. 
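To make the electron bookkeeping of the molecular oxygen example explicit (a worked restatement of counts already given in this article, nothing new):

\[
7 + 5 = 12\ \text{valence electrons}, \qquad S = \tfrac{1}{2}(7 - 5) = 1, \qquad \text{bond order} = \tfrac{4}{2} = 2,
\]

where 7 and 5 are the sizes of the two spin sets, the net spin of 1 accounts for the paramagnetism, and the four electrons entering the bond-order count are those in the internuclear region.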
Further, Linnett also modified the lines used in Lewis structures to account for electron coincidence and/or non-coincidence: a thin line represents an electron pair that is not close-paired, while a thick line represents a close-pair of electrons. This is exemplified best in the case of the hydrogen fluoride molecule, the dot-and-cross diagram of which is shown on the right. Here, the Lewis structure drawn on the left of the image is compared with the LDQ line structure on the right of the image. The LDQ structure thus expands on the Lewis structure by denoting if the electrons are coincident (thick line) or if they are spatially separated (thin lines).Additionally, by adding a dot or cross above/below the bond line, one can denote an odd number of electrons which are involved in the bond. This is illustrated well in the structure of nitric oxide (NO) shown below: More details about the LDQ structures of radicals such as NO are given in the section ‘Theoretical Description of Radicals’. Example: benzene LDQ theory has been lauded for its ability to produce an accurate electronic structure of benzene. The LDQ structure for benzene is shown below. In this model, each carbon atom is bonded to its neighbouring carbon atoms by three non-coincident electrons, two of one spin (e.g. green spheres) and one of the other spin (e.g. purple spheres). Thus, LDQ theory is able to predict the 1.5 bond order of the carbon-carbon bonds in benzene, the equivalence of all six carbon-carbon bonds and the stability of benzene due to the fact that none of the electrons in the carbon-carbon bonds are close-paired. This is in contrast with the valence bond picture which must invoke resonance between the two Kekulé forms of benzene in order to predict the non-integral bond order. Hence, the LDQ structure is lower in energy than either of the Kekulé forms due to a reduction in the magnitude of the inter-electronic repulsions in the former. The 2D LDQ structures of benzene using both the full dot-and-cross diagram and the simplified diagram are shown on the right. Again, the bonding situation determined using LDQ theory is in good agreement with molecular orbital theory results. This also highlights that the additional degree of freedom afforded by having two distinct spin sets in the LDQ approach allows a single electron in a bond to be shared equally between two atoms, which produces the above structure for benzene. Theoretical description of excited states The ability of LDQ theory to describe electronic distributions in terms of independent spin sets has facilitated studies of the excited states of various molecules, producing excited state electronic structures that are in agreement with experiments. This sets LDQ theory apart from both valence bond theory and Lewis bonding theory as these have not been previously utilised to study excited state electronic structures. Further, the LDQ theory approach to studying excited states produces three-dimensional redistributions of the electron density, in contrast with the single-electron vertical transitions produced using molecular orbital methods. Example: excited states of molecular oxygen As outlined previously, Linnett found that disposing the electrons into two spin sets, one with seven electrons and the other with only five electrons, produced the electronic structure of the ground state of O2 (see above). In contrast, one can look at the case where the two spin sets both contain six electrons to generate the excited states of O2. 
When the spin sets are non-coincident, the electronic structure shown below is produced. In this case, each spin set is the same but there is no correlation between them, giving rise to a cubic arrangement of the electrons. As the average distance between the electrons is shorter than in the ground state case, this disposition of the electrons results in a greater net magnitude of the inter-electronic repulsion energy as compared to the ground state. Hence, the above structure corresponds to the first excited state (1Δg state) of O2. If one further increases the degree of inter-electronic repulsions by forcing the electrons into coincident pairs, the electronic structure shown below is generated. This corresponds to the electronic structure of the second excited state of O2 (1Σg+ state), and also corresponds to the (incorrect) Lewis structure of the ground state of O2. Thus, a comparison of the magnitude of the inter-electronic repulsions in a series of possible molecular structures can be used to assess their relative energies and hence determine the ground and excited states. Additionally, it is found that in all three electronic structures, the net bond order is 2, as they all have four electrons in the spatial region between the oxygen nuclei. This example clearly demonstrates that “not all double bonds are created equal”. Example: excited states of acetylene Linnett also used the example of acetylene to illustrate the power of the LDQ approach for understanding the structures of the excited states of molecules. The dot-and-cross diagrams for both the ground state and the first excited state of acetylene are shown below. Upon excitation of the acetylene molecule, there is a net depletion of electron density from the bond region. This is captured in the above figure on the right as three electrons are withdrawn from the internuclear region and localised to the individual carbon atoms: resonance needs to be invoked in this case to explain how the three electrons can be distributed among the two carbon centres. Linnett rationalises this three-electron redistribution by arguing that it is required by the need to both form the two carbon-hydrogen bonds and retain the tetrahedral disposition of the electrons of a given spin. Interestingly, the excited state does not obey the octet rule, as the carbon atoms have an average of 6.5 valence electrons surrounding them. Further, the internuclear region contains only three electrons, the same as in the benzene molecule (see above), and this explains why the carbon-carbon bond length in the excited state of acetylene is the same as that in benzene. Most strikingly, the molecule changes its geometry upon excitation, going from a linear geometry to a trans-bent structure. This is in excellent agreement with both the landmark results of Ingold and King, which were the first demonstration of an excited state having a qualitatively different geometry than the ground state, and the results from molecular orbital theory methods. Thus, this example illustrates that LDQ theory can be a powerful tool for understanding the geometric rearrangements that occur when excited states are formed. Theoretical description of radicals A major drawback of Lewis’ bonding theory is its inability to predict and understand the structures of radicals due to the presence of unpaired single electrons.
LDQ theory has seen great success in explaining the structures of open shell systems such as nitric oxide or ozone due to the additional degree of freedom associated with having two independent spin sets. In the cases of nitric oxide and ozone, the maxima of the electron density of the localised orbitals result in distributions which closely mirror the dot-and-cross diagrams produced using LDQ theory. Example: nitric oxide The typical example of a radical that cannot be treated satisfactorily using Lewis structures is nitric oxide (NO). By allowing the electrons in the two spin sets to separate from each other, the LDQ structure for NO can be generated as shown below. Hence, the NO molecule is held together by a perfectly symmetric two-centre five-electron bond, made up of three electrons of one spin (green spheres) and two electrons of the other spin (purple spheres). This bonding arrangement satisfies the octet for both the nitrogen and oxygen atoms and results in a bond order of 2.5, in excellent agreement with the molecular orbital theory treatment of NO. Stability of radicals against dimerisation It has previously been highlighted that, from applications of LDQ theory, there exist two distinct classes of radicals: (a) radicals which do not have enough electrons to satisfy the octets of their constituent atoms and (b) radicals which obey the octet rule. Radicals of type (a) are thus highly reactive fragments which want to gain electrons to satisfy the octet rule, while radicals of type (b) are stable species by virtue of satisfying the octets of their constituent atoms. As an example, the cyanide (CN) radical shown below is a type (a) radical: a pair of isolated CN radicals possesses ten bonding electrons between them, while the cyanogen molecule (a dimeric combination of two CN radicals) has 14 bonding electrons. Hence, the dimerisation of CN to cyanogen is favourable as it increases the degree of bonding in the overall system and reduces the total energy. In contrast, the NO molecule is a type (b) radical: a pair of isolated NO molecules likewise possesses ten bonding electrons. However, the dimeric N2O2 molecule also has only ten bonding electrons, and hence there is no significant energetic benefit from the formation of the dimer. In fact, the formation of the nitrogen-nitrogen bond leads to an increase in the number of close-paired electrons and hence an increase in the total system energy, and so isolated NO molecules are stable against dimerisation in the gas phase. Application to chemical reactivity LDQ theory has enjoyed some success in studies of chemical reactivity, in particular organic reactions, as it can furnish one with the ability to predict chemical reactivity from analyses of the relevant reactant and transition state structures. Firestone's extensive work constitutes the most significant application of LDQ theory to chemical reactivity thus far. Firestone has previously used the concept of L-strain (see above) to analyse the activation energies in SN2, SH2 and E2 reactions, since the movement of electron density out of the internuclear region is commonly associated with the formation of transition states. Example: reactivity among different families of hydrocarbons LDQ structures, in particular the coincidence of electron pairs, can be used to rationalise and explain the stability and reactivity of certain families of molecules such as hydrocarbons. As shown for ethane, the electrons reside in two coincident tetrahedra which share a common vertex, and hence all the electrons are in close-pairs as expected from Lewis’ bonding model.
However, compare this with the situation in ethylene: again, all the electrons are in close-pairs but now there is no electron density along the internuclear axis. The result is that the energy required to overcome charge correlation and pair the electrons up is compensated to a lesser extent by the bonding in ethylene as compared with ethane. Thus, in agreement with experiments, the ethylene molecule should be highly reactive with respect to addition reactions. Finally, the above can be compared with the situation in acetylene. Here, the six electrons involved in bonding are all anti-coincident and so the energy cost associated with charge correlation is minimised. Indeed, in agreement with experiment, carbon-carbon triple bonds are far less reactive with respect to addition reactions than carbon-carbon double bonds, as transforming carbon-carbon triple bonds into double bonds also involves the formation of close-pairs of electrons, an energetically costly process. Application to hypervalent and three-centre bonding The strengths of LDQ theory have been applied to understand the structures and bonding modes of various molecules which, in the valence bond method, are described using the terms ‘hypervalent’ and ‘three-centre bonding’. Hypervalent molecules In the case of phosphorus pentachloride (PCl5), the example shown on the right, the central phosphorus atom is bonded to five chlorine atoms. In the traditional Lewis view, this violates the octet rule as the five phosphorus-chlorine bonds would result in a net ten electrons around the phosphorus atom. Thus, the molecule is assumed to expand its bonding beyond the octet, a situation known as hypervalent bonding. LDQ theory, however, presents a different view of the bonding in this molecule. The three equatorial chlorine atoms each form two-electron bonds with the central phosphorus atom. The remaining two axial chlorine atoms each contribute only one electron to a bond with the phosphorus atom, leaving a single electron to reside exclusively on the chlorine atom. Thus, the LDQ structure for PCl5 consists of three two-centre two-electron bonds and two two-centre one-electron bonds, thereby satisfying the octet rule and dispensing with the need to invoke hypervalent bonding. This LDQ structure is also in good agreement with quantum chemical calculations. Three-centre bonding LDQ theory has facilitated a more rigorous analysis of bonding in compounds which have conventionally been described in terms of three-centre two-electron bonding. For example, compare the various ways shown below of representing the bonding in the B2H7− anion, the Lewis acid-base adduct of a hydride anion (H−) with two borane (BH3) molecules. The LDQ approach thus enables each electron to localise in one of the boron-hydrogen internuclear bond regions, rather than being delocalised over the entire three-centre boron-hydrogen-boron moiety. This arrangement of the bonding electrons into two two-centre one-electron bonds benefits from a lowering of the net magnitude of the inter-electronic repulsions in the system. In comparison, as described by Linnett: “By allowing the two electrons independent ‘movement’ in a three-centre system, the three-centre bond allows the electrons a fairly considerable chance of being near one another”. Similarly, the resonance forms shown above also increase the degree of inter-electronic repulsions as the electrons are paired up in the boron-hydrogen bonds.
Thus, a more complete description of the bonding in B2H7− is obtained using LDQ theory as it can utilise two two-centre one-electron bonds, in comparison with the awkward three-centre two-electron bond or the resonance structures derived from the valence bond method. The situation is similar for diborane (B2H6), the archetypal example used to explain three-centre two-electron bonding. The above demonstrates that the structure produced using LDQ theory again yields the lowest degree of inter-electronic repulsions. Indeed, the separation of the electrons into two distinct spin sets has enabled the theory to expand the set of possible bonding arrangements, with two-centre one-electron, two-centre three-electron and two-centre five-electron bonding patterns all possible in the theory. Quantitative extension of Linnett double-quartet theory Along with the qualitative picture outlined above, LDQ theory has also been applied to computational studies. This quantitative extension is known as the non-pairing spatial orbital (NPSO) theory. In the NPSO method, the constituent wave functions are based on the corresponding qualitative LDQ structures. This approach has previously been shown to produce lower energies as compared to valence bond or molecular orbital wave functions derived from Lewis structures for molecules such as benzene, diborane or ozone. Hence, by the variational principle, the wave functions produced by NPSO methods are often a better approximation than those generated using molecular orbital theory methods. Relation to the electron localisation function It is possible to visualise the physical reality of disposing the two spin sets separately. Recent investigations have shown that the electron localisation function (ELF) can be successfully applied to understand the disposition of the electrons in a number of molecules. Example: acetylene The ELF of acetylene has been studied by a number of authors. The results of this analysis are indicated in the figure below. The ELF of acetylene thus contains a toroidal basin surrounding the carbon-carbon bond axis, rather than three discrete concentrations of electron density as would be expected from the Lewis structure for a triple bond. This is directly comparable to the bonding picture produced using LDQ theory (see above), highlighting that the theory can accurately reflect the bonding situation in multiply-bonded species. Example: digermyne A recent report on the disilyne and digermyne molecules has shown that their ELFs also result in a toroidal basin surrounding the internuclear axis. The toroidal basin represents the six electrons which are involved in the bonding between the two germanium centres in this molecule. The LDQ structure is in excellent agreement with these computational results: the toroid is angled in comparison with the case in acetylene due to the perturbation caused by the off-axis hydrogen atoms. Example: chlorine trifluoride In the VSEPR description of chlorine trifluoride (ClF3), the molecule adopts a structure based on a trigonal bipyramidal arrangement of electron pairs, with the central chlorine atom violating the octet rule. This is typically rationalised by invoking d orbital participation in the bonding of the sp3d hybridised chlorine centre. The ELF of ClF3 is presented below. The ELF analysis of ClF3 indicates that there is a single toroidal-shaped basin at the 'back' of each fluorine atom, corresponding analogously to the three lone pairs arranged in a ring as generated for the HF molecule (see above).
This is in contrast with the Lewis structure, which would place the fluorine lone pair electrons into discrete coincident pairs. Further, the lone pairs of electrons associated with the central chlorine atom reside in two kidney-shaped lobes which lie in the equatorial plane along with one of the fluorine atoms. This structure, consistent with the LDQ structure of the molecule, is also consistent with the VSEPR structure, as the more diffuse chlorine lone pairs distort the molecular geometry and result in the bent, T-shaped planar geometry that is observed. In contrast, the bonding situation described by LDQ theory differs greatly from that produced using valence bond theory. Rather than having three two-centre two-electron bonds and two lone pairs, necessitating the invocation of hypervalent bonding for the chlorine atom, the LDQ structure instead allows the axial fluorine atoms to form two-centre one-electron bonds. This, when combined with a two-centre two-electron bond to the equatorial fluorine atom and the two chlorine lone pairs, restores the octet of the chlorine atom. As evidenced by the increased bond length of the axial fluorine-chlorine bonds as compared to the equatorial fluorine-chlorine bond, LDQ theory is able to more accurately describe the electronic structure of ClF3 as compared to valence bond theory. Strengths and weaknesses of the theory Strengths of the approach One of the main benefits is that many molecular structures, such as molecular oxygen and ozone, can be represented using a single LDQ structure without invoking any resonance structures. This lesser reliance on resonance structures is favourable as, according to Linnett, resonance structures are not satisfactory descriptions of bonding because the ‘resonance stabilisation energy’ is not easily attributable to any particular molecular feature. Several other strengths of the approach include: It can be used to generate the electronic structures of species with π systems, affording greater precision for systems where there are partial charges associated with the constituent atoms. It is able to treat individual molecular features, such as π systems, separately from the rest of the structure. This is in contrast with molecular orbital theory approaches which often require the simultaneous treatment of σ and π systems. It can be used to understand and predict the relative bond strengths of a single species in cases where a number of structures with different bond orders are possible. The success of LDQ theory in elucidating structures akin to those generated using quantum chemical calculations has also afforded a better understanding of the meaning of the dots and crosses used in the theory. Accordingly, the dots and crosses have been associated with the centroids of charge of the localised orbitals, with the two sets of spins distinguished in the charge analyses. Weaknesses of the approach LDQ theory greatly diminishes, but does not completely remove, the need for invoking resonance structures to explain the bonding in certain molecules. While the need for resonance structures is reduced, it is still necessary to invoke resonance for certain molecules such as semiquinones, nitryl chloride or nitrogen dioxide. Additionally, like its Lewis theory progenitor, the theory ignores the energy differences between s and p orbitals. This has garnered criticism from authors who have dismissed LDQ theory as it was seen to invoke "the inert gas magic".
Other authors have also claimed that LDQ theory cannot be easily extended "to larger systems for which its use generally becomes very intuitive" and that its results are "as ambiguous as those of resonance theory". Luder’s extension of Linnett double-quartet theory – electron-repulsion theory Linnett's vision of double-quartet theory was limited to elements which did not expand their valence beyond the octet: this produced the familiar spin tetrahedra. However, later work by W. F. Luder extended the principles of LDQ theory to produce electronic structures with more than four electrons in each spin set. This extension, called “electron repulsion theory” by Luder, could be applied to elements of the d and f blocks in the periodic table. For example, the structure of the zinc atom produced using electron-repulsion theory is shown above. The author asserts that the s electrons occupy the axial positions, leaving the d electrons to occupy the positions at the vertices of two pentagonal bases of the two constituent pyramids. The electronic structure of the ytterbium atom can be constructed similarly. The s electrons are again assumed to occupy the axial positions while the f electrons occupy the positions at the vertices of two heptagonal bases of the two constituent pyramids. While these results are interesting, they have been contested in the scientific literature due to Luder's abandonment of the octet rule and the author's controversial views on spin correlation. Indeed, one author notes that Luder's works “[do] a great disservice to Linnett and his method”. Recent applications of Linnett double-quartet theory Recently, there has been a modest resurgence of LDQ theory in the scientific literature, especially among theoretical chemists. For example, a recent study found that there is a qualitative correspondence between the molecular structures produced using LDQ theory and those suggested by dynamic Voronoi Metropolis sampling. Another recent example is the correspondence of the results obtained using LDQ theory to those produced using the Fermi-Löwdin orbital self-interaction correction (FLO-SIC) method. It was shown that this method generates structures which can successfully house two electrons of one spin in a given ‘spin channel’, and the remaining single electron can be housed in the other spin channel: this can be directly related to the LDQ structures of many radicals (see for instance NO above). Further, the electronic geometries for many ground state molecules, such as carbon dioxide, produced via FLO-SIC methods were found to generally agree with those derived from LDQ theory. In a subsequent publication, the authors posited that the Fermi orbital descriptors utilised in their work can be correlated to the electron spins generated in LDQ analyses. The authors also noted that the use of LDQ theory to produce model electronic structures of molecules for quantum calculations results in calculated dipole moments that agree more closely with experiments. References Wikipedia Student Program Molecules Chemical bonding
Linnett double-quartet theory
[ "Physics", "Chemistry", "Materials_science" ]
7,500
[ "Molecular physics", "Molecules", "Condensed matter physics", "Physical objects", "nan", "Chemical bonding", "Atoms", "Matter" ]
69,296,068
https://en.wikipedia.org/wiki/Northstarite
Northstarite is an extremely rare lead-tellurite-thiosulfate mineral with an ideal formula of Pb6(Te4+O3)5(S6+O3S2-). Northstarite was first discovered in 2019 by Charles Adan in the North Star Mine of the Tintic Mining District, Juab County, Utah, USA, and is named after this type locality. Northstarite is only the fourth thiosulfate mineral known to exist on Earth, and although all of these thiosulfate minerals contain lead as an essential component, northstarite is the first thiosulfate species containing groups of both thiosulfate and tellurite (Te4+O3). Occurrence Northstarite is a mineral found in the oxidation zone, meaning that it occurs near the Earth's surface and formed as a result of the chemical decomposition of other minerals that are unstable at the surface. Northstarite occurs in small rock cavities with quartz, baryte, enargite, and pyrite, but is also associated with anglesite, azurite, chrysocolla, fluorapatite, plumbogummite, tellurite, zincospiroffite, and a poorly crystallised copper tellurite. Northstarite is associated with another new mineral called adanite, which was also discovered in the North Star Mine and shares a similar chemical composition with northstarite; the holotype specimen of northstarite originated from the holotype specimen of adanite. Physical properties The crystals of northstarite are about 1 mm in length and are short and prismatic, with pyramidal terminations. The irregular or uneven faces of the crystals prevent accurate measurements, but rough indices have been recorded as {100}, {101}, and {101} based on the general appearance of the crystals and the Donnay-Harker law. The crystals display no twinning or cleavage, and have an uneven fracture. Northstarite is very brittle. Based on scratch tests, the hardness of northstarite is approximately 2 on the Mohs scale of hardness. Northstarite displays an adamantine luster and has transparent to translucent crystals with a beige color and a white streak. Optical properties Northstarite is a nonpleochroic and uniaxial negative mineral. The calculated average index of refraction for northstarite is 2.15, based on the empirical formula Pb5.80Sb3+0.05Te4+5.04S6+1.02O18. Chemical properties Northstarite is a thiosulfate mineral that contains tellurite. The empirical formula of northstarite is Pb5.80Sb3+0.05Te4+5.04S6+1.02O18; when simplified, this becomes the ideal formula Pb6(Te4+O3)5(S6+O3S2-). Chemically, northstarite resembles adanite and schieffelinite, and also eztlite to an extent. Analyses indicate that northstarite is anhydrous. When introduced to concentrated hydrochloric acid at room temperature, the crystals of northstarite are slowly soluble. In the reported chemical composition, the measured SO3 is allocated between SO3 and S on the basis of S6+:S2– = 1:1. X-ray crystallography Northstarite is in the hexagonal crystal system with a space group of P63. The unit cell dimensions are a = 10.253 Å and c = 11.6747 Å, with a unit cell volume of 1061.50 Å3. Powder X-ray diffraction data have also been reported. See also List of minerals References Wikipedia Student Program Lead minerals Tellurite minerals Thiosulfates Mixed anion compounds Hexagonal minerals Minerals in space group 173 Minerals described in 2020
Northstarite
[ "Physics", "Chemistry" ]
829
[ "Ions", "Matter", "Mixed anion compounds" ]
69,296,107
https://en.wikipedia.org/wiki/Balliranoite
Balliranoite ((Na,K)6Ca2(Si6Al6O24)Cl2(CO3)) is a mineral that was discovered at the Monte Somma – Vesuvio volcanic complex, Campania, Italy. This mineral is named in honor of Paolo Ballirano (b. 1964), Italian crystallographer and professor in the Department of Earth Sciences, University of Rome ‘‘La Sapienza’’, who has made important contributions to the crystal chemistry of cancrinite-group minerals. Occurrence Balliranoite is found in an alkaline skarn-like rock composed of orthoclase, phlogopite, clinohumite, calcite, diopside, pargasite, haüyne, apatite and balliranoite, formed as a product of the metasomatic interactions between alkaline magma and limestone. These chemical alterations by hydrothermal and other fluids replace elements in the chemical structure, changing the mineral composition of the rock. Mineral properties The idealized formula for balliranoite is (Na,K)6Ca2(Si6Al6O24)Cl2(CO3), and the empirical formula, based on 12 (Si + Al) atoms per formula unit (Si with isomorphic substitution by Al), is Na4.70Ca2.53K0.73(Si6.02Al5.98O23.995)Cl2.34(CO3)0.82(SO4)0.27·0.12H2O. Balliranoite is a uniaxial (+) mineral with ω = 1.523(2) and ε = 1.525(2). X-ray crystallography Powder X-ray diffraction data have been reported for balliranoite. See also List of minerals References Hexagonal minerals Chlorine-containing natural products Mixed anion compounds
Balliranoite
[ "Physics", "Chemistry" ]
391
[ "Ions", "Matter", "Mixed anion compounds" ]
69,296,350
https://en.wikipedia.org/wiki/Edoylerite
Edoylerite is a rare mercury-containing mineral. Edoylerite was first discovered in 1961 by Edward H. Oyler, after whom the mineral is named, in a meter-sized boulder at the Clear Creek claim in San Benito County, California. The Clear Creek claim is located near the abandoned Clear Creek mercury mine. The material from the boulder underwent several analyses, including X-ray powder diffraction (XRD), a single-crystal study, and a preliminary electron microprobe analysis (EMA). Using these analyses it was determined that this was a new mineral, but the nature of the material at the time prevented further investigation. It was not until 1986, with the discovery of crystals large enough for a crystal structure determination and a sufficient quantity for a full mineralogical characterization, that the study was renewed. The new edoylerite crystals were found in the same area at the Clear Creek claim but were situated in an outcrop of silica-carbonate rock. This silica-carbonate rock was mineralized by cinnabar following the hydrothermal alteration of the serpentinite in the rock. Edoylerite is a primary alteration product of cinnabar. Though found with cinnabar, the crystals of edoylerite do not typically exceed 0.5 mm in length. The ideal chemical formula for edoylerite is Hg2+3Cr6+O4S2. Occurrence Edoylerite is found in association with cinnabar, terlinguaite, mercury, wattersite, deanesmithite, and opal. When found with these minerals, the edoylerite crystals form on the surfaces of the other minerals after the mercury mineralization. The minerals formed during the mercury mineralization, in rough order of abundance, are cinnabar, mercury, edgarbaileyite, metacinnabar, montroydite, eglestonite, calomel, an unidentified yellow massive cryptocrystalline mercury mineral, edoylerite, wattersite, giannellaite, mosesite, deanesmithite, and one occurrence of szymanskiite. Edoylerite most commonly occurs with cinnabar and is a primary alteration product of cinnabar. Edoylerite is a rare mineral, as it has only been found at one locality, the Clear Creek claim in San Benito County, California, near the Clear Creek mine. At the edoylerite locality, the host rock is composed of quartz, chalcedony, opal, ferroan magnesite, dolomite, goethite, and minor chlorite. In spite of a considerable search, only microgram quantities of edoylerite have been found since the mineral was originally discovered in 1961. Physical properties Edoylerite is a canary-yellow to orangish-yellow mineral, with an adamantine luster. The crystals are transparent to translucent, but a large grouping of the massive material appears opaque. The average length of a crystal is 0.2 mm. Edoylerite occurs as acicular to prismatic crystals that are elongated on the [101] axis, which gives them a slender, needle-like crystal shape or a tabular/platy crystal shape. Its crystals are characterized by the {010}, {11}, {001}, and {101} faces. Edoylerite is brittle and inflexible, with very good cleavage along the {010} planes and fair cleavage on the {101} planes. It exhibits subconchoidal fracture and is nonfluorescent and nonmagnetic. The measured density of edoylerite is 7.13 g/cm3. Optical properties Edoylerite is optically biaxial, which means it refracts light along two optic axes. The refractive indices are all greater than 1.78. It displays weak pleochroism and strong bireflectance and absorption. In polished sections, edoylerite is weakly bireflectant and weakly pleochroic with light gray colors.
In plane-polarized light, edoylerite is bluish-gray to gray with brilliant pale yellow internal reflections. The pleochroic color depends on the viewing direction: in the x-direction the color is lemon-yellow, in the y-direction it is lemon-yellow, and in the z-direction it is a darker lemon-yellow. Chemical properties In cold mineral acids, edoylerite is insoluble or only slightly soluble, but in aqua regia it dissolves slowly. After 24 hours in aqua regia at a constant temperature of 115 °C under infrared radiation, the mineral turns greenish yellow. At higher temperatures under the same conditions, the mineral loses its mercury (Hg) and sulfur (S) atoms, resulting in a change of color to yellowish black. Upon cooling, it changes from yellowish black to a dark green. The green residue from this experiment gives the X-ray powder diffraction pattern of Cr2O3 (the synthetic equivalent of eskolaite). Edoylerite is photosensitive and will turn olive-green after several months of exposure to visible light. Chemical composition The empirical chemical formula for edoylerite is Hg2+3.26Cr6+0.97O4S2.16. Simplified, the formula is Hg2+3Cr6+O4S2. Wattersite, Hg1+4Hg2+Cr6+O6, and deanesmithite, Hg1+2Hg2+3Cr6+O5S2, are related species of edoylerite and are chemically similar; however, they differ in their bonding. Wattersite differs from edoylerite in its bonding: there are no Hg-S chains in its structure. The difference between deanesmithite and edoylerite is that three of the four Hg2+ are in distorted octahedral coordination; accordingly, the unit cell dimensions of the two minerals are similar but not identical. X-ray Powder Diffraction Data Edoylerite is in the monoclinic crystal system, with space group P21/a. The unit cell dimensions are a = 7.524(7) Å, b = 14.819(8) Å, c = 7.443(5) Å, α = 90.00°, β = 118.72(5)°, γ = 90.00°. See also List of minerals References Mercury minerals Monoclinic minerals Chromate minerals Minerals in space group 14 Mixed anion compounds
Edoylerite
[ "Physics", "Chemistry" ]
1,355
[ "Ions", "Matter", "Mixed anion compounds" ]
61,406,091
https://en.wikipedia.org/wiki/Decomposition%20of%20a%20module
In abstract algebra, a decomposition of a module is a way to write a module as a direct sum of modules. A type of decomposition is often used to define or characterize modules: for example, a semisimple module is a module that has a decomposition into simple modules. Given a ring, the types of decomposition of modules over the ring can also be used to define or characterize the ring: a ring is semisimple if and only if every module over it is a semisimple module. An indecomposable module is a module that is not a direct sum of two nonzero submodules. Azumaya's theorem states that if a module has a decomposition into modules with local endomorphism rings, then all decompositions into indecomposable modules are equivalent to each other; a special case of this, especially in group theory, is known as the Krull–Schmidt theorem. A special case of a decomposition of a module is a decomposition of a ring: for example, a ring is semisimple if and only if it is a direct sum (in fact a product) of matrix rings over division rings (this observation is known as the Artin–Wedderburn theorem). Idempotents and decompositions To give a direct sum decomposition of a module into submodules is the same as to give orthogonal idempotents in the endomorphism ring of the module that sum up to the identity map. Indeed, if M = ⊕_{i ∈ I} M_i, then, for each i ∈ I, the linear endomorphism e_i : M → M given by the natural projection M → M_i followed by the natural inclusion M_i → M is an idempotent. The e_i are clearly orthogonal to each other (e_i e_j = 0 for i ≠ j) and they sum up to the identity map: 1_M = Σ_{i ∈ I} e_i as endomorphisms (here the summation is well-defined since it is a finite sum at each element of the module). Conversely, each set {e_i : i ∈ I} of orthogonal idempotents such that only finitely many e_i(x) are nonzero for each x ∈ M and 1_M = Σ e_i determines a direct sum decomposition by taking the M_i to be the images e_i(M). This fact already puts some constraints on a possible decomposition of a ring: given a ring R, suppose there is a decomposition R = ⊕_{a ∈ A} I_a of R as a left module over itself, where the I_a are left submodules; i.e., left ideals. Each endomorphism can be identified with a right multiplication by an element of R; thus, I_a = R e_a where the e_a are idempotents of R. The summation of idempotent endomorphisms corresponds to the decomposition of the unity of R: 1 = Σ_{a ∈ A} e_a, which is necessarily a finite sum; in particular, A must be a finite set. For example, take R = M_n(D), the ring of n-by-n matrices over a division ring D. Then R, as a left module over itself, is the direct sum of n copies of the columns; each column is a simple left R-submodule or, in other words, a minimal left ideal. Let R be a ring. Suppose there is a (necessarily finite) decomposition R = R_1 ⊕ … ⊕ R_n of it as a left module over itself into two-sided ideals R_i of R. As above, R_i = R e_i for some orthogonal idempotents e_i such that 1 = e_1 + … + e_n. Since each R_i is a two-sided ideal, e_i R ⊆ R_i and so e_i R e_j ⊆ R_i ∩ R_j = 0 for i ≠ j. Then, for each i and each r in R, e_i r = Σ_j e_i r e_j = e_i r e_i = Σ_j e_j r e_i = r e_i. That is, the e_i are in the center; i.e., they are central idempotents. Clearly, the argument can be reversed and so there is a one-to-one correspondence between the direct sum decompositions into ideals and the orthogonal central idempotents summing up to the unity 1. Also, each R_i itself is a ring in its own right, with unity e_i, and, as a ring, R is the product ring R_1 × … × R_n. For example, again take R = M_n(D). This ring is a simple ring; in particular, it has no nontrivial decomposition into two-sided ideals. Types of decomposition There are several types of direct sum decompositions that have been studied: Semisimple decomposition: a direct sum of simple modules. Indecomposable decomposition: a direct sum of indecomposable modules. A decomposition with local endomorphism rings (cf.
Azumaya's theorem, below): a direct sum of modules whose endomorphism rings are local rings (a ring is local if, for each element x, either x or 1 − x is a unit). Serial decomposition: a direct sum of uniserial modules (a module is uniserial if the lattice of submodules is a finite chain). Since a simple module is indecomposable, a semisimple decomposition is an indecomposable decomposition (but not conversely). If the endomorphism ring of a module is local, then, in particular, it cannot have a nontrivial idempotent: the module is indecomposable. Thus, a decomposition with local endomorphism rings is an indecomposable decomposition. A direct summand is said to be maximal if it admits an indecomposable complement. A decomposition M = ⊕_{i ∈ I} M_i is said to complement maximal direct summands if, for each maximal direct summand L of M, there exists a subset J ⊆ I such that M = L ⊕ (⊕_{j ∈ J} M_j). Two decompositions M = ⊕_{i ∈ I} M_i = ⊕_{j ∈ J} N_j are said to be equivalent if there is a bijection φ : I → J such that, for each i ∈ I, M_i ≅ N_{φ(i)}. If a module admits an indecomposable decomposition complementing maximal direct summands, then any two indecomposable decompositions of the module are equivalent. Azumaya's theorem In the simplest form, Azumaya's theorem states: given a decomposition M = ⊕_{i ∈ I} M_i such that the endomorphism ring of each M_i is local (so the decomposition is indecomposable), each indecomposable decomposition of M is equivalent to this given decomposition. The more precise version of the theorem states: still given such a decomposition, if M = N ⊕ K, then: if nonzero, N contains an indecomposable direct summand; if N is indecomposable, the endomorphism ring of it is local and N is complemented by the given decomposition, M = N ⊕ (⊕_{i ≠ i_0} M_i), and so N ≅ M_{i_0}, for some i_0 ∈ I; for each i ∈ I, there exist direct summands N′ of N and K′ of K such that M_i ≅ N′ ⊕ K′. The endomorphism ring of an indecomposable module of finite length is local (e.g., by Fitting's lemma) and thus Azumaya's theorem applies to the setup of the Krull–Schmidt theorem. Indeed, if M is a module of finite length, then, by induction on length, it has a finite indecomposable decomposition M = M_1 ⊕ … ⊕ M_n, which is a decomposition with local endomorphism rings. Now, suppose we are given an indecomposable decomposition M = N_1 ⊕ … ⊕ N_m. Then it must be equivalent to the first one: so m = n and M_i ≅ N_{σ(i)} for some permutation σ of {1, …, n}. More precisely, since N_1 is indecomposable, M = N_1 ⊕ (⊕_{i ≠ i_1} M_i) for some i_1. Then, since N_2 is indecomposable, it is likewise complemented by a sub-sum of the remaining M_i, and so on; i.e., complements to each partial sum N_1 ⊕ … ⊕ N_k can be taken to be direct sums of some M_i's. Another application is the following statement (which is a key step in the proof of Kaplansky's theorem on projective modules): given an element x ∈ N, where N is a direct summand of M, there exist a direct summand H of N and a subset J ⊆ I such that x ∈ H and H ≅ ⊕_{j ∈ J} M_j. To see this, choose a finite set F ⊆ I such that x ∈ ⊕_{j ∈ F} M_j. Then, writing M = N ⊕ K, by Azumaya's theorem, M = (⊕_{j ∈ F} M_j) ⊕ N_1 ⊕ K_1 with some direct summands N_1 of N and K_1 of K, and then, by modular law, N = N_1 ⊕ H with H = N ∩ ((⊕_{j ∈ F} M_j) ⊕ K_1). Then, since H is, up to isomorphism, a direct summand of ⊕_{j ∈ F} M_j, we can write ⊕_{j ∈ F} M_j ≅ H ⊕ H′, which implies, since F is finite, that H ≅ ⊕_{j ∈ J} M_j for some J by a repeated application of Azumaya's theorem. In the setup of Azumaya's theorem, if, in addition, each M_i is countably generated, then there is the following refinement (due originally to Crawley–Jónsson and later to Warfield): N is isomorphic to ⊕_{j ∈ J} M_j for some subset J ⊆ I. (In a sense, this is an extension of Kaplansky's theorem and is proved by the two lemmas used in the proof of the theorem.) It is not known whether the assumption "M_i countably generated" can be dropped; i.e., whether this refined version is true in general.
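As a concrete illustration of this setup (a standard textbook example, not taken from the sources cited here), consider the abelian group Z/12 as a module over the integers. In LaTeX notation:

\[ \mathbb{Z}/12 \;\cong\; \mathbb{Z}/4 \oplus \mathbb{Z}/3, \qquad \operatorname{End}(\mathbb{Z}/4) \cong \mathbb{Z}/4, \qquad \operatorname{End}(\mathbb{Z}/3) \cong \mathbb{Z}/3. \]

Both endomorphism rings are local: Z/4 has the unique maximal ideal 2Z/4, and Z/3 is a field. Azumaya's theorem therefore applies, and any indecomposable decomposition of Z/12 is equivalent to this one, as the Krull–Schmidt theorem also guarantees for this module of finite length.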
Decomposition of a ring On the decomposition of a ring, the most basic but still important observation, known as the Wedderburn–Artin theorem, is this: given a ring R, the following are equivalent: 1. R is a semisimple ring; i.e., R is a semisimple left module over itself. 2. R ≅ M_{n_1}(D_1) × … × M_{n_r}(D_r) for division rings D_1, …, D_r, where M_n(D) denotes the ring of n-by-n matrices with entries in D; the positive integer r, the division rings D_i, and the positive integers n_i are determined (the latter two up to permutation) by R. 3. Every left module over R is semisimple. To show 1. ⇒ 2., first note that if R is semisimple then we have an isomorphism of left R-modules R ≅ ⊕_{i=1}^{r} I_i^{⊕ n_i}, where the I_i are mutually non-isomorphic minimal left ideals. Then, with the view that endomorphisms act from the right, R ≅ End(R) ≅ ⊕_{i=1}^{r} End(I_i^{⊕ n_i}), where each End(I_i^{⊕ n_i}) can be viewed as the matrix ring M_{n_i}(D_i) over D_i = End(I_i), which is a division ring by Schur's lemma. The converse holds because the decomposition of 2. is equivalent to a decomposition into minimal left ideals = simple left submodules. The equivalence 1. ⇔ 3. holds because every module is a quotient of a free module, and a quotient of a semisimple module is semisimple. See also Pure-injective module Notes References Frank W. Anderson, Lectures on Non-Commutative Rings, University of Oregon, Fall 2002. T. Y. Lam, Bass's work in ring theory and projective modules [MR 1732042]. R. Warfield: Exchange rings and decompositions of modules, Math. Annalen 199 (1972), 31–36. Module theory
Decomposition of a module
[ "Mathematics" ]
2,005
[ "Fields of abstract algebra", "Module theory" ]
61,408,330
https://en.wikipedia.org/wiki/STAT%20inhibitors
STAT inhibitors are drugs which target signal transducer and activator of transcription (STAT) proteins, a family of cytoplasmic transcription factors. Inhibitors of STAT proteins are being developed for use in cancer therapy. See also JAK-STAT signaling pathway References Enzyme inhibitors
STAT inhibitors
[ "Chemistry" ]
57
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
76,697,957
https://en.wikipedia.org/wiki/Zermelo%27s%20categoricity%20theorem
Zermelo's categoricity theorem was proven by Ernst Zermelo in 1930. It states that all models of a certain second-order version of the Zermelo-Fraenkel axioms of set theory are isomorphic to a member of a certain class of sets. Statement Let ZF2 denote Zermelo-Fraenkel set theory, but with a second-order version of the axiom of replacement, namely the second-order universal closure of the axiom schema of replacement: ∀F ∀a ∃b ∀y (y ∈ b ↔ ∃x (x ∈ a ∧ y = F(x))), where the second-order variable F ranges over (class) functions (p. 289). Then every model of ZF2 is isomorphic to a set V_κ in the von Neumann hierarchy, for some inaccessible cardinal κ. Original presentation Zermelo originally considered a version of ZF2 with urelements. Rather than using the modern satisfaction relation, he defines a "normal domain" to be a collection of sets along with the true membership relation that satisfies ZF2 (p. 9). Related results Dedekind proved that the second-order Peano axioms hold in a model if and only if the model is isomorphic to the true natural numbers (pp. 5–6; p. 1). Uzquiano proved that when replacement is removed from ZF2 and a second-order version of Zermelo set theory with a second-order version of separation is considered instead, there exist models not isomorphic to any V_δ for δ a limit ordinal (p. 396). References Set theory Theorems in the foundations of mathematics Model theory
Zermelo's categoricity theorem
[ "Mathematics" ]
286
[ "Foundations of mathematics", "Set theory", "Mathematical logic", "Model theory", "Mathematical problems", "Mathematical theorems", "Theorems in the foundations of mathematics" ]
63,601,960
https://en.wikipedia.org/wiki/Information%20fluctuation%20complexity
Information fluctuation complexity is an information-theoretic quantity defined as the fluctuation of information about entropy. It is derivable from fluctuations in the predominance of order and chaos in a dynamic system and has been used as a measure of complexity in many diverse fields. It was introduced in a 1993 paper by Bates and Shepard. Definition The information fluctuation complexity of a discrete dynamic system is a function of the probability distribution of its states when it is subject to random external input data. The purpose of driving the system with a rich information source such as a random number generator or a white noise signal is to probe the internal dynamics of the system in much the same way as a frequency-rich impulse is used in signal processing. If a system has N possible states and the state probabilities p_i are known, then its information entropy is H = Σ_i p_i I_i, where I_i = −log2 p_i is the information content of state i. The information fluctuation complexity of the system, σ_I, is defined as the standard deviation or fluctuation of I about its mean H: σ_I = √( Σ_i p_i (I_i − H)^2 ) or, equivalently, σ_I = √( Σ_i p_i I_i^2 − H^2 ). The fluctuation of state information σ_I is zero in a maximally disordered system with all p_i = 1/N; the system simply mimics its random inputs. σ_I is also zero if the system is perfectly ordered, with only one fixed state (p_1 = 1), regardless of the inputs. σ_I is non-zero between these two extremes with a mixture of higher-probability states and lower-probability states populating state space. Fluctuation of information allows for memory and computation As a complex dynamic system evolves over time, how it transitions between states depends on external stimuli in an irregular way. At times it may be more sensitive to external stimuli (unstable) and at other times less sensitive (stable). When a given state has multiple possible next-states, external information determines which one will be next and the system gains this information by following a particular trajectory in state space. However, if several different states all lead to the same next-state, then upon entering the next-state the system loses information about which state preceded it. Thus, a complex system exhibits alternating information gain and loss as it evolves over time. This alternation or fluctuation of information is equivalent to remembering and forgetting — temporary information storage or memory — an essential feature of non-trivial computation. The gain or loss of information associated with transitions between states can be related to state information. The net information gain Γ_ij of a transition from state i to state j is the information gained when leaving state i less the information lost when entering state j: Γ_ij = −log2 p(i→j) + log2 p(i←j). Here p(i→j) is the forward conditional probability that if the present state is i then the next state will be j, and p(i←j) is the reverse conditional probability that if the present state is j then the previous state was i. The conditional probabilities are related to the transition probability p_ij, the probability that a transition from state i to state j occurs, by: p_ij = p_i p(i→j) = p_j p(i←j). Eliminating the conditional probabilities: Γ_ij = −log2 (p_ij/p_i) + log2 (p_ij/p_j) = log2 p_i − log2 p_j = I_j − I_i. Therefore, the net information gained by the system as a result of the transition depends only on the increase in state information from the initial to the final state. It can be shown that this is true even for multiple consecutive transitions. The relation Γ_ij = I_j − I_i is reminiscent of the relation between force and potential energy: I is like potential and Γ is like force, in the sense that Γ depends only on the difference in I between the two states.
External information "pushes" a system "uphill" to a state of higher information potential to accomplish information storage, much like pushing a mass uphill to a state of higher gravitational potential stores energy. The amount of energy stored depends only on the final height, not on the path up the hill. Similarly, the amount of information stored does not depend on the transition path between an initial common state and a final rare state. Once a system reaches a rare state with high information potential, it may then "fall" back to a common state, losing previously stored information. It may be useful to compute the standard deviation of about its mean (which is zero), namely the fluctuation of net information gain but takes into account multi-transition memory loops in state space and therefore should be more indicative of the computational power of a system. Moreover, is easier to apply because there can be many more transitions than states. Chaos and order A dynamic system that is sensitive to external information (unstable) exhibits chaotic behavior whereas one that is insensitive to external information (stable) exhibits orderly behavior. A complex system exhibits both behaviors, fluctuating between them in dynamic balance when subject to a rich information source. The degree of fluctuation is quantified by ; it captures the alternation in the predominance of chaos and order in a complex system as it evolves over time. Example: rule 110 variant of the elementary cellular automaton Source: The rule 110 variant of the elementary cellular automaton has been proven to be capable of universal computation. The proof is based on the existence and interactions of cohesive and self-perpetuating cell patterns known as gliders, which are examples of emergent phenomena associated with complex systems and which imply the capability of groups of automaton cells to remember that a glider is passing through them. It is therefore to be expected that there will be memory loops in state space resulting from alternations of information gain and loss, instability and stability, chaos and order. Consider a 3-cell group of adjacent automaton cells that obey rule 110: . The next state of the center cell depends on the present state of itself and the end cells as specified by the rule: To compute the information fluctuation complexity of this system, attach a driver cell to each end of the 3-cell group to provide random external stimuli like so, , such that the rule can be applied to the two end cells. Next, determine what the next state will be for each possible present state and for each possible combination of driver cell contents, in order to determine the forward conditional probabilities. The state diagram of this system is depicted below, with circles representing states and arrows representing transitions between states. The eight possible states of this system, to , are labeled with the octal equivalent of the 3-bit contents of the 3-cell group: 7 to 0. The transition arrows are labeled with forward conditional probabilities. Notice that there is variability in the divergence and convergence of arrows corresponding to variability in gain and loss of information originating from the driver cells. The forward conditional probabilities are determined by the proportion of possible driver cell contents that drive a particular transition. For example, for the four possible combinations of two driver cell contents, state 7 leads to states 5, 4, 1 and 0 and therefore , , , and are each or 25%. 
Similarly, state 0 leads to states 0, 1, 0 and 1, and therefore p(0→0) and p(0→1) are each 1/2 or 50%. And so forth. The state probabilities are related by p_j = Σ_i p_i p(i→j) and Σ_j p_j = 1. These linear algebraic equations can be solved for the state probabilities, with the following result: p_7 = 4/17, p_3 = 2.5/17, p_2 = 0.5/17 and p_6 = p_5 = p_4 = p_1 = p_0 = 2/17. The information entropy and the complexity can then be computed from the state probabilities: H ≈ 2.86 bits and σ_I ≈ 0.56 bits. Note that the maximum possible entropy for eight states is log2 8 = 3 bits, which is the case when all p_i = 1/8. Thus, rule 110 has a relatively high entropy or state utilization, about 95% of the maximum. However, this does not preclude a considerable fluctuation of state information about entropy and thus a considerable value of the complexity, whereas maximum entropy would preclude complexity. An alternative method can be used to obtain the state probabilities when the analytical method used above is unfeasible: simply drive the system at its inputs (the driver cells) with a random source for many generations and observe the state probabilities empirically. When this is done via computer simulation for 10 million generations, the empirical results agree with the analytical ones. Since both σ_I and H increase with system size, their dimensionless ratio σ_I/H, the relative information fluctuation complexity, is used to compare systems of different sizes. Notice that the empirical and analytical results agree for the 3-cell automaton and that the relative complexity levels off by about 10 cells. In the paper by Bates and Shepard, σ_I is computed for all elementary cellular automaton rules, and it was observed that the ones that exhibit slow-moving gliders and possibly stationary objects, as rule 110 does, are highly correlated with large values of σ_I. σ_I can therefore be used as a filter to select candidate rules for universal computation, which is challenging to prove. Applications Although the derivation of the information fluctuation complexity formula is based on information fluctuations in dynamic systems, the formula depends only on state probabilities and therefore is also applicable to any probability distribution, including those derived from static images or text. Over the years the original paper has been referred to by researchers in many diverse fields: complexity theory, complex systems science, complex networks, chaotic dynamics, many-body localization entanglement, environmental engineering, ecological complexity, ecological time-series analysis, ecosystem sustainability, air and water pollution, hydrological wavelet analysis, soil water flow, soil moisture, headwater runoff, groundwater depth, air traffic control, flow patterns and flood events, topology, economics, market forecasting of metal and electricity prices, health informatics, human cognition, human gait kinematics, neurology, EEG analysis, education, investing, artificial life and aesthetics. References Information theory Entropy and information Statistical randomness Complex systems theory Measures of complexity Chaos theory Automata (computation) Cellular automata
Information fluctuation complexity
[ "Physics", "Mathematics", "Technology", "Engineering" ]
1,854
[ "Telecommunications engineering", "Physical quantities", "Applied mathematics", "Recreational mathematics", "Entropy and information", "Cellular automata", "Computer science", "Entropy", "Information theory", "Dynamical systems" ]
63,603,451
https://en.wikipedia.org/wiki/Hoiamides
The hoiamides are a class of small molecules recently characterized from secondary metabolites isolated from cyanobacteria, featuring a triheterocyclic system. Hoiamide A and B are cyclic, while hoiamide C and D are linear. Hoiamide A and B demonstrate neurotoxicity by acting on mammalian voltage-gated sodium channels, while hoiamide D shows inhibition of the p53/MDM2 complex. The hoiamides are promising therapeutic targets, making their total synthesis an attractive goal. Structural class The structural class of hoiamides is characterized by an acetate-extended and S-adenosyl methionine-modified isoleucine unit. Central to the molecule is a triheterocyclic system made of two α-methylated thiazolines and one thiazole, and a highly oxygenated and methylated C-15 polyketide unit. The hoiamides are stereochemically complex structures, with hoiamide A and B exhibiting 15 chiral centers. History Marine cyanobacteria obtained by SCUBA in Papua New Guinea offer abundant secondary metabolites. Sometimes called blue-green algae, marine cyanobacteria have long been recognized for their toxic effects, and collections have been gathered from these organisms since the 1930s. Hoiamide A was isolated in 2009 from Lyngbya majuscula and Phormidium gracile through screening of cyanobacteria extracts using a high-throughput calcium and sodium ion influx assay in neocortical mouse neurons. Other groups have also found hoiamide A in M. producens and Phormidium gracile. Hoiamide B and C were isolated in 2010 from Symploca sp. and Oscillatoria cf. Hoiamide D was isolated in 2012 from Symploca sp. The total synthesis of hoiamide C was completed in 2011. Bioactivity Hoiamide A In mouse neocortical neurons, hoiamide A acts as a partial agonist at site two of the mammalian voltage-gated sodium channel (VGSC). In electrically excitable cells, VGSCs allow the influx of sodium that causes the rising phase of the action potential. Hoiamide A stimulated sodium influx with an EC50 of 1.7 micromolar in mouse neocortical neurons. VGSCs have at least six neurotoxic sites that act as targets for small molecules. By using a radioligand probe, [3H]BTX, known to bind to neurotoxic site two on the VGSC alpha subunit, a group found that hoiamide A must bind to site two as well, given its inhibition of [3H]BTX binding. A full agonist of VGSC site two, batrachotoxin, was then used to determine to what extent hoiamide A acted as an agonist. The experiments demonstrated that hoiamide A is a partial agonist because the maximum sodium influx hoiamide A binding caused was less than that of batrachotoxin. Another study found that hoiamide A stimulated caspase-3 activity, lactic acid dehydrogenase efflux, and nuclear condensation. These processes are specifically and uniquely involved in necrosis and apoptosis, suggesting that hoiamide A is involved in neuronal death by both necrosis and apoptosis. Hoiamide B Like hoiamide A, hoiamide B stimulated sodium influx in mouse neocortical neurons, with an EC50 value of 3.9 micromolar. Because hoiamide B is so structurally similar to hoiamide A, research currently predicts that B likewise acts at site 2 of the VGSC. Though the mechanism of inhibition of calcium oscillations in mouse neocortical neurons is unknown for hoiamide A and B, both compounds potently suppress spontaneous calcium oscillations, with EC50 values of 45.6 and 79.8 nanomolar, respectively. Hoiamide C Hoiamide C exhibits an LC50 of 1.3 micromolar in brine shrimp toxicity assays. However, it does not disrupt spontaneous calcium ion oscillations.
Because ethanol is used in the storage of biological material, it is possible that hoiamide C is an extraction artifact of hoiamide D. Hoiamide D The p53 protein is a well-known tumor suppressor that regulates the cell cycle, DNA repair, and apoptosis by acting as a transcription factor. MDM2 is a murine ubiquitin ligase that downregulates p53 by various mechanisms. The binding surface of the two proteins is small, and the interaction is hydrophobic. Through an assay that made the p53/MDM2 complex available, hoiamide D was found to inhibit the activity of the interaction. Potential therapeutic applications As partial agonists of the VGSC, hoiamide A and B may be able to mimic activity-dependent control of neuronal development through the up-regulation of pathways that influence neuronal growth and plasticity. Hoiamide D may have applications as a precursor molecule for cancer therapies. References Thiazoles Polyketides
Hoiamides
[ "Chemistry" ]
1,086
[ "Biomolecules by chemical classification", "Natural products", "Polyketides" ]
63,604,103
https://en.wikipedia.org/wiki/Tumor-homing%20bacteria
Tumor-homing bacteria are facultative or obligate anaerobic bacteria (bacteria capable of producing ATP when oxygen is absent; obligate anaerobes are harmed by normal oxygen levels) that are able to target cancerous cells in the body, suppress tumor growth and survive in the body for a long time, even after the infection. When this type of bacteria is administered into the body, it migrates to the cancerous tissues, starts to grow, and then deploys distinct mechanisms to destroy solid tumors. Each bacterial species uses a different process to eliminate the tumor. Some common tumor-homing bacteria include Salmonella, Clostridium, Bifidobacterium, Listeria, and Streptococcus. The earliest research on this type of bacteria dates to 1813, when scientists began observing that patients who had gas gangrene, an infection caused by the bacterium Clostridium, sometimes experienced tumor regression. Tumor-inhibition mechanisms Different strains of tumor-homing bacteria in distinct environments use unique or similar processes to inhibit or destroy tumor growth. Unique mechanisms Salmonella bacteria kill tumor cells by uncontrolled bacterial multiplication that can lead to the bursting of cancerous cells. Moreover, the macrophages and dendritic cells (types of white blood cell) in these Salmonella-colonized tumors secrete IL-1β, a protein responsible for anti-tumor activity. S. Typhimurium flagellin increases both the innate and adaptive immunity (nonspecific and specific defense mechanisms) of the host by stimulating NK cells (natural killer cells) to produce interferon-γ (IFN-γ), an important cytokine (regulatory protein) for this immunity. Listeria inhibits tumors through NADPH oxidase (nicotinamide adenine dinucleotide phosphate oxidase) mediated production of ROS (reactive oxygen species), a cell signaling process that activates CD8+ T cells (cells that kill cancerous tissue), which target primary tumors. Similar mechanisms Clostridium, S. Typhimurium and Listeria produce exotoxins (e.g. phospholipases, hemolysins, lipases) that damage the membrane structure and the cellular functions of the tumor using apoptosis or autophagy, which are forms of programmed cell death. Salmonella, Clostridium, and Listeria infections promote tumor elimination by increasing cytokines and chemokines (cell-signaling regulatory proteins) that regulate infected sites using granulocytes and cytotoxic lymphocytes (white blood cells that kill cancerous cells). Confirmed medical treatments Bacterial cancer therapy is an emerging field of cancer treatment. Although many clinical trials are taking place, at present only a few confirmed treatments are being administered to patients. Treatment with live strains of bacteria The usage of a live attenuated strain of Mycobacterium bovis, also known as Bacillus Calmette-Guérin (BCG), is a confirmed treatment for bladder cancer. BCG therapy is done by intravesical instillation (drug administration into the urinary bladder via a catheter) and has been used on cancer patients since the 1970s. Because of the necrotic and hypoxic regions of tumors (areas of treatment resistance), drug delivery of chemotherapy can be impaired. Salmonella, which is not affected by these regions, can therefore be combined with chemotherapy to provide both treatment and transport. Moreover, the Salmonella mutant strain VNP20009 increased in number in this combination, which causes further inhibition of cancerous cells by stimulating anti-tumor proteins.
Treatment with genetically engineered bacteria Tumor-homing bacteria can be genetically engineered to enhance their anti-tumor activities and to transport therapeutic materials based on medical needs. The bacteria are usually transformed with a plasmid that carries the gene encoding the therapeutic protein. After the bacteria reach the target site, the gene is expressed and the therapeutic protein can exert its full biological effect. Currently, there is no approved treatment with genetically engineered bacteria. However, research is being conducted on Listeria and Clostridium as vectors to transport RNAi (which suppresses genes) for colon cancer. Safety Some active tumor-homing bacteria can be harmful to the human body, since they produce toxins that disturb the cell cycle, resulting in altered cell growth and chronic infections. However, many ways to enhance the safety of tumor-homing bacteria in the body have been found. For example, when the virulence genes of the bacteria are removed by gene targeting, a process in which genes are deleted or modified, their pathogenicity (capacity to cause disease) can be reduced. Adverse effects DNA mutations of the tumor-homing bacteria in the body can lead to problems such as severe infection and failure of therapy, as different genes will be expressed and the bacteria may become non-functional. Incomplete tumor lysis or colonization by the bacteria can delay treatment and necessitate other cancer treatments, such as chemotherapy, or a combination of therapies. Delayed or combined treatment can cause many side effects, such as vomiting, nausea, loss of appetite, fatigue, and hair loss. Prevention of adverse effects Deleting the msbB gene from Salmonella by genetic engineering leads to the loss of lipid A (a lipid responsible for the toxicity of gram-negative bacteria) and thereby reduces the toxicity of Salmonella by 10,000-fold. Another approach is to generate auxotrophic mutants (strains of microorganism that proliferate only when the medium is supplemented with a specific substance), which cannot replicate efficiently in an environment where a particular nutrient required by the mutant strain is scarce. Salmonella A1-R is such a strain: it is auxotrophic for the amino acids leucine and arginine, which are enriched in tumors but not in normal tissues. Therefore, Salmonella A1-R will grow in the tumor but not in normal tissues, preventing infections and increasing safety. Research The most researched bacteria for cancer therapy are Salmonella, Listeria, and Clostridium. A genetically engineered strain of Salmonella (TAPET-CD) has completed phase 1 clinical trials for patients with stage 4 metastatic cancer. Listeria-based cancer vaccines are currently being produced and are undergoing many clinical trials. Phase I trials of the Clostridium strain called Clostridium novyi (C. novyi-NT) for patients with treatment-refractory tumors (tumors that are unresponsive to treatment) are currently underway. See also Gene targeting Chemotherapy Immunotherapy BCG Vaccine Virotherapy References Bacteria and humans Biological engineering Biotechnology Biotechnology products Biopharmaceuticals Pharmaceutical industry Life sciences industry Specialty drugs Pharmacy
Tumor-homing bacteria
[ "Chemistry", "Engineering", "Biology" ]
1,398
[ "Pharmacology", "Biological engineering", "Specialty drugs", "Life sciences industry", "Biotechnology products", "Pharmacy", "Pharmaceutical industry", "Biotechnology", "nan", "Bacteria", "Bacteria and humans", "Biopharmaceuticals" ]
63,607,740
https://en.wikipedia.org/wiki/Michael%20Bach%20%28vision%20scientist%29
Michael Bach (born 10 April 1950) is a German scientist who researches ophthalmology, clinical electroencephalography, clinical electroretinography, visual acuity testing, and visual perception. Bach is the creator of the website Optical Illusions & Visual Phenomena, which began receiving over two million hits a day in 2005. Life and work Bach was born in Berlin on 10 April 1950. In 1956, he moved with his family to Dortmund, where he attended school. From 1970 to 1972, Bach completed an undergraduate degree in physics at Ruhr University Bochum, then moved to the University of Freiburg, where he studied for a Master's degree in physics. In 1975, he began a part-time position running an electronics workshop in the Department of Psychology, then became a full-time research assistant in the Department of Neurology in 1978. Bach was awarded his Master's in physics in 1977 and his PhD, also in physics, in 1981, on the visual system. In 1981 he moved into a full-time position in the Department of Ophthalmology, rising to Professor in 1998, and being appointed Head of the Section Visual Function/Electrophysiology at the University Eye Hospital in 1999. After Bach's retirement in 2015 he became an Emeritus Scientist, continuing his research. In 1996, Bach began his service to the International Society for Clinical Electrophysiology of Vision, establishing, with others, standards for clinical electroencephalography, electroretinography and electrooculography, and serving as the society's president from 2004 to 2011. In 1975, Bach married Ulrike Röhling. They have three adult children and one grandchild. Research Bach has conducted research in ophthalmology, electroretinography, and visual perception. One strand of his research has been the development of tests of visual acuity, using verbal responding or brain activity. As of April 2021, Bach had published 356 scientific papers that have been cited 16,602 times, giving him an h-index of 61. According to Neurotree, Bach has 16 academic children and 44 academic grandchildren. Illusions Bach began his illusions web site as a hobby some time before 2005. He did not appreciate how popular the site was until he discovered that his internet service provider had suspended his account after it received more than one million hits per day. Bach upgraded his account and continued developing the site. As of April 2021, the site contained 143 illusions, most of them interactive, each accompanied by an explanation from Bach. The site and Bach have won plaudits on the internet, in the news media, and in science journals. The site has also been used in scientific research into illusions. Selected works References External links Homepage of Michael Bach Profile from the Homepage of the Freiburg University Eye Clinic Profile from the Homepage of GWUP, the German equivalent of The Skeptics Society and the Committee for Skeptical Inquiry Homepage of Bach's Visual Phenomena & Optical Illusions Homepage of Bach's on-line tests of visual acuity (‘FrACT’) 1950 births 20th-century German biologists 21st-century German scientists German biophysicists German ophthalmologists Academic staff of the University of Freiburg Scientists from Berlin Vision scientists Optical illusions Living people
Michael Bach (vision scientist)
[ "Physics" ]
664
[ "Optical phenomena", "Physical phenomena", "Optical illusions" ]
72,292,745
https://en.wikipedia.org/wiki/Paul%20Chaleff
Paul Chaleff (born 1947) is an American ceramist and professor emeritus of Fine Arts at Hofstra University. He is considered a pioneer of the revival of wood-fired ceramics in the US and credited as one of the first to use wood-burning dragon kilns in the style of the anagama tradition. He is best known as an innovator of large-scale ceramic sculpture. His work can be found in the collections of the Museum of Modern Art Department of Architecture and Design, and in the Metropolitan Museum of Art. Paul Chaleff's work was strongly influenced by master potter Takeshi Nakazato. In 1989, Chaleff began collaborating with sculptor Sir Anthony Caro. Together they created nearly 50 works, both figurative and abstract. Caro's sculpture has had a direct influence on Chaleff's work, as have the sculpture of Isamu Noguchi and the ceramics of John Mason and Lucie Rie. The strength of his works stems from their being rough, gestural, split, and impure while remaining elegant. Education Chaleff attended the Bronx High School of Science. In 1968, while studying biology at the City College of New York, Chaleff survived a drowning accident that took his friend's life. He graduated in 1969 with a degree in Fine Arts. In 1971, Chaleff received his Master of Fine Arts in Ceramic Design from City College of New York. In 1975 he traveled to Japan to study Japanese pottery and wood-burning kiln design, and returned to New York in 1977, where he built a studio and kilns in Pine Plains. Career Chaleff's anagama kiln was one of the first in the US. In 1980, the Museum of Modern Art purchased and exhibited his work from that kiln. In 1980, his wood-fired work was showcased at an official State dinner at the White House. Between 1989 and 2000, Chaleff collaborated on a series of clay sculptures with Sir Anthony Caro in his studio, first in Pine Plains and then Ancram. In 1995, he participated in Fire and Clay, a symposium of international clay sculptors held in Iksan. In 1997, Chaleff accepted a professorship from Hofstra University, where he directed the ceramics program until his retirement in 2021. Museum collections Chaleff's work is represented in the following museum collections.
Museum of Modern Art, Department of Architecture and Design, New York Metropolitan Museum of Art, New York Los Angeles County Museum of Art Boston Museum of Fine Arts National Museum of American Art (Washington, DC) Carnegie Museum of Art (Pennsylvania) Yale University Art Gallery Philadelphia Museum of Art (Pennsylvania) Princeton University Art Museum (New Jersey) Amore-Pacific Museum of Art (Korea) Brooklyn Museum Museum of Arts and Design (New York City) Everson Museum Grounds For Sculpture (New Jersey) Longhouse Foundation (East Hampton, New York) Boise Art Museum Racine Art Museum Arkansas Art Center Rockefeller University Allentown Museum of Art (Pennsylvania) University of Colorado Art Museum (Boulder) University of Iowa Museum of Art (Iowa City) Crocker Art Museum (California) American Museum of Ceramic Art (California) Arizona State University Museum of Art (Tempe) Mills College (California) Thayer Academy (Massachusetts) Muju Sculpture Park (Korea) Idyllwild School of Music and the Arts (California) City College of New York (New York City) Studio Potter Collection (New Hampshire) Arrowmount School of Arts and Crafts References External links Katonah Museum of Art Hofstra University The Marks Project Noguchi Museum Paul Challeff Official Site Museum of Modern Art, Architecture and Design Collection Metropolitan Museum of Art Sara Japanese Pottery Elena Zang Gallery 1947 births Living people 20th-century American ceramists 21st-century American ceramists 20th-century American sculptors 21st-century American sculptors Artists from New York City Kilns Hofstra University faculty Japanese pottery American male sculptors City College of New York alumni American people of Russian-Jewish descent American people of Polish-Jewish descent People from Columbia County, New York People from Dutchess County, New York The Bronx High School of Science alumni
Paul Chaleff
[ "Chemistry", "Engineering" ]
868
[ "Chemical equipment", "Kilns" ]
72,294,493
https://en.wikipedia.org/wiki/Two-dimensional%20quantum%20turbulence
Turbulent phenomena are observed universally in energetic fluid dynamics, associated with highly chaotic fluid motion and typically involving excitations spread over a wide range of length scales. The particular features of turbulence depend on the fluid and geometry, and on the specifics of forcing and dissipation. In classical fluids, vorticity is a continuous field that can acquire any value at each point, since the fluid supports arbitrary local rates of rotation. Quantum fluids are distinguished by vorticity that is quantised, a restriction imposed by the quantum wavefunction that describes the fluid when it reaches a superfluid state; the ability of a fluid to form quantum vortices is the most widely used experimental signature of superfluidity. While quantum fluids can also support classical turbulence, quantum turbulence involves the chaotic dynamics of many interacting quantum vortices. In highly excited bulk superfluid, many vortex lines interact with each other, forming quantum turbulent states. When confined to move only in a plane, classical fluids exhibit a reversal in the direction of energy flow during turbulence. Instead of the three-dimensional process involving the formation of smaller rotating eddies, in two dimensions small eddies tend to combine to make larger rotating structures. By introducing tight confinement along one direction, the Kelvin wave excitations involving bending of otherwise straight vortex lines can be strongly suppressed, favouring vortex alignment with the axis of tight confinement. Vortex dynamics can then enter a regime of effective 2D motion, equivalent to point vortices moving on a plane. In general, 2D quantum turbulence (2DQT) can exhibit complex phenomenology involving coupled vortices and sound in compressible superfluids. The quantum vortex dynamics can exhibit signatures of turbulence including a Kolmogorov −5/3 power law, a quantum manifestation of the inertial transport of energy to large scales observed in classical fluids, known as an inverse energy cascade. Point vortices The point vortex model, introduced by Helmholtz and Kirchhoff, describes the motion of ideal point vortices confined to a plane, with a direct mapping to planar electrodynamics. The model plays a central role in the study of planar Navier-Stokes flows, and can be realized in compressible superfluids such as those in ultracold gas Bose-Einstein condensates, when the healing length setting the vortex core size is very small compared to the system size. Negative temperature Point vortices confined to a finite area were predicted by Onsager to exhibit states of negative temperature. This possibility of negative absolute temperature can be traced to the finite phase space of the point vortex system: in contrast to a massive particle moving on a plane, each point vortex has only two degrees of freedom. Specifying the spatial coordinates of the vortices also completely determines the superfluid velocity. At leading order a quantum vortex is massless, with each filament moving with the net background flow and obeying a form of the Biot–Savart law. Guiding-centre plasmas exhibit a symmetry-breaking transition at high energy per vortex associated with negative temperature. In Bose-Einstein condensates the annihilation of low-energy vortex dipoles can raise the energy per vortex until the system undergoes spontaneous ordering into macroscopic same-sign vortex clusters associated with negative temperature.
Clustered equilibrium states have high energy per vortex, with clusters forming as a consequence of the limited phase space of confined point vortices. Forced turbulence Vortices can be injected into a planar superfluid through various forcing mechanisms, such as obstacle dragging or elliptical stirring that induce a localized breakdown of superfluidity, or through mechanisms that exploit abrupt phase evolution at the merging of multiple condensates or the condensate phase transition itself. Small-scale forcing from appropriately dragging an obstacle can inject small vortex clusters into a planar superfluid. In strongly non-equilibrium quantum fluid dynamics, clustered states can develop as a result of a steady inverse energy cascade from small-scale forcing, leading to an accumulation of energy at the system scale in the form of macroscopic flow due to vortex charge ordering. Superfluid experiments Advances in quantum fluid experiments have provided access to the point vortex regime in compressible superfluids. The 2DQT regime has been established in ultracold gases, superfluid helium, and in exciton-polariton condensates comprising quantum fluids of light. Negative temperature states predicted by Onsager have recently been observed in systems with hard-wall boundary conditions. References Turbulence Superfluidity
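The Helmholtz–Kirchhoff point-vortex model described above reduces to a small set of coupled ordinary differential equations. The following sketch (a minimal illustration; the circulations, initial positions, and step size are arbitrary choices, not values from any experiment) integrates these equations for a pair of same-sign vortices, which co-rotate about their centroid:

```python
import numpy as np

def point_vortex_velocities(pos, gamma):
    """Velocity of each point vortex induced by all the others
    (classical Helmholtz-Kirchhoff point-vortex model).

    pos:   (N, 2) array of vortex positions on the plane
    gamma: (N,)  array of circulations (signed vortex strengths)
    """
    dx = pos[:, 0][:, None] - pos[:, 0][None, :]   # x_i - x_j
    dy = pos[:, 1][:, None] - pos[:, 1][None, :]   # y_i - y_j
    r2 = dx**2 + dy**2
    np.fill_diagonal(r2, np.inf)                   # no self-induced velocity
    # dx_i/dt = -(1/2pi) sum_j Gamma_j (y_i - y_j)/r_ij^2
    # dy_i/dt = +(1/2pi) sum_j Gamma_j (x_i - x_j)/r_ij^2
    u = -(gamma[None, :] * dy / r2).sum(axis=1) / (2 * np.pi)
    v = (gamma[None, :] * dx / r2).sum(axis=1) / (2 * np.pi)
    return np.stack([u, v], axis=1)

# Illustrative example: two same-sign vortices co-rotate about their centroid,
# a crude analogue of the same-sign clustering discussed above.
pos = np.array([[-0.5, 0.0], [0.5, 0.0]])
gamma = np.array([1.0, 1.0])
dt = 1e-3
for _ in range(10000):                             # midpoint (RK2) integration
    k1 = point_vortex_velocities(pos, gamma)
    k2 = point_vortex_velocities(pos + 0.5 * dt * k1, gamma)
    pos = pos + dt * k2
print(pos)
```

With circulations of opposite sign, the same equations instead produce a vortex dipole that translates in a straight line, the low-energy configuration whose annihilation is discussed above.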
Two-dimensional quantum turbulence
[ "Physics", "Chemistry", "Materials_science" ]
929
[ "Physical phenomena", "Phase transitions", "Turbulence", "Phases of matter", "Superfluidity", "Condensed matter physics", "Exotic matter", "Matter", "Fluid dynamics" ]
66,393,505
https://en.wikipedia.org/wiki/St%20Peter%27s%20Medal
The St Peter's Medal is awarded annually by the British Association of Urological Surgeons (BAUS) for contributions to the surgical field of urology. The medal was designed and produced by sculptor William Bloye of the Birmingham School of Art and presented to the BAUS in 1948 by Bernard Joseph Ward, the BAUS's first vice-president. The first medal was awarded in 1949 to J. B. Macalpine, who was the first to report bladder cancers associated with the dye industry. St Peter on the medal is identified by a key engraved on the bible that he holds. On the reverse is a laurel wreath within which the recipient's name is engraved, and around the circumference are the names of Edwin Hurry Fenwick, Peter Freyer and John Thomson-Walker. Origin and history The St Peter's Medal was designed and produced by sculptor William Bloye of the Birmingham School of Art, for the purpose of being awarded to a person who has made significant contributions to the field of urology and comes from the British Isles or the Commonwealth. The stamping die for the medal was presented to the British Association of Urological Surgeons (BAUS) in 1948 by Bernard Joseph Ward, the BAUS's first vice-president and a urologist at Queen Elizabeth Hospital. The first medal was awarded in 1949 to J. B. Macalpine, who first reported bladder cancers associated with the dye industry. It has subsequently been awarded annually by the BAUS, usually to one recipient, apart from 1951, 1999, 2005, 2006, 2007 and 2014, when there were two recipients. The medal is engraved with the names of the three teachers who influenced Bernard Ward: Edwin Hurry Fenwick, Peter Freyer and John Thomson-Walker. On presenting the medal in 1948, Ward stated in his speech that "although they were individually attached to other hospitals, they all came together in one hospital, St. Peter's; and the suggestion therefore was that in order to honour all three of them, we should call it the St. Peter's Medal." The hospital, the first urological hospital in Britain, was named after Saint Peter, whose name derives from the Latin for rock, petrus, and who was said by Christ to be the foundation upon which the Christian church was to be constructed. St Peter on the medal is identified by the iconography of a key engraved on the bible that he holds. On the reverse of the medal is a laurel wreath, within which the recipient's name is engraved, and around the wreath are the names of Fenwick, Freyer and Thomson-Walker. Recipients In 1951, the medal was presented for the second time, and for the first time to two recipients, when Ronald Ogier Ward and Terence J. Millin were given the award. In 1959 the medal was awarded to Harold H. Hopkins, a physicist, and in 2006 to Alison Brading, a physiologist. Other recipients have included Sir Michael Woodruff, Richard Turner-Warwick, John Wickham, Howard Kynaston, Geoffrey Chisholm, John M. Fitzpatrick, Roger Kirby and Prokar Dasgupta. Influence In 1975 the International Medical Society of Paraplegia proposed to offer a similar award based on the BAUS's St Peter's Medal. See also List of recipients of the St Peter's Medal References Awards established in 1948 Urology Medicine awards
St Peter's Medal
[ "Technology" ]
692
[ "Science and technology awards", "Medicine awards" ]
66,393,853
https://en.wikipedia.org/wiki/5182%20aluminium%20alloy
5182 aluminium alloy is an aluminium alloy whose principal alloying elements are magnesium and manganese. It is used in the automobile industry for making various vehicle parts. Composition Mechanical properties Thermal properties Applications Audi A8 (D2)’s structural panel BMW Z8's inner panel Rolls-Royce Phantom's structural panel Aluminium can (top part) Aluminium alloy table References
5182 aluminium alloy
[ "Chemistry" ]
76
[ "Alloys", "Aluminium alloys" ]
66,396,699
https://en.wikipedia.org/wiki/Lambrequin%20arch
The lambrequin arch, also known as (or related to) the muqarnas arch, is a type of arch with an ornate profile of lobes and points. It is especially characteristic of Moorish and Moroccan architecture. The term "muqarnas arch" is both another name for this type of arch and a more specific designation for arches whose intrados (inner surfaces) are made up of muqarnas sculpting, which very closely resemble the lambrequin arch. Some scholars speculate that the lambrequin arch was itself derived from the use of muqarnas in archways. Moreover, lambrequin arches were indeed commonly used with muqarnas sculpting along the intrados of the arch. Its origins are also traced further back to the "mixtilinear" arches seen in the oratory of the 11th-century Aljaferia Palace in Zaragoza. This type of arch was introduced into the Maghreb and Al-Andalus regions during the Almoravid period (11th–12th centuries), with an early appearance in the funerary section of the Qarawiyyin Mosque (in Fez) dating from the early 12th century. It was a Maghrebi innovation that grew in importance during the following Almohad period. It remained common in the subsequent architecture of the region, in many cases used to highlight the arches near the mihrab area of a mosque. Muqarnas arches are also found abundantly in the Alhambra palaces in Granada, particularly in the Court of Lions. See also Horseshoe arch Multifoil arch References Islamic architectural elements Moorish architecture Architecture in Spain Architectural elements Architecture in Morocco
Lambrequin arch
[ "Technology", "Engineering" ]
351
[ "Building engineering", "Architectural elements", "Components", "Architecture" ]
58,409,634
https://en.wikipedia.org/wiki/Geophysical%20%26%20Astrophysical%20Fluid%20Dynamics
Geophysical & Astrophysical Fluid Dynamics is a bimonthly peer-reviewed scientific journal covering applications of fluid dynamics in the fields of astrophysics and geophysics. It was established in 1970 as Geophysical Fluid Dynamics, obtaining its current name in 1977. It is published by Taylor & Francis and the editor-in-chief is Andrew Soward (Newcastle University). According to the Journal Citation Reports, the journal has a 2020 impact factor of 1.451. References External links Geophysics journals Astrophysics journals Fluid dynamics journals Academic journals established in 1970 Bimonthly journals Taylor & Francis academic journals English-language journals
Geophysical & Astrophysical Fluid Dynamics
[ "Physics", "Chemistry" ]
124
[ "Astrophysics journals", "Fluid dynamics journals", "Astrophysics", "Fluid dynamics" ]
58,415,827
https://en.wikipedia.org/wiki/Shimansky%20equation
In thermodynamics, the Shimansky equation describes the temperature dependence of the heat of vaporization (also known as the enthalpy of vaporization or the heat of evaporation) in terms of the hyperbolic tangent function. In the equation, L is the latent heat of vaporization at the temperature T, Tc is the critical temperature, and L0 is a parameter equal to the heat of vaporization at zero temperature (T = 0). This equation was obtained in 1955 by Yu. I. Shimansky, at first empirically, and later derived theoretically. The Shimansky equation does not contain any arbitrary constants, since the value of Tc can be determined experimentally and L0 can be calculated if L has been measured experimentally for at least one given value of the temperature T. The Shimansky equation describes quite well the heat of vaporization for a wide variety of liquids. For chemical compounds that belong to the same class (e.g. alcohols), the value of a characteristic ratio of the equation's parameters remains constant. For each such class of liquids, the Shimansky equation can be re-written in a reduced form. The latter formula is a mathematical expression of the structural similarity of liquids. The value of this class constant plays the role of a parameter for the group of curves of the temperature dependence of the heat of vaporization. Sources Shimansky Yu. I. «Structure and physical properties of binary solutions of alcohols», PhD dissertation, Taras Shevchenko State University of Kyiv, 1955; Shimansky Yu. I. «The temperature dependence of the heat of vaporization of pure liquids» Journal of Physical Chemistry (USSR), v. 32(8), p. 1893, 1958; Shimanskaya E. T., Shimansky Yu. I. «Critical state of pure compounds», published by Taras Shevchenko State University of Kyiv, 1961. References Molecular physics
Shimansky equation
[ "Physics", "Chemistry" ]
376
[ "Thermodynamics stubs", "Molecular physics", " molecular", "Thermodynamics", "nan", "Atomic", "Molecular physics stubs", "Physical chemistry stubs", " and optical physics" ]
58,420,390
https://en.wikipedia.org/wiki/Object%20co-segmentation
In computer vision, object co-segmentation is a special case of image segmentation, defined as jointly segmenting semantically similar objects in multiple images or video frames. Challenges It is often challenging to extract segmentation masks of a target/object from a noisy collection of images or video frames, which involves object discovery coupled with segmentation. A noisy collection implies that the object/target is present sporadically in a set of images or that the object/target disappears intermittently throughout the video of interest. Early methods typically involve mid-level representations such as object proposals. Dynamic Markov networks-based methods A joint object discovery and co-segmentation method based on coupled dynamic Markov networks has been proposed recently, which claims significant improvements in robustness against irrelevant/noisy video frames. Unlike previous efforts, which conveniently assume the consistent presence of the target objects throughout the input video, this coupled dual dynamic Markov network-based algorithm simultaneously carries out both the detection and segmentation tasks, with the two respective Markov networks jointly updated via belief propagation. Specifically, the Markov network responsible for segmentation is initialized with superpixels and provides information for its Markov counterpart responsible for the object detection task. Conversely, the Markov network responsible for detection builds the object proposal graph with inputs including the spatio-temporal segmentation tubes. Graph cut-based methods Graph cut optimization is a popular tool in computer vision, especially in earlier image segmentation applications. As an extension of regular graph cuts, multi-level hypergraph cuts have been proposed to account for more complex high-order correspondences among video groups, beyond typical pairwise correlations. With such a hypergraph extension, multiple modalities of correspondences, including low-level appearance, saliency, coherent motion and high-level features such as object regions, can be seamlessly incorporated in the hyperedge computation. In addition, as a core advantage over co-occurrence-based approaches, the hypergraph implicitly retains more complex correspondences among its vertices, with the hyperedge weights conveniently computed by eigenvalue decomposition of Laplacian matrices. CNN/LSTM-based methods In action localization applications, object co-segmentation is also implemented as the segment-tube spatio-temporal detector. Inspired by recent spatio-temporal action localization efforts with tubelets (sequences of bounding boxes), Le et al. present a new spatio-temporal action localization detector, Segment-tube, which consists of sequences of per-frame segmentation masks. This Segment-tube detector can temporally pinpoint the starting/ending frame of each action category in the presence of preceding/subsequent interference actions in untrimmed videos. Simultaneously, the Segment-tube detector produces per-frame segmentation masks instead of bounding boxes, offering superior spatial accuracy to tubelets. This is achieved by alternating iterative optimization between temporal action localization and spatial action segmentation. The proposed segment-tube detector is illustrated in the flowchart on the right. The sample input is an untrimmed video containing all frames in a pair figure-skating video, with only a portion of these frames belonging to a relevant category (e.g., the DeathSpirals).
Initialized with saliency-based image segmentation on individual frames, this method first performs a temporal action localization step with a cascaded 3D CNN and LSTM, and pinpoints the starting frame and the ending frame of a target action with a coarse-to-fine strategy. Subsequently, the segment-tube detector refines per-frame spatial segmentation with graph cut by focusing on relevant frames identified by the temporal action localization step. The optimization alternates between the temporal action localization and spatial action segmentation in an iterative manner. Upon practical convergence, the final spatio-temporal action localization results are obtained in the format of a sequence of per-frame segmentation masks (bottom row in the flowchart) with precise starting/ending frames. See also Image segmentation Object detection Video content analysis Image analysis Digital image processing Activity recognition Computer vision Convolutional neural network Long short-term memory References Image segmentation Computer vision Applications of computer vision Image processing Machine vision Film and video technology Applied machine learning Cognition Motion in computer vision
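As background for the graph cut-based methods above, the sketch below shows the standard s-t min-cut construction for binary segmentation on a toy one-dimensional "image". The intensities and edge weights are made-up illustrative values; the hypergraph co-segmentation work extends this pairwise construction with hyperedges rather than using it directly.

```python
import networkx as nx

# Toy binary segmentation by s-t min cut: unary terms connect each pixel to the
# source ("object") and sink ("background"); pairwise terms penalize label
# changes between neighbouring pixels.
intens = [0.9, 0.8, 0.85, 0.2, 0.1, 0.15]    # bright pixels ~ object

G = nx.DiGraph()
for i, v in enumerate(intens):
    G.add_edge("src", i, capacity=v)          # affinity to "object"
    G.add_edge(i, "sink", capacity=1.0 - v)   # affinity to "background"
for i in range(len(intens) - 1):              # smoothness between neighbours
    G.add_edge(i, i + 1, capacity=0.5)
    G.add_edge(i + 1, i, capacity=0.5)

cut_value, (obj_side, bg_side) = nx.minimum_cut(G, "src", "sink")
labels = [1 if i in obj_side else 0 for i in range(len(intens))]
print(cut_value, labels)   # expected labels: [1, 1, 1, 0, 0, 0]
```

Real segmentation graphs use the same structure with one node per pixel (or superpixel) and data-driven unary and pairwise capacities.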
Object co-segmentation
[ "Physics", "Engineering" ]
864
[ "Physical phenomena", "Robotics engineering", "Packaging machinery", "Motion (physics)", "Machine vision", "Motion in computer vision", "Artificial intelligence engineering", "Computer vision" ]
70,807,222
https://en.wikipedia.org/wiki/Sarah-Marie%20Belcastro
Sarah-Marie Belcastro (aka sarah-marie belcastro, born 1970) is an American mathematician and book author. She is an instructor at the Art of Problem Solving Online School and is the director of MathILy, a residential math summer program hosted at Bryn Mawr. Although her doctoral research was in algebraic geometry, she has also worked extensively in topological graph theory. She is known for and has written extensively about mathematical knitting, and has co-edited three books on fiber mathematics. She herself exclusively uses the form "sarah-marie belcastro". Biography Belcastro was born in San Diego, CA in 1970, and grew up mostly in Andover, MA, and in Dubuque, IA. She earned a B.S. (1991) in Mathematics and Astronomy from Haverford College, an M.S. (1993) from The University of Michigan, Ann Arbor, and a Ph.D. (1997) there for a thesis on "Picard Lattices of Families of K3 Surfaces" supervised by Igor Dolgachev. Since 2012, she has also been an instructor at the Art of Problem Solving Online School. Since 2013, she has been the director of Bryn Mawr College's residential summer program MathILy (serious Mathematics Infused with Levity). She is also a guest faculty member at Sarah Lawrence College. She was Associate Editor of The College Mathematics Journal (2003–2019). She has also lectured frequently at the University of Massachusetts, Amherst since 2012. Selected publications Books Discrete Mathematics with Ducks (AK Peters, 2012; 2nd ed., CRC Press, 2019, ). Figuring Fibers, edited by belcastro and Carolyn Yackel, Providence, RI: American Mathematical Society, 2018. Crafting by Concepts: fiber arts and mathematics, edited by belcastro and Yackel. AK Peters, 2011. Making Mathematics with Needlework: Ten Papers and Ten Projects, edited by belcastro and Yackel. Wellesley, MA: AK Peters, 2007. Journal papers References External links Official home page American women mathematicians Haverford College alumni University of Michigan alumni Geometric topology American algebraists Mathematics and art 21st-century American textile artists American people in knitting 1970 births Living people
Sarah-Marie Belcastro
[ "Mathematics" ]
451
[ "Topology", "Geometric topology" ]
70,810,630
https://en.wikipedia.org/wiki/Structure%20field%20map
Structure field maps (SFMs) or structure maps are visualizations of the relationship between ionic radii and crystal structures, used to represent classes of materials. The SFM and its extensions have found broad applications in geochemistry, mineralogy, the chemical synthesis of materials, and nowadays in materials informatics. History The intuitive concept behind SFMs led to different versions of the visualization method being established in different domains of materials science. The structure field map was first introduced in 1954 by MacKenzie L. Keith and Rustum Roy to classify structural prototypes for the oxide perovskites of the chemical formula ABO3. It was later popularized by a compiled handbook written by Olaf Muller and Rustum Roy, published in 1974, that included many more known materials. Examples A structure field map is typically two-dimensional, although higher-dimensional versions are feasible. The axes in an SFM are sequences of ionic radii. For example, in oxide perovskites ABO3, where A and B represent two metallic cations, the two axes are the ionic radii of the A-site and B-site cations. SFMs are constructed according to the oxidation states of the constituent cations. For perovskites of the type ABO3, three ways of cation pairing exist: A3+B3+O3, A2+B4+O3, and A1+B5+O3; therefore, three different SFMs exist, one for each pairing of cation oxidation states. See also Goldschmidt tolerance factor Ramachandran plot References Materials science Crystallography Scientific visualization Inorganic chemistry Mineralogy concepts
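Since an SFM is essentially a labelled scatter plot over two ionic-radius axes, a minimal sketch of its construction is straightforward. In the sketch below the radii and structure assignments are placeholder values chosen for illustration, not tabulated Shannon radii:

```python
import matplotlib.pyplot as plt

# Hypothetical (rA, rB) ionic radii in angstroms and structure labels for
# A2+B4+O3 compounds -- placeholder values, not measured Shannon radii.
data = {
    "perovskite": [(1.35, 0.60), (1.44, 0.64), (1.61, 0.61)],
    "ilmenite":   [(0.72, 0.61), (0.78, 0.53)],
    "pyroxene":   [(0.74, 0.40)],
}

# Each structure type occupies a "field" in the (rA, rB) plane.
for structure, points in data.items():
    rA, rB = zip(*points)
    plt.scatter(rA, rB, label=structure)

plt.xlabel("A-site ionic radius (Å)")
plt.ylabel("B-site ionic radius (Å)")
plt.title("Structure field map for A2+B4+O3 oxides (illustrative)")
plt.legend()
plt.show()
```

With a full table of compositions and radii, the boundaries between the clusters of points trace out the structure fields that give the map its name.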
Structure field map
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
326
[ "Applied and interdisciplinary physics", "Materials science", "Crystallography", "Condensed matter physics", "nan" ]
70,811,597
https://en.wikipedia.org/wiki/Angustmycin%20A
Angustmycin A is a purine antibiotic and metabolite from Streptomyces bacteria with the molecular formula C11H13N5O4. Angustmycin A is also a cytokinin. References Further reading cytokinins Antibiotics Purines
Angustmycin A
[ "Chemistry", "Biology" ]
61
[ "Biotechnology products", "Organic compounds", "Antibiotics", "Biocides", "Organic compound stubs", "Organic chemistry stubs" ]
70,812,668
https://en.wikipedia.org/wiki/Celler%20Perelada
Celler Perelada is a winery building in Peralada. Building and design The Celler Perelada project was undertaken by the Suqué Mateu family at a cost of €40 million, with standards aimed at improving the quality of its wines by returning to traditional systems. The winery was designed, and its construction supervised, by Rafael Aranda of RCR Arquitectes, recipients of the 2017 Pritzker Prize. The ground-breaking ceremony took place in 2016 and the first vintage was in 2020. Taking advantage of the unevenness of the land, the building is half-buried up to 20 metres deep, which favours energy saving. The deep foundation of the winery allows interaction with geothermal layers. The building has 538 supports at depths of between 8 and 20 metres, 331 of which are used as heat exchangers with the ground to reduce the consumption of heating, cooling and hot water, thus minimising energy consumption and resulting in a saving of around 37%. Water consumption is reduced both inside the building, through the combination of efficient taps and rainwater, and outside, through an efficient irrigation system and the use of rainwater for gardening. The floor space is 18,200 square metres and provides a production capacity of over two million bottles per vintage. References Catalan wine Sustainable building Companies based in Catalonia Wineries of Spain
Celler Perelada
[ "Engineering" ]
270
[ "Construction", "Sustainable building", "Building engineering" ]
65,091,079
https://en.wikipedia.org/wiki/Broad-spectrum%20therapeutic
A broad-spectrum therapeutic or broad-spectrum antibiotic is a type of antimicrobial active against multiple types of pathogens, such as an agent that is effective against both bacteria and viruses. The opposite of a broad-spectrum drug is a narrow-spectrum therapeutic, which treats only a specific pathogen or a very similar set of pathogens. Such therapeutics have been suggested as potential emergency treatments for pandemics. See also Broad-spectrum antibiotic Broad-spectrum antiviral drug References Anti-infective agents
Broad-spectrum therapeutic
[ "Chemistry" ]
106
[ "Anti-infective agents", "Chemicals in medicine" ]
65,093,515
https://en.wikipedia.org/wiki/Direct%20collapse%20black%20hole
Direct collapse black holes (DCBHs) are high-mass black hole seeds that form from the direct collapse of a large amount of material. They putatively formed within the redshift range z = 15–30, when the Universe was about 100–250 million years old. Unlike seeds formed from the first population of stars (also known as Population III stars), direct collapse black hole seeds are formed by a direct, general relativistic instability. They are very massive, with a typical mass at formation of ~10^5 solar masses. This category of black hole seeds was originally proposed theoretically to alleviate the challenge of building supermassive black holes already at redshift z ~ 7, whose existence numerous observations to date have confirmed. Formation Direct collapse black holes (DCBHs) are massive black hole seeds theorized to have formed in the high-redshift Universe, with typical masses at formation of ~10^5 solar masses, but spanning between ~10^4 and ~10^6 solar masses. The environmental physical conditions required to form a DCBH (as opposed to a cluster of stars) are the following: Metal-free gas (gas containing only hydrogen and helium). Atomic-cooling gas. A sufficiently large flux of Lyman–Werner photons, in order to destroy hydrogen molecules, which are very efficient gas coolants. The previous conditions are necessary to avoid gas cooling and, hence, fragmentation of the primordial gas cloud. Unable to fragment and form stars, the gas cloud undergoes a gravitational collapse of the entire structure, reaching extremely high matter density at its core, on the order of ~10^7 g/cm^3. At this density, the object undergoes a general relativistic instability, which leads to the formation of a black hole with a typical mass of ~10^5 solar masses, and up to 1 million solar masses. The occurrence of the general relativistic instability, as well as the absence of the intermediate stellar phase, led to the denomination direct collapse black hole. In other words, these objects collapse directly from the primordial gas cloud, not from a stellar progenitor as prescribed in standard black hole models. A computer simulation reported in July 2022 showed that a halo at the rare convergence of strong, cold accretion flows can create massive black hole seeds without the need for ultraviolet backgrounds, supersonic streaming motions or even atomic cooling. Cold flows produced turbulence in the halo, which suppressed star formation. In the simulation, no stars formed in the halo until it had grown to 40 million solar masses at a redshift of 25.7, when the halo's gravity was finally able to overcome the turbulence; the halo then collapsed and formed two supermassive stars that died as DCBHs. Demography Direct collapse black holes are generally thought to be extremely rare objects in the high-redshift Universe, because the three fundamental conditions for their formation (see above in section Formation) are challenging to meet all together in the same gas cloud. Current cosmological simulations suggest that DCBHs could be as rare as only about 1 per cubic gigaparsec at redshift 15. The prediction of their number density is highly dependent on the minimum flux of Lyman–Werner photons required for their formation and can be as large as ~10^7 DCBHs per cubic gigaparsec in the most optimistic scenarios. Detection In 2016, a team led by Harvard University astrophysicist Fabio Pacucci identified the first two candidate direct collapse black holes, using data from the Hubble Space Telescope and the Chandra X-ray Observatory.
The two candidates, both at high redshift, were found in the CANDELS GOODS-S field and matched the spectral properties predicted for this type of astrophysical source. In particular, these sources are predicted to have a significant excess of infrared radiation when compared to other categories of sources at high redshift. Additional observations, in particular with the James Webb Space Telescope, will be crucial to investigate the properties of these sources and confirm their nature. Difference from primordial and stellar collapse black holes A primordial black hole is the result of the direct collapse of energy, ionized matter, or both, during the inflationary or radiation-dominated eras, while a direct collapse black hole is the result of the collapse of unusually dense and large regions of gas. Note that a black hole formed by the collapse of a Population III star is not considered "direct" collapse. See also Quasi-star UHZ1 QSO J0313−1806 GNz7q CEERS 1019 References Further reading Black holes Supermassive black holes
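The timing argument behind DCBHs can be made concrete with a back-of-the-envelope estimate of Eddington-limited growth. The sketch below assumes a standard radiative efficiency of 0.1 and an Eddington timescale of about 450 Myr, values chosen for illustration rather than taken from any particular study:

```python
import numpy as np

T_EDD_MYR = 450.0   # Eddington (Salpeter-type) timescale, roughly 450 Myr
EPSILON = 0.1       # assumed radiative efficiency

def growth_time_myr(m_seed, m_final, eps=EPSILON):
    """Time for Eddington-limited growth:
    M(t) = M_seed * exp[ (1 - eps)/eps * t / t_Edd ]."""
    return T_EDD_MYR * eps / (1.0 - eps) * np.log(m_final / m_seed)

# Stellar-remnant seed vs direct collapse seed, both growing to a 1e9 Msun quasar
for m_seed in (10.0, 1e5):
    t = growth_time_myr(m_seed, 1e9)
    print(f"seed {m_seed:8.0f} Msun -> 1e9 Msun in ~{t:4.0f} Myr")
# The Universe is only ~770 Myr old at z ~ 7, so a ~10 Msun stellar seed
# accreting continuously at the Eddington limit fails to make it (~920 Myr),
# while a ~1e5 Msun DCBH seed needs only ~460 Myr and has time to spare.
```

Under these assumptions the massive seed removes the need for sustained super-Eddington accretion, which is the motivation stated above for proposing DCBHs.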
Direct collapse black hole
[ "Physics", "Astronomy" ]
918
[ "Black holes", "Physical phenomena", "Physical quantities", "Unsolved problems in physics", "Supermassive black holes", "Astrophysics", "Density", "Stellar phenomena", "Astronomical objects" ]
65,136,309
https://en.wikipedia.org/wiki/Nuclear%20ensemble%20approach
The Nuclear Ensemble Approach (NEA) is a general method for simulations of diverse types of molecular spectra. It works by sampling an ensemble of molecular conformations (nuclear geometries) in the source state, computing the transition probabilities to the target states for each of these geometries, and performing a sum over all these transitions convoluted with a shape function. The result is an incoherent spectrum containing absolute band shapes through inhomogeneous broadening. Motivation Spectrum simulation is one of the most fundamental tasks in quantum chemistry. It allows comparing theoretical results to experimental measurements. There are many theoretical methods for simulating spectra. Some are simple approximations (like stick spectra); others are high-level, accurate approximations (like those based on the Fourier transform of wavepacket propagations). The NEA lies in between. On the one hand, it is intuitive and straightforward to apply, providing much improved results compared to the stick spectrum. On the other hand, it does not recover all spectral effects and delivers a limited spectral resolution. Historical The NEA is a multidimensional extension of the reflection principle, an approach often used for estimating spectra in photodissociative systems. With the popularization of molecular mechanics, ensembles of geometries also started to be used to estimate spectra through incoherent sums. Thus, different from the reflection principle, which is usually applied via direct integration of analytical functions, the NEA is a numerical approach. In 2012, a formal account of NEA showed that it corresponds to an approximation of the time-dependent spectrum simulation approach, employing a Monte Carlo integration of the time evolution of the wavepacket overlap. NEA for absorption spectrum Consider an ensemble of molecules absorbing radiation in the UV/vis. Initially, all molecules are in the ground electronic state. Because of the molecular zero-point energy and temperature, the molecular geometry has a distribution around the equilibrium geometry. From a classical point of view, supposing that photon absorption is an instantaneous process, each time a molecule is excited, it does so from a different geometry. As a consequence, the transition energy does not always have the same value, but is a function of the nuclear coordinates. The NEA captures this effect by creating an ensemble of geometries reflecting the zero-point energy, the temperature, or both. In the NEA, the absorption spectrum (or absorption cross section) σ(E) at excitation energy E is calculated as

$$\sigma(E) = \frac{\pi e^{2}\hbar}{2 m c \varepsilon_{0}}\,\frac{1}{N_{p}}\sum_{i=1}^{N_{p}}\sum_{n=1}^{N_{fs}} f_{0n}(\mathbf{x}_{i})\; g\!\left(E-\Delta E_{0n}(\mathbf{x}_{i}),\delta\right),$$

where e and m are the electron charge and mass, c is the speed of light, ε0 the vacuum permittivity, and ħ the reduced Planck constant. The sums run over Nfs excited states and Np nuclear geometries xi. For each of such geometries in the ensemble, transition energies ΔE0n(xi) and oscillator strengths f0n(xi) between the ground (0) and the excited (n) states are computed. Each transition in the ensemble is convoluted with a normalized line shape function centered at ΔE0n(xi) and with width δ. Each xi is a vector collecting the Cartesian coordinates of each atom. The line shape function may be, for instance, a normalized Gaussian function given by

$$g\!\left(E-\Delta E_{0n},\delta\right) = \frac{1}{\delta\sqrt{2\pi}}\exp\!\left(-\frac{\left(E-\Delta E_{0n}\right)^{2}}{2\delta^{2}}\right).$$

Although δ is an arbitrary parameter, it must be much narrower than the band width, so as not to interfere with its description. As the average value of band widths is around 0.3 eV, it is good practice to adopt δ ≤ 0.05 eV.
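A minimal numerical sketch of the ensemble sum above is given below. The transition energies and oscillator strengths are random placeholders standing in for actual quantum-chemistry output, and the physical prefactor is omitted, so the result is in arbitrary units:

```python
import numpy as np

def nea_spectrum(E_grid, dE, f, delta=0.05):
    """Nuclear-ensemble absorption spectrum on an energy grid (arbitrary units).

    E_grid: (M,) photon energies in eV
    dE:     (Np, Nfs) transition energies dE_0n(x_i) per geometry and state
    f:      (Np, Nfs) oscillator strengths f_0n(x_i)
    delta:  line-shape width in eV (kept well below the band width)
    """
    # normalized Gaussian line shape centred at each transition energy
    g = np.exp(-((E_grid[:, None, None] - dE[None, :, :]) ** 2) / (2 * delta**2))
    g /= delta * np.sqrt(2 * np.pi)
    # incoherent sum over states, averaged over the Np ensemble geometries
    return (f[None, :, :] * g).sum(axis=2).mean(axis=1)

# Placeholder "ensemble": 500 geometries, 2 excited states, with transition
# energies scattered around 4.0 and 5.2 eV -- stand-ins for ab initio results.
rng = np.random.default_rng(0)
dE = np.stack([rng.normal(4.0, 0.15, 500), rng.normal(5.2, 0.20, 500)], axis=1)
f = np.stack([rng.uniform(0.10, 0.30, 500), rng.uniform(0.01, 0.05, 500)], axis=1)
E = np.linspace(3.0, 6.0, 301)
sigma = nea_spectrum(E, dE, f)
```

The spread of the sampled transition energies, not the narrow δ, sets the band widths, which is exactly the inhomogeneous broadening described above.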
The geometries xi can be generated by any method able to describe the ground-state distribution. Two of the most employed are dynamics and Wigner distributions of the nuclear normal modes. The molar extinction coefficient ε can be obtained from the absorption cross section through

$$\varepsilon(E) = \frac{N_{A}}{10^{3}\ln(10)}\,\sigma(E),$$

where NA is the Avogadro constant, σ is expressed in cm² per molecule, and ε in L mol⁻¹ cm⁻¹. Because of the dependence of f0n on xi, NEA is a post-Condon approximation, and it can predict dark vibronic bands. NEA for emission spectrum In the case of fluorescence, the differential emission rate is given by an analogous ensemble expression. This expression assumes the validity of Kasha's rule, with emission from the first excited state. NEA for other types of spectrum NEA can be used for many types of steady-state and time-resolved spectrum simulations. Some examples beyond absorption and emission spectra are: two-dimensional, differential transmission, photoelectron, ultrafast Auger, and X-ray photo-scattering spectra. Limitations of NEA By construction, NEA does not include information about the target (final) states. For this reason, any spectral information that depends on these states cannot be described in the framework of NEA. For example, vibronically resolved peaks in the absorption spectrum will not appear in the simulations, only the band envelope around them, because these peaks depend on the wavefunction overlap between the ground and excited states. NEA can, however, be coupled to excited-state dynamics to recover these effects. NEA may be too computationally expensive for large molecules. The spectrum simulation requires the calculation of transition probabilities for hundreds of different nuclear geometries, which may become prohibitive due to the high computational costs. Machine learning methods coupled to NEA have been proposed to reduce these costs. References Theoretical chemistry
Nuclear ensemble approach
[ "Physics", "Chemistry" ]
1,063
[ "Quantum chemistry", "Quantum mechanics", "Theoretical chemistry", " molecular", "nan", "Atomic", " and optical physics" ]
75,103,460
https://en.wikipedia.org/wiki/Corrosion%20Engineering%2C%20Science%20and%20Technology
Corrosion Engineering, Science and Technology (CEST) is a peer-reviewed scientific journal published by Taylor & Francis on behalf of IOM3, covering corrosion engineering, corrosion science, and corrosion control. History The journal was founded in 1965 as the British Corrosion Journal (BCJ). It was launched as a publication of the British Joint Corrosion Group, which represented the interests of a number of professional organisations, including the Institute of Metals (later known as the Metals Society and the Institute of Materials), to promote corrosion as an independent area of expertise. In this way, BCJ contrasted with existing journals in this field, namely Corrosion Science, which represented a more academic background. In 1979, the Metals Society established the annual Guy Bengough Medal and Prize, which would be awarded to the best paper published in BCJ in the previous two years. In 2001, the Institute of Materials (IoM) outsourced publication of 13 journals, including BCJ, to Maney Publishing. The next year, IoM merged into the Institute of Materials, Minerals, and Mining (IOM3). BCJ had initially sourced the majority of its papers from the United Kingdom and the rest of the Commonwealth, although it increasingly drew from more international sources over time. In 2003, the journal was renamed Corrosion Engineering, Science and Technology to reflect the international nature of the journal. In 2015, Maney was acquired by Taylor & Francis Group, which continues to publish CEST. The journal is currently edited by Stuart B. Lyon. Abstracting and indexing Corrosion Engineering, Science and Technology is abstracted and indexed in: Chemical Abstracts Service Science Citation Index Expanded Essential Science Indicators Inspec Scopus According to the Journal Citation Reports, the journal has a 2022 impact factor of 1.8. Notes References External links Taylor & Francis academic journals Academic journals established in 1965 Materials science journals Hybrid open access journals
Corrosion Engineering, Science and Technology
[ "Materials_science", "Engineering" ]
377
[ "Materials science stubs", "Materials science journals", "Materials science journal stubs", "Materials science" ]
78,085,410
https://en.wikipedia.org/wiki/European%20Conference%20on%20Composite%20Materials
The European Conference on Composite Materials is an international scientific conference covering research on composite materials. The conference is organized by the European Society for Composite Materials and is held biennially, usually in alternation with the International Conference on Composite Materials. The topics represented at the conference include, among others, fracture and damage, multiscale modeling, durability, aging, process modeling, simulation, additive manufacturing, bio-sourced composites, material recycling and reuse of parts, and environmental impacts. History The conference was organized for the first time in September 1985 by the European Association of Composite Materials, created one year earlier. When this association was succeeded by the European Society for Composite Materials in 1998, the new society assumed the organization of the conference. References Academic conferences
European Conference on Composite Materials
[ "Physics", "Materials_science", "Engineering" ]
151
[ "Materials science stubs", "Composite materials", "Materials science", "Materials", "Matter" ]
69,308,477
https://en.wikipedia.org/wiki/Ground%20level%20enhancement
A Ground Level Enhancement or Ground Level Event (GLE) is a special subset of solar particle event in which charged particles from the Sun have sufficient energy to generate effects that can be measured at the Earth's surface. These particles (mostly protons) are accelerated to high energies either within the solar atmosphere or in interplanetary space, with some debate as to the predominant acceleration method. While solar particle events typically involve solar energetic particles at 10–100 MeV, GLEs involve particles with energies higher than about 400 MeV. Definition The definition of a GLE is as follows: "A GLE event is registered when there are near-time coincident and statistically significant enhancements of the count rates of at least two differently located neutron monitors including at least one neutron monitor near sea level and a corresponding enhancement in the proton flux measured by a space-borne instrument(s)." There is a subclass of GLEs called sub-GLE: "A sub-GLE event is registered when there are near-time coincident and statistically significant enhancements of the count rates of at least two differently located high-elevation neutron monitors and a corresponding enhancement in the proton flux measured by a space-borne instrument(s), but no statistically significant enhancement in the count rates of neutron monitors near sea level." Description Charged particles from the Sun generally do not possess the energy required to penetrate the Earth's magnetic field or upper atmosphere. However, a small number of solar events produce charged particles which are able to penetrate these layers, causing an air shower. This particle shower reaches ground level, where its effects are measured, leading to the name "Ground Level Enhancement". These effects are usually measured as elevated levels of neutrons and muons. These events can increase the radiation dose of an individual at sea level or while in an aircraft, though not by enough to significantly increase an individual's lifetime risk of cancer. GLEs are distinct from individual cosmic rays because multiple charged particles enter the Earth's atmosphere simultaneously, leading to a synchronized event over a wide area. The term GLE refers to this wider event rather than an individual particle shower. A GLE is indicated by an increase in levels of neutrons and muons at one or more monitoring stations occurring over a period of 15 min or longer, followed by a longer decay to previous levels. GLEs are associated with intense solar flares; for example, the GLE which occurred on May 17, 2012, was associated with an M-class flare which occurred 20 minutes prior. As GLE-causing particles have such high kinetic energies, they travel very quickly and can be used to predict the arrival of solar energetic particle (SEP) events (with lower-energy, slower particles). The method by which solar flares and coronal mass ejections (CMEs) produce such high-energy particles is still uncertain, with some studies suggesting that they are produced mostly by a CME shock wave, by strong flare events, or some combination, or that they are related to the connection between the active solar region and the magnetic field of the Earth. Ground level enhancements are usually accompanied by a solar radiation storm. When correlated with the S-scale (related to the number of >10 MeV protons measured at geosynchronous orbit), the GLE occurrence rate was 29% for S2 or larger storms, 36% for S3 or larger, and 40% for S4. GLEs are uncommon.
At present, 76 GLE events have been observed since the 1940s; the most recent, GLE #74, took place on 21 November 2024. GLEs are more frequent around solar maximum. See also Heliophysics List of solar storms Solar energetic particles Space weather Solar particle event Particle shower Air shower (physics) References Astroparticle physics
Ground level enhancement
[ "Physics" ]
772
[ "Astroparticle physics", "Particle physics", "Astrophysics" ]
69,310,432
https://en.wikipedia.org/wiki/Quantification%20%28machine%20learning%29
In machine learning and data mining, quantification (variously called learning to quantify, supervised prevalence estimation, or class prior estimation) is the task of using supervised learning to train models (quantifiers) that estimate the relative frequencies (also known as prevalence values) of the classes of interest in a sample of unlabelled data items. For instance, in a sample of 100,000 unlabelled tweets known to express opinions about a certain political candidate, a quantifier may be used to estimate the percentage of these tweets which belong to class `Positive' (i.e., which manifest a positive stance towards this candidate), and to do the same for classes `Neutral' and `Negative'. Quantification may also be viewed as the task of training predictors that estimate a (discrete) probability distribution, i.e., that generate a predicted distribution that approximates the unknown true distribution of the items across the classes of interest. Quantification is different from classification, since the goal of classification is to predict the class labels of individual data items, while the goal of quantification is to predict the class prevalence values of sets of data items. Quantification is also different from regression, since in regression the training data items have real-valued labels, while in quantification the training data items have class labels. It has been shown in multiple research works that performing quantification by classifying all unlabelled instances and then counting the instances that have been attributed to each class (the 'classify and count' method) usually leads to suboptimal quantification accuracy. This suboptimality may be seen as a direct consequence of 'Vapnik's principle', which states: "When solving a problem of interest, do not solve a more general problem as an intermediate step." In our case, the problem to be solved directly is quantification, while the more general intermediate problem is classification. As a result of the suboptimality of the 'classify and count' method, quantification has evolved as a task in its own right, different (in goals, methods, techniques, and evaluation measures) from classification. Quantification tasks The main variants of quantification, according to the characteristics of the set of classes used, are: Binary quantification, corresponding to the case in which there are only two classes and each data item belongs to exactly one of them; Single-label multiclass quantification, corresponding to the case in which there are more than two classes and each data item belongs to exactly one of them; Multi-label multiclass quantification, corresponding to the case in which there are two or more classes and each data item can belong to zero, one, or several classes at the same time; Ordinal quantification, corresponding to the single-label multiclass case in which a total order is defined on the set of classes. Regression quantification, a task which stands to 'standard' quantification as regression stands to classification. Strictly speaking, this task is not a quantification task as defined above (since the individual items do not have class labels but are labelled by real values), but has enough commonalities with other quantification tasks to be considered one of them. Most known quantification methods address the binary case or the single-label multiclass case, and only a few of them address the multi-label, ordinal, and regression cases. Binary-only methods include the Mixture Model (MM) method, the HDy method, SVM(KLD), and SVM(Q).
Methods that can deal with both the binary case and the single-label multiclass case include probabilistic classify and count (PCC), adjusted classify and count (ACC), probabilistic adjusted classify and count (PACC), and the Saerens-Latinne-Decaestecker EM-based method (SLD). Methods for multi-label quantification include regression-based quantification (RQ) and label powerset-based quantification (LPQ). Methods for the ordinal case include Ordinal Quantification Tree (OQT), and ordinal versions of the above-mentioned ACC, PACC, and SLD methods. Methods for the regression case include Regress and splice and Adjusted regress and sum. Evaluation measures for quantification Several evaluation measures can be used for evaluating the error of a quantification method. Since quantification consists of generating a predicted probability distribution that estimates a true probability distribution, these evaluation measures are ones that compare two probability distributions. Most evaluation measures for quantification belong to the class of divergences. Evaluation measures for binary quantification and single-label multiclass quantification are Absolute Error Squared Error Relative Absolute Error Kullback-Leibler divergence Pearson Divergence Evaluation measures for ordinal quantification are Normalized Match Distance (a particular case of the Earth Mover's Distance) Root Normalized Order-Aware Distance Applications Quantification is of special interest in fields such as the social sciences, epidemiology, market research, and ecological modelling, since these fields are inherently concerned with aggregate data. However, quantification is also useful as a building block for solving other downstream tasks, such as improving the accuracy of classifiers on out-of-distribution data, performing word sense disambiguation, allocating resources, and measuring classifier bias. Resources LQ 2021: the 1st International Workshop on Learning to Quantify LQ 2022: the 2nd International Workshop on Learning to Quantify LQ 2023: the 3rd International Workshop on Learning to Quantify LQ 2024: the 4th International Workshop on Learning to Quantify LeQua 2022: the 1st Data Challenge on Learning to Quantify LeQua 2024: the 2nd Data Challenge on Learning to Quantify QuaPy: An open-source Python-based software library for quantification QuantificationLib: A Python library for quantification and prevalence estimation References Machine learning
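As a brief, hedged sketch of the divergence-style evaluation measures listed in the article above (absolute error, relative absolute error, and the Kullback-Leibler divergence), the following Python snippet compares a true and a predicted class distribution; the distributions are made-up examples, and the unsmoothed formulas assume strictly positive prevalence values.

import numpy as np

def absolute_error(p, q):
    # p: true class distribution, q: predicted class distribution
    return np.abs(np.asarray(p) - np.asarray(q)).mean()

def relative_absolute_error(p, q):
    p, q = np.asarray(p), np.asarray(q)
    return (np.abs(p - q) / p).mean()    # assumes no true prevalence is zero

def kl_divergence(p, q):
    p, q = np.asarray(p), np.asarray(q)
    return np.sum(p * np.log(p / q))     # assumes strictly positive entries

p_true = [0.5, 0.3, 0.2]                 # e.g. Positive / Neutral / Negative
p_pred = [0.4, 0.35, 0.25]
print(absolute_error(p_true, p_pred))
print(relative_absolute_error(p_true, p_pred))
print(kl_divergence(p_true, p_pred))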
Quantification (machine learning)
[ "Engineering" ]
1,237
[ "Artificial intelligence engineering", "Machine learning" ]
69,312,633
https://en.wikipedia.org/wiki/Genome%20mining
Genome mining describes the exploitation of genomic information for the discovery of biosynthetic pathways of natural products and their possible interactions. It depends on computational technology and bioinformatics tools. The mining process relies on a huge amount of data (represented by DNA sequences and annotations) accessible in genomic databases. By applying data mining algorithms, the data can be used to generate new knowledge in several areas of medicinal chemistry, such as discovering novel natural products. History In the mid- to late 1980s, researchers increasingly focused on genetic studies as sequencing technologies advanced. The GenBank database was established in 1982 for the collection, management, storage, and distribution of DNA sequence data due to the increasing availability of DNA sequences. With the increasing amount of genetic data, biotechnological companies have been able to use human DNA sequences to develop protein and antibody drugs through genome mining since 1992. In the late 1990s, many companies, such as Amgen, Immunex, and Genentech, were able to develop drugs that progressed to the clinical stage by adopting genome mining. Since the Human Genome Project was completed in the early 2000s, researchers have been sequencing the genomes of many microorganisms. Subsequently, many of these genomes have been carefully studied to identify new genes and biosynthetic pathways. Algorithms As large quantities of genomic sequence data began to accumulate in public databases, computational algorithms became important for deciphering this enormous collection of genomic data. Among them are genetic algorithms, which generate high-quality solutions to optimization and search problems by relying on bio-inspired operators such as mutation, crossover and selection. The following algorithms and tools are commonly used: AntiSMASH (Antibiotics and Secondary Metabolite Analysis Shell) addresses secondary metabolite genome pipelines. PRISM (Prediction Informatics for Secondary Metabolites) is a combinatorial approach to chemical structure prediction for genetically encoded nonribosomal peptides and type I and II polyketides. SIM (Statistically based sequence similarity) methods, such as FASTA or PSI-BLAST, infer orthologous homology. BLAST (Basic local alignment search tool) is an approach for rapid sequence comparison. Applications Genome mining is applied to the discovery of natural products by facilitating the characterization of novel molecules and biosynthetic pathways. Natural product discovery The production of natural products is regulated by the biosynthetic gene clusters (BGCs) encoded in the microorganism. By adopting genome mining, the BGCs that produce a target natural product can be predicted. Some important enzyme classes responsible for the formation of natural products are polyketide synthases (PKS) and non-ribosomal peptide synthetases (NRPS); other major product classes include ribosomally synthesized and post-translationally modified peptides (RiPPs), terpenoids, and many more. By mining for these enzymes, researchers can determine the classes that BGCs encode and compare target gene clusters to known gene clusters. To verify the relation between BGCs and natural products, the target BGCs can be expressed in a suitable host through the use of molecular cloning. Databases and tools Genetic data has accumulated in databases. Researchers are able to utilize algorithms to decipher the data accessible from these databases for the discovery of new processes, targets, and products.
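As an illustrative, hedged sketch of this kind of database-driven mining, the snippet below uses Biopython's NCBI BLAST interface to search a public sequence database for homologues of a query protein. The query sequence is a made-up placeholder rather than a real biosynthetic enzyme, and a real run requires network access to NCBI and can take minutes.

from Bio.Blast import NCBIWWW, NCBIXML

query = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"   # placeholder sequence only

# Submit a protein-protein BLAST search against the non-redundant database.
result_handle = NCBIWWW.qblast("blastp", "nr", query)
record = NCBIXML.read(result_handle)

# Report the best-scoring alignments, i.e. candidate homologues.
for alignment in record.alignments[:5]:
    best_hsp = alignment.hsps[0]
    print(alignment.title[:60], f"E-value: {best_hsp.expect:.2e}")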
The following are databases and tools: GenBank database provides genomic datasets for analysis. UCSC Genome Browser is an interactive web-based genome browser. AntiSMASH-DB allows comparing the sequences of newly sequenced BGCs against those of previously predicted and experimentally characterized ones. BIG-FAM is a biosynthetic gene cluster family database. DoBISCUIT is a database of secondary metabolite biosynthetic gene clusters. MIBiG (Minimum Information about a Biosynthetic Gene cluster specification) provides a standard for annotations and metadata on biosynthetic gene clusters and their molecular products. Interactive tree of life (iTOL) is a web-based tool for the display, manipulation and annotation of phylogenetic trees. References Medicinal chemistry DNA Mining
Genome mining
[ "Chemistry", "Biology" ]
826
[ "Biochemistry", "nan", "Medicinal chemistry" ]
69,314,196
https://en.wikipedia.org/wiki/Bunyaviridae%20nonstructural%20S%20proteins
Bunyaviridae nonstructural S proteins (NSs) are encoded by the viral genome and play no role in viral replication or in the viral protein coat. The nonstructural S segment (NSs) proteins created by the Bunyaviridae virus family are able to interact with the human immune system in order to increase viral replication in infected cells. Understanding this mechanism can have global health impacts. Inhibition pathways Within the Bunyaviridae virus family, specifically the genus Phlebovirus, multiple pathways for the inhibition of the immune response have been identified. NSs proteins are able to interact with interferon (IFN) pathways, but the mechanism varies from virus to virus. The NSs proteins of different viruses have been shown to differ in amino acid sequence by up to 85%. Rift Valley Fever Virus (RVFV) The NSs protein is distributed throughout the cytoplasm and nucleus of the RVFV-infected cell, forming fiber-like structures within the nucleus. NSs in RVFV binds to the SAP30 region of DNA in the nucleus of the cell, which is an important promoter region of IFN-β. Many other NSs proteins in the Bunyaviridae virus family do not function in this same way. Severe Fever with Thrombocytopenia Syndrome Virus (SFTSV) Although the exact target of SFTSV is unknown, many believe that the virus attacks human hemopoietic cells. It has been shown that upstream molecules of IFNs, such as MAVS, TRAF6 and TRAF3, are unchanged in infected cells. This suggests that IFNs are still being produced, but they have no effect and are undetectable in people's blood serum. The NSs protein in SFTSV has been shown to interfere with TBK1, which is needed in the activation of both the IRF and NF-κB pathways. Uukuniemi virus (UUKV) UUKV is a non-human pathogen that still creates an NSs protein. The NSs protein has only been shown to weakly interact with the 40S subunit of ribosomes and MAVS. Arumowot virus (AMTV) AMTV is another non-human pathogen; its NSs protein is quickly degraded by proteasomes, and therefore it does not cause infection in humans. References Proteins
Bunyaviridae nonstructural S proteins
[ "Chemistry" ]
490
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
69,314,250
https://en.wikipedia.org/wiki/Retinomorphic%20sensor
Retinomorphic sensors are a type of event-driven optical sensor which produce a signal in response to changes in light intensity, rather than to light intensity itself. This is in contrast to conventional optical sensors such as charge coupled device (CCD) or complementary metal oxide semiconductor (CMOS) based sensors, which output a signal that increases with increasing light intensity. Because they respond to movement only, retinomorphic sensors are hoped to enable faster tracking of moving objects than conventional image sensors, and have potential applications in autonomous vehicles, robotics, and neuromorphic engineering. Naming and history The first so-called artificial retinas were reported in the late 1980s by Carver Mead and his doctoral students Misha Mahowald and Tobias Delbrück. These silicon-based sensors were based on small circuits involving differential amplifiers, capacitors, and resistors. The sensors produced a spike and subsequent decay in output voltage in response to a step-change in illumination intensity. This response is analogous to that of animal retinal cells, which in the 1920s were observed to fire more frequently when the intensity of light was changed than when it was constant. The name silicon retina has hence been used to describe these sensors. The term retinomorphic was first used in a conference paper by Lex Akers in 1990. The term received wider use by Stanford Professor of Engineering Kwabena Boahen, and has since been applied to a wide range of event-driven sensing strategies. The word is analogous to neuromorphic, which is applied to hardware elements (such as processors) designed to replicate the way the brain processes information. Operating principles There are several retinomorphic sensor designs which yield a similar response. The first designs employed a differential amplifier which compared the input signal from a conventional sensor (e.g. a phototransistor) to a filtered version of the output, resulting in a gradual decay if the input was constant. Since the 1980s these sensors have evolved into much more complex and robust circuits. A more compact design of retinomorphic sensor consists of just a photosensitive capacitor and a resistor in series. The output voltage of these retinomorphic sensors, $V_{out}$, is defined as the voltage dropped across the resistor. The photosensitive capacitor is designed to have a capacitance which is a function of incident light intensity. If a constant voltage $V_{in}$ is applied across this RC circuit it will act as a passive high-pass filter and all voltage will be dropped across the capacitor (i.e. $V_{out} = 0$). After a sufficient amount of time, the plates of the capacitor will be fully charged with a charge of magnitude $Q = C_0 V_C$ on each plate, where $C_0$ is the capacitance in the dark. Since $V_C = V_{in}$ under constant illumination, this can be simplified to $Q = C_0 V_{in}$. If light is then applied to the capacitor it will change capacitance to a new value: $C_{light}$. The charge that the plates can accommodate will therefore change to $Q_{light} = C_{light} V_{in}$, leaving a surplus / deficit of charge on each plate. The excess charge will be forced to leave the plates, flowing either to ground or the input voltage terminal. The rate of charge flow is determined by the resistance of the resistor $R$, and the capacitance of the capacitor. This charge flow will lead to a non-zero voltage being dropped across the resistor and hence a non-zero $V_{out}$.
After the charge stops flowing the system returns to steady state, all the voltage is once again dropped across the capacitor, and $V_{out} = 0$ again. For a capacitor to change its capacitance under illumination, the dielectric constant of the insulator between the plates, or the effective dimensions of the capacitor, must be illumination-dependent. The effective dimensions can be changed by using a bilayer material between the plates, consisting of an insulator and a semiconductor. Under appropriate illumination conditions the semiconductor will increase its conductivity when exposed to light, emulating the process of moving the plates of the capacitor closer together, and therefore increasing capacitance. For this to be possible, the semiconductor must have a low electrical conductivity in the dark, and have an appropriate band gap to enable charge generation under illumination. The device must also allow optical access to the semiconductor, through a transparent plate (e.g. using a transparent conducting oxide). Applications Conventional cameras capture every part of an image, regardless of whether it is relevant to the task. Because every pixel is measured, conventional image sensors are only able to sample the visual field at relatively low frame rates, typically 30 - 240 frames per second. Even in professional high speed cameras used for motion picture, the frame rate is limited to a few tens of thousands of frames per second for a full resolution image. This limitation could represent a performance bottleneck in the identification of high speed moving objects. This is particularly critical in applications where rapid identification of movement is critical, such as in autonomous vehicles. By contrast, retinomorphic sensors identify movement by design. This means that they do not have a frame rate and instead are event-driven, responding only when needed. For this reason, retinomorphic sensors are hoped to enable identification of moving objects much more quickly than conventional real-time image analysis strategies. Retinomorphic sensors are therefore hoped to have applications in autonomous vehicles, robotics, and neuromorphic engineering. Theory Retinomorphic sensor operation can be quantified using similar techniques to simple RC circuits, the only difference being that capacitance is not constant as a function of time in a retinomorphic sensor. If the input voltage is defined as $V_{in}$, the voltage dropped across the resistor as $V_R$, and the voltage dropped across the capacitor as $V_C$, we can use Kirchhoff's Voltage Law to state: $V_{in} = V_R + V_C$. Defining the current flowing through the resistor as $I$, we can use Ohm's Law to write: $I = V_R / R$. From the definition of current, we can then write this in terms of charge, $Q$, flowing off the bottom plate: $I = dQ/dt = V_R / R$, where $t$ is time. Charge on the capacitor plates is defined by the product of capacitance, $C$, and the voltage across the capacitor, $V_C$; we can hence say: $Q = C V_C$. Because capacitance in retinomorphic sensors is a function of time, $C$ cannot be taken out of the derivative as a constant. Using the product rule, we get the following general equation of retinomorphic sensor response: $\frac{V_{in} - V_C}{R} = C \frac{dV_C}{dt} + V_C \frac{dC}{dt}$, or, in terms of the output voltage (using $V_{out} = V_R = V_{in} - V_C$ with $V_{in}$ constant): $\frac{V_{out}}{R} = -C \frac{dV_{out}}{dt} + (V_{in} - V_{out}) \frac{dC}{dt}$. Response to a step-change in intensity While the equation above is valid for any form of $C(t)$, it cannot be solved analytically unless the input form of the optical stimulus is known. The simplest form of optical stimulus would be a step function going from zero to some finite optical power density $P_0$ at a time $t = t_0$.
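The following Python sketch numerically integrates the governing equation above for such a step change, using the charge on the capacitor as the state variable; all component values ($V_{in}$, $R$, $C_0$, $C_{light}$, $t_0$) are illustrative assumptions rather than parameters of any real device.

import numpy as np

V_in = 1.0          # applied bias (V)
R = 1.0e6           # series resistance (ohm)
C0 = 1.0e-9         # capacitance in the dark (F)
C_light = 2.0e-9    # capacitance under illumination (F)
t0 = 1.0e-3         # time at which the light turns on (s)

dt = 1.0e-7
t = np.arange(0.0, 6.0e-3, dt)
C = np.where(t < t0, C0, C_light)    # step change in capacitance

q = C0 * V_in                        # steady state in the dark: V_out = 0
V_out = np.empty_like(t)
for i, Ci in enumerate(C):
    V_C = q / Ci                     # voltage across the capacitor
    V_out[i] = V_in - V_C            # voltage dropped across the resistor
    q += dt * (V_in - V_C) / R       # dQ/dt = V_R / R

# V_out spikes at t0 and then decays back toward zero with a time
# constant of order R * C_light as the charge re-equilibrates.
print(f"peak V_out  = {V_out.max():.3f} V")
print(f"final V_out = {V_out[-1]:.4f} V")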
While real-world applications of retinomorphic sensors are unlikely to be accurately described by such events, it is a useful way to understand and benchmark the performance of retinomorphic sensors. In particular, we are primarily concerned with the maximum height of $V_{out}$ immediately after the light has been turned on. In this case the capacitance could be described by: $C(t) = C_0$ for $t < t_0$ and $C(t) = C_{light}$ for $t \geq t_0$. The capacitance under illumination will depend on $P_0$. Semiconductors are known to have a conductance, $\sigma$, which increases with a power-law dependence on incident optical power density: $\sigma \propto P_0^{\gamma}$, where $\gamma$ is a dimensionless exponent. Since $\sigma$ is linearly proportional to charge density, and capacitance is linearly proportional to charges on the plates for a given voltage, the capacitance of a retinomorphic sensor also has a power-law dependence on $P_0$. The capacitance as a function of time in response to a step function can therefore be written as: $C(t) = C_0$ for $t < t_0$ and $C(t) = C_0 + C_P P_0^{\gamma}$ for $t \geq t_0$, where $C_P$ is the capacitance prefactor. For a step function we can re-write our differential equation for $V_{out}$ as a difference equation: $C \frac{\Delta V_C}{\Delta t} + V_C \frac{\Delta C}{\Delta t} - \frac{V_{in} - V_C}{R} = 0$, where $\Delta V_C$ is the change in voltage dropped across the capacitor as a result of turning on the light, $\Delta C$ is the change in capacitance as a result of turning on the light, and $\Delta t$ is the time taken for the light to turn on. The variables $V_C$ and $C$ are defined as the voltage dropped across the capacitor and the capacitance, respectively, immediately after the light has been turned on. I.e. $V_C$ is henceforth shorthand for $V_C(t_0)$, and $C$ is henceforth shorthand for $C(t_0)$. Assuming the sensor has been held in the dark for sufficiently long before the light is turned on, the change in $V_C$ can hence be written as: $\Delta V_C = V_C - V_{in}$. Similarly, the change in $C$ can be written as $\Delta C = C - C_0$. Putting these into the difference equation for $V_{out}$: $C \frac{V_C - V_{in}}{\Delta t} + V_C \frac{C - C_0}{\Delta t} - \frac{V_{in} - V_C}{R} = 0$. Multiplying this out: $C V_C - C V_{in} + V_C C - V_C C_0 - \frac{\Delta t}{R}(V_{in} - V_C) = 0$. Since we are assuming the light turns on very quickly we can approximate $\Delta t \to 0$. This leads to the following: $V_C = \frac{C V_{in}}{2C - C_0}$. Using the relationship $V_{out} = V_{in} - V_C$, this can then be written in terms of the output voltage: $V_{max} = V_{in} \frac{C - C_0}{2C - C_0}$, where we have defined the peak height as $V_{max} = V_{out}(t_0)$, since the peak occurs immediately after the light has been turned on. The retinomorphic figure of merit, $\Lambda$, is defined as the ratio of the capacitance prefactor and the capacitance of the retinomorphic sensor in the dark: $\Lambda = \frac{C_P}{C_0}$. With this parameter, the inverse ratio of peak height to input voltage can be written as follows: $\frac{V_{in}}{V_{max}} = 2 + \frac{1}{\Lambda P_0^{\gamma}}$. The value of $\gamma$ will depend on the nature of recombination in the semiconductor, but if band-to-band recombination dominates and the charge densities of electrons and holes are equal, $\gamma = 1/2$. For systems where this is approximately true the following simplification to the above equation can be made: $\frac{V_{in}}{V_{max}} = 2 + \frac{1}{\Lambda \sqrt{P_0}}$. This equation provides a simple method for evaluating the retinomorphic figure of merit from experimental data. This can be carried out by measuring the peak height, $V_{max}$, of a retinomorphic sensor in response to a step change in light intensity from 0 to $P_0$, for a range of values $P_0$. Plotting $V_{in} / V_{max}$ as a function of $1 / \sqrt{P_0}$ should yield a straight line with a gradient of $1 / \Lambda$. This approach assumes that $V_{max}$ is linearly proportional to $V_{in}$. See also Active-pixel sensor Charge coupled device Event camera Neuromorphic engineering Optical sensor Photodiode References Image sensors Semiconductors Sensors
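As a hedged illustration of this fitting procedure, the Python sketch below generates synthetic peak heights from the relation $V_{in}/V_{max} = 2 + 1/(\Lambda \sqrt{P_0})$ derived above, with a little noise, and then recovers $\Lambda$ from the gradient of a linear fit; all values are made up for demonstration only.

import numpy as np

Lambda_true = 2.0     # illustrative figure of merit
V_in = 1.0
P0 = np.array([0.1, 0.3, 1.0, 3.0, 10.0])   # power densities (arb. units)

# Synthetic peak heights with ~1% measurement noise.
rng = np.random.default_rng(1)
V_max = V_in / (2.0 + P0**-0.5 / Lambda_true)
V_max *= 1.0 + 0.01 * rng.standard_normal(P0.size)

# Linear fit of V_in / V_max against 1 / sqrt(P0): gradient = 1 / Lambda.
x = P0**-0.5
y = V_in / V_max
gradient, intercept = np.polyfit(x, y, 1)
print(f"fitted Lambda    = {1.0 / gradient:.2f} (true value: {Lambda_true})")
print(f"fitted intercept = {intercept:.2f} (expected: 2)")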
Retinomorphic sensor
[ "Physics", "Chemistry", "Materials_science", "Technology", "Engineering" ]
2,023
[ "Electrical resistance and conductance", "Physical quantities", "Semiconductors", "Measuring instruments", "Materials", "Electronic engineering", "Condensed matter physics", "Sensors", "Solid state engineering", "Matter" ]
61,417,483
https://en.wikipedia.org/wiki/Antimicrobial%20spectrum
The antimicrobial spectrum of an antibiotic means the range of microorganisms it can kill or inhibit. Antibiotics can be divided into broad-spectrum antibiotics, extended-spectrum antibiotics and narrow-spectrum antibiotics based on their spectrum of activity. Specifically, broad-spectrum antibiotics can kill or inhibit a wide range of microorganisms; extended-spectrum antibiotics can kill or inhibit Gram-positive bacteria and some Gram-negative bacteria; narrow-spectrum antibiotics can only kill or inhibit a limited range of bacterial species. Currently, no antibiotic's spectrum completely covers all types of microorganisms. Determination The antimicrobial spectrum of an antibiotic can be determined by testing its antimicrobial activity against a wide range of microbes in vitro. Nonetheless, the range of microorganisms which an antibiotic can kill or inhibit in vivo may not always be the same as the antimicrobial spectrum based on data collected in vitro. Significance Narrow-spectrum antibiotics have a low propensity to induce bacterial resistance and are less likely to disrupt the microbiome (normal microflora). On the other hand, indiscriminate use of broad-spectrum antibiotics may not only induce the development of bacterial resistance and promote the emergence of multidrug-resistant organisms, but also cause off-target effects due to dysbiosis. They may also have side effects, such as diarrhea or rash. Generally, a broad-spectrum antibiotic has more clinical indications and is therefore more widely used. The Healthcare Infection Control Practices Advisory Committee (HICPAC) recommends the use of narrow-spectrum antibiotics whenever possible. Examples Broad-spectrum antibiotic: Ciprofloxacin, Doxycycline, Minocycline, Tetracycline, Imipenem, Azithromycin Extended-spectrum antibiotic: Ampicillin Narrow-spectrum antibiotic: Sarecycline, Vancomycin, Isoniazid See also Antibiotic Methicillin-resistant Staphylococcus aureus (MRSA) References External links Healthcare Infection Control Practices Advisory Committee (HICPAC) Antibiotics Clinical pharmacology
Antimicrobial spectrum
[ "Chemistry", "Biology" ]
444
[ "Pharmacology", "Biotechnology products", "Clinical pharmacology", "Antibiotics", "Biocides" ]
61,421,514
https://en.wikipedia.org/wiki/Hantz%20reactions
Hantz reactions are a class of pattern-forming precipitation reactions in gels implementing a reaction–diffusion system. The precipitation patterns form as a reaction of two electrolytes: a highly concentrated "outer" one diffuses into a hydrogel, while the "inner" one is dissolved in the gel itself. The colloidal precipitate which builds up the patterns is trapped by the gel and kept at the location where it is formed, similar to Liesegang rings. The first representative of this class of reactions was the NaOH (outer electrolyte) + CuCl2 (inner electrolyte) reaction. Later the NaOH+AgNO3, the CuCl2+K3[Fe(CN)6], the NaOH+AlCl3, and the NH3+AgNO3 reactions in several hydrogels have also proved to show similar behavior. Precipitate patterns forming in these reactions are exceptionally rich. Besides the macroscopic shapes like layered structures, helices and cardioids, regular sheets of colloidal precipitate may also emerge with a periodicity even less than 20 micrometers (microscopic patterns). Macroscopic patterns The arrangement that best shows the sequence of events leading to the formation of macroscopic patterns is the one in which the outer electrolyte penetrates into a thin gel sheet located between two glass plates. In this case, the diffusion front has a quasi-one-dimensional shape. If there are some impurities or obstacles in the gel, the precipitation may cease at these points, and the traveling precipitation front following the diffusion front will split. As the broken precipitation front advances, its active segments get shorter, resulting in triangle-like regions free of precipitate behind the front. The reason why the precipitation temporarily or permanently stops in these regions is that the oblique, passive edges of the precipitate act as a semipermeable membrane, blocking the diffusion of the outer electrolyte. The mechanism behind the regression of the active front segments is not fully understood. It is believed that a diffusive intermediate compound forms at the active segments, with reduced concentration at the sides, and that a critical concentration is required for the precipitation to occur. When the outer electrolyte is poured onto the top of a gel column in a glass tube, the diffusion front takes roughly the form of a disk. In this case, the precipitation fronts involved in pattern formation can perform more complicated motions, leading to more complex patterns that depend on the outer and inner electrolyte concentrations. These include the formation of multi-armed helices, intermingled cardioids, Voronoi tessellations, so-called target patterns and other, even more complex shapes. Microscopic patterns Under certain conditions, for example when the cation of the inner electrolyte is Cu2+ or Ag+, regular sheets consisting of colloidal grains are formed. This phenomenon is especially striking when the reactions run in poly(vinyl alcohol) gels, and the speed of the precipitation front falls below about 0.3 μm/s. The finest microscopic patterns have been observed in the NaOH+AgNO3 reactions, where the periodicity dropped below 10 μm. The chemical mechanism of this pattern formation is not fully understood, but computer simulations based on phase separation described by the Cahn–Hilliard equation with a moving source front exhibit the most important properties of the building of the microscopic patterns. Defects may also be present in the regular microscopic sheets, which can even interact during the front propagation.
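As a rough, hedged sketch of the simulation approach mentioned above, the following Python code integrates a one-dimensional Cahn–Hilliard equation with a moving source front. The parameter values, the periodic boundaries, and the simple source term are illustrative simplifications, not a reproduction of the published simulations.

import numpy as np

N, dx, dt = 256, 1.0, 0.01
D, gamma = 1.0, 1.0          # mobility and gradient-energy coefficient
v, s = 0.1, 0.2              # front speed and source strength (assumed)

def lap(f):
    # 1-D Laplacian with periodic boundaries
    return (np.roll(f, 1) + np.roll(f, -1) - 2.0 * f) / dx**2

c = -1.0 * np.ones(N)        # start in the low-concentration phase
front = 0.0
for step in range(100_000):
    mu = c**3 - c - gamma * lap(c)       # chemical potential
    c += dt * D * lap(mu)                # Cahn-Hilliard update
    i = int(front)                       # deposit material at the front
    if i < N:
        c[i] += dt * s
    front += v * dt

# Behind the front, c relaxes into alternating bands, the analogue of
# the regular precipitate sheets described above.
print(np.round(c[:100:5], 2))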
These microscopic patterns have raised interest in different fields of micro- and nanotechnology as well. See also Diffusion-controlled reaction Turing pattern Belousov–Zhabotinsky reaction References External links Liesegang banding Macroscopic patterns, NaOH+CuCl2 Microscopic patterns, NaOH+CuCl2 and NaOH+AgCl2 in PVA gel Helical precipitation patterns Gels Chemical reactions Diffusion
Hantz reactions
[ "Physics", "Chemistry" ]
809
[ "Transport phenomena", "Physical phenomena", "Diffusion", "Colloids", "nan", "Gels" ]
61,424,662
https://en.wikipedia.org/wiki/Anderson%27s%20theorem%20%28superconductivity%29
In the field of superconductivity, Anderson's theorem states that superconductivity in a conventional superconductor is robust with respect to (non-magnetic) disorder in the host material. It is named after P. W. Anderson, who discussed this phenomenon in 1959, shortly after BCS theory was introduced. One consequence of Anderson's theorem is that the critical temperature Tc of a conventional superconductor barely depends on material purity, or more generally on defects. This concept breaks down in the case of very strong disorder, e.g. close to a superconductor-insulator transition. It also does not apply to unconventional superconductors. In fact, a strong suppression of Tc with increasing defect scattering, and thus the non-validity of Anderson's theorem, is taken as a strong indication that the superconductivity is unconventional. References Superconductivity
Anderson's theorem (superconductivity)
[ "Physics", "Materials_science", "Engineering" ]
187
[ "Physical quantities", "Superconductivity", "Materials science", "Condensed matter physics", "Electrical resistance and conductance" ]
63,611,863
https://en.wikipedia.org/wiki/H3K36me
H3K36me is an epigenetic modification to the DNA packaging protein Histone H3, specifically, the mono-methylation at the 36th lysine residue of the histone H3 protein. There are diverse modifications at H3K36, such as phosphorylation, methylation, acetylation, and ubiquitylation, which are involved in many important biological processes. The methylation of H3K36 in particular has effects on transcriptional repression, alternative splicing, dosage compensation, DNA replication and repair, DNA methylation, and the transmission of the memory of gene expression from parents to offspring during development. Nomenclature H3K36me2 indicates dimethylation of lysine 36 on the histone H3 protein subunit. Lysine methylation Diagram: the progressive methylation of a lysine residue; the mono-methylation denotes the methylation present in H3K36me1. Lysine methylation is the addition of a methyl group to the lysine of histone proteins. This occurs via histone lysine methyltransferases (HMTases) that utilize S-adenosylmethionine to specifically place the methyl group on histone Lys or Arg residues. So far, only eight specific mammalian enzymes have been discovered that can methylate H3K36 in vitro and/or in vivo, all of which have identical catalytic SET domains but different preferences for Lys36 residues in different methylation states. Histone modifications The genomic DNA of eukaryotic cells is wrapped around special protein molecules known as histones. The complexes formed by the looping of the DNA are known as chromatin. The basic structural unit of chromatin is the nucleosome, which consists of the core octamer of histones (H2A, H2B, H3, and H4) as well as a linker histone and about 180 base pairs of DNA wrapped around it. These core histones are rich in lysine and arginine residues. The carboxyl (C) terminal end of these histones contributes to histone-histone interactions, as well as histone-DNA interactions. The amino (N) terminal charged tails are the site of the post-translational modifications, such as the one seen in H3K36me3. Epigenetic implications The post-translational modification of histone tails by either histone-modifying complexes or chromatin remodeling complexes is interpreted by the cell and leads to the complex, combinatorial transcriptional output. It is thought that a histone code dictates the expression of genes by a complex interaction between the histones in a particular region. The current understanding and interpretation of histones comes from two large scale projects: ENCODE and the Epigenomic roadmap. The purpose of the epigenomic study was to investigate epigenetic changes across the entire genome. This led to chromatin states which define genomic regions by grouping the interactions of different proteins and/or histone modifications together. Chromatin states were investigated in Drosophila cells by looking at the binding location of proteins in the genome. The use of ChIP-sequencing revealed regions in the genome characterized by different banding. Different developmental stages were profiled in Drosophila as well, and an emphasis was placed on the relevance of histone modifications. A look into the data obtained led to the definition of chromatin states based on histone modifications. Certain modifications were mapped and enrichment was seen to localize in certain genomic regions. Five core histone modifications were found, with each respective one being linked to various cell functions.
H3K4me3-promoters H3K4me1- primed enhancers H3K36me3-gene bodies H3K27me3-polycomb repression H3K9me3-heterochromatin The human genome was annotated with chromatin states. These annotated states can be used as new ways to annotate a genome independently of the underlying genome sequence. This independence from the DNA sequence enforces the epigenetic nature of histone modifications. Chromatin states are also useful in identifying regulatory elements that have no defined sequence, such as enhancers. This additional level of annotation allows for a deeper understanding of cell-specific gene regulation. Methods The histone mark H3K36me can be detected in a variety of ways: Chromatin Immunoprecipitation Sequencing (ChIP-sequencing) measures the amount of DNA enrichment once bound to a targeted protein and immunoprecipitated. It results in good optimization and is used in vivo to reveal DNA-protein binding occurring in cells. ChIP-Seq can be used to identify and quantify various DNA fragments for different histone modifications along a genomic region. Micrococcal Nuclease sequencing (MNase-seq) is used to investigate regions that are bound by well-positioned nucleosomes. The use of the micrococcal nuclease enzyme is employed to identify nucleosome positioning. Well-positioned nucleosomes are seen to have enrichment of sequences. Assay for transposase accessible chromatin sequencing (ATAC-seq) is used to look into regions that are nucleosome-free (open chromatin). It uses hyperactive Tn5 transposon to highlight nucleosome localization. See also Histone methylation Histone methyltransferase Methyllysine References Methylation Epigenetics Post-translational modification
H3K36me
[ "Chemistry" ]
1,171
[ "Post-translational modification", "Gene expression", "Methylation", "Biochemical reactions" ]
63,612,550
https://en.wikipedia.org/wiki/Systems%20Improved%20Numerical%20Differential%20Analyzer
The Systems Improved Numerical Differential Analyzer (acronym SINDA) is a commercially available software system developed by C&R Technologies that solves resistor-capacitor (R-C) network representations of physical problems governed by diffusion equations. The software was originally designed as a general thermal analyzer for the spacecraft and launch vehicle thermal community and is currently an integral part of the Thermal Desktop plugin for AutoCAD. References Physics software
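As a hedged, minimal sketch of the R-C network formulation described above, the following Python snippet advances a one-dimensional chain of thermal nodes, each with a lumped capacitance and linked by conductances (the inverse of the thermal resistances), using an explicit update. All values are illustrative, and this is a simplification rather than SINDA's actual numerical scheme.

import numpy as np

n = 6
T = np.full(n, 300.0)            # initial node temperatures (K)
C = np.full(n, 50.0)             # lumped thermal capacitances (J/K)
G = np.full(n - 1, 2.0)          # conductances 1/R between neighbours (W/K)
dt = 1.0                         # explicit time step (s)

for _ in range(5000):
    q = G * np.diff(T)           # heat flow through each "resistor" (W)
    dT = np.zeros(n)
    dT[:-1] += q / C[:-1]        # heat entering each node...
    dT[1:] -= q / C[1:]          # ...leaves its neighbour
    T += dt * dT
    T[0], T[-1] = 300.0, 400.0   # hold the two end nodes as boundaries

print(np.round(T, 1))            # relaxes to the linear diffusion profile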
Systems Improved Numerical Differential Analyzer
[ "Physics" ]
89
[ "Physics software", "Computational physics stubs", "Computational physics" ]
73,741,133
https://en.wikipedia.org/wiki/Synthetic%20exosome
Exosomes are small vesicles secreted by cells that play a crucial role in intercellular communication. They contain a variety of biomolecules, including proteins, nucleic acids and lipids, which can be transferred between cells to modulate cellular processes. Exosomes have been increasingly acknowledged as promising therapeutic tools and delivery platforms due to their unique biological properties. Biocompatibility: Exosomes are naturally occurring particles in the body, which makes them highly biocompatible and less likely to activate an immune response. Targeting ability: Exosomes can be assembled to express specific proteins or peptides, allowing them to target specific cells or tissues. Natural cargo carriers: Exosomes can naturally transport a variety of biomolecules, including proteins, RNA and DNA, which can be used for therapeutic purposes. However, because exosomes are small in size (30-150 nm), present in various biological fluids (such as blood, urine, and saliva), sensitive to environmental factors (such as temperature and pH), and complex to load with drugs efficiently, there are challenges associated with their isolation, purification, delivery and drug payload. While the application of exosomes is still in its early stages, approaches are being explored to produce exosome-like nanovesicles (ELNs or artificial exosomes) to overcome these challenges. ELNs are a type of engineered exosome designed to modify the structure and enhance the function of natural exosomes. The content of ELNs can be highly customized to match various medical needs, allowing for more precise control over their properties compared to natural exosomes. Additionally, ELNs can be modified with selectively expressed functional groups on the surface to enhance their targeting and uptake by cells or tissues. For example, ELNs can be engineered to enhance their stability in fluids, or to target specific cellular compartments, such as the cytosol of brain cells. Further, ELNs could consistently deliver therapeutic catalase mRNA as cargo to the brain, attenuating neurotoxicity and neuroinflammation. Above all, ELNs' properties can be tailored by researchers for specific applications with precise control. ELNs hold great potential as a novel approach to meeting medical needs, including immunologic therapy, anti-tumor and anti-aging treatment, and regeneration. References Synthetic biology
Synthetic exosome
[ "Engineering", "Biology" ]
475
[ "Synthetic biology", "Molecular genetics", "Biological engineering", "Bioinformatics" ]
67,900,481
https://en.wikipedia.org/wiki/Angular%20correlation%20function
The angular correlation function is a function which measures the projected clustering of galaxies on the sky, quantifying discrepancies between their actual and expected (random) distributions. The function may be computed as follows: $\delta P = n \, \delta\Omega \, [1 + w(\theta)]$, where $\delta P$ represents the conditional probability of finding a galaxy within the solid angle element $\delta\Omega$ at an angular separation $\theta$ from a given galaxy, $n$ is the mean number density of galaxies per unit solid angle, and $w(\theta)$ is the angular correlation function itself. In a homogeneous universe, the angular correlation scales with a characteristic depth. References Galaxy clusters Equations of astronomy
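As a toy, hedged illustration of how $w(\theta)$ can be estimated in practice, the following Python sketch compares pair counts in a clustered "data" catalogue against a uniform random catalogue using the natural estimator $w(\theta) \approx DD/RR - 1$; the flat-sky patch, the clustering model, and all numbers are made up, and edge effects are ignored.

import numpy as np

rng = np.random.default_rng(0)

def pair_separations(pts):
    # pairwise separations on a small flat patch (degrees)
    d = pts[:, None, :] - pts[None, :, :]
    sep = np.sqrt((d**2).sum(-1))
    return sep[np.triu_indices(len(pts), k=1)]

# "Data": points drawn around clumps; "randoms": uniform points.
centres = rng.uniform(0.0, 10.0, (25, 2))
data = centres[rng.integers(0, 25, 1000)] + 0.1 * rng.standard_normal((1000, 2))
randoms = rng.uniform(0.0, 10.0, (1000, 2))

bins = np.linspace(0.05, 2.0, 20)
DD, _ = np.histogram(pair_separations(data), bins)
RR, _ = np.histogram(pair_separations(randoms), bins)
w = DD / RR - 1.0                  # excess pair probability over random
print(np.round(w[:5], 2))          # positive at small separations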
Angular correlation function
[ "Physics", "Astronomy" ]
82
[ "Galaxy clusters", "Galaxy stubs", "Concepts in astronomy", "Astronomy stubs", "Equations of astronomy", "Astronomical objects" ]
67,901,308
https://en.wikipedia.org/wiki/Venus%20Emissivity%20Mapper
The Venus Emissivity Mapper (VEM) is a spectrometer for mapping the surface composition of Venus through a distinct number of atmospheric spectral windows. It will be one of the two payloads onboard the VERITAS mission, and will also be the VenSpec-M channel of the EnVision mission's spectrometer suite. Overview While Earth and Venus are similar in many aspects, they evolved very differently and currently have distinct surface and atmospheric environments. Where Earth's surface has liquid water and supports life, Venus experiences a mean surface temperature of over 400 °C, and a carbon dioxide rich atmosphere with a surface pressure of about 92 times that of Earth at sea-level. Little is known about Venus' surface composition. The dense atmosphere and its cloud layers are mostly opaque to visible and infrared radiation, making remote sensing a challenge. As light travels through the atmosphere, it is attenuated by absorption and scattering, and it is blurred by the emissions of the atmosphere itself. Observations of Venus have shown that the surface can be observed through a number of narrow infrared bands. Using this technique Venus Express was able to observe fresh basalt, pointing towards recent volcanic activity on the surface. The spectral windows were at the edge of Venus Express' VIRTIS instrument's sensitive spectral range. VIRTIS also experienced thermal drifts and had issues with stray-light. This was because VIRTIS was not designed for ground mapping, yet it allowed for a proof-of-concept leading to VEM's design. VEM will be the first instrument in orbit around Venus focussed solely on these spectral bands, allowing for a complete mapping of the surface composition and surface redox states. VEM was selected for NASA's VERITAS mission and for ESA's EnVision mission in June 2021. The principal investigator is Jörn Helbert, and the instrument is built by the DLR in Berlin. Goal The goal of this instrument is to obtain a full mapping of the rock types, their iron content, and their redox states from orbit. Laboratory measurements have shown that a 4% difference in relative emissivity is sufficient to distinguish between the different rock types, and potentially identify their weathering states. This is thus the design driver for the instrument. An identification of the ground composition based on measured spectra is only possible once a spectral library representing the surface conditions of Venus is available, which has been in the works at the Planetary Spectroscopy Laboratory at DLR. By continuously monitoring the surface, it will be possible to further constrain the current volcanic activity. In addition to that, any surface information will contribute towards understanding the past evolution of Venus leading up to its current state. Description History and heritage Spectral windows have been used by several space-based (Venus Express, Galileo and Cassini) and ground-based missions to study Venus. While these mostly represented proof of concepts, they gave rise to the idea for the Venus Emissivity Mapper, which is building on the flight heritage of all the aforementioned missions, especially so on VIRTIS and VMC aboard Venus Express. VEM was first put forward as part of the EnVision mission proposal in 2010. At the same time, the first Venus-analogue measurements began surfacing, making it possible to derive surface compositions from the measured Venus emissivities. 
EnVision's initial proposal was not accepted, and so the design was iterated upon so that a new proposal could be made in 2014 and again in 2016. ESA selected it for an in-depth design study in 2018 and three years later, ESA declared EnVision, with VEM (VenSpec-M) aboard, the fifth M-class Cosmic Vision mission. The Venus Emissivity Mapper was also submitted to the NASA Discovery Program as part of the VERITAS proposal in 2014. It was initially selected for Phase A funding but not chosen for flight. In 2019, an updated proposal was submitted to the Discovery Program, once more receiving Phase A funding. Two years later, in June 2021, the announcement about VERITAS' official selection was made public. Building on the heritage of previous missions, all subsystems have a TRL of at least 6, thereby giving VEM an overall TRL of 6. Science Observations will be made at night, measuring the emissivity signal from the surface. Typically, igneous rocks are identified by their sodium, potassium, and silicon content. However, these elements lack observable features in the 1 μm spectral band. Instead, transition metals (primarily Fe), and their spectral features in the relevant windows, will be used to characterise the surface composition. This map of iron content will then, with topographical data, be used to generate a map of inferred rock types. In order for these measurements to be absolute rather than relative, the measurements are to be calibrated using the data gathered by the Venera landers when overflying their landing zones. Design VEM is a multispectral imaging instrument, operating as a pushbroom scanner. It consists of the following subsystems: the optical sub-system (VEMO), the Instrument Controller (VEMIC), the power supply (VEMPS), and the two-stage baffle (VEMBA). The development approach is analogous to what was successfully done when designing MERTIS. This means that one starts with a breadboard, moves on to a lab prototype, follows that up with an engineering prototype, and finally reaches the full qualification model. Along the way, risks are constantly identified and mitigated. Optics (VEMO) The optics sub-system is a three lens system, provided by LESIA, Observatoire de Paris, France. First, a telescope with an aperture of 8 mm and a focal length of 40.5 mm projects the scene on the filter array. From there, it is then imaged on the focal plane through two more lenses with a combined magnification factor of 0.4. The optics have a total transmittance of 0.88, not taking into account the filters. The focal plane array (VEMFPA) consists of a Xenics XSW-640 InGaAs detector, which has a resolution of 640x512 pixels, a FOV of 30°×45°, a pixel pitch of 20 μm, and a pixel FOV of 0.07°×0.07°. The imaging electronics unit used is the LM98640QML-SP from Texas Instruments. InGaAs detectors have been used successfully in deep space over many years, making them a safe choice. This specific unit is currently in use on the ExoMars Trace Gas Orbiter. The filter array is provided by CNES Toulouse, France. Narrow-band filters take care of transmitting only the spectral regions of interest. Based on the 4% relative emissivity difference that is needed to differentiate between the different rock types, the signal-to-noise ratio (SNR) for each band is derived by running the respective radiative transfer model. The bands and their required SNRs are found in the following table: Instrument controller (VEMIC) The instrument controller is the interface between the internal units and the spacecraft.
It handles and processes all the data, and controls the subsystems. The system used in VEM is taken from MERTIS, with only the interfaces needing adaptation to comply with VEM. Power system (VEMPS) The main power draw comes from the focal plane array and the instrument controller. This sub-system is heavily based upon the MERTIS PSU, as the latter is already flight-proven. The main part of the power system is a DC/DC converter from the Interpoint SMRT series, which is supplemented by external LC-filters and some additional specialised circuitry. Baffle (VEMBA) To keep stray light and sunlight out, a two-stage baffle is employed. The front part is mainly a screen to protect the spacecraft, while the back part is the one taking care of the stray light. The baffle aims to reduce stray light by a factor of at least 10⁻⁵. Signal optimisations Atmospheric effects When looking at the spectral bands presented earlier, one can see that in addition to the six bands used for mineralogical measurements, eight more are present. Those additional bands are used to correct for the various effects altering the signal between the surface of Venus and the measuring spacecraft. By measuring the atmosphere on its own, its effects and the varying conditions it introduces can be accounted for. The same is done for stray light, for which three dedicated channels are used. Signal-to-noise ratio As the integration time for a satellite in orbit can hardly be optimised, a few other techniques are applied to get the highest possible signal-to-noise performance. Those improvements are: Oversampling during one dwell time (for slow orbits) Discrete Time-Delayed Integration (TDI) Spatial binning (macro-pixels) Once applied, even for an orbit altitude of 8000 km, the SNR required to reach the necessary accuracy is theoretically attained with margins of more than 100% for all bands. For orbit altitudes around 250 km, the SNR is close to 10 times better than the one obtained at 8000 km. A laboratory prototype showed potential for a later SNR performance of well over 1000. Reducing uncertainties By optimising the detector for the relevant wavelengths, and by making use of the additional spectral ranges, the effects of atmosphere and stray light are accounted for, thereby significantly lowering the uncertainty in the measurements, as described above. The uncertainties are further reduced by having overlapping ground coverage (to take care of short-term atmospheric variability) and repeated measurements (to reduce error due to uncertainty in water vapour content, cloud opacity, and surface window radiance). References Spacecraft instruments Spectrometers
Venus Emissivity Mapper
[ "Physics", "Chemistry" ]
2,020
[ "Spectrometers", "Spectroscopy", "Spectrum (physical sciences)" ]
67,910,445
https://en.wikipedia.org/wiki/Ethylene%20signaling%20pathway
Ethylene signaling pathway is a signal transduction pathway in plant cells that regulates important growth and developmental processes. Acting as a plant hormone, the gas ethylene is responsible for promoting the germination of seeds, the ripening of fruits, the opening of flowers, the abscission (or shedding) of leaves, and stress responses. It is the simplest alkene gas and the first gaseous molecule discovered to function as a hormone. Most of the understanding of ethylene signal transduction comes from studies on Arabidopsis thaliana. Ethylene can bind to at least five different membrane receptors. Although structurally diverse, the ethylene receptors all exhibit similarity (homology) to the two-component regulatory systems in bacteria, indicating their common descent from a bacterial ancestor. Ethylene binds to receptors on the membrane of the endoplasmic reticulum. Although homodimers of the receptors are required for the functional state, only one ethylene molecule binds to each dimer. Unlike in other signal transductions, ethylene is the suppressor of its receptor's activity. Ethylene receptors are active without ethylene due to binding with other enzymatically active co-receptors such as constitutive triple response 1 (CTR1) and ethylene insensitive 2 (EIN2). Ethylene binding causes EIN2 to split in two, of which the C-terminal portion of the protein can activate different transcription factors to bring about the effects of ethylene. There is also a non-canonical pathway in which ethylene activates the cytokinin receptor, and thereby regulates seed development (stomatal aperture) and the growth of the root (the apical meristem). Ethylene receptors Ethylene binds to its specific transmembrane receptor present on the membrane of the endoplasmic reticulum. There are different ethylene receptor isoforms. Five isoforms are known in Arabidopsis thaliana, which are named ethylene response/receptor 1 (ETR1), ethylene response sensor 1 (ERS1), ETR2, ERS2, and ethylene insensitive 4 (EIN4). ETR1 is similar (conserved sequence) in different plants but with slight amino acid differences. A. thaliana receptors are classified into two subfamilies based on genetic relationship and common structural features, namely subfamily 1, which includes ETR1 and ERS1, and subfamily 2, which consists of ETR2, ERS2, and EIN4. In tomato there are seven types of ethylene receptors, named SlETR1, SlETR2, SlETR3, SlETR4, SlETR5, SlETR6, and SlETR7 (Sl for Solanum lycopersicum, the scientific name of tomato). All ethylene receptors have a similar organisation: a short N-terminal domain, three conserved transmembrane domains towards the N-terminus, followed by a GAF domain of unknown function, and then signal output motifs in the C-terminal region. The N-terminus is exposed to the lumen of the endoplasmic reticulum, and the C-terminus is exposed to the cytoplasm of the cell. The N-terminus contains the sites for the binding of ethylene, dimerization and membrane localization. Two similar receptors combine to form a homodimer through a disulfide bridge forming a cysteine-cysteine interaction. However, the main membrane localization is done by the transmembrane domain, which can also bind ethylene with the help of copper as a cofactor. The copper ion is supplied by a transmembrane protein responsive-to-antagonist 1 (RAN1) from antioxidant protein 1 (ATX1) via tiplin, or directly by a copper transport protein.
Although the receptors are functionally active as dimers, only one copper ion binds to each dimer, indicating that one receptor dimer binds only one ethylene molecule. Mutations in the binding sites stop ethylene binding and also make plants insensitive to ethylene. Cys-65 in the protein helix 2 is particularly important as the binding site of the copper ion, as a mutation in it stops copper and ethylene binding. The C-terminus is basically a bacterial two-component system with kinase activity and a response regulator. ETR1 has histidine kinase activity, whereas ETR2, ERS2, and EIN4 have serine/threonine kinase activity, and ERS1 has both. The histidine kinase in ETR1 is not required for ethylene signaling. Origin and evolution Ethylene receptors are functionally similar to the bacterial two-component system, which has two activation sites named the response regulator and the histidine kinase. The cytoplasmic carboxy-terminal part of the ethylene receptor is similar in amino acid sequence to these response regulator and histidine kinase domains in bacteria, although the N-terminal region is altogether different. Such genetic and protein relationships indicate that these receptors and bacterial two-component receptors, as well as phytochromes and cytokinin receptors in plants, evolved from and were acquired by plants from a cyanobacterium that gave rise to plastids, the power organelles in plants and protists. Phylogenetic analysis also shows the common origin of the ethylene receptor in plants and the ethylene-binding domain in cyanobacteria. In 2016, Randy F. Lacey and Brad M. Binder at the University of Tennessee discovered that a cyanobacterium, Synechocystis sp. PCC 6803, responds to ethylene signals and has a functional ethylene receptor, which they named Synechocystis Ethylene Response 1 (SynEtr1). They further showed that SynEtr1 acts similarly to the plant ethylene receptor in binding ethylene, indicating the origin of the ethylene receptor from a Synechocystis-related cyanobacterium. The functional difference, however, is that kinase activity is not compulsory for ethylene binding in plants, but is the key role of SynEtr1. Signal transduction Two proteins are crucial for the interaction of ethylene with the receptors, namely constitutive triple response 1 (CTR1) and ethylene insensitive 2 (EIN2). CTR1 is a serine/threonine protein kinase that functions as a negative regulator of ethylene signalling. It is a member of the signaling protein mitogen-activated protein kinase (MAPK) kinase kinase family. EIN2 is required for ethylene signalling and is part of the NRAMP (natural resistance-associated macrophage protein) family of metal transporters; it comprises a large, N-terminal portion containing multiple transmembrane domains (EIN2-N) in the ER membrane and a cytosolic C-terminal portion (EIN2-C). Other proteins such as reversion to ethylene sensitivity 1 (RTE1), cytochrome b5 and tetratricopeptide repeat protein 1 (TRP1) also play important roles in ethylene signaling. RTE1 is a highly conserved protein in plants and protists but absent in fungi and prokaryotes. TRP1 is genetically related to transmembrane and coiled-coil protein 1 (TCC1) in animals, which is involved in F-actin function and competes with Raf-1 for Ras binding. Unlike in most signal transductions, where the ligands activate their receptors to relay their signals, ethylene acts as the suppressor of its receptor, the receptor being the negative regulator in ethylene responses.
The ethylene receptor is active in the absence of ethylene. Without ethylene, the receptor binds to CTR1 at its C-terminal kinase domain. The kinase activity of CTR1 becomes activated and phosphorylates the neighbouring EIN2. As long as EIN2 remains highly phosphorylated, it remains inactive and there is no ethylene signal relay. In ETR1, the receptor histidine kinase is required for binding with EIN2. RTE1 can bind to and activate ETR1 independently of CTR1. There is evidence that cytochrome b5 aids or acts similarly to RTE1. Ethylene binding to the receptor disrupts the EIN2 phosphorylation. It does not cause any particular change in the structural features of the receptor-CTR1-EIN2 complex or stop the phosphorylation. In fact, at low levels of ethylene there is an increased number of receptor-CTR1-EIN2 complexes, which is then reduced as the ethylene level rises. The turnover process is not yet fully understood. The only consequence of ethylene binding is reduced phosphorylation of EIN2. Under such conditions EIN2 is activated and is cleaved to release EIN2-C from the membrane-bound EIN2-N portion. The enzyme that causes the cleavage is yet unknown. The role of EIN2-N is also unknown in A. thaliana. But in rice, its homologue OsEIN2-N (Os for Oryza sativa, the scientific name for rice) interacts with another protein, mao huzi 3 (MHZ3), a mutation of which gives rise to insensitivity to ethylene. EIN2-C is the main component that mediates the ethylene signal in the cell. It acts in two ways. In one, it binds the mRNAs that encode the EIN3-binding F-box proteins EBF1 and EBF2, to cause their degradation. In another, it enters the nucleus to bind with EIN2 nuclear associated protein 1 (ENAP1) to regulate the transcriptional and translational activities of EIN3 and the related EIL1 transcription factor, causing most of the ethylene responses. References Signal transduction Plant hormones Gaseous signaling molecules Plant growth regulators Plant physiology
Ethylene signaling pathway
[ "Chemistry", "Biology" ]
2,237
[ "Plant physiology", "Plants", "Signal transduction", "Gaseous signaling molecules", "Biochemistry", "Neurochemistry" ]
58,422,881
https://en.wikipedia.org/wiki/Anderson%E2%80%93Kadec%20theorem
In mathematics, in the areas of topology and functional analysis, the Anderson–Kadec theorem states that any two infinite-dimensional, separable Banach spaces, or, more generally, Fréchet spaces, are homeomorphic as topological spaces. The theorem was proved by Mikhail Kadec (1966) and Richard Davis Anderson. Statement Every infinite-dimensional, separable Fréchet space is homeomorphic to $\mathbb{R}^{\mathbb{N}}$, the Cartesian product of countably many copies of the real line $\mathbb{R}$. Preliminaries Kadec norm: A norm $\|\cdot\|$ on a normed linear space $X$ is called a Kadec norm with respect to a total subset $A \subseteq X^*$ of the dual space $X^*$ if for each sequence $(x_n)$ the following condition is satisfied: If $\lim_n x^*(x_n) = x^*(x)$ for every $x^* \in A$ and $\lim_n \|x_n\| = \|x\|$, then $\lim_n \|x_n - x\| = 0$. Eidelheit theorem: A Fréchet space is either isomorphic to a Banach space, or has a quotient space isomorphic to $\mathbb{R}^{\mathbb{N}}$. Kadec renorming theorem: Every separable Banach space $X$ admits a Kadec norm with respect to a countable total subset $A \subseteq X^*$. The new norm is equivalent to the original norm of $X$. The set $A$ can be taken to be any weak-star dense countable subset of the unit ball of $X^*$. Sketch of the proof In the argument below, $X$ denotes an infinite-dimensional separable Fréchet space and $\simeq$ the relation of topological equivalence (existence of a homeomorphism). A starting point of the proof of the Anderson–Kadec theorem is Kadec's proof that any infinite-dimensional separable Banach space is homeomorphic to $\mathbb{R}^{\mathbb{N}}$. From the Eidelheit theorem, it is enough to consider Fréchet spaces that are not isomorphic to a Banach space. In that case they have a quotient that is isomorphic to $\mathbb{R}^{\mathbb{N}}$. A result of Bartle-Graves-Michael proves that then $X \simeq Y \times \mathbb{R}^{\mathbb{N}}$ for some Fréchet space $Y$. On the other hand, $X$ is a closed subspace of a countable infinite product $P = \prod_{i=1}^{\infty} X_i$ of separable Banach spaces. The same result of Bartle-Graves-Michael applied to $P$ gives a homeomorphism $P \simeq X \times Z$ for some Fréchet space $Z$. From Kadec's result the countable product of infinite-dimensional separable Banach spaces is homeomorphic to $\mathbb{R}^{\mathbb{N}}$, so that $\mathbb{R}^{\mathbb{N}} \simeq X \times Z$. The proof of the Anderson–Kadec theorem then consists of the sequence of equivalences $\mathbb{R}^{\mathbb{N}} \simeq (X \times \mathbb{R}^{\mathbb{N}})^{\mathbb{N}} \simeq X^{\mathbb{N}} \times \mathbb{R}^{\mathbb{N}} \simeq X \times X^{\mathbb{N}} \times \mathbb{R}^{\mathbb{N}} \simeq X \times \mathbb{R}^{\mathbb{N}} \simeq X$. See also Notes References Topological vector spaces Theorems in functional analysis Theorems in topology
Anderson–Kadec theorem
[ "Mathematics" ]
465
[ "Theorems in mathematical analysis", "Vector spaces", "Topological vector spaces", "Space (mathematics)", "Theorems in topology", "Theorems in functional analysis", "Topology", "Mathematical problems", "Mathematical theorems" ]
58,431,337
https://en.wikipedia.org/wiki/HAT%20transposon
hAT transposons are a superfamily of DNA transposons, or Class II transposable elements, that are common in the genomes of plants, animals, and fungi. Nomenclature and classification Superfamilies are identified by shared DNA sequence and by the ability to respond to the same transposase. Common features of hAT transposons include a size of 2.5–5 kilobases, short terminal inverted repeats, and short flanking target site duplications generated during the transposition process. The hAT superfamily's name derives from three of its members: the hobo element from Drosophila melanogaster, the Activator or Ac element from Zea mays, and the Tam3 element from Antirrhinum majus. On the basis of bioinformatic analysis, the superfamily has been divided into at least two clusters defined by their phylogenetic relationships: the Ac family and the Buster family. More recently, a third group called Tip has been described. Family members The hAT transposon superfamily includes the first transposon discovered, Ac from Zea mays (maize), first reported by Barbara McClintock. McClintock was awarded the Nobel Prize in Physiology or Medicine in 1983 for this discovery. The family also includes a subgroup known as space invaders or SPIN elements, which have very high copy numbers in some genomes and which are among the most efficient known transposons. Although no extant active example is known, laboratory-generated consensus sequences of active SPIN elements are able to generate high copy numbers when introduced to cells from a wide range of species. Distribution hAT transposons are widely distributed across eukaryotic genomes, but are not active in all organisms. Inactive hAT transposon sequences are present in mammal genomes, including the human genome; they are among the transposon families believed to have been present in the ancestral vertebrate genome. Among mammals, the genome of the little brown bat Myotis lucifugus is notable for its relatively high and recently acquired number of inactive hAT transposons. The distribution of SPIN elements is patchy and does not relate well to known phylogenetic relationships, prompting suggestions that these elements may have spread through horizontal gene transfer. Domestication Transposons are said to be exapted or "domesticated" when they have acquired functional roles in the host genome. Several sequences evolutionarily related to the hAT family have been exapted in diverse organisms, including Homo sapiens. An example is the ZBED gene family, which encodes a group of zinc finger-containing regulatory proteins. References Mobile genetic elements
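The structural hallmarks just described, short terminal inverted repeats (TIRs) bracketing the element and a short target site duplication (TSD) on each flank, are what sequence-annotation tools look for. The sketch below is a minimal illustration, not any published tool: the function names, sequences, and length defaults are invented, and real searches tolerate mismatches and scan whole genomes.

```python
def revcomp(seq: str) -> str:
    """Reverse complement of a DNA string (A/C/G/T only)."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq.upper()))

def has_hat_like_structure(element: str, flank5: str, flank3: str,
                           tir_len: int = 12, tsd_len: int = 8) -> bool:
    """Crude structural test for a DNA-transposon candidate.

    Checks that the element starts and ends with perfect terminal
    inverted repeats (TIRs) and that identical target site duplications
    (TSDs) sit immediately on both flanks. hAT elements typically carry
    short TIRs and 8-bp TSDs, hence the defaults.
    """
    tir_ok = element[:tir_len] == revcomp(element[-tir_len:])
    tsd_ok = flank5[-tsd_len:] == flank3[:tsd_len]
    return tir_ok and tsd_ok

# Toy example: build an element with matching TIRs, flanked by an 8-bp TSD.
tir = "CAGGGATGAAAA"                          # invented 12-bp TIR
element = tir + "ATGC" * 100 + revcomp(tir)   # real elements run 2.5-5 kb
tsd = "ACGTACGT"                              # invented 8-bp duplication
print(has_hat_like_structure(element, "GATTACA" + tsd, tsd + "GATTACA"))  # True
```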
HAT transposon
[ "Biology" ]
527
[ "Molecular genetics", "Mobile genetic elements" ]
58,432,047
https://en.wikipedia.org/wiki/Particle%20chauvinism
Particle chauvinism is the term used by British astrophysicist Martin Rees to describe the (allegedly erroneous) assumption that what we think of as normal matter – atoms and their constituents such as quarks and electrons, as opposed to dark matter or other forms of matter – is the predominant form of matter in the universe, rather than a rare phenomenon. Dominance of dark matter With the growing recognition in the late 20th century of the presence of dark matter in the universe, ordinary baryonic matter has come to be seen as something of a cosmic afterthought. As J.D. Barrow put it: "This would be the final Copernican twist in our status in the material universe. Not only are we not at the center of the universe: We are not even made of the predominant form of matter." In the 21st century the estimated share of baryonic matter in the total mass-energy of the universe was downgraded further, to perhaps as low as 1 percent, extending what has been called the demise of particle chauvinism, before being revised upward to some 5 percent of the contents of the universe. See also Anthropic principle Carbon chauvinism Mediocrity principle References External links Astronomical hypotheses Chauvinism Exceptionalism Dark matter
Particle chauvinism
[ "Physics", "Astronomy" ]
265
[ "Dark matter", "Astronomical hypotheses", "Unsolved problems in astronomy", "Concepts in astronomy", "Unsolved problems in physics", "Astronomical controversies", "Exotic matter", "Physics beyond the Standard Model", "Matter" ]
72,314,003
https://en.wikipedia.org/wiki/Magnetic%20chicane
A magnetic chicane, also called a bunch compressor, helps form dense bunches of electrons in a free-electron laser. A magnetic chicane makes electrons detour slightly from their otherwise straight path, and in that way is similar to a chicane on a road. It consists of four dipole magnets, giving electrons at the beginning of a bunch a longer path than electrons at the end of the bunch, thereby allowing the lagging electrons to catch up. Free-electron laser A free-electron laser depends upon a beam of tightly bunched electrons. Short bunches of electrons are produced by a photoinjector, but they quickly elongate: electrons have negative charge and little mass, so mutual repulsion makes the bunch expand. As the bunch is accelerated, the electrons gain relativistic mass and quickly approach the speed of light. After that, electrons at the end of the bunch cannot go any faster to catch up with electrons at the beginning of the bunch. Chirp This problem is solved by adjusting the phase of the driving electric field to add energy and mass more strongly to electrons at the trailing end of the bunch. This is called a negative energy chirp, meaning the energy decreases along the direction of beam travel. Because the beam is traveling at almost the speed of light, the trailing electrons gain mass rather than velocity. This results in a correlation between mass and position within the bunch. Chicane The chicane gives lagging electrons time to catch up. More massive electrons are deflected less by the magnetic field than lighter electrons, and therefore take a shorter path through the chicane, resulting in a shorter bunch. A chicane consists of four dipole magnets with the following roles: the first deflects the beam slightly away from the central axis of the accelerator, with lighter electrons deflected more than more massive ones; the second deflects the beam in the opposite direction, making it parallel to the central axis but offset from it, with the offset greatest for the lighter electrons; the third deflects the beam back towards the central axis; and the fourth deflects it once more so that it again travels along the central axis. Limitations In practice, bunch compression cannot be done in a single step. To avoid beam emittance blowup, compression is usually carried out in stages using two chicanes. References External links RF and Space Charge Emittance in Guns, a basic definition of emittance Space Charge Induced Beam Emittance Growth and Halo Formation Electron beam Free-electron lasers Accelerator physics
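The compression described above is usually quantified by the chicane's momentum compaction R56. A standard small-angle approximation from accelerator-physics texts (not stated in this article) is R56 ≈ -2·θ²·(ΔL + (2/3)·Lb), where θ is the bend angle per dipole, ΔL the drift between the first and second (and third and fourth) dipoles, and Lb the dipole length; a bunch with linear energy chirp h = dδ/dz and initial rms length σz then emerges with length |1 + h·R56|·σz. The sketch below uses invented, purely illustrative parameter values.

```python
def chicane_r56(theta: float, drift: float, l_dipole: float) -> float:
    """Momentum compaction R56 [m] of a symmetric 4-dipole chicane in the
    small-angle approximation: R56 ~ -2*theta^2*(drift + 2/3*l_dipole).
    Negative R56 means higher-energy electrons take the shorter path."""
    return -2.0 * theta**2 * (drift + (2.0 / 3.0) * l_dipole)

def compressed_length(sigma_z: float, chirp_h: float, r56: float) -> float:
    """Linear estimate of the rms bunch length after the chicane;
    chirp_h = d(delta)/dz, and full compression occurs at h*R56 = -1."""
    return abs(1.0 + chirp_h * r56) * sigma_z

# Illustrative, invented machine parameters:
theta = 0.05        # bend angle per dipole [rad]
drift = 2.0         # drift between dipoles 1-2 and 3-4 [m]
l_dipole = 0.5      # magnetic length of each dipole [m]
sigma_z0 = 1.0e-3   # incoming rms bunch length [m]

r56 = chicane_r56(theta, drift, l_dipole)   # about -0.0117 m
h = -0.9 / r56                              # chirp chosen for ~10x compression
print(f"R56 = {r56 * 1e3:.2f} mm")
print(f"chirp h = {h:.1f} 1/m")
print(f"final bunch length = {compressed_length(sigma_z0, h, r56) * 1e3:.3f} mm")
```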
Magnetic chicane
[ "Physics", "Chemistry" ]
508
[ "Electron", "Applied and interdisciplinary physics", "Electron beam", "Experimental physics", "Particle physics", "Particle physics stubs", "Accelerator physics" ]
72,315,475
https://en.wikipedia.org/wiki/Karmitoxin
Karmitoxin is an amine-containing polyhydroxy-polyene toxin isolated from Karlodinium armiger strain K-0668. It is structurally related to amphidinols, luteophanols, lingshuiols, carteraols, and karlotoxins. See also Prymnesin-1 Prymnesin-2 References Amines Heterocyclic compounds with 1 ring Oxygen heterocycles Phycotoxins Polyether toxins Primary alcohols Secondary alcohols
Karmitoxin
[ "Chemistry" ]
112
[ "Toxins by chemical classification", "Polyether toxins", "Functional groups", "Amines", "Bases (chemistry)" ]