id: int64 (39 to 79M)
url: string (length 31 to 227)
text: string (length 6 to 334k)
source: string (length 1 to 150)
categories: list (length 1 to 6)
token_count: int64 (3 to 71.8k)
subcategories: list (length 0 to 30)
59,856,294
https://en.wikipedia.org/wiki/Brake%20cleaner
Brake cleaner, often also called parts cleaner, is a mostly colorless cleaning agent, mainly used for cleaning the brake discs, the engine compartment and the underbody of motor vehicles. An important feature is that the brake cleaner leaves no residue after the solvents evaporate. Composition Chlorinated brake cleaners (often sold as non-flammable) use organochlorides like tetrachloroethylene and dichloromethane. Historically, 1,1,1-trichloroethane was used, sometimes together with tetrachloroethylene; it was phased out because of its ozone-depleting nature. Non-chlorinated brake cleaners use hydrocarbons as the main component, either a low-boiling aliphatic compound or a higher-boiling hydrocarbon mixture. Aromatics like benzene, toluene or xylene may also be used. The hydrocarbons used are sometimes made by hydrogenation of naphtha. The lipophilic liquids dissolve fat-soluble lubricants and oils. Some products also contain polar solvents such as ethanol, methanol, isopropanol, and acetone in order to dissolve non-lipophilic substances. Many formulations are incompatible with various materials, especially plastics. Use The main application of brake cleaners is the degreasing and cleaning of metal parts or metallic surfaces. They are used for removing oils, fats, resins, tar and dust, mainly in motor vehicles. About 10 million liters are consumed per year in Germany. Danger Brake cleaners contain toxic compounds and should be used only in well-ventilated areas or outdoors. Some are highly flammable and harmful to the environment, which also has to be considered during storage. Skin exposure to the solvent mixture may cause irritation and defatting injury. Brake cleaners decompose rubber and some types of plastics by removing binding components. The rubber appears unchanged at first; however, it becomes brittle, and after a few weeks to months cracks and fractures appear. Alternatives For frequent and industrial use, cleaning and degreasing may be replaced by supercritical carbon dioxide or dry ice blasting, which is abrasive. This requires a setup to apply the carbon dioxide. While the harmful vapors are eliminated, the CO2 must be ventilated. Applying the carbon dioxide causes electrostatic discharge from the expanding gas. The dust, including harmful brake dust, is not bound in a liquid. References Cleaning products Vehicle braking technologies
Brake cleaner
[ "Chemistry" ]
528
[ "Cleaning products", "Products of chemical industry" ]
59,856,308
https://en.wikipedia.org/wiki/Felix%20Behrend
Felix Adalbert Behrend (23 April 1911 – 27 May 1962) was a German mathematician of Jewish descent who escaped Nazi Germany and settled in Australia. His research interests included combinatorics, number theory, and topology. Behrend's theorem and Behrend sequences are named after him. Life Behrend was born on 23 April 1911 in Charlottenburg, a suburb of Berlin. He was one of four children of Dr. Felix W. Behrend, a politically liberal mathematics and physics teacher. Although of Jewish descent, their family was Lutheran. Behrend followed his father in studying both mathematics and physics, at both Humboldt University of Berlin and the University of Hamburg, and completed a doctorate in 1933 at Humboldt University. His dissertation, Über numeri abundantes [On abundant numbers], was supervised by Erhard Schmidt. With Adolf Hitler's rise to power in 1933, Behrend's father lost his job, and Behrend himself moved to Cambridge University in England to work with Harold Davenport and G. H. Hardy. After taking work with a life insurance company in Zürich in 1935 he was transferred to Prague, where he earned a habilitation at Charles University in 1938 while continuing to work as an actuary. He left Czechoslovakia in 1939, just before the war reached that country, and returned through Switzerland to England, but was deported on the HMT Dunera to Australia as an enemy alien in 1940. Although both Hardy and J. H. C. Whitehead intervened for an early release, he remained in the prison camps in Australia, teaching mathematics there to the other internees. After Thomas MacFarland Cherry added to the calls for his release, he gained his freedom in 1942 and began working at the University of Melbourne. He remained there for the rest of his career, and married a Hungarian dance teacher in 1945 in the Queen's College chapel; they had two children. Although his highest rank was associate professor, Bernhard Neumann writes that "he would have been made a (personal) professor" if not for his untimely death. He died of brain cancer on 27 May 1962 in Richmond, Victoria, a suburb of Melbourne. Contributions Behrend's work covered a wide range of topics, and often consisted of "a new approach to questions already deeply studied". He began his research career in number theory, publishing three papers by the age of 23. His doctoral work provided upper and lower bounds on the density of the abundant numbers. He also provided elementary bounds on the prime number theorem, before that problem was solved more completely by Paul Erdős and Atle Selberg in the late 1940s. He is known for his results in combinatorial number theory, and in particular for Behrend's theorem on the logarithmic density of sets of integers in which no member of the set is a multiple of any other, and for his construction of large Salem–Spencer sets of integers with no three-element arithmetic progression. Behrend sequences are sequences of integers whose multiples have density one; they are named for Behrend, who proved in 1948 that the sum of reciprocals of such a sequence must diverge. He wrote one paper in algebraic geometry, on the number of symmetric polynomials needed to construct a system of polynomials without nontrivial real solutions, several short papers on mathematical analysis, and an investigation of the properties of geometric shapes that are invariant under affine transformations. After moving to Melbourne his interests shifted to topology, first in the construction of polyhedral models of manifolds, and later in point-set topology.
He was also the author of a posthumously published children's book, Ulysses' Father (1962), consisting of a collection of bedtime stories linked through the Greek legend of Sisyphus. Selected publications References 1911 births 1962 deaths 20th-century German mathematicians 20th-century Australian mathematicians Australian people of German-Jewish descent Combinatorialists German number theorists Humboldt University of Berlin alumni Charles University alumni Academic staff of the University of Melbourne German emigrants to Australia
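The Salem–Spencer construction credited to Behrend above can be made concrete. The following Python sketch (not from the article; the parameter names and the base-2d digit encoding are illustrative choices) builds a set of integers with no three-term arithmetic progression by placing digit vectors on a sphere, which is the core idea of Behrend's construction.

from itertools import product

def behrend_set(d, n):
    # Group digit vectors in {0, ..., d-1}^n by squared Euclidean norm; each
    # group lies on a sphere, so x + z = 2*y forces x = y = z.  Encoding the
    # digits in base 2*d keeps addition carry-free, which transfers the
    # progression-free property to the encoded integers.
    spheres = {}
    for x in product(range(d), repeat=n):
        spheres.setdefault(sum(v * v for v in x), []).append(x)
    digits = max(spheres.values(), key=len)   # keep the largest sphere
    base = 2 * d
    result = set()
    for x in digits:
        value = 0
        for v in x:
            value = value * base + v
        result.add(value)
    return result

def has_3ap(s):
    # Brute-force check for a three-term arithmetic progression.
    items = sorted(s)
    return any(2 * b - a in s
               for i, a in enumerate(items) for b in items[i + 1:])

s = behrend_set(d=3, n=4)
print(len(s), "elements; 3-term AP found:", has_3ap(s))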
Felix Behrend
[ "Mathematics" ]
818
[ "Combinatorialists", "Combinatorics" ]
59,856,969
https://en.wikipedia.org/wiki/Auburn-Folsom%20South%20Unit
The Auburn-Folsom South Unit is a project associated with the Central Valley Project in California and is one of three units located on the American River in Northern California. The United States Bureau of Reclamation is in charge of the Central Valley Project, including this unit. The initial budget for this unit was 1.5 billion dollars. The unit includes a number of dams located on the American River, and work to divert and manage water in the area. The associated features of the Auburn-Folsom South Unit include the Folsom South Canal, which was designed to divert water from the Nimbus Dam along the American River near Sacramento in Northern California; the Auburn Dam, which was proposed to be built in the city of Auburn, California; the Sugar Pine Dam, located in Placer County; and the County Line Dam and associated features, whose construction was never initiated. The South Unit was approved by law in 1965, although actual construction of the projects did not begin until 1967. Some of the projects initially proposed as part of the unit were halted after construction began, or were never started at all. Significance The project's goals include meeting the need for water for irrigation, municipal uses, and industrial needs and demands in the area. The project also has significance for groundwater depletion in the area, as part of its original construction goals was to help provide a water supply in order to lessen the depletion of groundwater supplies. The water diverted through the Folsom South Canal is intended for a multitude of purposes, predominantly irrigation and municipal use, in accordance with the Central Valley Project. Not only was the constructed section of canal designed with human uses and support in mind, but the Auburn-Folsom South Unit was also planned with the goal of hydroelectric energy generation. As well as power, there were goals to provide protection to fish species as well as recreation. Construction The Sugar Pine Dam is the only proposed feature of the Auburn-Folsom South Unit that has been completed; this dam supplies fresh drinking water to the Foresthill, California area. Roughly 27 miles of the proposed 68.8 miles of the Folsom South Canal were built in the 1970s; however, further construction of the structure was halted. The Auburn Dam, the subject of considerable concern and controversy, was started but never fully constructed. Construction of the County Line Dam was never initiated. Environmental impact When this project was initially proposed, it met with a good amount of backlash and controversy. In the early 1980s, there was considerable discontent surrounding the project, as many thought that it was not economically justified. The main source of controversy surrounding the project was the Auburn Dam. The Auburn Dam raised many safety concerns because of its unusual design: the dam was to be high and relatively thin, built as an arch dam, and people were concerned about the unusual shape and curvature of the structure. The dam also raised concerns over how it would handle seismic activity. An earthquake in Oroville, California sparked concern over the Auburn Dam, and several scientific groups advised against the original design plans for the dam and suggested reconsidering the dam in general.
Not only was there concern over how the Auburn Dam would handle a potential seismic event, but the dam design also would have required a considerable amount of road relocation and bridge construction. The Foresthill Bridge was constructed as a result of road diversions caused by the construction of the Auburn Dam. The bridge was designed and constructed by the Bureau of Reclamation and is the second tallest bridge ever constructed by the bureau. As for the Folsom South Canal, although the structure was started, construction was eventually halted due to concerns over minimum flows of the American River with regard to fishery and recreational use. There was to be a relationship between the Auburn Dam and reservoir and the Folsom South Canal: the Auburn Reservoir was designed to supply the Auburn Power Plant for hydroelectric generation, and the water released from the Auburn Reservoir would then feed the Folsom South Canal. References Central Valley Project
Auburn-Folsom South Unit
[ "Engineering" ]
841
[ "Irrigation projects", "Central Valley Project" ]
59,857,641
https://en.wikipedia.org/wiki/CLOC
CLOC (an acronym derived from CoLOCation) was a first-generation general-purpose text analyzer program. It was produced at the University of Birmingham and could produce concordances as well as word lists and collocational analyses of text. First-generation concordancers were typically held on a mainframe computer and used at a single site; individual research teams would build their own concordancer and use it on the data they had access to locally, and any further analysis was done by separate programs. History CLOC was written by Alan Reed in Algol 68-R, which was available only on the ICT 1900 series of computers at that time. Perhaps because it was designed for use in a department of linguistics rather than by computer specialists, it had the distinction of having a comparatively simple user interface; it also had some useful features for studying collocations, or the co-occurrence of words. CLOC was used in the COBUILD project that was headed by Professor John Sinclair. Further reading References History of software
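As an illustration of the kind of collocational analysis a tool like CLOC performed, the short Python sketch below (hypothetical; this is not CLOC's actual algorithm or interface) counts how often pairs of words co-occur within a fixed window of a tokenized text.

from collections import Counter

def collocations(tokens, window=4):
    # Count how often each pair of words co-occurs within `window`
    # tokens of each other, a basic form of collocational analysis.
    pairs = Counter()
    for i, word in enumerate(tokens):
        for other in tokens[i + 1:i + 1 + window]:
            pairs[tuple(sorted((word, other)))] += 1
    return pairs

text = "the cat sat on the mat and the cat slept on the mat".split()
for (w1, w2), count in collocations(text).most_common(5):
    print(w1, w2, count)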
CLOC
[ "Technology" ]
204
[ "Computing stubs", "History of software", "Software stubs", "History of computing" ]
59,858,742
https://en.wikipedia.org/wiki/Ales%20Leonardis
Aleš Leonardis is a professor of computer and information science at the University of Birmingham and at the University of Ljubljana. He is also an adjunct professor at the Faculty of Computer Science of the Graz University of Technology, a former visiting researcher at the University of Pennsylvania, a former postdoc at TU Wien, and a former visiting professor at ETH Zurich. His research concerns computer vision, including object recognition, tracking, and image segmentation. He has also been one of the organizers of an annual series of Visual Object Tracking challenges. References External links 20th-century births Living people Academics of the University of Birmingham Academic staff of ETH Zurich Academic staff of the Graz University of Technology Academic staff of the University of Ljubljana University of Pennsylvania faculty Year of birth missing (living people) Place of birth missing (living people)
Ales Leonardis
[ "Technology" ]
162
[ "Computing stubs", "Computer specialist stubs" ]
59,859,338
https://en.wikipedia.org/wiki/Cascade%20Model%20of%20Relational%20Dissolution
The Cascade Model of Relational Dissolution (also known as Gottman's Four Horsemen) is a relational communications theory that proposes four critically negative behaviors that lead to the breakdown of marital and romantic relationships. The model is the work of psychological researcher John Gottman, a professor at the University of Washington and founder of The Gottman Institute, and his research partner, Robert W. Levenson. This theory focuses on the negative influence of verbal and nonverbal communication habits on marriages and other relationships. Gottman's model uses a metaphor that compares the four negative communication styles that lead to a relationship's breakdown to the biblical Four Horsemen of the Apocalypse, wherein each behavior, or horseman, compounds the problems of the previous one, leading to total breakdown of communication. Background Gottman's and Levenson's research focuses on differentiating failed and successful marriages and notes that nonverbal emotional displays progress in a linear pattern, creating an emotional and physical response that leads to withdrawal. Until the development of the model (1992-1994), little research had been conducted on specific interactive behaviors and processes that result in marital dissatisfaction, separation, and divorce. Gottman's and Levenson's research indicated that not all negative interactions, like anger, are predictive of relational separation and divorce, but it showed a strong correlation between the presence of contempt in a marriage and the couple's likelihood of divorce. Gottman's and Levenson's research notes that the "cascade toward relational dissolution" can be predicted by the regulation of couples' positive and negative interactions, with couples that regulate their positive-to-negative interactions significantly less likely to experience the cascade. This research has been furthered by looking at ways to intervene in the cascade, and its application to other types and models of relationships. Four Horsemen of Relational Apocalypse Gottman's and Levenson's Four Horsemen of the Apocalypse theory centers around the concept that the behaviors below work in a cascade model, in which one leads to the other, creating a continued environment of negativity and hostility. This creates marital dissatisfaction, leading to considerations of marital dissolution, separation, and permanent dissolution. Horseman One: criticism Criticism is the first indication of the Cascade Model and is an attack on the partner's character. Gottman defines criticism as a type of complaint that blames or attacks a partner's personality or character. Critical comments often materialize in chained comments and are communicated in broad, absolute statements like "you never" or "you always." Research indicates that non-regulated couples, or couples whose interactions trended more negative, engaged more frequently in criticism and were more likely to begin the Cascade of Dissolution. Gottman's and Levenson's research found that wives' criticism correlated with separation and possible dissolution, but this was not so with husbands. One possible way to avoid criticism is to build a culture in the marriage that allows vulnerability. This means that those in the marriage should feel safe enough to express their opinions and frustrations without fear of rejection. Criticism does not allow partners to be vulnerable with each other, and their relationship can quickly deteriorate as a result.
One may consider using more "I" statements and expressive language in order to overcome criticism. An example of an "I" statement is: "When I am feeling frustrated, I tend to become more irritable and begin to hyper-focus on your flaws to blame someone for my negative feelings." "I" statements allow a spouse to take responsibility for their feelings rather than blaming the other spouse for their perspective and emotional reactions. They build emotional intelligence and self-reflection, and help prevent cycles of criticism and defensiveness. Horseman Two: defensiveness Defensiveness is a reaction to pervasive criticism that often results in responding to criticism with more criticism, and sometimes contempt; it is the second level of the Cascade Model. Defensiveness is a protective behavior and is indicated by shifting blame and avoiding responsibility, often in an attempt to defend against the first two horsemen. Defensiveness stems from an internal response to protect one's pride and self-worth. The body may go into fight-or-flight mode to protect against a perceived threat in the defensive stage. Fowler and Dillow also characterize defensiveness as using counterattack behaviors such as whining, making negative assumptions about the other's feelings, and denials of responsibility. Gottman's and Levenson's research found defensiveness to be strongest among men. Horseman Three: contempt Contempt is the result of repetitive criticism and is driven by a lack of admiration and respect. It is the third level of the Cascade Model. Contempt is expressed verbally through mocking, sarcasm, and indignation, with an attempt to claim moral superiority over one's partner. It can also be indicated nonverbally, as with eye-rolling and scoffing. Gottman's and Levenson's research found contempt to be the strongest predictor of relational dissolution, and the strongest overall predictor for women. Horseman Four: stonewalling Stonewalling is the final phase of the model and is a reaction to the previous three behaviors. Stonewalling occurs when parties create mental and physical distance to avoid conflict by appearing busy, responding in grunts, and disengaging from the communication process. Gottman's and Levenson's research found it to be most common among men and a very challenging behavior to redirect once it becomes habitual. Gottman's research in predicting divorce Predicting divorce / couple separation and how divorce can be avoided Gottman and his team did more extensive research as a follow-up to this study, testing whether or not couples who exhibited these "horsemen" were more or less likely to divorce. In a longitudinal study, Gottman and his team were able to predict with 93% accuracy which couples would divorce, based on their observations. They found that those couples who ended up separating had the following attributes in their marriage: Harsh Startup: In arguments or disagreements, couples who participated in harsh startups were those who began an argument with great aggression, refused to see the other's point of view, or brought issues up at inappropriate times. The Four Horsemen: as above. Emotional Flooding: This condition occurs when one partner feels overwhelmed, and their brain begins to protect itself by shutting down. They physically and mentally cannot process any more of what the other is saying. This may lead the person who is not flooded to think the flooded person is not listening or does not care, when in fact their system has been overwhelmed.
This may occur when one partner brings up a controversial topic or points out many flaws in another in a short period of time. Body Language: Whether the couple is sending mixed messages, participating in a double-bind kind of thinking, or sending hostile nonverbal cues, destruction occurs. Repair Attempts that were not accepted: A repair attempt is anything that one partner tries in order to bring the relationship back under control. This could be de-escalation tactics, bringing up something on which you both stand on common ground, or even an inside joke. These attempts, when accepted and acted upon, encourage intimacy and affection in a marriage and allow the situation to de-escalate. Those who do not participate in this tactic will have a greater likelihood of an argument or fight escalating out of control. A Negative View of their marriage and their overall happiness together: Gottman found that those in the study who ended up divorcing or having low marital satisfaction thought about landmarks in their marriages as negative. The landmark moments that most people think of with fondness, such as their engagement, wedding, reception, birth of a child, etc., were almost all met with criticism from those in unhappy marriages. These people had trained their brains to believe that their partner had never met their needs, and that there had never been happiness in their relationship. Belligerence: Belligerent partners will sometimes try to provoke the other party with statements like "You think you're tough? Then do it!" Methodology and regulated vs. non-regulated couples Behavioral coding systems methodology Gottman and Levenson's primary research for this model, published in the 1990s, centered on utilizing a variety of measures, in combination, to study the conflict interactions among married couples. Gottman and Levenson used physiological information garnered from polygraphs, EKGs, and pulse monitoring, and behavioral information collected via surveys and video recordings. Information collected by video was coded using the Rapid Couples Interaction Scoring System (RCISS), the Specific Affect Coding System (SPAFF), and the Marital Coding Information System (MCIS). RCISS consists of a thirteen-point speaker behavior checklist and a nine-point listener checklist, which can be broken down into five positive and eight negative codes. The SPAFF is "a cultural informant coding system" which considers: verbal content, tone, and context; facial expression, movement, and gestures; and body movement. MCIS is the oldest and most widely used affect coding system, but is not as specific as others and is generally used in addition to other methods. Regulated and nonregulated couples Information obtained from the RCISS and SPAFF analyses led to the formation of the idea of regulated and non-regulated couples. Gottman and Levenson defined nonregulated couples as more prone to conflict-engaging behaviors, while regulated couples tend to engage in more constructive, positive communicative behaviors. It is noted that not all nonregulated couples exhibit all negative affective behaviors, nor do all regulated couples exhibit all positive affective behaviors. Gottman and Levenson proposed that maintaining marriage stability is not about the exclusion of negative behavior, but about maintaining a positive-to-negative comment ratio of around 5:1. The marital typology Gottman's research indicates that there are five types of marriages: three that are stable and avoid entering the Cascade Model, and two that are unstable.
All of the three stable couple types achieve a similar balance between positive and negative affect; however, this does not mean that negative interactions or communication is eliminated. Stable couple typologies Validators This couple type mixes moderate amounts of positive and negative affect. This model is the preferred model of marital counselors and is a more intimate approach focused on shared experiences; however, romance may disappear over time. These couples engage in reduced persuasion attempts and do not attempt to persuade until a third of the conflict has elapsed. Volatiles This couple type mixes high amounts of positive and negative affect. These marriages tend to be quite "romantic and passionate, but [have] the risk of dissolving into endless bickering." These couples engage in high levels of persuasion from the beginning of a conflict. Avoiders This couple type mixes small amounts of positive and negative affect. This type of marriage avoids the pain associated with conflict, but risks loneliness and emotional distance. These couples make very few, if any, attempts to persuade each other. Unstable couple typologies Hostile Hostile marriages often see the husband influence both positively and negatively, but the wife only influences by being positive. In general, "the wife is likely to seem quite aloof and detached to the husband, whereas he is likely to seem quite negative and excessively conflictual to her." Hostile-detached These relationships tend to display a significantly higher rate of contempt and defensiveness. Hostile-detached marriages see the husband influence both positively and negatively, but the wife only influences by being negative. In these cases, "the husband is likely to seem quite aloof and detached to the wife, whereas she is likely to seem quite negative and excessively conflictual to him." Mismatch theory This theory proposes that "hostile and hostile-detached couples simply fail to create a stable adaptation to marriage that is either volatile, validating, or avoiding." The belief is that marital instability arises from a couple's inability to accommodate one another's preferences and create one of the three types of marriage. Interventions and therapeutic strategies Proximal change interventions Gottman and Tabres' research on proximal change interventions attempts to interrupt the negative communication process by creating chances for positive influence to help alter relational dynamics and repair damage done by the cascade. Two interventions were implemented, a "compliments intervention" and a "criticize intervention", designed to increase positivity and negativity, respectively. Groups were randomly assigned one of the two intervention conditions or a control group. The research indicated that couples determined the effectiveness of the interventions, as many non-regulated couples who have entered the Cascade Model will "construe" interventions by coding them into criticisms and/or by communicating with contempt. The effectiveness of these interventions is contingent on the continued facilitation and monitoring of interventions by therapists. Avoidance and anxiety attachment Researchers Fowler and Dillow note that avoidance attachment, whereby an individual is reluctant to depend on others, can be predictive of defensiveness and stonewalling. Those with avoidance attachment may also struggle to regulate negative emotions and be prone to lashing out at partners.
Fowler and Dillow hypothesized that avoidance attachment would be predictive of self-reported criticism, contempt, and defensiveness; however, research findings indicated that avoidance attachment was only predictive of stonewalling. Fowler and Dillow noted that anxiety attachment, characterized by over-dependence, flooding, and fear of rejection, will also predict criticism, contempt, and defensiveness, as those who exhibit anxiety attachment tend to create self-fulfilling prophecies. Flooding Flooding occurs when strong negative emotions are present within exchanges between individuals. It causes individuals to feel overwhelmed and can lead to destructive communication such as name-calling and criticism. Oftentimes, individuals express that their partner's negative emotions come out of nowhere, and therefore they will do what they feel is necessary to retreat from the negativity. Individuals may begin to adopt behaviors that discourage effective communication, such as becoming defensive and attributing negative qualities to their partner's behavior. Further, marital satisfaction has been shown to decrease over time as couples become more aroused during conflict. This in turn causes a destructive loop of higher frequencies of flooding as well as an increase in self-isolation and destructive communication patterns. To combat flooding, couples could try to take breaks during conflict. Doing this has been shown to reduce heart rates, which in turn reduces negative behaviors. Another way to reduce flooding is to resolve conflicts through text-based or voice communication instead of face-to-face. This may allow individuals to regulate their emotions with more control. Gottman method couples therapy Research into Gottman method couples therapy has been of poor quality and is insufficient to consider the therapy as evidence-based, despite its popularity. Criticisms Gottman has been criticized for claiming that his Cascade Model can predict divorce with over 90% accuracy. Additionally, researcher Scott Stanley and his colleagues noted that Gottman's highly publicized research findings from 1998, which recommended significant shifts in focus and application for marital educators and therapists, including the de-emphasis of anger management and active listening, have several flaws. Among the concerns raised, the most significant are methodological, including Gottman and his fellow researchers not providing justification for the nonrandom selection of participants, not controlling for cultural impacts, and flaws in physiological impact analysis. Concerns were also raised about the methods for observational data collection and the ambiguity of the statistical tests used. Stanley's findings indicate that, while Gottman's findings are interesting, there are too many unexplained methods, and additional research is needed before adopting the overhaul Gottman suggested. References Human communication Relationship breakup
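A minimal sketch of the positive-to-negative ratio idea described above, in Python. This is illustrative only: coding interactions as +1/-1 and using the roughly 5:1 figure as a hard cutoff are simplifying assumptions, not Gottman and Levenson's actual scoring procedure.

def interaction_ratio(codes):
    # `codes` is a sequence of +1 (positive) and -1 (negative) coded
    # interactions, e.g. from an observational coding pass.
    positives = sum(1 for c in codes if c > 0)
    negatives = sum(1 for c in codes if c < 0)
    return positives / max(negatives, 1)

def classify(codes, threshold=5.0):
    # Couples at or above roughly 5 positive interactions per negative one
    # are treated here as "regulated", all others as "non-regulated".
    return "regulated" if interaction_ratio(codes) >= threshold else "non-regulated"

print(classify([+1] * 25 + [-1] * 5))   # regulated
print(classify([+1] * 10 + [-1] * 8))   # non-regulated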
Cascade Model of Relational Dissolution
[ "Biology" ]
3,172
[ "Human communication", "Behavior", "Human behavior" ]
59,859,530
https://en.wikipedia.org/wiki/EP%20matrix
In mathematics, an EP matrix (or range-Hermitian matrix or RPN matrix) is a square matrix A whose range is equal to the range of its conjugate transpose A*. Another equivalent characterization of EP matrices is that the range of A is orthogonal to the nullspace of A. Thus, EP matrices are also known as RPN (Range Perpendicular to Nullspace) matrices. EP matrices were introduced in 1950 by Hans Schwerdtfeger, and since then, many equivalent characterizations of EP matrices have been investigated in the literature. The EP abbreviation originally stands for Equal Principal, but it is widely believed to stand for Equal Projectors instead, since an equivalent characterization of EP matrices is the equality of the projectors AA+ and A+A. The range of any matrix A is perpendicular to the null-space of A*, but is not necessarily perpendicular to the null-space of A. When A is an EP matrix, the range of A is precisely perpendicular to the null-space of A. Properties An equivalent characterization of an EP matrix A is that A commutes with its Moore-Penrose inverse, that is, the projectors AA+ and A+A are equal. This is similar to the characterization of normal matrices, where A commutes with its conjugate transpose. As a corollary, nonsingular matrices are always EP matrices. The sum of EP matrices Ai is an EP matrix if the null-space of the sum is contained in the null-space of each matrix Ai. Being an EP matrix is a necessary condition for normality: A is normal if and only if A is an EP matrix and AA*A^2 = A^2A*A. When A is an EP matrix, the Moore-Penrose inverse of A is equal to the group inverse of A. A is an EP matrix if and only if the Moore-Penrose inverse of A is an EP matrix. Decomposition The spectral theorem states that a matrix is normal if and only if it is unitarily similar to a diagonal matrix. Weakening the normality condition to EP-ness, a similar statement is still valid. Precisely, a matrix A of rank r is an EP matrix if and only if it is unitarily similar to a core-nilpotent matrix, that is, A = U [C 0; 0 0] U* in block form, where U is a unitary matrix and C is an r x r nonsingular matrix. Note that if A is full rank, then A = UCU*. References Matrices
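One way to check the projector characterization numerically is sketched below in Python with NumPy (the example matrices and the tolerance are illustrative choices, not part of the article).

import numpy as np

def is_ep(a, tol=1e-10):
    # A is EP iff it commutes with its Moore-Penrose inverse,
    # i.e. the projectors A A+ and A+ A coincide.
    a = np.asarray(a, dtype=complex)
    a_pinv = np.linalg.pinv(a)
    return np.allclose(a @ a_pinv, a_pinv @ a, atol=tol)

# A nonsingular matrix is always EP; a nilpotent Jordan block is not,
# because its range differs from the range of its conjugate transpose.
print(is_ep([[2, 1], [0, 1]]))   # True
print(is_ep([[0, 1], [0, 0]]))   # False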
EP matrix
[ "Mathematics" ]
515
[ "Matrices (mathematics)", "Mathematical objects" ]
59,859,725
https://en.wikipedia.org/wiki/Rho%20Phoenicis
Rho Phoenicis (ρ Phoenicis) is a variable star in the constellation of Phoenix. Its distance from Earth has been determined from parallax measurements by the Gaia spacecraft. This star is classified as an F-type giant with a spectral type of F3III, and in the HR diagram it occupies the lower part of the instability strip. Rho Phoenicis is a Delta Scuti variable, changing its visual apparent magnitude between 5.17 and 5.27 with a period of around 0.1–0.2 days. The pulsation period seems to vary on a timescale of weeks, which indicates the star is not a simple radial pulsator. The analysis of the temperature variations over the pulsation cycles also supports this conclusion. It is not clear if the pulsation period really is variable, or if the light curve is simply the sum of multiple stable pulsation frequencies. Stellar evolution models indicate that Rho Phoenicis has about 2.1 times the solar mass and an age of around 1 billion years. The star shines with 36 times the solar luminosity and has an effective temperature of 6,900 K. Its metallicity is high, with an overall metal abundance 25% greater than the solar value. Gaia Data Release 2 discovered a star with the same proper motion and parallax as Rho Phoenicis. It has an apparent magnitude of 14.6 (G band) and is at a separation of 7.9 arcseconds. References Delta Scuti variables Phoenix (constellation) F-type giants Durchmusterung objects 004919 003949 0242
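As an illustrative back-of-the-envelope estimate (the radius is not a value given in the article), the quoted luminosity and effective temperature imply an approximate radius via the Stefan-Boltzmann law, taking T_sun ≈ 5772 K:

R/R_\odot = (L/L_\odot)^{1/2}\,(T_\odot/T_\mathrm{eff})^{2} \approx \sqrt{36}\,(5772/6900)^{2} \approx 4.2

so the star would be roughly four times the Sun's radius, consistent with its giant classification.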
Rho Phoenicis
[ "Astronomy" ]
346
[ "Phoenix (constellation)", "Constellations" ]
59,860,379
https://en.wikipedia.org/wiki/CH2OS
The molecular formula CH2OS (molar mass: 62.09 g/mol, exact mass: 61.9826 u) may refer to: Thioformic acid, a thiocarboxylic acid Sulfine (sulfinylmethane)
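A quick arithmetic check of the stated molar mass, using standard atomic weights (C 12.011, H 1.008, O 15.999, S 32.06):

M(\mathrm{CH_2OS}) = 12.011 + 2(1.008) + 15.999 + 32.06 \approx 62.09\ \mathrm{g/mol}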
CH2OS
[ "Chemistry" ]
68
[ "Isomerism", "Set index articles on molecular formulas" ]
59,861,198
https://en.wikipedia.org/wiki/Deferrisoma%20palaeochoriense
Deferrisoma palaeochoriense is a thermophilic, anaerobic and mixotrophic bacterium of the genus Deferrisoma which has been isolated from a hydrothermal vent in Palaeochori Bay, Greece. References Thermodesulfobacteriota Bacteria described in 2016
Deferrisoma palaeochoriense
[ "Biology" ]
68
[ "Bacteria stubs", "Bacteria" ]
59,861,213
https://en.wikipedia.org/wiki/Acxiom
Acxiom (pronounced "ax-ee-um") is a Conway, Arkansas-based database marketing company. The company collects, analyzes and sells customer and business information used for targeted advertising campaigns. The company was formed in 2018 when Acxiom Corporation (since renamed LiveRamp) spun off its Acxiom Marketing Services (AMS) division to the global advertising network Interpublic Group of Companies, which renamed it Acxiom LLC; the remaining portion of Acxiom Corporation became the publicly held company LiveRamp. The company has offices in the United States, Europe and Asia. History Foundation and early days Acxiom was founded in 1969 as Demographics, Inc. by Charles D. Ward in Conway, Arkansas. The company was initially involved in producing mailing lists using phonebooks and payroll processing. In 1980, the company changed its name to Conway Communications Exchange, and in 1983 it incorporated as CCX Network, Inc. and made its first public offering. In 1988 it became Acxiom Corporation. 1990s In November 1997, Acxiom acquired Buckley Dement, a provider of healthcare fulfillment and professional medical lists. In May 1998, Acxiom announced that it would acquire one of its competitors, May & Speh. 2000s In early 2004, Acxiom acquired part of Claritas, a European data provider. In 2005, Acxiom acquired Digital Impact for $140 million and integrated its digital and online services into its business. In April 2007, Acxiom acquired Kefta, a real-time personalization company, and incorporated its clients and technology into its Digital Impact division. It acquired EchoTarget, a dynamic banner retargeting company, to expand its interactive marketing offerings. On May 16, 2007, Acxiom Corporation agreed to a buyout by investment firms Silver Lake Partners and ValueAct Capital in an all-cash deal valued at $3 billion, including assuming about $756 million of debt. However, in October 2007, due to challenging credit market conditions, the companies decided to terminate the deal. Concurrently, Acxiom announced the retirement of Chairman Charles Morgan, pending the selection of his successor. On January 17, 2008, Acxiom named John Meyer (from Alcatel-Lucent) as new CEO and president. On July 11, 2008, Acxiom acquired ChoicePoint's database marketing solutions division. Acxiom acquired Quinetix LLC, an analytics and predictive modeling firm, in November. The terms of the deal were not disclosed. 2010s In 2010, Acxiom acquired part of GoDigital, a Brazilian direct marketing and data quality company. The company launched AbiliTec Digital, a web-based tool to match digital identities to traditional name and address data, such as those collected from loyalty programs. On July 27, 2011, Acxiom named Scott E. Howe as the company's chief executive officer and president. Acxiom announced the sale of its background screening business, Acxiom Information Security Services (AISS), to Sterling Infosystems, now SterlingBackcheck. In 2012, the New York Times reported that the company had the world's largest commercial database on consumers. On May 14, 2014, Acxiom Corporation announced that it had acquired LiveRamp, a data onboarding company, for $310 million. In July 2015, the company sold its IT outsourcing division, Acxiom IT Outsourcing (Acxiom ITO), for $190 million to Charlesbank Capital Partners and M/C Partners, and Acxiom ITO was subsequently rebranded as Ensono.
Acxiom acquired the Boston-based advanced advertising unit of Allant, a third-party data shop focused on advertising and marketing. In August 2016, Acxiom sold its marketing automation and email solution, Acxiom Impact, for $50 million to New York City-based marketing firm Zeta Interactive, now Zeta Global. In January 2017, Acxiom Corporation launched Audience Cloud, an anonymous targeting tool that allowed demographic segmentation of customers without revealing their actual identities. In May 2018, the company announced international expansion into Brazil, Netherlands, and Italy and released Global Data Navigator (GDN), a portal for identifying available data elements by country. In July, the Interpublic Group of Companies (IPG) announced the acquisition of Acxiom Corporation's Marketing Solutions (AMS) business for US$2.3 billion. The deal excluded the LiveRamp business. The sale of the Marketing Solutions business to IPG was completed in October. Following the sale, Acxiom Corporation officially changed its name to LiveRamp and its ticker symbol to RAMP, while the AMS business retained the Acxiom name under IPG ownership. In September 2018, Acxiom introduced the Unified Data Layer, an open data framework designed to enable clients to integrate and manage both online and offline data sources. 2020s In March 2021, Acxiom partnered with the Forge Institute to enhance cybersecurity education and workforce development opportunities, focusing on underserved communities to help them acquire valuable workforce skills. Later in July 2023, the collaboration expanded as the Forge Institute announced Acxiom's sponsorship of its Forge Fellowship program, further solidifying their longstanding partnership. In August 2021, Acxiom received the MarTech Breakthrough award, alongside Goldman Sachs, in the "Customer Experience Innovation" category for their collaboration on the personal loans marketplace using the Acxiom Real-Time Personalization Platform. In September, Great Place to Work and Fortune magazine honored Acxiom for the first time as one of the 2021 Best Workplaces for Women. In October 2021, Acxiom and Adobe announced the integration of Acxiom's Real Identity with the Adobe Experience Platform. In June 2022, Acxiom launched Match Multiplier, an application that facilitates data sharing by allowing brands to increase the reach of their data with additional match keys natively in the Snowflake Data Cloud. In August, Acxiom was recognized in the top 50 of Fast Company's 2022 list of 100 Best Workplaces for Innovators. In July 2023, Acxiom achieved a top score on the 2023 Disability Equality Index (DEI) and earned recognition as a Best Place to Work for Disability Inclusion. In October 2023, Acxiom won the Salesforce Partner Innovation Award in Transportation, Travel, and Hospitality for their work with Heathrow Airport.  In the same month, IPG, Acxiom's parent company, announced the launch of an identity resolution cloud application aimed at integrating brands’ cloud ecosystems powered by Acxiom. In January 2024, Acxiom introduced Acxiom Health as the latest development in its data-driven healthcare and pharmaceutical marketing practice. The new initiative has demonstrated a 25% increase in campaign conversions, positioning Acxiom Health favorably compared to competitors in head-to-head tests. Business Acxiom provides anonymized customer data to marketers, allowing the delivery of more relevant ads to consumers, with more effective measurement. 
Acxiom's client base in the United States consists primarily of companies in the financial, insurance and investment services, automotive, retail, telecommunications, healthcare, travel, entertainment, non-profit and government sectors. Products Audience Cloud identifies anonymous audience segments, and matches them with publications to display targeted ads when a member of the audience visits a particular site. Global Data Navigator service allows agencies to select global data elements by country. InfoBase is the company's brokered warehouse of consumer data. Personicx is a customer segmentation tool. Unified Data Layer (UDL) uses cloud architecture to help firms connect online and offline data, to better identify consumers' identities, with a goal of complying with GDPR privacy laws. Locations Acxiom's headquarters is located in Conway, Arkansas, United States. The company has an additional U.S. office in New York, New York. International offices are located in the United Kingdom, Germany, Poland, and China. References External links Business intelligence companies Data collection Technology companies of the United States Companies based in Arkansas Business services companies established in 2018 Technology companies established in 2018 2018 establishments in Arkansas Data brokers
Acxiom
[ "Technology" ]
1,702
[ "Data collection", "Data" ]
59,861,597
https://en.wikipedia.org/wiki/Induced%20matching
In graph theory, an induced matching or strong matching is a subset of the edges of an undirected graph that do not share any vertices (it is a matching) and such that these are the only edges connecting any two vertices which are endpoints of the matching edges (it is an induced subgraph). An induced matching can also be described as an independent set in the square of the line graph of the given graph. Strong coloring and neighborhoods The minimum number of induced matchings into which the edges of a graph can be partitioned is called its strong chromatic index, by analogy with the chromatic index of the graph, the minimum number of matchings into which its edges can be partitioned. It equals the chromatic number of the square of the line graph. Brooks' theorem, applied to the square of the line graph, shows that the strong chromatic index is at most quadratic in the maximum degree of the given graph, but better constant factors in the quadratic bound can be obtained by other methods. The Ruzsa–Szemerédi problem concerns the edge density of balanced bipartite graphs with linear strong chromatic index. Equivalently, it concerns the density of a different class of graphs, the locally linear graphs in which the neighborhood of every vertex is an induced matching. Neither of these types of graph can have a quadratic number of edges, but constructions are known for graphs of this type with nearly-quadratic numbers of edges. Computational complexity Finding an induced matching of size at least k is NP-complete (and thus, finding an induced matching of maximum size is NP-hard). It can be solved in polynomial time in chordal graphs, because the squares of line graphs of chordal graphs are perfect graphs. Moreover, it can be solved in linear time in chordal graphs. Unless an unexpected collapse in the polynomial hierarchy occurs, the largest induced matching cannot be approximated to within any approximation ratio in polynomial time. The problem is also W[1]-hard, meaning that even finding a small induced matching of a given size k is unlikely to have an algorithm significantly faster than the brute-force search approach of trying all k-tuples of edges. However, the problem of finding k vertices whose removal leaves an induced matching is fixed-parameter tractable. The problem can also be solved exactly on n-vertex graphs in exponential time, with algorithms known both for exponential space and for polynomial space. See also Induced path References Graph theory objects Matching (graph theory)
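The definition can be stated operationally as a small check. The Python sketch below (illustrative; the representation of edges as two-element sets is an arbitrary choice) verifies both conditions: the chosen edges form a matching, and the subgraph induced by their endpoints contains no other edges.

def is_induced_matching(graph_edges, matching):
    # graph_edges, matching: iterables of 2-element edges (pairs of vertices).
    graph_edges = {frozenset(e) for e in graph_edges}
    matching = {frozenset(e) for e in matching}
    endpoints = set().union(*matching) if matching else set()
    # Condition 1: no two matching edges share a vertex.
    if sum(len(e) for e in matching) != len(endpoints):
        return False
    # Condition 2: the subgraph induced by the matched endpoints
    # contains no edge other than the matching edges themselves.
    for e in graph_edges:
        if e <= endpoints and e not in matching:
            return False
    return True

# Path a-b-c-d-e-f: {a,b} and {e,f} form an induced matching,
# but {a,b} and {c,d} do not, because edge {b,c} joins their endpoints.
P6 = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e"), ("e", "f")]
print(is_induced_matching(P6, [("a", "b"), ("e", "f")]))  # True
print(is_induced_matching(P6, [("a", "b"), ("c", "d")]))  # False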
Induced matching
[ "Mathematics" ]
497
[ "Matching (graph theory)", "Mathematical relations", "Graph theory objects", "Graph theory" ]
59,862,590
https://en.wikipedia.org/wiki/Aerodynamics%20Research%20Institute
The Aerodynamische Versuchsanstalt (AVA) in Göttingen was one of the four predecessor organizations of the "German Research and Experimental Institute for Aerospace", founded in 1969, which in 1997 was renamed the German Aerospace Center (DLR). History The AVA was created in 1919 from the "Modellversuchsanstalt für Aerodynamik der Motorluftschiff-Studiengesellschaft", founded in Göttingen in 1907 by Ludwig Prandtl. In its founding years, it was still concerned with the development of the "best" form of airship. In 1908, the first wind tunnel was built in Göttingen for tests on models for aviation. In 1915, the Modellversuchsanstalt for aerodynamics was transferred to the Kaiser Wilhelm Society (KWG, founded in 1911) under the direction of Ludwig Prandtl; in 1919 it became the "Aerodynamic Research Institute of the Kaiser Wilhelm Society" (AVA), and in 1925 it was converted into the "Kaiser Wilhelm Institute for Flow Research linked to the Aerodynamic Research Institute". Ludwig Prandtl headed the institute until 1937; his successor was Albert Betz. In the same year a spin-off from the institute took place under the name "Aerodynamische Versuchsanstalt Göttingen e. V. in the Kaiser Wilhelm Society", in which the Reich Ministry of Aviation was involved. The part remaining after the spin-off was continued under the name "Kaiser Wilhelm Institute for Flow Research" and, from 1948, as the Max Planck Institute for Fluid Research (today the Max Planck Institute for Dynamics and Self-Organization). The AVA was confiscated in 1945 by the British (until 1948), re-opened in 1953 as the "Aerodynamic Research Institute Göttingen e. V. in the Max Planck Society", and fully integrated in 1956 as the "Aerodynamic Research Institute in the Max Planck Society". In 1969 came the spin-off from the Max Planck Society and the founding of the "German Research and Experimental Institute for Aerospace e. V.". Bibliography Aerodynamische Versuchsanstalt Göttingen e.V. in der Kaiser-Wilhelm-/Max-Planck-Gesellschaft (CPTS), in: Eckart Henning, Marion Kazemi: Handbuch zur Institutsgeschichte der Kaiser-Wilhelm-/ Max-Planck-Gesellschaft zur Förderung der Wissenschaften 1911–2011 – Daten und Quellen, Berlin 2016, 2 subvolumes, volume 1: Institute und Forschungsstellen A–L (online, PDF, 75 MB), pages 27–45 (Chronologie des Instituts) Sources Historie des DLR – Gesellschaft von Freunden des DLR e. V. 100 Jahre DLR – Homepage des DLR Archiv zur Geschichte der Max-Planck-Gesellschaft Former research institutes Aerodynamics Research institutes in Göttingen Max Planck Institutes Aviation history of Germany 1907 establishments in Germany 1969 disestablishments in West Germany History of Lower Saxony
Aerodynamics Research Institute
[ "Chemistry", "Engineering" ]
613
[ "Aerospace engineering", "Aerodynamics", "Fluid dynamics" ]
59,863,606
https://en.wikipedia.org/wiki/Sulfine
Sulfinylmethane or sulfine is an organic compound with the molecular formula H2CSO. It is the simplest sulfine. Sulfines are chemical compounds with the general structure XY=SO. IUPAC considers the term 'sulfine' obsolete, preferring instead thiocarbonyl S-oxide; despite this, the use of the term sulfine still predominates in the chemical literature. Substituted sulfines The parent sulfine H2CSO is very labile, whereas substituted derivatives are more conveniently isolated. One route is a variant of ketene synthesis, in which a sulfinyl halide reacts with a hindered base. For example, syn-propanethial-S-oxide, responsible for the eye-watering effect of cutting onions, is produced in this way from allicin. Another route is oxidation, as with diphenylsulfine from thiobenzophenone: (C6H5)2C=S + [O] → (C6H5)2C=S=O See also Sulfene - related functional group with the formula H2C=SO2 Ethenone Heterocumulene References Organosulfur compounds
Sulfine
[ "Chemistry" ]
249
[ "Organic compounds", "Organosulfur compounds" ]
59,864,839
https://en.wikipedia.org/wiki/Randomized%20benchmarking
Randomized benchmarking is an experimental method for measuring the average error rates of quantum computing hardware platforms. The protocol estimates the average error rates by implementing long sequences of randomly sampled quantum gate operations. Randomized benchmarking is the industry-standard protocol used by quantum hardware developers such as IBM and Google to test the performance of the quantum operations. The original theory of randomized benchmarking, proposed by Joseph Emerson and collaborators, considered the implementation of sequences of Haar-random operations, but this had several practical limitations. The now-standard protocol for randomized benchmarking (RB) relies on uniformly random Clifford operations, as proposed in 2006 by Dankert et al. as an application of the theory of unitary t-designs. In current usage, randomized benchmarking sometimes refers to the broader family of generalizations of the 2005 protocol involving different random gate sets that can identify various features of the strength and type of errors affecting the elementary quantum gate operations. Randomized benchmarking protocols are an important means of verifying and validating quantum operations and are also routinely used for the optimization of quantum control procedures. Overview Randomized benchmarking offers several key advantages over alternative approaches to error characterization. For example, the number of experimental procedures required for full characterization of errors (called tomography) grows exponentially with the number of quantum bits (called qubits). This makes tomographic methods impractical for even small systems of just 3 or 4 qubits. In contrast, randomized benchmarking protocols are the only known approaches to error characterization that scale efficiently as the number of qubits in the system increases. Thus RB can be applied in practice to characterize errors in arbitrarily large quantum processors. Additionally, in experimental quantum computing, procedures for state preparation and measurement (SPAM) are also error-prone, and thus quantum process tomography is unable to distinguish errors associated with gate operations from errors associated with SPAM. In contrast, RB protocols are robust to state-preparation and measurement errors. Randomized benchmarking protocols estimate key features of the errors that affect a set of quantum operations by examining how the observed fidelity of the final quantum state decreases as the length of the random sequence increases. If the set of operations satisfies certain mathematical properties, such as comprising a sequence of twirls with unitary two-designs, then the measured decay can be shown to be an invariant exponential with a rate fixed uniquely by features of the error model. History Randomized benchmarking was proposed in Scalable noise estimation with random unitary operators, where it was shown that long sequences of quantum gates sampled uniformly at random from the Haar measure on the group SU(d) would lead to an exponential decay at a rate that was uniquely fixed by the error model. Emerson, Alicki and Zyczkowski also showed, under the assumption of gate-independent errors, that the measured decay rate is directly related to an important figure of merit, the average gate fidelity, and is independent of the choice of initial state and any errors in the initial state, as well as the specific random sequences of quantum gates. This protocol applied for arbitrary dimension d and an arbitrary number n of qubits, where d = 2^n.
The SU(d) RB protocol had two important limitations that were overcome in a modified protocol proposed by Dankert et al., who proposed sampling the gate operations uniformly at random from any unitary two-design, such as the Clifford group. They proved that this would produce the same exponential decay rate as the random SU(d) version of the protocol proposed in Emerson et al. This follows from the observation that a random sequence of gates is equivalent to an independent sequence of twirls under that group, as had been conjectured and was later proven. This Clifford-group approach to randomized benchmarking is now the standard method for assessing error rates in quantum computers. A variation of this protocol was proposed by NIST in 2008 for the first experimental implementation of an RB-type protocol for single-qubit gates. However, the sampling of random gates in the NIST protocol was later proven not to reproduce any unitary two-design. The NIST RB protocol was later shown to also produce an exponential fidelity decay, albeit with a rate that depends on non-invariant features of the error model. In recent years a rigorous theoretical framework has been developed for Clifford-group RB protocols to show that they work reliably under very broad experimental conditions. In 2011 and 2012, Magesan et al. proved that the exponential decay rate is fully robust to arbitrary state preparation and measurement errors (SPAM). They also proved a connection between the average gate fidelity and the diamond norm metric of error that is relevant to the fault-tolerant threshold. They also provided evidence that the observed decay was exponential and related to the average gate fidelity even if the error model varied across the gate operations, so-called gate-dependent errors, which is the experimentally realistic situation. In 2018, Wallman and Dugas et al. showed that, despite earlier concerns, even under very strong gate-dependent errors the standard RB protocols produce an exponential decay at a rate that precisely measures the average gate fidelity of the experimentally relevant errors. The results of Wallman in particular proved that the RB error rate is so robust to gate-dependent error models that it provides an extremely sensitive tool for detecting non-Markovian errors. This follows because under a standard RB experiment only non-Markovian errors (including time-dependent Markovian errors) can produce a statistically significant deviation from an exponential decay. The standard RB protocol was first implemented for single-qubit gate operations in 2012 at Yale on a superconducting qubit. A variation of this standard protocol that is only defined for single-qubit operations was implemented by NIST in 2008 on a trapped ion. The first implementation of the standard RB protocol for two-qubit gates was performed in 2012 at NIST for a system of two trapped ions. References Quantum computing Computer hardware Benchmarks (computing)
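The exponential decay described above is typically extracted by fitting sequence-averaged survival probabilities. The Python sketch below is illustrative only: the sample data, initial guesses, fit bounds, and the single-qubit assumption d = 2 are my own, not values from the article. It fits F(m) = A p^m + B and converts the decay parameter p into an average error per gate r = (1 - p)(d - 1)/d.

import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, a, p, b):
    # Survival probability after a random Clifford sequence of length m.
    return a * p ** m + b

# Hypothetical sequence lengths and measured survival probabilities.
lengths = np.array([1, 2, 4, 8, 16, 32, 64, 128])
survival = np.array([0.99, 0.985, 0.975, 0.955, 0.92, 0.85, 0.74, 0.62])

popt, _ = curve_fit(rb_decay, lengths, survival,
                    p0=[0.5, 0.99, 0.5],
                    bounds=([0, 0, 0], [1, 1, 1]))
a, p, b = popt

d = 2                              # single-qubit case
r = (1 - p) * (d - 1) / d          # average error per Clifford
print(f"decay parameter p = {p:.4f}, average gate error r = {r:.2e}")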
Randomized benchmarking
[ "Technology", "Engineering" ]
1,222
[ "Computer engineering", "Computer hardware", "Computing comparisons", "Computer systems", "Computer performance", "Computer science", "Benchmarks (computing)", "Computers" ]
59,866,029
https://en.wikipedia.org/wiki/Orphan%20wells%20in%20Alberta%2C%20Canada
Orphan wells in Alberta, Canada are inactive oil or gas well sites that have no solvent owner that can be held legally or financially accountable for the decommissioning and reclamation obligations needed to ensure public safety and to address environmental liabilities. The 100% industry-funded Alberta Energy Regulator (AER), the sole regulator of the province's energy sector, manages licensing and enforcement related to the full lifecycle of oil and gas wells based on Alberta Environment Ministry requirements, including orphaned and abandoned wells. Oil and gas licensees are liable for the responsible and safe closure and clean-up of their oil and gas well sites under the Polluter Pays Principle (PPP) as a legal asset retirement obligation (ARO). An operator's liability for surface reclamation issues continues for 25 years following the issuance of a site reclamation certificate. There is also a lifelong liability in case of contamination. Once the current environmental legislation was in place, and the industry-led and industry-funded Orphan Wells Association (OWA) was established in 2002, some orphan wells became the OWA's responsibility. The OWA's inventory does not include legacy wells, which are more complex, time-intensive and costly to remediate. Following the 2014 downturn in the global price of oil, there was a "tsunami" of orphaned wells, facilities, and pipelines resulting from bankruptcies. As of March 2023, oil and gas companies owe rural municipalities $268 million in unpaid taxes; they owe landowners "tens of millions in unpaid lease payments". Original owners of what are now orphan wells "failed to fulfill their responsibility for costly end-of-life decommissioning and restoration work"; some sold these wells "strategically to insolvent operators". Landowners suffer both "environmental and economic consequences" of having these wells on their property. The OWA is underfunded by at least several hundred million dollars. The total estimate for cleaning up all existing sites is as much as $260 billion. Remediation is paid for through federal and provincial bailouts, a PPP violation.

Current overview 
In 2017, of the estimated 450,000 AER-registered oil and gas wells in the province, 150,000 were no longer producing but had not been remediated, and 92,000 were inactive with no set value. A 2021 Alberta Liabilities Disclosure Project report, "The Big Clean", which accessed AER data through a Freedom of Information (FOIP) request, estimated that Alberta had 300,000 unreclaimed wells and that it would cost from $40 to $70 billion to clean them up. This cost estimate does not include unreclaimed pipelines and pumping stations. The ALDP, an independent, nonpartisan research organization that provides "government-level data" on liabilities related to the oil and gas industry in Alberta, seeks solutions to what it describes as a "growing liabilities crisis". Unreclaimed wells are inactive wells that may be orphaned or legally licensed. Some unreclaimed wells may have been sealed off, while some have begun or completed remediation or reclamation of the well site. Under current AER regulations, it is legal for operators to leave well reclamation suspended indefinitely; this is not the case in some oil-producing states, such as North Dakota. Daryl Bennett, who represents landowners through both My Landman Group and Action Surface Rights "in disputes involving resource companies", said that there were 170,000 unreclaimed sites that require cleaning.
These unreclaimed wells were licensed by the province to oil and gas operators under Alberta's mineral rights provision, by which landowners hold only surface rights, not below-surface mineral rights, and have no right of refusal to prevent a well being drilled on their property. When the wells were producing, landowners benefitted, as operators pay an annual fee to lease and access the site. When operators go bankrupt or simply cannot be located, landowners are left with these aging wells with no recourse. By 2001, there were about 59,000 farms with at least one well on their property. By 2023, wells and pumpjacks dot the landscape across much of rural Alberta, with a well for almost every . In 2019, the Intergovernmental Panel on Climate Change (IPCC) warned that methane gas leakage from abandoned oil and gas wells represented a serious contributor to climate change and recommended the monitoring of these wells. Canada began to monitor methane leakage from abandoned wells at that time. It is uncertain how many of the roughly 300,000 inactive wells belong to the various classifications that describe oil and gas wells in Alberta. The oil and gas industry refers to wells that have been sealed as "abandoned", or, to be exact, "responsibly abandoned". The Narwhal says that this has led to "countless confusing headlines." There are numerous inactive well sites that are neither sealed nor officially designated as orphans by the AER. The annual inventory of the OWA does not include orphaned wells that the AER has identified but not transitioned into orphan status. The OWA is also not responsible for wells that were orphaned prior to its establishment in 2002. These wells are the responsibility of the regulatory and ministerial bodies, the AER and the Department of Energy. A 2021 Office of the Auditor General (OAG) report said that the regulator and ministry failed to prioritize sites and rejected responsibility for funding and cleaning well sites "even when evidence showed otherwise." The January 2022 Parliamentary Budget Officer (PBO) report on the cost of cleaning Canada's orphan oil and gas wells said that, in spite of the $1.7 billion in federal money provided during the pandemic, the cost of cleaning up orphan well sites nationally will require funding sources from industry, the provinces, and the federal government. By January 2022, Alberta had given about 50% of the allocated funding to viable energy companies, not companies "with an acute financial risk." CAPP says that most of its member companies pay taxes and clean up their own wells, and that the bankruptcies, one of the prime factors in the increase in orphan wells, were the result of the "lagging effects of this multi-year downturn for the oil and gas sector." The increase in the number of insolvencies and wells with no solvent owner was the result of the "largest oil price declines in modern history" from 2014 to 2016 and the longest decline in oil prices since the 1980s. As of 2022, most orphan wells were still not remediated. Farmers and ranchers suffer both "environmental and economic consequences", as the wells on their land, though licensed as active, are not. They face decades-long challenges, including land devaluation because of orphan wells, contamination, and loss of compensation from bankrupt companies. Insolvent operators owe landowners "tens of millions" in unpaid surface rights lease payments, and pass transfer costs, such as taxes, onto the landowners.
These delinquent operators owe municipalities $268 million in unpaid taxes; the Rural Municipalities of Alberta (RMA) says this represents an increase of 261% since 2018, despite the industry recording multi-billion-dollar profits. This will result in service cuts or tax increases at the municipal level. The level of unpaid taxes reported was "unprecedented" and presented a "unique challenge that has not been experienced by municipalities in Alberta before." Starting in March 2022, the industry experienced the "largest 23-month increase in energy prices since the 1973 oil price", following the Russian invasion of Ukraine. The increase in the price of oil resulted in record profits for Canadian oil companies, with some of them earning billions. In Alberta, Canadian Natural Resources, Cenovus Energy, Paramount Resources and Whitecap Resources earned a combined net income of approximately $5 billion in the fourth quarter of fiscal year 2022 alone. The OWA's funding is "grossly inadequate", short by at least several hundred million dollars. The total estimate for cleaning up all existing sites is as much as $260 billion. Taxpayers have paid the difference through federal and provincial bailouts in the form of grants and loans, a PPP violation. Concerns have been raised about the "murky practice" of offloading liabilities strategically to smaller, junior operators with insufficient funds that are likely to face future insolvency. This practice allows original owners of what are now orphan wells to avoid paying for the costly end-of-life decommissioning and restoration work for which they were responsible. Many of these wells become orphan wells. In this way, companies misuse the bankruptcy process to keep their valuable assets. The OWA says that as owners shirk their responsibility, the collective becomes responsible for the liabilities. Fifty percent of the delinquent wells are owned by small companies that have insufficient finances but are still able to produce and collect revenue. There is a direct correlation between these abandoned wells' environmental liabilities, unpaid taxes, and unpaid surface payments to landowners. The AER has the authority to enforce the rules. The RMA says that the AER "props" up small companies to avoid increasing the already concerning number of orphan wells, which results primarily from bankruptcies. The AER says that it is the RMA's role to collect taxes. A lawyer representing Action Surface Rights, a landowners' group, Christine Laing, called on the AER to use the power it has more often and in a timely fashion to "protect the public interest". The International Institute for Sustainable Development (IISD), which was established in 1990 during the premiership of Brian Mulroney as part of Canada's contribution to the 1992 Rio Earth Summit, drew attention to ways in which Canadian producers have failed on ESG issues.

Context 
The province's oldest inactive well has been dormant and unreclaimed since June 30, 1918. Some of the legacy sites were in operation in the 1920s or earlier, and have no known operator and no "financial security to cover the cleanup costs." Canada's oil production in 1946 was only of oil per day. By 1956, Alberta was producing per day. In 2012, the OWA had only 14 classified orphan wells; in 2013 there were 74; in 2014 there were 162; in 2015 there were 705; in 2024 there were 2,647. The average cost of reclamation/remediation (R/R) site services in 2015 was $180,000 per site, with costs ranging from $20,000 to $1 million. This work provides employment during downturns in the oil industry.
Prior to 2017, the energy industry paid $15 million a year into the Orphan Fund Levy. It doubled to $30 million in 2017. Between 1955 and 2017, approximately 580,000 wells were drilled in Canada, according to a Natural Resources Canada (NRC) report on wellbore integrity in the oil and gas industry in Canada. Of these, 400,000 were in Alberta, and the NRC anticipated that hundreds of thousands more would be drilled. The New Democratic Party (NDP) provincial government began consulting with the energy industry in 2017 to "introduce new rules that might limit a multi-billion-dollar public liability for reclaiming about 80,000 inactive wells around Alberta." The C.D. Howe Institute report estimated that the social cost of orphan wells, including those incurred by financially insolvent firms, could be more than $8.6 billion. In 2017, the federal government provided Alberta with a one-time grant of $30 million for "decommissioning and reclamation", which the province used to "cover the interest on a $235 million repayable loan". As of 2018, 37.8% of all inactive wells (89,217) had been inactive for up to 5 years; 29.8% had been inactive for 5 to 10 years; 16% from 10 to 15 years; 8.2% from 15 to 20 years; 3.9% from 20 to 25 years; and 4.5% had been inactive for over 25 years. Based on the OWA's 2018 data, at the then-current level of the orphan well inventory, the cost of well abandonment and reclamation of its inventory of orphan wells was expected to be around $611 million. However, this estimate of $611 million does not include potential orphan wells. In this context, potential candidates include wells owned by financially insolvent firms and nearly insolvent firms. The cost of abandonment and remediation per well can be estimated from reviewing the OWA's annual report; those costs are estimated to be $61,000 and $20,000 per well respectively. Of the 440,000 wells drilled in the province, approximately 22,000 were leaking as of 2019. As part of Alberta's Area-Based Closure (ABC) program, which represented 70% of the province's remediation activity, the oil and gas industry spent approximately $340 million on clean up. The federal government provided a grant of $1.2 billion through the COVID-19 Economic Response Plan announced in 2020. Using the federal grant, in 2020, the province funded the Alberta Site Rehabilitation Program (ASRP) with $1 million in provincial loans. The oil and gas industry paid almost the same amount on clean up ($363 million) as it did in 2019, in spite of the federal grant. As of 2020, there were about 97,000 inactive wells that were not properly closed and another 71,000 abandoned wells requiring clean-up, according to a University of Calgary Policy School article. The January 2022 Parliamentary Budget Officer (PBO) report on the cost of cleaning Canada's orphan oil and gas wells estimated that it would cost $361 million just to clean traditional orphan wells nationally, which does not include the cost of oil sands operations. More than 50% of Alberta's wells are not producing oil or gas, yet they have not been cleaned up. The OWA spent $161.5 million in the fiscal year 2021/2022 on decommissioning wells, pipelines, and facilities. In 2021/22, 42% of this total went towards well decommissioning, 30% towards site reclamation, 13% to facilities decommissioning, and 5% to pipeline decommissioning. The oil and gas sector provided 22% of the Government of Alberta's total estimated revenue for the fiscal year 2021/22. Since 2012, the Alberta government has received $66 billion from the sector.
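To give a sense of how the per-well figures above translate into aggregate liability, the following is a rough, order-of-magnitude Python sketch. It multiplies the cited per-well abandonment and reclamation estimates (about $61,000 and $20,000 respectively) by a well count; the well counts used in the example are placeholders chosen for illustration, and actual per-site costs in the article range from $20,000 to over $1 million, so this is only a back-of-the-envelope illustration, not an official estimate.

```python
# Rough, illustrative liability roll-up. Per-well costs come from the figures
# quoted above; the well counts are placeholder examples, not official numbers.
DECOMMISSION_COST_PER_WELL = 61_000   # CAD, cited estimate for abandonment
RECLAMATION_COST_PER_WELL = 20_000    # CAD, cited estimate for reclamation

def estimated_liability(num_wells: int) -> float:
    """Return a crude total clean-up liability in CAD for num_wells wells."""
    return num_wells * (DECOMMISSION_COST_PER_WELL + RECLAMATION_COST_PER_WELL)

# Example scales: a few thousand wells (roughly the OWA inventory scale)
# versus the ~170,000 unreclaimed sites cited by some landowner groups.
for wells in (3_000, 170_000):
    print(f"{wells:>7} wells -> ~${estimated_liability(wells) / 1e9:.1f} billion")
```

Even this crude arithmetic shows why estimates diverge so widely: the total depends heavily on which wells are counted and on per-site costs that vary by more than an order of magnitude.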
AER reported that, as of July 2022, there were about 170,000 abandoned wells in the province that are the responsibility of the licensees for all abandonment and reclamation costs. This represents 37% of all the wells in Alberta. The January 2022 Parliamentary Budget Officer (PBO) report on the cost of cleaning Canada's orphan oil and gas wells estimated that it would cost $361 million just to clean traditional orphan wells nationally, which does not include the cost of oil sands operations. By 2025, the forecast is $1.1 billion in clean-up costs for orphan wells. According to the AER, as of December 2022, of the 463,000 oil and gas wells in Alberta, 33.7% or 156,031 were active and 28% or 129,640 were reclaimed. There were 172,236 wells that were either abandoned or inactive: 19% or 88,433 were abandoned and 18.1% or 83,803 were inactive. There are thousands of oil and gas wells in municipalities and on landowners' properties that require plugging or reclamation and have no solvent owner, but have not yet transitioned to orphan status. They represent environmental and public safety liabilities but are not designated as orphaned by the AER and are not being addressed. Liabilities and taxes for these wells become the responsibility of municipalities and landowners, depending on where the wells are located. The 2023 OWA Inventory included only 3,114 orphan sites for which it was responsible.

Landowners and municipalities 
In contrast to Texas, where private property owners own both the mineral and surface rights, in Alberta landowners only own surface rights, and they do not have the right of refusal to prevent extraction companies from operating wells on their private property. Many of the orphan wells are on private property owned by ranchers, farmers, and others. By 2001, there were about 59,000 farm or ranch properties in the province that had at least one well on their property. While the AER and CAPP were pleased with the 2019 Supreme Court ruling on orphan wells, landowners with orphan wells left by defunct energy companies are concerned about the impact of the orphan wells on "crops, water and the environment". Bennett's group was invited by Alberta Energy Minister Peter Guthrie to a February 9, 2023 meeting on Premier Smith's proposed Liability Management Incentive Program. Bennett acknowledged that it was "somewhat regrettable" that taxpayers would fund the LMIP and that oil companies would see their royalties reduced. Based on a survey in early January 2019, the Rural Municipalities of Alberta (RMA) reported an "unprecedented" $81 million in unpaid property taxes from oil and gas companies that presented a "unique challenge that has not been experienced by municipalities in Alberta before." According to RMA president Paul McLauchlin, by 2023 the oil and gas industry owed $268 million in unpaid property taxes to towns and villages across Alberta. In response to their concerns in 2021, Dale Nally, then Associate Minister of Natural Gas, said that the solution to unpaid taxes lies in the province helping the "battered" oil and gas industry so it can "pay their municipal taxes and contribute to the economy."

Orphan Well Association 
The oil-industry-led Orphan Well Association (OWA) is an independent, non-profit organization that was established in 2002 with a mandate to protect public safety and to manage the "environmental risks of oil and gas properties that do not have a legally or financially responsible party that can be held to account."
The OWA is responsible for orphan wells, pipelines, and facilities. Representatives from the Alberta provincial government, the AER and Alberta Environment and Parks (AEP), the Canadian Association of Petroleum Producers (CAPP), and the Explorers and Producers Association of Canada (EPAC) serve on the OWA's board of directors. Brad Herald is the Chair of the OWA and is also a CAPP vice-president. The OWA manages the potential environmental and public safety risks that these orphaned properties represent. It also maintains an inventory, and oversees the decommissioning, remediation, and reclamation of these sites. The OWA's mandate includes the management of the "decommissioning (abandonment) of upstream oil and gas 'orphan' wells, pipelines, facilities and the remediation and reclamation of their associated sites." The OWA is also responsible for orphaned pipelines and orphan facilities, which now includes the newly established Large Facility Liability Management Program (LFP). The LFP operates with separate financing from orphan wells and has its own levy, set at $3 million a year. By 2022, its first project, decommissioning the Mazeppa Gas Plant pumping station facilities south of Calgary, was almost completed. Critics say that the annual Orphan Wells Levy, decided by the industry and set by the AER, is too low to cover the actual size of the problem.

OWA funding 
Because orphan wells are the entire responsibility of the oil and gas industry, the industry is also responsible for funding the OWA's operations. Industry funding for the OWA includes an annual Orphan Wells Levy prescribed by the AER, in consultation with the Canadian Association of Petroleum Producers (CAPP) and the Explorers and Producers Association of Canada (EPAC). CAPP's members produce about 80% of oil and gas in Canada. The levy is based on the "estimated cost of decommissioning and reclamation activities for the upcoming fiscal year". Prior to 2017, the energy industry paid $15 million a year into the fund. It doubled to $30 million in 2017. For the fiscal year 2021/2022 it was set at $65 million. Critics say that this levy is inadequate to cover the costs of the orphan wells clean up. As of 2022, the annual Orphan Fund Levy on oil and gas companies set by the industry-funded Alberta Energy Regulator (AER) is very low in relation to the OWA's responsibilities. The OWA levy is prescribed by the AER, in consultation with the Canadian Association of Petroleum Producers (CAPP) and the Explorers and Producers Association of Canada (EPAC), based on the "estimated cost of decommissioning and reclamation activities for the upcoming fiscal year". The 2021 levy was $65 million. OWA funding comes from a levy paid by the Alberta energy industry and collected by the AER. The OWA Inventory only includes orphan wells that have been designated as orphaned by the AER.

Federal and provincial subsidies 
Although the OWA is meant to be funded entirely by the oil and gas industry, it is also subsidized by the federal and provincial governments through grants and loans. The ballooning costs of decommissioning and reclamation were transferred from the oil and gas industry to the public, which many see as corporate welfare and a PPP violation. Federal grants include $30 million in 2017 and $1.2 billion in 2020. In 2017, the Government of Canada provided Alberta with a one-time grant of $30 million for "activities associated with decommissioning and reclamation".
In that year, the provincial government used the federal funds to "cover the interest on a $235 million repayable loan", which the oil and gas industry will repay over the next nine years, to support the OWA's efforts. As part of the federal government's COVID-19 Economic Response Plan, in April 2020, new financial aid was announced to help sustain employment in the energy sector; it also served to respond to environmental concerns in provinces with orphan and inactive oil and gas wells. Of the total $1.72 billion, up to $1.2 billion was available to the Alberta government and $200 million was made available in the form of a loan to the Orphan Wells Association. By January 2022, Alberta had given about 50% of the allocated funding to viable energy companies, not companies "with an acute financial risk." In 2020, Alberta established the Alberta Site Rehabilitation Program (ASRP), through which applicants could apply for grants of up to $30,000. The province also loaned the OWA $100 million for 1,000 environmental site assessments, as part of the process of decommissioning 800 to 1,000 orphan wells. The loan was intended to "create 500 direct and indirect jobs in the oil services sector." The loan was also intended to enable the OWA to double its activity in 2020 to nearly 2,000 wells. In early February 2023, the Premier of Alberta introduced a controversial $100-million Royalty Credit System as part of a new Liability Management Incentive Program (LMIP). If fully enacted, it would provide individual oil and gas companies with royalty credits for cleaning their own well sites that have been inactive for two decades or more. Alberta economist Andrew Leach said advocates for the oil industry were the original authors of the generous incentives-based royalty credit program, then called R-Star. According to a Scotiabank report, the incentive program "goes against the core capitalist principle that private companies should take full responsibility for the liabilities they willingly accept." Their analysts cautioned that the program could result in the public viewing the oil and gas sector negatively. The Scotiabank report said that "Canadian Natural Resources, Cenovus Energy, Paramount Resources and Whitecap Resources" would benefit most from the incentive program; their combined net income in fiscal year 2022 Q4 was almost $5 billion. Mount Royal University professor Duane Bratt said that there was an element of "corporate welfare" in the program, but there was also a "corruption element": in 2022, Smith, as a paid lobbyist for dozens of Calgary companies in the Alberta Enterprise Group, had promoted "$20 billion of R-Star credits" to then-energy minister Sonya Savage. The piloting of R-Star was in Minister Guthrie's mandate letter. Critics include "[e]nvironmentalists, economists, landowners and analysts within Alberta Energy." Some also question how this could apply to orphan wells as, by definition, there is no legal party to be incentivized. In a February 22 statement, Premier Smith said that Minister Guthrie's consultation process would take a number of months to complete. Calgary-based Canadian Natural Resources is one of the OWA's "largest single funders." Canadian Natural, which "produces more than one million barrels of oil and gas per day, is also one of the most active at cleaning up." Of the 1,293 wells abandoned in 2018, the company "submitted 1,012 reclamation certificates."
Alberta Energy Regulator 
In Alberta, the sole regulator of the province's energy development, from a project's first application, licensing and production through to its decommissioning, closure, and reclamation, is the 100% industry-funded corporation, the Alberta Energy Regulator (AER). The AER, which replaced the Energy Resources Conservation Board (ERCB) in 2013 following the passing of the Responsible Energy Development Act, operates at arm's length from the provincial government. AER regulations, based on the PPP, require energy companies to safely retire their inactive wells following provincial guidelines as a legal asset retirement obligation (ARO). This includes the proper plugging of inactive wells as well as performing remediation to return the site to the condition it was in prior to extraction operations. AER wellbore licensing statuses include abandoned, amended, cancelled, issued, re-entered, rec-certified, rec-exempt, rescinded, and suspended. Industry funding for the OWA includes an annual Orphan Wells Levy prescribed by the AER, in consultation with the Canadian Association of Petroleum Producers (CAPP) and the Explorers and Producers Association of Canada (EPAC). It is based on the "estimated cost of decommissioning and reclamation activities for the upcoming fiscal year". In March 2014, the AER took over Alberta Environment and Sustainable Resource Development's (ESRD) responsibilities to regulate reclamation and remediation activities resulting from fossil fuel extraction operations in Alberta. AER's Directive 079 provides guidelines and regulations regarding surface development in municipalities that have abandoned wells. This includes identification of wells through the Subdivision and Development Regulation (SDR) and requirements to identify abandoned wells located near developments. Directive 079 also requires oil and gas companies to locate and test wells. On February 6, 2017, the Alberta Energy Regulator and the Alberta government revised Directive 67, which sets the "eligibility requirements for obtaining or continuing to hold a licence for energy development" in Alberta. The new requirements came into force in response to concerns about the "growing number of licensees abandoning wells in an unprofitable market in bankruptcy proceedings." The changes gave the AER the authority to refuse or grant licences based on past behaviour, for example licensees with a "history, or a higher risk, of non-compliance". Previously, energy companies could get a licence by paying a small down payment as long as they had an address and some insurance. Revised compliance rules cover operational, pipeline, and emission issues. The 2021 report submitted by Alberta's Office of the Auditor General (OAG), Doug Wylie, examined the provincial government's environmental liabilities and the roles of the Alberta Energy Regulator (AER) and Environment and Parks, now called the Ministry of Environment and Protected Areas. Not all orphan and legacy wells are managed by the OWA. The regulator and the ministry also manage legacy and orphan wells that existed prior to the enactment of environmental legislation in 2000. The AER and the ministry, both under the jurisdiction of the government of Alberta, interpret their responsibilities differently. Each says the other has the responsibility to pay for and clean up oil and gas site liabilities. This resulted in neither the regulator nor the ministry taking responsibility "for sites, even when evidence showed otherwise."
There was a lack of information on funding sources for cleaning up sites, as well as a lack of up-to-date cost estimates and site prioritization. While AER regulatory staff maintained a list of legacy and orphan sites under its management, the list was not shared with the AER's own financial staff until it was uncovered through the OAG's audit. The list also included cost estimates with other similar sites. Keith Wilson, who has been working with landowners on orphan wells for three decades, told The Narwhal in a 2018 interview that, "The [regulator's] system is not achieving anything. If anything, it's creating a false sense of comfort that this problem is being addressed, and we know it's not."

Legal and regulatory considerations 
Oil and gas companies that have profited from Alberta's energy revenue are liable for the responsible and safe closure and clean-up of their oil and gas well sites under the Polluter Pays Principle (PPP), as clearly defined by the Supreme Court of Canada (SCC) in 2003. Alberta's Environmental Law Centre (ELC) said that while the polluter pays principle appears to be simple and straightforward, its evolution, operationalization, and application in Alberta is complex, as it is often politically charged. As of 2014, the EPEA "requires operators to conserve and reclaim specified land and get a reclamation certificate".

Polluter pays principle 
The 1999 Canadian Environmental Protection Act provided new powers for health and environmental protection. The Environmental Protection and Enhancement Act (EPEA), enacted in 2000, is the only statute in Alberta that references the polluter pays principle directly. The PPP is integrated in a variety of EPEA provisions, but the Act does not have "an express statutory commitment to the principle." In their 2003 decision in Imperial Oil v Quebec, the SCC described the Polluter Pays Principle, saying that, in order to "encourage sustainable development, that principle assigns polluters the responsibility for remedying contamination for which they are responsible and imposes on them the direct and immediate costs of pollution."

Suspended inactive wells 
As of 2020, there were 97,920 wells that were "licensed as temporarily suspended" in Alberta. They were labelled as "zombie wells" by the New York Times. Owners of inactive wells can choose to suspend operations indefinitely, without going through the costly process of decommissioning, remediation and reclamation. Many suspended wells are orphaned, or simply deserted. They may still have oil, but are rarely recertified. They are mainly on private property, whose landowners have limited recourse for having them removed, maintaining the site, or collecting surface rights access fees. Suspended wells have the highest risk of methane gas leaks, a risk which increases with the age of the well. Of all the inactive wells in Alberta, 29% (27,532 wells) have been suspended for more than a decade without being either "abandoned" or reactivated, as of March 25, 2021. There is no limit on the amount of time an inactive well can remain suspended under existing AER regulations, even though the danger of leakage increases with the age of the well. The lack of a time limit favours well owners, who can avoid paying $75,000 to $100,000 to reclaim a wellsite by paying only several thousand dollars a year in surface rights access fees and municipal taxes. It is a liability for the ranchers on whose lands the wells are left. These suspended, inactive, "zombie" wells have become a "hazardous threat to public safety."
As of 2016, North Dakota, which also has a large oil sector, had no unfunded orphan or inactive well liabilities. The state learned the "hard lessons" following previous boom and bust cycles. Starting in 2001, as the number of orphan wells began to increase, the state enacted a use-it-or-lose-it policy. Operators are required to either pump oil or plug their wells. After a year of nonproduction, the state's industrial commission "calls the company's bond, levies fines and plugs the well itself." In contrast, in Alberta, owners of inactive wells can choose to suspend operations indefinitely, without going through the costly process of decommissioning, remediation and reclamation. The AER has set no time limit requirement on suspended wells. A suspended well is only closed temporarily and may be reactivated. These wells may also be relicensed by the AER as "re-entered" if a new owner takes over the site. The risk of leakage is higher in a suspended inactive well than in a well that the AER calls "responsibly abandoned", that is, "rendered permanently incapable of flow and capped". Suspended inactive older wells present the highest risk of leakage. The risk of leakage in an inactive well increases with the amount of time it has been inactive without being properly closed down. Twenty-nine percent of all inactive wells in Alberta (27,532 wells) have been suspended for more than a decade without being either "abandoned" or reactivated, as of March 25, 2021. AER's Directive 020: Well Abandonment deals with suspended wells.

Bankruptcies and orphan wells 
Bankruptcies are prime factors in the increase in orphan wells. In the last decade, companies have become insolvent because of the "multi-year downturn for the oil and gas sector." This downturn, or bust, is part of the well-known cyclical nature of the oil and gas industry. Historian David Finch, whose research focused on the oil industry in Western Canada, said that Alberta experienced three significant downturns in the oil industry since it first became commercially viable: the first in the 1960s, the second in the 1980s, and the third that started with the collapse of global oil prices in 2014. Crude oil prices dropped to near ten-year lows. There were concerns that nearly a third of oil companies could go bankrupt. It was the longest oil price decline since the 1980s. That downturn resulted in what the CBC described in 2019 as a "tsunami" of orphaned oil and gas wells. By 2017, there were "3,127 wells that need[ed] to be plugged or abandoned, and a further 1,553 sites that have been abandoned but still need[ed] to be reclaimed". Since the downturn in the oil industry in 2014, many companies became insolvent and went into receivership while holding costly liabilities, including abandoned wells. The media brought attention to four cases where bankruptcies threatened to increase the inventory of orphan wells: Redwater Energy, Sequoia Resources, Trident Exploration, and Lexin Resources. Trident Exploration's receivership in May 2019 resulted in 3,650 wells that no longer had a solvent owner, and the loss of 94 jobs. Houston Oil & Gas entered receivership in November 2019, leaving behind 1,264 wells, 41 facilities and 251 pipelines. When Redwater entered receivership in 2015, ATB Financial, a provincial Crown corporation and financial service that lends money to oil and gas companies, including Redwater, went to court to recover its investments through Redwater's assets.
Redwater's bankruptcy trustee agreed that the banks and other creditors should collect first and that any environmental liabilities, such as orphan wells, should get the leftovers. When two lower courts agreed with the trustee in 2016 and 2017, both the OWA and the AER appealed the decisions to the Supreme Court of Canada. The SCC overturned the lower court decisions in Orphan Well Association v. Grant Thornton Ltd. (Redwater). This benchmark ruling led to changes in the way in which bankruptcies were handled when orphan wells were at stake. Prior to the 2019 SCC ruling, bankrupt energy companies were able to avoid paying for their abandoned wells. The SCC clarified that in the case of a bankruptcy, a company's first priority is to fulfil its environmental obligations, not as a debt, but as a duty to "citizens and communities."

Strategic packaging of costly liabilities with productive assets 
An International Institute for Sustainable Development (IISD) report said that many of the orphan well sites were sold "strategically to insolvent operators". These owners avoid PPP responsibilities, which include paying the hefty price of "end-of-life decommissioning and restoration work". Citing the case of the insolvent Bellatrix Exploration Ltd, which sold its unwanted wells to a numbered shell company, itself under threat of insolvency, a 2021 Financial Post article also said that this "murky practice" of misusing the bankruptcy process to get rid of liabilities while keeping valuable assets is raising concerns. The OWA says that as owners shirk their responsibility, the collective becomes responsible for the liabilities. Fifty percent of the delinquent wells are owned by small junior companies that have insufficient finances but are still able to produce and collect revenue. The RMA says that the AER "props" up small companies to avoid increasing the already concerning number of orphan wells, which results primarily from bankruptcies. The AER says that it is the RMA's role to collect taxes. A lawyer representing Action Surface Rights, a landowners' group, Christine Laing, called on the AER to use the power it has more often and in a timely fashion to "protect the public interest". Cases such as Lexin and Sequoia shed light on the complexity and opacity of ownership groups. They also drew attention to the way in which the AER licensed, and ATB Financial provided loans to, small limited liability companies that had insufficient financing. This allowed them to take on risky legacy wells, then declare bankruptcy and avoid paying for clean up. While Lexin is described in the media as a small Calgary-based limited liability company, its ownership group is the MFC Resource Partnership of fifty-one companies, including Canadian Natural Resources Ltd., ExxonMobil Canada, and Husky Energy, who are also responsible for Lexin's ARO. The AER had begun to receive concerns submitted by Lexin's Mazeppa Gas Plant employees in early 2016. These were forwarded to Occupational Health and Safety. In February 2017, in response to concerns about public safety, environmental and financial risks, the AER suspended Lexin's 1,600 or more licences in a rare enforcement action, the largest suspension the AER had ever made. According to the Post, fifty-one companies, including Canadian Natural Resources Ltd., ExxonMobil Canada, and Husky Energy, who own some of Lexin Resources Ltd. assets, may share the responsibility for Lexin's AROs. Lexin had said that it would not be able to maintain its sour gas wells as of mid-February.
The enforcement effectively placed Lexin in receivership, with these wells and the Mazeppa Gas Plant being added to the OWA's inventory of orphan wells. The AER sued Lexin to "recover money it is allegedly owed", saying that, "It is not open for a licensee, when times get tough, to transfer the burdens associated with holding AER licenses to the AER and/or the OWA." About 50% of the newly orphaned wells were the result of the 2017 transfer of 1,400 MFC/Lexin wells to the OWA. Two years after purchasing 2,300 well licences in 2016 from Perpetual Energy Inc., Sequoia Resources entered receivership. Its liabilities included 4,000 wells, pipelines and other facilities. Then veteran AER CEO Jim Ellis admitted in a public statement that the Sequoia "situation has exposed a gap in the system" that needed to be fixed. Sequoia's owners took Perpetual to court in an attempt to unwind the original 2016 sale, the first time such an attempt was made by a bankruptcy trustee in the province. Were it to succeed, it would increase risks to oil and gas companies buying and selling assets. In 2021, in response to concerns filed by the OWA, CNRL, Sunoco, and dozens of landowners, the AER, in an "unusual step", called for a public hearing on Shell's application to transfer hundreds of its oil well licences to a junior player with questionable finances. Landowners said that Shell was "shirking" its responsibilities by transferring dozens of wells to Pieridae, a small company that might not be able to cover the cost of cleaning up wells. In a 2020 BNN Bloomberg interview, a lawyer for landowners said that unlike CNRL and Sunoco, who take responsibility for their end-of-life wells, other major companies have been known to repackage liability wells with producing wells to sell to junior companies with limited financial means. Premier Smith compared this to the repackaged mortgages of 2008.

Environmental impacts of orphan wells 
Gas contamination from both active and orphaned wells, particularly by hydrogen sulfide and methane, is increasingly attracting attention from the Alberta government and the public. In addition to fugitive gas emissions, shallow aquifers can also be contaminated by gas, causing very serious issues. Groundwater contamination can be caused by casing leaks, such as integrity failures, to which orphaned wells are susceptible. However, because orphaned-well-induced groundwater contamination is not reported annually, statistical data were not available as of 2018. In comparison, gas emissions are more easily monitored and tracked by operators. Despite the lack of groundwater contamination data, gas emission data collected by the AER from the oil and gas industry may potentially reflect areas of groundwater contamination.

Fugitive gas emissions 
In the 1980s, Alberta's Energy Resources Conservation Board (ERCB), the AER's precursor, warned of the dangers of fugitive gas emissions in 4,500 out of the 90,000 oil and gas wells in the province. The ERCB raised concerns about the increase in orphan wells in the 1980s and about the significant risks of gas migration in terms of contaminating useable groundwater. The Energy Resources Conservation Board (ERCB) first identified surface casing vent flow (SCVF) and gas migration (GM) issues as a "significant concern" in the Lloydminster, Alberta area in the 1980s. The ERCB said that 5% of the approximately 90,000 wells in the province, or 4,500 wells, had SCVF and that 150 wells had GM. In the 1980s, GM concerns included an increase in the number of orphan wells and the "protection of useable groundwater."
In 2014, new regulation directed industry to "locate and test" any abandoned wells close to houses, airports, businesses and other developments that may pose a risk due to gas leakage. The resulting unpublished 33-page 2016 AER study showed that of the estimated 170,000 abandoned wells in Alberta, up to 3,400 posed a health risk. Of the 335 abandoned urban wells studied, 36 were leaking and nine of these posed a risk to those who lived nearby. Most were in Medicine Hat, a city that now owns and operates 4,000 gas wells. The city's history is tied to the natural gas boom in the early 1900s, which left many abandoned wells. In 2019, Intergovernmental Panel on Climate Change (IPCC) scientists warned that methane gas leakage from abandoned oil and gas wells was a serious contributing factor in climate change. The IPCC recommended that United Nations member countries track and publish methane leakage from abandoned oil and gas wells, as this represented a "global warming risk." By 2020, only Canada and the United States had begun to monitor methane leakage from abandoned wells. Over a period of two decades, in terms of global warming potential (GWP), methane has 80 times the "heat-trapping power" of carbon dioxide (CO2). According to the International Energy Agency (IEA)'s "Global Methane Tracker 2022", if all countries adopted well-known and effective methane reduction policy measures using existing technologies, global methane emissions from the oil and gas sector would decrease by 50%.

Surface casing vent flow (SCVF) and gas migration (GM) 
According to a 2015 conference presentation, the primary factors that should be considered in the evaluation of gas emissions from oil and gas wells are cementing, drilling orientation, geological conditions, well age, and reservoir depth. It reported on three types of wellbore leakage: 8% of leaks were related to surface casing vent flow (SCVF) and gas migration (GM), 2% were the result of failures in the casing, and 2% were due to failures in the abandonment plugs. SCVF and gas migration are two commonly recognized gas contamination mechanisms. SCVF is defined as the flow of gas and/or liquid along the surface casing/casing annulus. GM is defined as a flow of gas that is detectable at the outer surface of the outermost casing string, usually occurring at very shallow reservoir layers. According to recent statistics from the Alberta Energy Regulator (AER), a total of 617 billion m3 of methane was released into the atmosphere through venting (GM and SCVF) and flaring in Alberta during 2016, a figure that has been constantly decreasing since 2012. Of the total emitted gas, 81 million m3 originated from 9,972 unrepaired wells through GM and SCVF. Historically, there are 18,829 repaired and unrepaired wells reported with SCVF, GM, or both in Alberta, with 7.0% of them being inactive (9,530 wells suspended and orphaned). Wells with reported gas migration issues within Alberta were mapped by Bachu in 2017. Most of the thermal wells are orphaned oil or gas wells. A study from the International Journal of Greenhouse Gas Control concluded that gas migration mainly occurs within the central-northeastern part of the province, focusing around the Edmonton, Cold Lake, and Lloydminster areas. This observation is in agreement with the total gas flaring and venting conditions reported by the Alberta Energy Regulator (AER).
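To illustrate how a vented methane volume like those reported above translates into a carbon-dioxide-equivalent figure using the 20-year warming factor of roughly 80 quoted earlier, here is a small Python calculation. The methane density value and the example volume are assumptions chosen for illustration only; they are not AER data.

```python
# Hypothetical illustration: express a vented methane volume as CO2-equivalent
# using the ~80x 20-year global warming potential cited above. The density and
# the example volume are assumed values, not AER figures.
GWP_20_METHANE = 80            # 20-year global warming potential (dimensionless)
CH4_DENSITY_KG_PER_M3 = 0.68   # approx. density of methane near standard conditions

def methane_to_co2e_tonnes(volume_m3: float) -> float:
    """Convert a methane volume (m3) to tonnes of CO2-equivalent over 20 years."""
    mass_tonnes = volume_m3 * CH4_DENSITY_KG_PER_M3 / 1000.0
    return mass_tonnes * GWP_20_METHANE

# Example: 1 million cubic metres of vented methane.
print(f"{methane_to_co2e_tonnes(1_000_000):,.0f} t CO2e over a 20-year horizon")
```

The example shows why even modest leak volumes matter on a 20-year horizon: one million cubic metres of methane corresponds to tens of thousands of tonnes of CO2-equivalent under these assumptions.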
Potential geothermal conversion of orphan wells in Alberta 
After oil wells become depleted, their depth and size make them good candidates for extraction of geothermal energy. The prospect of geothermal conversion of depleted wells is attractive for several reasons, including potential recovery of abandonment costs, reduced consumption of non-renewable energy, and elimination of geothermal drilling costs, a significant component in geothermal projects. Several studies propose the conversion of existing wells into double-pipe heat exchangers through the installation of an insulated pipe inside the well for fluid circulation. Across the province, a general northwestern trend of increasing geothermal gradient is commonly recognized, with geothermal gradients ranging between 10 °C/km and 55 °C/km. The controlling factors for this broad geothermal range in Alberta are poorly understood. Two main reasons have been proposed to date to explain the observed patterns. The first is that the flow of formation waters is the main controlling factor of the geothermal field, with low geothermal gradient areas coinciding with water recharge areas (major upland areas) and high geothermal gradient areas with discharge areas (major lowland areas). The second is that differences in lithosphere thickness are responsible for the geothermal gradient distribution in Alberta, since conduction is the main mechanism transporting terrestrial heat from the basement to the surface. The bottom hole temperatures (BHT) of wells within reasonable proximity to Albertan communities are, at best, sufficient for heating. Communities on the western side of Alberta are more likely to benefit from geothermal conversion for direct heat purposes. Previous projects in the United States have shown that temperatures around 80 °C are feasible for direct heating of institutions and district heating. Another study also reported the use of a low-temperature geothermal well in China for heating within its proximity. There has been a recent push by the US Department of Energy to investigate the feasibility of Deep Direct-Use (DDU) of low-temperature geothermal resources.

Responses 
Critics blame the self-regulating nature of the energy industry and its close relationship with provincial regulatory bodies for the lack of enforcement of existing regulations, which allows oil and gas companies to avoid paying for the clean up. Others say it is a lack of political will to be more proactive in establishing public policies that would remediate the situation. Suggested solutions to the orphaned and abandoned well crisis include ensuring that there is enough funding attached to each wellsite for its cleanup, paid by those who profited from oil and gas revenue for decades, and enforcing a "use-it-or-lose-it" policy as is the case in the oil-producing state of North Dakota. On March 23, 2023, Alberta auditor general Doug Wylie published another report critical of the United Conservative Party's (UCP) neglect of orphan wells and other oil patch liabilities in the province. The report said that even though the number of inactive wells has increased every year since 2000, except for the year that the federal government provided $1.2 billion, operators still have no timelines for site remediation. Two major issues have not been dealt with: so-called "legacy sites" and "inadequate security collected". Current AER liability management processes to mitigate risks "associated with closure of oil and gas infrastructure" are not "well-designed" and are not effective.
Martin Olszynski, a University of Calgary resource law professor, said the audit shows that this is more than mere "bureaucratic incompetence"; it reveals that the AER has been "captured" by the oil and gas industry. He said the UCP has refused "to do anything that might cost the industry money". Kathleen Ganley, the official opposition energy critic, said that the UCP has failed to protect taxpayers and is damaging the reputation of Alberta's energy industry.

See also 
Orphaned and abandoned wells in the United States
Selected timeline related to orphan wells in Alberta

External links 
Definitions of Orphan, Inactive, Abandoned, Remediation, and Reclamation

References 

Athabasca oil sands
Abandoned buildings and structures
Oil wells
Petroleum technology
Environmental issues in Alberta
Orphan wells in Alberta, Canada
[ "Chemistry", "Engineering" ]
10,308
[ "Petroleum engineering", "Petroleum technology", "Oil wells" ]
59,866,656
https://en.wikipedia.org/wiki/Building%20consent%20authority
Building consent authorities (BCAs) are officials who enforce New Zealand's regulatory building control system. The New Zealand Building Act 2004 sets out a registration and accreditation scheme and technical reviews. The Act creates operational roles for BCAs.

Authorities 
The following are the approved building consent authorities listed on the MBIE Register. Note that the register lists 80 BCAs, but some of these are former territorial authorities that have been amalgamated into Auckland Council (such as Franklin District Council and North Shore City Council). Building consents on the Chatham Islands are contracted out to Wellington City Council, and large dams on the Chathams to Environment Canterbury. In addition to the regional and territorial authorities, Housing New Zealand decided in 2019 to establish Consentium, a national BCA within Kāinga Ora that is responsible for building consents for public housing (up to and including four storeys) across New Zealand that Kāinga Ora intends to retain. Consentium achieved accreditation in November 2020 and registration in March 2021.

Ashburton District Council
Auckland Council
Banks Peninsula District Council
Buller District Council
Carterton District Council
Central Hawkes Bay District Council
Central Otago District Council
Christchurch City Council
Clutha District Council
Consentium, a division of Kāinga Ora
Dunedin City Council
Environment Canterbury
Environment Waikato
Far North District Council
Gisborne District Council
Gore District Council
Grey District Council
Hamilton City Council
Hastings District Council
Hauraki District Council
Horowhenua District Council
Hurunui District Council
Hutt City Council
Invercargill City Council
Kaikōura District Council
Kaipara District Council
Kapiti Coast District Council
Kawerau District Council
MacKenzie District Council
Manawatu District Council
Marlborough District Council
Masterton District Council
Matamata-Piako District Council
Napier City Council
Nelson City Council
New Plymouth District Council
Northland District Council
Opotiki District Council
Otago Regional Council
Otorohanga District Council
Palmerston North City Council
Porirua City Council
Queenstown Lakes District Council
Rangitikei District Council
Rotorua District Council
Ruapehu District Council
Selwyn District Council
South Taranaki District Council
South Waikato District Council
South Wairarapa District Council
Southland District Council
Stratford District Council
Tararua District Council
Tasman District Council
Taupo District Council
Tauranga City Council
Thames-Coromandel District Council
Timaru District Council
Upper Hutt City Council
Waikato District Council
Waikato Regional Council
Waimakariri District Council
Waimate District Council
Waipa District Council
Wairoa District Council
Waitaki District Council
Waitomo District Council
Wellington City Council
Western Bay of Plenty District Council
Westland District Council
Whakatane District Council
Whanganui District Council
Whangarei District Council

References 

Local government in New Zealand
Urban planning
Building consent authority
[ "Engineering" ]
539
[ "Urban planning", "Architecture" ]
59,866,779
https://en.wikipedia.org/wiki/Signs%20Of%20LIfe%20Detector
Signs Of LIfe Detector (SOLID) is an analytical instrument under development to detect extraterrestrial life in the form of organic biosignatures obtained from a core drill during planetary exploration. The instrument is based on fluorescent immunoassays and it is being developed by the Spanish Astrobiology Center (CAB) in collaboration with the NASA Astrobiology Institute. SOLID is currently undergoing testing for use in astrobiology space missions that search for common biomolecules that may indicate the presence of extraterrestrial life, past or present. The system was validated in field tests, and engineers are looking into ways to refine the method and miniaturize the instrument further.

Science background 
Modern astrobiology inquiry has emphasized the search for water on Mars, chemical biosignatures in the permafrost, soil and rocks at the planet's surface, and even biomarker gases in the atmosphere that may give away the presence of past or present life. The detection of preserved organic molecules of unambiguous biological origin is fundamental for the confirmation of present or past life, but the 1976 Viking lander biological experiments failed to detect organics on Mars, and it is suspected that this was because of the combined effects of heat applied during analysis and the unexpected presence of oxidants such as perchlorates in the Martian soil. The recent discovery of near-surface ground ice on Mars supports arguments for the long-term preservation of biomolecules on Mars. SOLID demonstrated that antibodies are unaffected by acidity, heat and oxidants such as perchlorates, and it has emerged as a viable choice for an astrobiology mission directly searching for biosignatures. For a time, the ExoMars Rosalind Franklin rover was planned to carry a similar instrument called the Life Marker Chip.

Instrument 
SOLID was designed for automatic in situ detection and identification of substances from liquid and crushed samples under the conditions of outer space. The system uses hundreds of carefully selected antibodies to detect lipids, proteins, polysaccharides, and nucleic acids. These are complex biological polymers that could only be synthesized by life forms, and are therefore strong indicators (biosignatures) of past or present life. SOLID consists of two separate functional units: a Sample Preparation Unit (SPU) for extractions by ultrasonication, and a Sample Analysis Unit (SAU) for fluorescent immunoassays. The antibody microarrays are separated into hundreds of small compartments inside a biochip only a few square centimeters in size. The SOLID instrument is able to perform both "sandwich" and competitive immunoassays using hundreds of well-characterized and highly specific antibodies. The technique called "sandwich immunoassay" is a non-competitive immunoassay in which the analyte (the compound of interest in the unknown sample) is captured by an immobilized antibody, and then a labeled antibody is bound to the analyte to reveal its presence. In other words, the "sandwich" quantifies antigens (i.e. biomolecules) between two layers of antibodies (a capture and a detection antibody). For the competitive assay technique, unlabeled analyte displaces bound labelled analyte, which is then detected or measured. An optical system is set up so that a laser beam excites the fluorochrome label and a CCD detector captures an image of the microarray that can be measured.
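As a rough illustration of how fluorescence readouts from an immunoassay microarray are commonly turned into concentration estimates, the following Python sketch fits a four-parameter logistic (4PL) calibration curve to made-up standards and then inverts it for an unknown spot. The numbers, parameter names, and the choice of a 4PL model are illustrative assumptions about generic immunoassay practice; they do not describe SOLID's actual data pipeline.

```python
# Illustrative only: a generic four-parameter logistic (4PL) calibration for a
# fluorescence immunoassay. The standards below are invented numbers, not SOLID data.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ec50, hill):
    """4PL curve: fluorescence signal as a function of analyte concentration."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** hill)

# Hypothetical calibration standards (concentration in ng/mL, mean spot intensity).
std_conc = np.array([0.1, 0.5, 1, 5, 10, 50, 100])
std_signal = np.array([120, 300, 520, 1600, 2400, 3400, 3700])

params, _ = curve_fit(four_pl, std_conc, std_signal, p0=[100, 4000, 5, 1])
bottom, top, ec50, hill = params

def invert_4pl(signal):
    """Estimate concentration (ng/mL) from a measured fluorescence signal."""
    return ec50 * ((top - bottom) / (signal - bottom) - 1.0) ** (-1.0 / hill)

print(f"unknown spot at signal 2000 -> ~{invert_4pl(2000.0):.1f} ng/mL")
```

The on-chip control compartments described in the next paragraph play the role of the calibration standards in this sketch: known samples anchor the curve so that unknown spots can be read against it.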
The instrument is able to detect a broad range of compound sizes, from amino acids, peptides and proteins to whole cells and spores, with sensitivities of 1–2 ppb (ng/mL) for biomolecules and 10⁴ to 10³ spores per milliliter. Some compartments in the microarray are reserved for samples of known nature and concentration, which are used as controls for reference and comparison. The SOLID instrument concept avoids the high-temperature treatments of other techniques that may destroy organic matter in the presence of Martian oxidants such as perchlorates. Testing A field prototype of SOLID was first tested in 2005 in a simulated Mars drilling expedition called MARTE (Mars Analog Rio Tinto Experiment), where the researchers tested a drill at depth, sample-handling systems, and immunoassays relevant to the search for life in the Martian subsurface. MARTE was funded by the NASA Astrobiology Science and Technology for Exploring Planets (ASTEP) program. Using the sample cores, SOLID successfully detected several biological polymers in extreme environments in different parts of the world, including a deep South African mine, Antarctica's McMurdo Dry Valleys, Yellowstone, Iceland, the Atacama Desert in Chile, and the acidic water of Rio Tinto. Extracts obtained from Mars analogue sites on Earth were added to various perchlorate concentrations at −20 °C for 45 days and then the samples were analyzed with SOLID. The results showed no interference from acidity or from the presence of 50 mM perchlorate, a concentration 20 times higher than that found at the Phoenix landing site. SOLID demonstrated that the chosen antibodies are unaffected by acidity, heat and oxidants such as perchlorates, and it has emerged as a viable choice for an astrobiology mission directly searching for biosignatures. In 2018, another field test took place in the Atacama Desert with a rover called ARADS (Atacama Rover Astrobiology Drilling Studies) that carried a core drill, the SOLID instrument, and another life-detection system called the Microfluidic Life Analyzer (MILA). MILA processes minuscule volumes of fluid samples to isolate amino acids, which are building blocks of proteins. The rover tested different strategies for searching for potential evidence of life in the soil, and established that roving, drilling and life detection can take place in concert. Status These tests validated the system for planetary exploration. Some improvements to be addressed in the future are instrument miniaturization, extraction protocols, and antibody stability under outer space conditions. SOLID would be one of the payloads of the proposed Icebreaker Life mission to Mars, or of a lander to Europa. References Astrobiology Fluorescence Molecular biology Spacecraft instruments Space science experiments Spectrometers INTA spacecraft instruments
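To make the role of the on-chip controls concrete, the sketch below shows one common way microarray fluorescence readings can be reduced to positive/negative calls: a spot is flagged when its intensity exceeds the blank-control background by a few standard deviations. This is a generic microarray convention, not SOLID's published data-reduction pipeline, and the antibody names and intensity values are purely illustrative.

```python
import statistics

def call_positives(spot_intensities, blank_intensities, k=3.0):
    """Flag antibody spots whose fluorescence exceeds the blank/control
    background by k standard deviations (a common microarray convention;
    not necessarily the data reduction SOLID itself uses)."""
    mean_b = statistics.mean(blank_intensities)
    sd_b = statistics.stdev(blank_intensities)
    threshold = mean_b + k * sd_b
    return {name: intensity > threshold
            for name, intensity in spot_intensities.items()}

# Hypothetical spot readings from a biochip image (arbitrary units).
blanks = [102, 98, 110, 95, 105]
spots = {"anti-protein-A": 480, "anti-protein-B": 118, "anti-spore": 260}
print(call_positives(spots, blanks))  # only the clearly elevated spots are True
```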
Signs Of LIfe Detector
[ "Physics", "Chemistry", "Astronomy", "Biology" ]
1,268
[ "Luminescence", "Fluorescence", "Origin of life", "Spectrum (physical sciences)", "Speculative evolution", "Astrobiology", "Molecular biology", "Biochemistry", "Spectrometers", "Spectroscopy", "Astronomical sub-disciplines", "Biological hypotheses" ]
59,867,294
https://en.wikipedia.org/wiki/Milan%20M.%20%C4%86irkovi%C4%87
Milan M. Ćirković (born 11 March 1971 in Belgrade) is a Serbian astronomer, astrophysicist, philosopher and science book author. He has worked in the fields of astrobiology, global catastrophic risks and future of humanity where he also co-authored with Nick Bostrom. A focus of his work is the Fermi Paradox for which he has critically discussed existing and also proposed novel solutions. References 1971 births Living people Serbian astronomers Astrophysicists 21st-century Serbian writers 21st-century science writers
Milan M. Ćirković
[ "Physics" ]
104
[ "Astrophysicists", "Astrophysics" ]
59,867,581
https://en.wikipedia.org/wiki/SoyBase%20Database
SoyBase is a database created by the United States Department of Agriculture. It contains genetic information about soybeans. It includes genetic maps, information about Mendelian genetics and molecular data regarding genes and sequences. It was started in 1990 and is freely available to individuals and organizations worldwide. History SoyBase was instituted by the Corn Insects and Corn Genetics Research Unit (CICGRU) in Ames, Iowa, as a central repository for the soybean genetics community's published information. Originally, the database concentrated on genetic information such as genetic linkage maps and other Mendelian information. SoyBase genetic maps are a manually curated composite of all published mapping and QTL studies, and thus provide a species-level view of markers and QTL. In 2010 the soybean genome sequence was released along with gene models and many other types of genome annotations that were integrated into SoyBase. SoyBase genetic linkage maps were integral to the assembly of the soybean genome. In 2018 the database received approximately 63,000 page requests from 2,600 users per month from 130 countries. About 40 organizations in the United States and 82 foreign educational institutions access SoyBase yearly. SoyBase supplies data to U.S. and foreign government organizations and corporate entities. Data submission and release policy Data is accepted from the original source generators only. Users who independently identify data for inclusion into the database can contact SoyBase directly. A number of Excel-based spreadsheet templates are available to facilitate the inclusion of data into SoyBase. All data in SoyBase are available without restrictions. A number of data sub-setting and download tools are provided, and when needed ad hoc subsets of the data can be requested from the SoyBase Curator. Search tool The SoyBase Database Search Tool uses a text entry box for queries. Results are returned as text and as displays. Results display soybean genetic (and genomic) data using Generic Model Organism Database (GMOD) open-source software. In addition to SoyBase objects identified by exact lexical matches to the query term, the tool also uses a soybean-specific ontology to identify biologically related SoyBase objects. Some SoyBase sequence data and annotations are available through an InterMine instance (SoyMine), which is a collaboration with the Legume Information System Project. Graphical displays Genetic maps contain information on markers (SSR, RFLP, SNP, etc.), genes, and biparental and Genome-wide Association Study (GWAS) Quantitative Trait Loci (QTL). Soybean genetic maps are displayed using the CMap comparative genetic map viewer. Soybean genomic sequence and gene model data are displayed using the GBrowse sequence viewer. Other genome annotations in this viewer include epigenetic data such as DNA methylation and gene expression data of various soybean strains subjected to different treatments and from different soybean tissues/cultivars. Metabolic data and biochemical pathway information is displayed using Pathway Tools. Soybean metabolic pathway information (SoyCyc) was inferred by the Plant Metabolic Network project and was used to populate the Pathway Tools displays. References External links Plant Metabolic Network project Generic Model Organism Database project Legume Federation project Legume Information System United States Department of Agriculture InterMine Biological databases Model organism databases
SoyBase Database
[ "Biology" ]
680
[ "Model organism databases", "Model organisms" ]
59,867,757
https://en.wikipedia.org/wiki/Vaccine%20contamination%20with%20SV40
Vaccine contamination with Simian vacuolating virus 40, known as SV40 occurred in the United States and other countries between 1955 and 1961. SV40 is a monkey virus that has the potential to cause cancer in animals and humans, although this is considered very unlikely and there have been no known human cases. Soon after its discovery, SV40 was identified in early batches of the oral form of the polio vaccine. The vaccines in which SV40 was found were produced between 1955 and 1961 by Lederle (now a subsidiary of Wyeth). The contamination may have been in the original seed strain (coded SOM) or in the substrate—primary kidney cells from infected monkeys that were used to grow the vaccine virus during production. Both the Sabin vaccine (oral, live virus) and the Salk vaccine (injectable, killed virus) were affected; the technique used to inactivate the polio virus in the Salk vaccine, by means of formaldehyde, did not reliably kill SV40. The contaminated vaccine continued to be distributed to the public through 1963. It was difficult to detect small quantities of virus until the advent of polymerase chain reaction; since then, stored samples of vaccine made after 1962 have tested negative for SV40. In 1997, Herbert Ratner of Oak Park, Illinois, gave some vials of 1955 Salk vaccine to researcher Michele Carbone. Ratner, the Health Commissioner of Oak Park at the time the Salk vaccine was introduced, had kept these vials of vaccine in a refrigerator for over forty years. Upon testing this vaccine, Carbone discovered that it contained not only the SV40 strain already known to have been in the Salk vaccine (containing two 72-bp enhancers) but also the same slow-growing SV40 strain currently found in some malignant tumors and lymphomas (containing one 72-bp enhancer). It is unknown how widespread the virus was among humans before the 1950s, though one study found that 12% of a sample of German medical students in 1952 – prior to the advent of the vaccines – had SV40 antibodies. An analysis presented at the Vaccine Cell Substrate Conference in 2004 suggested that vaccines used in the former Soviet bloc countries, China, Japan, and Africa, could have been contaminated up to 1980, meaning that hundreds of millions more could have been exposed to the virus unknowingly. Population level studies show no evidence of any increase in cancer incidence as a result of exposure, though SV40 has been extensively studied. A thirty-five year followup found no excess of the cancers commonly associated with SV40. See also Bundaberg tragedy, deaths of 12 children following bacterial contamination of diphtheria vaccine References Vaccine controversies Vaccines
Vaccine contamination with SV40
[ "Chemistry", "Biology" ]
558
[ "Vaccines", "Vaccination", "Drug safety", "Vaccine controversies" ]
63,449,981
https://en.wikipedia.org/wiki/Davenport%E2%80%93Schinzel%20Sequences%20and%20Their%20Geometric%20Applications
Davenport–Schinzel Sequences and Their Geometric Applications is a book in discrete geometry. It was written by Micha Sharir and Pankaj K. Agarwal, and published by Cambridge University Press in 1995, with a paperback reprint in 2010. Topics Davenport–Schinzel sequences are named after Harold Davenport and Andrzej Schinzel, who applied them to certain problems in the theory of differential equations. They are finite sequences of symbols from a given alphabet, constrained by forbidding pairs of symbols from appearing in alternation more than a given number of times (regardless of what other symbols might separate them). In a Davenport–Schinzel sequence of order k, the longest allowed alternations have length k. For instance, a Davenport–Schinzel sequence of order three could have two symbols x and y that appear either in the order x...y...x or y...x...y, but longer alternations like x...y...x...y would be forbidden. The length of such a sequence, for a given choice of k, can be only slightly longer than its number of distinct symbols. This phenomenon has been used to prove corresponding near-linear bounds on various problems in discrete geometry, for instance showing that the unbounded face of an arrangement of line segments can have complexity that is only slightly superlinear. The book is about this family of results, both on bounding the lengths of Davenport–Schinzel sequences and on their applications to discrete geometry. The first three chapters of the book provide bounds on the lengths of Davenport–Schinzel sequences whose superlinearity is described in terms of the inverse Ackermann function α(n). For instance, the length of a Davenport–Schinzel sequence of order three, with n distinct symbols, can be at most proportional to nα(n), as the second chapter shows; the third concerns higher orders. The fourth chapter applies this theory to line segments, and includes a proof that the bounds proven using these tools are tight: there exist systems of line segments whose arrangement complexity matches the bounds on Davenport–Schinzel sequence length. The remaining chapters concern more advanced applications of these methods. Three chapters concern arrangements of curves in the plane, algorithms for arrangements, and higher-dimensional arrangements, following which the final chapter (comprising a large fraction of the book) concerns applications of these combinatorial bounds to problems including Voronoi diagrams and nearest neighbor search, the construction of transversal lines through systems of objects, visibility problems, and robot motion planning. The topic remains an active area of research and the book poses many open questions. Audience and reception Although primarily aimed at researchers, this book (and especially its earlier chapters) could also be used as the textbook for a graduate course in its material. Reviewer Peter Hajnal calls it "very important to any specialist in computational geometry" and "highly recommended to anybody who is interested in this new topic at the border of combinatorics, geometry, and algorithm theory". References Combinatorics on words Discrete geometry Mathematics books 1995 non-fiction books
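As an illustration of the definition above (using the order-k convention stated there, in which the longest allowed alternation between any two symbols has length k), the sketch below checks the alternation condition directly. The function and variable names are illustrative, and the usual definition also forbids immediately repeated symbols, which is checked as a separate condition.

```python
from itertools import combinations, groupby

def longest_alternation(seq, x, y):
    """Length of the longest subsequence of seq alternating between x and y."""
    restricted = [s for s in seq if s in (x, y)]
    # Collapsing consecutive repeats leaves exactly the longest x/y alternation.
    return sum(1 for _ in groupby(restricted))

def is_davenport_schinzel(seq, k):
    """Order-k condition as described above: no two symbols alternate more than
    k times; additionally, no symbol may immediately repeat."""
    if any(a == b for a, b in zip(seq, seq[1:])):
        return False
    return all(longest_alternation(seq, x, y) <= k
               for x, y in combinations(set(seq), 2))

# Order three: x...y...x is allowed, but x...y...x...y is forbidden.
print(is_davenport_schinzel(list("xyzx"), 3))    # True: no pair alternates more than 3 times
print(is_davenport_schinzel(list("xyzxyx"), 3))  # False: x and y alternate 5 times
```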
Davenport–Schinzel Sequences and Their Geometric Applications
[ "Mathematics" ]
596
[ "Discrete geometry", "Discrete mathematics", "Combinatorics on words", "Combinatorics" ]
63,450,596
https://en.wikipedia.org/wiki/NGC%20904
NGC 904 is an elliptical galaxy in the constellation Aries. It is estimated to be 244 million light years from the Milky Way and has a diameter of approximately 85,000 ly. NGC 904 was discovered on 13 December 1884 by the astronomer Édouard Stephan. See also List of NGC objects (1–1000) References External links Elliptical galaxies Aries (constellation) 0904 009112 Astronomical objects discovered in 1884 Discoveries by Édouard Stephan
NGC 904
[ "Astronomy" ]
92
[ "Aries (constellation)", "Constellations" ]
63,450,614
https://en.wikipedia.org/wiki/Distributed%20data%20processing
Distributed data processing (DDP) was the term that IBM used for the IBM 3790 (1975) and its successor, the IBM 8100 (1979). Datamation described the 3790 in March 1979 as "less than successful." Distributed data processing was used by IBM to refer to two environments: IMS DB/DC and CICS/DL/I. Each pair included a Telecommunications Monitor and a Database system. The layering involved a message, containing information to form a transaction, which was then processed by an application program. Development tools such as program validation services were released by IBM to facilitate expansion. Use of "a number of small computers linked to a central computer" permitted local and central processing, each optimized for what it could best do. Terminals, including those described as intelligent, typically were attached locally, to a "satellite processor." Central systems, sometimes multi-processors, grew to handle the load. Some of this extra capacity, of necessity, was used to enhance data security. Years before open systems made their presence felt, the goal of some hardware suppliers was "to replace the big, central mainframe computer with an array of smaller computers that are tied together." Lower case distributed data processing Hadoop adds another term to the mix: File System. Tools added for this use of distributed data processing include new programming languages. TSI/DPF Flexicom In 1976 Turnkey Systems Inc (TSI)/DPF Inc. introduced a hardware/software telecommunications front-end to off-load some of the processing involved in distributed data processing. Named Flexicom, the product used an IBM-manufactured CPU and ran (mainframe) DOS Rel. 26 with Flexicom's additions. Of the four models available, the smallest had the CPU of a 360/30. See also HPCC References History of computing hardware Computer-related introductions in 1975
Distributed data processing
[ "Technology" ]
382
[ "Computing stubs", "History of computing hardware", "History of computing" ]
63,450,622
https://en.wikipedia.org/wiki/NGC%20905
NGC 905 is a lenticular galaxy with an active nucleus in the constellation Cetus, south of the celestial equator. It is estimated to be 644 million light-years from the Milky Way and has a diameter of approximately 85,000 ly. NGC 905 was discovered by the astronomer Francis Leavenworth. See also List of NGC objects (1–1000) References External links Lenticular galaxies Cetus 0905 009038
NGC 905
[ "Astronomy" ]
85
[ "Cetus", "Constellations" ]
63,450,637
https://en.wikipedia.org/wiki/NGC%20906
NGC 906 is a barred spiral galaxy in the constellation Andromeda in the northern sky. It is estimated to be 215 million light years from the Milky Way and has a diameter of approximately 110,000 ly. NGC 906 was discovered on October 30, 1878 by astronomer Édouard Stephan. See also List of NGC objects (1–1000) References External links 0906 Barred spiral galaxies Andromeda (constellation) 009188 18781030 Discoveries by Édouard Stephan
NGC 906
[ "Astronomy" ]
98
[ "Andromeda (constellation)", "Constellations" ]
63,451,209
https://en.wikipedia.org/wiki/Vattapara%20accident%20zone
Vattapara Hairpin Turn, also known as the Vattapara accident zone, is a stretch of road along Indian National Highway 66 near Valanchery, Malappuram District, Kerala, India, that is known for a high number of accidents. Over a five-year period there were 300 accidents, 200 injuries, and 30 deaths. History Vattapara Hairpin Turn is located about 4 km from Valanchery in Malappuram district, on the National Highway formerly known as NH 17 and now NH 66. The 'Vattapara bend' is an infamous curve at Vattapara between Puthanathani and Valanchery. The number of vehicles that have flipped on the turn is not tracked, but there have been more than 300 road accidents in the last five years, with more than 30 deaths and more than 200 people injured. It is a common sight for locals to see vehicles overturn and firefighters arrive to right them. The Vattapara bend has also been called the Bermuda Triangle of Malappuram district and a death trap for tanker lorries. Reason of accident Frequent accidents are caused by the gradient of the road approaching the curve and by the unscientific construction of the curve, in particular the slope of the road surface; this relates to the theory of banking of the curve (banked turn). A vehicle speeding through a curve has a tendency to slide outward, an effect attributed to centrifugal force. If this force exceeds the friction between the vehicle's tires and the road, the vehicle skids or overturns. This friction in turn depends on the speed of the vehicle, the condition of the road surface, and the load on the vehicle. At Vattapara the road surface slopes to the left, so vehicles taking the curve are at risk by default. Tanker lorries and trucks that descend the first stretch at fairly high speed, at night or otherwise, without adequate warnings or without heeding them, suddenly encounter a single hairpin bend to the right. Most of the time they cut sharply to the right and the vehicle overturns to the left. The physics is sketched after this section. Ways to cope with an accident Change traffic Solution 1 is a bypass road connecting Kanjipura directly to Valanchery Moodal before the Vattapara bend on the National Highway, diverting traffic away from the curve. Solution 2 is to widen the roads from Puthanathani to Thirunavaya Kuttipuram to divert traffic. See also Valanchery Roads in Kerala National Highway 66 (India) Traffic collisions in India Puthanathani Kuttipuram Banked turn References External links Tanker lorry overturns at Valanchery, Malappuram; no casualties (Malayalam-language news report) Two killed in Valanchery road mishap Accident-prone highway stretch to be redesigned | Kozhikode News - Times of India Road incidents in India Road safety Collision Disasters in Kerala
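The banked-turn reasoning above can be made quantitative with the standard formula for the maximum speed at which friction can still hold a vehicle on a curve, v² = r·g·(tan θ + μ)/(1 − μ·tan θ), where a negative bank angle θ models adverse camber such as the leftward slope described here. The sketch below evaluates it for illustrative numbers only; the radius, friction coefficient, and angles are assumptions, not measurements of the Vattapara bend, and a loaded tanker with a high centre of gravity may roll over at even lower speeds.

```python
import math

def max_corner_speed(radius_m, bank_deg, mu):
    """Maximum speed (m/s) before a vehicle slides outward on a banked curve.
    Standard banked-turn result: v^2 = r*g*(tan(theta) + mu) / (1 - mu*tan(theta)).
    A negative bank_deg models adverse camber (road sloping away from the turn)."""
    g = 9.81
    t = math.tan(math.radians(bank_deg))
    return math.sqrt(radius_m * g * (t + mu) / (1 - mu * t))

# Illustrative numbers only: 60 m curve radius, tire-road friction 0.7,
# comparing a properly banked curve, a flat curve, and a 5-degree adverse slope.
for bank in (8, 0, -5):
    v = max_corner_speed(60, bank, 0.7)
    print(f"bank {bank:+} deg: about {v * 3.6:.0f} km/h before sliding")
```

With these assumed numbers the safe cornering speed drops by roughly 20 km/h between a properly banked curve and one with adverse camber, which is consistent with the article's point that the leftward slope makes the bend dangerous for fast, heavily loaded vehicles.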
Vattapara accident zone
[ "Physics" ]
612
[ "Collision", "Mechanics" ]
63,451,675
https://en.wikipedia.org/wiki/Regulation%20of%20artificial%20intelligence
Regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI). It is part of the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions worldwide, including for international organizations without direct enforcement power like the IEEE or the OECD. Since 2016, numerous AI ethics guidelines have been published in order to maintain social control over the technology. Regulation is deemed necessary to both foster AI innovation and manage associated risks. Furthermore, organizations deploying AI have a central role to play in creating and implementing trustworthy AI, adhering to established principles, and taking accountability for mitigating risks. Regulating AI through mechanisms such as review boards can also be seen as social means to approach the AI control problem. Background According to Stanford University's 2023 AI Index, the annual number of bills mentioning "artificial intelligence" passed in 127 surveyed countries jumped from one in 2016 to 37 in 2022. In 2017, Elon Musk called for regulation of AI development. According to NPR, the Tesla CEO was "clearly not thrilled" to be advocating for government scrutiny that could impact his own industry, but believed the risks of going completely without oversight are high: "Normally the way regulations are set up is when a bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry. It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilization." In response, some politicians expressed skepticism about the wisdom of regulating a technology that is still in development. Responding both to Musk and to February 2017 proposals by European Union lawmakers to regulate AI and robotics, Intel CEO Brian Krzanich has argued that AI is in its infancy and that it is too early to regulate the technology. Many tech companies oppose the harsh regulation of AI and "While some of the companies have said they welcome rules around A.I., they have also argued against tough regulations akin to those being created in Europe" Instead of trying to regulate the technology itself, some scholars suggested developing common norms including requirements for the testing and transparency of algorithms, possibly in combination with some form of warranty. In a 2022 Ipsos survey, attitudes towards AI varied greatly by country; 78% of Chinese citizens, but only 35% of Americans, agreed that "products and services using AI have more benefits than drawbacks". A 2023 Reuters/Ipsos poll found that 61% of Americans agree, and 22% disagree, that AI poses risks to humanity. In a 2023 Fox News poll, 35% of Americans thought it "very important", and an additional 41% thought it "somewhat important", for the federal government to regulate AI, versus 13% responding "not very important" and 8% responding "not at all important". Perspectives The regulation of artificial intelligences is the development of public sector policies and laws for promoting and regulating AI. Regulation is now generally considered necessary to both encourage AI and manage associated risks. Public administration and policy considerations generally focus on the technical and economic implications and on trustworthy and human-centered AI systems, although regulation of artificial superintelligences is also considered. 
The basic approach to regulation focuses on the risks and biases of machine-learning algorithms, at the level of the input data, algorithm testing, and decision model. It also focuses on the explainability of the outputs. There have been both hard law and soft law proposals to regulate AI. Some legal scholars have noted that hard law approaches to AI regulation have substantial challenges. Among the challenges, AI technology is rapidly evolving leading to a "pacing problem" where traditional laws and regulations often cannot keep up with emerging applications and their associated risks and benefits. Similarly, the diversity of AI applications challenges existing regulatory agencies, which often have limited jurisdictional scope. As an alternative, some legal scholars argue that soft law approaches to AI regulation are promising because soft laws can be adapted more flexibly to meet the needs of emerging and evolving AI technology and nascent applications. However, soft law approaches often lack substantial enforcement potential. Cason Schmit, Megan Doerr, and Jennifer Wagner proposed the creation of a quasi-governmental regulator by leveraging intellectual property rights (i.e., copyleft licensing) in certain AI objects (i.e., AI models and training datasets) and delegating enforcement rights to a designated enforcement entity. They argue that AI can be licensed under terms that require adherence to specified ethical practices and codes of conduct. (e.g., soft law principles). Prominent youth organizations focused on AI, namely Encode Justice, have also issued comprehensive agendas calling for more stringent AI regulations and public-private partnerships. AI regulation could derive from basic principles. A 2020 Berkman Klein Center for Internet & Society meta-review of existing sets of principles, such as the Asilomar Principles and the Beijing Principles, identified eight such basic principles: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and respect for human values. AI law and regulations have been divided into three main topics, namely governance of autonomous intelligence systems, responsibility and accountability for the systems, and privacy and safety issues. A public administration approach sees a relationship between AI law and regulation, the ethics of AI, and 'AI society', defined as workforce substitution and transformation, social acceptance and trust in AI, and the transformation of human to machine interaction. The development of public sector strategies for management and regulation of AI is deemed necessary at the local, national, and international levels and in a variety of fields, from public service management and accountability to law enforcement, healthcare (especially the concept of a Human Guarantee), the financial sector, robotics, autonomous vehicles, the military and national security, and international law. Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher published a joint statement in November 2021 entitled "Being Human in an Age of AI", calling for a government commission to regulate AI. 
As a response to the AI control problem Regulation of AI can be seen as positive social means to manage the AI control problem (the need to ensure long-term beneficial AI), with other social responses such as doing nothing or banning being seen as impractical, and approaches such as enhancing human capabilities through transhumanism techniques like brain-computer interfaces being seen as potentially complementary. Regulation of research into artificial general intelligence (AGI) focuses on the role of review boards, from university or corporation to international levels, and on encouraging research into AI safety, together with the possibility of differential intellectual progress (prioritizing protective strategies over risky strategies in AI development) or conducting international mass surveillance to perform AGI arms control. For instance, the 'AGI Nanny' is a proposed strategy, potentially under the control of humanity, for preventing the creation of a dangerous superintelligence as well as for addressing other major threats to human well-being, such as subversion of the global financial system, until a true superintelligence can be safely created. It entails the creation of a smarter-than-human, but not superintelligent, AGI system connected to a large surveillance network, with the goal of monitoring humanity and protecting it from danger. Regulation of conscious, ethically aware AGIs focuses on how to integrate them with existing human society and can be divided into considerations of their legal standing and of their moral rights. Regulation of AI has been seen as restrictive, with a risk of preventing the development of AGI. Global guidance The development of a global governance board to regulate AI development was suggested at least as early as 2017. In December 2018, Canada and France announced plans for a G7-backed International Panel on Artificial Intelligence, modeled on the International Panel on Climate Change, to study the global effects of AI on people and economies and to steer AI development. In 2019, the Panel was renamed the Global Partnership on AI. The Global Partnership on Artificial Intelligence (GPAI) was launched in June 2020, stating a need for AI to be developed in accordance with human rights and democratic values, to ensure public confidence and trust in the technology, as outlined in the OECD Principles on Artificial Intelligence (2019). The 15 founding members of the Global Partnership on Artificial Intelligence are Australia, Canada, the European Union, France, Germany, India, Italy, Japan, the Republic of Korea, Mexico, New Zealand, Singapore, Slovenia, the United States and the UK. In 2023, the GPAI has 29 members. The GPAI Secretariat is hosted by the OECD in Paris, France. GPAI's mandate covers four themes, two of which are supported by the International Centre of Expertise in Montréal for the Advancement of Artificial Intelligence, namely, responsible AI and data governance. A corresponding centre of excellence in Paris will support the other two themes on the future of work, and on innovation and commercialization. GPAI also investigated how AI can be leveraged to respond to the COVID-19 pandemic. The OECD AI Principles were adopted in May 2019, and the G20 AI Principles in June 2019. In September 2019 the World Economic Forum issued ten 'AI Government Procurement Guidelines'. In February 2020, the European Union published its draft strategy paper for promoting and regulating AI. 
At the United Nations (UN), several entities have begun to promote and discuss aspects of AI regulation and policy, including the UNICRI Centre for AI and Robotics. In partnership with INTERPOL, UNICRI's Centre issued the report AI and Robotics for Law Enforcement in April 2019 and the follow-up report Towards Responsible AI Innovation in May 2020. At UNESCO's Scientific 40th session in November 2019, the organization commenced a two-year process to achieve a "global standard-setting instrument on ethics of artificial intelligence". In pursuit of this goal, UNESCO forums and conferences on AI were held to gather stakeholder views. A draft text of a Recommendation on the Ethics of AI of the UNESCO Ad Hoc Expert Group was issued in September 2020 and included a call for legislative gaps to be filled. UNESCO tabled the international instrument on the ethics of AI for adoption at its General Conference in November 2021; this was subsequently adopted. While the UN is making progress with the global management of AI, its institutional and legal capability to manage the AGI existential risk is more limited. An initiative of International Telecommunication Union (ITU) in partnership with 40 UN sister agencies, AI for Good is a global platform which aims to identify practical applications of AI to advance the United Nations Sustainable Development Goals and scale those solutions for global impact. It is an action-oriented, global & inclusive United Nations platform fostering development of AI to positively impact health, climate, gender, inclusive prosperity, sustainable infrastructure, and other global development priorities. Recent research has indicated that countries will also begin to use artificial intelligence as a tool for national cyberdefense. AI is a new factor in the cyber arms industry, as it can be used for defense purposes. Therefore, academics urge that nations should establish regulations for the use of AI, similar to how there are regulations for other military industries. Regional and national regulation The regulatory and policy landscape for AI is an emerging issue in regional and national jurisdictions globally, for example in the European Union and Russia. Since early 2016, many national, regional and international authorities have begun adopting strategies, actions plans and policy papers on AI. These documents cover a wide range of topics such as regulation and governance, as well as industrial strategy, research, talent and infrastructure. Different countries have approached the problem in different ways. Regarding the three largest economies, it has been said that "the United States is following a market-driven approach, China is advancing a state-driven approach, and the EU is pursuing a rights-driven approach." Australia In October 2023, the Australian Computer Society, Business Council of Australia, Australian Chamber of Commerce and Industry, Ai Group (aka Australian Industry Group), Council of Small Business Organisations Australia, and Tech Council of Australia jointly published an open letter calling for a national approach to AI strategy. The letter backs the federal government establishing a whole-of-government AI taskforce. 
Brazil On September 30, 2021, the Brazilian Chamber of Deputies approved the Brazilian Legal Framework for Artificial Intelligence, Marco Legal da Inteligência Artificial, in regulatory efforts for the development and usage of AI technologies and to further stimulate research and innovation in AI solutions aimed at ethics, culture, justice, fairness, and accountability. This 10 article bill outlines objectives including missions to contribute to the elaboration of ethical principles, promote sustained investments in research, and remove barriers to innovation. Specifically, in article 4, the bill emphasizes the avoidance of discriminatory AI solutions, plurality, and respect for human rights. Furthermore, this act emphasizes the importance of the equality principle in deliberate decision-making algorithms, especially for highly diverse and multiethnic societies like that of Brazil. When the bill was first released to the public, it faced substantial criticism, alarming the government for critical provisions. The underlying issue is that this bill fails to thoroughly and carefully address accountability, transparency, and inclusivity principles. Article VI establishes subjective liability, meaning any individual that is damaged by an AI system and is wishing to receive compensation must specify the stakeholder and prove that there was a mistake in the machine's life cycle. Scholars emphasize that it is out of legal order to assign an individual responsible for proving algorithmic errors given the high degree of autonomy, unpredictability, and complexity of AI systems. This also drew attention to the currently occurring issues with face recognition systems in Brazil leading to unjust arrests by the police, which would then imply that when this bill is adopted, individuals would have to prove and justify these machine errors. The main controversy of this draft bill was directed to three proposed principles. First, the non-discrimination principle, suggests that AI must be developed and used in a way that merely mitigates the possibility of abusive and discriminatory practices. Secondly, the pursuit of neutrality principle lists recommendations for stakeholders to mitigate biases; however, with no obligation to achieve this goal. Lastly, the transparency principle states that a system's transparency is only necessary when there is a high risk of violating fundamental rights. As easily observed, the Brazilian Legal Framework for Artificial Intelligence lacks binding and obligatory clauses and is rather filled with relaxed guidelines. In fact, experts emphasize that this bill may even make accountability for AI discriminatory biases even harder to achieve. Compared to the EU's proposal of extensive risk-based regulations, the Brazilian Bill has 10 articles proposing vague and generic recommendations. Compared to the multistakeholder participation approach taken previously in the 2000s when drafting the Brazilian Internet Bill of Rights, Marco Civil da Internet, the Brazilian Bill is assessed to significantly lack perspective. Multistakeholderism, more commonly referred to as Multistakeholder Governance, is defined as the practice of bringing multiple stakeholders to participate in dialogue, decision-making, and implementation of responses to jointly perceived problems. In the context of regulatory AI, this multistakeholder perspective captures the trade-offs and varying perspectives of different stakeholders with specific interests, which helps maintain transparency and broader efficacy. 
On the contrary, the legislative proposal for AI regulation did not follow a similar multistakeholder approach. Future steps may include, expanding upon the multistakeholder perspective. There has been a growing concern about the inapplicability of the framework of the bill, which highlights that the one-shoe-fits-all solution may not be suitable for the regulation of AI and calls for subjective and adaptive provisions. Canada The Pan-Canadian Artificial Intelligence Strategy (2017) is supported by federal funding of Can $125 million with the objectives of increasing the number of outstanding AI researchers and skilled graduates in Canada, establishing nodes of scientific excellence at the three major AI centres, developing 'global thought leadership' on the economic, ethical, policy and legal implications of AI advances and supporting a national research community working on AI. The Canada CIFAR AI Chairs Program is the cornerstone of the strategy. It benefits from funding of Can$86.5 million over five years to attract and retain world-renowned AI researchers. The federal government appointed an Advisory Council on AI in May 2019 with a focus on examining how to build on Canada's strengths to ensure that AI advancements reflect Canadian values, such as human rights, transparency and openness. The Advisory Council on AI has established a working group on extracting commercial value from Canadian-owned AI and data analytics. In 2020, the federal government and Government of Quebec announced the opening of the International Centre of Expertise in Montréal for the Advancement of Artificial Intelligence, which will advance the cause of responsible development of AI. In June 2022, the government of Canada started a second phase of the Pan-Canadian Artificial Intelligence Strategy. In November 2022, Canada has introduced the Digital Charter Implementation Act (Bill C-27), which proposes three acts that have been described as a holistic package of legislation for trust and privacy: the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act, and the Artificial Intelligence & Data Act (AIDA). Morocco In Morocco, a new legislative proposal has been put forward by a coalition of political parties in Parliament to establish the National Agency for Artificial Intelligence (AI). This agency is intended to regulate AI technologies, enhance collaboration with international entities in the field, and increase public awareness of both the possibilities and risks associated with AI. China The regulation of AI in China is mainly governed by the State Council of the People's Republic of China's July 8, 2017 "A Next Generation Artificial Intelligence Development Plan" (State Council Document No. 35), in which the Central Committee of the Chinese Communist Party and the State Council of the PRC urged the governing bodies of China to promote the development of AI up to 2030. Regulation of the issues of ethical and legal support for the development of AI is accelerating, and policy ensures state control of Chinese companies and over valuable data, including storage of data on Chinese users within the country and the mandatory use of People's Republic of China's national standards for AI, including over big data, cloud computing, and industrial software. In 2021, China published ethical guidelines for the use of AI in China which state that researchers must ensure that AI abides by shared human values, is always under human control, and is not endangering public safety. 
In 2023, China introduced Interim Measures for the Management of Generative AI Services. Council of Europe The Council of Europe (CoE) is an international organization that promotes human rights, democracy and the rule of law. It comprises 46 member states, including all 29 Signatories of the European Union's 2018 Declaration of Cooperation on Artificial Intelligence. The CoE has created a common legal space in which the members have a legal obligation to guarantee rights as set out in the European Convention on Human Rights. Specifically in relation to AI, "The Council of Europe's aim is to identify intersecting areas between AI and our standards on human rights, democracy and rule of law, and to develop relevant standard setting or capacity-building solutions". The large number of relevant documents identified by the CoE include guidelines, charters, papers, reports and strategies. The authoring bodies of these AI regulation documents are not confined to one sector of society and include organizations, companies, bodies and nation-states. In 2019, the Council of Europe initiated a process to assess the need for legally binding regulation of AI, focusing specifically on its implications for human rights and democratic values. Negotiations on a treaty began in September 2022, involving the 46 member states of the Council of Europe, as well as Argentina, Australia, Canada, Costa Rica, the Holy See, Israel, Japan, Mexico, Peru, the United States of America, and Uruguay, as well as the European Union. On 17 May 2024, the "Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law" was adopted. It was opened for signature on 5 September 2024. Although developed by a European organisation, the treaty is open for accession by states from other parts of the world. The first ten signatories were: Andorra, Georgia, Iceland, Norway, Moldova, San Marino, the United Kingdom, Israel, the United States, and the European Union. European Union The EU is one of the largest jurisdictions in the world and plays an active role in the global regulation of digital technology through the GDPR, Digital Services Act, the Digital Markets Act. For AI in particular, the Artificial intelligence Act is regarded in 2023 as the most far-reaching regulation of AI worldwide. Most European Union (EU) countries have their own national strategies towards regulating AI, but these are largely convergent. The European Union is guided by a European Strategy on Artificial Intelligence, supported by a High-Level Expert Group on Artificial Intelligence. In April 2019, the European Commission published its Ethics Guidelines for Trustworthy Artificial Intelligence (AI), following this with its Policy and investment recommendations for trustworthy Artificial Intelligence in June 2019. The EU Commission's High Level Expert Group on Artificial Intelligence carries out work on Trustworthy AI, and the Commission has issued reports on the Safety and Liability Aspects of AI and on the Ethics of Automated Vehicles. In 2020. the EU Commission sought views on a proposal for AI specific legislation, and that process is ongoing. On February 2, 2020, the European Commission published its White Paper on Artificial Intelligence – A European approach to excellence and trust. The White Paper consists of two main building blocks, an 'ecosystem of excellence' and a 'ecosystem of trust'. The 'ecosystem of trust' outlines the EU's approach for a regulatory framework for AI. 
In its proposed approach, the Commission distinguishes AI applications based on whether they are 'high-risk' or not. Only high-risk AI applications should be in the scope of a future EU regulatory framework. An AI application is considered high-risk if it operates in a risky sector (such as healthcare, transport or energy) and is "used in such a manner that significant risks are likely to arise". For high-risk AI applications, the requirements mainly concern "training data", "data and record-keeping", "information to be provided", "robustness and accuracy", and "human oversight". There are also requirements specific to certain usages such as remote biometric identification. AI applications that do not qualify as 'high-risk' could be governed by a voluntary labeling scheme. As regards compliance and enforcement, the Commission considers prior conformity assessments which could include 'procedures for testing, inspection or certification' and/or 'checks of the algorithms and of the data sets used in the development phase'. A European governance structure on AI in the form of a framework for cooperation of national competent authorities could facilitate the implementation of the regulatory framework. A January 2021 draft was leaked online on April 14, 2021, before the Commission presented its official "Proposal for a Regulation laying down harmonised rules on artificial intelligence" a week later. Shortly after, the Artificial Intelligence Act (also known as the AI Act) was formally proposed on this basis. This proposal includes a refinement of the 2020 risk-based approach with, this time, four risk categories: "minimal", "limited", "high" and "unacceptable". The proposal has been severely critiqued in the public debate. Academics have expressed concerns about various unclear elements in the proposal, such as the broad definition of what constitutes AI, and feared unintended legal implications, especially for vulnerable groups such as patients and migrants. The risk category "general-purpose AI" was added to the AI Act to account for versatile models like ChatGPT, which did not fit the application-based regulation framework. Unlike for other risk categories, general-purpose AI models can be regulated based on their capabilities, not just their uses. Weaker general-purpose AI models are subject to transparency requirements, while those considered to pose "systemic risks" (notably those trained using computational capabilities exceeding 10²⁵ floating-point operations) must also undergo a thorough evaluation process. A subsequent version of the AI Act was finally adopted in May 2024. The AI Act will be progressively enforced. Recognition of emotions and real-time remote biometric identification will be prohibited, with some exemptions, such as for law enforcement. The European Union's AI Act has created a regulatory framework with significant implications globally. This legislation introduces a risk-based approach to categorizing AI systems, focusing on high-risk applications like healthcare, education, and public safety. It requires organizations to ensure transparency, data governance, and human oversight in their AI solutions. While this aims to foster ethical AI use, the stringent requirements could increase compliance costs and delay technology deployment, impacting innovation-driven industries. Observers have expressed concerns about the multiplication of legislative proposals under the von der Leyen Commission.
The speed of the legislative initiatives is partially led by political ambitions of the EU and could put at risk the digital rights of the European citizens, including rights to privacy, especially in the face of uncertain guarantees of data protection through cyber security. Among the stated guiding principles in the variety of legislative proposals in the area of AI under the von der Leyen Commission are the objectives of strategic autonomy and the concept of digital sovereignty. On May 29, 2024, the European Court of Auditors published a report stating that EU measures were not well coordinated with those of EU countries; that the monitoring of investments was not systematic; and that stronger governance was needed. Germany In November 2020, DIN, DKE and the German Federal Ministry for Economic Affairs and Energy published the first edition of the "German Standardization Roadmap for Artificial Intelligence" (NRM KI) and presented it to the public at the Digital Summit of the Federal Government of Germany. NRM KI describes requirements to future regulations and standards in the context of AI. The implementation of the recommendations for action is intended to help to strengthen the German economy and science in the international competition in the field of artificial intelligence and create innovation-friendly conditions for this emerging technology. The first edition is a 200-page long document written by 300 experts. The second edition of the NRM KI was published to coincide with the German government's Digital Summit on December 9, 2022. DIN coordinated more than 570 participating experts from a wide range of fields from science, industry, civil society and the public sector. The second edition is a 450-page long document. On the one hand, NRM KI covers the focus topics in terms of applications (e.g. medicine, mobility, energy & environment, financial services, industrial automation) and fundamental issues (e.g. AI classification, security, certifiability, socio-technical systems, ethics). On the other hand, it provides an overview of the central terms in the field of AI and its environment across a wide range of interest groups and information sources. In total, the document covers 116 standardisation needs and provides six central recommendations for action. G7 On 30 October 2023, members of the G7 subscribe to eleven guiding principles for the design, production and implementation of advanced artificial intelligence systems, as well as a voluntary Code of Conduct for artificial intelligence developers in the context of the Hiroshima Process. The agreement receives the applause of Ursula von der Leyen who finds in it the principles of the AI Directive, currently being finalized. Israel On October 30, 2022, pursuant to government resolution 212 of August 2021, the Israeli Ministry of Innovation, Science and Technology released its "Principles of Policy, Regulation and Ethics in AI" white paper for public consultation. By December 2023, the Ministry of Innovation and the Ministry of Justice published a joint AI regulation and ethics policy paper, outlining several AI ethical principles and a set of recommendations including opting for sector-based regulation, a risk-based approach, preference for "soft" regulatory tools and maintaining consistency with existing global regulatory approaches to AI. 
Italy In October 2023, the Italian privacy authority approved a regulation that provides three principles for therapeutic decisions taken by automated systems: transparency of decision-making processes, human supervision of automated decisions and algorithmic non-discrimination. New Zealand No AI-specific legislation currently exists, but AI usage is regulated by existing laws, including the Privacy Act, the Human Rights Act, the Fair Trading Act and the Harmful Digital Communications Act. In 2020, the New Zealand Government sponsored a World Economic Forum pilot project titled "Reimagining Regulation for the Age of AI", aimed at creating regulatory frameworks around AI. The same year, the Privacy Act was updated to regulate the use of New Zealanders' personal information in AI. In 2023, the Privacy Commissioner released guidance on using AI in accordance with information privacy principles. In February 2024, the Attorney-General and Technology Minister announced the formation of a Parliamentary cross-party AI caucus, and that a framework for the Government's use of AI was being developed. She also announced that no extra regulation was planned at that stage. Philippines In 2023, a bill was filed in the Philippine House of Representatives which proposed the establishment of the Artificial Intelligence Development Authority (AIDA), which would oversee the development and research of artificial intelligence. AIDA was also proposed to be a watchdog against crimes using AI. In 2024, the Commission on Elections also considered banning the use of AI and deepfakes in campaigning, and looked to implement regulations that could apply as early as the 2025 general elections. Spain In 2018, the Spanish Ministry of Science, Innovation and Universities approved an R&D Strategy on Artificial Intelligence. United Kingdom The UK supported the application and development of AI in business via the Digital Economy Strategy 2015–2018, introduced at the beginning of 2015 by Innovate UK as part of the UK Digital Strategy. In the public sector, the Department for Digital, Culture, Media and Sport advised on data ethics and the Alan Turing Institute provided guidance on responsible design and implementation of AI systems. In terms of cyber security, in 2020 the National Cyber Security Centre issued guidance on 'Intelligent Security Tools'. The following year, the UK published its 10-year National AI Strategy, which describes actions to assess long-term AI risks, including AGI-related catastrophic risks. In March 2023, the UK released the white paper A pro-innovation approach to AI regulation. This white paper presents general AI principles, but leaves significant flexibility to existing regulators in how they adapt these principles to specific areas such as transport or financial markets. In November 2023, the UK hosted the first AI safety summit, with Prime Minister Rishi Sunak aiming to position the UK as a leader in AI safety regulation. During the summit, the UK created an AI Safety Institute, as an evolution of the Frontier AI Taskforce led by Ian Hogarth. The institute was notably assigned the responsibility of advancing the safety evaluations of the world's most advanced AI models, also called frontier AI models. The UK government indicated its reluctance to legislate early, arguing that it may reduce the sector's growth and that laws might be rendered obsolete by further technological progress.
United States Discussions on regulation of AI in the United States have included topics such as the timeliness of regulating AI, the nature of the federal regulatory framework to govern and promote AI, including what agency should lead, the regulatory and governing powers of that agency, and how to update regulations in the face of rapidly changing technology, as well as the roles of state governments and courts. 2016–2017 As early as 2016, the Obama administration had begun to focus on the risks and regulations for artificial intelligence. In a report titled Preparing For the Future of Artificial Intelligence, the National Science and Technology Council set a precedent to allow researchers to continue to develop new AI technologies with few restrictions. It is stated within the report that "the approach to regulation of AI-enabled products to protect public safety should be informed by assessment of the aspects of risk....". These risks would be the principal reason to create any form of regulation, granted that any existing regulation would not apply to AI technology. 2018–2019 The first main report was the National Strategic Research and Development Plan for Artificial Intelligence. On August 13, 2018, Section 1051 of the Fiscal Year 2019 John S. McCain National Defense Authorization Act (P.L. 115-232) established the National Security Commission on Artificial Intelligence "to consider the methods and means necessary to advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States." Steering on regulating security-related AI is provided by the National Security Commission on Artificial Intelligence. The Artificial Intelligence Initiative Act (S.1558) is a proposed bill that would establish a federal initiative designed to accelerate research and development on AI for, inter alia, the economic and national security of the United States. On January 7, 2019, following an Executive Order on Maintaining American Leadership in Artificial Intelligence, the White House's Office of Science and Technology Policy released a draft Guidance for Regulation of Artificial Intelligence Applications, which includes ten principles for United States agencies when deciding whether and how to regulate AI. In response, the National Institute of Standards and Technology has released a position paper, and the Defense Innovation Board has issued recommendations on the ethical use of AI. A year later, the administration called for comments on regulation in another draft of its Guidance for Regulation of Artificial Intelligence Applications. Other specific agencies working on the regulation of AI include the Food and Drug Administration, which has created pathways to regulate the incorporation of AI in medical imaging. National Science and Technology Council also published the National Artificial Intelligence Research and Development Strategic Plan, which received public scrutiny and recommendations to further improve it towards enabling Trustworthy AI. 2021–2022 In March 2021, the National Security Commission on Artificial Intelligence released their final report. In the report, they stated that "Advances in AI, including the mastery of more general AI capabilities along one or more dimensions, will likely provide new capabilities and applications. Some of these advances could lead to inflection points or leaps in capabilities. 
Such advances may also introduce new concerns and risks and the need for new policies, recommendations, and technical advances to assure that systems are aligned with goals and values, including safety, robustness and trustworthiness. The US should monitor advances in AI and make necessary investments in technology and give attention to policy so as to ensure that AI systems and their uses align with our goals and values." In June 2022, Senators Rob Portman and Gary Peters introduced the Global Catastrophic Risk Mitigation Act. The bipartisan bill "would also help counter the risk of artificial intelligence... from being abused in ways that may pose a catastrophic risk". On October 4, 2022, President Joe Biden unveiled a new AI Bill of Rights, which outlines five protections Americans should have in the AI age: 1. Safe and Effective Systems, 2. Algorithmic Discrimination Protection, 3. Data Privacy, 4. Notice and Explanation, and 5. Human Alternatives, Consideration, and Fallback. The Bill was introduced in October 2021 by the Office of Science and Technology Policy (OSTP), a US government department that advises the president on science and technology. 2023 The New York City Bias Audit Law (Local Law 144) was enacted by the NYC Council in November 2021. Originally due to come into effect on 1 January 2023, the enforcement date for Local Law 144 was pushed back due to the high volume of comments received during the public hearing on the Department of Consumer and Worker Protection's (DCWP) proposed rules to clarify the requirements of the legislation. It eventually became effective on July 5, 2023. From this date, companies operating and hiring in New York City are prohibited from using automated tools to hire candidates or promote employees, unless the tools have been independently audited for bias. In July 2023, the Biden–Harris Administration secured voluntary commitments from seven companies – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI – to manage the risks associated with AI. The companies committed to ensuring AI products undergo both internal and external security testing before public release; to share information on the management of AI risks with the industry, governments, civil society, and academia; to prioritize cybersecurity and protect proprietary AI system components; to develop mechanisms to inform users when content is AI-generated, such as watermarking; to publicly report on their AI systems' capabilities, limitations, and areas of use; to prioritize research on societal risks posed by AI, including bias, discrimination, and privacy concerns; and to develop AI systems to address societal challenges, ranging from cancer prevention to climate change mitigation. In September 2023, eight additional companies – Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability AI – subscribed to these voluntary commitments. In October 2023, the Biden administration signaled that it would release an executive order leveraging the federal government's purchasing power to shape AI regulations, hinting at a proactive governmental stance in regulating AI technologies. On October 30, 2023, President Biden released the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. The Executive Order addresses a variety of issues, such as focusing on standards for critical infrastructure, AI-enhanced cybersecurity, and federally funded biological synthesis projects. 
The Executive Order provides the authority to various agencies and departments of the US government, including the Energy and Defense departments, to apply existing consumer protection laws to AI development. The Executive Order builds on the Administration's earlier agreements with AI companies to instate new initiatives to "red-team" or stress-test AI dual-use foundation models, especially those that have the potential to pose security risks, with data and results shared with the federal government. The Executive Order also recognizes AI's social challenges, and calls for companies building AI dual-use foundation models to be wary of these societal problems. For example, the Executive Order states that AI should not "worsen job quality", and should not "cause labor-force disruptions". Additionally, Biden's Executive Order mandates that AI must "advance equity and civil rights", and cannot disadvantage marginalized groups. It also called for foundation models to include "watermarks" to help the public discern between human and AI-generated content, which has raised controversy and criticism from deepfake detection researchers. 2024 In February 2024, Senator Scott Wiener introduced the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act to the California legislature. The bill drew heavily on the Biden executive order. It had the goal of reducing catastrophic risks by mandating safety tests for the most powerful AI models. If passed, the bill would have also established a publicly-funded cloud computing cluster in California. On September 29, Governor Gavin Newsom vetoed the bill. It is considered unlikely that the legislature will override the governor's veto with a two-thirds vote from both houses. On March 21, 2024, the State of Tennessee enacted legislation called the ELVIS Act, aimed specifically at audio deepfakes, and voice cloning. This legislation was the first enacted legislation in the nation aimed at regulating AI simulation of image, voice and likeness. The bill passed unanimously in the Tennessee House of Representatives and Senate. This legislation's success was hoped by its supporters to inspire similar actions in other states, contributing to a unified approach to copyright and privacy in the digital age, and to reinforce the importance of safeguarding artists' rights against unauthorized use of their voices and likenesses. On March 13, 2024, Utah Governor Spencer Cox signed the S.B 149 "Artificial Intelligence Policy Act". This legislation goes into effect on May 1, 2024. It establishes liability, notably for companies that don't disclose their use of generative AI when required by state consumer protection laws, or when users commit criminal offense using generative AI. It also creates the Office of Artificial Intelligence Policy and the Artificial Intelligence Learning Laboratory Program. Regulation of fully autonomous weapons Legal questions related to lethal autonomous weapons systems (LAWS), in particular compliance with the laws of armed conflict, have been under discussion at the United Nations since 2013, within the context of the Convention on Certain Conventional Weapons. Notably, informal meetings of experts took place in 2014, 2015 and 2016 and a Group of Governmental Experts (GGE) was appointed to further deliberate on the issue in 2016. A set of guiding principles on LAWS affirmed by the GGE on LAWS were adopted in 2018. 
In 2016, China published a position paper questioning the adequacy of existing international law to address the eventuality of fully autonomous weapons, becoming the first permanent member of the U.N. Security Council to broach the issue, and leading to proposals for global regulation. The possibility of a moratorium or preemptive ban of the development and use of LAWS has also been raised on several occasions by other national delegations to the Convention on Certain Conventional Weapons and is strongly advocated for by the Campaign to Stop Killer Robots – a coalition of non-governmental organizations. The US government maintains that current international humanitarian law is capable of regulating the development or use of LAWS. The Congressional Research Service indicated in 2023 that the US doesn't have LAWS in its inventory, but that its policy doesn't prohibit the development and employment of it. See also AI alignment Algorithmic accountability Artificial intelligence Artificial intelligence and elections Artificial intelligence arms race Artificial intelligence in government Ethics of artificial intelligence Government by algorithm Legal informatics Regulation of algorithms References Existential risk from artificial general intelligence Computer law Regulation of technologies Regulation of artificial intelligence Politics and technology AI safety
Regulation of artificial intelligence
[ "Technology", "Engineering" ]
8,518
[ "Existential risk from artificial general intelligence", "Regulation of artificial intelligence", "Safety engineering", "Computer law", "AI safety", "Computing and society" ]
63,451,691
https://en.wikipedia.org/wiki/Moy%20%28salt%29
A moy was a measure for salt, used in British colonial North America. It amounted to about 15 bushels. It likely derives from the Portuguese moio and the trade in salt between North America and the Azores. Alternatively, the term may have come from the Scots, moy - a certain measure. Citations References Units of volume Edible salt Obsolete units of measurement
Moy (salt)
[ "Chemistry", "Mathematics" ]
75
[ "Obsolete units of measurement", "Units of volume", "Quantity", "Salts", "Edible salt", "Units of measurement" ]
63,452,220
https://en.wikipedia.org/wiki/NGC%20811
NGC 811 is an object in the New General Catalogue. It is an elliptical galaxy located in the constellation Cetus about 700 million light-years from the Milky Way. It was discovered by the American astronomer Francis Leavenworth in 1886. However, it is usually misidentified as a different object, the spiral galaxy PGC 7905. See also List of NGC objects (1–1000) References External links 0811 Elliptical galaxies Cetus 007870
NGC 811
[ "Astronomy" ]
97
[ "Cetus", "Constellations" ]
63,452,262
https://en.wikipedia.org/wiki/NGC%20812
NGC 812 is a spiral galaxy located in the Andromeda constellation, an estimated 175 million light-years from the Milky Way. NGC 812 was discovered on December 11, 1876, by astronomer Édouard Stephan. Two supernovae have been observed in NGC 812: SN 2010jj (type IIn, mag. 17) and SN 2020udy (type Iax [02cx-like], mag. 19.6). See also List of NGC objects (1–1000) References 0812 Spiral galaxies Andromeda (constellation) 008066 Astronomical objects discovered in 1876 Discoveries by Édouard Stephan
NGC 812
[ "Astronomy" ]
129
[ "Andromeda (constellation)", "Constellations" ]
63,452,313
https://en.wikipedia.org/wiki/NGC%20813
NGC 813 is a lenticular galaxy in the constellation Hydrus. It is estimated to be 390 million light-years from the Milky Way and has a diameter of approximately 140,000 ly. NGC 813 was discovered on November 24, 1834, by the British astronomer John Herschel. One supernova, SN 2020abzv (type Ia, mag. 16.6), was discovered in NGC 813 on 9 December, 2020. See also List of NGC objects (1–1000) References Lenticular galaxies Hydrus 0813 007692
NGC 813
[ "Astronomy" ]
120
[ "Hydrus", "Constellations" ]
63,452,688
https://en.wikipedia.org/wiki/NGC%20814
NGC 814 is a lenticular galaxy in the constellation Cetus. It is estimated to be about 70 million light-years from the Milky Way and has a diameter of approximately 30,000 ly. NGC 814 was discovered on January 6, 1886, by the American astronomer Ormond Stone. See also List of NGC objects (1–1000) References External links Lenticular galaxies 0814 Cetus 008319
NGC 814
[ "Astronomy" ]
87
[ "Cetus", "Constellations" ]
63,455,108
https://en.wikipedia.org/wiki/List%20of%20gene%20therapies
This article contains a list of commercially available gene therapies. Gene therapies Alipogene tiparvovec (Glybera): AAV-based treatment for lipoprotein lipase deficiency (no longer commercially available) Axicabtagene ciloleucel (Yescarta): treatment for large B-cell lymphoma Beremagene geperpavec (Vyjuvek): treatment of wounds. Betibeglogene autotemcel (Zynteglo): treatment for beta thalassemia Brexucabtagene autoleucel (Tecartus): treatment for mantle cell lymphoma and acute lymphoblastic leukemia Cambiogenplasmid (Neovasculgen): treatment for vascular endothelial growth factor peripheral artery disease Ciltacabtagene autoleucel (Carvykti): treatment for multiple myeloma Delandistrogene moxeparvovec (Elevidys): treatment for Duchenne muscular dystrophy Elivaldogene autotemcel (Skysona): treatment for cerebral adrenoleukodystrophy Etranacogene dezaparvovec (Hemgenix): AAV-based treatment for hemophilia B Exagamglogene autotemcel (Casgevy): treatment for sickle cell disease. Gendicine: treatment for head and neck squamous cell carcinoma Idecabtagene vicleucel (Abecma): treatment for multiple myeloma Lovotibeglogene autotemcel (Lyfgenia): treatment for sickle cell disease. Nadofaragene firadenovec (Adstiladrin): treatment for bladder cancer Obecabtagene autoleucel (Aucatzyl): treatment of acute lymphoblastic leukemia Onasemnogene abeparvovec (Zolgensma): AAV-based treatment for spinal muscular atrophy Strimvelis: treatment for adenosine deaminase deficiency (ADA-SCID) Talimogene laherparepvec (Imlygic): treatment for melanoma in patients who have recurring skin lesions Tisagenlecleucel (Kymriah): treatment for B cell lymphoblastic leukemia Valoctocogene roxaparvovec (Roctavian): treatment for hemophilia A Voretigene neparvovec (Luxturna): AAV-based treatment for Leber congenital amaurosis See also FDA-approved CAR T cell therapies References External links Applied genetics Bioethics Biotechnology Medical genetics Gene therapies Gene delivery Genetic engineering
List of gene therapies
[ "Chemistry", "Technology", "Engineering", "Biology" ]
594
[ "Bioethics", "Genetics techniques", "Biological engineering", "Molecular-biology-related lists", "Genetic engineering", "Biotechnology", "Molecular biology techniques", "Gene therapy", "nan", "Molecular biology", "Ethics of science and technology", "Gene delivery" ]
63,455,582
https://en.wikipedia.org/wiki/Rayleigh%E2%80%93Gans%20approximation
Rayleigh–Gans approximation, also known as Rayleigh–Gans–Debye approximation and Rayleigh–Gans–Born approximation, is an approximate solution to light scattering by optically soft particles. Optical softness implies that the relative refractive index of the particle is close to that of the surrounding medium. The approximation holds for particles of arbitrary shape that are relatively small but can be larger than Rayleigh scattering limits. The theory was derived by Lord Rayleigh in 1881 and was applied to homogeneous spheres, spherical shells, radially inhomogeneous spheres and infinite cylinders. Peter Debye contributed to the theory in 1915. The theory for a homogeneous sphere was rederived by Richard Gans in 1925. The approximation is analogous to the Born approximation in quantum mechanics. Theory The validity conditions for the approximation can be denoted as |n - 1| << 1 and kd|n - 1| << 1, where k is the wavevector of the light (k = 2π/λ), d refers to the linear dimension of the particle, and n is the complex refractive index of the particle. The first condition allows for a simplification in expressing the material polarizability in the derivation below. The second condition is a statement of the Born approximation, that is, that the incident field is not greatly altered within one particle so that each volume element is considered to be illuminated by an intensity and phase determined only by its position relative to the incident wave, unaffected by scattering from other volume elements. The particle is divided into small volume elements, which are treated as independent Rayleigh scatterers. For inbound light with s polarization, the scattering amplitude contribution from each volume element is given as: where denotes the phase difference due to each individual element, and the fraction in parentheses is the electric polarizability as found from the refractive index using the Clausius–Mossotti relation. Under the condition (n-1) << 1, this factor can be approximated as 2(n-1)/3. The phases affecting the scattering from each volume element depend only on their positions with respect to the incoming wave and the scattering direction. Integrating, the scattering amplitude function thus obtains: in which only the final integral, which describes the interfering phases contributing to the scattering direction (θ, φ), remains to be solved according to the particular geometry of the scatterer. Calling V the entire volume of the scattering object, over which this integration is performed, one can write the scattering parameter for scattering with the electric field polarization normal to the plane of incidence (s polarization) as and for polarization in the plane of incidence (p polarization) as where denotes the "form factor" of the scatterer: In order to find intensities only, we can define P as the squared magnitude of the form factor: Then the scattered radiation intensity, relative to the intensity of the incident wave, for each polarization can be written as: where r is the distance from the scatterer to the observation point. Per the optical theorem, the absorption cross section is given as: which is independent of the polarization. Applications The Rayleigh–Gans approximation has been applied to the calculation of the optical cross sections of fractal aggregates. The theory has also been applied to anisotropic spheres for nanostructured polycrystalline alumina and to turbidity calculations on biological structures such as lipid vesicles and bacteria. 
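As a concrete illustration of the intensity expressions above, the following Python sketch evaluates the Rayleigh–Gans result for a homogeneous sphere, for which the form factor takes the well-known closed form P(θ) = [3(sin u - u cos u)/u³]² with u = 2ka sin(θ/2). The wavelength, refractive indices, sphere radius and observation distance are arbitrary illustrative assumptions chosen so that the validity conditions are satisfied, and the overall prefactor follows the standard Rayleigh–Gans–Debye convention rather than being taken verbatim from this article.

```python
import numpy as np

# Minimal sketch of Rayleigh-Gans scattering by a homogeneous sphere.
# Assumed form factor: P(theta) = [3 (sin u - u cos u) / u^3]^2, u = 2 k a sin(theta/2).
# All numerical values below are illustrative assumptions, not measured data.

wavelength = 0.5e-6          # vacuum wavelength in metres (assumed)
n_medium   = 1.33            # surrounding medium, e.g. water (assumed)
n_particle = 1.36            # optically soft particle, |m - 1| << 1
a          = 0.2e-6          # sphere radius in metres (assumed)
r          = 1.0             # observation distance in metres (assumed)

k = 2 * np.pi * n_medium / wavelength        # wavenumber in the medium
m = n_particle / n_medium                    # relative refractive index
V = 4.0 / 3.0 * np.pi * a**3                 # particle volume

theta = np.linspace(1e-3, np.pi, 500)        # scattering angle (avoid exactly zero)
u = 2 * k * a * np.sin(theta / 2)
P = (3 * (np.sin(u) - u * np.cos(u)) / u**3) ** 2   # sphere form factor

prefactor = k**4 * V**2 * (m - 1)**2 / (4 * np.pi**2 * r**2)
i_perp = prefactor * P                       # s polarization (normal to scattering plane)
i_par  = prefactor * P * np.cos(theta)**2    # p polarization (in the scattering plane)

print("near-forward intensity ratio:", i_perp[0])
print("intensity ratio at 90 degrees:", i_perp[np.argmin(abs(theta - np.pi / 2))])
```

Doubling the sphere radius mainly narrows the forward-scattering lobe through the form factor, while the prefactor scales with the square of the volume, which is the qualitative behaviour the approximation is intended to capture.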
A nonlinear Rayleigh−Gans−Debye model was used to investigate second-harmonic generation in malachite green molecules adsorbed on polystyrene particles. See also Mie scattering Anomalous diffraction theory Discrete dipole approximation Gans theory References Scattering, absorption and radiative transfer (optics) Radio frequency propagation X-ray scattering
Rayleigh–Gans approximation
[ "Physics", "Chemistry" ]
754
[ "Physical phenomena", " absorption and radiative transfer (optics)", "Spectrum (physical sciences)", "Radio frequency propagation", "X-ray scattering", "Electromagnetic spectrum", "Waves", "Scattering" ]
63,455,728
https://en.wikipedia.org/wiki/Modular%20forms%20modulo%20p
In mathematics, modular forms are particular complex analytic functions on the upper half-plane of interest in complex analysis and number theory. When reduced modulo a prime p, there is an analogous theory to the classical theory of complex modular forms and the p-adic theory of modular forms. Reduction of modular forms modulo 2 Conditions to reduce modulo 2 Modular forms are analytic functions, so they admit a Fourier series. As modular forms also satisfy a certain kind of functional equation with respect to the group action of the modular group, this Fourier series may be expressed in terms of . So if is a modular form, then there are coefficients such that . To reduce modulo 2, consider the subspace of modular forms with coefficients of the -series being all integers (since complex numbers, in general, may not be reduced modulo 2). It is then possible to reduce all coefficients modulo 2, which will give a modular form modulo 2. Basis for modular forms modulo 2 Modular forms are generated by and . It is then possible to normalize and to and , having integers coefficients in their -series. This gives generators for modular forms, which may be reduced modulo 2. Note the Miller basis has some interesting properties: once reduced modulo 2, and are just ; that is, a trivial reduction. To get a non-trivial reduction, one must use the modular discriminant . Thus, modular forms are seen as polynomials of , and (over the complex in general, but seen over integers for reduction), once reduced modulo 2, they become just polynomials of over . The modular discriminant modulo 2 The modular discriminant is defined by an infinite product, where is the Ramanujan tau function. Results from Kolberg and Jean-Pierre Serre demonstrate that, modulo 2, we have i.e., the -series of modulo 2 consists of to powers of odd squares. Hecke operators modulo 2 The action of the Hecke operators is fundamental to understanding the structure of spaces of modular forms. It is therefore justified to try to reduce them modulo 2. The Hecke operators for a modular form are defined as follows: with . Hecke operators may be defined on the -series as follows: if , then with Since modular forms were reduced using the -series, it makes sense to use the -series definition. The sum simplifies a lot for Hecke operators of primes (i.e. when is prime): there are only two summands. This is very nice for reduction modulo 2, as the formula simplifies a lot. With more than two summands, there would be many cancellations modulo 2, and the legitimacy of the process would be doubtable. Thus, Hecke operators modulo 2 are usually defined only for primes numbers. With a modular form modulo 2 with -representation , the Hecke operator on is defined by where It is important to note that Hecke operators modulo 2 have the interesting property of being nilpotent. Finding their order of nilpotency is a problem solved by Jean-Pierre Serre and Jean-Louis Nicolas in a paper published in 2012:. The Hecke algebra modulo 2 The Hecke algebra may also be reduced modulo 2. It is defined to be the algebra generated by Hecke operators modulo 2, over . Following Serre and Nicolas's notations, , i.e. . Writing so that , define as the -subalgebra of given by and . That is, if is a sub-vector-space of , we get . Finally, define the Hecke algebra as follows: Since , one can restrict elements of to to obtain an element of . When considering the map as the restriction to , then is a homomorphism. As is either identity or zero, . 
Therefore, the following chain is obtained: . Then, define the Hecke algebra to be the projective limit of the above as . Explicitly, this means . The main property of the Hecke algebra is that it is generated by series of and . That is: . So for any prime , it is possible to find coefficients such that . References Modular forms Algebraic number theory
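The congruence for the modular discriminant quoted above (modulo 2, the q-series of Δ reduces to a sum of q raised to odd-square powers) is easy to check numerically. The short sketch below is a plain-Python illustration, not taken from the sources cited in the article; the truncation bound N is an arbitrary assumption, and the coefficients of Δ = q · ∏(1 - qⁿ)²⁴ are computed naively by repeated series multiplication before being reduced modulo 2.

```python
# Numerically check that the q-series of Delta modulo 2 has coefficient 1
# exactly at the odd squares 1, 9, 25, ...  (truncation bound N is arbitrary)

N = 200  # number of q-series coefficients to compute (illustrative choice)

def delta_q_expansion(n_terms):
    """Coefficients of Delta = q * prod_{n>=1} (1 - q^n)^24 up to q^n_terms."""
    coeffs = [0] * (n_terms + 1)
    coeffs[1] = 1                              # start from the series "q"
    for n in range(1, n_terms + 1):
        for _ in range(24):                    # multiply by (1 - q^n) twenty-four times
            new = coeffs[:]
            for k in range(n, n_terms + 1):
                new[k] -= coeffs[k - n]
            coeffs = new
    return coeffs

tau = delta_q_expansion(N)                     # tau(n) = coefficient of q^n
mod2 = [c % 2 for c in tau]

odd_squares = {m * m for m in range(1, N, 2) if m * m <= N}
observed_ones = {n for n, c in enumerate(mod2) if c == 1}

print(sorted(observed_ones))                   # expected: [1, 9, 25, 49, 81, 121, 169]
assert observed_ones == odd_squares
```

For N = 200 the coefficients equal to 1 occur exactly at 1, 9, 25, 49, 81, 121 and 169, in agreement with the result of Kolberg and Serre stated above.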
Modular forms modulo p
[ "Mathematics" ]
860
[ "Modular forms", "Algebraic number theory", "Number theory" ]
63,456,035
https://en.wikipedia.org/wiki/NGC%20911
NGC 911 is an elliptical galaxy located in the constellation Andromeda about 258 million light years from the Milky Way. It was discovered by French astronomer Édouard Stephan on 30 October 1878. It is a member of the galaxy cluster Abell 347. See also List of NGC objects (1–1000) References External links Elliptical galaxies Andromeda (constellation) 0911 009221 18781030 Discoveries by Édouard Stephan
NGC 911
[ "Astronomy" ]
86
[ "Andromeda (constellation)", "Constellations" ]
63,456,091
https://en.wikipedia.org/wiki/NGC%20912
NGC 912 is a compact lenticular galaxy located in the constellation Andromeda about 197 million light years from the Milky Way. It was discovered by French astronomer Édouard Stephan in 1878. See also List of NGC objects (1–1000) References External links Lenticular galaxies Andromeda (constellation) 0912 009222 Discoveries by Édouard Stephan
NGC 912
[ "Astronomy" ]
73
[ "Andromeda (constellation)", "Constellations" ]
63,456,127
https://en.wikipedia.org/wiki/NGC%20913
NGC 913 is a lenticular galaxy located in the constellation Andromeda about 224 million light years from the Milky Way. It was discovered by French astronomer Édouard Stephan in 1878. See also List of NGC objects (1–1000) References External links Lenticular galaxies Andromeda (constellation) 0913 009230 Discoveries by Édouard Stephan
NGC 913
[ "Astronomy" ]
72
[ "Andromeda (constellation)", "Constellations" ]
56,492,382
https://en.wikipedia.org/wiki/Daedalea%20ryvardeniana
Daedalea ryvardeniana is a neotropical species of poroid fungus in the family Fomitopsidaceae. Found in Brazil, it was described as new to science in 2012. Taxonomy The type collection was made in Chapada dos Guimarães National Park, in the state of Mato Grosso. The specific epithet honours polypore specialist Leif Ryvarden. Description Morphological characteristics of this fungus include an irregular hymenophore, with relatively large pores numbering 1 to 3 per millimetre. The structure of the pore surface ranges from daedaloid (maze-like), to somewhat labyrinthic, to gill-like. This hymenophore irregularity helps distinguish it from the similar Daedalea stereoides. D. ryvardeniana also has larger spores, measuring 7.5–11.0 by 2.5–3.5 μm. The spores feature a unique central concavity next to the apiculum and a tapering tip. The fungus has a dimitic hyphal system, with thick-walled generative hyphae containing a winding lumen. Habitat and distribution The fungus, a saprophyte, grows on logs and fallen branches of angiosperms, in which it causes brown rot. It is found in dry and somewhat xerophytic areas in Caatinga and Cerrado ecosystems located in Ceará, Paraíba, and Pernambuco States. It was later recorded in the Caatinga area of Bahia. References Fungi described in 2012 Fungi of Brazil Fomitopsidaceae Fungus species
Daedalea ryvardeniana
[ "Biology" ]
328
[ "Fungi", "Fungus species" ]
56,492,965
https://en.wikipedia.org/wiki/Royole
Royole Corporation was a manufacturer of flexible displays and sensors that can be used in a range of human-machine interface products, including foldable smartphones and other smart devices. History Royole was founded by Stanford engineering graduates, including founder Bill Liu, in 2012. The company, backed by investors including IDG Capital, AMTD Group and Knight Capital, produced fully flexible displays in volume at its 4.5-million-square-foot quasi-G6 mass production campus in Shenzhen, China. Royole had offices in Fremont, California, Hong Kong and Shenzhen. Towards the end of 2020, Royole filed to go public on Shanghai's STAR Market, but withdrew its application shortly afterwards when shareholder structure issues emerged. Milestones Royole produced the world's thinnest full-color flexible displays and flexible sensors (2014), the world's first foldable 3D mobile theater (2015), the world's first curved car dashboard based on flexible electronics (2016), the first smart writing pad, RoWrite, based on flexible sensors (2017), the volume production of Royole's quasi-G6 mass production campus for fully flexible displays (2018), and the world's first commercial foldable smartphone, FlexPai™, with a fully flexible display (2018). In May 2019, Royole partnered with Louis Vuitton to launch the “Canvas of the Future” line of handbags that featured built-in flexible displays. The bags were unveiled at Louis Vuitton's Cruise 2020 runway show in New York City. In December 2018, Airbus China Innovation Centre (ACCIC) announced a partnership with Royole to explore applications of flexible displays and sensors in aircraft development, with an aim to improve cabin safety and increase energy conservation. Awards Royole received international industry awards, including Red Dot Awards and International Design Awards, for its technological innovations and fast growth. Royole was named a 2018 VIP Award winner by TWICE Magazine. Controversies In December 2021, 10 months after failing to go public on Shanghai's STAR Market, it was reported that Royole had delayed salary payments to employees for months as the money-losing company suffered from a tight cash flow. Founder and CEO Liu Zihong held an all-hands meeting on November 30 to update employees on the company's financial situation. Liu said at the meeting that the company was in the process of obtaining financing and expected to receive funds in December, according to some employees. All back salaries would be paid at the end of December or in January, but there was still uncertainty, Liu said. References American companies established in 2012 Flexible electronics Electronics companies established in 2012 Companies based in Shenzhen Companies based in Fremont, California Chinese brands 2012 in Shenzhen
Royole
[ "Engineering" ]
555
[ "Electronic engineering", "Flexible electronics" ]
56,494,038
https://en.wikipedia.org/wiki/Snake%20Projection
The Snake Projection is a continuous map projection typically used as the planar coordinate system for realizing low distortion throughout long linear engineering projects. Details The Snake Projection was originally developed by University College London and Network Rail to provide a continuous low distortion projection for the West Coast Mainline infrastructure works. The parameters defining each Snake Projection are tailored for the specific project; the most typical use is with large-scale linear engineering projects such as rail infrastructure, however the projection is equally applicable to any application requiring a low distortion grid along a linear route (for example pipelines and roads). The name of the projection is derived from the sinuous snake-like nature of the projects it may be designed for. Typical map projection distance distortion characteristics of a Snake Projection are minimal over the whole route within approximately 20 kilometres of the centre line. The principal advantage of the projection is that, for the corridor defining the design space, distances measured on the ground have a nearly one to one relationship with distances in coordinate space (i.e. no scale factor need be applied to convert between distances in grid and distances on the ground). The length of the applicable corridor is variable on a project basis, however when required the projection can extend over several hundreds of kilometres to achieve grid distortion of less than 20 parts per million along the route. The main disadvantage is that away from the design corridor the distortion of the projection is not controlled. The Snake Projection is suited for engineering purposes due to its low distortion characteristics. An example of its differentiation from mapping grids is the 60m increase in length of the London to Birmingham section of the HS2 rail line, purely due to the more accurate grid representation compared to the length when using the national mapping coordinate system British National Grid. Usage The Snake Projection is the engineering coordinate system used for a significant proportion of primary rail routes in the UK, including that of the HS2 London to Birmingham high speed line. For the London to Glasgow West Coast Main Line the distortion in the Snake Projection used is no greater than 20 parts per million within 5 kilometres of either side of the track. Implementation The Snake Projection algorithm converts between geographical and grid coordinates, however the method of technical implementation can vary. One method of implementing a Snake Projection is to define using an NTv2 geodetic transformation coupled with a standard parameterised map projection (such as Transverse Mercator); this is increasing in popularity due to better compatibility with CAD and GIS software. The global EPSG geodetic coordinate system database features several snake projection definitions through the NTv2 approach. Other implementations include those published through the SnakeGrid organisation. See also List of map projections Surveying References Map projections Geodesy Rail infrastructure Surveying Civil engineering
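To make the scale-factor discussion above concrete, the sketch below uses the standard low-order Transverse Mercator approximation k ≈ k₀ · (1 + d²/(2R²)) to compare the distance distortion, in parts per million, of a national mapping grid with that of a projection tuned so that grid and ground distances agree on the centre line. This is only an illustration of why low-distortion projections matter, not the Snake Projection algorithm itself; the Earth radius, the British National Grid central scale factor and the sample offsets are assumptions introduced for the example.

```python
import math

# Illustrative comparison of grid distance distortion (ppm) for a national
# Transverse Mercator grid versus a locally tuned low-distortion grid.
# Formula k ~= k0 * (1 + d^2 / (2 R^2)) is the usual low-order approximation;
# the constants below are assumptions for the example.

R = 6_371_000.0                   # mean Earth radius in metres (assumed)
k0_national = 0.9996012717        # central-meridian scale of British National Grid
k0_local    = 1.0                 # projection tuned so ground ~ grid on the centre line

def distortion_ppm(distance_from_centre_m, k0):
    """Grid-minus-ground distance distortion in parts per million."""
    k = k0 * (1.0 + distance_from_centre_m**2 / (2.0 * R**2))
    return (k - 1.0) * 1e6

for d_km in (0, 5, 20, 100):
    d = d_km * 1000.0
    print(f"{d_km:>4} km off centre: "
          f"national grid {distortion_ppm(d, k0_national):8.1f} ppm, "
          f"local low-distortion grid {distortion_ppm(d, k0_local):6.1f} ppm")
```

Near its central meridian the national grid under-represents every kilometre on the ground by roughly 0.4 m, which is the same order of magnitude as the 60 m difference over the London to Birmingham section quoted above, whereas the locally tuned grid stays within a few parts per million across a corridor of about 20 km either side of the centre line.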
Snake Projection
[ "Mathematics", "Engineering" ]
541
[ "Applied mathematics", "Map projections", "Construction", "Surveying", "Civil engineering", "Coordinate systems", "Geodesy" ]
56,495,427
https://en.wikipedia.org/wiki/Squeezed%20states%20of%20light
In quantum physics, light is in a squeezed state if its electric field strength Ԑ for some phases has a quantum uncertainty smaller than that of a coherent state. The term squeezing thus refers to a reduced quantum uncertainty. To obey Heisenberg's uncertainty relation, a squeezed state must also have phases at which the electric field uncertainty is anti-squeezed, i.e. larger than that of a coherent state. Since 2019, the gravitational-wave observatories LIGO and Virgo employ squeezed laser light, which has significantly increased the rate of observed gravitational-wave events. Quantum physical background An oscillating physical quantity cannot have precisely defined values at all phases of the oscillation. This is true for the electric and magnetic fields of an electromagnetic wave, as well as for any other wave or oscillation (see figure right). This fact can be observed in experiments and is described by quantum theory. For electromagnetic waves usually just the electric field is considered, because it is the one that mainly interacts with matter. Fig. 1. shows five different quantum states that a monochromatic wave could be in. The difference of the five quantum states is given by different electric field excitations and by different distributions of the quantum uncertainty along the phase . For a displaced coherent state, the expectation (mean) value of the electric field shows an oscillation, with an uncertainty independent of the phase (a). Also the phase- (b) and amplitude-squeezed states (c) show an oscillation of the mean electric field, but here the uncertainty depends on phase and is squeezed for some phases. The vacuum state (d) is a special coherent state and is not squeezed. It has zero mean electric field for all phases and a phase-independent uncertainty. It has zero energy on average, i.e. zero photons, and is the ground state of the monochromatic wave we consider. Finally, a squeezed vacuum state has also a zero mean electric field but a phase-dependent uncertainty (e). Generally, quantum uncertainty reveals itself through a large number of identical measurements on identical quantum objects (here: modes of light) that, however, give different results. Let us again consider a continuous-wave monochromatic light wave (as emitted by an ultra-stable laser). A single measurement of Ԑ is performed over many periods of the light wave and provides a single number. The next measurements of Ԑ will be done consecutively on the same laser beam. Having recorded a large number of such measurements we know the field uncertainty at . In order to get the full picture, and for instance Fig.1(b), we need to record the statistics at many different phases . Quantitative description of (squeezed) uncertainty The measured electric field strengths at the wave's phase are the eigenvalues of the normalized quadrature operator , defined as where and are the annihilation and creation operators, respectively, of the oscillator representing the photon. is the wave's amplitude quadrature, equivalent to the position in optical phase space, and is the wave's phase quadrature, equivalent to momentum. and are non-commuting observables. Although they represent electric fields, they are dimensionless and satisfy the following uncertainty relation: where stands for the variance. (The variance is the mean of the squares of the measuring values minus the square of the mean of the measuring values.) 
If a mode of light is in its ground state (having an average photon number of zero), the uncertainty relation above is saturated and the variances of the quadrature are . (Other normalizations can also be found in literature. The normalization chosen here has the nice property that the sum of the ground state variances directly provide the zero point excitation of the quantized harmonic oscillator ). While coherent states belong to the semi-classical states, since they can be fully described by a semi-classical model, squeezed states of light belong to the so-called nonclassical states, which also include number states (Fock states) and Schrödinger cat states. Squeezed states (of light) were first produced in the mid-1980s. At that time, quantum noise squeezing by up to a factor of about 2 (3 dB) in variance was achieved, i.e. . Today, squeeze factors larger than 10 (10 dB) have been directly observed. A limitation is set by decoherence, mainly in terms of optical loss. The squeeze factor in Decibel (dB) can be computed in the following way: Representation of squeezed states by quasi-probability densities Quantum states such as those in Fig. 1 (a) to (e) are often displayed as Wigner functions, which are quasi-probability density distributions. Two orthogonal quadratures, usually and , span a phase space diagram, and the third axes provides the quasi probability of yielding a certain combination of . Since and are not precisely defined simultaneously, we cannot talk about a 'probability' as we do in classical physics but call it a 'quasi probability'. A Wigner function is reconstructed from time series of and . The reconstruction is also called 'quantum tomographic reconstruction'. For squeezed states, the Wigner function has a Gaussian shape, with an elliptical contour line, see Fig.: 1(f). Physical meaning of measurement quantity and measurement object Quantum uncertainty becomes visible when identical measurements of the same quantity (observable) on identical objects (here: modes of light) give different results (eigen values). In case of a single freely propagating monochromatic laser beam, the individual measurements are performed on consecutive time intervals of identical length. One interval must last much longer than the light's period; otherwise the monochromatic property would be significantly disturbed. Such consecutive measurements correspond to a time series of fluctuating eigen values. Consider an example in which the amplitude quadrature was repeatedly measured. The time series can be used for a quantum statistical characterization of the modes of light. Obviously, the amplitude of the light wave might be different before and after our measurement, i.e. the time series does not provide any information about very slow changes of the amplitude, which corresponds to very low frequencies. This is a trivial but also fundamental issue, since any data taking lasts for a finite time. Our time series, however, does provide meaningful information about fast changes of the light's amplitude, i.e. changes at frequencies higher than the inverse of the full measuring time. Changes that are faster than the duration of a single measurement, however, are invisible again. 
A quantum statistical characterization through consecutive measurements on some sort of a carrier is thus always related to a specific frequency interval, for instance described by with Based on this insight, we can describe the physical meaning of the observable more clearly: The quantum statistical characterization using identical consecutive modes carried by a laser beam confers to the laser beam's electric field modulation within a frequency interval. The actual observable needs to be labeled accordingly, for instance as . is the amplitude (or depth) of the amplitude modulation and the amplitude (or depth) of the phase modulation in the respective frequency interval. This leads to the doggerel expressions 'amplitude quadrature amplitude' and 'phase quadrature amplitude'''. Within some limitations, for instance set by the speed of the electronics, and can be freely chosen in course of data acquisition and, in particular, data processing. This choice also defines the measurement object, i.e. the mode that is characterized by the statistics of the eigen values of and . The measurement object thus is a modulation mode that is carried by the light beam. – In many experiments, one is interested in a continuous spectrum of many modulation modes carried by the same light beam. Fig. 2 shows the squeeze factors of many neighboring modulation modes versus . The upper trace refers to the uncertainties of the same modes being in their vacuum states, which serves as the 0 dB reference. The observables in squeezed light experiments correspond exactly to those being used in optical communication. Amplitude modulation (AM) and frequency modulation (FM) are the classical means to imprint information on a carrier field. (Frequency modulation is mathematically closely related to phase modulation). The observables and also correspond to the measurement quantities in laser interferometers, such as in Sagnac interferometers measuring rotation changes and in Michelson interferometers observing gravitational waves. Squeezed states of light thus have ample applications in optical communication and optical measurements. The most prominent and important application is in gravitational-wave observatories. Arguably, it is the first end-user driven application of quantum correlations. Squeezed light originally was not planned to be implemented in either Advanced LIGO nor in Advanced Virgo, but now it contributes a significant factor towards the observatories design sensitivities and increases the rate of observed gravitational-wave events. Frequency-dependent squeezing Frequency-dependent squeezing is a method being implemented at the LIGO–Virgo–KAGRA collaboration to improve sensitivity using its 300 m long filter cavities to handle light differently according to frequencies which allows to improve accuracy of phases at high frequencies at the cost of more inaccuracy in amplitudes at low frequencies and equivalently better amplitudes at low frequencies but worse phases at high frequencies, manipulating the uncertainty relation by the measurement of interest. Noise at high frequencies is dominated by shot noise while at low frequencies is dominated by radiation pressure noise so when one source is reduced the other increases. Applications Optical high-precision measurements Squeezed light is used to reduce the photon counting noise (shot noise) in optical high-precision measurements, most notably in laser interferometers. There are a large number of proof-of-principle experiments. 
Laser interferometers split a laser beam in two paths and overlap them again afterwards. If the relative optical path length changes, the interference changes, and the light power in the interferometer's output port as well. This light power is detected with a photo diode providing a continuous voltage signal. If for instance the position of one interferometer mirror vibrates and thereby causes an oscillating path length difference, the output light has an amplitude modulation of the same frequency. Independent of the existence of such a (classical) signal, a beam of light always carries at least the vacuum state uncertainty (see above). The (modulation) signal with respect to this uncertainty can be improved by using a higher light power inside the interferometer arms, since the signal increases with the light power. This is the reason (in fact the only one) why Michelson interferometers for the detection of gravitational waves use very high optical power. High light power, however, produces technical problems. Mirror surfaces absorb parts of the light, become warmer, get thermally deformed and reduce the interferometer's interference contrast. Furthermore, an excessive light power can excite unstable mechanical vibrations of the mirrors. These consequences are mitigated if squeezed states of light are used for improving the signal-to-noise-ratio. Squeezed states of light do not increase the light's power. They also do not increase the signal, but instead reduce the noise. Laser interferometers are usually operated with monochromatic continuous-wave light. The optimal signal-to-noise-ratio is achieved by either operating the differential interferometer arm lengths such that both output ports contain half of the input light power (half fringe) and by recording the difference signal from both ports, or by operating the interferometer close to a dark fringe for one of the output ports where just a single photodiode is placed. The latter operation point is used in gravitational-wave (GW) detectors. For improving an interferometer sensitivity with squeezed states of light, the already existing bright light does not need to be fully replaced. What has to be replaced is just the vacuum uncertainty in the difference of the phase quadrature amplitudes of the light fields in the arms, and only at modulation frequencies at which signals are expected. This is achieved by injecting a (broadband) squeezed vacuum field (Fig. 1e) into the unused interferometer input port (Fig. 3). Ideally, perfect interference with the bright field is achieved. For this the squeezed field has to be in the same mode as the bright light, i.e. has to have the same wavelength, same polarisation, same wavefront curvature, same beam radius, and, of course, the same directions of propagation in the interferometer arms. For the squeezed-light enhancement of a Michelson interferometer operated at dark fringe, a polarising beam splitter in combination with a Faraday rotator is required. This combination constitutes an optical diode. Without any loss, the squeezed field is overlapped with the bright field at the interferometer's central beam splitter, is split and travels along the arms, is retro-reflected, constructively interferes and overlaps with the interferometer signal towards the photo diode. Due to the polarisation rotation of the Faraday rotator, the optical loss on signal and squeezed field is zero (in the ideal case). 
Generally, the purpose of an interferometer is to transform a differential phase modulation (of two light beams) into an amplitude modulation of the output light . Accordingly, the injected vacuum-squeezed field is injected such that the differential phase quadrature uncertainty in the arms is squeezed. On the output light amplitude quadrature squeezing is observed. Fig. 4 shows the photo voltage of the photo diode in the interferometer output port. Subtracting the constant offset provides the (GW) signal. A source of squeezed states of light were integrated in the gravitational-wave detector GEO600 in 2010, as shown in Fig. 4. The source was built by the research group of R. Schnabel at Leibniz Universität Hannover (Germany). With squeezed light, the sensitivity of GEO600 during observational runs has been increased to values, which for practical reasons were not achievable without squeezed light. In 2018, squeezed light upgrades are also planned for the gravitational wave detectors Advanced LIGO and Advanced Virgo. Going beyond squeezing of photon counting noise, squeezed states of light can also be used to correlate quantum measurement noise (shot noise) and quantum back action noise to achieve sensitivities in the quantum non-demolition (QND) regime. Radiometry and calibration of quantum efficiencies Squeezed light can be used in radiometry to calibrate the quantum efficiency of photo-electric photo detectors without a lamp of calibrated radiance. Here, the term photo detector refers to a device that measures the power of a bright beam, typically in the range from a few microwatts up to about 0.1 W. The typical example is a PIN photo diode. In case of perfect quantum efficiency (100%), such a detector is supposed to convert every photon energy of incident light into exactly one photo electron. Conventional techniques of measuring quantum efficiencies require the knowledge of how many photons hit the surface of the photo detector, i.e. they require a lamp of calibrated radiance. The calibration on the basis of squeezed states of light uses instead the effect, that the uncertainty product increases the smaller the quantum uncertainty of the detector is. In other words: The squeezed light method uses the fact that squeezed states of light are sensitive against decoherence. Without any decoherence during generation, propagation and detection of squeezed light, the uncertainty product has its minimum value of 1/16 (see above). If optical loss is the dominating decoherence effect, which usually is the case, the independent measurement of all optical losses during generation and propagation together with the value of the uncertainty product directly reveals the quantum uncertainty of the photo detectors used. When a squeezed state with squeezed variance is detected with a photo detector of quantum efficiency (with ), the actually observed variance is increased to Optical loss mixes a portion of the vacuum state variance to the squeezed variance, which decreases the squeeze factor. The same equation also describes the influence of a non-perfect quantum efficiency on the anti-squeezed variance. The anti-squeezed variance reduces, however, the uncertainty product increases. Optical loss on a pure squeezed state produces a mixed squeezed state. 
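The effect of a non-perfect quantum efficiency described above lends itself to a quick numerical illustration. The sketch below assumes the normalization used in this article, in which the ground-state quadrature variance is 1/4, applies the loss relation (observed variance equals η times the squeezed variance plus (1 - η) times the vacuum variance) and expresses the result as a squeeze factor in decibels; the chosen injected squeezing levels and efficiencies are arbitrary example values, not measured data.

```python
import math

# Sketch of how optical loss degrades an observed squeeze factor.
# Normalization as in this article: vacuum quadrature variance V_VAC = 1/4.
# Input squeezing levels and efficiencies below are illustrative assumptions.

V_VAC = 0.25   # ground-state (vacuum) quadrature variance

def observed_squeezing_db(injected_db, eta):
    """Observed squeeze factor in dB after lumping all optical loss into eta."""
    v_sq = V_VAC * 10 ** (-injected_db / 10.0)      # injected squeezed variance
    v_obs = eta * v_sq + (1.0 - eta) * V_VAC        # loss admixes vacuum noise
    return 10.0 * math.log10(V_VAC / v_obs)

for injected in (3.0, 10.0, 20.0):
    for eta in (1.0, 0.9, 0.5):
        print(f"injected {injected:4.1f} dB, efficiency {eta:.2f} "
              f"-> observed {observed_squeezing_db(injected, eta):5.2f} dB")
```

The numbers make the decoherence argument quantitative: even 20 dB of injected squeezing is reduced to about 3 dB once half of the light is lost, which is why observed squeeze factors are limited mainly by optical loss.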
Entanglement-based quantum key distribution Squeezed states of light can be used to produce Einstein-Podolsky-Rosen-entangled light that is the resource for a high quality level of quantum key distribution (QKD), which is called 'one-sided device independent QKD'. Superimposing on a balanced beam splitter two identical light beams that carry squeezed modulation states and have a propagation length difference of a quarter of their wavelength produces two EPR entangled light beams at the beam splitter output ports. Quadrature amplitude measurements on the individual beams reveal uncertainties that are much larger than those of the ground states, but the data from the two beams show strong correlations: from a measurement value taken at the first beam (), one can infer the corresponding measurement value taken at the second beam (). If the inference shows an uncertainty smaller than that of the vacuum state, EPR correlations exist, see Fig. 5. The aim of quantum key distribution is the distribution of identical, true random numbers to two distant parties A and B in such a way that A and B can quantify the amount of information about the numbers that has been lost to the environment (and thus is potentially in hand of an eavesdropper). To do so, sender (A) sends one of the entangled light beams to receiver (B). A and B measure repeatedly and simultaneously (taking the different propagation times into account) one of two orthogonal quadrature amplitudes. For every single measurement they need to choose whether to measure or in a truly random way, independently from each other. By chance, they measure the same quadrature in 50% of the single measurements. After having performed a large number of measurements, A and B communicate (publicly) what their choice was for every measurement. The non-matched pairs are discarded. From the remaining data they make public a small but statistically significant amount to test whether B is able to precisely infer the measurement results at A. Knowing the characteristics of the entangled light source and the quality of the measurement at the sender site, the sender gets information about the decoherence that happened during channel transmission and during the measurement at B. The decoherence quantifies the amount of information that was lost to the environment. If the amount of lost information is not too high and the data string not too short, data post processing in terms of error correction and privacy amplification produces a key with an arbitrarily reduced epsilon-level of insecurity. In addition to conventional QKD, the test for EPR correlations not only characterizes the channel over which the light was sent (for instance a glas fibre) but also the measurement at the receiver site. The sender does not need to trust the receivers measurement any more. This higher quality of QKD is called one-sided device independent. This type of QKD works if the natural decoherence is not too high. For this reason, an implementation that uses conventional telecommunication glas fibers would be limited to a distance of a few kilometers. Generation Squeezed light is produced by means of nonlinear optics. The most successful method uses degenerate type I optical-parametric down-conversion (also called optical-parametric amplification) inside an optical resonator. 
To squeeze modulation states with respect to a carrier field at optical frequency , a bright pump field at twice the optical frequency is focussed into a nonlinear crystal that is placed between two or more mirrors forming an optical resonator. It is not necessary to inject light at frequency . (Such light, however, is required for detecting the (squeezed) modulation states). The crystal material needs to have a nonlinear susceptibility and needs to be highly transparent for both optical frequencies used. Typical materials are lithium niobate (LiNbO3) and (periodically poled) potassium titanyl phosphate (KTP). Due to the nonlinear susceptibility of the pumped crystal material, the electric field at frequency is amplified and deamplified, depending on the relative phase to the pump light. At the pump's electric field maxima, the electric field at frequency is amplified. At the pump's electric field minima, the electric field at frequency is squeezed. This way, the vacuum state (Fig. 1e) is transferred to a squeezed vacuum state (Fig. 1d). A displaced coherent state (Fig. 1a) is transferred to a phase squeezed state (Fig. 1b) or to an amplitude squeezed state (Fig. 1c), depending on the relative phase between coherent input field and pump field. A graphical description of these processes can be found in. The existence of a resonator for the field at is essential. The task of the resonator is shown in Fig. 6. The left resonator mirror has a typical reflectivity of about . Correspondingly of the electric field that (continuously) enters from the left gets reflected. The remaining part is transmitted and resonates between the two mirrors. Due to the resonance, the electric field inside the resonator gets enhanced (even without any medium inside). of the steady-state light power inside the resonator gets transmitted towards the left and interferes with the beam that was retro-reflected directly. For an empty loss-less resonator, 100% of the light power would eventually propagate towards the left, obeying energy conservation. The principle of the squeezing resonator is the following: The medium parametrically attenuates the electric field inside the resonator to such a value that perfect destructive interference is achieved outside the resonator for the attenuated field quadrature. The optimum field attenuation factor inside the resonator is slightly below 2, depending on the reflectivity of the resonator mirror. This principle also works for electric field uncertainties. Inside the resonator, the squeeze factor is always less than 6 dB, but outside the resonator it can be arbitrarily high. If quadrature is squeezed, quadrature is anti-squeezed – inside as well as outside the resonator. It can be shown that the highest squeeze factor for one quadrature is achieved if the resonator is at its threshold for the orthogonal quadrature. At threshold and above, the pump field is converted into a bright field at optical frequency . Squeezing resonators are usually operated slightly below threshold, for instance, to avoid damage to the photo diodes due to the bright down-converted field. A squeezing resonator works efficiently at modulation frequencies well inside its linewidth. Only for these frequencies highest squeeze factors can be achieved. At frequencies the optical-parametric gain is strongest, and the time delay between the interfering parts negligible. 
If decoherence was zero, infinite squeeze factors could be achieved outside the resonator, although the squeeze factor inside the resonator was less than 6 dB. Squeezing resonators have typical linewidths of a few tens of MHz up to GHz. Due to the interest in the interaction between squeezed light and atomic ensemble, narrowband atomic resonance squeezed light have been also generated through crystal and the atomic medium. Detection Squeezed states of light can be fully characterized by a photo-electric detector that is able to (subsequently) measure the electric field strengths at any phase . (The restriction to a certain band of modulation frequencies happens after the detection by electronic filtering.) The required detector is a balanced homodyne detector (BHD). It has two input ports for two light beams. One for the (squeezed) signal field, and another for the BHDs local oscillator (LO) having the same wavelength as the signal field. The LO is part of the BHD. Its purpose is to beat with the signal field and to optically amplify it. Further components of the BHD are a balanced beam splitter and two photo diodes (of high quantum efficiency). Signal beam and LO need to be overlapped at the beam splitter. The two interference results in the beam splitter output ports are detected and the difference signal recorded (Fig. 7). The LO needs to be much more intense than the signal field. In this case the differential signal from the photo diodes in the interval is proportional to the quadrature amplitude . Changing the differential propagation length before the beam splitter sets the quadrature angle to an arbitrary value. (A change by a quarter of the optical wavelength changes the phase by  .) The following should be stated at this point: Any information about the electro-magnetic wave can only be gathered in a quantized way, i.e. by absorbing light quanta (photons). This is also true for the BHD. However, a BHD cannot resolve the discrete energy transfer from the light to the electric current, since in any small time interval a vast number of photons are detected. This is ensured by the intense LO. The observable therefore has a quasi-continuous eigenvalue spectrum, as it is expected for an electric field strength. (In principle, one can also characterize squeezed states, in particular squeezed vacuum'' states, by counting photons, however, in general the measurement of the photon number statistic is not sufficient for a full characterization of a squeezed state and the full density matrix in the basis of the number states has to be determined.) See also Spin squeezed state References Quantum states Light
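As a final illustration of the balanced-homodyne measurement described above, the following Monte Carlo sketch draws quadrature samples of a squeezed vacuum state at different local-oscillator phases and recovers the phase-dependent variance. It assumes Gaussian statistics with Var[X_θ] = V_sq cos²θ + V_anti sin²θ in the article's normalization (vacuum variance 1/4); the squeezing level, the assumption of a pure state, and the sample size are illustrative choices.

```python
import math
import random

# Monte Carlo sketch of a homodyne phase scan on a squeezed vacuum state.
# Normalization as in this article: vacuum quadrature variance V_VAC = 1/4.
# 10 dB of squeezing and a pure state (so V_anti = V_VAC**2 / V_sq) are
# assumptions made only for illustration.

V_VAC = 0.25
squeezing_db = 10.0
v_sq = V_VAC * 10 ** (-squeezing_db / 10.0)   # squeezed quadrature variance
v_anti = V_VAC ** 2 / v_sq                    # anti-squeezed variance (pure state)

random.seed(1)
samples_per_phase = 20_000

for deg in range(0, 181, 30):
    theta = math.radians(deg)
    # Variance of the measured quadrature X_theta = X cos(theta) + Y sin(theta).
    var_theta = v_sq * math.cos(theta) ** 2 + v_anti * math.sin(theta) ** 2
    sigma = math.sqrt(var_theta)
    draws = [random.gauss(0.0, sigma) for _ in range(samples_per_phase)]
    mean = sum(draws) / samples_per_phase
    est_var = sum((x - mean) ** 2 for x in draws) / (samples_per_phase - 1)
    rel_db = 10.0 * math.log10(est_var / V_VAC)   # negative: squeezed, positive: anti-squeezed
    print(f"LO phase {deg:3d} deg: variance relative to vacuum {rel_db:6.2f} dB")
```

A phase scan of this kind is exactly what underlies the quantum tomographic reconstruction mentioned earlier, since it samples the marginal distributions of the state for all quadrature angles.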
Squeezed states of light
[ "Physics" ]
5,319
[ "Physical phenomena", "Spectrum (physical sciences)", "Electromagnetic spectrum", "Quantum mechanics", "Waves", "Light", "Quantum states" ]
56,495,502
https://en.wikipedia.org/wiki/Charles%20Meneveau
Charles Meneveau (born 1960) is a French-Chilean born American fluid dynamicist, known for his work on turbulence, including turbulence modeling and computational fluid dynamics. Charles Meneveau, the Louis M. Sardella Professor in Mechanical Engineering and an associate director of the Institute for Data Intensive Engineering and Science (IDIES) at the Johns Hopkins University, focuses his research on understanding and modeling hydrodynamic turbulence, and on complexity in fluid mechanics in general. He combines computational, theoretical and experimental tools for his research, with an emphasis on the multiscale aspects of turbulence, using tools such as subgrid-scale modeling, downscaling techniques, and fractal geometry, and applications to Large Eddy Simulation (LES). He pioneered the use of the Lagrangian dynamic procedure for sub-grid scale modeling in large-eddy simulation (LES) of turbulence. His recent work includes the use of LES for wind-energy-related applications and the development of the Johns Hopkins Turbulence Database for sharing large-scale datasets from high-fidelity computational fluid dynamics calculations. Education 1989: Ph.D. in mechanical engineering, Yale University, May 1989 1988: Master of Philosophy, Yale University, 1988 1987: Master of Science, Yale University, 1987 1985: B.S. in mechanical engineering, Universidad Técnica Federico Santa María, Valparaíso (Chile), 1985 His Ph.D. advisor was K. R. Sreenivasan and his thesis was on the multi-fractal nature of small-scale turbulence. At Yale he was also informally co-advised by B. B. Mandelbrot. Career and research Meneveau's postdoctoral position was at the Stanford University/NASA-Ames's Center. He has been on the faculty of the Johns Hopkins University since 1990. His main appointment is in the Department of Mechanical Engineering with secondary appointments in the Departments of Environmental Health and Engineering and Physics and Astronomy. Professor Meneveau’s research is focused on understanding and modeling hydrodynamic turbulence, and complexity in fluid mechanics in general. Special emphasis is placed on the multiscale aspects of turbulence, using tools such as subgrid-scale modeling, downscaling techniques, and fractal geometry. Applications of the results to Large Eddy Simulation (LES) have facilitated applications of LES to engineering, environmental and geophysical flow phenomena. Currently Meneveau is focused on applications of LES to wind energy and wind farm fluid dynamics, on developing advanced wall models for LES, on modeling oil dispersion in the ocean, as well as on building “big-data” tools to share the very large data sets that arise in computational fluid dynamics with broad constituencies of scientists and engineers around the world Among Meneveau’s main contributions are advances to turbulence modeling and large eddy simulations. The advances were made possible by elucidating the properties of the small-scale motions in turbulent flows and applying the new insights to the development of advanced subgrid-scale models, such as the Lagrangian dynamic model. This model has been implemented in various research and open source CFD codes (e.g. OpenFoam) and expanded the applicability of Large Eddy Simulations to complex-geometry flows of engineering and environmental interest, where prior models could not be used. Among the application areas of Large Eddy Simulation being pursued in Meneveau’s group is the study of complex flows in large wind farms. 
Using the improved simulation tools as well as wind tunnel tests, Meneveau and his colleagues identified the important process of vertical entrainment of mean flow kinetic energy into an array of wind turbines. This research has clarified the mechanisms limiting wind plant performance at a time when there is enormous growth in wind farms. The research has led to new engineering models that will allow for better designed wind farms thus increasing their economic benefit and helping to reduce greenhouse gas emissions from fossil fuels. Meneveau has participated in efforts to democratize access to valuable “big data” in turbulence. As a deputy director of JHU’s Institute for Data Intensive Engineering and Science, he worked with a team of computer scientists, applied mathematicians, astrophysicists, and fluid dynamicists that built the JHTDB (Johns Hopkins Turbulence Databases). This open numerical laboratory provides researchers from around the world with user-friendly access to large data sets arising from Direct Numerical Simulations of various types of turbulent flows. To date, hundreds of researchers worldwide have used the data, and flow data at over hundred trillion points have been sampled from the database. The system has demonstrated how “big data” resulting from large world-class numerical simulations can be shared with many researchers who lack the massive supercomputing resources needed to generate such data. Meneveau also has performed groundbreaking research on understanding several multiscale aspects of turbulence. As part of his doctoral work at Yale in the late 1980s, Meneveau and his advisor Prof. K. R. Sreenivasan established the fractal and multifractal theory for turbulent flows and confirmed the theory using experiments. Interfaces in turbulence were shown to have a fractal dimension of nearly 7/3, where the 1/3 exponent above the value of two valid for smooth surfaces could be related to the classic Kolmogorov theory. And a universal multi-fractal spectrum was established, leading to a simple cascade model, which has since been applied to many other physical, biological and socio-economic systems. Later, as a postdoc at Stanford University’s Center for Turbulence Research under the guidance of Prof. P. Moin, Meneveau pioneered the application of orthogonal wavelet analysis to turbulence, introducing the concept of wavelet spectrum and other scale-dependent statistical measures of variability. Awards, honors, societies and journal editorships Awards 2021: Recipient, 2021 Fluid Dynamics Award from the American Institute of Aeronautics and Astronautics (AIAA), "for advancing both the theoretical and practical understanding of turbulence through groundbreaking modeling techniques and applications of large-eddy simulation." 2018: Elected Member, National Academy of Engineering (NAE), “for contributions to turbulence small-scale dynamics, large-eddy simulations, wind farm fluid dynamics, and leadership in the fluid dynamics community”. 2016: Awarded honorary doctorate from the Danish Technical University, Doctor Tecnices, Honoris Causa for “Outstanding and highly innovative scientific achievements in fluid dynamics, particularly for his work on turbulence and atmospheric physics and its applications to wind energy”. 
2014-2015: Midwest Mechanics Lecturer 2012-2013: Fulbright Scholar, US-Australia Fulbright Scholarship 2012: Stanley Corrsin Lecturer, Johns Hopkins University 2011: First recipient of the Stanley Corrsin Award from the American Physical Society, citation: “For his innovative use of experimental data and turbulence theory in the development of advanced models for large-eddy simulations, and for the application of these models to environmental, geophysical and engineering applications.”   2005: Foreign corresponding member of the Chilean Academy of Sciences 2005: Appointed to the Louis M. Sardella Professorship in Mechanical Engineering 2004: UCAR Outstanding Publication Award for co-authorship of the paper by Horst et al., that appeared in J. Atmospheric Science     2003: Johns Hopkins University Alumni Association Excellence in Teaching Award 2001: François N. Frenkiel Award for Fluid Mechanics, American Physical Society 1989: Henry P. Becton Prize for Excellence in Research, Yale University 1985: Premio Federico Santa María, UTFSM Valparaíso, Chile Societies American Academy of Mechanics, Fellow. American Society of Mechanical Engineers, Fellow. American Physical Society, Fellow. Pi Tau Sigma, Honorary Member American Geophysical Union, Member. American Institute for Aeronautics and Astronautics, Senior Member. Editorships 2010–Present: Deputy editor, Journal of Fluid Mechanics 2019: Chair, American Physical Society, Division of Fluid Dynamics 2008–Present: Key participant in the development and maintenance of the JHTDB (Johns Hopkins Turbulence Databases) open numerical laboratory 2003-2015: Editor-in-chief, Journal of Turbulence 2005-2010: Associate editor, Journal of Fluid Mechanics 2005-2010: Member, editorial committee, Annual Rev. of Fluid Mechanics 2001–Present: Member, advisory board, Theor. & Comp. Fluid Dynamics 2001-2003: Associate editor, Physics of Fluids 2003: Guest associate editor, Annual Reviews of Fluid Mechanics Journal publications Google Scholar page References External links Biography at Johns Hopkins University 1960 births Living people Chilean scientists Yale University alumni Johns Hopkins University faculty 21st-century American physicists Fluid dynamicists Members of the United States National Academy of Engineering Fellows of the American Physical Society
Charles Meneveau
[ "Chemistry" ]
1,746
[ "Fluid dynamicists", "Fluid dynamics" ]
56,495,543
https://en.wikipedia.org/wiki/Laser%20chemical%20vapor%20deposition
Laser chemical vapor deposition (LCVD) is a chemical process used to produce high-purity, high-performance films, fibers, and mechanical hardware (MEMS). It is a form of chemical vapor deposition in which a laser beam is used to locally heat the semiconductor substrate, causing the vapor deposition chemical reaction to proceed faster at that site. The process is used in the semiconductor industry for spot coating, in the MEMS industry for 3-D printing of hardware such as springs and heating elements, and in the composites industry for boron and ceramic fibers. As with conventional CVD, one or more gas-phase precursors are thermally decomposed, and the resulting chemical species (1) deposit on a surface, or (2) react, form the desired compound, and then deposit on a surface, or a combination of (1) and (2). References Semiconductor device fabrication Thin film deposition
Laser chemical vapor deposition
[ "Chemistry", "Materials_science", "Mathematics" ]
186
[ "Microtechnology", "Thin film deposition", "Coatings", "Thin films", "Semiconductor device fabrication", "Planes (geometry)", "Solid state engineering" ]
56,497,087
https://en.wikipedia.org/wiki/ESA%20Vigil
Vigil, formerly known as Lagrange, is a space weather mission developed by the European Space Agency. The mission will provide the ESA Space Weather Office with instruments able to monitor the Sun, its solar corona and interplanetary medium between the Sun and Earth, to provide early warnings of increased solar activity, to identify and mitigate potential threats to society and ground, airborne and space based infrastructure as well as to allow 4 to 5 days space weather forecasts. To this purpose the Vigil mission will place for the first time a spacecraft at Sun-Earth Lagrange point 5 (L5) from where it would get a 'side' view of the Sun, observing regions of solar activity on the solar surface before they turn and face Earth. Monitoring space weather includes events such as solar flares, coronal mass ejections, geomagnetic storms, solar proton events, etc. The Sun-Earth L5 location provides opportunities for space weather forecasting by monitoring the Sun beyond the Eastern solar limb not visible from Earth, thus increasing the forecast lead time of potentially hazardous solar phenomena including solar flares, fast solar wind streams. The Vigil mission will improve the assessment of Coronal Mass Ejection (CME) motion and density, speed/energy, arrival time and impact on Earth to support protection of the critical infrastructure on ground and in space. The mission will also perform in-situ observations of the solar wind bulk velocity, density, and temperature as well as the Interplanetary magnetic field(IMF) at L5, to provide enhanced detection and forecasting of high-speed solar wind streams and corotating interaction regions. Status As part of the Space Situational Awareness Programme (SSA), ESA initiated in 2015 the assessment of two missions to enhance space weather monitoring. These missions were initially meant to utilize the positioning of satellites at the Sun-Earth Lagrangian L1 and L5 points. Eventually, in the frame of the cooperation on space-based space weather observations between the European Space Agency (ESA) and the United States National Oceanic and Atmospheric Administration (NOAA) National Environmental Satellite Data and Information Service (NESDIS) the following was agreed: NOAA/NESDIS will launch a Space Weather Follow On (SWFO) Mission to Lagrange Point L1 for continuity of operational space weather observations and to reduce the risk of a measurement gap in the current coronal mass ejection (CME) imagery and in-situ solar wind measurements. ESA will launch a mission to Lagrange Point L5 to provide capability for solar and space environment monitoring away from the Sun-Earth line. In the scope of this agreement the two agencies will share data and provide each other with instruments to be embarked on the respective platforms. The space segment of the Vigil mission completed the first part of Preliminary Definition (Phase B1) in June 2022. On 21 November 2022, ESA issued a Request for Quotation to Airbus Defence and Space Ltd. for the design, development and verification (Phase B2, C and D) of the Vigil Space Segment. The formal start of the activities is planned before the end of 2023. Vigil is scheduled to be launched in 2031, followed by 3 years of cruise to L5. The mission aims to operate nominally for 4.5 years, with a possibility of extension up to 5 additional years. 
Objectives Vigil mission objectives can be grouped into two main categories: Nowcasting, with the aim of providing an early warning about solar flares and the onset of Coronal Mass Ejections (CMEs). Thanks to the side view from SEL5, the Vigil mission will also be able to improve the accuracy of the predicted CME arrival time at Earth by 2 to 4 hours compared to the current capabilities; this will be achieved by monitoring the entire space between Sun and Earth, allowing mid-course tracking of CMEs and, in general, of solar wind features as they travel towards Earth. Forecasting, up to 4 to 5 days ahead, of the developing solar activity thanks to the monitoring of active region development beyond the East limb not visible from Earth. In-situ measurements at Sun-Earth L5 will allow monitoring of high-speed solar wind streams and the magnetic field several days before they reach the Earth. Mission Architecture Space Segment Platform The Platform supplies all service-related functions required to support the proper operation and data collection of the Vigil Payload Suite. The key feature of the spacecraft concept for an operational mission like Vigil is a robust avionics architecture able to remain operational during the most extreme space weather events seen in the last hundred years. The Failure Detection, Isolation and Recovery (FDIR) function will be designed to enhance the autonomy of the spacecraft, thus reducing the risk of service interruptions requiring ground intervention. The Mission Data downlink is via X-band at an average data rate of ~1 Mbps (~86 Gbits per day) with 24/7 coverage provided by ESTRACK supplemented by additional commercial stations. The mass at launch is projected close to 2500 kg. To reach SEL5 the proposed design will rely on a bi-propellant Chemical Propulsion System equipped with a 450 N main engine. Payload Suite The Payload Suite will include: 3 remote sensing instruments; 2 in-situ instruments. In the frame of the inter-agency cooperation between ESA and NASA, Vigil will offer the possibility to accommodate an additional NASA instrument of opportunity (NIO). Remote sensing instruments The remote sensing instruments will allow the size, mass, speed, and direction of CMEs to be estimated. Compact Coronagraph (CCOR): it will image the solar corona and be used to observe Coronal Mass Ejections (CMEs). With CCOR data the size, mass, speed, and direction of CMEs can be derived. The CCOR instrument will be provided to ESA by NOAA and manufactured by the U.S. Naval Research Laboratory (NRL). The instrument design is based on the heritage of a similar instrument for NOAA's missions SWFO-1 and GOES-U. Heliospheric Imager (HI): it will provide wide-angle, white-light images of the region of space between the Sun and the Earth (i.e., the heliosphere). These images are required to enable tracking of Earth-directed CMEs over their propagation path once they have left the field-of-view of the coronagraph instrument. Photospheric Magnetic field Imager (PMI): it will scan a selected solar spectrum to generate 3D maps of the magnetic field (field strength, azimuth, inclination) and crucial physical parameters (e.g. distribution of vertical and horizontal magnetic fields, distribution of inclination angles, twist, writhe, helicity, current density, shear angles, photospheric magnetic excess energy etc.) for enhanced space weather applications. 
The instrument will also generate solar white light images as by-products of the magnetograph measurements, produced as continuum images observed at an additional wavelength point in the vicinity of the magnetically sensitive spectral line. In-situ instruments In-situ instruments can be used to monitor Stream Interaction Regions (SIR) and Co-rotating Interaction Regions (CIR) up to 4–5 days before their arrival at Earth. Plasma Analyser (PLA): it will measure the solar wind bulk velocity, bulk density and temperature, which are required for monitoring of the solar wind that is turning towards the Earth and particularly for the detection of high-speed solar wind streams that produce Stream Interaction Regions (SIR) and Co-rotating Interaction Regions (CIR). Magnetometer (MAG): it will measure the Interplanetary Magnetic Field (IMF) at L5; to minimise the effects of the electromagnetic interference generated by the Vigil spacecraft itself, the MAG will be placed at the end of a 7 m boom. Ground Segment The Ground Segment consists of: the Mission Operation Centre (MOC), located at the European Space Operations Centre (ESOC) and responsible for satellite commanding, satellite health monitoring, orbit control and on-board software configuration and maintenance; the Payload Data Centre (PDC), responsible for mission data acquisition, processing, archiving and distribution to the customer/users, as well as mission planning; and the Ground Station Network (GSN). The GSN shall be made up of a mix of ESA ESTRACK stations and commercial stations: as Vigil has a specific need to maintain a 24/7 downlink capability, including over the Pacific Ocean where there is a gap in ESTRACK coverage, third-party stations will be required. Launcher The launcher service is baselined as Ariane 6.2 by Arianespace from the Guiana Space Centre. The launcher will be in dual-launch configuration for injection into GTO. The spacecraft will be launched as a secondary passenger with a commercial customer bound for geostationary orbit in a dual-launch with Ariane 6.4. This transfer option makes use of the Sun-Earth L1/L2 connection and the Weak Stability Boundary effects near L2 to reach L5. After release of the spacecraft into GTO, it will perform a series of 3 Apogee Raising Manoeuvres (ARM) to make its way towards L1 within a period of 14 days, planned to minimise the transitions through the Van Allen belts. From L1 the spacecraft will be placed on a zero- to low-cost transfer trajectory towards L2, from which it will then leave towards SEL5. Deep Space Manoeuvres (DSM), preceded and followed by correction manoeuvres, will be executed as needed. When the spacecraft reaches L5, a braking manoeuvre to insert the spacecraft into the final orbit will be executed. Different options are being investigated, resulting in a split of this manoeuvre into two burns. The cruise to L5 can take up to 3 years. To increase the use of the Vigil spacecraft, the mission will enter a pre-operational phase once it is halfway through the journey to L5. Alternatives include the use of Ariane 6.2 for direct injection towards SEL5, Ariane 6.4, or a Falcon 9 provided by SpaceX. References Proposed space probes 2030s in spaceflight Missions to the Sun European Space Agency space probes Solar space observatories Solar telescopes
ESA Vigil
[ "Astronomy" ]
2,106
[ "Space telescopes", "Solar space observatories" ]
56,497,219
https://en.wikipedia.org/wiki/Amylocystis%20lapponica
Amylocystis lapponica (alternatively spelled Amylocystis lapponicus) is a species of bracket fungus in the family Fomitopsidaceae, and the type species of genus Amylocystis. It produces medium-sized, annual fruit bodies that are soft, and have a strong, distinct smell. The fungus is a saprophyte that feeds on coniferous wood of logs lying on the ground, and causes brown rot. It is a rather rare species that only occurs in old-growth forest. Taxonomy The fungus was originally described by Swedish mycologist Lars Romell in 1911, who called it Polyporus lapponicus. The type collection was made in Nattavaara (Sweden), where it was found growing on fir. Romell initially thought the fungus might be Climacocystis borealis, but ultimately rejected that opinion, as that species has an easily breakable fruit body, and its spores are of different size and shape. Amylocystis lapponica has been shuffled to several different polypore genera in its taxonomic history, including Ungulina (Pilát, 1934), Leptoporus (Pilát, 1938), and Tyromyces (J.Lowe, 1975). The fungus has microscopic characteristics that are typical of the genus Tyromyces, but differs by the presence of thick-walled amyloid cystidia in the hymenium. For this reason, A. Bondartsev and Rolf Singer created the genus Amylocystis in 1944 to contain the fungus. Polyporus ursinus, proposed by Curtis Gates Lloyd in 1915, is now considered a synonym of Amylocystis lapponica. Description The fungus has fruit bodies that range in form from crust-like to effused-reflexed (mostly crust-like, with edges curling out to form rudimentary caps). Individual fruit bodies measure up to wide, and have a dirty whitish to light buff surface colour that becomes reddish brown when dry or if bruised. Amylocystis lapponica has a monomitic hyphal system, containing only generative hyphae. These hyphae are mostly thick-walled and measure 4–10.5 μm thick. The spores are cylindrical, hyaline, and smooth, measuring 8–11 by 2.5–3.5 μm. They are unreactive in Melzer's reagent. Oligoporus fragilis is similar in appearance, but can be distinguished microscopically from Amylocystis lapponica by the lack of amyloid cystidia. Habitat and distribution Amylocystis lapponica decomposes fallen conifer wood, in which it causes brown rot. Its preferential hosts are spruce and larch, although it is occasionally found on fir. It has a circumboreal distribution in coniferous forests. In Europe, the fungus is restricted almost exclusively to old-growth forests. Several conditions are required to support local populations, including: "vegetative continuity (never cut), natural tree species composition, multi-aged structure, rich presence of dead wood in various stages of decay, relatively large area of virgin forest surrounded by near-natural forest, and a stable, cold and humid meso- and microclimate." Because of this requirement the species is rare. For example, in the Czech Republic, despite the long and intensive history of polypore study in that area, A. lapponica has only been recorded from the Boubínský prales virgin forest, even though there are other old-growth forests in the country. Similarly, in Poland it is known only from Białowieża Forest (Białowieża National Park). Both the Czech and Polish locations have a similar management history–"minimal influence by man". In contrast to its rarity in Central and Southern Europe, A. lapponica is known from hundreds of localities in Finland and Sweden, and dozens in Norway. 
Here the fungus is used as an indicator species to help evaluate areas in need of conservation. The fungus is widely distributed in western North America. It is also found in China. In Europe, the fungus has been recorded from 12 countries, and is red-listed in 7 countries. In 2004, Amylocystis lapponica was one of 33 species proposed for protection under the Bern Convention by the European Council for Conservation of Fungi. In both the Czech Republic and Poland, where it is considered critically endangered, the fungus is found on their Regional Red Lists and as such is protected by law. The discomycete Hyaloscypha epiporia grows only on the surface of old polypores fruiting on softwood, and is often found on old, partly decayed fruit bodies of Amylocystis lapponica. References Fomitopsidaceae Fungi described in 1911 Fungi of China Fungi of Europe Fungi of North America Fungus species
Amylocystis lapponica
[ "Biology" ]
1,012
[ "Fungi", "Fungus species" ]
56,498,399
https://en.wikipedia.org/wiki/Neurotree
Academic Family Tree, which began as Neurotree, is an online database for academic genealogy, containing numerous "family trees" of academic disciplines. Neurotree was established in 2005 as a family tree of neuroscientists. Later that year Academic Family Tree incorporated Neurotree and family trees of other scholarly disciplines. Unlike a conventional genealogy or family tree, in which connections among individuals are from kinship (e.g., parents to children), connections in Academic Family Tree are from mentoring relationships, usually among people working in academic settings (e.g., doctoral supervisors to students). Academic Family Tree has been used as sources of information for the history and prospects of academic fields such as psychology, meteorology, organizational communication, and neuroscience. It has been used to address infometrics, to research issues of scientific methodology, and to examine mentor characteristics that predict mentee academic success. Functioning and scope The founders of the initial trees, including Neurotree, populated them from published sources, such as ProQuest. Later, they set up discipline-specific family trees of Academic Family Tree to be volunteer-run; accuracy is maintained by a group of volunteer editors. Hierarchical connections between mentors ("parents") and mentees ("children") are defined as any meaningful mentoring relationship (research assistant, graduate student, postdoctoral fellow, or research scientist). Continuous records extend well into the Middle Ages and earlier. As of 29 September 2023, Academic Family Tree contained 871,361 people with 882,278 connections among them. Academic Family Tree encompasses a broad range of discipline-specific trees. As of 29 September 2023, there were 73 trees spanning science (e.g., human genetics, microbiology, and psychology), mathematics and philosophy, engineering, the humanities (e.g., economics, law, theology, and music), and business (e.g., organizational communication and advertising). All trees within Academic Family Tree are closely linked. A search for a person in one tree gives hits from all trees in Academic Family Tree. The data in Academic Family Tree are owned by the nonprofit academictree.org, but they are shared under the Creative Commons License (CC-BY 3.0). This means a person may use the data in any tree for any purpose as long as the source is cited. Tools All trees under Academic Family Tree have a set of tools similar to those of conventional genealogy applications. One is Distance that allows a user to enter two scholars' names and to determine the number of degrees of separation between the two. For example, the number of degrees of academic separation between Isaac Newton and Marie Curie is 9 (including research assistantships, postdoctoral positions, and research scientist positions). History Neurotree was founded in January 2005 by Stephen V. David, then an assistant professor in the Oregon Hearing Research Center of Oregon Health and Science University, and by Benjamin Y. Hayden, an assistant professor in the Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester. David and Hayden founded Academic Family Tree soon after founding Neurotree. In November 2014, David received funding for Neurotree from the Metaknowledge Network. In November 2016, David received funding for Academic Family Tree from the National Science Foundation (NSF) SciSIP Program. In July 2019, David again received funding for Neurotree from the NSF. 
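As a rough illustration of how a "Distance"-style tool described above can compute degrees of separation in a mentorship graph, the following is a minimal breadth-first-search sketch; the names and graph data are hypothetical toy values, and this is not Neurotree's actual code or schema.

```python
from collections import deque

# Hypothetical toy mentorship graph: mentor -> list of mentees.
mentorship = {
    "A. Mentor": ["B. Student", "C. Student"],
    "B. Student": ["D. Grandstudent"],
    "C. Student": [],
    "D. Grandstudent": [],
}

def degrees_of_separation(graph, start, goal):
    """Breadth-first search over mentoring edges, treated as undirected."""
    # Build an undirected adjacency map so distance ignores edge direction.
    adj = {}
    for mentor, mentees in graph.items():
        adj.setdefault(mentor, set()).update(mentees)
        for mentee in mentees:
            adj.setdefault(mentee, set()).add(mentor)
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        person, dist = queue.popleft()
        if person == goal:
            return dist
        for neighbour in adj.get(person, ()):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, dist + 1))
    return None  # the two people are not connected

print(degrees_of_separation(mentorship, "A. Mentor", "D. Grandstudent"))  # 2
```

A real query of this kind would run against the database's stored mentor–mentee connections rather than an in-memory dictionary.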
Marsh (2017) pointed out that information for Neurotree and Academic Family Tree is provided by volunteers and it is not formally peer-reviewed. She cautioned that this can mean their information is inaccurate. Relation to other academic genealogies One other notable discipline-specific academic genealogy is the Mathematics Genealogy Project. Academic Family Tree has its own mathematics tree, MathTree but it is much less complete than the Mathematics Genealogy Project. As of 29 September 2023, MathTree contained 35,817 people whereas the Mathematics Genealogy Project contained 297,268 people. One other general academic genealogy was PhD Tree. PhD Tree ceased functioning some time after June 2017. See also Mathematics Genealogy Project References External links Academic Family Tree Projects established in 2005 Neuroscience projects History of neuroscience Historiography of science History of science
Neurotree
[ "Technology" ]
854
[ "History of science", "History of science and technology" ]
56,498,926
https://en.wikipedia.org/wiki/NGC%204564
NGC 4564 is an elliptical galaxy located about 57 million light-years away in the constellation Virgo. NGC 4564 was discovered by astronomer William Herschel on March 15, 1784. The galaxy is also a member of the Virgo Cluster. NGC 4564 has an estimated population of 213 ± 31 globular clusters. It is the host of a supermassive black hole with an estimated mass of about 56 million solar masses. Supernova One supernova has been observed in NGC 4564: SN 1961H (type unknown, mag. 11.2) was discovered by Romano on 2 May 1961. See also NGC 3115 NGC 5102 List of NGC objects (4001–5000) References External links Virgo (constellation) Elliptical galaxies 4564 042051 07773 Astronomical objects discovered in 1784 Virgo Cluster Discoveries by William Herschel
NGC 4564
[ "Astronomy" ]
176
[ "Virgo (constellation)", "Constellations" ]
56,498,997
https://en.wikipedia.org/wiki/Trigger%20zone
In neuroscience and neurology, a trigger zone is an area in the body, or of a cell, in which a specific type of stimulation triggers a specific type of response. The term was first used in this context around 1914 by Hugh T. Patrick, who was writing about trigeminal neuralgia, a condition in which pain fibers in the trigeminal nerve become hypersensitive. In people with trigeminal neuralgia, even a light touch to some part of the body—often a tooth or a part of the face—can give rise to an extended period of excruciating pain. Patrick referred to the sensitive part of the body as the "dolorogenic zone", and used the term "trigger zone" as a simpler equivalent. Through the 1920s and 1930s the term came into steadily wider use, but almost always in the context of neuralgia. Starting in the late 1930s, other types of stimulation and other types of responses were characterized as having the properties of a trigger zone. In 1940, for example, Morison and Dempsey observed that a small area of the cerebral cortex could be triggered when electrical stimulation would evoke widespread activity in other parts of the cerebral cortex. In 1944 Paul Wilcox described triggering of epileptic seizure by electrical stimulation of another area of the cerebral cortex. The chemoreceptor trigger zone is within the area postrema of the medulla oblongata in which many types of chemical stimulation can provoke nausea and vomiting. This area was first identified and named in 1951 by Herbert L. Borison and Kenneth R. Brizzee. Parts of cells, rather than parts of the body, can also behave as trigger zones. The axon hillock of a neuron possesses the highest density of voltage-gated Na+ channels, and is therefore the region where it is easiest for the action potential threshold to be reached. References Neurophysiology Electrophysiology Voltage-gated ion channels Medulla oblongata Cellular neuroscience Cellular processes Membrane biology
Trigger zone
[ "Chemistry", "Biology" ]
412
[ "Membrane biology", "Cellular processes", "Molecular biology" ]
56,502,229
https://en.wikipedia.org/wiki/NGC%201969
NGC 1969 (also known as ESO 56-SC124) is an open star cluster in the Dorado constellation and is part of the Large Magellanic Cloud. It was discovered by James Dunlop on September 24, 1826. Its apparent size is 0.8 arc minutes. See also List of NGC objects (1001–2000) References External links Dorado ESO objects 1969 Open clusters Large Magellanic Cloud Astronomical objects discovered in 1826 Discoveries by James Dunlop
NGC 1969
[ "Astronomy" ]
96
[ "Dorado", "Constellations" ]
56,502,322
https://en.wikipedia.org/wiki/Vaginal%20epithelium
The vaginal epithelium is the inner lining of the vagina consisting of multiple layers of (squamous) cells. The basal membrane provides the support for the first layer of the epithelium-the basal layer. The intermediate layers lie upon the basal layer, and the superficial layer is the outermost layer of the epithelium. Anatomists have described the epithelium as consisting of as many as 40 distinct layers of cells. The mucus found on the epithelium is secreted by the cervix and uterus. The rugae of the epithelium create an involuted surface and result in a large surface area that covers 360 cm2. This large surface area allows the trans-epithelial absorption of some medications via the vaginal route. In the course of the reproductive cycle, the vaginal epithelium is subject to normal, cyclic changes, that are influenced by estrogen: with increasing circulating levels of the hormone, there is proliferation of epithelial cells along with an increase in the number of cell layers. As cells proliferate and mature, they undergo partial cornification. Although hormone induced changes occur in the other tissues and organs of the female reproductive system, the vaginal epithelium is more sensitive and its structure is an indicator of estrogen levels. Some Langerhans cells and melanocytes are also present in the epithelium. The epithelium of the ectocervix is contiguous with that of the vagina, possessing the same properties and function. The vaginal epithelium is divided into layers of cells, including the basal cells, the parabasal cells, the superficial squamous flat cells, and the intermediate cells. The superficial cells exfoliate continuously, and basal cells replace the superficial cells that die and slough off from the stratum corneum. Under the stratus corneum is the stratum granulosum and stratum spinosum. The cells of the vaginal epithelium retain a usually high level of glycogen compared to other epithelial tissue in the body. The surface patterns on the cells themselves are circular and arranged in longitudinal rows. The epithelial cells of the uterus possess some of the same characteristics of the vaginal epithelium. Structure Vaginal epithelium forms transverse ridges or rugae that are most prominent in the lower third of the vagina. This structure of the epithelium results in an increased surface area that allows for stretching. This layer of epithelium is protective, and its uppermost surface of cornified (dead) cells are unique in that they are permeable to microorganisms that are part of the vaginal flora. The lamina propria of connective tissue is under the epithelium. Cells Basal cells The basal layer of the epithelium is the most mitotically active and reproduces new cells. This layer is composed of one layer of cuboidal cells lying on top of the basal membrane. Parabasal cells The parabasal cells include the stratum granulosum and the stratum spinosum. In these two layers, cells from the lower basal layer transition from active metabolic activity to death (apoptosis). In these mid-layers of the epithelia, the cells begin to lose their mitochondria and other cell organelles. The multiple layers of parabasal cells are polyhedral in shape with prominent nuclei. Intermediate cells Intermediate cells make abundant glycogen and store it. Estrogen induces the intermediate and superficial cells to fill with glycogen. The intermediate cells contain nuclei and are larger than the parabasal cells and more flattened. 
Some have identified a transitional layer of cells above the intermediate layer. Superficial cells Estrogen induces the intermediate and superficial cells to fill with glycogen. Several layers of superficial cells exist that consist of large, flattened cells with indistinct nuclei. The superficial cells are exfoliated continuously. Cell junctions The junctions between epithelial cells regulate the passage of molecules, bacteria and viruses by functioning as a physical barrier. The three types of structural adhesions between epithelial cells are: tight junctions, adherens junctions, and desmosomes. "Tight junctions (zonula occludens) are composed of transmembrane proteins that make contact across the intercellular space and create a seal to restrict transmembrane proteins difusion. of molecules across the epithelial sheet. Tight junctions also have an organizing role in epithelial polarization by limiting the mobility of membrane-bound molecules between the apical and basolateral domains of the plasma membrane of each epithelial cell. Adherens junctions (zonula adherens) connect bundles of actin filaments from cell to cell to form a continuous adhesion belt, usually just below the microfilaments." Junction integrity changes as the cells move to the upper layers of the epidermis. Mucus The vagina itself does not contain mucous glands. Though mucus is not produced by the vaginal epithelium, mucus originates from the cervix. The cervical mucus that is located inside the vagina can be used to assess fertility in ovulating women. The Bartholin's glands and Skene's glands located at the entrance of the vagina do produce mucus. Development The epithelium of the vagina originates from three different precursors during embryonic and fetal development. These are the vaginal squamous epithelium of the lower vagina, the columnar epithelium of the endocervix, and the squamous epithelium of the upper vagina. The distinct origins of vaginal epithelium may impact the understanding of vaginal anomalies. Vaginal adenosis is a vaginal anomaly traced to displacement of normal vaginal tissue by other reproductive tissue within the muscular layer and epithelium of the vaginal wall. This displaced tissue often contains glandular tissue and appears as a raised, red surface. Cyclic variations During the luteal and follicular phases of the estrous cycle the structure of the vaginal epithelium varies. The number of cell layers vary during the days of the estrous cycle: Day 10, 22 layers Days 12-14, 46 layers Day 19, 32 layers Day 24, 24 layers The glycogen levels in the cells is at its highest immediately before ovulation. Lytic cells Without estrogen, the vaginal epithelium is only a few layers thick. Only small round cells are seen that originate directly from the basal layer (basal cells) or the cell layers (parabasal cells) above it. The parabasal cells, which are slightly larger than the basal cells, form a five- to ten-layer cell layer. The parabasal cells can also differentiate into histiocytes or glandular cells. Estrogen also influences the changing ratios of nuclear constituents to cytoplasm. As a result of cell aging, cells with shrunken, seemingly foamy cell nuclei (intermediate cells) develop from the parabasal cells. These can be categorized by means of the nuclear-plasma relation into "upper" and "deep" intermediate cells. Intermediate cells make abundant glycogen and store it. The further nuclear shrinkage and formation of mucopolysaccharides are distinct characteristics of superficial cells. 
The mucopolysaccharides form a keratin-like cell scaffold. Fully keratinized cells without a nucleus are called "floes". Intermediate and superficial cells are constantly exfoliated from the epithelium. The glycogen from these cells is converted to sugars and then fermented by the bacteria of the vaginal flora to lactic acid. The cells progress through the cell cycle and then decompose (cytolysis) within a week's time. Cytolysis occurs only in the presence of glycogen-containing cells, that is, when the epithelium is degraded to the upper intermediate cells and superficial cells. In this way, the cytoplasm is dissolved, while the cell nuclei remain. Epithelial microbiota Low pH is necessary to control vaginal microbiota. Vaginal epithelial cells have a relatively high concentration of glycogen compared to other epithelial cells of the human body. The metabolism of this complex sugar by the lactobacillus dominated microbiome is responsible for vaginal acidity. Function The cellular junctions of the vaginal epithelium help prevent pathogenic microorganisms from entering the body though some are still able to penetrate this barrier. Cells of the cervix and vaginal epithelium generate a mucous barrier (glycocalyx) in which immune cells reside. In addition, white blood cells provide additional immunity and are able to infiltrate and move through the vaginal epithelium. The epithelium is permeable to antibodies, other immune system cells, and macromolecules. The permeability of epithelium thus provides access for these immune system components to prevent the passage of invading pathogens into deeper vaginal tissue. The epithelium further provides a barrier to microbes by the synthesis of antimicrobial peptides (beta-defensins and cathelicidins) and immunoglobulins. Terminally differentiated, superficial keratinocytes extrude the contents of lamellar bodies out of the cell to form a specialized, intercellular lipid envelope that encases the cells of the epidermis and provides a physical barrier to microorganisms. Clinical significance Disease transmission Sexually transmitted infections, including HIV are rarely transmitted across intact and healthy epithelium. These protective mechanisms are due to frequent exfoliation of the superficial cells, low pH, and innate and acquired immunity in the tissue. Research into the protective nature of the vaginal epithelium has been recommended as it would help in the design of topical medication and microbicides. Cancer There are very rare malignant growths that can originate in the vaginal epithelium. Some are only known through case studies. They are more common in older women. Vaginal squamous-cell carcinoma arises from the squamous cells of the epithelium. Vaginal adenocarcinoma arises from secretory cells in the epithelium Clear cell adenocarcinoma of the vagina arises in response to prenatal exposure to diethylstilbestrol Vaginal melanoma arises from melanocytes in the epithelium Inflammation Candida vaginitis is a fungal infection; the discharge is irritating to the vagina and the surrounding skin. Bacterial vaginosis Gardnerella usually causes a discharge, itching, and irritation. Aerobic vaginitis thinned reddish vaginal epithelium, sometimes with erosions or ulcerations and abundant yellowish discharge Atrophy The vaginal epithelium changes significantly when estrogen levels decrease at menopause. Atrophic vaginitis usually causes scant odorless discharge History The vaginal epithelium has been studied since 1910 by a number of histologists. 
Research The use of nanoparticles that can penetrate the cervical mucus (present in the vagina) and vaginal epithelium has been investigated to determine if medication can be administered in this manner to provide protection from infection of the Herpes simplex virus. Nanoparticle drug administration into and through the vaginal epithelium to treat HIV infection is also being investigated. See also Vaginal cysts Vaginal tumors References External links Human female reproductive system Women and sexuality Women's health Anatomy Gynaecology Epithelium
Vaginal epithelium
[ "Biology" ]
2,474
[ "Anatomy" ]
56,502,378
https://en.wikipedia.org/wiki/NGC%201970
NGC 1970 (also known as ESO 56-SC127) is a bright open cluster and emission nebula in the Dorado constellation in the Large Magellanic Cloud. It was discovered by John Herschel on January 31, 1835. Its apparent size is 8.0. It is commonly known as the Tulip Nebula. See also List of NGC objects (1001–2000) References External links Emission nebulae Open clusters ESO objects 1970 Dorado Large Magellanic Cloud Astronomical objects discovered in 1835 Discoveries by John Herschel
NGC 1970
[ "Astronomy" ]
109
[ "Dorado", "Constellations" ]
56,503,536
https://en.wikipedia.org/wiki/DirtyTooth
DirtyTooth is a generic term for a feature in the Bluetooth profiles of an iPhone that may be exploited if the device is using an iOS version below 11.2. Android devices are not affected. History The first hack was reported on March 5, 2017, and was officially presented to the public at the RootedCon conference in August 2017 in Madrid, Spain and later at the ToorCon in San Diego. A research paper was published in 2017 using DirtyTooth with a real bluetooth speaker. In BlackHat Europe 2017 another demonstration was carried out, this time with a Raspberry Pi. Overview DirtyTooth is based on the way how Bluetooth notifies the user when it changes the profile. Some operating systems ask the user to accept the profile change but others like iOS, do not warn the user, changing automatically from one profile to another. Depending on the Bluetooth profile, it can provide different access levels to the services and the information located in the device. The DirtyTooth hack works impersonating the A2DP profile so that a user's iOS device connects, changing to a PBAP profile after pairing without having to enter a PIN if the device has Bluetooth version 2.1 or higher. Affected hardware The hack affected every iPhone from the 3G to the X, given that the smartphones were running any operating system below iOS version 11.2. Impact The data obtained exploiting the DirtyTooth hack may include personal and technical information about the user and the device. Mitigation This hack is resolved by updating the iPhone to iOS version 11.2 or higher. References External links Exploit Database DirtyTooth for Raspberry Pi Bluetooth IOS malware Computer security exploits
DirtyTooth
[ "Technology" ]
341
[ "Bluetooth", "Wireless networking", "Computer security exploits" ]
56,504,280
https://en.wikipedia.org/wiki/C10H13FN2O4
The molecular formula C10H13FN2O4 (molar mass: 244.22 g/mol) may refer to: Alovudine, also called fluorothymidine Fluorothymidine F-18 (FLT) Molecular formulas
C10H13FN2O4
[ "Physics", "Chemistry" ]
74
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
56,504,292
https://en.wikipedia.org/wiki/Spanish%20Cybersecurity%20Research%20Conference
The Spanish Cybersecurity Research Conference (Spanish: Jornadas Nacionales de Investigación en Ciberseguridad (JNIC)) is a scientific congress that works as a meeting point where different actors working in the field of cybersecurity research (universities, technological and research centres, companies and public authorities) can exchange knowledge and experience with the shared goal of strengthening research in the cybersecurity field at the national level. Goals The need to run this kind of conference was identified during the drafting of the Summary report of the feasibility study and design of a network of centers of excellence in R&D in cybersecurity, with the consensus of participants. The strategic plan of the Spanish Network of Excellence on Cybersecurity Research included, in its measure #17, the creation of national cybersecurity R&D+i conferences, intended to be the scientific meeting point in which both the Network of Excellence in particular and the research ecosystem in general could demonstrate their capacities, both in terms of knowledge and talent and in terms of research findings and their potential for transference to market. Equally, measure #12 of the same study proposed the design of an open call for proposals with mechanisms to evaluate and select candidates in order to grant awards and acknowledgement for research excellence. Organizers Each edition of the conferences is organised by the institution selected according to the procedure laid out in the regulation of the JNIC. An organising committee is named based on the regulations established for the JNIC, with the General chair of the committee being the representative from the organising institution who is responsible for the event. The Spanish National Cybersecurity Institute (INCIBE), in its mission to support research in cybersecurity and strengthen the cybersecurity sector, collaborates in the organization of this conference. Technology Transfer Program With the aim of converting the JNIC into a scientific forum of excellence in the national cybersecurity field that promotes innovation, a complete Technology Transfer Program was designed for the first time in the 2017 edition. It was an instrument to bring end users (companies, public bodies, etc.) into contact with researchers in order to solve unresolved cybersecurity problems, formulated as scientific challenges. This initiative ran for several years until the 2021 edition, helping to promote technology transfer. Latest edition JNIC 2024 will be held in Sevilla on June 27, 28 and 29, 2024 and organized by Escuela Técnica Superior de Ingeniería Informática (ETSII) de la Universidad de Sevilla. Past Editions JNIC2015, held in León on 14, 15 and 16 September 2015 and organized by Universidad de León. JNIC2016, held in Granada on 15, 16 and 17 June 2016 and organized by Universidad de Granada. JNIC2017, held in Madrid on 31 May, 1 and 2 June 2017 and organized by Universidad Rey Juan Carlos. JNIC2018, held in San Sebastián on 13, 14 and 15 June 2018 and organized by Universidad de Mondragón. JNIC2019, held in Cáceres, Spain, on 5, 6 and 7 June 2019 and organized by University of Extremadura, COMPUTAEX Foundation and Complutense University of Madrid. JNIC2020, postponed to 2021 as a result of the situation generated by the coronavirus pandemic (COVID-19). JNIC 2021 LIVE, held online on 9 and 10 June 2021 and organized by University of Castilla–La Mancha. JNIC 2022, held in Bilbao on 27, 28 and 29 June 2022 and organized by Tecnalia. 
JNIC 2023, held in Vigo, on 21, 22 and 23 June 2023 and organized by atlanTTic (University of Vigo) and Gradiant (foundation). References Computer security Computer science conferences International conferences in Spain
Spanish Cybersecurity Research Conference
[ "Technology" ]
800
[ "Computer science", "Computer science conferences" ]
67,761,073
https://en.wikipedia.org/wiki/Transition%20metal%20nitrite%20complex
In organometallic chemistry, transition metal complexes of nitrite describe families of coordination complexes containing one or more nitrite (NO2−) ligands. Although the synthetic derivatives are only of scholarly interest, metal-nitrite complexes occur in several enzymes that participate in the nitrogen cycle. Structure and bonding Bonding modes Three linkage isomers are common for nitrite ligands: O-bonded, N-bonded, and bidentate O,O-bonded. The former two isomers have been characterized for the pentamminecobalt(III) system, i.e. [Co(NH3)5(NO2)]2+ and [Co(NH3)5(ONO)]2+, referred to as N-nitrito and O-nitrito, respectively. These two forms are sometimes called nitro and nitrito. These isomers can be interconverted in some complexes. An example of chelating nitrite is found in a bipyridyl complex – "bipy" is the bidentate ligand 2,2′-bipyridyl. This bonding mode is sometimes described as κ2O,O-. Focusing on electron counting in monometallic complexes, the O-bonded and N-bonded forms are viewed as 1-electron pseudohalides ("X-ligands"). The bidentate O,O-bonded form is an "L-X ligand", akin to bidentate carboxylate. With respect to HSAB theory, the N bonding mode is more common for softer metal centers. The O and O,O-bidentate modes are hard ligands, being found on Lewis-acidic metal centers. The kinetically favored O-bonded isomer converts to the N-bonded isomer. In its reaction with ferric porphyrin complexes, nitrite gives the O-bonded isomer. Addition of donor ligands to this complex induces the conversion to the octahedral low-spin isomer, which now is a soft Lewis acid. The nitrite isomerizes to the N-bonded isomer. The isomerization from the O-bonded to the N-bonded form proceeds in an intramolecular manner. Homoleptic complexes Several homoleptic complexes (complexes with only one kind of ligand) have been characterized by X-ray crystallography. The inventory includes octahedral complexes [M(NO2)6]3−, where M = Co (sodium cobaltinitrite) and Rh. Square-planar homoleptic complexes are also known for Pt(II) and Pd(II). The potassium salts of the tetranitrito complexes (M = Zn, Cd) feature homoleptic complexes with four O,O-bidentate nitrite ligands. Synthesis of nitrito complexes Traditionally, metal nitrito complexes are prepared by salt metathesis or ligand substitution reactions using alkali metal nitrite salts, such as sodium nitrite. At neutral pH, nitrite exists predominantly as the anion, not nitrous acid. Metal nitrosyl complexes undergo base hydrolysis, yielding nitrite complexes. This pattern is manifested in the behavior of nitroprusside: [Fe(CN)5(NO)]2− + 2 OH− ⇌ [Fe(CN)5(NO2)]4− + H2O. Reactions of nitrito complexes Some anionic nitrito complexes undergo acid-induced deoxygenation to give the nitrosyl complex. The reaction is reversible in some cases. Thus, one can generate nitrito complexes by base-hydrolysis of electrophilic metal nitrosyls. Nitro complexes also catalyze the oxidation of alkenes. Bioinorganic chemistry Metal nitrito complexes figure prominently in the nitrogen cycle, which describes the relationships and interconversions of ammonia up to nitrate. Because nitrogen is often a limiting nutrient, this cycle is important. Nitrite itself does not readily undergo redox reactions, but its metal complexes do. Oxidation to nitrate The molybdenum-containing enzyme nitrite oxidoreductase catalyzes the oxidation of nitrite to nitrate: NO2− + H2O → NO3− + 2 H+ + 2 e−. Reduction The heme-based enzyme nitrite reductase catalyzes the conversion of nitrite to ammonia. The cycle begins with reduction of an iron-nitrite complex to a metal nitrosyl complex. The copper-containing enzyme nitrite reductase (CuNIR) catalyzes the 1-electron reduction of nitrite to nitric oxide. 
The proposed mechanism entails the protonation of a κ2O,O-NO2-Cu(I) complex. This protonation induces cleavage of an N–O bond, giving a HO–Cu–ON center, which features a nitric oxide ligand O-bonded to Cu(II) (an isonitrosyl). Related compounds NH4[Co(NH3)2(NO2)4], "Erdmann's salt". References Ligands Coordination chemistry Nitrites
Transition metal nitrite complex
[ "Chemistry" ]
959
[ "Ligands", "Coordination chemistry" ]
67,762,866
https://en.wikipedia.org/wiki/Economy%20monetization
The Economy monetization is a metric of the national economy, reflecting its saturation with liquid assets. The level of monetization is determined both by the development of the national financial system and by the whole economy. The monetization of economy also determines the freedom of capital movement. Long time ago scientists recognized the important role played by the money supply. Nevertheless, only approximately 50 years ago did Milton Friedman convincingly prove that change in the money quantity might have a very serious effect on the GDP. The monetization is especially important in low- to middle-income countries in which it is substantially correlated with the per-capita GDP and real interest rates. This fact suggests that supporting an upward monetization trend can be an important policy objective for governments. The reverse concept is called economy demonetization. Monetization coefficient The monetization coefficient (or ratio) of the economy is an indicator that is equal to the ratio of the money supply aggregate M2 to the gross domestic product (GDP)—both nominated in current prices. The coefficient reflects the proportion of the total of goods and services of an economy that is monetized—being actually paid for in money by the purchaser—to substitute bartering. This is one of the most important characteristics of the level and course of economic development. The ratio can be as low as 10–20% for the emerging economies and as high as 100%+ for the developed countries. Formula The ratio is, in fact, based on the money demand function of Milton Friedman. This coefficient gives an idea of the degree of financial security of the economy. Many scientific publications calculate not only the indicator of M2/GDP but also M3/GDP and M1/GDP. The higher the M3/GDP compared to M1/GDP, the more developed and elaborated the system of non-cash payments and the financial potential of the economy. A small difference indicates that in this country a significant proportion of monetary transactions are carried out in cash, and the banking system is poorly developed. It is impossible to artificially increase the monetization coefficient; its growth is based on the high level of savings within the national financial system and on the strengthened confidence in the national economic policy and economic growth. The ability of the state to borrow money in the domestic market and implement social programs depends on the value of the coefficient. The monetization ratio is positively related to the expected wealth and negatively related to the opportunity costs of holding money. A high level of economy monetization is typical for developed countries with a well-functioning financial sector. A low level of monetization creates an artificial shortage of capital and, consequently, investments. This fact limits any economic growth. At the same time, the saturation of the economy with money in an undeveloped financial system will only lead to an increase in inflation and, accordingly, an even greater decrease in the economy monetization. This is so due to the fact that the additional money supply enters the consumer market, increasing the aggregate demand, but does not proportionally affect the level of supply. Criticism There is a certain paradox associated with the difference between the nominal and real money supply. The uncontrolled monetary emission does not lead to an increase in the economy monetization—but to its decrease. 
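A stylized numerical sketch of this effect, using purely hypothetical figures, makes the mechanism concrete: even a doubling of the nominal money supply can leave the economy less monetized if prices (and hence nominal GDP) rise faster.

```python
# Hypothetical figures only, in arbitrary currency units.
def monetization_ratio(m2, nominal_gdp):
    """Monetization coefficient: money supply aggregate M2 divided by nominal GDP."""
    return m2 / nominal_gdp

before = monetization_ratio(20, 100)   # initial year: M2 = 20, nominal GDP = 100 -> 20%
after = monetization_ratio(40, 250)    # M2 doubles, but inflation lifts nominal GDP to 250 -> 16%
print(f"before: {before:.0%}, after: {after:.0%}")  # before: 20%, after: 16%
```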
A rapid increase in the nominal money supply during a period of high inflation leads to an increase in prices and, accordingly, in nominal GDP, which outstrips the increase in the amount of money and accordingly leads to a decrease in the monetization coefficient. In contrast, a decrease in the growth rate of the nominal money supply coupled with growing GDP increases confidence in the national currency, leading to an increase in economy monetization. GDP tends to change in a linear manner whereas the money supply may change exponentially. This fact may distort the real situation. For developed countries the relationship between growth in the money supply and economic performance may become weak. Methods to calculate both GDP and M2 may vary from country to country, sometimes making a direct comparison between ratios troublesome. The money supply is measured on a specific date whereas GDP is calculated for a specific period of time (a year). Economy demonetization There are two primary nonmonetized sectors in the economy: subsistence and barter. Modern economic publications define economy demonetization as an increase in the share of barter in economic life and the displacement of money as a medium of exchange. Demonetization, as a transition from monetary to barter exchange, often occurs during periods of military conflict and hyperinflation, that is, when money loses its natural role in the economy as a measure of value and a means of circulation, accumulation, and payment. Counterintuitively, demonetization can also be observed in peacetime, in the absence of hyperinflation. The microeconomic explanation of demonetization is the hypothesis of so-called "liquidity constraints". When entrepreneurs simply do not have enough money to carry out the necessary transactions, they have to resort to commodity-for-commodity exchange. It is noted that in the context of financial crises, demonetization is associated with strict state monetary policy. Monetary tightening (higher taxes, lower government spending, a reduction in the money supply to prevent inflation, etc.) leads to a relative stabilization of the financial sector, which, due to a decrease in liquidity, leads to the demonetization of the economy and exacerbates the production crisis. Monetary easing, in turn, exacerbates the financial crisis. Alternative explanations suggest that demonetization can be a form of tax evasion. Monetization coefficients for countries (2015–2018, %) The table includes data for both developed and emerging economies. See also The Buffett indicator, a valuation multiple used to assess how expensive or cheap the aggregate stock market is at a given point in time by comparing the capitalization of the US Wilshire 5000 index to the US GDP. Complementary currency Debt monetization Money multiplier Non-monetary economy References External links What Is the Relationship Between Money Supply and GDP? M2 Money Stock (DISCONTINUED)/Gross Domestic Product Financial ratios 2000s in economic history Economic indicators Economy by field Monetary policy Inflation
Economy monetization
[ "Mathematics" ]
1,273
[ "Financial ratios", "Quantity", "Metrics" ]
67,763,627
https://en.wikipedia.org/wiki/Enterocin
Enterocin and its derivatives are bacteriocins synthesized by lactic acid bacteria of the genus Enterococcus. This class of polyketide antibiotics is effective against foodborne pathogens including Listeria (e.g., L. monocytogenes) and Bacillus. Because it is proteolytically degraded in the gastrointestinal tract, enterocin can be used to control foodborne pathogens in food intended for human consumption. History Enterocin was discovered in soil and marine Streptomyces strains as well as in marine ascidians of the genus Didemnum; it has also been found in the mangrove strains Streptomyces qinglanensis and Salinispora pacifica. Total synthesis The total synthesis of enterocin has been reported. Biosynthesis Enterocin has a caged, tricyclic, nonaromatic core, and its formation involves a flavoenzyme (EncM)-catalyzed Favorskii-like rearrangement of a poly(beta-carbonyl). Studies on enterocin have shown that it is biosynthesized via a type II polyketide synthase (PKS) pathway, starting with a unit derived from phenylalanine or from the activation of benzoic acid, followed by the EncM-catalyzed rearrangement. The enzyme EncN catalyzes the ATP-dependent transfer of the benzoate to EncC, the acyl carrier protein. EncC transfers the aromatic unit to EncA-EncB, the ketosynthase, in preparation for malonylation via FabD, the malonyl-CoA:ACP transacylase. A Claisen condensation occurs between the benzoyl and malonyl groups and is repeated six more times, followed by reaction with EncD, a ketoreductase; the intermediate then undergoes the EncM-catalyzed oxidative rearrangement to form the enterocin tricyclic core. Further reaction with the O-methyltransferase EncK and the cytochrome P450 hydroxylase EncR yields enterocin.
Enterocin
[ "Biology" ]
464
[ "Antibiotics", "Biocides", "Biotechnology products" ]
67,763,757
https://en.wikipedia.org/wiki/Transverse%20momentum%20distributions
In high energy particle physics, specifically in hadron-beam scattering experiments, transverse momentum distributions (TMDs) are the distributions of the hadron's quark or gluon momenta that are perpendicular to the momentum transfer between the beam and the hadron. Specifically, they are probability distributions for finding, inside the hadron, a parton with a given transverse momentum and longitudinal momentum fraction. TMDs provide information on the confined motion of quarks and gluons inside the hadron and complement the information on the hadron structure provided by parton distribution functions (PDFs) and generalized parton distributions (GPDs). In all, TMDs and PDFs provide information on the momentum distribution (transverse and longitudinal, respectively) of the quarks (or gluons), and GPDs provide information on their spatial distribution. Description, interpretation and usefulness TMDs are an extension of the concept of parton distribution functions (PDFs) and structure functions that are measured in deep inelastic scattering (DIS). Some TMDs provide the transverse-momentum dependence of the probabilities that the PDFs represent and that give rise to the DIS structure functions, namely the quark momentum probability distribution for the unpolarized structure function and the quark spin probability distribution for the polarized structure functions. Here, the longitudinal momentum fraction denotes the fraction of the hadron's longitudinal momentum carried by the parton, and it identifies with the Bjorken scaling variable in the infinite energy-momentum limit. These PDFs are summed over all transverse momenta, and therefore the transverse-momentum dependence of the probabilities is integrated out. TMDs provide the unintegrated probabilities, retaining their transverse-momentum dependence. Other TMDs exist that are not directly connected to these two distributions. In all, there are 16 dominant (viz. leading-twist) independent TMDs, 8 for the quarks and 8 for the gluons. TMDs are, in particular, sensitive to correlations between the transverse momentum of partons in the parent hadron and their spin or the hadron spin. In turn, the correlations provide access to the dynamics of partons in the transverse plane in momentum space. Thus, TMDs are comparable and directly complementary to the generalized parton distributions (GPDs) which describe the parton dynamics in the transverse plane in position space. Formally, TMDs access the correlations between the parton orbital angular momentum (OAM) and the hadron/parton spin because they require wave function components with nonzero OAM. Therefore, TMDs allow us to study the full three-dimensional dynamics of hadrons, providing more detailed information than that contained in conventional PDFs. One example of the importance of TMDs is that they provide information about the quark and gluon OAM. Those are not directly accessible in regular DIS, but are crucial for understanding the spin content of the nucleon and resolving the nucleon spin crisis. In fact, lattice QCD calculations indicate that quark OAM is the dominant contribution to the nucleon spin. Gluon TMDs Similarly to quark TMDs, gluon TMDs allow access to the gluonic orbital angular momentum, another possibly important contribution to the nucleon spin. Just as there are eight TMDs for quarks, there are eight gluon TMDs. Gluon TMDs were first proposed in 2001. Examples of TMDs The first and simplest example of a quark TMD is the unpolarized quark distribution. It arises when an unpolarized beam scatters off an unpolarized target hadron, and therefore does not carry quark/hadron spin information.
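The inline formulas of this article appear to have been lost in extraction. As a hedged illustration in standard TMD notation (an assumption, not the article's own symbols), the unpolarized quark TMD reduces to the familiar collinear PDF once the transverse momentum is integrated out:

```latex
% Assumed standard notation: x = longitudinal momentum fraction, k_T = quark transverse momentum.
f_1^q(x) \;=\; \int \mathrm{d}^2\mathbf{k}_T \, f_1^q\!\left(x, k_T^2\right)
```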
This function provides the probability that a beam particle strikes a target quark of a given momentum fraction and transverse momentum. It is related to the traditional DIS PDF by integration over the transverse momentum. Similarly, there are two further TMDs whose integrals give, respectively, the polarized quark distribution and the quark transversity distribution. In addition to the three above TMDs, which are direct extensions of the DIS PDFs, there are five other quark TMDs which depend not only on the magnitude of the transverse momentum, but also on its direction. Therefore, these TMDs vanish if simply integrated over the transverse momentum, and do not directly connect to DIS PDFs. They are: The Sivers distribution, which expresses, in a transversely polarized hadron, the azimuthally asymmetric distribution of the quark transverse momentum with respect to the plane formed by the hadron momentum and spin directions. The azimuthally asymmetric quark distribution in transverse momentum space is often called the "Sivers effect". In semi-inclusive DIS (SIDIS), in which a leading hadron is detected in addition to the scattered lepton, the effect stems from gluon exchanges between the struck quark and the target remnants (a final-state interaction). In contrast, in the Drell–Yan process, it stems from an initial-state interaction. This leads to the Sivers function having opposite signs in the two processes (it is a T-odd function). The Boer–Mulders function characterizes the distribution of transversely polarized quarks in an unpolarized hadron. It is also a T-odd function, like the Sivers distribution. Three further functions complete the set of eight leading-twist quark TMDs. Measurements Our initial understanding of the short-distance nucleon structure has come from deep inelastic scattering (DIS) experiments. This description is essentially one-dimensional: DIS provides us with the parton momentum distributions in terms of the single variable x, which is interpreted in the infinite momentum limit (the Bjorken limit) as the fraction of the nucleon momentum carried by the struck partons. Therefore, from DIS we only learn about the relative longitudinal momentum distribution of the partons, i.e. their longitudinal motions inside the nucleon. The measurement of TMDs allows us to go beyond this one-dimensional picture. This entails that, to measure TMDs, we need to gather more information from the scattering process. In DIS, only the scattered lepton is detected while the remnants of the shattered nucleon are ignored (inclusive experiment). Semi-inclusive DIS (SIDIS), where a high-momentum (i.e. leading) hadron is detected in addition to the scattered lepton, allows us to obtain the needed additional details about the scattering process kinematics. The detected hadron results from the hadronization of the struck quark. The latter retains the information on its motion inside the nucleon, including its transverse momentum, which allows access to the TMDs. In addition to its initial intrinsic transverse momentum, the struck quark also acquires a transverse momentum during the hadronization process. Consequently, the structure functions entering the SIDIS cross-section or asymmetries are convolutions of a transverse-momentum-dependent quark density, the TMD itself, and a transverse-momentum-dependent fragmentation function. Therefore, precise knowledge of fragmentation functions is important to extract TMDs from experimental results. Reactions other than SIDIS can also be used to access TMDs, such as the Drell–Yan process. Quark TMD measurements were pioneered at DESY by the HERMES experiment. They are currently (2021) being measured at CERN by the COMPASS experiment and several experiments at Jefferson Lab. Quark and gluon TMD measurements are an important part of the future electron–ion collider scientific program.
References External links Scholarpedia page on the transverse momentum distribution, by M. Anselmino. Quantum chromodynamics Nuclear physics
Transverse momentum distributions
[ "Physics" ]
1,562
[ "Hadrons", "Subatomic particles", "Matter", "Nuclear physics" ]
67,764,026
https://en.wikipedia.org/wiki/Biogeoclimatic%20ecosystem%20classification
Biogeoclimatic ecosystem classification (BEC) is an ecological classification framework used in British Columbia to define, describe, and map ecosystem-based units at various scales, from broad, ecologically-based climatic regions down to local ecosystems or sites. BEC is termed an ecosystem classification as the approach integrates site, soil, and vegetation characteristics to develop and characterize all units. BEC has a strong application focus and guides to classification and management of forests, grasslands and wetlands are available for much of the province to aid in identification of the ecosystem units. History The biogeoclimatic ecosystem classification (BEC) system evolved from the work of Vladimir J. Krajina, a Czech-trained professor of ecology and botany at the University of British Columbia and his students, from 1949 - 1970. Krajina conceptualized the biogeoclimatic approach as an attempt to describe the ecologically diverse and largely undescribed landscape of British Columbia, the mountainous western-most province of Canada, using a unique blend of various contemporary traditions. These included the American tradition of community change and climax, the state factor concept of Jenny, the Braun-Blanquet approach, the Russian biogeocoenose, and environmental grids, and the European microscopic pedology approach The biogeoclimatic approach was subsequently adopted by the Forest Service of British Columbia in 1976—initially as a five-year program to develop the classification to assist with tree species selection in reforestation. The classification concepts adopted from Krajina were modified by the staff of the B.C. Forest Service in the implementation of a provincial classification. Over the past 40 years, the BEC approach has been expanded and applied to all regions of British Columbia. It has developed into a comprehensive framework for understanding ecosystems in a climatically and topographically complex region. Classification Framework Biogeoclimatic ecosystem classification (BEC) is best described as a classification framework that leverages a modified Braun-Blanquet vegetation classification approach to identify and delineate ecologically equivalent climatic regions and site conditions (Figure 1). The framework integrates vegetation classification with two other component hierarchical classifications: climate (or zonal) and site (Figure 2) where the vegetation classification hierarchy is used to develop the other two component hierarchies. The emphasis of the approach is to create ecological units with similar site potential as reflected by mature or climax plant communities. Vegetation Component The BEC approach classifies vegetation in a hierarchy (see Figure 2) that presents vegetation communities at various levels of generalization. At upper levels of the hierarchy, the communities may have the same dominant tree species and occur in the same broad climate, for example, western redcedar - western hemlock forests of maritime climates of British Columbia. Whereas, at lower levels of the hierarchy, the communities will have very similar understorey species and will occur on similar site conditions, for example, western redcedar forests dominated by skunk cabbage (Lysitchiton americanum) occurring on wet, swampy sites. Categories of the vegetation hierarchy are modelled after the Braun-Blanquet approach including the class, order, alliance, and association levels. Subcategories are generally also applied (i.e., subassociation, suballiance, suborder). 
Fundamentally, the climax plant association is the basic unit of BEC. In the BEC approach, mature or climax plant associations of the zonal site define biogeoclimatic subzones and ecologically equivalent sites within a given biogeoclimatic unit are recognized and differentiated by mature or climax plant associations and used to define site units. The climax forest state is recognized where main canopy tree species are the same as those regenerating in the understory. Climax forests of British Columbia (BC) are most commonly dominated by shade-tolerant tree species; however, under some climates or site conditions, shade-intolerant species will regenerate under the canopy. For example, in the driest forested biogeoclimatic units of BC, several pine species that are considered seral species through most of their distribution regenerate under the forest canopy and are recognized as zonal climax species: lodgepole pine (Pinus contorta) in the Sub-Boreal Pine Spruce [SBPS] zone and ponderosa pine (P. ponderosa) in the Ponderosa Pine [PP] zone. Climate or Zonal Component The BEC system classifies climates using a zonal site approach. The zonal site concept arose from the early works of the Russian soil scientist Vasily Dokuchaev (late 1800’s) and soil scientist/forester Georgy N. Vysotsky. They considered that sites and soils with average conditions best reflected the regional climate. BEC adopted the zonal concept and developed specific site and soils criteria to define a zonal site. The mature/climax plant association that occurs on zonal sites is termed the zonal plant association and is used to characterize, differentiate and map biogeoclimatic subzones (see Figure 2). In the climate component hierarchy, subzones are the fundamental unit, which are grouped into zones based largely on similarity of shade-tolerant (climax) tree species composition of the zonal plant association. Zonal plant orders characterize Biogeoclimatic Zones; zonal plant associations characterize Biogeoclimatic Subzones; and, zonal plant subassociations can be used to characterize Biogeoclimatic Variants. Site Component Plant associations and subassociations are similarly used to define zonal and azonal ecosystems on different site conditions within a consistent climate regime (biogeoclimatic subzone/variant). This approach emphasizes site conditions as the effects of climate regime are controlled. These biogeoclimatic and site specific ecosystem units are termed site series (see Figure 2). In the forested environments of BC, soil moisture and soil nutrient gradients are the primary site-level gradients. Soil moisture regime is strongly influenced by position on a slope (Figure 3). Soil moisture and soil nutrient regimes are the two categorical axes used in an edatopic grid to characterize the generalized environmental conditions of vegetation units within a biogeoclimatic unit for most types of terrestrial ecosystems (see Figure 4 for an example). Site series from different climates (biogeoclimatic units), which share the same mature or climax plant association, are said to occupy ecologically equivalent conditions and are combined into site associations. Site associations are similar in concept to forest site types of Cajander and habitat types of the U.S. Pacific Northwest. Where seral plant associations are defined, they are linked directly to the site series (Figure 5). 
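The edatopic-grid logic described above (soil moisture regime by soil nutrient regime within a biogeoclimatic unit) is essentially a lookup table. The following Python sketch illustrates that idea with invented placeholder codes; the unit name, moisture/nutrient classes and site-series numbers are illustrative assumptions, not an official BEC table.

```python
# Illustrative sketch only: map a position on a simplified edatopic grid
# (soil moisture regime x soil nutrient regime) to a site-series label
# within one biogeoclimatic unit. All codes below are placeholders.
EDATOPIC_GRID = {
    # (soil moisture regime, soil nutrient regime): site series number
    ("mesic", "medium"): "01",    # zonal site
    ("subxeric", "poor"): "02",
    ("hygric", "rich"): "07",
}

def site_series(bgc_unit: str, moisture: str, nutrients: str) -> str:
    """Return a site-series label such as 'CWHxx/01' (placeholder codes)."""
    series = EDATOPIC_GRID.get((moisture, nutrients))
    if series is None:
        raise KeyError("No site series mapped for this edatopic position in the sketch.")
    return f"{bgc_unit}/{series}"

print(site_series("CWHxx", "mesic", "medium"))   # -> CWHxx/01 (zonal site)
```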
Uses of BEC The BEC system is used by resource managers in British Columbia to assist them with the management of natural ecosystems for forestry, conservation, and wildlife, The system was initially developed to determine, on a site-specific basis, the ecologically suitable tree species for regeneration after forest harvesting. It has evolved into a tool that is also used for: Setting standards for species selection and stocking after harvest Setting conservation targets Determining at-risk ecosystems Understanding range management issues Determining wildlife suitability and capability Figure 6 demonstrates how the BEC system is used to present ecologically suitable tree species for regeneration. References Ecology of the Rocky Mountains Environment of British Columbia Ecology
Biogeoclimatic ecosystem classification
[ "Biology" ]
1,519
[ "Ecology" ]
67,764,323
https://en.wikipedia.org/wiki/D%C3%A0%20Mh%C3%ACle%20Distillery
Dà Mhìle is a Welsh whisky distillery. It was the second distillery to produce commercially available whisky made in Wales since the 19th century, and its existence allowed the European Union to designate Wales as a whisky-producing country. The distillery's first whisky was commissioned to Springbank in 1992, and the second in 2000 to Loch Lomond. In 2012 the distillery was opened on the site of Glynhynod Farm, Ceredigion, by the First Minister for Wales, Carwyn Jones. Background and products The distillery is set on Glynhynod Farm, initially set up as a cheese-producing farm. The name of the distillery, Dà Mhìle, translates to 'two thousand' in Scottish Gaelic (Dwy fil in Welsh), marking the date of the first commissioned whisky. The first Welsh whisky release, aged in Oloroso sherry casks, is named "Tarian" after Chris Phillips, one of the first patrons. The distillery also produces organic spirits including liqueurs, rum, apple brandy, and seaweed gin.
Dà Mhìle Distillery
[ "Chemistry" ]
227
[ "Distilleries", "Distillation" ]
67,766,374
https://en.wikipedia.org/wiki/Particulate%20inorganic%20carbon
Particulate inorganic carbon (PIC) can be contrasted with dissolved inorganic carbon (DIC), the other form of inorganic carbon found in the ocean. These distinctions are important in chemical oceanography. Particulate inorganic carbon is sometimes called suspended inorganic carbon. In operational terms, it is defined as the inorganic carbon in particulate form that is too large to pass through the filter used to separate dissolved inorganic carbon. Most PIC is calcium carbonate, CaCO3, particularly in the form of calcite, but also in the form of aragonite. Calcium carbonate makes up the shells of many marine organisms. It also forms during whiting events and is excreted by marine fish during osmoregulation. Overview Carbon compounds can be distinguished as either organic or inorganic, and dissolved or particulate, depending on their composition. Organic carbon forms the backbone of key component of organic compounds such as – proteins, lipids, carbohydrates, and nucleic acids. Inorganic carbon is found primarily in simple compounds such as carbon dioxide, carbonic acid, bicarbonate, and carbonate (CO2, H2CO3, HCO3−, CO32− respectively). Marine carbon is further separated into particulate and dissolved phases. These pools are operationally defined by physical separation – dissolved carbon passes through a 0.2 μm filter, and particulate carbon does not. There are two main types of inorganic carbon that are found in the oceans. Dissolved inorganic carbon (DIC) is made up of bicarbonate (HCO3−), carbonate (CO32−) and carbon dioxide (including both dissolved CO2 and carbonic acid H2CO3). DIC can be converted to particulate inorganic carbon (PIC) through precipitation of CaCO3 (biologically or abiotically). DIC can also be converted to particulate organic carbon (POC) through photosynthesis and chemoautotrophy (i.e. primary production). DIC increases with depth as organic carbon particles sink and are respired. Free oxygen decreases as DIC increases because oxygen is consumed during aerobic respiration. Particulate inorganic carbon (PIC) is the other form of inorganic carbon found in the ocean. Most PIC is the CaCO3 that makes up shells of various marine organisms, but can also form in whiting events. Marine fish also excrete calcium carbonate during osmoregulation. Some of the inorganic carbon species in the ocean, such as bicarbonate and carbonate, are major contributors to alkalinity, a natural ocean buffer that prevents drastic changes in acidity (or pH). The marine carbon cycle also affects the reaction and dissolution rates of some chemical compounds, regulates the amount of carbon dioxide in the atmosphere and Earth's temperature. Calcium carbonate Particulate inorganic carbon (PIC) usually takes the form of calcium carbonate (CaCO3), and plays a key part in the ocean carbon cycle. This biologically fixed carbon is used as a protective coating for many planktonic species (coccolithophores, foraminifera) as well as larger marine organisms (mollusk shells). Calcium carbonate is also excreted at high rates during osmoregulation by fish, and can form in whiting events. While this form of carbon is not directly taken from the atmospheric budget, it is formed from dissolved forms of carbonate which are in equilibrium with CO2 and then responsible for removing this carbon via sequestration. CO2 + H2O → H2CO3 → H+ + HCO3− Ca2+ + 2HCO3− → CaCO3 + CO2 + H2O While this process does manage to fix a large amount of carbon, two units of alkalinity are sequestered for every unit of sequestered carbon. 
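To make the alkalinity bookkeeping in the preceding sentence concrete, here is a minimal Python sketch, assuming idealised seawater values; it tracks only the dissolved inorganic carbon (DIC) and total alkalinity (TA) budgets of the calcification reaction and is not a carbonate-system solver.

```python
# Simplified bookkeeping for the calcification reaction Ca2+ + 2 HCO3- -> CaCO3 + CO2 + H2O:
# each mole of CaCO3 formed removes one mole of DIC and two moles (equivalents) of TA,
# which is the "two units of alkalinity per unit of carbon" noted in the text.
def calcify(dic_umol_kg: float, ta_umol_kg: float, caco3_formed_umol_kg: float):
    dic_after = dic_umol_kg - 1.0 * caco3_formed_umol_kg
    ta_after = ta_umol_kg - 2.0 * caco3_formed_umol_kg
    return dic_after, ta_after

# Hypothetical, roughly seawater-like starting values (umol/kg), illustrative only.
dic, ta = calcify(dic_umol_kg=2050.0, ta_umol_kg=2300.0, caco3_formed_umol_kg=50.0)
print(dic, ta)  # 2000.0 2200.0 -> TA falls twice as fast as DIC, favouring CO2 outgassing
```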
The formation and sinking of CaCO3 therefore drives a surface-to-deep alkalinity gradient which serves to lower the pH of surface waters, shifting the speciation of dissolved carbon and raising the partial pressure of dissolved CO2 in surface waters, which actually raises atmospheric levels. In addition, the burial of CaCO3 in sediments serves to lower overall oceanic alkalinity, tending to lower pH and thereby raise atmospheric CO2 levels if not counterbalanced by the new input of alkalinity from weathering. The portion of carbon that is permanently buried at the sea floor becomes part of the geologic record. Calcium carbonate often forms remarkable deposits that can then be raised onto land through tectonic motion, as in the case of the White Cliffs of Dover in Southern England. These cliffs are made almost entirely of the plates of buried coccolithophores. Carbonate pump The carbonate pump, sometimes called the carbonate counter pump, starts with marine organisms at the ocean's surface producing particulate inorganic carbon (PIC) in the form of calcium carbonate (calcite or aragonite, CaCO3). This CaCO3 is what forms hard body parts like shells. The formation of these shells increases atmospheric CO2 due to the production of CaCO3 in the following reaction with simplified stoichiometry: Ca2+ + 2 HCO3− → CaCO3 + CO2 + H2O. Coccolithophores, a nearly ubiquitous group of phytoplankton that produce shells of calcium carbonate, are the dominant contributors to the carbonate pump. Due to their abundance, coccolithophores have significant implications for carbonate chemistry, in the surface waters they inhabit and in the ocean below: they provide a large mechanism for the downward transport of CaCO3. The air-sea CO2 flux induced by a marine biological community can be determined by the rain ratio – the proportion of carbon from calcium carbonate compared to that from organic carbon in particulate matter sinking to the ocean floor (PIC/POC). The carbonate pump acts as a negative feedback on CO2 taken into the ocean by the solubility pump. It occurs with a lesser magnitude than the solubility pump. The carbonate pump is sometimes referred to as the "hard tissue" component of the biological pump. Some surface marine organisms, like coccolithophores, produce hard structures out of calcium carbonate, a form of particulate inorganic carbon, by fixing bicarbonate. This fixation of DIC is an important part of the oceanic carbon cycle. Ca2+ + 2 HCO3− → CaCO3 + CO2 + H2O While the biological carbon pump fixes inorganic carbon (CO2) into particulate organic carbon in the form of sugar (C6H12O6), the carbonate pump fixes inorganic bicarbonate and causes a net release of CO2. In this way, the carbonate pump could be termed the carbonate counter pump. It works counter to the biological pump by counteracting the CO2 flux from the biological pump. Calcite and aragonite seas An aragonite sea contains aragonite and high-magnesium calcite as the primary inorganic calcium carbonate precipitates. The chemical conditions of the seawater must be notably high in magnesium content relative to calcium (high Mg/Ca ratio) for an aragonite sea to form. This is in contrast to a calcite sea, in which seawater low in magnesium content relative to calcium (low Mg/Ca ratio) favors the formation of low-magnesium calcite as the primary inorganic marine calcium carbonate precipitate.
The Early Paleozoic and the Middle to Late Mesozoic oceans were predominantly calcite seas, whereas the Middle Paleozoic through the Early Mesozoic and the Cenozoic (including today) are characterized by aragonite seas. Aragonite seas occur due to several factors, the most obvious of these is a high seawater Mg/Ca ratio (Mg/Ca > 2), which occurs during intervals of slow seafloor spreading. However, the sea level, temperature, and calcium carbonate saturation state of the surrounding system also determine which polymorph of calcium carbonate (aragonite, low-magnesium calcite, high-magnesium calcite) will form. Likewise, the occurrence of calcite seas is controlled by the same suite of factors controlling aragonite seas, with the most obvious being a low seawater Mg/Ca ratio (Mg/Ca < 2), which occurs during intervals of rapid seafloor spreading. Whiting events A whiting event is a phenomenon that occurs when a suspended cloud of fine-grained calcium carbonate precipitates in water bodies, typically during summer months, as a result of photosynthetic microbiological activity or sediment disturbance. The phenomenon gets its name from the white, chalky color it imbues to the water. These events have been shown to occur in temperate waters as well as tropical ones, and they can span for hundreds of meters. They can also occur in both marine and freshwater environments. The origin of whiting events is debated among the scientific community, and it is unclear if there is a single, specific cause. Generally, they are thought to result from either bottom sediment re-suspension or by increased activity of certain microscopic life such as phytoplankton. Because whiting events affect aquatic chemistry, physical properties, and carbon cycling, studying the mechanisms behind them holds scientific relevance in various ways. Great Calcite Belt The Great Calcite Belt (GCB) of the Southern Ocean is a region of elevated summertime upper ocean calcite concentration derived from coccolithophores, despite the region being known for its diatom predominance. The overlap of two major phytoplankton groups, coccolithophores and diatoms, in the dynamic frontal systems characteristic of this region provides an ideal setting to study environmental influences on the distribution of different species within these taxonomic groups. The Great Calcite Belt, defined as an elevated particulate inorganic carbon (PIC) feature occurring alongside seasonally elevated chlorophyll a in austral spring and summer in the Southern Ocean, plays an important role in climate fluctuations, accounting for over 60% of the Southern Ocean area (30–60° S). The region between 30° and 50° S has the highest uptake of anthropogenic carbon dioxide (CO2) alongside the North Atlantic and North Pacific oceans. Knowledge of the impact of interacting environmental influences on phytoplankton distribution in the Southern Ocean is limited. For example, more understanding is needed of how light and iron availability or temperature and pH interact to control phytoplankton biogeography. Hence, if model parameterizations are to improve to provide accurate predictions of biogeochemical change, a multivariate understanding of the full suite of environmental drivers is required. The Southern Ocean has often been considered as a microplankton-dominated (20–200 μm) system with phytoplankton blooms dominated by large diatoms and Phaeocystis sp. 
However, since the identification of the GCB as a consistent feature and the recognition of the importance of picoplankton (< 2 μm) and nanoplankton (2–20 μm) in high-nutrient, low-chlorophyll (HNLC) waters, the dynamics of small (bio)mineralizing plankton and their export need to be acknowledged. The two dominant biomineralizing phytoplankton groups in the GCB are coccolithophores and diatoms. Coccolithophores are generally found north of the polar front, though Emiliania huxleyi has been observed as far south as 58° S in the Scotia Sea, at 61° S across Drake Passage, and at 65° S south of Australia. Diatoms are present throughout the GCB, with the polar front marking a strong divide between different size fractions. North of the polar front, small diatom species, such as Pseudo-nitzschia spp. and Thalassiosira spp., tend to dominate numerically, whereas large diatoms with higher silicic acid requirements (e.g., Fragilariopsis kerguelensis) are generally more abundant south of the polar front. High abundances of nanoplankton (coccolithophores, small diatoms, chrysophytes) have also been observed on the Patagonian Shelf and in the Scotia Sea. Currently, few studies incorporate small biomineralizing phytoplankton at the species level. Rather, the focus has often been on the larger and noncalcifying species in the Southern Ocean due to sample preservation issues (i.e., acidified Lugol's solution dissolves calcite, and light microscopy restricts accurate identification to cells > 10 μm). In the context of climate change and future ecosystem function, the distribution of biomineralizing phytoplankton is important to define when considering phytoplankton interactions with carbonate chemistry and ocean biogeochemistry. The Great Calcite Belt spans the major Southern Ocean circumpolar fronts: the Subantarctic front, the polar front, the Southern Antarctic Circumpolar Current front, and occasionally the southern boundary of the Antarctic Circumpolar Current. The subtropical front (at approximately 10 °C) acts as the northern boundary of the GCB and is associated with a sharp increase in PIC southwards. These fronts divide distinct environmental and biogeochemical zones, making the GCB an ideal study area to examine controls on phytoplankton communities in the open ocean. A high PIC concentration observed in the GCB (1 μmol PIC L−1) compared to the global average (0.2 μmol PIC L−1) and significant quantities of detached E. huxleyi coccoliths (in concentrations > 20,000 coccoliths mL−1) both characterize the GCB. The GCB is clearly observed in satellite imagery spanning from the Patagonian Shelf across the Atlantic, Indian, and Pacific oceans and completing Antarctic circumnavigation via the Drake Passage. Coccolithophores Since the Industrial Revolution, 30% of anthropogenic CO2 has been absorbed by the oceans, resulting in ocean acidification, which is a threat to calcifying algae. As a result, there has been profound interest in these calcifying algae, boosted by their major role in the global carbon cycle. Globally, coccolithophores, particularly Emiliania huxleyi, are considered to be the most dominant calcifying algae, whose blooms can even be seen from space. Calcifying algae create an exoskeleton from calcium carbonate platelets (coccoliths), providing ballast which enhances the organic and inorganic carbon flux to the deep sea. Organic carbon is formed by means of photosynthesis, where CO2 is fixed and converted into organic molecules, causing removal of CO2 from the seawater.
Counterintuitively, the production of coccoliths leads to the release of CO2 into the seawater, due to the removal of carbonate from the seawater, which reduces the alkalinity and causes acidification. Therefore, the ratio between particulate inorganic carbon (PIC) and particulate organic carbon (POC) is an important measure of the net release or uptake of CO2. In short, the PIC:POC ratio is a key characteristic required to understand and predict the impact of climate change on the global ocean carbon cycle. Calcium particle morphologies See also carbonate compensation depth aragonite compensation depth lysocline calcareous ooze Carbonate pump Marine biogenic calcification snowline: the depth at which carbonate disappears from sediments under steady-state conditions References Sources Chemical oceanography Environmental chemistry Soil
Particulate inorganic carbon
[ "Chemistry", "Environmental_science" ]
3,219
[ "Chemical oceanography", "Environmental chemistry", "nan" ]
67,766,855
https://en.wikipedia.org/wiki/Fionn%20Ferreira
Fionn Miguel Eckardt Ferreira (from Ballydehob, County Cork, Ireland) is an Irish inventor, chemistry student and Forbes 30 under 30 listee. He is known for his invention of a method to remove microplastic particles from water using a natural ferrofluid mixture. Early life Fionn Ferreira, whose full name is Fionn Miguel Eckardt Ferreira, was born in Cork to boat-builder and modeller Anne Eckardt, from Germany and West Cork, and boat-builder Rui Ferreira from Portugal, who had met in the UK in 1994 and settled in Ballydehob, County Cork. He was brought up in Ballydehob and attended St James' primary school in Durrus and subsequently Schull Community College in Schull, completing school at the age of 18 in 2019. Ferreira spent part of his childhood kayaking around remote coastal areas of Ireland with his dog India, noticing an increasing amount of plastic washing up on the coastline. He created a methodology to quantify and collect plastic pollution, with a focus on microplastics. He built several inventions using LEGO, bits of wood and some microcontrollers to test for these microplastics. He entered the national science fair, the BT Young Scientist and Technology Exhibition, three times, with two of his projects being Let Sanils do the cleaning, An investigation into the antioxidant concentration of different berries using the Briggs-Rauscher reaction in conjunction with a photometer and An investigation into the removal of microplastics from water using ferrofluids. Ferreira worked as a curator at Schull's planetarium. Microplastic removal technology Ferreira designed and tested a method to remove microplastics from water, following what he described as thousands of failed attempts. Ferreira has stated that he was inspired by an article by Fermilab physicist Arden Warner, who developed a new approach to cleaning up oil spills, using magnetic principles, and made a device that uses a magnet-based method to remove the particles from water with extraction rates of 87% (+/- 1.1%). The highest extraction rates observed in Fionn's experiments were for polyesters. He first exhibited the project at the Young Scientist and Technology Exhibition 2018 where he was awarded the Intel Award, best in category, Intellectual Ventures Award and a first-place award. Ferreira subsequently exhibited at ISEF 2018 winning the 1st place American Chemical Society award, 2nd place award in Chemistry, 1st place for Drug Chemical and associated technologies, a scholarship to the University of Arizona and a certificate of honourable mention by the American Statistical Association. In 2019, Ferreira exhibited at the 2019 Google Science Fair winning the global grand prize award of $50,000. Education In autumn 2019, Ferreira started a BSc in Chemistry at the University of Groningen in the Netherlands and graduated in 2022 with Cum Laude. Currently, Ferreira is following a MSc in Chemistry at the University of Groningen. Business In 2020, Ferreira founded a business, Fionn & Co., focused on microplastic removal technology. In 2020 and 2021 his work was featured by the global campaign by Hewlett-Packard: For every dream. Recognition In 2018 the MIT Lincoln Laboratory named a minor planet after Ferreira, following his being awarded 2nd place at the Intel International Science and Engineering Fair. In 2021 he was named a National Geographic Society Young Explorer; he has since been working on a new platform for youth in the space of invention with the help of the funding from the society. 
In 2021, he was named a Forbes 30 Under 30 listee in the Science and Healthcare category. In September 2021 Ferreira was awarded the Premio Internationale Giuseppe Sciacca for his conservation and ecological efforts. Ferreira has spoken at several events, including: 2020 World Economic Forum 2019 Global Plastic Health Summit 2020 Smithsonian Institution Earth Optimism Summit 2020 GreenTech Festival Berlin 2021 Regeneron Pharmaceuticals ISEF 2021 Young Plastic Pollution Challenge 2021 And Leuven (conference) 2021 Viva Technology 2021 Infoshare conference Poland 2021 YOUTHTOPIA Younite 2022 ChangeNOW 2022 Ativa-te 2023 House of Lords Ferreira has been awarded: 2019 Google Science Fair Grand Prize 2020 World Economic Forum Change Maker 2021 Plastic action champion by the Global Plastic Action Partnership 2021 Premio Internationale Giuseppe Sciacca 2021 National Geographic Young Explorer 2021 Forbes 30 under 30 2022 Renaissance Award 2023 European Patent Office Young Inventor's Prize References External links Interview and summary of invention concept Interview on invention process with BBC 2000 births People from Ballydehob Ocean pollution Scientists from County Cork 21st-century Irish chemists Living people University of Groningen alumni Irish people of Portuguese descent Irish people of German descent
Fionn Ferreira
[ "Chemistry", "Environmental_science" ]
956
[ "Ocean pollution", "Water pollution" ]
67,766,856
https://en.wikipedia.org/wiki/Principles%20for%20a%20Data%20Economy
The Principles for a Data Economy – Data Rights and Transactions is a transatlantic legal project carried out jointly by the American Law Institute (ALI) and the European Law Institute (ELI). The Principles for a Data Economy deals with a range of different legal questions that arise in the data economy. Since data is different from other tradeable items, the Principles draw up legal rules for data transactions and data rights that take into account the interests of different stakeholders involved in the data economy. The Principles are designed to facilitate contractual relations as well as the drafting of model agreements and can guide courts and legislators worldwide. The project proposes a set of principles that can be implemented in any legal system and is designed to work in conjunction with any kind of data privacy/data protection law, intellectual property law or trade secret law. The Principles do not address or seek to change any of the substantive rules of these bodies of law. The Project Team consists of Neil B Cohen and Christiane Wendehorst (as Project Reporters) and Lord John Thomas as well as Steven O. Weise (as Project Chairs). Characteristics of data The law governing trades in commerce has historically focused on trade in items that are tangible like goods or on intangible assets, such as shares or licenses. However, data does not fit into any of these traditional categories, nor does it qualify as a service. It is often unclear how traditional legal rules and doctrines can apply to data, as data is different from other assets in many ways. For example, data can be multiplied at basically no cost and can be used in parallel for a variety of different purposes by many different people at the same time (data is a “non-rivalrous” resource). Uncertainty regarding the applicable rules to govern the data economy may inhibit innovation and growth and trouble stakeholders like data-driven industries, start-ups, and consumers. Stakeholders in the data economy The Principles have taken the basic types of players and relations which can be found in data ecosystems as a starting point to provide guidance in different situations. The central actors in the data economy are data controllers (also called “data holders”). They are in a position to access the data and decide for which purposes and means this data should be processed. A controller may exercise control all by itself or share it with co-controllers, such as under a data pooling arrangement. Data processors provide the processing of data on a controller’s behalf as a service. Another important group of stakeholders includes those that contribute to the generation of data (e.g. data subjects). Other players in the data economy include data assemblers or data intermediaries (e.g. data trusts). History of the project and timeline Before the official adoption of the project by ALI and ELI bodies in 2018, the project team carried out a Feasibility Study from October 2016 to February 2018. In the following years, the project team produced a number of drafts (e.g. “Preliminary Drafts” No. 1 to 4, “Tentative Draft No. 1”) and project progress were regularly discussed with advisory bodies and members of both the ALI and the ELI. The project reporters also included feedback and insights from industry stakeholders and experts that was gained after several meetings and workshops, hosted, inter alia by UNCITRAL, UNIDROIT and several national governmental institutions. Tentative Draft No. 
2 was presented at the ALI Annual Meeting in May 2021 and approved by ALI membership. The latest draft ("Final Council Draft") was also approved by the ELI Council and ELI Membership. The Principles for a Data Economy were presented at an international conference with representatives from institutions such as the Uniform Law Commission (ULC), the European Commission, UNIDROIT, the OECD, the International Chamber of Commerce (ICC) and the World Economic Forum (WEF) in October 2021. Project structure The current draft (“Tentative Draft No. 2”) of the Principles consists of five Parts that each governs different aspects of the data economy: General Provisions, Data Contracts, Data Rights, Third Party Aspects of Data Activities, and Multi-State Issues. General Provisions Part I includes general provisions that apply to all other Parts of the Principles for a Data Economy. This Part sets out the purpose of the Principles: they aim to make existing law in the field of the data economy more coherent and support the development of the law in this field by courts and legislators worldwide. It is also clarified that the Principles have a wide scope of application and can be used in a variety of ways by stakeholders in the data economy. The Principles may, for example, serve private parties as a basis for contract formation, guide the deliberations of arbitral tribunals or inspire national legislation. Part I then defines several key terms, such as ‘digital data’ and ‘data right’. The scope of the Principles is limited to matters where information is recorded as an asset, resource or tradeable commodity and where large amounts of data, rather than single pieces of information, are concerned. This Part also clarifies that remedies with respect to data contracts and data rights are left to the applicable national law. Data Contracts Part II lists different types of contracts that often occur in the data economy and establishes two broad categories, namely contracts for the supply and sharing of data and contracts for services with regard to data. Contracts for the supply and sharing of data include, e.g. data transfer contracts or data pooling arrangements, while contracts for services with regard to data cover contracts for the processing of data or data intermediary contracts. The Principles provide default terms for each contract type, on issues such as the manner in which data should supply or which characteristics the data supplied should meet. These default terms 'automatically' become part of the contract unless the parties agree otherwise. Data Rights Part III governs legally protected interests of players in the data economy that stem from the characteristics of data as a resource (e.g. its non-rivalrous nature) or from public interest considerations. Such data rights may include the right to data access, the right to require the controller to desist from data activities or to correct incorrect/incomplete data, or even to receive an economic share in profits derived from the use of data. For example, the Principles deal with data rights of stakeholders that had a share in the co-generation of data and identify different factors to be considered in determining whether to afford a party a data right. 
The underlying idea that parties who have contributed to the generation of data should have some rights in the utilization of the data is also recognized by governmental institutions, such as by the Japanese Ministry of Economy, Trade and Industry (METI), and the term co-generated data, which was coined by the Principles for a Data Economy, has been adopted, inter alia by the European Commission, the German Data Ethics Commission and the Global Partnership on Artificial Intelligence (GPAI). This Part also deals with data rights for the public interest, such as data sharing rights in the field of innovation. Third Party Aspects Part IV governs different situations in which data transactions interfere with the rights of third parties. Such rights include intellectual property rights or rights derived from data privacy or data protection law. This Part sets out under which circumstances data activities should be considered wrongful vis à vis another party. For example, a data activity (like data processing or the onward supply of data) could be considered wrongful, if a controller interferes with the rights of data subjects that are protected by data-protection law. A data activity could also be wrongful if the controller is non-compliant with contractual limitations on data activities, enforceable by the protected party (e.g. a controller may only process data for a certain purpose). If someone obtained access to data by unauthorized means (i.e. data “theft”) this could also be considered wrongful. The Part on Third-Party Aspects also takes a detailed look at the effects of the onward supply of data can have on third parties, while balancing the protection of third parties on the one hand, with the interests of data recipients and the desire to encourage data sharing on the other. Multi-State Issues As transactions in the data economy are international by nature and hardly occur within one legal system alone, the Part V of the Principles also briefly touches upon the applicability of the rules and doctrines of private international law to such transactions. Links Website of the “Principles for a Data Economy – Data Rights and Transactions”: https://principlesforadataeconomy.org Project page on the website of the American Law Institute: https://www.ali.org/projects/show/data-economy/ Project page on the website of the European Law Institute: https://www.europeanlawinstitute.eu/projects-publications/current-projects-upcoming-projects-and-other-activities/current-projects/data-economy/ Project page on “The ALI Adviser” blog: https://thealiadviser.org/data-economy/ Project Feature on Vimeo (by the American Law Institute): https://vimeo.com/524538197 Project webinar on Youtube (by the European Law Institute): https://www.youtube.com/watch?v=j43o85CU4WU References Principles Data management
Principles for a Data Economy
[ "Technology" ]
1,900
[ "Data management", "Data" ]
67,767,830
https://en.wikipedia.org/wiki/Ruthenium%28III%29%20bromide
Ruthenium(III) bromide is a chemical compound of ruthenium and bromine with the formula RuBr3. It is a dark brown solid that decomposes above 400 °C. Preparation Ruthenium(III) bromide can be prepared by the reaction of ruthenium metal with bromine at high temperature and pressure (720 K and 20 bar): 2 Ru + 3 Br2 → 2 RuBr3 Structure The crystal structures of ruthenium(III) bromide contain parallel (RuBr3)∞ columns. The compound undergoes a phase transition around 384 K (111 °C) from an ordered orthorhombic structure in space group Pnmm with alternating long and short Ru-Ru distances to a disordered hexagonal TiI3-like structure in space group P63/mcm with (on average) equal Ru-Ru distances. In the disordered polymorph, the Ru-Ru distances are not believed to actually be equal but appear so due to a random distribution of two distinct column conformations. Both polymorphs consist of hexagonally close-packed bromide ions. References Ruthenium(III) compounds Bromides Platinum group halides
Ruthenium(III) bromide
[ "Chemistry" ]
251
[ "Bromides", "Salts" ]
67,767,951
https://en.wikipedia.org/wiki/Stein%20discrepancy
A Stein discrepancy is a statistical divergence between two probability measures that is rooted in Stein's method. It was first formulated as a tool to assess the quality of Markov chain Monte Carlo samplers, but has since been used in diverse settings in statistics, machine learning and computer science. Definition Let be a measurable space and let be a set of measurable functions of the form . A natural notion of distance between two probability distributions , , defined on , is provided by an integral probability metric where for the purposes of exposition we assume that the expectations exist, and that the set is sufficiently rich that (1.1) is indeed a metric on the set of probability distributions on , i.e. if and only if . The choice of the set determines the topological properties of (1.1). However, for practical purposes the evaluation of (1.1) requires access to both and , often rendering direct computation of (1.1) impractical. Stein's method is a theoretical tool that can be used to bound (1.1). Specifically, we suppose that we can identify an operator and a set of real-valued functions in the domain of , both of which may be -dependent, such that for each there exists a solution to the Stein equation The operator is termed a Stein operator and the set is called a Stein set. Substituting (1.2) into (1.1), we obtain an upper bound . This resulting bound is called a Stein discrepancy. In contrast to the original integral probability metric , it may be possible to analyse or compute using expectations only with respect to the distribution . Examples Several different Stein discrepancies have been studied, with some of the most widely used presented next. Classical Stein discrepancy For a probability distribution with positive and differentiable density function on a convex set , whose boundary is denoted , the combination of the Langevin–Stein operator and the classical Stein set yields the classical Stein discrepancy. Here denotes the Euclidean norm and the Euclidean inner product. Here is the associated operator norm for matrices , and denotes the outward unit normal to at location . If then we interpret . In the univariate case , the classical Stein discrepancy can be computed exactly by solving a quadratically constrained quadratic program. Graph Stein discrepancy The first known computable Stein discrepancies were the graph Stein discrepancies (GSDs). Given a discrete distribution , one can define the graph with vertex set and edge set . From this graph, one can define the graph Stein set as The combination of the Langevin–Stein operator and the graph Stein set is called the graph Stein discrepancy (GSD). The GSD is actually the solution of a finite-dimensional linear program, with the size of as low as linear in , meaning that the GSD can be efficiently computed. Kernel Stein discrepancy The supremum arising in the definition of Stein discrepancy can be evaluated in closed form using a particular choice of Stein set. Indeed, let be the unit ball in a (possibly vector-valued) reproducing kernel Hilbert space with reproducing kernel , whose elements are in the domain of the Stein operator . Suppose that For each fixed , the map is a continuous linear functional on . . where the Stein operator acts on the first argument of and acts on the second argument. Then it can be shown that , where the random variables and in the expectation are independent. 
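The display equations in this article were likewise lost in extraction. As a hedged reconstruction in notation that is standard in the kernel Stein discrepancy literature (assumed here, not taken from the source), the closed form referred to in the preceding sentence writes the squared discrepancy as a double expectation of the Stein reproducing kernel:

```latex
% Assumed notation: k_0 is the Stein kernel obtained by applying the Stein operator
% to each argument of the base kernel k; X and X' are independent draws from Q.
\mathrm{SD}(Q)^2 \;=\; \mathbb{E}_{X,\,X' \sim Q}\left[\, k_0(X, X') \,\right]
```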
In particular, if the approximating measure is a discrete distribution, then the Stein discrepancy takes a closed form. A Stein discrepancy constructed in this manner is called a kernel Stein discrepancy (Liu, Q., Lee, J. D., & Jordan, M. I. (2016). A kernelized Stein discrepancy for goodness-of-fit tests and model evaluation. International Conference on Machine Learning, 276–284), and the construction is closely connected to the theory of kernel embedding of probability distributions. Let a reproducing kernel be given. For a probability distribution with positive and differentiable density function, the combination of the Langevin–Stein operator and the Stein set associated to the matrix-valued reproducing kernel yields a kernel Stein discrepancy whose Stein kernel involves gradients with respect to each of the two arguments of the reproducing kernel. Concretely, the inverse multiquadric kernel, parametrised by a scalar exponent and a symmetric positive definite matrix, is a common choice and leads to an explicit closed-form expression for the Stein kernel. Diffusion Stein discrepancy Diffusion Stein discrepancies generalize the Langevin Stein operator to a class of diffusion Stein operators, each representing an Itô diffusion that has the target distribution as its stationary distribution. Here, the operator involves a matrix-valued function determined by the infinitesimal generator of the diffusion. Other Stein discrepancies Additional Stein discrepancies have been developed for constrained domains, non-Euclidean domains, discrete domains, improved scalability, and gradient-free Stein discrepancies where derivatives of the density are circumvented. Furthermore, this approach is expanded into the Gradient-Free Kernel Conditional Stein Discrepancy, which targets conditional distributions. Properties The flexibility in the choice of Stein operator and Stein set in the construction of Stein discrepancy precludes general statements of a theoretical nature. However, much is known about particular Stein discrepancies. Computable without the normalisation constant Stein discrepancy can sometimes be computed in challenging settings where the probability distribution admits a probability density function (with respect to an appropriate reference measure) that is known only up to a normalisation constant: the unnormalised density and its derivative can be numerically evaluated, but the normalisation constant is not easily computed or approximated. Considering (2.1), we observe that the dependence of the Stein discrepancy on the target occurs only through the derivative of the log-density, which does not depend on the normalisation constant. Stein discrepancy as a statistical divergence A basic requirement of Stein discrepancy is that it is a statistical divergence, meaning that it is nonnegative and equals zero if and only if the two distributions coincide. This property can be shown to hold for classical Stein discrepancy and kernel Stein discrepancy, provided that appropriate regularity conditions hold. Convergence control A stronger property, compared to being a statistical divergence, is convergence control, meaning that convergence of the Stein discrepancy to zero implies that the approximating distribution converges to the target in a sense to be specified. For example, under appropriate regularity conditions, both the classical Stein discrepancy and graph Stein discrepancy enjoy Wasserstein convergence control, meaning that convergence of the discrepancy to zero implies that the Wasserstein metric between the approximating and target distributions converges to zero. For the kernel Stein discrepancy, weak convergence control has been established under regularity conditions on the target distribution and the reproducing kernel, which are applicable in particular to (2.1). Other well-known kernel choices, such as the Gaussian kernel, provably do not enjoy weak convergence control.
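Referring back to the kernel Stein discrepancy defined earlier in this section, the following NumPy sketch evaluates a one-dimensional (Langevin) kernel Stein discrepancy with an inverse multiquadric base kernel. The standard-normal target, the kernel parameters and the V-statistic estimator are illustrative assumptions rather than choices taken from the article.

```python
import numpy as np

def imq_kernel_terms(x, y, c=1.0, beta=0.5):
    # Inverse multiquadric kernel k(x,y) = (c^2 + (x-y)^2)^(-beta) and its
    # first and mixed second derivatives, broadcast over arrays.
    u = x - y
    s = c**2 + u**2
    k = s**(-beta)
    dkdx = -2.0 * beta * u * s**(-beta - 1)          # dk/dx
    dkdy = 2.0 * beta * u * s**(-beta - 1)           # dk/dy
    d2kdxdy = 2.0 * beta * s**(-beta - 1) - 4.0 * beta * (beta + 1) * u**2 * s**(-beta - 2)
    return k, dkdx, dkdy, d2kdxdy

def ksd(samples, score, c=1.0, beta=0.5):
    # V-statistic estimate of the Langevin kernel Stein discrepancy in one dimension.
    x = samples[:, None]
    y = samples[None, :]
    k, dkdx, dkdy, d2kdxdy = imq_kernel_terms(x, y, c, beta)
    sx, sy = score(x), score(y)                      # score(x) = d/dx log p(x)
    k0 = d2kdxdy + dkdx * sy + dkdy * sx + k * sx * sy
    return np.sqrt(k0.mean())

rng = np.random.default_rng(0)
score_std_normal = lambda x: -x                      # score of the N(0, 1) target
good = rng.normal(0.0, 1.0, size=500)                # samples from the target
bad = rng.normal(1.0, 1.0, size=500)                 # samples from a shifted distribution
print(ksd(good, score_std_normal), ksd(bad, score_std_normal))
```

Only the score function of the target is needed, consistent with the normalisation-constant-free property discussed above; the shifted sample should return a visibly larger discrepancy.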
Convergence detection The converse property to convergence control is convergence detection, meaning that whenever converges to in a sense to be specified. For example, under appropriate regularity conditions, classical Stein discrepancy enjoys a particular form of mean square convergence detection, meaning that whenever converges in mean-square to and converges in mean-square to . For kernel Stein discrepancy, Wasserstein convergence detection has been established, under appropriate regularity conditions on the distribution and the reproducing kernel . Applications of Stein discrepancy Several applications of Stein discrepancy have been proposed, some of which are now described. Optimal quantisation Given a probability distribution defined on a measurable space , the quantisation task is to select a small number of states such that the associated discrete distribution is an accurate approximation of in a sense to be specified. Stein points are the result of performing optimal quantisation via minimisation of Stein discrepancy: Under appropriate regularity conditions, it can be shown that as . Thus, if the Stein discrepancy enjoys convergence control, it follows that converges to . Extensions of this result, to allow for imperfect numerical optimisation, have also been derived. Sophisticated optimisation algorithms have been designed to perform efficient quantisation based on Stein discrepancy, including gradient flow algorithms that aim to minimise kernel Stein discrepancy over an appropriate space of probability measures. Optimal weighted approximation If one is allowed to consider weighted combinations of point masses, then more accurate approximation is possible compared to (3.1). For simplicity of exposition, suppose we are given a set of states . Then the optimal weighted combination of the point masses , i.e. which minimise Stein discrepancy can be obtained in closed form when a kernel Stein discrepancy is used. Some authors consider imposing, in addition, a non-negativity constraint on the weights, i.e. . However, in both cases computing the optimal weights can involve solving linear systems of equations that are numerically ill-conditioned. Interestingly, it has been shown that greedy approximation of using an unweighted combination of states can reduce this computational requirement. In particular, the greedy Stein thinning algorithm has been shown to satisfy an error bound. Non-myopic and mini-batch generalisations of the greedy algorithm have been demonstrated to yield further improvement in approximation quality relative to computational cost. Variational inference Stein discrepancy has been exploited as a variational objective in variational Bayesian methods. Given a collection of probability distributions on , parametrised by , one can seek the distribution in this collection that best approximates a distribution of interest: A possible advantage of Stein discrepancy in this context, compared to the traditional Kullback–Leibler variational objective, is that need not be absolutely continuous with respect to in order for to be well-defined. This property can be used to circumvent the use of flow-based generative models, for example, which impose diffeomorphism constraints in order to enforce absolute continuity of and . Statistical estimation Stein discrepancy has been proposed as a tool to fit parametric statistical models to data. Given a dataset , consider the associated discrete distribution . 
For a given parametric collection of probability distributions on , one can estimate a value of the parameter which is compatible with the dataset using a minimum Stein discrepancy estimator. The approach is closely related to the framework of minimum distance estimation, with the role of the "distance" being played by the Stein discrepancy. Alternatively, a generalised Bayesian approach to estimation of the parameter can be considered where, given a prior probability distribution with density function , (with respect to an appropriate reference measure on ), one constructs a generalised posterior with probability density function for some to be specified or determined. Hypothesis testing The Stein discrepancy has also been used as a test statistic for performing goodness-of-fit testing and comparing latent variable models. Since the aforementioned tests have a computational cost quadratic in the sample size, alternatives have been developed with (near-)linear runtimes. See also Stein's method Divergence (statistics) References Statistical distance Theory of probability distributions
Stein discrepancy
[ "Physics" ]
2,306
[ "Physical quantities", "Statistical distance", "Distance" ]
67,768,433
https://en.wikipedia.org/wiki/Tecla%20house
The Tecla house is a prototype 3D-printed eco residential building made out of clay. The first model was designed by the Italian architecture studio Mario Cucinella Architects (MCA) and engineered and built by Italian 3D printing specialists WASP by April 2021, becoming the world's first house 3D-printed entirely from a mixture made from mainly local earth and water. Its name is a portmanteau of "technology" and "clay" and that of one of Italo Calvino's Invisible Cities whose construction never ceases. History The project was reportedly first conceived by WASP Founder Massimo Moretti and, via research of the Cucinella-founded training center School of Sustainability (SOS), MCA's founder Mario Cucinella. For construction, WASP's 3D printing technology Crane WASP was used. This 3D printer was used for the similar building "GAIA" – the first 3D printed earth building – completed in 2018, about 7 years after WASP's inception in 2012. Printing started in September 2019. It was developed as a solution that addresses urgent problems, like the climate crisis, via application of both ancient materials and techniques, and novel technologies. Technology and construction For the building WASP's 3D printing technology Crane WASP was used. It is the first 3D printer that can print from raw earth and is modular and multilevel. It consists of software and (in 2021) a stationary fixture with two synchronized printer arms that can simultaneously print an area of 50 m² each. The material consists of local soil mixed with water, fibers from rice husks and a binder. The infilling material for thermal insulation consists of rice husk and rice straw from rice cultivation waste. The composition of the mixture and filling of the walls can be optimized depending on local climate. An early phase of the construction is the digging and mixing phase in which a digger digs up local soil which is then analyzed and mixed with water and additives. The house is made up of two modules up to 4.2 m in height, has a floor area of about 60 m² and can be built with 200 hours of printing. Its construction required 7,000 lines of G-code, 350 layers of 12 mm each and 150 km of extrusion from the printer arms, with an average power consumption of less than 6 kW (a total of ~1,200 kWh for the print). As with any 3D printed product, the design can be modified for improvements and flexible adaptation to different purposes and environments. The buildings are dome-shaped, have a large glass door and are topped with ceiling windows. As of 2021, the only prototype has no windows and no paint on its walls. It was built with collaboration from a number of Italian companies and Massa Lombarda as an institutional partner. Uses and problems The use of local natural materials reduces waste and greenhouse gas emissions. Data and projections indicate an increasing relevance of buildings that are both low-cost and sustainable, notably that, according to a 2020 UN report, building and construction are responsible for ~38% of all energy-related carbon dioxide emissions, that, partly due to global warming, migration crises are expected to intensify in the future and that the UN estimates that by 2030, ~3 billion people or ~40% of the world's population will require access to accessible, affordable housing. 
Buildings like the Tecla prototype could be very cheap, well insulated, stable and weatherproof, climate-adaptable, customizable and rapidly produced, while requiring only a small amount of easily learnable manual labor and less energy, and avoiding the carbon emissions of concrete. They could reduce homelessness, help enable intentional communities such as autonomous eco-communities, and provide housing for victims of natural disasters as well as – via knowledge and technology transfer to local people – for would-be emigrants to Europe near their homes, rather than controversially in distant countries. The prototype is undergoing structural and thermal performance testing. The machines needed for construction take up as much space as their shipping container and are not yet mass-produced or inexpensive enough for ordinary individuals to afford. Disadvantages of printing with clay mixtures include height limitations or horizontal space requirements, delays from having to let the mixture dry under current processes, and other problems related to the novelty of the product, such as connection to plumbing systems. While such buildings are unlikely to be relevant to overpopulation crises such as in China, their early implementations may enable societal innovation through self-sufficient (autarkic) communities and relieve displacement and migration pressure via use by citizens of African and Middle Eastern countries. See also Construction 3D printing Open Source Ecology Home construction Building science Off-grid Green economy Equipment rental Effects of climate change#Migration Sustainable architecture Sustainable city Sustainable building Sustainable design Green infrastructure Community garden References External links Environmental technology Building technology Affordable housing 2021 introductions 2021 in technology 2021 neologisms 3D printing Italian inventions Automation in construction Clay Buildings and structures in Emilia-Romagna Houses in Italy
Tecla house
[ "Engineering" ]
1,001
[ "Construction", "Automation in construction", "Automation" ]
67,769,253
https://en.wikipedia.org/wiki/IntFOLD
IntFOLD (Integrated Fold Recognition) is a fully automated, integrated pipeline for the prediction of 3D structure and function from amino acid sequences. The pipeline is deployed as a publicly available web server. The core of the server method is quality assessment using built-in accuracy self-estimates (ASE), which improves the prediction performance of 3D models using ModFOLD. Description The IntFOLD server provides tertiary structure prediction at competitive accuracy and combines cutting-edge methods including IntFOLD-TS for generation of 3D models, ModFOLD for 3D model quality estimation, ReFOLD for refinement of 3D models, DisoCLUST for disorder prediction, DomFOLD for structural domain prediction, and FunFOLD for protein ligand binding site prediction. The integration of these tools enables users to access all related information in a single pipeline. The IntFOLD web server has completed over 200,000 structure predictions since January 2010. The only required input for the prediction of a protein's 3D structure and function is its amino acid sequence. The IntFOLD output is presented via a user-friendly interface aimed at life scientists. The raw data are also formatted according to Critical Assessment of Methods for Protein Structure Prediction (CASP) standards, with a detailed help page. Performance in CASP and CAMEO experiments The IntFOLD method was first benchmarked in Critical Assessment of Techniques for Protein Structure Prediction 9 (CASP9) and ranked among the top 5. The IntFOLD server has consolidated its performance in the subsequent CASP experiments. Its performance is being continually evaluated in the Continuous Automated Model Evaluation (CAMEO) experiment. Applications of IntFOLD server Public Health IntFOLD was used to generate 3D models of SARS-CoV-2 targets for the CASP Commons COVID-19 initiative and elsewhere, accelerating the development of vaccines and other therapeutics during the COVID-19 pandemic. In another aspect of chronic disease, IntFOLD was used to model HEV PCP, an essential protein of the Hepatitis E virus, which causes Hepatitis E disease. Additionally, IntFOLD was used to model a disordered region of the bovine milk αS2-casein protein, which has been implicated in the formation of amyloidogenic fibrils, some of which are known to be major causes of neurodegenerative diseases. Food Security IntFOLD has been used in different aspects of food security. For instance, it has been used to model effector protein molecules that cause fungal disease in barley. Furthermore, it has been applied in modelling several proteins involved in the functioning of key systems in Atlantic salmon, and the HaACBP1 protein, which is vital for the development and growth of sunflower, a key crop plant used for the production of widely used cooking oil. IntFOLD was also used to model chitin proteins in Podosphaera xanthii, a causal agent of the fungal disease cucurbit powdery mildew, which hampers crop productivity. Contribution to Protein Structure Prediction Methods Development IntFOLD has been used as one of the standard server-based methods for validating the performance of newer methods for the prediction of 3D protein models. This is important in advancing the field of structural bioinformatics. References Bioinformatics software
IntFOLD
[ "Biology" ]
640
[ "Bioinformatics", "Bioinformatics software" ]
67,769,798
https://en.wikipedia.org/wiki/2021%20San%20Jose%20shooting
On May 26, 2021, a mass shooting occurred at a Santa Clara Valley Transportation Authority (VTA) rail yard in San Jose, California, United States. A 57-year-old VTA employee, Samuel James Cassidy, shot and killed nine VTA employees before killing himself. It is the deadliest mass shooting in the history of the San Francisco Bay Area. As a result of the shooting, service throughout the VTA light rail system was suspended or limited for months. The city announced and later passed a package of policy proposals intended to curb gun violence, including mandatory liability insurance (the first law of its kind in the nation) and fees for gun owners. Background The VTA is a public transportation agency that operates bus and light rail services throughout Santa Clara County and employs about 2,000 workers. The shooting took place at the VTA's Guadalupe Division facility, which is located in the Civic Center neighborhood of San Jose, near the Santa Clara County Sheriff's Office and San Jose Police Department headquarters. The facility consists of five separate buildings, including the control center for bus and rail operations, surrounding the agency's light rail vehicle storage and maintenance yard. A total of 379 VTA employees were employed at the yard. California's gun laws are among the strictest in the country. Among other gun laws, the state has a red flag law that enables law enforcement authorities to seize a person's firearms based on a gun violence restraining order. In 2019, 122 such restraining orders were requested in Santa Clara County. Events Shooting and house arson The gunman left his house at 5:39 a.m. PDT (UTC−07) the day of the shooting, having set it on fire. No one was inside the residence at the time. According to police, he coordinated the fire with the shooting and ignited it by placing ammunition inside a pot on his stove, surrounding the pot with accelerants, and then turning on the stove. At 6:33 a.m. the San Jose Fire Department received a call to respond to the VTA facility, though the first caller did not mention anything about an active shooter incident, according to dispatch audio. A minute later, Santa Clara County authorities received 9-1-1 calls about shots being fired at the VTA facility. Sheriff's deputies and city police officers responded from their nearby offices. When they arrived at 6:35 a.m., they found multiple people shot. The shooting occurred in two separate buildings at the maintenance yard during the busiest time of day: a shift change in which employees from the overnight and morning shifts overlapped. Over 100 people were at the facility at the time of the shooting, according to the sheriff. The shooting began in a conference room in Building B, on the western side of the yard, during a power crew meeting with the local Amalgamated Transit Union president, who was spared. The gunman then walked over to Building A on the eastern side, where he continued firing. Police and witnesses later said the gunman targeted some of his victims and spared others from being shot. At 6:36 a.m., the fire at the gunman's house was first reported by a passerby. Two minutes later, the fire department responded to the home in South San Jose, about away from the VTA facility, and discovered hundreds of rounds of ammunition and a gas can there. The house sustained heavy damage from the fire, with the second floor collapsing from the heat of the blaze. 
At the same time the house fire was reported, the fire department received another call confirming there was an active shooter at the facility. At 6:38 a.m., responding officers heard more shots being fired. About ten minutes after the first 9-1-1 calls were received, dispatchers reported the final sounds of gunshots. At 6:43 a.m., officers closed in on the gunman as he killed himself on the third floor of Building A, between administrative offices and the operations control room. According to the sheriff on June 1, the gunman fatally shot himself twice: first under the chin, then in the side of the head. Immediate aftermath At 7:12 a.m., the sheriff's office instructed the public to stay away from the vicinity of the facility. At 8:08 a.m., the gunman was confirmed to be down. He had fired a total of 39 rounds from three semiautomatic handguns, and had been carrying 32 high-capacity magazines, some with 12 rounds and others with 15. About 40 people were rescued from the area by law enforcement. Police received reports of explosive devices inside the building, prompting a bomb squad to investigate. A locker belonging to the gunman was found to contain suspected materials for bombs and detonator cords, which were later deemed not to be dangerous. Agents from the FBI and the Bureau of Alcohol, Tobacco, Firearms and Explosives also responded. The FBI led the shooting site investigation, which concluded on May 31. Sheriff's deputies searched the gunman's house for three days, finding a total of 12 guns, 25,000 rounds of ammunition, and a dozen Molotov cocktails. As a precaution, bomb technicians also detonated a suspicious device at the house, using a specialized containment device that prevents the spread of shrapnel, but it turned out to be inert. Victims There were ten fatalities in the shooting, including Cassidy. All were male VTA employees. Their ages ranged from 29 to 63 years old, and many of them were longtime employees. The victims' names were Paul Delacruz Megia, 42; Taptejdeep Singh, 36; Adrian Balleza, 29; Jose Dejesus Hernandez, 35; Timothy Michael Romo, 49; Michael Joseph Rudometkin, 40; Abdolvahab Alaghmandan, 63; Lars Kepler Lane, 63; and Alex Ward Fritch, 49. Eight people, including the gunman, died at the scene, while two of the victims were rushed to Santa Clara Valley Medical Center in critical condition, but one was declared dead upon arrival and the other died later that day. Six of the victims died in Building B, while the other three died in Building A. Before their deaths, some of the victims had led coworkers to safety. Autopsy reports confirmed that the manner of death for all nine victims was homicide, and that the cause of death for all nine victims was multiple gunshot wounds. It was the deadliest mass shooting in the history of the Bay Area, exceeding the death toll of the 101 California Street shooting that occurred at a law firm in San Francisco in 1993, in which nine people including the gunman were killed. In August 2021, a railyard employee who had witnessed the shooting died by suicide on his first day back to work following the shooting. Perpetrator Police identified the gunman as 57-year-old VTA employee Samuel James Cassidy. He had been employed at the VTA since 2001; for his first two years, he was an electro-mechanic, and he was eventually promoted to a substation maintainer position in 2014. Cassidy owned numerous registered firearms, including shotguns and long rifles. He used three semiautomatic handguns in the shooting, which were legally obtained. 
It was unclear if he also legally owned the high-capacity magazines used in the shooting; they are prohibited in California unless they were obtained before January 1, 2000, and the buyer was not otherwise prohibited from possessing firearms. Cassidy had a "minor criminal history" and was charged in 1983 with misdemeanor obstruction for resisting a peace officer. Possible motives Cassidy's ex-wife, who had been married to him for ten years before their divorce in 2005, described him as having anger issues and often being angry at his co-workers and at the VTA for what he believed to be its unfair work assignments. She also said that he had talked about killing people at his workplace more than a decade ago. According to an initial review by the VTA, Cassidy had a pattern of insubordination and had gotten into verbal altercations with coworkers on at least four separate occasions. Though the incidents were elevated to management and he faced disciplinary action, he had never been formally disciplined for any of them, and managers even defended his work. According to coworkers, Cassidy was angered over a change in policy that ended cash payouts for unused vacation days and, in April 2021, aired his grievances over the radio communication system for light rail operators. Cassidy's sister said that she suspected something happened at work on May 25 that motivated her brother to commit the shooting the day after. 2016 detainment An August 2016 memo by the United States Department of Homeland Security (DHS) described how, after taking a trip to the Philippines, Cassidy was detained by officers with U.S. Customs and Border Protection (CBP) for a secondary inspection. They subsequently found books about terrorism in his possession, along with a memo book filled with notes about his hatred of the VTA. He was asked whether he had problems with anyone at his workplace, and he said he did not. According to a DHS official, Cassidy was detained by the officers partly because of red flags regarding sex tourism, but nothing relating to sex tourism was found, and the detention did not result in an arrest or apparently any follow-up action. A CBP report on the encounter, released in July 2021, revealed Cassidy had harbored "dark thoughts about harming" two specific people, whose names were redacted from the report; it is unclear if they were connected to VTA or among the shooting victims. However, the CBP agents apparently prioritized sex tourism in their questioning of Cassidy, noting "sex friendly" hotels that were mentioned in his writings and his text messages with women in the Philippines. San Jose officials later said the authorities were not informed of the detainment by federal officials. John Sandweg, the former acting director of U.S. Immigration and Customs Enforcement, said that there is no procedure for customs officials to alert local law enforcement agencies about a U.S. citizen detained as a safety threat. Santa Clara County district attorney Jeffrey F. Rosen said that, had his office been alerted about Cassidy's detainment, they would have had enough evidence to obtain a gun violence restraining order and seize his weapons. Aftermath and reactions Memorials and victim assistance efforts A hotline was set up for VTA employees and family members for additional information about the shooting and the victims. The VTA also announced plans to help survivors and victims' families and partner with them on erecting a public memorial to the victims. 
Some employees criticized the VTA's efforts to help them after the shooting; they claimed the Authority did not actually care for them and attributed the assistance efforts to the local Amalgamated Transit Union instead. Mayor Sam Liccardo said it was a "horrific day" for the city and the VTA, and he expressed his condolences to the victims and their families. He also emphasized that VTA employees have been essential workers during the COVID-19 pandemic. President Joe Biden and Vice President Kamala Harris both urged Congress to take action on gun control legislation. Biden ordered flags to be lowered to half-staff and called the shooting a "horrific tragedy". Governor Gavin Newsom made similar remarks during a visit to San Jose. The day after the shooting, a vigil was held outside San Jose City Hall and attended by hundreds, including the victims' families and many VTA employees, who were dressed in their work attire. On June 22, the San Jose City Council made plans to introduce a resolution commemorating the victims' lives. On June 28, the California State Legislature included $20 million allocated to the VTA in the state budget, as part of an effort to help it recover from the shooting. The funds are intended to help the VTA "provide mental health resources to employees and their families, resume light rail service and improve safety upgrades at the Guadalupe Rail Yard", where the shooting took place. VTA service interruptions and repairs VTA light rail service was suspended on the day of the shooting and replaced by a bus bridge. Due to a staffing shortage and the inaccessibility of the facility where the shooting occurred, the VTA discontinued the bus bridge on June 1 in favor of regular bus routes and confirmed that light rail service would be suspended indefinitely. As a form of mutual aid, the San Francisco Municipal Railway, Golden Gate Transit, SamTrans, AC Transit, and the Santa Cruz Metropolitan Transit District sent 20 to 30 buses daily to Santa Clara County to supplement bus service while VTA workers attended funeral services for the victims. The VTA resumed limited passenger service in stages throughout August and September. On August 2, a free bus bridge began serving portions of the Blue and Orange lines between Paseo de San Antonio and Milpitas stations. On August 29, light rail service returned to the Orange Line and part of the Green Line. On September 2, service along the Green and Blue lines was extended southward through downtown San Jose. The remainder of the Blue Line was restored on September 12, followed by the Green Line south of downtown on September 18. The shooting caused significant damage to light rail operation and control equipment. Since the shooting, bus operations were relocated to a temporary facility as the VTA took steps to restore light rail service, including retraining and recertifying drivers and giving operators tours of the Guadalupe yard. The agency has not yet decided whether to remodel or to demolish and rebuild the buildings damaged during the shooting. It was expected to operate out of temporary facilities for three to five years. Investigations and legal proceedings The Santa Clara County Sheriff's Office and an outside law firm retained by the VTA are both conducting investigations into the shooting. A case is open in probate court regarding Cassidy's estate, including the home he set on fire. The families of at least seven of the victims plan to file a lawsuit in 2022. 
Changes to San Jose's gun laws On June 8, Liccardo and four city councilmembers announced ten harm reduction policy proposals intended to curb gun violence in the city, including two policies that would be the first of any city or state in the country: gun owners would be required to carry liability insurance and pay an annual fee to compensate the city for the emergency response and other public costs associated with unintentional gun-related deaths, injuries, or property damage. A gun buyback program was also proposed. Some of these proposals had originally been put forward in 2019 after the Gilroy Garlic Festival shooting, the previous mass shooting to occur in Santa Clara County, but they were put on hold due to the COVID-19 pandemic. The liability insurance and annual fee policies were criticized by the executive director of Gun Owners of California, who said California's preemption laws gave the city no authority to enact differing gun laws. On June 15, the San Jose City Council unanimously approved an ordinance requiring, among other things, retailers to record video and audio footage of gun sales, with the intention of preventing straw purchases of firearms. On June 29, the City Council unanimously approved sending the ten ordinances to the City Attorney for review and to bring them back to council in September 2021. On January 25, 2022, the City Council passed the Gun Harm Reduction Ordinance, which would impose the first gun fee and gun liability insurance requirement in the country, prompting the National Association for Gun Rights (NAGR) to file a lawsuit against the city minutes later. On September 30, 2022, District Judge Beth Labson Freeman issued a partial dismissal of NAGR's lawsuit against San Jose's gun control ordinance, which had been consolidated with a suit by the Howard Jarvis Taxpayers Association, with leave to amend in part and without leave to amend in part. On July 13, 2023, she dismissed the consolidated lawsuit again with leave to amend in part and without leave to amend in part. A date for the gun buyback program had not been announced. Policy proposals San Jose officials invited other California cities to join an amicus brief supporting the state's appeal of Miller v. Bonta, which by coincidence struck down the state's 1989 assault weapons ban on June 4, several days after the shooting. Although Cassidy did not use an assault weapon, Attorney General Rob Bonta cited the San Jose shooting in a statement opposing the ruling. Representative Zoe Lofgren of San Jose emphasized the need to pass the Enhanced Background Checks Act and the Bipartisan Background Checks Act, which are unlikely to overcome a Senate filibuster. Following reports that local law enforcement agencies were not informed by the Homeland Security Department of Cassidy's 2016 detainment, a department spokeswoman said the agency was working to improve information-sharing with other law enforcement agencies. Issues with information-sharing between agencies had been a problem in recent years and have sometimes involved high-profile incidents such as the storming of the U.S. Capitol and the Stoneman Douglas High School shooting. Liccardo met with Biden on July 12 to discuss strategies on combating gun violence in the U.S. Following the shooting and rising anti-Asian sentiment in the United States, a petition was started demanding that officials address concerns raised by employees of the San Jose Public Library over their safety. 
The petition cited the library's lack of security infrastructure and procedures as reasons for the employees' concerns. See also List of mass shootings in the United States in 2021 List of homicides in California List of shootings in California References External links KGO-TV list of victims 2020s crimes in California 2021 fires in the United States 2021 in California 2021 mass shootings in the United States 2021 murders in the United States 2021 shooting Arson in 2021 Arson in California Attacks on buildings and structures in 2021 Attacks on buildings and structures in California Attacks on government buildings and structures in the United States Attacks on railway systems Deaths by firearm in California Mass murder in 2021 Mass murder in California Mass murder in the United States in the 2020s Mass shootings in California Mass shootings in the United States May 2021 crimes in the United States Murder in the San Francisco Bay Area Murder–suicides in California Santa Clara Valley Transportation Authority Workplace shootings in the United States
2021 San Jose shooting
[ "Technology" ]
3,739
[ "Railway accidents and incidents", "Attacks on railway systems" ]
67,770,433
https://en.wikipedia.org/wiki/Cistus%20%C3%97%20skanbergii
Cistus × skanbergii is a compact, bushy variety of rockrose, with pale pink flowers and grey-green foliage. It is also known by the common name dwarf pink rockrose. Taxonomy It was first described as a species by Italian botanist Michele Lojacono Pojero, who dedicated it "to Mr. A. Skanberg of Stockholm, an excellent botanist and most beloved friend". It is a natural hybrid between Cistus monspeliensis and Cistus parviflorus, which originates where the ranges of the two species overlap in Greece and Sicily. Description It is a low, dense subshrub, typically 2–3 feet (occasionally 4 feet) tall and 4–5 feet (occasionally 6 feet) wide, with soft, grey-green leaves. From spring to early summer, the 1 inch wide, pale pink flowers are borne with yellow stamens in the center. References skanbergii Ornamental plant cultivars Hybrid plants
Cistus × skanbergii
[ "Biology" ]
187
[ "Hybrid plants", "Plants", "Hybrid organisms" ]
66,260,254
https://en.wikipedia.org/wiki/Mixed-precision%20arithmetic
Mixed-precision arithmetic is a form of floating-point arithmetic that uses numbers with varying widths in a single operation. Overview A common usage of mixed-precision arithmetic is for operating on inaccurate numbers with a small width and expanding them to a larger, more accurate representation. For example, two half-precision or bfloat16 (16-bit) floating-point numbers may be multiplied together to result in a more accurate single-precision (32-bit) float. In this way, mixed-precision arithmetic approximates arbitrary-precision arithmetic, albeit with a low number of possible precisions. Iterative algorithms (like gradient descent) are good candidates for mixed-precision arithmetic. In an iterative algorithm like square root, a coarse integral guess can be made and refined over many iterations until the error in precision makes it such that the smallest addition or subtraction to the guess is still too coarse to be an acceptable answer. When this happens, the precision can be increased, which allows for smaller increments to be used for the approximation. Supercomputers such as Summit utilize mixed-precision arithmetic to be more efficient with regard to memory and processing time, as well as power consumption. Floating point format A floating-point number is typically packed into a single bit-string, as the sign bit, the exponent field, and the significand or mantissa, from left to right. As an example, an IEEE 754 standard 32-bit float ("FP32", "float32", or "binary32") is packed, from left to right, as 1 sign bit, 8 exponent bits and 23 significand bits. The IEEE 754 binary floats include binary16 (half precision: 1 sign, 5 exponent and 10 significand bits), binary32 (single precision: 1, 8 and 23 bits), binary64 (double precision: 1, 11 and 52 bits) and binary128 (quadruple precision: 1, 15 and 112 bits); the related bfloat16 format (1, 8 and 7 bits) is not part of IEEE 754. Machine learning Mixed-precision arithmetic is used in the field of machine learning, since gradient descent algorithms can use coarse and efficient half-precision floats for certain tasks, but can be more accurate if they use more precise but slower single-precision floats. Some platforms, including Nvidia, Intel, and AMD CPUs and GPUs, provide mixed-precision arithmetic for this purpose, using coarse floats when possible, but expanding them to higher precision when necessary. Automatic mixed precision PyTorch implements automatic mixed-precision (AMP), which performs autocasting, gradient scaling, and loss scaling. The weights are stored in a master copy at a high precision, usually in FP32. Autocasting means automatically converting a floating-point number between different precisions, such as from FP32 to FP16, during training. For example, matrix multiplications can often be performed in FP16 without loss of accuracy, even if the master copy weights are stored in FP32. Low-precision weights are used during the forward pass. Gradient scaling means multiplying gradients by a constant factor during training, typically before the weight optimizer update. This is done to prevent the gradients from underflowing to zero when using low-precision data types like FP16. Mathematically, if the unscaled gradient is g, the scaled gradient is λg, where λ is the scaling factor. Within the optimizer update, the scaled gradient is cast to a higher precision before it is scaled down (no longer underflowing, as it is in a higher precision) to update the weights. Loss scaling means multiplying the loss function by a constant factor during training, typically before backpropagation. This is done to prevent the gradients from underflowing to zero when using low-precision data types. If the unscaled loss is L, the scaled loss is λL, where λ is the scaling factor. 
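As a concrete illustration of these components, the following is a minimal sketch of a PyTorch training loop using automatic mixed precision; the toy linear model, synthetic data and hyperparameters are assumptions made for illustration (and a CUDA device is assumed to be available), while torch.cuda.amp.autocast and torch.cuda.amp.GradScaler are the library's AMP utilities described above.

```python
import torch

device = "cuda"                                   # AMP with GradScaler targets CUDA
model = torch.nn.Linear(128, 10).to(device)       # small stand-in model (an assumption)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
scaler = torch.cuda.amp.GradScaler()              # implements gradient/loss scaling

for step in range(100):
    inputs = torch.randn(32, 128, device=device)  # synthetic batch (an assumption)
    targets = torch.randint(0, 10, (32,), device=device)
    optimizer.zero_grad()
    # Autocasting: eligible ops (e.g. matmuls) run in FP16, others stay in FP32;
    # the master copy of the weights in `model` remains FP32 throughout.
    with torch.cuda.amp.autocast():
        loss = torch.nn.functional.cross_entropy(model(inputs), targets)
    # Backward pass on the scaled loss, so small gradients do not underflow in FP16.
    scaler.scale(loss).backward()
    # Unscales the gradients and skips the optimizer step if they contain inf/NaN.
    scaler.step(optimizer)
    # Updates the scale factor: decreased after an overflow, grown periodically.
    scaler.update()
```

The scaler multiplies the loss before the backward pass (loss scaling), then unscales the gradients and checks them for infinities or NaNs before the optimizer step, skipping the step and shrinking the scale factor when an overflow is detected.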
Since gradient scaling and loss scaling are mathematically equivalent by the linearity of differentiation (∇(λL) = λ∇L), loss scaling is an implementation of gradient scaling. PyTorch AMP uses exponential backoff to automatically adjust the scale factor for loss scaling. That is, it periodically increases the scale factor. Whenever the gradients contain a NaN (indicating overflow), the weight update is skipped, and the scale factor is decreased. References Floating point Computer arithmetic
Mixed-precision arithmetic
[ "Mathematics" ]
803
[ "Computer arithmetic", "Arithmetic" ]
66,260,789
https://en.wikipedia.org/wiki/Eva%20Haifa%20Giraud
Eva Haifa Giraud (born 1984) is a cultural and critical theorist and a scholar of media studies and feminist science studies whose work concerns activism and non-anthropocentric theory. She is presently a senior lecturer in the Department of Sociological Studies at the University of Sheffield. Her 2019 monograph What Comes After Entanglement? Activism, Anthropocentrism, and an Ethics of Exclusion was published by Duke University Press; her second sole-authored book, Veganism: Politics, Practice and Theory, was published in 2021 by Bloomsbury. Education and career Giraud read for a Master of Arts (MA) in English Literature at the University of Edinburgh from 2002 to 2006, and then went on to read for an MA (2006–7) and PhD (2007–11) in Critical Theory at the Centre for Critical Theory, University of Nottingham. Her doctoral thesis was entitled Articulating Animal Rights: Activism, Networks and Anthropocentrism. She worked at Nottingham for three years, before joining Keele University in 2014. In 2019, she published a monograph entitled What Comes After Entanglement? Activism, Anthropocentrism, and an Ethics of Exclusion with Duke University Press. Giraud joined the Department of Sociological Studies at the University of Sheffield as a Senior Lecturer in Digital Media & Society in 2021. In the same year, Bloomsbury Academic published her book Veganism: Politics, Practice, and Theory. Research Giraud's research concerns the use of media by activists (including animal activists, food activists, environmental activists, and anti-racist activists); non-anthropocentric theory exploring how to live in ways that reject human exceptionalism; and online hate speech. In What Comes After Entanglement?, Giraud addresses the theoretical idea of entanglement, which cautions theorists and activists to "avoid proposing simple solutions to the world's complex problems". Giraud explores case studies of activism including anti-McDonald's campaigning, anti-G8 campaigning, the SPEAK campaign, and food activism in Nottingham, arguing that there is a tension between, on the one hand, theoretical work on entanglement, and, on the other, the political practice of activists. She argues for an "ethics of exclusion", which recognises that certain ways of being are inevitably foreclosed by decisions made, and that decisions sometimes have to be made. She thus challenges the charges made by certain theorists that activist decision-making is essentialist and insufficiently attentive to the world's complexity; her descriptions of protest ecologies and their everyday practices of decision-making and labour organisations "trouble the notion" of staying with the trouble. One reviewer said that the book would be valuable for scholars of a wide range of disciplines; another drew attention to the ongoing conversation that Giraud was conducting with Donna Haraway, a major influence on Giraud's work; and a third argued that one of the book's major contributions was emphasising the difference between animal studies and critical animal studies. Selected works Elizabeth Poole and Eva Giraud, eds. (2019). Right Wing Populism and Mediated Activism: Creative Responses & Counter-narrative. London: Open Library of Humanities. Eva Giraud (2019). What Comes After Entanglement? Activism, Anthropocentrism and an Ethics of Exclusion. Durham, NC: Duke University Press. Eva Giraud (2021). Veganism: Politics, Practice and Theory. London: Bloomsbury. 
References External links Eva Giraud at Keele University Eva Giraud at Humanities Commons Eva Giraud on Twitter Living people 1984 births Alumni of the University of Edinburgh Alumni of the University of Nottingham Academics of Keele University Academics of the University of Sheffield Critical theorists Science and technology studies scholars Feminist theorists Scholars of veganism
Eva Haifa Giraud
[ "Technology" ]
794
[ "Science and technology studies", "Science and technology studies scholars" ]
66,261,527
https://en.wikipedia.org/wiki/List%20of%20plant%20genera%20named%20for%20people%20%28D%E2%80%93J%29
Since the first printing of Carl Linnaeus's Species Plantarum in 1753, plants have been assigned one epithet or name for their species and one name for their genus, a grouping of related species. Thousands of plants have been named for people, including botanists and their colleagues, plant collectors, horticulturists, explorers, rulers, politicians, clerics, doctors, philosophers and scientists. Even before Linnaeus, botanists such as Joseph Pitton de Tournefort, Charles Plumier and Pier Antonio Micheli were naming plants for people, sometimes in gratitude for the financial support of their patrons. Early works researching the naming of plant genera include an 1810 glossary by and an etymological dictionary in two editions (1853 and 1856) by Georg Christian Wittstein. Modern works include The Gardener's Botanical by Ross Bayton, Index of Eponymic Plant Names and Encyclopedia of Eponymic Plant Names by Lotte Burkhardt, Plants of the World by Maarten J. M. Christenhusz (lead author), Michael F. Fay and Mark W. Chase, The A to Z of Plant Names by Allan J. Coombes, the four-volume CRC World Dictionary of Plant Names by Umberto Quattrocchi, and Stearn's Dictionary of Plant Names for Gardeners by William T. Stearn; these supply the seed-bearing genera listed in the first column below. Excluded from this list are genus names not accepted (as of January 2021) at Plants of the World Online, which includes updates to Plants of the World (2017). Key Ba = listed in Bayton's The Gardener's Botanical Bt = listed in Burkhardt's Encyclopedia of Eponymic Plant Names Bu = listed in Burkhardt's Index of Eponymic Plant Names Ch = listed in Christenhusz's Plants of the World Co = listed in Coombes's The A to Z of Plant Names Qu = listed in Quattrocchi's CRC World Dictionary of Plant Names St = listed in Stearn's Dictionary of Plant Names for Gardeners In addition, Burkhardt's Index is used as a reference for every row in the table, except as noted. Genera See also List of plant genus names with etymologies: A–C, D–K, L–P, Q–Z List of plant family names with etymologies Notes Citations References See http://creativecommons.org/licenses/by/4.0/ for license. See http://creativecommons.org/licenses/by/4.0/ for license. See http://www.plantsoftheworldonline.org/terms-and-conditions for license. Further reading Systematic Systematic Taxonomy (biology) Glossaries of biology Gardening lists Genera named for people (D-J) Named for people (D-J) Wikipedia glossaries using tables Lists of eponyms
List of plant genera named for people (D–J)
[ "Biology" ]
611
[ "Lists of plants", "Plants", "Lists of biota", "Taxonomy (biology)", "Taxonomic lists", "Glossaries of biology" ]
66,261,739
https://en.wikipedia.org/wiki/List%20of%20plant%20genera%20named%20for%20people%20%28Q%E2%80%93Z%29
Since the first printing of Carl Linnaeus's Species Plantarum in 1753, plants have been assigned one epithet or name for their species and one name for their genus, a grouping of related species. Thousands of plants have been named for people, including botanists and their colleagues, plant collectors, horticulturists, explorers, rulers, politicians, clerics, doctors, philosophers and scientists. Even before Linnaeus, botanists such as Joseph Pitton de Tournefort, Charles Plumier and Pier Antonio Micheli were naming plants for people, sometimes in gratitude for the financial support of their patrons. Early works researching the naming of plant genera include an 1810 glossary by and an etymological dictionary in two editions (1853 and 1856) by Georg Christian Wittstein. Modern works include The Gardener's Botanical by Ross Bayton, Index of Eponymic Plant Names and Encyclopedia of Eponymic Plant Names by Lotte Burkhardt, Plants of the World by Maarten J. M. Christenhusz (lead author), Michael F. Fay and Mark W. Chase, The A to Z of Plant Names by Allan J. Coombes, the four-volume CRC World Dictionary of Plant Names by Umberto Quattrocchi, and Stearn's Dictionary of Plant Names for Gardeners by William T. Stearn; these supply the seed-bearing genera listed in the first column below. Excluded from this list are genus names not accepted (as of January 2021) at Plants of the World Online, which includes updates to Plants of the World (2017). Key Ba = listed in Bayton's The Gardener's Botanical Bt = listed in Burkhardt's Encyclopedia of Eponymic Plant Names Bu = listed in Burkhardt's Index of Eponymic Plant Names Ch = listed in Christenhusz's Plants of the World Co = listed in Coombes's The A to Z of Plant Names Qu = listed in Quattrocchi's CRC World Dictionary of Plant Names St = listed in Stearn's Dictionary of Plant Names for Gardeners In addition, Burkhardt's Index is used as a reference for every row in the table. Genera See also List of plant genus names with etymologies: A–C, D–K, L–P, Q–Z List of plant family names with etymologies Notes Citations References See http://creativecommons.org/licenses/by/4.0/ for license. See http://creativecommons.org/licenses/by/4.0/ for license. See http://www.plantsoftheworldonline.org/terms-and-conditions for license. Further reading Systematic Systematic Taxonomy (biology) Glossaries of biology Gardening lists Genera named for people (Q-Z) Named for people (Q-Z) Wikipedia glossaries using tables Lists of eponyms
List of plant genera named for people (Q–Z)
[ "Biology" ]
607
[ "Lists of plants", "Plants", "Lists of biota", "Taxonomy (biology)", "Taxonomic lists", "Glossaries of biology" ]
66,261,793
https://en.wikipedia.org/wiki/AWM-SIAM%20Sonia%20Kovalevsky%20Lecture
The AWM-SIAM Sonia Kovalevsky Lecture is an award and lecture series that "highlights significant contributions of women to applied or computational mathematics." The Association for Women in Mathematics (AWM) and the Society for Industrial and Applied Mathematics (SIAM) planned the award and lecture series in 2002 and first awarded it in 2003. The lecture is normally given each year at the SIAM Annual Meeting. Award winners receive a signed certificate from the AWM and SIAM presidents. The lectures are named after Sonia Kovalevsky (1850–1891), a well-known Russian mathematician of the late 19th century. Karl Weierstrass regarded Kovalevsky as his most talented student. In 1874, she received her Doctor of Philosophy degree from the University of Göttingen under the supervision of Weierstrass. She was granted privatdozentin status and taught at the University of Stockholm in 1883; she became an ordinary professor (the equivalent of full professor) at this institution in 1889. She was also an editor of the journal Acta Mathematica. Kovalevsky did her important work in the theory of partial differential equations and the rotation of a solid around a fixed point. Recipients The Kovalevsky Lecturers have been: 2003 Linda R. Petzold, University of California, Santa Barbara, “Towards the Multiscale Simulation of Biochemical Networks” 2004 Joyce R. McLaughlin, Rensselaer Polytechnic Institute, “Interior Elastodynamics Inverse Problems: Creating Shear Wave Speed Images of Tissue” 2005 Ingrid Daubechies, Princeton University, “Superfast and (Super)sparse Algorithms” 2006 Irene Fonseca, Carnegie Mellon University, “New Challenges in the Calculus of Variations” 2007 Lai-Sang Young, Courant Institute, “Shear-Induced Chaos” 2008 Dianne P. O'Leary, University of Maryland, “A Noisy Adiabatic Theorem: Wilkinson Meets Schrödinger’s Cat” 2009 Andrea Bertozzi, University of California, Los Angeles 2010 Suzanne Lenhart, University of Tennessee at Knoxville, “Mixing it up: Discrete and Continuous Optimal Control for Biological Models” 2011 Susanne C. Brenner, Louisiana State University, “A Cautionary Tale in Numerical PDEs” 2012 Barbara Keyfitz, Ohio State University, “The Role of Characteristics in Conservation Laws” 2013 Margaret Cheney, Colorado State University, “Introduction to Radar Imaging” 2014 Irene M. Gamba, University of Texas at Austin, “The evolution of complex interactions in non-linear kinetic systems” 2015 Linda J. S. Allen, Texas Tech University, “Predicting Population Extinction” 2016 Lisa J. 
Fauci, Tulane University, “Biofluids of Reproduction: Oscillators, Viscoelastic Networks and Sticky Situations” 2017 Liliana Borcea, University of Michigan, “Mitigating Uncertainty in Inverse Wave Scattering” 2018 Eva Tardos, Cornell University, “Learning and Efficiency of Outcomes in Games” 2019 Catherine Sulem, University of Toronto, “The Dynamics of Ocean Waves” 2020 Bonnie Berger, MIT, “Compressive genomics: leveraging the geometry of biological data” 2021 Vivette Girault, Université Pierre et Marie Curie, "From linear poroelasticity to nonlinear implicit elastic and related models" 2022 Anne Greenbaum, University of Washington, "Two of my Favorite Problems” 2023 Annalisa Buffa, Ecole Polytechnique Fédérale de Lausanne (EPFL), TBA See also Falconer Lecture Noether Lecture List of mathematics awards References External links Women in mathematics Awards of the Society for Industrial and Applied Mathematics Awards and prizes of the Association for Women in Mathematics 2003 establishments in the United States Science lecture series Recurring events established in 2003 Awards established in 2003 Awards honoring women
AWM-SIAM Sonia Kovalevsky Lecture
[ "Technology" ]
764
[ "Women in science and technology", "Women in mathematics" ]
66,261,967
https://en.wikipedia.org/wiki/List%20of%20plant%20genera%20named%20for%20people%20%28K%E2%80%93P%29
Since the first printing of Carl Linnaeus's Species Plantarum in 1753, plants have been assigned one epithet or name for their species and one name for their genus, a grouping of related species. Thousands of plants have been named for people, including botanists and their colleagues, plant collectors, horticulturists, explorers, rulers, politicians, clerics, doctors, philosophers and scientists. Even before Linnaeus, botanists such as Joseph Pitton de Tournefort, Charles Plumier and Pier Antonio Micheli were naming plants for people, sometimes in gratitude for the financial support of their patrons. Early works researching the naming of plant genera include an 1810 glossary by and an etymological dictionary in two editions (1853 and 1856) by Georg Christian Wittstein. Modern works include The Gardener's Botanical by Ross Bayton, Index of Eponymic Plant Names and Encyclopedia of Eponymic Plant Names by Lotte Burkhardt, Plants of the World by Maarten J. M. Christenhusz (lead author), Michael F. Fay and Mark W. Chase, The A to Z of Plant Names by Allan J. Coombes, the four-volume CRC World Dictionary of Plant Names by Umberto Quattrocchi, and Stearn's Dictionary of Plant Names for Gardeners by William T. Stearn; these supply the seed-bearing genera listed in the first column below. Excluded from this list are genus names not accepted (as of January 2021) at Plants of the World Online, which includes updates to Plants of the World (2017). Key Ba = listed in Bayton's The Gardener's Botanical Bt = listed in Burkhardt's Encyclopedia of Eponymic Plant Names Bu = listed in Burkhardt's Index of Eponymic Plant Names Ch = listed in Christenhusz's Plants of the World Co = listed in Coombes's The A to Z of Plant Names Qu = listed in Quattrocchi's CRC World Dictionary of Plant Names St = listed in Stearn's Dictionary of Plant Names for Gardeners In addition, Burkhardt's Index is used as a reference for every row in the table, except as noted. Genera See also List of plant genus names with etymologies: A–C, D–K, L–P, Q–Z List of plant family names with etymologies Notes Citations References See http://creativecommons.org/licenses/by/4.0/ for license. See http://creativecommons.org/licenses/by/4.0/ for license. See http://www.plantsoftheworldonline.org/terms-and-conditions for license. Further reading Systematic Systematic Taxonomy (biology) Glossaries of biology Gardening lists Genera named for people (K-P) Named for people (K-P) Wikipedia glossaries using tables Lists of eponyms
List of plant genera named for people (K–P)
[ "Biology" ]
611
[ "Lists of plants", "Plants", "Lists of biota", "Taxonomy (biology)", "Taxonomic lists", "Glossaries of biology" ]
66,263,581
https://en.wikipedia.org/wiki/SDSS%20J1228%2B1040
SDSS J1228+1040 (SDSS J122859.93+104032.9, WD 1226+110) is a white dwarf with a debris disk around it. The disk formed when a planetary body was tidally disrupted around the white dwarf. It is the first gaseous disk discovered around a white dwarf. SDSS J1228+1040 was first identified as a white dwarf in 2006 from SDSS spectroscopic data. These observations identified it as a DA white dwarf, which indicates the detection of hydrogen. The gaseous disk was discovered in 2006, using data from the William Herschel Telescope. This gaseous disk was discovered by the emission of the calcium triplet at 850-866 nm and weaker emission due to iron at 502 nm and 517 nm. The double peak of the calcium triplet is seen as evidence of a rotating disk. The authors constrain the outer radius of the gaseous disk to 1.2 . The authors also find absorption due to magnesium. Additional elements in emission were detected in 2016. Hubble far-ultraviolet observations did not detect any emission lines, which constrained the gaseous disk temperature to around 5000 K. The researchers modelled the disk to have a spiral shape. In 2010 it was found that the calcium emission line changed between two epochs. The red side of the emission line complex switched to the blue side. This was first interpreted as a clumpy disk and the change in emission lines was seen as possible evidence of these clumps moving. Spectroscopic data from 2003 to 2015 were used for Doppler imaging, which resolved the gaseous disk. The changes in calcium emission were interpreted as precession of the disk, with a period of 24-30 years. These timescales are in agreement with precession under the influence of general relativity. Modelling of the gaseous disk was carried out in 2021, finding an eccentricity of 0.188 ±0.004 and semi-major axis of 0.879 ±0.005 for the gas ring. The gaseous disk was modelled in detail in 2024, finding an inner disk radius of 0.57 , an outer radius of 1.7 and a peak emission at 1 . The disk is eccentric, with the eccentricity being 0.44 at the inner edge and nearly zero at the outer edge. The inclination is unconstrained in this work. The precession period was found to be 20.5 years. The researchers point out that the progenitor had a very eccentric orbit around the white dwarf, before it was disrupted. The precession should dissipate within around 200 years, meaning the disk is very young and should contain most of the mass of the progenitor, which they estimate to be 10²¹ g, equivalent to a body with a size of about 50 km. In 2009 a dusty component was discovered, thanks to the detection of an infrared excess. This discovery was made with observations from the Very Large Telescope, the United Kingdom Infrared Telescope and the Spitzer Space Telescope. The modelled dusty disk has an inner radius of 18 white dwarf radii and the outer radius is 107 white dwarf radii. The outer radius is similar to the gaseous disk radius of 108 white dwarf radii. The inner disk has a temperature of 1670 K and the outer disk has a temperature of 450 K. According to this work the disk has an inclination of around 70°. Later modelling found that the dusty disk has an inner temperature of 1300±50 K and an outer temperature of 500±70 K. It was found that the disk is variable in infrared light. The 3.6 and 4.5 μm flux decreased by 20% from 2007 to 2014 and remained at this level until 2018. 
A candidate planetesimal
A planetesimal, called SDSS 1228+1040 b, was suggested as an explanation of a 123.4-minute variation of the calcium emission line. The researchers found that this planetesimal must be orbiting within the disk. The body was modelled to have a size of around 72 km. Another study, however, attributes the variability of the calcium emission line to precession.

Other gaseous white dwarf disks
Other gaseous disks have since been discovered. Gaia in particular helped to increase this sample, and these systems often also show variable emission lines, which could be a sign of precession in these disks.

See also
List of exoplanets and planetary debris around white dwarfs
WD 0145+234, another gaseous white dwarf disk
WD 1145+017, another white dwarf disk showing precession

References

Virgo (constellation) white dwarfs Circumstellar disks SDSS objects WISE objects
SDSS J1228+1040
[ "Astronomy" ]
967
[ "Virgo (constellation)", "Constellations" ]
66,265,271
https://en.wikipedia.org/wiki/Combinatorics%3A%20The%20Rota%20Way
Combinatorics: The Rota Way is a mathematics textbook on algebraic combinatorics, based on the lectures and lecture notes of Gian-Carlo Rota in his courses at the Massachusetts Institute of Technology. It was put into book form by Joseph P. S. Kung and Catherine Yan, two of Rota's students, and published in 2009 by the Cambridge University Press in their Cambridge Mathematical Library book series, listing Kung, Rota, and Yan as its authors (ten years posthumously in the case of Rota). The Basic Library List Committee of the Mathematical Association of America has suggested its inclusion in undergraduate mathematics libraries. Topics Combinatorics: The Rota Way has six chapters, densely packed with material: each could be "a basis for a course at the Ph.D. level". Chapter 1, "Sets, functions and relations", also includes material on partially ordered sets, lattice orders, entropy (formulated in terms of partitions of a set), and probability. The topics in Chapter 2, "Matching theory", as well as matchings in graphs, include incidence matrices, submodular set functions, independent matchings in matroids, the Birkhoff–von Neumann theorem on the Birkhoff polytope of doubly stochastic matrices, and the Gale–Ryser theorem on row and column sums of (0,1) matrices. Chapter 3 returns to partially ordered sets and lattices, including material on Möbius functions of incidence algebras, Sperner's theorem on antichains in power sets, special classes of lattices, valuation rings, and Dilworth's theorem on partitions into chains. One of the things Rota became known for, in the 1970s, was the revival of the umbral calculus as a general technique for the formal manipulation of power series and generating functions, and this is the subject of Chapter 4. Other topics in this chapter include Sheffer sequences of polynomials, and the Riemann zeta function and its combinatorial interpretation. Chapter 5 concerns symmetric functions and Rota–Baxter algebras, including symmetric functions over finite fields. Chapter 6, "Determinants, matrices, and polynomials", concludes the book with material including the roots of polynomials, the Grace–Walsh–Szegő theorem, the spectra of totally positive matrices, and invariant theory formulated in terms of the umbral calculus. Each chapter concludes with a discussion of the history of the problems it covers, and pointers to the literature on these problems. Also included at the end of the book are solutions to some of the "exercises" provided at the end of each chapter, each of which could be (and often is) the basis of a research publication, and which connect the material from the chapters to some of its applications. Audience and reception Combinatorics: The Rota Way is too advanced for undergraduates, but could be used as the basis for one or more graduate-level mathematics courses. However, even as a practicing mathematician in combinatorics, reviewer Jennifer Quinn found the book difficult going, despite the many topics of interest to her that it covered. She writes that she found herself "unsatisfied as a reader", "bogged down in technical details", and missing a unified picture of combinatorics as Rota saw it, even though a unified picture of combinatorics was exactly what Rota often pushed for in his own research. Quinn nevertheless commends the book as "a fine reference" for some beautiful mathematics. 
Like Quinn, John Mount complains that parts of the book are unmotivated and lacking in examples and applications, "like a compressed Bourbaki treatment of discrete mathematics". He also writes that some of the exercises, such as one asking for a reproof of the Robertson–Seymour theorem on graph minors (without a guide to its original proof, which extended over a series of approximately 20 papers) are "needlessly cruel". However, he recommends Combinatorics: The Rota Way to students and researchers who have already seen the topics it presents, as a second source "for an alternate and powerful treatment of the topic". Alessandro Di Bucchianico also writes that he is "not entirely positive" about the book, complaining about its "endless rows of definitions, statements, and proofs" without a connecting thread or motivation. He concludes that, although it is a good book for finding a clear description of Rota's favorite pieces of mathematics and their proofs, it is missing the enthusiasm and sense of unity that Rota himself brought to the subject. On the other hand, Michael Berg reviews the book more positively, calling its writing "crisp and elegant", its exercises deep, "important and fascinating", its historical asides "fun", and the overall book "simply too good to pass up". References Mathematics textbooks 2009 non-fiction books Algebraic combinatorics
Combinatorics: The Rota Way
[ "Mathematics" ]
1,004
[ "Fields of abstract algebra", "Algebraic combinatorics", "Combinatorics" ]
66,265,679
https://en.wikipedia.org/wiki/Vesatolimod
Vesatolimod (GS-9620) is an antiviral drug developed by Gilead Sciences, which acts as a potent and selective agonist of Toll-like receptor 7 (TLR7), a receptor involved in the regulation of the immune system. It is used to stimulate the immune system, which can increase its ability to combat chronic viral infections. Vesatolimod is in clinical trials to determine whether it is safe and effective in patients with Hepatitis B and HIV/AIDS, and has also shown activity against other viral diseases such as norovirus and enterovirus 71. See also Imiquimod Motolimod References Antiviral drugs
Vesatolimod
[ "Chemistry", "Biology" ]
140
[ "Pharmacology", "Antiviral drugs", "Medicinal chemistry stubs", "Pharmacology stubs", "Biocides" ]
66,267,205
https://en.wikipedia.org/wiki/Innovative%20Marketing
Innovative Marketing, Inc., also known as Innovative Marketing Ukraine, was a cybercrime company based in Kyiv, Ukraine, founded by Shaileshkumar "Sam" Jain and Bjorn Sundin. The company developed and sold scareware anti-virus programs that claimed to detect and remove viruses from computers. The company's software was distributed by hackers who infected machines with scareware, as well as through illegitimate ads on major websites. The company earned $180 million in revenue in 2009, and $500 million over the three years it sold malicious software. Jain and Sundin are both fugitives and are wanted by the FBI. References Malware Software companies of Ukraine
Innovative Marketing
[ "Technology" ]
135
[ "Malware", "Computer security exploits" ]
66,267,411
https://en.wikipedia.org/wiki/Lepidiota%20stigma
Lepidiota stigma, also known as sugarcane white grub, is a species of insect native to Southeast Asia. The species is known to attack sugarcane fields in the region. References Beetles of Asia Pests (organism) Articles with 'species' microformats Melolonthinae
Lepidiota stigma
[ "Biology" ]
60
[ "Pests (organism)" ]
66,267,412
https://en.wikipedia.org/wiki/Cygnus%20OB9
Cygnus OB9 is an OB association in Cygnus. It lies near the Cygnus OB2 association. The association is embedded within a wider region of star formation known as Cygnus X, which is one of the most luminous objects in the sky at radio wavelengths. The region is approximately 5000 light years from Earth in the constellation of Cygnus. Although Cygnus OB9 contains many O- and B-type stars, it, like Cygnus OB2, is hidden behind a massive dust cloud known as the Cygnus Rift.

See also
List of most massive stars

References

Stellar associations Cygnus (constellation) Star-forming regions
Cygnus OB9
[ "Astronomy" ]
139
[ "Cygnus (constellation)", "Constellations" ]
66,267,685
https://en.wikipedia.org/wiki/Mixed-valence%20complex
Mixed valence complexes contain an element which is present in more than one oxidation state. Well-known mixed valence compounds include the Creutz–Taube complex, Prussian blue, and molybdenum blue. Many solids are mixed-valency including indium chalcogenides. Robin–Day classification Mixed-valence compounds are subdivided into three groups, according to the Robin–Day classification: Class I, where the valences are trapped—localized on a single site—such as Pb3O4 and antimony tetroxide. There are distinct sites with different specific valences in the complex that cannot easily interconvert. Class II, which are intermediate in character. There is some localization of distinct valences, but there is a low activation energy for their interconversion. Some thermal activation is required to induce electron transfer from one site to another via the bridge. These species exhibit an intense Intervalence charge transfer (IT or IVCT) band, a broad intense absorption in the infrared or visible part of the spectrum, and also exhibit magnetic exchange coupling at low temperatures. The degree of interaction between the metal sites can be estimated from the absorption profile of the IVCT band and the spacing between the sites. This type of complex is common when metals are in different ligand fields. For example, Prussian blue is an iron(II,III)–cyanide complex in which there is an iron(II) atom surrounded by six carbon atoms of six cyanide ligands bridged to an iron(III) atom by their nitrogen ends. In the Turnbull's blue preparation, an iron(II) solution is mixed with an iron(III) cyanide (c-linked) complex. An electron-transfer reaction occurs via the cyanide ligands to give iron(III) associated with an iron(II)-cyanide complex. Class III, wherein mixed valence is not distinguishable by spectroscopic methods as the valence is completely delocalized. The Creutz–Taube complex is an example of this class of complexes. These species also exhibit an IT band. Each site exhibits an intermediate oxidation state, which can be half-integer in value. This class is possible when the ligand environment is similar or identical for each of the two metal sites in the complex. In fact, Robson type dianionic tetraimino-diphenolate ligands which provide equivalent N2O2 environments for two metal centres have stabilized the mixed valence diiron complexes of class III. The bridging ligand needs to be very good at electron transfer, be highly conjugated, and be easily reduced. Creutz–Taube ion The Creutz–Taube complex is a robust, readily analyzed, mixed-valence complex consisting of otherwise equivalent Ru(II) and Ru(III) centers bridged by the pyrazine. This complex serves as a model for the bridged intermediate invoked in inner-sphere electron transfer. Mixed valence organic compounds Organic mixed valence compounds are also known. Mixed valency in fact seems to be required for organic compounds to exhibit electrical conductivity. References Physical chemistry Electron
Mixed-valence complex
[ "Physics", "Chemistry" ]
652
[ "Electron", "Molecular physics", "Applied and interdisciplinary physics", "nan", "Physical chemistry" ]
66,268,533
https://en.wikipedia.org/wiki/Marcos%20Sobral
Marcos Eduardo Guerra Sobral (born 1960) is a Brazilian botanist. Since 2009 he has been an adjunct professor of Botany at the Federal University of São João del-Rei. As of the end of 2020, Sobral has authored 79 publications and published 237 taxon names, particularly within the Eugenia, Myrcia, and Plinia genera of the family Myrtaceae.

Academic background
Sobral received his undergraduate degree in Biological Sciences in 2003 from the Federal University of Rio Grande do Sul. He received his PhD in Botany from the Universidade Federal de Minas Gerais in 2007. His thesis was titled "Evolution of taxonomic knowledge in Brazil (1990-2006)".

References

External links

1960 births Living people 20th-century Brazilian botanists 21st-century Brazilian botanists Taxon authorities Federal University of Rio Grande do Sul alumni Academic staff of the Federal University of São João del-Rei
Marcos Sobral
[ "Biology" ]
174
[ "Taxon authorities", "Taxonomy (biology)" ]
66,268,596
https://en.wikipedia.org/wiki/Full30
Full30 is an American online video-sharing platform mainly dedicated to firearms and shooting sports-related content. The service was established in 2014 by Tim Harmsen and Mark Hammonds as a result of YouTube's increasing restrictions on gun-related videos.

History
After the 2018 Parkland high school shooting, many companies attempted to distance themselves from any association with the firearms industry. As a result, YouTube began demonetizing and sometimes outright deleting firearms-related videos; in one case, the popular YouTube poster Hickok45's channel was completely deleted but later restored. In response, Harmsen, who operates the Military Arms Channel on YouTube, decided to create his own video-hosting website to give himself and other firearms content creators a platform free from such restrictions; he named the website Full30, a reference to the popular 30-round STANAG magazine. In July 2020, site representatives announced that the site had new ownership.

Contributors
Hickok45
Military Arms Channel
Forgotten Weapons
Bavarian Shooter
Liberty Doll
CloverTac

References

Internet properties established in 2014 Video hosting Social media Firearms-related organizations
Full30
[ "Technology" ]
220
[ "Computing and society", "Social media" ]
66,268,682
https://en.wikipedia.org/wiki/Kepler-445
Kepler-445 is a red dwarf star located away in the constellation Cygnus. It hosts three known exoplanets, discovered by the transit method using data from the Kepler space telescope and confirmed in 2015. None of the planets orbit within the habitable zone. Planetary system Kepler-445b, c, and d orbit Kepler-445 every 3, 5, and 8 days, and have equilibrium temperatures of , , and , respectively. With a radius of 2.72 times that of Earth, Kepler-445c is likely a mini-Neptune with a volatile-rich composition, and has been compared to GJ 1214 b. Kepler-445d is only slightly larger than the Earth, with a radius of . References Cygnus (constellation) M-type main-sequence stars Planetary systems with three confirmed planets 2704 Planetary transit variables J19545665+4629548 268060194
Kepler-445
[ "Astronomy" ]
193
[ "Cygnus (constellation)", "Constellations" ]
70,678,697
https://en.wikipedia.org/wiki/2%2C4%2C6-Triisopropylbenzenesulfonyl%20azide
2,4,6-Triisopropylbenzenesulfonyl azide (trisyl azide) is an organic chemical used as a reagent to supply azide for electrophilic amination reactions, such as the asymmetric synthesis of unnatural amino acids. Introduction of an azide at the α-carbon of a carboxylic acid derivative using trisyl azide is an efficient alternative to electrophilic halogenation followed by nucleophilic substitution with anionic azide. Using an oxazolidinone as a chiral auxiliary typically gives good induction of the stereochemistry at the α position. Subsequent reduction converts the α-azide to an α-amine. References Further reading Arylsulfonamides Azido compounds Reagents for organic chemistry Isopropyl compounds
2,4,6-Triisopropylbenzenesulfonyl azide
[ "Chemistry" ]
174
[ "Organic compounds", "Organic compound stubs", "Organic chemistry stubs", "Reagents for organic chemistry" ]
70,679,437
https://en.wikipedia.org/wiki/Jinyoung%20Park%20%28mathematician%29
Jinyoung Park (; born 1982) is a South Korean mathematician at the Courant Institute of Mathematical Sciences at New York University working in combinatorics and graph theory. She and Huy Tuan Pham proved the Kahn–Kalai conjecture on estimating the positions of phase transitions in statistical mechanics and random graph theory. Education and career Park entered Seoul National University in 2001 and received her B.S. in Mathematics Education in 2004. She worked as a mathematics teacher in secondary schools in Seoul from 2005 to 2011. She began her graduate studies at Rutgers University in 2014, where she received her Ph.D. in 2020 under the supervision of Jeff Kahn. Her doctoral work earned the 2022 Dissertation Prize from the Association for Women in Mathematics. She was a Member of the Institute for Advanced Study from 2020 to 2021. From 2021 to 2022 she was a Szegö Assistant Professor at Stanford University, where her postdoctoral mentor was Jacob Fox. She joined the Courant Institute of Mathematical Sciences at New York University in 2023, where she is currently an assistant professor. Recognition In 2023, Park received the Maryam Mirzakhani New Frontiers Prize "for contributions to the resolution of several major conjectures on thresholds and selector processes". In 2024, she received the Dénes König Prize together with her coauthor Huy Tuan Pham, "for their outstanding research in discrete mathematics, in special recognition of their ingenious, short, and surprising proof of the Kahn–Kalai conjecture". She is the 2025 recipient of the Levi L. Conant Prize, given for her article "Threshold phenomena for random discrete structures" in the Notices of the American Mathematical Society. Selected works References External links Living people 21st-century South Korean mathematicians South Korean women mathematicians Combinatorialists Rutgers University alumni 1982 births Seoul National University alumni
Jinyoung Park (mathematician)
[ "Mathematics" ]
377
[ "Combinatorialists", "Combinatorics" ]
70,679,542
https://en.wikipedia.org/wiki/Mars%20carbon%20dioxide%20ice%20cloud
Mars's atmosphere is predominantly composed of CO2 (around 95%), and the seasonal condensation and vaporization of carbon dioxide drives a large seasonal change in air pressure. This CO2 cycle leads to the formation of CO2 ice clouds at various locations and seasons on the red planet. Due to low temperatures, especially at Mars's polar caps, carbon dioxide gas can freeze in Mars's atmosphere to form clouds of ice crystals. Several missions, such as Viking, Mars Global Surveyor, and Mars Express, have provided observations and measurements of CO2 ice clouds. MOLA data, in addition to TES spectra, have documented ice clouds forming over Mars's northern and southern polar caps during their respective winters. In addition, the Curiosity rover has imaged clouds well above 60 kilometers in the sky at the planet's equator during the coldest time of Mars's orbital year (when Mars is furthest away from the Sun due to its elliptical orbit), indicating the possibility of CO2 ice clouds around the planet's equator. Although further data collection is needed to confirm the formation of CO2 ice clouds on Mars, especially at the planet's equator, previous measurements make a strong case for frozen carbon dioxide clouds on Mars.

CO2 ice clouds at Mars's polar caps
The Mars Global Surveyor, launched on November 7, 1996, orbited Mars and measured its surface thousands of times. Among its instruments, the Mars Orbiter Laser Altimeter (MOLA) and the Thermal Emission Spectrometer (TES) were used to map the topography of Mars and to study the surface and atmosphere of the planet. During the winter season on Mars, temperatures at the planet's polar caps can fall below CO2's condensation temperature (150 K). During orbit #10075, analyzed by Ivanov and Muhleman, the MOLA instrument recorded cloud returns over the planet's south polar cap during the southern winter season. A dense concentration of cloud returns was recorded between −77 and −80 degrees latitude. The TES instrument, which performs thermal infrared spectroscopy of Mars's surface and atmosphere, recorded temperature values for orbit #10075 confirming that the atmosphere was cold enough for CO2 to condense. The cloud returns, together with temperatures at which CO2 is solid, provide a strong argument that carbon dioxide ice clouds tend to form at the planet's southern polar cap during the southern winter season. In addition, measurements from the MOLA instrument and the Mars Climate Sounder (MCS), together with the Martian General Circulation Model (MGCM), point to cloud formation over the planet's north polar cap during the northern winter season. As at the southern polar cap, temperatures during the winter season at Mars's northern polar cap make it possible for CO2 ice clouds to form. Calculations from the MGCM indicate that ice clouds can form up to 40 kilometers above the surface at around 70 degrees north latitude. The circulation models have proven to be consistent with measurements from the Mars Climate Sounder (MCS).

CO2 ice clouds observed at Mars's equator
Images from the Curiosity rover show clouds nearly 80 kilometres high in Mars's sky.
Due to the low temperatures (below 150 K) at that altitude, the clouds were probably composed of CO2 ice (dry ice) rather than water ice. Notably, these carbon dioxide cloud formations were documented around Mars's equator. Sequences of several images show faint clouds traveling above the Curiosity rover just after sunset. In addition, the images were taken as Mars was farthest from the Sun in its elliptical orbit, providing a climate around the Martian equator cold enough for carbon dioxide to condense and form clouds.

References

Wikipedia Student Program Atmosphere of Mars Carbon dioxide Clouds
Mars carbon dioxide ice cloud
[ "Chemistry" ]
798
[ "Greenhouse gases", "Carbon dioxide" ]
70,679,579
https://en.wikipedia.org/wiki/Institute%20of%20Organic%20Chemistry%20and%20Biochemistry
Institute of Organic Chemistry and Biochemistry of the Czech Academy of Sciences (shortened as IOCB Prague) is a research institute under the Czech Academy of Sciences (CAS). The institute centers on research in the fields of organic chemistry, biochemistry and neighboring disciplines, mostly oriented toward applications in medicine and the environment. It is known for its contributions to the development of key drugs against HIV and HBV. The institute also takes part in university education, supervising master's and doctoral theses.

Research
IOCB's research is oriented toward basic research in organic chemistry, bioorganic and medicinal chemistry, biochemistry, theoretical, physical, and analytical chemistry, materials science, bioconjugate chemistry, chemical biology, and nanotechnology. The research is conducted by approximately 50 independent groups divided into several categories according to their main research focus, type, and status:
Cluster: CHEM, BIO, PHYS
Type: research, research-service, targeted research, service
Status: junior, senior, honorary, distinguished emeritus, distinguished

Impact
In addition to basic research, IOCB has been active in applied research and practical applications, particularly in medicinal chemistry. Its most significant results are acyclic nucleotide phosphonate antivirals (ANPs) discovered by Antonín Holý at IOCB in collaboration with Erik De Clercq from the Rega Institute, which revolutionized the development of antiviral drugs used in the treatment of HIV and hepatitis B. These compounds, later developed into approved drugs by Gilead Sciences, Inc., include cidofovir, the first marketed compound from the ANP family, approved for the treatment of cytomegalovirus retinitis in AIDS patients; adefovir, approved as a drug for the treatment of hepatitis B and marketed as Hepsera; and tenofovir. Its prodrug forms, tenofovir disoproxil fumarate and tenofovir alafenamide fumarate, developed by Gilead Sciences, are used in multiple drugs for the treatment of HIV/AIDS and chronic hepatitis B, e.g. Viread (approved in 2001), Truvada (2004), Atripla (2006), Complera/Eviplera (2011), Stribild (2012), Genvoya (2015), Odefsey (2016), Descovy (2016), Vemlidy (2016), Biktarvy (2018), and Symtuza (2018). The success of ANPs and their widespread use in HIV drugs brought IOCB significant financial resources, making it the richest research institution in the Czech Republic and allowing it to extend and refurbish its campus and to grow significantly. Its general income from licenses in 2021 was around USD 130 million. Other nucleoside compounds that became approved drugs also originated at IOCB: decitabine, used in the treatment of acute myeloid leukemia, and azacytidine, for myelodysplastic syndrome, both discovered by Alois Pískala, as well as the acyclic nucleoside analogue duvira, 9-(2,3-dihydroxypropyl)adenine (DHPA), discovered by Antonín Holý and used clinically in the treatment of infections caused by the herpes simplex virus. Among its other achievements are human peptide hormones and their analogues, e.g. the development by Josef Rudinger of the first method for industrial production of the human neurohypophysial hormone oxytocin, the development of its analogues such as carbetocin, the development of lysine-vasopressin, terlipressin, and desmopressin, as well as the ointment Dermazulen, based on the natural compound guaiazulene.

History
The original institute that later became IOCB was established at the end of the Second World War, before the establishment of the CAS.
The original institute was established under the Faculty of Chemistry of the Czech Technical University in Prague and has been renamed multiple times. The year 1953 is considered the official date of IOCB's establishment; in that year the institute was renamed the Institute of Organic Chemistry and incorporated into the newly established Czechoslovak Academy of Sciences. In 1955, the institute was renamed the Chemical Institute of the CAS. The institute was finally renamed IOCB in 1993, after the dissolution of Czechoslovakia and the establishment of the Czech Republic. In 2007, the institute was transformed into a public research institution.

References

Czech Academy of Sciences Science and technology in the Czech Republic Research institutes established in 1953 Research institutes in the Czech Republic Chemical research institutes Institute of Organic Chemistry and Biochemistry of the CAS 1953 establishments in Czechoslovakia
Institute of Organic Chemistry and Biochemistry
[ "Chemistry" ]
958
[ "Chemical research institutes" ]
70,679,614
https://en.wikipedia.org/wiki/QOI%20%28image%20format%29
The Quite OK Image Format (QOI) is a specification for lossless image compression of 24-bit (8 bits per color RGB) or 32-bit (8 bits per color with 8-bit alpha channel RGBA) color raster (bitmapped) images, invented by Dominic Szablewski and first announced on 24 November 2021.

Description
The intended purpose was to create an open source lossless compression method that was faster and easier to implement than PNG. Figures specified in the blog post announcing the format claim 20-50 times faster encoding and 3-4 times faster decoding compared to PNG, with similar file sizes. The author has donated the specification to the public domain (CC0).

Software and language support
QOI is supported natively by ImageMagick, Imagine (v1.3.9+), IrfanView (v4.60+), FFmpeg (v5.1+), and GraphicConverter (v11.8+). Microsoft PowerToys (v0.76+) for Windows 10 and 11 adds support for previewing QOI images to the Windows File Explorer. Community-made plugins are available for GIMP, Paint.NET and XnView MP. The game engine GameMaker has designated bzip2 + QOI as the default format for texture groups since version 2022.1.0.609, to achieve better compression while remaining quick to decompress; the standalone QOI and PNG formats remain optional for even faster performance and better compatibility, respectively. There are also implementations for various languages such as Rust, Python, Java, C++, C# and more. A full list can be found in the project's GitHub repository README.

File format

Header
A QOI file consists of a 14-byte header, followed by any number of data "chunks" and an 8-byte end marker.
qoi_header {
    char magic[4];      // magic bytes "qoif"
    uint32_t width;     // image width in pixels (BE)
    uint32_t height;    // image height in pixels (BE)
    uint8_t channels;   // 3 = RGB, 4 = RGBA
    uint8_t colorspace; // 0 = sRGB with linear alpha
                        // 1 = all channels linear
};
The colorspace and channels fields are purely informative. They do not change the way data chunks are encoded.

Encoding
Images are encoded row by row, left to right, top to bottom. The decoder and encoder start with {r: 0, g: 0, b: 0, a: 255} as the previous pixel value. An image is complete when all pixels specified by width × height have been covered. Pixels are encoded as:
a run-length encoding of the previous pixel (QOI_OP_RUN)
an index into the array of previously seen pixels (QOI_OP_INDEX)
a difference compared to the previous pixel value in r,g,b (QOI_OP_DIFF or QOI_OP_LUMA)
full r,g,b or r,g,b,a values (QOI_OP_RGB or QOI_OP_RGBA)
The color channels are assumed to not be premultiplied with the alpha channel ("un-premultiplied alpha"). A running (zero-initialized) array of 64 previously seen pixel values is maintained by the encoder and decoder. Each pixel that is seen by the encoder and decoder is put into this array at the position formed by a hash function of the color value. In the encoder, if the pixel value at the index matches the current pixel, this index position is written to the stream as a QOI_OP_INDEX chunk. The hash function for the index is:
index_position = (r * 3 + g * 5 + b * 7 + a * 11) % 64
Each chunk starts with a 2- or 8-bit tag, followed by a number of data bits. The bit length of chunks is divisible by 8, i.e. all chunks are byte aligned. All values encoded in these data bits have the most significant bit on the left. The 8-bit tags have precedence over the 2-bit tags; a decoder must check for the presence of an 8-bit tag first. The byte stream's end is marked with seven 0x00 bytes followed by a single 0x01 byte.
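As an illustration of the index hash above and of one of the chunk types (QOI_OP_DIFF, detailed in the list below), the following is a minimal C sketch; the struct and function names (qoi_px, qoi_index_position, decode_op_diff) are this example's own and are not taken from the reference qoi.h implementation.

/* Illustrative sketch, not the reference implementation. */
#include <stdint.h>

typedef struct { uint8_t r, g, b, a; } qoi_px;

/* Position of a pixel in the 64-entry array of previously seen pixels. */
static int qoi_index_position(qoi_px p) {
    return (p.r * 3 + p.g * 5 + p.b * 7 + p.a * 11) % 64;
}

/* Decode one QOI_OP_DIFF byte: 2-bit tag 0b01, then three 2-bit channel
 * differences, each stored with a bias of 2. Casting to uint8_t gives the
 * wraparound behaviour required by the specification. */
static qoi_px decode_op_diff(uint8_t chunk, qoi_px prev) {
    qoi_px px = prev;                                   /* alpha is unchanged */
    px.r = (uint8_t)(prev.r + ((chunk >> 4) & 0x03) - 2);
    px.g = (uint8_t)(prev.g + ((chunk >> 2) & 0x03) - 2);
    px.b = (uint8_t)(prev.b + ( chunk       & 0x03) - 2);
    return px;
}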
The possible chunks are:

QOI_OP_RGB: 8-bit tag (254), 8-bit red channel value, 8-bit green channel value, 8-bit blue channel value. The alpha value remains unchanged from the previous pixel.

QOI_OP_RGBA: 8-bit tag (255), 8-bit red channel value, 8-bit green channel value, 8-bit blue channel value, 8-bit alpha channel value.

QOI_OP_INDEX: 2-bit tag (0b00), 6-bit index into the color index array. A valid encoder must not issue two or more consecutive QOI_OP_INDEX chunks to the same index; QOI_OP_RUN should be used instead.

QOI_OP_DIFF: 2-bit tag (0b01), 2-bit red channel difference from the previous pixel, 2-bit green channel difference from the previous pixel, 2-bit blue channel difference from the previous pixel. The differences to the current channel values use a wraparound operation, so 1 - 2 will result in 255, while 255 + 1 will result in 0. Values are stored as unsigned integers with a bias of 2; e.g. −2 is stored as 0 (0b00) and 1 is stored as 3 (0b11). The alpha value remains unchanged from the previous pixel.

QOI_OP_LUMA: 2-bit tag (0b10), 6-bit green channel difference from the previous pixel, 4-bit red channel difference minus green channel difference, 4-bit blue channel difference minus green channel difference. The green channel is used to indicate the general direction of change and is encoded in 6 bits. The red and blue channels (dr and db) base their diffs off of the green channel difference, i.e.:
dr_dg = (cur_px.r - prev_px.r) - (cur_px.g - prev_px.g)
db_dg = (cur_px.b - prev_px.b) - (cur_px.g - prev_px.g)
The differences to the current channel values use a wraparound operation, so 10 - 13 will result in 253, while 250 + 7 will result in 1. Values are stored as unsigned integers with a bias of 32 for the green channel and a bias of 8 for the red and blue channels. The alpha value remains unchanged from the previous pixel.

QOI_OP_RUN: 2-bit tag (0b11), 6-bit run-length repeating the previous pixel. The run-length is stored with a bias of −1. Note that the run-lengths 63 and 64 (0b111110 and 0b111111) are illegal as they are occupied by the QOI_OP_RGB and QOI_OP_RGBA tags.

References

External links
Format website: C source code and benchmark results
1-page PDF specification
GitHub repository (including C implementation)
How PNG Works: Compromising Speed for Quality - YouTube: a video comparing compression techniques in PNG and QOI with animations and examples.

Computer-related introductions in 2021 Graphics standards Image compression Open formats Raster graphics file formats
QOI (image format)
[ "Technology" ]
1,437
[ "Computer standards", "Graphics standards" ]
70,682,280
https://en.wikipedia.org/wiki/H3Y41P
H3Y41P is an epigenetic modification to the DNA packaging protein histone H3. It is a mark that indicates the phosphorylation of the 41st tyrosine residue of the histone H3 protein. To impose cell cycle-dependent regulation of constitutive heterochromatin, H3Y41p collaborates with other regulatory mechanisms. In activated B-cell–like diffuse large B-cell lymphoma, JAK1 mediates autocrine IL-6 and IL-10 cytokine activation via a noncanonical epigenetic regulation mechanism involving phosphorylation of histone H3 on tyrosine 41.

Nomenclature
The name of this modification indicates the phosphorylation of tyrosine 41 on the histone H3 protein subunit.

Serine/threonine/tyrosine phosphorylation
The addition of a negatively charged phosphate group can lead to major changes in protein structure, leading to the well-characterized role of phosphorylation in controlling protein function. It is not clear what structural implications histone phosphorylation has, but histone phosphorylation has clear functions as a post-translational modification.

Clinical effect of modification
In activated B-cell–like (ABC) diffuse large B-cell lymphoma, JAK1 mediates autocrine IL-6 and IL-10 cytokine activation via a noncanonical epigenetic regulation mechanism involving phosphorylation of histone H3 on tyrosine 41. On histone H3, JAK2 phosphorylates Y41. Heterochromatin protein 1 alpha (HP1) binds to this region of H3, and JAK2 activation of H3Y41 hinders this binding. Inhibition of JAK2 activity lowers both the expression of the hematopoietic oncogene LMO2 and the phosphorylation of H3Y41 at its promoter in leukemic cells while enhancing HP1 binding at the same location.

Constitutive heterochromatin
The ability to be transcribed is directly influenced by the degree of chromatin compaction at distinct chromosomal locations. Euchromatin, which is generally open and found in most coding sequences, and heterochromatin, which is more compacted and found in highly repetitive sections, are the two types of chromatin. Heterochromatin is further separated into facultative and constitutive heterochromatin, the former of which is found in DNA sequences that code for developmental proteins. Non-coding repetitive DNA associated with specific chromosomal locations, such as centromeres and telomeres, is referred to as constitutive heterochromatin. To impose cell cycle-dependent regulation of constitutive heterochromatin, H3Y41p collaborates with other regulatory mechanisms. H3Y41p is closely controlled throughout the cell cycle, with phosphorylation occurring in M-phase and lasting until mid-S-phase; this happens before the septation index peaks, and its presence coincides with Swi6 displacement from centromeric heterochromatin. H3Y41p causes a transition in CD proteins, with Swi6/HP1 being displaced to provide RNA polymerase II access to the DNA and Chp1, a RITS component, being recruited for the RNAi-mediated formation of heterochromatin.

Histone modifications
The genomic DNA of eukaryotic cells is wrapped around special protein molecules known as histones. The complexes formed by the looping of the DNA are known as chromatin. Post-translational modification of histones such as histone phosphorylation has been shown to modify the chromatin structure by changing protein:DNA or protein:protein interactions.
The most commonly associated histone phosphorylation occurs during cellular responses to DNA damage, when phosphorylated histone H2A separates large chromatin domains around the site of DNA breakage. Researchers investigated whether modifications of histones directly impact RNA polymerase II-directed transcription. Researchers chose proteins that are known to modify histones to test their effects on transcription, and found that the stress-induced kinase MSK1 inhibits RNA synthesis. Inhibition of transcription by MSK1 was most sensitive when the template was in chromatin, since DNA templates not in chromatin were resistant to the effects of MSK1. It was shown that MSK1 phosphorylated histone H2A on serine 1, and mutation of serine 1 to alanine blocked the inhibition of transcription by MSK1. These results suggested that the acetylation of histones can stimulate transcription by suppressing an inhibitory phosphorylation by a kinase such as MSK1.

Mechanism and function of modification
Phosphorylation introduces a charged and hydrophilic group in the side chain of amino acids, possibly changing a protein's structure by altering interactions with nearby amino acids. Some proteins such as p53 contain multiple phosphorylation sites, facilitating complex, multi-level regulation. Because of the ease with which proteins can be phosphorylated and dephosphorylated, this type of modification is a flexible mechanism for cells to respond to external signals and environmental conditions. Kinases phosphorylate proteins and phosphatases dephosphorylate proteins. Many enzymes and receptors are switched "on" or "off" by phosphorylation and dephosphorylation. Reversible phosphorylation results in a conformational change in the structure of many enzymes and receptors, causing them to become activated or deactivated. Phosphorylation usually occurs on serine, threonine, tyrosine and histidine residues in eukaryotic proteins. Histidine phosphorylation of eukaryotic proteins appears to be much more frequent than tyrosine phosphorylation. In prokaryotic proteins, phosphorylation occurs on serine, threonine, tyrosine, histidine, arginine, or lysine residues. The addition of a phosphate (PO₄³⁻) group to a non-polar R group of an amino acid residue can turn a hydrophobic portion of a protein into a polar and extremely hydrophilic portion of a molecule. In this way protein dynamics can induce a conformational change in the structure of the protein via long-range allostery with other hydrophobic and hydrophilic residues in the protein.

Epigenetic implications
The post-translational modification of histone tails by either histone-modifying complexes or chromatin remodeling complexes is interpreted by the cell and leads to complex, combinatorial transcriptional output. It is thought that a histone code dictates the expression of genes by a complex interaction between the histones in a particular region. The current understanding and interpretation of histones comes from two large-scale projects: ENCODE and the Epigenomic Roadmap. The purpose of the epigenomic study was to investigate epigenetic changes across the entire genome. This led to chromatin states, which define genomic regions by grouping different proteins and/or histone modifications together. Chromatin states were investigated in Drosophila cells by looking at the binding location of proteins in the genome. Use of ChIP-sequencing revealed regions in the genome characterized by different banding.
Different developmental stages were profiled in Drosophila as well, with an emphasis placed on the relevance of histone modifications. A look into the data obtained led to the definition of chromatin states based on histone modifications. Certain modifications were mapped, and enrichment was seen to localize in certain genomic regions. The human genome is annotated with chromatin states. These annotated states can be used as new ways to annotate a genome independently of the underlying genome sequence. This independence from the DNA sequence enforces the epigenetic nature of histone modifications. Chromatin states are also useful in identifying regulatory elements that have no defined sequence, such as enhancers. This additional level of annotation allows for a deeper understanding of cell-specific gene regulation.

Methods
The histone mark can be detected in a variety of ways:
1. Chromatin immunoprecipitation sequencing (ChIP-sequencing) measures the amount of DNA enrichment once bound to a targeted protein and immunoprecipitated. It results in good optimization and is used in vivo to reveal DNA-protein binding occurring in cells. ChIP-Seq can be used to identify and quantify various DNA fragments for different histone modifications along a genomic region.
2. Micrococcal nuclease sequencing (MNase-seq) is used to investigate regions that are bound by well-positioned nucleosomes. The micrococcal nuclease enzyme is employed to identify nucleosome positioning. Well-positioned nucleosomes are seen to have enrichment of sequences.
3. Assay for transposase-accessible chromatin sequencing (ATAC-seq) is used to look into regions that are nucleosome-free (open chromatin). It uses a hyperactive Tn5 transposase to highlight nucleosome localisation.

References

Epigenetics Post-translational modification
H3Y41P
[ "Chemistry" ]
1,984
[ "Post-translational modification", "Gene expression", "Biochemical reactions" ]
70,682,568
https://en.wikipedia.org/wiki/Darja%20Lisjak
Darja Lisjak is a Slovenian material scientist and professor at the Department for Material Synthesis at Jozef Stefan Institute in Ljubljana, Slovenia. Her work focuses on materials chemistry, nanotechnology, and solid-state chemistry. Education and career Lisjak earned her PhD in chemistry and chemical technology from the University of Ljubljana in 1999. Selected publications Mertelj, Alenka, et al. "Ferromagnetism in suspensions of magnetic platelets in liquid crystal." Nature 504.7479 (2013): 237-241. Lisjak, Darja, and Alenka Mertelj. "Anisotropic magnetic nanoparticles: A review of their properties, syntheses and potential applications." Progress in Materials Science 95 (2018): 286-328. Mertelj, Alenka, et al. "Magneto-optic and converse magnetoelectric effects in a ferromagnetic liquid crystal." Soft Matter 10.45 (2014): 9065-9072. References External links Women materials scientists and engineers University of Ljubljana alumni Slovenian women scientists 21st-century Slovenian women Year of birth missing (living people) Living people
Darja Lisjak
[ "Materials_science", "Technology" ]
242
[ "Women materials scientists and engineers", "Materials scientists and engineers", "Women in science and technology" ]
70,683,178
https://en.wikipedia.org/wiki/Deuterotherium
Deuterotherium is an extinct genus of South American native ungulates, which lived during the Deseadan age of the Oligocene in what is now Argentina. Its type species is Deuterotherium distichum. It was named by Florentino Ameghino in 1895. The holotype of Deuterotherium distichum is a calcaneum. It was formerly identified as a proterotheriid litoptern. In 1999, Shockey argued Deuterotherium was certainly not a litoptern and interpreted it as a notohippid notoungulate. In research by Soria posthumously published in 2001, Soria considered Deuterotherium a nomen dubium. References Toxodonts Oligocene mammals of South America Deseadan Paleogene Argentina Fossils of Argentina Fossil taxa described in 1894 Taxa named by Florentino Ameghino Prehistoric placental genera Nomina dubia Golfo San Jorge Basin Sarmiento Formation
Deuterotherium
[ "Biology" ]
208
[ "Biological hypotheses", "Nomina dubia", "Controversial taxa" ]
70,683,474
https://en.wikipedia.org/wiki/Patricia%20Zambryski
Patricia C. Zambryski is a plant and microbial scientist known for her work on Type IV secretion and cell-to-cell transport in plants. She is also professor emeritus at the University of California, Berkeley. She was an elected member of the National Academy of Sciences, the American Association for the Advancement of Science, and the American Society for Microbiology. Education and career Zambryski received her B.S. from McGill University in 1969, and earned a Ph.D. from the University of Colorado in 1974. Research Zambryski is known for her work in the field of genetic engineering, specifically for her work with Agrobacterium tumefaciens, a bacterium she uses to track the molecular mechanisms that change plants and how plant cells communicate with each other. She has examined the structure of plant cells that have been altered by Agrobacterium tumefaciens. While working in Marc Van Montagu's lab, Zambryski determined how the Ti plasmid is identified by the bacterium, and she developed a vector that allowed the transfer of genetic material into a plant without altering the plant tissue. This advance was used to inject novel genes into plants. She has also examined plasmodesmata, which are the channels that reach across the spaces in plant cells. Selected publications Awards and honors In 2001 she was elected a member of the National Academy of Sciences and a fellow of the American Society for Microbiology. In 2010 she was elected a fellow of the American Association for the Advancement of Science. References Living people History of biotechnology McGill University alumni University of Colorado alumni University of California, Berkeley faculty 21st-century botanists American geneticists Women microbiologists Year of birth missing (living people)
Patricia Zambryski
[ "Biology" ]
358
[ "History of biotechnology" ]
70,683,505
https://en.wikipedia.org/wiki/Funneliformis%20mosseae
Funneliformis mosseae is a species of fungus in the family Glomeraceae. It is an arbuscular mycorrhizal (AM) fungus that forms symbiotic relationships with plant roots. Funneliformis mosseae has a wide distribution worldwide, and can be found in North America, South America, Europe, Africa, Asia and Australia. Funneliformis species are characterized by having an easily visible septum in the area of the spore base and are often cylindrical or funnel-shaped. Funneliformis mosseae closely resembles Glomus caledonium; however, the spore wall of Funneliformis mosseae contains three layers, whereas Gl. caledonium spore walls are composed of four layers. Funneliformis mosseae is easily cultivated and multiplies well in trap culture; given this and its wide distribution, F. mosseae is not considered endangered and is often used for experimental purposes in combination with a host.

Morphology
The morphology of Funneliformis mosseae can vary depending on location and generation.

Spore structure
The spores of Funneliformis mosseae are yellow to golden yellow in color and are globose or subglobose, (80-)185(−280) μm in diameter, with one subtending hypha. The spore wall is made up of three layers, all with distinct phenotypes. The first layer is hyaline and mucilogenous and is approximately 1.4–2.5 μm thick (mean = 2.1 μm). This layer is found in the juvenile spores of F. mosseae, and degrades as the spore matures and goes through sloughing, producing a granular appearance. The second layer of the spore wall is also hyaline and is 0.8–1.6 μm thick (mean = 1.2 μm). This layer is often observed as sliver-like or partially decomposed fragments as it separates from the third layer of the spore wall. The second layer is variable in appearance between spores, but must be firmly attached to the laminae, as it forms small pits and irregular shapes that cause parts of the layer to break away under pressure. The third layer is pale yellow to yellow-brown, laminate, and 3.2–6.4 μm thick (mean = 4.7 μm).

Subtending hyphae
Funneliformis mosseae has a subtending hypha, a characteristic of AM fungi, which is the hypha from which the spores are produced. The subtending hypha tends to be flask-shaped and yellow to yellow-brown in color. In juvenile spores, the walls of the subtending hyphae are made up of three layers that are continuous with the layers of the spore walls. As the spores mature, the hyphal wall often becomes one to two layers thick.

Germination
The germ tube in Funneliformis mosseae emerges from the spore wall and originates from the recurved septum. After the hypha emerges, extensive branching and growth of the germ tubes can occur.

Distribution, habitat, season
Funneliformis mosseae is a hypogeous fungus that is commonly found in loose aggregate soils. It has been found in a wide range of locations, and can be collected throughout the year, in all seasons. It is widespread in the Pacific Northwest, Midwest, Hawaii, England, Scotland, Germany, Australia, New Zealand and Pakistan. F. mosseae can withstand conditions ranging from coastal dune sands to mountain forests and semi-arid zones, and is found especially in alkaline flats, road banks, fields and forest clearings.

Arbuscular mycorrhizal interactions
Funneliformis mosseae falls into the category of arbuscular mycorrhizal fungi (AMF), which are fungi that form symbiotic relationships with most terrestrial plants.
The relationship is mutualistic, meaning that both the plant and the fungus benefit from the interaction. Arbuscules are the sites where the fungus enters the plant cells, and a large hyphal network is formed, which allows for nutrient exchange between the plant and the fungus. Plants can often benefit greatly from these mutualistic interactions with certain fungi, gaining increased nutrient absorption, resistance to varying environmental conditions, and resistance to some plant pathogens.

Uses
One of the common uses of Funneliformis mosseae is in scientific research to study the ways in which AM fungi interact with their plant hosts. Many of these studies aim to determine how AM fungal relationships with plants can alter growth conditions. Previous studies using Funneliformis mosseae have shown increased resistance to plant pathogens, increased resistance to heavy metal toxicity, increased resistance to drought and poor soil conditions, increased nutrient absorption, and increased root and shoot biomass in inoculated plants.

References

Fungi of Hawaii Fungi of the United States Fungi of Europe Fungi of Australia Fungi of New Zealand Fungi of Pakistan Glomerales Fungi without expected TNC conservation status Fungus species
Funneliformis mosseae
[ "Biology" ]
1,082
[ "Fungi", "Fungus species" ]
70,683,630
https://en.wikipedia.org/wiki/Convection%20enhanced%20delivery
Convection-enhanced delivery (CED) is a method of drug delivery in which the drug is delivered into the brain using bulk flow rather than conventional diffusion. This is done by inserting catheters into the target region of the brain and using pressure to deliver the therapeutic. CED has been used to deliver drugs to the central nervous system (CNS) for diseases such as cancer, epilepsy, and Parkinson's disease. CED is attractive for its ability to bypass the blood–brain barrier (BBB) and target specific regions for treatment, but current techniques using CED have failed to progress past clinical trials due to a variety of physical limitations associated with CED itself.

Background
The blood–brain barrier (BBB) has historically proved to be a very difficult obstacle to overcome when aiming to deliver a drug to the brain. In order to deliver therapeutic levels of drug past the BBB, drugs had to either be lipophilic molecules with a molecular weight below 600 Da or be transported across the BBB using some sort of cellular transport system. In the 1990s, a research group led by Edward Oldfield at the National Institutes of Health proposed using CED to deliver to the brain drugs and molecules too large to cross the BBB. CED is also useful for delivering drugs that have poor diffusive properties, and it allows for targeted placement of the catheter used to deliver the drugs. The vast majority of current clinical studies use CED to treat brain tumors that are inoperable or have shown little response to conventional therapies.

Mechanism of action
CED is a method of drug delivery in which a pressure gradient is created at the tip of a catheter to use bulk flow rather than diffusion to deliver drugs into the brain. Diffusion is limited by the diffusivity of the tissue and can be expressed using Fick's law, J = −D∇C, where J is the diffusive flux, D is the diffusivity of the targeted tissue, and ∇C is the concentration gradient of the drug. Diffusion can only be increased through the concentration gradient of a drug, meaning that in order to deliver drug to large parts of a tissue, high concentrations of a drug are needed to promote diffusion, which can result in toxicity. In comparison, bulk flow is governed by Darcy's law, v = −K∇p, where v is velocity, K is the hydraulic conductivity of the tissue, and ∇p is the pressure gradient. Using bulk flow means a drug can be delivered further into a target tissue by applying higher pressure, resulting in lower concentrations and less risk of drug toxicity. To perform a CED treatment, catheters are inserted through burr holes drilled into the skull. Treatments can use multiple catheters for a single delivery if required. The catheters are inserted into the interstitial space of the brain using image guidance. Once the catheters are placed at the desired site, they are connected to an infusion pump which is used to create the pressure gradient for bulk flow. Infusion rates are typically set to 0.1-10 μL/min and the drug is delivered into the interstitial space, displacing extracellular fluid. CED can result in delivery of drug centimeters deep into the tissue from the delivery site, rather than the millimeters that would result from delivery via diffusion.
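To make the millimeters-versus-centimeters contrast above concrete, the sketch below compares a characteristic diffusion distance with the radius reached by a given infused volume; the diffusivity, infusion rate, duration, and tissue porosity are illustrative assumptions chosen for order-of-magnitude purposes only, not values from any cited study.

/* Rough order-of-magnitude comparison of diffusive vs. convective penetration.
 * All parameter values are illustrative assumptions, not measured data. */
#include <math.h>
#include <stdio.h>

int main(void) {
    const double PI = 3.14159265358979;
    double D = 1e-7;              /* assumed diffusivity of a macromolecule, cm^2/s */
    double t = 24.0 * 3600.0;     /* assumed delivery time: one day, in seconds */
    double rate_uL_min = 1.0;     /* assumed infusion rate, within the 0.1-10 uL/min range */
    double porosity = 0.2;        /* assumed extracellular volume fraction of brain tissue */

    /* Characteristic diffusion distance ~ sqrt(2*D*t). */
    double diffusion_cm = sqrt(2.0 * D * t);

    /* Infused volume fills the extracellular space; model the distribution
     * volume as a sphere to get a characteristic penetration radius. */
    double infused_cm3 = rate_uL_min * 1e-3 * (t / 60.0);   /* uL/min * min -> cm^3 */
    double distributed_cm3 = infused_cm3 / porosity;
    double convection_cm = cbrt(3.0 * distributed_cm3 / (4.0 * PI));

    printf("diffusion  ~ %.2f cm\n", diffusion_cm);   /* ~0.13 cm, i.e. millimeters */
    printf("convection ~ %.2f cm\n", convection_cm);  /* ~1.2 cm, i.e. centimeters */
    return 0;
}

Under these assumptions, diffusion alone reaches only millimeters in a day, while the infused volume driven by bulk flow spreads over roughly a centimeter, in line with the distances stated above.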
Clinical trials evaluating CED
Clinical trials exploring the use of CED have to date not resulted in any FDA-approved treatments. These trials have mostly focused on using CED to treat glioblastoma, and only two studies have progressed to phase 3. The first study began in 2004 and compared the efficacy of cintredekin besudotox delivered using CED with Gliadel wafers for the treatment of glioblastoma multiforme. Results from this study showed similar survival rates between the two groups, but patients who were given CED treatment had higher rates of pulmonary emboli. The second phase 3 clinical trial began in 2008 and delivered trabedersen via CED to treat anaplastic astrocytoma and glioblastoma. This trial was terminated early due to the inability to recruit enough participants, and the efficacy of CED in this treatment was not established. These two studies have been the only major clinical trials that compared the efficacy of CED treatment with current standards of care. While CED clinical trials have primarily explored treating brain tumors, other conditions involving the brain have also been investigated. To date there have been two registered clinical trials, both in phase 1, which aim to use CED to treat Parkinson's disease. The first trial, which was registered in 2009, was withdrawn in 2017 for unknown reasons. The other clinical trial, which reached completion in 2022, delivered an adeno-associated virus (AAV2) encoding glial cell line-derived neurotrophic factor (GDNF) directly into the brain using CED. GDNF is known to protect neurons which produce dopamine. Parkinson's disease decreases the amount of dopamine that can be produced in the brain, so researchers hope to reduce its symptoms by protecting dopamine-producing neurons. While results from this study have not been published as of April 2022, pre-clinical research in a rhesus monkey model of Parkinson's disease showed that CED treatment with AAV2-GDNF resulted in neurological improvement without significant side effects.

Non-clinical research
Even though current clinical trials have not yet resulted in an FDA-approved treatment, there is still plenty of research being done on delivering different types of therapeutics and treating different diseases. One of these areas of research is the visualization of the region of treatment. One research group was able to visualize the regions of the brain that received drug from bulk flow by mixing the desired drug with Gd-DTPA, a common MRI contrast agent. This allowed researchers to immediately take an MRI after CED treatment to assess whether the drug was reaching the targeted area. Researchers have also tagged therapeutic nanocarriers with the MRI contrast agent gadoteridol for real-time treatment imaging. Other than MRI contrast agents, it has been shown to be possible to tag a therapeutic microcarrier with a radiolabeled or fluorescent molecule that can then be excited during imaging. The biggest limitation of this drug distribution visualization is that the technique only works ex vivo. One research group was able to optimize its liposomal design using this approach, showing its usefulness. While a common use of CED is to directly deliver drugs to the brain, it is also possible to deliver non-chemical therapeutics, such as proteins or growth factors, using CED.
Several types of microcarriers have been used for CED, including nanospheres, nanoparticles, liposomes, micelles, and dendrimers. Nanocarriers have several benefits over conventional drug solutions: they can be tailored to the system being developed through modifications such as tagging for imaging and changes in size, charge, osmolarity, viscosity, and surface coating.

The other large area of current research is translating CED from brain tumors to other brain diseases, primarily Parkinson's disease and epilepsy. Animal-model research using CED to treat Parkinson's disease has identified three promising therapeutics. Because many candidate treatments for Parkinson's disease are gene- or protein-based rather than small-molecule drugs, researchers have typically delivered them with viral vectors. Current work focuses on GDNF, a growth factor that protects dopamine-producing brain cells; glutamic acid decarboxylase (GAD), an enzyme that increases production of the inhibitory neurotransmitter GABA; and neurturin, a GDNF homolog. Another reported use of CED is in the treatment of epilepsy. The antiepileptic agents under investigation are too large to pass through the BBB, so delivering them by CED is currently one of the only ways to reach the brain. The two primary antiepileptic drugs (AEDs) being delivered by CED in research are conotoxin N-type calcium channel antagonists and botulinum neurotoxins. Results from these studies showed reduced seizure risk for up to 5 days with the calcium channel antagonists and up to 50 days with the botulinum neurotoxins.

Limitations and future directions

While CED shows promise for delivering drugs directly into the brain, it has drawbacks. The vast majority of studies to date have failed to achieve consistent delivery from patient to patient for technical reasons surrounding the use of CED. Incorrect catheter placement can make treatment less effective and increase the risk of leakage from the brain into other parts of the central nervous system (CNS). A more common problem is reflux of the drug back along the catheter track, which can cause leakage into unintended areas and reduce the volume of drug actually delivered. Improved CED catheters are being researched, with some groups modifying the catheter tips to prevent reflux. A balloon-tipped catheter has been proposed for CED, and results showed that drug was successfully delivered into the brain with it without complications. Other proposed designs include catheters with multiple exit sites, catheters with porous tips, and catheters whose tips are narrower than the rest of the catheter. New designs also aim to allow greater flow rates while minimizing the risk of reflux. Addressing these technical limitations should let researchers assess the efficacy of a treatment without failures caused by the delivery equipment itself.
In this vein, CraniUS LLC, a company based in Baltimore, Maryland, is developing a fully implantable, MRI-compatible, wirelessly charged, Bluetooth-enabled craniofacial implant intended to give neurosurgical patients a safe option for chronic, direct drug delivery to the brain via convection-enhanced delivery, using an embedded microfluidic pump and a port that allows repeated transcutaneous refilling.

References

Drug delivery devices
Convection enhanced delivery
[ "Chemistry" ]
2,287
[ "Pharmacology", "Drug delivery devices" ]
70,684,172
https://en.wikipedia.org/wiki/Persistence%20%28botany%29
Persistence is the retention of plant organs, such as flowers, seeds, or leaves, after their normal function has been completed, in contrast with the shedding of deciduous organs after their purpose has been fulfilled. Absence or presence of persistent plant organs can be a helpful clue in plant identification, and may be one of many types of anatomical details noted in the species descriptions or dichotomous keys of plant identification guides. Many species of woody plants with persistent fruit provide an important food source for birds and other wildlife in winter.

The terms persistent and deciduous are not used in a consistent manner by botanists. Related terms such as long-persistent, generally deciduous, and caducous suggest that some plant parts are more persistent than others. However, these terms lack clear definitions.

Species with persistent parts

There are numerous herbaceous and woody plant species that produce persistent parts such as bud scales, sepals, fronds, fruits, seeds, strobili (cones) or styles. Note that the trait of persistence exhibited by a given species within a genus may not be exhibited by all species within the genus. For example, the Equisetum genus includes some species that have persistent strobili while other species have deciduous strobili.

Common witch-hazel (Hamamelis virginiana) may have a persistent calyx or a persistent fruit (or both at the same time). After flowering in the fall, the sepals (calyx) and pollinated ovary persist during the winter months. After the ovary is fertilized in the spring, it fuses with the calyx to form a greenish fruit, which eventually becomes woody and brown. In the fall, the ripe fruit suddenly splits, explosively dispersing black seeds up to . The empty capsule persists after the seeds are dispersed.

See also
Evergreen
Semi-deciduous
Marcescence

References

Bibliography

Plant physiology
Persistence (botany)
[ "Biology" ]
386
[ "Plant physiology", "Plants" ]
70,684,791
https://en.wikipedia.org/wiki/AZD9272
AZD 9272 is a drug which acts as a selective antagonist for the metabotropic glutamate receptor subtype mGluR5. It was unsuccessful in human trials as an analgesic, but continues to be widely used in research, especially in its radiolabelled forms.

See also
Basimglurant
Fenobam

References

MGlu5 receptor antagonists
Fluoroarenes
Pyridines
Nitriles
Oxadiazoles
AZD9272
[ "Chemistry" ]
94
[ "Pharmacology", "Functional groups", "Medicinal chemistry stubs", "Pharmacology stubs", "Nitriles" ]