Dataset schema (one row per article; ranges are min–max over the corpus):

Column         Type           Min  Max
id             int64          39   79M
url            stringlengths  31   227
text           stringlengths  6    334k
source         stringlengths  1    150
categories     listlengths    1    6
token_count    int64          3    71.8k
subcategories  listlengths    0    30
75,492,878
https://en.wikipedia.org/wiki/S-309309
S-309309 is an experimental MGAT2 inhibitor developed as an anti-obesity drug by the Japanese company Shionogi. Phase II trial results are expected in late 2023. References Experimental anti-obesity drugs Spiro compounds Sulfones Fluoroarenes Pyridines Chromanes Pyrazolopyridines Acetamides
S-309309
[ "Chemistry" ]
74
[ "Organic compounds", "Sulfones", "Functional groups", "Spiro compounds" ]
75,492,899
https://en.wikipedia.org/wiki/Pemvidutide
Pemvidutide (ALT-801) is an experimental dual GLP-1/glucagon receptor agonist developed by Altimmune. In a clinical trial, the drug reduced LDL-C, and unlike GLP-1 mono-agonists it does not require dose titration. References GLP-1 receptor agonists Glucagon receptor agonists
Pemvidutide
[ "Chemistry" ]
83
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
75,493,324
https://en.wikipedia.org/wiki/Vutiglabridin
Vutiglabridin (HSG4112) is an experimental anti-obesity drug that is a synthetic structural analog of glabridin. References Experimental anti-obesity drugs Phenols Ethoxy compounds Heterocyclic compounds with 3 rings Oxygen heterocycles
Vutiglabridin
[ "Chemistry" ]
60
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
75,496,122
https://en.wikipedia.org/wiki/Exercise%20mimetic
An exercise mimetic is a drug that mimics some of the biological effects of physical exercise. Exercise is known to have an effect in preventing, treating, or ameliorating the effects of a variety of serious illnesses, including cancer, type 2 diabetes, cardiovascular disease, and psychiatric and neurological diseases such as Alzheimer's disease. As of 2021, no drug is known to have the same benefits. Known biological targets affected by exercise have also been targets of drug discovery, with limited results. The majority of the effect of exercise in reducing cardiovascular and all-cause mortality cannot be explained via improvements in quantifiable risk factors, such as blood cholesterol. This further increases the challenge of developing an effective exercise mimetic. Moreover, even if a broad-spectrum exercise mimetic were invented, it is not necessarily the case that its public health effects would be superior to interventions to increase exercise in the population. References Exercise biochemistry Drugs
Exercise mimetic
[ "Chemistry", "Biology" ]
197
[ "Pharmacology", "Products of chemical industry", "Exercise biochemistry", "Biochemistry", "Chemicals in medicine", "Drugs" ]
75,496,826
https://en.wikipedia.org/wiki/MOTS-c
MOTS-c (mitochondrial open reading frame of the 12S rRNA-c) is a peptide encoded in mitochondrial DNA. It is believed to be involved in skeletal muscle and glucose metabolism. It is upregulated in response to exercise, and is considered an exercise mimetic. MOTS-c binds to casein kinase 2. Society and culture Researchers discovered MOTS-c in 2015. MOTS-c is not approved to treat any medical condition and is banned by the World Anti-Doping Agency, appearing explicitly on the prohibited list beginning in 2024. References Exercise mimetics World Anti-Doping Agency prohibited substances Mitochondria Hexadecapeptides 2015 in science
MOTS-c
[ "Chemistry" ]
138
[ "Mitochondria", "Exercise mimetics", "Metabolism", "Exercise biochemistry" ]
75,499,251
https://en.wikipedia.org/wiki/List%20of%20largest%20star%20clusters
Below is a list of the largest known star clusters, ordered by diameter in light-years and restricted to clusters more than 50 light-years in diameter. This list includes globular clusters, open clusters, super star clusters, and other types. List See also List of most massive star clusters References Star clusters Largest star clusters
List of largest star clusters
[ "Astronomy" ]
68
[ "Lists of superlatives in astronomy", "Astronomy-related lists", "Astronomical objects", "Star clusters" ]
75,499,368
https://en.wikipedia.org/wiki/Layla%20Zakaria%20Abdel%20Rahman
Layla Zakaria Abdel Rahman (died 2015) was a Sudanese scientist in the field of biotechnology. She graduated from the University of Khartoum and earned her master's and PhD degrees from UMIST. Rahman's work in her research laboratory revolutionized sugar cane cultivation with a cheaper and more effective growing method. By taking cells from the plant's roots, shoots, or leaves, and growing them in a liquid culture, her work enabled the creation of artificial seeds that can be germinated. Her method had a global impact, improving efficiency and affordability in developing countries. She died in 2015, aged 59. References Year of birth missing 2015 deaths Alumni of the University of Manchester Institute of Science and Technology Biotechnologists University of Khartoum alumni Sudanese scientists Sudanese women Women biotechnologists
Layla Zakaria Abdel Rahman
[ "Biology" ]
166
[ "Biotechnologists", "Women biotechnologists" ]
75,499,594
https://en.wikipedia.org/wiki/Halosulfuron-methyl
Halosulfuron-methyl is a sulfonylurea post-emergence herbicide used to control some annual and perennial broad-leaved weeds and sedges (such as nutsedge/nutgrass) in a range of crops (particularly rice), established landscape woody ornamentals and turfgrass. It is marketed under several trade names including Sedgehammer and Sandea. Effects Halosulfuron-methyl is systemic and selective, and acts as an inhibitor of acetohydroxyacid synthase (AHAS, also known as acetolactate synthase), restricting the biosynthesis of the essential amino acids valine and isoleucine and thus restricting plant growth. Symptoms take several weeks to develop and include general stunting, chlorosis, and necrosis of the growing points. It typically does not affect other major annual and perennial weed grasses and broadleaves such as spurge, dandelions, lambsquarters, and oxalis. References External links Herbicides Pyrazoles Pyrimidines Sulfonylureas Methyl esters
Halosulfuron-methyl
[ "Biology" ]
229
[ "Herbicides", "Biocides" ]
75,500,805
https://en.wikipedia.org/wiki/Pasterski%E2%80%93Strominger%E2%80%93Zhiboedov%20triangle
In theoretical physics, the Pasterski–Strominger–Zhiboedov (PSZ) triangle or infrared triangle is a series of relationships between three groups of concepts involving the theory of relativity, quantum field theory and quantum gravity. The triangle highlights connections already known or demonstrated by its authors, Sabrina Gonzalez Pasterski, Andrew Strominger and Alexander Zhiboedov. The connections are among weak and lasting effects caused by the passage of gravitational or electromagnetic waves (memory effects), quantum field theory theorems on the graviton and the photon, and geometrical symmetries of spacetime. Because all of this occurs under conditions of low energy, known as infrared in the language of physicists, it is also referred to as the infrared triangle. Elements of the triangle Related concepts The concepts that are interconnected by the triangle are: a) soft particle theorems (quantum field theory theorems regarding the behavior of low-energy gravitons or photons): soft graviton theorem, published by Steven Weinberg in 1965; extension of the previous theorem, published by Freddy Cachazo and Strominger in 2014; soft photon theorem, also published by Weinberg in the same paper of 1965 regarding the graviton; b) asymptotic symmetries (symmetries of spacetime distant from the sources of the fields): supertranslations of the Bondi-Metzner-Sachs group, published in 1962; superrotations (symmetry analogous to that of the Virasoro algebra), published by Glenn Barnich and Cédric Troessaert in 2010; symmetries of U(1) gauge theories, published by Pasterski in 2017; c) memory effects: gravitational memory effect, published by Yakov Zeldovich and A. G. Polnarev in 1974 and Demetrios Christodoulou in 1991; new gravitational memory effects, published by Pasterski, Strominger and Zhiboedov in 2016; the electromagnetic analogue of the memory effect, published by Lydia Bieri and David Garfinkle in 2013. Binding relationships Each group is linked to another by special relationships: Fourier transforms tie together soft theorems and memory effects; vacuum transitions tie together asymptotic symmetries and memory effects; Ward's identities tie together soft theorems and asymptotic symmetries. So, for example: the soft graviton theorem (a.1) is related to the supertranslations (b.1) by a Ward's identity; the supertranslations (b.1) correspond to different vacuum states created by the gravitational memory effect (c.1); the gravitational memory effect (c.1) reduces to the soft graviton theorem (a.1) via a Fourier transform. In addition to the first triangular relationship highlighted by the authors, several others may exist and have been hypothesized. See also Gravitational memory effect Bondi-Metzner-Sachs group Soft graviton theorem References Bibliography External links Theory of relativity Quantum field theory
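For concreteness, the corner (a.1) of the triangle can be written out. A minimal sketch of Weinberg's soft graviton theorem in one common convention from the literature (normalizations of the gravitational coupling vary by author):

```latex
% Leading soft graviton theorem: as the graviton momentum q -> 0, an
% (n+1)-point amplitude factorizes into a universal soft factor times
% the n-point amplitude (one common convention; kappa^2 = 32 pi G).
\lim_{q \to 0} \mathcal{M}_{n+1}(q; p_1, \ldots, p_n)
  = \frac{\kappa}{2} \sum_{k=1}^{n}
      \frac{\varepsilon_{\mu\nu}(q)\, p_k^{\mu} p_k^{\nu}}{p_k \cdot q}\,
      \mathcal{M}_{n}(p_1, \ldots, p_n)
```

Roughly speaking, the Fourier-transform leg of the triangle relates the $1/(p_k \cdot q)$ pole of this soft factor at zero frequency to the permanent, step-like displacement measured in the gravitational memory effect.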
Pasterski–Strominger–Zhiboedov triangle
[ "Physics" ]
630
[ "Quantum field theory", "Quantum mechanics", "Theory of relativity" ]
75,500,914
https://en.wikipedia.org/wiki/E-values
In statistical hypothesis testing, e-values quantify the evidence in the data against a null hypothesis (e.g., "the coin is fair", or, in a medical context, "this new treatment has no effect"). They serve as a more robust alternative to p-values, addressing some shortcomings of the latter. In contrast to p-values, e-values can deal with optional continuation: e-values of subsequent experiments (e.g. clinical trials concerning the same treatment) may simply be multiplied to provide a new, "product" e-value that represents the evidence in the joint experiment. This works even if, as often happens in practice, the decision to perform later experiments may depend in vague, unknown ways on the data observed in earlier experiments, and it is not known beforehand how many trials will be conducted: the product e-value remains a meaningful quantity, leading to tests with Type-I error control. For this reason, e-values and their sequential extension, the e-process, are the fundamental building blocks for anytime-valid statistical methods (e.g. confidence sequences). Another advantage over p-values is that any weighted average of e-values remains an e-value, even if the individual e-values are arbitrarily dependent. This is one of the reasons why e-values have also turned out to be useful tools in multiple testing. E-values can be interpreted in a number of different ways: first, an e-value can be interpreted as a rescaling of a test that is presented on a more appropriate scale that facilitates merging. Second, the reciprocal of an e-value is a p-value, but not just any p-value: a special p-value for which a rejection "at level p" retains a generalized Type-I error guarantee. Third, they are broad generalizations of likelihood ratios and are also related to, yet distinct from, Bayes factors. Fourth, they have an interpretation as bets. Fifth, in a sequential context, they can also be interpreted as increments of nonnegative supermartingales. Interest in e-values has exploded since 2019, when the term 'e-value' was coined and a number of breakthrough results were achieved by several research groups. The first overview article appeared in 2023. Definition and mathematical background Let the null hypothesis $H_0$ be given as a set of distributions for data $Y$. Usually $Y = (X_1, \ldots, X_\tau)$, with each $X_i$ a single outcome and $\tau$ a fixed sample size or some stopping time. We shall refer to such $Y$, which represent the full sequence of outcomes of a statistical experiment, as a sample or batch of outcomes. But in some cases $Y$ may also be an unordered bag of outcomes or a single outcome. An e-variable or e-statistic is a nonnegative random variable $E = E(Y)$ such that under all $P \in H_0$, its expected value is bounded by 1: $\mathbb{E}_P[E] \leq 1$. The value taken by an e-variable $E$ is called the e-value. In practice, the term e-value (a number) is often used when one is really referring to the underlying e-variable (a random variable, that is, a measurable function of the data $Y$). Interpretations As the continuous interpretation of a test A test for a null hypothesis $H_0$ is traditionally modeled as a function $\phi$ from the data to $\{0, 1\}$, where $1$ encodes rejection. A test is said to be valid for level $\alpha$ if $P(\phi = 1) \leq \alpha$ for all $P \in H_0$. This is classically conveniently summarized as a function from the data to $\{0, 1\}$ that satisfies $\mathbb{E}_P[\phi] \leq \alpha$. Moreover, this is sometimes generalized to permit external randomization by letting the test take values in $[0, 1]$. Here, its value is interpreted as a probability with which one should subsequently reject the hypothesis.
An issue with modelling a test in this manner is that the traditional decision space $\{0, 1\}$ or $[0, 1]$ does not encode the level at which the test rejects. This is odd at best, because a rejection at level 1% is a much stronger claim than a rejection at level 10%. A more suitable decision space seems to be one that encodes the level, such as $[0, \infty]$. The e-value can be interpreted as resolving this problem. Indeed, we can rescale from $\{0, 1\}$ to $\{0, 1/\alpha\}$ and from $[0, 1]$ to $[0, 1/\alpha]$ by rescaling the test by its level: $E := \phi/\alpha$, where we denote a test on this evidence scale by $E$ to avoid confusion. Such a test is then valid if $\mathbb{E}_P[E] \leq 1$ for all $P \in H_0$. That is: it is valid if it is an e-value. In fact, this reveals that e-values bounded to $[0, 1/\alpha]$ are rescaled randomized tests, that are continuously interpreted as evidence against the hypothesis. The standard e-value that takes values in $[0, \infty]$ appears as a generalization of a level-0 test. This interpretation shows that e-values are indeed fundamental to testing: they are equivalent to tests, thinly veiled by a rescaling. From this perspective, it may be surprising that typical e-values look very different from traditional tests: maximizing the objective $P_Q(E = 1/\alpha)$ for an alternative hypothesis $Q$ would yield traditional Neyman-Pearson style tests. Indeed, this maximizes the probability under $Q$ that the test rejects. But if we continuously interpret the value of the test as evidence against the hypothesis, then we may also be interested in maximizing different targets such as $\mathbb{E}_Q[\log E]$. This yields tests that are remarkably different from traditional Neyman-Pearson tests, and more suitable when merged through multiplication as they are positive with probability 1 under $Q$. As p-values with a stronger data-dependent-level Type-I error guarantee For any e-variable $E$, any $\alpha \in (0, 1]$ and all $P \in H_0$, it holds by Markov's inequality that $P(E \geq 1/\alpha) \leq \alpha \, \mathbb{E}_P[E] \leq \alpha$ (*). This means $1/E$ is a valid p-value. Moreover, the e-value based test with significance level $\alpha$, which rejects if $E \geq 1/\alpha$, has a Type-I error bounded by $\alpha$. But, whereas with standard p-values the inequality (*) above is usually an equality (with continuous-valued data) or near-equality (with discrete data), this is not the case with e-variables. This makes e-value-based tests more conservative (less power) than those based on standard p-values. In exchange for this conservativeness, the p-value $1/E$ comes with a stronger guarantee. In particular, for every possibly data-dependent significance level $\tilde{\alpha}$, we have $\mathbb{E}_P[\mathbf{1}\{p \leq \tilde{\alpha}\}/\tilde{\alpha}] \leq 1$ for all $P \in H_0$ if and only if $p = 1/E$ for some e-variable $E$. This means that a p-value satisfies this guarantee if and only if it is the reciprocal of an e-variable $E$. The interpretation of this guarantee is that, on average, the relative Type-I error distortion caused by using a data-dependent level is controlled for every choice of the data-dependent significance level. Traditional p-values only satisfy this guarantee for data-independent or pre-specified levels. This stronger guarantee is also called the post-hoc Type-I error, as it allows one to choose the significance level after observing the data: post-hoc. A p-value that satisfies this guarantee is also called a post-hoc p-value. As $p$ is a post-hoc p-value if and only if $p = 1/E$ for some e-variable $E$, it is possible to view this as an alternative definition of an e-value. Under this post-hoc Type-I error, the problem of choosing the significance level vanishes: we can simply choose the smallest data-dependent level at which we reject the hypothesis by setting it equal to the post-hoc p-value: $\tilde{\alpha} := p = 1/E$. Indeed, at this data-dependent level we have $\mathbb{E}_P[\mathbf{1}\{p \leq \tilde{\alpha}\}/\tilde{\alpha}] = \mathbb{E}_P[E] \leq 1$, since $1/p = E$ is an e-variable.
As a consequence, we can truly reject at level $1/E$ and still retain the post-hoc Type-I error guarantee. For a traditional p-value $p$, rejecting at level $p$ comes with no such guarantee. Moreover, a post-hoc p-value inherits the optional continuation and merging properties of e-values. But instead of an arithmetic weighted average, a weighted harmonic average of post-hoc p-values is still a post-hoc p-value. As generalizations of likelihood ratios Let $H_0 = \{P_0\}$ be a simple null hypothesis. Let $Q$ be any other distribution on $Y$, and let $E := q(Y)/p_0(Y)$ be their likelihood ratio. Then $E$ is an e-variable. Conversely, any e-variable relative to a simple null can be written as a likelihood ratio with respect to some distribution $Q$. Thus, when the null is simple, e-variables coincide with likelihood ratios. E-variables exist for general composite nulls as well though, and they may then be thought of as generalizations of likelihood ratios. The two main ways of constructing e-variables, UI and RIPr (see below), both lead to expressions that are variations of likelihood ratios as well. Two other standard generalizations of the likelihood ratio are (a) the generalized likelihood ratio as used in the standard, classical likelihood ratio test and (b) the Bayes factor. Importantly, neither (a) nor (b) are e-variables in general: generalized likelihood ratios in sense (a) are not e-variables unless the alternative is simple (see below under "universal inference"). Bayes factors are e-variables if the null is simple. To see this, note that, if $H_1 = \{Q_\theta : \theta \in \Theta_1\}$ represents a statistical model and $w$ is a prior density on $\Theta_1$, then we can set $Q$ as above to be the Bayes marginal distribution with density $q(Y) = \int q_\theta(Y) w(\theta) \, d\theta$, and then $E = q(Y)/p_0(Y)$ is also a Bayes factor of $H_1$ vs. $H_0$. If the null is composite, then some special e-variables can be written as Bayes factors with some very special priors, but most Bayes factors one encounters in practice are not e-variables and many e-variables one encounters in practice are not Bayes factors. As bets Suppose you can buy a ticket for 1 monetary unit, with nonnegative pay-off $E$. The statements "$E$ is an e-variable" and "if the null hypothesis is true, you do not expect to gain any money if you engage in this bet" are logically equivalent. This is because $E$ being an e-variable means that the expected gain of buying the ticket is the pay-off minus the cost, i.e. $E - 1$, which has expectation $\mathbb{E}_P[E] - 1 \leq 0$ under all $P \in H_0$. Based on this interpretation, the product e-value for a sequence of tests can be interpreted as the amount of money you have gained by sequentially betting with pay-offs given by the individual e-variables and always re-investing all your gains. The betting interpretation becomes particularly visible if we rewrite an e-variable as $E := 1 + \lambda U$, where $U$ has expectation at most $0$ under all $P \in H_0$ and $\lambda$ is chosen so that $E \geq 0$ a.s. Any e-variable can be written in the form $1 + \lambda U$, although with parametric nulls, writing it as a likelihood ratio is usually mathematically more convenient. The form $1 + \lambda U$, on the other hand, is often more convenient in nonparametric settings. As a prototypical example, consider the case that $Y = (X_1, \ldots, X_n)$ with the $X_i$ taking values in the bounded interval $[0, 1]$. According to $H_0$, the $X_i$ are i.i.d. according to a distribution $P$ with mean $\mu_0$; no other assumptions about $P$ are made. Then we may first construct a family of e-variables for single outcomes, $E_i := 1 + \lambda(X_i - \mu_0)$, for any $\lambda \in [-1/(1 - \mu_0), 1/\mu_0]$ (these are the $\lambda$ for which $E_i$ is guaranteed to be nonnegative). We may then define a new e-variable for the complete data vector by taking the product $E := \prod_{i=1}^n \big(1 + \lambda_i (X_i - \mu_0)\big)$, where $\lambda_i$ is an estimate for a good $\lambda$, based only on past data $X_1, \ldots, X_{i-1}$, and designed to make $E$ as large as possible in the "e-power" or "GRO" sense (see below).
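A minimal numerical sketch of this product construction follows (illustrative only: the predictable choice of $\lambda_i$ below is a crude plug-in rule, not the optimized GRO bets used in the literature, and all names are ours):

```python
import numpy as np

def betting_e_process(x, mu0, c=0.5):
    """Running product of per-outcome e-values E_i = 1 + lam_i * (x_i - mu0)
    for outcomes in [0, 1], with each lam_i chosen from past data only."""
    product, path = 1.0, []
    for i, xi in enumerate(x):
        past = x[:i]
        mu_hat = (0.5 + past.sum()) / (1 + len(past))  # smoothed mean estimate
        lam = mu_hat - mu0                             # crude plug-in bet
        # keep a safety margin inside [-1/(1-mu0), 1/mu0] so each E_i stays positive
        lam = float(np.clip(lam, -c / (1.0 - mu0), c / mu0))
        product *= 1.0 + lam * (xi - mu0)
        path.append(product)
    return np.array(path)

rng = np.random.default_rng(0)
x = rng.beta(6, 4, size=500)        # data with true mean 0.6
e = betting_e_process(x, mu0=0.5)   # test H0: the mean is 0.5
alpha = 0.05
# By Ville's inequality, under H0 the product exceeds 1/alpha with probability
# at most alpha, so we may stop and reject the first time it does.
hits = np.flatnonzero(e >= 1.0 / alpha)
print("reject at n =", int(hits[0]) + 1 if hits.size else "never in 500 outcomes")
```

Because each $\lambda_i$ depends only on the past, the running product is a valid e-process, so the stopping rule in the last lines retains Type-I error control.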
Waudby-Smith and Ramdas use this approach to construct "nonparametric" confidence intervals for the mean that tend to be significantly narrower than those based on more classical methods such as the Chernoff, Hoeffding and Bernstein bounds. A fundamental property: optional continuation E-values are more suitable than p-values when one expects follow-up tests involving the same null hypothesis with different data or experimental set-ups. This includes, for example, combining individual results in a meta-analysis. The advantage of e-values in this setting is that they allow for optional continuation. Indeed, they have been employed in what may be the world's first fully 'online' meta-analysis with explicit Type-I error control. Informally, optional continuation implies that the product of any number of e-values, $E^{(1)}, E^{(2)}, \ldots$, defined on independent samples $Y^{(1)}, Y^{(2)}, \ldots$, is itself an e-value, even if the definition of each e-value is allowed to depend on all previous outcomes, and no matter what rule is used to decide when to stop gathering new samples (e.g. to perform new trials). It follows that, for any significance level $\alpha$, if the null is true, then the probability that a product of e-values will ever become larger than $1/\alpha$ is bounded by $\alpha$. Thus if we decide to combine the samples observed so far and reject the null if the product e-value is larger than $1/\alpha$, then our Type-I error probability remains bounded by $\alpha$. We say that testing based on e-values remains safe (Type-I valid) under optional continuation. Mathematically, this is shown by first showing that the product e-variables form a nonnegative discrete-time martingale in the filtration generated by $Y^{(1)}, Y^{(2)}, \ldots$ (the individual e-variables are then increments of this martingale). The results then follow as a consequence of Doob's optional stopping theorem and Ville's inequality. We already implicitly used product e-variables in the example above, where we defined e-variables on individual outcomes $X_i$ and designed a new e-value by taking products. Thus, in the example, the individual outcomes $X_i$ play the role of 'batches' (full samples) above, and we can therefore even engage in optional stopping "within" the original batch $Y$: we may stop the data analysis at any individual outcome (not just "batch of outcomes") we like, for whatever reason, and reject if the product so far exceeds $1/\alpha$. Not all e-variables defined for batches of outcomes can be decomposed as a product of per-outcome e-values in this way though. If this is not possible, we cannot use them for optional stopping (within a sample $Y$) but only for optional continuation (from one sample to the next and so on). Construction and optimality If we set $E := 1$ independently of the data, we get a trivial e-value: it is an e-variable by definition, but it will never allow us to reject the null hypothesis. This example shows that some e-variables may be better than others, in a sense to be defined below. Intuitively, a good e-variable is one that tends to be large (much larger than 1) if the alternative is true. This is analogous to the situation with p-values: both e-values and p-values can be defined without referring to an alternative, but if an alternative is available, we would like them to be small (p-values) or large (e-values) with high probability. In standard hypothesis tests, the quality of a valid test is formalized by the notion of statistical power, but this notion has to be suitably modified in the context of e-values.
The standard notion of quality of an e-variable relative to a given alternative $H_1$, used by most authors in the field, is a generalization of the Kelly criterion in economics and (since it does exhibit close relations to classical power) is sometimes called e-power; the optimal e-variable in this sense is known as log-optimal or growth-rate optimal (often abbreviated to GRO). In the case of a simple alternative $H_1 = \{Q\}$, the e-power of a given e-variable $E$ is simply defined as the expectation $\mathbb{E}_Q[\log E]$; in case of composite alternatives, there are various versions (e.g. worst-case absolute, worst-case relative) of e-power and GRO. Simple alternative, simple null: likelihood ratio Let $H_0 = \{P_0\}$ and $H_1 = \{Q\}$ both be simple. Then the likelihood ratio e-variable $E := q(Y)/p_0(Y)$ has maximal e-power in the sense above, i.e. it is GRO. Simple alternative, composite null: reverse information projection (RIPr) Let $H_1 = \{Q\}$ be simple and $H_0 = \{P_\theta : \theta \in \Theta_0\}$ be composite, such that all elements of $H_0 \cup H_1$ have densities (denoted by lower-case letters) relative to the same underlying measure. Grünwald et al. show that under weak regularity conditions, the GRO e-variable exists, is essentially unique, and is given by $E := q(Y)/p^{\circ}(Y)$, where $p^{\circ}$ is the density of the Reverse Information Projection (RIPr) of $Q$ onto the convex hull of $H_0$. Under further regularity conditions (and in all practically relevant cases encountered so far), $p^{\circ}$ is given by a Bayes marginal density: there exists a specific, unique distribution $W^{\circ}$ on $\Theta_0$ such that $p^{\circ}(Y) = \int p_\theta(Y) \, dW^{\circ}(\theta)$. Simple alternative, composite null: universal inference (UI) In the same setting as above, Wasserman, Ramdas and Balakrishnan show that, under no regularity conditions at all, $E := \frac{q(Y)}{\sup_{\theta \in \Theta_0} p_\theta(Y)} = \frac{q(Y)}{p_{\hat\theta_0(Y)}(Y)}$ is an e-variable (with the second equality holding if the MLE (maximum likelihood estimator) $\hat\theta_0(Y)$ based on data $Y$ is always well-defined). This way of constructing e-variables has been called the universal inference (UI) method, "universal" referring to the fact that no regularity conditions are required. Composite alternative, simple null Now let $H_0 = \{P_0\}$ be simple and $H_1 = \{Q_\theta : \theta \in \Theta_1\}$ be composite, such that all elements of $H_0 \cup H_1$ have densities relative to the same underlying measure. There are now two generic, closely related ways of obtaining e-variables that are close to growth-optimal (appropriately redefined for composite $H_1$): Robbins' method of mixtures and the plug-in method, originally due to Wald but, in essence, re-discovered by Philip Dawid as "prequential plug-in" and Jorma Rissanen as "predictive MDL". The method of mixtures essentially amounts to "being Bayesian about the numerator" (the reason it is not called "Bayesian method" is that, when both null and alternative are composite, the numerator may often not be a Bayes marginal): we posit any prior distribution $W$ on $\Theta_1$, set $\bar{q}(Y) := \int q_\theta(Y) \, dW(\theta)$, and use the e-variable $E := \bar{q}(Y)/p_0(Y)$. To explicate the plug-in method, suppose that $Y = (X_1, \ldots, X_n)$, where the $X_i$ constitute a stochastic process, and let $\breve{\theta}(X^i)$ be an estimator of $\theta \in \Theta_1$ based on data $X^i = (X_1, \ldots, X_i)$ for $i \geq 1$. In practice one usually takes a "smoothed" maximum likelihood estimator (such as, for example, the regression coefficients in ridge regression), initially set to some "default value" $\breve{\theta}(X^0)$. One now recursively constructs a density $\bar{q}$ for $Y$ by setting $\bar{q}(Y) := \prod_{i=1}^n q_{\breve{\theta}(X^{i-1})}(X_i \mid X^{i-1})$. Effectively, both the method of mixtures and the plug-in method can be thought of as learning a specific instantiation of the alternative that explains the data well. Composite null and alternative In parametric settings, we can simply combine the main methods for the composite alternative (obtaining $\bar{q}$ via mixtures or plug-in) with the main methods for the composite null (UI or RIPr, using the single distribution with density $\bar{q}$ as an alternative).
Note in particular that when using the plug-in method together with the UI method, the resulting e-variable will look like $E := \frac{\prod_{i=1}^n q_{\breve{\theta}(X^{i-1})}(X_i \mid X^{i-1})}{\sup_{\theta \in \Theta_0} p_\theta(Y)}$, which resembles, but is still fundamentally different from, the generalized likelihood ratio as used in the classical likelihood ratio test. The advantage of the UI method compared to RIPr is that (a) it can be applied whenever the MLE can be efficiently computed - in many such cases, it is not known whether/how the reverse information projection can be calculated; and (b) that it 'automatically' gives not just an e-variable but a full e-process (see below): if we replace $n$ in the formula above by a general stopping time $\tau$, the resulting ratio is still an e-variable; for the reverse information projection this automatic e-process generation only holds in special cases. Its main disadvantage compared to RIPr is that it can be substantially sub-optimal in terms of the e-power/GRO criterion, which means that it leads to tests which also have less classical statistical power than RIPr-based methods. Thus, for settings in which the RIPr method is computationally feasible and leads to e-processes, it is to be preferred. These include the z-test, t-test and corresponding linear regressions, k-sample tests with Bernoulli, Gaussian and Poisson distributions, and the logrank test (an R package is available for a subset of these), as well as conditional independence testing under a model-X assumption. However, in many other statistical testing problems, it is currently (2023) unknown whether fast implementations of the reverse information projection exist, and they may very well not exist (e.g. generalized linear models without the model-X assumption). In nonparametric settings (such as testing a mean as in the example above, or nonparametric 2-sample testing), it is often more natural to consider e-variables of the $1 + \lambda U$ type. However, while these superficially look very different from likelihood ratios, they can often still be interpreted as such and sometimes can even be re-interpreted as implementing a version of the RIPr construction. Finally, in practice, one sometimes resorts to mathematically or computationally convenient combinations of RIPr, UI and other methods. For example, RIPr is applied to get optimal e-variables for small blocks of outcomes and these are then multiplied to obtain e-variables for larger samples - these e-variables work well in practice but cannot be considered optimal anymore. A third construction method: p-to-e (and e-to-p) calibration There exist functions that convert p-values into e-values. Such functions are called p-to-e calibrators. Formally, a calibrator is a nonnegative decreasing function $f : [0, 1] \to [0, \infty]$ which, when applied to a p-variable (a random variable whose value is a p-value), yields an e-variable. A calibrator $f$ is said to dominate another calibrator $g$ if $f(p) \geq g(p)$ for all $p$, and this domination is strict if the inequality is strict for some $p$. An admissible calibrator is one that is not strictly dominated by any other calibrator. One can show that for a function to be a calibrator, it must have an integral of at most 1 over the uniform probability measure: $\int_0^1 f(p) \, dp \leq 1$. One family of admissible calibrators is given by the set of functions $f_\kappa(p) := \kappa p^{\kappa - 1}$, with $0 < \kappa < 1$. Another calibrator is given by integrating out $\kappa$: $f(p) := \int_0^1 \kappa p^{\kappa - 1} \, d\kappa = \frac{1 - p + p \ln p}{p (\ln p)^2}$. Conversely, an e-to-p calibrator transforms e-values back into p-variables. Interestingly, the following calibrator dominates all other e-to-p calibrators: $f(e) := \min(1, 1/e)$.
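A small sketch transcribing the calibrator formulas above into code (function names are ours); the loop checks numerically that each p-to-e calibrator integrates to at most 1 over the uniform measure:

```python
import numpy as np
from scipy import integrate

def p_to_e(p, kappa=0.5):
    """Admissible calibrator f_kappa(p) = kappa * p**(kappa - 1), 0 < kappa < 1."""
    return kappa * p ** (kappa - 1)

def p_to_e_mixed(p):
    """The kappa-integrated calibrator (1 - p + p*ln p) / (p * (ln p)**2)."""
    return (1.0 - p + p * np.log(p)) / (p * np.log(p) ** 2)

def e_to_p(e):
    """The dominating e-to-p calibrator min(1, 1/e)."""
    return np.minimum(1.0, 1.0 / e)

# Any p-to-e calibrator must integrate to at most 1 against the uniform measure:
for f in (p_to_e, p_to_e_mixed):
    total, _ = integrate.quad(f, 0.0, 1.0, limit=200)
    print(f"{f.__name__}: integral over [0, 1] = {total:.4f}")

print("p = 0.01 maps to e =", p_to_e(0.01))   # 0.5 * 0.01**(-0.5) = 5.0
print("e = 20 maps back to p =", e_to_p(20))  # 0.05
```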
While of theoretical importance, calibration is not much used in the practical design of e-variables, since the resulting e-variables are often far from growth-optimal for any given alternative. E-processes Definition Now consider data $X_1, X_2, \ldots$ arriving sequentially, constituting a discrete-time stochastic process. Let $(E_n)_{n \geq 1}$ be another discrete-time process where for each $n$, $E_n$ can be written as a (measurable) function of the first $n$ outcomes. We call $(E_n)_{n \geq 1}$ an e-process if for any stopping time $\tau$, $E_\tau$ is an e-variable, i.e. $\mathbb{E}_P[E_\tau] \leq 1$ for all $P \in H_0$. In basic cases, the stopping time can be defined by any rule that determines, at each sample size $n$, based only on the data observed so far, whether to stop collecting data or not. For example, this could be "stop when you have seen four consecutive outcomes larger than 1", "stop at some fixed sample size", or the level-$\alpha$-aggressive rule, "stop as soon as you can reject at level $\alpha$, i.e. at the smallest $n$ such that $E_n \geq 1/\alpha$", and so on. With e-processes, we obtain an e-variable with any such rule. Crucially, the data analyst may not know the rule used for stopping. For example, her boss may tell her to stop data collecting and she may not know exactly why - nevertheless, she gets a valid e-variable and Type-I error control. This is in sharp contrast to data analysis based on p-values (which becomes invalid if stopping rules are not determined in advance) or classical Wald-style sequential analysis (which works with data of varying length but again, with stopping times that need to be determined in advance). In more complex cases, the stopping time has to be defined relative to some slightly reduced filtration, but this is not a big restriction in practice. In particular, the level-$\alpha$-aggressive rule is always allowed. Because of this validity under optional stopping, e-processes are the fundamental building block of confidence sequences, also known as anytime-valid confidence intervals. Technically, e-processes are generalizations of test supermartingales, which are nonnegative supermartingales with starting value 1: any test supermartingale constitutes an e-process but not vice versa. Construction E-processes can be constructed in a number of ways. Often, one starts with an e-variable $E^{(i)}$ for each outcome $X_i$ whose definition is allowed to depend on previous data, i.e., $\mathbb{E}_P[E^{(i)} \mid X_1, \ldots, X_{i-1}] \leq 1$ for all $P \in H_0$ (again, in complex testing problems this definition needs to be modified a bit using reduced filtrations). Then the product process $(M_n)_{n \geq 1}$ with $M_n := \prod_{i=1}^n E^{(i)}$ is a test supermartingale, and hence also an e-process (note that we already used this construction in the example described under "e-values as bets" above: for fixed $\lambda$, the e-values were not dependent on past data, but by using $\lambda_i$ depending on the past, they became dependent on past data). Another way to construct an e-process is to use the universal inference construction described above for sample sizes $n = 1, 2, \ldots$ The resulting sequence of e-values will then always be an e-process. History Historically, e-values implicitly appear as building blocks of nonnegative supermartingales in the pioneering work on anytime-valid confidence methods by well-known mathematician Herbert Robbins and some of his students. The first time e-values (or something very much like them) are treated as a quantity of independent interest is by another well-known mathematician, Leonid Levin, in 1976, within the theory of algorithmic randomness. With the exception of contributions by pioneer V.
Vovk in various papers with various collaborators, and an independent re-invention of the concept in an entirely different field, the concept did not catch on at all until 2019, when, within just a few months, several pioneering papers by several research groups appeared on arXiv (the corresponding journal publications referenced below sometimes coming years later). In these, the concept was finally given a proper name ("S-value" by one group and "E-value" by another; later versions of the former group's paper also adopted "E-value"), its general properties were described, two generic ways to construct e-values were given, and their intimate relation to betting was explained. Since then, interest by researchers around the world has been surging. In 2023 the first overview paper on "safe, anytime-valid methods", in which e-values play a central role, appeared. References Statistical hypothesis testing Statistical concepts Probability theory
E-values
[ "Mathematics" ]
5,388
[ "Statistical concepts" ]
75,501,216
https://en.wikipedia.org/wiki/2-%282-%28Dimethylamino%29ethoxy%29ethanol
2-[2-(Dimethylamino)ethoxy]ethanol is an organic compound with the molecular formula C6H15NO2 and is a liquid at room temperature. Dimethylaminoethoxyethanol is polyfunctional, having tertiary amine, ether and hydroxyl functionality. Like other organic amines, it acts as a weak base. Manufacture Dimethylaminoethoxyethanol is manufactured by reacting dimethylamine and ethylene oxide. Other methods are also available that produce streams rich in the substance, which then need to be further purified. Uses As dimethylaminoethoxyethanol is weakly basic, it has been studied as a means of absorbing greenhouse gases, in particular carbon dioxide. Dimethylaminoethoxyethanol is used extensively in surfactants, which have also been evaluated as corrosion inhibitors. The surfactants prepared are usually cationic and may also be used as biocides. This is particularly important for oilfield applications against sulfate-reducing microorganisms. The material has other uses, including: general applications such as clays, intermediates, plasticizers and adhesives; as a catalyst, especially for polyurethanes; as a process regulator; and in propellants and blowing agents. Toxicity The toxicity of dimethylaminoethoxyethanol has been extensively studied. References Tertiary amines Catalysts Dimethylamino compounds Ethanolamines
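The manufacturing route above corresponds to a simple overall stoichiometry: dimethylamine opens one equivalent of ethylene oxide to give an aminoethanol, and a second equivalent extends it by an ethoxy unit. A balanced overall equation consistent with the stated route (illustrative; industrial conditions, catalysts and by-products are not shown):

```latex
% Overall stoichiometry: dimethylamine + 2 ethylene oxide
% -> 2-[2-(dimethylamino)ethoxy]ethanol (C6H15NO2)
\mathrm{(CH_3)_2NH} \;+\; 2\,\mathrm{C_2H_4O}
  \;\longrightarrow\; \mathrm{(CH_3)_2N\!-\!CH_2CH_2\!-\!O\!-\!CH_2CH_2\!-\!OH}
```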
2-(2-(Dimethylamino)ethoxy)ethanol
[ "Chemistry" ]
319
[ "Catalysis", "Catalysts", "Chemical kinetics" ]
72,656,350
https://en.wikipedia.org/wiki/Mahmoud%20Fustuq
Mahmoud Fustuq (1936 – 8 February 2006) was a Lebanese businessman who had various companies in Saudi Arabia. He was known for being the brother-in-law of former Saudi Arabian ruler King Abdullah and for his involvement in the horse business. Biography Fustuq was born in Lebanon in 1936. His family is from Palestine. He was the eldest of nine siblings. He attended the University of Oklahoma in the late 1950s and received a degree in petroleum engineering. His sister, Aida, married King Abdullah. Another of his sisters, Abla, was married to the Lebanese politician Nassib Lahoud. Fustuq had varied businesses in Saudi Arabia. He acquired the Buckram Oak Farm near Lexington, Kentucky, in 1978, which he sold in 2005. He also owned other farms in Ocala, Florida, and Kentucky, where he had race horses, including Star Gallant, who won the Illinois Derby in 1982, and Silver Train, who won a Breeders' Cup race. His other prominent horses were Najran, Silver Hawk, Siberian Summer and Green Forest. He died in Pompano Beach, Florida, on 8 February 2006 in a traffic accident. He was buried in Saudi Arabia. Controversy In the 1970s Fustuq received a commission from British Leyland following the sale of a fleet of Land Rovers by the company to the Saudi Arabian National Guard headed by Prince Abdullah, later King Abdullah. The Guardian reported that after this transaction he bought the farms in the USA and a mansion near Chantilly, France. References 20th-century Lebanese businesspeople 21st-century Lebanese businesspeople 1936 births 2006 deaths Racehorse owners and breeders University of Oklahoma alumni Road incident deaths in Florida Owners of a Breeders' Cup winner Lebanese engineers Petroleum engineers Lebanese people of Palestinian descent
Mahmoud Fustuq
[ "Engineering" ]
357
[ "Petroleum engineers", "Petroleum engineering" ]
72,657,494
https://en.wikipedia.org/wiki/%C4%B6intu%20well
Ķintu well is a historical object in Cīrava parish, Latvia. It is made of large carved stone blocks up to 2 meters long. The cross-section of the well forms a square with 1.25 m long sides. The Ķintu well is an archaeological monument, possibly the remains of an earlier larger megalithic complex. References Archaeological sites in Latvia South Kurzeme Municipality Water wells
Ķintu well
[ "Chemistry", "Engineering", "Environmental_science" ]
81
[ "Hydrology", "Water wells", "Environmental engineering" ]
72,658,320
https://en.wikipedia.org/wiki/Intercellular%20communication
Intercellular communication (ICC) refers to the various ways and structures that biological cells use to communicate with each other directly or through their environment. Often the environment has been thought of as the extracellular spaces within an animal. More broadly, cells may also communicate with other animals, either of their own group or species, or other species in the wider ecosystem. Different types of cells use different proteins and mechanisms to communicate with one another, using extracellular signalling molecules or electric fluctuations which could be likened to an intercellular ethernet. Components of each type of intercellular communication may be involved in more than one type of communication, making attempts at clearly separating the types of communication listed somewhat futile. Broadly speaking, intercellular communication may be categorized as being within a single animal or between an animal and other animals in the ecosystem in which it lives. In this article, intercellular communication has been further collated into various areas of research rather than by functional or structural characteristics. Communication within an organism Cell signalling Molecular cell signaling Single-celled organisms sense their environment to seek food and may send signals to other cells to behave symbiotically or reproduce. A classic example of this is the slime mold. The slime mold shows how intercellular communication with a small molecule (e.g., cyclic AMP) allows a simple organism to form from an organized aggregation of single cells. Research into cell signalling has investigated receptors specific to each signal, as well as multiple receptors potentially being activated by a single signal. It is not only the presence or absence of a signal that is important but also its strength. Using a chemical gradient to coordinate cell growth and differentiation continues to be important as multicellular animals and plants become more complex. This type of intercellular communication within an organism is commonly referred to as cell signalling. This type of intercellular communication is typified by a small signalling molecule diffusing through the spaces around cells, often relying on a diffusion gradient forming part of the signalling response. Cell junctions Complex organisms may have molecules to hold the cells together which can also be involved in intercellular communication. Some binding molecules are termed the extracellular matrix and may involve longer molecules like cellulose for the cell wall in plants or collagen in animals. When the membranes of two animal cells are close, they may form special types of cell junctions, which come in three broad types: occluding junctions (such as tight junctions and septate junctions), anchoring junctions (such as adherens junctions, desmosomes, focal adhesions, and hemidesmosomes), and communicating junctions (such as gap junctions). The structures they form also form parts of complex protein signaling pathways. In one respect, tight junctions play a generic role in cell signaling in that they may form a tight zip around cells, forming a barrier to stop even small, unwanted signalling molecules from getting between cells. Without these junctions, signalling molecules may spread to another group of cells which do not require the signal, or escape too quickly from where they are needed. Gap junctions allow neighboring cells to directly exchange small molecules.
Pannexins, connexins, innexins Pannexins, connexins, and innexins are transmembrane proteins that are all named after the Latin term nexus, meaning to connect. They are grouped together as they all share a similar structure of 4 transmembrane domains crossing the cell membrane in a similar way, but they do not all share enough sequence homology to allow them to be considered directly related. Earlier investigations involving the connexins demonstrated cells forming a direct connection with each other using groups of connexins, but not connections with the cell exterior. As such they were not considered to participate in extracellular cell signalling at the time. Later studies made it apparent connexins could connect directly to the cell exterior, meaning they are a conduit for the release and uptake of signalling molecules from the environment external to the cell. Furthermore, pannexins appear to do this to such an extent that they may rarely if ever participate in direct cell-to-cell coupling. Many animals do not appear to have pannexins/innexins/connexins at all, perhaps indicating there may be other similar proteins still to be discovered that serve to aid intercellular communication in these animals. Direct links between cells Septal pores In fungi, pores crossing the cell walls that separate cellular compartments act as an ICC for the movement of molecules to their neighboring compartments. Most red algae may have pores, called pit connections, in the cell septum that partitions a cell/filament. As a leftover of the mitotic division, such a pore may be plugged up by the cell. There are also similar connections between neighboring cells/filaments that may allow sharing of nutrients. Cells of a different species may initiate and form a pit connection with the host algae. Plasmodesmata in plants Plant cells usually have thick cell walls which need to be crossed if neighboring cells are to communicate directly. Plasmodesmata form a pipe through the cell wall, forming an ICC. The pipe has another smaller membranous pipe concentric to it, connecting the endoplasmic reticulum of the two cells via a tube called the desmotubule. The larger pipe also contains cytoskeletal and other elements. It is presumed viruses use plasmodesmata as a route through the cell walls to spread through the plant. Gap junctions in animals Gap junctions can form intercellular links, effectively a tiny direct regulated "pipe", called a connexon pair, between the cytoplasms of the two cells that form the junction. 6 connexins make a connexon, and 2 connexons make a connexon pair, so 12 connexin proteins build each tiny ICC. This ICC allows two cells to communicate directly while being sealed from the outside world. Cells may form one or thousands of these tiny ICCs between themselves and their other neighbors, potentially forming large networks of directly linked cells. The connexon pairs form ICCs that can transport water and many other molecules up to around 1000 atoms in size, and can be very rapidly signaled to turn on and off as required. These ICCs also communicate electrical signals that can be rapidly turned on and off. Adding to their versatility, there is a range of these ICC types, as there are over 20 different connexins with different properties that can combine with each other in a variety of ways. The variety of potential signaling combinations that results is enormous.
A much studied example of gap junctions' electrical signalling abilities is in the electrical synapses found on nerves. In heart muscle, gap junctions function to coordinate the beating of the heart. Adding even further to their versatility, gap junctions can also form a direct connection to the exterior of a cell, paralleling the functioning of their protein cousins, the pannexins, which are explained elsewhere. Intercellular bridge Intercellular bridges are larger than gap junction ICCs, so they are able to allow the movement of not only small signaling molecules but also large DNA molecules or even whole cell organelles. They are maintained between two cells, allowing them to exchange cytoplasmic contents, and are frequently observed when cells need intimate communication, such as when they are reproducing. They are found in prokaryotes for exchanging DNA; in small organisms such as Pinnularia, Valonia ventricosa, Volvox and C. elegans; in mitosis generally (cytokinesis); in Blepharisma for sexual reproduction; and during meiosis, including spermatocytogenesis to synchronise the development of germ cells, and oogenesis in larger organisms. Bridges have been shown to assist in cell migration. Cytoplasmic bridges can also be used to attack another cell, as in the case of Vampirococcus. Cell fusion Cells that require a more permanent, extensive cytoplasmic linkage may fuse with each other to varying degrees, in many cases forming one large cell or syncytium. This happens extensively during the development of skeletal muscle, forming large muscle fibers. Later it was confirmed in other tissues such as the eye lens. Though both involve cell fibers, in the case of the eye lens the cell fusion is more limited in scope, resulting in a less extensively fused stratified syncytium. Vesicles Lipid membrane bound vesicles of a large range of sizes are found inside and outside of cells, containing a huge variety of things ranging from food to invading organisms, water to signaling molecules. Using an electrical nerve impulse from a neuron of a neuromuscular junction to stimulate a muscle to contract is an example of very small (about 0.05 μm) vesicles being directly involved in regulating intercellular communication. The neuron produces thousands of tiny vesicles, each containing thousands of signalling molecules. One vesicle is released close to the muscle every second or so when resting. When activated by a nerve impulse, more than 100 vesicles will be released at once, carrying hundreds of thousands of signalling molecules and causing a significant contraction of the muscle fiber. All this happens in a small fraction of a second. Generally, small vesicles used to transport signalling molecules released from the cell are termed exosomes or simply extracellular vesicles (EV), and in addition to their importance to the organism they are also important for biosensors. Extracellular vesicles can be released from malignant cancer cells. These extracellular vesicles have been shown to contain gap junction proteins over-expressed in the malignant cells, which spread to non-cancerous cells and appear to enhance the spread of the malignancy. Vesicles are also associated with the transport of materials outside of the cell to enable growth and repair of tissues in the extracellular matrix. In situations such as these they may be given special designations such as Matrix Vesicles (MV).
Examples of larger vesicles are found in the regulated secretory pathways of endocrine and exocrine tissues, in transcytosis, and in the vesiculo-vacuolar organelle (VVO) in endothelial and perhaps other cell types. Another form of transfer of pieces of membrane around junctions is called trans-endocytosis. Some large intercellular vesicles also appear to stay intact as they transport their contents from one part of a tissue to another and involve gap junction plaques. Communication in nervous systems When we think of intercellular communication we often use the nervous system as a point of reference. Nerves, made up of many cells in vertebrates, are typically highly specialized in form and function, usually being the most complex in the brain. They ensure rapid, precise, directional cell-to-cell communication over longer distances, for example from the brain to the hand. The nerve cells can be thought of as intermediaries, not so much communicating with each other but rather passing on the messages from one neighboring cell to another. Being "accessory" cells that pass on the message, they require additional space and can consume a lot of energy within an organism. Simpler organisms such as sponges and placozoans often have less food availability and so less energy to spare. Their nervous systems are less specialized, and the cells that are part of them are required to perform other functions as well. Ephaptic coupling When groups of nerve cells form, another type of intercellular communication, called ephaptic coupling, can arise. It was first quantified by Katz in 1940, but it has been difficult to associate any one structure or "ephapse" with this form of communication. There are reductionist attempts to associate particular groups of nerve cells exhibiting ephaptic coupling with particular functions in the brain. As yet there are no studies on the simplest neural systems, such as the polar bodies of Ctenophores, to see if ephaptic coupling may explain some of their more complex behaviors. Ecosystem intercellular communication The definition of biological communication is not simple. In the field of cell biology, early research was at a cellular-to-organism level. How the individual cells in one organism could affect those in another was difficult to trace and not of primary concern. If intercellular communication includes one cell transmitting a signal to another to elicit a response, intercellular communication is not restricted to the cells within a single organism. Over short distances, interkingdom communication in plants has been reported. In-water reproduction often involves the vast synchronized release of gametes, called spawning. Over large distances, cells in one plant will communicate with cells in another plant of the same species and other species by releasing signals into the air, such as green leaf volatiles that can, among other things, pre-warn neighbors of herbivores or, in the case of ethylene gas, trigger ripening in fruits. Intercellular signalling in plants can also happen below ground with the mycorrhizal network, which can link large areas of plants via fungal networks, allowing the redistribution of environmental resources. Looking at insect colonies such as bees and ants, we have discovered that the pheromones released from one organism's cells to another organism's cells can coordinate colonies in a way reminiscent of slime molds. Cell-to-cell signalling using "pheromones" was also found in more complex animals. As complexity increases, so does the effect of signals.
"Pheromones" in more complex animals such as vertebrates are now more correctly referred to as "chemosignals" including between species. The idea that intercellular communication is so similar among cells within an organism as well as cells between different organisms, even prey, is demonstrated by vinnexin. This protein is a modified form of an innexin protein found in a caterpillar. That is, the vinnexin is very similar to the caterpillar's own innexin, and could only have been derived from a non-viral innexin in some way that is unclear. The caterpillar innexin forms normal intercellular connections inside the caterpillar as part of the caterpillar's immune response to an egg implanted by a parasitic wasp. The innexin helps ensure the wasp egg is neutralized, saving the caterpillar from the parasite. So what does the vinnexin do and how? Evolution has led to a virus that communicates with the wasp in a way that evades the wasps antiviral responses, allowing the virus to live and replicate in the wasps ovaries. When the wasp injects its egg into the caterpillar host many virus from the wasp's ovary are also injected. The virus particles do not replicate in the caterpillar cells but rather communicate with the caterpillars genetic machinery to produce vinnexin protein. The vinnexin protein incorporates itself into the caterpillar's cells altering the communication in the caterpillar so the caterpillar goes on living but with an altered immune response. Vinnexins are able to mix with normal innexins to alter communication within the caterpillar and probably do. The altered communication within the caterpillar prevents the caterpillar's defenses rejecting the wasps egg. As a result, the wasp egg hatches, consumes the caterpillar and the virus from the wasp larva's mother, and repeats the cycle. It can be seen the virus and wasp are essential to each other and communicate well with each other to allow the virus to live and replicate, but only in a non-destructive way inside the wasp ovary. The virus is injected into a caterpillar by the wasp, but the virus does not replicate in the caterpillar, the virus only communicates with the caterpillar to modify it in a non-lethal way. The wasp larvae will then slowly eat the caterpillar without being stopped while communicating with the virus again to ensure that the wasp has a place in its ovary for it to again replicate. Connexins/innexins/vinnexins, once thought to only participate in providing a path for signaling molecules or electrical signals have now been shown to act as a signaling molecule itself. References Cell biology Cell communication Cell anatomy Cell signaling Systems biology
Intercellular communication
[ "Biology" ]
3,297
[ "Cell communication", "Cell biology", "Cellular processes", "Systems biology" ]
72,658,868
https://en.wikipedia.org/wiki/Schizophyllum%20amplum
Schizophyllum amplum is a species of fungus, also known as poplar bells. It is a small inedible bell-shaped fungus that grows from September until November, with a cap of 5–15 mm. The fungus grows on fallen branches of a number of hardwood trees. It was transferred to the genus Schizophyllum in 1996 by Karen K. Nakasone as a new combination after a study of Auriculariopsis albomellea and Phlebia albida. It is common in Europe but has been found across the world, including in the United States, Netherlands, France, Spain, Romania, New Zealand, Canada, Austria, Germany, Hungary, Yugoslavia, Russia, Iran and Denmark. References Schizophyllaceae Fungi described in 1848 Taxa named by Joseph-Henri Léveillé Fungi of Europe Fungi of North America Fungus species
Schizophyllum amplum
[ "Biology" ]
179
[ "Fungi", "Fungus species" ]
72,659,566
https://en.wikipedia.org/wiki/Zero-touch%20provisioning
Zero-touch provisioning (ZTP), or zero-touch enrollment, is the process of remotely provisioning large numbers of network devices such as switches, routers and mobile devices without having to manually program each one individually. The feature improves existing provisioning models, solutions and practices in the areas of wireless networks, (complex) network management and operations services, and cloud-based infrastructure services provisioning. ZTP saves configuration time while reducing errors. The process can also be used to update existing systems using scripts. Research has shown that ZTP systems allow for faster provisioning than manual provisioning. The global market for ZTP services was estimated to be $2.1 billion in 2021. The FIDO Alliance published FIDO Device Onboard version 1.0 in December 2020, and followed up with FIDO Device Onboard version 1.1 in April 2022. Several FDO "app notes" augment this specification. FIDO Device Onboard is also a ZTP-type protocol. Applications One application of the technology is to improve the delivery of cloud computing services. The concept has been particularly influential in information technology when paired with mobile device management. Repetitive processes that can be automated and streamlined include configuring settings; collecting inventory details; deploying apps; managing licenses; and implementing security policy, including password management and wiping remote devices. System architecture A basic ZTP system requires a network device that supports ZTP, a server that supports Dynamic Host Configuration Protocol (DHCP) or Trivial File Transfer Protocol (TFTP), and a file server. When a ZTP-enabled device is powered on, the device's boot file sets up configuration parameters. A switch then sends a request using DHCP or TFTP to get the device's configuration file from a central location. The file then runs and configures ports, IP addresses and other server parameters for each location. Similar concepts A similar concept is the zero-touch network, which integrates zero-touch provisioning with automation, artificial intelligence and machine learning. Standards activity In December 2017, the European Telecommunications Standards Institute (ETSI) formed the Zero-touch network and Service Management group (ZSM) to accelerate development and standardization of the technology. In the summer of 2019, the group published a series of documents defining ZSM requirements, reference architecture and terminology. In April 2019, the Internet Engineering Task Force published RFC 8572 Secure Zero Touch Provisioning (SZTP) as a Proposed Standard. References External links ETSI ZSM standards What is ZTP (Zero Touch Provisioning)? Communications protocols Networks Cloud computing
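The boot flow in the "System architecture" section is simple enough to sketch. Below is a minimal, illustrative Python simulation of that control flow; the helper functions, the TEST-NET address 192.0.2.10 and the file name are hypothetical stand-ins, not a vendor API (real devices typically read the TFTP server and bootfile from their DHCP lease, commonly via options 66 and 67):

```python
# Illustrative sketch of a ZTP-style boot flow; not a real provisioning agent.
import subprocess

def fetch_dhcp_options() -> dict:
    # Hypothetical stand-in: a real device parses these out of its DHCP
    # lease (e.g. option 66 = TFTP server name, option 67 = bootfile name).
    return {"tftp_server": "192.0.2.10", "bootfile": "ztp-config.sh"}

def apply_config(config: str) -> None:
    # Hypothetical stand-in: a real device would execute the fetched script
    # or load the vendor configuration it contains.
    print(f"ZTP: applying {len(config)} bytes of configuration")

def zero_touch_boot() -> None:
    opts = fetch_dhcp_options()
    url = f"tftp://{opts['tftp_server']}/{opts['bootfile']}"
    print(f"ZTP: fetching {url}")
    # curl supports the tftp:// scheme; many devices also fall back to HTTP.
    result = subprocess.run(["curl", "-s", url], capture_output=True, check=True)
    apply_config(result.stdout.decode())

if __name__ == "__main__":
    zero_touch_boot()
```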
Zero-touch provisioning
[ "Technology" ]
555
[ "Computer standards", "Communications protocols" ]
72,662,717
https://en.wikipedia.org/wiki/Semantic%20spacetime
Semantic spacetime is a theoretical framework for agent-based modelling of spacetime, based on Promise Theory. It is relevant both as a model of computer science and as an alternative network-based formulation of physics in some areas. Semantic Spacetime was introduced by the physicist and computer scientist Mark Burgess, in a series of papers called Spacetimes with Semantics, as a practical alternative way of describing space and time, initially for computer science. It attempts to unify both quantitative and qualitative aspects of spacetime processes into a single model, which Burgess refers to as covering both "dynamics and semantics". Promise theory is used as a representation for semantics. Directed adjacency is the graph-theoretic logical primitive, but with the caveat that each node must both emit and absorb adjacency relations, cooperatively, similar to the unitary structure of quantum probabilities and transitions. Space is thus made up of cooperating nodes and edges. The representation of spacetime becomes a form of labelled graph, specifically built from promise-theoretic bindings. Origins According to Burgess, Semantic Spacetime originates from asking what the implications of Promise Theory are for our understanding of space and time. The traditional view of spacetime seems to have no relevance to phenomena in computing, electronics, biology, or many other information-based processes. The classical understanding of spacetime from Newton's era is based on ballistics: space and time form a purely passive theatre for the motion and behaviours of material bodies. Einstein partially changed that perception with General Relativity, in which spacetime geometry is an active participant with its own properties, i.e. curvature, energy, and mass. In the process models of computer science, electronics, biology, and logistics, however, space is formed from functional components that act more like service providers. Processes are representations of autonomous modular outcomes, a result of information passing between agents in networks of such active components, with a certain strength of coupling. Burgess also observed a relationship between semantic knowledge representations and the bigraphs of Robin Milner, but found existing languages excessively formal and lacking in expressibility. In Semantic Spacetime one uses the language of Promise Theory to formulate a process (spacetime) model for autonomous agents. The property of autonomy becomes closely linked to locality in physics, so the approach has an appeal to universality. Relationship to other models Burgess has stated that Semantic Spacetime is an attempt to demystify the explanation of certain phenomena in both physics and information science: "Until we can get past the prejudices of classical separation of science into disciplines we will not make progress in understanding computer systems at enormous scale". In 2019, Burgess wrote an extended book about the idea called Smart Spacetime to encourage interest in the approach and explain the vision behind Semantic Spacetime, and made a documentary video. The book goes further in pointing out "deep connections" to other fields of science, suggesting a multi-disciplinary viewpoint. Commentators have likened the idea to other graph-theoretic models of spacetime, such as Causal Sets, Quantum Graphity and the Wolfram Physics Project; however, Burgess emphasizes key differences that go beyond the obvious use of graphs for modelling space in these writings.
In physics, spacetime is a purely quantitative description of metric properties, labelled by coordinates to map out a region or a volume; but in the information sciences spacetime may also have semantics, or qualitative functional aspects, which arise as the container of active processes. These also need to be included in descriptions of phenomena. Classically, such functional roles are treated separately from space and time, but this may add layers of unwanted complexity, as there are hidden assumptions behind a model of spacetime. For example, one region of space might be a factory, while another could be a river. In biology, cells are regions of spacetime that play different roles in an organism, and organs are larger regions composed of many cells. Regions of spacetime thus take on the role of agents, and a full description of the topology and dynamics of these may be required to model the behaviour of the whole. Semantic spacetime does not distinguish between space and matter; it treats matter as a local property of the spacetime network of agents. Reception and usage Burgess describes Semantic Spacetime as an idea in its infancy, with much work left to do, attracting a small amount of interest mainly from deep specialists. In a number of papers, he has developed applications of the idea, mainly in the design of technology systems. In interviews he states that some documents, pertaining to technology, are proprietary and thus cannot be published or referenced. The Semantic Spacetime model and Promise Theory were referenced as an approach to multi-model database design and Resource Description Framework embedding for ArangoDB. A small number of papers on smart data pipelines and consistent propagation of information have been based on semantic spacetime and led to the startups Aljabr and Dianemo, which develop the respective technologies. It has also been the subject of much interest for understanding 5G telecommunications, especially in China. Applications of the model to neuroscience and machine learning were recognized by an invitation to a special closed event salon in October 2022 by the Kavli Foundation (United States). Virtual Motion and Sociophysics Semantic Spacetime identifies three ways in which motion can be understood for a graph, called Motion of the First, Second, and Third kinds. Burgess writes that "The semantics of ordinary space and time are diverse in interpretation. For space, we think of distance, trajectory, adjacency (topology), neighbourhood, continuity, direction, etc. For time, we have clock time, duration, time of day, partial ordering, etc." Semantic spacetime unifies these in promise-theoretic (and thus graph-theoretic) language. The notion of Semantic Spacetime allows phenomena in cloud computing to be viewed as a form of virtual physics, in which processes and properties (such as data records) can move around from host to host as moving promises. A description of this in terms of Promise Theory and Semantic Spacetime has been developed in a series of papers called Motion of the Third Kind. Burgess has claimed that we should expect to "rediscover physics again in the cloud". Trust is the underlying measure of promise keeping in Promise Theory. Semantic Spacetime has also been used as an agent-based model for sociophysics, in which trust plays a role similar to that of energy in ordinary mechanics. Tutorial series A tutorial series with programming examples was published under the name "Semantic Spacetime and Data Analytics".
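As a rough illustration of the promise-binding primitive described above, in which a directed edge becomes part of the spacetime graph only when the emitting node offers it and the receiving node accepts it, the toy sketch below builds such a graph in Python. The class and field names (Agent, offers, accepts) are invented here for demonstration and are not part of Burgess's published formalism.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """An autonomous node: adjacency must be both offered (+) and accepted (-)."""
    name: str
    offers: set = field(default_factory=set)    # promises to emit a link to these nodes
    accepts: set = field(default_factory=set)   # promises to absorb a link from these nodes

def bound_edges(agents):
    """Yield edges a->b that exist as cooperative promise bindings:
    a must offer the link and b must accept it."""
    index = {a.name: a for a in agents}
    for a in agents:
        for target in a.offers:
            if a.name in index[target].accepts:
                yield (a.name, target)

# Three agents; only the cooperatively bound adjacency A->B becomes part of "space".
A = Agent("A", offers={"B", "C"})
B = Agent("B", accepts={"A"})
C = Agent("C")                       # offers nothing, accepts nothing
print(list(bound_edges([A, B, C])))  # [('A', 'B')]
```

The point of the sketch is that a unilateral offer (A toward C) produces no edge: adjacency, and hence "space", exists only where both endpoints cooperate.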
A video documentary called Bigger, Faster, Smarter was also produced. References Formal methods Theoretical computer science
Semantic spacetime
[ "Mathematics", "Engineering" ]
1,356
[ "Theoretical computer science", "Applied mathematics", "Software engineering", "Formal methods" ]
72,662,932
https://en.wikipedia.org/wiki/Valletta%20Design%20Cluster
The Valletta Design Cluster (VDC) is a culture and creativity centre in Valletta, Malta. Inaugurated in March 2021, it is housed in a former slaughterhouse known as the Old Abattoir, which was originally built around the 17th century. Location The VDC is located in the lower part of Valletta, at the bottom of Old Mint Street and adjacent to the rear of Auberge de Bavière. The area had historically been neglected prior to the renovation project. History The building which now houses the VDC is reportedly one of the oldest surviving buildings in Valletta, and in the 17th century it was in use as an abattoir. It housed residences and soldiers' barracks in the early 18th century, and later on parts of it were used for light industries including cotton spinning and bakeries. The bakery ovens remained operational until the late 1980s and they still exist today. Parts of the building were used as housing until the 1980s, when its residents were evicted as the site was earmarked for demolition and redevelopment into new housing units. The planned interventions were not implemented, and parts of the building were occupied by squatters. The site fell into a state of disrepair and abandonment, remaining in a dilapidated state for decades. Most of the building's roof had collapsed by the 2010s. The conversion of the Old Abattoir into the Valletta Design Cluster was announced in June 2015, and it was one of several infrastructural works commissioned for Valletta's role as European Capital of Culture in 2018. The VDC's aim is to offer spaces for use by cultural and creative operators, and the renovation project also aimed to contribute to the urban regeneration of the lower part of Valletta. Engagement efforts made during the project's early stages involved various stakeholders, including communities living in the area and the design sector. The project architect was Amanda Degiovanni, while the roof garden was designed by the Japanese firm Tetsuo Kondo Architects. On-site renovation works commenced in 2017. While works were ongoing, an open day was held on 15 December 2018 as one of the European Capital of Culture events. The project cost a total of about €10.4 million, including €4.3 million from the European Regional Development Fund. The project had initially been scheduled for completion in 2018, but after several delays the planned opening date was moved to late 2019 and then to the second half of 2020. The VDC was finally inaugurated by Prime Minister Robert Abela on 24 March 2021, while its roof garden was inaugurated a month later. At the 2021 Malta Architecture and Spatial Planning Awards held in March 2022, the Restoration Directorate and Tetsuo Kondo Architects won the Public Open Space Award for designing the VDC's roof garden and green wall. Architecture and layout The Valletta Design Cluster consists of two blocks separated by a long courtyard which was originally part of Old Mint Street. This area is covered by a retractable glass canopy and it has an informal layout which can be adjusted for various activities and events. The building's ground floor includes a makerspace workshop, a coworking space and a food space which includes a teaching kitchen, a canteen and other amenities. The other floors include five meeting spaces, a conference room and fifteen studios which can be used for creative activities by long-term tenants. Apart from the Old Abattoir itself, the Valletta Design Cluster also includes two adjacent townhouses in Bull Street (Triq il-Gendus).
Known as the International Project Labs, these include accommodation and self-catering facilities for 11 people, and are meant to be used by visiting users, researchers or artists. The building's roof is open to the general public as a roof garden. The green space includes indigenous trees and shrubs along with a pond, while the area also has seating, places for group gatherings, two multifunctional spaces and a meeting room. All floors of the VDC including the roof garden are accessible to all through a lift and stairs. Notes References Further reading External links 2021 establishments in Malta Agricultural buildings Bakeries of Malta Barracks in Malta Buildings and structures in Valletta Culture in Valletta Design Gardens in Malta Hackerspaces Limestone buildings in Malta Roof gardens
Valletta Design Cluster
[ "Engineering" ]
862
[ "Design" ]
72,663,291
https://en.wikipedia.org/wiki/84%20Ursae%20Majoris
84 Ursae Majoris, also known as HD 120198, is a star about 300 light years from the Earth, in the constellation Ursa Major. It is a 5th magnitude star, making it faintly visible to the naked eye of an observer far from city lights. It is an Ap star with a 1,100 gauss magnetic field, and an α2 CVn variable star, varying in brightness from magnitude 5.65 to 5.70 over a period of 1.37996 days. 84 Ursae Majoris is located just 70 arcseconds from the star LDS 2914, but that star is believed to be a background star not physically associated with 84 Ursae Majoris. Gerhard Jackisch discovered in 1972 that 84 Ursae Majoris is a variable star with a period greater than one day. It was given the variable star designation CR Ursae Majoris in 1974. In 1994 John Rice and William Wehlau used Doppler imaging to map the distribution of iron and chromium on the surface of 84 Ursae Majoris. They found that the distributions of those elements across the surface were similar, and that the abundances of those elements varied by a factor of 15 across the surface. Even in the regions of the surface with the lowest chromium abundance, chromium was about 600 times more abundant than on the Sun. The size of 84 Ursae Majoris was measured in red light during 2015 and 2016, using the CHARA array, which yielded a limb-darkened angular diameter in milliarcseconds. References Ursa Major 67231 120198 Durchmusterung objects Ursae Majoris, CR Ursae Majoris, 84 Alpha2 Canum Venaticorum variables B-type main-sequence stars
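For scale, the quoted 0.05-magnitude amplitude can be converted to a flux ratio with the standard Pogson relation (a routine conversion, not a figure stated in the source): the star's brightness varies by only about five percent between maximum and minimum.

```latex
\frac{F_{\max}}{F_{\min}} = 10^{0.4\,\Delta m}
                          = 10^{0.4 \times (5.70 - 5.65)}
                          \approx 1.047
```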
84 Ursae Majoris
[ "Astronomy" ]
378
[ "Ursa Major", "Constellations" ]
72,665,270
https://en.wikipedia.org/wiki/Uncertain%20geographic%20context%20problem
The uncertain geographic context problem or UGCoP is a source of statistical bias that can significantly impact the results of spatial analysis when dealing with aggregate data. The UGCoP is very closely related to the modifiable areal unit problem (MAUP), and like the MAUP arises from how the land is divided into areal units. It is caused by the difficulty, or impossibility, of understanding how phenomena under investigation (such as people within a census tract) in different enumeration units interact between enumeration units, and outside of a study area, over time. It is particularly important to consider the UGCoP within the discipline of time geography, where phenomena under investigation can move between spatial enumeration units during the study period. Examples of research that needs to consider the UGCoP include food access and human mobility. The term uncertain geographic context problem, or UGCoP, was coined by Mei-Po Kwan in 2012. The problem is highly related to the ecological fallacy, the edge effect, and the MAUP in that it relates to aggregate units as they apply to individuals. The crux of the problem is that the boundaries used for aggregation are arbitrary and may not represent the actual neighborhood of the individuals within them. While a particular enumeration unit, such as a census tract, contains a person's location, they may cross its boundaries to work, go to school, and shop in completely different areas. Thus, the geographic phenomenon under investigation extends beyond the delineated boundary. Different individuals or groups may have completely different activity spaces, making an enumeration unit that is relevant for one person meaningless to another. For example, a map that aggregates people by school districts will be more meaningful when studying a population of students than the general population. Traditional spatial analysis, by necessity, treats each discrete areal unit as a self-contained neighborhood and does not consider the daily activity of crossing the boundaries. Implications The UGCoP has further implications when considering the area outside of a study area. Tobler's second law of geography states, "the phenomenon external to a geographic area of interest affects what goes on inside." As a study area is often a subset of the planet, data on the edges of the study area will be excluded. If the boundary demarcating the study area is permeable to travel, then the phenomena under investigation within it may extend beyond, and be impacted by, forces excluded from the analysis. This uncertainty contributes to the UGCoP. All maps are wrong, and a cartographer must ensure that their maps' limitations are well documented to avoid misleading the users. With modern technology, there is an emphasis on individual-level data and understanding how individuals interact with their environment. When making maps with this individual-level data, the UGCoP is one source of bias that can impact the results of an analysis. When these results inform policy, they can have real-world ramifications. The UGCoP is particularly important when understanding food access and human mobility. Suggested solutions Geographic information systems, along with technologies that can monitor the position of individuals in real time, are possible methods for addressing the UGCoP. These technologies allow scientists to analyze and visualize the 3D space-time path of people moving through a study area, and better understand their actual activity space.
Web GIS has also been employed to address the UGCoP by allowing researchers to better contextualize subjects' real and perceived activity space. These technologies have helped to address the problem by moving away from aggregate data and introducing a temporal component to the modeling of subject activity. See also Arbia's law of geography Automotive navigation system Collaborative mapping Concepts and Techniques in Modern Geography Counter-mapping Distributed GIS Geographic information systems in geospatial intelligence GIS and aquatic science GIS and public health GIS in archaeology Historical GIS Integrated Geo Systems List of GIS data sources List of GIS software Map database management Modifiable temporal unit problem Neighborhood effect averaging problem Participatory GIS QGIS Technical geography Tobler's first law of geography Tobler's second law of geography Traditional knowledge GIS Virtual globe References Bias Geographic information systems Problems in spatial analysis
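The gap between a residence-based context and an activity-space context can be made concrete with a small geometric sketch. The following uses the shapely library; the tract shapes, point locations and buffer radius are invented for demonstration and are not drawn from any study.

```python
# Assigning context by home tract alone vs. by a crude activity space
# (buffers around home and workplace) gives different exposure answers.
from shapely.geometry import Point, Polygon
from shapely.ops import unary_union

tract_a = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])   # tract containing a park
tract_b = Polygon([(4, 0), (8, 0), (8, 4), (4, 4)])   # tract with no greenspace
park = Polygon([(1, 1), (2, 1), (2, 2), (1, 2)])

home, work = Point(6, 2), Point(1, 2)                 # lives in tract B, works in tract A

# Residence-based context: the person is assigned tract B's (zero) greenspace exposure.
print(tract_b.contains(home))            # True  -> assigned to tract B
print(tract_b.intersects(park))          # False -> "no exposure" under the tract model

# Activity-space context: buffer home and work; the park now falls inside it.
activity_space = unary_union([home.buffer(1.5), work.buffer(1.5)])
print(activity_space.intersects(park))   # True  -> exposure the tract model missed
```

The tract-based assignment reports no greenspace exposure for this person, while even a crude activity space built from the home and work locations captures the park passed through daily; richer GPS trajectories sharpen the contextual unit further.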
Uncertain geographic context problem
[ "Technology" ]
864
[ "Information systems", "Geographic information systems" ]
72,666,509
https://en.wikipedia.org/wiki/Zandkreekdam
The Zandkreekdam is a compartmentalisation dam located approximately 3 kilometres north of the city of Goes in the Netherlands, which connects Zuid-Beveland with Noord-Beveland, and separates the Oosterschelde from the Veerse Meer. A navigation lock in the dam permits shipping connections to Middelburg and Vlissingen, via the Veerse Meer and the Walcheren navigation channel. The Zandkreekdam is 830 metres in length, and was the first compartmentalisation dam to be constructed as part of the Delta Works, having been proposed by Johan van Veen as part of the Drie-eilandenplan (English: Three Islands Plan), which originated in the 1930s. It was the second project constructed under the Delta Works Plan, after the Stormvloedkering Hollandse IJssel which was completed in 1958. The construction of the Zandkreekdam, together with the Veerse Gatdam in 1961, created the freshwater Veerse Meer (Veerse Lake). Poor water quality in the lake led to the decision to build a control lock, known as the Katse Heule, which was completed in 2004 and re-established saltwater intrusion from the Oosterschelde into the Veerse Meer, leading to a significant improvement in water quality. There are two bridges at the Zandkreekdam locks to permit vehicular traffic to pass over the dam at any time. Johan van Veen's Three Islands Plan required that construction of the Zandkreekdam and the Veerse Gatdam should be undertaken as early as possible in the Delta Works programme, to permit Dutch civil engineers and contractors to gain experience that would be necessary for more complicated Delta Works projects such as the Brouwersdam and Oosterscheldekering. Feasibility, planning and design Johan van Veen had been developing his Three Islands Plan since the 1930s, in which he considered land reclamation around the islands of Walcheren, Noord-Beveland and Zuid-Beveland and proposed the closure of two bodies of water: the Veerse Gat and the Zandkreek. In combination with the effects of the previously constructed Sloedam, this would shorten the coastline from 52 kilometres to 2.5 kilometres and open up large areas of land which could then be reclaimed from the sea. Van Veen recognised the need to close both bodies of water, with the Zandkreekdam acting as a secondary dam to make the works on the Veerse Gatdam easier, and therefore being constructed first. Having made extensive studies, van Veen realised that the closure of the Veerse Gat alone would cause unacceptable tidal streams in the Zandkreek. The Delta Plan was of such unprecedented size and complexity that the plan was to start with the easiest parts and gain experience along the way. There were a total of four sea arms to be closed in the Delta region, of which the Veerse Gat - extending east into the Zandkreek - was the smallest. By commencing with the smaller works, the engineers of the Delta Service could thus gain knowledge of construction methods, materials, and equipment - essential exercises for closing the larger Brouwershavense Gat and the Eastern Scheldt. The location pinpointed by van Veen for the Zandkreekdam is at a wantij, a Dutch term for the point at which the tidal currents from both sea arms meet at high tide and the current is minimal. It was also important that construction of the Veerse Gatdam did not lag too far behind the Zandkreekdam, as closing only the Zandkreek would dangerously increase the effects of storm surges in both the Veerse Gat and the Zandkreek.
The body set up to implement the Delta Works scheme, known as the Deltacommissie (English: Delta Commission), adopted the Three Islands Plan and the Zandkreekdam was taken forward. The design was based on the use of caissons 6 metres high, 7.5 metres wide and 11 metres long to form a closure dam, along with the construction of a lock to permit navigation. Construction Construction began in the spring of 1957, with dredging undertaken to form a foundation trench 6.5 metres below Amsterdam Ordnance Datum (Normaal Amsterdams Peil, N.A.P.). Weak soils including soft clay and peat were removed and replaced with approximately 160,000 cubic metres of sand, and excavation depths of up to 14 metres below N.A.P. were realised. Unit caissons were used to construct the dam, with the maximum depth of the closing hole being 5 metres below N.A.P. On 3 May 1960, a pair of caissons was sunk into the final gap and the dam was then completed to a height of 8.25 metres above N.A.P. The navigation lock, 140 metres long and 20 metres wide, was ready for shipping in the spring of 1960. See also Delta Works Flood control in the Netherlands Rijkswaterstaat Johan van Veen References External links Information on the Zandkreekdam from the official Watersnoodmuseum website Delta Works Dams completed in 1960 Dams in Zeeland Noord-Beveland Zuid-Beveland Transport in Goes
Zandkreekdam
[ "Physics" ]
1,097
[ "Physical systems", "Hydraulics", "Delta Works" ]
72,667,533
https://en.wikipedia.org/wiki/Human%20chimera
A human chimera is a human with a subset of cells whose genotype is distinct from that of the body's other cells; that is, a human exhibiting genetic chimerism. In contrast, an individual where each cell contains genetic material from a human and an animal is called a human–animal hybrid, while an organism that contains a mixture of human and non-human cells is a human–animal chimera. Mechanisms Some consider mosaicism to be a form of chimerism, while others consider them to be distinct. Mosaicism involves a mutation of the genetic material in a cell, giving rise to a subset of cells that are different from the rest. Natural chimerism results from the fusion of more than one fertilized zygote in the early stages of prenatal development. It is much rarer than mosaicism. In artificial chimerism, an individual has one cell lineage that was inherited genetically at the time of the formation of the human embryo and another that was introduced through a procedure such as organ transplantation or blood transfusion. Specific types of transplants that can induce this condition include bone marrow transplants and organ transplants, as the recipient's body permanently incorporates the new blood stem cells. Examples Natural chimerism Natural chimerism has been documented in humans in several instances. The Dutch sprinter Foekje Dillema was expelled from the 1950 national team after she refused a mandatory sex test in July 1950; later investigations revealed a Y-chromosome in her body cells, and the analysis showed that she was probably a 46,XX/46,XY mosaic female. In 1953, a human chimera was reported in the British Medical Journal: a woman was found to have blood containing two different blood types, apparently because cells from her twin brother were living in her body. A 1996 study found that such blood group chimerism is not rare. In 2002, an article in the New England Journal of Medicine described a woman, later identified as Karen Keegan, in whom tetragametic chimerism was unexpectedly identified after she underwent preparations for a kidney transplant. Those preparations required the patient and her immediate family to undergo histocompatibility testing, the result of which suggested that she was not the biological mother of two of her three children. In 2002, Lydia Fairchild was denied public assistance in Washington state when DNA evidence appeared to show that she was not the mother of her children. A lawyer for the prosecution heard of the case of Karen Keegan in New England and suggested the possibility to the defense, who were able to show that Fairchild, too, was a chimera with two sets of DNA, and that one of those sets could have produced the children. In 2009, singer Taylor Muhl's large torso birthmark was diagnosed as resulting from chimerism. Non-intentional chimerism related to treatments Several cases of chimera phenomena have been reported in bone marrow recipients. In 2019, the blood and seminal fluid of a man in Reno, Nevada (who had undergone a vasectomy), exhibited only the genetic content of his bone marrow donor; swabs from his lips, cheek and tongue showed mixed DNA content. The DNA content of semen from an assault case in 2004 matched that of a man who had been in prison at the time of the assault, but who had been a bone marrow donor for his brother, who was later determined to have committed the crime. In 2008, a man was killed in a traffic accident that occurred in Seoul, South Korea.
A DNA analysis to identify him revealed that his blood, along with some of his organs, appeared to be genetically female. It was later determined that he had received a bone marrow transplant from his daughter. Another instance of treatment-related human chimerism was published in 1998, in which a male human, conceived by in-vitro fertilization, had some partially developed female organs due to chimerism. Human-animal chimeras Human-animal chimeras include humans having undergone non-human to human xenotransplantation, which is the transplantation of living cells, tissues or organs from one species to another. Patient-derived xenografts are created by xenotransplantation of human tumor cells into immunocompromised mice, a research technique frequently used in pre-clinical oncology research. The first stable human-animal chimeras were created in 2003 by scientists at Shanghai Second Medical University, who fused human cells with rabbit eggs. In 2017, a human-pig chimera was reported to have been created; the embryo consisted mostly of pig cells, with about 0.001% human cells. Scientists stated that they hope to use this technology to address the shortage of donor organs. In 2021, a human-monkey chimera was created as a joint project between the Salk Institute in the US and Kunming University in China and published in the journal Cell. This involved injecting human stem cells into monkey embryos. The embryos were only allowed to grow for a few days, but the study demonstrated that some of these embryos still had surviving human stem cells at the end of the experiments. Because humans are more closely related to monkeys than to other animals, there is a greater chance of the chimeric embryos surviving for longer periods so that organs can develop. The project has opened up possibilities for organ transplantation as well as ethical concerns, particularly concerning human brain development in primates. Chimera identification Non-artificial chimerism has traditionally been considered rare due to the low number of reported cases in the medical literature. However, this may be because affected individuals are often not aware of the condition to begin with. There are usually no signs or symptoms of chimerism other than a few physical ones such as hyper-pigmentation, hypo-pigmentation, Blaschko's lines, body asymmetry or heterochromia iridum (possessing two different colored eyes). However, these signs do not necessarily mean an individual is a chimera and should only be seen as possible symptoms. Forensic investigation or curiosity over an unexpected maternity/paternity DNA test result usually leads to the accidental discovery of this condition. A DNA test, which usually consists of a swift cheek swab or a blood test, can reveal the previously unknown second genome and thereby identify the individual as a chimera. Chimerism and intersex The concept of a "human hermaphrodite" resulting from chimerism is largely a misconception. Most intersex individuals are not chimeras, and most human chimeras are not observed to have intersex traits.
Theoretically, if a gynandromorphic human chimera were to have fully functioning male and female gonad tissue, such an individual could self-fertilize; this hypothesis is supported by the fact that hermaphroditic animal species commonly reproduce in this way, and it has been observed in a rabbit. However, no such case of functional self-fertilization has ever been documented in humans, and it is non-existent or extremely rare in mammals, including humans. While humans are known to have sex characteristics that diverge from typical males or typical females, these individuals fall under the social umbrella of intersex conditions and traits, and some consider the term "hermaphrodite" to be a slur when applied to them. Legislation The Human Chimera Prohibition Act On 11 July 2005, a bill known as The Human Chimera Prohibition Act was introduced into the United States Congress by Senator Samuel Brownback; however, it died in Congress sometime in the next year. The bill was introduced based on findings that science had progressed to the point where human and nonhuman species could be merged to create new forms of life. Because of this, ethical issues might arise as the line blurred between humans and other animals, and, according to the bill, this blurring of lines would show disrespect for human dignity. The final claim brought up in The Human Chimera Prohibition Act was that there was an increasing number of zoonotic diseases, and that the creation of human-animal chimeras might allow these diseases to reach humans. On 22 August 2016, another bill, The Human-Animal Chimera Prohibition Act of 2016, was introduced to the United States House of Representatives by Christopher H. Smith. It identified a human-animal chimera as: a human embryo into which a nonhuman cell or cells (or the component parts thereof) had been introduced to render the embryo's membership in the species Homo sapiens uncertain; a chimera human/animal embryo produced by fertilizing a human egg with nonhuman sperm; a chimera human/animal embryo produced by fertilizing a nonhuman egg with human sperm; an embryo produced by introducing a nonhuman nucleus into a human egg; an embryo produced by introducing a human nucleus into a nonhuman egg; an embryo containing at least haploid sets of chromosomes from both a human and a nonhuman life form; a nonhuman life form engineered such that human gametes developed within the body of a nonhuman life form; or a nonhuman life form engineered such that it contained a human brain or a brain derived wholly or predominantly from human neural tissues. The bill would have prohibited attempts to create a human-animal chimera, the transfer or attempt to transfer a human embryo into a nonhuman womb, the transfer or attempt to transfer a nonhuman embryo into a human womb, and the transport or receipt of an animal chimera for any purpose. Proposed penalties for violations of this bill included fines and/or imprisonment of up to 10 years. The bill was referred to the Subcommittee on Crime, Terrorism, Homeland Security, and Investigations on October 11, 2016, but died there. Patenting In the U.S., efforts to create a chimeric entity appeared to be legal when the topic first arose. Developmental biologist Stuart Newman, a professor at New York Medical College in Valhalla, N.Y., applied for a patent on a human-animal chimera in 1997 as a challenge to the U.S. Patent and Trademark Office and the U.S.
Congress, motivated by his moral and scientific opposition to the notion that living things can be patented at all. Prior legal precedent had established that genetically engineered entities, in general, could be patented, even if they were based on beings occurring in nature. After a seven-year process, Newman's patent finally received a flat rejection. The legal process had created a paper trail of arguments, giving Newman what he claimed was a victory. The Washington Post ran an article on the controversy that stated that it had raised "profound questions about the differences—and similarities—between humans and other animals, and the limits of treating animals as property." References Reproduction Intersex healthcare Genetic anomalies Twin
Human chimera
[ "Biology" ]
2,299
[ "Biological interactions", "Behavior", "Chimerism", "Reproduction" ]
72,667,649
https://en.wikipedia.org/wiki/Spatial%20join
A spatial join is an operation in a geographic information system (GIS) or spatial database that combines the attribute tables of two spatial layers based on a desired spatial relation between their geometries. It is similar to the table join operation in relational databases in merging two tables, but each pair of rows is correlated based on some form of matching location rather than a common key value. It is also similar to vector overlay operations common in GIS software such as Intersect and Union in merging two spatial datasets, but the output does not contain a composite geometry, only merged attributes. Spatial joins are used in a variety of spatial analysis and management applications, including allocating individuals to districts and statistical aggregation. Spatial join is found in most, if not all, GIS and spatial database software, although this term is not always used, and sometimes it must be derived indirectly by the combination of several tools. Spatial relation predicates Fundamental to the spatial join operation is the formulation of a spatial relationship between two geometric primitives as a logical predicate; that is, a criterion that can be evaluated as true or false. For example, "A is less than 5km from B" would be true if the distance between points A and B is 3km, and false if the distance is 10km. These relation predicates can be of two types: A topological relation is a qualitative relationship between two shapes that does not depend on a measurable space (that is, coordinates). Common examples of such predicates include "A is completely inside B," "A overlaps B," "A is adjacent to B" (i.e., sharing a boundary but no interior), and "A is disjoint from B" (not touching at all). These are commonly specified according to some form of the 9-Intersection Model, which is incorporated into the international Simple Feature Access specification (ISO 19125-2). A metric relation is a quantitative (measurable) relationship between two shapes in a coordinate space, most commonly a distance or direction. Common examples include "A is due north of B" or "A is less than 5 km from B." Not all software implementations support metric relations. Note that some relations are commutative (e.g., A overlaps B if and only if B overlaps A) while others are not (e.g., A is within B does not mean B is within A). The geometric primitives involved in these relations may be of any dimension (points, lines, or regions), but some relations may only have meaning with certain dimensions. For example, "A is within B" has a clear meaning if A is a point and B is a region, but is meaningless if both A and B are points. Other relations may be vague; for example, the distance between two regions or two lines may be interpreted as the minimal distance between their closest boundaries, or a mean distance between their centroids. Operation As in a relational table join as defined in relational algebra, two input layers or tables are provided (hereafter X and Y), and the output is a table containing all of the columns of each of the inputs (or some subset thereof if selected by the user). The rows of the new table are a subset of the cross join, or Cartesian product, of the two tables: all possible pairs of rows {X1-Y1, X1-Y2, X1-Y3, X2-Y1, X2-Y2, X2-Y3, X3-Y1, X3-Y2, X3-Y3, ...}.
Rather than include all possible combinations, each pair is evaluated according to the given spatial predicate; those for which the predicate is true are considered "matching" and are retained, while those for which the predicate is false are discarded. For example, consider a students table (one point per student residence) and a schools table (one polygon per school district). When the spatial join is executed, the direction of attachment must be specified, for two reasons: 1) the given spatial predicate may not be commutative, and 2) there is often a many-to-one relationship between the rows (e.g., many students are inside each school district). In this example, a common goal would be to join the schools table to the students table (the target table), with the relation predicate being "student.residence within school.district." Assuming that the districts do not overlap, each student point will be in no more than one school district, so the output would have the same rows as the students table, with the corresponding school attributes attached. The reverse operation, in this case attaching the student information to the schools table, is not as simple because many rows must be joined to one row. Some GIS software does not allow this operation, but most implementations allow for an aggregate join, in which aggregate summaries of the matching rows can be included, such as arrays, counts, sums, or means; for example, the result table could list each school district with a count of the students inside it. Another option when there are multiple matches is to use some criterion to select one of the rows from the matching set, usually a spatial optimization criterion. For example, one could join the school building points (not the districts) to the student resident points by selecting the school that is nearest to each student. Not all software implements this option directly, although in some cases it can be derived through a combination of tools. External links Spatial Join tool in ArcGIS Pro Join attributes by location tool in QGIS Join attributes by nearest tool in QGIS Spatial Join in Manifold GIS Spatial Joins in PostGIS References GIS software Geographic information systems
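A minimal sketch of this students-and-schools join using GeoPandas follows; the layer contents and column names are invented for illustration, and the predicate keyword assumes a recent GeoPandas release (older versions used the op keyword for the same argument).

```python
import geopandas as gpd
from shapely.geometry import Point, Polygon

# Two toy layers: school districts (polygons) and student residences (points).
schools = gpd.GeoDataFrame(
    {"school": ["Lincoln High", "Washington High"]},
    geometry=[Polygon([(0, 0), (10, 0), (10, 5), (0, 5)]),
              Polygon([(0, 5), (10, 5), (10, 10), (0, 10)])],
)
students = gpd.GeoDataFrame(
    {"student": ["Ann", "Bob", "Cam"]},
    geometry=[Point(2, 2), Point(8, 7), Point(5, 1)],
)

# Direction of attachment: students is the target table; the spatial
# predicate is "student residence within school district".
joined = gpd.sjoin(students, schools, how="left", predicate="within")
print(joined[["student", "school"]])

# The reverse, aggregate join: count the matching students per district.
counts = gpd.sjoin(schools, students, predicate="contains").groupby("school").size()
print(counts)
```

The first join attaches each district's attributes to the matching student rows; the second counts matching students per district, mirroring the two join directions discussed above.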
Spatial join
[ "Technology" ]
1,169
[ "Information systems", "Geographic information systems" ]
72,668,549
https://en.wikipedia.org/wiki/Taiga%20of%20North%20America
The Taiga of North America is a Level I ecoregion of North America designated by the Commission for Environmental Cooperation (CEC) in its North American Environmental Atlas. The taiga ecoregion includes much of interior Alaska as well as the forested area of the Yukon, and extends from the Bering Sea on the west to the Richardson Mountains on the east, with the Brooks Range on the north and the Alaska Range on the south. It is a region with a vast mosaic of habitats and a fragile yet extensive patchwork of ecological characteristics. All aspects of the region, such as soils, plant species, hydrology, and climate, interact and are affected by climate change, newly exploited natural resources, and other environmental threats such as deforestation. These threats alter the biotic and abiotic components of the region, leading to further degradation and endangering various species. Flora, fauna, and soil Soils and plant species The main type of soil in the taiga is Spodosol. These soils contain a spodic horizon, a sandy layer of soil that has high accumulations of iron and aluminum oxides, which lies underneath a leached A horizon. The color contrast between the spodic horizon and the overlying horizon is very easy to identify. The color change is the result of the migration of iron and aluminum oxides from the top horizon to the lower horizon of the soil, driven by small but consistent amounts of rainfall. The decomposition of organic matter is very slow in the taiga because of the cold climate and low moisture. With the slow decomposition of organic matter, nutrient cycling is very slow and the nutrient level of the soil is also very low. The soils in the taiga are quite acidic as well. A relatively small amount of rainfall coupled with the slow decomposition of organic material allows the acidic plant debris to sit and saturate the top horizons of the soil profile. As a result of the infertile soil, only a few plant species can really thrive in the taiga. The common plant species in the taiga are coniferous trees. Not only do conifer trees thrive in acidic soils, they actually make the soil more acidic. Acidic leaf litter (or needles) from conifers falls to the forest floor and precipitation leaches the acids down into the soil. Other species that can tolerate the acidic soils of the taiga are lichens and mosses, yellow nutsedge, and water horsetail. The depth to bedrock has an effect on the plants that grow well in the taiga as well. A shallow depth to bedrock forces the plants to have shallow roots, limiting overall stability and water uptake. Keystone species Beaver, Canadian lynx, bobcat, wolverine, and snowshoe hare are all keystone species in the taiga area. These species are considered keystone because they have adapted to the cold climate of the area and are able to survive year-round, changing fur color and growing extra fur with the seasons. They have also adapted to rely on each other: all of the predators depend on the snowshoe hare at some point during the year, and all of the species depend on forests in the area for shelter. Endangered species The taiga is inhabited by many species, some of which are endangered, including the Canadian lynx, gray wolf, and grizzly bear. The Canadian lynx is one well-known animal to inhabit the North American taiga region and is listed as threatened in the U.S. The mother lynx will have a litter of about 4 kittens in the spring.
Following the birth, the female is the sole caretaker, nursing them for about 5 months and teaching them to hunt. They will stay with her until the next breeding season. According to the USDA Forest Service, protection for the lynx has increased since 2000, the year it became protected under the Endangered Species Act. Since much of the lynx's habitat is land managed by the agency, efforts to maintain and increase habitat for the Canadian lynx using forest management plans are underway. The taiga region is also interspersed with various plant species. The endangered or threatened species include Labrador tea, lady's slipper orchid, helleborine orchid, longleaf pine, the lingonberry plant, Newfoundland pine marten, Methuselah's beard, lodgepole pine, and Scots pine. The longleaf pine is a long-lived tree species that can reach more than 250 years in age. To begin the tree's life, a seed falls from the parent in October to late November, awaiting water to begin germination a few weeks later. Those individuals that survive enter what is known as the grass stage, during which the roots are established and the bud of the tree is protected from fire. Years later, the longleaf gains height, and its diameter increases with time. Around 30 years of age the trees begin to produce cones with fertile seeds, and they reach their full size at maturity. One recent study discusses the effects of logging in the 1950s on pine species. Since then, conservation efforts have increased the number of pine (and other) tree species. The Nature Conservancy is prioritizing its protection efforts to rebuild longleaf pine forests through land purchases, conservation easements, and management of land sites. Restoration is also a large part of efforts to ensure the longleaf pine remains extant. By planting seedlings, controlling competitive vegetation, and controlling burning methods, scientists and volunteers are working to increase the number of longleaf pine. Hydrology Watersheds characterize much of the taiga ecoregion as interconnecting rivers, streams, lakes, and coastlines. Due to the cool climate, low evaporation keeps moisture levels high and enables water to have a strong influence on ecosystems. The vast majority of water in the taiga is freshwater, occupying lakes and rivers. Many watersheds are dominated by large rivers that discharge huge amounts of freshwater into the ocean, such as the Lena River in central Siberia. This exportation of freshwater helps control the thermohaline circulation and the global climate. Flow rates of taiga rivers are variable and "flashy" due to the presence of permafrost, which keeps water from percolating deep into the soil. Due to global warming, flow rates have increased as more of the permafrost melts every year. In addition to "flashy" flow levels, the permafrost in the taiga allows dissolved inorganic nitrogen and organic carbon levels in the water to be higher, while calcium, magnesium, sulfate, and hydrogen bicarbonate levels are shown to be much lower. As a dominant characteristic in the soil, permafrost also influences the degree to which water percolates into the soil. Where there is year-long permafrost, the water table is located much deeper in the soil and is less available to organisms, while discontinuous permafrost provides much shallower access. Lakes that cover the taiga are characteristically formed by receding glaciers and therefore have many unique features.
The vast majority of lakes and ponds in the taiga ecoregion are oligotrophic and have much higher levels of allochthonous than autochthonous matter. This is due to glacier formation and has implications for how trophic levels interact with limiting nutrients. These oligotrophic lakes show organic nitrogen and carbon as more limiting nutrients for trophic growth than phosphorus. This contrasts sharply with mesotrophic or eutrophic lakes from similar climates. Climate The climate of the taiga is characterized by its average temperatures, abiotic factors such as precipitation, and circulatory patterns. According to a study in Global Change Biology, average yearly temperatures across the Alaskan and Canadian taiga ranged from −26.6 °C to 4.8 °C, indicating the extremely cold weather the taiga has for the majority of the year. As for precipitation, the majority of it is snow, but rain is also an important factor. According to The International Journal of Climatology, precipitation in the form of rain ranged from a 40 mm average in August to a 15 mm average in April over a multi-year study. Rain is not the only kind of precipitation that affects the taiga; the main form of precipitation is usually snow. According to CEC Ecological Regions of North America, snow and freshwater ice can occupy the taiga for half to three-quarters of the year. A CEC Ecological Regions of North America document states that the lowest average precipitation is on the western side of the taiga, where it can be as little as 200 mm, while on the east coast it can exceed 1,000 mm. As for circulatory patterns, temperature increases have led to a seasonal shift. The Global Change Biology study also noted that, with the change in temperature over time and overall climate change, the growing season has lengthened; its findings indicate that the growing season has lengthened by 2.66 days per decade. This change in the growing season as a result of global warming is having a pronounced effect on the taiga. Environmental threats Climate change has played its role in threatening the taiga ecoregion. Equally harmful are human effects such as deforestation; however, many associations and regulations are working to protect the taiga and reverse the damage. Climate change is resulting in rising temperatures and decreases in moisture, which causes parasites and other insects to be more active, thus causing tree stress and death. Thawing permafrost has led to many forests experiencing less stability, and they become "drunken forests" (the decrease in soil stability causes the trees to lean or fall over). Increased tree death then leads to a carbon dioxide outflux, further propagating global warming. It is essential for climate change to be combated with global action, which is what the Kyoto Protocol of 1997 was created to do. Other measures to protect the taiga would be to prohibit unsustainable deforestation, switch to renewable energy, and protect old-growth forests (they sequester the most carbon dioxide). The taiga also suffers from more direct human effects such as logging and mining sites. Logging has been a very profitable business in the region; however, fragmentation of forests leads to loss of habitats, relocation of keystone species, increases in erosion, increases in the magnitude and frequency of flooding, and altered soil composition. Regions in which permafrost has thawed and trees have fallen take centuries to recover.
Canadian and Russian governments enacted a Protection Belt, which covers 21.1 million ha, and initiatives like the Far East Association for the use of non-timber forest products give economic significance to the forests while avoiding logging. In addition to logging, studies have measured over 99,300 tonnes of airborne pollutants from just one metal-extracting plant over a 50-year span. These pollutants are 90% sulfur dioxide, which is a precursor to acid rain. Other emissions include nitrogen oxides, sulfurous anhydrides, and inorganic dust. Forests in the vicinity of these sites can provide little to no biological services once affected, and there have been few protection measures to regulate mining plants. Effects of climate change Over the next 100 years, global annual mean temperatures are expected to rise by 1.4−5.8 °C, but changes in the high latitudes where the boreal biome exists will be much more extreme (perhaps as much as a 10 °C rise). The warming observed at high latitudes over the past 50 years exceeds the global average by as much as a factor of 5 (2–3 °C in Alaska versus the 0.53 °C global mean). The effects of increased temperature on boreal forest growth have varied, often depending on tree species, site type, and region, as well as whether or not the warming is accompanied by increases or decreases in precipitation. However, studies of tree rings from all parts of the boreal zone have indicated an inverse growth response to temperature, likely as a result of direct temperature and drought stress. As global warming increases, negative effects on growth are likely to become more widespread, as ecosystems and species will be unable to adapt to increasingly extreme environmental conditions. Perhaps the most significant effect of climate change on the boreal region is the increase in the severity of disturbance regimes, particularly fire and insect outbreaks. Fire is the dominant type of disturbance in boreal North America, but the past 30-plus years have seen a gradual increase in fire frequency and severity as a result of warmer and drier conditions. From the 1960s to the 1990s, the annual area burned increased from an average of 1.4 to 3.1 million hectares per year. Insect outbreaks also represent an increasingly significant threat. Historically, temperatures have been low enough in the wintertime to control insect populations, but under global warming, many insects are surviving and reproducing during the winter months, causing severe damage to forests across the North American boreal zone. The main culprits are the mountain pine beetle in the western provinces of British Columbia and Alberta, and the spruce bark beetle in Alaska. Natural resources The taiga (boreal forest) holds vast natural resources that are being exploited by humans. Human activities have a huge effect on the taiga ecoregions, mainly through extensive logging, natural gas extraction, and mine fracking. This results in the loss of habitat and increases the rate of deforestation. It is important to use natural resources, but it is key to use them sustainably and not over-exploit them. In recent years, rules and regulations have been set in place to conserve the forests and reduce the number of trees that are cut. There has been an increase in oil extraction and mining throughout the United States and Canada. Exploitation of tar sands oil reserves has increased mining; this is a large operation that started in Alberta, Canada.
Oil extraction has a direct effect on the taiga forests because the most valuable and abundant oil resources come from taiga forests. Tar sands development has affected over 75% of the habitat in the Alberta taiga forest due to the clearing of the forests and the oil ponds that result from extraction. These tar sands also create toxic oil ponds that affect wildlife and surrounding vegetation. Oil extraction also affects the forest soil, which harms tree and plant growth. Today, the world population has an increasingly high ecological footprint, and a large part of that has to do with the population's carbon footprint. As a result, oil extraction has increased and spread across the U.S. and into other countries. This is detrimental to natural ecosystems. The taiga is the largest such region and is seeing major consequences of oil and natural gas extraction. Extraction is also contributing to rapidly rising temperatures, which affect wildlife and forests. However, even though human activities are responsible for the exploitation of these natural resources, humans also have the tools to fix this issue. It is crucial that humans reduce the consumption rate of these natural resources to improve environmental conditions. Subregions Alaska Boreal Interior Interior Bottomlands (ecoregion) Interior Forested Lowlands and Uplands (ecoregion) Yukon Flats (ecoregion) Taiga Cordillera Mackenzie and Selwyn Mountains (ecoregion) Ogilvie Mountains (ecoregion) Peel River and Nahanni Plateaus (ecoregion) Taiga Plain Great Bear Plains (ecoregion) Hay and Slave River Lowlands (ecoregion) Taiga Shield Coppermine River and Tazin Lake Uplands (ecoregion) Kazan River and Selwyn Lake Uplands (ecoregion) La Grande Hills and New Quebec Central Plateau (ecoregion) Smallwood Uplands (ecoregion) Ungava Bay Basin and George Plateau (ecoregion) References "Beavers - A Keystone Species in North America." Beavers - A Keystone Species in North America. N.p., n.d. Web. 24 February 2013. "Snowshoe Rabbit." Snowshoe Rabbit. Missouri Botanical Garden, 2006. Web. 24 February 2013. "Species Profile for Canada Lynx (Lynx Canadensis)." Species Profile for Canada Lynx (Lynx Canadensis). N.p., n.d. Web. 24 February 2013. "Spodosol (soil Type)." Encyclopædia Britannica Online. Encyclopædia Britannica, n.d. Web. 24 February 2013. A, Justin. "Bobcat - Felis Rufus." Bobcat - Felis Rufus. N.p., 2001. Web. 24 February 2013. Alaska Peninsula Montane Taiga (2013). R. Hagenstein, T. Ricketts, World Wildlife Fund. Retrieved 12 March 2013. http://worldwildlife.org/ecoregions/na0601 Commission of Environmental Corporation. (1997). Ecological Regions of North America: Towards a Common Perspective. Commission of Environmental Corporation Secretariat. Retrieved from ftp://ftp.epa.gov/wed/ecoregions/cec_na/CEC_NAeco.pdf Day, T., & Garratt, R. (2006). Threats to the taiga. Human Impacts on the Tundra-Taiga Zone Dynamics: The Case of the Russian Lesotundra (pp. 144–163). New York: Chelsea House. Dillon, B. (2000). Northern Lynx. Taiga Animals. Retrieved from https://web.archive.org/web/20130419103809/http://www.blueplanetbiomes.org/taiga_animal_page.htm. Ferguson, C., Nelson, E., & Sherman, G. (2008). Turning up the heat: Global warming and the degradation of Canada's boreal forest. Greenpeace. Glick, Daniel. Tar Sands Trouble (Dec 2011/Jan 2012). National Wildlife, World Edition, vol. 50, issue 1, pp. 26–29. Hagenstein, R., Ricketts, T., Sims, M., Kavanagh, K., & Mann, G. (2012).
Ecoregions of North America Ecosystems North America-related lists Taiga and boreal forests
Taiga of North America
[ "Biology" ]
4,504
[ "Symbiosis", "Ecosystems" ]
72,669,698
https://en.wikipedia.org/wiki/Cauchy%20wavelet
In mathematics, Cauchy wavelets are a family of continuous wavelets, used in the continuous wavelet transform. Definition The Cauchy wavelet of order $p > 0$ is the function $\psi_p$ whose Fourier transform is $\hat{\psi}_p(\xi) = \xi^p e^{-\xi}$ for $\xi > 0$ and $\hat{\psi}_p(\xi) = 0$ for $\xi \le 0$; equivalently, $\psi_p(t) = \frac{\Gamma(p+1)}{2\pi}(1 - it)^{-(p+1)}$. Sometimes it is defined more loosely as any function whose Fourier transform vanishes for almost every $\xi \le 0$ and is nonzero for all $\xi > 0$, and other normalizations were used in earlier research on the Cauchy wavelet. With the definition above, one can observe that the maximum of the Fourier transform of the Cauchy wavelet of order $p$ occurs at $\xi = p$ and that the Fourier transform is positive only on $(0, \infty)$. This means that: (1) when $p$ is low, convolution with the Cauchy wavelet acts as a low-pass filter, and when $p$ is high it acts as a high-pass filter, since the wavelet transform equals convolution with the mother wavelet, and by the convolution theorem this convolution equals the multiplication of the Fourier transform of the mother wavelet with that of the function; and (2) the design of the Cauchy wavelet transform is well suited to the analysis of analytic signals. The analytic signal is in bijection with the real signal, and it contains only positive frequencies (the real signal has conjugated frequency content between positive and negative frequencies), i.e. $\hat{f}(-\xi) = \overline{\hat{f}(\xi)}$ where $f$ is a real signal ($f(t) \in \mathbb{R}$ for all $t$). The bijection between the analytic signal and the real signal is $f_a = f + iHf$, where $f_a$ is the analytic signal corresponding to the real signal $f$, and $Hf$ is the Hilbert transform of $f$. Unicity of the reconstruction Phase retrieval problem A phase retrieval problem consists in reconstructing an unknown complex function from a set of phaseless linear measurements. More precisely, let $V$ be a vector space whose vectors are complex functions and let $\{L_i\}_{i \in I}$ be a set of linear forms from $V$ to $\mathbb{C}$. We are given the set of all $|L_i(f)|$, $i \in I$, for some unknown $f \in V$, and we want to determine $f$. This problem can be studied under three different viewpoints: (1) Is $f$ uniquely determined by $(|L_i(f)|)_{i \in I}$ (up to a global phase)? (2) If the answer to the previous question is positive, is the inverse application "stable"? For example, is it continuous? Uniformly Lipschitz? (3) In practice, is there an efficient algorithm which recovers $f$ from $(|L_i(f)|)_{i \in I}$? The most well-known example of a phase retrieval problem is the case where the $L_i$ represent the Fourier coefficients: for example, $L_n(f) = c_n(f) = \int_0^1 f(t)\, e^{-2\pi i n t}\, dt$ for $n \in \mathbb{Z}$, where $f$ is a complex-valued function on $[0, 1]$. Then $f$ can be reconstructed from the $c_n(f)$ as $f(t) = \sum_{n \in \mathbb{Z}} c_n(f)\, e^{2\pi i n t}$, and in fact we have Parseval's identity $\|f\|^2 = \sum_{n \in \mathbb{Z}} |c_n(f)|^2$, where $\|\cdot\|$ is the norm defined in $L^2([0, 1])$. Hence, in this example, the index set is the integers $\mathbb{Z}$, the vector space is $L^2([0, 1])$, and the linear forms are the Fourier coefficients. Furthermore, the absolute values of the Fourier coefficients can only determine the norm of $f$ defined in $L^2([0, 1])$. Unicity theorem of the reconstruction Firstly, we define the Cauchy wavelet transform at scale $a > 0$ as convolution with the dilated wavelet: $W_a f(b) = (f * \psi_{p,a})(b)$, where $\psi_{p,a}(t) = \frac{1}{a}\psi_p\!\left(\frac{t}{a}\right)$. Then, the theorem is as follows. Theorem. For a fixed $p > 0$, suppose there exist two different positive numbers $a_1 \ne a_2$, with the Cauchy wavelet transform defined as above. Then, if two real-valued functions $f, g$ satisfy $|W_{a_1} f| = |W_{a_1} g|$ and $|W_{a_2} f| = |W_{a_2} g|$, then there is an $\alpha \in \mathbb{R}$ such that $g_a = e^{i\alpha} f_a$, where $f_a$ and $g_a$ are the analytic signals of $f$ and $g$; that is, $f$ and $g$ agree up to a global phase of their analytic parts. Hence, we get the relation $g = \cos(\alpha) f - \sin(\alpha) Hf$ between the real signals. Back to the phase retrieval problem: in the Cauchy wavelet transform case, the index set $I$ is $\{a_1, a_2\} \times \mathbb{R}$ with $a_1 \ne a_2$, the vector space is $V = L^2(\mathbb{R})$, and the linear forms are defined as $L_{a,b}(f) = W_a f(b)$. Hence, the phaseless measurements determine the two-dimensional subspace spanned by $f$ and $Hf$ in $L^2(\mathbb{R})$. References Mathematical terminology
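Because the Cauchy wavelet is defined most cleanly in the frequency domain, its transform at a single scale reduces to one FFT, a pointwise multiplication, and one inverse FFT. The sketch below is illustrative only: the function name, signature, and normalization are assumptions (sign and scaling conventions vary across the literature), not taken from the article.

```python
import numpy as np

def cauchy_wavelet_transform(f, p, a, dt=1.0):
    """Cauchy wavelet transform of a sampled signal at a single scale `a`.

    Implements (unnormalized) convolution with a wavelet whose Fourier
    transform is hat(psi)_p(xi) = xi^p * exp(-xi) for xi > 0 and 0 otherwise,
    evaluated at the dilated frequencies a * xi.
    """
    n = len(f)
    xi = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)   # angular frequencies of the FFT grid
    axi = np.clip(a * xi, 0.0, None)             # zero out non-positive frequencies
    psi_hat = axi**p * np.exp(-axi)              # vanishes where axi == 0 (for p > 0)
    return np.fft.ifft(np.fft.fft(f) * psi_hat)  # convolution theorem

# The filter peaks at xi = p / a, so small p (at fixed scale) gives a
# low-pass response and large p a high-pass response:
t = np.arange(1024)
signal = np.sin(0.05 * t) + np.sin(1.5 * t)
low = cauchy_wavelet_transform(signal, p=2.0, a=20.0)    # emphasizes the slow component
high = cauchy_wavelet_transform(signal, p=20.0, a=20.0)  # emphasizes the fast component
```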
Cauchy wavelet
[ "Mathematics" ]
752
[ "nan" ]
72,671,161
https://en.wikipedia.org/wiki/Internet%20in%20the%20European%20Union
The internet in the European Union is built on the infrastructure of member states and regulated by EU law for data privacy and a free and open media. Infrastructure WiFi boxes Cables, copper to fibre-optic Electro-magnetic signals Transnational lines Regulation Electronic Communications Code Directive 2018/1972 arts 3-17, 61-84 Access Directive 2002/19/EC arts 3-6 and Annex I Information Society Directive 2015/1535 Annex I Electronic Commerce Directive 2000/31/EC arts 1, 3, 14-15 General Data Protection Regulation 2016/679 arts 4(11), 5-8, 13-17 Net Neutrality Regulation 2015/2120 art 3(3) Roaming Regulation (EU) No 531/2012 arts 7-8 Speed The European Union pledges that all households will have internet speeds of at least 100 Mbps by 2025 and 1000 Mbps by 2030. See also EU law Internet by country Internet
Internet in the European Union
[ "Technology" ]
193
[ "Internet", "Internet by country", "Transport systems", "IT infrastructure" ]
74,076,419
https://en.wikipedia.org/wiki/DMDEE
DMDEE is an acronym for dimorpholinodiethyl ether, but the compound is almost always referred to as DMDEE (pronounced "dumdee") in the polyurethane industry. It is an organic chemical, specifically a nitrogen-oxygen heterocycle with tertiary amine functionality, used as a catalyst mainly to produce polyurethane foam. It has the CAS number 6425-39-4, is TSCA and REACH registered, and is listed on EINECS with the number 229-194-7. The IUPAC name is 4-[2-(2-morpholin-4-ylethoxy)ethyl]morpholine and the chemical formula is C12H24N2O3. Other names Morpholine, 4,4'-(oxydi-2,1-ethanediyl)bis- Bis(2-morpholinoethyl) ether 4,4'-(Oxybis(ethane-2,1-diyl))dimorpholine 2,2-Dimorpholinodiethylether 2,2'-Dimorpholinodiethyl ether 4,4'-(Oxydiethylene)bis(morpholine) 4-[2-(2-morpholin-4-ylethoxy)ethyl]morpholine 2,2'-Dimorpholinyldiethyl ether Use as a polyurethane catalyst DMDEE tends to be used in one-component rather than two-component polyurethane systems. Its use has been investigated in polyurethanes for controlled drug release and also in adhesives for medical applications. Its use as a catalyst, including the associated kinetics and thermodynamics, has been studied and reported on extensively. It is a popular catalyst, along with DABCO. Toxicity The material has been in use for some time, so its toxicity is generally well understood. However, some sources say toxicity data is limited, and work continues to acquire the necessary data and publish it to ensure it is in the public domain. References Tertiary amines Catalysis 4-Morpholinyl compounds Ethers
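The stated molecular formula can be sanity-checked programmatically. The snippet below is a minimal sketch assuming the open-source RDKit library is installed; the SMILES string is written here from the IUPAC name as an illustration and is not taken from the article.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, rdMolDescriptors

# Two morpholine rings joined through a -CH2CH2-O-CH2CH2- ether bridge,
# per the IUPAC name 4-[2-(2-morpholin-4-ylethoxy)ethyl]morpholine.
smiles = "C1COCCN1CCOCCN1CCOCC1"
mol = Chem.MolFromSmiles(smiles)

print(rdMolDescriptors.CalcMolFormula(mol))  # expected: C12H24N2O3
print(round(Descriptors.MolWt(mol), 2))      # average molecular weight, g/mol
```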
DMDEE
[ "Chemistry" ]
458
[ "Catalysis", "Functional groups", "Organic compounds", "Ethers", "Chemical kinetics" ]
74,077,355
https://en.wikipedia.org/wiki/Titan%20submersible%20implosion
On 18 June 2023, Titan, a submersible operated by the American tourism and expeditions company OceanGate, imploded during an expedition to view the wreck of the Titanic in the North Atlantic Ocean off the coast of Newfoundland, Canada. Aboard the submersible were Stockton Rush, the American chief executive officer of OceanGate; Paul-Henri Nargeolet, a French deep-sea explorer and Titanic expert; Hamish Harding, a British businessman; Shahzada Dawood, a Pakistani-British businessman; and Dawood's son, Suleman. Communication between Titan and its mother ship, MV Polar Prince, was lost 1 hour and 33 minutes into the dive. Authorities were alerted when it failed to resurface at the scheduled time later that day. After the submersible had been missing for four days, a remotely operated underwater vehicle (ROV) discovered a debris field containing parts of Titan, about 500 m (1,600 ft) from the bow of the Titanic. The search area was informed by the United States Navy's (USN) sonar detection of an acoustic signature consistent with an implosion around the time communications with the submersible ceased, suggesting the pressure hull had imploded while Titan was descending, resulting in the instantaneous deaths of all five occupants. The search and rescue operation was performed by an international team organized by the United States Coast Guard (USCG), USN, and Canadian Coast Guard. Support was provided by aircraft from the Royal Canadian Air Force and United States Air National Guard, a Royal Canadian Navy ship, as well as several commercial and research vessels and ROVs. Numerous industry experts had stated concerns about the safety of the vessel. OceanGate executives, including Rush, had not sought certification for Titan, arguing that excessive safety protocols and regulations hindered innovation. Background OceanGate OceanGate was a private company, initiated in 2009 by Stockton Rush and Guillermo Söhnlein. From 2010 until the loss of the Titan submersible, OceanGate transported paying customers in leased commercial submersibles off the coast of California, in the Gulf of Mexico, and in the Atlantic Ocean. The company was based in Everett, Washington, US. Rush realized that visiting shipwreck sites was a method of getting media attention. OceanGate had previously conducted voyages to other shipwrecks, including its 2016 dive to the wreck of the Andrea Doria aboard its other submersible, Cyclops 1. (A near disaster on that expedition was recounted in Vanity Fair in 2023.) In 2019, Rush told Smithsonian magazine: "There's only one wreck that everyone knows... If you ask people to name something underwater, it's going to be sharks, whales, Titanic". Titanic The Titanic was a British ocean liner that sank in the North Atlantic Ocean on 15 April 1912, after colliding with an iceberg. More than 1,500 people died, making it the deadliest sinking of a single ship at the time. In 1985, Robert Ballard located the wreck of the Titanic off the coast of Newfoundland. The wreck lies at a depth of about 3,800 m. Since its discovery, it has been a destination for research expeditions and tourism. By 2012, 140 people had visited the wreck site. Submersible Titan Formerly known as Cyclops 2, Titan was a five-person submersible vessel operated by OceanGate Inc. The vessel was constructed from carbon fibre and titanium. The entire pressure vessel consisted of two titanium hemispheres (domes), with matching titanium interface rings bonded to the internal diameter of the carbon-fibre-wound cylinder.
One of the titanium hemispherical end caps could be detached to serve as the hatch and was fitted with an acrylic window. In 2020, Rush said that the hull, originally designed to reach 4,000 m below sea level, had been downgraded to a depth rating of 3,000 m after demonstrating signs of cyclic fatigue. In 2020 and 2021, the hull was repaired or rebuilt. Rush told the Travel Weekly editor-in-chief that the carbon fibre had been sourced at a discount from Boeing because it was too old for use in the company's airplanes; Boeing stated that it has no records of any sale to Rush or to OceanGate. OceanGate had initially not sought certification for Titan, arguing that excessive safety protocols hindered innovation. Lloyd's Register, a ship classification society, refused OceanGate's request to class the vessel in 2019. Titan could move at speeds of up to about 3 knots using four electric thrusters, arrayed two horizontal and two vertical. Its steering controls consisted of a Logitech F710 wireless game controller with modified, longer analogue sticks resembling traditional joysticks. The University of Washington's Applied Physics Laboratory assisted with the control design on the Cyclops 1 using a DualShock 3 video game controller, which was carried over to Titan, substituting the Logitech controller. The use of commercial off-the-shelf game controllers is common for remote-controlled vehicles such as unmanned aerial vehicles or bomb disposal robots, whilst the United States Navy uses Xbox 360 controllers to control periscopes in Virginia-class submarines. OceanGate claimed on its website that Titan was "designed and engineered by OceanGate Inc. in collaboration [with] experts from NASA, Boeing, and the University of Washington" (UW). A one-third-scale model of the Cyclops 2 pressure vessel was built and tested at the Applied Physics Laboratory (APL) at UW; the model withstood pressure testing to a depth equivalent of several thousand metres. After the disappearance of Titan in 2023, these earlier associates disclaimed involvement with the Titan project. UW claimed the APL had no involvement in the "design, engineering, or testing of the Titan submersible". A Boeing spokesperson also claimed Boeing "was not a partner on Titan and did not design or build it". A NASA spokesperson said that NASA's Marshall Space Flight Center had a Space Act Agreement with OceanGate, but "did not conduct testing and manufacturing via its workforce or facilities". The vessel was designed and developed originally in partnership with UW and Boeing, both of which put forth numerous design recommendations and rigorous testing requirements, which Rush ignored, despite prior tests at lower depths resulting in implosions at UW's lab. The partnerships dissolved as Rush refused to work within quality standards. According to OceanGate, the vessel contained monitoring systems to continuously monitor the strength of the hull. The vessel had life support for five people for 96 hours. There is no GPS underwater, so the support ship, which monitored the position of Titan relative to its target, sent text messages to Titan providing distances and directions. According to OceanGate, Titan had several backup systems intended to return the vessel to the surface in case of emergency, including ballasts that could be dropped, a balloon, thrusters, and sandbags held by hooks that dissolved after a certain number of hours in saltwater. Ideally, this would release the sandbags, allowing the vessel to float to the surface.
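For a sense of the loads involved, the hydrostatic pressure at the Titanic wreck's depth can be estimated with the standard relation below. This is a back-of-envelope illustration using a typical seawater density, not a figure from the article:

$$P \approx \rho g h \approx 1025\,\mathrm{kg/m^3} \times 9.81\,\mathrm{m/s^2} \times 3800\,\mathrm{m} \approx 3.8 \times 10^7\,\mathrm{Pa} \approx 380\,\mathrm{atm},$$

i.e., roughly 380 times atmospheric pressure acting on every part of the hull.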
An OceanGate investor explained that if the vessel did not ascend automatically after the elapsed time, those inside could help release the ballast either by tilting the ship back and forth to dislodge it or by using a pneumatic pump to loosen the weights. Dives to wreck of Titanic Dives by Titan to the wreck of the Titanic occurred as part of multi-day excursions organized by OceanGate, which the company referred to as "missions". Five missions occurred in the middle of both 2021 and 2022. Titan imploded during the fifth mission of 2023; it was the first mission of the year in which a dive came close to Titanic, due to poor weather during previous attempts. Passengers would sail to and from the wreckage site aboard a support ship and spend approximately five days in the ocean above the Titanic wreckage site. Two dives were usually attempted during each excursion, though dives were often cancelled or aborted due to weather or technical malfunctions. Each dive typically had a pilot, a guide, and three paying passengers aboard. Once inside the submersible, the hatch would be bolted shut and could only be reopened from the outside. The descent from the surface to the Titanic wreck typically took two hours, with the full dive taking about eight hours. Throughout the journey, the submersible was expected to emit a safety ping every 15 minutes to be monitored by the above-water crew. The vessel and surface crew were also able to communicate via brief text messages. Customers who travelled to the wreck with OceanGate, referred to as "mission specialists" by the company, paid $250,000 each for the eight-day expedition. OceanGate intended to perform multiple dives to the Titanic wreck in 2023, but the dive in which Titan was destroyed was the only one the company had launched that year. Safety Because Titan operated in international waters and did not carry passengers from a port, it was not subject to safety regulations. The vessel was not certified as seaworthy by any regulatory agency or third-party organization. Reporter David Pogue, who completed the expedition in 2022 as part of a CBS News Sunday Morning feature, said that all passengers who enter Titan sign a waiver confirming their knowledge that it is an "experimental" vessel "that has not been approved or certified by any regulatory body, and could result in physical injury, disability, emotional trauma or death". Television producer Mike Reiss, who also completed the expedition, said the waiver "mention[s] death three times on page one". A 2019 article published in Smithsonian magazine referred to Rush as a "daredevil inventor". In the article, Rush is described as having said that the U.S. Passenger Vessel Safety Act of 1993 "needlessly prioritized passenger safety over commercial innovation". In a 2022 interview, Rush told CBS News, "At some point, safety just is pure waste. I mean, if you just want to be safe, don't get out of bed. Don't get in your car. Don't do anything." Rush said in a 2021 interview, "I've broken some rules to make [Titan]. I think I've broken them with logic and good engineering behind me. The carbon fibre and titanium, there's a rule you don't do that. Well, I did." OceanGate claimed that Titan was the only crewed submersible that used an integrated real-time monitoring system (RTM) for safety.
The proprietary system, patented by Rush in 2021, used acoustic sensors and strain gauges at the pressure boundary to analyse the effects of increasing pressure as the watercraft ventured deeper into the ocean and to monitor the hull's integrity in real time. This would supposedly give early warning of problems and allow enough time to abort the descent and return to the surface. Prior concerns In 2018, OceanGate's director of marine operations, David Lochridge, composed a report documenting safety concerns he had about Titan. In court documents, Lochridge said that he had urged the company to have Titan assessed and certified by the American Bureau of Shipping, but OceanGate had refused to do so, instead seeking classification from Lloyd's Register. He also said that the transparent viewport on its forward end, due to its nonstandard and therefore experimental design, was only certified to a depth of 1,300 m, only about a third of the depth required to reach the Titanic wreck. According to Lochridge, RTM would "only show when a component is about to fail – often milliseconds before an implosion" and could not detect existing flaws in the hull before it was too late. Lochridge was also concerned that OceanGate would not perform nondestructive testing on the vessel's hull before undertaking crewed dives, and alleged that he was "repeatedly told that no scan of the hull or Bond Line could be done to check for delaminations, porosity and voids of sufficient adhesion of the glue being used due to the thickness of the hull". The engineer of the viewport also prepared an analysis from an independent expert that concluded the design would fail after only a few 4,000 m dives. OceanGate said that Lochridge, who was not an engineer, had refused to accept safety approvals from OceanGate's engineering team, and that the company's own evaluation of Titan's hull was stronger than any kind of third-party evaluation Lochridge thought necessary. OceanGate sued Lochridge for allegedly breaching his confidentiality contract and making fraudulent statements. Lochridge counter-sued, stating that his employment had been wrongfully terminated because he was a whistleblower who had stated concerns about Titan's ability to operate safely. The two parties settled the case a few months later, before it came to court. He had filed a whistleblower complaint with the Occupational Safety and Health Administration, but withdrew it after the lawsuit was filed. Later in 2018, a group organized by William Kohnen, the chair of the Submarine Group of the Marine Technology Society, drafted a letter to Rush expressing "unanimous concern regarding the development of 'TITAN' and the planned Titanic Expedition", indicating that the "current experimental approach ... could result in negative outcomes (from minor to catastrophic) that would have serious consequences for everyone in the industry". The letter said that OceanGate's marketing of the Titan was misleading because it claimed that the submersible would meet or exceed the safety standards of the classification society DNV, even though the company had no plans to have the craft formally certified by the society. While the letter was never sent officially by the Marine Technology Society, it did lead to a conversation with OceanGate that resulted in some changes, but in the end Rush "agreed to disagree" with the rest of the civilian submarine community.
Kohnen told the New York Times that Rush had telephoned him after reading it to tell him that he believed industry standards were stifling innovation. Another signatory, engineer Bart Kemper, agreed to sign the letter because of OceanGate's decision not to use established engineering standards like the ASME standard for Pressure Vessels for Human Occupancy (PVHO) or design validation. Kemper said the submersible was "experimental, with no oversight". Kohnen and Kemper stated OceanGate's methods were not representative of the industry. Kohnen and Kemper are both members of the ASME Codes and Standards committee for PVHOs, which develops and maintains the engineering safety standards for submarines, commercial diving systems, hyperbaric systems, and related equipment. Kemper is an engineering researcher who has published a number of technical papers on submarine windows, including the need to innovate. In March 2018, one of Boeing's engineers involved in the preliminary designs, Mark Negley, carried out an analysis of the hull and emailed Rush directly stating, "We think you are at high risk of a significant failure at or before you reach 4,000 meters. We do not think you have any safety margin." He included a graph of the strain on the design with a skull and crossbones at a red line of 4,000 meters. Also in March 2018, Rob McCallum, a major deep-sea exploration specialist, emailed Rush to warn him he was potentially risking his clients' safety and advised against the submersible's use for commercial purposes until it had been tested independently and classified: "I implore you to take every care in your testing and sea trials and to be very, very conservative." Rush replied that he was "tired of industry players who try to use a safety argument to stop innovation ... We have heard the baseless cries of 'you are going to kill someone' way too often. I take this as a serious personal insult". McCallum then sent Rush another email in which he said: "I think you are potentially placing yourself and your clients in a dangerous dynamic. In your race to Titanic you are mirroring that famous catch cry: 'She is unsinkable.'" This prompted OceanGate's lawyers to threaten McCallum with legal action. In 2022, the British actor and television presenter Ross Kemp, who had previously participated in deep-sea dives for the television channel Sky History, had planned to mark the 110th anniversary of the sinking of the Titanic by recording a documentary in which he would undertake a dive to the wreck using Titan. Kemp's agent Jonathan Shalit said that the project was cancelled after checks by the production company Atlantic Productions deemed the submersible to be unsafe and not "fit for purpose". Previous incidents In 2021, a new hull was constructed after the previous hull had cracked after 50 submersion dives, only three of which were to 4,000 m. Scale models of the hull had imploded at the UW lab, so a different method of curing the hull was developed, and the new hull passed a full-sized pressure test at a facility in Maryland. Rush refused to construct new domes and other components to replace those from the failed submersible and instructed the engineers to salvage and reuse parts. Anonymous former employees told Wired that damage to the components could have weakened the join with the new hull. They also added lifting rings, which engineers had previously warned against because the submersible could not handle any tension or load.
In 2022, reporter David Pogue was aboard the surface ship when Titan became lost and could not locate the wreck of the Titanic during a dive. Pogue's December 2022 report for CBS News Sunday Morning, which questioned Titan's safety, went viral on social media after the submersible lost contact with its support ship in June 2023. In the report, Pogue commented to Rush that "it seems like this submersible has some elements of MacGyvery jerry-rigged-ness". He said that a $30 Logitech F710 wireless game controller with modified control sticks was used to steer and pitch the submersible and that construction pipes were used as ballast. In another 2022 dive to the wreck, one of Titan's thrusters was accidentally installed backwards and the submersible started spinning in circles when trying to move forward near the sea floor. As documented in the BBC documentary Take Me to Titanic, the issue was bypassed by steering while holding the game controller sideways. According to November 2022 court filings, OceanGate reported that, in a 2022 dive, the submersible suffered from battery problems and, as a result, had to be attached manually to a lifting platform, causing damage to external components. On 15 July 2022 (dive 80), Titan experienced a "loud acoustic event" as it was ascending, which was heard by the passengers aboard and picked up by Titan's real-time monitoring system (RTM). Data from the RTM later revealed that the hull had permanently shifted following this event. Incident Expedition arrangements The voyage was booked in early 2023. Rush offered Jay Bloom, an American businessman, two discounted tickets, intending for Bloom and his son to be on the excursion. Bloom, a billionaire, was offered a price of $150,000 per seat, rather than the full price of $250,000, with Rush claiming that it was "safer than crossing the street", but Bloom declined the offer due to his concerns about its safety. At that time, the excursion was scheduled for May, but unfavourable weather caused it to be delayed until June. 16–17 June preparations On 16 June 2023 at 9:31 a.m. local time (12:01 UTC), the expedition to the Titanic wreck, which the company referred to as "Mission 5", departed from St. John's, Newfoundland, aboard the Canadian-flagged research and expedition ship MV Polar Prince. One of the occupants, Hamish Harding, posted on Facebook: "Due to the worst winter in Newfoundland in 40 years, this mission is likely to be the first and only crewed mission to Titanic in 2023. A weather window has just opened up and we are going to attempt a dive tomorrow." He also indicated that the operation was scheduled to begin about 4:00 a.m. EDT (08:00 UTC). 18 June, dive, disappearance, and implosion The ship arrived in the vicinity of the Titanic wreck site on 18 June at 5:15 a.m. Newfoundland Daylight Time (NDT; UTC−02:30). Around 8:30 a.m., five people boarded Titan, which was mounted on top of a floating platform known as the launch and recovery system (LARS). Subsequently, the forward dome was secured for the expedition, designated by the company as "Dive 88". At 8:55 a.m., the platform was vented, causing it to sink below the surface of the water. At 9:18 a.m., Titan disengaged from the platform and commenced diving. For the first hour and a half of the descent, Titan communicated with Polar Prince via text about every 15 minutes, with a "ping" received every 5–10 seconds. Partway through the descent, the submersible sent "all good here", and the usual "pings" continued on the communications channel.
There were no messages during the descent that indicated trouble. A final text communication, reading "dropped two wts", was sent from Titan at 10:47:27 a.m., late in its descent. The final "ping" (data) from Titan was received six seconds later, at 10:47:33 a.m. NDT (13:17:33 UTC). A U.S. Navy acoustic detection system designed to locate military submarines detected an acoustic signal consistent with an implosion hours after Titan submerged. Shortly after the disaster, James Cameron indicated that it was likely the submersible's early warning system alerted the passengers to an impending delamination of the hull, saying "we understand from inside the community that they had dropped their ascent weights and were coming up, trying to manage an emergency." Bob Ballard, the discoverer of the Titanic wreck, also said that the crew was likely "experiencing difficulties" and was trying to ascend at the time of the implosion. In September 2024, Tym Catterson, an OceanGate contractor who was aboard the Polar Prince at the time of the disaster, testified at the United States Coast Guard's inquiry that there is no indication the crew was aware of any problems before the implosion. The last human-written communication from Titan indicated that the crew had dropped two weights, a portion of the dropweights on board. This was apparently routine, to adjust Titan's buoyancy from negative to neutral as it approached the seabed, and was an indication that the crew was not aware of any emergency situation. The last automatic ping was received by the Polar Prince approximately six seconds later, after which contact was lost. Simulations developed in 2023 suggest the implosion of the vessel took less than one second, likely only tens of milliseconds, faster than the brain can process information; there would not have been time for the victims to experience the collapse of the hull, and they would have died immediately, with no pain, as their bodies were crushed. 18–22 June, search and rescue efforts The submersible was expected to resurface at 4:30 p.m. (19:00 UTC). At 7:10 p.m. (21:40 UTC), the U.S. Coast Guard was notified that the vessel was missing. The Navy reviewed its acoustic data from that time and passed the information about the possible implosion event to the Coast Guard. Titan had as much as 96 hours of breathable air supply for its five passengers when it set out, which would have expired on the morning of 22 June 2023 if the submersible had remained intact. The United States Coast Guard, United States Navy, and Canadian Coast Guard organized the search. Aircraft from the Royal Canadian Air Force and United States Air National Guard, a Royal Canadian Navy ship, and several commercial and research ships and remotely operated underwater vehicles (ROVs) also assisted with the search. The surface was searched, as were the depths by sonar. Crews from the United States Coast Guard launched search missions from the shore of Cape Cod, Massachusetts. Joint Rescue Coordination Centre Halifax reported that a Royal Canadian Air Force Lockheed CP-140 Aurora aircraft and CCGS Kopit Hopson 1752 were participating in the search in response to a request for assistance made by the Maritime Rescue Coordination Center in Boston on 18 June at 9:43 p.m. (00:13 UTC). The search on 19 June involved three C-130 Hercules aircraft (two from the United States and one from Canada), a P-8 Poseidon anti-submarine warfare aircraft from the United States, and sonobuoys.
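The tens-of-milliseconds figure quoted from the 2023 simulations is consistent with a classical order-of-magnitude estimate. Using the Rayleigh collapse time for a cavity of radius $R$ under an external pressure difference $\Delta p$ (the radius, density, and pressure below are assumed, illustrative values, not figures from the article):

$$t \approx 0.915\,R\,\sqrt{\frac{\rho}{\Delta p}} \approx 0.915 \times 0.7\,\mathrm{m} \times \sqrt{\frac{1025\,\mathrm{kg/m^3}}{3.8 \times 10^7\,\mathrm{Pa}}} \approx 3\,\mathrm{ms},$$

far shorter than the roughly 100 ms a human brain needs to register a stimulus, consistent with the statement that the occupants could not have perceived the collapse.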
Search and rescue was hampered by low-visibility weather conditions, which cleared the next day. The U.S. Coast Guard indicated that the search and rescue mission was difficult because of the remote location, weather, darkness, sea conditions, and water temperature. Rear Admiral John Mauger said that they were "deploying all available assets". Many submersibles have acoustic beacons that can be detected underwater by rescuers; Titan did not. The pipe-laying ship Deep Energy, operated by TechnipFMC, arrived on site on 20 June 2023, with two ROVs and other equipment suited to the seabed depths in the area. As of 10:45 a.m. (13:15 UTC), the U.S. Coast Guard had searched several thousand square miles of ocean. The New York Air National Guard's 106th Rescue Wing joined in the search and rescue mission with an HC-130J, with plans for two more to join by the end of the day. According to an internal U.S. government memo, a Canadian CP-140 Aurora's sonar picked up underwater noises while searching for the submersible. The U.S. Coast Guard officially acknowledged the sounds early the next morning, but reported that early investigations had not yielded results. Rear Admiral John Mauger of the U.S. Coast Guard said the source of the noise was unknown and may have come from the many metal objects at the site of the wreck. A Canadian CP-140 Aurora airplane had previously spotted a "white rectangular object" floating on the surface. A ship sent to find and identify the object was diverted to help find the source of the noise. The noises were later described by the U.S. Coast Guard as being apparently unrelated to the missing vessel. CCGS John Cabot arrived on the morning of 21 June, bringing additional sonar capabilities to the search effort. Commercial vessels Skandi Vinland and Atlantic Merlin also arrived that day, as did a US Coast Guard C-130 crew. As of about 3:00 p.m. (17:30 UTC), five air and water vehicles were actively searching for Titan, and another five were expected to arrive in the next 24–48 hours. Search and rescue assets included two ROVs, one CP-140 Aurora aircraft, and the C-130 aircraft. The U.S. Navy's Flyaway Deep Ocean Salvage System (FADOSS), a ship-lift system designed to lift large and heavy objects from the deep sea, arrived in St. John's, though no ships were available to carry the system to the wreck site. Officials estimated it would take about 24 hours to weld the FADOSS system to the deck of a carrier ship before it could set sail to the search and rescue operation. Despite increasing concerns about the depletion of air supplies in Titan, a U.S. Coast Guard spokesperson said at a press conference "This is a search and rescue mission 100%", rather than a wreckage recovery mission. An Odysseus 6K ROV from Pelagic Research Services, travelling aboard the Canadian-flagged offshore tugboat MV Horizon Arctic, reached the sea floor and began its search for the missing submersible. The French RV L'Atalante also deployed its ROV Victor 6000, which can reach depths of as much as 6,000 m and transmit images to the surface. 22 June, discovery of debris At 1:18 p.m. (15:48 UTC) on 22 June, the U.S. Coast Guard's Northeast Sector announced that a debris field had been found near the wreck of the Titanic. The debris, located by Pelagic Research Services' Odysseus 6K ROV five hours into its search, was later confirmed to be part of the submersible. At 4:30 p.m. (19:00 UTC) – at a U.S.
Coast Guard press conference in Boston – the Coast Guard said that the loss of the submersible was due to an implosion of the pressure chamber and that pieces of Titan had been found on the sea floor about 1,600 feet (about 500 metres) northeast of the bow of the Titanic. The identified debris consisted of the tail cone (not part of the pressure vessel) and the forward and aft end bells – both part of the pressure vessel intended to protect the crew from the ocean environment. According to the U.S. Coast Guard, the debris field was concentrated in two areas, with the aft end bell lying separate from the front end bell and the tail cone. Rear Admiral John Mauger of the US Coast Guard said that the debris was consistent with a "catastrophic loss of the pressure chamber". Mauger stated that he did not have an answer as to whether the bodies of those on board would be recovered, but he did say that it was "an incredibly unforgiving environment". Fatalities The implosion killed all five occupants. Recovery operations Pelagic Research Services confirmed on 23 June 2023 that a new mission to the Titan debris field was already underway and that it had taken the Odysseus 6K ROV one hour to reach the site to continue searching and documenting debris. It was further reported that the debris from Titan was too heavy for Pelagic's ROV to lift and that any recovery would need to occur at a later time. On 24 June, Polar Prince returned to St. John's harbour. In their bid to understand what caused Titan's catastrophic loss, investigators boarded the support ship. Another boat was seen in the harbour towing the floating launch platform, which the company referred to as the launch and recovery system (LARS), which Titan had used. On 28 June, Horizon Arctic returned to St. John's harbour with the remains of Titan that were recovered from the debris field. Photographs and videos showed the titanium covers on both ends of Titan intact, with the single viewport missing, along with mangled pieces of the tail cone, electronics, the landing frame and other debris. The debris was to be transported to the U.S. as evidence for the investigation. The Coast Guard confirmed that presumed human remains were found within the debris, and that American medical professionals would conduct an analysis. Pelagic Research Services, which was operating the Odysseus 6K ROV from Horizon Arctic, confirmed that its team had completed their mission. The initial human remains underwent DNA testing, but no report was released at the time. In September 2024, during the public hearing by the Marine Board of Investigation, the USCG confirmed that the Armed Forces DNA Identification Laboratory, located in Dover, Delaware, had positively identified DNA profiles for the five victims. On 30 June, Insider published an analysis of the recovery photos by Plymouth University professor Jasper Graham-Jones. He concluded that a failure of the carbon-fibre hull was the most likely cause of the loss, given that no large pieces of carbon fibre are known to have been recovered. Another possible cause was the acrylic viewing window. He noted that the window was absent from its bell housing when it was recovered. While the salvage team may have removed the window before salvaging its bell housing, they more likely would have left it in place. However, Graham-Jones said that if the window had failed before the hull rather than after, he would have expected larger pieces of carbon fibre to be recovered.
During early October, engineers recovered the rest of the debris and presumed human remains. Investigations On 23 June, both the Canadian and the United States federal governments announced that they were beginning investigations of the incident. They were joined by authorities from France (Bureau d'Enquêtes sur les Événements de Mer, BEAmer) and the United Kingdom (Marine Accident Investigation Branch, MAIB) by 25 June; the final report will be issued to the International Maritime Organization (IMO). Whether lasting reforms will result from the investigation is uncertain. While there are a variety of possible options, the IMO may not have the appropriate regulatory authority. United States The United States investigation is being directed by the Coast Guard (USCG) with support from the National Transportation Safety Board; the Coast Guard took control because it declared the incident a "major marine casualty". USCG Captain Jason Neubauer has been named the chief investigator for a Marine Board of Investigation. Though at first the investigation was anticipated to be completed within one year, the USCG eventually acknowledged it would take longer. "The investigation into the implosion of the Titan submersible is a complex and ongoing effort", said Neubauer in June 2024. "We are working closely with our domestic and international partners to ensure a comprehensive understanding of the incident." Canada The Transportation Safety Board of Canada (TSB) is investigating because Titan's support vessel, MV Polar Prince, is a Canadian-flagged ship. A team of TSB investigators headed to the port of origin, St. John's, Newfoundland, to "gather information, conduct interviews and assess the occurrence", with other agencies also expected to be involved. The Royal Canadian Mounted Police (RCMP) also announced that it was performing a preliminary examination of the incident in order to determine whether to begin a full investigation, which will occur if the RCMP determines criminal, federal, or provincial laws were broken. Lawsuit On 6 August 2024, Nargeolet's family sued OceanGate for wrongful death. Financial costs of operations Numerous assets from the U.S. Air Force and the U.S. Coast Guard were deployed to search for the submersible and to subsequently retrieve the victims' remains. A Washington Post analysis by Mark Cancian, a defence budget expert, estimated the costs of U.S. Coast Guard operations alone at about US$1.2 million of taxpayers' money as of 23 June 2023, with the additional operations to recover the submersible's debris not included. Cancian said that while the Titan search operation was funded by money already in the federal budget, the U.S. military would assume some unexpected costs, since personnel and equipment were used in an unforeseen manner. Deploying a single Lockheed CP-140 Aurora aircraft and 341 sonobuoys cost Canadian taxpayers at least CA$3 million, and the total Canadian contribution is likely to be much greater when all expenditures are tallied. Chris Boyer of the National Association for Search and Rescue said the search for Titan likely cost millions of dollars of public funds; however, the USCG refused to give an estimate, saying they "do not associate cost with saving a life". According to U.S. attorney Stephen Koerting, the USCG is generally prohibited by federal law from collecting reimbursement related to any search or rescue service.
The incident renewed past debates about whether taxpayers should bear the cost of search and rescue missions involving wealthy people engaged in high-risk adventuring, such as the incidents involving Steve Fossett and Richard Branson. Reactions Sean Leet, co-founder and chair of Horizon Maritime Services, the company that owns Polar Prince, discussed the scale of the search and rescue response. The scale of the search and rescue efforts and media coverage compared to those for the Messenia migrant boat disaster, which occurred days earlier, sparked criticism. In the Ionian Sea off the coast of Pylos, Messenia, Greece, a fishing boat sank while carrying an estimated 400 to 750 migrants, resulting in nearly 100 persons confirmed dead, another 100 rescued, and hundreds more missing and presumed dead. Search and rescue efforts for the migrant ship were conducted by the Hellenic Coast Guard and military. Ishaan Tharoor of The Washington Post wrote that Pakistani Internet users compared and contrasted the Pakistani victims in both incidents, who were on opposite sides of Pakistan's large socioeconomic divide. According to David Scott-Beddard, the CEO of White Star Memories Ltd, a Titanic exhibition company, the likelihood of performing future research at the Titanic wreck decreased due to the incident. James Cameron, who directed the 1997 movie Titanic, has visited the Titanic wreck 33 times and piloted Deepsea Challenger to the bottom of the Mariana Trench; he said he was "struck by the similarity" between the submersible's implosion and the events that resulted in the Titanic disaster. He noted that both disasters seemed preventable, and were caused indirectly by someone deliberately ignoring safety warnings from others. Cameron criticized the choice of carbon-fibre composite construction for the pressure vessel, saying it has "no strength in compression" when subject to the immense pressures at depth. Cameron said that pressure hulls should be made out of contiguous materials such as steel, titanium, ceramic, or acrylic, and that the wound carbon fibre of Titan's hull had seemed like a bad idea to him from the beginning. He stated that it was long known that composite hulls were vulnerable to microscopic water ingress, delamination, and progressive failure over time. He also criticized Rush's real-time monitoring of the hull as an inadequate solution that would do little to prevent an implosion. Cameron expressed regret for not being more outspoken about these concerns before the accident, and criticized what he termed "false hopes" being presented to the victims' families; he and his colleagues realized early on that for communication and tracking (the latter housed in a separate pressure vessel, with its own battery) to be lost simultaneously, the cause was almost certainly a catastrophic implosion. The Logitech F710 game controller used to steer Titan sold out on Amazon soon after the incident, which was described as "a more benign form of disaster tourism" by the New York weblog the Cut. In social and mass media The submersible became widely discussed on social media as the story developed and was the subject of "public schadenfreude", inspiring grimly humorous Internet memes, namely interactive video game recreations and image macros that ridiculed the submersible's deficient construction, OceanGate's perceived poor safety record, and the individuals who died. The memes were criticized as insensitive, with David Pogue regarding such media as "inappropriate and a little bit sick".
Some have felt the negative reaction to the victims may be a response to past news coverage of other expeditions by billionaires, often using their own companies such as Blue Origin. Molly Roberts wrote in The Washington Post that those joking about the incident were demonstrating Internet users' impulses to be ironic, provocative, and angry with each other, combined with an "eat-the-rich attitude". According to Pamela Rutledge, an American expert in media psychology, social media, and mass media, the Titan incident was widely treated on social media as entertainment. Major elements included the allure of disasters, fascination with the wealthy, conspiracy theories, uncertainty, and the mythology of the Titanic, as well as the romance of rescue operations. Rutledge opined that the trend displayed a lack of accountability and empathy, and asserted that individuals need to rethink the way in which they use social media. In September 2023, it was announced that a new movie about the Titan submersible incident, named Salvaged, was in development. The amount of media coverage and public attention given to the Titan incident was criticized by people such as former U.S. president Barack Obama, who commented that the contemporaneous 2023 Messenia migrant boat disaster had received much less attention. The 2024 American Broadcasting Company (ABC) special Truth and Lies: Fatal Dive to the Titanic examined the implosion of the Titan. In February 2024, a movie inspired by the events of the Titan submersible incident, titled Locker, was announced. In March 2024, a two-part documentary by ITN Productions, Minute by Minute: The Titan Sub Disaster, was broadcast by the UK's Channel 5. The documentary included interviews with the Canadian air crew that searched the surface, Edward Cassano of the Pelagic remotely operated vehicle team that found the wreckage, and members of the Marine Technology Society William Kohnen and Bart Kemper, who had warned OceanGate about its deviation from accepted engineering practices in 2018. Analysis of the mysterious "banging" sounds that seemed to indicate the occupants were still alive was a main feature of the first part. See also List of shipwrecks in 2023 List of submarine and submersible incidents since 2000 Notes References External links Titan Submersible Marine Board of Investigation | U.S. Coast Guard Marine Board of Investigation Marine transportation safety investigation M23A0169 | Transportation Safety Board of Canada June 2023 Maritime incidents in 2023 Maritime incidents involving engineering failures Submarine accidents Submarines lost with all hands Internet memes introduced in 2023 submersible, 2023 incident Articles containing video clips Implosion 2023 controversies
Titan submersible implosion
[ "Physics" ]
8,488
[ "Mechanics", "Implosion" ]
74,078,426
https://en.wikipedia.org/wiki/List%20of%20lightest%20mirrorless%20cameras
This is a list of the lightest and smallest mirrorless digital cameras ever released with an interchangeable lens mount, excluding smartphones and action cameras, sorted by weight including battery and memory card. Nearly all the lightest models have been discontinued, as smartphone cameras have rapidly improved and taken over their market. Some high-end smartphones now exceed several of these models in weight, sensor size, and functionality. (For example, an iPhone 15 Pro Max weighs 221 g, and a Galaxy S24 Ultra weighs 233 g.) The lightest mirrorless cameras in production today are the Olympus E-P7 at 337 g and Sony ZV-E10 at 343 g. The lightest models in production with an electronic viewfinder (EVF) are the Panasonic G100D at 346 g and Canon R100 at 356 g. With the exception of the E-P7's in-body image stabilization (IBIS), these models eschew certain hardware features, such as IBIS and weather sealing, that add weight. Most newer models include one or more of these features, as the bulk of the mirrorless camera sector has moved upmarket in the face of increasing competition from smartphones. Of these ultracompact models, the Micro Four Thirds cameras (Panasonic GM1, Panasonic GM5, and Z CAM E1) have by far the largest sensor, with an area nearly twice as large as Samsung's and Nikon's "1-inch" sensors and nearly eight times as large as the Pentax Q's sensor. On the other hand, Pentax was able to include in-body image stabilization in their Q-series bodies, because of the tiny sensors. Lightest mirrorless cameras with an APS-C sensor The lightest interchangeable-lens mirrorless cameras in production today with an APS-C sensor are the Sony ZV-E10 at 343 g, Fujifilm X-M5 at 355 g, and Canon R100 at 356 g. The Ricoh GR III at 257 g and Ricoh GR IIIx at 262 g are even lighter than the models in the above list and contain an APS-C sensor, but they include a non-interchangeable lens. Lightest mirrorless cameras with a full frame sensor Lightest rangefinder cameras with a full frame sensor All rangefinder cameras (including digital Leica M-series cameras) are technically mirrorless, because they do not contain a mirror. However, rangefinder cameras are usually not considered mirrorless cameras, a differentiation dating back to when they co-existed with SLR film cameras. More specifically, rangefinder cameras lack autofocus and employ a very different manual focusing method involving a rangefinder mechanism with an optical viewfinder. Furthermore, most digital rangefinder cameras (except Leica's recent models) lack live preview, which is sometimes considered a defining feature of mirrorless cameras. Lightest mirrorless cameras with a medium format sensor Lens-style cameras Lens-style cameras are lighter than all other interchangeable lens cameras with their respective sensor sizes, but they are usually not classified with other mirrorless cameras because they have no screen or viewfinder. They are designed to be attached to a smartphone so that the phone’s screen can be used as the camera’s display. Industrial cameras Sony introduced a mirrorless camera designed for industrial applications that has no screen or viewfinder and no internal battery. It is lighter than all other full-frame interchangeable lens cameras, and as with other modular cameras it is designed to be attached to other hardware, e.g. a drone. See also List of bridge cameras List of large sensor fixed-lens cameras Notes References Cameras by type Lists of cameras
List of lightest mirrorless cameras
[ "Technology" ]
763
[ "Mirrorless cameras", "System cameras" ]
74,079,300
https://en.wikipedia.org/wiki/UGC%205101
UGC 5101 is a galaxy merger located in the constellation Ursa Major, at a distance of about 530 million light years from Earth. It is an ultraluminous infrared galaxy (ULIRG), with a total infrared luminosity exceeding the defining ULIRG threshold of 10^12 solar luminosities and a total star formation rate of about 105 solar masses per year. UGC 5101 has a single nucleus surrounded by spiral isophotes. The nucleus of UGC 5101 has been found to be active, and it has been categorised as a type 1.5 Seyfert galaxy or a LINER based on the radio continuum. The most accepted theory for the energy source of active galactic nuclei is the presence of an accretion disk around a supermassive black hole. The mass of the black hole in the centre of UGC 5101 is estimated to be 10^8.2 (about 160 million) solar masses based on stellar velocity dispersion. The galaxy also hosts a water megamaser, probably originating from the nucleus. The nucleus emits hard X-rays, which are strongly absorbed, while there is also a soft X-ray component, which could originate from a hidden starburst region. [Ne V] emission has also been detected in the nucleus, indicating the presence of hot gas in the coronal line region, while hot dust has been detected around the nucleus, as indicated by the presence of PAH emission and strong silicate absorption. The nucleus is surrounded by a dust torus with an opening angle larger than 41°, which partly obstructs the nucleus with a large hydrogen column density along the line of sight. The hole of the torus is covered with Compton-thin material. The ratio of the integrated intensities of HCN to 13CO indicates that the gas in the torus is very dense. When observed with very-long-baseline interferometry, the galaxy features a ridgeline that could be compact jets generated by the active nucleus. The galaxy has a tidal tail, seen edge-on, and a faint halo of stars that was created during the merger. A second tidal tail appears to loop around the nucleus, forming a ring. See also NGC 6240 and Markarian 273 – two other nearby ultraluminous infrared galaxies with active nuclei References External links UGC 5101 on SIMBAD Interacting galaxies Luminous infrared galaxies Active galaxies Ursa Major 05101 27292 Galaxy mergers
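Black-hole masses quoted "based on stellar velocity dispersion" typically come from an M–σ scaling relation. As an illustrative cross-check (the calibration constants below are representative values assumed for this sketch, not taken from the article), a mass of $10^{8.2}\,M_\odot$ (note that $10^{8.2} \approx 1.6 \times 10^8$, matching the 160 million figure above) corresponds to a dispersion of roughly:

$$M_{\mathrm{BH}} \approx 10^{8.1}\left(\frac{\sigma}{200\,\mathrm{km/s}}\right)^{4} M_\odot \;\Rightarrow\; \sigma \approx 200 \times 10^{(8.2 - 8.1)/4}\,\mathrm{km/s} \approx 210\,\mathrm{km/s}.$$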
UGC 5101
[ "Astronomy" ]
482
[ "Ursa Major", "Constellations" ]
74,081,567
https://en.wikipedia.org/wiki/Sophie%20Germain%27s%20identity
In mathematics, Sophie Germain's identity is a polynomial factorization named after Sophie Germain stating that $x^4 + 4y^4 = (x^2 + 2y^2 + 2xy)(x^2 + 2y^2 - 2xy)$. Beyond its use in elementary algebra, it can also be used in number theory to factorize integers of the special form $x^4 + 4y^4$, and it frequently forms the basis of problems in mathematics competitions. History Although the identity has been attributed to Sophie Germain, it does not appear in her works. Instead, in her works one can find the related identity $x^4 + y^4 = (x^2 + y^2)^2 - 2(xy)^2$. Modifying this equation by multiplying $y$ by $\sqrt{2}$ gives $x^4 + 4y^4 = (x^2 + 2y^2)^2 - (2xy)^2$, a difference of two squares, from which Germain's identity follows. The inaccurate attribution of this identity to Germain was made by Leonard Eugene Dickson in his History of the Theory of Numbers, which also stated (equally inaccurately) that it could be found in a letter from Leonhard Euler to Christian Goldbach. The identity can be proven simply by multiplying the two terms of the factorization together, and verifying that their product equals the right-hand side of the equality. A proof without words is also possible based on multiple applications of the Pythagorean theorem. Applications to integer factorization One consequence of Germain's identity is that the numbers of the form $n^4 + 4^n$ cannot be prime for $n > 1$. (For $n = 1$, the result is the prime number 5.) They are obviously not prime if $n$ is even, and if $n$ is odd they have a factorization given by the identity with $x = n$ and $y = 2^{(n-1)/2}$. These numbers (starting with $n = 1$) form the integer sequence 5, 32, 145, 512, 1649, 5392, … Many of the appearances of Sophie Germain's identity in mathematics competitions come from this corollary of it. Another special case of the identity, with $x = 1$ and $y = 2^k$, can be used to produce the factorization $\Phi_4(2^{2k+1}) = 2^{4k+2} + 1 = (2^{2k+1} - 2^{k+1} + 1)(2^{2k+1} + 2^{k+1} + 1)$, where $\Phi_4(q) = q^2 + 1$ is the fourth cyclotomic polynomial. As with the cyclotomic polynomials more generally, $\Phi_4$ is an irreducible polynomial, so this factorization of infinitely many of its values cannot be extended to a factorization of $\Phi_4$ as a polynomial, making this an example of an aurifeuillean factorization. Generalization Germain's identity has been generalized to a functional equation which, by Sophie Germain's identity, is satisfied by the square function $f(x) = x^2$. References Algebraic identities Factorization
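As a quick verification and worked example (not part of the original article), expanding the product confirms the identity:

$$(x^2 + 2y^2 + 2xy)(x^2 + 2y^2 - 2xy) = (x^2 + 2y^2)^2 - (2xy)^2 = x^4 + 4x^2y^2 + 4y^4 - 4x^2y^2 = x^4 + 4y^4.$$

For the corollary, take $n = 3$: $3^4 + 4^3 = 81 + 64 = 145$, and the identity with $x = 3$, $y = 2^{(3-1)/2} = 2$ gives $(9 + 8 + 12)(9 + 8 - 12) = 29 \times 5 = 145$.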
Sophie Germain's identity
[ "Mathematics" ]
420
[ "Factorization", "Mathematical identities", "Arithmetic", "Algebraic identities" ]
74,085,954
https://en.wikipedia.org/wiki/University%20of%20Illinois%20Center%20for%20Supercomputing%20Research%20and%20Development
The Center for Supercomputing Research and Development (CSRD) at the University of Illinois (UIUC) was a research center funded from 1984 to 1993. It built the shared memory Cedar computer system, which included four hardware multiprocessor clusters, as well as parallel system and applications software. It was distinguished from the four earlier UIUC Illiac systems by starting with commercial shared memory subsystems that were based on an earlier paper published by the CSRD founders. Thus CSRD was able to avoid many of the hardware design issues that slowed the Illiac series work. Over its 9 years of major funding, plus follow-on work by many of its participants, CSRD pioneered many of the shared memory architectural and software technologies upon which all 21st century computation is based. History UIUC began computer research in the 1950s, initially for civil engineering problems; this work was eventually succeeded by cooperative activities among the Math, Physics, and Electrical Engineering Departments to build the Illiac computer series. This led to the founding of the Computer Science Department in 1965. By the early 1980s, a time of world-wide HPC expansion had arrived, including the race with the Japanese 5th generation system targeting innovative parallel applications in AI. HPC/supercomputing had emerged as a field, commercial supercomputers were in use by industry and labs (but little by academia), and academic architecture and compiler research were expanding. This led to the formation of the Lax committee to study the academic needs of focused HPC research, and to provide commercial HPC systems for university research. When HPC practitioner Ken Wilson won the Nobel physics prize in 1982, he expanded his already strong advocacy of both, and soon several government agencies introduced HPC R&D programs. As a result, the UIUC Center for Supercomputing R&D (CSRD) was formed in 1984 (with funding from DOE, NSF, and UIUC, as well as DoD DARPA and AFOSR), under the leadership of three CS professors who had worked together since the Illiac 4 project: David Kuck (Director), Duncan Lawrie (Assoc. Dir. for SW) and Ahmed Sameh (Assoc. Dir. for applications), plus Ed Davidson (Assoc. Dir. for hardware/architecture), who joined from ECE. Many graduate students and post-docs were already contributing to constituent efforts; full-time academic professionals were hired, and other faculty cooperated. A total of up to 125 people were involved at the peak, over the nine years of full CSRD operation. The UIUC administration responded to the computing and scientific times. CSRD was set up as a Graduate College unit, with space in Talbot Lab. UIUC President Stanley Ikenberry arranged to have Governor James Thompson directly endow CSRD with $1 million per year to guarantee personnel continuity. CSRD management helped write proposals that led to a gift from Arnold Beckman of a $50 million building, the establishment of NCSA, and a new CSRD building (now CSL). The CSRD plan for success took a major departure from earlier Illiac machines by integrating four commercially built parallel machines using an innovative interconnection network and global shared memory. Cedar was based on designing and building a limited amount of innovative hardware, driven by SW that was built on top of emerging parallel applications and compiler technology. By breaking the tradition of building hardware first and then dealing with SW details later, this codesign approach led to the name Cedar instead of Illiac 5.
Earlier work by the CSRD founders had intensively studied a variety of new high-radix interconnection networks, built tools to measure the parallelism in sequential programs, designed and built a restructuring compiler (Parafrase) to transform sequential programs into parallel forms, and invented parallel numerical algorithms. During the Parafrase development of the 1970s, several papers were published proposing ideas for expressing and automatically optimizing parallelism. These ideas influenced later compiler work at IBM, Rice U. and elsewhere. Parafrase had been donated to Fran Allen's IBM PTRAN group in the late 1970s, Ken Kennedy had gone there on sabbatical and obtained a Parafrase copy, and Ron Cytron joined the IBM group from UIUC. Also, KAI was founded in 1979 by three Parafrase veterans (Kuck, Bruce Leasure, and Mike Wolfe), who wrote KAP, a new source-to-source restructurer. The key Cedar idea was to exploit feasible-scale parallelism by linking together a number of shared memory nodes through an interconnection network and memory hierarchy. Alliant Computer Systems Alliant Computer Systems had obtained venture capital funding (in Boston) based on an earlier architecture paper by the CSRD team and was then shipping systems. The Cedar team was thus immediately able to focus on designing hardware to link 4 Alliant systems and add a global shared memory to the Alliant 8-processor shared memory nodes. By contrast, other academic teams of the era pursued massively parallel systems (CalTech, later in cooperation with Intel), fetch-and-add combining networks (NYU), innovative caching (Stanford), dataflow systems (MIT), etc. In sharp contrast, two decades earlier, the Illiac 4 team required years of work with state-of-the-art industry hardware technology leaders to get the system designed and built. The 1966 industrial proposals for Illiac 4 hardware technology even included a GE Josephson junction proposal, which John Bardeen helped evaluate while he was developing the theory that led to his superconductivity Nobel prize. After contracting with Burroughs Corp to build and integrate an all-transistor hardware system, lengthy discussions ensued about the semiconductor memory design (and schedule slips) with subcontractor Texas Instruments' Jack Kilby (IC inventor and later Nobelist), Morris Chang (later TSMC founder) and others. Earlier Illiac teams had pushed contemporary technologies, with similar implementation problems and delays. Many parallel computing startups arose in the decades following Illiac 4, but none achieved success until adequate languages and software were developed in the 1970s and 80s. Parafrase veteran Steve Chen joined Cray and led development of the parallel/vector Cray X-MP, released in 1982. The 1990s were a turning point, with many 1980s startups failing, the end of bipolar technology cost-effectiveness, and the general end of academic computer building. By the 2000s, with Intel and others manufacturing massive numbers of systems, shared memory parallelism had become ubiquitous. CSRD and the Cedar system played key roles in advancing shared memory system effectiveness. Many CSRD innovations of the late 80s (Cedar and beyond) are in common use today, including hierarchical shared memory hardware. Cedar also had parallel Fortran extensions, a vectorizing and parallelizing compiler, and a custom Unix-based OS, which were used to develop advanced parallel algorithms and applications. These will be detailed below.
Cedar design and construction One unusually productive aspect of the Cedar design effort was the ongoing cooperation among the R&D efforts of architects, compiler writers, and application developers. Another was the substantial legacy of ideas and people from the Parafrase project in the 1970s. These enabled the team to focus on several design topics quickly: Interconnection network and shared memory hierarchy Compiler algorithms, OS, and SW tools Applications and performance analysis The architecture group had a decade of parallel interconnect and memory experience and had already chosen a high-radix shuffle network, so after selecting Alliant as the node manufacturer, custom interfacing hardware was designed in conjunction with Alliant engineers. The compiler team started by designing Cedar Fortran for this architecture, and by modifying the Kuck & Assoc. (KAI) source-to-source translator with Cedar-specific transformations for the Alliant compiler. Having nearly two decades of parallel algorithm experience (starting from Illiac 4), the applications group chose several applications to study, based on emerging parallel algorithms. This was later extended to include some widely used applications that shared the need for the chosen algorithms. Designing, building and integrating the system was then a multi-year effort, including architecture, hardware, compiler, OS and algorithm work. System Architecture & Hardware The hardware design led to three different types of 24-inch printed circuit boards, with the network board using CSRD-designed crossbar gate array chips. The boards were assembled into three custom racks in a machine room in Talbot Lab using water-cooled heat exchangers. Cedar's key architectural innovations and features included: A hierarchical/cluster-based shared-memory multiprocessor (SMP) design using the Alliant FX as the building block. This approach is still followed in today's parallel machines, where the Alliant 8-processor systems have been shrunk to single-chip multi-core nodes containing 16, 32 or more cores, depending on power and thermal limitations. The first SMP use of a scalable high-radix multi-stage, shuffle-exchange interconnection network, i.e., a 2-stage Omega network using 8x8 crossbar switches as building blocks. In 2005, the Cray BlackWidow (Cray X2) used a variant of such networks in the form of a high-radix 3-stage Clos network with 32x32 crossbar switches. Subsequently, many other systems have adopted the idea. The first hardware data prefetcher, using a "next-array-element" prefetching scheme (instead of the "next-cache-line" scheme used in some later machines) to load array data from the shared global memory. Data prefetching is a critical technology on today's multicores. The first "processor-in-memory" (PIM) in its shared global memory to perform long-latency synchronization operations. Today, using PIM to carry out various operations in shared global memory is still an active architectural research area. Software-combining techniques for scalable synchronization operations Language and compiler By 1984, Fortran was still the standard language of HPC programming, but no standard existed for parallel programming. Building on the ideas of Parafrase and emerging commercial programming methods, Cedar Fortran was designed and implemented for programming Cedar and to serve as the target of the Cedar autoparallelizer. Cedar Fortran contained a two-level parallel loop hierarchy that reflected the Cedar architecture.
Each iteration of an outer parallel loop made use of one cluster, and a second-level parallel loop made use of one of the eight processors of a cluster for each of its iterations. Cedar Fortran also contained primitives for doacross synchronization and control of critical sections. Outer-level parallel loops were initiated, scheduled and synchronized using a runtime library, while inner loops relied on Alliant hardware instructions to initiate the loops and to schedule and synchronize their iterations. Global variables and arrays were allocated in global memory, while those declared local to iterations of outer parallel loops were allocated within clusters. There were no caches between clusters and main memory; therefore, programmers had to explicitly copy from global memory to local memory to attain faster memory accesses. These mechanisms worked well in all cases tested and gave programmers control over processor assignment and memory allocation. As discussed in the next section, numerous applications were implemented in Cedar Fortran. Cedar compiler work started with the development of a Fortran parallelizer for Cedar built by extending KAP, a vectorizer, which was contributed by KAI to CSRD. Because it was built on a vectorizer, the first modified version of KAP developed at CSRD lacked some important capabilities necessary for an effective translation for multiprocessors, such as array privatization and parallelization of outer loops. Unlike Parafrase (written in PL/1), which ran only on IBM machines, KAP (written in C) ran on many machines (the KAI customer base). To identify the missing capabilities and develop the necessary translation algorithms, a collection of Fortran programs from the Perfect Benchmarks was parallelized by hand. Only techniques that were considered implementable were used in the manual parallelization study. The techniques were later used for a second-generation parallelizer that proved effective on collections of programs not used in the manual parallelization study. Applications and benchmarking Meanwhile, the algorithms/applications group was able to use Cedar Fortran to implement and test algorithms and run them on the four quadrants independently before system integration. The group was focused on developing a library of parallel algorithms and their associated kernels that mainly govern the performance of large-scale computational science and engineering (CSE) applications. Some of the CSE applications that were considered during the Cedar project included: electronic circuit and device simulation, structural mechanics and dynamics, computational fluid dynamics, and the adjustment of very large geodetic networks. A systematic plan for performance evaluation of many CSE applications on the Cedar platform was outlined in early CSRD publications. In almost all of the above-mentioned CSE applications, dense and sparse matrix computations proved to largely govern the overall performance of these applications on the Cedar architecture.
Parallel algorithms that realize high performance on the Cedar architecture were developed for: solving dense and large sparse (structured as well as unstructured) linear systems, computing a few eigenpairs of large symmetric tridiagonal matrices, computing all the eigenpairs of dense symmetric standard eigenvalue problems and all the singular triplets of dense non-symmetric real matrices, computing a few of the smallest eigenpairs of large sparse standard and generalized symmetric eigenvalue problems, and computing a few of the largest or smallest singular triplets of large sparse nonsymmetric real matrices. In preparing to evaluate candidate hardware building blocks and the final Cedar system, CSRD managers began to assemble a collection of test algorithms; this collection, described in an early publication, later evolved into the Perfect Club. Before that, there were only kernels and focused algorithm approaches (LINPACK, NAS benchmarks). In the following decade the idea became popular, especially as many manufacturers introduced high-performance workstations, which buyers wanted to compare; SPEC became the workhorse of the field and was followed by many others. SPEC was incorporated in 1988, released its first benchmark in 1992 (SPEC92), and formed a high-performance group in 1994. (David Kuck and George Cybenko were early advisors, Kuck served on the BoD in the early 90s, and Rudolf Eigenmann drove the SPEC HPG effort, leading to the release of a first high-performance benchmark in 1996.) In a joint effort between the CSRD groups, the Parafrase memory hierarchy loop blocking work of Abu Sufah was exploited for the Cedar cache hierarchy. Several papers were published demonstrating performance enhancement for basic linear algebra algorithms on the Alliant quadrants and Cedar. A sabbatical spent at CSRD at the time by Jack Dongarra and Danny Sorensen led to this work being transferred as BLAS 3 (extending the simpler BLAS 1 and BLAS 2), a standard that is now widely used. Cedar conclusion CSRD had many alumni who went on to important careers in computing. Some left early, others came late. Among the leaders was UIUC faculty member Dan Gajski, who was affiliated with the CSRD directors in formulating plans and proposals, but left UIUC just before CSRD actually commenced. Another was Mike Farmwald, who joined as an Associate Director for hardware/architecture when Ed Davidson left. Immediately after leaving, Farmwald co-founded Rambus, which continues as a memory design leader. David Padua became Assoc. Director for SW after Duncan Lawrie left, and continued many CSRD projects as a UIUC CS professor. Over time, CSRD researchers became CS and ECE department heads at 5 Big Ten universities. By 1990, the Cedar system had been completed. The CSRD team was able to scale applications from single clusters to the full 4-cluster system and begin performance measurements. Despite these innovation successes, there was no follow-up machine construction project. After the end of the Cedar project, the Stanford DASH/FLASH projects, and the MIT Alewife project around 1995, the era of large, multi-faculty academic machine designs had come to an end. Cedar was a preeminent part of the last wave of such projects. ISCA's 25th Anniversary Proceedings contain several retrospective papers describing some of the machines in that last wave, including one on Cedar. About 50 remaining CSRD students, academic professionals and faculty became a research group within the Coordinated Science Laboratory by 1994.
For several years, they continued the work initiated in the 1980s, including experimental evaluations of Cedar and continuation of several lines of CSRD compiler research. Other CSRD contributions Beyond the core CSRD work of designing, building and using Cedar, many related topics arose. Some were directly motivated by the Cedar project. Many of these had value well beyond Cedar, were pursued well beyond the official end of CSRD, and were taken up by many academic and industrial groups. Next, the most important such topics are discussed. Guided Self-Scheduling In the mid-1980s, C. Polychronopoulos developed one of the most influential strategies for the scheduling of parallel loop iterations. The strategy, called Guided Self-Scheduling, schedules the execution of a group of loop iterations each time a processor becomes available. The number of iterations in these groups decreases as the execution of the loop progresses, in such a way that the load imbalance is reduced relative to the static or dynamic scheduling techniques used at the time. Guided Self-Scheduling influenced research and practice, with numerous citations of the paper introducing the technique and the adoption of the strategy by OpenMP as one of its standard loop scheduling techniques. Approximation by superpositions of a sigmoidal function In the mid to late 1980s, the so-called "Parallel Distributed Processing" (PDP) effort recast earlier generations of neural computation by demonstrating effective machine learning algorithms and neural architectures. The computing paradigm, far removed from traditional von Neumann computer architecture, demonstrated that PDP approaches and algorithms could address a variety of application problems in novel ways. However, it was not known what kinds of problems could be solved using such massively parallel neural network architectures. In 1989, CSRD researcher George Cybenko demonstrated that even the simplest nontrivial neural network had the representational power to approximate a wide variety of functions, including categorical classifiers and continuous real-valued functions. That work was seminal in that it showed that, in principle, neural machines based on biological nervous systems could effectively emulate any input-output relationship that was computable by traditional machines. As a result, Cybenko's result has often been called the "Universal Approximation Theorem" in the literature. The proof of that result relied on advanced functional analysis techniques and was not constructive. Even so, it gave rigorous justification for generations of neural network architectures, including the deep learning and large language models in wide use in the 2020s. While Cybenko's Universal Approximation Theorem addressed the capabilities of neural-based computing machines, it was silent on the ability of such architectures to effectively learn their parameter values from data. Cybenko and CSRD colleagues Sirpa Saarinen and Randall Bramley subsequently studied the numerical properties of neural networks, which are typically trained using stochastic gradient descent and its variants. They observed that neurons saturate when network parameters are very negative or very positive, leading to arbitrarily small gradients, which in turn result in optimization problems that are numerically poorly conditioned. This property has been called the "vanishing gradient" problem in machine learning. BLAS 3 The Basic Linear Algebra Subroutines (BLAS) are among the most important mathematical software achievements.
They are essential components of LINPACK, and versions are used by every major vendor of computer hardware. The BLAS library was developed in three different phases. BLAS 1 provided optimized implementations for basic vector operations; BLAS 2 contributed matrix-vector capabilities to the library. BLAS 3 involves optimizations for matrix-matrix operations. The multi-cluster shared memory architecture of Cedar inspired a great deal of library optimization research involving cache locality and data reuse for matrix operations of this type. The official BLAS 3 standard was published in 1990, inspired in part by this CSRD work. Additional CSRD research on data management for complex memory hierarchies followed, and some of the more theoretical work was also published, as was the performance impact of these algorithms when running on Cedar. OpenMP Beyond CSRD, the many parallel startup companies of the 1980s created a profusion of ad hoc parallel programming styles, based on various process and thread models. Subsequently, many parallel language and compiler ideas were proposed, including compilers for Cray Fortran, KAI-based source-to-source optimizers, etc. Some of these tried to create product differentiation advantages, but largely went contrary to user desires for performance portability. By the late 1980s, KAI started a standardization effort that led to the ANSI X3H5 draft standard, which was widely adopted. In the 1990s, after CSRD, these ideas influenced KAI in auto-parallelization, and soon another round of standardization was begun. By 1996, KAI had SGI as a customer, and they joined the effort to form the OpenMP consortium; the OpenMP Architecture Review Board incorporated in 1997 with a growing collection of manufacturers. KAI also developed parallel performance and thread-checking tools, which Intel bought with its purchase of KAI in 2000. Many KAI staff members remain, and the Intel development continues, directly inherited from Parafrase and CSRD. Today, OpenMP is the industry-standard shared memory programming API for C/C++ and Fortran. Speculative parallelization For his PhD thesis, Rauchwerger introduced an important paradigm shift in the analysis of program loops for parallelization. Instead of first validating the transformation into parallel form through a priori analysis, either statically by the compiler or dynamically at runtime, the new paradigm speculatively parallelized the loop and then checked its validity. This technique, named "speculative parallelization", executes a loop in parallel and subsequently tests whether any data dependences could have occurred. If this validation test fails, then the loop is re-executed in a safe manner, starting from a safe state, e.g., sequentially from a previous checkpoint. This approach is known as the LRPD Test (Lazy Reduction and Privatization Doall Test). Briefly, the LRPD test instruments the shared memory references of the loop in some "shadow" structures and then, after loop execution, analyzes them for dependence patterns. This pioneering contribution has been quite influential and has been applied throughout the years by many researchers from CSRD and elsewhere. Race detection In 1987, Allen pioneered the use of memory traces for the detection of race conditions in parallel programs. Race conditions are defects of parallel programs that manifest in different outcomes for different executions of the same program on the same input data.
Because of their dynamic nature, race conditions are difficult to detect, and the techniques introduced by Allen and later expanded remain the best known strategy to cope with this problem. The strategy has been highly influential, with numerous researchers working on the topic during the last decades. The technique has been incorporated into numerous experimental and commercial tools, including Intel's Inspector. Contributions to Benchmarking – SPEC One of CSRD's thrusts was to develop metrics able to evaluate both hardware and software systems using real applications. To this end, the Perfect Benchmarks provided a set of computational applications, collected from various science domains, which were used to evaluate and drive the study of the Cedar system and its compilers. In 1994, members of CSRD and the Standard Performance Evaluation Corporation (SPEC) expanded on this thrust, forming the SPEC High-Performance Group. This group released a first real-application SPEC benchmark suite, SPEC HPC 96. SPEC has continued the development of benchmarks for high-performance computing to this day, a recent suite being SPEChpc 2021. With CSRD's influence, the SPEC High-Performance Group also prompted a close collaboration of industrial and academic participants. A joint workshop in 2001 on Real-Application Benchmarking founded a workshop series, eventually leading to the formation of the SPEC Research Group, which in turn co-initiated the now annual ACM/SPEC International Conference on Performance Engineering. Parallel Programming Tools Funded by DARPA, the HPC++ project was led by Dennis Gannon and Allen Malony, with postdocs Francois Bodin (from William Jalby's group in Rennes) and Peter Beckman (now at Argonne National Laboratory). This work grew out of a collaboration between Malony, Gannon and Jalby that began at CSRD. HPC++ is based on extensions to the C++ Standard Template Library that support a number of parallel programming scenarios, including single-program-multiple-data (SPMD) and Bulk Synchronous Parallel, on both shared memory and distributed memory parallel systems. The most significant outcome of this collaboration was the development of the TAU Parallel Performance System. Originally developed for HPC++, it has become a standard for measuring, visualizing and optimizing parallel programs for nearly all programming languages and is available for all parallel computing platforms. It supports various programming interfaces such as OpenCL, DPC++/SYCL, OpenACC, and OpenMP. It can also gather performance information on GPU computations from different vendors such as Intel and NVIDIA. TAU has been used for many HPC applications and projects. Applications The Cedar project strongly influenced the research activities of many of CSRD's faculty members long after the end of the project. After the termination of the Cedar project, the first task undertaken by three members of Cedar's Algorithm and Application group (A. Sameh, E. Gallopoulos, and B. Philippe) was documenting the parallel algorithms developed, and published in a variety of journals and conference proceedings, during the lifetime of the project. The result was a graduate textbook: "Parallelism in Matrix Computations" by E. Gallopoulos, B. Philippe, and A. Sameh, published by Springer, 2016. The parallel algorithm development experience gained by one of the members of the Cedar project (A. Sameh) proved to be of great value in his research activities after leaving UIUC.
He used many of these parallel algorithms in joint research projects: • fluid-particle interaction with the late Daniel Joseph (a member of the National Academy of Sciences and a faculty member in Aerospace Engineering at the University of Minnesota, Twin Cities), • fluid-structure interaction with Tayfun Tezduyar (Mechanical Engineering at Rice University), • computational nanoelectronics with Mark Lundstrom (Electrical & Computer Engineering at Purdue University). These activities were followed, in 2020, by a Birkhäuser volume (edited by A. Grama and A. Sameh) containing two parts: part I consisting of some recent advances in high-performance algorithms, and part II consisting of some selected challenging computational science and engineering applications. Compiler-assisted cache coherence Cache coherence is a key problem in building shared memory multiprocessors. It was traditionally implemented in hardware via coherence protocols. However, the advent of systems like Cedar allowed one to consider a compiler-assisted implementation of cache coherence for parallel programs, with minimal and completely local hardware support. Where a hardware coherence protocol like MESI relies on remote invalidation of cache lines, a compiler-assisted protocol performs a local self-invalidation as directed by a compiler. CSRD researchers developed several different approaches to compiler-assisted coherence, including a scheme with directory assistance. All these schemes performed a post-invalidation at the end of a parallel region. This work has influenced research, with numerous citations across the decades until today. Compilers for GPUs Early CSRD work on program optimization for classical parallel computers also spurred the development of languages and compilers for more specialized accelerators, such as Graphics Processing Units (GPUs). For example, in the early 2000s, CSRD researcher Rudolf Eigenmann developed translation methods for compilers that enabled programs written in the standard OpenMP programming model to be executed efficiently on GPUs. Until then, GPUs had been programmed primarily in the specialized CUDA language. The new methods showed that high-level programming of GPUs was not only feasible for classical computational applications, but also for certain types of problems that exhibited irregular program patterns. This work incentivized further initiatives toward high-level programming models for GPUs and accelerators in general, such as OpenACC and OpenMP for accelerators. In turn, these initiatives contributed to the use of GPUs for a wide range of computational problems, including neural networks for deep learning, whose mathematical foundation was studied by Cybenko as discussed above. References Supercomputing University of Illinois System
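As an aside on the Guided Self-Scheduling section above, the shrinking-chunk behaviour it describes can be sketched in a few lines. The following Python snippet (an illustration added here, not CSRD code) uses the classic rule in which an idle processor claims roughly remaining/P iterations, so chunks shrink as the loop progresses:

```python
import math

def guided_chunks(total_iters: int, num_procs: int):
    """Yield successive chunk sizes under a simplified guided rule:
    each idle processor claims ceil(remaining / num_procs) iterations.
    This is an illustrative sketch, not CSRD's exact formulation."""
    remaining = total_iters
    while remaining > 0:
        chunk = math.ceil(remaining / num_procs)
        yield chunk
        remaining -= chunk

# Example: 100 iterations on 4 processors. Chunks start large
# (25, 19, 14, ...) and shrink toward 1, which reduces end-of-loop
# load imbalance relative to fixed-size chunking.
print(list(guided_chunks(100, 4)))
```

This decreasing-chunk strategy is essentially what survives today as the "guided" schedule kind in OpenMP, as mentioned above.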
University of Illinois Center for Supercomputing Research and Development
[ "Technology" ]
5,911
[ "Supercomputing" ]
74,090,169
https://en.wikipedia.org/wiki/It%C3%B4%E2%80%93Nisio%20theorem
The Itô–Nisio theorem is a theorem from probability theory that characterizes convergence in Banach spaces. The theorem shows the equivalence of the different types of convergence for sums of independent and symmetric random variables in Banach spaces. The Itô–Nisio theorem leads to a generalization of Wiener's construction of the Brownian motion. The symmetry of the distributions in the theorem is needed in infinite-dimensional spaces. The theorem was proven by Japanese mathematicians Kiyoshi Itô and Makiko Nisio in 1968. Statement Let E be a real separable Banach space with the norm-induced topology; we use the Borel σ-algebra and denote the dual space as E′. Let ⟨z, x⟩ := z(x) be the dual pairing for z in E′ and x in E, and let i be the imaginary unit. Let X₁, X₂, ... be independent and symmetric E-valued random variables defined on the same probability space, let Sₙ := X₁ + ... + Xₙ, and let μ be the probability measure of some E-valued random variable. The following are equivalent: (1) Sₙ converges almost surely. (2) Sₙ converges in probability. (3) The distribution of Sₙ converges to μ in the Lévy–Prokhorov metric. (4) The sequence of distributions of Sₙ is uniformly tight. (5) ⟨z, Sₙ⟩ converges in probability for every z in E′. (6) There exists a probability measure μ on E such that E[exp(i⟨z, Sₙ⟩)] → ∫ exp(i⟨z, x⟩) μ(dx) for every z in E′. Remarks: Since E is separable, point (3) (i.e. convergence in the Lévy–Prokhorov metric) is the same as convergence in distribution. If the symmetry condition is removed: in a finite-dimensional setting the equivalence remains true for all points except point (4) (i.e. the uniform tightness of the distributions of Sₙ), while in an infinite-dimensional setting the equivalence of the first three points remains true but the weaker conditions (5) and (6) do not always imply them. Literature References Probability theorems Banach spaces
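As a concrete, low-dimensional illustration (added here; not part of the theorem's literature), the real line is itself a separable Banach space, so the theorem applies to a random series with independent symmetric terms, such as Σ εₖ/k with random signs εₖ = ±1. The simulation below, a minimal sketch with arbitrary sample sizes, shows the partial sums stabilizing, consistent with almost-sure convergence:

```python
import random

random.seed(0)

def partial_sums(n_terms: int):
    """Partial sums S_n of the symmetric series sum_k eps_k / k,
    where eps_k are independent random signs (+1 or -1). On the real
    line, the Ito-Nisio theorem ties together almost-sure convergence,
    convergence in probability and convergence in distribution of S_n."""
    s, sums = 0.0, []
    for k in range(1, n_terms + 1):
        s += random.choice((-1.0, 1.0)) / k
        sums.append(s)
    return sums

sums = partial_sums(100_000)
# Late partial sums barely change, as expected for a convergent series.
print(sums[999], sums[9_999], sums[99_999])
```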
Itô–Nisio theorem
[ "Mathematics" ]
298
[ "Theorems in probability theory", "Mathematical theorems", "Mathematical problems" ]
74,090,321
https://en.wikipedia.org/wiki/HD%2033541
HD 33541, also known as HR 1683, is a white-hued star located in the northern circumpolar constellation Camelopardalis. It has an apparent magnitude of 5.83, making it faintly visible to the naked eye. Gaia DR3 parallax measurements imply a distance of 358 light years, and it is currently receding from the Sun with a positive heliocentric radial velocity. At its current distance, HD 33541's brightness is diminished by 0.16 magnitudes due to interstellar extinction, and it has an absolute magnitude of +0.58. The object has a stellar classification of A0 V, indicating that it is an ordinary A-type main-sequence star. It has 2.69 times the mass of the Sun and 2.52 times the Sun's radius. It radiates 69.3 times the luminosity of the Sun from its photosphere at an effective temperature of about 10,500 K, the value implied by its luminosity and radius. HD 33541 has an iron abundance 71% that of the Sun ([Fe/H] = −0.15) and it is estimated to be 300 million years old. The star spins modestly, with a low projected rotational velocity. HD 33541 was originally considered to be a solitary star. However, Abt & Morell (1995) suggested that HD 33541 may be a close binary with two components that each have rotational velocities of 10 km/s. A later paper gives separate rotational velocities for the primary and the secondary. It is now considered to be a spectroscopic binary with a period of 20.8 hours and a somewhat eccentric orbit, based on Gaia DR3 models. References A-type main-sequence stars Spectroscopic binaries Camelopardalis BD+73 00280 033541 024732 1683
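The quantities quoted above are related by standard formulas, as the following sketch shows (an illustration added here, not from the article's sources; small differences from the quoted catalogue values, e.g. in the absolute magnitude, are expected because the inputs are rounded):

```python
import math

# Rounded values quoted in the article.
m = 5.83        # apparent magnitude
d_ly = 358.0    # distance in light years
A = 0.16        # interstellar extinction in magnitudes
L = 69.3        # luminosity in solar units
R = 2.52        # radius in solar units
T_SUN = 5772.0  # nominal solar effective temperature in kelvin

# Extinction-corrected absolute magnitude: M = m - 5*log10(d_pc/10) - A.
d_pc = d_ly / 3.2616          # light years per parsec
M = m - 5 * math.log10(d_pc / 10.0) - A
print(f"absolute magnitude ~ {M:+.2f}")      # ~ +0.47 (quoted: +0.58)

# Effective temperature via the Stefan-Boltzmann law in solar units:
# L = R^2 * (T/T_sun)^4  =>  T = T_sun * (L/R^2)^(1/4).
T_eff = T_SUN * (L / R**2) ** 0.25
print(f"effective temperature ~ {T_eff:.0f} K")  # ~ 10,500 K
```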
HD 33541
[ "Astronomy" ]
367
[ "Camelopardalis", "Constellations" ]
74,091,597
https://en.wikipedia.org/wiki/Stroke-based%20sorting
Stroke-based sorting, also called stroke-based ordering or stroke-based order, is one of the five sorting methods frequently used in modern Chinese dictionaries, the others being radical-based sorting, pinyin-based sorting, bopomofo and the four-corner method. In addition to functioning as an independent sorting method, stroke-based sorting is often employed to support the other methods. For example, in Xinhua Dictionary (新华字典), Xiandai Hanyu Cidian (现代汉语词典) and the Oxford Chinese Dictionary, stroke-based sorting is used to sort homophones in Pinyin sorting, while in radical-based sorting it helps to sort the radical list, the characters under a common radical, as well as the list of characters difficult to look up by radicals. In stroke-based sorting, Chinese characters are ordered by different features of strokes, including stroke counts, stroke forms, stroke orders, stroke combinations, stroke positions, etc. Stroke-count sorting This method arranges characters according to their numbers of strokes ascendingly. A character with fewer strokes is put before those with more strokes. For example, the different characters in "汉字笔画, 漢字筆劃" (Chinese character strokes) are sorted into "汉(5)字(6)画(8)笔(10)[筆(12)畫(12)]漢(14)", where stroke counts are put in brackets. (Note that both 筆 and 畫 have 12 strokes, so their order is not determinable by stroke-count sorting.) Stroke-count sorting was first used in Zihui to arrange the radicals and the characters under each radical when the dictionary was published in 1615. It was also used in the Kangxi Dictionary when that dictionary was first compiled in the 1710s. Stroke-count–stroke-order sorting This is a combination of stroke-count sorting and stroke-order sorting. Characters are first arranged by stroke counts in ascending order. Then stroke-order sorting is employed to sort characters with the same number of strokes. The characters are first arranged by their first strokes according to an order of stroke form groups, such as "heng (横, ㇐), shu (竖, ㇑), pie (撇, ㇓), dian (点, ㇔), zhe (折, ㇕)", or "dian (点), heng (横), shu (竖), pie (撇), zhe (折)". If the first strokes of two characters belong to the same group, then sort by their second strokes in a similar way, and so on. In the example of the previous section, both 筆 and 畫 have 12 strokes. 筆 starts with stroke "㇓" of the pie (撇) group, and 畫 starts with "㇕" of the zhe (折) group, and pie comes before zhe in the group order, so 筆 comes before 畫. Hence the different characters in "汉字笔画, 漢字筆劃" are finally sorted into "汉(5)字(6)画(8)笔(10)筆(12㇓)畫(12㇕)漢(14)", where each character is put at its unique position. Stroke-count–stroke-order sorting was used in Xinhua Dictionary and Xiandai Hanyu Cidian before the national standard for stroke-based sorting was released in 1999. GB stroke-based order The Standard of GB13000.1 Character Set Chinese Character Order (Stroke-Based Order) (GB13000.1字符集汉字字序(笔画序)规范) is a standard released by the National Language Commission of China in 1999 for sorting Chinese characters by strokes. It is an enhanced version of the traditional stroke-count–stroke-order sorting. According to this standard: Two characters are first sorted by stroke counts. If they have the same stroke counts, sort by stroke order (of the five families of heng, shu, pie, dian and zhe). If the characters have the same stroke order, they will be sorted by the primary-secondary stroke order. For example, 子 and 孑 each have three strokes and are written, in stroke order, ㇐㇚㇐ and ㇐㇚㇀.
㇐ and ㇀ both belong to the heng family, so there is a tie under (2). Under (3), ㇐ is considered a primary stroke and sorts before the secondary stroke ㇀. As a result, 子 sorts before 孑. If two characters have the same stroke count, stroke order and primary-secondary stroke order, then sort them according to their modes of stroke combination. Stroke separation comes before stroke connection, and connection comes before stroke intersection. For example, 八, 人 and 乂 all have 2 strokes in the order of ㇓㇏. They sort in the order of 八, 人, 乂, because 八 has separated strokes, 人 has a simple connection, and 乂 has an intersection. This standard has been employed by the new editions of Xinhua Dictionary and Xiandai Hanyu Cidian. YES sorting YES is a simplified stroke-based sorting method free of stroke counting and grouping, without compromise in accuracy. Briefly speaking, YES arranges Chinese characters according to their stroke orders and an "alphabet" of 30 strokes: ㇐ ㇕ ㇅ ㇎ ㇡ ㇋ ㇊ ㇍ ㇈ ㇆ ㇇ ㇌ ㇀ ㇑ ㇗ ㇞ ㇉ ㄣ ㇙ ㇄ ㇟ ㇚ ㇓ ㇜ ㇛ ㇢ ㇔ ㇏ ㇂ built on the basis of Unicode CJK strokes. To compare the sort order of two characters, one expands each character into a string of strokes and compares the strings using the sort order of the 30 strokes, much like one sorts two words in a dictionary using the sort order of letters. Equivalently, one first decides whether the first stroke is sufficient to produce a sort (for example, because 汉 starts with ㇔ and 笔 starts with ㇓, 笔 sorts before 汉); if the first strokes happen to be identical, then one moves on to the second stroke (for example, 汉 expands to ㇔㇔... and 字 expands to ㇔㇑..., hence 字 sorts before 汉). Applied to the different characters of the earlier example, YES likewise puts each character at a unique position. YES sorting has been applied to the indexing of all the characters in Xinhua Zidian and Xiandai Hanyu Cidian. Word-sorting All of the aforementioned examples describe the sorting of single characters. To sort two words that consist of multiple characters: Select a method for comparing two characters. If the first character of word #1 sorts before the first character of word #2, then word #1 sorts before word #2. Otherwise, advance until a character that sorts differently is found, or until a word ends, in which case the shorter word sorts before the longer one. This method is used in the YES-CEDICT Chinese Dictionary, using YES for character comparison. See also Modern Chinese characters References Chinese lexicography Chinese character collation Chinese character components
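A minimal sketch of the comparison scheme described above (added for illustration; the stroke alphabet is abbreviated and the per-character stroke expansions are simplified, so both should be treated as assumptions rather than authoritative dictionary data):

```python
# Simplified YES-style comparison: a (shortened) stroke alphabet assigns
# each stroke a rank; characters expand into stroke strings; strings
# compare lexicographically by stroke rank, like words by letters.
STROKE_ORDER = "㇐㇕㇑㇓㇔㇏"                 # abbreviated alphabet, in sort order
RANK = {s: i for i, s in enumerate(STROKE_ORDER)}

STROKES = {                                  # simplified stroke expansions
    "八": "㇓㇏",
    "人": "㇓㇏",
    "十": "㇐㇑",
    "口": "㇑㇕㇐",
}

def stroke_key(char: str):
    """Sort key for one character: the tuple of its stroke ranks."""
    return tuple(RANK[s] for s in STROKES[char])

def word_key(word: str):
    """Sort key for a word: compare character by character; a shorter
    word that is a prefix of a longer one sorts first, as in the text."""
    return tuple(stroke_key(c) for c in word)

print(sorted(["口", "人", "十", "八"], key=stroke_key))
print(sorted(["十口", "十"], key=word_key))  # "十" sorts before "十口"
```

Note that 八 and 人 tie under this simplified scheme; the GB standard breaks such ties by the mode of stroke combination (separation before connection before intersection), which is omitted here.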
Stroke-based sorting
[ "Technology" ]
1,432
[ "Components", "Chinese character components" ]
74,091,926
https://en.wikipedia.org/wiki/Blended%20artificial%20intelligence
Blended artificial intelligence (blended AI) refers to the blending of different artificial intelligence techniques or approaches to achieve more robust and practical solutions. It involves integrating multiple AI models, algorithms, and technologies to leverage their respective strengths and compensate for their weaknesses. Background In the context of machine learning, blended AI can involve using different types of models, such as generative AI, decision trees, neural networks, and support vector machines. Combining their results makes predictions more accurate and reliable. This blending of models can be done through techniques like ensemble learning, where multiple models are trained independently and their predictions are combined to make a final decision. Blended AI can also involve combining different AI techniques or technologies, such as natural language processing, computer vision, and expert systems, to tackle complex problems that require a multi-dimensional approach. For example, in a sales scenario, AI could be used for lead generation and for gathering information from social media, such as LinkedIn posts, or for understanding a prospect's hobbies and interests. Another component could build customer profiles from past interactions and purchasing habits, together with information about the customer's industry and growth areas. Blended AI could also be used for predictive analytics, drawing on historical sales data, market trends, and external factors to generate accurate sales forecasts. Such methods are used to gauge and increase "efficiency, revenue, and productivity". Lastly, another component could integrate all the information into the CRM to build and maintain better prospect and customer profiles. Blended AI aims to leverage the strengths of different AI techniques and technologies, allowing them to complement each other and create more powerful and comprehensive AI solutions. By combining multiple approaches, blended AI aims to achieve better performance, higher accuracy, improved robustness, and enhanced capabilities in solving diverse and challenging problems. References External links F5: How AI can be blended into IT automation security - Intelligent CIO Africa A New Artificial Intelligence (AI) Study Proposes A 3D-Aware Blending Technique With Generative NeRFs Google blends AI with art to create these fantastic casual games The Perfect Blend: How to Successfully Combine AI and Human Approaches to Business Artificial intelligence engineering
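A minimal sketch of the ensemble-learning idea mentioned above (an illustration added here, not drawn from the article's sources): several independently trained models vote, and the majority label becomes the blended prediction. The model outputs below are hypothetical stand-ins; any mix of classifiers could be plugged in:

```python
from collections import Counter

def majority_vote(predictions):
    """Blend per-model predictions by majority vote.
    `predictions` is a list of label lists, one list per model."""
    blended = []
    for labels in zip(*predictions):     # one tuple of labels per sample
        blended.append(Counter(labels).most_common(1)[0][0])
    return blended

# Three hypothetical models' predictions for four samples.
model_a = ["spam", "ham", "ham", "spam"]
model_b = ["spam", "spam", "ham", "ham"]
model_c = ["spam", "ham", "ham", "ham"]

print(majority_vote([model_a, model_b, model_c]))
# ['spam', 'ham', 'ham', 'ham'] -- each label is the 2-of-3 majority.
```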
Blended artificial intelligence
[ "Engineering" ]
423
[ "Software engineering", "Artificial intelligence engineering" ]
74,092,846
https://en.wikipedia.org/wiki/Wetlands%20and%20islands%20in%20Germanic%20paganism
A prominent position was held by wetlands and islands in Germanic paganism, as in other pagan European cultures, featuring as sites of religious practice and belief from the Nordic Bronze Age until the Christianisation of the Germanic peoples. Depositions of items such as food, weapons and riding equipment have been discovered at locations such as rivers, fens and islands, and the practices varied over time and location. The interpretations of these finds vary, with proposed explanations including efforts to thank, placate or ask for help from supernatural beings that were believed to either live in, or be able to be reached through, the wetland. In addition to helpful beings, Old English literary sources record that some wetlands were also believed to be inhabited by harmful creatures such as the nicoras and þyrsas fought by the hero Beowulf. Scholars have argued that during the 5th century CE, the religious importance of watery places was diminished through the actions of the newly forming aristocratic warrior class that promoted a more centralised hall culture. Their cultic role was further reduced upon the introduction of institutionalized Christianity to Germanic-speaking areas, when a number of laws were issued that sought to suppress persisting worship at these sites. Despite this, some aspects of heathen religious practice and conceptions seem to have continued after the establishment of Christianity through adaptation and assimilation into the incoming faith, such as the persistence of depositions at holy sites. History Background and origins As elsewhere in Europe, wetland depositions in the areas later inhabited by Germanic peoples, such as England and Scandinavia, were performed in the New Stone Age and continued throughout the Bronze Age (when weapon deposits in Scandinavia began), the Iron Age and into the Viking Age. Throughout this long period there was significant regional and temporal variation, with different sites favouring deposition of different types of items at different points in time. Wetlands have further importance to archaeologists, as the waterlogged and acidic conditions preserve organic material that would otherwise have degraded, such as clothes and wood. Pre-Roman and Roman Iron Age Wetland depositions have been found in continental Germanic areas such as Oberdorla in Thuringia, which was used as a ritual site from the Hallstatt period into at least the Merovingian Period, following trends seen elsewhere of continuity in deposition practice despite migrations or language changes in the area. Large weapon depositions have been found at sites such as Hjortspring, Ådal, Esbøl and Skedemosse, with that at Hjortspring being the oldest of its type, occurring approximately 400 years before depositions began to occur widely throughout Northern Europe. Animal sacrifices in Skedemosse and Hjortspring are approximately contemporary, however, dating to around 300 BCE. It has been theorised that the intensification of deposition in the late Pre-Roman Iron Age at sites like Skedemosse resulted from cultural contact with Celtic peoples, who began to spread out over Europe at that time and had similar deposition practices. Bog bodies typically linked to cultural practices of early Germanic peoples cluster around 600 BCE to 300 CE, and include individuals such as the Tollund Man and the Osterby Man.
Throughout the 1st millennium CE, whole or parts of infants were deposited throughout the Germanic area, both in naturally occurring wetlands, such as bogs, and in manmade "wells" at settlements such as Trelleborg, which may have been perceived as "cultivated bogs": wetlands built at settlements to be tended by the population. The cause of death of the infants found remains unclear. Human depositions have been further linked to an account in Tacitus' Germania of the ritual washing of the goddess Nerthus, after which those who cleaned her were drowned. He also states that the Germanic peoples drowned individuals who transgressed certain societal rules. Given that bog bodies typically do not show signs of having died by drowning, it has been suggested that these precise details may not be accurate. Despite the number of famous finds, human remains are notably rare in comparison to other types of deposition. Germanic Iron Age Fibulae and bracteates were also placed either in, or at the edges of, wetlands during the 5th to the first half of the 6th century. Large depositions of weapons cease to occur after the end of the Migration Period, with only small depositions continuing into the Viking Age, and at different sites than before. The gradual reduction of wetland depositions around the 5th century CE has been linked by Terry Gunnell with the centralising of religious traditions and the rise in prominence of halls and the male warrior elite. He further proposes that during this period, female figures associated with bodies of water declined in prominence, and their conception as rulers of realms of the dead was replaced by developing ideas of Valhöll. Similarly, it has been proposed that the stabilisation of the elite class during the 6th century CE led to fewer conflicts, resulting in fewer war spoils being deposited in wetlands. Wetland deposition of artefacts was practised in Britain prior to the Anglo-Saxon settlement and continued into the Anglo-Saxon period. Finds consist principally of weapons but also include other items such as horse equipment, jewellery and tools. During the Christianisation of Anglo-Saxon England in the 7th century CE, and the subsequent establishment of institutionalised Christianity in the 10th century CE, some wetland practices were made illegal in attempts to suppress them, resulting in wetland depositions continuing at a reduced frequency. This attempt to suppress practices perceived as heathen is paralleled during this period in continental Europe in cases such as the Indiculus superstitionum et paganiarum, which includes well worship in its list of condemned practices deemed pagan or superstitious from around Saxony. Other examples include Langobardic law compiled in 727 CE, which made it a fineable offence to worship at trees and wells, and the Capitulatio de partibus Saxoniae composed in 769 CE, which further forbids worship at wells. In other cases, the meaning of wetland practices was altered to fit the context of the incoming religion. Specific springs and wells were connected to specific saints, and baptisms were sometimes performed in rivers. Furthermore, rivers continued to act as boundaries to liminal places, with monasteries often being built so as to be accessed by crossing rivers.
It has been further argued that the conception of wetlands as home to supernatural beings remained widespread, as in accounts of the Anglo-Saxon saint Guthlac of Crowland, including the Latin vita and the poems 'Guthlac A & B', in which the saint chooses to live in a fen that is home to evil spirits. Viking Age and later The reduction in depositions seen in Scandinavia during the Germanic Iron Age was not permanent, with depositions of items such as weapons, jewellery, coins and tools resurfacing during the late 8th century until the beginning of the 11th century CE, when depositions again declined, coinciding with the increased establishment of Christianity. Weapon depositions in the Viking Age continue the practice seen previously in the Germanic Iron Age, with only small numbers of items found, and often at different sites than before. These sites include wetlands in regions inhabited by the Viking diaspora, in regions such as Ireland and modern France and the Netherlands. Consistent with this, the Byzantine De Administrando Imperio describes the Rūs Vikings performing sacrifices on St. Gregory's Island in the Dnipro river, which has been linked to finds of Scandinavian swords in the region. Other written accounts include that of the 10th-century Andalusian traveller Ibrahim ibn Yaqub, who describes how those living in Hedeby would throw excess children into the sea. The deposition of weapons in wetlands may be reflected in the names of rivers in Nordic mythology, such as Geirvimul ("the one bobbing with spears"), another river whose name is translated as "the stinging", and Slíðr ("the dangerously sharp"). Of these three rivers, the first two run around the sanctuaries or homes of the gods, while Slíðr flows through the lands of humans before falling into Hel and is explicitly described as flowing with swords and seaxes. In the case of England, the settlement of North-Germanic peoples coincides with an increase in wetland depositions in the region. Objects pertaining to both Anglo-Saxon and Scandinavian art styles have been found, principally in the Thames, Lea and Witham. Notable finds include the Seax of Beagnoth and the Nene River Ring. In both England and Scandinavia, deposits often cluster around crossing points of rivers such as bridges and fords. Among the sites with the most discovered weapons is the Danish lake Tissø, by which a settlement has further been found that could only be reached in the Viking Age using a 50 m long wooden bridge. At this site are also two deviant burials dated to the 11th century CE. Depositions have yet to be found, however, at the analogously named English lake Tyesmere. Due to the close resemblance between depositions at specific landscape features in England and elsewhere, it has been argued that the relatively low number of finds in England results from an under-representation in the archaeological record, be it through lack of discovery or reporting, rather than a lower prevalence of depositional practices. As with those imposed previously, laws issued in England by Cnut the Great between 1020 and 1023 CE forbade the worship of rivers and wells or springs, consistent with the archaeological record of depositions at those wetland sites. It has been suggested that this may refer to practices in England either by Scandinavians alone, or that the migrations of heathen Scandinavians led to a resurgence of Anglo-Saxon pagan practices.
Religious intention While wetland depositions have been interpreted by some scholars as accidental, such as being left behind after battles, the sizes of deposits and the condition of items within them suggest this idea does not explain most finds. Instead, they have typically been interpreted as votive offerings, in contrast to dryland deposits, which were viewed as hoards to be uncovered at a later date. Deposition of weapons has further been suggested to be an attempt to prevent the beings in the wetlands from harming those who are trying to cross them, ensuring safe passage. Many archaeologists have traditionally distinguished between "war-booty sacrifices", consisting of weapons and horse-riding gear, and "fertility sacrifices", consisting of anything else, such as agricultural produce, pottery, animal remains and humans. It is typically proposed that weapon deposits are to thank the gods for victory in battle. "Fertility sacrifices", on the other hand, are usually seen as part of a reciprocal process of giving. Weapon sacrifices are often believed by scholars to have been performed by the victors, thanking the gods by giving them the defeated side's war gear. An alternative suggestion is that deposition was intended to quell the power of the weapons, which would have been tainted by their association with killing. This division into two distinct classes has been challenged, however, with it being suggested that this strict dichotomy may not correspond well to the conceptions of those making the depositions. Some adults deposited in bogs have been interpreted as having been executed as a punishment or offered as a sacrifice. Others may have been buried there in a normal fashion. It has been proposed that infant deposition was justified by desperation and only resorted to in extreme cases. Consistent with this, the majority of infant depositions date to the Migration Period, a time of widespread strife and population movements. On the contrary, it has also been argued that infanticide was socially acceptable due to factors such as the high infant mortality rate, children not being seen as full humans until they reached certain milestones like first breastfeeding, and infanticide being safer for the mother than abortion. Similarities have been noted between the contemporary practices of infant deposition in wetlands and those in spaces in settlements, such as in postholes and beneath hearths, which were more common. It has been suggested that in the animist mindset of Germanic pagans in the 1st millennium CE, in which the boundaries between some objects and living beings were blurred, human bodies and infants may in certain contexts have been conceived of as animate objects that could be used as ritual tools, either intact or in pieces. It has also been suggested that placing bodies in bogs was a way of preventing them from returning as beings such as draugs. Notable outliers to these reasonings include the Skuldelev ships, which were intentionally scuttled for defensive purposes, blocking the entrance to Roskilde fjord, rather than serving a religious function. Inhabitation by supernatural beings Accounts such as Grímnismál describe lakes and bogs as the dwellings of female gods. Frigg lives in Fensalir ("fen halls") and Sága lives in Sökkvabekkr ("sunken benches"), where she drinks with Odin beneath the waves. Supernatural beings are also described as residing at watery sites described as brunnar (translated variously as "lakes", "wells" and "ponds").
These include Urðarbrunnr (where three norns live and the gods meet to give judgement) and Mímisbrunnr (associated with the wise being Mímir). Gods and other supernatural beings are also described as living across bodies of water on islands. The beginning of Grímnismál describes how two brothers get lost while fishing and are stranded, whereupon they are taken care of by Odin and Frigg in disguise. Gods could also be seen as being situated in specific geographical locations, such as Ægir on Læsø and Nerthus in a holy grove on an island in the sea. In Gautreks saga, Starkaðr meets Odin among other gods on an island near Hordaland, while in Jómsvíkinga saga, Earl Hákon goes to an island to blót to Þorgerðr Hölgabrúðr and Irpa. Overtly harmful beings are also described as living in wetlands in Old English accounts. In Beowulf, the eponymous hero kills nine beasts in the sea during his swimming competition with Breca the Bronding. He later journeys into a lake in the marshes in which Grendel's mother lives in order to fight her, where he finds her living in a hall beneath the water. Consistent with this, fens are described in Maxims II as the characteristic dwelling place of a þyrs, a type of being that includes Grendel. Nicors also feature in the poem, where they live beneath the surface of pools and are presented as terrifying and dangerous creatures. Similarly, the name of Fenrir, the wolf prophesied in Völuspá to eat Odin at Ragnarök, likely translates as "Fen-dweller". Toponymy Germanic placenames associated with water that have been proposed to derive from their historical role in pagan religious practice: Crossing points England Weeford (Wēoh ford), village in Staffordshire Wyfordby (Wēoh ford settlement), village in Leicestershire Islands Norway Goðeyjar (Islands of the gods), islands in Salten Helgøya (The holy island), island in Lake Mjøsa Sweden Helgö (Holy island), island in Mälaren Frösön (The island dedicated to the god Freyr), island in Jämtland. Wetlands Denmark Gudenå (Gods' stream), river in Jutland Tissø (Týr's or god's lake), lake in Zealand England Tyesmere (Tīw's mere), lake in Worcestershire Sweden Odensjö (Odin's lake), lake in Scania Significance of watery places Water as marking liminal spaces Wetlands, intertidal regions and seasonal and tidal islands have been interpreted as being conceptionally distinct in the cognitive landscapes of many past cultures. Similarly, watercourses, islands and bridges have often served as markers of territories, natural barriers, crossing points and facilitators of travel. It has been proposed that these qualities led early Germanic people to see them as liminal spaces in which supernatural encounters were more likely. Water is also often described as separating the lands of the living from both those of the dead and those of the gods, such as Gjǫll, which is mentioned in Gylfaginning as marking the border to Hel. The conception of lands of the dead being separated from those of the living frequently recurs across the world and is attested throughout Northwestern Europe from the 6th century CE onwards. The Byzantine historian Procopius describes the people of the Low Countries ferrying the souls of the dead to an island off the coast. Similarly, in Beowulf and the Prose Edda, the bodies of Scyld Scefing and Baldr are laid on ships and sent out to sea. In the latter case, the ship is burnt and the god is later found in Hel.
Similarly, in the prose section of Frá dauða Sinfjǫtla, a boatman identified by some scholars as Odin ferries the body of Sinfjǫtli across a fjord. These textual sources have been connected with wider evidence, such as archaeological finds of ship burials that occur in Northern Europe from the Iron Age, likely reflecting the idea that those who had died could reach the land of the dead by boat. Similarly, many Iron Age graves have been found on uninhabited islands, and many Iron Age and Viking Age grave fields are separated from settlements by streams. Furthermore, some mounds, such as those at Borre in Vestfold, have ditches around them that would have filled up with water at certain times of year, making them transient islands that could be reached by bridges built over the ditches. It has been suggested that bodies of water such as Odensjö were conceived of as passages to the otherworld where gods and other beings resided, similar to beliefs associated with saajve in South Saami tradition. Comparable beliefs have been noted in later Northern European folklore, such as people reaching the land of the elves by jumping into ponds, rivers or the sea in Icelandic folklore. Reflection of mythical locations in religious sites Mímisbrunnr has been connected to the waters of Mimling in Germany, and Mimesøa and Mimesjöen in Sweden. It has been argued that these names suggest that, as at Mímisbrunnr, there was a belief in a wise prophetic being living beneath these waters. Adam of Bremen describes in Gesta Hammaburgensis ecclesiae pontificum that at the temple at Uppsala, heathen sacrifices were made at the site of a tree and a spring or well. Similarities have been noted between this site and Yggdrasil and the wells that stand beneath its roots. It has been noted that the combination of trees and wells is common at pagan religious sites in accounts from the 6th century CE to the time of Charlemagne. Relationship with other practices It has been argued that the use of watery places should be seen in the context of the wider holy landscapes in the minds of the Germanic peoples, which also included other important features involved in religious practice such as burial mounds, temples, hills, fields and groves. See also Water and religion Notes Citations References Secondary Germanic paganism Wetlands in folklore Sacred islands Wetlands
Wetlands and islands in Germanic paganism
[ "Environmental_science" ]
3,868
[ "Hydrology", "Wetlands" ]
74,095,600
https://en.wikipedia.org/wiki/Pile%20Cloth%20Media%20Filtration
Pile Cloth Media Filtration is a mechanical process for the separation of organic and inorganic solids from liquids. It belongs to the processes of surface filtration and cake filtration where, in addition to the sieve effect, true filtration effects occur over the depth of the pile layer. Pile Cloth Media Filtration represents a branch of cloth filtration processes and is used for water and wastewater treatment at medium and large scale. In Pile Cloth Media Filtration, three-dimensional textile fabrics (pile cloth) are used as filter media. During cleaning of the pile layer, the filtration process continues uninterrupted. History and development In the 1970s the Swiss company Mecana S.A. began the development of cloth filtration. The needle felt applied initially has since fundamentally evolved through the use of pile cloth media as filter media. Needle felt, a needle-punched random-fiber nonwoven fabric consisting of a large number of small fibers, was gradually replaced by woven pile cloth media. The three-dimensional fabric, developed from a paint-roller material, has evolved into a technically engineered filter medium, the so-called pile cloth media. The main difference from needle felt is the flexible structure, which makes the removal of retained substances during filter cleaning much more effective for pile layers. The world's first Pile Cloth Media Filtration system was installed in Weinfelden (Switzerland) as a disc filter for the treatment of paper mill wastewater. In 2010, the Italian company MITA Water Technologies designed and marketed the first vertical-axis free-fiber cloth filters. The design aims to combine the advantages of classical horizontal-axis filters, in terms of total suspended solids removal, with a structural compactness suited to installation in small spaces (especially in industrial plants). The company has also proposed a method of washing filter cloths with citric acid. Filter media Today, woven pile cloth media are used as filter media, which is the reason for the name of the process. Woven pile cloths have a multidimensional structure consisting of a filter-active, fluidizable pile layer and a non-filter-active backing. The backing, made of continuous filament with large non-filter-active pores, serves as a support for the pile layer, which is composed of multiple superimposed filaments or fibers woven into the backing. The solids retention of the pile cloth media is determined only by the pile layer. The finer the individual filaments of the pile layer, the higher the solids retention and the smaller the particles that can be separated. Pile cloth media can be characterized, among other parameters, by the length of the erected pile [mm], the diameter of the individual filaments [μm], the specific surface area of the pile layer [m²/m²], the specific weight per unit area of the pile cloth media [g/m²] and the size of the flow-relevant pores [mm] of the backing. By design, pile cloth media and the pile layer have no single defined pore size. Effective cleaning of the pile layer is essential for continuous operation. Depending on the application, conventional standard fibers, microfibers or ultrafibers are used. Common materials for pile layers are polyester (PES) and polyamide (PA). Pile cloth media are subject to system-related mechanical stress caused by filter cleaning. However, significant wear of the pile fiber layer has not been detected. 
The service life of the pile fabrics is influenced by the application and the corresponding fouling behavior of the backing. Suction cleaning of the pile layer does not prevent biofouling or scaling on the backing, which results over time in an increase in cloth resistance and shorter backwash intervals. Fewer cleaning cycles generally extend the lifetime of the pile cloth media until maintenance or replacement. The effluent quality is not affected by the aging process of the pile cloth media. Functionality and design variations During filtration operation, suspended solids accumulate in the pile layer. With increasing solids retention by the pile layer, the hydraulic resistance increases, resulting in a rise of the water level or differential pressure. The pile cloth media is permanently and completely submerged during both filtration and filter cleaning, so that 100% of the filter surface is used. Regular discharge of the accumulated solids is necessary. If the raw-water level or the differential pressure exceeds a set threshold, filter cleaning is triggered. In this process, the solids layer formed on the outside is removed from the filter media by filter cleaning vacuum pump(s) applying differential pressure via suction bars in the reverse flow direction (inside-out cleaning). Cleaning is performed by rotating the filter disc/drum or by a static filter support with a movable cleaning unit. Due to the negative pressure, the pile layer erects (fluidization), so that the solids are removed and vacuumed away. In normal operation mode, the pile layer lies flat against the backing, so that a filter cake is formed again. Due to biofouling or scaling on the backing, manual or chemical cleaning of the pile cloth media may also be necessary. During vacuum filter cleaning, filtration is not interrupted. Compared to sand filters or microstrainers, no additional backwash water is required for filter cleaning. A backwash water storage tank is therefore not required. Pile Cloth Media Filters can be used in free-flow systems and closed pressure systems. The pile cloth media is mounted either on a disc or a drum, which results in the different designs (disc filters and drum filters). Special designs include pressurized drum filters, diamond filters (grating support with rhombic cross-section) and plate filters. Disc filters are equipped with removable filter segments arranged around a central effluent tube. The cleaning system, consisting of a suction bar and a filter cleaning pump, is able to clean several discs simultaneously. Suction cleaning is also possible with a central pump and one valve per cleaning system. Free-flow systems consist of a filter unit with inlet and outlet weirs to hydraulically decouple the pile cloth filter machine from upstream and downstream processes. The filter machine is mounted in a filter tank and is usually equipped with an inlet weir (with optional penstock regulation), an outlet weir, an emergency overflow and a level measurement. The raw water flows through the fully submerged filter construction from the outside to the inside (outside-in filtration), passing through the pile cloth media, which retains suspended solids, into the central effluent tube. The filtrate then passes through the ascending pipe and the outlet weir. The zero water level, corresponding to the height of the outlet weir, serves as a reference for plant control. In closed systems, pressurized drum filters are used, whereby the feed water also flows through the pile cloth media from the outside to the inside. 
The system is controlled here via tank pressure. While in free-flow systems the differential pressure is commonly up to 60 mbar, it may exceed 1 bar in pressurized drum filters. Process characteristics The design is based on the pile-cloth-media- and fluid-specific solids loading capacity (σ) [g/m²], the solids surface loading rate (SLR) [g/m²/h], the filter velocity (vF) [m/h] and the solids content in the feed flow at average and maximum loading. The selection of pile cloth media depends on the effluent requirements and the solids content in the raw water. The pile cloth media properties and the efficiency of the suction cleaning system determine the SLR. Due to a high achievable SLR of more than 800 g/m²/h, Pile Cloth Media Filters are space-saving compared to many other separation processes (such as sedimentation, flotation and sand filters) and thus have a low footprint. Disc filters in particular have high specific filter surface areas of up to 9 m²/m². Due to the low hydraulic losses (max. ca. 50 cm), pile cloth media filters can be operated in a very energy-efficient and cost-effective way (approximately 0.3 to 20 Wh/m³ of treated water, depending on application, solids loading and number of cleaning intervals). The maximum filter velocity is determined by the filter design and is not limited by the filter media. In free-flow systems the maximum filter velocity is generally 8–16 m/h (up to 20 m/h). Pressurized drum filters, on the other hand, can be operated with a maximum filter velocity of up to 60 m/h. The filter velocity has no direct influence on the effluent quality, and Pile Cloth Media Filters are unaffected by peak loads. The backwash water quantity is directly proportional to the frequency of the cleaning cycles and depends on the pile cloth media used, the solids loading and the behavior of the filtered substances. The amount of backwash water can be estimated from the number of filters, the filter surface area, and the cleaning duration and frequency. The characteristics of the sludge water depend on the backwash frequency and the type and quantity of the retained solids. Applications Pile Cloth Media Filters are used in municipal and industrial wastewater treatment, water reuse, road runoff and combined sewer treatment, drinking water treatment and desalination. Among other solids, algae, helminth eggs, microplastics, tire wear, phosphorus and powdered activated carbon can be effectively removed, depending on the application. The following are examples of potential application areas: Industrial wastewater treatment Combined sewer overflow Surface water and drinking water treatment Substitute for clarifiers in biofilm processes Tertiary filtration Phosphorus removal Primary filtration Road runoff Micropollutant removal (retention of powdered activated carbon, pre-filtration before ozonation or granular activated carbon filters) Pre-/Post-Filtration for moving bed reactors Pre-Filtration before disinfection (e.g. UV or ozone applications) Water reuse (e.g. irrigation in agriculture, process water supply, ...) Pre-Filtration for desalination plants Pre-Filtration for cooling water or heat exchangers (e.g. heat recovery from wastewater or surface water) Stormwater treatment Algae harvesting References Water filters Sewage treatment plants 1970s introductions
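As an illustration of the sizing logic in the Process characteristics section above, the following minimal sketch estimates the required filter area from the solids surface loading rate and the maximum filter velocity. All numbers, names and the simple two-limit rule are illustrative assumptions, not vendor design values or a documented design procedure.

```python
# Hypothetical sizing sketch for a pile cloth media filter.
# All figures are illustrative assumptions, not vendor design values.

def required_filter_area_m2(flow_m3_per_h, tss_g_per_m3,
                            slr_g_per_m2_h, v_max_m_per_h):
    """Return the filter area [m²] satisfying both design limits:
    the solids surface loading rate (SLR) and the maximum filter velocity."""
    area_by_solids = flow_m3_per_h * tss_g_per_m3 / slr_g_per_m2_h
    area_by_velocity = flow_m3_per_h / v_max_m_per_h
    return max(area_by_solids, area_by_velocity)

# Example: 500 m³/h of tertiary effluent at 40 g/m³ TSS,
# a design SLR of 400 g/m²/h and a free-flow limit of 12 m/h.
area = required_filter_area_m2(500.0, 40.0, 400.0, 12.0)
print(f"Required filter area: {area:.1f} m²")  # solids-limited: 50.0 m²
```

In this example the solids limit governs (50.0 m² versus about 41.7 m² from the velocity limit), so the SLR rather than the filter velocity determines the installed area.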
Pile Cloth Media Filtration
[ "Chemistry" ]
2,089
[ "Water treatment", "Water filters", "Filters" ]
74,096,650
https://en.wikipedia.org/wiki/Wubuntu
Wubuntu (also known as "Windows Ubuntu") is a Brazilian Linux distribution for PCs based on the Kubuntu distribution. The first version of the system was released in January 2022. Wubuntu comes bundled with software such as the OnlyOffice office suite, web browsers, the Kodi media center, communication tools, support for Windows .exe applications, Android app support and multimedia software. The distribution also comes bundled with Wine, a compatibility layer for running Windows applications. External links Wubuntu Operating System on SourceForge.net Linux distributions Ubuntu derivatives 2007 software X86-64 Linux distributions
Wubuntu
[ "Technology" ]
128
[ "Operating system stubs", "Computing stubs" ]
66,855,594
https://en.wikipedia.org/wiki/ID2299
ID2299 is an elliptical galaxy 9 billion light-years away. It was found and detailed in January 2021, owing to its phenomenon of catastrophic gas loss. Unless the observations are misleading or a poorly understood mechanism is at work, this loss is due to a catastrophic merger, which also triggered a region of rapid star formation within the galaxy. ID2299's high star formation rate is far outweighed by its ejection of gas. Its tidal tail has grown to contain approximately half of the galaxy's gas. ID2299 is extrapolated to lose so much more gas that it will remain active, that is, capable of new star formation, for only a few more tens of millions of years. Observation This galaxy was observed for the first time with the Atacama Large Millimeter Array (ALMA), the largest radio telescope in the world, located in Chile, which scans the sky for distant variations in radiation. Astronomers thereby observed an extreme instance of the "death" of a galaxy through the loss of its gas. An artist's impression was made by Martin Kornmesser, a graphic artist at ESO (the European Southern Observatory). It represents, in intensified form, the visible and near-visible wavelength counterparts of the observed radiation, which cannot be picked up with present equipment because of the great lengthening of wavelength (redshift). On 11 January 2021 the study was published in the journal Nature Astronomy. Characteristics Distance The light from this galaxy has taken about 9 billion years to reach the Earth. When astronomers observe ID2299, they see it as it was 9 billion years ago, while it is extrapolated to be about 13 billion light-years away today. Context Deep-space astronomical observations strongly imply that around 2,000 billion galaxies exist or have existed, each on average composed of billions to hundreds of billions of stars. Every galaxy has gas as a key component, which allows it to produce stars; when all these stars die and no more are being created, the galaxy will cease to exist. This can happen well before tens of billions of years if a galaxy becomes inactive, such as by losing virtually all of its interstellar gas. Such a loss makes it impossible to create new stars. Composition ID2299 is observed with extreme, and very likely total, gas loss underway, which it ejects as a tidal tail. The 46% which forms the tail is being augmented at a rate of around per year. Within the other 54% are intense zones of star production totalling about per year. For comparison, the Milky Way now births about per year. If it continues at this rhythm, or similar, the galaxy has only a few tens of millions of years of star production left, a very small fraction of cosmic history. Explanation Major gas loss had been modelled as likely arising either from stellar winds, from star formation, or from relativistic jets and other ejections from the supermassive black hole and its sphere of influence in the galactic nucleus. The accretion of matter there is accompanied by the emission of large amounts of energy and the appearance of powerful winds, capable of sweeping away the galaxy's gas. With high probability, the data from ID2299 present the scientists with another mechanism: the collision of galaxies. Even though astronomers observed the galaxy for only a few minutes, they concluded that this tidal tail, which will lead to the death of ID2299, is the result of a catastrophic collision between two galaxies, integral to the form and fate of ID2299. 
In this earlier stage of the universe, galaxies were closer together, so more mergers took place, many dislodging large quantities of their respective interstellar matter. If it is matched by very similar observations, as the working group hypothesises it will be, this catastrophic merger mechanism contributed to shaping the make-up and distribution of the later surviving galaxies, including their host galaxy clusters and superclusters, as seen in the more local universe. References Galaxies
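The remaining-lifetime estimate quoted above follows from dividing the gas reservoir by the total rate at which gas is consumed by star formation and ejected in the tail. The figures below are hypothetical stand-ins for the measured values, which are not reproduced in this text:

\[ t_{\mathrm{depletion}} \approx \frac{M_{\mathrm{gas}}}{\dot{M}_{\mathrm{SFR}} + \dot{M}_{\mathrm{ejection}}} \]

For example, a reservoir of order \(5\times10^{10}\,M_\odot\) drained at a combined rate of order \(2\times10^{3}\,M_\odot\,\mathrm{yr^{-1}}\) would give \(t \sim 2.5\times10^{7}\) years, i.e. a few tens of millions of years.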
ID2299
[ "Astronomy" ]
824
[ "Galaxies", "Astronomical objects" ]
66,856,176
https://en.wikipedia.org/wiki/Measurement%20dysfunction
Measurement dysfunction describes a situation or behavior in which metrics and statistics, and especially their meaning (or communicated meaning), become problematic through misuse. In areas such as human resources (performance measurement), technology (safety), finance and health, measurement dysfunction is a critical concern, as it can lead to negative outcomes and wrong predictions or forecasts. Practices to avoid: Rewarding the wrong behavior (including rewarding people who manipulate metrics) Measuring the wrong things Measuring either not enough or too much Cheating or data manipulation (intentional, or unintentional due to wrong calculation models, systematic errors, human errors, etc.) On eliminating dysfunctional measurement: Establish and monitor the adoption of, and adherence to, policies for good, functional measurement Support technical correctness Periodically evaluate the information need and the value delivered by measurements Trivia "What gets measured gets manipulated." See also Measurement uncertainty Leadership Performance measurement Plagiarism OKR Corporate culture Verification and validation Scientific rigor References Measurement
Measurement dysfunction
[ "Physics", "Mathematics" ]
198
[ "Quantity", "Physical quantities", "Measurement", "Size" ]
66,856,352
https://en.wikipedia.org/wiki/Ammonium%20propionate
Ammonium propionate or ammonium propanoate is the ammonium salt of propionic acid. It has the chemical formula NH4(C2H5COO). Reaction It is formed by the reaction of propionic acid and ammonia. Uses It is used in several products, including fertilizers, water treatment chemicals, and plant protection products. It is also used in various sectors, such as manufacturing, forestry, agriculture, and fishing. It also serves as an antiseptic, antifungal agent, antimould agent, and preservative in the feed and food industries. Ammonium propionate also prevents spoilage of cosmetics by inhibiting bacterial growth. See also Calcium propionate Potassium propionate Sodium propionate References Ammonium compounds
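The formation reaction described above can be written out as a balanced equation (a standard acid–base neutralization, shown here for clarity):

\[ \mathrm{C_2H_5COOH + NH_3 \longrightarrow NH_4(C_2H_5COO)} \]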
Ammonium propionate
[ "Chemistry" ]
165
[ "Ammonium compounds", "Salts" ]
66,857,053
https://en.wikipedia.org/wiki/Calgon%20%28water%20softener%29
Calgon is a brand of water softener products owned by the Anglo-Dutch company Reckitt Benckiser. Advertising In Portugal, the same popular Calgon advertising jingle has been in use for almost 30 years. In Italy, Calgon was sold as Calfort from 1965 to early 2008. In the UK and Ireland, Calgon began advertising on television in March 1985, and the campaign is still in use today. Criticism In May 2011 a study by Which? magazine found no evidence that washing machines last longer when treated with Calgon under "normal" washing conditions. Calgon disputes this, however. In October 2011, the Dutch TROS TV program Radar likewise concluded that Calgon water softener is not necessary under "normal" washing conditions for Dutch customers. References Reckitt brands Cleaning products American brands
Calgon (water softener)
[ "Chemistry" ]
166
[ "Cleaning products", "Products of chemical industry" ]
66,857,299
https://en.wikipedia.org/wiki/Wittig%20reagents
In organic chemistry, Wittig reagents are organophosphorus compounds of the formula R3P=CHR', where R is usually phenyl. They are used to convert ketones and aldehydes to alkenes: R3P=CHR' + R''2C=O → R''2C=CHR' + R3P=O Preparation Because they typically hydrolyze and oxidize readily, Wittig reagents are prepared using air-free techniques. They are typically generated and used in situ. THF is a typical solvent. Some are sufficiently stable to be sold commercially. Formation of phosphonium salt Wittig reagents are usually prepared from a phosphonium salt, which is in turn prepared by the quaternization of triphenylphosphine with an alkyl halide. Wittig reagents are usually derived from a primary alkyl halide. Quaternization of triphenylphosphine with secondary halides is typically inefficient. For this reason, Wittig reagents are rarely used to prepare tetrasubstituted alkenes. Bases for deprotonation of phosphonium salts The alkylphosphonium salt is deprotonated with a strong base such as n-butyllithium: [Ph3P+CH2R]X− + C4H9Li → Ph3P=CHR + LiX + C4H10 Besides n-butyllithium (nBuLi), other strong bases like sodium and potassium t-butoxide (tBuONa, tBuOK), lithium, sodium and potassium hexamethyldisilazide (LiHMDS, NaHMDS, KHMDS, where HMDS = N(SiMe3)2), or sodium hydride (NaH) are also commonly used. For stabilized Wittig reagents bearing conjugated electron-withdrawing groups, even relatively weak bases like aqueous sodium hydroxide or potassium carbonate can be employed. The identification of a suitable base is often an important step when optimizing a Wittig reaction. Because phosphonium ylides are seldom isolated, the byproduct(s) generated upon deprotonation essentially play the role of an additive in a Wittig reaction. As a result, the choice of base has a strong influence on the efficiency and, when applicable, the stereochemical outcome of the Wittig reaction. Substituent effects Electron-withdrawing groups (EWGs) enhance the ease of deprotonation of phosphonium salts. This behavior is illustrated by the finding that deprotonation of triphenylcarbethoxymethylphosphonium requires only sodium hydroxide. The resulting triphenylcarbethoxymethylenephosphorane is somewhat air-stable. It is, however, less reactive than ylides lacking EWGs. For example, such stabilized ylides usually fail to react with ketones, necessitating the use of the Horner–Wadsworth–Emmons reaction as an alternative. Such stabilized ylides usually give rise to an E-alkene product when they react, rather than the more usual Z-alkene. Reactions Olefination Wittig reagents are used for olefination reactions, i.e. the Wittig reaction. Protonation Wittig reagents are prepared by deprotonation of alkyl phosphonium salts, and this reaction can be reversed. The methodology can be useful in the preparation of unusual Wittig reagents. Alkylation Alkylation of Ph3P=CH2 with a primary alkyl halide R−CH2−X produces substituted phosphonium salts: Ph3P=CH2 + RCH2X → Ph3P+ CH2CH2R X− These salts can be deprotonated in the usual way to give Ph3P=CH−CH2R. Deprotonation Although ylides are "electron-rich", they are susceptible to deprotonation of alkyl substituents. Treatment of Me3PCH2 with butyllithium affords Me2P(CH2)2Li. Having carbanion-like properties, lithiated ylides function as ligands. Thus Me2P(CH2)2Li is a potential bidentate ligand. 
Examples (Chloromethylene)triphenylphosphorane Methoxymethylenetriphenylphosphorane Methylenetriphenylphosphorane Triphenylcarbethoxymethylenephosphorane Hexaphenylcarbodiphosphorane Structure Wittig reagents are usually described as a combination of two resonance structures: Ph3P+CR2− ↔ Ph3P=CR2 The former is called the ylide form and the latter is called the phosphorane form, which is the more familiar representation. Crystallographic characterization of methylenetriphenylphosphorane shows that the phosphorus atom is tetrahedral. The PCH2 centre is planar and the P=CH2 distance is 1.661 Å, which is much shorter than the other P-C distances (1.823 Å). External links Wittig reaction in Organic Syntheses, Coll. Vol. 10, p. 703 (2004); Vol. 75, p. 153 (1998). (Article) Wittig reaction in Organic Syntheses, Coll. Vol. 5, p. 361 (1973); Vol. 45, p. 33 (1965). (Article) Visual depiction on Tumblr of a Wittig reagent synthesis References organophosphorus compounds
Wittig reagents
[ "Chemistry" ]
1,214
[ "Organophosphorus compounds", "Organic compounds", "Functional groups" ]
66,857,385
https://en.wikipedia.org/wiki/Huawei%20Mate%20X2
The Huawei Mate X2 is an Android-based high-end foldable smartphone produced by Huawei. The phone, unveiled on 22 February 2021, serves as the successor to the Mate X and Mate Xs. The phone was vastly redesigned from the previous generation, adopting a dual-screen design very similar to the Samsung Galaxy Z Fold 2. Design Unlike the Mate X and Mate Xs, the Mate X2 has dual displays: a foldable 8-inch display that is concealed when folded, and a smaller 6.45-inch display on the outside. The display format is very similar to that of the Samsung Galaxy Z Fold 2, which was released the previous year. The quad-camera array is situated on the back, opposite the second screen, and a selfie camera is present in a cutout in the upper left-hand corner of that smaller display. Unlike the Galaxy Z Fold 2, the Mate X2 lacks a camera on the side of the main screen. The device comes in four colors: Black, White, Light Blue, and Rose Gold. References External links Official website Huawei smartphones Android (operating system) devices Phablets Mobile phones introduced in 2021 Foldable smartphones Mobile phones with multiple rear cameras Mobile phones with infrared transmitter
Huawei Mate X2
[ "Technology" ]
251
[ "Mobile technology stubs", "Flagship smartphones", "Crossover devices", "Foldable smartphones", "Phablets", "Discontinued flagship smartphones" ]
66,858,127
https://en.wikipedia.org/wiki/Vigia%20%28nautical%29
A vigia is a warning on a nautical chart indicating a possible rock, shoal, or other hazard which has been reported but not yet verified or surveyed. Some non-existent vigias have remained on successive charts for centuries as a precaution by hesitant hydrographers. One such example was 'Las Casses Bank', a vigia between Menorca and Sardinia in the Mediterranean Sea which first appeared on charts in 1373 and remained on some charts as late as 1852. Another notable false vigia was 'Aitkins' Rock' off the northwest coast of Ireland. First reported in 1740, with six further reports over the following eighty years, the supposed rock was blamed for numerous lost ships. Surveys by the Royal Navy in 1824, 1827, and 1829 failed to locate the rock, until a final extensive six-week survey in 1840 using two brigs led to the conclusion that the rock had never existed. Captain Alexander Thomas Emeric Vidal, who led the final survey, noted that such false sightings were likely due to floating debris or whales. The term vigia is derived from the Spanish vigía or Portuguese vigia, from the Latin vigilia. See also References Cartography Hydrography Phantom islands
Vigia (nautical)
[ "Environmental_science" ]
246
[ "Hydrography", "Hydrology" ]
66,858,528
https://en.wikipedia.org/wiki/Blue%20Ridge%20Ophiolite
Blue Ridge Ophiolite is an ultramafic series of pods found in the Blue Ridge Mountains of the Appalachian mountain chain. The pods formed before the Taconic orogeny. Throughout the middle and late Ordovician period, the rocks were affected by regional metamorphism, resulting in altered mineralogy for some pods. Location The Blue Ridge Ophiolite occurs frequently throughout the Appalachian Mountains. Many pods are located in the western parts of North Carolina, such as the Newdale, Daybook, and Buck Creek dunites. Some of these pods also extend into Tennessee and South Carolina. Mineralogy The Blue Ridge Ophiolite can be broken up into two categories: altered and unaltered. Unaltered ultramafic The majority of the Blue Ridge Ophiolite has a minimally altered composition of dunite. Forsterite is the olivine end-member most consistently found in samples of the Blue Ridge Ophiolite. Other minerals found in noticeable amounts in the formation are orthopyroxene, clinopyroxene, and chromite. In rare cases garnets and plagioclase can be found in some samples. Unaltered samples of the Blue Ridge Ophiolite are green or brown. Samples tend to have a grainy, sugar-like texture with conchoidal fracture. In thin section the most abundant and most easily identified mineral is olivine, making up about 60% to 80% of the thin section. Altered ultramafic The mineralogy of these pods of the Blue Ridge Ophiolite shows evidence of metamorphism involving fluids, recorded in their altered metamorphic mineral assemblage. Minerals that appear in these rocks along with olivine are chlorite, talc, phlogopite, tremolite, and hornblende. Fluids and differential stress are major factors leading to the formation of metamorphic minerals in these rocks. When minerals like olivine are exposed to water, they break down to form talc, and through further alteration phlogopite can form. Through reworking, the minerals can form phyllite and other metamorphic textures. Altered samples of the Blue Ridge Ophiolite are green, with visible white and black crystals. In thin section some samples, such as the sample from Todd, North Carolina, contain no olivine. Altered minerals that can be seen in thin section are chlorite and chromite. Formation Evidence points to the protolith being a mid-ocean ridge mafic rock from around the time of the Taconic orogeny. Early Ordovician tectonic activity caused the first metamorphism of ophiolite pods in what would later become the Blue Ridge Mountains. The oldest dated sample of the Blue Ridge Ophiolite, at Buck Creek, North Carolina, is 458 million years old, as determined by rhenium–osmium dating. Examination of chromite in rock samples shows deformation as far back as the early to middle Ordovician period. Middle Ordovician deformation caused other metamorphic suites in the Appalachian Mountains and buried and altered ophiolite pods throughout the region. References Appalachian Mountains Ultramafic rocks
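As an illustration of the fluid-driven alteration described above, one textbook balanced reaction producing talc from olivine (forsterite) in the presence of silica-bearing water is shown below; it is a representative example, not necessarily the specific reaction documented for these pods:

\[ \mathrm{3\,Mg_2SiO_4 + 5\,SiO_2 + 2\,H_2O \longrightarrow 2\,Mg_3Si_4O_{10}(OH)_2} \]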
Blue Ridge Ophiolite
[ "Chemistry" ]
666
[ "Ultramafic rocks", "Igneous rocks by composition" ]
66,861,619
https://en.wikipedia.org/wiki/Osmundastrum%20pulchellum
Osmundastrum pulchellum is an extinct species of Osmundastrum, leptosporangiate ferns in the family Osmundaceae, from the lower Jurassic (Pliensbachian-Toarcian?) Djupadal Formation of Southern Sweden. It remained unstudied for 40 years. It is one of the most exceptional fossil ferns ever found, preserving intact calcified (thus dead) tissue with DNA and cells. Its exceptional preservation has allowed the study of its DNA relationships with extant Osmundaceae ferns, demonstrating 180 million years of genomic stasis. It has also preserved its biotic interactions and even ongoing mitosis. History and discovery The only known specimen was recovered from the mafic pyroclastic and epiclastic deposits of the Djupadal Formation, dated Pliensbachian-Toarcian(?), that are present near Korsaröd Lake, north of Höör, central Skåne, southern Sweden. The location was first studied by Gustav Andersson, a local farmer who was a passionate follower of scientific discoveries. Through his interest in geology, he identified several coeval volcanic plugs and, motivated by the presence of volcanic soils, excavated a location to the south of Korsaröd lake. Initially nothing was found, but a second, deeper dig revealed a series of aggregated wood remains in volcanic lahar-derived stones. Samples taken from the location were sent to the geologist Hans Tralau, who carried out palynological research on them, estimating an age of deposition of Late Toarcian-Aalenian(?). A petrified rhizome was also sent to Tralau, who understood the significance of the fossil and intended to publish it formally, but his untimely death in March 1977 made this impossible. The rhizome, along with the fossil wood, was archived at the Swedish Museum of Natural History, where the geologist Britta Lundblad also tried to publish it formally, which likewise proved impossible due to her retirement in 1986. The fossil lay forgotten in the archives of the museum until 2013, when it was rediscovered and studied, and found to preserve spectacular cellular detail rarely seen in fossils. In 2015, it was finally published as Osmunda pulchella by B. Bomfleur, G. W. Grimm and S. McLoughlin. The specific epithet pulchella (Latin diminutive of pulchra, 'beautiful', 'fair') was chosen in reference to the exquisite preservation and aesthetic appeal of the holotype specimen. The name Osmunda pulchella was mostly used in the main publications referring to it until 2017, when a revision of the cladistic status of the fossil Osmundales showed that the fossil was in fact a member of the genus Osmundastrum, so it became Osmundastrum pulchellum. Description The Osmundastrum pulchellum holotype is a calcified rhizome fragment about 6 cm long and up to 4 cm in diameter that probably comes from a small (approx. 50 cm tall) fern. It is composed of a small central stem surrounded by a compact mantle of helically arranged petiole bases and interspersed rootlets that extend outwards perpendicular to the axis, indicating a low rhizomatous rather than arborescent growth. This, together with the asymmetrical distribution of the roots, points to a creeping habit. The stem is around 7.5 mm in diameter and the pith about 1.5 mm in diameter and entirely parenchymatous. The pith lacks an internal endodermis or internal phloem, considered to be an original feature rather than a loss due to inadequate preservation. Traces of leaves and associated rootlets are present traversing the outer cortex. 
This specimen is well known for the quality of its preservation, which reveals cellular and subcellular detail: from tracheids with preserved wall thickenings to parenchyma cells containing preserved cellular contents. Some of the parenchyma cells contain oblate particles about 1–5 μm in diameter, interpreted as putative amyloplasts. Classification The exceptional preservation of Osmundastrum pulchellum has allowed the establishment of an evolutionary overview of royal ferns since the lower Jurassic. At its description as Osmunda pulchella, it was compared with Todea, Leptopteris, Plenasium and Claytosmunda, and found to bridge the morphological gap between extant Osmundastrum and the subgenus Osmunda within Osmunda, the closest species to Osmundastrum. It was shown that this species and the extant Osmundaceae share the same chromosome count and DNA content. In 2017, a re-examination of the phylogeny of the fossil Osmundales showed it to be a member of the genus Osmundastrum and a probable precursor of the modern Osmundastrum cinnamomeum. Later, a new species, Osmundastrum gvozdevae from the Middle Jurassic of the Russian Kursk Region, was recovered as a possible sister taxon. Biology Osmundastrum pulchellum is well known thanks to the exceptional preservation of detailed anatomical structures (e.g., pith, stele, petiole bases, adventitious roots, and even nuclei). It is also the only known case of fossilized ongoing mitosis. This is shown by the fact that the chromosomes and cell nuclei show marked structural heterogeneities compared to the cell walls during different stages of the cell cycle. A rapid calcite permineralization "froze" the organic molecules in time, which suggests the fern rhizome was fossilized over a very short time, perhaps even minutes, thanks to a fast lahar deposit. The tissues show cells with nuclei, nucleoli, and chromosomes during the interphase, prophase, prometaphase, and possibly anaphase of the cell cycle. Some cells also show pyknotic nuclei typical of cells undergoing apoptosis (programmed cell death). The subcellular detail is nearly unique, as other ferns preserved in similar conditions, for example Ashicaulis liaoningensis, lack it. Several biotic interactions are recorded in the rhizome. Exotic roots were recovered in the petiole bases, with a level of preservation that matches that of the whole plant, bearing a vasculature similar to that seen in modern lycophytes. They are interpreted as belonging to a small herbaceous epiphytic lycopsid, with its megaspores also linked with the specimen. Other sporangial fragments from other ferns (Deltoidospora toralis, Cibotiumspora jurienensis, etc.), known from the nearby deposits, were also recovered. A similar community was recovered on a Todea rhizome from the early Eocene of Patagonia, but with the epiphytic plants in Osmundastrum pulchellum being exclusively lycopsids and ferns, which may indicate that bryophytes had not yet evolved the epiphytic habit during the Jurassic. Possible oogonia of Peronosporomycetes are found in a parasitic or saprotrophic relation with the plant. If the identification of the oogonia of Peronosporomycetes is correct, then this implies regularly moist conditions for the growth of Osmundastrum pulchellum. Thread-like structures were found, identified as derived from a pathogenic or saprotrophic fungus invading necrotic tissues of the host plant. The interaction of the fungus with the plant was probably mycorrhizal. 
Excavations up to 715 μm in diameter are evident, filled with pellets that resemble the coprolites of oribatid mites, found also in Paleozoic and Mesozoic woods. Paleoenvironment The Djupadal Formation was deposited in the central Skåne region, linked to the late Early Jurassic volcanism. Several coeval volcanic necks are known in the region, such as Eneskogen (a large hill covered by Quaternary sediments, with a few boulders and basalt pillars exposed), Bonnarp (5–6 m high, covering roughly 5,000 square meters and overlain by Jurassic sediments) and Säte (comprising two basalt pipes, each roughly 6–10 m high and some 10,000 square meters in area). The Korsaröd member includes a volcanically derived Lagerstätte where this fern was found, probably resulting from a fast lahar deposition. Thanks to the data provided by fossilized wood rings, it was found that the Korsaröd location hosted a middle-latitude Mediterranean-type biome in the late Early Jurassic, with low rainfall. Superimposed on this climate were the effects of local, active Strombolian volcanism and hydrothermal activity. This location has been compared with modern Rotorua, New Zealand, considered an analogue for the type of environment represented in southern Sweden at this time. The locality was populated mostly by Cupressaceae trees (including specimens up to 5 m in circumference), known thanks to the great abundance of the wood genus Protophyllocladoxylon and the high presence of the pollen taxa Perinopollenites elatoides (also Cupressaceae) and Eucommiidites troedsonii (Erdtmanithecales). The underlying Höör Sandstone Formation hosts abundant Chasmatosporites spp. pollen produced by plants related to cycadophytes, while the Djupadal volcanogenic deposits are dominated by cypress-family pollen with an understorey component rich in putative Erdtmanithecales, both representing vegetation of disturbed habitats. The abundance of Protophyllocladoxylon sp. is also related to sporadic intraseasonal and multi-year episodes of growth disruption, probably due to the volcanic activity. Pollen, spores, wood and charcoal locally indicate a complex forest community subject to episodic fires and other forms of disturbance in an active volcanic landscape under a moderately seasonal climate. Osmundastrum pulchellum was a prominent understorey element in this vegetation and was probably involved in various competitive interactions with neighboring plant species, such as lycophytes, whose roots have been recovered inside the rhizome. The ferns were part of a fern- and conifer-rich vegetation occupying a topographic depression in the landscape (a moist gully) that was engulfed by one or more lahar deposits. References Osmundales Plants described in 2015 Jurassic Sweden Paleontology in Sweden Prehistoric plants
Osmundastrum pulchellum
[ "Biology" ]
2,223
[ "Prehistoric plants", "Plants" ]
66,868,680
https://en.wikipedia.org/wiki/Hydrogen%20assisted%20magnesiothermic%20reduction
The hydrogen assisted magnesiothermic reduction ("HAMR") process is a thermochemical process to obtain titanium metal from titanium oxides. A technical challenge in the production of titanium metal is the formation of oxide impurities. The Kroll process, which is widely used commercially, addresses this challenge by converting titanium ore (an oxide) into titanium tetrachloride (TiCl4). This intermediate is readily purified. It is reduced to titanium metal with magnesium. This technology is capital-, energy-, and carbon-intensive. One advantage of the Kroll process, and several like it, is that it starts with titanium ores (e.g., ilmenite), not a purified dioxide. The HAMR technology also entails a two-step process, starting with TiO2 under an atmosphere of hydrogen gas. The product TiH2 can be further processed to titanium metal through standard methods. Direct reduction of titanium oxides to titanium metal using magnesium alone does not occur; the novelty of the HAMR process is the inclusion of hydrogen. References Titanium processes Chemical processes Hydrogen economy
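A schematic overall stoichiometry consistent with the two-step description above is shown below. This is an illustrative sketch only; the actual process conditions, temperatures and any deoxygenation or purification steps are not specified in this text:

\[ \mathrm{TiO_2 + 2\,Mg + H_2 \longrightarrow TiH_2 + 2\,MgO} \]
\[ \mathrm{TiH_2 \longrightarrow Ti + H_2} \quad \text{(dehydrogenation during downstream processing)} \]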
Hydrogen assisted magnesiothermic reduction
[ "Chemistry" ]
229
[ "Metallurgical processes", "Titanium processes", "Chemical processes", "nan", "Chemical process engineering" ]
66,868,742
https://en.wikipedia.org/wiki/%CE%91-Hydroxyetizolam
α-Hydroxyetizolam is the pharmacologically active metabolite of etizolam. α-Hydroxyetizolam has a half-life of approximately 8.2 hours. Etizolam's other metabolite in humans, 8-hydroxyetizolam, is not pharmacologically active. See also Etizolam Alprazolam Brotizolam Clotiazepam Deschloroetizolam Metizolam Benzodiazepine dependence Benzodiazepine withdrawal syndrome Long-term effects of benzodiazepines References External links Inchem.org - Etizolam 2-Chlorophenyl compounds Designer drugs GABAA receptor positive allosteric modulators Hypnotics Thienotriazolodiazepines Human drug metabolites
Α-Hydroxyetizolam
[ "Chemistry", "Biology" ]
180
[ "Hypnotics", "Behavior", "Sleep", "Human drug metabolites", "Chemicals in medicine" ]
66,869,815
https://en.wikipedia.org/wiki/Antimanic%20drugs
Antimanic drugs are psychotropic drugs that are used to treat symptoms of mania. Though there are different causes of mania, the majority of cases are caused by bipolar disorder, and antimanic drugs therefore largely overlap with drugs for treating bipolar disorder. Since the 1970s, antimanic drugs have been used specifically to control the abnormal elevation of mood, or mood swings, during manic episodes. One purpose of antimanic drugs is to alleviate or shorten the duration of an acute mania. Another objective is to prevent further cycles of mania and maintain the improvement achieved during the acute episode. The mechanism of antimanic drugs is not yet fully known; it is proposed that they mostly affect chemical neurotransmitters in the brain. However, the use of antimanic drugs should be discussed with a doctor or pharmacist because of their side effects and interactions with other drugs and food. History Early discoveries and development During the early 19th century, sedatives were the most common treatment for manic patients. Alkaloids, among the most widely used sedatives, were introduced into antimanic treatment following the isolation of morphine from opium by the German pharmacist Friedrich Wilhelm Sertürner in 1805. The most successful alkaloids in antimanic treatment were chemicals isolated from plants of the Solanaceae family, which were known for their hallucinogenic effects. One of them was hyoscyamine, which was isolated by chemists from the German company E. Merck in 1839. Another alkaloid, hyoscine, was isolated by Albert Ladenburg in Germany in 1880. The alkaloids demonstrated sedative and hypnotic properties and became popular ingredients in psychotic cocktails for antimanic patients. In 1832, a chemist from Giessen, Justus von Liebig, synthesised chloral hydrate. It was introduced as a hypnotic in 1869 by the pharmacologist Mathias Otto Liebreich. In 1870, the American psychiatrist William J. Elstun reported that 5 patients at the Indiana Hospital for the Insane improved after receiving chloral hydrate. It soon replaced both morphine and the solanaceous alkaloids in antimanic treatment owing to its oral convenience. In the 19th century, the pharmacist Antoine Balard of Montpellier isolated bromides. They were first used as anticonvulsants, then widely used as sedatives in European mental hospitals. In 1863, barbiturates were synthesised by Adolf von Baeyer. From the beginning of the 20th century to the mid-1950s, barbiturates were the most widely used medication in antimanic treatment. During the 1950s and in the late 1960s, the antimanic efficacy of lithium salts was demonstrated. Its antimanic indication was authorised by the Food and Drug Administration (FDA) of the United States in 1970. In 1995, valproic acid, an anticonvulsant agent, was approved by the FDA for its antimanic indication. Carbamazepine, another anticonvulsant drug, was also developed and authorized by numerous regulatory organisations worldwide. Since 2000, various antipsychotic drugs have had their antimanic indications authorised by the FDA. They include olanzapine, risperidone, quetiapine, ziprasidone, aripiprazole, etc. List of drugs Common antimanic drugs fall into three main classes: lithium, anticonvulsants (such as valproate, carbamazepine and lamotrigine) and antipsychotics (such as olanzapine, risperidone, quetiapine, ziprasidone and aripiprazole). Mechanism of actions Lithium The precise mechanism of action of lithium is currently not known. The overall effect of lithium is most probably due to stimulating inhibitory neurotransmission and inhibiting excitatory transmission. 
Lithium is suggested to affect multiple neurotransmitter systems, including noradrenaline, dopamine, serotonin, and gamma-aminobutyric acid, along with second-messenger systems including cyclic adenosine monophosphate and cyclic guanosine monophosphate. In patients with bipolar disorder, lithium appears to increase neurogenesis and neuroprotective factors. Lithium may also preserve or increase cortical gray matter, white matter integrity, etc. Anticonvulsants Although some mechanisms of action of anticonvulsants remain unknown or only suspected, several main types of mechanism are recognized. The most common mechanism, adopted by both carbamazepine and lamotrigine, is action on voltage-dependent sodium channels. Other mechanisms include effects on calcium currents, GABA activity and glutamate receptors. Antipsychotics The mechanism of action of most antipsychotics is post-synaptic blockade of brain dopamine D2 receptors. Second-generation antipsychotics also bind serotonin 5-HT2 receptors with high affinity, which is suggested to be the cause of their lower risk of extrapyramidal side effects compared with first-generation antipsychotics. Adverse effects Lithium Common adverse effects of lithium include nausea, headache, diarrhoea and vomiting. When the concentration of lithium in serum rises to 1.5 mmol/L, toxicity may be induced. This leads to loss of coordination, drowsiness, weakness, slurred speech and blurred vision. Further adverse effects, including chaotic cardiac rhythm and brain-wave activity with seizures, may occur when the lithium concentration in serum rises to 2 mmol/L. Prolonged use of lithium may damage the body's ability to respond properly to the hormone vasopressin (ADH), which stimulates water reabsorption. This gives rise to diabetes insipidus, a disorder characterized by polyuria and polydipsia. Other adverse effects of lithium include tremor and weight gain. Lithium toxicity induced by overdose may be treated by hemodialysis, which removes excess lithium from the blood using a hemodialyzer. Anticonvulsants Most anticonvulsants may cause mild disturbances of the central nervous system, including dizziness, drowsiness, headache and nausea. Some anticonvulsants, such as gabapentin, may also lead to discomfort in the gastrointestinal system, such as constipation and diarrhoea. Valproic acid may cause more severe adverse effects such as hepatic dysfunction. Signs include persistent vomiting, abdominal pain, anorexia, jaundice, oedema and loss of seizure control. Antipsychotics The use of antipsychotics may lead to agitation, arrhythmia, sedation, sexual dysfunction, increased weight, urinary retention and vomiting. Hypotension may also be induced, in a dose-related manner. Interactions with other drugs and/or food Lithium Concurrent use of lithium and several types of drugs should be avoided, as these drugs increase the concentration of lithium in the body and in turn increase the risk of lithium toxicity; they include antihypertensive drugs, NSAIDs, ACE inhibitors and antibiotics. Among antihypertensive drugs, diuretics cause sodium loss, which reduces the renal clearance of lithium, while symptoms of lithium toxicity have also been reported when methyldopa is used together with lithium. NSAIDs have effects similar to diuretics, decreasing the renal clearance of lithium. 
Some other drugs decrease the concentration of lithium in the body, which decreases the effectiveness of lithium; these include verapamil, osmotic diuretics, carbonic anhydrase inhibitors, caffeine and theophylline. Anticonvulsants Compared with other anticonvulsants, valproate and carbamazepine are more likely to interact with other drugs owing to their involvement with cytochrome P450 enzymes. Valproate inhibits CYP enzymes and thereby increases the concentrations of drugs that are inactivated by these enzymes. Carbamazepine is a potent inducer of several types of cytochrome P450 enzymes and therefore decreases the effect of drugs that are metabolized by these enzymes. Affected drugs include corticosteroids, selective serotonin reuptake inhibitors, calcium channel blockers, oral contraceptives, warfarin, etc. The metabolism of anticonvulsants may be inhibited by antidepressants and antipsychotics, increasing the concentrations of anticonvulsants in the body, which in turn increases the adverse effects of anticonvulsants. Anticonvulsants also affect P-glycoprotein and other transport or metabolic proteins such as uridine diphosphate glucuronosyltransferase, altering the serum concentration of drugs handled by these proteins. Valproate binds extensively to plasma proteins and will therefore displace, or be displaced by, highly protein-bound drugs such as salicylates, naproxen and diazepam. Anticonvulsants may also affect each other, though these interactions are generally modest, since dose adjustments can usually compensate for any decrease in anticonvulsant efficacy that may occur. Antipsychotics Antipsychotics depend on cytochrome P450 enzymes for metabolism. Concurrent administration of medications that are inducers or inhibitors of these enzymes may increase or decrease the concentrations of antipsychotics in the body, and a change in dosage may be necessary to maintain the effectiveness of antipsychotics. Asenapine and quetiapine may interact with medications having similar side effects, such as sedation, anticholinergic effects, weight gain, hypotension or Parkinsonism, potentially causing more serious side effects. Olanzapine may be affected by cigarette smoke, while ziprasidone may cause QT prolongation if used with other drugs that have similar cardiac effects. References Psychoactive drugs Mania Treatment of bipolar disorder
Antimanic drugs
[ "Chemistry" ]
2,015
[ "Psychoactive drugs", "Neurochemistry" ]
68,289,612
https://en.wikipedia.org/wiki/Citicorp%20Center%20engineering%20crisis
In July 1978, a possible structural flaw was discovered in Citicorp Center, a skyscraper that had recently been completed in New York City. Workers surreptitiously made repairs over the next few months. The building, now known as Citigroup Center, occupied an entire block and was to be the headquarters of Citibank. Its structure, designed by William LeMessurier, had several unusual design features, including a raised base supported by four offset stilts and a column in the center, diagonal bracing which absorbed wind loads from upper stories, and a tuned mass damper with a 400-ton concrete weight floating on oil to counteract oscillation movements. It was the first building that used active mechanical elements (the tuned mass damper) for stabilization. Concerned about "quartering winds" directed diagonally toward the corners of the building, Princeton University undergraduate student Diane Hartley investigated the structural integrity of the building and found it wanting. However, it is not clear whether her study ever came to the attention of LeMessurier, the chief structural engineer of the building. At around the same time as Hartley was studying the question, an architecture student at New Jersey Institute of Technology (NJIT) named Lee DeCarolis chose the building as the topic for a report assignment in his freshman class on the basic concepts of structural engineering. A Professor Zoldos of NJIT expressed reservations to DeCarolis about the building's structure, and DeCarolis contacted LeMessurier, relaying what his professor had said. LeMessurier had also become aware that during the construction of the building, changes had been made to his design without his approval, and he reviewed the calculations of the building's stress parameters and the results of wind tunnel experiments. He concluded there was a problem. Worried that a high wind could cause the building to collapse, LeMessurier directed that the building be reinforced. The reinforcements were made stealthily at night while the offices in the building were open for regular operation during the day. The concern was for the integrity of the building structure in high wind conditions. Estimates at the time suggested that if the mass damper was disabled by a power failure, the building could be toppled by a quartering wind, with possibly many people killed as a result. The reinforcement effort was kept secret until 1995. The tuned mass damper has a major effect on the stability of the structure, so an emergency backup generator was installed and extra staff was assigned to ensure that it would keep working reliably during the structural reinforcement. The city had plans to evacuate the Citicorp Center and other surrounding buildings if high winds did occur. Hurricane Ella did threaten New York during the retrofitting, but it changed course before arriving. Ultimately, the retrofitting may not have been necessary. A NIST reassessment using modern technology later determined that the quartering wind loads were not the threat that LeMessurier and Hartley had thought. They recommended a reevaluation of the original building design to determine if the retrofitting had really been warranted. It is not clear whether the NIST-recommended reevaluation was ever conducted, although the question is only an academic one, since the reinforcement had been done. Background The Citigroup Center, originally known as Citicorp Center, is a 59-story skyscraper at 601 Lexington Avenue in the Midtown Manhattan neighborhood of New York City. 
It was designed by architect Hugh Stubbins as the headquarters for First National City Bank (later Citibank), along with associate architect Emery Roth & Sons. LeMessurier Associates and James Ruderman were the structural engineers, and Bethlehem Steel was the steel subcontractor. The building was dedicated on October 12, 1977. As part of Citicorp Center's construction, a new building for the site's previous occupant, St. Peter's Lutheran Church, was erected at the site's northwest corner; by agreement, it was supposed to be separate from the main tower. To avoid the church, the tower is supported by four stilts positioned underneath the centers of each of the tower's edges. (Early plans called for the supports to be placed under the tower's corners, but the agreement with the church prevented that.) To allow this design to work, Bill LeMessurier specified that load-bearing braces in the form of inverted chevrons be stacked above the stilts inside each face of the building. These braces are designed to distribute tension loads created by the wind from the upper stories down to the stilts. The long, multi-story diagonal braces had to be fabricated in sections and assembled on-site, requiring five joints in each brace. LeMessurier's original design for the chevron load braces used welded joints. To save money, Bethlehem Steel proposed changing the construction plans to use bolted joints, a design modification accepted by LeMessurier's office but unknown to the engineer himself until later. For his original design, LeMessurier focused primarily on the wind load on the building when the wind blew perpendicularly against the side of the building. Although he had initially studied winds from various directions, he had concluded that quartering winds were not the critical case, and came to rely primarily on the calculations for perpendicular winds. Perpendicular winds were the only calculations required by New York City building code. Such winds are normally the worst case, and typically a structural system capable of handling them can easily cope with wind from any other angle. Discovery In May 1978, after the building structure was completed, LeMessurier was designing a similar building with wind braces in Pittsburgh, and a potential contractor questioned the expense of using welded rather than bolted joints. LeMessurier asked his office how the welds went at the Citicorp construction and was then told that bolts had been substituted for the welded joints he had prescribed. LeMessurier had not seen the analysis that had been performed when this substitution was done. In June 1978, Princeton University engineering student Diane Hartley was writing her senior thesis about Citicorp Center's design at the suggestion of her professor, David Billington. As part of that work she analyzed the structural design and calculated stresses from quartering winds, finding them higher than the maximum expected stress values provided to her by LeMessurier's firm. Hartley asked her contact at the building design company, Joel S. Weinstein, a junior member of its staff, about the issue, and he provided her with a copy of the firm's calculations for perpendicular winds (but not for quartering winds). Only Weinstein was indicated as signing off on the copies of the calculations he provided to her, although she expected to see them initialed by a second person to confirm them, as was the usual practice in the industry. 
According to Hartley, she asked for calculations about quartering winds, and Weinstein said he would provide them but then didn't. Calculations for quartering winds were not required by the building code at the time, and were not common practice in the industry (although the design of the building was obviously unusual and would have justified special analysis). Weinstein assured her that the building could handle the necessary forces, and she did not further pursue the issue beyond writing about it in her thesis, which recorded her concerns and the response she received. In his feedback on Hartley's thesis, Billington questioned why her calculations weren't checked against figures from the firm. In June 1978, LeMessurier was answering questions via phone with a young architectural student, self-identified more than 40 years later as Lee DeCarolis. Those phone calls and the bolt substitutions convinced him to recalculate the wind loads, including the diagonal wind loads. On July 24, 1978, LeMessurier went to his office and conducted calculations on Citicorp Center's design. He had thought that perpendicular winds were the critical case for the building rather than quartering winds. He found that, for four of the eight tiers of chevrons, quartering winds would create a 40 percent increase in wind loads and a 160 percent increase in the load at the bolted joints. Citicorp Center's use of bolted joints and the loads from quartering winds would not have caused concern if these issues had been isolated. However, the combination of the two findings prompted LeMessurier to run tests on structural safety. He concluded that the original welded-joint design could withstand the load from both straight-on and quartering winds, but the modified bolted-joint design could be vulnerable to a near-hurricane force quartering wind. LeMessurier also discovered that his firm had used New York City's truss safety factor of 1:1 instead of the column safety factor of 1:2. On July 26, LeMessurier visited wind-tunnel expert Alan Garnett Davenport at the University of Western Ontario. Davenport's team conducted calculations on the building and concluded not only that LeMessurier's modeling was correct but also that, in a real-world situation, member stresses could increase by more than the 40 percent LeMessurier had calculated. LeMessurier then went to his Maine summer home on July 28 to analyze the issue. With the tuned mass damper active, LeMessurier estimated that a wind capable of toppling the building had a one in fifty-five chance of happening any year. But if the tuned mass damper could not function due to a power outage, a wind strong enough to cause the building's collapse had one chance in sixteen of happening any year. Repairs LeMessurier agonized over how to deal with the problem. If the issues were made known to the public, he risked ruining his professional reputation and causing panic in the immediate area surrounding the building and the occupants. LeMessurier considered never bringing the issue up, and he also briefly contemplated committing suicide before anyone else found out about the defect. LeMessurier ultimately contacted Stubbins's lawyer and insurance carrier. LeMessurier then contacted Citicorp's lawyers, the latter of which hired Leslie E. Robertson as an expert adviser. Citicorp accepted LeMessurier's proposal to weld steel plates over the bolted joints, and Karl Koch Erecting was hired for the welding process. 
Very few people were made aware of the issue, besides Citicorp leadership, mayor Ed Koch, acting buildings commissioner Irving E. Minkin, and the head of the welders' union. Construction crews started installing the welded panels at night in August 1978. Officials made no public mention of any possible structural issues, and the city's three major newspapers had gone on strike. Officials barely acknowledged the issue, instead describing the work as a routine procedure. Henry DeFord III of Citicorp claimed the Citicorp Center could withstand a 100-year wind and that there were no "noticeable problems in the building at all". As precautions, emergency generators were installed for the mass damper, strain gauges were placed on critical beams, and weather forecasters were engaged. Citicorp and local officials created emergency evacuation plans for the immediate neighborhood. However, these evacuation plans were not publicized at the time, although thousands of people could have been killed in a potential collapse. Six weeks into the work, a major storm (Hurricane Ella) was off Cape Hatteras and heading for New York. The reinforcement was only half-finished, with New York City hours away from emergency evacuation, but at that point the backup generators were in place, the mass damper was being continually monitored by special staff, and enough of the bracing had been completed that the tower was estimated to be able to survive a 200-year storm. Ella eventually turned eastward and veered out to sea. The weather watch ended on September 13. Repairs were completed in October 1978, and most of the newspapers remained out of production for weeks after the work was completed. LeMessurier claimed a wind strong enough to topple the repaired building would occur only once every 700 years. Stubbins and LeMessurier's insurance carrier covered all of the repair costs, estimated to be several million dollars. Publication Since no structural failure occurred, the work was not publicized until 1995, when a lengthy article appeared in The New Yorker. The 1995 story in The New Yorker described the student as a "young man, whose name has been lost in the swirl of subsequent events" who called LeMessurier saying "that his professor had assigned him to write a paper on the Citicorp tower". However, it was clear that Diane Hartley had never contacted LeMessurier directly; she had spoken only to Joel S. Weinstein. According to one second-hand report, when one of LeMessurier's colleagues asked whether the student was female, "LeMessurier responded that he didn't know because he had not actually spoken with the student." However, in a lecture on the subject, LeMessurier himself said he had spoken directly and repeatedly with the student and referred to the student as male. LeMessurier died in 2007, and there is no record of the interaction between Hartley and Weinstein ever having been raised with him. Hartley identified herself as the probable engineering student in 2011, more than fifteen years after the New Yorker article was published. However, another student at a different institution, Lee DeCarolis, later identified himself as the student that LeMessurier spoke to by phone, in an article published on the Online Ethics Center website. He said he learned in 2011 how he had played a part in the Citicorp Building history from reading Einstein's Refrigerator, a 2001 book by the high school teacher and podcaster Steve Silverman. By that time, LeMessurier had died. 
While he had mentioned his role to acquaintances and even written a play about it, DeCarolis revealed himself to the public at large only after a reassessment by NIST had determined that the effect of the wind loads had not been as severe as Hartley and LeMessurier had determined. Ethical questions According to a case study by the American Institute of Architects (AIA) Trust, "many have viewed the actions of LeMessurier as nearly heroic, and many engineering schools and ethics educators now use LeMessurier's story as an example of how to act ethically." However, others have criticized LeMessurier for his lack of oversight that led to the issues and his lack of honesty toward neighborhood residents, architects, engineers, and other members of the public when the problems were discovered. Architect Eugene Kremer discussed the ethical questions raised in this case in 2002. Kremer listed six key points that he perceived as ethically problematic: Analysis of wind loads: Although quartering wind loads were considered early in the design process, LeMessurier initially reached the conclusion that they were not the critical case for the building's structural analysis, and came to rely primarily on the calculations for perpendicular winds, as required by building codes, rather than checking all calculations and scenarios thoroughly. Design changes: The steel framework subcontractor (Bethlehem Steel) proposed to use bolted joints instead of full-penetration welding, and the proposal was approved by LeMessurier's firm without LeMessurier personally reviewing the details. Kremer reported that Robert McNamara, "the managing principal for Citicorp in LeMessurier Associates' Cambridge office", stated that after he reviewed the proposal, he "presented the suggested change to Bill LeMessurier", who "discussed [with him] the technical implications and did calculations as to what effect the bolt extension in the connection would have on the movement of the tower ...", and that LeMessurier's firm then approved the details of the change without LeMessurier personally reviewing those details. This somewhat contradicts LeMessurier, who said he wasn't aware of the substitution until after the work had been completed. Professional responsibility: Before LeMessurier decided to make Citicorp aware of the design defects, he briefly considered concealing the issues instead, or even committing suicide. Kremer said he should not have entertained such thoughts, even briefly. In contrast, the AIA study reports that it is clear LeMessurier never really considered the other options seriously. Public statements: In press interviews and releases of information at the time, officials either omitted or lied about details of the defects. Kremer cites the National Society of Professional Engineers (NSPE) Code of Ethics, which says engineers shall "Issue public statements only in an objective and truthful manner." Public safety: When Hurricane Ella threatened the city in August and September 1978, evacuation plans for the surrounding area were made in secret. 
Kremer cites the NSPE Board of Ethical Review (BER), which, although it was not commenting about the Citicorp Center specifically, said "withholding critical information from thousands of individuals whose safety is compromised over a significant period of time" is improper (although it could be argued that the Citicorp Center situation did not rise to meet that standard, when considering that no storms with high winds actually occurred in New York City during the period in question, and other steps had been taken to reduce the risk, and evacuation plans were ready if a high-wind storm were to occur). Advancing professional knowledge: Kremer argues that concealing the crisis for almost 20 years prevented some of the ethical and engineering analysis and learning that could have taken place if information had been released about the Citicorp Center case. References Sources (accessible only once without a subscription) 1978 in New York City History of structural engineering Ethics of science and technology Wind 1978 disasters in the United States 1970s in Manhattan
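A simplified way to see where a figure of roughly 40 percent for quartering winds can come from (an illustrative back-of-the-envelope sketch, not LeMessurier's actual analysis): if a wind of the same speed arrives diagonally, its load resolves into equal components on two adjacent faces, and in a statically simple square braced tube some diagonal members end up carrying the sum of both components:

```latex
\frac{F}{\sqrt{2}} + \frac{F}{\sqrt{2}} \;=\; \sqrt{2}\,F \;\approx\; 1.41\,F
```

That is an increase of about 41 percent over the perpendicular-wind case for those members. The real analysis of the braced frame is considerably more involved, and the later NIST reassessment found the actual quartering-wind loads to be less severe than this simple resolution suggests.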
Citicorp Center engineering crisis
[ "Technology", "Engineering" ]
3,649
[ "Structural engineering", "History of structural engineering", "Ethics of science and technology" ]
68,289,950
https://en.wikipedia.org/wiki/Mutualism%20Parasitism%20Continuum
The hypothesis or paradigm of the Mutualism–Parasitism Continuum postulates that compatible host–symbiont associations can occupy a broad continuum of interactions with different fitness outcomes for each member. At one end of the continuum lies obligate mutualism, where both host and symbiont benefit from the interaction and are dependent on it for survival. At the other end of the continuum, highly parasitic interactions can occur, where one member gains a fitness benefit at the expense of the other's survival. Between these extremes many different types of interaction are possible. The position of an interaction between mutualism and parasitism varies depending on the availability of resources: where environmental stress is generated by scarce resources, symbiotic relationships are formed, while in environments with an excess of resources, biological interactions turn to competition and parasitism. Classically, the transmission mode of the symbiont can also be important in predicting where on the mutualism–parasitism continuum an interaction will sit. Symbionts that are vertically transmitted (inherited symbionts) frequently occupy the mutualism end of the continuum; this is due to the aligned reproductive interests between host and symbiont that are generated under vertical transmission. In some systems, increases in the relative contribution of horizontal transmission can drive selection for parasitism. Studies of this hypothesis have focused on host–symbiont models of plants and fungi, and also of animals and microbes. See also Red King Hypothesis Red Queen Hypothesis Black Queen Hypothesis Biological interaction References Evolution Biological interactions
Mutualism Parasitism Continuum
[ "Biology" ]
309
[ "Biological interactions", "Ethology", "Behavior", "nan" ]
68,290,336
https://en.wikipedia.org/wiki/Ariel%20Anbar
Ariel Anbar is an isotope geochemist and President's Professor at Arizona State University. He has published over 180 refereed papers on topics ranging from the origins of Earth's atmosphere to detecting life on other worlds to diagnosing human disease. Education and career Anbar was born in Rehovot, Israel and raised in Palo Alto, California and Amherst, New York. He received an A.B. in Geological Sciences and Chemistry from Harvard University in 1989. While at Harvard, he worked under the supervision of Heinrich Holland and conducted experiments that suggested the importance of photochemical oxidation in Archean oceans, especially as a possible source of manganese oxides before the Great Oxidation Event. He received a Ph.D. in geochemistry from the California Institute of Technology in 1996, advised by Gerald Wasserburg, where he developed methods for ultra-sensitive determination of rhenium and iridium in seawater. He was on the faculty of the Department of Earth and Environmental Sciences at the University of Rochester from 1996 to 2004. Since 2004, he has been on the faculty in the School of Earth and Space Exploration and the School of Molecular Sciences at Arizona State University. Research Anbar's research group uses multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS) to study natural variations in the “non-traditional” stable isotope abundances of transition metals as biomarkers and as probes of ancient ocean oxygenation. His group was the first to report natural fractionation of molybdenum isotopes, including how and why molybdenum isotopes fractionate during adsorption to manganese oxides. This work provided a foundation for the use of molybdenum isotopes to study ancient ocean redox change. Anbar and colleagues discovered a "whiff of oxygen" fifty million years before the Great Oxidation Event. Anbar's group has also worked on iron isotopes, demonstrating abiotic fractionation in low- and high-temperature systems. They have also worked to develop the uranium isotope system as a paleoredox proxy, opening up the carbonate sedimentary record for investigation of changes in ocean oxygenation and their linkages to evolution. Anbar has also been involved in the development of a method to use calcium isotopes to study bone disease. Leadership Anbar led the NASA Astrobiology Institute program at Arizona State University from 2009 to 2015. He served as President-Elect and President of the Biogeosciences Section of the American Geophysical Union from 2015 to 2019. He currently directs the Center for Education Through Exploration at Arizona State University, which is reinventing digital learning around curiosity, exploration, and discovery. Awards Anbar is a Fellow of the Geological Society of America, the Geochemical Society, the European Association of Geochemistry, and the American Geophysical Union. In 2002, he was awarded the Young Scientist Award (Donath Medal) from the Geological Society of America. In 2014, he was appointed a Howard Hughes Medical Institute Professor in recognition of his work in digital learning innovation. In 2017, he was named one of 10 “teaching innovators” by the Chronicle of Higher Education. He was the Endowed Biogeochemistry Lecturer at the Goldschmidt Geochemistry Conference in 2017, and received the Samuel Epstein Science Innovation Award from the European Association of Geochemistry in 2019. He received the Arthur L. Day Medal from the Geological Society of America in 2020. 
He is a Distinguished Sustainability Scholar in the Julie Ann Wrigley Global Institute of Sustainability at Arizona State University. References External links Ariel Anbar publications indexed by Google Scholar Center for Education Through Exploration at Arizona State University American geochemists Living people Harvard College alumni Year of birth missing (living people) Arizona State University faculty
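Isotope fractionation results of the kind Anbar's group reports are conventionally expressed in delta notation; a typical form for molybdenum (the 98/95 isotope-ratio pair is the common convention, though the article above does not specify which pair is used) is:

```latex
\delta^{98}\mathrm{Mo} \;=\; \left(\frac{\left({}^{98}\mathrm{Mo}/{}^{95}\mathrm{Mo}\right)_{\mathrm{sample}}}{\left({}^{98}\mathrm{Mo}/{}^{95}\mathrm{Mo}\right)_{\mathrm{standard}}} - 1\right) \times 1000
```

reported in parts per thousand (‰) relative to a reference standard.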
Ariel Anbar
[ "Chemistry" ]
765
[ "Geochemists", "American geochemists" ]
68,291,117
https://en.wikipedia.org/wiki/KELT-6
KELT-6, also known as BD+31 2447, is a star in the constellation Coma Berenices. With an apparent magnitude of 10.34, it cannot be seen with the unaided eye, but it can be seen with a telescope. The star is located 791 light years away from the Solar System based on parallax, and is drifting away with a radial velocity of 1.62 km/s. Properties KELT-6 is an F-type star that is 13% more massive and 53% larger than the Sun. It radiates at 3.25 times the Sun's luminosity from its photosphere at an effective temperature of 6,727 K. KELT-6 has a projected rotational velocity of 4.53 km/s, and is slightly older than the Sun, with an age of 4.9 billion years. Unlike most exoplanet host stars, it has a low metallicity, with 52.5% of the Sun's abundance of heavy elements. Planetary system In 2013, a long-period "hot Jupiter" was discovered orbiting the star using the transit method. Another planet was discovered in 2015 using the radial velocity (Doppler spectroscopy) method. See also List of most luminous stars List of most massive stars Lists of stars Lists of stars by constellation References F-type subgiants Coma Berenices Durchmusterung objects Planetary systems with two confirmed planets
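The quoted 52.5% heavy-element abundance can be converted to the logarithmic metallicity index astronomers usually report (a quick consistency check, not a value taken from the article):

```latex
[\mathrm{Fe}/\mathrm{H}] \;=\; \log_{10}\!\left(\frac{(N_{\mathrm{Fe}}/N_{\mathrm{H}})_{\star}}{(N_{\mathrm{Fe}}/N_{\mathrm{H}})_{\odot}}\right) \;=\; \log_{10}(0.525) \;\approx\; -0.28
```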
KELT-6
[ "Astronomy" ]
299
[ "Coma Berenices", "Constellations" ]
68,291,457
https://en.wikipedia.org/wiki/Amine%20value
In organic chemistry, amine value is a measure of the nitrogen content of an organic molecule. Specifically, it is usually used to measure the amine content of amine-functional compounds. It may be defined as the number of milligrams of potassium hydroxide (KOH) equivalent to the amine content of one gram of sample, such as an epoxy hardener resin. The units are thus mg KOH/g. List of ASTM methods There are a number of ASTM analytical test methods to determine amine value. A number of states in the United States have adopted their own test methods, but they are based on ASTM methods. Although there are similarities with the method, it is not the same as an acid value. ASTM D2073 - This is a potentiometric method. ASTM D2074-07 ASTM D2896 - potentiometric method with perchloric acid. ASTM D6979-03 First principles The amine value is useful in helping determine the correct stoichiometry of a two-component amine-cure epoxy resin system. It is the number of nitrogens × 56.1 (the molar mass of KOH) × 1000 (to convert to milligrams) divided by the molecular mass of the amine-functional compound. So using tetraethylenepentamine (TEPA) as an example: Mwt = 189, number of nitrogen atoms = 5. So 5 × 56.1 × 1000 / 189 = 1484, and the amine value of TEPA = 1484. Other amines All numbers are in units of mg KOH/g. Ethylenediamine. Amine value = 1870 Diethylenetriamine. Amine value = 1634 Triethylenetetramine. Amine value = 1537 Aminoethylpiperazine. Amine value = 1305 Isophorone diamine. Amine value = 660 Hexamethylenediamine. Amine value = 967 1,2-Diaminocyclohexane. Amine value = 984 1,3-BAC. Amine value = 790 2-Methylpentamethylenediamine - Dytek A. Amine value = 967 m-Xylylenediamine - MXDA. Amine value = 825 See also – related test methods Acid value Bromine number Epoxy value Hydroxyl value Iodine value Peroxide value Saponification value References Further reading External links The chemistry of epoxide Synthesis of amines Analytical chemistry
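The first-principles calculation above is easy to mechanize; the following minimal Python sketch (the function and variable names are illustrative, not from any standard library) reproduces the TEPA worked example and cross-checks one entry from the list:

```python
KOH_MOLAR_MASS = 56.1  # g/mol, molar mass of potassium hydroxide

def amine_value(n_nitrogens: int, molar_mass: float) -> float:
    """Amine value in mg KOH per gram of amine-functional compound."""
    # number of nitrogens x M(KOH) x 1000 (g -> mg), divided by M(amine)
    return n_nitrogens * KOH_MOLAR_MASS * 1000 / molar_mass

# Worked example from the text: tetraethylenepentamine (TEPA),
# molar mass 189 g/mol, five nitrogen atoms.
print(round(amine_value(5, 189)))    # -> 1484

# Cross-check: ethylenediamine, 2 nitrogens, molar mass ~60.1 g/mol.
# Gives ~1867; the list quotes 1870, the difference coming from
# rounding of the molar mass.
print(round(amine_value(2, 60.1)))   # -> 1867
```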
Amine value
[ "Chemistry" ]
524
[ "nan" ]
68,291,597
https://en.wikipedia.org/wiki/NGC%203598
NGC 3598 is a lenticular galaxy located in the constellation Leo. It was discovered by the astronomer Albert Marth on March 4, 1865. See also List of galaxies List of largest galaxies List of nearest galaxies References External links Leo (constellation) 3598 Lenticular galaxies 034306
NGC 3598
[ "Astronomy" ]
62
[ "Leo (constellation)", "Constellations" ]
68,291,716
https://en.wikipedia.org/wiki/Vida%20Dujmovi%C4%87
Vida Dujmović is a Canadian computer scientist and mathematician known for her research in graph theory and graph algorithms, and particularly for graph drawing, for the structural theory of graph width parameters including treewidth and queue number, and for the use of these parameters in the parameterized complexity of graph drawing. She is a professor of electrical engineering & computer science at the University of Ottawa, where she holds the University Research Chair in Structural and Algorithmic Graph Theory. Education Dujmović studied telecommunications and computer science as an undergraduate at the University of Zagreb, graduating in 1996. She came to McGill University for graduate study in computer science, earning a master's degree in 2000 and completing her Ph.D. in 2004. Her dissertation, Track Layouts of Graphs, was supervised by Sue Whitesides, and won the 2005 NSERC Doctoral Prize of the Natural Sciences and Engineering Research Council. Career She was an NSERC Postdoctoral Fellow at Carleton University, a CRM-ISM Postdoctoral Fellow at McGill University, and a postdoctoral researcher again at Carleton University before finally becoming an assistant professor at Carleton University in 2012. She moved to the University of Ottawa in 2013. Recognition In 2023 the University of Ottawa gave her the Glinski Award for Excellence in Research and the University Research Chair in Structural and Algorithmic Graph Theory. Vida Dujmović was an invited speaker at the 9th European Congress of Mathematics. References External links Home page Living people Canadian computer scientists Canadian mathematicians Canadian women computer scientists Canadian women mathematicians Academic staff of Carleton University Yugoslav emigrants to Canada Graph theorists McGill University alumni Researchers in geometric algorithms Academic staff of the University of Ottawa University of Zagreb alumni Year of birth missing (living people)
Vida Dujmović
[ "Mathematics" ]
340
[ "Mathematical relations", "Graph theory", "Graph theorists" ]
68,292,003
https://en.wikipedia.org/wiki/Temperature%20paradox
The Temperature paradox or Partee's paradox is a classic puzzle in formal semantics and philosophical logic. Formulated by Barbara Partee in the 1970s, it consists of the following argument, which speakers of English judge as wildly invalid. The temperature is ninety. The temperature is rising. Therefore, ninety is rising. (invalid conclusion) Despite its obvious invalidity, this argument would be valid in most formalizations based on traditional extensional systems of logic. For instance, the following formalization in first-order predicate logic would be valid via Leibniz's law:

t = 90
R(t)
∴ R(90) (valid conclusion in this formalization)

To correctly predict the invalidity of the argument without abandoning Leibniz's law, a formalization must capture the fact that the first premise makes a claim about the temperature at a particular point in time, while the second makes an assertion about how it changes over time. One way of doing so, proposed by Richard Montague, is to adopt an intensional logic for natural language, thus allowing "the temperature" to denote its extension in the first premise and its intension in the second.

extension(t) = 90
R(intension(t))
∴ R(90) (invalid conclusion)

Thus, Montague took the paradox as evidence that nominals denote individual concepts, defined as functions from a world-time pair to an individual. Later analyses build on this general idea, but differ in the specifics of the formalization. Notes External links Non-classical logic Philosophical logic Predicate logic Formal semantics (natural language) Paradoxes
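One standard way to spell out Montague's treatment (a simplified sketch of the PTQ-style analysis; the notation here is illustrative) lets the variable x range over individual concepts and writes ∨x for the extension of x at the current index:

```latex
% Premise 1: the current extension of the temperature concept is 90.
\exists x\,\bigl[\forall y\,(\mathit{temp}(y) \leftrightarrow y = x)\ \wedge\ {}^{\vee}x = 90\bigr]
% Premise 2: "rise" is predicated of the concept x itself.
\exists x\,\bigl[\forall y\,(\mathit{temp}(y) \leftrightarrow y = x)\ \wedge\ \mathit{rise}(x)\bigr]
```

Since premise 1 constrains only the extension ∨x while rise applies to the concept x, Leibniz's law gives no way to derive rise(90), and the invalidity is correctly predicted.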
Temperature paradox
[ "Mathematics" ]
327
[ "Basic concepts in set theory", "Predicate logic", "Mathematical logic" ]
68,293,236
https://en.wikipedia.org/wiki/Ferrole
In organoiron chemistry, a ferrole is a type of diiron complex containing the (OC)3FeC4R4 heterocycle that is pi-bonded to an Fe(CO)3 group. These compounds have Fe-Fe bonds (ca. 252 pm) and semi-bridging CO ligands (Fe-C distances = 178, 251 pm). They are typically air-stable, soluble in nonpolar solvents, and red-orange in color. Synthesis Ferroles typically arise by the reaction of alkynes with iron carbonyls. Such reactions are known to generate many products, e.g. complexes of cyclopentadienones and para-quinones. Another route involves the desulfurization of thiophenes (SC4R4) by iron carbonyls, shown in the following idealized equation: Fe3(CO)12 + SC4R4 → Fe2(CO)6C4R4 + FeS + 6CO An unusual route to ferroles involves treatment of Collman's reagent with trimethylsilyl chloride (tms = (CH3)3Si), shown in the following idealized equation (note that the equation as written is not balanced): 2Na2Fe(CO)4 + 4tmsCl → Fe2(CO)6C4(Otms)4 + 2CO + 4NaCl Reactions Some ferroles react with tertiary phosphines to give the substituted flyover complex Fe2(CO)5(PR3)(C4R4CO). References Organoiron compounds Carbonyl complexes Trimethylsilyl compounds
Ferrole
[ "Chemistry" ]
343
[ "Functional groups", "Trimethylsilyl compounds" ]
68,296,169
https://en.wikipedia.org/wiki/Erin%20Dolan
Erin Dolan is the Georgia Athletic Association Professor of Innovative Science Education at the University of Georgia. Dolan is a biochemist known for her research on engaging students in science research. Education and career Dolan has a B.A. in biology from Wellesley College (1993) where she did an honors thesis on SCPb, a neurotransmitter in the American lobster. She earned a Ph.D. in neuroscience from the University of California, San Francisco where she worked on developmental plasticity in the nematode Caenorhabditis elegans. Following her Ph.D., she worked at the University of Arizona for two years before moving to Virginia Tech in 2002. In 2011, Dolan moved to the University of Georgia where she was named the Georgia Athletic Association Professor of Innovative Science Education in 2016. From 2014 until 2016 she was the executive director of the Texas Institute for Discovery Education in Science at the University of Texas at Austin. In 2010 Dolan was named Editor-in-chief of the journal CBE: Life Sciences Education. Research As a neuroscientist, Dolan worked on sensory signalling, gene expression, and nerve development in the nematode Caenorhabditis elegans. Following her graduate work, Dolan started researching science education where she focuses on the development of programs to increase retention of students in science disciplines and how social and cultural phenomena impact student learning and development, particularly in course-based undergraduate research experiences called CUREs. Selected publications Awards and honors Bruce Alberts Award for Excellence in Science Education (2018) Award for Exemplary Contributions to Education, American Society for Biochemistry and Molecular Biology (2017) Excellence in Education of the American Society of Plant Biologists (2013) References External links University of California, San Francisco alumni Wellesley College alumni University of Georgia faculty Living people Nematologists Women biochemists Science teachers American neuroscientists 1971 births
Erin Dolan
[ "Chemistry" ]
389
[ "Biochemists", "Women biochemists" ]
68,296,332
https://en.wikipedia.org/wiki/Drug%20Safety%20Research%20Unit
Drug Safety Research Unit (DSRU) is an independent, non-profit organisation in the United Kingdom, in the field of pharmacology. It is an associate college of the University of Portsmouth, offering postgraduate qualifications in pharmacovigilance. The unit is based in Southampton, and was established in 1981 by Bill Inman and David Finney. Its director as of July 2021 is Professor Saad Shakir. It is operated by the Drug Safety Research Trust, a charitable organization registered in England and Wales. References External links 1981 establishments in the United Kingdom Organisations based in Southampton Drug safety University of Portsmouth
Drug Safety Research Unit
[ "Chemistry" ]
125
[ "Drug safety" ]
68,296,727
https://en.wikipedia.org/wiki/Hegedus%20indole%20synthesis
The Hegedus indole synthesis is a name reaction in organic chemistry that allows for the generation of indoles through palladium(II)-mediated oxidative cyclization of ortho-alkenyl anilines. The reaction still takes place with tosyl-protected amines. Application 2-Allylaniline can be converted to 2-methylindole using the Hegedus indole synthesis. References Indole forming reactions Carbon-heteroatom bond forming reactions Name reactions
Hegedus indole synthesis
[ "Chemistry" ]
106
[ "Organic reactions", "Name reactions", "Carbon-heteroatom bond forming reactions", "Chemical reaction stubs", "Ring forming reactions" ]
68,297,723
https://en.wikipedia.org/wiki/Shadow%20enhancer
Shadow enhancers are groups of two or more enhancers that control the same target gene and drive overlapping spatiotemporal expression patterns. Shadow enhancers are found in a wide range of organisms, from insects to plants to mammals, particularly in association with developmental genes. While seemingly redundant, the individual enhancers of a shadow enhancer group have been shown to be critical for proper gene expression in the face of both environmental and genetic perturbations. Such perturbations may exacerbate fluctuations in upstream regulators. References Gene expression
Shadow enhancer
[ "Chemistry", "Biology" ]
111
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
68,300,645
https://en.wikipedia.org/wiki/List%20of%20hottest%20exoplanets
This is a list of the hottest exoplanets so far discovered, covering both exoplanets irradiated by a nearby star and self-luminous exoplanets. For comparison, the hottest planet in the Solar System is Venus, with a mean surface temperature of about 737 K (464 °C). List of hottest exoplanets irradiated by a nearby star Methods for finding temperature: Teff: Measured effective temperature. Teq: The temperature of the planet has not been measured, so it is listed with the calculated equilibrium temperature. List of hottest self-luminous exoplanets All of these are measured temperatures. Unconfirmed candidates These planet candidates have not been confirmed. Notes References Exoplanets, hottest
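For the Teq entries, the equilibrium temperature is typically calculated from the host star's properties; a common form (assuming uniform heat redistribution over the planet, with A_B the Bond albedo) is:

```latex
T_{\mathrm{eq}} \;=\; T_{\mathrm{eff}}\,\sqrt{\frac{R_{\ast}}{2a}}\;\bigl(1 - A_B\bigr)^{1/4}
```

where T_eff and R_* are the star's effective temperature and radius and a is the orbital distance; setting A_B = 0 gives the maximum (zero-albedo) estimate.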
List of hottest exoplanets
[ "Astronomy" ]
151
[ "Astronomy-related lists", "Lists of superlatives in astronomy" ]
77,118,957
https://en.wikipedia.org/wiki/Fowler%20Challenger%20III%20amphibious%20tractor
Fowler Challenger III is a continuous track amphibious launch tractor, which was specifically designed for the Royal National Lifeboat Institution (RNLI) to launch and recover carriage-mounted lifeboats from beach-launched lifeboat stations. A total of 13 tractors were constructed over a seven-year period from 1953 to 1960. The tractor is a highly modified version of the standard 95 bhp Fowler Challenger III diesel crawler tractor, manufactured by John Fowler & Co. of Leeds. A prototype was trialled successfully in 1952. The tractor developed a draw-bar pull of 21,100 lb through its six-speed gearbox, and was fitted with a specially designed winch, allowing it to exert a maximum pull of 38,500 lb at its lowest speed. Extended gear and clutch levers were fitted to assist the driver when submerged. The tractor was made completely watertight, with the engine compartment sealed with watertight panels and doors. The engine was designed to run for long periods without overheating. Circular brass rubber-seated valves were fitted to the air intake and discharge ports, to be closed when submerged. To prevent damage, an automatic stop device was also employed. There were many requirements of the RNLI variant tractor: Able to tow a life-boat and carriage weighing up to 14 tons over various types of terrain, including soft sand and deep shingle. Able to pull the life-boat and carriage up gradients of 1 in 4 and to hold them by its brakes on these gradients (a rough feasibility check is sketched below). Able to haul the life-boat and carriage to the water, which at low water may be several miles at some stations. Capable of operating continuously at full power in water up to a depth of seven feet. Fowler Challenger III fleet See also New Holland TC45 launch tractor Talus MB-764 launch tractor Talus MB-H launch tractor Talus MB-4H launch tractor Talus Atlantic 85 DO-DO launch carriage References Royal National Lifeboat Institution launch vehicles Sea-going tractors Tractors Rescue equipment
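A rough feasibility check of the gradient requirement against the quoted pull figures (a sketch only: it assumes UK long tons of 2,240 lb, treats the 1-in-4 grade as the sine of the slope angle, and ignores rolling resistance):

```python
# Force needed to hold a 14-ton life-boat and carriage on a 1-in-4 grade.
LOAD_LB = 14 * 2240   # 14 long tons expressed in pounds (assumption)
GRADE = 1 / 4         # 1-in-4 gradient, taken as sin(theta) (assumption)

force_needed = LOAD_LB * GRADE
print(f"Force to hold load on grade: {force_needed:,.0f} lb")  # ~7,840 lb

# Both quoted figures comfortably exceed this:
print("Rated draw-bar pull: 21,100 lb")
print("Maximum winch pull:  38,500 lb")
```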
Fowler Challenger III amphibious tractor
[ "Engineering" ]
403
[ "Engineering vehicles", "Tractors" ]
77,119,632
https://en.wikipedia.org/wiki/DXVK
DXVK is an open-source translation layer which converts Direct3D 8/9/10/11 calls to Vulkan. It is used by Proton/Steam for Linux, by Intel Windows drivers, and by VirtualBox 7.0, and it can be used to run Direct3D-based games under Windows using Vulkan. DXVK has been confirmed to support over 80% of Direct3D Windows games "near flawlessly". History DXVK was first developed by Philip Rebohle to support only Direct3D 11 games, as a result of the poor compatibility and low performance of Wine's Direct3D 11 to OpenGL translation layer. In 2018, the developer was sponsored by Valve to work on the project full-time in order to advance compatibility of the Linux version of Steam with Windows games. In 2019, DXVK received Direct3D 9 support by merging with d9vk. In November 2022, version 2.0 was released, introducing improvements to Direct3D 9 memory management, shader compilation, and the state cache, as well as support for Direct3D 11 feature level 12_1. Vulkan 1.3 support is now required. Released on January 24, 2023, version 2.1 implemented HDR support and improved quality for certain old games. Released on May 12, 2023, version 2.2 added D3D11On12 support. Released on July 10, 2024, version 2.4 added support for Direct3D 8. Released on November 11, 2024, version 2.5 featured an overhauled memory and resource management system which resulted in VRAM savings of up to 1 GB in certain games. Direct3D 8 and 9 received support for a software cursor. Controversies The use of Wine/DXVK has been associated with users getting banned from online gaming platforms because game publishers have no way of verifying game integrity for people using Linux. References External links ProtonDB - a Proton/Wine/DXVK compatibility database DXVK - GitHub repository Wine - a Win32 compatibility layer for POSIX operating systems 2018 software Compatibility layers Computing platforms Cross-platform software Free software programmed in C++ Free system software Vulkan (API) Linux emulation software Software using the GNU Lesser General Public License
DXVK
[ "Technology" ]
466
[ "Computing platforms" ]
77,120,772
https://en.wikipedia.org/wiki/Dorette%20Pronk
Dorothea Ariette (Dorette) Pronk (born 1968) is a Dutch and Canadian mathematician specializing in category theory and categorical approaches to differentiation. She is a professor of mathematics at Dalhousie University. As well as for her research, she is also known for her work promoting mathematics competitions in Canada. Education and career Pronk is originally from Rotterdam. She became a mathematics student at Utrecht University in The Netherlands, where she earned a master's degree in 1991 and completed her Ph.D. in 1995. Her doctoral dissertation, Groupoid Representations for Sheaves on Orbifolds, was jointly promoted by Dirk van Dalen and Ieke Moerdijk. She took a faculty position at Dalhousie University in 2000, after previously being a postdoctoral researcher there. She is a professor of mathematics at Dalhousie. Pronk began working in mathematics competitions as an observer for the Canadian team at the 1998 and 1999 International Mathematical Olympiads. She chaired Canada's committee on the IMO from 2014 to 2015, the mathematical competitions committee of the Canadian Mathematical Society beginning in 2016, and Canada's committee on the European Girls' Mathematical Olympiad since 2018. She has also served as a leader of Canada's mathematics team. At Dalhousie, she has organized a mathematics challenge club and Math circles focused both on Nova Scotia students and on First Nations students. Recognition Pronk was the 2023 recipient of the Graham Wright Award for Distinguished Service of the Canadian Mathematical Society. In the same year she was named as a Fellow of the Canadian Mathematical Society. Personal life Pronk was raised in a strict denomination of Reformed (Calvinist) Christianity in the Netherlands. After resisting the call because of her strict background, she has been participating in messianic dance through All Nations Christian Reformed Church in Halifax, Nova Scotia since 2002. References External links Dorette Pronk in nLab 1968 births Living people Scientists from Rotterdam Canadian mathematicians Canadian women mathematicians Dutch mathematicians Dutch women mathematicians Category theorists Utrecht University alumni Academic staff of Dalhousie University Fellows of the Canadian Mathematical Society
Dorette Pronk
[ "Mathematics" ]
421
[ "Category theorists", "Mathematical structures", "Category theory" ]
77,121,003
https://en.wikipedia.org/wiki/Nonmetallic%20material
Nonmetallic material, or in nontechnical terms a nonmetal, refers to materials which are not metals. Depending upon context it is used in slightly different ways. In everyday life it would be a generic term for those materials such as plastics, wood or ceramics which are not typical metals such as the iron alloys used in bridges. In some areas of chemistry, particularly the periodic table, it is used for just those chemical elements which are not metallic at standard temperature and pressure conditions. It is also sometimes used to describe broad classes of dopant atoms in materials. In general usage in science, it refers to materials which do not have electrons that can readily move around; more technically, there are no available states at the Fermi energy, the equilibrium energy of electrons. For historical reasons there is a very different definition of metals in astronomy, with just hydrogen and helium as nonmetals. The term may also be used as a negative of the materials of interest, such as in metallurgy or metalworking. Variations in the environment, particularly temperature and pressure, can change a nonmetal into a metal, and vice versa; this is always associated with some major change in the structure, a phase transition. Other external stimuli such as electric fields can also lead to a local nonmetal, for instance in certain semiconductor devices. There are also many physical phenomena which are only found in nonmetals, such as piezoelectricity or flexoelectricity. General definition The original approach to conduction and nonmetals was a band structure with delocalized electrons (i.e. spread out in space). In this approach a nonmetal has a gap in the energy levels of the electrons at the Fermi level. In contrast, a metal would have at least one partially occupied band at the Fermi level; in a semiconductor or insulator there are no delocalized states at the Fermi level, see for instance Ashcroft and Mermin. These definitions are equivalent to stating that metals conduct electricity at absolute zero, as suggested by Nevill Francis Mott, and the equivalent definition at other temperatures is also commonly used, as in textbooks such as Chemistry of the Non-Metals by Ralf Steudel and work on metal–insulator transitions. In early work this band structure interpretation was based upon a single-electron approach with the Fermi level in the band gap, not including a complete picture of the many-body problem, where both exchange and correlation terms can matter, as well as relativistic effects such as spin-orbit coupling. A key addition by Mott and Rudolf Peierls was that these could not be ignored. For instance, nickel oxide would be a metal if a single-electron approach was used, but in fact has quite a large band gap. As of 2024 it is more common to use an approach based upon density functional theory where the many-body terms are included. Rather than single electrons, the filling involves quasiparticles called orbitals, which are the single-particle-like solutions for a system with hundreds to thousands of electrons. Although accurate calculations remain a challenge, reasonable results are now available in many cases. It is also common to nuance somewhat the early definitions of Alan Herries Wilson and Mott. As discussed by both the chemist Peter Edwards and colleagues, as well as Fumiko Yonezawa, it is also important in practice to consider the temperatures at which both metals and nonmetals are used. 
Yonezawa provides a general definition: When a material 'conducts' and at the same time 'the temperature coefficient of the electric conductivity of that material is not positive under a certain environmental condition,' the material is metallic under that environmental condition. A material which does not satisfy these requirements is not metallic under that environmental condition. Band structure definitions of metallicity are the most widely used, and apply both to single elements such as insulating boron and to compounds such as strontium titanate. (There are many compounds which have states at the Fermi level and are metallic, for instance titanium nitride.) There are many experimental methods of checking for nonmetals by measuring the band gap, or by ab initio quantum mechanical calculations. Functional definition An alternative in metallurgy is to consider various malleable alloys such as steel, aluminium alloys and similar as metals, and other materials as nonmetals; fabricating metals is termed metalworking, but there is no corresponding term for nonmetals. A loose definition such as this is often the common usage, but can also be inaccurate. For instance, in this usage plastics are nonmetals, but in fact there are (electrically) conducting polymers which should formally be described as metals. Similarly, but slightly more complex, many materials which are (nonmetal) semiconductors behave like metals when they contain a high concentration of dopants, being called degenerate semiconductors. A general introduction to much of this can be found in the 2017 book by Fumiko Yonezawa. Periodic table elements The term nonmetal (chemistry) is also used for those elements which are not metallic in their normal ground state; compounds are sometimes excluded from consideration. Some textbooks use the term nonmetallic elements, such as the Chemistry of the Non-Metals by Ralf Steudel, which also uses the general definition in terms of conduction and the Fermi level. The approach based upon the elements is often used in teaching to help students understand the periodic table of elements, although it is a teaching oversimplification. Those elements towards the top right of the periodic table are nonmetals, those towards the center (transition metal and lanthanide) and the left are metallic. An intermediate designation, metalloid, is used for some elements. The term is sometimes also used when describing dopants of specific element types in compounds, alloys or combinations of materials, using the periodic table classification. For instance, metalloids are often used in high-temperature alloys, and nonmetals in precipitation hardening in steels and other alloys. Here the description implicitly includes information on whether the dopants tend to be electron acceptors, which lead to covalently bonded compounds rather than metallic bonding, or electron donors. Nonmetals in astronomy A quite different approach is used in astronomy, where the term metallicity is used for all elements heavier than helium, so the only nonmetals are hydrogen and helium. This is a historical anomaly. In 1802, William Hyde Wollaston noted the appearance of a number of dark features in the solar spectrum. In 1814, Joseph von Fraunhofer independently rediscovered the lines and began to systematically study and measure their wavelengths, and they are now called Fraunhofer lines. He mapped over 570 lines, designating the most prominent with the letters A through K and weaker lines with other letters. 
About 45 years later, Gustav Kirchhoff and Robert Bunsen noticed that several Fraunhofer lines coincide with characteristic emission lines identified in the spectra of heated chemical elements. They inferred that the dark lines in the solar spectrum are caused by absorption by chemical elements in the solar atmosphere. Their observations were in the visible range, where the strongest lines come from metals such as Na, K, and Fe. In the early work on the chemical composition of the sun, the only elements that were detected in spectra were hydrogen and various metals, with the term metallic frequently used when describing them. In contemporary usage all the extra elements beyond just hydrogen and helium are termed metallic. The astrophysicist Carlos Jaschek and the stellar astronomer and spectroscopist Mercedes Jaschek, in their book The Classification of Stars, observed that: Stellar interior specialists use 'metals' to designate any element other than hydrogen and helium, and in consequence 'metal abundance' implies all elements other than the first two. For spectroscopists this is very misleading, because they use the word in the chemical sense. On the other hand photometrists, who observe combined effects of all lines (i.e. without distinguishing the different elements) often use this word 'metal abundance', in which case it may also include the effect of the hydrogen lines. Metal-insulator transition There are many cases where an element or compound is metallic under certain circumstances, but a nonmetal in others. One example is metallic hydrogen, which forms under very high pressures. There are many other cases, as discussed by Mott, Inada et al. and more recently by Yonezawa. There can also be local transitions to a nonmetal, particularly in semiconductor devices. One example is a field-effect transistor, where an electric field can lead to a region where there are no electrons at the Fermi energy (a depletion zone). Properties specific to nonmetals Nonmetals have a wide range of properties; for instance, the nonmetal diamond is the hardest known material, while the nonmetal molybdenum disulfide is a solid lubricant used in space. There are some properties specific to them not having electrons at the Fermi energy. The main ones, for which more details are available in the links, are: Dielectric polarization, approximately equivalent to alignment of local dipoles with an electric field, as in capacitors. Electrostriction, a change in volume due to an electric field, or more accurately polarization density. Flexoelectricity, where there is a coupling between strain gradients and polarization. This plays a role in the generation of static electricity due to the triboelectric effect. Piezoelectricity, a coupling between polarization and linear strains. A decrease in resistance with increasing temperature, due to having more carriers (via Fermi–Dirac statistics) available in partially occupied higher energy bands. Increased conductivity when illuminated with light or ultraviolet radiation, called photoconductivity. This is similar to the effect of temperature, but with the photons exciting electrons into partially occupied states. The ability to transmit electric fields, as in a capacitor; in a metal, electric-field screening prevents this beyond very small distances (see Classical Electrodynamics). See also References Chemical physics Condensed matter physics Materials science Metallurgy Nonmetals Periodic table Solid-state chemistry
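The decrease in resistance with increasing temperature follows from thermal activation of carriers across the gap; for an intrinsic semiconductor with band gap E_g, the standard textbook form of the conductivity is:

```latex
\sigma(T) \;\propto\; \exp\!\left(-\frac{E_g}{2 k_B T}\right)
```

so conductivity rises (resistance falls) as T increases, the opposite sign of temperature coefficient to a metal, consistent with Yonezawa's criterion above.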
Nonmetallic material
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,091
[ "Periodic table", "Applied and interdisciplinary physics", "Metallurgy", "Phases of matter", "Nonmetals", "Materials science", "Chemical physics", "Condensed matter physics", "nan", "Matter", "Solid-state chemistry" ]
77,121,916
https://en.wikipedia.org/wiki/Thomas%20Ristenpart
Thomas Ristenpart is a professor of computer security at Cornell Tech. Biography Ristenpart received his B.S. in computer science and engineering from the University of California, Davis in 2003, where he also received his M.S. under Matt Bishop in 2005. He then moved to the University of California, San Diego, where he received his Ph.D. in computer science under Mihir Bellare. Research Ristenpart's research touches on many areas of computer security. Three of his papers are among the most highly cited computer security papers of all time. In cryptography, Ristenpart developed Honey Encryption, a technique that can encrypt data in a way that, if decrypted incorrectly, will return fake data. Ristenpart also developed techniques for typo-tolerant passwords, allowing users to authenticate even if they have mistyped their password. In his cloud security work, Ristenpart found that users on Microsoft's Azure and Amazon's EC2 services could arrange for their virtual machines to be placed on the same physical machine as another user's and therefore exploit a side-channel attack to learn information about their data. Recently, Ristenpart has studied machine learning privacy and security. He was one of the first researchers to show that machine learning models can leak details about their training datasets. He showed that if a machine learning model is trained on images of people's faces, then it is possible to reconstruct images of the people contained in the training dataset. Ristenpart also showed that it is possible to "steal" a machine learning model and reverse-engineer how it works by querying the model. Once stolen, it is possible to use the stolen model to reconstruct proprietary data used to train it. Ristenpart was the Program Chair of the USENIX Security Symposium in 2017, of Crypto in 2020, and of the IEEE Symposium on Security and Privacy in 2022 and 2023. Awards Ristenpart received Best Paper awards at USENIX Security 2014, ACM CHI 2018, USENIX Security 2020, CSCW 2020, CHI 2022, and USENIX Security 2023, and test of time awards for his papers at CCS 2009 and CCS 2012. References External links https://rist.tech.cornell.edu/ Living people 20th-century births Year of birth missing (living people) Computer scientists Cornell Tech faculty University of California, Davis alumni
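A toy sketch of the idea behind model-extraction ("stealing") attacks of the kind mentioned above — an illustration of the general principle only, not the method from Ristenpart's papers; the setup and all names here are illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Victim: a model the attacker can only query for predicted labels.
X_train = rng.normal(size=(500, 5))
y_train = (X_train @ np.array([1.5, -2.0, 0.5, 0.0, 1.0]) > 0).astype(int)
victim = LogisticRegression().fit(X_train, y_train)

# Attacker: label random query points with the victim's answers,
# then train a surrogate model on those (input, answer) pairs.
X_query = rng.normal(size=(2000, 5))
surrogate = LogisticRegression().fit(X_query, victim.predict(X_query))

# The surrogate closely mimics the victim on fresh inputs.
X_test = rng.normal(size=(1000, 5))
agreement = (surrogate.predict(X_test) == victim.predict(X_test)).mean()
print(f"surrogate/victim agreement: {agreement:.1%}")
```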
Thomas Ristenpart
[ "Technology" ]
500
[ "Computer science", "Computer scientists" ]
77,122,686
https://en.wikipedia.org/wiki/Abell%201942%20BCG
Abell 1942 BCG (short for Abell 1942 Brightest Cluster Galaxy), also known as PGC 1256558, is a massive elliptical galaxy of type cD residing as the brightest cluster galaxy of the Abell 1942 galaxy cluster, located in the constellation Virgo. With a redshift of 0.224, the galaxy is located nearly 2.7 billion light-years away from Earth. Characteristics Abell 1942 BCG is one of the largest galaxies, with a diameter of 939,200 light-years. A luminous red galaxy observed by the Sloan Digital Sky Survey, its total stellar mass is estimated to be ~3 × 10^11 solar masses. It is also classified as an active wide-angle tailed radio galaxy, with an astrophysical jet speed in the range (0.3–0.7)c and a spectrum that peaks between 325 MHz and 1.4 GHz; Abell 1942 BCG is found to be radio-luminous, with values below 10^22 W Hz^−1 at 1.4 GHz. Moreover, Abell 1942 BCG contains an extended radio source listed by researchers in the 408-MHz Molonglo Reference Catalogue and also in the Parkes Catalogue of radio sources. Its luminosity function has been estimated at frequencies of 400 and 2700 MHz. Abell 1942 BCG is aligned along its major axis towards its parent cluster. It has a large galactic halo displaying a light profile with surface brightness ranging from 27.5 mag arcsec^−2 at 100 kpc to ~32 mag arcsec^−2 at 700 kpc, as observed in the r-band. Such light profiles in massive galaxies like Abell 1942 BCG tend to extend out to several hundred kiloparsecs. Because Abell 1942 BCG has special properties, researchers theorized it might have been formed through the process of galactic cannibalism, as the galaxy merges with its surrounding satellite galaxies, thus increasing its luminosity. As the merger process continues, the mass of Abell 1942 BCG builds up while, at the same time, the number of satellite galaxies is reduced. The star formation in Abell 1942 BCG is estimated to contribute only a small mass fraction, with a stellar age of 200 Myr. The host cluster's X-ray properties, together with the young star population, strongly hint that the star formation of Abell 1942 BCG is fueled by gas cooling out of the intracluster medium. As observed by the Galaxy Evolution Explorer (GALEX), the Spitzer Space Telescope, and the Two Micron All Sky Survey (2MASS), researchers found the galaxy displays ultraviolet (38%) and mid-infrared (43%) emission from 8 to 160 μm above what is expected. Abell 1942 The galaxy cluster in which Abell 1942 BCG resides is found to be a rich cluster. As observed by researchers, it has a mean redshift of z = 0.22513 ± 0.0008 and a velocity dispersion of σ = 908 (+147/−139) km/s. Through the analysis, they found the cluster is relaxed, with no remarkable features and a fair distribution of the X-ray emission traces. They also found Abell 1942 has two possible optical substructures, seen at ~5 arcmin from the center towards the northwest and the southwest; however, they are not confirmed by the velocity field. The clumps are, however, kinematically bound to the main structure of Abell 1942. Comparing the velocity dispersion through the T_X–σ scaling relation, they found the temperature is in good agreement with the measured galaxy velocities. Researchers conducting a photometric redshift analysis suggest that the weak lensing signal observed in the southern part of the cluster is contributed by a dark matter concentration traced by background sources, possibly distributed as a single filamentary structure. 
Thus, a limiting magnitude of H = 22 could be used for the detection of such clusters of appropriate mass at redshifts comparable to the mean redshift of the background sources. References Elliptical galaxies Virgo (constellation) 1256558 Radio galaxies 2MASS objects
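The T_X–σ comparison mentioned above is commonly phrased through the β parameter (one standard form; the article does not give the exact relation the researchers used):

```latex
\beta_{\mathrm{spec}} \;=\; \frac{\mu m_p \sigma_v^2}{k_B T_X}
```

where μ ≈ 0.6 is the mean molecular weight of the intracluster gas, m_p the proton mass, σ_v the galaxy velocity dispersion, and T_X the X-ray gas temperature; β_spec ≈ 1 indicates that the galaxy kinematics and the gas temperature agree.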
Abell 1942 BCG
[ "Astronomy" ]
865
[ "Virgo (constellation)", "Constellations" ]
77,122,761
https://en.wikipedia.org/wiki/Fran%C3%A7ois%20Jaeger%20%28mathematician%29
François Philippe Louis Jaeger was a French mathematician, working in graph theory, matroid theory, and knot theory. Education and career Jaeger was born in Boulogne-Billancourt to a father who was a doctor and a mother who was a pharmacist but did not practice in order to be able to take care of the education of her three children. According to his father, he was particularly interested in mathematics from an early age. In 1967, he was admitted to the École Polytechnique in Paris. In 1970, after graduating, he started a thesis in Grenoble with Jean Kuntzmann at the IMAG laboratory in the Algebra, Logic and Combinatorics team. This team later became the laboratory Logique, Structures Discrètes et Didactique (LSD2, Grenoble 1982-1995) and was subsequently integrated into the Leibniz laboratory (1995-2005). Jaeger also met frequently with mathematicians from the University of Geneva and from the Institut Fourier (Grenoble), in particular Yves Colin de Verdière. Jaeger was hired by the French National Centre for Scientific Research as a permanent researcher in Grenoble in 1971, at the same time as his close colleague, collaborator and friend Charles Payan, and he spent his entire career in this organisation. He defended his PhD thesis "Study of some invariants and existence problems in graph theory" on June 8, 1976 (supervised by Jean Kuntzmann and Charles Payan). Jaeger died on August 18, 1997 in La Tronche from lung cancer, at the age of 49. Research Initially, most of his research themes had as their main subject graphs, hypergraphs and matroids, and were linked to famous and difficult problems in graph coloring and nowhere-zero flows, such as the Four color theorem, the Strong perfect graph theorem, and Tutte's 3-flow, 4-flow, and 5-flow conjectures. Gradually the research themes of Jaeger became more and more linked to algebra and topology, and to knot theory, with also an interest in statistical physics and computational complexity. Jaeger is known for a number of conjectures in graph theory (see for instance the Petersen coloring conjecture) and linear algebra. In addition to his research work, he wrote several surveys on various topics, including the cycle double cover conjecture, link invariants, and nowhere-zero flows. Honors A year after his death, his colleagues organized a symposium in his memory in Grenoble, attended by international experts in graph theory, combinatorial optimization, matroid theory, and knot theory, including Claude Berge, Jack Edmonds, Vaughan Jones, Alexander Schrijver, Paul Seymour, William Tutte, and Dominic Welsh. A special volume of the Annales de l'Institut Fourier was published in his honor. A volume of the Banach Center Publications (Polish Academy of Sciences) was also dedicated to his memory. External links List of publications of François Jaeger on zbMATH Open. List of publications of François Jaeger on DBLP. References 1947 births 1997 deaths 20th-century French mathematicians Graph theorists École Polytechnique alumni French National Centre for Scientific Research scientists Academic staff of Grenoble Alpes University People from Boulogne-Billancourt Deaths from lung cancer Grenoble Alpes University alumni
François Jaeger (mathematician)
[ "Mathematics" ]
677
[ "Mathematical relations", "Graph theory", "Graph theorists" ]
77,123,099
https://en.wikipedia.org/wiki/Triple%20step%20%28music%29
Triple step, in music, represents a rhythmic pattern covering three dance steps done on two main beats of music. In 1977, British-American rock band Fleetwood Mac released the single "Don't Stop", penned by musician and keyboardist Christine McVie for their Rumours album. The song integrated rhythms influenced by triple step dance patterns and featured both traditional acoustic and tack piano, the second of these instrumental sounds achieved by affixing nails to the hammers' striking points on the strings, resulting in a more percussive sound. Gqom (3-Step) The term "three-step", distinct from triple step, was first coined in the mid-2010s by gqom record producers Sbucardo and Citizen Boy to describe the South African music genre gqom, named for its beat structure associated with triple metre. As the genre became more mainstream and evolved, incorporating various production techniques and styles, other gqom producers such as Emo Kid, DJ Lag, Ben Myster, and Menzi pioneered and developed a distinct variation of gqom music known as "3-step" (also referred to as 3 step, three-step, and other spelling variations) between the late 2010s and early 2020s. The gqom subgenre 3-step is defined by its blend of traditional gqom elements with triple metre and broken beat characteristics. Producers often fuse 3-step with other production styles and musical genres. Waltz (music) A waltz, referred to as "Walzer" in German, "Valse" in French, "Valzer" in Italian, "Vals" in Spanish and "Walc" in Polish, is a style of dance music recognized for its triple metre, typically notated in a 3/4 time signature. The waltz likely originated from the German Ländler. In typical waltz compositions, each measure is associated with a single chord. Yoruba music In Yoruba music, triple metre, among other rhythmic patterns, creates a distinctive, flowing quality through a repeating cycle of three beats per measure. This rhythmic structure is prevalent in traditional Yoruba drumming and significantly influences dance movements and ceremonial performances. Additionally, triple metre is present in oríkì praise poetry, where it enhances the lyrical delivery. See also 2-step garage 2-step (breakdance move) Duple and quadruple metre Triple step Waltz References Chord progressions Musical notation Rhythm and meter
Triple step (music)
[ "Physics" ]
490
[ "Spacetime", "Rhythm and meter", "Physical quantities", "Time" ]
77,124,872
https://en.wikipedia.org/wiki/Greenlash
Greenlash (a portmanteau of "green" and "backlash") is a political term used to describe a backlash against the environmental movement and green politics. History The term was popularised by Nathalie Tocci. In March 2023, the Farmer–Citizen Movement finished as the largest party in the 2023 Dutch provincial elections, campaigning against new limits on nitrogen emissions. In May 2023, governor of Florida Ron DeSantis banned government officials from promoting environmental, social, and governance goals. That month, French president Emmanuel Macron and Belgian prime minister Alexander De Croo called for a temporary pause in new green initiatives at the European level. Expansion of the London Ultra Low Emission Zone in August 2023 provoked a campaign of vandalism. In February 2024, president of the European Commission Ursula von der Leyen announced that the commission would shelve a proposed policy to cut pesticide use in half by 2030. In April 2024, Maroš Šefčovič, Executive Vice-president of the European Commission for the European Green Deal, said that "from the recent farmer protests to the rise in support for populism cultivating a resistance to climate policies, we can see signs of wariness among our citizens." Analysis Elisabetta Cornago of the Centre for European Reform has stated that there are four broad types of policies that can trigger greenlash: policies that affect cost of living, policies banning carbon-intensive technologies that limit consumer choice, policies forcing "greening of existing assets," and policies that directly affect special interest groups like farmers. Guillaume Chapron of the Swedish University of Agricultural Sciences has stated that "the speed at which EU and national politicians abandoned green policies reflects the strong penetration of industrial agriculture into decision spheres." Nathalie Tocci has suggested that far-right political parties in Europe have changed their rhetoric surrounding the climate crisis as part of the greenlash, saying that they are "no longer openly climate crisis deniers," but instead "denounce the inequalities and the harm caused to industry they say are exacerbated by climate policies." Responses Mikael Leyi, secretary general of Solidar, has stated that "rather than focusing solely on abstract emissions targets, we should underscore the local, immediate and long-term benefits of sustainable policies" to counter greenlash. See also Greenwashing References 2023 neologisms Environmentalism Green politics Public relations terminology Environmental social science concepts
Greenlash
[ "Environmental_science" ]
502
[ "Environmental social science concepts", "Environmental social science" ]
77,124,983
https://en.wikipedia.org/wiki/IC%204182
IC 4182 is a Magellanic spiral galaxy in the constellation Canes Venatici. The galaxy lies about 14 million light years away from Earth, which means, given its apparent dimensions, that IC 4182 is approximately 30,000 light years across. It was discovered by Max Wolf in 1904. IC 4182 is seen nearly face-on. It has a low surface brightness disk with patches of star formation and no spiral pattern. The galaxy is close enough for its brightest stars to be resolvable through large telescopes, with a photometric blue-filter apparent magnitude of 19.2, and a visual magnitude of around 20 for the brightest blue stars and around 21 for the brightest red stars. The density of ultraviolet sources decreases monotonically with radius. IC 4182 has been the home of one supernova, SN 1937C (type Ia, mag. 8.4). Fritz Zwicky discovered the supernova, which was located 30 arcseconds north and 40 arcseconds east of the nucleus, on 24 August 1937. The supernova was a few days post-maximum. The peak apparent B-magnitude was estimated to have been 8.7. The galaxy was observed by the Hubble Space Telescope, leading to the discovery of Cepheid variable stars within it. SN 1937C then became the first type Ia supernova to have its distance calibrated with Cepheid stars, and was then used as a standard candle to calculate the Hubble constant. The galaxy is considered to be a member of the M94 Group, while Garcia considered the galaxy to be a member of the LGG 334 group, along with NGC 5005 and NGC 5033. References External links Magellanic spiral galaxies Unbarred spiral galaxies Canes Venatici M94 Group 4182 08188 +09-18-055 45314 Discoveries by Max Wolf Astronomical objects discovered in 1904
IC 4182
[ "Astronomy" ]
396
[ "Canes Venatici", "Constellations" ]
77,125,011
https://en.wikipedia.org/wiki/ASKAP%20J1935%2B2148
ASKAP J1935+2148 (also known as ASKAP J193505.1+214841.0) is a neutron star/magnetar candidate located in the constellation Vulpecula, approximately 15,800 light-years away. With a rotation period of 53.8 minutes (more precisely, 3,225.313 seconds), it would be the slowest-spinning neutron star ever discovered. Discovery and observations ASKAP J1935+2148 was discovered while observing the same area as gamma-ray burst GRB 221009A, which had occurred a few days earlier, with the first pulses being detected on 15 October 2022 by the Australian Square Kilometre Array Pathfinder telescope, located in Western Australia, from which it derives its name. The first observation lasted six hours, in which four 10- to 50-second pulses were detected, with the peak flux density being 119 mJy. During the observation, ASKAP was operating in the square_6x6 configuration with 1.05° pitch and a central frequency of 887.5 MHz. The observation field also encompassed the magnetar SGR 1935+2154, which had produced fast radio bursts in 2020. ASKAP J1935+2148 was also detected in four subsequent observations, with the pulses visible across the entire observing band of 288 MHz. The pulses were quantified to be >90% linearly polarised with a rotation measure of 159.3 rad m−2, consistent with nearby pulsars. Observations at 1,284 MHz with the MeerKAT radio interferometer, including estimating the time of arrival of future pulses, were used to determine a rotation period of 3,225.313 seconds. Properties ASKAP J1935+2148 goes through three phases every rotation period, which were also detected by the MeerKAT telescope: the first phase is characterised by bright, highly linearly polarised pulses lasting between 10 and 50 seconds; the second phase is characterised by weak, circularly polarised pulses 26 times weaker than in the first phase, lasting approximately 370 milliseconds; and the third phase is characterised by quiescence, with no activity. Another pulsar, PSR J1107−5907, shows similar phases, and the pulsars PSR B0823+26 and PSR B2111+46 have shown similar activity. Explanations There are two possible explanations for ASKAP J1935+2148. The first explanation is that the object is a white dwarf with an unusually strong magnetic field; the second is that the object is a neutron star emitting radiation from its poles despite its slow rotation. See also GPM J1839−10 GLEAM-X J162759.5−523504.3 GCRT J1745−3009 PSR J0901–4046 Rotating radio transients (RRATs) References Astronomical objects discovered in 2022 Vulpecula
ASKAP J1935+2148
[ "Astronomy" ]
611
[ "Vulpecula", "Constellations" ]
77,125,334
https://en.wikipedia.org/wiki/Solar%20and%20Space%20Physics%20Decadal%20Survey
The Solar and Space Physics Decadal Survey is a publication of the National Research Council produced for NASA, as well as other US government agencies such as NOAA and the National Science Foundation. It is produced with the purpose of identifying a recommended scientific strategy in the field of heliophysics for the following decade. Agencies such as NASA utilize the decadal survey in order to prioritize funding for specific missions or scientific research projects. As of 2024, two decadal surveys have been published. The first, "The Sun to the Earth — and Beyond: A Decadal Research Strategy in Solar and Space Physics", was published in 2003 for the period 2003-2012. The second, "Solar and Space Physics: A Science for a Technological Society", was released in 2013 for the period 2013-2022. A third decadal survey, "The Next Decade of Discovery in Solar and Space Physics", covering the period 2024-2033, was released in pre-publication form in December 2024. 2003-2012, The Sun to the Earth — and Beyond "The Sun to the Earth — and Beyond: A Decadal Research Strategy in Solar and Space Physics" was released in 2003. The committee was chaired by Louis J. Lanzerotti of Lucent Technologies. The highest priority recommendation named in the report was the Solar Probe mission, intended to explore the inner regions around the Sun. It also recommended the development of the Magnetospheric Multiscale Mission and of a Jupiter Polar Orbiter. 2013-2022, A Science for a Technological Society "Solar and Space Physics: A Science for a Technological Society" was released in August 2012. The committee was chaired by Daniel N. Baker of the University of Colorado. It laid out four key science goals: establishing the origins of solar activity, gaining a deeper understanding of Earth's magnetic field, exploring the Sun's interactions with the solar system and interstellar medium, and characterizing the fundamental processes of the heliosphere. The top priorities recommended to NASA were the restoration of the Medium-Class Explorers program, continuation of the Living With a Star and Solar Terrestrial Probes programs, and continued development of the Solar Probe Plus mission and of the Geospace Dynamics Constellation as part of the Living With a Star program. 2024-2033, The Next Decade of Discovery in Solar and Space Physics A pre-publication copy of The Next Decade of Discovery in Solar and Space Physics was released in December 2024. The committee was chaired by Stephen A. Fuselier of the Southwest Research Institute and Robyn Millan of Dartmouth College. Rather than articulate specific goals, the document was intended to cover a wider, more diverse array of science and space weather research based around six interconnected themes similar to the previous Decadal Survey. The new Survey recommended that NOAA establish a new ground-based space weather research office, that the NSF increase workforce support initiatives and investment in research infrastructure including CubeSats, and that NASA fund two new flagship missions, the Links constellation to study Earth's magnetosphere from more than two dozen positions, and the Solar Polar Orbiter mission to fully succeed Ulysses and Solar Probe Plus, in addition to continuing the Geospace Dynamics Constellation's development. See also Astronomy and Astrophysics Decadal Survey Planetary Science Decadal Survey Earth Science Decadal Survey Snowmass Process References NASA Astronomical surveys Decadal science surveys
Solar and Space Physics Decadal Survey
[ "Astronomy" ]
696
[ "Astronomical surveys", "Astronomical objects", "Works about astronomy" ]
77,125,910
https://en.wikipedia.org/wiki/Mu%C4%8Dnik%20reducibility
In computability theory, a set P of functions is said to be Mučnik-reducible to another set Q of functions when for every function g in Q, there exists a function f in P which is Turing-reducible to g. Unlike most reducibility relations in computability, Mučnik reducibility is not defined between functions but between sets of such functions. These sets are called "mass problems" and can be viewed as problems with more than one solution. Informally, P is Mučnik-reducible to Q when any solution of Q can be used to compute some solution of P. See also Medvedev reducibility Turing reducibility Reduction (computability) References Theoretical computer science
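A compact restatement of the definition above may be useful. The symbols below are conventional in the mass-problem literature rather than taken from this article: ≤_w ("weak") for Mučnik reducibility and ≤_T for Turing reducibility. In LaTeX:

P \le_{w} Q \iff (\forall g \in Q)(\exists f \in P)\; f \le_{T} g

Read in terms of mass problems: every solution g of Q computes some solution f of P, where the computing function f may differ from one g to another.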
Mučnik reducibility
[ "Mathematics" ]
151
[ "Functions and mappings", "Mathematical logic", "Mathematical objects", "Reduction (complexity)", "Mathematical relations", "Mathematical logic stubs" ]
77,125,925
https://en.wikipedia.org/wiki/Medvedev%20reducibility
In computability theory, a set P of functions is said to be Medvedev-reducible to another set Q of functions when there exists an oracle Turing machine which computes some function of P whenever it is given some function from Q as an oracle. Medvedev reducibility is a uniform variant of Mučnik reducibility, requiring a single oracle machine that can compute some function of P given any oracle from Q, instead of a family of oracle machines, one per oracle from Q, which compute functions from P. See also Mučnik reducibility Turing reducibility Reduction (computability) References Theoretical computer science Reduction (complexity) Computability theory
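Under the same caveat that the notation is conventional rather than drawn from this article (≤_s, "strong", for Medvedev reducibility; Φ_e for the e-th oracle Turing machine), the definition can be stated in LaTeX as:

P \le_{s} Q \iff (\exists e)(\forall g \in Q)\; \Phi_{e}^{g} \in P

where each Φ_e^g is required to be total. The single index e expresses the uniformity that separates Medvedev reducibility from Mučnik reducibility, where the machine may depend on g.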
Medvedev reducibility
[ "Mathematics" ]
141
[ "Functions and mappings", "Theoretical computer science", "Applied mathematics", "Mathematical logic", "Mathematical objects", "Reduction (complexity)", "Mathematical relations", "Computability theory", "Mathematical logic stubs" ]
77,126,298
https://en.wikipedia.org/wiki/Seamus%20Martin%20%28biochemist%29
Seamus J. Martin is an Irish molecular biologist and immunologist working at The Smurfit Institute of Genetics in Trinity College Dublin. Since 1999, he has held the Smurfit Chair of Medical Genetics at Trinity College Dublin, and his research focuses on the links between cell death, cell stress, and inflammation. Martin is known for his contributions to understanding the molecular control of the mode of regulated cell death known as apoptosis. Martin received the 'GlaxoSmithKline Award' of the Biochemical Society in 2006, the British Science Association's 'Charles Darwin Award' in 2005, and the 'RDS-Irish Times Boyle Medal' in 2014, for his work on deciphering the role of caspases in apoptosis. In 2006, he was elected to the Royal Irish Academy, in 2009 he was awarded EMBO Membership, and in 2023 he was elected to the Academia Europaea. His research work is widely cited and he received a European Research Council Advanced Research award in 2021. Martin is an author of the 11th, 12th, and 13th editions of the award-winning textbook, Essential Immunology, and since 2014, he has served as Editor-in-Chief of The FEBS Journal (Cambridge, UK), an international life sciences academic journal. Biography Martin studied biology and chemistry as an undergraduate at The National University of Ireland, Maynooth (NUIM), followed by a PhD in Cell Biology working with Tom Cotter at Maynooth University. After completion of his PhD, he moved to the Dept. of Immunology at University College London (UK) to carry out a post-doctoral fellowship working on HIV immunopathology with the internationally known immunologist Ivan Roitt, FRS. Supported by a Wellcome Trust International Prize Fellowship, he then relocated to the La Jolla Institute for Immunology, University of California, San Diego, USA, to undertake a second post-doc with the US immunologist and National Academy member Douglas R. Green. In 1999 Martin moved to the Dept. of Genetics, Trinity College Dublin, where he was appointed to the Smurfit Chair of Medical Genetics. Scientific contributions Martin's research focuses on the molecular mechanisms governing regulated cell death and inflammation. Initially working on the role of proteases in coordinating programmed cell death (apoptosis), he made contributions to our understanding of how caspases become activated during apoptosis, the order of caspase activation events in the intrinsic and extrinsic caspase activation cascades, and how caspases coordinate apoptosis through proteolysis of hundreds of substrate proteins. More recently, his work has focused on how caspases coordinate inflammatory cascades downstream of death receptor engagement. In parallel to his work on caspases, he has also made contributions to our understanding of how neutrophil proteases promote inflammation through processing and activation of members of the extended IL-1 family, and has championed the idea that IL-1 family members represent the canonical 'damage-associated molecular patterns' that promote inflammation upon release from necrotic cells. While working with Doug Green at La Jolla, Martin pioneered annexin V labeling as a probe for apoptotic cells, which has become the 'gold standard' for the measurement of apoptosis. He also established a mammalian 'cell-free' system for the study of caspase activation pathways in mammals,[13][14] and continued this work upon establishing his own laboratory.[15][16][17][18] Martin's recent work has focused on exploring the links between cell death signals and inflammatory signaling cascades.
His laboratory has published a series of studies demonstrating that essentially all initiators of programmed cell death can also promote inflammation,[19][20][21] and his current research is focused upon understanding how chemotherapeutic drugs can frequently trigger inflammation that may be detrimental to killing cancer cells.[22] Select publications Martin, S. J., Amarante-Mendes, G. P., Shi, L., Chuang, T.-H., Casiano, C. A., O'Brien, G. A., Fitzgerald, P., Tan, E. M., Bokoch, G. M., Greenberg, A. H., and Green, D. R. (1996) The cytotoxic cell protease granzyme B initiates apoptosis in a cell-free system by proteolytic processing and activation of the ICE/CED-3 family protease, CPP32, via a novel two-step mechanism. EMBO Journal. 15, 2407-2416. Slee, E.A., Harte, M.T., Kluck, R.M., Wolf, B.B., Casiano, C.A., Newmeyer, D.D., Wang, H.-G., Reed, J.C., Nicholson, D.W., Alnemri, E.S., Green D.R., and Martin S.J. (1999) Ordering the Cytochrome c-Initiated Caspase Cascade: Hierarchical Activation of Caspases -2, -3, -6, -7, -8 and -10 in a Caspase-9-Dependent Manner. The Journal of Cell Biology 144:281-292. Lüthi, A.U., Cullen, S.P., McNeela, E.A., Duriez, P.J., Afonina, I.S., Sheridan, C., Brumatti, G., Taylor, R.C., Kersse, K., Vandenabeele, P., Lavelle, E.C. and Martin SJ (2009) Suppression of IL-33 Bioactivity through Proteolysis by Apoptotic Caspases. Immunity 31:84-98. Cullen, SP, Henry CM, Kearney, CJ, Logue SE, Feoktistova M, Tynan GA, Lavelle EC, Leverkus M, and Martin SJ (2013) Fas/CD95-Induced Chemokines can Serve as 'Find-Me' Signals for Apoptotic Cells. Molecular Cell, 49, 1034–1048. Hollville, E., Carroll, R., and Martin SJ. (2014) Bcl-2 Family Proteins Participate in Mitochondrial Quality Control by Regulating Parkin/PINK1-Dependent Mitophagy. Molecular Cell 55:451-66. Henry CM and Martin SJ (2017) Caspase-8 Acts in a Non-enzymatic Role as a Scaffold for Assembly of a Pro-inflammatory "FADDosome" Complex upon TRAIL Stimulation. Molecular Cell, 65, 715-729. Sullivan GP, O'Connor H, Henry CM, Davidovich P, Clancy DM, Albert ML, Cullen SP, and Martin SJ. (2020) TRAIL Receptors Serve as Stress-Associated Molecular Patterns to Promote ER-Stress-Induced Inflammation. Developmental Cell 52, 714-730. Sullivan GP, Davidovich P, Muñoz-Wolf N, Ward RW, Hernandez Santana YE, Clancy DM, Gorman A, Najda Z, Turk B, Walsh PT, Lavelle EC, and Martin SJ. (2022) Myeloid cell-derived proteases produce a pro-inflammatory form of IL-37 that signals via IL-36 receptor engagement. Science Immunology 7, eade5728 1-15. References External links Google Scholar Official website Living people 21st-century Irish biologists Irish immunologists Molecular biologists Academics of Trinity College Dublin Irish biochemists Year of birth missing (living people)
Seamus Martin (biochemist)
[ "Chemistry" ]
1,630
[ "Molecular biologists", "Biochemists", "Molecular biology" ]
77,126,555
https://en.wikipedia.org/wiki/David%20W.%20Flaherty
David W. Flaherty is the Thomas C. Loach Jr. Endowed Professor in the School of Chemical and Biomolecular Engineering at Georgia Institute of Technology, joining in June 2023 after previously serving at the University of Illinois, Urbana-Champaign. His research focuses on catalysis, surface science, and materials synthesis aimed at sustainability. Education and career B.S. in Chemical Engineering, University of California, Berkeley Ph.D. in Chemical Engineering, University of Texas at Austin (advisor: Charles Buddie Mullins) Postdoctoral research with Prof. Enrique Iglesia at the University of California, Berkeley Research Flaherty's research focuses on developing the science and application of catalysis for sustainability. Awards and honors Eastman Foundation Distinguished Lecturer in Catalysis, University of California, Berkeley (2021) Department of Energy Early Career Award (2019) National Science Foundation CAREER Award (2016) ACS PRF Doctoral New Investigator Award (2013) References External links Google Scholar Profile: https://scholar.google.com/citations?user=EULNYK8AAAAJ&hl=en Georgia Tech faculty Chemical engineers University of California, Berkeley alumni University of Texas at Austin alumni Year of birth missing (living people) Living people
David W. Flaherty
[ "Chemistry", "Engineering" ]
258
[ "Chemical engineering", "Chemical engineers" ]
77,126,911
https://en.wikipedia.org/wiki/2014%20Midwest%20FurFest%20gas%20attack
On December 7, 2014, Midwest FurFest was targeted by a chlorine gas attack, hospitalizing 19 attendees. At the time, Midwest FurFest was the second-largest furry convention in the country, with over 5,400 attendees. The convention took place at the Hyatt hotel in Rosemont, Illinois, from December 5 to December 7. Timeline Around 12:45 a.m. on the final night of the convention, the Rosemont Public Safety Department received several reports of a noxious odor on the ninth floor of the hotel. At 1:10 a.m. the hotel was evacuated and guests were sent to the Stephens Convention Center. Hazmat technicians decontaminated the area for two hours and the building was reopened at 4:21 a.m. Investigation Firefighters investigating the scene found a broken glass jar filled with white powder on the ninth floor of the emergency stairway, as well as a liquid on the walls. The air in the stairwell and in the hotel's large atrium both registered high levels of chlorine gas. The concentration within the stairwell was measured above 60 ppm, surpassing the meter's maximum range. Samples of both the powder and liquid were taken, but due to improperly calibrated equipment the results were inconclusive. In the hours after the gas leak, the Rosemont police said the evidence "suggests an intentional act" and began a criminal investigation. In the following days, the police and FBI interviewed suspects, hotel guests and employees, and hospital workers, among others. The FBI continued to interview suspects until 2019, when the Illinois statute of limitations ran out. As of 2022, nobody has been charged in this case. Media coverage On December 8, the MSNBC show Morning Joe was covering the attack. While reporting the news, hosts Mika Brzezinski and Joe Scarborough started laughing, leaving Willie Geist to finish reading the segment. The broadcast cut to an interview with Dr. Samuel Conway, and when the hosts returned, Mika Brzezinski was running out of the studio. Many furries were offended by this dismissive coverage of the event. In 2024, reporter Nicky Woolf released Fur & Loathing, an investigative podcast series about the event. The podcast interviewed two of the FBI's prime suspects in the case, including Robert Sojkowski (AKA Magnus Diridian), a well-known controversial figure within the fandom. Fur & Loathing was featured in Time's "Best Podcasts of 2024 So Far" article and Bloomberg Businessweek's 2024 "Jealousy List". References 2014 in Illinois Furry conventions Rosemont, Illinois Unsolved crimes in the United States Unidentified American criminals Attacks in the United States in 2014 Attacks on buildings and structures in 2014 Attacks on entertainment venues Improvised explosive device bombings in 2014 Chemical weapons attacks Terrorist incidents by unknown perpetrators
2014 Midwest FurFest gas attack
[ "Chemistry" ]
577
[ "Chemical weapons attacks", "Chemical weapons" ]
77,126,934
https://en.wikipedia.org/wiki/Conjunction/disjunction%20duality
In propositional logic and Boolean algebra, there is a duality between conjunction and disjunction, also called the duality principle. It is the most widely known example of duality in logic. The duality consists in these metalogical theorems: In classical propositional logic, the connectives for conjunction and disjunction can be defined in terms of each other, and consequently, only one of them needs to be taken as primitive. If φ* is used as notation to designate the result of replacing every instance of conjunction with disjunction, and every instance of disjunction with conjunction (e.g. (A & B) with (A ∨ B), or vice-versa), in a given formula φ, and if φ⁻ is used as notation for replacing every sentence-letter in φ with its negation (e.g., A with ¬A), and if the symbol ⊨ is used for semantic consequence and ⟚ for semantical equivalence between logical formulas, then it is demonstrable that (φ⁻)* ⟚ ¬φ, and also that φ ⊨ ψ if, and only if, ψ* ⊨ φ*, and furthermore that if φ ⟚ ψ then φ* ⟚ ψ*. (In this context, φ* is called the dual of a formula φ.) Mutual definability The connectives may be defined in terms of each other as follows: (1) (φ & ψ) ⟚ ¬(¬φ ∨ ¬ψ) (2) (φ ∨ ψ) ⟚ ¬(¬φ & ¬ψ) (3) φ ⟚ ¬¬φ Functional completeness Since the Disjunctive Normal Form Theorem shows that the set of connectives {&, ∨, ¬} is functionally complete, these results show that the sets of connectives {&, ¬} and {∨, ¬} are themselves functionally complete as well. De Morgan's laws De Morgan's laws also follow from the definitions of these connectives in terms of each other, whichever direction is taken to do it. (4) ¬(φ & ψ) ⟚ (¬φ ∨ ¬ψ) (5) ¬(φ ∨ ψ) ⟚ (¬φ & ¬ψ) Duality properties The dual of a sentence is what you get by swapping all occurrences of ∨ and &, while also negating all propositional constants. For example, the dual of (A & B ∨ C) would be (¬A ∨ ¬B & ¬C). The dual of a formula φ is notated as φ*. The Duality Principle states that in classical propositional logic, any sentence is equivalent to the negation of its dual. Duality Principle: For all φ, we have that φ = ¬(φ*). Proof: By induction on complexity. For the base case, we consider an arbitrary atomic sentence A. Since its dual is ¬A, the negation of its dual will be ¬¬A, which is indeed equivalent to A. For the induction step, we consider an arbitrary φ and assume that the result holds for all sentences of lower complexity. Three cases: If φ is of the form ¬ψ for some ψ, then its dual will be ¬(ψ*) and the negation of its dual will therefore be ¬¬(ψ*). Now, since ψ is less complex than φ, the induction hypothesis gives us that ψ = ¬(ψ*). By substitution, this gives us that φ = ¬¬(ψ*), which is to say that φ is equivalent to the negation of its dual. If φ is of the form (ψ ∨ χ) for some ψ and χ, then its dual will be (ψ* & χ*), and the negation of its dual will therefore be ¬(ψ* & χ*). Now, since ψ and χ are less complex than φ, the induction hypothesis gives us that ψ = ¬(ψ*) and χ = ¬(χ*). By substitution, this gives us that φ = ¬(ψ*) ∨ ¬(χ*), which in turn gives us that φ = ¬(ψ* & χ*) by De Morgan's law. And that is once again just to say that φ is equivalent to the negation of its dual. If φ is of the form (ψ & χ), the result follows by analogous reasoning. Further duality theorems Assume φ* ⊨ ψ*. Then (φ*)⁻ ⊨ (ψ*)⁻, by uniform substitution of ¬A for each sentence-letter A. Hence, ¬((ψ*)⁻) ⊨ ¬((φ*)⁻), by contraposition; so finally, ψ ⊨ φ, by the property that ¬φ ⟚ (φ*)⁻, which was just proved above. And since φ** = φ, it is also true that φ ⊨ ψ if, and only if, ψ* ⊨ φ*. And it follows, as a corollary, that if φ ⟚ ψ, then φ* ⟚ ψ*. Conjunctive and disjunctive normal forms For a formula φ in disjunctive normal form, the formula (φ*)⁻ will be in conjunctive normal form, and given the result that ¬φ ⟚ (φ*)⁻, it will be semantically equivalent to ¬φ.
This provides a procedure for converting between conjunctive normal form and disjunctive normal form. Since the Disjunctive Normal Form Theorem shows that every formula of propositional logic is expressible in disjunctive normal form, every formula is also expressible in conjunctive normal form by means of effecting the conversion to its dual. References Mathematical logic
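A worked illustration of this conversion (the example formula is chosen for illustration and is not taken from the source): let φ = (A & B) ∨ C, which is in disjunctive normal form. Swapping the connectives gives φ* = (A ∨ B) & C, and negating every sentence-letter then gives (φ*)⁻ = (¬A ∨ ¬B) & ¬C, which is in conjunctive normal form and semantically equivalent to ¬φ. To obtain a conjunctive normal form of φ itself, apply the same operation to a disjunctive normal form of ¬φ: here ¬φ ⟚ (¬A & ¬C) ∨ (¬B & ¬C), and its dual with negated sentence-letters reduces, after cancelling double negations, to (A ∨ C) & (B ∨ C), a conjunctive normal form equivalent to φ.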
Conjunction/disjunction duality
[ "Mathematics" ]
968
[ "Mathematical logic" ]
77,127,480
https://en.wikipedia.org/wiki/Detroit%20Wayne%20%281919%20ship%29
The steamship Detroit Wayne was a steel-hulled freighter built for the United States Shipping Board in 1919. She carried freight across the Atlantic in 1920 and 1921. Afterward, she was likely idled until 1932, when she was converted into a dredge for the U.S. Army Corps of Engineers. She was active in the Mississippi River for several years. In 1940, Detroit Wayne was sold to private interests, renamed Raritan, and converted back into a freighter. She was wrecked off the North Carolina coast in February 1942. Construction and characteristics When the United States entered World War I in April 1917, neither it nor any allied power had the shipping capacity to carry the two million Americans who sailed for Europe, much less all their accompanying armament and supplies. What shipping did exist in the Atlantic was pared back by Germany's U-boats, which sank almost 5,000 ships during the war. The United States Shipping Board, through its wholly-owned Emergency Fleet Corporation, mass-produced ships to a few standard designs to "build a bridge across the ocean." Detroit Wayne was one of those ships. Detroit Wayne was built to the Shipping Board's standard Design 1099. She was built of welded steel plates. She was long between perpendiculars, with a beam of , and a depth of hold of . Her fully loaded draft was just over . Deadweight tonnage, the weight of cargo which could be carried, was 4,050 tons. Gross register tonnage was 2,606, while her net register tonnage was 1,612. Detroit Wayne had a single propeller which was driven by a single triple-expansion steam engine with . This engine had high-, medium-, and low-pressure cylinders with diameters of 21 inches, 35 inches, and 59 inches respectively, with a stroke of 42 inches. Steam was provided by two boilers, which were oil-fired. The ship was capable of reaching 9.5 knots. Her fuel tanks could hold 708 tons of oil, giving her a steaming range of just over 8,000 miles. She had two cargo holds, each of which had two hatches. Each hold was serviced by four cargo booms, each of which had its own winch. The heaviest load that could be winched aboard was 4 tons. Detroit Wayne had an effective cargo capacity of 166,806 cubic feet for baled cargo and 180,033 cubic feet for grain. The ship was named Lake Fairton when her keel was laid, but she was launched as Detroit Wayne. Her name was changed to honor Detroit and Wayne County for oversubscribing to all of the Liberty Loan bond issues which funded America's World War I spending. She was built by the Detroit Shipbuilding Company, a unit of the American Shipbuilding Company, at its Wyandotte, Michigan shipyard. She was launched on 8 November 1919 and delivered to the Shipping Board in April 1920. Her original cost was $777,751.41. Service history United States Shipping Board (1919–1932) After delivery to the Shipping Board, Detroit Wayne made her way through the Great Lakes to the Atlantic, passing through the Lachine Canal on 23 June 1920, and arriving at Portland, Maine on 17 July 1920. She sailed for New York the next day and on to Philadelphia on 30 July 1920 to begin her work. The Shipping Board consigned Detroit Wayne to the Clyde Steamship Company, which placed her in trans-Atlantic service. On 29 August 1920 she left Philadelphia for Genoa, Italy, with stops in Marseilles and Port St. Louis du Rhone. She passed Gibraltar on 12 September 1920 and arrived in Genoa on 25 September 1920. Detroit Wayne made seven more trans-Atlantic crossings, returning to Genoa twice, and sailing to Avonmouth, U.K. twice.
She stopped in New York and Philadelphia on both outbound and return trips from Europe. On each of these trips she stopped at a number of intermediate ports including Hull, Newcastle, Naples, and Bizerte. On her last trans-Atlantic crossing, she departed Marseilles, bound for Philadelphia, on 18 September 1921. Detroit Wayne was redelivered to the Shipping Board at Norfolk, Virginia on 5 October 1921. In December 1921, the Shipping Board executed a bareboat charter agreement with Halschaw Steamship Line, Incorporated. Halschaw failed to fund the surety bond required by the Shipping Board and litigation ensued. It is not clear if Detroit Wayne ever sailed for Halschaw. U.S. Army Corps of Engineers (1932–1940) In 1932, the Shipping Board transferred Detroit Wayne and sister ship Lake Fenn, both of which were idle in the reserve fleet in the James River, to the War Department. The boilers, winches, and other equipment were removed from Lake Fenn and installed on Detroit Wayne to convert her from a freighter into an agitator dredge for the U.S. Army Corps of Engineers. Sealed bids for the work were opened on 10 June 1932. The Maryland Drydock Company, Incorporated submitted the low bid of $154,000 for the project and was awarded the contract. The two ships were towed from the James River to the Maryland Drydock Company shipyard in Baltimore, arriving on 7 July 1932. The out-of-pocket cost of the conversion, not counting the parts removed from Lake Fenn, was $156,317.13. The conversion was designed by U.S. Army Captain H. B. Vaughn, Jr. The two boilers on Lake Fenn were removed and installed on Detroit Wayne to provide additional steam for the engines which drove the dredge pumps. There were two triple-expansion steam engines, each with an indicated horsepower of 775, which drove the pumps. They had high-, medium-, and low-pressure cylinders of 13.25 inches, 20 inches, and 31.5 inches with a stroke of 20 inches. The suction and output of the pumps was 30 inches in diameter. The suction side of the pumps was connected to port and starboard drag arms which could be lowered to the bottom. The #2 hold was converted into a pump room to accommodate the new equipment. In her new dredge configuration, her crew was variously reported as 80 or 99 men, for whom accommodations were provided aboard. After the conversion, Lake Fenn was scrapped. Two additional Design 1099 ships, Lake Fairfax and Lake Faxon, also underwent this conversion, with Lake Fairfax surviving as a dredge. Detroit Wayne was an agitator dredge. She would dredge while anchored or proceeding slowly upriver. Her pumps would dislodge stabilized material at the bottom and then pump it into the stream, which would carry the silt away with the current. Under certain conditions, this allowed the power of the river itself to erode the newly-exposed stream bed into a deeper channel. In September 1932, just after her conversion, Detroit Wayne dredged the James River between Dancing Point Shoal and Swann Point. This test was successful. In October, the ship stopped in at the Norfolk Naval Shipyard to have her drag arms unshipped and placed on deck for the voyage to her new home port. She was assigned to the 2nd U.S. Engineers District, based in New Orleans, and became U.S.E.D. Detroit Wayne. The ship was used to dredge multiple obstructions on the Mississippi River and in the Atchafalaya Basin from 1933 until at least 1937. Among the locations she dredged were areas around Vicksburg, Natchez, Waterproof, Louisiana, and Grand Gulf, Mississippi.
Raritan Steamship Corporation (1940–1942) Detroit Wayne was purchased by the Raritan Steamship Company of New York in 1940. Her name was changed to Raritan, and she was converted back into a freighter. Her home port was New Orleans. The ship was managed by the Smith-Johnson Steamship Corporation. In January 1941, Raritan was loaded in Maryland with defense materials bound primarily for St. Lucia and Antigua. These materials were intended for new U.S. facilities acquired from Britain under the destroyers-for-bases deal. Raritan was headed north with a load of coffee on 25 February 1942. Unbeknownst to her captain, the Frying Pan Shoals lightship had been withdrawn. The ship had been advised to stay close to shore due to the U-boat threat, but the night was stormy and the crew were unsure of their position. Just after midnight Raritan grounded hard on the shoal. After seven hours aground, all 29 of the crew were rescued by the Coast Guard. Two hours later the ship floated off and broke up as the tide came in, and she sank in deeper water. References 1919 ships Ships of the United States Army Dredges Shipwrecks Ships built in Michigan Dredgers
Detroit Wayne (1919 ship)
[ "Engineering" ]
1,772
[ "Dredges", "Mining equipment" ]
77,127,710
https://en.wikipedia.org/wiki/Maia%20Biotechnology
Maia Biotechnology is a public, Texas-based immuno-oncology company. Company It is led by CEO/Chairman Vlad Vitoc, MD, MBA and Chief Scientific Officer Sergei M. Gryaznov, PhD. In August 2022, the company's initial funding round closed, raising $10M. In March 2024, the company's equity fell below .5 M. In April 2024, the company announced the completion of a $1M private placement. Drug candidates THIO (6-thio-2'-deoxyguanosine) is a drug candidate designed to kill cells that express telomerase, which is found in 85% of human cancers. Clinical trials Phase II THIO-101 A phase II clinical trial, THIO-101, is evaluating THIO sequenced with cemiplimab to treat advanced non-small cell lung cancer (NSCLC). The trial is a multicentre, open-label, dose-finding trial designed to assess the anti-tumour activity of THIO followed by PD-L1 inhibition. The study population was patients with advanced NSCLC who did not respond to, or developed resistance against, first-line treatments using another checkpoint inhibitor. The trial's primary objectives are to assess tolerability, safety, and clinical efficacy using overall response rate (ORR) as the primary endpoint. See also Geron Corporation Imetelstat References External links MAIA Biotechnology's Telomere Targeting Functionality is Shown Viable by FDA's Approval of a Telomerase Inhibitor Agent Therapy Biotechnology companies Cancer immunotherapy Telomeres
Maia Biotechnology
[ "Engineering", "Biology" ]
323
[ "Senescence", "Biotechnology organizations", "Telomeres", "Biotechnology companies" ]
77,128,002
https://en.wikipedia.org/wiki/Paleohistology
Paleohistology is the study of the microstructure of fossilized skeletal tissues, offering insights into the biology, growth patterns, and physiology of extinct organisms. Despite the decay of organic components, the inorganic elements of bone preserve critical structures such as osteocyte lacunae, vascular canals, and collagen fibers. This highly specialized field within paleontology yields insights into the lives of extinct animals, including growth history and age at death. History The microscopic study of biological tissues traces back to 1828, when Henry Witham and William Nicol pioneered techniques for examining petrified tree trunks under a microscope. Subsequently, Louis Agassiz applied these methods to fossil vertebrates. In 1849, John Thomas Quekett published a seminal paper detailing the histological structure of bone across various vertebrate groups, laying the foundation for further research. Gideon Mantell made significant contributions to paleohistology in the mid-19th century. In 1850, Mantell provided the first clear description of dinosaur bone microstructure, including thin sections from a "dorsal dermal spine" of Hylaeosaurus and a humerus of Pelorosaurus. These observations marked a pivotal moment in the study of ancient tissues, highlighting the preservation of intricate structures in fossilized bone. Throughout the 20th century, technological advancements revolutionized paleohistology. The introduction of hard plastic resins, tungsten carbide microtome blades, and diamond-edged saw blades enabled researchers to produce thinner sections and conduct more detailed analyses of mineralized tissues. These innovations expanded the scope of paleohistological research, facilitating the examination of fully mineralized bone samples. In the 1960s and 1970s, Armand de Ricqlès made significant strides in paleohistology by correlating histological features with growth rates and thermal physiology in extinct organisms. Drawing from neontological observations, de Ricqlès demonstrated that avascular bone is deposited more slowly than vascular bone, with implications for understanding the physiology of extinct taxa. His work on dinosaur bone histology suggested physiological similarities between dinosaurs and endothermic birds, challenging prevailing notions of reptilian physiology. Recent studies in paleohistology have expanded our understanding of ancient tissues, with a focus on quantitative analyses, comparative histology, and interdisciplinary approaches. Ongoing research continues to uncover new insights into the biology and evolution of extinct organisms, leveraging advancements in imaging technology and analytical techniques. Methods Paleohistologists employ a variety of techniques to study ancient tissues, including thin sectioning, histological staining, and microscopy. Thin sectioning involves cutting slices of fossilized bone or tooth tissue, which are then mounted on slides and examined under a microscope. Histological staining techniques allow researchers to visualize different tissue types, such as bone, cartilage, and teeth, while microscopy enables detailed examination of cellular structures. Recent advances in imaging technology, such as confocal microscopy and synchrotron radiation, have revolutionized paleohistology by providing higher resolution imaging and non-destructive analysis of fossil specimens. Applications Paleohistology has diverse applications in paleontology, evolutionary biology, and related fields.
By analyzing the microstructure of fossilized tissues, paleohistologists can infer growth rates, metabolic rates, and physiological adaptations of extinct organisms. This information contributes to our understanding of vertebrate evolution, including the origins of flight in birds, the evolution of mammalian reproduction, and the diversity of dinosaurian growth strategies. Additionally, paleohistological data can provide insights into paleoecological dynamics, such as population demographics, habitat preferences, and responses to environmental change. By reconstructing past environments and ecosystems, paleohistology helps scientists understand the long-term effects of climate change, mass extinctions, and other evolutionary processes. References Histology Micropaleontology
Paleohistology
[ "Chemistry" ]
783
[ "Histology", "Microscopy" ]
77,128,339
https://en.wikipedia.org/wiki/8030%20aluminium%20alloy
8030 aluminium alloy is produced using iron and copper as additives. It is commonly used in electronics due to its high thermal stability and electrical conductivity. Chemical composition Applications Aluminium 8030 is used in high-voltage power transmission lines. References External links Material Properties Aluminium alloys
8030 aluminium alloy
[ "Chemistry" ]
54
[ "Alloys", "Alloy stubs", "Aluminium alloys" ]
77,129,349
https://en.wikipedia.org/wiki/Zasocitinib
Zasocitinib (TAK-279, NDI-034858) is a drug which is an orally active, highly selective tyrosine kinase 2 (TYK2) inhibitor. It has been researched for various inflammatory conditions including psoriatic arthritis and Crohn's disease. It is significantly more selective over off-target kinases such as JAK1 than earlier compounds, which it is hoped will give it an improved side effect profile. See also Deucravacitinib References Tyrosine kinase inhibitors Methoxy compounds Cyclobutyl compounds Carboxamides Pyridines Pyrazolopyrimidines Secondary amines
Zasocitinib
[ "Chemistry" ]
137
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
77,129,977
https://en.wikipedia.org/wiki/Axial%20loading
Axial loading is the application of a force to a structure directly along a given axis of that structure. In the medical field, the term refers to the application of weight or force along the course of the long axis of the body. The application of an axial load on the human spine can result in vertebral compression fractures. Axial loading takes place during the practice of head-carrying, an activity which a prospective case–control study in 2020 showed leads to "accelerated degenerative changes, which involve the upper cervical spine more than the lower cervical spine and predisposes it to injury at a lower threshold." References Biomechanics
Axial loading
[ "Physics" ]
132
[ "Biomechanics", "Mechanics" ]
77,131,907
https://en.wikipedia.org/wiki/Big%20Night%20%28amphibians%29
Big Night is an annual event common to amphibians as they emerge from underground hibernation in the spring, travel to vernal pools, and mate. Background The event is referred to as a "big night" because a large number of salamanders move at the same time. Warmer air and loose soil, coupled with rain, cause salamanders to leave their underground burrows. The event takes place at night to minimize predation. The rain on the big night keeps the salamanders' skin from becoming dry. Amphibians such as salamanders and frogs in a local area usually use the same overwintering area and the same breeding area, returning generation after generation to the area in which they were spawned. The breeding locations are areas where vernal pools develop from snowmelt and spring rains. The two locations can be a half-mile apart or even farther. Although referred to as the Big Night, the event for a species sometimes occurs on several occasions over days or weeks. In temperate areas, the event usually happens when temperatures are optimal for the particular species, after a rain. The salamander gathering for the mating ritual is known as a salamander congress. Human assistance In some areas where the path of migration crosses a roadway, volunteers may assist the amphibians to cross the road safely. In others, amphibian and reptile tunnels have been built to funnel the migrating animals safely under the roadway. In popular culture Loren Eiseley wrote an essay, The Dance of the Frogs, about Big Night. References Amphibians Annual events Mating systems
Big Night (amphibians)
[ "Biology" ]
323
[ "Behavior", "Animals", "Amphibians", "Mating systems", "Mating" ]
77,132,409
https://en.wikipedia.org/wiki/Claudia%20Turro
Claudia Turro is an American inorganic chemist who is the Dow Professor of Chemistry at The Ohio State University (OSU). Since July 2019 she has been the Chair of the OSU Department of Chemistry and Biochemistry. She was elected Fellow of the American Chemical Society in 2010 and is a member of the American Academy of Arts and Sciences (2023) and the National Academy of Sciences (2024). Education Claudia Turro earned her B.S. with Honors from Michigan State University in 1987. She completed her Ph.D. in 1992 at the same institution, where she collaborated with Daniel G. Nocera and George E. Leroi. Following this, she was awarded a Jane Coffin Childs Memorial Fund for Medical Research Postdoctoral Fellowship, which allowed her to conduct postdoctoral research at Columbia University with Nicholas J. Turro (no relation) from 1992 to 1995. Research and career Turro joined the faculty of The Ohio State University in 1996. She and her group study light-initiated reactions of metal complexes, with applications in photochemotherapy (PCT) and treatment of diseases, luminescent sensors, and solar energy conversion. They investigate the excited states of mononuclear and dinuclear transition metal complexes to enhance their reactivity. Their research focuses on controlling the dynamics of excited states, including photophysical properties and reactivity, such as energy transfer, charge separation, recombination, and photochemical reactions. This understanding is crucial for applications in solar energy, PCT, and sensing. Awards and honors 1998 Early CAREER Award by the National Science Foundation 1999 Arnold and Mabel Beckman Foundation Young Investigators Award 2010 Elected Fellow of the American Chemical Society 2012 Fellow of the American Association for the Advancement of Science 2014 Award in Photochemistry from the Inter-American Photochemical Society 2016 Recipient of Edward W. Morley Medal from the Cleveland section of the ACS 2023 Elected member of the American Academy of Arts and Sciences 2024 Elected member of the National Academy of Sciences Selected publications References Living people American chemists American inorganic chemists American women chemists Fellows of the American Chemical Society Lists of American Academy of Arts and Sciences members Members of the United States National Academy of Sciences Year of birth missing (living people)
Claudia Turro
[ "Chemistry" ]
451
[ "American inorganic chemists", "Inorganic chemists" ]
77,132,719
https://en.wikipedia.org/wiki/IRAS%2023077%2B6707
IRAS 23077+6707 (Dracula's Chivito) is a protoplanetary disk seen edge-on. The disk blocks the light of the young star, causing the dark band in the middle. Dust particles scatter the light from the star, causing the bright nebula above and below the disk. The disk is 11 arcseconds in diameter and its distance is poorly constrained. Name IRAS 23077+6707 is the name of the infrared source observed by IRAS. The discoverers named the object Dracula's Chivito (DraChi), in reference to Gomez's Hamburger (GoHam), a well-known edge-on protoplanetary disk. The first part of the name refers to the fictional character Count Dracula, chosen because the first author, Ciprian Berghea, grew up in Transylvania and because the very faint protrusions extending far out north from the two disk lobes resemble 'fangs'. The second part refers to a chivito, suggested by the co-author Ana Mosquera, who is from Uruguay. Discovery IRAS 23077+6707 was first observed as a possible pre-main-sequence star in 1993, and in 2014 it was identified as a possible young stellar object with the help of AKARI. The disk around IRAS 23077+6707 was discovered in 2016 from Pan-STARRS images during a search for active galactic nuclei. Later a group of French amateur astronomers suspected this object to be a planetary nebula and in 2019 obtained a spectrum of the nebula. This spectrum helped to characterize the star in this system. Physical parameters The discovery paper adopted a distance of around 300 parsecs and measured an inclination of 82° for the disk. The researchers used this distance to infer a disk radius of 1650 astronomical units and a disk mass of 0.2 . The spectrum showed a spectral type of A9 for the central star, with a mass between 1.5 and 2.0 . The central star is suspected to be a Herbig Ae star. DraChi is only the third edge-on disk known to host such a massive star (the previous ones being Gomez's Hamburger and PDS 144N) and the largest of them. Later observations with the Submillimeter Array (SMA) detected carbon monoxide (CO) gas emission in this disk. This gas shows Keplerian rotation, thus confirming a rotating disk around a very young star, as opposed to a planetary nebula around a dying star. See also List of resolved circumstellar disks examples of other protoplanetary disks: TW Hydrae AB Aurigae IM Lupi References Circumstellar disks IRAS catalogue objects Cepheus (constellation) A-type stars Herbig Ae/Be stars
IRAS 23077+6707
[ "Astronomy" ]
569
[ "Constellations", "Cepheus (constellation)" ]
77,133,017
https://en.wikipedia.org/wiki/Z583
Z583 (GLXC-26150) is a chemical compound which acts as a potent and highly selective inhibitor of JAK3, and was developed for the treatment of rheumatoid arthritis. See also Ritlecitinib Tofacitinib References Non-receptor tyrosine kinase inhibitors Pyrazolopyrimidines Ethanolamines Guanidines Pyrazoles Methoxy compounds Anilides
Z583
[ "Chemistry" ]
93
[ "Pharmacology", "Guanidines", "Functional groups", "Medicinal chemistry stubs", "Pharmacology stubs" ]
77,133,261
https://en.wikipedia.org/wiki/Personality%20hire
In recruitment, a personality hire refers to the practice of hiring candidates for their personality rather than their tangible skill set. Personality hires typically have stronger soft skills than hard skills, may serve as a morale booster within the workplace, and help build corporate culture. Some candidates may label themselves as personality hires due to imposter syndrome. The term came into mainstream use in 2023 and is analogous to the term diversity hire. A personality hire may be reflective of an implicit cognitive affinity bias. Personality hires have been criticized for their lack of skills and competency. Due to their sociable personalities, personality hires may have to set personal boundaries. See also Cult of personality References Personality Recruitment Employment services Human resource management
Personality hire
[ "Biology" ]
143
[ "Behavior", "Personality", "Human behavior" ]
75,502,077
https://en.wikipedia.org/wiki/Praseodymium%28III%29%20phosphate
Praseodymium(III) phosphate is an inorganic compound with the chemical formula PrPO4. Preparation Praseodymium(III) phosphate hemihydrate can be obtained by reacting praseodymium chloride and phosphoric acid: PrCl3 + H3PO4 + 0.5 H2O → PrPO4·0.5H2O↓ + 3 HCl It can also be produced by reacting silicon pyrophosphate (SiP2O7) and praseodymium(III,IV) oxide (Pr6O11) at 1200 °C. Properties Praseodymium(III) phosphate forms light green crystals in the monoclinic crystal system, with space group P21/n and cell parameters a = 0.676 nm, b = 0.695 nm, c = 0.641 nm, β = 103.25°, Z = 4. It forms a crystal hydrate of the composition PrPO4·nH2O, where n < 0.5, with light green crystals in the hexagonal crystal system, space group P6222, and cell parameters a = 0.700 nm, c = 0.643 nm, Z = 3. Praseodymium(III) phosphate reacts with sodium fluoride to give Na2PrF2(PO4). References Praseodymium(III) compounds Phosphates
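A plausible balanced equation for the high-temperature route, offered as an illustration (the by-products SiO2 and O2 are an inference, not stated above): Pr6O11 + 3 SiP2O7 → 6 PrPO4 + 3 SiO2 + O2. The O2 term reflects the reduction of the Pr(IV) fraction of the mixed-valence oxide Pr6O11 to Pr(III) in the phosphate.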
Praseodymium(III) phosphate
[ "Chemistry" ]
267
[ "Phosphates", "Salts" ]
75,503,906
https://en.wikipedia.org/wiki/NGC%202210
NGC 2210 is a globular cluster located in the Large Magellanic Cloud, in the constellation Dorado. It is situated south of the celestial equator and, as such, is more easily visible from the southern hemisphere. It was first discovered by astronomer John Herschel on January 31, 1835. In 2017, Rachel Wagner-Kaiser and a group of researchers from the University of Florida found that NGC 2210, as well as five other globular clusters located in the Large Magellanic Cloud, is of roughly the same age as some star clusters found in the Milky Way, and that NGC 2210 is roughly 11.6 billion years old. It was first imaged by the Hubble Space Telescope in 2023. References Globular clusters Large Magellanic Cloud Astronomical objects discovered in 1835 Discoveries by John Herschel Dorado 2210
NGC 2210
[ "Astronomy" ]
173
[ "Dorado", "Constellations" ]
75,504,618
https://en.wikipedia.org/wiki/M.A.%20Mortenson%20Company
The M.A. Mortenson Company, more commonly known under its Mortenson Construction brand, is an American construction company based in Minneapolis, Minnesota, with estimated 2014 sales of $3 billion. Sports venues Mortenson is noted as a general contractor that has built numerous sports stadiums and arenas, including U.S. Bank Stadium, Fiserv Forum, and Chase Center. As of 2014, the company had built over 150 entertainment and sports venues in the United States; by 2018, that number had grown to 170, at a valuation of $11 billion, making Mortenson the second-largest sports arena builder in the country; and by 2023, more than 230 such venues had been built, valued at $15 billion. Its most recently completed sports stadium project is the $1.9 billion Allegiant Stadium, home to the Las Vegas Raiders and the UNLV Rebels football team; slated for 2024 is a proposed $1.5 billion ballpark, also in Las Vegas, that will house the relocated Oakland Athletics. Renewable energy Mortenson Construction is also active in the field of renewable energy, having started in 1995 with a single wind turbine. In the area of wind energy, Mortenson received the contract for the 300 MW Blackspring Ridge Wind Project in Carmangay, Alberta, Canada, for EDF-EN Canada. Mortenson had installed a total of 15,000 megawatts of wind power by 2015. Mortenson built the Alamo 6 Solar and Pearl Solar fields in Texas, with over 438,000 and 203,000 panels, respectively, atop 1,797 acres of land in Pecos County. In 2014, having added 512.9 megawatts of solar power capacity, Mortenson was the second-largest US solar installer after First Solar (1,023 megawatts), ahead of SolarCity (502 megawatts). Among its largest projects is the construction of the Solar Star I and II solar power plants in Rosamond, California, with a total output of 597 MW, enough to generate electricity for 255,000 households. History The company was founded in Richfield, Minnesota, in April 1954 by M. A. Mortenson, Sr., formerly a vice president with the D'Arcy Leck Construction Co. While with D'Arcy Leck, Mortenson had supervised the construction of several local schools, a veterinary building on the farm campus of the University of Minnesota, and other industrial and commercial sites. References Construction and civil engineering companies
M.A. Mortenson Company
[ "Engineering" ]
524
[ "Construction and civil engineering companies", "Civil engineering organizations" ]
75,504,801
https://en.wikipedia.org/wiki/Receptor%20degrader
A receptor degrader binds to a receptor and induces its breakdown, causing down-regulation of that receptor's signaling. This mechanism is distinct from that of receptor antagonists and inverse agonists, which reduce receptor signaling but do not cause receptor breakdown. Examples include selective estrogen receptor degraders and androgen receptor degraders, both developed for hormone-sensitive cancers. References Receptor degraders
Receptor degrader
[ "Chemistry" ]
86
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
75,505,829
https://en.wikipedia.org/wiki/48-volt%20electrical%20system
A 48-volt DC electrical system is a relatively low-voltage electrical system that is increasingly used in vehicles. It emerged in the 2010s as a way to supplement propulsion and improve battery recharging during braking, saving fuel in internal combustion engine vehicles, especially mild hybrid vehicles. History Traditionally, vehicle low-voltage applications were powered by a 12-volt system. In the 1990s, an attempt by a cross-industry standards group to specify a 42-volt electrical system failed to catch on and was abandoned by 2009. During the 2010s, renewed interest arose in a 48-volt low-voltage standard for powering automotive electronics, especially in hybrid vehicles. In 2011, the German car manufacturers Audi, BMW, Daimler Benz, Porsche, and Volkswagen agreed on a 48 V system supplementing the legacy 12 V low-voltage automotive standard. In model year 2017, the Renault Scenic dCi Hybrid Assist became the first 48 V mild-hybrid passenger car. As of 2018, a 48 V electrical subsystem was used in production vehicles such as Porsche and Bentley SUVs. Audi and Mercedes-Benz used a 48 V subsystem in 2018 vehicles such as the A6, A7, and A8 with the 3.0 TDI 48 V mild hybrid, and the CLS, E-Class, and S-Class with the M256 3.0 turbocharged petrol 48 V mild hybrid. The Hyundai Tucson, Hyundai Santa Fe, Kia Ceed, and Kia Sportage followed in model year 2019 with 1.6 and 2.0 turbodiesel engines supported by 48 V mild-hybrid technology. A European automotive trade association, CLEPA, estimated in 2018 that as many as one in ten new vehicles in 2025 (about 15 million vehicles per year) would use at least one 48-volt device. In March 2023, Tesla Inc. revealed that the Tesla Cybertruck and its next-generation vehicle would use a 48-volt mid-voltage subsystem as a replacement for the 12 V system, migrating the low-voltage components with the highest power demand to 48 V. In December 2023, in order to accelerate the adoption of 48 V system voltage by other automakers, Tesla offered a "48-volt electrical system whitepaper" to industry leaders. Ford CEO Jim Farley confirmed that his company had received a copy and agreed to "help the supply base move into the 48-volt future". Tesla also adopted 48 volts for its Optimus robot. Benefits A 48 V system can provide more power, improve energy recuperation, and allow up to an 85% decrease in cable mass. A 12-volt system can provide only about 3.5 kilowatts, while a 48 V system can deliver 15 to 20 kW, or even 50 kW. 48 volts is below the level that is considered safe in dry conditions without special protective measures. (See the article on electrical injury.) One example of these benefits in use is the Gordon Murray Automotive T.50, which uses an integrated starter-generator to power a 48 V air-conditioning compressor without the need for a belt. This allows the engine to rev more freely while giving the vehicle effective air conditioning regardless of engine speed. Other examples are electric turbochargers, active suspension, and rear-wheel steering systems, which require substantial power to run and can be more responsive and capable with a 48 V system. See also Automotive battery Extra-low voltage Load dump List of electric vehicle battery manufacturers References 48 V Vehicle Electrical System – More Than Just a Bridging Technology? Dusan Graovac, Christoph Schulz-Linkholt, Thomas Blasius, 23 April 2020, EE Times/Asia. ISO 21780:2020(en) Road vehicles — Supply voltage of 48 V — Electrical requirements and tests Electric power distribution Automotive electrics
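The cable-mass benefit follows from basic circuit arithmetic: for a fixed load power, current scales inversely with voltage, and resistive cable loss scales with the square of the current. A minimal sketch of that arithmetic, using illustrative values (the load power and cable resistance below are assumptions for the example, not figures from the article):

```python
# Why 48 V allows lighter wiring: for a fixed load power, I = P / V,
# and the loss dissipated in a given cable is P_loss = I^2 * R.
# Load power and cable resistance are illustrative values only.

LOAD_W = 3000.0          # example high-power load, W
CABLE_RESISTANCE = 0.01  # example round-trip cable resistance, ohms

for volts in (12.0, 48.0):
    current = LOAD_W / volts                 # I = P / V
    loss = current ** 2 * CABLE_RESISTANCE   # P_loss = I^2 * R
    print(f"{volts:>4.0f} V: {current:6.1f} A, {loss:6.1f} W lost in cable")

# 12 V: 250.0 A -> 625.0 W lost; 48 V: 62.5 A -> 39.1 W lost.
```

At four times the voltage, the same load draws a quarter of the current and dissipates one-sixteenth of the loss in a given cable, which is why conductors can be made thinner and lighter for the same load.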
48-volt electrical system
[ "Engineering" ]
792
[ "Electrical engineering", "Automotive electrics" ]
75,505,886
https://en.wikipedia.org/wiki/SAGE-324
SAGE-324, also known as BIIB124, is an experimental drug. It is a neurosteroid that acts as a GABAA receptor positive allosteric modulator. SAGE-324 was being developed by Biogen for the treatment of essential tremor, but its development was discontinued in 2024 due to lack of efficacy in Phase 2 clinical trials. References GABAA receptor positive allosteric modulators Experimental drugs Tetrazoles Methoxy compounds Ketones
SAGE-324
[ "Chemistry" ]
99
[ "Ketones", "Functional groups" ]