| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
64,667,394 | https://en.wikipedia.org/wiki/Raffaele%20Mezzenga | Raffaele Mezzenga is a soft condensed matter scientist, currently heading the Laboratory of Food and Soft Materials at the Swiss Federal Institute of Technology in Zurich. He is among the 0.1% most cited scientists in the cross-field category of the Clarivate 2023 Highly Cited Researchers list.
Education
Prof. Mezzenga received his M.S. in Materials Science (1997) from the University of Perugia in Italy, while working on the Alpha Magnetic Spectrometer experiment at the European Organization for Nuclear Research (CERN) and NASA (Space Shuttle Discovery mission STS-91). He then obtained a PhD in Polymer Physics at the Swiss Federal Institute of Technology in Lausanne, Switzerland (2001).
Research and career
Mezzenga did postdoctoral research on semiconductive polymer colloids at the University of California, Santa Barbara (UCSB) and then moved to the Nestlé Research Center in Lausanne as a research scientist, working on the self-assembly of surfactants, natural amphiphiles and lyotropic liquid crystals. In 2005 he was hired as Associate Professor in the Physics Department of the University of Fribourg, and he joined ETH Zurich in 2009 as Full Professor.
His research focuses on the fundamental understanding of self-assembly processes in polymers, lyotropic liquid crystals, and biological and food colloidal systems. His work has led to over 400 scientific publications and about 20 patents. He has made seminal contributions to several fields of soft condensed matter, such as protein aggregation and the self-organisation of biopolymers and surfactants. He has pioneered the use of protein-based materials in new technologies for environmental remediation, health and advanced materials design.
Awards and honours
Prof. Mezzenga received the 2011 John H. Dillon Medal of the American Physical Society and was elected Fellow of the American Physical Society in 2017. Other awards include the 2011 Young Scientist Research Award of the American Oil Chemists' Society, the 2013 Biomacromolecules/Macromolecules Young Investigator Award of the American Chemical Society, and the 2019 Spark Award for the most promising ETH Zurich invention of the year.
Mezzenga has served as an Executive, Associate and Guest Editor for various journals including Food Biophysics, Food Hydrocolloids, Polymer International and Trends in Food Science, and has been a board member of the Swiss Chemical Society for over 15 years.
External links
References
Year of birth missing (living people)
Living people
Condensed matter physicists
21st-century Swiss scientists
Fellows of the American Physical Society
People associated with CERN
University of Perugia alumni
École Polytechnique Fédérale de Lausanne alumni
Academic staff of the University of Fribourg
Academic staff of ETH Zurich | Raffaele Mezzenga | [
"Physics",
"Materials_science"
] | 561 | [
"Condensed matter physicists",
"Condensed matter physics"
] |
64,668,653 | https://en.wikipedia.org/wiki/OpenCell | OpenCell is a laboratory in London.
Laboratories
OpenCell is primarily used for biochemical and biomolecular work such as DNA sequencing. It opened to the public in June 2018. The space uses shipping containers to house biotechnology laboratories. The laboratories contain biotechnology equipment including real-time PCR instruments, plate readers, Opentrons liquid handling robots, flow hoods, non-ducted fume cupboards, −80 °C, −20 °C and 4 °C storage, static and shaking incubators, refrigerated centrifuges (1 ml–50 ml), and bench space.
COVID-19 testing
In August 2020, a shipping container laboratory for COVID-19 diagnostics was delivered to the Bailiwick of Jersey. The laboratory began processing tests on Tuesday, September 15, with 170 samples, collected from arriving airport passengers, processed within an average of 12 hours. Deputy Medical Officer of Health Dr Ivan Muscat said: "The opening of the COVID-19 laboratory is a significant milestone in managing Jersey's testing requirements."
References
Companies based in the London Borough of Hammersmith and Fulham
Public laboratories
Shipping containers
2018 establishments in England
COVID-19 pandemic in England | OpenCell | [
"Biology"
] | 250 | [
"Biotechnology stubs"
] |
64,668,694 | https://en.wikipedia.org/wiki/Opentrons | Opentrons Labworks, Inc. (Opentrons) is a biotechnology company that manufactures liquid handling robots running open-source software; the company at one point also used open-source hardware, but no longer does. Its robots can be used by scientists to manipulate small volumes of liquids for the purpose of undertaking biochemical or chemical reactions. Currently, the company offers the OT-2 and Flex robots. These robots are used primarily by researchers and scientists interested in DIY biology, but they are increasingly being used by other biologists.
Products
Current:
OT-2 – The OT-2 was released in 2018 and has been used by researchers as one of the tools in the fight against COVID-19. The OT-2 and later products, including its electronic micropipettes and hardware modules, are closed-source (proprietary) hardware. Only coarse CAD files for the enclosure have been released, with no details on the internals, so the hardware no longer complies with current open hardware standards. The software remains open source.
Flex – Successor to the OT-2, the Flex was released in 2023, "measures two feet by two feet by two feet", and is purchased with a one-time cost rather than a robot as a service (RaaS) subscription. Its open-source and accessible API allows it to interact with potential AI tools.
Flex Prep – Similar to the Flex, the Flex Prep was released in 2024 and provides a no-code software for setting up pipetting tasks and executing that workflow through the Flex Prep touchscreen.
Discontinued:
OT-1 – The Opentrons OT-1 was the result of a crowdfunding campaign on the Kickstarter platform and was released in 2015 for $2,000. This robot employed adapters to actuate handheld micropipettes. The release of the OT-1 marked the first commercial open-source liquid handling robot in the life science industry. It was also the last in the series to adhere to open hardware standards, although editable CAD files were not released. It is no longer commercially available, though at least one replication has been attempted.
History
The company originated from Genspace, a community biology laboratory in Brooklyn, New York. Will Canine, a biohacker and former Occupy Wall Street organizer, partnered with Nicholas Wagner and Chiu Chau, his eventual co-founders, whom he found through a DIY-bio listserv.
In 2014, the startup officially launched with financial backing from HAXLR8TR, a hardware accelerator in Shenzhen, China. In late 2014, the company launched a Kickstarter campaign, and after it was successfully funded, demonstrated its machine inserting DNA into E. coli. Jonathan Brennan-Badal, who was VP of strategy at ComiXology and a board member of Genspace, joined Opentrons in 2014 and is the current CEO.
In 2016, Opentrons was part of Y Combinator's Winter cohort of startups.
Impact
Opentrons robots have had a variety of uses in the scientific and DIY community. Scientists at UCSD modified an existing OT-1 robot to automate reagent addition and imaging for their cell signaling experiments. Scientists at Carnegie Mellon University used the OT-2, the Opentrons Python API, and OpenAI's GPT-4 to autonomously design, plan, and perform experiments.
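The Opentrons Python API mentioned above is scripted as ordinary Python files. The following is a minimal, illustrative sketch of a Protocol API v2 script, not a reproduction of any experiment described here; the plate, tip rack, and pipette names are standard Opentrons labware identifiers chosen only for the example.

```python
from opentrons import protocol_api

metadata = {"apiVersion": "2.13", "protocolName": "Minimal transfer sketch"}

def run(protocol: protocol_api.ProtocolContext):
    # Load labware onto numbered deck slots using standard labware definitions.
    plate = protocol.load_labware("corning_96_wellplate_360ul_flat", location="1")
    tips = protocol.load_labware("opentrons_96_tiprack_300ul", location="2")
    # Attach a single-channel P300 pipette to the right mount.
    p300 = protocol.load_instrument("p300_single_gen2", mount="right", tip_racks=[tips])
    # transfer() handles tip pick-up, aspiration, dispensing, and tip disposal.
    p300.transfer(100, plate["A1"], plate["B1"])
```

A script like this is typically simulated and then executed through the Opentrons App on the connected robot.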
During the COVID-19 pandemic, Opentrons helped set up the Pandemic Response Lab (PRL), a sequencing facility located in Queens, New York. Opentrons' robots at the PRL helped speed up turnaround time for COVID-19 testing, going from 7 to 14 days to 12 hours, and reducing costs from $2,000 to under $28. Institutions that made use of Opentrons' robots for COVID-19 testing include: Mayo Clinic, Harvard, Stanford, Caltech, MIT, and BioNTech.
Subsidiaries
As a company, Opentrons has a number of subsidiaries.
Opentrons Robotics – business unit for user-friendly lab automation
Pandemic Response Lab (PRL) – in partnership with NYU Langone Health, provided diagnostic lab services to health systems across the US; it was shut down as of December 31, 2022
Neochromosome (Neo) – acquired in March 2021, Neo creates genome-scale cell engineering solutions for therapeutics
Zenith AI – acquired in June 2021, Zenith AI brings no-code AI and modern machine learning to the platform
See also
Laboratory automation
Liquid handling robot
List of biotech and pharmaceutical companies in the New York metropolitan area
References
External links
Opentrons' Y Combinator profile
Opentrons' GitHub organization page
Laboratory robots
Open-source robots
Biotechnology companies
Companies based in Queens, New York
Y Combinator companies | Opentrons | [
"Engineering",
"Biology"
] | 976 | [
"Biotechnology organizations",
"Biotechnology companies"
] |
64,668,822 | https://en.wikipedia.org/wiki/Rainbow-independent%20set | In graph theory, a rainbow-independent set (ISR) is an independent set in a graph, in which each vertex has a different color.
Formally, let G = (V, E) be a graph, and suppose the vertex set V is partitioned into subsets V1, ..., Vk, called "colors". A set U of vertices is called a rainbow-independent set if it satisfies both of the following conditions:
It is an independent set – no two vertices in U are adjacent (there is no edge between them);
It is a rainbow set – U contains at most one vertex from each color Vi.
Other terms used in the literature are independent set of representatives, independent transversal, and independent system of representatives.
As an example application, consider a faculty with k departments, where some faculty members dislike each other. The dean wants to construct a committee with k members, one member per department, but without any pair of members who dislike each other. This problem can be presented as finding an ISR in a graph in which the nodes are the faculty members, the edges describe the "dislike" relations, and the subsets V1, ..., Vk are the departments. A brute-force search for such a committee is sketched below.
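The following sketch makes the committee example concrete by brute force: it tries every way of choosing one representative per color and returns the first choice that is independent. It is exponential in the number of colors and purely illustrative; the department and edge data are made up.

```python
from itertools import product

def find_isr(colors, edges):
    """colors: list of vertex lists (the color classes); edges: iterable of vertex pairs."""
    edge_set = {frozenset(e) for e in edges}
    for choice in product(*colors):            # one candidate representative per color
        distinct = len(set(choice)) == len(choice)
        independent = all(frozenset((u, v)) not in edge_set
                          for i, u in enumerate(choice) for v in choice[i + 1:])
        if distinct and independent:
            return list(choice)                # a rainbow-independent set
    return None

departments = [["a1", "a2"], ["b1", "b2"], ["c1", "c2"]]   # the color classes
dislikes = [("a1", "b1"), ("b2", "c1")]                    # conflict edges
print(find_isr(departments, dislikes))                     # ['a1', 'b2', 'c2']
```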
Variants
It is assumed for convenience that the sets V1, ..., Vk are pairwise-disjoint. In general the sets may intersect, but this case can easily be reduced to the case of disjoint sets: for every vertex v, form a copy of v for each color Vi that contains v. In the resulting graph, connect all copies of v to each other. In the new graph, the color sets are disjoint, and each ISR corresponds to an ISR in the original graph.
ISR generalizes the concept of a system of distinct representatives (SDR, also known as transversal). Every transversal is an ISR where in the underlying graph, all and only copies of the same vertex from different sets are connected.
Existence of rainbow-independent sets
There are various sufficient conditions for the existence of an ISR.
Condition based on vertex degree
Intuitively, when the departments are larger, and there is less conflict between faculty members, an ISR should be more likely to exist. The "less conflict" condition is represented by the vertex degree of the graph. This is formalized by the following theorem: If the degree of every vertex in G is at most d, and the size of each color-set is at least 2d, then G has an ISR. The factor 2d is best possible: there are graphs with vertex degree d and colors of size 2d − 1 without an ISR. A more precise version of this bound is also known.
Condition based on dominating sets
Below, given a subset of colors (a subset of ), we denote by the union of all subsets in (all vertices whose color is one of the colors in ), and by the subgraph of induced by . The following theorem describes the structure of graphs that have no ISR but are edge-minimal, in the sense that whenever any edge is removed from them, the remaining graph has an ISR. If has no ISR, but for every edge in , has an ISR, then for every edge in , there exists a subset of the colors and a set of edges of , such that:
The vertices and are both in ;
The edge is in ;
The set of vertices adjacent to dominates ;
;
is a matching – no two edges of it are adjacent to the same vertex.
Hall-type condition
Below, given a subset of colors (a subset of ), an independent set of is called special for if for every independent subset of vertices of of size at most , there exists some in such that is also independent. Figuratively, is a team of "neutral members" for the set of departments, that can augment any sufficiently small set of non-conflicting members, to create a larger such set. The following theorem is analogous to Hall's marriage theorem:If, for every subset S of colors, the graph contains an independent set that is special for , then has an ISR.Proof idea. The theorem is proved using Sperner's lemma. The standard simplex with endpoints is assigned a triangulation with some special properties. Each endpoint of the simplex is associated with the color-set , each face of the simplex is associated with a set of colors. Each point of the triangulation is labeled with a vertex of such that: (a) For each point on a face , is an element of – the special independent set of . (b) If points and are adjacent in the 1-skeleton of the triangulation, then and are not adjacent in . By Sperner's lemma, there exists a sub-simplex in which, for each point , belongs to a different color-set; the set of these is an ISR.
The above theorem implies Hall's marriage condition. To see this, it is useful to state the theorem for the special case in which is the line graph of some other graph ; this means that every vertex of is an edge of , and every independent set of is a matching in . The vertex-coloring of corresponds to an edge-coloring of , and a rainbow-independent-set in corresponds to a rainbow-matching in . A matching in is special for , if for every matching in of size at most , there is an edge in such that is still a matching in .Let be a graph with an edge-coloring. If, for every subset of colors, the graph contains a matching that is special for , then has a rainbow-matching.
Let be a bipartite graph satisfying Hall's condition. For each vertex of , assign a unique color to all edges of adjacent to . For every subset of colors, Hall's condition implies that has at least neighbors in , and therefore there are at least edges of adjacent to distinct vertices of . Let be a set of such edges. For any matching of size at most in , some element of has a different endpoint in than all elements of , and thus is also a matching, so is special for . The above theorem implies that has a rainbow matching . By definition of the colors, is a perfect matching in .
Another corollary of the above theorem is the following condition, which involves both vertex degree and cycle length:If the degree of every vertex in is at most 2, and the length of each cycle of is divisible by 3, and the size of each color-set is at least 3, then has an ISR.Proof. For every subset of colors, the graph contains at least vertices, and it is a union of cycles of length divisible by 3 and paths. Let be an independent set in containing every third vertex in each cycle and each path. So contains at least vertices. Let be an independent set in of size at most . Since the distance between each two vertices of is at least 3, every vertex of is adjacent to at most one vertex of . Therefore, there is at least one vertex of which is not adjacent to any vertex of . Therefore is special for . By the previous theorem, has an ISR.
Condition based on homological connectivity
One family of conditions is based on the homological connectivity of the independence complex of subgraphs. To state the conditions, the following notation is used:
denotes the independence complex of a graph (that is, the abstract simplicial complex whose faces are the independent sets in ).
denotes the homological connectivity of a simplicial complex (i.e., the largest integer such that the first homology groups of are trivial), plus 2.
is the set of indices of colors, For any subset of , is the union of colors for in .
is the subgraph of induced by the vertices in .
The following condition is implicit in and proved explicitly in. If, for all subsets of :
then the partition admits an ISR.As an example, suppose is a bipartite graph, and its parts are exactly and . In this case so there are four options for :
then and and the connectivity is infinite, so the condition holds trivially.
then is a graph with vertices and no edges. Here all vertex sets are independent, so is the power set of , i.e., it has a single -simplex (and all its subsets). It is known that a single simplex is -connected for all integers , since all its reduced homology groups are trivial (see simplicial homology). Hence the condition holds.
this case is analogous to the previous one.
then , and contains two simplices and (and all their subsets). The condition is equivalent to the condition that the homological connectivity of is at least 0, which is equivalent to the condition that is the trivial group. This holds if-and-only-if the complex contains a connection between its two simplices and . Such a connection is equivalent to an independent set in which one vertex is from and one is from . Thus, in this case, the condition of the theorem is not only sufficient but also necessary.
Other conditions
Every properly coloured triangle-free graph of chromatic number contains a rainbow-independent set of size at least .
Several authors have studied conditions for existence of large rainbow-independent sets in various classes of graphs.
Computation
The ISR decision problem is the problem of deciding whether a given graph and a given partition of its vertices into colors admits a rainbow-independent set. This problem is NP-complete. The proof is by reduction from the 3-dimensional matching problem (3DM). The input to 3DM is a tripartite hypergraph (X, Y, Z, T), where X, Y, Z are vertex-sets of size n, and T is a set of triplets, each of which contains a single vertex of each of X, Y, Z. An input to 3DM can be converted into an input to ISR as follows:
For each triplet t in T, there is a vertex vt in the graph G;
For each vertex x in X, the color class Vx consists of the vertices vt for which the triplet t contains x;
For each pair of triplets s, t in T that share a vertex of Y, there is an edge (vs, vt) in G;
For each pair of triplets s, t in T that share a vertex of Z, there is an edge (vs, vt) in G.
In the resulting graph G, an ISR corresponds to a set of triplets such that:
Each triplet has a different X-value (since each triplet belongs to a different color-set Vx);
Each triplet has a different Y-value and a different Z-value (since the corresponding vertices are independent).
Therefore, the resulting graph admits an ISR if and only if the original hypergraph admits a 3DM.
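The reduction above can be written down directly. The sketch below builds the ISR instance from a list of triplets: each triplet becomes a vertex, the color classes group triplets by their X-coordinate, and two triplet-vertices are joined whenever they share a Y- or Z-coordinate. Variable names and data structures are illustrative, and the brute-force find_isr sketch given earlier can be run on the output.

```python
from collections import defaultdict
from itertools import combinations

def three_dm_to_isr(triplets):
    """triplets: list of (x, y, z) tuples. Returns (colors, edges) for the ISR instance."""
    by_x = defaultdict(list)
    for t in triplets:
        by_x[t[0]].append(t)                   # color classes: triplets sharing an x-vertex
    colors = list(by_x.values())
    edges = [(s, t) for s, t in combinations(triplets, 2)
             if s[1] == t[1] or s[2] == t[2]]  # conflict: shared y- or z-vertex
    return colors, edges

triplets = [("x1", "y1", "z1"), ("x1", "y2", "z2"),
            ("x2", "y2", "z1"), ("x2", "y1", "z2")]
colors, edges = three_dm_to_isr(triplets)
# An ISR picking one triplet per x-class with no shared y or z is exactly a 3D matching
# covering X; this particular instance has none, so no ISR exists either.
```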
An alternative proof is by reduction from SAT.
Related concepts
If G is the line graph of some other graph H, then the independent sets in G are the matchings in H. Hence, a rainbow-independent set in G is a rainbow matching in H. See also matching in hypergraphs.
Another related concept is a rainbow cycle, which is a cycle in which each vertex has a different color.
When an ISR exists, a natural question is whether there exist other ISRs, such that the entire set of vertices is partitioned into disjoint ISRs (assuming the number of vertices in each color is the same). Such a partition is called strong coloring.
Using the faculty metaphor:
A system of distinct representatives is a committee of distinct members, with or without conflicts.
An independent set is a committee with no conflict.
An independent transversal is a committee with no conflict, with exactly one member from each department.
A graph coloring is a partitioning of the faculty members into committees with no conflict.
A strong coloring is a partitioning of the faculty members into committees with no conflict and with exactly one member from each department. Thus this problem is sometimes called the happy dean problem.
A rainbow clique or a colorful clique is a clique in which every vertex has a different color. Every clique in a graph corresponds to an independent set in its complement graph. Therefore, every rainbow clique in a graph corresponds to a rainbow-independent set in its complement graph.
See also
Graph coloring
List coloring
Rainbow coloring
Rainbow-colorable hypergraph
Independence complex
References
Graph theory
Rainbow problems
NP-complete problems | Rainbow-independent set | [
"Mathematics"
] | 2,436 | [
"Discrete mathematics",
"Graph theory",
"Computational problems",
"Combinatorics",
"Mathematical relations",
"Mathematical problems",
"NP-complete problems"
] |
64,670,859 | https://en.wikipedia.org/wiki/Italian%20Federation%20of%20Chemical%20and%20Oil%20Workers | The Italian Federation of Chemical and Oil Workers (, FILCEP) was a trade union representing workers in the chemical and mining industries in Italy.
The union was founded in 1960, when the Italian Federation of Chemical Workers merged with the Italian Union of Oil Workers and the Italian Federation of Mining Industry Workers. Like its predecessors, it affiliated to the Italian General Confederation of Labour. In December 1968, it merged with the Federation of Glass and Ceramics, to form the Italian Federation of Chemical and Allied Workers.
General Secretaries
1960: Angelo Di Gioia
1968: Giovan Battista Trespidi
References
Chemical industry in Italy
Chemical industry trade unions
Trade unions established in 1960
Trade unions disestablished in 1968
Trade unions in Italy | Italian Federation of Chemical and Oil Workers | [
"Chemistry"
] | 143 | [
"Chemical industry trade unions"
] |
64,671,582 | https://en.wikipedia.org/wiki/Collective%20classification | In network theory, collective classification is the simultaneous prediction of the labels for multiple objects, where each label is predicted using information about the object's observed features, the observed features and labels of its neighbors, and the unobserved labels of its neighbors. Collective classification problems are defined in terms of networks of random variables, where the network structure determines the relationship between the random variables. Inference is performed on multiple random variables simultaneously, typically by propagating information between nodes in the network to perform approximate inference. Approaches that use collective classification can make use of relational information when performing inference. Examples of collective classification include predicting attributes (ex. gender, age, political affiliation) of individuals in a social network, classifying webpages in the World Wide Web, and inferring the research area of a paper in a scientific publication dataset.
Motivation and background
Traditionally, a major focus of machine learning is to solve classification problems. (For example, given a collection of e-mails, we wish to determine which are spam, and which are not.) Many machine learning models for performing this task will try to categorize each item independently, and focus on predicting the class labels separately. However, the prediction accuracy for the labels whose values must be inferred can be improved with knowledge of the correct class labels for related items. For example, it is easier to predict the topic of a webpage if we know the topics of the webpages that link to it. Similarly, the chance of a particular word being a verb increases if we know that the previous word in the sentence is a noun; knowing the first few characters in a word can make it much easier to identify the remaining characters. Many researchers have proposed techniques that attempt to classify samples in a joint or collective manner, instead of treating each sample in isolation; these techniques have enabled significant gains in classification accuracy.
Example
Consider the task of inferring the political affiliation of users in a social network, where some portion of these affiliations are observed, and the remainder are unobserved. Each user has local features, such as their profile information, and links exist between users who are friends in this social network. An approach that does not collectively classify users will consider each user in the network independently and use their local features to infer party affiliations. An approach which performs collective classification might assume that users who are friends tend to have similar political views, and could then jointly infer all unobserved party affiliations while making use of the rich relational structure of the social network.
Definition
Consider the semi-supervised learning problem of assigning labels to nodes in a network by using knowledge of a subset of the nodes' labels. Specifically, we are given a network represented by a graph with a set of nodes and an edge set representing relationships among nodes. Each node is described by its attributes: a feature vector and its label (or class).
The set of nodes can further be divided into two subsets: the nodes for which we know the correct label values (observed variables), and the nodes whose labels must be inferred. The collective classification task is to label the latter nodes with a label from a given label set.
In such settings, traditional classification algorithms assume that the data is drawn independently and identically from some distribution (iid). This means that the labels inferred for nodes whose label is unobserved are independent of each other. One does not make this assumption when performing collective classification. Instead, there are three distinct types of correlations that can be utilized to determine the classification or label of a node:
The correlations between the label of a node and its observed attributes. Traditional iid classifiers which make use of feature vectors are an example of approaches that use this correlation.
The correlations between the label of a node and the observed attributes (including observed labels) of nodes in its neighborhood.
The correlations between the label of a node and the unobserved labels of nodes in its neighborhood.
Collective classification refers to the combined classification of a set of interlinked objects using the three above types of information.
Methods
There are several existing approaches to collective classification. The two major methods are iterative methods and methods based on probabilistic graphical models.
Iterative methods
The general idea for iterative methods is to iteratively combine and revise individual node predictions so as to reach an equilibrium. When updating predictions for individual nodes is a fast operation, the complexity of these iterative methods will be the number of iterations needed for convergence. Though convergence and optimality is not always mathematically guaranteed, in practice, these approaches will typically converge quickly to a good solution, depending on the graph structure and problem complexity. The methods presented in this section are representative of this iterative approach.
Label propagation
A natural assumption in network classification is that adjacent nodes are likely to have the same label (i.e., contagion or homophily). Under the label propagation method, the predictor for a node is a weighted average of the labels of its neighbors.
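A minimal sketch of this weighted-average update is given below, assuming binary labels encoded as 0/1, a dictionary-of-weighted-neighbors graph, and clamped observed nodes; the undamped update and the convergence test are simplifications chosen for the example rather than features of any specific published variant.

```python
def label_propagation(neighbors, seeds, n_iter=100, tol=1e-6):
    """neighbors: dict node -> list of (neighbor, weight); seeds: dict node -> 0.0 or 1.0."""
    scores = {v: seeds.get(v, 0.5) for v in neighbors}   # unobserved nodes start undecided
    for _ in range(n_iter):
        delta = 0.0
        for v in neighbors:
            if v in seeds:                               # observed labels stay clamped
                continue
            total = sum(w for _, w in neighbors[v])
            if total == 0:
                continue
            new = sum(w * scores[u] for u, w in neighbors[v]) / total   # weighted average
            delta = max(delta, abs(new - scores[v]))
            scores[v] = new
        if delta < tol:
            break
    return {v: int(s >= 0.5) for v, s in scores.items()}  # threshold to hard labels

graph = {"a": [("b", 1.0)], "b": [("a", 1.0), ("c", 1.0)], "c": [("b", 1.0)]}
print(label_propagation(graph, seeds={"a": 1.0}))          # {'a': 1, 'b': 1, 'c': 1}
```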
Iterative Classification Algorithms (ICA)
While label propagation is surprisingly effective, it may sometimes fail to capture complex relational dynamics. More sophisticated approaches can use richer predictors.
Suppose we have a classifier that has been trained to classify a node given its features and the features and labels of its neighbors. Iterative classification applies this local classifier to each node, using current predictions and ground-truth information about the node's neighbors, and iterates until the local predictions converge to a global solution. Iterative classification is an "algorithmic framework," in that it is agnostic to the choice of predictor; this makes it a very versatile tool for collective classification.
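A sketch of the iterative classification framework is given below, written against a generic scikit-learn-style classifier. The relational feature used here (the mean of the neighbors' current labels appended to the node's own features) is one common but arbitrary choice, and the classifier is assumed to have been trained on features laid out the same way.

```python
import numpy as np

def iterative_classification(clf, X, neighbors, observed, n_iter=10):
    """clf: trained classifier with predict(); X: (n_nodes, n_features) array;
    neighbors: list of neighbor-index lists; observed: dict node index -> known label."""
    labels = np.zeros(len(X))
    for v, y in observed.items():
        labels[v] = y                                      # bootstrap with ground truth
    for _ in range(n_iter):                                # iterate predictions to a fixed point
        for v in range(len(X)):
            if v in observed:
                continue
            nbrs = neighbors[v]
            relational = labels[nbrs].mean() if nbrs else 0.0
            features = np.append(X[v], relational).reshape(1, -1)
            labels[v] = clf.predict(features)[0]           # revise this node's prediction
    return labels
```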
Collective classification with graphical models
Another approach to collective classification is to represent the problem with a graphical model and use learning and inference techniques for the graphical modeling approach to arrive at the correct classifications. Graphical models are tools for joint, probabilistic inference, making them ideal for collective classification. They are characterized by a graphical representation of a probability distribution, in which random variables are nodes in a graph. Graphical models can be broadly categorized by whether the underlying graph is directed (e.g., Bayesian networks or collections of local classifiers) or undirected (e.g., Markov random fields (MRF)).
Gibbs sampling
Gibbs sampling is a general framework for approximating a distribution. It is a Markov chain Monte Carlo algorithm, in that it iteratively samples from the current estimate of the distribution, constructing a Markov chain that converges to the target (stationary) distribution.
The basic idea of Gibbs sampling is to sample a label estimate for each unobserved node, conditioned on the current values of its neighboring nodes, using the local classifier, for a fixed number of iterations. After that, we continue sampling labels for each node and maintain count statistics for the number of times each label was sampled for that node. After collecting a predefined number of such samples, we output the best label assignment for each node by choosing the label that was assigned to it the maximum number of times while collecting samples.
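A sketch of this sampling-and-counting scheme is below. It assumes numeric labels, that the local classifier exposes class probabilities in the same order as labels_set (a predict_proba-style interface), and the same neighbor-mean relational feature as in the earlier sketch; burn-in is omitted for brevity.

```python
import numpy as np

def gibbs_collective(clf, X, neighbors, observed, labels_set, n_samples=500, seed=0):
    """Return, for each unobserved node, the label sampled most often."""
    rng = np.random.default_rng(seed)
    current = {v: observed.get(v, rng.choice(labels_set)) for v in range(len(X))}
    counts = {v: {l: 0 for l in labels_set} for v in range(len(X)) if v not in observed}
    for _ in range(n_samples):
        for v in counts:
            nbrs = neighbors[v]
            relational = np.mean([current[u] for u in nbrs]) if nbrs else 0.0
            features = np.append(X[v], relational).reshape(1, -1)
            probs = clf.predict_proba(features)[0]          # local conditional distribution
            current[v] = rng.choice(labels_set, p=probs)    # resample this node's label
            counts[v][current[v]] += 1
    return {v: max(counts[v], key=counts[v].get) for v in counts}
```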
Loopy belief propagation
For certain undirected graphical models, it is possible to efficiently perform exact inference via message passing, or belief propagation algorithms. These algorithms follow a simple iterative pattern: each variable passes its "beliefs" about its neighbors' marginal distributions, then uses the incoming messages about its own value to update its beliefs. Convergence to the true marginals is guaranteed for tree-structured MRFs, but is not guaranteed for MRFs with cycles.
Statistical relational learning (SRL) related
Statistical relational learning is often used to address collective classification problems. A variety of SRL methods have been applied to the collective classification setting. These include direct methods such as probabilistic relational models (PRM), coupled conditional models such as link-based classification,
and indirect methods such as Markov logic networks (MLN) and Probabilistic Soft Logic (PSL).
Applications
Collective classification is applied in many domains which exhibit relational structure, such as:
Social network analysis, where collective approaches to node classification tasks such as detecting malicious users can utilize information about relationships between nodes.
Entity resolution, where one can make use of co-authorship relationships to identify authors of papers.
Named entity recognition, where some approaches treat this as a text sequence labeling problem and jointly infer the labels of every word in a sentence, typically by using a conditional random field which models a linear chain of dependencies between the labels of adjacent words in the sentence.
Document classification, where for example inter-document semantic similarities can be collectively utilized as signals that certain documents belong to the same class.
Computational biology, where graphical models such as Markov random fields are utilized to jointly infer relations between biological entities such as genes.
Computer vision, where for example collective classification can be applied to recognizing multiple objects simultaneously.
See also
Machine learning
Classification
Similarity (network science)
Graph (discrete mathematics)
Statistical relational learning
Bayesian Networks
Markov Random Field
References
Network theory | Collective classification | [
"Mathematics"
] | 1,820 | [
"Network theory",
"Mathematical relations",
"Graph theory"
] |
74,805,735 | https://en.wikipedia.org/wiki/Consumer-resource%20model | In theoretical ecology and nonlinear dynamics, consumer-resource models (CRMs) are a class of ecological models in which a community of consumer species compete for a common pool of resources. Instead of species interacting directly, all species-species interactions are mediated through resource dynamics. Consumer-resource models have served as fundamental tools in the quantitative development of theories of niche construction, coexistence, and biological diversity. These models can be interpreted as a quantitative description of a single trophic level.
A general consumer-resource model consists of M resources, whose abundances are R_1, ..., R_M, and S consumer species, whose populations are N_1, ..., N_S. A general consumer-resource model is described by the system of coupled ordinary differential equations,
dN_i/dt = N_i g_i(R_1, ..., R_M),   dR_α/dt = f_α(R_1, ..., R_M, N_1, ..., N_S),
where g_i, depending only on resource abundances, is the per-capita growth rate of species i, and f_α is the growth rate of resource α. An essential feature of CRMs is that species growth rates and populations are mediated through resources and there are no explicit species-species interactions. Through resource interactions, there are emergent inter-species interactions.
Originally introduced by Robert H. MacArthur and Richard Levins, consumer-resource models have found success in formalizing ecological principles and modeling experiments involving microbial ecosystems.
Models
Niche models
Niche models are a notable class of CRMs which are described by the system of coupled ordinary differential equations,
where is a vector abbreviation for resource abundances, is the per-capita growth rate of species , is the growth rate of species in the absence of consumption, and is the rate per unit species population that species depletes the abundance of resource through consumption. In this class of CRMs, consumer species' impacts on resources are not explicitly coordinated; however, there are implicit interactions.
MacArthur consumer-resource model (MCRM)
The MacArthur consumer-resource model (MCRM), named after Robert H. MacArthur, is a foundational CRM for the development of niche and coexistence theories. The MCRM is given by the following set of coupled ordinary differential equations:where is the relative preference of species for resource and also the relative amount by which resource is depleted by the consumption of consumer species ; is the steady-state carrying capacity of resource in absence of consumption (i.e., when is zero); and are time-scales for species and resource dynamics, respectively; is the quality of resource ; and is the natural mortality rate of species . This model is said to have self-replenishing resource dynamics because when , each resource exhibits independent logistic growth. Given positive parameters and initial conditions, this model approaches a unique uninvadable steady state (i.e., a steady state in which the re-introduction of a species which has been driven to extinction or a resource which has been depleted leads to the re-introduced species or resource dying out again). Steady states of the MCRM satisfy the competitive exclusion principle: the number of coexisting species is less than or equal to the number of non-depleted resources. In other words, the number of simultaneously occupiable ecological niches is equal to the number of non-depleted resources.
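The MCRM is straightforward to integrate numerically. The sketch below uses the commonly cited form of the MacArthur equations, with self-replenishing logistic resource dynamics and consumption proportional to resource abundance; the parameter values are arbitrary illustrations and unit quality and timescale factors are assumed, so this is a generic sketch rather than a transcription of any particular source.

```python
import numpy as np
from scipy.integrate import solve_ivp

S, M = 2, 3                                    # number of species and resources
c = np.array([[1.0, 0.5, 0.2],                 # c[i, a]: preference of species i for resource a
              [0.2, 0.6, 1.0]])
w = np.ones(M)                                 # resource qualities
K = np.array([5.0, 4.0, 3.0])                  # resource carrying capacities
m = np.array([1.0, 1.2])                       # species mortality rates

def mcrm(t, y):
    N, R = y[:S], y[S:]
    dN = N * (c @ (w * R) - m)                 # per-capita growth from consumption, minus mortality
    dR = R * (K - R) - R * (N @ c)             # logistic self-replenishment minus consumption
    return np.concatenate([dN, dR])

y0 = np.concatenate([np.full(S, 0.1), K])      # start species small, resources at carrying capacity
sol = solve_ivp(mcrm, (0.0, 200.0), y0)
print(sol.y[:S, -1])                           # species abundances at the end of the run
```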
Externally supplied resources model
The externally supplied resource model is similar to the MCRM except the resources are provided at a constant rate from an external source instead of being self-replenished. This model is also sometimes called the linear resource dynamics model. It is described by the following set of coupled ordinary differential equations:where all the parameters shared with the MCRM are the same, and is the rate at which resource is supplied to the ecosystem. In the eCRM, in the absence of consumption, decays to exponentially with timescale . This model is also known as a chemostat model.
Tilman consumer-resource model (TCRM)
The Tilman consumer-resource model (TCRM), named after G. David Tilman, is similar to the externally supplied resources model except the rate at which a species depletes a resource is no longer proportional to the present abundance of the resource. The TCRM is the foundational model for Tilman's R* rule. It is described by the following set of coupled ordinary differential equations:where all parameters are shared with the MCRM. In the TCRM, resource abundances can become nonphysically negative.
Microbial consumer-resource model (MiCRM)
The microbial consumer resource model describes a microbial ecosystem with externally supplied resources where consumption can produce metabolic byproducts, leading to potential cross-feeding. It is described by the following set of coupled ODEs:where all parameters shared with the MCRM have similar interpretations; is the fraction of the byproducts due to consumption of resource which are converted to resource and is the "leakage fraction" of resource governing how much of the resource is released into the environment as metabolic byproducts.
Symmetric interactions and optimization
MacArthur's Minimization Principle
For the MacArthur consumer resource model (MCRM), MacArthur introduced an optimization principle to identify the uninvadable steady state of the model (i.e., the steady state so that if any species with zero population is re-introduced, it will fail to invade, meaning the ecosystem will return to said steady state). To derive the optimization principle, one assumes resource dynamics become sufficiently fast (i.e., ) that they become entrained to species dynamics and are constantly at steady state (i.e., ) so that is expressed as a function of . With this assumption, one can express species dynamics as,
where denotes a sum over resource abundances which satisfy . The above expression can be written as , where,
At un-invadable steady state for all surviving species and for all extinct species .
Minimum Environmental Perturbation Principle (MEPP)
MacArthur's Minimization Principle has been extended to the more general Minimum Environmental Perturbation Principle (MEPP) which maps certain niche CRM models to constrained optimization problems. When the population growth conferred upon a species by consuming a resource is related to the impact the species' consumption has on the resource's abundance through the equation, species-resource interactions are said to be symmetric. In the above equation and are arbitrary functions of resource abundances. When this symmetry condition is satisfied, it can be shown that there exists a function such that:After determining this function , the steady-state uninvadable resource abundances and species populations are the solution to the constrained optimization problem:The species populations are the Lagrange multipliers for the constraints on the second line. This can be seen by looking at the KKT conditions, taking to be the Lagrange multipliers:Lines 1, 3, and 4 are the statements of feasibility and uninvadability: if , then must be zero otherwise the system would not be at steady state, and if , then must be non-positive otherwise species would be able to invade. Line 2 is the stationarity condition and the steady-state condition for the resources in nice CRMs. The function can be interpreted as a distance by defining the point in the state space of resource abundances at which it is zero, , to be its minimum. The Lagrangian for the dual problem which leads to the above KKT conditions is, In this picture, the unconstrained value of that minimizes (i.e., the steady-state resource abundances in the absence of any consumers) is known as the resource supply vector.
Geometric perspectives
The steady states of consumer resource models can be analyzed using geometric means in the space of resource abundances.
Zero net-growth isoclines (ZNGIs)
For a community to satisfy the uninvadability and steady-state conditions, the steady-state resource abundances must satisfy,
for all species . The inequality is saturated if and only if species survives. Each of these conditions specifies a region in the space of possible steady-state resource abundances, and the realized steady-state resource abundance is restricted to the intersection of these regions. The boundaries of these regions, specified by , are known as the zero net-growth isoclines (ZNGIs). If species survive, then the steady-state resource abundances must satisfy, . The structure and locations of the intersections of the ZNGIs thus determine what species and feasibly coexist; the realized steady-state community is dependent on the supply of resources and can be analyzed by examining coexistence cones.
Coexistence cones
The structure of ZNGI intersections determines which species can feasibly coexist but does not determine which set of coexisting species will be realized. Coexistence cones determine which species will survive in an ecosystem given a resource supply vector. A coexistence cone generated by a set of species is defined to be the set of possible resource supply vectors which will lead to a community containing precisely that set of species.
To see the cone structure, consider that in the MacArthur or Tilman models, the steady-state non-depleted resource abundances must satisfy, where is a vector containing the carrying capacities/supply rates, and is the th row of the consumption matrix , considered as a vector. As the surviving species are exactly those with positive abundances, the sum term becomes a sum only over surviving species, and the right-hand side resembles the expression for a convex cone with apex and whose generating vectors are the for the surviving species .
Complex ecosystems
In an ecosystem with many species and resources, the behavior of consumer-resource models can be analyzed using tools from statistical physics, particularly mean-field theory and the cavity method. In the large ecosystem limit, there is an explosion of the number of parameters. For example, in the MacArthur model, parameters are needed. In this limit, parameters may be considered to be drawn from some distribution which leads to a distribution of steady-state abundances. These distributions of steady-state abundances can then be determined by deriving mean-field equations for random variables representing the steady-state abundances of a randomly selected species and resource.
MacArthur consumer resource model cavity solution
In the MCRM, the model parameters can be taken to be random variables with means and variances:
With this parameterization, in the thermodynamic limit (i.e., with ), the steady-state resource and species abundances are modeled as a random variable, , which satisfy the self-consistent mean-field equations, where are all moments which are determined self-consistently, are independent standard normal random variables, and and are average susceptibilities which are also determined self-consistently.
This mean-field framework can determine the moments and exact form of the abundance distribution, the average susceptibilities, and the fraction of species and resources that survive at a steady state.
Similar mean-field analyses have been performed for the externally supplied resources model, the Tilman model, and the microbial consumer-resource model. These techniques were first developed to analyze the random generalized Lotka–Volterra model.
See also
Theoretical ecology
Community (ecology)
Competition (biology)
Lotka–Volterra equations
Competitive Lotka–Volterra equations
Generalized Lotka–Volterra equation
Random generalized Lotka–Volterra model
References
Further reading
Stefano Allesina's Community Ecology course lecture notes: https://stefanoallesina.github.io/Theoretical_Community_Ecology/
Ecology
Ordinary differential equations
Mathematical modeling
Biophysics
Community ecology
Ecological niche
Population ecology
Dynamical systems
Random dynamical systems
Theoretical ecology | Consumer-resource model | [
"Physics",
"Mathematics",
"Biology"
] | 2,345 | [
"Mathematical modeling",
"Applied and interdisciplinary physics",
"Applied mathematics",
"Random dynamical systems",
"Ecology",
"Biophysics",
"Mechanics",
"Dynamical systems"
] |
74,806,347 | https://en.wikipedia.org/wiki/IPhone%2015 | The iPhone 15 and iPhone 15 Plus are smartphones developed and marketed by Apple. They are the seventeenth generation of iPhones, succeeding the iPhone 14 and iPhone 14 Plus. The devices were announced on September 12, 2023, during the Apple Event at Apple Park in Cupertino, California alongside the higher-priced iPhone 15 Pro and 15 Pro Max. Pre-orders began on September 15, 2023, and the devices were made available on September 22, 2023. Like the iPhone 15 Pro and Pro Max, the 15 and 15 Plus are the first iPhones to replace the proprietary Lightning connector with USB-C to comply with European Union mandates.
History
In September 2021, the European Commission began considering a proposal to mandate USB-C on all devices in the European Union, including iPhones. Apple analyst Ming-Chi Kuo claimed that Apple would drop its proprietary Lightning connector by 2023. At the time of those claims, Apple was considering switching to USB-C due to the likelihood that the EU proposal would pass. The proposal was passed into law in October 2022, becoming the Radio Equipment Directive. Apple confirmed it would comply with the regulations later that month.
Two weeks prior to the formal introduction of the iPhone 15, it was announced that some of the devices which were made in India would for the first time be sold around the world on the launch day.
Design
The iPhone 15 is the first major redesign since the iPhone 12, featuring rounder edges and slightly curved edges on the display and back glass. Both models are available in five colors: blue, pink, yellow, green and black. This makes it the first entry-level iPhone since the iPhone XR not to ship with a Product Red variant at launch.
Hardware
Display
The iPhone 15 features a display with Super Retina XDR OLED technology at a resolution of 2556×1179 pixels and a pixel density of about 460 PPI with a refresh rate of 60 Hz. The iPhone 15 Plus features a display with the same technology at a resolution of 2796×1290 pixels and a pixel density of about 460 PPI. Both models have an improved typical brightness of up to 1,000 nits, a peak HDR brightness of up to 1,600 nits, and a peak outdoor brightness of up to 2,000 nits. The Dynamic Island feature, previously exclusive to iPhone 14 Pro, is now standard on iPhone 15, replacing the notch that was introduced in the iPhone X.
Charging and transfer speeds
The iPhone 15 and iPhone 15 Plus use USB-C with USB 2.0 transfer speeds (up to 480 Mb/s or 60 MB/s), compared to the iPhone 15 Pro and iPhone 15 Pro Max which have faster USB 3.2 Gen 2 transfer speeds (up to 10 Gb/s or 1.25 GB/s). The iPhone 15 and iPhone 15 Plus, as well as the iPhone 15 Pro and iPhone 15 Pro Max, are the first iPhone models to use USB-C, as well as the first iPhones since the iPhone 5 to switch to a new charging port.
Video output
All iPhone 15 models have support for DisplayPort Alternate Mode over USB-C video output with HDR up to 4K resolution.
Previous iPhone models (from iPhone 5 until iPhone 14) had a maximum supported resolution of 1600 x 900 (slightly less than 1080p FHD) with the Lightning Digital AV Adapter due to technical constraints of the Lightning connector.
Battery
The iPhone 15 Plus offers users up to 26 hours of video playback and up to 100 hours of audio playback, and the iPhone 15 offers significantly less, with up to 20 hours of video playback and up to 80 hours of audio playback.
Software
The iPhone 15 and iPhone 15 Plus launched with iOS 17 and are compatible with iOS 18. Consistent with the UK Product Security and Telecommunications Infrastructure regulation, they will continue to receive major software updates for a minimum of five years, to at least 2028.
Specifications
Criticism
Overheating
Some owners claimed that their iPhone 15s were experiencing overheating issues, reportedly reaching unusually high temperatures. Apple later stated that there were several reasons why the phones heat up, mainly pointing to a software issue, and said it would be fixed with the iOS 17.0.3 update. The overheating issues were reported to persist after the update.
See also
List of iPhone models
History of the iPhone
Comparison of smartphones
Timeline of iPhone models
References
External links
– official website
Mobile phones introduced in 2023
Products introduced in 2023
Mobile phones with 4K video recording
Mobile phones with multiple rear cameras
Flagship smartphones | IPhone 15 | [
"Technology"
] | 927 | [
"Flagship smartphones"
] |
74,810,557 | https://en.wikipedia.org/wiki/Ovine%20forestomach%20matrix | Ovine forestomach matrix (OFM), marketed as AROA ECM, is a layer of decellularized extracellular matrix (ECM) biomaterial isolated from the propria submucosa of the rumen of sheep. OFM is used in tissue engineering and as a tissue scaffold for wound healing and surgical applications.
History
OFM was developed and is manufactured by Aroa Biosurgery Limited (New Zealand, formerly Mesynthes Limited, New Zealand) and was first patented in 2008 and described in the scientific literature in 2010. OFM is manufactured from sheep rumen tissue, using a process of decellularization to selectively remove the unwanted sheep cells and cell components to leave an intact and functional extracellular matrix. OFM comprises a special layer of tissue found in rumen, the propria submucosa, which is structurally and functionally distinct from the submucosa of other gastrointestinal tissues.
OFM was first cleared by the FDA in 2009 for the treatment of wounds. Since 2008 there have been >70 publications describing OFM and its clinical applications, and over 6 million clinical applications of OFM-based devices.
Composition
OFM comprises more than 24 collagens (most notably types I and III), but also contains many growth factors, polysaccharides and proteoglycans that naturally exist as part of the extracellular matrix and play important roles in wound healing and soft tissue repair. The composition includes more than 150 different proteins, including elastin, fibronectin, glycosaminoglycans, basement membrane components, and various growth factors, such as vascular endothelial growth factor (VEGF), fibroblast growth factor (FGF) and platelet-derived growth factor (PDGF). OFM has been shown to recruit mesenchymal stem cells, stimulate cell proliferation, angiogenesis and vasculogenesis, and modulate matrix metalloproteinase and neutrophil elastase. The porous structure of OFM has been characterized by differential scanning calorimetry (DSC), scanning electron microscopy (SEM), atomic force microscopy (AFM), histology, Sirius Red staining, small-angle x-ray scattering (SAXS), and micro-computed tomography (MicroCT). OFM has been shown to contain residual vascular channels that facilitate blood vessel formation through angioconduction.
Tissue engineering
OFM can be fabricated into a range of different product presentations for tissue engineering applications, and can be functionalized with therapeutic agents including silver, doxycycline and hyaluronic acid. OFM has been commercialized as single and multi-layered sheets, reinforced biologics and powders.
When placed in the body OFM does not elicit a negative inflammatory response and is absorbed into the regenerating tissues via a process called tissue remodeling.
Clinical significance
Wound healing
Aroa Biosurgery Limited first distributed OFM commercially in 2012 as Endoform™ Dermal Template (later Endoform™ Natural) through a distribution partnership with Hollister Incorporated (IL, USA). Endoform™ Natural and Endoform™ Antimicrobial (0.3% ionic silver w/w) are single layers of OFM used in the treatment of acute and chronic wounds, including diabetic foot ulcers (DFU) and venous leg ulcers (VLU). Endoform™ Natural has been shown to accelerate wound healing of DFUs. The wound product Symphony™ combines OFM and hyaluronic acid and is designed to support healing during the proliferative phase, particularly in patients whose healing is severely impaired or compromised due to disease.
Complex plastics and reconstructive surgery
OFM was cleared by the FDA in 2016 and 2021 for surgical applications in plastics and reconstructive surgery as a multi-layered product (Myriad Matrix™) and powdered format (Myriad Morcells™). OFM-based surgical devices are routinely used in complex lower extremity reconstruction, pilonidal sinus reconstruction, hidradenitis suppurativa and complex traumatic wounds.
OFM-based surgical devices are also used in plastics and reconstructive surgery for the regeneration of soft tissues when used as an artificial skin.
Hernia repair
Multi-layered OFM devices, reinforced with synthetic polymer, were first described in 2008 and in the scientific literature in 2010. These devices, termed 'reinforced biologics', have been designed for applications in the surgical repair of hernia as an alternative to synthetic surgical mesh (a mesh prosthesis). OFM reinforced biologics are distributed in the US by Tela Bio Inc. Clinical studies have shown that OFM reinforced biologics have lower hernia recurrence rates than synthetic hernia meshes or biologics such as acellular dermis.
References
Tissue engineering | Ovine forestomach matrix | [
"Chemistry",
"Engineering",
"Biology"
] | 1,021 | [
"Biological engineering",
"Cloning",
"Chemical engineering",
"Tissue engineering",
"Medical technology"
] |
74,810,935 | https://en.wikipedia.org/wiki/Miti%20hue | Miti hue is a traditional sauce in Polynesian cuisine made from the flesh of the coconut and salt water mixed together and fermented.
Preparation
Miti hue is prepared from a young coconut at the stage where the flesh of the green coconut starts to harden and begins losing its water. The flesh of the coconut is cut into pieces and placed in a calabash vessel, with salt water and the heads of freshwater prawns. The mixture is left in the sun for a few days to ferment. Miti hue is served as an accompaniment to traditional Tahitian dishes, most notably the fermented fish dish Fafaru. The preparation of Tai monomono is similar to Miti hue, though crushed crustaceans are entirely absent from the recipe. Flavourings like lemon, lime and chilli can also be added to Tai monomono, with the chilli variant being known by its own name.
Fermented coconut sauce is also eaten in Tonga, the Samoan islands and the Polynesian island of Rotuma, but the process differs from Miti hue as the sauce is a byproduct of converting coconut shells into containers, a practice that was common in the West Polynesian islands. A mature coconut has a hole drilled into it and the water inside the nut is removed, replaced with sea water. A stopper is placed into the hole and the nut is left to ferment for a few weeks, resulting in the inner flesh breaking down into a gruel.
Names
Cook Islands:
French Polynesia:
Rotuma:
Samoa and American Samoa:
Tonga:
See also
Taioro – A fermented paste made from coconut meat, eaten in Oceania.
References
Condiments
Fermented foods
French Polynesian cuisine
Cook Islands cuisine
Fijian cuisine
Samoan cuisine
Tongan cuisine
Polynesian cuisine
Foods containing coconut | Miti hue | [
"Biology"
] | 360 | [
"Fermented foods",
"Biotechnology products"
] |
74,812,207 | https://en.wikipedia.org/wiki/Annals%20of%20Combinatorics | Annals of Combinatorics is a quarterly peer-reviewed scientific journal covering research in combinatorics. It was established in 1997 by William Chen and is published by Birkhäuser.
The journal publishes articles in combinatorics and related areas with a focus on algebraic combinatorics, analytic combinatorics, graph theory, and matroid theory.
Until December 2019, the journal was edited by George Andrews, William Chen, and Peter Paule. The current editors-in-chief are Frédérique Bassino, Kolja Knauer, and Matjaž Konvalinka.
Abstracting and indexing
The journal is abstracted and indexed in
MathSciNet,
Science Citation Index Expanded,
Scopus, and
ZbMATH Open.
According to the Journal Citation Reports, the journal has a 2022 impact factor of 0.5.
References
External links
Combinatorics journals
Academic journals established in 1997
Springer Science+Business Media academic journals
Quarterly journals
English-language journals | Annals of Combinatorics | [
"Mathematics"
] | 199 | [
"Combinatorics journals",
"Combinatorics"
] |
74,812,564 | https://en.wikipedia.org/wiki/SPICES%20%28Scouting%29 | The SPICES (Social, Physical, Intellectual, Character/Creativity, Emotional and Spiritual) are learning objectives, or areas of personal development explored through scouting programmes in a number of countries. The acronym was created during the development of the ONE Programme scheme by Scouting Ireland, but has since been adopted by Scouts Canada, Scouts Australia, Scouts New Zealand and Scout Association of Malta. These objectives reflect the aims of Scouting rather than the methodologies – the Scout Method.
Background
On the merging of legacy scout associations to create Scouting Ireland in 2004, a need was identified to merge or replace existing programmes into a unified youth programme, eventually becoming "ONE Programme". Thirty-six fundamental learning objectives, categorised as social, physical, intellectual, character, emotional and spiritual areas, were identified as the central aim of the organisation. Interim steps were identified so that these areas of growth could be targeted across the age ranges of the youth members.
The success of the ONE Programme development prompted other scout organisations to base their youth programme revisions on Scouting Ireland's research. Some examples include Scouts Canada, Scouts Australia, Scouts Aotearoa and the Scout Association of Malta.
National implementations
Australia
The SPICES are adapted for each of the programme sections – Joeys, Cubs, Scouts, Venturers, Rovers.
Canada
As part of the "Canadian Path", from beaver scouts to rover scouts, the SPICES are considered the attributes that best represent well rounded youth, prepared for the world.
The Spiritual element is not necessarily religion-focused, but could include a scout's relationship with an Abrahamic god or connectedness with nature or the global community.
Ireland
In programme books and materials, for Beaver Scouts and Cub Scouts, the SPICES are represented by characters representing those traits.
Beavers track their progress through the SPICES in the Bree (first year), Ruarc (second year) and Conn (third year) lodges.
Cubs track their progress by marking their "travel cards", which contain a checklist of all the learning objectives. SPICES beads and annual personal progress badges are awarded as the travel cards are filled. Venture Scouts plan activities based on a self-assessment of their current personal development using the SPICES (similar to the wheel of life tool).
Scouts, Venture Scouts, Rover Scouts review their progress as part of the general review of programme cycles.
Malta
The "C" in SPICES has been adapted to represent "Creativity". The sections are cubs, scouts, ventures and rovers.
New Zealand
The SPICES are used in the five sections – Keas, Cubs, Scouts, Venturers, Rovers. Scouts Aotearoa has linked the SPICES to a similar concept from the Hauora philosophy of health and wellbeing. There are four dimensions (or whare walls) of hauora: taha tinana (physical well-being – health), taha hinengaro (mental and emotional well-being – self-confidence), taha whanau (social well-being – self-esteem) and taha wairua (spiritual well-being – personal beliefs).
References
Scouting Ireland
Scouting and Guiding in Australia
Scouting and Guiding in Canada
Scouting and Guiding in Malta
Scouting and Guiding in New Zealand
Personal development | SPICES (Scouting) | [
"Biology"
] | 650 | [
"Personal development",
"Behavior",
"Human behavior"
] |
74,813,414 | https://en.wikipedia.org/wiki/List%20of%20bustards | Bustards are birds in the family Otididae in the monotypic order Otidiformes. There are currently 26 extant species of bustards recognised by the International Ornithologists' Union. Many species of fossil bustards are known from the Miocene onwards; however, their exact number and taxonomy are unsettled due to ongoing discoveries.
Conventions
Conservation status codes listed follow the International Union for Conservation of Nature (IUCN) Red List of Threatened Species. Range maps are provided wherever possible; if a range map is not available, a description of the bustard's range is provided. Ranges are based on the IOC World Bird List for that species unless otherwise noted. Population estimates are of the number of mature individuals and are taken from the IUCN Red List.
This list follows the taxonomic treatment (designation and order of species) and nomenclature (scientific and common names) of version 13.2 of the IOC World Bird List. Where the taxonomy proposed by the IOC World Bird List conflicts with the taxonomy followed by the IUCN or the 2023 edition of The Clements Checklist of Birds of the World, the disagreement is noted next to the species's common name (for nomenclatural disagreements) or scientific name (for taxonomic disagreements).
Classification
The International Ornithologists' Union (IOU) recognises 26 species of bustards in twelve genera. This list does not include hybrid species, extinct prehistoric species, or putative species not yet accepted by the IOU.
Family Otididae
Genus Otis: one species
Genus Ardeotis: four species
Genus Chlamydotis: two species
Genus Neotis: four species
Genus Eupodotis: two species
Genus Heterotetrax: three species
Genus Lophotis: three species
Genus Afrotis: two species
Genus Lissotis: two species
Genus Houbaropsis: one species
Genus Sypheotides: one species
Genus Tetrax: one species
Bustards
Notes
References
Lists of birds
Lists of animals
Otididae | List of bustards | [
"Biology"
] | 406 | [
"Lists of biota",
"Lists of animals",
"Animals"
] |
74,814,186 | https://en.wikipedia.org/wiki/Living%20Indus%20Initiative | The Living Indus is an umbrella initiative by Ministry of Climate Change, Government of Pakistan and United Nations in Pakistan.
The original Living Indus Initiative document was developed by a team led by Dr. Adil Najam as its Lead Author. The initiative serves as an overarching program and rallying call to action that seeks to spearhead and unify various efforts aimed at revitalizing the ecological well-being of the Indus River within Pakistan's borders. It emerged as a direct response to Pakistan's heightened susceptibility to the adverse effects of climate change.
Background
The Indus River flows down from the Himalayas, through Indian- and Pakistani-administered Kashmir, Gilgit-Baltistan, and Khyber Pakhtunkhwa, then south-by-southwest through the length of Pakistan before emptying into the Arabian Sea near Karachi.
Ninety percent of Pakistan's people and more than three-quarters of its economy reside in the Indus basin. More than 80% of Pakistan's arable land is irrigated by its waters.
The Indus Basin is facing devastating challenges due to environmental degradation, unsustainable population growth, rapid urbanization and industrialization, the unregulated utilization of resources, inefficient water use, and poverty. The Indus and its ecosystems are under pressure from a seemingly inexorable changing climate, temperature fluctuations and the disruption of rainfall patterns, while efforts to adapt to and mitigate these effects are still at an early stage.
The Indus has supported a civilization for thousands of years, but with the current state of the management of the basin and the impact of climate change on the monsoon and the glacial melt, it might not be able to sustain Pakistan for another 100 years.
Description
Living Indus is an umbrella initiative and a call to action to lead and consolidate initiatives to restore the ecological health of the Indus within the boundaries of Pakistan. The initiatives have been incorporated into a ‘Living Indus’ prospectus jointly developed by the Government of Pakistan and the United Nations. Initiated in 2021 and endorsed by all governments, the initiative is expected to continue receiving support.
The scale of the initiatives requires the adoption of collective and innovative approaches by all stakeholders, including the government, the private sector, and the UN, toward mobilizing resources. The response of Living Indus is one of building resilience and adaptation to the threats the Indus faces from the impacts of both human use and climate change over the next few decades.
A number of specific interventions under the Initiative are now operational, including the 'Recharge Pakistan' project led by the Ministry of Climate Change, Government of Pakistan and WWF-Pakistan.
Interventions
Extensive consultations with the government, led by the Chief Ministers of all the provinces, the public sector, private sector, experts, and civil society led to a ‘living’ menu of 25 preliminary interventions. These interventions are in line with global best practices, focusing on green infrastructure and nature-based approaches driven by the community.
The Ministry of Climate Change and Environmental Coordination (MoCC&EC), Government of Pakistan has highlighted eight priority interventions out of the 25. Implementation plans are being prepared for these.
World Restoration Flagship
Designated as a World Restoration Flagship by the UN Environment Programme, the Living Indus Initiative embodies the principles of the UN Decade on Ecosystem Restoration. This accolade acknowledges its exemplary contributions to large-scale ecosystem restoration and its alignment with global restoration objectives.
Inger Andersen, executive director of UN Environment Programme stated:
References
Environmental organisations based in Pakistan
Climate change organizations
Water organizations
Nature conservation organizations
Ecosystems
Indus basin
Nature conservation in Pakistan | Living Indus Initiative | [
"Biology"
] | 720 | [
"Symbiosis",
"Ecosystems"
] |
74,814,591 | https://en.wikipedia.org/wiki/Development%20of%20the%20respiratory%20system | Development of the respiratory system begins early in the fetus. It is a complex process that includes many structures, most of which arise from the endoderm. Towards the end of development, the fetus can be observed making breathing movements. Until birth, however, the mother provides all of the oxygen to the fetus as well as removes all of the fetal carbon dioxide via the placenta.
Timeline
The development of the respiratory system begins at about week 4 of gestation. By week 28, enough alveoli have matured that a baby born prematurely at this time can usually breathe on its own. The respiratory system, however, is not fully developed until early childhood, when a full complement of mature alveoli is present.
Weeks 4-7
Respiratory development in the embryo begins around week 4. Ectodermal tissue from the anterior head region invaginates posteriorly to form olfactory pits, which fuse with endodermal tissue of the developing pharynx. An olfactory pit is one of a pair of structures that will enlarge to become the nasal cavity. At about this same time, the lung bud forms. The lung bud is a dome-shaped structure composed of tissue that bulges from the foregut. The foregut is endoderm just inferior to the pharyngeal pouches. The laryngotracheal bud is a structure that forms from the longitudinal extension of the lung bud as development progresses. The portion of this structure nearest the pharynx becomes the trachea, whereas the distal end becomes more bulbous, forming bronchial buds. A bronchial bud is one of a pair of structures that will eventually become the bronchi and all other lower respiratory structures.
Weeks 7-16
Bronchial buds continue to branch as development progresses until all of the segmental bronchi have been formed. Beginning around week 13, the lumens of the bronchi begin to expand in diameter. By week 16, respiratory bronchioles form. The fetus now has all major lung structures involved in the airway.
Weeks 16-24
Once the respiratory bronchioles form, further development includes extensive vascularization, or the development of the blood vessels, as well as the formation of alveolar ducts and alveolar precursors. At about week 19, the respiratory bronchioles have formed. In addition, cells lining the respiratory structures begin to differentiate to form type I and type II pneumocytes. Once type II cells have differentiated, they begin to secrete small amounts of pulmonary surfactant. Around week 20, fetal breathing movements may begin.
Weeks 24-term
Major growth and maturation of the respiratory system occurs from week 24 until term. More alveolar precursors develop, and larger amounts of pulmonary surfactant are produced. Surfactant levels are not generally adequate to create effective lung compliance until about the eighth month of pregnancy. The respiratory system continues to expand, and the surfaces that will form the respiratory membrane develop further. At this point, pulmonary capillaries have formed and continue to expand, creating a large surface area for gas exchange. The major milestone of respiratory development occurs at around week 28, when sufficient alveolar precursors have matured so that a baby born prematurely at this time can usually breathe on its own. However, alveoli continue to develop and mature into childhood. A full complement of functional alveoli does not appear until around 8 years of age.
Fetal breathing
Although the function of fetal breathing movements is not entirely clear, they can be observed starting at 20–21 weeks of development. Fetal breathing movements involve muscle contractions that cause the inhalation of amniotic fluid and exhalation of the same fluid, with pulmonary surfactant and mucus. Fetal breathing movements are not continuous and may include periods of frequent movements and periods of no movements. Maternal factors can influence the frequency of breathing movements. For example, high blood glucose levels, called hyperglycemia, can boost the number of breathing movements. Conversely, hypoglycemia can reduce the number of fetal breathing movements. Tobacco use is also known to lower fetal breathing rates. Fetal breathing may help tone the muscles in preparation for breathing movements once the fetus is born. It may also help the alveoli to form and mature. Fetal breathing movements are considered a sign of robust health.
Birth
Prior to birth, the lungs are filled with amniotic fluid, mucus, and surfactant. As the fetus is squeezed through the birth canal, the fetal thoracic cavity is compressed, expelling much of this fluid. Some fluid remains, however, but is rapidly absorbed by the body shortly after birth. The first inhalation occurs within 10 seconds after birth and not only serves as the first inspiration, but also acts to inflate the lungs. Pulmonary surfactant is critical for inflation to occur, as it reduces the surface tension of the alveoli. Preterm birth around 26 weeks frequently results in severe respiratory distress, although with current medical advancements, some babies may survive. Prior to 26 weeks, sufficient pulmonary surfactant is not produced, and the surfaces for gas exchange have not formed adequately; therefore, survival is low.
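The role of surfactant in this first inflation can be made quantitative with the Young–Laplace relation for a roughly spherical alveolus; the relation is standard physiology rather than something stated explicitly above:

$$\Delta P = \frac{2T}{r},$$

where $\Delta P$ is the pressure difference needed to hold open an alveolus of radius $r$ against a surface tension $T$. By lowering $T$, pulmonary surfactant reduces the pressure (and thus the work) required for the first breaths and helps keep the smallest alveoli from collapsing.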
Sources
Respiratory system
Human development | Development of the respiratory system | [
"Biology"
] | 1,079 | [
"Behavior",
"Respiratory system",
"Human development",
"Behavioural sciences",
"Organ systems"
] |
74,815,191 | https://en.wikipedia.org/wiki/Butoxyacetic%20acid | Butoxyacetic acid is an aliphatic organic compound and a liquid. It has the formula C6H12O3 and the CAS Registry Number 2516-93-0. It is REACH registered with the EC number 677-344-8. n-Butyl glycidyl ether and 2-butoxyethanol are both metabolized to this compound, which is excreted renally. Methods have been developed and papers published to detect the compound in urine and blood.
Uses
It is used as a biocide.
References
Organic acids
Ethers | Butoxyacetic acid | [
"Chemistry"
] | 117 | [
"Organic acids",
"Acids",
"Functional groups",
"Organic compounds",
"Ethers"
] |
74,815,847 | https://en.wikipedia.org/wiki/List%20of%20New%20World%20barbets | New World barbets are birds in the family Capitonidae in the order Piciformes. The New World barbets are plump birds, with short necks and large heads. Most species are brightly coloured, with bold patterns of mainly green, red, yellow, white, or black. Their rictal bristles (stiff hair-like feathers at the base of the beak) are shorter and less dense than those of the Asian and African barbets. They are native to the Neotropics of South and Central America, where they inhabit a variety of forests.
There are currently 15 extant species of New World barbets recognised by the International Ornithologists' Union.
Conventions
Conservation status codes listed follow the International Union for Conservation of Nature (IUCN) Red List of Threatened Species. Range maps are provided wherever possible; if a range map is not available, a description of the barbet's range is provided. Ranges are based on the IOC World Bird List for that species unless otherwise noted. Population estimates are of the number of mature individuals and are taken from the IUCN Red List.
This list follows the taxonomic treatment (designation and order of species) and nomenclature (scientific and common names) of version 13.2 of the IOC World Bird List. Where the taxonomy proposed by the IOC World Bird List conflicts with the taxonomy followed by the IUCN or the 2023 edition of The Clements Checklist of Birds of the World, the disagreement is noted next to the species's common name (for nomenclatural disagreements) or scientific name (for taxonomic disagreements).
Classification
The International Ornithologists' Union (IOU) recognises 15 species of New World barbets in two genera. This list does not include hybrid species, extinct prehistoric species, or putative species not yet accepted by the IOU.
Family Capitonidae
Genus Capito: eleven species
Genus Eubucco: four species
New World barbets
Notes
References
Lists of animals
Lists of birds
Capitonidae | List of New World barbets | [
"Biology"
] | 405 | [
"Lists of biota",
"Lists of animals",
"Animals"
] |
74,817,372 | https://en.wikipedia.org/wiki/Bifidobacterium%20adolescentis | Bifidobacterium adolescentis is an anaerobic species of bacteria found in the gastrointestinal tracts of humans and other primates. It is one of the most abundant and prevalent Bifidobacterium species detected in human populations, especially in adults.
Research into health benefits
Bifidobacterium adolescentis has been studied for its health benefits, as strains have been shown to potentially protect against or improve recovery from several diseases, including liver-related, metabolic, allergic airway, colitis, arthritis, and bacterial infections. Strains have also been demonstrated to possess anti-inflammatory, anxiolytic, antioxidant, antidepressant, and/or antiviral activity.
In addition, B. adolescentis strains have been of interest for their ability to metabolize prebiotics such as arabinoxylan, xylooligosaccharides (XOS), and galactooligosaccharides (GOS). Bifidobacteria typically produce acetic acid and lactic acid, though the exact ratio depends on the bacterial strain, the carbohydrate being metabolized, and the growth conditions. Production of short-chain fatty acids and lactic acid in the colon is associated with health benefits.
Bifidobacterium adolescentis contributes to the production of GABA, a neurotransmitter that plays a role in reducing stress and anxiety.
Some B. adolescentis strains can also synthesize B vitamins, such as folic acid.
One strain has been shown to be bifidogenic in the GI tract. That is, the presence of one B. adolescentis strain enhances the growth of all bifidobacteria, a group that generally confers positive health benefits and is important for healthy aging.
Some B. adolescentis have been shown to strengthen the intestinal barrier that is important in preventing pathogenic bacteria and toxins from traveling from the gut lumen into the body. Another study suggested the opposite effect: an undefined B. adolescentis strain was observed to disrupt gut barrier functions in colonic epithelial cell cultures.
Multiple probiotics are marketed as containing B. adolescentis; however, there are only a limited number of commercially available strains (PRL2019, iVS1) with published scientific studies supporting their health claims.
References
Bifidobacteriales
Gut flora bacteria | Bifidobacterium adolescentis | [
"Biology"
] | 477 | [
"Gut flora bacteria",
"Bacteria"
] |
74,817,490 | https://en.wikipedia.org/wiki/Battery%20leakage | Battery leakage is the escape of chemicals, such as electrolytes, within an electric battery due to generation of pathways to the outside environment caused by factory or design defects, excessive gas generation, or physical damage to the battery. The leakage of battery chemical often causes destructive corrosion to the associated equipment and may pose a health hazard.
Leakage by type
Primary
Zinc–carbon
Zinc–carbon batteries were the first commercially available battery type and are still somewhat frequently used, although they have largely been replaced by the similarly composed alkaline battery. Like the alkaline battery, the zinc–carbon battery contains manganese dioxide and zinc electrodes. Unlike the alkaline battery, the zinc–carbon battery uses ammonium chloride as the electrolyte (zinc chloride in the case of "heavy-duty" zinc–carbon batteries), which is acidic.
A zinc–carbon battery is prone to leaking either once it has been completely discharged or three to five years after its manufacture (its shelf life). The byproducts of the leakage may include manganese hydroxide, zinc ammonium chloride, ammonia, zinc chloride, zinc oxide, water and starch. This combination of materials is corrosive to metals, such as those of the battery contacts and surrounding circuitry.
Anecdotal evidence suggests that zinc–carbon battery leakage can be effectively cleaned with sodium bicarbonate (baking soda).
Alkaline
Alkaline batteries use manganese dioxide and zinc electrodes with an electrolyte of potassium hydroxide. The alkaline battery gets its name from the replacement of the acidic ammonium chloride of zinc–carbon batteries with potassium hydroxide, which is an alkali. Alkaline batteries are considerably more efficient, more environmentally friendly, and more shelf-stable than zinc–carbon batteries, lasting five to ten years when stored at room temperature. Alkaline batteries largely replaced zinc–carbon batteries in regular use by 1990.
After an alkaline battery has been spent, or as it reaches the end of its shelf life, the chemistry of its cells changes, and hydrogen gas is generated as a byproduct. When enough pressure has built up internally, the casing splits at the base or sides (or both), releasing manganese oxide, zinc oxide, potassium hydroxide, zinc hydroxide, and manganese hydroxide.
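As a point of reference, a commonly cited gassing reaction is the corrosion of the zinc anode by water in the alkaline electrolyte (this specific equation is an illustrative assumption; the text above does not give the reaction):

$$\mathrm{Zn} + 2\,\mathrm{H_2O} \longrightarrow \mathrm{Zn(OH)_2} + \mathrm{H_2}\uparrow$$

The hydrogen generated this way has nowhere to escape in a sealed cell, which is consistent with the internal pressure build-up and casing splits described above.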
Alkaline battery leakage can be effectively neutralized with lemon juice or distilled white vinegar. Eye protection and rubber gloves should be worn, as the potassium hydroxide electrolyte is caustic.
Rechargeable
Nickel–cadmium (Ni-Cd)
Nickel–cadmium batteries (Ni-Cd) use nickel oxide hydroxide and metallic cadmium electrodes with an electrolyte of potassium hydroxide. Sealed Ni-Cd batteries were widely used in photography equipment, handheld power tools, and radio-controlled toys from the early 1940s until the early 1990s, when nickel–metal hydride batteries supplanted them (much as alkaline batteries replaced zinc–carbon batteries). In personal computers, Ni-Cd batteries first saw use in the mid-1980s as a cheaper alternative to lithium batteries for powering real-time clocks and preserving BIOS settings. Nickel–cadmium batteries were also briefly used in laptop battery packs, until the advent of commercially viable nickel–metal hydride batteries in the early 1990s. Ni-Cd batteries are still used in some uninterruptible power supplies and emergency lighting setups.
Except in aeronautical or other high-risk applications, Ni-Cd batteries are intentionally not hermetically sealed and include pressure vents for safety if the batteries are charged improperly. With age and sufficient thermal cycles the seal will degrade and allow electrolyte to leak through. The leakage usually travels down the positive and/or negative terminals onto any surrounding circuitry (see the top image).
Like with alkaline battery leakage, Ni-Cd leakage can be effectively neutralized with lemon juice or distilled white vinegar.
Nickel–metal hydride (Ni-MH)
Nickel–metal hydride batteries (Ni-MH) largely replaced Ni-Cd batteries in the early 1990s. They replaced the metallic cadmium electrode with a hydrogen-absorbing alloy, allowing them to have over two times the capacity of Ni-Cd batteries while being easier to recycle. Their heyday in computer equipment was in the early- to mid-1990s. By 1995, most motherboard manufacturers had switched to non-rechargeable lithium button cells to keep the BIOS chip powered. Lithium-based battery packs replaced Ni-MH packs in all but the lowest-end laptops by the early 2000s.
The practical shelf life of a Ni-MH battery is roughly five years. Cylindrical jelly-roll Ni-MH cells, like the ones used in 1990s laptop battery packs, discharge at a rate of up to 2% per day, while button cells like the ones used in motherboard batteries discharge at a rate of less than 20% per month. They are said to leak less frequently than alkaline batteries but have a similar failure mode.
Ni-MH leakage can be effectively neutralized with lemon juice or distilled white vinegar.
History
In the United States in 1964, the Federal Trade Commission proscribed the use of the word leakproof or the phrase "guaranteed leakproof" in advertisements for or on the packages of dry-cell batteries, as they had determined that no manufacturer had yet developed a battery that was truly impervious to leaking. The FTC repealed this ban in 1997.
References
Leakage
Corrosion
Technological failures | Battery leakage | [
"Chemistry",
"Materials_science",
"Technology"
] | 1,156 | [
"Metallurgy",
"Technological failures",
"Corrosion",
"Electrochemistry",
"Materials degradation"
] |
74,818,254 | https://en.wikipedia.org/wiki/VT640 | The VT640 Retro-Graphics, originally known as the VT100 Retro-Graphics, is an expansion board that was developed by Digital Engineering, Inc., for Digital Equipment Corporation's popular VT100 terminal, allowing it to be used as a graphics terminal capable of a resolution of 640 by 480 pixels. Digital Engineering introduced the VT640 in September 1980 as the second in their line of Retro-Graphics text-to-graphics-terminal conversion boards.
Specifications
The VT640 board displays graphics at a resolution of 640 by 480 pixels on the VT100's monochrome, green-phosphor CRT. The board offers full graphical compatibility with the Tektronix 4010 and features the ability to plot individual points on the screen as well as solid, dotted, and dashed lines based on vector instructions, the ability to selectively erase portions of the screen, and the ability to change the size of text characters on the fly. The VT640 can work with Tektronix's Plot 10 CAD software and ISSCO's Tellagraf chart-making software and Tellaplan report generator. An optional light pen allows the VT100 with the VT640 board installed to emulate the 4010 in the latter's graphic input mode.
History
Digital Engineering reportedly sold millions of dollars worth of the VT640 and other Retro-Graphics products within the first year of availability. A large institutional user of the VT640 in 1983 was the Los Alamos National Laboratory (LANL), who retrofitted 200 of their VT100s with VT640 boards. LANL used the VT640 to render geometrically complex models of technologies such as nuclear reactors and check for visually obvious errors in the models before they are ready to be subjected to simulations. Another large customer of the VT640 was the Lockheed Missiles and Space Company, who used it to display the output of interferometers during mechanical stress and strain measurements conducted on the materials used as the skin of their aircraft. In around 1983, New England Digital began equipping their Synclavier II musical sampler–synthesizer workstation with VT640-equipped VT100s.
Digital Engineering released an update to the VT640 in 1981 in the form of the VT640S, spread across three expansion boards. The company went out of business by 1986.
References
DEC computer terminals
Computer-related introductions in 1980
Graphical terminals | VT640 | [
"Technology"
] | 489 | [
"Computing stubs",
"Computer hardware stubs"
] |
74,818,847 | https://en.wikipedia.org/wiki/Giant%20birefringence | When values of birefringence are very high, the property is termed giant birefringence or, more generically, giant optical anisotropy. Values for giant birefringence exceed 0.3. Much bigger values (over 2.0) are termed "colossal birefringence". These are achieved using nanostructures.
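For orientation, birefringence is conventionally quantified as the difference between principal refractive indices; using that standard definition (an assumption here, since the text above does not spell it out), the thresholds read:

$$\Delta n = n_{\max} - n_{\min}, \qquad \Delta n > 0.3 \;\text{(giant)}, \qquad \Delta n > 2.0 \;\text{(colossal)}.$$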
Some oxides, for example borates or iodates, can have high birefringence. Compounds containing C=O bonds also have higher levels; these include oxalates, squarates and cyanurates. One trade-off is with band gap: if the band gap is small, then the material is not transparent to visible light, but can be transparent in the infrared. Chalcogenides may have high birefringence, but only in the infrared. Halide perovskites such as CsPbBrxCl3−x have fairly high birefringence that varies significantly across the optical spectrum.
The transition metal oxyhalides MoOCl4 and WOCl4 have birefringence in the giant category, and MoO2Br2, WOBr4, NbOBr2, and NbOI2 are predicted to have birefringence over 0.6 at 1065 nm.
List
References
Optical phenomena | Giant birefringence | [
"Physics"
] | 280 | [
"Optical phenomena",
"Physical phenomena"
] |
74,820,324 | https://en.wikipedia.org/wiki/Rock%20hyrax%20midden | A rock hyrax midden is a stratified accumulation of fecal pellets and a brown, amber-like urinary product known as hyraceum excreted by the rock hyrax and closely related species.
Hyrax middens form very slowly (ranging from ~5 years to >1000 years for 1 mm of hyraceum accumulation), over long periods of time, with many spanning tens of thousands of years and some dating as far back as ~70,000 years. Hyrax middens contain a diverse range of paleoenvironmental proxies, including fossil pollen and stable carbon, nitrogen and hydrogen isotopes. Combined with the antiquity of hyrax middens and the often-continuous nature of their deposition, this has made hyrax middens a valuable means of reconstructing past environmental and climate change.
Rock hyraxes are known to use communal latrines. These sites are often found in sheltered locations, where the threat of predation is limited, and middens form when they are protected from the elements. At well-protected sites, midden material may accumulate in deposits in excess of a meter thick and several meters across. The thickness of hyrax middens depends on the nature of the shelter and the regional climate history and geology. Hyraceum shows hygroscopic properties, and periods of increased precipitation or elevated ambient humidity will destroy existing middens, while more arid periods allow their development and preservation. Thicker formations tend to occur in shallow shelters that, during more arid periods, presumably provided sufficient shelter from rainfall for substantial midden accumulations, but under wetter conditions no longer provide adequate protection, resulting in the removal of the more soluble components of the midden. At poorly protected sites in arid regions, hyrax urine leaves a white, calcium carbonate precipitate on the rocks. Varying degrees of protection result in varying degrees of midden preservation. Small overhangs, vertical fractures in cap rocks, and groundwater flow along weaknesses in the shelter’s architecture may lead to midden degradation if rainfall exceeds a certain amount and/or intensity. The thickest middens have been found at sites composed of massive, horizontally bedded rock such as granite and quartzites with between ~30 and 480 mm of annual rainfall. In more humid environments (>800 mm mean annual rainfall), there is little to no evidence of hyraceum accumulation, and middens typically resemble piles of compost, as the masticated plant material in the pellets rapidly decomposes. Hyraceum-rich middens do not typically form in coastal situations, despite the presence of hyraxes, and it is considered that the ambient humidity of the air and the occurrence of coastal fogs preclude midden development.
Comparisons with fossilised herbivore middens
Studies of other herbivore midden remains have been very effective in palaeoenvironmental studies in dryland regions on several continents. In the southwestern United States pack rat middens have provided an unprecedented record of environmental changes over the last 40,000 years. As a result of this work, the vegetation dynamics of this area are some of the best understood for any of the world’s drylands at this timescale, and the critical data provided have dramatically helped define the range of regional climate variability. This work has also led to important perspectives on ecological theory, which have impacted on management strategies by allowing a distinction to be made between anthropogenic environmental impacts and natural processes.
Midden studies have also been undertaken in Australia and South America. This work has highlighted a fundamental difference between middens from these regions and hyrax middens. American and Australian middens are essentially nests composed of sticks and other macrobotanical remains. These middens are generally reported to have no clear stratigraphy, and researchers have thus adopted the methodology of processing them as single samples that provide a palaeoenvironmental snapshot. Hyrax middens, on the other hand, are primarily urino-fecal deposits, and are deposited progressively as a series of layers. This diachroneity is one of the fundamental advantages of hyrax middens over nest middens, which are only secondarily preserved as the animals urinate in their shelters.
Examinations of the internal and external structure of hyrax middens suggest flow/deposition dynamics similar to speleothems (cave deposits, e.g. stalactites), with the fresh urine flowing across the surface of the midden, then drying and crystallising, preserving the stratigraphic integrity of the midden. The general morphology of middens is often characterised by (1) lobate forms, (2) undulating weathering features on exposed midden faces, and (3) in some cases the formation of thin (1–3 mm in diameter) stalactites on the underside of some middens. As a result, questions over the potential for post-depositional remobilisation of hyraceum may be raised. The examination of over 150 middens, however, has confirmed the visible stratigraphic integrity of the middens, and while some surficial alteration of exposed surfaces can occur, consistently coherent age-depth models, and the nearly vertical exposed external faces of the middens indicate that, once dry, hyraceum is not prone to significant remobilization.
Hyrax midden structure, accumulation rates and age
Hyrax midden structures and accumulation rates can vary considerably based on the relative proportion of their two primary components, pellets and hyraceum, which is determined by the architecture of the site itself. Depending on the shape and irregularities of the floor of the site in question, pellets are likely to either accumulate (in concave structures) or roll away (in convex or inclined structures). Whereas hyrax urine will deposit only a very thin film of hyraceum after evaporation, pellets are usually 0.5–1 cm in diameter, and thereby accumulate much more quickly, with deep piles accumulating perhaps within just a few years, or even months. Compared to this, it has been observed that middens composed primarily of hyraceum accumulate much more slowly; generally between ~5 and >1000 years/mm. The rate of hyraceum accumulation depends on the morphology of the midden, the architecture of the site, as well as presumably the size of the hyrax colony, and as such net rates can be highly variable.
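As a rough worked example of what these rates imply (the specific numbers below are illustrative, not taken from any particular midden):

$$\text{record span} \approx \text{thickness} \times \text{accumulation rate}, \qquad 200\ \mathrm{mm} \times 100\ \mathrm{yr\,mm^{-1}} = 20{,}000\ \mathrm{yr},$$

which is why even comparatively thin hyraceum layers can preserve records spanning tens of millennia.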
Hyrax midden ages
Radiocarbon ages from hyraceum are not subject to reservoir effects or the inclusion of new carbon. This is primarily a function of middens being isolated systems and of the hyraceum being brought into equilibrium with atmospheric 14C, through respiration, at the time of deposition. Published data show that hyrax middens can be of considerable antiquity, and middens from the Groenfontein site in the Cederberg Mountains of South Africa are considered to have begun accumulating ~70,000 years ago.
It has been commonly observed that many middens are no longer actively accumulating. Often this is controlled by the shelters in which they are found, with accumulation ceasing when the middens grow to such an extent that the hyraxes can no longer physically enter the shelters. Until recently, field sampling was limited to the collection of middens that were most accessible and easiest to sample. In many cases this meant that the individual sampled middens were relatively thin (<5 cm), with aggregate records subsequently constructed from fragments of as many as 25 separate middens (Scott and Woodborne, 2007a, b). With recent developments in sampling tools and techniques, larger, more stratigraphically coherent middens are more regularly sampled, which better represent the full period of accumulation at a given site.
Composition of hyrax middens
The very nature of hyrax middens implies that they comprise a mixture of materials, which include animal metabolic products, undigested food, and any allochthonous material blown into the middens or deposited via feet or fur. In terms of organic matter, the existence of such potentially distinct sources (i.e. extraneous organic matter and animal metabolites) implies that a range of information concerning, inter alia, animal diet, animal behaviour, metabolic responses to environmental stress, changing behaviour, as well as the wider palaeoecological setting of the site may all be preserved within hyraceum.
Chemical composition
Hyraceum essentially comprises a mix of organic compounds, soluble salts, calcium carbonate and the mineral sylvite. More recent data from Raman Spectroscopy and Fourier Transform Infrared (FTIR) Spectroscopy demonstrate the presence of a number of CaCO3 polymorphs, the abundance of sylvite (KCl) and an organic component.
The organic components within hyraceum have been investigated using pyrolysis-GC/MS (py-GC/MS) and GC/MS analysis of solvent-extractable lipids. Py-GC/MS is commonly applied to elucidate macromolecular organic matter structure and composition. Py-GC/MS measurements on samples from two distant sites, Spitzkoppe, Namibia and Truitjes Kraal, Western Cape Province, South Africa, produced remarkably similar suites of pyrolysis products, despite their contrasting environmental settings. The pyrolysis products were dominated by aromatic compounds; notably the nitrogenous compounds benzonitrile and benzamide. Pyrolysis in the presence of the methylating agent tetramethylammonium hydroxide (TMAH) implied that benzamide is a monomer of a larger polymeric structure, the major organic component of the hyraceum OM. This is further supported by the ubiquity of benzamide within solvent extracts, and it is probable that it is derived from hippuric or benzoic acid, which are common metabolites in ruminants. Given its abundance, the metabolite (or metabolite product) benzamide is likely the major source of organic nitrogen and carbon measured in bulk stable isotope analyses, and can therefore provide insights into animal diet and its isotopic signature. Interestingly, common plant-derived pyrolysis products, such as lignin, were not detected using py-GC/MS, although low molecular weight polysaccharide pyrolysis products (e.g. acetyl furan, furaldehyde, dimethyl furan) were found in trace amounts.
That such plant-derived compounds might be identified with this technique following more detailed analytical pyrolysis protocols is implied by new FTIR analyses of the organic fraction, which support the basic pyrolysis-based interpretation of Spitzkoppe and Truitjes Kraal midden chemical compositions. The Spitzkoppe FTIR spectrum following carbonate removal contains a broad absorption band at ~3300 cm−1 as well as sharper absorptions from ~1600 to 1700, 1400, 1130, 770 and 690 cm−1. Multiplets between 1560 and 1640 cm−1 have been reported as being due to N–H bending in primary amines, while a signal at ~1650 cm−1 is representative of C=O stretches of the amide band. The spectra thus bear a strong resemblance to benzamide. FTIR spectra from the Truitjes Kraal midden, which is rich in faecal material, show some resemblance to that of cellulose, with strong broad bands at 3400 cm−1 and 1050 cm−1, and some weaker broad bands at 1730 and 1670 cm−1. Overall, the FTIR spectra and previous studies reveal a complex mixture of salts and organic compounds, with the latter incorporating aromatics, polysaccharides, amines, amides and other carbonyl-containing compounds. There are also clear similarities with the spectrum of benzamide, particularly at Spitzkoppe, which is consistent with the pyrolysis data.
Palaeoenvironmental proxies
Part of the extraordinary potential of hyrax middens as palaeoenvironmental archives is the large range of proxies that are contained within them. Initially, when their diachronic nature was less evident, they were viewed as the poor relation to the better-studied pack rat middens. While pack rat middens are rich in identifiable macrofossils, which can be directly dated and provide high taxonomic resolution, hyrax middens are poor in macroremains. Those that are found are almost exclusively masticated material that has been incorporated into the deposits as faecal pellets. While some studies have analysed these midden components, more recent work suggests that this approach does not maximise the full potential of hyrax middens as palaeoenvironmental archives.
Hyrax middens contain a suite of proxies that have the potential to provide clear insights into past climate and vegetation change. Working within the context of the middens’ stratigraphy, and building on robust chronologies indicating predictable and consistent accumulation rates, sampling methodologies are now more akin to those applied to speleothems rather than to packrat middens. Whereas the early focus was on small (<1 kg), accessible middens and in some cases in-situ sub-sampling, it is now standard practice to collect larger (10–70 kg) segments of the best-developed middens. The segments are then split and polished in a controlled environment, and subsampled for radiocarbon dating and proxy analysis. That multiple proxies can be analysed from the same subsample allows for direct comparability, and much more reliable insights into the interrelationships between the systems being studied. This is valuable when comparing proxies that reflect vegetation change (e.g. fossil pollen) and those that are primarily influenced by climate (e.g. δ15N), as the relative roles of climatic forcing versus vegetation dynamics related to competitive processes within an ecosystem can be better resolved, resulting in a fuller and more reliable understanding of palaeoenvironmental dynamics.
Pollen
Hyrax middens contain well-preserved micro plant material including pollen, which is sealed in middens by hyraceum, protecting it from microbial activity and decay. The earliest study of fossil pollen from a hyrax midden was undertaken in the late 1950s by Pons and Quézel in the Hoggar Massif of Algeria, whereas the first palynological analyses of southern African middens were undertaken during the late 1980s and early 1990s, and demonstrated that hyrax middens are very useful as pollen and microfossil traps. Subsequently, hyrax middens have become an important archive for fossil pollen analysis in South Africa and Namibia. Studies of fossil pollen in hyrax middens have also been undertaken in Jordan, Ethiopia, Yemen and Oman.
Taphonomy, preservation and concentrations
Middens are excellent traps for pollen derived from the local and regional surroundings either via the alimentary channel of the animals (excreted in pellets) or via deposition on the middens. The airborne pollen rain is incorporated by (1) collecting on the surface of the midden, (2) being brought in on the fur of the hyraxes, or (3) being ingested as dust on dietary items such as plant leaves or drinking water. The dietary component may also represent the ingestion of flowers, which may result in the occasional over-representation of pollen of certain plant species in the pellet fraction of certain middens. A clear benefit of midden pollen spectra over wetland pollen spectra is that they may more clearly reflect terrestrial vegetation, without the high proportions of hydrophilic elements found in wetland sequences, which is particularly problematic in some dryland pollen records. Furthermore, as the pollen found in hyraceum is not exclusively wind-transported, usually under-represented entomophilous plants are more clearly represented.
Preservation of pollen sealed in hyraceum is usually very good, but the degradation of pollen grains has been occasionally observed in loose pellets or middens semi-exposed to the elements, such as in dolerite shelters in the central grassland region of South Africa, where some Asteraceae pollen have apparently lost their ektexine (L. Scott, unpublished observation). Compared to other available palaeoarchives in the region, such as fluvial sediments or paleosols, and to more widely used pollen records from peat bogs and lakes, middens contain high fossil pollen concentrations; usually between 1 × 10⁵ and 2 × 10⁵ pollen grains per gram of sample. Pollen concentrations are high even in poorly productive ecosystems such as the Namib Desert margins. Concentrations increase markedly when analysing pollen contents from pellets, reaching 5–30 × 10⁵ pollen grains per gram of sample.
Interpretation of pollen data
There are some potential drawbacks for the palynological analysis of middens, however, as the diverse taphonomic vectors can complicate interpretations if they are not adequately considered and controlled for. Pollen spectra from pellets - reflecting the animal’s needs or preferences on a particular day - may contrast strongly with pollen spectra preserved in hyraceum, and which thought to be primarily brought to the midden via the fur of the hyraxes (which is collected as it moves through the vegetation around its shelter) and the wind. The degree to which dietary biases affect pollen spectra in pellets is a subject that is not fully understood. While Scott and Cooremans have shown that at the biome scale, fresh pellets reflect vegetation of the region from which they were collected, including the seasonal variations within vegetation types, most published studies also indicate significant differences both between modern pellets, and between modern pellets and surface sediment samples from the same site. A number of options might explain this, but it is assumed that as any given pellet represents what was eaten in the last day(s), there will be substantial inter-seasonal and inter-annual variation in the pollen preserved.
In most fossil pollen archives, wind-pollinated plants may dominate the natural pollen rain. Pollen production, however, is likely to have a less significant influence on the pollen that hyraxes ingest, and considering the wide variety of plants that they may eat, it may be possible to control for the taxon over-representation resulting from a production bias while still attaining a reasonable representation of the local vegetation.
A study from the Lower Omo Basin of Ethiopia collected several dozen pellets from different areas around the study site, aggregated them into a single sample, and compared them to the local vegetation.
Structured studies to clarify the relative influence of regional (aeolian) and local (fur) signals in the pollen preserved in hyraceum remain to be completed. At least in some cases, aeolian inputs appear to be negligible as some middens that have accumulated in vertical cracks - precluding the incorporation of pellets and direct contact with the animals - have been found to be devoid of pollen. If aeolian pollen does represent a small percentage of the pollen preserved in hyraceum then it might be inferred that hyraceum pollen assemblages reflect primarily local vegetation cover from within the animals’ primary feeding range.
Stable isotopes
As hyrax middens have been developed as palaeoenvironmental archives, there has been increasing emphasis on the application of stable isotope analyses to midden sequences. Initially this focussed on the use of bulk δ13C data, with an emphasis on identifying changes in the relative abundance of C3/C4/CAM vegetation and associated palaeoecological/palaeoenvironmental inferences. This is useful in climatic transition zones, such as the Western Cape Province of South Africa, where modern rainfall seasonality has a strong impact on C3/C4 grass distributions. δ13C records can also be used in some ecoregions, such as the dry savannah at Spitzkoppe in Namibia, as an indicator of the reliability of grass cover. As hyraxes will preferentially graze (grasses are C4 in the region), more depleted δ13C values from hyrax middens have been interpreted as evidence that the animals were forced to obtain a greater proportion of their diet from trees and shrubs, which are less susceptible to extended periods of drought. However, these data do not necessarily provide a direct and unambiguous indicator of past arid/humid shifts. As such, other studies have focussed on the use of δ15N data as a potential proxy for water availability in the environment.
δ15N of hyrax middens as an indicator of past hydrologic change
In palaeoclimatology, the variables for which reconstructions are most often sought are humidity and temperature. Unfortunately, direct, or even reliable, proxies for these are rarely available, and it is necessary to make several inferential steps in order to interpret their past variability. Recent work on hyrax middens has shown that δ15N records from middens may provide a clearer, more direct estimation of water availability than previously possible in southern Africa.
As described by Chase et al., it has long been understood that the 15N abundance in animal tissues is influenced by diet, climate and/or physiology. In terms of diet, a clear distinction exists between δ15N values in carnivores and herbivores, with enrichment in 15N occurring up trophic levels. Among herbivores, a link between increased δ15N values in animal tissues and aridity was identified very early, but it was thought to be predominantly a function of the animals’ metabolism. Ambrose and DeNiro, based in part on the apparent lack of relationship between 15N/14N ratios in plants and the amount of rainfall, developed a model to account for the enrichment of 15N in animal tissues based on physiological mechanisms of water conservation and nitrogen isotope mass balance. In this model, under arid conditions drought-tolerant herbivores concentrate their urine and excrete more 15N-depleted urea, leaving the body enriched in 15N. Conversely, water-dependent species that do not concentrate their urine were observed to have smaller δ15N ranges and lower mean values in their tissues. This predicted differentiation between drought-tolerant and water-dependent species, however, only found partial support in South Africa. This study suggested that animals in arid regions are likely to eat lower protein diets (%N decreases with increasing aridity), and that the additional protein produced by symbiotic bacteria in the animals’ digestive tracts would essentially result in a shift in trophic level and an enrichment of 15N in the animals’ tissues. Similarly, Codron and Codron found no significant difference in faecal δ15N between drought-tolerant and water-dependent herbivores, but did identify a significant correlation between %N and δ15N.
In contrast to the initial findings of Heaton et al., subsequent studies of soils and plants across aridity gradients indicate a clear negative correlation between 15N and rainfall. As this was the original impetus for the construction of the mass balance model and its corollaries, these models, and their implications for interpreting δ15N records in plant and animal tissues, should be reconsidered. Although a strong relationship has been established between soil and plant δ15N, the link with rainfall is sometimes considered to be less robust. One of the primary difficulties in determining the relationship between precipitation and δ15N values in soils, plants, animal tissues and excrement is the means by which precipitation is determined. It has been noted by Handley et al. that δ15N in soils and plants may change substantially across a landscape as a function of variations in soil moisture. Since soil moisture varies as a result of subtle changes in topography, aspect and soil type, particularly in drylands where sparse vegetation and poorly-formed soils exacerbate the heterogeneity of the biogeochemical landscape, the common practice of using rainfall records from the nearest gauge and/or interpolated from regional stations will inevitably weaken the significance of any correlation. Soil moisture and δ15N also vary significantly over short, sub-seasonal timescales and, combined, these fine-scale spatio-temporal variations need to be adequately controlled for if reliable δ15N-climate correlations are to be identified.
If we accept that plant δ15N is determined by soil δ15N, and the link with climate, while identified, has been imperfectly explored, it remains to determine to what extent variations in plant δ15N account for the variations identified in animal tissue and/or excrement. Murphy and Bowman investigated variations in grass and kangaroo bone δ15N from across Australia and demonstrated a remarkably consistent relationship between plant and bone δ15N signals. Moisture availability, through its influence on the isotopic signature of plants/diet, was inferred as the primary control on animal δ15N, with metabolism having no clear effect. It is interesting to note that Ambrose and DeNiro’s findings are not inconsistent with these results, as drought-tolerant species can inhabit more arid regions with less regular rainfall (higher, wider δ15N range) while water-dependent animals will be more restricted to well-watered areas (lower, smaller δ15N range). To extend the findings of Murphy and Bowman to the study of excrement and hyrax middens, one can consider the studies of (1) Codron and Codron, which concluded that faecal δ15N corresponds to changes in plant δ15N, and (2) Sponheimer et al., which found that, while preferential urinary excretion of isotopically light nitrogen may occur under conditions of disequilibrium, an unstressed animal at “steady state” will have equivalent dietary and excreta δ15N. Since faecal and animal δ15N track plant δ15N, and under normal conditions total excreta δ15N is equivalent to dietary (plant) δ15N, it follows that urinary δ15N, while perhaps more negative relative to dietary δ15N, will reflect trends in plant δ15N and water availability.
Hyrax middens thus provide an optimal archive for the study of δ15N as a proxy for long-term environmental change. The effects of contemporary ecosystem variability are mitigated by the spatial and temporal averaging intrinsic in hyraxes’ wide dietary preferences, restricted range, the probable contribution of multiple individuals to a single δ15N sample, and the relatively long periods of time incorporated into each sample. In these archives, microtopographic variations in soil moisture (and thus δ15N) are accounted for by the feeding habits of the hyrax, and it is expected that the spatio-temporal averaging will allow for the reliable identification of long-term changes in water availability as reflected in variations in midden δ15N. Over long timescales (10²–10³ yr), this expectation is borne out, and the potential of hyrax middens as diachronic palaeoclimatic records has been supported by strong similarities between variations in δ15N records and a range of palaeoenvironmental proxies reflecting changes in precipitation.
References
Biogeochemistry | Rock hyrax midden | [
"Chemistry",
"Environmental_science"
] | 5,740 | [
"Chemical oceanography",
"Biogeochemistry",
"Environmental chemistry"
] |
74,820,340 | https://en.wikipedia.org/wiki/Random%20generalized%20Lotka%E2%80%93Volterra%20model | The random generalized Lotka–Volterra model (rGLV) is an ecological model and random set of coupled ordinary differential equations where the parameters of the generalized Lotka–Volterra equation are sampled from a probability distribution, analogously to quenched disorder. The rGLV models dynamics of a community of species in which each species' abundance grows towards a carrying capacity but is depleted due to competition from the presence of other species. It is often analyzed in the many-species limit using tools from statistical physics, in particular from spin glass theory.
The rGLV has been used as a tool to analyze emergent macroscopic behavior in microbial communities with dense, strong interspecies interactions. The model has served as a context for theoretical investigations studying diversity-stability relations in community ecology and properties of static and dynamic coexistence. Dynamical behavior in the rGLV has been mapped experimentally in community microcosms. The rGLV model has also served as an object of interest for the spin glass and disordered systems physics community to develop new techniques and numerical methods.
Definition
The random generalized Lotka–Volterra model is written as the system of coupled ordinary differential equations $\frac{dN_i}{dt} = \frac{N_i}{\tau}\left(K_i - N_i - \sum_{j \neq i} \alpha_{ij} N_j\right)$, where $N_i$ is the abundance of species $i$, $S$ is the number of species, $K_i$ is the carrying capacity of species $i$ in the absence of interactions, $\tau$ sets a timescale, and $\alpha$ is a random matrix whose entries $\alpha_{ij}$ are random variables with mean $\mu/S$, variance $\sigma^2/S$, and correlations $\operatorname{corr}(\alpha_{ij}, \alpha_{ji}) = \gamma$ for $i \neq j$, where $-1 \leq \gamma \leq 1$. The interaction matrix, $\alpha$, may be parameterized as $\alpha_{ij} = \frac{\mu}{S} + \frac{\sigma}{\sqrt{S}}\, a_{ij}$, where the $a_{ij}$ are standard random variables (i.e., zero mean and unit variance) with $\langle a_{ij}\, a_{ji} \rangle = \gamma$ for $i \neq j$. The matrix entries may have any distribution with common finite first and second moments and will yield identical results in the large $S$ limit due to the central limit theorem. The carrying capacities may also be treated as random variables with mean $K$ and variance $\sigma_K^2$. Analyses by statistical physics-inspired methods have revealed phase transitions between different qualitative behaviors of the model in the many-species limit. In some cases, this may include transitions between the existence of a unique globally-attractive fixed point and chaotic, persistent fluctuations.
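For concreteness, the following is a minimal numerical sketch of these dynamics (an illustration, not taken from the cited literature): it draws one interaction matrix with the statistics described above and integrates the equations for illustrative parameter values, assuming uniform carrying capacities ($\sigma_K = 0$).

```python
# Minimal illustrative sketch of the rGLV dynamics for a single random draw of
# the interaction matrix; parameter values are arbitrary choices, not taken
# from any particular study.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
S, mu, sigma, gamma, tau = 100, 4.0, 0.5, 0.0, 1.0
K = np.ones(S)  # uniform carrying capacities (sigma_K = 0)

# Draw pairs (a_ij, a_ji) with zero mean, unit variance and correlation gamma.
iu = np.triu_indices(S, k=1)
pairs = rng.multivariate_normal([0.0, 0.0],
                                [[1.0, gamma], [gamma, 1.0]],
                                size=len(iu[0]))
a = np.zeros((S, S))
a[iu] = pairs[:, 0]
a[(iu[1], iu[0])] = pairs[:, 1]

alpha = mu / S + sigma / np.sqrt(S) * a
np.fill_diagonal(alpha, 0.0)  # self-regulation is the explicit -N_i term

def rglv(t, N):
    # dN_i/dt = (N_i / tau) * (K_i - N_i - sum_j alpha_ij N_j)
    return N / tau * (K - N - alpha @ N)

N0 = rng.uniform(0.1, 1.0, S)
sol = solve_ivp(rglv, (0.0, 200.0), N0, rtol=1e-8, atol=1e-10)
print("fraction of species above 1e-6 at t = 200:", np.mean(sol.y[:, -1] > 1e-6))
```

Sweeping $\sigma$ or $\gamma$ in such a simulation is one way to observe the qualitative changes in behavior discussed below.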
Steady-state abundances in the thermodynamic limit
In the thermodynamic limit (i.e., the community has a very large number of species) where a unique globally-attractive fixed point exists, the distribution of species abundances can be computed using the cavity method while assuming the system is self-averaging. The self-averaging assumption means that the distribution of any one species' abundance between samplings of model parameters matches the distribution of species abundances within a single sampling of model parameters. In the cavity method, an additional mean-field species is introduced and the response of the system is approximated linearly.
The cavity calculation yields a self-consistent equation describing the distribution of species abundances as a mean-field random variable, $N$. When $\sigma_K = 0$, the mean-field equation is $0 = N\left(K - N - \mu m + \sqrt{q}\,\sigma Z + \gamma\sigma^2\nu N\right)$, where $m = \langle N \rangle$, $q = \langle N^2 \rangle$, $\nu$ is the average susceptibility, and $Z$ is a standard normal random variable. Only ecologically uninvadable solutions are taken (i.e., the largest solution for $N$ in the quadratic equation is selected). The relevant susceptibility and moments of $N$, which has a truncated normal distribution, are determined self-consistently.
Dynamical phases
In the thermodynamic limit where there is an asymptotically large number of species (i.e., $S \to \infty$), there are three distinct phases: one in which there is a unique fixed point (UFP), another with multiple attractors (MA), and a third with unbounded growth. In the MA phase, depending on whether species abundances are replenished at a small rate, are allowed to approach arbitrarily small population sizes, or are removed from the community when the population falls below some cutoff, the resulting dynamics may be chaotic with persistent fluctuations or may approach an initial-conditions-dependent steady state.
The transition from the UFP to MA phase is signaled by the cavity solution becoming unstable to disordered perturbations. When $\sigma_K = 0$, the phase transition boundary occurs when the parameters satisfy a condition that can be written in closed form. In the $\sigma_K > 0$ case, the phase boundary can still be calculated analytically, but no closed-form solution has been found; numerical methods are necessary to solve the self-consistent equations determining the phase boundary.
The transition to the unbounded growth phase is signaled by the divergence of the mean abundance $\langle N \rangle$ as computed in the cavity calculation.
Dynamical mean-field theory
The cavity method can also be used to derive a dynamical mean-field theory model for the dynamics. The cavity calculation yields a self-consistent equation describing the dynamics as a Gaussian process defined by the self-consistent equation (for $\sigma_K = 0$) $\frac{dN(t)}{dt} = \frac{N(t)}{\tau}\left(K - N(t) - \mu\, m(t) + \sigma\,\eta(t) + \gamma\sigma^2 \int_0^{t} \chi(t, t')\, N(t')\, dt'\right)$, where $m(t) = \langle N(t) \rangle$, $\eta(t)$ is a zero-mean Gaussian process with autocorrelation $\langle \eta(t)\,\eta(t') \rangle = \langle N(t)\, N(t') \rangle$, and $\chi(t, t')$ is the dynamical susceptibility defined in terms of a functional derivative of the dynamics with respect to a time-dependent perturbation of the carrying capacity.
Using dynamical mean-field theory, it has been shown that at long times, the dynamics exhibit aging in which the characteristic time scale defining the decay of correlations increases linearly in the duration of the dynamics. That is, when $t$ and $t'$ are both large, $C(t, t') \approx \mathcal{C}(t'/t)$, where $C(t, t')$ is the autocorrelation function of the dynamics and $\mathcal{C}$ is a common scaling collapse function.
When a small immigration rate is added (i.e., a small constant is added to the right-hand side of the equations of motion), the dynamics reach a time-translationally invariant state. In this case, the dynamics exhibit jumps between low, immigration-maintained abundances and high abundances.
Related articles
Generalized Lotka–Volterra equation
Competitive Lotka–Volterra equations
Lotka–Volterra equations
Consumer-resource model
Theoretical ecology
Random dynamical system
Spin glass
Cavity method
Dynamical mean-field theory
Quenched disorder
Community (ecology)
Ecological stability
References
Further reading
Stefano Allesina's Community Ecology course lecture notes: https://stefanoallesina.github.io/Theoretical_Community_Ecology/
Bunin, Guy (2017-04-28). "Ecological communities with Lotka-Volterra dynamics". Physical Review E. 95 (4): 042414. Bibcode:2017PhRvE..95d2414B. doi:10.1103/PhysRevE.95.042414. PMID 28505745.
Community ecology
Complex systems theory
Theoretical ecology
Random dynamical systems
Dynamical systems
Mathematical modeling
Biophysics
Ordinary differential equations
Population ecology
Ecology | Random generalized Lotka–Volterra model | [
"Physics",
"Mathematics",
"Biology"
] | 1,304 | [
"Mathematical modeling",
"Applied and interdisciplinary physics",
"Random dynamical systems",
"Applied mathematics",
"Ecology",
"Biophysics",
"Mechanics",
"Dynamical systems"
] |
67,642,835 | https://en.wikipedia.org/wiki/Hilsenhoff%20Biotic%20Index | The Hilsenhoff Biotic Index (HBI) is a quantitative method of evaluating the abundance of arthropod fauna in stream ecosystems as a measurement of estimating water quality based on the predetermined pollution tolerances of the observed taxa. This biotic index was created by William Hilsenhoff in 1977 to measure the effects of oxygen depletion in Wisconsin streams resulting from organic or nutrient pollution.
Calculating the HBI
The collection sample should contain 100+ arthropods. A tolerance value of 0 to 10 is assigned to each arthropod species (or genera) based on its known prevalence in stream habitats with varying states of detritus contamination. A highly tolerant species would receive a value of 10, while a species collected only in unaltered streams with high water quality would receive a value of 0. The sum products of the number of individuals in each species (or genera) multiplied by the tolerance of the species is divided by the total number of specimens in the sample to determine the HBI value.
$$\mathrm{HBI} = \frac{\sum (n \times a)}{N}$$
where n = number of specimens in taxa; a = tolerance value of taxa; N = total number of specimens in the sample.
Precautions should be taken to account for confounding variables, such as the effects of dominant species over-abundance, seasonal temperature stress, and water currents. Limiting the collection of individuals from each species to a maximum of 10 (10-Max BI) has been shown to minimize the effects of these phenomena on the True BI.
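As a concrete illustration of the calculation and of the 10-Max variant described above, the short script below computes the index from per-taxon counts and tolerance values. The survey numbers are invented for the example; real assessments use Hilsenhoff's published tolerance values and evaluation scale.

```python
# Illustrative HBI calculation; the survey data below are made up.
def hbi(samples, max_per_taxon=None):
    """samples: iterable of (count, tolerance) pairs, one per taxon or genus.
    Set max_per_taxon=10 to compute the 10-Max BI described above."""
    weighted_sum = 0.0
    total_specimens = 0
    for count, tolerance in samples:
        if max_per_taxon is not None:
            count = min(count, max_per_taxon)
        weighted_sum += count * tolerance
        total_specimens += count
    return weighted_sum / total_specimens

survey = [(34, 6), (25, 4), (21, 2), (20, 8)]  # (specimens, tolerance value 0-10)
print(hbi(survey))                    # true BI over all 100 specimens: 5.06
print(hbi(survey, max_per_taxon=10))  # 10-Max BI: 5.0
```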
The biotic index is then ranked for water quality and degree of organic pollution, with lower values indicating better water quality and higher values indicating greater organic pollution.
References
Arthropod ecology
Environmental indices
Eponymous indices
Water pollution
Environmental science
Water quality indicators | Hilsenhoff Biotic Index | [
"Chemistry",
"Environmental_science"
] | 341 | [
"Water quality indicators",
"nan",
"Water pollution"
] |
67,642,894 | https://en.wikipedia.org/wiki/CendR | CendR (C-end Rule) is a position-dependent protein motif that regulates cellular uptake and vascular permeability through interaction with neuropilin-1. The CendR motif has a consensus (R/K)XX(R/K) and it is able to interact with its receptor only when the second basic residue is exposed at the C-terminus.
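Because the rule combines a simple consensus with a strict positional requirement, it can be illustrated with a short pattern check. The sketch below and its example sequences are purely illustrative (R and K denote arginine and lysine, X any residue).

```python
# Illustrative check for a C-terminally exposed CendR motif, (R/K)XX(R/K).
# Example sequences are invented for demonstration.
import re

CENDR_AT_C_TERMINUS = re.compile(r"[RK]..[RK]$")

def has_active_cendr(peptide: str) -> bool:
    """True if the peptide ends in an exposed (R/K)XX(R/K) motif."""
    return CENDR_AT_C_TERMINUS.search(peptide) is not None

print(has_active_cendr("GGRPAR"))  # True: motif exposed at the C-terminus
print(has_active_cendr("RPARGG"))  # False: motif present but cryptic (internal)
```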
Mechanism of action
C-terminal CendR motif engages with widely expressed neuropilin-1 receptors to trigger an increased permeability of the vasculature and penetration of tissue parenchyma by an endocytotic/exocytotic transport mechanism. The CendR pathway starts with an endocytosis step that is distinct from known endocytosis pathways. It most closely resembles macropinocytosis, but unlike macropinocytosis, the CendR pathway is receptor (neuropilin)-initiated and its activity is controlled by the nutrient status of the cell or tissue. CendR is an active transport process that requires energy. It is not limited to extravasation, but also includes penetration of tissue parenchyma, potentially via cell-to-cell transport.
CendR elements that are not C-terminally exposed are unable to bind to neuropilin-1. However, such cryptic CendR elements can be activated by proteolytic cleavage (e.g. by furin, urokinase type plasminogen activator, and other proteases of suitable substrate specificity).
Clinical significance
The CendR pathway is used to enhance transport of coupled and co-administered anti-cancer drugs into tumors. Tumor penetrating peptides (TPP, a class of tumor homing peptides containing a cryptic CendR motif) activate tumor specific transport through a three-step process that involves binding to a primary tumor-specific receptor, proteolytic activation of the CendR element, and binding to NRP-1 to activate the trans-tissue transport pathway. The clinical-stage prototypic CendR peptide iRGD, developed by Lisata Therapeutics as LSTA1, is utilized to make solid tumors temporarily more accessible to circulating anti-cancer drugs to increase their therapeutic index. Several viruses, including the SARS-CoV-2 coronavirus, also use the CendR system for cellular entry and tissue penetration, and viruses that have the system are known to be more virulent and deadly.
References
Peptides
Infectious diseases | CendR | [
"Chemistry"
] | 519 | [
"Biomolecules by chemical classification",
"Peptides",
"Molecular biology"
] |
67,643,326 | https://en.wikipedia.org/wiki/Programming%20Languages%3A%20History%20and%20Fundamentals | Programming Languages: History and Fundamentals is a book about programming languages written by Jean E. Sammet. Published in 1969, the book gives an overview of the state of the art of programming in the late 1960s, and records the history of programming languages up to that time.
The book was considered a standard work on programming languages by professionals in the field. According to Dag Spicer, senior curator of the Computer History Museum, Programming Languages "was, and remains, a classic."
Contents
Programming Languages provides a history and description of 120 programming languages, with an extensive bibliography of reference works about each language and sample programs for many of them. The book outlines both the technical definition and usage of each language, as well as the historical, political, and economic context of each language.
Because Sammet was deeply involved in the history of programming language creation in the United States, she was able to give an insider's perspective. The author excluded most programming languages used only outside the US, and excluded those she considered not to be high-level programming languages.
Languages
The book covers both well-known and obscure programming languages. Among the 120 languages included in the book are:
ALGOL
ALTRAN
BASIC
COBOL, co-created by Sammet herself
COLINGO, from the mid-1960s; the name stands for Compile On-LINe and GO
Culler-Fried
FLOW-MATIC
FORTRAN
Klerer-May
Laning and Zierler
JOVIAL
Lincoln Reckoner, an interactive, distributed mathematics program including matrix operations for the TX-2 computer
MATHLAB
Magic Paper, a symbolic mathematics system
OMNITAB
PL/I
Protosynthex, a query language for English text
SIMULA
SNOBOL
History
Sammet pioneered the COBOL language while working at Sylvania and FORMAC (an extension of FORTRAN) while at IBM. While managing IBM's Boston Advanced Programming Department, Sammet began researching programming languages more widely and collecting documentation. Starting in 1967 she published annual reports in Computers and Automation, the first computer magazine, on the languages in use across the field of programming.
Computers were new and rare in the 1960s, and were a subject of fascination that book publishers hoped to profit from. Prentice Hall approached Sammet asking her to write about FORTRAN. Sammet said that she would rather write about every programming language. Prentice Hall and IBM told her to go ahead.
Sammet used her book to advocate for high-level languages at a time when assembly languages were popular and there was widespread doubt about the value of high-level languages in the field of programming.
An image of the Tower of Babel was printed on the dust jacket of the book, with the names of various programming languages printed on the bricks making up the tower. A similar image had appeared on the January 1961 issue of the Communications of the ACM.
See also
The Art of Computer Programming
The Preparation of Programs for an Electronic Digital Computer
The C Programming Language
References
Computer programming books
Handbooks and manuals
1969 non-fiction books
Prentice Hall books
Women in computing
History of computing | Programming Languages: History and Fundamentals | [
"Technology"
] | 618 | [
"Computers",
"History of computing"
] |
67,643,876 | https://en.wikipedia.org/wiki/Chaetocerotales | Chaetocerotales is an order of diatoms belonging to the class Mediophyceae.
Families:
Acanthocerataceae
Attheyaceae
Chaetocerotaceae
References
Diatoms
Diatom orders | Chaetocerotales | [
"Biology"
] | 47 | [
"Diatoms",
"Algae"
] |
67,645,357 | https://en.wikipedia.org/wiki/Japanese%20Red%20List | The Japanese Red List is the Japanese domestic counterpart to the IUCN Red List of Threatened Species. The national Red List is compiled and maintained by the Ministry of the Environment, alongside a separate Red List for marine organisms. Similarly drawing on the relevant scientific authorities, NGOs, and local governments, the Ministry of the Environment also prepares and publishes a Red Data Book that provides further information on species and habitats.
The first Red List was published by the then Environmental Agency as part of the first Red Data Book in 1991; in 2020, the fifth edition of the fourth version of the Red List was published. In line with the Marine Biodiversity Conservation Strategy, decided upon by the Ministry in 2011, in 2017 the first Marine Life Red List was published, excluding species subject to international agreements, such as those within the remit of the Western and Central Pacific Fisheries Commission (WCPFC) (e.g., Pacific bluefin tuna) and International Whaling Commission (IWC), species under evaluation by the Fisheries Agency, smaller Cetaceans, and those already evaluated for the Red List.
With the renewed focus on evaluating the rarity or otherwise of marine life in line with the National Biodiversity Strategy 2012–2020, using the same evaluation criteria and categories as the Ministry of the Environment, and working in collaboration with the Ministry, the Fisheries Agency has also produced a Red List of marine resources and smaller Cetaceans, excluding species subject to international agreements, such as those in the remit of the WCPFC and IWC. Evaluations of 94 species were published in 2017, all falling outside the rankings (i.e., being of Least Concern), other than Pleuronichthys japonicus (Data Deficient).
The Red List (and Red Data Book) itself has no legal force but is intended to be used to provide information and to serve as a "warning to society". Appropriate action may be taken under the 1992 Conservation of Endangered Species of Wild Fauna and Flora Act.
Classification
As of the 2020 edition, thirteen taxa are used for classification purposes by the Ministry of the Environment:
Fauna
Flora
Five further taxa are used for the Marine Life Red List:
The following categories are used to indicate organisms' conservation status specifically within Japan; where a species or subspecies is endemic, the status EX (Extinct) is indicative of its global status.
Statistics
Extinct taxa
Mammals: Hokkaido wolf (Canis lupus hattai), Japanese wolf (Canis lupus hodophilax), Japanese river otter (Lutra lutra nippon), Hokkaido river otter (Lutra lutra whiteleyi), Okinawa flying fox (Pteropus loochoensis), (Rhinolophus pumilus miyakonis), Bonin pipistrelle (Pipistrellus sturdeei)
Birds: Crested shelduck (Tadorna cristata), Ryukyu wood pigeon (Columba jouyi), Bonin wood pigeon (Columba versicolor), Bonin nankeen night heron (Nycticorax caledonicus crassirostris), Iwo Jima rail (Porzana cinerea brevipes), Daito buzzard (Buteo buteo oshiroi), Ryukyu kingfisher (Todiramphus miyakoensis), Tristram's woodpecker (Dryocopus javensis richardsi), Izu Peregrine falcon (Falco peregrinus furuitii), Daito varied tit (Poecile varius orii), Mukojima white-eye (Apalopteron familiare familiare), Daito wren (Troglodytes troglodytes orii), Bonin thrush (Cichlopasser terrestris), Southern Ryukyu robin (Luscinia komadori subrufus), Bonin grosbeak (Chaunoproctus ferreorostris)
Brackish and Freshwater Fish: Green sturgeon (Acipenser medirostris), Gnathopogon elongatus suwae, Amur stickleback (Pungitius kaibarae)
Insects: Ishikawatrechus intermedius, Rakantrechus elegans, Prodaticus satoi [ja], Macroplea japana
Molluscs: Ogasawarana chichijimana, Ogasawarana habei, Ogasawarana rex, Assiminea sp. D, Hirasea diplomphalus latispira, Hirasea eutheca, Hirasea goniobasis, Hirasea hypolia, Hirasea major, Hirasea nesiotica liobasis, Hirasea planulata biconcava, Hirasea planulata planulata, Hirasea profundispira, Hirasea sinuosa, Hirasiella clara, Lamprocystis hahajimana pachychilus, Trochochlamys ogasawarana, Vitrinula chichijimana, Vitrinula hahajimana
Other Invertebrates: Compressalges nipponiae
Vascular Plants: Elatostema lineolatum var. majus, Ranunculus gmelinii, Rubus hatsushimae, Flemingia strobilifera, Lespedeza hisauchii, Euphrasia insignis insignis var. omiensis, Euphrasia insignis insignis var. pubigera, Euphrasia multifolia var. kirisimana, Cirsium toyoshimae, Aletris makiyataroi, Burmannia coelestris, Thismia tuberculata, Eriocaulon cauliferum, Cyperus diaphanus, Cyperus procerus, Fimbristylis leptoclada var. takamineana, Fimbristylis pauciflora, Acanthephippium striatum
Algae: Chara fibrosa var. brevibracteata, Chara globularis var. hakonensis, Nitella minispora, Porphyra angusta
Lichens: Erioderma tomentosum, Heterodermia angustiloba, Heterodermia leucomelos, Siphula ceratites
Fungi: Agaricus hahashimensis, Albatrellus cantharellus, Camarophyllus microbicolor, Chlorophyllum agaricoides, Circulocolumella hahashimensis, Clitocybe castaneofloccosa, Collybia matris, Coprinus boninensis, Cyathus boninensis, Ganoderma colossus, Gymnopilus noviholocirrhus, Hygrocybe macrospora, Hygrocybe miniatostriata, Lactarius ogasawarashimensis, Lentinus lamelliporus, Lepiota boninensis, Leptoglossum boninense, Lycoperdon henningsii, Pleurotus cyatheae, Pluteus daidoi, Pluteus horridilamellus, Psathyrella boninensis, Pyrrhoglossum subpurpureum, Rhodophyllus brunneolus, Russula boninensis
Coral: Merulina coral (Boninastrea boninensis)
Prefectural Red Lists
Localized Red Lists and Red Data are also prepared and published by a number of Prefectural Governments, including those of Hokkaidō and Okinawa.
See also
Natural Habitat Conservation Areas (Japan)
Wildlife Protection Areas (Japan)
Convention on Biological Diversity
Wildlife of Japan
References
External links
2020 Japanese Red List
2020 and Previous Japanese Red Lists
MOE Marine Organisms Red List
JFA Marine Organisms Red List
IUCN Red List
Nature conservation in Japan
Biota by conservation status system
Lists of biota
Biological databases
ja:レッドリスト#日本におけるレッドリスト | Japanese Red List | [
"Biology"
] | 1,666 | [
"Lists of biota",
"Biota by conservation status",
"Bioinformatics",
"Biota by conservation status system",
"Biological databases"
] |
67,646,530 | https://en.wikipedia.org/wiki/Online%20Safety%20Act%202023 | The Online Safety Act 2023 (c. 50) is an act of the Parliament of the United Kingdom to regulate online speech and media. It passed on 26 October 2023 and gives the relevant Secretary of State the power, subject to parliamentary approval, to designate and suppress or record a wide range of speech and media deemed "harmful".
The act requires platforms, including end-to-end encrypted messengers, to scan for child pornography, despite warnings from experts that it is not possible to implement such a scanning mechanism without undermining users' privacy.
The act creates a new duty of care for online platforms, requiring them to take action against illegal, or legal but "harmful", content from their users. Platforms failing this duty would be liable to fines of up to £18 million or 10% of their annual turnover, whichever is higher. It also empowers Ofcom to block access to particular websites. It obliges large social media platforms not to remove, and to preserve access to, journalistic or "democratically important" content such as user comments on political parties and issues.
The bill that became the act was criticised for its proposals to restrain the publication of "lawful but harmful" speech, effectively creating a new form of censorship of otherwise legal speech. As a result, in November 2022, measures that were intended to force big technology platforms to take down "legal but harmful" materials were removed from the bill. Instead, tech platforms are obliged to introduce systems that will allow users to better filter out the "harmful" content they do not want to see.
The act grants significant powers to the secretary of state to direct Ofcom, the media regulator, on the exercise of its functions, which includes the power to direct Ofcom as to the content of codes of practice. This has raised concerns about the government's intrusion in the regulation of speech with unconstrained emergency-like powers that could undermine Ofcom's authority and independence.
Provisions
Scope
Within the scope of the act is any "user-to-user service". This is defined as an Internet service by means of which content that is generated by a user of the service, or uploaded to or shared on the service by a user of the service, may be read, viewed, heard or otherwise experienced ("encountered") by another user, or other users. Content includes written material or messages, oral communications, photographs, videos, visual images, music and data of any description.
The duty of care applies globally to services with a significant number of United Kingdom users, or which target UK users, or those which are capable of being used in the United Kingdom where there are reasonable grounds to believe that there is a material risk of significant harm.
The idea of a duty of care for Internet intermediaries was first proposed in Thompson (2016) and made popular in the UK by the work of Woods and Perrin (2019).
Duties
The duty of care in the act refers to a number of specific duties to all services within scope:
The illegal content risk assessment duty
The illegal content duties
The duty about rights to freedom of expression and privacy
The duties about reporting and redress
The record-keeping and review duties
For services 'likely to be accessed by children', adopting the same scope as the Age Appropriate Design Code, two additional duties are imposed:
The children's risk assessment duties
The duties to protect children’s online safety
For category 1 services, which will be defined in secondary legislation but are limited to the largest global platforms, there are four further new duties:
The adults' risk assessment duties
The duties to protect adults’ online safety
The duties to protect content of democratic importance
The duties to protect journalistic content
Enforcement
This would empower Ofcom, the national communications regulator, to block access to particular user-to-user services or search engines from the United Kingdom, including through interventions by internet access providers and app stores. The regulator will also be able to impose, through "service restriction orders", requirements on ancillary services which facilitate the provision of the regulated services.
The act lists in section 92 as examples (i) services which enable funds to be transferred, (ii) search engines which generate search results displaying or promoting content and (iii) services which facilitate the display of advertising on a regulated service (for example, an ad server or an ad network). Ofcom must apply to a court for both Access Restriction and Service Restriction Orders. Section 44 of the act also gives the Secretary of State the power to direct Ofcom to modify a draft code of practice for online safety if deemed necessary for reasons of public policy, national security or public safety. Ofcom must comply with the direction and submit a revised draft to the Secretary of State. The Secretary of State may give Ofcom further directions to modify the draft, and once satisfied, must lay the modified draft before Parliament. Additionally, the Secretary of State can remove or obscure information before laying the review statement before Parliament.
Limitations
The act has provisions to impose legal requirements ensuring that content removals do not arbitrarily remove or infringe access to what it defines as journalistic content. Large social networks would be required to protect "democratically important" content, such as user-submitted posts supporting or opposing particular political parties or policies. The government stated that news publishers' own websites, as well as reader comments on such websites, are not within the intended scope of the law.
Age verification for online pornography
Section 212 of the act repeals part 3 of the Digital Economy Act 2017, which demands mandatory age verification to access online pornography but was subsequently not enforced by the government. The act will include within scope any pornographic site which has functionality to allow for user-to-user services, but those which do not have this functionality, or choose to remove it, would not be in scope based on the draft published by the government.
Addressing the House of Commons DCMS Select Committee, the Secretary of State, Oliver Dowden, confirmed he would be happy to consider a proposal during pre-legislative scrutiny of the act by a joint committee of both Houses of Parliament to extend the scope of the act to all commercial pornographic websites. According to the government, the act addresses the major concern expressed by campaigners such as the Open Rights Group about the risk to user privacy with the Digital Economy Act 2017's requirement for age verification by creating, on services within scope of the legislation, "A duty to have regard to the importance of... protecting users from unwarranted infringements of privacy, when deciding on, and implementing, safety policies and procedures."
In February 2022 the Digital Economy Minister, Chris Philp, announced that the bill (as it then was) would be amended to bring commercial pornographic websites within its scope.
Other provisions
The Act adds two new offences to the Sexual Offences Act 2003: sending images of a person's genitals (cyberflashing), or sharing or threatening to share intimate images.
Legislative process and timetable
The draft bill was given pre-legislative scrutiny by a joint committee of Members of the House of Commons and peers from the House of Lords. The Opposition Spokesperson, Lord Ponsonby of Shulbrede, in the House of Lords said, "My understanding is that we now have a timeline for the online harms Bill, with pre-legislative scrutiny expected immediately after the Queen’s Speech—before the Summer Recess—and that Second Reading would be expected after the Summer Recess." But the Minister replying refused to pre-empt the Queen's Speech by confirming this.
In early February 2022, ministers planned to add to their existing proposal several criminal offences against those who send death threats online or deliberately share dangerous disinformation about fake cures for COVID-19. Other new offences, such as revenge porn, posts advertising people-smuggling, and messages encouraging people to commit suicide, would fall under the responsibilities of online platforms like Facebook and Twitter to tackle.
In September 2023, during the third reading in the Lords, Lord Parkinson presented a ministerial statement from the government claiming the controversial powers allowing Ofcom to break end-to-end encryption would not be used immediately. Despite the government's claim the powers will not be used, the provisions pertaining to end-to-end encryption weakening were not removed from the act and Ofcom can at any time issue notices requiring the breaking of end-to-end encryption technology. This followed statements from several tech firms, including Signal, suggesting they would withdraw from the UK market rather than weaken their encryption.
Support
The UK National Crime Agency, part of the Home Office, has said the act is necessary to protect children.
The NSPCC has been a prominent supporter of the act, saying it will help protect children from abuse. The Samaritans, that had made strengthening the act one of its key campaigns "to ensure no one is left unprotected from harmful content under the new law" gave the final act its qualified support, also saying the act fell short of the promise to make the UK the safest place to be online.
Opposition
The international human rights organization Article 19 stated that they saw the Online Safety Act 2023 as a potential threat to human rights, describing it as an "extremely complex and incoherent piece of legislation". The Open Rights Group described the Online Safety Bill (OSB) as a "censor's charter".
During an interview for the BBC, Rebecca MacKinnon, the vice president for global advocacy at the Wikimedia Foundation, criticised the OSB, saying the threat of "harsh" new criminal penalties for tech bosses would affect "not only big corporations, but also public interest websites, such as Wikipedia". In the same instance, MacKinnon argued the act should have been based on the European Union's Digital Services Act, which reportedly included differences between centralised content moderation and community-based moderation. In April 2023, both MacKinnon and the chief executive of Wikimedia UK, Lucy Crompton-Reid, announced that the WMF did not intend to apply the age-check requirements of the act to Wikipedia users, stating that it would violate their commitment to collect minimal data about readers and contributors. On 29 June of the same year, WMUK and the WMF officially published an open letter, asking the government and Parliament to exempt "public interest projects", including Wikipedia itself, from the OSB before it entered its report stage, starting on 6 July.
Apple Inc. criticised legal powers in the OSB which threatened end-to-end encryption on messaging platforms in an official statement, describing the act as "a serious threat" to end-to-end encryption, and urging the UK government to "amend the Bill to protect strong end-to-end encryption".
Meta Platforms has criticised the plan, saying, "We don't think people want us reading their private messages ... The overwhelming majority of Brits already rely on apps that use encryption to keep them safe from hackers, fraudsters and criminals". Head of WhatsApp Will Cathcart voiced his opposition to the OSB, stating that the service would not compromise its encryption for the proposed law and saying "The reality is, our users all around the world want security – ninety-eight percent of our users are outside the UK, they do not want us to lower the security of the product and just as a straightforward matter, it would be an odd choice for us to choose to lower the security of the product in a way that would affect those ninety-eight percent of users." He also stated in a tweet that scanning everyone's messages would destroy privacy.
Ciaran Martin, a former head of the UK National Cyber Security Centre, accused the government of "magical thinking" and said that scanning for child abuse content would necessarily require weakening the privacy of encrypted messages.
In February 2024, the European Court of Human Rights ruled, in an unrelated case, that requiring degraded end-to-end encryption "cannot be regarded as necessary in a democratic society" and was incompatible with Article 6 of the European Convention on Human Rights. This decision may potentially form part of the basis of legal challenges to the Online Safety Act 2023.
See also
Children's Code
Proposed UK Internet age verification system
Web blocking in the United Kingdom
References
External links
Draft Online Safety Bill
Joint Committee on the Draft Online Safety Bill
Final Act
United Kingdom Acts of Parliament 2023
Mass media regulation
Social media
Internet censorship in the United Kingdom
United Kingdom tort law
Data laws of the United Kingdom
Child online safety laws
Encryption debate | Online Safety Act 2023 | [
"Technology"
] | 2,563 | [
"Computing and society",
"Social media"
] |
67,646,766 | https://en.wikipedia.org/wiki/Evercast | Evercast is a privately held software as a service company that makes collaborative software primarily for the film, television, and other creative industry sectors. Its platform allows remotely located creative teams to collaborate in real-time on video production tasks, such as reviewing dailies, editing footage, sound mixing, animation, visual effects, and other components simultaneously. Its primary users are directors, editors, VFX artists, animators, and sound teams in the film, television, advertising, and video gaming industries.
History
The company was founded in 2015 by Alex Cyrell, Brad Thomas, and Blake Brinker, and is based in Scottsdale, Arizona. After using the software, film editor Roger Barton joined the company and became a co-founder and investor. In 2020, Evercast won an Engineering Emmy award.
Funding
In 2020, an unnamed angel investor provided just over $3 million of funding.
References
Software companies based in Arizona
Collaborative software
Film editing
Web conferencing
Impact of the COVID-19 pandemic on science and technology
Software associated with the COVID-19 pandemic | Evercast | [
"Technology"
] | 220 | [
"History of science and technology",
"Impact of the COVID-19 pandemic on science and technology"
] |
67,650,319 | https://en.wikipedia.org/wiki/HD%20121056 | HD 121056, or HIP 67851, is an aging giant star with a pair of orbiting exoplanets located in the southern constellation of Centaurus. This star is dimly visible to the naked eye with an apparent visual magnitude of 6.17. It is located at a distance of 209 light years from the Sun, based on parallax measurements, and is drifting further away with a radial velocity of 5.6 km/s.
The spectrum of HD 121056 presents as an evolved K-type giant star with a stellar classification of K0 III. It is presently ascending the red-giant branch, having exhausted the supply of hydrogen at its core. The star is about 5.5 billion years old and is spinning with a projected rotational velocity of 2.4 km/s. HD 121056’s concentration of heavy elements is similar to the Sun, with a metallicity Fe/H index of 0.020, although the star is enriched in lighter rock-forming elements like magnesium and aluminum. It has 1.6 times the mass of the Sun and has expanded to 5.72 times the Sun's radius. The star is radiating 15.8 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of 4,867 K.
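As a rough consistency check (not part of the cited determinations), the quoted radius and effective temperature can be combined through the Stefan–Boltzmann relation, taking the solar effective temperature to be about 5772 K:

$$\frac{L}{L_\odot} = \left(\frac{R}{R_\odot}\right)^{2}\left(\frac{T_\mathrm{eff}}{T_\odot}\right)^{4} \approx (5.72)^{2}\left(\frac{4867}{5772}\right)^{4} \approx 16.5,$$

which is close to the quoted 15.8 solar luminosities; a small difference of this size is expected when the radius, temperature, and luminosity are estimated independently.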
Planetary system
In 2014, two planets orbiting HD 121056 were discovered by the radial velocity method, and were confirmed a few months later. The orbits of these planets are stable on astronomical timescales, although the periods are not in orbital resonance. In 2022, the inclination and true mass of HD 121056 c were measured via astrometry.
The planetary system configuration is favorable for direct imaging of exoplanets in the near future, being included in the top ten easiest targets in 2018.
References
K-type giants
Planetary systems with two confirmed planets
Centaurus
5224
CD-34 9223
0532.1
067851
121056
J13535209-3518517 | HD 121056 | [
"Astronomy"
] | 409 | [
"Centaurus",
"Constellations"
] |
67,650,480 | https://en.wikipedia.org/wiki/NGC%203613 | NGC 3613 is an elliptical galaxy in the constellation Ursa Major. It was discovered by the astronomer William Herschel on April 8, 1793. NGC 3613 is the center of a cluster of galaxies, and has an estimated globular cluster population of over 2,000.
In 2011, SN 2011eh, a type Ia supernova with a peculiar spectrum, was detected within NGC 3613.
References
External links
Ursa Major
3613
Elliptical galaxies
034583 | NGC 3613 | [
"Astronomy"
] | 97 | [
"Ursa Major",
"Constellations"
] |
67,651,922 | https://en.wikipedia.org/wiki/Graham%20reaction | In organic chemistry, the Graham reaction is an oxidation reaction that converts an amidine into a diazirine using a hypohalite reagent. The halide of the hypohalite oxidant, or another similar anionic additive to the reaction, is retained as a substituent on the diazirine product. The reaction was first reported in 1965. Various reaction mechanisms have been proposed.
Amidine substrates for the reaction can easily be formed from the corresponding nitriles via the Pinner reaction. The halide substituent in the diazirine product can be displaced by various nucleophiles.
References
Ring forming reactions
Organic oxidation reactions
Amidines
Diazirines | Graham reaction | [
"Chemistry"
] | 148 | [
"Amidines",
"Functional groups",
"Organic reactions",
"Bases (chemistry)",
"Organic oxidation reactions",
"Ring forming reactions",
"Organic chemistry stubs"
] |
67,652,132 | https://en.wikipedia.org/wiki/Evernic%20acid | Evernic acid is an organic compound and depside with the molecular formula C17H16O7. It was first isolated from the lichen Usnea longissima. Evernic acid is soluble in hot alcohol and poorly soluble in water. It is produced by lichens of the genera Ramalina, Evernia, and Hypogymnia.
References
Further reading
Polyphenols
Methoxy compounds
Esters
Carboxylic acids | Evernic acid | [
"Chemistry"
] | 96 | [
"Organic compounds",
"Esters",
"Carboxylic acids",
"Functional groups"
] |
67,652,277 | https://en.wikipedia.org/wiki/Job%20crafting | Job crafting is an individually-driven work design process which refers to self-initiated, proactive strategies to change the characteristics of one's job to better align the job with personal needs, goals, and skills. Individuals engage in job crafting as a means to experience greater meaning at work, a positive work identity, better work-related well-being, and better job performance. As a topic of scientific inquiry, job crafting was built on research that suggests employees do not always enact the job descriptions that are formally assigned to them, but instead actively shape and utilize their jobs to fit their needs, values, and preferences. Classic job design theory typically focuses on the ways in which managers design jobs for their employees. As a work design strategy, job crafting represents a departure from this thinking in that the redesign is driven by employees, is not negotiated with the employer and may not even be noticed by the manager. This idea also distinguishes job crafting from other 'bottom-up' redesign approaches such as idiosyncratic ideals (i-deals) which explicitly involve negotiation between the employee and employer.
Theoretical background
The term 'job crafting' was originally coined by Amy Wrzesniewski and Jane E. Dutton in 2001, however the idea that employees may redesign their jobs without the involvement of management has been present in job design literature since 1987. Wrzesniewski and Dutton's (2001) initial definition limited job crafting to three forms: Changes made by employees in their jobs tasks (i.e. task crafting), job relationships (i.e. relational crafting), and meaning of the job (i.e. cognitive crafting). More recent developments have indicated that employees may change other aspects of the job; to cover this broader scope, Maria Tims and Arnold B. Bakker proposed in 2010 that job crafting be framed within the job-demands resources (JD-R) model.
Recent theoretical developments have classified job crafting behaviours into two higher-order constructs: Approach crafting, which refers to self-directed actions to gain positive work aspects; and avoidance crafting, which refers to self-directed actions to avoid negative work aspects. These two constructs can then be further differentiated depending on whether the job crafting is behavioural (i.e. the individual makes actual changes to the job) or cognitive (i.e. the individual changes the way they think about the work). Further differentiating can then be made depending on whether individuals change their job resources or job demands, resulting in eight 'types' of job crafting (e.g., approach behavioural resource crafting).
Forms of job crafting
Task crafting — This involves changing the type, scope, sequence, and number of tasks that make up one's job. Employees may take initiative to change the tasks they carry out, change the way they work, or change the timing of their tasks. In doing so, employees exert a level of control over their work, which has been shown to minimize negative feelings (e.g., alienation).
Relational crafting — This refers to changing the nature of interactions at work. For example, employees may choose to what extent and how they approach colleagues, or to what extent they get involved in work group social activities.
Cognitive crafting — This involves modifying one's perceptions about their job to ascribe more meaning to the work. For example, an employee might continuously re-evaluate how work influences them and how connected they are to the work. This could involve considering observations at work and evaluating how well those observations align with personal goals, ideals, and passions.
Practical implications
For employees
If enacted properly, job crafting is a method for employees to improve their quality of life at work in several important ways, as well as make valuable contributions to the workplace. The uniqueness of individual workers makes it exceptionally difficult for organizations to create 'one size fits all' work designs. Job crafting means that work designs are not fixed, and can be adapted over time to accommodate employees' unique backgrounds, motives, and preferences. The success of a job crafter may depend largely on their ability to take advantage of available resources (i.e. people, technology, raw materials etc) to reorganise, restructure, and reframe a job. Research has demonstrated that this type of resourcefulness can help employees get more enjoyment and meaning out of work, enhance their work identities, cope with adversity, and perform better.
For managers
Job crafting has the potential to positively influence both individual and organizational performance, meaning it is in the interest of managers to create a context that facilitates resourceful job crafting. Highly prescribed, restrictive job designs may limit employees from making positive changes in the way they perform tasks, taking on additional tasks, altering interactions with others, or viewing their jobs in an alternative way. On the other hand, job crafting that is beneficial for the job crafter may be harmful to the goals of the organization and produce negative effects. Therefore, in addition to allowing room for crafting, managers must build a shared understanding with employees that job crafting is encouraged so long as it aligns with the organizations overall strategy. Maintaining open lines of communication between managers and employees and building trust may promote positive job crafting which is favourable to both the individual crafter and the organization.
References
Human resource management
Industrial and organizational psychology
Organizational behavior | Job crafting | [
"Biology"
] | 1,106 | [
"Behavior",
"Organizational behavior",
"Human behavior"
] |
67,652,489 | https://en.wikipedia.org/wiki/Merochlorophaeic%20acid | Merochlorophaeic acid is a depside with the molecular formula C24H30O8 which has been isolated from the lichen Cladonia merochlorophaea.
References
Further reading
Polyphenols
Carboxylic acids
Phenol esters
Methoxy compounds
Pentyl compounds
Lichen products | Merochlorophaeic acid | [
"Chemistry"
] | 69 | [
"Carboxylic acids",
"Natural products",
"Functional groups",
"Lichen products"
] |
67,652,600 | https://en.wikipedia.org/wiki/Getac | Getac is a Taiwanese multinational technology company that specializes in rugged computers, mobile video systems, mechanical components, automotive parts, and aerospace fasteners. Getac was established on 5 October 1989 as a joint venture with GE Aerospace. A subsidiary of the MiTAC-Synnex Group, Getac has been listed on the Taiwan Stock Exchange (TWSE: 3005) since 2002. Getac is one of the major suppliers of rugged computers.
History
Getac was established on 5 October 1989 as a joint venture with GE Aerospace.
In 2009, Getac acquired Waffer Technology Corp., which resulted in Getac becoming the world's third largest aluminum-magnesium alloy producer and the leading supplier of seat belt spindles and spools.
In 2012, the company introduced the Getac Z710, the world's first rugged 7-inch Android tablet.
In 2018, the company expanded its video recording and software businesses with the acquisition of WHP Workflow Solutions Inc. and the formation of Getac Video Solutions.
In 2020, Getac was selected by BMW to provide rugged mobile devices for applications including R&D, production, warehouse logistics and workshop diagnostics.
In 2021, Getac was selected by the United States Air Force to provide rugged computers under the Client Computing Solutions Quantum Enterprise Buy (CCS-2 QEB) Program.
See also
List of companies of Taiwan
References
External links
1989 establishments in Taiwan
2009 mergers and acquisitions
Electronics companies established in 1989
Electronics companies of Taiwan
Manufacturing companies established in 1989
Multinational companies headquartered in Taiwan
Taiwanese brands
Computer companies of Taiwan
Computer systems companies | Getac | [
"Technology"
] | 323 | [
"Computer systems companies",
"Computer systems"
] |
67,652,938 | https://en.wikipedia.org/wiki/Madol%20Kurupawa | Madol Kurupawa is a wooden king post or catch pin, which is used to secure numerous wooden beams of a roof structure to a single point. It is a unique feature of Kandyan architecture/joinery.
This distinctive structural arrangement occurs in medieval Sri Lankan buildings where four-pitch roofs have been provided. Rafters of the shorter sides are elbowed against the ridge plate and are held fast at its pinnacle by a timber boss known as the madol kurupawa, which is in turn attached to the end of the wall plate. The pekada provides an intermediate means of connection between the pillars and beams, while the madol kurupawa provides a similar connection between the rafters and the ridge plate at the shorter side of the pitched roof. No mechanical joinery (nails, bolts or glue) is used other than wooden pegs, and structural stability is achieved through compression alone.
The most notable example can be found at Embekka Devalaya in Udunuwara (built during the reign of King Rajadhi Rajasingha), where the upper ends of twenty-six rafters are held together using a madol kurupawa at the hip end of the 'Digge' (dancing hall). Another example can be found at the National Museum of Kandy.
See also
Embekka Devalaya
Pekada
External links
Further reading
References
Timber framing
Buddhist architecture
Vernacular architecture
Indigenous architecture
Architecture in Sri Lanka | Madol Kurupawa | [
"Technology"
] | 291 | [
"Structural system",
"Timber framing"
] |
77,777,094 | https://en.wikipedia.org/wiki/Balfour%20Biological%20Laboratory%20for%20Women | The Balfour Biological Laboratory for Women was a laboratory attached to the University of Cambridge from 1884 to 1914. Established to expand the laboratory capacity and provide a separate space for women's practical work, it served as an important source of academic posts and opportunities for networking and discussion for women at Cambridge until laboratories began being shared by men and women in 1914.
Background
In March 1881, the month after women students received the right to sit the Natural Sciences Tripos at the University of Cambridge, twenty-two natural sciences students at Newnham College, Cambridge presented a memorial to the college's governing body outlining the need for more laboratory space. Newnham, one of two women's colleges at Cambridge, had had a purpose-built laboratory on its grounds since 1879. This laboratory was mostly set up for chemistry, and more space was needed because the natural sciences tripos included a two-day examination in practical laboratory techniques. All laboratory space at Cambridge was becoming oversubscribed due to the increase in students wanting to study natural sciences, but it was also thought appropriate that women, who attended lectures alongside men, should have a separate laboratory facility rather than a shared one.
In April 1881, the Newnham College council appointed a subcommittee consisting of Principal Anne Clough, Vice-Principal Eleanor Sidgwick, her brother Francis Maitland Balfour, and Trinity College's Coutts Trotter, to investigate the possibility of establishing a laboratory. The committee selected a site that month, and Eleanor Sidgwick began legal proceedings for purchasing the building in May.
Newnham College raised over £2000 towards the laboratory over the next three years. The other women's college at Cambridge, Girton College, also contributed to the equipping of the laboratory but was not involved in its establishment because it took the position that the laboratory should be established by the University of Cambridge itself, whereas Newnham was willing to proceed independently. Renovations and equipment were also donated by Coutts Trotter and Walter Holbrook Gaskell.
History
The laboratory opened for teaching in the spring of 1884, funded largely by Eleanor Sidgwick, Vice-Principal of Newnham College, and her sister Alice Blanche Balfour. It was named in memory of their brother Francis Maitland Balfour, a biologist who had been a supporter of Newnham College and a member of the committee negotiating to secure the building. Francis had died in a climbing accident on Mont Blanc in 1882 a few months after becoming lecturer in morphology at Cambridge. A bust of him was gifted to the laboratory by his former students. The premises for the laboratory was an abandoned chapel at Downing Place, in the centre of Cambridge and a five-minute walk away from the men's laboratory.
The laboratory drew most of its staff and funding from Newnham College, and was also open to students at Girton College, the only other Cambridge college accepting women students at the time. Resources were at first limited, but staff wrote of the sense of excitement at overcoming the obstacles in the early days. At first, the staff consisted only of director Alice Johnson, who had taken the Part I examination in Morphology, and Marion Greenwood, who taught physiology. Physiology student Florence Eves collaborated with Johnson on a prospectus as to how the laboratory should be run. There was also a "young untrained boy" to assist with setting up experiments, so the demonstrators did most of the work preparing specimens and reagents themselves. Greenwood also taught botany, because women were excluded from the botany laboratory by Sydney Howard Vines. As botany became more popular, the Balfour appointed two more staff to teach it in 1886, Lilian Sheldon and Anna Bateson.
Demonstrators supervised experiments and tutored students as well as carrying out their own research, and they also offered lectures when women students' access to university lectures was temporarily withdrawn in 1897. An average of forty students per year used the Balfour Laboratory in the 1880s, increasing to about sixty from 1896 when morphology, physics and geology were added to the programme.
The laboratory was refurbished in 1892. By 1910, it had acquired two neighbouring buildings. It contained two floors of laboratories, a lecture room, a greenhouse, and bench space for independent research.
The Balfour laboratory closed for teaching in 1914, by which time women were being admitted to share practical facilities with men, and student numbers were declining due to World War I. The building remained open for women's scientific research until 1927. It also hosted the Department of Biochemistry from 1919 to 1923.
Personnel
The Balfour Laboratory provided academic posts for women which would have been harder to come by otherwise because, being a designated laboratory for women, it needed to appoint women as demonstrators. This led to several women scientists advancing their careers and completing the research necessary to make publications.
Directors
Alice Johnson, demonstrator and director, 1884–1890
Marion Greenwood, demonstrator in physiology and botany 1884–1888, demonstrator in physiology 1902–3 and head for much of the period 1890–1899
Edith Saunders, demonstrator in botany 1888–1890 and head 1899–1914
Staff
Source:
Anna Bateson, demonstrator in botany 1886–7
Lilian Sheldon, assistant in botany 1886–7 and demonstrator in animal morphology 1893–8
Laura Russell Howell, demonstrator in animal morphology 1890–2
Rachel Alcock, demonstrator in animal morphology 1890–1 and 1898–9 and in biology 1903–4
Helen Klaassen, demonstrator in physics 1891–1901
Agnes Isabella Mary Elliot, demonstrator in vertebrate morphology 1892–6
Gertrude Elles, demonstrator in geology 1894–1914
Elizabeth Dale, assistant in botany 1897–9
Annie Purcell Sedgwick, assistant in physiology 1897–8
Elinor Philipps, demonstrator in animal morphology 1898–1902
Florence Margaret Durham, demonstrator in animal morphology 1898–1900
Sibille Ford, assistant in animal morphology 1901–2 and in botany 1903–4
Igerna Sollas, demonstrator in animal morphology and lecturer in animal biology 1902–1912
Muriel Wheldale, demonstrator in physiological botany 1907–1914
Mary Gladys Sykes, assistant in botany 1908–9, demonstrator in physiology 1909–10, and demonstrator in vegetable biology 1910–1
Susila Bonnerjee, demonstrator in physiology 1910–2
Agnes Robertson, demonstrator in systematic botany 1911–4, continued to use the facilities until 1927
Notable students and researchers
Catherine Durning Holt, who co-authored books on heredity with her husband William Cecil Dampier; studied at Balfour 1889–1892
Mary Tebb, physiologist, served as assistant to Marion Greenwood 1891–3
Alice Embleton, biologist, studied at Balfour in 1900
Dorothea Pertz, botanist, conducted research with Francis Darwin
Gabrielle Matthaei, botanist, conducted research with Frederick Blackman
Notes
References
1884 establishments in the United Kingdom
University and college laboratories in the United Kingdom
Botanical research institutes
Former research institutes
Women in science and technology
Research institutes in Cambridge
Newnham College, Cambridge
History of women in the United Kingdom | Balfour Biological Laboratory for Women | [
"Technology"
] | 1,422 | [
"Women in science and technology"
] |
77,777,633 | https://en.wikipedia.org/wiki/Austroboletus%20austrovirens | Austroboletus austrovirens is a species of bolete fungus found in northern Australia, particularly in northern Queensland and the Northern Territory. It was first described in 2017 by the mycologists N.A. Fechner, Bougher, Bonito and Roy Halling. The species is native to North Queensland and has a cap 5–6 to 11 centimetres long. Austroboletus austrovirens is distinguished by features "of dry, green pigments on its pileus and stipe reticulum in combination with apricot orange pigments on its stipe surface."
According to the Queensland Government, the species' conservation status is "least concern", meaning that it is not at risk of extinction.
References
Fungi native to Australia
austrovirens
Fungus species
Taxa named by Roy Halling | Austroboletus austrovirens | [
"Biology"
] | 177 | [
"Fungi",
"Fungus species"
] |
77,777,667 | https://en.wikipedia.org/wiki/Decidim | Decidim describes itself as a "technopolitical network for participatory democracy". It combines a free and open-source (FOSS) software package with a participatory political project and an organising community, "Metadecidim". Decidim participants describe the software, political and organising components as "technical", "political" and "technopolitical" levels, respectively. Decidim's aims can be seen as promoting the right to the city, as proposed by Henri Lefebvre. Decidim instances were actively in use for participatory decision-making in municipal and regional governments and by citizens' associations in Spain, Switzerland and elsewhere. Studies of the use of Decidim found that it was effective in some cases, while in one case, where it was implemented top-down in Lucerne, it strengthened the digital divide.
Creation
A server called "Decidim", running a fork of the "Consul" software, was created in Spain in 2016 by the 15M anti-austerity movement, after a political party derived from the protest movement obtained political power. In early 2017, the server was switched to a similarly inspired but new software project, Decidim, completely rewritten with the aim of being more modular and convenient for development by a wide community.
The name "Decidim" comes from a Catalan word meaning "let's decide" or "we decide".
Software
Decidim uses Ruby on Rails. The software defines two structures: "participatory spaces" and "participatory components". The participatory spaces (six as of early 2024) include "processes" (such as a participatory budget), "assemblies" (such as a citizens' association website), "conferences/meetings", "initiatives", and "consultations (voting/elections)". The participatory components (twelve as of early 2024) range from "comments", "proposals", "amendments", "votes" through to "accountability". Together these allow wide flexibility in creating specific space–component combinations. The "accountability" component is used to monitor whether and how a project is executed.
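The space–component architecture can be pictured as a simple composition in which a space of a given kind is configured with whichever components it needs. The sketch below is purely illustrative: Decidim itself is written in Ruby on Rails, and the names used here are taken only from the list above, not from Decidim's actual data model.

```python
from dataclasses import dataclass, field

# Illustrative only: names mirror the prose above, not Decidim's real schema.
SPACES = {"process", "assembly", "conference", "initiative", "consultation"}
COMPONENTS = {"comments", "proposals", "amendments", "votes", "meetings",
              "budgets", "accountability"}

@dataclass
class ParticipatorySpace:
    kind: str                                   # e.g. "process"
    title: str
    components: list[str] = field(default_factory=list)

    def __post_init__(self):
        if self.kind not in SPACES:
            raise ValueError(f"unknown participatory space: {self.kind}")

    def add_component(self, name: str) -> None:
        if name not in COMPONENTS:
            raise ValueError(f"unknown component: {name}")
        self.components.append(name)

# A participatory-budget process combining several components in one space.
budget = ParticipatorySpace("process", "Neighbourhood budget")
for name in ("proposals", "budgets", "meetings", "accountability"):
    budget.add_component(name)
print(budget)
```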
Three user levels are defined: general visitors with view-only access; registered users who have several participation rights; and verified users who can participate in decision-making. Users may be individuals or represent associations or working groups within an organisation. Users with special privileges are called "administrators", "moderators" and "collaborators".
Four versions of Decidim have been released.
The Decidim software development strategy is intended to be modular and scalable. As FOSS, the software is intended to encourage both citizen and government interaction with each other and with decision-making power over the software itself, aiming at high levels of traceability and transparency.
Decidim software provides an application programming interface (API) for command line access.
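As a concrete illustration of scripted access, the snippet below sends a query to a Decidim instance's GraphQL API over HTTP. It is a minimal sketch: the endpoint path /api and the participatoryProcesses field follow Decidim's published GraphQL documentation, but the exact fields available depend on the instance's version and configuration, and the instance URL here is a placeholder, not a real server.

```python
import json
import urllib.request

# Placeholder instance URL; Decidim servers normally expose GraphQL under /api.
ENDPOINT = "https://example-decidim-instance.org/api"

# Minimal GraphQL query; field names follow Decidim's GraphQL schema docs,
# but availability can vary between Decidim versions.
query = "{ participatoryProcesses { id slug } }"

request = urllib.request.Request(
    ENDPOINT,
    data=json.dumps({"query": query}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    payload = json.load(response)

# Print the id and slug of every participatory process on the instance.
for process in payload.get("data", {}).get("participatoryProcesses", []):
    print(process["id"], process["slug"])
```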
Technopolitical project
In the spirit of the Decidim software being free and open-source software (FOSS), a community of software developers, social activists, software consultancies, researchers, and administrative staff from municipal governments called Metadecidim was created for discussing and analysing Decidim experience and development. Metadecidim is seen as an intermediary component between the political level of Decidim, implemented on servers such as Barcelona Decidim, and the technical level of hosting the software source code and bug reporting structures. Metadecidim has had about 5,000 registered participants.
The Decidim community has a text called the Decidim Social Contract (DSC) that defines six guidelines. The DSC defines the free software licences that may be used for Decidim software; it defines requirements of transparency, traceability and integrity of content hosted by Decidim software; a goal of equal access to all users and democratic quality parameters to measure progress towards equality; data privacy; and it requires inter-institutional cooperation of institutions implementing instances of the software, in order to encourage further development. The free software licensing is the GNU Affero General Public License (AGPL) version 3 for code; the CC BY-SA licence is used for content; and "data" is published under the Open Database License.
Philosophically, the aims of Decidim can be seen as promoting the right to the city, as proposed by Henri Lefebvre. Metadecidim's self-description as "technopolitical" is seen as implying that the political implications of designs and choices of software are seen as significant, in opposition to the view that software is "value neutral and objective". Metadecidim sees Decidim as a "recursively democratic infrastructure", in the sense that the software, political and server infrastructure is "both used and democratised by its community, the Metadecidim community".
Decidim proponents see the combination of online and offline participation as fundamental: "From its very conception until today, a distinguishing feature of Decidim over other kinds of participatory democracy software ... was that of connecting digital processes directly with public meetings and vice versa."
Organisationally, the community formally established the Decidim Association in 2019, and the City Council of Barcelona gave control of the Decidim trademark and code base to the association. The effect was to combine public funds with citizens' association control of decision-making.
Use of Decidim
In 2022, Borge and colleagues estimated that there were 311 instances running Decidim in Spain and in 19 other countries; while Borges and colleagues estimated that there were Decidim instances run by 80 local and regional governments and 40 citizens' associations in Spain and elsewhere. In 2023, Suter and colleagues cited Decidim's own estimate of 400 city and regional governments and civil society institutions using Decidim. The Open University of Catalonia, the University of Bordeaux and the University of Caen Normandy ran Decidim instances.
Decidim Barcelona
A Decidim server was run by the City Council of Barcelona for a two-month trial prior to 2017, in which 40,000 citizens discussed their own proposals and proposals made by the council. The Decidim software allowed threaded discussion, labelling whether the initial comment on a proposal was negative, neutral or positive, and notification to participants.
The two-month trial included both online and face-to-face participation. According to Decidim, about 40% of the 39,000 individual participants did so face-to-face, and about 85% of the organisational participants did so face-to-face. There were about 11,000 proposals made on the Decidim server, of which about 8000 were accepted. The execution of the proposals was monitored during the following four years, spending about 90% of the Barcelona City Council's budget for 2016–2019.
Zurich and Lucerne
In Switzerland, urban development has legal requirements in relation to citizen participation. Use of Decidim in Zurich and Lucerne in 2021 and 2022 was studied by Suter and colleagues, based on documentary evidence, interviews with 15 people in Zurich and 17 in Lucerne ranging from municipality employees through to representatives of neighbourhood associations, and "participatory observations" (informal participatory events observed by the researchers). The researchers found that the effectiveness of Decidim varied significantly between the different cases, and argued that the "full potential" of Decidim had not yet been achieved in Switzerland.
In Wipkingen in Zurich, two local citizens' associations used a server running Decidim to run a participatory budget. The project, named "Quartieridee", received 99 proposal submissions and awarded funding to eight proposals. The researchers found that the implementation depended on significant financial resources and citizens' voluntary work, and that it ran into difficulties because the municipality lacked legal procedures for implementing the citizens' chosen projects.
The project was scaled up to the Zurich city level the following year under the name "Stadtidee", with its own participatory budget. Among the successful projects was a confrontation between the citizens' association "Linkes Seeufer für Alle" and the company Kibag AG over a plot of land owned by Kibag next to Lake Zurich. One effect of the Decidim networking was that citizens legally occupied the plot of land for several days.
In 2021, LuzernNord was an area of Lucerne with many migrants and people with low incomes, at risk of gentrification. A top-down use of a Decidim server by the local administration, in which citizens' associations were encouraged to participate, was found by the researchers to strengthen the digital divide rather than overcome it. Limiting the language to German and participants' lack of confidence in being able to participate effectively were identified as specific factors working against the effectiveness of the project.
Other municipalities
Based on nine in-depth interviews with officials responsible for Decidim, conducted in 2018 in some of the initial municipalities that used Decidim, online interviews in March 2019 with officials from 34 municipalities using Decidim, and data from the Decidim servers, the effectiveness of Decidim in terms of transparency, participation in decision-making, and deliberation (discussion of proposals) was studied by Rosa Borge and colleagues. It was found that the officials saw Decidim's role as primarily promoting transparency and the collecting of citizens' proposals, while having only a modest role in transferring decision-making to citizens and a minor role in encouraging online citizen debate.
Several municipalities' use of Decidim provided their first use of participatory budgeting.
The Borge et al. study also found, consistently with other research, that the participatory aspect of citizens making proposals and participating in decisions was obstructed in some cases by local civil society associations, since direct citizen participation was seen to be in competition with the associations' roles. Several municipal governments worked on the implementation of Decidim together with local associations, adding features to the software such as different weightings for proposals by individuals versus those by associations.
The use of Decidim and participatory processes was found to depend on electoral results in some cases: these ceased in Badalona after Dolors Sabater lost power as Mayor in June 2018.
See also
technological utopianism (Decidim sees itself as opposed to technological utopianism)
References
External links
Decision-making software
Free software programmed in Ruby
Participatory democracy
Ethics of science and technology
Philosophy of technology | Decidim | [
"Technology"
] | 2,175 | [
"Philosophy of technology",
"Science and technology studies",
"Ethics of science and technology"
] |
77,780,512 | https://en.wikipedia.org/wiki/Roland%20N.%20Horne | Roland N. Horne is an energy engineer, author and academic. He is the Thomas Davies Barrow Professor of Earth Sciences, a Senior Fellow at the Precourt Institute for Energy, and Director of the Geothermal Program at Stanford University.
Horne is best known for his contributions to well test interpretation, production optimization, and the tracer analysis of fractured geothermal reservoirs. Among his authored works are peer-reviewed publications and the books Modern Well Test Analysis and Discrete Fracture Network Modeling of Hydraulic Stimulation, the latter of which he co-authored. He was a Society of Petroleum Engineers (SPE) Distinguished Lecturer in 1998, 2009, and 2020, and has received the SPE Distinguished Achievement Award for Petroleum Engineering Faculty, the Lester C. Uren Award, and the John Franklin Carll Award. Additionally, he served on the International Geothermal Association (IGA) Board from 1998 to 2001, 2001 to 2004, and 2007 to 2010, and was the IGA President from 2010 to 2013. He also served as Technical Program Chair for the World Geothermal Congress in Turkey in 2005, Bali in 2010, Melbourne in 2015, and Iceland in 2020.
Horne was elected to the U.S. National Academy of Engineering (NAE) in 2002, named an Honorary Member of the SPE in 2007, and awarded the titles of Fellow at the School of Engineering, University of Tokyo, and Honorary Professor at China University of Petroleum – East China in 2016.
Education
Horne received a Bachelor of Engineering and a Doctor of Philosophy in Engineering Science in 1972 and 1975, respectively, and a Doctor of Science (D.Sc.) in Engineering in 1986, all from the University of Auckland (UoA).
Career
Horne served as an Acting Assistant Professor of Chemical Engineering and then Petroleum Engineering at Stanford University between 1976 and 1978. He then joined the University of Auckland as a lecturer in Theoretical and Applied Mechanics for a year in 1978–1979 before being appointed assistant professor of Petroleum Engineering at Stanford in 1980. He was promoted to associate professor in 1984 and served as Professor from 1990 to 2006, while also holding the position of Chairman of Petroleum Engineering from 1995 to 2006. In 2006, he assumed the role of Professor of Energy Resources Engineering, and in 2022, he became Professor of Energy Science and Engineering. He was named the Thomas Davies Barrow Professor of Earth Sciences in 2008.
Research
Horne's research focuses on matching models to reservoir responses through inverse problems that infer unknown reservoir parameters instead of measuring them directly, addressing typical issues such as tracer analysis of fractures, computer-aided well test analysis, production schedule optimization, and automated history matching/decline analysis. In particular, he works on geothermal reservoir engineering and the flow of fluids through porous materials and fractures.
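As a toy illustration of the kind of inverse problem described here, the sketch below fits a simple exponential decline model to synthetic, noisy production data by ordinary least squares on the log-transformed rates. All numbers and the model form are invented for illustration; Horne's actual methods involve far richer reservoir models and field data.

```python
import math
import random

random.seed(1)

# Toy "observed" production data: exponential decline q(t) = q0 * exp(-D t)
# with multiplicative noise (all values invented for illustration).
TRUE_Q0, TRUE_D = 1000.0, 0.08          # rate and decline constant (made up)
times = list(range(1, 37))               # 36 months of data
rates = [TRUE_Q0 * math.exp(-TRUE_D * t) * random.uniform(0.95, 1.05)
         for t in times]

# Inverse problem: estimate q0 and D from the noisy data.
# Taking logs makes the model linear, ln q = ln q0 - D t, so ordinary
# least squares on (t, ln q) recovers both parameters.
n = len(times)
mean_t = sum(times) / n
mean_lnq = sum(math.log(q) for q in rates) / n
sxx = sum((t - mean_t) ** 2 for t in times)
sxy = sum((t - mean_t) * (math.log(q) - mean_lnq) for t, q in zip(times, rates))
slope = sxy / sxx
est_D = -slope
est_q0 = math.exp(mean_lnq - slope * mean_t)

print(f"estimated q0 = {est_q0:.1f}, estimated D = {est_D:.3f} "
      f"(true values {TRUE_Q0}, {TRUE_D})")
```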
Works
Horne authored Modern Well Test Analysis in 1995, which provided a tutorial on well test interpretation using computerized tools and included a companion website with data examples, a searchable version of the book, and a software program for practical learning. His next book, Discrete Fracture Network Modeling of Hydraulic Stimulation, co-authored with Mark W. McClure, detailed a model integrating fluid flow, deformation, friction weakening, and permeability evolution in complex fracture networks. This work, published in 2013, explored hydraulic stimulation in low matrix permeability settings and demonstrated how fracture deformation stresses influenced propagation and network formation.
Reservoir engineering
Horne's work has led to numerous research papers—early in his career, he received Best Paper awards from the Journal of Petroleum Technology (1992) and the SPE Formation Evaluation (1993). His 1992 study showed that multivariate optimization provides superior solutions compared to traditional methods by analyzing all variables simultaneously. In 1993, he advanced well-test analysis with a new Laplace space application, improving parameter identification and flow rate deconvolution from noisy data while maintaining familiar pressure function behavior. Additionally, he presented the sequential predictive probability method, a Bayesian approach for improved quantitative discrimination between reservoir models in well-test analysis. Within his work on reservoir development and design optimization, a hybrid Genetic Algorithm (HGA) was used to optimize the layout of 33 new wells in an oil field project, resulting in a 6% increase in project profit by enhancing well distribution and platform location. His research further introduced a nonlinear parameter estimation method for permeability and porosity in heterogeneous reservoirs by integrating well testing, production history, seismic data, and geological correlations, demonstrating that combined data is more valuable than isolated data. In 2004, he also created a utility-theory-based methodology for well placement under uncertainty, incorporating numerical simulation, a hybrid genetic algorithm, and a cost-effective approach with random reservoir realizations.
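To make the idea of evolutionary well-placement optimization concrete, the toy sketch below evolves candidate (x, y) well locations against a synthetic "productivity" surface. It is only a generic genetic-algorithm illustration under invented assumptions (the fitness function, grid size and GA parameters are all made up); it is not Horne's hybrid genetic algorithm, which coupled the optimizer to full reservoir simulation and economic models.

```python
import math
import random

random.seed(0)

def productivity(x: float, y: float) -> float:
    """Synthetic stand-in for a reservoir-simulator objective (made up)."""
    return math.exp(-((x - 3.0) ** 2 + (y - 7.0) ** 2) / 8.0) \
         + 0.5 * math.exp(-((x - 8.0) ** 2 + (y - 2.0) ** 2) / 4.0)

def random_well():
    return (random.uniform(0, 10), random.uniform(0, 10))

def mutate(well, sigma=0.5):
    # Perturb coordinates and clip to the 10 x 10 field.
    x, y = well
    return (min(10, max(0, x + random.gauss(0, sigma))),
            min(10, max(0, y + random.gauss(0, sigma))))

def crossover(a, b):
    # Blend the parents' coordinates.
    w = random.random()
    return (w * a[0] + (1 - w) * b[0], w * a[1] + (1 - w) * b[1])

population = [random_well() for _ in range(30)]
for generation in range(40):
    population.sort(key=lambda well: productivity(*well), reverse=True)
    parents = population[:10]                  # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    population = parents + children

best = max(population, key=lambda well: productivity(*well))
print(f"best well location ~ ({best[0]:.2f}, {best[1]:.2f}), "
      f"fitness {productivity(*best):.3f}")
```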
Since 2010, Horne's work has considered the use of machine learning and data analytics in reservoir analysis, leading to the improvement of parameter estimation from permanent downhole gauges and other noisy measurements.
Enhanced geothermal systems and fracture modeling
Horne has made contributions to geothermal energy, enhanced geothermal systems (EGS), and fracture modeling. He, along with McClure, received the 2011 SEG Best Paper in "Geophysics" Award for their paper describing a numerical investigation of induced seismicity caused by injecting water into a single isolated fracture in fractured, low-permeability rock, which triggered slip on preexisting large-scale fracture zones. Furthermore, he found that in EGS projects, new fractures often formed away from the wellbore and propagated through natural fractures, with shear stimulation being more likely in areas with thick faults, as supported by simulation results and field experiences.
In 2022, Horne co-advised a study on the microbiological community in the subsurface, which demonstrated how the deep population of microbes could be used to characterize geological events.
Awards and honors
1982 – Distinguished Achievement Award for Petroleum Engineering Faculty, SPE
2000 – Distinguished Member, SPE
2000 – Lester C. Uren Award, SPE
2002 – Member, NAE
2005 – John Franklin Carll Award, SPE
2006 – Henry J. Ramey Jr., Geothermal Reservoir Engineering Award, Geothermal Resources Council
2011 – Patricius Medal, German Geothermal Society
2015 – Geothermal Special Achievement Award, Geothermal Resources Council
2016 – Honorary Professor, China University of Petroleum – East China
2016 – Fellow, School of Engineering, University of Tokyo
2023 – Core Values Award, Women in Geothermal (WING)
Bibliography
Books
Modern Well Test Analysis: A Computer-aided Approach (1995) ISBN 9780962699214
Discrete Fracture Network Modeling of Hydraulic Stimulation: Coupling Flow and Geomechanics (2013) ISBN 9783319003832
Selected articles
Güyagüler, B., & Horne, R. N. (2004). Uncertainty assessment of well-placement optimization. SPE Reservoir Evaluation & Engineering, 7(01), 24–32.
McClure, M. W., & Horne, R. N. (2011). Investigation of injection-induced seismicity using a coupled fluid flow and rate/state friction model. Geophysics, 76(6), WC181-WC198.
Tian, C., & Horne, R. N. (2019). Applying machine-learning techniques to interpret flow-rate, pressure, and temperature data from permanent downhole gauges. SPE Reservoir Evaluation & Engineering, 22(02), 386–401.
Zhang, Y., Horne, R. N., Hawkins, A. J., Primo, J. C., Gorbatenko, O., & Dekas, A. E. (2022). Geological activity shapes the microbiome in deep-subsurface aquifers by advection. Proceedings of the National Academy of Sciences, 119(25), e2113985119.
Aljubran, M. J., & Horne, R. N. (2024). Power supply characterization of baseload and flexible enhanced geothermal systems. Scientific reports, 14(1), 17619.
References
Energy engineers
American academics
American writers
University of Auckland alumni
Stanford University School of Earth Sciences faculty
Living people
1952 births | Roland N. Horne | [
"Engineering"
] | 1,667 | [
"Energy engineering",
"Energy engineers"
] |
77,781,127 | https://en.wikipedia.org/wiki/List%20of%20media%20featuring%20space%20marines | Fictional space marines have been included in a variety of short stories, novels, films, television, and games from the 1930s to the present.
Literature
Films and television
Games
References
Space marines
Lists of films and television series
Literature lists
Video game lists | List of media featuring space marines | [
"Technology"
] | 49 | [
"Computing-related lists",
"Video game lists"
] |
77,781,830 | https://en.wikipedia.org/wiki/Level%20Access | Level Access is a digital accessibility company based in Arlington, Virginia.
It provides commercial software for testing web accessibility and creating accessible and legally compliant websites, mobile apps, software, and other digital experiences.
History
It was founded in 1999 by engineers with disabilities. The initial business plan for what became Level Access was to build a website for accessible travel so that people with disabilities could go to the site, get information about accessible venues and plan accessible vacations.
In February 2018, Level Access acquired Simply Accessible, Inc., a digital accessibility company headquartered in Ontario, Canada.
In August 2022, Level Access completed its merger with eSSENTIAL Accessibility, the pioneer of Accessibility-as-a-Service.
In January 2024, Level Access acquired UserWay for almost $99 million.
Mark Zablan was appointed chief executive officer of Level Access in October 2024.
Products
Level Access provides six digital accessibility products and has been ranked by Forrester in the top tier of platforms offering digital accessibility services.
References
Assistive technology
Companies based in Virginia
Companies based in Arlington County, Virginia
Accessibility
Web accessibility | Level Access | [
"Engineering"
] | 224 | [
"Accessibility",
"Design"
] |
77,781,892 | https://en.wikipedia.org/wiki/Lathyrus%20%C3%97%20hammettii | Lathyrus × hammettii is a hybrid flowering plant within the genus Lathyrus and family Fabaceae. The hybrid was produced by artificially hybridizing L. odoratus with L. belinensis.
History
The hybridization of these two species was first attempted by plant breeder Dr Keith Hammett, using the sweet pea cultivar Lathyrus odoratus 'Orange Dragon', L. belinensis and embryo rescue techniques. The hybrid was attempted in hopes of producing a yellow sweet pea, which plant breeders have sought to create for decades. The F1 hybrid offspring produced from the cross were self-sterile and possessed pink flowers. Multiple non-yellow cultivars of Lathyrus × hammettii have been produced descending from those plants.
Mildew resistance
Lathyrus belinensis carries genes that confer mildew resistance, whereas L. odoratus is susceptible to mildew. Hybrids produced between the two species were found to be resistant to the fungus Erysiphe pisi, which causes mildew in sweet peas.
References
Hybrid plants
Lathyrus
Plants described in 2014 | Lathyrus × hammettii | [
"Biology"
] | 223 | [
"Hybrid plants",
"Plants",
"Hybrid organisms"
] |
77,781,911 | https://en.wikipedia.org/wiki/Association%20of%20the%20Electrical%20and%20Digital%20Industry | ZVEI e. V., the German Association of the Electrical and Digital Industry (formerly the German Electrical and Electronic Manufacturers' Association), represents the economic, technological and environmental interests of the German electrical and digital industry. With 910,400 employees across Germany and a total turnover of 242 billion euros (in 2023), the electrical industry is the second largest industrial sector in Germany in terms of employees, behind mechanical engineering. With an additional 811,000 employees abroad (2021), its value creation is highly networked globally. In 2023, the industry spent 22.1 billion euros on research and development and nine billion euros on investments.
Organisation
Its headquarters are in Frankfurt am Main. There are offices in Berlin and Brussels. The ZVEI is also represented by an office in Beijing through its EuropeElectro working group.
The association works with national trade associations and organisations, European industry and trade associations and international organisations. ZVEI is the second largest member of BDI, the Federation of German Industries. It is also a member of ORGALIM, the European umbrella organisation for the engineering industries. The ZVEI is also involved in the German TV Platform and the Industry 4.0 platform. The association is divided into 22 trade associations. The trade associations comprise all member companies that are active in the same market segment. A member may also belong to several trade associations due to its range of products and services. The ZVEI also maintains nine regional offices. They represent the interests of the electrical industry vis-à-vis the respective state governments.
President and management
Gunther Kegel has been ZVEI President since October 2020 and BDI Vice President since November 2020. Kegel is chairman of the management board of Pepperl+Fuchs. Wolfgang Weber has been Chairman of the ZVEI Management Board since 2020.
Industry initiative Licht.de
The Lighting Association in the ZVEI operates the industry initiative licht.de, which provides information about lighting and lighting technology as well as guidelines and standards that must be observed for professional lighting solutions. Among other things, the initiative ran campaigns accompanying the technological change from incandescent lamps to energy-efficient light sources during the implementation of the Ecodesign Directive. Licht.de informs consumers, professional users, planners and architects through traditional media work and online offerings. One focus is on educating people about future technologies such as Human Centric Lighting (HCL). The initiative operates an information portal on the Internet and publishes the "licht.wissen" series of publications, which currently comprises 21 titles. As a rule, each issue is dedicated to a specific lighting application: for example, in schools, hospitals or offices, but also in museums or streets. Cross-cutting topics such as LEDs and the effect of light on people are also covered. Current topics are also addressed in the licht.forum series and in other licht.de publications such as guidelines. The association founded the Fördergemeinschaft Gutes Licht (FGL) in 1970, which was renamed licht.de in 2007.
The website is officially home to the LED lead market initiative. It was founded by the Federal Ministry of Education and Research and has been continued by the Federal Ministry for the Environment since the beginning of 2012 with the aim of supporting the broad market launch of LEDs in Germany and reducing CO2 emissions.
Publications
ZVEI-Spotlights (Digital annual review)
ZVEI-Magazin Ampere
Publications of ZVEI
External links
ZVEI.org
Licht.de
Entry in the German Bundestag lobby register
References
Automation organizations
Electrical engineering organizations
1918 establishments
Business organisations based in Germany | Association of the Electrical and Digital Industry | [
"Engineering"
] | 746 | [
"Automation organizations",
"Electrical engineering",
"Electrical engineering organizations",
"Automation"
] |
77,782,087 | https://en.wikipedia.org/wiki/Haichemys | Haichemys is an extinct genus of turtles that lived during the Late Cretaceous period in what is now Mongolia. It was first described in 2006 and placed into the family Haichemydidae, of which it is the only genus. The validity of Haichemys has been questioned, with a study published in 2013 finding it likely that all known fossils of it actually represent hatchlings of Mongolemys elegans, though this cannot be conclusively proven until specimens preserving both the skull and shell are found.
Haichemys has been suggested to have had a durophagous diet, an inference drawn from the surrounding carbonaceous claystone, which was deposited in swamps and lakes.
References
Testudinoidea
Prehistoric turtle genera | Haichemys | [
"Biology"
] | 143 | [
"Animals",
"Animal stubs"
] |
77,782,321 | https://en.wikipedia.org/wiki/IRAS%2022036%2B5306 | IRAS 22036+5306, also known as 2MASS J22053028+5321327, is a protoplanetary nebula located in the constellation Cepheus, approximately 6,500 light-years from Earth.
The nebula was created when an aging star shed most of the material in its outer shell. The resulting gas cloud is heated by the still-burning hot core of the star. A torus consisting mainly of ejected material formed around the star. Two jets of material are ejected from the poles of the dying star, piercing the dusty curtain; these jets carry large amounts of material, amounting to tens of thousands of times the mass of Earth, at high speed.
The ejected dust now scatters light from the central star, reflecting part of it towards Earth. Soon, however, the central star will become a very hot white dwarf, whose intense ultraviolet radiation will ionize the gas and cause it to glow with multi-colored light. IRAS 22036+5306 will then have become a true planetary nebula, and the cooling star will begin the last stage of its life.
See also
List of protoplanetary nebulae
References
Protoplanetary nebulae
Cepheus (constellation)
IRAS catalogue objects | IRAS 22036+5306 | [
"Astronomy"
] | 261 | [
"Constellations",
"Cepheus (constellation)"
] |
77,782,419 | https://en.wikipedia.org/wiki/P%C4%81t%C4%ABga%E1%B9%87ita | Pātīgaṇita is the term used in pre-modern Indian mathematical literature to denote the area of mathematics dealing with arithmetic and mensuration. The term is a compound word formed by combining the words pātī and gaṇita. The former is a non-Sanskrit word meaning a "board" and the latter is a Sanskrit word meaning "science of calculation". Thus the term pātīgaṇita literally means the science of calculations which requires a board (on which dust or sand is spread out) for performing the calculations, or "board-computation" in short. The usage of the term became popular among authors of Indian mathematical works around the beginning of the seventh century CE. It may be noted that Brahmagupta (c. 598 – c. 668 CE) did not use this term; instead, he used the term dhūlīkarma (dhūlī is the Sanskrit term for dust). The terminology pātīgaṇita may be contrasted with "bījagaṇita", which denotes the area of mathematics referred to as algebra.
The term Pātīgaṇita is also the title of a work composed by Sridhara, an Indian mathematician who flourished during the 8th-9th century CE.
Topics discussed in pātīgaṇita
According to Brahmagupta, there are 20 operations (parikarma-s) and 8 determinations, also called logistics (vyavahāra-s), that come under pātīgaṇita. He states this in his Brahma-sphuṭa-siddhānta without specifying what they are. The commentators of the Brahma-sphuṭa-siddhānta have listed the following as the 20 operations and the 8 determinations.
Parikarma (Operations)
Vyavahāra-s (determinations/logistics)
Miśrakah (mixture): Computations involving mixtures of several things.
Sreḍhi (progression or series): A sreḍhi is that which has a beginning (first term) and an increase (common difference); the standard progression formulas are shown in modern notation after this list.
Kṣetram (plane figures): Calculations of the area of a figure having several angles.
Khātam (excavation): Finding the volumes of excavations.
Citih (stock): Computing the measure of a pile of bricks.
Krākacikah (saw): Finding the measure of the timber sawn.
Rāśih (mound): Calculations to find the amount of a heap of grain, etc.
Chāyā (shadow): Finding the time from the shadow of a gnomon, etc.
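As a worked illustration of the sreḍhi determination mentioned in the list above, the standard rules for an arithmetic progression with first term $a$, common difference $d$ and $n$ terms can be stated in modern notation as

$$\ell = a + (n-1)d, \qquad S_n = \frac{n}{2}\,(a + \ell) = \frac{n}{2}\,\bigl(2a + (n-1)d\bigr).$$

For example, a progression with $a = 2$, $d = 3$ and $n = 10$ has last term $\ell = 2 + 9 \times 3 = 29$ and sum $S_{10} = \tfrac{10}{2}(2 + 29) = 155$. The notation and the numerical example are modern illustrations; the rules themselves are of the kind treated under sreḍhi in pāṭīgaṇita texts such as Śrīdhara's.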
Works dealing with pāṭīgaṇita
The earliest surviving work dealing with the topics that come under pāṭīgaṇita is the Bakhshali manuscript, some portions of which have been carbon-dated to 224–383 CE. The following are the currently available texts dealing with arithmetic and mensuration. They may contain more material than the 20 operations and eight determinations listed as the topics of pāṭīgaṇita.
Gaṇita-sāra-sañgraha of Mahavira (850 CE)
Pātīgaṇita and Pātīgaṇita-sāra (or Trisātikā) of Śrīdharācarya
Gaṇita-tilaka of Srīpati (1039 CE) (incomplete)
Līlāvatī of Bhāskara II (1150 CE)
Gaṇita-kaumudī of Nārāyaṇa (1356 CE)
In these works one can see references to several older works, but none of them have survived to the present day. The lost works include Pātīgaṇita of Lalla (8th century CE) and Govindakṛti of Govindasvāmi (9th century CE).
The following astronomical treatises deal with arithmetic and mensuration in one of the chapters:
Brahma-sphuṭa-siddhānta of Brahmagupta (628 CE) (the twelfth chapter, entitled Gaṇitāddhyāya)
Mahā-siddhānta of Āryabhaṭa II (c. 950 CE) (the fifteenth chapter, entitled Pātīgaṇita)
Siddhānta-sekhara of Śrīpati (1039 CE) (the thirteenth chapter, entitled Vyakta-gaṇitāddhyāya)
Śrīdhara's Pāṭīgaṇita
In Indian mathematical literature, Śrīdhara is the only author who has composed a work titled Pāṭīgaṇita. He has composed another work titled Pāṭīgaṇita-sāra which is a short summary of his Pāṭīgaṇita. At the very beginning of the work, the author has listed the operations and the determinations that he is going to discuss in the work. According to Śrīdhara, there are 29 operations and nine determinations whereas Brahmagupta talks about only 20 operations and eight determinations. The operations specified in Śrīdhara's Pāṭīgaṇita are the following:
The first eight operations specified by Brahmagupata
These eight operations in respect of fractions
Six operations involving reductions of fractions
The five operations specified in items 12–17 in Brahmagupta's list
Bhāṇḑa-pratibhāṇḍa (barter of commodities)
Jīva-vikraya (sale of living beings)
The nine determinations specified by Śrīdhara are the eight determinations specified by Brahmagupta and śūnya-tatva (mathematics of zero).
Only one manuscript of Pāṭīgaṇita is currently available and it is incomplete. Discussions on some of the 29 operations and some of the nine determinations are missing from the extant manuscript.
Full texts of Śrīdhara's works
Full text of Śrīdhara's Pāṭīgaṇita is available at the Internet Archive.
Full text of Śrīdhara's Pāṭīgaṇita-sāra is available at the Internet Archive.
References
Indian mathematics
Arithmetic
Measurement | Pātīgaṇita | [
"Physics",
"Mathematics"
] | 1,151 | [
"Physical quantities",
"Quantity",
"Measurement",
"Size",
"Arithmetic",
"Number theory"
] |
77,784,514 | https://en.wikipedia.org/wiki/Fork%20cell | A fork cell, also known as a fork neuron, is a type of neuron found in the human brain, located in the anterior cingulate cortex (ACC) and frontoinsular cortex (FI). This type of neuron is characterized by a distinctive morphology: two primary apical dendrites, which give it a 'forked' appearance. Fork cells are found in humans and in a limited number of other species.
See also
Von Economo neuron
References
Neuroscience
Neuroanatomy
Neurophysiology
Cerebral cortex
Brain
Neurodevelopmental disorders
Neurons | Fork cell | [
"Biology"
] | 120 | [
"Neuroscience"
] |
77,784,938 | https://en.wikipedia.org/wiki/Tremella%20diaporthicola | Tremella diaporthicola is a species of fungus in the family Tremellaceae. It produces hyaline to pale grey, pustular, gelatinous basidiocarps (fruit bodies) and is parasitic on Diaporthe and similar species on dead branches of broad-leaved trees. It was originally described from the US and has also been recorded from Ukraine.
Taxonomy
The species was first published in 1935 by American mycologist Roy Whelden who placed it in the genus Sebacina. It was subsequently considered a synonym of Tremella tubercularia, which British mycologist Derek Reid later renamed Tremella globispora. Since the latter species has hyphae with clamp connections and the present species lacks clamp connections, Sebacina globispora was removed from the synonymy of Tremella globispora and given the new name Tremella diaporthicola in 1993.
Description
Fruit bodies are gelatinous, pustular, and hyaline (colourless) becoming greyish, up to 12 mm across. Microscopically, the hyphae lack clamp connections. The basidia are tremelloid (ellipsoid, with oblique to vertical septa), 4-celled, 15 to 20 by 12 to 16 μm. The basidiospores are globose, smooth, 7.5 to 8 μm in diameter.
Similar species
Tremella globispora, originally described from England but reported worldwide, is macroscopically very similar but differs microscopically in having hyphae with clamp connections. Most other Tremella species also have clamped hyphae.
Habitat and distribution
Tremella diaporthicola is a parasite on pyrenomycetous Diaporthe species on wood (Fraxinus (ash) in the original collection). It was described from Kentucky, but has also been reported from Ukraine on Diatrypella species on Quercus (oak).
References
diaporthicola
Fungi of North America
Fungi described in 1935
Parasitic fungi
Fungus species | Tremella diaporthicola | [
"Biology"
] | 437 | [
"Fungi",
"Fungus species"
] |
77,785,117 | https://en.wikipedia.org/wiki/Helen%20Diemer | Helen Diemer is an architectural lighting designer and the former president of The Lighting Practice, a lighting design firm based in Philadelphia. Diemer graduated from Pennsylvania State University with a degree in architectural engineering. She worked as an electrical engineer before joining The Lighting Practice in 1994.
Diemer previously served as President of the International Association of Lighting Designers. She contributed to the lighting industry through her involvement as chair of the International Association of Lighting Designers' Energy Committee and played a role in developing the lighting energy requirements of ASHRAE/IESNA Standard 90.1. Additionally, she was recognized with the inaugural SMPS Philadelphia Honoring Legends Award, which acknowledged her influential role in the industry. Diemer has also supported educational initiatives for students in architectural engineering, including contributions to the creation of student support funds.
In 2023, Diemer retired from her role as president of The Lighting Practice.
Notable Projects
Avenue of the Arts
Nemours/Alfred I. duPont Hospital for Children
Children's Hospital of Philadelphia Roberts Center for Pediatric Research
References
Living people
Year of birth missing (living people)
Pennsylvania State University alumni
21st-century American engineers
Electrical engineers | Helen Diemer | [
"Engineering"
] | 226 | [
"Electrical engineering",
"Electrical engineers"
] |
77,786,011 | https://en.wikipedia.org/wiki/Niall%20J.%20English | Niall J. English (born 29 March 1979) is an Irish inventor, industrialist, researcher, and chartered chemical engineer. He is the founding director of BioSimulytics and AquaB.
Early life and education
English was born on 29 March 1979 in Dublin, Ireland, to Michael and Catherine English. He grew up in Dublin and Brussels. He speaks Irish and French. English obtained a First-Class Honors degree in Chemical Engineering from University College Dublin in 2000 and won the Ferdinand de Lesseps medal in French as well as the Engineering Graduates' Association gold medal in his final year in 1999-2000. He completed his Ph.D. in 2003.
Career
During 2004–2005, English explored electric-field effects on gas hydrates at the National Energy Technology Laboratory, a U.S. DOE research facility in Pittsburgh. Between 2005 and 2007, English worked for Chemical Computing Group in Cambridge, Great Britain. During this time, he developed molecular simulation codes, protocols, and methods for biomolecular simulation.
In January 2007, he was hired as a lecturer at University College Dublin's School of Chemical and Bioprocess Engineering, and was promoted to senior lecturer in 2014. In 2017, he became a professor at the same school.
English's research focuses on nanoscience, energy, gas hydrates, solar and renewable energies, and the simulation of electromagnetic-field effects on (nano)materials and biological systems.
In 2019, he co-founded BioSimulytics and AquaB, serving as a director of both. Both companies are backed by the EIC Accelerator program.
In 2023, he took legal action against University College Dublin to prevent it from granting a commercialization license to rival companies. The case was settled out of court.
Publications
(2012b). Photo-induced charge separation across the graphene–TiO2 interface is faster than energy losses: A time-domain ab initio analysis. Journal of the American Chemical Society, 134/34: 14238–48. DOI: 10.1021/ja3063953
(2015). English, N. J., & Waldron, C. J. Perspectives on External Electric Fields in molecular simulation: Progress, prospects and challenges. Physical Chemistry Chemical Physics. The Royal Society of Chemistry.
(2003) English, N. J., & MacElroy, J. M. D. Molecular dynamics simulations of microwave heating of water. AIP Publishing.
(2007) Rosenbaum, E. J., English, N. J., Johnson, J. K., Shaw, D. W., & Warzinski, R. P. Thermal conductivity of methane hydrate from experiment and molecular simulation. The Journal of Physical Chemistry B, 111/46: 13194–205. DOI: 10.1021/jp074419o
(2005) English, N. J., Johnson, J. K., & Taylor, C. E. Molecular-dynamics simulations of methane hydrate dissociation. AIP Publishing.
(2014) English, N. J., & MacElroy, J. M. D. Perspectives on molecular simulation of clathrate hydrates: Progress, prospects and challenges. Chemical Engineering Science. Pergamon.
(2010) Long, R., & English, N. J. Synergistic effects on band gap-narrowing in Titania by codoping from first-principles calculations. Chemistry of Materials, 22/5: 1616–23. DOI: 10.1021/cm903688z
(2004) English, N. J., & MacElroy, J. M. D. Theoretical studies of the kinetics of methane hydrate crystallization in external electromagnetic fields. The Journal of Chemical Physics. U.S. National Library of Medicine.
(2003a) English, N. J., & MacElroy, J. M. D. Hydrogen bonding and molecular mobility in liquid water in external electromagnetic fields. AIP Publishing.
(2009) Long, R., & English, N. J. First-principles calculation of nitrogen-tungsten codoping effects on the band structure of anatase-titania. AIP Publishing.
References
Chemical engineers
1979 births
Living people
Irish inventors
Members of the Royal Irish Academy | Niall J. English | [
"Chemistry",
"Engineering"
] | 899 | [
"Chemical engineering",
"Chemical engineers"
] |
77,787,873 | https://en.wikipedia.org/wiki/Civil%20basilica | In antiquity, a civil basilica was a grand public building with a semi-sacred significance, serving a variety of purposes. These structures were commonly used for court hearings, public assemblies, and, at times, for commercial activities such as shops and financial transactions.
The architectural style of the basilica, known for its expansive covered space, originated in Ancient Greek architecture and was later adopted and enhanced in Roman architecture, becoming a distinctive feature of Roman cities.
Unlike Christian basilicas, ancient basilicas did not serve religious functions.
Origins and etymology
The word "basilica", derived from the Latin term basilica, originates from two Greek elements: basileus, meaning "king", and the feminine adjective suffix -ikê. The full Greek expression is basilika oikia, which translates to "royal hall". This was traditionally a place where the king or his representatives would grant public audiences and dispense justice, and which served as a venue for public assemblies.
The concept is related to the Greek stoa, a public covered space designed to shelter various activities from the weather. Over time, the stoa acquired more specialized functions, such as the stoa basileios in Athens, which served as the seat of the archon-king. These structures were typically enclosed at the back by a solid wall and opened onto the public space (the agora) at the front through a portico with a colonnade.
Roman Basilica
The Roman basilica, which emerged in the 2nd century BC, was inspired by and named after the Greek stoa basileios. The development of the Roman basilica followed a path similar to that of the Greek stoa. Initially designed as a public space providing shelter from the weather, the basilica evolved to serve specific functions, particularly in the administration of justice. All Roman basilicas were used for legal proceedings. For example, in Rome, the tribunes of the plebs held their hearings in the Basilica Porcia, while the Centumvirs court met in the Basilica Julia. By the early 2nd century BC, this type of building, which provided a spacious and sheltered open area, became a significant feature in Roman cities, with most courts across the Empire utilizing it.
Every well-developed Roman city had a basilica, typically situated next to the forum. Some basilicas were associated with shops (tabernae), which opened either onto the exterior (as seen with the Basilica Aemilia, or tabernae novae) or onto the interior (as seen with the Basilica Julia). These shops may have been used by bankers and pawnbrokers.
Typical plan
The typical floor plan of a Roman basilica is rectangular, with at least one end featuring an apse, a semi-circular or polygonal recess often used as a court or to house a statue of the Roman emperor. A basilica with an apse at each end is known as a double-apse basilica. The apses, or exedras, may be incorporated within the rectangular plan or extended outward, as seen in the Basilica Ulpia.
The interior of a basilica is divided into multiple naves by rows of single or double columns. The central nave, known as the spatium medium, is the widest and extends nearly the full length of the rectangular plan. It is flanked by side naves—one on each side for basilicas with three naves, or two on each side for those with five naves. These side naves are narrower, and sometimes lower, than the central nave but are of equal length. The interior space may be covered by either a wooden framed ceiling or a vaulted ceiling supported by pillars. The central nave is typically taller than the side naves, allowing for the installation of windows in the upper part of the walls, which provides natural light to the interior. In larger basilicas, the ground floor arcade is often complemented by a second or even a third level of colonnades that support the windowed walls. The side naves are sometimes topped with an additional story, creating a gallery that overlooks the central space.
Basilicas in Rome
The initial basilicas constructed in Rome during the 2nd century BC were influenced by Greek architectural models, reflecting the impact of Roman campaigns in Macedonia and Syria. The first small basilica was built on the Roman Forum, later occupied by the southern section of the Basilica Aemilia. This earliest structure, dating from the end of the 3rd century BC, is not specifically named but is referred to as a basilica by ancient authors.
Between 184 and 170 BC, the Porcia, Aemilia, and Sempronia basilicas were constructed around the Forum, each named after the censor who commissioned its construction. These basilicas were adorned with various artworks obtained from conquered territories. By the mid-5th century AD, Polemius Silvius listed eleven basilicas in Rome, highlighting the architectural and cultural significance of these structures in the city.
Christian basilicas
The floor plan of the Roman civil basilica served as a model for the construction of the first Christian churches in late Antiquity. This influence is evident in the continued use of the term "basilica" to designate certain churches from the time of Constantine onward. Today, the term "basilica" is still used for religious buildings of significant importance that, while not functioning as cathedrals, are granted special privileges.
References
Bibliography
Architecture
Ancient Greece
Ancient Rome | Civil basilica | [
"Engineering"
] | 1,085 | [
"Construction",
"Architecture"
] |
77,788,035 | https://en.wikipedia.org/wiki/Michael%20Rout | Michael P Rout is a molecular and cellular biologist. He is the George and Ruby deStevens Professor and Head of the Laboratory of Cellular and Structural Biology at The Rockefeller University, as well as the Director of the National Center for Dynamic Interactome Research (NCDIR).
Rout's research focuses on the assembly and interactions of protein complexes in cells and their disease-related alterations. His particular focus is on the nuclear pore complex (NPC); collectively, his work and that of his colleagues have rationalized the architecture, transport mechanisms, and evolutionary origins of the NPC, and have helped explain why defects in the NPC contribute to the etiology of several diseases. Expanding the scope of his research, he established the NCDIR with support from the National Institutes of Health (NIH). He has received several awards for his work including the Max Perutz Student Prize by MRC Laboratory of Molecular Biology in 1989, Irma T. Hirschl Career Scientist Award by Icahn School of Medicine at Mount Sinai in 1999, Rita Allen Foundation Scholarship by the Rita Allen Foundation in 2000, Presidential Early Career Award for Scientists and Engineers (PECASE) by the National Science and Technology Council (NSTC) in 2001, Distinguished Teaching Award by The Rockefeller University in 2018, and the Emerging Leader Award by Bay Area Lyme Foundation in 2021.
Rout has been part of the International Scientific Advisory Board of the Wellcome Trust Centre for Cell Biology in Edinburgh.
Education
Rout graduated from Peterhouse, University of Cambridge, where he pursued his undergraduate studies from 1983 to 1986, earning a B.A. (Hons) in Zoology, and then obtaining an M.A. (Hons) in Zoology. From 1986 to 1989, he worked under the supervision of J.V. Kilmartin at the MRC Laboratory of Molecular Biology in Cambridge, completing his Ph.D. work on "The Structure and Function of the Spindle Pole Body of the Yeast, Saccharomyces".
Career
After completing his PhD, Rout worked as a Scientific Officer at the MRC Laboratory of Molecular Biology in Cambridge from 1989 to 1990. He then conducted research as a Jane Coffin Childs Postdoctoral Fellow from 1990 to 1993 at The Rockefeller University with his supervisor Günter Blobel, focusing on the isolation and characterization of the yeast NPC.
Rout served as a Howard Hughes Medical Institute Research Associate, working on the characterization of the yeast NPC and nuclear envelope from 1993 to 1997. In 1997, he started his independent laboratory at The Rockefeller University as Assistant Professor and Head of the Laboratory of Cellular and Structural Biology, becoming Associate Professor by 2002 and Professor by 2008. In 2021, he was appointed the George and Ruby deStevens Professor at The Rockefeller University.
Research
Rout's research has explored how nuclear pore complexes (NPCs) mediate the transport of molecules in and out of the nucleus, thereby controlling the communication of the cell's DNA with the rest of the cell and organizing the nucleus; defects in nuclear transport and NPC components are implicated in numerous diseases. He examined the molecular architecture of the yeast NPC and the mechanism of its selective transport barrier, and shed light on its evolutionary origins. In his related research on the integrative structure and functional anatomy of the NPC, he elucidated the complete architecture of the yeast NPC. This revealed its organizational framework, incorporating robust columns, flexible connector cables, and inwardly directed anchors crucial for RNA and protein transport. He also provided a comprehensive classification of the molecular components of the yeast NPC, mapping its architecture and suggesting a virtual gating mechanism for nucleocytoplasmic transport. Additionally, he worked on a method for determining the structures of macromolecular assemblies using proteomic data, demonstrated on the NPC, and suggested its applicability to other assemblies. He also looked into how the NPC, beyond regulating molecular traffic between the cytoplasm and nucleus, plays a crucial role in gene expression and the organization of nuclear architecture.
Rout's lab has also collaborated and developed interactomic technology, which maps and analyzes the dynamic macromolecular interactions in cells. He has collaborated to apply this technology to numerous disease models. Of note, he has helped develop pipelines to generate nanobodies, small robust single domain antibodies derived from camelids that can be targeted with high specificity against almost any antigen.
Awards and honors
1989 – Max Perutz Student Prize, MRC Laboratory of Molecular Biology
1999 – Irma T. Hirschl Career Scientist Award, Icahn School Of Medicine At Mount Sinai
2000 – Rita Allen Foundation Scholarship, Rita Allen Foundation
2001 – Presidential Early Career Award for Scientists and Engineers (PECASE), National Science and Technology Council (NSTC)
2014 – Research Award, Jain Foundation
2018 – Distinguished Teaching Award, Rockefeller University
2021 – Emerging Leader Award, Bay Area Lyme Foundation
Selected articles
Rout, M. P., Aitchison, J. D., Suprapto, A., Hjertaas, K., Zhao, Y., & Chait, B. T. (2000). The yeast nuclear pore complex: composition, architecture, and transport mechanism. The Journal of cell biology, 148(4), 635–652.
Devos, D., Dokudovskaya, S., Alber, F., Williams, R., Chait, B. T., Sali, A., & Rout, M. P. (2004). Components of coated vesicles and nuclear pore complexes share a common molecular architecture. PLoS biology, 2(12), e380.
Alber, F., Dokudovskaya, S., Veenhoff, L. M., Zhang, W., Kipper, J., Devos, D., ... & Rout, M. P. (2007). The molecular architecture of the nuclear pore complex. Nature, 450(7170), 695–701.
Kim, S. J., Fernandez-Martinez, J., Nudelman, I., Shi, Y., Zhang, W., Raveh, B., ... & Rout, M. P. (2018). Integrative structure and functional anatomy of a nuclear pore complex. Nature, 555(7697), 475–482.
Akey, C. W., Singh, D., Ouch, C., Echeverria, I., Nudelman, I., Varberg, J. M., ... & Rout, M. P. (2022). Comprehensive structure and functional adaptations of the yeast nuclear pore complex. Cell, 185(2), 361–378.
References
Molecular biologists
Cell biologists
Alumni of the University of Cambridge
Rockefeller University faculty
Year of birth missing (living people)
Living people | Michael Rout | [
"Chemistry"
] | 1,437 | [
"Biochemists",
"Molecular biology",
"Molecular biologists"
] |
77,788,330 | https://en.wikipedia.org/wiki/John%20Klironomos | John Klironomos is a plant and microbial ecologist and academic who is a Professor of Biology at the American University of Sharjah (AUS).
Klironomos's research focuses on the causes and consequences of plant and fungal diversity in terrestrial ecosystems. His work spans various subjects, including the feedback between plant and fungal populations, the impact of soil microbes on plant diversity and ecosystem services, and the potential for microbes to boost productivity in agriculture and forestry, as well as to restore disturbed ecosystems.
Klironomos has received the Humboldt Research Award and the Soil Ecology Society Professional Achievement Award. He is a Fellow of the American Association for the Advancement of Science and Royal Society of Canada, Academy of Science.
Education
Klironomos earned his B.Sc. in Biology from Concordia University in 1990 and completed his Ph.D. in the same field from the University of Waterloo in 1994 under the supervision of mycologist W. Bryce Kendrick. From 1994 to 1996, he served as a Postdoctoral Fellow at San Diego State University.
Career
After completing his postdoctoral studies, Klironomos began his academic career as an assistant professor at the University of Guelph and was later appointed as an associate professor, a role he held until 2006. Concurrently, he was a Bullard Research Fellow at Harvard University and held a Canada Research Chair at the University of Guelph, where he was promoted to Professor in 2006. He also served as a visiting professor at Université Paul Sabatier in France and received a Humboldt Research Fellowship at Free University Berlin in 2008. Transitioning to the University of British Columbia, Okanagan Campus (UBC Okanagan), he briefly served as Director of the Institute for Species at Risk and Habitat Studies from 2012 to 2013 and as Associate Dean of Research at the Barber School of Arts and Sciences from 2013 to 2015. After retiring as a professor of biology at UBC Okanagan, he has served as a professor of biology at AUS since 2023.
Klironomos was President of the International Soil Ecology Society and the vice-president and President Elect of the International Mycorrhizal Society from 2017 to 2020.
Research
Klironomos' research focuses on the ecological roles of soil microbes, particularly mycorrhizal fungi, in plant health and ecosystem sustainability. His work explores how these interactions can be harnessed to improve ecosystem resilience, enhance food security, and promote biodiversity in the face of environmental changes.
Mycorrhizal fungal dynamics
Klironomos and his group underscored the role of mycorrhizal fungi in shaping plant community dynamics and ecosystem structure. He demonstrated that arbuscular mycorrhizal fungi (AMF) diversity is crucial for maintaining plant biodiversity and ecosystem functioning, showcasing how higher AMF diversity enhances plant biodiversity. Building on this, they revealed that AMF modulate the relationship between plant diversity and productivity, showing a positive but asymptotic trend that suggests AMF improve nutrient utilization and contribute to species redundancy. Further exploration revealed that plant growth responses to mycorrhizal fungi exhibit significant variability from parasitic to mutualistic interactions especially when locally adapted plant and fungal species are used. His group then proposed applying a life history classification framework to arbuscular mycorrhizal fungi, akin to Grime's C-S-R model for plants, to better understand their functional diversity, successional dynamics, and species associations. In a 2017 study, they indicated that mycorrhizal type significantly affects plant-soil feedbacks, with arbuscular mycorrhizal trees showing negative feedback and strong conspecific inhibition, while ectomycorrhizal trees exhibited positive feedback and conspecific facilitation.
Studies by Klironomos and colleagues on the invasive plant Alliaria petiolata demonstrated how it disrupts mutualistic mycorrhizal associations, leading to suppressed native plant growth and facilitating its own invasion into North American forests. Furthermore, they revealed that Alliaria petiolata has a more pronounced inhibitory effect on mycorrhizal fungi and native plants in North America compared to Europe, attributed to specific flavonoid compounds. In a collaborative meta-analysis, they found that plant responses to mycorrhizal inoculation are most affected by host plant type and nitrogen fertilization, with variations dependent on soil complexity, plant functional groups, and nutrient limitations.
Plant-soil microbe interactions
Klironomos and co-researchers revealed that plant-soil microbe interactions greatly influence species abundance in plant communities, with rare plants suffering from pathogen accumulation and invasive plants benefiting from mycorrhizal fungi. Demonstrating the challenges and limitations in methods for studying soil microbial diversity, they stressed the importance of accurate assessment techniques for linking microbial diversity to ecosystem functions. By integrating microbial interactions into plant community ecology, they expanded models of niche and feedback, emphasizing soil microbes' role in plant dynamics. Further work showed that soil microbes affect the diversity–productivity relationship in grasslands, where increased plant diversity reduces disease and enhances productivity, highlighting microbial interactions over niche complementarity. In reviewing plant-soil feedbacks, they emphasized their role in community dynamics, invasions, and climate change responses, advocating for more integrated, long-term studies to improve ecological predictions and management.
Ecosystem integration
Klironomos and colleagues have highlighted the importance of integrating both aboveground and belowground perspectives in developing effective strategies for ecosystem management and restoration. They demonstrated how aboveground and belowground components of ecosystems are intricately linked, with both influencing community and ecosystem processes through complex feedbacks. Highlighting the significant impact of exotic plant invasions on soil communities, they stressed the need for a thorough understanding of aboveground and belowground interactions to effectively manage and restore invaded ecosystems.
Awards and honors
2007 – Humboldt Research Award, Alexander von Humboldt Foundation
2013 – Professional Achievement Award, International Soil Ecology Society
2013 – Fellow, American Association for the Advancement of Science
2014 – Fellow, Royal Society of Canada, Academy of Science
Selected articles
Van Der Heijden, M. G., Klironomos, J. N., Ursic, M., Moutoglis, P., Streitwolf-Engel, R., Boller, T., ... & Sanders, I. R. (1998). Mycorrhizal fungal diversity determines plant biodiversity, ecosystem variability and productivity. Nature, 396(6706), 69–72.
Klironomos, J. N., & Hart, M. M. (2001). Animal nitrogen swap for plant carbon. Nature, 410(6829), 651–652.
Klironomos, J. N. (2002). Feedback with soil biota contributes to plant rarity and invasiveness in communities. Nature, 417(6884), 67–70.
Klironomos, J. N. (2003). Variation in plant response to native and exotic arbuscular mycorrhizal fungi. Ecology, 84(9), 2292–2301.
Wardle, D. A., Bardgett, R. D., Klironomos, J. N., Setälä, H., Van Der Putten, W. H., & Wall, D. H. (2004). Ecological linkages between aboveground and belowground biota. Science, 304(5677), 1629–1633.
Klironomos, J. N., Allen, M. F., Rillig, M. C., Piotrowski, J., Makvandi-Nejad, S., Wolfe, B. E., & Powell, J. R. (2005). Abrupt rise in atmospheric CO2 overestimates community response in a model plant–soil system. Nature, 433(7026), 621–624.
Maherali, H., & Klironomos, J. N. (2007). Influence of phylogeny on fungal community assembly and ecosystem functioning. Science, 316(5832), 1746–1748.
Pringle, A., Bever, J. D., Gardes, M., Parrent, J. L., Rillig, M. C., & Klironomos, J. N. (2009). Mycorrhizal symbioses and plant invasions. Annual Review of Ecology, Evolution, and Systematics, 40(1), 699–715.
Fraser, L. H., Pither, J., Jentsch, A., Sternberg, M., Zobel, M., Askarizadeh, D., ... & Zupo, T. (2015). Worldwide evidence of a unimodal relationship between productivity and plant species richness. Science, 349(6245), 302–305.
Bennett, J. A., Maherali, H., Reinhart, K. O., Lekberg, Y., Hart, M. M., & Klironomos, J. (2017). Plant-soil feedbacks and mycorrhizal type influence temperate forest population dynamics. Science, 355(6321), 181–184.
References
21st-century scholars
Ecologists
Academic staff of the American University of Sharjah
Academic staff of the University of British Columbia
Academic staff of the University of Guelph
Concordia University alumni
University of Waterloo alumni
Living people
Year of birth missing (living people) | John Klironomos | [
"Environmental_science"
] | 2,003 | [
"Ecologists",
"Environmental scientists"
] |
77,788,606 | https://en.wikipedia.org/wiki/Rafael%20Villaverde | Rafael Villaverde was a Cuban-born exile living in the United States and a veteran of the Bay of Pigs (Playa Giron). He was a soldier and spy, who worked for the United States Army and the Central Intelligence Agency. Villaverde was a staunch anticommunist, a vocal opponent of the Castro regime, and anti-Castro activist (activista anticastra). Villaverde had been deployed to Laos, Vietnam, Cambodia, and elsewhere.
Villaverde was a member of Brigade 2506, which was a brigade of Cuban exiles who took part in the invasion of Cuba during the Bay of Pigs. Villaverde's unit was captured, and he was held in detention by the Castro regime. After Villaverde was released from Cuban prison, his vocal opposition to Castro increased, and he became a well known figure in the Cuban community in Miami.
After the failed invasion, the CIA settled Villaverde into a job with the United Fund, where he learned the operating methods of social services and charities.
In 1972, Villaverde and his brothers opened the first cafe for the elderly on Calle Ocho. Villaverde's main two donors for this cafe were Claude Pepper and Maurice Ferré. It was around this time that he recruited Josefina Carbonell to help him run the cafe. This cafe eventually transformed into the Little Havana Activities and Nutrition Center in Little Havana, Miami, which was both a charity, and an alleged extremist anti-Castro terrorist group front.
Later in life, Villaverde and his brothers were accused, as a result of a federal narcotics strike force sting operation called Operation Tick Talk, of having been members of a vast Cuban-American drug smuggling ring. Forty-three members were captured and arrested after officers from the Miami Police Department and federal agents from the Strike Force planted listening devices in the houses of the Tick Talk targets. It was further discovered that the Villaverde Brothers were using the name of the "Gris Brothers" to smuggle drugs.
He was also accused of having interactions with Edwin P. Wilson, a former CIA officer who smuggled arms to Muammar Gaddafi and the Libyan government. It was further alleged that he had been recruited by former officers of the CIA to assassinate Gaddafi. Shortly after he had agreed to testify against Wilson, his fishing boat mysteriously exploded in the Gulf of Mexico with him still on board. His body was never recovered.
Others connected with Edwin Wilson later died in suspicious circumstances, including Waldo H. Dubberstein and Kevin Mulcahy. Dubberstein, after failing to appear in court, was found dead of an apparent suicide. Mulcahy was found to have died of natural causes. However, in both cases some federal investigators suspected they might have been murdered.
Villaverde was cleared posthumously when the judge in the trial of Operation Tick Talk threw out the case on the grounds that the listening devices were placed illegally.
In 2002, Villaverde's brother Jorge Villaverde was murdered in a drive-by shooting while he was taking out the trash.
Possible survival
If Rafael did not die at sea, it is possible that he was later involved in the Iran–Contra affair, as the "Gris Brothers" were identified in a federal court as men involved in the affair. One plaintiff in the court trial specifically identified Rafael Villaverde as being involved in CIA activities in South America and the Caribbean some time after his boat exploded in the Gulf. Rafael was further noted by this source as having been deployed to both Iran and Libya.
References
People of the Central Intelligence Agency
American drug traffickers
Cold War history of Cuba | Rafael Villaverde | [
"Chemistry"
] | 733 | [
"Deaths from explosion",
"Explosions"
] |
77,789,304 | https://en.wikipedia.org/wiki/Ammonium%20hexafluoroarsenate | Ammonium hexafluoroarsenate is an inorganic chemical compound with the chemical formula .
Synthesis
Arsenic pentoxide is mixed with an excess of ammonium fluoride; the mixture is fused to produce ammonium hexafluoroarsenate:
Also, reaction of arsenic trifluoride, hydrofluoric acid, and ammonia:
Treatment of hexafluoroarsenic acid with ammonia:
HAsF6 + NH3 → NH4AsF6
Physical properties
Ammonium hexafluoroarsenate crystallizes in a rhombohedral structure type, with parameters: a = 7.459(3) Å, c = 7.543(3) Å (at 200 K), Z = 3, unit cell volume 363.4 Å³, space group R3̄ (No. 148).
References
Fluoro complexes
Arsenates
Ammonium compounds
Fluorometallates
Hexafluorides | Ammonium hexafluoroarsenate | [
"Chemistry"
] | 187 | [
"Ammonium compounds",
"Salts"
] |
77,790,233 | https://en.wikipedia.org/wiki/Avi%20Kivity | Avi Kivity () is a software engineer who created the Kernel-based Virtual Machine (KVM) hypervisor underlying many production clouds. Following his work on KVM, Kivity developed the Seastar framework and the ScyllaDB database. He co-founded the company ScyllaDB with Dor Laor; Kivity is CTO and an active project contributor.
Career
Kivity began the development of KVM at Qumranet in 2006. After Red Hat acquired Qumranet in 2008, Kivity joined Red Hat and continued as the lead developer and maintainer of KVM.
After leaving Red Hat in 2012, Kivity co-founded a company called Cloudius Systems with Dor Laor. Cloudius developed the OSv operating system for the cloud. While at Cloudius, Kivity created the Seastar framework, an open-source (Apache 2.0 licensed) C++ framework for I/O intensive asynchronous computing. Seastar later became the foundation for high performance distributed systems such as ScyllaDB, Redpanda, and Ceph.
In mid-2014, Cloudius Systems was renamed to ScyllaDB, after its main product which is used for high-throughput database workloads that require low latencies. (Forbes) Kivity serves as the company's chief technology officer and contributes to the source code development of ScyllaDB as well as Seastar.
Patents
Kivity has been granted patents for technologies implemented in KVM and ScyllaDB:
Asynchronous input/output (I/O) using alternate stack switching in kernel space (8850443)
Delivery of events from a virtual machine to host CPU using memory monitoring instructions (9256455)
Delivery of events from a virtual machine to a thread executable by multiple host CPUs using memory monitoring instructions (9489228)
Detection of guest disk cache (9354916)
Event signaling in virtualized systems (9830286)
Heat-based load balancing (11157561)
Injecting interrupts in virtualized computer systems (9235538)
Interprocess communication (9075795)
Managing device access using an address hint (9575787)
Mechanism for automatic adjustment of virtual machine storage (8244956)
Mechanism for memory state restoration of virtual machine (VM)-controlled peripherals at a destination host machine during migration of the VM (8356120)
Mechanism for out-of-synch virtual machine memory management optimization (8560758)
Memory change tracking during migration of virtual machine (VM) with VM-controlled assigned peripherals (9104459)
Memory state transfer of virtual machine-controlled peripherals during migrations of the virtual machine (8924965)
MSI events using dynamic memory monitoring (10078603)
On-demand hypervisor memory mapping (9342450)
Optimistic interrupt affinity for devices (9003094)
Optimization of operating system and virtual machine monitor memory management (10761957)
Pessimistic interrupt affinity for devices (9201823)
Policy enforcement by hypervisor paravirtualized ring copying (9904564)
Virtual machine wakeup using a memory monitoring instruction (9489223)
References
See also
Kernel-based Virtual Machine (KVM)
ScyllaDB
External links
kvm: the Linux Virtual Machine Monitor (Proceedings of the Linux Symposium, 2007)
Keynote on KVM progress (Red Hat, KVM Forum, 2011)
OSv— Optimizing the Operating System for Virtual Machines (Proceedings of USENIX ATC ’14, 2014)
ScyllaDB Optimizes Database Architecture to Maximize Hardware Performance (IEEE Software, 2019)
Building efficient I/O intensive applications with Seastar (Core C++, 2019)
No-Compromise Performance (Carnegie Mellon University Database Group, 2019)
How a Database Looks from a Disk’s Perspective (P99 CONF, 2022)
Living people
Free software programmers
Linux kernel programmers
Linux people
Open source people
People in information technology
Year of birth missing (living people) | Avi Kivity | [
"Technology"
] | 896 | [
"People in information technology",
"Information technology"
] |
77,790,553 | https://en.wikipedia.org/wiki/List%20of%20bi-metallic%20coins%20by%20release%20date | This list includes discontinued and commemorative bi-metallic coins minted since 1982.
Italy with the 500 Lira in 1982
Andorra with the 2 Diners in 1985
Morocco, with its 5-dirhams coin in 1987;
France, with a 10-francs coin in 1988;
Monaco, with a 10 francs in 1988,
Thailand, with a 10 baht, in 1988;
Mexico with the 100 and 1000 Pesos in 1989
Monaco with the 10 Francs in 1989
China with the 50 Yuan in 1990, 25 Yuan coin in 1992, a 10 Yuan coin in 1994, and a 500 Yuan coin in 1995
Moldova with the 5 and 10 Lei in 1991,
Portugal in 1991 with the 200 Escudos:
Algeria with the 10, 20, and 50 Dinar in 1992
Azerbaijan with the 50 Qəpik in 1992
Iran with the 250 Rials in 1993 and the 500 Rials in 2003
Bahrain with the 100 fil coin in 1992 and the 500 Fils in 2000
The Czech Republic, with a 50 Kč coin in 1993;
Hong Kong, with a $10 coin, in 1993;
Indonesia, with a Rp 1,000 coin, in 1993;
Czech Republic with the 50 Korun in 1993
Colombia with the 500 Pesos in 1993,
Hong Kong with a 10 Dollar coin in 1993
Finland in 1993 with the 10 Markaa coin,
Australia with commemorative 5, 10, 20, 25, 50, 75, and 100 Dollar coins, the 5 Dollar being the first in 1994
Argentina with the 1 Peso coin in 1994,
Kenya with the 10 Shillings in 1994;
Cape Verde with a 100 Escudo coin in 1994, a 250 Escudos in 2013 and a 200 Escudos in 20
Cambodia with the 500 Riels in 1994
New Zealand with the commemorative 50 Cent coin in 1994:
Gibraltar with the 4.2 ECU and 2 Pounds in 1994 and 1996
Israel, with a ₪10 coin in 1995;
The Isle Of Man with the 1/4 Angel, Noble and Crown in 1995
Lesotho in 1995 with the 5 Maloti
Canada, with a $2 coin (nicknamed "toonie") in 1996;
Ecuador with the 100, 500, and 1000 Sucres coin in 1996:
Hungary, with a 100-Forint coin in 1996 and a 200 Forint coin in 2009;
Macau with the 100 Patacas in 1997
The United Kingdom of Great Britain and Northern Ireland has issued a bi-metallic £2 coin since 1997, and a bi-metallic £1 coin since March 2017;
El Salvador with the 5 Colones in 1997
Jordan with the 1/2 dinar in 1997
Croatia with the 25 Kuna in 1997
The Cook Islands with the $50 Coin in 1997, a $150 in 2005, a $1 in 2010, and a $100 in 2014.
Latvia with the 2 Lati in 1999
Falkland Islands with the 2 Pounds in 1999 and 2000
Cuba with the 500 Pesos in 2000, and a 5 Peso coin in 2016
Bosnia and Herzegovina with the Convertible Mark in 2000
Jamaica with the 20 Dollars in 2000:
Botswana with the 5 Pula in 2000, and the 2 Pula in 2013
Albania with the 100 Leke in 2000
Georgia with the 10 Lari in 2000 and the 2 Lari in 2006
The Bahamas with the 2 Dollar coin in 2000
The United States of America with the $10 Library of Congress in 2000
Chile with the 100 and 500 pesos in 2000:
The Philippines has minted a bi-metallic 10-peso coin from 2000 to 2017 and a 20-peso coin since 2019;
Bolivia with the 5 Bolivianos in 2001 and the 1000 Bolivianos in 2005;
Bhutan has minted the 2000 Ngultrums in 2002
Alderney with the 50 Pounds in 2002
Bulgaria with the 1 Lev in 2002 and the 2 Leva in 2015
Kazakhstan in 2002 with the 100 and 200 Tenge
The Eurozone circulated the €1 and €2 coins on 1 January 2002;
Ethiopia with the 1 Birr in 2002
Armenia with the 500 Dram in 2003
Saint Helena and Ascension with the 2 Pound coin in 2003
British Virgin Islands with the 75 Dollar coin in 2004 and 2007
North Korea with the 1 Won in 2004
Dominican Republic with the 5 and 10 Pesos in 2005:
Egypt with the 1 Pound coin in 2005
South Africa with the 5 Rand coin in 2005:
Ghana with a 1 Cedi coin in 2007, and a 2 Cedis coin in 2019
British territories such as Stoltenhoff Island, Nightingale Island, and Tristan da Cunha with the 25-Pence in 2008,
Belarus with the 2 Roubles in 2009 and the Commemorative 20 Roubles in 2016
India has issued a bi-metallic ₹10 coin since 2009 and a bi-metallic ₹20 coin since 2019;
Turkey issued the 50 Kuruş and the 1 Lira in 2009,
Andaman and Nicobar has minted the Ten and Twenty Rupees as part of a Limited-Edition Release in 2011;
Angola with the 5 and 10 Kwanzas in 2012 and the 20 Kwanzas in 2014
Micronesia with a 5 Dollar coin in 2012:
Djibouti with the 250 Francs coin in 2012
Singapore has issued a bi-metallic 1-dollar coin since 2013;
Comoros with the 250 francs in 2013
Russia with the Commemorative 3, 5, and 10 'North Pole' Roubles since 2014,
Somaliland (partially recognized state) with the 2500 Shillings to commemorate the queen of Ghana in 2016;
Tokelau with the 1 Dollar coin in 2017
Chad with a 1000 franc coin in 2019
Japan has issued a bi-metallic 500 yen coin since 2021:
Costa Rica Minted a bi-metallic 500-colones coin in November 2021
French Pacific Territories with a 200 Francs coin in 2021
British Indian Ocean Territories with the 1 and 2 Pound coin in 2021
Abkhazia (unrecognized state) with the 1 Aspar in 2022
Benin with the commemorative 500 Francs in 2022
The Vatican City Released a 5 Euro Coin in June of 2024
References
List
Lists of coins | List of bi-metallic coins by release date | [
"Chemistry"
] | 1,218 | [
"Bi-metallic coins",
"Bimetal"
] |
77,791,358 | https://en.wikipedia.org/wiki/Atri%27s%20Eclipse | Atri's Eclipse is a total solar eclipse mentioned in the Indian text Rigaveda. It has been claimed by some modern astronomical scholars to be the earliest reference of the solar eclipse mentioned in any historical astronomy of the world. The claim for the earliest reference of the total solar eclipse was published in a paper by the Journal of Astronomical History and Heritage.
Etymology
The Rigveda does not itself use the term Atri's Eclipse; the term was coined by modern scholars to identify the total solar eclipse described in the Rigveda by the sage Atri in the form of poetic hymns.
The Indian scholar Bal Gangadhar Tilak, in his commentary on Vedic literature, used the term Atri's Eclipse to identify the total solar eclipse in the Rigveda mentioned by the Vedic sage Atri. Similarly, Robert Garfinkle discussed Atri's Eclipse in his book Luna Cognita.
Background
The Rigveda contains a story of the sage Atri, who defeated the asura Swarbhanu to liberate the Sun from a total solar eclipse. According to the story, the Sun suddenly disappeared during the day under the influence of the asura, frightening the people left in darkness; the sage Atri then defeated Swarbhanu and restored the Sun's glory. The language of the Rigveda is highly symbolic, with hidden meanings that make it difficult to read as a record of historical events; nevertheless, the disappearance of the Sun in the story has been interpreted by astronomers as a total solar eclipse.
Chapter 24, verse 3 of the Sankhyayana Brahmana of the Rigveda mentions the location of the rising Sun during the spring equinox. One passage describes the spring equinox occurring in Orion, and another describes it occurring in the Pleiades. These descriptions have been used as the basis for calculating the dates of Atri's Eclipse.
Description
According to the paper published in the Journal of Astronomical History and Heritage, the Indian astronomer Mayank Vahia of the Tata Institute of Fundamental Research and the Japanese astronomer Mitsuru Soma of the National Astronomical Observatory of Japan identified the reference by the Vedic sage Atri in the Rigveda as the earliest record of a total solar eclipse. According to Tilak's interpretation, the eclipse occurred when the vernal equinox was in Orion, three days before the autumnal equinox. Vahia and Soma have dated the eclipse to 22 October 4202 BC or 19 October 3811 BC.
The astronomers have also argued that the story of Atri's Eclipse is distinct from, and older than, the familiar stories of Rahu and Ketu that explain eclipses in Hindu mythology.
References
History of astronomy
Indian astronomy texts | Atri's Eclipse | [
"Astronomy"
] | 580 | [
"History of astronomy"
] |
62,421,646 | https://en.wikipedia.org/wiki/Mezcalera%20Ocean | The Mezcalera Ocean is an inferred ancient ocean preserved in rocks in western Mexico. The Mezcalera oceanic plate was likely subducted and consumed into the mantle allowing the Guerrero Terrane to be accreted to western Mexico in the Early Cretaceous.
Speculative reconstructions suggest that Mezcalera plate experienced slab rollback in the east along the Mexican Craton and simultaneously subducted in the west beneath the Guerrero Terrane.
See also
List of ancient oceans
References
Historical oceans
Oceanography
Jurassic Mexico | Mezcalera Ocean | [
"Physics",
"Environmental_science"
] | 104 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
62,421,802 | https://en.wikipedia.org/wiki/Perturbed%20angular%20correlation | The perturbed γ-γ angular correlation, PAC for short or PAC-Spectroscopy, is a method of nuclear solid-state physics with which magnetic and electric fields in crystal structures can be measured. In doing so, electrical field gradients and the Larmor frequency in magnetic fields as well as dynamic effects are determined. With this very sensitive method, which requires only about 10–1000 billion atoms of a radioactive isotope per measurement, material properties in the local structure, phase transitions, magnetism and diffusion can be investigated. The PAC method is related to nuclear magnetic resonance and the Mössbauer effect, but shows no signal attenuation at very high temperatures.
Today only the time-differential perturbed angular correlation (TDPAC) is used.
History and development
PAC goes back to a theoretical work by Donald R. Hamilton from 1940. The first successful experiment was carried out by Brady and Deutsch in 1947. Essentially, the spins and parities of nuclear states were investigated in these first PAC experiments. However, it was recognized early on that electric and magnetic fields interact with the nuclear moments, providing the basis for a new form of material investigation: nuclear solid-state spectroscopy.
Step by step the theory was developed.
After Abragam and Pound published their work on the theory of PAC in 1953 including extra nuclear fields, many studies with PAC were carried out afterwards. In the 1960s and 1970s, interest in PAC experiments sharply increased, focusing mainly on magnetic and electric fields in crystals into which the probe nuclei were introduced. In the mid-1960s, ion implantation was discovered, providing new opportunities for sample preparation. The rapid electronic development of the 1970s brought significant improvements in signal processing. From the 1980s to the present, PAC has emerged as an important method for the study and characterization of materials, e.g. for the study of semiconductor materials, intermetallic compounds, surfaces and interfaces, and a number of applications have also appeared in biochemistry.
While until about 2008 PAC instruments used conventional high-frequency electronics of the 1970s, in 2008 Christian Herden and Jens Röder et al. developed the first fully digitized PAC instrument that enables extensive data analysis and parallel use of multiple probes. Replicas and further developments followed.
Measuring principle
PAC uses radioactive probes that have an intermediate state with decay times of 2 ns to approximately 10 μs; a typical example is 111In. After electron capture (EC), indium transmutes to cadmium (111Cd). Immediately thereafter, the 111Cd nucleus is predominantly in the excited 7/2+ nuclear spin state and only to a very small extent in the 11/2− state; the latter is not considered further. The 7/2+ excited state decays to the 5/2+ intermediate state by emitting a 171 keV γ-quantum. The intermediate state has a lifetime of 84.5 ns and is the state sensitive to PAC. It in turn decays into the 1/2+ ground state by emitting a 245 keV γ-quantum. PAC detects both γ-quanta, using the first as a start signal and the second as a stop signal.
The time between start and stop is measured for each event; when a start-stop pair is found, this is called a coincidence. Since the intermediate state decays according to the laws of radioactive decay, plotting the number of coincidences over time yields an exponential curve with the lifetime of this intermediate state. Because the emission of the second γ-quantum is not spherically symmetric (the so-called anisotropy, an intrinsic property of the nucleus in this transition), the surrounding electric and/or magnetic fields impose a periodic perturbation on it (hyperfine interaction). The individual spectra show the effect of this perturbation as a wave pattern superimposed on the exponential decay for two detector pairs, one at 90° and one at 180° to each other; the waveforms of the two detector pairs are shifted relative to each other. Put simply, one can imagine a fixed observer looking at a lighthouse whose light periodically becomes brighter and darker. Correspondingly, a detector arrangement, usually four detectors in a planar 90° arrangement or six detectors in an octahedral arrangement, "sees" the rotation of the nucleus at frequencies of the order of MHz to GHz.
For n detectors, the number of individual spectra is z = n² − n, i.e. 12 for n = 4 and 30 for n = 6. To obtain a PAC spectrum, the 90° and 180° single spectra are combined in such a way that the exponential decay cancels out and, in addition, the different detector efficiencies cancel. What remains is the pure perturbation function, as shown in the example of a complex PAC spectrum; its Fourier transform gives the transition frequencies as peaks.
R(t), the count rate ratio, is obtained from the single spectra by using:
R(t) = 2·[N(180°,t) − N(90°,t)] / [N(180°,t) + 2·N(90°,t)]
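A minimal Python sketch of this step is shown below; it is purely illustrative, and apart from the 84.5 ns lifetime all numbers (anisotropy, minimal frequency, amplitudes) are assumed values, not data from any real measurement. It builds synthetic 90° and 180° coincidence spectra for a single perturbed site and verifies that the exponential decay cancels in the ratio, leaving the perturbation pattern:

```python
import numpy as np

def pac_ratio(n180, n90):
    """Count-rate ratio R(t) from background-corrected coincidence spectra
    N(180 deg, t) and N(90 deg, t); the exponential decay of the intermediate
    state is common to both spectra and cancels in this combination."""
    n180 = np.asarray(n180, dtype=float)
    n90 = np.asarray(n90, dtype=float)
    return 2.0 * (n180 - n90) / (n180 + 2.0 * n90)

# --- synthetic demonstration with assumed, purely illustrative parameters ---
t = np.linspace(0.0, 500.0, 2000)      # time after the start quantum, in ns
tau = 84.5                             # lifetime of the 111Cd intermediate state, ns
A22 = -0.18                            # assumed effective anisotropy coefficient
w0 = 0.075                             # assumed minimal transition frequency, rad/ns
s2n = [0.20, 0.37, 0.29, 0.14]         # assumed relative amplitudes of the harmonics

# perturbation function G22(t) as a sum of cosines (n = 0 gives the constant term)
G22 = sum(s * np.cos(n * w0 * t) for n, s in enumerate(s2n))
decay = np.exp(-t / tau)               # radioactive decay of the intermediate level
N180 = decay * (1.0 + A22 * G22)       # second Legendre polynomial P2(cos 180 deg) = +1
N90 = decay * (1.0 - 0.5 * A22 * G22)  # P2(cos 90 deg) = -1/2

R = pac_ratio(N180, N90)               # the decay cancels; R(t) equals A22 * G22(t)
assert np.allclose(R, A22 * G22)
```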
Depending on the spin of the intermediate state, a different number of transition frequencies show up. For 5/2 spin, 3 transition frequencies can be observed with the ratio ω1+ω2=ω3. As a rule, a different combination of 3 frequencies can be observed for each associated site in the unit cell.
PAC is a statistical method: each radioactive probe atom sits in its own environment. In crystals, due to the high regularity of the arrangement of the atoms or ions, the environments are identical or very similar, so that probes on identical lattice sites experience the same hyperfine or magnetic field, which then becomes measurable in a PAC spectrum. For probes in very different environments, on the other hand, such as in amorphous materials, a broad frequency distribution or none at all is usually observed, and the PAC spectrum appears flat, without frequency response. With single crystals, depending on the orientation of the crystal relative to the detectors, certain transition frequencies can be reduced or vanish entirely, as can be seen in the example of the PAC spectrum of zinc oxide (ZnO).
Instrumental setup
In a typical PAC spectrometer, a setup of four detectors in a planar 90°/180° arrangement or six detectors in an octahedral arrangement is placed around the radioactive sample. The detectors are scintillation crystals of BaF2 or NaI; modern instruments today mainly use LaBr3:Ce or CeBr3. Photomultipliers convert the weak flashes of light generated in the scintillator by the gamma radiation into electrical signals. In classical instruments these signals are amplified and processed in logical AND/OR circuits in combination with time windows, and the different detector combinations (for 4 detectors: 12, 13, 14, 21, 23, 24, 31, 32, 34, 41, 42, 43) are assigned and counted. Modern digital spectrometers use digitizer cards that take the signal directly, convert it into energy and time values and store these on hard drives; the stored data are then searched by software for coincidences. Whereas in classical instruments "windows" limiting the respective γ-energies must be set before processing, this is not necessary for digital PAC during the recording of the measurement: the analysis only takes place in a second step. For probes with complex cascades, this makes it possible to optimize the data, to evaluate several cascades in parallel, and to measure different probes simultaneously. The resulting data volumes can be between 60 and 300 GB per measurement.
Sample materials
In principle, all solid and liquid materials can be investigated as samples. Depending on the question and the purpose of the investigation, certain boundary conditions arise. For the observation of clear perturbation frequencies it is necessary, because of the statistical nature of the method, that a certain proportion of the probe atoms sit in a similar environment and, for example, experience the same electric field gradient. Furthermore, during the time window between start and stop, or approximately 5 half-lives of the intermediate state, the direction of the electric field gradient must not change. In liquids, therefore, no perturbation frequency can be measured because of the frequent collisions, unless the probe is complexed in large molecules, such as proteins. Samples with proteins or peptides are usually frozen to improve the measurement.
The most studied materials with PAC are solids such as semiconductors, metals, insulators, and various types of functional materials. For the investigations, these are usually crystalline. Amorphous materials do not have highly ordered structures; however, they do have short-range order, which appears in PAC spectroscopy as a broad distribution of frequencies. Nano-materials have a crystalline core and a shell with a rather amorphous structure; this is called the core-shell model. The smaller the nanoparticle, the larger the volume fraction of this amorphous portion. In PAC measurements this shows up as a decrease of the crystalline frequency component, i.e. a reduction of its amplitude (attenuation).
Sample preparation
The amount of suitable PAC isotopes required for a measurement is between about 10 and 1000 billion atoms (10^10–10^12). The right amount depends on the particular properties of the isotope. 10 billion atoms are a very small amount of substance: for comparison, one mole contains about 6.022×10^23 particles, and 10^12 atoms in one cubic centimetre of beryllium correspond to an atomic fraction of only about 8×10^−12. The radioactive samples each have an activity of 0.1–5 MBq, which is of the order of the exemption limit for the respective isotope.
How the PAC isotopes are brought into the sample to be examined is up to the experimenter and the technical possibilities. The following methods are usual:
Implantation
During implantation, a radioactive ion beam is generated and directed onto the sample material. Due to the kinetic energy of the ions (1–500 keV), they penetrate the crystal lattice and are slowed down by impacts. They either come to a stop at interstitial sites or push a lattice atom out of its place and replace it. This leads to a disruption of the crystal structure. These defects can be investigated with PAC, and they can be healed by tempering. If, on the other hand, radiation defects in the crystal and their healing are to be examined, unannealed samples are measured and then annealed step by step.
The implantation is usually the method of choice, because it can be used to produce very well-defined samples.
Evaporation
In a vacuum, the PAC probe can be evaporated onto the sample. The radioactive probe is applied to a hot plate or filament, where it is brought to the evaporation temperature and condensed onto the facing sample material. With this method, surfaces in particular are examined. Furthermore, by vapor deposition of other materials, interfaces can be produced; they can be studied with PAC during tempering and their changes can be observed. Similarly, the PAC probe can be deposited by sputtering using a plasma.
Diffusion
In the diffusion method, the radioactive probe is usually diluted in a solvent, applied to the sample, dried, and diffused into the material by tempering. The solution containing the radioactive probe should be as pure as possible, since any other substances can also diffuse into the sample and thereby affect the measurement results. The probe should be sufficiently diluted in the sample, and the diffusion process should be planned so that a uniform distribution or a sufficient penetration depth is achieved.
Added during synthesis
PAC probes may also be added during the synthesis of sample materials to achieve the most uniform distribution in the sample. This method is particularly well suited if, for example, the PAC probe diffuses only poorly in the material and a higher concentration in grain boundaries is to be expected. Since only very small samples are necessary with PAC (about 5 mm), micro-reactors can be used. Ideally, the probe is added to the liquid phase of the sol-gel process or one of the later precursor phases.
Neutron activation
In neutron activation, the probe is prepared directly from the sample material by converting a very small part of one of the elements of the sample material into the desired PAC probe or its parent isotope by neutron capture. As with implantation, radiation damage must be healed. This method is limited to sample materials containing elements from which neutron-capture PAC probes can be made. Furthermore, samples can be intentionally contaminated with those elements that are to be activated. For example, hafnium is excellently suited for activation because of its large neutron capture cross section.
Nuclear reaction
Rarely used are direct nuclear reactions, in which nuclei are converted into PAC probes by bombardment with high-energy elementary particles or protons. This causes major radiation damage, which must be healed. This method is used in PAD (perturbed angular distribution), which belongs to the family of PAC methods.
Laboratories
The currently largest PAC laboratory in the world, with about 10 PAC instruments, is located at ISOLDE at CERN and receives its major funding from the BMBF. Radioactive ion beams are produced at ISOLDE by firing protons from the booster onto target materials (uranium carbide, liquid tin, etc.), evaporating the spallation products at high temperatures (up to 2000 °C), then ionizing and accelerating them. With the subsequent mass separation, very pure isotope beams can usually be produced and implanted in PAC samples. Of particular interest to PAC are short-lived isomeric probes such as 111mCd, 199mHg, 204mPb, and various rare-earth probes.
Theory
The first γ-quantum (γ1) is emitted isotropically. Detecting this quantum in a detector selects, out of the many possible directions, a subset with a given orientation. The second γ-quantum (γ2) is emitted anisotropically and shows the effect of the angular correlation. The goal is to measure the relative probability of detecting γ2 at a fixed angle θ relative to γ1. The probability is given by the angular correlation (perturbation theory):
W(θ) = Σ_k A_kk·P_k(cos θ)
For a γ-γ cascade, k takes only even values because of the conservation of parity, with 0 ≤ k ≤ min(2I, l1 + l1′, l2 + l2′).
Here I is the spin of the intermediate state and l1, l1′ and l2, l2′ are the multipolarities of the two transitions. For pure multipole transitions, l1 = l1′ and l2 = l2′.
A_kk is the anisotropy coefficient, which depends on the angular momentum of the intermediate state and the multipolarities of the transitions.
The radioactive nucleus is built into the sample material and emits two γ-quanta upon decay. During the lifetime of the intermediate state, i.e. the time between γ1 and γ2, the nucleus experiences a perturbation due to the hyperfine interaction with its electric and magnetic environment. This perturbation changes the angular correlation to:
W(θ,t) = Σ_k A_kk·G_kk(t)·P_k(cos θ)
G_kk(t) is the perturbation factor. Due to the electric and magnetic interaction, the angular momentum of the intermediate state experiences a torque about its axis of symmetry. Quantum-mechanically, this means that the interaction leads to transitions between the M states. The second γ-quantum (γ2) is then emitted from the intermediate level. This change of population is the reason for the attenuation of the correlation.
The interaction occurs between the magnetic dipole moment μ of the intermediate state and/or an external magnetic field B. There is also an interaction between the nuclear quadrupole moment Q and the extranuclear electric field gradient.
Magnetic dipole interaction
For the magnetic dipole interaction, the frequency of the precession of the nuclear spin around the axis of the magnetic field B is given by:
ω_L = g·μ_N·B / ħ
g is the Landé g-factor and μ_N is the nuclear magneton.
With ΔE = ħ·ω it follows that the splitting between adjacent magnetic substates is:
ΔE = ħ·ω_L = g·μ_N·B
From the general theory one obtains, for a polycrystalline sample, the perturbation factor:
G_kk(t) = (1/(2k+1))·[1 + 2·Σ_{N=1..k} cos(N·ω_L·t)]
For the magnetic interaction, the angular correlation is therefore modulated with the Larmor frequency ω_L and its harmonics up to k·ω_L.
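For a feel for the magnitudes involved, a short worked example with assumed values (g = 1 and B = 1 T are chosen here purely for the arithmetic and do not refer to any particular probe):

$$\omega_L = \frac{g\,\mu_N\,B}{\hbar} = \frac{1 \times 5.051\times10^{-27}\,\mathrm{J\,T^{-1}} \times 1\,\mathrm{T}}{1.055\times10^{-34}\,\mathrm{J\,s}} \approx 4.8\times10^{7}\,\mathrm{rad\,s^{-1}}, \qquad \nu_L = \frac{\omega_L}{2\pi} \approx 7.6\,\mathrm{MHz}$$

Real probe nuclei have g-factors different from 1, so the observed precession frequencies scale accordingly.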
Static electric quadrupole interaction
The energy of the hyperfine electric interaction between the charge distribution of the nucleus and the extranuclear static electric field can be expanded in multipoles. The monopole term causes only an energy shift and the dipole term vanishes, so that the first relevant expansion term is the quadrupole term:
E_Q = Σ_{ij} Q_ij·V_ij ,  with i, j = 1, 2, 3
This can be written as the product of the quadrupole moment Q_ij and the electric field gradient V_ij. Both tensors are of second rank. Higher orders have too small an effect to be measured with PAC.
The electric field gradient is the second derivative of the electric potential Φ at the nucleus:
V_ij = ∂²Φ / (∂x_i ∂x_j)
V_ij is diagonalized, so that:
|V_zz| ≥ |V_yy| ≥ |V_xx|
The matrix is traceless in the principal axis system (Laplace equation):
V_xx + V_yy + V_zz = 0
Typically, the electric field gradient is described by its largest component V_zz and the asymmetry parameter η:
η = (V_xx − V_yy) / V_zz ,  with 0 ≤ η ≤ 1
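A numerical illustration of this definition (the three components are assumed values, chosen only so that they satisfy the trace condition and the ordering convention above):

$$V_{zz} = 1.0\times10^{22}\,\mathrm{V\,m^{-2}},\quad V_{yy} = -0.8\times10^{22}\,\mathrm{V\,m^{-2}},\quad V_{xx} = -0.2\times10^{22}\,\mathrm{V\,m^{-2}} \;\Rightarrow\; \eta = \frac{V_{xx}-V_{yy}}{V_{zz}} = 0.6$$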
In cubic crystals, the axis parameters of the unit cell x, y, z are of the same length. Therefore the electric field gradient vanishes:
V_zz = 0 and η = 0
In axially symmetric systems, η = 0.
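This follows directly from the trace condition above: axial symmetry means the two transverse components are equal, so

$$V_{xx} = V_{yy} = -\tfrac{1}{2}V_{zz} \quad\Rightarrow\quad \eta = \frac{V_{xx}-V_{yy}}{V_{zz}} = 0$$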
For axially symmetric electric field gradients, the energies of the substates take the values:
E_M = [e·Q·V_zz / (4I(2I−1))]·[3M² − I(I+1)]
The energy difference between two substates, M and M′, is given by:
ΔE = E_M − E_M′ = [3·e·Q·V_zz / (4I(2I−1))]·|M² − M′²|
The quadrupole frequency ω_Q is introduced. The following formulas are important for the evaluation:
ω_Q = e·Q·V_zz / [4I(2I−1)·ħ]    and    ν_Q = e·Q·V_zz / h
Publications mostly list ν_Q. The elementary charge e and the Planck constant h are well known or well defined.
The nuclear quadrupole moment Q is often determined only very inaccurately (often only to 2–3 significant digits).
Because ν_Q can be determined much more accurately than Q, it is not useful to specify only V_zz, because of the error propagation.
In addition, ν_Q is independent of the nuclear spin! This means that measurements on two different isotopes of the same element can be compared, such as 199mHg(5/2−), 197mHg(5/2−) and 201mHg(9/2−). Further, ν_Q can be used as a fingerprint method.
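As an order-of-magnitude illustration (the values Q ≈ 0.8 b and V_zz = 1×10^22 V/m² are assumed here purely for the arithmetic and are not quoted from any measurement):

$$\nu_Q = \frac{e\,Q\,V_{zz}}{h} = \frac{1.602\times10^{-19}\,\mathrm{C}\times 0.8\times10^{-28}\,\mathrm{m^{2}}\times 1\times10^{22}\,\mathrm{V\,m^{-2}}}{6.626\times10^{-34}\,\mathrm{J\,s}} \approx 1.9\times10^{8}\,\mathrm{Hz} \approx 190\,\mathrm{MHz}$$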
For the energy difference it then follows that:
ΔE = 3·ħ·ω_Q·|M² − M′²|
If η = 0, the observed transition frequencies are integer multiples of a minimal frequency ω_0:
ω_n = n·ω_0
with:
ω_0 = 3·ω_Q for integer spins, and
ω_0 = 6·ω_Q for half-integer spins.
The perturbation factor is given by:
G_kk(t) = Σ_n s_kn·cos(n·ω_0·t)
Here the coefficients s_kn are the amplitudes (probabilities) of the observed frequency components.
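The sublevel scheme above can be checked with a few lines of code. The sketch below is illustrative only; it merely encodes the proportionality E_M ∝ 3M² − I(I+1) for the axially symmetric case (η = 0) and lists the distinct transition frequencies in units of ω_Q. For I = 5/2 it reproduces the ratio 6 : 12 : 18, i.e. the three observable frequencies with ω1 + ω2 = ω3 mentioned in the section on the measuring principle:

```python
from fractions import Fraction
import itertools

def quadrupole_frequencies(spin, wQ=1.0):
    """Distinct transition frequencies between the m-sublevels of an axially
    symmetric electric quadrupole interaction (eta = 0), in units of wQ.
    The sublevel energies are proportional to 3*m^2 - I*(I+1), so the +/-m
    pairs are degenerate and only |m| matters."""
    I = Fraction(spin)
    m_values = [I - k for k in range(int(2 * I) + 1)]
    levels = sorted({3 * m * m - I * (I + 1) for m in m_values})
    freqs = {abs(a - b) * wQ for a, b in itertools.combinations(levels, 2)}
    return sorted(float(f) for f in freqs)

print(quadrupole_frequencies(Fraction(5, 2)))   # [6.0, 12.0, 18.0] -> w1 + w2 = w3
print(quadrupole_frequencies(2))                # integer-spin example: [3.0, 9.0, 12.0]
```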
Just as for the magnetic dipole interaction, the electric quadrupole interaction induces a precession of the angular correlation in time, which modulates it at the quadrupole interaction frequency. This modulation is a superposition of the different transition frequencies ω_n. The relative amplitudes of the various components depend on the orientation of the electric field gradient relative to the detectors (symmetry axis) and on the asymmetry parameter η. In order to compare measurements made with different probe nuclei, one needs a parameter that allows a direct comparison; therefore, the quadrupole coupling constant ν_Q, which is independent of the nuclear spin, is introduced.
Combined interactions
If the radioactive nucleus is exposed to a magnetic and an electric interaction at the same time, as described above, combined interactions result. This leads to a splitting of the observed frequencies. The analysis may not be trivial owing to the larger number of frequencies that must be assigned. These then depend in each case on the relative orientation of the electric and magnetic fields in the crystal. PAC is one of the few methods with which these directions can be determined.
Dynamic interactions
If the hyperfine field fluctuates during the lifetime of the intermediate level, due to jumps of the probe to another lattice site or to jumps of a nearby atom to another lattice site, the correlation is lost. For the simple case of an undistorted lattice of cubic symmetry and jumps between equivalent sites, an exponential damping of the static G_kk(t) terms is observed:
G_kk^dyn(t) = e^(−λ_k·t)·G_kk^stat(t)
Here λ_k is a constant to be determined, which should not be confused with the decay constant of the intermediate state. For large values of λ_k, only a pure exponential decay can be observed:
G_kk^dyn(t) ≈ e^(−λ_k·t)
The limiting case after Abragam and Pound applies when the fluctuation rate is fast compared with the hyperfine interaction frequency; then:
After effects
Nuclei that transmute prior to the γ-γ cascade usually cause a charge change in ionic crystals (e.g. In³⁺ to Cd²⁺). As a result, the lattice must respond to these changes. Defects or neighbouring ions can also migrate. Likewise, the high-energy transition process may cause the Auger effect, which can bring the nucleus into higher ionization states. The normalization of the charge state then depends on the conductivity of the material: in metals the process takes place very quickly, while in semiconductors and insulators it takes considerably longer. In all these processes, the hyperfine field changes. If this change falls within the γ-γ cascade, it may be observed as an after effect.
The number of nuclei in state (a) is depopulated both by decay to state (b) and by decay to state (c):
with:
From this one obtains the exponential case:
For the total number of nuclei in the static state (c) follows:
The initial occupation probabilities are for static and dynamic environments:
General theory
In the general theory, the following is given for a transition:
Minimum of
with:
References
Nuclear physics
Atomic physics
Electromagnetism
Spectroscopy
Scientific techniques
Laboratory techniques in condensed matter physics
Solid-state chemistry
Materials science | Perturbed angular correlation | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 4,222 | [
"Physical phenomena",
"Quantum mechanics",
"Laboratory techniques in condensed matter physics",
"Fundamental interactions",
"Spectroscopy",
"Solid-state chemistry",
"Electromagnetism",
"Instrumental analysis",
"Materials science",
" molecular",
"Nuclear physics",
" and optical physics",
"Mol... |
62,423,759 | https://en.wikipedia.org/wiki/Muxrabija | The Muxrabija (from the Arabic mashrabiya; plural muxrabijet) is a typical element of vernacular Maltese architecture. It consists of an ornate timber screen, perforated with an intricate network of holes, tightly fitted into a window or loggia projecting from the facade of the building, usually over the main door or to its side. Stone-carved muxrabijiet are also reported.
The muxrabija is also known as ‘in-nemmiesa’, ‘ix-xerriefa’ and in Gozo as ‘il-kixxiefa’ or ‘lkixxijìja’ and ‘il-glusija’ (probably from the French jalousie).
Muxrabijet and roundels (round motifs sculpted on building facades) are the only two features of vernacular Maltese architecture directly deriving from Arabic culture. The muxrabija is a typical Mediterranean feature, whose oldest record dates back to the 7th century in the Middle East. The oldest-surviving muxrabijet in Malta date back to the years 1300–1400.
Muxrabijet served to keep the interior of the building cool by allowing air to circulate through the carved wood. They were also used as a cooling device for storing water, and as a security measure allowing the outside to be observed without being seen.
List of muxrabijet in Malta
Tal-Karmnu Street, Victoria Gozo
Sqaq il-Qajjied, Siggiewi
84, Santu Rokku Street, Birkirkara - House of Censu Borg (Brared), stone muxrabija with decorative style
Ta’ Ghammar, Gozo
Il-Knisja Street, Gharb
Doni Street, Rabat Malta
Ta’ Monita, Marsascala
Bibliography
Joe Azzopardi, "A Survey of the Maltese Muxrabijiet (Part 2)", VIGILO - DIN L-ART HELWA, October 2012
Notes
Architecture in Malta | Muxrabija | [
"Engineering"
] | 426 | [
"Architecture stubs",
"Architecture"
] |
62,423,839 | https://en.wikipedia.org/wiki/Fluorescence-activating%20and%20absorption-shifting%20tag | FAST (Fluorescence-Activating and absorption-Shifting Tag) is a genetically-encoded protein tag which, upon reversible combination with a fluorogenic chromophore, allows the reporting of proteins of interest. FAST, a small 14 kDa protein, was engineered from the photoactive yellow protein (PYP) by directed evolution. It was disclosed for the first time in 2016 by researchers from Ecole normale supérieure de Paris. FAST was further evolved into splitFAST (2019), a complementation system for protein-protein interaction monitoring, and CATCHFIRE (2023), a self-reporting protein dimerizing system.
Mechanism
Fluorogenic protein-based strategies for labeling, sensing, and actuation
Fluorescence imaging has become ubiquitous in cell and molecular biology. GFP-like fluorescent proteins have revolutionized fluorescence microscopy, giving researchers exquisite control over the localization, function and fate of tagged constructs. Lately, alternative systems have been developed based on a fluorogenic interaction between a protein tag, which affords the classic advantages of protein tagging, i.e., absolute labeling specificity and localization, and an external chromophore that remains dark until combined with its cognate protein tag. Chromophores range from naturally occurring ones, e.g., flavin mononucleotide (FMN) with LOV-sensing domains, biliverdin with phytochromes, and bilirubin with UnaG, to synthetic fluorophores with SNAP-tag, CLIP-tag and HaloTag. While initially designed as fluorescent labels, these systems also present opportunities for sensing and actuation.
FAST and its derivates, splitFAST and CATCHFIRE, pertain to these novel chemical-genetic strategies.
FAST
FAST is a 125 amino acid protein engineered from the photosensitive PYP. Not fluorescent by itself, it can selectively bind a fluorogenic chromophore derived from 4-hydroxybenzylidene rhodanine, which is itself non-fluorescent unless bound. Once bound, the pair of molecules undergoes a unique fluorogen activation mechanism based on two spectroscopic changes, an increase in fluorescence quantum yield and a red shift in absorption, hence providing high labeling selectivity. Several versions of FAST have been described, differing by a small number of mutations, e.g., Y-FAST, iFAST, pFAST, greenFAST, redFAST, frFAST, nirFAST, nanoFAST, or dimers of those. A number of fluorogenic chromophores have also been developed, varying in emission wavelength, brightness and tag affinity. Some are non-permeant, i.e., they cannot cross cell membranes, and hence specifically label membrane or extracellular proteins, allowing, e.g., the monitoring of trafficking from synthesis until excretion.
FAST participates in the race towards near infra-red reporting, much needed for full organism imaging, while allowing deep tissue penetration, reduced photodamage to living organisms, and a high signal-to-noise ratio.
splitFAST
splitFAST is a fluorescence complementation system for the visualization of transient protein-protein interactions in living cells. Engineered from the fluorogenic reporter FAST, splitFAST consists of two protein moieties, NFAST (114 amino acids) and CFAST (10 or 11 amino acids). Each is genetically fused to one protein of interest; upon interaction of the corresponding proteins, they reconstitute the complete FAST, which can then combine with any FAST fluorogen and illuminate the interaction. splitFAST offers a powerful alternative to conventional imaging techniques for protein-protein interactions, i.e., Förster resonance energy transfer (FRET) and bimolecular fluorescence complementation (BiFC). Easy to implement, splitFAST complementation has been shown to be fully reversible, with rapid disassembly, which allows the real-time monitoring not only of protein complex assembly but also of protein complex disassembly.
A tripartite splitFAST was further developed.
CATCHFIRE
An evolution of FAST and splitFAST, CATCHFIRE relies on the genetic fusion of a pair of proteins of interest to small FAST-based dimerizing domains, FIREtag and FIREmate. The addition of fluorogenic inducers, small molecules of the "match" series, e.g., match540, match550 or matchDark, drives the interaction between FIREtag and FIREmate, hence inducing the proximity of the proteins of interest. When the two domains interact, the fluorescence of the match molecule increases roughly 100-fold, and the newly induced interaction can be observed by fluorescence microscopy. A further key feature of CATCHFIRE is its reversibility, making it the first self-reporting reversible dimerizing system. CATCHFIRE allows the control and tracking of protein localization, protein trafficking, organelle transport and cellular processes, opening avenues for studying or controlling biological processes with high spatiotemporal resolution. Its fluorogenic nature allows the design of a new class of biosensors for the study of processes such as signal transduction and apoptosis.
Applications
The FAST-fluorogen reporting system is used to explore the living world, from protein reporting (e.g., for protein trafficking), protein-protein interaction monitoring (and a number of biosensors), to chemically induced dimerization. It is implemented in fluorescence microscopy, flow cytometry and any other fluorometric methods. FAST has also been reported for super-resolution microscopy of living cells.
In anaerobic microbiology
Because of its unique capacity to fluoresce in zero-oxygen conditions, FAST has been widely used in anaerobes, for example to enable metabolic engineering of Clostridium and related bacteria long known for biomass fermentation. For the same purpose, it has been used in methanogenic archaea, namely Methanococcus maripaludis and Methanosarcina acetivorans. It has also been implemented for pathogen studies, i.e., of the bacterium Clostridioides difficile and the protozoan Giardia intestinalis.
In addition, FAST makes it possible to monitor microbial activity in low-oxygen conditions such as maturing biofilms, or in tumors or the gut microbiota.
In non-anerobic microbiology
Building on their small size and reversibility, and hence their limited impact on protein function and interactions, FAST and splitFAST have been used in fungi, namely Saccharomyces cerevisiae, to monitor metabolic engineering, and in pathogenic bacteria, namely Listeria monocytogenes, to explore virulence factors.
In mammalian cells
Beyond microorganisms, FAST and splitFAST have begun to spread widely across mechanistic studies in mammalian cells. They helped elucidate the role of a particular GPCR in dendritic spine maturation as well as a mechanism of action of the interferon-inducible MX1 protein against influenza A. splitFAST has been used in studies of membrane contact sites (MCSs) between membranous organelles, a rising area in medical research, e.g., for the endoplasmic reticulum-mitochondria junction. splitFAST-equipped lipid droplets have also been designed to enable studies of lipid droplet interactions.
References
Proteins
Biochemistry methods | Fluorescence-activating and absorption-shifting tag | [
"Chemistry",
"Biology"
] | 1,539 | [
"Biochemistry methods",
"Biomolecules by chemical classification",
"Molecular biology",
"Biochemistry",
"Proteins",
"Protein imaging"
] |
62,423,851 | https://en.wikipedia.org/wiki/Outline%20of%20web%20design%20and%20web%20development | The following outline is provided as an overview of and topical guide to web design and web development, two very related fields:
Web design – field that encompasses many different skills and disciplines in the production and maintenance of websites. The different areas of web design include web graphic design; interface design; authoring, including standardized code and proprietary software; user experience design; and search engine optimization. Often many individuals will work in teams covering different aspects of the design process, although some designers will cover them all. The term web design is normally used to describe the design process relating to the front-end (client side) design of a website including writing markup. Web design partially overlaps web engineering in the broader scope of web development. Web designers are expected to have an awareness of usability and if their role involves creating markup then they are also expected to be up to date with web accessibility guidelines.
Web development – work involved in developing a web site for the Internet (World Wide Web) or an intranet (a private network). Web development can range from developing a simple single static page of plain text to complex web-based internet applications (web apps), electronic businesses, and social network services. A more comprehensive list of tasks to which web development commonly refers, may include web engineering, web design, web content development, client liaison, client-side/server-side scripting, web server and network security configuration, and e-commerce development.
Among web professionals, "web development" usually refers to the main non-design aspects of building web sites: writing markup and coding. Web development may use content management systems (CMS) to make content changes easier and available with basic technical skills.
For larger organizations and businesses, web development teams can consist of hundreds of people (web developers) and follow standard methods like Agile methodologies while developing websites. Smaller organizations may only require a single permanent or contracting developer, or secondary assignment to related job positions such as a graphic designer or information systems technician. Web development may be a collaborative effort between departments rather than the domain of a designated department. There are three kinds of web developer specialization: front-end developer, back-end developer, and full-stack developer. Front-end developers are responsible for the behaviour and visuals that run in the user's browser, back-end developers deal with the servers, and full-stack developers are responsible for both. Currently, the demand for React and Node.js developers is very high all over the world.
Web design
Graphic design
Typography
Page layout
User experience design (UX design)
User interface design (UI design)
Web Design techniques
Responsive web design (RWD)
Adaptive web design (AWD)
Progressive enhancement
Tableless web design
Software
Adobe Photoshop
Adobe Illustrator
Adobe XD
Figma
Sketch (software)
Affinity Designer
Inkscape
Web development
Front-end web development – the practice of converting data to a graphical interface, through the use of HTML, CSS, and JavaScript, so that users can view and interact with that data.
HTML (HyperText Markup Language) (*.html)
CSS (Cascading Style Sheets) (*.css)
CSS framework
JavaScript (*.js)
Package managers for JavaScript
npm (originally short for Node Package Manager)
Server-side scripting (also known as "Server-side (web) development" or "Back-end (web) development")
ActiveVFP (*.avfp)
ASP (*.asp)
ASP.NET Web Forms (*.aspx)
ASP.NET Web Pages (*.cshtml, *.vbhtml)
ColdFusion Markup Language (*.cfm)
Go (*.go)
Google Apps Script (*.gs)
Hack (*.php)
Haskell (*.hs) (example: Yesod)
Java (*.jsp) via JavaServer Pages
JavaScript or TypeScript using Server-side JavaScript (*.ssjs, *.js, *.ts) (example: Node.js)
Lasso (*.lasso)
Lua (*.lp *.op *.lua)
NodeJS (*.node)
Parser (*.p)
Perl via the CGI.pm module (*.cgi, *.ipl, *.pl)
PHP (*.php, *.php3, *.php4, *.phtml)
Progress WebSpeed (*.r,*.w)
Python (*.py) (examples: Pyramid, Flask, Django)
R (*.rhtml) – (example: rApache)
React (*.jsx, *.tsx)
Ruby (*.rb, *.rbw) (example: Ruby on Rails)
SMX (*.smx)
Tcl (*.tcl)
WebDNA (*.dna,*.tpl)
Full stack web development – involves both front-end and back-end (server-side) development
Web framework
Types of framework architectures
Model–view–controller
Three-tier architecture
Software
Atom
IntelliJ IDEA
Sublime Text
Visual Studio Code
See also
Outline of computers
Outline of computing and Outline of information technology
Outline of computer science
Outline of artificial intelligence
Outline of cryptography
Outline of the Internet
Outline of Google
Outline of software
Types of software
Outline of free software
Outline of search engines
Outline of software development
Outline of software engineering
Outline of web design and web development
Outline of computer programming
Programming languages
Outline of C++
Outline of Perl
Outline of computer engineering
References
External links
Computer programming | Outline of web design and web development | [
"Technology",
"Engineering"
] | 1,154 | [
"Computing-related lists",
"Web development",
"Software engineering",
"Design",
"Web design"
] |
62,423,894 | https://en.wikipedia.org/wiki/Gallarija | The Gallarija (pl: gallariji) is a typical element of vernacular Maltese architecture, consisting of an ornate closed wooden balcony.
The term is of Italian origin, but with a shift in meaning (galleria, covered passage, vs balcone, balcony). The stone brackets or corbels that support the balcony are called saljaturi (it: sogliature vs mensole, beccattelli). The hinged glass flaps are purtelli (it: sportelli) and the blinds are called tendini (it: tendine).
History
The gallarija is considered a descendant of the Maltese muxrabija, and it is closely related to the mashrabiya, which is typical of Arabic architecture.
Yet, its use became widespread only in the 17th century, as not one of the antique townscapes of Valletta and the harbour cities shows any covered balcony. The earliest representation of a gallarija concerns the one that rounds the Old Theatre Street corner of the Grandmaster's Palace in Valletta, around the year 1675. In 1679 Sieur de Bachelier mentions in his description of the palace that “a glass-covered balcony joins all the rooms of this side of the building” [Old Theatre Street], and adds that “Today’s Grand Master [Nicholas Cottoner] willingly strolls there [through the balcony] without being seen, and discovers from his walk all that is happening in the two piazzas in front and at the side of his palace. If he sees two knights ambling together, he immediately perceives their thoughts and the subject of their conversation, as he knows the minds of all those he governs, and the secret practices of their intrigues.”
The use of gallariji became widespread in Valletta and the Three Cities in the 18th century, in parallel with the spread of baroque. The architectural element was embellished by curve lines and elaborate stone corbels. The onset of the 20th century gave a new dimension to the Maltese balconies, which could now be designed in simpler Art Deco lines.
Notes
Bibliography
Giovanni Bonello,
Cyrus Vakili-Zad (2014) Maltese ‘gallarija’: a gender and space perspective, European Review of History: Revue européenne d'histoire, 21:5, 729-747, DOI: 10.1080/13507486.2014.949632
Architecture in Malta
Architectural elements | Gallarija | [
"Technology",
"Engineering"
] | 501 | [
"Building engineering",
"Architectural elements",
"Components",
"Architecture"
] |
62,423,927 | https://en.wikipedia.org/wiki/Seafloor%20depth%20versus%20age | The depth of the seafloor on the flanks of a mid-ocean ridge is determined mainly by the age of the oceanic lithosphere; older seafloor is deeper. During seafloor spreading, lithosphere and mantle cooling, contraction, and isostatic adjustment with age cause seafloor deepening. This relationship has come to be better understood since around 1969 with significant updates in 1974 and 1977. Two main theories have been put forward to explain this observation: one where the mantle including the lithosphere is cooling; the cooling mantle model, and a second where a lithosphere plate cools above a mantle at a constant temperature; the cooling plate model. The cooling mantle model explains the age-depth observations for seafloor younger than 80 million years. The cooling plate model explains the age-depth observations best for seafloor older that 20 million years. In addition, the cooling plate model explains the almost constant depth and heat flow observed in very old seafloor and lithosphere. In practice it is convenient to use the solution for the cooling mantle model for an age-depth relationship younger than 20 million years. Older than this the cooling plate model fits data as well. Beyond 80 million years the plate model fits better than the mantle model.
Background
The first theories for seafloor spreading in the early and mid twentieth century explained the elevations of the mid-ocean ridges as upwellings above convection currents in Earth's mantle.
The next idea connected seafloor spreading and continental drift in a model of plate tectonics. In 1969, the elevations of ridges were explained as thermal expansion of a lithospheric plate at the spreading center. This 'cooling plate model' was followed in 1974 by noting that elevations of ridges could be modeled by cooling of the whole upper mantle including any plate. This was followed in 1977 by a more refined plate model which explained data that showed that both the ocean depths and ocean crust heat flow approached a constant value for very old seafloor. These observations could not be explained by the earlier 'cooling mantle model', which predicted increasing depth and decreasing heat flow at very old ages.
Seafloor topography: cooling mantle and lithosphere models
The depth of the seafloor (or the height of a location on a mid-ocean ridge above a base-level) is closely correlated with its age (i.e. the age of the lithosphere at the point where depth is measured). Depth is measured to the top of the ocean crust, below any overlying sediment. The age-depth relation can be modeled by the cooling of a lithosphere plate or mantle half-space in areas without significant subduction. The distinction between the two approaches is that the plate model requires the base of the lithosphere to maintain a constant temperature over time and the cooling is of the plate above this lower boundary. The cooling mantle model, which was developed after the plate model, does not require that the lithosphere base is maintained at a constant and limiting temperature. The result of the cooling mantle model is that seafloor depth is predicted to be proportional to the square root of its age.
Cooling mantle model (1974)
In the cooling mantle half-space model developed in 1974, the seabed (top of crust) height is determined by the oceanic lithosphere and mantle temperature, due to thermal expansion. The simple result is that the ridge height or seabed depth is proportional to the square root of its age. In all models, oceanic lithosphere is continuously formed at a constant rate at the mid-ocean ridges. The source of the lithosphere has a half-plane shape (x = 0, z < 0) and a constant temperature T1. Due to its continuous creation, the lithosphere at x > 0 is moving away from the ridge at a constant velocity v, which is assumed large compared to other typical scales in the problem. The temperature at the upper boundary of the lithosphere (z = 0) is a constant T0 = 0. Thus at x = 0 the temperature is the Heaviside step function T1·Θ(−z). The system is assumed to be at a quasi-steady state, so that the temperature distribution is constant in time, i.e. T = T(x, z).
By calculating in the frame of reference of the moving lithosphere (velocity v), which has spatial coordinate x' = x − vt, the heat equation is:
∂T/∂t = κ ∇²T = κ (∂²T/∂z² + ∂²T/∂x'²)
where κ is the thermal diffusivity of the mantle lithosphere.
Since T depends on x' and t only through the combination x = x' + vt:
∂T/∂t = v ∂T/∂x
Thus:
v ∂T/∂x = κ (∂²T/∂z² + ∂²T/∂x²)
It is assumed that v is large compared to other scales in the problem; therefore the last term in the equation is neglected, giving the one-dimensional diffusion equation:
v ∂T/∂x = κ ∂²T/∂z²
with the initial conditions T(x = 0, z) = T1 for every z < 0, and T(x, z = 0) = 0.
The solution for z ≤ 0 is given by the error function:
T(x, z) = T1 erf( −z / (2√(κx/v)) ), or equivalently, in terms of the lithospheric age t = x/v, T(z, t) = T1 erf( −z / (2√(κt)) ).
Due to the large velocity, the temperature dependence on the horizontal direction is negligible, and the height at time t (i.e. of sea floor of age t) can be calculated by integrating the thermal expansion over z:
h(t) = h0 − α_eff T1 ∫₀^∞ [1 − erf( z' / (2√(κt)) )] dz' = h0 − (2/√π) α_eff T1 √(κt)
where α_eff is the effective volumetric thermal expansion coefficient, and h0 is the mid-ocean ridge height (compared to some reference).
The assumption that v is relatively large is equivalent to the assumption that the thermal diffusivity κ is small compared to L²/A, where L is the ocean width (from mid-ocean ridges to continental shelf) and A is the age of the ocean basin.
The effective thermal expansion coefficient α_eff is different from the usual thermal expansion coefficient α due to the isostatic effect of the change in water column height above the lithosphere as it expands or contracts. Both coefficients are related by:
α_eff = α ρ / (ρ − ρw)
where ρ is the rock density and ρw is the density of water.
By substituting rough estimates for the parameters into the solution for the height of the ocean floor h(t),
we have:
h(t) ≈ h0 − 350 √t
where the height is in meters and time is in millions of years. To get the dependence on x, one must substitute t = x/v ~ Ax/L, where L is the distance between the ridge and the continental shelf (roughly half the ocean width), and A is the ocean basin age.
Rather than the height of the ocean floor h(t) above a base or reference level, the depth of the seabed d(t) is of interest. Because d(t) + h(t) is constant (with d(t) measured from the ocean surface), we can find that:
d(t) ≈ d_r + 350 √t; for the eastern Pacific for example, where d_r is the depth at the ridge crest, typically 2500 m.
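The square-root relation above can be evaluated directly. The following minimal Python sketch does so; the function names and parameter values (a ridge depth of 2500 m, the 350 m per square-root-million-years coefficient, a basal temperature near 1350 °C, and a thermal diffusivity of about 8×10⁻⁷ m²/s) are illustrative assumptions consistent with the rough estimates quoted in this section, not fitted values.

```python
import math

def halfspace_depth(age_myr, ridge_depth_m=2500.0, coeff=350.0):
    """Cooling mantle (half-space) model: seabed depth (m) grows with the
    square root of lithospheric age (millions of years)."""
    return ridge_depth_m + coeff * math.sqrt(age_myr)

def halfspace_temperature(depth_m, age_myr, t1=1350.0, kappa=8.0e-7):
    """Temperature (deg C) at a depth below the seafloor from the
    error-function solution T = T1 * erf(z / (2*sqrt(kappa*t)));
    t1 and kappa are illustrative typical values."""
    age_s = age_myr * 1.0e6 * 365.25 * 24 * 3600.0
    return t1 * math.erf(depth_m / (2.0 * math.sqrt(kappa * age_s)))

for age in (1, 10, 50, 80):
    print(f"{age:3d} Myr: depth ~{halfspace_depth(age):5.0f} m, "
          f"T at 30 km ~{halfspace_temperature(30_000, age):4.0f} C")
```

With these choices the depth increases from roughly 2850 m at 1 million years to about 5600 m at 80 million years, illustrating the square-root dependence.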
Cooling plate model (1977)
The depth predicted by the square root of seafloor age found by the 1974 cooling mantle derivation is too deep for seafloor older than 80 million years. Depth is better explained by a cooling lithosphere plate model rather than the cooling mantle half-space. The plate has a constant temperature at its base and spreading edge. Derivation of the cooling plate model also starts with the heat flow equation in one dimension as does the cooling mantle model. The difference is in requiring a thermal boundary at the base of a cooling plate. Analysis of depth versus age and depth versus square root of age data allowed Parsons and Sclater to estimate model parameters (for the North Pacific):
~125 km for lithosphere thickness
~1350 °C for the temperature at the base and young edge of the plate
Assuming isostatic equilibrium everywhere beneath the cooling plate yields a revised age-depth relationship for older sea floor that is approximately correct for ages as young as 20 million years:
d(t) = 6400 − 3200 exp(−t/62.8) meters, with t in millions of years
Thus older seafloor deepens more slowly than younger seafloor and in fact can be assumed almost constant at ~6400 m depth. Their plate model also allowed an expression for the conductive heat flow q(t) from the ocean floor, which approaches an approximately constant value beyond 120 million years.
Parsons and Sclater concluded that some style of mantle convection must apply heat to the base of the plate everywhere to prevent cooling down below 125 km and lithosphere contraction (seafloor deepening) at older ages. Morgan and Smith showed that the flattening of the older seafloor depth can be explained by flow in the asthenosphere below the lithosphere.
The age-depth-heat flow relationship continued to be studied with refinements in the physical parameters that define ocean lithospheric plates.
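For comparison, a short sketch (again with illustrative parameter choices matching the approximate closed-form depth curves quoted above) prints both models side by side; the flattening of the plate-model curve toward roughly 6400 m for old seafloor is visible in the output.

```python
import math

def depth_halfspace(age_myr):
    # Cooling mantle (half-space) model: depth keeps growing with sqrt(age).
    return 2500.0 + 350.0 * math.sqrt(age_myr)

def depth_plate(age_myr):
    # Cooling plate model: depth levels off toward ~6400 m for old lithosphere.
    return 6400.0 - 3200.0 * math.exp(-age_myr / 62.8)

for age in (10, 40, 80, 120, 160):
    print(f"{age:4d} Myr   half-space {depth_halfspace(age):5.0f} m   "
          f"plate {depth_plate(age):5.0f} m")
```

The two curves agree closely near 20 million years and diverge for old seafloor, where only the plate model reproduces the observed near-constant depth.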
Impacts
The usual method for estimating the age of the seafloor is from marine magnetic anomaly data and applying the Vine-Matthews-Morley hypothesis. Other ways include expensive deep sea drilling and dating of core material. If the depth is known at a location where anomalies are not mapped or are absent, and seabed samples are not available, knowing the seabed depth can yield an age estimate using the age-depth relationships.
Along with this, if the seafloor spreading rate in an ocean basin increases, then the average depth in that ocean basin decreases and therefore its volume decreases (and vice versa). This results in global eustatic sea level rise (fall) because the Earth is not expanding. Two main drivers of sea level variation over geologic time are then changes in the volume of continental ice on the land, and the changes over time in ocean basin average depth (basin volume) depending on its average age.
See also
Sea level
Sea-level curve
Sea level equation
Sea level rise
References
Further reading
Coastal and oceanic landforms
Physical oceanography
Basalt
Geological processes
Plate tectonics
Volcanic landforms
Oceanographical terminology | Seafloor depth versus age | [
"Physics",
"Mathematics"
] | 1,892 | [
"Functions and mappings",
"Applied and interdisciplinary physics",
"Mathematical objects",
"Vertical distributions",
"Mathematical relations",
"Physical oceanography"
] |
62,424,009 | https://en.wikipedia.org/wiki/School%20belonging | The most commonly used definition of school belonging comes from a 1993 academic article by researchers Carol Goodenow and Kathleen Grady, who describe school belonging as "the extent to which students feel personally accepted, respected, included, and supported by others in the school social environment." The construct of school belonging involves feeling connected with and attached to one's school. It also encompasses involvement and affiliation with one's school community. Conversely, students who do not feel a strong sense of belonging within their school environment are frequently described as being alienated or disaffected. There are a number of terms within educational research that are used interchangeably with school belonging, including school connectedness, school attachment, and school engagement.
School belonging is determined by a myriad of factors, including academic achievement and motivation, personal characteristics, social relationships, demographic characteristics, school climate, and participation in extracurricular activities. Research indicates that school belonging has significant implications for students, as it has been consistently linked with academic outcomes, psychological adjustment, well-being, identity formation, mental health, and physical health—it is considered a fundamental aspect of students' development. A sense of belonging to one's school is considered particularly important for adolescents because they are within a period of transition and identity formation, and research has found that school belonging significantly declines during this period.
Psychological Sense of School Membership (PSSM), developed in 1993, is one of the measures to ascertain the degree to which students feel a sense of school belonging. Students rate the extent to which they agree or disagree with statements, such as "People here notice when I'm good at something." In 2003, the Centers for Disease Control and Prevention held an international convention where the Wingspread Declaration on School Connections was developed as a group of tactics to increase students' sense of belonging and connection with their school.
Prevalence and trajectory
Research indicates that many students have deficient feelings of school belonging. The Programme for International Student Assessment (PISA) has investigated school belonging and disaffection in students around the world since 2003. Their most recent collection of data occurred in 2018. Approximately 600,000 students, representing 32 million 15-year-olds (aged between 15 years 3 months and 16 years 2 months) from 79 countries and economies, participated in PISA 2018. Their analyses revealed that a significant proportion of students around the world lack strong feelings of belongingness to school. On average, a third of all students surveyed felt they did not belong to their school. In addition, they found that one in five students feels like an outsider at school and one in six reports feeling lonely. In most of the education systems, students who were socio-economically disadvantaged reported a weaker sense of belonging to school. On average, students' sense of belonging to school declined by 2% between 2015 and 2018. The proportion of students who do not feel like they belong to school has increased since 2003, indicating a global trend of deteriorating school belonging.
School belonging tends to decrease as students grow older, as indicated in several different research studies. In one study involving students from Latin America, Asia, and Europe, researchers Cari Gillen-O'Neel and Andrew Fuligni found that in childhood, students generally report high levels of school belonging. However, once students transition into middle school and adolescence, their perceptions of school belonging drop significantly. Similarly, a separate study found that students' school belonging decreased in the transition from middle to high school; these students also displayed an increase in depressive symptoms and a decline in social support, which could be considered either causes or consequences of the decline in school belonging. This trend has been replicated in many other studies, suggesting that school belonging declines once students reach adolescence.
Determinants
A meta-analysis of 51 studies (N = 67,378) by K. Allen and colleagues (2018) identified multiple individual and social-level factors that influence school belonging. These core themes include academic factors, personal characteristics, social relationships, demographic characteristics, school climate and extra-curricular activities. For many of the determinants of school belonging, it is likely that each of them has a reciprocal relationship with a student's sense of belonging. That is, they operate as both antecedents and consequences.
Academic factors
Research has documented the influence of academic factors (i.e. achievement, motivation, hardiness, interest in school) on students' school belonging. Academic achievement, or one's skills and competencies in school, has been identified as a substantial predictor of school belonging. For example, research has demonstrated that students' grade point averages (GPAs), a common measure of academic achievement, are positively associated with school belonging. This means that students who have higher GPAs have higher levels of school belonging. Studies have also found several measures of academic motivation to be determinants of students' school belonging. Academic motivation encompasses behaviors such as homework completion, setting goals, expectancy of success, and effort and engagement within the classroom. Carol Goodenow and Kathleen Grady found each of these sub-sects of academic motivation to be significant predictors of students' perceptions of school belonging. More recent research has replicated these findings, suggesting that academic motivation plays an important role in developing feelings of school belonging. In addition, students' perceived value of school influences their school belonging: when they perceive their assignments and education as instructive, meaningful, and valuable, they are more likely to report greater school belonging.
Personal characteristics
Personal characteristics refer to students' distinctive qualities, traits, personality, emotions, and attributes, and have been consistently identified as a substantial determinant of school belonging. Personal characteristics can be classified as either positive or negative. Positive personal characteristics such as self-esteem, self-efficacy, positive affect, and effective emotional regulation have been shown to help foster students' sense of school belonging. A study by Xin Ma found that students' self-esteem had the greatest impact on school belonging compared to all other personal factors. Conversely, negative personal characteristics like anxiety, depressive symptoms, heightened stress, negative affect, and mental illness can lower students' perceptions of school belonging. Emotional instability can further influence school belonging by negatively affecting students' educational experiences.
Social relationships
Social relationships are involved in developing students' feelings of belonging within a school. There are large, positive correlations between school belonging and positive social relations with peers, teachers, and parents. Support, acceptance, and encouragement from these sources can help students develop the feeling that they connect and identify with their school.
Peers
Peer relations have been identified as a direct contributor to students' development of school belonging. Positive social relations with peers involve feelings of acceptance, connection, encouragement, academic and social support, trust, closeness, and caring. Such qualities within a peer relationship can significantly facilitate students' feelings of school belonging. When students are rejected or unsupported by their peers, they may experience anxiety, stress, and alienation. This alters their perceptions of belonging at school because the school environment now seems unwelcome and distressing, making it harder to identify and connect with the school.
Parents
Relationships with one's parents can have significant implications for students' feelings of school belonging, given that parents typically provide students' first social relationships. Positive parental relations include parents providing academic and social support, healthy communication, encouragement, compassion, acceptance, and safety. Such qualities within parent-child relationships have been shown to foster students' sense of school belonging by influencing their perceived connection with their school environment.
Teachers
Teachers have been identified as being noteworthy contributors to students' feelings of belonging at school. Several academic studies have identified teacher support as the strongest predictor of school belonging compared to support from peers or parents. Teachers can help instil school belonging by developing a safe and healthy classroom climate, providing academic and social support, fostering respect amongst peers, and treating students fairly. Teachers can also promote feelings of school belonging by being friendly, approachable, and making an effort to connect with their students. Teaching practices that seem to promote students' school belonging include scaffolding learning, commending positive behaviors and performance, allowing students autonomy within the classroom, and using academic pressure, such as holding high expectations of students.
Demographic characteristics
Gender
The relationship between gender and school belonging is largely inconclusive because research has produced conflicting results. Several studies have found gender differences in perceptions of school belonging: some research indicates that females possess a higher sense of school belonging compared to males, while other studies have found the opposite effect and conclude that males have higher school belonging than females. Other research has demonstrated that school belonging is not at all influenced by gender.
Race and ethnicity
Similar to gender, some research on the effect of race and ethnicity on school belonging has found a significant relationship between the two, while other research contradicts these findings. For example, one study found that Black students experience lower feelings of school belonging compared to white students, however, other research has found the opposite pattern or has found no significant influence of race on school belonging at all.
School climate
A school's climate can have significant consequences for students feeling like they belong at school. School climate broadly refers to the feelings associated with a school's environment and quality; it is considered to have physical (e.g. adequacy of buildings), social (e.g. interpersonal relationships), and academic dimensions (e.g. teaching quality). School climate influences school belonging through its support (or lack thereof) of students' feelings of connection with and attachment to their school. One important facet of school climate is school safety, which is how safe students feel at school. It includes variables such as a school's safety policies, use of discipline, bullying prevalence, and fairness. School safety is regarded as an important determinant of school belonging. Higher perceptions of school safety is associated with students holding greater feelings of school belonging.
Extracurricular activities
Research has shown that being involved in extracurricular activities can positively influence students' perceptions of school belonging. For example, researchers Casey Knifsend and Sandra Graham found that students who participated in two extracurricular activities reported greater feelings of school belonging compared to those students who participated in fewer than two. Other studies have replicated this relationship, highlighting the importance of participating in extracurricular activities for developing school belonging. Extracurricular activities may influence school belonging by providing collaborative and long-term interactions between students and their peers.
A Socio-ecological perspective
The many determinants of school belonging can be conceptualised in a socio-ecological model. The Socio-ecological Model of School Belonging developed by Allen and Colleagues (2016), adapted from Bronfenbrenner's Socio-ecological systems theory (1979) is used to describe the school system as whole and the multiple and dynamic influencers of school belonging. The model depicts students at the centre of their school environment. The inner circles describe biological and individual level characteristics that influence school belonging. These factors include biological traits and personal characteristics such as emotional stability and academic motivation. The microsystem is represented by relationships with others, specifically, teachers, peers, and parents. The mesosystem represents the school policy and practices that occur within the day-to-day operations of the school and the exosystem represents a broader level that may include the wider school community. The macro-system describes the cultural context of a school that may be influenced by where a school is geographically located, the external social climate, and other factors such as history, legislation, and government driven priorities.
Consequences
Psychological health and adjustment
School belonging has numerous consequences for students' psychological health and adjustment. Research has shown that when students feel a greater sense of school belonging, their mental health and well-being is improved: they exhibit greater levels of emotional stability, lower levels of depression, reduced stress, and an increase in positive emotions, such as happiness and pride. Feelings of school belonging have also been shown to predict self-esteem, self-concept, and self-worth. Students who possess school belonging experience more positive life transitions as well, which can have important implications for psychological health and adjustment.
On the other hand, students who do not have a strong sense of school belonging are at risk for a number of disadvantageous psychological and mental health outcomes. Students who lack a sense of belonging at school are at significantly greater risk for exhibiting anxiety, depression, negative affect, suicidal ideation, and developing mental illness overall. Lacking a sense of school belonging may also increase their feelings of social rejection and alienation.
Academic development and outcomes
Feelings of school belonging can have a significant influence on academic development and outcomes for students. School belonging is related to students' expectancy of success, effort in school, and perceived value of school and education. Greater feelings of school belonging has been shown to increase engagement and participation both inside school and within extracurricular activities. Similarly, school belonging is associated with a greater commitment to school. Strong feelings of school belonging have also been shown to improve overall academic performance and achievement, as shown by increases in grade point averages. A sense of belonging at school can also improve academic self-efficacy, or in other words, students' belief in their ability to succeed in school.
Research has suggested that school belonging can also alleviate the prevalence of negative academic outcomes. Greater feelings of school belonging are associated with decreased misbehavior and misconduct, such as fighting, bullying, and vandalism. It can improve school attendance by reducing the frequency of truancy and absenteeism. Having school belonging also reduces students' likelihood of dropping out of school, thus improving rates of school completion. Conversely, students who lack a sense of school belonging are at greater risk for disengagement from school and potentially dropping out.
Physical health
School belonging has several implications for students' physical health. Students who possess feelings of school belonging exhibit reduced risk of having a stroke or disease. School belonging is also associated with lower mortality rates for students. In addition, perceptions of school belonging have a significant inverse relationship with risk-taking behaviors, including substance and tobacco use and early sexualization. In other words, students who have higher levels of school belonging are less likely to engage in risk-taking behaviors.
Measures
There are a number of measures used to assess school belonging. The most commonly used measures include:
Psychological Sense of School Membership (PSSM)
The most commonly used measure of school belonging is the Psychological Sense of School Membership (PSSM) scale, which was developed by Carol Goodenow in 1993. This scale measures students' feelings of belonging and membership within a school setting by having students respond to 18 items regarding their personal feelings and experiences within school. It is designed to be used with students of all ages and nationalities. Students answer the items on a scale ranging from 1 to 5, where 1 indicates Not at all true, and 5 indicates Completely true. The items are intended to measure students' perceptions of acceptance, academic and social support, value, and contentment within their social relationships at school. The following are some examples of items that students respond to: "People here notice when I'm good at something," "Other students take my opinions seriously," and "I feel like a real part of this school." Research has found the PSSM to have high validity and reliability, attesting to its status as a valuable and functional measure of school belonging.
Hemingway Measure of Adolescent Connectedness (HMAC)
The Hemingway Measure of Adolescent Connectedness (HMAC) was constructed by Michael Karcher in 1999 and has been used in research as a measure of school belonging for adolescents specifically. It contains 74 items on a scale ranging from 1 (Not true at all) to 5 (Very true). It examines adolescents' perceptions of connectedness, or in other words, their involvement with and valuation of both the specific and general social support they receive, across three sub-categories: social connectedness, academic connectedness, and family connectedness. The social connectedness component measures adolescents' feelings of connection towards their friends, neighborhood, and self. Academic connectedness evaluates adolescents' sense of connection towards their school, teachers, peers, and academic self. Finally, the family connectedness component assesses adolescents' feelings of connectedness to their parents, siblings, religion, and ancestry. Items measuring school belonging specifically include: "I feel good about myself when I am at school," "I get along well with the other students in my classes" and "I enjoy being at school." This scale has been found to be generalizable to adolescents across the globe.
School Connectedness Scale (SCS)
Jill Hendrickson Lohmeier and Steven W. Lee created the School Connectedness Scale (SCS) in 2011 to assess students' peer, adult, and school relationships within three distinct categories: general support (belongingness), specific support (relatedness), and engagement (connectedness). The scale includes 54 self-report items presented on a scale ranging from 1 to 5, where 1 represents 'Not at all true' and 5 represents 'Completely true'. Some items include "Students at my school help each other", "I am very involved in activities at my school, like clubs or teams", "Teachers at my school care about their students", and "I like spending time with my classmates." The SCS has shown generalizability to students from diverse populations, including different ages and ethnicities.
School Engagement Instrument (SEI)
The School Engagement Instrument (SEI) was designed by James Appleton, Sandra Christenson, Dongjin Kim, and Amy Reschly in 2006 and is commonly used to gauge perceptions of school belonging. It includes 35 items on a four-point scale ranging from Strongly agree to Strongly disagree that measure students' cognitive and affective engagement within the school environment. The items are categorized into six sub-domains: "future goals and aspirations, control and relevance of schoolwork, extrinsic motivation, family support for learning, peer support for learning, and teacher-student relationships." Items from the SEI include: "Overall, my teachers are open and honest with me," "Students at my school are there for me when I need them," "When I have problems at school, my family/guardian(s) want to know about it," and "What I'm learning in my classes will be important for my future."
Implications for practice
In 2003, the Centers for Disease Control and Prevention (CDC) held an international convention to develop tactics for bolstering students' perceptions of school belonging. They developed the Wingspread Declaration on School Connections which identified the following strategies for increasing students' belonging to and connection with their school:
Implementing high standards and expectations, and providing academic support to all students.
Applying fair and consistent disciplinary policies that are collectively agreed upon and fairly enforced.
Creating trusting relationships among students, teachers, staff, administrators, and families.
Hiring and supporting capable teachers skilled in content, teaching techniques, and classroom management to meet each learner's needs.
Fostering high parent/family expectations for school performance and school completion.
Ensuring that every student feels close to at least one supportive adult at school.
—"Wingspread Declaration on School Connections", Journal of School Health
The CDC later advanced the work of the Wingspread Declaration in 2009 by conducting a comprehensive, systematic review of school belonging and connectedness using sources from expert researchers, the government, educators, and more. This work produced four additional strategies for enhancing students' perception of belonging within school:
Adult Support: School staff members can dedicate their time, interest, attention, and emotional support to students.
Belonging to a Positive Peer Group: A stable network of peers can improve student perceptions of school.
Commitment to Education: Believing that school is important to their future and perceiving that the adults in school are investing in their education can help keep students engaged in their own learning and involved in school activities.
School Environment: The physical environment and psychosocial climate can set the stage for positive student perceptions of school.
—"School Connectedness: Strategies for Increasing Protective Factors Among Youth", Centers for Disease Control and Prevention
Student-level implications for practice
Student-level interventions may also increase a sense of school belonging. Research has indicated that social and emotional learning opportunities can foster school belonging in students. Many individual characteristics found to enhance a student's sense of belonging can be taught to students and thus offer a preventative mechanism to support their sense of school belonging. For example, research suggests that teaching emotional regulation, coping skills, interpersonal skills, and skills related to academic motivation hold promise for supporting a student's sense of school belonging.
See also
Education in the United States
References
External links
School belonging measures are available from the International Belonging Research Laboratory.
Developmental psychology
Educational assessment and evaluation
Educational environment
Education and health
Educational research
Teaching | School belonging | [
"Biology"
] | 4,224 | [
"Behavioural sciences",
"Behavior",
"Developmental psychology"
] |
62,424,601 | https://en.wikipedia.org/wiki/Enemy%20release%20hypothesis | The enemy release hypothesis is among the most widely proposed explanations for the dominance of exotic invasive species. In its native range, a species has co-evolved with pathogens, parasites and predators that limit its population. When it arrives in a new territory, it leaves these old enemies behind, while those in its introduced range are less effective at constraining the introduced species' population. The result is sometimes rampant growth that threatens native species and ecosystems.
Explanations for invasive species success
Ecologists have identified many potential reasons for the success of invasive species, including higher growth rates or seed production than native species, more aggressive dispersal, tolerance of environmental heterogeneity, more efficient use of resources, and phenological advantages such as an earlier or longer flowering season. Invasive species may have greater phenotypic plasticity in important traits than their native competitors, allowing them to tolerate more environmental variation, or exhibit the ability to evolve rapidly to adapt to their new conditions. In addition, some habitats, due to disturbances or other factors, may be more vulnerable to invasion than others. Most exotic species do not become invasive, and some authors suggest that those that do represent repeated and larger introductions that generate propagule pressure. Among the many explanations for invasive success, however, the enemy release hypothesis has had the most support.
Enemy release hypothesis
The enemy release hypothesis (ERH) is most often applied to invasive plants, but there is evidence for its usefulness in other systems, including fish, amphibians, insects, and crustaceans. The ERH assumes that: (1) herbivores, pathogens and parasites suppress plant population growth, (2) these enemies plague native plants more than immigrating non-native species, and (3) non-native plants are able to leverage this advantage into more rapid population growth.
An early study of the flowering plant Silene latifolia found that about 60% of its invasive populations in North America were free from herbivory, while 84% of those in its native Europe exhibited damage from at least one herbivore. A study of almost 500 exotic plant species in the United States found that they were infected by 84% fewer fungi and 24% fewer virus species than in their native ranges. And a meta-analysis covering 15 exotic plant studies found the number of insect herbivores on average to be greater in their native than in their introduced range, with overall damage greater on native plants than on the introduced species.
Support for the theory, however, is not universal. In some cases, native pathogens, parasites and herbivores present significant biotic resistance to potential invasive species, as do non-native enemies that may have arrived prior to the exotic plant. Enemy release may be weaker, too, when an exotic species is more closely related to native species in their introduced ranges, making them more likely to share herbivores or pathogens. In a meta-analysis of 19 research studies involving 72 pairs of native and invasive plants, invasive exotic species did not incur less damage than their native counterparts and, in fact, exhibited lower relative growth rates. In other cases, invasive success was due not to release from herbivory but greater tolerance of it.
Related theories
The ERH is closely related to two other important theories for invasive species success: the evolution of increased competitive ability (EICA) and novel weapons hypotheses (NWH). EICA asserts that because exotic plants are released from the burden of defending themselves against herbivores in their native range, they evolve to reallocate those resources to traits, such as growth and seed production, that make them more formidable competitors in their introduced range. ERH is an ecological mechanism, while EICA rests on evolutionary adaptation. The experimental support for EICA is mixed. For example, Solidago altissima plants artificially released from herbivory became more competitive against other plant species. However, a meta-analysis of 30 studies that found evidence of evolutionary shifts in introduced species, showed no indication of a trade-off between herbivore defenses and growth.
The novel weapons hypothesis (NWH) is another perspective on the enemy release hypothesis. Some plants evolve chemical defenses to compete in their original range. In their introduced range, the native species are highly vulnerable to these chemicals because they have no prior experience with them, giving the exotic species a competitive advantage.
Practical applications
A final argument for the ERH lies in the success of biological control of some invasive species, in which herbivores or other enemies from their native environment are introduced to suppress population growth in their adopted range. For example, when conservationists sought to control the invasive St.-John's-wort (Hypericum perforatum) in North America, they imported a leaf herbivore (Chrysolina quadrigemina) from its native range in Europe.
References
Population ecology
Invasive species | Enemy release hypothesis | [
"Biology"
] | 989 | [
"Pests (organism)",
"Invasive species"
] |
62,426,258 | https://en.wikipedia.org/wiki/Nelson%20Max | Nelson Max is a professor of computer science at the University of California at Davis. He received his Ph.D. in Mathematics from Harvard University in 1967, advised by Herman Gluck. His research interests include scientific visualization, computer animation, photorealistic computer graphics rendering, multi-view stereo reconstruction, and augmented reality. In his visualization section, he worked on molecular graphics, and volume and flow visualization, particularly on irregular finite element meshes. He has rendered realistic lighting effects in clouds, trees, and water waves, and has produced numerous computer animations, shown at the annual ACM SIGGRAPH conferences, and in OMNIMAX stereo at the Fujitsu Pavilions at Expo ’85 in Tsukuba Japan, and at Expo ’90 in Osaka Japan. He received the prestigious Steven A. Coons Award in 2007, and is a Fellow of the ACM and a member of the ACM SIGGRAPH Academy.
His computer animation in the early 1970s for the Topology Films Project included the award winning animated films "Space Filling Curves," showing continuous fractal curves that pass through every point in a square, and "Turning a Sphere Inside Out," showing how to turn a sphere inside out without tearing or creasing the surface, but allowing the surface to cross itself. In photorealistic rendering, he was the first to render beams of light and shadow from atmospheric scattering, and developed horizon mapping to render bump shadows on bump-mapped surfaces. At Lawrence Livermore National Laboratory in 1981, he produced the film "Carla's Island" showing reflections of the sunset on ocean waves, using vectorized ray tracing on the Cray 1 supercomputer.
References
Living people
Computer graphics professionals
Harvard University alumni
University of California, Davis faculty
Year of birth missing (living people) | Nelson Max | [
"Technology"
] | 367 | [
"Computing stubs",
"Computer specialist stubs"
] |
62,426,375 | https://en.wikipedia.org/wiki/Marcia%20Groszek | Marcia Jean Groszek is an American mathematician whose research concerns mathematical logic, set theory, forcing, and recursion theory. She is a professor of mathematics at Dartmouth College.
Education
As a high school student, Groszek felt isolated because of her interest in mathematics,
but she found a sense of community through her participation in the Hampshire College Summer Mathematics Program, and she went on to earn her bachelor's degree at Hampshire College. She completed her Ph.D. in 1981 at Harvard University. Her dissertation, Iterated Perfect Set Forcing and Degrees of Constructibility, was supervised by Akihiro Kanamori.
Research
With Theodore Slaman, Groszek showed that (if they exist at all) non-constructible real numbers must be widespread, in the sense that every perfect set contains one of them, and they asked analogous questions of the non-computable real numbers. With Slaman, she has also shown that the existence of a maximally independent set of Turing degrees, of cardinality less than the cardinality of the continuum, is independent of ZFC.
In the theory of ordinal definable sets, an unordered pair of sets is said to be a Groszek–Laver pair if the pair is ordinal definable but neither of its two elements is; this concept is named for Groszek and Richard Laver, who observed the existence of such pairs in certain models of set theory.
Service and outreach
Groszek was program chair of the 2014 North American annual meeting of the Association for Symbolic Logic. Her interest in logic extends to education as well as to research; she has participated in the Association for Symbolic Logic Committee on Logic Education, and in 2011 she was co-organizer of an Association for Symbolic Logic special session on "Logic in the Undergraduate Mathematics Curriculum".
With mathematics colleague Dorothy Wallace and performance artist Josh Kornbluth, Groszek has also helped write and produce a sequence of educational videos about mathematics.
Selected publications
References
Year of birth missing (living people)
Living people
20th-century American mathematicians
21st-century American mathematicians
Mathematical logicians
Women logicians
American set theorists
Hampshire College alumni
Harvard University alumni
Dartmouth College faculty
20th-century American women mathematicians
21st-century American women mathematicians | Marcia Groszek | [
"Mathematics"
] | 456 | [
"Mathematical logic",
"Mathematical logicians"
] |
62,427,038 | https://en.wikipedia.org/wiki/Lattice%20of%20stable%20matchings | In mathematics, economics, and computer science, the lattice of stable matchings is a distributive lattice whose elements are stable matchings. For a given instance of the stable matching problem, this lattice provides an algebraic description of the family of all solutions to the problem. It was originally described in the 1970s by John Horton Conway and Donald Knuth.
By Birkhoff's representation theorem, this lattice can be represented as the lower sets of an underlying partially ordered set. The elements of this set can be given a concrete structure as rotations, with cycle graphs describing the changes between adjacent stable matchings in the lattice. The family of all rotations and their partial order can be constructed in polynomial time, leading to polynomial time solutions for other problems on stable matching including the minimum or maximum weight stable matching. The Gale–Shapley algorithm can be used to construct two special lattice elements, its top and bottom element.
Every finite distributive lattice can be represented as a lattice of stable matchings.
For an instance with n doctors and n hospitals, the number of elements in the lattice is roughly proportional to n log n on average, but can be exponential in n in the worst case.
Computing the number of elements is #P-complete.
Background
In its simplest form, an instance of the stable matching problem consists of two sets of the same number of elements to be matched to each other, for instance doctors and positions at hospitals. Each element has a preference ordering on the elements of the other type: the doctors each have different preferences for which hospital they would like to work at (for instance based on which cities they would prefer to live in), and the hospitals each have preferences for which doctors they would like to work for them (for instance based on specialization or recommendations). The goal is to find a matching that is stable: no pair of a doctor and a hospital prefer each other to their assigned match. Versions of this problem are used, for instance, by the National Resident Matching Program to match American medical students to hospitals.
In general, there may be many different stable matchings. For example, suppose there are three doctors (A,B,C) and three hospitals (X,Y,Z) which have preferences of:
A: YXZ B: ZYX C: XZY
X: BAC Y: CBA Z: ACB
There are three stable solutions to this matching arrangement:
The doctors get their first choice and the hospitals get their third: AY, BZ, CX.
All participants get their second choice: AX, BY, CZ.
The hospitals get their first choice and the doctors their third: AZ, BX, CY.
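As an illustration, a small Python sketch can check these three matchings against the preference lists above; the data layout and the helper name is_stable are chosen here for convenience and are not part of any standard library.

```python
# Preference lists from the example above, most preferred first.
doctor_prefs = {"A": "YXZ", "B": "ZYX", "C": "XZY"}
hospital_prefs = {"X": "BAC", "Y": "CBA", "Z": "ACB"}

def is_stable(matching):
    """matching maps each doctor to a hospital; return True when no doctor and
    hospital prefer each other to their assigned partners."""
    doctor_of = {h: d for d, h in matching.items()}
    for d, prefs in doctor_prefs.items():
        for h in prefs:
            if h == matching[d]:
                break  # d does not prefer any hospital listed after its own
            # d prefers h to its assigned hospital; unstable if h prefers d back
            if hospital_prefs[h].index(d) < hospital_prefs[h].index(doctor_of[h]):
                return False
    return True

for m in ({"A": "Y", "B": "Z", "C": "X"},
          {"A": "X", "B": "Y", "C": "Z"},
          {"A": "Z", "B": "X", "C": "Y"}):
    print(m, is_stable(m))  # each of the three matchings prints True
```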
The lattice of stable matchings organizes this collection of solutions, for any instance of stable matching, giving it the structure of a distributive lattice.
Structure
Partial order on matchings
The lattice of stable matchings is based on the following weaker structure, a partially ordered set whose elements are the stable matchings. Define a comparison operation ≤ on the stable matchings,
where M1 ≤ M2 if and only if all doctors prefer matching M2 to matching M1, at least weakly: either they have the same assigned hospital in both matchings, or they are assigned a better hospital in M2 than they are in M1. If the doctors disagree on which matching they prefer, then M1 and M2 are incomparable: neither one is ≤ the other.
The same comparison operation can be defined in the same way for any two sets of elements, not just doctors and hospitals. The choice of which of the two sets of elements to use in the role of the doctors is arbitrary. Swapping the roles of the doctors and hospitals reverses the ordering of every pair of elements, but does not otherwise change the structure of the partial order.
Then this ordering gives the matchings the structure of a partially ordered set. To do so, it must obey the following three properties:
For every matching M, M ≤ M.
For every two different matchings M1 and M2, it cannot be the case that both M1 ≤ M2 and M2 ≤ M1 are true.
For every three different matchings M1, M2, and M3, if M1 ≤ M2 and M2 ≤ M3, then M1 ≤ M3.
For stable matchings, all three properties follow directly from the definition of the comparison operation.
Top and bottom elements
Define the best match of an element x of a stable matching instance to be the element that x most prefers, among all the elements that x can be matched to in a stable matching, and define the worst match analogously. Then no two elements can have the same best match.
For, suppose to the contrary that doctors d1 and d2 both have hospital h as their best match, and that h prefers d1 to d2. Then, in the stable matching that matches d2 to h (which must exist by the definition of the best match of d2), d1 and h would be an unstable pair, because h prefers d1 to d2 and d1 prefers h to any other partner in any stable matching. This contradiction shows that assigning all doctors to their best matches gives a matching. It is a stable matching, because any unstable pair would also be unstable for one of the matchings used to define best matches. As well as assigning all doctors to their best matches, it assigns all hospitals to their worst matches. In the partial ordering on the matchings, it is greater than all other stable matchings.
Symmetrically, assigning all doctors to their worst matches and assigning all hospitals to their best matches gives another stable matching. In the partial order on the matchings, it is less than all other stable matchings.
The Gale–Shapley algorithm gives a process for constructing stable matchings, that can be described as follows: until a matching is reached, the algorithm chooses an arbitrary hospital with an unfilled position, and that hospital makes a job offer to the doctor it most prefers among the ones it has not already made offers to. If the doctor is unemployed or has a less-preferred assignment, the doctor accepts the offer (and resigns from their other assignment if it exists). The process always terminates, because each doctor and hospital interact only once. When it terminates, the result is a stable matching, the one that assigns each hospital to its best match and that assigns all doctors to their worst matches. An algorithm that swaps the roles of the doctors and hospitals (in which unemployed doctors send job applications to their next preference among the hospitals, and hospitals accept applications either when they have an unfilled position or they prefer the new applicant, firing the doctor they had previously accepted) instead produces the stable matching that assigns all doctors to their best matches and each hospital to its worst match.
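A minimal Python sketch of the hospital-proposing process just described, assuming for concreteness a dictionary-of-preference-lists layout (the function and variable names are illustrative); on the three-doctor example above it returns the hospital-optimal stable matching AZ, BX, CY.

```python
def gale_shapley(hospital_prefs, doctor_prefs):
    """hospital_prefs[h] and doctor_prefs[d] list partners, most preferred first.
    Hospitals propose, so the result is best for hospitals and worst for doctors."""
    rank = {d: {h: i for i, h in enumerate(p)} for d, p in doctor_prefs.items()}
    next_offer = {h: 0 for h in hospital_prefs}   # index of the next doctor to ask
    doctor_of = {}                                # hospital -> accepted doctor
    hospital_of = {}                              # doctor -> accepted hospital
    unfilled = list(hospital_prefs)
    while unfilled:
        h = unfilled.pop()
        d = hospital_prefs[h][next_offer[h]]
        next_offer[h] += 1
        current = hospital_of.get(d)
        if current is None or rank[d][h] < rank[d][current]:
            if current is not None:
                del doctor_of[current]
                unfilled.append(current)          # the doctor resigns; that position reopens
            doctor_of[h], hospital_of[d] = d, h
        else:
            unfilled.append(h)                    # offer declined; h will try its next choice
    return doctor_of

print(gale_shapley({"X": "BAC", "Y": "CBA", "Z": "ACB"},
                   {"A": "YXZ", "B": "ZYX", "C": "XZY"}))
# -> {'Z': 'A', 'Y': 'C', 'X': 'B'}, the hospital-optimal stable matching
```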
Lattice operations
Given any two stable matchings M1 and M2 for the same input, one can form two more matchings M1 ∨ M2 and M1 ∧ M2 in the following way:
In M1 ∨ M2, each doctor gets their best choice among the two hospitals they are matched to in M1 and M2 (if these differ) and each hospital gets its worst choice.
In M1 ∧ M2, each doctor gets their worst choice among the two hospitals they are matched to in M1 and M2 (if these differ) and each hospital gets its best choice.
(The same operations can be defined in the same way for any two sets of elements, not just doctors and hospitals.)
Then both M1 ∨ M2 and M1 ∧ M2 are matchings.
It is not possible, for instance, for two doctors to have the same best choice and be matched to the same hospital in M1 ∨ M2, for regardless of which of the two doctors is preferred by the hospital, that doctor and hospital would form an unstable pair in whichever of M1 and M2 they are not already matched in. Because the doctors are matched in M1 ∨ M2, the hospitals must also be matched. The same reasoning applies symmetrically to M1 ∧ M2.
Additionally, both M1 ∨ M2 and M1 ∧ M2 are stable.
There cannot be a pair of a doctor and hospital who prefer each other to their match, because the same pair would necessarily also be an unstable pair for at least one of M1 and M2.
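The coordinate-wise definition of the two operations translates directly into code. The following sketch (with illustrative names, reusing the three-doctor example from earlier in the article) computes the join and meet of the doctor-optimal and hospital-optimal matchings; the stability of the results follows from the argument above rather than from anything the code checks.

```python
def join_and_meet(m1, m2, doctor_prefs):
    """m1, m2: stable matchings as dicts doctor -> hospital.
    In the join each doctor keeps the better of their two hospitals,
    in the meet the worse; both results are again stable matchings."""
    rank = {d: {h: i for i, h in enumerate(p)} for d, p in doctor_prefs.items()}
    join = {d: min(m1[d], m2[d], key=lambda h: rank[d][h]) for d in m1}
    meet = {d: max(m1[d], m2[d], key=lambda h: rank[d][h]) for d in m1}
    return join, meet

doctor_prefs = {"A": "YXZ", "B": "ZYX", "C": "XZY"}
doctor_optimal = {"A": "Y", "B": "Z", "C": "X"}
hospital_optimal = {"A": "Z", "B": "X", "C": "Y"}
print(join_and_meet(doctor_optimal, hospital_optimal, doctor_prefs))
# -> the join equals the doctor-optimal matching, the meet the hospital-optimal one
```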
Lattice properties
The two operations ∨ and ∧ form the join and meet operations of a finite distributive lattice.
In this context, a finite lattice is defined as a partially ordered finite set in which there is a unique minimum element and a unique maximum element, in which every two elements have a unique least element greater than or equal to both of them (their join) and every two elements have a unique greatest element less than or equal to both of them (their meet).
In the case of the operations ∨ and ∧ defined above, the join M1 ∨ M2 is greater than or equal to both M1 and M2 because it was defined to give each doctor their preferred choice, and because these preferences of the doctors are how the ordering on matchings is defined. It is below any other matching that is also above both M1 and M2, because any such matching would have to give each doctor an assigned match that is at least as good. Therefore, it fits the requirements for the join operation of a lattice.
Symmetrically, the operation ∧ fits the requirements for the meet operation.
Because they are defined using an element-wise minimum or element-wise maximum in the preference ordering, these two operations obey the same distributive laws obeyed by the minimum and maximum operations on linear orderings: for every three different matchings M1, M2, and M3,
M1 ∨ (M2 ∧ M3) = (M1 ∨ M2) ∧ (M1 ∨ M3)
and
M1 ∧ (M2 ∨ M3) = (M1 ∧ M2) ∨ (M1 ∧ M3).
Therefore, the lattice of stable matchings is a distributive lattice.
Representation by rotations
Birkhoff's representation theorem states that any finite distributive lattice can be represented by a family of finite sets, with intersection and union as the meet and join operations, and with the relation of being a subset as the comparison operation for the associated partial order. More specifically, these sets can be taken to be the lower sets of an associated partial order.
In the general form of Birkhoff's theorem, this partial order can be taken as the induced order on a subset of the elements of the lattice, the join-irreducible elements (elements that cannot be formed as joins of two other elements). For the lattice of stable matchings, the elements of the partial order can instead be described in terms of structures called rotations, described below.
Suppose that two different stable matchings M₁ and M₂ are comparable and have no third stable matching between them in the partial order. (That is, M₁ and M₂ form a pair of the covering relation of the partial order of stable matchings.) Then the set of pairs of elements that are matched in one but not both of M₁ and M₂ (the symmetric difference of their sets of matched pairs) is called a rotation. It forms a cycle graph whose edges alternate between the two matchings. Equivalently, the rotation can be described as the set of changes that would need to be performed to change the lower of the two matchings into the higher one (with lower and higher determined using the partial order). If two different stable matchings are separately the higher matching for the same rotation, then so is their meet. It follows that for any rotation, the set of stable matchings that can be the higher of a pair connected by the rotation has a unique lowest element. This lowest matching is join irreducible, and this gives a one-to-one correspondence between rotations and join-irreducible stable matchings.
If the rotations are given the same partial ordering as their corresponding join-irreducible stable matchings, then Birkhoff's representation theorem gives a one-to-one correspondence between lower sets of rotations and all stable matchings. The set of rotations associated with any given stable matching can be obtained by changing the given matching by rotations downward in the partial ordering, choosing arbitrarily which rotation to perform at each step, until reaching the bottom element, and listing the rotations used in this sequence of changes. The stable matching associated with any lower set of rotations can be obtained by applying the rotations to the bottom element of the lattice of stable matchings, choosing arbitrarily which rotation to apply when more than one can apply.
Every pair (d, h) of elements of a given stable matching instance belongs to at most two rotations: one rotation that, when applied to the lower of two matchings, removes the other assignments of d and h and instead assigns them to each other, and a second rotation that, when applied to the lower of two matchings, removes the pair (d, h) from the matching and finds other assignments for those two elements. Because there are n² pairs of elements (where n is the number of doctors and of hospitals), there are O(n²) rotations.
Mathematical properties
Universality
Beyond being a finite distributive lattice, there are no other constraints on the lattice structure of stable matchings. This is because, for every finite distributive lattice L, there exists a stable matching instance whose lattice of stable matchings is isomorphic to L.
More strongly, if a finite distributive lattice has k elements, then it can be realized using a stable matching instance whose numbers of doctors and hospitals are bounded by a polynomial function of k.
Number of lattice elements
The lattice of stable matchings can be used to study the computational complexity of counting the number of stable matchings of a given instance. From the equivalence between lattices of stable matchings and arbitrary finite distributive lattices, it follows that this problem has equivalent computational complexity to counting the number of elements in an arbitrary finite distributive lattice, or to counting the antichains in an arbitrary partially ordered set. Computing the number of stable matchings is #P-complete.
In a uniformly-random instance of the stable marriage problem with n doctors and n hospitals, the average number of stable matchings is asymptotically (1/e)·n·ln n. In a stable marriage instance chosen to maximize the number of different stable matchings, this number can be at least 2.28ⁿ,
and is also upper-bounded by an exponential function of n (significantly smaller than the naive factorial bound on the number of matchings).
Algorithmic consequences
The family of rotations and their partial ordering can be constructed in polynomial time from a given instance of stable matching, and provides a concise representation of the family of all stable matchings, which can for some instances be exponentially larger when listed explicitly. This allows several other computations on stable matching instances to be performed efficiently.
Weighted stable matching and closure
If each pair of elements in a stable matching instance is assigned a real-valued weight, it is possible to find the minimum or maximum weight stable matching in polynomial time. One possible method for this is to apply linear programming to the order polytope of the partial order of rotations, or to the stable matching polytope. An alternative, combinatorial algorithm is possible, based on the same partial order.
From the weights on pairs of elements, one can assign weights to each rotation, where a rotation that changes a given stable matching to another one higher in the partial ordering of stable matchings is assigned the change in weight that it causes: the total weight of the higher matching minus the total weight of the lower matching. By the correspondence between stable matchings and lower sets of rotations, the total weight of any matching is then equal to the total weight of its corresponding lower set, plus the weight of the bottom element of the lattice of matchings. The problem of finding the minimum or maximum weight stable matching becomes in this way equivalent to the problem of finding the minimum or maximum weight lower set in a partially ordered set of polynomial size, the partially ordered set of rotations.
This optimal lower set problem is equivalent to an instance of the closure problem, a problem on vertex-weighted directed graphs in which the goal is to find a subset of vertices of optimal weight with no outgoing edges. The optimal lower set is an optimal closure of a directed acyclic graph that has the elements of the partial order as its vertices, with an edge from x to y whenever y ≤ x in the partial order. The closure problem can, in turn, be solved in polynomial time by transforming it into an instance of the maximum flow problem.
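On a small example the equivalence between matching weights and lower sets of rotations can be checked by brute force. The sketch below uses a made-up three-rotation poset with hypothetical weights and simply enumerates the lower sets; a real implementation would use the max-flow reduction just described rather than enumeration.

```python
from itertools import combinations

# A tiny, made-up partial order of rotations: r3 requires r1 and r2 to be applied first.
rotations = ["r1", "r2", "r3"]
predecessors = {"r1": [], "r2": [], "r3": ["r1", "r2"]}
weight = {"r1": 4, "r2": -1, "r3": 3}   # change in matching weight caused by each rotation

def is_lower_set(subset):
    """A lower set must contain every predecessor of each of its members."""
    return all(p in subset for r in subset for p in predecessors[r])

best = max(
    (set(c) for k in range(len(rotations) + 1) for c in combinations(rotations, k)
     if is_lower_set(set(c))),
    key=lambda s: sum(weight[r] for r in s),
)
print(best, sum(weight[r] for r in best))  # {'r1', 'r2', 'r3'} with total weight 6
```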
Minimum regret
The regret of a participant in a stable matching is defined as the distance of their assigned match from the top of their preference list, and the regret of a stable matching as the maximum regret of any participant. One can then find the minimum-regret stable matching by a simple greedy algorithm that starts at the bottom element of the lattice of matchings and then repeatedly applies any rotation that reduces the regret of a participant with maximum regret, until this would cause some other participant to have greater regret.
Median stable matching
The elements of any distributive lattice form a median graph, a structure in which any three elements x, y, and z (here, stable matchings) have a unique median element that lies on a shortest path between any two of them. It can be defined as:
median(x, y, z) = (x ∧ y) ∨ (x ∧ z) ∨ (y ∧ z) = (x ∨ y) ∧ (x ∨ z) ∧ (y ∨ z).
For the lattice of stable matchings, this median can instead be taken element-wise, by assigning each doctor the median in the doctor's preferences of the three hospitals matched to that doctor in x, y, and z, and similarly by assigning each hospital the median of the three doctors matched to it. More generally, any set of an odd number of elements of any distributive lattice (or median graph) has a median, a unique element minimizing its sum of distances to the given set. For the median of an odd number of stable matchings, each participant is matched to the median element of the multiset of their matches from the given matchings. For a set of an even number of stable matchings, this can be disambiguated by choosing the assignment that matches each doctor to the higher of the two median elements, and each hospital to the lower of the two median elements. In particular, this leads to a definition for the median matching in the set of all stable matchings. However, for some instances of the stable matching problem, finding this median of all stable matchings is NP-hard.
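A sketch of the element-wise median of three matchings, using only the doctors' preference ranks (the function and argument names are illustrative):

```python
import statistics

def median_matching(m1, m2, m3, doctor_prefs):
    """Element-wise median of three stable matchings (doctor -> hospital dicts).

    Each doctor is assigned the hospital whose rank in their preference list is
    the median of the three ranks; for stable inputs the result is again stable.
    """
    d_rank = {d: {h: i for i, h in enumerate(p)} for d, p in doctor_prefs.items()}
    result = {}
    for d in m1:
        hospitals = [m1[d], m2[d], m3[d]]
        ranks = [d_rank[d][h] for h in hospitals]
        result[d] = hospitals[ranks.index(statistics.median_low(ranks))]
    return result
```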
References
Stable matching
Lattice theory | Lattice of stable matchings | [
"Mathematics"
] | 3,528 | [
"Fields of abstract algebra",
"Order theory",
"Lattice theory"
] |
62,427,467 | https://en.wikipedia.org/wiki/Anti-social%20Media%20Bill%20%28Nigeria%29 | Anti-social Media Bill was introduced by the Senate of the Federal Republic of Nigeria on 5 November 2019 to criminalise the use of the social media in peddling false or malicious information. The original title of the bill is Protection from Internet Falsehood and Manipulations Bill 2019. It was sponsored by Senator Mohammed Sani Musa from the largely conservative northern Nigeria. After the bill passed second reading on the floor of the Nigeria Senate and its details were made public, information emerged on the social media accusing the sponsor of the bill of plagiarising a similar law in Singapore which is at the bottom of global ranking in the freedom of speech and of the press. But the senator denied that he plagiarised Singaporean law.
Opposition to the bill
Angry reactions trailed the introduction of the bill, and a number of civil society organisations, human rights activists, and Nigerian citizens unanimously opposed the bill. International rights groups Amnesty International and Human Rights Watch condemned the proposed legislation, saying it is aimed at gagging freedom of speech, which is a universal right, in a country of over two hundred million people.
Opposition political parties are very critical of the bill and accused the government of attempting to strip Nigerian citizens of their rights to free speech and of destroying the same social media on whose power and influence the ruling All Progressives Congress (APC) came to power in 2015. Nigeria's Information Minister, Lai Mohammed, has been at the center of public criticism because he is suspected to be the brain behind the proposed act. Lai was a former spokesman of the then-opposition All Progressives Congress.
A "Stop the Social Media Bill! You can no longer take our rights from us" online petition campaign to force the Nigeria parliament to drop the bill received over 90,000 signatures within 24 hours. In November 2019, after the bill passed second reading in the senate, Akon Eyakenyi, a senator from Akwa Ibom State publicly said he would resist the bill.
Support for the bill
Those who support the proposed act, especially senators, have often argued that the law would help curtail hate speech. President Muhammadu Buhari, who is seen as a beneficiary of the influence and power of the social media and free speech, has remained silent about it. But the president's senior aides and family members have publicly spoken in support of the bill. In November 2019, the wife of the president, Aisha Buhari, told a gathering at Nigeria's National Mosque in the capital, Abuja, that if China with over one billion people could regulate the social media, Nigeria should do the same. But Nigerians reacted saying Nigeria is not a one-party communist state like China. Days later, a daughter of the president, Zahra Indimi, told a gathering of young people in Abuja that social media had become a potent weapon for bullying those they thought were doing better than them in terms of social class, and called for critical regulation.
Key provisions of the bill
Title
Protection from Internet Falsehoods, Manipulations and Other Related Matters Bill 2019.
Explanatory memorandum
This Act is to prevent Falsehoods and Manipulations in Internet transmission and correspondences in Nigeria.
To suppress falsehoods and manipulations and counter the effects of such communications and transmissions and to sanction offenders with a view to encouraging and enhancing transparency by Social Media Platforms using the internet correspondences.
Objectives
One objective of the bill is to prevent the transmission of false statements or declaration of facts in Nigeria.
Another objective of the bill is to end the financing of online mediums that transmit false statements.
Measures will be taken to detect and control inauthentic behaviour and misuse of online accounts (parody accounts).
When paid content is posted towards a political end, there will be measures to ensure the poster discloses such information.
There will be sanction for offenders.
Transmission of false statement
According to the bill, a person must not:
Transmit a statement that is false or,
Transmit a statement that might:
i. Affect the security of Nigeria or any part of Nigeria. ii. Affect public health, public safety or public finance. iii. Affect Nigeria's relationship with other countries. iv. Influence the outcome of an election to any office in a general election. v. Cause enmity or hatred towards a person or group of persons.
Anyone guilty of the above is liable to a fine of N300,000 or three years' imprisonment or both (for individual); and a fine not exceeding ten million naira (for corporate organisations).
Same punishment applies for fake online accounts that transmit statements listed above.
Parody accounts
The bill says a person shall not open an account to transmit false statement.
Anyone found guilty will be fined N200,000 or three years' imprisonment or both (for an individual) or five million naira (for corporate organisations).
If such accounts transmit a statement that will affect security or influence the outcome of an election, such a person will be fined N300,000 or three years' imprisonment or both.
If a person receives payment or reward to knowingly help another to transmit false statements, he/she is liable to a fine of N150,000 or three years' imprisonment or both. If a person receives payment or reward to help another to transmit a statement that affects security or influences the outcome of an election, the fine is N300,000 or three years' imprisonment or both (for an individual) and ten million naira for organisations.
Declaration
According to the bill, a law enforcement department can issue a "declaration" to offenders. And this declaration will be issued even if the "false statement" has been corrected or pulled down.
The offender will be required to publish a "correction notice" in a specified newspaper, online location or other printed publication of Nigeria.
Failure to comply, a person is liable to N200,000 or 12 months' imprisonment or both (for individual) and five million naira for organisations.
Access blocking order
The bill says the law enforcement department will also issue an access blocking order to offenders.
The law enforcement department may direct the NCC to order the internet access service provider to disable access by users in Nigeria to the online location and the NCC must give the internet access service provider an access blocking order.
An internet access service provider that does not comply with any access blocking order is liable on conviction to a fine not exceeding ten million naira for each day during any part of which that order is not fully complied with, up to a total of five million naira.
References
Law of Nigeria
Presidency of Muhammadu Buhari
Internet law
Information governance
Social media | Anti-social Media Bill (Nigeria) | [
"Technology"
] | 1,321 | [
"Computing and society",
"Social media"
] |
62,428,696 | https://en.wikipedia.org/wiki/Centrifugal%20pump%20selection%20and%20characteristics | The basic function of a pump is to do work on a liquid. It can be used to transport and compress a liquid. In industries heavy-duty pumps are used to move water, chemicals, slurry, food, oil and so on. Depending on their action, pumps are classified into two types — Centrifugal Pumps and Positive Displacement Pumps. While centrifugal pumps impart momentum to the fluid by motion of blades, positive displacement pumps transfer fluid by variation in the size of the pump’s chamber. Centrifugal pumps can be of rotor or propeller types, whereas positive displacement pumps may be gear-based, piston-based, diaphragm-based, etc.
As a general rule, centrifugal pumps are used with low viscosity fluids and positive displacement pumps are used with high viscosity fluids.
Parameters and Definitions
Volume flow rate (Q) specifies the volume of fluid flowing through the pump per unit time. Thus, it gives the rate at which fluid travels through the pump. Given the density of the operating fluid, mass flow rate (ṁ) can also be used to obtain the volume flow rate. The relationship between the mass flow rate and volume flow rate (also known as the capacity) is given by:
ṁ = ρ·Q, i.e. Q = ṁ/ρ,
where ρ is the operating fluid density.
One of the most important considerations, as a consequence, is to match the rated capacity of the pump with the required flow rate in the system that we are designing.
Discharge Head is the net head obtained at the outlet of a pump. For a centrifugal pump, the discharge pressure depends on the suction or inlet pressure as well, along with the fluid's density. Thus, for the same flow rate of the fluid, we may have different values of discharge pressure depending on the inlet pressure. Thus, the discharge head (the height which the fluid can reach after getting pumped) varies according to the operating conditions.
Total Head is the difference between the height to which the fluid can rise at the outlet and the height to which it can rise at the inlet for a centrifugal pump. This is a crucial parameter for pump selection and is a popularly used parameter for ascertaining industrial requirements. By eliminating the inlet head, we remove the effect of the supplied pressure to the pump and are left with only the pump’s energy (head) contribution to the fluid flow.
Factors Affecting Pump Selection
Flow Rate – The flow rate is necessary for selecting a pump because the head characteristics of a pump are affected by the flow rate of the system. It is important to measure or ascertain this parameter, since the flow rate is critical in many industrial processes, especially in the chemical industry.
Static Head – The difference between the inlet tank fluid surface elevation and the discharge tank fluid surface elevation.
Friction Head – The friction head accounts for the frictional losses in the pumping system. The value of the friction head can be found from available data-tables depending on the flow parameters such as fluid viscosity, pipe dimensions, flow rate, etc.
Total Head – It is obtained by adding the friction and static heads. It gives a measure of the amount of energy imparted by the pump to the fluid. Using the total head and the flow rate, the appropriate dynamic pump (centrifugal pump) can be selected.
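As a rough illustration of how these heads combine (illustrative numbers only; the lumped friction coefficient stands in for values that would normally be read from data tables):

```python
def total_head(static_head_m, flow_m3_per_h, k_friction):
    """Total head (m) = static head + friction head.

    k_friction is a lumped system coefficient (placeholder for tabulated
    pipe-loss data); friction losses grow roughly with the square of the flow rate.
    """
    friction_head_m = k_friction * flow_m3_per_h ** 2
    return static_head_m + friction_head_m

# Illustrative system: 12 m static lift, lumped friction coefficient 0.002.
for q in (10, 20, 30, 40):  # flow rates in m^3/h
    print(q, round(total_head(12.0, q, 0.002), 2))  # required head rises with flow
```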
Selection Using Pump Characteristics
Whenever there is a need to select a pump for an industrial or personal requirement, it is important to determine the required total head and the required flow rate for the operation. This data matters because each manufactured pump has a characteristic head and flow at which it operates at maximum efficiency. For example, if a process industry needs to transport chemical liquids at a specific flow rate for a particular chemical reaction to take place, then both the dynamic head (which is related to the flow rate) and the static head must be ascertained. After calculating both the head and the flow rate, the pump curves given by the manufacturer are consulted and the pump giving the maximum efficiency at the operating condition is selected. It should however be noted that the best efficiency point is not necessarily the actual operating point in practice, because the pump curve describes how a centrifugal pump performs in isolation from plant equipment. How it operates in practice is determined by the resistance of the system it is installed in.
Characteristic Pump Curves
Pump curves are quite useful in pump selection, testing, operation and maintenance. A pump performance curve is a graph of differential head against the operating flow rate. Such curves specify performance and efficiency characteristics. Performance tests are done on pumps to verify the claims made by the pump maker. It is quite possible that, with time in the plant, the requirements of the process along with the infrastructure and conditions may change considerably. In that case pump curves are used to verify whether the pumps would still be the best fit for the modified requirements.
Selecting Using Pump Curves
Pump performance curves are important indicators of pump characteristics provided by the manufacturer. These curves are fundamental in predicting the variation in the differential head across the pump, as the flow changes. However, such curves are not limited to the head, and variation in other parameters such as power, efficiency or NPSH with flow can also be shown on similar plots by the manufacturer.
Due to mechanical and power constraints, the head provided by the pump drops as it pushes a greater quantity of fluid. In other words, when the flow rate increases (for the same impeller diameter), there is a drop in the differential head that the pump is capable of providing. To a first approximation the two are related as follows:
H = a − b·Q
Here a and b depend on the geometric parameters and the rotational speed of the pump and are assumed to be constant for the purpose of comparison.
However, this simple linear relationship undergoes modification on account of various losses and a non-linear, decreasing relationship is seen in the pump characteristic curve.
From the curve, it is observed that even when the differential head drops off, the output obtained increases because the product of flow rate and head increases (recall that the net pump output is given by P_out = ρ·g·Q·H and the efficiency is η = ρ·g·Q·H / P_shaft). This is due to the increase in flow rate. However, the reduction in the discharge head means that the pump consumes more power to push the additional fluid that we need (on account of the increased flow rate). After a specific point, known as the best efficiency point, the effect of the reduction in the obtained head outweighs the increase in the flow rate. As a consequence, the power starts reducing hereafter, and the efficiency starts falling. Mathematically, the effect of flow rate on the efficiency is given by:
η = K·(a·Q − b·Q²), where K is called the capacity constant, and a and b are the constants introduced above, which depend on the pump design and rotation speed.
Because of these competing effects, a point of optimal efficiency is reached. The goal should be to select a pump that operates close to this maximum-efficiency point for the required duty. This point is known as the best efficiency point (BEP) of the pump and is plotted on the pump efficiency curve.
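A minimal sketch of this selection logic, using made-up pump curve samples rather than any manufacturer's data: compute the hydraulic efficiency at each sampled flow rate and report the point with the highest efficiency.

```python
RHO = 1000.0   # water density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

# Illustrative pump curve samples: (flow rate m^3/s, head m, shaft power W).
curve = [
    (0.010, 28.0, 4200.0),
    (0.020, 26.0, 6300.0),
    (0.030, 23.0, 8100.0),
    (0.040, 18.5, 9600.0),
    (0.050, 12.0, 10800.0),
]

def efficiency(q, head, shaft_power):
    """Hydraulic efficiency = useful power (rho*g*Q*H) / shaft power."""
    return RHO * G * q * head / shaft_power

best = max(curve, key=lambda point: efficiency(*point))
for q, h, p in curve:
    print(f"Q={q:.3f} m^3/s  H={h:4.1f} m  eta={efficiency(q, h, p):.2f}")
print("Best efficiency point:", best)   # (0.030, 23.0, 8100.0) for this data
```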
References
Pumps | Centrifugal pump selection and characteristics | [
"Physics",
"Chemistry"
] | 1,419 | [
"Pumps",
"Hydraulics",
"Physical systems",
"Turbomachinery"
] |
62,429,844 | https://en.wikipedia.org/wiki/Out-flow%20radial%20turbine | Radial means that the fluid is flowing in radial direction that is either from inward to outward or from outward to inward, with respect to the runner shaft axis. If the fluid is flowing from inward to outward then it is called outflow radial turbine.
In this turbine, the working fluid enters around the axis of the wheel and then flows outwards (i.e., towards the outer periphery of the wheel).
The guide vane mechanism is typically surrounded by the runner/turbine.
In this turbine, the inner diameter of the runner is the inlet and outer diameter is an outlet.
Most practical radial outflow turbines are Reaction-type turbines, whereas the converse, radial inflow turbines can be either reaction type, impulse type (in the case of a typical turbo-supercharger), or intermediate (in the case of Francis turbines for example.)
Components of Out-flow Turbine
The Main Components of Reaction Turbine are :
Casing/Involute: Typically the runner shaft bearings, rotating seals, guide vane assembly and inlet tube are mounted to the casing.
Guide Vanes: In liquid turbines these are also sometimes referred to as Wicket gates. These convert some of the pressure energy into momentum energy, but their main functions are to control the flow rate and impart an average tangential velocity on the fluid greater than or equal to the tangential velocity of the runner inlets. In an OFRT these are typically mounted concentrically, in the same plane as the turbine. However the guide vanes can also be designed in an axial or diagonal/mixed configuration.
Runner/Turbine: The passage between the blades has a converging-diverging profile. The majority of the head loss or pressure drop occurs as the working fluid passes through the turbine in radial outflow design. The runner is connected to the shaft which rotates along with it and thus this can be used for power production. Depending on the design, the flow through the turbine may be strictly planar, or it may enter the turbine axially and undergo a 90° turn therein.
Draft Tube: It is connected to the outlet of the turbine and assists the fluid exiting the spiral casing. It is used because the exit pressure may become less than the stagnation pressure within the tail race, and thus it may become difficult for the fluid to proceed downstream, causing choked flow. To make the fluid exit from the involute into the tail race, it is necessary to provide a diverging cross section so that the pressure can increase while the linear velocity greatly decreases.
Comparison between inward and outward radial flow reaction turbine
Advantages
Some of the advantages of radial outflow turbine are:
The configuration of radial flow turbine is simple, similar to a centrifugal compressor.
Radial flow turbines are mechanically robust compared to axial turbines and they are easy to configure. As a result, they were considered for such applications before axial turbines. They are more tolerant of overspeed and temporary temperature extremes.
Radial flow turbines have higher energy extraction capability in one single stage.
Because the high pressure side is near the rotational axis (at low radius), it is possible to keep leakage losses lower than with other reaction turbines (Ljungström, axial or in-flow radial). This is more important in small turbines where complex rotating seal systems aren't cost effective.
Radial flow turbines are generally more preferred in small turbines because of simpler construction. Radial flow turbine rotor does not use aerofoil sections, as a result of which the rotor of radial flow turbine has a shape very similar to a centrifugal compressor and it uses 3D shape for energy extraction. They are more conducive to being produced from a single casting or round billet as a Bladed-disk or "blisk."
References
Turbines | Out-flow radial turbine | [
"Chemistry"
] | 1,116 | [
"Turbines",
"Turbomachinery"
] |
62,430,055 | https://en.wikipedia.org/wiki/Biennale%20of%20Design | The Biennale of Design (BIO) is an international design exhibition which has been held continuously since 1964 in Ljubljana, Slovenia, as the first design biennial in Europe.
History
The Biennial of Industrial Design () was officially founded in the autumn of 1963 at the initiative of the Ljubljana City Council, the Chamber of Commerce of the Socialist Republic of Slovenia, and various professional associations. The exhibition was conceived as a biennial forum to compare Yugoslav and foreign achievements in industry. In addition to the Triennale di Milano, BIO was one of the most important European design events in the 1960s, and the first biennial of its kind in the world.
The purpose of the Biennale was to promote and facilitate the development of Yugoslav industrial production, to influence the exchange of well-designed industrial objects on national and international markets, and to raise the general level of design apperception and good taste through educational and information campaigns. For more than forty years, BIO exhibitions followed the same concept created for the first Biennial in 1964 (BIO 1), when objects were organized according to the following categories: furniture, lamps, textiles, hospitality, household appliances, optical objects, electrical engineering, machinery and telecommunications equipment, industrial products from the mechanical engineering industry, sports equipment, toys, architectural details, transport, packaging, and visual communication.
The Biennial of Industrial Design has been held at the Museum of Architecture and Design (MAO) since the museum's founding in 1972.
In 2010, an accompanying exhibition titled "Alvar Aalto Houses – Timeless Expressions" was organised in collaboration with the Alvar Aalto Foundation and Museum, and the Embassy of Finland in Slovenia.
At the end of the first decade of the 21st century, the Biennale experienced gradual changes in its concept. BIO 21 (2008) dispensed with national selections and thus gave everyone the opportunity to submit their work. All works were reviewed by an international selection committee and, as always, the prizes were awarded by an international jury. In 2011, the Biennale of Industrial Design was renamed the Biennial of Design. BIO 23 (2012) was the first curated biennial, the subject of which being design relations.
In 2014, under the guidance of Belgian critic and curator Jan Boelen, it ceased to award actual design products, and instead began to grant a "Best Collaboration Award" selected by an international jury and presented at the opening ceremony.
Its 2019 edition (BIO26), titled "Common Knowledge" and curated by Austrian design curator Thomas Geisler and assistant curator Aline Lara Rezende, focused on the information crisis caused by social media-propagated fake news and on the design of online news publications.
The 27th edition of Ljubljana Biennale of Design (BIO27) titled Super Vernaculars opened in May 2022. It was curated by British design critic and writer Jane Withers and focused on how designers and architects are re-evaluating vernacular traditions and value systems to address contemporary design issues such as water scarcity, waste management, and protecting biodiversity.
Notes and references
External links
official site
Contemporary art exhibitions
Art biennials
Design events
Industrial design awards | Biennale of Design | [
"Engineering"
] | 612 | [
"Design",
"Design events"
] |
62,430,725 | https://en.wikipedia.org/wiki/Wholesale%20acquisition%20cost | Wholesale acquisition cost is the price of a medication set by a pharmaceutical manufacturer in the United States when selling to a wholesaler. Generally 20% is added to create the average wholesale price.
References
Drug pricing | Wholesale acquisition cost | [
"Chemistry"
] | 42 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
62,430,829 | https://en.wikipedia.org/wiki/Samson%20Jenekhe | Samson Ally Jenekhe is the Boeing-Martin Professor of Chemical Engineering and Professor of Chemistry at the University of Washington. Jenekhe was previously a chemical engineer at the University of Rochester where his work focused on semiconducting polymers and quantum wires. He has authored over 300 research articles and 28 patents.
Early life and education
Samson earned his Bachelor of Science in Engineering from Michigan Technological University and his doctoral degrees from the University of Minnesota.
Career
Jenekhe joined the faculty of chemistry at the University of Washington in 2000 as a professor of chemical engineering and chemistry. In 2003, he was one of three University of Washington professors elected to the American Association for the Advancement of Science.
In 2013, he was elected to the Washington State Academy of Sciences. The next year, he was listed by the Clean Energy Institute as one of the 2014 Highly Cited Researchers.
He was elected a Fellow of the American Physical Society in 2003. Jenekhe has been selected as the 2021 recipient of the APS Polymer Physics Prize "for pioneering and sustained outstanding contributions to the synthesis, photophysics, and structure-morphology-performance relationships in semiconducting polymers for electronic and photovoltaic applications."
Honors and fellowships
APS Polymer Physics Prize, 2021
Charles M. A. Stine Award for Excellence in Materials Science and Engineering, 2014
Member of the Washington State Academy of Sciences, 2013
In 2022, Jenekhe was elected to the National Academy of Engineering.
Selected publications
Li, H.; Kim, F. S.; Ren, G.; Jenekhe, S. A. “High Mobility n-Type Conjugated Polymers for Organic Electronics,” J. Am. Chem. Soc. 2013, 135, 14920-14923. DOI:10.1021/ja407471b.
Earmme, T.; Hwang, Y. J.; Murari, N. M.; Subramaniyan, S.; Jenekhe, S. A. “All-Polymer Solar Cells with 3.3% Efficiency Based on Naphthalene Diimide-Selenophene Copolymer Acceptor,” J. Am. Chem. Soc. 2013, 135, 14960-14963. DOI: 10.1021/ja4085429.
Richards, J. J.; Rice, A. H.; Nelson, R. M.; Kim, F. S.; Jenekhe, S. A.; Luscombe, C. K.; Pozzo, D. C. “Modification of PCBM crystallization via incorporation of C60 in polymer/fullerene solar cells,” Adv. Funct. Mater. 2013, 23, 514-522.
Colbert, A.; Janke, E.; Hsieh, S.; Subramaniyan, S.; Schlenker,C. W.; Jenekhe, S. A.; Ginger, D. S. “Hole Transfer from Low Bandgap Quantum Dots to Conjugated Polymers in Organic/Inorganic Hybrid Photovoltaics,” J. Phys. Chem. Lett. 2013, 4, 280-284.
Ren, G.; Schlenker,C. W.; Ahmed, E.; Subramaniyan, S.; Olthof, S.; Kahn, A.; Ginger, D. S.; Jenekhe, S. A. “Photoinduced Hole Transfer Becomes Suppressed with Diminished Driving Force in Polymer-Fullerene Solar Cells While Electron Transfer Remains Active,” Adv. Funct. Mater. 2013, 23, 1238-1249.
Strein, E.; Colbert, A.; Nagaoka, H.; Subramaniyan, S.; Schlenker,C. W.; Janke, E.; Jenekhe, S. A.; Ginger, D. S. “Charge Generation and Energy Transfer in Hybrid Polymer/Infrared Quantum Dot Solar Cells,” Energy Environ. Sci. 2013, 6, 769-775.
Hahm, S. G.; Rho, Y.; Jung, J.; Kim, S. H.; Sajoto, T.; Kim, F. S.; Barlow, S.; Park, C. E.; Jenekhe, S. A.; Marder, S. R.; Ree, M. “High-Performance n-Channel Thin-Film Field-Effect Transistors Based on a Nanowire-Forming Polymer,” Adv. Funct. Mater. 2013, 23, 2060-2071.
Tucker, N. M.; Briseno, A. L.; Acton, O.; Yip, H. L.; Ma, H.; Jenekhe, S. A.; Xia, Y.; Jen, A. K. Y. “Solvent-Dispersed Benzothiadiazole-Tetrathiafulvalene Single-Crystal Nanowires and Their Application in Field-Effect Transistors,” ACS Appl. Mater. Interfaces 2013, 5, 2320-2324.
Li, H.; Kim, F. S.; Ren, G.; Hollenbeck, E. C.; Subramaniyan, S.; Jenekhe, S. A. “Tetraazabenzodifluoranthene Diimides: New Building Blocks for Solution Processable N-Type Organic Semiconductors,” Angew. Chem. Int. Ed. 2013, 52, 5513-5517.
Hwang, Y. J.; Murari, N. M.; Jenekhe, S. A. “New n-Type Polymer Semiconductors Based on Naphthalene Diimide and Selenophene Derivatives for Organic Field-Effect Transistors,” Polym. Chem. 2013, 4, 3187-3195.
Earmme, T.; Jenekhe, S. A. “Improved electron injection and transport by use of baking soda as a low-cost, air-stable, n-dopant for solution-processed phosphorescent organic light-emitting diodes,” Appl. Phys. Lett. 2013, 102, 233305/1-4.
Shoaee, S.; Subramaniyan, S.; Xin, H.; Keiderling, C.; Tuladhar, P. S.; Jamieson, F.; Jenekhe, S. A.; Durrant, J. R. “Charge photogeneration for a series of thiazolo-thiazole donor polymers blended with the fullerene electron acceptors PCBM and ICBA,” Adv. Funct. Mater. 2013, 23, 3286-3298.
References
External links
Samson A. Jenekhe - Google Scholar Citations
Year of birth missing (living people)
Living people
Chemical engineering academics
Michigan Technological University alumni
University of Minnesota College of Science and Engineering alumni
University of Washington faculty
Nigerian academics
Fellows of the American Physical Society | Samson Jenekhe | [
"Chemistry"
] | 1,494 | [
"Chemical engineering academics",
"Chemical engineers"
] |
62,431,273 | https://en.wikipedia.org/wiki/Puccinia%20sorghi | Puccinia sorghi, or common rust of maize, is a species of rust fungus that infects corn and species from the plant genus Oxalis.
Host and symptoms
Puccinia sorghi often first appears after silking in maize. The first early symptoms include chlorotic specks on the leaf. The most obvious sign of this plant pathogen is golden-brown pustules or bumps on the above-ground surface of the plant tissue. These pustules contain urediniospores, which can spread to other plants and cause further infection. They are circular and powdery, resulting from spores breaking through the leaf surface. While they are only about 1–2 mm each, they are very numerous, occurring with equal frequency on upper and lower leaf surfaces. Over time, these blister-like pustules can change from brown to black as the urediniospores give way to teliospores. The most common place to find these spores is on the plant leaf, but they can develop on husks, tassels, and stalks as well. P. sorghi has two hosts, making it a heteroecious rust. Maize and Oxalis are the two hosts for P. sorghi. In comparison, the other common type of maize rust, southern corn rust (Puccinia polysora), has a greater variety of hosts including maize, silver plumegrass, eastern gamagrass, Tripsacum lanceolatum, T. laxum, and T. pilorum.
Disease cycle
There are five spore stages in P. sorghi. The spore types are teliospores, basidiospores, pycniospores, aeciospores, and urediniospores.
Every year, viable urediniospores must travel to the north from the warmer southern climate. Since P. sorghi is an obligate parasite, it requires living plant tissue in order to survive. Therefore, this disease cannot overwinter in northern US states. The severity of the disease depends largely on weather conditions and how many spores are carried north each season. Urediniospores infect leaves and produce more spores to create a secondary inoculum and polycyclic disease cycle. Once the urediniospores mature on the plant tissue and turn black they become teliospores. Urediniospores measure 22-33 × 20-28 μm. Teliospores are two-celled and measure 27-53 μm. Teliospores overwinter in the southern climate and germinate in the spring. Teliospores produce basidiospores which spread by wind to infect Oxalis. They infect Oxalis and produce sexual spores (pycniospores) and aeciospores. Aeciospores are windblown to maize and infect the plant.
Management
The use of resistant maize hybrids is the best way to manage P. sorghi. There are two types of resistance that exist. The first is partial resistance which results in fewer rust spots by reducing germination rate. This type of resistance makes P. sorghi less severe by slowing down development of number of urediniospores. The other type of resistance is qualitative. This type relies on a single gene which provides total resistance to the plant. Other management tactics include foliar application of fungicide and cultural control. For fungicide application, plants should be monitored throughout the season, spraying when there are six or more pustules per leaf. Fungicide groups that can be used include mixed modes of action, DMI Triazoles (Group 3), and QoI Strobilurins (Group 11). Cultural control can be more effective in areas where the spores can overwinter. Debris should be collected and destroyed by burning along with eradication of Oxalis in surrounding areas. In northern areas where the spores can't overwinter, early planting time can help avoid P. sorghi. Younger leaves are more susceptible to infection, by planting earlier the crop will be more mature and more resilient by the time the spores arrive.
References
sorghi
Fungi described in 1832
Fungal plant pathogens and diseases
Maize diseases
Fungus species | Puccinia sorghi | [
"Biology"
] | 886 | [
"Fungi",
"Fungus species"
] |
62,431,286 | https://en.wikipedia.org/wiki/Average%20daily%20quantity | Average daily quantity (ADQ) is similar to the World Health Organization's defined daily dose, but is adjusted to reflect how medications are used in England.
References
Health economics | Average daily quantity | [
"Chemistry"
] | 37 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
72,023,462 | https://en.wikipedia.org/wiki/Observability%20%28software%29 | In software engineering, more specifically in distributed computing, observability is the ability to collect data about programs' execution, modules' internal states, and the communication among components. To improve observability, software engineers use a wide range of logging and tracing techniques to gather telemetry information, and tools to analyze and use it. Observability is foundational to site reliability engineering, as it is the first step in triaging a service outage.
One of the goals of observability is to minimize the amount of prior knowledge needed to debug an issue.
Etymology, terminology and definition
The term is borrowed from control theory, where the "observability" of a system measures how well its state can be determined from its outputs. Similarly, software observability measures how well a system's state can be understood from the obtained telemetry (metrics, logs, traces, profiling).
The definition of observability varies by vendor.
The term is frequently referred to by its numeronym o11y (where 11 stands for the number of letters between the first letter and the last letter of the word). This is similar to other computer science abbreviations such as i18n, l10n, and k8s.
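The abbreviation scheme can be stated in a few lines of code (a throwaway sketch):

```python
def numeronym(word: str) -> str:
    """Abbreviate a word by replacing its interior letters with their count."""
    if len(word) <= 3:
        return word
    return f"{word[0]}{len(word) - 2}{word[-1]}"

print(numeronym("observability"))         # o11y
print(numeronym("internationalization"))  # i18n
print(numeronym("localization"))          # l10n
```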
Observability vs. monitoring
Observability and monitoring are sometimes used interchangeably. As tooling, commercial offerings and practices evolved in complexity, "monitoring" was re-branded as observability in order to differentiate new tools from the old.
The terms are commonly contrasted in that systems are monitored using predefined sets of telemetry, and monitored systems may be observable.
Majors et al. suggest that engineering teams that only have monitoring tools end up relying on expert foreknowledge (seniority), whereas teams that have observability tools rely on exploratory analysis (curiosity).
Telemetry types
Observability relies on three main types of telemetry data: metrics, logs and traces. Those are often referred to as "pillars of observability".
Metrics
A metric is a point-in-time measurement (a scalar) that represents some system state. Examples of common metrics include:
number of HTTP requests per second;
total number of query failures;
database size in bytes;
time in seconds since last garbage collection.
Monitoring tools are typically configured to emit alerts when certain metric values exceed set thresholds. Thresholds are set based on knowledge about normal operating conditions and experience.
Metrics are typically tagged to facilitate grouping and searchability.
Application developers choose what kind of metrics to instrument their software with, before it is released. As a result, when a previously unknown issue is encountered, it is impossible to add new metrics without shipping new code. Furthermore, their cardinality can quickly make the storage size of telemetry data prohibitively expensive. Since metrics are cardinality-limited, they are often used to represent aggregate values (for example: average page load time, or 5-second average of the request rate). Without external context, it is impossible to correlate between events (such as user requests) and distinct metric values.
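As an illustration of tagged metrics and threshold-based alerting, the hand-rolled sketch below mimics the idea without using any particular monitoring library (all names and the threshold are made up):

```python
from collections import defaultdict

class Counter:
    """A minimal tagged counter metric."""
    def __init__(self, name):
        self.name = name
        self.values = defaultdict(float)   # tag tuple -> current value

    def inc(self, amount=1.0, **tags):
        self.values[tuple(sorted(tags.items()))] += amount

http_errors = Counter("http_request_errors_total")
http_errors.inc(method="GET", status="500")
http_errors.inc(method="GET", status="500")

# Monitoring tools emit alerts when a metric crosses a predefined threshold.
THRESHOLD = 1
for tags, value in http_errors.values.items():
    if value > THRESHOLD:
        print(f"ALERT {http_errors.name}{dict(tags)} = {value}")
```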
Logs
Logs, or log lines, are generally free-form, unstructured text blobs that are intended to be human readable. Modern logging is structured to enable machine parsability. As with metrics, an application developer must instrument the application upfront and ship new code if different logging information is required.
Logs typically include a timestamp and severity level. An event (such as a user request) may be fragmented across multiple log lines and interweave with logs from concurrent events.
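A sketch of an event emitted as a structured, machine-parsable log line with a timestamp and severity level (the field names are illustrative, not a standard schema):

```python
import json
import time

def log(severity, message, **fields):
    """Emit one structured log line as a single JSON object."""
    record = {"ts": time.time(), "severity": severity, "msg": message, **fields}
    print(json.dumps(record))

log("ERROR", "payment failed", request_id="abc123", user_id=42, amount_cents=1999)
# {"ts": ..., "severity": "ERROR", "msg": "payment failed", "request_id": "abc123", ...}
```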
Traces
Distributed traces
A cloud native application is typically made up of distributed services which together fulfill a single request. A distributed trace is an interrelated series of discrete events (also called spans) that track the progression of a single user request. A trace shows the causal and temporal relationships between the services that interoperate to fulfill a request.
Instrumenting an application with traces means sending span information to a tracing backend. The tracing backend correlates the received spans to generate presentable traces. To be able to follow a request as it traverses multiple services, spans are labeled with unique identifiers that enable constructing a parent-child relationship between spans. Span information is typically shared in the HTTP headers of outbound requests.
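A sketch of how span identifiers might be generated and attached to outbound requests so a tracing backend can reassemble the parent-child structure; the header names and record fields here are illustrative rather than those of any specific tracing standard:

```python
import uuid

def new_span(trace_id=None, parent_id=None):
    """Create a span record; spans sharing a trace_id belong to one request."""
    return {
        "trace_id": trace_id or uuid.uuid4().hex,
        "span_id": uuid.uuid4().hex[:16],
        "parent_id": parent_id,
    }

def outbound_headers(span):
    """Headers added to an outbound HTTP call so the callee can create a child span."""
    return {"X-Trace-Id": span["trace_id"], "X-Parent-Span-Id": span["span_id"]}

root = new_span()                                     # span for the incoming user request
child = new_span(root["trace_id"], root["span_id"])   # span created in a downstream service
print(outbound_headers(root))
print(child)   # carries the same trace_id and points at its parent span
```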
Continuous profiling
Continuous profiling is another telemetry type used to precisely determine how an application consumes resources.
Instrumentation
To be able to observe an application, telemetry about the application's behavior needs to be collected or exported. Instrumentation means generating telemetry alongside the normal operation of the application. Telemetry is then collected by an independent backend for later analysis.
Instrumentation can be automatic, or custom. Automatic instrumentation offers blanket coverage and immediate value; custom instrumentation brings higher value but requires more intimate involvement with the instrumented application.
Instrumentation can be native - done in-code (modifying the code of the instrumented application) - or out-of-code (e.g. sidecar, eBPF).
Verifying new features in production by shipping them together with custom instrumentation is a practice called "observability-driven development".
"Pillars of observability"
Metrics, logs and traces are most commonly listed as the pillars of observability. Majors et al. suggest that the pillars of observability are high cardinality, high-dimensionality, and explorability, arguing that runbooks and dashboards have little value because "modern systems rarely fail in precisely the same way twice."
Self monitoring
Self monitoring is a practice where observability stacks monitor each other, in order to reduce the risk of inconspicuous outages. Self monitoring may be put in place in addition to high availability and redundancy to further avoid correlated failures.
See also
Application performance management (APM)
OpenTelemetry (OTel)
Real user monitoring (RUM)
Synthetic monitoring
DevOps
Site reliability engineering (SRE)
Sociotechnical system
External links
CNCF Observability Technical Advisory Group (TAG)
Bibliography
References
Distributed computing | Observability (software) | [
"Technology",
"Engineering"
] | 1,276 | [
"Systems engineering",
"Computer engineering",
"Computer science stubs",
"Software engineering",
"Computer science",
"Information technology",
"Computing stubs"
] |
72,023,576 | https://en.wikipedia.org/wiki/List%20of%20English%20palindromic%20phrases | A palindrome is a word, number, phrase, or other sequence of symbols that reads the same backwards as forwards, such as the sentence: "A man, a plan, a canal – Panama". Following is a list of palindromic phrases of two or more words in the English language, found in multiple independent collections of palindromic phrases.
As late as 1821, The New Monthly Magazine reported that there was only one known palindrome in the English language: "Lewd did I live, & evil did I dwel (sic)". In the following centuries, many more English palindromes were constructed. For many long-attested or well-known palindromes, authorship can not be determined, although a number can tentatively be attributed to a handful of prolific palindrome creators. Because of the popularity of palindromes as a form of word play, a number of sources have collected and listed popular palindromes, and palindrome-constructing contests have been held.
Notable palindromic phrases in English
See also
List of palindromic places
Notes
References
External links
Palindromes
Lists of phrases | List of English palindromic phrases | [
"Physics"
] | 245 | [
"Symmetry",
"Palindromes"
] |
72,023,786 | https://en.wikipedia.org/wiki/Time%20in%20Belize | Belize observes Central Standard Time (UTC−6) year-round.
IANA time zone database
In the IANA time zone database, Belize is given one zone in the file zone.tab, America/Belize. "BZ" refers to the country's ISO 3166-1 alpha-2 country code. Data for Belize directly from zone.tab of the IANA time zone database; columns marked with * are the columns from zone.tab itself:

c.c.* | coordinates* | TZ* | UTC offset
--- | --- | --- | ---
BZ | +1730-08812 | America/Belize | −06:00
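Because the zone ships with the standard tz data, it can be used directly from, for example, Python's standard library (zoneinfo requires Python 3.9 or later; on systems without bundled tz data the separate tzdata package may be needed):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

belize = ZoneInfo("America/Belize")
now = datetime.now(belize)
print(now.isoformat())   # e.g. 2024-01-01T12:00:00-06:00
print(now.utcoffset())   # -1 day, 18:00:00  (i.e. UTC-06:00, year-round)
```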
References
External links
Current time in Belize at Time.is
Time in Belize at TimeAndDate
Time by country
Geography of Belize
Time in North America | Time in Belize | [
"Physics"
] | 123 | [
"Spacetime",
"Physical quantities",
"Time",
"Time by country"
] |
72,024,289 | https://en.wikipedia.org/wiki/Gadopiclenol | Gadopiclenol, sold under the brand name Elucirem among others, is a contrast agent used with magnetic resonance imaging (MRI) to detect and visualize lesions with abnormal vascularity in the central nervous system and in the body. Gadopiclenol is a paramagnetic macrocyclic non-ionic complex of gadolinium.
Gadopiclenol was approved for medical use in the United States in September 2022, and in the European Union in December 2023.
Pharmacology
Gadopiclenol has a higher relaxivity compared with standard gadolinium-based contrast agents (GBCAs). The higher relaxivity allows for a lower dose of gadopiclenol, reducing the total amount of gadolinium administered to the patient while preserving imaging quality. Gadopiclenol was approved by the FDA with a recommended dose of 0.05 mmol/kg for adults and pediatric patients aged 2 years and older. This is half the dose of standard macrocyclic GBCAs, which have a recommended dose of 0.1 mmol/kg.
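As a simple worked comparison of the total amounts implied by those recommended doses (illustrative arithmetic only, not dosing guidance):

```python
def total_dose_mmol(weight_kg, dose_mmol_per_kg):
    """Total gadolinium dose for a given body weight and per-kilogram dose."""
    return weight_kg * dose_mmol_per_kg

weight = 70  # kg, example patient weight
print(total_dose_mmol(weight, 0.05))  # gadopiclenol: 3.5 mmol
print(total_dose_mmol(weight, 0.10))  # standard macrocyclic GBCA: 7.0 mmol
```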
Society and culture
Legal status
Gadopiclenol was approved for medical use in the United States in September 2022 by the Food and Drug Administration.
In October 2023, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Elucirem, intended for contrast-enhanced magnetic resonance imaging (MRI) to improve detection and visualization of pathologies when diagnostic information is essential and not available with unenhanced MRI. The applicant for this medicinal product is Guerbet. In October 2023, the CHMP adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Vueway, intended for contrast-enhanced magnetic resonance imaging (MRI) to improve detection and visualization of pathologies when diagnostic information is essential and not available with unenhanced MRI. The applicant for this medicinal product is Bracco Imaging S.p.A. Gadopiclenol was approved for medical use in the European Union in December 2023.
Brand names
Gadopiclenol is the international nonproprietary name.
Gadopiclenol is sold under the brand names Elucirem and Vueway.
References
External links
MRI contrast agents
Pyridines
Heterocyclic compounds with 2 rings
Carboxylic acids
Polyols
Amides
Gadolinium compounds | Gadopiclenol | [
"Chemistry"
] | 526 | [
"Carboxylic acids",
"Amides",
"Functional groups"
] |
72,025,462 | https://en.wikipedia.org/wiki/P%C3%A1jaro%20verde | The Pájaro verde (lit. Green Bird) is a highly toxic alcoholic beverage (due to the presence of chemicals such as thinners, paint or turpentine) produced clandestinely and illegally inside Chilean prisons. The drink was reported by the Chilean press after a series of scandals in which prisoners died from its consumption.
According to some scholars, within the prison culture this drink has a ritualistic character. It originated in the Chilean prisons of the 19th century; the practice has been preserved over time through oral tradition.
Preparation
Its ingredients have varied throughout history and part of the prison rite is to prepare it with the available resources. Today, the most common way is to ferment a mixture of sugar, rice, rotten and fresh fruits and their peels; a strong chemical is added to this liquid, such as turpentine, paint thinners, paint or varnish to give it a "greater neural shock". There have been cases where excrement has even been used in the fermentation process.
The result is a distillate with a high concentration of methanol, which is toxic to humans (unlike ethanol, which is found in common alcoholic beverages). It is sometimes mixed with a cola drink to "enhance the taste". Lemon juice is usually added to the final mixture (usually in the same container from which it is drunk), as there is a belief that this citrus counteracts the toxic effects of the chemicals that make up the drink.
Toxicity
Given the tremendously harmful nature of the main ingredients, there have been many convicts who have been seriously intoxicated, even reaching the point of death.
In July 2006, in the Rancagua Prison, one convict died, another was left brain dead and five suffered serious damage to the trachea after drinking a mixture of thinner with Coca-Cola in an attempt to emulate this drink. The case of the Valparaíso Penitentiary is also known, where an inmate died in the Carlos van Buren hospital after drinking Pájaro verde, which caused a seizure at the prison.
Current situation
Today the deadly drink continues to exist in Chilean prisons, although to a much lesser degree and with less toxic variations, such as chicha prepared in the same way but without diluents, which are replaced by medicinal alcohol. This chicha, which is considered heir to the green bird, is usually consumed together with clonazepam —an anxiolytic known as the prison drug— and less frequently with cocaine, marijuana or cocaine paste.
See also
Spanish Methanol Poisonings
Pruno
References
Chilean alcoholic drinks
Prison drinks
Fermented drinks | Pájaro verde | [
"Biology"
] | 533 | [
"Fermented drinks",
"Biotechnology products"
] |
72,026,399 | https://en.wikipedia.org/wiki/Ibragimov%E2%80%93Iosifescu%20conjecture%20for%20%CF%86-mixing%20sequences | Ibragimov–Iosifescu conjecture for φ-mixing sequences in probability theory is the collective name for 2 closely related conjectures by Ildar Ibragimov and :ro:Marius Iosifescu.
Conjecture
Let (X_n) be a strictly stationary φ-mixing sequence, for which E[X_1²] < ∞ and Var(S_n) → ∞, where S_n = X_1 + ... + X_n. Then S_n/√Var(S_n) is asymptotically normally distributed.
The φ-mixing coefficients are defined as
φ(n) = sup { |P(B|A) − P(B)| : A ∈ F_1^k, P(A) > 0, B ∈ F_{k+n}^∞, k ≥ 1 },
where F_1^k and F_{k+n}^∞ are the σ-algebras generated by X_1, ..., X_k (respectively X_{k+n}, X_{k+n+1}, ...), and φ-mixing means that φ(n) → 0 as n → ∞.
Reformulated:
Suppose (X_k) is a strictly stationary sequence of random variables such that
E[X_1] = 0, E[X_1²] < ∞ and σ_n² := Var(S_n) → ∞ as n → ∞ (that is, such that it has finite second moments and the variance of the partial sums S_n = X_1 + ... + X_n diverges as n → ∞).
Per Ibragimov, under these assumptions, if the sequence is also φ-mixing, then a central limit theorem holds. Per a closely related conjecture by Iosifescu, under the same hypothesis, a weak invariance principle holds. Both conjectures together, formulated in similar terms:
Let (X_k) be a strictly stationary, centered, φ-mixing sequence of random variables such that E[X_1²] < ∞ and σ_n² := Var(S_n) → ∞. Then, per Ibragimov, S_n/σ_n converges in distribution to a standard normal random variable, and, per Iosifescu, the rescaled partial-sum process {S_⌊nt⌋/σ_n, t ∈ [0,1]} converges weakly to standard Brownian motion. Also, a related conjecture by Magda Peligrad states that a corresponding result holds under the same conditions together with an additional condition on the normalizing sequence.
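The conjectured behaviour can be illustrated numerically in the cases where it is known to hold. The sketch below (using NumPy) simulates an m-dependent moving average of independent signs, which is strictly stationary, centered, and φ-mixing since φ(n) = 0 for n > m, and checks that the normalized partial sums look approximately standard normal. This only illustrates the central limit statement; it says nothing about the open cases of the conjecture.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, reps = 3, 5000, 2000

def partial_sum(n):
    """S_n for an m-dependent, strictly stationary, centered sequence:
    X_k = eps_k + eps_{k+1} + ... + eps_{k+m}, with iid signs eps."""
    eps = rng.choice([-1.0, 1.0], size=n + m)
    x = np.convolve(eps, np.ones(m + 1), mode="valid")[:n]
    return x.sum()

samples = np.array([partial_sum(n) for _ in range(reps)])
normalized = samples / samples.std()
print("sample mean ~ 0:", normalized.mean().round(3))
print("P(|Z| <= 1.96) ~ 0.95:", np.mean(np.abs(normalized) <= 1.96).round(3))
```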
Sources
I.A. Ibragimov and Yu.V. Linnik, Independent and Stationary Sequences of Random Variables, Wolters-Noordhoff, Groningen, 1971, p. 393, problem 3.
M. Iosifescu, Limit theorems for ϕ-mixing sequences, a survey. In: Proceedings of the Fifth Conference on Probability Theory, Brașov, 1974, pp. 51-57. Publishing House of the Romanian Academy, Bucharest, 1977.
Conjectures
Probability theory | Ibragimov–Iosifescu conjecture for φ-mixing sequences | [
"Mathematics"
] | 329 | [
"Unsolved problems in mathematics",
"Mathematical problems",
"Conjectures"
] |
72,030,943 | https://en.wikipedia.org/wiki/Stefan%20Catsicas | Stefan Catsicas, born in 1958, is a Swiss molecular biologist specialised in neurosciences of Italian and Greek origins. He was executive director of Nestlé from 2013 to 2018, vice-president of research of the Ecole Polytechnique Fédérale in Lausanne (EPFL) from 2000 to 2004 and director of the institute of cell biology at the School of Medicine in Lausanne from 1996 à 2000. He is currently the managing partner of Skyviews Life Science, a Swiss advisory company in life sciences; and the director of Precision Health Corp., a private investment company based in the Isle of Man.
Early life and education
Stefan Catsicas began his scientific training with studies in natural sciences at the University of Lausanne, followed by a doctoral thesis on the development of the nervous system, which he completed in 1987. He then continued his research in this field at the Research Institute of Scripps Clinic in San Diego, California.
Career
Back in Switzerland, he became Head of the Neurobiology Department of Glaxo in Geneva from 1991 to 1996, before pursuing his academic career at the University of Lausanne as professor and chair of Cell Biology, and then as professor of Cellular Engineering at the EPFL.
In 2000, Patrick Aebischer appointed him Vice President of Research at the EPFL. To promote pluridisciplinarity on campus, he led the collaborations with Alinghi for the America's Cup and with Solar Impulse for the round-the-world flight of a solar-powered plane.
In 2004, he left his position at the EPFL and co-founded Tilocor Life Science, a biotechnology group of private companies.
In 2011, Catsicas became Provost and Executive Vice-president of the King Abdullah University of Science & Technology.
In 2013, he was appointed to the executive board of Nestlé Global as Chief Technology Officer (CTO), a position he held from 2013 to 2018.
In 2018, he published his first novel in French, La Séquence (Editions Favre), which is set in the world of genetic research.
Since 2018, Stefan Catsicas has been the managing director of Skyviews Life Science, a Swiss advisory company in life sciences, including biotechnology, advanced nutrition and digital health. He is also the co-founder and a director of the private investment company Precision Health Corp., based in the Isle of Man.
References
Molecular biologists
1958 births
Living people
École Polytechnique Fédérale de Lausanne alumni
University of Lausanne alumni
King Abdullah University of Science and Technology
Nestlé people
GSK plc people
Scripps Research alumni | Stefan Catsicas | [
"Chemistry"
] | 518 | [
"Molecular biologists",
"Biochemists",
"Molecular biology"
] |
72,031,127 | https://en.wikipedia.org/wiki/Lauren%20B.%20Hitchcock | Lauren Blakely Hitchcock (March 18, 1900 – October 15, 1972) was a chemical engineer and early opponent of air pollution.
Hitchcock was born in Paris to Frank Lauren Hitchcock, a mathematician and physicist, and Margaret Johnson Blakely, and was raised in Belmont, Massachusetts. He received his undergraduate (1920), master's (1927), and doctoral (1933) degrees from the Massachusetts Institute of Technology. He taught at the University of Virginia from 1928 to 1935 and then moved into private industry.
Hitchcock became president of the Southern California Air Pollution Foundation (APF), which had been formed to fight smog, in 1954. He identified automobile exhaust and backyard incinerators as the main causes and advised that significant steps, comparable to wartime efforts, would be needed to fight the problem in a meaningful way. In 1963, Hitchcock was appointed to the faculty at the University at Buffalo, where his papers are now archived.
References
External links
Hitchcock (Lauren B.) Papers, 1923-1966, at University at Buffalo Archives
1972 deaths
1900 births
Chemical engineers
Massachusetts Institute of Technology alumni
People from Belmont, Massachusetts | Lauren B. Hitchcock | [
"Chemistry",
"Engineering"
] | 223 | [
"Chemical engineering",
"Chemical engineers"
] |
72,032,480 | https://en.wikipedia.org/wiki/Dark%20forest%20hypothesis | The dark forest hypothesis is the conjecture that many alien civilizations exist throughout the universe, but they are both silent and hostile, maintaining their undetectability for fear of being destroyed by another hostile and undetected civilization. It is one of many possible explanations of the Fermi paradox, which contrasts the lack of contact with alien life with the potential for such contact. The hypothesis derives its name from Liu Cixin's 2008 novel The Dark Forest, although the concept predates the novel.
Background
There is no known reliable or reproducible evidence that aliens have visited or attempted to contact Earth. No transmissions and no firm evidence of intelligent extraterrestrial life have been detected or observed. This runs counter to the general observations that:
The universe is filled with a very large number of planets, some of which are likely to present conditions hospitable to life; and
Terrestrial life is observed to expand until it fills all niches suited to it.
These contradictory facts form the basis for the Fermi paradox.
Concept
The "dark forest" hypothesis presumes that any space-faring civilization would view any other intelligent life as an inevitable threat and thus destroy any nascent life that makes itself known. As a result, the electromagnetic spectrum would be relatively quiet, without evidence of any intelligent alien life.
A similar hypothesis, under the name "deadly probes", was described by astronomer and author David Brin in his 1983 summary of the arguments for and against the Fermi paradox.
The name of the hypothesis derives from Liu Cixin’s 2008 novel The Dark Forest, as in a "dark forest" filled with "armed hunter(s) stalking through the trees like ghosts". According to the dark forest hypothesis, since the intentions of any newly contacted civilisation can never be known with certainty, it is best, if one is encountered, to shoot first and ask questions later in order to avoid the potential extinction of one's own species. The novel provides a detailed investigation of Liu's concerns about alien contact.
Relationship to other proposed Fermi paradox solutions
The Berserker hypothesis, also known as the deadly probes scenario, proposes self-reproducing machines that would seek to destroy organic life. The name derives from short stories by Fred Saberhagen written in the 1960s.
The dark forest hypothesis is distinct from the Berserker hypothesis in that under the former, many alien civilizations could still exist provided they keep silent. The former can be viewed as a special case of the latter, if the deadly probes are (e.g. due to resource scarcity) only sent to star systems that show signs of intelligent life.
Game theory
The dark forest hypothesis is a special case of the "sequential and incomplete information game" in game theory.
In game theory, a "sequential and incomplete information game" is one in which all players act in sequence, one after the other, and none are aware of all available information. In the case of this particular game, the only win condition is continued survival. An additional constraint in the special case of the "dark forest" is the scarcity of vital resources. The "dark forest" can be considered an extensive-form game with each "player" possessing the following possible actions: destroy another civilization known to the player; broadcast and alert other civilizations of one's existence; or do nothing.
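As a toy illustration of that setup (the payoffs, action names and two-civilization reduction below are invented for this sketch and are not part of the hypothesis or of any formal treatment), silence emerges as the survival-maximizing choice for an undetected player facing a possibly hostile opponent:

```python
# Toy model: one undetected civilization chooses an action against a possibly hostile one.
# The only payoff that matters is survival (1 = survive, 0 = destroyed).

ACTIONS = ("stay_silent", "broadcast", "destroy_other")

def survives(own_action: str, opponent_hostile: bool, already_detected: bool) -> int:
    """Worst-case survival payoff for one round."""
    exposed = already_detected or own_action == "broadcast"
    if opponent_hostile and exposed:
        return 0  # a detected civilization can be struck by a hostile one
    return 1

def best_response(opponent_hostile: bool, already_detected: bool) -> str:
    """Action with the highest guaranteed survival payoff (ties go to the first listed)."""
    return max(ACTIONS, key=lambda a: survives(a, opponent_hostile, already_detected))

print(best_response(opponent_hostile=True, already_detected=False))  # -> stay_silent
```

The point of the sketch is only the ordering of outcomes: broadcasting can never raise the worst-case survival payoff, which is the intuition the hypothesis relies on.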
Science fiction versions
In addition to Fred Saberhagen's Berserker novels, variations of these ideas have been used in other science fiction stories. In 1987, science fiction author Greg Bear explored this concept that he called a "vicious jungle" in his novel The Forge of God. In The Forge of God, humanity is likened to a baby crying in a hostile forest: "There once was an infant lost in the woods, crying its heart out, wondering why no one answered, drawing down the wolves." One of the characters explains, "We've been sitting in our tree chirping like foolish birds for over a century now, wondering why no other birds answered. The galactic skies are full of hawks, that's why. Planetisms that don't know enough to keep quiet, get eaten."
The term "dark forest" was coined for the idea in 2008 by science fiction author Liu Cixin in his novel The Dark Forest.
In Liu Cixin's novel, the dark forest hypothesis is introduced by the character Ye Wenjie, while visiting her daughter's grave. She introduces three key axioms to a new field she describes as "cosmic sociology":
"Suppose a vast number of civilizations distributed throughout the universe, on the order of the number of observable stars. Lots and lots of them. Those civilizations make up the body of a cosmic society. Cosmic sociology is the study of the nature of this super-society."
Suppose that survival is the primary need of a civilization.
Suppose that civilizations continuously expand over time, but the total matter in the universe remains constant.
According to the character Ye was talking to, the only logical conclusion from accepting these axioms, together with two further considerations, the "chain of suspicion" and the "technological explosion", is that any civilization that reveals itself will be regarded as an imminent existential threat by at least some of the other civilizations, some of which will then proceed to destroy the civilization that makes itself known.
In the third book of the trilogy, the perspective of the hunters in The Dark Forest is further illustrated through an alien character called Singer, who thinks that intelligent life that does not fear the dark forest would "expand and attack without fear". In other words, dark-forest-fearing civilizations are benign, civilizations that would reveal themselves are evil, and hunters are enforcers and protectors.
References
External links
Astrobiology
Astronomical controversies
Astronomical hypotheses
Fermi paradox
Hypotheses
Search for extraterrestrial intelligence | Dark forest hypothesis | [
"Astronomy",
"Biology"
] | 1,192 | [
"Astronomical hypotheses",
"Origin of life",
"History of astronomy",
"Speculative evolution",
"Astrobiology",
"Astronomical controversies",
"Fermi paradox",
"Biological hypotheses",
"Astronomical sub-disciplines"
] |
72,033,530 | https://en.wikipedia.org/wiki/Wilson%20fermion | In lattice field theory, Wilson fermions are a fermion discretization, proposed by Kenneth Wilson in 1974, that avoids the fermion doubling problem. They are widely used, for instance in lattice QCD calculations.
An additional so-called Wilson term
\[ S_W = -\frac{r}{2a}\, a^d \sum_x \bar\psi(x) \sum_\mu \left[ \psi(x + a\hat\mu) - 2\psi(x) + \psi(x - a\hat\mu) \right] \]
is introduced, supplementing the naively discretized Dirac action in $d$-dimensional Euclidean spacetime with lattice spacing $a$, Dirac fields $\psi(x)$ at every lattice point $x$, the vectors $\hat\mu$ being unit vectors in the $\mu$ direction, and $r$ the Wilson parameter (commonly set to one). The inverse free fermion propagator in momentum space now reads
\[ D(p) = m + \frac{i}{a} \sum_\mu \gamma_\mu \sin(p_\mu a) + \frac{r}{a} \sum_\mu \big( 1 - \cos(p_\mu a) \big), \]
where the last addend corresponds to the Wilson term again. It modifies the mass of the doublers to
\[ m + \frac{2 r \ell}{a}, \]
where $\ell$ is the number of momentum components with $p_\mu = \pi/a$. In the continuum limit $a \to 0$ the doublers become very heavy and decouple from the theory.
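To make the doubler counting explicit (a worked step using the propagator as written above, with the same assumed conventions): at a corner of the Brillouin zone where exactly $\ell$ momentum components take the value $p_\mu = \pi/a$, one has $\sin(p_\mu a) = 0$ and $1 - \cos(p_\mu a) = 2$ for each such component, so
\[ D(p) = m + \frac{2 r \ell}{a}. \]
Each of the $2^d - 1$ doubler modes therefore acquires a mass of order $1/a$ and drops out of the spectrum as $a \to 0$.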
Wilson fermions do not contradict the Nielsen–Ninomiya theorem because they explicitly violate chiral symmetry since the Wilson term does not anti-commute with .
References
Lattice field theory
Fermions | Wilson fermion | [
"Physics",
"Materials_science"
] | 201 | [
"Fermions",
"Subatomic particles",
"Condensed matter physics",
"Matter"
] |
72,034,215 | https://en.wikipedia.org/wiki/Twisted%20mass%20fermion | In lattice field theory, twisted mass fermions are a fermion discretization that extends Wilson fermions for two mass-degenerate fermions.
They are well established and regularly used in non-perturbative fermion simulations, for instance in lattice QCD.
The original motivation for the use of twisted mass fermions in lattice QCD simulations was the observation that the two lightest quarks (up and down) have very similar mass and can therefore be approximated with the same (degenerate) mass. They form a so-called isospin doublet and are both represented by Wilson fermions in the twisted mass formalism. The name-giving twisted mass is used as a numerical trick, assigned to the two quarks with opposite signs. It acts as an infrared regulator, that is, it makes it possible to avoid unphysical configurations at low energies. In addition, at vanishing physical mass (maximal or full twist) it provides automatic $\mathcal{O}(a)$ improvement, getting rid of the leading-order lattice artifacts linear in the lattice spacing $a$.
The twisted mass Dirac operator is constructed from the (massive) Wilson Dirac operator $D_W + m$ and reads
\[ D_{\mathrm{tm}} = D_W + m + i \mu \gamma_5 \tau_3, \]
where $\mu$ is the twisted mass and acts as an infrared regulator (all eigenvalues $\lambda$ of $D_{\mathrm{tm}}^\dagger D_{\mathrm{tm}}$ obey $\lambda \ge \mu^2$). $\tau_3$ is the third Pauli matrix acting in the flavour space spanned by the two fermions. In the continuum limit the twisted mass becomes irrelevant in the physical sector and only appears in the doubler sectors which decouple due to the use of Wilson fermions.
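A short way to see the infrared-regulator property (a standard argument, written in the conventions assumed above and using the $\gamma_5$-hermiticity of the Wilson operator, $(D_W + m)^\dagger = \gamma_5 (D_W + m) \gamma_5$): the cross terms between the Wilson part and the twisted-mass part cancel, so
\[ D_{\mathrm{tm}}^\dagger D_{\mathrm{tm}} = (D_W + m)^\dagger (D_W + m) + \mu^2 . \]
The spectrum of $D_{\mathrm{tm}}^\dagger D_{\mathrm{tm}}$ is therefore bounded from below by $\mu^2$, so the operator cannot develop arbitrarily small eigenvalues even on rough gauge configurations, which is what makes $\mu$ an infrared regulator.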
References
Lattice field theory
Fermions | Twisted mass fermion | [
"Physics",
"Materials_science"
] | 319 | [
"Fermions",
"Subatomic particles",
"Condensed matter physics",
"Matter"
] |
72,034,350 | https://en.wikipedia.org/wiki/Ginsparg%E2%80%93Wilson%20equation | In lattice field theory, the Ginsparg–Wilson equation generalizes chiral symmetry on the lattice in a way that approaches the continuum formulation in the continuum limit. The class of fermions whose Dirac operators satisfy this equation is known as Ginsparg–Wilson fermions, with notable examples being overlap, domain wall and fixed point fermions. They are a means to avoid the fermion doubling problem, widely used for instance in lattice QCD calculations. The equation was discovered by Paul Ginsparg and Kenneth Wilson in 1982; however, it was quickly forgotten since no solutions were known at the time. It was only in 1997 and 1998 that the first solutions were found, in the form of the overlap and fixed point fermions, at which point the equation rose to prominence.
Ginsparg–Wilson fermions do not contradict the Nielsen–Ninomiya theorem because they explicitly violate chiral symmetry. More precisely, the continuum chiral symmetry relation $\gamma_5 D + D \gamma_5 = 0$ (where $D$ is the massless Dirac operator) is replaced by the Ginsparg–Wilson equation
\[ \gamma_5 D + D \gamma_5 = a\, D \gamma_5 D, \]
which recovers the correct continuum expression as the lattice spacing $a$ goes to zero.
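A standard consequence (stated here in the conventions of the equation above; it is not claimed to be part of the original article text) is that the Ginsparg–Wilson relation implies an exact lattice chiral symmetry: the fermion action $\bar\psi D \psi$ is invariant under
\[ \delta\psi = \gamma_5 \left( 1 - \tfrac{a}{2} D \right) \psi, \qquad \delta\bar\psi = \bar\psi \left( 1 - \tfrac{a}{2} D \right) \gamma_5 , \]
since the variation of the action is proportional to $\bar\psi \left( \gamma_5 D + D \gamma_5 - a D \gamma_5 D \right) \psi$, which vanishes by the Ginsparg–Wilson equation.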
In contrast to Wilson fermions, Ginsparg–Wilson fermions do not modify the inverse fermion propagator additively but multiplicatively, thus lifting the unphysical poles at $p_\mu = \pi/a$. The exact form of this modification depends on the individual realisation.
References
Lattice field theory
Fermions | Ginsparg–Wilson equation | [
"Physics",
"Materials_science"
] | 294 | [
"Matter",
"Fermions",
"Quantum physics stubs",
"Quantum mechanics",
"Condensed matter physics",
"Subatomic particles"
] |
72,034,422 | https://en.wikipedia.org/wiki/Joan%20Ehrenfeld | Joan Gardner Ehrenfeld (1948 – 2011) was an American environmental scientist who was a professor at Rutgers University. Her research considered invasive species and ecology. She was elected Fellow of the American Association for the Advancement of Science in 2000.
Early life and education
Ehrenfeld was born in New York City. Her mother was a violinist but encouraged Ehrenfeld to pursue a career in the sciences. Ehrenfeld said she remembered reading Paul de Kruif's Microbe Hunters as a child. As a teenager, she was selected by the National Science Foundation for a summer placement in the laboratory of Donald Ritchie at Barnard College. She returned to Barnard College for undergraduate studies, where she specialized in biology. She also completed a summer program at Colorado State University, and spent time working in a molecular biology lab. Ehrenfeld moved to Harvard University, where she earned a master's degree in 1970. She was a doctoral researcher at the City University of New York, where she studied the ecological interactions of Euphorbia.
Research and career
In 1976, Ehrenfeld was appointed to the faculty of the Center for Coastal and Environmental Studies at Rutgers University. She was made Director of the New Jersey Water Resources Research Institute in 1990. Ehrenfeld worked on wetlands ecology and was particularly interested in the relationships between biodiversity and human disease. She extensively studied the spread of the West Nile virus.
Ehrenfeld investigated how Berberis thunbergii (Japanese barberry) impacted soil processes and micro-organisms. She found that barberry tissue is high in nitrogen-rich alkaloid compounds, which causes a loss of organic matter in nearby soil due to excessive nitrogen cycling. As barberry starts to decompose, the nitrate levels in nearby soil increase, making the areas susceptible to weeds. Ehrenfeld removed barberry in the Morristown National Historical Park and attempted to restore native shrubs (spicebush and witch-hazel). These native plants could not survive, as the barberry had transformed the soil itself. She thus showed that just one plant species can have a dramatic impact on its environment.
In 2012, the Ecological Society of America launched the Ehrenfeld Award to celebrate her contribution to urban ecology. In 2019, the New York–New Jersey Trail Conference established the Joan Ehrenfeld Award for Responsible Stewardship.
Selected publications
Awards and honours
1999 Cook College Academic Professional Excellence Award for Academic Innovation and Creativity
2000 Elected Fellow of the American Association for the Advancement of Science
2003 Research Excellence and Impact Award
Science Advisory Board of the United States Environmental Protection Agency
2010 Elected Fellow of the Society of Wetland Scientists
2011 Research Excellence Award from the School of Environmental and Biological Sciences
Personal life
Ehrenfeld had four children. Her husband, David Ehrenfeld, was a professor of biology at Rutgers University. In 2010, she was diagnosed with leukemia. She was an examiner for Swarthmore College. She was a member of the Jewish community, and dedicated her weekends to music and the choir. Ehrenfeld died on June 25, 2011.
References
1948 births
2011 deaths
Scientists from New York City
Barnard College alumni
Harvard University alumni
City University of New York alumni
Rutgers University faculty
American environmental scientists
Jewish American scientists
Fellows of the American Association for the Advancement of Science
Deaths from leukemia | Joan Ehrenfeld | [
"Environmental_science"
] | 653 | [
"American environmental scientists",
"Environmental scientists"
] |
72,037,987 | https://en.wikipedia.org/wiki/Aubreville%27s%20model | Aubreville's model is a tree architectural model named after André Aubréville, who identified this pattern as common in the Sapotaceae. It is a monopodial model, characterized by a single axis with rhythmic growth. In this model, each cycle of growth produces a new group of horizontally arranged branches, which themselves develop as complex sympodial axes that support leafy rosettes and flowers. Linnaeus used this feature as a distinctive character while naming the genus Terminalia.
References
Plant morphology
Plant taxonomy | Aubreville's model | [
"Biology"
] | 105 | [
"Plant morphology",
"Plant taxonomy",
"Plants"
] |
72,039,476 | https://en.wikipedia.org/wiki/UPt3 | UPt3 is an inorganic binary intermetallic crystalline compound of platinum and uranium.
Production
It can be synthesised in the following ways:
as an intermetallic compound, by direct fusion of pure components according to stoichiometric calculations:
U + 3Pt → UPt3
by reduction of uranium dioxide with hydrogen in the presence of platinum:
UO2 + 2H2 + 3Pt → UPt3 + 2H2O
Physical properties
UPt3 forms crystals of hexagonal symmetry (some studies hypothesize a trigonal structure instead), space group P63/mmc, cell parameters a = 0.5766 nm and c = 0.4898 nm (c should be understood as distance from planes), with a structure similar to nisnite (Ni3Sn) and MgCd3.
The compound melts congruently at 1700 °C. The enthalpy of formation of the compound is -111 kJ/mol.
At temperatures below 1 K it becomes superconducting; the superconductivity is thought to involve heavy-fermion quasiparticles associated with the uranium 5f electrons.
References
Platinum compounds
Uranium compounds
Intermetallics | UPt3 | [
"Physics",
"Chemistry",
"Materials_science"
] | 230 | [
"Inorganic compounds",
"Metallurgy",
"Intermetallics",
"Condensed matter physics",
"Alloys"
] |
72,041,443 | https://en.wikipedia.org/wiki/Overlap%20fermion | In lattice field theory, overlap fermions are a fermion discretization that avoids the fermion doubling problem. They are a realisation of Ginsparg–Wilson fermions.
Initially introduced by Neuberger in 1998, they were quickly taken up for a variety of numerical simulations. By now overlap fermions are well established and regularly used in non-perturbative fermion simulations, for instance in lattice QCD.
Overlap fermions with mass are defined on a Euclidean spacetime lattice with spacing by the overlap Dirac operator
where $A$ denotes the "kernel" Dirac operator obeying $A^\dagger = \gamma_5 A \gamma_5$, i.e. $A$ is $\gamma_5$-hermitian. The sign function usually has to be calculated numerically, e.g. by rational approximations.
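Purely as an illustration of the numerical task (the function name and the small random test matrix below are made up for this sketch and are not from the article; production codes use rational approximations instead), the matrix sign function of a Hermitian operator can be computed from its eigendecomposition:

```python
import numpy as np

def matrix_sign(h: np.ndarray) -> np.ndarray:
    """Sign function of a Hermitian matrix: map every eigenvalue to +1 or -1."""
    w, v = np.linalg.eigh(h)              # real eigenvalues w, orthonormal eigenvectors v
    return (v * np.sign(w)) @ v.conj().T  # sum_i sign(w_i) |v_i><v_i|

# Small random Hermitian matrix standing in for gamma_5 times the kernel operator.
rng = np.random.default_rng(0)
m = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
h = (m + m.conj().T) / 2
s = matrix_sign(h)
print(np.allclose(s @ s, np.eye(4)))      # True: sign(H)^2 = 1, as required
```

For realistic lattice volumes the kernel is far too large for a full eigendecomposition, which is why rational approximations of the sign function, typically combined with exact treatment of the lowest kernel modes, are used in practice.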
where is the massless Dirac operator and is a free parameter that can be tuned to optimise locality of .
Near $p = 0$ the overlap Dirac operator recovers the correct continuum form $m + i\slashed{p}$ (using the Feynman slash notation), whereas the unphysical doublers near $p_\mu = \pi/a$ are suppressed by a high mass of order $1/a$ and decouple.
Overlap fermions do not contradict the Nielsen–Ninomiya theorem because they explicitly violate chiral symmetry (obeying the Ginsparg–Wilson equation) and locality.
References
Lattice field theory
Fermions | Overlap fermion | [
"Physics",
"Materials_science"
] | 274 | [
"Fermions",
"Subatomic particles",
"Condensed matter physics",
"Matter"
] |
72,041,620 | https://en.wikipedia.org/wiki/Domain%20wall%20fermion | In lattice field theory, domain wall (DW) fermions are a fermion discretization avoiding the fermion doubling problem. They are a realisation of Ginsparg–Wilson fermions in the infinite separation limit where they become equivalent to overlap fermions. DW fermions have undergone numerous improvements since Kaplan's original formulation such as the reinterpretation by Shamir and the generalisation to Möbius DW fermions by Brower, Neff and Orginos.
The original $d$-dimensional Euclidean spacetime is lifted into $d+1$ dimensions. The additional dimension of length $L_s$ has open boundary conditions, and the so-called domain walls form its boundaries. The physics is now found to "live" on the domain walls, and the doublers are located on opposite walls, that is, at $L_s \to \infty$ they completely decouple from the system.
Kaplan's (and equivalently Shamir's) DW Dirac operator is defined by two addends
with
where $P_\pm = (1 \pm \gamma_5)/2$ are the chiral projection operators and $D$ is the canonical Dirac operator in $d$ dimensions.
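In Shamir's widely used formulation the two addends referred to above have the following schematic structure (signs and boundary conventions vary between papers and are assumptions of this sketch, not taken from the article):
\[ D_{\mathrm{DW}}(x, s; y, s') = \delta_{s s'} \, D_{\mathrm{W}}(x; y) + \delta_{x y} \, D_5(s; s'), \]
where $D_{\mathrm{W}}$ is a four-dimensional Wilson-type Dirac operator with a negative mass shift (the domain wall height) acting in the $d$ physical dimensions, and $D_5$ hops between neighbouring slices of the extra dimension through the chiral projectors $P_\pm$, with the physical quark mass entering only in the terms that connect the two boundary walls.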
DW fermions do not contradict the Nielsen–Ninomiya theorem because they explicitly violate chiral symmetry (asymptotically obeying the Ginsparg–Wilson equation).
References
Lattice field theory
Fermions | Domain wall fermion | [
"Physics",
"Materials_science"
] | 285 | [
"Fermions",
"Subatomic particles",
"Condensed matter physics",
"Matter"
] |
72,041,631 | https://en.wikipedia.org/wiki/Celestial%20police | The Celestial police, officially the United Astronomical Society (Vereinigte Astronomische Gesellschaft), was a cooperation of numerous European astronomers in the early 19th century. It is mainly known in relation to the search for objects expected between the orbits of Mars and Jupiter. It was formed in 1800 at the second European congress of astronomers. At the first such congress, in 1798, the French mathematician Jérôme Lalande had called for a coordinated search, in which each participating observatory would patrol a particular part of the sky. The group confirmed or discovered the first four minor planets, which led to the identification of the asteroid belt. They also initiated the compilation of better star catalogues and the investigation of variable stars. They pioneered international collaboration and communication in astronomy.
Founding
In 1798 Franz Xaver von Zach had organised and hosted the first European congress of astronomers at his observatory in Gotha. Zach was also editor of the monthly journals Allgemeine Geographische Ephemeriden (since 1798) and Monatliche Correspondenz zur Beförderung der Erd- und Himmels-Kunde (since 1800). The second congress in 1800 was held with a smaller attendance and a more focused agenda in Lilienthal, at the observatory of Johann Hieronymus Schröter. Schröter had arranged for a visit by Prince Adolph Frederick to coincide with the congress.
Foremost on the agenda for the congress was the founding of the Vereinigte Astronomische Gesellschaft (United Astronomical Society). Six astronomers were present to found the society on 20 September 1800, with Schröter as president and von Zach as director or secretary. The founding members were:
Johann Hieronymus Schröter (Lilienthal), president
Adolf von Ende (Celle)
Johann Gildemeister (Bremen)
Wilhelm Olbers (Bremen)
Karl Ludwig Harding (Lilienthal)
Franz Xaver von Zach (Gotha), director or secretary
Tasks
Star catalogues
The main workload for the society was the compilation of more precise star catalogues and the improvement of knowledge of spherical astronomy and coordinate systems.
This was required for two reasons:
It was necessary to identify and locate the positions of fainter celestial objects than in the past.
Sound definitions of coordinate systems were needed as basis for the precise determination of the orbits of newly discovered celestial bodies.
An area of 15° width centred on the ecliptic was to be catalogued. To share the workload, the ecliptic was divided into 24 zones each extending 15° in longitude and 7° or 8° either side in latitude.
New comets and further planets
The task that the Celestial police is best known for was the search for a small planet that was expected to exist between the orbits of Mars and Jupiter. The existence of such a body followed from the Titius-Bode law, a geometric series of the orbital radii from Mercury to Uranus, which has a gap at 2.8 astronomical units. Even Johannes Kepler had postulated such an undiscovered planet in 1596 in his Mysterium Cosmographicum.
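In its common modern form (the explicit formula is a standard statement of the law, not quoted from this article), the Titius–Bode rule reads
\[ a_n = 0.4 + 0.3 \times 2^{n} \ \text{AU}, \qquad n = -\infty, 0, 1, 2, \ldots, \]
giving approximately 0.4, 0.7, 1.0, 1.6, 2.8, 5.2, 10.0 and 19.6 AU; the value 2.8 AU, sitting between Mars (1.6 AU) and Jupiter (5.2 AU), is the gap the astronomers set out to fill.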
Given the discovery of Uranus in 1781, more planets might also be found beyond Saturn, and new comets, including ones visible only through telescopes, might be discovered.
Further tasks
The question of stellar parallax and the distance of the stars was an important topic at the turn from the 18th to the 19th century. This was hence also on the agenda of the Celestial police.
Another new topic of astronomical research in the early 19th century was the surveillance of variable stars and novae.
As an international collaboration of astronomers, the Celestial police also noted the need for communication, both among participants and through a publication like von Zach's Monatliche Correspondenz zur Beförderung der Erd- und Himmels-Kunde.
Members
The division of labour into 24 zones of ecliptic longitude required the Celestial police to have 24 members, with one zone allocated to each member. The canonical list of the 24 members of the celestial police is:
Johann Elert Bode (Berlin)
Johann Tobias Bürg (Vienna)
Thomas Bugge (Copenhagen)
Johann Karl Burckhardt (Paris)
Adolf von Ende (Celle)
Johann Gildemeister (Bremen)
Karl Ludwig Harding (Lilienthal)
William Herschel (Slough)
Johann Sigismund Gottfried Huth (Frankfurt (Oder))
Georg Simon Klügel (Halle (Saale))
Julius August Koch (Gdansk)
Nevil Maskelyne (Greenwich)
Daniel Melanderhjelm (Stockholm)
Pierre Méchain (Paris)
Charles Messier (Paris)
Wilhelm Olbers (Bremen)
Barnaba Oriani (Milan)
Giuseppe Piazzi (Palermo)
Johann Hieronymus Schröter (Lilienthal)
Theodor von Schubert (Saint Petersburg)
Jöns Svanberg (Uppsala); Svanberg replaced Jan Śniadecki (Kraków) when the future of his observatory was in doubt
Joseph Thulis (Marseille)
Johann Friedrich Wurm (Blaubeuren)
Franz Xaver von Zach (Gotha)
Jérôme Lalande had been invited, but declined due to other commitments. Some invitations may have been issued late or may never have arrived. Not every invitee actively participated in the survey of the ecliptic, and others who worked on the tasks, such as Friedrich Bessel, are not included in the group. Carl Friedrich Gauss (Braunschweig) became a member in 1801 and, jointly with Olbers, became foreign correspondent in 1804.
Results
Ceres
On 1 January 1801, apparently by coincidence and independent of the Celestial police, Piazzi was working on a star catalogue and found a moving object, the first minor planet, (1) Ceres. He announced it as a new comet, but due to the lack of nebulosity suspected it might be a small planet. It was not until September 1801 that his complete observations were published. Gauss then developed his method of determining orbits from astrometric observations. This not only confirmed a planetary rather than a cometary orbit, but also enabled von Zach and Olbers to "recover" the minor planet, i.e. to find it again after its passage behind the Sun.
The orbit of Ceres matched the requirement from the Titius-Bode law, the planet missing between Mars and Jupiter seemed to have been found. But it was disappointingly faint.
Pallas, Juno and Vesta
In March 1802 Olbers was working on the star catalogue of his zone, in preparation of Ceres arriving in the area, when he discovered another moving star, the second minor planet, (2) Pallas.
The presence of two minor planets between Mars and Jupiter had several consequences. It cast doubt on the Titius-Bode law, which called for a single, large planet. It prompted William Herschel, discoverer of Uranus, to propose an alternative term "asteroid" instead of "planet". While the use of "planet" could not continue, "asteroid" was not generally accepted until decades later.
Olbers took the presence of two minor planets to suggest that a former planet had been destroyed by a collision with a comet. This could restore the Titius-Bode law and offered hope of finding more minor planets, in particular at the crossing points of the orbits of Ceres and Pallas. Huth and von Zach favoured the opposite idea: the minor planets were just that, small planets in a region where they had failed to form a full-size planet.
Pursuing Olbers' idea, Harding found (3) Juno in September 1804, and Olbers found (4) Vesta in March 1807.
Further developments
After discovering such a large number of relatively small objects in similar orbits, it became clear that no planet-sized object was likely to exist in that region, and the group members' interest in the search waned. Additionally, the Napoleonic Wars had disrupted the work of several group members, especially when the war came to Lilienthal, where Schröter's observatory had served as the home for many of the scientists working with the celestial police. Schröter died in 1816; other members of the Celestial police had moved elsewhere or changed the focus of their work. It would be another generation before any further major discoveries of planets (or even large asteroids) occurred.
The division of labour pioneered by the Celestial police led, around 1850, to the concept of surveys and to the compilation of catalogues of nebulae. The most famous star catalogue of the 19th century is the Bonner Durchmusterung with 300,000 stars, which was later extended through the work of more southerly observatories.
See also
Astronomische Gesellschaft
List of astronomical societies
References
Further reading
Astronomers
Astronomy organizations
Organizations established in 1800
Planetary science | Celestial police | [
"Astronomy"
] | 1,781 | [
"Astronomers",
"Astronomy organizations",
"People associated with astronomy",
"Planetary science",
"Astronomical sub-disciplines"
] |
72,042,440 | https://en.wikipedia.org/wiki/Keiko%20Nishikawa | Keiko Nishikawa (, born 27 November 1948) is a Japanese physical chemist known for her studies of supercritical fluids. She is an emeritus professor at Chiba University and research fellow at the Toyota Physical and Chemical Research Institute.
Education and career
Nishikawa studied chemistry at the University of Tokyo, earning a bachelor's degree in 1972, a master's degree in 1974, and a doctorate in 1981.
She became an assistant professor in the faculty of science at Gakushuin University, and remained there until 1991, when she moved to Yokohama National University as an associate professor in the faculty of education. From 1996 to 2014 she was a professor in the graduate school of natural science at Chiba University.
She retired as an emeritus professor in 2014, also retaining a position at Chiba University as research professor from 2014 to 2018. She was inspector general for the Japan Society for the Promotion of Science from 2014 to 2018, and took her present position as fellow at the Toyota Research Institute in 2018.
Recognition
In 1988, Nishikawa won the award of the Crystallographic Society of Japan, and in 1998 she won the Saruhashi Prize. She won the award of the Chemical Society of Japan in 2012, and in the same year was given a commendation by the Ministry of Education, Culture, Sports, Science and Technology. She won the Japanese Medal with Purple Ribbon in 2013. In 2014 she won the award of the Japan Society for Molecular Science.
References
External links
Home page
1948 births
Living people
Japanese chemists
Japanese women chemists
Women physical chemists
University of Tokyo alumni
Academic staff of Gakushuin University
Academic staff of Yokohama National University | Keiko Nishikawa | [
"Chemistry"
] | 331 | [
"Women physical chemists",
"Physical chemists"
] |