| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
71,376,696 | https://en.wikipedia.org/wiki/Leucocoprinus%20holospilotus | Leucocoprinus holospilotus is a species of mushroom-producing fungus in the family Agaricaceae.
Taxonomy
It was first described in 1871 by the British mycologists Miles Joseph Berkeley & Christopher Edmund Broome who classified it as Agaricus holospilotus.
In 1887 it was classified as Lepiota holospilota by the Italian botanist and mycologist Pier Andrea Saccardo and then as Mastocephalus holospilotus in 1891 by the German botanist Otto Kuntze.
In 1990 it was reclassified as Leucocoprinus holospilotus by the British mycologist Derek Reid. Although the French mycologist Marcel Bon classified it as Leucoagaricus holospilotus in 1993, the Leucocoprinus placement is the one currently accepted by Species Fungorum and MycoBank, though this may be erroneous. This taxonomic history is very similar to that of other species in the Leucocoprinus and Leucoagaricus genera, as the similarities between some of these species make them hard to place.
Description
Leucocoprinus holospilotus is a small dapperling mushroom with white or pale yellow flesh.
Cap: 3–5 cm wide. Starts bulbous before expanding to campanulate or flat with a slightly raised centre and striations at the cap edges. The flesh is thicker and firmer than similar species and up to 4mm at the centre of cap. The cap surface is white and silky with dark red tones which deepen when dry. It is covered in brownish-purple scales (squamules) which are sparsely scattered at the cap edges and more densely packed together at the centre disc. Gills: Free, crowded, pale yellow in colour and may bruise orange. Stem: 5–11 cm tall and 3–10mm thick sometimes with a slightly wider base. The interior of the stem is white or pale yellow and hollow whilst the exterior is pale brown with fine brownish-purple squamules similar to those of the cap. These occur both above and below the persistent, descending stem ring which is located in the middle of the stem (median). It is white but also exhibits brownish-purple squamules at the edges. Spores: Ovoid to ellipsoid. Dextrinoid. 7–8.5 x 4.5–5.7 μm.
Habitat and distribution
L. holospilotus is scarcely recorded, little known and may be confused with numerous other Leucocoprinus or Leucoagaricus species.
Its full distribution is unclear; however, the specimens studied by Saccardo in 1887 were from Ceylon (now Sri Lanka) and were found growing on the ground. It has also been found in the Indian state of Kerala, in 2003.
References
holospilotus
Fungi described in 1871
Fungus species
Fungi of Europe | Leucocoprinus holospilotus | [
"Biology"
] | 594 | [
"Fungi",
"Fungus species"
] |
71,376,958 | https://en.wikipedia.org/wiki/Leucocoprinus%20delicatulus | Leucocoprinus delicatulus is a species of mushroom-producing fungus in the family Agaricaceae.
Taxonomy
It was first described in 2009 by the Indian mycologists T.K. Arun Kumar & Patinjareveettil Manimohan who classified it as Leucocoprinus delicatulus.
Description
Leucocoprinus delicatulus is a small dapperling mushroom with thin (up to 1mm thick) whitish flesh.
Cap: 1-4.1cm wide with a white and grey convex cap which may flatten with age. It is covered in scales (squamules) which are sparse at the edge of the cap and concentrated more towards the disc. It is striate towards the edges of the cap which curves inward at first and later flattens or erodes. Gills: Free, crowded and whitish. Stem: 4.5-6cm tall and 1-2mm thick expanding to up to 5mm at the base where there is white mycelium. The exterior of the hollow stem is whitish and discolours to greyish brown with age or from bruising. The membranous, ascending stem ring is persistent and can be located in the middle of the stem or towards the top or bottom. Spore print: White. Spores: Ellipsoid or subamygdaliform with a thick wall and truncate germ pore. Hyaline with guttules. Dextrinoid, metachromatic and cyanophilic. 9-12 x 6-7 μm. Basidia: 16-24 x 11-21 μm. Smell: Indistinct.
Etymology
The specific epithet delicatulus is Latin for 'delicate'.
Habitat and distribution
L. delicatulus is scarcely recorded, little known and may be confused with numerous other Leucocoprinus or Leucoagaricus species. The specimens studied were growing individually or scattered amongst decaying leaf litter in the state of Kerala, India.
Similar species
Leucocoprinus ianthinus is very similar in appearance and is distinguished via microscopic differences.
References
Leucocoprinus
Fungi described in 2009
Fungus species | Leucocoprinus delicatulus | [
"Biology"
] | 451 | [
"Fungi",
"Fungus species"
] |
71,377,042 | https://en.wikipedia.org/wiki/Leucocoprinus%20munnarensis | Leucocoprinus munnarensis is a species of mushroom-producing fungus in the family Agaricaceae.
Taxonomy
It was first described in 2009 by the Indian mycologists T.K. Arun Kumar & Patinjareveettil Manimohan who classified it as Leucocoprinus munnarensis.
Description
Leucocoprinus munnarensis is a small dapperling mushroom with thin (up to 1mm thick) whitish flesh which discolours brown.
Cap: 2.7-5.1cm wide with a white, convex cap which may flatten with age. It is covered in scattered fine dark grey or blackish scales (squamules) which are sparse at the edge of the cap and concentrated more towards the umbonate centre disc. It is striate towards the edges of the cap which curves inward at first and later flattens or erodes. Gills: Free, crowded and whitish. Stem: 5-8cm tall and 3-5mm thick expanding to up to 5mm at the base where there is white mycelium. The exterior of the stem is whitish and discolours to brown with damage or contact and the interior hollows with age. The membranous stem ring is located towards the top of the stem (superior) and may disappear. Spores: Amygdaliform with a germ pore. Dextrinoid. 8.5-12.5 x 6-8 μm. Smell: Indistinct.
Etymology
The specific epithet munnarensis is named for the town of Munnar in the Indian state of Kerala where this species was observed.
Habitat and distribution
L. munnarensis is scarcely recorded, little known and may be confused with numerous other Leucocoprinus or Leucoagaricus species. The specimens studied were growing individually or scattered on soil in the state of Kerala, India.
Similar species
Leucocoprinus brebissonii is similar in appearance and is distinguished via the lack of brown bruising as well as microscopic differences.
References
Leucocoprinus
Fungi described in 2009
Fungi of India
Fungus species | Leucocoprinus munnarensis | [
"Biology"
] | 441 | [
"Fungi",
"Fungus species"
] |
71,377,130 | https://en.wikipedia.org/wiki/Leucocoprinus%20pusillus | Leucocoprinus pusillus is a species of mushroom-producing fungus in the family Agaricaceae.
Taxonomy
It was first described in 2009 by the Indian mycologists T.K. Arun Kumar & Patinjareveettil Manimohan who classified it as Leucocoprinus pusillus.
Description
Leucocoprinus pusillus is a small dapperling mushroom with thin (up to 1mm thick) whitish flesh which discolours brown.
Cap: 1.3-1.6cm wide with a white, bulbous cap which expands with age to become convex with an indistinct umbo. It is covered in scattered fine dark brown or dark grey scales (squamules) which are more concentrated towards the centre disc. It has striations (sulcate-striate) towards the edges of the cap which curves inward at first and later flattens. Gills: Free, crowded and whitish but discolouring yellowish white or brown with age or upon drying. Stem: 2-2.2cm tall and 3-5mm thick and expanding towards the base where there is white mycelium. The exterior of the stem is whitish and discolours to brown with damage. The membranous, ascending stem ring is located towards the top of the stem (superior) and has dark brown scales on the edges. Spores: Ovoid or ellipsoid with a tiny germ pore. Dextrinoid. 7-10 x 5-6 μm. Smell: Indistinct.
Etymology
The specific epithet pusillus is Latin for very small, tiny or insignificant. This is a reference to the diminutive size and stature of this species of mushroom.
Habitat and distribution
L. pusillus is scarcely recorded, little known and may be confused with numerous other Leucocoprinus or Leucoagaricus species. The specimens studied were growing individually on manure rich soil in the state of Kerala, India.
References
Leucocoprinus
Fungi described in 2009
Fungi of India
Fungus species | Leucocoprinus pusillus | [
"Biology"
] | 428 | [
"Fungi",
"Fungus species"
] |
71,377,398 | https://en.wikipedia.org/wiki/Leucocoprinus%20wynneae | Leucocoprinus wynneae is a species of mushroom-producing fungus in the family Agaricaceae.
Taxonomy
It was first described in 1879 by the British mycologists Miles Joseph Berkeley and Christopher Edmund Broome, who classified it as Hiatula wynneae (or wynniae).
In 1943 it was reclassified as Leucocoprinus wynneae (or wynniae) by the French mycologist Marcel Locquin.
Description
Leucocoprinus wynneae is a small, white dapperling mushroom. Berkeley and Broome provided only a very basic description of this species in 1879 which is not enough to adequately distinguish it from other species.
Cap: 3.2cm wide. White, with a soft, powdery surface and a darker centre. Stem: 2.5cm tall and 1.5mm thick. Slender and striated.
Etymology
The specific epithet wynneae is named for Mrs. Lloyd Wynne who found the specimen examined by Berkeley and Broome.
Habitat and distribution
L. wynneae is scarcely recorded and little known. It was first found in a hothouse at Kew Gardens by Mrs. Lloyd Wynne. It has not been recorded there since but has been observed in the wild in Queensland, Australia and in Sri Lanka. However, the Atlas of Living Australia has only a single record of L. wynneae, from 1887.
References
Leucocoprinus
Fungi described in 1879
Taxa named by Miles Joseph Berkeley
Taxa named by Christopher Edmund Broome
Fungus species | Leucocoprinus wynneae | [
"Biology"
] | 307 | [
"Fungi",
"Fungus species"
] |
71,377,690 | https://en.wikipedia.org/wiki/Glossary%20of%20lichen%20terms | This glossary provides an overview of terms used in the description of lichens, composite organisms arising from algae or cyanobacteria living symbiotically among filaments of multiple fungus species.
Erik Acharius, known as the "father of lichenology," coined many lichen terms still in use today around the turn of the 19th century. Before that, only a couple of lichen-specific terms had been proposed: Johann Dillenius introduced a term in 1742 to describe the cup-shaped structures associated with the genus Cladonia, while in 1794 Michel Adanson introduced another for the furrowed fruitbodies of the genus Graphis. Acharius introduced numerous terms to describe lichen structures. In 1825, Friedrich Wallroth published the first volume of his multi-volume work Naturgeschichte der Flechten ("Natural History of Lichens"), in which he proposed an alternative terminology based largely on roots from the Greek language. His work, presented as an alternative to that of Acharius (of whom he was critical), was not well received, and only a few of the terms he proposed gained widespread acceptance, the last of which remained in use until the 1960s. By about 1850, 21 terms for features of the lichen thallus had been introduced that remain in use today.
The increasing availability of the optical microscope as an aid to identifying and characterizing lichens led to the creation of new terms to describe structures that were previously too small to be visualized. Contributions were made by Julius von Flotow, Edmond Tulasne, and William Nylander. Gustav Wilhelm Körber, an early proponent of using spore structure as a character in lichen taxonomy, introduced several spore terms, including "polari-dyblastae", later anglicized to "polari-bilocular" and then shortened further. In the five decades that followed, many other additions were made to the repertoire of lichen terms, following the increased understanding of lichen anatomy and physiology made possible by microscopy. For whatever reasons, no new terms that are still in current use were introduced in the period from 1906 to 1945, when Gustaf Einar Du Rietz proposed replacing two existing terms with two new ones; all four terms remain in use. In some cases, older terminology became obsolete as a better understanding of the nature of the fungal–algal relationship led to changes in terminology. For example, after Gunnar Degelius objected to the term then used for the algal partner, George Scott proposed new terms for the lichen components, recommendations that were generally accepted by lichenologists.
This glossary includes terms defining features of lichens unique to their composite nature, such as the two major components of lichens (the fungal and the photosynthetic partners); specialized structures in lichen physiology; descriptors of types of lichens; two- and three-dimensional shapes used to describe spores and other lichen structures; terms of position and shape; prefixes and suffixes commonly used to form lichen terms; terminology used in methods for the chemical identification of lichens; the names of 22 standard insoluble lichen pigments and their associated reference species; and "everyday" words that have a specialized meaning in lichenology. The list also includes a few historical terms that have been supplanted or are now considered obsolete. Familiarity with these terms is helpful for understanding older literature in the field.
A
B
C
D
E
F
G
H
I
J
K
L
M
N
O
P
R
S
T
U
V
X
Z
See also
Glossary of biology
Glossary of mycology
Glossary of scientific naming
List of common names of lichen genera
List of Latin and Greek words commonly used in systematic names
Citations
Sources
Lichens
Glossaries of biology
Wikipedia glossaries using description lists | Glossary of lichen terms | [
"Biology"
] | 802 | [
"Glossaries of biology"
] |
71,378,144 | https://en.wikipedia.org/wiki/Leucocoprinus%20bakeri | Leucocoprinus bakeri is a species of mushroom-producing fungus in the family Agaricaceae.
Taxonomy
It was first described in 1952 by the British mycologist Richard William George Dennis who classified it as Lepiota bakeri.
In 1982, it was reclassified as Leucocoprinus bakeri by the German mycologist Rolf Singer.
Description
Leucocoprinus bakeri is a small dapperling mushroom with white flesh.
Cap: 7 cm wide. Convex, with a pinkish-buff (light brownish yellow) surface and fine brown scales (squamules) and a brown umbo. It is striated at the edges of the cap. Stem: Bulbous at the base and tapering to the tip with a pinkish-buff surface that has woolly (tomentose) scales below the ring. The membranous stem ring is located towards the top of the stem (superior) and is white with brown edges. Gills: Free, crowded (5-6mm) and white. Spores: Elliptical, dextrinoid, 5-7 x 3.5-4 μm.
Habitat and distribution
L. bakeri is scarcely recorded and little known. It has been found in Costa Rica and Trinidad.
References
Leucocoprinus
Taxa named by Rolf Singer
Fungi described in 1952
Fungus species | Leucocoprinus bakeri | [
"Biology"
] | 276 | [
"Fungi",
"Fungus species"
] |
71,378,146 | https://en.wikipedia.org/wiki/Skyttea%20ochrolechiae | Skyttea ochrolechiae is a species of lichenicolous (lichen-dwelling) fungus in the family Cordieritidaceae. Found in Japan, it was formally described as a new species in 2015 by Russian mycologist Mikhail P. Zhurbenko. The fungus grows on the epiphytic lichen Ochrolechia trochophora, but does not visibly damage its host. It is somewhat similar to Skyttea fusispora, but unlike that fungus, it has an orange-brown exciple that stains purple with the K test, and has shorter ascospores, typically measuring 14.7‒18.9 by 3.1‒3.9 μm.
References
Leotiomycetes
Fungi described in 2015
Lichenicolous fungi
Taxa named by Mikhail Petrovich Zhurbenko
Fungus species | Skyttea ochrolechiae | [
"Biology"
] | 177 | [
"Fungi",
"Fungus species"
] |
71,378,293 | https://en.wikipedia.org/wiki/Leucocoprinus%20beelianus | Leucocoprinus beelianus is a species of mushroom-producing fungus in the family Agaricaceae.
Taxonomy
It was first described in 1932 by the Belgian mycologist Maurice Beeli and was illustrated in 1936. Beeli had classified the species as Lepiota citrinella apparently without realising that this name had already been used by the Argentinian mycologist Carlo Luigi Spegazzini in 1898. Thus Beeli's classification was illegitimate.
In 1977 the Belgian mycologist Paul Heinemann classified it as Leucocoprinus beelianus and recognised Beeli's Lepiota citrinella as a synonym. Heinemann specifically stated that it was not the same as Spegazzini's Lepiota citrinella, which was ultimately reclassified as Leucocoprinus citrinellus in 1987.
Description
Leucocoprinus beelianus is a dapperling mushroom with thin white flesh.
Cap: 5-8cm wide, campanulate (bell shaped) and flattening as it expands. The umbo or centre disc is thicker than the rest of the cap and is reddish brown with woolly scales (tomentose). The rest of the cap surface is devoid of scales and pale yellow but white towards the edges, where striations are present and run a third of the way up the cap, or less. Stem: 5-11cm long and 3.5-5mm thick with a slightly thicker base of up to 10mm. The exterior surface is light brown and has similar woolly scales to the cap whilst the interior is hollow. The membranous, immobile stem ring is located towards the top of the stem (superior) and is brownish with more pronounced brown edges. Gills: Free with a small collar, crowded and white. Spores: Amygdaliform. 8.4-12.3 x 5.2-7.2 μm. Taste: Bitter. When dried, specimens discolour to a reddish brown.
Habitat and distribution
L. beelianus is scarcely recorded and little known. Beeli's and Heinemann's studies were based on specimens from Zaire, Central Africa (now the Democratic Republic of the Congo), where they were found on the ground and on dead wood in the forest near the town of Binga and were described as 'abundant'. Specimens of Beeli's Lepiota citrinella were also found in Gabon in Africa. GBIF contains only one recorded observation of L. beelianus.
Etymology
The specific epithet beelianus is named for the Belgian mycologist Maurice Beeli who originally classified this species but provided an invalid name.
References
Leucocoprinus
Fungi described in 1932
Fungi of Africa
Taxa named by Paul Heinemann
Taxa named by Maurice Beeli
Fungus species | Leucocoprinus beelianus | [
"Biology"
] | 582 | [
"Fungi",
"Fungus species"
] |
71,379,039 | https://en.wikipedia.org/wiki/Parvibellus | Parvibellus is an extinct genus of panarthropod animal known from the Cambrian of China. It is known from only a single species, P. atavus, found in the Cambrian Stage 3 aged Chengjiang Biota of Yunnan, China.
Morphology
Parvibellus is a small panarthropod with a length of around . The head bore a pair of small frontal appendages and a ventrally directed circular mouth. There is no evidence that Parvibellus had eyes. The elongated trunk possesses 11 pairs of lateral appendages and a pair of terminal projections.
In the original description, the trunk appendages were interpreted as swimming flaps, which suggested a nektonic lifestyle and a close relationship with stem-group arthropods such as the "gilled lobopodians" Kerygmachela and Pambdelurion, the opabiniids and the radiodonts. However, recent research suggests it may instead be a larval siberiid; siberiids are a group of benthic lobopodians nested within the arthropod stem group, and under this interpretation the trunk appendages are re-interpreted as stout lobopods. Since it may represent the larva of any of the described siberiids from the same strata (e.g. Megadictyon, Jianshanopodia) and cannot be accurately identified, Parvibellus is considered to be a nomen dubium.
References
Fossil taxa described in 2022
Panarthropoda
Dinocaridida
Nomina dubia | Parvibellus | [
"Biology"
] | 308 | [
"Biological hypotheses",
"Nomina dubia",
"Controversial taxa"
] |
71,380,095 | https://en.wikipedia.org/wiki/Twin-width | The twin-width of an undirected graph is a natural number associated with the graph, used to study the parameterized complexity of graph algorithms. Intuitively, it measures how similar the graph is to a cograph, a type of graph that can be reduced to a single vertex by repeatedly merging together twins, vertices that have the same neighbors. The twin-width is defined from a sequence of repeated mergers where the vertices are not required to be twins, but have nearly equal sets of neighbors.
Definition
Twin-width is defined for finite simple undirected graphs. These have a finite set of vertices, and a set of edges that are unordered pairs of vertices. The open neighborhood of any vertex is the set of other vertices that it is paired with in edges of the graph; the closed neighborhood is formed from the open neighborhood by including the vertex itself. Two vertices are true twins when they have the same closed neighborhood, and false twins when they have the same open neighborhood; more generally, both true twins and false twins can be called twins, without qualification.
The cographs have many equivalent definitions, but one of them is that these are the graphs that can be reduced to a single vertex by a process of repeatedly finding any two twin vertices and merging them into a single vertex. For a cograph, this reduction process will always succeed, no matter which choice of twins to merge is made at each step. For a graph that is not a cograph, it will always get stuck in a subgraph with more than two vertices that has no twins.
The definition of twin-width mimics this reduction process. A contraction sequence, in this context, is a sequence of steps, beginning with the given graph, in which each step replaces a pair of vertices by a single vertex. This produces a sequence of graphs, with edges colored red and black; in the given graph, all edges are assumed to be black. When two vertices are replaced by a single vertex, the neighborhood of the new vertex is the union of the neighborhoods of the replaced vertices. In this new neighborhood, an edge that comes from black edges in the neighborhoods of both vertices remains black; all other edges are colored red.
A contraction sequence is called a d-sequence if, throughout the sequence, every vertex touches at most d red edges. The twin-width of a graph is the smallest value of d for which it has a d-sequence.
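The red-edge bookkeeping in this definition is straightforward to simulate. The following minimal Python sketch (function and variable names are illustrative, not taken from the literature) applies a proposed contraction sequence to a small graph, recolours edges by the rule above, and reports the width of the sequence, that is, the largest red degree that ever appears. The example checks that contracting a five-vertex path from one end is a 1-sequence.

```python
def sequence_width(n, edges, merges):
    """Width of a contraction sequence on vertices 0..n-1.

    edges  -- pairs (u, v): the (black) edges of the input graph
    merges -- pairs (u, v) to contract, in order; v absorbs u
    Returns the largest red degree seen, so the sequence is a
    d-sequence exactly when the returned value is at most d.
    """
    black = {frozenset(e) for e in edges}
    red = set()
    alive = set(range(n))
    width = 0
    for u, v in merges:
        rest = alive - {u, v}
        bu = {w for w in rest if frozenset((u, w)) in black}
        bv = {w for w in rest if frozenset((v, w)) in black}
        ru = {w for w in rest if frozenset((u, w)) in red}
        rv = {w for w in rest if frozenset((v, w)) in red}
        # Drop every edge touching u or v; v's edges are rebuilt below.
        black = {e for e in black if u not in e and v not in e}
        red = {e for e in red if u not in e and v not in e}
        alive.discard(u)
        # A neighbour keeps a black edge only if it was a black
        # neighbour of both u and v; every other neighbour gets red.
        for w in bu & bv:
            black.add(frozenset((v, w)))
        for w in (bu | bv | ru | rv) - (bu & bv):
            red.add(frozenset((v, w)))
        for x in alive:
            width = max(width, sum(1 for e in red if x in e))
    return width

# Path 0-1-2-3-4, contracted from the far end: a 1-sequence.
path = [(0, 1), (1, 2), (2, 3), (3, 4)]
print(sequence_width(5, path, [(4, 3), (3, 2), (2, 1), (1, 0)]))  # prints 1
```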
A dense graph may still have bounded twin-width; for instance, the cographs include all complete graphs. A variation of twin-width, sparse twin-width, applies to families of graphs rather than to individual graphs. For a family of graphs that is closed under taking induced subgraphs and has bounded twin-width, the following properties are equivalent:
The graphs in the family are sparse, meaning that they have a number of edges bounded by a linear function of their number of vertices.
The graphs in the family exclude some fixed complete bipartite graph as a subgraph.
The family of all subgraphs of graphs in the given family has bounded twin-width.
The family has bounded expansion, meaning that all its shallow minors are sparse.
Such a family is said to have bounded sparse twin-width.
The concept of twin-width can be generalized from graphs to various totally ordered structures (including graphs equipped with a total ordering on their vertices), and is in many ways simpler for ordered structures than for unordered graphs. It is also possible to formulate equivalent definitions for other notions of graph width using contraction sequences with different requirements than having bounded degree.
Graphs of bounded twin-width
Cographs have twin-width zero. In the reduction process for cographs, there will be no red edges: when two vertices are merged, their neighborhoods are equal, so there are no edges coming from only one of the two neighborhoods to be colored red. In any other graph, any contraction sequence will produce some red edges, and the twin-width will be greater than zero.
The path graphs with at most three vertices are cographs, but every larger path graph has twin-width one. For a contraction sequence that repeatedly merges the last two vertices of the path, only the edge incident to the single merged vertex will be red, so this is a 1-sequence. Trees have twin-width at most two, and for some trees this is tight. A 2-contraction sequence for any tree may be found by choosing a root, and then repeatedly merging two leaves that have the same parent or, if this is not possible, merging the deepest leaf into its parent. The only red edges connect leaves to their parents, and when there are two at the same parent they can be merged, keeping the red degree at most two.
More generally, the following classes of graphs have bounded twin-width, and a contraction sequence of bounded width can be found for them in polynomial time:
Every graph of bounded clique-width, or of bounded rank-width, also has bounded twin-width. The twin-width is at most exponential in the clique-width, and at most doubly exponential in the rank-width. These graphs include, for instance, the distance-hereditary graphs, the k-leaf powers for bounded values of k, and the graphs of bounded treewidth.
Indifference graphs (equivalently, unit interval graphs or proper interval graphs) have twin-width at most two.
Unit disk graphs defined from sets of unit disks that cover each point of the plane a bounded number of times have bounded twin-width. The same is true for unit ball graphs in higher dimensions.
The permutation graphs coming from permutations with a forbidden permutation pattern have bounded twin-width. This allows twin-width to be applied to algorithmic problems on permutations with forbidden patterns.
Every family of graphs defined by forbidden minors has bounded twin-width. For instance, by Wagner's theorem, the forbidden minors for planar graphs are the two graphs K5 and K3,3, so the planar graphs have bounded twin-width.
Every graph of bounded stack number or bounded queue number also has bounded twin-width. There exist families of graphs of bounded sparse twin-width that do not have bounded stack number, but the corresponding question for queue number remains open.
The strong product of any two graphs of bounded twin-width, one of which has bounded degree, again has bounded twin-width. This can be used to prove the bounded twin-width of classes of graphs that have decompositions into strong products of paths and bounded-treewidth graphs, such as the k-planar graphs for bounded k. For the lexicographic product of graphs, the twin-width is exactly the maximum of the twin-widths of the two factor graphs. Twin-width also behaves well under several other standard graph products, but not the modular product of graphs.
In every hereditary family of graphs of bounded twin-width, it is possible to find a family of total orders for the vertices of its graphs so that the inherited ordering on an induced subgraph is also an ordering in the family, and so that the family is small with respect to these orders. This means that, for a total order on n vertices, the number of graphs in the family consistent with that order is at most singly exponential in n. Conversely, every hereditary family of ordered graphs that is small in this sense has bounded twin-width. It was originally conjectured that every hereditary family of labeled graphs that is small, in the sense that the number of n-vertex graphs is at most a singly exponential factor times n!, has bounded twin-width. However, this conjecture was disproved using a family of induced subgraphs of an infinite Cayley graph that are small as labeled graphs but do not have bounded twin-width.
There exist graphs of unbounded twin-width within the following families of graphs:
Graphs of bounded degree.
Interval graphs.
Unit disk graphs.
In each of these cases, the result follows by a counting argument: there are more graphs of the given type than there can be graphs of bounded twin-width.
Properties
If a graph has bounded twin-width, then it is possible to find a versatile tree of contractions. This is a large family of contraction sequences, all of some (larger) bounded width, so that at each step in each sequence there are linearly many disjoint pairs of vertices each of which could be contracted at the next step in the sequence. It follows from this that the number of graphs of bounded twin-width on any set of n given vertices is larger than n! by only a singly exponential factor, that the graphs of bounded twin-width have an adjacency labelling scheme with only a logarithmic number of bits per vertex, and that they have universal graphs of polynomial size in which each n-vertex graph of bounded twin-width can be found as an induced subgraph.
Algorithms
The graphs of twin-width at most one can be recognized in polynomial time. However, it is NP-complete to determine whether a given graph has twin-width at most four, and NP-hard to approximate the twin-width with an approximation ratio better than 5/4. Under the exponential time hypothesis, computing the twin-width of an n-vertex graph requires time at least exponential in n, up to logarithmic factors in the exponent. In practice, it is possible to compute the twin-width of graphs of moderate size using SAT solvers. For most of the known families of graphs of bounded twin-width, it is possible to construct a contraction sequence of bounded width in polynomial time.
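For graphs of only a handful of vertices, the definition can also be turned directly into an exhaustive search: try every possible next contraction, recurse, and keep the sequence whose largest red degree is smallest. The sketch below is purely illustrative (exponential time, unrelated to the SAT-based tools mentioned above, and the names are mine); it reports twin-width 1 for the four-vertex path and 0 for the complete graph on four vertices, which is a cograph.

```python
from functools import lru_cache
from itertools import combinations

def twin_width(n, edges):
    """Exact twin-width of a tiny graph by brute force over all
    contraction sequences (practical only up to roughly 7 vertices)."""

    @lru_cache(maxsize=None)
    def best(alive, black, red):
        if len(alive) <= 1:
            return 0
        answer = None
        for u, v in combinations(sorted(alive), 2):
            rest = alive - {u, v}
            bu = {w for w in rest if frozenset((u, w)) in black}
            bv = {w for w in rest if frozenset((v, w)) in black}
            ru = {w for w in rest if frozenset((u, w)) in red}
            rv = {w for w in rest if frozenset((v, w)) in red}
            keep = bu & bv                      # stays black
            reds = (bu | bv | ru | rv) - keep   # becomes (or stays) red
            nb = frozenset(e for e in black if u not in e and v not in e) \
                | frozenset(frozenset((v, w)) for w in keep)
            nr = frozenset(e for e in red if u not in e and v not in e) \
                | frozenset(frozenset((v, w)) for w in reds)
            na = frozenset(rest | {v})
            deg = max(sum(1 for e in nr if x in e) for x in na)
            cost = max(deg, best(na, nb, nr))
            answer = cost if answer is None else min(answer, cost)
        return answer

    return best(frozenset(range(n)),
                frozenset(frozenset(e) for e in edges), frozenset())

print(twin_width(4, [(0, 1), (1, 2), (2, 3)]))          # path P4 -> 1
print(twin_width(4, list(combinations(range(4), 2))))   # K4 (a cograph) -> 0
```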
Once a contraction sequence has been given or constructed, many different algorithmic problems can be solved using it, in many cases more efficiently than is possible for graphs that do not have bounded twin-width. As detailed below, these include exact parameterized algorithms and approximation algorithms for NP-hard problems, as well as some problems that have classical polynomial time algorithms but can nevertheless be sped up using the assumption of bounded twin-width.
Parameterized algorithms
An algorithmic problem on graphs having an associated parameter is called fixed-parameter tractable if it has an algorithm that, on graphs with n vertices and parameter value k, runs in time f(k) · n^c for some constant c and computable function f. For instance, a running time of O(2^k · n^2) would be fixed-parameter tractable in this sense. This style of analysis is generally applied to problems that do not have a known polynomial-time algorithm, because otherwise fixed-parameter tractability would be trivial. Many such problems have been shown to be fixed-parameter tractable with twin-width as a parameter, when a contraction sequence of bounded width is given as part of the input. This applies, in particular, to the graph families of bounded twin-width listed above, for which a contraction sequence can be constructed efficiently. However, it is not known how to find a good contraction sequence for an arbitrary graph of low twin-width, when no other structure in the graph is known.
The fixed-parameter tractable problems for graphs of bounded twin-width with given contraction sequences include:
Testing whether the given graph models any given property in the first-order logic of graphs. Here, both the twin-width and the description length of the property are parameters of the analysis. Problems of this type include subgraph isomorphism for subgraphs of bounded size, and the vertex cover and dominating set problems for covers or dominating sets of bounded size. The dependence of these general methods on the length of the logical formula describing the property is tetrational, but for independent set, dominating set, and related problems it can be reduced to exponential in the size of the independent or dominating set, and for subgraph isomorphism it can be reduced to factorial in the number of vertices of the subgraph. For instance, the time to find a k-vertex independent set, for an n-vertex graph with a given d-sequence, is exponential in k (for fixed d) and at most polynomial in n, by a dynamic programming algorithm that considers small connected subgraphs of the red graphs in the forward direction of the contraction sequence. These time bounds are optimal, up to logarithmic factors in the exponent, under the exponential time hypothesis. For an extension of the first-order logic of graphs to graphs with totally ordered vertices, and logical predicates that can test this ordering, model checking is still fixed-parameter tractable for hereditary graph families of bounded twin-width, but not (under standard complexity-theoretic assumptions) for hereditary families of unbounded twin-width.
Coloring graphs of bounded twin-width, using a number of colors that is bounded by a function of their twin-width and of the size of their largest clique. For instance, triangle-free graphs of bounded twin-width can be colored with a number of colors depending only on the twin-width, by a greedy coloring algorithm that colors vertices in the reverse of the order in which they were contracted away (see the sketch below). This result shows that the graphs of bounded twin-width are χ-bounded. For graph families of bounded sparse twin-width, the generalized coloring numbers are bounded. Here, a generalized coloring number for a given path length is bounded if the vertices can be linearly ordered in such a way that each vertex can reach only a bounded number of earlier vertices in the ordering through paths of that length passing through later vertices in the ordering.
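The greedy step mentioned above can be written down in a few lines. The sketch below (illustrative names; it performs only the greedy coloring itself, not the twin-width analysis that bounds the number of colors) colors the vertices of the original graph in a supplied order, here the reverse of the order in which the 1-sequence for the five-vertex path shown earlier eliminates its vertices.

```python
def greedy_coloring(adj, order):
    """Greedy proper coloring: each vertex gets the smallest color not
    already used on its neighbors, processed in the given order.

    adj   -- dict vertex -> set of neighbors in the original graph
    order -- all vertices, e.g. the reverse of a contraction sequence's
             elimination order
    """
    color = {}
    for v in order:
        taken = {color[w] for w in adj[v] if w in color}
        c = 0
        while c in taken:
            c += 1
        color[v] = c
    return color

# Path 0-1-2-3-4 (triangle-free, twin-width 1).  The 1-sequence that
# eliminates 4, 3, 2, 1 gives the reverse order 0, 1, 2, 3, 4.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(greedy_coloring(adj, [0, 1, 2, 3, 4]))   # two colors suffice
```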
Speedups of classical algorithms
In graphs of bounded twin-width, it is possible to perform a breadth-first search on a graph with n vertices in an amount of time that can be smaller than the graph's number of edges, even when the graph is dense.
Approximation algorithms
Twin-width has also been applied in approximation algorithms. In particular, in the graphs of bounded twin-width, it is possible to find an approximation to the minimum dominating set with bounded approximation ratio. This is in contrast to more general graphs, for which it is NP-hard to obtain an approximation ratio that is better than logarithmic.
The maximum independent set and graph coloring problems can be approximated to within an approximation ratio of n^ε, for every ε > 0, in polynomial time on graphs of bounded twin-width. In contrast, without the assumption of bounded twin-width, it is NP-hard to achieve any approximation ratio of this form with ε < 1.
References
Further reading
Graph invariants | Twin-width | [
"Mathematics"
] | 2,844 | [
"Graph invariants",
"Mathematical relations",
"Graph theory"
] |
71,381,863 | https://en.wikipedia.org/wiki/German%20Mineralogical%20Society | The German Mineralogical Society (Deutsche Mineralogische Gesellschaft, or DMG, in German) is a non-profit German society for the promotion of mineralogy. It has about 1400 members (2021) and belongs to the International Mineralogical Association and the umbrella organization for geosciences. It was founded at the meeting of German natural scientists and physicians in Cologne in 1908 based on a proposal by Friedrich Martin Berwerth at the 1907 meeting in Dresden.
The current chairman (2021-2022) is the geochemist Friedhelm von Blanckenburg.
Organization structure
The DMG has the following sections:
Applied mineralogy: systematics, properties of minerals; Organic, clay mineralogy, gemology
Crystallography: Research into the atomic structure and properties of inorganic and organic crystals (structural research, crystal chemistry, crystal physics, crystal growing and growth)
Geochemistry: distribution laws, frequency and mobility of chemical elements in the Earth, the seas, the atmosphere and in space (analytical, experimental, theoretical, applied, environmental geochemistry)
Petrology and petrophysics: Formation, origin and transformation of rocks; Investigations and syntheses under simulated conditions of the Earth's interior (experimental petrology), structural investigations
In addition, the DMG has working groups on Archaeometry and Monument Preservation, Raw Materials Research, Mineralogical Museums and Collections, and Mineralogy in Schools and Universities.
Awards and prizes
The DMG awards the following prizes:
Abraham Gottlob Werner Medal in silver and gold
Victor Moritz Goldschmidt Prize for young scientists
Georg Agricola Medal in applied mineralogy
Paul Ramdohr Prize for young scientists
Beate Mocek Prize for young female scientists
The DMG publishes multiple journals jointly with other societies, including the European Journal of Mineralogy (with the Italian and French mineralogical societies) and the magazine Elements (with 18 other geochemical, cosmochemical and mineralogical societies).
Honorary members
1924 Max von Laue (1879–1960), Friedrich Becke (1855–1931), Waldemar Christofer Brøgger (1851–1940)
1925 Gustav Tschermak (1836–1927)
1927 Henry Alexander Miers (1858–1942), Leonard James Spencer (1870–1959), Jakob Johannes Sederholm (1863–1934)
1931 Gottlob Linck (1858–1947), Reinhard Brauns (1861–1937)
1932 Edward H. Kraus (1875–1973), Charles Palache (1869–1954)
1932 Friedrich Rinne (1863–1933)
1935 Gustav Klemm (1858–1938)
1938 Josef Emanuel Hibsch (1852–1940)
1947 Ludwig Ferdinand von Wolff (1874–1952), Otto Erdmannsdörffer (1876–1955)
1948 Hermann Steinmetz (1879–1964)
1949 Paul Niggli (1888–1953)
1950 Pentti Eskola (1883–1964), Percy Dudgeon Quensel (1881–1966), Karl-Hermann Scheumann (1881–1964)
1953 Hermann Tertsch (1880–1962), Walther Kossel (1888–1956), Iwan Stranski (1897–1979)
1957 Martin J. Buerger (1903–1986)
1958 Paul Peter Ewald (1888–1985)
1962 Carl Wilhelm Correns (1893–1980)
1963 Felix Machatschki (1895–1970)
1968 John Frank Schairer (1904–1970)
1970 Thomas F. W. Barth (1899–1971)
1971 Emil Lehmann (1881–1981), Adolf Pabst (1899–1990)
1972 Hermann Rose (Mineraloge) (1883–1976)
1973 George T. Faust (1908–1985)
1975 Theodor Ernst (1904–1983)
1976 Fritz Laves (1906–1978)
1980 Heinz Meixner (1908–1981)
1980 Werner Nowacki (1909–1988)
1981 Walter Noll (1907–1987), Doris Schachner (1904–1988)
1982 Karl Hugo Strunz (1910–2006), Georges Deicha (1917–2011)
1988 Hans Ulrich Bambauer (1929–2021)
1991 Josef Zemann (* 1923)
1992 Heinz Jagodzinski (1916–2012)
1995 Horst Saalfeld (1920–2022)
2000 Werner Schreyer (1930–2006)
2002 Volkmar Trommsdorff (1936–2005)
2004 Egon Althaus (1933–2022)
2005 Karl Hans Wedepohl (1925–2016), Friedrich Liebau (1926–2011)
2006 Peter Paufler (* 1940)
2015 Hans A. Seck (1935–2016), Friedrich Seifert (* 1941), Martin Okrusch (* 1934), Jochen Hoefs (* 1939)
2016 Herbert Kroll (* 1940), Herbert Palme (* 1943)
2017 Walter Maresch (* 1944)
2019 Christian Chopin (* 1955)
2020 Klaus Keil (1934–2022)
External links
Official Website of the German Mineralogical Society
References
1908 establishments
Mineralogy
Earth sciences societies
Scientific societies based in Germany
Crystallography organizations | German Mineralogical Society | [
"Chemistry",
"Materials_science"
] | 1,041 | [
"Crystallography",
"Crystallography organizations"
] |
71,381,930 | https://en.wikipedia.org/wiki/German%20Crystallographic%20Society | The German Crystallographic Society (Deutsche Gesellschaft für Kristallographie, or DGK in German) is a non-profit organization based in Berlin. As a voluntary association of scientists working in crystallography or interested in crystallography and other people and institutions, its goal is to promote crystallography in teaching, research and industrial practice as well as in the public, in particular by fostering the exchange of experience and ideas as well as further education at national and international level Frame. Working groups are dedicated to specific areas of crystallography. The Society has just over 1000 members.
Activities
The DGK represents crystallography in national and international scientific institutions. In particular, the DGK is a member body of the International Union of Crystallography (IUCr) and the European Crystallographic Association (ECA). The DGK nominates candidates for the crystallographically relevant review boards of the German Research Foundation. The association holds an annual conference every year, usually in spring.
The DGK issues a publication, the "Notifications", which is sent to the members annually.
Special scientific achievements are recognized with prizes, which are usually awarded annually. The DGK awards the following prizes: the Carl Hermann Medal for the scientific life's work of outstanding researchers in the field of crystallography and the Max von Laue Prize for young scientists. Furthermore, outstanding scientific contributions are honored with the Will Kleber commemorative coin. The Waltrude and Friedrich Liebau Prize for promoting interdisciplinary crystallography is awarded by the DGK on behalf of the Waltrude and Friedrich Liebau Foundation. In addition, from 2022 the Lieselotte Templeton Prize is awarded for outstanding bachelor's, master's or comparable theses in the field of crystallography.
The DGK has provisionally applied to host the IUCr conference in Berlin in 2029 and was represented with its own stand at the corresponding conference in Prague in 2021. The IUCr conference was last held in Germany in 1984, in Hamburg.
A blog on the DGK homepage publishes articles at irregular intervals on topics such as "studying during the corona pandemic", the deposition of crystal structure datasets in the ICSD database, profiles of people, and reports on past conferences.
History
The association was founded on March 12, 1991 in Munich by merging the scientific associations "Crystallography Working Group" (AGKr) and "Association for Crystallography" (VFK). The merger of the West German AGKr led by Wolfram Saenger with the VFK of the GDR led by Ursula Steinike (Berlin) was decided by a vote of the members of these societies. The first elected chairman of the German Crystallographic Society was Heinz Schulz.
President since inception:
The annual meeting of the DGK takes place at a different location (usually in Germany) every year. In 2020 the conference was organized together with the Polish Crystallographic Association in Wroclaw. In 2021 the conference was supposed to take place in Hamburg at DESY and in 2022 in Munich, but due to the coronavirus pandemic both were held online.
See also
British Crystallographic Association
American Crystallographic Association
European Crystallographic Association
International Union of Crystallography
References
External links
1991 establishments in Germany
Crystallography organizations
Chemistry societies
Scientific societies based in Germany
Organisations based in Berlin | German Crystallographic Society | [
"Chemistry",
"Materials_science"
] | 703 | [
"Chemistry societies",
"Crystallography",
"nan",
"Crystallography organizations"
] |
71,382,655 | https://en.wikipedia.org/wiki/Rana%20X.%20Adhikari | Rana X. Adhikari (born 1974) is an American experimental physicist. He is a professor of physics at the California Institute of Technology (Caltech) and an associate faculty member of the International Centre for Theoretical Sciences of Tata Institute of Fundamental Research (ICTS-TIFR).
Adhikari works on the experimental physics of gravitational wave detection and is among the scientists responsible for the U.S.-based Laser Interferometer Gravitational-wave Observatory (LIGO) that discovered gravitational waves in 2015. He, along with Lisa Barsotti and Matt Evans from MIT, received the New Horizons in Physics Prize in 2019 for research on current and future earth-based gravitational wave detectors. His research focus is on the areas of precision measurement related to surpassing fundamental physical limits to discover new phenomena related to gravity, quantum mechanics, and the true nature of space and time.
Adhikari is actively involved in the LIGO-India project, which aims to build a gravitational-wave observatory in India. He was elected as a Fellow of the American Physical Society and a member of Optica (formerly known as Optical Society of America). Since 2019 he has been a member of the Infosys Prize jury for physical sciences.
Personal life
Adhikari was born in the U.S. state of Ohio to Indian Bengali immigrants from Raiganj, West Bengal, India. They moved to Cape Canaveral, Florida when he was seven. He studied physics at the University of Florida, where he worked with David Reitze, and graduated in 1998 with a bachelor's degree. In 2004, he received a PhD in physics from the Massachusetts Institute of Technology under the supervision of experimental physicist Rainer Weiss, and joined Caltech's Laser Interferometer Gravitational-Wave Observatory (LIGO) project as a postdoctoral researcher. Adhikari was promoted as an assistant professor in 2006 and become a tenured professor of physics in 2012. He has also been an adjunct professor at the International Centre for Theoretical Sciences at the Tata Institute of Fundamental Research (ICTS-TIFR) in Bengaluru, India, since 2012.
Research
Adhikari has been involved in the construction and design of gravitational-wave detectors since 1997. He started working on laser interferometers as a graduate student at MIT, with a particular focus on the variety of noise sources, feedback loops and subsystems, and helped to reduce the noise in all 3 of the LIGO interferometers while working on the Livingston interferometer. In 2005, he received the first LIGO thesis prize.
The Adhikari Research Group, part of the Division of Physics, Mathematics, and Astronomy at Caltech, focuses on new detector technologies for fundamental physics experiments (gravitational waves, dark matter, and near field gravity). Adhikari is also affiliated with the Caltech Material Science Department and together they work on advancing mechanical oscillators, nonlinear optics, acoustic metamaterials, and high efficiency photodetection for quantum measurements.
Adhikari has collaborated with Kathryn Zurek to develop a new experiment that uses tabletop instruments to observe signatures of quantum gravity. Adhikari has also been working on alternative dark matter models and space-based gravitational-wave detectors. He routinely collaborates with the international gravitational-wave community, including OzGrav, KAGRA and GEO600.
LIGO-India
In 2007, during the International Conference on Gravitation and Cosmology (ICGC) at the Inter-University Centre for Astronomy and Astrophysics (IUCAA), Pune, the idea of having a LIGO observatory in India was first proposed by Rana X. Adhikari. The IndIGO Consortium was formed in 2009 and since then has been planning a roadmap for gravitational-wave astronomy and a phased strategy towards Indian participation in realizing a gravitational-wave observatory in the Asia-Pacific region.
On February 17, 2016, less than a week after LIGO's landmark announcement about the detection of gravitational waves, Indian Prime Minister Narendra Modi announced that the Cabinet has granted 'in-principle' approval to the LIGO-India mega science proposal. The Indian gravitational-wave detector would be only the sixth such observatory in the world and will be similar to the two U.S. detectors in Hanford, Washington and Livingston, Louisiana. A Memorandum of Understanding (MoU) was signed on March 31, 2016, between the Department of Atomic Energy and Department of Science & Technology in India and the National Science Foundation of the U.S. to develop the observatory in India.
Adhikari was part of the delegation that met with the Prime Minister of India Narendra Modi in Washington, DC, for the signing of the MoU between India and the U.S. to build a LIGO detector in India. In an interview with Quartz India, Adhikari said, "The presence of world-class infrastructure in the form of the LIGO detector and the latest R&D will attract the right talent for experimental physics from all across the country." In order to support the upcoming project, LIGO laboratory in Caltech has been hosting, for many years, talented and motivated undergraduate students from Indian institutions, pre-selected by LIGO-India Science Collaboration, as part of the International LIGO SURF program.
Scientific art and media
Adhikari was the subject of the documentary LIGO: The Way the Universe is, I think, directed by Hussain Currimbhoy, Carrie McCarthy, and Mark Pedri. Screened at DOC NYC, the San Francisco Documentary Film Festival, the RAW Science Film Festival in Los Angeles, and the Cineglobe Film Festival at CERN, Geneva, the short film focuses on a mechanic-turned-scientist who tuned the machine that spurred a dramatic re-envisioning of the universe through the detection of gravitational waves.
In July 2017, he was part of Limits of Knowing, a month-long set of exhibitions and programs organized with the Berliner Festspiele. For this exhibition, he presented a prototype of an artwork designed to sense the environment of the Martin-Gropius-Bau. The 30 x 30 x 130 cm immersive mixed media artwork named Untitled reacted to the space and all objects in it (including the visitors) by recording a variety of data: the building's vibrations, sounds, temperature, magnetic fields, and levels of infrared light.
Later that year, on the anniversary of the first detection of gravitational waves, LA artist Rachel Mason's Singularity Song was released, as part of a fiscally sponsored program of Fulcrum Arts, Pasadena. Singularity Song is a meditation on black holes, pairing legendary butoh dancer Oguri with the voices of Caltech Theoretical Physicist Kip Thorne, Rana X. Adhikari, indie rock icon Carla Bozulich and experimental composer Anna Homler.
In January 2020, Scientific Inquirer posted an exchange between Australian recording artist Tex Crick and Adhikari, in which they discuss time travel using a mirror and listening to music in four dimensions. He was on the Y combinator podcast discussing the technical challenges of measuring gravitational waves. He also appeared on Seeker's The Good, the Bad, and the Science Podcast (The Science of Men in Black) and has collaborated with Pioneer Works' Director of Sciences Janna Levin.
Adhikari appeared on How the Universe Works, a documentary series aired on the Discovery Science Channel. In the episode Mystery of Spacetime (season 6 episode 10) he ponders on the secret structure that controls our universe, time, light, and energy. He will appear in the feature-length documentary The Faraway, Nearby that examines the life of physicist Joseph Weber - the first scientist to explore the detection of gravitational waves. Alan Lightman is the co-creator of the science film.
Adhikari will also be seen in BBC Studios Science Unit and Bilibili's Odyssey: Into The Future, a 3-part science series featuring Chinese science fiction author Liu Cixin and many of the futuristic concepts that inspired The Three-Body Problem series of novels.
Awards and recognition
2019 New Horizons in Physics: Breakthrough Foundation
2018 American Physical Society Fellow
2017 co-recipient Albert Einstein Medal
2017 co-recipient Princess of Asturias Award
2017 co-recipient Physics World: Breakthrough of the Year Award
2017 co-recipient Bruno Rossi Prize
2017 co-recipient Group Award of the Royal Astronomical Society
2016 co-recipient Physics World: Breakthrough of the Year Award
2016 co-recipient Gruber Cosmology Prize
2016 co-recipient Special Breakthrough Prize in Fundamental Physics
2016 Recognition by California Legislature
See also
Laser Interferometer Gravitational-wave Observatory (LIGO)
References
1974 births
Living people
Fellows of the American Physical Society
Albert Einstein Medal recipients
California Institute of Technology faculty
University of Florida alumni
Massachusetts Institute of Technology alumni
Academic staff of Tata Institute of Fundamental Research
21st-century American physicists
American people of Bengali descent
American academics of Indian descent
American astrophysicists
New Horizons in Physics Prize laureates
American experimental physicists
Gravitational-wave astrophysicists
Scientists from Ohio
Massachusetts Institute of Technology School of Science alumni
Gravitational-wave astronomy | Rana X. Adhikari | [
"Physics",
"Astronomy"
] | 1,841 | [
"Astronomical sub-disciplines",
"Gravitational-wave astronomy",
"Astrophysics"
] |
71,382,823 | https://en.wikipedia.org/wiki/Leucocoprinus%20muticolor | Leucocoprinus muticolor is a species of mushroom-producing fungus in the family Agaricaceae.
Taxonomy
It was first described in 1914 by the American mycologist William Murrill who classified it as Lepiota muticolor.
In 1992 it was reclassified as Leucocoprinus muticolor by the mycologist John Errol Chandos Aberdeen based on observations in Australia which were compared to specimens described by the British mycologist Richard William George Dennis in 1953.
The name Agaricus (Lepiota) muticolor was also used by Miles Joseph Berkeley and Christopher Edmund Broome in 1871 for an unrelated species.
Description
Leucocoprinus muticolor is a small dapperling mushroom.
Murrill described the specimens from Alabama, USA as follows.
Cap: 2.5-4cm wide with a white, campanulate (bell shaped) expanding cap which has a distinct dark brown umbo. It is covered in small brown fibrillose scales, the cap edges have slight striations and the white surface discolours to a rosy whitish when dry. Gills: Free, crowded, swollen in the middle (ventricose) and white, discolouring to orange or brownish yellow when dry. Stem: 3-5cm long and around 3mm thick with a slightly bulbous base. It is tough, smooth and white but discolours to a reddish-umber colour when dry. The stem ring is located in the middle of the stem (median) and is fixed in place; it is white with brown edges but may sometimes disappear. Spores: Ellipsoid. 8-9 x 6-7 μm.
Aberdeen's description of the specimens from Queensland, Australia differs in some regards.
Cap: 4cm wide, shallow convex with a greyish central disc instead of an umbo. The rest of the cap surface is whitish and covered in dark cream coloured fibrils (thread like filaments). When dry the cap discolours brownish and the scales become detachable. Gills: Not recorded when fresh. Dark brown when dry. Stem: 7.6 cm long and 5mm thick tapering upwards from the bulbous 8mm thick base. It is hollow and white but also dries to brown. The fragile stem ring is located a third of the way down from the top (apical). Spore print: Whitish with a pink tint. Spores: Elliptical with a pore. Dextrinoid. 9-13.5 x 5.5-7 μm.
Habitat and distribution
L. muticolor is scarcely recorded and little known. The specimens studied by Murrill were growing gregariously and collected by F.S. Earle from a hickory log in swampland near Auburn, Alabama in September 1899. It was only known from this location. The specimens studied by Aberdeen were collected by A.B. Cribb in February 1962 and January 1963 in Lamington National Park, Queensland, Australia.
The Queensland Department of Environment and Science holds a small number of preserved specimens of L. muticolor mostly from the 1960s. These were collected in rainforest habitats in the state of Queensland.
References
Leucocoprinus
Fungi described in 1992
Fungi of North America
Fungi of Australia
Fungus species | Leucocoprinus muticolor | [
"Biology"
] | 679 | [
"Fungi",
"Fungus species"
] |
72,887,006 | https://en.wikipedia.org/wiki/Allison%20Hubel | Allison Hubel is an American mechanical engineer and cryobiologist who applies her expertise in heat transfer to study the cryopreservation of biological tissue. She is a professor of mechanical engineering at the University of Minnesota, where she directs the Biopreservation Core Resource and the Technological Leadership Institute, and is the president of the Society for Cryobiology from 2023 to 2024.
Education and career
Hubel majored in mechanical engineering at Iowa State University, graduating in 1983. She continued her studies at the Massachusetts Institute of Technology (MIT), where she earned a master's degree in 1989 and completed her Ph.D. in the same year.
She worked as a research fellow at Massachusetts General Hospital from 1989 to 1990, and as an instructor at MIT from 1990 to 1993, before moving to the University of Minnesota in 1993 as a research associate in the Department of Laboratory Medicine and Pathology. In 1996 she became an assistant professor in that department, and in 2002 she moved to the Department of Mechanical Engineering as an associate professor. She was promoted to full professor in 2009, and became director of the Biopreservation Core Resource in 2010.
With two of her students, she founded a spinoff company, BlueCube Bio (later renamed Evia Bio) to commercialize their technology for preserving cells in cell therapy. She continues to serve as chief scientific officer for Evia Bio.
She became president-elect of the Society for Cryobiology for the 2022–2023 term, and will become president in the subsequent term.
Book
Hubel is the author of the book Preservation of Cells: A Practical Manual (Wiley, 2017).
Recognition
Hubel was elected as an ASME Fellow in 2008, and a Fellow of the American Institute for Medical and Biological Engineering in 2012. She was named a Cryofellow of the Society for Cryobiology in 2021.
References
External links
Academic home page
Year of birth missing (living people)
Living people
American mechanical engineers
American women engineers
American biologists
American women biologists
Cryobiology
Iowa State University alumni
Massachusetts Institute of Technology alumni
University of Minnesota faculty
Fellows of the American Institute for Medical and Biological Engineering
Fellows of the American Society of Mechanical Engineers | Allison Hubel | [
"Physics",
"Chemistry",
"Biology"
] | 444 | [
"Biochemistry",
"Physical phenomena",
"Phase transitions",
"Cryobiology"
] |
72,887,334 | https://en.wikipedia.org/wiki/Parallax%20in%20astronomy | The most important fundamental distance measurements in astronomy come from trigonometric parallax, as applied in the stellar parallax method. As the Earth orbits the Sun, the position of nearby stars will appear to shift slightly against the more distant background. These shifts are angles in an isosceles triangle, with 2 AU (the distance between the extreme positions of Earth's orbit around the Sun) making the base leg of the triangle and the distance to the star being the long equal-length legs. The amount of shift is quite small, even for the nearest stars, measuring 1 arcsecond for an object at 1 parsec's distance (3.26 light-years), and thereafter decreasing in angular amount as the distance increases. Astronomers usually express distances in units of parsecs (parallax arcseconds); light-years are used in popular media.
Because parallax becomes smaller for a greater stellar distance, useful distances can be measured only for stars which are near enough to have a parallax larger than a few times the precision of the measurement. In the 1990s, for example, the Hipparcos mission obtained parallaxes for over a hundred thousand stars with a precision of about a milliarcsecond, providing useful distances for stars out to a few hundred parsecs. The Hubble Space Telescope's Wide Field Camera 3 has the potential to provide a precision of 20 to 40 microarcseconds, enabling reliable distance measurements up to for small numbers of stars. The Gaia space mission provided similarly accurate distances to most stars brighter than 15th magnitude.
Distances can be measured within 10% as far as the Galactic Center, about 30,000 light years away. Stars have a velocity relative to the Sun that causes proper motion (transverse across the sky) and radial velocity (motion toward or away from the Sun). The former is determined by plotting the changing position of the stars over many years, while the latter comes from measuring the Doppler shift of the star's spectrum caused by motion along the line of sight. For a group of stars with the same spectral class and a similar magnitude range, a mean parallax can be derived from statistical analysis of the proper motions relative to their radial velocities. This statistical parallax method is useful for measuring the distances of bright stars beyond 50 parsecs and giant variable stars, including Cepheids and the RR Lyrae variables.
The motion of the Sun through space provides a longer baseline that will increase the accuracy of parallax measurements, known as secular parallax. For stars in the Milky Way disk, this corresponds to a mean baseline of 4 AU per year, while for halo stars the baseline is 40 AU per year. After several decades, the baseline can be orders of magnitude greater than the Earth–Sun baseline used for traditional parallax. However, secular parallax introduces a higher level of uncertainty because the relative velocity of observed stars is an additional unknown. When applied to samples of multiple stars, the uncertainty can be reduced; the uncertainty is inversely proportional to the square root of the sample size.
Moving cluster parallax is a technique where the motions of individual stars in a nearby star cluster can be used to find the distance to the cluster. Only open clusters are near enough for this technique to be useful. In particular the distance obtained for the Hyades has historically been an important step in the distance ladder.
Other individual objects can have fundamental distance estimates made for them under special circumstances. If the expansion of a gas cloud, like a supernova remnant or planetary nebula, can be observed over time, then an expansion parallax distance to that cloud can be estimated. Those measurements however suffer from uncertainties in the deviation of the object from sphericity. Binary stars which are both visual and spectroscopic binaries also can have their distance estimated by similar means, and do not suffer from the above geometric uncertainty. The common characteristic to these methods is that a measurement of angular motion is combined with a measurement of the absolute velocity (usually obtained via the Doppler effect). The distance estimate comes from computing how far the object must be to make its observed absolute velocity appear with the observed angular motion.
Expansion parallaxes in particular can give fundamental distance estimates for objects that are very far, because supernova ejecta have large expansion velocities and large sizes (compared to stars). Further, they can be observed with radio interferometers which can measure very small angular motions. These combine to provide fundamental distance estimates to supernovae in other galaxies. Though valuable, such cases are quite rare, so they serve as important consistency checks on the distance ladder rather than workhorse steps by themselves.
Parsec
Stellar parallax
Stellar parallax created by the relative motion between the Earth and a star can be seen, in the Copernican model, as arising from the orbit of the Earth around the Sun: the star only appears to move relative to more distant objects in the sky. In a geostatic model, the movement of the star would have to be taken as real with the star oscillating across the sky with respect to the background stars.
Stellar parallax is most often measured using annual parallax, defined as the difference in position of a star as seen from the Earth and Sun, i.e. the angle subtended at a star by the mean radius of the Earth's orbit around the Sun. The parsec (3.26 light-years) is defined as the distance for which the annual parallax is 1 arcsecond. Annual parallax is normally measured by observing the position of a star at different times of the year as the Earth moves through its orbit. Measurement of annual parallax was the first reliable way to determine the distances to the closest stars. The first successful measurements of stellar parallax were made by Friedrich Bessel in 1838 for the star 61 Cygni using a heliometer. Stellar parallax remains the standard for calibrating other measurement methods. Accurate calculations of distance based on stellar parallax require a measurement of the distance from the Earth to the Sun, now based on radar reflection off the surfaces of planets.
The angles involved in these calculations are very small and thus difficult to measure. The nearest star to the Sun (and thus the star with the largest parallax), Proxima Centauri, has a parallax of 0.7687 ± 0.0003 arcsec. This angle is approximately that subtended by an object 2 centimeters in diameter located 5.3 kilometers away.
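The conversion from a measured parallax to a distance follows directly from the definition of the parsec (d in parsecs = 1 / p in arcseconds). A minimal Python sketch applying it to the Proxima Centauri parallax quoted above; the 3.2616 light-years-per-parsec factor is the standard conversion:

```python
def parallax_distance(parallax_arcsec):
    """Distance from annual parallax: d[pc] = 1 / p[arcsec]."""
    d_parsec = 1.0 / parallax_arcsec
    return d_parsec, d_parsec * 3.2616  # (parsecs, light-years)

# Proxima Centauri, the nearest star, using the parallax given above
d_pc, d_ly = parallax_distance(0.7687)
print(f"{d_pc:.2f} pc = {d_ly:.2f} ly")   # about 1.30 pc, 4.24 ly
```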
The fact that stellar parallax was so small that it was unobservable at the time was used as the main scientific argument against heliocentrism during the early modern age. It is clear from Euclid's geometry that the effect would be undetectable if the stars were far enough away, but for various reasons such gigantic distances involved seemed entirely implausible: it was one of Tycho's principal objections to Copernican heliocentrism that for it to be compatible with the lack of observable stellar parallax, there would have to be an enormous and unlikely void between the orbit of Saturn (then the most distant known planet) and the eighth sphere (the fixed stars).
In 1989, the satellite Hipparcos was launched primarily for obtaining improved parallaxes and proper motions for over 100,000 nearby stars, increasing the reach of the method tenfold. Even so, Hipparcos was only able to measure parallax angles for stars up to about 1,600 light-years away, a little more than one percent of the diameter of the Milky Way Galaxy. The European Space Agency's Gaia mission, launched in December 2013, can measure parallax angles to an accuracy of 10 microarcseconds, thus mapping nearby stars (and potentially planets) up to a distance of tens of thousands of light-years from Earth. In April 2014, NASA astronomers reported that the Hubble Space Telescope, by using spatial scanning, can precisely measure distances up to 10,000 light-years away, a ten-fold improvement over earlier measurements.
Diurnal parallax
Diurnal parallax is a parallax that varies with the rotation of the Earth or with a difference in location on the Earth. The Moon and to a smaller extent the terrestrial planets or asteroids seen from different viewing positions on the Earth (at one given moment) can appear differently placed against the background of fixed stars.
The diurnal parallax has been used by John Flamsteed in 1672 to measure the distance to Mars at its opposition and through that to estimate the astronomical unit and the size of the Solar System.
Lunar parallax
Lunar parallax (often short for lunar horizontal parallax or lunar equatorial horizontal parallax), is a special case of (diurnal) parallax: the Moon, being the nearest celestial body, has by far the largest maximum parallax of any celestial body, at times exceeding 1 degree.
The diagram for stellar parallax can illustrate lunar parallax as well if the diagram is taken to be scaled right down and slightly modified. Instead of 'near star', read 'Moon', and instead of taking the circle at the bottom of the diagram to represent the size of the Earth's orbit around the Sun, take it to be the size of the Earth's globe, and a circle around the Earth's surface. Then, the lunar (horizontal) parallax amounts to the difference in angular position, relative to the background of distant stars, of the Moon as seen from two different viewing positions on the Earth.
One of the viewing positions is the place from which the Moon can be seen directly overhead at a given moment. That is, viewed along the vertical line in the diagram. The other viewing position is a place from which the Moon can be seen on the horizon at the same moment. That is, viewed along one of the diagonal lines, from an Earth-surface position corresponding roughly to one of the blue dots on the modified diagram.
The lunar (horizontal) parallax can alternatively be defined as the angle subtended at the distance of the Moon by the radius of the Earth—equal to angle p in the diagram when scaled-down and modified as mentioned above.
The lunar horizontal parallax at any time depends on the linear distance of the Moon from the Earth. The Earth-Moon linear distance varies continuously as the Moon follows its perturbed and approximately elliptical orbit around the Earth. The range of the variation in linear distance is from about 56 to 63.7 Earth radii, corresponding to a horizontal parallax of about a degree of arc, but ranging from about 61.4' to about 54'. The Astronomical Almanac and similar publications tabulate the lunar horizontal parallax and/or the linear distance of the Moon from the Earth on a periodical e.g. daily basis for the convenience of astronomers (and of celestial navigators), and the study of how this coordinate varies with time forms part of lunar theory.
Parallax can also be used to determine the distance to the Moon.
One way to determine the lunar parallax from one location is by using a lunar eclipse. A full shadow of the Earth on the Moon has an apparent radius of curvature equal to the difference between the apparent radii of the Earth and the Sun as seen from the Moon. This radius can be seen to be equal to 0.75 degrees, from which (with the solar apparent radius of 0.25 degrees) we get an Earth apparent radius of 1 degree. This yields an Earth–Moon distance of 60.27 Earth radii, or about 384,000 km. This procedure was first used by Aristarchus of Samos and Hipparchus, and later found its way into the work of Ptolemy.
The diagram at the right shows how daily lunar parallax arises on the geocentric and geostatic planetary model, in which the Earth is at the center of the planetary system and does not rotate. It also illustrates the important point that parallax need not be caused by any motion of the observer, contrary to some definitions of parallax that say it is, but may arise purely from motion of the observed.
Another method is to take two pictures of the Moon at the same time from two locations on Earth and compare the positions of the Moon relative to the stars. Using the orientation of the Earth, those two position measurements, and the distance between the two locations on the Earth, the distance to the Moon can be triangulated:

$$\mathrm{distance}_{\text{Moon}} \approx \frac{\text{distance between observers}}{\tan(\text{angular shift})}$$
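A minimal Python sketch of this two-site triangulation under the small-angle approximation; the baseline and angular shift used below are illustrative placeholder values, not measurements:

```python
import math

def moon_distance(baseline_km, angular_shift_deg):
    """Triangulate the Moon: two observers separated by a baseline (taken
    perpendicular to the line of sight) see the Moon shifted against the
    background stars by a small angle, so distance ~= baseline / tan(shift)."""
    return baseline_km / math.tan(math.radians(angular_shift_deg))

# Illustrative values only: a 6,000 km baseline and a shift of about 1 degree
print(f"{moon_distance(6000.0, 1.0):,.0f} km")   # on the order of 340,000 km
```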
This is the method referred to by Jules Verne in his 1865 novel From the Earth to the Moon: Until then, many people had no idea how one could calculate the distance separating the Moon from the Earth. The circumstance was exploited to teach them that this distance was obtained by measuring the parallax of the Moon. If the word parallax appeared to amaze them, they were told that it was the angle subtended by two straight lines running from both ends of the Earth's radius to the Moon. If they had doubts about the perfection of this method, they were immediately shown that not only did this mean distance amount to a whole two hundred thirty-four thousand three hundred and forty-seven miles (94,330 leagues) but also that the astronomers were not in error by more than seventy miles (≈ 30 leagues).
Solar parallax
After Copernicus proposed his heliocentric system, with the Earth in revolution around the Sun, it was possible to build a model of the whole Solar System without scale. To ascertain the scale, it is necessary only to measure one distance within the Solar System, e.g., the mean distance from the Earth to the Sun (now called an astronomical unit, or AU). When found by triangulation, this is referred to as the solar parallax, the difference in position of the Sun as seen from the Earth's center and a point one Earth radius away, i.e., the angle subtended at the Sun by the Earth's mean radius. Knowing the solar parallax and the mean Earth radius allows one to calculate the AU, the first, small step on the long road of establishing the size and expansion age of the visible Universe.
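A minimal Python sketch of that single-distance calculation; the equatorial Earth radius and the modern solar parallax value of about 8.794 arcseconds are stated here as assumptions for the example:

```python
import math

EARTH_RADIUS_KM = 6378.137  # equatorial radius, assumed for the example

def au_from_solar_parallax(parallax_arcsec):
    """The solar parallax is the angle subtended at the Sun by one Earth radius,
    so the Earth-Sun distance is R_earth / tan(parallax)."""
    p_rad = math.radians(parallax_arcsec / 3600.0)
    return EARTH_RADIUS_KM / math.tan(p_rad)

print(f"{au_from_solar_parallax(8.794):.4e} km")  # ~1.496e8 km, one astronomical unit
```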
A primitive way to determine the distance to the Sun in terms of the distance to the Moon was already proposed by Aristarchus of Samos in his book On the Sizes and Distances of the Sun and Moon. He noted that the Sun, Moon, and Earth form a right triangle (with the right angle at the Moon) at the moment of first or last quarter moon. He then estimated that the Moon–Earth–Sun angle was 87°. Using correct geometry but inaccurate observational data, Aristarchus concluded that the Sun was slightly less than 20 times farther away than the Moon. The true value of this angle is close to 89° 50', and the Sun is about 390 times farther away.
Aristarchus pointed out that the Moon and Sun have nearly equal apparent angular sizes, and therefore their diameters must be in proportion to their distances from Earth. He thus concluded that the Sun was around 20 times larger than the Moon. This conclusion, although incorrect, follows logically from his incorrect data. It suggests that the Sun is larger than the Earth, which could be taken to support the heliocentric model.
Although Aristarchus' results were incorrect due to observational errors, they were based on correct geometric principles of parallax, and became the basis for estimates of the size of the Solar System for almost 2000 years, until the transit of Venus was correctly observed in 1761 and 1769. This method was proposed by Edmond Halley in 1716, although he did not live to see the results. The use of Venus transits was less successful than had been hoped due to the black drop effect, but the resulting estimate, 153 million kilometers, is just 2% above the currently accepted value, 149.6 million kilometers.
Much later, the Solar System was "scaled" using the parallax of asteroids, some of which, such as Eros, pass much closer to Earth than Venus. In a favorable opposition, Eros can approach the Earth to within 22 million kilometers. During the opposition of 1900–1901, a worldwide program was launched to make parallax measurements of Eros to determine the solar parallax (or distance to the Sun), with the results published in 1910 by Arthur Hinks of Cambridge and Charles D. Perrine of the Lick Observatory, University of California.
Perrine published progress reports in 1906 and 1908. He took 965 photographs with the Crossley Reflector and selected 525 for measurement. A similar program was then carried out, during a closer approach, in 1930–1931 by Harold Spencer Jones. The value of the Astronomical Unit (roughly the Earth-Sun distance) obtained by this program was considered definitive until 1968, when radar and dynamical parallax methods started producing more precise measurements.
Also radar reflections, both off Venus (1958) and off asteroids, like Icarus, have been used for solar parallax determination. Today, use of spacecraft telemetry links has solved this old problem. The currently accepted value of the solar parallax is about 8.794 arcseconds.
Moving-cluster parallax
The open stellar cluster Hyades in Taurus extends over such a large part of the sky, 20 degrees, that the proper motions as derived from astrometry appear to converge with some precision to a perspective point north of Orion. Combining the observed apparent (angular) proper motion in seconds of arc with the also observed true (absolute) receding motion as witnessed by the Doppler redshift of the stellar spectral lines, allows estimation of the distance to the cluster (151 light-years) and its member stars in much the same way as using annual parallax.
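A minimal Python sketch of the convergent-point relation: the transverse velocity v_r·tan(λ) is set equal to 4.74·μ·d, with μ in arcseconds per year and d in parsecs. The numbers below are hypothetical, Hyades-like values chosen only for illustration:

```python
import math

def moving_cluster_distance(v_radial_kms, proper_motion_arcsec_yr, angle_to_convergent_deg):
    """Moving-cluster (convergent point) method: d = v_r * tan(lambda) / (4.74 * mu)."""
    lam = math.radians(angle_to_convergent_deg)
    return v_radial_kms * math.tan(lam) / (4.74 * proper_motion_arcsec_yr)

# Hypothetical values: 39 km/s radial velocity, 0.11"/yr proper motion,
# star lying 30 degrees from the convergent point
print(f"{moving_cluster_distance(39.0, 0.11, 30.0):.0f} pc")
```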
Dynamical parallax
Dynamical parallax has sometimes also been used to determine the distance to a supernova when the optical wavefront of the outburst is seen to propagate through the surrounding dust clouds at an apparent angular velocity, while its true propagation velocity is known to be the speed of light.
Spatio-temporal parallax
From enhanced relativistic positioning systems, spatio-temporal parallax generalizing the usual notion of parallax in space only has been developed. Then, event fields in spacetime can be deduced directly without intermediate models of light bending by massive bodies such as the one used in the PPN formalism for instance.
Statistical parallax
Two related techniques can determine the mean distances of stars by modelling the motions of stars. Both are referred to as statistical parallaxes, or individually called secular parallaxes and classical statistical parallaxes.
The motion of the Sun through space provides a longer baseline that will increase the accuracy of parallax measurements, known as secular parallax. For stars in the Milky Way disk, this corresponds to a mean baseline of 4 AU per year. For halo stars the baseline is 40 AU per year. After several decades, the baseline can be orders of magnitude greater than the Earth–Sun baseline used for traditional parallax. Secular parallax introduces a higher level of uncertainty, because the relative velocity of other stars is an additional unknown. When applied to samples of multiple stars, the uncertainty can be reduced; the precision is inversely proportional to the square root of the sample size.
The mean parallaxes and distances of a large group of stars can be estimated from their radial velocities and proper motions. This is known as a classical statistical parallax. The motions of the stars are modelled to statistically reproduce the velocity dispersion based on their distance.
Other methods for distance measurement in astronomy
In astronomy, the term "parallax" has come to mean a method of estimating distances, not necessarily utilizing a true parallax, such as:
Photometric parallax method
Spectroscopic parallax
Dynamical parallax
See also
Cosmic distance ladder
Lunar distance (astronomy)
Notes
References
Parallax
Parallax
Parallax
Length, distance, or range measuring devices | Parallax in astronomy | [
"Physics",
"Astronomy"
] | 4,150 | [
"Concepts in astronomy",
"Astrometry",
"Astronomical sub-disciplines"
] |
72,887,622 | https://en.wikipedia.org/wiki/HD%20159176 | HD 159176, also known as Boss 4444 and V1036 Scorpii, is a variable star about 2,800 light years from the Earth, in the constellation Scorpius. It is a 5th magnitude star, so it should be visible to the naked eye of an observer far from city lights. HD 159176 is the brightest star in the young open cluster NGC 6383. It is a binary star composed of two nearly identical O stars in a circular orbit.
In 1930, Robert Trumpler discovered that HD 159176 is a spectroscopic binary. He noted that the two stars have very nearly the same brightness and spectral type. In addition, he was observing at the Lick Observatory, so the far southern declination of the star meant it could only be observed near transit. Those three things together prevented him from unambiguously measuring the orbital period. The first full set of orbital elements, including the orbital period, was derived by Peter Conti et al. in 1975. Also in 1975, photometric observations by J. C. Thomas showed that HD 159176 is a rotating ellipsoidal variable. The star's spectra exhibit the Struve–Sahade effect. HD 159176 was given the variable star designation V1036 Scorpii in 1997.
Although HD 159176 was long considered to be a non-eclipsing binary, data from the Transiting Exoplanet Survey Satellite does show eclipses.
HD 159176 has an X-ray luminosity far higher than would be expected from two isolated O stars. The excess X-rays may arise from interacting stellar winds.
References
Scorpius
86011
159176
Scorpii, V1036
6535
Rotating ellipsoidal variables | HD 159176 | [
"Astronomy"
] | 371 | [
"Scorpius",
"Constellations"
] |
72,890,483 | https://en.wikipedia.org/wiki/Materials%20Research%20Bulletin | Materials Research Bulletin is a peer-reviewed, scientific journal that covers the study of materials science and engineering. The journal is published by Elsevier and was established in 1966. The Editor-in-Chief is Rick Ubic.
The journal focuses on the development and understanding of materials, including their properties, structure, and processing, and the application of these materials in various fields. The scope of the journal includes the following areas: ceramics, metals, polymers, composites, electronic and optical materials, and biomaterials.
Materials Research Bulletin features original research articles, review articles, and short communications.
Abstracting and indexing
The journal is abstracted and indexed in the following services, among others:
Materials Science Citation Index
Chemical Abstracts
Cambridge Scientific Abstracts
Scopus
Web of Science
According to the Journal Citation Reports, the journal has a 2021 impact factor of 5.6.
References
External links
Materials
English-language journals
Elsevier academic journals
Materials science journals
Academic journals established in 1966 | Materials Research Bulletin | [
"Physics",
"Materials_science",
"Engineering"
] | 191 | [
"Materials science stubs",
"Materials science journals",
"Materials science journal stubs",
"Materials science",
"Materials",
"Matter"
] |
72,891,636 | https://en.wikipedia.org/wiki/Data%20version%20control | Data version control is a method of working with data sets. It is similar to the version control systems used in traditional software development, but is optimized to allow better processing of data and collaboration in the context of data analytics, research, and any other form of data analysis. Data version control may also include specific features and configurations designed to facilitate work with large data sets and data lakes.
History
Background
As early as 1985, researchers recognized the need for defining timing attributes in database tables, which would be necessary for tracking changes to databases. This research continued into the 1990s, and the theory was formalized into practical methods for managing data in relational databases, providing some of the foundational concepts for what would later become data version control.
In the early 2010s the size of data sets was rapidly expanding, and relational databases were no longer sufficient to manage the amounts of data organizations were accumulating. The Apache Hadoop ecosystem, with HDFS as a storage layer, and later object storage became dominant in big data operations. Research into data management tools and data version control systems increased sharply, along with demand for such tools from both academia and the private and public sectors.
Version controlled databases
The first versioned database was proposed in 2012 for the SciDB database, and demonstrated that it was possible to create chains and trees of different versions of the database while reducing both the overall storage size and the access times associated with previous methods. In 2014, a proposal was made to generalize these principles into a platform that could be used for any application.
In 2016, a prototype for a data version control system was developed during a Kaggle competition. This software was later used internally at an AI firm, and eventually spun off as a startup. Since then, a number of data version control systems, both open and closed source, have been developed and offered commercially, with a subset dedicated specifically to machine learning.
Use cases
Reproducibility
A wide range of scientific disciplines have adopted automated analysis of large quantities of data, including astrophysics, seismology, biology and medicine, social sciences and economics, and many other fields. The principle of reproducibility is an important aspect of formalizing findings in scientific disciplines, and in the context of data science presents a number of challenges. Most datasets are constantly changing, whether due to the addition of more data or changes in the structure and format of the data, and small changes can have significant effects on the outcome of experiments. Data version control allows for recording the exact state of data sets at a particular moment of time, making it easier to reproduce and understand experimental outcomes. If data practitioners can only know the present state of the data, they may run into a number of challenges such as difficulties in problem debugging or complying with data audits.
Development and testing
Data version control is sometimes used in testing and development of applications that interact with large quantities of data. Some data version control tools allow users to create replicas of their production environment for testing purposes. This approach allows them to test data integration processes such as extract, transform and load (ETL) and understand the changes made to data without having a negative impact on the consumers of the production data.
Machine learning and artificial intelligence
In the context of machine learning, data version control can be used for optimizing the performance of models. It can allow automating the process of analyzing outcomes with different versions of a data set to continuously improve performance. It is possible that open source data version control software could eliminate the need for proprietary AI platforms by extending tools like Git and CI/CD for use by machine learning engineers. Many open-source solutions build on Git-like semantics to provide these capabilities, as Git itself was designed for small text files and doesn't support typical machine learning datasets, which are very large.
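As a rough illustration of the content-addressing idea behind such Git-like data versioning (not the API of any particular tool; the function and file names below are invented for the example), a toy Python sketch that records a hashed manifest of a data directory:

```python
import hashlib, json, time
from pathlib import Path

def snapshot(data_dir: str, manifest_path: str, message: str) -> str:
    """Record a content-addressed snapshot of every file under data_dir.
    A real data version control tool adds remote object storage, branching
    and large-file handling; this only captures the core idea."""
    entries = {}
    for f in sorted(Path(data_dir).rglob("*")):
        if f.is_file():
            entries[str(f)] = hashlib.sha256(f.read_bytes()).hexdigest()
    version_id = hashlib.sha256(json.dumps(entries, sort_keys=True).encode()).hexdigest()[:12]
    manifest = {"version": version_id, "message": message, "time": time.time(), "files": entries}
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))
    return version_id

# version_id = snapshot("data/", "manifest.json", "add August sensor readings")
```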
CI/CD for data
CI/CD methodologies can be applied to datasets using data version control. Version control enables users to integrate with automation servers that allow establishing a CI/CD process for data. By adding testing platforms to the process, they can ensure high quality of the data product. In this scenario, teams execute Continuous Integration (CI) tests on data and put checks in place to ensure the data is promoted to production only if all the set data quality and data governance criteria are met.
Experimentation in isolated environments
To experiment on a dataset without impacting production data, one can use data version control to create replicas of the production environment where tests can be carried out. Such replicas allow testing and understanding of changes safely applied to data.
Data version control tools allow environments to be replicated without time- and resource-consuming maintenance. Instead, such tools allow objects to be shared using metadata.
Rollback
Continuous changes in data sets can sometimes cause functionality issues or lead to undesired outcomes, especially when applications are using the data. Data version control tools allow for the possibility to roll back a data set to an earlier state. This can be used to restore or improve functionality of an application or to correct errors or bad data which has been mistakenly included.
Examples
Version controlled data sources:
Kaggle
Quilt
Dolt
Kamu
Data version control for data lakes:
LakeFS
Project Nessie
Git-LFS
ML-Ops systems that implement data version control:
DVC
Pachyderm
Neptune
activeloop
graviti
dagshub
alectio
Galileo
Voxel51
dstack
dvid
See also
References
Version control systems
Technical communication
Big data
Data management
Data analysis | Data version control | [
"Technology"
] | 1,110 | [
"Data management",
"Data",
"Big data"
] |
72,891,759 | https://en.wikipedia.org/wiki/NGC%206383 | NGC 6383 is an open cluster of stars in the constellation of Scorpius. It was discovered by English astronomer John Herschel in 1847. In the New General Catalogue it was also identified as NGC 6374, most likely due to a clerical error. This is a large cluster of scattered stars that spans an angular diameter of . The brightest component is the O-type binary star system designated HD 159176 (HR 4962). Against the glare of this sixth magnitude star, a handful of fainter members are visible with a pair of large binoculars.
The cluster NGC 6383 is located at a distance of approximately from the Sun. It forms part of the Milky Way galaxy's Carina–Sagittarius Arm in a star forming region identified as , and lies in front of a dust absorption cloud. The cluster is likely part of the Sagittarius OB1 association, as are the clusters NGC 6530 and NGC 6531. This cluster, and in particular the ionizing radiation from the star HD 159176, form the H II region RCW 132, which spans a crescent-shaped volume with an angular size of .
This is a young cluster with age estimates ranging from 4 to 20 million years, and has not yet achieved dynamic relaxation. It has 254 members identified, with 53 forming young stellar objects, and 21 being hot, massive OB stars. 76 secondary X-ray sources have been detected, with most of them concentrated near the core. Newly-formed stars range in age from 1–6 million years old, indicating recent star formation activity. The cluster has a compact core radius of and a tidal radius of .
References
Further reading
6383
Scorpius
Open clusters | NGC 6383 | [
"Astronomy"
] | 351 | [
"Scorpius",
"Constellations"
] |
72,892,140 | https://en.wikipedia.org/wiki/List%20of%20slowest%20fixed-wing%20aircraft | This article lists powered fixed-wing aircraft with a stall speed of or less, and certain other aircraft. It does not list helicopters or vertical take-off and landing aircraft.
Fixed-wing aircraft are limited by their stall speed, the slowest airspeed at which they can maintain level flight. This depends on weight; however, an aircraft will typically have a published stall speed at maximum takeoff weight.
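The weight dependence follows from requiring lift to equal weight at the maximum lift coefficient, giving V_s = sqrt(2W / (ρ S CL_max)). A minimal Python sketch using a hypothetical light aircraft's numbers, chosen purely for illustration:

```python
import math

def stall_speed(weight_n, wing_area_m2, cl_max, air_density=1.225):
    """Stall speed from the lift equation: V_s = sqrt(2 W / (rho * S * CL_max))."""
    return math.sqrt(2.0 * weight_n / (air_density * wing_area_m2 * cl_max))

# Hypothetical light aircraft: 6,000 N weight, 15 m^2 wing area, CL_max of 1.6
v = stall_speed(6000.0, 15.0, 1.6)
print(f"{v:.1f} m/s ({v * 3.6:.0f} km/h)")   # roughly 20 m/s, about 73 km/h
```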
Short take-off and landing aircraft typically have a low stall speed.
Slowest aircraft
The MacCready Gossamer Condor is a human-powered aircraft capable of flight as slow as . Its successor, the MacCready Gossamer Albatross can fly as slow as . It has a maximum speed of .
The Ruppert Archaeopteryx has a certified stall speed of .
The Vought XF5U can fly as slow as .
The Tapanee Pegazair-100 stall speed is .
The Zenith STOL CH 701 and ICP Savannah both have stall speeds of .
The Slepcev Storch has a stall speed of . It is a 3/4 scale replica of the Fieseler Fi 156 Storch, which had a stall speed of .
The British Auster WW2 reconnaissance aircraft had a placarded stall speed of , but that was merely the speed at which its control surfaces lost authority. As reported in many personal accounts by the pilots in their memoirs, the speed at which the aircraft would actually stall was . Either speed would make it the slowest aircraft used in WW2 and possibly the slowest conventional warplane of all time.
The Antonov An-2 had no published stall speed. At low speeds its elevator cannot generate enough downforce to exceed the stalling angle of attack. In practice it could maintain approximately without descending.
The slowest jet-powered aircraft is the PZL M-15 Belphegor, with a stall speed of
References
Aviation records
Airspeed
Lists of technological superlatives | List of slowest fixed-wing aircraft | [
"Physics"
] | 402 | [
"Wikipedia categories named after physical quantities",
"Airspeed",
"Physical quantities"
] |
72,893,958 | https://en.wikipedia.org/wiki/HD%2099015 | HD 99015, also known as HR 4397 or rarely 31 G. Chamaeleontis, is a solitary white-hued star located in the southern circumpolar constellation Chamaeleon. It has an apparent magnitude of 6.42, placing it near the limit for naked eye visibility even in ideal conditions. The object is located relatively close at a distance of 243 light years and is drifting closer with a somewhat constrained heliocentric radial velocity of . At its current distance, HD 99015's brightness is diminished by 0.31 magnitudes due to interstellar dust. It has an absolute magnitude of +2.08.
This is an ordinary A-type main-sequence star with a stellar classification of A5 V. However, Nancy Houk and A. P. Cowley gave a class of A5 III/IV, indicating that it is instead an evolved A-type star with a luminosity class intermediate between a giant and a subgiant. It has 1.87 times the mass of the Sun and 1.83 times the solar radius. It radiates 12 times the luminosity of the Sun from its photosphere at an effective temperature of . HD 99015 is somewhat metal enriched ([Fe/H] = +0.15) and is estimated to be 854 million years old.
References
A-type main-sequence stars
Chamaeleon
Chamaeleon, 31
CD-76 00495
099015
055497
4397 | HD 99015 | [
"Astronomy"
] | 313 | [
"Chamaeleon",
"Constellations"
] |
72,894,052 | https://en.wikipedia.org/wiki/Tetraneuralia | Tetraneuralia is a proposed clade of spiralian bilaterians uniting the phyla Mollusca and Entoprocta. belonging to Lophotrochozoa that groups mollusks, entoprocts and the extinct family Cupithecidae. The clade is supported by several morphological similarities between the two and has turned out to be important for evolution of mollusks.
The clade was proposed by malacologists and zoologists who observed that several morphological characteristics of entoprocts closely resemble those of mollusks. Both groups share a similar muscular system and the same type of cuticle, the hemolymph of the circulatory system is very similar, and both have a tetraneurous nervous system with two pedal (ventral) nerve cords; this last characteristic gives the clade its name, "Tetraneuralia". Entoprocts also show similarities in the larval phases, with a unique trochophore-type larva resembling that of polyplacophorans and a complex larval apical organ. Mollusks are coelomate animals, in some cases equipped with shells, whereas entoprocts and cycliophorans are acoelomate and sessile in habit.
Although the morphological synapomorphies are strong, few molecular studies supported the clade until a recent molecular study recovered it with strong support, suggesting that the relationship may be correct. In these molecular analyses Tetraneuralia constituted the most basal clade of Lophotrochozoa. First proposed in 2009 based on similarities between entoproct larvae and polyplacophoran molluscs, it was recovered in 2019 in a phylogenomic study by Marlétaz et al.
Phylogeny
In the most recent study, Tetraneuralia was recovered at the base of Lophotrochozoa.
References
Controversial taxa
Bilaterian taxa
Lophotrochozoa | Tetraneuralia | [
"Biology"
] | 420 | [
"Biological hypotheses",
"Controversial taxa"
] |
72,894,674 | https://en.wikipedia.org/wiki/Zinc%20laurate | Zinc laurate is an metal-organic compound with the chemical formula . It is classified as a metallic soap, i.e. a metal derivative of a fatty acid (lauric acid).
Physical properties
Zinc laurate is a white powder with a slightly waxy odor.
It is insoluble in water.
Use
Zinc laurate is used in the personal care and cosmetics industry as an anticaking agent, dry binder, and viscosity-increasing agent.
References
Laurates
Zinc compounds | Zinc laurate | [
"Chemistry"
] | 102 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
72,900,340 | https://en.wikipedia.org/wiki/Neural%20radiance%20field | A neural radiance field (NeRF) is a method based on deep learning for reconstructing a three-dimensional representation of a scene from two-dimensional images. The NeRF model enables downstream applications of novel view synthesis, scene geometry reconstruction, and obtaining the reflectance properties of the scene. Additional scene properties such as camera poses may also be jointly learned. First introduced in 2020, it has since gained significant attention for its potential applications in computer graphics and content creation.
Algorithm
The NeRF algorithm represents a scene as a radiance field parametrized by a deep neural network (DNN). The network predicts a volume density and view-dependent emitted radiance given the spatial location (x, y, z) and viewing direction in Euler angles (θ, Φ) of the camera. By sampling many points along camera rays, traditional volume rendering techniques can produce an image.
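A minimal NumPy sketch of the volume-rendering quadrature commonly used with NeRF (alpha-compositing the sampled densities and colors along a ray); the array shapes and the large padding of the final interval are implementation conventions, not requirements of the method:

```python
import numpy as np

def composite_ray(sigmas, colors, ts):
    """Integrate emitted radiance along one camera ray.
    sigmas: (N,) volume densities at the sampled points
    colors: (N, 3) emitted RGB radiance at the sampled points
    ts:     (N,) depths of the samples along the ray
    """
    deltas = np.diff(ts, append=ts[-1] + 1e10)          # spacing between samples
    alphas = 1.0 - np.exp(-sigmas * deltas)             # opacity contributed by each segment
    # transmittance: fraction of light reaching each sample unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1] + 1e-10]))
    weights = trans * alphas
    rgb = (weights[:, None] * colors).sum(axis=0)       # expected color of the pixel
    return rgb, weights
```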
Data collection
A NeRF needs to be retrained for each unique scene. The first step is to collect images of the scene from different angles and their respective camera pose. These images are standard 2D images and do not require a specialized camera or software. Any camera is able to generate datasets, provided the settings and capture method meet the requirements for SfM (Structure from Motion).
This requires tracking of the camera position and orientation, often through some combination of SLAM, GPS, or inertial estimation. Researchers often use synthetic data to evaluate NeRF and related techniques. For such data, images (rendered through traditional non-learned methods) and respective camera poses are reproducible and error-free.
Training
For each sparse viewpoint (image and camera pose) provided, camera rays are marched through the scene, generating a set of 3D points with a given radiance direction (into the camera). For these points, volume density and emitted radiance are predicted using the multi-layer perceptron (MLP). An image is then generated through classical volume rendering. Because this process is fully differentiable, the error between the predicted image and the original image can be minimized with gradient descent over multiple viewpoints, encouraging the MLP to develop a coherent model of the scene.
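A schematic PyTorch training step, assuming a render_fn that performs the differentiable volume rendering described above; the function and argument names are placeholders for illustration, not part of any specific NeRF codebase:

```python
import torch

def training_step(model, optimizer, rays_o, rays_d, target_rgb, render_fn):
    """One gradient-descent step: render the batch of rays with the current MLP,
    compare against the ground-truth pixel colors, and backpropagate."""
    optimizer.zero_grad()
    pred_rgb = render_fn(model, rays_o, rays_d)              # (B, 3) predicted colors
    loss = torch.nn.functional.mse_loss(pred_rgb, target_rgb)
    loss.backward()                                           # the pipeline is differentiable end to end
    optimizer.step()
    return loss.item()
```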
Variations and improvements
Early versions of NeRF were slow to optimize and required that all input views were taken with the same camera in the same lighting conditions. These performed best when limited to orbiting around individual objects, such as a drum set, plants or small toys. Since the original paper in 2020, many improvements have been made to the NeRF algorithm, with variations for special use cases.
Fourier feature mapping
In 2020, shortly after the release of NeRF, the addition of Fourier Feature Mapping improved training speed and image accuracy. Deep neural networks struggle to learn high frequency functions in low dimensional domains; a phenomenon known as spectral bias. To overcome this shortcoming, points are mapped to a higher dimensional feature space before being fed into the MLP.
The mapping takes the form

$$\gamma(\mathbf{v}) = \left[a_1\cos(2\pi\mathbf{b}_1^{\top}\mathbf{v}),\; a_1\sin(2\pi\mathbf{b}_1^{\top}\mathbf{v}),\; \ldots,\; a_m\cos(2\pi\mathbf{b}_m^{\top}\mathbf{v}),\; a_m\sin(2\pi\mathbf{b}_m^{\top}\mathbf{v})\right]^{\top}$$

where $\mathbf{v}$ is the input point, $\mathbf{b}_j$ are the frequency vectors, and $a_j$ are coefficients.
This allows for rapid convergence to high frequency functions, such as pixels in a detailed image.
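A minimal NumPy sketch of this mapping; the Gaussian sampling of the frequency vectors and the scale of 10 are illustrative choices, not prescribed values:

```python
import numpy as np

def fourier_features(v, B, a=None):
    """Map a low-dimensional point v to cosine/sine features at frequencies B (m x d)."""
    if a is None:
        a = np.ones(B.shape[0])
    proj = 2.0 * np.pi * B @ v                       # projections b_j . v
    return np.concatenate([a * np.cos(proj), a * np.sin(proj)])

rng = np.random.default_rng(0)
B = rng.normal(scale=10.0, size=(64, 3))             # 64 random frequency vectors for a 3-D point
features = fourier_features(np.array([0.1, -0.3, 0.7]), B)   # 128-D encoding fed to the MLP
```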
Bundle-adjusting neural radiance fields
One limitation of NeRFs is the requirement of knowing accurate camera poses to train the model. Often, pose estimation methods are not completely accurate, and in some cases the camera pose cannot be known at all. These imperfections result in artifacts and suboptimal convergence. To address this, a method was developed to optimize the camera pose along with the volumetric function itself. Called Bundle-Adjusting Neural Radiance Field (BARF), the technique uses a dynamic low-pass filter to go from coarse to fine adjustment, minimizing error by finding the geometric transformation to the desired image. This corrects imperfect camera poses and greatly improves the quality of NeRF renders.
Multiscale representation
Conventional NeRFs struggle to represent detail at all viewing distances, producing blurry images up close and overly aliased images from distant views. In 2021, researchers introduced a technique to improve the sharpness of details at different viewing scales known as mip-NeRF (comes from mipmap). Rather than sampling a single ray per pixel, the technique fits a gaussian to the conical frustum cast by the camera. This improvement effectively anti-aliases across all viewing scales. mip-NeRF also reduces overall image error and is faster to converge at ~half the size of ray-based NeRF.
Learned initializations
In 2021, researchers applied meta-learning to assign initial weights to the MLP. This rapidly speeds up convergence by effectively giving the network a head start in gradient descent. Meta-learning also allowed the MLP to learn an underlying representation of certain scene types. For example, given a dataset of famous tourist landmarks, an initialized NeRF could partially reconstruct a scene given one image.
NeRF in the wild
Conventional NeRFs are vulnerable to slight variations in input images (objects, lighting) often resulting in ghosting and artifacts. As a result, NeRFs struggle to represent dynamic scenes, such as bustling city streets with changes in lighting and dynamic objects. In 2021, researchers at Google developed a new method for accounting for these variations, named NeRF in the Wild (NeRF-W). This method splits the neural network (MLP) into three separate models. The main MLP is retained to encode the static volumetric radiance. However, it operates in sequence with a separate MLP for appearance embedding (changes in lighting, camera properties) and an MLP for transient embedding (changes in scene objects). This allows the NeRF to be trained on diverse photo collections, such as those taken by mobile phones at different times of day.
Relighting
In 2021, researchers added more outputs to the MLP at the heart of NeRFs. The output now included: volume density, surface normal, material parameters, distance to the first surface intersection (in any direction), and visibility of the external environment in any direction. The inclusion of these new parameters lets the MLP learn material properties, rather than pure radiance values. This facilitates a more complex rendering pipeline, calculating direct and global illumination, specular highlights, and shadows. As a result, the NeRF can render the scene under any lighting conditions with no re-training.
Plenoctrees
Although NeRFs had reached high levels of fidelity, their costly compute time made them useless for many applications requiring real-time rendering, such as VR/AR and interactive content. Introduced in 2021, Plenoctrees (plenoptic octrees) enabled real-time rendering of pre-trained NeRFs through division of the volumetric radiance function into an octree. Rather than assigning a radiance direction into the camera, viewing direction is taken out of the network input and spherical radiance is predicted for each region. This makes rendering over 3000x faster than conventional NeRFs.
Sparse Neural Radiance Grid
Similar to Plenoctrees, this method enabled real-time rendering of pretrained NeRFs. To avoid querying the large MLP for each point, this method bakes NeRFs into Sparse Neural Radiance Grids (SNeRG). A SNeRG is a sparse voxel grid containing opacity and color, with learned feature vectors to encode view-dependent information. A lightweight, more efficient MLP is then used to produce view-dependent residuals to modify the color and opacity. To enable this compressive baking, small changes to the NeRF architecture were made, such as running the MLP once per pixel rather than for each point along the ray. These improvements make SNeRG extremely efficient, outperforming Plenoctrees.
Instant NeRFs
In 2022, researchers at Nvidia enabled real-time training of NeRFs through a technique known as Instant Neural Graphics Primitives. An innovative input encoding reduces computation, enabling real-time training of a NeRF, an improvement orders of magnitude above previous methods. The speedup stems from the use of spatial hash functions, which have constant-time access, and parallelized architectures which run fast on modern GPUs.
Related techniques
Plenoxels
Plenoxel (plenoptic volume element) uses a sparse voxel representation instead of a volumetric approach as seen in NeRFs. Plenoxel also completely removes the MLP, instead directly performing gradient descent on the voxel coefficients. Plenoxel can match the fidelity of a conventional NeRF in orders of magnitude less training time. Published in 2022, this method disproved the importance of the MLP, showing that the differentiable rendering pipeline is the critical component.
Gaussian splatting
Gaussian splatting is a newer method that can outperform NeRF in render time and fidelity. Rather than representing the scene as a volumetric function, it uses a sparse cloud of 3D gaussians. First, a point cloud is generated (through structure from motion) and converted to gaussians of initial covariance, color, and opacity. The gaussians are directly optimized through stochastic gradient descent to match the input image. This saves computation by removing empty space and foregoing the need to query a neural network for each point. Instead, simply "splat" all the gaussians onto the screen and they overlap to produce the desired image.
Photogrammetry
Traditional photogrammetry is not neural, instead using robust geometric equations to obtain 3D measurements. NeRFs, unlike photogrammetric methods, do not inherently produce dimensionally accurate 3D geometry. While their results are often sufficient for extracting accurate geometry (e.g., via marching cubes), the process is fuzzy, as with most neural methods. This limits NeRF to cases where the output image is valued, rather than raw scene geometry. However, NeRFs excel in situations with unfavorable lighting. For example, photogrammetric methods completely break down when trying to reconstruct reflective or transparent objects in a scene, while a NeRF is able to infer the geometry.
Applications
NeRFs have a wide range of applications, and are starting to grow in popularity as they become integrated into user-friendly applications.
Content creation
NeRFs have huge potential in content creation, where on-demand photorealistic views are extremely valuable. The technology democratizes a space previously only accessible by teams of VFX artists with expensive assets. Neural radiance fields now allow anyone with a camera to create compelling 3D environments. NeRF has been combined with generative AI, allowing users with no modelling experience to instruct changes in photorealistic 3D scenes. NeRFs have potential uses in video production, computer graphics, and product design.
Interactive content
The photorealism of NeRFs make them appealing for applications where immersion is important, such as virtual reality or videogames. NeRFs can be combined with classical rendering techniques to insert synthetic objects and create believable virtual experiences.
Medical imaging
NeRFs have been used to reconstruct 3D CT scans from sparse or even single X-ray views. The model demonstrated high fidelity renderings of chest and knee data. If adopted, this method can save patients from excess doses of ionizing radiation, allowing for safer diagnosis.
Robotics and autonomy
The unique ability of NeRFs to understand transparent and reflective objects makes them useful for robots interacting in such environments. The use of NeRF allowed a robot arm to precisely manipulate a transparent wine glass; a task where traditional computer vision would struggle.
NeRFs can also generate photorealistic human faces, making them valuable tools for human-computer interaction. Traditionally rendered faces can be uncanny, while other neural methods are too slow to run in real-time.
References
Machine learning algorithms
Computer vision | Neural radiance field | [
"Engineering"
] | 2,427 | [
"Artificial intelligence engineering",
"Packaging machinery",
"Computer vision"
] |
72,900,602 | https://en.wikipedia.org/wiki/Manganese%20laurate | Manganese laurate is an metal-organic compound with the chemical formula . The compound is classified as a metallic soap, i.e. a metal derivative of a fatty acid (lauric acid).
Preparation
Manganese laurate can be prepared by the reaction of sodium laurate with manganese chloride.
Physical properties
Manganese laurate forms a pale pink crystalline powder.
It is insoluble in water, soluble in alcohol, and slightly soluble in decane.
References
Laurates
Manganese compounds | Manganese laurate | [
"Chemistry"
] | 92 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
72,900,780 | https://en.wikipedia.org/wiki/Samsung%20Galaxy%20S23 | The Samsung Galaxy S23 is a series of high-end Android-based smartphones developed, manufactured, and marketed by Samsung Electronics as part of its flagship Galaxy S series. The phones were announced and unveiled on 1 February 2023 at the Galaxy Unpacked in-person event and were released on 17 February 2023. They collectively serve as the successor to the Samsung Galaxy S22 series and the S21 FE. It was succeeded by the Samsung Galaxy S24 series.
History
The first three phones in the series the Samsung Galaxy S23, S23+, and S23 Ultra were announced on 1 February 2023 at the Galaxy Unpacked event.
The Samsung Galaxy S23 FE was announced on October 3, 2023, and released on October 26, 2023.
Lineup
The Galaxy S23 series includes three devices, which share the same lineup and screen sizes with the previous Galaxy S22 series. The entry-level Galaxy S23 features a flat 6.1-inch (155 mm) display with a variable refresh rate from 48 Hz to 120 Hz, 8 GB of RAM, and storage options from 128 GB to 512 GB. The Galaxy S23+ features similar hardware in a larger 6.6-inch (168 mm) form factor, storage options starting at 256 GB, faster charging rate and larger battery capacity. At the top of the lineup, the Galaxy S23 Ultra features a curved 6.8-inch (173 mm) display with a variable refresh rate starting at 1 Hz, 8 GB or 12 GB of RAM, storage options from 256 GB to 1 TB, and the largest battery capacity in the lineup. Additionally, it features a more advanced camera setup, a higher-resolution display, and like the previous Galaxy S22 Ultra, an integrated S Pen for increased functionality and productivity.
Design
All models in the Samsung Galaxy S23 series are available in four standard colors: Cream, Lavender, Green, and Phantom Black, with four additional colors available only at samsung.com: Graphite and Lime, and the Galaxy S23 Ultra exclusives Red and Sky Blue.
Specifications
Hardware
Chipset
Unlike the Samsung Galaxy S22 and previous generations, which in some African and all European and Latin American countries utilized Samsung's own Exynos chip, the Galaxy S23 series uses Qualcomm's Snapdragon 8 Gen 2 for Galaxy chip worldwide.
The Qualcomm Snapdragon 8 Gen 2 for Galaxy, which is a special version of the Snapdragon 8 Gen 2 developed specifically for Samsung, includes an Octa-Core CPU and an Adreno 740 GPU with a Qualcomm X70 modem for connectivity. The difference between the regular version of the Snapdragon 8 Gen 2 compared to the Samsung version is that the Samsung version features an overclocked Cortex-X3 core at 3.36 GHz instead of 3.20 GHz, and the Adreno 740 GPU has been overclocked to 719 MHz instead of 680 MHz.
Display
The Galaxy S23 series features a "Dynamic AMOLED 2X" display with HDR10+ support, 1450 nits of peak brightness on the Galaxy S23 FE and 1750 nits of peak brightness on other models, and "dynamic tone mapping" technology. The Galaxy S23, S23+ and S23 Ultra use an ultrasonic in-screen fingerprint sensor, while the Galaxy S23 FE uses an optical in-screen fingerprint sensor.
Cameras
The Galaxy S23 and S23+ have a 50 MP wide sensor, a 10 MP telephoto sensor and a 12 MP ultrawide sensor. The S23 Ultra has a 200 MP wide sensor, two 10 MP telephoto sensors and a 12 MP ultrawide sensor. The front camera uses a 12 MP sensor on all three models.
Connectivity
Samsung Galaxy S23, S23+, and S23 Ultra support 5G SA/NSA/Sub6, Wi-Fi 6E, and Bluetooth 5.3 connectivity.
Memory and storage
The Samsung Galaxy S23 and Galaxy S23 FE offer 8 GB of RAM with 128 GB, 256 GB, and, in some regions, 512 GB of internal storage options on the Galaxy S23. The Galaxy S23+ offers 8 GB of RAM with 256 GB and 512 GB of internal storage options. The Galaxy S23 Ultra has 8 GB or 12 GB of RAM and 256 GB, 512 GB, and 1 TB of internal storage options.
The 128 GB versions of the Galaxy S23 and Galaxy S23 FE use the older UFS 3.1 storage format, while versions with 256 GB and more use the newer, faster and more efficient UFS 4.0.
Batteries
The Galaxy S23, S23+, S23 Ultra, and S23 FE contain non-removable 3,900 mAh, 4,700 mAh, 5,000 mAh, and 4,500 mAh Li-ion batteries respectively. The S23 and S23 FE support wired charging over USB-C at up to 25W (using USB Power Delivery), while the S23+ and S23 Ultra have faster 45W charging, branded by Samsung as "Super Fast Charging 2.0". All three have Qi inductive charging up to 15W. The phones also have the ability to charge other Qi-compatible devices from the S23's own battery power, which is branded as "Wireless PowerShare," at up to 4.5W.
Software
The Samsung Galaxy S23 phones were released with Android 13 with Samsung's One UI 5.1 software. Samsung Knox is included for enhanced device security, and a separate version exists for enterprise use. Samsung has promised 4 years of major Android OS updates and 1 additional year of security updates for a total of 5 years worth of updates.
Controversies
On 10 March, a Reddit user made a post accusing Samsung of using artificial intelligence to digitally enhance photos of the moon with its 'Scene Optimizer' feature, which was introduced with the Samsung Galaxy S10.
In response to the controversy, Samsung published an article explaining that it used a variety of processes such as "Scene Optimizer", "AI Deep Learning" and "Super Resolution" to apply multi-frame processing to enhance images of the moon.
Gallery
See also
List of longest smartphone telephoto lenses
References
External links
Samsung Galaxy S23 & S23+ – official website
Samsung Galaxy S23 Ultra – official website
Samsung Galaxy S23 FE – official website
Android (operating system) devices
Samsung Galaxy
Flagship smartphones
Samsung smartphones
Mobile phones with multiple rear cameras
Mobile phones with 4K video recording
Mobile phones with 8K video recording
Mobile phones introduced in 2023
Discontinued flagship smartphones
Discontinued Samsung Galaxy smartphones | Samsung Galaxy S23 | [
"Technology"
] | 1,370 | [
"Discontinued flagship smartphones",
"Flagship smartphones"
] |
72,900,948 | https://en.wikipedia.org/wiki/Lanthanum%20laurate | Lanthanum laurate is an metal-organic compound with the chemical formula . The compound is classified as a metallic soap, i.e. a metal derivative of a fatty acid (lauric acid).
References
Laurates
Lanthanum compounds | Lanthanum laurate | [
"Chemistry"
] | 51 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
72,901,566 | https://en.wikipedia.org/wiki/TIBER | TIBER (Threat Intelligence Based Ethical Red Teaming) is a standard developed by the European Central Bank for Red Teaming. It can be adopted by member states of the European Union.
See also
ENISA
References
External links
European Central Bank
Computer security standards | TIBER | [
"Technology",
"Engineering"
] | 50 | [
"Computer security stubs",
"Cybersecurity engineering",
"Computer security standards",
"Computer standards",
"Computing stubs"
] |
72,903,452 | https://en.wikipedia.org/wiki/Tarbais%20bean | The Tarbais bean is a local variety of the bean Phaseolus vulgaris from the south-west of France. It is a product of terroir, whose production area lies mainly within the Hautes-Pyrénées, but also in some municipalities of Gers, Haute-Garonne and Pyrénées-Atlantiques. Recognised by the Label Rouge since 1997 and the PGI since 2000, its characteristics are guaranteed by specific scope statements.
History
Originally from the New World, the bean crossed the Pyrenees and established itself in the plain of Tarbes at the beginning of the 18th century. After declining in the 1950s, Tarbais bean cultivation was rejuvenated in 1986.
Formerly grown on the stalks of maize, the Tarbais bean is now produced on plastic nets, although many growers continue to grow it in association with maize.
Cultivars
In the 1990s, 24 seed lines were selected by the INRA from among all the cultivars grown on farms in the department. Only one, the 'Alaric' seed, is available to the public and enables producers to obtain the Label Rouge.
Production area
The PGI delimits a production area centred on the vast northern part of the Hautes-Pyrénées department, extending to its borders. Also involved are certain cantons in Gers, Pyrénées-Atlantiques and Haute-Garonne, which border the main region, directly or indirectly.
Main stages of production
The ground is prepared in the spring for the sowing, which is carried out in the middle of spring. About a month later, as the plant develops, the staking on the net can gradually take place. It is therefore useful to aerate the soil by hilling and hoeing in order to compensate for the repeated passage of agricultural machinery. Throughout the growing season, depending on observation and the results of analysis, the producer may intervene in a number of ways: fertilization, irrigation, pest control, and so on. Harvesting is exclusively manual and carried out in several stages, the fresh beans being picked in pods from the end of August to the beginning of September and the dry beans being picked from the plant from 20 September to mid-November. Then come, depending on the need, the steps related to drying and shelling. For better quality, progressive sorting takes place before packaging.
To eliminate beans parasitised by insects such as the bean weevil, seeds intended for sowing can be stored in the freezer at −35 °C for 24 to 48 hours. In order not to cause thermal shock, they are then left for the same period in the refrigerator between 0 and 4 °C, after which they can be stored between 10 and 20 °C until sowing.
Characteristics
The Tarbais bean is a large white bean with a very thin, stringy skin. It is also characterised by soft, melting flesh and a delicate, non-starchy texture.
Gastronomy
See also
Common Bean
Hautes-Pyrenees
Bigorre
Tarbes
Gastronomy
Terroir
Notes and references
External links
Site of the Tarbais Bean Cooperative
Tarbais Haricot Web TV
Tarbais
Hautes-Pyrénées
Gastronomy in France
Certification marks | Tarbais bean | [
"Mathematics"
] | 644 | [
"Symbols",
"Certification marks"
] |
72,905,077 | https://en.wikipedia.org/wiki/Anion%20exchange%20membrane%20electrolysis | Anion exchange membrane (AEM) electrolysis is the electrolysis of water that utilises a semipermeable membrane that conducts hydroxide ions (OH−) called an anion exchange membrane. Like a proton-exchange membrane (PEM), the membrane separates the products, provides electrical insulation between electrodes, and conducts ions. Unlike PEM, AEM conducts hydroxide ions. The major advantage of AEM water electrolysis is that a high-cost noble metal catalyst is not required, low-cost transition metal catalyst can be used instead. AEM electrolysis is similar to alkaline water electrolysis, which uses a non-ion-selective separator instead of an anion-exchange membrane.
Advantages and Challenges
Advantages
Of all water electrolysis methods, AEM electrolysis can combine the advantages of alkaline water electrolysis (AWE) and PEM electrolysis.
Polymer electrolyte membrane electrolysis uses expensive platinum-group metals (PGMs) such as platinum, iridium, and ruthenium as catalysts. Iridium, for instance, is scarcer than platinum; a 100 MW PEM electrolyser is expected to require 150 kg of iridium, at an estimated cost of 7 million USD. Like alkaline water electrolysis, electrodes in AEM electrolysis operate in an alkaline environment, which allows non-noble, low-cost catalysts based on Ni, Fe, Co, Mn, Cu, etc. to be used.
An AEM electrolyser can run on pure water or slightly alkaline solutions (0.1–1 M KOH/NaOH), unlike the highly concentrated alkaline solutions (5 M KOH/NaOH) used in AWE. This reduces the risk of leakage. Using an alkaline solution, usually KOH/NaOH, increases membrane conductivity and adds a hydroxide-ion conductive pathway, which increases the utilisation of the catalyst. An AEM electrolyser without a PGM catalyst operating at 1 A/cm2 was reported to require 1.8 V when fed with pure water and 1.57 V when fed with 1 M KOH. Electrolyte can be fed on both the anode and cathode sides or on the anode side only.
In the zero-gap design of AWE, the electrodes are separated only by a diaphragm which separates the gases. The diaphragm only allows water and hydroxide ions to pass through, but does not completely eliminate gas cross-over. Oxygen gas can enter the hydrogen half-cell and react on the cathode side to form water, which reduces the efficiency of the cell. Gas cross-over from the H2 to the O2 evolution side can pose a safety hazard because it can create an explosive gas mixture at >4 mol% H2. An AEM electrolyser was reported to maintain H2 crossover below 0.4% over 5,000 h of operation.
AEMs based on an aromatic polymer backbone are promising due to their potential for significant cost reduction. The production of the Nafion membranes used in PEM requires highly toxic chemicals, which increases the cost (>1,000 $/m2), and fluorocarbon gas is produced at the production stage of tetrafluoroethylene, which has a strong environmental impact. Fluorinated raw materials are not essential for AEMs, allowing a wider selection of low-cost polymer chemistries.
Challenges
AEM electrolysis is still in the early research and development stage, while alkaline water electrolysis is in the mature stage and PEM electrolysis is in the commercial stage. There is less academic literature on pure-water-fed AEM electrolysers than on those fed with KOH solution.
The major technical challenge facing a consumer-level AEM electrolyser is the low durability of the membrane, which refers to the short device lifetime or longevity. The lifetimes of PEM electrolyser stacks range from 20,000 h to 80,000 h. Literature surveys have found that demonstrated AEM electrolyser durability is >2,000 h, >12,000 h, and >700 h for pure-water-fed (Pt-group catalyst on anode and cathode), concentrated-KOH-fed, and 1 wt% K2CO3-fed operation, respectively.
To overcome the obstacles to large-scale usage of AEMs, increasing ionic conductivity and durability is essential. Many AEMs break down at temperatures higher than 60 °C; AEMs that can tolerate the presence of O2, high pH, and temperatures exceeding 60 °C are needed.
Science
Reactions
The oxygen evolution reaction (OER) needs four electrons to produce one molecule of O2, consumes multiple OH− anions, and forms multiple adsorbed intermediates on the surface of the catalyst. This multi-step reaction creates a high energy barrier and thus a high overpotential, which causes the OER to be sluggish. The performance of the AEM electrolyser largely depends on the OER. The overpotential of the OER can be reduced with an efficient catalyst that breaks the reaction's intermediate bonds.
Hydrogen evolution reaction (HER) kinetics in alkaline solutions are slower than in acidic solutions because of the additional water dissociation step needed to form the adsorbed hydrogen intermediate (H*), a step that is not present in acidic conditions.
Anode reaction
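A commonly assumed four-electron pathway in alkaline media, written here in a generic textbook form rather than as a mechanism specific to any particular catalyst, is:

    * + OH− → OH* + e−
    OH* + OH− → O* + H2O + e−
    O* + OH− → OOH* + e−
    OOH* + OH− → O2 + H2O + * + e−
    Overall: 4 OH− → O2 + 2 H2O + 4 e−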
where * indicates a species adsorbed on the surface of the catalyst.
Cathode reaction
The reaction starts with water adsorption and dissociation in the Volmer step, followed by hydrogen desorption in either the Tafel step or the Heyrovsky step.
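In alkaline media, the corresponding elementary steps and the overall cathode half-reaction are conventionally written as (again in a generic textbook form, with * denoting an adsorption site):

    Volmer:    H2O + e− + * → H* + OH−
    Tafel:     2 H* → H2 + 2 *
    Heyrovsky: H* + H2O + e− → H2 + OH− + *
    Overall:   2 H2O + 2 e− → H2 + 2 OH−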
Anion exchange membrane
The hydroxide ion intrinsically has lower mobility than H+; increasing the ion exchange capacity can compensate for this lower mobility, but also increases swelling and reduces membrane mechanical stability. Cross-linking membranes can compensate for membrane mechanical instability. The quaternary ammonium (QA) head group is the one most commonly attached to polymer matrices in AEMs. The head group allows anions but not cations to be transported. QA AEMs have low chemical stability because they are susceptible to OH− attack. Promising head group candidates include imidazolium-based head groups and nitrogen-free head groups such as phosphonium, sulphonium, and ligand–metal complexes. Most QA and imidazolium groups degrade in alkaline environments by Hofmann degradation, SN2 reaction, or ring-opening reaction, especially at high temperatures and pH.
Polymeric AEM backbones are cationic-free base polymers. Poly(arylene ether)-based backbones, polyolefin-based backbones, polyphenylene-based backbones, and backbones containing cationic moieties are some examples.
Some of the best-performing AEMs are HTMA-DAPP, QPC-TMA, m-PBI, and PFTP.
Membrane electrode assembly
A membrane electrode assembly (MEA) is made of an anode and cathode catalyst layer with a membrane layer in between. The catalyst layer can be deposited on the membrane or the substrate. Catalyst-coated substrate (CCS) and catalyst-coated membrane (CCM) are two approaches to preparing MEA. A substrate must conduct electricity, support the catalyst mechanically, and remove gaseous products.
Nickel is typically used as a substrate for AEM, while titanium is for PEM; both nickel and titanium can be used on AEM. Carbon materials are not suitable for the anode side because of their degradation by HO− ions, which are nucleophiles. On the cathode, nickel, titanium, and carbon can be readily used. The catalyst layer is typically made by mixing catalyst powder and ionomer to produce an ink or slurry that is applied by spraying or painting.
Other methods include electrodeposition, magnetron sputtering, chemical electroless plating, and screen printing onto the substrate.
Ionomers act as a binder for the catalyst, substrate support, and membrane, which also provide OH− conducting ions and increase electrocatalytic activities.
See also
Electrochemistry
Electrochemical engineering
Electrolysis
Hydrogen production
Photocatalytic water splitting
Timeline of hydrogen technologies
Electrolysis of water
PEM fuel cell
proton-exchange membrane
Hydrogen economy
High-pressure electrolysis
References
Electrolysis
Hydrogen economy
Hydrogen production
Electrolytic cells | Anion exchange membrane electrolysis | [
"Chemistry"
] | 1,735 | [
"Electrochemistry",
"Electrolysis"
] |
69,800,355 | https://en.wikipedia.org/wiki/17%20Comae%20Berenices | 17 Comae Berenices (17 Com) is a multiple star system in the northern constellation of Coma Berenices. The brighter component, 17 Com A, is a naked eye star with an apparent visual magnitude of 5.2. It has a faint companion of magnitude 6.6, 17 Com B, positioned at an angular separation of along a position angle of 251°, as of 2018. They are located at a distance of approximately 240 light years from the Sun based on parallax measurements.
The double nature of this system was documented by F. G. W. Struve in 1836. The pair share a common proper motion through space and thus may be associated. Component B is itself a binary star system, although only the brighter component is visible in the spectrum. The Washington Double Star Catalogue lists the companion as component C, with a magnitude of 13.7 and a separation of . 17 Com has been recognized as a member of the Coma Star Cluster, but this is disputed.
The star 17 Com A was classified as chemically peculiar by A. J. Cannon prior to 1918. W. W. Morgan in 1932 found the star's spectral lines varied in strength and appearance, and detected lines of the element europium. H. W. Babcock and T. G. Cowling measured the Zeeman effect in this star, demonstrating in 1953 that it has a magnetic field. In 1967, E. P. J. van den Heuvel noted the blue excess of this star, suggesting it is a blue straggler. G. W. Preston and associates in 1969 found that the luminosity and magnetic field of this star varied in strength with a time scale of around five days.
17 Com A is a magnetic chemically peculiar Ap star with a stellar classification of A0p or A0 SrCrEu, with the latter indicating the spectrum shows abundance anomalies of the elements strontium, chromium, and europium. The level of silicon in the atmosphere is also enhanced and it shows a significant helium deficiency. It has the variable star designation of AI Com, and is classified as an Alpha2 Canum Venaticorum variable and a suspected Delta Scuti variable. It has been identified as a suspected blue straggler.
The primary has an estimated age of 101 million years and is spinning with a projected rotational velocity of 20 km/s. It has more than double the mass and twice the radius of the Sun. The magnetic field strength is . It is radiating 43 times the luminosity of the Sun from its photosphere at an effective temperature of around 10,000 K.
The co-moving companion, component B, is a single-lined spectroscopic binary with an orbital period of 68.3 days and an eccentricity (ovalness) of 0.3. The visible member of this binary pair is a strong Am star with a class of kA2hA9VmF0, indicating it has the Calcium K-lines of an A0 star, the hydrogen lines of an A9 star, and the metallic lines of an F0 star.
References
External links
A-type main-sequence stars
Ap stars
Am stars
Alpha2 Canum Venaticorum variables
Delta Scuti variables
Spectroscopic binaries
Triple stars
Coma Berenices
4752
+26 2353
Com, 17
108662
060904
Comae Berenices, AI | 17 Comae Berenices | [
"Astronomy"
] | 697 | [
"Coma Berenices",
"Constellations"
] |
69,800,853 | https://en.wikipedia.org/wiki/Pacific%20Coast%20Electric%20Transmission%20Association | The Pacific Coast Electric Transmission Association was an American engineering institute founded in 1884 in response to the East Coast establishment of the American Institute of Electrical Engineers. It published its proceedings in the journalist George P. Low's journal The Electrical Journal, later titled The Journal of Electricity and then The Journal of Electricity, Power, and Gas, and began annual meetings in 1898. The annual meeting acted as both an electrical industry conference and an academic conference in electrical engineering. It disbanded with the extension of the AIEE to the West Coast in or shortly after 1905.
References
Engineering societies based in the United States
Professional associations based in the United States
Electrification
History of electrical engineering | Pacific Coast Electric Transmission Association | [
"Engineering"
] | 131 | [
"Electrical engineering",
"History of electrical engineering"
] |
69,801,221 | https://en.wikipedia.org/wiki/HD%2073882 | HD 73882 is a visual binary system with the components separated by and a combined spectral class of O8. One of the stars is an eclipsing binary system. The period of variability is listed as both 2.9199 days and 20.6 days, possibly due to the secondary being a spectroscopic binary star.
The system lies in the constellation of Vela about away from the Sun and is a member of the open cluster Ruprecht 64.
Components
The apparent magnitudes of the visible components A and B are 7.8 and 8.8 respectively. The primary, A, is thought to be the eclipsing binary. It shows eclipses every , but there are thought to be both primary and secondary minima with the actual orbital period being . Additional radial velocity variations with a period of have also been found, suggesting that one of the components is a spectroscopic binary.
The spectral types of the individual components are not known. The observed combined spectral type is variously given as O8.5V, O9III, or O8.5IV. The spectrum is presumed to be dominated by the primary pair which are more than a magnitude brighter than the secondary. The eclipsing components are likely to be two similar stars since the primary and secondary eclipses are almost identical. One source gives the combined mass of the eclipsing pair as and the mass of the secondary as , with an orbital period of about 643 years, but this is highly speculative with no reliable orbits available and even the number of components uncertain.
Circumstellar nebula
The star system, located behind the Vela Supernova Remnant, is obscured by the translucent nebula , located near the Vela Molecular Ridge nebulae complex. The nebula is illuminated by this star system and probably has a close physical association with it, together with the brighter reflection nebula NGC 2626. The nebulae are rich in hydrogen (including deuterated hydrogen) and also contain detectable amounts of sodium, carbon monoxide, and other carbon compounds, including polycyclic aromatic hydrocarbons, and thiols. The nebula associated with HD 73882 is one of the few exhibiting emission from compounds containing three carbon atoms. The nebula has unusually low levels of oxygen compared to the average interstellar medium.
References
Vela (constellation)
J08390953-4025092
CD-39 4631
042433
073882
O-type subgiants
HD 73882
Velorum, NX | HD 73882 | [
"Astronomy"
] | 514 | [
"Vela (constellation)",
"Constellations"
] |
69,801,895 | https://en.wikipedia.org/wiki/Weak%20component | In graph theory, the weak components of a directed graph partition the vertices of the graph into subsets that are totally ordered by reachability. They form the finest partition of the set of vertices that is totally ordered in this way.
Definition
The weak components were defined in a 1972 paper by Ronald Graham, Donald Knuth, and (posthumously) Theodore Motzkin, by analogy to the strongly connected components of a directed graph, which form the finest possible partition of the graph's vertices into subsets that are partially ordered by reachability. Instead, they defined the weak components to be the finest partition of the vertices into subsets that are totally ordered by reachability.
In more detail, the weak components may be defined through a combination of four symmetric relations on the vertices of any directed graph, described here in turn.
For any two vertices u and v of the graph, the first relation holds if and only if each vertex is reachable from the other: there exist paths in the graph from u to v and from v to u. This relation is an equivalence relation, and its equivalence classes are used to define the strongly connected components of the graph.
For any two vertices u and v of the graph, the second relation holds if and only if neither vertex is reachable from the other: there do not exist paths in the graph in either direction between u and v.
For any two vertices u and v of the graph, the third relation holds if and only if one of the first two relations holds. That is, there may be a two-way connection between these vertices, or they may be mutually unreachable, but they may not have a one-way connection.
The fourth relation is defined as the transitive closure of the third. That is, it relates u and v when there is a sequence of vertices, starting with u and ending with v, such that each consecutive pair in the sequence is related by the third relation.
This fourth relation is an equivalence relation: every vertex is related to itself (because it can reach itself in both directions by paths of length zero), any two related vertices can be swapped for each other without changing the relation (because it is built out of symmetric relations), and it is a transitive relation (because it is a transitive closure). As with any equivalence relation, it can be used to partition the vertices of the graph into equivalence classes, subsets of the vertices such that two vertices are related if and only if they belong to the same equivalence class. These equivalence classes are the weak components of the given graph.
The original definition by Graham, Knuth, and Motzkin is equivalent but formulated somewhat differently. Given a directed graph, they first construct another graph as the complement graph of its transitive closure. The edges of this complement represent pairs of vertices that are not connected by a path in the given graph. Then, two vertices belong to the same weak component when either they belong to the same strongly connected component or they are related in this complement graph. As Graham, Knuth, and Motzkin show, this condition defines an equivalence relation, the same one defined above.
Corresponding to these definitions, a directed graph is called weakly connected if it has exactly one weak component. This means that its vertices cannot be partitioned into two subsets, such that all of the vertices in the first subset can reach all of the vertices in the second subset, but such that none of the vertices in the second subset can reach any of the vertices in the first subset. It differs from other notions of weak connectivity in the literature, such as connectivity and components in the underlying undirected graph, for which Knuth suggests alternative terminology.
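The construction above translates almost directly into code. The following sketch (with illustrative function and variable names) is a simple reference implementation of the definition whose running time is quadratic rather than linear, not the linear-time algorithm discussed in the Algorithms section below: it groups two vertices whenever they are mutually reachable or mutually unreachable, and takes the transitive closure of that relation with a union–find structure.

    from itertools import product

    def weak_components(vertices, edges):
        # Adjacency list; every edge must join two listed vertices.
        adj = {v: [] for v in vertices}
        for u, v in edges:
            adj[u].append(v)

        # Set of vertices reachable from a given start vertex (including itself).
        def reachable(start):
            seen = {start}
            stack = [start]
            while stack:
                x = stack.pop()
                for y in adj[x]:
                    if y not in seen:
                        seen.add(y)
                        stack.append(y)
            return seen

        reach = {v: reachable(v) for v in vertices}

        # Union-find: merge vertices related by
        # "mutually reachable or mutually unreachable".
        parent = {v: v for v in vertices}

        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v

        for u, v in product(vertices, repeat=2):
            if (v in reach[u]) == (u in reach[v]):   # both directions, or neither
                parent[find(u)] = find(v)

        # Group vertices by their union-find representative.
        groups = {}
        for v in vertices:
            groups.setdefault(find(v), set()).add(v)
        return list(groups.values())

    # Example: the path a -> b -> c has three weak components, one per vertex,
    # totally ordered by reachability; adding an isolated vertex d would merge
    # everything into a single weak component.
    print(weak_components(["a", "b", "c"], [("a", "b"), ("b", "c")]))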
Properties
If X and Y are two weak components of a directed graph, then either all vertices in X can reach all vertices in Y by paths in the graph, or all vertices in Y can reach all vertices in X. However, there cannot exist reachability relations in both directions between these two components. Therefore, we can define an ordering on the weak components, according to which one component precedes another when all of the vertices of the first can reach all of the vertices of the second. This ordering is an asymmetric relation (two elements can only be related in one direction, not the other) and it inherits the property of being a transitive relation from the transitivity of reachability. Therefore, it defines a total ordering on the weak components. It is the finest possible partition of the vertices into a totally ordered sequence of subsets consistent with reachability.
This ordering on the weak components can alternatively be interpreted as a weak ordering on the vertices themselves, with the property that when one vertex precedes another in the weak ordering, there necessarily exists a path between them in one direction but not in the other. However, this is not a complete characterization of this weak ordering, because two vertices could have this same one-way reachability while belonging to the same weak component as each other.
Every weak component is a union of strongly connected components. If the strongly connected components of any given graph are contracted to single vertices, producing a directed acyclic graph (the condensation of the given graph), and then this condensation is topologically sorted, then each weak component necessarily appears as a consecutive subsequence of the topological order of the strong components.
Algorithms
An algorithm for computing the weak components of a given directed graph in linear time was described shortly after the original 1972 paper, and subsequently simplified by later authors. As Tarjan observes, Tarjan's strongly connected components algorithm based on depth-first search will output the strongly connected components in (the reverse of) a topologically sorted order. The algorithm for weak components generates the strongly connected components in this order, and maintains a partition of the components that have been generated so far into the weak components of their induced subgraph. After all components are generated, this partition will describe the weak components of the whole graph.
It is convenient to maintain the current partition into weak components in a stack, with each weak component maintaining additionally a list of its sources, the strongly connected components that have no incoming edges from other strongly connected components in the same weak component, with the most recently generated source first. Each newly generated strongly connected component may form a new weak component on its own, or may end up merged with some of the previously constructed weak components near the top of the stack, the ones for which it cannot reach all of their sources.
Thus, the algorithm performs the following steps:
Initialize an empty stack of weak components, each associated with a list of its source components.
Use Tarjan's strongly connected components algorithm to generate the strongly connected components of the given graph in the reverse of a topological order. When each strongly connected component is generated, perform the following steps with it:
While the stack is non-empty and the newly generated component has no edges to the top weak component of the stack, pop that component from the stack.
If the stack is still non-empty, and some sources of its top weak component are not hit by edges from the newly generated component, again pop that component from the stack.
Construct a new weak component, containing the newly generated strongly connected component as a source together with all of the unhit sources from the top component that was popped (if any), and push it onto the stack.
Each test for whether any edges from the newly generated component hit a weak component can be performed in constant time once we find an edge from the new component to the most recently generated earlier strongly connected component, by comparing the target component of that edge to the first source of the second-to-top component on the stack.
References
External links
Graph connectivity
Graph theory objects | Weak component | [
"Mathematics"
] | 1,389 | [
"Mathematical relations",
"Graph theory",
"Graph connectivity",
"Graph theory objects"
] |
69,802,577 | https://en.wikipedia.org/wiki/Research%20Institutes%20for%20Experimental%20Medicine | The Research Institutes for Experimental Medicine are a research facility in Berlin, Germany. This building is commonly known as Mouse Bunker, or Mäusebunker in German. Until 2003, its official name was Central Animal Laboratories of the Free University of Berlin.
Planning and Construction
This facility was built for the purpose of live animal testing. Animals for experimentation were bred in the facility too, to provide sterile conditions and a maximum amount of control. It is located in close proximity to the Benjamin Franklin Medical Center and the Institute for Hygiene and Microbiology. These three facilities were erected to provide a high performing infrastructure for the development and application of new medicine and vaccines.
The Mouse Bunker's original designers are husband and wife Gerd Hänska and Magdalena Hänska. They started the design between 1965 and 1967. Construction began in 1971. By that time, Gerd and Magdalena Hänska were working separately and only Gerd Hänska continued planning the Mouse Bunker. Detailed planning and construction was done by Gerd Hänska and Kurt Schmersow between 1971 and 1981.
Reception
The building has been viewed as controversial from its very beginning. The sinister design and the use for animal testing were not popular with the general public. Over the years it has become more and more popular among friends of brutalist architecture. Numerous international publications have featured Mouse Bunker as a prominent example of brutalist architecture in Germany.
Striking features are the building's pyramid-like shape and prominent blue ventilation pipes. The outer shell is made of pre-fabricated concrete panels. Triangular bay windows are placed on the long facades. The nickname Mouse Bunker emerged in reference to the building's overall defensiveness, featuring slanted outer walls and a solid concrete shell.
After the current owner Charité announced plans for demolition, public awareness has increased even more. A petition to save the building was initiated by architect Gunnar Klack and art historian Felix Torkar in 2020. The discussion about the future of the Mouse Bunker was featured in international publications. The Mouse Bunker has been referenced in connection to other international examples of brutalist buildings and their respective future perspectives – for example the Tel Aviv central bus station in Israel or the Vilnius Palace of Concerts and Sports in Lithuania.
Filmmaker Nathan Eddy shot the documentary Battleship Berlin, which deals with the Research Institutes for Experimental Medicine and the neighboring Institute for Hygiene and Microbiology. After a years-long debate, the building was finally listed as a cultural heritage site in May 2023.
External links
References
Buildings and structures in Berlin
Brutalist architecture in Germany
Buildings and structures completed in 1981
Animal testing
Free University of Berlin | Research Institutes for Experimental Medicine | [
"Chemistry"
] | 514 | [
"Animal testing"
] |
69,803,700 | https://en.wikipedia.org/wiki/Phonon%20polariton | In condensed matter physics, a phonon polariton is a type of quasiparticle that can form in a diatomic ionic crystal due to coupling of transverse optical phonons and photons. They are a particular type of polariton, which behave like bosons. Phonon polaritons occur in the region where the wavelength and energy of phonons and photons are similar, so as to adhere to the avoided crossing principle.
Phonon polariton spectra have traditionally been studied using Raman spectroscopy. Recent advances in (scattering-type) scanning near-field optical microscopy ((s-)SNOM) and atomic force microscopy (AFM) have made it possible to observe the polaritons in a more direct way.
Theory
A phonon polariton is a type of quasiparticle that can form in some crystals due to the coupling of photons and lattice vibrations. They have properties of both light and sound waves, and can travel at very slow speeds in the material. They are useful for manipulating electromagnetic fields at the nanoscale and enhancing optical phenomena. Phonon polaritons only result from the coupling of transverse optical phonons; this is due to the particular form of the dispersion relations of the phonon and the photon and their interaction. Photons consist of electromagnetic waves, which are always transverse. Therefore, they can only couple with transverse phonons in crystals.
Near k = 0, the dispersion relation of an acoustic phonon can be approximated as linear, with a particular gradient, giving a dispersion relation of the form ω = v·k, with v the speed of the wave, ω the angular frequency and k the absolute value of the wave vector. The dispersion relation of photons is also linear, of the form ω = c·k, with c being the speed of light in vacuum. The difference lies in the magnitudes of their speeds; the speed of photons is many times larger than the speed of the acoustic phonons. The dispersion relations will therefore never cross each other, resulting in a lack of coupling. The dispersion relations touch at k = 0, but since the waves there have no energy, no coupling will occur.
Optical phonons, by contrast, have a non-zero angular frequency at k = 0 and a negative slope, which is also much smaller in magnitude than that of photons. This will result in the crossing of the optical phonon branch and the photon dispersion, leading to their coupling and the forming of a phonon polariton.
Dispersion relation
The behavior of the phonon polaritons can be described by the dispersion relation. This dispersion relation is most easily derived for diatomic ionic crystals with optical isotropy, for example sodium chloride and zinc sulfide. Since the atoms in the crystal are charged, any lattice vibration which changes the relative distance between the two atoms in the unit cell will change the dielectric polarization of the material. To describe these vibrations, it is useful to introduce the parameter w, which is given by:
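In the standard Born–Huang treatment, and in terms of the quantities listed below, the parameter takes the form

    w = \sqrt{\mu / V}\; \mathbf{u}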
Where
u is the displacement of the positive atom relative to the negative atom;
μ is the reduced mass of the two atoms;
V is the volume of the unit cell.
Using this parameter, the behavior of the lattice vibrations for long waves can be described by the following equations:
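One standard form of these coupled equations (in SI units, with ε₀ the vacuum permittivity and the remaining constants as defined below) is

    \ddot{\mathbf{w}} = -\omega_T^2\,\mathbf{w} + \omega_T\sqrt{\varepsilon_0\,(\varepsilon_{st} - \varepsilon_\infty)}\;\mathbf{E}
    \mathbf{P} = \omega_T\sqrt{\varepsilon_0\,(\varepsilon_{st} - \varepsilon_\infty)}\;\mathbf{w} + \varepsilon_0\,(\varepsilon_\infty - 1)\,\mathbf{E}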
Where
ẅ denotes the double time derivative of w;
ε_st is the static dielectric constant;
ε_∞ is the high-frequency dielectric constant;
ω_T is the infrared dispersion frequency;
E is the electric field;
P is the dielectric polarization.
For the full coupling between the phonon and the photon, we need the four Maxwell's equations in matter. Since, macroscopically, the crystal is uncharged and there is no current, the equations can be simplified. A phonon polariton must abide by all of these six equations. To find solutions to this set of equations, we write the following trial plane wave solutions for w, E and P:
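Explicitly, the trial solutions take the usual harmonic plane-wave form

    \mathbf{w} = \mathbf{w}_0\, e^{i(\mathbf{k}\cdot\mathbf{x} - \omega t)}, \qquad \mathbf{E} = \mathbf{E}_0\, e^{i(\mathbf{k}\cdot\mathbf{x} - \omega t)}, \qquad \mathbf{P} = \mathbf{P}_0\, e^{i(\mathbf{k}\cdot\mathbf{x} - \omega t)}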
Where k denotes the wave vector of the plane wave, x the position, t the time, and ω the angular frequency. Notice that the wave vector should be perpendicular to the electric field and the magnetic field. Solving the resulting equations for ω and k, the magnitude of the wave vector, yields the following dispersion relation, and furthermore an expression for the optical dielectric constant:
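In a form consistent with the constants introduced above, the result can be written as

    \frac{c^2 k^2}{\omega^2} = \varepsilon(\omega), \qquad \varepsilon(\omega) = \varepsilon_\infty + (\varepsilon_{st} - \varepsilon_\infty)\,\frac{\omega_T^2}{\omega_T^2 - \omega^2}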
With ε(ω) the optical dielectric constant.
The solution of this dispersion relation has two branches, an upper branch and a lower branch (see also the figure). If the slope of the curve is low, the particle is said to behave "phononlike", and if the slope is high the particle behaves "photonlike", owing these names to the slopes of the regular dispersion curves for phonons and photons. The phonon polariton behaves phononlike for low k in the upper branch, and for high k in the lower branch. Conversely, the polariton behaves photonlike for high k in the upper branch, low k in the lower branch.
Limit behaviour of the dispersion relation
The dispersion relation describes the behaviour of the coupling. The coupling of the phonon and the photon is most prominent in the region where the original transverse dispersion relations would have crossed. In the limit of large k, the solid lines of both branches approach the dotted lines, meaning that the coupling does not have a large impact on the behaviour of the vibrations.
Towards the right of the crossing point, the upper branch behaves like a photon. The physical interpretation of this effect is that the frequency becomes too high for the ions to partake in the vibration, causing them to be essentially static. This results in a dispersion relation resembling one of a regular photon in a crystal. The lower branch in this region behaves, because of their low phase velocity compared to the photons, as regular transverse lattice vibrations.
Lyddane–Sachs–Teller relation
The longitudinal optical phonon frequency is defined by the zero of the equation for the dielectric constant. Writing the equation for the dielectric constant in a different way yields:
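One common rewriting, in which the longitudinal optical frequency ω_L is defined as the frequency at which ε(ω) vanishes, is

    \varepsilon(\omega) = \varepsilon_\infty\,\frac{\omega_L^2 - \omega^2}{\omega_T^2 - \omega^2}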
Solving the equation yields:
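With the notation used above, the result is the relation

    \frac{\omega_L^2}{\omega_T^2} = \frac{\varepsilon_{st}}{\varepsilon_\infty}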
This equation gives the ratio of the frequency of the longitudinal optical phonon (ω_L) to the frequency of the transverse optical phonon (ω_T) in diatomic cubic ionic crystals, and is known as the Lyddane–Sachs–Teller relation. The ratio can be found using inelastic neutron scattering experiments.
Surface phonon polariton
Surface phonon polaritons (SPhPs) are a specific kind of phonon polariton. They are formed by the coupling of optical surface phonons, instead of normal phonons, with light, resulting in an electromagnetic surface wave. They are similar to surface plasmon polaritons, although studied to a far lesser extent. The applications are far-ranging, from materials with a negative index of refraction to high-density IR data storage.
One other application is in the cooling of microelectronics. Phonons are the main source of heat conduction in materials, with optical phonons contributing far less than acoustic phonons. This is because of the relatively low group velocity of optical phonons. When the thickness of the material decreases, the conductivity via acoustic phonons also decreases, since surface scattering increases. As microelectronics get smaller and smaller, this reduction becomes more problematic. Although optical phonons themselves do not have a high thermal conductivity, SPhPs do seem to have one, so they may provide an alternative means of cooling these electronic devices.
Experimental observation
Most observations of phonon polaritons are made of surface phonon polaritons, since these can be easily probed by Raman spectroscopy or AFM.
Raman spectroscopy
As with any Raman experiment, a laser is pointed at the material being studied. If the correct wavelength is chosen, this laser can induce the formation of a polariton on the sample. Looking at the Stokes shifted emitted radiation and by using the conservation of energy and the known laser energy, one can calculate the polariton energy, with which one can construct the dispersion relation.
SNOM and AFM
The induction of polaritons is very similar to that in Raman experiments, with a few differences. With the extremely high spatial resolution of SNOM, one can induce polaritons very locally in the sample. This can be done continuously, producing a continuous wave (CW) of polaritons, or with an ultrafast pulse, producing a polariton with a very high temporal footprint. In both cases the polaritons are detected by the tip of the AFM; this signal is then used to calculate the energy of the polariton. One can also perform these experiments near the edge of the sample, which will result in the polaritons being reflected. In the case of CW polaritons, standing waves will be created, which will again be detected by the AFM tip. In the case of the polaritons created by the ultrafast laser, no standing wave will be created. The wave can still interfere with itself the moment it is reflected off the edge. Whether one is observing on the bulk surface or close to an edge, the signal is in temporal form. One can Fourier transform this signal, converting it into the frequency domain, which can be used to obtain the dispersion relation.
Polaritonics and real-space imaging
Phonon polaritons also find use in the field of polaritonics, a field between photonics and electronics. In this field phonon polaritons are used for high speed signal processing and terahertz spectroscopy. The real-space imaging of phonon polaritons was made possible by projecting them onto a CCD camera.
See also
Polariton
Phonon
Surface plasmon polariton
References
Quasiparticles | Phonon polariton | [
"Physics",
"Materials_science"
] | 2,041 | [
"Quasiparticles",
"Subatomic particles",
"Condensed matter physics",
"Matter"
] |
69,803,965 | https://en.wikipedia.org/wiki/Electroreflectance | Electroreflectance (also: electromodulated reflectance) is the change of reflectivity of a solid due to the influence of an electric field close to, or at the interface of the solid with a liquid. The change in reflectivity is most noticeable at very specific ranges of photon energy, corresponding to the band gaps at critical points of the Brillouin zone.
The electroreflectance effect can be used to get a clearer picture of the band structure at critical points where there is a lot of near degeneracy. Normally, the band structure at critical points (points of special interest) has to be measured within a background of absorption from non-critical points at the Brillouin zone boundary. Using a strong electric field, the absorption spectrum can be changed to a spectrum that shows peaks at these critical points, essentially lifting the critical points from the background.
The effect was first discovered and understood in semiconductor materials, but later research proved that metals also exhibit electroreflectance. An early observation of the changing optical reflectivity of gold due to a present electric field was attributed to a change in refractive index of the neighboring liquid. However, it was shown that this could not be the case. The new conclusion was that the effect had to come from a modulation of the near-surface layer of the gold.
Theoretical description
Effect of the electric field on the electronic structure
When an electric field is applied to a metal or semiconductor, the electronic structure of the material changes. The electrons (and other charged particles) will react to the electric field, by repositioning themselves within the material. Electrons in metals can relatively easily move around and are available in abundance. They will move in such a manner that they try to cancel the external electric field. Since no metal is a perfect conductor, no metal will perfectly cancel the external electric field within the material. In semiconductors the electrons that are available will not be able to move around as easily as electrons in metals. This leads to a weaker response and weaker cancellation of the electric field. This has the effect that the electric field can penetrate deeper into a semiconductor than into a metal.
The optical reflectivity of a (semi-)conductor is based on the band structure of the material close to or at the surface of the material. For reflectivity to occur a photon has to have enough energy to overcome the bandgap of electrons at the Fermi surface. When the photon energy is smaller than the bandgap, the solid will be unable to absorb the energy of the photon by excitation of an electron to a higher energy. This means that the photon will not be re-emitted by the solid and thus not reflected. If the photon energy is large enough to excite an electron from the Fermi surface, the solid will re-emit the photon by decaying the electron back to the original energy. This is not exactly the same photon as the incident photon, as it has for example the opposite direction of the incident photon.
By applying an electric field to the material, the band structure of the solid changes. This change in band structure leads to a different bandgap, which in turn leads to a difference in optical reflectivity. The electric field, generally made by creating a potential difference, leads to an altered Hamiltonian. Using analytical methods available, such as the Tight Binding method, it can be calculated that this altered Hamiltonian leads to a different band structure.
The combination of electron repositioning and the change in band structure due to an external electric field is called the field effect. Since the electric field has more influence on semiconductors than on metals, semiconductors are easier to use to observe the electroreflectance effect.
Near the surface
The optical reflection in (semi-)conductors happens mostly in the surface region of the material. Therefore, the band structure of this region is extra important. Band structure usually covers bulk material. For deviations from this structure, it is conventional to use a band diagram. In a band diagram the x-axis is changed from wavevector k in band structure diagrams to position x in the preferred direction. Usually, this positional direction is normal to the surface plane.
For semiconductors specifically, the band diagram near the surface of the material is important. When an electric field is present close to, or in, the material, this will lead to a potential difference within the semiconductor. Depending on the electric field, the semiconductor will become n- or p-like in the surface region. From now on we will assume that the semiconductor has become n-like at the surface. The bands near the surface will bend under the electrostatic potential of the applied electric field. This bending can be interpreted in the same way as the bending of the valence and conduction bands in a p-n junction, when equilibrium has been reached. The result of this bending is a conduction band that comes close to the Fermi level. Therefore, the conduction band will begin to fill with electrons. This change in band structure leads to a change in optical reflection of the semiconductor.
Brillouin zones and optical reflectivity
Optical reflectivity and the Brillouin zones are closely linked, since the band gap energy in the Brillouin zone determines if a photon is absorbed or reflected. If the band gap energy in the Brillouin zone is smaller than the photon energy, the photon will be absorbed, while the photon will be transmitted/reflected if the band gap energy is larger than the photon energy. For example, the photon energies of visible light lie in a range between 1.8 eV (red light) and 3.1 eV (violet light), So if the band gap energy is larger than 3.2 eV, photons of visible light will not be absorbed, but reflected/transmitted: the material appears transparent. This is the case for diamond, quartz etc. But if the band gap is roughly 2.6 eV (this is the case for cadmium sulfide) only blue and violet light is absorbed, while red and green light are transmitted, resulting in a reddish looking material.
When an electric field is added to a (semi)conductor, the material will try to cancel this field by inducing an electric field at its surface. Because of this electric field, the optical properties of the surface layer will change, due to the change in size of critical band gaps, and hence changing its energy. Since the change in band gap only occurs on the surface of the (semi)conductor, optical properties will not change in the core of bulk materials, but for very thin films, where almost all particles can be found at the surface, the optical properties can change: absorption or transmittance of certain wavelengths depending on the strength of the electric field. This can result in more accurate measurements in case there are multiple compounds in the semiconductor, practically canceling the background noise of data.
Commonly, the band gaps are smallest close to, or at the Brillouin zone boundary. Adding an electric field will alter the whole band structure of the material where the electric field penetrates, but the effect will be especially noticeable at the Brillouin zone boundary. When the smallest band gap changes in size, this alters the optical reflectivity of the material more than the change in an already larger band gap. This can be explained by noticing that the smallest band gap determines a lot of the reflectivity, as lower energy photons cannot be absorbed and re-emitted.
Dielectric constant
The optical properties of semiconductors are directly related to the dielectric constant of the material. This dielectric constant gives the ratio between the electric permittivity of a material and the permittivity of a vacuum. The complex refractive index of a material is given by the square root of the dielectric constant. The reflectance of a material can then be calculated using the Fresnel equations. A present electric field alters the dielectric constant and therefore alters the optical properties of the material, such as the reflectance.
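As a rough numerical illustration of this chain of relationships (a minimal sketch assuming normal incidence from vacuum and a scalar, isotropic dielectric constant; the function name and the numerical values are illustrative only), the field-induced change in reflectance ΔR/R can be estimated from a small change in the dielectric constant:

    import cmath

    def reflectance(eps):
        # Normal-incidence Fresnel reflectance from a complex dielectric constant:
        # n = sqrt(eps), R = |(n - 1) / (n + 1)|**2 (incidence from vacuum).
        n = cmath.sqrt(eps)
        r = (n - 1) / (n + 1)
        return abs(r) ** 2

    # Illustrative values only: an unperturbed dielectric constant and a small
    # field-induced change near a critical point.
    eps0 = 16.0 + 0.5j
    delta_eps = 0.05 + 0.02j

    R0 = reflectance(eps0)
    R1 = reflectance(eps0 + delta_eps)
    print("electroreflectance signal dR/R =", (R1 - R0) / R0)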
Interfaces with a liquid (electric double layer)
A solid in contact with a liquid, in the presence of an electric field, forms an electric double layer. This layer is present at the interface of the solid and liquid and it shields the charged surface of the solid. This electric double layer has an effect on the optical reflectivity of the solid as it changes the elastic light scattering properties. The formation of the electric double layer involves different timescales, such as the relaxation time and the charging time.
The relaxation time can be written as τ = λ_D²/D, with D being the diffusion constant and λ_D the Debye length.
The charging time can be expressed as τ_c = λ_D·L/D, where L is the representative system size.
The Debye length is often used as a measure of electric double layer thickness. Measuring the electric double layer with electroreflectance is challenging due to separation caused by conduction electrons.
History
The effect of electroreflectance was first reported in a 1965 letter by B. O. Seraphin and R. B. Hess from the Michelson Laboratory, China Lake, California, where they were studying the Franz-Keldysh effect above the fundamental edge in germanium. They found that it was not only possible for the material to absorb the incident photons, but also to re-emit them. Following this discovery, Seraphin wrote numerous articles on the newfound phenomenon.
Research techniques
Electroreflectance in surface physics
Using electroreflectance in surface physics studies gives some major advantages over techniques used before its discovery. Previously, determining the surface potential was hard to do, since it required electrical measurements at the surface, and it was difficult to probe the surface region without involving the bulk underneath. Electroreflectance does not need electrical measurements on the surface, but only optical measurements. Furthermore, due to direct functional relationships between surface potential and reflectivity, we can get rid of a lot of assumptions about mobility, scattering, or trapping of added carriers needed in the older methods. The electric field of the surface is probed by the modulation of the beam reflected by the surface. The incoming beam does not penetrate the material deeply, so only the surface is probed without interacting with the bulk underneath.
Aspnes's third-derivative
Third-order spectroscopy, sometimes referred to as Aspnes's third-derivative, is a technique used to enhance the resolution of a spectroscopy measurement. This technique was first used by D. E. Aspnes to study electroreflectance in 1971. Taking third-order derivatives can sharpen the peaks of a function. Especially in spectroscopy, where a feature is never measured at one specific wavelength but always over a band, it is useful to sharpen the peak and thus narrow the band.
Another advantage of derivatives is that baseline shifts are eliminated, since differentiation removes constant offsets. These shifts in spectra can, for example, be caused by sample handling or by lamp or detector instabilities. In this way, some of the background noise of the measurements can be eliminated.
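A minimal numerical sketch of why the third derivative sharpens spectral features (the line shape and values are illustrative and are not Aspnes's actual third-derivative line-shape formula): a narrow feature sitting on a slowly varying background is differentiated three times, after which the smooth background is suppressed and only the structure near the feature survives.

    import numpy as np

    E = np.linspace(1.0, 3.0, 2001)                    # photon energy axis (eV), illustrative
    background = 0.3 + 0.1 * E                         # slowly varying (linear) background
    peak = 0.05 * np.exp(-((E - 2.0) / 0.05) ** 2)     # narrow feature at 2.0 eV
    spectrum = background + peak

    d3 = spectrum.copy()
    for _ in range(3):                                 # numerical third derivative
        d3 = np.gradient(d3, E)

    # The linear background differentiates away, so the third derivative is
    # essentially zero except near the feature at 2.0 eV.
    print("feature located near", E[np.argmax(np.abs(d3))], "eV")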
Applications
Electroreflectance is often used to determine band gaps and electric properties of thin films of weaker semiconducting materials. Two different examples are listed below.
Enhancing research of high band gap semiconductors at room temperature
Wide-band-gap semiconductors like tin oxide (SnO2) generally possess high chemical stability and mobility, are cheap to fabricate and have a suitable band alignment, which is why they are often used in various electronics as thin-film transistors, as anodes in lithium-ion batteries and as electron transport layers in solar cells. The large band gap and large binding energy of SnO2 make it useful in ultraviolet-based devices.
But a fundamental problem arises from its dipole-forbidden band structure in bulk form: the transition from the valence to the conduction band is dipole-forbidden since both types of states have even parity, with the effect that band-edge emission of SnO2 is intrinsically forbidden. This can be offset by employing reduced-dimensional structures, which partially break the crystal symmetry and turn the forbidden dipole transitions into allowed ones. Observing optical transitions in SnO2 at room temperature, however, is challenging because the light-absorbing efficiency of the reduced SnO2 structures in the UV region is very weak and there is background scattering from electrons with lower energies. Using electroreflectance, the optical transitions of thin films can be recovered: by placing a thin film in an electric field, the critical points of the optical transitions are enhanced while, due to the change in reflectivity, low-energy background scattering is reduced.
Electroreflectance in organic semiconductors
Organic compounds containing conjugated (i.e., alternating single-double) bonds can have semiconducting properties. The conductivity and mobility of these organic compounds, however, are very low compared to inorganic semiconductors. Treating the molecules of the organic semiconductor as lattices, the same procedure of electroreflectance used for inorganic semiconductors can be applied to the organic ones. It should be noted, though, that there is a certain dualism in these semiconductors: intra-molecular conduction (inside a molecule) and inter-molecular conduction (between molecules), which should be taken into account when doing measurements. Especially for thin films, the band gaps of organic semiconductors can be accurately determined using this method.
See also
Photo-reflectance
Field effect (semiconductor)
Band structure
Brillouin zone
Electric double layer
References
Spectroscopy
Semiconductor properties
Electrostatics | Electroreflectance | [
"Physics",
"Chemistry",
"Materials_science"
] | 2,734 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Semiconductor properties",
"Condensed matter physics",
"Spectroscopy"
] |
69,804,819 | https://en.wikipedia.org/wiki/Porto%20School%20%28architecture%29 | The Porto School is a movement of modern and contemporary architecture in Portugal. Grounded in the teaching at the Porto School of Fine Arts and the Porto School of Architecture, it is one of the most influential architectural movements in the history of Portuguese architecture. Its main figures, Fernando Távora, Álvaro Siza Vieira, and Eduardo Souto de Moura are some of the most globally renowned Portuguese architects.
The School is the foremost expression of modern architecture in Portugal. It is defined by the importance given to the contextualisation and integration of the functionalism and minimalism typical of modernism to the specific local and historical background of each work and to the roots of Portuguese traditional architecture. The first work unanimously attributed to the School was designed in 1953, but its origins can be traced from the beginning of the 20th century.
Background
The teaching of architecture as an independent program in Porto begins in 1911, at the Porto School of Fine Arts (ESBAP). José Marques da Silva, one of the most renowned architects in the country in the beginning of the 20th century, was part of the faculty between 1906 and 1939, as well as the head of the institution for 15 years. He had an influence in the school through the importance given to drawing and its rigour as a tool of the creative process in Architecture, as well as to considerations of function and utility, a framework of teaching mostly influenced by his training in Fine Arts in Paris with Julien Guadet. However, his conception of Architecture was realised in his work and instruction somewhat anachronistically, detached of the revolution of the Modernists, which frequently undervalues his relevance as a forefather to the Porto School.
It is Carlos Ramos, a professor at ESBAP for thirty years and its leader between 1952 and 1967, who is credited as the "establisher of contemporaneity in the school", as stated by Álvaro Siza Vieira. Besides instilling a mindset of openness to artistic innovation and to the city of Porto itself, Carlos Ramos was also influential in bringing to ESBAP a faculty of artists who would go on to influence the following generations of architects and fine artists. Notwithstanding the modernisation brought about by his time at ESBAP, Ramos did not abandon the core principles left by Marques da Silva, harnessing and developing the importance of drawing and the methodology of teaching as if in an atelier. However, he did not realise, either in his works or theoretical content, the paths of development of Portuguese architecture that his students would go on to trail. According to Fernando Távora, Carlos Ramos "liked to open up pathways more than pinpointing them".
History
To Eduardo Souto de Moura, "Távora is the father of the Porto School, but the great-grandfather of Europe. He is an historical and universal figure". Fernando Távora entered ESBAP as an Architecture student in 1941, complementing it with a higher degree from 1945 onwards. In that time, he developed through essays a theoretical underpinning to his work which culminates in the design of the Santa Maria da Feira Municipal Market in 1953.
The building exemplifies his concern of developing and reconciling the radical tendencies of modernist architecture with the local and historical context, contributing to his defence of Architecture as an answer to the social needs of the "Man of today" and as a practice which inescapably exists in a given (social, economical, climatic) setting. The market can be understood as the first work belonging to the School – where the seeds are laid for Távora's following projects and for the genesis of the Porto School's identity.
Álvaro Siza Vieira interns with Távora between 1955 and 1958 after being his student at ESBAP. Siza remembers Távora as the first person in the school to recognise his talent – until then Siza describes his academic performance at the school as "very mediocre". From 1958 onwards he starts working on his own, designing three seminal projects at Leça da Palmeira(Casa de Chá da Boa Nova, Piscinas de Marés, swimming pool at Quinta da Conceição), some of them started while he worked under Távora.
In his early works Siza continues and expands the dialogue between modernist influences and traditional Portuguese architecture, as well as the importance of location and place to the exterior design of the building, although with some differences to Távora. With Siza, the contextualisation of the building does not necessarily mean a harmonisation with its surroundings. On the contrary, the exterior design is understood as the manifestation of an architect's attitude to those surroundings. When the site is deemed to possess a beauty and strength of its own, its impact on the landscape is minimised to an extreme (such as in the seaside Piscinas de Marés). When the site is deemed under or poorly developed, the building is markedly and visibly closed onto itself, with a greater attention given to its interior (such as in the Lordelo Cooperative or the Rocha Ribeiro House).
His individual evolution, along with his growing international acclaim, placed him on a different level from his contemporaries and led him to a progressive distancing from ESBAP – he was hired as an assistant professor in 1966 but resigned in 1969 after Carlos Ramos's departure and the series of pedagogical and political crises that had plagued the institution. Ramos's death, along with the academic struggles of 1968 and 1969, led the school into an artistic stagnation from which it only emerged after the Carnation Revolution of 1974, when the long-standing Estado Novo dictatorship was overthrown and the tension accumulated over years of the regime's oppression was released in the period that followed: the Processo Revolucionário Em Curso.
The school's involvement in the Ambulatory Local Support Service (Serviço de Apoio Ambulatório Local in Portuguese, or simply SAAL) was a crucial moment for the Porto School. SAAL was a state programme for the construction of social housing created after the revolution that set out to answer the pressing housing needs of disadvantaged populations in Portugal. Its methodology of direct interaction between crews of architects and the target population, organised in residents' associations, makes it a unique moment in the history of Portuguese architecture and a worldwide reference in the field. The constraints imposed by the country's poor economic situation and the service's own fragile financial status were well suited to the school's modernist attitude towards rationality and functionalism. The main mark left on the city by the programme was the Bouça housing complex, by Álvaro Siza Vieira.
The involvement of architects trained (or in training) at ESBAP in the SAAL crews meant a change of scale from what they usually worked on. The Porto School's approach had always found better refuge in smaller, less complex projects, where the desired intimate relationship with location and with traditional Portuguese architecture could be more readily cultivated. SAAL's scale was larger, however – 11,500 homes were planned to be built across 33 different sites in the city. The substantial strength of residents' associations in Porto during the post-revolutionary period made possible the ambition that SAAL would build more than just homes – parks, nursery schools, and other collective-use facilities began to be conceived purely through the interaction between architects and residents. The socialist and revolutionary character of SAAL was openly stated by those involved. The defeat of this political doctrine after the events of 25 November 1975 eased the dismantling of the programme and the consequent interruption of many of its operations – only 370 homes were eventually built in Porto.
This process left lasting marks on ESBAP and on the School's collective identity, which had found in SAAL an essential reason for its own existence. The growing role of the private sector in housing from the 1980s onwards favoured generally stereotypical and repetitive construction, usually of tower blocks detached from their surroundings. The main architects from Porto, rejecting this general tendency, only found opportunities to work in Portugal on the design of upper-class housing and public buildings.
Also in the 1980s, the Architecture course at the School of Fine Arts was transferred to a new institution within the University of Porto, the Faculty of Architecture. The new site was designed by Álvaro Siza Vieira and is located on the Campo Alegre campus of the university, in a panoramic setting over the river Douro. The new school synthesises the main aspects of the School, with its location overlooking the river and a neighbouring 19th-century house fully taken advantage of and incorporated into the project.
Legacy
The impact of the Porto School continues to be felt in the city by works such as the Serralves Museum of Contemporary Art (by Siza Vieira), the Porto Metro (stations mostly designed by Souto de Moura), the renovated Aliados Avenue in central Porto (Siza Vieira and Souto de Moura), and the refurbishments of the Palácio do Freixo and the Casa dos 24 (Fernando Távora).
The Architecture course at the Faculty of Architecture continues to be one of the most sought-after degrees in the country due to its reputation and teaching methods, having the highest entry bar in the country in its field. Faculty members of FAUP have also taken part in the creation of Architecture degrees at the universities of Coimbra and Minho in the 1990s.
Álvaro Siza Vieira, in 1992, and Eduardo Souto de Moura, in 2011, were awarded the Pritzker Prize, often considered the "Nobel Prize of Architecture".
Notable architects
Some of the Porto School's main architects:
Adalberto Dias
Alcino Soutinho
Alexandre Alves Costa
Álvaro Siza Vieira
Arménio Losa
Eduardo Souto de Moura
Fernando Távora
Pedro Ramalho
Sérgio Fernandez
Viana de Lima
References
Architectural design
Architecture in Portugal | Porto School (architecture) | [
"Engineering"
] | 2,030 | [
"Design",
"Architectural design",
"Architecture"
] |
69,805,941 | https://en.wikipedia.org/wiki/CKLF-like%20MARVEL%20transmembrane%20domain-containing%205 | CKLF-like MARVEL transmembrane domain-containing 5 (CMTM5), previously termed chemokine-like factor superfamily 5 (i.e. CKLFSF5), designates any one of the six protein isoforms (termed CMTM5-v1 to CMTM5-v6) encoded by six different alternative splices of its gene, CMTM5; CMTM5-v1 is the most studied of these isoforms. The CMTM5 gene is located in band 11.2 on the long (i.e. "q") arm of chromosome 14.
The CMTM5 isoforms are members of the CKLF-like MARVEL transmembrane domain-containing family (CMTM). This family consists of 9 proteins, although most of them are known to have one or more isoforms. These proteins are: chemokine-like factor (i.e. CKLF, the founding member of the family) and CKLF-like MARVEL transmembrane domain-containing 1 through 8 (i.e. CMTM1 through CMTM8). All of these proteins, as well as the genes responsible for their production (i.e. CKLF and CMTM1 to CMTM8, respectively), have similar structures but vary in their apparent physiological and pathological functions. Preliminary studies suggest that CMTM5-v1 (which cells commonly secrete into extracellular spaces such as the blood) or an unspecified CMTM5 isoform has various functions, including involvement in regulating the autoimmune system, the development of numerous types of cancers, and the cardiovascular system.
Autoimmune system
The methylation of certain CpG clusters (i.e. DNA areas high in cytosine and guanine) regulates the transcriptional activity of nearby genes. That is, the methylation of a cluster(s) regulates its nearby gene by blocking it from making mRNAs and thereby the proteins encoded by these mRNAs. Studies find that the CMTM5 gene in the DNA isolated from the blood of individuals with the autoimmune diseases of systemic lupus erythematosus and primary Sjögren's syndrome (i.e. Sjögren's syndrome not associated with other health problems or connective tissue diseases) is hyper-methylated at its CpG cluster(s) and thereby less active or inactive. On the other hand, the CpG cluster(s) controlling the CMTM5 gene in the blood of individuals with the autoimmune disease of rheumatoid arthritis are hypo-methylated and therefore highly active. These methylation changes, the studies suggest, regulate the function of immunologically active blood cells (and, perhaps, blood platelets) and thereby the development, maintenance, and/or worsening of the cited autoimmune diseases. Further studies are required to prove that these methylations contribute to the immunologic dysregulations occurring in these (and perhaps other) autoimmune diseases and can serve as clinical markers of disease severity and/or as therapeutic targets for controlling the diseases.
Cancers
Studies have reported that: 1) the levels of CMTM5-v1 in the malignant tissues of patients with prostate cancer are lower than the levels in their nearby normal prostate gland tissues as well as in the tissues of patients with benign prostatic hyperplasia; 2) patients with lower prostate cancer tissue levels of CMTM5-v1 have higher prostate cancer Gleason scores and therefore poorer prognoses than patients with higher prostate cancer tissue levels of CMTM5-v1; and 3) the forced overexpression of CMTM5-v1 in cultured DU145 cells (a human prostate cancer cell line) reduces, while the forced suppression of CMTM5-v1 levels increases, their proliferation and migration. Similar findings for an unspecified CMTM5 isoform are reported in ovarian cancer, hepatocellular carcinoma, pancreatic cancer, non-small-cell lung carcinoma, renal cell carcinoma, and breast cancer. The forced overexpression of CMTM5-v1 in Huh7 human hepatic cells also inhibited the ability of these cells to grow in a mouse model of cancer. Finally, various human cancer cell lines, including those of the liver, breast, prostate, colon, stomach, nasopharynx, laryngopharynx, esophagus, lung, and cervix, express low levels of, or no, CMTM5-v1 and concurrently have highly methylated CpG sites near the CMTM5 gene. These findings suggest that the CMTM5 gene may act as a tumor suppressor gene, i.e. a normal gene whose product(s) inhibit the development and/or progression of various cancers. The findings also support further studies to confirm and expand these relationships and to determine if the expression of CMTM5 isoforms can be used as tumor markers for these cancers' severities/prognoses and/or as targets for treating them.
Cardiovascular system
A case–control study of hospitalized patients found that the blood plasma levels of CMTM5 protein and CMTM5 messenger RNA (i.e. mRNA) in 350 patients with coronary artery disease were significantly higher than in a matched group of 350 patients without this disease. The same research group similarly studied 124 hospitalized patients who had a coronary artery stent in place. They found that high blood plasma levels of CMTM5 mRNA were associated with a higher rate of subsequently developing stenosis (i.e. narrowing) in their stents than in patients with lower levels of this mRNA. Furthermore, the forced overexpression of the CMTM5 gene inhibited the proliferation and migration of cultured human endothelial cells, while the forced suppression of the CMTM5 gene promoted the proliferation of these cells. These studies suggest that the CMTM5 gene, one of its mRNAs, and/or one of its CMTM5 proteins may promote atherosclerosis-based coronary artery disease and the stenosis of coronary artery stents, and may do so by inhibiting the actions of vascular endothelial cells that counteract atherosclerosis and stent occlusion. More studies are needed to confirm and further define these relationships; to determine if expression of the CMTM5 gene or its products can be used as markers for patient susceptibility to coronary artery/stent occlusions; and to determine if this gene or its products can be used clinically as targets for preventing or decreasing the frequency of these occlusions.
References
Human proteins
Gene expression | CKLF-like MARVEL transmembrane domain-containing 5 | [
"Chemistry",
"Biology"
] | 1,382 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
69,806,623 | https://en.wikipedia.org/wiki/Band%20bending | In solid-state physics, band bending refers to the process in which the electronic band structure in a material curves up or down near a junction or interface. It does not involve any physical (spatial) bending. When the electrochemical potential of the free charge carriers around an interface of a semiconductor is dissimilar, charge carriers are transferred between the two materials until an equilibrium state is reached whereby the potential difference vanishes. The band bending concept was first developed in 1938 when Mott, Davidov and Schottky all published theories of the rectifying effect of metal-semiconductor contacts. The use of semiconductor junctions sparked the computer revolution in the second half of the 20th century. Devices such as the diode, the transistor, the photocell and many more play crucial roles in technology.
Qualitative description
Band bending can be induced by several types of contact. In this section metal-semiconductor contact, surface state, applied bias and adsorption induced band bending are discussed.
Metal-semiconductor contact induced band bending
Figure 1 shows the ideal band diagram (i.e. the band diagram at zero temperature without any impurities, defects or contaminants) of a metal with an n-type semiconductor before (top) and after contact (bottom). The work function is defined as the energy difference between the Fermi level of the material and the vacuum level before contact, and is denoted by Φ. When the metal and semiconductor are brought into contact, charge carriers (i.e. free electrons and holes) will transfer between the two materials as a result of the work function difference Φ_M − Φ_S.
If the metal work function (Φ_M) is larger than that of the semiconductor (Φ_S), that is Φ_M > Φ_S, the electrons will flow from the semiconductor to the metal, thereby lowering the semiconductor Fermi level and raising that of the metal. Under equilibrium the work function difference vanishes and the Fermi levels align across the interface. A Helmholtz double layer will be formed near the junction, in which the metal is negatively charged and the semiconductor is positively charged due to this electrostatic induction. Consequently, a net electric field is established pointing from the semiconductor to the metal. Due to the low concentration of free charge carriers in the semiconductor, the electric field cannot be effectively screened (unlike in the metal, where the field vanishes in the bulk). This causes the formation of a depletion region near the semiconductor surface. In this region, the energy band edges in the semiconductor bend upwards as a result of the accumulated charge and the associated electric field between the semiconductor and the metal surface.
In the case of Φ_M < Φ_S, electrons flow from the metal to the semiconductor, resulting in an electric field that points in the opposite direction. Hence, the band bending is downward, as can be seen in the bottom right of Figure 1.
One can envision the direction of bending by considering the electrostatic energy experienced by an electron as it moves across the interface. When Φ_M > Φ_S, the metal develops a negative charge. An electron moving from the semiconductor to the metal therefore experiences a growing repulsion as it approaches the interface. It follows that its potential energy rises and hence the band bending is upwards. In the case of Φ_M < Φ_S, the semiconductor carries a negative charge, forming a so-called accumulation layer and leaving a positive charge on the metal surface. An electric field develops from the metal to the semiconductor which drives the electrons towards the metal. By moving closer to the metal the electron can thus lower its potential energy. The result is that the semiconductor energy bands bend downwards towards the metal surface.
Surface state induced band bending
Despite being energetically unfavourable, surface states may exist on a clean semiconductor surface due to the termination of the material's lattice periodicity. Band bending can also be induced in the energy bands of such surface states. A schematic of an ideal band diagram near the surface of a clean semiconductor, in and out of equilibrium with its surface states, is shown in Figure 2. The unpaired electrons in the dangling bonds of the surface atoms interact with each other to form an electronic state with a narrow energy band, located somewhere within the band gap of the bulk material.
For simplicity, the surface state band is assumed to be half-filled, with its Fermi level located at the mid-gap energy of the bulk. Furthermore, doping is assumed not to influence the surface states. This is a valid approximation since the dopant concentration is low.
For intrinsic semiconductors (undoped), the valence band is fully filled with electrons, whilst the conduction band is completely empty. The Fermi level is thus located in the middle of the band gap, the same as that of the surface states, and hence there is no charge transfer between the bulk and the surface. As a result no band bending occurs.
If the semiconductor is doped, the Fermi level of the bulk is shifted with respect to that of the undoped semiconductor by the introduction of dopant eigenstates within the band gap. It is shifted up for n-doped semiconductors (closer to the conduction band) and down in case of p-doping (nearing the valence band). In disequilibrium, the Fermi energy is thus lower or higher than that of the surface states for p- and n-doping, respectively. Due to the energy difference, electrons will flow from the bulk to the surface or vice versa until the Fermi levels become aligned at equilibrium. The result is that, for n-doping, the energy bands bend upward, whereas they bend downwards for p-doped semiconductors.
Note that the density of surface states is large in comparison with the dopant concentration in the bulk. Therefore, the Fermi energy of the semiconductor is almost independent of the bulk dopant concentration and is instead determined by the surface states. This is called Fermi level pinning.
Adsorption induced band bending
Adsorption on a semiconductor surface can also induce band bending. Figure 3 illustrates the adsorption of an acceptor molecule (A) onto a semiconductor surface. As the molecule approaches the surface, an unfilled molecular orbital of the acceptor interacts with the semiconductor and shifts downwards in energy.
Due to the adsorption of the acceptor molecule, its movement is restricted. It follows from the general uncertainty principle that the molecular orbital broadens in energy, as can be seen at the bottom of Figure 3. The lowering of the acceptor molecular orbital leads to electron flow from the semiconductor to the molecule, thereby again forming a Helmholtz layer on the semiconductor surface. An electric field is set up and upwards band bending near the semiconductor surface occurs. For a donor molecule, the electrons will transfer from the molecule to the semiconductor, resulting in downward band bending.
Applied bias induced band bending
When a voltage is applied across two surfaces of metals or semiconductors, the associated electric field is able to penetrate the surface of the semiconductor. Because the semiconductor material contains few charge carriers, the electric field will cause an accumulation of charges on the semiconductor surface. When the applied voltage V_A > 0, a forward bias, the band bends downwards. A reverse bias (V_A < 0) would cause an accumulation of holes on the surface, which would bend the band upwards. This follows again from Poisson's equation.
As an example, the band bending induced by the formation of a p-n junction or a metal-semiconductor junction can be modified by applying a bias voltage V_A. This voltage adds to the built-in potential (V_bi) that exists across the depletion region. Thus the potential difference between the bands is either increased or decreased depending on the type of bias that is applied. The conventional depletion approximation assumes a uniform ion distribution in the depletion region. It also approximates a sudden drop in charge carrier concentration at the edges of the depletion region. Therefore the electric field changes linearly and the band bending is parabolic. Thus the width of the depletion region will change due to the bias voltage. The depletion region width is given by:

w = x_n + x_p = sqrt( (2ε / q) · ((N_A + N_D) / (N_A · N_D)) · (V_bi − V_A − 2kT/q) )

x_n and x_p are the boundaries of the depletion region, ε is the dielectric constant of the semiconductor, N_A and N_D are the net acceptor and net donor dopant concentrations respectively, and q is the charge of the electron. The term 2kT/q compensates for the existence of free charge carriers near the junction from the bulk region.
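A short worked example of this expression is sketched below in Python. It evaluates the depletion width of an abrupt silicon p-n junction under reverse, zero, and forward bias; the doping levels, intrinsic carrier density, temperature, and bias values are illustrative assumptions rather than figures taken from this article.

```python
# Illustrative sketch (assumed parameters): depletion width of an abrupt
# silicon p-n junction under bias, using the expression quoted above.

import math

q = 1.602e-19          # elementary charge (C)
k = 1.381e-23          # Boltzmann constant (J/K)
eps0 = 8.854e-12       # vacuum permittivity (F/m)
eps_si = 11.7 * eps0   # permittivity of silicon (assumed)
T = 300.0              # temperature (K)
ni = 1.0e16            # intrinsic carrier density of Si at 300 K (m^-3), assumed
Na = 1.0e23            # net acceptor concentration (m^-3), assumed
Nd = 1.0e22            # net donor concentration (m^-3), assumed

kT_q = k * T / q                          # thermal voltage (V)
V_bi = kT_q * math.log(Na * Nd / ni**2)   # built-in potential (V)

def depletion_width(V_applied):
    """Total depletion width w = x_n + x_p (m) at applied bias V_applied (V)."""
    return math.sqrt(2 * eps_si / q * (Na + Nd) / (Na * Nd)
                     * (V_bi - V_applied - 2 * kT_q))

for V in (-5.0, 0.0, 0.5):   # reverse bias, equilibrium, forward bias
    print(f"V = {V:+.1f} V  ->  w = {depletion_width(V) * 1e9:.0f} nm")
```

With these assumed dopings the width shrinks from roughly 900 nm under 5 V reverse bias to below 200 nm under 0.5 V forward bias, illustrating how the applied bias widens or narrows the depletion region.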
Poisson's equation
The equation which governs the curvature obtained by the band edges in the space charge region, i.e. the band bending phenomenon, is Poisson's equation,

∇²φ = −ρ / ε

where φ is the electric potential, ρ is the local charge density and ε is the permittivity of the material. An example of its implementation can be found in the Wikipedia article on p-n junctions.
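A minimal numerical sketch of this relation is given below: it solves the one-dimensional Poisson equation by finite differences for a uniformly charged depletion layer, which reproduces the parabolic potential (and hence parabolic band bending) mentioned above. The material parameters, layer width, and grounded boundary conditions are illustrative assumptions, not values from this article.

```python
# Minimal sketch (assumed parameters): 1-D finite-difference solution of
# d^2(phi)/dx^2 = -rho/eps for a uniformly charged depletion layer.

import numpy as np

eps = 11.7 * 8.854e-12   # permittivity of silicon (F/m), assumed
q = 1.602e-19            # elementary charge (C)
Nd = 1.0e22              # ionised donor density in the layer (m^-3), assumed
w = 300e-9               # depletion layer width (m), assumed
n = 201                  # grid points

x = np.linspace(0.0, w, n)
h = x[1] - x[0]
rho = q * Nd * np.ones(n)            # uniform space charge (depletion approximation)

# Second-difference matrix for the interior points, with phi fixed to zero
# at both ends of the layer for simplicity.
A = (np.diag(-2.0 * np.ones(n - 2))
     + np.diag(np.ones(n - 3), 1)
     + np.diag(np.ones(n - 3), -1))
b = -rho[1:-1] / eps * h**2

phi = np.zeros(n)
phi[1:-1] = np.linalg.solve(A, b)    # parabolic potential profile

print(f"maximum potential inside the layer: {phi.max():.3f} V")
```

The resulting profile is a parabola whose peak value matches the analytic result ρw²/(8ε) for these boundary conditions, which is the parabolic band bending produced by a constant space charge.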
Applications
Electronics
The p-n diode is a device that allows current to flow in only one direction as long as the applied voltage is below a certain threshold. When a forward bias is applied to the p-n junction of the diode, the potential barrier across the depletion region is lowered. The applied voltage introduces more charge carriers as well, which are able to diffuse across the depletion region. Under a reverse bias this is hardly possible because the barrier is raised instead of lowered, and thus no current can flow. The depletion region is therefore what restricts the current to a single direction.
The metal–oxide–semiconductor field-effect transistor (MOSFET) relies on band bending. When the transistor is in its so-called 'off' state, there is no voltage applied on the gate and the first p-n junction is reverse biased. The potential barrier is too high for the electrons to pass, thus no current flows. When a voltage is applied on the gate, the potential gap shrinks due to the band bending induced by the applied bias. As a result current will flow; in other words, the transistor is in its 'on' state. The MOSFET is not the only type of transistor available today. Several more examples are the metal-semiconductor field-effect transistor (MESFET) and the junction field-effect transistor (JFET), both of which rely on band bending as well.
Photovoltaic cells (solar cells) are essentially just p-n diodes that can generate a current when they are exposed to sunlight. Solar energy can create an electron-hole pair in the depletion region. Normally these would recombine quite quickly before traveling very far. The electric field in the depletion region separates the electrons and holes, generating a current when the two sides of the p-n diode are connected. Photovoltaic cells are an important and promising source of reliable, clean, renewable energy.
Spectroscopy
Different spectroscopy methods make use of or can measure band bending:
Surface photovoltage is a spectroscopy method used to determine the minority carrier diffusion length of semiconductors. The band bending at the surface of a semiconductor results in a depletion region with a surface potential. A photon source creates electron-hole pairs deeper into the material. These electrons then diffuse to the surface to radiatively recombine. This results in a changing surface potential which can be measured and is directly correlated to the minority carrier diffusion length. This property of a semiconductor is very important for certain electronics such as photodiodes, solar panels and transistors.
Time-resolved photoluminescence is another technique used to measure the minority carrier diffusion length in semiconductors. It is a form of photoluminescence spectroscopy where the emitted photon decay is measured over time. In photoluminescence spectroscopy a material is excited using a photon pulse with a higher photon energy than the band gap in the material. The material relaxes back into its ground state under emission of a photon. These emitted photons are measured to gain information about the band structure of a material.
Angle-resolved photoemission spectroscopy can be used to chart the electronic energy bands of crystal structures such as semiconductors. This can thus also visualize band bending. The technique is an enhanced version of regular photoemission spectroscopy. It is based on the photoelectric effect. By analysing the energy difference between the incident photons and the electrons emitted by the solid, information about the energy band differences in the solid can be gained. By measuring at different angles the band structure can be mapped and the band bending captured.
See also
Field effect (semiconductor) – band bending due to the presence of an external electric field at the vacuum surface of a semiconductor.
Thomas–Fermi screening – special case of Lindhard theory that describes the band bending caused by a charged defect.
Quantum capacitance – Field effect band bending, especially important for low-density-of-states-systems.
References
Electronic band structures
Semiconductor structures | Band bending | [
"Physics",
"Chemistry",
"Materials_science"
] | 2,583 | [
"Electron",
"Electronic band structures",
"Condensed matter physics"
] |
69,809,487 | https://en.wikipedia.org/wiki/Pio%20Emanuelli | Pio Emanuelli (3 November 1889 – 2 July 1946) was an Italian astronomer, historian and popularizer of astronomy. He worked for many years at the Vatican Observatory and also taught at the University of Rome.
Emanuelli was born in Rome, the son of a Vatican clerk. He was only ten when his father died, and he took an interest in astronomy from a very young age, attending lectures by Elia Millosevich and writing articles in magazines and newspapers. Even as a young boy he was in correspondence with astronomers such as Giovanni Schiaparelli and Camille Flammarion. In 1910 he joined the Vatican Observatory under Father Johann Georg Hagen and worked on the Star Catalog for years, except for a break due to conscription during the war between 1915 and 1919. In 1922 he became a lecturer in astronomy at the University of Rome and in 1938 he became a professor of the history of astronomy. He wrote numerous popular articles and books. He served as secretary of the Italian Astronomical Society between 1924 and 1928 and was a corresponding member of the Pontifical Academy of the Nuovi Lincei from 1925. In 1940 he was recalled to army duty at the meteorological station of Vigna di Valle (near Bracciano) with the rank of major, but he continued to teach.
Emanuelli died unexpectedly from an illness, leaving behind a large number of incomplete and unpublished works which are now held at the Domus Galileana in Pisa. The asteroid 11145 Emanuelli was named in his memory in 1997. A street in Rome is also named after him.
References
1889 births
1946 deaths
20th-century Italian astronomers
Historians of astronomy
Scientists from Rome | Pio Emanuelli | [
"Astronomy"
] | 326 | [
"People associated with astronomy",
"Historians of astronomy",
"History of astronomy"
] |
69,810,083 | https://en.wikipedia.org/wiki/Computational%20audiology | Computational audiology is a branch of audiology that employs techniques from mathematics and computer science to improve clinical treatments and scientific understanding of the auditory system. Computational audiology is closely related to computational medicine, which uses quantitative models to develop improved methods for general disease diagnosis and treatment.
Overview
In contrast to traditional methods in audiology and hearing science research, computational audiology emphasizes predictive modeling and large-scale analytics ("big data") rather than inferential statistics and small-cohort hypothesis testing. The aim of computational audiology is to translate advances in hearing science, data science, information technology, and machine learning to clinical audiological care. Research to understand hearing function and auditory processing in humans as well as relevant animal species represents translatable work that supports this aim. Research and development to implement more effective diagnostics and treatments represent translational work that supports this aim.
For people with hearing difficulties, tinnitus, hyperacusis, or balance problems, these advances might lead to more precise diagnoses, novel therapies, and advanced rehabilitation options including smart prostheses and e-Health/mHealth apps. For care providers, it can provide actionable knowledge and tools for automating part of the clinical pathway.
The field is interdisciplinary and includes foundations in audiology, auditory neuroscience, computer science, data science, machine learning, psychology, signal processing, natural language processing, otology and vestibulology.
Applications
In computational audiology, models and algorithms are used to understand the principles that govern the auditory system, to screen for hearing loss, to diagnose hearing disorders, to provide rehabilitation, and to generate simulations for patient education, among others.
Computational models of hearing, speech and auditory perception
For decades, phenomenological & biophysical (computational) models have been developed to simulate characteristics of the human auditory system. Examples include models of the mechanical properties of the basilar membrane, the electrically stimulated cochlea, middle ear mechanics, bone conduction, and the central auditory pathway. Saremi et al. (2016) compared 7 contemporary models including parallel filterbanks, cascaded filterbanks, transmission lines and biophysical models. More recently, convolutional neural networks (CNNs) have been constructed and trained that can replicate human auditory function or complex cochlear mechanics with high accuracy. Although inspired by the interconnectivity of biological neural networks, the architecture of CNNs is distinct from the organization of the natural auditory system.
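As a schematic illustration of the parallel-filterbank family of models compared in such studies, the sketch below splits a test signal into a handful of ERB-spaced bandpass channels. The centre frequencies, bandwidths, filter order, and input signal are all assumptions made for the example; this is not an implementation of any of the cited models.

```python
# Schematic illustration only (assumed parameters): a crude parallel-filterbank
# "cochlea" that decomposes a signal into ERB-spaced frequency channels.

import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 16000                                  # sampling rate (Hz)
t = np.arange(0, 0.1, 1 / fs)
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)

def erb(f):
    """Approximate equivalent rectangular bandwidth (Hz) of a human auditory filter."""
    return 24.7 * (4.37 * f / 1000 + 1)

centre_freqs = np.geomspace(100, 6000, 16)  # 16 channels, log-spaced (assumed)

outputs = []
for fc in centre_freqs:
    bw = erb(fc)
    low, high = max(fc - bw / 2, 1.0), min(fc + bw / 2, fs / 2 - 1)
    sos = butter(2, [low, high], btype="bandpass", fs=fs, output="sos")
    outputs.append(sosfiltfilt(sos, signal))

energies = [np.sqrt(np.mean(y ** 2)) for y in outputs]
best = centre_freqs[int(np.argmax(energies))]
print(f"channel with the most energy is centred near {best:.0f} Hz")
```

The channel energies form a coarse "excitation pattern" of the input, which is the kind of intermediate representation that filterbank-style models of hearing produce before further processing.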
e-Health / mHealth (connected hearing healthcare, wireless- and internet-based services)
Online pure-tone threshold audiometry (or screening) tests, electrophysiological measures, for example distortion-product otoacoustic emissions (DPOAEs), and speech-in-noise screening tests are becoming increasingly available as tools to promote awareness and enable accurate early identification of hearing loss across ages, monitor the effects of ototoxicity and/or noise, guide ear and hearing care decisions, and provide support to clinicians. Smartphone-based tests have been proposed to detect middle ear fluid using acoustic reflectometry and machine learning. Smartphone attachments have also been designed to perform tympanometry for acoustic evaluation of the middle ear and eardrum. Low-cost earphones attached to smartphones have also been prototyped to help detect the faint otoacoustic emissions from the cochlea and perform neonatal hearing screening.
Big data and AI in audiology and hearing healthcare
Collecting large numbers of audiograms (e.g. from databases from the National Institute for Occupational Safety and Health or NIOSH or National Health and Nutrition Examination Survey or NHANES) provides researchers with opportunities to find patterns of hearing status in the population or to train AI systems that can classify audiograms. Machine learning can be used to predict the relationship between multiple factors e.g. predict depression based on self-reported hearing loss or the relationship between genetic profile and self-reported hearing loss. Hearing aids and wearables provide the option to monitor the soundscape of the user or log the usage patterns which can be used to automatically recommend settings that are expected to benefit the user.
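As an informal illustration of this kind of data-driven workflow, the sketch below trains a standard classifier to label audiograms by degree of hearing loss. The audiograms are synthetic and the severity categories are made up purely for demonstration; it is not a reproduction of any of the studies referred to above.

```python
# Illustrative sketch only (synthetic data, assumed categories): classifying
# audiograms by degree of hearing loss with an off-the-shelf classifier.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
freqs = [500, 1000, 2000, 4000, 8000]          # audiometric frequencies (Hz)

def synthetic_audiogram(severity):
    """Pure-tone thresholds (dB HL) with a sloping high-frequency loss."""
    base = {"normal": 10, "mild": 30, "moderate": 50}[severity]
    slope = np.linspace(0, 15, len(freqs))     # worse at high frequencies
    return base + slope + rng.normal(0, 5, len(freqs))

labels = ["normal", "mild", "moderate"]
X = np.array([synthetic_audiogram(lab) for lab in labels for _ in range(200)])
y = np.array([lab for lab in labels for _ in range(200)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

Real studies of this kind work from clinical databases rather than simulated thresholds, but the pipeline (feature vectors of thresholds, a trained model, held-out evaluation) has the same shape.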
Computational approaches to improving hearing devices and auditory implants
Methods to improve rehabilitation by auditory implants include improving music perception, models of the electrode-neuron interface, and an AI-based cochlear implant fitting assistant.
Data-based investigations into hearing loss and tinnitus
Online surveys processed with ML-based classification have been used to diagnose somatosensory tinnitus. Automated natural language processing (NLP) techniques, including unsupervised and supervised machine learning, have been used to analyze social posts about tinnitus and analyze the heterogeneity of its symptoms.
Diagnostics for hearing problems, acoustics to facilitate hearing
Machine learning has been applied to audiometry to create flexible, efficient estimation tools that do not require excessive testing time to determine an individual's auditory profile. Similarly, machine learning based versions of other auditory tests, including determining dead regions in the cochlea or equal loudness contours, have been created.
e-Research (remote testing, online experiments, new tools and frameworks)
Examples of e-Research tools include the Remote Testing Wiki, Portable Automated Rapid Testing (PART), Ecological Momentary Assessment (EMA), and the NIOSH sound level meter. A number of tools can be found online.
Software and tools
Software and large datasets are important for the development and adoption of computational audiology. As with many scientific computing fields, much of the field of computational audiology existentially depends on open source software and its continual maintenance, development, and advancement.
Related fields
Computational biology, computational medicine, and computational pathology are all interdisciplinary approaches to the life sciences that draw from quantitative disciplines such as mathematics and information science.
See also
Audiology
Auditory system
Auditory cortex
Global Audiology
Hearing
Vestibular system
External links
Computational Audiology Network
References
Audiology
Auditory system
Computational fields of study
Computational science
Otology | Computational audiology | [
"Mathematics",
"Technology"
] | 1,228 | [
"Computational science",
"Applied mathematics",
"Computational fields of study",
"Computing and society"
] |
69,810,702 | https://en.wikipedia.org/wiki/VinFuture%20Prize | The VinFuture Prize is an annual international award that honors remarkable scientific breakthroughs and promotes innovations for mankind, with involvement from scientists, policymakers, business leaders, and prize holders.
The vision of the VinFuture Prize is to catalyze change in people's everyday lives through tangible and highly scalable improvements in areas such as productivity, prosperity, connectivity, health, safety, environment, sustainability, as well as their overall happiness regardless of socioeconomic status.
History
Establishment
Since 2017, Vietnam has been working to integrate a more comprehensive STEM education and encourage more individuals to pursue careers in the area. With an influx of STEM students in colleges and internationally trained professionals, Vietnam has enormous potential in science and technology. Vietnam has gained widespread praise from the international scientific community for its recent efforts in the battle against the COVID-19 pandemic.
However, scientific research and development in Vietnam continue to encounter various obstacles, including a lack of funds, support, and infrastructure. VinGroup, one of the most powerful Vietnamese conglomerates, is investing in the STEM field with the launch of VinFuture Foundation in 2020, to help leverage Vietnam's position on the global science map.
Founders
Chairman Phạm Nhật Vượng and his wife, Mrs. Phạm Thu Hương, founded the VinFuture Foundation as an independent organization with the mission of honoring breakthrough innovations in science and technology that have the potential to improve the lives of millions of people in Vietnam and around the world. With the annual worldwide award – VinFuture Prize – marking its inaugural event in 2021, the Foundation hopes to honor and inspire such innovations.
The VinFuture Prize Council
The Prize Council is responsible for reviewing and ratifying the fields of focus and the selection process, as well as selecting the VinFuture Laureates.
Richard Henry Friend
Pascale Cossart
Chi Van Dang
Soumitra Dutta
Martin Andrew Green
Xuedong Huang
Daniel Kammen
Konstantin Sergeevich Novoselov
Pamela Christine Ronald
Susan Solomon
Leslie Gabriel Valiant
Honorary members
Padmanabhan Anandan
Jennifer Tour Chayes
Gérard Albert Mourou
Michael Eugene Porter
Vu Ha Van
The VinFuture Prize Pre-Screening Committee
The Pre-Screening Committee is responsible for reviewing and identifying qualified nominations in accordance with the selection criteria set by the Prize Council and preparing supporting documents for the shortlist before presenting them to the Prize Council.
Thuc-Quyen Nguyen (Chair, USA)
Ngoc-Minh Do (Member, Vietnam)
Quarraisha Abdool Karim (Member, South Africa)
Ermias Kebreab (Member, USA)
Alta Schutte (Member, Australia)
Hans Joachim Schellnhuber (Member, Austria)
Ingolf Steffan-Dewenter (Member, Germany)
Fiona Watt (Member, Germany)
Vivian Yam (Member, Hong Kong)
Honorary members
Myles Allen
Akihisa Kakimoto
Đức-Thụ Nguyễn
Albert P. Pisano
Molly Shoichet
Award process
Nomination
Many nominations were compiled and qualified before the Pre-screening Round by the VinFuture Prize Secretariat together with scientists in natural science, health science, agriculture, earth science, environmental science, computer science, and engineering and technology, along with experts in artificial intelligence, renewable energy, biotechnology, new materials, environmental conservation, and other fields. Compilation and qualification checks are crucial steps in validating each nomination's eligibility and completeness, as well as in gathering meaningful and diverse data for the overall evaluations. To guarantee scientific integrity, impartiality, and openness, all nominations are evaluated using rigorous review methods based on international standards. The prize is open to anybody from anywhere, regardless of nationality, gender, age, or economic status.
The nominations will be evaluated by the VinFuture Prize Pre-screening Committee, which is made up of scientists and experts from leading universities, research institutions, technological and industrial organizations around the world, based on three core criteria: scientific and technological advancement, meaningful changes in people's lives, and scale of impact and sustainability. The 17 Sustainable Development Goals of the United Nations, together with VinFuture's purpose of making substantial improvements in the lives of millions of people, must be adhered to by the selected candidates.
Selection
The VinFuture Prize Council will assess the process and choose four ground-breaking scientific discoveries that have had and will continue to have a beneficial impact on millions of people across the world. They are required to draw reasonable conclusions based on a variety of perspectives and in consideration of all professional disciplines, scientific domains, and civilizations. The prize winners will be announced at the VinFuture Prize Award Ceremony.
Annually, the VinFuture Prize awards a total of four prizes, including the Grand Prize and three special prizes. The Grand Prize gives $3 million to groundbreaking research or creative technological advancements that improve human lives and enhance equitability and sustainability for future generations. The following are the three special awards, each of which will receive $500,000 in funding:
Special Prize for an excellent researcher or innovator from a developing-nation institute.
Special Prize for an outstanding female researcher or innovator.
Special Prize for groundbreaking discovery or invention in an emerging field of science or technology that has the potential to make a substantial beneficial impact on mankind in the future.
Award ceremony
The Sci-Tech Week and VinFuture Award Ceremony is held on an annual basis, promoting Vietnam as a new destination for global science and technology and laying the groundwork for a multilateral link between Vietnam's scientific and technological community and the rest of the globe.
The Sci-Tech Week
The VinFuture Sci-Tech Week is a major event that draws thousands of scientists, politicians, and entrepreneurs from all over the world. Scientists gather in Vietnam to participate in the four main events of the VinFuture Sci-Tech Week: a conversation with the Prize Council and Pre-screening Committee, a "Science for Life" Symposium, the inaugural VinFuture Award Ceremony, and a Scientific Dialogue with the inaugural VinFuture Prize Laureates.
The VinFuture Award Ceremony
The VinFuture Award Ceremony will be a formal event attended by Vietnamese government leaders, scientists, and recipients of scientific honors such as the Nobel Prize, Millennium Technology Prize, Turing Award, and others. The Award Ceremony will be broadcast live on local as well as global science and technology platforms.
Laureates
Entertainment
Reception
Professor Sir Richard Henry Friend, Chairman of the VinFuture Prize Council and the Cavendish Professorship of Physics at the University of Cambridge (UK), said: "I am really impressed by the very constructive response from the nominators. We clearly have a broad set of really interesting and exciting nominations. I think there has been real enthusiasm from nominators across the whole world about this prize. That is certainly the impression I have picked up from the enquiries I have received. The connection between science discovery and real impact on everyday lives is really welcomed." "I think that science and technology play a very central role in the development of both economies and societies. The scientific approach has underpinned huge increases in living standards and has allowed unprecedented access to education for all. It does support new businesses, and it does also inform societies about steps that have to be taken in the future.", said Prof. Friend.
Nobel Prize Laureate Sir Konstantin Novoselov commented: "VinFuture Prize will contribute to the promotion of diversity and inclusion in the global scientific community".
See also
Japan Prize
Tang Prize
Breakthrough Prize
Turing Award
References
External links
2020 establishments in Vietnam
2020 in science
2020s awards
Academic awards
Awards by type of recipient
Awards established in 2020
International awards
Science and technology in Vietnam
Science awards honoring women
Vietnamese science and technology awards
Vingroup | VinFuture Prize | [
"Technology"
] | 1,561 | [
"Science and technology awards",
"Science awards honoring women",
"International science and technology awards"
] |
69,811,346 | https://en.wikipedia.org/wiki/Pressure%20sewer | A pressure sewer provides a method of discharging sewage from properties into a conventional gravity sewer or directly to a sewage treatment plant.
Pressure sewers are typically used where properties are located below the level of the nearest gravity sewer or are located on difficult terrain.
Operation
In a typical set-up, a receiving well is provided close to the properties being served so that all sewage can gravitate to the well. An electric macerator pump (also called a grinder pump) pumps the finely macerated sewage through a narrow diameter continuous plastic pipe which discharges into the nearest gravity sewer or treatment plant. The operation of the pump is controlled by a float switch in the pumping well. The pumping well is sized to allow for periods of power outage or pump maintenance.
The discharge pipe may be as small as 50 mm in diameter, carrying sewage at very high flow rates.
Pressure sewers are also used to collect the discharge from septic tanks and discharge this into the local gravity sewer to protect local ground water from contamination.
Advantages
Pressure sewers enable properties constructed below the nearest gravity main to connect to the local sewerage system avoiding the need for a septic tank or cesspit.
In areas where washouts or earthquakes are common, conventional earthenware or cast iron sewerage systems may be prone to breakage and leakage. The plastic discharge pipe of a pressure sewer is much more robust and can accommodate substantial movements in the ground without failing.
Costs can be lower than for conventional sewers since the pipework is much cheaper and there is no requirement for manholes or other intermediate infrastructure. Installation costs can also be very low as the pipes can be laid very close to the surface and may be installed using no-dig methods such as moling.
Disadvantages
The pumping well and pump controls require expert maintenance and repair should they fail.
Electricity is required to power the pump and this cost would typically fall on the local house-holder.
Although the pump well is usually designed to accommodate several days storage, failure of the pump or of the local electricity supply for an extended period would result in a local overflow.
References
Hydraulic engineering
Sewerage infrastructure | Pressure sewer | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 428 | [
"Hydrology",
"Water treatment",
"Sewerage infrastructure",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Hydraulic engineering"
] |
69,813,469 | https://en.wikipedia.org/wiki/Titan%20%28email%29 | Titan is a business email service founded by Bhavin Turakhia in 2018. The service may be accessed through the Titan website and through the web, Android, and iOS.
History
Titan was founded by Bhavin Turakhia, founder of Flock, CodeChef and Zeta, in 2018 to provide a suite of professional email services for small and medium businesses.
In August 2021, Titan received Series A funding from Automattic, valuing the startup at $300 million. Titan has around 100,000 active users, including users at educational institutions such as Eastern Florida State College. The startup aims to add another 100,000 accounts in the next year.
Titan is available to customers via website builders and domain registrars. It has partnerships with WordPress.com, HostGator Brazil, NameSilo, Hostinger, and Rumahweb.
Features
Follow Up Reminders - Allows the user to set up reminders for sending important emails and undo sent emails.
Read Receipts - User can use the read receipt to check if the recipient has read the email.
Import Email Data - User can import all their existing emails and contacts, including messages and email addresses of a previous account, to the newly created email account in Titan.
Email Templates - User can use templates to expedite their reply.
References
External links
Titan Official Website
Webmail
Email clients
Internet hosting
Free email hosting
Computer-mediated communication | Titan (email) | [
"Technology"
] | 282 | [
"Computer-mediated communication",
"Information systems",
"Computing and society"
] |
69,815,081 | https://en.wikipedia.org/wiki/Sargassum%20natans | Sargassum natans is a species of brown algae in the family Sargassaceae. In English the species goes by the common names common gulfweed, narrowleaf gulfweed, or spiny gulfweed.
It occurs in the Sargasso Sea. It is also pelagic and reproduces by fragmentation. Along with Sargassum fluitans, it has been blamed for plaguing the beach tourism industry in Yucatán and South Florida.
References
Fucales
Biota of the Atlantic Ocean
Taxa named by Carl Linnaeus
Plants described in 1753 | Sargassum natans | [
"Biology"
] | 112 | [
"Biota of the Atlantic Ocean",
"Biota by sea or ocean"
] |
69,815,407 | https://en.wikipedia.org/wiki/Constructive%20neutral%20evolution | Constructive neutral evolution (CNE) is a theory that seeks to explain how complex systems can evolve through neutral transitions and spread through a population by chance fixation (genetic drift). Constructive neutral evolution is a competitor for both adaptationist explanations for the emergence of complex traits and hypotheses positing that a complex trait emerged as a response to a deleterious development in an organism. Constructive neutral evolution often leads to irreversible or "irremediable" complexity and produces systems which, instead of being finely adapted for performing a task, represent an excess complexity that has been described with terms such as "runaway bureaucracy" or even a "Rube Goldberg machine".
The groundwork for the concept of CNE was laid by two papers in the 1990s, although it was first explicitly proposed by Arlin Stoltzfus in 1999. The first proposals for the role of CNE concerned the evolutionary origins of complex macromolecular machines such as the spliceosome, RNA editing machinery, supernumerary ribosomal proteins, chaperones, and more. Since then, and as an emerging trend in studies of molecular evolution, CNE has been applied to broader features of biology and evolutionary history, including some models of eukaryogenesis, the emergence of complex interdependence in microbial communities, and the de novo formation of functional elements from non-functional transcripts of junk DNA. Several approaches propose a combination of neutral and adaptive contributions in the evolutionary origins of various traits.
Many evolutionary biologists posit that CNE must be the null hypothesis when explaining the emergence of complex systems to avoid assuming that a trait arose for an adaptive benefit. A trait may have arisen neutrally, even if later co-opted for another function. This approach stresses the need for rigorous demonstrations of adaptive explanations when describing the emergence of traits. This avoids the "adaptationist fallacy" which assumes that all traits emerge because they are adaptively favoured by natural selection.
Principles
Excess capacity, presuppression, and ratcheting
Conceptually, there are two components A and B (e.g. two proteins) that interact with each other. A, which performs a function for the system, does not depend on its interaction with B for its functionality, and the interaction itself may have randomly arisen in an individual with the ability to disappear without an effect on the fitness of A. This present yet currently unnecessary interaction is therefore called an "excess capacity" of the system. A mutation may then occur which compromises the ability of A to perform its function independently. However, the A:B interaction that has already emerged sustains the capacity of A to perform its initial function. Therefore, the emergence of the A:B interaction "presuppresses" the deleterious nature of the mutation, making it a neutral change in the genome that is capable of spreading through the population via random genetic drift. Hence, A has gained a dependency on its interaction with B. In this case, the loss of B or the A:B interaction would have a negative effect on fitness and so purifying selection would eliminate individuals where this occurs. While each of these steps are individually reversible (for example, A may regain the capacity to function independently or the A:B interaction may be lost), a random sequence of mutations tends to further reduce the capacity of A to function independently and a random walk through the dependency space may very well result in a configuration in which a return to functional independence of A is far too unlikely to occur, making CNE a one-directional or "ratchet-like" process.
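The population-genetic ingredient of this scenario (the chance fixation of the gratuitous A:B interaction, and later of the presuppressed mutation, by drift) can be illustrated with a toy Wright–Fisher simulation. The sketch below is only a schematic demonstration that a single neutral variant fixes with probability roughly equal to its initial frequency, about 1/N; the population size and trial count are arbitrary assumptions and the model is not taken from any of the cited work.

```python
# Toy illustration (assumed parameters): neutral fixation by genetic drift,
# the chance process by which a gratuitous interaction can spread under CNE.

import random

def wright_fisher_fixation(pop_size, trials, seed=0):
    """Fraction of trials in which a single neutral mutant reaches fixation."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(trials):
        count = 1                           # one carrier of the new, neutral interaction
        while 0 < count < pop_size:
            p = count / pop_size
            # binomial resampling of the next generation; no fitness term, so
            # the variant neither helps nor harms its carriers
            count = sum(1 for _ in range(pop_size) if rng.random() < p)
        fixed += (count == pop_size)
    return fixed / trials

N = 100
print(f"observed fixation rate: {wright_fisher_fixation(N, 5000):.4f}  "
      f"(expected about 1/N = {1 / N:.4f})")
```

Most new neutral variants are lost, but the occasional one that fixes is enough, over long timescales and many loci, to let dependencies of the A:B type accumulate without any selective advantage.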
Biases on the production of variation
CNE models of systematic complexification may rely crucially on some systematic bias in the generation of variation. This is explained relative to the original set of CNE models as follows:
In the gene-scrambling and RNA pan-editing cases, and in the fragmentation of introns, the initial state of the system (unscrambled, unedited, unfragmented) is unique or rare with regard to some extensive set of combinatorial possibilities (scrambled, edited, fragmented) that may be reached by mutation and (possibly neutral) fixation. The resulting systemic bias drives a departure from the improbable initial state to one of many alternative states. In the editing model, a deletion:insertion mutational bias plays a subsidiary role. In the gene duplication model, as well as in the explanation for loss of self-splicing and for the origin of protein dependencies in splicing, it is assumed that mutations that reduce activity or affinity or stability are much more common than those with the opposite effect. The resulting directionality consists in duplicate genes undergoing reductions in activity, and introns losing self-splicing ability, becoming dependent on available proteins as well as trans-acting intron fragments.
That is, some of the models have a component of long-term directionality that reflects biases in variation. A population-genetic effect of bias in the introduction process, which appeared as a verbal theory in the original CNE proposal, was later articulated and demonstrated formally (see Bias in the introduction of variation). This kind of effect does not require neutral evolution, lending credence to the suggestion that the components of CNE models may be considered in a general theory of complexification not specifically linked to neutrality.
Subfunctionalization
A case of CNE is subfunctionalization. The concept of subfunctionalization is that one original (ancestral) gene gives rise to two paralogous copies of that gene, where each copy can only carry out part of the function (or subfunction) of the original gene. First, a gene undergoes a gene duplication event. This event produces a new copy of the same gene known as a paralog. After the duplication, deleterious mutations are accrued in both copies of the gene. These mutations may compromise the capacity of the gene to produce a product that can complete the desired function, or it may result in the product fully losing one of its functions. In the first scenario, the desired function may still be carried out because the two copies of the gene together (as opposed to having only one) can still produce sufficient product for the job. The organism is now dependent on having two copies of this gene which are both slightly degenerated versions of their ancestor. In the second scenario, the genes may undergo mutations where they lose complementary functions. That is to say, one protein may lose only one of its two functions whereas the other protein only loses the other of its two functions. In this case, the two genes now only carry out the individual subfunctions of the original gene, and the organism is dependent on having each gene to carry out each individual subfunction.
Paralogues that functionally interact to maintain the ancestral function can be termed "paralogous heteromers". One high-throughput study confirmed that the rise of such interactions between paralogous proteins, as one possible long-term fate of paralogues, was frequent in yeast, and the same study further found that paralogous heteromers are common in eukaryotic protein-protein interaction (PPI) networks. One specific mechanism for the evolution of paralogous heteromers is the duplication of an ancestral protein that interacts with other copies of itself (a homomer). To inspect the role of this process in the origins of paralogous heteromers, it was found that ohnologs (paralogues that arise from whole-genome duplications) that form paralogous heteromers in Saccharomyces cerevisiae (budding yeast) are more likely to have homomeric orthologues than ohnologs in Schizosaccharomyces pombe. Similar patterns were found in the PPI networks of humans and the model plant Arabidopsis thaliana.
Examples of CNE
Identification and testability
To positively identify features as having evolved through CNE, several approaches are possible. The basic notion of CNE is that features which have evolved through CNE are complex ones but do not provide an advantage in fitness over their simpler ancestors. That is to say, an unnecessary complexification has occurred. In some cases, phylogeny can be used to inspect ancestral versions of systems and to see if those ancestral versions were simpler and, if they were, whether the rise in complexity came with an advantage in fitness (i.e. acted as an adaptation). While it is not straightforward to identify how adaptive the emergence of a complex feature was, some methods are available. If the more complex system has the same downstream effects in its biochemical pathway as the ancestral, simpler system, this suggests that the complexification did not carry with it any increase in fitness. This approach is simpler when analyzing complex traits which evolved more recently and are taxonomically restricted to a few lineages, because "derived features can be more easily compared to their sisters and inferred ancestors". The 'gold standard' approach for identifying cases of CNE involves direct experimentation, where ancestral versions of genes and systems are reconstructed and their properties directly identified. The first example of this involved analysis of components of a V-ATPase proton pump in fungal lineages.
RNA editing
RNA editing systems have patchy phylogenetic distributions, indicating that they are derived traits. RNA editing is required when a genome (most often that of the mitochondria) needs to have its mRNA edited through various substitutions, deletions, and insertions prior to translation. Guide RNA molecules derived from separate semicircular strands of DNA provide the correct sequence for the RNA editing complex to make the corresponding edits. The RNA editing complex in Kinetoplastida can comprise over 70 proteins in some taxonomically restricted lineages, and mediate thousands of edits. Another taxonomically restricted case of a different form of RNA editing system is found in land plants. In kinetoplastids, RNA editing involves the addition of thousands of nucleotides and deletion of several hundreds. However, the necessity of this highly complex system is questionable. The large majority of organisms do not rely on RNA editing systems, and in the ones that do have it, the need for it is unclear as the optimal solution would be for the DNA sequence to not contain the wrong (or missing) nucleotides at several thousand sites to begin with. Furthermore, it is difficult to argue that the RNA editing system emerged only in response and to correct a genome faulty to this degree, as such a genome would have been highly deleterious to the host and eliminated through purifying (negative) selection to begin with. However, a scenario where a primitive RNA editing system gratuitously arose prior to the introduction of errors into the genome is more parsimonious. Once the RNA editing system arose, the original mitochondrial genome would be able to tolerate previously deleterious substitutions, deletions, and additions without an effect on fitness. Once a sufficient number of these deleterious mutations took place, the organism would by this point have developed a dependency on the RNA editing system to faithfully correct any inaccurate sequences.
Spliceosomal complex
Few if any evolutionary biologists believe that the initial spread of introns through a genome and within the midst of a variety of genes could have functioned as an evolutionary benefit for the organism in question. Rather, the spread of an intron into a gene in an organism without a spliceosome would be deleterious, and purifying selection would eliminate individuals in which this occurs. However, if a primitive spliceosome emerged prior to the spread of introns into a host's genome, the subsequent spread of introns would not be deleterious, as the spliceosome would be capable of splicing out the introns and so allowing the cell to accurately translate the messenger RNA transcript into a functional protein. The five small nuclear RNAs (snRNAs) that act to splice out introns from genes are thought to originate from group II introns, and so it may be that these group II introns first spread and fragmented into "five easy pieces" in a host, where they formed small trans-acting precursors to the five main modern snRNAs used in splicing. These precursors had the capacity to splice out other introns within a gene sequence, which then enabled introns to spread into genes without a deleterious effect.
Microbial communities
Over the course of evolution, many microbial communities have emerged in which individual species are not self-sufficient and require the mutualist presence of other microbes to generate crucial nutrients for them. These dependent microbes have experienced "adaptive gene loss" because they are able to derive specific complex nutrients from their environment instead of having to synthesize them directly. For this reason, many microbes have developed complex nutritional requirements that have prevented their cultivation in laboratory conditions. This highly dependent state of many microbes on other organisms is similar to how parasites undergo significant simplification when a large variety of their nutritional needs are available from their hosts. J. Jeffrey Morris and coauthors explained this through the "Black Queen Hypothesis". As a counterpart, W. Ford Doolittle and T. D. P. Brunet proposed the "Gray Queen Hypothesis" to explain the emergence of these communities with CNE. Initially, loss of genes required for synthesizing important nutrients would be detrimental to the organism and so eliminated. However, in the presence of other species where these nutrients are freely available, mutations that degenerate the genes responsible for synthesizing important nutrients are no longer deleterious because these nutrients can simply be imported from the environment. Therefore, there is a "presuppression" of the deleterious nature of these mutations. Because they are no longer deleterious, mutations in these genes freely accumulate and render these organisms dependent on the presence of complementary microbes for supplying their nutritional needs. This simplification of individual microbial species in a community gives rise to a higher community-level complexity and interdependence.
Null hypothesis
CNE has also been put forward as the null hypothesis for explaining complex structures, and thus adaptationist explanations for the emergence of complexity must be rigorously tested on a case-by-case basis against this null hypothesis prior to acceptance. Grounds for invoking CNE as a null include that it does not presume that changes offered an adaptive benefit to the host or that they were directionally selected for, while maintaining the importance of more rigorous demonstrations of adaptation when it is invoked, so as to avoid the excesses of adaptationism criticized by Gould and Lewontin.
Eugene Koonin has argued that for evolutionary biology to be a strictly "hard" science with a solid theoretical core, null hypotheses need to be incorporated and alternatives need to falsify the null model before being accepted. Otherwise, "just-so" adaptive stories may be posited for the explanation of any trait or feature. For Koonin and others, constructive neutral evolution plays the role as this null.
See also
Adaptationism
Bias in the introduction of variation
Neutral theory of molecular evolution
Subfunctionalization
References
Molecular evolution
Neutral theory
Population genetics | Constructive neutral evolution | [
"Chemistry",
"Biology"
] | 3,117 | [
"Evolutionary processes",
"Molecular evolution",
"Neutral theory",
"Molecular biology",
"Non-Darwinian evolution",
"Biology theories"
] |
75,676,871 | https://en.wikipedia.org/wiki/Elisa%20Torres%20Durney | Elisa Torres Durney is a Chilean social entrepreneur and STEM activist. She is the founder and executive director of Girls in Quantum. In 2023 she was named one of the top 10 finalists for the Global Student Prize.
STEM Activism
Torres's interests have always been broad, taking her into the world of theater and the arts from a young age. But science and languages are her passion. She first became interested in quantum computing when she enrolled in IBM's Qubit x Qubit Coding School, where she met and subsequently invited women and young people from around the globe to be a part of Girls in Quantum.
Additionally, at the age of sixteen, she established Girls in Quantum, an international network of students whose goal is to provide girls and adolescents with free access to quantum computing, regardless of their place of birth, nationality, or financial means.
Since then, she has given guest lectures at both domestic and international STEM conferences, while broadening her knowledge with different programs, such as Young Global Scholars at Yale University, Future Scholar Program at the University of Cambridge, and the Junior Academy of the New York Academy of Sciences.
In April 2023 she collaborated with an initiative that was created to celebrate World Quantum Day 2023 and answer quantum science related questions submitted to Q-12 partners by teachers and students. The National Q-12 Education Partnership receives support from NSF, the Boeing Corporation, NASA, Caltech, University of Illinois Urbana-Champaign and IBM.
Torres was a member of the technical staff of the "Digital Revolution" panel of the National Council of Science, Technology, Knowledge, and Innovation for Development (CTCI), along with leading academics and professionals from the technology industry. In representation of Chilean students, the technical panel is part of the "Chile makes the Future" (#ChileCreaFuturo) trend anticipation exercise, which aims to develop recommendations for the best design of public policies. Results were presented to the president of the Republic of Chile in June 2023.
In November 2023 Torres was part of the organizing committee of the first Spanish-speaking quantum computing school Qiskit Fall fest.
Public Speaking
Elisa has spoken at several conferences around the world, including Quantum Latino, IBM Innovation Day, El Mercurio's Protagonists of the Future, Women's Innovation Day of the Women Economic Forum, the International Festival of Social Innovation, Quantum Basel, EY Strategic Forum, TEDx, The Lancet, IEEE STEM Summit 2023, Economist Impact Commercialising Quantum, among others.
Awards and recognition
Her work in scientific dissemination has earned recognition from:
Forbes Chile - 30 Most Powerful Women 2023
Yale Young Global Scholar
Appreciation letter from the President Office of Science and Technology Policy. (The White House, US Government)
G100 Country Chair STEM Education (Women Economic Forum)
Outstanding Performance Award (New York Academy of Sciences)
50 best students in the world
100 Women Leaders 2022, El Mercurio and Mujeres Empresarias
References
External links
Interview Elisa Torres with QuantumBasel Symposium 2023
Elisa Torres at Global Student Prize 2023 Top 10 Finalist
Living people
Stem cell researchers
American people of Chilean descent
Chilean activists
Chilean social scientists
Chilean women scientists
Year of birth missing (living people) | Elisa Torres Durney | [
"Biology"
] | 654 | [
"Stem cell researchers",
"Stem cell research"
] |
75,677,111 | https://en.wikipedia.org/wiki/HD%2090362 | HD 90362 (HR 4092; 47 G. Sextantis) is a solitary star located in the equatorial constellation Sextans. It is faintly visible to the naked eye as a reddish-orange-hued point of light with an apparent magnitude of 5.56. Gaia DR3 parallax measurements imply a distance of approximately 460 light-years, and it is receding with a heliocentric radial velocity of . At its current distance, HD 90362's brightness is diminished by an interstellar extinction of 0.19 magnitudes and it has an absolute magnitude of +0.19.
HD 90362 is an old population II star with a stellar classification of K6 III Fe −0.5, indicating that it is an evolved K-type giant that has exhausted hydrogen at its core and left the main sequence along with a mild spectral underabundance of iron. It is currently on the asymptotic giant branch, generating energy via the fusion of hydrogen and helium shells around an inert carbon core. It has only 44% the mass of the Sun but at the age of 11 billion years, it has expanded to 41.1 times the radius of the Sun. It radiates 252 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of . HD 90362 is metal deficient with an iron abundance of [Fe/H] = −0.1 or 79.4% of the Sun's and it spins slowly with a projected rotational velocity of approximately .
The variability of the star was first detected in 1997 by the Hipparcos mission. It found variations between 5.69 and 5.72 in the Hipparcos passband. As of 2004, its variability has not been confirmed. HD 90362 has an optical companion located 142.6" away along a position angle of 100° as of 2010. It was first observed by M. Scaria in 1981.
References
K-type giants
Asymptotic-giant-branch stars
Double stars
Population II stars
Sextans
Sextantis, 47
BD-06 03146
090362
051046
4092
00036881111 | HD 90362 | [
"Astronomy"
] | 453 | [
"Sextans",
"Constellations"
] |
75,679,597 | https://en.wikipedia.org/wiki/Indium%20arsenide%20antimonide | Indium arsenide antimonide, also known as indium antimonide arsenide or InAsSb (InAs1-xSbx), is a ternary III-V semiconductor compound. It can be considered as an alloy between indium arsenide (InAs) and indium antimonide (InSb). The alloy can contain any ratio between arsenic and antimony. InAsSb refers generally to any composition of the alloy.
Preparation
InAsSb films have been grown by molecular beam epitaxy (MBE), metalorganic vapor phase epitaxy (MOVPE) and liquid phase epitaxy (LPE) on gallium arsenide and gallium antimonide substrates. It is often incorporated into layered heterostructures with other III-V compounds.
Thermodynamic stability
Between 524 °C and 942 °C (the melting points of pure InSb and InAs, respectively), InAsSb can exist in a two-phase liquid-solid equilibrium, depending on the temperature and average composition of the alloy.
InAsSb possesses an additional miscibility gap at temperatures below approximately 503 °C. This means that intermediate compositions of the alloy below this temperature are thermodynamically unstable and can spontaneously separate into two phases: one InAs-rich and one InSb-rich. This limits the compositions of InAsSb that can be obtained by near-equilibrium growth techniques, such as LPE, to those outside of the miscibility gap. However, compositions of InAsSb within the miscibility gap can be obtained with non-equilibrium growth techniques, such as MBE and MOVPE. By carefully selecting the growth conditions and maintaining relatively low temperatures during and after growth, it is possible to obtain compositions of InAsSb within the miscibility gap that are kinetically stable.
Electronic properties
The bandgap and lattice constant of InAsSb alloys are between those of pure InAs (a = 0.606 nm, Eg = 0.35 eV) and InSb (a = 0.648 nm, Eg = 0.17 eV). Over all compositions, the band gap is direct, like in InAs and InSb. The direct bandgap displays strong bowing, reaching a minimum with respect to composition at approximately x = 0.62 at room temperature and lower temperatures. The following empirical relationship has been suggested for the direct bandgap of InAsSb in eV as a function of composition (0 < x < 1) and temperature (in Kelvin):
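One standard way to write such a relation, shown here as an illustrative sketch of the quadratic bowing form used for ternary III-V alloys rather than the exact published fit, is

$$E_g(x, T) = (1 - x)\, E_g^{\mathrm{InAs}}(T) + x\, E_g^{\mathrm{InSb}}(T) - C\, x (1 - x),$$

where $E_g^{\mathrm{InAs}}(T)$ and $E_g^{\mathrm{InSb}}(T)$ are the temperature-dependent band gaps of the binary endpoints (commonly described by Varshni-type expressions, whose coefficients are not reproduced here) and $C$ is the bowing parameter.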
This equation is plotted in the figures, using a suggested bowing parameter of C = 0.75 eV. Slightly different relations have also been suggested for Eg as a function of composition and temperature, depending on the material quality, strain, and defect density.
Applications
Because of its small direct bandgap, InAsSb has been extensively studied over the last few decades, predominantly for use in mid- to long-wave infrared photodetectors that operate at room temperature and cryogenic temperatures. InAsSb is used as the active material in some commercially available infrared photodetectors. Depending on the heterostructure and detector configuration that is used, InAsSb-based detectors can operate at wavelengths ranging from approximately 2 μm to 11 μm.
See also
Mercury cadmium telluride - a ternary II-VI compound that has a widely tunable bandgap and is used in commercial mid- and long-wave infrared photodetectors.
Aluminium arsenide antimonide - a ternary III-V compound that is used as a barrier material in some InAsSb-based photodetectors.
References
External links
Properties of InAsSb
Antimonides
Arsenides
Indium compounds
III-V compounds | Indium arsenide antimonide | [
"Chemistry"
] | 780 | [
"III-V compounds",
"Inorganic compounds"
] |
75,681,849 | https://en.wikipedia.org/wiki/Paramu%20Mafongoya | Paramu Mafongoya is a Zimbabwean professor at the University of KwaZulu-Natal (UKZN) in South Africa, where he specialises in agriculture, earth and environmental sciences. He serves as the South African Research Chair (SARChI) in Agronomy and Rural Development at UKZN. He is affiliated with the African Academy of Sciences (AAS) and the Zimbabwe Academy of Sciences (ZAS). His work in agricultural research, development, education, and integrated natural resources management extends over three decades. He has authored more than 290 publications, including 190 articles in peer-reviewed journals, 49 chapters in peer-reviewed books, and 2 books. His research areas include agronomy, climate science, soil science, and agroforestry.
Early life and education
Born on 23 October 1961 in Zimbabwe, Mafongoya completed his BSc (Hons) in Agriculture at the University of Zimbabwe in 1984. He then studied in the United Kingdom, earning his MSc in Applied Plant Sciences and his MSc in Agricultural Development from Wye College, University of London, in 1988 and 1990, respectively. He later earned his PhD in Agroforestry from the University of Florida in the United States in 1995.
Career and research
After earning his PhD, Mafongoya worked as a senior lecturer and head of the Department of Soil Science and Agricultural Engineering at the University of Zimbabwe from 1995 to 1999. He then joined the International Centre for Research in Agroforestry (ICRAF) as a principal scientist and regional coordinator for Southern Africa from 1999 to 2007. He also held positions at the Food and Agriculture Organization (FAO) of the United Nations, the International Atomic Energy Agency (IAEA), and the International Fund for Agricultural Development (IFAD).
In 2007, Mafongoya joined UKZN as a professor of agriculture, earth and environmental sciences. Since 2015, he has served as the SARChI chair in agronomy and rural development. He leads a research group that focuses on tropical resources, ecology, environment and climate, crop-livestock integration, and sustainable agriculture. He has mentored over 100 postgraduate students and postdoctoral fellows. He has collaborated with various national and international institutions and networks, including the AAS, the ZAS, the InterAcademy Partnership, the Network of African Science Academies, and the African Union.
Selected publications
Mafongoya has authored over 290 works, including 190 articles in peer-reviewed journals, 49 chapters in peer-reviewed books, and 2 books. Some of his most cited works include:
Awards and honours
Paramu Mafongoya has received several recognitions for his contributions to science and society. He was named a Fellow of the Zimbabwe Academy of Sciences in 2013 and the African Academy of Sciences in 2018. He served as the vice-president of the Zimbabwe Academy of Sciences from 2017 to 2019. He was the president of the Soil Science Society of South Africa from 2015 to 2017, and its vice-president from 2013 to 2015. He became a member of the Academy of Science of South Africa in 2012, The World Academy of Sciences in 2010, the International Union of Soil Sciences in 2008, the Soil Science Society of America in 2007, and the American Society of Agronomy in 2007.
References
External links
ResearchGate profile
1961 births
Living people
Zimbabwean academics
Academic staff of the University of KwaZulu-Natal
Agronomy
Environmental scientists
Fellows of the African Academy of Sciences
Fellows of the Zimbabwe Academy of Sciences
University of Zimbabwe alumni
Alumni of the University of London
University of Florida alumni
Members of the Academy of Science of South Africa
Alumni of Wye College
"Environmental_science"
] | 740 | [
"Environmental scientists"
] |
75,682,462 | https://en.wikipedia.org/wiki/Collen%20Masimirembwa | Collen Masimirembwa (born 1967) is a biomedical pharmacologist from Zimbabwe. He is a Distinguished Professor of Clinical Pharmacology at the University of Witswatersrand, and serves as the president and chief scientific officer at the African Institute of Biomedical Science and Technology (AiBST). His research in Africa has contributed to the field of pharmacogenetics, particularly in understanding the genetic diversity and drug response of African populations. In 2018, he was awarded the HUGO Africa Award. He is a fellow of the Calestous Juma Leadership Fellowship, African Academy of Sciences (AAS) and the Zimbabwe Academy of Sciences (ZAS).
He has authored over 100 papers in peer-reviewed journals and book chapters and has guided numerous postgraduate students and postdoctoral fellows.
Early life and education
Masimirembwa was born in 1967 in Zimbabwe and received his BSc (Hons) and DPhil degrees in biochemistry from the University of Zimbabwe in 1993. Fascinated by the then-emerging field of pharmacogenetics, he conducted studies at the Karolinska Institute in Sweden in 1995, where he earned his PhD in Medical Biochemistry and Biophysics. His doctoral research was centered on the molecular mechanisms of drug metabolism and toxicity.
Career and research
After obtaining his PhD, Masimirembwa returned to the University of Zimbabwe and served as a senior lecturer and head of the Department of Biochemistry from 1992 to 1997. He later joined AstraZeneca R&D in Sweden as a principal scientist and project leader, focusing on drug discovery and development in various areas such as cardiovascular, gastrointestinal, and infectious diseases. He played a key role in establishing the AstraZeneca Africa Pharmacogenetics Research Network, which aimed to study the genetic diversity and drug response of African populations.
Later in 2008, Masimirembwa founded and assumed the position of president and chief scientific officer at the African Institute of Biomedical Science and Technology (AiBST), a non-profit research institute in Zimbabwe that focuses on biomedical science and technology, with an emphasis on pharmacogenetics and clinical pharmacology.
At the institute, he manages collaborations with various academic, industry, and government partners. He also leads the African Pharmacogenomics Consortium (APC), a network that aims to advance pharmacogenomics research and applications in Africa. He is also a distinguished professor of health sciences research at the University of the Witwatersrand (South Africa).
Masimirembwa is a fellow of the Zimbabwe Academy of Sciences (ZAS) and the African Academy of Sciences (AAS) among many other honors. In November 2021, Masimirembwa was selected as a Calestous Juma Science Leadership Fellow by the Bill and Melinda Gates Foundation, to develop a research and innovation ecosystem, train scientists, and create centres of excellence in genomic medicine research to enhance Africa’s sustainable development in genomic and pharmaceutical capabilities.
Selected publications
Awards
Collen Masimirembwa was named a Fellow of the Zimbabwe Academy of Sciences in 2017 and the African Academy of Sciences in 2018. In 2018, he was awarded the HUGO Africa Award. The Grand Challenges Africa Award was presented to him in 2016. He was the recipient of the EDCTP Senior Fellowship Award in 2014 and the Wellcome Trust Senior Fellowship Award in 2012. AstraZeneca R&D acknowledged his work with the Global Scientific Award in 2006 and the Global Innovation Award in 2005. Additionally, he received the Calestous Juma Leadership Fellowship in 2021 and the PMCW 2025 Pioneer Award.
References
Living people
Zimbabwean academics
University of Zimbabwe alumni
Karolinska Institute alumni
Pharmacologists
Fellows of the African Academy of Sciences
Fellows of the Zimbabwe Academy of Sciences
Medical researchers
Zimbabwean scientists
1967 births | Collen Masimirembwa | [
"Chemistry"
] | 784 | [
"Pharmacology",
"Biochemists",
"Pharmacologists"
] |
75,685,993 | https://en.wikipedia.org/wiki/Hygrocybe%20andersonii | Hygrocybe andersonii is a species of agaric (gilled mushroom) in the family Hygrophoraceae. It is sometimes referred to by common names gulfshore waxcap and clustered dune hygrocybe. The species has a North American distribution, occurring mainly on sand dune shorelines along the Gulf Coast of the United States. It was formally described and published by William Cibula and Nancy S. Weber in 1996, with the specific epithet honoring Mississippi watercolorist Walter Inglis Anderson.
Description
Basidiocarps are agaricoid, the cap convex with a flattened to depressed disc measuring 1.3 to 3.3 cm across. The cap surface is smooth to slightly scurfy, varying from yellow orange to scarlet, becoming reddish brown to almost black with age. The lamellae (gills) are waxy, yellow orange to deep orange, becoming blackish with age. The stipe (stem) is smooth, colored like the cap, yellow towards base, lacking a ring. The spore print is white, the spores (under a microscope) smooth, rod-shaped with distinct projection, inamyloid, and hyaline, measuring about 16 to 19 by 3.8 to 5.6 μm.
Distribution and habitat
The gulfshore waxcap is found in North America, occurring exclusively along the Gulf Coast on shorelines. The mycelium spreads between the grains of sand dunes, and is associated with seaside rosemary (Ceratiola ericoides). This species is believed to play an important part in dune stabilization of barrier islands.
Some research suggests waxcaps are neither mycorrhizal nor saprotrophic but may be associated with mosses.
Etymology
William Cibula wrote that the name of the species was to honor Walter Inglis Anderson who "first encountered and painted this Hygrocybe in 1960".
The work he references in the journal Mycologia as "Watercolor No. 416, Anderson collection" may be "Dunes - Horn Island" (1958).
See also
List of Hygrocybe species
References
Fungi of North America
Fungi described in 1996
andersonii
Fungus species | Hygrocybe andersonii | [
"Biology"
] | 448 | [
"Fungi",
"Fungus species"
] |
75,689,349 | https://en.wikipedia.org/wiki/Pixel%209 | The Pixel 9, Pixel 9 Pro, and Pixel 9 Pro XL are a group of Android smartphones designed, developed, and marketed by Google as part of the Google Pixel product line. They serve as the successor to the Pixel 8 and Pixel 8 Pro, respectively. Sporting a redesigned appearance and powered by the fourth-generation Google Tensor system-on-chip, the phones are heavily integrated with Gemini-branded artificial intelligence features.
The Pixel 9, Pixel 9 Pro, and Pixel 9 Pro XL were officially announced on August 13, 2024, at the annual Made by Google event, and were released in the United States on August 22 and September 4.
History
The Pixel 9 series was approved by the Federal Communications Commission (FCC) in July 2024. After previewing the Pro model the same month, Google officially announced the Pixel 9, Pixel 9 Pro, and Pixel 9 Pro XL on August 13, alongside the Pixel 9 Pro Fold and Pixel Watch 3, at the annual Made by Google event. Numerous observers noted the unusually early timing of the launch event, which was traditionally held in October after Apple's annual launch of the new iPhone. Commentators described this as an attempt to "outshine" Apple, its longtime rival, and demonstrate its artificial intelligence (AI) prowess. Several also took note of Google's unusually frequent veiled attacks targeting Apple. All three phones became available for pre-order the same day; the Pixel 9 and Pixel 9 Pro XL were made available on August 22, while the Pro became available on September 4, the latter alongside the Pixel 9 Pro Fold, in 32 countries.
Specifications
Design
The Pixel 9, Pixel 9 Pro, and Pixel 9 Pro XL feature a redesigned appearance while retaining the overall design language that began with the Pixel 6 series, with the edges now flat rather than curved and the camera bar taking the shape of "an elongated, free-floating [...] oval". They are each available in four colors:
Hardware
In a departure from previous generations, the Pixel 9 series was offered in three models: the base model, a "Pro" model, and a new "Pro XL" model. The Pixel 9 and Pixel 9 Pro are near-identical in size, with a screen size, while the Pixel 9 Pro XL is slightly larger at . A key distinction between the base and Pro models lies in the camera setup, with the higher-end models sporting a 48-megapixel telephoto rear camera in addition to the standard 50- and 48-megapixel wide and ultrawide lenses; the Pro models also include a 42-megapixel ultrawide front camera compared to 10.5 megapixels on the base.
All three phones are powered by the fourth-generation Google Tensor system-on-chip (SoC), marketed as "Google Tensor G4", and the Titan M2 security co-processor. The upgraded Samsung Exynos 5400 modem on the new Tensor chip enhances the Pixel 9's satellite connectivity, enabling users to contact emergency services via satellite, similar to the feature introduced by Apple on the iPhone 14; the Pixel 9 is the first Android phone to be equipped with this technology. For the feature, dubbed "Satellite SOS", Google partnered with satellite network provider Skylo and SOS dispatch center Garmin, and it was made available for free for two years. Tensor G4 is also the first SoC to run Gemini Nano, a version of the Gemini large language model (LLM), with multimodality.
Software
As with prior Pixel generations, the Pixel 9 series is equipped with numerous AI-powered features, with the Associated Press calling it a "vessel for the AI technology that is expected to reshape the way people live and work". Google dedicated the first half-hour of its launch event to discussing its advances in the field before unveiling its new devices. Gemini, a generative AI–powered chatbot launched in 2023 in response to OpenAI's ChatGPT, was frequently spotlighted, replacing the Google Assistant as the new default virtual assistant on Pixel and integrating heavily into the Pixel 9 series. In order to facilitate on-device AI processing, the RAM on the Pixel 9 series was substantially increased. Google also debuted Gemini Live, a new voice chat mode, alongside the Imagen 3 text-to-image model.
Other AI-powered features included Pixel Studio, an image generation app; Pixel Screenshots, a screenshot management and analysis app; Add Me, the ability to retroactively add subjects to photos; Pixel Weather, a new weather forecast app; Call Notes, which summarizes phone calls while running on-device; and miscellaneous camera updates. Breaking with tradition, the Pixel 9 series was shipped with the year-old Android 14 rather than Android 15, likely due to the earlier-than-usual timeframe; the phones were updated with Android 15 via a "Pixel Drop" software update, formerly known as Feature Drops, on October 15. Continuing the Pixel 8's trend, the phones will receive seven years of major OS upgrades, with support extending to 2031.
Marketing
An "after party" livestream hosted by actress Keke Palmer and featuring celebrity guest appearances followed the Made by Google event. Days after the phones' launch, Google generated controversy after several social media influencers part of the seven-year-old #TeamPixel marketing program posted screenshots of a new clause stipulating that participants must not show preference for competitors when creating content with the Pixel 9. Missing context led to confusion online regarding the extent of the restriction, which only applied to #TeamPixel influencers. Google later apologized and removed the clause from the agreement.
Reception
In his initial reaction to the Pixel 9 series, Android Police's Rajesh Pandey praised the overall design but disliked the iPhone-esque flat edges and polished metal frame. His colleague Taylor Kerns questioned the absence of an "XL" version of the base model, while Rebecca Isaacs of Forbes welcomed the addition of a small-sized Pro model and the enhanced build quality. Pandey and Kerns' colleague Will Sattelberg concurred but had mixed reactions to the AI-powered features. Allison Johnson of The Verge was impressed by the camera features, writing in a headline, "The Pixel 9 Pro XL showed me the future of AI photography". Writing for Mashable, Kimberly Gedeon was drawn to the design of the 9 Pro XL, praising the upgraded Super Res Zoom feature and AI-powered features. PCMag's Iyaz Akhtar called the rear design of the phones "divisive" but "sleek". Kyle Barr of Gizmodo and Philip Michaels of Tom's Guide both found themselves particularly attracted to the Pixel Screenshots app. Kerry Wan of ZDNET predicted that the phones would be a "sleeper hit".
The Pixel 9's Tensor G4 processor has also received mixed reviews. While it was praised for improved AI capabilities, some have criticized its poor efficiency under heavy load and lack of performance improvements over the Tensor G3, especially when compared to other flagship processors at the time. Soniya Jobanputra, a lead member of the Pixel's product management team, told The Financial Express that the G4 was not designed to "beat some specific benchmark that's out there. We're designing it to meet our use cases".
Notes
References
Further reading
External links
Pixel 9
Pixel 9 Pro and 9 Pro XL
Made by Google 2024 (archived)
Android (operating system) devices
Flagship smartphones
Google hardware
Google Pixel
Mobile phones introduced in 2024
Mobile phones with 4K video recording
Mobile phones with 8K video recording
Mobile phones with multiple rear cameras | Pixel 9 | [
"Technology"
] | 1,569 | [
"Flagship smartphones"
] |
75,689,664 | https://en.wikipedia.org/wiki/Donsker%20classes | A class of functions is considered a Donsker class if it satisfies Donsker's theorem, a functional generalization of the central limit theorem.
Definition
Let $\mathcal{F}$ be a collection of square-integrable functions on a probability space $(\mathcal{X}, \mathcal{A}, P)$. The empirical process $\mathbb{G}_n$ is the stochastic process on the set $\mathcal{F}$ defined by
$$\mathbb{G}_n f = \sqrt{n}\,(\mathbb{P}_n - P)f = \frac{1}{\sqrt{n}} \sum_{i=1}^{n} \bigl(f(X_i) - P f\bigr), \qquad f \in \mathcal{F},$$
where $\mathbb{P}_n$ is the empirical measure based on an iid sample $X_1, \ldots, X_n$ from $P$.
The class $\mathcal{F}$ of measurable functions is called a Donsker class if the empirical process $\{\mathbb{G}_n f : f \in \mathcal{F}\}$ converges in distribution to a tight Borel measurable element in the space $\ell^{\infty}(\mathcal{F})$.
By the central limit theorem, for every finite set of functions $f_1, \ldots, f_k \in \mathcal{F}$, the random vector $(\mathbb{G}_n f_1, \ldots, \mathbb{G}_n f_k)$ converges in distribution to a multivariate normal vector as $n \to \infty$. Thus the class $\mathcal{F}$ is Donsker if and only if the sequence $\mathbb{G}_n$ is asymptotically tight in $\ell^{\infty}(\mathcal{F})$.
Examples and Sufficient Conditions
Classes of functions which have a finite Dudley's entropy integral are Donsker classes. This includes empirical distribution functions, which are formed from the class of indicator functions $f_t(x) = \mathbf{1}\{x \le t\}$, $t \in \mathbb{R}$, as well as parametric classes over bounded parameter spaces. More generally, any VC class is also a Donsker class.
Properties
Classes of functions formed by taking infima or suprema of functions in a Donsker class also form a Donsker class.
Donsker's Theorem
Donsker's theorem states that the empirical distribution function, when properly normalized, converges weakly to a Brownian bridge—a continuous Gaussian process. This is significant as it assures that results analogous to the central limit theorem hold for empirical processes, thereby enabling asymptotic inference for a wide range of statistical applications.
The concept of the Donsker class is influential in the field of asymptotic statistics. Knowing whether a function class is a Donsker class helps in understanding the limiting distribution of empirical processes, which in turn facilitates the construction of confidence bands for function estimators and hypothesis testing.
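The convergence can be illustrated with a small simulation. The following sketch (an illustrative example with arbitrary sample sizes, grid and seed, not code taken from any cited reference) generates realizations of the uniform empirical process $\sqrt{n}\,(F_n(t) - t)$ and checks that its pointwise variance approaches the Brownian bridge variance $t(1-t)$:

```python
import numpy as np

def empirical_process(sample, grid):
    """Evaluate sqrt(n) * (F_n(t) - t) for a uniform(0, 1) sample on a grid."""
    n = len(sample)
    # F_n(t): fraction of observations <= t, computed for every grid point
    f_n = np.searchsorted(np.sort(sample), grid, side="right") / n
    return np.sqrt(n) * (f_n - grid)

rng = np.random.default_rng(0)
n, n_paths = 2000, 500
grid = np.linspace(0.0, 1.0, 101)

# Simulate many independent realizations of the normalized empirical process
paths = np.array([empirical_process(rng.uniform(size=n), grid) for _ in range(n_paths)])

# Donsker's theorem: the limiting process is a Brownian bridge with Var = t * (1 - t)
empirical_var = paths.var(axis=0)
bridge_var = grid * (1.0 - grid)

print("max |empirical variance - t(1-t)|:", np.abs(empirical_var - bridge_var).max())
```

As the sample size and the number of simulated paths grow, the reported deviation shrinks, consistent with weak convergence to the Brownian bridge.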
See also
Empirical process
Central limit theorem
Brownian bridge
Glivenko–Cantelli theorem
Vapnik–Chervonenkis theory
Weak convergence (probability)
References
Probability theory
Central limit theorem | Donsker classes | [
"Mathematics"
] | 421 | [
"Central limit theorem",
"Theorems in probability theory"
] |
75,689,762 | https://en.wikipedia.org/wiki/Dudley%27s%20entropy%20integral | Dudley's entropy integral is a mathematical concept in the field of probability theory that describes a relationship involving the entropy of certain metric spaces and the concentration of measure phenomenon. It is named after the mathematician R. M. Dudley, who introduced the integral as part of his work on the uniform central limit theorem.
Definition
Dudley's entropy integral is defined for a metric space $(T, d)$ equipped with a probability measure. Given a set $T$ and an $\varepsilon$-covering, the entropy of $T$ is the logarithm of the minimum number of balls of radius $\varepsilon$ required to cover $T$. Dudley's entropy integral is then given by the formula
$$\int_0^{\infty} \sqrt{\log N(\varepsilon, T, d)} \, d\varepsilon,$$
where $N(\varepsilon, T, d)$ is the covering number, i.e. the minimum number of balls of radius $\varepsilon$ with respect to the metric $d$ that cover the space $T$.
Mathematical background
Dudley's entropy integral arises in the context of empirical processes and Gaussian processes, where it is used to bound the supremum of a stochastic process. Its significance lies in providing a metric entropy measure to assess the complexity of a space with respect to a given probability distribution. More specifically, the expected supremum of a sub-gaussian process is bounded up to finite constants by the entropy integral. Additionally, function classes with a finite entropy integral satisfy a uniform central limit theorem.
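The bound on the supremum mentioned above can be sketched in a common textbook form (the exact numerical constant varies between references and is written here as an unspecified universal constant $K$): if $(X_t)_{t \in T}$ is a centered process that is sub-Gaussian with respect to the metric $d$, then
$$\mathbb{E}\,\sup_{t \in T} X_t \;\le\; K \int_0^{\infty} \sqrt{\log N(\varepsilon, T, d)} \, d\varepsilon.$$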
See also
Entropy (information theory)
Covering number
Donsker's theorem
References
Entropy and information
Statistical theory | Dudley's entropy integral | [
"Physics",
"Mathematics"
] | 276 | [
"Dynamical systems",
"Entropy",
"Physical quantities",
"Entropy and information"
] |
77,307,051 | https://en.wikipedia.org/wiki/Convention%20for%20the%20Protection%20and%20Development%20of%20the%20Marine%20Environment%20of%20the%20Wider%20Caribbean%20Region | The Convention for the Protection and Development of the Marine Environment of the Wider Caribbean Region, commonly called the Cartagena Convention, is an international agreement for the protection of the Caribbean Sea, the Gulf of Mexico and a portion of the adjacent Atlantic Ocean. It was adopted on 24 March 1983, entered into force on 11 October 1986 subsequent to its ratification by Antigua and Barbuda, the ninth party to do so, and has been ratified by 26 states. It has been amended by three major protocols: the Protocol Concerning Co-operation in Combating Oil Spills in the Wider Caribbean Region (Oil Spills Protocol), the Protocol Concerning Specially Protected Areas and Wildlife to the Convention for the Protection and Development of the Marine Environment of the Wider Caribbean Region (SPAW Protocol) and the Protocol Concerning Pollution from Land-Based Sources and Activities to the Convention for the Protection and Development of the Marine Environment of the Wider Caribbean Region (LBS Protocol).
History
The United Nations Environment Programme established the Regional Seas Programme in 1974, which works to promote the development of conventions and action plans for protection of 18 designated regional seas, of which the Wider Caribbean is one. The Wider Caribbean Region encompasses the Gulf of Mexico, the Caribbean Sea, the Straits of Florida out to 200 nautical miles from shore and the states and territories whose coastlines abut them. The Cartagena Convention defines the Atlantic boundaries of its convention area as lying south of 30 degrees north latitude and within 200 nautical miles of the Atlantic coasts of participating states.
In 1977, the Economic Commission for Latin America and the UNEP collaborated to start preparations for the creation of a regional action plan and establishment of the Caribbean Environment Programme (CEP) for the protection and development of the Wider Caribbean. The Action Plan for the CEP was adopted at a meeting of representatives from 22 regional governments in Montego Bay, Jamaica in 1981, following preparatory meetings of government-nominated experts in Caracas, Venezuela, and Managua, Nicaragua.
An impetus for the subsequent creation of the Cartagena Convention was the major oil spill that occurred after two very large crude carriers, tankers SS Atlantic Empress and Aegean Captain collided off Trinidad and Tobago in July 1979. Between the collision itself and the subsequent breakup of the Atlantic Empress near Barbados two weeks later while under tow, it was the largest tanker spill ever, with loss of approximately 286,000 metric tons of oil to the marine environment. One month prior to the collision, the Ixtoc I oil spill began in the Bay of Campeche, which, after the 10 months required to stop the leakage from the blown-out oil well, became the largest oil spill to that point (476,190 metric tons). Approximately 250 spills, incidents that result in the release of greater than 0.17 metric tons of oil, occur annually in the oil-producing Gulf of Mexico and Caribbean Sea according to estimates published in 2007. Even regular ship traffic, such as the cargo vessels passing to and from the Caribbean Sea through the Panama Canal or cruise ships plying routes to islands, can contribute to oil pollution through collisions and discharge of contaminated bilge water that has not been properly separated.
The Cartagena Convention was the product of the first Conference of Plenipotentiaries on the Protection and Development of the Marine Environment of the Wider Caribbean Region, held in Cartagena, Colombia, between 21 and 24 March 1983. The Convention and its first protocol, the Oil Spills protocol, were concurrently adopted on 24 March 1983 in English, French and Spanish, which are regarded as equally authoritative texts. Subsequent plenipotentiary conferences in 1990, in Kingston, Jamaica, and in 1999, in Oranjestad, Aruba, led to the adoptions of the SPAW Protocol and the LBS Protocol, respectively. Members of the original convention and Oil Spills Protocol can separately ratify the latter two protocols. As of 2021, 18 members have ratified the SPAW Protocol, which entered into force in 2000, and 15 have ratified the LBS Protocol, which entered into force in 2010.
Provisions
The Cartagena Convention defines ship-based, land-based, seabed activity–derived and airborne pollution sources that can affect the convention area and are regulated by the convention.
It stipulates that participants who become aware of a pollution emergency should take measures to stem the pollution and notify other states who have the potential to be impacted, as well as international bodies. It calls for international cooperation between participating states in proactively developing pollution event contingency plans and in conducting research and monitoring.
Participating states are also encouraged to define specially protected areas where there are rare or threatened ecosystems or habitat for threatened species. They should conduct environmental impact assessments before undertaking major development projects in coastal areas for effects on marine ecosystems in the convention area.
The participants typically meet once every two years. Extraordinary meetings may occur if a request for one is supported by a majority of signatories.
Mechanisms for resolving disputes between parties on issues arising in the course of interpretating and implementing the Cartagena Convention are set forth in Article 23 and in an annex to the text. Parties can denounce the Convention or any of its protocols they have ratified two years after the Convention or the specific protocol has gone into effect for them, but if they are no longer contracted to any protocol after their denunciation, they will also be considered to have denounced the Cartagena Convention as a whole.
Oil Spills Protocol
The Oil Spills Protocol provides details on the implementation of Cartagena Convention provisions with respect to hazardous material releases, including making an inventory of emergency response equipment and expertise related to oil spills. Oil spills are defined by the protocol as an actual or threatened release requiring emergency action to protect health, natural resources, maritime activities (e.g. port operations) and/or historic sites or tourism appeal. A provision for an annex to the protocol extending the definition of hazardous materials to include substances other than oil is included, and until an annex is created, the protocol can be provisionally applied to non-oil hazardous substances.
SPAW Protocol
The Specially Protected Areas and Wildlife Protocol encourages parties to establish protected areas that conserve ecosystems, natural resources, habitats of endangered, threatened or endemic species and areas of historic, cultural or certain other forms of value. It also provides for the creation of buffer zones, areas of more limited protection, around the protected areas. Three annexes to the protocol establish lists of endangered and threatened wildlife: Annex I lists endangered and threatened flora, Annex II lists endangered and threatened fauna and Annex III contains flora and fauna that are in need of protection, but that could be able to be utilized on a "rational and sustainable basis" with conservation measures.
In addition to inhabitants of the marine environment, the SPAW Protocol can be applied to selected fauna and flora and ecosystems of coasts and coastal watersheds above the freshwater transition point at the discretion of the party with jurisdiction. The annexes are developed and updated in consultation with an advisory committee and are subject to approval of the parties. Exemptions to strict protections may be provided to support traditional activities of local populations if they do not pose substantial risk to the survival or ecological function of protected species or areas. Guidance is made to limit the introduction of non-indigenous or genetically modified organisms.
LBS Protocol
The Land-Based Sources and Activities Protocol calls for parties to take action and cooperate to reduce land-based pollution from their territories. It defines ten priority point source categories in its Annex I for targeted mitigation, including from the sugar and mining industries, domestic sewage and from intensive animal farming operations, and lists pollutants of concern. Annex II specifies considerations for source control and management and lists alternative production practices that minimize waste generation.
In Annex III, the protocol regulates domestic wastewater discharges in the convention area, including effluent containing grey water. This annex defines Class I waters as being especially sensitive to the effects of domestic wastewater exposure due to biological or ecological characteristics or their use by humans, e.g. for recreation. Class II waters, which are those considered less sensitive to pollution from domestic wastewater, have defined thresholds for total suspended solids, biological oxygen demand, pH and fats, oils and grease in effluent that are less stringent than those for discharges into Class I waters. In neither case should discharges contain visible floatables. It is recommended to parties that treatment plants and effluent outflow points are designed to minimize or entirely avoid effects on Class I waters. Parties are asked to control the amount of nitrogen and phosphorus that they release into the convention area from domestic sewage, and to avoid discharge of toxic chlorine from water treatment systems.
Annex IV addresses agricultural non-point source pollution, including provisions for reduction of nitrogen and phosphorus pollution, pesticides and sediment in runoff and pathogens, such as those causing waterborne diseases.
Membership
As of 2023, the United Kingdom, a party to the convention, has not extended treaty membership to Anguilla or Bermuda, both UK overseas territories.
Implementation
Four Regional Activity Centres (RACs) have been established to help implement the Cartagena Convention and protocols, here listed with the protocol implemented and RAC location in parenthesis: the Regional Marine Pollution Emergency Information and Training Center for the Wider Caribbean (Oil Spills Protocol, Curaçao), The RAC for Specially Protected Areas and Wildlife (SPAW Protocol, Guadeloupe), The Centre of Engineering and Environmental Management of Coasts and Bays (LBS Protocol, Cuba) and The Institute of Marine Affairs (LBS Protocol, Trinidad and Tobago). The Regional Coordinating Unit and Secretariat for the convention are located in Kingston, Jamaica. The Cartagena Convention is administered by the United Nations Environment Programme.
The 1981 Action Plan for the Caribbean Environment Programme (CEP) provided for establishment of a trust fund financing costs of implementing the Action Plan in the Caribbean, which opened in September 1983 after fulfilling promised contributions from various countries. Nevertheless, the CEP cited lack of contributions to the trust fund as an obstacle it faced in 2014, along with a very broad scope of tasks to support. Current initiatives of the CEP as of 2023 include a project addressing plastic pollution called The Prevention of Marine Litter in the Caribbean Sea (PROMAR) and projects to restore mangrove forests and coral reefs.
In 2011, Justice Winston Anderson of the Caribbean Court of Justice expressed concern that the implementation of the Cartagena Convention had "lost some momentum" due in part to the need for legislation in Caribbean Community (CARICOM) states to implement aspects of the convention in their respective countries. He praised Trinidad and Tobago for its implementation of Cartagena Convention provisions through its Environmental Management Act 2000.
See also
The Caribbean
Cruise ship pollution in the United States
Environmental effects of shipping
Environmental impacts of tourism in the Caribbean
Environmental issues with coral reefs
International Convention on Oil Pollution Preparedness, Response and Co-operation
International Convention for the Prevention of Pollution of the Sea by Oil
MARPOL 73/78
References
Further reading
External links
List of Protected Areas listed under the SPAW Protocol as of 16 November 2023. Retrieved 31 July 2024.
SPAW Protocol annexes as revised 3 June 2019 after the 10th Contracting Parties to the SPAW Protocol meeting. Retrieved 30 July 2024.
Website of The Caribbean Environment Programme and Cartagena Convention Secretariat, UN Environment Programme. Retrieved 30 July 2024.
Website of the Regional Marine Pollution Emergency, Information and Training Centre – Caribe, a Regional Activity Centre of the Caribbean Environment Program. Retrieved 30 July 2024.
World Environment Situation Room: Data, Information and Knowledge on the Environment – Cartagena Convention. Retrieved 30 July 2024.
Environmental treaties
Oil spills
Treaties extended to the Turks and Caicos Islands
Treaties extended to the British Virgin Islands
Treaties extended to Montserrat
Treaties extended to the Cayman Islands
Treaties of Antigua and Barbuda
Treaties of the Bahamas
Treaties of Barbados
Treaties of Belize
Treaties of Colombia
Treaties of Costa Rica
Treaties of Cuba
Treaties of the Dominican Republic
Treaties of Dominica
Treaties of France
Treaties of Grenada
Treaties of Guatemala
Treaties of Guyana
Treaties of Honduras
Treaties of Jamaica
Treaties of Mexico
Treaties of the Netherlands
Treaties of Nicaragua
Treaties of Panama
Treaties of Saint Kitts and Nevis
Treaties of Saint Lucia
Treaties of Saint Vincent and the Grenadines
Treaties of Trinidad and Tobago
Treaties of the United Kingdom
Treaties of the United States
Treaties of Venezuela
Treaties extended to the Netherlands Antilles | Convention for the Protection and Development of the Marine Environment of the Wider Caribbean Region | [
"Chemistry",
"Environmental_science"
] | 2,497 | [
"Oil spills",
"Water pollution"
] |
77,307,128 | https://en.wikipedia.org/wiki/Mizagliflozin | Mizagliflozin is an SGLT1 inhibitor developed as a potential treatment for chronic constipation. It progressed as far as Phase II trials in humans but was not approved for medical use, however it has since been investigated for other applications.
References
SGLT2 inhibitors
Glucosides
Amides
Secondary amines
Pyrazoles
Isopropyl compounds | Mizagliflozin | [
"Chemistry"
] | 77 | [
"Amides",
"Functional groups"
] |
77,307,655 | https://en.wikipedia.org/wiki/NGC%201419 | NGC 1419 is an elliptical galaxy located 62 million light years away in the constellation of Eridanus. The galaxy was discovered by astronomer John Herschel on October 22, 1835, and is a member of the Fornax Cluster. NGC 1419 is a host to a supermassive black hole with an estimated mass of 25 million solar masses.
155 known globular clusters have been observed surrounding NGC 1419, along with 21 planetary nebulae. These planetary nebulae reveal that the distance to NGC 1419 is approximately 18.9 Mpc, while measurements using surface brightness fluctuations reveal that NGC 1419 is approximately 22.9 ± 0.9 Mpc away. The measurements using planetary nebulae confirm that NGC 1419 is a member of the Fornax Cluster.
See also
List of NGC objects (1001–2000)
External links
References
1419
013534
Eridanus (constellation)
Astronomical objects discovered in 1865
Elliptical galaxies
Fornax Cluster | NGC 1419 | [
"Astronomy"
] | 195 | [
"Eridanus (constellation)",
"Constellations"
] |
77,308,249 | https://en.wikipedia.org/wiki/Dapagliflozin/sitagliptin | Dapagliflozin/sitagliptin, sold under the brand name Sidapvia, is a fixed-dose combination anti-diabetic medication used for the treatment of type 2 diabetes. It contains dapagliflozin, as propanediol monohydrate, a SGLT-2 inhibitor; and sitagliptin, as phosphate monohydrate, a DPP-4 inhibitor. It is taken by mouth.
References
Dipeptidyl peptidase-4 inhibitors
Drugs developed by AstraZeneca
Combination diabetes drugs
SGLT2 inhibitors | Dapagliflozin/sitagliptin | [
"Chemistry"
] | 122 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
77,309,027 | https://en.wikipedia.org/wiki/Jun12682 | Jun12682 is an experimental antiviral medication being studied as a potential treatment for COVID-19. It is believed to work by inhibiting SARS-CoV-2 papain-like protease (PLpro), a crucial enzyme for viral replication.
Mechanism of action
The SARS-CoV-2 virus utilizes several proteases to assist in creating proteins that are essential for viral replication. Among these, the papain-like protease (PLpro) is responsible for cleaving specific sites in the viral polyproteins, facilitating the production of functional viral proteins. By binding to both the BL2 groove and Val70Ub site of PLpro protease, Jun12682 is believed to interfere with the virus's ability to produce new viral proteins, thereby inhibiting the viral replication process. In a study involving mice infected with SARS-CoV-2, mice orally administered Jun12682 experienced reduced viral loads in their lungs, decreased lung lesions, reduced weight loss, and improved survival when compared to those in the control group.
The protease targeted by Jun12682 (PLpro) is distinct from the protease targeted by some other antiviral medications, such as nirmatrelvir/ritonavir, which specifically inhibit the SARS-CoV-2 main protease (Mpro). Laboratory studies have indicated that Jun12682 may retain efficacy against certain strains of SARS-CoV-2 that have developed resistance to other antiviral agents, including nirmatrelvir. This characteristic may position Jun12682 as an option in the treatment of COVID-19 in cases where viral resistance to existing therapies is a concern.
References
COVID-19 drug development
Experimental antiviral drugs
Dimethylamino compounds
Pyrazoles
Ethanolamines
Benzamides | Jun12682 | [
"Chemistry"
] | 397 | [
"COVID-19 drug development",
"Drug discovery"
] |
77,309,138 | https://en.wikipedia.org/wiki/Poozeum | The Poozeum is a museum in Williams, Arizona, United States, dedicated to coprolites (fossilized feces). It was founded in 2014 as a website and resource center by George Frandsen, who owns the world's largest collection of coprolites. Pieces from Frandsen's collection served as a traveling exhibition before the Poozeum opened its physical location in 2024.
The Poozeum includes 8,000 coprolites, including Barnum, the largest coprolite by a carnivore to have been discovered, a specimen believed to be from a Tyrannosaurus rex.
History
Poozeum founder George Frandsen began collecting coprolites as an 18-year-old, purchasing his first piece of fossilized feces from a rock and fossil store in Moab, Utah. He expanded his collection over the years, and by 2016 it included 1,277 specimens and was recognized as the largest collection of its kind in the world, earning it a Guinness World Record. By 2021, the collection had grown to 5,000 coprolites. To differentiate coprolites from rocks, Frandsen examines their shape, size, surface texture, contents/inclusions, location, and chemistry.
Frandsen was motivated to establish the Poozeum due to the lack of coprolite representation in museums. The Poozeum was established as an online gallery in 2014. Frandsen, then based in Florida, would also lend his coprolites to museums as a traveling exhibition. In 2024, Frandsen quit his corporate job, sold his house, and moved to Arizona to open a physical museum for the collection. The Poozeum opened in Williams, Arizona, along Route 66, on May 18, 2024.
Operations
It has the slogan, "#1 for fossilized #2", and bills itself as the "world's premier dinosaur poop museum and gift shop", selling dinosaur-themed merchandise. Entry to the museum is free to the public.
Coprolite collection
The Poozeum holds Frandsen's collection, which as of 2024 numbers 8,000 coprolites. It includes coprolites dating from 10,000 years ago to 400 mya. The coprolites range in size from tiny pebble-sized specimens to a behemoth weighing over . The collection includes crocodilian coprolites as well as those from dinosaurs.
The museum includes a replica of Titanosaur poop, measuring in length. Aside from the collection of coprolites, the museum has a bronze statue of a Tyrannosaurus rex squatting on a toilet. The statue, named The Stinker, is a nod to Auguste Rodin's The Thinker.
Barnum, the largest carnivore coprolite
The coprolite Barnum is the largest known specimen from a carnivore. Dating from the Late Cretaceous, it is believed to have come from a Tyrannosaurus rex and was discovered in the Hell Creek Formation on a ranch near Buffalo, South Dakota. It was given the name Barnum for Barnum Brown, the paleontologist who originally discovered the T. rex, and for the American showman P. T. Barnum. The coprolite is long by wide and weighs . An analysis of the coprolite using X-ray fluorescence determined that significant quantities of calcium and phosphorus were present. Crushed bone inclusions were also found within the specimen. Barnum holds the Guinness World Record for being the "world's largest fossilized excrement from a carnivore".
Other coprolites
Frandsen purchased from an online vendor a coprolite found near Summerville, South Carolina, which displayed bite marks. The apparently unpalatable specimen was consistent with the shape and size of coprolites of crocodilians.
Precious, a coprolite, is the largest true-to-form coprolite ever discovered.
Gallery
See also
Lloyds Bank coprolite
References
External links
Official website
Dinosaur poop collection - Guinness World Records
Feces
Fossil museums in the United States
Museums established in 2014
Museums in Coconino County, Arizona
Natural history museums in Arizona
Trace fossils
2014 establishments in Arizona | Poozeum | [
"Biology"
] | 870 | [
"Excretion",
"Feces",
"Animal waste products"
] |
77,309,499 | https://en.wikipedia.org/wiki/Equivalent%20circuit%20model%20for%20Li-ion%20cells | The equivalent circuit model (ECM) is a common lumped-element model for lithium-ion battery cells. The ECM simulates the terminal voltage dynamics of a Li-ion cell through an equivalent electrical network composed of passive elements, such as resistors and capacitors, and a voltage generator. The ECM is widely employed in several application fields, including computerized simulation, because of its simplicity, its low computational demand, its ease of characterization, and its structural flexibility. These features make the ECM suitable for real-time battery management system (BMS) tasks like state of charge (SoC) estimation, state of health (SoH) monitoring and battery thermal management.
Model structure
The equivalent-circuit model is used to simulate the voltage at the cell terminals when an electric current is applied to discharge or recharge it. The most common circuital representation consists of three elements in series: a variable voltage source, representing the open-circuit voltage (OCV) of the cell, a resistor representing ohmic internal resistance of the cell and a set of resistor-capacitor (RC) parallels accounting for the dynamic voltage drops.
Open-circuit voltage
The open-circuit voltage of a Li-ion cell (or battery) is its terminal voltage in equilibrium conditions, i.e. measured when no load current is applied and after a long rest period. The open-circuit voltage varies nonlinearly with the state of charge, and its shape depends on the chemical composition of the anode (usually made of graphite) and cathode (LFP, NMC, NCA, LCO, ...) of the cell. The open-circuit voltage, represented in the circuit by a state of charge-driven voltage generator, is the major voltage contribution and is the most informative indicator of the cell's state of charge.
Internal resistance
The internal resistance, represented in the circuit by a simple resistor, is used to simulate the instantaneous voltage drops due to ohmic effects such as electrode resistivity, electrolyte conductivity and contact resistances (e.g. the solid-electrolyte interface (SEI) and the collector contact resistance).
Internal resistance is strongly influenced by several factors, such as:
Temperature. The internal resistance increases significantly at low temperatures. This effect makes lithium-ion batteries particularly inefficient at low temperatures.
State of charge. The internal resistance shows a remarkable dependence on the state of charge of the cell. In particular, at low state of charge (near-discharged cell) and high state of charge (fully charged cell), an increase in internal resistance is experienced.
Cell aging. The internal resistance increases as the Li-ion cell ages. The main cause of the resistance increase is the thickening of the solid-electrolyte interface (SEI), a solid barrier with protective functions that grows naturally on the anode surface, composed of electrolyte decomposition-derived compounds.
RC parallels
One or more RC parallels are often added to the model to improve its accuracy in simulating dynamic voltage transients. The number of RC parallels is an arbitrary modeling choice: in general, a large number of RC parallels improves the accuracy of the model but complicates the identification process and increases the computational load, while a small number will result in a computationally light and easy-to-characterize model but less accurate in predicting cell voltage during transients. Commonly, one or two RC parallels are considered the optimal choices.
Model equations
The ECM can be described by a state-space representation with the current i as input and the voltage at the cell terminals v as output. Consider a generic ECM with n RC parallels. The states of the model (i.e., the variables that evolve over time via differential equations) are the state of charge SoC and the voltage drops v_1, ..., v_n across the RC parallels.
The state of charge is usually computed by integrating the current drained from (or supplied to) the battery through the formula known as Coulomb counting:
\mathrm{SoC}(t) = \mathrm{SoC}(0) - \frac{1}{3600\, C_\mathrm{nom}} \int_0^t i(\tau)\, d\tau
where C_nom is the cell nominal capacity (expressed in ampere-hours) and the current i is taken as positive during discharge. The voltage across each RC parallel is simulated as:
\frac{dv_j}{dt} = -\frac{v_j}{R_j C_j} + \frac{i}{C_j}, \qquad j = 1, \ldots, n
where R_j and C_j are, respectively, the polarization resistance and capacitance. Finally, knowing the open-circuit voltage–state of charge relationship OCV(SoC) and the internal resistance R_0, the cell terminal voltage can be computed as:
v(t) = \mathrm{OCV}(\mathrm{SoC}(t)) - R_0\, i(t) - \sum_{j=1}^{n} v_j(t)
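As an illustration, the following minimal Python sketch integrates these equations with a forward Euler step for a single RC branch; the parameter values, the constant discharge current and the OCV curve are purely illustrative assumptions, not data for any real cell:

import numpy as np

# Illustrative (not measured) parameters for a hypothetical small cell
C_nom = 2.5               # nominal capacity [Ah]
R0 = 0.05                 # ohmic internal resistance [ohm]
R1, C1 = 0.02, 1000.0     # RC branch: polarization resistance [ohm] and capacitance [F]

def ocv(soc):
    # Hypothetical open-circuit voltage curve, for demonstration only
    return 3.0 + 1.2 * soc - 0.2 * (1.0 - soc) ** 2

dt, t_end, i_load = 1.0, 3000.0, 1.0   # time step [s], duration [s], constant discharge current [A]
soc, v1 = 1.0, 0.0
for t in np.arange(0.0, t_end, dt):
    soc += -i_load * dt / (3600.0 * C_nom)         # Coulomb counting
    v1 += (-v1 / (R1 * C1) + i_load / C1) * dt     # RC branch dynamics (forward Euler)
    v_term = ocv(soc) - R0 * i_load - v1           # terminal voltage

print(soc, v_term)   # state of charge and terminal voltage at the end of the discharge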
Introduction to experimental identification
Experimental identification of the ECM involves the estimation of its unknown parameters, in particular the cell capacity, the open-circuit voltage curve, the internal resistance and the RC parameters. Commonly, identification is addressed in sequential steps.
Capacity assessment
Cell capacity is usually measured by fully discharging the cell at constant current. The capacity test is commonly carried out by discharging the cell completely (from the upper voltage limit to the lower voltage limit) at the rated current of 0.5C/1C (that is, the current required, according to the manufacturer, to fully discharge it in two/one hours) and after a full charge (usually conducted via a CC-CV charging strategy). Capacity can then be computed by integrating the discharge current over the whole test, C_nom = (1/3600) ∫ i(t) dt, expressed in ampere-hours.
Open-circuit voltage characterization
There are two main experimental techniques for characterizing the open-circuit voltage:
Pulse test: the cell is fully discharged/charged with a train of current pulses. Each pulse discharges a predetermined portion of the cell capacity, and thus allows a new state of charge point to be explored. After each current pulse, the cell is left to rest for several hours and then the open-circuit voltage is measured. Finally, the curve is obtained by fitting the collected state of charge–open-circuit voltage points with an arbitrarily chosen function, typically a polynomial (a simple fit of this kind is sketched after this list). This method is believed to be quick and effective, but the quality of the result depends on the experiment design and the time invested in it.
Slow galvanostatic discharge: another method to evaluate the open-circuit voltage of the cell is to slowly discharge/charge it under galvanostatic conditions (i.e., at low constant currents). In fact, for small currents, the terminal voltage closely approximates the open-circuit voltage. Also in this case, since the accuracy of the estimate depends on how small the discharge current is, the quality of the result is closely related to the time invested in the test.
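The polynomial fit of the pulse-test points mentioned above could be done as in the following minimal sketch; the data arrays are placeholders, not measurements from any real cell:

import numpy as np

# Hypothetical pulse-test rest points (placeholders, not measurements)
soc_points = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
ocv_points = np.array([3.45, 3.60, 3.68, 3.80, 4.05])   # rest voltages [V]

coeffs = np.polyfit(soc_points, ocv_points, deg=3)       # arbitrary polynomial order
ocv_curve = np.poly1d(coeffs)
print(ocv_curve(0.6))   # interpolated open-circuit voltage at 60% state of charge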
Dynamic response characterization
The parameters that characterize the dynamic response, namely the ohmic resistance and the resistances and capacitances of the RC parallels, are usually identified experimentally in two different ways:
Time domain identification: the parameters are optimized by analyzing the behavior over time of the cell voltage in response to a determined current profile. For example, a pulse test can be used for this purpose: the ohmic resistance can be identified (at different state of charge levels) by measuring the instantaneous voltage drops upon application/removal of each pulse, while the RC parameters can be identified, by means of a dedicated optimization procedure, to best simulate the dynamic response during cell relaxation (see the sketch after this list).
Frequency domain identification: dynamic parameters can be optimized by analyzing the frequency response of the cell. For this purpose, an AC current (or voltage) signal of varying frequency is injected into the cell, and the resulting voltage (or current) response is evaluated in terms of amplitude and phase. This analysis, called electrochemical impedance spectroscopy (EIS), requires dedicated laboratory instrumentation and produces highly reliable results. EIS results, typically evaluated using the Nyquist diagram, allow the different impedance contributions of the cell (such as the ohmic resistance and the RC terms) to be quantified separately.
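As a simple illustration of the time-domain approach, the ohmic resistance can be estimated from the instantaneous voltage step when a current pulse is applied, and the relaxation after the pulse can be fitted with an exponential to recover an RC time constant. The numbers below are placeholders, not measurements:

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical pulse-test data (placeholders, not measurements)
i_pulse = 2.0                        # applied pulse current [A]
v_before, v_after = 3.700, 3.590     # voltage just before / just after the pulse is applied [V]
R0 = (v_before - v_after) / i_pulse  # instantaneous ohmic drop gives the ohmic resistance

# Relaxation after the pulse is removed, modeled as v(t) = v_inf - dv * exp(-t / tau)
t = np.linspace(0.0, 300.0, 50)                                   # time since pulse removal [s]
v_relax = 3.70 - 0.04 * np.exp(-t / 60.0) + 0.001 * np.random.randn(t.size)

def relax(t, v_inf, dv, tau):
    return v_inf - dv * np.exp(-t / tau)

(v_inf, dv, tau), _ = curve_fit(relax, t, v_relax, p0=[3.7, 0.05, 50.0])
R1 = dv / i_pulse    # polarization resistance of the RC branch
C1 = tau / R1        # polarization capacitance
print(R0, R1, C1)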
Applications
Some of the possible uses of ECM include:
Online state estimation in battery management systems: the ECM is widely used within model-based observers designed to predict non-measurable internal states of the battery, such as state of charge and state of health. For example, ECMs of different order are frequently used within extended Kalman filters developed for online state of charge estimation.
Simulation and system design: ECM is often used in the design phase of a battery pack. Simulating electrical load profiles at the cell level allows the sizing of the system in terms of capacity and voltage. In addition, ECM can be used to simulate the battery heat generation, and thus design and size the battery cooling system.
See also
Battery management system
Equivalent circuit
Internal resistance
Lithium-ion battery
State of charge, State of health
References
External links
Li-ion battery modeling through equivalent circuit models
Equivalent circuit models for Li-ion cells
Matlab tool for equivalent circuit models development
Introduction to EIS methodology
Mathematical modeling
Lithium-ion batteries | Equivalent circuit model for Li-ion cells | [
"Mathematics"
] | 1,709 | [
"Applied mathematics",
"Mathematical modeling"
] |
77,309,926 | https://en.wikipedia.org/wiki/War%20in%20ants | Wars or conflicts can break out between different groups in some ant species for a variety of reasons. These violent confrontations typically involve entire colonies, sometimes allied with each other, and can end in a stalemate, the complete destruction of one of the belligerents, the migration of one of the groups, or, in some cases, the establishment of cordial relations between the different combatants or the adoption of members of the losing group. For some species of ants, this is even a deliberately undertaken strategy, as they require capturing pupae from other species to ensure the continuity of their colony. Thus, there are specific biological evolutions in certain species intended to give them an advantage in such conflicts. In some of these confrontations, ants can adopt ritualized behavior, even governed by certain implicit rules, for example by organizing duels between the most important ants of each colony or choosing a specific location for a battle. They should not be confused with social conflicts inside the same colony or supercolony of ants.
These conflicts are not limited to ants themselves, which can fight each other even within the same species; they can also involve other animals, particularly other eusocial insects such as termites or wasps. In the early 21st century, with the rapid spread of many species into new habitats facilitated by human colonization, significant wars are being waged between different supercolonies.
Terminology
The use of the term "war", found in scientific literature, is an anthropocentric analogy, derived from human wars.
Causes and prevalence
Causes
The reasons that can lead ant colonies to clash are varied and depend on the species, locations, and contexts. For a number of them, such as leafcutter ants Atta laevigata, wood ants of the genus Formica, certain species of the genus Carebara, or giant ants Dinomyrmex gigas, conflict is a matter of the territory covered and thus of the food available to the different colonies. It can also be related to overpopulation of the same species in the same area at certain times of the year. In other cases, some species aim to capture the pupae of an opposing group to use them in their own colony later.
Prevalence
It is difficult to assess the prevalence of this type of behavior in ants, given the significant diversity of species, behaviors, and situations. Some species have undergone specific evolutionary changes whose sole purpose is to give them an advantage in these conflicts, such as Polyergus rufescens, which has sickle-shaped mandibles. The emergence of supercolonies from the 19th century, facilitated by human movements, has certainly reinforced these behaviors in the affected ants. It also seems to depend on the context in which the ants find themselves. For instance, within the same species, a colony facing external threats from another ant colony can produce up to twice as many soldier larvae as a colony not experiencing the same pressures. Some species rely almost exclusively on defensive strategies, such as Camponotus ligniperdus, which is peaceful and occupies a small territory but defends it fiercely against any incursion, even by more dangerous or deadly species.
Process and outcomes
Process
In general, there are two main ways ants conduct these conflicts. On the one hand, some species use specific ants that are more powerful and whose primary function is to fight. On the other hand, colonies increase the number of available fighters and send large numbers of individuals into battle. In some species, conflict is ritualized, for example through limited duels undertaken by the individuals most capable of combat, but phenomena of battles are also common. In the genus Formica, such battles are commonplace and can involve tens of thousands of individuals, and they are sometimes ritualized, with the respective groups withdrawing at nightfall only to return the next day to the same locations to resume the battle. The bodies of dead or injured ants are then brought back to the colony, where they are eaten. In other species, such as within the genus Carebara, ants arrange themselves in specific formations before the battle, like phalanxes, and advance against each other. They also regularly sacrifice workers, whose role is to try to hinder, injure, and attack enemy majors, before their own majors join the battlefield and can intervene.
In other cases, particularly among ants that aim to capture larvae or pupae, colonies use chemical weapons, such as olfactory propaganda, to try to enter the targeted colonies as discreetly as possible.
Outcomes
Generally, wars between ants are costly for the groups, which must allocate a significant portion of their production to the war effort, to the detriment of forming workers, for example. These wars can result in the death of tens of thousands of individuals within a few hours; for wood ants of the genus Formica, there are regularly 10,000 casualties per day during the spring. For these ants, the war ends either when the opposing colony is destroyed or when the available prey is sufficient again for the needs of the colonies, which have then lost thousands of members. Estimates from 2016 on certain ant species show a loss of about a third of the total colony population in case of victory.
For some species, such as Crematogaster mimosae, victory over an opposing colony usually results in the flight or death of the opposing queen, but the victorious colony often adopts the surviving ants of the losing colony, likely a way to avoid and mitigate the significant resource loss due to the war effort. In a few rare cases, the queen of the losing colony is herself adopted by the victorious colony, and the two merge.
Supercolonies
With the development of ant supercolonies, which follows human expansion into new areas, groups of dozens, hundreds, or even thousands of colonies engage in large-scale conflicts against other species. For example, around San Diego in the 2010s, millions of ants died each month in significant battles between the supercolony formed by Argentine ants and three other supercolonies present in the area.
References
War
Dispute resolution
Military ethics
Violence
Conflicts
Ants
Symbiosis
Ethology
Behavior | War in ants | [
"Biology"
] | 1,231 | [
"Behavior",
"Symbiosis",
"Biological interactions",
"Violence",
"Behavioural sciences",
"Aggression",
"Ethology",
"Human behavior"
] |
77,309,938 | https://en.wikipedia.org/wiki/Unimog%20411 | The Unimog 411 is a vehicle in the Unimog series from Mercedes-Benz. Daimler-Benz AG built 39,581 units at the Mercedes-Benz plant in Gaggenau between August 1956 and October 1974. The 411 is the last series of the "original Unimogs". The design of the 411 is based on the Unimog 401. It is also a commercial vehicle built on a ladder frame with four equally sized wheels and designed as an implement carrier, agricultural tractor and universally applicable work machine. Like the 401, it had a passenger car engine, initially with 30 hp (22 kW).
There were a total of twelve different models of the 411, which were offered in numerous model variants with three wheelbases (1720 mm, 2120 mm and 2570 mm) and could be supplied in the conventional convertible version, as a drive head and with a closed cab, which, as with the predecessor, was manufactured by Westfalia. The closed cab was available in two versions: the Type B resembled the cab of the Unimog 401, while the Type DvF resembled the cabs of the Mercedes-Benz trucks of the 1950s and 1960s, with headlights in the radiator grille and chrome strips.
During its long production phase, the Unimog 411 was technically revised several times. Due to the large number of changes that the 411 series underwent, four types of the 411 series are distinguished for better differentiation: the Ur-411, 411a, 411b and 411c. Although the 411 was technically based on the 401, design features from other Unimog model series were also adopted for the 411, including the axle design of the Baureihe 406, which was used in modified form on the 411 from 1963. As the last classic Unimog, the 411 had no direct successor, but from 1966 the Unimog 421 was in the Unimog range, which was technically based on the 411 and was positioned in the same product segment.
Vehicle history
Development
The Unimog 411 was not a completely new development, rather Daimler-Benz derived the 411 series from the predecessor 401 and 402 series. In the 1950s, the Unimog design department under the leadership of Heinrich Rößler took a wait-and-see approach to new developments, even though consideration was given to offering the Unimog 411 with a 40 hp (29.5 kW) diesel engine and an 80 hp (59 kW) petrol engine. However, these ideas were only implemented in later model series. The developers' hopes were pinned in particular on the 411 with an all-steel cab. The most important focus of the development department was primarily on demonstrating, testing and improving the Unimog as such. The main changes to the 411 compared to its predecessor were an increase in engine output by 20%, reinforced shock absorbers, reinforced crossmembers for the engine, from 1959 plain bearings instead of roller bearings for the manual gearbox and enlarged tires with the dimension 7.5-18″ (optional equipment: 10-18″), which made a new wheel arch necessary; on the 411, the front wheel arches are slightly longer at the top than on the 401, so that the tires do not drag when turning the steering wheel. In addition, the front end of the 411 was redesigned, with wider beading on the hood. The radiator grille was also made smaller; it was now a square grille painted in the vehicle color instead of the struts of its predecessor.
Series 401 convertibles were already equipped with the cab of the later 411 series from June 1955, so that there are some hybrid vehicles. The 411 was then presented at the DLG exhibition in Hanover in September 1956. As many changes were made to the Unimog 411 during the entire period of series production, the Daimler-Benz works literature divides the 411 series into four types to make it easier to distinguish between significant technical changes: the original type "411" (1956-1961), "411a" (1961-1963), "411b" (1963-1965) and "411c" (1965-1974).
With the Unimog 411, Daimler-Benz set itself the target of selling 4,000 vehicles a year. In order to meet the requirements of the Unimog 411, customer wishes were incorporated and taken into account in the further development of the series. Nevertheless, the 411 remained a rather small vehicle whose diesel engine, with an output of just 34 hp (25 kW), was considered too weak for some applications. Analysts at Daimler-Benz warned that the annual production rate of the Unimog 411 would fall below 3000 vehicles after 1960. This point was reached in 1964. Daimler-Benz therefore introduced a larger Unimog in 1963, the 406. The 411 was thus transformed from the former core product of the Unimog range into a light series. However, the further development of the Unimog 411 did not end there; from 1963, the axles of the Unimog 406 were also fitted to the 411 in a modified form. These axles are more stable, cheaper and easier to maintain. From 1967, the 411 received the same bumper as the Unimog 421.
After the introduction of the type 411c in 1965, the 411 series was no longer developed further on a large scale; the models with an extra-long wheelbase, added for the export market from 1969, were the last major innovation in the model range. In March 1966, the Unimog 421, a technically similar vehicle with a much more modern appearance, was presented in the same segment. The 421, which used the technology of the Unimog 411 together with a 2-liter pre-chamber engine of the type OM 621 with 40 hp (29.5 kW), was actually designed as an inexpensive addition to the 406 series. From 1970 onwards, however, the Unimog 421 was much more popular than the similar but older and weaker 411 and was preferred by customers. The Unimog 411 nevertheless continued to be built unchanged. Production was only discontinued in October 1974 after 39,581 vehicles. Presumably some vehicles were produced again in 1975 for a military customer.
Distribution
On the West German market, the basic version of the Unimog 411 cost DM 12,500 as a convertible when it was launched in 1956. It initially had the OM 636.914 engine, which produced 30 DIN hp (22 kW) at 2550 rpm. As the Unimog 411 was too expensive for some customers, an "economy model", the U 25, was offered from 1957 to 1959. The U 25 was given the independent model number 411.116. It lacked the windscreen, side windows, windscreen wipers, soft top and other small parts; the seats and engine came from the Unimog 2010, and the transmission ratio of the portal axle was also changed. It was a failure, and only 54 units were sold. At the end of the 1950s, the 411 model series was also exported to the USA, where Curtiss-Wright sold the 411.112 and 411.117 models; the Mercedes-Benz brand name was retained. In 1965, the basic version cost 15,300 DM. Daimler-Benz AG achieved the largest turnover with Unimog sales in West Germany. In 1962, worldwide sales of the U 411, excluding the spare parts business, amounted to 54,870,000 DM.
Prototype for the French army
At the request of the French army, Daimler-Benz built a prototype based on the 411 series with a gasoline engine in 1957. The vehicle was given the chassis number 411.114 75 00939 and was assigned to type 411.114, a designation that was reassigned to the extra-long wheelbase models in 1969. The prototype 411.114 had the long wheelbase of 2120 mm, the transmission and clutch of the Unimog S and tires of dimension 7.5-18″. The four-cylinder engine requested and installed was the M 121, as also used in the Mercedes-Benz 180, with a displacement of 1897 cm³, an output of 65 hp (48 kW) at 4500 rpm and a maximum torque of 128 N·m at 2200 rpm. The top speed is 90 km/h. The reinforced windshield with the windshield wipers at the bottom is a distinctive feature. The French army tested the vehicle over a period of almost 9000 operating hours and decided not to procure it due to its high center of gravity. On the basis of this prototype, Daimler-Benz developed further military vehicles with a payload of one ton.
Westfalia cab
As with the Unimog 401 and 402 before it, a closed cab was also offered for the Unimog 411, manufactured by Westfalia in Wiedenbrück; Daimler-Benz equipped the Unimogs with this cab ex works. When production of the 411 series began in August 1956, the type B cab, which had already been built for the Unimog 401, was modified for the new Unimog 411 chassis and continued to be built almost unchanged on the outside. It has the model designation 411.520. This cab is nicknamed the frog's eye and was built only 1107 times; the models 411.111 (1720 mm wheelbase) and 411.113 (2120 mm wheelbase) were equipped with it until they were discontinued in October 1961. Westfalia had already produced a new cab for the Unimog 411 in 1957. It has the model designation 411.521 and is designated as cab type DvF. It was only built for the 411.117 and 411.120 models with the 2120 mm wheelbase. DvF stands for Type D, widened cab. As the name suggests, its dimensions were significantly larger than those of the Type B: it has a 30% larger volume and is wider than the Unimog's loading platform. The windshield is undivided and the ergonomics were significantly improved. The shape follows the truck design of the Mercedes-Benz brand in the 1950s and 1960s, with an elliptical radiator grille, headlights framed at the outer edge and lavish chrome trim. Unlike on the convertible models, the front bumper is more rounded and more strongly curved at the ends. On request, Daimler-Benz equipped the DvF cab with a heater. A disadvantage of the DvF cab was the high heat load caused by the engine's waste heat. The reason for this is the engine cover, which protrudes far into the passenger compartment and does not sufficiently insulate the cab from the engine. Production of the Unimog 411 was discontinued in 1974, but Westfalia continued to build the DvF cab until 1978.
In the mid-1960s, Westfalia also tested a GRP hardtop for the convertible versions of the Unimog 411. It offered better protection from the weather and better visibility to the sides than the fabric top. Although brochures were printed and the hardtop was included in the official Unimog catalog, it was hardly ever sold. It is not known how many hardtops were produced.
Annual series change
Ur-411
1957
The 411 was extensively modified in 1957. The old direction indicators were replaced by conventional car indicators. Other external innovations included the new Mercedes badge on the hood and the modified rear lights. The engine output was increased to 32 hp (23.5 kW) from March, and a synchronized transmission could be supplied on request; in July, new springs with a wire diameter of 19.5 mm instead of 18 mm were fitted to the rear axles, and from September a reinforced steering system with a three-spoke steering wheel from Fulmina was installed. In the convertible models, the side windows made of Cellon were replaced by polyvinyl chloride windows as early as May 1957. Mercedes-Benz also introduced the economy model U 25 in May. The new Westfalia type DvF cab was presented at the IAA in September; a trailer brake system was available from October.
1958
From March or April 1958 the Unimog 411 was equipped as standard with a 60-liter fuel tank instead of just 40 liters. Other changes were rather minor, including modifications to the brake system, the installation of a combined pre-glow and start switch, a reinforced power take-off and the installation of hinged windows on the Westfalia cab type DvF.
1959
From January, the synchronized gearbox, which had previously only been offered as an optional extra, became standard equipment. The economy model U 25 was discontinued without replacement in 1959.
1960
In January 1960, the chassis numbering system was changed so that the first two digits no longer formed a number from 55 to 95. Instead, the chassis numbers began with "01" from 1960. The hood design was changed. Snap locks were installed, making the outside toggles superfluous. In addition, the mirrors were mounted further down and no longer on the A-pillar. The rear suspension of the cab had already been modified in March 1960 for the introduction of the three-point suspension cab in October 1961.
411a
1961
In October 1961, the Unimog 411 underwent a comprehensive model update, which upgraded the series, particularly in technical terms: the original type 411 was replaced by the type 411a. The 411a went into series production on October 9, 1961, and differs from the original 411 in its ladder frame with higher longitudinal members: 120 mm instead of 100 mm. In addition, a newly introduced hydraulic system with front and rear power lift was offered and the cab was fitted with a three-point suspension, which significantly increased comfort for the occupants. The type 411a can be recognized by the headlights, which are no longer attached to the frame but to the radiator grille, causing them to protrude slightly forwards, as does the front bumper, which is curved at the ends. The flatbed has four instead of three side boards on each side and is 30 mm away from the cab. The production of vehicles with the Westfalia Type B cab was finally discontinued in October 1961.
1962
The indentations on the hood for the toggles, which were no longer required, were removed and all vehicles were fitted with a new blinker system from Bosch. The rear window of the convertible top was enlarged, and the DvF cabs were fitted with two-piece headlight rings.
411b
1963–1964
In March, production of the 411a was discontinued in favour of the new 411b. The most important change to the 411b was the introduction of the axle design of the Unimog 406, which replaced the old axle manufactured by Erhard & Söhne. The height of the windshield was increased from 410 mm to 450 mm, and the convertible models were given a triangular window behind the A-pillar. At the rear, the fenders were completely black. Other technical changes included a modified exhaust system, a hydraulic power steering system offered as an optional extra and a new, now two-stage master brake cylinder.
411c
1965
The 411b was built until February 1965; from February 1965 the type 411c was produced in series, the main difference to the 411b being the 2 hp (1471 W) increase in engine output. Daimler-Benz continued to install the OM 636.914 engine; however, the rated speed was increased from 2550 rpm to 2750 rpm. In addition, the cylinder head, injection pump and throttle body were modified. This resulted in an improvement in output to 34 hp (25 kW). In order to maintain the same driving speeds at rated engine speed, the transmission ratio of the axles was changed from 25:7 to 35:9. The rear hood mount, the speedometer in the cab, the V-belt pulley for the compressor and the rear lights were also modified. With the introduction of the type 411c in 1965, there were three model designations - 411.118, 411.119 and 411.120 - offered in nine model variants.
1966
From April 1966, the standard color of the Unimog was changed from Unimog green (DB 6286) to truck green (DB 6277). The dropside hinges of the Unimog 421 were installed and the rear spring brackets were cast. The models with the Westfalia DvF cab were given a handle on the A-pillar to make it easier to get in.
1967
The most important change from 1967 was the introduction of the Unimog 421 bumper, which can be recognized by the longitudinal beading. Furthermore, swivel bearings on the front axle and a door handle guard were installed on the convertible models.
1968
The frame received a new mounting plate bracket and welded front and rear beams. The thermostat was modified and the DvF cabs were fitted with new exterior mirrors.
1969
The last major innovation came in 1969, when the extra-long wheelbase of 2570 mm was introduced for export with the 411.114 model. The model 411.114 was primarily supplied to the Portuguese military, which used the vehicle in the war in Angola. The Fulmina steering was replaced by a ZF Gemmer type 7340 steering. In addition, the fuel lines were now made of plastic.
1970
In 1970, the hole arrangement in the dashboard was changed to accommodate a fuel gauge and a glow monitor as standard.
1971–1974
In 1971, the round indicators were replaced by square indicators, a windshield washer system was introduced and the windshield frame was painted black. All vehicles received a new two-spoke steering wheel in 1972 and the convertible models were fitted with more modern exterior mirrors. Nothing more was changed in 1973 and 1974.
Models
The Unimog 411 was offered in many model variants. The model designations represent the vehicle type and equipment features of the Unimog, but only provide a limited indication of the model type. In the Unimog 411, the model designation is made up of one, two or three letters that determine the vehicle type, the engine power in DIN hp, and optional suffixes that indicate equipment features. A U 34 L designates a standard-equipped Unimog with 34 hp (25 kW) engine power and a long wheelbase. The following prefixes and suffixes existed; if they were not used over the entire production period, this is indicated:
U: Unimog in basic version
A: Without trailer brake system
B: With trailer brake system (up to approx. 1961)
C: With pneumatic power lift (up to approx. 1961)
D: With trailer brake system (from approx. 1961)
F: Westfalia cab type DvF
H: With hydraulic power lift (from approx. 1961)
L: Long wheelbase of 2120 mm
S: Tractor unit
The following engine outputs were offered:
25 hp (18.5 kW)
30 hp (22 kW)
32 hp (23.5 kW)
34 hp (25 kW)
36 hp (26.5 kW)
Prototype
Type overview
A total of 39,581 Unimog 411s, plus 350 parts kits, were built in twelve different models. 11,604 units had the type DvF cab, 1107 had the type B cab and 26,870 Unimog 411s were convertibles. Around 57.2% of all Unimog 411s built had the long wheelbase of 2120 mm and 2.9% had the extra-long wheelbase of 2570 mm. The following models of the Unimog 411 were built:
Base prices
The 411 series was built in various versions. The following table shows the basic prices (list prices) for the West German market:
Technical description
Driver's cab
The Unimog 411 was available with a fabric top ("convertible") and a closed cab; the closed cabs were supplied by Westfalia. All cabs, including the convertible version, had a rigid four-point suspension in the original 411 model, and a three-point suspension from the 411a model onwards (October 1961). Both the convertible and the closed cab have two seats. In the original type, the driver's cab and flatbed form a single structural unit; from 411a onwards, the two parts are separate.
Engine
The Unimog 411 is powered by an OM 636.914 inline four-cylinder pre-chamber naturally aspirated diesel engine. This engine has a displacement of 1767 cm³, a side camshaft and overhead valves. The water-cooled engine is installed centrally at the front and tilted slightly to the rear. It is started with an electric starter. The power output was initially 30 hp (22 kW) at 2550 rpm, but was gradually increased over the production period to 32 hp (23.5 kW) and ultimately 34 hp (25 kW). The economy model U 25 received the engine with 25 hp (18.5 kW) at 2350 rpm; however, it was only sold in small numbers. The engine was also offered with 36 hp (26.5 kW) for some export models.
Frame
The ladder frame of the Unimog 411 is a flat frame made of folded (later rolled) U-profiles with a web height of 100 mm (original type 411) or 120 mm (411a,b,c). The U-profiles are connected with five riveted cross beams. Two cross members are positioned close together at the front and rear, one cross member is directly behind the cab. The rear cross member is additionally connected to the U-profiles with two cross members, which are attached in the middle and run diagonally to the next cross member, thus forming triangles. The fact that the cab and platform body are connected to the frame at four points on the original model means that the parts cannot twist against each other, which encourages fractures, cracks and permanent deformations. From the 411a onwards, the frames could twist better, as the cab now had two points for the suspension at the rear, but only one at the front. Various accessories such as mounting brackets, additional crossbars and plates were offered for the frame to enable additional equipment to be attached to the frame.
Chassis and drivetrain
Thanks to the portal axles with wheel reduction gearing, the Unimog has a relatively high ground clearance despite its small wheels. The axles are guided on pushrods and Panhard rods. The thrust tubes are mounted on the transmission in ball joints and are rigidly connected to the differential gears of the axles. The drive shafts, which transmit the torque from the transmission to the axles, run in the thrust tubes. The axles of the Unimog are suspended with two coil springs each (wire diameter at the front 17 mm or 18 mm, at the rear initially 18 mm, later 19.5 mm), with additional internal springs and hydraulic telescopic shock absorbers. The wheel suspension allows particularly long suspension travel and therefore a large axle articulation, making the Unimog very capable off-road. The U 411 was supplied with 7.5-18″ tires as standard. Tires with dimensions of 10-18″, later 10.5-18″, were available as special equipment.
The original type and the 411a have the portal axle called the sheet metal axle, which was manufactured by Erhard & Söhne. The sheet metal axle consists of two U-shaped sheet metal shells, each approx. 1.2 m long, with an offset for the differential in the middle; the two sheet metal shells were welded together on top of each other to form a banjo axle. The differential gear and the drive shafts are located inside. On the outside, a separate housing for the wheel reduction gears is bolted to each side of the sheet metal axles. A central fastening screw is fitted in the wheel hub, which is clearly visible from the outside.
From 1963, with the type 411b, Daimler-Benz also installed the axle of the Unimog 406 in a modified form in the 411. The new axles are constructed from a differential housing and two cast axle halves approx. 0.6 m long, with a half differential bell formed at the inner ends. The two axle halves are connected vertically to the differential housing with internal hexagon bolts (funnel axle). The wheel reduction gears are bolted to the outer ends. The external distinguishing feature of the new axle is the hub, from which the wheel lock screw no longer protrudes (see picture on the right). This new axle was cheaper to manufacture, easier to maintain and more resilient than the sheet metal axle. The axle ratio of the Unimog axles is 25 : 7 (Ur-411, 411a, 411b) or 35 : 9 (411c).
Gearbox
Daimler-Benz installed the UG1/11 gearbox, also known as the F gearbox, in the Unimog 411; it is designed for an input torque of 107.9 N·m (11 kp·m). It has claw gears, ball-bearing shafts, six forward and two reverse gears. An additional creeper gearbox with two gears was available on request. The forward gears are engaged with the large upper lever, the reverse gears with the small lever in the middle and the creeper gears with the larger lower lever (see picture on the right). From March 1957, the gearbox could be synchronized on request by installing balls, sliding blocks, leaf springs and synchronizer rings; from 1959 it was synchronized as standard and equipped with plain bearings. The same transmission was already installed in the synchronized version in the Unimog 404 from 1955. A transfer case is directly flange-mounted to drive the front axle. The speed range extends from 1 to 55 km/h.
Pneumatics
The pneumatic system is the heart of the power lift system on the original 411, as the front and rear power lifts are moved pneumatically, as on the Unimog 401. The pneumatic system essentially consists of six main components: A compressor driven by the engine, a control valve, a compressed air tank installed diagonally across the top in front of the rear axle, the control unit in the cab, the rear power lift system with two pneumatic cylinders and the front power lift system with one pneumatic cylinder. The pneumatic system was essentially taken over from the Unimog 401, but reinforced for greater lifting power. The large compressed air tank in particular required a lot of space. On request, a pneumatic lifting cylinder was also available for tipping the platform, which was operated at a pressure of approx. 8 bar.
Hydraulic system
A hydraulic system was offered from type 411a onwards, but was not fitted as standard. It consists of six main components: a gear oil pump, an oil tank, two hydraulic cylinders and two control units with operating levers. The hydraulic pump has a maximum working pressure of 150 bar. The oil tank at the front of the Unimog has a capacity of 8.5 liters. The control units are located behind the engine; they each have a control lever. The control levers are mounted on a bar under the steering wheel. The driver can operate the hydraulic cylinder of the rear linkage with the first lever. The second lever is used to control the attachments.
Paintwork
Most of the vehicles are adapted to the taste of the 1950s and, like their predecessors, are painted in Unimog green. Unimog green was the standard color from the start of production until 1966, with around 54% of all vehicles having this color. Truck grey was also available ex works, the only color that was retained throughout the entire production period. However, only around 3% of all Unimog 411s ever built were painted in this color. From 1966 onwards, truck green was used as the standard color; this color had already been available for the Unimog 406 since 1963. Only 20% of all vehicles ever built had this color; 23% were painted in special colors, which were offered over the entire production period. Due to the large number of special colors, they are not listed separately here. The most important customers who ordered a special color were the Deutsche Bundesbahn and the Deutsche Bundespost in addition to the military.
Standard colors
The frame, tank, axles and springs were not painted in the color of the car, but in deep black (RAL 9005), the wheels in carmine red (RAL 3002). From 1958 to 1960, Daimler-Benz used chassis red (DB 3575) for these parts (with the exception of the wheels) instead. In the 1970s, Mercedes-Benz also changed the color of the wheels to jet black.
Accessories
Accessories were available separately at extra cost. Busatis developed the BM 62 KW mower specially for the Unimog 411 in collaboration with Daimler-Benz. As with other Unimog models, there was a front cable winch that was driven via the PTO shaft. Two different types of cable winch, type A and type C, were available, with a pulling force of 3000 kp or 3500 kp (roughly 29 to 34 kN), depending on the type. While the type A is the "simple" version, the type C has an additional reduction gear and a band brake, so that the type C cable winch is also suitable for lowering loads. Both cable winches have a cable length of 50 m and a cable diameter of 11 mm or 12 mm. The rope speed is continuously variable between 48 and 60 m/min. Electron built a compressed air generator for the Unimog 411, which can be used to drive external compressed air devices such as pneumatic hammers or drills. The compressed air generator is driven by the front PTO shaft and delivers air at up to 2200 dm³/min; the operating pressure is 6 bar. In cooperation with Daimler-Benz, Donges Stahlbau developed the Unikran type SU, a crane trailer for the Unimog 411, between 1955 and 1957. The Unikran type SU has a lifting capacity of 2942 daN (3 Mp) and a hook height of approx. 7 m to 8 m. It can also be operated without a Unimog. The Swiss manufacturer Haller produced an engine exhaust brake for the Unimog 411, which was retrofitted to a significant number of vehicles.
Technical data 1957
Retrospective assessment
With the Unimog 411a, Daimler-Benz successfully completed the expansion of the Unimog concept from tractor to system tractor for the first time. While the original Unimog was designed purely as an agricultural vehicle, it was recognized that the Unimog 411 was also in demand in other areas. In 1975, Gerold Lingnau wrote in a special edition of the Frankfurter Allgemeine Zeitung: "Admittedly, hardly 175,000 Unimogs would have been built to date if it had only remained an agricultural vehicle. Its career in other areas began early on. [...] The fact that the Unimog is so versatile is due not least to an enterprising equipment industry. It recognized its opportunity early on and - in close cooperation with Daimler-Benz - developed hundreds of attachments for this first 'implement carrier' in vehicle history." Carl-Heinz Vogler attributes the Unimog's development into a popular vehicle with local authorities, the construction industry and the transport sector to continuous further developments such as the reinforced frame of the 411a and the larger all-steel cab of the DvF model.
The flat ladder frame construction of the Unimog 411 is extremely robust, and its torsional and bending rigidity were unrivaled at the time, which made the Unimog 411 a particularly reliable vehicle. However, the U-411 frame could no longer keep up with the offset frame of the Unimog 404 and 406, which offered better torsional properties.
Literature
Carl-Heinz Vogler: Unimog 411: Typengeschichte und Technik. GeraMond-Verlag, München 2014, ISBN 978-3-86245-605-5.
Gerold Lingnau: Unimog. Des Menschen bester Freund. Die dreißig Jahre alte Idee vom „Universal-Motor-Gerät“ ist heute noch taufrisch / Bisher 175 000 Einheiten gebaut. In: Frankfurter Allgemeine Zeitung, 5 March 1975, p. 29.
Remarks
References
Tractors
Mercedes-Benz trucks | Unimog 411 | [
"Engineering"
] | 6,687 | [
"Engineering vehicles",
"Tractors"
] |
77,310,445 | https://en.wikipedia.org/wiki/Deep%20backward%20stochastic%20differential%20equation%20method | Deep backward stochastic differential equation method is a numerical method that combines deep learning with backward stochastic differential equations (BSDEs). This method is particularly useful for solving high-dimensional problems in financial derivatives pricing and risk management. By leveraging the powerful function approximation capabilities of deep neural networks, deep BSDE addresses the computational challenges faced by traditional numerical methods in high-dimensional settings.
History
Backwards stochastic differential equations
BSDEs were first introduced by Pardoux and Peng in 1990 and have since become essential tools in stochastic control and financial mathematics. In the 1990s, Étienne Pardoux and Shige Peng established the existence and uniqueness theory for BSDE solutions, applying BSDEs to financial mathematics and control theory. For instance, BSDEs have been widely used in option pricing, risk measurement, and dynamic hedging.
Deep learning
Deep Learning is a machine learning method based on multilayer neural networks. Its core concept can be traced back to the neural computing models of the 1940s. In the 1980s, the proposal of the backpropagation algorithm made the training of multilayer neural networks possible. In 2006, the Deep Belief Networks proposed by Geoffrey Hinton and others rekindled interest in deep learning. Since then, deep learning has made groundbreaking advancements in image processing, speech recognition, natural language processing, and other fields.
Limitations of traditional numerical methods
Traditional numerical methods for solving stochastic differential equations include the Euler–Maruyama method, Milstein method, Runge–Kutta method (SDE) and methods based on different representations of iterated stochastic integrals.
But as financial problems become more complex, traditional numerical methods for BSDEs (such as the Monte Carlo method, finite difference method, etc.) have shown limitations such as high computational complexity and the curse of dimensionality.
In high-dimensional scenarios, the Monte Carlo method requires numerous simulation paths to ensure accuracy, resulting in lengthy computation times. In particular, for nonlinear BSDEs, the convergence rate is slow, making it challenging to handle complex financial derivative pricing problems.
The finite difference method, on the other hand, experiences exponential growth in the number of computation grids with increasing dimensions, leading to significant computational and storage demands. This method is generally suitable for simple boundary conditions and low-dimensional BSDEs, but it is less effective in complex situations.
Deep BSDE method
The combination of deep learning with BSDEs, known as deep BSDE, was proposed by Han, Jentzen, and E in 2018 as a solution to the high-dimensional challenges faced by traditional numerical methods. The Deep BSDE approach leverages the powerful nonlinear fitting capabilities of deep learning, approximating the solution of BSDEs by constructing neural networks. The specific idea is to represent the solution of a BSDE as the output of a neural network and train the network to approximate the solution.
Model
Mathematical method
Backward Stochastic Differential Equations (BSDEs) represent a powerful mathematical tool extensively applied in fields such as stochastic control, financial mathematics, and beyond. Unlike traditional Stochastic differential equations (SDEs), which are solved forward in time, BSDEs are solved backward, starting from a future time and moving backwards to the present. This unique characteristic makes BSDEs particularly suitable for problems involving terminal conditions and uncertainties.
A backward stochastic differential equation (BSDE) can be formulated as:
Y_t = \xi + \int_t^T f(s, Y_s, Z_s)\, ds - \int_t^T Z_s\, dW_s, \qquad 0 \le t \le T
In this equation:
\xi is the terminal condition specified at time T.
f is called the generator of the BSDE.
the solution consists of the stochastic processes Y and Z, which are adapted to the filtration generated by the Brownian motion.
W is a standard Brownian motion.
The goal is to find adapted processes Y and Z that satisfy this equation. Traditional numerical methods struggle with BSDEs due to the curse of dimensionality, which makes computations in high-dimensional spaces extremely challenging.
Methodology overview
1. Semilinear parabolic PDEs
We consider a general class of semilinear parabolic PDEs represented by
\frac{\partial u}{\partial t}(t, x) + \frac{1}{2}\,\mathrm{Tr}\!\big(\sigma\sigma^{T}(t, x)\,\mathrm{Hess}_x u(t, x)\big) + \nabla u(t, x)\cdot \mu(t, x) + f\big(t, x, u(t, x), \sigma^{T}(t, x)\nabla u(t, x)\big) = 0
In this equation:
u(T, x) = g(x) is the terminal condition specified at time T.
t and x represent the time and the d-dimensional space variable, respectively.
\mu is a known vector-valued function, \sigma^{T} denotes the transpose associated to \sigma, and \mathrm{Hess}_x u denotes the Hessian of the function u with respect to x.
\sigma is a known matrix-valued function, and f is a known nonlinear function.
2. Stochastic process representation
Let W be a d-dimensional Brownian motion and X be a d-dimensional stochastic process which satisfies
X_t = X_0 + \int_0^t \mu(s, X_s)\, ds + \int_0^t \sigma(s, X_s)\, dW_s
3. Backward stochastic differential equation (BSDE)
Then the solution of the PDE satisfies the following BSDE:
u(t, X_t) - u(0, X_0) = -\int_0^t f\big(s, X_s, u(s, X_s), \sigma^{T}(s, X_s)\nabla u(s, X_s)\big)\, ds + \int_0^t \big[\nabla u(s, X_s)\big]^{T} \sigma(s, X_s)\, dW_s
4. Temporal discretization
Discretize the time interval [0, T] into N steps 0 = t_0 < t_1 < \cdots < t_N = T, and approximate the forward process by the Euler scheme
X_{t_{n+1}} \approx X_{t_n} + \mu(t_n, X_{t_n})\, \Delta t_n + \sigma(t_n, X_{t_n})\, \Delta W_n
where \Delta t_n = t_{n+1} - t_n and \Delta W_n = W_{t_{n+1}} - W_{t_n}.
5. Neural network approximation
Use a multilayer feedforward neural network to approximate:
\sigma^{T}(t_n, X_{t_n})\, \nabla u(t_n, X_{t_n}) \approx (\sigma^{T} \nabla u)(t_n, X_{t_n}; \theta_n)
for n = 1, \ldots, N - 1, where \theta_n are the parameters of the sub-network approximating \sigma^{T} \nabla u at time t_n.
6. Training the neural network
Stack all the sub-networks from the approximation step to form a deep neural network. Train the network using the simulated paths \{X_{t_n}\} and \{W_{t_n}\} as input data, minimizing the loss function:
l(\theta) = \mathbb{E}\Big[\big| g(X_{t_N}) - \hat{u}\big(\{X_{t_n}\}_{0 \le n \le N}, \{W_{t_n}\}_{0 \le n \le N}\big)\big|^2\Big]
where \hat{u} is the approximation of u(t_N, X_{t_N}) produced by the network.
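The following PyTorch sketch is a minimal, illustrative instance of this training loop for a special case (μ = 0, σ = identity, generator f = 0, terminal condition g(x) = ||x||²), so that the learned value u(0, 0) should approach E[||W_T||²] = d·T; the network sizes and training settings are arbitrary choices, not those of the original papers:

import torch
import torch.nn as nn

d, N, T = 10, 20, 1.0           # dimension, number of time steps, time horizon
dt = T / N

def g(x):                        # terminal condition g(x) = ||x||^2
    return (x ** 2).sum(dim=1, keepdim=True)

def f(t, x, y, z):               # generator of the BSDE (zero here: heat equation case)
    return torch.zeros_like(y)

class DeepBSDE(nn.Module):
    def __init__(self):
        super().__init__()
        self.y0 = nn.Parameter(torch.tensor([[0.5]]))      # trainable guess for u(0, X_0)
        self.z_nets = nn.ModuleList([
            nn.Sequential(nn.Linear(d, d + 10), nn.ReLU(), nn.Linear(d + 10, d))
            for _ in range(N)])                             # one sub-network per time step

    def forward(self, batch):
        x = torch.zeros(batch, d)                           # X_0 = 0
        y = self.y0.expand(batch, 1)
        for n in range(N):
            z = self.z_nets[n](x)                           # approximates sigma^T grad u at t_n
            dw = torch.randn(batch, d) * dt ** 0.5          # Brownian increments
            y = y - f(n * dt, x, y, z) * dt + (z * dw).sum(dim=1, keepdim=True)
            x = x + dw                                      # Euler step for X (mu = 0, sigma = I)
        return x, y

model = DeepBSDE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
for step in range(2000):
    x_T, y_T = model(256)
    loss = ((g(x_T) - y_T) ** 2).mean()                     # match the terminal condition
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(float(model.y0))   # should be close to d * T = 10 for this toy problem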
Neural network architecture
Deep learning encompass a class of machine learning techniques that have transformed numerous fields by enabling the modeling and interpretation of intricate data structures. These methods, often referred to as deep learning, are distinguished by their hierarchical architecture comprising multiple layers of interconnected nodes, or neurons. This architecture allows deep neural networks to autonomously learn abstract representations of data, making them particularly effective in tasks such as image recognition, natural language processing, and financial modeling. The core of this method lies in designing an appropriate neural network structure (such as fully connected networks or recurrent neural networks) and selecting effective optimization algorithms.
The choice of deep BSDE network architecture, the number of layers, and the number of neurons per layer are crucial hyperparameters that significantly impact the performance of the deep BSDE method. The deep BSDE method constructs neural networks to approximate the solution and its (scaled) gradient, and utilizes stochastic gradient descent and other optimization algorithms for training.
The figure illustrates the network architecture for the deep BSDE method. Some variables are approximated directly by sub-networks, while others are computed iteratively through the network. There are three types of connections in this network:
i) the multilayer feedforward neural networks approximating the spatial gradients at each time step; the weights of these subnetworks are the parameters being optimized.
ii) the forward iteration providing the final output of the network as an approximation of the solution at the terminal time; no parameters are optimized in this type of connection.
iii) the shortcut connections linking blocks at different times; again, no parameters are optimized in this type of connection.
Algorithms
Adam optimizer
This function implements the Adam algorithm for minimizing the target function f(θ).
Function: ADAM(α, β1, β2, ε, f, θ0) is
    m0 := 0                                   // Initialize the first moment vector
    v0 := 0                                   // Initialize the second moment vector
    t := 0                                    // Initialize timestep
    // Step 1: Initialize parameters
    θt := θ0
    // Step 2: Optimization loop
    while θt has not converged do
        t := t + 1
        gt := ∇f(θ(t-1))                      // Compute gradient of f at timestep t
        mt := β1 · m(t-1) + (1 - β1) · gt     // Update biased first moment estimate
        vt := β2 · v(t-1) + (1 - β2) · gt²    // Update biased second raw moment estimate
        m̂t := mt / (1 - β1^t)                 // Compute bias-corrected first moment estimate
        v̂t := vt / (1 - β2^t)                 // Compute bias-corrected second moment estimate
        θt := θ(t-1) - α · m̂t / (√v̂t + ε)     // Update parameters
    return θt
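For concreteness, the update rule above can also be written as a short self-contained Python function; the quadratic objective at the end is only a toy example, chosen here to exercise the optimizer:

import numpy as np

def adam(grad, theta0, alpha=0.01, beta1=0.9, beta2=0.999, eps=1e-8, n_steps=2000):
    """Minimize a function given its gradient, following the Adam update rule."""
    theta = np.asarray(theta0, dtype=float)
    m = np.zeros_like(theta)   # first moment estimate
    v = np.zeros_like(theta)   # second raw moment estimate
    for t in range(1, n_steps + 1):
        g = grad(theta)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g ** 2
        m_hat = m / (1 - beta1 ** t)          # bias correction
        v_hat = v / (1 - beta2 ** t)
        theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return theta

# Toy objective: f(theta) = ||theta - target||^2, whose gradient is 2 * (theta - target)
target = np.array([1.0, -2.0, 3.0])
print(adam(lambda th: 2 * (th - target), np.zeros(3)))   # approaches the target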
With the ADAM algorithm described above, we now present the pseudocode corresponding to a multilayer feedforward neural network:
Backpropagation algorithm
This function implements the backpropagation algorithm for training a multi-layer feedforward neural network.
Function: BackPropagation(set ) is
// Step 1: Random initialization
// Step 2: Optimization loop
repeat until termination condition is met:
for each :
// Compute output
// Compute gradients
for each output neuron :
// Gradient of output neuron
for each hidden neuron :
// Gradient of hidden neuron
// Update weights
for each weight :
// Update rule for weight
for each weight :
// Update rule for weight
// Update parameters
for each parameter :
// Update rule for parameter
for each parameter :
// Update rule for parameter
// Step 3: Construct the trained multi-layer feedforward neural network
return trained neural network
Combining the ADAM algorithm and a multilayer feedforward neural network, we provide the following pseudocode for solving the optimal investment portfolio:
Numerical solution for optimal investment portfolio
This function calculates the optimal investment portfolio using the specified parameters and stochastic processes.
function OptimalInvestment(, , ) is
// Step 1: Initialization
for to maxstep do
, // Parameter initialization
for to do
// Update feedforward neural network unit
// Step 2: Compute loss function
// Step 3: Update parameters using ADAM optimization
// Step 4: Return terminal state
return
Application
Deep BSDE is widely used in the fields of financial derivatives pricing, risk management, and asset allocation. It is particularly suitable for:
High-Dimensional Option Pricing: Pricing complex derivatives like basket options and Asian options, which involve multiple underlying assets. Traditional methods such as finite difference methods and Monte Carlo simulations struggle with these high-dimensional problems due to the curse of dimensionality, where the computational cost increases exponentially with the number of dimensions. Deep BSDE methods utilize the function approximation capabilities of deep neural networks to manage this complexity and provide accurate pricing solutions. The deep BSDE approach is particularly beneficial in scenarios where traditional numerical methods fall short. For instance, in high-dimensional option pricing, methods like finite difference or Monte Carlo simulations face significant challenges due to the exponential increase in computational requirements with the number of dimensions. Deep BSDE methods overcome this by leveraging deep learning to approximate solutions to high-dimensional PDEs efficiently.
Risk Measurement: Calculating risk measures such as Conditional Value-at-Risk (CVaR) and Expected shortfall (ES). These risk measures are crucial for financial institutions to assess potential losses in their portfolios. Deep BSDE methods enable efficient computation of these risk metrics even in high-dimensional settings, thereby improving the accuracy and robustness of risk assessments. In risk management, deep BSDE methods enhance the computation of advanced risk measures like CVaR and ES, which are essential for capturing tail risk in portfolios. These measures provide a more comprehensive understanding of potential losses compared to simpler metrics like Value-at-Risk (VaR). The use of deep neural networks enables these computations to be feasible even in high-dimensional contexts, ensuring accurate and reliable risk assessments.
Dynamic Asset Allocation: Determining optimal strategies for asset allocation over time in a stochastic environment. This involves creating investment strategies that adapt to changing market conditions and asset price dynamics. By modeling the stochastic behavior of asset returns and incorporating it into the allocation decisions, deep BSDE methods allow investors to dynamically adjust their portfolios, maximizing expected returns while managing risk effectively. For dynamic asset allocation, deep BSDE methods offer significant advantages by optimizing investment strategies in response to market changes. This dynamic approach is critical for managing portfolios in a stochastic financial environment, where asset prices are subject to random fluctuations. Deep BSDE methods provide a framework for developing and executing strategies that adapt to these fluctuations, leading to more resilient and effective asset management.
Advantages and disadvantages
Advantages
Sources:
High-dimensional capability: Compared to traditional numerical methods, deep BSDE performs exceptionally well in high-dimensional problems.
Flexibility: The incorporation of deep neural networks allows this method to adapt to various types of BSDEs and financial models.
Parallel computing: Deep learning frameworks support GPU acceleration, significantly improving computational efficiency.
Disadvantages
Sources:
Training time: Training deep neural networks typically requires substantial data and computational resources.
Parameter sensitivity: The choice of neural network architecture and hyperparameters greatly impacts the results, often requiring experience and trial-and-error.
See also
Bellman equation
Dynamic programming
Applications of artificial intelligence
List of artificial intelligence projects
Backward stochastic differential equation
Stochastic process
Stochastic volatility
Stochastic partial differential equations
Diffusion process
Stochastic difference equation
References
Further reading
Evans, Lawrence C. (2013). An Introduction to Stochastic Differential Equations. American Mathematical Society.
Desmond Higham and Peter Kloeden: "An Introduction to the Numerical Simulation of Stochastic Differential Equations", SIAM, (2021).
Stochastic simulation
Numerical analysis | Deep backward stochastic differential equation method | [
"Mathematics"
] | 2,505 | [
"Computational mathematics",
"Mathematical relations",
"Approximations",
"Numerical analysis"
] |
77,310,453 | https://en.wikipedia.org/wiki/Single-pixel%20imaging | Single-pixel imaging is a computational imaging technique for producing spatially-resolved images using a single detector instead of an array of detectors (as in conventional camera sensors). A device that implements such an imaging scheme is called a single-pixel camera. Combined with compressed sensing, the single-pixel camera can recover images from fewer measurements than the number of reconstructed pixels.
Single-pixel imaging differs from raster scanning in that multiple parts of the scene are imaged at the same time, in a wide-field fashion, by using a sequence of mask patterns either in the illumination or in the detection stage. A spatial light modulator (such as a digital micromirror device) is often used for this purpose.
Single-pixel cameras were developed to be simpler, smaller, and cheaper alternatives to conventional, silicon-based digital cameras, with the ability to also image a broader spectral range. Since then, it has been adapted and demonstrated to be suitable for numerous applications in microscopy, tomography, holography, ultrafast imaging, FLIM and remote sensing.
History
The origins of single-pixel imaging can be traced back to the development of dual photography and compressed sensing in the mid 2000s. The seminal paper written by Duarte et al. in 2008 at Rice University concretised the foundations of the single-pixel imaging technique. It also presented a detailed comparison of different scanning and imaging modalities in existence at that time. These developments were also one of the earliest applications of the digital micromirror device (DMD), developed by Texas Instruments for their DLP projection technology, for structured light detection.
Soon, the technique was extended to computational ghost imaging, terahertz imaging, and 3D imaging. Systems based on structured detection were often termed single-pixel cameras, whereas those based on structured illumination were often referred to as computational ghost imaging. By using pulsed-lasers as the light source, single-pixel imaging was applied for time-of-flight measurements used in depth-mapping LiDAR applications. Apart from the DMD, different light modulation schemes were also experimented with liquid crystals and LED arrays.
In the early 2010s, single-pixel imaging was exploited in fluorescence microscopy, for imaging biological samples. Coupled with the technique of time-correlated single photon counting (TCSPC), the use of single-pixel imaging for compressive fluorescence lifetime imaging microscopy (FLIM) has also been explored. Since the late 2010s, machine learning techniques, especially Deep learning, have been increasingly used to optimise the illumination, detection, or reconstruction strategies of single-pixel imaging.
Principles
Theory
In sampling, digital data acquisition involves uniformly sampling N discrete points of an analog signal at or above the Nyquist rate. For example, in a digital camera, the sampling is done with a 2-D array of N pixelated detectors on a CCD or CMOS sensor (N is usually millions in consumer digital cameras). Such a sample can be represented using the vector x with N elements. A vector can be expressed as the coefficients of an orthonormal basis expansion: x = Σ_i s_i ψ_i, where {ψ_i} are the basis vectors. Or, more compactly: x = Ψs, where Ψ is the N × N basis matrix formed by stacking the {ψ_i}. It is often possible to find a basis in which the coefficient vector s is sparse (with only K non-zero coefficients) or r-compressible (the sorted coefficients decay as a power law). This is the principle behind compression standards like JPEG and JPEG-2000, which exploit the fact that natural images tend to be compressible in the DCT and wavelet bases. Compressed sensing aims to bypass the conventional "sample-then-compress" framework by directly acquiring a condensed representation with M < N linear measurements. Similar to the previous step, this can be represented mathematically as y = Φx = ΦΨs, where y is an M × 1 vector and Φ is the M × N measurement matrix. This so-called under-determined measurement makes the inverse problem an ill-posed problem, which in general is unsolvable. However, compressed sensing exploits the fact that with the proper design of Φ, the compressible signal can be exactly or approximately recovered using computational methods. It has been shown that incoherence between the bases Φ and Ψ (along with the existence of sparsity in s) is sufficient for such a scheme to work. Popular choices of Φ are random matrices or random subsets of basis vectors from Fourier, Walsh-Hadamard or Noiselet bases. It has also been shown that the optimisation given by min ||s'||_1 subject to ΦΨs' = y works better to retrieve the signal from the random measurements y than other common methods like least-squares minimisation. An improvement to the optimisation algorithm, based on total-variation minimisation, is especially useful for reconstructing images directly in the pixel basis.
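A small NumPy sketch may make the recovery step concrete. It solves the Lagrangian form of the sparse recovery problem with iterative soft-thresholding (ISTA); the problem sizes, the regularisation weight and the use of ISTA instead of a constrained l1 or total-variation solver are illustrative assumptions.

import numpy as np

def ista_recover(Phi, y, lam=0.05, iters=500):
    # iterative soft-thresholding for min_s 0.5*||Phi s - y||^2 + lam*||s||_1
    L = np.linalg.norm(Phi, 2) ** 2        # Lipschitz constant of the gradient
    s = np.zeros(Phi.shape[1])
    for _ in range(iters):
        grad = Phi.T @ (Phi @ s - y)       # gradient of the quadratic term
        z = s - grad / L
        s = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return s

# toy example: N = 256 unknowns, K = 5 non-zeros, M = 64 random measurements
rng = np.random.default_rng(0)
N, M, K = 256, 64, 5
s_true = np.zeros(N)
s_true[rng.choice(N, K, replace=False)] = rng.normal(size=K)
Phi = rng.normal(size=(M, N)) / np.sqrt(M)   # random measurement matrix
y = Phi @ s_true                             # M compressive measurements
s_hat = ista_recover(Phi, y)                 # approximate recovery of the sparse vector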
Single-pixel camera
The single-pixel camera is an optical computer that implements the compressed sensing measurement architecture described above. It works by sequentially measuring the inner products between the image x and the set of 2-D test functions {φ_i}, to compute the measurement vector y. In a typical setup, it consists of two main components: a spatial light modulator (SLM) and a single-pixel detector. The light from a wide-field source is collimated and projected onto the scene, and the reflected/transmitted light is focussed on to the detector with lenses. The SLM is used to realise the test functions {φ_i}, often as binary pattern masks, and to introduce them either in the illumination or in the detection path. The detector integrates and converts the light signal into an output voltage, which is then digitised by an A/D converter and analysed by a computer.
Rows from a randomly permuted (for incoherence) Walsh-Hadamard matrix, reshaped into square patterns, are commonly used as binary test functions in single-pixel imaging. To obtain both positive and negative values (±1 in this case), the mean light intensity can be subtracted from each measurement, since the SLM can produce only binary patterns with 0 (off) and 1 (on) conditions. An alternative is to split the positive and negative elements into two sets, measure both with the negative set inverted (i.e., -1 replaced with +1), and subtract the measurements in the end. Values between 0 and 1 can be obtained by dithering the DMD micromirrors during the detector's integration time.
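The differential measurement scheme described above can be sketched in a few lines of NumPy; the pattern size, the permutation seed and the use of the full pattern set (so that the reconstruction is a simple inverse transform rather than a compressed-sensing recovery) are illustrative assumptions.

import numpy as np
from scipy.linalg import hadamard

def differential_measurements(image, H):
    # image: flattened scene; each row of H is a +/-1 Hadamard pattern
    pos = (H > 0).astype(float)        # mirrors switched on for the +1 entries
    neg = (H < 0).astype(float)        # mirrors switched on for the -1 entries
    return pos @ image - neg @ image   # equivalent to measuring with +/-1 patterns

order = 64
rng = np.random.default_rng(0)
H = hadamard(order)[:, rng.permutation(order)]   # random permutation for incoherence
img = rng.random(order)                          # toy 8x8 scene, flattened
y = differential_measurements(img, H)            # one detector reading per pattern pair
recon = H.T @ y / order                          # exact inverse when all patterns are used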
Examples of commonly used detectors include photomultiplier tubes, avalanche photodiodes, or hybrid photomultipliers (a sandwich of layers of photon amplification stages). A spectrometer can also be used for multispectral imaging, along with an array of detectors, one for each spectral channel. Another common addition is a time-correlated single photon counting (TCSPC) board to process the detector output, which, coupled with a pulsed laser, enables lifetime measurement and is useful in biomedical imaging.
Advantages and drawbacks
The most important advantage of the single-pixel design is its reduced size, complexity, and cost of the photon detector (just a single unit). This enables the use of exotic detectors capable of multi-spectral, time-of-flight, photon counting, and other fast detection schemes. This made single-pixel imaging suitable for various fields, ranging from microscopy to astronomy.
The quantum efficiency of a photodiode is also higher than that of the pixel sensors in a typical CCD or CMOS array. Coupled with the fact that each single-pixel measurement collects many times more photons than an average pixel sensor, this helps reduce image distortion from dark noise and read-out noise significantly. Another important advantage is the fill factor of SLMs like a DMD, which can reach around 90% (compared to that of a CCD/CMOS array, which is only around 50%). In addition, single-pixel imaging inherits the theoretical advantages that underpin the compressed sensing framework, such as its universality (the same measurement matrix Φ works for many sparsifying bases Ψ) and robustness (measurements have equal priority, and thus the loss of a measurement does not corrupt the entire reconstruction).
The main drawback of the single-pixel imaging technique is the tradeoff between acquisition speed and spatial resolution. Fast acquisition requires projecting fewer patterns (since each pattern is measured sequentially), which leads to a lower resolution of the reconstructed image. An innovative method of "fusing" the low-resolution single-pixel image with a high-spatial-resolution CCD/CMOS image (dubbed "Data Fusion") has been proposed to mitigate this problem. Deep learning methods that learn the optimal set of patterns for imaging a particular category of samples are also being developed to improve the speed and reliability of the technique.
Applications
Some of the research fields that are increasingly employing and developing single-pixel imaging are listed below:
Multispectral and hyperspectral imaging
Infrared imaging spectroscopy
Diffuse optics and imaging through scattering media
Time-resolved and life-time microscopy
Fluorescence spectroscopy
X-ray diffraction tomography
Biomedical imaging
Terahertz and ultrafast imaging
Magnetic resonance imaging
Photoacoustic imaging
Holography and phase imaging
Long-range imaging and remote sensing
Cytometry and polarimetry
Real-time and post-processed video
See also
Compressed sensing
Computational imaging
Structured light
Digital micromirror device
Photodetector
Hadamard matrix
References
Further reading
External links
Optical imaging
Signal processing | Single-pixel imaging | [
"Technology",
"Engineering"
] | 1,902 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing"
] |
77,310,456 | https://en.wikipedia.org/wiki/Time-interleaved%20ADC | Time interleaved (TI) ADCs are Analog-to-Digital Converters (ADCs) that involve M converters working in parallel. Each of the M converters is referred to as sub-ADC, channel or slice in the literature. The time interleaving technique, akin to multithreading in computing, involves using multiple converters in parallel to sample the input signal at staggered intervals, increasing the overall sampling rate and improving performance without overburdening the single ADCs.
History
Early concept
The concept of time interleaving can be traced back to the 1960s. One of the earliest mentions of using multiple ADCs to increase sampling rates appeared in the work of Bernard M. Oliver and Claude E. Shannon. Their pioneering work on communication theory and sampling laid the groundwork for the theoretical basis of time interleaving. However, practical implementations were limited by the technology of the time.
Development
In the 1980s, significant advancements were made: W. C. Black and D. A. Hodges from the University of California, Berkeley successfully implemented the first prototype of a time interleaved ADC. In particular, they designed a 4-way interleaved converter working at 2.5 MSample/s. Each slice of the converter was a 7-stage SAR pipeline ADC running at 625 kSample/s. An effective number of bits (ENOB) equal to 6.2 was measured for the proposed converter with a probing input signal at 100 kHz. The work was presented at ISSCC 1980 and the paper focused on the practical challenges of implementing TI ADCs, including the synchronization and calibration of multiple channels to reduce mismatches.
In 1987, Ken Poulton and other researchers of the HP Labs developed the first product based on Time Interleaved ADCs: the HP 54111D digital oscilloscope.
Commercialization
In the 1990s, the TI ADC technology saw further advancements driven by the increasing demand for high-speed data conversion in telecommunications and other fields. A notable project during this period was the development of high-speed ADCs for digital oscilloscopes by Tektronix. Engineers at Tektronix implemented TI ADCs to achieve the high sampling rates necessary for capturing fast transient signals in test and measurement equipment. As a result of this work, the Tektronix TDS350, a two-channel, 200 MHz, 1 GSample/s digital storage scope, was commercialized in 1991.
Widespread adoption
By the late 1990s, TI ADCs had become commercially viable. One of the key projects that showcased the potential of TI ADCs was the development of the GSM (Global System for Mobile Communications) standard, where high-speed ADCs were essential for digital signal processing in mobile phones. Companies like Analog Devices and Texas Instruments began to offer TI ADCs as standard products, enabling widespread adoption in various applications.
Nowadays
The 21st century has seen continued innovation in TI ADC technology. Researchers and engineers have focused on further improving the performance and integration of TI ADCs to meet the growing demands of digital systems. Key figures in this era include Boris Murmann and his colleagues at Stanford University, who have contributed to the development of advanced calibration techniques and low-power design methods for TI ADCs.
Future perspectives
Today, TI ADCs are used in a wide range of applications, from 5G telecommunications to high-resolution medical imaging. The future of TI ADCs looks promising, with ongoing research focusing on further improving their performance and expanding their application areas. Emerging technologies such as autonomous vehicles, advanced radar systems, and artificial intelligence-driven signal processing will continue to drive the demand for high-speed, high-resolution ADCs.
Working principle
In a time-interleaved system, the conversion time required by each sub-ADC is equal to M·T_s. If the outputs of the multiple channels are properly combined, the overall system can be considered as a single converter operating at a sampling period equal to T_s, where M represents the number of channels or sub-ADCs in the TI system and T_s denotes the overall sampling period of the interleaved converter.
To illustrate this concept, let us delve into the conversion process of a TI ADC with reference to the first figure of this paragraph. The figure shows the time diagram of a data converter that employs four interleaved channels. The input signal (depicted as a blue waveform) is a sinusoidal wave at a frequency f_in lower than f_clk/2. Here, f_clk is the clock frequency, which is the reciprocal of T_s, the overall sampling period of the TI ADC. This relationship aligns with the Shannon-Nyquist sampling theorem, which states that the sampling rate must be at least twice the highest frequency present in the input signal to accurately reconstruct the signal without aliasing.
In a TI ADC, every T_s seconds, one of the channels acquires a sample of the input signal. The conversion operation performed by each sub-ADC takes M·T_s seconds and, after the conversion, a digital multiplexer sequentially selects the output from one of the sub-ADCs. This selection occurs in a specific order, typically from the first sub-ADC to the M-th sub-ADC, and then the cycle repeats.
At any given moment, each channel is engaged in converting a different sample. Consequently, the aggregate data rate of the system is faster than the data rate of a single sub-ADC by a factor of M. This is because the TI system essentially parallelizes the conversion process across multiple sub-ADCs. The factor M, representing the number of interleaved channels, thus quantifies the increase in the overall sampling rate of the entire system.
To conclude, the time-interleaving method effectively multiplies the conversion speed of each sub-ADC by M. As a result, even though each sub-ADC operates at a relatively slow pace, the combined output of the TI system is characterized by a higher sampling rate. Time interleaving is therefore a powerful technique in the design and implementation of data converters, since it enables the creation of high-speed ADCs using components that individually have much lower performance capabilities in terms of speed.
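The re-interleaving of the channel outputs can be illustrated with a short NumPy sketch; the aggregate rate, the test tone and the four-channel configuration are illustrative assumptions, and the sub-ADCs are assumed ideal.

import numpy as np

def ti_adc_samples(signal, f_s, n_samples, n_channels=4):
    # Each sub-ADC k samples at f_s / n_channels, offset by k periods of 1/f_s.
    t = np.arange(n_samples) / f_s
    sub_samples = [signal(t[k::n_channels]) for k in range(n_channels)]
    out = np.empty(n_samples)
    for k in range(n_channels):
        out[k::n_channels] = sub_samples[k]   # multiplexer re-interleaves the channels
    return out

f_s = 1e9                                     # 1 GS/s aggregate rate (illustrative)
x = ti_adc_samples(lambda t: np.sin(2 * np.pi * 123e6 * t), f_s, 4096)
# with ideal channels, x equals the input sampled directly at f_s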
Possible architectures
Two architectures are possible to implement a time interleaved ADC. The first architecture is depicted in the first figure of the paragraph and is characterized by the presence of a single Sample and Hold (S&H) circuit for the entire structure. The sampler operates at the full sampling frequency f_s and acquires the samples for all the channels of the TI ADC. Once a sample is acquired, an analog demultiplexer distributes it to the corresponding sub-ADC. This approach centralizes the sampling process, ensuring uniformity in the acquired samples. However, it places stringent speed requirements on the S&H circuit, since it must operate at the full sampling rate of the ADC system.
In contrast, the second architecture, illustrated in the second figure of the paragraph, employs a different S&H circuit for each channel, each operating at a reduced frequency f_s/M, where M is once again the number of interleaved channels. This solution significantly relaxes the speed requirements for each S&H circuit, as they only need to operate at a fraction of the overall sampling rate. This approach mitigates the challenge of high-speed operation of the first architecture. However, this benefit comes with trade-offs, namely increased area occupation and higher power dissipation due to the additional circuitry required to implement multiple S&H circuits.
Advantages and trade-offs of the two architectures
The choice between these two architectures depends on the specific requirements and constraints of the application. The single S&H circuit architecture offers a compact and potentially lower-power solution, as it eliminates the redundancy of multiple S&H circuits. The centralized sampling can also reduce mismatches between channels, as all samples are derived from a single source. However, the high-speed requirement of the single S&H circuit can be a significant challenge, particularly at very high sampling rates where achieving the necessary performance may require more advanced and costly technology.
On the other hand, the multiple S&H circuit architecture distributes the sampling load, allowing each S&H circuit to operate at a lower speed. This can be advantageous in applications where high-speed circuits are difficult or expensive to implement. Additionally, this architecture can offer improved flexibility in managing timing and gain mismatches between channels. Each S&H circuit can be independently optimized for its specific operating conditions, potentially leading to better overall performance. The trade-offs include a larger footprint on the integrated circuit and increased power consumption, which may be critical factors in power-sensitive or space-constrained applications.
In practical implementations, the choice between these architectures is influenced by several factors, including the required sampling rate, power budget, available silicon area, and the acceptable level of complexity in calibration and error correction. For instance, in high-speed communication systems the single S&H circuit architecture might be preferred despite its stringent speed requirements, due to its compact design and potentially lower power consumption. Conversely, in applications where power is less of a concern but achieving ultra-high speeds is challenging, the multiple S&H circuit architecture might be more suitable.
Sources of errors
Ideally, all the sub-ADCs are identical. In practice, however, they end up being slightly different due to process, voltage and temperature (PVT) variations. If not taken care of, sub-ADC mismatches can jeopardize the performance of TI ADCs since they show up in the output spectrum as spectral tones.
Offset mismatches (i.e., different input-referred offsets for each sub-ADC) are superimposed on the converted signal as a sequence of period M·T_s, affecting the output spectrum of the ADC with spurious tones, whose power depends on the magnitude of the offsets, located at frequencies k·f_s/M, where M represents the number of channels and k is an integer ranging from 1 to M-1.
Gain errors affect the amplitude of the converted signal and are transferred to the output as an amplitude modulation (AM) of the input signal with a sequence of period M·T_s. As a matter of fact, this mechanism introduces spurious harmonics at frequencies f_in ± k·f_s/M, whose power depends both on the amplitude of the input signal and on the magnitude of the gain error sequence.
Finally, skew mismatches are due to the channels being timed by different phases of the same clock signal. If one timing signal is skewed with respect to the others, spurious harmonics will be generated in the output spectrum. It can be demonstrated that these spurs are located at the frequencies f_in ± k·f_s/M, like the gain-error spurs. Moreover, their power depends both on the magnitude of the skew between the control phases and on the value of the input signal frequency.
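A behavioural simulation makes the spur locations visible; the mismatch magnitudes, the tone frequency and the four-channel configuration below are illustrative assumptions.

import numpy as np

def ti_adc_with_mismatch(f_in, f_s, n, offsets, gains, skews):
    # offsets, gains, skews: one entry per channel; skews are timing errors in seconds
    m = len(offsets)
    k = np.arange(n)
    t = k / f_s + np.asarray(skews)[k % m]    # each channel samples slightly early or late
    return (np.asarray(gains)[k % m] * np.sin(2 * np.pi * f_in * t)
            + np.asarray(offsets)[k % m])

f_s, f_in, n = 1.0e9, 200.3e6, 8192
x = ti_adc_with_mismatch(f_in, f_s, n,
                         offsets=[0.0, 1e-3, -1e-3, 5e-4],
                         gains=[1.0, 1.001, 0.999, 1.0005],
                         skews=[0.0, 1e-12, -2e-12, 1e-12])
spectrum_db = 20 * np.log10(np.abs(np.fft.rfft(x * np.hanning(n))) + 1e-12)
# offset spurs appear at k*f_s/4, gain and skew spurs at f_in +/- k*f_s/4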
Channel mismatches in a TI ADC can seriously degrade its Spurious-Free Dynamic Range (SFDR) and its Signal-to-Noise-and-Distortion Ratio (SNDR). To recover the spectral purity of the converter, the proven solution consists of compensating these non-idealities with digitally implemented corrections. Despite being able to recover the overall spectral purity by suppressing the mismatch spurs, digital calibrations can significantly increase the overall power consumption of the receiver and may not be as effective when the input signal is broadband. For this reason, methods to provide higher stability and usability in real-world cases should be actively researched.
Typical applications
Telecommunications
As cellular communications systems evolve, the performance of the receivers becomes more and more demanding. For example, the channel bandwidth offered by the 4G network can be as high as 20 MHz, whereas it can range from 400 MHz up to 1 GHz in the current 5G NR network. On top of that, the complexity of signal modulation also increased from 64-QAM in 4G to 256-QAM in 5G NR.
The tighter requirements impose new design challenges on modern receivers, whose performance relies on the analog-to-digital interface provided by ADCs. In 4G receivers, the data conversion is performed by Delta-Sigma Modulators (DSMs), as they are easily reconfigurable: It is sufficient to modify the oversampling ratio (OSR), the loop order or the quantizer resolution to adjust the bandwidth of the data converter according to the need. This is a desirable feature of an ADC in receivers supporting multiple wireless communication standards.
In 5G receivers, instead, DSMs are not the preferred choice: The bandwidth of the receiver has to be higher than a few hundred MHz, whereas the signal bandwidth of a DSM is only a fraction of half of the sampling frequency f_s. In mathematical terms, the bandwidth is limited to f_s/(2·OSR). Thus, in practice, it is hard if not impossible to achieve the required sampling frequency with DSMs. For this reason, 5G receivers typically rely on Nyquist ADCs, in which the signal bandwidth can be as high as f_s/2, according to the Shannon-Nyquist sampling theorem.
The ADCs employed in 5G receivers do not only require a high sampling rate to deal with large channel bandwidths, but also a reasonable number of bits. A high resolution is necessary for the data converter to enable the use of the high-order modulation schemes, which are fundamental to achieve high throughputs with an efficient bandwidth utilization. The resolution of a data converter is defined as the minimum voltage value that it can resolve, i.e., its Least Significant Bit (LSB). The latter parameter depends on the number of physical bits (N) of the converter as LSB = FSR/2^N (where FSR is the full scale range of the ADC). Hence, the larger the number of levels, the finer the conversion will be. In practice, however, noise (e.g., jitter and thermal noise) poses a fundamental limit on the achievable resolution, which is lower than the physical number of bits and is typically expressed in terms of ENOB.
Usually, for 5G receivers, ADCs with an ENOB of 12 bits and a bandwidth up to the GHz range are the favourable choice. Time interleaved ADCs are frequently employed for this application since they are capable of meeting the above-mentioned requirements. In fact, TI ADCs utilize multiple ADC channels operating in parallel, and this technique effectively increases the overall sampling rate, allowing the receiver to handle the wide bandwidths required by the 5G network.
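The resolution and noise relations mentioned above can be checked with a few lines of Python; the numeric values are illustrative assumptions.

def lsb(full_scale_range, n_bits):
    # ideal resolution of an N-bit converter: LSB = FSR / 2^N
    return full_scale_range / 2 ** n_bits

def enob(sndr_db):
    # effective number of bits from the measured SNDR (standard relation)
    return (sndr_db - 1.76) / 6.02

print(lsb(1.0, 12))   # about 244 uV for a 12-bit ADC with a 1 V full-scale range
print(enob(74.0))     # roughly 12 effective bits requires about 74 dB of SNDR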
Direct RF sampling
A receiver is one of the essential components of a communication system. In particular, a receiver is responsible for the conversion of radio signals into digital words, allowing the signal to be further processed by electronic devices. Typically, a receiver includes an antenna, a pre-selector filter, a low-noise amplifier (LNA), a mixer, a local oscillator, an intermediate frequency (IF) filter, a demodulator and an analog-to-digital converter.
The antenna is the first component in a receiver system; it captures electromagnetic waves from the air and converts these radio waves into electrical signals. These signals are then filtered by the pre-selector, which guarantees that only the desired frequency range from the signals captured by the antenna is passed to the next stages of the receiver. The signal is then amplified by an LNA. The amplification ensures that the signal is strong enough to be processed effectively by the subsequent stages of the system. The amplified signal is then mixed with a stable signal from the local oscillator to produce an intermediate frequency (IF) signal. This process, known as heterodyning, shifts the frequency of the received signal to a lower, more manageable IF. The IF signal undergoes further filtering to remove any remaining unwanted signals and noise. Finally, a demodulator extracts the original information signal from the modulated carrier wave. Precisely, the demodulator converts the IF signal back into the baseband signal, which contains the transmitted information. Different demodulation techniques can be used depending on the type of modulation employed (e.g., amplitude modulation (AM), frequency modulation (FM), or phase modulation (PM)). As a last step, an ADC converts the continuous analog signal into a discrete digital signal, which can be processed by digital signal processors (DSPs) or microcontrollers. This step is crucial for enabling advanced digital signal processing techniques.
To further improve the power efficiency and cost of a receiver, the paradigm of Direct RF Sampling is emerging. According to this technique, the analog signal at radio frequency is simply fed to the ADC, avoiding the downconversion to an intermediate frequency altogether.
Direct RF Sampling has significant advantages in terms of system design and performance. By removing the downconversion stage, the design complexity is reduced, leading to lower power consumption and cost. Additionally, the absence of the mixer and local oscillator means there are fewer components that can introduce noise and distortion, potentially improving the signal-to-noise ratio (SNR) and linearity of the receiver.
However, directly sampling the radio-frequency signal imposes stringent requirements on the performance of the ADC. The signal bandwidth of the ADC in the receiver must be a few GHz to handle the high-frequency signals directly. Achieving such high values with a single ADC is challenging due to limitations in speed, power consumption and resolution.
To meet these demanding requirements, Time interleaved ADC systems are typically adopted. In fact, TI ADCs utilize multiple slower sub-ADCs operating in parallel, each sampling the input signal at different time intervals. By interleaving the sampling process, the effective sampling rate of the overall system is increased, allowing it to handle the high bandwidths required for direct RF sampling.
References
Electronics
Digital signal processing
Electronic circuits | Time-interleaved ADC | [
"Engineering"
] | 3,619 | [
"Electronic engineering",
"Electronic circuits"
] |
77,310,528 | https://en.wikipedia.org/wiki/Ceria%20based%20thermochemical%20cycles | A ceria based thermochemical cycle is a type of two-step thermochemical cycle that uses as oxygen carrier cerium oxides (/) for synthetic fuel production such as hydrogen or syngas. These cycles are able to obtain either hydrogen () from the splitting of water molecules (), or also syngas, which is a mixture of hydrogen () and carbon monoxide (), by also splitting carbon dioxide () molecules alongside water molecules. These type of thermochemical cycles are mainly studied for concentrated solar applications.
Types of cycles
These cycles are based on the two-step redox thermochemical cycle. In the first step, a metal oxide, such as ceria, is reduced by providing heat to the material, liberating oxygen. In the second step, a stream of steam oxidises the previously reduced oxide back to its starting state, therefore closing the cycle. Depending on the stoichiometry of the reactions, that is, the relation between the reactants and products of the chemical reaction, these cycles can be classified into two categories.
Stoichiometric ceria cycle
The stoichiometric ceria cycle uses the cerium(IV) oxide (CeO2) and cerium(III) oxide (Ce2O3) metal oxide pair as oxygen carrier. This cycle is composed of two steps:
A reduction step, to liberate oxygen (O2) from the material: 2CeO2 → Ce2O3 + ½O2
And an oxidation step, to split the water molecules into hydrogen (H2) and oxygen, and/or the carbon dioxide (CO2) molecules into carbon monoxide (CO) and oxygen:
The reaction for hydrogen production: Ce2O3 + H2O → 2CeO2 + H2
The reaction for carbon monoxide production: Ce2O3 + CO2 → 2CeO2 + CO
The reduction step is an endothermic reaction that takes place at temperatures around 2,300 K (2,027 °C) in order to ensure a sufficient reduction. In order to enhance the reduction of the material, low partial pressures of oxygen are required. To obtain these low partial pressures, there are two main possibilities: vacuum pumping the reactor chamber, or using a chemically inert sweep gas such as nitrogen (N2) or argon (Ar).
On the other hand, the oxidation step is an exothermic reaction that can take place at a considerable range of temperatures, from 400 °C up to 1,000 °C. In this case, depending on the fuel to be produced, a stream of steam, carbon dioxide or a mixture of both is introduced to the reaction chamber for hydrogen, carbon monoxide or syngas production respectively. The temperature difference between the two steps presents a challenge for heat recovery, since the existing solid to solid heat exchangers are not highly efficient.
The thermal energy required to achieve these high temperatures is provided by concentrated solar radiation. Due to the high concentration ratio required to achieve this high temperatures, the main technologies used are concentrating solar towers (CST) or parabolic dishes.
The main disadvantage of the stoichiometric ceria cycle lies in the fact that the reduction reaction temperature of cerium(IV) oxide (CeO2) is in the same range as the melting temperature (1,687–2,230 °C) of the material, which in the end results in some melting and sublimation of the oxide and can produce reactor failures such as deposition on the window or sintering of the particles.
Non-stoichiometric ceria cycle
The non-stoichiometric ceria cycle uses only cerium(IV) oxide and, instead of fully reducing it to cerium(III) oxide, performs only a partial reduction. The extent of this reduction is commonly expressed as the reduction extent and is indicated as δ. In this way, by partially reducing ceria, oxygen vacancies are created in the material. The two steps are formulated as follows:
Reduction reaction: CeO2 → CeO2-δ + (δ/2)O2
Oxidation reaction:
For hydrogen production: CeO2-δ + δH2O → CeO2 + δH2
For carbon monoxide production: CeO2-δ + δCO2 → CeO2 + δCO
The main advantage of this cycle is that the reduction temperature is lower, around 1,773 K (1,500 °C), which alleviates the high-temperature demands on the materials and avoids certain problems such as sublimation or sintering. Temperatures above this would result in the complete reduction of the material to cerium(III) oxide, which should be avoided.
In order to reduce the thermal losses of the cycle, the temperature difference between the reduction and oxidation chambers needs to be optimized. This results in partially oxidised states, rather than a full oxidation of the ceria. Due to this, the chemical reactions are commonly expressed considering the two reduction extents δ_red (after the reduction step) and δ_ox (after the oxidation step):
Reduction reaction: CeO2-δ_ox → CeO2-δ_red + ((δ_red - δ_ox)/2)O2
Oxidation reaction:
For hydrogen production: CeO2-δ_red + (δ_red - δ_ox)H2O → CeO2-δ_ox + (δ_red - δ_ox)H2
For carbon monoxide production: CeO2-δ_red + (δ_red - δ_ox)CO2 → CeO2-δ_ox + (δ_red - δ_ox)CO
The main disadvantage of these cycles is the low reduction extent, due to the low non-stoichiometry, which leaves fewer vacancies for the oxidation process and, in the end, translates to lower fuel production rates.
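The impact of the reduction extent on the fuel yield can be estimated with a short calculation; the values of δ_red and δ_ox below are illustrative assumptions.

def hydrogen_per_kg_ceria(delta_red, delta_ox, molar_mass_ceo2=172.115):
    # each cycle creates (delta_red - delta_ox) mol of oxygen vacancies per mol of CeO2,
    # and re-oxidation by steam releases the same number of mol of H2
    mol_ceria = 1000.0 / molar_mass_ceo2        # mol of CeO2 in 1 kg of material
    return mol_ceria * (delta_red - delta_ox)   # mol of H2 per kg of ceria per cycle

print(hydrogen_per_kg_ceria(delta_red=0.05, delta_ox=0.01))   # about 0.23 mol H2 per kg and cycle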
Due to the properties of ceria, other materials are being studied, mainly perovskites based on ceria, to improve the thermodynamic and chemical properties of the metal oxide.
Methane driven non-stoichiometric ceria cycle
Since the temperatures needed to achieve the reduction of the material are considerably high, the reduction of the cerium oxide can be enhanced by providing methane to the reaction. This significantly reduces the temperatures required to reduce the ceria, to between 800 and 1,000 °C, while also producing syngas in the reduction reactor. In this case, the reduction reaction goes as follows: CeO2 + δCH4 → CeO2-δ + δCO + 2δH2
The main disadvantages of this cycle are the carbon deposition on the material, which eventually deactivates it after several cycles and needs to be replaced, and the acquisition of the methane feedstock.
Types of reactors
Depending on the type and topology of the reactors, the cycles will function either in continuous production or in batch production. There are two main types of reactors for these specific cycles:
Monolithic reactors
These types of reactors consist of a solid piece of material, shaped as a reticulated porous ceramic (RPC) foam in order to increase both the surface area and the penetration of the solar radiation. These reactors are shaped as cavity receivers in order to reduce the thermal losses due to reradiation. They usually include a quartz (fused silica) window to let the solar radiation into the cavity.
Since the metal oxide is a solid structure, both reactions must be carried out in the same reactor, which leads to a discontinuous production process, with one step carried out after the other. To avoid these stops in production, multiple reactors can be arranged to approximate a continuous production process. This is usually referred to as a batch process. The intention is to always have one or multiple reactors operating in the oxidation step at the same time, hence always generating hydrogen.
Some new reactor concepts are being studied, in which the RPCs can be moved from one reactor to another, in order to have one single reduction reactor.
Solid particles reactors
These types of reactors try to solve the discontinuity problem of the cycle by using solid particles of the metal oxide instead of solid structures. These particles can be moved from the reduction reactor to the oxidation reactor, which allows a continuous production of fuel. Many types of reactors work with solid particles, from free-falling receivers to packed beds, fluidized beds or rotary kilns.
The main disadvantage of this approach is that, due to the high temperatures reached, the solid particles are susceptible to sintering, a process in which small particles partially melt and stick to one another, creating bigger particles, which reduces their surface area and hinders their transport.
See also
Thermochemical cycle
Solar fuel
Sulfur–iodine cycle
Hybrid sulfur cycle
References
External links
HYDROSOL project. Retrieved 07/07/2024
Sun to Liquid project Retrieved 11/07/2024
Chemical reactions
Hydrogen production
Cerium
Catalysis | Ceria based thermochemical cycles | [
"Chemistry"
] | 1,599 | [
"Catalysis",
"Chemical kinetics",
"nan"
] |
77,310,532 | https://en.wikipedia.org/wiki/Quantum%20Cascade%20Detector | A Quantum Cascade Detector (QCD) is a photodetector sensitive to infrared radiation. The absorption of incident light is mediated by intersubband transitions in a semiconductor multiple-quantum-well structure. The term cascade refers to the characteristic path of the electrons inside the material bandstructure, induced by absorption of incident light.
QCDs are realized by stacking thin layers of semiconductors on a lattice-matched substrate by means of suitable epitaxial deposition processes, including molecular-beam epitaxy and metal organic vapor-phase epitaxy. The design of the quantum wells can be engineered to tune the absorption in a wide range of wavelengths in the infrared spectrum and to achieve broadband operation: QCDs have been demonstrated to operate from the short-wave to the long-wave infrared region and beyond. QCDs operate in photovoltaic mode, meaning that no bias is required to generate a photoresponse. For this reason, QCDs are also referred to as the photovoltaic counterpart of the photoconductive quantum well infrared photodetectors (QWIPs).
Since the vibrational modes of organic molecules are found in the mid-infrared region of the spectrum, QCDs are investigated for sensing applications and integration in dual-comb spectroscopic systems. Moreover, QCDs have been shown to be promising for high-speed operation in free-space communication applications.
History
In 2002, Daniel Hofstetter, Mattias Beck and Jérôme Faist reported the first ever use of an InGaAs/InAlAs quantum-cascade-laser structure for photodetection at room temperature. The specific detectivity of the device was shown to be comparable to the detectivity of more established detectors at the time, such as QWIPs or HgCdTe detectors. This pioneering work stimulated the search for bi-functional optoelectronic devices embedding both lasing and detection within the same photonic architecture.
The term quantum cascade detector was coined in 2004, when L. Gendron and V. Berger demonstrated the first operating cascade device fully devoted to photodetection purposes, employing a GaAs/AlGaAs heterostructure. This work was motivated by the necessity to find an alternative intersubband infrared photodetector to QWIPs. Indeed, while manifesting high responsivity enhanced by photoconductive gain, QWIPs suffer from large dark current noise, which is detrimental to room-temperature photodetection.
In the subsequent years researchers have explored a variety of solutions leading to an enhancement of the device performances and functionalities. New material platforms have been studied, such as II-VI ZnCdSe/ZnCdMgSe semiconductor systems. These compounds are characterized by a large conduction band offset, allowing for broadband and room-temperature photodetection. Moreover, QCDs based on GaN/AlGaN and ZnO/MgZnO material platforms have also been reported with the aim to investigate photodetection operation at the very edges of the infrared spectrum.
Innovative architectures have been designed and fabricated. Diagonal-transition quantum cascade detectors have been proposed to improve the mechanism of electronic extraction from the optical well. While in conventional QCDs the transition is hosted in a single well (vertical transition), in diagonal-transition QCDs the photoexcitation takes place in two adjacent wells, in a bound-to-bound or bound-to-miniband transition scheme. The motivation behind the realization of this architecture lies in the opportunity to improve the extraction efficiency towards the cascade, even though at the expense of the absorption strength of the transition. Since early 2000s up to more recent years, QCDs embedded in optical cavities operating in the strong light-matter interaction regime have been investigated, aiming to further improvement of the device performances.
Working principle
QCDs are unipolar devices, meaning that only a single type of charge carrier, either electrons or holes, contributes to the photocurrent. The structure of a QCD consists of a periodic multiple-quantum-well heterostructure, realized by stacking very thin layers of semiconductors characterized by different energy band-gaps. In each period, the first quantum well (also called optical well) is devoted to the resonant absorption of incident radiation. Upon absorption of a photon, an electron is excited from a lower state to an upper state. Since these states are confined within the same band, intersubband transitions occur and QCDs are also referred to as intersubband devices. The transition energy can be tuned by adjusting the thickness of the well: indeed, the energy of an electronic state confined in a quantum well can be written as E_n = (ħ·k_n)²/(2m*) = ħ²π²n²/(2m*·L²), with k_n = nπ/L, within the approximation of infinite potential barriers. It can be derived by solving the Schrödinger equation for an electron confined in a one-dimensional infinite barrier potential. In the formula, ħ is the reduced Planck constant, k_n and m* represent the wavevector and the effective mass of the electron, respectively, while L is the thickness of the quantum well and n identifies the n-th confined state. The well thickness L can be tuned in order to engineer the bandstructure of the QCD.
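Within the infinite-barrier approximation, the dependence of the transition energy on the well thickness can be estimated numerically; the effective mass and well width below are illustrative assumptions, and real barriers of finite height give lower transition energies.

import numpy as np

HBAR = 1.054571817e-34    # J*s
M_E = 9.1093837015e-31    # kg
E_CH = 1.602176634e-19    # J per eV

def subband_energy_ev(n, well_nm, m_eff=0.043):
    # infinite-barrier estimate E_n = hbar^2 * pi^2 * n^2 / (2 * m* * L^2)
    L = well_nm * 1e-9
    return (HBAR * np.pi * n / L) ** 2 / (2 * m_eff * M_E) / E_CH

e12 = subband_energy_ev(2, 10.0) - subband_energy_ev(1, 10.0)   # E2 - E1 for a 10 nm well
wavelength_um = 1.239842 / e12     # matching photon wavelength, roughly in the mid-infrared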
The photoexcited electron is then transferred to a cascade of confined states called the extraction region. The transfer mechanism between adjacent wells consists of a double-step process: quantum tunneling transfers the electron through the barrier and scattering with longitudinal optical (LO) phonons relaxes the electron to the ground state. This mechanism is very efficient if the energy difference between adjacent confined states matches the typical LO phonon energy, a condition that is easily achievable by tuning the thickness of the wells. It also sets the cut-off frequency of the detector, being the process that determines the transit time of the electron through the cascade. Since typical time-scales for LO phonon scattering are in the picosecond range, the QCD cut-off frequency lies in the 100 GHz range. When the electron reaches the bottom of the cascade, it is confined in the optical well of the next period, where it is once again photoexcited. A displacement current is then generated, and it can be easily measured by a read-out circuit. Notice that the generation of a photocurrent does not require the application of an external bias and, consistently, the energy bands are flat.
Figure of merit
The responsivity of any quantum photodetector can be calculated exploiting the following formula: R = (e·λ/(h·c))·η, where the constant e is the electronic charge, λ represents the radiation wavelength, h is the Planck constant, c refers to the speed of light in vacuum and η is the external quantum efficiency. This last term takes into account both the absorption efficiency η_abs, i.e. the probability of photoexciting an electron, and the photodetector gain g, which measures the number of electrons contributing to the photocurrent per absorbed photon, according to η = η_abs·g. The photodetector gain depends on the working principle of the photodetector; in a QCD, it is proportional to the extraction probability p_e: g = p_e/N, where N is the number of active periods. The responsivity then reads R = (e·λ/(h·c))·η_abs·p_e/N. In first approximation, in weakly-absorbing systems, the absorption efficiency is a linear function of N and the responsivity is independent of the number of periods. In other systems an optimal trade-off between absorption efficiency and gain must be found to maximize the responsivity. At the state of the art, QCDs have been demonstrated to have a responsivity in the order of hundreds of mA/W. Another figure of merit for photodetectors is the specific detectivity D*, since it facilitates the comparison between devices with different area A and bandwidth Δf. At sufficiently high temperature, where detectivity is dominated by Johnson noise, it can be calculated as D* = R_p·sqrt(R_0·A/(4·k_B·T)), where R_p is the peak responsivity, R_0 is the resistance at zero bias, k_B is the Boltzmann constant and T is the temperature. Enhancement of the detectivity is accomplished by high resistance, strong absorption and large extraction probability.
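A short numerical example of these figures of merit is given below; the wavelength, absorption efficiency, extraction probability, number of periods and resistance-area product are illustrative assumptions.

import numpy as np

E_CH = 1.602176634e-19   # C
H_PL = 6.62607015e-34    # J*s
C_0 = 2.99792458e8       # m/s
K_B = 1.380649e-23       # J/K

def responsivity(wavelength_m, eta_abs, p_extract, n_periods):
    # R = (e * lambda / (h * c)) * eta_abs * p_extract / N
    return E_CH * wavelength_m / (H_PL * C_0) * eta_abs * p_extract / n_periods

def detectivity_johnson(resp_peak, r0_area, temperature=300.0):
    # Johnson-noise-limited D* = R_p * sqrt(R_0 * A / (4 * k_B * T))
    return resp_peak * np.sqrt(r0_area / (4 * K_B * temperature))

R = responsivity(9e-6, eta_abs=0.05, p_extract=0.7, n_periods=10)   # about 25 mA/W
D = detectivity_johnson(R, r0_area=1e-2)   # in cm*sqrt(Hz)/W when R_0*A is given in Ohm*cm^2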
Optical coupling
As with any intersubband detector, QCDs can absorb only TM-polarized light, while they are blind to vertically-incident radiation. This behavior is predicted by the intersubband transition selection rules, which show that a non-zero matrix element is obtained only for light polarized perpendicularly to the quantum-well planes. Consequently, alternative approaches to couple light into the active region of QCDs have been developed, including a variety of geometrical coupling configurations, diffraction gratings and mode confinement solutions.
45°-wedge-multipass configuration
Incident light impinges vertically on a 45° polished facet of a wedge-like QCD. In this coupling configuration, radiation contains both TM and TE polarizations. While this configuration is easily realized, 50% of the power is not coupled to the device, and the amount of absorbed light is strongly reduced. However, it is regarded as a standard configuration to characterize intersubband photodetectors.
Brewster angle configuration
At the air-semiconductor interface, p-polarized light is fully transmitted if the radiation impinges at the Brewster angle θ_B, which is a function of the semiconductor refractive index n, since θ_B = arctan(n). This is the simplest configuration, since no tilted facets are required. However, due to the high refractive index difference at the interface, only a small fraction of the total optical input power couples to the detector.
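For a typical mid-infrared semiconductor refractive index, the Brewster angle can be evaluated directly; the index value used below is an illustrative assumption.

import numpy as np

def brewster_angle_deg(n_semiconductor, n_ambient=1.0):
    # theta_B = arctan(n_semiconductor / n_ambient); p-polarised light is fully transmitted
    return np.degrees(np.arctan(n_semiconductor / n_ambient))

print(brewster_angle_deg(3.3))   # about 73 degrees for an index of 3.3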
Diffraction grating couplers
A metallic diffraction grating is patterned on top of the device to couple the impinging light to surface plasmon polaritons, a type of surface wave that propagates along the metal-semiconductor interface. Being TM-polarized, surface plasmon polaritons are compatible with intersubband device operation, but typically propagate only over 10 periods of the structure.
Waveguide end-fire coupling
Planar or ridge waveguides are employed to confine the optical mode in the active region of the QCD, provided that the semiconductor heterostructure is grown on a substrate exhibiting a lower refractive index. The optical mode, indeed, is guided towards the region of highest refractive index. This is the case of InP-matched InGaAs/AlGaAs heterostructures. The absorption efficiency is limited by waveguide losses, approximately in the order of 1 .
See also
Semiconductor material
II-VI semiconductor compound
Photodetector
Optoelectronics
Infrared spectroscopy
Photonic integrated circuit
References
Further reading
Photodetectors
Optoelectronics
Photonics
Optical devices | Quantum Cascade Detector | [
"Materials_science",
"Engineering"
] | 2,146 | [
"Glass engineering and science",
"Optical devices"
] |
77,310,545 | https://en.wikipedia.org/wiki/Nvidia%20Parabricks | Nvidia Parabricks is a suite of free software for genome analysis developed by Nvidia, designed to deliver high throughput by using graphics processing unit (GPU) acceleration.
Parabricks offers workflows for DNA and RNA analyses and the detection of germline and somatic mutations, using open-source tools. It is designed to improve the computing time of genomic data analysis while maintaining the flexibility required for various bioinformatics experiments. Along with the speed of GPU-based processing, Parabricks ensures high accuracy, compliance with standard genomic formats and the ability to scale in order to handle very large datasets.
Users can download and run Parabricks pipelines locally or directly deploy them on cloud providers, such as Amazon Web Services, Google Cloud, Oracle Cloud Infrastructure, and Microsoft Azure.
Accelerated genome analysis fundamentals
The massive reduction in sequencing costs resulted in a significant increase in the size and the availability of genomics data with the potential of revolutionizing many fields, from medicine to drug design.
Starting from a biological sample (e.g., saliva or blood), it is possible to extract the individual's DNA and sequence it with sequencing machinery to translate the biological information into a textual sequence of bases. Then, once the entire genome is obtained through the genome assembly process, the DNA can be analyzed to extract information that is key in several domains, including personalized medicine and medical diagnostics.
Typically, genomics data analysis is performed with tools that rely on Central Processing Units (CPUs) for processing. Recently, several researchers in this field have underlined the computing-power challenges posed by these tools and focused their efforts on finding ways to boost the performance of the applications. The issue has been addressed in two ways: developing more efficient algorithms or accelerating the compute-intensive parts using hardware accelerators. Examples of accelerators used in the domain are GPUs, FPGAs, and ASICs.
In this context, GPUs have revolutionized genomics by exploiting their parallel processing power to accelerate computationally intensive tasks. GPUs deliver promising results in these scenarios thanks to their architecture, composed of thousands of small cores capable of performing computations in parallel. This parallelism allows GPUs to process multiple tasks simultaneously, significantly speeding up computations that can be broken down into independent units. For instance, aligning millions of sequencing reads against a reference genome or performing statistical analyses on large genomic datasets can be completed much faster on GPUs than when using CPUs. This facilitates the rapid analysis of genomic data from diverse sources, ranging from individual genomes to large-scale population studies, accelerating the understanding of genetic diseases, genetic diversity, and more complex biological systems.
Featured pipelines
Parabricks offers end users various collections of tools organized sequentially to analyze the raw data according to the user's requirements, called pipelines. Nevertheless, users can decide to run the tools provided by Parabricks as standalone programs, still exploiting GPU acceleration to overcome possible computational bottlenecks. Only some of the tools provided in the suite are GPU-based.
Overall, all the pipelines share a standard structure. Most of the pipelines are built to analyze FASTQ data resulting from various sequencing technologies (e.g., short- or long-read). Input genomic sequences are firstly aligned and then undergo a quality control process. These two processes provide a BAM or a CRAM file as an intermediate result. Based on this data, the variant calling task that follows employs high-accuracy tools that are already widely used. As output, these pipelines provide the identified mutations in a VCF (or a gVCF).
Germline pipeline
The germline pipeline offered by Parabricks follows the best practices proposed by the Broad Institute in their Genome Analysis ToolKit (GATK). The germline pipeline operates on the FASTQ files provided as input by the user to call the variants that, belonging to the germ line, can be inherited.
This pipeline analyzes data computing the read alignment with BWA-MEM and calling variants using GATK HaplotypeCaller, one of the most relevant tools in the domain for germline variant calling.
DeepVariant germline pipeline
Besides the pipeline that resorts to HaplotypeCaller to call variants, Parabricks also offers an alternative pipeline that still calls germline variants but is based on DeepVariant. DeepVariant is a variant caller, developed and maintained by Google, capable of identifying mutations using a deep learning-based approach. The core of DeepVariant is a convolutional neural network (CNN) that identifies variants by transforming this task into an image classification operation. In Parabricks, the inference process is accelerated in hardware. For this pipeline, only T4, V100, and A100 GPUs are supported.
Analyses performed according to this pipeline are compliant with the use of BWA-MEM for the alignment by Google's CNN for variant calling.
Human_par pipeline
Still compliant with GATK best practices, the human_par pipeline allows users to identify mutations in the entire human genome, including sex chromosomes X and Y, and, thus, it is compliant with their ploidy. For male samples, firstly, the pipeline runs HaplotypeCaller on all the regions that do not belong to the X and Y chromosomes and on the pseudoautosomal region with ploidy equal to 1. Then, HaplotypeCaller analyses the X and Y regions without the pseudoautosomal region with ploidy 2. Regarding female samples, instead, the pipeline runs HaplotypeCaller on the entire genome, with ploidy 2.
The sex of the sample can be determined in two main ways:
Manually set with the --sample-sex option;
Specify the X vs. Y ratio with the range options --range-male and --range-female and let the tool automatically infer the sex of the samples based on the X and Y read counts.
The pipeline requires the user to specify at least one of these three options.
As for the germline case, since this pipeline targets the germline variants, the pipeline resorts to BWA-MEM for the alignment, followed by HaplotypeCaller for variant calling.
Somatic pipeline
Parabricks' somatic pipeline is designed to call somatic variants, i.e., those mutations affecting non-reproductive (somatic) cells. This pipeline can analyze both tumor and non-tumor genomes, offering either tumor-only or tumor/normal analyses for comprehensive examinations.
As in the germline pipeline, the alignment task is carried out using BWA-MEM followed by GATK Mutect to identify the possible mutations. Mutect is used instead of HaplotypeCaller due to its focus on somatic mutations, as opposed to germline mutations targeted by HaplotypeCaller.
RNA pipeline
This pipeline is optimized for short variant discovery (i.e., Single-nucleotide polymorphisms (SNPs) and indels) in RNAseq data. It follows the Broad Institute's best practices for these types of analyses.
It relies on the STAR aligner, a read aligner specialized for RNA sequences for aligning the reads, and HaplotypeCaller for calling variants.
Parabricks tools
Parabricks provides a collection of tools to perform genomics analyses, classified into six main categories related to their task. These tools combined constitutes Parabricks' pipelines, and can be also used as-is.
For FASTQ and BAM files processing, the proposed tools are:
(beta)
For calling variants, the proposed tools are:
(GATK Germline Pipeline)
(beta)
(Somatic Variant Caller)
For RNA processing, the proposed tools are:
For results quality control, the proposed tools are:
For processing variants, the proposed tools are:
For processing gVCF files, the proposed tools are:
Not all the listed tools are accelerated on GPU.
Hardware support
Users can download and run Parabricks pipelines on their local servers, allowing for private, on-site data processing and analysis. They can also deploy Parabricks pipelines on cloud platforms, with improved scalability for larger datasets. Supported cloud providers include AWS, GCP, OCI, and Azure.
In the latest release (v4.3.1-1), Parabricks includes support for the NVIDIA Grace Hopper superchip. The NVIDIA GH200 Grace Hopper Superchip is a heterogeneous platform designed for high-performance computing and artificial intelligence, combining an NVIDIA Grace CPU and a Hopper GPU in a single superchip. This platform enhances application performance using both GPUs and CPUs, offering a programming model aimed at improving performance, portability, and productivity.
Applications
Due to the computational power required by genomics workloads, Parabricks has found application in several research studies with different applicative domains, especially in cancer research.
Scientists from Washington University used the Parabricks DeepVariant pipeline for identifying variants (e.g., SNPs and small indels) in long-read Hi-Fi whole-genome sequencing (WGS) data generated with PacBio's Revio SMRT Cell technology.
In addition to the pipelines, individual components of Parabricks have been used as standalone tools in academic settings. For example, the accelerated DeepVariant has been employed in a novel process to reduce the processing time further for WGS Nanopore data.
In 2022, Nvidia announced a collaboration with the Broad Institute to provide researchers with the benefits of accelerated computing. This partnership includes Nvidia's entire suite of hardware-accelerated biomedical software, called Clara, which includes Parabricks and MONAI. Similarly, the Regeneron Genetics Center uses Parabricks to expedite the secondary analysis of the exomes sequenced in its high-throughput sequencing center, leveraging the DeepVariant Germline pipeline inside its workflows.
See also
List of bioinformatics software
List of sequence alignment software
References
Further reading
External links
NVIDIA Clara
NVIDIA Clara for Genomics
Nvidia software
Bioinformatics software
Medical software | Nvidia Parabricks | [
"Biology"
] | 2,123 | [
"Bioinformatics",
"Medical software",
"Bioinformatics software",
"Medical technology"
] |
77,310,564 | https://en.wikipedia.org/wiki/Deepwater%20well%20integrity | Deepwater wellbore integrity can be defined as the application of relevant engineering techniques and operational measures in deepwater drilling to control related risks during the drilling process, ensuring that deepwater oil and gas wells are always in a safe state throughout their entire life cycle.
In deepwater drilling, high investment, high risk, and high return are the most significant characteristics, so special attention needs to be paid to safety operations during the drilling process.
Difficulties
Harsh environment
According to international conventions, oil and gas wells drilled in water depths greater than 500 meters are considered deepwater wells, while wells in water depths greater than 1500 meters are considered ultra-deepwater wells. In the deepwater marine environment, geological conditions and environmental factors are more complex. Firstly, the deepwater marine environment is harsher, with frequent typhoons and extreme waves; when internal waves superpose with ocean currents, the deepwater flow velocity increases, resulting in extremely harsh overall environmental conditions for deepwater drilling. In addition, because deepwater drilling takes place far offshore, material support is difficult to provide, which poses further challenges for deepwater drilling.
Shallow geological hazards
The geological conditions in deepwater areas are complex. In the South China Sea, for example, drilling through the shallow section of deepwater areas usually encounters three shallow hazards: shallow gas, shallow water flow, and natural gas hydrates. During drilling operations, complex incidents such as gas invasion and overflow are therefore likely. In addition, deep strata show poor diagenesis and weak pressure-bearing capacity, so losses occur easily, the well-control risk is high, and remediation is difficult, all of which seriously affect deepwater wellbore integrity.
Complex working environment
During deepwater drilling, the marine geological environment changes continuously over long periods, so offshore petroleum machinery and tools are more susceptible to external damage caused by surrounding marine activity. In this complex flow field with high spatiotemporal variability, the relevant petroleum equipment and tools undergo varying degrees of corrosion, damage, and deformation. These damages accumulate and eventually cause irreversible leakage, perforation, and fracture, seriously compromising deepwater wellbore integrity. In addition, because leaks in deepwater oil and gas wells are difficult to monitor and evaluate promptly, traditional detection methods used in onshore oil fields are no longer applicable. Timely detection of leakage points and accurate evaluation are therefore key difficulties in ensuring deepwater wellbore integrity.
Testing
The detection of deepwater wellbore integrity can broadly be divided into two aspects: detection of wellbore pressure and detection of wellbore temperature. By monitoring these two indicators, the change in wall thickness of the casing string, as well as the location of leakage points and the type of damage, can be determined, allowing a comprehensive assessment of deepwater wellbore integrity.
Equipment
The most commonly used detection devices are acoustic acquisition tools and electromagnetic detectors. To meet the special environmental requirements of deepwater oil and gas wells, the acoustic testing instrument must offer improved resistance to high temperature, high pressure and corrosion, as well as better airtightness, so that the detection results are more accurate. The electromagnetic detector is used to detect damage to the casing string in the wellbore; because deepwater oil and gas wells use low-magnetic tubing, the response time of the electromagnetic detector is shortened, which streamlines the detection process. In general, when acoustic acquisition tools and electromagnetic detectors are used together, the detection and analysis of deepwater wellbore integrity are more comprehensive.
Future work
With the development of artificial intelligence, big-data-driven methods such as machine learning and deep learning provide new ideas and methods for solving engineering problems in deepwater oil and gas wells. Artificial intelligence techniques can monitor wellbore pressure and temperature in real time, warn of abnormal values, classify risk levels, and comprehensively evaluate deepwater wellbore integrity; corresponding risk-control and well-control measures can then be proposed on the basis of historical and near-well data to reduce safety risks. Among these tasks, timely and accurate identification of early incident risks is of particular importance. In the future, real-time monitoring of deepwater wellbore integrity, real-time warning of abnormal situations, and accurate, timely responses based on artificial intelligence methods are expected to become a development direction for petroleum engineering, offshore engineering, and computer technology.
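As a simple illustration of the kind of real-time warning described above, the sketch below flags pressure or temperature samples that deviate from a rolling baseline. It is a minimal, hypothetical example: the window size, threshold and data values are assumptions, not values from any field deployment.

```python
from collections import deque

def rolling_anomaly_detector(samples, window=60, n_sigmas=3.0):
    """Yield (index, value) for samples deviating > n_sigmas from the rolling mean."""
    buf = deque(maxlen=window)
    for i, x in enumerate(samples):
        if len(buf) == window:
            mean = sum(buf) / window
            var = sum((v - mean) ** 2 for v in buf) / window
            std = var ** 0.5
            if std > 0 and abs(x - mean) > n_sigmas * std:
                yield i, x          # abnormal pressure/temperature reading
        buf.append(x)

# Example: a sudden pressure drop (a possible leak indicator) is flagged.
pressures = [35.0 + 0.1 * (i % 5) for i in range(200)]   # MPa, synthetic baseline
pressures[150] = 28.0                                     # injected anomaly
print(list(rolling_anomaly_detector(pressures)))          # -> [(150, 28.0)]
```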
Reference
Further reading
Petroleum production
Petroleum industry | Deepwater well integrity | [
"Chemistry"
] | 940 | [
"Chemical process engineering",
"Petroleum",
"Petroleum industry"
] |
77,310,683 | https://en.wikipedia.org/wiki/Rain%20attenuation%20frequency%20scaling | In communications satellite systems, rain attenuation frequency scaling is a technique implemented to model rain fade phenomena affecting a telecommunications link, both statistically and instantaneously. Accurate predictions of rain attenuation are crucial both for the proper design of a satellite communication (SatCom) system, as the detrimental impact of hydrometeors present within the troposphere, mainly rain, on radio frequency signals, can lead to system failures (commonly known as network outage periods). Moreover, such analyses are essential for the implementation of adaptive fade mitigation techniques, such as uplink power control and variable rate encoding schemes, to increase the link availability.
A scaling approach is particularly suitable in scenarios where the uplink and the downlink, which typically share the same channel capacity and therefore operate at different frequencies to avoid co-channel interference, are affected by the same rainfall event along the link. In such a context, it may be advantageous to derive the attenuation due to rain at the higher frequency, called the target frequency, by properly scaling concurrent attenuation measurements taken on the same link at a lower frequency, called the reference frequency.
Furthermore, as rain attenuation measurements inherently embed key information about the rain event, such as the spatial distribution of the rain and information on the raindrop size distribution (DSD), frequency scaling models provide enhanced prediction accuracy compared with statistical prediction models, which are typically fed with local point rainfall data only. As proof, frequency scaling models applied to experimental SatCom systems operating within the geostationary orbit yield statistical errors of 12 - 15%, in contrast to the 30 - 40% associated with statistical prediction models.
General definition
Conceptually, the frequency scaling (FS) of rain attenuation can be expressed as:

$$A_{f_2} = R(f_1, f_2)\, A_{f_1},$$

where the estimate of the rain attenuation at the target frequency $f_2$ (Hz), namely $A_{f_2}$ (dB), is directly related to the corresponding attenuation measured at the reference frequency $f_1$ (Hz), namely $A_{f_1}$ (dB), by means of the frequency scaling ratio $R$, whose definition changes from model to model.
Several FS models have been proposed in the past, and they can be classified as either statistical (S-FS) models or instantaneous (I-FS) models. S-FS models are typically empirically based and relate the attenuation values at $f_1$ and $f_2$ as a function of the same frequency of exceedance, commonly referred to as the exceedance probability level $p$ (%):

$$A_{f_2}(p) = R(f_1, f_2)\, A_{f_1}(p).$$

In this context, $R$ is typically a constant dependent only on the two operating frequencies. However, defining a fixed $R$ limits the scaling prediction accuracy, as the value of $R$ can vary significantly from one rain event to another, and even within the same event.
I-FS models aim at overcoming this limitation by introducing a time-variant ratio $R(t)$:

$$A_{f_2}(t) = R(t)\, A_{f_1}(t).$$

In addition to enhanced accuracy, I-FS models are fundamental for assessing the dynamics of rain attenuation along an Earth-space link. This is crucial for investigating, for example, fade slope (i.e., the rate of change of rain attenuation with time) and fade duration (i.e., the time during which a given rain attenuation threshold is exceeded) statistics.
Statistical frequency scaling models
Several long-term S-FS models have been proposed in the past to extrapolate attenuation induced by rain from one frequency to another. One of the most straightforward S-FS approaches is based on the following power law:
$$R = \left(\frac{f_2}{f_1}\right)^{n},$$

where $f_1$ and $f_2$ represent the lower and upper operating frequencies, respectively. Various values for the power $n$ have been proposed:
by Dintelmann
by Owolabi and Ajayi
by Drufuca.
The model recommended by the International Radio Consultative Committee (CCIR), now ITU-R, is defined by a fixed based on a formula of the type:
,
where is a function defined as:
.
Boithias's model is fed by the base attenuation (dB) and the operating frequencies:
,
where
and
.
Similarly, the ITU-R proposes a statistical scaling model valid in the frequency range from 7 to 55 GHz. This model defines a scaling ratio similar to that proposed by Boithias, except for , which is expressed as:
.
The advantage associated with statistical frequency scaling (S-FS) models is their relatively minimal input requirements, typically involving only the operating frequency and, in some cases, the rain attenuation evaluated at the reference frequency. However, it has been demonstrated that these models tend to be accurate only for specific frequency pairs, and no model has shown consistent accuracy across a broader frequency range.
Instantaneous frequency scaling models
I-FS models can be applied at each individual time instant, thereby accommodating the variability of the frequency scaling ratio between different rain events and even within a single rain event. This variability can be accurately accounted for by using the specific rain attenuation $\gamma$ (dB/km), which is calculated from the actual rainfall rate measured at the ground station. Assuming that the geometry of the link and the rainfall affecting it remain the same at both frequencies, a scaling ratio can be defined from the specific rain attenuation values at the lower and upper frequencies, rather than from the attenuation values themselves. This is expressed, for a generic time instant $t$, as:

$$R(t) = \frac{\gamma_{f_2}(t)}{\gamma_{f_1}(t)}.$$
A relatively straightforward approach for the estimation of $\gamma$ is proposed in Recommendation ITU-R P.838-3, where the specific rain attenuation is modeled from the local rain rate $R_r$ (mm/h) using the following power-law relationship:

$$\gamma = k\, R_r^{\alpha},$$

where the coefficients $k$ and $\alpha$ depend on the frequency $f$ (GHz), the signal polarization tilt angle $\tau$ (rad) and the link elevation angle $\theta$ (rad). Values for $k$ and $\alpha$ are tabulated in the referenced Recommendation for frequencies in the range from 1 to 1000 GHz.
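As an illustration of the instantaneous scaling described above, the sketch below computes the ratio R(t) from a rain-rate series using the power-law model. The k and α values are placeholder numbers chosen only for the example; they are not the tabulated ITU-R P.838-3 coefficients.

```python
def specific_attenuation(rain_rate_mm_h, k, alpha):
    """Power-law specific rain attenuation in dB/km: gamma = k * R^alpha."""
    return k * rain_rate_mm_h ** alpha

def instantaneous_scaling_ratio(rain_rate_mm_h, k1, a1, k2, a2):
    """Ratio between specific attenuations at the target and reference frequencies."""
    g1 = specific_attenuation(rain_rate_mm_h, k1, a1)
    g2 = specific_attenuation(rain_rate_mm_h, k2, a2)
    return g2 / g1 if g1 > 0 else float("nan")

# Placeholder coefficients for a reference link and a higher-frequency target link.
k1, a1 = 0.07, 1.10   # assumed, illustrative only
k2, a2 = 0.19, 1.04   # assumed, illustrative only

for rr in [2.0, 10.0, 25.0, 50.0]:                 # rain rates in mm/h
    r = instantaneous_scaling_ratio(rr, k1, a1, k2, a2)
    a_ref = specific_attenuation(rr, k1, a1)        # reference-link attenuation, dB/km
    print(f"rain={rr:5.1f} mm/h  R(t)={r:4.2f}  gamma_target={r * a_ref:5.2f} dB/km")
```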
Rain rate measurements are typically collected using rain gauges, which provide time series of rainfall rate. Additionally, if a disdrometer is available at the site, it can measure not only the precipitation intensity but also the physical microproperties of the hydrometeors, such as the size and falling velocity of the drops. This allows for the computation of the raindrop size distribution (DSD):
where (mm) represents the width of each drop-size class,
is the disdrometer sampling area, (s) is the instrument integration time and is the number of velocity classes. Consequently, the specific rain attenuation at a generic time instant depends on as:
where the sum runs over the number of diameter classes measured by the disdrometer, while the forward scattering coefficient is calculated using the T-matrix method, assuming the axial ratio defined by Beard and Chuang. Although this approach provides high frequency scaling accuracy, DSD data are seldom available to network planners.
See also
Decibel
Disdrometer
ITU-R
Link budget
Polarization
Radio
Rain fade
Rain gauge
Raindrop size distribution
RF planning
References
External links
https://www.itu.int/en/ITU-R/Pages/default.aspx
https://www.esa.int/
Radio frequency propagation fading
Satellite broadcasting | Rain attenuation frequency scaling | [
"Engineering"
] | 1,432 | [
"Telecommunications engineering",
"Satellite broadcasting"
] |
77,310,737 | https://en.wikipedia.org/wiki/Curved%20structures | Curved structures are constructions generated by one or more generatrices (which can be either curves or surfaces) through geometrical operations. They traditionally differentiate from the other most diffused construction technology, namely the post and lintel, which results from the addition of regular and linear architectural elements.
They have been exploited for their advantageous characteristics since the first civilisations and for different purposes. The materials, the shapes and the assemblage techniques followed the technological and cultural evolution of the societies over time. Curved structures have been preferred to cover large spaces of public buildings. In spite of their sensitivity to earthquakes, they work well from the structural static point of view.
The geometry of curved structures
From the geometrical point of view, curved structures are three-dimensional solids. They are generated starting from generatrices which undergo the geometrical operations of extrusion or revolution. The three classes of structures stated previously can be explained as follows:
An arch is generated by the revolution of a point or a surface around a centre (or, from a mechanical standpoint, it can be thought of as a section of a vault);
A Vault is generated by the extrusion of an arched surface;
A Dome is generated by the revolution of an arched surface around an axis.
More complex shapes can be generated by boolean operations on a set of interacting volumes. The simplest examples, resulting from the intersection of two or more vaults and the successive subtraction of the excess volumes, are:
The Groin vault or Cross Vault, resulting in a set of lunettes (either of circular or pointed vaults);
The Domical Vault – and the particular case of the Cloister – which is formed by a set of fuses;
The Umbrella Vault, a set of ribbed fuses, joint at the top and terminating in lunettes at the base;
The Pendentive or Penditive dome, generated by subtracting volumes from a dome;
The Saddle Vault, generated by either translating one parabola through a second one or by a ruled surface.
The operations performed to create these solids are the same ones needed to generate them in a CAD program or – to some extent – in FEM software used to analyse them.
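As a minimal illustration of these generating operations, the sketch below builds a barrel (simple) vault by extruding a circular arc and a dome by revolving the same arc about a vertical axis. It is a schematic NumPy example, not tied to any particular CAD or FEM tool; the radius, length and sampling densities are arbitrary values.

```python
import numpy as np

# Generatrix: a semicircular arc of radius r in the x-z plane.
r = 1.0
theta = np.linspace(0.0, np.pi, 50)            # angle along the arc
arc = np.column_stack([r * np.cos(theta),      # x
                       np.zeros_like(theta),   # y
                       r * np.sin(theta)])     # z

# Extrusion: translate the arc along y to obtain a barrel vault surface.
length = 3.0
y_steps = np.linspace(0.0, length, 30)
barrel_vault = np.array([arc + np.array([0.0, y, 0.0]) for y in y_steps])

# Revolution: rotate the half-arc (x >= 0) about the z axis to obtain a dome.
half_arc = arc[arc[:, 0] >= 0.0]
phi = np.linspace(0.0, 2.0 * np.pi, 60)        # angle of revolution
dome = np.array([
    np.column_stack([half_arc[:, 0] * np.cos(p),
                     half_arc[:, 0] * np.sin(p),
                     half_arc[:, 2]])
    for p in phi])

print(barrel_vault.shape, dome.shape)          # grids of surface points
```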
Gaussian curvature and shape-resistant structures
Differently from post and lintel construction, whose capacity depends on the resistance of the single members, curved structures can also rely on their shape. However, single-curvature structures (that is, simple vaults) show less capacity than double-curvature ones (e.g., domes, domical and cloister vaults, and saddles). This is because a simple vault – from a geometric point of view – corresponds to a developable surface, which has null Gaussian curvature and can therefore be flattened to a planar surface with no distortion. Dome-like and saddle structures have respectively a positive and a negative Gaussian curvature, being shape-resistant structures par excellence.
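The sign of the Gaussian curvature, the product $K = \kappa_1 \kappa_2$ of the two principal curvatures, summarizes this classification. The short sketch below evaluates it for the three canonical cases; the radius is an arbitrary example value.

```python
def gaussian_curvature(k1: float, k2: float) -> float:
    """Gaussian curvature as the product of the two principal curvatures."""
    return k1 * k2

r = 2.0  # example radius in metres

cases = {
    "barrel vault (cylinder)": (1.0 / r, 0.0),       # curved in one direction only
    "dome (sphere)":           (1.0 / r, 1.0 / r),   # curved the same way twice
    "saddle vault":            (1.0 / r, -1.0 / r),  # curved in opposite ways
}
for name, (k1, k2) in cases.items():
    K = gaussian_curvature(k1, k2)
    kind = "zero (developable)" if K == 0 else ("positive" if K > 0 else "negative")
    print(f"{name:25s} K = {K:+.3f}  ->  {kind}")
```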
Architecture and engineering
All the typologies of arches, vaults and domes come from the operations stated in the previous section. They are comprehensively collected and explained in each correspondent Wikipedia article. Curved shapes were used in the past for covering large rooms in buildings, as happened for example in the Domus Aurea of Emperor Nero, the Basilica of Maxentius, the Pantheon, Rome, or Hagia Sophia. However, they could be used for infrastructures too. For instance, the Ancient Roman civilization exploited curved structures for bridges, aqueducts, sewage ducts, and arch-dam. The main materials of such constructions were Masonry and Roman concrete.
With the Industrial Revolution, the materials of choice became wrought and cast iron or, later, reinforced concrete. In this way, the shape of infrastructures also started to change. Some examples of curved structures are the Palm House, Kew Gardens by Richard Turner and Decimus Burton and The Crystal Palace by Joseph Paxton or, on the infrastructure side, the Garabit viaduct by Gustave Eiffel. Later, in the 20th century, Pier Luigi Nervi started studying the possibilities of reinforced concrete, building his famous ribbed hangars.
Many other structures have been built by exploiting curved surface. For instance, the Philips Pavilion in Brussels by Le Corbusier and L'Oceanogràfic in Valencia by Félix Candela and Alberto Domingo are two examples of exploitation of the hyperbolic paraboloid shapes.
The traditional construction process
Because of their nature, curved structures cannot stand alone until their completion, especially vaults and arches. Therefore, the construction of a supporting structure (referred to as centring) is almost always necessary. These are temporary falsework which stay in place until the keystone has been set down and the arch is stabilised.
However, there are a few cases in which, thanks to expedients and a careful design of the construction process, structures have been erected without any centring. A widely known example is the domical vault of Florence Cathedral, built by Filippo Brunelleschi in the 15th century. He met this challenge by building a massive structure, mechanically behaving like a spherical dome but with large ribs, and by exploiting the masonry herringbone bond to lean and fix every new layer on the previous one. Each layer of the structure effectively behaves as a series of small arches. The vault is also double-skinned, with an intermediate hollow space hosting the staircases, through which air can flow to avoid the build-up of humidity. To resist the parallel tensile stresses which may separate the fuses of the vault, Brunelleschi arranged sandstone chains along several parallel planes.
Another example of structure built with no formwork is the Global Vipassana Pagoda, located in the North of Mumbai, between the Gorai Creek and the Arabian Sea. It is a meditation hall covered by the largest masonry dome in the world, with an inner diameter at ground level of about 85m. The absence of centring was possible thanks to the double curvature of the dome and the special shape given to the carved sandstone blocks constituting the skin.
Structural behaviour
The boundary conditions that would cause bending and shear stress in a post and lintel structure cause only axial stress in the unit elements of a curved structure. Indeed, according to Professor Jacques Heyman, for masonry curved structures (he referred especially to Gothic architecture), under the assumptions of unlimited compressive resistance, null tensile and shear resistance, and small displacements, a structure can be considered safe and stable as long as the funicular polygon stays within the middle third of the cross section. This method has been widely used in the past because of its simplicity and effectiveness. However, it is still studied by some scholars and has been adapted to the three-dimensional case for double curvature.
Traditional masonry curved structures are often the result of the assemblage of many units, the Voussoirs. The resistance of an arch then, neglecting the possibility of a material failure, depends on the equilibrium of the voussoirs. Given the shape of vaults and domes instead, the double curvature plays a positive role in terms of stability as well as the arrangement of the single units (interlocking).
Studying the problem of a hemispherical membrane in a gravitational field, it can be demonstrated that the membrane undergoes compressive stress in its upper part, while it is subjected to tensile hoop stresses in the lower part (beyond about 52° from the vertical axis of symmetry). This leads to the formation of meridian cracks, which tend to divide the dome into slices.
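The roughly 52° figure follows from membrane theory for a spherical dome under its own weight, where the hoop stress resultant changes sign when $\cos^2\varphi + \cos\varphi - 1 = 0$, with $\varphi$ measured from the vertical axis. The sketch below reproduces this standard textbook result as a numerical check.

```python
import math

# Membrane theory, spherical dome of radius a under self-weight q per unit area:
#   meridian stress resultant: N_phi   = -q * a / (1 + cos(phi))
#   hoop stress resultant:     N_theta =  q * a * (1 / (1 + cos(phi)) - cos(phi))
# N_theta turns tensile where cos(phi)^2 + cos(phi) - 1 < 0.
cos_phi = (math.sqrt(5.0) - 1.0) / 2.0        # positive root of c^2 + c - 1 = 0
phi_deg = math.degrees(math.acos(cos_phi))
print(f"hoop stress changes sign at phi = {phi_deg:.2f} deg")   # ~51.83 deg
```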
Daylighting
Daylighting is usually guaranteed by openings at the end of vaulted bays, as happens in Gloucester Cathedral, Chartres Cathedral, or Sainte-Chapelle (Paris), and specifically in the lunettes (where the vaults end against a wall) like in the Church of Santa Maria del Suffragio in L'Aquila (Italy) and in the Church of San Paolo in Albano Laziale (Italy).
Another structurally relevant place for an opening is the top of the domes, where in many cases an oculus can be found. Sometimes it is bare, as in the Roman Pantheon, while often is covered by another architectural element referred to as Lantern, as happens – for instance – in the Florence Cathedral.
Acoustics
Some double-curvature structures are known for the echo or reverberation phenomena they create. These are due to the size of the spaces and to the materials used for the structure or the finishing (usually hard and with small pores). The shape does much to prevent or enhance the effect. Cross or cloister vaults do not generate an echo, while pointed domes more easily create reverberation than echo. At the same time, spherical surfaces are highly reflective due to their concavity. Indeed, hemispheres, paraboloids, or similar surfaces are effective at reflecting and redirecting sound, sometimes constituting a whispering gallery. Examples of whispering galleries can be found in well-known buildings like St Paul's Cathedral in London, where the phenomenon was studied by Lord Rayleigh, or the Archbasilica of Saint John Lateran in Rome, but also in caves like the Ear of Dionysius in Syracuse, Sicily, which was treated by Wallace Clement Sabine.
The existing variety of domes is due to the symbolic meanings assigned to them across history and cultures, ranging from funerary to palatine and religious architecture, but also to the response to practical problems. Indeed, a recent study addressed how, in the Baroque staircase of the Royal Palace of Caserta (Italy) designed by Luigi Vanvitelli, the double dome could make listeners feel as if they were enveloped by the music, thus enhancing the sense of marvel typically sought by Baroque architects.
A modern example of architecture thought to respond and participate to sound was the Philips Pavilion designed by Le Corbusier and Iannis Xenakis for the Expo 58 in Brussels.
See also
Architectural acoustics
Gaussian curvature
Gothic architecture
Gustave Eiffel
Le Corbusier
Luigi Vanvitelli
Modern architecture
Renaissance architecture
References
Further reading
External links
Geometria descrittiva (in Italian)
Global Vipassana Pagoda Official Website
Auroville Earth Institute. UNESCO chair earthen architecture
Curves
Geometry
Architecture
Acoustics
Structure
Mechanics | Curved structures | [
"Physics",
"Mathematics",
"Engineering"
] | 2,082 | [
"Classical mechanics",
"Acoustics",
"Construction",
"Mechanics",
"Geometry",
"Mechanical engineering",
"Architecture"
] |
77,310,743 | https://en.wikipedia.org/wiki/Bio-based%20building%20materials | Bio-based building materials incorporate biomass, which is derived from renewable materials of biological origin such as plants, (normally co-products from the agro-industrial and forestry sector), animals, enzymes, and microorganisms, including bacteria, fungi, and yeast.
Today bio-based materials can represent a possible key-strategy to address the significant environmental impact of the construction sector, which accounts for around 40% of global carbon emissions.
Embodied carbon and operational carbon of buildings
Building impacts belong to two distinct but interrelated types of carbon emissions: operational and embodied carbon. Operational carbon includes emissions related to the building's functioning, such as lighting and heating; embodied carbon encompasses emissions resulting from the physical construction of buildings, including the processing of materials, material waste, transportation, assembly, and disassembly.
While research and policy over the past decades have primarily focused on reducing greenhouse gas (GHG) emissions during building operations, by enacting, for instance, the EU Energy Performance of Buildings Directive, the embodied carbon associated with building materials has only recently gained significant attention. This tendency has consequently resulted in a growing interest in the use of low-carbon bio-based materials.
Bio-materials and their co-products offer various benefits: they are renewable, often locally available, and carbon is sequestered during the plants' growth, which favours the production of possible alternative bio-components.
This means that when bio-based construction materials are used as buildings’ components, their lifespan is usually defined by the building’s service life and results in a temporary reduction of the CO2 concentration in the atmosphere.
During this time, carbon is stored in the building and its emissions are thus slowed down.
Researchers proved that incorporating a larger share of bio-materials can reduce a building's embodied energy by about 20%.
From a wider perspective, studies demonstrated that the use of bio-based materials in the built environment has the potential to reduce carbon dioxide emissions by over 320,000 tons by 2050, which the European Union has set as the target date for reaching carbon neutrality. Moreover, with buildings becoming more energy-efficient, the embodied impacts of producing and installing new materials contribute significantly to total lifecycle emissions, ranging from 10% to as much as 80% in highly efficient buildings. This scenario highlights the potential for bio-based materials to have a substantial impact on reducing overall building energy emissions.
From traditional to innovative building applications
Bio-based building materials can be classified depending on their natural origins and on their physical properties, which influence their behaviour when applied to the building system. According to their chemical structure and to their characteristic of being renewable, bio-based materials can be divided into lignocellulosic materials, which come from forestry, vegetation, agriculture; protein-based materials, coming from farming, such as wool and feathers; earth; living materials made of micro-organisms such as mycelium and algae.
Natural materials have been traditionally used in architecture since the vernacular period. Presently, these materials stand out through innovative applications, while novel bio-materials, such as living materials, and bio-wastes, enter the discussion intending to enhance circular business models.
Timber and earth
Among bio-materials, timber, as part of a long, preindustrial history of building, has always received the main attention from policy and industry and, in recent years, it has been mainly advocated by researchers and policymakers to replace concrete, iron and steel in the construction sector. Indeed, modular timber construction, such as plywood, laminated veneer lumber (LVL), panels and cross-laminated timber (CLT), allows a significant amount of carbon to be stored in the structure (50% of the mass) and releases significantly less GHGs into the atmosphere compared with mineral-based construction. Moreover, wood is considered highly recyclable, as it enables several reuse options.
However, it is important to consider that the climate benefit associated with biogenic carbon storage is only achieved when replaced by the growth of another tree, which normally takes decades. Therefore, even if still representing a renewable resource, within a short time horizon, such as 2050, timber construction can't be climate neutral. Moreover, in the European context, studies have shown that there is an insufficient quantity of timber to meet the expected demand if there were to be a complete shift towards a timber-based built environment.
Due to its strength, durability, non-combustibility, and ability to enhance indoor air quality, rammed earth has also been widely used in construction, starting from the 16th and 17th centuries. With the advent of the Industrial Revolution, however, standardizing earthen materials became difficult, making it challenging to utilize them as effectively as concrete and bricks.
Nowadays, because of the low embodied carbon, availability, safety, and thermal characteristics of these building materials, they have become particularly attractive alternatives to more traditional ones. Moreover, there is the potential to circumvent disadvantages, such as on-site weather dependency, by using prefabricated elements and innovative manufacturing processes. In this regard, the Austrian company Erden has developed a technique to prefabricate rammed earth wall elements that can be stacked to construct large-scale buildings. The Belgian BC Materials, instead, transforms excavated earth into building materials, producing earth masonry blocks, plasters and paints.
Moreover, additive manufacturing enters the debate as a method with the potential to enhance the level of quality in detailing, accuracy, finishing, and reproducibility, while reducing labour needs and increasing the pace of construction. In this regard, a recent collaboration between Mario Cucinella Architects and Wasp, an Italian company specialised in 3D printing, has resulted in TECLA, the first 3D-printed, fully circular housing construction made of earth.
Fast-growing bio-materials
Unlike timber, fast-growing materials are bio-resources with rapid growth, making them readily available for harvest and use within a very short period. Fast-growing materials are typically derived from agricultural by-products, such as hemp, straw, flax, kenaf, and several species of reed, but can also include plants like bamboo and eucalyptus. Due to their short crop rotation periods, the use of these materials is directly compensated by the regrowth of new plants and, overall, this results in a cooling effect on the atmosphere.
Over last decades, various construction projects displayed their versatility by using them for many different applications, going from structural components crafted from bamboo to finishing materials like plaster, flooring, siding, roofing shingles, acoustic and thermal panels.
Several studies document their applications in the built environment both as loose materials and as part of a bio-mixture, such as flax concrete, rice husks concrete, straw fibers concrete, or bamboo bio-concrete. Among the others, hempcrete, made of lime and hemp shives, stands out due to its structural and insulating features, while enabling large carbon savings.
In this context, several start-ups and innovative enterprises, such as RiceHouse, Ecological Building System, and Strawcture, have already entered the market with competitive bio-composite alternatives, available either as loose materials or bound by natural or artificial binders.
Living building materials: mycelium and algae
Algae and mycelium are gaining interest as a research field for building applications.
Algae are mainly discussed for their application on building facades for energy production through the development of bio-reactive façades. The SolarLeaf pilot project, implemented by Arup in Hamburg in 2013, marks the first real-world application of this technology in a residential context, showcasing its potential applicability to both new and existing buildings.
Due to its ability to act as a natural binder, mycelium, the vegetative part of fungi, is used as the binding agent of many composite materials. In recent years, research on the topic has grown rapidly, owing to the total biodegradability of the binder and to its ability to valorize waste materials by degrading them and using them as substrates for its growth.
Different temporary projects have displayed the structural capacities of mycelium, both as monolithic and discrete separated elements.
Mycelium bricks were tested in 2014 with the construction of the Hi-fi tower, built at the Museum of Modern Art of New York by Arup and Living architecture. Monolithic structures such as El Monolito Micelio or the BioKnit pavilion, were developed instead to grow mycelium either on-site or in a growing chamber in a single piece.
The absence of established methods for producing large-scale mycelium-based composite components, primarily due to the low structural capabilities of such composites and various technological and design limitations, represents today the main obstacle to its industrial scalability for building applications.
However, the Italian MOGU and the American Ecovative are two mycelium companies that have been capable of scaling production to industrial levels, manufacturing and selling acoustic panels for indoor spaces. In this context, the project developed by the collaboration between Arup, the universities of Leuven (BE) and Kassel (DE) and the Karlsruher Institut für Technologie, named HOME, aims to advance the upscaling of mycelium-based composites by developing prototypes and using diverse manufacturing processes for indoor acoustic insulation.
Post-consumer bio-wastes: closing the loop
Textile, paper and food wastes are also gaining progressive interest for building applications, as circular strategies enabling up-cycling processes and facilitating an effective transition toward a carbon-neutral society.
Literature documents building components developed from food wastes coming from olive pruning, almond skin wastes, coffee beans and pea pods for the realization of acoustic panels and thermal insulating panels.
In the same way, research has also focused on the reuse of cardboard and waste paper for the production of bio-composite panels. In this regard, the thermal properties of cellulose fibres sourced from paper and cardboard waste have been tested and found to be particularly effective, achieving a thermal conductivity of 0.042 W·m⁻¹·K⁻¹, which is comparable to traditional materials.
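To put that conductivity figure in context, the thermal resistance of an insulation layer follows R = d/λ, so a layer thickness can be sized for a target resistance. The sketch below applies this to the cited cellulose value; the 10 cm thickness is an arbitrary example, not a recommended design value.

```python
lambda_cellulose = 0.042   # W/(m*K), cited value for recycled cellulose fibres
thickness_m = 0.10         # 10 cm layer, example value only

r_value = thickness_m / lambda_cellulose   # m^2*K/W
u_value = 1.0 / r_value                    # W/(m^2*K), layer alone, ignoring surfaces
print(f"R = {r_value:.2f} m^2K/W, U = {u_value:.2f} W/m^2K")   # R ~ 2.38, U ~ 0.42
```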
Due to the large amount of waste generated by the fashion and clothing industry, several studies and various research projects, such as the RECYdress project (2022) and MATE.ria tessile (2023), both conducted at Politecnico di Milano, have investigated textile treatments and the use of textiles as secondary raw materials in the building sector. Indeed, residual flows of textiles are estimated to have a recycling potential of about 16 kWh of energy saved for each kilogram of textile.
In this regard, the Waste Framework Directive, which governs textile waste in Europe and obliges member states to ensure the separate collection of textiles for re-use and recycling, might be implemented in 2025 to promote extended producer responsibility schemes. This would require fashion brands and textile producers to pay fees to help fund the collection and treatment of textile waste.
Several products leveraging recycled textiles for insulation are already available on the market. Inno-Therm, a company from Great Britain, produces insulation from recycled industrial cotton material-denim. Similarly, Le Relais, a French recycling company, which collects 45000 tons of used textiles annually, developed a thermal insulation product called Mettise. The product contains at least 85% recycled fibers and consists of cotton (70%), wool / acrylic (15%) and polyester (15%).
Current criticalities
To enable the wide utilization of bio-based materials in the built environment, there are several critical issues that require further investigation.
Performance and industrial scalability
According to several researchers, one of the main issues with bio-based materials in the construction sector is their required and expected performance, which must be comparable to that of traditional engineered building materials. Extensive research is therefore ongoing to address the challenges associated with long-term durability, reliability, serviceability, material properties and sustainable production.
A policy framework for bio-building materials
In the European context, in the framework of meeting climate mitigation objectives before 2050, European Union is trying to implement, among other measures, the production and utilization of bio-based materials in many diverse sectors and segments of society through regulations such as The European Industrial Strategy, the EU Biotechnology and Biomanufacturing Initiative and the Circular Action Plan.
However, as traditional materials still dominate the construction sector, there is a lack of understanding among some policymakers and developers regarding biomaterials. According to Göswein, the presence of a legal framework would reassure investors and insurance companies and enhance the promotion of circular economy dynamics.
See also
Bio-based material
Mycelium-based materials
Building material
References
Further reading
https://issuu.com/tobiashelmersson/docs/tobias_helmersson-from_the_ground_up
External links
Building materials
Construction | Bio-based building materials | [
"Physics",
"Engineering"
] | 2,649 | [
"Building engineering",
"Architecture",
"Construction",
"Materials",
"Matter",
"Building materials"
] |
77,310,910 | https://en.wikipedia.org/wiki/Normalized%20solutions%20%28nonlinear%20Schr%C3%B6dinger%20equation%29 | In mathematics, a normalized solution to an ordinary or partial differential equation is a solution with prescribed norm, that is, a solution which satisfies a condition like In this article, the normalized solution is introduced by using the nonlinear Schrödinger equation. The nonlinear Schrödinger equation (NLSE) is a fundamental equation in quantum mechanics and other various fields of physics, describing the evolution of complex wave functions. In Quantum Physics, normalization means that the total probability of finding a quantum particle anywhere in the universe is unity.
Definition and variational framework
In order to illustrate this concept, consider the following nonlinear Schrödinger equation with prescribed norm:

$$-\Delta u = \lambda u + f(u), \qquad \int_{\mathbb{R}^N} |u|^{2} \, dx = a^{2},$$

where $\Delta$ is the Laplacian operator, $\lambda \in \mathbb{R}$ is a Lagrange multiplier and $f$ is a nonlinearity. If we want to find a normalized solution to the equation, we need to consider the following functional: let $E : H^{1}(\mathbb{R}^N) \to \mathbb{R}$ be defined by

$$E(u) = \frac{1}{2}\int_{\mathbb{R}^N} |\nabla u|^{2} \, dx - \int_{\mathbb{R}^N} F(u) \, dx$$

with the constraint

$$\int_{\mathbb{R}^N} |u|^{2} \, dx = a^{2},$$

where $H^{1}(\mathbb{R}^N)$ is the Hilbert space of Sobolev functions and $F$ is the primitive of $f$.
A common method of finding normalized solutions is through variational methods, i.e., finding the maxima and minima of the corresponding functional under the prescribed norm. In this way, one finds a weak solution of the equation; moreover, if it satisfies the constraint, it is a normalized solution.
A simple example on Euclidean space
On a Euclidean space , we define a function
with the constraint .
By direct calculation, it is not difficult to conclude that the constrained maximum is , with solutions and , while the constrained minimum is , with solutions and .
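As a hedged illustration of constrained extremization of this kind, consider the assumed function f(x, y) = xy on ℝ² with the constraint x² + y² = 1 (this particular choice is an assumption made only for illustration, not necessarily the function discussed above). The sketch below recovers the constrained extrema numerically by parametrizing the constraint circle.

```python
import math

def f(x, y):
    return x * y          # assumed example function, chosen only for illustration

# Parametrize the constraint x^2 + y^2 = 1 by an angle and scan it.
best_max, best_min = -math.inf, math.inf
arg_max = arg_min = None
for k in range(100_000):
    t = 2.0 * math.pi * k / 100_000
    x, y = math.cos(t), math.sin(t)
    v = f(x, y)
    if v > best_max:
        best_max, arg_max = v, (x, y)
    if v < best_min:
        best_min, arg_min = v, (x, y)

print(best_max, arg_max)   # ~0.5, attained where x and y have equal signs
print(best_min, arg_min)   # ~-0.5, attained where x and y have opposite signs
```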
History
The exploration of normalized solutions for the nonlinear Schrödinger equation can be traced back to the study of standing wave solutions with prescribed $L^{2}$-norm. Jürgen Moser first introduced the concept of normalized solutions in the study of regularity properties of solutions to elliptic partial differential equations (elliptic PDEs). Specifically, he used normalized sequences of functions to prove regularity results for solutions of elliptic equations, which was a significant contribution to the field. Inequalities developed by Emilio Gagliardo and Louis Nirenberg played a crucial role in the study of PDE solutions in $L^{p}$ spaces. These inequalities provided important tools and background for defining and understanding normalized solutions.
For the variational problem, early foundational work in this area includes the concentration-compactness principle introduced by Pierre-Louis Lions in 1984, which provided essential techniques for solving these problems.
For variational problems with prescribed mass, several methods commonly used to deal with unconstrained variational problems are no longer available. At the same time, a new critical exponent appears, the $L^{2}$-critical exponent. From the Gagliardo-Nirenberg inequality, one finds that a nonlinearity which is $L^{2}$-subcritical, critical or supercritical leads to a different geometry of the functional. In the case where the functional is bounded from below, i.e., the subcritical case, the earliest result on this problem was obtained by Charles-Alexander Stuart using bifurcation methods to demonstrate the existence of solutions. Later, Thierry Cazenave and Pierre-Louis Lions obtained existence results using minimization methods. Then, Masataka Shibata considered Schrödinger equations with a general nonlinear term.
In the case where the functional is not bounded from below, i.e., the supercritical case, some new difficulties arise. Firstly, since $\lambda$ is unknown, it is impossible to construct the corresponding Nehari manifold. Secondly, it is not easy to obtain the boundedness of the Palais-Smale sequence. Furthermore, verifying the compactness of the Palais-Smale sequence is challenging because the embedding $H^{1}(\mathbb{R}^N) \hookrightarrow L^{2}(\mathbb{R}^N)$ is not compact. In 1997, Louis Jeanjean overcame these difficulties by using the following transform:

$$(s \star u)(x) := e^{\frac{Ns}{2}}\, u(e^{s} x), \qquad s \in \mathbb{R},$$

which preserves the $L^{2}$-norm. Thus, one has the following functional:

$$\tilde{E}(u, s) := E(s \star u) = \frac{e^{2s}}{2}\int_{\mathbb{R}^N} |\nabla u|^{2} \, dx - \frac{1}{e^{Ns}}\int_{\mathbb{R}^N} F\!\left(e^{\frac{Ns}{2}} u\right) dx.$$

Then,

$$\partial_{s} \tilde{E}(u, s)\Big|_{s=0} = \int_{\mathbb{R}^N} |\nabla u|^{2} \, dx - \frac{N}{2}\int_{\mathbb{R}^N} \left( f(u)u - 2F(u) \right) dx,$$

which corresponds exactly to the Pokhozhaev identity of the equation. Jeanjean used this additional condition to ensure the boundedness of the Palais-Smale sequence, thereby overcoming the difficulties mentioned earlier. As the first method to address the issue of normalized solutions for unbounded functionals, Jeanjean's approach has become a common way of handling such problems and has been imitated and developed by subsequent researchers.
In the following decades, researchers expanded on these foundational results. Thomas Bartsch and Sébastien de Valeriola investigated the existence of multiple normalized solutions to nonlinear Schrödinger equations, focusing on solutions that satisfy a prescribed norm constraint. Recent advancements include the study of normalized ground states for NLS equations with combined nonlinearities by Nicola Soave in 2020, who examined both subcritical and critical cases. This research highlighted the intricate balance between different types of nonlinearities and their impact on the existence and multiplicity of solutions.
In bounded domain, the situation is very different. Let's define where . Refer to Pokhozhaev's identity,
The boundary term makes it impossible to apply Jeanjean's method. This has led many scholars to explore the problem of normalized solutions on bounded domains in recent years. In addition, there have been a number of interesting results in recent years about normalized solutions of Schrödinger systems, the Choquard equation, and the Dirac equation.
Some extended concepts
Mass critical, mass subcritical, mass supercritical
Let us consider the nonlinear term to be homogeneous, that is, $f(u) = |u|^{p-2}u$ with $2 < p < 2^{*}$. Referring to the Gagliardo-Nirenberg inequality, define

$$\delta_{p} = \frac{N(p-2)}{2p};$$

then there exists a constant $C_{N,p} > 0$ such that for any $u \in H^{1}(\mathbb{R}^N)$, the following inequality holds:

$$\|u\|_{p} \le C_{N,p}\, \|\nabla u\|_{2}^{\delta_{p}}\, \|u\|_{2}^{1-\delta_{p}}.$$

Thus, there is a concept of mass critical exponent,

$$\bar{p} = 2 + \frac{4}{N}.$$

From this, one obtains the notions of mass-subcritical ($p < \bar{p}$) and mass-supercritical ($p > \bar{p}$) nonlinearities, which also determine whether the functional is bounded from below on the constraint or not.
Palais-Smale sequence
Let $X$ be a Banach space and $\Phi : X \to \mathbb{R}$ be a functional. A sequence $(u_n)_n \subset X$ is called a Palais-Smale sequence for $\Phi$ at the level $c \in \mathbb{R}$ if it satisfies the following conditions:
1. Energy Bound: $\Phi(u_n) \to c$ as $n \to \infty$, so that $\sup_n |\Phi(u_n)| < \infty$.
2. Gradient Condition: $\Phi'(u_n) \to 0$ in $X^{*}$ as $n \to \infty$.
Here, $\Phi'$ denotes the Fréchet derivative of $\Phi$, and $\langle \cdot , \cdot \rangle$ denotes the duality pairing between $X^{*}$ and $X$. Palais-Smale sequences are named after Richard Palais and Stephen Smale.
See also
Standing wave
Sobolev inequality
Palais–Smale compactness condition
Variational principle
Schrödinger picture
Mathematical formulation of quantum mechanics
Relation between Schrödinger's equation and the path integral formulation of quantum mechanics
References
Further reading
Quantum mechanics
Partial differential equations
Calculus of variations | Normalized solutions (nonlinear Schrödinger equation) | [
"Physics"
] | 1,307 | [
"Quantum mechanics",
"Eponymous equations of physics",
"Equations of physics",
"Schrödinger equation"
] |
77,311,267 | https://en.wikipedia.org/wiki/Rail%20vehicle%20resistance | The rail vehicle resistance (or train resistance or simply resistance) is the total force necessary to maintain a rail vehicle in motion. This force depends on a number of variables and is of crucial importance for the energy efficiency of the vehicle as it is proportional to the locomotive power consumption. For the speed of the vehicle to remain the same, the locomotive must express the proper tractive force, otherwise the speed of the vehicle will change until this condition is met.
Davis equation
A number of experimental measurements of the train resistance have shown that this force can be expressed as a quadratic function of speed, as shown below:

$$R = A + Bv + Cv^{2},$$

where $R$ is the resistance, $v$ is the speed of the rail vehicle and $A$, $B$ and $C$ are experimentally determined coefficients. The best known of these relations was proposed by Davis W. J. Jr. and is named after him. The Davis equation contains mechanical and aerodynamic contributions to resistance. The first formulation assumes that there is no wind; however, formulations that do not make this assumption exist:
$$R = A + B_{1} v + B_{2} u + C u^{2},$$

where $u$ is the speed of the air with respect to the vehicle, while $B_{1}$ and $B_{2}$ are experimental coefficients that separately account for mechanical and aerodynamic (viscous) phenomena, respectively.
The coefficients for these equations are determined experimentally by measuring the tractive effort of the locomotive at different constant speeds or with coasting experiments (the rail vehicle is set in motion at a certain speed and the traction is then disengaged, causing the vehicle to slow down and stop due to resistance).
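As an illustration of how the coefficients can be estimated from constant-speed measurements, the sketch below performs an ordinary least-squares fit of A, B and C on synthetic (speed, resistance) data; the numbers are invented for the example and do not refer to any specific vehicle.

```python
import numpy as np

# Synthetic constant-speed measurements: speed in m/s, resistance in kN.
v = np.array([5.0, 10.0, 20.0, 30.0, 40.0, 55.0])
r = np.array([11.6, 12.7, 16.2, 21.5, 28.6, 42.6])

# The Davis model R = A + B*v + C*v^2 is linear in (A, B, C): fit by least squares.
design = np.column_stack([np.ones_like(v), v, v ** 2])
(a, b, c), *_ = np.linalg.lstsq(design, r, rcond=None)
print(f"A={a:.2f} kN, B={b:.3f} kN*s/m, C={c:.4f} kN*s^2/m^2")

# Share of the aerodynamic (quadratic) term at a given speed.
v_test = 70.0  # m/s, roughly 250 km/h
total = a + b * v_test + c * v_test ** 2
print(f"aerodynamic share at {v_test} m/s: {c * v_test**2 / total:.0%}")
```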
Most methods for determining these coefficients do not consider the effect of lateral forces on the vehicle. Lateral forces can be caused by the centripetal acceleration of the vehicle following the curving of the tracks, by lateral tilt of the rails, or by aerodynamic forces if crosswind is present. These forces affect the resistance by pushing the vehicle laterally against the rail, causing sliding friction between the wheels and the rails. In case of crosswind, the resistance is also affected by the change in the aerodynamic contribution as a consequence of changes in the flow.
Physical interpretation of the Davis equation
Speed-independent term
The first term in the Davis equation ($A$) accounts for the contributions to the resistance that are independent of speed. Track gradient and acceleration are two of the phenomena contributing to this term. These are not dissipative processes, so the additional work required from the locomotive to overcome the increased resistance is converted to mechanical energy (potential energy for the gradient and kinetic energy for the acceleration). As a consequence, these phenomena may, in different conditions, result in positive or negative contributions to the resistance. For example, a train decelerating on horizontal tracks will experience lower resistance than if it were travelling at constant speed. Other contributions to this term are dissipative, for example bearing friction and rolling friction due to the local deformation of the rail at the point of contact with the wheels; these latter quantities can never reduce the train resistance.
The term $A$ is constant with respect to vehicle speed, but various empirical relations have been proposed to predict its value. The general consensus is that the term is directly related to the mass of the vehicle, with some authors also observing an effect of the number of axles as well as of the axle loads.
Speed-linear term
The coefficient $B$ in the second term of the Davis equation relates to the contributions linearly dependent on speed and is sometimes omitted because it is negligible compared to the other terms. This term accounts for mass-related, speed-dependent mechanical contributions to the resistance and for the momentum of the intake air used for cooling and HVAC.
Similarly to $A$, empirical formulas have been proposed to evaluate the term $B$, and again a mass dependence is present in all major methods for determining the rail vehicle resistance coefficients, with some also observing a dependence on the number of trailers and locomotives or on the train length.
Speed-quadratic term
The coefficient $C$ in the third term of the Davis equation accounts for the aerodynamic drag acting on the vehicle; it is explained by the fact that as the train moves through the air, it sets some of the surrounding air in motion (this is called the slipstream). To maintain constant speed, the continuous transfer of momentum to the air needs to be compensated by an additional tractive force from the locomotive, which is accounted for by this term. As train speed increases, the aerodynamic drag becomes the dominant contribution to the resistance: for high-speed trains above 250 km/h and for freight trains above 115 km/h it accounts for 75-80% of the resistance.
This term is highly dependent on the geometry of the vehicle, and therefore it will be much lower for the streamlined high-speed passenger train than for freight trains, which behave like bluff bodies and produce much larger and more turbulent slipstreams at the same vehicle speed, leading to increased momentum transfer to the surrounding air.
Few general considerations can be made about the aerodynamic contribution to rail vehicle resistance because the aerodynamic drag heavily depends on both flow conditions and the geometry of the vehicle. However, the drag is higher in crosswind conditions than in still air, and for small angles the relation between drag coefficient and yaw angle is approximately linear.
Empirical relations for the Davis equation coefficients
Over the years, empirical relations have been proposed for estimating the values of the coefficients of the Davis equation; these, however, also rely on further coefficients to be determined experimentally. Below are the relations proposed by Armstrong and Swift:
where the quantities involved are, respectively, the total mass of the trailer cars and the total mass of the locomotives expressed in tons; the number of trailer cars, the number of locomotives, the number of bogies and the number of pantographs; the total power expressed in kW; the head/tail drag coefficients and the bogie drag coefficients; the frontal cross-sectional area in square metres; the perimeter, the length and the intervehicle gap (all lengths expressed in metres). The coefficients $A$, $B$ and $C$ are expressed in N, N·s/m and N·s²/m², respectively.
See also
Tractive effort
Traction (mechanics)
Rolling friction
Drag (physics)
References
Trains
Power (physics)
Friction | Rail vehicle resistance | [
"Physics",
"Chemistry",
"Mathematics",
"Technology"
] | 1,226 | [
"Mechanical phenomena",
"Physical phenomena",
"Force",
"Friction",
"Physical quantities",
"Transport systems",
"Quantity",
"Surface science",
"Power (physics)",
"Energy (physics)",
"Trains",
"Wikipedia categories named after physical quantities"
] |
77,311,317 | https://en.wikipedia.org/wiki/Exscalate4Cov | Exscalate4Cov was a public-private consortium supported by the Horizon Europe program from the European Union, aimed at leveraging high-performance computing (HPC) as a response to the coronavirus pandemic. The project utilized high-throughput, extreme-scale, computer-aided drug design software to conduct experiments.
The Exscalate4Cov project, which stands for EXaSCale smArt pLatform Against paThogEns for Corona Virus, was coordinated by Dompé Farmaceutici and involved 17 participants. It was part of the Horizon 2020 SOCIETAL CHALLENGES - Health, demographic change and well-being funding programme.
The project conducted one of the largest virtual screening and drug repositioning experiments, identifying a potentially effective molecule against SARS-CoV-2.
Context
Background
Drug discovery can be a long and costly process, often taking years and requiring substantial financial investment. Pharmaceutical companies have large datasets of chemical compounds, which they test against a drug target, often a protein receptor. The goal is to find compounds that interact with the targets, leading to potential therapeutic effects.
Therefore, the process of finding new drugs usually involves high-throughput screening (HTS). HTS enables the rapid identification of active compounds. For example, virtual screening can be used as an early stage of the drug discovery pipeline to evaluate the interactions between large datasets of small molecules and a drug target, identifying potential hit candidates. This approach helps in identifying potential hit candidates by predicting how different compounds will bind to the target protein, which will go further in the experimental validation.
In an urgent computing scenario, such as a pandemic, where time to solution is critical, virtual screening is used to identify hit molecules for the latter stages of the drug discovery pipeline, such as lead optimization and clinical trial. The Exscalate4Cov project was initiated after the COVID-19 pandemic outbreak. This project aimed to leverage the computational power of EU supercomputers to accelerate the discovery of effective treatments for the coronavirus. By utilizing high-throughput virtual screening, Exscalate4Cov aimed to find faster solutions to the crisis.
Scope
Exscalate4Cov's approach involved screening billions of compounds against various protein targets of the SARS-CoV-2 virus, identifying those with a higher binding affinity with the target. The project's objectives were:
Identify potential drug candidates against the coronavirus to combat the COVID-19 pandemic;
Conduct a large-scale experiment as an example for future pandemic scenarios;
Develop a computer-aided drug design platform that leverages supercomputer capabilities;
Fast sharing of data and scientific discoveries with the community to work in an urgent computing scenario.
Previous projects
The Exscalate4Cov project followed the ANTAREX4ZIKA project, both of which aimed to leverage HPC for drug discovery, albeit targeting different viruses. While Exscalate4Cov focused on the SARS-CoV-2 virus responsible for COVID-19, ANTAREX4ZIKA was dedicated to addressing the Zika virus. The ANTAREX4ZIKA project concluded at the end of 2018 and involved a virtual screening campaign on the CINECA Marconi machine, with a total of 10 PetaFLOPS. The ANTAREX project, which stands for AutoTuning and Adaptivity appRoach for Energy efficient eXascale HPC systems, emphasized auto-tuning and energy efficiency of HPC applications, making them more effective in various research scenarios, including drug discovery.
Consortium
The Exscalate4Cov consortium of public-private entities was coordinated by Dompé and involved 17 other institutions, from research centers to universities.
Pipeline
Inputs at the application level consist of ligands from the chemical space and the protein target of the virtual screening campaign, specifically the spike protein in the case of Exscalate4Cov. Following a molecular docking stage that generates potential ligand conformations, a scoring stage assesses the interaction strength between each ligand's pose and the protein. The pipeline ultimately produces a ranking of hit compounds as its output, indicating the most promising candidates for further investigation.
At the software level, the project utilizes the EXSCALATE docking platform. LiGen (Ligand Generator) is one of the main components of the platform, and it is used to perform molecular docking and scoring simulations. LiGen is responsible for generating and evaluating the conformations of ligands. Another relevant component at the same level is the libdpipe library, which facilitates scaling across multiple nodes and cores.
To harness the computational power offered by HPC centers, the docking platform uses MPI to scale across multiple nodes and CUDA acceleration to take advantage of the supercomputers' GPUs. The GPU code has undergone various optimizations, and variants based on OpenACC, OpenMP, and other techniques have also been explored to enhance performance and efficiency.
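A minimal, self-contained sketch of the dock-score-rank flow described above is shown below. The function names, the random stand-in score, and the ligand identifiers are placeholders invented for the example; this is not the LiGen/EXSCALATE code or API.

```python
# Schematic sketch of the dock -> score -> rank flow described above.
# All names (generate_poses, score_pose, screen) are illustrative placeholders,
# and the score is a random stand-in; this is NOT the LiGen/EXSCALATE API.

from dataclasses import dataclass
from typing import Iterable, List, Tuple
import heapq
import random

@dataclass
class Pose:
    ligand_id: str
    conformation_seed: int  # stand-in for a 3-D conformation

def generate_poses(ligand_id: str, n_poses: int = 8) -> List[Pose]:
    """Docking stage: enumerate candidate conformations for one ligand."""
    return [Pose(ligand_id, seed) for seed in range(n_poses)]

def score_pose(pose: Pose) -> float:
    """Scoring stage: estimate ligand-protein interaction strength (higher is better)."""
    rng = random.Random(f"{pose.ligand_id}-{pose.conformation_seed}")
    return rng.uniform(0.0, 10.0)  # placeholder for a physics-based score

def screen(ligands: Iterable[str], top_k: int = 3) -> List[Tuple[float, str]]:
    """Virtual screening: keep the best-scoring pose per ligand and rank the ligands."""
    best = {lig: max(score_pose(p) for p in generate_poses(lig)) for lig in ligands}
    return heapq.nlargest(top_k, ((score, lig) for lig, score in best.items()))

if __name__ == "__main__":
    print(screen([f"LIG{i:04d}" for i in range(100)]))
```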
Virtual screening campaign
The project's main experiment evaluated the interactions between 12 viral proteins of SARS-CoV-2 against 70 billion molecules from the EXSCALATE chemical library. In November 2020, consortium members coordinated one of the largest virtual screening campaigns, harnessing the combined computational power of two supercomputers totaling 81 PFLOPS.
The supercomputers used are:
Marconi100: Operated by CINECA, each node consists of 1 IBM POWER9 AC922 CPU (32 cores, 128 threads) and 4 NVIDIA V100 GPUs with 16 GB of VRAM. The machine consists of 970 nodes, providing a total of 29.3 PFLOPS.
HPC5: Operated by Eni, each node consists of 1 Intel Xeon Gold 6252 24C CPU (24 cores, 48 threads) and 4 NVIDIA V100 GPUs with 16 GB of VRAM. The machine consists of 1820 nodes, providing a total of 51.7 PFLOPS.
Throughput
The large-scale campaign used a reservation of 800 Marconi100 nodes and 1500 HPC5 nodes for 60 hours, achieving an average throughput of 2400 ligands per second (lig/s) on Marconi100 and 2000 lig/s on HPC5.
Data storage
Another critical aspect of the experiment was data storage management. The platform leveraged efficient MPI I/O operations to handle multi-node computations. The input data required 3.3 TB of space in SMILES format. However, SMILES data needed to be expanded in a pre-processing step involving 100 nodes over five days. Similarly, the post-processing step involved 19 nodes over five days.
Output data
The final output consisted of CSV files containing scores for each input ligand, occupying 69 TB. The resulting dataset, containing 570 million hit compounds, is freely available.
Drug repositioning
The Exscalate4Cov project also conducted drug repositioning experiments. Drug repurposing offers an interesting approach to address unmet clinical needs in urgent scenarios such as pandemics: repurposing existing drugs with established safety and toxicology profiles provides a significant advantage by saving time in identifying potential new treatments. During the European Exscalate4Cov project activities, raloxifene was selected through a combined approach of drug repurposing and in-silico screening on SARS-CoV-2 target proteins, followed by subsequent in-vitro screening.
Results
Mediate
The project's large-scale campaign results are available through the MEDIATE (MolEcular DockIng AT homE) platform. The objective of MEDIATE is to collect a chemical library of SARS-CoV-2 inhibitors. The MEDIATE portal provides access to a set of small molecules that researchers can use to start de-novo drug design from a reduced set of molecules.
Raloxifene
Raloxifene is a known chemical compound used to treat osteoporosis. As a result of drug repositioning experiments, the E4C project identified raloxifene as a possible candidate to treat early-stage COVID-19 patients, aiming to prevent clinical progression. In October 2020, AIFA authorized clinical trials of raloxifene in COVID-19 patients, and it is currently undergoing testing for approval.
Public interest
The experiments, including the discovery of raloxifene as a possible drug candidate against COVID-19, gained significant interest from the scientific community, as documented in several scientific articles.
The project's results also captured national interest in Italy, highlighted by various newspaper articles, due to the use of Italian supercomputers during the pandemic. Additionally, the large-scale campaign results gained attention from international journals.
See also
CINECA
COVID-19 pandemic
Drug discovery
EuroHPC
Horizon 2020
HPC5
Raloxifene
Supercomputing
Virtual screening
References
Further reading
External links
Exscalate4Cov Website
E4C Cordis page
EXSCALATE Webpage
Drug discovery
Supercomputing
Bioinformatics
Horizon 2020 projects | Exscalate4Cov | [
"Chemistry",
"Technology",
"Engineering",
"Biology"
] | 1,838 | [
"Biological engineering",
"Supercomputing",
"Drug discovery",
"Life sciences industry",
"Bioinformatics",
"Medicinal chemistry"
] |
77,311,534 | https://en.wikipedia.org/wiki/Momentum%20mapping%20format | Momentum mapping format is a key technique in the Material Point Method (MPM) for transferring physical quantities such as momentum, mass, and stress between a material point and a background grid.
The Material Point Method (MPM) is a numerical technique using a mixed Eulerian-Lagrangian description, first proposed by Sulsky et al. in 1994. It discretises the computational domain with material points and employs a background grid to solve the momentum equations.
MPM has since been expanded to various fields such as computational solid dynamics. Currently, MPM features several momentum mapping schemes, with the four main ones being PIC (Particle-in-cell), FLIP (Fluid-Implicit Particle), hybrid format, and APIC (Affine Particle-in-Cell). Understanding these schemes in-depth is crucial for the further development of MPM.
Background
MPM represents materials as collections of material points (or particles). Unlike other particle methods such as SPH (Smoothed-particle hydrodynamics) and DEM (Discrete element method), MPM also uses a background grid to solve the momentum equations arising from particle interactions. MPM can be categorized as a mixed particle/grid method or a mixed Lagrangian-Eulerian method. By combining the strengths of both frameworks, MPM is well suited to large-deformation problems. It has been further developed and applied to various challenging problems such as high-speed impact (Huang et al., 2011), landslides (Fern et al., 2019), saturated porous media (He et al., 2024), and fluid-structure interaction (Li et al., 2022).
The Material Point Method (MPM) community has developed several momentum mapping schemes, among which PIC, FLIP, the hybrid scheme, and APIC are the most common. The FLIP scheme is widely used for dynamic problems due to its energy conservation properties, although it can introduce numerical noise and instability (Bardenhagen, 2002), potentially leading to computational failure. Conversely, the PIC scheme is known for numerical stability and is advantageous for static problems, but it suffers from significant numerical dissipation (Brackbill et al., 1988), which is unacceptable for strongly dynamic responses. Nairn et al. combined FLIP and PIC linearly (Nairn, 2015) to create a hybrid scheme, adjusting the proportion of each component based on empirical rather than theoretical analysis. Hammerquist and Nairn (2017) introduced an improved scheme called XPIC-m (for eXtended Particle-In-Cell of order m), which addresses the excessive filtering and numerical diffusion of PIC while suppressing the noise caused by the nonlinear space in FLIP used in MPM. XPIC-1 (eXtended Particle-In-Cell of order 1) is equivalent to the standard PIC method. Jiang et al. (2017, 2015) introduced the Affine Particle In Cell (APIC) method, where particle velocities are represented locally affine, preserving linear and angular momentum during the transfer process. This significantly reduces numerical dissipation and avoids the velocity noise and instability seen in FLIP. Fu et al. (2017) introduced generalized local functions into the APIC method, proposing the Polynomial Particle In Cell (PolyPIC) method. PolyPIC views G2P (Grid-to-Particle) transfer as a projection of the particle's local grid velocity, preserving linear and angular momentum, thereby improving energy and vorticity retention compared to the original APIC. Additionally, PolyPIC retains the filtering properties of APIC and PIC, providing robustness against noise.
Affine particle in cell method
In the PIC scheme, particle velocities during the Grid-to-Particle (G2P) substep are directly overwritten by extrapolating the nodal velocities to the particles themselves:
$v_p^{n+1} = \sum_i N_i(x_p)\, v_i^{n+1}$
where $v_p$ is the particle velocity, $v_i$ the nodal velocity, and $N_i(x_p)$ the grid shape function (interpolation weight) of node $i$ evaluated at the particle position $x_p$.
In the FLIP scheme, the material point velocities are updated by interpolating the velocity increments of the grid nodes over the current time step:
$v_p^{n+1} = v_p^{n} + \sum_i N_i(x_p)\,\left( v_i^{n+1} - v_i^{n} \right)$
The hybrid scheme's momentum mapping can be mathematically represented as:
$v_p^{n+1} = \alpha\, v_p^{\mathrm{FLIP}} + (1 - \alpha)\, v_p^{\mathrm{PIC}}$
where the parameters are defined as follows:
$v_p^{\mathrm{FLIP}}$ represents the velocity computed using the FLIP scheme
$v_p^{\mathrm{PIC}}$ represents the velocity computed using the PIC scheme
$\alpha \in [0, 1]$ is the proportion of FLIP, with $\alpha = 1$ representing pure FLIP and $\alpha = 0$ representing pure PIC
Based on the idea of "providing the local velocity field around the material point to the background grid by transferring the material point's velocity gradient," Jiang et al. (2015) proposed the APIC method. In this method, the particle velocity is locally affine, mathematically expressed as:
$v_p(x) = v_p + C_p\,(x - x_p)$
where the parameters are defined as follows:
$v_p$ indicates the translational velocity
$C_p$ represents the affine velocity matrix; its diagonal components describe the horizontal and vertical stretching modes, while its off-diagonal components describe the clockwise and counterclockwise shear motion modes. If $C_p = 0$, the momentum mapping scheme reduces to PIC.
Computational implementation
PIC (Particle-In-Cell), FLIP (Fluid-Implicit Particle), the hybrid scheme, and APIC (Affine Particle-In-Cell) differ mainly in how momentum is mapped between material points and the grid and in the corresponding time-integration details. The typical time integration schemes for the PIC, FLIP, hybrid, and APIC schemes each have their own characteristics, but the evolution of momentum on the grid is identical under all of them. Despite the differences among these four momentum mapping formats, their common points are still dominant. In the P2G (Particle-to-Grid) process, the momentum mapping in the PIC, FLIP, and hybrid schemes is the same, and the material point positions are updated in the same manner across all four schemes. During the G2P stage, PIC transfers the updated momentum on grid nodes directly back to the material points, FLIP uses incremental mapping, and the hybrid scheme linearly combines FLIP and PIC using a coefficient. APIC mapping maintains an additional affine matrix on top of the PIC mapping.
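To make the differences concrete, the following one-dimensional sketch implements the four G2P velocity updates using conventional notation (interpolation weights w, grid velocities before and after the grid update, blend factor alpha, and an APIC affine term C_p). It is an illustrative sketch only, not taken from any of the cited implementations.

```python
# 1-D sketch of the grid-to-particle (G2P) velocity updates discussed above.
# w[i] are interpolation weights of grid node i for one particle; the blend
# factor alpha and the scalar APIC term c_p follow conventional notation and
# are illustrative, not a reference MPM implementation.

import numpy as np

def g2p_pic(w, v_grid_new):
    """PIC: overwrite the particle velocity with the interpolated grid velocity."""
    return np.dot(w, v_grid_new)

def g2p_flip(w, v_grid_old, v_grid_new, v_p_old):
    """FLIP: add the interpolated grid velocity increment to the old particle velocity."""
    return v_p_old + np.dot(w, v_grid_new - v_grid_old)

def g2p_hybrid(w, v_grid_old, v_grid_new, v_p_old, alpha):
    """Hybrid: linear blend, alpha=1 -> pure FLIP, alpha=0 -> pure PIC."""
    v_flip = g2p_flip(w, v_grid_old, v_grid_new, v_p_old)
    v_pic = g2p_pic(w, v_grid_new)
    return alpha * v_flip + (1.0 - alpha) * v_pic

def g2p_apic(w, x_nodes, x_p, v_grid_new):
    """APIC: PIC velocity plus a locally affine velocity term (scalar in 1-D)."""
    v_p = np.dot(w, v_grid_new)
    d_p = np.dot(w, (x_nodes - x_p) ** 2)           # weight "inertia" term
    b_p = np.dot(w * v_grid_new, x_nodes - x_p)     # first moment of grid velocity
    c_p = b_p / d_p if d_p > 0 else 0.0             # 1-D analogue of C_p = B_p D_p^{-1}
    return v_p, c_p

if __name__ == "__main__":
    w = np.array([0.25, 0.5, 0.25])
    x_nodes = np.array([0.0, 1.0, 2.0])
    v_old, v_new = np.array([0.0, 1.0, 0.0]), np.array([0.1, 1.2, 0.1])
    print(g2p_pic(w, v_new), g2p_flip(w, v_old, v_new, 1.0),
          g2p_hybrid(w, v_old, v_new, 1.0, 0.95), g2p_apic(w, x_nodes, 1.0, v_new))
```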
Numerical tests
Numerical tests on ring collision highlight the performance of different momentum mapping schemes in dynamic problems. The mean stress distribution at representative times and the evolution of total energy are the key quantities examined. Because the PIC mapping scheme cancels out velocities in opposite directions, significant energy loss occurs, preventing effective conversion of kinetic energy into strain energy. GIMP_FLIP (Generalized Interpolation Material Point - Fluid Implicit Particle) shows notable numerical noise and instability, with severe oscillations in mean stress, leading to numerical fracture. GIMP_FLIP0.99 exhibits improved stability but still carries the risk of numerical fracture. Tests indicate that increasing the PIC component enhances numerical stability, with the stress distribution becoming more uniform and regular and the probability of numerical fracture decreasing. However, energy loss also becomes more pronounced. GIMP_APIC (Generalized Interpolation Material Point - Affine Particle-In-Cell) demonstrates the best performance, providing a stable and smooth stress distribution while maintaining excellent energy conservation characteristics.
Related research and developments
Recently, Qu et al. proposed PowerPIC (Qu et al., 2022), a more stable and accurate mapping scheme based on optimization, which also maintains volume and uniform particle distribution characteristics.
See also
Smoothed Particle Hydrodynamics
Finite Element Method
Particle-in-cell
Material point method
Numerical Methods for Partial Differential Equations
References
External links
Computational physics
Civil engineering
Materials science
Computational mathematics
Numerical analysis
Numerical differential equations
Computational fluid dynamics
Simulation | Momentum mapping format | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 1,518 | [
"Applied and interdisciplinary physics",
"Computational fluid dynamics",
"Applied mathematics",
"Computational mathematics",
"Materials science",
"Computational physics",
"Construction",
"Civil engineering",
"Mathematical relations",
"nan",
"Numerical analysis",
"Approximations",
"Fluid dyna... |
77,313,933 | https://en.wikipedia.org/wiki/Elmo%20Motion%20Control | Elmo Motion Control is an engineering company specializing in developing, producing, and selling innovative hardware and software solutions in motion control. The company was founded in 1988 and is based in Petah Tikva, Israel. On September 4, 2022, Elmo was fully acquired by Bosch Rexroth.
History
Elmo Motion Control was established in 1988 by Haim Monhait. Four years later, in 1992, the company expanded its operations by opening its first subsidiary in the United States. In 2008, Elmo acquired and merged with Control Solutions (Pitronot Bakara), further solidifying its position in the market. In 2015, the company opened an additional production facility in Warsaw, Poland, to meet the growing demand.
Over the years, Elmo has steadily expanded its global presence by establishing eight additional subsidiaries worldwide. These include operations in China, Europe, and the APAC region. The most recent subsidiary was opened in Singapore in 2019.
Operations
Elmo employs over 400 personnel and has its headquarters and manufacturing facilities in Petah Tikva, Israel. The company also has worldwide sales and technical support offices and additional manufacturing facilities.
Products and markets
Elmo offers complete motion control solutions, ranging from design to delivery, including cutting-edge servo drives, network-based multi-axis motion controllers, power supplies, and integrated servo motors. These solutions can be customized, configured, and simulated using Elmo's proprietary software tools, which are designed to be advanced and easy to use. Elmo's products cater to various industries, such as semiconductors, lasers, robots, drones, life sciences, industrial automation, and extreme environments.
Product lines
Elmo Motion Control provides various servo drives suitable for various motion requirements, from industrial applications that require high precision and power density to extreme applications designed for critical missions in harsh environments. Since its establishment, Elmo has developed three generations of products, each offering servo drives and motion controllers for both industrial and harsh environments. The latest product line, Platinum, is known for its EtherCAT networking precision and fully certified functional safety across all its products. Elmo's servo-drive product lines comply with global industry standards.
Acquisition by Bosch Rexroth
In September 2022, Elmo Motion Control was fully acquired by Bosch Rexroth, a leading global supplier of drive and control technologies.
References
External links
Official website
Companies based in Petah Tikva
Israeli companies established in 1988
Motion control
2022 mergers and acquisitions | Elmo Motion Control | [
"Physics",
"Engineering"
] | 501 | [
"Physical phenomena",
"Motion (physics)",
"Automation",
"Motion control"
] |
77,313,952 | https://en.wikipedia.org/wiki/Hydric%20brooding | Hydric brooding is an egg incubation practice performed by some species of frogs. It involves either placing urine from the bladder on the eggs to keep them wet or holding the body over the eggs to prevent them from drying out.
Unlike reptile and bird eggs, which have an amniotic membrane to prevent dehydration, amphibian eggs laid on land can become dry and die. In some species, the male frog will periodically return to the clutch and moisten the eggs with urine from his bladder. For example, the male poison arrow frog Phyllobates vittatus sits on top of his eggs and sheds liquid. He visits the clutch roughly three times each day until hatching.
References
Reproductive system
Amphibians
Herpetology | Hydric brooding | [
"Biology"
] | 155 | [
"Behavior",
"Animals",
"Reproductive system",
"Sex",
"Reproduction",
"Amphibians",
"Organ systems"
] |
77,314,022 | https://en.wikipedia.org/wiki/Superseded%20combination | In taxonomy, a superseded combination is a notice of change to the binomial nomenclature of the accepted name of a species. This happens when a species is moved to a new genus after the initial species description. The original name is called a superseded combination, and the new name is called the new combination, or .
Some but not all superseded combinations are basionyms, and some basionyms are not superseded combinations. The superseded combination is not the same as a synonym and technically should not be called one.
If the species is moved again to a third genus, both of the older names are considered superseded combinations. The original name is the superseded original combination and the second name is the superseded recombination. If the species were moved back to a previous genus, the International Commission on Zoological Nomenclature would not consider the current name to be a new combination.
The specific epithet is kept in all these name changes, with perhaps some modification of the suffix to harmonize with the genus name.
For example, in 1766 Peter Simon Pallas described a new species of marine polychaete worm he called Aphrodita flava. In 1867, that name became a superseded (original) combination when Hjalmar Kinberg moved the species to Thesmia, creating the new combination Thesmia flava. The genus Thesmia was later synonymized with Chloeia, creating the new combination Chloeia flava. Aphrodita flava is the superseded original combination, Thesmia flava is the superseded subsequent recombination, and the current name Chloeia flava is the new combination.
References
Botanical nomenclature
Zoological nomenclature | Superseded combination | [
"Biology"
] | 332 | [
"Botanical nomenclature",
"Zoological nomenclature",
"Botanical terminology",
"Biological nomenclature"
] |
77,315,178 | https://en.wikipedia.org/wiki/Tolo%20Calafat | Bartolomé "Tolo" Calafat Marcus (17 September 1970 Palma de Mallorca — 29 April 2010, Annapurna) was a Spanish mountaineer. Calafat was climbing as part of an expedition led by Juanito Oiarzabal on Annapurna when he died of cerebral edema.
Calafat and his team were trapped on Annapurna by a blizzard when he became ill. A high altitude rescue attempt was made, but was delayed by the weather. By the time a helicopter carrying the doctor and mountaineer Jorge Egocheaga reached Calafat's position at 7,600 m on the mountain, it was too late, and he had been covered by snow.
Prior to his attempt on Annapurna, Calafat successfully summited Mount Everest in 2006 and Cho Oyu in 2004.
References
1970 births
2010 deaths
Spanish mountain climbers
Spanish summiters of Mount Everest
Electronics engineers
People from Palma de Mallorca
Deaths on Annapurna
Sport deaths in Nepal
Mountaineering deaths | Tolo Calafat | [
"Engineering"
] | 206 | [
"Electronics engineers",
"Electronic engineering"
] |
77,317,100 | https://en.wikipedia.org/wiki/Incompatibility%20of%20quantum%20measurements | Incompatibility of quantum measurements is a crucial concept of quantum information, addressing whether two or more quantum measurements can be performed on a quantum system simultaneously. It highlights the unique and non-classical behavior of quantum systems. This concept is fundamental to the nature of quantum mechanics and has practical applications in various quantum information processing tasks like quantum key distribution and quantum metrology.
History
Early ages
The concept of incompatibility of quantum measurements originated from Heisenberg's uncertainty principle, which states that certain pairs of physical quantities, like position and momentum, cannot be simultaneously measured with arbitrary precision. This principle laid the groundwork for understanding the limitations of measurements in quantum mechanics.
Mid-20th century
In the mid-20th century, researchers began to formalize the idea of compatibility of quantum measurements, and to explore conditions under which a set of measurements can be performed together on a single quantum system without disturbing each other. This was crucial for understanding how quantum systems behave under simultaneous observations.
Late-20th century
The study of incompatibility of quantum measurements gained significant attention with the rise of quantum information theory. Researchers realized that measurement incompatibility is not just a limitation but also a resource for various quantum information processing tasks. For example, it plays a crucial role in quantum cryptography, where the security of quantum key distribution protocols relies on the incompatibility of certain quantum measurements.
21st century
Modern research focuses on quantifying measurement incompatibility using various measures. Quite a number of approaches involve robustness-based measures, which assess how much noise can be added to a set of quantum measurements before they become compatible.
Definition
In quantum mechanics, two measurements, $A$ and $B$, are called compatible if and only if there exists a third measurement of which both can be obtained as margins, or equivalently, from which both can be simulated via classical post-processings. More precisely, let $A: \Sigma_A \to \mathcal{B}(\mathcal{H})$ and $B: \Sigma_B \to \mathcal{B}(\mathcal{H})$ be two positive operator-valued measures (POVMs), where $(\Omega_A, \Sigma_A)$ and $(\Omega_B, \Sigma_B)$ are two measurable spaces, and
$\mathcal{B}(\mathcal{H})$ is the set of bounded linear operators on a Hilbert space $\mathcal{H}$.
Then $A$ and $B$ are called compatible (jointly measurable) if and only if there is a POVM
$G$ on the product space $(\Omega_A \times \Omega_B, \Sigma_A \otimes \Sigma_B)$ such that
$G(X \times \Omega_B) = A(X)$ and $G(\Omega_A \times Y) = B(Y)$ for all $X \in \Sigma_A$ and $Y \in \Sigma_B$. Otherwise, $A$ and $B$ are called incompatible. The definition of compatibility of a finite number of POVMs is similar.
An example
For a two-outcome POVM on a qubit it is possible to write
where with are Pauli matrices.
Let and be two such POVMs with , , and , where . Then for
one can verify that the following POVM
is a compatiblizer of and , despite the fact that and are noncommutative for .
As for , it was shown that are incompatible.
Criteria for compatibility
Analytical criteria
Let be two two-outcome POVMs on a qubit. It was shown that are compatible if and only if
where for .
Moreover, let be three two-outcome POVMs on a qubit with . Then it was proved that are compatible if and only if
where , and for .
Numerical criteria
For a finite number of POVMs on a finite-dimensional quantum system, with a finite number of measurement outcomes (labeled by ), one can cast the following semidefinite program (SDP) to decide whether they are compatible or not:
where is a deterministic classical post-processing (i.e., or valued conditional probability density). If for all the maximizer leads to a negative , then are incompatible. Otherwise, they are compatible.
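As an illustration of how such a test can be set up numerically, the sketch below poses the joint-measurability problem for two binary qubit POVMs in its standard feasibility form (find a parent POVM G with the right margins) using the generic convex-optimization library cvxpy. This is a simpler formulation than the post-processing-based SDP referenced in the text, and the noisy-X/noisy-Z example and noise values are assumptions chosen only to exercise the test.

```python
# Sketch of a joint-measurability test for two binary POVMs {A0, A1} and
# {B0, B1} as a semidefinite feasibility problem (using cvxpy as a generic
# SDP front end): find G_ab >= 0 with sum_b G_ab = A_a and sum_a G_ab = B_b.
# The noisy-X / noisy-Z example below is illustrative only.

import numpy as np
import cvxpy as cp

def jointly_measurable(A, B):
    """A, B: lists of POVM effects (Hermitian matrices summing to the identity)."""
    d = A[0].shape[0]
    G = {(a, b): cp.Variable((d, d), hermitian=True)
         for a in range(len(A)) for b in range(len(B))}
    cons = [G[a, b] >> 0 for a in range(len(A)) for b in range(len(B))]
    cons += [sum(G[a, b] for b in range(len(B))) == A[a] for a in range(len(A))]
    cons += [sum(G[a, b] for a in range(len(A))) == B[b] for b in range(len(B))]
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve()
    return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)

if __name__ == "__main__":
    I = np.eye(2); X = np.array([[0, 1], [1, 0]]); Z = np.diag([1, -1])
    for eta in (0.5, 0.9):                        # noise parameter (illustrative)
        A = [(I + s * eta * X) / 2 for s in (+1, -1)]
        B = [(I + s * eta * Z) / 2 for s in (+1, -1)]
        print(eta, jointly_measurable(A, B))      # compatible iff eta <= 1/sqrt(2)
```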
Quantifiers of incompatibility
Incompatibility noise robustness
Let be a finite number of POVMs. Incompatibility noise robustness of is defined by
where denotes the number of outcomes. This quantity is a monotone under quantum channels since (i) it vanishes on that are compatible, (ii) it is symmetric under the exchange of POVMs, and (iii) it does not increase under the pre-processing by a quantum channel.
Incompatibility weight
Incompatibility weight of POVMs is defined by
where is any set of POVMs.
This quantity is a monotone under compatibility nondecreasing operations that consist of pre-processing by a quantum instrument and classical post-processing.
Incompatibility robustness
Incompatibility robustness of POVMs is defined by
where is any set of POVMs. Similar to incompatibility weight, this quantity is a monotone under compatibility nondecreasing operations.
All these quantifiers are based on the convex distance of incompatible POVMs to the set of compatible ones under the addition of different types of noise. Specifically, all these quantifiers can be evaluated numerically, as they fall under the framework of SDPs. For instance, it was shown that incompatibility robustness can be cast as the following SDP:
where is a deterministic classical post-processing, and is the dimension of the Hilbert space on which are performed. If the minimizer is , then .
Incompatibility and quantum information processing
Measurement incompatibility is intrinsically tied to the nonclassical features of quantum correlations. In fact, in many scenarios, it becomes evident that incompatible measurements are necessary to exhibit nonclassical correlations. Since such correlations are essential for tasks like quantum key distribution or quantum metrology, these connections underscore the resource aspect of measurement incompatibility.
Bell nonlocality
Bell nonlocality is a phenomenon where the correlations between quantum measurements on entangled quantum states cannot be explained by any local hidden variable theory. This was first demonstrated by Bell through Bell's Theorem, which showed that certain predictions of quantum mechanics are incompatible with the principle of locality (an idea that objects are only directly influenced by their immediate surroundings).
If the POVMs on Alice's side are compatible, then they will never lead to Bell nonlocality. However, the converse is not true. It was shown that there exist three binary measurements on a qubit, pairwise compatible but globally incompatible, yet not leading to any Bell nonlocality.
Quantum steering
Quantum steering is a phenomenon where one party (Alice) can influence the state of a distant party's (Bob's) quantum system through local measurements on her own quantum system. This was first introduced by Schrödinger and is a form of quantum nonlocality. It demonstrates that the state of Bob's quantum system can be "steered" into different states depending on the local measurements performed by Alice.
For quantum steering to occur, the local measurements performed by Alice must be incompatible. This is because steering relies on the ability to create different outcomes in Bob's system based on Alice's measurements, which is only possible if those measurements are not compatible. Further, it was shown that if the following POVMs are incompatible, then a state assemblage is steerable, where .
Quantum contextuality
Quantum contextuality refers to the idea that the outcome of a quantum measurement cannot be explained by any pre-existing value that is independent of the measurement context. In other words, the result of a quantum measurement depends on which other measurements are being performed simultaneously. This concept challenges classical notions of reality, where it is assumed that properties of a system exist independently of measurement.
It was proved that any set of compatible POVMs leads to preparation noncontextual correlations for all input quantum states. Conversely, the existence of a preparation noncontextual model for all input states implies compatibility of the involved POVMs.
See also
Generalized probabilistic theory
Mathematical formulation of quantum mechanics
Qudit
Quantum information
References
Further reading
Quantum information science
Mathematical optimization
Measure theory | Incompatibility of quantum measurements | [
"Mathematics"
] | 1,565 | [
"Mathematical optimization",
"Mathematical analysis"
] |
77,317,924 | https://en.wikipedia.org/wiki/Surfshark%20Antivirus | Surfshark Antivirus is a cybersecurity software developed by a company known for Surfshark VPN and internet security services. Surfshark Antivirus provides protection against malware, viruses, and other cyber threats.
Technology
In 2021, antivirus software was introduced as part of the Surfshark One bundle. Surfshark Antivirus offers real-time protection to users by scanning and identifying potential threats on devices. It integrates with Surfshark's other security products, including its VPN services. The antivirus is available for multiple platforms, including Windows, macOS, and Android.
Surfshark Antivirus features CleanWeb, a tool that minimizes ads, blocks trackers, and protects against phishing. It offers real-time protection, monitoring systems around the clock to guard against malware, and operates efficiently in the background. Surfshark Antivirus also uses heuristic detection that scans files for suspicious-looking bits of code. In December 2022, Surfshark added webcam protection to its antivirus tool, giving users control over exactly which apps are allowed to use the webcam.
Overview
Surfshark Antivirus is an antivirus software that includes real-time protection, scan-scheduling, malware detection, and malware removal. It is part of the Surfshark app and comes with the Surfshark One and One+ bundles. Antivirus can be used on up to five devices. The user can see the list of Antivirus-protected devices on their Surfshark profile page.
Features
Scanning
Scanning is a feature that scans all files stored on the device, including system and application files, even those the user can’t see. When malware is detected, the user is presented with a list of potentially harmful files and can select which ones they want to delete.
The antivirus application automatically copies each infected file to the quarantine folder for backup. If the backup process is unsuccessful, the file is renamed by changing the extension to “.infected.” Available on Windows, macOS, and Android.
Scheduling
Scheduling is a feature that lets users choose the date and time of their device scans. Windows and macOS users can schedule both Quick scans and Full scans. Available on Windows, macOS, and Android.
Quarantine
Quarantine is a feature that automatically isolates potentially harmful files found after a scan or real-time detection so they aren’t executable. Users can delete or restore the files and add them to the exclusion list. Available on Windows, and macOS.
Exclusions
Exclusions is a feature that allows users to choose which files and folders aren’t going to be scanned. Available on Windows, and macOS.
Webcam protection
Webcam protection is a feature that constantly monitors the user’s webcam. The user is informed when an untrusted app tries to access the camera. Available on Windows, and macOS.
Ransomware shield
Ransomware shield is a feature that provides added security to specified files and folders. The user selects which apps have permission to access their files. When an app that doesn’t have permission tries to access the files in the protected folder, the action is blocked, and the user is asked to allow or block the access. Available on macOS.
Surfshark Cloud Protect
Surfshark Cloud Protect is a feature that scans unknown files on the cloud for zero-day threats. The feature is based on a machine learning system where unknown file hashes are sent to a remote cloud server that quickly analyzes if the file is malicious.
If the file is found to be dangerous, then all Surfshark user’s apps are updated, so this new threat is quarantined. Available on Windows, macOS, and Android.
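A generic sketch of this kind of hash-based lookup is shown below. The helper names, the local blocklist standing in for the remote reputation service, and the quarantine behaviour are hypothetical and do not describe Surfshark's actual protocol or implementation.

```python
# Generic sketch of hash-based lookup as described above: the client computes
# only a file hash, checks it against a verdict source, and quarantines the
# file if the verdict is malicious. The blocklist here stands in for a remote
# service; this is NOT Surfshark's actual protocol or API.

import hashlib
from pathlib import Path

def file_sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def check_and_quarantine(path: Path, known_bad_hashes: set, quarantine: Path) -> bool:
    """Return True if the file was judged malicious and moved to quarantine."""
    if file_sha256(path) in known_bad_hashes:   # stand-in for the remote lookup
        quarantine.mkdir(exist_ok=True)
        path.rename(quarantine / path.name)
        return True
    return False
```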
Certification
AV-Test
In November and December 2022, Surfshark Antivirus for Windows (versions 4.4 & 4.5) received certification from AV TEST, the Independent IT-Security Institute from Magdeburg, Germany.
VB100 Virus Bulletin certificates
Surfshark Antivirus was tested by the Virus Bulletin on four different occasions: May 23, 2022; July 25, 2022; October 24, 2022; January 23, 2023.
Reviews
In February 2023, Surfshark Antivirus was certified as an efficient antivirus by the antivirus testing establishment AV-TEST, an independent antivirus evaluation organization.
Computer Bild, a German computer magazine, reviewed Surfshark Antivirus, highlighting its comprehensive protection against viruses and malware, its user-friendly interface, and its inclusion in the Surfshark One package, which also offers VPN services, private search functionality, and alert services for enhanced online security.
TechRadar in its review noted that it effectively protects devices from malware and viruses on Android and Windows with real-time protection and flexible scanning options.
PC Mag, an American computer magazine, reviewed Surfshark Antivirus, noting its significant improvements in malware protection, but also pointing out the need for further refinement in its antivirus capabilities.
References
Antivirus software
Computer security software
Windows software | Surfshark Antivirus | [
"Engineering"
] | 1,033 | [
"Cybersecurity engineering",
"Computer security software"
] |
77,318,458 | https://en.wikipedia.org/wiki/Provider-provisioned%20VPN | A provider-provisioned VPN (PPVPN) is a virtual private network (VPN) implemented by a connectivity service provider or large enterprise on a network they operate on their own, as opposed to a "customer-provisioned VPN" where the VPN is implemented by the customer who acquires the connectivity service on top of the technical specificities of the provider.
When internet service providers implement PPVPNs on their own networks, the security model of typical PPVPN protocols is weaker than that of the tunneling protocols used in customer-provisioned VPNs, especially regarding confidentiality, because data privacy may not be needed.
Provider-provisioned VPN building blocks
Depending on whether a provider-provisioned VPN (PPVPN) operates in Layer 2 (L2) or Layer 3 (L3), the building blocks described below may be L2 only, L3 only, or a combination of both. Multiprotocol Label Switching (MPLS) functionality blurs the L2–L3 identity.
generalized the following terms to cover L2 MPLS VPNs and L3 (BGP) VPNs, but they were introduced in .
Customer devices
A device that is within a customer's network and not directly connected to the service provider's network. Customer devices are not aware of the VPN.
Customer edge device
A device at the edge of the customer's network which provides access to the PPVPN. Sometimes it is just a demarcation point between provider and customer responsibility. Other providers allow customers to configure it.
Provider edge device
A device, or set of devices, at the edge of the provider network that connects to customer networks through customer edge devices and presents the provider's view of the customer site. PEs are aware of the VPNs that connect through them, and maintain VPN state.
Provider device
A device that operates inside the provider's core network and does not directly interface to any customer endpoint. It might, for example, provide routing for many provider-operated tunnels that belong to different customers' PPVPNs. While the P device is a key part of implementing PPVPNs, it is not itself VPN-aware and does not maintain VPN state. Its principal role is allowing the service provider to scale its PPVPN offerings, for example, by acting as an aggregation point for multiple PEs. P-to-P connections, in such a role, often are high-capacity optical links between major locations of providers.
User-visible PPVPN services
OSI Layer 2 services
VLAN
VLAN is a Layer 2 technique that allows for the coexistence of multiple local area network (LAN) broadcast domains interconnected via trunks using the IEEE 802.1Q trunking protocol. Other trunking protocols have been used but have become obsolete, including Inter-Switch Link (ISL), IEEE 802.10 (originally a security protocol but a subset was introduced for trunking), and ATM LAN Emulation (LANE).
Virtual Private LAN Service (VPLS)
Developed by the Institute of Electrical and Electronics Engineers, VLANs allow multiple tagged LANs to share common trunking. VLANs frequently comprise only customer-owned facilities. Whereas VPLS as described in the above section (OSI Layer 1 services) supports emulation of both point-to-point and point-to-multipoint topologies, the method discussed here extends Layer 2 technologies such as 802.1d and 802.1q LAN trunking to run over transports such as metro Ethernet.
As used in this context, a VPLS is a Layer 2 PPVPN, emulating the full functionality of a traditional LAN. From a user standpoint, a VPLS makes it possible to interconnect several LAN segments in a way that is transparent to the user, making the separate LAN segments behave as one single LAN.
In a VPLS, the provider network emulates a learning bridge, which may include VLAN service optionally.
Pseudo-wire (PW)
PW is similar to VPLS but can provide different L2 protocols at both ends. Typically, its interface is a WAN protocol such as Asynchronous Transfer Mode or Frame Relay. In contrast, when aiming to provide the appearance of a LAN contiguous between two or more locations, the Virtual Private LAN service or IPLS would be appropriate.
Ethernet-over-IP tunneling
EtherIP () is an Ethernet-over-IP tunneling protocol specification. EtherIP has only a packet encapsulation mechanism. It has no confidentiality or message integrity protection. EtherIP was introduced in the FreeBSD network stack and the SoftEther VPN server program.
IP-only LAN-like service (IPLS)
A subset of VPLS, the CE devices must have Layer 3 capabilities; the IPLS presents packets rather than frames. It may support IPv4 or IPv6.
Ethernet virtual private network (EVPN)
Ethernet VPN (EVPN) is an advanced solution for providing Ethernet services over IP-MPLS networks. In contrast to the VPLS architectures, EVPN enables control-plane-based MAC (and MAC,IP) learning in the network. PEs participating in the EVPN instances learn the customer's MAC (MAC,IP) routes in control-plane using MP-BGP protocol. Control-plane MAC learning brings a number of benefits that allow EVPN to address the VPLS shortcomings, including support for multi-homing with per-flow load balancing and avoidance of unnecessary flooding over the MPLS core network to multiple PEs participating in the P2MP/MP2MP L2VPN (in the occurrence, for instance, of ARP query). It is defined .
OSI Layer 3 PPVPN architectures
This section discusses the main architectures for PPVPNs, one where the PE disambiguates duplicate addresses in a single routing instance, and the other, virtual router, in which the PE contains a virtual router instance per VPN. The former approach, and its variants, have gained the most attention.
One of the challenges of PPVPNs involves different customers using the same address space, especially the IPv4 private address space. The provider must be able to disambiguate overlapping addresses in the multiple customers' PPVPNs.
BGP/MPLS PPVPN
In the method defined by , BGP extensions advertise routes in the IPv4 VPN address family, which are in the form of 12-byte strings, beginning with an 8-byte route distinguisher (RD) and ending with a 4-byte IPv4 address. RDs disambiguate otherwise duplicate addresses in the same PE.
PEs understand the topology of each VPN, which is interconnected with MPLS tunnels directly or via P routers. In MPLS terminology, the P routers are label switch routers without awareness of VPNs.
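As a hedged illustration of the address format described above, the snippet below packs a type-0 route distinguisher (2-byte type field, 2-byte ASN, 4-byte assigned number) in front of a 4-byte IPv4 address to form the 12-byte VPN-IPv4 string. The ASN and prefix values are invented for the example.

```python
# Sketch of the 12-byte VPN-IPv4 address described above: an 8-byte route
# distinguisher (here the "type 0" ASN:number form) followed by a 4-byte
# IPv4 address. The ASN and prefix values are illustrative only.

import ipaddress
import struct

def vpn_ipv4_address(asn: int, assigned: int, prefix: str) -> bytes:
    rd = struct.pack("!HHI", 0, asn, assigned)   # type 0 RD: 2 + 2 + 4 bytes
    ip = ipaddress.IPv4Address(prefix).packed    # 4-byte IPv4 address
    return rd + ip                               # 12 bytes total

if __name__ == "__main__":
    addr = vpn_ipv4_address(asn=65000, assigned=100, prefix="10.1.1.0")
    print(len(addr), addr.hex())  # prints 12 and the hex-encoded 12-byte string
```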
Virtual router PPVPN
The virtual router architecture, as opposed to BGP/MPLS techniques, requires no modification to existing routing protocols such as BGP. By the provisioning of logically independent routing domains, the customer operating a VPN is completely responsible for the address space. In the various MPLS tunnels, the different PPVPNs are disambiguated by their label but do not need routing distinguishers.
Unencrypted tunnels
Some virtual networks use tunneling protocols without encryption to protect the privacy of data. While VPNs often provide security, an unencrypted overlay network does not fit within the secure or trusted categorization. For example, a tunnel set up between two hosts with Generic Routing Encapsulation (GRE) is a virtual private network but is neither secure nor trusted.
Native plaintext tunneling protocols include Layer 2 Tunneling Protocol (L2TP) when it is set up without IPsec and Point-to-Point Tunneling Protocol (PPTP) or Microsoft Point-to-Point Encryption (MPPE).
See also
Dynamic Multipoint Virtual Private Network
Ethernet VPN
Virtual Private LAN Service
References
External Links
Network architecture | Provider-provisioned VPN | [
"Engineering"
] | 1,693 | [
"Network architecture",
"Computer networks engineering"
] |
77,318,529 | https://en.wikipedia.org/wiki/Anchor%20losses | Anchor losses are a type of damping commonly highlighted in micro-resonators. They refer to the phenomenon where energy is dissipated as mechanical waves from the resonator attenuate into the substrate.
Introduction
In physical systems, damping is the loss of energy of an oscillating system by dissipation. In the field of micro-electro-mechanicals, the damping is usually measured by a dimensionless parameter Q factor (Quality factor). A higher Q factor indicates lower damping and reduced energy dissipation, which is desirable for micro-resonators as it leads to lower energy consumption, better accuracy and efficiency, and reduced noise.
Several factors contribute to the damping of micro-electro-mechanical resonators, including fluid damping and solid damping. Anchor losses are a type of solid damping observed in resonators operating in various environments. When a resonator is fixed to a substrate, either directly or via other structures such as tethers, mechanical waves propagate into the substrate through these connections. The wave traveling through a perfectly elastic solid would have a constant energy and an isolated perfectly elastic solid once set into vibration would continue to vibrate indefinitely. Actual materials do not show such behavior and dissipation will happen due to some imperfection of elasticity within the body. In typical micro-resonators, the substrate dimensions are significantly larger than those of the resonator itself. Consequently, it can be approximated that all waves entering the substrate will attenuate without reflecting back to the resonator. In other words, the energy carried by the waves will dissipate, leading to damping. This phenomenon is referred to as anchor losses.
Estimation of anchor losses
Analytical estimation
Standard theories of structural mechanics permit the expression of concentrated forces and couples exerted by the structure on the support. These generally include a constant component (due, for instance, to pre-stresses or initial deformation) and a sinusoidally varying contribution. Some researchers have investigated some simple geometries following this idea, and one example is the anchor losses of a cantilever beam connected to a 3-D semi-infinite region:
where L is the length of the beam, H is the in-plane (curvature plane) thickness, W is the out-of-plane thickness, C is a constant depending on the Poisson's coefficient, with C = 3.45 for ν = 0.25, C = 3.23 for ν = 0.3, C = 3.175 for ν = 0.33.
Numerical estimation
Due to the complexity of geometries and the anisotropy or inhomogeneities of materials, usually it is difficult to use analytical method to estimate the anchor losses of some devices. Numerical methods are more widely applied for this issue. An artificial boundary or an artificial absorbing layer is applied to the numerical model to prevent the wave reflection. One such method is the perfectly matched layer, initially developed for electromagnetic wave transmission and later adapted for solid mechanics. Perfectly matched layers act as special elements where wave attenuation occurs through a complex coordinate transformation, ensuring all waves entering the layer are absorbed, thus simulating anchor losses.
To determine the Q factor from a Finite Element Method model with perfectly matched layers, two common approaches are used:
Using the complex eigenfrequency $\omega = \omega_r + i\,\omega_i$ obtained from a modal analysis:
$Q = \dfrac{\omega_r}{2\,\omega_i}$
where $\omega_r$ and $\omega_i$ are the real and imaginary parts of the complex eigenfrequency.
Generating the frequency response from a frequency domain analysis and applying methods such as the half-bandwidth method to calculate the Q factor.
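The two estimates can be illustrated with a short numerical sketch; the complex eigenfrequency and the synthetic response curve below are invented values chosen only to demonstrate the formulas, not results from any cited model.

```python
# Sketch of the two Q-factor estimates mentioned above: (i) from a complex
# eigenfrequency omega = omega_r + i*omega_i, Q = omega_r / (2*omega_i), and
# (ii) from a frequency-response curve via the half-power (half-bandwidth)
# method, Q = f0 / (f2 - f1). All numbers below are illustrative.

import numpy as np

def q_from_eigenfrequency(omega: complex) -> float:
    return omega.real / (2.0 * omega.imag)

def q_half_bandwidth(freqs: np.ndarray, amplitude: np.ndarray) -> float:
    i_peak = int(np.argmax(amplitude))
    half = amplitude[i_peak] / np.sqrt(2.0)          # -3 dB level
    above = np.where(amplitude >= half)[0]
    f1, f2 = freqs[above[0]], freqs[above[-1]]       # crude band-edge estimate
    return freqs[i_peak] / (f2 - f1)

if __name__ == "__main__":
    print(q_from_eigenfrequency(2 * np.pi * (1.0e6 + 25j)))   # ~2e4
    f = np.linspace(0.99e6, 1.01e6, 2001)
    f0, Q_true = 1.0e6, 5.0e3
    amp = 1.0 / np.sqrt((1 - (f / f0) ** 2) ** 2 + (f / (f0 * Q_true)) ** 2)
    print(q_half_bandwidth(f, amp))                            # close to 5000
```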
Methods to mitigate anchor losses
Anchor losses are highly dependent on the geometry of the resonator. How the resonator is anchored and the size of the tether have a strong effect on the anchor losses. Some common methods to mitigate anchor losses are summarized as follows.
Anchor at nodal points
A common method is to fix the resonator at the nodal points, where the motion amplitude is minimum. From the definitions of anchor losses, now the wave magnitude into the substrate will be minimized and less energy will dissipate. However, this method may not apply to certain resonators, in which the nodal points are not around the resonator edges, causing difficulty in tether designs.
Quater wavelength tethers
A quarter-wavelength tether is an effective approach to minimize the energy loss through the supporting tethers. Similar to the theory used for transmission lines, the quarter-wavelength tether is assumed to give the best acoustic isolation, since complete in-phase reflection occurs when the tether length equals a quarter of the acoustic wavelength, or λ/4. Therefore, there is hardly any energy dissipation into the substrate through the tethers. However, the quarter-wavelength design results in extremely long tether structures, usually tens to hundreds of micrometers, which runs counter to miniaturization and leads to a decrease in the mechanical stability of the devices.
Material-mismatched support
The resonator structure and the anchoring stem are made of different materials. The acoustic impedance mismatch between the two suppresses energy transfer from the resonator to the stem, thus reducing anchor losses and allowing a high Q factor.
Acoustic reflection cavity
The basic mechanism is to reflect back a portion of the elastic waves at the anchor boundary due to the discontinuity in the acoustic impedance caused by the acoustic cavity (the etching trenches).
Phononic crystal tethers and metamaterials
Phononic crystal tethers are a promising way to restrain acoustic wave propagation in the supporting tethers, since they can open complete band gaps in which the transmission of elastic waves is prohibited. Thus, the vibration energy is retained in the resonator body, reducing the anchor losses into the substrate. Besides phononic crystal tethers, other kinds of metamaterial can be applied to the anchor and surrounding regions to prohibit wave transmission. A key drawback of this method is the challenge it poses to the fabrication process.
Optimized anchor geometry
Anchor losses are highly sensitive to the geometry of the anchors. Features such as fillets, curvature, sidewall inclination, and other detailed geometric aspects can affect anchor losses. By carefully optimizing these geometric configurations, anchor losses can be significantly reduced.
See also
Dynamical systems theory
Finite element method
Finite-difference time-domain method
Micro-Electro-Mechanical Systems
Resonator
Infinite element method
References
External links
How to Model Different Types of Damping in COMSOL Multiphysics®
Effect of Perfectly Matched Layers (PML) in FDTD Simulations
Notes on Perfectly Matched Layers (PMLs)
Resonators
Dimensionless numbers of mechanics
Engineering ratios
Ordinary differential equations
Mathematical analysis
Classical mechanics | Anchor losses | [
"Physics",
"Mathematics",
"Engineering"
] | 1,372 | [
"Mathematical analysis",
"Metrics",
"Engineering ratios",
"Quantity",
"Classical mechanics",
"Mechanics",
"Dimensionless numbers of mechanics"
] |
78,728,486 | https://en.wikipedia.org/wiki/CMF%20by%20Nothing | CMF by Nothing is a design-focused technology sub-brand launched by Nothing in 2023. The brand specializes in creating consumer technology products with an emphasis on transparent design aesthetics and sustainable materials.
History
CMF by Nothing was established as a sub-brand of Nothing, a consumer technology company founded by Carl Pei in 2020. The brand aims to make design-focused technology accessible to a broader audience while maintaining Nothing's distinctive transparent design language.
Brand philosophy
The brand's name "CMF" stands for Color, Material, and Finish, reflecting its focus on design fundamentals. CMF by Nothing aims to democratize good design by offering products at more accessible price points while maintaining high design standards. The brand emphasizes the use of sustainable materials and transparent design elements, aligning with Nothing's core design principles.
Products
CMF by Nothing launched its initial product lineup in 2023, which included:
CMF Buds Pro: Wireless earbuds featuring Active Noise Cancellation.
CMF Watch Pro: A smartwatch with health and fitness tracking capabilities.
CMF Power 65W GaN: A compact charging adapter using Gallium Nitride technology.
All products follow a consistent design language emphasizing clean lines, bold colors, and transparent elements while maintaining affordability.
Design approach
The brand's design philosophy centers on three core elements:
Color: Using bold, distinctive color combinations.
Material: Focusing on quality and sustainable material choices.
Finish: Emphasizing attention to detail in the final product appearance.
This approach aims to create products that stand out in the market while remaining accessible to a broader consumer base.
Corporate structure
CMF by Nothing operates as a sub-brand of Nothing, sharing the parent company's resources and design expertise while maintaining its own distinct identity and product line. The brand benefits from Nothing's established supply chain and manufacturing partnerships while focusing on a different market segment.
References
External links
Official website
Consumer electronics brands
Technology companies established in 2023
2023 establishments in Hong Kong
Consumer electronics
Electronics companies of Hong Kong
Design companies
Industrial design
Hong Kong brands
Wearable devices
Mobile phone manufacturers
Audio equipment companies | CMF by Nothing | [
"Engineering"
] | 419 | [
"Industrial design",
"Design engineering",
"Design companies",
"Engineering companies",
"Design"
] |
78,730,630 | https://en.wikipedia.org/wiki/Artificialization | The artificialization of soil, an environment, or natural or semi-natural habitat is the loss of its qualities: its naturalness, a quality that includes a self-sustaining capacity to harbor certain biodiversity, natural cycles (carbon, nitrogen, water, oxygen cycles, etc.), and biogeochemical qualities (carbon sink, for example). It is generally accompanied by a loss of self-healing capacity on the part of the environment (reduced ecological resilience).
Artificialization is often summed up as the disappearance of natural spaces under concrete or bitumen during the construction of buildings (apartment blocks, hotels, houses, shops, industries, parking lots) or transport networks. While soil sealing is a large part of land artificialisation, artificialization more generally takes place whenever natural environments are heavily transformed by humans. For example, leisure and sports facilities (green spaces, golf courses, sports fields, motocross courses, winter sports resorts, etc.), canals, road embankments, and artificial lighting can each lead to ecological traps and other impacts, animal mortality on roads, light pollution, etc., and can also lead to the creation of new habitats. Areas developed for military purposes (military testing grounds, underground tunnels, fortifications, glacis, no-man's-lands, etc.) could also be mentioned.
In Europe in 2015, the surface area of sealed land exceeded one million square kilometers, i.e. 2.3% of the European Union's surface area and 200 m² per inhabitant (over 50,000 km2 and 9.4% of the territory in France). On average, 165 ha, or 1,650,000 m² of natural environments and farmland, are destroyed every day in France and replaced by roads, housing, and business parks, as part of the urban sprawl phenomenon. Between 2005 and 2015, this represented almost 6,000 km², the size of a département in ten years. One of the aims of the French Green and Blue Network (TVB or Schéma régional de cohérence écologique) is to limit this phenomenon and mitigate its consequences. Since 2018, the goal of Zero Net Artificialization has been a major roadmap in the fight against artificialization.
Examples of artificialization
Areas affected by artificialization include:
Rural areas and agricultural zones (especially when exposed to intensive, industrial agriculture). This also applies to grasslands which, when enriched with nutrients or sown with plowing, no longer harbor much of the biodiversity of a natural grassland;
Certain forest or silvicultural environments where monocultures are practiced (poplar plantations, rubber, eucalyptus, oil palm, short rotation coppice (SRC) or willow, etc.), or forests planted or managed largely on the principle of artificial regeneration and heavily fragmented by roads, tracks, and paths;
Watercourses canalized and fragmented by large dams, cultivated wetlands (e.g. rice paddies), drained wetlands or wetland basins, polders, etc.;
Coastlines and their estuaries, increasingly developed for tourism, industry, and transport: discharge into the sea, construction of dykes, channels, ports, underwater quarries, etc.
Extent and progression of the phenomenon
As far as geomorphological and subsoil effects are concerned, the artificialization of the environment began modestly in prehistoric times: clearing by fire, occupation, and development of caves, increasingly sedentary and built-up human habitat, digging of shafts in the subsoil for flint mining, then seed silos and shafts or galleries for the exploitation of metal ore seams, from the Bronze Age onwards.
This was followed by larger-scale developments, often designed for the intensive exploitation of water from the great rivers (Nile, Tigris, Euphrates...). In Europe, beaver dams (and the beavers themselves, hunted for their meat and fur), which maintained water reserves and open riparian habitats, were destroyed. At the same time, the construction of fords, then bridges, dykes, mills, and systems for impounding and regulating watercourses and for drainage spread, culminating in major episodes of polderization, etc. Meanwhile, urbanization expanded, supported by networks of roads and trading centers (e.g. the Silk Road). Peri-urban waste dumps appeared, gradually buried beneath the urban sprawl. Cemeteries, monuments, and fortifications (e.g. the Great Wall of China) were accompanied by vast clearing, leveling, and earthworks (terraces, embankments, sunken paths, low walls).
During the Anthropocene, the development of coal mines, the oil industry, railroads, automobiles, and tractors led to more intensive farming. This accelerated the anthropization of the landscape and subsoil, colonized by millions of kilometers of cables, pipes, and sewers, including in the colonies of wealthy countries on every continent. The network of roads, freeways, and railroads is expanding, as are industrial, commercial, sports, and sometimes military facilities. At the end of the twentieth century, the pace of artificialization accelerated even further and is visible on satellite imagery.
Some areas are particularly hard hit: a large part of the coastline and estuaries of many countries has been artificialized by the construction of seaside resorts, coastal road networks, and port facilities. Cities and their outskirts are also heavily affected, as are agricultural environments generally and the ancient forests of temperate countries and, subsequently, of most tropical countries (with the exception of a few protected massifs).
Artificialization is spreading to developing countries, with peri-urbanization particularly marked in the vicinity of megacities and urban metropolises (in France, for example, around the Île-de-France and Toulouse). In wealthy countries, it is often linked to the popularity of the single-family home, which is also reflected in urban sprawl and peri-urbanization.
In France
The French urban planning code defines artificialisation as “the lasting alteration of all or part of a soil's ecological functions, in particular its biological, hydric and climatic functions, as well as its agronomic potential by its occupation or use”.
In 2006, 8.3% of mainland France was affected by land artificialisation, a figure that rose to 9.4% in 2015. In fifty years, seven million hectares of land have been consumed for housing (40%), the economy (30%: businesses, warehouses, shops), and transport infrastructure (30%). Since 2009, 90% of land artificialisation has been at the expense of fertile soils.
According to Corine Land Cover statistics on land use in France, the French region with the least amount of artificial land is Corsica, with 2.1% of its surface area, while Île-de-France tops the list with 21.6%.
Artificialization is highly polarized at the national level. A study by the Cerema (the French Environment and Spatial Planning Agency) reveals a high level of artificialisation on the coast and around medium-sized towns and cities. In July 2019, annual data on a municipal scale for the period 2009-2017 were published, and have been updated annually since. In 2015 and 2016, it was estimated that the phenomenon had "stabilized" (at 9.3% of mainland France) thanks to the 2008 crisis, which slowed land artificialisation (to +0.8% per year). Recent data, however, confirm a resumption of the phenomenon since 2016: after a period of decline between 2011 and 2016, artificialisation is again accelerating, reaching 23,454 ha between 2016 and 2017. In 2022, the Cerema dashboard showed that 21,079 ha of natural and agricultural land were consumed in France in 2021, almost 1,200 ha more than in 2020, but almost 1,300 ha less than in 2019. Despite this upturn, the Senate majority is calling for a moratorium on the application of the 2021 Climate and Resilience Act, tabling bills to extend deadlines and pointing to the lack of financial resources dedicated to achieving the goal of zero net artificialization. In June 2022, the Association des maires de France (AMF) lodged an appeal with the Conseil d'État against two decrees implementing this law. Christophe Béchu, Minister of Ecological Transition and Territorial Cohesion, has said he is open to rewriting some of the decrees.
In 2009, according to the Institut français de l'environnement (IFEN), land artificialisation increased by 60,000 ha per year (or 6,000 km² in ten years, equivalent to the size of the Seine-et-Marne département). The 885 coastal municipalities are particularly hard hit: despite the "natural" and rural areas spared thanks to the Conservatoire du littoral and the Littoral law, within 500 m of the sea the rate of artificialization (28.2% of the territory on average) is 5.5 times higher than the average for metropolitan France. The coastlines of Nord-Pas-de-Calais, Pays de la Loire, Languedoc-Roussillon, and PACA are the most artificialized by construction, while those of Normandy, Brittany, and Poitou-Charentes are artificialized by agriculture. Coastal forests and semi-natural areas dominate the landscape only in Aquitaine (with the Atlantic coastal dune forest) and Corsica. Despite the risk of marine submersion induced by rising sea levels, this artificialization of the coast is steadily increasing:
From 2000 to 2006, almost 10,000 ha were artificialized on the 10 km strip of coastline alone in mainland France;
From 2000 to 2006, development was highest in the strip between 500 and 2,000 m from the shoreline (on 0.42% of the territory), i.e. 2.8 times the average for mainland France;
On the Channel-North Sea coast, artificial development is more evenly distributed from the coastline to two kilometers inland, before decreasing;
In the Atlantic, on the other hand, artificial development has slowed down along the coastline, increasing between 500 and 1,000 m, before gradually decreasing inland;
On the Mediterranean coast, from 2000 to 2006, artificial development was almost uniform from the coastline to 10 km from the sea.
In China
According to Jean-François Doulet, the urbanized surface area in China almost quadrupled between the early 1980s and 2012. Annual artificialization was estimated in 2012 to be equivalent to twice the surface area of the Île-de-France region, and over the following fifteen years it was projected to cover an area equivalent to the then-urbanized area of Europe.
Causes
Urbanization leads to the creation of suburban areas, housing estates, and towns. Support for commercial activity and tax competition between communes and agglomerations to attract companies lead to the construction of business parks (commercial zones, industrial zones, etc.) and of huge parking lots for their users. The growing mobility of the population has led to the construction and expansion of transport networks.
Consequences
Impact on climate
In terms of global warming, artificial surfaces, usually dark in color, lower the albedo: they absorb solar radiation and re-emit it as long-wave infrared radiation. Part of this radiation is absorbed by greenhouse gases in the atmosphere and re-emitted back toward the surface, contributing to global warming.
This artificialization may have an impact on the local climate by increasing land surface temperatures (LST), as satellite measurements of surface temperature show.
According to the IPCC's Sixth Assessment Report, reducing vegetation cover affects the local and adjacent climate through the subsequent disruption of the water cycle. Climate balances are complex and closely linked to the properties of physical surfaces, which also have biological functions. In particular, living organisms have developed strategies for capturing and storing water. Examples include the role of Pseudomonas syringae as a nucleating agent, the effect of earthworm galleries on water infiltration in soils, and the effect of glomalin production by fungi on soil compaction.
Plants also regulate atmospheric water. They are capable of producing aerosols to initiate condensation, or releasing water vapor so that the ambient air reaches the dew point and nucleates rain droplets.
Thus, the disruption of an ecosystem, which is often at equilibrium, leads to a less efficient system, which will tend to heat up more, as evaporating water allows cooling. This is illustrated by infrared satellite observations of a deforested area in Brazil, around Jaru, compared with anthropized areas and pristine surfaces. In the Jaru area, less heat is evacuated by evapotranspiration, as the albedo has increased due to changes in land use.
Warmer soils then disrupt local precipitation: for example, rainfall may evaporate as it reaches the ground. Volumes of water lost through runoff (or conveyed away by human infrastructure) are not evapotranspired and do not reach areas further inland.
Cities, through the heat islands they form, modify the volume and intensity of precipitation, to the point that the location of certain cities can be read on a precipitation map. Storm fronts can be observed to cease activity close to cities while continuing on either side and resuming further on.
Conversely, reducing the gap between hot and cold spots can significantly improve the local climate, as demonstrated by an observational and modeling study of climate conditions in the Corn Belt region of the USA, an area named for the high density of corn grown there. Temperatures over the period 1970-2020 trended downward relative to the period 1910-1950 (−0.35 °C), while warming is observable around the area, and through related climatic mechanisms precipitation has increased within it. A comparison between the results obtained by global modeling and those obtained by a smaller-scale model demonstrates the relevance of implementing models that are correctly parameterized and truly reflect surface properties (in this case, the right percentage of maize).
Impact on biodiversity and ecological functionality
From the point of view of environmental ethics, artificialization raises the dual question of the decline in biodiversity and of the relationship between humans and nature: increasingly urbanized populations appear to be drifting away from nature and losing reference points that their ancestors held for thousands of years, which could affect chronobiological rhythms, psychomotor development, and even the construction of the psyche. In addition, numerous studies have shown that the artificialization of natural environments leads to a loss of biodiversity and changes in the functional composition of biotopes, a loss that is associated with a reduction in the productivity and stability of ecosystems.
From the point of view of ecology and landscape ecology, the artificialization of landscapes, environments, and biotopes is one of the factors contributing to the ecological fragmentation of natural habitats and the qualitative degradation of landscapes. It is one of the factors used to calculate the eco-potentiality of a plot, region, or landscape element. It is also a factor in homogenization (genetic, taxonomic, and functional), which is highly unfavorable to the maintenance of biodiversity. By favoring ubiquitous species to the detriment of much more varied specialist species, anthropogenic homogenization of life (Biotic homogenization) has serious immediate and future consequences for ecological and evolutionary processes. Researchers are calling for a better understanding of the implications of this homogenization for conservation, and for the rapid promotion of proactive, restorative, and adaptive management, to better control the human component of the “anthropic blender” that human activities have become for the planet's biota.
Some artificial environments (such as certain quarries and slag heaps), because they have received neither fertilizers nor pesticides, may nevertheless be home to processes characterized by a high degree of naturalness. The term semi-natural environment is used to designate environments that have been artificially altered but can still act as substitute habitat for some of the species of a given biogeographical zone (e.g. meadows, hedgerows, and certain extensively managed forests, such as the "Pro Silva" type).
It also concerns the nocturnal environment, disturbed by artificial lighting (light pollution).
Impact on natural hazards
From a planner's point of view, the artificialization of an area increases both the frequency and the severity of certain natural disasters and risks (floods, forest fires, mudslides, mining subsidence, cave-ins (e.g. of catiches), zoonotic epidemics, etc.), while reducing the environment's resilience in the face of these disturbances.
Impact on hydrology and soil fertility
From an agronomist's point of view, soil artificialization leads to a loss of humus and carbon, a reduced water retention capacity, and, consequently, a loss of fertility, aggravating the phenomena of erosion and soil degradation. In the long term, this leads to a loss of natural and agricultural resources, in addition to the loss of arable land to built-up or waterproofed areas (although some greenhouse crops are grown on artificial soil, or even without any soil at all, using hydroponics).
Observing, assessing, and combating the phenomenon
Quantifying artificialization involves comparing land use data between dates. The fight against artificialization requires knowledge and measurement of the phenomenon, urban renewal, urban densification, the development of green and blue grids, and the application of the ERC principle (avoid–reduce–compensate: avoid building, reduce the surface area to be built on, and compensate, for example by planting trees).
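As a minimal illustration of such a comparison (the category codes, grid, and cell size below are hypothetical; real inventories such as Corine Land Cover or the Cerema land-use files have their own nomenclatures and resolutions), the newly artificialized share between two land-cover snapshots can be computed as follows:

```python
import numpy as np

# Hypothetical land-cover grids for two dates; value 1 = artificialized, 0 = not.
cover_2009 = np.array([[0, 0, 1],
                       [0, 0, 0],
                       [1, 0, 0]])
cover_2017 = np.array([[0, 1, 1],
                       [0, 1, 0],
                       [1, 0, 0]])

# Cells that were not artificialized at the first date but are at the second.
newly_artificialized = (cover_2017 == 1) & (cover_2009 == 0)
cell_area_ha = 25.0  # e.g. a 500 m x 500 m cell; purely illustrative

print("newly artificialized area (ha):", newly_artificialized.sum() * cell_area_ha)
print("share of territory:", newly_artificialized.mean())
```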
In France
Before 2018
The law of December 30, 2006, on the preservation of water resources and aquatic environments allows municipalities to introduce a tax on impervious surfaces.
Article 7 of the 2009 Grenelle de l'environnement implementation bill, known as “Grenelle I”, calls for:
“a study on tax reform and possible incentives to limit the spread of artificial land”, within six months of the law's publication;
within a year of the law's publication, incorporate the following objective into town planning law: “to combat the decline in agricultural and natural land, with local authorities setting quantified targets in this area once space consumption indicators have been defined”.
Agricultural land is the hardest hit, which is why the French Agricultural Modernization Act of July 27, 2010, aims to halve the rate of consumption of agricultural land over ten years (2010-2020), aided by the Departmental Commissions for the Consumption of Agricultural Spaces (CDCEA) it sets up. However, the 2012 Environmental Conference was less ambitious, aiming only to slow down the artificialization of land (to achieve stability by 2025).
Some regional climate-air-energy plans (SRCAE) include quantified targets, such as that of the Nord-Pas-de-Calais region (a threefold reduction in the rate of land development). In parallel with its Trame Verte et Bleue, in 2006 the region experimented with a regional planning directive aimed at combating the artificialization of the territory through peri-urbanization.
An October 17, 2013 report by the French National Audit Office (Cour des Comptes) found that the tools available in France to combat the artificialization of land are "numerous", but "imprecise" and too dispersed: the Court notes that it has taken too long to set up the National Observatory on the Consumption of Agricultural Land (ONCEA), and calls for improvements to the statistics measuring trends in land artificialisation (they take poor account of conversions of natural and forested land, for example), and for existing measures to protect natural or agricultural land to be made more coherent or better used. The Court also criticizes the lack of enforceability of a number of measures (SRADDT, Directive régionale d'aménagement (DRA), PAEN (périmètre de protection et de mise en valeur des espaces agricoles et naturels), ZAP (Zone agricole protégée), a little-used tool), and calls for the transfer of urban planning powers to inter-municipalities to reduce the "proximity between elected representatives and voters, the sellers of farmland". It also suggests ways of making taxation more conducive to limiting land artificialization.
After 2018
On July 4, 2018, the French government released the Biodiversity Plan, which aims to achieve "zero net land take" (ZAN) and to “[publish] an annual report on land consumption and [provide] transparent and comparable data at all territorial levels for regions and citizens.” On July 1, 2019, a portal dedicated to artificial land use was launched to raise awareness about the phenomenon. This platform also makes annual and municipal data on French territory accessible, enabling external stakeholders to better understand the issue. In the same year, a National Observatory of Soil Artificialization was established.
The Minister for Ecological and Inclusive Transition commissioned a foresight mission to France Stratégie to outline scenarios for achieving the ZAN target and to identify ways to protect natural, agricultural, and forested areas. The resulting report, authored by biologist Julien Fosse, was made public on July 23, 2019, and presented to Emmanuelle Wargon and Julien Denormandie. The public think tank proposed measures to achieve zero net land take by 2030, focusing on reducing gross artificialization through higher-density new constructions and restoring abandoned artificialized lands.
In 2021, the Climate and Resilience Act, under its section titled "Housing," set a goal to halve the rate of land take over the next decade compared to the previous one, in order to achieve zero net artificialization by 2050. The law also prohibits, in principle, new retail developments that would artificialize land, with possible exemptions for sales areas under 10,000 square meters.
Elsewhere in Europe
Germany set an ambitious target to reduce land take by two-thirds by 2020.
In Switzerland, 100,000 citizens signed a petition calling for a 20-year moratorium on land artificialization (from 2012 to 2032), leading to a referendum in 2013.
In contrast, England reformed its urban planning laws in 2012 to relax regulations. George Osborne, then the British Chancellor of the Exchequer, justified the reform by citing a shortage of buildable land.
In Flanders, Belgium, it was estimated in 2006 that over 20 years, residential developments had consumed about ten hectares per day or roughly 1 square meter per second. The developed area increased by 46% in two decades, resulting in a quarter of Flanders being urbanized (one-fifth of Belgium was artificialized by 2006).
At the European Union Level
Following directives on water and air, the proposed framework directive for soil protection aimed to address soil degradation and erosion across Europe. The European Commission introduced the directive in September 2006, and it was adopted in the first reading by the European Parliament on November 14, 2007. However, it was blocked the following year by five countries—France, Germany, the United Kingdom, Austria, and the Netherlands—preventing a qualified majority. The directive was ultimately abandoned in 2014.
See also
Notes
References
Bibliography
Landscape architecture
Geological processes | Artificialization | [
"Engineering"
] | 4,795 | [
"Landscape architecture",
"Architecture"
] |
78,731,023 | https://en.wikipedia.org/wiki/Spring%20plunger | A spring plunger or detent spring is a spring-loaded mechanical part used for indexing, positioning, and securing of objects, as well as for making objects possible to disassemble in normal use without loosing parts. The spring force keeps the pin in position during normal use.
Typically, it is a machine part consisting of a hollow cylinder with an internal compression spring acting on a pin. The pin may, for example, be a rod with a cylindrical or rounded tip (broadly categorized as a spring plunger), or the spring may act against a ball instead (ball plunger).
Manufacture
Spring plungers can be supplied as a complete unit that is mounted into the workpiece by screwing into threads, or in a pluggable version that is pressed into the workpiece. The sleeve is usually made of free machining steel.
Alternatively, it can be made directly into the workpiece by drilling and tapping a hole, then inserting a pin (or ball), spring, and finally a set screw.
The spring force is dimensioned for the intended use. In spring plungers with a screw, the spring pressure can be adjustable within a certain working range.
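For a linear spring this can be estimated with Hooke's law (a simplified model; k denotes the spring rate, x₀ the preload compression set by the screw in adjustable types, and x the plunger travel):

```latex
F(x) \approx k\,(x_0 + x), \qquad
F_{\min} \approx k\,x_0, \qquad
F_{\max} \approx k\,(x_0 + x_{\max}).
```

Increasing the preload with the adjustment screw therefore shifts the entire force range upward without changing the spring itself.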
Different plungers may require different tools for installation, such as a hex key, socket wrench, or flathead screwdriver. Blind hole mounting variants may have tool slots on the same side as the pin.
Materials
At high temperatures or when the parts are exposed to aggressive chemicals, plastic balls or balls made of corrosion-resistant materials such as silicon nitride can be used. The threaded sleeve is then typically made of stainless steel instead of carbon steel.
Use
Spring-loaded plungers have a wide range of uses.
Some spring plungers are designed to be handled only when assembling or occasionally adjusting equipment. Other spring plungers are designed to be routinely manipulated (for example, to adjust the angle of an exercise bench), and may have an integrated operating lever that is pushed in (push pin plungers) or a lever that is pulled out to manipulate the object (pull pin plunger).
Similar principles are used in spring-loaded valves and valve stems (including safety valves), and in automatic ejectors on firearms.
Mechanics
Spring plungers can be used, for example, to lock levers in different positions. Spring-loaded plungers with anti-twist protection enable screwless fastening.
Electronics
A pogo pin is a type of spring-loaded electrical connector mechanism widely used in modern electronics. Compared to other electrical connectors, pogo pins have better durability and are more resistant to mechanical shock and vibration.
Archery
In modern archery, an archery plunger cushions the arrow's lateral deflection as it is fired from the bow and allows the arrow's position on the arrow rest to be adjusted. By tuning the plunger's stiffness and position, the arrow can be made to fly as straight as possible.
See also
Spring pin, mechanical fastener that secures the position of two or more parts relative to each other
Detent
Ball detent
References
Fasteners
Mechanical fasteners | Spring plunger | [
"Engineering"
] | 639 | [
"Construction",
"Mechanical fasteners",
"Fasteners",
"Mechanical engineering"
] |
78,731,233 | https://en.wikipedia.org/wiki/Taniyama%27s%20problems | Taniyama's problems are a set of 36 mathematical problems posed by Japanese mathematician Yutaka Taniyama in 1955. The problems primarily focused on algebraic geometry, number theory, and the connections between modular forms and elliptic curves.
History
In the post-World War II mathematics of the 1950s, there was renewed interest in the theory of modular curves due to the work of Taniyama and Goro Shimura. During the 1955 international symposium on algebraic number theory at Tokyo and Nikkō—the first international symposium of its kind to be held in Japan, attended by mathematicians including Jean-Pierre Serre, Emil Artin, André Weil, Richard Brauer, K. G. Ramanathan, and Daniel Zelinsky—Taniyama compiled his 36 problems in a document titled "Problems of Number Theory" and distributed mimeographs of his collection to the symposium's participants. These problems would become well known in mathematical folklore. Serre later brought attention to these problems in the early 1970s.
The most famous of Taniyama's problems are his twelfth and thirteenth problems. These problems led to the formulation of the Taniyama–Shimura conjecture (now known as the modularity theorem), which states that every elliptic curve over the rational numbers is modular. This conjecture became central to modern number theory and played a crucial role in Andrew Wiles' proof of Fermat's Last Theorem in 1995.
Taniyama's problems influenced the development of modern number theory and algebraic geometry, including the Langlands program, the theory of modular forms, and the study of elliptic curves.
The problems
Taniyama's tenth problem addressed Dedekind zeta functions and Hecke L-series, and while distributed in English at the 1955 Tokyo-Nikkō conference attended by both Serre and André Weil, it was only formally published in Japanese in Taniyama's collected works.
According to Serge Lang, Taniyama's eleventh problem deals with elliptic curves with complex multiplication, but is unrelated to Taniyama's twelfth and thirteenth problems.
Taniyama's twelfth problem's significance lies in its suggestion of a deep connection between elliptic curves and modular forms. While Taniyama's original formulation was somewhat imprecise, it captured a profound insight that would later be refined into the modularity theorem. The problem specifically proposed that the L-functions of elliptic curves could be identified with those of certain modular forms, a connection that seemed surprising at the time.
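In modern notation (a standard formulation rather than Taniyama's original wording), the proposed identification is that for an elliptic curve E over the rationals there should exist a cusp form f(τ) = Σ aₙqⁿ, with q = e^{2πiτ}, such that L(s, E) = L(s, f), where the L-function of f is recovered from f by a Mellin transform:

```latex
\Lambda(s, f) \;=\; \int_0^{\infty} f(iy)\, y^{s-1}\, dy \;=\; (2\pi)^{-s}\,\Gamma(s)\, L(s, f),
\qquad L(s, f) \;=\; \sum_{n \ge 1} \frac{a_n}{n^{s}} .
```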
Fellow Japanese mathematician Goro Shimura noted that Taniyama's formulation in his twelfth problem was unclear: the proposed Mellin transform method would only work for elliptic curves over rational numbers. For curves over number fields, the situation is substantially more complex and remains unclear even at a conjectural level today.
See also
Hilbert's problems
Thurston's 24 questions
List of unsolved problems in mathematics
Wiles's proof of Fermat's Last Theorem
Notes
References
Number theory
Algebraic geometry
Mathematical problems | Taniyama's problems | [
"Mathematics"
] | 611 | [
"Discrete mathematics",
"Fields of abstract algebra",
"Algebraic geometry",
"Mathematical problems",
"Number theory"
] |
78,731,525 | https://en.wikipedia.org/wiki/Brokard%27s%20theorem | Brokard's theorem is a theorem in projective geometry. It is commonly used in Olympiad mathematics.
Statement
Brokard's theorem. The points A, B, C, and D lie in this order on a circle ω with center O. Lines AC and BD intersect at P, AB and DC intersect at Q, and AD and BC intersect at R. Then O is the orthocenter of triangle PQR. Furthermore, QR is the polar of P, PQ is the polar of R, and PR is the polar of Q with respect to ω.
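The statement can be checked numerically. The following Python sketch (illustrative only) places four points on the unit circle centered at the origin, computes P, Q, and R as line intersections, and verifies the orthocenter condition:

```python
import math

def line_intersection(a1, a2, b1, b2):
    """Intersection of line a1a2 with line b1b2 (assumed non-parallel)."""
    (x1, y1), (x2, y2) = a1, a2
    (x3, y3), (x4, y4) = b1, b2
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / d
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / d
    return (px, py)

def on_circle(deg):
    t = math.radians(deg)
    return (math.cos(t), math.sin(t))

A, B, C, D = on_circle(10), on_circle(80), on_circle(200), on_circle(300)
O = (0.0, 0.0)
P = line_intersection(A, C, B, D)   # diagonals AC and BD
Q = line_intersection(A, B, D, C)   # opposite sides AB and DC
R = line_intersection(A, D, B, C)   # opposite sides AD and BC

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def sub(u, v):
    return (u[0] - v[0], u[1] - v[1])

# O is the orthocenter of PQR iff OP is perpendicular to QR and OQ to PR.
assert abs(dot(sub(O, P), sub(Q, R))) < 1e-9
assert abs(dot(sub(O, Q), sub(P, R))) < 1e-9
print("orthocenter condition verified")
```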
See also
Orthocenter
Power of a point
References
External link
A proof without words of Brokard's theorem
Projective geometry
Theorems in geometry | Brokard's theorem | [
"Mathematics"
] | 148 | [
"Geometry",
"Geometry stubs",
"Mathematical problems",
"Theorems in geometry",
"Mathematical theorems"
] |
78,731,569 | https://en.wikipedia.org/wiki/List%20of%20Argentine%20computer%20scientists |
List
A
Martin Abadi
B
Cecilia Berdichevsky
Paula Bonta
Fabián E. Bustamante
C
Gregory Chaitin
D
Verónica Dahl
E
Sebastian Elbaum
F
Martin Farach-Colton
Eduardo Fermé
Luciana Ferrer
G
Hector Geffner
Rebeca Guber
H
Bernardo Huberman
Armando Haeberer
K
Delia Kesner
M
Alberto O. Mendelzon
N
Gonzalo Navarro
R
Gustavo Rossi
S
Manuel Sadosky
Sebastian Sardina
Gerardo Schneider
Hugo Scolnik
Guillermo Simari
W
Gabriel Wainer
Argentine
Computer scientists | List of Argentine computer scientists | [
"Technology"
] | 111 | [
"Computing-related lists",
"Lists of computer scientists"
] |
78,731,835 | https://en.wikipedia.org/wiki/Ultradistribution | In functional analysis, an ultradistribution (also called an ultra-distribution) is a generalized function that extends the concept of a distributions by allowing test functions whose Fourier transforms have compact support. They form an element of the dual space 𝒵′, where 𝒵 is the space of test functions whose Fourier transforms belong to 𝒟, the space of infinitely differentiable functions with compact support.
See also
Distribution (mathematics)
Generalized function
References
Functional analysis
Generalized functions
Mathematical analysis | Ultradistribution | [
"Mathematics"
] | 93 | [
"Mathematical analysis",
"Functions and mappings",
"Functional analysis",
"Mathematical analysis stubs",
"Mathematical objects",
"Mathematical relations"
] |
78,733,033 | https://en.wikipedia.org/wiki/Bale%20Messenger | Bale Messenger (, Yes) is an Iranian instant messaging (IM), voice-over-IP (VoIP) service, social media platform, and mobile payment app developed by the National Bank of Iran. It allows users to send text and voice messages, share images and videos, make voice and video calls, share files and user locations, send money, pay bills, use bots and services.
Because of its integration of banking and messaging features, some of its functionalities are unique. Bale's client application runs on Android and can also be accessed from computers and iOS devices via a web app. The service requires a mobile telephone number for registration.
History
Bale was launched in December 2016 with the goal of providing a secure and versatile messaging service that integrates financial transactions. The app was developed to address the growing demand for unified services, combining communication, payments, and other services within a single platform. Bale is widely used in Iran and surrounding regions, offering both communication and financial services. It was removed from Google Play in 2022 along with many other Iranian platforms.
Features
Bale integrates financial services into its messaging platform, allowing users to send and receive money directly within the app. This removes the need for third-party payment applications, enabling transactions such as mobile recharges, utility bill payments, and fund transfers. Additionally, users can send and receive gifts or money requests via their Bale ID or phone number, without requiring the recipient’s payment details. The platform also provides the option for users to set up a username, which allows others to find them without sharing their phone number.
The platform supports cross-platform functionality, with a web version that works independently of the mobile app. This allows users to access their messages and data across devices, with personal data such as chat history, media, and contacts synchronized without the need for a backup when switching devices. Bale's web apps are optimized for all devices, providing consistent performance across multiple platforms. Furthermore, voice and video call capabilities are available on both mobile and web versions, facilitating high-quality, face-to-face interactions.
Bale places particular emphasis on security and privacy, especially for its financial features. It uses encryption to protect both messages and financial transactions and states that it adheres to data protection standards. Users have the option to change their phone number while retaining all their personal data, including account information, messages, and group settings, and an account deletion option is available for those wishing to delete all their data. Bale's services are accessible globally, supporting users with any SIM card or phone number for both messaging and financial transactions.
Message Exchange Bus (MXB)
Bale is connected to the Message Exchange Bus (MXB), a technology that links major Iranian messaging platforms such as Bale, Eitaa, Soroush, Rubika, Gap, and iGap. It enables users to exchange messages, files, and other content between these apps without needing a separate account for each one, ensuring communication regardless of the platform used. The system, the first of its kind in Iran, is intended to create a unified messaging ecosystem in the country.
Social networking services
Bale has three main sections: the Chat section, the Flow section, and the Services section. Social networking features are concentrated in the Flow section of the app, which is divided into the Vitrin and Magazine subsections and displays both the user's own status and other users' statuses.
Vitrin section: This section displays numerous channels that are categorized and searchable.
Magazine section: This section features the best content from channels, as voted by users, displayed in a scrollable, short video format.
Chat section: In this section, users can access viral channels.
Channels and broadcasts
Bale supports channels where users can subscribe to receive broadcast messages. These channels allow businesses, public figures, and organizations to communicate directly with large audiences through streamlined updates, news, and offers.
Statuses
Users can set personal statuses visible to their contacts or to all users. These statuses are used for self-expression or to share information about activities, moods, or events.
Marketing and economic features
Online storefront (Feshnook): Bale includes an integrated online marketplace, allowing businesses to establish virtual stores where users can browse products, make purchases, and pay seamlessly within the app.
Advertising and promotions: The platform offers advertising tools for businesses to create targeted marketing campaigns. Businesses can run ads, promote products and services, and reach potential customers via sponsored messages or promoted content within the app.
Services
Banking services
Bale provides various banking services, including a digital wallet, the ability to receive money via messages without the need for card details, and the option to link bank cards for transaction tracking. Users can also receive notifications about account balances, transfers, and transaction activities.
Bots and AI
Bale supports a large bot ecosystem with over 30,000 active bots, offering solutions for a wide range of tasks. These include AI-powered bots, such as ChatGPT-based bots for natural language processing and image-generation bots for creating images. Bale also offers bots that track mail from postal services and provide updates on delivery status, as well as bots that use artificial intelligence for mental health support and counseling services. These bots, alongside many others, provide a wide variety of services.
Bot development frameworks
Bale provides several frameworks to facilitate bot development; a minimal request-level sketch follows the list below:
Balethon: A Python library designed for creating bots in Bale. It provides a high-level, asynchronous programming interface that supports both functional and object-oriented designs, offering flexibility and extensibility for developers.
python-bale-bot: An API wrapper for Bale written in Python, enabling developers to send and receive messages through bots and interact with Bale's messaging platform.
Bale Bot SDK: A PHP SDK for developing bots on Bale, supporting the Laravel framework and including add-ons to enhance the development experience.
BaleBotAPI: A .NET Standard 2.0 package that allows developers to build bots for Bale using the .NET framework.
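As a rough illustration of the request level that such frameworks wrap, the following Python sketch shows a minimal polling echo bot. The base URL, method names, and field names used here are placeholders and assumptions, not Bale's documented API; consult the official bot documentation or one of the frameworks above (e.g. Balethon, python-bale-bot) for real usage.

```python
import requests

# Placeholder values: BASE_URL and the method/field names below are assumptions
# for illustration only and do not come from Bale's official documentation.
BASE_URL = "https://example.invalid/bot<TOKEN>"

def get_updates(offset=None):
    """Long-poll the platform for new messages (hypothetical endpoint)."""
    resp = requests.get(f"{BASE_URL}/getUpdates", params={"offset": offset}, timeout=30)
    return resp.json().get("result", [])

def send_message(chat_id, text):
    """Send a text reply to a chat (hypothetical endpoint)."""
    requests.post(f"{BASE_URL}/sendMessage", json={"chat_id": chat_id, "text": text}, timeout=30)

def run_echo_bot():
    offset = None
    while True:
        for update in get_updates(offset):
            offset = update["update_id"] + 1
            message = update.get("message", {})
            if "text" in message:
                send_message(message["chat"]["id"], message["text"])  # echo back

if __name__ == "__main__":
    run_echo_bot()
```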
Removal from American App Stores
In 2022, Bale Messenger was removed from both the Google Play Store and the Apple App Store, along with several other Iranian apps. This removal came as part of a broader action taken by these platforms to restrict access to apps from Iran. Despite this, Bale continues to be available on Android through direct downloads and other app stores and on iOS through its web app, maintaining its user base.
Child mode
Bale includes a "Child Mode" feature designed to provide a safer and more controlled environment for younger users. This mode restricts access to certain features and content, ensuring a family-friendly experience. In Child Mode, parents or guardians can customize settings to filter out inappropriate content, limit the types of interactions children can have, and control which channels and messages the child can access. Parents can select specific sections to be shown to the child, providing more control over the child's app usage. To exit Child Mode, a PIN must be entered, ensuring that the child cannot leave the mode without parental permission.
Availability
All of Bale's features are available internationally, including messaging, voice and video calls, file transfers, social networking services, bots, and AI. The exception is its financial services, which are limited to the Iranian rial.
See also
Eitaa
Rubika
Messaging apps
References
External links
Android (operating system) software
Instant messaging clients
Iranian social networking websites | Bale Messenger | [
"Technology"
] | 1,575 | [
"Instant messaging",
"Instant messaging clients"
] |
78,733,484 | https://en.wikipedia.org/wiki/Diameter%20%28computational%20geometry%29 | In computational geometry, the diameter of a finite set of points or of a polygon is its diameter as a set, the largest distance between any two points. The diameter is always attained by two points of the convex hull of the input. A trivial brute-force search can be used to find the diameter of points in time (assuming constant-time distance evaluations) but faster algorithms are possible for points in low dimensions.
Static 2d input
In two dimensions, the diameter can be obtained by computing the convex hull and then applying the method of rotating calipers. This involves finding two parallel support lines for the convex hull (for instance vertical lines through the two vertices with minimum and maximum x-coordinate) and then rotating the two lines through a sequence of discrete steps that keep them as parallel lines of support until they have rotated back to their original orientation. The diameter is the maximum distance between any pair of convex hull vertices found as the two points of contact of the parallel lines in this sweep. The time for this method is dominated by the time for constructing the convex hull: O(n log n) for a finite set of n points, or O(n) time for a simple polygon with n vertices.
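A minimal sketch of this approach in Python, assuming the input is a list of (x, y) tuples (illustrative rather than optimized; function names are not taken from any particular library):

```python
import math

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counterclockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def diameter(points):
    """Rotating calipers: maximum distance between any two points of the set."""
    hull = convex_hull(points)
    n = len(hull)
    if n <= 1:
        return 0.0
    if n == 2:
        return math.dist(hull[0], hull[1])
    def area2(a, b, c):
        # twice the triangle area (absolute value), used to compare distances to an edge
        return abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
    best, j = 0.0, 1
    for i in range(n):
        ni = (i + 1) % n
        # advance the antipodal vertex while it moves farther from edge (i, ni)
        while area2(hull[i], hull[ni], hull[(j + 1) % n]) > area2(hull[i], hull[ni], hull[j]):
            j = (j + 1) % n
        best = max(best, math.dist(hull[i], hull[j]), math.dist(hull[ni], hull[j]))
    return best
```

For example, diameter([(0, 0), (1, 0), (0, 1), (5, 7)]) returns approximately 8.60, the distance between (0, 0) and (5, 7).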
Dynamic 2d input
For a dynamic two-dimensional point set subject to point insertions and deletions, an approximation to the diameter, with an approximation ratio that can be chosen arbitrarily close to one, can be maintained in time per operation. The exact diameter can be maintained dynamically in expected time per operation, in an input model in which the set of points to be inserted and deleted, and the order of insertion and deletion operations, is worst-case but the point chosen to be inserted or deleted in each operation is chosen randomly from the given set.
For a dynamic two-dimensional point set of a different type, points each moving linearly with fixed velocities, the time at which the points attain their minimum diameter and the diameter at that time can be computed in time
Higher dimensions
In three dimensions, the diameter of a set of n points can again be computed in O(n log n) expected time. A randomized method for doing this by Clarkson and Shor uses as a subroutine a randomized incremental algorithm for finding the intersection of congruent spheres. The algorithm repeatedly chooses a random input point, finds the farthest distance r from it, intersects spheres with radius r centered at each point, and eliminates the points contained in the resulting intersection. The eliminated points are within distance r of all other points, and therefore cannot be part of any pair with a larger distance than r. Each point is eliminated when the farthest distance from it is less than or equal to r, the farthest distance from the randomly chosen point, which happens with probability 1/2, so half of the points are eliminated in expectation in each iteration of the algorithm. The total expected time for the algorithm is dominated by the time to find the first intersection of spheres, before the problem is simplified by eliminating any points. This time is O(n log n). Ramos provides a non-random algorithm by using ε-nets to derandomize a variation of the Clarkson and Shor algorithm, with the same asymptotic runtime.
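The elimination step can be illustrated with the following simplified Python sketch. It replaces the sphere-intersection subroutine with brute-force distance checks, so it runs in roughly quadratic expected time rather than the O(n log n) of the actual algorithm; it only demonstrates why eliminated points can be discarded safely.

```python
import math
import random

def diameter_by_elimination(points):
    """Simplified sketch of the random-elimination idea for 3D points (tuples).
    The real Clarkson–Shor algorithm replaces the brute-force filtering below
    with a randomized incremental intersection of congruent spheres."""
    pts = list(points)
    best = 0.0
    while len(pts) > 1:
        p = random.choice(pts)
        r = max(math.dist(p, q) for q in pts)   # farthest distance from p
        best = max(best, r)
        # A point is eliminated if it lies within distance r of *every*
        # remaining point; such a point cannot belong to a pair farther than r.
        pts = [q for q in pts if any(math.dist(q, s) > r for s in pts)]
    return best
```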
In any fixed dimension d, there exists an algorithm for which the exponent of n in the time bound is less than two. It is also possible to approximate the diameter, to within a 1 + ε approximation ratio, in time that grows only linearly with n for any fixed ε and d.
See also
Kinetic diameter (data), the algorithmic problem of maintaining the diameter of moving points
Minimum-diameter spanning tree, a different notion of diameter for low-dimensional points based on the graph diameter of a spanning tree
References
Length
Geometric algorithms | Diameter (computational geometry) | [
"Physics",
"Mathematics"
] | 722 | [
"Scalar physical quantities",
"Physical quantities",
"Distance",
"Quantity",
"Size",
"Length",
"Wikipedia categories named after physical quantities"
] |
78,734,445 | https://en.wikipedia.org/wiki/Springfield%20City%20Hall%20%28Ohio%29 | Springfield City Hall is the city hall of Springfield, Ohio. The building replaced the current Clark County Heritage Center in 1979 as the location of the city's offices. The building was designed by Skidmore, Owings & Merrill.
References
Buildings and structures in Springfield, Ohio
City and town halls in Ohio | Springfield City Hall (Ohio) | [
"Engineering"
] | 63 | [
"Architecture stubs",
"Architecture"
] |
78,734,909 | https://en.wikipedia.org/wiki/Amulirafusp%20alfa | Amulirafusp alfa (IMM0306) is a cancer immunotherapy that targets CD20-positive B-cell malignancies. It is a fusion protein that combines a CD20 monoclonal antibody with the CD47 binding domain of SIRPα, allowing it to simultaneously engage both CD20 and CD47 on cancer cells. Amulirafusp alfa has a dual mechanism of action that enhances both macrophage-mediated phagocytosis and natural killer cell activation.
References
Monoclonal antibodies | Amulirafusp alfa | [
"Chemistry"
] | 112 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |