id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
56,072,420 | https://en.wikipedia.org/wiki/Mineral%20bonded%20wood%20wool%20board | Mineral bonded wood wool boards (WW boards) are building boards made of wood wool fibres, water and the binding agents cement, caustic magnesia and gypsum. Mineral bonded wood wool boards are used in a wide range of applications, e.g., thermal insulation, acoustic insulation and indoor decoration.
Historical brand names include Ceban, Erulit, Fibrolith, Frankotekt, Hapec, Hapri, Heraklith, Hincolith, Holwolith, Klimalit, Lenzolite, Lignolith, Lossius and Saalith. Because of the term wool, laypersons sometimes mistake wood wool boards for wood fibre insulating boards, a different insulation material that does not contain mineral binders.
In German-speaking areas, they are also known as sauerkraut boards because of the appearance of their surface.
History
Wood wool boards were standardised in 1938 according to DIN 1101 and are therefore among the oldest insulation materials made of renewable raw materials. This standard was replaced by European Standard EN 13168 in January 2002.
Materials and production
Wood wool boards and wood-wool layers in composite boards are generally manufactured using coniferous wood species, mainly spruce and pine. The wood logs are dried and then planed into long fibres of various widths, generally between 1 mm and 3 mm, in a wood wool machine. The fibres are then mixed with the binding agent (caustic magnesia or cement) and water in a mixer. Wood wool boards which are bound with magnesia are easily distinguishable by their beige colour, while boards bound with grey cement have a greyish colour; white cement is used to maintain the natural colour of the wood. The mixture of wood wool fibres and binding agent is automatically fed into a moulding line to be shaped according to the required board dimensions (length, width and thickness). After pre-compaction, the endless row of moulds filled with the mixture is separated into individual moulds with a saw. The moulds are stacked on top of each other, pressed again and weighted so that they are perfectly flush with each other. After hardening of the binder (usually between 24 and 48 hours) the moulds are removed, and the semi-finished boards are dried and cut to raw size. The products are then taken to the maturation stock, where they remain for a period that depends on the type of binder and the board thickness. The boards can be subject to further processing in the finishing department, e.g. if a special edge profile or board shape is needed. The whole production process is carried out in such a way that the product meets the requirements set in EN 13168.
Properties
Wood wool boards are rigid and very strong. Their thermal conductivity, between 0.070 and 0.100 W/(m·K), is higher than that of other insulation materials such as mineral wool (approximately 0.040 W/(m·K)). However, their specific thermal capacity, and therefore their summer heat insulation, is higher than that of other materials; e.g., when installed in the pitched roofs of lofts, wood wool boards offer better summer heat insulation than basic dry wall systems. Wood wool boards can be made to offer a high degree of sound insulation (e.g. if they are plastered) or sound absorption performance (e.g. non-plastered boards) and a good moisture-regulating capacity thanks to their open structure. In the harmonized Euroclass system of reaction-to-fire performance of building products, wood wool boards can be classified as A2-s1, d0 according to EN 13501-1 (Fire classification of construction products and building elements; Part 1: Classification using data from reaction to fire tests).
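The conductivity figures above translate directly into thermal resistance via R = d/λ for a layer of thickness d. A small illustrative calculation in Python (the board thickness and the exact conductivity values are assumptions, not taken from the text):
# Thermal resistance R = d / lambda (m^2*K/W) for a layer of thickness d (m)
# and thermal conductivity lambda (W/(m*K)).
layers = {
    "wood wool board, 25 mm": (0.025, 0.090),  # conductivity assumed within the quoted range
    "mineral wool, 25 mm": (0.025, 0.040),
}
for name, (d, lam) in layers.items():
    print(f"{name}: R = {d / lam:.2f} m^2*K/W")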
In combination with other insulation materials (e.g. mineral wool), the resistance against fire can reach up to 240 minutes, depending on the product's thickness and setup.
Multi-layer lightweight building boards
Wood wool composite boards (WW-C boards) are a two- or three-layer combination of wood wool boards and various insulation materials (mineral fibre boards, EPS boards, XPS boards, etc.). The thickness of the wood wool layer in a composite board is at least 5 mm. Due to the higher insulation effect of the mineral or synthetic insulation layer, composite boards provide a higher level of thermal insulation than single wood wool boards.
See also
Coarse chipboard
Hardboard
Medium-density fibreboard
Particle board
References
Biodegradable materials
Wood products
Passive fire protection
Wallcoverings | Mineral bonded wood wool board | [
"Physics",
"Chemistry"
] | 948 | [
"Biodegradation",
"Biodegradable materials",
"Materials",
"Matter"
] |
56,073,390 | https://en.wikipedia.org/wiki/Combustion%20instability | Combustion instabilities are physical phenomena occurring in a reacting flow (e.g., a flame) in which some perturbations, even very small ones, grow and then become large enough to alter the features of the flow in some particular way.
In many practical cases, the appearance of combustion instabilities is undesirable. For instance, thermoacoustic instabilities are a major hazard to gas turbines and rocket engines. Moreover, flame blowoff of an aero-gas-turbine engine in mid-flight is clearly dangerous (see flameout).
Because of these hazards, the engineering design process of engines involves the determination of a stability map (see figure). This process identifies a combustion-instability region and attempts to either eliminate this region or move the operating region away from it. This is a very costly iterative process. For example, the numerous tests required to develop rocket engines are in large part due to the need to eliminate or reduce the impact of thermoacoustic combustion instabilities.
Classification of combustion instabilities
In applications directed towards engines, combustion instability has been classified into three categories, not entirely distinct. This classification was first introduced by Marcel Barrère and Forman A. Williams in 1969. The three categories are
Chamber instabilities - instabilities arising due to the occurrence of combustion inside a chamber (thermo-acoustic instabilities, shock instabilities, fluid-dynamic instabilities associated with the chamber, etc.)
Intrinsic instabilities - instabilities arising irrespective of whether combustion occurs inside a chamber or not (chemical-kinetic instabilities, diffusive-thermal instabilities, hydrodynamic instabilities such as the Darrieus–Landau instability, Rayleigh–Taylor instability, Saffman–Taylor instability, etc.)
System instabilities - instabilities arising due to the interaction between combustion processes in the chamber and anywhere else in the system (feed-system interactions, exhaust-system interactions, etc.)
Intrinsic flame instabilities
In contrast with thermoacoustic combustion instabilities, where the role of acoustics is dominant, intrinsic flame instabilities refer to instabilities produced by differential and preferential diffusion, thermal expansion, buoyancy, and heat losses. Examples of these instabilities include the Darrieus–Landau instability, the Rayleigh-Taylor instability, and diffusive-thermal instability.
Chamber instabilities
Thermoacoustic combustion instabilities
In this type of instability the perturbations that grow and alter the features of the flow are of an acoustic nature. Their associated pressure oscillations can have well-defined frequencies with amplitudes high enough to pose a serious hazard to combustion systems. For example, in rocket engines, such as the Rocketdyne F-1 rocket engine in the Saturn V program, instabilities can lead to massive damage of the combustion chamber and surrounding components (see rocket engines). Furthermore, instabilities are known to destroy gas-turbine-engine components during testing. They represent a hazard to any type of combustion system.
Thermoacoustic combustion instabilities can be explained by distinguishing the following physical processes:
the feedback between heat-release fluctuations (or flame fluctuations) and the combustor or combustion chamber acoustics
the coupling of these two processes in space-time
the strength of this coupling in comparison with acoustic losses
the physical mechanisms behind the heat-release fluctuations
The simplest example of a thermoacoustic combustion instability is perhaps that happening in a horizontal Rijke tube (see also thermoacoustics): Consider the flow through a horizontal tube open at both ends, in which a flat flame sits at a distance of one-quarter the tube length from the leftmost end. In a similar way to an organ pipe, acoustic waves travel up and down the tube producing a particular pattern of standing waves. Such a pattern also forms in actual combustors, but takes a more complex form. The acoustic waves perturb the flame. In turn, the flame affects the acoustics. This feedback between the acoustic waves in the combustor and the heat-release fluctuations from the flame is a hallmark of thermoacoustic combustion instabilities. It is typically represented with a block diagram (see figure). Under some conditions, the perturbations will grow and then saturate, producing a particular noise. In fact, it is said that the flame of a Rijke tube sings.
The conditions under which perturbations will grow are given by Rayleigh's (John William Strutt, 3rd Baron Rayleigh) criterion: Thermoacoustic combustion instabilities will occur if the volume integral of the correlation of pressure and heat-release fluctuations over the whole tube is larger than zero (see also thermoacoustics). In other words, instabilities will happen if heat-release fluctuations are coupled with acoustical pressure fluctuations in space-time (see figure). However, this condition is not sufficient for the instability to occur.
Another necessary condition for the establishment of a combustion instability is that the driving of the instability from the above coupling must be larger than the sum of the acoustic losses. These losses happen through the tube's boundaries, or are due to viscous dissipation.
Combining the above two conditions, and for simplicity assuming here small fluctuations and an inviscid flow, leads to the extended Rayleigh's criterion. Mathematically, this criterion is given by the following inequality:
∫_0^T ∫_V p' q' dV dt > ∫_0^T ∫_S p' (u' · n) dS dt.
Here p' represents pressure fluctuations, q' heat-release fluctuations, u' velocity fluctuations, T is a long enough time interval, V denotes volume, S surface, and n is a normal to the surface boundaries. The left-hand side denotes the coupling between heat-release fluctuations and acoustic pressure fluctuations, and the right-hand side represents the loss of acoustic energy at the tube boundaries.
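The driving side of the criterion can be checked numerically: for harmonic pressure and heat-release fluctuations, the time integral of p'q' over a period is positive (driving) when the two signals are less than 90 degrees out of phase and negative (damping) otherwise. A small illustrative Python sketch (the frequency and unit amplitudes are arbitrary assumptions):
import numpy as np

f = 100.0                              # oscillation frequency in Hz (assumed)
t = np.linspace(0.0, 1.0 / f, 2001)    # one acoustic period
for phase_deg in (0, 45, 90, 135, 180):
    p = np.sin(2 * np.pi * f * t)                              # pressure fluctuation p'
    q = np.sin(2 * np.pi * f * t + np.radians(phase_deg))      # heat-release fluctuation q'
    rayleigh_source = np.sum(p * q) * (t[1] - t[0])            # time integral of p'q'
    print(f"phase {phase_deg:3d} deg: integral of p'q' = {rayleigh_source:+.5f}")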
Graphically, for a particular combustor, the extended Rayleigh's criterion is represented in the figure on the right as a function of frequency. The left-hand side of the above inequality is called the gains, and the right-hand side the losses. Notice that there is a region where the gains exceed the losses; in other words, the above inequality is satisfied. Furthermore, note that in this region the response of the combustor to acoustic fluctuations peaks. Thus, the likelihood of a combustion instability in this region is high, making it a region to avoid in the operation of the combustor. This graphical representation of a hypothetical combustor suggests three methods to prevent combustion instabilities: increase the losses; reduce the gains; or move the combustor's peak response away from the region where gains exceed losses.
To clarify further the role of the coupling between heat-release fluctuations and pressure fluctuations in producing and driving an instability, it is useful to make a comparison with the operation of an internal combustion engine (ICE). In an ICE, a higher thermal efficiency is achieved by releasing the heat via combustion at a higher pressure. Likewise, a stronger driving of a combustion instability happens when the heat is released at a higher pressure. But while high heat release and high pressure coincide (roughly) throughout the combustion chamber in an ICE, they coincide at a particular region or regions during a combustion instability. Furthermore, whereas in an ICE the high pressure is achieved through mechanical compression with a piston or a compressor, in a combustion instability high pressure regions form when a standing acoustic wave is formed.
The physical mechanisms producing the above heat-release fluctuations are numerous. Nonetheless, they can be roughly divided into three groups: heat-release fluctuations due to mixture inhomogeneities; those due to hydrodynamic instabilities; and, those due to static combustion instabilities.
To picture heat-release fluctuations due to mixture inhomogeneities, consider a pulsating stream of gaseous fuel upstream of a flame-holder.
Such a pulsating stream may well be produced by acoustic oscillations in the combustion chamber that are coupled with the fuel-feed system. Many other causes are possible. The fuel mixes with the ambient air in a way that an inhomogeneous mixture reaches the flame, e.g., the blobs of fuel-and-air that reach the flame could alternate between rich and lean. As a result, heat-release fluctuations occur.
Heat-release fluctuations produced by hydrodynamic instabilities happen, for example, in bluff-body-stabilized combustors when vortices interact with the flame (see previous figure).
Lastly, heat-release fluctuations due to static instabilities are related to the mechanisms explained in the next section.
Static instability or flame blow-off
Static instability or flame blow-off refers to phenomena involving the interaction between the chemical composition of the fuel-oxidizer mixture and the flow environment of the flame. To explain these phenomena, consider a flame that is stabilized with swirl, as in a gas-turbine combustor, or with a bluff body. Moreover, say that the chemical composition and flow conditions are such that the flame is burning vigorously, and that the former is set by the fuel-oxidizer ratio (see air-fuel ratio) and the latter by the oncoming velocity. For a fixed oncoming velocity, decreasing the fuel-oxidizer ratio makes the flame change its shape, and decreasing it further makes the flame oscillate or move intermittently. In practice, these are undesirable conditions. Further decreasing the fuel-oxidizer ratio blows off the flame. This is clearly an operational failure. For a fixed fuel-oxidizer ratio, increasing the oncoming velocity makes the flame behave in a similar way to the one just described.
Even though the processes just described are studied with experiments or with computational fluid dynamics, it is instructive to explain them with a simpler analysis. In this analysis, the interaction of the flame with the flow environment is modeled as a perfectly-mixed chemical reactor. With this model, the governing parameter is the ratio between a flow time-scale (or residence time in the reactor) and a chemical time-scale, and the key observable is the reactor's maximum temperature. The relationship between parameter and observable is given by the so-called S-shaped curve (see figure). This curve results from the solution of the governing equations of the reactor model. It has three branches: an upper branch in which the flame is burning vigorously, i.e., it is "stable"; a middle branch in which the flame is "unstable" (the probability for solutions of the reactor-model equations to be in this unstable branch is small); and a lower branch in which there is no flame but a cold fuel-oxidizer mixture. The decrease of the fuel-oxidizer ratio or increase of oncoming velocity mentioned above corresponds to a decrease of the ratio of the flow and chemical time scales. This in turn corresponds to a movement towards the left in the S-shaped curve. In this way, a flame that is burning vigorously is represented by the upper branch, and its blow-off is the movement towards the left along this branch towards the quenching point Q. Once this point is passed, the flame enters the middle branch, becoming thus "unstable", or blows off. This is how this simple model captures qualitatively the more complex behavior explained in the above example of a swirl- or bluff-body-stabilized flame.
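The S-shaped curve can be reproduced with a toy steady-state energy balance for a perfectly stirred reactor, (T − T_in)/τ = A (T_ad − T) exp(−T_a/T), whose roots for a given residence time τ are the possible steady states. The Python sketch below sweeps τ and collects all roots; every parameter value is an illustrative assumption, not taken from the article.
import numpy as np
from scipy.optimize import brentq

# Toy perfectly stirred reactor: steady states satisfy
#   (T - T_in)/tau = A * (T_ad - T) * exp(-T_a / T)
# Sweeping the residence time tau and collecting all roots traces out the
# S-shaped curve: a cold lower branch, an unstable middle branch and a
# vigorously burning upper branch.
T_in, T_ad, T_a, A = 300.0, 2100.0, 15000.0, 1.0e8   # illustrative values

def residual(T, tau):
    return (T - T_in) / tau - A * (T_ad - T) * np.exp(-T_a / T)

for tau in np.logspace(-5, 0, 11):                    # residence times in seconds
    grid = np.linspace(T_in, T_ad, 4000)
    r = residual(grid, tau)
    roots = [brentq(residual, grid[i], grid[i + 1], args=(tau,))
             for i in range(len(grid) - 1) if r[i] * r[i + 1] < 0]
    print(f"tau = {tau:.1e} s -> steady-state temperatures (K): {np.round(roots, 1)}")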
References
Combustion | Combustion instability | [
"Chemistry"
] | 2,368 | [
"Combustion"
] |
56,073,671 | https://en.wikipedia.org/wiki/Solovay%E2%80%93Kitaev%20theorem | In quantum information and computation, the Solovay–Kitaev theorem says that if a set of single-qubit quantum gates generates a dense subgroup of SU(2), then that set can be used to approximate any desired quantum gate with a short sequence of gates that can also be found efficiently. This theorem is considered one of the most significant results in the field of quantum computation and was first announced by Robert M. Solovay in 1995 and independently proven by Alexei Kitaev in 1997. Michael Nielsen and Christopher M. Dawson have noted its importance in the field.
A consequence of this theorem is that a quantum circuit of m constant-qubit gates can be approximated to ε error (in operator norm) by a quantum circuit of O(m log^c(m/ε)) gates from a desired finite universal gate set (where c is a constant). By comparison, just knowing that a gate set is universal only implies that constant-qubit gates can be approximated by a finite circuit from the gate set, with no bound on its length. So, the Solovay–Kitaev theorem shows that this approximation can be made surprisingly efficient, thereby justifying that quantum computers need only implement a finite number of gates to gain the full power of quantum computation.
Statement
Let G be a finite set of elements in SU(2) containing its own inverses (so g in G implies g⁻¹ in G) and such that the group they generate is dense in SU(2). Consider some ε > 0. Then there is a constant c such that for any U in SU(2), there is a sequence S of gates from G of length O(log^c(1/ε)) such that ‖S − U‖ ≤ ε. That is, S approximates U to operator norm error ε. Furthermore, there is an efficient algorithm to find such a sequence. More generally, the theorem also holds in SU(d) for any fixed d.
This theorem also holds without the assumption that G contains its own inverses, although presently with a larger value of c that also increases with the dimension d.
Quantitative bounds
The constant c can be made to be 3 + δ for any fixed δ > 0. However, there exist particular gate sets for which we can take c = 1, which makes the length of the gate sequence optimal up to a constant factor.
Proof idea
Every known proof of the fully general Solovay–Kitaev theorem proceeds by recursively constructing a gate sequence giving increasingly good approximations to U. Suppose we have an approximation U_{n−1} such that ‖U − U_{n−1}‖ ≤ ε_{n−1}. Our goal is to find a sequence of gates approximating U U_{n−1}⁻¹ to error ε_n, for ε_n < ε_{n−1}. By concatenating this sequence of gates with U_{n−1}, we get a sequence of gates U_n such that ‖U − U_n‖ ≤ ε_n.
The main idea in the original argument of Solovay and Kitaev is that commutators of elements close to the identity can be approximated "better-than-expected". Specifically, for V and W satisfying ‖V − I‖ ≤ δ₁ and ‖W − I‖ ≤ δ₁, and approximations Ṽ and W̃ satisfying ‖V − Ṽ‖ ≤ δ₂ and ‖W − W̃‖ ≤ δ₂, then
‖V W V⁻¹ W⁻¹ − Ṽ W̃ Ṽ⁻¹ W̃⁻¹‖ = O(δ₁ δ₂),
where the big O notation hides higher-order terms. One can naively bound the above expression to be O(δ₂), but the group commutator structure creates substantial error cancellation.
We can use this observation to approximate U U_{n−1}⁻¹ as a group commutator V W V⁻¹ W⁻¹. This can be done such that both V and W are close to the identity (since ‖U U_{n−1}⁻¹ − I‖ ≤ ε_{n−1}). So, if we recursively compute gate sequences approximating V and W to error ε_{n−1}, we get a gate sequence approximating U U_{n−1}⁻¹ to the desired better precision ε_n, with ε_n = O(ε_{n−1}^{3/2}). We can get a base case approximation with constant ε₀ with an exhaustive search of bounded-length gate sequences.
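The "better-than-expected" cancellation can be checked numerically. The Python sketch below (a minimal illustration; the rotation axes, angles and the factor-of-ten relation between δ and the approximation error are arbitrary assumptions) compares the error of the inputs with the error of their group commutators; the latter shrinks roughly like the product δ·ε rather than like ε alone.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def group_comm(A, B):
    return A @ B @ A.conj().T @ B.conj().T

def dist(A, B):
    return np.linalg.norm(A - B, 2)           # operator-norm distance

for delta in (1e-1, 1e-2, 1e-3):
    eps = delta / 10                           # accuracy of the approximations (assumed)
    V, W = expm(-1j * delta * X), expm(-1j * delta * Y)
    V_approx = expm(-1j * (delta + eps) * X)   # imperfect approximations to V and W
    W_approx = expm(-1j * (delta + eps) * Y)
    input_error = dist(V, V_approx)                                      # ~ eps
    comm_error = dist(group_comm(V, W), group_comm(V_approx, W_approx))  # ~ delta * eps
    print(f"delta={delta:.0e}  input error={input_error:.2e}  commutator error={comm_error:.2e}")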
Proof of Solovay-Kitaev Theorem
Let us choose the initial value so that to be able to apply the iterated “shrinking” lemma. In addition we want to make sure that decreases as we increase . Moreover, we also make sure that is small enough so that .
Since is dense in , we can choose large enough so that is an -net for (and hence for , as well), no matter how small is. Thus, given any , we can choose such that . Let be the “difference” of and . Then
Hence, . By invoking the iterated "shrinking" lemma with , there
exists such that
Similarly let . Then
Thus, and we can invoke the iterated "shrinking" lemma (with
this time) to get such that
If we continue in this way, after k steps we get such that
Thus, we have obtained a sequence of
gates that approximates to accuracy . To determine the value of , we set
and solve for k:
Now we can always choose slightly smaller so that the obtained value
of is an integer. Let so that. Then
Hence for any there is a sequence of gates that
approximates to accuracy .
Solovay-Kitaev algorithm for qubits
This section presents the main ideas used in the Solovay–Kitaev (SK) algorithm. The SK algorithm may be expressed in nine lines of pseudocode. Each of these lines is explained in detail below, but the algorithm is presented here in its entirety both for the reader's reference and to stress its conceptual simplicity:
function Solovay-Kitaev(Gate U, depth n) is
  if (n == 0)
    return Basic Approximation to U
  else
    set U_{n-1} = Solovay-Kitaev(U, n-1)
    set V, W = GC-Decompose(U U_{n-1}†)
    set V_{n-1} = Solovay-Kitaev(V, n-1)
    set W_{n-1} = Solovay-Kitaev(W, n-1)
    return V_{n-1} W_{n-1} V_{n-1}† W_{n-1}† U_{n-1};
end function
Let us examine each of these lines in detail. The first line:
function Solovay-Kitaev(Gate U, depth n) is
indicates that the algorithm is a function with two inputs: an arbitrary single-qubit quantum gate, U, which we desire to approximate, and a non-negative integer, n, which controls the accuracy of the approximation. The function returns a sequence of instructions which approximates U to an accuracy ε_n, where ε_n is a decreasing function of n, so that as n gets larger, the accuracy gets better, with ε_n → 0 as n → ∞. ε_n is described in detail below.
The Solovay-Kitaev function is recursive, so that to obtain an ε_n-approximation to U, it will call itself to obtain ε_{n-1}-approximations to certain unitaries. The recursion terminates at n = 0, beyond which no further recursive calls are made:
if (n == 0)
return Basic Approximation to U
In order to implement this step it is assumed that a preprocessing stage has been completed which allows one to find a basic ε₀-approximation to arbitrary U in SU(2). Since ε₀ is a constant, in principle this preprocessing stage may be accomplished simply by enumerating and storing a large number of instruction sequences from the gate set G, say up to some sufficiently large (but fixed) length ℓ₀, and then providing a lookup routine which, given U, returns the closest sequence.
At higher levels of recursion, to find an ε_n-approximation to U, one begins by finding an ε_{n-1}-approximation to U:
else
set U_{n-1} = Solovay-Kitaev(U, n-1)
U_{n-1} is used as a step towards finding an improved approximation to U. Defining Δ ≡ U U_{n-1}†, the next three steps of the algorithm aim to find an ε_n-approximation to Δ, where ε_n is some improved level of accuracy, i.e., ε_n < ε_{n-1}. Finding such an approximation also enables us to obtain an ε_n-approximation to U, simply by concatenating our exact sequence of instructions for U_{n-1} with our ε_n-approximating sequence for Δ.
How do we find such an approximation to Δ? First, observe that Δ is within a distance ε_{n-1} of the identity. This follows from the definition of Δ and the fact that U_{n-1} is within a distance ε_{n-1} of U.
Second, decompose Δ as a group commutator Δ = V W V† W† of unitary gates V and W. For any Δ this is not obvious, but it turns out that there is always an infinite set of choices for V and W such that Δ = V W V† W†. For our purposes it is important that we find V and W such that ‖V − I‖, ‖W − I‖ ≤ c_gc √ε_{n-1} for some constant c_gc. We call such a decomposition a balanced group commutator.
set V, W = GC-Decompose(Δ)
For practical implementations we will see below that it is useful to have c_gc as small as possible.
The next step is to find instruction sequences which are ε_{n-1}-approximations to V and W:
set V_{n-1} = Solovay-Kitaev(V, n-1)
set W_{n-1} = Solovay-Kitaev(W, n-1)
The group commutator of V_{n-1} and W_{n-1} turns out to be an ε_n ≡ c_approx ε_{n-1}^{3/2}-approximation to Δ, for some small constant c_approx. Provided ε_{n-1} < 1/c_approx², we see that ε_n < ε_{n-1}, and this procedure therefore provides an improved approximation to Δ, and thus to U.
The constant c_approx is important as it determines the precision ε₀ required of the initial approximations. In particular, we see that for this construction to guarantee that ε_n < ε_{n-1} we must have ε₀ < 1/c_approx².
The algorithm concludes by returning the sequences approximating the group commutator, as well as U_{n-1}:
return V_{n-1} W_{n-1} V_{n-1}† W_{n-1}† U_{n-1};
Summing up, the function Solovay-Kitaev(U, n) returns a sequence which provides an ε_n-approximation to the desired unitary U. The five constituents in this sequence are all obtained by calling the function at the (n−1)th level of recursion.
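The control flow of the pseudocode can be mirrored in a compact Python sketch. This is only a toy illustration under loud assumptions: the base case is a brute-force search over short words from {H, T, T†}, and GC-Decompose is replaced by a generic numerical minimisation rather than the analytic balanced group-commutator decomposition used in practice. Because this base approximation is far too coarse to satisfy a condition of the form ε₀ < 1/c_approx², the printed errors are not guaranteed to decrease; the sketch only shows the recursive structure.
import itertools
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
T = np.diag([1.0, np.exp(1j * np.pi / 4)])
GATES = [H, T, T.conj().T]                     # elementary gate set {H, T, T†}
PAULIS = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def dist(A, B):
    return np.linalg.norm(A - B, 2)            # operator-norm distance

def basic_approximation(U, max_len=5):
    """Brute-force 'lookup table': the closest product of at most max_len gates."""
    best = np.eye(2, dtype=complex)
    for length in range(1, max_len + 1):
        for word in itertools.product(GATES, repeat=length):
            G = np.eye(2, dtype=complex)
            for g in word:
                G = g @ G
            if dist(G, U) < dist(best, U):
                best = G
    return best

def su2(p):
    return expm(-1j * sum(c * s for c, s in zip(p, PAULIS)))

def gc_decompose(Delta):
    """Numerical stand-in for GC-Decompose: find V, W with V W V† W† close to Delta."""
    def cost(p):
        V, W = su2(p[:3]), su2(p[3:])
        return dist(V @ W @ V.conj().T @ W.conj().T, Delta)
    p = minimize(cost, [0.1, 0.05, 0.02, 0.04, 0.08, 0.03],
                 method="Nelder-Mead", options={"maxiter": 20000}).x
    return su2(p[:3]), su2(p[3:])

def solovay_kitaev(U, n):
    if n == 0:
        return basic_approximation(U)
    U_prev = solovay_kitaev(U, n - 1)
    V, W = gc_decompose(U @ U_prev.conj().T)   # Delta = U U_{n-1}†, close to identity
    V_prev = solovay_kitaev(V, n - 1)
    W_prev = solovay_kitaev(W, n - 1)
    return V_prev @ W_prev @ V_prev.conj().T @ W_prev.conj().T @ U_prev

target = expm(-0.37j * PAULIS[2])              # an arbitrary small z-rotation
for depth in range(3):
    print(depth, dist(solovay_kitaev(target, depth), target))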
References
Mathematical theorems
Quantum computing
Quantum information theory | Solovay–Kitaev theorem | [
"Mathematics"
] | 1,762 | [
"Mathematical theorems",
"Mathematical problems",
"nan"
] |
56,073,712 | https://en.wikipedia.org/wiki/C2H3Br3 |
The molecular formula C2H3Br3 (molar mass: 266.76 g/mol, exact mass: 263.7785 u) may refer to:
Tribromoethanes
1,1,1-Tribromoethane
1,1,2-Tribromoethane | C2H3Br3 | [
"Chemistry"
] | 77 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
56,073,965 | https://en.wikipedia.org/wiki/Plant%E2%80%93fungus%20horizontal%20gene%20transfer | Plant–fungus horizontal gene transfer is the movement of genetic material between individuals in the plant and fungus kingdoms. Horizontal gene transfer is universal in fungi, viruses, bacteria, and other eukaryotes. Horizontal gene transfer research often focuses on prokaryotes because of the abundant sequence data from diverse lineages, and because it is assumed not to play a significant role in eukaryotes.
Most plant–fungus horizontal gene transfer events are ancient and rare, but they may have provided important gene functions leading to wider substrate use and habitat spread for plants and fungi. Since these events are rare and ancient, they have been difficult to detect and remain relatively unknown. Plant–fungus interactions could play a part in a multi-horizontal gene transfer pathway among many other organisms.
Mechanisms
Fungus–plant-mediated horizontal gene transfer can occur via phagotrophic mechanisms (mediated by phagotrophic eukaryotes) and nonphagotrophic mechanisms. Nonphagotrophic mechanisms have been seen in the transmission of transposable elements, plastid-derived endosymbiotic gene transfer, prokaryote-derived gene transfer, Agrobacterium tumefaciens-mediated DNA transfer, cross-species hybridization events, and gene transfer between mitochondrial genes. Horizontal gene transfer could bypass eukaryotic barrier features like linear chromatin-based chromosomes, intron–exon gene structures, and the nuclear envelope.
Horizontal gene transfer occurs between microorganisms sharing overlapping ecological niches and associations like parasitism or symbiosis. Ecological association can facilitate horizontal gene transfer in plants and fungi and is an unstudied factor in shared evolutionary histories.
Most horizontal gene transfers from fungi into plants predate the rise of land plants. A greater genomic inventory of gene family and taxon sampling has been identified as a desirable prerequisite for identifying further plant–fungus events.
Indicators of past horizontal gene transfer
Evidence for gene transfer between fungi and other eukaryotes is discovered indirectly. Evidence is found in the unusual features of genetic elements. These features include: inconsistency between phylogenies across genetic elements, high DNA or amino acid similarity between phylogenetically distant organisms, irregular distribution of genetic elements in a variety of species, similar genes shared among species within a specific habitat or geography independent of their phylogenetic relationship, and gene characteristics inconsistent with the resident genome, such as high guanine and cytosine content, codon usage, and introns.
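One of the listed signals, nucleotide composition that clashes with the resident genome, is easy to quantify. A minimal Python sketch (the candidate sequence and the background GC level are made-up values for illustration):
def gc_content(seq: str) -> float:
    """Fraction of G and C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

candidate_gene = "ATGGCGCGCGGCCTGGGCGCCGGCTAA"   # hypothetical GC-rich coding sequence
host_background_gc = 0.44                        # assumed genome-wide average
gc = gc_content(candidate_gene)
print(f"candidate GC = {gc:.2f} vs. host background = {host_background_gc:.2f}")
# A large mismatch is one indicator (weak on its own) that the gene may be foreign.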
Alternative hypotheses and explanations for such findings include erroneous species phylogenies, inappropriate comparison of paralogous sequences, sporadic retention of shared ancestral characteristics, uneven rates of character change in other lineages, and introgressive hybridization.
The "complexity hypothesis" is a different approach to understanding why informational genes have less success in being transferred than operational genes. It has been proposed that informational genes are part of larger, more conglomerate systems, while operational genes are less complex, allowing them to be horizontally transferred at higher frequencies. The hypothesis incorporates the "continual hypothesis", which states that horizontal gene transfer is constantly occurring in operational genes.
Examples
Horizontal gene transfer as a multi-vector pathway
Plant–fungus horizontal gene transfer could take place during plant infection. There are many possible vectors, such as plant–fungus–insect interactions. The ability for fungi to infect other organisms provides this possible pathway.
In rice
A fungus–plant pathway has been demonstrated in rice (Oryza sativa) through ancestral lineages. A phylogeny was constructed from 1689 identified genes and all homologs available from the rice genome (3177 gene families). Fourteen candidate plant–fungus horizontal gene transfer events were identified, nine of which showed infrequent patterns of transfer between plants and fungi. From the phylogenetic analysis, horizontal gene transfer events could have contributed to the L-fucose permease sugar transporter, zinc binding alcohol dehydrogenase, membrane transporter, phospholipase/carboxylesterase, iucA/iucC family protein in siderophore biosynthesis, DUF239 domain protein, phosphate-response 1 family protein, a hypothetical protein similar to zinc finger (C2H2-type) protein, and another conserved hypothetical protein.
Ancestral shikimate pathway
Some plants may have obtained the shikimate pathway from symbiotic fungi. Plant shikimate pathway enzymes share similarities to prokaryote homologs and could have ancestry from a plastid progenitor genome. It is possible that the shikimate pathway and the pentafunctional arom have their ancient origins in eukaryotes or were conveyed by eukaryote–eukaryote horizontal gene transfer. The evolutionary history of the pathway could have been influenced by a prokaryote-to-eukaryote gene transfer event. Ascomycete fungi along with zygomycetes, basidiomycetes, apicomplexa, ciliates, and oomycetes retained elements of an ancestral pathway given through the bikont/unikont eukaryote root.
Ancestral land plants
Fungi and bacteria could have contributed to the phenylpropanoid pathway in ancestral land plants for the synthesis of flavonoids and lignin through horizontal gene transfer. Phenylalanine ammonia lyase (PAL) is known to be present in fungi, such as Basidiomycota yeast like Rhodotorula and Ascomycota such as Aspergillus and Neurospora. These fungi participate in the catabolism of phenylalanine for carbon and nitrogen. PAL in some plants and fungi also has a tyrosine ammonia lyase (TAL) for the synthesis of p-coumaric acid into p-coumaroyl-CoA. PAL likely emerged from bacteria in an antimicrobial role. Horizontal gene transfer took place through a pre-Dikarya divergent fungal lineage and a Nostocale or soil-sediment bacterium through symbiosis. The fungal PAL was then transferred to an ancestor of a land plant by an ancient arbuscular mycorrhizal symbiosis that later developed in the phenylpropanoid pathway and land plant colonization. PAL enzymes in early bacteria and fungi could have contributed to protection against ultraviolet radiation, acted as a light capturing pigment, or assisted in antimicrobial defense.
Gene transfer for enhanced intermediate and secondary metabolism
Sterigmatocystin gene transfer has been observed with Podospora anserina and Aspergillus. Horizontal gene transfer in Aspergillus and Podospora contributed to fungal metabolic diversity in secondary metabolism. Aspergillus nidulans produces sterigmatocystin – a precursor to aflatoxins. Aspergillus was found to have horizontally transferred genes to Podospora anserina. Podospora and Aspergillus show high conservation and microsynteny sterigmatocystin/aflatoxin clusters along with intergenic regions containing 14 binding sites for AfIR, a transcription factor for the activation of sterigmatocystin/aflatoxin biosynthetic genes. Aspergillus to Podospora represents a large metabolic gene transfer which could have contributed to fungal metabolic diversity. Transposable elements and other mobile genetic elements like plasmids and viruses could allow for chromosomal rearrangement and integration of foreign genetic material. Horizontal gene transfer could have significantly contributed to fungal genome remodeling and metabolic diversity.
Acquired pathogenic capabilities
In Stagonospora and Pyrenophora, as well as in Fusarium and Alternaria, horizontal gene transfer provides a powerful mechanism for fungi to acquire pathogenic capabilities to infect a new host plant. Horizontal gene transfer and interspecific hybridization between pathogenic species allow for hybrid offspring with an expanded host range. This can cause disease outbreaks on new crops when an encoded protein is able to cause pathogenicity.
The interspecific transfer of virulence factors in fungal pathogens has been shown between Stagonospora nodorum and Pyrenophora tritici-repentis, where a host-selective toxin from S. nodorum conferred virulence to P. tritici-repentis on wheat.
In Fusarium, a nonpathogenic strain was experimentally converted into a pathogen, and the transfer of large genome portions could have contributed to pathogen adaptation. Fusarium graminearum, Fusarium verticillioides, and Fusarium oxysporum are maize and tomato pathogens that produce fumonisin mycotoxins that contaminate grain. These examples highlight the apparent polyphyletic origins of host specialization and the emergence of new pathogenic lineages distinct from genetic backgrounds. The ability to transfer genetic material could increase disease in susceptible plant populations.
References
Genetics
Microbial population biology | Plant–fungus horizontal gene transfer | [
"Biology"
] | 1,856 | [
"Genetics"
] |
56,074,260 | https://en.wikipedia.org/wiki/Malin%20Falkenmark | Malin Fredrika Sofia Sundberg-Falkenmark (21 November 1925 – 3 December 2023) was a Swedish hydrologist. Falkenmark is best known for her long-standing work and expertise on the sustainable use of water resources to meet human and ecosystem needs. Her work is characterized by an integration of both natural- and social-science approaches. She is particularly known for developing what is now known as the Falkenmark Water Stress Indicator, an indicator used to measure and describe the water available for human use (water scarcity). She was the daughter of Halvar Sundberg.
Life and career
Falkenmark graduated as a Fil. Mag. (Swedish equivalent to a master's degree) in mathematics, physics, chemistry and mechanics at Uppsala University, in 1951. In 1964, she became the first Fil. Lic. (Swedish equivalent of PhD at the time) of hydrology in Sweden, where she studied the “Bearing capacity of an ice sheet”. Later in 1975, she was awarded the title of PhD Honoris causa at Linköping University.
Falkenmark's professional history includes holding positions as State Hydrologist at the Swedish Meteorological and Hydrological Institute (1950s-60's), and later at the Swedish Natural Science Research Council (1965–95), where she became Executive Secretary, and later Chair of the Swedish National Committees for UNESCO’s International Hydrological Decade/Programme.
As the Chair of the Scientific Program Committee at the Stockholm International Water Institute (SIWI) (1991–2003), Falkenmark led the establishment of the annual Stockholm World Water Week (initially named Stockholm Water Symposium), which grew to be the "annual focal point for the globe’s water issues".
Falkenmark held numerous posts on international boards and committees, including as General Rapporteur of the United Nations Water Conference Mar del Plata (1977); World Bank Consultant with special responsibility regarding the looming water scarcity (1988–92); member of the UN Committee on Energy and Natural Resources for Development and the UN Millennium Project Task Force for Environmental Sustainability; member of the Technical Advisory Committee of the Global Water partnership; and Scientific Advisor to the Global Environment Facility and the Comprehensive Freshwater Assessment of the World.
Falkenmark was a Professor of Applied and International Hydrology. Between 1976 and 1979, she led the planning and development of the Department for Water and Environment Studies at Linköping University; after her formal retirement, in 1991, she became part of Stockholm University’s Department of Systems Ecology.
In 2007, she joined the Stockholm Resilience Center as a senior researcher. She was also the senior scientific advisor at the Stockholm International Water Institute (SIWI).
In 2018, she shared the Blue Planet Prize with ecologist Brian Walker.
Falkenmark died on 3 December 2023, at the age of 98.
The Falkenmark Water Stress Indicator
In an article published in 1989, Falkenmark introduced an indicator for water stress that expresses the level of water scarcity in a certain region as the amount of renewable freshwater that is available for each person each year. It eventually became known as the Falkenmark Indicator, and is not only one of the earliest, but also one of the most used indicators to measure and describe water availability for human use.
The level of water scarcity in a certain country was determined based on thresholds: If the amount of renewable water in a country is below 1,700 m3 per person per year, that country is said to be experiencing water stress; below 1,000 m3 it is said to be experiencing water scarcity; and below 500 m3, absolute water scarcity.
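These thresholds make the indicator straightforward to compute from a country's renewable freshwater volume and population. A minimal Python sketch (the volume and population figures are hypothetical):
def falkenmark_class(m3_per_person_per_year: float) -> str:
    """Classify annual renewable freshwater per capita using the Falkenmark thresholds."""
    if m3_per_person_per_year < 500:
        return "absolute water scarcity"
    if m3_per_person_per_year < 1000:
        return "water scarcity"
    if m3_per_person_per_year < 1700:
        return "water stress"
    return "no stress"

renewable_freshwater_m3 = 50e9        # hypothetical: 50 km^3 per year
population = 40e6                     # hypothetical: 40 million people
per_capita = renewable_freshwater_m3 / population
print(per_capita, falkenmark_class(per_capita))   # 1250.0 -> "water stress"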
The Blue and Green Water Paradigm
The concepts of green and blue water were first introduced by Falkenmark in 1995; she defined green water as "the rainwater that infiltrates into the root zone and is used for biomass production", and blue water as "the water that either runs off from the soil surface or percolates beyond the root zone to form groundwater".
In a later publication, green water was defined as the soil water held in the unsaturated zone, formed by precipitation and available to plants, while blue water refers to liquid water in rivers, lakes, wetlands and aquifers, which can be withdrawn for irrigation and other human uses. Both resources are important for food production; rainfed agriculture uses green water only, while irrigated agriculture uses both green and blue water.
Selected awards and recognition
IWRA Ven Te Chow Memorial Award (1991)
KTH Stora Pris (1995)
Volvo Environment Prize (1998)
International Hydrology Prize (1998)
Rachel Carson Prize (2005)
Crystal Drop Award (2005)
Prince Albert II of Monaco Water and Desertification Award (2010)
Blue Planet Prize (2018)
References
1925 births
2023 deaths
Swedish hydrologists
Uppsala University alumni
Hydrologists
Swedish women scientists
Scientists from Stockholm | Malin Falkenmark | [
"Environmental_science"
] | 1,003 | [
"Hydrology",
"Hydrologists"
] |
56,075,062 | https://en.wikipedia.org/wiki/Capital%20and%20income%20breeding | Capital breeding and income breeding refer to the methods by which some organisms perform time breeding and use resources to finance their breeding. The former "describes the situation in which reproduction is financed using stored capital; [whereas the latter] [...] refers to the use of concurrent intake to pay for a reproductive attempt."
Income breeders who are growing especially fast hold off the development of their offspring after a threshold is reached so they can produce more offspring, although this does not occur in slower growing income breeders. An organism can be both a capital and an income breeder; the parasitoid Eupelmus vuilletti, for example, is an income breeder in terms of sugars, but a capital breeder in terms of lipids. A different example of the interaction between capital and income breeding is found in Vipera aspis; although these snakes are capital breeders, they lay larger litters when food is abundant, which is a characteristic of income breeders.
The dichotomy between income and capital breeders was introduced in 1980 by R. H. Drent and S. Daan to explain why birds usually laid their eggs later than the time that would maximize nestling survival for the population.
Ectotherms are generally capital breeders, whereas endotherms rely on income breeding more often. This difference is likely due to the difference in maintenance costs, and thus in the energy that can be allocated to stores.
Determinants of capital versus income breeding
In organisms that breed multiple times and live in places where food availability and mortality change significantly on the basis of season, it is predicted that capital breeding will be more prevalent, as the time when the organism is not breeding but when conditions are still favourable will be dedicated to rebuilding stores, therefore allowing them to achieve higher rates of reproduction. Capital breeding also increases with size (at least in organisms with optimal storage and indeterminate growth), as the energy dedicated to growth gives less and less return, thus meaning that energy dedicated to storage will have more return compared to that dedicated to growth. But, in eastern grey kangaroos, capital breeding is used during times of food scarcity, whereas income breeding is used during times of normal food availability. Income breeding, on the other hand, will generally be favoured in non-seasonal environments, as holding off breeding will not increase the chances that the offspring will survive. In addition, high or unpredictable demands during reproduction, which would cause the energy needed to exceed the energy provided by an income breeding strategy, may encourage capital breeding. Similarly, the possibility of decreased agility or increased conspicuity associated with, for example, egg-carrying could increase predation on reproducing individuals, making a strategy based on capital breeding more favourable, so as to avoid having to forage while reproducing.
This model does not hold for organisms that have a feeding season right after the breeding season. Copepods, for example, have their breeding season just before the feeding season, and are primarily divided into mainly capital or mainly income breeders on the basis of geography.
In endotherms
Endotherms have a higher level of energy that needs to be dedicated to maintenance, thus explaining their increased reliance on income breeding.
In birds
The terms capital breeding and income breeding originated to explain why most individuals lay after the time when nestlings are most likely to survive. Both systems fit with the optimal time of laying for low-quality and high-quality individuals. High-quality individuals may choose to hold off laying until another egg is produced, as the decrease in the likelihood of survival for each egg is compensated by the additional egg. This is the opposite in low-quality individuals, in which the time to make an extra egg decreases the survival of each egg to the point where an additional egg cannot compensate for this loss.
In pinnipeds
The reliance on capital versus income breeding in pinnipeds primarily depends on the availability of food, with more food favouring an increased reliance on capital breeding. This is because increased food availability allows for the accumulation of capital, which allows a species to use capital breeding, which is more efficient as there are fewer energetic costs associated with it. Increased seasonality is another factor in capital versus income breeding, with higher seasonality associated with an increased reliance on income breeding, for the reasons discussed previously. Increased unpredictability also affects a pinniped's reliance on capital breeding; less predictability increases reliance on capital breeding, as a species can use its accumulated stores to breed when there is less food available, whereas an income breeder cannot.
In ectotherms
Ectotherms are generally capital breeders, likely because they have a lower level of body maintenance, meaning that more energy can be converted to body stores.
In copepods
Copepods generally have their reproduction strategy influenced by geography, with those at higher latitudes usually being capital breeders, and those in waters closer to the equator conforming to the income breeding strategy. This is because more temperate waters allow for a longer feeding season, which allows for multiple generations in income breeders (who reproduce during the feeding season), whereas colder seas with shorter feeding seasons favour capital breeders, who are not as much affected compared to income breeders by having to get their offspring to maturity before the feeding season ends. The length of the feeding season also selects for size in these organisms; income breeders are as small as possible so they can take advantage of having multiple generations per breeding season, in contrast with capital breeders, which are as large as possible so as to catch the most food to put into their reserves.
References
Reproduction | Capital and income breeding | [
"Biology"
] | 1,128 | [
"Biological interactions",
"Behavior",
"Reproduction"
] |
54,500,417 | https://en.wikipedia.org/wiki/NGC%207066 | NGC 7066 is a spiral galaxy located about 210 million light-years away in the constellation of Pegasus. NGC 7066 was discovered by astronomer Lewis Swift on August 31, 1886.
See also
List of NGC objects (7001–7840)
References
External links
Spiral galaxies
Pegasus (constellation)
7066
66747
Astronomical objects discovered in 1886
11741 | NGC 7066 | [
"Astronomy"
] | 72 | [
"Pegasus (constellation)",
"Constellations"
] |
54,500,840 | https://en.wikipedia.org/wiki/Bohr%20model%20of%20the%20chemical%20bond | In addition to the model of the atom, Niels Bohr also proposed a model of the chemical bond.
He first proposed this model in the article "Systems containing several nuclei", the third and last of the classic series of articles by Bohr published in November 1913 in the Philosophical Magazine.
According to his model for a diatomic molecule, the electrons of the atoms of the molecule form a rotating ring whose plane is perpendicular to the axis of the molecule and equidistant from the atomic nuclei. The dynamic equilibrium of the molecular system is achieved through the balance between the attraction of the nuclei to the plane of the electron ring and the mutual repulsion of the nuclei. The Bohr model of the chemical bond took into account the Coulomb repulsion: the electrons in the ring are at the maximum distance from each other.
Thus, according to this model, the methane molecule is a regular tetrahedron with the carbon nucleus at its centre and a hydrogen nucleus at each corner. The chemical bonds between them are formed by four two-electron rings rotating around the lines connecting the centre with the corners.
The Bohr model of the chemical bond could not explain the properties of molecules. Attempts to improve it were undertaken many times, but did not succeed.
A working theory of chemical bonding was formulated only by quantum mechanics, on the basis of the uncertainty principle and the Pauli exclusion principle. In contrast to the Bohr model of chemical bonding, it turned out that the electron cloud is mainly concentrated along the line between the nuclei, providing a Coulomb attraction between them. For many-electron atoms, the valence bond theory, laid down in 1927 by Walter Heitler and Fritz London, was a successful approximation.
References
Bibliography
Chemical bonding
Quantum chemistry
Chemistry theories
Electron | Bohr model of the chemical bond | [
"Physics",
"Chemistry",
"Materials_science"
] | 373 | [
"Electron",
"Quantum chemistry stubs",
"Quantum chemistry",
"Molecular physics",
"Theoretical chemistry stubs",
"Quantum mechanics",
"Theoretical chemistry",
"Condensed matter physics",
" molecular",
"nan",
"Atomic",
"Chemical bonding",
"Physical chemistry stubs",
" and optical physics"
] |
54,501,194 | https://en.wikipedia.org/wiki/Derived%20tensor%20product | In algebra, given a differential graded algebra A over a commutative ring R, the derived tensor product functor is
− ⊗_A^L − : D(M_A) × D(_A M) → D(M_R),
where M_A and _A M are the categories of right A-modules and left A-modules and D refers to the homotopy category (i.e., derived category). By definition, it is the left derived functor of the tensor product functor − ⊗_A − : M_A × _A M → M_R.
Derived tensor product in derived ring theory
If R is an ordinary ring and M, N right and left modules over it, then, regarding them as discrete spectra, one can form the smash product of them:
M ⊗_R^L N,
whose i-th homotopy is the i-th Tor:
π_i(M ⊗_R^L N) = Tor_i^R(M, N).
It is called the derived tensor product of M and N. In particular, π_0(M ⊗_R^L N) is the usual tensor product of modules M and N over R.
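As a concrete check of the relation between the homotopy groups of the derived tensor product and Tor, here is the standard textbook computation for R = Z and M = N = Z/2 (an illustrative example, not taken from the article). Applying $- \otimes_{\mathbf{Z}} \mathbf{Z}/2$ to the free resolution
$$0 \to \mathbf{Z} \xrightarrow{\;\cdot 2\;} \mathbf{Z} \to \mathbf{Z}/2 \to 0$$
gives the complex $\mathbf{Z}/2 \xrightarrow{\;0\;} \mathbf{Z}/2$, so
$$\pi_0(\mathbf{Z}/2 \otimes^{L}_{\mathbf{Z}} \mathbf{Z}/2) = \operatorname{Tor}_0^{\mathbf{Z}}(\mathbf{Z}/2, \mathbf{Z}/2) = \mathbf{Z}/2, \qquad \pi_1(\mathbf{Z}/2 \otimes^{L}_{\mathbf{Z}} \mathbf{Z}/2) = \operatorname{Tor}_1^{\mathbf{Z}}(\mathbf{Z}/2, \mathbf{Z}/2) = \mathbf{Z}/2,$$
and $\pi_i = 0$ for $i \geq 2$; the degree-0 part recovers the ordinary tensor product $\mathbf{Z}/2 \otimes_{\mathbf{Z}} \mathbf{Z}/2 = \mathbf{Z}/2$.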
Geometrically, the derived tensor product corresponds to the intersection product (of derived schemes).
Example: Let R be a simplicial commutative ring, Q(R) → R be a cofibrant replacement, and Ω_{Q(R)} be the module of Kähler differentials. Then
L_R = Ω_{Q(R)} ⊗_{Q(R)}^L R
is an R-module called the cotangent complex of R. It is functorial in R: each R → S gives rise to L_R → L_S. Then, for each R → S, there is the cofiber sequence of S-modules
L_R ⊗_R^L S → L_S → L_{S/R}.
The cofiber L_{S/R} is called the relative cotangent complex.
See also
derived scheme (derived tensor product gives a derived version of a scheme-theoretic intersection.)
Notes
References
Lurie, J., Spectral Algebraic Geometry (under construction)
Lecture 4 of Part II of Moerdijk-Toen, Simplicial Methods for Operads and Algebraic Geometry
Ch. 2.2. of Toen-Vezzosi's HAG II
Algebraic geometry | Derived tensor product | [
"Mathematics"
] | 362 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
54,501,288 | https://en.wikipedia.org/wiki/Digital%20archaeology | Digital archaeology is the application of information technology and digital media to archaeology. This includes the use of tools such as databases, 3D models, digital photography, virtual reality, augmented reality, and geographic information systems. Computational archaeology, which covers computer-based analytical methods, can be considered a subfield of digital archaeology, as can virtual archaeology. Digital archaeology plays a key role in data collection, analysis, and public outreach, enhancing the study and preservation of archaeological sites and artifacts.
The use of digital technology to conduct archaeological research allows data to be collected without the invasion or destruction of archaeological sites and the cultural heritage they hold, aiding the preservation of archaeological data. Many early archaeological sites were investigated in depth in this way. Applications of this technology have aided the reconstruction of historical monuments and artefacts such as pottery, human fossils, and mummified remains.
Subfields
Virtual archaeology
Virtual archaeology is a subfield of digital archaeology that creates and uses virtual models and simulations of archaeological sites, artifacts, and processes. It makes use of 3D modeling, virtual reality (VR), augmented reality (AR), and other technologies to recreate or visualize archaeological findings.
Computational archaeology
Computational archaeology is a subfield of digital archaeology that focuses on the analysis and interpretation of archaeological data using advanced computational techniques. This field employs data modeling, statistical analysis, and computer simulations to understand and reconstruct past human behaviors and societal developments.
Methods and Technologies in Digital Archaeology
Geographical Information Systems
A Geographical Information System (GIS) is used within digital archaeology to document, survey and analyse the spatial data of archaeological sites. The use of a GIS within the study of archaeology involves in-field analysis and collection of archaeological and environmental data, predominantly through aerial photography, spatial cognition, digital maps and satellite imaging. The application of GIS in the analysis of archaeological data allows archaeologists to process the data collected efficiently, recreate landscapes of archaeological sites through spatial analysis, and supply the archaeological findings to public archives. The use of this digital method has enhanced the ability of archaeologists to analyse the geography and spatial relationships of ancient archaeological sites.
3D Modelling
3D modelling is a digital technique used within archaeological research to interpret, analyse, and visualise data. The technique utilises methods of satellite imaging and aerial photography, amongst other digital imaging techniques to construct 3D models of the geography, architecture and archaeological findings of historical sites.
The application of computer technology allows large amounts of image sequencing to be collected and processed by archaeologists, enhancing the photorealistic texture mapping within the construction of these 3D models.
Remote Sensing
Aerial Photography
Aerial photography is a tool used within the field of archaeological research to discover, place and document archaeological sites. The application of this technology developed from its previous use as a method of military surveillance throughout the First World War, and it offers a non-destructive means of archaeological research.
The documentation of archaeological sites through aerial photography techniques involves the use of digital cameras, GIS and rectification software to collect numerous black-and-white photographs of the site for archaeological study. These photographs can be used by archaeologists to enhance the details of the site and plot the composite features. These results are often analysed to create a geographical framework, allowing archaeologists to create a map inclusive of the site's landscape features.
Sites recognised by Aerial Photography are then classified into shadow sites, crop-marks and soil-marks.
Photogrammetry
Photogrammetry is the science and technology of obtaining reliable information about physical objects and the environment through the process of recording, measuring and interpreting photographic images and patterns of electromagnetic radiant imagery and other phenomena.
LiDAR
LiDAR is a method for determining ranges by targeting an object or a surface with a laser and measuring the time for the reflected light to return to the receiver.
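As a worked illustration of the time-of-flight principle, the range follows from half the round-trip travel time of the pulse (the echo delay below is a hypothetical value):
# Range from a single LiDAR return: distance = speed of light * round-trip time / 2.
SPEED_OF_LIGHT = 299_792_458.0     # m/s
round_trip_time_s = 6.67e-7        # hypothetical echo delay of 667 ns
print(SPEED_OF_LIGHT * round_trip_time_s / 2)   # roughly 100 m to the reflecting surface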
Total Station Theodolite
A Total Station Theodolite (TST) is a surveying instrument that utilises electronic distance measurement technology to analyse archaeological sites. TST technology allows the distance of an archaeological site to be documented and maps to be established. This is conducted through the measurement of distance between the TST instrument and the site selected. The use of reflectorless TST technology as a method of archaeological research utilises an infrared beam to record measurements of archaeological sites, this allows archaeologists to study the spatial landscape of sites despite possible inconsistencies in elevation.
TST technology is considered a direct surveying technique as it utilises the manual acquisition of points of reference by the operator. TST techniques allow data to be downloaded and analysed after the archaeological survey is complete, limiting the awareness of an archaeologist when conducting in-field analysis. However, if the TST technology is connected to a portable computer recording the archaeological data, an archaeologist is able to view the data as it is collected.
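A typical processing step for total station data is converting each polar reading (slope distance, horizontal angle, zenith angle) into local Cartesian coordinates relative to the instrument. A minimal Python sketch, with the reading values and angle conventions as assumptions:
import math

def tst_to_local_xyz(slope_distance_m, azimuth_deg, zenith_deg):
    """Convert a total-station polar reading to local (easting, northing, elevation).
    Azimuth is measured clockwise from local north; zenith is measured from vertical;
    the instrument sits at the origin of the local grid."""
    az = math.radians(azimuth_deg)
    zen = math.radians(zenith_deg)
    horizontal = slope_distance_m * math.sin(zen)
    easting = horizontal * math.sin(az)
    northing = horizontal * math.cos(az)
    elevation = slope_distance_m * math.cos(zen)
    return easting, northing, elevation

# Hypothetical reading to a surveyed find: 42.8 m slope distance,
# 135 degree azimuth, 92 degree zenith angle (slightly below the horizon).
print(tst_to_local_xyz(42.8, 135.0, 92.0))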
Data Collection
The use of Information Communication Technology and digital techniques in archaeological studies has furthered the development of documenting archaeological data. This incorporation of modern technology throughout the process of conducting archaeological research has allowed commercial, academic and heritage management fields to become increasingly unified. The recording of archaeological data is distinguished through methods of acquisition, analysis, and representation throughout the process of data handling.
Data collected through digital technology when conducting archaeological research is stored on archives at digital repositories. The databases are then checked for integrity to ensure the data can be accessed and analysed for further research. The development of Information Communication Technology and digital techniques has allowed larger amounts of data to be collected and stored from archaeological research.
Applications in Fieldwork
Virtual Reconstruction of Roman Wall Paintings in the Sarno Baths
The application of digital technology through virtual analysis and 3D reconstruction of the frigidarium in the Sarno Baths at Pompeii has allowed archaeologists to reconstruct and preserve deteriorating wall paintings. The reconstruction involved digitally removing salt deposits and abrasions in the paint layers. The use of virtual analysis and digital imaging by archaeologists allowed the preservation and reconstruction of the wall decorations to reveal further archaeological data on the methods of their original construction.
Delphi4Delphi
The Digital Enterprise for Learning Practice of Heritage Initiative for Delphi, otherwise referred to as Delphi4Delphi, is a research project conducted by archaeologists to document and reconstruct the historical sites at Delphi, Greece. The project aimed to capture and reconstruct archaeological monuments and artefacts located in Delphi through 3D imaging and reconstruction. The archaeological sites studied were the Temple of Apollo, the Sanctuary of Athena Pronaia, the Treasury of the Siphnians, the theatre and gymnasium, and the bronze charioteer and marble sphinx located at the site. The project utilised digital methods of spectral documentation, 3D stereo photography systems, and the processing of 2D image sequences into 3D structures to document, analyse and reconstruct the archaeological sites.
Multi-Object Segmentation for Assisted Image Reconstruction
The Multi-Object Segmentation for Assisted Image Reconstruction, or MOSAIC+, is a project conducted by archaeologists to reconstruct fragments found in the Church of St. Trophimena in Salerno, Italy. Archaeologists conducted research involving the craquelure detection of the Visitation fresco, painted by Francesco Salviati in 1538, utilising differing dimensions of the patch and in-painting present. This study found the use of this digital imaging technology to be non-optimal due to the distribution of larger holes within the image's restoration. Further research was conducted into the fresco fragments and their reconstruction before and after undergoing the processes of craquelure detection and in-painting.
MOSAIC+ aimed to develop the work of archaeologists through the cataloguing, indexing, retrieval and reconstruction of fragments found at archaeological sites, allowing the extraction of colour and shape features to be completed accurately. Through the application of digital techniques throughout this research, the results indicate the possibility of virtual reconstruction to restore the appearance of archaeological artworks and aid the reconstruction of fragmented artefacts by archaeologists.
Maxentius 3D Project
The Maxentius 3D Project, undertaken by the Sapienza University located in Italy, is a research project involving the 3D reconstruction of the Circus of Maxentius in Rome. The Circus of Maxentius, situated in the Appian Way regional park, is a structure commissioned by the Roman Emperor Maxentius towards the beginning of the 4th century A.D. However, due to its position within a regionally protected area, the vegetation preventing the reconnaissance of the structure by researchers cannot be removed in order to preserve the local ecosystem. Although the site is largely covered by this vegetation, the study of archaeological data collected through cartography, axonometric drawings, archaeological plans and historical illustrations has allowed archaeologists to construct a 3D model of the monument used to document, analyse and hypothesise its reconstruction.
The project involved the archaeological analysis of the two towers of the Oppidum, the Carceres, the Stands, the Tribunal, the Pulvinar, the Spina, the Porta Libitinensis, the Porta Triumphalis and the terrain to create a scientifically correct 3D model of the site. It is through this analysis that archaeologists were able to document a terraced roof, twin staircases and embedded amphoras located at the site, and were able to form a deeper understanding of the site's original construction. The use of archaeological data and digital techniques throughout this research project revealed the possibility for 3D imaging to hypothesise the accurate reconstruction of archaeological sites.
The 3D Reconstruction of Soli, Cyprus
The 3D reconstruction of Soli, Cyprus, has allowed archaeologists to create 3D visual models of cultural heritage sites and archaeological architecture that are inaccessible or otherwise closed to direct documentation, through the analysis of open data from social media sites. Soli, initially designed by the Athenian statesman Solon, is an ancient city built during the 6th century BC and is located in the northern region of Cyprus. The study focused on the reconstruction of the amphitheatre located at the site, a Roman structure built on a previous Greek theatre dating back to the 2nd century BC.
The application of digital imaging, distortion correction and geo-referencing techniques to estimate the site's 3D landscape features from 2D image sequences, and verification of the documentation through existing drawings and Google Earth maps, allowed archaeologists to reconstruct the amphitheatre. Archaeologists were then able to create a geo-referenced 3D model and a digital surface model through processes of image extraction, quality analysis, image alignment, 3D point cloud generation, modelling, photorealistic texture mapping, and geo-referencing. Researchers additionally utilised KMPlayer software to extract the image sequence frames into JPEGs; a lens correction model was then applied, and the points of interest throughout the site were matched through the overlap of selected images. Through the application of aerial video imagery and digital imaging techniques throughout this project, archaeologists were able to capture, store, process, share, visualise and annotate 3D models of the amphitheatre located at the inaccessible site of Soli, Cyprus through time- and cost-effective measures in the field.
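The frame-extraction step described above can be sketched in a few lines of Python; this illustrative sketch uses OpenCV rather than the KMPlayer workflow the researchers describe, and the file names and sampling rate are placeholders:

import cv2  # OpenCV stands in for KMPlayer here, purely for illustration

cap = cv2.VideoCapture("aerial_survey.mp4")  # hypothetical aerial video file
frame_index, saved = 0, 0
while True:
    ok, frame = cap.read()
    if not ok:  # end of video
        break
    if frame_index % 30 == 0:  # keep roughly one frame per second of 30 fps footage
        cv2.imwrite(f"frame_{saved:04d}.jpg", frame)  # JPEG frames for later alignment
        saved += 1
    frame_index += 1
cap.release()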
A Night in the Forum
A Night in the Forum is an Educational Environmental Narrative compatible with PlayStation® VR that is modelled from the 3D reconstruction of the forum of Augustus in Rome. The project utilised 3D modelling and Virtual Reality technology, applying Image Based Modelling to combine computer vision and photogrammetric algorithms to reconstruct the archaeological site from 2D images. The construction of the VR game involved stages of pre-production, production, and level creation and authoring.
The pre-production stage involved the documentation and analysis of the archaeological data relevant to the game context. This process involved the geometric acquisition of cultural artefacts through the use of image-based and range-based sensors, allowing researchers to obtain digital replicas of the objects. The data gathered by researchers in the field was processed through Agisoft Photoscan software to estimate the camera positions and depth information, which were formed into point clouds.
The production phase of this project prioritised the archaeological interpretation of data, 3D modelling reconstruction, performance analysis and optimisation of assets. The process involved the application of three-dimensional surveying and topographic surveying to ensure realism within an aesthetic rendering of the VR game. The 3D models obtained from surveys allowed graphic simulations to be conducted and the extraction of metric data to be accurate. This allowed the virtual placement and restoration of the documented fragments to be hypothesised.
The Level Creation and Authoring phase of the project involved the graphic layout and environmental simulation of the VR game and the addition of details that confer realism. This process involved the application of scene-dressing, real time rendering and soundscapes.
Through the application of 3D modelling and Virtual Reality technology to create A Night in the Forum, the project aims to allow players to experience the complex administration of Imperial Rome and gain knowledge of the forum of Augustus. The use of visualisation allowed archaeologists to enhance the understanding of archaeological contexts and to study archaeological sites through visual models. However, the use of this digital technology throughout the process of developing this game resulted in prolonged production, increased costs and the necessary involvement of experts in the fields of both archaeology and computer graphics.
Evaluation
Benefits
The use of digital technology in the field of archaeology allows the analysis, documentation and reconstruction of data, historical sites and artefacts to be conducted through non-intrusive methods, allowing archaeologists to preserve the data and cultural heritage held within these archaeological findings.
As the Information Communication Technology available within the field of archaeology develops through technological advancements, archaeologists are able to obtain further access to these technologies, allowing greater amounts of archaeological data to be accurately documented and analysed. The technology currently available has allowed data to be efficiently disseminated, processed and supplied to public archives, with the use of in-field surveillance techniques allowing a greater amount of on-site data analysis to be conducted by archaeologists.
The use of 3D modelling technology within digital archaeology allows researchers to accurately model archaeological sites, providing further information to formulate archaeological perspectives and promoting the communication between the cultural heritage of archaeological sites and the public population.
Criticisms
The use of digital technology within archaeology has allowed greater amounts of data to be collected by archaeologists. This collection of data requires greater maintenance of digital archives, often without a clear understanding of its relevance within archaeological research and dependent on further technological advancements to be accurately interpreted.
As the digital techniques used for archaeological research are developed, the sophistication of these technological advancements creates a larger margin of error for archaeologists when conducting, documenting and reconstructing archaeological research.
See also
Archaeogaming
Digital heritage
Digital history
Digital humanities
Digital Archaeology (exhibition)
Institute for Digital Archaeology
Open data
References
Further reading
Archaeological sub-disciplines
Digital humanities | Digital archaeology | [
"Technology"
] | 2,916 | [
"Digital humanities",
"Computing and society"
] |
54,501,555 | https://en.wikipedia.org/wiki/Active%20Shipbuilding%20Experts%27%20Federation | The Active Shipbuilding Experts' Federation is an international non-governmental organization. Its purpose is to contribute to the sound development of international maritime transportation and the further enhancement of world maritime safety, marine environment protection and maritime security, through communication and cooperation within the shipbuilding industry on technical agendas, especially in the International Maritime Organization. The federation's activities cover matters relating to the building of new ships as well as repairs, conversions and offshore units.
Members
The Active Shipbuilding Experts' Federation has 10 members, which construct more than 90% of the global share of new ship deliveries. Members are national shipbuilders' associations or a major shipbuilder in the absence of an association.
The Shipbuilders' Association of Japan (SAJ)
China Association of the National Shipbuilding Industry (CANSI)
Korea Offshore & Shipbuilding Association (KOSHIPA)
Turkish Shipbuilders' Association (GISBIR)
Association of Marine Industries of Malaysia (AMIM)
The Association of Singapore Marine & Offshore Energy Industries (ASMI)
Colombo Dockyard PLC (Sri Lanka)
Shipyard Association of India (SAI)
Indonesia Shipbuilding and Offshore Industry Association (IPERINDO)
Shipbuilding Industry Corporation (SBIC)
Thai Shipbuilding and Repairing Association (TSBA)
Notes and references
External links
Active Shipbuilding Experts' Federation
Marine engineering organizations
Organizations based in Seoul | Active Shipbuilding Experts' Federation | [
"Engineering"
] | 258 | [
"Marine engineering organizations",
"Marine engineering"
] |
54,501,624 | https://en.wikipedia.org/wiki/Factorization%20homology | In algebraic topology and category theory, factorization homology is a variant of topological chiral homology, motivated by an application to topological quantum field theory and cobordism hypothesis in particular. It was introduced by David Ayala, John Francis, and Nick Rozenblyum.
References
External links
Homological algebra | Factorization homology | [
"Mathematics"
] | 65 | [
"Mathematical structures",
"Topology stubs",
"Fields of abstract algebra",
"Topology",
"Category theory",
"Homological algebra"
] |
54,501,684 | https://en.wikipedia.org/wiki/Reciprocals%20of%20primes | The reciprocals of prime numbers have been of interest to mathematicians for various reasons. They do not have a finite sum, as Leonhard Euler proved in 1737.
Like rational numbers, the reciprocals of primes have repeating decimal representations. In his later years, George Salmon (1819–1904) concerned himself with the repeating periods of these decimal representations of reciprocals of primes.
Contemporaneously, William Shanks (1812–1882) calculated numerous reciprocals of primes and their repeating periods, and published two papers "On Periods in the Reciprocals of Primes" in 1873 and 1874. In 1874 he also published a table of primes, and the periods of their reciprocals, up to 20,000 (with help from and "communicated by the Rev. George Salmon"), and pointed out the errors in previous tables by three other authors.
Rules for calculating the periods of repeating decimals from rational fractions were given by James Whitbread Lee Glaisher in 1878. For a prime $p$ other than 2 and 5, the period of its reciprocal divides $p - 1$.
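In decimal this period is the multiplicative order of 10 modulo p, which can be computed directly. A minimal Python sketch (the function name is ad hoc) that finds the period for a few primes and checks that it divides p − 1:

def decimal_period(p):
    # Least k with 10**k leaving remainder 1 modulo p (p prime, p not 2 or 5).
    k, r = 1, 10 % p
    while r != 1:
        r = (r * 10) % p
        k += 1
    return k

for p in [3, 7, 11, 13, 17, 37, 101]:
    k = decimal_period(p)
    assert (p - 1) % k == 0  # the period divides p - 1
    print(p, k)              # e.g. 7 -> 6, 37 -> 3, 101 -> 4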
The sequence of recurrence periods of the reciprocal primes appears in the 1973 Handbook of Integer Sequences.
List of reciprocals of primes
* Full reptend primes are italicised.
† Unique primes are highlighted.
Full reptend primes
A full reptend prime, full repetend prime, proper prime or long prime in base b is an odd prime number p such that the Fermat quotient
$$q_p(b) = \frac{b^{p-1} - 1}{p}$$
(where p does not divide b) gives a cyclic number with p − 1 digits. Therefore, the base b expansion of $1/p$ repeats the digits of the corresponding cyclic number infinitely.
Unique primes
A prime p (where p ≠ 2, 5 when working in base 10) is called unique if there is no other prime q such that the period length of the decimal expansion of its reciprocal, 1/p, is equal to the period length of the reciprocal of q, 1/q. For example, 3 is the only prime with period 1, 11 is the only prime with period 2, 37 is the only prime with period 3, 101 is the only prime with period 4, so they are unique primes. The next larger unique prime is 9091 with period 10, though the next larger period is 9 (its prime being 333667). Unique primes were described by Samuel Yates in 1980. A prime number p is unique if and only if there exists an n such that
$$\frac{\Phi_n(10)}{\gcd(\Phi_n(10),\, n)}$$
is a power of p, where $\Phi_n(10)$ denotes the $n$th cyclotomic polynomial evaluated at 10. The value of n is then the period of the decimal expansion of 1/p.
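The small unique primes listed below can be recovered computationally: the primes whose reciprocal has period exactly n are the prime factors of 10^n − 1 whose period equals n, so collecting the periods realised by exactly one prime reproduces the start of the list. A rough Python sketch (helper names are ad hoc, and plain trial division is only adequate for the small exponents used here):

def decimal_period(p):
    k, r = 1, 10 % p
    while r != 1:
        r = (r * 10) % p
        k += 1
    return k

def prime_factors(n):
    # Plain trial division; fine for 10**n - 1 with n up to about 12.
    fs, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            fs.add(d)
            n //= d
        d += 1
    if n > 1:
        fs.add(n)
    return fs

for n in range(1, 13):
    ps = [p for p in sorted(prime_factors(10**n - 1)) if decimal_period(p) == n]
    if len(ps) == 1:
        print(n, ps[0])  # prints 1 3, 2 11, 3 37, 4 101, 9 333667, 10 9091, 12 9901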
At present, more than fifty decimal unique primes or probable primes are known. However, there are only twenty-three unique primes below 10^100.
The decimal unique primes are
3, 11, 37, 101, 9091, 9901, 333667, 909091, ... .
References
External links
Prime numbers
Rational numbers | Reciprocals of primes | [
"Mathematics"
] | 612 | [
"Prime numbers",
"Mathematical objects",
"Numbers",
"Number theory"
] |
54,502,546 | https://en.wikipedia.org/wiki/Chrystal%27s%20equation | In mathematics, Chrystal's equation is a first order nonlinear ordinary differential equation, named after the mathematician George Chrystal, who discussed the singular solution of this equation in 1896. The equation reads as
$$\left(\frac{dy}{dx}\right)^2 + Ax\,\frac{dy}{dx} + By + Cx^2 = 0,$$
where $A$, $B$ and $C$ are constants, which upon solving for $dy/dx$ gives
$$\frac{dy}{dx} = -\frac{Ax}{2} \pm \frac{1}{2}\sqrt{A^2x^2 - 4By - 4Cx^2}.$$
This equation is a generalization of particular cases of Clairaut's equation, since it reduces to a form of Clairaut's equation under the condition given below.
Solution
Introducing the transformation gives
Now, the equation is separable, thus
The denominator on the left hand side can be factorized if we solve the roots of the equation and the roots are , therefore
If , the solution is
where is an arbitrary constant. If , () then the solution is
When one of the roots is zero, the equation reduces to a special-case of Clairaut's equation and a parabolic solution is obtained in this case, and the solution is
The above family of parabolas are enveloped by the parabola , therefore this enveloping parabola is a singular solution.
References
Eponymous equations of physics
Ordinary differential equations | Chrystal's equation | [
"Physics"
] | 221 | [
"Eponymous equations of physics",
"Equations of physics"
] |
54,502,558 | https://en.wikipedia.org/wiki/Perenniporiella | Perenniporiella is a genus of five species of polypore fungi in the family Polyporaceae. The genus was segregated from Perenniporia by Cony Decock and Leif Ryvarden in 2003 with P. neofulva as the type species.
Species
Perenniporiella chaquenia Robledo & Decock (2009) – Argentina
Perenniporiella micropora (Ryvarden) Decock & Ryvarden (2003)
Perenniporiella neofulva (Lloyd) Decock & Ryvarden (2003)
Perenniporiella pendula Decock & Ryvarden (2003)
Perenniporiella tepeitensis (Murrill) Decock & R.Valenz. (2010) – Mexico; Southeastern United States
References
Polyporales genera
Polyporaceae
Taxa named by Leif Ryvarden
Fungi described in 2003
Fungus species | Perenniporiella | [
"Biology"
] | 189 | [
"Fungi",
"Fungus species"
] |
54,503,331 | https://en.wikipedia.org/wiki/Weyl%27s%20tube%20formula | Weyl's tube formula gives the volume of an object defined as the set of all points within a small distance of a manifold.
Let $\Sigma$ be an oriented, closed, two-dimensional surface embedded in $\mathbb{R}^3$, and let $\Sigma_r$ denote the set of all points within a distance $r$ of the surface $\Sigma$. Then, for $r$ sufficiently small, the volume of $\Sigma_r$ is
$$V = 2Ar + \frac{4\pi}{3}\,\chi\, r^3,$$
where $A$ is the area of the surface and $\chi$ is its Euler characteristic. This expression can be generalized to the case where $\Sigma$ is a $q$-dimensional submanifold of $n$-dimensional Euclidean space $\mathbb{R}^n$.
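As a quick sanity check (an added example, not part of the original statement), take $\Sigma$ to be the unit sphere, so that $A = 4\pi$ and $\chi = 2$. For $r < 1$ the tube is the spherical shell between radii $1 - r$ and $1 + r$, and its exact volume agrees with the formula:
$$\frac{4\pi}{3}\left[(1+r)^3 - (1-r)^3\right] = \frac{4\pi}{3}\left(6r + 2r^3\right) = 2(4\pi)r + \frac{4\pi}{3}(2)\,r^3.$$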
References
Manifolds | Weyl's tube formula | [
"Mathematics"
] | 105 | [
"Topological spaces",
"Topology",
"Manifolds",
"Space (mathematics)"
] |
54,504,773 | https://en.wikipedia.org/wiki/Euler%20characteristic%20of%20an%20orbifold | In differential geometry, the Euler characteristic of an orbifold, or orbifold Euler characteristic, is a generalization of the topological Euler characteristic that includes contributions coming from nontrivial automorphisms. In particular, unlike a topological Euler characteristic, it is not restricted to integer values and is in general a rational number. It is of interest in mathematical physics, specifically in string theory. Given a compact manifold $M$ quotiented by a finite group $G$, the Euler characteristic of $M/G$ is
$$\chi(M, G) = \frac{1}{|G|} \sum_{gh = hg} \chi\!\left(M^{g,h}\right),$$
where $|G|$ is the order of the group $G$, the sum runs over all pairs of commuting elements of $G$, and $M^{g,h}$ is the space of simultaneous fixed points of $g$ and $h$. (The appearance of $\chi$ in the summation is the usual Euler characteristic.) If the action is free, the sum has only a single term, and so this expression reduces to the topological Euler characteristic of $M$ divided by $|G|$.
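A small worked example (added here for illustration): when $M$ is a single point, on which any finite group $G$ acts trivially, every fixed-point set is the point itself, so the sum just counts commuting pairs, and a standard counting argument identifies the result with the number of conjugacy classes of $G$:
$$\chi(\mathrm{pt}, G) = \frac{1}{|G|}\,\#\{(g, h) \in G \times G : gh = hg\} = \#\{\text{conjugacy classes of } G\}.$$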
See also
Kawasaki's Riemann–Roch formula
References
Further reading
External links
https://mathoverflow.net/questions/51993/euler-characteristic-of-orbifolds
https://mathoverflow.net/questions/267055/is-every-rational-realized-as-the-euler-characteristic-of-some-manifold-or-orbif
Differential geometry
String theory | Euler characteristic of an orbifold | [
"Astronomy",
"Mathematics"
] | 270 | [
"String theory",
"Astronomical hypotheses",
"Geometry",
"Geometry stubs"
] |
54,504,774 | https://en.wikipedia.org/wiki/Autobahn%20Police%20Simulator | Autobahn Police Simulator (German original title: Autobahn Polizei Simulator) is a police driving simulation game developed by Z-Software and published by Aerosoft. The sequel of the game was released on 7 December 2017.
The game is set on the German Autobahn.
Gameplay
Autobahn Police Simulator supports a gaming steering wheel and includes both a third-person and first-person view as well as male and female characters. During gameplay, players are able to use the "follow me" sign to flag down suspicious cars. Players can secure accident scenes, inspect truck loads, verify TÜV (MOT) inspection stickers, and check drivers for alcohol or drugs. There are 40 missions in the game and a free play mode is available. Players can also explore the world on foot. There are also in-game radio communications and a day and night cycle.
References
2015 video games
Aerosoft games
Driving simulators
IOS games
Single-player video games
Video games about police officers
Video games developed in Germany
Windows games | Autobahn Police Simulator | [
"Technology"
] | 200 | [
"Driving simulators",
"Real-time simulation"
] |
54,505,172 | https://en.wikipedia.org/wiki/Polynomial%20differential%20form | In algebra, the ring of polynomial differential forms on the standard n-simplex is the differential graded algebra:
$$\Omega^{*}_{\mathrm{poly}}(\Delta^n) = \mathbb{Q}\left[t_0, \dots, t_n,\, dt_0, \dots, dt_n\right] \Big/ \left(\textstyle\sum_i t_i - 1,\ \sum_i dt_i\right).$$
Varying n, it determines the simplicial commutative dg algebra:
$$[n] \mapsto \Omega^{*}_{\mathrm{poly}}(\Delta^n)$$
(each order-preserving map $[n] \to [m]$ induces the pullback map $\Omega^{*}_{\mathrm{poly}}(\Delta^m) \to \Omega^{*}_{\mathrm{poly}}(\Delta^n)$).
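As a small concrete case (added here, under the presentation above): for $n = 1$ one may eliminate $t_0 = 1 - t_1$ and $dt_0 = -dt_1$, giving
$$\Omega^{*}_{\mathrm{poly}}(\Delta^1) \cong \mathbb{Q}[t] \oplus \mathbb{Q}[t]\,dt, \qquad d\big(f(t)\big) = f'(t)\,dt,$$
the polynomial de Rham complex of the interval.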
References
Aldridge Bousfield and V. K. A. M. Gugenheim, §1 and §2 of: On PL De Rham Theory and Rational Homotopy Type, Memoirs of the A. M. S., vol. 179, 1976.
External links
https://ncatlab.org/nlab/show/differential+forms+on+simplices
https://mathoverflow.net/questions/220532/polynomial-differential-forms-on-bg
Differential algebra
Ring theory | Polynomial differential form | [
"Mathematics"
] | 160 | [
"Differential algebra",
"Algebra stubs",
"Ring theory",
"Fields of abstract algebra",
"Algebra"
] |
54,505,993 | https://en.wikipedia.org/wiki/Basta%20%28archaeological%20site%29 | Basta () is a pre-historic archaeological site and village in Ma'an Governorate, Jordan, southeast of Petra. It is named for the nearby contemporary village of Basta. Like the nearby site of Ba'ja, Basta was built in c. and belongs to the PPNB (Pre-Pottery Neolithic B) period. Basta is one of the earliest known places to have a settled population who grew crops and domesticated livestock.
Archeological site
Basta's settlement dates back to the early periods of small human settlements and the use of agricultural crops as a way to sustain their inhabitants. Along with the crops, it is also one of the archaeological sites that mark the earliest known animal domestication. Due to relics found dating to before 9000 BC, the place is considered one of the first places in the world to have initiated the process of human settlement on a large scale.
The houses in Basta were built on the familiar circular shape, and this design enabled individuals within the same house to live together. They used limestone to build their homes, as can be seen from the height of the walls in some places and the stone partitions, whose floors were made with wood. These timbers came from trees of the area and were also imported from other regions.
There are no cemeteries; instead, the people of the ancient village used to bury their dead under the floors of their homes. Archaeologists believe that the intent was to remind successive generations of the relationship of families and individuals to their homes, which later became a basic concept of place and home ownership for farmers and villagers. This socio-religious concept had its impact on the formation of the city, the first nucleus of civilization in the region.
As the settlement predates the invention of pottery, the household items that were found were made of stone and bone, from which grinding tools and mills were made, while flint was used to make arrowheads. Animal figurines such as a sitting deer, the head of a bull or a cow, the head of a bear and the head of a ram were also found, which may have religious meaning.
At its height, Basta became a regional center of trade and "industrial" production of handmade tools. Thanks to domestication, trade and agriculture, Basta reached a population of at least 1000 people, which made it one of the most populated settlements of its time, along with the nearby ancient settlement of Beidha.
There is no consensus about the decline of Basta, but researchers believe that the fast growth of the settlement, an earthquake and the overconsumption of the natural resources of the area were the factors that caused the decline and, consequently, the disappearance of the city around 5000 BC.
See also
Archaeological sites in Jordan
Domestication
Pre-Pottery Neolithic B
References
Bibliography
Further reading
Gebel, Basta II: The Architecture and Stratigraphy. Berlin 2006.
Nissen, Basta I: The Human Ecology. Berlin, 2004.
External links
Photos of Basta at the American Center of Research
Ain Ghazal
Neolithic settlements
7th-millennium BC establishments
Megasites
Pre-Pottery Neolithic B | Basta (archaeological site) | [
"Physics",
"Mathematics"
] | 633 | [
"Quantity",
"Megasites",
"Physical quantities",
"Size"
] |
54,506,466 | https://en.wikipedia.org/wiki/2017%20Total%20Solar%20Eclipse%20stamp | The United States Postal Service issued the Total Eclipse of the Sun Forever stamp on June 20, 2017. The stamp includes two superimposed images, one showing a total solar eclipse and the second showing a full moon that is revealed upon heat being applied. This stamp commemorates the solar eclipse of August 21, 2017, which was visible across the continental United States from coast to coast, weather permitting.
Details
In the first U.S. stamp application of thermochromic ink, the Total Eclipse of the Sun Forever stamps reveal a second image. By rubbing a thumb or finger on the image, the heat imparted will cause an underlying image of the full moon to be revealed. Afterward, the image reverts to the dark image as it cools.
The US Postal Service notes that exposure to ultraviolet (UV) light causes degradation of thermochromic inks, so the eclipse stamps should be shielded from sunlight to preserve their thermochromic behavior. To help with this, the Postal Service sends panes of this stamp to purchasers in special UV-blocking envelopes. In addition, UV-protective sleeves for the eclipse stamps are available from post offices for 25¢ each.
The photograph of the total solar eclipse on the stamp was taken at Jalu, Libya on March 29, 2006, by Fred Espenak. The stamp's alternate image is a photo of the full moon taken by Espenak at his observatory in Portal, Arizona in 2010. Known as "Mr. Eclipse", Espenak is a retired NASA astrophysicist. The stamp was designed by USPS art director Antonio Alcalá of Alexandria, Virginia.
The stamp is printed in a pressure-sensitive adhesive pane of 16 stamps, in one design.
Denomination
The stamp is a Forever stamp so has no defined denomination which means it will always be equal in value to the current First-Class Mail 1-ounce price, regardless of any future rate changes.
First day ceremony
The stamp's First-Day-of-Issue ceremony took place on June 20, 2017, at the University of Wyoming's Art Museum in conjunction with its annual summer solstice celebration. That building was designed with an architectural feature whereby, on the day of the summer solstice each year, a single beam of sunlight moves across the floor and shines on a silver dollar embedded in the floor in the center of the Rotunda Gallery at noon.
See also
Commemorative stamp
Thermochromism
References
External links
Solar Eclipse Stamps on the MrEclipse.com website
21st-century solar eclipses
Postage stamps of the United States
Thermochromism
2017 07 20
2017 works
Works about the Moon
Sun in culture | 2017 Total Solar Eclipse stamp | [
"Materials_science"
] | 548 | [
"Smart materials",
"Chromism",
"Thermochromism"
] |
54,506,518 | https://en.wikipedia.org/wiki/JSFiddle | JSFiddle is an online IDE service and online community for testing and showcasing user-created and collaborational HTML, CSS and JavaScript code snippets, known as 'fiddles'. It allows for simulated AJAX calls. In 2019, JSFiddle was ranked the second most popular online IDE by the PopularitY of Programming Language (PYPL) index based on the number of times it was searched, directly behind Cloud9 IDE, worldwide and in the USA.
Concept
JSFiddle is an online IDE which is designed to allow users to edit and run HTML, JavaScript, and CSS code on a single page. Its interface is minimalist and split into four main frames, which correspond to editable HTML, JavaScript and CSS fields and a result field which displays the user's project after it is run. Since early on, JSFiddle has used a smart source-code editor with programming features.
As of 2020, JSFiddle uses CodeMirror to support its editable fields, providing multicursors, syntax highlighting, syntax verification (linter), brace matching, auto indentation, autocompletion, code/text folding, Search and Replace to assist web developers in their actions. On the left, a sidebar allows users to integrate external resources such as external CSS stylesheets and external JavaScript libraries. The most popular JavaScript frameworks and CSS frameworks are suggested to users and available via a click.
JSFiddle allows users to publicly save their code an uncapped number of times for free. Each version is saved online at the application's website with an incremental numbered suffix. This allows users to re-access their saved code. Code saved on JSFiddle may also be edited into new versions, shared with other parties, and forked into a new line.
JSFiddle is widely used among web developers to share simple tests and demonstrations. JSFiddle is also widely used on Stack Overflow, the dominant question-answer online forum for the web industry.
History
In 2009, JSFiddle's predecessor, MooShell, was created by Piotr Zalewa as a website application which was exclusive to the MooTools community. In 2010, Oskar Krawczyk joined the project as a developer, and the platform was made freely available under the name of JSFiddle.
In 2016, JSFiddle underwent a full platform overhaul and became ad-sponsored. In 2017, Michał Laskowski and Andrzej Kała joined the company.
References
External links
https://jsfiddle.net
Technology websites
Online integrated development environments | JSFiddle | [
"Technology"
] | 555 | [
"Computing stubs"
] |
54,507,206 | https://en.wikipedia.org/wiki/Dual%20photon | In theoretical physics, the dual photon is a hypothetical elementary particle that is a dual of the photon under electric–magnetic duality which is predicted by some theoretical models, including M-theory.
It has been shown that including a magnetic monopole in Maxwell's equations introduces a singularity. The only way to avoid the singularity is to include a second four-vector potential, called the dual photon, in addition to the usual four-vector potential, the photon. Additionally, it is found that the standard Lagrangian of electromagnetism is not dual symmetric (i.e. symmetric under rotation between electric and magnetic charges), which causes problems for the energy–momentum, spin, and orbital angular momentum tensors. To resolve this issue, a dual symmetric Lagrangian of electromagnetism has been proposed, which has a self-consistent separation of the spin and orbital degrees of freedom. The Poincaré symmetries imply that the dual electromagnetism naturally yields self-consistent conservation laws.
Dual electromagnetism
The free electromagnetic field is described by a covariant antisymmetric tensor of rank 2,
$$F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu,$$
where $A_\mu$ is the electromagnetic potential.
The dual electromagnetic field is defined as
$$\tilde{F}^{\mu\nu} = \tfrac{1}{2}\,\varepsilon^{\mu\nu\rho\sigma} F_{\rho\sigma},$$
where the tilde denotes the Hodge dual and $\varepsilon^{\mu\nu\rho\sigma}$ is the Levi-Civita tensor.
For the electromagnetic field and its dual field, we have
Then, for a given gauge field , the dual configuration is defined as
where the field potential of the dual photon is non-locally linked to the original field potential.
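In terms of the usual field components (an added illustration; the signs depend on the metric and Levi-Civita conventions), this duality exchanges the electric and magnetic fields, which is the sense in which the dual potential couples 'magnetically':
$$\mathbf{E} \rightarrow \mathbf{B}, \qquad \mathbf{B} \rightarrow -\mathbf{E}.$$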
p-form electrodynamics
A p-form generalization of Maxwell's theory of electromagnetism is described by a gauge-invariant 2-form defined as
.
which satisfies the equation of motion
where $\star$ is the Hodge star operator.
This implies the following action in the spacetime manifold :
where is the dual of the gauge-invariant 2-form for the electromagnetic field.
Dark photon
The dark photon is a spin-1 boson associated with a U(1) gauge field, which could be massless and behave like electromagnetism. Alternatively, it could be unstable and massive, quickly decaying into electron–positron pairs, and interact with electrons.
The dark photon was first suggested in 2008 by Lotty Ackerman, Matthew R. Buckley, Sean M. Carroll, and Marc Kamionkowski to explain the 'g–2 anomaly' in experiment E821 at Brookhaven National Laboratory. Nevertheless, it was ruled out in some experiments such as the PHENIX detector at the Relativistic Heavy Ion Collider at Brookhaven.
In 2015, the Hungarian Academy of Sciences's Institute for Nuclear Research in Debrecen, Hungary, suggested the existence of a new, light spin-1 boson, dubbed the X17 particle, 34 times heavier than the electron, that decays into an electron–positron pair with a combined energy of 17 MeV. In 2016, it was proposed that it is an X-boson with a mass of 16.7 MeV that explains the g−2 muon anomaly.
See also
Magnetic photon, a different extension for magnetic monopoles
List of hypothetical particles
References
Gauge bosons
Bosons
String theory
Hypothetical elementary particles
Quantum electrodynamics
Magnetic monopoles
Force carriers | Dual photon | [
"Physics",
"Astronomy"
] | 665 | [
"Physical phenomena",
"Astronomical hypotheses",
"String theory",
"Force carriers",
"Unsolved problems in physics",
"Bosons",
"Subatomic particles",
"Fundamental interactions",
"Magnetic monopoles",
"Hypothetical elementary particles",
"Physics beyond the Standard Model",
"Matter"
] |
54,507,343 | https://en.wikipedia.org/wiki/Lodovico%20delle%20Colombe | Lodovico delle Colombe (20 January 1565 – after 1623) was an Italian Aristotelian scholar, famous for his battles with Galileo Galilei in a series of controversies in physics and astronomy.
Early life
Delle Colombe was born in Florence in the second half of the 16th century. A date of January 20, 1565 has been suggested, but the source for this is unknown. Likewise nothing is known of his family, except that he was of noble origin, or of his education. He became a member of the Accademia Fiorentina when Francesco Nori was its consul and was also a member of the Consiglio dei Dugento, the advisory body to the Grand Duchy of Tuscany. He was said to be tall and very thin with a long white beard, a little bald head and sunken eyes. He wore a fleece jacket and a large collar. Because of his appearance and his lonely and melancholy character he was nicknamed 'the Superintendent of Limbo' by the satirical poet Francesco Ruspoli. One of his brothers, Raffaello delle Colombe (1557–1627), was the Dominican Prior of Santa Maria Novella, who denounced Galileo from his pulpit.
Dispute over the 1604 supernova
In October 1604, a new star was seen. By night, it was the brightest star in the sky, and it was visible during the day for more than three weeks, before eventually dimming. In 1606, Delle Colombe published his discourse on the phenomenon, dedicated to Alessandro Marzi Medici, the Archbishop of Florence. In his work delle Colombe argued that the star was not new, but permanent, though only occasionally visible. This argument echoed that of Johannes van Heeck in supporting the generally accepted model of the universe, known as the Aristotelian model or the Ptolemaic system. This held that the stars were fixed in their positions and unchanging; thus if an unusual event took place among the stars, this suggested that they could not be fixed in a 'firmament'. By arguing that the star was permanent rather than new, Delle Colombe defended the Aristotelian view, while suggesting reasons for why it had not previously been observed. In support of these arguments, delle Colombe drew not only on astronomical observations but on the authority of Aristotle, and of many other Peripatetic thinkers, including the Conimbricenses, Gasparo Contarini, and Julius Caesar Scaliger.
Galileo, then a professor at the University of Padua, lectured on the supernova, proposing different possible ways it could have been produced. Galileo was cautious in his views, but did regard the phenomenon as new, and not as a permanent star. A few months after delle Colombe's book appeared, a response to his ideas was published under the title of Considerations of Alimberto Mauri on Some Passages in the Discourse of Lodovico delle Colombe. Alimberto Mauri was a pseudonym and delle Colombe (like most scholars since) believed that the author was Galileo. The book ridiculed many of delle Colombe's views about the star, and belittled him as 'nostro colombo' ('our pigeon'). It asserted that astronomy had no need of Aristotelian philosophy, and should focus on observation and mathematics. The approach of first seeing something in the sky and then developing an elaborate explanation to make that observation fit with Aristotelian cosmology was directly challenged. Delle Colombe himself then responded to 'Mauri' by publishing Risposte piacevoli e curiose (Pleasant and curious replies) in 1608. In this text delle Colombe not only attacked the ideas of Copernicus, but associated them with Galileo by name.
Dispute over the motion of the Earth
The disputes between delle Colombe and Galileo grew more protracted when Galileo first published new findings which challenged Aristotelian cosmology, and then moved from Padua to Florence. In 1609 Galileo had built a telescope, through which he had observed the moons of Jupiter as well as the mountains and craters on the Moon. In March 1610 he published his findings in Sidereus Nuncius (The Starry Messenger), which he dedicated to Cosimo II de' Medici, Grand Duke of Tuscany, naming the moons of Jupiter the 'Medician stars'. He then negotiated with the Grand Duke to secure for himself the position of Philosopher and Chief Mathematician at the court in Florence.
Galileo barely mentioned the motion of the Earth in Sidereus Nuncius, as its focus was on his new discoveries. Nevertheless, in his discussion of earthshine he implied that the Earth changes its position rather than remaining static, and then added:
' Let these few remarks suffice us here concerning this matter, which will be more fully treated in our System of the World. In that book, by a multitude of arguments and experiences, the solar reflection from the earth will be shown to be quite real-against those who argue that the earth must be excluded from the dancing whirl of stars for the specific reason that it is devoid of motion and of light. We shall prove the earth to be a wandering body surpassing the moon in splendor, and not the sink of all dull refuse of the universe; this we shall support by an infinitude of arguments drawn from nature '
Galileo may have been trying to quietly introduce his more speculative ideas about the universe among the empirical observations made with his telescope, but these few sentences gave delle Colombe sufficient grounds to attack him at an apparent weak point, in an attempt to force him to defend the motion of the Earth specifically. In 1610, delle Colombe contested Galileo's views – though he did not mention Galileo by name – in his leaflet Contro il Moto della Terra (Against the Motion of the Earth). It was not printed, but circulated in manuscript form, mostly in Florence.
The text introduced a number of objections to the idea of the Earth's motion, which appealed to common sense. He began by pointing out that mathematics was not an adequate tool for describing nature, since it dealt only with abstractions and might therefore predict phenomena which did not actually occur. Then he proceeded to outline some thought experiments. If a cannon was fired due east, the ball moved in the direction of the Earth's rotation; if fired to the west, the ball moved against the Earth's rotation. Since the ball lands the same distance from the cannon in either case, the Earth cannot be moving. Likewise if a crossbow were shot directly upwards, the bolt would land some way from the archer rather than at his feet, if the Earth moved.
The final section of this work marked a shift in the arguments put forward by delle Colombe – instead of simply defending Aristotle or challenging Galileo for empirical proofs, he introduced a series of scriptural and theological objections to his claims. (Galileo appears to have used delle Colombe's arguments as the basis for those of the character Simplicio in his 1632 work Dialogue Concerning the Two Chief World Systems.) Delle Colombe was the first writer to use biblical text against Galileo.
Galileo decided not to respond to delle Colombe's invitation to argue his case about literal and other readings of holy text, saying that as 'Contra il Moto della Terra' had only been circulated in manuscript and not published, there was no reason for him to reply.
Dispute over the surface of the Moon
While the main arguments in 'Contro il Moto della Terra' are concerned with the Earth's motion, delle Colombe also responded to Galileo's claim that the surface of the Moon was rough and mountainous. It was a basic tenet of Aristotelian cosmology that the planets were all perfect spheres. He therefore speculated that the apparent mountains Galileo had seen were not mountains at all, but merely dense 'ripples' of matter with a pure and transparent material filling the gaps between them to make a perfect sphere. Once again, he was devising speculative answers to reconcile Aristotelian ideas with empirical observations.
In 1611 delle Colombe pursued this matter in a letter to Christopher Clavius hoping to enlist his support, but Clavius did not reply. Galileo soon obtained a copy of delle Colombe's letter to Clavius, and responded by writing to a friend, confirming his opinion about the Moon's surface, emphasising delle Colombe's incompetence in astronomy, and ridiculing him as pippione (pigeon). However, as delle Colombe's views were not published, Galileo did not publicly respond either. Indeed, Galileo, who privately ridiculed delle Colombe mercilessly, never gave him the satisfaction of a formal public response to his challenges. Although delle Colombe enjoyed the support of Archbishop Alessandro Marzi Medici and of Don Giovanni de' Medici, his patron was never the Archduke himself or any other very senior figure, and it may be that Galileo simply refused to engage directly with him for this reason.
Dispute over floating bodies
In July 1611 a debate about the nature of cold was held at the house of Galileo's friend Filippo Salviati, with two Aristotelians, Vincenzo di Grazia and Giorgio Coresio. The Aristotelians held that ice was condensed water, which floated because of its flat, thin shape; Galileo, whose view was informed by Archimedes, maintained that ice was rarified water, which floated because it was less dense than the water supporting it. As the discussion developed, Galileo took the position that all bodies denser than water sink, while all lighter than water float, regardless of their shape. Three days after this first encounter, di Grazia visited Galileo, and told him that a friend had volunteered to disprove Galileo's position by demonstration. This was delle Colombe. Galileo signed an agreement setting out the terms of the demonstrations both parties were to prepare, and a date was set for them to meet and present their cases in mid-August at the house of Canon Francesco Nori.
In the days before this proposed meeting, delle Colombe took his demonstration to the streets and public places of Florence to show how easily he could prove Galileo wrong. As he showed, a thin strip of ebony floated on water, while a sphere of the same material sank. This, he maintained, showed that shape, not density, determined whether an object would float. However delle Colombe never actually appeared on the agreed day for the planned demonstrations, suggesting that he perhaps understood that his demonstration relied on the particular effects of the surface tension of water rather than on the shapes of his pieces of ebony. Galileo then proposed that they meet on a new date at Salviati's house, and that on this occasion the objects to be placed on the water should all be thoroughly wetted first (which would have removed the surface tension effect). However, when delle Colombe arrived on the appointed day he found that Galileo was unwilling to proceed as they had agreed. Apparently the Grand Duke had rebuked him for involving himself in a vulgar public spectacle, and Galileo therefore said that he would commit his proofs to writing rather than undertake public demonstrations. After this, delle Colombe seems not to have had any further direct role in public debate on this topic. In September Galileo circulated a manuscript outlining his views, but when Cosimo II staged a public debate on the question of floating bodies, it was Flaminio Papazzoni rather than delle Colombe who argued, and lost, against Galileo.
In 1612 Galileo formally set out his views on this topic in his Discourse on Floating Bodies, and Delle Colombe swiftly replied with Discorso Apologetico. Galileo did not reply immediately, and indeed when his reply was published, it was over the name of his friend Benedetto Castelli. Galileo may have felt no useful purpose was served by involving himself directly in this debate, and he may simply not have had the time to respond in detail to all the errors in delle Colombe's book. Castelli was able to produce a thoroughly argued refutation of delle Colombe's points, clearly with help and input from Galileo. Castelli's Risposta alle Opposizioni del S. Lodovico delle Colombe was published in 1615, marking the end of this particular dispute.
The Pigeon League and the Roman Inquisition
While delle Colombe was almost alone in arguing publicly against Galileo, there was a group of scholars and churchmen who supported his Aristotelian views. After Galileo referred disparagingly to delle Colombe as 'pippione' ('pigeon'), his close friend the painter Lodovico Cigoli coined the nickname 'Lega del Pippione' ('The Pigeon League') for delle Colombe's group. (As well as being a play on delle Colombe's name, 'pippione' carried the connotation of 'bird-brained' or 'easily duped'). Among the Pigeon League's members was Delle Colombe's brother Rafaello, a leading Dominican, close to archbishop Marzi Medici, who frequently sermonised against Copernican ideas in general and Galileo in particular. Two others were also Dominicans, Tommaso Caccini and Niccolò Lorini.
In December 1611, soon after the dispute over floating bodies, Cigoli wrote to Galileo warning him that a group of his enemies were meeting regularly in the house of the Archbishop, "in a mad quest for any means by which they can damage you." They had apparently already tried to persuade someone to attack him from the pulpit. The attack, when it came, was launched by Caccini. He had obtained a copy of Galileo's letter to Benedetto Castelli, which explained how the Copernican system could be reconciled with the book of Joshua. On 20 December 1614, he preached a sermon in Santa Maria Novella in Florence, attacking Galileo and quoting the Acts of the Apostles,"Viri galilaei, quid statis adspicientes in caelum?" ("Men of Galilee, why do you stand gazing into heaven?").
Six weeks later, on 7 February 1615, Lorini sent a copy of Galileo's Letter to Castelli to the Roman Inquisition with a written complaint. On 20 March 1615 Caccini followed this up with a second deposition to the Inquisition, accusing him of heresy. The Inquisition thereupon opened an investigation into Galileo. What direct role delle Colombe may have played in these developments is not known.
Later life
Delle Colombe is referred to as a living person in Castelli's 1615 book. After this date there are no known works by him and no literary or scientific references to him. In 1623, Lodovico delle Colombe and his brother Corso were elected to the Consiglio dei Dugento. Nothing is known of him after this date.
Key texts in the disputes
References
Galileo Galilei
17th-century Italian astronomers
17th-century Italian writers
17th-century Italian male writers
Catholicism-related controversies
Copernican Revolution
Natural philosophers | Lodovico delle Colombe | [
"Astronomy"
] | 3,141 | [
"Copernican Revolution",
"History of astronomy"
] |
54,508,489 | https://en.wikipedia.org/wiki/Locally%20constant%20sheaf | In algebraic topology, a locally constant sheaf on a topological space X is a sheaf on X such that for each x in X, there is an open neighborhood U of x such that the restriction is a constant sheaf on U. It is also called a local system. When X is a stratified space, a constructible sheaf is roughly a sheaf that is locally constant on each member of the stratification.
A basic example is the orientation sheaf on a manifold since each point of the manifold admits an orientable open neighborhood (while the manifold itself may not be orientable).
For another example, let $X = \mathbb{C}^*$, let $\mathcal{O}_X$ be the sheaf of holomorphic functions on X and let $P\colon \mathcal{O}_X \to \mathcal{O}_X$ be given by $P = z\,\partial/\partial z - \tfrac{1}{2}$. Then the kernel of P is a locally constant sheaf on $X$ but not constant there (since it has no nonzero global section).
If $\mathcal{F}$ is a locally constant sheaf of sets on a space X, then each path $p$ in X from a point $x$ to a point $y$ determines a bijection $\mathcal{F}_x \overset{\sim}{\to} \mathcal{F}_y$ between the stalks. Moreover, two homotopic paths determine the same bijection. Hence, there is the well-defined functor
$$\Pi_1(X) \to \mathbf{Set}, \quad x \mapsto \mathcal{F}_x,$$
where $\Pi_1(X)$ is the fundamental groupoid of X: the category whose objects are points of X and whose morphisms are homotopy classes of paths. Moreover, if X is path-connected, locally path-connected and semi-locally simply connected (so X has a universal cover), then every functor $\Pi_1(X) \to \mathbf{Set}$ is of the above form; i.e., the functor category $\mathbf{Fct}(\Pi_1(X), \mathbf{Set})$ is equivalent to the category of locally constant sheaves on X.
If X is locally connected, the adjunction between the category of presheaves and bundles restricts to an equivalence between the category of locally constant sheaves and the category of covering spaces of X.
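A standard illustration (added here; it follows from the monodromy description above): on the circle $S^1$, whose fundamental groupoid is generated by a single loop, a locally constant sheaf of sets amounts to one stalk together with the monodromy bijection induced by that loop, so
$$\{\text{locally constant sheaves of sets on } S^1\} \;\simeq\; \{(S, \sigma) : S \text{ a set},\ \sigma\colon S \xrightarrow{\ \sim\ } S\}.$$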
References
External links
https://golem.ph.utexas.edu/category/2010/11/locally_constant_sheaves.html (recommended)
Algebraic topology
Sheaf theory | Locally constant sheaf | [
"Mathematics"
] | 395 | [
"Mathematical structures",
"Algebraic topology",
"Topology stubs",
"Fields of abstract algebra",
"Topology",
"Category theory",
"Sheaf theory"
] |
54,508,506 | https://en.wikipedia.org/wiki/Pudovik%20reaction | In organophosphorus chemistry, the Pudovik reaction is a method for preparing α-aminomethylphosphonates. Under basic conditions, the phosphorus–hydrogen bond of a dialkylphosphite, (RO)2P(O)H, adds across the carbon–nitrogen double bond of an imine (a hydrophosphonylation reaction). The reaction is closely related to the three-component Kabachnik–Fields reaction, where an amine, phosphite, and an organic carbonyl compound are condensed, which was reported independently by Martin Kabachnik and Ellis Fields in 1952. In the Pudovik reaction, a generic imine, RCH=NR', would react with a phosphorous reagent like diethylphosphite as follows:
RCH=NR' + (EtO)2P(O)H → (EtO)2P(O)CHR-NHR'
Catalytic, enantioselective variants of the Pudovik reaction have been developed. In addition to Lewis-acid catalysis, the reaction may be carried out in the presence of chiral amine bases; catalytic amounts of quinine, for instance, promote the enantioselective Pudovik reaction of aryl aldehydes.
References
Name reactions | Pudovik reaction | [
"Chemistry"
] | 283 | [
"Name reactions"
] |
54,508,553 | https://en.wikipedia.org/wiki/Woody%20plant%20encroachment | Woody plant encroachment (also called woody encroachment, bush encroachment, shrub encroachment, shrubification, woody plant proliferation, or bush thickening) is a natural phenomenon characterised by the area expansion and density increase of woody plants, bushes and shrubs, at the expense of the herbaceous layer, grasses and forbs. It refers to the expansion of native plants and not the spread of alien invasive species. Woody encroachment is observed across different ecosystems and with different characteristics and intensities globally. It predominantly occurs in grasslands, savannas and woodlands and can cause regime shifts from open grasslands and savannas to closed woodlands.
Causes include land-use intensification, such as overgrazing, as well as the suppression of wildfires and the reduction in numbers of wild herbivores. Elevated atmospheric CO2 and global warming are found to be accelerating factors. To the contrary, land abandonment can equally lead to woody encroachment.
The impact of woody plant encroachment is highly context specific. It can have severe negative impact on key ecosystem services, especially biodiversity, animal habitat, land productivity and groundwater recharge. Across rangelands, woody encroachment has led to significant declines in productivity, threatening the livelihoods of affected land users. Woody encroachment is often interpreted as a symptom of land degradation due to its negative impacts on key ecosystem services, but is also argued to be a form of natural succession. Various countries actively counter woody encroachment, through adapted grassland management practices, controlled fire and mechanical bush thinning. Such control measures can lead to trade-offs between climate change mitigation, biodiversity, combating desertification and strengthening rural incomes.
In some cases, areas affected by woody encroachment are classified as carbon sinks and form part of national greenhouse gas inventories. The carbon sequestration effects of woody plant encroachment are however highly context specific and still insufficiently researched. Depending on rainfall, temperature and soil type, among other factors, woody plant encroachment may either increase or decrease the carbon sequestration potential of a given ecosystem. In its Sixth Assessment Report of 2022, the Intergovernmental Panel on Climate Change (IPCC) states that woody encroachment may lead to slight increases in carbon, but at the same time mask underlying land degradation processes, especially in drylands. The UNCCD has identified woody encroachment as a key contributor to rangeland loss globally.
Ecological definition and etymology
Woody plant encroachment is the increase in abundance of indigenous woody plants, such as shrubs and bushes, at the expense of herbaceous plants, grasses and forbs, in grasslands and shrublands. The term encroachment is thus used to describe how woody plants outcompete grasses during a given time, typically years or decades. Although such differentiation is not always applied, encroachment refers to the expansion of woody plants into open areas and thickening refers to the increasing density in a given area, including sub-canopy cover plants. This is in line with the meaning of the term encroachment, which is "the act of slowly covering more and more of an area". Among the earliest published notions of woody plant encroachment are publications of R. Staples in 1945, O. West in 1947 and Heinrich Walter in 1954.
Although the terms are used interchangeably in some literature, woody plant encroachment is different from the spread of invasive species. As opposed to invasive species, which are deliberately or accidentally introduced species, encroacher species are indigenous to the respective ecosystem and their classification as encroachers depends on whether they outcompete other indigenous species in the same ecosystem over time. As opposed to alien plant invasion, woody plant encroachment is thus not defined by the mere presence of specific plant species, but by their ecological dynamics and changing dominance.
In some instances, woody plant encroachment is a type of secondary succession. This applies to cases of land abandonment, for example when previous agricultural land is abandoned and woody plants re-establish. However, this is distinctly different from woody plant encroachment that occurs due to global drivers, e.g. increased carbon dioxide in Earth's atmosphere, and unsustainable forms of land use intensification, such as overgrazing and fire suppression. Such drivers disrupt the ecological succession in a given grassland, specifically the balance between woody and herbaceous plants, and provide a competitive advantage to woody plants. The resulting process that leads to an abundance of woody plants is sometimes considered an ecological regime shift (also ecological state transition) that can shift drylands from grass-dominated regimes towards woody-dominated savannas. An increase in spatial variance is an early indicator of such a regime shift. Depending on the ecological and climatic conditions this shift can be a type of land degradation and desertification. Progressing shrub encroachment is expected to feature a tipping point, beyond which the affected ecosystem will undergo substantial, self-perpetuating and often irreversible impact.
Research into the types of woody plants that tend to become encroaching species is limited. Comparisons of encroaching and non-encroaching Vachellia species found that encroaching species show higher resource acquisition and stronger competition for resources. Their canopy architecture is different and only encroaching tree species reduce the productivity of perennial vegetation. In a comparison of Vachellia and Senegalia, Vachellia was found to be the more aggressive encroacher, growing faster and taller with thicker, animal-dispersed seeds, while Senegalia adapts to grass competition with denser roots and wind-dispersed seeds.
By definition, woody plant encroachment occurs in grasslands. It is thus distinctly different from reforestation and afforestation. However, there is a strong overlap between vegetation greening, as detected through satellite-derived vegetation indices, and woody plant encroachment. Studies show that vegetation can become impoverished despite a greening trend.
Grasslands and forests, as well as grasslands and shrublands, can be alternative stable states of ecosystems, but empirical evidence of such bistability is still limited.
Global extent
The UNCCD identifies woody encroachment as a key contributor to rangeland loss globally. Woody encroachment occurs on all continents, affecting an estimated total area of 500 million hectares (5 million square kilometres). Its causes, extent and response measures differ and are highly context specific. Ecosystems affected by woody encroachment include closed shrublands, open shrublands, woody savannas, savannas, and grasslands. It can occur not only in tropical and subtropical climates, but also in temperate areas. Woody encroachment occurs at 1 percent per decade in the Eurasian steppes, 10–20 percent in North America, 8 percent in South America, 2.4 percent in Africa and 1 percent in Australia. In the European Alps, recorded expansion rates range from 0.6 percent to 16 percent per year.
In Sub-Saharan Africa, woody vegetation cover has increased by 8% during the past three decades, mainly through woody plant encroachment. Overall, 750 million hectares of non-forest biomes experienced significant net gains in woody plant cover, which is more than three times the area that experienced net losses of woody vegetation. In around 249 million hectares of African rangelands, long-term climate change was found to be the key driver of vegetation change. Across Africa, 29 percent of all trees are found outside classified forests. In some countries, such as Namibia and Botswana, this percentage is above 80 percent and likely linked to woody encroachment. In Southern Africa, woody encroachment has been identified as the main factor of greening, i.e. of the increase in vegetation cover detected through remote sensing. The future trend of biome change through woody encroachment in Africa bears great uncertainty.
In Southern Europe an estimated 8 percent of land area has transitioned from grazing land to woody vegetation between 1950 and 2010.
In the Eurasian Steppe, the largest grassland globally, climate change linked woody plant encroachment has been found to occur at around 1% per decade.
In the Arctic Tundra, shrub plant cover has increased by 20 percent during the past 50 years. During the same time period, shrub and tree cover increased by 30 percent in the savannas of Latin America, Africa and Australia.
Causes
Woody encroachment is assumed to have its origins at the beginning of the Holocene and the associated warming, with tropical species expanding their ranges away from the equator into more temperate regions. However, it has occurred at unparalleled rates since the mid-19th century. As such, it is classified as a type of grassland degradation, which occurs through direct and indirect human impact during the Anthropocene.
Susceptibility of ecosystems
There is evidence that some characteristics of ecosystems render them more susceptible to woody encroachment than others. For example, coarse-textured soils promote woody plant growth, while fine-textured soils limit it. Moreover, the likelihood of woody encroachment is influenced by soil moisture and soil nutrient availability, which is why it often occurs on downslope locations and cooler slopes. The causes of woody encroachment differ significantly under different climatic conditions, e.g. between wet and dry savannas.
Various factors have been found to contribute to the process of woody plant encroachment. Both local drivers (i.e. related to land use practices) as well as global drivers can cause woody plant encroachment. Due to its strong link to human induced causes, woody plant encroachment has been termed a social-ecological regime shift. Research shows that both legacy effects of specific events, as well as plant traits can contribute to encroachment. There is still insufficient research on the interplay between the various positive and negative feedback loops in encroaching ecosystems.
Land use
Where land is abandoned, and respective anthropogenic pressures cease, a rapid spread of native bush plants is often observed. This is for example the case in former forest areas in the Alps that had been converted to agricultural land and later abandoned. In Southern Europe encroachment is thus linked to rural exodus. In such instances, land use intensification, e.g. increased grazing pressure, is found to be effective against woody encroachment. More recently, it is observed that land use cessation is not the only driver of woody encroachment in aforementioned regions, since the phenomenon occurs also where land continued to be used for agricultural purposes.
In other regions land use intensification is the main cause of woody plant encroachment. This is due to the interrelated fragmentation of landscapes and the loss of historical disturbance regimes, mainly in the following forms:
Overgrazing: In the context of land intensification, a frequently cited cause of woody plant encroachment is overgrazing, commonly a result of overstocking and fencing of farms, as well as the lack of animal rotation and land resting periods. Overgrazing plays an especially strong role in mesic grasslands, where bushes can expand easily when gaining a competitive advantage over grasses, while woody encroachment is less predictable in xeric shrublands. Seed dispersal through animals is found to be a contributing factor to woody encroachment. While overgrazing has in the past frequently been found to be a main driver of woody encroachment, it is observed that woody encroachment continues in the respective areas even after grazing is reduced or ceases altogether.
Absence of large mammals: linked to the introduction of rangeland agriculture as well as unsustainable hunting practices, the reduction of large mammals such as elephant and rhino (in Africa) or elk (in North America) is a contributing factor to woody encroachment.
Fire suppression: A connected cause of woody plant encroachment is the reduction in the frequency of wildfires that would occur naturally, but are suppressed in frequency and intensity by land owners due to the associated risks and the fragmentation of landscapes. When the lack of fire reduces tree mortality and consequently the grass fuel load for fire decreases, a self-reinforcing feedback loop occurs. It has been estimated that above a threshold of 40% canopy cover, surface grass fires are rare. At intermediate rainfall, fire can be the main determinant between the development of savannas and forests. In experiments in the United States it was determined that annual fires lead to the maintenance of grasslands, 4-year burn intervals lead to the establishment of shrubby habitats and 20-year burn intervals lead to severe woody plant encroachment. Moreover, the reduction of browsing by herbivores, e.g. when natural habitats are transformed into agricultural land, fosters woody plant encroachment, as bushes grow undisturbed and with increasing size also become less susceptible to fire. Even a single decade of land management change, such as fire exclusion and overgrazing, can lead to severe woody plant encroachment. The global increase in atmospheric CO2 contributes to the reduction of wildfires, as it decreases the flammability of grass.
Competition for water: a positive feedback loop occurs when encroaching woody species reduce the plant available water, providing a disadvantage for grasses, promoting further woody encroachment. According to the two-layer theory, grasses use topsoil moisture, while woody plants predominantly use subsoil moisture. If grasses are reduced by overgrazing, this reduces their water intake and allows more water to penetrate into the subsoil for the use by woody plants. Moreover, research suggests that bush roots are less vulnerable to water stress than grass roots during droughts.
Population pressure: population pressure can be the cause for woody plant encroachment, when large trees are cut as building material or fuel. This stimulates coppice growth and results in an increase of the shrub vegetation.
Ecosystem restoration: active interventions and changes of ecosystem, such as the creation of wetlands, can trigger woody encroachment.
Climate change
While changes in land management are often seen as the main driver of woody encroachment, some studies suggest that global drivers increase woody vegetation regardless of land management practices. For example, in a representative sampling of South African grasslands, woody plant encroachment was found to be the same under different land uses and different rainfall amounts, suggesting that climate change may be the primary driver of the encroachment. Once established, shrubs suppress grass growth, perpetuating woody plant encroachment. Suitable habitat for key encroacher species is expected to increase under climate change.
Predominant global drivers include the following:
Atmospheric CO2: climate change has been found to be a cause or accelerating factor for woody plant encroachment. This is because increased atmospheric CO2 concentrations foster the growth of woody plants. Woody plants with the C3 photosynthetic pathway thrive under high CO2 concentrations, as opposed to grasses with the C4 photosynthetic pathway. Tolerance to herbivory is also found to be enhanced during the plants' recruitment stage under increased CO2 concentrations.
Rainfall patterns: a frequently cited theory is the state-and-transition model. This model outlines how rainfall and its variability is the key driver of vegetation growth and its composition, bringing about woody plant encroachment under certain rainfall patterns. For example, if rainfall intensity increases, deep soil water typically increases, which in turn benefits bushes more than grasses. Both the amount of rainfall and its timing are important and distinct factors. Changes in precipitation can foster woody encroachment. Increased precipitation can foster the establishment, growth and density of woody plants. Also decreased precipitation can promote woody plant encroachment, as it fosters the shift from mesophytic grasses to xerophytic shrubs.
Global warming: woody encroachment correlates with warming in the tundra, while it is linked to increased rainfall in the savanna. Species such as Vachellia sieberiana thrive under warming irrespective of the competition with grasses. The Intergovernmental Panel on Climate Change (IPCC) in its report "Global warming of 1.5°C" states that high-latitude tundra and boreal forests are at particular risk of climate change-induced degradation, with a high likelihood of shrub encroachment under continued warming. In other ecosystems, such as sub-Saharan grasslands, rising aridity may cause woody plants to be more prone to hydraulic failure.
Droughts: droughts contribute to woody plant encroachment, if they reduce the perennial grass cover and the latter recovers slowly, providing shrubs with a competitive advantage with regard to the acquisition of deep-soil water. Drought, in combination with high levels of grazing pressure, can function as the tipping point for an ecosystem, causing woody encroachment.
Impact on grassland ecosystems
Woody encroachment constitutes a major global shift in plant composition, structure and function, with far-reaching impact on the affected ecosystems. The accelerating rate of woody encroachment across grasslands globally may lead to an abrupt decline of this biome type, owing to human impact. For example, the Great Plains biome is found to be on the brink of collapse due to woody encroachment, with 62% of North American grassland lost to date.
Encroachment is commonly identified as a form of land degradation, with severe negative consequences for various ecosystem services, such as biodiversity, groundwater recharge, carbon storage capacity and herbivore carrying capacity. Nevertheless, negative impact is not universal. Impacts are dependent on species, scale and environmental context factors. Woody encroachment can have significant positive impacts on ecosystem services as well. Research suggests that ecosystem multifunctionality increases under woody encroachment.
Affected ecosystem services fall into the category of provisioning (e.g. forage value), regulating (e.g. hydrological regulation, soil stability) and supporting (nutrient cycling, carbon sequestration, biodiversity, primary production). There is a need for ecosystem-specific assessments and responses to woody encroachment. Generally, the following context factors determine the ecological impact of woody encroachment:
Prevailing land use: While positive ecological effects can occur in unmanaged landscapes or certain land-uses, negative ecological effects are observed especially in landscapes used for livestock grazing.
Density of woody plants: Plant diversity and ecosystem multifunctionality typically peak at intermediate levels of woody cover, while high woody cover generally has negative impacts.
Environmental conditions: Arid ecosystems show more negative responses to woody encroachment than non-arid ecosystems. In arid ecosystems woody encroachment is sometimes regarded as a form of land degradation and an expression of desertification. Due to its ambiguous role in these dry ecosystems, it has been termed "green desertification". By contrast, in ecosystems of the Mediterranean region and in Alpine grasslands, encroachment can enhance ecosystem functionality and reverse desertification trends. A key difference is that during woody encroachment the herbaceous cover in the inter-canopy zones can remain intact, while during desertification these zones degrade and turn into bare soil devoid of organic matter.
Biodiversity
Woody encroachment causes widespread declines in the diversity of herbaceous vegetation through competition for water, light, and nutrients. Bush expands at the direct expense of other plant species, potentially reducing plant diversity and animal habitats. Woody encroachment impacts animal diversity by altering the structural diversity of vegetation, which affects habitat quality and species interactions. While moderate bush cover increases diversity, excessive encroachment leads to habitat loss and reduced niches, negatively impacting species such as insects, spiders, mammals, birds, and reptiles. These changes can cascade through ecosystems, affecting herbivores and top predators and altering behaviours such as hunting efficiency and foraging strategies. These effects are context specific: a meta-analysis of 43 publications from the period 1978 to 2016 found that woody plant encroachment has distinct negative effects on species richness and total abundance in Africa, especially on mammals and herpetofauna, but positive effects in North America. However, context-specific analyses have also observed negative effects in North America. For example, piñon-juniper encroachment threatens up to 350 sagebrush-associated plant and animal species in the US. A study of 30 years of woody encroachment in Brazil found a significant decline of species richness by 27%. Shrub encroachment may result in increased vertebrate species abundance and richness. However, these encroached habitats and their species assemblages may become more sensitive to droughts. As encroachment is not a stable state, but characterised by changing bush densities, it is important to identify how different density thresholds affect plant and animal species.
Evidence of biodiversity losses includes the following:
Grasses: encroachment results in substantial loss of herbaceous diversity, with a loss of richness that is not replaced. Studies in South Africa have found that grass richness declines by more than 50% under intense woody plant encroachment. In North America, a meta-analysis of 29 studies from 13 different grassland communities found that species richness declined by an average of 45% under woody plant encroachment. Rare species and those of lower stature are at risk of going extinct. Among the severely affected flora is the small white lady's slipper. Generally, large bushes are found to coexist with the herbaceous layer, while smaller shrubs compete with it. Increased shade is a contributing factor to the reduction of grass abundance and diversity.
Mammals: woody plant encroachment has a significant impact on herbivore assemblage structure and can lead to the displacement of herbivores and other mammal types that prefer open areas. Among other factors, the predation success of various mammals is negatively impacted by bush encroachment. Among the species found to lose habitat in areas affected by woody plant encroachment are predators such as the cheetah and white-footed fox, as well as grazers such as the Common tsessebe, Hirola and plains zebra. In Latin America the habitat of the almost extinct Guanaco is threatened by woody encroachment. In some rangelands, woody plant encroachment is associated with a decline in wildlife grazing capacity of up to 80%. Among rodent species, grassland specialists typically decline in abundance under woody encroachment, while forest specialists might increase in abundance. Burrowing mammals can also lose habitat when woody encroachment occurs.
Birds: the impact of woody encroachment on bird species must be differentiated between shrub-associated species and grassland specialists. Studies find that shrub-associated species benefit from woody encroachment up to a certain threshold of woody cover (e.g. 22 percent in a study conducted in North America), while grassland specialist populations decline. Experiments in Namibia have shown that foraging birds, such as the endangered Cape vulture, avoid encroachment levels above 2,600 woody plants per hectare. In Southern Africa, woody encroachment drives population decline of 20% of the common open ecosystem bird species, on average at a rate of 50% population decline over fifty years. In North American grasslands, bird population decline as a result of woody encroachment has been identified as a critical conservation concern. Among the birds negatively affected by woody plant encroachment are the Secretarybird, Grey go-away-bird, Marico sunbird, lesser prairie chicken, Greater sage-grouse, Archer's lark, Northern bobwhite, Kori bustard, and Yellow cardinal.
Insects: woody plant encroachment is linked to species loss or reduction in species richness of insects with a preference for open habitats. Affected groups include butterflies, ants and beetles.
Vegetation structure
Encroachment often creates connected bare plant interspaces where water and wind erosion can occur.
Soil quality
Soil quality under woody encroachment in dryland ecosystems is determined by a combination of plant cover, precipitation, soil physiochemical characteristics, and topographic variables. Encroachment has a significant impact on soil bacterial communities.
Soil quality can decline significantly in arid and semi-arid regions under woody encroachment, manifesting through reduced soil moisture levels, nutrient availability and microbial activity. This drives soil drought conditions and decreases perennial herbaceous plants, while increasing bare ground. Encroachment leads to plant communities developing tougher, nutrient-poor tissues, which makes the soil more acidic, causes organic matter to build up, and reduces phosphorus levels.
By contrast, in Mediterranean and very humid climates, woody encroachment often leads to enhanced soil quality by increasing concentrations of carbon, nitrogen and phosphorus, especially in the topsoil.
Groundwater recharge and soil moisture
While water loss is common in closed-canopy woodlands (i.e. under sub-humid conditions with increased evapotranspiration), recharge can also improve under encroachment in semiarid and arid ecosystems, provided there is good ecohydrological connectivity of the respective landscape. Ecohydrological connectivity is suggested as a unifying framework for the understanding of different groundwater impacts under encroachment.
Woody plant encroachment is frequently linked to reduced groundwater recharge, based on evidence that bushes consume significantly more rainwater than grasses and encroachment alters water streamflow. Woody encroachment generally leads to root elongation in the soil and the downward movement of water is hindered by increased root density and depth. The impact on groundwater recharge differs between sandstone bedrocks and karst regions as well as between deep and shallow soils.
Besides groundwater recharge, woody encroachment increases tree transpiration and evaporation of soil moisture, due to increased canopy cover. Woody encroachment leads to the drying up of stream flows. Further, woody plant control can effectively improve the connectivity of water resources. Although this is strongly context dependent, bush control can be an effective method for the improvement of groundwater recharge.
There is limited understanding of how changes to hydrological cycles under woody encroachment affect carbon influx and efflux, with both carbon gains and losses possible. Moreover, there is evidence that woody encroachment enhances bedrock weathering, with unclear consequences for soil erosion and subsurface water flows.
However, concrete experience with changes in groundwater recharge is largely based on anecdotal evidence or regionally and temporally limited research projects. Applied research, assessing the water availability after brush removal, was conducted in Texas, US, showing an increase in water availability in all cases. Studies in the United States moreover find that dense encroachment with Juniperus virginiana is capable of transpiring nearly all rainfall, thus altering groundwater recharge significantly. An exception is shrub encroachment on slopes, where groundwater recharge can increase under encroachment. Further studies in the US indicate that also stream flow is significantly hampered by woody plant encroachment, with the associated risk of higher pollutant concentrations.
Studies in South Africa have shown that approximately 44% of rainfall is captured by woody canopies and evaporated back into the atmosphere under woody encroachment. This effect is strongest with fine-leaved species and during rainfall events of lower size and intensity. It was found that up to 10% less rain enters the soil overall under woody encroachment. A meta-analysis of studies in South Africa further finds that woody encroachment has a low water loss effect in areas with limited rainfall. Streamflow can increase after targeted removal of invasive and encroaching species, as showcased in South Africa.
Carbon sequestration
The impact of bush control on the carbon sequestration and storage capacity of the respective ecosystems is an important management consideration. Against the background of global efforts to mitigate climate change, the carbon sequestration and storage capacity of natural ecosystems receives increasing attention. Grasslands constitute 40% of Earth's natural vegetation and hold a considerable amount of the global Soil Organic Carbon. Shifts in plant species composition and ecosystem structure, especially through woody encroachment, lead to significant uncertainty in predicting carbon cycling in grasslands. Research on the changes to carbon sequestration under woody plant encroachment and its control is still insufficient. The Intergovernmental Panel on Climate Change (IPCC) states that woody plant encroachment generally leads to increased aboveground woody carbon, while below-ground carbon changes depend on annual rainfall and soil type. The IPCC points out that carbon stock changes under bush encroachment have been studied in Australia, Southern Africa and North America, but no global assessment has been done to date.
Total ecosystem carbon: considering above-ground biomass alone, encroachment can be seen as a carbon sink. However, considering the losses in the herbaceous layer as well as changes in soil organic carbon, the quantification of terrestrial carbon pools and fluxes becomes more complex and context specific. Changes to carbon sequestration and storage need to be determined for each respective ecosystem and holistically, i.e. considering both above-ground and below-ground carbon storage. Generally, elevated CO2 leads to increased woody growth, which implies that the woody plants increase their uptake of nutrients from the soil, reducing the soil's capacity to store carbon. In contrast, grasses increase little biomass above-ground, but contribute significantly to below-ground carbon sequestration. It is found that above-ground carbon gains can be completely offset by below-ground carbon losses during encroachment. It is generally observed that carbon increases overall in wetter ecosystems under encroachment and can reduce in arid ecosystems under encroachment. Some studies find that carbon sequestration can increase for a number of years under woody encroachment, while the magnitude of this increase is highly dependent on annual rainfall. It is found that woody encroachment has little impact on sequestration potential in dry areas with less than 400mm in precipitation. This implies that the positive carbon effect of woody plant encroachment may decrease with progressing climate change, particularly in ecosystems that are forecasted to experience decreased precipitation and increased temperature. Woody encroachment is further linked to fluvial erosion that in turn leads to the loss of previously stabilised organic carbon from legacy grasslands. Moreover, encroached ecosystems are more likely than open grasslands to lose carbon during droughts. Among the ecosystems expected to lose carbon storage under woody encroachment is the tundra.
Factors relevant for comparisons of carbon sequestration potentials between encroached and non-encroached grasslands include the following: above-ground net primary production (ANPP), below-ground net primary production (BNPP), photosynthesis rates, plant respiration rates, plant litter decomposition rates, and soil microbial activity. Plant biodiversity is also an important indicator, as plant diversity contributes more to soil organic carbon than the quantity of organic matter.
Above-ground carbon: woody plant encroachment implies an increase in woody plants, in most cases at the expense of grasses. Considering that woody plants have a longer lifespan and generally also more mass, woody plant encroachment typically implies an increase in above-ground carbon storage through biosequestration. Studies however find that this is dependent on climatic conditions, with aboveground carbon pools decreasing under woody encroachment where mean annual precipitation is less than 330mm and increasing where precipitation is higher. A contributing factor is that woody encroachment decreases above-ground plant primary production in mesic ecosystems.
Below-ground carbon: globally, the soil organic carbon pool is twice as large as the plant carbon pool, making its quantification essential. Soil organic carbon makes up two-thirds of total soil carbon. Comparisons of grasslands, shrublands and forests show that forests and shrublands hold more above-ground carbon, while grasslands hold more soil carbon. Generally, herbaceous plants allocate more biomass below-ground than woody plants. The impact of woody encroachment on soil organic carbon is found to be dependent on rainfall, with soil organic carbon increasing in dry ecosystems and decreasing in mesic ecosystems under encroachment. Degradation of grasslands has in some areas led to the loss of up to 40% of the ecosystem's soil organic carbon. An important factor is that under woody plant encroachment the increased photosynthetic potential is largely offset by increased plant respiration and the respective carbon losses. In tropical savanna soils, most soil organic carbon is derived from grass, not woody plants. For example, research in South Africa found that soil organic carbon from tree input matched grass-derived soil organic carbon only after 70 years of fire exclusion, challenging the view that increased tree density leads to SOC improvements. Contributing factors vary broadly in different settings, as is also evident in the role of litter. Generally, organic carbon in the topsoil can benefit from increased litter under encroachment. However, in South Africa woody plant encroachment was found to slow decomposition rates of litter, which took twice as long to decay under woody plant encroachment compared to open savannas.
Soil organic carbon changes need to be viewed at landscape level, as there are differences between under canopy and inter canopy processes. When a landscape becomes increasingly encroached and the remaining open grassland patches are overgrazed as a result, soil organic carbon may decrease.
In pastoral lands of Ethiopia, woody plant encroachment was found to have little to no positive effect on soil organic carbon and woody encroachment restriction was the most effective way to maintain soil organic carbon. In the United States, substantial soil organic carbon sequestration was observed in deeper portions of the soil, following woody encroachment.
An important factor is that rooting depth increases with woody encroachment, on average by 38 cm and up to 65 cm. Deeper rooting may promote the accumulation of organic carbon in the deep soil layers, but at the same time also lead to a positive priming effect, i.e. the stimulation of microbial activity and decomposition of organic matter. The trajectory of deep soil carbon under woody encroachment will depend on the balance of increased SOC accumulation and priming losses.
A meta-analysis of 142 studies found that shrub encroachment alters soil organic carbon (0–50 cm), with changes ranging between -50 and 300 percent. Soil organic carbon increased under the following conditions: semi-arid and humid regions, encroachment by leguminous shrubs as opposed to non-legumes, sandy soils as opposed to clay soils. The study further concludes that shrub encroachment has a mainly positive effect on top-soil organic carbon content, with significant variations among climate, soil and shrub types. There is a lack of standardised methodologies to assess the effect of woody encroachment on soil organic carbon.
Land productivity and rural livelihoods
Woody plant encroachment directly impacts land productivity, as widely documented in the context of animal carrying capacity. In the western United States, 25% of rangelands experience sustained tree cover expansion, with estimated losses for agricultural producers of $5 billion since 1990. The forage lost annually is estimated to be equal to the consumption of 1.5 million bison or 1.9 million cattle. In North America, each 1 percent increase in woody cover implies a reduction of 0.6 to 1.6 cattle per 100 hectares. In the Southern African country of Namibia, it is assumed that the agricultural carrying capacity of rangelands has declined by two-thirds due to woody plant encroachment. In East Africa there is evidence that an increase in bush cover of 10 percent reduces grazing by 7 percent, with land becoming unusable as rangeland when the bush cover reaches 90 percent.
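As a rough illustration of the stocking-rate relationship cited above, the following sketch converts a woody-cover increase into an estimated loss of cattle carrying capacity. The 0.6 to 1.6 head per 100 hectares range is taken from the North American figure mentioned above; the farm size, cover change and function names are hypothetical illustrations, not reported values.

```python
# Illustrative only: converts the per-percent stocking-loss range quoted
# above (0.6-1.6 cattle per 100 ha for each 1% increase in woody cover)
# into a farm-level estimate. Farm size and cover change are hypothetical.
def stocking_loss(cover_increase_pct: float, area_ha: float,
                  loss_per_pct_per_100ha=(0.6, 1.6)):
    """Return a (low, high) estimate of cattle no longer supported."""
    low, high = loss_per_pct_per_100ha
    scale = cover_increase_pct * area_ha / 100.0
    return low * scale, high * scale

# Hypothetical 5,000 ha ranch with a 10 percentage-point increase in woody cover:
lo, hi = stocking_loss(10, 5_000)
print(f"Estimated loss: {lo:.0f}-{hi:.0f} head of cattle")
```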
Woody encroachment is often considered to have a negative impact on rural livelihoods. In Africa, 21% of the population depend on rangeland resources. Woody encroachment typically leads to an increase in less palatable woody species at the expense of palatable grasses. This reduces the resources available to pastoral communities and rangeland-based agriculture at large. Woody encroachment has negative consequences for livelihoods especially in arid areas, which support a third of the world population's livelihoods. Woody plant encroachment is expected to lead to large-scale biome changes in Africa and experts argue that climate change adaptation strategies need to be flexible to adjust to this process. In South Africa, the shrub Seriphium plumosum is commonly referred to as "bankrupt bush" due to its association with farm productivity reductions of up to 80%.
Tourism potential
The tourism potential of land is found to decline in areas with heavy woody plant encroachment, with visitors shifting to less encroached areas that offer better visibility of wildlife.
Others
In the United States, woody encroachment has been linked to the spread of tick-borne pathogens and the respective disease risk for humans and animals. In the Arctic tundra, shrub encroachment can reduce cloudiness and contribute to a rise in temperature. In North America, significant increases in rainfall and temperature were linked to woody encroachment, amounting to values of up to 214 mm and 0.68 °C respectively. This is caused by a decrease in surface albedo.
Targeted bush control in combination with the protection of larger trees is found to improve scavenging that regulates disease processes, alters species distributions, and influences nutrient cycling.
Studies of woody plant encroachment in the Brazilian savanna suggest that encroachment renders affected ecosystems more vulnerable to climate change.
Quantification and monitoring
There is no static definition of what is considered woody encroachment, especially when encroachment of indigenous plants occurs. While it is simple to determine vegetation trends, e.g. an increase in woody plants over time, it is more complex to determine thresholds beyond which an area is to be considered as encroached. Various definitions as well as quantification and mapping methods have been developed.
Data collection can typically involve mapping and morphological characterisation of trees and shrubs, phytosociological survey of permanent plots, grid-point intercept survey of permanent plots, line-intercept surveys along transects as well as allometric shrub measurements along transects. In Southern Africa, the BECVOL method (Biomass Estimates from Canopy Volume) finds frequent application. It determines Evapotranspiration Tree Equivalents (ETTE) per selected area. This data is used for comparison against climatic factors, especially annual rainfall, to determine whether the respective area has a higher number of woody plants than is considered sustainable.
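A minimal sketch of how such survey results might be compared against a rainfall-based sustainability threshold is given below. It assumes a simple rule of thumb in which the sustainable density of tree equivalents per hectare scales with mean annual rainfall; the threshold factor, input values and function names are illustrative assumptions, not published BECVOL parameters.

```python
# Minimal sketch: flag a surveyed plot as encroached when its woody density
# (in tree equivalents per hectare, e.g. from a BECVOL-type survey) exceeds
# a rainfall-derived threshold. The threshold factor is an assumed
# placeholder, not a published BECVOL parameter.
def sustainable_density(mean_annual_rainfall_mm: float,
                        threshold_factor: float = 2.0) -> float:
    """Assumed rule of thumb: sustainable tree equivalents/ha scale with rainfall."""
    return threshold_factor * mean_annual_rainfall_mm

def is_encroached(surveyed_ette_per_ha: float,
                  mean_annual_rainfall_mm: float) -> bool:
    return surveyed_ette_per_ha > sustainable_density(mean_annual_rainfall_mm)

# Hypothetical plot: 2,400 tree equivalents/ha in a 350 mm rainfall zone.
print(is_encroached(2_400, 350))  # True -> density exceeds the assumed threshold
```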
Remote sensing imagery is increasingly used to determine the extent of woody encroachment. Limitations of this methodology include difficulties in distinguishing species and the inability to detect small shrubs. Moreover, UAV (drone)-based multispectral data and lidar data are frequently used to quantify woody encroachment. The combination of colour-infrared aerial imagery and support-vector machine classification can lead to high accuracy in identifying shrubs. The probability of woody plant encroachment for the African continent has been mapped using GIS data and the variables precipitation, soil moisture and cattle density. An exclusive reliance on remote sensing data bears the risk of wrongly interpreting woody plant encroachment, e.g. as beneficial vegetation greening. Hyperspectral vegetation indices (HVIs) can be developed to accurately separate shrub cover from green vegetation. Google Earth images have been successfully used to analyse woody encroachment in South Africa. In Namibia, the so-called Bush Information System is based on synthetic-aperture radar satellite data. Satellite remote sensing is used to determine the effect of targeted plant removal in encroached areas.
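As a rough illustration of the support-vector machine approach mentioned above, the following sketch classifies pixels into shrub and non-shrub cover using scikit-learn. The spectral band values and labels are synthetic; an operational workflow would use calibrated, georeferenced imagery and field-verified training data.

```python
# Illustrative pixel classification of shrub vs. other cover with an SVM.
# The two "bands" are synthetic reflectance values; an operational workflow
# would use calibrated, georeferenced imagery and field-verified labels.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 500  # synthetic labelled pixels

shrub = rng.normal(loc=[0.08, 0.35], scale=0.03, size=(n // 2, 2))  # label 1
other = rng.normal(loc=[0.15, 0.25], scale=0.03, size=(n // 2, 2))  # label 0
X = np.vstack([shrub, other])
y = np.array([1] * (n // 2) + [0] * (n // 2))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```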
Increasingly, machine-learning techniques and applications based on artificial intelligence are used to investigate woody plant encroachment. Among others, there has been research on computer aided analysis of visual images taken from a driving vehicle.
Rephotography is found to be an effective tool for the monitoring of vegetation change, including woody encroachment and forms the basis of various encroachment assessments.
Methods to overcome the limited availability of photographic evidence or written records include the assessment of pollen records. In a recent application, vegetation cover of the past 130 years in a woody plant encroachment area in Namibia was established.
Vegetation mapping tools developed for the use by individual land users and support organisations include the American Rangeland Analysis Platform, and the Namibian Biomass Quantification Tool.
Restoration
Brush control is the active management of the density of woody species in grasslands. Although woody encroachment in many instances is a direct consequence of unsustainable management practices, it is unlikely that the introduction of more sustainable practices alone (e.g. the management of fire and grazing regimes) will suffice to restore already degraded areas. Encroached grasslands can constitute a stable state, meaning that without intervention the vegetation will not return to its previous composition.
For decisions on appropriate control measures, it is essential that both local and global drivers of woody encroachment, as well as their interaction, are understood. Restoration must be approached as a set of interventions that iteratively move a degraded ecosystem to a new system state. Responsive measures, such as mechanical removal, are needed to restore a different balance between woody and herbaceous plants. Once a high woody plant density is established, woody plants contribute to the soil seed bank more than grasses and the lack of grasses provides less fuel for fires, reducing their intensity. This perpetuates woody encroachment and necessitates intervention if the encroached state is undesirable for the functions and use of the respective ecosystems. Most interventions constitute a selective thinning of bush densities, although in some contexts repeated clear-cutting has also been shown to effectively restore the diversity of typical savanna species. In decision making on which woody species to thin out and which to retain, structural and functional traits of the species play a key role. Bush control measures must go hand in hand with grazing management, as both are crucial factors influencing the future state of the respective ecosystems. State and Transition Models have been developed to provide management support to land users, capturing ecosystem complexities beyond succession, but their applicability is still limited.
The restoration of degraded grasslands can bring about a wide range of ecosystem service improvements. It can therewith also strengthen the drought resilience of affected ecosystems. Bush control can lead to biodiversity improvements regardless of the predominant land use.
Types of interventions
The term bush control, or brush management, refers to actions that are targeted at controlling the density and composition of bushes and shrubs in a given area. Such measures either serve to reduce risks associated with woody plant encroachment, such as wildfires, or to rehabilitate the affected ecosystems. It is widely accepted that encroaching indigenous woody plants are to be reduced in numbers, but not eradicated. This is critical as these plants provide important functions in the respective ecosystems, e.g. they serve as habitat for animals. Efforts to counter woody plant encroachment fall into the scientific field of restoration ecology and are primarily guided by ecological parameters, followed by economic indicators.
Three different categories of control measures can be distinguished:
Preventive measures: application of proven good management practices to prevent the excessive growth of woody species, e.g. through appropriate stocking rates and rotational grazing in the case of rangeland agriculture. It is generally assumed that preventive measures are a more cost-effective method to combat woody encroachment than treating ecosystems once degradation has occurred. Certain land uses and animal species can aid in preventing woody plant encroachment, for example elephants. Research on degradation tipping points suggests soil organic carbon and carbon isotopes as early-warning indicators in potentially encroached areas.
Responsive measures: the reduction of bush densities through targeted bush harvesting or other forms of removal (bush thinning).
Maintenance measures: repeated or continuous measures of maintaining the bush density and composition that has been established through bush thinning.
There is an increasing focus on the carbon sequestration impact, which differs among control measures. The application of chemicals, for example, can lead to higher carbon losses than mechanical shrub thinning.
Control measures
Natural bush control
The administration of controlled fires is a commonly applied method of bush control. The relation between prescribed fire and tree mortality is the subject of ongoing research. The success rate of prescribed fires differs depending on the season during which they are applied. In some cases, fire treatment slows down woody encroachment, but is unsuccessful in reversing it. Optimal fire management may vary depending on vegetation community, land use as well as frequency and timing of fires. Controlled fires are not only a tool to manage biodiversity, but can also be used to reduce GHG emissions by shifting fire seasonality and reducing fire intensity.
Fire was found to be especially effective in reducing bush densities when coupled with natural drought events. The combination of fire and browsers, called pyric herbivory, is also shown to have positive restoration effects. Cattle can in part substitute for large herbivores. Moreover, fires have the advantage that they consume the seeds of woody plants in the grass layer before germination, thereby reducing the grassland's sensitivity to encroachment. A prerequisite for successful bush control through fire is a sufficient fuel load; fires are thus more effective in areas where sufficient grass is available. Furthermore, fires must be administered regularly to address re-growth. Bush control through fire is found to be more effective when applying a range of fire intensities over time. Fire primarily affects shrubs at early growth stages, causing less damage to plants of greater height and crown diameter. Fuel load, and therewith the efficacy of fires for bush control, can decline due to the presence of herbivores.
Long-term research in the South African savanna found that high-intensity fire did reduce encroachment in the short-term, but not in the mid-term. In a cross-continental collaboration between South Africa and the US, a synthesis on the experience with fire as a bush control method was published.
Rewilding ecosystems with historic herbivores can further contribute to bush control. The presence of herbivores contributes to woody suppression, especially at the early demographic stages.
Variable livestock grazing can be used to reduce woody encroachment as well as re-growth after bush thinning. A well-documented approach is the introduction of larger herds of goats that feed on the woody plants, thereby limiting their growth. There is evidence that some rural farming communities have used small ruminants, like goats, to prevent woody plant encroachment for decades. Further, intensive rotational grazing, with resting periods for pasture recovery, can be a tool to limit woody encroachment. Overall, the role of targeted grazing systems as a biodiversity conservation tool is the subject of ongoing research.
Chemical bush control
Woody plant densities are frequently controlled through the application of herbicides, in particular arboricides. Commonly applied herbicides are based on the active ingredients tebuthiuron, ethidimuron, bromacil and picloram. In East Africa, the first comprehensive experiments on the effectiveness of such bush control date back to 1958–1960. There is however evidence that applied chemicals can have negative long-term effects and effectively prevent the recruitment of desired grasses and other plants. The application of non-species-specific herbicides is found to result in lower species richness than the application of species-specific herbicides. Further, arboricide application can negatively affect insect populations and arthropods, which in turn is a threat to bird populations. Scientific trials in South Africa showed that the application of herbicides has the highest success rate when coupled with mechanical bush thinning.
Mechanical bush control
Mechanical bush control involves the cutting or harvesting of bushes and shrubs with manual or mechanised equipment. Mechanical cutting of woody plants is followed by stem-burning, fire or browsing to suppress re-growth. Some studies find that mechanical bush control is more sustainable than controlled fires, because burning leads to deeper soil degradation and faster recovery of shrubs. Bush that is mechanically harvested is often burnt on piles, but can also serve as feedstock for value addition, including firewood, charcoal, animal feed, energy and construction material. Mechanical cutting is found to be effective, but requires repeated application. When woody branches are left to cover the degraded soil, this method is called brush packing. Some forms of mechanical woody plant removal involve uprooting, which tends to lead to better results in terms of the restoration of the grass layer, but can have disadvantages for chemical and microbiological soil properties.
Economics
As woody encroachment is often widespread and most rehabilitation efforts costly, funding is a key constraint. In the case of mechanical woody plant thinning, i.e. the selective harvesting, the income from downstream value chains can fund the restoration activities.
An example of highly commercialised encroacher biomass use is charcoal production in Namibia. There are also efforts to use encroaching woody species as source of alternative animal fodder. This involves either making use of the leaf material of encroaching species, or milling the entire plant.
In the same vein, the World Wildlife Fund has identified invasive and encroaching plant species as a possible feedstock for Sustainable Aviation Fuel in South Africa.
Also, Payment for Ecosystem Services and specifically Carbon Credits are increasingly explored as a funding mechanism for the control of woody encroachment. Savanna fire management is found to have potential to generate carbon revenue, with which rangeland restoration in Africa can be funded.
Challenges
Grassland restoration has generally received less attention than forest restoration during recent decades. This is partially explained by widespread opinions, such as grasslands being biodiversity-poor and providing few ecosystem services or that grasslands are a transitional biome.
Literature emphasises that a restoration of woody plant encroachment areas to a desired previous non-encroached state is difficult to achieve and the recovery of key ecosystem services may be short-lived or may not occur. Intervention methods and technologies must be context-specific to achieve their intended outcome. No single grass-bush ratio will maximise all ecosystem services.
Current efforts of selective plant removal are found to have slowed or halted woody encroachment in the respective areas, but are sometimes found to be outpaced by continuing encroachment. A meta-analysis of 524 studies on ecosystem responses to both encroachment and the removal of woody plants finds that most efforts to restore the respective ecosystems fail, while the success rate predominantly depends on encroachment stage and plant traits. It was further found that different control methods have different effects on specific ecosystem services. For example, mechanical removal of woody plants can enhance forage value, while reducing hydrological regulation. In contrast, chemical removal can enhance hydrological regulation at the expense of plant diversity. This implies that there are trade-offs to be considered for each set of control measures.
When bush thinning is implemented in isolation, without follow-up measures, grassland may not be rehabilitated. This is because such once-off treatments typically target small areas at a time and they leave plant seeds behind enabling rapid re-establishment of bushes. A combination of preventative measures, addressing the causes of woody plant encroachment, and responsive measures, rehabilitating affected ecosystems, can overcome woody plant encroachment in the long-run.
In grassland conservation efforts, the implementation of measures across networks of private lands, instead of individual farms, remains a key challenge. Due to the high cost of chemical or mechanical removal of woody species, such interventions are often implemented on a small scale, i.e. a few hectares at a time. This differs from natural control processes before human land use, e.g. widespread fires and vegetation pressure by free roaming wildlife. As a result, the interventions often have limited impact on the continued dispersal and spread of woody plants. For this reason, a key strategy developed in Northern America is termed "defending the core". It involves the systematic expansion of healthy areas of grasslands to the outside, i.e. thinning of bush stands at the perimeter.
Countering woody encroachment can be costly and largely depends on the financial capacity of land users. Linking bush control to the concept of Payment for ecosystem services (PES) has been explored in some countries.
The perceptions and priorities of land users, in terms of ecosystem services to be restored, are often not sufficiently known or taken into consideration when undertaking or promoting restoration measures.
Managing the woody cover alone does not guarantee productive ecosystems, as the cover and diversity of desired grass species must also form part of the management considerations.
Relation to climate change mitigation and adaptation
National carbon accounting and related trade-offs
Grassland conservation can make a significant contribution to global carbon sequestration targets, but compared to sequestration potential in forestry and agriculture, this is still insufficiently explored and implemented. Detailed accounting for the effect of woody encroachment on global carbon pools and fluxes is unclear. Given scientific uncertainties, it varies widely how countries factor woody encroachment and the control thereof into their national Greenhouse Gas Inventories.
In early carbon sink quantifications, woody encroachment was found to account for as much as 22% to 40% of the regional carbon sink in the USA. Woody encroachment is however seen as a key uncertainty in the US carbon balance. The sink capacity is found to decrease when encroachment has reached its maximum extent. In Australia, too, woody encroachment constitutes a high proportion of the national carbon account. Australia's carbon plan is however criticised for ignoring the carbon potential of the soil, which in drylands is found to be seven to one hundred times larger than that of vegetation. In South Africa, woody encroachment was estimated to have added around 21,000 Gg CO2 to the national carbon sink, while it has been highlighted that especially the loss of grass roots leads to losses of below-ground carbon, which is not fully compensated by gains of above-ground carbon.
It is suggested that the classification of encroached grasslands and savannas as carbon sinks may often be incorrect, underestimating soil organic carbon losses. Beyond difficulties to conclusively quantify the changes in carbon storage, promoting carbon storage through woody encroachment can constitute a trade-off, as it may reduce biodiversity of savanna endemics and core ecosystem services, like land productivity and water availability.
Several trade-offs must be considered in land management decisions, such as a possible carbon-biodiversity trade-off. It can have severe negative consequences if woody encroachment, or the invasion of alien woody species, is accepted and seen as a way to increase ecosystem CO2 sink capacities. In its 2022 Sixth Assessment Report, the Intergovernmental Panel on Climate Change (IPCC) identifies woody encroachment as a contributor to land degradation, through the loss of open ecosystems and their services. The report further stipulates that while there may be slight increases in carbon, woody encroachment at the same time masks negative impacts on biodiversity and water cycles and therewith livelihoods.
Carbon-focused restoration approaches remain vital and can be balanced with the need to enhance other ecosystem services through spatially mixed management strategies, e.g. by leaving encroached patches within otherwise thinned areas.
Conflicting climate change mitigation measures
Woody encroachment can be exacerbated when affected ecosystems become the target of misguided afforestation. It is found that grasslands are frequently misidentified as degraded forests and targeted by afforestation efforts. According to an analysis of areas identified to have forest restoration potential by the World Resources Institute, this includes up to 900 million hectares of grasslands. In Africa alone, 100 million hectares of grasslands are found to be at risk from misdirected afforestation efforts. Among the areas mapped as degraded forests are the Serengeti and Kruger National Parks, which have not been forested for several million years. Over half of all tree-planting projects in Africa are implemented in savannah grasslands.
Research in Southern Africa suggests that tree planting in such ecosystems does not lead to increased soil organic carbon, as the latter is predominantly grass-derived. The Intergovernmental Panel on Climate Change (IPCC) also states that mitigation action, such as reforestation or afforestation, can encroach on land needed for agricultural adaptation and therewith threaten food security, livelihoods and ecosystem functions.
Encroachment control as adaptation measure
Some countries, for example South Africa, acknowledge inconclusive evidence on the emissions effect of bush thinning, but strongly promote it as a means of climate change adaptation. Geographic selection of intervention areas, targeting areas that are at an early stage of encroachment, can minimise above-ground carbon losses and therewith minimise the possible trade-off between mitigation and adaptation. The Intergovernmental Panel on Climate Change (IPCC) reflects on this trade-off: "This variable relationship between the level of encroachment, carbon stocks, biodiversity, provision of water and pastoral value can present a conundrum to policymakers, especially when considering the goals of three Rio Conventions: UNFCCC, UNCCD and UNCBD. Clearing intense woody plant encroachment may improve species diversity, rangeland productivity, the provision of water and decrease desertification, thereby contributing to the goals of the UNCBD and UNCCD as well as the adaptation aims of the UNFCCC. However, it would lead to the release of biomass carbon stocks into the atmosphere and potentially conflict with the mitigation aims of the UNFCCC." The IPCC further lists bush control as a relevant measure under ecosystem-based adaptation and community-based adaptation.
See also
Convention on Biological Diversity
Effects of climate change on plant biodiversity
Environmental restoration
Farmer-managed natural regeneration
Grassland degradation
Land rehabilitation
Land restoration
Rangeland management
United Nations Convention to Combat Desertification
References
Sources
UNCCD – 'Silent demise' of vast rangelands threatens climate, food, wellbeing of billions (2024)
IPCC – Chapter 2: Terrestrial and Freshwater Ecosystems and Their Services. In: Climate Change 2022: Impacts, Adaptation and Vulnerability (2022)
IPCC – Cross-Chapter Paper 3: Deserts, Semiarid Areas and Desertification. In: Climate Change 2022: Impacts, Adaptation and Vulnerability (2022)
IPCC Special Report – Climate Change and Land – Climate change, desertification, land degradation, sustainable land management, food security, and greenhouse gas fluxes in terrestrial ecosystems (2019)
Twidwell, Dirac; Fogarty, Dillon T. (2021). A guide to reducing risk and vulnerability to woody encroachment in rangelands (PDF). University of Nebraska-Lincoln.
De Klerk, J. N. (2004) Bush Encroachment in Namibia
Department of Environmental Affairs (2019) Towards a Policy on Indigenous Bush Encroachment in South Africa
Brush management as a rangeland conservation strategy: A critical evaluation, U.S. Department of Agriculture (2011)
External links
Websites
Stockholm Resilience Centre – Regime Shifts DataBase: Bush Encroachment
Panorama.Solutions – Rangeland Restoration through Bush Control
The Rangelands Partnership – Global Rangelands Portal
Wrangle – World Rangeland Learning Experience
Articles
Rural 21 Magazine – Namibia's bush business
Daily Maverick: Biological vandalism— the world's wild savannas may be doomed, but few pay attention
Daily Maverick: Fierce fires can help save the veld from bush encroachment and extinction, says top UCT ecologist
Agricultural land
Biodiversity
Ecosystems
Environmental issues
Ecological restoration
Environment of Namibia
Environment of South Africa
Global environmental issues
Grasslands
Human impact on the environment
Land management
Land use
Tropical and subtropical grasslands, savannas, and shrublands
Temperate grasslands, savannas, and shrublands | Woody plant encroachment | [
"Chemistry",
"Engineering",
"Biology"
] | 12,307 | [
"Symbiosis",
"Ecological restoration",
"Grasslands",
"Ecosystems",
"Biodiversity",
"Environmental engineering"
] |
54,508,765 | https://en.wikipedia.org/wiki/NGC%207068 | NGC 7068 is a spiral galaxy located about 215 million light-years away in the constellation of Pegasus. NGC 7068 was discovered by astronomer Albert Marth on November 7, 1863.
On June 26, 2013, a Type Ia supernova designated SN 2013ei was discovered in NGC 7068.
References
External links
Spiral galaxies
Pegasus (constellation)
7068
66765
Astronomical objects discovered in 1863 | NGC 7068 | [
"Astronomy"
] | 85 | [
"Pegasus (constellation)",
"Constellations"
] |
53,203,619 | https://en.wikipedia.org/wiki/IBM%201017 | The IBM 1017 is a table-top paper tape reader from IBM introduced in 1968.
The 1017 reads 5-, 6-, 7-, and 8-track paper or polyester tape at 120 characters per second (cps). Two models were available: the model 1 reads strips of tape, while the model 2 has supply and take-up reels and can read either strips or reels.
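As a rough illustration of what 120 cps implies in practice, the sketch below (in Python) estimates the time needed to read a full reel; the reel length and row density are assumed values, not figures from the source.

```python
# Back-of-the-envelope throughput for a 120 cps paper tape reader.
# Assumed (not from the source): 1,000-foot reel, 10 character rows per inch.
CHARS_PER_SECOND = 120
ROWS_PER_INCH = 10
REEL_LENGTH_FEET = 1000

chars_on_reel = REEL_LENGTH_FEET * 12 * ROWS_PER_INCH   # 120,000 characters
read_time_minutes = chars_on_reel / CHARS_PER_SECOND / 60
print(f"{chars_on_reel} characters, about {read_time_minutes:.0f} minutes to read")
```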
IBM 1018
The IBM 1018 is a paper tape punch from IBM introduced in 1968.
The 1018 punches paper or polyester tape at 120 cps.
Attachment
The 1017 and 1018 can attach to the multiplexor channel of an IBM System/360 Model 25, Model 30, Model 40, or Model 50, via an IBM 2826 control unit. They can also attach to an IBM 2770 Data Communication System.
The 1017 and 1018 are supported by DOS/360.
References
IBM mainframe peripherals
Computer storage tape media
History of computing | IBM 1017 | [
"Technology"
] | 197 | [
"Computers",
"History of computing"
] |
53,204,711 | https://en.wikipedia.org/wiki/LG%20Watch%20Style | The LG Watch Style is a smartwatch released by LG Corporation on 9 February 2017. The device is one of the first smartwatches to ship with Android Wear version 2.0.
References
External links
Android (operating system) devices
Products introduced in 2017
Wear OS devices
Smartwatches
LG Electronics products | LG Watch Style | [
"Technology"
] | 64 | [
"Wear OS devices",
"Smartwatches"
] |
53,204,742 | https://en.wikipedia.org/wiki/Octaethylporphyrin | Octaethylporphyrin (H2OEP) is an organic compound that is a relative of naturally occurring heme pigments. The compound is used in the preparation of models for the prosthetic group in heme proteins. It is a dark purple solid that is soluble in organic solvents. As its conjugate base OEP2-, it forms a range of transition metal porphyrin complexes. When treated with ferric chloride in hot acetic acid solution, it gives the square pyramidal complex Fe(OEP)Cl. It also forms the square planar complexes Ni(OEP) and Cu(OEP).
Contrast with other porphyrins
Unlike complexes of the naturally occurring porphyrins, OEP complexes have four-fold symmetry, which simplifies spectroscopic analysis. In contrast to tetraphenylporphyrin and related analogues, H2OEP features unprotected meso positions. In this way, it is a more accurate model for naturally occurring porphyrins.
Synthesis
H2OEP is prepared by condensation of 3,4-diethylpyrrole with formaldehyde.
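As a schematic illustration (the stoichiometry below is not spelled out in the source), the condensation first gives the non-aromatic octaethylporphyrinogen, which is subsequently oxidized to the aromatic porphyrin:

    4 C8H13N (3,4-diethylpyrrole) + 4 CH2O → C36H52N4 (octaethylporphyrinogen) + 4 H2O
    C36H52N4 → C36H46N4 (H2OEP) + 6 [H]   (oxidation)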
References
Porphyrins | Octaethylporphyrin | [
"Chemistry"
] | 250 | [
"Porphyrins",
"Biomolecules"
] |
53,205,218 | https://en.wikipedia.org/wiki/Trash%20interceptor | A trash interceptor is a device in a river to collect and remove floating debris – before the debris flows out into a harbor, for instance.
Mr. Trash Wheel
Installed in May 2014, Mr. Trash Wheel, officially the Inner Harbor Water Wheel, is the world's first permanent water wheel trash interceptor. It sits at the mouth of the Jones Falls River in Baltimore's Inner Harbor. A February 2015 agreement with a local waste-to-energy plant is believed to make Baltimore the first city to use reclaimed waterway debris to generate electricity.
The Jones Falls river watershed drains fifty-eight square miles of land outside of Baltimore and is a significant source of trash that enters the harbor. Garbage collected by Mr. Trash Wheel could come from anywhere in the watershed. Operated by solar and hydro power, the wheel moves continuously, removing garbage and dumping it into an attached dumpster; its daily capacity is estimated at 25 tons. In its first 18 months of operation, it removed more than 350 tons of litter from Baltimore's landmark and tourist attraction, including approximately 200,000 bottles, 173,000 potato chip bags, and 6.7 million cigarette butts. The water wheel has been very successful at trash removal, visibly decreasing the amount of garbage that collects in the harbor, especially after a rainfall.
After the success of Mr. Trash Wheel, the Waterfront Partnership raised money to build a second water wheel at the end of Harris Creek, an entirely piped stream that flows beneath Baltimore's Canton neighborhood and empties into the Baltimore Harbor. The planned new water wheel was inaugurated in December 2016 and dubbed "Professor Trash Wheel". Two more trash wheels, "Captain Trash Wheel" and "Gwynnda the Good Wheel of the West", were added in 2018 and 2021 respectively.
River Thames passive debris collector
There are several passive debris collectors (PDCs) on the River Thames in London, including one by the Houses of Parliament. Unlike Baltimore's Mr. Trash Wheel they are totally passive and any debris collected by them must be lifted out by the use of a crane-equipped boat.
See also
Bubble curtain – used to reduce liquid or debris floating on the surface from spreading
The Ocean Cleanup – nonprofit environment organization building interceptors for 1,000 rivers
References
Inner Harbor, Baltimore
Environmental mitigation | Trash interceptor | [
"Chemistry",
"Engineering"
] | 471 | [
"Environmental mitigation",
"Environmental engineering"
] |
53,205,844 | https://en.wikipedia.org/wiki/Tenth%20Cambridge%20survey | The Tenth Cambridge Survey (10C) is a radio survey at 15.7GHz using the Arcminute Microkelvin Imager Large Array, operated by the Cavendish Astrophysics Group at the University of Cambridge.
References
10 | Tenth Cambridge survey | [
"Astronomy"
] | 48 | [
"Astronomical catalogue stubs",
"Astronomy stubs"
] |
53,206,138 | https://en.wikipedia.org/wiki/Women%27s%20Technology%20Empowerment%20Centre | The Women's Technology Empowerment Centre (W.TEC) is a non-profit organization that provides technology education for women and girls in Nigeria. W.TEC offers services and programs including mentoring, training, technology camps, awareness campaigns, collaborative projects, and research and publication in order to empower women.
History
Oreoluwa Lesi had noticed a gender gap in the knowledge of Information and communications technology in Nigeria and other African countries. She founded the organization in 2008 in Lagos.
Over the years, W.TEC has extended its scope, reaching over 26,000 participants and expanding into Kwara and Anambra states.
In 2017, Facebook partnered with W.TEC to improve Internet safety.
Services
W.TEC's work includes technology-training programmes for girls through intensive girls-only camps and technology clubs in W.TEC academy. During the camps and after-school clubs, the girls learn to create and innovate with technology by building and making websites, web applications, video games, films, and other digital content. In the words of Adeola Akinyemiju, the Finance Director, W.TEC "is training girls on electronic and digital circuit technologies, web designs to bridge the gender gap in the engineering space." The organisation advocates against and works to break down gender stereotypes, especially with respect to careers.
Programmes operated by W.TEC include:
W.TEC Academy (Technology Afterschool Club)
Staying Safe Online
Research
Reception
In March 2019, Tim Berners-Lee, the inventor of the World Wide Web, visited W.TEC as part of a worldwide tour in celebration of the 30th anniversary of the Web. During his visit, he spoke about the "Contract for the Web". He later remarked that his audience, largely composed of young girls, had "wonderful energy and creativity". In 2020, Time magazine asked Tim Berners-Lee to write to a young person or group of young people of his own choosing. He chose the girls of W.TEC.
Awards
Plan International: Recognition for Supporting Girls in ICT (2009)
International Telecommunication Union (ITU): Gender Equality Mainstreaming – Technology (GEM-TECH) Award (finalist; Category 5: Closing the ICT Gender Gap; 2014)
Nigeria Internet Registration Association (NIRA): Presidential Award for Women's Development (2019)
EQUALS in Tech Awards (Skills Category; 2019)
ITU: WSIS Prizes Champion (Access to Information and Knowledge Category; 2020)
References
Non-profit organizations based in Nigeria
Organizations for women in science and technology
Women's organizations based in Nigeria
Organizations established in 2008 | Women's Technology Empowerment Centre | [
"Technology"
] | 535 | [
"Organizations for women in science and technology",
"Women in science and technology"
] |
53,207,054 | https://en.wikipedia.org/wiki/Carbohydrate%20Structure%20Database | Carbohydrate Structure Database (CSDB) is a free curated database and service platform in glycoinformatics, launched in 2005 by a group of Russian scientists from N.D. Zelinsky Institute of Organic Chemistry, Russian Academy of Sciences. CSDB stores published structural, taxonomical, bibliographic and NMR-spectroscopic data on natural carbohydrates and carbohydrate-related molecules.
Overview
The main data stored in CSDB are carbohydrate structures of bacterial, fungal, and plant origin. Each structure is assigned to an organism and is provided with the link(s) to the corresponding scientific publication(s), in which it was described. Apart from structural data, CSDB also stores NMR spectra, information on methods used to decipher a particular structure, and some other data.
CSDB provides access to several carbohydrate-related research tools:
Simulation of 1D and 2D NMR spectra of carbohydrates (GODDESS: glycan-oriented database-driven empirical spectrum simulation).
Automated NMR-based structure elucidation (GRASS: generation, ranking and assignment of saccharide structures).
Statistical analysis of structural feature distribution in glycomes of living organisms
Generation of optimized atomic coordinates for an arbitrary saccharide and subdatabase of conformation maps.
Taxon clustering based on similarities of glycomes (carbohydrate-based tree of life)
Glycosyltransferase subdatabase (GT-explorer)
History and funding
Until 2015, Bacterial Carbohydrate Structure Database (BCSDB) and Plant&Fungal Carbohydrate Structure Database (PFCSDB) databases existed in parallel. In 2015, they were joined into the single Carbohydrate Structure Database (CSDB). The development and maintenance of CSDB have been funded by International Science and Technology Center (2005-2007), Russian Federation President grant program (2005-2006), Russian Foundation for Basic Research (2005-2007,2012-2014,2015-2017,2018-2020), Deutsches Krebsforschungszentrum (short-term in 2006-2010), and Russian Science Foundation (2018-2020).
Data sources and coverage
The main sources of CSDB data are:
Scientific publications indexed in dedicated citation databases, including NCBI PubMed and Thomson Reuters Web of Science (approx. 18000 records).
CCSD (CarbBank) database (approx. 3000 records).
The data are selected and added to CSDB manually by browsing original scientific publications. The data originating from other databases are subject to error-correction and approval procedures.
As of 2017, the coverage of bacteria and archaea is ca. 80% of carbohydrate structures published in the scientific literature. The time lag between the publication of relevant data and their deposition into CSDB is about 18 months. Plants are covered up to 1997, and fungi up to 2012.
CSDB does not cover data from the animalia domain, except unicellular metazoa. There are a number of dedicated databases on animal carbohydrates, e.g. UniCarbKB or GLYCOSCIENCES.de.
CSDB is reported as one of the biggest projects in glycoinformatics. It is employed in structural studies of natural carbohydrates and in glyco-profiling.
The content of CSDB has been used as a data source in other glycoinformatics projects.
Deposited objects
Molecular structures of glycans, glycopolymers and glycoconjugates: primary structure, aglycon information, polymerization degree and class of molecule. Structural scope includes molecules composed of residues (monosaccharides, alditols, amino acids, fatty acids etc.) linked by glycosidic, ester, amidic, ketal, phospho- or sulpho-diester bonds, in which at least one residue is a monosaccharide or its derivative.
Bibliography associated with structures: imprint data, keywords, abstracts, IDs in bibliographic databases
Biological context of structures: associated taxon, strain, serogroup, host organism, disease information. The covered domains are: prokaryotes, plants, fungi and selected pathogenic unicellular metazoa. The database contains only glycans originating from these domains or obtained by chemical modification of such glycans.
Assigned NMR spectra and experimental conditions.
Glycosyltransferases associated with taxons: gene and enzyme identifiers, full structures, donor and substrates, methods used to prove enzymatic activity, trustworthiness level.
References to other databases
Other data collected from original publications
Conformation maps of disaccharides derived from molecular dynamics simulations.
Interrelation with other databases
CSDB is cross-linked to other glycomics databases, such as MonosaccharideDB, Glycosciences.DE, NCBI PubMed, NCBI Taxonomy, NLM catalog, International Classification of Diseases 11, etc. Besides a native notation, CSDB Linear, structures are presented in multiple carbohydrate notations (SNFG, SweetDB, GlycoCT, WURCS, GLYCAM, etc.). CSDB is exportable as a Resource Description Framework (RDF) feed according to the GlycoRDF ontology.
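As a minimal sketch of how such an RDF export could be consumed (the filename below is hypothetical and not taken from CSDB documentation), a locally saved dump can be loaded and inspected with the Python rdflib library:

```python
# Minimal sketch: load a locally saved RDF export and inspect it with rdflib.
# "csdb_export.rdf" is a hypothetical filename; the real feed follows the
# GlycoRDF ontology.
from rdflib import Graph

g = Graph()
g.parse("csdb_export.rdf")            # serialization format inferred from the file
print(f"Loaded {len(g)} triples")

# Print a few distinct predicates to get a feel for the ontology terms in use.
for predicate in sorted({str(p) for _, p, _ in g})[:10]:
    print(predicate)
```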
External links
CSDB web site
CSDB usage examples
CSDB technical documentation
CSDB Linear (structure encoding notation)
Carbohydrate databases registered in NAR collection
Carbohydrate databases in the recent decade (lection)
References
Biochemistry databases
Carbohydrates
Glycomics | Carbohydrate Structure Database | [
"Chemistry",
"Biology"
] | 1,210 | [
"Biomolecules by chemical classification",
"Carbohydrates",
"Biochemistry databases",
"Organic compounds",
"Glycomics",
"Carbohydrate chemistry",
"Biochemistry",
"Glycobiology"
] |
53,207,360 | https://en.wikipedia.org/wiki/Backyard%20Worlds | Backyard Worlds: Planet 9 is a NASA-funded citizen science project which is part of the Zooniverse web portal. It aims to discover new brown dwarfs, faint objects that are less massive than stars, some of which might be among the nearest neighbors of the Solar System, and might conceivably detect the hypothesized Planet Nine. The project's principal investigator is Marc Kuchner, an astrophysicist at NASA's Goddard Space Flight Center.
Origins
Backyard Worlds was launched in February 2017, shortly before the 87th anniversary of the discovery of Pluto, which until its reclassification as a dwarf planet in 2006 was considered the Solar System's ninth major planet. Since that reclassification, evidence has come to light that there may be another planet located in the outer region of the Solar System far beyond the Kuiper belt, most commonly referred to as Planet Nine. This hypothetical new planet would be located so far from the Sun that it would reflect only a very small amount of visible light, rendering it too faint to be detected in most astronomical surveys conducted to date. However, models of the conjectured planet's atmosphere suggest that methane condensation could in some cases make it detectable in infrared images captured by the Wide-field Infrared Survey Explorer (WISE) space telescope. Due to the effects of proper motion and parallax, Planet Nine would appear to move in a distinctive way between images taken of the same patch of sky at different times. In addition to Planet Nine, other objects of interest, such as undiscovered nearby brown dwarfs, would also be seen to move in the project's images.
Project description
Citizen scientists accessing the website search through a flip book-style animation of specially-processed mid-infrared images captured by WISE known as unWISE coadds, taken with filters at the wavelengths of 3.4 and 4.6 micrometers. The coadded unWISE images permit fainter objects to be detected than previous processing of WISE imagery allowed. In the flip books these coadds are differenced, a process designed to remove most of the signal from stationary objects, leaving moving objects intact. The aim is to identify points of light that move between the flip book frames, including slower-moving "dipoles". Citizen scientists who spot a moving object are encouraged to fill out a "Think You've Got One" form which the project scientists review to confirm whether there is motion. The images contain instrumental artifacts and are noisy, which hampers the use of automated image processing software and makes the task ideal for exploiting human visual recognition capabilities. Additionally, to improve the ability to detect objects, some participants have created their own tools such as Wiseview, a web-based animation visualization tool.
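The core idea of differencing two epochs so that stationary sources cancel while movers leave residuals can be illustrated with a short Python/NumPy sketch (synthetic data only; this is not the project's actual pipeline):

```python
# Illustrative sketch: difference two epochs of the same sky patch so that
# static sources cancel and a moving source leaves tell-tale residuals.
import numpy as np

rng = np.random.default_rng(0)
epoch1 = rng.normal(0.0, 1.0, (64, 64))   # background noise, epoch 1
epoch2 = rng.normal(0.0, 1.0, (64, 64))   # background noise, epoch 2

epoch1[30, 30] += 50.0                     # static star: same pixel in both epochs
epoch2[30, 30] += 50.0
epoch1[10, 20] += 30.0                     # moving source: shifts by a few pixels
epoch2[10, 23] += 30.0

diff = epoch2 - epoch1                     # the static star cancels out
hits = np.argwhere(np.abs(diff) > 10.0)    # residuals well above the noise
print(hits)                                # only the mover's old and new positions remain
```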
Once candidates have been identified the science team follow-up the most scientifically interesting objects using ground-based telescopes (at sites such as Mont Mégantic Observatory, Apache Point Observatory, W. M. Keck Observatory, Las Campanas Observatory, Gemini Observatory and the NASA Infrared Telescope Facility) and space telescopes (principally the Spitzer Space Telescope and the Hubble Space Telescope), in order to clarify their nature and assign a spectral type if possible.
The project has been awarded a grant from NASA's Astrophysics Data Analysis Program which will fund it until 2020.
In November 2018, the project was "rebooted", with new images and reduced noise. By August 2020, more than 100,000 citizen scientists worldwide had taken part in the project.
Cool Neighbors project
In June 2023, the project "Backyard Worlds: Cool Neighbors" was launched. The "Planet 9" predecessor was focused on finding a hypothesized outer planet, an approach that is not ideal for finding faint brown dwarfs. The new project has switched its focus to searching for faint and cool Y-dwarfs. The "Cool Neighbors" project pre-selects its images with the help of machine learning.
Project status
In December 2017, seven new brown dwarfs were confirmed, as well as two cool subdwarfs. The spectral types of the new brown dwarfs were T0, T2.8, T5, T6, T6.5, and two of type T8. In addition, there were 337 brown dwarf candidates awaiting spectra for confirmation.
As of the first anniversary of the project in February 2018, the project had discovered 17 brown dwarfs and two cool subdwarfs. The coldest object discovered is of spectral type T9, which raises hopes of discovering type Y dwarfs in the future. In addition, a spectrum was also taken of one possibly variable object of unknown type that does not actually exhibit proper motion. There are 432 objects of interest awaiting verification, of which 38 are Y dwarf candidates.
In July 2018 an update on the project's blog stated that in total 42 brown dwarfs had been spectroscopically confirmed from a list of 879 candidates. Fourteen of those confirmed are closer to the solar system than .
As of July 2019, there are 1305 candidate objects to be followed up, of which there are 131 confirmed objects: 70 dwarfs of type T and 61 dwarfs of type L. Of the candidate and confirmed brown dwarfs, 55 of them are closer to the solar system than 20 parsecs. There are also roughly 100 Y dwarf candidates.
At the 235th meeting of the American Astronomical Society in January 2020 a summary of the current status of the project was presented and this included 1503 L, T and Y dwarf candidates. In total 221 spectra have been taken of candidate objects.
Published discoveries
WISEA 1101+5400
In June 2017, it was announced that Backyard Worlds had made its first official discovery: a brown dwarf designated WISEA 1101+5400, of spectral type T5.5 and located 34 parsecs (111 light years) from Earth. A paper announcing the discovery was accepted for publication in Astrophysical Journal Letters, and Backyard Worlds now holds the record among all Zooniverse projects as having the shortest time from project launch to first publication.
LSPM J0207+3331
In October 2018, a participant in the project discovered LSPM J0207+3331, the oldest and coldest white dwarf known to host a circumstellar disk, despite being 3 billion years old. The time since this star became a white dwarf is far longer than the expected timescale for such disks to be cleared from a system. The disk consists of two rings at different temperatures. This star has been studied with the Keck telescope and is the subject of ongoing research.
W2150AB
At the 235th meeting of the American Astronomical Society in January 2020, the discovery of the wide brown dwarf binary W2150AB was presented by Jacqueline Faherty. The L1+T8 co-moving system is separated by 341 au and is one of three brown dwarf binary systems in which both objects are easily resolved by ground-based telescopes. The system has the lowest gravitational binding energy of any brown dwarf binary that is not young and whose primary is an L dwarf or later.
WISE J0830+2837
The discovery of WISE J0830+2837, the first Y-dwarf discovered by volunteers, was also presented at the 235th meeting by project scientist Daniella Bardalez Gagliuffi. The Y-dwarf was not detected by the Hubble Space Telescope, but the Spitzer Space Telescope did detect it because Spitzer observes at longer wavelengths of light. It is about 11.2 parsecs (36.5 light years) distant and has a temperature of about 350 K (77 °C or 170 °F). This estimated temperature would place it between the majority of the Y-dwarf population so far identified and WISE 0855−0714, the coldest object of this type known.
T subdwarfs
A paper was published in the Astrophysical Journal in July 2020 reporting the discovery of two unusual brown dwarfs; WISEA J041451.67-585456.7 was discovered by Backyard Worlds volunteers and WISEA J181006.18-101000.5 by the NEOWISE Proper Motion Survey, also with the aid of a Backyard Worlds citizen scientist. These high-proper motion objects display unique colors and near-infrared spectra that do not fully match current models. The models producing the best matches to the spectra imply the brown dwarfs have [Fe/H] ≤ -1, meaning they have extremely sub-solar metallicity, containing far lower amounts of elements heavier than hydrogen or helium compared to the Sun. The estimates from the model spectra suggest that these objects have up to 30 times less iron than typical for known brown dwarfs. The authors argue that the spectral properties combined with the estimated low temperatures of approximately 1200-1400 K make these brown dwarfs likely the first extreme subdwarfs of the T spectral class (esdTs) to be identified. The extremely low metallicity implies these brown dwarfs are very old, approximately 10 billion years, as the galaxy at this time would have featured lower quantities of heavy elements. This provides evidence that substellar objects were able to form in the low metallicity environment of the Milky Way's past.
A study by Lodieu et al. observed WISE1810 with a range of ground-based telescopes, using imaging and spectroscopy. They find a closer distance of parsec, a radius of and a mass of . This makes WISE1810 the closest extreme ultra-cool subdwarf and the closest extreme metal-poor brown dwarf known to science, as of June 2022. The optical and infrared spectrum does not show any methane or carbon monoxide absorption, which is expected at these temperatures of about 800 K, and the WISE photometry suggest a depleted methane atmosphere. Only H2 CIA and water vapor absorption is detected, suggesting a carbon-deficient and metal-poor atmosphere, or alternatively an oxygen-enhanced atmosphere.
A study in 2024 discovered three additional T-type subdwarfs and introduced a new classification system for T subdwarfs. This system classified WISE 1810 as an esdT3: and WISE 0414 as an esdT6:. The study also found L and T dwarfs with unusual galactic orbits. 2MASS J053253.46+824646.5 (a previously known L subdwarf) and CWISE J113010.07+313944.7 (a mild T subdwarf) were identified as possible members of the Thamnos stellar streams. These two brown dwarfs are on a retrograde Galactic orbit. CWISE J155349.96+693355.2 was found to be a possible member of the Helmi stream (prograde galactic orbit). Three T dwarfs (SDSS J014016.89+015054.1, CWISEP J111055.12-174738.2, and CWISEP J145837.91+173450.1) were found to have high metallicity and might be part of the high-velocity thick disk. This population originates from the inner Milky Way and was scattered outwards.
95 cold brown dwarfs observed with Spitzer
In August 2020, the Backyard Worlds team published a paper in the Astrophysical Journal detailing follow-up conducted using the Spitzer Space Telescope on a sample of the coldest discoveries that had been made before the telescope was decommissioned. Ninety-five had Spitzer mid-infrared colors consistent with being cold brown dwarfs, with 75 of these having their proper motion confirmed by comparison to their position in WISE images. Among the discoveries highlighted as most significant were: 3 possible T subdwarfs based on high tangential velocity estimates, a rare widely separated T8 companion to the white dwarf LSPM J0055+5948, and 5 new Y dwarfs, four of which (including the previously published WISE J0830+2837) have Spitzer colors indicating spectral types of Y1 or later, with at most 6 brown dwarfs this cold previously known. The T8 companion to the white dwarf LSPM J0055+5948 could be the oldest (7-13 billion years old) brown dwarf known to science, together with Wolf 1130C (>10 billion years old).
Co-moving benchmark systems
The Backyard Worlds project found additional co-moving systems. 34 low-mass co-moving companions were discovered in 2022 with the NOIRLab Source Catalog DR2. Later in March 2024 an additional 89 ultracool dwarf companions were identified. This study increased the number of ultracool companions to FGK stars by about 42%. These benchmark systems represent a wide variety of systems, including six systems with white dwarf hosts, systems with binary hosts or companions that are binaries, systems with old or young ages, systems with red or blue spectral types and systems with a wide separation of >1000 astronomical units (AU). One young co-moving system consists of GJ 900, a K7+M4+M6 triple star system and the T9-dwarf CW2335+0142, which is a planetary-mass object (~10.5 ). Another notable system is CW0627−0028AB, which is a wide T0blue+T3 dwarf system or a possible triple (L5+T2.5)+T3 system. If the distance is confirmed, it would be the widest substellar binary discovered at a separation of about 860 AU. The brown dwarf companion CWISE J060202.17-462447.8 (~52 ) to the white dwarf WD J060159.98-462534.40 is an additional contender for the oldest brown dwarf with an age of billion years. Additional M+T co-moving systems were discovered in April 2024 in a collaborative work together with the CatWISE team. 13 new systems were discovered, representing a 60% increase of the number of M+T systems. The sample includes young and old objects, including the candidate planetary-mass companion 2MASS J05581644–4501559 B and UCAC3 52–1038 B, which is on a wide 7100 AU orbit. In December 2024 a team published a list of 51 ultracool dwarfs that are co-moving with white dwarfs. Some of these were also discovered by Backyard Worlds citizen scientists.
Additional discoveries
This list contains additional notable discoveries by the Backyard Worlds: Planet 9 project.
See also
Amateur astronomy
Citizen science
Zooniverse projects:
References
External links
Astronomy websites
Astronomy projects
Human-based computation
Citizen science
Internet properties established in 2017
Exoplanet search projects | Backyard Worlds | [
"Astronomy",
"Technology"
] | 2,991 | [
"Exoplanet search projects",
"Works about astronomy",
"Information systems",
"Astronomy projects",
"Human-based computation",
"Astronomy websites"
] |
53,208,331 | https://en.wikipedia.org/wiki/Bio-ink | Bio-inks are materials used to produce engineered/artificial live tissue using 3D printing. These inks are mostly composed of the cells that are being used, but are often used in tandem with additional materials that envelope the cells. The combination of cells and usually biopolymer gels are defined as a bio-ink. They must meet certain characteristics, including such as rheological, mechanical, biofunctional and biocompatibility properties, among others. Using bio-inks provides a high reproducibility and precise control over the fabricated constructs in an automated manner. These inks are considered as one of the most advanced tools for tissue engineering and regenerative medicine (TERM).
Like the thermoplastics that are often utilized in traditional 3D printing, bio-inks can be extruded through printing nozzles or needles into filaments that can maintain their shape fidelity after deposition. However, bio-inks are sensitive to normal 3D printing processing conditions.
Differences from traditional 3D printing materials
Printed at a much lower temperature (37 °C or below)
Mild cross-linking conditions
Natural derivation
Bioactive
Cell manipulatable
Printability
Bioink compositions and chemistries are often inspired by and derived from existing hydrogel biomaterials. However, these hydrogels were typically developed to be easily pipetted and cast into well plates and other molds, so altering their composition to permit filament formation is necessary for their translation into bioprintable materials. The unique properties of bioinks also pose new challenges in characterizing material printability.
Traditional bioprinting techniques involve depositing material layer by layer to create the end structure, but in 2019 a new method called volumetric bioprinting was introduced. In volumetric bioprinting, a bio-ink is placed in a liquid cell and selectively irradiated by an energy source; the irradiated material polymerizes and forms the final structure. Manufacturing biomaterials using volumetric bioprinting of bio-inks can greatly decrease manufacturing time. In materials science, this is a breakthrough that allows personalized biomaterials to be generated quickly. The procedure must be developed and studied clinically before any major advances in the bioprinting industry can be realized.
Unlike traditional 3D printing materials such as thermoplastics that are essentially 'fixed' once they are printed, bioinks are a dynamic system because of their high water content and often non-crystalline structure. The shape fidelity of the bioink after filament deposition must also be characterized. Finally, the printing pressure and nozzle diameter must be taken into account to minimize the shear stresses placed on the bioink and on any cells within the bioink during the printing process. Too high shear forces may damage or lyse cells, adversely affecting cell viability.
Important considerations in printability include:
Uniformity in filament diameter
Angles at the interaction of filaments
"Bleeding" of filaments together at intersects
Maintenance of shape fidelity after printing but before cross-linking
Printing pressure and nozzle diameter
Printing viscosity
Gellation properties
Classification of bio Inks
Structural
Structural bio-inks are used to create the framework of the desired print using materials such as alginate, decellularized ECM and gelatin. The choice of material controls the mechanical properties, shape and size, and cell viability of the construct. These factors make this type one of the more basic, but still one of the most important, aspects of a bioprinted design.
Sacrificial
Sacrificial bio-inks are materials used to provide support during printing and then removed from the print to create channels or empty regions within the outer structure. Channels and open spaces are essential for cellular migration and nutrient transport, making these inks useful when designing a vascular network. Such materials need specific properties relative to the surrounding material that is to remain, such as water solubility, degradation at certain temperatures, or naturally rapid degradation. Non-cross-linked gelatins and pluronics are examples of potential sacrificial materials.
Functional
Functional bio-inks are among the more complex forms of ink; they are used to guide cellular growth, development, and differentiation. This can be done by integrating growth factors, biological cues, and physical cues such as surface texture and shape. These materials are arguably the most important, as they are the biggest factor in developing a functional tissue as well as structure-related function.
Support
Bioprinted structures can be extremely fragile and flimsy in the early period after printing, owing to intricate structures and overhangs. Support materials carry the construct through that phase and can be removed once it is self-supporting. In other situations, such as transferring the construct to a bioreactor after printing, these structures can also provide an easy interface with systems used to mature the tissue at a faster rate.
Polysaccharides
Alginate
Alginate is a naturally derived biopolymer from the cell wall of brown seaweed that has been widely used in biomedicine because of its biocompatibility, low cytotoxicity, mild gelation process and low cost. Alginates are particularly suitable for bioprinting due to their mild cross-linking conditions via incorporation of divalent ions such as calcium. These materials have been adopted as bioinks through increasing their viscosity. Additionally, these alginate-based bioinks can be blended with other materials such as nanocellulose for application in tissues such as cartilage.
Since fast gelation leads to good printability, bioprinting mainly utilizes alginate, modified alginate alone, or alginate blended with other biomaterials. Alginate has become the most widely used natural polymer for bioprinting and is likely the most common material of choice for in vivo studies.
Gellan Gum
Gellan gum is a hydrophilic and high-molecular weight anionic polysaccharide produced by bacteria. It is very similar to alginate and can form a hydrogel at low temperatures. It is even approved for use in food by the United States Food and Drug Administration (FDA). Gellan gum is mainly used as a gelling agent and stabilizer. However, it is almost never used alone for bioprinting purposes.
Agarose
Agarose is a polysaccharide extracted from marine algae and red seaweed. It is commonly used in electrophoresis applications as well as tissue engineering for its gelling properties. The melting and gelling temperatures of agarose can be modified chemically, which in turn makes its printability better. Having a bio-ink that can be modified to fit a specific need and condition is ideal.
Protein-based Bio-inks
Gelatin
Gelatin has been widely utilized as a biomaterial for engineered tissues. The formation of gelatin scaffolds is dictated by the physical chain entanglements of the material which forms a gel at low temperatures. However, at physiological temperatures, the viscosity of gelatin drops significantly. Methacrylation of gelatin is a common approach for the fabrication of gelatin scaffolds that can be printed and maintain shape fidelity at physiological temperature.
Collagen
Collagen is the main protein in the extracellular matrix of mammalian cells. Because of this, collagen possesses tissue-matching physicochemical properties and biocompatibility, and it has already been used in biomedical applications. Collagen has been used in studies of engineered skin, muscle and even bone tissue.
Synthetic Polymers
Pluronics
Pluronics have been utilized in printing applications due to their unique gelation properties. Below physiological temperatures, pluronics exhibit low viscosity, whereas at physiological temperatures they form a gel. However, the formed gel is dominated by physical interactions. A more permanent pluronic-based network can be formed through modification of the pluronic chain with acrylate groups that may be chemically cross-linked.
PEG
Polyethylene glycol (PEG) is a synthetic polymer synthesized by ethylene oxide polymerization. It is a favorable synthetic material because of its tailorable but typically strong mechanical properties. PEG advantages also include non-cytotoxicity and non-immunogenicity. However, PEG is bioinert and needs to be combined with other biologically active hydrogels.
Other Bio-inks
Decellularized ECM
Decellularized extracellular matrix based bioinks can be derived from nearly any mammalian tissue. Organs such as heart, muscle, cartilage, bone, and fat are decellularized, lyophilized, and pulverized, to create a soluble matrix that can then be formed into gels. These bioinks possess several advantages over other materials due to their derivation from mature tissue. They consist of a complex mixture of ECM structural and decorating proteins specific to their tissue origin, and provide tissue-specific cues to cells. Often these bioinks are cross-linked through thermal gelation or chemical cross-linking such as through the use of riboflavin. Different additives, e.g. GelMA, alginate, have been used to improve the printability of decellularized ECM.
See also
3D printing
3D bioprinting
List of 3D printer manufacturers
List of common 3D test models
List of emerging technologies
List of notable 3D printed weapons and parts
Organ-on-a-chip
References
Biomaterials | Bio-ink | [
"Physics",
"Biology"
] | 2,003 | [
"Biomaterials",
"Materials",
"Matter",
"Medical technology"
] |
53,212,192 | https://en.wikipedia.org/wiki/Seeker%20%28media%20company%29 | Seeker (stylized See<er) is an American digital media network and content publisher based in San Francisco, California. The network was established in 2015 within a former independent division of Discovery Communications known as Discovery Digital Networks. Seeker produces online video and editorial content for the digital media landscape, with an emphasis on social platforms and YouTube.
History
Seeker was relaunched in May 2016 in an effort by Discovery Digital Networks to reach millennial audiences looking to satisfy their curiosity by immersing themselves in science, technology and culture. The network was initially launched in March 2015, with a focus on exploration and adventure.
In October 2016, Seeker was acquired by the newly founded Group Nine Media, along with Thrillist Media Group, NowThis News, The Dodo and SourceFed Studios. This new media group earned a $100 Million investment from Discovery Communications, and is under the leadership of former Thrillist CEO Ben Lerer.
Properties
Seeker's YouTube channel (also called Seeker; formerly DNews) surpassed 4 million subscribers in August 2019.
In 2015, Seeker's program Rituals, with Laura Ling, was nominated for an Emmy. A Seeker Stories documentary co-produced with the ONE Campaign about energy poverty in Sub-Saharan Africa was honored with a Shorty Award in 2016. Seeker Daily, a short-form news show, partnered with YouTube to cover the 2016 Republican & Democratic Party national conventions in Cleveland and Philadelphia.
In 2016, Seeker began producing content for virtual reality headsets. Seeker VR content is also distributed on YouTube and the DiscoveryVR app. Addison O'Dea was among the first to create original films for them, including explorations into the origins of voodoo in West Africa and expeditions into the Sahara to find ancient Koranic libraries.
In May 2018, Seeker launched a new vertical, "Seeker Universe". The channel is dedicated to outer-space content and intended for a millennial audience.
In June 2018, Seeker partnered with Discovery to launch "The Swim", a multi-platform franchise following Ben Lecomte's 5,000-mile-long swim across the Pacific Ocean from Tokyo to San Francisco, undertaken to raise awareness of the effects of pollution on ocean health. His six-month journey was available to viewers across multiple platforms: a mid-form video series on Seeker's channels and Discovery GO, short-form social videos, weekly Instagram Stories, and weekly TV Swim updates on Discovery Channel, with the project culminating in a feature-length documentary in 2019.
In April 2019, Seeker released its new YouTube series "SICK", which looks at how diseases work in the human body. Each episode covers a different disease and brings in researchers and doctors to explain them.
In July 2019, Seeker partnered with Discovery on a one-hour television special, Confessions from Space, in celebration of the 50th anniversary of the Apollo 11 Moon landing.
Legacy properties
TestTube and DNews
On September 12, 2012, DNews launched on YouTube with three hosts: Trace Dominguez, Anthony Carboni, and Laci Green. On April 19, 2014, it was announced that Tara Long would be joining as a host. Three videos were uploaded every day. The show was rebranded to Seeker on March 1, 2017.
In May 2013, Discovery Digital Networks launched the TestTube network, which became the home of DNews, Laci Green's Sex+ Channel, and Blow it Up! hosted by Tory Belleci from Mythbusters.
In March 2015, Discovery Digital Networks launched Seeker Network, which became the home of Seeker Daily, Seeker Stories, and several affiliate shows that centered around adventure and human interest stories.
On May 25, 2016, Discovery Digital Networks rolled out changes to its network lineup. The TestTube channel – which had since been renamed to TestTube News – rebranded to Seeker Daily, a show that previously ran on the primary Seeker channel. The overall format of the TestTube News was preserved, but the TestTube brand was phased out of existence.
On March 2, 2017, DNews' YouTube channel was rebranded to Seeker.
On March 17, 2017, the Seeker Daily channel was rebranded with a new format and new team as NowThis (part of Group Nine Media; along with Seeker).
References
External links
Seeker.com
GroupNineMedia.com
2015 establishments in California
Digital media
Mass media about Internet culture
Vox Media
Former Discovery, Inc. subsidiaries | Seeker (media company) | [
"Technology"
] | 897 | [
"Multimedia",
"Digital media"
] |
53,212,411 | https://en.wikipedia.org/wiki/Newton%20for%20Beginners | Newton for Beginners, republished as Introducing Newton, is a 1993 graphic study guide to the Isaac Newton and classical physics written and illustrated by William Rankin. The volume, according to the publisher's website, "explains the extraordinary ideas of a man who [...] single-handedly made enormous advances in mathematics, mechanics and optics," and, "was also a secret heretic, a mystic and an alchemist."
"William Rankin," Public Understanding of Science reviewer Patrick Fullick confirms, "sets out to illuminate the man whose work laid the foundations of the physics of the last 350 years, and to place him and his work in the context of the times in which he lived." New Scientist reviewer Roy Herbert adds that, "alongside theories of the Universe from ancient times, the book explains those originating since Isaac Newton, so placing him deftly in his scientific context."
Publication History
This volume was originally published in the UK by Icon Books in 1993 as Newton for Beginners, and subsequently republished with different covers in different editions.
Selected editions:
Related volumes in the series:
Reception
"This book shares the general characteristics of the Beginners series with a large number of line drawings and cartoons with associated text and many asides," states Patrick Fullick, writing in Public Understanding of Science, "for some readers the asides may seem idiosyncratic or even annoying." "Some may dislike the humour and bad puns that abound in this work," confirms Bill Palmer, writing in the Journal of the Science Teacher Association of the Northern Territory, "but I suspect that those starting the study of Newton's life and work will appreciate this attempt to facilitate reading."
"The book is well-grounded in recent historiography," and, "Rankin is clearly sympathetic towards his subject," states Fullick, "but inevitably Newton still comes over as one whose intellectual vanity was at times apt to overcome his self-control." Roy Herbert, writing in New Scientist, confirms that despite being a colossus, "Many of his contemporaries saw him as something else and these bit players provide a background of 17th-century backbiting and squabbling (Newton took part) that is always fascinating."
"Newton's story is told accurately and entertainingly," concludes Palmer. "It combines drawings with text and pulls off the difficult trick of imparting serious information while keeping the reader amused with jokes and irreverent asides," adds Herbert, "it is a technique that has strong appeal and so, even if you have misgivings about it, you are lured along the trail." "The communication of the idea that the great scientists of the past had their hopes and fears and that they had concerns other than the purely academic or professional is probably done as well pictorially as by other means," confirms Fullick. "Easily swallowed and," concludes Herbert, "retainable."
References
Non-fiction graphic novels
Biographical comics
Comics based on real people
Popular physics books
Educational comics
1993 in comics
Cultural depictions of Isaac Newton
Comics set in the 17th century
Comics set in the United Kingdom
Books about the history of physics | Newton for Beginners | [
"Astronomy"
] | 645 | [
"Cultural depictions of Isaac Newton",
"Cultural depictions of astronomers"
] |
53,214,319 | https://en.wikipedia.org/wiki/Muhammad%20M.%20Hussain | Muhammad Mustafa Hussain is an electronics engineer specializing in CMOS technology-enabled low-cost flexible, stretchable and reconfigurable electronic systems. He was a professor in King Abdullah University of Science and Technology and University of California, Berkeley, and is a currently an electrical and computer engineering professor at Purdue University. He is the principal investigator (PI) at Integrated Nanotechnology Laboratory, and Integrated Disruptive Electronic Applications (IDEA) Laboratory. He is also the director of the Virtual Fab: vFabLab™ (https://vFabLab.org).
Education and career
Born and brought up in Dhaka, Bangladesh, Hussain obtained his bachelor's degree in electrical and electronics engineering from the Bangladesh University of Engineering and Technology, in 2000. He completed his master's from University of Southern California in 2002 and joined University of Texas at Austin, where he completed another M.S. and doctoral degrees (December 2005). In 2006, he joined Texas Instruments as an integration engineer to lead the 22 nm node, non-planar, MugFET technology development. In 2008, he joined SEMATECH as the program manager of Novel Emerging Technology Program, where he oversaw CMOS technology development in Austin, Texas, and in Albany, New York. His program was supported by United States Defense Advanced Research Project Agency (DARPA). He joined the King Abdullah University of Science and Technology (KAUST) as a founding faculty in August 2009.
Achievements
Hussain is a Fellow of the IEEE, the American Physical Society (APS), the Institute of Physics (IOP), UK, and the Institute of Nanotechnology, UK. He serves as an editor of journals such as Applied Nanoscience (Springer-Nature) and IEEE Transactions on Electron Devices. He has been awarded the IEEE Electron Devices Society Distinguished Lecturer award for his teaching skills. His research on saliva-based power generation, self-destructible electronics, paper skin, smart thermal patches, paper watches, multidimensional ICs (MD-IC), corrugation-enabled solar cells and decal electronics has garnered widespread international media attention. He has been a leading authority in the field of flexible inorganic electronics, in particular through the flexible silicon process. He has also pioneered a new architecture for silicon transistors called the silicon nanotube FET. He has served as the Editor-in-Chief of the Handbook of Flexible and Stretchable Electronics.
Awards and honors
Edison Award Gold 2020 for Bluefin, "Personal Technology" category.
Best Innovation Award in CES 2020 for Bluefin, "Tech for a better World" category.
Fellow of World Technology Network and finalist in Health and Medicine, World Technology Network Award, 2016.
Outstanding Young Texas Exes Award, 2015. (The University of Texas at Austin Alumni Award)
Scientific American Top 10 World Changing Ideas, 2014.
DOW Chemical Sustainability and Innovation Challenge Award 2012 for invention of Thermoelectric Windows, 2012.
References
Year of birth missing (living people)
Living people
Bangladeshi engineers
Electronics engineers
Bangladesh University of Engineering and Technology alumni
University of Southern California alumni
Cockrell School of Engineering alumni
Fellows of the American Physical Society | Muhammad M. Hussain | [
"Engineering"
] | 636 | [
"Electronics engineers",
"Electronic engineering"
] |
53,214,717 | https://en.wikipedia.org/wiki/Intratracheal%20instillation | Intratracheal instillation is the introduction of a substance directly into the trachea. It is widely used to test the respiratory toxicity of a substance as an alternative to inhalation in animal testing. Intratracheal instillation was reported as early as 1923 in studies of the carcinogenicity of coal tar. Modern methodology was developed by several research groups in the 1970s. By contrast, tracheal administration of pharmaceutical drugs in humans is called endotracheal administration.
Background
As compared to inhalation, intratracheal instillation allows greater control over the dose and location of the substance, is cheaper and less technically demanding, allows lower amounts of scarce or expensive substances to be used, allows substances to be tested that can be inhaled by humans but not small mammals, and minimizes exposure to laboratory workers and to the skin of laboratory animals. Disadvantages include its nonphysiological and invasive nature, the confounding effects of the delivery vehicle and anesthesia, and the fact that it bypasses the upper respiratory tract. Instillation results in a less uniform distribution of the substance than inhalation, and the substance is cleared from the respiratory tract more slowly. Results from instillation studies provide a quick screen of potential toxicity and can be used to test its mechanism, but may not be directly applicable to occupational exposure that occurs over an extended period. Some of these difficulties are overcome by another method, pharyngeal aspiration, which is less technically difficult and causes less trauma to the animal, and has a pulmonary deposition pattern more similar to inhalation.
Methodology
Intratracheal instillation is often performed with mice, rats, or hamsters, with hamsters often preferred because their mouth can be opened widely to aid viewing the procedure, and because they are more resistant to lung diseases than rats. Instillation is performed either through inserting a needle or catheter down the mouth and throat, or through surgically exposing the trachea and penetrating it with a needle. Generally, short-acting inhaled anesthetic drugs such as halothane, metaphane, or enflurane are used during the instillation procedure. Saline solution is usually used as a delivery vehicle in a typical volume of 1–2 mL/kg body weight. A wide range of substances can be tested, including both soluble materials and insoluble particles or fibers, including nanomaterials.
See also
Tracheal intubation
References
Occupational safety and health
Toxicology
Trachea | Intratracheal instillation | [
"Environmental_science"
] | 514 | [
"Toxicology"
] |
53,215,263 | https://en.wikipedia.org/wiki/The%20Boring%20Company | The Boring Company (TBC) is an American infrastructure, tunnel construction service, and equipment company founded by Elon Musk. TBC was founded as a subsidiary of SpaceX in 2017, and was spun off as a separate corporation in 2018. TBC has completed one tunneling project that is open to the public, as well as multiple test tunnels.
In 2018, TBC completed one tunnel for testing in Los Angeles County, California.
In 2021, TBC completed the Las Vegas Convention Center (LVCC) Loop, a three-station transportation system with of tunnels. As of April 2024, a segment to Resorts World Las Vegas is also open, and tunnels to Encore and Westgate resorts are being finalized. The system is planned to expand to a total of of tunnels in Las Vegas.
Many other TBC projects in cities across the United States have been announced, but subsequently were cancelled or became inactive due to a lack of activity from the company.
History
Musk announced the idea of the Boring Company in December 2016, and it was officially registered as "TBC – The Boring Company" on January 11, 2017. Musk cited difficulty with Los Angeles traffic, and what he sees as limitations of its two-dimensional transportation network, as his early inspiration for the project. The Boring Company was formed as a SpaceX subsidiary. According to Musk, the company's goal is to enhance tunneling speed enough such that establishing a tunnel network is financially feasible.
In early 2018, the Boring Company was spun out from SpaceX and into a separate corporate entity. Somewhat less than 10% of equity was given to early employees, and over 90% to Elon Musk. Early employees came from a variety of different backgrounds, including those from SpaceX. The company began designing its own tunnel boring machines, and completed several tests in Hawthorne, California. The Hawthorne test tunnel opened to the public on December 18, 2018.
After raising US$113 million from Musk and flamethrower sales during 2018, the Boring Company sold $120 million in stock to venture capital firms in July 2019. By November 2019, Steve Davis had become company president after leading efforts for Musk since 2016. Davis was one of the earliest hires at SpaceX (in 2003) and has twin master's degrees in particle physics and aerospace engineering, as well as degrees in finance and mechanical engineering. In November 2020, TBC announced hiring for positions in Austin, Texas, and by December 2020 had leased two buildings in an industrial complex northeast of Austin, approximately north of Texas Gigafactory.
On April 20, 2022, the company announced an additional $675 million Series C funding round, valuing the company at approximately $5.675 billion. The round was led by Vy Capital and Sequoia Capital, with participation from Valor Equity Partners, Founders Fund, 8VC, Craft Ventures, and DFJ Growth. In 2022, the company was cited by the Texas Commission on Environmental Quality for five violations of Texas environmental regulations.
Sometime before April 2023, the company moved their headquarters and engineering facilities to Bastrop, Texas, approximately east of Texas Gigafactory.
Tunnels connecting different parts of the Las Vegas Convention Center are open, and a tunnel to Resorts World began operating in July 2023. Due to operational expenses, it is probable that the Boring Company is subsidizing the Loop to keep customer prices low. A day pass from Resorts World costs $5, while the LVCVA is paying the Boring Company an additional $4.5 million annually, which equates to $7.50 per ride. In February 2024, OSHA found several safety violations at the Boring Company, including eight serious violations, amid allegations that workers had suffered chemical burns from sludge while working in the tunnels. The company challenged the ruling; however, an article by Fortune revealed details about the construction of the Las Vegas tunnel, citing numerous employee accounts that described the working conditions as "almost unbearable."
In April 2024, the Boring Company was named among the "Dirty Dozen", the worst workplace safety offenders in the USA, by the National Council of Occupational Safety and Health.
Machines
The first boring machine used by TBC was Godot, a conventional tunnel boring machine (TBM) made by Lovat. TBC then designed its own line of machines called Prufrock. Prufrock 1 was unveiled in 2020 and was used mostly for testing. Engadget reported that Prufrock 2, which was unveiled in August 2022, could dig up to a mile per week. Prufrock 3 was planned to dig up to seven miles per day, although this was not achieved; instead, in 2024, Prufrock 3 was able to tunnel 40–46 m per day.
In May 2024, Prufrock 4 was nearly complete, and it began testing in August; Prufrock 5 was in the design stage. Prufrock 4 is 308 feet long and produces up to 4.7 million pounds of thrust. The goal is to triple tunneling speed and improve cooling systems.
Process
TBC claims to be redesigning the entire tunnel boring process to reduce cost, accelerate tunnel completion, improve safety, and reduce site impacts. Innovations include:
Porpoising
Tunnel entry and exit excavations are replaced by having the TBM "porpoise" in and out of the ground. The TBM is trucked in and placed at an angle to the ground (Prufrock 2 and 3 required an earthen ramp to set them at the correct angle before beginning to tunnel). It then bores into the ground, changing angle as it continues boring, and eventually returns to the surface to be loaded back onto the truck.
In conventional systems, one large excavation is made at the tunnel entrance to allow the TBM to be lowered to the tunnel depth and assembled. A similar excavation is made at the tunnel exit to allow the TBM to be disassembled and lifted out.
Liner truck
TBC moves tunnel lining segments into the tunnel via an all-electric autonomous, wheeled liner truck powered by motors and batteries from Tesla. Conventional systems typically use a diesel rail system, which must be constructed along with the tunnel lining.
Continuous tunneling
TBC is working to install ring liners without stopping tunneling. Conventional systems stop every five feet or so to install another segment of the tunnel lining, and to extend the rail line. The goal is to increase tunneling time/day from 11 hours to 24 hours.
Tunnels
Hawthorne test tunnel
TBC built a high-speed tunnel in 2017 on a route in Hawthorne, California, at the SpaceX headquarters and manufacturing facility. The tunnel roadway has an asphalt surface, a guide-way for autonomous vehicle operation, and supports car trips at speeds of with autonomous control and up to under human control.
Las Vegas Convention Center (LVCC)
Convention Center
In May 2019, the company won a $48.7 million project to shuttle visitors in a loop underneath the LVCC. Boring of the first tunnel, long, began on November 15, 2019, and finished on February 14, 2020, excavating an average of per day. In May 2020, the boring of the second tunnel was completed, for a total of of tunnels. The tunnel opened in October 2021. Standard Tesla vehicles with human drivers are used as shuttles, traveling at about . The service was described by Las Vegas Tourism as "an important step in the development of a game-changing transportation solution in Las Vegas."
Testing with volunteers in late May 2021 showed that the system could transport 4,400 passengers per hour. The system started transporting convention attendees on June 8, 2021. Designed to solve traffic congestion, the tunnel was intended to provide trips of less than two minutes, but has faced a number of traffic jams during busy events in 2021 and 2022.
Private tunnels to convention center
The tunnel to Resorts World Las Vegas opened in July 2022. As of April 2024, Las Vegas strip hotel Encore has a private tunnel underway to allow direct access from the hotel to LVCC.
Vegas Loop
In October 2021, Clark County Commissioners approved a 50-year franchise agreement for a 52-stop, mostly-underground system, a " dual loop system...operating mainly in the Resort Corridor with stations at various resorts and connections to Allegiant Stadium, Brightline West Las Vegas Station, and the University of Nevada, Las Vegas." TBC planned to build five to ten stations during the first year, and then add approximately 16 stations per year thereafter. TBC would be responsible for funding the tunnel, while station costs would be funded by the resort properties and landowners.
In May 2023, TBC was given permission to build the Vegas Loop underground transportation system to 69 stations for a tunnel network of . It would include the existing LVCC Loop and extensions to casinos along the Strip, Harry Reid International Airport, Allegiant Stadium, downtown Las Vegas, and eventually to Los Angeles. TBC claims that once complete, the Vegas Loop would be able to transport more than 90,000 passengers per hour. In March 2024, the Las Vegas Convention and Visitors Authority board of directors voted to extend the existing tunnel, and vowed to address concerns that rose over Occupational Safety and Health Administration (OSHA) violations by TBC, which had resulted in a $100,000 fine.
Projects under discussion
Inquiries and discussions have been held with Boring Company for various projects.
In February 2021, Miami mayor Francis Suarez revealed that Musk had proposed to dig a two-mile tunnel under the Miami River for $30 million, within a six-month timescale, compared with $1 billion over four years estimated by the local transit authority. Much of the savings would be achieved by simplifying ventilation systems and allowing only electric vehicles. As of November 2023, the city is waiting for the Miami Dade Transportation Planning Organization to complete an analysis of the project.
In July 2021, Fort Lauderdale, Florida, accepted a proposal from the Boring Company for a tunnel between downtown and the beach, to be dubbed the "Las Olas Loop." In August 2021, the city was beginning final negotiations with TBC, and Mayor Dean Trantalis estimated the total cost of the round-trip tunnel would be between $90 and $100 million, including stations. As of December 2022, the city suspended efforts to continue the project.
In August 2021, a preliminary concept discussion was held with officials of Cameron County on the potential construction of a tunnel from South Padre Island to Boca Chica Beach in South Texas. If built, the tunnel would be required to pass beneath the Brownsville Ship Channel. It would allow SpaceX's Boca Chica facility to remain accessible if Highway 4, its sole access road, were closed.
Inactive and cancelled projects
United States
Washington, DC and Baltimore, Maryland – In 2017, Musk announced plans to build a Hyperloop connecting Washington, DC to Baltimore. This was supplanted in 2018 by a proposal to build a route following the Baltimore–Washington Parkway. The Maryland Transportation Authority officially approved the project. In 2019, a draft Environmental Assessment for the project was completed. As of 2021, the project was no longer listed on the company website.
Chicago – In 2018, the company won a competition to build a high-speed link from downtown Chicago to O'Hare Airport. As of 2021 the plan had been dropped.
Los Angeles – In 2018, TBC proposed to develop a test tunnel on a north–south alignment parallel to Interstate 405 and adjacent to Sepulveda Boulevard. Public opposition and lawsuits led the company to abandon the idea. Also in 2018, the company proposed to build a tunnel called the "Dugout Loop" from Vermont Avenue to Dodger Stadium. The project has since been removed from TBC's website.
San Jose, California – In 2019, a link between San Jose International Airport and Diridon station, was discussed as an alternative to an $800 million traditional rail link. Plans were later dropped.
San Bernardino County, California – In February 2021 the San Bernardino County Transportation Authority (SBCTA) in California approved beginning contract negotiations with TBC to build a nearly tunnel connecting the Ontario airport with the Rancho Cucamonga Metrolink/Future Brightline West train station. However, TBC did not submit a proposal after a third party was involved to study the project impacts. As of 2022, the SBCTA has plans to build the tunnel system using "another company more familiar with the state's bureaucracy to do the Environmental Impact Report."
Australia
In January 2019, Musk responded to an Australian member of parliament regarding a tunnel through the Blue Mountains to the west of Sydney, suggesting costs of $750 million for a tunnel, plus $50 million per station.
Promotional merchandise
In 2018, the company began offering 20,000 "flamethrowers" for preordering. The "flamethrower" was a blow torch shaped to look like a gun and is legal in all U.S. states except Maryland. All 20,000 "flamethrowers" were sold in just a few days. After customs officials said that they would not allow imports of any items called "flamethrowers," Musk announced that he would rename them to "Not-A-Flamethrower" since the devices were in fact akin to roofing torches. Musk announced separate sales of a fire extinguisher, which he described as "overpriced... but this one comes with a cool sticker."
Not-a-Boring Competition student contests
In 2020, TBC released rules for a student tunnel-boring competition. The first competition was held in Las Vegas in September 2021. Officially named the Not-a-Boring Competition, the challenge was to "quickly and accurately drill a tunnel that was -long and -wide."
Applications were received from 400 potential participants. A technical design review left 12 teams, which were invited to Las Vegas to demonstrate their engineering solutions in a September 2021 competition. The winning team was TUM Boring from the Technical University of Munich, which managed to excavate a bore while meeting the requisite safety requirements. TUM Boring used a conventional pipe jacking method to build the tunnel, but employed a novel revolving pipe storage design to minimize downtime between pipe segments.
A second competition was held in April 2023. New contest criteria required a -long, -diameter tunnel, this time with a turn radius. Five teams from four countries—the United States, Germany, the United Kingdom, and Switzerland—made the finals and journeyed to Texas to compete. TUM Boring again won, with a design that reached a maximum velocity of . Swissloop Tunneling finished second overall and won the innovation award.
Criticism
Civil engineering experts and tunneling industry veterans questioned whether TBC could build tunnels faster and more cheaply than competitors. Tunnelling Journal dismissed the company as a "vanity project."
Musk's planned tunnels were criticized for lacking safety features such as emergency exit corridors, ventilation systems, and fire suppression. In addition, the single-lane tunnels make it impossible for vehicles to pass one another in the event of a collision, mechanical failure, or other traffic obstruction, which would instead shut down the entire tunnel section. The low capacity of TBC tunnels makes them inefficient compared with existing public transit solutions, offering only a fraction of the capacity of a conventional rapid-transit subway.
James Moore, director of transportation engineering at the University of Southern California, said that "there are cheaper ways to provide better transportation for large numbers of people," such as managing traffic with tolls. Public transit consultant Jarrett Walker called TBC "wildly hyped," and criticized how the company "dazzled city governments and investors with visions of an efficient subway where you never have to get out of your car, [but turned] out to be a paved road tunnel."
See also
Underground construction
References
External links
55 minutes, video of information session on the vision of the Boring Company and the project in Los Angeles, with Q&A.
Elon Musk
2016 establishments in California
American companies established in 2016
Construction and civil engineering companies
Construction equipment manufacturers of the United States
Hyperloop
Privately held companies based in Texas
Subterranean excavating equipment companies
Underground construction companies | The Boring Company | [
"Technology",
"Engineering"
] | 3,308 | [
"Transport systems",
"Civil engineering organizations",
"Construction and civil engineering companies",
"Vacuum systems",
"Hyperloop"
] |
53,217,558 | https://en.wikipedia.org/wiki/NGC%207610 | NGC 7610 is a spiral galaxy in the constellation Pegasus. Discovered by Andrew Ainslie Common in August 1880, it was accidentally "rediscovered" by him the same month, and later given the designation NGC 7616.
Supernova
One supernova has been observed in NGC 7610: SN 2013fs (type II-P, mag. 16.5) was discovered by Kōichi Itagaki on 7 October 2013. It was detected approximately 3 hours after the light from the explosion reached Earth, and within a few hours optical spectra were obtained - the earliest such observations ever made of a supernova.
See also
List of NGC objects (7001–7840)
References
External links
Spiral galaxies
Pegasus (constellation)
7610
071087
12511
23171+0954
+02-59-025 | NGC 7610 | [
"Astronomy"
] | 167 | [
"Pegasus (constellation)",
"Constellations"
] |
53,218,428 | https://en.wikipedia.org/wiki/SALSA%20%28food%20standard%29 | SALSA (Safe and Local Supplier Approval) is a British food standard.
History
SALSA was set up in 2007 by the British Hospitality Association, British Retail Consortium and Food and Drink Federation.
Structure
The organization regulating SALSA is headquartered in Oxfordshire.
See also
Assured Food Standards
References
External links
SALSA Food Website
Online Safety Training
2007 establishments in the United Kingdom
Certification marks
Food safety in the United Kingdom
Food safety organizations
Organisations based in Oxfordshire | SALSA (food standard) | [
"Mathematics"
] | 83 | [
"Symbols",
"Certification marks"
] |
53,218,499 | https://en.wikipedia.org/wiki/Open%20Information%20Security%20Management%20Maturity%20Model | The Open Group Information Security Management Maturity Model (O-ISM3) is a maturity model for managing information security. It aims to ensure that security processes in any organization are implemented so as to operate at a level consistent with that organization’s business requirements. O-ISM3 defines a comprehensive but manageable number of information security processes sufficient for the needs of most organizations, with the relevant security control(s) being identified within each process as an essential subset of that process.
History
The original motivation behind O-ISM3 development was to narrow the gap between theory and practice for information security management systems, and the trigger was the idea of linking security management and maturity models. O-ISM3 strove to keep clear of a number of pitfalls with previous approaches.
Its developers looked at Capability Maturity Model Integration, ISO 9000, COBIT, ITIL, ISO/IEC 27001:2013, and other standards, and found potential for improvement in several areas, such as linking security to business needs, using a process-based approach, providing additional implementation detail (who, what, why), and suggesting specific metrics, while preserving compatibility with the most popular IT and security management standards.
Availability
The Open Group provides the standard free of charge.
References
Data security
Security
Information governance
Methodology
Open Group standards | Open Information Security Management Maturity Model | [
"Engineering"
] | 267 | [
"Cybersecurity engineering",
"Data security"
] |
51,635,965 | https://en.wikipedia.org/wiki/Haplarithmisis | Haplarithmisis (Greek for haplotype numbering) is a conceptual process in Genetics that enables simultaneous haplotyping and copy-number profiling of DNA samples derived from cells. Haplarithmisis also reveals parental, segregation, and mechanistic origins of genomic anomalies. The resulting profiles of haplarithmisis are called parental haplarithms (i.e. paternal haplarithm and maternal haplarithm).
Clinical Applications
Haplarithmisis enabled a new form of preimplantation genetic diagnosis, by which segmental and full chromosome anomalies could not only be detected but also traced back to meiosis or mitosis.
Research Applications
In its first application in basic genome research, haplarithmisis led to discovery of parental genome segregation, a phenomenon that causes the segregation of entire parental genomes in distinct blastomere lineages causing cleavage-stage chimerism and mixoploidy.
References
Human genetics
Molecular biology techniques
Genomics
Translational medicine | Haplarithmisis | [
"Chemistry",
"Biology"
] | 219 | [
"Translational medicine",
"Molecular biology techniques",
"Molecular biology"
] |
51,637,929 | https://en.wikipedia.org/wiki/Pi%C3%B1atex | Piñatex () is the trade name for a non-biodegradable leather alternative made from cellulose fibres extracted from pineapple leaves, PLA (polylactic acid), and petroleum-based resin. Piñatex was developed by Carmen Hijosa and first presented at the PhD graduate exhibition at the Royal College of Art, London. Piñatex is manufactured and distributed by Hijosa's company Ananas Anam Ltd.
Development
Piñatex's development began when Hijosa was working as a consultant in the leather goods industry in the Philippines in the 1990s. She observed that the leather produced there was of poor quality, was environmentally unsustainable, and involved a hazardous production process for those working in the industry. Hijosa was inspired by the barong tagalog, a traditional Philippine garment worn untucked over an undershirt and made of pineapple fibers. She then spent seven years developing the product through a PhD at the Royal College of Art in London, and through joint collaborations with Bangor University in Wales, the Northampton Leather Technology Center, and the Leitat Technological Centre in Spain, alongside NonWoven Philippines Inc. in Manila and Bonditex S.A., a textile finishing company in Spain.
Production
Piñatex is created by felting the long fibres from pineapple leaves together to create a non-woven substrate, with the addition of PLA (polylactic acid), a vegetable-based plastic material derived from cornstarch, resulting in a base material of 80% pineapple leaf fibre and 20% PLA. The material is then coated with a petroleum-based resin.
The production process uses waste pineapple leaves, as the pineapple industry globally produces 40,000 tonnes of waste leaves each year, which are usually left to rot or are burned. Approximately 480 leaves (the waste from 16 pineapple plants) are needed to create of material. The material uses the long leaf fibres which are separated by the pineapple farmers for additional income, the leftover biomass from the process can be used as a fertiliser. The production of Piñatex uses no additional water, pesticides or fertilizers, and avoids the use of heavy metallic salts used in the production of chrome-tanned leather.
Properties
Piñatex is produced in a range of colours and finishes, including a textured surface and a metallic finish. It has been described as having a softer, more pliable, "leather-like" texture than other synthetic leathers. It can also be cut, stitched, embossed and embroidered for different design uses. Because the substrate of Piñatex is 80% pineapple fibres and 20% PLA, it is not biodegradable.
Sustainability
Piñatex is not biodegradable. It is composed of a mixture of pineapple leaves, PLA (Polylactic acid), and petroleum-based resins. PLA, also known as bioplastic, is sourced from renewable resources and is commonly labeled 'biodegradable'. However, the United Nations Environmental Programme issued a report in 2015 concluding, "The adoption of plastic products labelled as 'biodegradable' will not bring about a significant decrease either in the quantity of plastic entering the ocean or the risk of physical and chemical impacts on the marine environment, on the balance of current scientific evidence."
In manufactured goods
Piñatex has been used in the manufacture of products such as bags, shoes, wallets, watch bands, and seat covers, and is being further developed for use in clothing. Products have been produced by designer Ally Capellino, LIAN & LIV, Time IV Change, ROMBAUT, and Nae; prototypes have been created by Puma and Camper. Bourgeois Boheme, a vegan footwear label, uses Piñatex in their sandals.
Recognition
In 2016, Piñatex won the Arts Foundation UK award for Material Innovation, and in 2015 Dr Hijosa was a finalist in the Cartier Women's Initiative Awards. Piñatex is a PETA-certified vegan fashion label.
Piñatex was highlighted in L.J.M. Owen's book, Egyptian Enigma. It was the featured fabric on a journal gifted to the main character, Dr. Elizabeth Pimms, by her sister Sam Pimms, an ardent vegetarian.
References
External links
Reference Paper- PALF Processing, Mechanical Properties and Applications
"Review on mechanical properties evaluation of pineapple leaf fibre (PALF) reinforced polymer composites"
Nonwoven fabrics
Artificial leather
Clothing industry
British brands
Manufacturing companies of the United Kingdom
Pineapples | Piñatex | [
"Chemistry"
] | 942 | [
"Artificial leather",
"Synthetic materials"
] |
51,638,205 | https://en.wikipedia.org/wiki/Jan%20Drenth | Jan Drenth (born 20 February 1925) is a Dutch chemist. He was a professor of structural chemistry at the University of Groningen from 1969 to 1990.
Career
Drenth was born in Groningen. He obtained his PhD in mathematics and physics under Eelco Wiebenga at the University of Groningen in 1957, with a dissertation titled: Een röntgenografisch onderzoek van excelsine, edestine en tabakszaadglobuline. Drenth subsequently moved to New York, United States, where he became a post-doc and studied protein crystallography under Barbara Low. Drenth then returned to the Netherlands and in 1967 was appointed as lector. In 1969 he was appointed as professor of structural chemistry, which he remained until his retirement in 1990.
Drenth was elected a member of the Royal Netherlands Academy of Arts and Sciences in 1973.
Works
Principles of Protein X-Ray Crystallography.
References
1925 births
Living people
20th-century Dutch chemists
Crystallographers
Members of the Royal Netherlands Academy of Arts and Sciences
Scientists from Groningen (city)
University of Groningen alumni
Academic staff of the University of Groningen | Jan Drenth | [
"Chemistry",
"Materials_science"
] | 239 | [
"Crystallographers",
"Crystallography"
] |
51,639,135 | https://en.wikipedia.org/wiki/Exact%20completion | In category theory, a branch of mathematics, the exact completion constructs a Barr-exact category from any finitely complete category. It is used to form the effective topos and other realizability toposes.
Construction
Let C be a category with finite limits. Then the exact completion of C (denoted C_ex) has for its objects pseudo-equivalence relations in C. A pseudo-equivalence relation is like an equivalence relation except that it need not be jointly monic. An object in C_ex thus consists of two objects X_0 and X_1 and two parallel morphisms x_0 and x_1 from X_1 to X_0 such that there exist a reflexivity morphism r from X_0 to X_1 with x_0 r = x_1 r = 1_{X_0}; a symmetry morphism s from X_1 to itself with x_0 s = x_1 and x_1 s = x_0; and a transitivity morphism t from the pullback X_1 ×_{x_1, X_0, x_0} X_1 to X_1 with x_0 t = x_0 p and x_1 t = x_1 q, where p and q are the two projections of that pullback. A morphism from (X_0, X_1, x_0, x_1) to (Y_0, Y_1, y_0, y_1) in C_ex is given by an equivalence class of morphisms f_0 from X_0 to Y_0 such that there exists a morphism f_1 from X_1 to Y_1 with y_0 f_1 = f_0 x_0 and y_1 f_1 = f_0 x_1, two such morphisms f_0 and g_0 being equivalent if there exists a morphism e from X_0 to Y_1 with y_0 e = f_0 and y_1 e = g_0.
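For readability, the conditions on a pseudo-equivalence relation can be displayed as equations. This is only a restatement of the definition just given, using the same names; nothing beyond the prose above is assumed.

```latex
% Requires amsmath. Conditions on a pseudo-equivalence relation
% (X_0, X_1, x_0, x_1), restating the prose definition above.
\begin{align*}
  x_0,\ x_1 &: X_1 \to X_0 && \text{(a parallel pair of morphisms)} \\
  \exists\, r : X_0 \to X_1 &\quad \text{with } x_0 r = x_1 r = 1_{X_0} && \text{(reflexivity)} \\
  \exists\, s : X_1 \to X_1 &\quad \text{with } x_0 s = x_1,\ x_1 s = x_0 && \text{(symmetry)} \\
  \exists\, t : X_1 \times_{x_1,\,X_0,\,x_0} X_1 \to X_1 &\quad \text{with } x_0 t = x_0 p,\ x_1 t = x_1 q && \text{(transitivity)}
\end{align*}
```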
Examples
If the axiom of choice holds, then Set_ex is equivalent to Set.
More generally, let C be a small category with finite limits. Then the category of presheaves Set^(C^op) is equivalent to the exact completion of the coproduct completion of C.
The effective topos is the exact completion of the category of assemblies.
Properties
If C is an additive category, then C_ex is an abelian category.
If C is cartesian closed or locally cartesian closed, then so is C_ex.
References
External links
Category theory | Exact completion | [
"Mathematics"
] | 487 | [
"Functions and mappings",
"Mathematical structures",
"Mathematical objects",
"Fields of abstract algebra",
"Category theory",
"Mathematical relations"
] |
51,640,091 | https://en.wikipedia.org/wiki/2016%20Colonial%20Pipeline%20leak | On Monday, September 12, 2016, a leak in the Colonial Pipeline in Shelby County, Alabama, spilled an estimated 350,000 US gallons of summer-grade gasoline, requiring a partial shutdown of the pipeline, and causing gas shortages in much of the Southeastern United States. Six states are affected (Alabama, Georgia, Tennessee, North Carolina, South Carolina, and Virginia), with Alabama, Tennessee, Georgia and Virginia declaring states of emergency. The same line suffered an explosion in late October at a site only miles distant.
See also
2016 Southeastern United States gasoline shortage
2020 Colonial Pipeline oil spill
References
Colonial pipeline leak
Oil spills in the United States
Pipeline accidents in the United States
"Chemistry"
] | 143 | [
"Petroleum",
"Petroleum stubs"
] |
51,644,200 | https://en.wikipedia.org/wiki/HD%2030963 | HD 30963 is a star in the constellation of Eridanus. With an apparent magnitude of 7.23, it cannot be seen with the naked eye. Parallax measurements put the star at a distance of around 1,022 light-years (313 parsecs).
HD 30963 is a late B-type star. It is a mercury-manganese star, a class of chemically peculiar star that has an overabundance of certain elements like mercury. HD 30963 has 150,000 times as much mercury, 2,500 times as much platinum, 1,000 times as much yttrium, and 150 times as much zirconium compared to the Sun. It has a mass of , and its surface temperature is about .
HD 30963 lies close to the orbit that the Solar System traverses in the Milky Way; the Sun will be close to the current location of HD 30963 in about 18.5 million years. Interstellar absorption lines of Na I are present at velocities lower than 10 km/s.
References
Extra reading
B-type giants
Mercury-manganese stars
Durchmusterung objects
030963
022588
Eridanus (constellation) | HD 30963 | [
"Astronomy"
] | 257 | [
"Eridanus (constellation)",
"Constellations"
] |
51,645,890 | https://en.wikipedia.org/wiki/Automated%20threat | An automated threat is a type of computer security threat to a computer network or web application, characterised by the malicious use of automated tools such as Internet bots. Automated threats are popular on the internet as they can complete large amounts of repetitive tasks with almost no cost to execute.
Threat ontology
The OWASP Automated Threat Handbook provides a threat ontology for classifying automated threats.
References
Types of malware
Impact of automation | Automated threat | [
"Engineering"
] | 94 | [
"Impact of automation",
"Automation"
] |
51,646,890 | https://en.wikipedia.org/wiki/Haplarithm | Parental (paternal and maternal) haplarithms are the outputs of haplarithmisis process. For instance, paternal haplarithm represents chromosome specific profile illuminating paternal haplotype of that chromosome (including homologous recombination between the two paternal homologous chromosomes) and the amount of those haplotypes. Importantly, the haplarithm signatures allow tracing back the genomic aberration to meiosis and/or mitosis.
References
Genomics
Human genetics
Molecular biology techniques | Haplarithm | [
"Chemistry",
"Biology"
] | 109 | [
"Molecular biology techniques",
"Molecular biology"
] |
51,648,390 | https://en.wikipedia.org/wiki/List%20of%20music%20sequencers | Music sequencers are hardware devices or application software that can record, edit, or play back music, by handling note and performance information.
Hardware sequencers
Many synthesizers, and by definition all music workstations, groove machines and drum machines, contain their own sequencers.
The following are specifically designed to function primarily as the music sequencers:
Rotating object with pins or holes
Barrel or cylinder with pins (since 9th or 14th century) — utilized on barrel organs, carillons, music boxes
Metal disc with punched holes (late 18th century) — utilized on several music boxes such as Polyphon, Regina, Symphonion, Ariston, Graphonola (early version), etc.
Punched paper
Book music (since 1890) for pneumatics system — utilized on several mechanical organs
Music roll for pneumatics system — utilized on player pianos (using piano rolls), Orchestrions, several mechanical organs, etc.
Punch tape system for earliest studio synthesizers
RCA Mark II Sound Synthesizer by Herbert Belar and Harry Olson at RCA, a room-filling device built in 1957 for half a million dollars. It included a synthesizer with 4-note polyphony and 12 oscillators, and a sequencer fed with wide paper tape, with the output recorded by a disc cutting lathe.
Siemens Synthesizer (1959) at Siemens-Studio für elektronische Musik
Sound-on-film
Variophone (1930) by Evgeny Sholpo—in the earliest version, hand-drawn waves on film or disc were used to synthesize sound; later versions were intended to experiment with musical intonations and the temporal characteristics of live music performance, but were not finished. The Variophone is often referred to as a forerunner of drawn-sound systems, including the ANS synthesizer and Oramics.
Composer-Tron (1953) by Osmond Kendal—rhythmic sequences were controlled via cue markings on film, while the timbre of a note or the envelope shape of a sound was defined via hand-drawn shapes on the surface of a CRT input device, drawn with a grease pencil.
ANS synthesizer (1938-1958) by Evgeny Murzin—an earliest realtime additive synthesizer using 720 microtonal sine waves (1/6 semitones × 10 octaves) generated by five glass discs. Composers could control the time evolution of amplitudes of each microtone via scratches on a glass plate user interface covered with black mastic.
Oramics (1957) by Daphne Oram—hand-drawn contours on a set of ten sprocketed, synchronized strips of 35 mm film were used to control various parameters of a monophonic sound generator (frequency, timbre, amplitude and duration). Polyphonic sounds were obtained using multitrack recording techniques.
Electro-mechanical sequencers
Wall of Sound (mid-1940s–1950s) by Raymond Scott—early electro-mechanical sequencer developed by Raymond Scott to produce rhythmic patterns, consisting of stepping relays, solenoids, and tone generators
Circle Machine (1959) by Raymond Scott—electro-optical rotary sequencer developed by Raymond Scott to generate arbitrary waveforms, consisting of dimmer bulbs arranged in a ring and a rotating arm with a photocell scanning over the ring
Wurlitzer Sideman (1959)—first commercial drum machine; rhythm patterns were electro-mechanically generated by rotating disk switches, and drum sounds were electronically generated by vacuum-tube circuits
Analog sequencers
Analog sequencers with CV/Gate interface
Buchla 100's sequencer modules (1964/1966–)
One of the earliest analog sequencers of the modular synthesizer era, which began in the 1960s; Robert Moog later admired Buchla's unique work, including this sequencer
Moog 960 Sequential Controller / 961 Interface / 962 Sequential Switch (c.1968)
A popular analog sequencer module for the Moog modular synthesizer system, following the earliest Buchla sequencer
Aries AR334 (module)
ARP 1601 and 1027 (module)
Buchla 245, 246
Doepfer Dark Time
Electro Harmonix Sequencer
EML 400
ETI 603 (DIY project)
genoQs Octopus-digital midi
genoQs Nemo-digital midi
Korg SQ-10
MFB Urzwerg / MFB Urzwerg Pro—CV/Gate step sequencer with 8steps/4tracks or 16steps/2tracks; also synchronizable with MIDI sequencer
Oberheim Mini Sequencer MS1A
PAiA 4780
Polyfusion AS1, AS1R and 2040/2041/2042/2043 modules
PPG 313, 314
Roland 104, 182, 717A
Sequential Circuits Model 600
Serge Modular TKB, SQP, SEQ8
Steiner Parker 151
Synthesizers.com Q119
Synthesizers.com Q960—reissue of Moog 960
WMS 1020A
Yamaha CS30 (1977)—monophonic synthesizer keyboard with built-in 8-step analog sequencer
Analog-style step sequencers
Analog-style MIDI step sequencers
Since the analog synthesizer revivals in the 1990s, newly designed MIDI sequencers with a series of knobs or sliders similar to analog sequencer have appeared. These often equip CV/Gate and DIN sync interface along with MIDI, and even patch memory for multiple sequence patterns and possibly song sequences. These analog-digital hybrid machines are often called "Analogue-style MIDI step sequencer" or "MIDI analogue sequencer", etc.
Doepfer MAQ 16/3—MIDI analog sequencer, designed in cooperation with Kraftwerk
Doepfer Regelwerk—MIDI analog sequencer with MIDI controller
Frostwave Fat Controller
Infection Music Phaedra
Infection Music Zeit
Latronic Notron
Manikin Schrittmacher
Quasimidi Polymorph (1999)—Four-part multitimbral tabletop synthesizer, with an analogue-like step sequencer
Roland EF-303—Multiple effects unit with 16-step modulation, also usable as the analog-style MIDI step sequencer
Sequentix P3
Analog-style MIDI pattern sequencers
Several machines also provide "song mode" to play the sequence of memorised patterns in specified order, as per drum machines.
Doepfer Schaltwerk—MIDI pattern sequencer
Step sequencers (supported on)
Typical step sequencers are integrated into drum machines, bass machines, groove machines, music production machines, and their software versions. These often also support a semi-realtime recording mode.
MFB Step 64—Standalone step sequencer dedicated for drum patterns (16 steps/4 tracks or 64 steps/1 track, 118 programs×4 banks, 16 song sequences, each with up to 128 sequences)
Embedded self-contained step sequencers
Several tiny keyboards provide a step sequencer combined with an independent timing mode for recording and performance:
Casio VL-Tone VL-1 (1979), Casiotone MT-70 (c.1984), Sampletone SK-1 (1986), etc.—Timings of musical notes stored on the step sequencer, can be designated by the two trigger buttons labeled "One Key Play", around the right hand position
Embedded CV/Gate step sequencers
Several machines have white and black chromatic keypads, to enter the musical phrases.
Multivox / Firstman SQ-01 (1980)—a forerunner of TB-303
Roland TB-303 (1981)
Roland SH-101 (1982)—monophonic keytar synthesizer with sequencer
Roland MC-202 (1983)—monophonic tabletop synthesizer with sequencer, similar to SH-101
Embedded MIDI step sequencers
Groovebox-type machines with white and black chromatic keypads, often support step recording mode along with realtime recording mode:
Korg Electribe / Electribe 2 series
Roland Corporation MC series: MC-09 / MC-303 / MC-307 / MC-505 / MC-808 / MC-909
Yamaha RM1x
Yamaha RS7000—Music Production Studio
Other groovebox-type machines (including several music production machines) also often support step recording mode, of course:
Linn 9000 (1984)
Sequential Circuits Studio 440 (1986)
E-mu SP-12 (1986)
E-mu SP-1200 (1987)
Akai MPC series (1988–)
Akai MPC Renaissance / Studio / Fly (2012)—Software with control surfaces
Native Instruments Maschine (2009)—Software with control surface
Roland MV-30
Roland MV-8000—Production Studio
Button-grid-style step sequencers
Recently emerging button-grid-style interfaces and instruments naturally support step sequencing. On these machines, one axis of the grid represents the musical scale or sample to play, and the other axis represents the timing of the notes; a minimal code sketch of this idea follows the list below.
Akai APC40—interface for Ableton Live
Arduinome—interface
Bliptronics 5000—instrument
Monome—interface
Novation Launchpad—interface for Ableton Live
Yamaha Tenori-on—instrument
Synthstrom Deluge - Piano-roll-style sequencing on 128 pads (16×8)
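The following is a minimal, illustrative sketch of the grid-to-pattern mapping described above, not the firmware of any of the devices listed here; the grid size, row-to-note assignment and pattern contents are made-up values chosen purely for demonstration.

```python
# Illustrative sketch of a button-grid step sequencer: one axis selects
# the note/sample (rows), the other axis selects the step at which it
# is triggered (columns). All values below are hypothetical.

GRID_ROWS = 8    # one row per note or sample slot
GRID_STEPS = 16  # one column per step in the pattern

# Hypothetical MIDI note assignment for each row.
ROW_NOTES = [48, 51, 53, 55, 58, 60, 63, 65]

# The pattern is simply the set of lit buttons: (row, step) pairs.
pattern = {(0, 0), (2, 4), (4, 8), (2, 12), (7, 14)}

def events_for_step(step):
    """Return the MIDI note numbers to trigger at a given step."""
    return [ROW_NOTES[row] for row in range(GRID_ROWS) if (row, step) in pattern]

# Stepping through the pattern once, as a sequencer clock would:
for step in range(GRID_STEPS):
    notes = events_for_step(step)
    if notes:
        print(f"step {step:2d}: trigger notes {notes}")
```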
In addition, newly designed hardware MIDI sequencers equipped with a series of knobs or sliders similar to analog sequencers have appeared. For details, see #Analog-style MIDI step sequencers.
Digital sequencers
CV/Gate
These also often support Gate clock and DIN sync interfaces.
EDP Spider (late 1970s)—supported LINK and CV/Gate
EMS Sequencer series (1971)
Max Mathews GROOVE system (1970)
Multivox MX-8100 / Firstman SQ-10 (1979/1980)—supported V/Oct. and Hz/V
Oberheim DS-2 (1974)
Roland CSQ-100
Roland CSQ-600 (1980)—it stores 600 notes across 4 individual tracks; a buddy of the TR-808
Roland MC-4 Microcomposer (1981)
Roland MC-8 Microcomposer (1977)—also supporting DCB via OP-8
Sequential Circuits Model 800 (1977)
Proprietary digital interfaces (pre MIDI era)
NED Synclavier series—CV/Gate interface and MIDI retrofit kit were available on Synclavier II. Also MIDI became standard feature on Synclavier PSMT
Fairlight CMI series—CV/Gate interface was optionally available on Series II, and MIDI was supported on Series IIx and later models
Oberheim DSX (Oberheim Parallel Bus)
PPG Wave family (PPG Bus)
Rhodes Chroma (Chroma Computer Interface)
Roland JSQ-60 (Roland Digital Control Bus (DCB))
Sequential Circuits PolySequencer 1005 (SCI Serial Bus)
Yamaha CS70M (Key Code Interface)
Hardware MIDI sequencers
Standalone MIDI sequencers
Akai ASQ10
Alesis MMT-8—a buddy of HR-16 drum machine
Korg SQD-1
Korg SQD-8
Kawai Q-80
Roland MC-327
Roland MC series: MC-50/MC-50MkII/MC-80/MC-300/MC-500 Microcomposer
Roland MSQ-100 (1985)
Roland MSQ-700 (1984)—one of the earliest multitrack MIDI sequencers (8 tracks), a buddy of the TR-909
Roland SB-55—SMF recorder
Yamaha QX series: QX1/QX3/QX5/QX7/QX21
MIDI phrase sequencers
Zyklus MPS
Embedded MIDI sequencers
Sequential Circuits Six-Track (1984), MultiTrak (1985), Split-8 / Pro-8 (1985)
MIDI sequencers with embedded sound module
Yamaha TQ5—desktop version of EOS YS200 FM workstation
Yamaha QY10—with embedded GM tone generator (1990)
Yamaha QY20—with embedded GM tone generator (1992)
Yamaha QY300—with embedded GM tone generator (1994)
Yamaha QY20—with embedded GM tone generator (1995)
Yamaha QY700—with embedded XG tone generator (1996)
Yamaha QY70—with embedded XG tone generator (1997)
Yamaha QY100—with embedded XG tone generator (2000)
Palmtop MIDI sequencers
Korg SQ-8—palmtop sequencer
Philips Micro Composer PMC100
Roland PMA-5—palmtop sequencer with touch screen
Yamaha Walkstation series: QY8/QY10/QY20/QY22/QY70/QY100—palmtop sequencer with embedded sound module
Accompaniment machines
Boss DR-5 Dr.Rhythm Section
Yamaha QR10 Musical Accompaniment Player
Open-source hardware
MIDIbox Sequencer modules—Analog-style MIDI step sequencer/MIDI effect processor modules of MIDIbox project
oTTo Sampler, Sequencer, Multi-engine synth and effects - in a box.
Software sequencers and DAWs with sequencing features
Free, open source
Scorewriters
MuseScore—Linux, Windows, OS X
DAW with MIDI sequencers
Ardour—Linux, OS X, FreeBSD, Windows
LMMS—Linux, Windows
MusE—Linux
Qtractor—Linux
Rosegarden—Linux
Auxy: Beat Studio—iOS 7
Drum machines
Hydrogen—Linux, OS X
Commercial
Scorewriters
Aegis Sonix—Amiga
Software MIDI sequencers
B-Step Sequencer from Monoplugs
Fugue Machine from Alexandernaut
Master Tracks Pro from Passport Music Software
Bars & Pipes Professional from Blue Ribbon Software (improved by Alfred Faust) at http://bnp.hansfaust.de/indexeng.html
Loop-oriented DAWs with MIDI sequencers
ACID Pro and Cinescore from Sony Creative Software
Live from Ableton
GarageBand from Apple
REAPER from Cockos
Tracktion from Mackie
Tracker-oriented DAWs with MIDI sequencers
Renoise
DAWs with MIDI sequencers
Ableton Live from Ableton
Audition from Adobe (removed since Version 4 CS5.5)
Bitwig Studio from Bitwig
Cubase and Nuendo from Steinberg
Digital Performer from MOTU
REAPER from Cockos
FL Studio from Image Line Software
Logic Pro and Logic Express from Apple
Mixcraft from Acoustica
Mixbus from Harrison
MuLab from MUTools
MultitrackStudio from Bremmers Audio Design
n-Track Studio from n-Track Software
Pro Tools from Avid
Samplitude, Sequoia, Music Maker and Music Studio from Magix
Sonar, Music Creator and Home Studio from Cakewalk
Studio One from PreSonus
Podium from Zynewave (gratis)
Z-Maestro from Z-Systems
Integrated software studio environments
Reason and Record from Propellerhead
Storm from Arturia
See also
List of music software
References
Music sequencers | List of music sequencers | [
"Engineering"
] | 2,996 | [
"Music sequencers",
"Automation"
] |
51,649,142 | https://en.wikipedia.org/wiki/Urban%20air%20mobility | Urban air mobility (UAM) is the use of small, highly automated aircraft to carry passengers or cargo at lower altitudes in urban and suburban areas which have been developed in response to traffic congestion. It usually refers to existing and emerging technologies such as traditional helicopters, vertical-takeoff-and-landing aircraft (VTOL), electrically propelled vertical-takeoff-and-landing aircraft (eVTOL), and unmanned aerial vehicles (UAVs). These aircraft are characterized by the use of multiple electric-powered rotors or fans for lift and propulsion, along with fly-by-wire systems to control them. Inventors have explored urban air mobility concepts since the early days of powered flight. However, advances in materials, computerized flight controls, batteries and electric motors improved innovation and designs beginning in the late 2010s. Most UAM proponents envision that the aircraft will be owned and operated by professional operators, as with taxis, rather than by private individuals.
Urban air mobility is a subset of a broader advanced air mobility (AAM) concept that includes other use cases than intracity passenger transport; NASA describes advanced air mobility as including small drones, electric aircraft, and automated air traffic management among other technologies to perform a wide variety of missions including cargo and logistics. This is also supported by the drone market consulting firm Drone Industry Insights, who also includes vertiports into the definition of AAM and UAM.
History
Pre-history
The development of the earliest predecessors of UAM aircraft began in the early 1900s with early concepts of “flying cars” such as Glenn Curtiss's Autoplane, developed in 1917. Three years later, Henry Ford began prototyping “plane cars” as single-seat aircraft, but halted development after a fatal crash in early tests.
One of the first vertical-takeoff-and-landing aircraft (VTOLs) was the 1924 Berliner No. 5. It recorded its best performance when it reached a height of 4.57 m (15 ft) during a one-minute, thirty-five second flight.
Pitcairn, Cierva, Buhl and other manufacturers developed autogyros prototypes.
The Avrocar was a disk-shaped aircraft designed for military use. Initially funded by the Canadian government, the project was dropped due to costs until the U.S. Army and Air Force took over the development of the Avrocar in 1958. The Avrocar encountered issues with both thrust and stability and the project was eventually canceled in 1961.
Helicopters and air taxi services
Beginning in the early 1950s, air operators offered UAM air taxi services via helicopters in a handful of U.S. cities, including New York, Los Angeles, and San Francisco. In 1964, New York Airways (NYA) and Pan American offered more than 30 flights between John F. Kennedy International Airport and Newark Liberty International Airport with stops in Manhattan such as Wall Street. The average cost for a one-way fare was $4–11.
From 1964 to 1968, PanAm offered regular helicopter connections between midtown Manhattan and John F. Kennedy International Airport, allowing passengers to connect directly to their flights from the New York City Pan American building. The service was halted in 1979 after a crash in 1977 killed four people on the roof and one on the ground below. In the 1980s, Trump Shuttle offered helicopter service between Wall Street and LaGuardia Airport, utilizing Sikorsky S-61 helicopters. The service was discontinued in the 1990s after Trump Shuttle was acquired by US Airways. In 1986, Helijet began as a helicopter airline with routes between Vancouver and Victoria in British Columbia.
BLADE, launched in 2014 in New York City, providing helicopter-based air taxi services. BLADE has since launched similar services in the San Francisco Bay Area and Mumbai.
In 2017 Voom, a subsidiary of aircraft maker Airbus, flew more than 15,000 passengers in São Paulo, Brazil using Airbus helicopters. The Voom UAM demonstration program operated for four years and was shut down in March 2020. In 2019, Uber began to offer Uber Copter in Lower Manhattan New York to John F. Kennedy International Airport. Some cities have encouraged the idea of inexpensive, point-to-point air travel as a way of reducing traffic congestion and moving goods.
VTOLs and eVTOLs
By the mid-2000s, aircraft designers were incorporating technologies pioneered in small drones into new aircraft designs for passengers. These technologies included distributed propulsion (the use of multiple rotors or fans), lithium ion batteries, inexpensive accelerometers, miniaturized navigation systems and carbon-fiber construction.
In 2010, Kitty Hawk Corporation, funded by Google Co-founder Larry Page, began development of the Kitty Hawk Flyer. On October 5, 2011, Marcus Leng, Founder of Opener, piloted the first manned flight of a fixed-wing all electric VTOL aircraft. On October 21, 2011, the co-founder and primary designer of Volocopter, Thomas Senkel, flew the first manned flight of an electric multicopter, the Volocopter VC1 prototype. In 2012, Joby Aviation and NASA partnered to prototype an experimental eVTOL. In 2014, The Leading Edge Asynchronous Propeller Technology (LEAPTech) project was launched as a collaboration of NASA Langley Research Center and NASA Armstrong Flight Research Center along with Empirical Systems Aerospace (ESAero) and Joby Aviation.
Lockheed Martin debuted its optionally piloted helicopter, the S-76B Sikorsky Autonomous Research Aircraft (SARA), in downtown Los Angeles in 2019. In 2018, the Wisk Cora eVTOL made a test flight in Mountain View, CA. That same year, Opener flew the BlackFly, a personal air vehicle, after nine years of development. Joby Aviation tested its tilt-rotor UAM vehicle in flight in March 2021. In June 2021, EHang completed the first pilotless test flight of the AAV EHang216 in Honshu, China. In the same month, Volocopter demonstrated its first public flight of an electric air taxi in France, along with remote-controlled flight of its eVTOL, the Volocopter 2X. In July 2021, Joby completed a 150-mile eVTOL flight on a single battery charge, flying a 14-mile circuit 11 times for a total flight time of one hour and 17 minutes.
Air mobility is progressing in both manned and UAV directions. In Hamburg, the WiNDroVe project – (use of drones in a metropolitan area) was implemented from May 2017 through January 2018. In Ingolstadt, Germany the Urban Air Mobility project began in June 2018, involving Audi, Airbus, the Carisma Research Center, the Fraunhofer Application Center for Mobility, the THI University of Applied Sciences (THI in the artificial intelligence research network) and other partners. Envisioned was use of UAM in emergency services, transport of blood and organs, traffic monitoring, public safety and passenger transport.
The German, Dutch and Belgian cities Maastricht, Aachen, Hasselt, Heerlen and Liège joined the UAM Initiative of the European Innovation Partnership on Smart Cities and Communities (EIP-SCC). Toulouse, France, is participating in the European Urban Air Mobility Initiative. The project is coordinated by Airbus, the European institutional partner Eurocontrol and EASA (European Aviation Safety Agency).
Implementation
The concept was realized in São Paulo, Brazil, with over 15,000 passengers flown by Voom. There, urban air mobility was provided by helicopters. Helicopter air taxis are already available in Mexico City, Mexico. Fast air connections are still associated with high costs, and cause considerable noise and high energy consumption.
The Voom UAM demonstration program operated for four years, and was shut down in March 2020.
Urban-Air Port, a UK Government-sponsored startup R&D firm, has developed a prototype take-off and landing hub at Coventry, equipped for eVTOLs, PAVs and drones, in conjunction with Hyundai.
Aircraft
Personal air vehicles (PAVs) are under development for urban air mobility. These include projects such as the CityAirbus demonstrator, the Lilium Jet or the Volocopter, the EHang 216 and the experimental Boeing Passenger Air Vehicle.
In the concept phase, urban air mobility aircraft with VTOL capabilities are designed to take off and land vertically in a relatively small area, avoiding the need for a runway. The majority of designs are electric and use multiple rotors to minimize noise (due to rotational speed) while providing high system redundancy. Many of them have completed their first flight.
The most common configurations of urban air mobility aircraft are multicopters (such as the Volocopter) or so-called tiltwing convertiplane aircraft (e.g. A³ Vahana). The first type uses only rotors with vertical axis, while the second additionally have propulsion and lift systems for horizontal flight (e.g. pressure propeller and wing).
Power source
In order for UAM aircraft to be most efficient, recharging and refueling must be done as quickly as possible, whether that is swapping batteries, fast recharging batteries, or hydrogen refueling.
Conventional fuel
Conventional fossil fuels are readily available and offer high power density (the amount of power produced per kilogram of fuel). However, traditional piston or turbine engines emit smoke and noise. The heavy mechanical linkages needed to distribute power limit the number and configuration of rotors on an aircraft.
Sustainable or synthetic aviation fuel
Synthetic fuels have the potential to produce nearly carbon-neutral energy while utilizing existing refueling infrastructure, but they pose the same challenges as conventional fuel in terms of noise and mechanical limitations.
Electric
Rechargeable batteries are often used in UAVs and eVTOLs. Emerging eVTOL vehicles are limited by the relatively low energy density to weight ratio in current battery technology, as well as the lack of infrastructure required for recharging stations.
Hybrid-electric
Hybrid-electric systems use a combination of internal combustion engine (ICE) and electric propulsion system components. Different combinations are possible. These systems can provide combined advantages from different energy sources, but still must be viewed in terms of the overall system's efficiency.
Hydrogen fuel cells
Hydrogen fuel cells generate electricity by circulating hydrogen gas through a catalytic membrane. Small fuel cells can power light drones for three times longer than equivalent batteries. Fuel cells are in development for larger aircraft. Experimental regional aircraft retrofitted with fuel cell-electric propulsion systems have flown in 2023. In January 2023, ZeroAvia flew a Dornier 228 with one original Honeywell TPE 331 turboprop engine on the right wing and a proprietary ZeroAvia hydrogen-electric engine on the left wing. In March 2023, Universal Hydrogen's electric Dash 8-300 made its maiden flight.
Propulsion
Common VTOL and eVTOL configurations include:
Multirotor or multicopter
Multirotor aircraft have small wings, or no wings at all. They use downward-facing propellers or fans to generate the majority of their lift.
Lift-plus-cruise
Lift-plus-cruise aircraft utilize vertically mounted propellers for take-off and landing, but a horizontal propeller and wings for sustained cruise flight.
Ducted fans
Ducted fans are a type of propeller mounted within a duct, which optimizes the thrust from the tips of the blades.
Tiltrotors
Tiltrotor aircraft lift exclusively by rigid propeller and have no other horizontal propulsion type. They generate horizontal thrust by physically tilting the rotors into a horizontal position once airborne.
Tiltwing
Tiltwing aircraft are similar to tiltrotor aircraft, but rather than independently rotating the rotors, the entire wing is rotated.
Flight controls
Flight controls consist of flight control surfaces, cockpit controls, and operating mechanisms used to control an aircraft's direction in flight. Honeywell, Pipistrel, Vertical Aerospace, Lilium and other companies are collaborating to create new flight controls for a variety of eVTOL aircraft. Honeywell developed a fly-by-wire computer that controls multiple rotors, a detection-and-avoidance radar to navigate traffic, and software to track landing zones for repeatable vertical landings.
Fly by wire
Fly-by-wire systems translate a pilot's inputs into commands sent to an aircraft's motors, propeller governors, ailerons, elevators and other moving surfaces. They are essential in multirotor designs because human pilots cannot control multiple propellers without computer assistance. In June 2019, Honeywell introduced a miniaturized computer specifically designed for UAM aircraft.
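To illustrate why computer mediation is indispensable here, the sketch below shows the kind of "mixing" step a fly-by-wire computer performs for a four-rotor aircraft: high-level commands are allocated across individual rotors. It is a generic, simplified example; the rotor layout, sign conventions and clamping range are assumptions made for illustration and do not represent any manufacturer's actual control laws.

```python
# Minimal illustration (not any vendor's flight software) of multirotor
# "mixing": pilot commands for total thrust, roll, pitch and yaw are
# allocated across four rotors, something a human could not do by hand.

def quad_mixer(thrust, roll, pitch, yaw):
    """Map high-level commands to per-rotor outputs for an X-configured quadrotor.

    Inputs are normalized (thrust in [0, 1], the rest roughly [-1, 1]);
    outputs are clamped to the valid motor range [0, 1].
    """
    # Assumed layout: m1 front-right, m2 rear-left, m3 front-left, m4 rear-right.
    m1 = thrust - roll + pitch + yaw
    m2 = thrust + roll - pitch + yaw
    m3 = thrust + roll + pitch - yaw
    m4 = thrust - roll - pitch - yaw
    return [min(max(m, 0.0), 1.0) for m in (m1, m2, m3, m4)]

# Example: hover thrust with a small roll command.
print(quad_mixer(thrust=0.5, roll=0.1, pitch=0.0, yaw=0.0))
```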
Software
Advanced autonomous eVTOL fleets require management software in order to scale to profitable levels. Pilot training is costly, and pilots themselves take up much of an aircraft's payload, so many manufacturers are designing aircraft that can fly autonomously as automation technology improves. Sikorsky is developing its MATRIX technology, while Honeywell has partnered with Pipistrel and other manufacturers to develop automatic landing systems for their respective aircraft. Artificial intelligence (AI) and machine learning are necessary to develop autonomous craft, but they complicate certification because they are non-deterministic, i.e. they may behave differently given the same input in the same scenario.
Avionics
Avionics are electronic systems designed for aircraft. Honeywell is developing integrated avionics systems comprising a vehicle management system, autonomous navigation, a fly-by-wire control system, and compact satellite connectivity. The avionics are modular and able to integrate with third-party applications. The architecture can also incorporate simplified vehicle operations, which replaces traditional pilot displays with imagery that is similar to a car GPS system or smartphone app.
Infrastructure
UAM requires infrastructure for vehicles to take off, land, be repaired, recharge or refuel, and park. The size of the physical infrastructure determines the market size, as trips can only be completed between established landing areas. While some components can be integrated into existing aviation and aerospace infrastructure, additional facilities need to be constructed. For large cities it is estimated that there could be 85–100 take-off and landing pads to accommodate a UAM environment.
Vertiports
See main article vertiport
According to the FAA, a vertiport is an identifiable ground or elevated area, together with associated equipment and facilities, used for the take-off and landing of tiltrotor aircraft and rotorcraft. The industry has used different terms to describe the various equipment levels and sizes of these facilities. Vertipads are simple landing pads designed to be used by one aircraft at a time. Vertiports or vertibases can feature one or more final approach and takeoff (FATO) and touch-down and lift-off (TLOF) areas, as well as several VTOL stands and other aircraft and passenger facilities. Vertihubs are the largest aviation facilities in the UAM environment; they can offer services such as FBOs and MROs and would serve concentrated high-traffic regions.
In 2020, Lilium announced their plans to construct a vertiport near Orlando International Airport. Joby has partnered with REEF Technology and Neighborhood Property Group (NPG) to use the rooftops of parking structures as take-off and landing areas.
Helipads
Existing helipads, or helicopter landing pads, can be used to accommodate UAM aircraft. Helipads are insufficient to sustain the industry without construction of additional infrastructure or modification of existing helipads.
Airports
Airports are already being used in limited locations to facilitate on-demand helicopter and eVTOL services. Such airports include John Wayne Airport, John F. Kennedy International Airport, and Portland International Airport.
Air traffic management
Unmanned aircraft systems (UAS) traffic management (collectively UTM) is a specific air traffic management system designed around the unique needs of unmanned and low-altitude aircraft. UTM provides airspace integrations necessary for ensuring safe operation through services such as design of the actual airspace, delineations of air corridors, dynamic geofencing to maintain flight paths, weather avoidance, and route planning without continuous human monitoring. Airspace Link developed AirHub, a system to connect cities, states, drone operators, and the FAA into a single space to map out the safest routes for autonomous drones using publicly available flight data.
Regulations
Governments around the world have begun debating changes to their airspace rules to accommodate high numbers of autonomous or semi-autonomous aircraft operating at low altitudes. NASA and EASA have proposed concepts for the requirements of a UAM system. NASA's concept of operations, or ConOps, relies on defined corridors for UAM craft which must then abide by specific protocols when inside the corridor. EASA's regulatory approach leaves local decision to “local actors” and will instead seek to certify the aircraft themselves for safety. They developed the VTOL special condition to certify the specific class of aircraft that were previously undefined.
Certifications
Aircraft
Aircraft need to be certified as airworthy, as well as registered with the appropriate governing body. Regulations for UAM aircraft are most similar to helicopter regulations but will need additional provisions for electric and/or autonomous craft. The FAA established a certification basis for eVTOL craft. eVTOLs are classified with the FAA as airplanes that can take off and land vertically. EASA released its special condition VTOL certification to separate VTOLs and eVTOLs from conventional rotorcraft or fixed-wing aircraft. Archer Aviation uses a blend of the FAA Part 23, 27, 33, 35, and 36 requirements to certify its eVTOL. BETA applied for eVTOL certification under Part 23 with the FAA, and its aircraft was the first manned eVTOL to receive military airworthiness approval from the Air Force.
Operations
All VTOL and eVTOL aircraft that carry persons or property for hire must be flown by an appropriately certificated operator. Joby applied for a FAA Part 135 certificate to operate their own aircraft for UAM projects. Lilium partnered with Luxaviation to operate eVTOL jets in Europe.
Pilots
Pilots need to be certified to operate an eVTOL and remote eVTOLs. Pilots can obtain a commercial pilot license (CPL(H)) or an air transport pilot license (ATPL(H)) for manned craft. CAE is developing training programs utilizing data analytics with complex simulators. CAE and BETA partnered to offer eVTOL pilot and maintenance technician training for ALIA eVTOLs. CAE and Volocopter partnered to develop a pilot training program for Volocopter eVTOLs.
Mechanics
Mechanics also need to be certified, but as this is an emerging industry, there are not yet regulations in place to do so for the relevant aircraft and technologies.
Applications
Applications include commute, law enforcement, air medical, fire, private security, and military.
Public acceptance
Public acceptance of UAM relies on a variety of factors, including but not limited to safety, energy consumption, noise, security, and social equity. Safety risks overlap with most current aircraft risks, including the potential for flights outside of approved airspace, proximity to people and/or buildings, critical system failures or loss of control, and hull loss.
In the case of autonomous or remote-piloted aircraft, cybersecurity becomes a risk as well. The type of and volume of the noise caused by aircraft and rotorcraft are two leading factors regarding the public perception of eVTOL craft in UAM applications.
Specific security concerns include the physical security of passengers in the absence of crew members and the cybersecurity of both the craft and the systems governing it. In regard to social equity, the high initial costs of UAM services could prove to be detrimental to public opinion, especially as the affordability of services and technologies is not guaranteed. In the NASA UAM market study, respondents with higher incomes were more likely to take UAM trips.
An EASA survey showed that 83% of respondents had a positive attitude towards UAM, while 71% were ready to try UAM services. Projects underway include Lilium's planned first U.S. vertihub in Orlando for its on-demand electric jet service and EHang's UAM pilot program in Seville, Spain.
Training and education
In December 2016, the Vertical Lift Research Centers of Excellence (VLRCOE) announced its new academic teams for its program. The joint effort of the United States Army, United States Navy, and NASA aims to foster direct collaboration between the government and academic institutions. Universities were grouped into various teams: Georgia Institute of Technology, Iowa State University, Purdue University, University of Michigan, and Washington University; University of Liverpool, Pennsylvania State University, Embry Riddle Aeronautical University, University of California, Davis, and University of Tennessee, University of Maryland, United States Naval Academy, University of Texas at Arlington, University of Texas at Austin, and Texas A&M University; Technical University of Munich, Roma Tre University, and Technion – Israel Institute of Technology.
Volocopter and CAE partnered to create the first eVTOL pilot training and development program in July 2021.
See also
Lists of aviation topics
List of aviation, avionics, aerospace and aeronautical abbreviations
Index of aviation articles
References
Infrastructure
Proposed transport infrastructure
Technology in society | Urban air mobility | [
"Engineering"
] | 4,307 | [
"Construction",
"Infrastructure"
] |
51,650,091 | https://en.wikipedia.org/wiki/Nicolae%20Negur%C4%83 | Nicolae Negură (October 22, 1832–September 1884) was a Moldavian-born Romanian physician.
Born in Huși, Negură studied medicine in Germany, where he was influenced by the materialism of Carl Vogt, Jacob Moleschott and Ludwig Büchner. He received a doctorate in 1856 from the University of Berlin; the thesis was titled De febre Moldaviensi. Later that year, he settled in the Moldavian capital Iași, obtaining a license to practice and obtaining a post as a primary care physician at Sfântul Spiridon Hospital. In September 1859, at a time when medical education in Moldavia was limited to a midwives' school, he proposed that the Ministry of Religious Affairs and Public Instruction set up a school for surgeons that would serve as the basis for a medical faculty. After receiving ministerial approval, he proceeded within a year to translate and publish a manual of anatomy and to obtain necessary materials such as a microscope, normal and diseased anatomical samples, skeletons and plaster casts. He had the support of domnitor Alexandru Ion Cuza, and of Mihail Kogălniceanu.
At the end of November, Negură was able to begin teaching in the Academia Mihăileană building. The following January, he was named professor of surgery and medicine by princely decree. Certain circles, consolidated around the Sfântul Spiridon administration, opposed his initiative, but Negură was able to complete the 1860–1861 academic year. However, opposition then forced the school to shut down. Although ephemeral, it marked an important stage in the evolution of medical education in Romania.
Negură's philosophical writings offer a clue to his motivation in creating the school. His 1865 articles "Viața, existența și moartea" and "Voința, putința și răbdarea" mark him as an early Romanian promoter of a material conception about nature and thought. He also published Migrenă, an 1868 study of migraines; and Higienă publică și privată (1875), a hygiene manual. He was an active member of the Iași society of physicians and naturalists. Moving to Bucharest around 1865, he offered courses on forensic medicine and toxicology at the national school of medicine and pharmacy.
Notes
1832 births
1884 deaths
19th-century Prussian people
People from the Principality of Moldavia
People from Huși
Humboldt University of Berlin alumni
19th-century Romanian physicians
Materialists | Nicolae Negură | [
"Physics"
] | 507 | [
"Materialism",
"Matter",
"Materialists"
] |
51,651,945 | https://en.wikipedia.org/wiki/Appalachian%20bogs | Appalachian bogs are boreal or hemiboreal ecosystems, which occur in many places in the Appalachian Mountains, particularly the Allegheny and Blue Ridge subranges. Though popularly called bogs, many of them are technically fens.
Natural history
After the Pleistocene ice ages, species and ecosystems that had shifted southward often survived in local refugia. As a result, cold-adapted ecosystems, such as bogs, remain as far south as East Tennessee and Western North Carolina. Development of land has greatly reduced both the number and acreage of the bogs in North Carolina. Bog ecosystems evolved in humid, cold temperate regions and are generally ombrotrophic, meaning the system depends on precipitation for moisture and nutrient inputs.
Shady Valley bogs
Situated between Holston Mountain and the Iron Mountains, the community of Shady Valley, Tennessee, once contained an estimated 10,000 acres (40 km2) of cranberry bogs. In recent years, The Nature Conservancy has initiated a bog restoration program in Shady Valley.
The Conservancy also sponsors the town's annual Cranberry Festival, which is held the second weekend in October.
Notable bog preserves
Cranberry Glades, in Pocahontas County, West Virginia
Cranesville Swamp Preserve, in Preston County, West Virginia and Garrett County, Maryland
Mountain Bogs National Wildlife Refuge in Ashe County, North Carolina.
Tamarack Swamp, in Pennsylvania's West Branch Susquehanna Valley
Tannersville Cranberry Bog, in Northeastern Pennsylvania
Cataract bogs
A cataract bog is a rare ecological community, formed where a permanent stream flows over a granite outcropping. The sheeting of water keeps the edges of the rock wet without eroding the soil, but in this precarious location no tree or large shrub can maintain a roothold. The result is a narrow, permanently wet, sunny habitat.
While a cataract bog is host to plants typical of a bog, it is technically a fen, not a bog. Bogs get water from the atmosphere, while fens get their water from groundwater seepage.
Cataract bogs inhabit a narrow, linear zone next to the stream, and are partly shaded by trees and shrubs in the adjacent plant communities.
Cataract bogs are found only in the Southern Appalachian Mountains of the United States, at elevations of between . They are restricted to the Blue Ridge Escarpment region of South Carolina and a small area of North Carolina, a region with exceptionally high rainfall.
Sods
Sods is a term used in the Allegheny Mountains of eastern West Virginia for a mountaintop meadow or bog, in an area that is otherwise generally forested. The term is similar (perhaps identical) to that of a "grass bald", a more widespread designation applied throughout the central and southern Appalachian region.
The best known example of a sods is Dolly Sods, a federally designated wilderness area in Tucker County, West Virginia and popular destination for recreationalists. Other examples include Nelson Sods (Pendleton County) and Baker Sods (Randolph County).
See also
Appalachian balds
Appalachian temperate rainforest
Cove (Appalachian Mountains)
Southern Appalachian spruce–fir forest
Fen
References
Bogs of the United States
Ecological restoration
Ecology of the Appalachian Mountains | Appalachian bogs | [
"Chemistry",
"Engineering"
] | 656 | [
"Ecological restoration",
"Environmental engineering"
] |
76,004,408 | https://en.wikipedia.org/wiki/Aluminium%20arsenide%20antimonide | Aluminium arsenide antimonide, or AlAsSb (AlAs1-xSbx), is a ternary III-V semiconductor compound. It can be considered as an alloy between aluminium arsenide and aluminium antimonide. The alloy can contain any ratio between arsenic and antimony. AlAsSb refers generally to any composition of the alloy.
Preparation
AlAsSb films have been grown by molecular beam epitaxy and metalorganic chemical vapor deposition on gallium arsenide, gallium antimonide and indium arsenide substrates. It is typically incorporated into layered heterostructures with other III-V compounds.
Structural and Electronic Properties
The room temperature (T = 300 K) bandgap and lattice constant of AlAsSb alloys are between those of pure AlAs (a = 0.566 nm, Eg = 2.16 eV) and AlSb (a = 0.614 nm, Eg = 1.62 eV). Over all compositions, the bandgap is indirect, as it is in pure AlAs and AlSb. AlAsSb shares the same zincblende crystal structure as AlAs and AlSb.
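As a rough illustration of how composition sets the lattice constant, a linear (Vegard's-law) interpolation between the endpoint values quoted above can be used. The sketch below is illustrative only: the InP lattice constant is an assumed literature value rather than a figure from this entry, and real bandgaps deviate from a linear mix because of bowing.

# Vegard's-law estimate for AlAs(1-x)Sb(x), using the endpoint lattice
# constants quoted above (AlAs: 0.566 nm, AlSb: 0.614 nm).
A_ALAS, A_ALSB = 0.566, 0.614  # nm

def lattice_constant(x):
    """Linear interpolation of the AlAs(1-x)Sb(x) lattice constant in nm."""
    return (1 - x) * A_ALAS + x * A_ALSB

# Composition that would lattice-match an InP substrate
# (a ~ 0.587 nm, an assumed literature value, not from this entry):
A_INP = 0.587
x_match = (A_INP - A_ALAS) / (A_ALSB - A_ALAS)
print(round(x_match, 2))  # ~ 0.44, i.e. roughly AlAs0.56Sb0.44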
Applications
AlAsSb can be lattice-matched to GaSb, InAs and InP substrates, making it useful for heterostructures grown on these substrates.
AlAsSb is occasionally employed as a wide-bandgap barrier layer in InAsSb-based infrared barrier photodetectors. In these devices, a thin layer of AlAsSb is grown between doped, smaller-bandgap InAsSb layers. These device geometries are frequently referred to as "nbn" or "nbp" photodetectors, indicating a sequence of an n-doped layer, followed by a barrier layer, followed by an n- or p-doped layer. A large discontinuity is introduced into the conduction band minimum by the AlAsSb barrier layer, which restricts the flow of electrons (but not holes) through the photodetector in a manner that reduces the photodetector's dark current and improves its noise characteristics.
References
Antimonides
Aluminium compounds
Arsenides
III-V compounds | Aluminium arsenide antimonide | [
"Chemistry"
] | 459 | [
"III-V compounds",
"Inorganic compounds"
] |
76,004,471 | https://en.wikipedia.org/wiki/Tournefortia%20gnaphalodes | Tournefortia gnaphalodes, the sea lavender, bay lavender, sea rosemary, iodine bush, or beach heliotrope, is a species of flowering plant in the family Boraginaceae. It is native to Florida, Mexico, Central America, the Caribbean, Bermuda, northeastern Colombia, and Venezuela. A semisucculent evergreen shrub reaching , it is typically found in coastal areas. Occasionally cultivated as an ornamental, it is often used for dune stabilization.
References
gnaphalodes
Halophytes
Flora of Florida
Flora of Veracruz
Flora of Southeastern Mexico
Flora of Belize
Flora of Honduras
Flora of Nicaragua
Flora of the Caribbean
Flora of Colombia
Flora of Venezuela
Plants described in 1819 | Tournefortia gnaphalodes | [
"Chemistry"
] | 142 | [
"Halophytes",
"Salts"
] |
76,004,852 | https://en.wikipedia.org/wiki/Pentadentate%20ligand | A pentadentate ligand is a ligand that coordinates via five donor atoms.
There are different possible ways for a ligand to arrange around an ion. For an octahedral coordination with six positions, the possible arrangements of a linear pentadentate ligand are designated ffms, ffma, fff, fmf, fmama, fmsma, fmsms and fmams, where each letter f or m represents three consecutive donor atoms: f denotes a facial (fac) arrangement, m a meridional (mer) arrangement, and a ("anti") or s ("syn") indicates the positioning of a mer arrangement relative to the other donors.
For a chain branched at the donor atom, the tertiary atom will have two chains of length one and one of length two attached. The pattern can use parentheses to indicate a side chain, e.g. NNM(N)N. For octahedral coordination there are four arrangements, designated f(m)m, f(f)f, f(f)ma and f(f)ms, where the parentheses now indicate how the side chain participates in coordination.
Ligands with four donor atoms on four chains around a central donor atom will organise around the metal atom equatorially.
The metal may be five-coordinate with arrangements being square-pyramidal or trigonal-bipyramidal, or somewhere between the two.
2,2':6',2'':6'',2''':6''',2''''-Quinquepyridine is fairly rigid and tends to remain planar, so coordination will be pentagonal planar. With silver it is planar, but with rhenium it twists slightly into a helix.
Examples
References
Ligands | Pentadentate ligand | [
"Chemistry"
] | 351 | [
"Ligands",
"Coordination chemistry"
] |
76,005,038 | https://en.wikipedia.org/wiki/Neptunium%20nitride | Neptunium nitride is a binary inorganic compound of neptunium and nitrogen with the chemical formula .
Preparation
Neptunium nitride can be prepared by the reaction of freshly obtained neptunium hydride and ammonia:
The reaction of neptunium and nitrogen can also obtain neptunium nitride:
Physical properties
Neptunium nitride forms black crystals in the cubic system with Fm3m space group. It is insoluble in water and decomposes if heated.
Uses
Neptunium nitride is used as a target material for plutonium-238 production.
+ →
References
Nitrides
Neptunium compounds
Nitrogen compounds | Neptunium nitride | [
"Chemistry"
] | 145 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
76,005,823 | https://en.wikipedia.org/wiki/Americium%20nitride | Americium nitride is a binary inorganic compound of americium and nitride with the chemical formula .
Preparation
Americium nitride can be obtained from the reaction of metallic americium with nitrogen or ammonia at 750–800 °C:
It can also be obtained from the reaction of americium trihydride () with nitrogen at 750 °C:
Chemical properties
Americium nitride reacts with cadmium chloride to form americium trichloride:
References
Nitrides
Americium compounds
Nitrogen compounds | Americium nitride | [
"Chemistry"
] | 109 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
76,005,913 | https://en.wikipedia.org/wiki/TOI-715 | TOI-715 is a red dwarf star located from the Earth in the constellation Volans, very close to the southern celestial pole. It hosts one confirmed exoplanet, named TOI-715 b, a super-Earth orbiting in its habitable zone. Another planet in the system is suspected. The star has an apparent magnitude of 16.7 and is too faint to be seen with the naked eye or even a small telescope. It is smaller and cooler than the Sun, with 24% its radius and a temperature of (53% solar).
Characteristics
TOI-715 is a red dwarf star, a type of star that is smaller, cooler and dimmer than the Sun and is the most common type of star in the Universe. TOI-715 in particular has 24% of the Sun's radius, 23% of its mass, and a surface temperature of , while the Sun has a surface temperature of . It is older than the Sun and has an age of billion years, and therefore has lower magnetic activity. Its metallicity, i.e. the abundance of elements other than hydrogen and helium, is 23% larger than the Sun's.
Planetary system
In 2023, a planetary companion was detected around TOI-715. The planet was initially identified by the Transiting Exoplanet Survey Satellite (TESS) on May 24, 2019, and later confirmed using ground-based photometry. Designated TOI-715 b, the planet is a super-Earth (radius = ) that is orbiting within its star's conservative habitable zone, being the first TESS discovery in this region. The planet has an orbital period of 19 days and orbits TOI-715 at a distance of . It receives insolation from its host star equivalent to 67% of the insolation received by the Earth from the Sun, and has an equilibrium temperature of −39 °C.
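As a back-of-the-envelope check of the quoted values, the equilibrium temperature can be estimated from the insolation alone. The Bond albedo below is an assumed value (none is stated in this entry), so the sketch only approximately reproduces the quoted −39 °C.

# Equilibrium-temperature estimate from insolation in Earth units.
# 278.3 K is Earth's zero-albedo equilibrium temperature.
S = 0.67       # insolation relative to Earth (from the entry)
albedo = 0.3   # assumed Bond albedo (not stated in the entry)

T_eq = 278.3 * S**0.25 * (1.0 - albedo)**0.25  # kelvin
print(round(T_eq - 273.15))  # ~ -43 degC; a lower assumed albedo moves this toward -39 degC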
Another planet in the system is suspected. This planet has an orbital period of 25.6 days and a radius of . It is located at the outer edge of the habitable zone. If confirmed, it would be the smallest habitable-zone planet discovered by TESS.
The planet TOI-715 b could be more closely scrutinized by the James Webb Space Telescope for confirming the existence of an atmosphere. If TOI-715 b is an ocean planet, its atmosphere would be more prominent and easier to detect than that of a massive, dense, dry planet. The planet can also be characterized with precise radial velocity measurements and transmission spectroscopy, and detailed follow-up research of this planet can help the understanding of the formation of small and close-in planets.
The planetary system of TOI-715 is located relatively close to Earth, at a distance of 137 light-years.
Conservative habitable zone
The concept of "conservative habitable zone" was defined by Koparappu et al. in 2014. It is the region where the planet receives insolation equivalent to 0.42 to 0.842 times the insolation received by the Earth from the Sun. As both planets have insolations equivalent to S🜨 and S🜨 respectively, they are located inside the conservative habitable zone.
References
Volans
M-type main-sequence stars
Planetary systems with one confirmed planet
2MASS objects
TIC objects | TOI-715 | [
"Astronomy"
] | 686 | [
"Volans",
"Constellations"
] |
76,008,401 | https://en.wikipedia.org/wiki/FEM-1689 | FEM-1689 is a drug which acts as a potent and selective sigma-2 receptor ligand with a binding affinity of 11 nM, and was developed for the treatment of neuropathic pain.
References
Tertiary amines
Sigma agonists | FEM-1689 | [
"Chemistry"
] | 49 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
76,008,718 | https://en.wikipedia.org/wiki/NGC%207421 | NGC 7421 is a barred spiral galaxy in the southern constellation of Grus. It was discovered by English astronomer John Herschel on August 30, 1834. In Danish astronomer J. L. E. Dreyer's New General Catalogue of Nebulae and Clusters of Stars it was described as: considerably bright, large, very little extended, gradually pretty much brighter middle, and partially resolved. NGC 7421 is located at an estimated distance of from the Sun. It is a member of the IC 1459 galaxy group.
In the Third Reference Catalogue of Bright Galaxies, NGC 7421 was assigned a morphological classification of SB(rs)bc, which indicates a barred spiral galaxy (SB) with a transitional ring structure (rs) and moderately wound spiral arms (bc). The galactic plane is inclined at an angle of 36.2° to the plane of the sky, with the major axis aligned along a position angle of 80.6°. It displays an asymmetric morphology, which shows up in a lopsided optical appearance and in the distribution of CO and neutral hydrogen atoms. New stars are forming at a rate of ·yr−1. The gas fraction and star formation rate is significantly lower than normal, suggesting an interaction with the external environment.
The western boundary of this galaxy resembles a bow shock that suggests an interaction with the intracluster medium. Radio mapping of neutral hydrogen by the ATCA displays an extended wake to the north and south of the galaxy, supporting this hypothesis. A past tidal interaction may be needed to explain the asymmetry of this galaxy. A candidate galaxy is NGC 7418, which is located at an angular separation of . However, there are no tidal tails visible from such an encounter. The total mass of the neutral hydrogen in this galaxy is .
The type II supernova SN 2023abdg was observed in association with this galaxy. It was discovered on December 12, 2023, by ATLAS.
See also
NGC 4654, a galaxy in Virgo with a similar appearance
References
Barred spiral galaxies
Grus (constellation)
7241
18340830
Discoveries by John Herschel
70083
-06-50-015 | NGC 7421 | [
"Astronomy"
] | 439 | [
"Grus (constellation)",
"Constellations"
] |
76,010,291 | https://en.wikipedia.org/wiki/Fetal%20valproate%20spectrum%20disorder | Fetal valproate spectrum disorder (FVSD), previously known as fetal valproate syndrome (FVS), is a rare disease caused by prenatal exposure to valproic acid (VPA), a medication commonly used to treat epilepsy, bipolar disorder, and migraines. This exposure can lead to a range of neurodevelopmental and physical symptoms, including cognitive impairments, developmental delays, autism spectrum disorder (ASD), attention deficit hyperactivity disorder (ADHD), and congenital malformations.
Overview
Valproate causes birth defects; exposure during pregnancy is associated with about three times as many major abnormalities as usual, mainly spina bifida with the risks being related to the strength of medication used and use of more than one drug. "Fetal Valproate Syndrome" (FVS) has been used to refer to the effects of valproate exposure in utero. However, similar to the discussion about the adverse effect of exposure to alcohol in utero ("fetal alcohol spectrum disorder"), a 2019 study proposed the term "Fetal valproate spectrum disorder" (FVSD) because valproate exposure can lead to a wide range of possible presentations, which can be influenced by various factors (including dosage and timing of exposure). The dysmorphic features associated with VPA exposure can be subtle and age-dependent, making it challenging to designate individuals as having the characteristic dysmorphism or not, especially for those with limited expertise in the area. While the presence of typical facial dysmorphism is suggestive of the condition, it is not required for diagnosis. This change in terminology to FVSD would benefit individuals affected by the neurodevelopmental effects of VPA exposure without significant malformations, since they can experience impairments in their everyday functioning similar to those with classical FVS. Characteristics of valproate syndrome may include facial features that tend to evolve with age, including a triangle-shaped forehead, tall forehead with bifrontal narrowing, epicanthic folds, medial deficiency of eyebrows, flat nasal bridge, broad nasal root, anteverted nares, shallow philtrum, long upper lip and thin vermillion borders, thick lower lip and small downturned mouth. While developmental delay is usually associated with altered physical characteristics (dysmorphic features), this is not always the case.
Children of mothers taking valproate during pregnancy are at risk for lower IQs. Maternal valproate use during pregnancy increased the probability of autism in the offspring compared to mothers not taking valproate from 1.5% to 4.4%. A 2005 study found rates of autism among children exposed to sodium valproate before birth in the cohort studied were 8.9%. The normal incidence for autism in the general population in 2018 was estimated at 1 in 44 (2.3%). An updated March 2023 report estimates the number increased to 1 in 36 in 2020 (approximately 4% of boys and 1% of girls). A 2009 study found that the 3-year-old children of pregnant women taking valproate had an IQ nine points lower than that of a well-matched control group. However, further research in older children and adults is needed.
Sodium valproate has been associated with paroxysmal tonic upgaze of childhood, also known as Ouvrier–Billson syndrome, from childhood or fetal exposure. This condition resolved after discontinuing valproate therapy.
Women who intend to become pregnant should switch to a different medication if possible or decrease their dose of valproate. Women who become pregnant while taking valproate should be warned that it causes birth defects and cognitive impairment in the newborn, especially at high doses (although valproate is sometimes the only drug that can control seizures, and seizures in pregnancy could have worse outcomes for the fetus than exposure to valproate). Studies have shown that taking folic acid supplements can reduce the risk of congenital neural tube defects. The use of valproate for migraine or bipolar disorder during pregnancy is contraindicated in the European Union, Australia, New Zealand, the UK and the United States, and the medicines are not recommended for epilepsy during pregnancy unless there is no other effective treatment available.
Paternal exposure
A 2023 retrospective study of Norway, Denmark, and Sweden found a significantly increased risk of neurodevelopmental disabilities in the children of fathers exposed to valproate up to 3 months prior to conception, compared to offspring paternally exposed to lamotrigine/levetiracetam.
This led the EMA to recommend "the need to consider effective contraception, while using valproate and for at least 3 months after treatment discontinuation. Male patients should not donate sperm during treatment and for at least 3 months after treatment discontinuation."
See also
Syndromic autism
Fetal alcohol spectrum disorder
References
Syndromic autism
Congenital malformation due to exogenous toxicity | Fetal valproate spectrum disorder | [
"Environmental_science"
] | 1,024 | [
"Toxicology",
"Congenital malformation due to exogenous toxicity"
] |
76,010,471 | https://en.wikipedia.org/wiki/Derrick%20Brown%20%28computer%20scientist%29 | Derrick Brown was an American Computer Scientist. Brown helped create "the Black equivalent of the original Yahoo index", called Universal Black Pages.
Brown was born in Elloree, South Carolina in 1969.
References
Living people
Computer scientists
People from Elloree, South Carolina
1969 births | Derrick Brown (computer scientist) | [
"Technology"
] | 57 | [
"Computer science",
"Computer scientists"
] |
76,010,836 | https://en.wikipedia.org/wiki/Exidia%20crenata | Exidia crenata is a species of fungus in the family Auriculariaceae. It has the English name of amber jelly roll. Basidiocarps (fruit bodies) are gelatinous, brown to orange-brown, and turbinate (top-shaped). It typically grows on dead attached twigs and branches of broadleaved trees and is found in North America.
Taxonomy
The species was originally described from North Carolina in 1822 by German-American mycologist Lewis David de Schweinitz as Tremella crenata. It was transferred to the genus Exidia by Fries in the same year. Exidia crenata was widely considered a synonym of the European Exidia recisa until molecular research, based on cladistic analysis of DNA sequences, showed that the American species is distinct.
Description
The gelatinous fruit bodies are amber, wide, and thick. They can be translucent and tend to be moist and/or glossy. The spore print is white.
Similar species
Similar species include E. recisa and members of Auricularia and Phaeotremella.
Habitat and distribution
Exidia crenata is a wood-rotting species, typically found on dead attached twigs and branches of broadleaf trees, particularly oak. It is widely distributed in eastern North America, where it can be found from September through May, thriving in winter.
References
Auriculariales
Fungi described in 1822
Fungi of North America
Fungus species
Taxa named by Lewis David de Schweinitz | Exidia crenata | [
"Biology"
] | 306 | [
"Fungi",
"Fungus species"
] |
76,012,741 | https://en.wikipedia.org/wiki/Gate%20lice | Gate lice is a pejorative term used to describe a phenomenon observed among air travelers where passengers gather in front of boarding gates before their designated boarding time. The term has gained recognition within the community of frequent flyers, particularly on platforms such as Flyertalk. This phenomenon may make the boarding process more cumbersome. For instance, it can lead to congestion, longer wait times for those who have prioritized boarding, and confusion. To avoid behaving in this manner, it is recommended to stay in one's seat until one's boarding zone is called.
Contributing factors
Gate lice behavior may stem from various contributing factors. Some attribute it to the inexperience of certain travelers who may not fully comprehend airline boarding procedures. Additionally, elite fliers with priority boarding privileges board early, forming clusters in front of the gate and contributing to congestion. Airport gate design can also play a role; for example, gate layouts at O'Hare International Airport are conducive to congestion. Baggage fees may be another factor, as some passengers seek to board early to secure overhead bin space and avoid checked-bag fees. Passengers may also seek overhead bin space to avoid lost luggage or to store items they need during the flight.
Psychological factors may also play a role. When people see other people crowding the boarding area, there may be a social tendency to move towards conformity. Also, the overhead bin space may be viewed as a limited resource leading to competition. The underlying uncertainty and competition may lead to anxiety and hostility. Waiting in line may also help bring a sense of control as well as relieve anxiety.
Following the COVID-19 pandemic, the phenomenon has increased possibly as travelers have become more anxious.
Industry response
Some airlines have implemented measures to address the challenges posed by gate lice. This includes the creation of dedicated lanes for elite fliers and the removal of special pre-boarding privileges for families with small children. Various airlines, such as United, Continental, Delta, Northwest, and Southwest, have introduced priority boarding programs catering to specific customer groups.
As of October 2024, American Airlines was testing a program in several U.S. airports that alerts gate agents to passengers who attempt to board before their assigned boarding group. The system creates an audible signal when the passenger's boarding pass is scanned before their boarding group is called.
References
Airports
Travel | Gate lice | [
"Physics"
] | 502 | [
"Physical systems",
"Transport",
"Travel"
] |
76,013,518 | https://en.wikipedia.org/wiki/George%20N.%20Phillips | George N. Phillips, Jr. is a biochemist, researcher, and academic. He is the Ralph and Dorothy Looney Professor of Biochemistry and Cell Biology at Rice University, where he also serves as Associate Dean for Research at the Wiess School of Natural Sciences and as a professor of chemistry. Additionally, he holds the title of professor emeritus of biochemistry at the University of Wisconsin-Madison.
Phillips' research is primarily centered on protein structure, protein dynamics, and computational biology, with a specific emphasis on understanding the correlation between the dynamics of proteins and their biological functions. He has authored book chapters, and is an editor for the Handbook of Proteins: Structure, Function and Methods Volume 2. He is the recipient of the Arnold O. Beckman Research Award, the American Heart Association's Established Investigator Award, and the Vilas Associate Award.
Phillips is an Elected Fellow of the Biophysical Society, the American Crystallographic Association, and the American Association for the Advancement of Science. He served as president and vice-president of the American Crystallographic Association from 2011 to 2013. He also holds the position of Editor-in-Chief for Structural Dynamics with the AIP Press and serves as an Associate Editor for Critical Reviews in Biochemistry and Molecular Biology.
Education
Phillips obtained his bachelor's degree in Biochemistry and Chemistry from Rice University in 1974 and followed it with a Ph.D. in biochemistry from the same institution in 1976. He also held a Robert A. Welch Predoctoral Fellowship from 1974 to 1976 and received a Postdoctoral Fellowship from the National Institutes of Health in 1977 as well as a Research Fellowship from the Medical Foundation in 1980.
Career
Phillips started his academic career as an assistant professor at the University of Illinois Urbana-Champaign, followed by his appointment as a professor of biochemistry at Rice University in 1987. In 1993, he assumed the position of Rice Scientia Lecturer, subsequently receiving the Robert A. Welch Lecturer appointment in 2001. He joined the University of Wisconsin-Madison in 2000 as a professor of Biochemistry and took on the role of professor emeritus in 2012. He has been serving as a professor of chemistry, as well as the Ralph and Dorothy Looney Professor of Biochemistry and Cell Biology at Rice University.
Research
Phillips has directed his research toward the field of computational biology, primarily exploring protein structure. In the Phillips Lab, his work has involved conducting research on the binding of oxygen and ligands to heme proteins, as well as the development of techniques for analyzing protein and nucleic acid dynamics through diffuse X-ray scattering analysis.
Protein structures
Phillips conducted various studies on protein structures and their functional implications. He examined the structural features of type 6 streptococcal M proteins, highlighting their predominantly alpha-helical coiled-coil, which demonstrates a unique conformation in bacterial surface projections. His research on the crystal structure of tropomyosin filaments proposed a model in which tropomyosin exhibited distinct conformations related to muscle contraction, suggesting a statistical mechanism for regulating muscle function.
In one of his highly cited studies, Phillips, alongside Fan Yang and Larry G. Moss, described the crystal structure of recombinant wild-type green fluorescent protein, unveiling a unique structure referred to as the "ß-can." This study also delved into the protective environment for the fluorophores within the cylinder and its applications in elucidating the effects of GFP mutants.
Phillips has utilized X-ray crystallography and various advanced spectroscopy techniques to provide details about the dynamic structural changes in proteins. He used X-ray crystallography to determine the structure of unstable intermediate caused by photodissociation of CO from myoglobin and provided insights into the dynamics and structural alterations involved in this protein reaction. In addition, his study focused on capturing the structural evolution of the protein on a picosecond timescale used time-resolved X-ray diffraction and mid-infrared spectroscopy on a myoglobin (Mb) mutant (L29F mutant) revealing conformational changes within the protein.
Heme proteins and ligand interactions
Phillips' research on heme proteins and ligand affinity has provided insights into engineering strategies for physiological functions. He explored the impact of His64 in sperm whale myoglobin on ligand affinity, shedding light on structural changes induced by ligand binding and mechanisms of ligand discrimination in myoglobin. By measuring CO binding properties in various mutants and comparing them to mutant myoglobins, he elucidated how mutations influence CO affinity. In his 1994 study, he delved into how heme proteins like myoglobin and hemoglobin differentiate between oxygen (O2) and carbon monoxide (CO) binding at the atomic level. He investigated the role of nitric oxide in physiological functions by examining the kinetics of NO-induced oxidation in myoglobins and hemoglobins revealing insights into protein engineering strategies aimed at mitigating hypertensive events.
Computational biology
Phillips' contributions to computational biology include advanced techniques for interpreting experimental data in complex chemical and biological systems. He focused on the interaction between troponin T (TnT) and tropomyosin, shedding light on the molecular mechanisms in muscle contractions. Additionally, he explored protein dynamics in crystals by using the Gaussian network model (GNM) and a crystallographic model to calculate Cα atom fluctuations in 113 proteins emphasizing the improved results obtained by considering neighboring molecules in the crystal. In a book chapter discussing ongoing advancements in experimental methods for complex chemical and biological systems, he highlighted the growing need for creative approaches and delved into the exploration of Normal Mode Analysis as a technique to address these challenges.
Awards and honors
1982 – Arnold O. Beckman Research Award, University of Illinois
1983 – Established Investigator Award, American Heart Association
2003 – Vilas Associate Award, UW-Madison
Bibliography
Books
Handbook of Proteins: Structure, Function and Methods Volume 2 (2008) ISBN 978-0470060988
Selected articles
Quillin, M. L., Arduini, R. M., Olson, J. S., & Phillips Jr, G. N. (1993). High-resolution crystal structures of distal histidine mutants of sperm whale myoglobin. Journal of molecular biology, 234(1), 140–155.
Springer, B. A., Sligar, S. G., Olson, J. S., & Phillips, G. N. J. (1994). Mechanisms of ligand recognition in myoglobin. Chemical Reviews, 94(3), 699–714.
Eich, R. F., Li, T., Lemon, D. D., Doherty, D. H., Curry, S. R., Aitken, J. F., ... & Olson, J. S. (1996). Mechanism of NO-induced oxidation of myoglobin and hemoglobin. Biochemistry, 35(22), 6976–6983.
Yang, F., Moss, L. G., & Phillips Jr, G. N. (1996). The molecular structure of green fluorescent protein. Nature biotechnology, 14(10), 1246–1251.
Schotte, F., Lim, M., Jackson, T. A., Smirnov, A. V., Soman, J., Olson, J. S., ... & Anfinrud, P. A. (2003). Watching a protein as it functions with 150-ps time-resolved x-ray crystallography. Science, 300(5627), 1944–1947.
References
Biochemists
University of Wisconsin–Madison faculty
Rice University faculty
Rice University alumni
Living people
Year of birth missing (living people) | George N. Phillips | [
"Chemistry",
"Biology"
] | 1,600 | [
"Biochemistry",
"Biochemists"
] |
76,014,273 | https://en.wikipedia.org/wiki/Ferdinand%20Giese | Ferdinand Giese (Johann Emanuel Ferdinand Giese; 13 January 1781 – 22 May 1821) was a Baltic German pharmacologist. 1817–1818 he was the rector of Tartu University.
He graduated from Erfurt University. From 1804 to 1814 he worked at Kharkiv University, and from 1814 he taught at Tartu University.
References
1781 births
1821 deaths
Academic staff of the University of Tartu
Rectors of the University of Tartu
Pharmacologists
Baltic-German people from the Russian Empire
Scientists from the Russian Empire | Ferdinand Giese | [
"Chemistry"
] | 107 | [
"Pharmacology",
"Biochemists",
"Pharmacologists"
] |
76,014,437 | https://en.wikipedia.org/wiki/HiPACT%20%28carbon%20capture%29 | HiPACT is a carbon capture and storage technology.
References
Carbon capture and storage | HiPACT (carbon capture) | [
"Engineering"
] | 17 | [
"Geoengineering",
"Carbon capture and storage"
] |
76,014,500 | https://en.wikipedia.org/wiki/C2H3NO2 | {{DISPLAYTITLE:C2H3NO2}}
The molecular formula C2H3NO2 (molar mass: 73.051 g/mol) may refer to:
Nitroethylene
Dehydroglycine
Molecular formulas | C2H3NO2 | [
"Physics",
"Chemistry"
] | 53 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
76,014,694 | https://en.wikipedia.org/wiki/Gadolinium%28III%29%20nitride | Gadolinium(III) nitride is a binary inorganic compound of gadolinium and nitrogen with the chemical formula .
Preparation
Gadolinium(III) nitride can be prepared by the direct reaction of gadolinium metal and nitrogen gas at 1600 °C and at a pressure of 1300 atm.
Properties
Physical
Gadolinium(III) nitride forms a black powder. It is isomorphous with sodium chloride, with the space group Fm3m.
Chemical
Gadolinium(III) nitride hydrolyzes in humid air to form gadolinium(III) hydroxide and ammonia. It is insoluble in water but soluble in acids.
Uses
Gadolinium(III) nitride is used as a semiconductor. It can also be used as a magnetic material, a catalyst in chemical reactions and a component in neutron converters for radiation detectors.
References
Nitrides
Gadolinium compounds
Nitrogen compounds
Rock salt crystal structure | Gadolinium(III) nitride | [
"Chemistry"
] | 204 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
76,014,795 | https://en.wikipedia.org/wiki/Liquid%20carryover | Liquid carryover refers to the unintended transport of liquids such as water, hydrocarbon condensates, compressor oil or glycol in a natural gas, hydrogen, carbon dioxide or other industrial gas pipeline or process. Ideally, only gas enters gas processing.
Understanding pipeline composition at critical points is crucial to ensure optimal efficiency and safety.
Natural gas processing aims to deliver gas suitable for transmission systems without causing operational issues in downstream pipelines, compressors, or equipment. Ideally, all dry industrial gases remain "dry" during processing. However, due to fluid dynamics complexities, gas and liquid phases may not fully separate, leading to wet gas or two-phase flows. These can occur as mist flow (tiny liquid droplets) or stratified flow (a liquid stream along the pipe wall). These conditions can significantly impact gas processing facilities' operational safety, efficiency, and lifespan.
Challenges and risks
Liquid carryover is a major concern, responsible for roughly 60% of plant failures in natural gas processing. Effective phase separation at the beginning of the processing train prevents hydrocarbons and other liquids from entering the gas treatment plant. Improper separation allows liquid carryover to contaminate the desulfurization stage, triggering foaming and fouling, leading to unplanned shutdowns and reduced gas flow.
As the gas progresses through desulfurization and dehumidification, it comes into contact with significant processing liquids. Amine-based liquids used in desulfurization to remove hydrogen sulfide (H2S) and carbon dioxide (CO2) can carry over if not properly separated, contaminating the dehumidification stage. Dehumidification utilizes a liquid desiccant, such as monoethylene glycol (MEG) or triethylene glycol (TEG), to reduce gas moisture content and meet sales gas specifications. Carryover of glycol into this process can cause issues by blocking heat exchangers or disrupting temperature control. Notably, while glycol is a common component found during pipeline pigging analysis, there's currently no method to directly determine glycol carryover besides process cameras.
The primary method for extracting natural gas liquids (NGLs) involves reducing gas temperature below its hydrocarbon dew point, separating the liquids. However, achieving temperature reduction through Joule-Thompson pressure reduction creates ideal conditions for sub-micron mist flow formation. This type of wet gas flow is particularly challenging to filter and requires specialized filtration systems. As the gas warms up, the liquids vaporize, saturating the vapor phase with respect to hydrocarbons. This can lead to liquid dropout as mist or stratified flows due to pressure and temperature drops during gas transmission.
Over time, solid and liquid accumulation at low points in the transmission system can lead to corrosion, potentially causing ruptures and failures at compressor stations.
Traditional monitoring techniques
Standards from the American Petroleum Institute (API) 14.1 and the International Organization for Standardization (ISO) EN10715 provide guidance for gas sampling for either laboratory or online analyzers of gas streams. They also offer guidelines for managing high-pressure gases to prevent liquid dropout in the sample system during pressure reduction from line pressure to atmospheric pressure. These standards aim to ensure a representative gas sample reaches the analyzer and prevent liquids from damaging it. However, wet gas or two-phase flows fall outside the scope of these standards, meaning gas analyzers can have significant errors and often miss liquid carryover events.
Impact on operations
Liquid carryover's operational inefficiencies have both immediate and long-term consequences. Foaming, requiring reduced gas flow and de-foaming chemicals, can occur. As a precaution, gas processing facilities may intentionally limit operational capacities, sacrificing optimal gas throughput. For gas processors, errors in hydrocarbon dew point and BTU determination can lead to lost revenue, pigging costs, and rectification or rebuild costs.
Transmission system
The presence of wet gas and liquid hold-up in pipelines significantly increases the risks of pipeline ruptures and shortens the lifespan of pipeline assets. To mitigate these risks, operators must increase the frequency of pipeline pigging.
Power stations
As the gas reaches the power station, the likelihood of contamination rises due to various factors. These include:
Fouling: Build-up of unwanted materials within the pipeline.
Natural gas liquids (NGLs) carryover: NGLs are heavier hydrocarbons that can condense and enter the gas stream under certain conditions.
Lubrication grease: Grease used during valve operations can inadvertently enter the gas stream.
Compressor oil leaks: Leaking compressor oil can contaminate the gas.
Iron sulfides: These compounds can form on the pipe wall and become entrained in the gas flow.
Even though some power stations preheat the fuel gas, contamination with compressor oil or glycol (if not properly vaporized) can cause several maintenance issues. These include:
Blockage and wear of fuel nozzles, leading to uneven combustion.
Hot spots on turbine blades, potentially forcing the power station offline.
Liquefied natural gas (LNG) plants
Liquid carryover in incoming natural gas feed lines can also disrupt operations at LNG plants. Molecular sieves, used to dry the gas to extremely low moisture levels, become contaminated and lose efficiency when exposed to liquid hydrocarbons. In some cases, heavy hydrocarbons, believed to be compressor oil, have reached the LNG plant's "cold box," causing pressure differentials and shortening the operational period of the LNG train.
Calorific value and flow Measurements
During periods of mixed-phase flow (containing both gas and liquid), removing liquids from the gas sample being analyzed can lead to significant errors in determining the calorific value (BTU) of the gas. This makes it difficult to obtain an accurate picture of the overall fluid stream.
Gas analyzers can only report on the portion of the fluid they are presented with. This means that measurements made at custody transfer points, where gas ownership changes hands, are unreliable when two-phase flow is present. Process camera systems offer the highest level of sensitivity to both mist flow and stratified flow, providing operators with greater certainty about gas quality and improving the accuracy of BTU or Wobbe Index measurements.
When liquid carryover is not specifically monitored, operators remain unaware of both continuous and occasional liquid events that significantly affect BTU calculations. This leads to inaccurate gas quality measurements.
Process camera systems have observed that when liquid events occur as stratified flow, debris from the pipe wall (such as iron sulfide and scale) can accumulate on the bottom of the pipe. The high-velocity gas stream above the liquid layer removes lighter liquids, leaving behind a sludge that eventually dries into a stationary material. This material can reduce the pipe diameter.
If this scenario occurs at a custody transfer point, flow computers might use an incorrect pipe diameter in their calculations. Even with a properly calibrated flow meter, small amounts of debris (2–3 mm) can cause a significant offset (0.2%) in the measurement. To ensure accurate fiscal measurements, these potential errors must be continuously monitored and factored into the uncertainty budget for all flow meters.
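A rough geometric check of the quoted figure: the fraction of the flow area blocked by a few millimetres of settled debris can be computed as a circular segment. The pipe diameter below is an assumed example value, not one given in this entry.

import math

D = 0.300  # assumed pipe internal diameter in metres (example value)
h = 0.003  # debris layer depth, 3 mm (upper end of the 2-3 mm quoted above)
R = D / 2

# Area of the circular segment occupied by the debris layer at the pipe bottom.
theta = 2 * math.acos((R - h) / R)
blocked = 0.5 * R * R * (theta - math.sin(theta))
total = math.pi * R * R

print(round(blocked / total * 100, 2))  # ~ 0.17 % of the flow area, of the order of the 0.2 % offset quoted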
The Sarbanes-Oxley Act mandates that flow uncertainty budgets for fiscal flow measurements account for potential errors. Unexpected liquids in dry gas systems can substantially increase the uncertainty budget associated with both flow and BTU measurements.
References
Gas-liquid separation
Chemical accident
Chemical industry | Liquid carryover | [
"Chemistry",
"Environmental_science"
] | 1,541 | [
"Chemical accident",
"Separation processes by phases",
"Environmental chemistry",
"nan",
"Gas-liquid separation"
] |
76,015,102 | https://en.wikipedia.org/wiki/Invariant%20sigma-algebra | In mathematics, especially in probability theory and ergodic theory, the invariant sigma-algebra is a sigma-algebra formed by sets which are invariant under a group action or dynamical system. It can be interpreted as of being "indifferent" to the dynamics.
The invariant sigma-algebra appears in the study of ergodic systems, as well as in theorems of probability theory such as de Finetti's theorem and the Hewitt-Savage law.
Definition
Strictly invariant sets
Let be a measurable space, and let be a measurable function. A measurable subset is called invariant if and only if .
Equivalently, if for every , we have that if and only if .
More generally, let be a group or a monoid, let be a monoid action, and denote the action of on by .
A subset is -invariant if for every , .
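In symbols, and using representative notation (the original formulas did not survive extraction, so the symbols below are illustrative rather than the source's own), the definitions above read as follows.

% Representative notation; the symbols are illustrative, not the article's original ones.
Let $(X,\mathcal{F})$ be a measurable space and $T\colon X\to X$ a measurable map.
A set $A\in\mathcal{F}$ is strictly invariant if
\[
  T^{-1}(A) = A ,
\]
equivalently, if for every $x \in X$, $x \in A$ if and only if $T(x) \in A$. The strictly invariant sets form the sigma-algebra
\[
  \mathcal{I} \;=\; \{\, A\in\mathcal{F} \;:\; T^{-1}(A)=A \,\}.
\]
For a monoid $M$ acting measurably on $X$ by maps $\alpha_m$, a set $A$ is invariant if $\alpha_m^{-1}(A)=A$ for every $m\in M$.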
Almost surely invariant sets
Let be a measurable space, and let be a measurable function. A measurable subset (event) is called almost surely invariant if and only if its indicator function is almost surely equal to the indicator function .
Similarly, given a measure-preserving Markov kernel , we call an event almost surely invariant if and only if for almost all .
As for the case of strictly invariant sets, the definition can be extended to an arbitrary group or monoid action.
In many cases, almost surely invariant sets differ by invariant sets only by a null set (see below).
Sigma-algebra structure
Both strictly invariant sets and almost surely invariant sets are closed under taking countable unions and complements, and hence they form sigma-algebras.
These sigma-algebras are usually called either the invariant sigma-algebra or the sigma-algebra of invariant events, both in the strict case and in the almost sure case, depending on the author.
For the purpose of the article, let's denote by the sigma-algebra of strictly invariant sets, and by the sigma-algebra of almost surely invariant sets.
Properties
Given a measure-preserving function , a set is almost surely invariant if and only if there exists a strictly invariant set such that .
Given measurable functions and , we have that is invariant, meaning that , if and only if it is -measurable. The same is true replacing with any measurable space where the sigma-algebra separates points.
An invariant measure is (by definition) ergodic if and only if for every invariant subset , or .
Examples
Exchangeable sigma-algebra
Given a measurable space , denote by be the countable cartesian power of , equipped with the product sigma-algebra.
We can view as the space of infinite sequences of elements of ,
Consider now the group of finite permutations of , i.e. bijections such that only for finitely many .
Each finite permutation acts measurably on by permuting the components, and so we have an action of the countable group on .
An invariant event for this sigma-algebra is often called an exchangeable event or symmetric event, and the sigma-algebra of invariant events is often called the exchangeable sigma-algebra.
A random variable on is exchangeable (i.e. permutation-invariant) if and only if it is measurable for the exchangeable sigma-algebra.
The exchangeable sigma-algebra plays a role in the Hewitt-Savage zero-one law, which can be equivalently stated by saying that for every probability measure on , the product measure on assigns to each exchangeable event probability either zero or one.
Equivalently, for the measure , every exchangeable random variable on is almost surely constant.
It also plays a role in the de Finetti theorem.
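In representative notation (again illustrative, since the original symbols were lost in extraction), the finite-permutation action and the Hewitt-Savage statement can be written as follows.

% Representative notation for the exchangeable sigma-algebra.
A finite permutation $\pi$ of $\mathbb{N}$ acts on $X^{\mathbb{N}}$ by permuting coordinates,
\[
  \pi \cdot (x_1, x_2, x_3, \dots) = \bigl(x_{\pi^{-1}(1)}, x_{\pi^{-1}(2)}, x_{\pi^{-1}(3)}, \dots\bigr),
\]
and the exchangeable sigma-algebra consists of the events invariant under every such $\pi$. The Hewitt--Savage zero--one law then states that for any probability measure $p$ on $X$ and any exchangeable event $A$,
\[
  p^{\otimes\mathbb{N}}(A) \in \{0, 1\}.
\]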
Shift-invariant sigma-algebra
As in the example above, given a measurable space , consider the countably infinite cartesian product .
Consider now the shift map given by mapping to .
An invariant event for this sigma-algebra is called a shift-invariant event, and the resulting sigma-algebra is sometimes called the shift-invariant sigma-algebra.
This sigma-algebra is related to the one of tail events, which is given by the following intersection,
where is the sigma-algebra induced on by the projection on the -th component .
Every shift-invariant event is a tail event, but the converse is not true.
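Using the same representative notation, the shift map and the tail sigma-algebra described above can be sketched as follows; the symbols are illustrative, not the source's original ones.

% Representative notation for the shift map and the tail sigma-algebra.
On $X^{\mathbb{N}}$, the shift map $T$ acts by
\[
  T(x_1, x_2, x_3, \dots) = (x_2, x_3, x_4, \dots),
\]
so an event $A$ is shift-invariant when $T^{-1}(A) = A$. The tail sigma-algebra is
\[
  \mathcal{T}_{\mathrm{tail}} \;=\; \bigcap_{n \ge 1} \sigma\bigl(\pi_k : k \ge n\bigr),
\]
where $\pi_k$ denotes projection onto the $k$-th component; every shift-invariant event lies in $\mathcal{T}_{\mathrm{tail}}$, but not conversely.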
See also
Invariant set
De Finetti theorem
Hewitt-Savage zero-one law
Exchangeable random variables
Invariant measure
Ergodic system
Citations
References
Algebras
Probability theory
Ergodic theory | Invariant sigma-algebra | [
"Mathematics"
] | 904 | [
"Mathematical structures",
"Algebras",
"Ergodic theory",
"Algebraic structures",
"Dynamical systems"
] |
76,016,053 | https://en.wikipedia.org/wiki/Moral%20geography | Moral geographies (a term coined by Felix Driver) are, according to David Smith (2000), the studying of human geography with a normative emphasis. The kind of questions that are examined including asking whether distance from a phenomenon lessons one's duty, whether there is a substantial difference between private spaces and public spaces and analysing which moral positions are personal, which are societal, which are absolute and which are relative. One key question is how to respect difference whilst recognizing universal rights.
References
Human geography | Moral geography | [
"Environmental_science"
] | 105 | [
"Environmental social science",
"Human geography"
] |
76,016,315 | https://en.wikipedia.org/wiki/Parietinic%20acid | Parietinic acid is an organic compound in the structural class of chemicals known as anthraquinones. It is found in many species of the lichen family Teloschistaceae. The substance was first reported in the literature by the German chemist Walter Eschrich in 1958.
Occurrence
Originally isolated from the lichen Xanthoria parietina, it has since been identified in many lichens of the family Teloschistaceae. In 1970, Johan Santesson proposed a possible biogenetic relationship among the anthraquinone compounds commonly found in Caloplaca. According to this scheme, emodin is methylated to give parietin, which then undergoes three successive oxidations, sequentially forming fallacinol, fallacinal, and then parietinic acid. A chemosyndrome is a set of biosynthetically related compounds produced by a lichen. In 2002, Ulrik Søchting and Patrik Frödén identified chemosyndrome A, the most common chemosyndrome in the genus Teloschistes and in the entire family Teloschistaceae, which features parietin as the main substance and smaller proportions of teloschistin, fallacinal, parietinic acid, and emodin.
Properties
In its purified form, parietinic acid exists as orange needles with a melting point of . Its ultraviolet spectrum has two peaks of maximum absorption (λmax) at 325 and 435 nm, and its infrared spectrum has two peaks at 1629 and 1700 cm−1.
Parietinic acid was shown to have antifungal activity and antibacterial activity in laboratory tests.
References
Anthraquinones
Lichen products | Parietinic acid | [
"Chemistry"
] | 356 | [
"Natural products",
"Lichen products"
] |
76,018,398 | https://en.wikipedia.org/wiki/Cyberfeminism%20Index | The Cyberfeminism Index is an index of subjects related to Cyberfeminism by Mindy Seu. In 2019, it began as a Google Sheet that was then transposed onto a website. The index was published as a book in 2023.
References
External links
Official website
Feminism and the arts
Internet culture
Transhumanism | Cyberfeminism Index | [
"Technology",
"Engineering",
"Biology"
] | 69 | [
"Genetic engineering",
"Transhumanism",
"Ethics of science and technology"
] |
76,019,869 | https://en.wikipedia.org/wiki/Young%20subgroup | In mathematics, the Young subgroups of the symmetric group are special subgroups that arise in combinatorics and representation theory. When is viewed as the group of permutations of the set , and if is an integer partition of , then the Young subgroup indexed by is defined by
where denotes the set of permutations of and denotes the direct product of groups. Abstractly, is isomorphic to the product . Young subgroups are named for Alfred Young.
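As a worked example (with notation supplied here for illustration), take $n = 5$ and the partition $\lambda = (3, 2)$; the corresponding Young subgroup of $S_5$ is
$$ S_\lambda = S_{\{1,2,3\}} \times S_{\{4,5\}} \cong S_3 \times S_2, $$
a subgroup of order $3! \cdot 2! = 12$.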
When is viewed as a reflection group, its Young subgroups are precisely its parabolic subgroups. They may equivalently be defined as the subgroups generated by a subset of the adjacent transpositions .
In some cases, the name Young subgroup is used more generally for the product , where is any set partition of (that is, a collection of disjoint, nonempty subsets whose union is ). This more general family of subgroups consists of all the conjugates of those under the previous definition. These subgroups may also be characterized as the subgroups of that are generated by a set of transpositions.
References
Further reading
Representation theory
Combinatorics
Permutation groups
Finite reflection groups | Young subgroup | [
"Mathematics"
] | 239 | [
"Representation theory",
"Fields of abstract algebra",
"Discrete mathematics",
"Combinatorics"
] |
76,020,377 | https://en.wikipedia.org/wiki/Quantum%20Chemistry%20Program%20Exchange | The Quantum Chemistry Program Exchange (QCPE) was an organization located at Indiana University Bloomington from 1963 to 2007 that was devoted to the distribution of computational chemistry software before electronic file transfer on the internet became a widely available method of software distribution. The QCPE was originally founded by Prof. Harrison Shull and was managed by Richard Counts for most of its existence. Financial support for the QCPE was originally provided by the Air Force Office of Scientific Research until 1969, and funding continued under an interim grant from the National Science Foundation in 1971 until it became financially self-sustaining in 1973.
The QCPE maintained a catalog of software that expanded through regular contributions from chemistry software developers. New software contributions were announced through a quarterly QCPE Newsletter that was eventually formalized into a QCPE Bulletin in 1981, which allowed for software citations to numbered software entries in the Bulletin that announced their release. QCPE members paid for subscriptions to the Newsletter/Bulletin and additionally paid a processing and delivery fee to receive software from the QCPE catalog. The software distribution options expanded alongside technological development, starting from punched cards and magnetic tape drives delivered by mail, before adopting floppy disks and CD-ROMs, and eventually electronic delivery by FTP. The QCPE grew rapidly in its early days, with about 400 members and a catalog of nearly 100 programs after its first 3 years of operation. In the 1980s and early 1990s, the QCPE also organized annual summer workshops to train scientists in the use of its more popular software. At its peak in the mid-1980s, the QCPE had over 2000 members, over 400 programs available, and an annual income near $400,000.
The most visible legacy of the QCPE is the thousands of software citations to the QCPE Bulletin in scientific publications over 4 decades, with a peak of over 1000 per year in the early 1990s. The most popular software in the early days of the QCPE was GAUSSIAN (QCPE #236, #368, #406) before it was removed from the QCPE catalog to become commercial software, and the most popular software in its later years was MOPAC (QCPE #455, #688, #689).
Other popular software distributed by the QCPE included POLYATOM (QCPE #47, #199), CNDO/2 (QCPE #91), AMPAC (QCPE #506), CRYSTAL (QCPE #577), Molden (QCPE #619), and MM2 / MM3 (QCPE #690-#698).
References
Software distribution
Indiana University Bloomington
Organizations established in 1963
Organizations disestablished in 2007
Computational chemistry software
1963 establishments in Indiana
2007 disestablishments in Indiana | Quantum Chemistry Program Exchange | [
"Chemistry"
] | 561 | [
"Computational chemistry",
"Computational chemistry software",
"Chemistry software"
] |
64,398,432 | https://en.wikipedia.org/wiki/Hall-type%20theorems%20for%20hypergraphs | In the mathematical field of graph theory, Hall-type theorems for hypergraphs are several generalizations of Hall's marriage theorem from graphs to hypergraphs. Such theorems were proved by Ofra Kessler, Ron Aharoni, Penny Haxell, Roy Meshulam, and others.
Preliminaries
Hall's marriage theorem provides a condition guaranteeing that a bipartite graph admits a perfect matching, or - more generally - a matching that saturates all vertices of . The condition involves the number of neighbors of subsets of . Generalizing Hall's theorem to hypergraphs requires a generalization of the concepts of bipartiteness, perfect matching, and neighbors.
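For reference, the classical condition being generalized can be stated as follows (the letters $X$ and $Y$ are supplied here for illustration, since the article's own symbols are not displayed): a bipartite graph $G$ with parts $X$ and $Y$ has a matching saturating $Y$ if and only if
$$ |N_G(S)| \ge |S| \quad \text{for every } S \subseteq Y, $$
where $N_G(S)$ is the set of vertices of $X$ adjacent to at least one vertex of $S$.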
1. Bipartiteness: The notion of a bipartiteness can be extended to hypergraphs in many ways (see bipartite hypergraph). Here we define a hypergraph as bipartite if it is exactly 2-colorable, i.e., its vertices can be 2-colored such that each hyperedge contains exactly one yellow vertex. In other words, can be partitioned into two sets and , such that each hyperedge contains exactly one vertex of . A bipartite graph is a special case in which each edge contains exactly one vertex of and also exactly one vertex of ; in a bipartite hypergraph, each hyperedge contains exactly one vertex of but may contain zero or more vertices of . For example, the hypergraph with and is bipartite with and
2. Perfect matching: A matching in a hypergraph is a subset of , such that every two hyperedges of are disjoint. If is bipartite with parts and , then the size of each matching is obviously at most . A matching is called -perfect (or -saturating) if its size is exactly . In other words: every vertex of appears in exactly one hyperedge of . This definition reduces to the standard definition of a -perfect matching in a bipartite graph.
3. Neighbors: Given a bipartite hypergraph and a subset of , the neighbors of are the subsets of that share hyperedges with vertices of . Formally:
For example, in the hypergraph from point 1, we have: and and Note that, in a bipartite graph, each neighbor is a singleton - the neighbors are just the vertices of that are adjacent to one or more vertices of . In a bipartite hypergraph, each neighbor is a set - the neighbors are the subsets of that are "adjacent" to one or more vertices of .
Since contains only subsets of , one can define a hypergraph in which the vertex set is and the edge set is . We call it the neighborhood-hypergraph of and denote it:
Note that, if is a simple bipartite graph, the neighborhood-hypergraph of every contains just the neighbors of in , each of which with a self-loop.
Insufficiency of Hall's condition
Hall's condition requires that, for each subset of , the set of neighbors of is sufficiently large. With hypergraphs this condition is insufficient. For example, consider the tripartite hypergraph with edges: { {1, a, A}, {2, a, B} }. Let . Every vertex in has a neighbor, and itself has two neighbors: But there is no -perfect matching since both edges overlap.
One could try to fix it by requiring that contain at least disjoint edges, rather than just edges. In other words: should contain a matching of size at least . The largest size of a matching in a hypergraph is called its matching number and denoted by (thus admits a -perfect matching if and only if ). However, this fix is insufficient, as shown by the following tripartite hypergraph: { {1, a, A}, {1, b, B}, {2, a, B}, {2, b, A} }. Let . Again every vertex in has a neighbor, and itself has four neighbors: Moreover, since admits a matching of size 2, e.g. However, H does not admit a -perfect matching, since every hyperedge that contains 1 overlaps every hyperedge that contains 2.
Thus, to guarantee a perfect matching, a stronger condition is needed. Various such conditions have been suggested.
Aharoni's conditions: largest matching
Let be a bipartite hypergraph (as defined in 1. above), in which the size of every hyperedge is exactly , for some integer . Suppose that, for every subset of , the following inequality holds:
In words: the neighborhood-hypergraph of admits a matching larger than . Then admits a -perfect matching (as defined in 2. above).
This was first conjectured by Aharoni. It was proved with Ofra Kessler for bipartite hypergraphs in which and for . It was later proved for all -uniform hypergraphs.
In simple graphs
For a bipartite simple graph , and Aharoni's condition becomes:
Moreover, the neighborhood-hypergraph (as defined in 3. above) contains just singletons - a singleton for every neighbor of . Since singletons do not intersect, the entire set of singletons is a matching. Hence, the number of neighbors of . Thus, Aharoni's condition becomes, for every subset of :
This is exactly Hall's marriage condition.
Tightness
The following example shows that the factor cannot be improved. Choose some integer . Let be the following -uniform bipartite hypergraph:
is the union of (where is the set of hyperedges containing vertex ), and:
For each in contains disjoint hyperedges of size :
contains hyperedges of size :
Note that edge in meets all edges in .
This does not admit a -perfect matching, since every hyperedge that contains intersects all hyperedges in for some .
However, every subset of satisfies the following inequality
since contains at least hyperedges, and they are all disjoint.
Fractional matchings
The largest size of a fractional matching in is denoted by . Clearly . Suppose that, for every subset of , the following weaker inequality holds:
It was conjectured that in this case, too, admits a -perfect matching. This stronger conjecture was proved for bipartite hypergraphs in which .
Later it was proved that, if the above condition holds, then admits a -perfect fractional matching, i.e., . This is weaker than having a -perfect matching, which is equivalent to .
Haxell's condition: smallest transversal
A transversal (also called vertex-cover or hitting-set) in a hypergraph is a subset of such that every hyperedge in contains at least one vertex of . The smallest size of a transversal in is denoted by .
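In symbols (notation supplied here for illustration), the two quantities used throughout these conditions are
$$ \nu(H) = \max\{\,|M| : M \text{ is a set of pairwise-disjoint hyperedges of } H\,\}, \qquad \tau(H) = \min\{\,|S| : S \subseteq V \text{ meets every hyperedge of } H\,\}, $$
and every hypergraph satisfies $\nu(H) \le \tau(H)$, since a transversal must contain a distinct vertex for each edge of a matching.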
Let be a bipartite hypergraph in which the size of every hyperedge is at most , for some integer . Suppose that, for every subset of , the following inequality holds:
In words: the neighborhood-hypergraph of has no transversal of size or less.
Then, admits a -perfect matching (as defined in 2. above).
In simple graphs
For a bipartite simple graph so , and Haxell's condition becomes:
Moreover, the neighborhood-hypergraph (as defined in 3. above) contains just singletons - a singleton for every neighbor of . In a hypergraph of singletons, a transversal must contain all vertices. Hence, the number of neighbors of . Thus, Haxell's condition becomes, for every subset of :
This is exactly Hall's marriage condition. Thus, Haxell's theorem implies Hall's marriage theorem for bipartite simple graphs.
Tightness
The following example shows that the factor cannot be improved. Let be an -uniform bipartite hypergraph with:
[so ].
where:
[so contains hyperedges].
for [so contains hyperedges].
This does not admit a -perfect matching, since every hyperedge that contains 0 intersects every hyperedge that contains 1.
However, every subset of satisfies the following inequality:
It is only slightly weaker (by 1) than required by Haxell's theorem. To verify this, it is sufficient to check the subset , since it is the only subset for which the right-hand side is larger than 0. The neighborhood-hypergraph of is where:
for
One can visualize the vertices of as arranged on an grid. The hyperedges of are the rows. The hyperedges of are the selections of a single element in each row and each column. To cover the hyperedges of we need vertices - one vertex in each row. Since all columns are symmetric in the construction, we can assume that we take all the vertices in column 1 (i.e., for each in ). Now, since contains all columns, we need at least additional vertices - one vertex for each column. All in all, each transversal requires at least vertices.
Algorithms
Haxell's proof is not constructive. However, Chidambaram Annamalai proved that a perfect matching can be found efficiently under a slightly stronger condition.
For every fixed choice of and , there exists an algorithm that finds a -perfect matching in every -uniform bipartite hypergraph satisfying, for every subset of :
In fact, in any -uniform hypergraph, the algorithm finds either a -perfect matching, or a subset violating the above inequality.
The algorithm runs in time polynomial in the size of , but exponential in and .
It is an open question whether there exists an algorithm with run-time polynomial in either or (or both).
Similar algorithms have been applied for solving problems of fair item allocation, in particular the Santa Claus problem.
Aharoni–Haxell conditions: smallest pinning sets
We say that a set of edges pins another set of edges if every edge in intersects some edge in . The width of a hypergraph , denoted , is the smallest size of a subset of that pins . The matching width of a hypergraph , denoted , is the maximum, over all matchings in , of the minimum size of a subset of that pins . Since contains all matchings in , the width of H is obviously at least as large as the matching-width of .
Aharoni and Haxell proved the following condition:Let be a bipartite hypergraph. Suppose that, for every subset of , the following inequality holds:[in other words: contains a matching such that at least disjoint edges from are required for pinning ]. Then, admits a -perfect matching.They later extended this condition in several ways, which were later extended by Meshulam as follows:Let be a bipartite hypergraph. Suppose that, for every subset of , at least one of the following conditions hold: or
Then, admits a -perfect matching.
In simple graphs
In a bipartite simple graph, the neighborhood-hypergraph contains just singletons - a singleton for every neighbor of . Since singletons do not intersect, the entire set of neighbors is a matching, and its only pinning-set is the set itself, i.e., the matching-width of is , and its width is the same:
Thus, both the above conditions are equivalent to Hall's marriage condition.
Examples
We consider several bipartite graphs with and The Aharoni–Haxell condition trivially holds for the empty set. It holds for subsets of size 1 if and only if each vertex in is contained in at least one edge, which is easy to check. It remains to check the subset itself.
Here Its matching-width is at least 2, since it contains a matching of size 2, e.g. which cannot be pinned by any single edge from . Indeed, H admits a -perfect matching, e.g.
Here Its matching-width is 1: it contains a matching of size 2, e.g. but this matching can be pinned by a single edge, e.g. The other matching of size 2 is but it too can be pinned by the single edge While is larger than in example 1, its matching-width is smaller - in particular, it is less than . Hence, the Aharoni–Haxell sufficient condition is not satisfied. Indeed, does not admit a -perfect matching.
Here, as in the previous example, so the Aharoni–Haxell sufficient condition is violated. The width of is 2, since it is pinned e.g. by the set so Meshulam's weaker condition is violated too. However, this does admit a -perfect matching, e.g. which shows that these conditions are not necessary.
Set-family formulation
Consider a bipartite hypergraph where The Hall-type theorems do not care about the set itself - they only care about the neighbors of elements of . Therefore can be represented as a collection of families of sets where for each in , the set-family of neighbors of . For every subset of , the set-family is the union of the set-families for in . A perfect matching in is a set-family of size , where for each in , the set-family is represented by a set in , and the representative sets are pairwise-disjoint.
In this terminology, the Aharoni–Haxell theorem can be stated as follows.
Let be a collection of families of sets. For every sub-collection of , consider the set-family - the union of all the in . Suppose that, for every sub-collection of , this contains a matching such that at least disjoint subsets from are required for pinning . Then admits a system of disjoint representatives.
Necessary and sufficient condition
Let be a bipartite hypergraph. The following are equivalent:
admits a -perfect matching.
There is an assignment of a matching in for every subset of , such that pinning requires at least disjoint edges from is a subset of
In set-family formulation: let be a collection of families of sets. The following are equivalent:
admits a system of disjoint representatives;
There is an assignment of a matching in for every sub-collection of , such that, for pinning , at least edges from is a subcollection of are required.
Examples
Consider example #3 above: Since it admits a -perfect matching, it must satisfy the necessary condition. Indeed, consider the following assignment to subsets of :
In the sufficient condition pinning required at least two edges from it did not hold.
But in the necessary condition, pinning required at least two edges from it does hold.
Hence, the necessary+sufficient condition is satisfied.
Proof
The proof is topological and uses Sperner's lemma. Interestingly, it implies a new topological proof for the original Hall theorem.
First, assume that no two vertices in have exactly the same neighbor (it is without loss of generality, since for each element of , one can add a dummy vertex to all neighbors of ).
Let . They consider an -vertex simplex, and prove that it admits a triangulation with some special properties that they call economically-hierarchic triangulation. Then they label each vertex of with a hyperedge from in the following way:
(a) For each in , The main vertex of the simplex is labeled with some hyperedge from the matching .
(b) Each vertex of on a face spanned by a subset of , is labeled by some hyperedge from the matching .
(c) For each two adjacent vertices of , their labels are either identical or disjoint.
Their sufficient condition implies that such a labeling exists. Then, they color each vertex of with a color such that the hyperedge assigned to is a neighbor of .
Conditions (a) and (b) guarantee that this coloring satisfies Sperner's boundary condition. Therefore, a fully-labeled simplex exists. In this simplex there are hyperedges, each of which is a neighbor of a different element of , and so they must be disjoint. This is the desired -perfect matching.
Extensions
The Aharoni–Haxell theorem has a deficiency version. It is used to prove Ryser's conjecture for .
Meshulam's conditions - the topological Hall theorems
In abstract simplicial complexes
Let be a set of vertices. Let be an abstract simplicial complex on . Let (for in ) be subsets of . A transversal is a set in (an element of ) whose intersection with each contains exactly one vertex. For every subset of , let
Suppose that, for every subset of , the homological connectivity plus 2 of the sub-complex induced by is at least , that is:
Then there exists a transversal. That is: there is a set in that intersects each by exactly one element. This theorem has a deficiency version. If, for every subset of :
then there exists a partial -transversal, that intersects some sets by 1 element, and the rest by at most 1 element. More generally, if is a function on positive integers satisfying , and for every subset of :
then there is a set in that intersects at least of the by exactly one element, and the others by at most one element.
Meshulam's game
Using the above theorem requires some lower bounds on homological connectivity. One such lower bound is given by Meshulam's game. This is a game played by two players on a graph. One player - CON - wants to prove that the graph has a high homological connectivity. The other player - NON - wants to prove otherwise. CON offers edges to NON one by one; NON can either disconnect an edge, or explode it; an explosion deletes the edge endpoints and all their neighbors. CON's score is the number of explosions when all vertices are gone, or infinity if some isolated vertices remain. The value of the game on a given graph (the score of CON when both players play optimally) is denoted by . This number can be used to get a lower bound on the homological connectivity of the independence complex of , denoted :
Therefore, the above theorem implies the following. Let be a set of vertices. Let be a graph on . Suppose that, for every subset of :
Then there is an independent set in , that intersects each by exactly one element.
In simple bipartite graphs
Let be a bipartite graph with parts and . Let be the set of edges of . Let the line graph of . Then, the independence complex is equal to the matching complex of H, denoted . It is a simplicial complex on the edges of , whose elements are all the matchings on . For each vertex in , let be set of edges adjacent to (note that is a subset of ). Then, for every subset of , the induced subgraph contains a clique for every neighbor of (all edges adjacent to , that meet at the same vertex of , form a clique in the line-graph). So there are disjoint cliques. Therefore, when Meshulam's game is played, NON needs explosions to destroy all of , so . Thus, Meshulam's condition
is equivalent to Hall's marriage condition. Here, the sets are pairwise-disjoint, so a -transversal contains a unique element from each , which is equivalent to a -saturating matching.
In matching complexes
Let be a bipartite hypergraph, and suppose is its matching complex . Let (for in ) be sets of edges of . For every subset of , is the set of matchings in the sub-hypergraph:
If, for every subset of :
Then there exists a matching that intersects each set exactly once (it is also called a rainbow matching, since each can be treated as a color).
This is true, in particular, if we define as the set of edges of containing the vertex of . In this case, is equivalent to - the multi-hypergraph of neighbors of ("multi" - since each neighbor is allowed to appear several times for several different ).
The matching complex of a hypergraph is exactly the independence complex of its line graph, denoted . This is a graph in which the vertices are the edges of , and two such vertices are connected iff their corresponding edges intersect in . Therefore, the above theorem implies:
Combining the previous inequalities leads to the following condition.
Let be a bipartite hypergraph. Suppose that, for every subset of , the following condition holds:
where is considered a multi-hypergraph (i.e., it may contain the same hyperedge several times, if it is a neighbor of several different elements of ). Then, admits a -perfect matching.
Examples
We consider several bipartite hypergraphs with and The Meshulam condition trivially holds for the empty set. It holds for subsets of size 1 iff the neighbor-graph of each vertex in is non-empty (so it requires at least one explosion to destroy), which is easy to check. It remains to check the subset itself.
Here The graph has three vertices: Only the last two are connected; the vertex Aa is isolated. Hence, . Indeed, admits a -perfect matching, e.g.
H = { {1,A,a}; {1,B,b}; {2,A,b}, {2,B,a} }. Here has four vertices: and four edges: For any edge that CON offers, NON can explode it and destroy all vertices. Hence, . Indeed, does not admit a -perfect matching.
Here is the same as in the previous example, so Meshulam's sufficient condition is violated. However, this does admit a -perfect matching, e.g. which shows that this condition is not necessary.
No necessary-and-sufficient condition using is known.
More conditions from rainbow matchings
A rainbow matching is a matching in a simple graph, in which each edge has a different "color". By treating the colors as vertices in the set , one can see that a rainbow matching is in fact a matching in a bipartite hypergraph. Thus, several sufficient conditions for the existence of a large rainbow matching can be translated to conditions for the existence of a large matching in a hypergraph.
The following results pertain to tripartite hypergraphs in which each of the 3 parts contains exactly vertices, the degree of each vertex is exactly , and the set of neighbors of every vertex is a matching (henceforth "-tripartite-hypergraph"):
Every -tripartite-hypergraph has a matching of size .
Every -tripartite-hypergraph has a matching of size .
Every -tripartite-hypergraph has a matching of size .
Every -tripartite-hypergraph has a matching of size .
Every -tripartite-hypergraph has a matching of size . (Preprint)
H. J. Ryser conjectured that, when is odd, every -tripartite-hypergraph has a matching of size .
S. K. Stein and Brualdi conjectured that, when is even, every -tripartite-hypergraph has a matching of size . (it is known that a matching of size might not exist in this case).
A more general conjecture of Stein is that a matching of size exists even without requiring that the set of neighbors of every vertex in is a matching.
The following results pertain to more general bipartite hypergraphs:
Any tripartite hypergraph in which , the degree of each vertex in is , and the neighbor-set of is a matching, has a matching of size . The is the best possible: if , then the maximum matching may be of size -1.
Any bipartite hypergraph in which , the degree of each vertex y in is , and the neighbor-set of is a matching, has a matching of size . It is not known whether this is the best possible. For even , it is only known that is required; for odd , it is only known that is required.
Conforti-Cornuejols-Kapoor-Vuskovic condition: Balanced hypergraphs
A balanced hypergraph is an alternative generalization of a bipartite graph: it is a hypergraph in which every odd cycle of has an edge containing at least three vertices of .
Let be a balanced hypergraph. The following are equivalent:
admits a perfect matching (i.e., a matching in which each vertex is matched).
For all disjoint vertex-sets , , if , then there exists an edge in such that (equivalently: if for all edges in , then ).
In simple graphs
A simple graph is bipartite iff it is balanced (it contains no odd cycles and no edges with three vertices).
Let for all edges in " means that contains all the neighbors of vertices of . Hence, the CCKV condition becomes: "If a subset of contains the set , then ". This is equivalent to Hall's condition.
See also
Perfect matching in high-degree hypergraphs - presents other sufficient conditions for the existence of perfect matchings, which are based only on the degree of vertices.
References
Hypergraphs
Matching (graph theory)
Theorems in graph theory
Graph algorithms | Hall-type theorems for hypergraphs | [
"Mathematics"
] | 5,217 | [
"Graph theory",
"Theorems in discrete mathematics",
"Mathematical relations",
"Matching (graph theory)",
"Theorems in graph theory"
] |
64,399,855 | https://en.wikipedia.org/wiki/Time%20in%20Tonga | Tonga is a sovereign state in Polynesia that wholly utilises UTC+13:00 year round. Tonga does not currently observe daylight saving time, though it did in the Southern Hemisphere summers between 1992 and 2002 as well as the 2016–2017 summer, utilising UTC+14:00. UTC+14:00 is the earliest time zone on Earth and so, when using daylight saving time, Tonga was one of the first regions of Earth to bring in a new year. UTC+14:00 is also used by Samoa and Kiribati's Line Islands. Tonga currently shares a year-round time zone with Tokelau and the Phoenix Islands whilst Fiji, New Zealand and Samoa share Tonga's time seasonally. Tonga is west of the International Date Line (IDL) which deviates east from its standard course following the 180th meridian to roughly the 165th meridian west to traverse east of Tonga and other surrounding land.
References
Geography of Tonga | Time in Tonga | [
"Physics"
] | 197 | [
"Spacetime",
"Physical quantities",
"Time",
"Time by country"
] |
64,400,151 | https://en.wikipedia.org/wiki/Doyle%20spiral | In the mathematics of circle packing, a Doyle spiral is a pattern of non-crossing circles in the plane in which each circle is surrounded by a ring of six tangent circles. These patterns contain spiral arms formed by circles linked through opposite points of tangency, with their centers on logarithmic spirals of three different shapes.
Doyle spirals are named after mathematician Peter G. Doyle, who made an important contribution to their mathematical construction in the late 1980s or early 1990s. However, their study in phyllotaxis (the mathematics of plant growth) dates back to the early 20th century.
Definition
A Doyle spiral is defined to be a certain type of circle packing, consisting of infinitely many circles in the plane, with no two circles having overlapping interiors. In a Doyle spiral, each circle is enclosed by a ring of six other circles. The six surrounding circles are tangent to the central circle and to their two neighbors in the ring.
Properties
Radii
As Doyle observed, the only way to pack circles with the combinatorial structure of a Doyle spiral is to use circles whose radii are also highly structured. Six circles can be packed around a circle of radius if and only if there exist three positive real numbers so that the surrounding circles have radii (in cyclic order)
Only certain triples of numbers come from Doyle spirals; others correspond to systems of circles that eventually overlap each other.
Arms
In a Doyle spiral, one can group the circles into connecting chains of circles through opposite points of tangency. These have been called arms, following the same terminology used for spiral galaxies. Within each arm, the circles have radii in a doubly infinite geometric sequence
or a sequence of the same type with common multiplier . In most Doyle spirals, the centers of the circles on a single arm lie on a logarithmic spiral, and all of the logarithmic spirals obtained in this way meet at a single central point. Some Doyle spirals instead have concentric circular arms (as in the stained glass window shown) or straight arms.
Counting the arms
The precise shape of any Doyle spiral can be parameterized by three natural numbers, counting the number of arms of each of its three shapes. When one shape of arm occurs infinitely often, its count is defined as 0, rather than infinity. The smallest arm count equals the difference of the other two arm counts, so any Doyle spiral can be described as being of where and are the two largest counts, in the sorted order
Every pair with determines a Doyle spiral, with its third and smallest arm count equal to . The shape of this spiral is determined uniquely by these counts, up to similarity. For a spiral of the radius multipliers are for complex numbers and satisfying the coherence equation and the tangency equations
This implies that the radius multipliers are algebraic numbers. The self-similarities of a spiral centered on the origin form a discrete group generated by and A circle whose center is distance from the central point of the spiral has radius
Exact values of these parameters are known for a few simple cases. In other cases, they can be accurately approximated by a numerical search, and the results of this search can be used to determine numerical values for the sizes and positions of all of the circles.
Symmetry
Doyle spirals have symmetries that combine scaling and rotation around the central point (or translation and rotation, in the case of the regular hexagonal packing of the plane by unit circles), taking any circle of the packing to any other circle. Applying a Möbius transformation to a Doyle spiral preserves the shape and tangencies of its circles. Therefore, a Möbius transformation can produce additional patterns of non-crossing tangent circles, each tangent to six others. These patterns typically have a double-spiral pattern in which the connected sequences of circles spiral out of one center point (the image of the center of the Doyle spiral) and into another point (the image of the point at infinity). However, these do not meet all of the requirements of Doyle spirals: some circles in this pattern will not be surrounded by their six neighboring circles.
Examples and special cases
The most general case of a Doyle spiral has three distinct radius multipliers, all different and three distinct arm counts, all nonzero. An example is Coxeter's loxodromic sequence of tangent circles, a Doyle spiral of type (2,3), with arm counts 1, 2, and 3, and with multipliers and for
where denotes the golden ratio. Within the single spiral arm of tightest curvature, the circles in Coxeter's loxodromic sequence form a sequence whose radii are powers of . Every four consecutive circles in this sequence are mutually tangent.
When exactly one of the three arm counts is zero, the arms that it counts are circular, with radius The number of circles in each of these circular arms equals the number of arms of each of the other two types. All the circular arms are concentric, centered where the spiral arms The multipliers for a Doyle spiral of type are and . In the photo of a stained glass church window, the two rings of nine circles belong to a Doyle spiral of this form, of
Straight arms are produced for arm counts In this case, the two spiraling arm types have the same radius multiplier, and are mirror reflections of each other. There are twice as many straight arms as there are spirals of either type. Each straight arm is formed by circles with centers that lie on a ray through the central point. Because the number of straight arms must be even, the straight arms can be grouped into opposite pairs, with the two rays from each pair meeting to form a line. The multipliers for a Doyle spiral of type are and . The Doyle spiral of type (8,16) from the Popular Science illustration is an example, with eight arms spiraling the same way as the shaded arm, another eight reflected arms, and sixteen rays.
A final special case is the Doyle spiral of type (0,0), a regular hexagonal packing of the plane by unit circles. Its radius multipliers are all one and its arms form parallel families of lines of three different slopes.
Applications
The Doyle spirals form a discrete analogue of the exponential function, as part of the more general use of circle packings as discrete analogues of conformal maps. Indeed, patterns closely resembling Doyle spirals (but made of tangent shapes that are not circles) can be obtained by applying the exponential map to a scaled copy of the regular hexagonal circle packing. The three ratios of radii between adjacent circles, fixed throughout the spiral, can be seen as analogous to a characterization of the exponential map as having fixed . Doyle spirals have been used to study Kleinian groups, discrete groups of symmetries of hyperbolic space, by embedding these spirals onto the sphere at infinity of hyperbolic space and lifting the symmetries of each spiral to symmetries of the space
Spirals of tangent circles, often with Fibonacci numbers of arms, have been used to model phyllotaxis, the spiral growth patterns characteristic of certain plant species, beginning with the work of Gerrit van Iterson. In this context, an arm of the Doyle spiral is called a parastichy and the arm counts of the Doyle spiral are called parastichy numbers. When the two parastichy numbers and are Fibonacci numbers, and either consecutive or separated by only one Fibonacci number, then the third parastichy number will also be a Fibonacci number. With this application in mind, Arnold Emch in 1910 calculated the positions of circles in Doyle spirals of noting in his work the connections between these spirals, logarithmic spirals, and the exponential function. For modeling plant growth in this way, spiral packings of tangent circles on surfaces other than the plane, including cylinders and cones, may also be used.
Spiral packings of circles have also been studied as a decorative motif in
Related patterns
Tangent circles can form spiral patterns whose local structure resembles a square grid rather than a hexagonal grid, which can be continuously transformed into Doyle spirals. The space of locally-square spiral packings is infinite-dimensional, unlike Doyle spirals, which can be determined by a constant number of parameters. It is also possible to describe spiraling systems of overlapping circles that cover the plane, rather than non-crossing circles that pack the plane, with each point of the plane covered by at most two circles except for points where three circles meet at angles, and with each circle surrounded by six others. These have many properties in common with the Doyle spirals.
The Doyle spiral should not be confused with a different spiral pattern of circles, studied for certain forms of plant growth such as the seed heads of sunflowers. In this pattern, the circles are of unit size rather than growing logarithmically, and are not tangent. Instead of having centers on a logarithmic spiral, they are placed on Fermat's spiral, offset by the golden angle from each other relative to the center of the spiral, where is the golden ratio.
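For reference, the golden angle mentioned here has the closed form
$$ \theta = 2\pi\left(1 - \frac{1}{\varphi}\right) = \frac{2\pi}{\varphi^{2}} \approx 137.5^\circ, \qquad \varphi = \frac{1 + \sqrt{5}}{2}, $$
where $\varphi$ is the golden ratio.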
Notes
References
Further reading
External links
Doyle spiral explorer, Robin Houston
Circle packing
Spirals
Plant morphology
Eponyms in geometry | Doyle spiral | [
"Mathematics",
"Biology"
] | 1,845 | [
"Geometry problems",
"Eponyms in geometry",
"Packing problems",
"Plants",
"Plant morphology",
"Circle packing",
"Geometry",
"Mathematical problems"
] |
64,402,663 | https://en.wikipedia.org/wiki/Infer.NET | Infer.NET is a free and open source .NET software library for machine learning. It supports running Bayesian inference in graphical models and can also be used for probabilistic programming.
Overview
Infer.NET follows a model-based approach and is used to solve different kinds of machine learning problems, including standard problems such as classification, recommendation or clustering, as well as customized solutions and domain-specific problems. The framework is used in various domains such as bioinformatics, epidemiology, computer vision, and information retrieval.
Development of the framework was started by a team at Microsoft’s research centre in Cambridge, UK, in 2004. It was first released for academic use in 2008 and later open sourced in 2018. In 2013, Microsoft was awarded the USPTO’s Patents for Humanity Award in the Information Technology category for Infer.NET and its work in advanced machine learning techniques.
Infer.NET is used internally at Microsoft as the machine learning engine in some of their products such as Office, Azure, and Xbox.
The source code is licensed under MIT License and available on GitHub. It is also available as NuGet package.
See also
Machine learning
ML.NET
scikit-learn
References
Further reading
External links
Infer.NET
GitHub - dotnet/infer
Machine Intelligence and Perception - Microsoft Research
Infer.NET - Practical Implementation Issues and a Comparison of Approximation Techniques
Applied machine learning
Free and open-source software
Microsoft free software
Microsoft Research
Software that uses Mono (software)
Open-source artificial intelligence
Probabilistic models
Probabilistic software
Software using the MIT license
2008 software | Infer.NET | [
"Mathematics"
] | 326 | [
"Probabilistic software",
"Mathematical software"
] |
64,403,658 | https://en.wikipedia.org/wiki/Quiet%20area | "Quiet area" or "quiet areas" is a concept used in landscape planning to highlight areas with good sound quality and limited noise disturbance. The concept is typically used in nature and nature-like areas with high experiential values and/or high accessibility. Despite the name, quiet areas are not "quiet" in the strictest meaning of the word. Rather, they imply a relative quietness, where other sounds than noise are given the chance to come forward. For instance, sounds of nature are often subtle in character, and require absence of noise to be heard. Quietness in its true sense hardly exists at all.
Background and history
In the planning processes for everyday landscapes, the sound environment has traditionally been given relatively low priority. If sound is at all considered, it is mostly in response to problems with environmental noise, dealt with through measurements of sound pressure levels and technical solutions.
Strategies to avoid noise have existed at least since ancient Greece and have been implemented on a wider scale since the 1970s in the western world. While playing a critical role to reduce noise and associated problems with health, noise management does not take account of the experiential qualities inherent in sound. With "quiet areas", it can be said that focus started to shift from noise to include also the potential qualities in the sound environment, like twittering birds, rustling vegetation and rippling water. This holistic way of thinking is in line with the discourse on soundscape, a research field that started to become influential around the same time as the concept of quiet areas was introduced.
In the EU, the notion of quiet areas can be traced to 1996 when it was mentioned in a Green Paper on "Future Noise Policy". Today, the concept is mostly associated with the influential directive on environmental noise from 2002 (2002/49/EC), where it is stipulated that member states should map their quiet areas as well as formulate strategies to protect them from future noise exposure. The instructions and definitions on quiet areas that were mentioned in the directive were vague, and clarifications and guidelines have been added subsequently.
Definitions and identification strategies
Definitions of what a "quiet area" is varies widely, which is partly a result of the formulations used in the END Directive. The directive makes a distinction between two types of quiet areas; in "open country" and in "agglomerations", which are defined as follows:
In other words, to a large extent, the END directive leaves it to each member state to formulate their own definitions of what qualifies as a quiet area. A number of different interpretations and definitions have come out as a result, many of which were collected in a subsequent publication in the union entitled "Good Practice Guide on quiet areas". Definitions typically include a reference to a benchmark sound pressure level between 25 and 55 dBA.
A method to identify potential for quiet areas has also been brought forward by the EU; the so-called "Quietness Suitability Index" (QSI) uses existing data for noise and land use to indicate potential for quietness. Maps can be accessed through the European Environment Agency's homepage.
Examples and applications
The UK has seen several initiatives related to quiet areas including an interactive map from the Department for Environment, Food and Rural Affairs (DEFRA) depicting five quiet areas in Belfast.
A smartphone application Hush City has been developed as a means to aid identification of quiet areas from a user perspective. The app was released in 2017 and it is now used internationally by citizens and municipalities to map and assess quiet areas, and share them via an open access web-platform.
In Sweden, the initiative "Guide to Silence" has been implemented in several municipalities in the Stockholm region. The initiative is noteworthy for its emphasis on marketing quiet areas and making them accessible to the public.
Initiatives have also been taken in Greece and the Netherlands, among other places.
References
Soundscape ecology
Landscape architecture
Environmental design
Noise control | Quiet area | [
"Engineering",
"Biology"
] | 791 | [
"Environmental design",
"Landscape architecture",
"Ecological techniques",
"Soundscape ecology",
"Design",
"Architecture"
] |
64,403,830 | https://en.wikipedia.org/wiki/Wing%20engine | A wing engine is a subsidiary engine installed in a motor boat alongside the main engine. The primary purpose of a wing engine is to provide redundancy and safety in the event of failure of the main engine; a secondary benefit is assistance with manoeuvring in port or in a marina.
Wing engine installation
Whereas the main engine will be larger and invariably mounted on the vessel's centreline, the wing engine will be considerably smaller and positioned to one side. A wing engine will typically be either:
a small marine engine that may also serve as a generator when running; or
a diesel generator that may power (typically via a 12v or 24v battery pack) an electric motor that drives its own propeller shaft and propeller.
In either case, the wing engine's propeller will be off-centre. This can give rise to steering difficulties; but this can be used to advantage in port with the main engine as follows: if the main engine has a right-hand propeller, the "prop walk" when in reverse will tend to move the stern to port. In these circumstances, the wing-motor should be arranged to have a propeller to the left (port-side) of the centreline, so as to balance the vessel in astern, or to produce (with the main engine in neutral) a vector thrust to starboard.
Canal boats need very little power in canals, as there is virtually no current (and there are often speed limits). In such canals the wing engine may be used to propel the boat; but when the vessel puts to sea or navigates a fast flowing river, the power of the main engine would be needed. Diesel engines suffer harm if not run under load, so a small wing engine under load should be more efficient in a canal than a main engine operating barely above tick-over.
Examples of wing engine installations
a 10m Vlet used on canals by author Marian Martin had a 120bhp DAF main engine, and an 18bhp Sabb wing engine. Ms Martin was so impressed that in her book she recommends wing engines, albeit with some reservations.
a 27m schooner-rigged Dutch sailing barge, Hosanna, had a large Cummins main engine and a smaller Perkins wing engine. When the Cummins failed, the owners, Bill & Laurel Cooper, motored through the French canals to re-engine the boat at Great Yarmouth. So exasperated were they by the tricky steering using just a wing engine for long stretches that, instead of replacing the Cummins with a similar large main engine, they installed two more Perkins engines and propellers. Hosanna now had three similar Perkins engines: one in the centre, and one on either side. In calm canals, the central engine alone would be used; the other two would be engaged at sea or in fast rivers, or when manoeuvring.
References
Marine engines | Wing engine | [
"Technology"
] | 578 | [
"Marine engines",
"Engines"
] |
64,404,772 | https://en.wikipedia.org/wiki/Kepler-1544b | Kepler-1544b is a potentially habitable (optimistic sample) exoplanet announced in 2016 and located 1138 light years away, in the constellation of Cygnus.
Characteristics
The planet orbits the K-type star Kepler-1544, which has a metallicity ([Fe/H]) of −0.08 and an effective temperature of .
Kepler-1544b is considered a super-Earth with a radius of 1.71 Earth radii.
Habitability
With an orbital period of 168 days, the exoplanet is located 0.54 AU from the star, which is close to the Earth-equivalent radiation distance (0.49 AU), the distance at which the planet would receive the same stellar flux as Earth receives from the Sun.
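The quoted separation is consistent with Kepler's third law. As a rough check (assuming a stellar mass of about $0.8\,M_\odot$, a typical value for a K-type dwarf that is not stated in this article),
$$ a \approx \left( \frac{M_\star}{M_\odot}\left(\frac{P}{1\,\text{yr}}\right)^{2} \right)^{1/3} \text{AU} = \big(0.8 \times (168/365.25)^{2}\big)^{1/3}\ \text{AU} \approx 0.55\ \text{AU}, $$
close to the 0.54 AU given above.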
Despite the fact that NASA considers this planet gaseous, it could have a rocky composition, considering that its mass stays below 10 Earth masses.
References
Exoplanets discovered in 2016
Exoplanets discovered by the Kepler space telescope
Transiting exoplanets
Super-Earths in the habitable zone
Cygnus (constellation) | Kepler-1544b | [
"Astronomy"
] | 211 | [
"Cygnus (constellation)",
"Constellations"
] |
64,405,404 | https://en.wikipedia.org/wiki/Sabb%20Motor | Sabb Motor is a Norwegian maker of small marine diesel engines, mostly single-cylinder or twin-cylinder units. The firm was established as Damsgaard Motorfabrikk by two brothers, Alf and Håkon Søyland in 1925. The firm started building engines to meet the demand of fishermen who wanted simple, robust and reliable power for their boats. (The word 'Sabb' means toughness and reliability).
History
The brothers' cottage industry began by creating a 3HP hot-bulb engine. This was followed by a larger 7HP version which tended to suffer broken crankshafts, but the firm was able to solve the problem and re-launch their engines under the name Ny-Sabb (New Sabb). By 1975, Sabb Motor was producing 3,200 engines a year between 8 and 30 bhp. Facing market competition, the firm concentrated on providing 30bhp engines for ship’s lifeboats, a decision which increased worldwide demand. The UK's distributor of Sabb engines is Sleeman & Hawken Ltd. More recently, Sabb have established a link with Mitsubishi.
In 2006 Sabb Motor AS was bought by Frydenbø Industri and renamed Frydenbø Sabb Motor AS.
Sabb engines
Sabb engines are rugged and simple; for example, some have "splash lubrication", which requires neither an oil pump nor a filter. (Splash lubrication is an antique system whereby "spades" on the big-end caps dip into the oil sump and splash the lubricant upwards; clearly it is a system that can work only on very low-revving engines, otherwise the sump oil would become a frothy mousse).
Sabb engines include Types H, G, GA, 2H, 2G, & 2J.
External links
Narrow boat "Bullfinch" with a running Sabb 2G engine
References
Marine engines
Engine manufacturers of Norway
Norwegian companies established in 1925 | Sabb Motor | [
"Technology"
] | 401 | [
"Marine engines",
"Engines"
] |
64,405,419 | https://en.wikipedia.org/wiki/Qibla%20observation%20by%20shadows | Twice every year, the Sun culminates at the zenith of the Kaaba in Mecca, the holiest site in Islam, at local solar noon, allowing the qibla (the direction towards the Kaaba) to be ascertained in other parts of the world by observing the shadows cast by vertical objects. This phenomenon occurs at 12:18 Saudi Arabia Standard Time (SAST; 09:18 UTC) on 27 or 28 May (depending on the year), and at 12:27 SAST (09:27 UTC) on 15 or 16 July (depending on the year). At these times, the Sun appears in the direction of Mecca, and shadows cast by vertical objects determine the qibla. At two other moments in the year, the Sun passes through the nadir (the antipodal zenith) of the Kaaba, casting shadows that point in the opposite direction, and thus also determine the qibla. These occur on 12, 13, or 14 January at 00:30 SAST (21:30 UTC on the preceding day), and 28 or 29 November at 00:09 SAST (21:09 UTC on the preceding day).
The shadow points towards Mecca because the Sun path makes the subsolar point travel through every latitude between the Tropic of Cancer and the Tropic of Capricorn every year, including the latitude of the Kaaba (21°25′N), and because the Sun crosses the local meridian once a day. This observation has been known since at least the 13th century, when it was noted by the astronomers Jaghmini and Nasir al-Din al-Tusi, but their timings could not be fixed to a particular date because the Islamic calendar is lunar rather than solar; the solar date on which the Sun culminates at the zenith of Mecca is constant, but the lunar date varies from year to year.
Context
Qibla
The qibla is the direction of the Kaaba, a cube-shaped building at the centre of the Great Mosque of Mecca (al-Masjid al-Haram) in the Hejaz region of Saudi Arabia. This direction is special in Islamic rituals and religious law because Muslims must face it during daily prayers (salat) and in other religious contexts. The determination of qibla was an important problem for Muslim communities because Muslims are required to know the qibla to perform their daily prayers and because it is needed to determine the orientation of mosques. When Muhammad lived among the Muslims in Medina, which is also in the Hejaz region, he prayed due south, the known direction of Mecca. Within a few generations of Muhammad's death in 632, Muslims had reached places far distant from Mecca, making the determination of the qibla in these new locations problematic. Initially, Muslims relied on traditional folk knowledge methods, but after the introduction of astronomy into the Islamic world, solutions based on mathematical and astronomical knowledge began to be developed in the early 9th century. The shadow-observation method has been attested since at least the 13th century CE.
Apparent motions of the Sun
Places on Earth experience the apparent diurnal motion of the Sun from the east to the west, during which it culminates, or reaches its highest point of the day and crosses the local meridian. The Sun also appears to move seasonally between the Tropic of Cancer (approximately 23.5°N) and the Tropic of Capricorn (approximately 23.5°S); therefore, the solar culmination usually occurs to the north or south of the zenith. For locations between the tropics, at certain times of the year, the Sun crosses the local latitude and then culminates at or near the zenith; this location is known as the subsolar point. The Kaaba is located at a latitude of 21°25′N, inside the zone that experiences this phenomenon. In the terminology of Islamic astronomy ('ilm al-falak), these events are called the "great culmination" (al-istiwa al-a'dham).
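In quantitative terms, the Sun stands at the zenith of the Kaaba precisely when the solar declination equals the Kaaba's latitude,
$$ \delta_\odot = \varphi_{\text{Kaaba}} \approx +21^\circ 25', $$
which happens on two dates each year as the declination sweeps between the Tropic of Cancer and the Tropic of Capricorn (the symbols here are supplied for illustration).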
Observation
The great culmination, when the Sun appears directly over the Kaaba, occurs on 27 or 28 May at approximately 12:18 SAST (09:18 UTC), and on 15 or 16 July at 12:27 SAST (09:27 UTC), coinciding with the solar noon and the Zuhr adhan (midday call to prayer) in Mecca. As the sun crosses almost directly above the Kaaba, any shadow cast by vertical objects on earth will point directly away from the Kaaba, which casts nearly no shadow. This phenomenon allows the direction of the qibla to be determined without needing to perform calculations or to use sophisticated instruments. This observation is called rasd al-qibla ('observing the qibla').
This event is not observable in the hemisphere opposite the Kaaba, since the phenomenon occurs when the Sun is below the horizon. This hemisphere includes most of the Americas, the Pacific Ocean, Australia, and eastern Indonesia. People in these places can observe a comparable event when the Sun passes directly above the antipodal point of the Kaaba – the point directly opposite on the other side of the Earth. The shadows cast during these times point in the exact opposite direction to that shown during the rasd al-qibla. The antipodal events occur on 12, 13, or 14 January at 00:30 SAST (21:30 UTC on the previous day), and again on 28 or 29 November at 00:09 SAST (21:09 UTC on the previous day).
A practical problem arises in locations whose angular distance to Mecca is close to 90 degrees, that is, near the edge of the hemisphere centred on Mecca: there, the rasd al-qibla events always occur close to sunrise or sunset. This is the case for several places on the east coast of North America; for instance, the first rasd al-qibla (28 May at 12:18 SAST) occurs six minutes after sunrise in Boston and Montreal, two minutes before sunrise in Ottawa, and eleven minutes before sunrise in New York City. The phenomenon therefore cannot be viewed in New York City and Ottawa, while in Boston and Montreal the Sun appears so low that the observation site must have a view completely unobstructed by buildings or terrain.
Daily observation
In addition to the twice-yearly rasd al-qibla, in most locations the Sun crosses the great circle connecting the location and the Kaaba each day; at the instant this happens, shadows cast by the Sun point either toward the qibla or directly away from it. The time of this daily event depends on the location and the day of the year, and can be determined from geographical data and calculations, but this is more involved than the yearly rasd al-qibla, whose times are the same globally and require no calculation.
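Finding that daily instant amounts to computing the bearing of the qibla from the observer's location and then looking for the moment when the Sun's azimuth matches it (or its opposite). The bearing itself is the standard initial great-circle bearing from spherical trigonometry, shown below as a hedged sketch rather than a method taken from this article; the Kaaba coordinates are approximate.

```python
import math

def qibla_bearing(lat, lon, kaaba_lat=21.4225, kaaba_lon=39.8262):
    """Initial great-circle bearing from (lat, lon) to the Kaaba,
    in degrees clockwise from true north; all arguments in degrees."""
    phi, phi_k = math.radians(lat), math.radians(kaaba_lat)
    d_lon = math.radians(kaaba_lon - lon)
    x = math.sin(d_lon)
    y = math.cos(phi) * math.tan(phi_k) - math.sin(phi) * math.cos(d_lon)
    return math.degrees(math.atan2(x, y)) % 360

# Example: from Jakarta (about 6.2 S, 106.8 E) this gives roughly 295 degrees,
# i.e. a qibla toward the west-northwest.
print(round(qibla_bearing(-6.2, 106.8), 1))
```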
History
The method of observing the qibla by shadows was attested by the Central Asian astronomer Jaghmini, who wrote that it can be done twice a year, when the Sun's position on the ecliptic is at 7°21′ of Gemini and at 22°39′ of Cancer. Subsequently, Nasir al-Din al-Tusi (1201–1276) also related this method in his work al-Tadhkira al-Nasīriyya fī ʿilm al-Hayʾa ("Memoir on the Science of Astronomy"), although with less precision than Jaghmini.
Al-Tusi stated the two rasd al-qibla days by specifying the Sun's position on the ecliptic (8° Gemini and 23° Cancer) rather than giving specific dates. This is because the Muslim world of their time used the lunar Islamic calendar rather than a solar one, so the two events could not be tied to a fixed day and month. Because the obliquity of the ecliptic is slowly decreasing, the values in Jaghmini's and al-Tusi's lifetimes differ from the modern ones; as of 2000, the corresponding solar positions are 6°40′ Gemini and 23°20′ Cancer. Besides specifying the Sun's position, al-Tusi's passage also describes how to convert the time of noon in Mecca into local time.
See also
Lahaina Noon
Sundial
Zero shadow day
Explanatory notes
References
Footnotes
Bibliography
Astronomy in the medieval Islamic world
Shadows
Orientation (geometry) | Qibla observation by shadows | [
"Physics",
"Astronomy",
"Mathematics"
] | 1,773 | [
"Physical phenomena",
"Shadows",
"History of astronomy",
"Astronomy in the medieval Islamic world",
"Optical phenomena",
"Topology",
"Space",
"Geometry",
"Spacetime",
"Orientation (geometry)"
] |
64,407,506 | https://en.wikipedia.org/wiki/Barcode%20library | Barcode library or Barcode SDK is a software library that can be used to add barcode features to desktop, web, mobile or embedded applications. A barcode library provides sets of subroutines or objects that create barcode images and place them on surfaces, or that recognize machine-encoded text or data in scanned or camera-captured images containing barcodes. A library can support two modes, generation and recognition; some libraries support barcode reading and writing equally well, while others support only one of the two modes.
Barcode technology makes it possible to attach machine-readable tags, or additional machine-readable data, to almost any real-world object for less than one cent per item, and to read that data back with any camera-equipped device. Combined with a barcode library, this allows automatic document processing, OMR, package tracking and even augmented reality applications to be implemented at low cost.
History
The first Barcode SDKs were not implemented as software libraries but as standalone applications for DOS and Windows and as Barcode fonts. At that time barcodes were used mostly in retail and for internal corporation needs, thus barcode users looked for all-inclusive hardware solutions to generate, print and recognize barcodes.
The situation changed when camera-equipped devices (such as mobile phones) and document scanners became common in everyday use. Because barcodes could now be scanned and recognized with ordinary equipment, industrial and office users no longer needed expensive single-purpose devices for barcode reading, and the demand for barcode writing and reading SDKs and libraries grew.
Barcode writing libraries had already been implemented as barcode fonts or standalone applications in projects such as GNU Barcode or Zint. Implementing a barcode writing library does not require advanced computer-science skills: the library only needs to follow the relevant AIM or ISO specification, and the task is essentially no different from encoding data in a particular file format.
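To illustrate how mechanical the writing side is, here is a toy encoder in Python. The bar/space pattern table is invented purely for illustration and does not correspond to any real AIM or ISO symbology; a real library would substitute the module widths defined by the specification it implements (Code 39, EAN-13, and so on).

```python
# Toy 1D "barcode" writer; the pattern table is made up for illustration only.
PATTERNS = {
    "0": "1101",     # 1 = dark module (bar), 0 = light module (space)
    "1": "1011",
    "QUIET": "000",  # quiet zone padding at both ends
}

def encode(data: str) -> str:
    modules = PATTERNS["QUIET"]
    for ch in data:
        modules += PATTERNS[ch] + "0"   # one light module between symbols
    return modules + PATTERNS["QUIET"]

def render(modules: str) -> str:
    return "".join("█" if m == "1" else " " for m in modules)

print(render(encode("0110")))
```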
Encoding 2D barcodes is more difficult because, unlike 1D barcodes, they carry additional encoding parameters such as columns, rows, ECI and error-correction options. Some 2D barcodes, such as MaxiCode or PDF417, also have special encoding fields (for example, a postal address) or metadata that can split the content across multiple barcode images. These requirements cannot be met with barcode fonts and call for an API that processes multiple parameters.
Barcode reading libraries are more complex and require computer-vision techniques, but they can run on ordinary camera- or scanner-equipped devices. The first libraries could recognize only 1D barcodes, by emulating a laser scanner: the whole image was captured, the library then traced scan lines across it with Bresenham's algorithm and tried to decode the data along those lines, just as a hardware laser scanner does. Well-known examples of this approach are the early ZXing project supported by Google, ZBar and other solutions.
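A minimal sketch of the laser-emulation idea: pixels are sampled along a straight line with Bresenham's algorithm, thresholded to dark/light, and collapsed into run lengths that a 1D decoder would then match against a symbology's bar/space patterns. The image here is just a nested list of grayscale values; real libraries work on camera frames and add many robustness tricks on top of this.

```python
def bresenham(x0, y0, x1, y1):
    """Integer points on the line from (x0, y0) to (x1, y1)."""
    points, dx, dy = [], abs(x1 - x0), -abs(y1 - y0)
    sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
    err = dx + dy
    while True:
        points.append((x0, y0))
        if (x0, y0) == (x1, y1):
            return points
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy

def scanline_runs(image, x0, y0, x1, y1, threshold=128):
    """Run lengths of dark/light samples along one scan line."""
    samples = [image[y][x] < threshold for x, y in bresenham(x0, y0, x1, y1)]
    runs = []
    for dark in samples:
        if runs and runs[-1][0] == dark:
            runs[-1][1] += 1
        else:
            runs.append([dark, 1])
    return runs  # e.g. [[True, 3], [False, 2], ...] = 3 dark modules, 2 light, ...

# Tiny synthetic "image": one row of grayscale pixels containing two dark bars.
image = [[0, 0, 0, 255, 255, 0, 0, 255]]
print(scanline_runs(image, 0, 0, 7, 0))
```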
Laser-scanner emulation is not suitable for recognizing 2D barcodes, and it also struggles to detect the barcode area in the image, which causes problems with 1D barcodes rotated at an angle. More sophisticated computer-vision methods were therefore introduced to improve recognition quality for both 1D and 2D barcodes.
Application
Barcode libraries have brought low-cost automatic identification and data capture to many fields of services and industry, ranging from entertainment, healthcare and postal services to document processing and retail applications.
They can be used for:
Advertisement
Augmented reality implementation as quick identification of virtual objects
Automatic data identification in document processing
Automatically identified hyperlinks to internet pages
Automatically recognized payment bills
Creative usage of barcodes in design
Data entry for documents, like driver ID, receipt or passport
Encryption keys transfer and documents validation
Food and goods tracking in retail
Games in augmented reality
In airports, bus / railroad stations for automatic tickets and passenger documents identification
Internet of Things with linkage of physical object to virtual representation
Package tracking
Patient or medicine identification in healthcare services or industry
Quick information extraction from business cards
Tracking of rental cars, airline luggage and even nuclear wastes
Vehicles identification
Types
Barcode libraries and SDKs can be divided into different types based on their functionality:
Barcode Fonts
Barcode Writing library
Barcode Reading library
Barcode Full support library
The first barcode libraries were fully transparent to the user and worked by simply printing text in specialized TrueType fonts. This works well for 1D barcodes, because a 1D barcode is essentially just linear text, sometimes with a checksum. Barcode fonts can also be used for 2D barcodes, but handling metadata, such as the numbers of rows and columns, is problematic; this is usually addressed by providing a set of fonts with different predefined metadata values for the same barcode type.
Barcode libraries with API calls offer more customization in both writing and reading modes. However, only some libraries fully support both modes; more than half support only one.
Barcode library list
Barcode libraries differ in the barcode formats and programming languages they support, as well as in their reading and writing functionality. The most common barcode libraries and SDKs are represented in the following list:
Recommendations and best practices
Barcodes are a low-cost way of adding machine-readable tags to real-world objects; alternatives such as RFID chips or object detection by image recognition are more expensive and harder to implement. There are more than 200 barcode types, which makes the choice of barcode type far from obvious. The first barcode was standardized in the 1960s, and barcode features have since developed in two waves.
The first wave of barcode standards began in the 1960s with 1D barcodes. Their main advantages were simple encoding and easy recognition of linear symbols with laser scanners. Their restrictions were tied to the slow 8-bit processors in use at the time: 1D barcodes have a restricted symbol set (such as Code 11), a restricted length (such as EAN-13, UPC-A and EAN-8), or are even used without a checksum (such as Code 39). In addition, the information density of these barcode types is low.
Moreover, 1D barcodes have weak checksums, or none at all, which makes recognition unpredictable on low-quality images. Open-source engines often fail to recognize 1D barcodes on such images; engines with advanced recognition algorithms can read them, but recognition from low-quality images may still yield incorrect symbols in the decoded text. Low information density, encoding restrictions and weak checksums make 1D barcodes poorly suited to the requirements of modern information systems and data processing, so using them in new applications is reasonable only when an industry standard requires it.
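As an illustration of how lightweight these 1D checksums are, the sketch below computes the single EAN-13 check digit, a weighted sum modulo 10. A scheme like this catches any single-digit substitution but not every adjacent transposition, which is part of why 1D reads from poor-quality images cannot be fully trusted.

```python
def ean13_check_digit(digits12: str) -> int:
    """Check digit for the first 12 digits of an EAN-13 code.

    Digits in odd positions (1st, 3rd, ...) are weighted 1 and digits in
    even positions are weighted 3; the check digit brings the weighted
    sum up to a multiple of 10.
    """
    assert len(digits12) == 12 and digits12.isdigit()
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits12))
    return (10 - total % 10) % 10

# The 12-digit prefix 400638133393 gets check digit 1 (full code 4006381333931).
print(ean13_check_digit("400638133393"))
```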
The second wave of barcode standards began in the 1990s with the development of 2D barcodes. Their main advantages are a roughly tenfold higher encoding density, no restrictions on the text that can be encoded, and error-correcting codes such as Reed-Solomon codes, which not only give confidence that recognition is correct but can also restore data from partially wiped or corrupted barcodes. Their main disadvantage is that, with the exception of PDF417, they cannot be read by laser scanners and require image-based scanners. Most 2D barcodes can encode information in byte mode, which allows text to be encoded both in 8-bit national character sets and in common Unicode encodings such as UTF-16 or UTF-8 identified with an ECI tag.
New projects should use 2D barcodes whenever industry standards permit. They place no restrictions on the encoded text, they can be restored correctly from corrupted or low-quality images, and their recognition results can be trusted. Their information density allows them to fit in the same area as a 1D barcode, or even a smaller one. The main remaining question is the shape of the area to be marked: the most common QR codes come only in square sizes, as do Aztec codes and most Data Matrix sizes. For a long rectangular area of low height, one can use Data Matrix in its rectangular sizes (see DMRE) or PDF417, whose width-to-height ratio can exceed 64 to 1.
See also
References
External links
Advantages and disadvantages of barcodes and radio frequency identification in supply chain management
GS1 barcodes
Label and Barcode Detection in Wide Angle Image
Robust 1D Barcode Recognition on Mobile Devices
Ten steps to GS1 barcode implementation
Examples of data encoded in barcodes
Automatic identification and data capture
Cross-platform software
Computer vision software
Application programming interfaces
Barcodes
Computer libraries | Barcode library | [
"Technology"
] | 1,720 | [
"Automatic identification and data capture",
"Data",
"IT infrastructure",
"Computer libraries"
] |
64,408,219 | https://en.wikipedia.org/wiki/Redox%20Biology | Redox Biology is an open-access peer-reviewed scientific journal and an official journal of the Society for Redox Biology and Medicine and the Society for Free Radical Research-Europe. The journal covers research on redox biology, aging, signaling, biological chemistry and medical implications of free radicals for health and disease. According to the Journal Citation Reports, the journal's 2020 impact factor is 11.799.
Abstracting and indexing
The journal is abstracted and indexed in ADONIS, BIOSIS, CAB Abstracts, Chemical Abstracts, Current Contents, EMBASE, EMBiology, MEDLINE, Science Citation Index, Scopus and Toxicology Abstracts.
References
External links
FRBM Society
SFRR-E society
Monthly journals
English-language journals
Biochemistry journals
Elsevier academic journals | Redox Biology | [
"Chemistry"
] | 155 | [
"Biochemistry stubs",
"Biochemistry journals",
"Biochemistry literature",
"Biochemistry journal stubs"
] |
64,408,900 | https://en.wikipedia.org/wiki/Maria%20Bruna | Maria Bruna Estrach (born 1984) is an applied mathematician whose interests include stochastic modelling of multiscale phenomena with applications in mathematical biology and industry. She is affiliated with the University of Cambridge, where she is a university lecturer and Royal Society University Research Fellow in the Department of Applied Mathematics and Theoretical Physics, and a Fellow of Churchill College, Cambridge.
Education and career
Bruna was born in 1984 in Barcelona and grew up in Sant Cugat del Vallès, a town to the north of Barcelona, and while growing up there became an avid field hockey player with Júnior Futbol Club.
She studied mathematics and industrial engineering as an undergraduate at the Centro de Formación Interdisciplinaria Superior of the Polytechnic University of Catalonia, completing her studies there in 2008.
After coming to the University of Oxford for a one-year master's degree program in mathematical ophthalmology, she was invited to stay at Oxford for her doctoral studies, and completed a DPhil in applied mathematics in 2012.
After completing her doctorate, Bruna was a postdoctoral researcher in computer science at the University of Oxford, an Olga Taussky Pauli Fellow and senior postdoctoral researcher at the Johann Radon Institute for Computational and Applied Mathematics in Austria, and a junior research fellow in mathematics at St John's College, Oxford, before moving to Cambridge in 2019.
Recognition
In 2016 Bruna was awarded a L’Oréal-UNESCO Women in Science Fellowship, the first given in mathematics. She is also a 2016 winner of the Aviva Women of the Future Awards.
In 2020 the London Mathematical Society gave Bruna a Whitehead Prize "in recognition of her outstanding research in asymptotic homogenisation, most prominently in the systematic development of continuum models of interacting particles systems".
References
External links
1984 births
Living people
21st-century Spanish mathematicians
Spanish women mathematicians
Applied mathematicians
Polytechnic University of Catalonia alumni
Alumni of the University of Oxford
Academics of the University of Cambridge
Fellows of Churchill College, Cambridge
Cambridge mathematicians | Maria Bruna | [
"Mathematics"
] | 402 | [
"Applied mathematics",
"Applied mathematicians"
] |
64,409,191 | https://en.wikipedia.org/wiki/Dynabook%20Inc. | Dynabook Inc., stylized dynabook, is a Japanese personal computer manufacturer based in Kōtō, Tokyo, owned by Sharp Corporation; it was previously part of, and branded overseas as, Toshiba, until 2018. The Dynabook name had already been used by Toshiba in the Japanese market since 1989 for laptop products.
Under Toshiba, it notably launched the Toshiba T1100 in 1985, cited as the first ever commercial laptop PC. The company was a major manufacturer of PCs until a decline in fortunes led to Toshiba selling the business to Sharp in 2018, with new products since rebranded to Dynabook worldwide.
History
Corporate
The company's origins date back to the 1950s as a maker of typewriters called Kawasaki Typewriter Co., Ltd., which in 1958 was bought by Tokyo Shibaura Electric Co., Ltd. (later Toshiba Corporation), and the business changed its name to Toshiba Typewriter Co., Ltd. In 1968, the name changed to Toshiba Business Machine Co. Toshiba Corporation established Toshiba Business Computers Co. in 1977, which was merged with Toshiba Business Machines in 1984, with the resulting company named Toshiba Information Systems Corporation. In April 2016, control of the PC business was transferred to a sales company targeting domestic corporations by means of a company split at Toshiba Corporation, with the resulting entity named Toshiba Client Solutions Co., Ltd.
In 2018, Toshiba Corporation was in the midst of an accounting scandal, and was under pressure to cut costs; Toshiba Client Solutions Co., Ltd. (TCS), the personal computer division, became 80.1% owned by Sharp Corporation, in turn majority-owned by Foxconn; Sharp paid $36 million for the shares. TCS then changed its corporate name to Dynabook, Inc. It marked Sharp Corporation's return to the PC market having last marketed the Mebius in Japan in 2010, and previously also the Actius and WideNote series globally. Sharp exercised a call option on the remaining 19.9% of the shares on June 30, 2020, making Dynabook wholly owned by Sharp in August 2020, and indicated plans for Dynabook to have an initial public offering in 2020 or 2021.
Dynabook Inc. had annual sales of 162.9 billion yen and 2,680 employees. In 2024 the company had 1,867 employees and 8.55 billion yen in capital.
Toshiba era
Toshiba computers
Toshiba's history with computers dates back to the 1950s. Toshiba worked with researchers at the University of Tokyo and manufactured a vacuum-tube computer called TAC, containing 7,000 tubes and 3,000 diodes. Then in the 1960s, the company developed and released TOSBAC (TOshiba Scientific and Business Automatic Computer) mainframes, including the first with its own operating system, released in 1964. This OS was called TOPS-1 and was based on a Fortran monitor. Toshiba also partnered with the General Electric Company of the United States.
In 1981, Toshiba released the first in a line of home computers under the Pasopia name, which run on a BASIC based operating system. The original model was also sold in the United States as the Toshiba T100. Toshiba Pasopia IQ was a separate line that was MSX compatible.
In 1985, Toshiba released the Toshiba T1100, an 8-bit IBM PC compatible, which is claimed by them to be the first ever mass-market laptop computer. The company launched the Toshiba T3100 in 1986, which was 16-bit; its Japanese variant the Toshiba J-3100 was the first 16-bit PC in Japan. 1987 saw the launch of Toshiba T1000.
The first Dynabook
The 'dynabook' was a portable computer concept first introduced by Alan C. Kay in the 1960s and 1970s. Tetsuya Mizoguchi, an executive in Toshiba's mainframe computer division, read Kay's paper "Personal Dynamic Media" in the March 1977 IEEE Computer; and inspired by the concept of a computer that could be carried and used by anyone of any age, Mizoguchi became determined to develop such a computer. The Dynabook trademark was already owned by other companies in Japan and the United States: ASCII Corporation had acquired the rights in Japan, so Toshiba paid a fee to ASCII to use the name there. The trademark rights in Britain, France, and West Germany were also able to be acquired.
The first Toshiba computer with the name DynaBook was announced on June 26, 1989, released in Japan (model number J-3100 SS001) featuring a 3.5 inch disk drive and full sized keyboard, weighing five pounds. Its compact size (then the original definition for a 'notebook laptop'), combined with its low price, made it a major hit in the country, and led to Toshiba adopting the DynaBook brand name for most of its future notebooks sold domestically. Toshiba and the present day Dynabook Inc. have since commemorated it as the "world's first notebook PC".
The DynaBook was later released in America but as the name could not be used, it was released as Toshiba T1000SE. In August 1989, Mizoguchi sent a letter and a Toshiba T1000SE to Kay in Boston, and in December Kay was Mizoguchi's guest at Toshiba. The DynaBook along with NEC's 98 Note together accounted for the "vast majority" of notebook computer sales in Japan in the year 1990. In 1990, the T1000SE became mandatory for all 82 students at Methodist Ladies' College, Melbourne.
1990s and 2000s
In early 1990, Toshiba released the T1000XE and T1200XE, aimed at competing against Compaq's LTE. In 1992, Toshiba launched DynaBook EZ in Japan which had applications built into ROM. The following year, EZ486P with a i486SX chip and featuring a built-in printer was released and achieved 3 million sales worldwide. Also in 1993, Toshiba released a pen-based tablet computer under the name DynaPad.
The Satellite series was launched as a value-priced line of notebooks. In c. 1995, Toshiba launched the Tecra series which was described as their "flagship high-performance mobile computer". In September 1996, Toshiba announced its first consumer desktop computer in the American market, called the Infinia, a high-end, black-colored Pentium computer with analog TV/FM radio card included in its highest configuration. The same year the company also released its first Libretto, a subnotebook or handheld computer running Windows 95.
Toshiba shipped its 10 millionth notebook computer in 1997. In November 1997, the company announced the discontinuation of their Infinia desktop line. In 2002, it launched the Portege 2000, called the "thinnest notebook" and weighing 2.6 pounds. The Tecra M4 tablet PC was launched in 2005. In September 2006, Toshiba recalled thousands of batteries, supplied by Sony, shipped with some of its laptops as they could lose power. However unlike the battery recalls by Apple and Dell in August 2006, which were also Sony-supplied, Toshiba reported that its affected batteries did not pose fire hazards.
In 2008, Toshiba released the Qosmio G55, the first laptop with embedded technology from the Cell processor, which Toshiba co-developed for Sony's PlayStation 3. In December 2008, Toshiba and Sun Microsystems announced that Toshiba will ship laptops with OpenSolaris in the U.S. beginning in 2009.
2010s
In 2010 the Toshiba Libretto W100 was a one-off revival of the Libretto line and was the "first dual-touchscreen" notebook. In April 2011, Toshiba announced the DynaBook Qosmio T851/D8CR, described as "the world's first glasses-free 3D notebook PC able to display 3D and 2D content at the same time on one screen". It was slated for a release in Japan. The Toshiba Portégé Z830 was launched as their first Ultrabook.
In 2013, Toshiba released the Kirabook (Dynabook Kira in Japan), a high-end Windows 8 notebook. In 2014, it launched the Toshiba Encore Windows 8.1 tablet and the Excite Go, a low-end Android tablet. In a LaptopMag.com ranking the best and worst laptop brands, Toshiba was placed last in 2013 and again in 2016; it was placed fourth in 2011. The DynaPad tablet running Windows 10 was launched in October 2015. In March 2016, Toshiba announced that it will stop selling consumer notebooks in Western markets due to falling profits and instead only focus on business products and aiming for profits with business-to-business sales of premium products. In 2017, Toshiba announced the Portege X20W 2-in-1 convertible.
During the 1990s, Toshiba globally ranked as the largest manufacturer of laptop/notebook computers. However its presence in desktop computers was limited, which led to it eventually exiting desktops to focus solely on mobile. Toshiba remained one of the largest vendors in the market during the 2000s, and as of 2011 the company was selling 17.7 million PCs. However, it afterwards went into a steep decline, with sales falling to just 1.4 million in 2017 (a decline of over 90% in six years).
Sharp era
With Toshiba in trouble amid an accounting scandal and under pressure to cut costs, it sold the majority of its PC business to Sharp Corporation. As a result, the company name and branding changed to Dynabook effective in 2019. On the 30th anniversary of the original DynaBook launch in 1989, in July 2019, the first Dynabook-branded products in the West were launched: Portégé X30-F, Tecra X40-F and Portégé A30-E, as successors of previous-generation Toshiba-branded models. In addition, the Dynabook G series was announced for Japan to celebrate the 30th anniversary of the brand. Dynabook has continued making very light laptops, especially in its premium Portégé line. It has since also released education-specific models and TAA-compliant models for government use.
In 2021, Dynabook released its first Chromebook called C1 for the Japanese market. As of FY 2022, Dynabook is the fifth largest PC vendor in Japan with a market share of 8%, trailing FCCL (Fujitsu), Dell, HP, and the leader NEC-Lenovo Group.
In February 2024, Dynabook announced a recall of millions of AC adapters shipped with Toshiba notebooks from 2008 to 2012 due to burn and fire hazards, offering to replace them for free.
Product ranges
Current
Portégé – premium business ultrabooks, formerly subnotebooks (1994–present)
Tecra – business laptops (1994–present)
Satellite Pro – budget-friendly and "business essential" laptops, formerly prosumer (1994–2016, 2020–present)
E series – budget education-focused laptops and convertibles (2021–present)
Domestic Japanese market
T series, T/X series, C series, Y series (home notebooks)
R series, G series, GS series, S series, M series (mobile notebooks)
V series, F series, K series (convertibles)
DynaDesk (desktops)
Former
Libretto – handheld subnotebooks (1996–2002, 2005, 2010)
Qosmio – gaming laptops (2004–2014)
Satellite – consumer laptops (1992–2016)
T series – various portable computers and some desktop computers (1981–1995)
Equium –
Infinia – multimedia desktop computers (1996–1998)
Brezza – desktop computers (–1998; Japanese market)
PV – desktop computers (–1998; Japanese market)
DynaTop – all-in-one computers (Japanese market)
Brand logo history
Gallery of products
See also
References
External links
(redirects based on region)
Sharp Corporation divisions and subsidiaries
Computer companies of Japan
Computer hardware companies
Computer systems companies
Japanese brands
Japanese companies established in 1954
Manufacturing companies based in Tokyo
2018 mergers and acquisitions | Dynabook Inc. | [
"Technology"
] | 2,607 | [
"Computer hardware companies",
"Computer systems companies",
"Computers",
"Computer systems"
] |
57,805,865 | https://en.wikipedia.org/wiki/Floc%20%28biofilm%29 | A floc is a type of microbial aggregate that may be contrasted with biofilms and granules, or else considered a specialized type of biofilm. Flocs appear as cloudy suspensions of cells floating in water, rather than attached to and growing on a surface like most biofilms. The floc typically is held together by a matrix of extracellular polymeric substance (EPS), which may contain variable amounts of polysaccharide, protein, and other biopolymers. The formation and the properties of flocs may affect the performance of industrial water treatment bioreactors such as activated sludge systems where the flocs form a sludge blanket.
Floc formation may benefit the constituent microorganisms in a number of ways, including protection from pH stress, resistance to predation, manipulation of microenvironments, and facilitation of mutualistic relationships in mixed microbial communities.
In general, the mechanisms by which flocculating microbial aggregates hold together are poorly understood. However, work on the activated sludge bacterium Zoogloea resiniphila has shown that PEP-CTERM proteins must be expressed for flocs to form; in their absence, growth is planktonic, even though exopolysaccharide is produced.
See also
Yeast flocculation#Process
References
Bacteriology
Biological matter
Environmental microbiology
Microbiology terms | Floc (biofilm) | [
"Biology",
"Environmental_science"
] | 290 | [
"Environmental microbiology",
"Microbiology terms"
] |
57,806,752 | https://en.wikipedia.org/wiki/Kate%20Marvel | Kate Marvel is a climate scientist and science writer based in New York City. She is a senior scientist at Project Drawdown and was formerly an associate research scientist at NASA Goddard Institute for Space Studies and Columbia Engineering's Department of Applied Physics and Mathematics.
Education and early career
Marvel attended the University of California at Berkeley, where she received her Bachelor of Arts degree in physics and astronomy in 2003. She received her PhD in 2008 in theoretical physics from University of Cambridge as a Gates Scholar and member of Trinity College. Following her PhD, she shifted her focus to climate science and energy as a Postdoctoral Science Fellow at the Center for International Security and Cooperation at Stanford University and at the Carnegie Institution for Science in the Department of Global Ecology. She continued that trajectory as a postdoctoral fellow at the Lawrence Livermore National Laboratory before joining the research faculty at NASA Goddard Institute for Space Studies and Columbia University. Marvel left the Goddard Institute at the end of 2022.
Research
Marvel's current research centers on climate modeling to better predict how much the Earth's temperature will rise in the future. This work led Marvel to investigate the effects of cloud cover on modeling rising temperatures, which has proved an important variable in climate models. Clouds can play a double-edged role in mitigating or amplifying the rate of global warming. On one hand, clouds reflect solar energy back into space, serving to cool the planet; on the other, clouds can trap the planet's heat and radiate back onto Earth's surface. While computer models have difficulty simulating the changing patterns of cloud cover, improved satellite data can begin to fill in the gaps.
Marvel has also documented shifting patterns of soil moisture from samples taken around the world, combining them with computer models and archives of tree rings, to model the effects of greenhouse gas production on patterns of global drought. In this study, which was published in the journal Nature in May 2019, Marvel and her colleagues were able to distinguish the contribution of humans from the effects of natural variation of weather and climate. They found three distinct phases of drought in the data: a clear human fingerprint on levels of drought in the first half of the 20th century, followed by a decrease in drought from 1950 to 1975, followed by a final rise in levels of drought in the 1980s and beyond. The mid-century decrease in drought correlated with the rise in aerosol emissions, which contribute to rising levels of smog that may have reflected and blocked sunlight from reaching the Earth, altering patterns of warming. The subsequent rise of drought correlated with the decrease in global air pollution, which occurred in the 1970s and 1980s due to the passage of legislation like the United States Clean Air Act, suggesting that aerosol pollution may have had a moderating effect on drought.
Marvel has also studied practical limitations in renewable energy as a Postdoctoral Scholar at the Carnegie Institution for Science. At the 2017 TED conference, following computer theorist Danny Hillis's talk proposing geoengineering strategies to mitigate global warming, Marvel was brought on stage to share why she believes geoengineering may cause more harm than good in the long run.
Public engagement
Marvel is a science communicator whose efforts center on communicating about the impacts of climate change. She has been a guest on popular science shows like StarTalk and BRIC Arts Media TV, speaking about her expertise in climate change and the need to act on climate. She has also spoken about her path to becoming a scientist for the science-inspired storytelling series, The Story Collider. Marvel has also appeared on the TED Main Stage, giving a talk at the 2017 TED conference about the double-edged effect clouds can have on global warming.
Marvel's writing has been featured in On Being and Nautilus. She was a regular contributor to Scientific American with her column "Hot Planet", which launched in June 2018 and apparently ended in November 2020; the column focused on climate change, covering the science behind global warming, policies, and human efforts in advocacy. Marvel contributed to All We Can Save, a collection of essays authored by women involved in the climate movement.
References
External links
Kate Marvel on Twitter
Year of birth missing (living people)
Living people
21st-century American women scientists
American climatologists
Women climatologists
American science writers
Alumni of the University of Cambridge
University of California, Berkeley alumni
NASA people
Climate communication
21st-century American scientists
Climate change mitigation researchers
American women science writers
21st-century American non-fiction writers
21st-century American women writers
American women non-fiction writers | Kate Marvel | [
"Engineering"
] | 912 | [
"Geoengineering",
"Climate change mitigation researchers"
] |
57,810,908 | https://en.wikipedia.org/wiki/Data%20exfiltration | Data exfiltration occurs when malware and/or a malicious actor carries out an unauthorized data transfer from a computer. It is also commonly called data extrusion or data exportation. Data exfiltration is also considered a form of data theft. Since the year 2000, a number of data exfiltration incidents have severely damaged consumer confidence, corporate valuations and the intellectual property of businesses, as well as the national security of governments across the world.
Types of exfiltrated data
In some data exfiltration scenarios, a large amount of aggregated data may be exfiltrated. However, in these and other scenarios, it is likely that certain types of data may be targeted. Types of data that are targeted includes:
Usernames, associated passwords, and other system authentication related information
Information associated with strategic decisions
Cryptographic keys
Personal financial information
Social security numbers and other personally identifiable information (PII)
Mailing addresses
United States National Security Agency hacking tools
Techniques
Several techniques have been used by malicious actors to carry out data exfiltration. The technique chosen depends on a number of factors. If the attacker has or can easily gain physical or privileged remote access to the server containing the data they wish to exfiltrate, their chances of success are much better than otherwise. For example, it would be relatively easy for a system administrator to plant, and in turn, execute malware that transmits data to an external command and control server without getting caught. Similarly, if one can gain physical administrative access, they can potentially steal the server holding the target data, or more realistically, transfer data from the server to a DVD or USB flash drive. In many cases, malicious actors cannot gain physical access to the physical systems holding target data. In these situations, they may compromise user accounts on remote access applications using manufacturer default or weak passwords. In 2009, after analyzing 200 data exfiltration attacks that took place in 24 countries, SpiderLabs discovered a ninety percent success rate in compromising user accounts on remote access applications without requiring brute-force attacks. Once a malicious actor gains this level of access, they may transfer target data elsewhere.
Additionally, there are more sophisticated forms of data exfiltration. Various techniques can be used to conceal detection by network defenses. For example, Cross Site Scripting (XSS) can be used to exploit vulnerabilities in web applications to provide a malicious actor with sensitive data. A timing channel can also be used to send data a few packets at a time at specified intervals in a way that is even more difficult for network defenses to detect and prevent.
Preventive measures
A number of things can be done to help defend a network against data exfiltration. Measures in three main categories may be the most effective:
Preventive
Detective
Investigative
One example of detective measures is to implement intrusion detection and prevention systems and regularly monitor network services to ensure that only known acceptable services are running at any given time. If suspicious network services are running, investigate and take the appropriate measures immediately. Preventive measures include the implementation and maintenance of access controls, deception techniques, and encryption of data in process, in transit, and at rest. Investigative measures include various forensics actions and counter intelligence operations.
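As a concrete, if simplistic, illustration of the monitoring advice above, the Python sketch below scans a list of outbound flow records (the record format and thresholds are invented for this example) and flags destinations that receive an unusually large volume of data, or transfers spaced at suspiciously regular intervals, a crude indicator of the timing channels described earlier. A real deployment would rely on an intrusion detection or data loss prevention product rather than an ad hoc script.

```python
from collections import defaultdict
from statistics import pstdev

# Hypothetical flow records: (timestamp_seconds, destination, bytes_sent).
flows = [
    (0, "203.0.113.7", 4096), (60, "203.0.113.7", 4096),
    (120, "203.0.113.7", 4096), (180, "203.0.113.7", 4096),
    (42, "198.51.100.9", 1_500_000_000),
]

VOLUME_LIMIT = 500_000_000   # bytes per destination per observation window
JITTER_LIMIT = 1.0           # seconds; near-zero jitter looks automated

by_dest = defaultdict(list)
for ts, dest, nbytes in flows:
    by_dest[dest].append((ts, nbytes))

for dest, records in by_dest.items():
    total = sum(n for _, n in records)
    times = sorted(ts for ts, _ in records)
    gaps = [b - a for a, b in zip(times, times[1:])]
    if total > VOLUME_LIMIT:
        print(f"{dest}: {total} bytes sent -- investigate possible bulk exfiltration")
    if len(gaps) >= 3 and pstdev(gaps) < JITTER_LIMIT:
        print(f"{dest}: transfers every ~{gaps[0]}s -- possible timing channel")
```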
References
External sources
http://www.ists.dartmouth.edu/library/293.pdf
https://www.scmagazine.com/data-exfiltration-defense/article/536744/
Data exfiltration blogs, news and reports
Data security
Theft | Data exfiltration | [
"Engineering"
] | 702 | [
"Cybersecurity engineering",
"Data security"
] |
57,811,456 | https://en.wikipedia.org/wiki/Aflatoxin%20B1%20exo-8%2C9-epoxide | Aflatoxin B1 exo-8,9-epoxide is a toxic metabolite of aflatoxin B1. It is formed by the action of cytochrome P450 enzymes in the liver.
In the liver, aflatoxin B1 is metabolized to aflatoxin B1 exo-8,9-epoxide by the cytochrome P450 enzymes. The resulting epoxide can react with guanine in the DNA to cause DNA damage.
See also
Aflatoxin B1
Cytochrome P450
References
Epoxides
Aflatoxins
Human metabolites | Aflatoxin B1 exo-8,9-epoxide | [
"Chemistry"
] | 145 | [] |
57,813,124 | https://en.wikipedia.org/wiki/Alberto%20Sirlin | Alberto Sirlin (born 25 November 1930, in Buenos Aires, died February 23, 2022, in New York City) was an Argentine theoretical physicist, specializing in particle physics.
Biography
Sirlin studied from 1948 to 1952 at the University of Buenos Aires, where he received his doctorate in 1953 under the supervision of Richard Gans. In 1953–1954 Sirlin was a fellow at the Centro Brasileiro de Pesquisas Físicas in Rio de Janeiro, where he took several graduate courses, including one taught by Richard Feynman. Sirlin was in 1954–1955 at the University of California at Los Angeles (UCLA) and in 1955–1957 at the Cornell University, where in 1958 he received a doctorate under the supervision of Tōichirō Kinoshita. From 1957 to 1959 he was a research assistant at Columbia University. At New York University he was from 1959 to 1961 an assistant professor, from 1961 to 1968 an associate professor, and from 1968 a full professor, retiring in 2008.
Sirlin did research in the 1950s on radiative corrections in the theory of muon decay, i.e. higher-order quantum electrodynamics (QED) corrections to allowed weak decays. In 1960 Sirlin and Ralph E. Behrends discovered the nonrenormalization theorem for partially conserved vector currents in the SU(2) theory of weak interactions and suggested the theorem's generalization to higher symmetry. Their theorem plays an important role in experimentally verifying predictions from the Cabibbo-Kobayashi-Maskawa matrix. Beginning in the 1970s Sirlin did research with his student William J. Marciano on higher-order corrections in leptonic decays. With Tsung-Dao Lee and Richard M. Friedberg, Sirlin did research on non-topological soliton solutions in quantum field theory.
Sirlin was elected a Fellow of the American Physical Society in 1971. He was in the academic year 1983–1984 a Guggenheim Fellow and in 1997 received the Alexander von Humboldt Award. In 2002 Sirlin and William J. Marciano received the Sakurai Prize for their collaborative research on the theory of electroweak interactions.
Selected publications
with M. A. B. Beg: Gauge theories of weak interactions II. Physics Reports, Vol. 88, 1982, pp. 1–90
Current algebra formulation of radiative corrections in gauge theories and the universality of weak interactions. Reviews of Modern Physics, Vol. 50, 1978, pp. 573–605
Radiative corrections in the SU(2)L × U(1) theory: A simple renormalization framework. Physical Review D, Vol. 22, 1980, pp. 971–981
The Standard Electroweak Model Circa 1994: A Brief Overview. Comments on Nuclear and Particle Physics Vol. 21, 1994, pp. 287–322.
References
External links
1930 births
2022 deaths
University of Buenos Aires alumni
New York University faculty
J. J. Sakurai Prize for Theoretical Particle Physics recipients
Theoretical physicists
Argentine physicists
Cornell University alumni
Fellows of the American Physical Society
Scientists from Buenos Aires | Alberto Sirlin | [
"Physics"
] | 626 | [
"Theoretical physics",
"Theoretical physicists"
] |