Continuum robot

A continuum robot is a type of robot characterised by an infinite number of degrees of freedom and joints. These characteristics allow continuum manipulators to adjust and modify their shape at any point along their length, granting them the possibility to work in confined spaces and complex environments where standard rigid-link robots cannot operate. In particular, a continuum robot can be defined as an actuatable structure whose constitutive material forms curves with continuous tangent vectors. This fundamental definition distinguishes continuum robots from snake-arm robots and hyper-redundant manipulators: the rigid links and joints of the latter allow them to only approximate curves with continuous tangent vectors.
The design of continuum robots is bioinspired, as the intent is to resemble biological trunks, snakes and tentacles. Several concepts of continuum robots have been commercialised and can be found in many different domains of application, ranging from the medical field to undersea exploration.
Classification
Continuum robots can be categorised according to two main criteria: structure and actuation.
Structure
The main characteristic of the design of continuum robots is the presence of a continuously curving core structure, named the backbone, whose shape can be actuated. The backbone must also be compliant, meaning that it yields smoothly to external loads.
According to the design principles chosen for the continuum manipulator, we can distinguish between:
single-backbone: these continuum manipulators have one central elastic backbone through which actuation/transmission elements can run.
multi-backbone: the structure of these continuum robots has two or more elastic elements (either rods or tubes) parallel to each other and constrained with one another in some way.
concentric-tube: the backbone is made of concentric tubes that are free to rotate and translate between each other, depending on the actuation happening at the base of the robot.
Actuation
The actuation strategy of continuum manipulators can be distinguished between extrinsic or intrinsic actuation, depending on where the actuation happens:
extrinsic actuation: the actuation happens outside the main structure of the robot and the forces are transmitted via mechanical transmission; among these techniques, there are cable/tendon driven actuators and multi-backbone strategies.
intrinsic actuation: the actuation mechanism operates within the structure of the robot; these strategies include pneumatic or hydraulic chambers and the shape memory effect.
Advantages
The particular design of continuum robots offers several advantages over rigid-link robots. First of all, as noted above, continuum robots can more easily operate in environments that require a high level of dexterity, adaptability and flexibility. Moreover, the simplicity of their structure makes continuum robots more amenable to miniaturisation. The rise of continuum robots has also paved the way for the development of soft continuum manipulators. These manipulators are made of highly compliant materials that are flexible and can adapt and deform according to the surrounding environment. The "softness" of their material grants higher safety in human-robot interactions.
Disadvantages
The particular design of continuum robots also introduces many challenges. To properly and safely use continuum robots, it is crucial to have an accurate force and shape sensing system. Traditionally, this is done using cameras, which are not suitable for some applications of continuum robots (e.g. minimally invasive surgery), or using electromagnetic sensors, which are however disturbed by the presence of magnetic objects in the environment. To solve this issue, in recent years fiber-Bragg-grating sensors have been proposed as a possible alternative and have shown promising results. It should also be noted that while the mechanical properties of rigid-link robots are fully understood, the behaviour and properties of continuum robots are still a subject of study and debate. This poses new challenges in developing accurate models and control algorithms for this class of robots.
Modelling
Creating an accurate model that can predict the shape of a continuum robot makes it possible to properly control the robot's shape. There are three main approaches to model continuum robots:
Cosserat rod theory: this approach is an exact solution to the statics of a continuum robot, as it is not subject to any simplifying assumption. It solves a set of equilibrium equations relating the position, orientation, internal force and torque of the robot. This method must be solved numerically and is therefore computationally expensive, due to its high complexity.
Constant curvature: this technique assumes the backbone to be made of a series of mutually tangent sections that can be approximated as arcs with constant curvature, and is also known as piecewise constant curvature. The assumption can be applied to the entire backbone or to its subsegments. This model has shown promising results; however, it must be taken into account that the segments or subsegments of the backbone may not comply with the constant curvature assumption, and therefore the model's behaviour may not entirely reflect the behaviour of the robot.
Rigid-link model: this approach is based on the assumption that the continuum robot can be divided into small segments with rigid links. This is a strong assumption: if the number of segments is too low, the model hardly behaves like the continuum robot, while increasing the number of segments increases the number of variables, and thus the complexity. Despite this limitation, rigid-link modelling allows the use of the standard control techniques that are well established for rigid-link robots. It has been shown that this model can be coupled with shape and force sensing to mitigate its inaccuracy, with promising results.
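The constant-curvature model above lends itself to a compact forward-kinematics computation: each segment is an arc described by a curvature, a bending-plane angle and an arc length, and the tip pose is obtained by chaining homogeneous transforms. The sketch below is a minimal illustration under that parameterisation (function names and the two-segment example are made up, and orientation conventions vary between papers):

```python
import numpy as np

def pcc_segment_transform(kappa, phi, length):
    """Homogeneous transform of one constant-curvature segment.

    kappa: curvature (1/m), phi: bending-plane angle (rad),
    length: arc length (m). Straight segment when kappa == 0.
    """
    if abs(kappa) < 1e-9:  # straight-line limit
        T = np.eye(4)
        T[2, 3] = length
        return T
    theta = kappa * length  # total bending angle of the arc
    r = 1.0 / kappa
    # Arc bending in the x-z plane: rotation about y plus arc translation.
    T_arc = np.array([
        [np.cos(theta), 0.0, np.sin(theta), r * (1 - np.cos(theta))],
        [0.0,           1.0, 0.0,           0.0],
        [-np.sin(theta), 0.0, np.cos(theta), r * np.sin(theta)],
        [0.0,           0.0, 0.0,           1.0],
    ])
    # Rotate the bending plane by phi about the local z axis.
    Rz = np.array([
        [np.cos(phi), -np.sin(phi), 0.0, 0.0],
        [np.sin(phi),  np.cos(phi), 0.0, 0.0],
        [0.0,          0.0,         1.0, 0.0],
        [0.0,          0.0,         0.0, 1.0],
    ])
    return Rz @ T_arc

# Tip pose of a hypothetical two-segment manipulator: chain the transforms.
T_tip = pcc_segment_transform(5.0, 0.0, 0.1) @ pcc_segment_transform(8.0, np.pi / 2, 0.08)
print(T_tip[:3, 3])  # tip position
```

With more segments, the same chaining extends naturally; the model stays closed-form, which is the main computational appeal over Cosserat rod integration.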
Sensing
To develop accurate control algorithms, the presented modelling techniques must be complemented with real-time shape sensing. The following options are currently available:
Electromagnetic (EM) sensing: shape is reconstructed thanks to the mutual induction between a magnetic field generator and a magnetic field sensor. The most common external EM tracking system is the commercially available NDI Aurora: small sensors can be placed on the robot and their position is tracked in an externally generated magnetic field. The validity of this method has been extensively assessed; however, its performance is hindered by the limited workspace, whose size depends on the magnetic field. An alternative is to embed the sensors internally in the continuum robot, combining magnets with Hall effect sensors: the magnetic field is measured at the Hall effect sensors in order to estimate the deflection of the robot. However, it has been observed that the higher the bending of the manipulator, the higher the estimation error, due to crosstalk between sensors and magnets.
Optical sensing: fiber Bragg grating sensors incorporated in an optical fiber can be embedded into the backbone of the continuum robot to estimate its shape; these sensors can only reflect a small range of the input light spectrum depending on their strain; therefore, by measuring the strain on each sensor it is possible to obtain the shape of the robot. This type of sensor is however expensive and is more prone to breaking in case of excessive strain, and this can happen in robots that can perform high deflections.
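The strain-to-shape principle behind fiber-Bragg-grating sensing can be illustrated with a small least-squares sketch. It assumes the common Euler-Bernoulli bending model in which the strain of a fibre placed at radius r from the neutral axis is proportional to the local curvature components; all names and numerical values below are illustrative, not taken from a specific sensor system:

```python
import numpy as np

def curvature_from_fbg(strains, radial_distance, fiber_angles):
    """Least-squares curvature estimate at one cross-section.

    Assumed bending model: strain_i = -r * (kx*cos(a_i) + ky*sin(a_i)),
    with fibres glued at radius r around the backbone at angles a_i.
    """
    A = -radial_distance * np.column_stack(
        [np.cos(fiber_angles), np.sin(fiber_angles)])
    kxy, *_ = np.linalg.lstsq(A, strains, rcond=None)
    return kxy  # (kx, ky) curvature components, 1/m

# Synthetic check: three fibres at 120 deg, 0.3 mm from the neutral axis.
angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
true_k = np.array([4.0, -2.0])
strains = -3e-4 * (true_k[0] * np.cos(angles) + true_k[1] * np.sin(angles))
print(curvature_from_fbg(strains, 3e-4, angles))  # recovers [4.0, -2.0]
```

Repeating this estimate at several cross-sections along the fibre and integrating the curvatures along the arc length yields the full robot shape.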
Control strategies
Control strategies can be divided into static and dynamic: the former are based on the steady-state assumption, while the latter also consider the dynamic behaviour of the continuum robot. We can also differentiate between model-based controllers, which depend on a model of the robot, and model-free controllers, which learn the robot's behaviour from data.
Model-based static controllers: they rely on one of the modelling approaches presented above; once the model is defined, the kinematics must be inverted to obtain the desired actuator or configuration space variables. There are several ways to do this, like differential inverse kinematics, direct inversion or optimization.
Model-free static controllers: these approaches learn the inverse or forward kinematic representation of the continuum robot directly from collected data via machine learning techniques (e.g. regression methods and neural networks), and are also known as data-driven methods. Even though these controllers have the advantage of not requiring an accurate model of the continuum robot, they generally perform worse than their model-based counterparts.
Model-based dynamic controllers: they require the formulation of a kinematic model and an associated dynamic formulation. They are still at an early stage, as they require high computational power and high-dimensional sensory feedback. With improvements in computational power and sensing capabilities, they could become crucial in industrial applications of continuum robots, where time and cost are relevant along with accuracy.
Model-free dynamic controllers: they are still a relatively unexplored approach. Some works that propose machine learning techniques to learn the dynamic behaviour of continuum robots have been presented, but their performance is limited by high training time and instability of the machine learning model.
Hybrid approaches, that combine model-free and model-based controllers, can also present a valid alternative.
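The differential inverse kinematics used by model-based static controllers can be sketched as a resolved-rate loop: compute the task-space error, invert the Jacobian (here a pseudoinverse with a numerical Jacobian), and step the configuration variables towards the target. The planar single-segment example and all parameter values below are made up for illustration:

```python
import numpy as np

def resolved_rate_step(q, x_desired, forward_kin, jacobian, gain=1.0):
    """One step of differential inverse kinematics (resolved-rate control)."""
    error = x_desired - forward_kin(q)
    J = jacobian(q)
    dq = gain * np.linalg.pinv(J) @ error  # damped variants add a regulariser
    return q + dq

# Toy example: tip position of a planar constant-curvature segment
# parameterised by q = [kappa, length] (hypothetical illustration).
def fk(q):
    kappa, L = q
    theta = kappa * L
    return np.array([(1 - np.cos(theta)) / kappa, np.sin(theta) / kappa])

def num_jac(q, eps=1e-6):
    """Central-difference numerical Jacobian of fk."""
    J = np.zeros((2, 2))
    for i in range(2):
        dq = np.zeros(2)
        dq[i] = eps
        J[:, i] = (fk(q + dq) - fk(q - dq)) / (2 * eps)
    return J

q = np.array([2.0, 0.15])
target = np.array([0.03, 0.14])
for _ in range(50):
    q = resolved_rate_step(q, target, fk, num_jac, gain=0.5)
print(fk(q))  # converges towards the target position
```

In practice the Jacobian would come from the chosen model (constant curvature, Cosserat, or rigid-link), and the same loop maps task-space commands to actuator or configuration space variables.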
Applications
Continuum robots have been applied in many different fields.
Medical
Continuum robots have been widely applied in the medical field, in particular for minimally invasive surgery. For example, Ion by Intuitive is a robotic-assisted endoluminal platform for minimally invasive peripheral lung biopsy that can reach nodules located in peripheral areas of the lungs inaccessible to standard instruments, enabling early-stage diagnosis of cancer.
Hazardous places
Continuum robots offer the possibility of completing tasks in hazardous and hostile environments. For example, a quadruped robot with continuum limbs has been developed: it can walk, crawl and trot, and can use whole-arm grasping with its limbs to negotiate difficult obstacles.
Space
NASA has developed a continuum manipulator, named Tendril, that can extend into crevasses and under thermal blankets to access areas that would be otherwise inaccessible with conventional means.
Subsea
The AMADEUS project developed a dextrous underwater robot for grasping and manipulation tasks, while the FLAPS project created propulsion systems that replicate the mechanisms of fish swimming.
See also
Soft robotics
Biorobotics
References
External links
Continuum robots - a state of the art
Robotics
Robot kinematics
Robot control
Sorption enhanced water gas shift

Sorption enhanced water gas shift (SEWGS) is a technology that combines a pre-combustion carbon capture process with the water gas shift reaction (WGS) in order to produce a hydrogen-rich stream from the syngas fed to the SEWGS reactor.
The water gas shift reaction converts the carbon monoxide into carbon dioxide, according to the following chemical reaction:
CO + H2O ⇌ CO2 + H2
Carbon dioxide is meanwhile captured and removed through an adsorption process.
The in-situ CO2 adsorption and removal shifts the water gas shift reaction to the right-hand side, thereby completely converting the CO and maximizing the production of high pressure hydrogen.
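The equilibrium shift produced by in-situ CO2 removal can be illustrated numerically. The sketch below uses Moe's empirical correlation for the WGS equilibrium constant (an assumption of this example, not stated in the text) and solves the equilibrium by bisection, with a parameter for the fraction of produced CO2 removed by the sorbent:

```python
import math

def wgs_keq(T):
    """WGS equilibrium constant from Moe's empirical correlation (T in K)."""
    return math.exp(4577.8 / T - 4.33)

def co_conversion(T, n_co=1.0, n_h2o=2.0, n_co2=0.0, n_h2=0.0, co2_removed=0.0):
    """Equilibrium CO conversion; co2_removed is the fraction of produced
    CO2 taken out by the sorbent (0 = plain WGS, -> 1 = ideal SEWGS)."""
    K = wgs_keq(T)
    def residual(x):  # x = moles of CO converted; = 0 at equilibrium
        co, h2o = n_co - x, n_h2o - x
        co2 = n_co2 + x * (1 - co2_removed)
        h2 = n_h2 + x
        return co2 * h2 - K * co * h2o
    lo, hi = 1e-9, min(n_co, n_h2o) - 1e-9
    for _ in range(100):  # bisection on the monotonic residual
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0:
            hi = mid
        else:
            lo = mid
    return mid / n_co

print(round(co_conversion(700.0), 3))                   # plain WGS
print(round(co_conversion(700.0, co2_removed=0.9), 3))  # sorption-enhanced
```

With 90% of the CO2 removed, the computed conversion rises from roughly 92% to about 99%, which is the Le Chatelier effect the SEWGS process exploits.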
Since the beginning of the second decade of the 21st century this technology has been gaining attention, as it shows advantages over conventional carbon capture technologies and because hydrogen is considered the energy carrier of the future.
Process
The SEWGS technology is the combination of the water gas shift reaction with the adsorption of carbon dioxide on a solid material. Typical temperature and pressure ranges are 350-550 °C and 20-30 bar. The inlet gas of SEWGS reactors is typically a mixture of hydrogen, CO and CO2, where steam is added to convert CO into CO2.
The conversion of carbon monoxide into carbon dioxide is enhanced by shifting the reaction equilibrium through the adsorption and removal of CO2, the latter being one of the reaction products.
The SEWGS technology is based on a multi-bed pressure swing adsorption (PSA) unit in which the vessels are filled with the water gas shift catalyst and the CO2 adsorbent material. Each vessel is subjected to a series of processes. In the sorption/reaction step, a high pressure hydrogen-rich stream is produced, while during sorbent regeneration a CO2 rich stream is generated.
The process starts by feeding syngas to the SEWGS reactor, where CO2 is adsorbed and a hydrogen-rich stream is produced. The regeneration of the first vessel starts when the sorbent material is saturated with CO2, at which point the feed stream is directed to another vessel. After regeneration, the vessels are re-pressurized. A multibed configuration is necessary to guarantee continuous production of hydrogen and carbon dioxide. The optimal number of beds usually varies between 6 and 8.
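The multibed idea can be sketched as a staggered round-robin schedule. This is a deliberate simplification (real SEWGS cycles use 6-8 beds, more steps, and unequal step durations), meant only to show why staggering the phases keeps both product streams continuous:

```python
PHASES = ["feed/reaction", "depressurise", "purge/CO2 release", "repressurise"]

def bed_schedule(n_beds, n_steps):
    """Staggered schedule: bed b runs phase (t + b) mod n_phases at step t.

    With n_beds == len(PHASES), exactly one bed is feeding at any time,
    so H2 production (and CO2 release) never stops.
    """
    return [[PHASES[(t + b) % len(PHASES)] for b in range(n_beds)]
            for t in range(n_steps)]

for step in bed_schedule(4, 4):
    print(step)
```

Each printed row is one time step; reading down a column shows a single bed cycling through sorption, regeneration and re-pressurization.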
Water gas shift reaction
The water gas shift reaction is the reaction between carbon monoxide and steam to form hydrogen and carbon dioxide:
CO + H2O ⇌ CO2 + H2
This reaction was discovered by Felice Fontana and is nowadays adopted in a wide range of industrial applications, such as in the production of ammonia, hydrocarbons, methanol, hydrogen and other chemicals. In industrial practice, two water gas shift sections are necessary, one at high temperature and one at low temperature, with intermediate cooling.
Adsorption process
Adsorption is the phenomenon of sorption of gases or solutes on solid or liquid surfaces. Adsorption on a solid surface occurs when substances colliding with the surface form bonds with the atoms or molecules of the solid. There are two main adsorption processes: physical adsorption and chemical adsorption. The former results from the interaction of intermolecular forces; since weak bonds are formed, the adsorbed substance can be easily separated. In chemical adsorption, chemical bonds are formed, meaning that the heat of adsorption exchanged and the activation energy are larger than in physical adsorption. The two processes often take place simultaneously. The adsorbent material is then regenerated through desorption, the opposite phenomenon of sorption, which releases the captured substance from the adsorbent material.
In SEWGS technology the pressure swing adsorption (PSA) process is employed to regenerate the adsorbent material and produce a CO2 rich stream. The process is similar to the one conventionally used for air separation, hydrogen purification and other gas separations.
Conventional technology for carbon dioxide removal
The industrially established technology for carbon dioxide removal is amine washing, based on chemical absorption of carbon dioxide. In chemical absorption, reactions between the absorbed substance (CO2) and the solvent occur and produce a rich liquid. The rich liquid then enters the desorption column, where carbon dioxide is separated from the sorbent, which is reused for CO2 absorption. Mono-ethanolamine (C2H7NO), diethanolamine (C4H11NO2), triethanolamine (C6H15NO3) and methyl-diethanolamine (C5H13NO2) are commonly used for the removal of CO2.
Advantages of SEWGS over conventional technologies
SEWGS technology shows some advantages in comparison with traditional technologies adoptable for pre-combustion removal of carbon dioxide. Traditional technologies require employing two water gas shift reactors (a high temperature and a low temperature stage) in order to get high conversions of carbon monoxide into carbon dioxide with an intermediate cooling stage between the two reactors. In addition, another cooling stage is necessary at the outlet of the second WGS reactor for the CO2 capture with a solvent. Furthermore, the hydrogen rich stream at the outlet of SEWGS section can be directly fed into a gas turbine, while the hydrogen rich stream produced by the traditional route needs a further heating stage.
Applications
The importance of this technology is directly related to the problem of global warming and the mitigation of carbon dioxide emissions. In the hydrogen economy, hydrogen is considered a clean energy carrier with high energy content and is expected to replace fossil fuels and other energy sources associated with pollution issues. For these reasons, since the beginning of the second decade of the 21st century this technology has attracted public interest.
The SEWGS technology enables the production of high-purity hydrogen without the need for further purification processes. It furthermore finds potential application in a wide range of industrial processes, such as the production of electricity from fossil fuels or the iron and steel industry.
The integration of the SEWGS process in natural gas combined cycle (NGCC) and integrated gasification combined cycle (IGCC) power plants has been investigated as a possible way to produce electricity from natural gas or coal with almost-zero emissions. In NGCC power plants the carbon capture ratio achieved is around 95% with a CO2 purity over 99%, while in IGCC power plants the carbon capture ratio is around 90% with a CO2 purity of 99%.
The investigation of SEWGS integration in steel mills started during the second decade of the 21st century. The goal is to reduce the carbon footprint of this industrial process, which is responsible for 6% of total global CO2 emissions and 16% of the emissions generated by industrial processes.
The captured CO2 can then be stored or used for the production of high-value chemical products.
Sorbents for SEWGS process
The reactor vessels are loaded with sorbent pellets. The sorbent must have the following features:
high CO2 capacity and selectivity over H2
low H2O adsorption
low specific cost
mechanical stability under pressure and temperature variation
chemical stability in the presence of impurities
easy regeneration by steam
Different sorbent materials have been investigated for use in SEWGS. Some examples include:
K2CO3-promoted hydrotalcite
potassium promoted alumina
Na–Mg double salt
CaO
Potassium promoted hydrotalcite is the most studied sorbent material for SEWGS application. Its principal features are listed below:
low cost
sufficiently high CO2 cyclic working capacity
fast adsorption kinetics
good mechanical stability
See also
Water-gas shift reaction
Adsorption
Carbon capture and storage
Carbon capture and utilization
References
External links
Projects in which SEWGS technology is investigated:
Web page of STEPWISE project
Web page of C4U project
Chemical processes
Hydrogen production
Industrial gases
Lithium aluminium germanium phosphate

Lithium aluminium germanium phosphate, typically known by the acronyms LAGP or LAGPO, is an inorganic ceramic solid material whose general formula is Li1+xAlxGe2-x(PO4)3. LAGP belongs to the NASICON (Sodium Super Ionic Conductors) family of solid conductors and has been applied as a solid electrolyte in all-solid-state lithium-ion batteries. Typical values of ionic conductivity in LAGP at room temperature are in the range of 10⁻⁴ - 10⁻³ S/cm, even if the actual value of conductivity is strongly affected by stoichiometry, microstructure, and synthesis conditions. Compared to lithium aluminium titanium phosphate (LATP), which is another phosphate-based lithium solid conductor, the absence of titanium in LAGP improves its stability towards lithium metal. In addition, phosphate-based solid electrolytes have superior stability against moisture and oxygen compared to sulfide-based electrolytes like Li10GeP2S12 (LGPS) and can be handled safely in air, thus simplifying the manufacturing process.
Since the best performance is encountered when the stoichiometric value of x is 0.5, the acronym LAGP usually indicates the particular composition Li1.5Al0.5Ge1.5(PO4)3, which is also the material typically used in battery applications.
Properties
Crystal structure
Lithium-containing NASICON-type crystals are described by the general formula LiM2(PO4)3, in which M stands for a metal or a metalloid (Ti, Zr, Hf, Sn, Ge), and display a complex three-dimensional network of corner-sharing octahedra and phosphate tetrahedra. Lithium ions are hosted in the voids in between, which can be subdivided into three kinds of sites:
Li(1) 6-fold coordinated sites at Wyckoff 6b position;
Li(2) sites at Wyckoff 18e position;
Li(3) sites at Wyckoff 36f position.
In order to promote lithium conductivity at sufficiently high rates, Li(1) sites should be fully occupied and Li(2) sites fully empty. Li(3) sites are located between Li(1) and Li(2) sites and are occupied only when large tetravalent cations are present in the structure, such as Zr, Hf, and Sn. If some Ge cations in the LiGe2(PO4)3 (LGP) structure are partially replaced by Al cations, the LAGP material is obtained, with general formula Li1+xAlxGe2-x(PO4)3. The single-phase NASICON structure is stable for x between 0.1 and 0.6; when this limit is exceeded, a solid solution is no longer possible and secondary phases tend to form. Although Ge and Al cations have very similar ionic radii (0.53 Å for Ge vs. 0.535 Å for Al), cationic substitution leads to compositional disorder and promotes the incorporation of a larger amount of lithium ions to achieve electrical neutrality. The additional lithium ions can be incorporated in either Li(2) or Li(3) empty sites.
In the available scientific literature there is no unique description of the sites available for lithium ions and of their atomic coordination, nor of the sites directly involved in the conduction mechanism. For example, in some cases only two available sites, namely Li(1) and Li(2), are mentioned, while the Li(3) site is neither occupied nor involved in the conduction process. This results in the lack of an unambiguous description of the local crystal structure of LAGP, especially concerning the arrangement of lithium ions and the site occupancy when germanium is partially replaced by aluminium.
LAGP displays a rhombohedral unit cell with space group R3̄c.
Vibrational properties
Factor group analysis
LAGP crystals belong to the space group R3̄c (Schoenflies D3d6). The factor group analysis of NASICON-type materials with general formula M(I)M(IV)2(PO4)3 (where M(I) stands for a monovalent metal ion like Na+, Li+ or K+, and M(IV) represents a tetravalent cation such as Ti4+, Ge4+, Sn4+, Zr4+ or Hf4+) is usually performed assuming the separation between internal vibrational modes (i.e. modes originating in the PO4 units) and external modes (i.e. modes arising from the translations of the M(I) and M(IV) cations, from PO4 translations, and from PO4 librations).
Focusing on internal modes only, the factor group analysis for the R3̄c space group identifies 14 Raman-active modes for the PO4 units: 6 of these correspond to stretching vibrations and 8 to bending vibrations.
On the contrary, the analysis of external modes leads to many possible vibrations: since the number of irreducible representations within the rhombohedral R3̄c space group is restricted, interactions among different modes can be expected and a clear assignment or discrimination becomes unfeasible.
Raman spectra
The vibrational properties of LAGP can be directly probed using Raman spectroscopy. LAGP shows the Raman features characteristic of all NASICON-type materials, most of which are caused by the vibrational motions of the PO4 units. The main spectral regions in a Raman spectrum of NASICON-type materials are summarized in the following table.
The Raman spectra of LAGP are usually characterized by broad peaks, even when the material is in its crystalline form. Indeed, both the presence of aluminium ions in place of germanium ions and the extra lithium ions introduce structural and compositional disorder in the sublattice, resulting in peak broadening.
Transport properties
LAGP is a solid ionic conductor and features the two fundamental properties required of a solid-state electrolyte in lithium-ion batteries, namely a sufficiently high ionic conductivity and a negligible electronic conductivity. Indeed, during battery operation, LAGP should guarantee the easy and fast motion of lithium ions between cathode and anode, while preventing the transfer of electrons.
As stated in the description of the crystal structure, three kinds of sites are available for hosting lithium ions in the LAGP NASICON structure, i.e. the Li(1) sites, the Li(2) sites and the Li(3) sites. Ionic conduction occurs because of hopping of lithium ions from Li(1) to Li(2) sites or across two Li(3) sites. The bottleneck to ionic motion is represented by a triangular window delimited by three oxygen atoms between Li(1) and Li(2) sites.
The ionic conductivity in LAGP follows the usual dependence on temperature expressed by an Arrhenius-type equation, which is typical of most solid-state ionic conductors:

σ = σ0 exp(−Ea / (kB T))

where
σ0 is the pre-exponential factor,
T is the absolute temperature,
Ea is the activation energy for ionic transport,
kB is the Boltzmann constant.
Typical values for the activation energy of bulk LAGP materials are in the range of 0.35 - 0.41 eV. Similarly, the room-temperature ionic conductivity is closely related to the synthesis conditions and to the actual material microstructure; the conductivity values reported in the scientific literature therefore span several orders of magnitude, up to 10 mS/cm, the highest value close to room temperature reported so far. Compared to LGP, the room-temperature ionic conductivity of LAGP is increased by 3-4 orders of magnitude upon partial substitution of Ge by Al. Aluminium ions have a lower charge than Ge ions, and additional lithium is incorporated in the NASICON structure to maintain charge balance, resulting in an enlarged number of charge carriers. The beneficial effect of aluminium is maximized for x around 0.4 - 0.5; for larger Al content, the single-phase NASICON structure is not stable and secondary phases appear, mainly AlPO4 and GeO2. Secondary phases are typically nonconductive; however, small and controlled amounts of AlPO4 exert a densification effect which positively affects the overall ionic conductivity of the material.
The prefactor σ0 in the Arrhenius equation can in turn be written as a function of fundamental constants and conduction parameters:

σ0 = n (Ze)² v λ / (3 kB T)

where
Z is the ion valence,
e is the elementary charge,
T is the absolute temperature,
kB is the Boltzmann constant,
n is the concentration of charge carriers,
v is the average velocity of the ions,
λ is the mean free path.
The prefactor is directly proportional to the concentration of mobile lithium-ion carriers, which increases with the aluminium content of the material. As a result, since the dependence of the activation energy on the aluminium content is negligible, the ionic conductivity is expected to increase as Ge is increasingly substituted by Al, until secondary phases are formed. The introduction of aluminium also reduces the grain boundary resistivity of the material, positively impacting the total (bulk crystal + grain boundary) ionic conductivity of the LAGP material.
As expected for solid ionic conductors, the ionic conductivity of LAGP increases with increasing temperature.
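The Arrhenius law above can be checked numerically. The sketch below uses a hypothetical activation energy inside the 0.35 - 0.41 eV range quoted above and an arbitrary prefactor (both illustrative, not measured values), and computes how much the conductivity grows between room temperature and 100 °C:

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_sigma(T, sigma0, Ea):
    """Ionic conductivity from the Arrhenius-type law, sigma0/T form
    (sigma0 in S*K/cm, Ea in eV, T in K)."""
    return (sigma0 / T) * math.exp(-Ea / (K_B * T))

# Hypothetical LAGP-like parameters.
Ea = 0.38       # eV, within the quoted activation-energy range
sigma0 = 1.0e3  # arbitrary prefactor for illustration
s_rt = arrhenius_sigma(298.0, sigma0, Ea)
s_hot = arrhenius_sigma(373.0, sigma0, Ea)
print(f"conductivity ratio 100C/25C: {s_hot / s_rt:.1f}")
```

With Ea = 0.38 eV the ratio comes out around fifteen, i.e. roughly one order of magnitude, consistent with the temperature dependence discussed later in the thermal properties section.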
Regarding the electronic conductivity of LAGP, it should be as low as possible to prevent electrical short circuits between anode and cathode. As for the ionic conductivity, the exact stoichiometry and microstructure, strongly connected to the synthesis method, influence the electronic conductivity, even if the reported values are extremely low, many orders of magnitude below the ionic conductivity.
Thermal properties
The specific heat capacity of LAGP materials with general formula Li1+xAlxGe2-x(PO4)3 fits the Maier-Kelley polynomial law in the temperature range from room temperature to 700 °C:

cp = a + b T − c / T²

where
T is the absolute temperature,
a, b and c are fitting constants.
Typical values are in the range of 0.75 - 1.5 J⋅g−1⋅K−1 in the temperature interval 25 - 100 °C. The a and b constants increase with the x value, i.e. with both the aluminium and the lithium content, while the c constant does not follow a precise trend. As a result, the specific heat capacity of LAGP is expected to increase as the Al content grows and the Ge content decreases, which is consistent with data about the relative specific heats of aluminium and germanium compounds.
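A minimal sketch of the Maier-Kelley law, with made-up coefficients (not fitted LAGP data) chosen only so that the computed values fall inside the 0.75 - 1.5 J⋅g−1⋅K−1 window quoted above:

```python
def maier_kelley_cp(T, a, b, c):
    """Maier-Kelley heat-capacity law, c_p = a + b*T - c/T**2 (T in kelvin).

    The coefficients passed below are illustrative values only,
    not fitted LAGP data.
    """
    return a + b * T - c / T**2

a, b, c = 0.60, 1.2e-3, 3.0e3  # hypothetical fit constants, J/(g*K) units
for T in (298.15, 323.15, 373.15):
    print(T, round(maier_kelley_cp(T, a, b, c), 3))
```

Because the linear b term dominates over the small 1/T² correction at these temperatures, cp rises gently and monotonically over the 25 - 100 °C interval, as the quoted range suggests.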
In addition, the thermal diffusivity D of LAGP follows a decreasing trend with increasing temperature, irrespective of the aluminium content:

D ∝ T⁻ⁿ

The aluminium level affects the exponent n, which varies from 0.08 (high Al content) to 0.11 (low Al content). Such small values suggest the presence of a large number of point defects in the material, which is highly beneficial for solid ionic conductors. Finally, the expression for the thermal conductivity κ can be written:
κ = (1/3) C v λ = (1/3) cp ρ v λ

where
C = cp ρ is the heat capacity per unit volume,
v is the average phonon group velocity,
λ is the phonon mean free path,
ρ is the density of the material.
Taking everything into account, as the aluminium content in LAGP increases, the ionic conductivity increases as well, while the thermal conductivity decreases, since a larger number of lithium ions enhances the phonon scattering, thus reducing the phonon mean free path and the thermal transport in the material. Therefore, thermal and ionic transports in LAGP ceramics are not correlated: the corresponding conductivities follow opposite trends as a function of the aluminium content and are affected in a different way by temperature variations (e.g., the ionic conductivity increases by one order of magnitude upon an increase from room temperature to 100 °C, while the thermal conductivity increases by only 6%).
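The kinetic-theory expression for the thermal conductivity can be evaluated for an order-of-magnitude estimate. All input values below are assumed, generic ceramic numbers, not measured LAGP data:

```python
def phonon_thermal_conductivity(rho, cp, v, mfp):
    """Kinetic-theory estimate kappa = (1/3) * rho * c_p * v * lambda.

    rho in kg/m^3, cp in J/(kg*K), v in m/s, mfp in m -> kappa in W/(m*K).
    """
    return rho * cp * v * mfp / 3.0

# Order-of-magnitude check with assumed (not measured) ceramic values:
# density 3400 kg/m^3, cp 900 J/(kg*K), phonon velocity 4 km/s, mfp ~1 nm.
kappa = phonon_thermal_conductivity(rho=3400.0, cp=900.0, v=4000.0, mfp=1e-9)
print(round(kappa, 2))  # a few W/(m*K)
```

A nanometre-scale mean free path, shortened by the point defects and extra lithium ions mentioned above, is what keeps the estimate down at a few W/(m·K), illustrating why ionic and thermal transport follow opposite trends with aluminium content.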
Thermal stability
Detrimental secondary phases can also form because of thermal treatments or during material production. Excessively high sintering/annealing temperatures or long dwelling times result in the loss of volatile species (especially Li2O) and in the decomposition of the LAGP main phase into AlPO4 and GeO2. LAGP bulk samples and thin films are typically stable up to 700-750 °C; if this temperature is exceeded, volatile lithium is lost and the impurity phase GeO2 forms. If the temperature is further increased beyond 950 °C, AlPO4 also appears.
Raman spectroscopy and in situ X-ray diffraction (XRD) are useful techniques that can be employed to assess the phase purity of LAGP samples during and after the heat treatments.
Chemical and electrochemical stability
LAGP belongs to phosphate-based solid electrolytes and, in spite of showing a moderate ionic conductivity compared to other families of solid ionic conductors, it possesses some intrinsic advantages with respect to sulfides and oxides:
Extremely high chemical stability in humid air;
Wide electrochemical stability window;
Low to negligible electronic conductivity.
One of the main advantages of LAGP is its chemical stability in the presence of oxygen, water vapour, and carbon dioxide, which simplifies the manufacturing process by avoiding the need for a glovebox or other protected environments. Unlike sulfide-based solid electrolytes, which react with water releasing poisonous gaseous hydrogen sulfide, and garnet-type lithium lanthanum zirconium oxide (LLZO), which reacts with water and CO2 to form passivating layers of LiOH and Li2CO3, LAGP is practically inert in humid air.
Another important advantage of LAGP is its wide electrochemical stability window, up to 6 V, which allows the use of such an electrolyte in contact with high-voltage cathodes, thus enabling high energy densities. However, the stability at very low voltages and against lithium metal is controversial: even though LAGP is more stable than LATP because of the absence of titanium, some literature works report the reduction of Ge4+ by lithium as well, with the formation of reduced germanium species and metallic germanium at the electrode-electrolyte interface and a dramatic increase of the interfacial resistance.
Synthesis
Several synthesis methods exist to produce LAGP in the form of bulk pellets or thin films, depending on the required performance and final application. The synthesis path significantly affects the microstructure of the LAGP material, which plays a key role in determining its overall conductive properties. Indeed, a compact layer of crystalline LAGP with large, well-connected grains and a minimal amount of secondary, non-conductive phases ensures the highest conductivity values. On the contrary, an amorphous structure or the presence of small grains and pores tends to hinder the motion of lithium ions, and glassy LAGP shows ionic conductivity values several orders of magnitude lower than the crystalline phase.
In most cases, a post-process thermal treatment is performed to achieve the desired degree of crystallinity.
Bulk pellets
Solid-state sintering
Solid-state sintering is the most widely used synthesis process to produce solid-state electrolytes. Powders of LAGP precursors, including oxides like GeO2 and Al2O3, are mixed, calcined and densified at high temperature (700-1200 °C) for long times (about 12 hours). Sintered LAGP is characterized by high crystalline quality, large grains, a compact microstructure, and high density, although negative side effects such as the loss of volatile lithium compounds and the formation of secondary phases must be avoided while the material is kept at high temperature.
The sintering parameters affect the LAGP microstructure and purity and, ultimately, its ionic conductivity and overall conduction performance.
Glass crystallization
LAGP glass-ceramics can be obtained starting from an amorphous glass with nominal composition Li1.5Al0.5Ge1.5(PO4)3, which is subsequently annealed to promote crystallization. Compared to solid-state sintering, melt-quenching followed by crystallization is a simpler and more flexible process which leads to a denser and more homogeneous microstructure.
The starting point for glass crystallization is the synthesis of the glass through a melt-quenching process of precursors in suitable amount to achieve the desired stoichiometry. Different precursors can be used, especially to provide phosphorus to the material. One possible route is the following:
Preheating of Al2O3 and GeO2 at 1000 °C for 1 hour;
Drying of Li2CO3 at 300 °C for 3 hours;
Mixing of the starting precursors in proper amounts to match the nominal stoichiometry;
Removal of volatile species by stepwise heating of the mix to 500 °C;
Melting at 1450 °C for 1 hour;
Quenching of the melt;
Annealing of glass samples in air.
The annealing temperature is selected to promote full crystallization while avoiding the formation of detrimental secondary phases, pores, and cracks. Various temperatures are reported in different literature sources; however, crystallization does not usually start below 550-600 °C, while temperatures higher than 850 °C cause the extensive formation of impurity phases.
Sol-gel techniques
The sol-gel technique enables the production of LAGP particles at lower processing temperatures compared to sintering or glass crystallization. The typical precursor is a germanium organic compound, like germanium ethoxide Ge(OC2H5)4, which is dissolved in an aqueous solution with stoichiometric amounts of the sources of lithium, phosphorus, and aluminium. The mixture is then heated and stirred. The sol-gel process starts after the addition of a gelation agent, and the final material is obtained after subsequent heating steps aimed at eliminating water and promoting the pyrolysis reaction, followed by calcination.
The sol-gel process requires the use of germanium organic precursors, which are more expensive compared to GeO2.
Thin films
Sputtering
Sputtering (in particular radio-frequency magnetron sputtering) has been applied to the fabrication of LAGP thin-films starting from a LAGP target. Depending on the temperature of the substrate during the deposition, LAGP can be deposited in the cold sputtering or hot sputtering configuration.
The film stoichiometry and microstructure can be tuned by controlling the deposition parameters, especially the power density, the chamber pressure, and the substrate temperature. Both amorphous and crystalline films are obtained, with a typical thickness around 1 μm. The room-temperature ionic conductivity and the activation energy of sputtered and annealed LAGP films are comparable with those of bulk pellets, i.e. about 10⁻⁴ S/cm and 0.31 eV.
Aerosol deposition
Pre-synthesized LAGP powders can be sprayed on a substrate to form a LAGP film by means of aerosol deposition. The powders are loaded into the aerosol deposition chamber and purified air is used as the carrier gas to drive the particles towards the substrate, where they impinge and coalesce to generate the film. Since the as-produced film is amorphous, an annealing treatment is usually performed to improve the film crystallinity and its conduction properties.
Other techniques
Some other methods to produce LAGP materials have been reported in literature works, including liquid-based techniques, spark plasma sintering, and co-precipitation.
Applications
Lithium-ion batteries
LAGP is one of the most studied solid-state electrolytes for lithium-ion batteries. The use of a solid-state electrolyte improves battery safety by eliminating liquid-based electrolytes, which are flammable and usually unstable above 4.3 V. In addition, it physically separates the anode from the cathode, reducing the risk of short circuits, and strongly inhibits lithium dendrite growth. Finally, solid-state electrolytes can operate in a wide range of temperatures, with minimal conductivity loss and decomposition issues. Nevertheless, the ionic conductivity of solid-state electrolytes is some orders of magnitude lower than that of conventional liquid-based electrolytes; therefore, a thin electrolyte layer is preferred to reduce the overall internal impedance and to achieve a shorter diffusion path and larger energy densities. LAGP is thus a suitable candidate for all-solid-state thin-film lithium-ion batteries, in which the electrolyte thickness ranges from 1 to some hundreds of micrometres. The good mechanical strength of LAGP effectively suppresses lithium dendrites during lithium stripping and plating, reducing the risk of internal short circuits and battery failure.
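The benefit of a thin electrolyte layer can be illustrated with the ohmic area-specific resistance, ASR = t/σ. The conductivity value below is an assumed order of magnitude for crystalline LAGP, used only for illustration.

```python
def area_specific_resistance(thickness_cm, conductivity_s_per_cm):
    """Ohmic area-specific resistance (Ohm*cm^2) of a planar
    electrolyte layer: ASR = thickness / conductivity."""
    return thickness_cm / conductivity_s_per_cm

SIGMA = 1e-4  # S/cm, assumed order of magnitude for crystalline LAGP

# A 1 um thin film vs. a 100 um pellet-like layer:
asr_film = area_specific_resistance(1e-4, SIGMA)   # 1 Ohm*cm^2
asr_thick = area_specific_resistance(1e-2, SIGMA)  # 100 Ohm*cm^2
print(asr_film, asr_thick)
```

Reducing the thickness by two orders of magnitude lowers the electrolyte contribution to the internal impedance by the same factor, which is why thin-film geometries are preferred.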
LAGP is applied as a solid-state electrolyte both as a pure material and as a component in organic-inorganic composite electrolytes. For example, LAGP can be composited with polymeric materials, like polypropylene (PP) or polyethylene oxide (PEO), to improve the ionic conductivity and to tune the electrochemical stability. Moreover, since LAGP is not fully stable against metallic lithium because of the electrochemical reactivity of Ge4+ cations, additional interlayers can be introduced between the lithium anode and the solid electrolyte to improve the interfacial stability. The addition of a thin layer of metallic germanium inhibits the electrochemical reduction by lithium metal at very negative potentials and promotes the interfacial contact between the anode and the electrolyte, resulting in improved cycling performance and battery stability. The use of polymer-ceramic composite interlayers or of excess Li2O are alternative strategies to improve the electrochemical stability of LAGP at negative potentials.
LAGP has been tested not only as a solid electrolyte but also as an anode material in lithium-ion batteries, showing high electrochemical stability and good cycling performance.
Lithium-sulfur batteries
LAGP-based membranes have been applied as separators in lithium-sulfur batteries. LAGP allows the transfer of lithium ions from anode to cathode but, at the same time, prevents the diffusion of polysulfides from the cathode, suppressing the polysulfide shuttle effect and enhancing the overall performance of the battery. Typically, all-solid-state lithium-sulfur batteries are not fabricated because of high interfacial resistance; therefore, hybrid electrolytes are usually realized, in which LAGP acts as a barrier against polysulfide diffusion but it is combined with liquid or polymer electrolytes to promote fast lithium diffusion and to improve the interfacial contact with electrodes.
See also
Solid-state electrolyte
Solid-state battery
NASICON
Lithium lanthanum zirconium oxide
References
Biaxial tensile testing

In materials science and solid mechanics, biaxial tensile testing is a versatile technique to address the mechanical characterization of planar materials. It is a generalized form of tensile testing in which the material sample is simultaneously stressed along two perpendicular axes. Typical materials tested in biaxial configuration include
metal sheets, silicone elastomers, composites, thin films, textiles, and biological soft tissues.
Purposes of biaxial tensile testing
A biaxial tensile test generally allows the assessment of the mechanical properties and a complete characterization of incompressible isotropic materials, which can be obtained with fewer specimens than uniaxial tensile tests require.
Biaxial tensile testing is particularly suitable for understanding the mechanical properties of biomaterials, due to their directionally oriented microstructures.
If the testing aims at characterizing the material's post-elastic behaviour, uniaxial results become inadequate, and a biaxial test is required in order to examine the plastic behaviour.
In addition to this, using uniaxial test results to predict rupture under biaxial stress states seems to be inadequate.
Even if a biaxial tensile test is performed in a planar configuration, it may be equivalent to the stress state applied on three-dimensional geometries, such as cylinders with an inner pressure and an axial stretching.
The relationship between the inner pressure and the circumferential stress is given by the Mariotte formula:

σ = PD / (2t)

where σ is the circumferential stress, P the inner pressure, D the inner diameter and t the wall thickness of the tube.
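As a quick sketch, the thin-wall (Mariotte) hoop-stress relation can be computed directly; the numerical values in the example are arbitrary.

```python
def hoop_stress(p, d, t):
    """Circumferential (hoop) stress in a thin-walled tube:
    sigma = P * D / (2 * t)  (Mariotte formula)."""
    if d <= 0 or t <= 0:
        raise ValueError("diameter and wall thickness must be positive")
    return p * d / (2.0 * t)

# e.g. 0.5 MPa inner pressure, 20 mm inner diameter, 1 mm wall:
# sigma = 0.5 * 20 / 2 = 5 MPa
print(hoop_stress(0.5, 20.0, 1.0))  # 5.0
```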
Equipment
Typically, a biaxial tensile machine is equipped with motor stages, two load cells and a gripping system.
Motor stages
Through the movement of the motor stages a certain displacement is applied to the material sample. If there is a single motor stage, the displacement is the same in the two directions and only the equi-biaxial state is allowed. On the other hand, using four independent motor stages allows any load condition; this feature makes the biaxial tensile test superior to other tests that may apply a biaxial tensile state, such as the hydraulic bulge, semispherical bulge, stack compression or flat punch tests.
Using four independent motor stages also makes it possible to keep the sample centred during the whole duration of the test; this feature is particularly useful for coupling an image analysis with the mechanical test. The most common way to obtain the fields of displacements and strains is Digital Image Correlation (DIC), a contactless technique that does not affect the mechanical results.
Load cells
Two load cells are placed along the two orthogonal load directions to measure the normal reaction forces explicated by the specimen. The dimensions of the sample have to be in accordance with the resolution and the full scale of the load cells.
A biaxial tensile test can be performed either in a load-controlled condition, or a displacement-controlled condition, in accordance with the settings of the biaxial tensile machine. In the former configuration a constant loading rate is applied and the displacements are measured, whereas in the latter configuration a constant displacement rate is applied and the forces are measured.
For elastic materials the load history is not relevant, whereas for viscoelastic materials it is not negligible. Furthermore, for this class of materials the loading rate also plays a role.
Gripping system
The gripping system transfers the load from the motor stages to the specimen. Although the use of biaxial tensile testing is growing more and more, there is still a lack of robust standardized protocols concerning the gripping system. Since it plays a fundamental role in the application and distribution of the load, the gripping system has to be carefully designed in order to satisfy the Saint-Venant principle. Some different gripping systems are reported below.
Clamps
Clamps are the most commonly used gripping system for biaxial tensile tests since they allow a fairly uniformly distributed load at the junction with the sample. To increase the uniformity of stress in the region of the sample close to the clamps, notches with circular tips are cut into the arms of the sample. The main problem with clamps is the low friction at the interface with the sample; indeed, if the friction between the inner surface of the clamps and the sample is too low, relative motion between the two can occur, altering the results of the test.
Sutures
Small holes are made in the sample to connect it to the motor stages through wires with a stiffness much higher than that of the sample. Typically, sutures are used with square samples. In contrast to clamps, sutures allow the rotation of the sample around the axis perpendicular to the plane; in this way they do not transmit shear stresses to the sample.
The load transmission is very local, so the load distribution is not uniform. A template is needed to apply the sutures in the same position in different samples, to ensure repeatability among different tests.
Rakes
This system is similar to the suture gripping system, but stiffer. Rakes transfer a limited amount of shear stress, so they are less suitable than sutures in the presence of large shear strains. Although the load is transmitted in a discontinuous way, the load distribution is more uniform compared to sutures.
Specimen shape
The success of a biaxial tensile test is strictly related to the shape of the specimen.
The two most used geometries are the square and cruciform shapes. Dealing with fibrous materials or fibres reinforced composites, the fibres should be aligned to the load directions for both classes of specimens, in order to minimize the shear stresses and to avoid the sample rotation.
Square samples
Square or, more generally, rectangular specimens are easy to obtain, and their dimensions and aspect ratio depend on the material availability. Large specimens are needed to make the effects of the gripping system negligible in the core of the sample; however, this solution is very material-consuming, so small specimens are often required. Since the gripping system is then very close to the core of the specimen, the strain distribution is not homogeneous.
Cruciform samples
A proper cruciform sample should fulfil the following requirements:
maximization of the biaxially loaded area in the centre of the sample, where the strain field is uniform;
minimization of the shear strain in the centre of the sample;
minimization of regions of stress concentration, even outside the area of interest;
failure in the biaxially loaded area;
repeatable results.
It is important to note that in this kind of sample the stretch is larger in the outer region than in the centre, where the strain is uniform.
Method
The uniaxial stress test is typically used to measure the mechanical properties of materials, but many materials exhibit different behaviour when different loading stresses are exerted. Thus, biaxial tensile tests have become one of the prospective measurements. The Small Punch Test (SPT) and bulge testing are two methods applying a biaxial tensile state.
Small Punch Test (SPT)
The Small Punch Test (SPT) was first developed in the 1980s as a minimally invasive in-situ technique to investigate the local degradation and embrittlement of nuclear materials. The SPT is a miniaturized test method requiring only a small-volume specimen. Using small volumes does not severely affect or damage an in-service component, which makes SPT a good method to determine the mechanical properties of unirradiated and irradiated materials or to analyze small regions of structural components.
In terms of the testing, the disc-shaped specimen is clamped between two dies. The punch is then pushed with a constant displacement rate through the specimen. A flat punch or a concave tip pushing a ball is typically used in the test. After the testing, characteristic parameters such as force-displacement curves are used to estimate the yield strength and the ultimate tensile stress. Considering the curves at various temperatures from SPT tensile/fracture data, the ductile-to-brittle transition temperature (DBTT) can be calculated. One thing to be noted is that the specimen used in SPT should be very flat to reduce the stress error caused by an undefined contact situation.
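Yield strength is often estimated from the SPT force-displacement curve through empirical correlations of the form σy ≈ α·Fe/t². The sketch below uses a hypothetical α = 0.36 and made-up specimen values; the actual coefficient depends on the test geometry and the correlation adopted.

```python
def spt_yield_strength(f_e_newton, t_mm, alpha=0.36):
    """Empirical Small Punch Test correlation sigma_y ~ alpha * Fe / t^2.
    f_e_newton: elastic-plastic transition force [N]
    t_mm: specimen thickness [mm]
    alpha: empirical, geometry-dependent factor (assumed here)
    Returns an estimate of the yield strength in MPa."""
    return alpha * f_e_newton / t_mm ** 2

# Hypothetical specimen: Fe = 200 N, t = 0.5 mm -> 288 MPa
print(round(spt_yield_strength(200.0, 0.5)))  # 288
```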
Hydraulic Bulge Test (HBT)
The Hydraulic Bulge Test (HBT) is a method of biaxial tensile testing. It is used to determine mechanical properties such as Young's modulus, yield strength, ultimate tensile strength, and strain-hardening properties of sheet materials like thin films. HBT can better describe the plastic properties of a sheet at large strains, since the strains in press forming are normally larger than the uniform strain. However, the geometries of formed parts are not symmetric; therefore, the true stress and strain measured by HBT will be higher than those measured by a tensile test.
In HBT, rupture discs and high-pressure hydraulic oil are used to cause specimen deformation, which also avoids influence factors such as friction present in the small punch test. There are, however, constraints on the test conditions: the temperature is limited by the solidification and vaporization of the hydraulic oil. High temperature would lead to loading failure, while low temperature results in the failure of the seals, and the leaking vapour might be dangerous.
In HBT, a circular sample is normally stripped from the substrate on which it has been prepared and clamped around its periphery over a hole at the end of a cylinder. It experiences pressure from one side using hydraulic oil and then bulges and expands into a cavity with increasing pressure. The flow stress is calculated from the dome height of the bulging blank, and the pressure and height can also be determined. Strain is measured by Digital Image Correlation (DIC). With the specimen thickness and clamp size taken into account, the true stress and strain can be calculated.
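Assuming the bulged blank is well approximated by a spherical cap, the radius of curvature at the apex follows from the die radius and dome height, and the equi-biaxial membrane stress from a Young-Laplace-type relation σ = pR/(2t). The numbers below are illustrative, and the current apex thickness is treated as a measured input.

```python
def dome_radius(a, h):
    """Radius of curvature of a spherical-cap bulge from the die
    radius a and the dome (pole) height h: R = (a^2 + h^2) / (2h)."""
    return (a * a + h * h) / (2.0 * h)

def membrane_stress(p, r, t):
    """Equi-biaxial membrane stress at the dome apex,
    sigma = p * R / (2 * t) (thin-membrane approximation)."""
    return p * r / (2.0 * t)

# Illustrative numbers: 25 mm die radius, 5 mm dome height,
# 2 MPa oil pressure, 0.9 mm current apex thickness.
r = dome_radius(25.0, 5.0)           # 65 mm
print(membrane_stress(2.0, r, 0.9))  # ~72.2 MPa
```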
Other liquids may also be used as the hydraulic fluid in HBT. Xiang et al. (2005) developed a HBT for sub-micron thin films by using standard photolithographic microfabrication techniques etch away a small channel behind the film of interest, then pressurized the channel with water to bulge thin films. Validity of this method was confirmed using finite element analysis (FEA).
Gas Bulge Test (GBT)
Gas bulge tests (GBT) operate similarly to HBT. Instead of a hydraulic oil, high-pressure gas is used to back-pressure a thin plate specimen. Since gas has a much lower density than liquid, the maximum safe pressure output from GBT is considerably lower than hydraulic systems. Therefore, elevated temperature GBT is often used to increase ductility of the specimen, enabling plastic deformation at lower pressures.
Unlike HBT, elevated temperatures are possible for GBT. Operating temperatures of biaxial bulge testing are limited by phase transitions of the pressurized fluid—gasses therefore have an extremely wide range of operating temperatures. GBT is suitable for studying fatigue, low and high-temperature mechanical properties (given sufficient ductility at low temperatures), and thermal cycling. Additionally, holding pressure at a high temperature allows for testing time-dependent mechanical properties such as creep.
High-temperature DIC may be used to measure biaxial stress and strain during GBT. Alternatively, a laser interferometer may be used to find the displacement near the apex of the dome, and many models are available for calculating both the radius of curvature and the radial strain of bulged specimens. True stress is best approximated by the Young-Laplace equation. Results are comparable to the biaxial testing standard ISO 16808. Clamping of elevated-temperature gas bulge specimens requires clamping materials with an operating temperature in excess of the test temperature. This is possible using high-temperature mechanical fasteners, or by directly bonding materials via traditional welding, friction stir welding (FSW), or diffusion bonding.
GBT example studies
Frary et al. (2002) use GBT to demonstrate superplastic deformation of commercially pure (CP) titanium and Ti64 by thermally cycling through the material’s α/β transformation temperature.
Huang et al. (2019) measure coefficients of thermal expansion through GBT, and thermally cycle NiTi shape memory alloys to measure stress evolution.
The ability to perform GBT in parallel for an array of specimens enables high-throughput screening of mechanical properties and facilitates rapid materials design. Ding et al. (2014) conducted parallel measurements of viscosity across a huge composition-space of bulk metallic glass. Instead of using a direct pressure hookup, tungstic acid was placed into the cavities behind the specimen plate and decomposed to produce gas upon heating to ~100 °C.
Analytical solution
A biaxial tensile state can be derived starting from the most general constitutive law for isotropic materials in the large-strain regime:

S = 2 [ (W1 + I1 W2) I − W2 C + I3 W3 C⁻¹ ]

where S is the second Piola-Kirchhoff stress tensor, I the identity matrix, C the right Cauchy-Green tensor, and W1, W2 and W3 the derivatives of the strain energy function per unit volume in the undeformed configuration with respect to the three invariants I1, I2 and I3 of C.
For an incompressible material, the previous equation becomes:

S = −p C⁻¹ + 2 [ (W1 + I1 W2) I − W2 C ]

where p is of hydrostatic nature and plays the role of a Lagrange multiplier. It is worth noting that p is not the hydrostatic pressure and must be determined independently of the constitutive model of the material.
A well-posed problem requires specifying the out-of-plane condition; for the biaxial state of a membrane S33 = 0, so the p term can be obtained as

p = 2 C33 [ (W1 + I1 W2) − W2 C33 ]

where C33 is the third component of the diagonal of C.
According to the definition, the three non-zero components of the deformation gradient tensor F are F11 = λ1, F22 = λ2 and F33 = λ3.
Consequently, the components of C can be calculated with the formula C = FᵀF, and they are C11 = λ1², C22 = λ2² and C33 = λ3².
According to this stress state, the two non-zero components of the second Piola-Kirchhoff stress tensor are:

S11 = −p / λ1² + 2 [ (W1 + I1 W2) − W2 λ1² ]
S22 = −p / λ2² + 2 [ (W1 + I1 W2) − W2 λ2² ]

By using the relationship between the second Piola-Kirchhoff and the Cauchy stress tensor, σ = J⁻¹ F S Fᵀ (with J = 1 for an incompressible material), the Cauchy stresses σ11 = λ1² S11 and σ22 = λ2² S22 can be calculated.
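The relations above can be sketched numerically. The function below assumes an incompressible Mooney-Rivlin material, for which W1 and W2 reduce to the constants C10 and C01; the parameter values are hypothetical.

```python
def biaxial_cauchy_stress(lam1, lam2, w1, w2):
    """In-plane Cauchy stresses for an incompressible isotropic
    membrane under biaxial stretch (plane stress, sigma_33 = 0):
        sigma_i = 2 * (lam_i^2 - lam3^2) * (W1 + lam_j^2 * W2)
    with lam3 = 1 / (lam1 * lam2) from incompressibility.
    w1, w2 are dW/dI1 and dW/dI2 (constants for Mooney-Rivlin)."""
    lam3_sq = 1.0 / (lam1 * lam2) ** 2
    s1 = 2.0 * (lam1 ** 2 - lam3_sq) * (w1 + lam2 ** 2 * w2)
    s2 = 2.0 * (lam2 ** 2 - lam3_sq) * (w1 + lam1 ** 2 * w2)
    return s1, s2

# Hypothetical Mooney-Rivlin constants (MPa): C10 = 0.15, C01 = 0.05
s1, s2 = biaxial_cauchy_stress(1.5, 1.2, 0.15, 0.05)
print(s1, s2)
```

In the undeformed state (λ1 = λ2 = 1) both stresses vanish, and in the equi-biaxial case the two in-plane stresses coincide, as expected.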
Equi-biaxial configuration
The simplest biaxial configuration is the equi-biaxial configuration, where each of the two directions of load is subjected to the same stretch at the same rate. In an incompressible isotropic material under this stress state, the non-zero components of the deformation gradient tensor F are F11 = F22 = λ and F33 = λ⁻².
According to the definition of C, its non-zero components are C11 = C22 = λ² and C33 = λ⁻⁴.
The Cauchy stress in the two directions is:

σ11 = σ22 = 2 (λ² − λ⁻⁴) (W1 + λ² W2)
Strip biaxial configuration
A strip biaxial test is a test configuration where the stretch in one direction is confined, namely a zero displacement is applied in that direction. The components of the C tensor become C11 = λ1², C22 = 1 and C33 = λ1⁻². It is worth noting that even if there is no displacement along direction 2, the stress is different from zero and depends on the stretch applied in the orthogonal direction, as stated in the following equations.
The Cauchy stress in the two directions is:

σ11 = 2 (λ1² − λ1⁻²) (W1 + W2)
σ22 = 2 (1 − λ1⁻²) (W1 + λ1² W2)
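A minimal sketch of the strip configuration, again assuming an incompressible Mooney-Rivlin material with hypothetical constants, shows that the constrained direction carries a non-zero stress:

```python
def strip_biaxial_stresses(lam, w1, w2):
    """Cauchy stresses in the strip biaxial configuration
    (lam2 = 1 fixed, lam3 = 1/lam from incompressibility):
        sigma_11 = 2 * (lam^2 - 1/lam^2) * (W1 + W2)
        sigma_22 = 2 * (1 - 1/lam^2) * (W1 + lam^2 * W2)"""
    s11 = 2.0 * (lam ** 2 - lam ** -2) * (w1 + w2)
    s22 = 2.0 * (1.0 - lam ** -2) * (w1 + lam ** 2 * w2)
    return s11, s22

# Even with zero displacement along direction 2, sigma_22 is non-zero
# and grows with the stretch applied along direction 1:
s11, s22 = strip_biaxial_stresses(1.3, 0.15, 0.05)
print(s11, s22)
```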
The strip biaxial test has been used in different applications, such as the prediction of the behaviour of orthotropic materials under a uniaxial tensile stress, delamination problems, and failure analysis.
FEM analysis
Finite Element Methods (FEM) are sometimes used to obtain the material parameters.
The procedure consists of reproducing the experimental test and obtaining the same stress-stretch behaviour; to do so, an iterative procedure is needed to calibrate the constitutive parameters. In addition, the cracking behaviour of a cruciform specimen under mixed-mode loading can be determined using FEA: the Franc2d program has been used to calculate the stress intensity factor (SIF) for such specimens using the linear elastic fracture mechanics approach. This kind of approach has been demonstrated to be effective in obtaining the stress-stretch relationship for a wide class of hyperelastic material models (Ogden, Neo-Hookean, Yeoh, and Mooney-Rivlin).
Standards
ISO 16842:2014 metallic materials – sheet and strip – biaxial tensile testing method using a cruciform test piece.
ISO 16808:2014 metallic materials – sheet and strip – determination of biaxial stress-strain curve by means of bulge test with optical measuring systems.
ASTM D5617 – 04(2015) – Standard Test Method for Multi-Axial Tension Test for Geosynthetics.
DIN EN 17117 – a German standard describing test methods using biaxial stress states for the determination of the tensile stiffness properties of biaxially oriented coated fabrics.
See also
Tensile testing
Mechanical properties
References
Materials testing
Continuum mechanics
Solid mechanics | Biaxial tensile testing | [
"Physics",
"Materials_science",
"Engineering"
] | 3,400 | [
"Solid mechanics",
"Continuum mechanics",
"Classical mechanics",
"Materials science",
"Materials testing",
"Mechanics"
] |
67,944,697 | https://en.wikipedia.org/wiki/Non-linear%20inverse%20Compton%20scattering | Non-linear inverse Compton scattering (NICS), also known as non-linear Compton scattering and multiphoton Compton scattering, is the scattering of multiple low-energy photons, given by an intense electromagnetic field, in a high-energy photon (X-ray or gamma ray) during the interaction with a charged particle, in many cases an electron. This process is an inverted variant of Compton scattering since, contrary to it, the charged particle transfers its energy to the outgoing high-energy photon instead of receiving energy from an incoming high-energy photon. Furthermore, differently from Compton scattering, this process is explicitly non-linear because the conditions for multiphoton absorption by the charged particle are reached in the presence of a very intense electromagnetic field, for example, the one produced by high-intensity lasers.
Non-linear inverse Compton scattering is a scattering process belonging to the category of light-matter interaction phenomena. The absorption of multiple photons of the electromagnetic field by the charged particle causes the consequent emission of an X-ray or a gamma ray with energy comparable to or higher than the charged particle's rest energy.
The normalized vector potential a0 = eA/(mc²) helps to isolate the regime in which non-linear inverse Compton scattering occurs (e is the electron charge, m is the electron mass, c the speed of light and A the vector potential). If a0 ≪ 1, the emission phenomenon can be reduced to the scattering of a single photon by an electron, which is the case of inverse Compton scattering. If instead a0 ≳ 1, NICS occurs and the probability amplitudes of emission have non-linear dependencies on the field. For this reason, in the description of non-linear inverse Compton scattering, a0 is called the classical non-linearity parameter.
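A commonly quoted engineering estimate links a0 to the laser intensity and wavelength, a0 ≈ 0.85 λ[μm] √(I / 10¹⁸ W cm⁻²) for linear polarization; the sketch below uses this approximation with illustrative values.

```python
import math

def normalized_vector_potential(intensity_w_cm2, wavelength_um):
    """Common engineering estimate of the classical non-linearity
    parameter for a linearly polarized laser:
        a0 ~ 0.85 * lambda[um] * sqrt(I / 1e18 W/cm^2)."""
    return 0.85 * wavelength_um * math.sqrt(intensity_w_cm2 / 1e18)

# A Ti:Sa pulse (0.8 um) reaches a0 ~ 1 near 2e18 W/cm^2,
# i.e. around the threshold of the non-linear (multiphoton) regime.
print(normalized_vector_potential(2e18, 0.8))  # ~0.96
```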
History
The physical process of non-linear inverse Compton scattering has been first introduced theoretically in different scientific articles starting from 1964. Before this date, some seminal works had emerged dealing with the description of the classical limit of NICS, called non-linear Thomson scattering or multiphoton Thomson scattering. In 1964, different papers were published on the topic of electron scattering in intense electromagnetic fields by L. S. Brown and T. W. B. Kibble, and by A. I. Nikishov and V. I. Ritus, among the others. The development of the high-intensity laser systems required to study the phenomenon has motivated the continuous advancements in the theoretical and experimental studies of NICS. At the time of the first theoretical studies, the terms non-linear (inverse) Compton scattering and multiphoton Compton scattering were not in use yet and they progressively emerged in later works. The case of an electron scattering off high-energy photons in the field of a monochromatic background plane wave with either circular or linear polarization was one of the most studied topics at the beginning. Then, some groups have studied more complicated non-linear inverse Compton scattering scenario, considering complex electromagnetic fields of finite spatial and temporal extension, typical of laser pulses.
The advent of laser amplification techniques and in particular of chirped pulse amplification (CPA) has allowed to reach sufficiently high-laser intensities to study new regimes of light-matter interaction and to significantly observe non-linear inverse Compton scattering and its peculiar effects. Non-linear Thomson scattering was first observed in 1983 with keV electron beam colliding with a Q-switched Nd:YAG laser delivering an intensity of W/cm2 (), photons of frequency two times the one of the laser were produced, then in 1995 with a CPA laser of peak intensity around W/cm2 interacting with neon gas, and in 1998 in the interaction of a mode-locked Nd:YAG laser ( W/cm2, ) with plasma electrons from a helium gas jet, producing multiple harmonics of the laser frequency. NICS was detected for the first time in a pioneering experiment at the SLAC National Accelerator Laboratory at Stanford University, USA. In this experiment, the collision of an ultra-relativistic electron beam, with energy of about GeV, with a terawatt Nd:glass laser, with an intensity of W/cm2 (, ), produced NICS photons which were observed indirectly via a nonlinear energy shift in the spectrum of electrons in output; consequent positron generation was also observed in this experiment.
Multiple experiments have been then performed by crossing a high-energy laser pulse with a relativistic electron beam from a conventional linear electron accelerator, but a further achievement in the study of non-linear inverse Compton scattering has been achieved with the realization of all-optical setups. In these cases, a laser pulse is both responsible for the electron acceleration, through the mechanisms of plasma acceleration, and for the non-linear inverse Compton scattering occurring in the interaction of accelerated electrons with a laser pulse (possibly counter-propagating with respect to electrons). One of the first experiment of this type was made in 2006 producing photons of energy from to keV with a Ti:Sa laser beam (W/cm2). Research is still ongoing and active in this field as attested by the numerous theoretical and experimental publications.
Classical limit
The classical limit of non-linear inverse Compton scattering, also called non-linear Thomson scattering or multiphoton Thomson scattering, is a special case of classical synchrotron emission, driven by the force exerted on a charged particle by intense electric and magnetic fields. In practice, a moving charge emits electromagnetic radiation while experiencing the Lorentz force induced by the presence of these electromagnetic fields. The calculation of the emitted spectrum in this classical case is based on the solution of the Lorentz equation for the particle and the substitution of the corresponding particle trajectory into the Liénard-Wiechert fields. In the following, the charged particles considered will be electrons, and Gaussian units will be used.
The component of the Lorentz force perpendicular to the particle velocity is responsible for the local radial acceleration and thus for the relevant part of the radiation emitted by a relativistic electron of charge , mass and velocity . In a simplified picture, one can suppose a locally circular trajectory for a relativistic particle and assume a relativistic centripetal force equal to the magnitude of the perpendicular Lorentz force acting on the particle: and are the electric and magnetic fields respectively, is the magnitude of the electron velocity and is the Lorentz factor . This equation defines a simple dependence of the local radius of curvature on the particle velocity and on the electromagnetic fields felt by the particle. Since the motion of the particle is relativistic, the magnitude can be replaced with the speed of light to simplify the expression for . Given an expression for , the model given in Example 1: bending magnet can be used to approximately describe the classical limit of non-linear inverse Compton scattering. Thus, the power distribution in frequency of non-linear Thomson scattering by a relativistic charged particle can be seen as equivalent to the general case of synchrotron emission, with the main parameters made explicitly dependent on the particle velocity and on the electromagnetic fields.
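As a rough numerical illustration of this force balance (not taken from the article), the sketch below evaluates the local radius of curvature for the simple case of a pure magnetic field perpendicular to the electron velocity, in SI units; the 1 GeV / 1 T example values are assumptions chosen only for illustration.

```python
# Physical constants in SI units (standard values, rounded)
M_E = 9.109e-31    # electron mass, kg
Q_E = 1.602e-19    # elementary charge, C
C_LIGHT = 2.998e8  # speed of light, m/s

def radius_of_curvature(gamma, b_field):
    """Local radius of curvature of an ultra-relativistic electron when the
    perpendicular Lorentz force comes from a magnetic field B alone:
    gamma*m*v^2/r = e*v*B, and with v ~ c this gives r ~ gamma*m*c/(e*B)."""
    return gamma * M_E * C_LIGHT / (Q_E * b_field)

# Hypothetical example: a 1 GeV electron in a 1 T field
gamma = 1.0e9 * Q_E / (M_E * C_LIGHT**2)  # Lorentz factor for 1 GeV total energy
r = radius_of_curvature(gamma, 1.0)       # comes out to a few metres
```

The same expression with the perpendicular electric-field contribution added reproduces the full force balance described above.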
Electron quantum parameter
Increasing the intensity of the electromagnetic field and the particle velocity, the emission of photons with energy comparable to the electron energy becomes more probable, and non-linear inverse Compton scattering starts to depart progressively from the classical limit because of quantum effects such as photon recoil. A dimensionless parameter, called the electron quantum parameter, can be introduced to describe how far the physical conditions are from the classical limit and how much non-linear and quantum effects matter. This parameter is given by the following expression:where V/m is the Schwinger field. In the scientific literature, is also called . The Schwinger field , appearing in this definition, is a critical field capable of performing on electrons a work of over a reduced Compton length , where is the reduced Planck constant. The presence of such a strong field implies the instability of the vacuum and is necessary to explore non-linear QED effects, such as the production of pairs from vacuum. The Schwinger field corresponds to an intensity of nearly W/cm2. Consequently, represents the work, in units of , performed by the field over the Compton length, and in this way it also measures the importance of quantum non-linear effects, since it compares the field strength in the rest frame of the electron with that of the critical field. Non-linear quantum effects, like the production of an electron-positron pair in vacuum, occur above the critical field ; however, they can be observed well below this limit, since ultra-relativistic particles with Lorentz factor equal to see fields of the order of in their rest frame. is also called the non-linear quantum parameter, since it is a measure of the magnitude of non-linear quantum effects.
The electron quantum parameter is linked to the magnitude of the Lorentz four-force acting on the particle due to the electromagnetic field, and it is a Lorentz invariant:The four-force acting on the particle is equal to the derivative of the four-momentum with respect to proper time. Using this fact in the classical limit, the radiated power according to the relativistic generalization of the Larmor formula becomes:As a result, emission is enhanced by higher values of and, therefore, some considerations can be made about the conditions for prolific emission by further evaluating the definition (). The electron quantum parameter increases with the energy of the electron (direct proportionality to ) and is larger when the force exerted by the field perpendicular to the particle velocity increases.
Plane wave case
Considering a plane wave, the electron quantum parameter can be rewritten using this relation between the electric and magnetic fields:where is the wavevector of the plane wave and the wavevector magnitude. Inserting this expression into the formula for :where the vectorial identity was used. Elaborating the expression:Since for a plane wave and the last two terms under the square root compensate each other, reduces to:
In the simplified configuration of a plane wave impinging on the electron, higher values of the electron quantum parameter are obtained when the plane wave is counter-propagating with respect to the electron velocity.
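The geometric dependence just described can be sketched numerically. In the snippet below, the plane-wave form of the electron quantum parameter is evaluated for a head-on and a co-propagating collision; the constants, the intensity-to-field conversion for a linearly polarized wave, and the example values (γ = 1000, 10²² W/cm²) are assumptions chosen only for illustration.

```python
import math

E_SCHWINGER = 1.32e18  # Schwinger critical field, V/m
EPS0 = 8.854e-12       # vacuum permittivity, F/m
C_LIGHT = 2.998e8      # speed of light, m/s

def peak_field(intensity_w_m2):
    """Peak electric field of a linearly polarized plane wave,
    E = sqrt(2 I / (eps0 c)) in SI units."""
    return math.sqrt(2.0 * intensity_w_m2 / (EPS0 * C_LIGHT))

def chi_plane_wave(gamma, e_field, theta):
    """Electron quantum parameter for an electron crossing a plane wave,
    with theta the angle between the electron velocity and the wave
    propagation direction: chi = (gamma E / E_s) * (1 - beta cos(theta)).
    A head-on collision (theta = pi) maximizes it, chi ~ 2 gamma E / E_s."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return gamma * e_field / E_SCHWINGER * (1.0 - beta * math.cos(theta))

# Hypothetical example: gamma = 1000 electron in a 1e22 W/cm^2 (1e26 W/m^2) pulse
e_peak = peak_field(1e26)
chi_head_on = chi_plane_wave(1000.0, e_peak, math.pi)     # counter-propagating
chi_copropagating = chi_plane_wave(1000.0, e_peak, 0.0)   # nearly vanishes
```

The co-propagating value is suppressed by the factor 1 − β ≈ 1/(2γ²), which is why counter-propagating geometries are preferred in experiments.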
Quantum effects
A full description of non-linear inverse Compton scattering must include some effects related to the quantization of light and matter. The principal ones are listed below.
Inclusion of the discretization of the emitted radiation, i.e. the introduction of photons in place of the continuous description of the classical limit. This effect does not quantitatively change the emission features, but it changes how the emitted radiation is interpreted. A parameter equivalent to can be introduced for the photon of frequency ; it is called the photon quantum parameter:where is the photon four-wavevector and is the three-dimensional wavevector. In the limit in which the particle approaches the speed of light, the ratio between and is equal to:From the Frequency distribution of radiated energy one can obtain a rate of high-energy photon emission, distributed in as a function of and , but still valid in the classical limit:
where stands for the Macdonald functions (modified Bessel functions of the second kind). The mean energy of the emitted photon is given by . Consequently, a large Lorentz factor and intense fields increase the chance of producing high-energy photons. According to this formula, goes as .
The effect of radiation reaction, due to photon recoil. The electron energy after the interaction process is reduced, because part of it is delivered to the emitted photon, and the maximum energy achievable by the emitted photon cannot be higher than the electron kinetic energy. This effect is not taken into account in non-linear Thomson scattering, in which the electron energy is assumed to remain almost unaltered, as in elastic scattering. Quantum radiation-reaction effects become important when the emitted photon energy approaches the electron energy. Since , if the classical limit of NICS is a valid description, while for the energy of the emitted photon is of the order of the electron energy and photon recoil is very relevant.
The quantization of the motion of the electron and spin effects. An accurate description of non-linear inverse Compton scattering is obtained by considering the electron dynamics described with the Dirac equation in the presence of an electromagnetic field.
Emission description when and
When the incoming field is very intense , the interaction of the electron with the electromagnetic field is completely equivalent to the interaction of the electron with multiple photons, with no need to explicitly quantize the electromagnetic field of the incoming low-energy radiation. The interaction with the radiation field, i.e. the emitted photon, is instead treated with perturbation theory: the probability of photon emission is evaluated considering the transition between the states of the electron in the presence of the electromagnetic field. This problem has been solved primarily in the case in which the electric and magnetic fields are orthogonal and equal in magnitude (crossed field); in particular, the case of a plane electromagnetic wave has been considered. Crossed fields are a good approximation of many existing field configurations, so the solution found can be considered quite general. The spectrum of non-linear inverse Compton scattering, obtained with this approach and valid for and , is:
where the parameter is now defined as:The result is similar to the classical one except for the different expression of . For it reduces to the classical spectrum (). Note that if ( or ) the spectrum must be zero, because the energy of the emitted photon cannot be higher than the electron energy; in particular, it cannot be higher than the electron kinetic energy .
The total power emitted in radiation is given by the integration in of the spectrum ():where the result of the integration of is contained in the last term:
This expression is equal to the classical one if is equal to one, and it can be expanded in two limiting cases, near the classical limit and when quantum effects are of major importance:A related quantity is the rate of photon emission:where it is made explicit that the integration is limited by the condition that if , no photons can be produced. This rate of photon emission depends explicitly on the electron quantum parameter and on the Lorentz factor of the electron.
Applications
Non-linear inverse Compton scattering is an interesting phenomenon for all applications requiring high-energy photons, since NICS is capable of producing photons with energy comparable to and higher. In the case of electrons, this means that it is possible to produce photons of MeV energy that can consequently trigger other phenomena, such as pair production, Breit–Wheeler pair production, Compton scattering, and nuclear reactions.
In the context of laser-plasma acceleration, both relativistic electrons and laser pulses of ultra-high intensity can be present, setting favourable conditions for the observation and exploitation of non-linear inverse Compton scattering for high-energy photon production, for the diagnostics of electron motion, and for probing non-linear quantum effects and non-linear QED. For this reason, several numerical tools have been introduced to study non-linear inverse Compton scattering. For example, particle-in-cell codes for the study of laser-plasma acceleration have been developed with the capability of simulating non-linear inverse Compton scattering with Monte Carlo methods. These tools are used to explore the different regimes of NICS in the context of laser-plasma interaction.
See also
Compton scattering
Synchrotron radiation
Breit–Wheeler process
Quantum electrodynamics
Laser
References
External links
High-energy photon emission & radiation reaction in the PIC code SMILEI - Example of particle-in-cell code with a module for NICS simulation.
CORELS research - Example of research activity on NICS.
Scattering
Quantum electrodynamics

Dynamic stall on helicopter rotors

Dynamic stall is one of the hazardous phenomena on helicopter rotors, which can cause the onset of large torsional airloads and vibrations on the rotor blades. Unlike fixed-wing aircraft, on which stall occurs at relatively low flight speed, dynamic stall on a helicopter rotor emerges at high airspeeds and/or during manoeuvres with high load factors, when the angle of attack (AoA) of blade elements varies intensively due to time-dependent blade flapping, cyclic pitch and wake inflow. For example, during forward flight at a velocity close to VNE (velocity, never exceed), the advancing and retreating blades almost reach their operational limits while the flow is still attached to the blade surfaces. That is, the advancing blades operate at high Mach numbers, so low values of AoA are needed, but shock-induced flow separation may occur, while the retreating blades operate at much lower Mach numbers, where high values of AoA result in stall (see also advancing blade compressibility and retreating blade stall).
Performance limits
The effect of dynamic stall limits the helicopter performance in several ways such as:
The maximum forward flight velocity and thrust;
High blade structural loads, which may result in excessive vibrations and blade structural damage;
Control system loads, manoeuvre capability, and handling qualities;
Helicopter dynamic performance.
Flow topology
Visualization is considered a vivid method to better understand the aerodynamic principles of dynamic stall on a helicopter rotor, and the investigation generally starts from the analysis of unsteady motion on a 2D airfoil (see Blade element theory).
Dynamic stall for 2D airfoils
By wind tunnel experiments, it has been found that the behaviour of an airfoil under unsteady motion is quite different from that under quasi-steady motion. Under unsteady motion, flow separation on the upper surface of the airfoil occurs at a larger value of AoA than under quasi-steady motion, which can increase the maximum lift coefficient to a certain extent. Three primary unsteady phenomena have been identified as contributing to the delay in the onset of flow separation under unsteady conditions:
During the condition where the AoA is increasing with respect to time, the unsteadiness of the flow resulting from circulation that is shed into the wake at the trailing edge of the airfoil causes a reduction in the lift and adverse pressure gradients compared to the steady case at the same AoA;
By virtue of a kinematic induced camber effect, a positive pitch rate further decreases the leading edge pressure and pressure gradients for a given value of lift;
In response to the external pressure gradients, there are also additional unsteady effects that occur within the boundary layer, including the existence of flow reversals in the absence of any significant flow separation.
The development process of dynamic stall on a 2D airfoil can be summarized in several stages:
Stage 1: the AoA exceeds the static stall angle but the flow separation is delayed due to the reduction of adverse pressure gradients produced by the kinematics of pitch rate.
Stage 2: flow separation and the formation of a vortex disturbance that is cast off from the leading-edge region of the airfoil. This vortex, called the leading-edge vortex (LEV) or dynamic stall vortex (DSV), provides additional lift for the airfoil as long as it stays over the upper surface, but also produces a noteworthy increase in nose-down pitching moment (moment break, moment stall) as it moves downstream across the chord.
Stage 3: a steep decrease of the lift coefficient (lift break, lift stall) occurs as the DSV passes into the wake.
Stage 4: full separation of the flow on the upper surface of the airfoil can be observed, accompanied by the peak of nose-down pitch moment.
Stage 5: full flow reattachment is achieved as the AoA gradually decreases until it is well below the static stall angle. The reasons for the lag are, firstly, the reorganization of the flow from fully separated to reattached and, secondly, the reverse kinematic "induced camber" effect on the leading-edge pressure gradient caused by the negative pitch rate.
Dynamic stall in the rotor environment
Although the unsteady mechanism of idealized 2D experiments has already been studied comprehensively, dynamic stall on a rotor presents strongly three-dimensional characteristics. According to well-documented in-flight data collected by Bousman, the generation locations of the DSV are "tightly grouped", featuring lift overshoots and large nose-down pitching moments, and can be classified into three groups.
Types
Light dynamic stall
Minor flow separation;
Low deviations of airloads and small hysteresis;
The same order of the viscous zone thickness as the airfoil thickness;
Sensitivity to airfoil geometry, reduced frequency and Mach number.
Deep dynamic stall
Domination of the vortex-shedding phenomenon;
High deviations of airloads and large hysteresis;
Extension of the viscous zone to the order of airfoil chord;
Less sensitivity to airfoil geometry, reduced frequency and Mach number;
Rapid overshoots of airloads after stall.
Factors
Mean AoA
Increasing the mean value of AoA leads to more evident flow separation, higher overshoots of lift and pitching moment, and larger airload hysteresis, which may ultimately result in deep dynamic stall.
Oscillating angle
The amplitude of oscillation is also an important parameter for the stall behaviour of an airfoil. With a larger oscillating angle, deep dynamic stall tends to occur.
Reduced frequency
A higher value of the reduced frequency delays the onset of flow separation to a higher AoA and reduces airload overshoots and hysteresis, thanks to the increased kinematic induced camber effect. But when the reduced frequency is rather low, i.e. , the vortex-shedding phenomenon is not likely to happen, and neither is deep dynamic stall.
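The reduced frequency referred to above is conventionally defined as k = ωc/(2V), the ratio of the pitching time scale to the time the flow takes to traverse the half-chord; the numbers in the example below are arbitrary and chosen only for illustration.

```python
def reduced_frequency(omega, chord, velocity):
    """Reduced frequency k = omega * c / (2 V): compares the pitch-oscillation
    time scale with the convective time over the half-chord. k = 0 is the
    quasi-steady limit; larger k means stronger unsteady effects."""
    return omega * chord / (2.0 * velocity)

# Arbitrary example: 0.5 m chord oscillating at 30 rad/s in a 100 m/s stream
k = reduced_frequency(30.0, 0.5, 100.0)  # well into the unsteady regime
```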
Airfoil geometry
The effect of airfoil geometry on dynamic stall is quite intricate. As shown in the figure, for a cambered airfoil the lift stall is delayed and the maximum nose-down pitching moment is significantly reduced. On the other hand, the inception of stall is more abrupt for an airfoil with a sharp leading edge.
Sweep angle
The sweep angle of the flow to a blade element for a helicopter in forward flight can be significant. It is defined as the radial component of the velocity relative to the leading edge of the blade:
Based on experimental data, a sweep angle of 30° can delay the onset of stall to a higher AoA, thanks to the convection of the leading-edge vortex at a lower velocity, and can reduce the rate of variation of lift and pitching moment as well as the scale of the hysteresis loops.
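The local sweep angle can be sketched with a simple blade-element model. The velocity decomposition used here (chordwise component Ωr + μΩR sin ψ, radial component μΩR cos ψ, with μ the advance ratio and ψ the azimuth) is an assumption introduced for illustration, not a formula given in the article.

```python
import math

def local_sweep_angle(r_over_R, mu, psi):
    """Local sweep angle of the incident flow at a rotor blade element,
    using a simple illustrative blade-element model:
    chordwise velocity ~ Omega*r + mu*Omega*R*sin(psi),
    radial velocity   ~ mu*Omega*R*cos(psi),
    so tan(Lambda) = mu*cos(psi) / (r/R + mu*sin(psi)). Result in radians."""
    return math.atan2(mu * math.cos(psi), r_over_R + mu * math.sin(psi))

# Example: 75% span station at advance ratio 0.35, blade pointing aft (psi = 0)
lam_deg = math.degrees(local_sweep_angle(0.75, 0.35, 0.0))  # roughly 25 degrees
```

With numbers of this order, sweep angles comparable to the 30° quoted above arise naturally over parts of the rotor disk at high advance ratios.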
Reynolds number
As the figure suggests, the effect of Reynolds number appears to be minor: with a low value of reduced frequency, k = 0.004, the stall overshoot is minimal and most of the hysteresis loop is attributable to a delay in reattachment rather than to vortex shedding.
Three-dimensional effects
Lorber et al. found that, at the outermost wing station, the existence of the tip vortex gives both the steady and unsteady lift and pitching-moment hysteresis loops a more nonlinear quasi-steady behaviour due to an element of steady vortex-induced lift, while for the remaining wing stations, where the oscillations stay below stall, there is no particular difference from the 2-D case.
Time-varying velocity
During forward flight, a blade element of the rotor encounters a time-varying incident velocity, leading to additional unsteady aerodynamic characteristics. Several features have been discovered through experiments; for example, depending on the phasing of the velocity variations with respect to the AoA, the initiation of LEV shedding and the chordwise convection of the LEV appear to differ. However, more work is needed to understand this problem better using mathematical models.
Modelling
There are mainly two types of mathematical models to predict dynamic stall behaviour: semi-empirical models and computational fluid dynamics (CFD) methods. With regard to the latter, because of the sophisticated flow field during dynamic stall, the full Navier-Stokes equations and appropriate models are adopted, and some promising results have been presented in the literature. However, to utilize this method precisely, proper turbulence models and transition models must be carefully selected. Furthermore, this method is also sometimes too computationally costly for research purposes as well as for the pre-design of a helicopter rotor. On the other hand, some semi-empirical models have to date shown the capability of providing adequate precision; they contain sets of linear and nonlinear equations, based on classical unsteady thin-airfoil theory and parameterized by empirical coefficients. Consequently, a large number of experimental results are required to calibrate the empirical coefficients, and it is foreseeable that these models cannot be generally adapted to a wide range of conditions, such as different airfoils and Mach numbers.
Here, two typical semi-empirical methods are presented to give insights into the modelling of dynamic stall.
Boeing-Vertol Gamma Function Method
The model was initially developed by Gross & Harris and by Gormont; the basic idea is as follows:
The onset of dynamic stall is assumed to occur at
,
where is the critical AoA of dynamic stall, is static stall AoA and is given by
,
where is the time derivative of AoA, is the blade chord, and is the free-stream velocity. The function is empirical, depends on geometry and Mach number and is different for lift and pitching moment.
The airloads coefficients are constructed from static data using an equivalent angle of attack derived from Theodorsen's theory at the appropriate reduced frequency of the forcing and a reference angle as follows:
, , , where is the center point of rotation.
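The stall-onset criterion of the model can be sketched as follows. The square-root form of the stall delay matches the description above, but the particular gamma value, pitch rate, and flow conditions in the example are assumptions chosen only for illustration.

```python
import math

def critical_stall_angle(alpha_ss, gamma_m, alpha_dot, chord, velocity):
    """Boeing-Vertol / Gormont style stall-onset criterion (sketch):
    dynamic stall is assumed to occur when the instantaneous AoA exceeds
        alpha_ss + gamma_m * sqrt(|alpha_dot| * c / (2 V)),
    where gamma_m is the empirical gamma function evaluated at the current
    Mach number (different for lift and pitching moment). Angles in radians."""
    return alpha_ss + gamma_m * math.sqrt(abs(alpha_dot) * chord / (2.0 * velocity))

# Illustrative numbers only: static stall at 13 deg, gamma = 1.0,
# pitch rate 10 rad/s, 0.5 m chord, 100 m/s free stream
alpha_crit = critical_stall_angle(math.radians(13.0), 1.0, 10.0, 0.5, 100.0)
```

With these numbers the criterion delays stall by several degrees beyond the static stall angle, in line with the qualitative behaviour described above.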
A comprehensive analysis of a helicopter rotor using this model is presented in the reference.
Leishman-Beddoes Method
The model was initially developed by Beddoes and Leishman&Beddoes and refined by Leishman and Tyler&Leishman.
The model consists of three distinct sub-systems for describing the dynamic stall physics:
Attached flow model for the unsteady (linear) airloads (with compressibility effects included) using the compressible indicial response functions;
Separated flow model for the nonlinear airloads (Kirchhoff-Helmholtz theory);
Dynamic stall model for the leading edge vortex-induced airloads.
One significant advantage of the model is that it uses relatively few empirical coefficients, with all but four at each Mach number being derived from static airfoil data.
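The Kirchhoff-Helmholtz approximation named in the second sub-system relates the normal force to the position of the trailing-edge separation point. The standard form of this relation is sketched below; treating the thin-airfoil value 2π as the attached lift-curve slope is an illustrative assumption.

```python
def kirchhoff_normal_force(c_n_alpha, alpha, f):
    """Kirchhoff-Helmholtz approximation for the normal force coefficient
    with trailing-edge separation at chord fraction f (f = 1: fully
    attached, f = 0: fully separated):
        C_N = C_N_alpha * ((1 + sqrt(f)) / 2)**2 * alpha   (alpha in radians)."""
    return c_n_alpha * ((1.0 + f ** 0.5) / 2.0) ** 2 * alpha

TWO_PI = 6.283185307179586  # thin-airfoil lift-curve slope, per radian

cn_attached = kirchhoff_normal_force(TWO_PI, 0.1, 1.0)   # linear regime
cn_separated = kirchhoff_normal_force(TWO_PI, 0.1, 0.0)  # a quarter of it
```

In the full model, f itself is driven by first-order lag equations so that the separation point responds dynamically to the unsteady pressure field.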
See also
helicopter
Rotorcraft
Helicopter rotor
Stall (fluid dynamics)
retreating blade stall
Reduced frequency
Lift coefficient
Angle of attack
References
Helicopter aerodynamics
Fluid dynamics

Giant Arc

The Giant Arc is a large-scale structure discovered in June 2021 that spans 3.3 billion light-years. This structure of galaxies exceeds the 1.2-billion-light-year threshold, challenging the cosmological principle that, at large enough scales, the universe is the same in every place (homogeneous) and in every direction (isotropic). The Giant Arc consists of galaxies, galaxy clusters, and gas and dust. It is located 9.2 billion light-years away and stretches across roughly one-fifteenth of the radius of the observable universe. It was discovered using data from the Sloan Digital Sky Survey by a team led by Alexia M. Lopez, a doctoral candidate in cosmology at the University of Central Lancashire.
It and the Big Ring may form part of a connected cosmological system.
If the Giant Arc were visible in the night sky it would form an arc occupying as much space as 20 full moons, or 10 degrees on the sky.
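As a quick consistency check on the figures above (taking the Moon's mean angular diameter as a round 0.5°, an assumed value):

```python
MOON_DIAMETER_DEG = 0.5  # approximate mean angular diameter of the full Moon

# Twenty full-moon widths laid end to end
arc_extent_deg = 20 * MOON_DIAMETER_DEG

# Fraction of a full 360-degree circle around the sky
fraction_of_circle = arc_extent_deg / 360.0  # about 1/36 of a great circle
```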
See also
Huge-LQG
Sloan Great Wall
CfA2 Great Wall
South Pole Wall
BOSS Great Wall
Hercules–Corona Borealis Great Wall
References
Galaxy filaments
Physical cosmology
Large-scale structure of the cosmos
Astronomical objects discovered in 2021

Psilocybe ruiliensis

Psilocybe ruiliensis is a species of psilocybin mushroom in the family Hymenogastraceae. Described as new to science in 2016, it is found in Yunnan province of southwest China. The species epithet, ruiliensis, refers to the locality of Ruili, where the type collections were made. The type specimens were growing solitary to scattered in grasslands in which cows and horses had previously grazed.
Description
Cap: in diameter; conic to almost plane, with or without an umbo or a small acute papilla at the disc; brownish-yellow (often with a reddish tinge); hygrophanous and translucently striate when moist, watery brown when wet; sometimes bruising blue when damaged or mature; with a white cortinate veil and sometimes small scales when young.
Gills: Yellowish or beige when young, chocolate brown in age (gray-purple or purple tinge), with adnate to subsinuate or adnexed attachment; edges serrulate and slightly wavy.
Spores: Brown with purple tinge (in water); ellipsoid to subhexagonal; smooth and slightly thick-walled, sometimes containing 1–2 oil drops; 9–11 by 6–7.5 μm.
Stipe: long, thick; yellow-white to brownish, sometimes bruising bluish when damaged; central or occasionally slightly eccentric; fibrillose; hollow; annulus absent; equal to slightly enlarged bulbous base. Stem base with rhizomorphic white mycelium.
Odor: Slightly grassy.
Microscopic features: Larger hexagonal and subrhomboid basidiospores (9.6–12.0 by 6.4–8.4 μm); ventricose-lageniform cheilocystidia and pleurocystidia.
See also
List of psilocybin mushrooms
List of Psilocybe species
References
ruiliensis
Fungi described in 2016
Fungi of China
Fungus species

Gelesis100

Gelesis100, sold under the brand name Plenity, is an oral hydrogel used to treat overweight and obesity. It absorbs water and expands in the stomach and small bowel, thereby increasing feelings of fullness. Possible side effects are primarily gastrointestinal, such as diarrhea, abdominal distention, infrequent bowel movements, constipation, abdominal pain, and flatulence. It is contraindicated in pregnancy, chronic malabsorption syndromes, and cholestasis. The US Food and Drug Administration approved it in 2019 as a medical device. Gelesis100 was developed by the company Gelesis.
History
The US Food and Drug Administration approved the use of Gelesis100 in April 2019 as a medical device. Gelesis100 is the first treatment of its kind for overweight and obesity. In 2022, the American Gastroenterological Association published a guideline for the management of obesity, which recommended that the use of Gelesis100 be limited to clinical trials due to limited evidence.
Uses and effectiveness
Gelesis100 is used to treat obesity and overweight as an anti-obesity medication. Gelesis100 is taken as a pill before meals with water.
Gelesis100 has been criticized for its small impact on weight loss relative to side effects.
Mechanism and physiology
Gelesis100 is an oral superabsorbent hydrogel, which is produced from carboxymethylcellulose and citric acid. The cross-linked product forms a hydrophilic matrix, which absorbs water. Taken in capsule form by mouth, as Gelesis100 absorbs water, it expands in the stomach and small intestine. After absorbing water, a semisolid gel structure forms, which may promote satiety and result in weight loss via reduced caloric intake.
Contraindications
Gelesis100 is contraindicated in pregnancy, chronic malabsorption syndromes, and cholestasis.
Side effects
Side effects consist of minor gastrointestinal symptoms, including diarrhea, abdominal distention, infrequent bowel movements, constipation, abdominal pain, and flatulence. Gelesis100 is not associated with any severe adverse events. However, long-term safety data beyond 24 weeks is not available.
References
External links
Official website
Gastroenterology
Anti-obesity drugs
Bariatrics
Management of obesity
Medical devices

Quantum bogodynamics

Quantum bogodynamics () is a humorous parody of quantum mechanics that describes the universe through interactions of fictional elementary particles, bogons (by analogy to the naming of real elementary particles, e.g. photons; but also from the English word bogus, meaning 'fake').
This theory assumes the existence of three basic phenomena:
Bogon-emitting sources (such as politicians, used-car dealers, TV preachers, and teleshopping hosts);
Bogon absorbers, or sinks (taxpayers and computers);
Bogosity potential fields.
The Jargon File Glossary describes the theory as follows:
The unit of bogosity is microLenat, proposed by David Jefferson, and was intended as an attack against computer scientist Doug Lenat. "Doug had failed the student on an important exam because the student gave only 'AI is bogus' as his answer to the questions. The slur is generally considered unmerited, but it has become a running gag nevertheless. Some of Doug's friends argue that of course a microLenat is bogus, since it is only one millionth of a Lenat. Others have suggested that the unit should be redesignated after the grad student, as the microReid."
The term comes from hacker culture, where bogons were used to describe units of "bogusness" or failure.
References
Hacker culture
Computing culture

BOLD-100

BOLD-100, or sodium trans-[tetrachlorobis (1H-indazole)ruthenate(III)], is a ruthenium-based anti-cancer therapeutic in clinical development. As of February 2024, BOLD-100 was being tested in a Phase 1b/2a clinical trial in 117 patients with advanced gastrointestinal cancers in combination with the chemotherapy regimen FOLFOX. BOLD-100 is being developed by Bold Therapeutics Inc.
Structure
BOLD-100 has an octahedral structure with two trans indazoles and four chloride ligands in the equatorial plane. The primary cation for BOLD-100 is sodium. BOLD-100's impurity profile contains trace quantities of cesium.
BOLD-100 derivatives
BOLD-100 is sodium trans-[tetrachlorobis (1H-indazole) ruthenate(III)] with cesium as an intermediate salt form. BOLD-100 was developed from the closely related ruthenium molecule KP1339 (also known as IT-139 or NKP-1339) which is also sodium trans-[tetrachlorobis (1H-indazole) ruthenate(III)], but has different manufacturing methods and purity profiles. The names are often used interchangeably.
The precursor molecule to BOLD-100 is KP1019, which is the indazole salt equivalent. KP1019 previously entered Phase 1 clinical trials but development was halted due to low solubility in water, leading to the development of KP1339 and BOLD-100 which are readily soluble in water. KP1019 and KP1339 were invented by Dr. Keppler at the University of Vienna.
Synthesis
Synthesis of BOLD-100 is accomplished by treating RuCl3 with an excess of 1H-indazole in a concentrated aqueous HCl solution. The resulting indazolium salt is treated with CsCl, and a salt exchange is performed that converts the cesium salt to the final sodium salt. The drug product is prepared as a lyophilized powder for parenteral administration.
Mechanism of action
BOLD-100 kills cancer cells through multiple mechanisms, leading to cell death through apoptosis. BOLD-100 inhibits GRP78 and alters the unfolded protein response (UPR), while also inducing reactive oxygen species (ROS), leading to DNA damage.
BOLD-100 can synergize with cytotoxic chemotherapies and targeted agents to improve cancer cell death.
BOLD-100 also causes immunogenic cell death in colon cancer organoids.
Clinical development
The precursor molecule to BOLD-100, KP1339 was tested in a Phase 1 monotherapy clinical trial in heavily pretreated patients with advanced cancers. In this dose escalation study, KP1339 was administered to 46 patients with doses ranging from 20 mg/m2 to 780 mg/m2. KP1339 was well tolerated, with the treatment-emergent adverse events occurring in >20% of patients being nausea, fatigue, vomiting, anaemia and dehydration. These adverse events were mainly grade 2 or lower. In the 38 efficacy-evaluable patients, nine patients achieved stable disease and 1 patient had a durable partial response. 625 mg/m2 was determined to be the recommended Phase 2 dose.
BOLD-100 is being tested in a Phase 1b/2a clinical trial in combination with the chemotherapy regimen FOLFOX (5-fluorouracil, leucovorin, and oxaliplatin) for the treatment of gastrointestinal cancers, including gastric, pancreatic, colon and bile duct cancer. This trial includes a dose-escalation phase followed by a cohort expansion, with 117 patients enrolled. Interim data presented at ASCO GI in January 2024 showed that BOLD-100 + FOLFOX was an active and well-tolerated treatment in a heavily pre-treated Stage IV mCRC study population of 36 patients. Progression-free survival, overall survival, and objective response rate demonstrated significant clinical benefit and improvement over the currently available therapies, with minimal treatment-emergent neuropathy or significant toxicities.
References
Experimental cancer drugs
Ruthenium complexes
Indazoles
Chlorides
Ruthenium(IV) compounds
Well Done Foundation
The Well Done Foundation (WDF) is a United States-based non-profit environmental organization that plugs abandoned oil and gas wells, preventing methane emissions from being released into the atmosphere. Established in 2019 with its headquarters in Shelby, Montana, WDF is a vendor for the carbon marketplace and sells offsets verified through the American Carbon Registry (ACR).
History
In 2019, Curtis Shuck, a former oil and gas executive of 30 years, was in Shelby, MT meeting with farmers when he discovered abandoned oil and gas wells scattered around the town's farm fields. In November 2019, Shuck initially created Well Done Montana, LLC (WDM), a for-profit organization designed to plug wells in Montana. The organization started its pilot project in its home state and plugged its first well, known as Anderson #3, in Toole County, Montana, in April 2020. Anderson #3 stopped producing oil in the 1980s and was emitting more than 6,600 MTCO2e before it was plugged.
WDM was formally reorganized into the Well Done Foundation as a non-profit 501(c)(3) organization in 2020. In June 2020, two more wells, Allen #31-8 and Blum #12, were plugged by WDF in Montana.
WDF continued to expand its operations across the United States throughout 2021, amidst the COVID-19 pandemic, into Pennsylvania, New York, Ohio, West Virginia, Kansas, Louisiana, and Texas.
In September 2023, ABB announced it would partner with the Well Done Foundation to monitor methane and greenhouse gas emissions from orphaned wells in the United States.
Process
The WDF follows a five-step process to plug a well. It first identifies wells of interest in whichever state it is operating in, then researches well emissions of individual sites, alongside the history of the well, its depth, and materials needed to plug it, for a nine-month period. A bond is then posted and WDF adopts the well from the State. A budget is prepared for the project and a campaign is established to raise funds for the well's plugging and costs for surface restoration. Each campaign is funded entirely through donations and partnerships, with each well costing around $65,000 to plug.
Once the funding goal is reached, contractors are employed to carry out the plugging process and a gel is pumped through the well's piping, then filled with concrete. Following the sealing process, a methane monitoring platform, known by WDF as "Dorothy", is placed over the well and collects data on the methane emissions to see if the plugging operation successfully stopped methane leakage. WDF then works with surface land owners to restore the surface surrounding the well to its pre-drilling state.
In the media
Vice News: "This Retired Oil Exec Wants to Plug Up Millions of Abandoned Wells Across the US"
Washington Post: "Capping methane-spewing oil wells, one hole at a time"
Pittsburgh Post-Gazette: "Ask Me About... a new model for plugging old oil wells"
KSBY California's Central Coast: "Nonprofit tackles methane emissions 'one well at a time'"
Williston Herald: "Well Done Foundation to celebrate one-year anniversary, Earth Day by plugging its fifth well"
U.S. News & World Report: "Montana Foundation Capping Abandoned Oil Wells"
Helena Independent Record: "Capping off problems: Montana-based company takes on abandoned wells"
Yes! magazine: "How Montana Is Cleaning Up Abandoned Oil Wells"
Marcellus Drilling News: "Seneca Sponsors Plugging of Century-Old Orphan Well in McKean, PA"
Bradford Era: "Appalachian Legacy Project to be 'boots on ground' for Well Done Foundation"
ITV: "Climate change: Millions of disused oil wells in US are pumping out methane - what's being done?"
Grist: "Abandonment Issues"
References
Non-profit corporations
Greenhouse gas emissions in the United States
Natural gas in the United States
501(c)(3) organizations
Oil wells
CoRoT-16
CoRoT-16 is a solitary star located in the equatorial constellation Scutum. With an apparent magnitude of 16, it requires a powerful telescope to be seen, and is located 2,400 light years away based on parallax.
Properties
This is an ordinary G-type main-sequence star with a mass similar to the Sun's, but it is 19% larger. It radiates at 77% of the Sun's luminosity from its photosphere at an effective temperature of 5,650 K, which gives it the yellow hue of a G-type star. CoRoT-16 has a rotation rate of 1/2 km/s, which correlates with an age of 6.7 billion years. As expected for a planetary host, CoRoT-16 has a high metallicity.
Planetary system
In 2011, the CoRoT mission discovered an unusually eccentric "hot Jupiter".
References
G-type main-sequence stars
Scutum (constellation)
Planetary systems with one confirmed planet
CoRoT
Cerebrospinal fluid flow MRI
Cerebrospinal fluid (CSF) flow MRI is used to assess pulsatile CSF flow both qualitatively and quantitatively. Time-resolved 2D phase-contrast MRI with velocity encoding is the most common method for CSF analysis. CSF flow MRI detects the back-and-forth flow of cerebrospinal fluid that corresponds to vascular pulsations, driven mostly by the cardiac cycle acting on the choroid plexus. Bulk transport of CSF, characterized by CSF circulation through the central nervous system, is not assessed because it is too slow to evaluate clinically: CSF would have to pass through the brain's lymphatic system and be absorbed by arachnoid granulations.
Cerebrospinal fluid (CSF)
CSF is a clear fluid that surrounds the brain and spinal cord. The rate of CSF formation in humans is about 0.3–0.4 ml per minute and the total CSF volume is 90–150 ml in adults.
Traditionally, CSF was evaluated mainly using invasive procedures such as lumbar puncture, myelographies, radioisotope studies, and intracranial pressure monitoring. Recently, rapid advances in imaging techniques have provided non-invasive methods for flow assessment. One of the best-known methods is Phase-Contrast MRI and it is the only imaging modality for both qualitative and quantitative evaluation. The constant progress of magnetic resonance sequences gives a new opportunity to develop new applications and enhance unknown mechanisms of CSF flow.
Phase contrast MRI
The study of CSF flow became one of Phase-contrast MRI's major applications. The key to Phase-contrast MRI (PC-MRI) is the use of a bipolar gradient. A bipolar gradient has equal positive and negative magnitudes that are applied for the same time duration. The bipolar gradient in PC-MRI is put in a sequence after RF excitation but before data collection during the echo time of the generic MRI modality. The bipolar lobe must be applied in all three axes to image flow in all three directions.
Bipolar gradient
The basis of the bipolar gradient in PC-MRI is that stationary protons experience equal positive and negative magnitudes and therefore undergo no net phase shift. The moving protons, however, undergo varying degrees of phase shift because their locations along the gradient direction are constantly changing. This notion can be applied to monitor protons that are moving through a plane: from the phase contrast, the moving protons can be detected. In the equation for determining the phase, local susceptibility influence is not removed by this bipolar gradient. Thus, it is necessary to acquire a second sequence with the bipolar gradient inverted and subtract its signal from the original acquisition. The purpose of this step is to cancel out the signals from static areas and produce the characteristic static appearance at phase-contrast imaging.
$\Delta\phi = \gamma \, v \, \Delta M$

where $\Delta\phi$ is the phase shift, $\gamma$ is the gyromagnetic ratio, $v$ is the proton velocity, and $\Delta M$ is the change in magnetic moment.
Equation 1. This is used to calculate phase shift, which is directly proportional to the gradient strength through the change in magnetic moment.
In phase-contrast imaging, there is a direct correlation between the degree of phase shift and the proton velocity in the direction of the gradient. However, because of the limitation of angles above 360°, the angle will wrap back to 0°, and only a specific range of proton velocities can be measured. For example, if a certain velocity leads to a 361° phase shift, we cannot distinguish this one from a velocity that causes a 1° phase shift. This phenomenon is called aliasing. Because both the forward direction velocity and the backward direction velocity are important, phase angles are usually within the range from −180° to 180°.
Using the bipolar gradient, it is possible to create a phase shift of spins that move with a specific velocity in the axis direction. Spins moving towards the bipolar gradient have a positive net phase shift, whereas spins moving away from the gradient have a negative net phase shift. Positive phase shifts are generally shown as white, while negative phase shifts are black. The net phase shift is directly proportional to both the time of bipolar gradient application and the flow velocity. This is why it is important to pick a velocity parameter that is similar in magnitude and width to that of the bipolar gradient - this is denoted as velocity encoding.
Velocity encoding
Velocity encoding (VENC), measured in cm/s, is directly related to the properties of the bipolar gradient. The VENC is used as the highest estimated fluid velocity in PC-MRI. Underestimating VENC leads to aliasing artifacts, as any velocity slightly higher than the VENC value has the opposite sign phase shift. However, overestimating the VENC value leads to a lower acquired flow signal and a lower SNR. Typical CSF flow is 5–8 cm/s; however, patients with hyper-dynamic circulation often require higher VENCs of up to 25 cm/s. An accurate VENC value helps generate the highest signal possible.
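The wrap-around (aliasing) behaviour described above can be illustrated numerically. In this sketch, velocity maps linearly to phase with ±180° at ±VENC, as in the text; the specific velocity and VENC values are just illustrative examples.

```python
def measured_velocity(true_velocity, venc):
    """Velocity as it appears after phase wrapping in PC-MRI.

    True velocity maps linearly to phase (+/-180 deg at +/-VENC); phase is
    only known modulo 360 deg, so velocities beyond VENC alias back with
    the opposite sign.
    """
    phase = 180.0 * true_velocity / venc          # phase shift in degrees
    wrapped = (phase + 180.0) % 360.0 - 180.0     # fold into (-180, 180]
    return wrapped * venc / 180.0

# With VENC = 10 cm/s, a true velocity of 12 cm/s aliases to -8 cm/s,
# while 5 cm/s is measured correctly.
print(measured_velocity(12.0, 10.0))
print(measured_velocity(5.0, 10.0))
```

This is why underestimating VENC flips the apparent flow direction for the fastest spins, while a VENC chosen just above the true peak velocity uses the full phase range.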
$\mathrm{VENC} = \dfrac{\pi}{\gamma \, \Delta M}$

Equation 2. This is used to calculate VENC, which is inversely proportional to the gradient strength (through the change in magnetic moment). The variables are equivalent to those defined in Equation 1.
Images
PC-MRI is made of a magnitude and phase image for each plane and VENC obtained. In the magnitude image, cerebrospinal fluid (CSF) that is flowing is a brighter signal and stationary tissues are suppressed and visualized as black background. The phase image is phase-shift encoded, where white high signals represent forward flowing CSF and black low signals represent backwards flow. Since the phase image is phase-dependent, the velocity can be quantitatively estimated from the image. The background is mid-grey in color. There is also a re-phased image, which is the magnitude of flow of the compensated signal. It includes bright high signal flow and a background that is visible.
The phase-contrast velocity image has greater sensitivity to CSF flow than the magnitude image, since the velocity image reflects the phase shifts of the protons. There are two sets of phase-contrast images used in evaluating CSF flow. The first is imaging of the axial plane, with through-plane velocity that shows the craniocaudal direction of flow (from the cranial to the caudal end of the structure). The second image is in the sagittal plane, where the velocity is shown in-plane and images the craniocaudal direction. The first technique allows for flow quantification, while the second allows for qualitative assessment. Through-plane analysis is usually done perpendicular to the aqueduct and is more accurate for quantitative evaluation because this minimizes the partial volume effect, a main limitation of PC-MRI. The partial volume effect occurs when a voxel includes a boundary of static and moving materials; this leads to an overestimate of phase, which results in inaccurate velocities at material boundaries. These quantitative and qualitative CSF flow images can be acquired in about 8–10 minutes in addition to a regular MRI.
Choosing parameters
Factors that impact PC-MRI include VENC, repetition time (TR), and signal-to-noise ratio (SNR). To capture CSF flow of 5–8 cm/s, it is necessary to use a strong bipolar gradient. VENC is inversely proportional to magnitude and time of application. This means that a slower VENC value needs a higher magnitude bipolar gradient applied for a longer time. This results in a larger TR value; however, TR can only be increased to a certain extent, as a short repetition time is needed for higher temporal resolution since the data is plotted relative to a full cardiac cycle. Therefore, it is important to balance these parameters to maximize resolution.
Quantification
To quantify CSF flow, it is important to define the region of interest, which can be done using a cross-sectional area measurement, for example. Then, velocity versus time can be plotted. Velocity is typically pulsatile due to systole and diastole, and the area under the curve can yield the amount of flow. Systole produces forward flow, while diastole produces backwards flow.
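The quantification steps above (velocity within a region of interest, integrated over the cardiac cycle, with systole counted as forward flow and diastole as backward flow) can be sketched as follows; the sinusoidal waveform and the ROI area are made-up illustrative values, not clinical data.

```python
import math

def stroke_volumes(velocities_cm_s, roi_area_cm2, cycle_s):
    """Integrate a velocity-time curve over one cardiac cycle.

    Returns (forward, backward) volumes in ml: positive velocity samples
    (systole) contribute to forward flow, negative samples (diastole)
    to backward flow. Flow = velocity x ROI area; volume = flow x time.
    """
    dt = cycle_s / len(velocities_cm_s)
    forward = sum(v for v in velocities_cm_s if v > 0) * roi_area_cm2 * dt
    backward = sum(-v for v in velocities_cm_s if v < 0) * roi_area_cm2 * dt
    return forward, backward

# Made-up pulsatile waveform: one sinusoidal cycle, 6 cm/s peak, 100 samples.
wave = [6.0 * math.sin(2 * math.pi * i / 100) for i in range(100)]
fwd, bwd = stroke_volumes(wave, roi_area_cm2=0.05, cycle_s=1.0)
```

For this symmetric waveform the forward and backward volumes are equal; real aqueductal waveforms are asymmetric, and the net of the two areas gives the bulk flow per cycle.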
Applications
Clinical
CSF flow can be used in diagnosing and treating aqueduct stenosis, normal pressure hydrocephalus, and Chiari malformation.
Aqueduct stenosis is the narrowing of the aqueduct of Sylvius which blocks the flow of CSF, causing fluid buildup in the brain called hydrocephalus. Decreased aqueduct stroke volume and peak systolic velocity could be detected through CSF flow to diagnose a patient with aqueduct stenosis.
Normal pressure hydrocephalus (NPH) is evaluated using CSF flow values and velocities, which is important for diagnosis because NPH is idiopathic and has varying symptoms among patients, including urinary incontinence, dementia, and gait disturbances. Increased aqueduct CSF stroke volume and velocity are indicators of NPH. It is critically important to recognize and treat NPH because it is one of the few potentially treatable causes of dementia. The treatment of choice in NPH is ventriculoperitoneal shunt surgery (VPS). This treatment uses a VP shunt, a catheter with a valve that provides one-way outflow of the excess CSF from the ventricles. Patency control is obligatory because of possible complications such as infection and obstruction. With its development and widespread adoption, PC-MRI superseded spin-echo (SE) imaging, the traditional way to select patients who might benefit from a VPS, and gradually became the most frequently used sequence to evaluate the CSF flow pattern in patients with NPH in relation to the cardiac cycle.
Chiari malformation (CMI) is when the cerebellar tonsils push through the foramen magnum of the skull. CSF flow varies based on level of tonsil descent and type of Chiari malformation, so the MRI can also be helpful in deciding the type of surgery to be performed and monitoring progress. CSF flow will be altered within different regions of the spinal cord and brain stem because of the changes in the morphology of the posterior fossa and craniocervical junction, which enables PC-MRI as a fundamental technique in CMI research studies and clinical evaluation.
Limitations
In PC-MRI, the quantitative analysis of stroke volume, mean peak velocity, and peak systolic velocity is possible only in the plane that is perpendicular to the unidirectional flow. Additionally, it is not possible to calculate multidirectional flow in multiaxial planes in 2D or 3D PC-MRI. This means that it is not a useful technique in clinical applications that have turbulent flow.
Future
Emerging 4D PC-MRI is showing promising results in the assessment of multidirectional flow. The 4D imaging modality adds time as a dimension to the 3D image. There are many applications of 4D PC-MRI, including the ability to examine blood flow patterns. This is particularly helpful for cardiac and aortic imaging, but the major limitation remains the image acquisition time.
References
Magnetic resonance imaging
Tire model
In vehicle dynamics, a tire model is a type of multibody simulation used to simulate the behavior of tires. In current vehicle simulator models, the tire model is the weakest and most difficult part to simulate.
Tire models can be classified on their accuracy and complexity, in a spectrum that goes from more simple empirical models to more complex physical models that are theoretically grounded. Empirical models include Hans B. Pacejka's Magic Formula, while physically based models include brush models (although they are still quite simplified), and more complex and detailed physical models include RMOD-K, FTire and Hankook. Theoretically-based models can be in turn classified from more approximative to more complex ones, going for example from the solid model, to the rigid ring model, to the flexural (elastic) ring model (like the Fiala model), and the most complex ones based on finite element methods.
Brush models were very popular in the 1960s and '70s, after which Pacejka's models became widespread for many applications.
Classification by purpose
Driving dynamics models
Brush model (Dugoff, Fancher and Segel, 1970)
Hohenheim tire model (physical approach)
Pacejka Magic Formula Tire (Bakker, Nyborg and Pacejka, 1987)
TameTire (semi-physical approach)
TMeasy (semi-physical approach)
Stretched string tire model (Fiala 1954)
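The Pacejka Magic Formula listed above can be sketched in a few lines. The coefficient values below (stiffness B, shape C, peak D, curvature E) are illustrative placeholders, not fitted tire data; real coefficients come from regression against measured tire forces.

```python
import math

def magic_formula(slip, B=10.0, C=1.9, D=1.0, E=0.97):
    """Pacejka 'Magic Formula': normalized force as a function of slip.

    B: stiffness factor, C: shape factor, D: peak factor, E: curvature
    factor -- all placeholder values for illustration only.
    """
    x = B * slip - E * (B * slip - math.atan(B * slip))
    return D * math.sin(C * math.atan(x))

# Zero slip gives zero force; force rises with slip, peaks near D,
# then falls off past the peak.
print(magic_formula(0.0))   # 0.0
```

The same functional form is used for both longitudinal force versus slip ratio and lateral force versus slip angle, with separate coefficient sets for each.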
Comfort models
BRIT (Brush and Ring Tire)
CDTire (Comfort and Durability Tire)
Ctire (Comfort tire)
Dtire (Dynamical Nonlinear Spatial Tire Model)
FTire (Flexible Structure Tire Model)
RMOD-K (Comfort and Durability Tire)
SWIFT (Short Wavelength Intermediate Frequency Tire) (Besselink, Pacejka, Schmeitz, & Jansen, 2005)
Applications
Fully physics-based tire models have typically been too computationally expensive to run in real-time driving simulations. For example, since CDTire/3D, a physics-based tire model, cannot be run in real time, an equivalent semi-empirical "magic formula" type of model, called CDTire/Realtime, is typically derived from it through experiments and a regression algorithm for real-time applications.
In 2016, a slightly less accurate version of FTire, a physics-based tire model, was adapted to run in real time. This real-time version of FTire was shown in 2018 to run on a 2.7 GHz 12-core Intel Xeon E5 (2014, 22 nm process, about $2,000), with 900 road/contact patch elements, at a sample frequency of 4.0 kHz including thermal and wear simulation.
The typical tire model sampling rate used in automotive simulators is 1 kHz. However, running at higher frequencies, such as 2 kHz, might mitigate lowered numerical stability in some scenarios and might increase the model's accuracy in the frequency domain above about 250 Hz.
See also
Contact patch
Self aligning torque
Slip (vehicle dynamics)
Thermal analysis
Heat transfer
References
Further reading
A new way of representing tyre data obtained from measurements in pure cornering and pure braking conditions.
Hans Pacejka (2012) Tire and Vehicle Dynamics, third edition (first edition 2002)
Lugner, P., & Plöchl, M. (2005). Tyre model performance test: first experiences and results. Vehicle System Dynamics, 43(sup1), 48-62.
Xu Wang (2020) Automotive Tire Noise and Vibrations: Analysis, Measurement and Simulation, ch.10
FTire physical tire model to run with rFpro driving simulation software, at rFpro official Vimeo and youtube channels, Jan 29, 2020
Romano, L., Bruzelius, F., & Jacobson, B. (2020) Brush tyre models for large camber angles and steering speeds, in Vehicle System Dynamics, 1-52.
VEHICLE DYNAMICS LIBRARY OFFERS EXPANDED SUPPORT FOR COSIN’S FTIRE MODEL, MODELON, AUGUST 1, 2017
Février, P., Hague, O. B., Schick, B., & Miquet, C. (2010) Advantages of a thermomechanical tire model for vehicle dynamics, ATZ worldwide, 112(7), 33-37.
External links
Tire models at Project Chrono
Tires For Heavy Trucks by PneusQuebec
Tire Modeling; Extracting Results from a Large Data Set at YouTube
Tires
Automotive software
Driving simulators
Racing simulators
Simulation software
Vehicle dynamics
Computational physics
Dynamical systems
WorldRiskReport
The WorldRiskReport is an annual technical report on global disaster risks. The yearly issues of the WorldRiskReport focus on varying critical topics related to disaster risk management and are published in German and English. The report includes the WorldRiskIndex, which identifies the risk of an extreme natural event becoming a disaster for 181 countries worldwide.
The report has been published annually by Bündnis Entwicklung Hilft since 2011 – until 2016 in cooperation with the Institute for Environment and Human Security (UNU-EHS) at the United Nations University in Bonn. Since 2018, the WorldRiskReport has been published jointly with the Institute for International Law of Peace and Armed Conflict (IFHV) at the Ruhr University Bochum.
The report aims to highlight linkages between extreme natural events, climate change, disaster risk reduction, and social inequality at the global level to provide a realistic picture of disasters and risk. Through the close exchange between science and development policy practice, approaches to solutions and recommendations for action for current challenges in disaster risk reduction, climate change adaptation, and development policy are identified.
Focal topics
With the focus topics in the WorldRiskReport, the quantitative disaster risk analysis in the form of the WorldRiskIndex is supplemented by several focus articles on central aspects of disaster risk and its management. In addition to the focus articles, the reports usually contain several case studies intended to provide insights into the project work of the member organizations of Bündnis Entwicklung Hilft on the respective focus topics. For the focus articles and case studies, Bündnis Entwicklung Hilft and IFHV cooperate with external experts from science and practice, thus aiming to provide a comprehensive and multi-layered perspective on disaster risk as a complex phenomenon. So far, the following focal topics have been addressed in the context of the WorldRiskReports:
2021: Social Protection
2020: Forced Displacement and Migration
2019: Water Supply
2018: Child Protection and Children's Rights
2017: Analysis and Prospects
2016: Logistics and Infrastructure
2015: Food Security
2014: The City as a Risk Area
2013: Health and Healthcare
2012: Environmental Degradation and Disasters
2011: Governance and Civil Society
WorldRiskIndex
The WorldRiskIndex uses 27 aggregated, publicly available indicators to determine disaster risk for 181 countries worldwide. Conceptually, the index is composed of exposure to extreme natural hazards and the societal vulnerability of individual countries. Earthquakes, cyclones, floods, droughts, and climate-induced sea-level rise are considered in the exposure analysis. Societal vulnerability is divided into susceptibility to extreme natural events, lack of coping capacities, and lack of adaptive capacities. All index components are scaled to the value range from 0 to 100. The higher a country's index score on the WorldRiskIndex, the higher its national disaster risk. For illustration and better comparability of the results, all countries are divided into five nearly equal classes using the quintile method.
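The aggregation described above can be illustrated with a toy calculation. Treating vulnerability as the mean of its three components and the index as the product of exposure and vulnerability is a simplified sketch of the concept, not the official formula, and the indicator values below are invented; the quintile classification follows the five-class scheme mentioned in the text.

```python
def world_risk_index(exposure, susceptibility, lack_coping, lack_adaptation):
    """Toy WorldRiskIndex-style score; all inputs scaled 0-100.

    Vulnerability is taken as the mean of its three components and the
    index as exposure x vulnerability, rescaled back to 0-100 -- an
    illustrative simplification, not the published methodology.
    """
    vulnerability = (susceptibility + lack_coping + lack_adaptation) / 3.0
    return exposure * vulnerability / 100.0

def quintile_classes(scores):
    """Assign each score a class 1 (very low risk) to 5 (very high risk)."""
    ranked = sorted(scores)
    n = len(ranked)
    def cls(s):
        below = sum(1 for r in ranked if r < s)   # rank position of s
        return min(below * 5 // n + 1, 5)
    return [cls(s) for s in scores]

# Invented indicator values for one country:
score = world_risk_index(exposure=50, susceptibility=60,
                         lack_coping=60, lack_adaptation=60)
```

A higher score places a country in a higher risk class, matching the statement that the index ranges from 0 to 100 and that countries are divided into five nearly equal classes.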
The primary methodological concept of the index was developed jointly by the United Nations University Institute for Environment and Human Security (UNU-EHS) and Bündnis Entwicklung Hilft and first published in 2011.
Since 2017, the index has been calculated and methodologically advanced by IFHV. As part of the index's evolution to date, new datasets on exposure and societal vulnerability have been included, and the number of countries analyzed has been expanded.
Media resonance
The publication of the WorldRiskReports has regularly reached widespread media resonance in Germany in recent years. The WorldRiskReport also attracts attention in the international press.
The 2019 and 2020 WorldRiskReports on the focal topics of water supply and forced displacement and migration were presented at two conferences organized by the European Commission's Directorate-General for European Civil Protection and Humanitarian Aid Operations (ECHO) and discussed by experts from academia, politics, and development policy practice.
Related publications
Based on the concept of the WorldRiskIndex, index-based risk analyses for freshwater regions, the global fisheries sector, and mangrove areas were conducted in cooperation between Bündnis Entwicklung Hilft and the global environmental organization The Nature Conservancy, as well as in collaboration with several universities such as the University of California Santa Cruz and McGill University. The cooperation project between Bündnis Entwicklung Hilft and The Nature Conservancy is part of the International Climate Initiative (IKI) and was funded by the German Federal Ministry for the Environment, Nature Conservation, and Nuclear Safety.
References
External links
Project page WorldRiskReport
WorldRiskReport project description IFHV
WorldRiskReport project description United Nations University
International rankings
Hazard analysis
Disaster preparedness
Rank-width
Rank-width is a graph width parameter used in graph theory and parameterized complexity, and defined using linear algebra.
It is defined from hierarchical clusterings of the vertices of a given graph, which can be visualized as ternary trees having the vertices as their leaves. Removing any edge from such a tree disconnects it into two subtrees and partitions the vertices into two subsets. The graph edges that cross from one side of the partition to the other can be described by a biadjacency matrix; for the purposes of rank-width, this matrix is defined over the finite field GF(2) rather than using real numbers. The rank-width of a graph is the maximum of the ranks of the biadjacency matrices, for a clustering chosen to minimize this maximum.
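The GF(2) rank of a cut's biadjacency matrix, as described above, can be computed with a few lines of Gaussian elimination over GF(2). This is only an illustration of the rank computation for a single vertex partition, not a rank-width solver; the example matrix is the biadjacency matrix of a 4-cycle split into two pairs of vertices.

```python
def gf2_rank(rows):
    """Rank over GF(2) of a 0/1 matrix; each row is given as an int bitmask."""
    pivots = {}                      # leading-bit position -> reduced row
    for row in rows:
        cur = row
        while cur:
            lead = cur.bit_length() - 1
            if lead in pivots:
                cur ^= pivots[lead]  # eliminate the leading bit (XOR = GF(2) add)
            else:
                pivots[lead] = cur   # new independent row found
                break
    return len(pivots)

# 4-cycle 1-2-3-4-1, partition {1,2} vs {3,4}: crossing edges are 1-4 and 2-3,
# so the biadjacency rows (columns = vertices 3,4) are [0,1] and [1,0].
print(gf2_rank([0b01, 0b10]))   # 2
```

Over GF(2), addition is XOR, so rows can be stored as bitmasks and eliminated in O(1) per reduction step; note that the GF(2) rank can differ from the rank of the same matrix over the reals.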
Rank-width is closely related to clique-width: $\mathrm{rw}(G) \le \mathrm{cw}(G) \le 2^{\mathrm{rw}(G)+1} - 1$, where $\mathrm{cw}(G)$ is the clique-width and $\mathrm{rw}(G)$ the rank-width. However, clique-width is NP-hard to compute for graphs of large clique-width, and its parameterized complexity is unknown. In contrast, testing whether the rank-width is at most a constant takes polynomial time, and even when the rank-width is not constant it can be approximated, with a constant approximation ratio, in polynomial time. For this reason, rank-width can be used as a more easily computed substitute for clique-width.
An example of a family of graphs with high rank-width is provided by the square grid graphs. For an $n \times n$ grid graph, the rank-width is exactly $n - 1$.
References
Graph minor theory
Linear algebra
Barry Dwolatzky
Barry Dwolatzky (29 April 1952 – 16 May 2023) was a South African software engineer. He was a professor emeritus at the University of the Witwatersrand Joburg Centre for Software Engineering. Dwolatzky was on University of the People's computer science advisory board. He was an anti-apartheid activist and in the late 1980s he joined the African National Congress's armed wing, Umkhonto we Sizwe ("spear of the nation").
Education
Dwolatzky completed a Bachelor of Science degree in 1975 and a Ph.D. in 1979, both in electrical engineering, at the University of the Witwatersrand. He was a postdoctoral researcher at the University of Manchester Institute of Science and Technology and GEC Marconi.
Career
In 1989, Dwolatzky joined University of the Witwatersrand as a senior lecturer, becoming a full professor in 2000. He was an emeritus professor at the Joburg Centre for Software Engineering. Dwolatzky was on University of the People's computer science advisory board.
Dwolatzky was a fellow of the South African Institute of Electrical Engineers and The Institute of IT Professionals South Africa (IITPSA).
Death
Dwolatzky died in Johannesburg on 16 May 2023, at the age of 71.
References
External links
1950s births
Year of birth uncertain
2023 deaths
Place of birth missing
20th-century South African engineers
South African electrical engineers
Software engineers
21st-century South African engineers
University of the Witwatersrand alumni
Academic staff of the University of the Witwatersrand
University of the People people
South African computer programmers
White South African anti-apartheid activists
South African anti-apartheid activists
History of flags
A flag is a distinctive piece of fabric used as a symbol, a signalling device, or for decoration. While the origin of flags is unknown, flag-like symbols have been described as far back as 11th century BC China and have been used by other ancient civilisations such as Egypt and Rome.
During the Medieval period, silk from China allowed a variety of peoples, such as the Arabs and the Norse, to develop flags which flew from poles. Developments in heraldry led to the creation of personal heraldic banners for rulers and other important people in the European kingdoms. Flags began to be regularly used on board ships for identification and communication in the Age of Sail. In the 18th century and onwards, a rising tide of nationalism around the world meant that common people began to regularly identify themselves with nation-states and their symbols, including flags. In the modern day, every national entity and many sub-national entities employ flags for identification.
Etymology
While the exact etymological origin is unknown, the word 'flag' first appears in English in the late 15th century. Possible origins include a variation of Middle English flakken, "to flap, flutter" which may further originate from Old Norse flaka, "to flicker, flutter, hang loose." These may be derived from Proto-Germanic flago- and the Proto-Indo-European root plak- ("to be flat"). The word first seems to have come into widespread use in the 16th century and soon came to encompass a variety of items, including banners, ensigns, gonfalons and others.
Proto-Flags
The origin of flags is unknown. Some of the earliest known banners come from ancient China, where they were used to identify different parts of the army. For example, it is recorded that the armies of the Zhou dynasty in the 11th century BC carried a white banner before them, although no extant depictions of these banners exist. An early representation of such Chinese flags is a low-relief sculpture on the tomb of Emperor Wu of Han that shows two horsemen bearing banners attached to poles and staffs.
Early representations of standards can be found on Egyptian bas-reliefs such as the Narmer Palette, which is said to be the earliest representation. These vexilloids, or flag-like standards, were symbols of the nomes of pre-dynastic Egypt. In fact, ancient Greek writers attributed the creation of standards to the Egyptians. According to Diodorus, Egyptian standards generally consisted of figures of sacred animals on the end of a staff or spear. Another often used symbol was a figure resembling an expanded semi-circular fan.
Roman standards
While China, Greece, and Persia are all known to have used cloth banners to designate parts of their armies in ancient times, it was the Romans who made the most widespread use of flag-like symbols to represent their army. These banners, each known as a vexillum, were used to represent each army unit starting around 100 BC. The vexillum was composed of a piece of cloth fastened to a cross bar at the top of a spear, sometimes with fringe around the outside. The only extant Roman vexillum is dated to the first half of the 3rd century AD and is housed in the Pushkin Museum of Fine Arts. It is an almost square piece of coarse linen cloth with the image of the goddess Victoria.
Roman emperors used a banner similar in form called a labarum. It frequently bore upon it a representation of the emperor, sometimes by himself and sometimes accompanied by the heads of members of his family. It became associated with Constantine the Great and later Christianity after he supposedly marched under a labarum bearing the Chi Rho. These Roman standards were guarded with religious veneration in the temples of the metropolis and chief cities of the empire.
Another Roman standard that was widespread by the time of the 4th-century author Vegetius was the draco or dragon, a symbol originally borrowed from the Parthians some time after the death of Trajan. It took the form of a dragon affixed to a lance, with silver jaws and a body of colourful silk. When the wind blew through its open jaws the body would become inflated, similar to a windsock. It would sometimes contain a device to produce a shrill whistle, and was used to intimidate enemy troops.
Medieval period
With the innovation of silk in China and its subsequent propagation along the Silk Road, flags as we know them today began to develop. Flags comprising cloth attached to an upright pole at one side seem to have first been regularly used by the Saracens, who introduced them to the Western world, although they would not gain popularity there until the 9th century. Flags are often mentioned in the early history of Islam and may have been copied from India. Tradition holds that a black flag was flown by Muhammad during the Conquest of Mecca in the 7th century, and that his followers flew green flags. There is evidence of such standards, generally triangular and flown from a vertical flag pole, being used from the grandsons of Muhammad during the Rashidun Caliphate onward. Subsequent Islamic dynasties used a variety of different coloured banners to identify themselves, often drawn from flags supposedly flown by the prophet during his life.
Another 9th century vertical flying flag is the raven banner that was used widely by the Vikings. Although no complete illustration of this banner exists, it probably appears on Northumbrian coins from the start of the century and later, in the 11th century, is most likely seen on the Bayeux Tapestry.
Heraldic flags
A major stage in the development of flags in the west was the art of heraldry. Heraldry, which developed in approximately the second quarter of the 12th century, primarily deals with identification by means of devices placed on shields, with these symbols becoming the means by which knights and later other upper-class individuals became identified. After some time, these heraldic badges came to be emblazoned on flags. At first, the banners were extensions of the gonfanon, which consisted of a flag tied to a lance, but they soon became diverse displays of important people's arms. Traditionally, there are several types, such as pennons, heraldic standards, and banners of arms.
The pennon was a small, elongated flag with either a pointed or swallow-tailed end. It would have been marked with the badge or other armorial ensign of the owner and be displayed upon their lance as a personal ensign. A banner of arms is square or oblong and larger than the pennon, bearing the entire coat of arms of the owner, composed precisely as upon a shield but in a square or rectangular shape.
The heraldic standard appeared around the middle of the 14th century, and it was in general use by personages of high rank during the two following centuries. The standard appears to have been adopted for the special purpose of displaying badges. The standard was often more versatile than a banner of arms because no one could possess more than one banner, since it displayed a set of unchangeable heraldic arms. A single individual, however, could possess as many standards as they wanted, since this flag displayed badges, which could be created at any time the owner wished. For example, the standards of Henry VII were mostly green and white (the colours of the Tudor livery) and had in one "a red firye dragon;" in another, "a donne kowe;" and in a third, "a silver greyhound and two red roses."
Heraldic standards are still in use in Scotland; at Highland gatherings, the standard of the clan chiefs is displayed on the pipes of the Pipe major of the clan.
Flags during the crusades
Flags developed further during the Crusades, beginning at the end of the 11th century. During the First Crusade, banners were used by kings and nobles in an extension of the practices in Europe, with some holy orders also adopting them. However, about a century into the period, the rank and file from different realms began to differentiate themselves by means of variations in the colour of the crosses upon their shoulders. In 1188 Philip II of France decreed that his colours be added to a cross (a red cross on a white field) and soon after Henry II of England decreed the use of a white cross on a red field. These coloured crosses would for some unknown reason be swapped, but remained in use in England and France as symbols of the kingdoms, in the form of Saint George's Cross and the Cross of St. Denis respectively. Other realms had similar stories; for example, the black and white cross of the Teutonic Knights was also born of the crusades.
Maritime flags
Flags have probably been used at sea as a form of communication since the earliest days of trading ships, with some evidence of the practice as far back as the Ancient Greeks. As early as the 13th century, the Italian maritime republics were using distinct flags for naval identification, and by the 16th century English and Scottish ships were flying flags to show their country of origin, with designs derived from badges worn by their respective soldiers during the Middle Ages. Flags also became the preferred means of communication at sea, resulting in various systems of flag signals; see international maritime signal flags.
National flags
Originally, a flag representing a country would generally be the personal flag of its ruler; however, over time, the practice of using personal banners as flags of places was abandoned in favour of flags that had some significance to the nation, often its patron saint. Early examples were the maritime republics such as Genoa, which could be said to have had a national flag as early as the 12th century. However, these were still mostly used in the context of marine identification.
An early example that prefigured developments to come was the Prince's Flag, which emerged as a flag of resistance and a symbol of liberty during the Eighty Years' War, which led to the formation of the United Provinces. It is notable for being one of the first European flags to break with the medieval tradition of cross flags representing realms.
Although some flags date back earlier, widespread use of flags outside of a military or naval context began only with the rise of the idea of the nation-state at the end of the 18th century, and national flags are particularly a product of the Age of Revolution. Revolutions such as those in France and America called for people to begin to think of themselves as citizens rather than subjects under a king, and thus necessitated flags that represented the collective citizenry, not just the power and right of a ruling family. With nationalism becoming common across Europe in the 19th century, national flags came to represent most of the states of Europe. Flags also began fostering a sense of unity between different peoples, such as the Union Jack representing a union between England and Scotland, or began to represent unity between nations in a perceived shared struggle, for example, the Pan-Slavic colors or later Pan-Arab colors.
As Europeans colonised significant portions of the world, they exported ideas of nationhood and national symbols, including flags, with the adoption of a flag becoming seen as integral to the nation-building process. Political change, social reform, and revolutions combined with a growing sense of nationhood among ordinary people in the 19th and 20th centuries led to the birth of new nations and flags around the globe.
With so many flags being created, interest in these designs began to develop and the study of flags, vexillology, at both professional and amateur levels, emerged. After World War II, Western vexillology went through a phase of rapid development, with many research facilities and publications being established.
Flags
Cultural history
Sheldon spectrum
The Sheldon spectrum is an empirically observed feature of marine life by which the size of an organism is inversely correlated with its abundance in the ocean. The spectrum is named after Ray Sheldon, a marine ecologist at Canada's Bedford Institute of Oceanography in Dartmouth, Nova Scotia. Sheldon and colleagues first suggested the existence of the inverse correlation based on seagoing measurements of plankton made with a Coulter counter in the late 1960s, most notably during the first circumnavigation of the Americas aboard the CCGS Hudson.
The inverse correlation implies that biomass density as a function of logarithmic body mass is approximately constant over many orders of magnitude. For example, when Sheldon and his colleagues analyzed a plankton sample in a bucket of seawater, they would tend to find that one third of the plankton mass was between 1 and 10 micrometers, another third was between 10 and 100 micrometers, and a third was between 100 micrometers and 1 millimeter. For the biomass in each interval to remain constant, the number of organisms must decrease in inverse proportion to their body mass as size increases. Thus, the rule predicts that krill, which are a million times smaller than tuna, are a million times more abundant in the ocean, a prediction which appears to be true.
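The inverse scaling can be sketched numerically. The following is a toy model, not code from Sheldon's papers; the bin biomass and the body masses are arbitrary illustrative values:

```python
# Toy model of the Sheldon spectrum: assume each logarithmic (decade)
# size bin [m, 10*m) holds the same total biomass B. The number of
# individuals in a bin is then roughly B divided by a typical body
# mass in that bin, so abundance scales inversely with body mass.
B = 1.0  # biomass per decade bin (arbitrary units)

def individuals_per_bin(mass, biomass_per_bin=B):
    """Approximate count of individuals in the decade bin around `mass`."""
    return biomass_per_bin / mass

krill_like = individuals_per_bin(1.0)    # organism of unit mass
tuna_like = individuals_per_bin(1.0e6)   # organism a million times heavier
print(krill_like / tuna_like)            # roughly 1e6: a million times more abundant
```

Under the constant-biomass assumption, halving no parameter is needed: the abundance ratio simply mirrors the mass ratio.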
There is strong evidence that human behavior, particularly overfishing and whaling, has modified the Sheldon spectrum for larger species, and it is unknown what long-term effects such global alteration may have.
Marine organisms
Planktology
Marine biology
HD 193307
HD 193307 (HR 7766; Gliese 9691) is the primary of a binary star located in the southern constellation Telescopium. It has an apparent magnitude of 6.27, placing it near the limit for naked-eye visibility even under ideal conditions. The star is located relatively close, at a distance of 102 light years based on Gaia DR3 parallax measurements, but it is receding with a heliocentric radial velocity of . At its current distance, HD 193307's brightness is diminished by 0.18 magnitudes due to extinction from interstellar dust and it has an absolute magnitude of +3.80. HD 193307 has a relatively high proper motion, moving at a rate of 437 mas/yr.
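The quoted absolute magnitude can be roughly cross-checked with the standard distance modulus. This is only a sketch: the parsec conversion is the standard one, and the 0.18 mag of interstellar extinction is neglected here:

```python
import math

# Distance modulus: M = m - 5*log10(d / 10 pc), ignoring the small
# interstellar extinction quoted in the article.
LY_PER_PC = 3.2616          # light-years per parsec

m_app = 6.27                # apparent magnitude
d_pc = 102 / LY_PER_PC      # 102 light-years is about 31.3 parsecs
M_abs = m_app - 5 * math.log10(d_pc / 10)
print(round(M_abs, 2))      # about +3.79, consistent with the quoted +3.80
```

The small remaining difference is within the rounding of the quoted distance and magnitude.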
There have been disagreements over the stellar classification of the object. Two sources give a class of F9 V, indicating that it is an ordinary F-type main-sequence star. David Stanley Evans gave it a slightly more evolved class of G2 IV-V, meaning that it is a G-type star with a luminosity class intermediate between a subgiant and main sequence star. Nancy Houk's spectral classification catalog lists HD 193307 as G0 V.
The accepted class for HD 193307 is F9 V. The object's current luminosity is 1.49 magnitudes above the ZAMS, indicating that HD 193307 is somewhat evolved. It has 1.15 times the Sun's mass and a slightly enlarged radius of . It radiates 2.61 times the luminosity of the Sun from its photosphere at an effective temperature of , which gives it the typical whitish-yellow hue of a late F-type star. At 7.55 billion years, HD 193307 is nearly twice the Sun's age. The star is metal-deficient, with an iron abundance 46% that of the Sun ([Fe/H] = −0.34), and it spins slowly with a projected rotational velocity lower than .
WT 703 is a 12th magnitude star located 21.3" away along a position angle of 300°. It has a class of M2.5, indicating that it is an M-type star. WT 703 is located at around the same distance as HD 193307 and has a similar proper motion.
F-type main-sequence stars
9691
7766
100412
193307
CD-50 12929
M-type main-sequence stars
Telescopium
Binary stars
Telescopii, 86
Super weaner
A super weaner (also super-weaner or superweaner) is an exceptionally large elephant seal at weaning age. Super weaners may reach their large sizes by stealing milk from nursing female elephant seals or by being adopted by an additional mother elephant seal.
Background
Elephant seals have an abrupt weaning process, in which the weaned juvenile seal does not receive assistance from its parents in finding food. The postweaning period is important for the seals' development, with a high mortality rate.
Phenomenon
A super weaner, an elephant seal at weaning age which obtains milk from multiple females, can weigh . A typical elephant seal at the same age weighs between and . While most mother elephant seals will bite weaned elephant seals that attempt to suckle, some will allow it for unknown reasons; some super weaners also obtain the additional milk through theft.
Instances
A study carried out on Año Nuevo Island in California between 1972 and 1977 observed the existence of some recently weaned northern elephant seal pups which either stole milk from nursing females or were "adopted by foster mothers." Male pups were more persistent and successful at stealing milk, and the largest weaners were universally male, including exceptionally large weaners which the study defined as "superweaners". These seals were "so large that their corpulence impeded their movements"; observation of two of them showed that their ability to acquire additional milk after being weaned was a major factor in their size. The study additionally found that the large weaner seals reach their large sizes through two distinct strategies: by stealing milk from nursing female elephant seals ("milk thieves"), or by being adopted by an additional mother elephant seal ("double mother-sucklers").
A 2014 study published in Proceedings of the Royal Society B explored the benefits of high fat stores in female northern elephant seals, which relate to their buoyancy. Daniel P. Costa, one of the authors, later stated that the findings might explain a known phenomenon in which super weaners are rarely seen again after departing the rookery. Costa suggested that super weaners are likely so buoyant that they have difficulty figuring out how to feed.
A 2021 study, also in Proceedings of the Royal Society B, removed super weaners from modeling analyses because it utilized "the assumption that a single mother nursed a single pup throughout the average lactation period". Super weaners were defined as weaners weighing more than . This study found that both male and female pups can become super weaners, although they are more likely to be male (64% of 94 super weaners measured).
In popular culture
In "The Maternal Combustion", a 2015 episode of The Big Bang Theory, Leonard Hofstadter compares Sheldon Cooper to an elephant seal pup who steals milk because Hofstadter feels Cooper is getting all the attention from Hofstadter's mother in addition to his own. Cooper replies that because both mothers are seeking to give him their attention, he is not a super-weaner, but rather a double mother suckler.
Mirounga
Marine mammals
Animal developmental biology
Buoyancy
Animal physiology
Reproduction in mammals
Uranium mining in the Bancroft area
Uranium mining around Bancroft, Ontario, was conducted at four sites, beginning in the early 1950s and concluding by 1982. Bancroft was one of two major uranium-producing areas in Ontario, and one of seven in Canada, all located along the edge of the Canadian Shield. In the context of mining, the "Bancroft area" includes Haliburton, Hastings, and Renfrew counties, and all areas between Minden and Lake Clear. Activity in the mid-1950s was described by engineer A. S. Bayne in a 1977 report as the "greatest uranium prospecting rush in the world".
As a result of activities at its four major uranium mines, Bancroft experienced rapid population and economic growth throughout the 1950s. By 1958, Canada had become one of the world's leading producers of uranium; the $274 million of uranium exports that year represented Canada's most significant mineral export. By 1963, the federal government had purchased more than $1.5 billion of uranium from Canadian producers, but soon thereafter the global uranium market collapsed and the government stopped issuing purchase contracts.
Three of the uranium mines are decommissioned, and one is undergoing rehabilitation. A twofold increase in lung cancer development and mortality has been observed among former mine workers. Bancroft continues to be known for gems and mineralogy.
Geology and mineralogy
During the most recent ice age, in the area of what is now Bancroft, Ontario, ancient glaciers removed soil and rock, exposing the Precambrian granite that had been the heart of volcanic mountains on an ancient sea bed. During the Grenville orogenies, sedimentary rocks were transformed by heat and pressure into banded gneiss and marble, incorporating gabbro and diorite (rich in iron and other dark minerals). Some uranium ores in these structures are about 1,000 million years old, while others are understood to be 1,200 million years old.
In Canada, 99% of known uranium occurrences and 93% of properties producing uranium are located on the geological shield known as the Canadian Shield, almost all on the western and southern edges of it.
The Grenville Province in Eastern Canada has small quantities of uranium-thorium-rare earth element in granitic pegmatite which appear in numerous locations around the Bancroft area, giving Bancroft the moniker of the "Mineral Capital of Canada". The "Bancroft area" includes Haliburton, Hastings, and Renfrew counties, and all areas between Minden and Lake Clear.
Bancroft is unusual as one of the limited global locations where uranium is extracted from intrusive rocks, notably the only one from the pegmatite type. Other locations include the Rössing and Husab mines in Namibia, Kvanefjeld in Greenland, Palabora in South Africa, along with the Radium Hill mine and sites in Southern Australia's Olary Province. The key geological features in the Bancroft area relevant to uranium mining are three circular granitic complexes, each about across. They are (from southwest to northeast):
The Cheddar complex is a circular double dome of granitic rock surrounded by paragneiss, para-amphibolite, and pyroxene granulite. All these rocks contain younger granitic and syenitic intrusions.
The Cardiff plutonic complex consists mainly of three southeast-dipping cylindrical sheet intrusions: the Centre Lake granite, the Monck Lake granite, and the Deer Lake syenite. They intrude metasedimentary rocks.
The Faraday granite is a sheet of granite covered by gneisses and metagabbro. The Faraday granite sheet dips to the south and it is the southern edge of the Hastings Highland gneiss complex.
Gems and other resources
Finds of gold in nearby Madoc, Ontario (then known as Eldorado), from 1886 to 1887 inspired many to seek gold around Bancroft. Surface gold was found in October 1897 by R. Bradshaw, southwest of Bancroft (towards Bobcaygeon). This triggered a rush of prospectors to the area. Iron and magnetic ores were mined from 1882, gold, copper and mica from the late 1890s, and marble from 1911. More than 1,600 identifiable minerals and non-metallic collectibles can be found in the area, including 175 species of gemstones.
Aside from uranium, mines in the Bancroft area produced sought-after gemstones of 175 species, most notably calcite, clinohumite, corundum, diopside, dravite, edenite, euxenite-(Y), ferri-fluoro-katophorite, fluorapatite, fluorite, fluoro-richterite, ilmenite, kainosite-(Y), molybdenite, nepheline, phlogopite, crystals of the pyrochlore supergroup, thorite, titanite, tremolite, uraninite, uranophane, and zircon. Madawaska Mine produced samples of the very rare kainosite-(Y), globally renowned samples of the common calcite and fluorite, "superb" samples of ilmenite, and "fine" samples of molybdenite.
Marble mined in Bancroft was used to make the floor of the Whitney Block and the Royal Ontario Museum.
Uranium mining
Uranium was first discovered in the area of Cardiff, Ontario, in 1922 by prospector W. M. Richardson. His find was first called "the Richardson deposit" and later "the Fission property" and is located east of the Wilberforce community of Cardiff township. Between 1929 and 1931, attempts were made to extract radon from the uranium ore dug from a tunnel driven into a hill.
In 1943, during World War II, global interest in mining uranium escalated. The government sent geologists to Bancroft, who concluded that all known uranium deposits were unviable due to accessibility, size and uranium concentration. 1948 saw an increase in private staking of claims for uranium, but due to the difficulties in extracting uranium from lower grade ore, none developed into mines.
In 1953, "intelligent prospecting and excellent preliminary exploration" by G. W. Burns, R. J. Steele and Arthur H. Shore led to the successful development of the area. Between 1953 and 1956, 100 claims were staked around Bancroft, and at approximately the same time another ten mines were started in the Elliot Lake area. Burns and Steele discovered the Centre Lake deposits, which were developed into Bicroft Mine, while Shore's prospect became the Faraday Mine. Activity in the mid-1950s was described by engineer A. S. Bayne in a 1977 report as the "greatest uranium prospecting rush in the world".
Uranium mining operations in the Bancroft area were conducted at four sites, beginning in the early 1950s and concluding by 1982. Each of these used underground hard-rock mining methods to access and collect uranium ores from the surrounding granite and gneiss. The mines were:
Bicroft Mine
In 1952 G. W. Burns, an amateur prospector from Peterborough, found uranium deposits 16 kilometres (10 mi) southwest of Bancroft, near Cardiff township and Paudash Lake. At another property near Centre Lake (between Cheddar and Cardiff), he observed purple rocks which he knew to be fluorspar, an indicator of radioactive geology. He brought the samples to Robert Steele in Peterborough who used a Geiger counter to confirm their radioactivity. The two then formed a partnership and immediately began staking land claims. Their slow careful staking disadvantaged them as others rushed to the area and staked their own claims; nonetheless, their work paid off and they started mining what the Geological Survey of Canada (GCS) confirmed to be uraninite.
In late 1952, Burns sold his property to a Toronto syndicate that formed into the Centre Lake Uranium Mines Limited, led by C. C. Huston. The company worked on the surface, opened an adit and started diamond drilling, mostly every , sometimes between the holes. A shaft was created in 1954. Simultaneous to this, Croft Uranium Mines Limited, a subsidiary of Macassa Mines Limited, formed in 1953 and discovered uranium north of the original site. In 1955 the two sites were merged under the ownership of Bicroft Uranium Mines Limited, with work focused on the Centre Lake part of the property. A shaft was sunk to and ten levels created. A treatment plant capable of processing of ore per day was built and operations started in late 1956. Production in 1957 was of U3O8 from ore with a grade of 0.0859%. Production increased to per day in 1958 and exploration started to the south of the site. Mining continued until 1963, producing about of uranium ore.
of tailings remain on site in two impoundments. Repairs to the decommissioned site, including the addition of vegetation over the tailings, were completed in 1980. Subsequent upgrades of the dams were completed in the 1990s. The site is now a wetland.
The uranium deposits of Bicroft mine occur in a set of eastward-dipping en-echelon lens-shaped dykes of syenite and granite, up to wide and long, which extend over an area of about within a north–south oriented belt of amphibolite and paragneiss (the eastern part of the Centre Lake granite). The ore minerals are uranothorite and uraninite. The uranium-to-thorium ratio is variable. Pyroxene-rich granite of this area is richer in thorium.
Faraday Mine/Madawaska Mine
The area that is now known as Madawaska Mine was first mapped by Jack Satterly in the early 1950s.
Arthur H. Shore, an independent prospector, first found uranium at his lot on Faraday township in 1949. He founded Faraday Uranium Mines Limited in 1949, but injured himself shortly afterwards. Newkirk Mining Corporation led work in 1952, including diamond drilling in December 1952 which helped identify seven main zones of uranium ore. Further drilling the following year identified additional deposits to a depth of . 1954 drilling found more uranium and adits were created. By 1955 it was established that there were of ore that was 0.112% U3O8 (uranium oxide). A sale price was agreed in January 1956. A shaft was sunk from an adit, from which five levels were established. A treatment plant with a capacity was built and operations started in April 1957. In 1958, the treatment capacity was increased to per day in order to support processing of ore from Greyhawk Mine. Production in 1957 was from ore with a grade of 0.0859% U3O8. Between 1948 and 1964 Faraday Mine had produced $54 million of ore.
After $7 million of investment to rehabilitate the mine, it reopened as the Madawaska Mine in 1976 and production continued to 1982. The shaft into the uranium-bearing pegmatite reached a depth of . During this period, the mine was producing of ore per day.
In 2015, inspections found improper surface protection of the tailings and the site has been undergoing rehabilitation.
At the Faraday and Madawaska mines, lens-shaped bodies of ore occur in granitic pegmatite dykes within an area of steeply-dipping amphibolite and metagabbro at the southern edge of the Faraday granite. Uraninite and uranothorite are the principal ore minerals at Faraday and Madawaska. Other radioactive minerals found at this locality include allanite, cyrtolite (a uranium-thorium rich variety of zircon), uranophane-α and uranophane-β. The uranium-to-thorium ratio is about 2-to-1. Uranium ore concentrations range from 0.07 to 0.4 per cent U3O8 (uranium-oxide.)
Dyno Mine
Prospector Paul Mullette discovered radioactive occurrences in November 1953 that were sold to Dyno Mines Limited (later Canadian Dyno Mines Limited). The company undertook diamond drilling that same month simultaneous to geological mapping. This identified three zones, resulting in drilling, which discovered two additional zones. Surface diamond drilling of 124 holes at intervals occurred through 1954 and 1955. A shaft, in the "B" zone, was sunk creating five levels. A price to sell uranium was agreed, and an ore treatment plant with capacity was started in 1956. Production started in May 1958.
At Dyno mine, at the eastern edge of the Cheddar granite, five zones of ore occur as uranothorite and uraninite in a set of steeply-dipping lens-shaped dykes of pegmatitic granite wide, which intrude into gneisses. Ore occurs across the full width of narrower dykes (up to about wide); in wider dykes, ore is usually restricted to only parts of the dyke. The ores are closely associated with a set of north–south trending fractures. Ore concentrations vary from 0.05 to more than 1.00 per cent U3O8, averaging 0.093 per cent. The ore often contains magnetite, particularly where the ore is of higher grade. Cyrtolite and allanite also occur.
Greyhawk Mine
Radioactive materials were first discovered in Faraday Township in 1955 by K. D. Thompson and M. Card, two employees of Goldhawk Porcupine Mines Limited who were surveying with Geiger counters. They found exposed rock to be radioactive across a area. Ownership subsequently shifted to Greyhawk Uranium Mines Limited. Diamond drilling followed at 15- to 122-metre (50 to 400 ft) intervals at depth. An exploration shaft was begun in 1956 and three levels created. Operations uncovered no high-grade ore deposits, leaving the average grade below that of other Bancroft mines. Mining operations subsequently stopped in 1959.
Ore was transferred for processing at the Faraday Mine site, starting August 1957 at a rate of about per day. By the end of 1957, at a value of had been shipped. Through 1958, production was per day averaging at 0.082% U3O8. The tonnage of ore was 30% less than feasibility estimates.
Faraday Uranium Mines Limited purchased the site in 1962. Madawaska Mines Limited was formed in 1975 and purchased the mine, as well as the Faraday Mine. Mining operations restarted in 1976 and continued until 1982.
After mining, the uranium ore was treated in acid leaching plants located at the mines. The leaching process produced yellowcake high-grade uranium compounds which were either processed further at the Port Hope refinery or sold to the US government for processing in that country. Processing uranium ore from Bancroft cost $3.00 per ton.
In the Greyhawk area, metagabbro is intruded by east–west trending pegmatitic granite dykes up to wide. Ore bodies of uranothorite and uraninite with an average length of and average width of occur within these pegmatitic dykes, often at the contact with metagabbro. Radioactive minerals are concentrated in the more mafic parts of the host rock. Ores of 0.095 per cent U3O8 have been reported from this area.
Other mines
Located at , the Kemp Uranium Mine, sometimes called the Kemp Property or Kemp Prospect, produced uranium and a world-class specimen of thorite between 1954 and 1955.
Nu-Age Uranium Mines Limited owned the Old Smokey Occurrence, also known as the Tripp property and the Montgomery property. Surveying was done by Nu-Age Uranium Mines in 1955 and by Imperial Oil Limited in 1975. Although a 50-ton-per-day concentrator was known to be on site in the 1950s, the production quantities are unknown.
Blue Rock Cerium Mines Limited started exploratory work for a mine at a location in Monmouth township (now known as Highlands East, Ontario) during 1954. Silver Crater Mines company purchased the Silver Crater Mine in Cardiff, Ontario in 1953 hoping to find uranium. The mine produced betafite crystals, which contained 15% to 20% uranium.
Economic and political influence
Eldorado Mining and Refining Limited was the crown company that purchased all uranium oxide in Canada; it entered into contracts with mine owners at fixed prices. Faraday Mine alone produced $54 million of uranium ore, creating a rapid economic boom. The mine succeeded due to a combination of economic factors, including Bancroft's geographical proximity to the only uranium processing facility in Canada (located at Port Hope) and a good road and rail network.
Employment of miners in Bancroft started in 1955 and peaked in 1958 at around 1,600 jobs. Mine workers unionized in 1957, forming Local 1006 Bancroft Mine and Mill Worker's Union. Housing for miners was quickly established around the mines and in nearby Bancroft village, which extended to cover . Other construction quickly followed, including two single-men's bunkhouses, a canteen, an eleven-room school, an ice-curling rink, and a recreation center. In 1957, a swimming pool was started.
By 1958, Canada had become one of the world's leading producers of uranium; the $274 million of uranium exports that year represented Canada's most significant mineral export. By 1963, the federal government had purchased more than $1.5 billion of uranium from Canadian producers, but soon thereafter the global uranium supply market collapsed and the government stopped issuing contracts to buy.
Mining in Bancroft initially stopped in 1964 due to global reductions in demand for uranium. Local Catholic priest Henry Joseph Maloney (brother of former Ontario Ombudsman Arthur Maloney, and also brother of Minister of Mines James Anthony Maloney) rallied the community to demand support from the provincial and federal governments. Canadian Prime Minister John Diefenbaker, relying on an old agreement with the United Kingdom to buy uranium from Canada, was able to prolong the life of the mine by eighteen months, giving the community time to plan for the closure.
Some mines re-opened during the 1970s energy crisis, although, by the early 1980s, uranium demand was again down, with global energy consumption growing at 2%, much less than the expected 7%. The price of uranium dropped from US$43.57 per pound in 1979 to US$23.50 per pound in March 1982.
Combined with environmental concerns about the nuclear industry following the Three Mile Island accident and the increasing costs of building nuclear power plants, these circumstances led to the cancellation of a contract to buy uranium by the Italian energy company Agip. While uranium mining at Elliot Lake continued to grow, the remaining uranium mining in Bancroft ended in 1982, closing the mines and jeopardizing the local economy.
Regulatory environment
The Atomic Energy Control Board (AECB) issued licenses for uranium mines and mills in Canada, and began regulating uranium mines in 1977. As a result of this, mines that closed prior to 1977 (i.e. Bicroft and Dyno Mines) were able to abandon their sites without any regulatory oversight. Faraday Mine/Madawaska Mine and Greyhawk Mine both resumed mining from 1976 until 1982, so their operation and closure had AECB oversight.
Greyhawk Mine's ore was processed at the mill located at Madawaska Mine, leaving no tailings on the Greyhawk site. As a consequence, the primary regulated hazards are present only at the Faraday/Madawaska Mine, which has resulted in ongoing environmental monitoring by the AECB's successor organization, the Canadian Nuclear Safety Commission (CNSC).
Mineral and environmental legacy
After the closure of the mines, the various tailing sites attracted mineral collectors, especially to the annual Rockbound Gemboree in which tourists travelled to Bancroft in search of gems and minerals. Reserves of of ore, averaging 0.065% U3O8, remain in the ground at Greyhawk Mine. Dyno Mine ran out of uranium ore in 1959. In 2007, a $3 million uranium development project was underway in nearby Haliburton.
1978 and 1980 studies found that the natural weathering of the granite and gabbro rocks left at Greyhawk Mine had caused uranium leaching into the aquifer at concentrations ranging between 1.2 and 380 parts per billion, with higher concentrations measured deeper in the water table and in sediments.
Rural Canadians predominantly rely on groundwater for drinking water supply. Mining activity expanded fissures and widened the area of groundwater contamination. Public health concerns around groundwater contamination focus on uranium and thorium, plus the presence of decay products of both.
In 1988, background radiation levels at parts of Paudash Lake were twenty times the safety limit, and lumps of semi-refined uranium lay in the abandoned Dyno mine buildings. The same year, the Crowe Valley Conservation Authority called for greater supervision of radioactive waste. A 2016 Geological Survey of Canada study noted that 70% of groundwater samples taken from diamond drilling holes, mine shafts and adits had uranium concentrations above national drinking water safety standards of . 2019 sampling found radioactive and hazardous contamination in two of several water samples. Subsequent inspections in 2020 from nearby locations reported water quality to be within provincial standards.
Tailings remain at the Bicroft, Madawaska and Dyno mine sites, where water sampling by the CNSC is ongoing. The Madawaska, Dyno and Greyhawk mines were managed by EWL Management Limited until February 2022, when it was dissolved into its parent company, Ovintiv. Bicroft Mine is owned by Barrick Gold; the owners of all four legacy tailing sites at former mines are responsible for the ongoing management of the sites.
Health legacy for miners
According to a 2012 study published in Nature, there is a "positive exposure-response between silica and lung cancer". Uranium mining produces silica-laden dust at a free silica rate of 5–15% in Bancroft, significantly less than the Elliot Lake mines, which produced ore with 60–70% free silica.
In 1974, the Ontario Workmen's Compensation Board studied 15,094 people who had worked in uranium mines in Bancroft and around Elliot Lake for at least one month between 1955 and 1974. Of those 15,094 people, 94 silicosis cases were found in 1974, of which one was attributed to working in a Bancroft mine; the other 93 were attributed to working in an Elliot Lake mine.
According to the Committee on Uranium Mining in Virginia, mines produce radon gas, which can increase lung cancer risks. Miners' exposure to radiation was not measured before 1958 and exposure limits were not enacted until 1968. Risks to miners at the Bancroft and Elliot Lake mines were investigated, and the official report of that investigation quotes a miner: "We have been led to believe through the years that the working environment in these mines was safe for us to work in. We have been deceived."
The aforementioned 1974 study of 15,094 Ontario uranium miners found 81 former miners who had died of lung cancer. Factoring in the predicted lung cancer rate for men in Ontario led to the conclusion that by 1974 there were 36 more deaths than expected attributable to the Bancroft and Elliot Lake mines combined, with the additional risk appearing to be twice as high for Bancroft miners as for Elliot Lake miners.
A study report for the CNSC undertaken by the Occupational Cancer Research Centre at Cancer Care Ontario tracked the health of 28,959 former uranium miners over 21 years and found a two-fold increase in lung cancer mortality and incidence. An article published in The BMJ (the journal of the British Medical Association) reported an increased lung cancer risk: miners who had worked at least 100 months in uranium mines had a twofold increased risk of developing lung cancer. The study is expected to be updated in 2023.
See also
Uranium mining in the Elliot Lake area
Royal Commission on the Health and Safety of Workers in Mines
Uranium ore deposits
List of uranium mines
List of mines in the Bancroft area
List of uranium mines in Ontario
1974 Elliot Lake miners strike
References
Uranium mines in Ontario
Former mines in Canada
Mining and the environment
History of Canada (1945–1960)
History of Canada (1960–1981)
History of Canada (1982–1992)
Mineralogy
Mining in Ontario
History of mining in Ontario
Economy of Canada
Environmental impact of nuclear power
Lung cancer
Nuclear power
Nuclear energy
Energy in Ontario
Geology of Ontario
History of Hastings County | Uranium mining in the Bancroft area | [
"Physics",
"Chemistry",
"Technology"
] | 4,971 | [
"Nuclear power",
"Physical quantities",
"Power (physics)",
"Environmental impact of nuclear power",
"Nuclear energy",
"Nuclear physics",
"Radioactivity"
] |
69,359,360 | https://en.wikipedia.org/wiki/Nitrogen%20pentahydride | Nitrogen pentahydride, also known as ammonium hydride, is a hypothetical compound with the chemical formula NH5. There are two theoretical structures of nitrogen pentahydride. One is a trigonal bipyramidal NH5 molecule, in which the nitrogen atom and hydrogen atoms are covalently bonded; its symmetry group is D3h. The other predicted structure is an ionic compound composed of an ammonium ion and a hydride ion (NH4+H−). To date, no one has synthesized this substance or proved its existence, and related experiments have not directly observed nitrogen pentahydride; it is only speculated to be a reactive intermediate, based on reaction products. Theoretical calculations show this molecule is thermodynamically unstable, possibly for reasons similar to the instability of nitrogen pentafluoride, so the possibility of its existence is low. However, nitrogen pentahydride might exist under special conditions or at high pressure. Nitrogen pentahydride was considered for use as a solid rocket fuel in research in 1966.
Research and attempts
Some studies suggest that nitrogen pentahydride may exist within the crystal lattices of certain metals, such as mercury and lithium. Related studies have explored the possibility of a substitution reaction with ammonium halides. There have also been attempts to react ammonium salts with deuterides to produce the pentahydride; however, experiments show that it may only be a reactive intermediate, which immediately decomposes into ammonia and hydrogen, and the same is true for experiments using deuterium. All of the studies above rest on theoretical calculations: the existence of nitrogen pentahydride has not been observed, and the substance has not been shown to exist.
One experiment attempted a displacement reaction between ammonium trifluoroacetate and lithium hydride in the molten state, in order to study the possible existence of nitrogen pentahydride:
CF3COONH4 + LiH → CF3COOLi + [NH4H]
In the reaction between ammonium trifluoroacetate and lithium deuteride, the product ammonia contains 85% ordinary ammonia and 15% monodeuterated ammonia, while the product hydrogen contains 66% hydrogen deuteride, 21% hydrogen gas and 13% deuterium gas. In the products collected using tetradeuterated ammonium trifluoroacetate and lithium hydride, the ammonia contains ND3, NHD2 and NH2D, while the hydrogen contains 68% hydrogen deuteride, 18% hydrogen gas and 14% deuterium gas. Therefore, it is speculated that the reaction may proceed by two routes: one is direct decomposition into ammonia and hydrogen; the other first forms an ammonium deuteride reactive intermediate, which partly combines deuteride anions with hydrogen cations to give hydrogen deuteride and ammonia, and partly decomposes via hydride ions or deuterium cations into hydrogen or deuterium gas.
However, the intermediate immediately decomposed into hydrogen and ammonia, and it was impossible to prove its existence. Experiments with deuterium gave the same results:
[NH4H] → NH3 + H2
Structure
Several theoretical studies of nitrogen pentahydride conclude that it is unlikely to form an ionic crystal of hydride and ammonium ions. However, it is possible that a hydrogen is bound to one of the hydrogen atoms of ammonium. It may also be similar to nitrogen pentafluoride, forming a three-center two-electron bond similar to that of carbonium ions, or the five hydrogen atoms may be arranged in a trigonal bipyramidal structure around the nitrogen atom.
Related compounds
A compound that is similar to nitrogen pentahydride is the theoretical nitrogen pentafluoride. Its structure is assumed to be tetrafluoroammonium fluoride (NF4+F−). Like nitrogen pentahydride, it is a compound of nitrogen and five of the same atom, but nitrogen pentafluoride is also a hypothetical compound that has never been synthesized; only theoretical research exists. Other pnictogen pentahydrides are theoretically more stable, such as phosphorus pentahydride (PH4H), which is more stable than nitrogen pentahydride but still unstable with respect to decomposition into phosphine and hydrogen gas. Its organic derivatives (phosphoranes) are more stable, such as the stable pentaphenylphosphorus (Ph5P). Other, heavier pnictogen pentahydrides are more likely to exist, such as the theoretical arsenic pentahydride.
References
Ammonium compounds
Hydrides
Hypothetical chemical compounds | Nitrogen pentahydride | [
"Chemistry"
] | 1,017 | [
"Hypotheses in chemistry",
"Salts",
"Theoretical chemistry",
"Ammonium compounds",
"Hypothetical chemical compounds"
] |
65,172,480 | https://en.wikipedia.org/wiki/Bainu%20%28website%29 | Bainu ("how are you?") is a Chinese social networking website written in the Mongolian language. It had about 400,000 users, concentrated in Inner Mongolia.
It was reported by Voice of America (VOA) that the Chinese authorities blocked Bainu on 23 August 2020 in order to prohibit Mongolians from discussing the issue of the authorities’ implementation of "bilingual education" in elementary schools.
References
External links
Bainu
Chinese websites
Social media
Mongolian-language computing | Bainu (website) | [
"Technology"
] | 96 | [
"Computing and society",
"Social media"
] |
65,173,261 | https://en.wikipedia.org/wiki/Dogpiling%20%28Internet%29 | Dogpiling, or dog-piling, is a form of online harassment or online abuse characterized by groups of harassers targeting the same victim. Examples of online abuse include flaming, doxing (the online release of personal information without consent), impersonation, and public shaming. Dog-pilers often focus on harassing, exposing, or punishing a target for an opinion the group disagrees with, or simply for the sake of bullying a victim. Participants use criticism and/or insults to target a single person. In some definitions, it also includes sending private messages.
History
Dogpiling often occurs as part of online harassment campaigns; the Gamergate harassment campaign is a notable example of dogpiling.
See also
Ad hominem
Bandwagon effect
Cyberbullying
Flaming (Internet)
Internet troll
Online shaming
References
Cyberbullying
Internet terminology | Dogpiling (Internet) | [
"Technology"
] | 187 | [
"Computing terminology",
"Internet terminology"
] |
65,175,699 | https://en.wikipedia.org/wiki/Glossary%20of%20microelectronics%20manufacturing%20terms | Glossary of microelectronics manufacturing terms
This is a list of terms used in the manufacture of electronic micro-components. Many of the terms are already defined and explained in Wikipedia; this glossary is for looking up, comparing, and reviewing the terms.
2.5D integration – an advanced integrated circuit packaging technology that bonds dies and/or chiplets onto an interposer for enclosure within a single package
3D integration – an advanced semiconductor technology that incorporates multiple layers of circuitry into a single chip, integrated both vertically and horizontally
3D-IC (also 3DIC or 3D IC) – Three-dimensional integrated circuit; an integrated circuit built with 3D integration
advanced packaging – the aggregation and interconnection of components before traditional packaging
ALD – see atomic layer deposition
atomic layer deposition (ALD) – chemical vapor deposition process by which very thin films of a controlled composition are grown
back end of line (BEoL) – wafer processing steps from the creation of metal interconnect layers through the final etching step that creates pad openings (see also front end of line, far back end of line, post-fab)
BEoL – see back end of line
bonding – any of several technologies that attach one electronic circuit or component to another; see wire bonding, thermocompression bonding, flip chip, hybrid bonding, etc.
breadboard – a construction base for prototyping of electronics
bumping – the formation of microbumps on the surface of an electronic circuit in preparation for flip chip assembly
carrier wafer – a wafer that is attached to dies, chiplets, or another wafer during intermediate steps, but is not a part of the finished device
chip – an integrated circuit; may refer to either a bare die or a packaged device
chip carrier – a package built to contain an integrated circuit
chiplet – a small die designed to be integrated with other components within a single package
chemical-mechanical polishing (CMP) – smoothing a surface with the combination of chemical and mechanical forces, using an abrasive/corrosive chemical slurry and a polishing pad
circuit board – see printed circuit board
class 10, class 100, etc. – a measure of the air quality in a cleanroom; class 10 means fewer than 10 airborne particles of size 0.5 μm or larger are permitted per cubic foot of air
cleanroom (clean room) – a specialized manufacturing environment that maintains extremely low levels of particulates
CMP – see chemical-mechanical polishing
copper pillar – a type of microbump with embedded thin-film thermoelectric material
deep reactive-ion etching (DRIE) – process that creates deep, steep-sided holes and trenches in a wafer or other substrate, typically with high aspect ratios
dicing – cutting a processed semiconductor wafer into separate dies
die – an unpackaged integrated circuit; a rectangular piece cut (diced) from a processed wafer
die-to-die (also die-on-die) stacking – bonding and integrating individual bare dies atop one another
die-to-wafer (also die-on-wafer) stacking – bonding and integrating dies onto a wafer before dicing the wafer
doping – intentional introduction of impurities into a semiconductor material for the purpose of modulating its properties
DRIE – see deep reactive-ion etching
e-beam – see electron-beam processing
EDA – see electronic design automation
electron-beam processing (e-beam) – irradiation with high energy electrons for lithography, inspection, etc.
electronic design automation (EDA) – software tools for designing electronic systems
etching (etch, etch processing) – chemically removing layers from the surface of a wafer during semiconductor device fabrication
fab – a semiconductor fabrication plant
fan-out wafer-level packaging – an extension of wafer-level packaging in which the wafer is diced, dies are positioned on a carrier wafer and molded, and then a redistribution layer is added
far back end of line (FBEoL) – after normal back end of line, additional in-fab processes to create RDL, copper pillars, microbumps, and other packaging-related structures (see also front end of line, back end of line, post-fab)
FBEoL – see far back end of line
FEoL – see front end of line
flip chip – interconnecting electronic components by means of microbumps that have been deposited onto the contact pads
front end of line (FEoL) – initial wafer processing steps up to (but not including) metal interconnect (see also back end of line, far back end of line, post-fab)
heterogeneous integration – combining different types of integrated circuitry into a single device; differences may be in fabrication process, technology node, substrate, or function
HIC - see hybrid integrated circuit
hybrid bonding – a permanent bond that combines a dielectric bond with embedded metal to form interconnections
hybrid integrated circuit (HIC) – a miniaturized circuit constructed of both semiconductor devices and passive components bonded to a substrate
IC – see integrated circuit
integrated circuit (IC) – a miniature electronic circuit formed by microfabrication on semiconducting material, performing the same function as a larger circuit made from discrete components
interconnect (n.) – wires or signal traces that carry electrical signals between the elements in an electronic device
interposer – a small piece of semiconductor material (glass, silicon, or organic) built to host and interconnect two or more dies and/or chiplets in a single package
lead – a metal structure connecting the circuitry inside a package with components outside the package
lead frame (or leadframe) – a metal structure inside a package that connects the chip to its leads
mask – see photomask
MCM – see multi-chip module
microbump – a very small solder ball that provides contact between two stacked physical layers of electronics
microelectronics – the study and manufacture (or microfabrication) of very small electronic designs and components
microfabrication – the process of fabricating miniature structures of sub-micron scale
Moore’s Law – an observation by Gordon Moore that the transistor count per square inch on ICs doubled every year, and the prediction that it will continue to do so
more than Moore – a catch-all phrase for technologies that attempt to bypass Moore’s Law, creating smaller, faster, or more powerful ICs without shrinking the size of the transistor
multi-chip module (MCM) – an electronic assembly integrating multiple ICs, dies, chiplets, etc. onto a unifying substrate so that they can be treated as one IC
nanofabrication – design and manufacture of devices with dimensions measured in nanometers
node – see technology node
optical mask – see photomask
package – a chip carrier; a protective structure that holds an integrated circuit and provides connections to other components
packaging – the final step in device fabrication, when the device is encapsulated in a protective package.
pad (contact pad or bond pad) – designated surface area on a printed circuit board or die where an electrical connection is to be made
pad opening – a hole in the final passivation layer that exposes a pad
parasitics (parasitic structures, parasitic elements) – unwanted intrinsic electrical elements that are created by proximity to actual circuit elements
passivation layer – an oxide layer that isolates the underlying surface from electrical and chemical conditions
PCB – see printed circuit board
photolithography – a manufacturing process that uses light to transfer a geometric pattern from a photomask to a photoresist on the substrate
photomask (optical mask) – an opaque plate with holes or transparencies that allow light to shine through in a defined pattern
photoresist – a light-sensitive material used in processes such as photolithography to form a patterned coating on a surface
pitch – the distance between the centers of repeated elements
planarization – a process that makes a surface planar (flat)
polishing – see chemical-mechanical polishing
post-fab – processes that occur after cleanroom fabrication is complete; performed outside of the cleanroom environment, often by another company
printed circuit board (PCB) – a board that supports electrical or electronic components and connects them with etched traces and pads
quilt packaging – a technology that makes electrically and mechanically robust chip-to-chip interconnections by using horizontal structures at the chip edges
redistribution layer (RDL) – an extra metal layer that makes the pads of an IC available in other locations of the chip
reticle – a partial plate with holes or transparencies used in photolithography integrated circuit fabrication
RDL – see redistribution layer
semiconductor – a material with an electrical conductivity value falling between that of a conductor and an insulator; its resistivity falls as its temperature rises
silicon – the semiconductor material used most frequently as a substrate in electronics
silicon on insulator (SoI) – a layered silicon–insulator–silicon substrate
SiP – see system in package
SoC – see system on chip
SoI – see silicon on insulator
split-fab (split fabrication, split manufacturing) – performing FEoL wafer processing at one fab and BEoL at another
sputtering (sputter deposition) – a thin film deposition method that erodes material from a target (source) onto a substrate
stepper – a step-and-repeat exposure system used in photolithography
substrate – the semiconductor material underlying the circuitry of an IC, usually silicon
system in package (SiP) – a number of integrated circuits (chips or chiplets) enclosed in a single package that functions as a complete system
system on chip (SoC) – a single IC that integrates all or most components of a computer or other electronic system
technology node – an industry standard semiconductor manufacturing process generation defined by the minimum size of the transistor gate length
thermocompression bonding – a bonding technique where two metal surfaces are brought into contact with simultaneous application of force and heat
thin-film deposition – a technique for depositing a thin film of material onto a substrate or onto previously deposited layers; in IC manufacturing, the layers are insulators, semiconductors, and conductors
through-silicon via (TSV) – a vertical electrical connection that pierces the (usually silicon) substrate
trace (signal trace) – the microelectronic equivalent of a wire; a tiny strip of conductor (copper, aluminum, etc.) that carries power, ground, or signal horizontally across a circuit
TSV – see through-silicon via
via – a vertical electrical connection between layers in a circuit
wafer – a disk of semiconductor material (usually silicon) on which electronic circuitry can be fabricated
wafer-level packaging (WLP) – packaging ICs before they are diced, while they are still part of the wafer
wafer-to-wafer (also wafer-on-wafer) stacking – bonding and integrating whole processed wafers atop one another before dicing the stack into dies
wire bonding – using tiny wires to interconnect an IC or other semiconductor device with its package (see also thermocompression bonding, flip chip, hybrid bonding, etc.)
WLP – see wafer-level packaging
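As an aside on the class 10/class 100 entry above: under the (now-superseded) US standard FED-STD-209E, those class limits follow a simple power law in particle size. A minimal sketch of that relationship, assuming the FED-STD-209E formula (the function name is illustrative, not part of the standard):

```python
def particles_per_cubic_foot(cleanroom_class: int, particle_size_um: float = 0.5) -> float:
    """Maximum airborne particle count per cubic foot of air for a given
    FED-STD-209E cleanroom class and particle size (in micrometres).

    A "class N" room allows at most N particles of size >= 0.5 um per
    cubic foot; limits for other sizes scale as (0.5 / d) ** 2.2.
    """
    return cleanroom_class * (0.5 / particle_size_um) ** 2.2

# A class 10 room allows at most 10 particles >= 0.5 um per cubic foot:
print(particles_per_cubic_foot(10))              # 10.0
# Larger particles are rarer, so the limit for 1 um particles is lower:
print(round(particles_per_cubic_foot(100, 1.0), 1))
```

The successor standard, ISO 14644-1, uses an analogous power law with metric units (particles per cubic metre).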
Microelectronics manufacturing
Semiconductor device fabrication
Electronics manufacturing
Semiconductors
Engineering
Wikipedia glossaries using unordered lists | Glossary of microelectronics manufacturing terms | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,351 | [
"Electrical resistance and conductance",
"Physical quantities",
"Microtechnology",
"Semiconductors",
"Semiconductor device fabrication",
"Materials",
"Electronic engineering",
"Condensed matter physics",
"Electronics manufacturing",
"Solid state engineering",
"Matter"
] |
65,176,759 | https://en.wikipedia.org/wiki/Peter%20Marks%20%28physician%29 | Peter Marks is an American hematologist-oncologist serving as the director of the Center for Biologics Evaluation and Research within the Food and Drug Administration. He was appointed to the position in 2016 after previously serving as deputy director.
Education
Marks earned a Bachelor of Science degree from Columbia University, followed by a Doctor of Medicine and PhD in cell and molecular biology from New York University in the lab of Fredrick R. Maxfield. As an undergraduate, he volunteered at Mount Sinai St. Luke's in New York City, where he worked in the radioimmunoassay lab. He completed an internal medicine residency and oncology training at the Brigham and Women's Hospital.
Career
After completing his training, Marks worked at the Brigham and Women's Hospital as a clinician-scientist, and later served as Clinical Director of Hematology. He then worked in the pharmaceutical industry, where he worked on the development of hematology and oncology products. He later managed the Adult Leukemia Service at Yale University and served as the Chief Clinical Officer of the Yale New Haven Hospital Cancer Center. Marks joined the Center for Biologics Evaluation and Research as deputy director in 2012, and was promoted to director in 2016.
In May 2020, he was selected to serve as a member of the White House Coronavirus Task Force, although he left a few days later over concerns that his participation would represent a conflict with his position at FDA. Marks also played a role in establishing Operation Warp Speed, a partnership between the federal government and various private companies to develop a COVID-19 vaccine, but left the project in May 2020 shortly after it was launched. Marks believed he would be more useful in his role as chief regulator of vaccines as the Director of FDA's Center for Biologics Evaluation and Research. In 2021, Marks served as a plenary speaker at the State of the Science Research Summit. In 2024, Marks overruled FDA staff to approve the gene therapy Elevidys, intended to treat Duchenne muscular dystrophy, despite its failure in a Phase III clinical trial.
Personal life
Marks has two children and resides in Washington, D.C., with his wife.
References
American oncologists
Columbia University alumni
New York University Graduate School of Arts and Science alumni
New York University Grossman School of Medicine alumni
Yale University faculty
Food and Drug Administration people
Living people
Year of birth missing (living people)
Members of the National Academy of Medicine | Peter Marks (physician) | [
"Biology"
] | 500 | [
"Vaccination",
"Vaccination advocates"
] |
65,182,265 | https://en.wikipedia.org/wiki/Phantom%20border | A phantom border is an informal delineation following the approximate course of an abolished political border, associated with demographic differences on each side as a continuing legacy of historical division, despite official geopolitical union. Not all former political borders are today phantom borders. Factors that may increase the likelihood of a political border becoming a phantom border upon dissolution include: a short time elapsed since the border's dissolution, the long existence of the former border, the impermeability of the former border, and divergent characteristics of the political entity formerly governing one side of the border.
Phantom borders have many different implications: in Ukraine they are associated with conflict, while in countries such as Romania they play an important part in relations with neighboring countries.
Development of the concept
Though the phenomenon of phantom borders is ancient, articulation of the concept is recent, stemming from its identification in the project (also known as the Phantom Borders in East Central Europe Project, a now-defunct European border studies research network backed by the German Federal Ministry of Education and Research), which defines the phenomenon as "former, predominantly political borders that structure today’s world (…), historical spaces [that] persist or re-emerge”. Recent developments in border studies have led to borders being understood more under the lens of being social constructs (such as in the work of Vladimir Kolosov). From the modern perspective, phantom borders are marks of the past and reminders of previous conquests and annexations. Nail Alkan states that this "enclosure fosters a feeling of security and people prefer to live in familiar circumstances" where old political borders were.
Notable phantom borders
Germany
The borders of Prussia and East Germany are reflected in support for far-right or national-conservative parties after German unification, including the DNVP in the Weimar Republic and the AfD in the 21st century. East Germany notably received little immigration during the Cold War, apart from Russians, so after unification East Germans, particularly Russian-Germans, have been more opposed to it. Combined with East German Ostalgie and the perception by East Germans of being second-class citizens compared to West Germans, eastern regions of Germany tend to vote more for either left-wing parties like The Left and the PDS or right-wing anti-immigration parties like the AfD. Other consequences of the German East-West divide show themselves in different ways: in the west, workers earn higher wages and produce more, while unemployment is higher in the east. The divide is also reflected in personal preferences: in terms of car preferences, West Germans prefer BMW over Škoda, while the opposite is the case in the East.
Poland
Historically, Poland was partitioned multiple times between the German Empire/Prussia, the Habsburg Empire, and the Russian Empire. Under Prussian control were the Polish regions of eastern Pomorze, Wielkopolska and Upper Silesia, regions historically exposed to German influence. Russia controlled central Poland under Congress Poland, including the capital, Warsaw, and Austria controlled the southern regions under the Kingdom of Galicia and Lodomeria. Both regions had large, mostly dominant populations of Polish people. At the end of the Second World War, German territories up to the Oder–Neisse line were handed over to Poland as compensation for the annexation of eastern Poland by the Soviet Union.
Politically, this led to the creation of phantom borders. Coinciding with the formerly Prussian-controlled regions, western Poland is known as Polska liberalna, or "liberal Poland", because its voters opt for liberal or social-democratic parties such as PO in elections. In central and southern Poland the situation is different: the region is known as Polska solidarna, or "solidarity Poland", where voters lean more to the conservative side, represented by parties such as PiS. This split can be explained by the influence the foreign empires had on the Polish people: Germanization and Russification programs, and the different languages, economic models, political traditions, and cultures within these empires, which shaped the industrialization and infrastructure density of regions, the re-settlement of the population of eastern Poland up to the Oder–Neisse line, and social norms and values.
Romania
Prior to Romanian independence, its modern territories consisted of Wallachia and Moldavia under the Ottoman Empire, and the broader region of Transylvania under Habsburg Austria. Transylvania, in general, has more ethnic diversity than other parts of Romania, with a significant Hungarian minority and a smaller German one. Perceptions of political and social powers are also different, as Transylvania was ruled under administrative authorities while the former Ottoman lands were under more arbitrary rule with less centralization of legal powers. Transylvania experiences a larger number of political protests than the rest of the country excluding the capital Bucharest.
The formerly Ottoman parts of Romania had struggled for full independence, while Habsburg Romanians tended to pursue political reforms. The faultline, marked by the Carpathian Mountains, is sometimes considered the line separating the Eastern Orthodox Church in the east from the Latin Church in the west. Throughout the 1970s, Nicolae Ceaușescu implemented policies to assimilate minorities in Transylvania, though the cultural divide remained. During Romania's elections in the 1990s, Transylvanian residents were less supportive of nationalist and populist parties. In 1996, liberal presidential candidate Emil Constantinescu won in nearly all of Transylvania, while incumbent and former communist politician Ion Iliescu won in nearly all regions outside it.
Ukraine
In just the last 150 years, parts of Ukraine have been split between Russia, the Habsburg Empire, Czechoslovakia, Poland, Romania, Hungary, and various iterations of a Ukrainian state. A divide between different regions of Ukraine, originating from historical political boundaries, has been noted: electorally, there is a split between eastern-southern and central-western Ukraine. One example of a phantom border is the pro-Russian attitude of Dnieper Ukraine, owing to its longer connection with Russia. There are various anomalies in these phantom borders; for example, voters in the regions of Transcarpathia and Chernivtsi, previously controlled by Austria, seem to vote similarly to eastern Ukraine.
References
Borders
Politics
Political geography | Phantom border | [
"Physics"
] | 1,248 | [
"Spacetime",
"Borders",
"Space"
] |
65,183,190 | https://en.wikipedia.org/wiki/Nitrate%20chlorides | Nitrate chlorides are mixed anion compounds that contain both nitrate (NO3−) and chloride (Cl−) ions. Various compounds are known, including amino acid salts, and also complexes from iron group, rare-earth, and actinide metals. Complexes are not usually identified as nitrate chlorides, and would be termed chlorido nitrato complexes.
Formation
Nitrate chloride compounds may be formed by mixing solutions of chloride and nitrate salts, by the addition of nitric acid to a chloride salt solution, or by the addition of hydrochloric acid to a nitrate solution. Water is most commonly used as the solvent, but other solvents such as methylene dichloride, methanol or ethanol can be used.
Minerals
List
References
Nitrates
Chlorides
Mixed anion compounds | Nitrate chlorides | [
"Physics",
"Chemistry"
] | 161 | [
"Matter",
"Chlorides",
"Inorganic compounds",
"Mixed anion compounds",
"Nitrates",
"Salts",
"Oxidizing agents",
"Ions"
] |
65,184,156 | https://en.wikipedia.org/wiki/Plant%20microbiome | The plant microbiome, also known as the phytomicrobiome, plays roles in plant health and productivity and has received significant attention in recent years. The microbiome has been defined as "a characteristic microbial community occupying a reasonably well-defined habitat which has distinct physio-chemical properties. The term thus not only refers to the microorganisms involved but also encompasses their theatre of activity".
Plants live in association with diverse microbial consortia. These microbes, referred to as the plant's microbiota, live both inside (the endosphere) and outside (the episphere) of plant tissues, and play important roles in the ecology and physiology of plants. "The core plant microbiome is thought to comprise keystone microbial taxa that are important for plant fitness and established through evolutionary mechanisms of selection and enrichment of microbial taxa containing essential functional genes for the fitness of the plant holobiont."
Plant microbiomes are shaped by both factors related to the plant itself, such as genotype, organ, species and health status, as well as factors related to the plant's environment, such as management, land use and climate. The health status of a plant has been reported in some studies to be reflected by or linked to its microbiome.
Overview
The study of the association of plants with microorganisms precedes that of the animal and human microbiomes, notably the roles of microbes in nitrogen and phosphorus uptake. The most notable examples are plant root-arbuscular mycorrhizal (AM) and legume-rhizobial symbioses, both of which greatly influence the ability of roots to uptake various nutrients from the soil. Some of these microbes cannot survive in the absence of the plant host (obligate symbionts include viruses and some bacteria and fungi), which provides space, oxygen, proteins, and carbohydrates to the microorganisms. The association of AM fungi with plants has been known since 1842, and over 80% of land plants are found associated with them. It is thought AM fungi helped in the domestication of plants.
Traditionally, plant-microbe interaction studies have been confined to culturable microbes. The numerous microbes that could not be cultured have remained uninvestigated, so knowledge of their roles is largely unknown. The possibilities of unraveling the types and outcomes of these plant-microbe interactions has generated considerable interest among ecologists, evolutionary biologists, plant biologists, and agronomists. Recent developments in multiomics and the establishment of large collections of microorganisms have dramatically increased knowledge of the plant microbiome composition and diversity. The sequencing of marker genes of entire microbial communities, referred to as metagenomics, sheds light on the phylogenetic diversity of the microbiomes of plants. It also adds to the knowledge of the major biotic and abiotic factors responsible for shaping plant microbiome community assemblages.
The composition of microbial communities associated with different plant species is correlated with the phylogenetic distance between the plant species, that is, closely related plant species tend to have more alike microbial communities than distant species. The focus of plant microbiome studies has been directed at model plants, such as Arabidopsis thaliana, as well as important economic crop species including barley (Hordeum vulgare), corn (Zea mays), rice (Oryza sativa), soybean (Glycine max), wheat (Triticum aestivum), whereas less attention has been given to fruit crops and tree species.
Plant microbiota
Cyanobacteria are an example of a microorganism which widely interacts in a symbiotic manner with land plants. Cyanobacteria can enter the plant through the stomata and colonise the intercellular space, forming loops and intracellular coils. Anabaena spp. colonize the roots of wheat and cotton plants. Calothrix sp. has also been found on the root system of wheat. Monocots, such as wheat and rice, have been colonised by Nostoc spp. In 1991, Ganther and others isolated diverse heterocystous nitrogen-fixing cyanobacteria, including Nostoc, Anabaena and Cylindrospermum, from plant roots and soil. Assessment of wheat seedling roots revealed two types of association patterns: loose colonization of root hairs by Anabaena and tight colonization of the root surface within a restricted zone by Nostoc.
Rhizosphere microbiome
The rhizosphere comprises the 1–10 mm zone of soil immediately surrounding the roots that is under the influence of the plant through its deposition of root exudates, mucilage and dead plant cells. A diverse array of organisms specialize in living in the rhizosphere, including bacteria, fungi, oomycetes, nematodes, algae, protozoa, viruses, and archaea.
Mycorrhizal fungi are abundant members of the rhizosphere community, and have been found in over 200,000 plant species, and are estimated to associate with over 80% of all plants. Mycorrhizae–root associations play profound roles in land ecosystems by regulating nutrient and carbon cycles. Mycorrhizae are integral to plant health because they provide up to 80% of the nitrogen and phosphorus requirements. In return, the fungi obtain carbohydrates and lipids from host plants. Recent studies of arbuscular mycorrhizal fungi using sequencing technologies show greater between-species and within-species diversity than previously known.
The most frequently studied beneficial rhizosphere organisms are mycorrhizae, rhizobium bacteria, plant-growth promoting rhizobacteria (PGPR), and biocontrol microbes. It has been projected that one gram of soil could contain more than one million distinct bacterial genomes, and over 50,000 OTUs (operational taxonomic units) have been found within the potato rhizosphere. Among the prokaryotes in the rhizosphere, the most frequent bacteria are within the Acidobacteriota, Pseudomonadota, Planctomycetota, Actinomycetota, Bacteroidota, and Bacillota. In some studies, no significant differences were reported in the microbial community composition between the bulk soil (soil not attached to the plant root) and rhizosphere soil. Certain bacterial groups (e.g. Actinomycetota, Xanthomonadaceae) are less abundant in the rhizosphere than in nearby bulk soil.
Endosphere microbiome
Some microorganisms, such as endophytes, penetrate and occupy the plant's internal tissues, forming the endospheric microbiome. Arbuscular mycorrhizal and other endophytic fungi are the dominant colonizers of the endosphere. Bacteria, and to some degree archaea, are important members of endosphere communities. Some of these endophytic microbes interact with their host and provide obvious benefits to plants. Unlike the rhizosphere and the rhizoplane, the endosphere harbors highly specific microbial communities. The root endophytic community can be very distinct from that of the adjacent soil community. In general, the diversity of the endophytic community is lower than that of the microbial community outside the plant. The identity and diversity of the endophytic microbiome of above- and below-ground tissues may also differ within the plant.
Phyllosphere microbiome
The aerial surface of a plant (stem, leaf, flower, fruit) is called the phyllosphere and is considered comparatively nutrient poor when compared to the rhizosphere and endosphere. The environment in the phyllosphere is more dynamic than the rhizosphere and endosphere environments. Microbial colonizers are subjected to diurnal and seasonal fluctuations of heat, moisture, and radiation. In addition, these environmental elements affect plant physiology (such as photosynthesis, respiration, water uptake etc.) and indirectly influence microbiome composition. Rain and wind also cause temporal variation to the phyllosphere microbiome.
Interactions between plants and their associated microorganisms in many of these microbiomes can play pivotal roles in host plant health, function, and evolution. The leaf surface, or phyllosphere, harbours a microbiome comprising diverse communities of bacteria, fungi, algae, archaea, and viruses. Interactions between the host plant and phyllosphere bacteria have the potential to drive various aspects of host plant physiology. However, as of 2020 knowledge of these bacterial associations in the phyllosphere remains relatively modest, and there is a need to advance fundamental knowledge of phyllosphere microbiome dynamics.
Overall, there remains high species richness in phyllosphere communities. Fungal communities are highly variable in the phyllosphere of temperate regions and are more diverse than in tropical regions. There can be up to 10⁷ microbes per square centimetre present on the leaf surfaces of plants, and the bacterial population of the phyllosphere on a global scale is estimated to be 10²⁶ cells. The population size of the fungal phyllosphere is likely to be smaller.
Phyllosphere microbes from different plants appear to be somewhat similar at higher taxonomic levels, but at lower taxonomic levels significant differences remain. This indicates that microorganisms may need finely tuned metabolic adjustment to survive in the phyllosphere environment. Pseudomonadota seem to be the dominant colonizers, with Bacteroidota and Actinomycetota also predominant in phyllospheres. Although there are similarities between the rhizosphere and soil microbial communities, very little similarity has been found between phyllosphere communities and microorganisms floating in open air (aeroplankton).
The assembly of the phyllosphere microbiome, which can be strictly defined as epiphytic bacterial communities on the leaf surface, can be shaped by the microbial communities present in the surrounding environment (i.e., stochastic colonisation) and the host plant (i.e., biotic selection). However, although the leaf surface is generally considered a discrete microbial habitat, there is no consensus on the dominant driver of community assembly across phyllosphere microbiomes. For example, host-specific bacterial communities have been reported in the phyllosphere of co-occurring plant species, suggesting a dominant role of host selection.
Conversely, microbiomes of the surrounding environment have also been reported to be the primary determinant of phyllosphere community composition. As a result, the processes that drive phyllosphere community assembly are not well understood but unlikely to be universal across plant species. However, the existing evidence does indicate that phyllosphere microbiomes exhibiting host-specific associations are more likely to interact with the host than those primarily recruited from the surrounding environment.
The search for a core microbiome in host-associated microbial communities is a useful first step in trying to understand the interactions that may be occurring between a host and its microbiome. The prevailing core microbiome concept is built on the notion that the persistence of a taxon across the spatiotemporal boundaries of an ecological niche is directly reflective of its functional importance within the niche it occupies; it therefore provides a framework for identifying functionally critical microorganisms that consistently associate with a host species.
Divergent definitions of "core microbiome" have arisen across the scientific literature, with researchers variably identifying "core taxa" as those persistent across distinct host microhabitats and even different species. Given the functional divergence of microorganisms across different host species and microhabitats, defining core taxa sensu stricto as those persistent across broad geographic distances within tissue- and species-specific host microbiomes represents the most biologically and ecologically appropriate application of this conceptual framework. Tissue- and species-specific core microbiomes across host populations separated by broad geographical distances have not been widely reported for the phyllosphere using the stringent definition established by Ruinen.
Example: The mānuka phyllosphere
The flowering tea tree commonly known as mānuka is indigenous to New Zealand. Mānuka honey, produced from the nectar of mānuka flowers, is known for its non-peroxide antibacterial properties. Microorganisms have been studied in the mānuka rhizosphere and endosphere. Earlier studies primarily focussed on fungi, and a 2016 study provided the first investigation of endophytic bacterial communities from three geographically and environmentally distinct mānuka populations using fingerprinting techniques and revealed tissue-specific core endomicrobiomes.
A 2020 study identified a habitat-specific and relatively abundant core microbiome in the mānuka phyllosphere, which was persistent across all samples. In contrast, non-core phyllosphere microorganisms exhibited significant variation across individual host trees and populations that was strongly driven by environmental and spatial factors. The results demonstrated the existence of a dominant and ubiquitous core microbiome in the phyllosphere of mānuka.
Seed microbiome
Plant seeds can serve as natural vectors for vertical transmission of their beneficial endophytes, such as those that confer disease resistance. A 2021 research paper explained, "It makes sense that their most important symbionts would be vertically transmitted through seed rather than gambling that all of the correct soil-dwelling microbes might be available at the germination site."
The new paradigm regarding mutualistic fungi and bacterial transmission via the seeds of host plants has been fostered largely by research pertaining to plants of agricultural value. Rice seeds were found to harbour high microbial diversity, with the greatest diversity inhabiting the embryo rather than the pericarp. Fungi of the genus Fusarium transmitted via seeds were found to be dominant members of the microbiome within the stems of maize. This facet of the plant microbiome came to be known as the seed microbiome.
Forestry researchers have also begun to identify members of the seed microbiome pertaining to valuable tree species. Vertical transmission of fungal and bacterial mutualists was confirmed in 2021 for the acorns of oak trees. If the research on oaks turns out to apply to other tree species, it will be understood that the above-soil portions of a plant (the phyllosphere) obtain nearly all of their beneficial fungi from those carried in the seed. In contrast, the roots (the rhizosphere) acquire only a small fraction of their mutualists from the seed. Most arrive via the surrounding soil, and this includes their vital associations with arbuscular mycorrhizal fungi.
Microbial species consistently found in plant seeds are known as the "core microbiome." Benefits to the host plant include their ability to assist in the production of antimicrobial compounds, detoxification, nutrient uptake, and growth-promoting activities. Discerning the functions of symbiotic microbes in seeds is shifting the agricultural paradigm away from seed breeding and preparation that traditionally sought to minimize the presence of fungal and bacterial propagules. The likelihood that a microbe found within a seed is mutualistic is now a routine presumption. Such partners may contribute to "seed dormancy and germination, environmental adaptation, resistance and tolerance against diseases, and growth promotion."
Application of the new understanding of beneficial microbes inhabiting seeds has been suggested for use beyond agriculture and for biodiversity conservation. A citizen group advocating for northward assisted migration of an endangered tree in the USA has pointed to the seed microbiome paradigm shift as a reason for the official institutions to lift their ban on seed transfer beyond the ex situ conservation plantings in northern Georgia.
Plant holobiont
Since the colonization of land by ancestral plant lineages 450 million years ago, plants and their associated microbes have been interacting with each other, forming an assemblage of species that is often referred to as a holobiont. Selective pressure acting on holobiont components has likely shaped plant-associated microbial communities and selected for host-adapted microorganisms that impact plant fitness. However, the high microbial densities detected on plant tissues, together with the fast generation time of microbes and their more ancient origin compared to their host, suggest that microbe-microbe interactions are also important selective forces sculpting complex microbial assemblages in the phyllosphere, rhizosphere, and plant endosphere compartments.
See also
Biomass partitioning
Mangrove microbiome
Phytobiome
References
Reference books
Saleem M (2015) Microbiome Community Ecology: Fundamentals and Applications. Springer.
Kumar V, Prasad R, Kumar M and Choudhary DK (2019) Microbiome in Plant Health and Disease: Challenges and Opportunities. Springer.
Microbiomes
Plants | Plant microbiome | [
"Biology",
"Environmental_science"
] | 3,579 | [
"Microbiomes",
"Environmental microbiology",
"Plants"
] |
65,184,894 | https://en.wikipedia.org/wiki/Biodiversity%20of%20South%20Africa | The Biodiversity of South Africa is the variety of living organisms within the boundaries of South Africa and its exclusive economic zone. South Africa is a region of high biodiversity in the terrestrial and marine realms. The country is ranked sixth out of the world's seventeen megadiverse countries, and is rated among the top 10 for plant species diversity and third for marine endemism.
This biodiversity is monitored and reported in terms of the continental terrestrial, inland aquatic, coastal, marine and the sub-antarctic Prince Edward Islands components. South Africa is a party to the Rio Convention on Biological Diversity, and has declared a number of protected areas, including national parks and marine protected areas which are managed by the national government. Continuing research and periodical reporting on the biodiversity of South Africa is the responsibility of the South African National Biodiversity Institute (SANBI) as directed by the Department of Environment, Forestry and Fisheries, and authorised by various statutory acts.
SANBI reports an estimate of about 67,000 animal species, and more than 20,400 plant species that have been described. Almost a quarter of the global cephalopod species, about 16% of elasmobranch species, 13% of the world's sunspiders (Solifugae), nearly 10% of the world coral species, 8% of seaweeds, 7% of vascular plants, 7% of the birds, 5% of the mammals, nearly 5% of butterflies, 4% of the reptiles, 2% of the amphibians, and 1% of the freshwater fish of the world are found in the country and its exclusive economic zone, including the Prince Edward Islands. Almost two thirds of South Africa's plant species, about half of the species of reptiles, amphibians, butterflies and freshwater fish, and about 40% of the estimated 10,000 marine animal species are endemic.
Global context
Biodiversity is the variety and variability of life on Earth. It is typically a measure of variation at the genetic, species, and ecosystem levels, and is not distributed evenly, generally being richest in the tropics. Marine biodiversity is usually highest along coasts in the Western Pacific, where sea surface temperature is highest, and in the mid-latitudinal band in all oceans. Biodiversity generally tends to cluster in hotspots, and has been increasing through time, but is likely to slow in the future.
Estimates of the number of Earth's current species range from 10 million to 14 million, of which about 1.2 million have been documented and over 86 percent have not yet been described. More recently, in May 2016, scientists reported an estimate of 1 trillion species currently on Earth, with only one-thousandth of one percent described.
The country is ranked sixth out of the world's seventeen megadiverse countries, with high levels of marine and terrestrial biodiversity.
The main criterion for megadiverse countries is endemism at the level of species, genera and families. A megadiverse country must have at least 5,000 species of endemic plants and must border marine ecosystems.
South Africa is one of the smaller megadiverse countries, with a terrestrial area of about 1.2 million km2 and is rated among the top 10 for plant species diversity. The EEZ is about 1.1 million km2 and is rated third for marine endemism.
Measuring diversity
Biodiversity is usually plotted as the richness of a geographic area, with some reference to a temporal scale. Types of biodiversity include taxonomic or species, ecological, morphological, and genetic diversity. Taxonomic diversity, that is the number of species, genera, or families, is the most commonly assessed type.
The estimated number of South African animal species as of 2018 is about 67,000, with 20,401 plant species described. This comprises about 7% of the world's vascular plants, 7% of birds, 5% of mammals, 4% of reptiles, 2% of amphibians and 1% of freshwater fishes. Less information is available on invertebrate groups, but South Africa has almost a quarter of global cephalopods, and some terrestrial invertebrate groups are very strongly represented.
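The species counts described above measure taxonomic richness only; ecologists commonly pair richness with an evenness-sensitive measure such as the Shannon diversity index, H' = -Σ pᵢ ln pᵢ. The sketch below illustrates both measures; the survey counts are hypothetical and purely for illustration.

```python
from math import log

def species_richness(counts):
    """Number of species with at least one individual observed."""
    return sum(1 for c in counts.values() if c > 0)

def shannon_index(counts):
    """Shannon diversity index H' = -sum(p_i * ln p_i),
    where p_i is the proportion of individuals in species i."""
    total = sum(counts.values())
    return -sum((c / total) * log(c / total)
                for c in counts.values() if c > 0)

# Hypothetical survey: individuals observed per species
survey = {"species_a": 50, "species_b": 30, "species_c": 20}

print(species_richness(survey))         # 3
print(round(shannon_index(survey), 3))  # ≈ 1.030
```

Two communities with the same richness can differ in H': the index is maximal when individuals are spread evenly across species and shrinks as one species dominates, which is why assessments often report both numbers.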
Evolutionary history
The age of the Earth is about 4.54 billion years. The earliest undisputed evidence of life on Earth dates at least from 3.5 billion years ago, during the Eoarchean Era after a geological crust started to solidify following the earlier molten Hadean Eon. There are microbial mat fossils found in 3.48 billion-year-old sandstone discovered in Western Australia. Other early physical evidence of a biogenic substance is graphite in 3.7 billion-year-old meta-sedimentary rocks discovered in Western Greenland. More recently, in 2015, "remains of biotic life" were found in 4.1 billion-year-old rocks in Western Australia.
Since life began on Earth, five major mass extinctions and several minor events have led to large and sudden drops in biodiversity. The Phanerozoic eon (the last 540 million years) marked a rapid growth in biodiversity via the Cambrian explosion—a period during which the majority of multicellular phyla first appeared. The next 400 million years included repeated, massive, biodiversity losses classified as mass extinction events. In the Carboniferous, rainforest collapse led to a great loss of plant and animal life. The Permian–Triassic extinction event, 251 million years ago, was the worst; vertebrate recovery took 30 million years. The most recent, the Cretaceous–Paleogene extinction event, occurred 65 million years ago and has often attracted more attention than others because it resulted in the extinction of the dinosaurs.
The period since the emergence of humans has displayed an ongoing biodiversity reduction and an accompanying loss of genetic diversity. Named the Holocene extinction, the reduction is caused primarily by human impacts, particularly habitat destruction. Conversely, biodiversity positively impacts human health in a number of ways, although a few negative effects are studied.
Taxonomic biodiversity of a region may increase either by influx of species from other regions or by speciation within the region. The former is facilitated by physical connections to other regions of compatible habitability, by the mobility of the affected organisms at some stage of their life cycle, and by agents contributing to dispersal. Speciation in situ is facilitated by reproductive isolation and changes in the environmental pressures on the local populations. Many regions of high biodiversity or endemism arise from habitats which require unusual adaptations.
Biological realms
Terrestrial (continental)
The continental terrestrial component of the region lies within the Afrotropical biogeographic realm, which is one of Earth's eight biogeographic realms. It includes Africa south of the Sahara Desert, the majority of the Arabian Peninsula, the island of Madagascar, southern Iran and extreme southwestern Pakistan, and the islands of the western Indian Ocean.
Inland aquatic
Estuarine
Marine
According to the WWF scheme, the coastal waters of continental South Africa mostly lie within the marine realm of Temperate Southern Africa, with a part in the Western Indo-Pacific. The boundary between the Temperate Southern Africa and Western Indo-Pacific marine realms is near Lake St. Lucia, in northern KwaZulu-Natal, near the border with Mozambique.
The marine biodiversity of South Africa is the variety of living organisms that live in the seas off the coast of South Africa. It includes genetic, species and ecosystems biodiversity in a range of habitats spread over a range of ecologically varied regions, influenced by the geomorphology of the seabed and circulation of major and local water masses, which distribute both living organisms and nutrients in complex and time-variable patterns.
South Africa has a wide range of marine diversity with coastline in three oceans, two major current systems, major ocean frontal systems and benthic topography extending to a maximum depth of 5,700 m. There are 179 defined marine ecosystem types, 150 of them around South Africa and 29 around the sub-Antarctic territory of the Prince Edward Islands.
Coastal
Since 2018 the National Biodiversity Assessment has produced a separate coastal report, combining data from marine portions of the coastal zone with estuaries and dunes along with beaches and rocky shores, defining the coastal zone as an ecologically determined cross-realm zone spanning the coastal parts of the marine and terrestrial realms, and including all estuaries, within which relevant results from the constituent realms are presented together. This analyses biodiversity across the land-sea interface and compares it with the non-coastal parts of the terrestrial and marine realms.
Vegetation types on the landward side of the coastal zone are included in the ecologically defined coast if they are purely coastal or have a coastal affinity, and at least 70% of their area is within 10 km of the shore. On the seaward side, ecosystem types that are influenced by the land are considered to be coastal, and include ecosystems extending as far offshore as the back of the inner shelf, bays, and marine ecosystem types influenced by rivers. Estuarine Functional Zones (EFZs) are also considered to be part of the coast.
The ecologically defined coastal zone is estimated to comprise about 4% of mainland terrestrial area, but includes 186 of the 987 ecosystem types, high biodiversity, and many endemic species, particularly along the south coast. This is largely due to large variations in coastal conditions affected by the warm Agulhas Current on the east coast, and cool Benguela Current along the west coast, distinct variations in temperature and rainfall patterns and variations in geology.
Coastal parts of ecosystems tend to be hotspots of cumulative pressure, which often causes poor ecological condition in those areas. Ports and harbours have been identified as centres of cumulative impacts and ecological degradation. Intensive pressures on coastal areas include use of biological resources, coastal development, and mining. Coastal species of economic value which are accessible are likely to be over-exploited. Estuaries are often subjected to major flow modification due to upstream water use, which has adverse impacts on many coastal ecosystem types. For example, sand supplies to beaches and dunes are severely reduced, which affects erosion rates. Climate change and invasive species increase pressures on coastal biodiversity, and much of the pressure due to pollution is poorly understood.
60% of coastal ecosystem types, making up 55% of the coastal zone area, have been identified as threatened, with a high risk of biodiversity loss in 13 ecosystems, while 9% of the coastal zone area is protected, providing good protection to about 24% of coastal ecosystem types.
The three South African regions of high plant diversity and endemism all occur partly along the coast. They are the Maputaland-Pondoland-Albany Hotspot, the Succulent Karoo Region, and the Cape Floristic Region.
The coastal zone provides a rich variety of organisms useful to people as food, medicine, fuel and raw materials for construction and crafts. There are more than 220 coastal plant species recorded as useful for these purposes from South Africa.
Many coastal inhabitants rely to some extent on estuarine and marine fish and invertebrates as part of their diet, and the money saved by harvesting natural resources can be used for other needs, which is a significant benefit for economically marginal families. About a million people engage in recreational fishing, and the fishery is estimated to have a value of about R1.6 billion in 2018.
Some 147 communities and 29,000 people are involved in subsistence fishing, harvesting fish, rock lobster, abalone, bait organisms and other intertidal resources, to an estimated value of about R16 million in 2018, about 85% of which is from linefishing. The major importance of this sector is in the employment and food security of poor coastal communities.
Sub-Antarctic
The Prince Edward Islands are Marion Island and Prince Edward Island, two small islands in the subantarctic Indian Ocean that are part of South Africa. The islands have been declared Special Nature Reserves under the South African Environmental Management: Protected Areas Act, No. 57 of 2003, and activities on the islands are therefore restricted to research and conservation management. Further protection was granted when the area was declared a marine protected area in 2013. The only human inhabitants of the islands are the staff of a meteorological and biological research station run by the South African National Antarctic Programme on Marion Island.
Ecoregions
An ecoregion (ecological region) is an ecologically and geographically defined area that is smaller than a bioregion, which in turn is smaller than a biogeographic realm. Ecoregions cover relatively large areas of land or water, and contain characteristic, geographically distinct assemblages of natural communities and species. The biodiversity of flora, fauna and ecosystems that characterise an ecoregion tends to be distinct from that of other ecoregions. In theory, biodiversity or conservation ecoregions are relatively large areas of land or water where the probability of encountering different species and communities at any given point remains relatively constant, within an acceptable range of variation. An ecoregion will typically include several habitat types.
Terrestrial ecoregions
Terrestrial ecoregions are land ecoregions, as distinct from freshwater and marine ecoregions. The WWF divides the land surface of the Earth into eight biogeographical realms containing 867 smaller terrestrial ecoregions.
The eight realms follow the major floral and faunal boundaries, identified by botanists and zoologists, that separate the world's major plant and animal communities. Realm boundaries generally follow continental boundaries, or major barriers to plant and animal distribution, like the Himalayas and the Sahara.
Ecoregions are classified by biome type, which are the major global plant communities determined by rainfall and climate. Forests, grasslands (including savanna and shrubland), and deserts (including xeric shrublands) are distinguished by climate (tropical and subtropical vs. temperate and boreal climates) and, for forests, by whether the trees are predominantly conifers (gymnosperms), broadleaf (angiosperms), or mixed (broadleaf and conifer). Biome types like Mediterranean forests, woodlands, and scrub; tundra; and mangroves host very distinct ecological communities, and are also recognized as distinct biome types.
Listed by biome:
Tropical and subtropical moist broadleaf forests;
Tropical and subtropical grasslands, savannas, and shrublands;
Montane grasslands and shrublands;
Mediterranean forests, woodlands, and scrub;
Deserts and xeric shrublands;
Tundra;
Mangroves;
Marine ecoregions
The marine ecoregions of the South African exclusive economic zone are a set of geographically delineated regions of similar ecological characteristics on a fairly broad scale, covering the exclusive economic zone along the South African coast. There were originally five inshore bioregions over the continental shelf and four offshore bioregions covering the continental slope and abyssal regions. These bioregions are used for conservation research and planning. They were defined in the South African National Spatial Biodiversity Assessment of 2004. The South African National Spatial Biodiversity Assessment of 2011 amended this to reduce the number of regions to four inshore and two offshore and rename them as ecoregions.
Inshore ecoregions:
The Benguela ecoregion comprises the consolidated Namaqua and South-western Cape bioregions, which lie between Sylvia Hill in Namibia and Cape Point. The northern sector is a cool temperate region from Namibia to Cape Columbine, with large-scale intensive upwelling, nutrient-rich water, and the cold Benguela Current. The region is known for low oxygen events and it contains extensive mud banks and a relatively wide continental shelf.
The southern sector has a relatively narrow continental shelf and a change in geology at Cape Columbine which marks the northern extent of exposed granite, and there is less offshore mud habitat south of this break. This region includes the two underwater canyons, Cape Point Valley and Cape Canyon, and there are large areas of rocky reef. The change in biology at Cape Columbine is indicated by changes in seaweed and intertidal communities. There is less tendency for oxygen deficient bottom water than in the area further north. The break at the south-eastern end of the region is at Cape Point, where it is distinct in the inshore and tidal habitats, but the change in deeper water trends obliquely to the south-east, and is more diffuse, due to mixing of the Benguela and Agulhas currents between these regions.
The Agulhas ecoregion extends over the continental shelf from Cape Point to the Mbashe river. The south coast comprises a warm temperate component from Mbashe to Cape Agulhas, and a large overlap zone between Cape Agulhas and Cape Point where waters of the two currents mix. The continental shelf is at its widest in this region, extending up to 240 km offshore on the Agulhas Bank. The shelf edge includes areas of extensive slumping. There are several areas of reef on the Agulhas Bank, including the Alphard banks. This region has the highest number of South African endemics, and is a breeding area for many species. It was renamed the Agulhas ecoregion in the 2011 assessment.
Offshore ecoregions:
Habitat types
In ecology, a habitat is the type of natural environment in which a particular species of organism lives. A species's habitat is those places where the species can find food, shelter, protection and mates for reproduction.
It is characterized by both physical and biological features. Every organism has certain habitat needs for the conditions in which it will thrive, but some are tolerant of wide variations while others are very specific in their requirements. A habitat is not necessarily a geographical area, it can be the interior of a stem, a rotten log, a rock or a clump of moss; a parasitic organism has as its habitat the body of its host, part of the host's body (such as the digestive tract), or a single cell within the host's body.
Geographic habitat types include polar, temperate, subtropical and tropical. The terrestrial vegetation type may be forest, steppe, grassland, semi-arid or desert. Fresh-water habitats include marshes, streams, rivers, lakes, and ponds; marine habitats include salt marshes, the coast, the intertidal zone, estuaries, reefs, bays, the open sea, the sea bed, deep water and submarine vents.
Habitats may change over time. Causes of change may include a violent event, or change may occur more gradually over millennia with alterations in the climate. Other changes come as a direct result of human activities. The introduction of alien species can have a devastating effect on native wildlife, through increased predation, competition for resources or the introduction of pests and diseases to which the indigenous species have no immunity. A change to a habitat can have far-reaching consequences. It can make it more habitable for some inhabitants, at the expense of others. It can open new niches to immigrants, induce speciation, drive out established communities, and in some cases may lead to extinctions at various scales. A habitat is also directly and indirectly affected by the inhabitant organisms, whose presence and biological activity influence the environment in complex ways.
Marine habitat types
A total of 136 marine habitat types have been identified. The classification takes connectivity, depth and slope, substrate geology and sediment grain size, shoreline wave exposure, and biogeography into account. Beach state considers the wave exposure and grain size. These habitats include 37 coastal types, 17 inshore types in the 5 to 30 m depth range, 62 offshore benthic types deeper than 30 m, and 16 offshore pelagic types, three types of island and one type of lagoon.
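As a quick consistency check, the component counts listed above can be summed to confirm the stated total (a minimal sketch; the breakdown labels are paraphrased from the text):

```python
# Component counts of marine habitat types as given in the text;
# they should sum to the stated total of 136.
habitat_components = {
    "coastal": 37,
    "inshore (5-30 m)": 17,
    "offshore benthic (>30 m)": 62,
    "offshore pelagic": 16,
    "island": 3,
    "lagoon": 1,
}
total_habitats = sum(habitat_components.values())
print(total_habitats)  # 136
```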
Vegetation types
The diverse vegetation types of South Africa are sampled, classified, described, and mapped by the SANBI VEGMAP project. Vegetation types of Lesotho and Eswatini are included in the project. The vegetation map is useful for biodiversity assessment, research, conservation management and environmental planning, and includes a database. The project is ongoing as more data becomes available over time. The first map was published in 2006, and has been updated in 2009, 2012 and 2018.
The classification system uses a hierarchy to organise the vegetation types within the nine defined biomes and a tenth azonal group. Bioregions are described within the biomes, and the vegetation types are at the more detailed level, and represent groups of communities with similar biotic and abiotic features. The vegetation types are plotted on the map in as much resolution as is available using a GIS system.
Listed by biome, there are 88 Savanna vegetation types, code SV; 73 Grassveld vegetation types, code G; 81 Fynbos vegetation types, code FF; 29 Renosterveld vegetation types, code FR; 65 Succulent Karoo vegetation types, code SK; 54 Albany Thicket and Strandveld vegetation types, codes AT and FS; 29 Nama Karoo and desert vegetation types, codes NK and D; 35 Azonal vegetation types, code AZ; 17 Forest and coastal belt vegetation types, codes FO and CB; and 8 Subantarctic vegetation types, code ST.
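The per-biome counts above can be tallied to give an overall figure (codes and counts copied from the passage; the grand total is derived here, not stated in the text):

```python
# Vegetation-type counts per biome grouping, keyed by VEGMAP code
# as listed in the text; the overall total is computed, not quoted.
veg_types = {
    "SV": 88,      # Savanna
    "G": 73,       # Grassveld
    "FF": 81,      # Fynbos
    "FR": 29,      # Renosterveld
    "SK": 65,      # Succulent Karoo
    "AT+FS": 54,   # Albany Thicket and Strandveld
    "NK+D": 29,    # Nama Karoo and desert
    "AZ": 35,      # Azonal
    "FO+CB": 17,   # Forest and coastal belt
    "ST": 8,       # Subantarctic
}
total_veg = sum(veg_types.values())
print(total_veg)  # 479
```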
Endemism
Endemism is the ecological state of a species being native to a single defined geographic location, such as an island, nation, country or other defined zone, or habitat type; organisms that are indigenous to a place are not endemic to it if they are also found elsewhere. The extreme opposite of an endemic species is one with a cosmopolitan distribution, having a global or widespread range, and the opposite of an indigenous species is an introduced or invasive species.
Terrestrial
The Cape Floristic Region, the smallest of the six recognised floral kingdoms of the world, is an area of extraordinarily high diversity and endemism, and is home to over 9,000 vascular plant species, of which 69 percent are endemic. Much of this diversity is associated with the fynbos biome, a Mediterranean-type, fire-prone shrubland.
Several species are endemic to extremely limited habitats, and are under severe pressure due to habitat reduction and degradation.
Marine
Over 13,000 species of marine organisms are recorded from South African waters. Endemism is estimated at between 26 and 33%, the third highest marine endemism after New Zealand (51%) and Antarctica (45%). This varies between taxonomic groups from no endemic marine mammals or birds, to over 90% of chitons.
The region of highest known endemism is the south coast Agulhas inshore ecoregion, which is relatively far from the national borders, and relatively isolated from large scale oceanic circulation due to the effects of the widening of the continental shelf at the Agulhas Bank on the path of the Agulhas current, and far from other warm temperate regions. This region is largely bypassed by the Agulhas current, and has cooler inshore water due to upwelling, making it less hospitable to tropical Indo-west Pacific species. It is also isolated from the South Atlantic and Southern Ocean, so has been more prone to niche speciation.
Centres of diversity
The flora are not evenly distributed over South Africa; they tend to be concentrated in centres of diversity, which are regions of relatively high local biodiversity in a global or national context.
Succulent Karoo - the arid area to the north and west of the Cape fynbos.
Cape Floristic Region -
Griqualand West Centre - the arid area roughly between Prieska, Vryburg, Vorstershoop and Upington in the Northern Cape Province. The vegetation is mainly a variety of bushveld, grassland and Karoo types.
Albany Centre - in the Eastern Cape from the Kei River to about Middelburg, Aberdeen and the Baviaanskloof Mountains, with a transitional climate between the winter-rainfall of the Western Cape and the summer-rainfall of eastern southern Africa, with a large variety of vegetation types.
Drakensberg Alpine Centre - the highest altitude mountain region of southern Africa including most of Lesotho, the mountainous area around Barkly East in the Eastern Cape, and the eastern slopes of the Drakensberg through the Eastern Cape, KwaZulu-Natal, and the Free State southwest of Harrismith. Van Rooy (2000) considers this to be one of the main centres of diversity of mosses in southern Africa.
Soutpansberg Centre - the mountains from the Blouberg in the west, along the Soutpansberg, separating the Limpopo valley from the rest of South Africa, and parts of the Limpopo valley into the north-western corner of the Kruger National Park.
Wolkberg Centre - the escarpment areas previously known as the Transvaal Drakensberg, between Carolina, Mpumalanga, in the south to Haenertsburg, Limpopo Province in the north, and west to Zebediela.
Sekhukhuneland Centre - a drier area inland of the escarpment of the Wolkberg Centre, on basic and ultramafic rocks of the Bushveld Igneous Complex.
Barberton Centre - the mountains between Barberton in Mpumalanga Province, and the border with Eswatini.
Maputaland-Pondoland Region - most of KwaZulu-Natal, with the edges extending into neighbouring areas, particularly southern Mozambique and the Eastern Cape. The area contains various grassland, thicket and forest types.
Genetic diversity
Genetic diversity is the amount of variation in the Deoxyribonucleic acid (DNA) of distinct individuals, representing the genetic characteristics of a species. From a conservation perspective, genetic diversity appears to be highly variable in populations and species.
Genetic diversity is important for evolutionary potential, as it serves as a way for populations to adapt to changing environments. With more variation, it is more likely that some individuals in a population will possess variations of alleles that are suited for the new environment. Those individuals are more likely to survive to produce offspring bearing that allele. The population will continue for more generations because of the success of these individuals.
The methods of measuring genetic diversity of a region include:
Species richness, a measure of the number of species,
Species abundance, a relative measure of the abundance of species,
Species density, an evaluation of the total number of species per unit area
Stochastic simulation software can be used to predict the future of a population given measurements such as allele frequency and population size.
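The three measures above can be illustrated with a toy survey (all species names, counts and the plot area are invented for the example):

```python
# Hypothetical survey of a 2.5 km^2 plot: individuals counted per species.
counts = {"protea": 120, "erica": 300, "restio": 80}
area_km2 = 2.5

# Species richness: the number of species present.
richness = len(counts)

# Relative abundance: each species' share of all individuals counted.
total_individuals = sum(counts.values())
abundance = {sp: n / total_individuals for sp, n in counts.items()}

# Species density: species per unit area.
density = richness / area_km2

print(richness)            # 3
print(abundance["erica"])  # 0.6
print(density)             # 1.2
```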
Hotspots
A biodiversity hotspot is a biogeographic region with significant levels of biodiversity that is threatened by human habitation. Around the world, 36 areas qualify under this definition. These sites support nearly 60% of the world's plant, bird, mammal, reptile, and amphibian species, with a very high proportion of those species as endemics. Some of these hotspots support as many as 15,000 endemic plant species and some have lost up to 95% of their natural habitat. Biodiversity hotspots support their diverse ecosystems on just 2.4% of the planet's surface, but the area defined as hotspots covers a much larger proportion of the land, at about 15.7% of the land surface area, where they have lost around 85% of their original habitat.
Three of these hotspots are largely or entirely within South Africa:
The Cape Floristic Region is the smallest of the six recognised floral kingdoms of the world, is an area of extraordinarily high diversity and endemism, and is home to over 9,000 vascular plant species, of which 69 percent are endemic. Much of this diversity is associated with the fynbos biome, a Mediterranean-type, fire-prone shrubland. The economic worth of fynbos biodiversity, based on harvests of fynbos products (e.g. wildflowers) and eco-tourism, is estimated to be in the region of R77 million a year. Thus, it is clear that the Cape Floristic Region has both economic and intrinsic biological value as a biodiversity hotspot.
The Maputaland-Pondoland-Albany Hotspot is situated near the south-eastern coast of Africa, occupying an area between the Great Escarpment and the Indian Ocean. The area is named after Maputaland, Pondoland and Albany. It stretches from the Albany Centre of Plant Endemism in the Eastern Cape Province of South Africa, through the Pondoland Centre of Plant Endemism and KwaZulu-Natal Province, the eastern side of Eswatini (known as Swaziland until 2018) and into southern Mozambique and Mpumalanga. The Maputaland Centre of Plant Endemism is contained in northern KwaZulu-Natal and southern Mozambique.
The Succulent Karoo lies along the coastal strip of southwestern Namibia and South Africa's Northern Cape Province, where the cold Benguela Current offshore creates frequent fogs. The ecoregion extends inland into the uplands of South Africa's Western Cape Province. It is bounded on the south by the Mediterranean climate fynbos, on the east by the Nama Karoo, which has more extreme temperatures and variable rainfall, and on the north by the Namib Desert.
Species lists
A simple measure of taxonomic biodiversity is the count or listing of taxa found within a region. This may be recorded as species checklists. Since the number of species may be large, a checklist may be split into lists of species in a specified taxon at whatever level is convenient. For some taxa, a list by phylum is manageable, for others lists may be broken down to family level. Within the lists, the detail is generally at species level, but may vary depending on the available information, and a utilitarian approach is used, providing available detail that appears useful and reliable. A taxon may be labelled to indicate that it is endemic, indigenous, introduced, cultivated, or invasive.
Flora
23,420 species of vascular plant have been recorded in South Africa, making it the sixth most species-rich country in the world and the most species-rich country on the African continent. Of these, 153 species are considered to be threatened. Nine biomes have been described in South Africa: Fynbos, Succulent Karoo, desert, Nama Karoo, grassland, savanna, Albany thickets, the Indian Ocean coastal belt, and forests.
The 2018 National Biodiversity Assessment plant checklist lists 35,130 taxa in the phyla Anthocerotophyta (hornworts (6)), Anthophyta (flowering plants (33,534)), Bryophyta (mosses (685)), Cycadophyta (cycads (42)), Lycopodiophyta (lycophytes (45)), Marchantiophyta (liverworts (376)), Pinophyta (conifers (33)), and Pteridophyta (cryptogams (408)).
Fauna
– List of Acanthocephala of South Africa
– List of annelids of South Africa
Arthropoda
Arachnida
Astigmata, Mesostigmata, Prostigmata, Oribatida
Ixodida
Amblypygi
Araneae
Opiliones
Palpigradi
Pseudoscorpiones
Schizomida
Scorpiones
Solifugae
Branchiopoda
Anostraca
Conchostraca
Cladocera
Notostraca
Chilopoda
Diplopoda
Entognatha
Collembola
Diplura
Insecta
Archaeognatha
Dermaptera
Diptera
Embioptera
Ephemeroptera
Notoptera
Hemiptera
Isoptera
List of butterflies of South Africa
List of moths of South Africa
List of moths of South Africa (Arctiinae)
List of moths of South Africa (Crambidae)
List of moths of South Africa (Gelechiidae)
List of moths of South Africa (Geometridae)
List of moths of South Africa (Nepticulidae)
List of moths of South Africa (Noctuidae)
List of moths of South Africa (Pyralidae)
List of moths of South Africa (Tortricidae)
Mantodea
Mecoptera
Megaloptera
Neuroptera
Odonata
Orthoptera
Phasmida
Phthiraptera
Plecoptera
Psocoptera
Siphonaptera
Strepsiptera
Thysanoptera
Trichoptera
Zygentoma
Malacostraca – List of marine crustaceans of South Africa#Malacostraca
Amphipoda
Bathynellacea
Cumacea
Decapoda
Euphausiacea
Isopoda
Leptostraca
Mysida
Spelaeogriphacea
Stomatopoda
Tanaidacea
Maxillopoda – List of marine crustaceans of South Africa#Maxillopoda
Branchiura
Copepoda – List of marine crustaceans of South Africa#Copepoda
Mystacocarida
Pentastomida
Thecostraca
Ostracoda – List of marine crustaceans of South Africa#Ostracoda
Pauropoda
Pycnogonida – List of sea spiders of South Africa
Symphyla
Brachiopoda
Bryozoa
Chaetognatha
Chordata
– List of marine bony fishes of South Africa#Actinopterygii
List of marine spiny-finned fishes of South Africa
List of marine Perciform fishes of South Africa
List of marine fishes of the suborder Percoidei of South Africa
Amphibia
Appendicularia/Thaliacea
Ascidiacea
Aves – List of birds of South Africa
Cephalochordata
Chondrichthyes – List of marine fishes of South Africa#Chondrichtyes
Mammalia – List of mammals of South Africa
Myxini – List of marine fishes of South Africa#Myxini
Reptilia
List of lizards of South Africa
Sarcopterygii – List of marine bony fishes of South Africa#Gigaclass Sarcopterygii — Lobefin fishes
Cnidaria – List of marine cnidarians of South Africa
Ctenophora – List of comb jellies of South Africa
Cycliophora
Echinodermata – List of echinoderms of South Africa
Echiura
Entoprocta
Gastrotricha
Gnathostomulida
Hemichordata
Kinorhyncha
Loricifera
Micrognathozoa
Mollusca
Aplacophora – List of marine molluscs of South Africa#Aplacophora
Bivalvia – List of marine molluscs of South Africa#Bivalvia
Cephalopoda – List of marine molluscs of South Africa#Cephalopoda
Gastropoda – List of marine gastropods of South Africa
List of marine heterobranch gastropods of South Africa
Polyplacophora – List of marine molluscs of South Africa#Polyplacophora
Scaphopoda – List of marine molluscs of South Africa#Scaphopoda
Myxozoa
Nematoda
Nematomorpha
Nemertea
Onychophora
Orthonectida
Phoronida
Placozoa
Platyhelminthes
Cestoda
Monogenea
Trematoda
Turbellaria
Porifera – List of sponges of South Africa
Priapulida
Rhombozoa
Rotifera
Sipuncula
Tardigrada
Xenacoelomorpha
Fungi
By 1945, more than 4,900 species of fungi (including lichen-forming species) had been recorded, and by 2006, the number of fungi in South Africa was estimated at 200,000 species, without taking into account fungi associated with insects. If correct, then the number of South African fungi dwarfs that of its plants. In at least some major South African ecosystems, an exceptionally high percentage of fungi are highly specific in terms of the plants with which they occur. The country's Biodiversity Strategy and Action Plan does not mention fungi (including lichen-forming fungi).
Ascomycota
Basidiomycota
Zygomycota
Lichen-forming and lichenicolous fungi
Oomycete
History
I. B. Pole-Evans established a national collection of fungi in Pretoria after his appointment in 1905. The previously existing collections of MacOwan and Medley Wood comprised 765 specimens. By 1950 the collection included more than 35,000 fungal specimens. The collections of P.A. van der Bijl and L. Verwoerd were housed at Stellenbosch, and the P. MacOwan collection and Bolus herbarium collections at Cape Town. Several European herbaria, including Kew and the International Mycological Institute, also held collections. E. M. Doidge (1950) summarised the content, listing 835 species of Ascomycetes, 1704 Basidiomycetes, 93 Myxomycetes, 77 Phycomycetes, 1159 lichens, and 880 fungi imperfecti, with a total of 4748 species.
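Doidge's group counts can be added up to confirm her stated total (counts as given in the text):

```python
# Species counts by group from Doidge's 1950 summary; they should
# sum to the stated total of 4748 species.
doidge_1950 = {
    "Ascomycetes": 835,
    "Basidiomycetes": 1704,
    "Myxomycetes": 93,
    "Phycomycetes": 77,
    "lichens": 1159,
    "fungi imperfecti": 880,
}
total_fungi = sum(doidge_1950.values())
print(total_fungi)  # 4748
```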
Other eukaryotes
– List of green seaweeds of South Africa
– List of brown seaweeds of South Africa
– List of red seaweeds of South Africa
Prokaryotes
Threats
Biodiversity loss is the extinction of species worldwide, and also the local reduction or loss of species in a given habitat. Local losses can be temporary or permanent, depending on whether the environmental degradation that leads to the loss is reversible through ecological restoration or ecological resilience, or effectively permanent. Global extinction has so far been proven to be irreversible.
Even though permanent global species loss is a more dramatic phenomenon than regional changes in species composition, even minor changes from a healthy stable state can have dramatic influence on the food web and the food chain insofar as reductions in only one species can adversely affect the entire chain (coextinction), leading to an overall reduction in biodiversity, possible alternative stable states of an ecosystem notwithstanding. Ecological effects of biodiversity are usually counteracted by its loss. Reduced biodiversity in particular leads to reduced ecosystem services and eventually poses an immediate danger for food security, both within the ecosystem, and for human populations relying on it.
Habitat change, by way of habitat fragmentation or habitat destruction, is the most important driver currently affecting biodiversity, as some 40% of forests and ice-free habitats have been converted to cropland or pasture. Other drivers are overexploitation, pollution, invasive species, and climate change.
Human impacts
According to a 2019 Global Assessment Report on Biodiversity and Ecosystem Services by IPBES, 25% of plant and animal species are globally threatened with extinction as the result of human activity. As a region with high endemic diversity and three major biodiversity hotspots, South Africa is one of the regions where this is highly significant.
Climate change
Climate change includes both the global warming driven by human emissions of greenhouse gases, and the resulting large-scale shifts in weather patterns. While there have been previous periods of climatic change, changes observed since the mid-20th century have been unprecedented in rate and scale.
Endangered species
An endangered species is a species that is very likely to become extinct in the near future, either worldwide or in a particular region. Endangered species may be at risk due to factors such as habitat loss, poaching and invasive species. The International Union for Conservation of Nature (IUCN) Red List lists the global conservation status of many species, and various other agencies assess the status of species within particular areas. Some endangered species are the target of extensive conservation efforts such as captive breeding and habitat restoration.
Extinction
Rapid environmental changes typically cause mass extinctions. More than 99.9 percent of all species that ever lived on Earth, amounting to over five billion species, are estimated to be extinct.
Economic value
Ways in which the biodiversity of SA has economic value to the inhabitants
Natural resources
Employment opportunities
Tourism industry
The economic worth of fynbos biodiversity, based on harvests of fynbos products (e.g. wildflowers) and eco-tourism, is estimated to be in the region of R77 million a year. Thus, it is clear that the Cape Floristic Region has both economic and intrinsic biological value as a biodiversity hotspot.
Management
South Africa signed the Rio Convention on Biological Diversity on 4 June 1994, and became a party to the convention on 2 November 1995. It has subsequently produced a National Biodiversity Strategy and Action Plan, which was received by the convention on 7 June 2006.
Responsibility
Government department - Department of the Environment, Forestry and Fisheries. Previous departments: DEAT etc.
Laws
Sustainable use
Ecotourism in South Africa has become more prevalent as a possible method of supporting the maintenance of biodiversity.
Protection
Protected areas
The protected areas of South Africa include national parks and marine protected areas managed by the national government, public nature reserves managed by provincial and local governments, and private nature reserves managed by private landowners. Most protected areas are intended for the conservation of flora and fauna. National parks are maintained by South African National Parks (SANParks). A number of national parks have been incorporated in transfrontier conservation areas.
Research
Research institutions
The South African National Biodiversity Institute (SANBI) is an organisation established in 2004 in terms of the National Environmental Management: Biodiversity Act, No 10 of 2004, under the South African Department of Environmental Affairs (later named Department of Environment, Forestry and Fisheries), tasked with research and dissemination of information on biodiversity, and legally mandated to contribute to the management of the country's biodiversity resources.
Marine research:
National Biodiversity Assessment
The National Biodiversity Assessment (NBA) is a recurring project by the South African National Biodiversity Institute in collaboration with the government department currently responsible for environmental affairs and several other organisations to assess the state of South Africa's biodiversity over time as an input for policy and decision making where the environment may be affected. The NBA looks into genetic, species and ecosystems biodiversity for terrestrial, freshwater, estuarine and marine environments. Each assessment cycle nominally takes approximately five years, and both generates new knowledge and analyses existing knowledge. NBA reports are named for the year of the data, and are usually published in the following year. They have been published for 2004, 2011, and 2018, and include reports, data, and supplementary documents.
See also
References
Sources
Global Warming of 1.5 °C —.
RNA therapeutics
RNA therapeutics are a new class of medications based on ribonucleic acid (RNA). Research has been working toward clinical use since the 1990s, with significant success in cancer therapy in the early 2010s. In 2020 and 2021, mRNA vaccines were developed globally for use in combating the coronavirus disease (COVID-19) pandemic. The Pfizer–BioNTech COVID-19 vaccine was the first mRNA vaccine approved by a medicines regulator, followed by the Moderna COVID-19 vaccine, and others.
The main types of RNA therapeutics are those based on messenger RNA (mRNA), antisense RNA (asRNA), RNA interference (RNAi), and RNA aptamers. Of the four types, mRNA-based therapy is the only type which is based on triggering synthesis of proteins within cells, making it particularly useful in vaccine development. Antisense RNA is complementary to coding mRNA and is used to trigger mRNA inactivation to prevent the mRNA from being used in protein translation. RNAi-based systems use a similar mechanism, and involve the use of both small interfering RNA (siRNA) and micro RNA (miRNA) to prevent mRNA translation and/or degrade mRNA. In contrast, RNA aptamers are short, single-stranded RNA molecules produced by directed evolution to bind to a variety of biomolecular targets with high affinity, thereby affecting their normal in vivo activity.
RNA is synthesized from template DNA by RNA polymerase, with messenger RNA (mRNA) serving as the intermediary biomolecule between DNA expression and protein translation. Because of its unique properties (such as its typically single-stranded nature and its 2' OH group) and its ability to adopt many different secondary/tertiary structures, both coding and noncoding RNAs have attracted attention in medicine. Research has begun to explore RNA's potential to be used for therapeutic benefit, and unique challenges have arisen during drug discovery and implementation of RNA therapeutics.
mRNA
Messenger RNA (mRNA) is a single-stranded RNA molecule that is complementary to one of the DNA strands of a gene. An mRNA molecule transfers a portion of the DNA code to other parts of the cell for making proteins. DNA therapeutics need access to the nucleus to be transcribed into RNA, and their functionality depends on nuclear envelope breakdown during cell division. mRNA therapeutics, however, do not need to enter the nucleus to be functional, since they are translated immediately once they reach the cytoplasm. Moreover, unlike plasmids and viral vectors, mRNAs do not integrate into the genome and therefore do not carry the risk of insertional mutagenesis, making them suitable for use in cancer vaccines, tumor immunotherapy and infectious disease prevention.
Discovery and development
In 1953, Alfred Day Hershey reported that soon after infection with phage, bacteria produced a form of RNA at a high level, and that this RNA was also broken down rapidly. However, the first clear indication of mRNA came from the work of Elliot Volkin and Lazarus Astrachan in 1956, who infected E. coli with T2 bacteriophages and placed them in medium with 32P. They found that the protein synthesis of E. coli was halted and phage proteins were synthesized instead. Then, in May 1961, the collaborating researchers Sydney Brenner, François Jacob, and Jim Watson announced the isolation of mRNA. For a few decades after the discovery of mRNA, research focused on understanding its structural, functional, and metabolic aspects. In 1990, however, Jon A. Wolff demonstrated the idea of nucleic acid-encoded drugs by directly injecting in vitro transcribed (IVT) mRNA or plasmid DNA (pDNA) into the skeletal muscle of mice, which expressed the encoded protein in the injected muscle.
Once IVT mRNA has reached the cytoplasm, the mRNA is translated instantly. Thus, it does not need to enter the nucleus to be functional. Also, it does not integrate into the genome and therefore does not have the risk of insertional mutagenesis. Moreover, IVT mRNA is only transiently active and is completely degraded via physiological metabolic pathways. Due to these reasons, IVT mRNA has undergone extensive preclinical investigation.
Mechanisms
In vitro transcription (IVT) is performed on a linearized DNA plasmid template containing the targeted coding sequence. Then, naked mRNA or mRNA complexed in a nanoparticle will be delivered systemically or locally. Subsequently, a part of the exogenous naked mRNA or complexed mRNA will go through cell-specific mechanisms. Once in the cytoplasm, the IVT mRNA is translated by the protein synthesis machinery.
There are two identified classes of RNA sensors: toll-like receptors (TLRs) and the RIG-I-like receptor family. TLRs are localized in the endosomal compartment of cells such as DCs and macrophages. The RIG-I-like family acts as pattern recognition receptors (PRRs). However, the immune response mechanisms, the process of mRNA vaccine recognition by cellular sensors, and the mechanism of sensor activation are still unclear.
Applications
Cancer immunotherapy
In 1995, Robert Conry demonstrated that intramuscular injection of naked RNA encoding carcinoembryonic antigen elicited antigen-specific antibody responses. This was later elaborated by demonstrating that dendritic cells (DCs) exposed to mRNA coding for specific antigens, or to total mRNA extracted from tumor cells, and injected into tumor-bearing mice induced T cell immune responses and inhibited the growth of tumors. Researchers then began developing vaccines based on ex vivo IVT mRNA-transfected DCs. Meanwhile, Argos Therapeutics initiated a Phase III clinical trial using DCs in patients with advanced renal cell carcinoma in 2015 (NCT01582672), but it was terminated due to lack of efficacy.
For further application, IVT mRNA was optimized for in situ transfection of DCs in vivo. This improved the translation efficiency and stability of IVT mRNA and enhanced the presentation of the mRNA-encoded antigen on MHC class I and II molecules. Researchers then found that direct injection of naked IVT mRNA into lymph nodes was the most effective way to induce T cell responses. Based on this discovery, first-in-human testing of the injection of naked IVT mRNA encoding cancer antigens by BioNTech began in patients with melanoma (NCT01684241).
More recently, a new cancer immunotherapy combining self-delivering RNA (sd-rxRNA) with adoptive cell transfer (ACT) therapy was developed by RXi Pharmaceuticals and the Karolinska Institute. In this therapy, sd-rxRNA eliminates the expression of immunosuppressive receptors and proteins in therapeutic immune cells, improving the ability of those immune cells to destroy tumor cells. PD-1-targeted sd-rxRNA helped increase the anti-tumor activity of tumor-infiltrating lymphocytes (TILs) against melanoma cells. Based on this idea, mRNA-4157 has been tested and has passed a phase I clinical trial.
Cytosolic nucleic acid-sensing pathways can enhance the immune response to cancer. In mouse studies of the RIG-I agonist stem-loop RNA (SLR) 14, tumor growth was significantly delayed and survival extended, and SLR14 improved the antitumor efficacy of anti-PD1 antibody over single-agent treatment. SLR14 was absorbed by CD11b+ myeloid cells in the tumor microenvironment; genes associated with immune defense were significantly up-regulated, along with increased numbers of CD8+ T lymphocytes, NK cells, and CD11b+ cells. SLR14 also inhibited nonimmunogenic B16 tumor growth, leaving immune memory.
Vaccines
In 1993, the first success of an mRNA vaccine was reported in mice, using liposome-encapsulated IVT mRNA encoding the nucleoprotein of influenza, which induced virus-specific T cells. IVT mRNA was later formulated with synthetic lipid nanoparticles, inducing protective antibody responses against respiratory syncytial virus (RSV) and influenza virus in mice.
There are a few different approaches to IVT mRNA-based vaccine development for infectious diseases. One successful approach uses self-amplifying IVT mRNA containing sequences of positive-stranded RNA viruses; originally developed for a flavivirus, it proved workable with intradermal injection. Another approach injects a two-component vaccine containing an mRNA adjuvant and naked IVT mRNA encoding the influenza hemagglutinin antigen, either alone or in combination with neuraminidase-encoding IVT mRNA.
For HIV treatment, for example, vaccines use DCs transfected with IVT mRNA encoding HIV proteins. A few phase I and II clinical trials using IVT mRNA encoding combinations have shown that antigen-specific CD8+ and CD4+ T cell responses can be induced. However, no antiviral effects have been observed in the clinical trials.
Another mRNA vaccine targets COVID-19. Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) broke out in December 2019 and spread all over the world, causing a pandemic of respiratory illness designated coronavirus disease 2019 (COVID-19). The Moderna COVID-19 vaccine, manufactured by Moderna since 2020, is a lipid nanoparticle (LNP)-encapsulated mRNA-based vaccine that encodes a full-length, prefusion-stabilized spike (S)-2P antigen of SARS-CoV-2 with a transmembrane anchor.
Anti-viral
In 2021, SLR14 was reported to prevent infection in the lower respiratory tract and severe disease in an interferon type I (IFN-I)–dependent manner in mice. Immunodeficient mice with chronic SARS-CoV-2 infection experienced near-sterilizing innate immunity with no help from the adaptive immune system.
Tissue regeneration
A 2022 study by researchers from the Mayo Clinic, Maastricht University, and Ethris GmbH, a biotech company that focuses on RNA therapeutics, found that chemically modified mRNA encoding BMP-2 promoted dose-dependent healing of femoral osteotomies in male rats. The mRNA molecules were complexed within nonviral lipid particles, loaded onto sponges, and surgically implanted into the bone defects. They remained localized around the site of application. Compared to tissue from animals receiving rhBMP-2 directly, bony tissue regenerated after mRNA treatment displayed superior strength and less formation of massive callus.
Limitations
There are many challenges to the successful translation of mRNA into drugs, because mRNA is a very large molecule (10^5–10^6 Da). Moreover, mRNA is unstable and easily degraded by nucleases, and it also activates the immune system. Furthermore, mRNA has a high negative charge density, which reduces its permeation across cellular membranes. For these reasons, without an appropriate delivery system mRNA is degraded easily, and its half-life without a delivery system is only around 7 hours. Even though some of these challenges can be overcome by chemical modifications, delivery of mRNA remains an obstacle. Methods researched to improve mRNA delivery include microinjection, RNA patches (mRNA loaded in a dissolving micro-needle), gene guns, protamine condensation, RNA adjuvants, and encapsulating mRNA in lipid nanoparticles.
Even though in vitro transcribed (IVT) mRNA with delivery agents has shown improved resistance against degradation, more studies are needed on how to improve the efficiency of delivering naked mRNA in vivo.
Approved RNA therapeutics
patisiran
givosiran
lumasiran
inclisiran
Antisense RNA
Antisense RNA is the non-coding and single-stranded RNA that is complementary to a coding sequence of mRNA. It inhibits the ability of mRNA to be translated into proteins. Short antisense RNA transcripts are produced within the nucleus by the action of the enzyme Dicer, which cleaves double-stranded RNA precursors into 21–26 nucleotide long RNA species.
One antisense-based discovery strategy involves the rationale and design of screening assays and the application of such assays to screening natural product extracts, as in the discovery of fatty acid condensing enzyme inhibitors. Antisense RNA is used for treating cancer, for inhibition of metastasis, and in vectors for antisense sequestration, in particular delivering microRNAs (miRs) 15 and 16 to a patient in need of treatment for the diagnosis and prophylaxis of cancer. Antisense drugs are based on the fact that antisense RNA hybridizes with and inactivates mRNA. These drugs are short sequences of RNA that attach to mRNA and stop a particular gene from producing the protein it encodes. Antisense drugs are being developed to treat lung cancer, diabetes, and diseases with a major inflammatory component such as arthritis and asthma. Decreased expression of MLLT4 antisense RNA 1 (MLLT4-AS1), for example, is a potential biomarker and a predictor of poor prognosis in gastric cancer. So far, applications of antisense RNAs in antiviral and anticancer treatments, and in regulating the expression of related genes in plants and microorganisms, have been explored.
Non-viral vectors, virus vectors and liposomes have been used to deliver antisense RNA through the cell membrane into the cytoplasm and nucleus. Viral vector-based delivery has been found to be the most advantageous among the different delivery systems because of its high transfection efficiency. However, it is difficult to deliver antisense RNA only to the targeted sites. The size and stability issues of antisense RNA also place some limitations on its use. To address these delivery issues, chemical modifications and new oligonucleotide designs have been studied to improve drug distribution, side effects, and tolerability.
RNAi
Interfering RNAs are a class of short, noncoding RNAs that act to repress gene expression translationally or post-transcriptionally. Their discovery and subsequent identification as key effectors of post-transcriptional gene regulation have made small interfering RNA (siRNA) and micro RNA (miRNA) potential therapeutics for systemic diseases. The RNAi system was originally discovered in 1990 by Jorgensen et al., who were doing research involving the introduction of coloration genes into petunias, and it is thought that this system originally developed as a means of innate immunity against double-stranded RNA viruses.
siRNA
Small interfering RNAs (siRNAs) are short, 19–23 base-pair (with a 3' overhang of two nucleotides), double-stranded pieces of RNA that participate in the RNA-induced silencing complex (RISC) for gene silencing. Specifically, siRNA is bound by the RISC complex, where it is unwound using ATP hydrolysis. It is then used as a guide by the enzyme "Slicer" to target mRNAs for degradation based on complementary base-pairing to the target mRNA. As a therapeutic, siRNA can be delivered locally, through the eye or nose, to treat various diseases. Local delivery benefits from simple formulation and drug delivery and high bioavailability of the drug. Systemic delivery is necessary to target cancers and other diseases, and targeting the siRNA when delivered systemically is one of the main challenges in siRNA therapeutics. While it is possible to use intravenous injection to deliver siRNA therapies, concerns have been raised about the large volumes used in the injection, as these must often be ~20–30% of the total blood volume. Other methods of delivery include liposome packaging, conjugation to membrane-permeable peptides, and direct tissue/organ electroporation. Additionally, it has been found that exogenous siRNAs only last a few days (a few weeks at most in non-dividing cells) in vivo. If siRNA is able to successfully reach its target, it has the potential to therapeutically regulate gene expression through its ability to base-pair to mRNA targets and promote their degradation through the RISC system. Currently, siRNA-based therapy is in a phase I clinical trial for the treatment of age-related macular degeneration, although it is also being explored for use in cancer therapy. For instance, siRNA can be used to target mRNAs that code for proteins that promote tumor growth, such as the VEGF receptor and telomerase enzyme.
miRNA
Micro RNAs (miRNAs) are short, ~19–23 base pair long RNA oligonucleotides that are involved in the microRNA-induced silencing complex. Specifically, once loaded onto the ARGONAUTE enzyme, miRNAs work with mRNAs to repress translation and post-transcriptionally destabilize mRNA. While they are functionally similar to siRNAs, miRNAs do not require extensive base-pairing for mRNA silencing (as few as seven base-pairs with the target can suffice), thus allowing them to affect a wider range of mRNA targets. In the cell, miRNA uses switch, tuning, and neutral interactions to finely regulate gene repression. As a therapeutic, miRNA has the potential to affect biochemical pathways throughout the organism.
With more than 400 miRNA identified in humans, discerning their target gene for repression is the first challenge. Multiple databases have been built, for example TargetScan, using miRNA seed matching. In vitro assays assist in determining the phenotypic effects of miRNAs, but due to the complex nature of gene regulation not all identified miRNAs have the expected effect. Additionally, several miRNAs have been found to act as either tumor suppressors or oncogenes in vivo, such as the oncogenic miR-155 and miR-17-92.
In clinical trials, miRNAs are commonly used as biomarkers for a variety of diseases, potentially providing earlier diagnosis as well as information on disease progression, stage, and genetic links. Phase 1 and 2 trials currently test miRNA mimics (to restore miRNA expression) and miRNA antagonists (to repress miRNAs) in patients with cancers and other diseases. In particular, mimic miRNAs are used to introduce miRNAs that act as tumor suppressors into cancerous tissues, while miRNA antagonists are used to target oncogenic miRNAs to prevent their cancer-promoting activity. Therapeutic miRNA is also used in addition to common therapies (such as cancer therapies) that are known to overexpress or destabilize the patient's miRNA levels. An example of one mimic miRNA therapy that demonstrated efficacy in impeding lung cancer tumor growth in mouse studies is miR-34a.
One concerning aspect of miRNA-based therapies is the potential for the exogenous miRNA to affect miRNA silencing mechanisms within normal body cells, thereby affecting normal cellular biochemical pathways. However, in vivo studies have indicated that miRNAs display little to no effect in non-target tissues/organs.
RNA aptamers
Broadly, aptamers are small molecules composed of either single-stranded DNA or RNA and are typically 20-100 nucleotides in length, or ~3-60 kDa. Because of their single-stranded nature, aptamers are capable of forming many secondary structures, including pseudoknots, stem loops, and bulges, through intra-strand base pairing interactions. The combinations of secondary structures present in an aptamer confer it a particular tertiary structure which in turn dictates the specific target the aptamer will selectively bind to. Because of the selective binding ability of aptamers, they are considered a promising biomolecule for use in pharmaceuticals. Additionally, aptamers exhibit tight binding to targets, with dissociation constants often in the pM to nM range. Besides their strong binding ability, aptamers are also valued because they can be used on targets that are not capable of being bound by small peptides generated by phage display or by antibodies, and they are able to differentiate between conformational isomers and amino acid substitutions. Also, because aptamers are nucleic-acid based, they can be directly synthesized, eliminating the need for cell-based expression and extraction as is the case in antibody production. RNA aptamers in particular are capable of producing a myriad of different structures, leading to speculations that they are more discriminating in their target affinity compared to DNA aptamers.
Discovery and development
Aptamers were originally discovered in 1990, when Larry Gold and Craig Tuerk utilized a method of directed evolution known as SELEX to isolate a small single-stranded RNA molecule that was capable of binding to T4 bacteriophage DNA polymerase. Additionally, the term "aptamer" was coined by Andrew Ellington, who worked with Jack Szostak to select an RNA aptamer capable of tight binding to certain organic dye molecules. The term itself is a conglomeration of the Latin "aptus" or "to fit" and the Greek "meros" or "part."
RNA aptamers are not so much “created” as “selected.” To develop an RNA aptamer capable of selective binding to a molecular target, a method known as Systematic Evolution of Ligands by EXponential Enrichment (SELEX) is used to isolate a unique RNA aptamer from a pool of ~10^13 to 10^16 different aptamers, otherwise known as a library. The library of potential aptamer oligonucleotides is then incubated with a non-target species so as to remove aptamers that exhibit non-specific binding. After subsequent removal of the non-specific aptamers, the remaining library members are then exposed to the desired target, which can be a protein, peptide, cell type, or even an organ (in the case of live animal-based SELEX). From there, the RNA aptamers which were bound to the target are transcribed to cDNA which then is amplified through PCR, and the PCR products are then re-transcribed to RNA. These new RNA transcripts are then used to repeat the selection cycle many times, thus eventually producing a homogeneous pool of RNA aptamers capable of highly specific, high-affinity target binding.
Examples
RNA aptamers can be designed to act as antagonists, agonists, or so-called "RNA decoy aptamers." In the case of antagonists, the RNA aptamer is used either to prevent binding of a certain protein to its cell membrane receptor or to prevent the protein from performing its activity by binding to the protein's target. Currently, the only RNA aptamer-based therapies that have advanced to clinical trials act as antagonists. When RNA aptamers are designed to act as agonists, they promote immune cell activation as a co-stimulatory molecule, thus aiding in the mobilization of the body's own defense system. For RNA decoy aptamers, the synthetic RNA aptamer resembles a native RNA molecule. As such, proteins which bind to the native RNA target instead bind to the RNA aptamer, possibly interfering with the biomolecular pathway of a particular disease. In addition to their utility as direct therapeutic agents, RNA aptamers are also being considered for other therapeutic roles. For instance, by conjugating the RNA aptamer to a drug compound, the RNA aptamer can act as a targeted delivery system for that drug. Such RNA aptamers are known as ApDCs. Additionally, through conjugation to a radioisotope or a fluorescent dye molecule, RNA aptamers may be useful in diagnostic imaging.
Because of the SELEX process utilized to select RNA aptamers, RNA aptamers can be generated for many potential targets. By directly introducing the RNA aptamers to the target during SELEX, a very selective, high-affinity, homogeneous pool of RNA aptamers can be produced. As such, RNA aptamers can be made to target small peptides and proteins, as well as cell fragments, whole cells, and even specific tissues. Examples of RNA aptamer molecular targets and potential targets include vascular endothelial growth factor, osteoblasts, and C-X-C Chemokine Ligand 12 (CXCL2).
An example of an RNA aptamer therapy is Pegaptanib (also known as Macugen®), the only FDA-approved RNA aptamer treatment. Originally approved in 2004 to treat age-related macular degeneration, Pegaptanib is a 28-nucleotide RNA aptamer that acts as a VEGF antagonist. However, it is not as effective as antibody-based treatments such as bevacizumab and ranibizumab. Another example of an RNA aptamer therapeutic is NOX-A12, a 45-nucleotide RNA aptamer that is in clinical trials for chronic lymphocytic leukemia, pancreatic cancer, and other cancers. NOX-A12 acts as an antagonist of CXCL12/SDF-1, a chemokine involved in tumor growth.
Limitations
While the high-selectivity and tight-binding of RNA aptamers have generated interest in their use as pharmaceuticals, there are many problems which have prevented them from being successful in vivo. For one, without modifications RNA aptamers are degraded after being introduced into the body by nucleases in the span of a few minutes. Also, due to their small size, RNA aptamers can be removed from the bloodstream by the renal system. Because of their negative charge, RNA aptamers are additionally known to bind proteins in the bloodstream, leading to non-target tissue delivery and toxicity. Care must also be taken when isolating the RNA aptamers, as aptamers which contain repeated Cytosine-Phosphate-Guanine (CpG) sequences will cause immune system activation through the Toll-like receptor pathway.
In order to combat some of the in vivo limitations of RNA aptamers, various modifications can be added to the nucleotides to aid in efficacy of the aptamer. For instance, a polyethylene glycol (PEG) moiety can be attached to increase the size of the aptamer, thereby preventing its removal from the bloodstream by the renal glomerulus. However, PEG has been implicated in allergic reactions during in vivo testing. Furthermore, modifications can be added to prevent nuclease degradation, such as a 2’ fluoro or amino group as well as a 3’ inverted thymidine. Additionally, the aptamer can be synthesized so that the ribose sugar is in the L-form instead of the D-form, further preventing nuclease recognition. Such aptamers are known as Spiegelmers. In order to prevent Toll-like receptor pathway activation, the cytosine nucleobases within the aptamer can be methylated. Nevertheless, despite these potential solutions to reduced in vivo efficacy, it is possible that chemically modifying the aptamer may weaken its binding affinity towards its target.
See also
Riboswitch
ncRNA therapy
References
External links
RNA therapeutics on the rise, Nature (April 2020)
RNA
Biotechnology
Molecular biology
dplyr is an R package whose set of functions are designed to enable dataframe (a spreadsheet-like data structure) manipulation in an intuitive, user-friendly way. It is one of the core packages of the popular tidyverse set of packages in the R programming language. Data analysts typically use dplyr in order to transform existing datasets into a format better suited for some particular type of analysis, or data visualization.
For instance, someone seeking to analyze a large dataset may wish to only view a smaller subset of the data. Alternatively, a user may wish to rearrange the data in order to see the rows ranked by some numerical value, or even based on a combination of values from the original dataset. Functions within the dplyr package will allow a user to perform such tasks.
dplyr was launched in 2014. On the dplyr web page, the package is described as "a grammar of data manipulation, providing a consistent set of verbs that help you solve the most common data manipulation challenges."
The five core verbs
While dplyr actually includes several dozen functions that enable various forms of data manipulation, the package features five primary verbs or actions:
filter(), which is used to extract rows from a dataframe, based on conditions specified by a user;
select(), which is used to subset a dataframe by its columns;
arrange(), which is used to sort rows in a dataframe based on attributes held by particular columns;
mutate(), which is used to create new variables, by altering and/or combining values from existing columns; and
summarize(), also spelled summarise(), which is used to collapse values from a dataframe into a single summary.
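A minimal sketch of the five verbs, chained with the pipe operator on the built-in starwars dataset (the derived bmi column is an illustrative variable, not part of the dataset):

```r
library(dplyr)

starwars %>%
  filter(species == "Human") %>%             # extract rows meeting a condition
  select(name, height, mass) %>%             # subset the dataframe by columns
  mutate(bmi = mass / (height / 100)^2) %>%  # create a new variable from existing columns
  arrange(desc(bmi)) %>%                     # sort rows by the new variable
  summarize(mean_bmi = mean(bmi, na.rm = TRUE))  # collapse to a single summary value
```

Each verb takes a dataframe as its first argument and returns a new dataframe, which is what makes the verbs composable with the pipe.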
Additional functions
In addition to its five main verbs, dplyr also includes several other functions that enable exploration and manipulation of dataframes. Included among these are:
count(), which is used to sum the number of unique observations that contain some particular value or categorical attribute;
rename(), which enables a user to alter the column names for variables, often to improve ease of use and intuitive understanding of a dataset;
slice_max(), which returns a data subset containing the rows with the largest values of some particular variable;
slice_min(), which returns a data subset containing the rows with the smallest values of some particular variable.
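These helpers can be sketched on the same built-in starwars dataset (the column name character introduced by rename() is illustrative):

```r
library(dplyr)

# Tally observations by a categorical attribute, most frequent first
starwars %>% count(species, sort = TRUE)

# Rename a column, then keep the rows with the three largest heights
starwars %>%
  rename(character = name) %>%
  slice_max(height, n = 3) %>%
  select(character, species, height)
```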
Built-in datasets
The dplyr package comes with five datasets. These are: band_instruments, band_instruments2, band_members, starwars, storms.
Copyright & license
The copyright to dplyr is held by Posit PBC, formerly RStudio PBC. dplyr was originally released under a GPL license, but in 2022, Posit changed the license terms for the package to the "more permissive" MIT License. The main difference between the two types of license is that the MIT license allows subsequent re-use of code within proprietary software, whereas a GPL license does not.
References
Data analysis software
Statistical software
Free R (programming language) software
IC 4040 is a type SABc spiral galaxy with a bar in Coma Berenices. It is located 353 million light-years away from the Solar System and has an estimated diameter of 105,000 light-years, making it slightly larger than the Milky Way. IC 4040 was discovered on April 12, 1891, by Guillaume Bigourdan and is a member of the Coma Cluster.
Characteristics
IC 4040 is considered a jellyfish galaxy due to its location within the cluster, where gas is stripped from the galaxy by the action of ram pressure. A radio continuum tail can be seen extending outwards from the galaxy, showing the widespread occurrence of relativistic electrons and magnetic fields being removed by the pressure. The stripped electrons are re-accelerated by turbulence and ICM shocks or by new supernovae, since massive stars can be found in H II regions located in its ram-pressure-stripped tail.
According to studies, extended ionized gas surrounds IC 4040, showing radial velocities that increase to between 400 and 800 kilometres per second with distance from the nucleus of the galaxy. In addition, a low-velocity filament at the southeastern edge of IC 4040 exhibits blue continuum and strong Hα emission, with equivalent widths exceeding 200 Å and reaching 1000 Å for some knots, indicating intense star-formation activity. Some of these filaments show signs of shock emission-line spectra, suggesting that shock heating plays an important role in the excitation and ionization of the extended ionized gas. IC 4040 also presents a strong radio source compared to galaxies of type E/S0.
Supernovae
Two supernovae have been discovered in IC 4040 so far: PTF11gdh in 2011 and SN 2022jo in 2022.
PTF11gdh
PTF11gdh was discovered on June 21, 2011, in IC 4040 by the Palomar Transient Factory. The supernova was located 0" east and 0" south of the nucleus. The supernova was of Type Ia.
SN 2022jo
SN 2022jo was discovered in IC 4040 on January 9, 2022, by a group of astronomers (Chunpeng Bi, Jianlin Xu, Mi Zhang, Jingyuan Zhao, Guoyou Sun, Jiangao Ruan and Wenjie Zhou) from Xingming Observatory. SN 2022jo was found at right ascension 13h 00m 37.666s and declination +28° 03' 25.71". It was located 0".0 east and 0".0 north of the nucleus. The supernova was of Type II.
References
4040
Spiral galaxies
044789
Coma Cluster
Coma Berenices
Astronomical objects discovered in 1891
Discoveries by Guillaume Bigourdan
SDSS objects
2MASS objects
+05-31-085
F12582+2819
Riviera Tower, also known as Marina Tower or Hellinikon Marina Residential Tower, is a residential skyscraper currently under construction in Athens, Greece. Located in the coastal district of Elliniko, it will be the tallest building in Greece at a height of . The tower, designed by Foster + Partners, will host 200 apartments on 50 floors when completed in 2026. The tower will feature green building characteristics, and its developers aim for LEED Gold certification. The tower is part of the Hellenikon Metropolitan Park.
See also
List of tallest buildings and structures in Greece
List of tallest buildings in Europe
References
Skyscrapers in Athens
Residential skyscrapers
Residential buildings in Greece
Buildings and structures under construction
NGC 1761 (also known as GC 980, JH 2710, LH 9) is an open cluster in the Dorado constellation in the Large Magellanic Cloud. It encompasses a group of about 50 massive hot young stars. These stars are among the largest stars known anywhere in the Universe and appear bright blue-white in colour. The stars in turn have given birth to new stars within dark globules. NGC 1761 is particularly noteworthy for its intense ultraviolet radiation, which has eroded a large hole in the surrounding nebular material. It is similar in structure to the more famous Rosette Nebula.
It is part of a large region of stars called LMC-N11 (N11) which was discovered with a 23-cm telescope by the astronomer James Dunlop in 1826 and was also observed by John Herschel in 1835.
References
External links
ESO objects
1761
Dorado
Open clusters
Large Magellanic Cloud
Astronomical objects discovered in 1826
Discoveries by James Dunlop | NGC 1761 | [
"Astronomy"
] | 197 | [
"Dorado",
"Constellations"
] |
76,752,629 | https://en.wikipedia.org/wiki/Employee-driven%20growth | Employee-driven growth (EDG) is a business philosophy that centers an organization’s growth on employee support, engagement, and development. It uses employee recognition, engagement, and rewards as strategies for business growth and customer satisfaction.
Key principles
Development
EDG uses feedback loops and incentive structures to improve job performance and satisfaction, which result in enhanced customer experiences. Employee development includes additional classes, certifications, and training that keep employees engaged with their jobs.
Recognition and rewards
EDG emphasizes recognizing and rewarding employees for exceptional service, positive online reviews, and contributions to business success. This recognition often takes the form of financial incentives, but could also include non-cash rewards and other more intrinsic motivators.
Impact
Employee-driven company practices have been shown in studies to increase customer satisfaction by as much as 30 percent. Research has also suggested that the use of recognition and rewards results in increased employee satisfaction, retention, and productivity. Positive employee experience metrics have also been correlated with a 50% increase in revenue growth.
In both franchise businesses and larger corporations, EDG principles such as reward and recognition systems are used to increase hiring and retention rates.
References
Business ethics
Employee relations
Workplace
Human resource management
Organizational behavior
Organizational culture | Employee-driven growth | [
"Biology"
] | 244 | [
"Behavior",
"Organizational behavior",
"Human behavior"
] |
76,753,043 | https://en.wikipedia.org/wiki/Arkham%20Intelligence | Arkham Intelligence, branded Arkham, is a global company that operates a cryptocurrency exchange platform as well as a public data application that enables users to analyze blockchain and cryptocurrency activity. Founded by Miguel Morel in 2020, the company's platform utilizes AI to identify and catalog the owners of blockchain addresses. Its partners include various cryptocurrency and blockchain companies.
History
Morel founded Arkham in 2020 and received investments from angel investors including Tim Draper, Joe Lonsdale of Palantir Technologies, and Sam Altman of OpenAI and Worldcoin.
In July 2022, amid Celsius Network's bankruptcy, Arkham found that Celsius owed over $500 million worth of digital assets to three of the biggest DeFi lenders, including Aave Protocol; it was also reported that Celsius had worked with a previously unidentified fund manager to purchase NFTs and make deposits on yield-bearing decentralized exchanges.
Arkham reported on a hacker who stole approximately $477 million worth of tokens from FTX and sent 180,000 Ethereum (ETH) coins to at least a dozen digital wallets in November 2022. Arkham analysts noted that the hacker followed two patterns: operating between 08:00 and 10:00 UTC and creating new accounts for each operation.
In December 2022, Arkham tracked over $1 million of transferred funds tied to former FTX chief executive Sam Bankman-Fried as well as $1.7 million worth of cryptocurrencies liquidated within a 24-hour time span. This data was used by prosecutors from the Southern District of New York, who filed criminal charges against Bankman-Fried for his role in FTX's collapse.
Arkham provided data in January 2023 that identified “an alleged nexus of money laundering” from Bitzlato through intermediate wallets of Binance. Over the course of several years, it was found that the intermediary wallet deposited $15 million worth of crypto onto Binance's platform. That same month, Arkham released a report which revealed that Alameda liquidators lost $72,000 worth of crypto while trying to recover funds as part of FTX's bankruptcy.
The ARKM token was launched in July 2023 as the native token of the Arkham Intel Exchange. Originally released on Binance Launchpad, ARKM launched at a price of $0.05 and reached a price of $3.98 in March 2024. As of July 2024, the market capitalization for the ARKM token is US$326 million.
In June 2024, Arkham offered a $150,000 bounty on its Intel Exchange, which was solved and paid out, to anyone who discovered who was behind the DJT crypto asset. During an X Spaces event earlier that month, Martin Shkreli claimed that he and Barron Trump were behind the Trump-branded cryptocurrency.
In July 2024, Arkham tracked the German Government's movements of Bitcoin to centralized crypto exchanges like Coinbase, Kraken, and Bitstamp as it completely sold off its nearly 50,000 BTC holdings worth more than $2 billion at the time. German authorities had seized the Bitcoin from the operators of Movie2k.to and transferred it to a crypto wallet Arkham identified as being owned by Germany’s Federal Criminal Police Office.
Arkham tagged the cryptocurrency wallets of Mt. Gox, the defunct cryptocurrency exchange, which as of July 2024 held nearly 140,000 bitcoin. Arkham users were then able to track the onchain movements of those holdings, worth billions of dollars, as the trustee began making repayments to creditors.
Other notable blockchain identifications include Robinhood Markets and Justin Sun.
References
Cryptography
Cryptocurrency projects
Information technology companies of the United States
English-language websites
Organizations based in Dallas | Arkham Intelligence | [
"Mathematics",
"Engineering"
] | 825 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
76,753,765 | https://en.wikipedia.org/wiki/Chromatic%20symmetric%20function | The chromatic symmetric function is a symmetric function invariant of graphs studied in algebraic graph theory, a branch of mathematics. It is the weight generating function for proper graph colorings, and was originally introduced by Richard Stanley as a generalization of the chromatic polynomial of a graph.
Definition
For a finite graph G with vertex set V, a vertex coloring is a function κ : V → C, where C is a set of colors. A vertex coloring is called proper if all adjacent vertices are assigned distinct colors (i.e., κ(u) ≠ κ(v) whenever uv is an edge). The chromatic symmetric function, denoted X_G, is defined to be the weight generating function of proper vertex colorings of G:
X_G = Σ_{κ proper} Π_{v ∈ V} x_{κ(v)}
Examples
For a partition λ, let m_λ be the monomial symmetric polynomial associated to λ.
Example 1: complete graphs
Consider the complete graph K_n on n vertices:
There are n! ways to color K_n with exactly n chosen colors, yielding the term n! m_{(1,1,…,1)}.
Since every pair of vertices in K_n is adjacent, it can be properly colored with no fewer than n colors.
Thus, X_{K_n} = n! m_{(1,1,…,1)} = n! e_n, where e_n is the elementary symmetric function.
Example 2: a path graph
Consider the path graph P_3 on three vertices (a path of length 2):
There are 3! = 6 ways to color P_3 with exactly 3 distinct colors, yielding the term 6 m_{(1,1,1)}.
For each pair of colors {i, j}, there are 2 proper colorings of P_3 using both colors, yielding the terms x_i^2 x_j and x_i x_j^2.
Altogether, the chromatic symmetric function of P_3 is then given by:
X_{P_3} = m_{(2,1)} + 6 m_{(1,1,1)}
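This expansion can be checked by brute force on small graphs. The sketch below (hypothetical helper name, plain Python) enumerates proper colorings of the three-vertex path with a bounded palette and tallies, for each coloring, the partition formed by the color multiplicities, which identifies the monomial-symmetric terms of the truncated chromatic symmetric function.

```python
from collections import Counter
from itertools import product

def chromatic_symmetric_terms(vertices, edges, num_colors):
    """Tally proper colorings of G by the exponent partition of their
    monomial prod_v x_{kappa(v)}, using only num_colors variables.

    terms[lam] equals (coefficient of m_lam in X_G) times the number
    of monomials of shape lam in num_colors variables.
    """
    terms = Counter()
    for coloring in product(range(num_colors), repeat=len(vertices)):
        kappa = dict(zip(vertices, coloring))
        if all(kappa[u] != kappa[v] for u, v in edges):  # proper coloring
            shape = tuple(sorted(Counter(coloring).values(), reverse=True))
            terms[shape] += 1
    return terms

# The path a - b - c with a 3-color palette.
terms = chromatic_symmetric_terms(["a", "b", "c"], [("a", "b"), ("b", "c")], 3)
```

With 3 variables, m_{(2,1)} has six monomials (coefficient 1 each) and m_{(1,1,1)} has one monomial (coefficient 6), so both tallies come out to 6, consistent with X_{P_3} = m_{(2,1)} + 6 m_{(1,1,1)}.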
Properties
Let χ_G be the chromatic polynomial of G, so that χ_G(k) is equal to the number of proper vertex colorings of G using at most k distinct colors. The values of χ_G can then be computed by specializing the chromatic symmetric function, setting the first k variables equal to 1 and the remaining variables equal to 0:
χ_G(k) = X_G(1, 1, …, 1, 0, 0, …)   (k ones)
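As a sanity check on this specialization: setting x_1 = … = x_k = 1 turns each proper-coloring monomial into 1, so X_G(1^k) simply counts proper colorings with at most k colors. A minimal brute-force sketch (hypothetical function name):

```python
from itertools import product

def chi(vertices, edges, k):
    """chi_G(k) = X_G(1,...,1,0,0,...) with k ones: the number of
    proper colorings of G using at most k colors."""
    index = {v: i for i, v in enumerate(vertices)}
    return sum(
        1
        for coloring in product(range(k), repeat=len(vertices))
        if all(coloring[index[u]] != coloring[index[v]] for u, v in edges)
    )
```

For the three-vertex path this reproduces the familiar chromatic polynomial k(k − 1)².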
If G = G_1 ⊔ G_2 is the disjoint union of two graphs, then the chromatic symmetric function for G can be written as a product of the corresponding functions for G_1 and G_2: X_G = X_{G_1} · X_{G_2}.
A stable partition of G is defined to be a set partition π of the vertices such that each block of π is an independent set in G. The type λ(π) of a stable partition is the partition whose parts are the sizes of the blocks of π. For a partition λ, let a_λ be the number of stable partitions of G with λ(π) = λ. Then X_G expands into the augmented monomial symmetric functions m̃_λ, with coefficients given by the number of stable partitions of G:
X_G = Σ_λ a_λ m̃_λ
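The stable-partition expansion can likewise be verified directly on small graphs. The sketch below (hypothetical names) enumerates all set partitions of the vertex set, keeps those whose blocks are independent sets, and tallies them by type; for the three-vertex path there is exactly one stable partition of type (2,1) (the two endpoints in one block) and one of type (1,1,1).

```python
from collections import Counter

def set_partitions(items):
    """Yield every set partition of a list, as a list of blocks."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in set_partitions(rest):
        yield [[first]] + part                      # first in its own block
        for i in range(len(part)):                  # or joined to an existing block
            yield part[:i] + [[first] + part[i]] + part[i + 1:]

def stable_partition_counts(vertices, edges):
    """a_lam: the number of stable partitions of G of each type lam."""
    edge_set = {frozenset(e) for e in edges}
    counts = Counter()
    for part in set_partitions(list(vertices)):
        # A partition is stable iff no block contains both endpoints of an edge.
        if all(frozenset((u, v)) not in edge_set
               for block in part for u in block for v in block if u != v):
            counts[tuple(sorted(map(len, part), reverse=True))] += 1
    return counts
```

For the 4-cycle, the stable partitions are the all-singleton partition, the two partitions pairing one set of opposite vertices, and the partition pairing both sets of opposite vertices.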
Let p_λ be the power-sum symmetric function associated to a partition λ. For S ⊆ E, let λ(S) be the partition whose parts are the vertex sizes of the connected components of the edge-induced subgraph (V, S) of G. The chromatic symmetric function can be expanded in the power-sum symmetric functions via the following formula:
X_G = Σ_{S ⊆ E} (−1)^{|S|} p_{λ(S)}
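Specializing this power-sum expansion at x_1 = … = x_k = 1 gives p_λ(1^k) = k^{ℓ(λ)} = k^{c(S)}, where c(S) is the number of connected components of (V, S), recovering Whitney's subset expansion of the chromatic polynomial, χ_G(k) = Σ_{S ⊆ E} (−1)^{|S|} k^{c(S)}. A brute-force sketch of that check (hypothetical names):

```python
from itertools import combinations, product

def num_components(vertices, edge_subset):
    """Count connected components of (V, S) via union-find."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edge_subset:
        parent[find(u)] = find(v)
    return len({find(v) for v in vertices})

def whitney_sum(vertices, edges, k):
    """Sum over S of (-1)^|S| k^{c(V,S)}: the power-sum expansion at 1^k."""
    return sum(
        (-1) ** r * k ** num_components(vertices, S)
        for r in range(len(edges) + 1)
        for S in combinations(edges, r)
    )

def count_proper_colorings(vertices, edges, k):
    """Direct count of proper colorings with at most k colors."""
    index = {v: i for i, v in enumerate(vertices)}
    return sum(
        1
        for c in product(range(k), repeat=len(vertices))
        if all(c[index[u]] != c[index[v]] for u, v in edges)
    )
```

The two functions should agree for every k; for the triangle, for instance, both give k(k − 1)(k − 2).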
Let X_G = Σ_λ c_λ e_λ be the expansion of X_G in the basis of elementary symmetric functions e_λ. Let sink(G, j) be the number of acyclic orientations of the graph G which contain exactly j sinks. Then we have the following formula for the number of sinks:
sink(G, j) = Σ_{λ : ℓ(λ) = j} c_λ
Open problems
There are a number of outstanding questions regarding the chromatic symmetric function which have received substantial attention in the literature surrounding them.
(3+1)-free conjecture
For a partition λ, let e_λ be the elementary symmetric function associated to λ.
A partially ordered set P is called (3+1)-free if it does not contain a subposet isomorphic to the direct sum of the 3-element chain and the 1-element chain. The incomparability graph inc(P) of a poset P is the graph with vertices given by the elements of P which includes an edge between two vertices if and only if their corresponding elements in P are incomparable.
Conjecture (Stanley–Stembridge). Let G be the incomparability graph of a (3+1)-free poset; then X_G is e-positive.
A weaker positivity result is known for the case of expansions into the basis of Schur functions.
Theorem (Gasharov). Let G be the incomparability graph of a (3+1)-free poset; then X_G is s-positive.
In the proof of the theorem above, there is a combinatorial formula for the coefficients of the Schur expansion given in terms of P-tableaux, which are a generalization of semistandard Young tableaux labelled with the elements of P instead of positive integers.
Generalizations
There are a number of generalizations of the chromatic symmetric function:
There is a categorification of the invariant into a homology theory which is called chromatic symmetric homology. This homology theory is known to be a stronger invariant than the chromatic symmetric function alone. The chromatic symmetric function can also be defined for vertex-weighted graphs, where it satisfies a deletion-contraction property analogous to that of the chromatic polynomial. If the theory of chromatic symmetric homology is generalized to vertex-weighted graphs as well, this deletion-contraction property lifts to a long exact sequence of the corresponding homology theory.
There is also a quasisymmetric refinement of the chromatic symmetric function which can be used to refine the formulae expressing X_G in terms of Gessel's basis of fundamental quasisymmetric functions and the expansion in the basis of Schur functions. Fixing a total order on the set of vertices, the ascent statistic of a proper coloring κ is defined to be asc(κ) = |{uv ∈ E : u < v and κ(u) < κ(v)}|. The chromatic quasisymmetric function of a graph G is then defined to be:
X_G(x; t) = Σ_{κ proper} t^{asc(κ)} Π_{v ∈ V} x_{κ(v)}
See also
Chromatic polynomial
Symmetric function
References
Further reading
Functions and mappings | Chromatic symmetric function | [
"Mathematics"
] | 1,002 | [
"Mathematical analysis",
"Mathematical objects",
"Functions and mappings",
"Mathematical relations"
] |
76,755,298 | https://en.wikipedia.org/wiki/List%20of%20inventoried%20conifers%20in%20the%20United%20States | Silvics of North America (1991), a forest inventory compiled and published by the United States Forest Service, includes many conifers. It superseded Silvics of Forest Trees of the United States (1965), which was the first extensive American tree inventory. A variety of statistics on all of these trees are maintained by the National Plant Data Team of the US Department of Agriculture.
All of the conifers in the inventory except the larches and some bald cypresses are evergreens. Apart from two species in the yew family, all are in either the pine family (including firs, larches, spruces, pines, Douglas firs and hemlocks) or the cypress family (including junipers, redwoods, giant sequoias, bald cypresses and four genera of cedars).
Softwood from North American conifers has a variety of commercial uses. The sturdier timber is milled for plywood, wood veneer and construction framing, including structural support beams and studs. Logs can be fashioned into posts, poles and railroad ties. Less sturdy timber is often ground and processed into pulpwood, principally for papermaking. Resins from sap yield wood tar, turpentine or other terpenes. Some resins and other tree products contain dangerous toxins (not generally listed below).
Key
West of the Mississippi River: AK Alaska AR Arkansas AZ Arizona CA California CO Colorado IA Iowa ID Idaho KS Kansas LA Louisiana MN Minnesota MO Missouri MT Montana ND North Dakota NE Nebraska NM New Mexico NV Nevada OK Oklahoma OR Oregon SD South Dakota TX Texas UT Utah WA Washington WY Wyoming. (Hawaii is not associated with any conifer species in the 1991 inventory.)
These are often divided up into:
The continental Western states: AK AZ CA CO ID MT NM NV OR UT WA WY
The South Central states: AR LA OK TX
The Midwestern states west of the Mississippi (including MN, which is mostly west or north of the river), also called the western Midwest: IA KS MN MO ND NE SD
East of the Mississippi: AL Alabama CT Connecticut DE Delaware FL Florida GA Georgia IL Illinois IN Indiana KY Kentucky MA Massachusetts MD Maryland ME Maine MI Michigan MS Mississippi NC North Carolina NH New Hampshire NJ New Jersey NY New York OH Ohio PA Pennsylvania RI Rhode Island TN Tennessee VA Virginia VT Vermont WI Wisconsin WV West Virginia
These are often divided up into:
New England: CT MA ME NH RI VT
The Mid-Atlantic: DE MD NJ NY PA VA WV
The Southeast: AL FL GA KY MS NC SC TN
The Midwestern states east of the Mississippi, also called the eastern Midwest: IL IN MI OH WI
Conifers
See also
List of gymnosperm families
List of inventoried conifers in Canada
List of inventoried hardwoods in the United States
Notes
Citations
References
Conifers
Forests of the United States
Inventoried conifers in the United States
Taxonomic lists (species) | List of inventoried conifers in the United States | [
"Biology"
] | 615 | [
"Lists of biota",
"Lists of plants",
"Plants"
] |
76,755,445 | https://en.wikipedia.org/wiki/1973%20Concorde%20eclipse%20flight | On 30 June 1973, the supersonic jet Concorde 001 intercepted the path of a total solar eclipse and followed the path of totality as it crossed Africa. This feat allowed the passengers to experience a total solar eclipse for 74 minutes, the longest-ever total eclipse observation. Five experiments were carried out during the flight, but they have had limited scientific impact.
Sequence of events
Preparation and lead up
In May 1972, Pierre Léna, an astronomer with the Paris Observatory, met with French Concorde test pilot André Turcat over lunch at a restaurant at Toulouse Airport to propose his idea to view the 1973 eclipse from an aircraft. Léna describes this meeting in his book about the project, Concorde 001 et l’ombre de la Lune (2015), while Turcat describes it in Un mythe éclipsé in Bulletin de l’Académie des sciences, agriculture, arts et belles lettres d’Aix-en-Provence (2013). British astrophysicist John Beckman had previously tried to obtain permission to use the 002 Concorde prototype to conduct a similar experiment, but was turned down.
In autumn 1972, Léna was told that he, Turcat and their teams could begin work, but that no firm decision would be made about the flight before February 1973. On 2 February, it was announced that the flight would proceed. The scientists were able to carry out a test flight with their equipment on 17 May 1973, in their maiden supersonic flight. The final 2-hour-and-36-minute rehearsal flight took place on 28 June.
30 June 1973
At 10:08 GMT on 30 June 1973, Concorde 001 departed Las Palmas, Gran Canaria, piloted by André Turcat and Jean Dabo. Aboard the flight were Turcat and Dabo; flight mechanic Michel Rétif; radio navigator Hubert Guyonnet; Henri Perrier; and astronomers Léna, Beckman, Donald Hall, Donald Liebenberg, Alain Soufflot, Paul Wraight, and Serge Koutchmy.
The plane intercepted the path of totality over Mauritania within one second of the planned rendezvous and flew at an altitude of 58,000 feet at Mach 2. Mauritania closed its airspace to commercial air traffic to ensure the success of the Concorde's flight. The aircraft flew in the lunar shadow over the Sahara including Mali, Nigeria and Niger, before landing in Fort-Lamy (present-day N'Djamena), in Chad.
On the ground on Earth, the longest possible viewing of totality of this eclipse from a fixed location was 7 minutes and 4 seconds. The Concorde experienced 74 minutes of totality with an extended second contact of 7 minutes and extended third contact of 12 minutes.
Aircraft
The original Concorde prototype 001 made its first test flight in 1969 from Toulouse Airport. The specific modified version of the aircraft used for this experiment was the Concorde 001 registered as F-WTSS. The aircraft has four twin-spool Olympus 593 engines and two onboard inertial guidance systems. Four specially-made portholes were installed in the roof of the aircraft's fuselage to facilitate viewing of the Sun. Infrared and optical cameras were installed in portholes in the plane's roof to capture the Sun's corona with less atmospheric interference than there would be from the ground.
F-WTSS is now on display as an exhibit at the Musée de l’air et de l’espace in France along with Air France Concorde 213, registered as F-BTSD.
Scientific observations
Five experiments were carried out during the 1973 Concorde 001 flight. Léna and his team (Université Paris) focused their efforts on studying the F-corona (the outer part of the Sun's corona, made up of dust particles). Wraight (University of Aberdeen) measured the effects of the eclipse on oxygen atoms in the Earth's atmosphere through a side-porthole. Liebenberg (University of California, Los Alamos Scientific Laboratories) measured pulsations in light intensity, while Beckman (Queen Mary College) observed the far infrared emissions from the chromosphere.
Legacy
Though this event garnered wide and lasting media attention, solar researchers generally agree that the Concorde's flight has had limited scientific impact. Kevin Reardon of the National Solar Observatory said of the flight, "Strangely no significant results were ever published from the effort. [...] The overall science output was not as notable as the flight itself." Léna himself has admitted, "The five experiments all succeeded, but none of them revolutionized our understanding of the corona" and that "[the experiments] all played their role in the normal progression of scientific knowledge, but there were no extraordinary results."
On 11 August 1999, three Concorde aircraft, one from France and two from the United Kingdom, carried out a similar feat carrying tourists instead of scientists. Passengers paid $2,400, but experienced only four or five minutes of totality, which was difficult to see because of the aircraft's small windows and the location of the Sun. A similar flight was planned for the 21 June 2001 solar eclipse, but was cancelled after the 2000 crash of Air France Flight 4590. Airborne eclipse chasing has been successfully attempted on other, non-supersonic aircraft, including a LATAM Airlines Boeing 787-9 Dreamliner (E-Flight 2019-MAX) and a 2024 Gulfstream V jet.
The Concorde's 74 minutes of totality remains the longest-ever total eclipse observation.
Notes
References
Aviation occurrences
1973 in science
20th-century astronomical events
Solar eclipses
June 1973 events in Africa | 1973 Concorde eclipse flight | [
"Astronomy"
] | 1,148 | [
"Astronomical events",
"20th-century astronomical events"
] |
76,756,314 | https://en.wikipedia.org/wiki/IC%202628 | IC 2628 is a type SBa barred spiral galaxy with a ring, located in the constellation Leo. It is located 600 million light-years from the Solar System and has an approximate diameter of 135,000 light-years. IC 2628 was discovered on March 27, 1906, by Max Wolf and is classified as a ring galaxy due to its peculiar appearance. The galaxy has a surface brightness of magnitude 23.8 and is located at right ascension 11:11:37.8 and declination +12:07:21.
See also
List of ring galaxies
PGC 1000714
Hoag's Object
NGC 6028
References
2628
Barred spiral galaxies
Ring galaxies
Leo (constellation)
034038
2MASS objects
Astronomical objects discovered in 1906
Discoveries by Max Wolf
SDSS objects
034038 | IC 2628 | [
"Astronomy"
] | 165 | [
"Leo (constellation)",
"Constellations"
] |
76,756,981 | https://en.wikipedia.org/wiki/IC%20535 | IC 535, also known as PGC 26524 and PGC 1128295, is a type E elliptical galaxy with a ring located in the constellation Hydra. It is located 740 million light-years away from the Solar System and has an estimated diameter of 85,000 light-years. IC 535 was discovered on March 23, 1893, by Stephane Javelle. It has a surface brightness of 23.7 mag/arcsec² and is moving at a radial velocity of 16,049 kilometers per second. It is located at right ascension 9h 22.2m and declination −01° 03′.
References
0535
Elliptical galaxies
026524
Hydra (constellation)
2MASS objects
SDSS objects
Astronomical objects discovered in 1893 | IC 535 | [
"Astronomy"
] | 157 | [
"Hydra (constellation)",
"Constellations"
] |
76,757,460 | https://en.wikipedia.org/wiki/IC%204537 | IC 4537 is a type S0-a lenticular galaxy located in the constellation Serpens. It is located 736 million light-years from the Solar System and was discovered by astronomer Edward Emerson Barnard, although the year of discovery is unknown. IC 4537 has a surface brightness of magnitude 23.9, a right ascension of 15h 17.5m and a declination of +02° 02′. On the sky, IC 4537 appears only a short angular distance from the globular cluster Messier 5.
References
4537
Lenticular galaxies
Serpens
054583
Discoveries by Edward Emerson Barnard
SDSS objects
2MASS objects
054583 | IC 4537 | [
"Astronomy"
] | 142 | [
"Constellations",
"Serpens"
] |
76,758,003 | https://en.wikipedia.org/wiki/Azerbaijani%20calendar%20beliefs | Azerbaijani calendar beliefs are common beliefs about the naming of different times (cosmic periods, years, months, etc.) in Azerbaijani culture.
The formation of the Azerbaijani folk calendar is based on the attitude to nature, the movement of celestial bodies, and agricultural traditions. The transition from winter to spring occurs when "the world sleeps" and is celebrated with the Novruz holiday. Although the Nowruz and Khidir Nabi holidays are both celebrated in different regions of Azerbaijan, where one of the holidays is celebrated elaborately, the other tends to be observed only modestly. Chilla night is celebrated on the occasion of the beginning of winter, and the Sadda holiday is celebrated on the occasion of the transition from Big Chilla to Little Chilla.
In the dialects of the Azerbaijani language, weekdays are referred to as salt, grief, honey, milk days, etc. On the day of ancestors, known as the Name Day, the deceased are remembered, and as a remnant of Shamanism, porridge is consumed.
Cosmic cycles
Ivar Lassi, who studied the traditions of Muharram in Azerbaijan, reported that Azerbaijanis believe in the existence of 10 cosmic cycles, each lasting 10,000 years. These cosmic cycles are divided into 12-year solar years. Azerbaijanis believed that the 5th cycle continued into the 1910s.
History of calendars in Azerbaijan
Jalali calendar
The absence of an exact date for the Khidir Nabi holiday, in contrast to the celebration of the Nowruz holiday on a specific day, is explained by the fact that Nowruz was officially incorporated into the Jalali calendar in the 11th century.
The Twelve Animal Calendar
Georgian sources show more of an Eastern (Uyghur-Chagatai) character, and in them it is possible to encounter the Twelve Animal Calendar dating back to the Mongol period. The term "Siçan ili" (year of the mouse) that is used is of Oghuz origin (the word "il" is more precisely of Azerbaijani origin). Certainly, these Oghuz elements should be attributed to local Eastern Ottoman-Azerbaijani influences. In the same way, the word "ilan" (snake) used for the year is either of Azerbaijani or Turkish origin.
According to Turkish historian Osman Turan, the calendar with twelve animals still exists in Azerbaijan. In general, the Turkish calendar played an important role in the lives of all Turkic peoples of the Caucasus and was passed on to other peoples in the neighborhood.
The commonly used names are as follows: mouse, ox-cow, tiger, rabbit, dragon (or crocodile and fish), snake, horse, sheep (or goat), monkey, chicken-rooster (or simply bird), dog, pig. Such naming of years is considered "Tarikh-i Turki".
In Azerbaijani folklore, beliefs related to the Turkic calendar are encountered. According to a widespread belief among the people, the nature of the upcoming year, whether it will be good or bad, is associated with the character and traits of the animal named after it. For example, when the year of the snake arrives, they say that the weather will be warm, drought will pass, and relations will not be normal. In the year of the rabbit, the harvest is plentiful, in the year of the dragon (crocodile), there is a lot of rain, in the year of the pig, the weather will be harsh, etc.
According to another calendar myth, fortune tellers who gathered in one place on Novruz gave names to the years, and if a year is named after an animal, people will show the character of that animal: people gnaw and destroy in the year of the mouse, are tolerant in the year of the horse, and fight in the year of the dog. Another myth tells that because people confused the years, their rulers named the years after the animals that appeared before them.
Turkish calendar
In the theoretically existing Turkish calendar (based on lunar years), the 1st month is called Aram, and the last month is called Haqqsabat. The remaining months are named by numbers. An intercalary month is added after the 2nd or 3rd year. This month is called Sivan, which means "crying".
Azerbaijan folk calendar
In the Azerbaijani folk calendar, different times (months, periods, holidays or ceremonies) are marked with special names. The formation of the folk calendar in Azerbaijan is rooted in the attitude to nature, celestial movements and agricultural traditions. In the folk calendar, terms such as the month of plowing, the month of migration, the month of vay nene, the month of irrigation, and the month of harvest are used.
According to legend, in ancient times, when the year was divided into months, each month was given 32 days, except for the "Boz" month, which had 14 days. Every month gives 1 day to the Boz month so that the Boz month does not feel hurt. Some months give it days again because the Boz month is still short. However, since the Boz month takes days from other months, its days are not alike in terms of weather conditions.
Historically, in places where Nowruz holiday is celebrated with great enthusiasm in Azerbaijan, Khidir Nabi holiday has either not been celebrated or has been poorly celebrated. At the same time, where Khidir Nabi holiday is well celebrated, Nowruz holiday has not been celebrated in an important way. Khidir Nabi holiday is not celebrated in the southern region of the Republic of Azerbaijan and in the plains of the Shirvan area.
Spring
When the universe sleeps. It is a mysterious time that marks the end of winter and the beginning of spring. It occurs at the moment when the night of the last Tuesday of the year, or the night before the Nowruz holiday, turns into day. Rivers and streams, that is, running water, stop for a moment, and then start flowing again.
Nowruz (March 21). The New Year, the arrival of spring, the awakening of nature, and the beginning of summer agricultural activities.
Baca-baca day. It is the first day of Nowruz.
Chershenbe sur. It is the first Tuesday after the spring equinox. According to information provided by Adam Olearius, Iranians (meaning Turks in the given example) consider this day unlucky. They refrain from work and the markets are closed.
Garayaz (oğlaqqıran, goat-breaker). This period, from the end of April until late spring, about forty days after the beginning of summer, is called "Garayaz" in the folk calendar.
Goygavan/Goygovan. It is used in Babek, Shahbuz and Julfa. It is the period when cattle come out of winter. "Goyqavan" means "searching for sky grass, fleeing when the sky falls, and eating the sky."
Hefteseyri. In the past, the festival of flowers, called Hefteseyri, was celebrated in Shirvan. This holiday begins at Nowruz and was celebrated every Friday for 30–40 days. In the north-west of Azerbaijan, this holiday was celebrated under the name "Rose festival".
Sun rituals. If the 3rd and 4th days after Novruz are rainy, people make a doll and sing songs calling for the Sun.
Planting month. Spring is the most productive period of a farmer's life.
Rain month. This is the name of the first month of the spring season, it comes from the words Nisan and Neysan (April).
Nature day. Iranian Azerbaijanis visit nature on the thirteenth day of the new year and celebrate until the evening.
Suceddim. It is a tradition to swim in the spring. It is spread in many parts of Azerbaijan and among the Azerbaijanis of Armenia.
Green light month. In the mountains and foothills, the month of April is called this.
Garıborcu. It refers to the period until April 15. They sing songs about the change from winter to spring, called "The Change of Garı with March". In Ordubad and Shahbuz, the period until April 15, when the weather is cold, is called karnaburt.
Terchıkh period. It is the period when new summer grasses grow, and sheep and goats are sheared at night.
Kotan (cut) period. It is the period when plowing work begins. The Shum Festival is celebrated, with wishes for a prosperous year ahead.
Cut and Kotan holiday. It is held at cüt or kotan time. Lavash or tandir bread is placed around the necks of the oxen, and the animals are led around the field.
Sprout (or grass) month. It is the period when gardens are cultivated.
Rose age. They celebrated the rose water festival, everyone gave each other rose water in small perfume bottles, and had fun with various singing and ritual games.
Grass cutting period. It is held in some regions at the end of the spring season.
Summer
Harvest time. It's early summer.
Susepen holiday. Celebrated in the Ordubad region with the arrival of summer. It involves welcoming the sun and sacrificing an animal.
Cırhacır period (or dragonfly period). It is the middle month of summer; the name arises because, owing to the intensity of the scorching heat falling from the sky and the drought, dragonflies begin to chirp in the lowlands.
Abrizagan holiday. Ceremonies related to water were performed on this holiday. With various water containers in their hands and accompanied by music, everyone would gather at the riverside, near a spring, or any water source, and play, throw water on each other, and try to push each other into the water.
Goradoyan month. This is the name given to a period in the middle month of summer. During this time, grape clusters slowly start to ripen. Women collect these unripe grapes (gora), wash them, pour them into large wooden plates, pound them with a round river stone, and extract the juice, making abgora (gora water).
Gorabishiren month. It takes from Goradoyen to the middle of August. It is the hottest period of summer, when the grapes swell and turn dark.
The period of migration from the highlands. The Elat holiday is celebrated on July 26, when Azerbaijani nomads return from the mountains to the plains, in Armudlu village of Dmanisi in Georgia. At this time, tents are erected, horse races are held, and bread is prepared. In the summer season, beekeepers also go up to the highlands, because special flowers grow above, in cool places.
Guyrugdogdu (tail-born or tail-frozen). This is the name of the second period of summer. It is determined by the appearance of comet-like stars extending from the eastern horizon towards the west just before dawn, and agricultural activities are performed during this period. As early as 1905, Hasan Bey Zardabi wrote in his article "Quyruq doğdu, çillə çıxdı": "If anyone wakes up early in the morning on July 25 (August 7 according to the new calendar) and looks at the sunrise, he will see a group of stars resembling that tail. The time of quyruqdoğdu is from August 6th to 15th. From that period onwards, the weather gradually cools down."
Honey moon. The beginning of honey harvesting brings joy to beekeepers. On this occasion, they rejoice, give each other a smile, put honey on their cheeks, and wish for blessings.
Elgovan (or the days of breaking burku). It is the last month of summer. During this period mist often falls, and there is no shortage of fog and drizzle from the mountains. A cold wind blows from early in the morning.
Sonay. This is the name for the last moonlit nights of summer. On Sonay nights, people make the sound "avava" by tapping their hands against their mouths as a call for gathering. Then they gather and play late into the moonlit nights. Among the games traditionally played are Mallaharay and “Bənövşə Bəndə Düşə”.
Fig ripening period. In Absheron, the end of August and the beginning of September are called so.
Autumn
Equality of night and day (Paghtigan). In Azerbaijani folk belief, the Moon and the Sun are depicted as lovers. Their love is eternal, but they can never be united; only at the equinox can they see each other's faces, and even then they lose each other without being able to meet. In Azerbaijani villages, various games are played on this night.
Gochgarishan. Among the Elat (nomadic) population of Azerbaijan, the beginning of autumn is called Gochgarishan. During this month, sheep that have been previously selected for breeding are released into the herd, taking into account that breeding coincides with the last month of winter (boz ay). Sheep are released into the herd within the first 5–10 days of autumn. This period experiences cooler weather, heavy rainfall, and strong winds.
Shanider. It is a celebration and thanksgiving ceremony held when grapes are picked and doshab is boiled.
Harvest season. The first apple, picked before dawn on the first day of this season, is believed to grant eternal life to the person who picks it.
Mehregan holiday. It was held on the occasion of harvest. Until recent years, this holiday was celebrated as the labor victory of the year, the final holiday of the year - the harvest holiday.
Khazal month or the girovdushen moon. It's November. Passes with rain and lightning.
Pomegranate holiday. The festival features a fair and an exhibition that displays different local varieties of pomegranates as well as various pomegranate products produced by local enterprises. During the festival, music and dance performances are presented each year. The festival also includes athletic performances and various craftsmen, potters, millers, blacksmiths, artists, performances of folklore groups and paintings.
"Kovsec" ceremony was performed in the girovdushan month. A person dressed in a ridiculous costume, with a laughing face, would parade through the village, declaring to the crowd that he was the enemy of winter. People around him sprinkled water on him, threw snow at him, and played funny games and entertainments around him. He would hold a feathered crow in one hand and a fan in the other, waving it and repeating "it's hot, it's hot, I don't care".
Autumn period or leaf shedding time. During this period, a windy breeze called Khezan wind would blow, causing the yellow leaves remaining on the trees to fall, marking the beginning of the leaf-falling season.
Nakhirgovan (or oglaggiran). The wind blowing in the last month of autumn first of all drives the cattle grazing outside into the barn. After that, the cold sets in.
The time of transitioning from autumn to winter. At the end of autumn, the slopes of the mountains are covered with mist. In a moment hail pours, rain falls, and snow is mixed and blown in such a way that the eye cannot see.
Winter
Big Chelle. It is the first stage of winter, lasting 40 days. Shab-e Chelleh is the night that opens the "big Chelle" period, that is, the night between the last day of autumn and the first day of winter.
Chelle night. Countless bonfires were lit to mark the beginning of winter. Fireworks were set off everywhere, people gathered together to celebrate, played interesting games and plays, made sacrifices, played music.
The ninth day of winter. On that day, after midnight, a childless woman who eats a frosted apple will have a child.
Karagish. These are the snowy, stormy, freezing days of the Great Chelle. During this time, winter passes heavily and sadly, with snow mixed with wind howling.
Little Chelle (February 1–20). It is the second stage of winter. Generally, Little Chelle is distinguished by its severe, icy cold. That's why this period is also called the boyhood of winter, the zalo-zalo time of winter, the harsh time of winter.
Sadda holiday. It is a holiday celebrated 50 days before Nowruz, showing people's attitude towards winter and demonstrating that they are not afraid of it.
Time of Yalguzag. It is called the first 10 days of Little Chelle. It is also called "Khidir Nabi days". These ridge-cold times are associated with Khidir Nabi and it is said that "Khidir comes, winter comes; Khidir leaves, winter leaves"
Khidir Nabi holiday. It is a seasonal holiday celebrated in honor of Khidir Nabi, who is associated with the cult of greenery and water. Khidir is a protector who rides a gray horse. As Khidir is believed to be a healer, some ritual practices as regards to health issues can be seen on Hıdırellez Day. On that day, meals cooked by lamb meat are traditionally feasted. It is believed that on Khidirellez Day all kinds or species of the living, plants and trees revive in a new cycle of life, therefore the meat of the lambs grazing on the land which Khidir walks through is assumed as the source of health and happiness. In addition to these, some special meals besides lamb meat are cooked on that day.
Tuesdays of Little Chelle. These are the dates when nature starts to come alive. These three Tuesdays are called "thief's tuesday", "thief's bugh", "thief's usku". Although there is no awakening like on the Tuesdays of the Boz Month (Wind, Fire, Water, Earth), warmth goes into the ground. People believe that they will be protected from evil forces by burning the flakes and carrying them around the house 3 times.
Boz Month (or chellebeces, alachalpov, gray chelle, ala chelle, crying-smiling month, fetus month). It is the last stage of winter, lasting 30 days. During this period, due to frequent weather changes and the high number of gloomy, sunless days, it is called the gray, gloomy, and dark period in the folk calendar.
Tuesdays of the Boz month. They are the four Tuesdays before Novruz, related to the four elements (Wind, Fire, Water, Earth).
Kos-kosa game. This game humorously depicts the death of winter and the beginning of spring. Neighborhood children play this game in front of houses, collecting money and food. The face of "Kosan" (the mask) symbolizes devotion to the spirit of nature, and wearing fur inside out also has a sacred meaning.
Vasfi-hal. Various items (rings, earrings, banners, ornaments) are placed inside a wishing bowl, and after reciting a ceremonial song, one of them is secretly chosen. The fate of the item depends on the intention of its owner.
Danatma time. It is the ceremony of welcoming the sunrise on the last Tuesday. Girls fill a bowl with water, cover it with a cloth, and put jewels in the water.
When the universe sleeps. It is a mysterious time that marks the end of winter and the beginning of spring. It takes place on the last Tuesday of the year, or at the moment when night turns into the day that opens Nowruz. Streams and rivers, that is, flowing waters, stop for a moment, then start flowing again.
Other
In Iranian Azerbaijan, ceremonies and festivities are held under the names of Mountain Migration, Shepherd's Festival, Lamb Day, Shepherd's Day, Grass Migration, and Sheep Day.
Lamb day. It was performed after the first birth in the sheep herd. Shepherds used to dance and sing, and competitions were held.
Ram day. It was held ten days before moving to the highlands. The head shepherd or village elder would give advice to the youth. The sheep were decorated with various colored wool and bells were hung around their necks.
Grass migration. It was held on the occasion of the end of agricultural work and village work. During this traditional temporary migration, the elderly, children, and livestock were taken to the summer pasture.
Days of the week
In the dialects of the Azerbaijani language, the following expressions are used for the days of the week:
Salt day (I day). The first day in Goychay was named after the lexeme "prophet". In Baku, the first day is called banuma day, and in Ordubad and Shahbuz, it is called banamiya.
Day of mourning, or special day (II day). In Agbaba, Khanlar, Gakh, Gazakh, Mingachevir and Sheki dialects, it is known as "khas day" (special day).
Ad(ina) day (IV day). It is used as the fourth day in Ganja, Gazakh, Lankaran, Shaki, Shamkir and Zangilan. In Ganja, the donation given on the fourth day for a person who has died is called adnaliq (adnalikh). This day is called "ata-baba day" (forefathers' day) in some parts of Azerbaijan, as it is associated with spirits. On the adina day, special food is cooked in houses, graves are visited, and the dead are remembered. One of the remnants of shamanistic customs influenced by Islam in Azerbaijan is the ash (pilaf) that Azerbaijanis eat on Thursdays in honor of the spirits of the dead.
Adina day (V day). It is used as the fifth day in Gadabay and Tovuz.
The day after Adina (VI day). It is used as Saturday in Gadabay.
Milk day or Azat day (VII day). The name of milk day was used in Baku governorate. In Gədəbəy and Tovuz dialects, it is referred to as "xas günü" (special day).
The days of milk and salt are thought to be related to the Moon and the Sun. The expressions "clear as milk" and "pure as milk" are still used today. In naming the second day the "day of mourning" or "gam day", similarities have been observed in the languages of Eastern European, Caucasian, West Asian and Balkan Turks, as well as among some Finnic peoples of Eastern Europe.
Under the influence of Islam, weekday names of Turkic origin were replaced by names of Arabic-Persian origin or used together with them. In modern times, the poet I. Tapdıq introduced the names Gunbir (Day one), Guniki (Day two), Gunuch (Day three), Gundord (Day four), Gunbesh (Day five), Gunalti (Day six) to help children better memorize the weekdays.
Examples of the day of the week in dialects
In the Baku dialect, danna, also used in the Goychay, Mingachevir and Megri dialects, and dannari in Goychay dialects mean "morning". The expression "danna of the milk day", used for the second day, means "the morning of the milk day". In Mahmud Kashgarli's "Dictionary", tang is used in the sense of "morning, at dawn".
Days of the week in folklore
In Azerbaijani religious tales, the days of the week are separated according to whether they are useful, successful or unprofitable. Although perceptions about the days of the week are based on Islamic values, they are also associated with superstition. Friday is one of the sacred days in Islam. The connection of Friday with the creation of Prophet Adam, his entry into and exit from heaven, and the day of judgment has given it significant importance in Islamic history. In Islam, Friday is a day of communal worship. It is believed that sins are forgiven for those who listen to the sermon and perform prayers in the mosque on that day. In Azerbaijani religious tales, Friday is described as a holy day; fairies come to bathe in the lake on Fridays. Thursday and Saturday are considered lucky days, as they are the days before and after Friday. As a result, Azerbaijani tales mention that, according to the words of the ancestors, those who set out on a trip on Saturday will return early and the trip made on that day will be easier.
Visiting times
Stream. In the summer, mothers recite prayers in the form of poetry when bathing their children in these waters.
Aynali mountain. On the 13th day of summer (Nature Day), Iranian Azerbaijanis dance and celebrate on the slopes of this mountain.
Pir mountain. Before sowing or moving to summer pastures, the grave on Pir Mountain in the Kivi village of Maraga is visited by the locals and shepherds.
Shahdag. The largest mountain in the northeastern part of Azerbaijan, is considered sacred by the people of the region. In the summer months, visits to Shahdag are organized, grave sites in the region are visited, pieces of cloth are tied to trees, and sacrifices are made.
Kirkhgiz hill. It is believed that stones cry every Friday on Kirkhgiz hill near Gobu and that day is an acceptable day for visits. This water is believed to be the tears of forty girls who turned into stones.
Special times of the day and month
Times of the day
Obashdan/before the dog falls off the haystack/The sun hasn't broken. These expressions denote early morning in Azerbaijani dialects.
Twilight (early dawn) and alagaranlig (after sunset) times. These are the times when the star of Zohra (Venus) appears in the sky, and they are considered suitable for matchmaking visits and wedding ceremonies. In this regard, there are beliefs such as "A child born at sunset spends his days abroad" and "A child born at sunrise becomes a knight".
Gunerta/Midday, Afternoon. It is daytime in Azerbaijani language dialects.
Sher time/Dar time. This is the evening and signifies the end of the day.
The times of the month
New moon. Among the people there are beliefs such as "whoever sees the moon when it rises will be successful" and "whoever throws dirty water on the door where the moon shines will not receive good luck". When the new moon rises, there are customs such as saying "blessed!", reciting prayers, making wishes, and hanging a horn-shaped ornament on the door.
Ay bashi. Together with many Turkic peoples, Azerbaijanis also used this expression for the first day of the month.
3 days of the month. It is believed that a person who makes a wish and looks at the moon for 3 days will see the person he will marry in a dream.
15 days of the month. It is believed that a child born at this time will be like a wrestler.
In postage stamps
In Azerbaijan, various postage stamps have been issued related to the celebration of the Nowruz holiday and its Tuesdays, as well as the Chinese calendar.
General overview
Sources
References
Calendars
Units of time | Azerbaijani calendar beliefs | [
"Physics",
"Mathematics"
] | 5,703 | [
"Calendars",
"Physical quantities",
"Time",
"Units of time",
"Quantity",
"Spacetime",
"Units of measurement"
] |
76,758,860 | https://en.wikipedia.org/wiki/Praveen%20Sethupathy | Praveen Sethupathy is an American geneticist, science author and journalist. He is a professor of physiological genomics and chair of the Department of Biomedical Sciences at Cornell University. He currently serves as one of the board directors at The BioLogos Foundation, where he holds discussions on the relationship between science and religion.
Education
Sethupathy received his BA degree from Cornell University and a PhD in genomics from the University of Pennsylvania. He completed his post-doctoral fellowship at the National Human Genome Research Institute under the direction of Dr. Francis Collins, later Director of the National Institutes of Health, after which he moved to the University of North Carolina at Chapel Hill as an assistant professor in the Department of Genetics in 2011. He was selected by Genome Technology as one of the nation's top 25 rising young investigators in genomics in that same year. He was later recruited to Cornell University, where he became a frequent research collaborator of Nicolas Buchon.
Career
Science
Sethupathy currently leads a research lab focused on genome-scale and molecular approaches to understanding physiology and human disease. He researches microRNA and the broader genetic factors related to diabetes, Crohn's disease, fibrolamellar carcinoma (a rare type of liver cancer), short-term memory, and the gut epithelium.
Professor
Sethupathy returned to Cornell University, where he had studied, as an associate professor. He teaches courses on various scientific topics such as stem cells, cancer and animal physiology. He also holds courses on the relationship between science and religion (particularly concerning Christianity), evolutionary theory, and how to reconcile it with faith.
Journalism
Sethupathy is a science journalist and the author of more than 140 peer-reviewed publications in scientific journals such as PNAS, Cell, and Science. He has served as a reviewer for more than 50 journals and has received several awards. He is also a writer on science and religion: he has advocated for compatibility between the two in his various works and frequently publishes pieces for The BioLogos Foundation. He also argues against various pseudoscientific topics and has published articles debunking them.
Personal life
Sethupathy is a Christian. He currently serves as a board director at The BioLogos Foundation, which is an organization that promotes harmony between science and religion. He believes in evolutionary creationism (also called theistic evolution) and considers there to be no conflicts between science and religion. He has also served on the advisory board of the Dialogue on Science, Ethics, and Religion in the American Association for the Advancement of Science (AAAS) and has also spoken in the Veritas Forum.
See also
The BioLogos Foundation
Relationship between science and religion
Theistic evolution
References
American geneticists
Theistic evolutionists
Writers about religion and science
Members of The BioLogos Foundation
Living people
Cornell University alumni
Cornell University faculty
Year of birth missing (living people) | Praveen Sethupathy | [
"Biology"
] | 592 | [
"Non-Darwinian evolution",
"Theistic evolutionists",
"Biology theories"
] |
76,759,148 | https://en.wikipedia.org/wiki/WinShock | WinShock is a computer exploit that takes advantage of a vulnerability in the Windows secure channel (SChannel) module and allows for remote code execution. The exploit was discovered in May 2014 by IBM, which also helped patch it. The vulnerability was present and undetected in Windows software for 19 years, affecting every Windows version from Windows 95 to Windows 8.1.
Details
WinShock exploits a vulnerability in the Windows secure channel (SChannel) security module's handling of SSL that allows for remote code execution. By executing remote code, attackers could completely compromise a computer and gain full control over it. The vulnerability was given a CVSS 2.0 base score of 10.0, the highest score possible.
The attack exploits a vulnerable function in the SChannel module that handles SSL Certificates. A number of Windows applications such as Microsoft Internet Information Services use the SChannel Security Service Provider to manage these certificates and are vulnerable to the attack.
It was later discovered in November 2014 that the attack could be executed even if the IIS server was set to ignore SSL certificates, as the vulnerable function was still run regardless. Microsoft Office and the Remote Desktop software in Windows could also be exploited in the same way, even though the latter did not support SSL encryption at the time.
While the attack is covered by a single CVE and is considered to be a single vulnerability, it is possible to execute a number of different and unique attacks by exploiting it, including buffer overflow attacks as well as certificate verification bypasses.
Responsibility
The exploit was discovered and disclosed privately to Microsoft in May 2014 by researchers in IBM's X-Force team who also helped to fix the issue. It was later disclosed publicly on 11 November 2014, with a proof-of-concept released not long after.
See also
Heartbleed, a similar vulnerability.
References
External links
Microsoft Security Bulletin Entry
National Vulnerability Database Entry
CVE-2014-6321
Computer security exploits | WinShock | [
"Technology"
] | 405 | [
"Computer security exploits"
] |
76,761,078 | https://en.wikipedia.org/wiki/Epicoccum%20thailandicum | Epicoccum thailandicum is a species of fungus first described in 2017 from Thailand, which gave the species its epithet; it was found living saprobically on grass litter. It is one of 89 species in the genus Epicoccum. E. thailandicum is morphologically similar to Epicoccum sorghinum, but the conidial dimensions of the two differ, and E. thailandicum falls in a different clade, making it genetically distinct from that species as well.
E. thailandicum is a pathogenic fungus that infects Amomum villosum leaves and, as a laboratory contaminant, can cause the accumulation of pharmacodynamic compounds.
References
Pleosporales
Fungi described in 2017
Fungus species | Epicoccum thailandicum | [
"Biology"
] | 150 | [
"Fungi",
"Fungus species"
] |
76,762,331 | https://en.wikipedia.org/wiki/IC%202431 | IC 2431 is a group of interacting galaxies in the constellation of Cancer. The group is located 684 million light-years from the Solar System and was discovered on February 24, 1896, by Stephane Javelle.
Characteristics
There are at least three galaxies involved in the gravitational interaction: IC 2431 NED01 (known as PGC 200245), IC 2431 NED02 (known as NSA 135647) and IC 2431 NED03 (known as PGC 200246). Additionally, a fourth galaxy (PGC 200247) might also be involved. As the galaxies draw closer to each other, tidal forces are tearing them apart. This is common in the universe, and all large galaxies, including the Milky Way, owe their size to violent mergers.
The galaxies are undergoing a tumultuous mixture of star formation and tidal distortion caused by the interaction. In the center, a thick cloud of dust obscures the view, though light from a background galaxy pierces its outer extremities. The galaxies also display thermally dominated X-ray emission much in excess of expectations based on their star formation rate.
IC 2431 falls under the category of Markarian galaxies as Mrk 1224, meaning its core shines brightly in ultraviolet rays. Carbon monoxide might be present in the regions of the interacting galaxies, which can be determined from the fraction of interstellar gas and the total mass in molecular form. IC 2431 may also contain an active nucleus that produces ionized-gas outflows.
References
Cancer (constellation)
Interacting galaxies
Starburst galaxies
Luminous infrared galaxies
Astronomical objects discovered in 1896
2431
+03-23-030
04756
25476
025476
1224
09018+1447 | IC 2431 | [
"Astronomy"
] | 379 | [
"Cancer (constellation)",
"Constellations"
] |
76,762,511 | https://en.wikipedia.org/wiki/Dysprosium%20stannides | Under standard conditions, the elements dysprosium and tin combine to form a number of intermetallic compounds, the dysprosium stannides. Dysprosium stannides with simple empirical formulas include Dy5Sn3 and DySn2, but four other intermetallics have intermediate composition. None is believed to survive temperatures higher than , whereat Dy5Sn3 decomposes. Although dysprosium is a lanthanoid, its f orbitals likely participate in the metallic bonding: mixing dysprosium and tin releases an enthalpy quite different from mixing samarium and tin, with gadolinium and tin intermediate.
DySn2 adopts the zirconium disilicide crystal structure, and undergoes a Néel transition around . The magnetic patterning below the Néel point has periods incommensurable with the atomic unit cell, leading to a sinusoidal modulation. Theoretically, DySn2 should transition at very low temperatures to a different magnetic pattern with commensurable spatial period, but even at the incommensurable pattern survives.
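The sinusoidal modulation described above can be sketched, as a generic illustration rather than a model fitted to DySn2 specifically, as a cosine wave on the magnetic moments:

```latex
\mathbf{m}(\mathbf{r}_n) = \mathbf{m}_0 \cos\left(\mathbf{q}\cdot\mathbf{r}_n + \varphi\right),
```

where the r_n are the magnetic-ion sites and q is the propagation vector. The structure is incommensurate when the components of q are irrational fractions of the reciprocal-lattice vectors, so the moment pattern never repeats exactly over any whole number of unit cells; a lock-in transition to a commensurate pattern would replace such a q with a nearby rational value.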
References
Dysprosium compounds
Tin compounds
Intermetallics | Dysprosium stannides | [
"Physics",
"Chemistry",
"Materials_science"
] | 253 | [
"Inorganic compounds",
"Metallurgy",
"Inorganic compound stubs",
"Alloys",
"Intermetallics",
"Condensed matter physics"
] |
76,762,569 | https://en.wikipedia.org/wiki/ESO%2069-6 | ESO 69-6, collectively known as AM 1633-682, is a pair of interacting galaxies located 654 million light-years away in the constellation of Triangulum Australe. The pair consists of two galaxies: ESO 069-IG 006N, known as IRAS 16330-6820, and ESO 069-IG 006S, known as LEDA 285730.
Characteristics
Both galaxies are in the process of merging with each other, and together they resemble musical notes on a stave. Long tidal tails have formed, in which stars and gas are stripped and torn away from the galaxies' outer regions; these tails are clear signs of their interaction. Numerical simulations that reproduce the interaction-induced inflow of gas show that it can trigger strong nuclear starbursts in both galaxies.
The gravitational interactions of ESO 69-6 show that the surrounding intergalactic medium can be enriched with metals very efficiently, up to distances of several hundred kpc. This can be explained in terms of indirect or direct processes that kinetically spread baryonic matter. The two galaxies will possibly merge in the future to form a much bigger galaxy, in this case an elliptical galaxy.
References
069-006
Interacting galaxies
058663
2MASS objects
Triangulum Australe
IRAS catalogue objects | ESO 69-6 | [
"Astronomy"
] | 282 | [
"Triangulum Australe",
"Constellations"
] |
76,762,609 | https://en.wikipedia.org/wiki/IEEE%20Symposium%20on%20Security%20and%20Privacy | The IEEE Symposium on Security and Privacy (IEEE S&P, IEEE SSP), also known as the Oakland Conference, is an annual conference focusing on topics related to computer security and privacy. The conference was founded in 1980 by Stan Ames and George Davida and is considered to be among the top conferences in the field. The conference has a single track, meaning that all presentations and sessions are held sequentially in one venue. The conference also follows a double-blind review process, where both the authors' and reviewers' identities are concealed from each other to ensure impartiality and fairness during peer review process.
The conference started as a small workshop where researchers exchanged ideas on computer security and privacy, with an early emphasis on theoretical research. During these initial years, there was a divide between cryptographers and system security researchers, with cryptographers often leaving sessions focused on systems security. This issue was eventually addressed by combining cryptography and system security discussions in the same sessions. In 2011, the conference moved to San Francisco due to venue size concerns.
The conference has a low acceptance rate, in part because it has only a single track. The review process evaluates papers on a variety of criteria, with a focus on novelty. In 2022, researchers interviewed reviewers from top security conferences like IEEE S&P and found that the conferences' review process was exploitable due to inconsistent reviewing standards across reviewers. The researchers recommended mentoring new reviewers with a focus on review quality to mitigate this issue.
In 2021, researchers from the University of Minnesota submitted a paper to the conference in which they tried to introduce bugs into the Linux kernel, a widely used operating system component, without Institutional Review Board (IRB) approval. The paper was accepted and scheduled to be published; however, after criticism from the Linux kernel community, the authors retracted the paper and issued a public apology. In response to this incident, IEEE S&P committed to adding an ethics review step to its paper review process and improving its documentation surrounding ethics declarations in research papers.
History
The conference was initially conceived by researchers Stan Ames and George Davida in 1980 as a small workshop for discussing computer security and privacy. This workshop gradually evolved into a larger gathering within the field. Held initially at Claremont Resort, the first few iterations of the event witnessed a division between cryptographers and systems security researchers. Discussions during these early iterations predominantly focused on theoretical research, neglecting practical implementation considerations. This division persisted, to the extent that cryptographers would often leave sessions focused on systems security topics. In response, subsequent iterations of the conference integrated panels that encompassed both cryptography and systems security discussions within the same sessions. Over time, the conference's attendance grew, leading to a relocation to San Francisco in 2011 due to venue capacity limitations.
Structure
IEEE Symposium on Security and Privacy considers papers from a wide range of topics related to computer security and privacy. Every year, a list of topics of interest is published by the program chairs of the conference, which changes based on the trends in the field. In past meetings, IEEE Symposium on Security and Privacy has considered papers on topics like web security, online abuse, blockchain security, hardware security, malware analysis and artificial intelligence. The conference follows a single-track model for its proceedings, meaning only one session takes place at any given time. This approach deviates from the multi-track format commonly used in other security and privacy conferences, where multiple sessions on different topics run concurrently. Papers submitted to the conference are reviewed using a double-blind process to ensure fairness. However, this model constrains the conference in the number of papers it can accept, resulting in a low acceptance rate often in the single digits, unlike conferences which may have rates in the range of 15 to 20 percent. In 2023, IEEE Symposium on Security and Privacy introduced a Research Ethics Committee that would screen papers submitted to the conference and flag instances of potential ethical violations in the submitted papers.
In 2022, a study conducted by Ananta Soneji et al. showed that the review processes of top security conferences, including the IEEE Symposium on Security and Privacy, were exploitable. The researchers interviewed 21 reviewers about the criteria they used to judge papers during the review process. Among these reviewers, 19 identified novelty (whether the paper advanced the research problem or the state of the art) as their primary criterion. Nine reviewers also emphasized the importance of technical soundness in the implementation, while seven mentioned the need for a self-contained and complete evaluation, ensuring all identified areas were thoroughly explored. Additionally, six reviewers highlighted the importance of clear and effective writing in their assessments. Based on these interviews, the researchers identified a lack of objective criteria for paper evaluation and noted a degree of randomness among reviews provided by conference reviewers as the major weaknesses of the peer review process used by the conferences. To remediate this, the researchers recommended mentoring new reviewers with a focus on enhancing review quality rather than other productivity metrics. They acknowledged an initiative by IEEE S&P allowing PhD students and postdoctoral researchers to shadow reviewers on the program committee but also pointed out findings from a 2017 report suggesting that these students tended to be more critical in their assessments than experienced reviewers, since they were not graded on review quality.
Controversy
In 2021, researchers from the University of Minnesota submitted a paper titled "On the Feasibility of Stealthily Introducing Vulnerabilities in Open-Source Software via Hypocrite Commits" to the 42nd iteration of the conference. They aimed to highlight vulnerabilities in the review process of Linux kernel patches, and the paper was accepted for presentation in 2021. The Linux kernel is a widely used open-source operating system component that forms the core of the Linux operating system, a popular choice for servers and for consumer-oriented devices like the Steam Deck, Android and ChromeOS. Their method involved writing patches for existing trivial bugs in the Linux kernel in ways that intentionally introduced security bugs into the software. Four patches were submitted by the researchers under pseudonyms, three of which were rejected by their respective code reviewers, who correctly identified the buggy code. The fourth patch was merged; however, a subsequent investigation found that the researchers had misunderstood the way the code worked and had submitted a valid fix. This attempt at introducing bugs was made without Institutional Review Board (IRB) approval. Despite undergoing review by the conference, this breach of ethical responsibilities was not detected during the paper's review process. The incident sparked criticism from the Linux community and the broader cybersecurity community. Greg Kroah-Hartman, one of the lead maintainers of the kernel, banned both the researchers and the university from making further contributions to the Linux project, ultimately leading the authors and the university to retract the paper and issue an apology to the community of Linux kernel developers. In response to this incident, IEEE S&P committed to adding an ethics review step to its paper review process and improving its documentation surrounding ethics declarations in research papers.
References
Security and Privacy, IEEE Symposium on
Computer science conferences | IEEE Symposium on Security and Privacy | [
"Technology"
] | 1,433 | [
"Computer science",
"Computer science conferences"
] |
76,762,780 | https://en.wikipedia.org/wiki/Cosmic%20ray%20astronomy | Cosmic ray astronomy is a branch of observational astronomy where scientists attempt to identify and study the potential sources of extremely high-energy (ranging from 1 MeV to more than 1 EeV) charged particles called cosmic rays coming from outer space. These particles, which include protons (nucleus of hydrogen), electrons, positrons and atomic nuclei (mostly of helium, but potentially of all chemical elements), travel through space at nearly the speed of light (such as the ultra-high-energy "Oh-My-God particle") and provide valuable insights into the most energetic processes in the universe. Unlike other branches of observational astronomy, it uniquely relies on charged particles as carriers of information.
Detection methods
Astronomers use ground-based detectors, high-altitude research balloons, artificial satellites and other methods to detect cosmic rays. Ground-based detectors, often spread over large areas (for example, the Pierre Auger Observatory is an array of detectors spread over 3,000 square kilometers), identify and analyze the secondary particles (electrons, positrons, photons, muons, etc.) produced in a chain reaction of particle interactions triggered by collisions of cosmic rays with Earth's atmosphere. The properties of the original cosmic ray particle, such as arrival direction and energy, are inferred from the measured properties of the extensive air shower, the cascade of secondary particles collectively showering down through the atmosphere. There are two kinds of ground-based detectors: surface detector arrays sample the air shower at a single altitude, whereas air fluorescence detectors record the shower's development through the atmosphere, based on the interactions of air shower particles with nitrogen molecules. Modern "hybrid" detectors, such as the Pierre Auger Observatory in Argentina and the Large High Altitude Air Shower Observatory in Sichuan, China, take advantage of the complementary nature of these two techniques. Moreover, scientific balloons (such as the one used in the Cosmic Ray Energetics and Mass experiment) and satellites (such as China's Dark Matter Particle Explorer, or DAMPE, telescope) can observe primary cosmic rays directly at very high altitudes and in outer space.
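The inference step described above, recovering the primary particle's energy from the measured air shower, can be illustrated with the classic Heitler toy model. This is a deliberately simplified sketch; the constants and function names are illustrative assumptions, not part of any observatory's actual analysis pipeline:

```python
import math

# Toy Heitler model of an electromagnetic air shower: the cascade roughly
# doubles its particle count every splitting length until the energy per
# particle falls to a critical energy, where multiplication stops.
# Parameter values are textbook approximations (assumptions, not measurements).

XI_C_MEV = 85.0   # critical energy for EM cascades in air, ~85 MeV
LAMBDA_G = 37.0   # radiation length in air, g/cm^2

def shower_max_size(e0_mev):
    """Particle count at shower maximum: N_max ~ E0 / xi_c."""
    return e0_mev / XI_C_MEV

def depth_of_maximum(e0_mev):
    """Atmospheric depth of shower maximum: X_max ~ lambda * log2(E0 / xi_c)."""
    return LAMBDA_G * math.log2(e0_mev / XI_C_MEV)

def primary_energy_from_size(n_max):
    """Invert N_max ~ E0 / xi_c -- the inference surface arrays perform."""
    return n_max * XI_C_MEV
```

In this picture a higher-energy primary produces both a larger shower and a deeper shower maximum, which is why hybrid detectors that measure both quantities constrain the primary particle especially well.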
Benefits
By studying the energy, direction, and composition of cosmic rays, scientists can uncover the sources and acceleration mechanisms behind these particles, which reveal astrophysical processes such as supernova explosions, black hole accretion, and galactic magnetic fields. Observations of cosmic rays led to the discovery of subatomic particles beyond the proton, neutron, and electron, including the positron and the muon, laying the groundwork for modern particle physics. It reveals the nucleosynthetic processes leading to the origin of the elements. By measuring cosmic rays, scientists discovered the presence of magnetic fields and radiation in the Solar System. Some cosmic rays originate from beyond the Solar System or galaxy, allowing scientists to estimate the amount and composition of matter in the universe, providing crucial information about its makeup. Cosmic rays are generated in extreme astrophysical environments such as exploding stars, black holes, and galactic collisions and provide a rare window into these processes. Energetic cosmic rays can interact with objects traveling through space, altering their isotopic composition. By studying these isotopes in meteorites, scientists can determine when they formed and fell on Earth, providing insights into the history of the Solar System. Cosmic rays have practical applications, including monitoring soil moisture for agriculture and irrigation practices and carbon-14 dating, which helps determine the ages of archaeological artifacts and geological formations.
History
Historical milestones in cosmic ray astronomy include Victor Hess's discovery of cosmic rays during balloon flights in 1912; the identification of new subatomic particles like the positron and muon in the 1930s, expanding our understanding of particle physics; Pierre Victor Auger's discovery of extensive particle showers from cosmic ray collisions high in the atmosphere; ground-based detectors measuring cosmic ray flux and energy spectrum in the 1940s-1950s; the establishment of the Volcano Ranch cosmic ray observatory in the 1960s, initiating large-scale experiments; the discovery of cosmic ray anisotropy (the fact that cosmic rays do not arrive uniformly from every region of the sky) in the 1960s, unveiling variations in flux and direction; the emergence of high-energy gamma-ray telescopes in the 1980s-1990s, enabling observations of gamma rays produced by cosmic ray interactions; the advent of space-based detectors like AMS-02 on the International Space Station in the 2000s, providing insights from space; and recent progress in multi-messenger astronomy in the 2010s, integrating cosmic ray observations with other astrophysical signals for a more complete view of cosmic phenomena.
Future
With advancements in technology and the development of more sensitive detection systems, astronomers anticipate making new discoveries about the sources, acceleration mechanisms, and propagation of cosmic rays. These insights will contribute to a deeper understanding of the underlying physics governing the cosmos. Future cosmic ray observatories, such as the Cherenkov Telescope Array, will use advanced techniques to detect gamma rays produced by cosmic ray interactions in Earth's atmosphere. Since these gamma rays will be the most sensitive means to study cosmic rays near their source, these observatories will enable astronomers to study cosmic rays with unprecedented precision.
Cosmic ray astronomy faces difficulty in identifying the exact sources of cosmic rays because charged particles are deflected by magnetic fields in space; as a result, tracing cosmic rays back to their origins requires sophisticated modeling techniques and multi-messenger observations to infer source locations. Moreover, the high-energy nature of these rays, the need for full-sky exposure, the minimization of deflection by magnetic fields, and the elimination of background from distant sources all present technical challenges.
References
Observational astronomy
Astrophysics | Cosmic ray astronomy | [
"Physics",
"Astronomy"
] | 1,155 | [
"Astronomical sub-disciplines",
"Observational astronomy",
"Astrophysics"
] |
76,762,925 | https://en.wikipedia.org/wiki/PGC%201470080 | PGC 1470080 is a type E elliptical galaxy located in the Boötes constellation. It lies 3 billion light-years from the Solar System and has a diameter of 571,000 light-years, making it a type-cD galaxy and one of the largest galaxies known.
Characteristics
It is the brightest cluster galaxy of the galaxy cluster WHL J143845.0+145412. The galaxy acts as a gravitational lens for a much more distant spiral galaxy, SGAS J143845+145407, producing a mirrored image of the background galaxy.
This phenomenon occurs when a massive celestial body, such as a galaxy cluster, curves spacetime enough to bend the path of light passing near it. The lens can produce multiple images of the original galaxy, so that the background object appears as a distorted arc or a ring.
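The angular scale of such lensing is set by the Einstein radius; for a lens of mass $M$ this is a standard textbook relation, not a measurement specific to this system:

```latex
\theta_E = \sqrt{\frac{4GM}{c^{2}}\,\frac{D_{ls}}{D_l\,D_s}}
```

where $D_l$, $D_s$, and $D_{ls}$ are the angular-diameter distances to the lens, to the source, and from lens to source. A source aligned exactly behind the lens appears as a ring of this angular radius; slight misalignment instead yields the arcs and multiple images described above.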
Such observations take advantage of gravitational lensing to study galaxies of the early universe. Lensing reveals details of distant galaxies that would otherwise be unobtainable, allowing astronomers to measure star formation in these early galaxies and giving scientists better insight into how galaxy evolution has unfolded. Gravitational lensing is also a useful tool that has contributed significant new results in areas as different as the cosmological distance scale, dark matter in halos, and galaxy structure.
The Hubble image of PGC 1470080 shows it to be a peculiar lenticular galaxy rather than the elliptical galaxy it was expected to be.
References
Boötes
Principal Galaxies Catalogue objects
LEDA objects
2MASS objects
SDSS objects
Elliptical galaxies | PGC 1470080 | [
"Astronomy"
] | 330 | [
"Boötes",
"Constellations"
] |
58,448,678 | https://en.wikipedia.org/wiki/Aspergillus%20minisclerotigenes | Aspergillus minisclerotigenes is a species of fungus in the genus Aspergillus. It is from the Flavi section. The species was first described in 2008. It has been reported to produce aflatoxin B1, aflatoxin B2, aflatoxin G1, aflatoxin G2, aflavarins, aflatrems, aflavinins, aspergillic acid, cyclopiazonic acid, and paspalinine.
Growth and morphology
A. minisclerotigenes has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
minisclerotigenes
Fungi described in 2008
Fungus species | Aspergillus minisclerotigenes | [
"Biology"
] | 177 | [
"Fungi",
"Fungus species"
] |
58,448,717 | https://en.wikipedia.org/wiki/Aspergillus%20nomius | Aspergillus nomius is a species of fungus in the genus Aspergillus. It is from the Flavi section. The species was first described in 1987. It has been reported to produce aflatoxin B1, aflatoxin B2, aflatoxin G1, aflatoxin G2, aspergillic acid, kojic acid, nominine, paspaline, pseurotin, and tenuazonic acid. A. nomius has been identified as the cause of human infections.
Growth and morphology
A. nomius has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
nomius
Fungi described in 1987
Fungus species | Aspergillus nomius | [
"Biology"
] | 178 | [
"Fungi",
"Fungus species"
] |
58,448,828 | https://en.wikipedia.org/wiki/Aspergillus%20novoparasiticus | Aspergillus novoparasiticus is a species of fungus in the genus Aspergillus. It is from the Flavi section. The species was first described in 2011. It has been reported to produce aflatoxin B1, aflatoxin B2, aflatoxin G1, and aflatoxin G2. A. novoparasiticus has been isolated from hospital patients. Recently, it has been reported in maize from Brazil.
Growth and morphology
A. novoparasiticus has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
novoparasiticus
Fungi described in 2011
Fungus species | Aspergillus novoparasiticus | [
"Biology"
] | 161 | [
"Fungi",
"Fungus species"
] |
58,451,522 | https://en.wikipedia.org/wiki/Aspergillus%20parvisclerotigenus | Aspergillus parvisclerotigenus is a species of fungus in the genus Aspergillus. It is from the Flavi section. The species was first described in 2005. A. parvisclerotigenus has been isolated in Nigeria and has been found to produce aflatoxin B1, aflatoxin B2, aflatoxin G1, aflatoxin G2, aflatrem, aflavarin, aspirochlorin, cyclopiazonic acid, kojic acid, and paspaline.
References
parvisclerotigenus
Fungi described in 2005
Fungus species | Aspergillus parvisclerotigenus | [
"Biology"
] | 128 | [
"Fungi",
"Fungus species"
] |
58,451,696 | https://en.wikipedia.org/wiki/Aspergillus%20pseudotamarii | Aspergillus pseudotamarii is a species of fungus in the genus Aspergillus. It is from the Flavi section. The species was first described in 2001. It has been shown to produce aflatoxin B1, aflatoxin B2, cyclopiazonic acid, and kojic acid.
Growth and morphology
A. pseudotamarii has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
pseudotamarii
Fungi described in 2001
Fungus species | Aspergillus pseudotamarii | [
"Biology"
] | 137 | [
"Fungi",
"Fungus species"
] |
58,451,979 | https://en.wikipedia.org/wiki/Sea%20balls | Sea balls (also known as Aegagropila or Pillae marinae) are tightly packed balls of fibrous marine material, recorded from the seashore. They vary in size but are generally up to in size. In Edgartown, Massachusetts a longish sea ball around in diameter has been found. Others have been reported at Dingle Bay in Ireland and at Valencia, Spain. They may occur in hundreds and are composed of plant material, mostly seagrass rhizome netting torn out by water movement.
In recent years they have been shown to contain more and more plastic marine debris and even microplastics.
Gallery
References
Aquatic ecology
Ecotoxicology
Ocean pollution
Oceanographical terminology
Waste | Sea balls | [
"Physics",
"Chemistry",
"Biology",
"Environmental_science"
] | 145 | [
"Ocean pollution",
"Water pollution",
"Materials",
"Ecosystems",
"Aquatic ecology",
"Waste",
"Matter"
] |
58,454,314 | https://en.wikipedia.org/wiki/Protocol%20engineering | Protocol engineering is the application of systematic methods to the development of communication protocols. It uses many of the principles of software engineering, but it is specific to the development of distributed systems.
History
When the first experimental and commercial computer networks were developed in the 1970s, the concept of protocols was not yet well developed. These were the first distributed systems. In the context of the newly adopted layered protocol architecture (see OSI model), the definition of the protocol of a specific layer should be such that any entity implementing that specification in one computer would be compatible with any other computer containing an entity implementing the same specification, and their interactions should be such that the desired communication service would be obtained. On the other hand, the protocol specification should be abstract enough to allow different choices for the implementation on different computers.
It was recognized that a precise specification of the expected service provided by the given layer was important. It is important for the verification of the protocol, which should demonstrate that the communication service is provided if both protocol entities implement the protocol specification correctly. This principle was later followed during the standardization of the OSI protocol stack, in particular for the transport layer.
It was also recognized that some kind of formalized protocol specification would be useful for the verification of the protocol and for developing implementations, as well as test cases for checking the conformance of an implementation against the specification. While initially mainly finite-state machines were used as (simplified) models of a protocol entity, in the 1980s three formal specification languages were standardized, two by ISO and one by ITU. The latter, called SDL, was later used in industry and has been merged with UML state machines.
Principles
The following are the most important principles for the development of protocols:
Layered architecture: A protocol layer at the level n consists of two (or more) entities that have a service interface through which the service of the layer is provided to the users of the protocol, and which uses the service provided by a local entity of level (n-1).
The service specification of a layer describes, in an abstract and global view, the behavior of the layer as visible at the service interfaces of the layer.
The protocol specification defines the requirements that should be satisfied by each entity implementation.
Protocol verification consists of showing that two (or more) entities satisfying the protocol specification will provide at their service interfaces the specified service of that layer.
The (verified) protocol specification is used mainly for the following two activities:
The development of an entity implementation. Note that the abstract properties of the service interface are defined by the service specification (and also used by the protocol specification), but the detailed nature of the interface can be chosen during the implementation process, separately for each entity.
Test suite development for conformance testing. Protocol conformance testing checks that a given entity implementation conforms to the protocol specification. The conformance test cases are developed based on the protocol specification and are applicable to all entity implementations. Therefore standard conformance test suites have been developed for certain protocol standards.
Methods and tools
Tools for the activities of protocol verification, entity implementation and test suite development can be developed when the protocol specification is written in a formalized language which can be understood by the tool. As mentioned, formal specification languages have been proposed for protocol specification, and the first methods and tools were based on finite-state machine models. Reachability analysis was proposed to understand all possible behaviors of a distributed system, which is essential for protocol verification. This was later complemented with model checking. However, finite-state descriptions are not powerful enough to describe constraints between message parameters and the local variables in the entities. Such constraints can be described by the standardized formal specification languages mentioned above, for which powerful tools have been developed.
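The reachability analysis mentioned above can be sketched for a pair of communicating finite-state machines connected by FIFO channels. This is a minimal illustration, not any standardized tool; the machines, state names and messages below are hypothetical:

```python
from collections import deque

# Each entity: {state: [(action, message, next_state), ...]}
# action '!' sends a message to the peer, '?' receives one from the peer.
SENDER = {
    'idle': [('!', 'data', 'wait')],
    'wait': [('?', 'ack', 'idle')],
}
RECEIVER = {
    'listen': [('?', 'data', 'got')],
    'got':    [('!', 'ack', 'listen')],
}

def reachable_states(m1, m2, start1, start2, max_chan=2):
    """BFS over global states (state1, state2, channel 1->2, channel 2->1)."""
    start = (start1, start2, (), ())
    seen = {start}
    queue = deque([start])
    while queue:
        s1, s2, c12, c21 = queue.popleft()
        succs = []
        for act, msg, nxt in m1.get(s1, []):
            if act == '!' and len(c12) < max_chan:        # send if channel not full
                succs.append((nxt, s2, c12 + (msg,), c21))
            elif act == '?' and c21 and c21[0] == msg:     # receive head of FIFO
                succs.append((nxt, s2, c12, c21[1:]))
        for act, msg, nxt in m2.get(s2, []):
            if act == '!' and len(c21) < max_chan:
                succs.append((s1, nxt, c12, c21 + (msg,)))
            elif act == '?' and c12 and c12[0] == msg:
                succs.append((s1, nxt, c12[1:], c21))
        for g in succs:
            if g not in seen:
                seen.add(g)
                queue.append(g)
    return seen

states = reachable_states(SENDER, RECEIVER, 'idle', 'listen')
```

Enumerating the global state space this way exposes every reachable combination of local states and channel contents; a reachable state with no outgoing transition would indicate a deadlock, and unexpected receptions show up as messages that can never be consumed.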
It is in the field of protocol engineering that model-based development was used very early. These methods and tools have later been used for software engineering as well as hardware design, especially for distributed and real-time systems. On the other hand, many methods and tools developed in the more general context of software engineering can also be used for the development of protocols, for instance model checking for protocol verification, and agile methods for entity implementations.
Constructive methods for protocol design
Most protocols are designed by human intuition and discussions during the standardization process. However, some methods have been proposed for using constructive methods possibly supported by tools to automatically derive protocols that satisfy certain properties. The following are a few examples:
Semi-automatic protocol synthesis: The user defines all message sending actions of the entities, and the tool derives all necessary reception actions (even if several messages are in transit).
Synchronizing protocol: The state transitions of one protocol entity are given by the user, and the method derives the behavior of the other entity such that it remains in states that correspond to the former entity.
Protocol derived from service specification: The service specification is given by the user and the method derives a suitable protocol for all entities.
Protocol for control applications: The specification of one entity (called the plant - which must be controlled) is given, and the method derives a specification of the other entity such that certain fail states of the plant are never reached and certain given properties of the plant's service interactions are satisfied. This is a case of supervisory control.
Books
Ming T. Liu, Protocol Engineering, Advances in Computers, Volume 29, 1989, Pages 79-195.
G.J. Holzmann, Design and Validation of Computer Protocols, Prentice Hall, 1991.
H. König, Protocol Engineering, Springer, 2012.
M. Popovic, Communication Protocol Engineering, CRC Press, 2nd Ed. 2018.
P. Venkataram, S.S. Manvi, B.S. Babu, Communication Protocol Engineering, 2014.
References
Software engineering | Protocol engineering | [
"Technology",
"Engineering"
] | 1,156 | [
"Software engineering",
"Systems engineering",
"Information technology",
"Computer engineering"
] |
58,455,439 | https://en.wikipedia.org/wiki/Small%20planet%20radius%20gap | The small planet radius gap (also called the Fulton gap, photoevaporation valley, or Sub-Neptune Desert) is an observed scarcity of planets with radii between 1.5 and 2 times Earth's radius, likely due to photoevaporation-driven mass loss. A bimodality in the Kepler exoplanet population was first observed in 2011 and attributed to the absence of significant gas atmospheres on close-in, low-mass planets. This feature was noted as possibly confirming an emerging hypothesis that photoevaporation could drive atmospheric mass loss. This would lead to a population of bare, rocky cores with smaller radii at small separations from their parent stars, and planets with thick hydrogen- and helium-dominated envelopes with larger radii at larger separations. The bimodality in the distribution was confirmed with higher-precision data in the California-Kepler Survey in 2017, which was shown to match the predictions of the photoevaporative mass-loss hypothesis later that year.
Despite the implication of the word 'gap', the Fulton gap does not actually represent a range of radii completely absent from the observed exoplanet population, but rather a range of radii that appear to be relatively uncommon. As a result, 'valley' is often used in place of 'gap'. The specific term "Fulton gap" is named for Benjamin J. Fulton, whose doctoral thesis included precision radius measurements that confirmed the scarcity of planets between 1.5 and 2 Earth radii, for which he won the Robert J. Trumpler Award, although the existence of this radius gap had been noted along with its underlying mechanisms as early as 2011, 2012 and 2013.
Within the photoevaporation model of Owen and Wu, the radius gap arises as planets with H/He atmospheres that double the core's radius are the most stable to atmospheric mass-loss. Planets with atmospheres larger than this are vulnerable to erosion and their atmospheres evolve towards a size that doubles the core's radius. Planets with smaller atmospheres undergo runaway loss, leaving them with no H/He dominated atmosphere.
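The intuition behind such models can be made concrete with the standard energy-limited approximation for photoevaporative escape (the symbols and the efficiency factor $\eta$ here are generic assumptions, not values taken from the source):

```latex
\dot{M} \;\simeq\; \eta \, \frac{L_{\mathrm{XUV}} \, R_p^{3}}{4 \, G \, M_p \, a^{2}}
```

where $L_{\mathrm{XUV}}$ is the host star's high-energy luminosity, $R_p$ and $M_p$ are the planet's radius and mass, and $a$ is the orbital separation. The strong $R_p^{3}/M_p$ dependence is what makes puffy, low-mass envelopes vulnerable to runaway loss at small $a$ while more compact envelopes survive, carving out the valley between the two populations.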
Other possible explanations
Runaway gas accretion by larger planets.
Observational bias favoring easier detection of hot ocean planets with extended steam atmospheres.
See also
References
Exoplanetology
Planetary science
Radii | Small planet radius gap | [
"Astronomy"
] | 481 | [
"Planetary science",
"Astronomical sub-disciplines"
] |
58,455,539 | https://en.wikipedia.org/wiki/Aspergillus%20varians | Aspergillus varians is a species of fungus in the genus Aspergillus. It is from the Nidulantes section. The species was first described in 1899.
Growth and morphology
A. varians has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
varians
Fungi described in 1899
Fungus species | Aspergillus varians | [
"Biology"
] | 103 | [
"Fungi",
"Fungus species"
] |
58,455,591 | https://en.wikipedia.org/wiki/Aspergillus%20navahoensis | Aspergillus navahoensis is a species of fungus in the genus Aspergillus. It is from the Nidulantes section. The species was first described in 1982. It was isolated from sand in Arizona, United States. It has been reported to produce averufin, norsolorinic acid, 6,7,8-trihydroxy-3-methylisocoumarin, desferritriacetylfusigen, echinocandin B, and sterigmatocystin.
Growth and morphology
A. navahoensis has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
navahoensis
Fungi described in 1982
Fungus species | Aspergillus navahoensis | [
"Biology"
] | 181 | [
"Fungi",
"Fungus species"
] |
58,455,663 | https://en.wikipedia.org/wiki/Aspergillus%20quadrilineatus | Aspergillus quadrilineatus is a species of fungus in the genus Aspergillus. It is from the Nidulantes section. The species was first described in 1939. It has been isolated from soil in New Jersey, Egypt, Spain, China, and Namibia. It has been reported to produce asperthecin, averufin, 7-methoxyaverufin, sterigmatocystin, versicolourin, desferritriacetylfusigen, echinocandin B & E, variacoxanthone B, emestrin, aurantioemestrin, dethiosecoemestrin, emindol DA, microperfuranone, penicillin G, quadrilineatin, and sterigmatocystin.
Growth and morphology
A. quadrilineatus has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
quadrilineatus
Fungi described in 1939
Taxa named by Charles Thom
Fungus species | Aspergillus quadrilineatus | [
"Biology"
] | 246 | [
"Fungi",
"Fungus species"
] |
58,455,709 | https://en.wikipedia.org/wiki/Aspergillus%20similis | Aspergillus similis is a species of fungus in the genus Aspergillus. It is from the Nidulantes section. The species was first described in 2014.
Growth and morphology
A. similis has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
similis
Fungi described in 2014
Fungus species | Aspergillus similis | [
"Biology"
] | 103 | [
"Fungi",
"Fungus species"
] |
58,455,856 | https://en.wikipedia.org/wiki/Aspergillus%20stercorarius | Aspergillus stercorarius is a species of fungus in the genus Aspergillus. It is from the Nidulantes section. The species was first described in 2016. It has been isolated from dung in Kerzaz, Sahara, and Kagh Islands.
References
stercorarius
Fungi described in 2016
Fungus species | Aspergillus stercorarius | [
"Biology"
] | 70 | [
"Fungi",
"Fungus species"
] |
58,455,897 | https://en.wikipedia.org/wiki/Aspergillus%20aurantiobrunneus | Aspergillus aurantiobrunneus is a species of fungus in the genus Aspergillus. It is from the Nidulantes section. The species was first described in 1965. It has been reported to produce emeremophiline, emericolin A-D, variecolin, variecolol, desferritriacetylfusigen, sterigmatocystin, variecoacetal A & B, variecolactone, variecolin, and variecolol.
Growth and morphology
A. aurantiobrunneus has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
aurantiobrunneus
Fungi described in 1965
Fungus species | Aspergillus aurantiobrunneus | [
"Biology"
] | 186 | [
"Fungi",
"Fungus species"
] |
58,455,919 | https://en.wikipedia.org/wiki/Aspergillus%20falconensis | Aspergillus falconensis is a species of fungus in the genus Aspergillus. It is from the Nidulantes section. The species was first described in 1989. It has been reported to produce 3,30-Dihydroxy-5,50-dimethyldiphenyl ether, falconensin A-N, falconenson A-B, hopane-6α,7β,22-triol, hopane-7β,22-diol, mitorubrin, monomethyldihydromitorubrin, monomethylmitorubrin, and zeorin.
Growth and morphology
A. falconensis has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
falconensis
Fungi described in 1989
Fungus species | Aspergillus falconensis | [
"Biology"
] | 197 | [
"Fungi",
"Fungus species"
] |
58,455,971 | https://en.wikipedia.org/wiki/Aspergillus%20insuetus | Aspergillus insuetus is a species of fungus in the genus Aspergillus. It is from the Usti section. The species was first described in 1929. It has been reported to produce drimans, ophiobolin G, and ophiobolin H.
Growth and morphology
A. insuetus has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
insuetus
Fungi described in 1929
Fungus species | Aspergillus insuetus | [
"Biology"
] | 127 | [
"Fungi",
"Fungus species"
] |
58,456,170 | https://en.wikipedia.org/wiki/Aspergillus%20pseudodeflectus | Aspergillus pseudodeflectus is a species of fungus in the genus Aspergillus. It is from the Usti section. The species was first described in 1975. It has been reported to produce drimans, ophiobolins G and H, and austins.
Growth and morphology
A. pseudodeflectus has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
pseudodeflectus
Fungi described in 1975
Fungus species | Aspergillus pseudodeflectus | [
"Biology"
] | 130 | [
"Fungi",
"Fungus species"
] |
58,456,246 | https://en.wikipedia.org/wiki/Aspergillus%20pseudoustus | Aspergillus pseudoustus is a species of fungus in the genus Aspergillus. It is from the Usti section. The species was first described in 2011. It has been reported to produce asperugins, austamide, austocystin, norsolorinic acid, versicolorin C, and averufin.
Growth and morphology
A. pseudoustus has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
pseudoustus
Fungi described in 2011
Fungus species | Aspergillus pseudoustus | [
"Biology"
] | 143 | [
"Fungi",
"Fungus species"
] |
58,456,383 | https://en.wikipedia.org/wiki/Aspergillus%20granulosus | Aspergillus granulosus is a species of fungus in the genus Aspergillus. It is from the Usti section. The species was first described in 1944. It has been reported to produce asperugins, ustic acids, nidulol, and drimans.
Growth and morphology
A. granulosus has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
granulosus
Fungi described in 1944
Taxa named by Charles Thom
Fungus species | Aspergillus granulosus | [
"Biology"
] | 134 | [
"Fungi",
"Fungus species"
] |
58,456,406 | https://en.wikipedia.org/wiki/Aspergillus%20heterothallicus | Aspergillus heterothallicus is a species of fungus in the genus Aspergillus. It is from the Usti section. The species was first described in 1965. It has been reported to produce emethallicins, emeheterone, emesterones A & B, 5’-hydroxyaveranthin, stellatin, and sterigmatocystin.
Growth and morphology
A. heterothallicus has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
heterothallicus
Fungi described in 1965
Fungus species | Aspergillus heterothallicus | [
"Biology"
] | 155 | [
"Fungi",
"Fungus species"
] |
58,456,427 | https://en.wikipedia.org/wiki/Aspergillus%20lucknowensis | Aspergillus lucknowensis is a species of fungus in the genus Aspergillus. It is from the Usti section. The species was first described in 1968.
References
lucknowensis
Fungi described in 1968
Fungus species | Aspergillus lucknowensis | [
"Biology"
] | 46 | [
"Fungi",
"Fungus species"
] |
58,458,206 | https://en.wikipedia.org/wiki/Aspergillus%20karnatakaensis | Aspergillus karnatakaensis is a species of fungus in the genus Aspergillus. It is from the Aenei section. The species was first described in 2010. A. karnatakaensis has been isolated from soil, and has been found to produce terrein, gregatins, asteltoxin, karnatakafuran A, and karnatakafuran B.
Growth and morphology
A. karnatakaensis has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
karnatakaensis
Fungi described in 2010
Fungus species | Aspergillus karnatakaensis | [
"Biology"
] | 141 | [
"Fungi",
"Fungus species"
] |
58,458,239 | https://en.wikipedia.org/wiki/Aspergillus%20spectabilis | Aspergillus spectabilis is a species of fungus in the genus Aspergillus. It is from the Aenei section. The species was first described in 1978.
Growth and morphology
A. spectabilis has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
spectabilis
Fungi described in 1978
Fungus species | Aspergillus spectabilis | [
"Biology"
] | 105 | [
"Fungi",
"Fungus species"
] |
58,458,373 | https://en.wikipedia.org/wiki/Aspergillus%20elegans | Aspergillus elegans is a species of fungus in the genus Aspergillus. It is from the Circumdati section. The species was first described in 1978.
Growth and morphology
A. elegans has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
elegans
Fungi described in 1887
Fungus species | Aspergillus elegans | [
"Biology"
] | 107 | [
"Fungi",
"Fungus species"
] |
58,458,383 | https://en.wikipedia.org/wiki/Ecosystem%20collapse | An ecosystem, short for ecological system, is defined as a collection of interacting organisms within a biophysical environment. Ecosystems are never static, and are continually subject to both stabilizing and destabilizing processes. Stabilizing processes allow ecosystems to adequately respond to destabilizing changes, or perturbations, in ecological conditions, or to recover from degradation induced by them; yet, if destabilizing processes become strong enough or fast enough to cross a critical threshold within that ecosystem, often described as an ecological 'tipping point', then an ecosystem collapse (sometimes also termed ecological collapse) occurs.
Ecosystem collapse does not mean total disappearance of life from the area, but it does result in the loss of the original ecosystem's defining characteristics, typically including the ecosystem services it may have provided. Collapse of an ecosystem is effectively irreversible more often than not, and even if the reversal is possible, it tends to be slow and difficult. Ecosystems with low resilience may collapse even during a comparatively stable time, which then typically leads to their replacement with a more resilient system in the biosphere. However, even resilient ecosystems may disappear during the times of rapid environmental change, and study of the fossil record was able to identify how certain ecosystems went through a collapse, such as with the Carboniferous rainforest collapse or the collapse of Lake Baikal and Lake Hovsgol ecosystems during the Last Glacial Maximum.
Today, the ongoing Holocene extinction is caused primarily by human impact on the environment, and the greatest biodiversity loss so far has been due to habitat degradation and fragmentation, which eventually destroys entire ecosystems if left unchecked. There have been multiple notable examples of such an ecosystem collapse in the recent past, such as the collapse of the Atlantic northwest cod fishery. More are likely to occur without a change in course, since estimates show that 87% of oceans and 77% of the land surface have been altered by humanity, with 30% of global land area degraded and a global decline in ecosystem resilience. Deforestation of the Amazon rainforest is the most dramatic example of a massive, continuous ecosystem and biodiversity hotspot under immediate threat from habitat destruction through logging, and under the less-visible, yet ever-growing and persistent threat from climate change.
Biological conservation can help to preserve threatened species and threatened ecosystems alike. However, time is of the essence. Just as interventions to preserve a species have to occur before it falls below viable population limits, at which point an extinction debt occurs regardless of what comes after, efforts to protect ecosystems must occur in response to early warning signals, before the tipping point to a regime shift is crossed. Further, there is a substantial gap between the extent of scientific knowledge of how extinctions occur and the knowledge of how ecosystems collapse. While there have been efforts to create objective criteria for determining when an ecosystem is at risk of collapsing, they are comparatively recent, and are not yet as comprehensive. While the IUCN Red List of threatened species has existed for decades, the IUCN Red List of Ecosystems has only been in development since 2008.
Definition
Ecosystem collapse has been defined as a "transformation of identity, loss of defining features, and replacement by a novel ecosystem", and involves the loss of "defining biotic or abiotic features", including the ability to sustain the species which used to be associated with that ecosystem. According to another definition, it is "a change from a baseline state beyond the point where an ecosystem has lost key defining features and functions, and is characterised by declining spatial extent, increased environmental degradation, decreases in, or loss of, key species, disruption of biotic processes, and ultimately loss of ecosystem services and functions". Ecosystem collapse has also been described as "an analogue of species extinction", and in many cases, it is irreversible, with a new ecosystem appearing instead, which may retain some characteristics of the previous ecosystem, yet has a greatly altered structure and function. There are exceptions where an ecosystem can be recovered past the point of collapse, but by definition such recovery will always be far more difficult than allowing a disturbed yet functioning ecosystem to recover, requiring active intervention and/or a prolonged period of time.
Drivers
While collapse events can occur naturally with disturbances to an ecosystem—through fires, landslides, flooding, severe weather events, disease, or species invasion—there has been a noticeable increase in human-caused disturbances over the past fifty years. The combination of environmental change and the presence of human activity is increasingly detrimental to ecosystems of all types, as unrestricted human actions often increase the risk of abrupt (and potentially irreversible) post-disturbance changes in systems that would otherwise have been able to recover.
Some behaviors that induce transformation are: human intervention in the balance of local diversity (through introduction of new species or overexploitation), alterations in the chemical balance of environments through pollution, modifications of local climate or weather with anthropogenic climate change, and habitat destruction or fragmentation in terrestrial/marine systems. For instance, overgrazing was found to cause land degradation, specifically in Southern Europe, which is another driver of ecological collapse and natural landscape loss. Proper management of pastoral landscapes can mitigate risk of desertification.
Despite the strong empirical evidence and highly visible collapse-inducing disturbances, anticipating collapse is a complex problem. Collapse can happen when an ecosystem's distribution decreases below a minimal sustainable size, or when key biotic processes and features disappear due to environmental degradation or disruption of biotic interactions. These different pathways to collapse can be used as criteria for estimating the risk of ecosystem collapse. Although states of ecosystem collapse are often defined quantitatively, few studies adequately describe transitions from a pristine or original state towards collapse.
Geological record
Research published in 2004 demonstrated how, during the Last Glacial Maximum (LGM), alterations in the environment and climate led to a collapse of the Lake Baikal and Lake Hovsgol ecosystems, which then drove species evolution. The collapse of Hovsgol's ecosystem during the LGM brought forth a new ecosystem, with limited species diversity and low levels of endemism, in Hovsgol during the Holocene. That research also shows how ecosystem collapse in Lake Hovsgol during the LGM led to higher levels of diversity and higher levels of endemism as a byproduct of subsequent evolution.
In the Carboniferous period, coal forests, great tropical wetlands, extended over much of Euramerica (Europe and America). This land supported towering lycopsids, which fragmented and collapsed abruptly. The collapse of the rainforests during the Carboniferous has been attributed to multiple causes, including climate change and volcanism. Specifically, at this time the climate became cooler and drier, conditions that are not favourable to the growth of rainforests and much of the biodiversity within them. The sudden collapse of the terrestrial environment caused many large vascular plants, giant arthropods, and diverse amphibians to go extinct, allowing seed-bearing plants and amniotes to take over (though smaller relatives of the affected groups also survived).
Historic examples of collapsed ecosystems
The Rapa Nui subtropical broadleaf forests of Easter Island, formerly dominated by an endemic palm, are considered collapsed due to the combined effects of overexploitation, climate change and introduced exotic rats.
The Aral Sea was an endorheic lake between Kazakhstan and Uzbekistan. It was once considered one of the largest lakes in the world but has been shrinking since the 1960s after the rivers that fed it were diverted for large scale irrigation. By 1997, it had declined to 10% of its original size, splitting into much smaller hypersaline lakes, while dried areas have transformed into desert steppes.
The regime shift in the northern Benguela upwelling ecosystem is considered an example of ecosystem collapse in open marine environments. Prior to the 1970s, sardines were the dominant vertebrate consumers, but overfishing and two adverse climatic events (the Benguela Niños of 1974 and 1984) led to an impoverished ecosystem state with high biomass of jellyfish and pelagic goby.
Another notable example is the collapse of the Grand Banks cod in the early 1990s, when overfishing reduced fish populations to 1% of their historical levels.
Contemporary risk
There are two tools commonly used together to assess risks to ecosystems and biodiversity: generic risk assessment protocols and stochastic simulation models. The more notable of the two is the risk assessment protocol, particularly the IUCN Red List of Ecosystems (RLE), which is widely applicable to many ecosystems even in data-poor circumstances. However, because using this tool essentially amounts to comparing systems against a list of criteria, it is often limited in its ability to look at ecosystem decline holistically, and is thus often used in conjunction with simulation models that consider more aspects of decline such as ecosystem dynamics, future threats, and social-ecological relationships.
The IUCN RLE is a global standard that was developed to assess threats to various ecosystems on local, regional, national, and global scales, as well as to prompt conservation efforts in the face of the unparalleled decline of natural systems in the last decade. Though this effort is still in the early stages of implementation, the IUCN has a goal to assess the risk of collapse for all of the world's ecosystems by 2025. The concept of ecosystem collapse is used in the framework to establish categories of risk for ecosystems, with the category Collapsed used as the end-point of risk assessment. Other categories of threat (Vulnerable, Endangered and Critically Endangered) are defined in terms of the probability or risk of collapse. A paper by Bland et al. suggests four aspects for defining ecosystem collapse in risk assessments:
qualitatively defining initial and collapsed states
describing collapse and recovery transitions
identifying and selecting indicators of collapse
setting quantitative collapse thresholds.
Early detection and monitoring
Scientists can predict tipping points for ecosystem collapse. The most frequently used model for predicting food web collapse is called R50, which is a reliable measurement model for food web robustness. However, there are others: for example, marine ecosystem assessments can use the RAM Legacy Stock Assessment Database. In one example, 154 different marine fish species were studied to establish the relationship between pressures on fish populations (such as overfishing and climate change), traits of those populations (like growth rate), and the risk of ecosystem collapse.
The measurement of "critical slowing down" (CSD) is one approach for developing early warning signals for a potential or likely onset of approaching collapse. It refers to increasingly slow recovery from perturbations.
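The idea behind CSD can be illustrated with a toy model (this is our illustrative sketch, not a method from the studies cited here): in a first-order autoregressive process, a recovery rate closer to zero means perturbations die out quickly, while a rate near one means recovery is slow, and the lag-1 autocorrelation of the observed time series rises accordingly.

```python
import random
from statistics import mean

def lag1_autocorr(xs):
    """Lag-1 autocorrelation: correlation between x_t and x_{t+1}."""
    m = mean(xs)
    num = sum((a - m) * (b - m) for a, b in zip(xs, xs[1:]))
    den = sum((a - m) ** 2 for a in xs)
    return num / den

def simulate(r, n=2000):
    """AR(1) process x_{t+1} = r*x_t + noise; r near 1 = slow recovery."""
    random.seed(42)
    x, series = 0.0, []
    for _ in range(n):
        x = r * x + random.gauss(0, 1)
        series.append(x)
    return series

healthy = lag1_autocorr(simulate(r=0.2))    # fast recovery from shocks
near_tip = lag1_autocorr(simulate(r=0.95))  # slow recovery (CSD)
print(f"healthy:  {healthy:.2f}")
print(f"near tip: {near_tip:.2f}")
```

The early-warning signal is the trend: as a system approaches the tipping point, the indicator drifts upward toward 1.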
In 2020, one paper suggested that once a 'point of no return' is reached, breakdowns do not occur gradually but rapidly, and that the Amazon rainforest could shift to a savannah-type mixture of trees and grass within 50 years and the Caribbean coral reefs could collapse within 15 years once a state of collapse has been reached. Another indicated that large ecosystem disruptions will occur earlier under more intense climate change: under the high-emissions RCP8.5 scenario, ecosystems in the tropical oceans would be the first to experience abrupt disruption before 2030, with tropical forests and polar environments following by 2050. In total, 15% of ecological assemblages would have over 20% of their species abruptly disrupted if warming eventually reaches ; in contrast, this would happen to fewer than 2% if the warming were to stay below .
Rainforest collapse
Rainforest collapse refers to the actual past and theoretical future ecological collapse of rainforests. It may involve habitat fragmentation to the point where little rainforest biome is left, and rainforest species only survive in isolated refugia. Habitat fragmentation can be caused by roads: when humans start to cut down trees for logging, secondary roads are created that go unused after their primary purpose is served, and once abandoned, rainforest plants find it difficult to grow back in those areas. Forest fragmentation also opens the path for illegal hunting. Species have a hard time finding new places to settle in these fragments, contributing to ecological collapse and the extinction of many rainforest animals.
A classic pattern of forest fragmentation is occurring in many rainforests including those of the Amazon, specifically a 'fishbone' pattern formed by the development of roads into the forest. This is of great concern, not only because of the loss of a biome with many untapped resources and wholesale death of living organisms, but also because plant and animal species extinction is known to correlate with habitat fragmentation.
In the year 2022, research found that more than three-quarters of the Amazon rainforest has been losing resilience due to deforestation and climate change since the early 2000s as measured by recovery-time from short-term perturbations (the critical slowing down), reinforcing the theory that it is approaching a critical transition. Another study from 2022 found that tropical, arid and temperate forests are substantially losing resilience.
Coral reefs
A major concern for marine biologists is the collapse of coral reef ecosystems. One effect of global climate change is rising sea levels, which can lead to reef drowning or coral bleaching. Human activity, such as fishing, mining, and deforestation, threatens coral reefs by affecting their niche. For example, there is a demonstrated correlation between a 30–60% loss in coral reef diversity and human activity such as sewage and/or industrial pollution.
Conservation and reversal
There is still little information on effective conservation or reversal methods for ecosystem collapse. Rather, there has been increased focus on the predictability of ecosystem collapse, whether prediction is possible, and whether it is productive to explore. This is likely because thorough studies of at-risk ecosystems are a more recent development in ecological fields, so collapse dynamics are either too recent to observe or still emerging. Since studies are not yet long term, conclusions about reversibility or transformation potential are often hard to draw from newer, more focused studies.
See also
Arctic shrinkage
Ecological resilience
Ecosystem services
Environmental degradation
Overshoot (ecology)
Tipping points in the climate system
References
Ecosystems
Biological systems
IUCN Red List of Ecosystems | Ecosystem collapse | [
"Biology"
] | 2,880 | [
"Symbiosis",
"Ecosystems",
"nan"
] |
58,458,409 | https://en.wikipedia.org/wiki/Aspergillus%20flocculosus | Aspergillus flocculosus is a species of fungus in the genus Aspergillus. It is from the Circumdati section. The species was first described in 2004. It has been isolated in Venezuela, Slovenia, Greece, Costa Rica, and Brazil.
Growth and morphology
A. flocculosus has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
flocculosus
Fungi described in 2004
Fungus species | Aspergillus flocculosus | [
"Biology"
] | 130 | [
"Fungi",
"Fungus species"
] |
58,458,450 | https://en.wikipedia.org/wiki/Aspergillus%20westlandensis | Aspergillus westlandensis is a species of fungus in the genus Aspergillus. It is from the Circumdati section. The species was first described in 2014. It has been reported to produce aspergamide A and B, penicillic acid, dehydropenicillic acid, xanthomegnin, viomellein and vioxanthin and traces of ochratoxin A.
Growth and morphology
A. westlandensis has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
westlandensis
Fungi described in 2014
Fungus species | Aspergillus westlandensis | [
"Biology"
] | 160 | [
"Fungi",
"Fungus species"
] |
58,458,485 | https://en.wikipedia.org/wiki/Aspergillus%20muricatus | Aspergillus muricatus is a species of fungus in the genus Aspergillus. It is from the Circumdati section. The species was first described in 1994. It has been isolated from soil in the Philippines and is reported to produce petromurins.
Growth and morphology
A. muricatus has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
muricatus
Fungi described in 1994
Fungus species | Aspergillus muricatus | [
"Biology"
] | 126 | [
"Fungi",
"Fungus species"
] |
58,458,524 | https://en.wikipedia.org/wiki/Aspergillus%20nakazawae | Aspergillus nakazawae is a species of fungus in the genus Aspergillus. The species was first described in 1950.
Growth and morphology
A. nakazawae has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
nakazawae
Fungi described in 1950
Fungus species | Aspergillus nakazawae | [
"Biology"
] | 99 | [
"Fungi",
"Fungus species"
] |
58,458,561 | https://en.wikipedia.org/wiki/Aspergillus%20ostianus | Aspergillus ostianus is a species of fungus in the genus Aspergillus. It is from the Circumdati section. The species was first described in 1899. It has been reported to produce ochratoxin A.
Growth and morphology
A. ostianus has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
ostianus
Fungi described in 1899
Fungus species | Aspergillus ostianus | [
"Biology"
] | 119 | [
"Fungi",
"Fungus species"
] |
58,458,618 | https://en.wikipedia.org/wiki/Aspergillus%20petrakii | Aspergillus petrakii is a species of fungus in the genus Aspergillus. It is from the Circumdati section. The species was first described in 1957.
Growth and morphology
A. petrakii has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
petrakii
Fungi described in 1957
Fungus species | Aspergillus petrakii | [
"Biology"
] | 107 | [
"Fungi",
"Fungus species"
] |
58,459,797 | https://en.wikipedia.org/wiki/Tupolev%20OOS | The Tupolev OOS was a Soviet concept for an air-launched, single-stage-to-orbit spaceplane. The OOS's proposed carrier aircraft, the Antonov AKS, was a twin-fuselage concept plane consisting of two An-225 fuselages and was powered by 18 Progress D-18T turbofan engines, with the placements of the engines both above and below the wings. The OOS was to be carried under the AKS's raised center wing. The launch system was proposed in the late 1980s, but never developed past the design stage.
See also
Buran programme
Scaled Composites Stratolaunch
Conroy Virtus
References
External links
Soviet Designed Air Launched Space Shuttle Rocket from a dual Fuselage Antonov An-225, video rendering by Hazegrayart
Spaceplanes
Soviet military aircraft
Single-stage-to-orbit | Tupolev OOS | [
"Astronomy"
] | 178 | [
"Astronomy stubs",
"Spacecraft stubs"
] |
58,460,201 | https://en.wikipedia.org/wiki/Ethical%20guidelines%20for%20treating%20trauma%20survivors | Ethical guidelines for treating trauma survivors can provide professionals direction to enhance their efforts. Trauma survivors have unique needs and vary in their resilience, post-traumatic growth, and negative and positive outcomes from their experiences. Numerous ethical guidelines can inform a trauma-informed care (TIC) approach.
Trauma can result from a wide range of experiences which expose humans to one or more physical, emotional, and/or relational dangers. Treatment can be provided by a wide range of practices, ranging from yoga, education, law, mental health, and justice to medicine. It can also be provided by organizations.
Within the field of psychology, ethics define the standards of professional conduct. The American Psychological Association (APA) describes their Ethics Code as a “common set of principles and standards upon which psychologists build their professional and scientific work” (p. 8). Ethics help clinicians to think through and critically analyze situations, while also serving as aspirations and virtues that clinicians should strive towards. When working with trauma survivors, oftentimes a client's traumatic experiences can be so overwhelming for both the patient and the clinician that professional and ethical boundaries may become endangered.
Guidelines
The following ethical guidelines should be considered when working with clients who have survived a traumatic experience:
Informed consent
The APA ethics code outlines many professional guidelines for clinicians including the maintenance of confidentiality, minimizing intrusions to privacy, and obtaining informed consent. Informed consent ensures the client has an adequate understanding of the techniques and procedures that will be used during therapy, expected timeline for treatment, and possible consequences for engaging in specific tasks and goals.
When clinicians work with trauma survivors their informed consent should emphasize diagnosis and treatment of trauma and include clear guidelines for maintaining secure and firm boundaries. Some research suggests that clients who have experienced complex trauma may deliberately or unconsciously test clinician's boundaries by missing or arriving late for appointments, bringing the clinician gifts, attempting to photograph the therapist, calling during non-office hours, or trying to extend the session either in person or with a follow-up phone call.
Risk management
Research suggests that trauma survivors are more likely than those without a history of trauma to report suicidal ideation and to engage in self-harming behaviors. Furthermore, research also indicates that suicide attempts are correlated with both childhood maltreatment and PTSD symptom severity. Clinicians who treat trauma survivors should continuously monitor their client's suicidal ideation, means, and plans especially surrounding anniversary dates and triggering experiences. Client safety should be prioritized when working with trauma survivors, and should include immediately assessing client safety following intense sessions and frequent follow-ups with clients between sessions.
Establishing and maintaining a strong therapeutic alliance
The APA outlines General Principles that clinicians should use in order to aspire towards the very highest ethical ideals. Among these General Principles are Principle A: Beneficence and Nonmaleficence and Principle C: Integrity. Beneficence and Nonmaleficence describes that clinicians strive to benefit those with whom they work, and make efforts to do no harm. Fidelity and Responsibility includes establishing relationships of trust and being aware of one's professional responsibilities. Both of these principles should be considered when a clinician attempts to establish and maintain a strong therapeutic alliance with trauma survivors.
For clients with a history of trauma, particularly those who have experienced betrayal trauma, forging close and trusting relationships with others may be difficult. In addition, during the course of therapy clients may discuss terrifying, horrific, or disturbing experiences, which may elicit strong reactions from the therapist. Some of the possible negative reactions could include distancing and emotional detachment, which may reinforce clients’ often negative schemas and self-image. Clinicians may also contribute to the challenges of establishing a strong therapeutic alliance by becoming overly inquisitive about the client's traumatic experience, which, in turn, may lead to a lack of accurate empathy. For these reasons, clinicians treating those with a history of trauma may encounter unique challenges when attempting to develop a strong therapeutic alliance.
Addressing transference and countertransference
Within the course of traditional therapy it is possible for transference and counter-transference to interfere with treatment. For clinicians treating those with a history of trauma it is possible to experience "a priori counter-transference". A priori counter-transference includes the thoughts, feelings, and prejudices that may arise before meeting with a potential client as a result of knowing that the client has gone through a certain traumatic event. These initial reactions may create ethical dilemmas as the clinician's personal attitudes, beliefs, and values may become compromised, thereby increasing the amount of counter-transference the clinician may have towards the client. The APA ethics code 2.06(b) describes a clinician's ethical responsibility should personal situations interfere with a clinician's ability to perform their duties adequately. Clinicians experiencing a priori counter-transference should consider utilizing more frequent consultations, receive increased levels of personal therapy, or consider limiting, suspending, or terminating their work-related duties.
Traumatic bonding
Dutton and Painter originally coined the term "traumatic bonding" to describe the relationship bond that occurs between the perpetrator and victim of abusive relationships. As a result of ongoing cycles of positive and traumatic experiences, powerful emotional bonds are created that are resistant to change. The term can also be borrowed to describe the relationship between a trauma clinician and the client. As the client describes their traumatic memories and re-experiences the accompanying powerful emotions and sensations, they are prone to form a remarkably intense bond with their clinician. These emotionally driven experiences present ethical challenges and pitfalls for the clinician, including behaving in extremes such as acting in an overprotective manner or distancing themselves from the client. The clinician may also feel triggered by their own similar trauma history, prompting unnecessary disclosures or a need to share the client's story in order to seek revenge or justice. The APA ethics code 2.06(a) describes that clinicians should refrain from practicing if they know there is a substantial likelihood that their personal problems will prevent them from being objective or competent. Clinicians who recognize that traumatic bonding might be occurring should increase consultations or consider limiting, suspending, or terminating their work-related duties.
References
Ethics in psychiatry
Clinical psychology
Stress-related disorders | Ethical guidelines for treating trauma survivors | [
"Biology"
] | 1,290 | [
"Behavioural sciences",
"Behavior",
"Clinical psychology"
] |
58,461,116 | https://en.wikipedia.org/wiki/Sodium%20hydrogenoxalate | Sodium hydrogenoxalate or sodium hydrogen oxalate is a chemical compound with the chemical formula . It is an ionic compound. It is a sodium salt of oxalic acid . It is an acidic salt, because it consists of sodium cations and hydrogen oxalate anions or , in which only one acidic hydrogen atom in oxalic acid is replaced by a sodium atom. The hydrogen oxalate anion can be described as the result of removing one hydrogen ion from oxalic acid, or adding one to the oxalate anion .
Properties
Hydrates
The compound is commonly encountered as the anhydrous form or as the monohydrate . Both are colorless crystalline solids at ambient temperature.
The monohydrate can be obtained by evaporating a solution of the compound at room temperature.
The crystal structure of is triclinic normal (pinacoidal, space group P). The lattice parameters are a = 650.3 pm, b = 667.3 pm, c = 569.8 pm, α = 85.04°, β = 110.00°, γ = 105.02°, and Z = 2. The hydrogen oxalate ions are linked end to end in infinite chains by hydrogen bonds (257.1 pm). The chains are cross linked to form layers by both ···O bonds from the water molecules (280.8 pm, 282.6 pm) and by ionic bonds ···O. These layers are in turn held together by ···O bonds. The oxalate group is non-planar with an angle of twist about the bond of 12.9°.
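As a rough consistency check on the parameters above, the triclinic unit-cell volume follows from the standard formula V = abc·sqrt(1 − cos²α − cos²β − cos²γ + 2·cosα·cosβ·cosγ). The molar mass and density in the sketch below are our back-of-envelope figures, not values from the source.

```python
from math import cos, radians, sqrt

def triclinic_volume(a, b, c, alpha, beta, gamma):
    """Unit-cell volume of a triclinic lattice (lengths in pm, angles in degrees)."""
    ca, cb, cg = (cos(radians(x)) for x in (alpha, beta, gamma))
    return a * b * c * sqrt(1 - ca**2 - cb**2 - cg**2 + 2 * ca * cb * cg)

# Lattice parameters of the monohydrate reported above
v_pm3 = triclinic_volume(650.3, 667.3, 569.8, 85.04, 110.00, 105.02)
v_A3 = v_pm3 / 1e6  # 1 Å³ = 10^6 pm³
print(f"cell volume ≈ {v_A3:.1f} Å³")  # roughly 224 Å³

# With Z = 2 formula units per cell and M ≈ 130.03 g/mol for
# NaHC2O4·H2O (our estimate), the implied crystal density is
# around 1.9 g/cm³, plausible for a salt of this kind.
N_A = 6.02214e23
density = 2 * 130.03 / (N_A * v_A3 * 1e-24)  # g/cm³
print(f"density ≈ {density:.2f} g/cm³")
```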
Reactions
Upon being heated, sodium hydrogenoxalate converts to oxalic acid and sodium oxalate, the latter of which decomposes into sodium carbonate and carbon monoxide.
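As a sketch, the decomposition described above corresponds to the following balanced equations (inferred from the text; reaction temperatures and conditions are not specified here):

```latex
2\,\mathrm{NaHC_2O_4} \longrightarrow \mathrm{H_2C_2O_4} + \mathrm{Na_2C_2O_4}
\qquad\qquad
\mathrm{Na_2C_2O_4} \longrightarrow \mathrm{Na_2CO_3} + \mathrm{CO}
```

Both equations balance atom-for-atom: the first redistributes the acidic hydrogens between oxalic acid and the neutral sodium salt, and the second is the loss of carbon monoxide from sodium oxalate.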
Toxicity
The health hazards posed by this compound are largely due to its acidity and to the toxic effects of oxalic acid and other oxalate or hydrogenoxalate salts, which can follow ingestion or absorption through the skin. The toxic effects include necrosis of tissues due to sequestration of calcium ions , and the formation of poorly soluble calcium oxalate stones in the kidneys that can obstruct the kidney tubules.
References
Organic sodium salts
Oxalates | Sodium hydrogenoxalate | [
"Chemistry"
] | 473 | [
"Organic sodium salts",
"Salts"
] |
58,463,477 | https://en.wikipedia.org/wiki/Ohio%20water%20resource%20region | The Ohio water resource region is one of 21 major geographic areas, or regions, in the first level of classification used by the United States Geological Survey to divide and sub-divide the United States into successively smaller hydrologic units. These geographic areas contain either the drainage area of a major river, or the combined drainage areas of a series of rivers.
The Ohio region, which is listed with a 2-digit hydrologic unit code (HUC) of 05, has an approximate size of , and consists of 14 subregions, which are listed with the 4-digit HUCs 0501 through 0514.
This region includes the drainage of the Ohio River Basin, excluding the Tennessee River Basin. It includes parts of Illinois, Indiana, Kentucky, Maryland, New York, North Carolina, Ohio, Pennsylvania, Tennessee, Virginia and West Virginia.
List of water resource subregions
See also
List of rivers in the United States
Water resource region
External links
References
Lists of drainage basins
Drainage basins
Watersheds of the United States
Regions of the United States
Water resource regions | Ohio water resource region | [
"Environmental_science"
] | 211 | [
"Hydrology",
"Drainage basins"
] |
58,463,578 | https://en.wikipedia.org/wiki/Tennessee%20water%20resource%20region | The Tennessee water resource region is one of 21 major geographic areas, or regions, in the first level of classification used by the United States Geological Survey to divide and sub-divide the United States into successively smaller hydrologic units. These geographic areas contain either the drainage area of a major river, or the combined drainage areas of a series of rivers.
The Tennessee region, which is listed with a 2-digit hydrologic unit code (HUC) of 06, has an approximate size of , and consists of 4 subregions, which are listed with the 4-digit HUCs 0601 through 0604.
This region includes the drainage of the Tennessee River Basin. Includes parts of Alabama, Georgia, Kentucky, Mississippi, North Carolina, Tennessee, and Virginia.
List of water resource subregions
See also
List of rivers in the United States
Water Resource Region
References
Lists of drainage basins
Drainage basins
Watersheds of the United States
Regions of the United States
Water resource regions | Tennessee water resource region | [
"Environmental_science"
] | 195 | [
"Hydrology",
"Drainage basins"
] |
58,464,097 | https://en.wikipedia.org/wiki/PAM%20library | PAM (Parallel Augmented Maps) is an open-source parallel C++ library implementing interfaces for sequences, ordered sets, ordered maps, and augmented maps. The library is available on GitHub. It is built on balanced binary trees and uses join-based algorithms. PAM supports four balancing schemes: AVL trees, red-black trees, treaps and weight-balanced trees.
PAM is a parallel library and is also safe for concurrency. Its parallelism can be supported by Cilk, OpenMP or the scheduler in PBBS. Theoretically, all algorithms in PAM are work-efficient and have polylogarithmic depth. PAM uses a persistent underlying tree structure, which allows multi-versioning. PAM also supports efficient garbage collection (GC).
Interface
Sequences
To define a sequence, users need to specify the key type of the sequence.
PAM supports functions on sequences including construction, find an entry with a certain rank, first, last, next, previous, size, empty, filter, map-reduce, concatenating, etc.
Ordered sets
To define an ordered set, users need to specify the key type and the comparison function defining a total ordering on the key type.
On top of the sequence interface, PAM also supports functions for ordered sets including insertion, deletion, union, intersection, difference, etc.
Ordered maps
To define an ordered map, users need to specify the key type, the comparison function on the key type, and the value type.
On top of the ordered set interface, PAM also supports functions for ordered maps, such as insertion with combining values.
Augmented maps
To define an augmented map, users need to specify the key type, the comparison function on the key type, the value type, the augmented value type, the base function, the combine function and the identity of the combine function.
On top of the ordered map interface, PAM also supports functions for augmented maps, such as aug_range.
In addition to the tree structures, PAM also implements the prefix structure for augmented maps.
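PAM itself is a C++ library, and the snippet below is not its API; it is a hedged Python sketch of the augmented-map abstraction just described, where the key type, value type, base function, combine function, and identity are illustrative choices (here: integer keys and values, base(k, v) = v, combine = addition, identity 0, so `aug_range` returns the sum of values over a key range):

```python
# Sketch of the augmented-map abstraction: an ordered map parameterized by
# a base function (entry -> augmented value), a combine function, and the
# combine function's identity. A real implementation (like PAM) caches the
# combined augmented value in tree nodes; this sketch recomputes it.
import bisect

class AugMap:
    def __init__(self, base=lambda k, v: v, combine=lambda a, b: a + b, identity=0):
        self.keys, self.vals = [], []
        self.base, self.combine, self.identity = base, combine, identity

    def insert(self, k, v):
        i = bisect.bisect_left(self.keys, k)
        if i < len(self.keys) and self.keys[i] == k:
            self.vals[i] = v                      # overwrite existing key
        else:
            self.keys.insert(i, k); self.vals.insert(i, v)

    def aug_range(self, lo, hi):
        """Combine base(k, v) over all entries with lo <= k <= hi."""
        acc = self.identity
        for i in range(bisect.bisect_left(self.keys, lo),
                       bisect.bisect_right(self.keys, hi)):
            acc = self.combine(acc, self.base(self.keys[i], self.vals[i]))
        return acc

m = AugMap()
for k, v in [(1, 10), (3, 30), (5, 50), (7, 70)]:
    m.insert(k, v)
print(m.aug_range(2, 6))  # 30 + 50 = 80
```

Swapping the base/combine pair changes what the map maintains: with combine = max over interval right endpoints, the same abstraction yields the interval tree used for the stabbing queries mentioned below.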
Implementation for Example Applications
The library also provides example implementations for a number of applications, including 1D stabbing queries (using interval trees), 2D range queries (using a range tree and a sweepline algorithm), 2D segment queries (using a segment tree and a sweepline algorithm), 2D rectangle queries (using a tree structure and a sweepline algorithm), inverted index searching, etc.
Used in applications
The library has been tested in various applications, including database benchmarks, 2D segment tree, 2D interval tree, inverted index and multiversion concurrency control.
References
External links
PAM, the parallel augmented map library.
C++ libraries
Computer libraries | PAM library | [
"Technology"
] | 538 | [
"IT infrastructure",
"Computer libraries"
] |
78,112,747 | https://en.wikipedia.org/wiki/Ilaria%20Mazzoleni | Ilaria Mazzoleni is an architect and founder of IM Studio Milano/Los Angeles. She is known for her work on sustainable architecture at all scales of design, and on biomimicry or design innovation inspired by nature. She has built work in Italy, California, and Ghana.
Her 2013 book "Architecture Follows Nature-Biomimetic Principles for Innovative Design" covers topics such as biomimicry in architecture and site-specific architecture.
She founded the Nature, Art & Habitat Residency (NAHR), a nonprofit association based in Bergamo, Italy, that offers a one-month residency in the rural Taleggio Valley.
References
External links
Mazzoleni interview from VoyageLA
IM Studio Milano/Los Angeles Website
Living people
Italian emigrants to the United States
Italian architects
Italian women architects
Architecture
Year of birth missing (living people) | Ilaria Mazzoleni | [
"Engineering"
] | 171 | [
"Construction",
"Architecture"
] |
78,113,322 | https://en.wikipedia.org/wiki/Ewa%20Ligocka | Ewa Ligocka (13 October 1947 – 28 October 2022) was a Polish mathematician specializing in complex analysis, and a political activist.
Early life and education
Ligocka was born in Katowice on 13 October 1947, the daughter of Polish photography critic and historian Alfred Ligocki. As a high school student under the tutelage of , she competed for Poland in the International Mathematical Olympiad in 1965.
She earned a master's degree at the University of Warsaw in 1970, and completed a Ph.D. there in 1973 under the supervision of . During this period, her research concerned the theory of analytic functions on topological vector spaces. The story goes that, in 1972, she plucked and cooked the goose given to Per Enflo as the prize for solving Mazur's goose problem.
Career and later life
After completing her doctorate, Ligocka continued as a researcher at the University of Warsaw. As an assistant professor in 1976, she signed an open letter of protest regarding the June 1976 protests in Radom and Ursus. Despite the efforts of other mathematicians to protect her, this protest led to her transfer to a branch campus of the university in Białystok and then, in 1977, her dismissal from the university.
Meanwhile, she had begun working with Maciej Skwarczyński on the Bergman kernel, and by 1978 she began her research with Massachusetts Institute of Technology student Steven R. Bell on Fefferman's theorem on the smooth extension of biholomorphisms to the boundaries of their domains. This work, published in Inventiones Mathematicae in 1980, already created a stir in Polish mathematics in the late 1970s, and in 1979 she was hired by Czesław Olech as a researcher at the Institute of Mathematics of the Polish Academy of Sciences, without any political restrictions.
She completed a habilitation in 1986, and in 1992 returned to the University of Warsaw as an associate professor. She was given the degree of professor in 1994. She retired in 2008, and died on 28 October 2022.
Recognition
Ligocka was the 1986 recipient of the Stanisław Zaremba Grand Prize of the Polish Mathematical Society. She and Steven R. Bell received the 1991 Stefan Bergman Prize of the American Mathematical Society, given for their work on Fefferman's theorem.
References
1947 births
2022 deaths
People from Katowice
Polish mathematicians
Polish women mathematicians
Mathematical analysts
University of Warsaw alumni
Academic staff of the University of Warsaw
Academic staff of the Polish Academy of Sciences
International Mathematical Olympiad participants | Ewa Ligocka | [
"Mathematics"
] | 516 | [
"Mathematical analysis",
"Mathematical analysts"
] |
78,113,392 | https://en.wikipedia.org/wiki/A%20Logical%20Calculus%20of%20the%20Ideas%20Immanent%20in%20Nervous%20Activity | "A Logical Calculus of the Ideas Immanent in Nervous Activity" is a 1943 article written by Warren McCulloch and Walter Pitts. The paper, published in the journal The Bulletin of Mathematical Biophysics, proposed a mathematical model of the nervous system as a network of simple logical elements, later known as artificial neurons, or McCulloch-Pitts neurons. These neurons receive inputs, perform a weighted sum, and fire an output signal based on a threshold function. By connecting these units in various configurations, McCulloch and Pitts demonstrated that their model could perform all logical functions.
It is a seminal work in computational neuroscience, computer science, and artificial intelligence. It was a foundational result in automata theory. John von Neumann cited it as a significant result.
Mathematics
The artificial neuron used in the original paper is slightly different from the modern version. They considered neural networks that operate in discrete time steps $t = 1, 2, 3, \dots$.
The neural network contains a number of neurons. Let the state of neuron $i$ at time $t$ be $N_i(t)$. The state of a neuron can either be 0 or 1, standing for "not firing" and "firing". Each neuron also has a firing threshold $\theta_i$, such that it fires if the total input reaches the threshold.
Each neuron can connect to any other neuron (including itself) with positive synapses (excitatory) or negative synapses (inhibitory). That is, each neuron can connect to another neuron with a weight taking an integer value. A peripheral afferent is a neuron with no incoming synapses.
We can regard each neural network as a directed graph, with the nodes being the neurons and the directed edges being the synapses. A neural network has a circle or a circuit if there exists a directed cycle in the graph.
Let $w_{ij}$ be the connection weight from neuron $j$ to neuron $i$; then the next state of neuron $i$ is
$$N_i(t+1) = H\Big(\sum_j w_{ij}\, N_j(t) - \theta_i\Big),$$
where $H$ is the Heaviside step function (outputting 1 if the input is greater than or equal to 0, and 0 otherwise).
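As an illustrative aside (not from the original paper), the discrete-time update rule can be sketched in Python; the weight matrix, thresholds, and the two-gate example are hypothetical choices showing single McCulloch-Pitts neurons computing AND and OR:

```python
# Discrete-time McCulloch-Pitts update: neuron i fires at t+1 iff the
# weighted sum of states at time t meets its threshold (Heaviside at 0).
def step(weights, thresholds, state):
    """weights[i][j] = integer synapse weight from neuron j to neuron i."""
    return [1 if sum(w * s for w, s in zip(row, state)) - th >= 0 else 0
            for row, th in zip(weights, thresholds)]

# Two peripheral afferents (neurons 0 and 1) feed two output neurons:
# neuron 2 = AND (threshold 2), neuron 3 = OR (threshold 1).
W = [[0, 0, 0, 0],   # afferent 0: no incoming synapses
     [0, 0, 0, 0],   # afferent 1: no incoming synapses
     [1, 1, 0, 0],   # AND neuron: fires only if both afferents fired
     [1, 1, 0, 0]]   # OR neuron: fires if at least one afferent fired
theta = [1, 1, 2, 1]

for a in (0, 1):
    for b in (0, 1):
        nxt = step(W, theta, [a, b, 0, 0])
        print(a, b, "AND:", nxt[2], "OR:", nxt[3])
```

An inhibitory synapse is simply a negative entry in the weight matrix, so the same `step` function covers the relative-inhibition scheme defined above.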
Symbolic logic
The paper used, as a logical language for describing neural networks, "Language II" from The Logical Syntax of Language by Rudolf Carnap with some notations taken from Principia Mathematica by Alfred North Whitehead and Bertrand Russell. Language II covers substantial parts of classical mathematics, including real analysis and portions of set theory.
To describe a neural network with peripheral afferents $N_1, \dots, N_p$ and non-peripheral neurons, they considered logical predicates of the form $\Phi(N_1, \dots, N_p)(z_1)$, where $\Phi$ is a first-order logic predicate function (a function that outputs a boolean), $N_1, \dots, N_p$ are predicates that take $z_1$ as an argument, and $z_1$ is the only free variable in the predicate. Intuitively speaking, $N_1, \dots, N_p$ specify the binary input patterns going into the neural network over all time, and $\Phi$ is a function that takes some binary input patterns and constructs an output binary pattern.
A logical sentence is realized by a neural network iff there exists a time-delay $s$, a neuron $i$ in the network, and an initial state for the non-peripheral neurons, such that for any time $t$, the truth-value of the logical sentence at time $t$ is equal to the state of neuron $i$ at time $t + s$. That is, $N_i(t + s) = \Phi(N_1, \dots, N_p)(t)$.
Equivalence
In the paper, they considered some alternative definitions of artificial neural networks, and have shown them to be equivalent, that is, neural networks under one definition realizes precisely the same logical sentences as neural networks under another definition.
They considered three forms of inhibition: relative inhibition, absolute inhibition, and extinction. The definition above is relative inhibition. By "absolute inhibition" they meant that if any negative synapse fires, then the neuron will not fire. By "extinction" they meant that if any inhibitory synapse fires on a neuron $j$ at time $t$, then the threshold of $j$ is raised by an amount $b_{t'-t}$ at each subsequent time $t'$, until the next time an inhibitory synapse fires on $j$. It is required that $b_\tau = 0$ for all large $\tau$.
Theorem 4 and 5 state that these are equivalent.
They considered three forms of excitation: spatial summation, temporal summation, and facilitation. The definition above is spatial summation (which they pictured as having multiple synapses placed close together, so that the effect of their firing sums up). By "temporal summation" they meant that the total incoming signal is summed over a window of recent steps, $\sum_{\tau=0}^{T-1} \sum_j w_{ij}\, N_j(t-\tau)$, for some $T \geq 1$. By "facilitation" they meant the same as extinction, except that the threshold change is negative, temporarily making the neuron easier to fire. Theorem 6 states that these are equivalent.
They considered neural networks that do not change, and those that change by Hebbian learning. That is, they assume that at time $t = 0$, some excitatory synaptic connections are not yet active (latent). If at any time $t$, two neurons $i$ and $j$ both fire, then any latent excitatory synapse between $i$ and $j$ becomes active. Theorem 7 states that these are equivalent.
Logical expressivity
They considered "temporal propositional expressions" (TPE), which are propositional formulas with one free variable . For example, is such an expression. Theorem 1 and 2 together showed that neural nets without circles are equivalent to TPE.
For neural nets with loops, they noted that "realizable [predicates] may involve reference to past events of an indefinite degree of remoteness". These then encode sentences like "there was some x such that x was a ψ", i.e. $(\exists x)\,\psi(x)$. Theorems 8 to 10 showed that neural nets with loops can encode all of first-order logic with equality and, conversely, any looped neural network is equivalent to a sentence in first-order logic with equality, thus showing that they are equivalent in logical expressiveness.
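A minimal sketch (illustrative, not from the paper) of how a looped net can refer to past events of indefinite remoteness: a single neuron with a self-excitatory synapse of weight 1 and threshold 1 latches on once its afferent fires, realizing "there was some past time at which the input fired":

```python
# One neuron with a self-loop acts as memory: once the total input
# (afferent + self-excitation) reaches the threshold of 1, the neuron
# keeps itself firing at every later step.
def run_latch(inputs):
    state = 0          # latch neuron, initially not firing
    history = []
    for x in inputs:
        state = 1 if (x + state) >= 1 else 0
        history.append(state)
    return history

print(run_latch([0, 0, 1, 0, 0]))  # [0, 0, 1, 1, 1]
```

The latch stays on forever after the input's first firing, which is exactly the unbounded look-back that loop-free nets (equivalent to TPE) cannot express.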
As a remark, they noted that a neural network, if furnished with a tape, scanners, and write-heads, is equivalent to a Turing machine, and conversely, every Turing machine is equivalent to some such neural network. Thus, these neural networks are equivalent to Turing computability, Church's lambda-definability, and Kleene's primitive recursiveness.
Context
Previous work
The paper built upon several previous strands of work.
In the symbolic logic side, it built on the previous work by Carnap, Whitehead, and Russell. This was contributed by Walter Pitts, who had a strong proficiency with symbolic logic. Pitts provided mathematical and logical rigor to McCulloch’s vague ideas on psychons (atoms of psychological events) and circular causality.
In the neuroscience side, it built on previous work by the mathematical biology research group centered around Nicolas Rashevsky, of which McCulloch was a member. The paper was published in the Bulletin of Mathematical Biophysics, which was founded by Rashevsky in 1939. During the late 1930s, Rashevsky's research group was producing papers that had difficulty publishing in other journals at the time, so Rashevsky decided to found a new journal exclusively devoted to mathematical biophysics.
Also in Rashevsky's group was Alston Scott Householder, who in 1941 published an abstract model of the steady-state activity of biological neural networks. The model, in modern language, is an artificial neural network with a ReLU activation function. In a series of papers, Householder calculated the stable states of very simple networks: a chain, a circle, and a bouquet. Walter Pitts' first two papers formulated a mathematical theory of learning and conditioning. The next three were mathematical developments of Householder's model.
In 1938, at age 15, Pitts ran away from home in Detroit and arrived at the University of Chicago. Later, he walked into Rudolf Carnap's office with Carnap's book filled with corrections and suggested improvements. He started studying under Carnap and attending classes during 1938--1943. He wrote several early papers on neuronal network modelling and regularly attended Rashevsky's seminars in theoretical biology. The seminar attendants included Gerhard von Bonin and Householder. In 1940, von Bonin introduced Jerome Lettvin to McCulloch. By 1942, both Lettvin and Pitts had moved into McCulloch's home.
McCulloch had been interested in circular causality from studies with causalgia after amputation, epileptic activity of the surgically isolated brain, and Lorente de Nó's research showing that recurrent neural networks are needed to explain vestibular nystagmus. He had difficulty with treating circular causality until Pitts demonstrated how it can be treated by the appropriate mathematical tools of modular arithmetic and symbolic logic.
Both authors' affiliation in the article was given as "University of Illinois, College of Medicine, Department of Psychiatry at the Illinois Neuropsychiatric Institute, University of Chicago, Chicago, U.S.A."
Subsequent work
The paper was a foundational result in automata theory; John von Neumann cited it as a significant result, and it led to further work on neural networks and their link to finite automata. Kleene introduced the term "regular" for "regular language" in a 1951 technical report, where he proved that regular languages are all that could be generated by neural networks, among other results. The term "regular" was meant to be suggestive of "regularly occurring events" that the neural net automaton must process and respond to.
Marvin Minsky was influenced by McCulloch, built an early example of neural network SNARC (1951), and did a PhD thesis on neural networks (1954).
McCulloch was the chair of the ten Macy conferences (1946--1953) on "Circular Causal and Feedback Mechanisms in Biological and Social Systems". These were a key event in the beginning of cybernetics and of what later became known as cognitive science. Pitts also attended the conferences.
In the 1943 paper, they described how memories can be formed by a neural network with loops in it, or with alterable synapses, operating over time and implementing logical universals -- "there exists" and "for all". This was generalized for spatial objects, such as geometric figures, in their 1947 paper How we know universals. Norbert Wiener found this significant evidence for a general method by which animals recognize objects: scanning a scene through multiple transformations and finding a canonical representation. He hypothesized that this "scanning" activity is clocked by the alpha wave, which he mistakenly thought was tightly regulated at 10 Hz (instead of the 8--13 Hz that modern research shows).
McCulloch worked with Manuel Blum in studying how a neural network can be "logically stable", that is, can implement a boolean function even if the activation thresholds of individual neurons are varied. They were inspired by the problem of how the brain can perform the same functions, such as breathing, under influence of caffeine or alcohol, which shifts the activation threshold over the entire brain.
See also
Artificial neural network
Perceptron
Connectionism
Principia Mathematica
History of artificial neural networks
References
Machine learning
Artificial neural networks
History of artificial intelligence
Computer science papers | A Logical Calculus of the Ideas Immanent in Nervous Activity | [
"Engineering"
] | 2,219 | [
"Artificial intelligence engineering",
"Machine learning"
] |
78,114,240 | https://en.wikipedia.org/wiki/14%20October%202024%20Al-Aqsa%20Hospital%20attack | On 14 October 2024, the Israeli Air Force struck tents within the grounds of the Shuhada al-Aqsa Hospital in Deir el-Balah, Gaza Strip. As of 14 October 2024, at least 5 people were confirmed killed in the attack and at least 70 were injured after a major fire broke out in nearby tents. The death toll was expected to increase due to the large number of victims with severe burns. 25 people were transferred to Nasser Hospital in southern Gaza. It was the seventh attack on the hospital since March 2024. Following the spread of videos showing people burning alive in nearby tents, the White House expressed its concerns to Israel.
Around one million displaced people are estimated to be sheltering in Deir el-Balah, which is supposedly considered to be part of Israel's "humanitarian zone" in the Gaza Strip.
Airstrike
At approximately 1 AM on 14 October 2024, the Israeli Air Force launched an airstrike on a tent camp housing displaced people on the grounds of the Shuhada al-Aqsa Hospital in Deir el-Balah, Gaza Strip. The strike hit while emergency services were receiving injured people from the Israeli bombardment of the al-Mufti school in Nuseirat hours before, which killed 22 and injured 80 people. According to a Doctors Without Borders coordinator on site, the fire destroyed structures sheltering 37 families. The strike caused families' cooking gas cylinders to explode, further fueling the fire. Four munitions experts reviewing videos of the blaze added that some of the secondary explosions were probably caused by small-arms ammunition but cautioned it was difficult to determine the exact balance without access to the site. Footage showed tents on fire while people tried to extinguish the flames.
Israel claimed to have targeted a Hamas command center embedded in the car park, without providing evidence. Doctors without Borders (MSF), which has staff working in Al-Aqsa Hospital, reported that "it had no knowledge" of a Hamas command center and that "the hospital functions as a hospital".
Victims
The head of the hospital's emergency department said many of the injured were women and children. Hospital staff said while the hospital itself was not damaged in the strike, they had to treat patients on the floor for lack of beds. Most of the injured had second and third degree burns; some also had shrapnel wounds requiring critical care. Most of the survivors would eventually die from their "massive and deep" burns, a medical worker said.
Widely shared videos of the blaze, verified by NBC, showed at least one person lying on a bed connected to an IV drip, burning alive, with onlookers unable to reach and save him. The victim, identified as Sha'ban al-Dalou, was a 19-year-old software engineering student at Al-Azhar University who was injured at the IDF's bombing of the Shuhada al-Aqsa mosque a week prior. Al-Dalou's mother was also killed in the fire. His father suffered severe burns while pulling two of the family's children out of the flames.
Sha'ban's 11-year-old brother, Abdul Rahman al-Dalou, also died from his burns on 17 October. Ahmed Al-Dalou, Sha'ban's and Abdul's father stated, "I cannot forget the smell of their burning bodies. It is stuck in my nose and mind. Every time I close my eyes, I see my wife and son burning."
Responses
Domestic
Israel Defense Forces: Israeli military spokesperson Col. Avichay Adraee said the air force had bombed a "command and control center" in the hospital, and Lt. Col. Nadav Shoshani described it as a "precise strike on terrorists who were operating inside a command and control center in the area of a parking lot adjacent", both without providing evidence.
International
United States:
A U.S. National Security Council spokesperson told The New York Times and The Times of Israel the White House had expressed its concern to Israel. "The images and video of what appear to be displaced civilians burning alive following an Israeli airstrike are deeply disturbing, and we have made our concerns clear to the Israeli government," the spokesperson said. "Israel has a responsibility to do more to avoid civilian casualties – and what happened here is horrifying, even if Hamas was operating near the hospital in an attempt to use civilians as human shields."
The U.S. Ambassador to the United Nations, Linda Thomas-Greenfield, said at the October 16, 2024 UN Security Council meeting: "Colleagues, this weekend, like so many of you – like so many people around the world – I watched in horror as images from Central Gaza poured across my screen. Images of what appeared to be displaced civilians burning alive following an Israeli air strike. There are no words, simply no words, to describe what we saw. Israel has a responsibility to do everything possible to avoid civilian casualties, even if Hamas was operating near the hospital in an attempt to use civilians as human shields. We have made this clear to Israel."
Congresswoman Cori Bush (D-MO) and Representative Alexandria Ocasio-Cortez (D-NY) both called for an arms embargo on Twitter. Responding to the Times of Gaza's coverage on the fire, Bush tweeted, "There are no words powerful enough to capture the agony of human beings being massacred & burned alive. The U.S. is funding & arming the Israeli military’s extermination of the Palestinian people. It’s unconscionable. End this genocide. There must be an #ArmsEmbargoNow". Ocasio-Cortez tweeted, "The horrors unfolding in northern Gaza are the result of a completely unrestrained Netanyahu gov, fully armed by the Biden admin while food aid is blocked and patients are bombed in hospitals. This is a genocide of Palestinians. The US must stop enabling it. Arms embargo now."
United Kingdom:
The UK Ambassador to the United Nations, Barbara Woodward, said at the October 16, 2024 UN Security Council meeting: "[T]here are no safe places in Gaza. Just this week we saw horrifying images following the Israeli strike on Al-Aqsa hospital, inside the IDF designated humanitarian zone."
United Nations: UNICEF condemned the attack, posting: "Today, our screens were once again filled with horrifying reports of children killed, burned, and families emerging from bombed tents in Gaza. These should shock the world to its core. Attacks on shelters in Deir al-Balah and at al-Aqsa hospital, which reportedly killed 15 children, prove again that there is no safe place in Gaza. This shameful violence against children must end now."
Other
Médecins Sans Frontières: The organization, which supports the hospital, called the attack "totally unacceptable," saying: "Repeated attacks on medical facilities in Gaza must end. Health structures and medical staff must be protected at all times, and warring parties must respect hospital grounds. People in Gaza are trapped under relentless bombings. A ceasefire is needed now to stop this bloodshed."
Following the spread on social media of a video of Sha'ban al-Dalou burning to death during the attack, several pro-Israel accounts spread the false Pallywood conspiracy theory that the video had been staged. More broadly, however, images of al-Dalou's death sparked outrage and added to growing concerns about Israel's conduct in Gaza. An independent journalist who filmed the bombing stated, "I saw people burning in front of me. By god, no one could do anything. The man, the woman and the little girl burning in front of me".
See also
Attacks on health facilities during the Israel–Hamas war
Gaza genocide
References
2024 in the Gaza Strip
2024 building bombings
2024 fires in Asia
2024 massacres of the Israel–Hamas war
2024 murders in the State of Palestine
21st-century mass murder in the Gaza Strip
October 2024 crimes in Asia
Attacks on hospitals during the Israel–Hamas war
Building bombings in the Gaza Strip
Deir al-Balah
Deaths from fire
Filmed killings in Asia
Fires in the State of Palestine
Gas explosions
Wartime hospital bombings in Asia
Hospital fires in Asia
Israeli massacres of Palestinians
Israeli war crimes in the Israel–Hamas war
Massacres in the Gaza Strip | 14 October 2024 Al-Aqsa Hospital attack | [
"Chemistry"
] | 1,726 | [
"Natural gas safety",
"Gas explosions"
] |
78,115,984 | https://en.wikipedia.org/wiki/Facial%20age%20estimation | Facial age estimation is the use of artificial intelligence to estimate the age of a person based on their facial features. Computer vision techniques are used to analyse the facial features in the images of millions of people whose age is known and then deep learning is used to create an algorithm that tries to predict the age of an unknown person. The key use of the technology is to prevent access to age-restricted goods and services. Examples include restricting children from accessing Internet pornography, checking that they meet a mandatory minimum age when registering for an account on social media, or preventing adults from accessing websites, online chat or games designed only for use by children.
The technology is distinct from facial recognition systems as the software does not attempt to uniquely identify the individual.
Researchers have applied neural networks for age estimation since at least 2010.
Evaluation
An ongoing study by the National Institute of Standards and Technology entitled 'Face Analysis Technology Evaluation' seeks to establish the technical performance of prototype age estimation algorithms submitted by academic teams and software vendors including Brno University of Technology, Czech Technical University in Prague, Dermalog, IDEMIA, Incode Technologies Inc, Jumio, Nominder, Rank One Computing, Unissey and Yoti.
Commercial use
Commercial users of facial age estimation include Instagram and OnlyFans. In January 2025, John Lewis & Partners announced that it had started using the technology to check the age of people shopping for knives on its website, to comply with UK legislation intended to limit knife crime.
In the UK, several supermarket chains have taken part in Home Office trials of the technology to automate the checking of a customer's age when buying age-restricted goods such as alcohol. UK legislation introduced in January 2025 mandates robust forms of age verification, by July 2025, for websites hosting adult content viewable in the UK. Allowable methods include facial age estimation.
See also
Age verification system
Challenge 21
Liveness test
Online Safety Act 2023
Proposed UK Internet age verification system
References
Minimum ages
Facial recognition
Automatic identification and data capture
Video surveillance
Computer vision | Facial age estimation | [
"Technology",
"Engineering"
] | 406 | [
"Packaging machinery",
"Data",
"Automatic identification and data capture",
"Artificial intelligence engineering",
"Computer vision"
] |
78,116,473 | https://en.wikipedia.org/wiki/Kwagalana%20Group | Kwagalana Group is a members-only group of the richest businessmen and women in Uganda. Together, the group, led by Godfrey Kirumira, was estimated to be worth over UGX 1,000 billion (USD 272 million) in 2009. The group was started in 2002 by a group of rich men, but the membership has since grown to include several businessmen such as Sudhir Ruparelia of Ruparelia Group, Joseph Magandaazi Yiga, and Hamis Kiggundu, among several other rich Ugandans.
Philanthropy
The Kwagalana Group is known for the philanthropic work of its members. The members of the group have been known to contribute to communities and the country's development through business and charity. Sudhir Ruparelia in particular has a foundation that has benefited several children and women.
During the COVID-19 pandemic, the Kwagalana Group donated a pickup truck to the health ministry to help transport materials to different health facilities. This followed a call for donations from Yoweri Museveni. Members also made individual contributions to the same cause.
Business
Members of the Kwagalana Group own several businesses in Uganda, including hotels, fuel companies, schools and real estate companies, among others. These include Kirumira's Bargary Company Limited, which deals in fuel, and Sudhir's hotels, apartments and schools. Other members of the Kwagalana Group include Joseph Magandaazi Yiga, the proprietor of Steel and Tube Industries; Dr Sarah Nkonge, who owns several arcades in Kampala and Jinja and was the first woman to be appointed Kampala City Mayor; Mr Ben Kavuya, the proprietor of Legacy Group; and Hamis Kiggundu, the proprietor of Ham Enterprises.
References
Business organisations based in Uganda
Philanthropy | Kwagalana Group | [
"Biology"
] | 358 | [
"Philanthropy",
"Behavior",
"Altruism"
] |
78,116,585 | https://en.wikipedia.org/wiki/IRAS%2001003-2238 | IRAS 01003-2238, also known as IRAS F01004-2237 or simply F01004-2237, is a galaxy located in the constellation of Cetus. It is located 1.65 billion light years away from Earth and is a Seyfert galaxy and an ultraluminous infrared galaxy. IRAS 01003-2238 is also classified as a Wolf-Rayet galaxy, making it one of the most distant known Wolf-Rayet galaxies.
Characteristics
IRAS 01003-2238 is the brightest galaxy of a small group. It has two companions located 14.5 arcsec east and 18.5 arcsec southeast respectively. It has an infrared luminosity of 10^12.2 Lʘ, and a far-infrared luminosity of 1.9 × 10^12 Lʘ. The black hole mass in IRAS 01003-2238 is estimated to be 2.5 × 10^7 Mʘ.
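For readers who want these quantities in SI units, a minimal Python sketch converting the quoted solar-unit values; the solar luminosity and mass constants are standard nominal values, not taken from the article:

```python
import math

L_SUN_W = 3.828e26   # nominal solar luminosity in watts (IAU 2015 value)
M_SUN_KG = 1.989e30  # solar mass in kilograms

# Far-infrared luminosity of 1.9 x 10^12 L_sun in watts
l_fir_w = 1.9e12 * L_SUN_W           # ~7.3e38 W

# Black hole mass of 2.5 x 10^7 M_sun in kilograms
m_bh_kg = 2.5e7 * M_SUN_KG           # ~5.0e37 kg

# log10 of the far-infrared luminosity in solar units,
# comparable in scale to the 10^12.2 L_sun infrared luminosity
log_l_fir = math.log10(1.9e12)       # ~12.28

print(f"{l_fir_w:.2e} W, {m_bh_kg:.2e} kg, log L = {log_l_fir:.2f}")
```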
IRAS 01003-2238 has a star formation rate of > 100 Mʘ yr^−1. There are also numerous massive young Wolf-Rayet stars in its nucleus. In addition, the galaxy displays a broad emission band with a rest wavelength of λ ≈ 4660 Å. This is interpreted as arising from the combined effect of around 10^5 Wolf-Rayet stars of a WN subtype.
Additionally, IRAS 01003-2238 is an old galaxy merger showing modest distortions but an absence of tidal tails when observed at optical wavelengths. Although no traces of radio excess are seen, it is categorized as a Seyfert 2 galaxy according to optical observations, and it shows signs of a hidden active galactic nucleus. The radio emission in IRAS 01003-2238 is similar to that of radio galaxies, with a high intrinsic brightness temperature of T_b ~ 10^8.1 K.
Optical flares have been observed in IRAS 01003-2238, with a luminous one recorded in June 2010. Since both helium emission lines were detected in the galaxy following the optical flare, the most likely explanation is a candidate tidal disruption event, in which a star wandering close to the black hole is ripped apart by tidal forces. IRAS 01003-2238 has since gone through another flaring period, in September 2021; this time the flare was ultraviolet-bright yet weak in X-rays.
References
01003-2238
Cetus
Starburst galaxies
Seyfert galaxies
Luminous infrared galaxies
3095492
Galaxy mergers | IRAS 01003-2238 | [
"Astronomy"
] | 506 | [
"Cetus",
"Constellations"
] |
78,117,211 | https://en.wikipedia.org/wiki/Hc3a | Hc3a is a peptide from the venom of the Australian funnel-web spider, which slows down desensitisation of the acid-sensing ion channel ASIC1a.
Source
Hc3a is a peptide derived from the venom gland of the Australian Funnel web spider Hadronyche cerberea.
Chemistry
Hc3a is a single inhibitor cystine knot (ICK) peptide consisting of 38 amino acids. It resembles other single and double ICKs such as PcTx1 and Hm3a. It strongly resembles Hc1a, a double ICK also found in H. cerberea venom, and Hi1a, as the N-terminal 23 amino acids are identical and the C-terminal 15 amino acids differ by only one or two amino acids, respectively.
The amino acid sequence of Hc3a is: NECIRKWLSCVDRKNDCCEGLECWKRRGNKSSVCVPIT
The structure of Hc3a is a hydrophobic patch surrounded by charged residues, showing the same fold as PcTx1. It contains three disulphide bridges that give rise to a tightly knotted structure. Despite these similarities, Hc3a seems to have a different mechanism of action than other known peptides that act on the same channel.
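The residue count and disulphide-bridge count quoted above can be checked directly against the sequence; a minimal Python sketch (variable names are illustrative, and the sequence is taken as quoted in the article):

```python
from collections import Counter

# Hc3a amino acid sequence as quoted in the article
HC3A = "NECIRKWLSCVDRKNDCCEGLECWKRRGNKSSVCVPIT"

residues = len(HC3A)             # total residues: 38 amino acids
cysteines = Counter(HC3A)["C"]   # cysteine count: 6
bridges = cysteines // 2         # each disulphide bridge pairs two cysteines: 3

print(residues, cysteines, bridges)  # → 38 6 3
```

The six cysteines pairing into three bridges is consistent with the knotted ICK fold described above.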
Target
Hc3a, in a similar way as PcTx1, binds to the acidic pocket of the proton-activated sodium channel ASIC1a.
Mode of action
Hc3a has complex pharmacology, but its main effect appears to be that it can bind to ASIC1a in the open state and potentiate currents by slowing down desensitisation. Hc3a also shifts the steady-state desensitisation curve to slightly more alkaline values.
References
Spider toxins
Peptides
Ion channel toxins | Hc3a | [
"Chemistry"
] | 373 | [
"Biomolecules by chemical classification",
"Peptides",
"Molecular biology"
] |