id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
52,297,221 | https://en.wikipedia.org/wiki/Volume%20and%20displacement%20indicators%20for%20an%20architectural%20structure | The volume (W) and displacement (Δ) indicators were discovered by Philippe Samyn in 1997 to help in the search for the optimal geometry of architectural structures.
Objective
The study is limited to the search for the geometry that gives the structure of minimum volume.
The cost of a structure depends on the nature and the quantity of the materials used as well as the tools and human resources required for its production.
Although technological progress has reduced the cost of tools and the amount of human resources required, and although computerised calculation tools can now be used to dimension a structure so that the load it bears at every point stays within the admissible limits of its constituent materials, its geometry must also be optimal. Finding this optimum is far from simple, because the range of available choices is so vast.
Furthermore, the resistance of the structure is not the only criterion to take into account. In many cases, it is also important to ensure that it will not undergo excessive deformation under static loads or that it does not vibrate to inconvenient or dangerous levels when subjected to dynamic loads.
Volume and displacement indicators, W and Δ, discovered by Philippe Samyn in August 1997, are useful tools in this regard. This approach does not take into account phenomena of elastic instability. It can indeed be shown that it is always possible to design a structure so that this effect becomes negligible.
The indicators
The objective is to ascertain the optimal morphology for a two-dimensional structure with constant thickness, which:
fits in a rectangle of pre-determined dimensions, with a longitudinal dimension L and a transverse dimension H, expressed in metres (m);
is made of one (or several) material(s) with a modulus of elasticity E, expressed in Pascals (Pa), and bearing a load at all points within its allowable stress(es) σ, expressed in Pascals (Pa);
is resistant to the maximum loads to which it is subjected, in the form of a "resultant" F, expressed in Newtons (N).
Each form chosen corresponds to a volume of material V (in m3) and a maximum deformation δ (in m).
Their calculation depends on the factors L, H, E, σ and F. These calculations are long and tedious, and they cloud the objective of finding the optimal form.
It is nevertheless possible to overcome this problem by setting each factor to unity, while all other characteristics remain the same: length L is set to 1 m, H to H/L, E and σ to 1 Pa, and F to 1 N.
This "reduced" structure has a volume of material W= σV/ FL (the volume indicator) and a maximum deformation Δ = Eδ / σL (the displacement indicator). Their main characteristic is that they are numbers without physical dimensions (dimensionless) and their value, for every morphology considered, depends only on the ratio L/H, i.e. the geometric slenderness ratio of the form.
This method can easily be applied to three-dimensional structures as illustrated in the following examples.
The theory related to the indicators has been taught since 2000 at, among other institutions, the department of Civil Engineering and Architecture of the Vrije Universiteit Brussel (VUB; section "material mechanics and constructions"), leading to research and publications under the direction of Prof. Dr. Ir. Philippe Samyn (from 2000 to 2006), Prof. Dr. Ir. Willy Patrick De Wilde (from 2000 to 2011) and now Prof. Dr. Ir. Lincy Pyl.
The "reference book", since the reference thesis, reports the developments of the theory at Samyn and Partners as well as the VUB, up to 2004.
The theory is open to everyone who wants to contribute, since W and Δ can be calculated for any resistant structure as defined in the first paragraph above.
Progress in materials science, robotics and three-dimensional printing is leading to the creation of new structural forms lighter than the lightest known today.
The geometry of minimal surfaces of constant thickness in a homogeneous material is, for example, substantially modified when the thickness and/or the local allowable stress vary.
Macrostructure, structural element, microstructure and material
The macrostructures considered here may be composed of "structural elements" whose material presents a "microstructure".
Whether the aim is to limit the stress or the deformation, macrostructure, structural element and microstructure each have a weight Vρ, where ρ is the specific weight of the material, in N/m3, a function of the loads {F0} (for "force" in general) applied to them, of their size {L0} (for length or "size" in general), of their shape {Ge} (for geometry or "shape" in general), and of their constituent material {Ma} (for "material" in general).
This can also be expressed as the shape and material ({Ge}{Ma}) defining the weight (Vρ) of a structure of given size under a given load ({F0}{L0}).
In material mechanics, and for structural elements under a specific loading case, the factor {Ge} corresponds to the "form factor" of elements with a continuous section made of a solid material (without voids).
The constituent material may however present a microstructure with voids. Such a cellular structure then enhances the form factor, whatever the loading case.
The factor {Ma} characterizes a material whose efficiency can be compared to that of another for a given loading case, independently of the form factor {Ge}.
The indicators W = σV/FL and Δ = δE/σL just defined characterize macrostructures, while the same indicators written with lower-case symbols, w = σv/fl and Δ = δE/σl, refer to the structural element.
Figure 1 gives the values of W and Δ for a structural element subject to traction, compression, bending and shear. The left column relates to the limitation of stress and the right column to the limitation of deformation. It shows the direct relation of W to {Ge}{Ma}: since W = σV/FL, the volume of material is V = WFL/σ, thus the weight is Vρ = (Wρ/σ)FL, and {Ge}{Ma} = Wρ/σ for given dimensions and loading case.
Then, as W and Δ depend only on L/H, the weight per unit of force and length is Vρ/FL = Wρ/σ, which, for a given loading case, is the specific weight of a macrostructure per unit of force and length; it depends only on the geometry, through L/H, and on the material, through σ/ρ.
Wρ/σ thus includes the material factor {Ma} (ρ/σ and ρ/E for tension and for compression without buckling, ρ/E^(1/2) for compression limited by buckling, ρ/σ^(2/3) and ρ/E^(1/2) for pure bending, ρ/σ and ρ/G for pure shear) and the form factor {Ge}.
All other factors being equal, a cluster of tubes of diameter H and wall thickness e, compared to a solid bar of equal volume in a material characterized by ρ, σ, E and G, presents an apparent density ρa = 4k(1 − k)ρ with k = e/H, an apparent allowable stress σa = 4k(1 − k)σ, an apparent Young's modulus Ea = 4k(1 − k)E and an apparent shear modulus Ga = 4k(1 − k)G, all scaling with the same solid-area fraction 4k(1 − k).
Thus ρa/Ea^(1/2) = (4k(1 − k))^(1/2) ρ/E^(1/2) and ρa/σa^(2/3) = (4k(1 − k))^(1/3) ρ/σ^(2/3), both smaller than the corresponding factors of the solid bar, since 4k(1 − k) < 1.
This explains the better performances of lighter materials for structural elements subject to compression or bending.
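A quick numerical check of this scaling in a short Python sketch; the material values are hypothetical (steel-like) and only the ratios matter.

```python
import math

# Hedged sketch: apparent properties of a tube cluster vs a solid bar,
# using the solid-area fraction f = 4k(1-k) with k = e/H from the text.
rho, sigma, E = 7850.0, 235e6, 210e9  # hypothetical steel-like values
k = 0.05                              # wall thickness / diameter
f = 4 * k * (1 - k)                   # ≈ 0.19

rho_a, sigma_a, E_a = f * rho, f * sigma, f * E

# Material factors for buckling-limited compression and pure bending:
print((rho_a / math.sqrt(E_a)) / (rho / math.sqrt(E)))  # sqrt(f) ≈ 0.44
print((rho_a / sigma_a**(2/3)) / (rho / sigma**(2/3)))  # f**(1/3) ≈ 0.58
```

Both ratios are below one, which is the numerical content of the statement above about lighter (cellular or tubular) materials.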
This indicator allows the efficiency of macrostructures to be compared, taking into account both geometry and material.
It echoes the work of M.F. Ashby, "Materials Selection in Mechanical Design" (1992), who analyses {Ge} and {Ma} separately since, in his studies, {Ma} relates to a large number of the physical properties of materials.
Different and complementary, it can also be placed alongside the work carried out since 1969 by the Institut für Leichte Flächentragwerke in Stuttgart under the direction of Frei Otto and now Werner Sobek, which refers to indices named Tra and Bic. The Tra is defined as the product of the length of the trajectory, onto the supports, of the force Fr (causing the collapse of the structure) by the intensity of this force; the Bic is the ratio of the mass of the structure to the Tra.
Since ρ* is the density of the material (in kg/m3), and α is, like W, a constant depending on the type of structure and the loading case, Tra = αFrL; therefore, with the stress σ reached under Fr, the volume is V = WFrL/σ, and as Bic = ρ*V/Tra, it follows that Bic = Wρ*/(ασ).
Unlike W, which is dimensionless, Bic is expressed in kg/Nm; it therefore does not allow different morphologies to be compared independently of the material.
It is surprising to note that, despite the abundance of their work, none of these authors mention W or make any effort to study its relationship with L/H.
It appears that only V. Quintas Ripoll, W. Zalewski and St. Kus mentioned the volume indicator W, without examining it in depth.
Validity limits of W and Δ
In general, second-order effects have very little influence on W, but they can have a significant impact on Δ; W and Δ then also depend on E/σ.
The shearing force T may be crucial in the case of short continuous elements subject to bending, with the effect that W does not fall below a given value regardless of the reduction of the slenderness L/H. This limitation is, however, very theoretical, because it is always possible to remove it by transferring material from the flanges to the web of the section close to the supports.
The stress σ to which the structure can be subjected depends on the nature, the internal geometry, the production method and the implementation of the materials, as well as on several other factors, including the dimensional accuracy of the actual construction, the nature of the connections of the components and their fire resistance, but also the skill with which the geometry of the structure is designed to cope with elastic instability. Pierre Latteur, who discovered the buckling indicator, studied the influence of elastic instability on W and Δ.
In this regard, it is important to note that the existence of anchoring points of an element in traction may reduce the apparent permissible stress to the same level as the reduction necessary to take into account a moderate level of elastic instability.
The influence on W of the buckling of the compressed parts, on the one hand, and of the anchoring points at the extremities of an element in traction, on the other, is analysed on pages 30 to 58 of the "reference book".
The allowable stress σ is also often reduced by the need to limit the displacement δ of the structure since it is not possible to significantly alter E for a given material.
Considerations regarding fatigue, ductility and dynamic forces also limit the working stress.
It is not always straightforward to establish the nature and the overall maximum intensity of the forces F (including dead weight) to which the structure is subjected, which again has a direct influence on the working stress.
The connections of an element in compression or in traction are considered as hinged. Any clamping, even partial, introduces parasitic forces which add extra weight to the structure.
For certain types of the structures, the volume of the connections adds to the net volume defined by W. Its importance depends on the nature of the material and the context in which it is used; this needs to be determined on a case-by-case basis.
It follows that, initially, only W and Δ should be taken into account for the morphological design of a structure, assuming that it is over-damped (i.e. its internal damping is greater than the critical damping), which makes it impervious to dynamic stress.
The volume V of a structure is therefore directly proportional to the total intensity of the force F which is applied to it, to its length L and to the morphological factor W; it is inversely proportional to the stress σ to which it can be subjected. Furthermore, the weight of a structure is proportional to the density ρ of the material from which it is constructed. However, its maximum displacement δ remains proportional to the span L and the morphological factor Δ, as well as the ratio between its working stress σ and the modulus of elasticity E.
If the aim is to limit the weight (or the volume) and the deformation of a structure for a given load F and span L, with all other aspects remaining unchanged, then the work of the structural engineer involves minimising W and ρ/σ on one side, and Δ and σ/E on the other.
Accuracy of W and Δ
Theoretical accuracy
For the large majority of compressed elements, it is possible to limit the reduction of the working stress to 25% by taking into account elastic instability, provided that the designer focuses on an efficient geometric design from the earliest sketches. This means that the increase in their volume indicator can also be limited to 25%. The volume of elements subject to pure traction is also only very rarely limited to the product of the net distance over which a force is applied by a section strained at the permissible stress. In other words, their real volume indicator is also higher than the one resulting from the calculation of W. A bar under traction can be welded at its extremities; no extra material apart from the negligible welding material is added, but the rigidity introduces parasitic moments which absorb some of the permissible stress.
The bar can instead be articulated at its extremities and work at its permissible stress, but this requires end sockets or attachment mechanisms whose volume is far from negligible, especially if the bar is short or highly stressed. As L.H. Cox demonstrated, it is then worth considering n bars, each with a cross-section Ω/n and strained by a force F/n, with 2n sockets, instead of one bar with cross-section Ω strained by a force F with 2 sockets, since the total volume of the 2n sockets in the first case is much less than that of the 2 sockets in the second.
The anchoring of the extremities of a bar under traction can also be ensured by adherence, as is usually the case for the rebars in elements made of reinforced concrete. In this specific case, an anchoring length of at least 30 times the diameter of the bar is necessary. The bar then has a length L + 60H for a useful length L; its theoretical volume indicator W = 1 becomes W = 1 + 60H/L. Consequently, L/H must be greater than 240 (which is always theoretically possible) so that W does not increase by more than 25%. This observation also gives another reason for using n bars with a cross-section Ω/n instead of one bar with a cross-section Ω.
Finally, connections consisting of bolts, dowels, pins or nails, especially in the case of wooden components, significantly reduce the usable sections. For elements in traction, a reduction of 25% in the working stress, or an increase of 25% in the volume, is therefore also necessary in the majority of cases. Determining the volume and the displacement of a structure using the indicators W and Δ is therefore theoretically reliable, provided that:
the working stress is reduced by at least 25%;
great attention is paid to the design of compressed parts and connections. The overall proportions of an optimised structure designed without taking buckling into account are significantly altered when the compressed bars need to be shortened to take account of elastic instability. The structure becomes sensitive to the scale effect, leading to an elongation of the overall proportions and an increase in the weight of the structure. Inversely, a shortening of the overall proportions is necessary when the volume of the connections has to be considered, since the influence of this volume diminishes when the bars are lengthened. This shows the advantage of accurately designing not only the compressed parts but also the connections, in order to avoid these flaws. One of Niki de Saint-Phalle's light sculptures is therefore preferable to one of Giacometti's slender but heavy structures!
Practical accuracy
The volume of material of the structure, as determined using W, can only be obtained accurately if the theoretical values of the relevant characteristic of the sections stressed to σ can be achieved in practice.
As shown in Figure 1 above, this characteristic is:
Ω for an element under pure compression without buckling;
I for an element under pure compression with buckling (as well as for deformation under pure bending);
I/H for an element under simple bending.
It is always possible to obtain the precise value of these characteristics when the parts are made of moulded materials, such as reinforced concrete, or squared-off materials, such as wood or stone. However, this is not the case for laminated or extruded materials produced on an industrial production line, such as steel or aluminium. It is therefore important to produce these elements with the smallest possible difference in size between two successive ones, in order to avoid an unnecessary use of material. This use is consistent when the relative deviation c between two successive values kn and kn+1 is constant, thus (kn+1 − kn)/kn = c, i.e. kn+1 = (c + 1)kn, or kn+1 = (c + 1)^(n+1) k0.
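A short Python sketch of such a geometric catalogue of sizes follows; k0 and the number of steps are arbitrary, and c is taken as the step of the R10 series purely as an assumed example.

```python
# Hedged sketch: a geometric (Renard-type) series of catalogue sizes with
# constant relative deviation c, and the resulting average oversizing.
k0 = 10.0                 # hypothetical smallest catalogue value
c = 10 ** (1 / 10) - 1    # R10 step: ten sizes per decade, c ≈ 0.259

sizes = [k0 * (1 + c) ** n for n in range(11)]
print([round(s, 1) for s in sizes])  # 10.0 ... 100.0 in ten steps

# If every required value is just above a catalogue value, the selected
# size exceeds the need by at most c, i.e. by about c/2 on average.
print(f"max increase ≈ {c:.1%}, average increase ≈ {c / 2:.1%}")
```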
This is the principle of the geometric series known as the Renard series (named after Colonel Renard, who was the first to use them in calculating the diameters of cabling on aircraft), featured in the French standard NF X01-002. When all the necessary values are only very slightly greater than a series value, c represents the maximum increase and c/2 the average increase of W. Being universally used, steel profiles are a case requiring in-depth examination (see the "reference book", pages 26 to 29). Consequently, the use of industrial steel profiles automatically leads to a significant increase of W:
by half of the theoretical inaccuracy for pure compression;
by a practically identical amount for bending or compression subject to buckling.
This situation is magnified when the number of available profiles is restricted, which may explain the use of forms which are not theoretically optimal but which tend to load the available profiles to the permissible stress σ (for example, pylons for high-voltage electric lines or variable-height truss bridges). For structures subject to pure bending, this also explains the use of flat plates of variable lengths added to the flanges of I profiles to obtain the required inertia or resisting moment with the greatest accuracy. Conversely, the significant variety of available tubes enables a relative deviation c which is both smaller and more constant. Tubes also cover a much wider range in both the lower and higher characteristic values. Since their geometric performance is practically identical to that of I profiles, tubes are the most appropriate industrial solution for practically eliminating any increase in the volume indicator W. Nevertheless, practical issues of availability and corrosion may limit their use.
Some examples of W and Δ
The following figures show the values of the indicators according to the ratio L/H for a number of types of structures.
Figures 2 and 3: W and Δ for a horizontal isostatic span under a uniformly distributed vertical load, made up of:
profiles with a constant cross-section, from I-section to solid cylinder;
different types of trusses;
parabolic arches with or without hangers or small columns, with constant or variable cross-sections.
Figure 4: W for the transfer to two equidistant supports on the horizontal of a vertical point load (in this case Δ = W) or an evenly distributed load, F = 1.
Figures 5 and 6: W for a vertical mast of constant width, subject to a horizontal load evenly distributed along its height or concentrated at the top.
Figure 7: W for a membrane of revolution on a vertical axis, with constant or variable thickness, under an evenly distributed vertical load. It is surprising to note that the minimum value is reached for a conical dome of variable thickness with an opening angle of 90° (L/H = 2; W = 0.5!).
Developments
Applications discussed in the "reference book" are:
trusses,
straight continuous beams,
arches, cables and guyed structures,
masts,
gantries,
membranes of revolution.
Some examples of composite structures with minimum W
W can easily be determined in order to optimise structures made up of a number of different construction elements (see the "reference book", pages 100–106), as shown, for instance, for the wind turbine in Figure 8.
Another example is a parabolic roof coupled with large vertical glazed gables subject to wind loads, as seen at Leuven station in Belgium and shown in Figure 9 (see reference for a detailed analysis).
The optimisation of the King Cross truss for the facade of the Europa building in Brussels (see reference pages 93–101 for detailed analysis) is another example.
See also
Architectural engineering
Notes and references
Structural engineering | Volume and displacement indicators for an architectural structure | [
"Engineering"
] | 4,336 | [
"Structural engineering",
"Civil engineering",
"Construction"
] |
52,297,304 | https://en.wikipedia.org/wiki/Convergence%20group | In mathematics, a convergence group or a discrete convergence group is a group Γ acting by homeomorphisms on a compact metrizable space M in a way that generalizes the properties of the action of a Kleinian group by Möbius transformations on the ideal boundary S² of the hyperbolic 3-space H³.
The notion of a convergence group was introduced by Gehring and Martin (1987) and has since found wide applications in geometric topology, quasiconformal analysis, and geometric group theory.
Formal definition
Let Γ be a group acting by homeomorphisms on a compact metrizable space M. This action is called a convergence action or a discrete convergence action (and then Γ is called a convergence group or a discrete convergence group for this action) if for every infinite sequence of distinct elements γ1, γ2, … in Γ there exist a subsequence γn1, γn2, … and points a, b in M such that the maps γnk restricted to M − {a} converge uniformly on compact subsets to the constant map sending M − {a} to b. Here converging uniformly on compact subsets means that for every open neighborhood U of b in M and every compact K ⊆ M − {a} there exists an index k0 such that γnk(K) ⊆ U for every k ≥ k0. Note that the "poles" a and b associated with the subsequence are not required to be distinct.
Reformulation in terms of the action on distinct triples
The above definition of a convergence group admits a useful equivalent reformulation in terms of the action of Γ on the "space of distinct triples" of M.
For a set M, denote Θ(M) := M³ − Δ(M), where Δ(M) = {(a, b, c) ∈ M³ : a = b or b = c or a = c}. The set Θ(M) is called the "space of distinct triples" for M.
Then the following equivalence is known to hold:
Let Γ be a group acting by homeomorphisms on a compact metrizable space M with at least two points. Then this action is a discrete convergence action if and only if the induced action of Γ on Θ(M) is properly discontinuous.
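In LaTeX notation, this is a restatement of the two displays above, with Γ and M as in the text:

```latex
\[
  \Theta(M) = M^3 \setminus \Delta(M), \qquad
  \Delta(M) = \{(a,b,c) \in M^3 : a = b \ \text{or}\ b = c \ \text{or}\ a = c\},
\]
\[
  \Gamma \curvearrowright M \ \text{is a discrete convergence action}
  \iff
  \Gamma \curvearrowright \Theta(M) \ \text{is properly discontinuous.}
\]
```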
Examples
The action of a Kleinian group Γ on S² = ∂H³ by Möbius transformations is a convergence group action.
The action of a word-hyperbolic group G by translations on its ideal boundary ∂G is a convergence group action.
The action of a relatively hyperbolic group G by translations on its Bowditch boundary ∂G is a convergence group action.
Let X be a proper geodesic Gromov-hyperbolic metric space and let Γ be a group acting properly discontinuously by isometries on X. Then the corresponding boundary action of Γ on the Gromov boundary ∂X is a discrete convergence action (Lemma 2.11 of ).
Classification of elements in convergence groups
Let Γ be a group acting by homeomorphisms on a compact metrizable space M with at least three points, and let γ ∈ Γ. Then it is known (Lemma 3.1 in or Lemma 6.2 in ) that exactly one of the following occurs:
(1) The element γ has finite order in Γ; in this case γ is called elliptic.
(2) The element γ has infinite order in Γ and the fixed set Fix(γ) is a single point; in this case γ is called parabolic.
(3) The element γ has infinite order in Γ and the fixed set Fix(γ) consists of two distinct points; in this case γ is called loxodromic.
Moreover, for every p ≥ 1 the elements γ and γ^p have the same type. Also, in cases (2) and (3), Fix(γ) = Fix(γ^p) (where p ≥ 1) and the group ⟨γ⟩ acts properly discontinuously on M − Fix(γ). Additionally, if γ is loxodromic, then ⟨γ⟩ acts properly discontinuously and cocompactly on M − Fix(γ).
If γ is parabolic with fixed point a ∈ M, then for every x ∈ M one has γⁿx → a as n → +∞ and as n → −∞.
If γ is loxodromic, then Fix(γ) can be written as Fix(γ) = {a−, a+} so that for every x ∈ M − {a−} one has γⁿx → a+ as n → +∞, and for every x ∈ M − {a+} one has γⁿx → a− as n → −∞; these convergences are uniform on compact subsets of M − {a−, a+}.
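The loxodromic ("north–south") dynamics in LaTeX notation, with Fix(γ) = {a−, a+} as above:

```latex
\[
  \lim_{n \to +\infty} \gamma^{n} x = a_{+} \quad (x \neq a_{-}),
  \qquad
  \lim_{n \to -\infty} \gamma^{n} x = a_{-} \quad (x \neq a_{+}),
\]
% uniformly on compact subsets of M \setminus \{a_-, a_+\}.
```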
Uniform convergence groups
A discrete convergence action of a group Γ on a compact metrizable space M is called uniform (in which case Γ is called a uniform convergence group) if the action of Γ on Θ(M) is co-compact. Thus Γ is a uniform convergence group if and only if its action on Θ(M) is both properly discontinuous and co-compact.
Conical limit points
Let Γ act on a compact metrizable space M as a discrete convergence group. A point x ∈ M is called a conical limit point (sometimes also called a radial limit point or a point of approximation) if there exist an infinite sequence of distinct elements γn ∈ Γ and distinct points a, b ∈ M such that γnx → a and, for every y ∈ M − {x}, γny → b.
An important result of Tukia, also independently obtained by Bowditch, states:
A discrete convergence group action of a group Γ on a compact metrizable space M is uniform if and only if every non-isolated point of M is a conical limit point.
Word-hyperbolic groups and their boundaries
It was already observed by Gromov that the natural action by translations of a word-hyperbolic group G on its boundary ∂G is a uniform convergence action (see for a formal proof). Bowditch proved an important converse, thus obtaining a topological characterization of word-hyperbolic groups:
Theorem. Let G act as a discrete uniform convergence group on a compact metrizable space M with no isolated points. Then the group G is word-hyperbolic and there exists a G-equivariant homeomorphism M → ∂G.
Convergence actions on the circle
An isometric action of a group G on the hyperbolic plane H² is called geometric if this action is properly discontinuous and cocompact. Every geometric action of G on H² induces a uniform convergence action of G on S¹ = ∂H².
An important result of Tukia (1986), Gabai (1992), Casson–Jungreis (1994), and Freden (1995) shows that the converse also holds:
Theorem. If G is a group acting as a discrete uniform convergence group on S¹, then this action is topologically conjugate to an action induced by a geometric action of G on H² by isometries.
Note that whenever G acts geometrically on H², the group G is virtually a hyperbolic surface group, that is, it contains a finite index subgroup isomorphic to the fundamental group of a closed hyperbolic surface.
Convergence actions on the 2-sphere
One of the equivalent reformulations of Cannon's conjecture (posed by James W. Cannon, although an earlier and more general conjecture, reducing to the Cannon conjecture for compact type, was given by Gaven J. Martin and Richard K. Skora), stated in terms of word-hyperbolic groups with boundaries homeomorphic to S², says that if G is a group acting as a discrete uniform convergence group on S², then this action is topologically conjugate to an action induced by a geometric action of G on H³ by isometries. This conjecture remains open.
Applications and further generalizations
Yaman gave a characterization of relatively hyperbolic groups in terms of convergence actions, generalizing Bowditch's characterization of word-hyperbolic groups as uniform convergence groups.
One can consider more general versions of group actions with "convergence property" without the discreteness assumption.
The most general version of the notion of a Cannon–Thurston map, originally defined in the context of Kleinian and word-hyperbolic groups, can be defined and studied in the setting of convergence groups.
References
Group theory
Dynamical systems
Geometric topology
Geometric group theory | Convergence group | [
"Physics",
"Mathematics"
] | 1,396 | [
"Geometric group theory",
"Group actions",
"Geometric topology",
"Group theory",
"Fields of abstract algebra",
"Topology",
"Mechanics",
"Symmetry",
"Dynamical systems"
] |
57,060,297 | https://en.wikipedia.org/wiki/Magnetohydrodynamic%20converter |
A magnetohydrodynamic converter (MHD converter) is an electromagnetic machine with no moving parts involving magnetohydrodynamics, the study of the kinetics of electrically conductive fluids (liquid or ionized gas) in the presence of electromagnetic fields. Such converters act on the fluid using the Lorentz force to operate in two possible ways: either as an electric generator called an MHD generator, extracting energy from a fluid in motion; or as an electric motor called an MHD accelerator or magnetohydrodynamic drive, putting a fluid in motion by injecting energy. MHD converters are indeed reversible, like many electromagnetic devices.
Michael Faraday first attempted to test an MHD converter in 1832. MHD converters involving plasmas were studied intensively in the 1960s and 1970s, with substantial government funding and dedicated international conferences. One major conceptual application was the use of MHD converters on the hot exhaust gas of a coal-fired power plant, where such a converter could extract some of the energy with very high efficiency before passing the gas to a conventional steam turbine. The research almost stopped after it was recognized that the electrothermal instability would severely limit the efficiency of such converters when intense magnetic fields are used, although solutions may exist.
Figure: crossed-field magnetohydrodynamic converters (linear Faraday type with segmented electrodes). A: MHD generator. B: MHD accelerator.
MHD power generation
A magnetohydrodynamic generator is an MHD converter that transforms the kinetic energy of an electrically conductive fluid, in motion with respect to a steady magnetic field, into electricity. MHD power generation has been tested extensively in the 1960s with liquid metals and plasmas as working fluids.
Basically, a plasma hurtles down a channel whose walls are fitted with electrodes. Electromagnets create a uniform transverse magnetic field within the cavity of the channel. The Lorentz force then acts upon the trajectories of the incoming electrons and positive ions, separating the opposite charge carriers according to their sign. As negative and positive charges are spatially separated within the chamber, an electric potential difference can be retrieved across the electrodes. While work is extracted from the kinetic energy of the incoming high-velocity plasma, the fluid slows down in the process.
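To make the energy-extraction picture concrete, here is a minimal Python sketch of the usual quasi-one-dimensional Faraday-channel estimate; the conductivity, velocity, field and loading-factor values are purely hypothetical, and the power-density formula is the standard textbook expression rather than anything stated in this article.

```python
# Hedged sketch: extracted power density of an ideal segmented Faraday
# MHD generator channel, p = sigma * u^2 * B^2 * k * (1 - k),
# where k is the electrical loading factor (load voltage / open-circuit EMF).
sigma = 10.0   # plasma conductivity, S/m (hypothetical)
u = 1000.0     # flow velocity, m/s (hypothetical)
B = 5.0        # transverse magnetic field, T (hypothetical)

for k in (0.25, 0.5, 0.75):
    p = sigma * u**2 * B**2 * k * (1 - k)  # W/m^3
    print(f"k = {k:.2f}: p = {p:.3e} W/m^3")  # maximal at k = 0.5
```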
MHD propulsion
A magnetohydrodynamic accelerator is an MHD converter that imparts motion to an electrically conductive fluid initially at rest, using crossed electric currents and magnetic fields applied within the fluid. MHD propulsion has mostly been tested with models of ships and submarines in seawater. Studies have also been ongoing since the early 1960s on aerospace applications of MHD to aircraft propulsion and flow control to enable hypersonic flight: action on the boundary layer to prevent laminar flow from becoming turbulent; shock wave mitigation or cancellation for thermal control and reduction of wave drag and form drag; and inlet flow control and airflow velocity reduction with an MHD generator section ahead of a scramjet or turbojet to extend their regimes to higher Mach numbers, combined with an MHD accelerator in the exhaust nozzle fed by the MHD generator through a bypass system. Research on various designs is also conducted on electromagnetic plasma propulsion for space exploration.
In an MHD accelerator, the Lorentz force accelerates all charge carriers in the same direction whatever their sign, as well as neutral atoms and molecules of the fluid through collisions. The fluid is ejected toward the rear and as a reaction, the vehicle accelerates forward.
See also
Plasma (physics)
Lorentz force
Electrothermal instability
Wingless Electromagnetic Air Vehicle
References
Further reading
Electromagnetism
Fluid dynamics
Plasma technology and applications
Energy conversion
Propulsion | Magnetohydrodynamic converter | [
"Physics",
"Chemistry",
"Engineering"
] | 751 | [
"Electromagnetism",
"Physical phenomena",
"Plasma physics",
"Plasma technology and applications",
"Chemical engineering",
"Fundamental interactions",
"Piping",
"Fluid dynamics"
] |
57,064,086 | https://en.wikipedia.org/wiki/Climacodon%20sanguineus | Climacodon sanguineus is a rare species of tooth fungus in the family Phanerochaetaceae that is found in Africa.
Taxonomy
The fungus was originally described as Hydnum sanguineum by the Belgian mycologist Maurice Beeli in 1926. The holotype collection was made near Kalo, Democratic Republic of the Congo.
Rudolph Arnold Maas Geesteranus transferred the species to the genus Climacodon in 1971.
Phylogenetic data show that C. sanguineus forms a well-supported clade with the type species of Climacodon, C. septentrionale, which nests in the phlebioid clade.
Description
The bright red, funnel-shaped fruit bodies of this fungus are up to tall. They have sharp, cylindrical spines on the underside of the cap. C. sanguineus has a monomitic hyphal system, containing only generative hyphae. These hyphae are septate; some of the hyphae comprising the cap and the core of the spines have clamps. The cystidia, which are scattered on the surface of the spines (the spore-bearing surface), are double-walled with a discontinuous internal lumen. The spores are ellipsoid in shape, translucent, and measure 4–5 by 2–2.5 μm.
References
Fungi described in 1926
Fungi of Africa
Phanerochaetaceae
Fungus species | Climacodon sanguineus | [
"Biology"
] | 294 | [
"Fungi",
"Fungus species"
] |
43,626,457 | https://en.wikipedia.org/wiki/Pracinostat | Pracinostat (SB939) is an orally bioavailable, small-molecule histone deacetylase (HDAC) inhibitor based on hydroxamic acid, with potential anti-tumor activity and favorable physicochemical, pharmaceutical, and pharmacokinetic properties.
Activity
Pracinostat selectively inhibits HDACs of classes I, II and IV, but not class III enzymes or HDAC6 in class IIb, and has no effect on other Zn-binding enzymes, receptors, or ion channels. It accumulates in tumor cells and exerts continuous inhibition of histone deacetylases, resulting in the accumulation of acetylated histones, chromatin remodeling, transcription of tumor suppressor genes and, ultimately, apoptosis of tumor cells.
Clinical medication
Clinical studies suggest that pracinostat has potentially the best pharmacokinetic properties compared with other oral HDAC inhibitors. In March 2014, pracinostat was granted Orphan Drug status by the Food and Drug Administration for acute myelocytic leukemia (AML) and for the treatment of T-cell lymphoma.
References
Histone deacetylase inhibitors
Benzimidazoles
Hydroxamic acids
Diethylamino compounds | Pracinostat | [
"Chemistry"
] | 261 | [
"Organic compounds",
"Functional groups",
"Hydroxamic acids"
] |
43,628,051 | https://en.wikipedia.org/wiki/TKM-Ebola | TKM-Ebola was an experimental antiviral drug for Ebola disease that was developed by Arbutus Biopharma (formerly Tekmira Pharmaceuticals Corp.) in Vancouver, Canada. The drug candidate was formerly known as Ebola-SNALP.
TKM-Ebola is a combination of small interfering RNAs targeting three of the seven proteins in Ebola virus: Zaire Ebola L polymerase, Zaire Ebola membrane-associated protein (VP24), and Zaire Ebola polymerase complex protein (VP35). By down-regulating these three proteins, TKM-Ebola inhibits virus replication and eliminates the infection. The drug was effective in rhesus monkeys infected with Ebola. After the Ebola outbreak in West Africa in 2014, the new variant responsible for it was isolated from several Ebola virus families and the specific genomic sequence was determined. The company re-designed TKM-Ebola and renamed it as "TKM-Ebola-Guinea".
In January 2014, Tekmira started a Phase I clinical trial of TKM-Ebola to assess its safety in healthy people, with a dose of 0.24 mg/kg/day for seven-day treatments. The FDA placed the trial on clinical hold in July 2014 to assess results, after some subjects had flu-like responses. In August, the FDA changed the status to "partial hold", allowing the drug to be used under expanded access in people infected with Ebola, but with the Phase I trial still suspended. In April 2015, the FDA allowed the study to resume at a lower dose.
A Phase II trial started on 11 March 2015 in Sierra Leone, West Africa and stopped enrolling new subjects on 19 June 2015 after it appeared not to work. In July 2015 the company announced it was changing its name to Arbutus, suspending development of the drug for Ebola and changing its focus to developing treatments for hepatitis B virus.
See also
Atoltivimab/maftivimab/odesivimab, treatment of Zaire ebolavirus (Ebola virus)
Favipiravir
ZMapp
References
Antiviral drugs
Ebola
Abandoned drugs | TKM-Ebola | [
"Chemistry",
"Biology"
] | 448 | [
"Antiviral drugs",
"Biocides",
"Drug safety",
"Abandoned drugs"
] |
49,752,515 | https://en.wikipedia.org/wiki/Scanning%20electron%20cryomicroscopy | Scanning electron cryomicroscopy (CryoSEM) is a form of electron microscopy where a hydrated but cryogenically fixed sample is imaged on a scanning electron microscope's cold stage in a cryogenic chamber. The cooling is usually achieved with liquid nitrogen. CryoSEM of biological samples with a high moisture content can be done faster with fewer sample preparation steps than conventional SEM. In addition, the dehydration processes needed to prepare a biological sample for a conventional SEM chamber create numerous distortions in the tissue leading to structural artifacts during imaging.
See also
Electron microscopy
Electron cryomicroscopy
Transmission electron cryomicroscopy
References
Electron microscopy
Scientific techniques | Scanning electron cryomicroscopy | [
"Chemistry"
] | 139 | [
"Electron",
"Electron microscopy",
"Microscopy"
] |
49,766,829 | https://en.wikipedia.org/wiki/BART%20superfamily | The Bile/Arsenite/Riboflavin Transporter (BART) superfamily is a superfamily of ubiquitous transport proteins. As of early 2016, the superfamily contains seven established families. Functional data for members of all of these families are available. The seven families are in the Transporter Classification Database with the following TC numbers, names and abbreviations:
TC# 2.A.10 - The 2-Keto-3-Deoxygluconate Transporter (KdgT) Family
TC# 2.A.28 - The Bile Acid:Na+ Symporter (BASS) Family
TC# 2.A.59 - The Arsenical Resistance-3 (ACR3) Family
TC# 2.A.69 - The Auxin Efflux Carrier (AEC) Family
TC# 2.A.87 - The Prokaryotic Riboflavin Transporter (P-RFT) Family
TC# 9.B.33 - The Sensor Histidine Kinase (SHK) Family
TC# 9.B.34 - The Kinase/Phosphatase/Cyclic-GMP Synthase/Cyclic di-GMP Hydrolase (KPSH) Family
The first identified substrates for the transporters within the first 5 families are indicated by the names of the families, but all of these families transport a variety of other substrates. The majority of the protein members of the first four of these families exhibit a probable 10 transmembrane spanner (TMS) topology that arose from a tandemly duplicated 5 TMS unit. The N- and C-termini are believed to be in the cytoplasm of bacterial cells, and the same may be true of most other members as well. Members of the RFT family have a 5 TMS topology, and are homologous to each of the two repeat units in the 10 TMS proteins. The other two families [sensor histidine kinase (SHK) and kinase/phosphatase/synthetase/hydrolase (KPSH)] have a single 5 TMS unit preceded by an N-terminal TMS and followed by a hydrophilic sensor histidine kinase domain (the SHK family) or catalytic domains resembling sensor kinase, phosphatase, cyclic di-guanylate (GMP) synthetase and cyclic di-GMP hydrolase catalytic domains, as well as various non-catalytic domains (the KPSH family). Because functional data are not available for the transmembrane domains of members of the SHK and KPSH families, it is not known if these transporter-like domains retain transport activity or have evolved exclusive functions in molecular reception and signal transmission. They could serve merely to anchor the catalytic domains to the membrane. Please refer to TCDB for more details.
References
Solute carrier family
Protein superfamilies | BART superfamily | [
"Biology"
] | 591 | [
"Protein superfamilies",
"Protein classification"
] |
49,767,151 | https://en.wikipedia.org/wiki/Chain-ladder%20method | The chain-ladder or development method is a prominent actuarial loss reserving technique.
The chain-ladder method is used in both the property and casualty and health insurance fields. Its intent is to estimate incurred but not reported claims and project ultimate loss amounts.
The primary underlying assumption of the chain-ladder method is that historical loss development patterns are indicative of future loss development patterns.
Methodology
According to Jacqueline Friedland's "Estimating Unpaid Claims Using Basic Techniques," there are seven steps to apply the chain-ladder technique:
Compile claims data in a development triangle
Calculate age-to-age factors
Calculate averages of the age-to-age factors
Select claim development factors
Select tail factor
Calculate cumulative claim development factors
Project ultimate claims
Age-to-age factors, also called loss development factors (LDFs) or link ratios, represent the ratio of loss amounts from one valuation date to another, and they are intended to capture growth patterns of losses over time. These factors are used to project where the ultimate amount losses will settle.
Example
Firstly, losses (either reported or paid) are compiled into a triangle, where the rows represent accident years and the columns represent valuation dates. For example, the entry '43,169,009' represents loss amounts related to claims occurring in 1998, valued as of 24 months.
Next, age-to-age factors are determined by calculating the ratio of losses at subsequent valuation dates. From 24 months to 36 months, accident year 1998 losses increased from 43,169,009 to 45,568,919, so the corresponding age-to-age factor is 45,568,919 / 43,169,009 = 1.056. A "tail factor" is selected (in this case, 1.000) to project from the latest valuation age to ultimate.
Finally, averages of the age-to-age factors are calculated. Judgmental selections are made after observing several averages. The age-to-age factors are then multiplied together to obtain cumulative development factors.
The cumulative development factors multiplied by the reported (or paid) losses to project ultimate losses.
Incurred but not reported can be obtained by subtracting reported losses from ultimate losses, in this case, 569,172,456 - 543,481,587 = 25,690,869.
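The steps above translate directly into code. Below is a minimal Python sketch of the chain-ladder projection on a small hypothetical triangle (four accident years, volume-weighted average factors, tail factor 1.000); the figures are invented for illustration and are unrelated to the triangle discussed above.

```python
# Hedged sketch: chain-ladder on a hypothetical cumulative loss triangle.
# Rows: accident years; columns: ages 12, 24, 36, 48 months; None = unobserved.
triangle = [
    [1000, 1600, 1800, 1850],
    [1100, 1700, 1900, None],
    [1200, 1800, None, None],
    [1300, None, None, None],
]
n = len(triangle)

# Age-to-age factors: volume-weighted averages over observed pairs.
factors = []
for j in range(n - 1):
    num = sum(r[j + 1] for r in triangle if r[j + 1] is not None)
    den = sum(r[j] for r in triangle if r[j + 1] is not None)
    factors.append(num / den)

tail = 1.0  # selected tail factor (no development assumed past 48 months)

# Cumulative development factors from each age to ultimate.
cdf = [tail] * n
for j in range(n - 2, -1, -1):
    cdf[j] = factors[j] * cdf[j + 1]

# Project ultimates from the latest diagonal; IBNR = ultimate - reported.
for i, row in enumerate(triangle):
    age = n - 1 - i                  # latest observed column for year i
    ultimate = row[age] * cdf[age]
    print(f"year {i}: reported {row[age]}, ultimate {ultimate:,.0f}, "
          f"IBNR {ultimate - row[age]:,.0f}")
```

Note that the judgmental factor selection of the methodology above is replaced here by a plain volume-weighted average; in practice several averages would be compared before a factor is selected.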
Limitations
The chain-ladder technique is only accurate when patterns of loss development in the past can be assumed to continue in the future. In contrast to other loss reserving methods such as the Bornhuetter–Ferguson method, it relies only on past experience to arrive at an incurred but not reported claims estimate.
When there are changes to an insurer's operations, such as a change in claims settlement times, changes in claims staffing, or changes to case reserve practices, the chain-ladder method will not produce an accurate estimate without adjustments.
The chain-ladder method is also very responsive to changes in experience, and as a result, it may be unsuitable for very volatile lines of business.
See also
Incurred but not reported
Loss reserving
Bornhuetter–Ferguson method
References
Actuarial science | Chain-ladder method | [
"Mathematics"
] | 634 | [
"Applied mathematics",
"Actuarial science"
] |
49,768,059 | https://en.wikipedia.org/wiki/9600%20port | The '9600 port' (also named data-jack or data-port) is an industry-specific name given to a special connector on the back of amateur radio HF, VHF, and UHF transceivers. It is used for connecting a packet radio modem or any other type of data-modem which uses audio tones to convey data.
This port is capable of transmitting and receiving data at speeds of at least 9600 bits per second, but usually faster. This is achieved by bypassing the highpass, lowpass, preemphasis, and deemphasis filters normally contained in the microphone and speaker circuits of an FM transmitter and receiver.
Amateur radio data ports which are not "9600 capable" are typically limited to a max speed of 1200 to 3000 bits per second.
Commonly this 9600-capable data port uses a 6-pin mini-DIN connector.
This is the same physical connector-type as PS/2 port mice and keyboards.
Modem Manufacturers
There are a number of manufacturers making modems intended for this 9600 port / data port.
Kantronics
Tigertronics
Argent Data
Byonics
Coastal ChipWorks
MFJ Enterprises
Symek
Timewave Technologies
Masters Communications
Radio Manufacturers
There are a number of manufacturers making radios which include a 9600 capable data port as a feature:
Alinco
Icom Incorporated
Yaesu
Kenwood
Software Modems
The 9600 port can also be connected to computer's soundcard for use with a number of different software-based data modems:
Direwolf
MixW
AGW Packet Engine
Soundmodem
UZ7HO Soundmodem
Digital Voice
The 9600 port can be used to connect a digital voice adapter, or dongle, which allows analog amateur radios to transmit and receive ICOM's D-Star digital voice protocol (AMBE2020).
Digital Voice Dongle
Star*DV / Star*Board
DVRPTR_V1 D-Star boards
PAPA GMSK Boards
DUTCH*Star
Users of this technology
This 9600 port is used to communicate with some amateur radio satellites using packet radio.
A 9600-baud capable amateur radio and modem are installed aboard the International Space Station as part of the ARISS project.
References
Digital amateur radio | 9600 port | [
"Technology"
] | 467 | [
"Wireless networking",
"Digital amateur radio"
] |
49,769,889 | https://en.wikipedia.org/wiki/QDriverStation | The QDriverStation is a free and open-source robotics software for the FIRST Robotics Competition.
The project was started in September 2015 by Alex Spataru (Team 3794), with the objective of providing a stable, free, extensible and user-friendly alternative to the FRC Driver Station. Since then, several FRC students, alumni and mentors have contributed to the project by providing feedback, documenting the communication protocols and creating Linux packages.
Features
Some important features of the QDriverStation are:
The QDriverStation implements a simple auto-updater to ensure that teams are running the latest version of the software.
The QDriverStation uses SDL to obtain joystick input, but it also implements the option to enable a "virtual joystick", which uses the keyboard keys to operate the robot.
The QDriverStation implements a simple sandbox around every protocol to ensure the safe operation of the robot and the software.
The QDriverStation uses the Qt framework to implement the Graphical user interface.
FRC communication protocols
The developers of the QDriverStation have implemented the 2014, 2015 and 2016 FRC communication protocols. Some users have requested to implement support for the ROS protocol, however, work for this feature has not been published yet.
Mobile version
The developers of the QDriverStation have also developed a side-project for mobile devices (such as Android and iOS) with QML. The mobile version has most of the capabilities that the desktop version has.
Screenshots
External links
GitHub Repository
QDriverStation announcement thread
References
Free software programmed in C++
Robotics software | QDriverStation | [
"Engineering"
] | 333 | [
"Robotics software",
"Robotics engineering"
] |
46,788,947 | https://en.wikipedia.org/wiki/Ultrasonic%20pulse%20velocity%20test | An ultrasonic pulse velocity test is an in-situ, nondestructive test to check the quality of concrete and natural rocks. In this test, the strength and quality of concrete or rock is assessed by measuring the velocity of an ultrasonic pulse passing through a concrete structure or natural rock formation.
This test is conducted by passing a pulse of ultrasonic waves through the concrete to be tested and measuring the time taken by the pulse to travel through the structure. Higher velocities indicate good quality and continuity of the material, while slower velocities may indicate concrete with many cracks or voids.
Ultrasonic testing equipment includes a pulse-generation circuit, consisting of electronic circuitry for generating pulses and a transducer for transforming the electronic pulse into a mechanical pulse with an oscillation frequency in the range of 40 kHz to 50 kHz, and a pulse-reception circuit that receives the signal.
The transducer, clock, oscillation circuit, and power source are assembled for use. After calibration against a standard sample of material with known properties, the transducers are placed on opposite sides of the material. Pulse velocity is then obtained from a simple formula: pulse velocity = path length / transit time.
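A minimal Python sketch of this calculation follows; the path length and transit time are invented, and the velocity-to-quality bands are the commonly quoted grading for concrete (treat them as an assumption, not as the wording of any particular standard).

```python
# Hedged sketch: ultrasonic pulse velocity and a common quality grading.
def pulse_velocity(path_length_m: float, transit_time_s: float) -> float:
    """V = L / T, in m/s."""
    return path_length_m / transit_time_s

def grade(v_m_s: float) -> str:
    v = v_m_s / 1000.0  # convert to km/s
    if v > 4.5:
        return "excellent"
    if v >= 3.5:
        return "good"
    if v >= 3.0:
        return "medium"
    return "doubtful"

v = pulse_velocity(0.300, 68e-6)      # 300 mm path, 68 µs transit time
print(f"{v:.0f} m/s -> {grade(v)}")   # ≈ 4412 m/s -> "good"
```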
Applications
Ultrasonic Pulse Velocity can be used to:
Evaluate the quality and homogeneity of concrete materials
Predict the strength of concrete
Evaluate dynamic modulus of elasticity of concrete,
Estimate the depth of cracks in concrete.
Detect internal flaws, cracks, honeycombing, and poor patches.
The test can also be used to evaluate the effectiveness of crack repair. Ultrasonic testing is indicative only, and other tests, such as destructive testing, must be conducted to determine the structural and mechanical properties of the material.
Regulation and standards
A procedure for ultrasonic testing is outlined in ASTM C597 - 09.
In India, until 2018, ultrasonic testing was conducted according to IS 13311-1992. From 2018, the procedure and specification for the ultrasonic pulse velocity test are outlined in IS 516 Part 5: Non-destructive testing of concrete, Section 1: Ultrasonic pulse velocity testing. This test indicates the quality of workmanship and helps to find cracks and defects in concrete.
Factors affecting testing
The important factors that affect/influence the ultrasonic pulse velocity test are:
Surface Conditions of Concrete
Moisture Content of Concrete
Path Length of Concrete Structure
Shape and Size of Concrete Structure
Temperature of Concrete
Stress to Which the Structure is Subjected
Reinforcing Bars
Contact Between the Transducer and Concrete
Cracks and Voids in Concrete
Density and Modulus of Elasticity of Aggregate
Usage
This test is recommended in some of the testing done by the Indian government to certify and check the construction of residential buildings.
References
Materials testing
Civil engineering
Nondestructive testing
Concrete | Ultrasonic pulse velocity test | [
"Materials_science",
"Engineering"
] | 533 | [
"Structural engineering",
"Materials science",
"Construction",
"Nondestructive testing",
"Materials testing",
"Civil engineering",
"Concrete"
] |
46,789,513 | https://en.wikipedia.org/wiki/Applegate%20mechanism | The Applegate mechanism (Applegate's mechanism or Applegate effect) explains long term orbital period variations seen in certain eclipsing binaries. As a main sequence star goes through an activity cycle, the outer layers of the star are subject to a magnetic torque changing the distribution of angular momentum, resulting in a change in the star's oblateness. The orbit of the stars in the binary pair is gravitationally coupled to their shape changes, so that the period shows modulations (typically on the order of ∆P/P ~ 10−5) on the same time scale as the activity cycles (typically on the order of decades).
Introduction
Careful timing of eclipsing binaries has shown that systems showing orbital period modulations on the order of ∆P/P ~ 10−5 over a period of decades are quite common. A striking example of such a system is Algol, for which the detailed observational record extends back over two centuries. Over this span of time, a graph of the time dependence of the difference between the observed times of eclipses versus the predicted times shows a feature (termed the "great inequality") with a full amplitude of 0.3 days and a recurrent time scale of centuries. Superimposed on this feature is a secondary modulation with a full amplitude of 0.06 days and a recurrent time scale of about 30 years. Orbital period modulations of similar amplitude are seen in other Algol binaries as well.
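The quoted modulation amplitude can be checked with the standard relation between a sinusoidal O−C (observed minus calculated eclipse time) curve and the underlying period change, ΔP/P = 2πA/P_mod, where A is the O−C semi-amplitude; the short Python sketch below applies it to the Algol numbers above (the relation itself is standard O−C analysis, not something derived in this article).

```python
import math

# Hedged sketch: period-modulation amplitude implied by Algol's secondary
# O-C feature (full amplitude 0.06 d over a ~30 yr cycle, as quoted above).
A = 0.06 / 2            # O-C semi-amplitude, days
P_mod = 30 * 365.25     # modulation time scale, days

print(f"dP/P ~ {2 * math.pi * A / P_mod:.1e}")  # ~1.7e-5, matching ~1e-5
```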
Although recurrent, these period modulations do not follow a strictly regular cycle. Irregular recurrence rules out attempts to explain these period modulations as being due to apsidal precession or the presence of distant, unseen companions. Apsidal precession explanations also have the problem that they require an eccentric orbit, but the systems in which these modulations are observed often show orbits of little eccentricity. Furthermore, third body explanations have the issue that in many cases, a third body massive enough to produce the observed modulation should not have managed to escape optical detection, unless the third body were quite exotic.
Another phenomenon observed in certain Algol binaries has been monotonic period increases. This is quite distinct from the far more common observations of alternating period increases and decreases explained by the Applegate mechanism. Monotonic period increases have been attributed to mass transfer, usually (but not always) from the less massive to the more massive star.
Mechanism
The time scale and recurrence patterns of these orbital period modulations suggested to Matese and Whitmire (1983) a mechanism invoking changes in the quadrupole moment of one star with subsequent spin-orbit coupling. However, they could not provide any convincing explanation for what might cause such fluctuations in the quadrupole moment.
Taking the Matese and Whitmire mechanism as a basis, Applegate argued that changes in the radius of gyration of one star could be related to magnetic activity cycles. Supportive evidence for his hypothesis came from the observation that a large fraction of the late-type secondary stars of Algol binaries appear to be rapidly rotating convective stars, implying that they should be chromospherically active. Indeed, orbital period modulations are seen only in Algol-type binaries containing a late-type convective star.
Given that gravitational quadrupole coupling is involved in producing orbital period changes, the question remained of how a magnetic field could induce such shape changes. Most models of the 1980s assumed that the magnetic field would deform the star by distorting it away from hydrostatic equilibrium. Marsh and Pringle (1990) demonstrated, however, that the energy required to produce such deformations would exceed the total energy output of the star.
A star does not rotate as a solid body. The outer parts of a star contribute most to a star's quadrupole moment. Applegate proposed that as a star goes through its activity cycle, magnetic torques could cause a redistribution of angular momentum within a star. As a result, the rotational oblateness of the star will change, and this change would ultimately result in changing the orbital period via the Matese and Whitmire mechanism. Energy budget calculations indicate that the active star typically should be variable at the ΔL/L ≈ 0.1 level and should be differentially rotating at the ΔΩ/Ω ≈ 0.01 level.
Applicability
The Applegate mechanism makes several testable predictions:
Luminosity variations in the active star should correspond to modulations in the orbital period.
Any other indicator of magnetic activity (i.e. sunspot activity, coronal X-ray luminosity, etc.) should also show variations corresponding to modulations in the orbital period.
Since large changes in the radius of the star are ruled out by considerations of energetics, luminosity variations should be entirely due to temperature variations.
Tests of the above predictions have been supportive of the mechanism's validity, but not unambiguously so.
The Applegate effect provides a unified explanation for many (but not all) ephemeris curves for a wide class of binaries, and it may aid in the understanding of the dynamo activity seen in rapidly rotating stars.
The Applegate mechanism has also been invoked to explain variations in the observed transit times of extrasolar planets, in addition to other possible effects such as tidal dissipation and the presence of other planetary bodies.
However, there are many stars for which the Applegate mechanism is inadequate. For example, the orbital period variations in certain eclipsing post-common-envelope binaries are an order of magnitude larger than can be accommodated by the Applegate effect, with magnetic braking or a third body in a highly elliptical orbit providing the only known mechanisms able to explain the observed variation.
References
-
Astrophysics
Variable stars
Stellar phenomena | Applegate mechanism | [
"Physics",
"Astronomy"
] | 1,182 | [
"Astronomical sub-disciplines",
"Physical phenomena",
"Stellar phenomena",
"Astrophysics"
] |
28,604,428 | https://en.wikipedia.org/wiki/Nano%20guitar | The nano guitar is a microscopically small carved guitar. It was developed by Dustin W. Carr in 1997, under the direction of Professor Harold G. Craighead, in the Cornell Nanofabrication Facility. The idea came about as a fun way to illustrate nanotechnology, and captured popular attention. It is disputed whether the nano guitar should be classified as a guitar, but the common opinion is that it is in fact a guitar.
Explanation
Nanotechnology miniaturizes normal objects, in this case, a guitar. It can be used to create tiny cameras, scales, and covert listening devices. An example of this is smart dust, which can be either a camera or a listening device smaller than a grain of sand. A nanometer is one-billionth of a meter. For comparison, a human hair is about 200,000 nanometers thick. The nano guitar is about as long as one-twentieth of the diameter of a human hair, 10 micrometers or 10,000 nanometers long. Each of the six 'strings' is 50 nanometers wide. The entire guitar is the size of an average red blood cell. The guitar is carved from a grain of crystalline silicon by scanning a focused beam of electrons over a film called a 'resist', a technique known as electron-beam lithography.
The guitar strings can be made to vibrate by tiny lasers or by the tip of an atomic force microscope, in much the same way a guitar player might use a plectrum. The strings vibrate at around 40,000,000 Hz (40 MHz), roughly 15 octaves higher than a normal guitar, whose highest fundamental typically reaches about 1318.5 Hz. Even if its sound were amplified, it could not be detected by the human ear.
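The "roughly 15 octaves" figure is easy to verify, since each octave doubles the frequency; a quick check of the numbers quoted above:

```python
import math

f_nano = 40e6        # nano guitar string frequency, Hz (from the text)
f_guitar = 1318.51   # highest typical guitar fundamental (E6), Hz

octaves = math.log2(f_nano / f_guitar)
print(f"{octaves:.1f} octaves higher")   # ~14.9, i.e. roughly 15 octaves

# Size check: a 10,000 nm guitar against a 200,000 nm-thick hair
print(10_000 / 200_000)                  # 0.05 -> one-twentieth of a hair
```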
Implications
The nano guitar illustrates inaudible technology that is not meant for musical entertainment. The application of frequencies generated by nano-objects is called sonification. Such objects can represent numerical data and provide support for information processing activities of many different kinds that produce synthetic non-verbal sounds. Since the manufacture of the nano-guitar, researchers in the lab headed by Dr. Craighead have built even tinier devices. One thought is that they may be useful as tiny scales to measure tinier particles, such as bacteria, which may aid in diagnosis.
See also
List of guitars
References
External links
Molecular machines
Individual guitars
1997 musical instruments | Nano guitar | [
"Physics",
"Chemistry",
"Materials_science",
"Technology"
] | 481 | [
"Molecular machines",
"Nanotechnology",
"Machines",
"Physical systems"
] |
28,613,911 | https://en.wikipedia.org/wiki/Iodine%20monofluoride | Iodine monofluoride is an interhalogen compound of iodine and fluorine with formula IF. It is a chocolate-brown solid that decomposes at 0 °C, disproportionating to elemental iodine and iodine pentafluoride:
5 IF → 2 I2 + IF5
However, its molecular properties can still be precisely determined by spectroscopy: the iodine-fluorine distance is 190.9 pm and the I−F bond dissociation energy is around 277 kJ mol−1. At 298 K, its standard enthalpy change of formation is ΔfH° = −95.4 kJ mol−1, and its Gibbs free energy is ΔfG° = −117.6 kJ mol−1.
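As an illustration of what the tabulated ΔfG° implies, the standard identity K = e^(−ΔG°/RT) can be applied to the value above (this is a generic thermodynamic calculation, not one taken from the article):

```python
import math

# Equilibrium constant of formation of IF at 298 K from the tabulated
# Gibbs free energy of formation, using K = exp(-dG / (R T)).
R = 8.314          # J/(mol K)
T = 298.0          # K
dG_f = -117.6e3    # J/mol, from the text

K_f = math.exp(-dG_f / (R * T))
print(f"K_f ≈ {K_f:.1e}")   # ~4e20
# Formation from the elements is strongly favoured; IF is nevertheless
# unstable because disproportionation to I2 and IF5 lowers the free
# energy even further.
```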
It can be generated, albeit only fleetingly, by the reaction of the elements at −45 °C in CCl3F:
I2 + F2 → 2 IF
It can also be generated by the reaction of iodine with iodine trifluoride at −78 °C in CCl3F:
I2 + IF3 → 3 IF
The reaction of iodine with silver(I) fluoride at 0 °C also yields iodine monofluoride:
I2 + AgF → IF + AgI
Reactions
Iodine monofluoride is used to produce pure nitrogen triiodide:
BN + 3 IF → NI3 + BF3
See also
Iodine trifluoride
Iodine pentafluoride
Iodine heptafluoride
References
Interhalogen compounds
Diatomic molecules
Iodine compounds
Fluorides | Iodine monofluoride | [
"Physics",
"Chemistry"
] | 335 | [
"Molecules",
"Interhalogen compounds",
"Oxidizing agents",
"Salts",
"Diatomic molecules",
"Fluorides",
"Matter"
] |
40,766,961 | https://en.wikipedia.org/wiki/Minimum%20control%20speeds | The minimum control speed (VMC) of a multi-engine aircraft (specifically an airplane) is a V-speed that specifies the calibrated airspeed below which directional or lateral control of the aircraft can no longer be maintained, after the failure of one or more engines. The VMC only applies if at least one engine is still operative, and will depend on the stage of flight. Indeed, multiple VMCs have to be calculated: for the ground run, for flight, and for approach and landing, and there are more still for aircraft with four or more engines. These are all included in the aircraft flight manual of all multi-engine aircraft. When design engineers are sizing an airplane's vertical tail and flight control surfaces, they have to take into account the effect this sizing will have on the airplane's minimum control speeds.
Minimum control speeds are typically established by flight tests as part of an aircraft certification process. They provide a guide to the pilot in the safe operation of the aircraft.
Physical description
When an engine on a multi-engine aircraft fails, the thrust distribution on the aircraft becomes asymmetrical, resulting in a yawing moment in the direction of the failed engine. A sideslip develops, causing the total drag of the aircraft to increase considerably, resulting in a drop in the aircraft's rate of climb. The rudder, and to a certain extent the ailerons via the use of bank angle, are the only aerodynamic controls available to the pilot to counteract the asymmetrical thrust yawing moment.
The higher the speed of the aircraft, the easier it is to counteract the yawing moment using the aircraft's controls. The minimum control speed is the airspeed below which the force the rudder or ailerons can apply to the aircraft is not large enough to counteract the asymmetrical thrust at a maximum power setting. Above this speed it should be possible to maintain control of the aircraft and maintain straight flight with asymmetrical thrust.
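The speed dependence described above can be sketched by equating the engine-out yawing moment to the maximum rudder-generated moment, which grows with dynamic pressure (½ρV²); solving the balance for V gives a crude estimate of a VMCA-like speed. Every symbol and number below is an illustrative assumption, not certification data for any aircraft:

```python
import math

# Crude yawing-moment balance: thrust asymmetry vs. maximum rudder authority.
#   N_engine = T * y_e                      (thrust T at lateral arm y_e)
#   N_rudder = 0.5 * rho * V**2 * S_v * CY_max * l_v
# Setting the two equal and solving for V estimates the minimum control speed.
rho = 1.225        # air density at sea level, kg/m^3
T = 30_000.0       # thrust of the remaining engine, N (assumed)
y_e = 6.0          # lateral arm of that engine, m (assumed)
S_v = 20.0         # vertical tail area, m^2 (assumed)
CY_max = 0.8       # max sideforce coefficient with full rudder (assumed)
l_v = 12.0         # vertical tail moment arm, m (assumed)

V_min = math.sqrt(2 * T * y_e / (rho * S_v * CY_max * l_v))
print(f"estimated minimum control speed ≈ {V_min:.0f} m/s")   # ~39 m/s
```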
Loss of engine power of wing-mounted-propeller aircraft and blown lift aircraft affects the lift distribution over the wing, causing a roll toward the inoperative engine. In some aircraft roll authority is more limiting than rudder authority in determining VMCs.
Certification and variants
Aviation regulations (such as FAR and EASA) define several different VMCs and require design engineers to size the vertical tail and the aerodynamic flight controls of the aircraft to comply with these regulations. The minimum control speed in the air (VMCA) is the most important minimum control speed of a multi-engine aircraft, which is why VMCA is simply listed as VMC in many aviation regulations and aircraft flight manuals. On the airspeed indicator of a twin-engine aircraft of less than 6000 lbs (2722 kg), the VMCA is indicated by a red radial line, as standardised by FAR 23.
Most test pilot schools use multiple, more specific minimum control speeds, as VMC will change depending on the stage of flight. Other defined VMCs include minimum control speed on the ground (VMCG) and minimum control speed during approach and landing (VMCL). In addition, with aircraft with four or more engines, VMCs exist for cases with either one or two engines inoperative on the same wing. Figure 1 illustrates the VMCs that are defined in the relevant civil aviation regulations and in military specifications.
Minimum control speed when airborne
The vertical tail or vertical stabilizer of a multi-engine aircraft plays a crucial role in maintaining directional control while an engine fails or is inoperative. The larger the tail, the more capable it will be of providing the required force to counteract the asymmetrical thrust yawing moment. This means that the smaller the tail is, the higher the VMCA will be. However, a larger tail is more costly and harder to accommodate, and comes with other aerodynamic issues such as increased prevalence of slipstreams. Engineers designing the vertical tail must make a decision based on, amongst other factors, their budget, the weight of the aircraft, and the maximum bank angle of 5° (away from the inoperative engine), as stated by FAR.
VMCA is also used to calculate the minimum takeoff safety speed. A high VMCA therefore results in higher takeoff speeds, and so longer runways are required, which is undesirable for airport operators.
Factors influencing minimum control speed
Any factor that has influence on the balance of forces and on the yawing and rolling moments after engine failure might also affect VMCs. When the vertical tail is designed and the VMCA is measured, the worst-case scenario for all factors is taken into account. This ensures that the VMCs published in the AFMs are guaranteed to be safe.
Heavier aircraft are more stable and more resistant to yawing moments, and therefore have lower VMCAs. The longitudinal centre of gravity affects the VMCA as well: the further from the tail it is, the lower the minimum control speed, because the rudder will be able to provide a larger yawing moment, and so it is easier to counteract the imbalance in thrust. The lateral centre of gravity also has an effect: the nearer the inoperative engine it is, the larger the moment of the working engine, and so the more force the rudder has to apply. This means that if the lateral centre of gravity shifts towards the inoperative engine, the aircraft's VMCA will increase. The thrust of most engines depends on altitude and temperature; increasing altitude and higher temperatures decrease thrust. This means that if the air temperature is higher and the aircraft has a higher altitude, the force of the operative engine will be lower, the rudder will have to provide less counteractive force, and so the VMCA will be lower. The bank angle also influences the minimum control speed. A small bank angle away from the inoperative engine is required for smallest possible sideslip and therefore lower VMCA. Finally, if the P-factor of the working engine increases, then its yawing moment increases, and the aircraft's VMCA increases as a result.
Other minimal control speeds
Aircraft with more engines
Aircraft with four or more engines have not only a VMCA (often called VMCA1 under these circumstances), where the critical engine alone is inoperative, but also a VMCA2 that applies when the engine inboard of the critical engine, on the same wing, is also inoperative. Civil aviation regulations (FAR, CS and equivalent) no longer require a VMCA2 to be determined, although it is still required for military aircraft with four or more engines. On turbojet and turbofan aircraft, the outboard engines are usually equally critical. Three-engine aircraft such as the MD-11 and BN-2 Trislander do not have a VMCA2; a failed centerline engine has no effect on VMC.
When two opposing engines of aircraft with four or more engines are inoperative, there is no thrust asymmetry, hence there is no rudder requirement for maintaining steady straight flight; VMCAs play no role. There may be less power available to maintain flight overall, but the minimum safe control speeds remain the same as they would be for an aircraft being flown at 50% thrust on all four engines.
Failure of a single inboard engine, from a set of four, has a much smaller effect on controllability. This is because an inboard engine is closer to the aircraft's centre of gravity, so the yawing moment produced by its failure is smaller. In this situation, if speed is maintained at or above the published VMCA, as determined for the critical engine, safe control can be maintained.
Ground
If an engine fails during taxiing or takeoff, the thrust yawing moment will force the aircraft to one side on the runway. If the airspeed is too low, the rudder-generated side force will not be powerful enough, and the aircraft will deviate from the runway centerline and may even veer off the runway. The airspeed at which the aircraft, after engine failure, deviates 9.1 m (30 ft) from the runway centerline, despite the use of maximum rudder but without the use of nose wheel steering, is the minimum control speed on the ground (VMCG).
Approach and landing
The minimum control speed during approach and landing (VMCL) is similar to VMCA, but the aircraft configuration is the landing configuration. VMCL is defined for both part 23 <FAR 23.149 (c)> and part 25 aircraft in civil aviation regulations. However, when maximum thrust is selected for a go-around, the flaps will be selected up from the landing position, and VMCL no longer applies, but VMCA does.
Safe single-engine speed
Due to the inherent risks of operating at or close to VMCA with asymmetric thrust, and the desire to simulate and practice these manoeuvres in pilot training and certification, a VSSE may be defined. VSSE, the safe single-engine speed, is the minimum speed at which to intentionally render the critical engine inoperative, established and designated by the manufacturer as the safe, intentional, one-engine-inoperative speed. This speed is selected to reduce the accident potential from loss of control due to simulated engine failures at inordinately slow airspeeds.
References
Airspeed
Aerodynamics
Aviation safety | Minimum control speeds | [
"Physics",
"Chemistry",
"Engineering"
] | 1,911 | [
"Physical quantities",
"Aerodynamics",
"Airspeed",
"Aerospace engineering",
"Wikipedia categories named after physical quantities",
"Fluid dynamics"
] |
40,767,780 | https://en.wikipedia.org/wiki/Geographic%20center%20of%20Belarus | The geographical center of Belarus () is at the geographical coordinates of latitude 53°31'44.54", longitude 28°02'41.90". It is located 70 km south-east of Minsk, 6 km west of Marina Hills, and 1 km south-east of the village of Antonovo, Pukhovichy District, Minsk Oblast, Belarus.
The search was carried out in 1996 by the 82nd expedition of the association "Belgeodesy", in cooperation with the firm "Aerogeokart", under a special program using 1:200,000-scale maps and satellite data to find the geographic center of Belarus.
The geographical coordinates of the center of Belarus are entered into the State Geodetic directory as the state geodetic grid points.
On May 1, 1996, near the village of Antonovo, a pillar was erected with the sign "The village Antonovo – the geographical center of the Republic of Belarus".
References
Belarus
Geography of Belarus | Geographic center of Belarus | [
"Physics",
"Mathematics"
] | 199 | [
"Point (geometry)",
"Geometric centers",
"Geographical centres",
"Symmetry"
] |
40,769,910 | https://en.wikipedia.org/wiki/Homotopy%20excision%20theorem | In algebraic topology, the homotopy excision theorem offers a substitute for the absence of excision in homotopy theory. More precisely, let (X; A, B) be an excisive triad with C = A ∩ B nonempty, and suppose the pair (A, C) is (m − 1)-connected, m ≥ 2, and the pair (B, C) is (n − 1)-connected, n ≥ 1. Then the map induced by the inclusion (A, C) → (X, B),

π_q(A, C) → π_q(X, B),

is bijective for q < m + n − 2 and is surjective for q = m + n − 2.
A geometric proof is given in a book by Tammo tom Dieck.
This result should also be seen as a consequence of the most general form of the Blakers–Massey theorem, which deals with the non-simply-connected case.
The most important consequence is the Freudenthal suspension theorem.
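To illustrate, here is the standard textbook sketch of how the suspension theorem follows from homotopy excision, stated for an (n − 1)-connected, well-pointed space X (this is the usual argument, not a summary of this article's references):

```latex
% Write the suspension as a union of two cones: \Sigma X = C_+X \cup_X C_-X.
% If X is (n-1)-connected, both pairs (C_\pm X, X) are n-connected, and
% homotopy excision applied to this triad gives
\pi_q(C_+X, X) \to \pi_q(\Sigma X, C_-X):
\quad \text{bijective for } q < 2n, \text{ surjective for } q = 2n.
% The cones are contractible, so the long exact sequences of the pairs give
% \pi_q(C_+X, X) \cong \pi_{q-1}(X) and \pi_q(\Sigma X, C_-X) \cong \pi_q(\Sigma X).
% Combining these identifications yields the suspension homomorphism
\Sigma_* : \pi_{q-1}(X) \to \pi_q(\Sigma X):
\quad \text{bijective for } q - 1 < 2n - 1, \text{ surjective for } q - 1 = 2n - 1,
% which is the Freudenthal suspension theorem.
```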
References
Bibliography
J. Peter May, A Concise Course in Algebraic Topology, Chicago University Press.
Theorems in homotopy theory | Homotopy excision theorem | [
"Mathematics"
] | 171 | [
"Topology stubs",
"Topology"
] |
48,483,838 | https://en.wikipedia.org/wiki/Encapsulin%20nanocompartment | Encapsulin nanocompartments, or encapsulin protein cages, are spherical bacterial organelle-like compartments roughly 25-30 nm in diameter that are involved in various aspects of metabolism, in particular protecting bacteria from oxidative stress. Encapsulin nanocompartments are structurally similar to the HK97 bacteriophage and their function depends on the proteins loaded into the nanocompartment. The sphere is formed from 60 (for a 25 nm sphere) or 180 (for a 30 nm sphere) copies of a single protomer, termed encapsulin. Their structure has been studied in great detail using X-ray crystallography and cryo-electron microscopy.
A number of different types of proteins have been identified as being loaded into encapsulin nanocompartments. Peroxidases or proteins similar to ferritins are the two most common types of cargo proteins. While most encapsulin nanocompartments contain only one type of cargo protein, in some species two or three types of cargo proteins are loaded.
Encapsulins purified from Rhodococcus jostii can be assembled and disassembled with changes in pH. In the assembled state, the compartment enhances the activity of its cargo, a peroxidase enzyme.
Use as a platform for bioengineering
Recently, encapsulin nanocompartments have begun to receive considerable interest from bioengineers because of their potential to allow the targeted delivery of drugs, proteins, and mRNAs to specific cells of interest.
References
Metabolism
Cell biology
Biological engineering
Bacterial proteins | Encapsulin nanocompartment | [
"Chemistry",
"Engineering",
"Biology"
] | 338 | [
"Biological engineering",
"Cell biology",
"Cellular processes",
"Biochemistry",
"Metabolism"
] |
48,487,233 | https://en.wikipedia.org/wiki/Paul%20Weyland | Paul Wilhelm Gustav Weyland (20 January 1888, Berlin – 6 December 1972, Bad Pyrmont) was the antisemitic leader of the Anti Einstein League.
In 1919 Weyland published the novel Hie Kreuz - hie Triglaff (The Cross against the Triglav), which gives a chauvinistic account of the historical events of the tenth century A.D. in Pomerania. It ends with an open allusion to the contemporary conflicts between Germans and Poles in Upper Silesia. A second book Der Tanz als kulturelles Ausdrucksmittel (Dancing as an expression of culture) was promised, with a chapter on modern dance as a sign of cultural decline. However this book never appeared and Weyland was to move on to scientific theory.
Weyland was a key figure in organising an antisemitic campaign against relativity. In August 1920 he organised a mass meeting at the Berliner Philharmonie to contest Einstein's theory of relativity. After ensuring the meeting had been well advertised in the newspapers, Weyland delivered a vituperative attack on Einstein, described as "with heavy artillery" in one newspaper. The attack consisted primarily of insubstantial insults against the theory of relativity, alongside claims that it was promoted by "the clique of [Einstein's] academic supporters". Weyland claimed that the theory constituted a form of hypnotic mass suggestion and Jewish arrogance, that it was the product of an unsettling, spiritually chaotic period, and that it, amongst other repellent ideas, was poisoning German thought. This speech culminated in the statement: "Relativity theory is scientific Dadaism".
He was later granted American citizenship.
References
1888 births
1972 deaths
German nationalists
Sturmabteilung personnel
20th-century German novelists
Relativity critics | Paul Weyland | [
"Physics"
] | 380 | [
"Relativity critics",
"Theory of relativity"
] |
48,487,824 | https://en.wikipedia.org/wiki/Euler%27s%20critical%20load | Euler's critical load or Euler's buckling load is the compressive load at which a slender column will suddenly bend or buckle. It is given by the formula:

P_cr = π²EI / (KL)²

where
P_cr, Euler's critical load (longitudinal compression load on column),
E, Young's modulus of the column material,
I, second moment of area of the cross section of the column (area moment of inertia),
L, unsupported length of column,
K, column effective length factor
This formula was derived in 1744 by the Swiss mathematician Leonhard Euler. The column will remain straight for loads less than the critical load. The critical load is the greatest load that will not cause lateral deflection (buckling). For loads greater than the critical load, the column will deflect laterally. The critical load puts the column in a state of unstable equilibrium. A load beyond the critical load causes the column to fail by buckling. As the load is increased beyond the critical load, the lateral deflections increase, until the column may fail in other modes such as yielding of the material. Loading of columns beyond the critical load is not addressed in this article.
Around 1900, J. B. Johnson showed that at low slenderness ratios an alternative formula should be used.
Assumptions of the model
The following assumptions are made while deriving Euler's formula:
The material of the column is homogeneous and isotropic.
The compressive load on the column is axial only.
The column is free from initial stress.
The weight of the column is neglected.
The column is initially straight (no eccentricity of the axial load).
Pin joints are friction-less (no moment constraint) and fixed ends are rigid (no rotation deflection).
The cross-section of the column is uniform throughout its length.
The direct stress is very small as compared to the bending stress (the material is compressed only within the elastic range of strains).
The length of the column is very large as compared to the cross-sectional dimensions of the column.
The column fails only by buckling. This is true if the compressive stress in the column does not exceed the yield strength (see figure 1):

σ = P_cr / A = π²E / λ² < σ_y

where:
λ = L_e / r is the slenderness ratio,
L_e = KL is the effective length,
r = √(I/A) is the radius of gyration,
I is the second moment of area (area moment of inertia),
A is the cross-sectional area.
For slender columns, the critical buckling stress is usually lower than the yield stress. In contrast, a stocky column can have a critical buckling stress higher than the yield, i.e. it yields prior to buckling.
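A short numerical check of this slender-versus-stocky distinction, for an illustrative pinned-pinned steel column (all dimensions are assumed for the example):

```python
import math

# Compare Euler buckling stress with yield for a pinned-pinned (K = 1)
# circular steel column -- illustrative numbers, not from the article.
E = 200e9          # Pa, Young's modulus of steel
sigma_y = 250e6    # Pa, yield strength
K, L = 1.0, 3.0    # effective length factor and length, m
d = 0.05           # m, solid circular cross-section diameter

A = math.pi * d**2 / 4
I = math.pi * d**4 / 64
r = math.sqrt(I / A)                  # radius of gyration (= d/4 here)
slenderness = K * L / r

P_cr = math.pi**2 * E * I / (K * L)**2
sigma_cr = P_cr / A                   # equals pi^2 * E / slenderness^2
print(f"slenderness = {slenderness:.0f}, sigma_cr = {sigma_cr/1e6:.0f} MPa")
# sigma_cr (~34 MPa) < sigma_y: this slender column buckles before yielding.
```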
Mathematical derivation
Pin ended column
The following model applies to columns simply supported at each end (K = 1).
First, note that there are no lateral reactions at the hinged ends, so there is also no shear force in any cross-section of the column: by symmetry any such reactions would have to point in the same direction, while moment equilibrium would require them to point in opposite directions, so they must vanish.
Using the free body diagram in the right side of figure 3, and making a summation of moments about point x:

M(x) + P w = 0

where w is the lateral deflection.
According to Euler–Bernoulli beam theory, the deflection of a beam is related to its bending moment by:

EI (d²w/dx²) = M(x)

so:

EI (d²w/dx²) = −P w

Let λ² = P / (EI), so:

d²w/dx² + λ²w = 0

We get a classical homogeneous second-order ordinary differential equation.
The general solution of this equation is: w(x) = A sin(λx) + B cos(λx), where A and B are constants to be determined by boundary conditions, which are:
Left end pinned: w(0) = 0, giving B = 0
Right end pinned: w(L) = 0, giving A sin(λL) = 0
If A = 0, no bending moment exists and we get the trivial solution of w(x) = 0.
However, from the other solution we get sin(λL) = 0, so λL = nπ, for n = 0, 1, 2, …
Together with λ² = P / (EI) as defined before, the various critical loads are:

P_n = n²π²EI / L², for n = 0, 1, 2, …

and depending upon the value of n, different buckling modes are produced as shown in figure 4. The load and mode for n = 0 is the nonbuckled mode.
Theoretically, any buckling mode is possible, but in the case of a slowly applied load only the first modal shape is likely to be produced.
The critical load of Euler for a pin-ended column is therefore:

P_cr = π²EI / L²

and the obtained shape of the buckled column in the first mode is:

w(x) = A sin(πx / L)
General approach
The differential equation of the axis of a beam is:

EI (d⁴w/dx⁴) + P (d²w/dx²) = q

For a column with axial load only, the lateral load q vanishes, and substituting λ² = P / (EI), we get:

d⁴w/dx⁴ + λ² (d²w/dx²) = 0

This is a homogeneous fourth-order differential equation and its general solution is

w(x) = A sin(λx) + B cos(λx) + Cx + D

The four constants A, B, C, D are determined by the boundary conditions (end constraints) on w(x), at each end. There are three cases:
Pinned end:
w = 0 and d²w/dx² = 0
Fixed end:
w = 0 and dw/dx = 0
Free end:
d²w/dx² = 0 and d³w/dx³ + λ² (dw/dx) = 0
For each combination of these boundary conditions, an eigenvalue problem is obtained. Solving those, we get the values of Euler's critical load for each one of the cases presented in Figure 2.
See also
Buckling
Bending moment
Bending
Euler–Bernoulli beam theory
References
Elasticity (physics)
Mechanical failure modes
Structural analysis
Mechanics
Leonhard Euler | Euler's critical load | [
"Physics",
"Materials_science",
"Technology",
"Engineering"
] | 1,017 | [
"Structural engineering",
"Physical phenomena",
"Mechanical failure modes",
"Elasticity (physics)",
"Deformation (mechanics)",
"Structural analysis",
"Technological failures",
"Mechanics",
"Mechanical engineering",
"Aerospace engineering",
"Mechanical failure",
"Physical properties"
] |
48,488,023 | https://en.wikipedia.org/wiki/Ciraparantag | Ciraparantag (aripazine) is a drug under investigation as an antidote for a number of anticoagulant (anti-blood clotting) drugs, including factor Xa inhibitors (rivaroxaban, apixaban and edoxaban), dabigatran, and heparins (including fondaparinux, low molecular weight heparins (LMWH), and unfractionated heparin).
Medical uses
Ciraparantag significantly reverses anticoagulation induced by a therapeutic dose of edoxaban within 10 minutes following injection. This return to normal haemostasis persists over 24 hours following a single intravenous dose of the drug. In addition to edoxaban, it also reverses the actions of LMWH and dabigatran.
Pharmacology
Mechanism of action
According to in vitro studies, the substance binds directly to anticoagulants via hydrogen bonds and charge-charge interactions between various parts of the two molecules.
Chemistry
Ciraparantag consists of two L-arginine units connected with a piperazine containing linker chain.
See also
Andexanet alfa
Idarucizumab
Prothrombin complex concentrate
Vitamin K
References
Amino acid derivatives
Antidotes
Guanidines | Ciraparantag | [
"Chemistry"
] | 275 | [
"Guanidines",
"Functional groups"
] |
48,491,335 | https://en.wikipedia.org/wiki/Orbital%20pass | An orbital pass (or simply pass) is the period in which a spacecraft is above the local horizon, and thus available for line-of-sight communication with a given ground station, receiver, or relay satellite, or for visual sighting. The beginning of a pass is termed acquisition of signal (AOS); the end of a pass is termed loss of signal (LOS). The point at which a spacecraft comes closest to a ground observer is the time of closest approach (TCA).
Timing and duration
The timing and duration of passes depends on the characteristics of the orbit a satellite occupies, as well as the ground topography and any occulting objects on the ground (such as buildings), or in space (for planetary probes, or for spacecraft using relay satellites). The longest duration ground pass will be experienced by an observer directly on the ground track of the satellite. Path loss is greatest toward the start and end of a ground pass, as is Doppler shifting for Earth-orbiting satellites.
Satellites in geosynchronous orbit may be continuously visible from a single ground station, whereas satellites in low Earth orbit only offer short-duration ground passes (although longer contacts may be made via relay satellite networks such as TDRSS). Satellite constellations, such as those of satellite navigation systems, may be designed so that a minimum subset of the constellation is always visible from any point on the Earth, thereby providing continuous coverage.
Prediction and visibility
A number of web-based and mobile applications produce predictions of passes for known satellites. In order to be observed with the naked eye, a spacecraft must reflect sunlight towards the observer; thus, naked-eye observations are generally restricted to twilight hours, during which the spacecraft is in sunlight but the observer is not. A satellite flare occurs when sunlight is reflected by flat surfaces on the spacecraft. The International Space Station, the largest artificial satellite of Earth, can at its brightest appear brighter than the planet Venus.
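At the core of any pass predictor is a horizon test: a pass is the contiguous interval during which the satellite's elevation above the observer's local horizon is positive. A minimal geometric sketch, assuming a spherical Earth and already-computed ECEF position vectors (all names are illustrative):

```python
import numpy as np

def elevation_deg(observer_ecef, sat_ecef):
    """Elevation angle of the satellite above the observer's local horizon,
    in degrees, taking local 'up' as the observer's radial direction
    (spherical-Earth approximation)."""
    up = observer_ecef / np.linalg.norm(observer_ecef)
    los = sat_ecef - observer_ecef              # line-of-sight vector
    los = los / np.linalg.norm(los)
    return np.degrees(np.arcsin(np.dot(up, los)))

# Scanning elevation over a time series of positions: AOS is the first
# sample above 0 degrees, LOS the last, and TCA the maximum elevation.
```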
See also
Ground track, the path on the surface of the Earth directly below a satellite
Satellite revisit period, the time elapsed between observations of the same point on Earth by a satellite
Satellite watching, as a hobby
References
Astrodynamics
Spacecraft communication | Orbital pass | [
"Engineering"
] | 449 | [
"Astrodynamics",
"Spacecraft communication",
"Aerospace engineering"
] |
48,492,125 | https://en.wikipedia.org/wiki/SECU-3 | SECU-3 is an internal combustion engine control unit. It is being developed as an open-source project (drawings, schematic diagrams, source code, etc. are open and freely available to all). Anyone can take part in the project and can access all the information without any registration.
The SECU-3 system controls the ignition, fuel injection and various other actuators of the internal combustion engine (ICE) and vehicle. In particular, it is capable of controlling the carburetor choke using a stepper motor (auto choke), thus controlling RPM when the engine is warming up. SECU-3 manages AFR on carburetor engines (similar to AXTEC AFR systems), the idle cut-off valve and the wide-open-throttle mode valve in carburetor systems, and controls the electric fuel pump and gas valves in closed-loop mode according to feedback from the oxygen sensor. The SECU-3 system provides unique opportunities for reassigning the I/O pins of the mainboard for custom uses in engine tuning. It also provides smooth speed control of the engine's electric cooling fan. The system includes its own software which allows editing all major settings and fuel and ignition maps in real time (while the engine is running), and switching between 2 or 4 sets of maps. The SECU-3 system has many other advanced features (listed below).
Currently, there are five modifications of the unit:
SECU-3. The first version of the unit, developed in 2007, controls ignition, the cooling fan, and has some other functions. In the latest software releases, support for this unit has been discontinued. A history of the SECU-3 versions, with photos, is available on the project website.
SECU-3T. It can control the ignition and fuel injection. It does not contain built-in power drivers for ignition coils, fuel injectors and idling air control (IAC) valve. External drivers must be used.
SECU-3L. It was designed for ignition control only and it can be considered as a light version of the SECU-3T unit. However, it contains built-in drivers for ignition coils, as well as manifold absolute pressure (MAP) sensor. Regarding the software, it is fully compatible with the SECU-3T unit.
SECU-3 Micro. Very easy-to-use and low-cost ignition controller unit in small plastic enclosure. Has only few inputs and outputs and doesn't contain built-in power drivers for ignition coils. It is the simplest SECU-3 unit.
SECU-3i. Full-featured, complete engine management system in metal enclosure with integrated power drivers (for ignition coils, injectors, IAC actuator etc.), with extended number of I/O and Bluetooth connectivity. The latest development of the system. This unit has double-board design.
The device is built around the 8-bit AVR microcontroller ATmega644, with 64 kB of program memory (flash ROM) and 4 kB of random access memory (RAM), operating at a clock frequency of 20 MHz. It includes analog and digital inputs, a separate chip for preprocessing the signal from the knock sensor (KS) (except the SECU-3 Lite and Micro units), a signal conditioner for the VR start-pulse sensor (except the SECU-3 Micro unit), a signal conditioner for the VR crankshaft position sensor (CKP), an interface with a computer, and outputs for actuator control.
Structural diagrams of the systems built around the SECU-3T, SECU-3L and SECU-3 Micro units are given in the project documentation (figures not reproduced here).
An example wiring diagram of the SECU-3T unit for controlling simultaneous or semi-sequential fuel injection on a 4-cylinder engine is also available there. High-impedance injectors and a stepper IAC valve are used. The right side of that diagram shows the external connector functions, which should be remapped to the specified values; this is done in the SECU-3 Manager software.
History
The first version of SECU-3 was launched in October 2007 and has been running on the author's (A. Shabelnikov's) vehicle ever since.
Since then, the project has gained many new features and synchronization methods. Discussion of the project began in 2007 in a topic on the iXBT forum. In December 2010 the discussion moved to the forum on diyefi.org, and in 2013 the project's own forum was opened.
The system has evolved from an ignition controller into a full engine management system (ECU). The project has been maintained by the author throughout.
Current status
Development continues on fuel injection features and algorithms (among them, full-sequential injection support). The author is also working on software for the SECU-3i unit.
Features of the current firmware related to fuel injection:
Simultaneous (all injectors open at the same time), central (throttle-body) injection, two banks alternating (two injectors or two banks of injectors fire alternately) and semi-sequential injection (injectors fire in pairs); full-sequential injection support is planned
Speed-density method of estimating airflow into the engine (MAP and IAT sensors are used; see the sketch after this list)
Open-loop IAC (a closed-loop (PID) algorithm is planned)
Closed-loop control of AFR using an oxygen sensor, with the ability to set the voltage threshold, integrator step, etc.
Tables: VE, AFR, injector opening time, warm-up enrichment, IAC position on cranking, IAC position vs coolant temperature, injection time on cranking, injection timing
After-start enrichment
Acceleration enrichment (TPS, RPM)
Priming pulse (wetting of the intake manifold) vs coolant temperature
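The speed-density item above admits a compact illustration. The relation below is the generic ideal-gas form of the method, not SECU-3's actual firmware code, and every calibration number is an assumption:

```python
# Minimal sketch of the speed-density airflow estimate used by many EFI
# systems (SECU-3 included); generic ideal-gas formulation, with all
# calibration numbers being illustrative assumptions.
R_AIR = 287.05          # J/(kg K), specific gas constant of air

def cylinder_air_mass(map_kpa, iat_c, displ_l, n_cyl, ve):
    """Air mass inducted per intake stroke of one cylinder, kg."""
    p = map_kpa * 1000.0               # manifold absolute pressure, Pa
    t = iat_c + 273.15                 # intake air temperature, K
    v_cyl = displ_l / 1000.0 / n_cyl   # one cylinder's volume, m^3
    return p * v_cyl * ve / (R_AIR * t)

# Example: 1.6 l four-cylinder at 60 kPa MAP, 25 C intake air, VE = 0.85
m_air = cylinder_air_mass(60, 25, 1.6, 4, 0.85)
m_fuel = m_air / 14.7                  # stoichiometric gasoline AFR
print(f"fuel per injection event: {m_fuel * 1e6:.1f} mg")   # ~16 mg
```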
License
GPL, TAPR OHL with one addition: the developments cannot be used for commercial purposes without the written approval of the author (according to information on the official site).
Features
Support of engines with the following number of cylinders: 1, 2, 3, 4, 5, 6, 8
Synchronization from CKP sensor and missing tooth wheel or from two sensors and non-missing tooth wheel. Support of 60-2, 36-1 and other wheels with different number of teeth (from 16 to 200)
Synchronization from Hall sensor (the distributor can be left in the system)
Advance angle regulation from engine speed (from 1 VR sensor, 2 VR sensors or Hall sensor)
Advance angle regulation from engine load (from MAP sensor)
Correction of advance angle depending on temperature (various types of coolant temperature sensors)
Correction of advance angle from detonation (from 1 or 2 knock sensors)
Measurement of on-board electric system voltage
Idle cut-off valve solenoid control
Control of power valve solenoid (part load enrichment valve)
Multichannel output (from 1 up to 6 igniters). It is possible to use up to 8 channels!
Support of 2 channel igniters (single input driven by both edges)
RS-232 interface for reprogramming, control and tuning (with optical isolation) or USB interface (without optical isolation)
Ability to control the engine cooling fan (use of PWM is also supported)
Starter blocking when engine speed reaches a specified value
Gas equipment support (automatic switching between gas/gasoline modes)
Output for error indication “Check Engine” with support of blink codes
Ability to start the boot loader in an emergency
Ability to recover settings in an emergency
Idle speed regulation using advance angle
Control of coil energy accumulation (dwell control)
Cam sensor support (coil per cylinder – full sequential ignition)
Fuel injection control (central, simultaneous injection)
Injection timing control using custom fuel map
Calculation of air flow using Speed-Density method
AFR control on carburetor (Solex) by means of valve actuators with closed loop (oxygen sensor)
Additional features:
Control of an electric fuel pump
Tunable pulses output for Hall sensor or tachometer
Embedded stroboscope function (it is possible to use any free output)
Output and input functions remapping
Determine throttle gate position by TPS sensor
Processing and logging of signals from 2 additional inputs (e.g. an oxygen sensor can be connected)
Power management (ability of some functions to work after ignition is switched off, for instance working off cooling fan or auto choke control)
Carburetor choke control (using a stepper motor)
Stepper gas valve control
Support of a speed sensor (displaying and logging vehicle speed (km/h) and distance travelled (km))
Control of manifold heater (also known as grid and intake heaters)
Correction of advance angle using air temperature sensor (connected to one of 2 additional inputs)
3 versatile programmable outputs, which can be programmed by user to perform different actions in very convenient and flexible way
Version differences
References
External links
Official site of the project
Official forum of the project
Author's official page in VK social network
Users' community in VK social network
Old forum of the project on DIYEFI.org
Old topic on the iXBT forum (when SECU-3 had no its own forum)
History of developing of the SECU-3T unit
Author's page on the GitHub (repositories)
Schematic diagram of the SECU-3T unit (for revCU6 board)
Repository containing all information and documentation
Electronic control unit for internal combustion engine SECU-3, p.90-95, ISSN 2411-2798
Ignition control system for internal combustion engines SECU-3L (Lite), p.115-121, p-ISSN 2079-5459, e-ISSN 2413-4295
Microprocessor system SECU-3 for internal combustion engine control, p.22-25
Fuel injection time calculation in the internal combustion engine control unit SECU-3, p.55-56
SECU-3i PROGRAMMABLE ENGINE MANAGEMENT SYSTEM, с.67-73
MICROPROCESSOR CONTROLLED IGNITION SYSTEM SECU-3 MICRO, с.55-61
Schematic diagram of the SECU-3L (Lite) unit
Community in Google+
Community in Facebook
ATmega644 datasheet
Engine control systems
Engine technology
Fuel injection systems
Engine components
Onboard computers | SECU-3 | [
"Technology"
] | 2,147 | [
"Engine technology",
"Engine components",
"Engines"
] |
48,493,924 | https://en.wikipedia.org/wiki/Acmella%20nana | Acmella nana is a species of land snail discovered from Borneo, Malaysia, in 2015. It was described by Jaap J. Vermeulen of the JK Art and Science in Leiden, Thor-Seng Liew of the Institute for Tropical Biology and Conservation at the Universiti Malaysia Sabah, and Menno Schilthuizen of the Naturalis Biodiversity Center in Leiden. It was named nana (Latin for "dwarf") due to its minute size. Measuring only 0.7 millimeters in size, it is the smallest known land snail as of 2015. It surpasses the earlier record attributed to Angustopila dominikae, which is 0.86 mm in size, described from China in September 2015.
Etymology
The genus name Acmella is derived from a Greek word akme meaning "(the highest) point, edge or peak of anything." The species name nana was derived from a Latin word nanus meaning "dwarf", and was chosen because of its small size.
Description
Acmella nana has a shell, which is whitish in colour and has a shiny appearance. The shell is translucent and measures 0.50 to 0.60 mm in width, and 0.60 to 0.79 mm in height. On average, the size is 0.7 mm. Due to its size, it cannot be noticed directly by the naked eye, but it can be seen clearly under a microscope. It has 2 to 3 whorls, and the aperture opening is 0.26-0.30 mm wide and 0.30-0.37 mm high.
Discovery
Acmella nana was discovered on limestone hills in Borneo. Knowing that limestone and snail shells are both composed of calcium carbonate, the research team led by two Dutch biologists, Jaap J. Vermeulen and Menno Schilthuizen, and a Malaysian biologist, Thor-Seng Liew, collected soil, litter and dirt from the cliffs. They then separated the larger particles from the smaller ones using sieves. They put the larger particles in a bucket of water and stirred them. The minerals such as clay and sand settled at the bottom, while the shells, being buoyant, floated up to the surface. This is because, although the shells are chemically the same as the minerals, their internal cavities contain air pockets. The shells were then examined under a microscope. The taxonomic description was published in the 2 November 2015 issue of ZooKeys, and the paper also included a report of 47 other new species of snails. Schilthuizen remarked, saying, "Our paper was in review when that paper on Angustopila dominikae came out and it was only then that we realized that one of 'our' species was actually smaller." The main specimen (holotype) was collected by Vermeulen from the Niah Caves in Sarawak, Malaysia. Other specimens were from Sabah.
Biology
Since the specimens are only shells, and not living snails, it is not possible to know the details of the species' biology. But a closely related species, Acmella polita, is known in the same area. This snail eats thin films of bacteria and fungi growing on the limestone walls inside the caves, and the researchers speculated that Acmella nana does the same. The presence of an operculum, a lid-like structure that closes the shell's opening, suggests that it may also possess gills. Gills would be the respiratory organs for their wet environment; such gills are known in aquatic snails. It is also assumed that the species is distributed in other parts of Borneo, and that its survival is not under serious threat. However, according to Schilthuizen, the snails could become threatened by heavy quarrying in these limestone hills.
See also
Smallest organisms
References
Gastropods described in 2015
Organism size
Biological records
Assimineidae | Acmella nana | [
"Physics",
"Mathematics",
"Biology"
] | 771 | [
"Quantity",
"Physical quantities",
"Organism size",
"Size"
] |
32,361,704 | https://en.wikipedia.org/wiki/Slip%20ratio%20%28gas%E2%80%93liquid%20flow%29 | Slip ratio (or velocity ratio) in gas–liquid (two-phase) flow is defined as the ratio of the velocity of the gas phase to the velocity of the liquid phase.
In the homogeneous model of two-phase flow, the slip ratio is by definition assumed to be unity (no slip). It is however experimentally observed that the velocity of the gas and liquid phases can be significantly different, depending on the flow pattern (e.g. plug flow, annular flow, bubble flow, stratified flow, slug flow, churn flow). The models that account for the existence of the slip are called "separated flow models".
The following identities can be written using the interrelated definitions:

S = u_G / u_L = (U_G / U_L) · ((1 − ε) / ε) = (x / (1 − x)) · (ρ_L / ρ_G) · ((1 − ε) / ε)

where:
S – slip ratio, dimensionless
indices G and L refer to the gas and the liquid phase, respectively
u – velocity, m/s
U – superficial velocity, m/s
ε – void fraction, dimensionless
ρ – density of a phase, kg/m3
x – steam quality, dimensionless.
Correlations for the slip ratio
There are a number of correlations for slip ratio.
For homogeneous flow, S = 1 (i.e. there is no slip).
The Chisholm correlation is:

S = [1 − x (1 − ρ_L / ρ_G)]^(1/2)
The Chisholm correlation is based on application of the simple annular flow model and equates the frictional pressure drops in the liquid and the gas phase.
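Combining the Chisholm slip ratio with the identity above gives the void fraction from the steam quality; a small worked sketch, with property values assumed to be roughly those of saturated water/steam at 10 bar:

```python
import math

# Void fraction from steam quality using the Chisholm slip correlation,
# then the slip/void identity eps = 1 / (1 + S * ((1-x)/x) * (rho_G/rho_L)).
# Fluid properties are illustrative (about saturated water/steam at 10 bar).
rho_L, rho_G = 887.0, 5.15   # kg/m^3 (assumed property values)
x = 0.10                     # steam quality

S = math.sqrt(1 - x * (1 - rho_L / rho_G))       # Chisholm slip ratio
eps = 1.0 / (1.0 + S * ((1 - x) / x) * (rho_G / rho_L))
print(f"S = {S:.2f}, void fraction = {eps:.2f}")  # S ~4.3, eps ~0.82
```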
The slip ratio for two-phase cross-flow in horizontal tube bundles may be determined using the following correlation (due to Feenstra et al.):

S = 1 + 25.7 (Ri · Cap)^0.5 (P / D)^(−1)

where the Richardson and capillary numbers are defined as Ri = (ρ_L − ρ_G)² g a / G² and Cap = μ_L u_G / σ.
For enhanced-surface tube bundles the slip ratio can be defined as:
Where:
S – slip ratio, dimensionless
P – tube centerline pitch
D – tube diameter
Subscript L – liquid phase
Subscript G – gas phase
g – gravitational acceleration
a – minimum distance between the tubes
G – mass flux (mass flow per unit area)
μ – dynamic viscosity
σ – surface tension
x – thermodynamic quality
ε – void fraction
References
Fluid dynamics | Slip ratio (gas–liquid flow) | [
"Chemistry",
"Engineering"
] | 403 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
32,363,043 | https://en.wikipedia.org/wiki/Agaropectin | Agaropectin is one of the two main components of agar.
Structure
Agaropectin is a sulfated galactan mixture which makes up about 30% of agar. It is composed of varying percentages of organosulfates (sulfate esters), D-glucuronic acid and small amounts of pyruvic acid. It is made up of alternating units of D-galactose and L-galactose heavily modified with acidic side-groups, usually sulfate, glucuronate, and pyruvate. Pyruvic acid is possibly attached in an acetal form to the D-galactose residues of the agarobiose skeleton. The sulfate content of the agar depends on the source of the raw material from which it is derived. Acetylation of agaropectin yields the chloroform-insoluble agaropectin acetate, as opposed to agarose acetate. This process can be used to separate the two polysaccharides via fractionation.
Use
Agaropectin has no commercial value and is discarded during the commercial processing of agar, and food grade agar is mainly composed of agarose with a molecular weight of about 120 kDa.
References
Polysaccharides
Organosulfates | Agaropectin | [
"Chemistry"
] | 280 | [
"Carbohydrates",
"Polysaccharides"
] |
32,365,238 | https://en.wikipedia.org/wiki/Paavo%20Pylkk%C3%A4nen | Paavo Pylkkänen (born 1959) is a Finnish philosopher of mind. He is an Associate Professor of Philosophy at the University of Skövde and a university lecturer in theoretical philosophy at the University of Helsinki. He is known for his work on mind-body studies, building on David Bohm's interpretation of quantum mechanics, in particular Bohm's view of the cosmos as an enfolding and unfolding whole including mind and matter.
Work
Pylkkänen's areas of specialization are the mind-body problem, the basis of cognitive science, philosophy of physics, the philosophy of David Bohm and the foundations of quantum theory.
Since 1996 he has been employed at the University of Skövde in Skövde, Sweden, where he initiated a consciousness studies program combining philosophy and cognitive neuroscience. He is currently a temporary university lecturer in theoretical philosophy at the Department of Philosophy, History, Culture and Art Studies, at University of Helsinki, where he has regularly worked since 2008.
Pylkkänen has worked and published together with theoretical physicist Basil Hiley, a close co-worker of David Bohm over three decades. Hiley and Pylkkänen together addressed the question of the relation between mind and matter through the hypothesis of active information within the conceptual framework of the de Broglie–Bohm theory. Pylkkänen's work Mind, Matter and the Implicate Order (2007) builds upon David Bohm's ontological interpretation of quantum theory, in which quantum processes are understood as a holomovement in terms of implicate and explicate orders.
Bibliography
Articles and book chapters
Pylkkänen, P.: Fundamental Physics and the Mind – Is There a Connection? Quantum Interaction, Lecture Notes in Computer Science, Volume 8951, p. 3–11, 20 February 2015
Pylkkänen, P.: David Bohm och den vetenskapliga andan, 2010, Beyond belief and knowledge: Thoughts from a dialogue. Liljenström, H. & Linderman, A. (eds.). Stockholm : Carlsson p. 127–142. (in Swedish)
Pylkkänen, P. : Quantum philosophy is philosophy enough, 2010, How we became doctors of philosophy. Roinila, M. (ed.). Helsinki: Suomen Filosofinen Yhdistys ry. p. 151–157.
Pylkkänen, P.: Implications of Bohmian quantum ontology for psychopathology, March 2010. In : NeuroQuantology. vol. 8, no. 1, p. 37–48.
Pylkkänen, P.: Does dynamical modelling explain time consciousness?, 2007, Computation, Information, Cognition: The Nexus and the Liminal. Stuart, S. & Crnkovic, G. D. (eds.). Newcastle: Cambridge Scholars Press p. 218–229.
Pylkkänen, P.: Escaping the prison of language, 2007, Communication - Action - Meaning: A Festschrift to Jens Allwood. E. A. (ed.). Department of Linguistics, Göteborg University
Pylkkänen, P. & Hiley, B. J.: Can mind affect matter via active information?, In: Mind and Matter, vol.3, no.2, Imprint Academic, 2005, p. 7–26.
Books and edited works
Dewdney, C., Pylkkänen, P., Atmanspacher, H. (eds.) Foundations of Physics Vol. 43(4) April 2013, Special issue: Hiley Festschrift. Springer.
Pylkkänen, P.: Mind, Matter and the Implicate Order, 2007 Berlin Heidelberg New York: Springer-Verlag. 270 p. (The Frontiers Collection), .
Paavo Pylkkänen and Tere Vadén (eds.): Dimensions of conscious experience, Advances in Consciousness Research, Volume 37, John Benjamins B.V., 2001, .
David Bohm & Charles Biederman (Paavo Pylkkänen, ed.): Bohm-Biederman Correspondence: Creativity and science, Routledge, 1999, .
P. Pylkkänen, P. Pylkkö, A. Hautamäki (eds.): Brain, Mind and Physics, IOS Press, 1997, .
P. Pylkkänen: Mind, matter and active information: the relevance of David Bohm's interpretation of quantum theory to cognitive science, Yliopistopaino, 1992,
P. Pylkkänen (ed.): The Search for Meaning: The New Spirit in Science and Philosophy, Crucible, The Aquarian Press, 1989, .
References
External links
Paavo Pylkkänen, University of Helsinki
Paavo Pylkkänen, publications
Paavo Pylkkänen, University of Skövde
1959 births
20th-century Finnish philosophers
21st-century Finnish philosophers
Consciousness researchers and theorists
Quantum mind
Academic staff of the University of Helsinki
Philosophers of mind
Living people
Academic staff of the University of Skövde | Paavo Pylkkänen | [
"Physics"
] | 1,055 | [
"Quantum mind",
"Quantum mechanics"
] |
32,365,813 | https://en.wikipedia.org/wiki/Sigma%20coordinate%20system | The sigma coordinate system is a common coordinate system used in computational models for oceanography, meteorology and other fields where fluid dynamics are relevant. This coordinate system receives its name from the independent variable used to represent a scaled pressure level.
Models that use a sigma coordinate system include the Princeton Ocean Model (POM), the COupled Hydrodynamical Ecological model for REgioNal Shelf seas (COHERENS), the ECMWF Integrated Forecast System, and various other numerical weather prediction models.
Description
Pressure at a height, p, may be scaled with the surface pressure p_s, or less often with the pressure at the top of the defined domain p_T, so that σ = p / p_s. The sigma value at the scale reference is by definition 1: i.e., if surface-scaled, σ = 1 at the surface.
In a sigma coordinate system, if the sigma scale is divided equally, then at every point on the surface, each horizontal layer above that point has the same thickness in terms of sigma, although in terms of metres each next higher equal sigma-thickness layer is thicker than the previous one. The sigma-thickness of each layer decreases with surface altitude, the sigma-levels being compressed together (in terms of metres) as the total vertical range is reduced.
The sigma coordinate system allows sigma-surfaces to follow model terrain; where terrain is sharply sloped, so are the sigma surfaces. This allows for continuous fields, such as temperature, to be represented especially smoothly at the lowest layers in the model. Further, with the exponential decaying nature of density within the atmosphere, sigma coordinates provide a greater vertical resolution (in terms of metres) near the surface. The sloping nature of the coordinate surfaces does require additional interpolation of the pressure gradient force, and the smoothing of terrain can often cause it to extend beyond the true boundaries of land.
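The terrain-following behaviour is easy to see numerically: the same set of sigma values maps to different pressures over low and high ground. A small illustration (the surface pressures are assumed values):

```python
import numpy as np

# Terrain-following behaviour of sigma levels: identical sigma values map
# to different pressures (and layer thicknesses) over low and high terrain.
# Surface-scaled case: sigma = p / p_surface, so p = sigma * p_surface.
sigma_levels = np.linspace(1.0, 0.2, 5)      # 1.0 = surface

for name, p_sfc in [("sea level", 1013.0), ("mountain station", 700.0)]:
    p = sigma_levels * p_sfc                 # pressure on each level, hPa
    print(name, np.round(p, 1))
# Over the mountain the same sigma surfaces sit at lower pressures,
# i.e. the coordinate surfaces follow the terrain.
```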
Sigma coordinate hybrids
Hybrid sigma-pressure
Some atmospheric models use a hybrid sigma-pressure coordinate scheme, combining sigma-denominated layers at the bottom (following terrain) with isobaric (pressure-denominated) layers aloft. The isobaric upper layers are generally more numerically tractable (since flatter), and specifically more tractable for radiative transfer calculations (important for assimilating satellite radiance observations). Some models (e.g., the 2009 NAM) have a pure sigma domain at the bottom and a fixed transition level, above which all layers are exactly isobaric. Other models (e.g., GFS) gradually transition from sigma to isobaric.
Hybrid sigma-density
Some oceanographic models uses coordinates which similarly transition from density (isopycnic) to sigma coordinates in shallow coastal shelf regions.
References
Synoptic meteorology and weather
Oceanography
Numerical climate and weather models | Sigma coordinate system | [
"Physics",
"Environmental_science"
] | 549 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
32,367,668 | https://en.wikipedia.org/wiki/DWSIM |
DWSIM is an open-source CAPE-OPEN compliant chemical process simulator for Windows, Linux and macOS. DWSIM is built on top of the Microsoft .NET and Mono Platforms and features a graphical user interface (GUI), advanced thermodynamics calculations, reactions support and petroleum characterization / hypothetical component generation tools.
DWSIM is able to simulate steady-state, vapor–liquid, vapor–liquid-liquid, solid–liquid and aqueous electrolyte equilibrium processes with the following Thermodynamic Models and Unit Operations:
Thermodynamic models: CoolProp, Peng–Robinson equation of state, Peng–Robinson-Strÿjek-Vera (PRSV2), Soave–Redlich–Kwong, Lee-Kesler, Lee-Kesler-Plöcker, UNIFAC(-LL), Modified UNIFAC (Dortmund), Modified UNIFAC (NIST), UNIQUAC, NRTL, Chao-Seader, Grayson-Streed, Extended UNIQUAC, Raoult's Law, IAPWS-IF97 Steam Tables, IAPWS-08 Seawater, Black-Oil and Sour Water;
Unit operations: CAPE-OPEN Socket, Spreadsheet, Custom (IronPython Script), Mixer, Splitter, Separator, Pump, Compressor, Expander, Heater, Cooler, Valve, Pipe Segment, Shortcut Column, Heat exchanger, Reactors (Conversion, PFR, CSTR, Equilibrium and Gibbs), Distillation column, Simple, Refluxed and Reboiled Absorbers, Component Separator, Solids Separator, Continuous Cake Filter and Orifice plate;
Utilities: Binary Data Regression, Phase Envelope, Natural Gas Hydrates, Pure Component Properties, True Critical Point, PSV Sizing, Vessel Sizing, Spreadsheet and Petroleum Cold Flow Properties;
Tools: Hypothetical Component Generator, Bulk C7+/Distillation Curves Petroleum Characterization, Petroleum Assay Manager, Reactions Manager and Compound Creator;
Process Analysis and Optimization: Sensitivity Analysis Utility, Multivariate Optimizer with bound constraints;
Extras: Support for Runtime Python Scripts, Plugins and CAPE-OPEN Flowsheet Monitoring Objects.
Android and iOS versions
DWSIM is also available on Android and iOS, where it is free to download. On these platforms, DWSIM includes a basic set of features while more advanced modules can be unlocked through in-app purchases.
Raspberry Pi version
A special DWSIM build is available for Raspberry Pi 2/3 devices running an armhf-based Linux distribution like Raspbian and Ubuntu MATE.
See also
Process design (chemical engineering)
List of Chemical Process Simulators
Standard temperature and pressure
External links
DWSIM homepage - documentation, download links, tutorials, help and support for DWSIM.
CO-LaN - the CAPE-OPEN Laboratories Network is a neutral industry and academic association promoting open interface standards in process simulation software. CO-LaN members are committed to making Computer Aided Process Engineering easier, faster and less expensive by achieving complete interoperability of compliant commercial CAPE software tools. CO-LaN supports and maintains the CAPE-OPEN interface standards.
References
Simulation software
Chemical engineering software | DWSIM | [
"Chemistry",
"Engineering"
] | 676 | [
"Chemical engineering software",
"Chemical engineering"
] |
32,370,985 | https://en.wikipedia.org/wiki/Integrated%20stress%20response | The integrated stress response is a cellular stress response conserved in eukaryotic cells that downregulates protein synthesis and upregulates specific genes in response to internal or environmental stresses.
Background
The integrated stress response can be triggered within a cell due to either extrinsic or intrinsic conditions. Extrinsic factors include hypoxia, amino acid deprivation, glucose deprivation, viral infection and presence of oxidants. The main intrinsic factor is endoplasmic reticulum stress due to the accumulation of unfolded proteins. It has also been observed that the integrated stress response may trigger due to oncogene activation. The integrated stress response will either cause the expression of genes that fix the damage in the cell due to the stressful conditions, or it will cause a cascade of events leading to apoptosis, which occurs when the cell cannot be brought back into homeostasis.
eIF2 protein complex
Stress signals can cause protein kinases, known as EIF-2 kinases, to phosphorylate the α subunit of a protein complex called translation initiation factor 2 (eIF2), resulting in the gene ATF4 being turned on, which will further affect gene expression. eIF2 consists of three subunits: eIF2α, eIF2β and eIF2γ. eIF2α contains two binding sites, one for phosphorylation and one for RNA binding. The kinases work to phosphorylate serine 51 on the α subunit, which is a reversible action. In a cell experiencing normal conditions, eIF2 aids in the initiation of mRNA translation and recognizing the AUG start codon. However, once eIF2α is phosphorylated, the complex’s activity reduces, causing reduction in translation initiation and protein synthesis, while promoting expression of the ATF4 gene.
Protein kinases
There are four known mammalian protein kinases that phosphorylate eIF2α, including PKR-like ER kinase (PERK, EIF2AK3), heme-regulated eIF2α kinase (HRI, EIF2AK1), general control non-depressible 2 (GCN2, EIF2AK4) and double stranded RNA dependent protein kinase (PKR, EIF2AK2).
PERK
PERK (encoded in humans by the gene EIF2AK3) responds mainly to endoplasmic reticulum stress and has two modes of activation. This kinase has a unique luminal domain that plays a role in activation. The classical model of activation states that the luminal domain is normally bound to 78-kDa glucose-regulated protein (GRP78). Once there is a buildup of unfolded proteins, GRP78 dissociates from the luminal domain. This causes PERK to dimerize, leading to autophosphorylation and activation. The activated PERK kinase will then phosphorylate eIF2α, causing a cascade of events. Thus, the activation of this kinase is dependent on the aggregation of unfolded proteins in the endoplasmic reticulum. PERK has also been observed to activate in response to activity of the proto-oncogene MYC. This activation causes ATF4 expression, resulting in tumorigenesis and cellular transformation.
HRI
HRI (encoded in humans by the gene EIF2AK1) also dimerizes in order to autophosphorylate and activate. This activation is dependent on the presence of heme. HRI has two domains that heme may bind to, one at the N-terminus and one in the kinase insertion domain. The presence of heme causes a disulfide bond to form between the monomers of HRI, resulting in the structure of an inactive dimer. However, when heme is absent, HRI monomers form an active dimer through non-covalent interactions. Therefore, the activation of this kinase is dependent on heme deficiency. HRI activation can also occur due to other stressors such as heat shock, osmotic stress and proteasome inhibition. Activation of HRI in response to these stressors does not depend on heme, but rather relies on the help of two heat shock proteins (HSP90 and HSP70). HRI is mainly found in the precursors of red blood cells, and has been observed to increase during erythropoiesis.
GCN2
GCN2 (encoded in humans by the gene EIF2AK4) is activated as a result of amino acid deprivation. The mechanisms regarding this activation are still being researched; however, one mechanism has been studied in yeast. It was observed that GCN2 binds to uncharged/deacylated tRNA which causes a conformational change, resulting in dimerization. Dimerization then causes autophosphorylation and activation. Other stressors have also been reported to activate GCN2. GCN2 activation was observed in glucose deprived tumor cells, although it was suggested that it was an indirect effect due to cells using amino acids as an alternate energy source. In mouse embryonic fibroblast cells and human keratinocytes, GCN2 was activated due to UV light exposure. The pathways for this activation require further research, although multiple models have been proposed, including crosslinking between GCN2 and tRNA.
PKR
PKR (encoded in humans by the gene EIF2AK2) activation is mainly dependent on the presence of double-stranded RNA during a viral infection. dsRNA causes PKR to form dimers, resulting in autophosphorylation and activation. Once activated, PKR will phosphorylate eIF2α which causes a cascade of events that result in viral and host protein synthesis being inhibited. Other stressors that cause the activation of PKR include oxidative stress, endoplasmic reticulum stress, growth factor deprivation and bacterial infection. Caspase activity early on in apoptosis has also been observed to trigger activation of PKR. However, these stressors differ in that they activate PKR without using dsRNA.
ATF4
When a cell is subjected to stressful conditions, the ATF4 gene is expressed. The ATF4 transcription factor has the ability to form dimers with many different proteins that influence gene expression and cell fate. ATF4 binds to C/EBP‐ATF response element (CARE) sequences which work together to increase the transcription of stress-responsive genes. However, when undergoing amino acid starvation, the sequences will act as amino acid response elements instead.
ATF4 works together with other transcription factors, such as CHOP and ATF3, by forming homodimers or heterodimers, resulting in numerous observed effects. The proteins that ATF4 interacts with determine the outcome of the cell during the integrated stress response. For example, ATF4 and ATF3 work to establish homeostasis inside the cell following stressful conditions. On the other hand, ATF4 and CHOP work together to induce cell death, as well as to regulate amino acid biosynthesis, transport and metabolic processes. The presence of a basic leucine zipper (bZIP) domain allows ATF4 to work together with many other proteins, thus creating specific responses to different types of stressors. When a cell is undergoing the stress of hypoxia, ATF4 will interact with PHD1 and PHD3 to decrease its transcriptional activity. In addition, when a cell is undergoing amino acid starvation or endoplasmic reticulum stress, TRIP3 also interacts with ATF4 to decrease activity.
One result of ATF4 and stress-response proteins expression is the induction of autophagy. During this process, the cell forms autophagosomes, or double membraned vesicles, that allow for transportation of material throughout the cell. These autophagosomes can carry unneeded organelles and proteins, as well as damaged or harmful components in an attempt by the cell to maintain homeostasis.
Termination of integrated stress response
In order to terminate the integrated stress response, dephosphorylation of eIF2α is required. The protein phosphatase 1 (PP1) complex aids in the dephosphorylation of eIF2α. This complex contains a PP1 catalytic subunit together with one of two regulatory subunits, proteins that negatively regulate the integrated stress response: growth arrest and DNA damage-inducible protein (GADD34), also known as PPP1R15A, and constitutive repressor of eIF2α phosphorylation (CReP), also known as PPP1R15B. CReP acts to keep levels of eIF2α phosphorylation low in cells under normal conditions. GADD34 is produced in response to ATF4 and works to increase dephosphorylation of eIF2α. The dephosphorylation of eIF2α results in the return of normal protein synthesis and cellular function. However, dephosphorylation of eIF2α can also facilitate the production of death-inducing proteins in cases where the cell is so severely damaged that normal functioning cannot be restored.
Mutations affecting integrated stress response
Mutations that affect the functioning of the integrated stress response may have debilitating effects on cells. For example, cells lacking the ATF4 gene are unable to elicit proper gene expression in response to stressors. This results in cells exhibiting issues with amino acid transport, glutathione biosynthesis and oxidative stress resistance. When a mutation inhibits the functioning of PERK, endogenous peroxides accumulate when the cell experiences endoplasmic reticulum stress. In mice and humans lacking PERK, destruction of secretory cells undergoing high endoplasmic reticulum stress has been observed.
See also
ISRIB, integrated stress response inhibitor
References
Cellular processes
Eukaryote biology
Gene expression
Proteins | Integrated stress response | [
"Chemistry",
"Biology"
] | 2,093 | [
"Biomolecules by chemical classification",
"Gene expression",
"Eukaryote biology",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry",
"Proteins",
"Eukaryotes"
] |
42,191,418 | https://en.wikipedia.org/wiki/Imaging%20cycler%20microscopy | An imaging cycler microscope (ICM) is a fully automated (epi)fluorescence microscope which overcomes the spectral resolution limit, resulting in parameter- and dimension-unlimited fluorescence imaging. The principle and robotic device were described by Walter Schubert in 1997 and have been further developed with his co-workers within the human toponome project. The ICM runs robotically controlled repetitive incubation-imaging-bleaching cycles with dye-conjugated probe libraries recognizing target structures in situ (biomolecules in fixed cells or tissue sections). This allows the transmission of an arbitrarily large number of distinct pieces of biological information by re-using the same fluorescence channel after bleaching to transmit another piece of biological information with the same dye conjugated to another specific probe, and so on. Thereby noise-reduced quasi-multichannel fluorescence images with reproducible physical, geometrical, and biophysical stabilities are generated. The resulting power of combinatorial molecular discrimination (PCMD) per data point is given by 65,536^k, where 65,536 is the number of grey value levels (the output of a 16-bit CCD camera) and k is the number of co-mapped biomolecules and/or subdomains per biomolecule(s). High PCMD has been shown for k = 100, and it can in principle be extended to much higher values of k. In contrast to traditional multichannel, few-parameter fluorescence microscopy (panel a in the figure), high PCMDs in an ICM lead to high functional and spatial resolution (panel b in the figure). Systematic ICM analysis of biological systems reveals the supramolecular segregation law that describes the principle of order of large, hierarchically organized biomolecular networks in situ (the toponome). The ICM is the core technology for the systematic mapping of the complete protein network code in tissues (the human toponome project). The original ICM method includes any modification of the bleaching step; corresponding modifications, recently debated, have been reported for antibody retrieval and chemical dye-quenching. The Toponome Imaging Systems (TIS) and multi-epitope-ligand cartographs (MELC) represent different stages of the ICM technological development. Imaging cycler microscopy received the American ISAC best paper award in 2008 for the three-symbol code of organized proteomes.
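Since 65,536^k grows too large to print directly, a quick way to sanity-check the PCMD figure is to work with its base-10 logarithm, as in this short Python sketch (the smaller values of k are illustrative):

```python
# PCMD per data point = 65,536**k: 65,536 grey levels from a 16-bit CCD
# camera, k co-mapped biomolecules. The number overflows intuition quickly,
# so report its base-10 logarithm instead.
import math

def pcmd_log10(k, grey_levels=2**16):
    """log10 of the power of combinatorial molecular discrimination."""
    return k * math.log10(grey_levels)

for k in (1, 10, 100):
    print(f"k = {k:3d}: PCMD ~ 10^{pcmd_log10(k):.1f}")
# k = 100 (demonstrated in practice) gives roughly 10^482 combinations.
```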
Citations
References
"3D all-organelle real time visualization of a single cell"
"Visualizing the protein-DNA network code inside the cell nucleus"
Further reading
Systems biology
Bioinformatics
Omics
Topology | Imaging cycler microscopy | [
"Physics",
"Mathematics",
"Engineering",
"Biology"
] | 549 | [
"Biological engineering",
"Bioinformatics",
"Omics",
"Topology",
"Space",
"Geometry",
"Spacetime",
"Systems biology"
] |
42,193,218 | https://en.wikipedia.org/wiki/Human%20digestive%20system | The human digestive system consists of the gastrointestinal tract plus the accessory organs of digestion (the tongue, salivary glands, pancreas, liver, and gallbladder). Digestion involves the breakdown of food into smaller and smaller components, until they can be absorbed and assimilated into the body. The process of digestion has three stages: the cephalic phase, the gastric phase, and the intestinal phase.
The first stage, the cephalic phase of digestion, begins with secretions from gastric glands in response to the sight and smell of food. This stage includes the mechanical breakdown of food by chewing, and the chemical breakdown by digestive enzymes, that takes place in the mouth. Saliva contains the digestive enzymes amylase, and lingual lipase, secreted by the salivary and serous glands on the tongue. Chewing, in which the food is mixed with saliva, begins the mechanical process of digestion. This produces a bolus which is swallowed down the esophagus to enter the stomach.
The second stage, the gastric phase, happens in the stomach. Here, the food is further broken down by mixing with gastric acid until it passes into the duodenum, the first part of the small intestine.
The third stage, the intestinal phase, begins in the duodenum. Here, the partially digested food is mixed with a number of enzymes produced by the pancreas.
Digestion is helped by the chewing of food carried out by the muscles of mastication, the tongue, and the teeth, and also by the contractions of peristalsis, and segmentation. Gastric acid, and the production of mucus in the stomach, are essential for the continuation of digestion.
Peristalsis is the rhythmic contraction of muscles that begins in the esophagus and continues along the wall of the stomach and the rest of the gastrointestinal tract. This initially results in the production of chyme which when fully broken down in the small intestine is absorbed as chyle into the lymphatic system. Most of the digestion of food takes place in the small intestine. Water and some minerals are reabsorbed back into the blood in the colon of the large intestine. The waste products of digestion (feces) are defecated from the rectum via the anus.
Components
There are several organs and other components involved in the digestion of food. The organs known as the accessory digestive organs are the liver, gall bladder and pancreas. Other components include the mouth, salivary glands, tongue, teeth and epiglottis.
The largest structure of the digestive system is the gastrointestinal tract (GI tract). This starts at the mouth and ends at the anus, covering a distance of about nine metres.
A major digestive organ is the stomach. Within its mucosa are millions of embedded gastric glands. Their secretions are vital to the functioning of the organ.
Most of the digestion of food takes place in the small intestine which is the longest part of the GI tract.
The largest part of the GI tract is the colon or large intestine. Water is absorbed here and the remaining waste matter is stored prior to defecation.
There are many specialised cells of the GI tract. These include the various cells of the gastric glands, taste cells, pancreatic duct cells, enterocytes and microfold cells.
Some parts of the digestive system are also part of the excretory system, including the large intestine.
Mouth
The mouth is the first part of the upper gastrointestinal tract and is equipped with several structures that begin the first processes of digestion. These include salivary glands, teeth and the tongue. The mouth consists of two regions; the vestibule and the oral cavity proper. The vestibule is the area between the teeth, lips and cheeks, and the rest is the oral cavity proper. Most of the oral cavity is lined with oral mucosa, a mucous membrane that produces a lubricating mucus, of which only a small amount is needed. Mucous membranes vary in structure in the different regions of the body but they all produce a lubricating mucus, which is either secreted by surface cells or more usually by underlying glands. The mucous membrane in the mouth continues as the thin mucosa which lines the bases of the teeth. The main component of mucus is a glycoprotein called mucin and the type secreted varies according to the region involved. Mucin is viscous, clear, and clinging. Underlying the mucous membrane in the mouth is a thin layer of smooth muscle tissue and the loose connection to the membrane gives it its great elasticity. It covers the cheeks, inner surfaces of the lips, and floor of the mouth, and the mucin produced is highly protective against tooth decay.
The roof of the mouth is termed the palate and it separates the oral cavity from the nasal cavity. The palate is hard at the front of the mouth since the overlying mucosa is covering a plate of bone; it is softer and more pliable at the back being made of muscle and connective tissue, and it can move to swallow food and liquids. The soft palate ends at the uvula. The surface of the hard palate allows for the pressure needed in eating food, to leave the nasal passage clear. The opening between the lips is termed the oral fissure, and the opening into the throat is called the fauces.
At either side of the soft palate are the palatoglossus muscles which also reach into regions of the tongue. These muscles raise the back of the tongue and also close both sides of the fauces to enable food to be swallowed. Mucus helps in the mastication of food in its ability to soften and collect the food in the formation of the bolus.
Salivary glands
There are three pairs of main salivary glands and between 800 and 1,000 minor salivary glands, all of which mainly serve the digestive process, and also play an important role in the maintenance of dental health and general mouth lubrication, without which speech would be impossible. The main glands are all exocrine glands, secreting via ducts. All of these glands terminate in the mouth. The largest of these are the parotid glands—their secretion is mainly serous. The next pair, the submandibular glands, are underneath the jaw; these produce both serous fluid and mucus. The serous fluid is produced by serous glands in these salivary glands which also produce lingual lipase. They produce about 70% of the oral cavity saliva. The third pair are the sublingual glands located underneath the tongue; their secretion is mainly mucous with a small percentage of saliva.
Within the oral mucosa, and also on the tongue, palates, and floor of the mouth, are the minor salivary glands; their secretions are mainly mucous and they are innervated by the facial nerve (CN7). The glands also secrete amylase a first stage in the breakdown of food acting on the carbohydrate in the food to transform the starch content into maltose. There are other serous glands on the surface of the tongue that encircle taste buds on the back part of the tongue and these also produce lingual lipase. Lipase is a digestive enzyme that catalyses the hydrolysis of lipids (fats). These glands are termed Von Ebner's glands which have also been shown to have another function in the secretion of histatins which offer an early defense (outside of the immune system) against microbes in food, when it makes contact with these glands on the tongue tissue. Sensory information can stimulate the secretion of saliva providing the necessary fluid for the tongue to work with and also to ease swallowing of the food.
Saliva
Saliva moistens and softens food, and along with the chewing action of the teeth, transforms the food into a smooth bolus. The bolus is further helped by the lubrication provided by the saliva in its passage from the mouth into the esophagus. Also of importance is the presence in saliva of the digestive enzymes amylase and lipase. Amylase starts to work on the starch in carbohydrates, breaking it down into the simple sugars of maltose and dextrose that can be further broken down in the small intestine. Saliva in the mouth can account for 30% of this initial starch digestion. Lipase starts to work on breaking down fats. Lipase is further produced in the pancreas where it is released to continue this digestion of fats. The presence of salivary lipase is of prime importance in young babies whose pancreatic lipase has yet to be developed.
As well as its role in supplying digestive enzymes, saliva has a cleansing action for the teeth and mouth. It also has an immunological role in supplying antibodies to the system, such as immunoglobulin A. This is seen to be key in preventing infections of the salivary glands, importantly that of parotitis.
Saliva also contains a glycoprotein called haptocorrin which is a binding protein to vitamin B12. It binds with the vitamin in order to carry it safely through the acidic content of the stomach. When it reaches the duodenum, pancreatic enzymes break down the glycoprotein and free the vitamin which then binds with intrinsic factor.
Tongue
Food enters the mouth where the first stage in the digestive process takes place, with the action of the tongue and the secretion of saliva. The tongue is a fleshy and muscular sensory organ, and the first sensory information is received via the taste buds in the papillae on its surface. If the taste is agreeable, the tongue will go into action, manipulating the food in the mouth which stimulates the secretion of saliva from the salivary glands. The liquid quality of the saliva will help in the softening of the food and its enzyme content will start to break down the food whilst it is still in the mouth. The first part of the food to be broken down is the starch of carbohydrates (by the enzyme amylase in the saliva).
The tongue is attached to the floor of the mouth by a ligamentous band called the frenum and this gives it great mobility for the manipulation of food (and speech); the range of manipulation is optimally controlled by the action of several muscles and limited in its external range by the stretch of the frenum. The tongue's two sets of muscles are four intrinsic muscles that originate in the tongue and are involved with its shaping, and four extrinsic muscles originating in bone that are involved with its movement.
Taste
Taste is a form of chemoreception that takes place in the specialised taste receptors, contained in structures called taste buds in the mouth. Taste buds are mainly on the upper surface (dorsum) of the tongue. The function of taste perception is vital to help prevent harmful or rotten foods from being consumed. There are also taste buds on the epiglottis and upper part of the esophagus. The taste buds are innervated by a branch of the facial nerve, the chorda tympani, and by the glossopharyngeal nerve. Taste messages are sent via these cranial nerves to the brain. The brain can distinguish between the chemical qualities of the food. The five basic tastes are referred to as those of saltiness, sourness, bitterness, sweetness, and umami. The detection of saltiness and sourness enables the control of salt and acid balance. The detection of bitterness warns of poisons—many of a plant's defences are poisonous compounds that are bitter. Sweetness guides towards those foods that will supply energy; the initial breakdown of the energy-giving carbohydrates by salivary amylase creates the taste of sweetness, since simple sugars are the first result. The taste of umami is thought to signal protein-rich food. Sour tastes are acidic, a quality often found in spoiled food. The brain has to decide very quickly whether the food should be eaten or not. It was the findings in 1991 describing the first olfactory receptors that helped to prompt research into taste. The olfactory receptors are located on cell surfaces in the nose and bind to chemicals, enabling the detection of smells. It is assumed that signals from taste receptors work together with those from the nose to form an idea of complex food flavours.
Teeth
Teeth are complex structures made of materials specific to them. They are made of a bone-like material called dentin, which is covered by the hardest tissue in the body—enamel. Teeth have different shapes to deal with different aspects of mastication employed in tearing and chewing pieces of food into smaller and smaller pieces. This results in a much larger surface area for the action of digestive enzymes.
The teeth are named after their particular roles in the process of mastication—incisors are used for cutting or biting off pieces of food; canines, are used for tearing, premolars and molars are used for chewing and grinding. Mastication of the food with the help of saliva and mucus results in the formation of a soft bolus which can then be swallowed to make its way down the upper gastrointestinal tract to the stomach.
The digestive enzymes in saliva also help in keeping the teeth clean by breaking down any lodged food particles.
Epiglottis
The epiglottis is a flap of elastic cartilage attached to the entrance of the larynx. It is covered with a mucous membrane and there are taste buds on its lingual surface which faces into the mouth. Its laryngeal surface faces into the larynx. The epiglottis functions to guard the entrance of the glottis, the opening between the vocal folds. It is normally pointed upward during breathing with its underside functioning as part of the pharynx, but during swallowing, the epiglottis folds down to a more horizontal position, with its upper side functioning as part of the pharynx. In this manner it prevents food from going into the trachea and instead directs it to the esophagus, which is behind. During swallowing, the backward motion of the tongue forces the epiglottis over the glottis' opening to prevent any food that is being swallowed from entering the larynx which leads to the lungs; the larynx is also pulled upwards to assist this process. Stimulation of the larynx by ingested matter produces a strong cough reflex in order to protect the lungs.
Pharynx
The pharynx is a part of the conducting zone of the respiratory system and also a part of the digestive system. It is the part of the throat immediately behind the nasal cavity at the back of the mouth and above the esophagus and larynx. The pharynx is made up of three parts. The lower two parts, the oropharynx and the laryngopharynx, are involved in the digestive system. The laryngopharynx connects to the esophagus and serves as a passageway for both air and food. Air enters the larynx anteriorly, but anything swallowed has priority and the passage of air is temporarily blocked. The pharynx is innervated by the pharyngeal plexus of the vagus nerve. Muscles in the pharynx push the food into the esophagus. The pharynx joins the esophagus at the esophageal inlet, which is located behind the cricoid cartilage.
Esophagus
The esophagus, commonly known as the foodpipe or gullet, consists of a muscular tube through which food passes from the pharynx to the stomach. The esophagus is continuous with the laryngopharynx. It passes through the posterior mediastinum in the thorax and enters the stomach through a hole in the thoracic diaphragm—the esophageal hiatus, at the level of the tenth thoracic vertebra (T10). Its length averages 25 cm, varying with an individual's height. It is divided into cervical, thoracic and abdominal parts. The pharynx joins the esophagus at the esophageal inlet which is behind the cricoid cartilage.
At rest the esophagus is closed at both ends, by the upper and lower esophageal sphincters. The opening of the upper sphincter is triggered by the swallowing reflex so that food is allowed through. The sphincter also serves to prevent back flow from the esophagus into the pharynx. The esophagus has a mucous membrane and the epithelium which has a protective function is continuously replaced due to the volume of food that passes inside the esophagus. During swallowing, food passes from the mouth through the pharynx into the esophagus. The epiglottis folds down to a more horizontal position to direct the food into the esophagus, and away from the trachea.
Once in the esophagus, the bolus travels down to the stomach via rhythmic contraction and relaxation of muscles known as peristalsis. The lower esophageal sphincter is a muscular sphincter surrounding the lower part of the esophagus. The gastroesophageal junction between the esophagus and the stomach is controlled by the lower esophageal sphincter, which remains constricted at all times other than during swallowing and vomiting to prevent the contents of the stomach from entering the esophagus. As the esophagus does not have the same protection from acid as the stomach, any failure of this sphincter can lead to heartburn.
Diaphragm
The diaphragm is an important part of the body's digestive system. The muscular diaphragm separates the thoracic cavity from the abdominal cavity where most of the digestive organs are located. The suspensory muscle attaches the ascending duodenum to the diaphragm. This muscle is thought to be of help in the digestive system in that its attachment offers a wider angle to the duodenojejunal flexure for the easier passage of digesting material. The diaphragm also attaches to, and anchors the liver at its bare area. The esophagus enters the abdomen through a hole in the diaphragm at the level of T10.
Stomach
The stomach is a major organ of the gastrointestinal tract and digestive system. It is a consistently J-shaped organ joined to the esophagus at its upper end and to the duodenum at its lower end.
Gastric acid (informally gastric juice), produced in the stomach plays a vital role in the digestive process, and mainly contains hydrochloric acid and sodium chloride. A peptide hormone, gastrin, produced by G cells in the gastric glands, stimulates the production of gastric juice which activates the digestive enzymes. Pepsinogen is a precursor enzyme (zymogen) produced by the gastric chief cells, and gastric acid activates this to the enzyme pepsin which begins the digestion of proteins. As these two chemicals would damage the stomach wall, mucus is secreted by innumerable gastric glands in the stomach, to provide a slimy protective layer against the damaging effects of the chemicals on the inner layers of the stomach.
At the same time that protein is being digested, mechanical churning occurs through the action of peristalsis, waves of muscular contractions that move along the stomach wall. This allows the mass of food to further mix with the digestive enzymes. Gastric lipase secreted by the chief cells in the fundic glands in the gastric mucosa of the stomach, is an acidic lipase, in contrast with the alkaline pancreatic lipase. This breaks down fats to some degree though is not as efficient as the pancreatic lipase.
The pylorus, the lowest section of the stomach which attaches to the duodenum via the pyloric canal, contains countless glands which secrete digestive enzymes as well as the hormone gastrin. After an hour or two, a thick semi-liquid called chyme is produced. When the pyloric sphincter, or valve, opens, chyme enters the duodenum where it mixes further with digestive enzymes from the pancreas, and then passes through the small intestine, where digestion continues.
The parietal cells in the fundus of the stomach, produce a glycoprotein called intrinsic factor which is essential for the absorption of vitamin B12. Vitamin B12 (cobalamin), is carried to, and through the stomach, bound to a glycoprotein secreted by the salivary glands – transcobalamin I also called haptocorrin, which protects the acid-sensitive vitamin from the acidic stomach contents. Once in the more neutral duodenum, pancreatic enzymes break down the protective glycoprotein. The freed vitamin B12 then binds to intrinsic factor which is then absorbed by the enterocytes in the ileum.
The stomach is a distensible organ and can normally expand to hold about one litre of food. This expansion is enabled by a series of gastric folds in the inner walls of the stomach. The stomach of a newborn baby will only be able to expand to retain about 30 ml.
Spleen
The spleen is the largest lymphoid organ in the body but has other functions. It breaks down both red and white blood cells that are spent. This is why it is sometimes known as the 'graveyard of red blood cells'. A product of this digestion is the pigment bilirubin, which is sent to the liver and secreted in the bile. Another product is iron, which is used in the formation of new blood cells in the bone marrow. Medicine treats the spleen solely as belonging to the lymphatic system, though it is acknowledged that the full range of its important functions is not yet understood.
Liver
The liver is the second largest organ (after the skin) and is an accessory digestive gland which plays a role in the body's metabolism. The liver has many functions some of which are important to digestion. The liver can detoxify various metabolites; synthesise proteins and produce biochemicals needed for digestion. It regulates the storage of glycogen which it can form from glucose (glycogenesis). The liver can also synthesise glucose from certain amino acids. Its digestive functions are largely involved with the breaking down of carbohydrates. It also maintains protein metabolism in its synthesis and degradation. In lipid metabolism it synthesises cholesterol. Fats are also produced in the process of lipogenesis. The liver synthesises the bulk of lipoproteins. The liver is located in the upper right quadrant of the abdomen and below the diaphragm to which it is attached at one part, the bare area of the liver. This is to the right of the stomach and it overlies the gall bladder. The liver synthesises bile acids and lecithin to promote the digestion of fat.
Bile
Bile produced by the liver is made up of water (97%), bile salts, mucus and pigments, 1% fats and inorganic salts. Bilirubin is its major pigment. Bile acts partly as a surfactant which lowers the surface tension between either two liquids or a solid and a liquid and helps to emulsify the fats in the chyme. Food fat is dispersed by the action of bile into smaller units called micelles. The breaking down into micelles creates a much larger surface area for the pancreatic enzyme, lipase to work on. Lipase digests the triglycerides which are broken down into two fatty acids and a monoglyceride. These are then absorbed by villi on the intestinal wall. If fats are not absorbed in this way in the small intestine problems can arise later in the large intestine which is not equipped to absorb fats. Bile also helps in the absorption of vitamin K from the diet.
Bile is collected and delivered through the common hepatic duct. This duct joins with the cystic duct to connect in a common bile duct with the gallbladder.
Bile is stored in the gallbladder for release when food is discharged into the duodenum and also after a few hours.
Gallbladder
The gallbladder is a hollow part of the biliary tract that sits just beneath the liver, with the gallbladder body resting in a small depression. It is a small organ where the bile produced by the liver is stored, before being released into the small intestine. Bile flows from the liver through the bile ducts and into the gall bladder for storage. The bile is released in response to cholecystokinin (CCK), a peptide hormone released from the duodenum. The production of CCK (by endocrine cells of the duodenum) is stimulated by the presence of fat in the duodenum.
It is divided into three sections, a fundus, body and neck. The neck tapers and connects to the biliary tract via the cystic duct, which then joins the common hepatic duct to form the common bile duct. At this junction is a mucosal fold called Hartmann's pouch, where gallstones commonly get stuck. The muscular layer of the body is of smooth muscle tissue that helps the gallbladder contract, so that it can discharge its bile into the bile duct. The gallbladder needs to store bile in a natural, semi-liquid form at all times. Hydrogen ions secreted from the inner lining of the gallbladder keep the bile acidic enough to prevent hardening. To dilute the bile, water and electrolytes from the digestion system are added. Also, salts attach themselves to cholesterol molecules in the bile to keep them from crystallising. If there is too much cholesterol or bilirubin in the bile, or if the gallbladder does not empty properly the systems can fail. This is how gallstones form when a small piece of calcium gets coated with either cholesterol or bilirubin and the bile crystallises and forms a gallstone. The main purpose of the gallbladder is to store and release bile, or gall. Bile is released into the small intestine in order to help in the digestion of fats by breaking down larger molecules into smaller ones. After the fat is absorbed, the bile is also absorbed and transported back to the liver for reuse.
Pancreas
The pancreas is a major organ functioning as an accessory digestive gland in the digestive system. It is both an endocrine gland and an exocrine gland. The endocrine part secretes insulin when the blood sugar becomes high; insulin moves glucose from the blood into the muscles and other tissues for use as energy. The endocrine part releases glucagon when the blood sugar is low; glucagon allows stored sugar to be broken down into glucose by the liver in order to re-balance the sugar levels. The pancreas produces and releases important digestive enzymes in the pancreatic juice that it delivers to the duodenum. The pancreas lies below and at the back of the stomach. It connects to the duodenum via the pancreatic duct which it joins near to the bile duct's connection where both the bile and pancreatic juice can act on the chyme that is released from the stomach into the duodenum. Aqueous pancreatic secretions from pancreatic duct cells contain bicarbonate ions which are alkaline and help with the bile to neutralise the acidic chyme that is churned out by the stomach.
The pancreas is also the main source of enzymes for the digestion of fats and proteins. Some of these are released in response to the production of cholecystokinin in the duodenum. (The enzymes that digest polysaccharides, by contrast, are primarily produced by the walls of the intestines.) The cells are filled with secretory granules containing the precursor digestive enzymes. The major proteases, the pancreatic enzymes which work on proteins, are trypsinogen and chymotrypsinogen. Elastase is also produced. Smaller amounts of lipase and amylase are secreted. The pancreas also secretes phospholipase A2, lysophospholipase, and cholesterol esterase. The precursor zymogens are inactive variants of the enzymes, which avoids the onset of pancreatitis caused by autodegradation. Once released in the intestine, the enzyme enteropeptidase present in the intestinal mucosa activates trypsinogen by cleaving it to form trypsin; further cleavage results in chymotrypsin.
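To see why shipping the proteases as zymogens protects the pancreas, here is a minimal Python sketch of the activation chain just described; the two-entry mapping follows the text, and the code is schematic rather than a model of real enzyme kinetics.

```python
# Toy sketch of pancreatic zymogen activation: enzymes are shipped as
# inactive precursors and only switched on in the intestine, which avoids
# autodigestion of the pancreas. Schematic only.
ACTIVATION = {
    "trypsinogen": ("enteropeptidase", "trypsin"),
    "chymotrypsinogen": ("trypsin", "chymotrypsin"),
}

def activate(zymogens, enzymes_present):
    """Repeatedly apply activating cleavages until no new enzyme appears."""
    active = set(enzymes_present)
    changed = True
    while changed:
        changed = False
        for zymogen, (activator, product) in ACTIVATION.items():
            if zymogen in zymogens and activator in active and product not in active:
                active.add(product)
                changed = True
    return active

# In the pancreas: no enteropeptidase, so nothing is activated.
print(activate({"trypsinogen", "chymotrypsinogen"}, set()))  # set()
# In the duodenum: mucosal enteropeptidase starts the cascade.
print(activate({"trypsinogen", "chymotrypsinogen"}, {"enteropeptidase"}))
# {'enteropeptidase', 'trypsin', 'chymotrypsin'}
```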
Lower gastrointestinal tract
The lower gastrointestinal tract (GI), includes the small intestine and all of the large intestine. The intestine is also called the bowel or the gut. The lower GI starts at the pyloric sphincter of the stomach and finishes at the anus. The small intestine is subdivided into the duodenum, the jejunum and the ileum. The cecum marks the division between the small and large intestine. The large intestine includes the rectum and anal canal.
Small intestine
Partially digested food starts to arrive in the small intestine as semi-liquid chyme, one hour after it is eaten. The stomach is half empty after an average of 1.2 hours. After four or five hours the stomach has emptied.
In the small intestine, the pH becomes crucial; it needs to be finely balanced in order to activate digestive enzymes. The chyme is very acidic, with a low pH, having been released from the stomach and needs to be made much more alkaline. This is achieved in the duodenum by the addition of bile from the gall bladder combined with the bicarbonate secretions from the pancreatic duct and also from secretions of bicarbonate-rich mucus from duodenal glands known as Brunner's glands. The chyme arrives in the intestines having been released from the stomach through the opening of the pyloric sphincter. The resulting alkaline fluid mix neutralises the gastric acid which would damage the lining of the intestine. The mucus component lubricates the walls of the intestine.
When the digested food particles are reduced enough in size and composition, they can be absorbed by the intestinal wall and carried to the bloodstream. The first receptacle for this chyme is the duodenal bulb. From here it passes into the first of the three sections of the small intestine, the duodenum (the next section is the jejunum and the third is the ileum). The duodenum is the first and shortest section of the small intestine. It is a hollow, jointed C-shaped tube connecting the stomach to the jejunum. It starts at the duodenal bulb and ends at the suspensory muscle of duodenum. The attachment of the suspensory muscle to the diaphragm is thought to help the passage of food by making a wider angle at its attachment.
Most food digestion takes place in the small intestine. Segmentation contractions act to mix and move the chyme more slowly in the small intestine allowing more time for absorption (and these continue in the large intestine). In the duodenum, pancreatic lipase is secreted together with a co-enzyme, colipase to further digest the fat content of the chyme. From this breakdown, smaller particles of emulsified fats called chylomicrons are produced. There are also digestive cells called enterocytes lining the intestines (the majority being in the small intestine). They are unusual cells in that they have villi on their surface which in turn have innumerable microvilli on their surface. All these villi make for a greater surface area, not only for the absorption of chyme but also for its further digestion by large numbers of digestive enzymes present on the microvilli.
The chylomicrons are small enough to pass through the enterocyte villi and into their lymph capillaries called lacteals. A milky fluid called chyle, consisting mainly of the emulsified fats of the chylomicrons, results from the absorbed mix with the lymph in the lacteals. Chyle is then transported through the lymphatic system to the rest of the body.
The suspensory muscle marks the end of the duodenum and the division between the upper gastrointestinal tract and the lower GI tract. The digestive tract continues as the jejunum which continues as the ileum. The jejunum, the midsection of the small intestine contains circular folds, flaps of doubled mucosal membrane which partially encircle and sometimes completely encircle the lumen of the intestine. These folds together with villi serve to increase the surface area of the jejunum enabling an increased absorption of digested sugars, amino acids and fatty acids into the bloodstream. The circular folds also slow the passage of food giving more time for nutrients to be absorbed.
The last part of the small intestine is the ileum. This also contains villi, and it is here that vitamin B12, bile acids and any residual nutrients are absorbed. When the chyme is exhausted of its nutrients, the remaining waste material changes into the semi-solids called feces, which pass to the large intestine, where bacteria in the gut flora further break down residual proteins and starches.
Transit time through the small intestine is an average of 4 hours. Half of the food residues of a meal have emptied from the small intestine by an average of 5.4 hours after ingestion. Emptying of the small intestine is complete after an average of 8.6 hours.
Cecum
The cecum is a pouch marking the division between the small intestine and the large intestine. It lies below the ileocecal valve in the lower right quadrant of the abdomen. The cecum receives chyme from the last part of the small intestine, the ileum, and connects to the ascending colon of the large intestine. At this junction there is a sphincter or valve, the ileocecal valve which slows the passage of chyme from the ileum, allowing further digestion. It is also the site of the appendix attachment.
Large intestine
In the large intestine, the passage of the digesting food in the colon is a lot slower, taking from 30 to 40 hours until it is removed by defecation. The colon mainly serves as a site for the fermentation of digestible matter by the gut flora. The time taken varies considerably between individuals. The remaining semi-solid waste is termed feces and is removed by the coordinated contractions of the intestinal walls, termed peristalsis, which propels the excreta forward to reach the rectum and exit through the anus via defecation. The wall has an outer layer of longitudinal muscles, the taeniae coli, and an inner layer of circular muscles. The circular muscle keeps the material moving forward and also prevents any back flow of waste. Also of help in the action of peristalsis is the basal electrical rhythm that determines the frequency of contractions. The taeniae coli can be seen and are responsible for the bulges (haustra) present in the colon. Most parts of the GI tract are covered with serous membranes and have a mesentery. Other more muscular parts are lined with adventitia.
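Putting the average transit figures quoted above together gives a rough end-to-end timeline. The short Python sketch below uses the midpoints of the quoted ranges and is only an illustration, since the text stresses that individual variation is large.

```python
# Back-of-the-envelope digestive transit timeline from the averages above.
# Segment durations: stomach "four or five hours" -> 4.5 h midpoint;
# small intestine transit ~4 h (complete emptying ~8.6 h after ingestion);
# colon 30-40 h -> 35 h midpoint. Illustrative only.
segments_hours = [
    ("stomach", 4.5),
    ("small intestine", 4.0),
    ("large intestine (colon)", 35.0),
]

elapsed = 0.0
for name, hours in segments_hours:
    elapsed += hours
    print(f"after {name:24s}: ~{elapsed:4.1f} h since ingestion")
# Total comes to roughly 43.5 hours, i.e. just under two days.
```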
Blood supply
The digestive system is supplied by the celiac artery. The celiac artery is the first major branch from the abdominal aorta, and is the only major artery that nourishes the digestive organs.
There are three main divisions – the left gastric artery, the common hepatic artery and the splenic artery.
The celiac artery supplies the liver, stomach, spleen and the upper 1/3 of the duodenum (to the sphincter of Oddi) and the pancreas with oxygenated blood. Most of the blood is returned to the liver via the portal venous system for further processing and detoxification before returning to the systemic circulation via the hepatic veins.
The next branch from the abdominal aorta is the superior mesenteric artery, which supplies the regions of the digestive tract derived from the midgut, which includes the distal 2/3 of the duodenum, jejunum, ileum, cecum, appendix, ascending colon, and the proximal 2/3 of the transverse colon.
The final branch which is important for the digestive system is the inferior mesenteric artery, which supplies the regions of the digestive tract derived from the hindgut, which includes the distal 1/3 of the transverse colon, descending colon, sigmoid colon, rectum, and the anus above the pectinate line.
Blood flow to the digestive tract reaches its maximum 20–40 minutes after a meal and lasts for 1.5–2 hours.
Nerve supply
The enteric nervous system consists of some one hundred million neurons that are embedded in the lining of the gastrointestinal tract, extending from the esophagus to the anus. These neurons are collected into two plexuses – the myenteric (or Auerbach's) plexus that lies between the longitudinal and circular smooth muscle layers, and the submucosal (or Meissner's) plexus that lies between the circular smooth muscle layer and the mucosa.
Parasympathetic innervation to the ascending colon is supplied by the vagus nerve. Sympathetic innervation is supplied by the splanchnic nerves that join the celiac ganglia. Most of the digestive tract is innervated by the two large celiac ganglia, with the upper part of each ganglion joined by the greater splanchnic nerve and the lower parts joined by the lesser splanchnic nerve. It is from these ganglia that many of the gastric plexuses arise.
Development
Early in embryonic development, the embryo has three germ layers and abuts a yolk sac. During the second week of development, the embryo grows and begins to surround and envelop portions of this sac. The enveloped portions form the basis for the adult gastrointestinal tract. Sections of this primitive gut begin to differentiate into the organs of the gastrointestinal tract, such as the esophagus, stomach, and intestines.
During the fourth week of development, the stomach rotates. The stomach, originally lying in the midline of the embryo, rotates so that its body is on the left. This rotation also affects the part of the gastrointestinal tube immediately below the stomach, which will go on to become the duodenum. By the end of the fourth week, the developing duodenum begins to spout a small outpouching on its right side, the hepatic diverticulum, which will go on to become the biliary tree. Just below this is a second outpouching, known as the cystic diverticulum, that will eventually develop into the gallbladder.
Clinical significance
Each part of the digestive system is subject to a wide range of disorders, many of which can be congenital. Mouth diseases can be caused by pathogenic bacteria, viruses, or fungi, or can arise as a side effect of some medications. Mouth diseases include tongue diseases and salivary gland diseases. A common gum disease in the mouth is gingivitis, which is caused by bacteria in plaque. The most common viral infection of the mouth is gingivostomatitis, caused by herpes simplex. A common fungal infection is candidiasis, commonly known as thrush, which affects the mucous membranes of the mouth.
There are a number of esophageal diseases such as the development of Schatzki rings that can restrict the passageway, causing difficulties in swallowing. They can also completely block the esophagus.
Stomach diseases are often chronic conditions and include gastroparesis, gastritis, and peptic ulcers.
A number of problems including malnutrition and anemia can arise from malabsorption, the abnormal absorption of nutrients in the GI tract. Malabsorption can have many causes ranging from infection, to enzyme deficiencies such as exocrine pancreatic insufficiency. It can also arise as a result of other gastrointestinal diseases such as coeliac disease. Coeliac disease is an autoimmune disorder of the small intestine. This can cause vitamin deficiencies due to the improper absorption of nutrients in the small intestine. The small intestine can also be obstructed by a volvulus, a loop of intestine that becomes twisted enclosing its attached mesentery. This can cause mesenteric ischemia if severe enough.
A common disorder of the bowel is diverticulitis. Diverticula are small pouches that can form inside the bowel wall, which can become inflamed to give diverticulitis. This disease can have complications if an inflamed diverticulum bursts and infection sets in. Any infection can spread further to the lining of the abdomen (peritoneum) and cause potentially fatal peritonitis.
Crohn's disease is a common chronic inflammatory bowel disease (IBD), which can affect any part of the GI tract, but it mostly starts in the terminal ileum.
Ulcerative colitis, an ulcerative form of colitis, is the other major inflammatory bowel disease, which is restricted to the colon and rectum. Both of these IBDs carry an increased risk of the development of colorectal cancer. Ulcerative colitis is the most common of the IBDs.
Irritable bowel syndrome (IBS) is the most common of the functional gastrointestinal disorders. These are idiopathic disorders that the Rome process has helped to define.
Giardiasis is a disease of the small intestine caused by the protist parasite Giardia lamblia. This does not spread, but remains confined to the lumen of the small intestine. It can often be asymptomatic, but just as often is indicated by a variety of symptoms. Giardiasis is the most common pathogenic parasitic infection in humans.
There are diagnostic tools mostly involving the ingestion of barium sulphate to investigate disorders of the GI tract. These are known as upper gastrointestinal series that enable imaging of the pharynx, larynx, oesophagus, stomach and small intestine and lower gastrointestinal series for imaging of the colon.
In pregnancy
Gestation can predispose for certain digestive disorders. Gestational diabetes can develop in the mother as a result of pregnancy and while this often presents with few symptoms it can lead to pre-eclampsia.
History
In the early 11th century, the Islamic medical philosopher Avicenna wrote extensively on many subjects including medicine. Forty of these treatises on medicine survive, and in the most famous one titled the Canon of Medicine he discusses "rising gas". Avicenna believed that digestive system dysfunction was responsible for the overproduction of gas in the gastrointestinal tract. He suggested lifestyle changes and a compound of herbal drugs for its treatment.
In 1497, Alessandro Benedetti viewed the stomach as an unclean organ separated off by the diaphragm. This view of the stomach and intestines as being base organs was generally held until the mid-17th century.
In the Renaissance of the 16th century, Leonardo da Vinci produced some early drawings of the stomach and intestines. He thought that the digestive system aided the respiratory system. Andreas Vesalius provided some early anatomical drawings of the abdominal organs in the 16th century.
In the middle of the 17th century, a Flemish physician Jan Baptist van Helmont offered the first chemical account of digestion which was later described as being very close to the later conceptualised enzyme.
In 1653, William Harvey described the intestines in terms of their length, their blood supply, the mesenteries, and fat.
In 1823, William Prout discovered hydrochloric acid in the gastric juice. In 1895, Ivan Pavlov described its secretion as being stimulated by a neurologic reflex with the vagus nerve having a crucial role. Black in the 19th century suggested an association of histamine with this secretion. In 1916, Popielski described histamine as a gastric secretagogue of hydrochloric acid.
William Beaumont was an army surgeon who in 1825, was able to observe digestion as it took place in the stomach. This was made possible by experiments on a man with a stomach wound that did not fully heal leaving an opening into the stomach. The churning motion of the stomach was described among other findings.
In the 19th century, it was accepted that chemical processes were involved in the process of digestion. Physiological research into secretion and the gastrointestinal tract was pursued with experiments undertaken by Claude Bernard, Rudolph Heidenhain and Ivan Pavlov.
The rest of the 20th century was dominated by research into hormones. The first to be discovered was secretin, by Ernest Starling in 1902, with ensuing results from John Edkins, who in 1905 first suggested gastrin, its structure being determined in 1964. Andre Latarjet and Lester Dragstedt found a role for acetylcholine in the digestive system. In 1972, H2 receptor antagonists, which block the action of histamine and decrease the production of hydrochloric acid, were described by James Black. In 1980, proton pump inhibitors were described by Sachs. In 1983, the role of Helicobacter pylori in the formation of ulcers was described by Barry Marshall and Robin Warren.
Art historians have often noted that banqueters on iconographic records of ancient Mediterranean societies almost always appear to be lying down on their left sides. One possible explanation could lie in the anatomy of the stomach and in the digestive mechanism. When lying on the left, the food has room to expand because the curvature of the stomach is enhanced in that position.
See also
Abdominal internal oblique muscle
Ulcerative colitis
References
Organ systems
Metabolism | Human digestive system | [
"Chemistry",
"Biology"
] | 9,843 | [
"Digestive system",
"Organ systems",
"Cellular processes",
"Biochemistry",
"Metabolism"
] |
37,988,325 | https://en.wikipedia.org/wiki/Lipidology | Lipidology is the scientific study of lipids. Lipids are a group of biological macromolecules that have a multitude of functions in the body. Clinical studies on lipid metabolism in the body have led to developments in therapeutic lipidology for disorders such as cardiovascular disease.
History
Compared to other biomedical fields, lipidology was long-neglected as the handling of oils, smears, and greases was unappealing to scientists and lipid separation was difficult. It was not until 2002 that lipidomics, the study of lipid networks and their interaction with other molecules, appeared in the scientific literature. Attention to the field was bolstered by the introduction of chromatography, spectrometry, and various forms of spectroscopy to the field, allowing lipids to be isolated and analyzed. The field was further popularized following the cytologic application of the electron microscope, which led scientists to find that many metabolic pathways take place within, along, and through the cell membrane - the properties of which are strongly influenced by lipid composition.
Clinical lipidology
The Framingham Heart Study and other epidemiological studies have found a correlation between lipoproteins and cardiovascular disease (CVD). Lipoproteins are generally a major target of study in lipidology since lipids are transported throughout the body in the form of lipoproteins.
A class of lipids known as phospholipids helps make up lipoproteins, one type of which is high-density lipoprotein (HDL). A high concentration of high-density lipoprotein cholesterol (HDL-C) has what is known as a vasoprotective effect on the body, a finding that correlates with a beneficial cardiovascular effect. There is also a correlation between diseases such as chronic kidney disease, coronary artery disease, and diabetes mellitus and a reduced vasoprotective effect from HDL.
Another factor of CVD that is often overlooked involves the concentrations of low-density lipoproteins (LDL) and very low-density lipoproteins (VLDL). These are often seen at higher than expected and necessary levels in the body due to food uptake, family history, and a person's metabolic rate. There is a correlation between these increased levels and stroke, heart attack, and mortality.
Therapeutic lipidology
Statins are a class of lipid-lowering medications used in the treatment and prevention of cardiovascular disease, specifically those associated with LDL-C. Statins have been shown to reduce incident cardiovascular events by 30-40% when used as prescribed. However, statins are associated with a range of adverse effects (e.g., statin myopathies and myalgias), sometimes severe enough to warrant discontinuation and/or substitution. For individuals completely intolerant to statins who nevertheless have an indication for lipid-lowering therapy, lipoprotein apheresis—a non-surgical method of removing lipoprotein particles from the bloodstream—is an option.
Pharmacologic inhibition of proprotein convertase subtilisin/kexin type 9 (PCSK9), an enzyme crucial for maintaining lipoprotein homeostasis, can be achieved through the use of monoclonal antibodies targeting PCSK9, such as evolocumab and alirocumab. This approach offers a potential solution for individuals with statin intolerance and insufficient response to statins alone, a common scenario among patients with familial hypercholesterolemia—whereby significant reductions in circulating lipoproteins can be achieved.
Lipidomics
Lipidomics is the complete profile of all lipids in a biological system at a given time. This is used to identify and quantify the lipids that can be detected. Since lipids have a variety of functions in the body, being able to understand which specific types are present in the body and at what levels is crucial to understand the diseases that result due to lipids. Methods of lipidomic analysis include mass spectrometry and chromatography. Monitoring lipid concentration can reveal much about an organism's health.
See also
Dyslipidemia
References
Books
Lipids
Biochemistry
Branches of biology | Lipidology | [
"Chemistry",
"Biology"
] | 904 | [
"Biomolecules by chemical classification",
"Organic compounds",
"nan",
"Biochemistry",
"Lipids"
] |
30,110,617 | https://en.wikipedia.org/wiki/Great%20Comet%20of%201823 | The Great Comet of 1823, also designated C/1823 Y1 or Comet De Bréauté-Pons, was a bright comet visible from December 1823 to April 1824.
Discovery and observations
It was independently discovered by Nell de Bréauté at Dieppe on December 29, by Jean-Louis Pons on the morning of December 30, and by Wilhelm von Biela at Prague on the same morning. It was already visible to the naked eye when discovered: Pons initially thought he was seeing smoke from a chimney rising over a hill, but continued observing when he noticed it did not change appearance. He later noted that the comet was, puzzlingly, more easily visible to the naked eye than through a telescope. Biela also noted that it was noticeably brighter than the Great Comet of 1819 had been.
The comet was particularly known at the time for exhibiting two tails, one pointing away from the Sun and the other (termed an "anomalous tail" by Karl Harding and Heinrich Olbers) pointing towards it.
Caroline Herschel recorded an observation of the comet on January 31, 1824 as the last entry in her observing book.
Pons was also the last astronomer to detect the comet, on April 1, 1824.
References
External links
Non-periodic comets
Great Comet
Great Comet
18231229
Great comets | Great Comet of 1823 | [
"Astronomy"
] | 266 | [
"Astronomy stubs",
"Comet stubs"
] |
30,115,086 | https://en.wikipedia.org/wiki/Quantum%20machine | A quantum machine is a human-made device whose collective motion follows the laws of quantum mechanics. The idea that macroscopic objects may follow the laws of quantum mechanics dates back to the advent of quantum mechanics in the early 20th century. However, as highlighted by the Schrödinger's cat thought experiment, quantum effects are not readily observable in large-scale objects. Consequently, quantum states of motion have only been observed in special circumstances at extremely low temperatures. The fragility of quantum effects in macroscopic objects may arise from rapid quantum decoherence. Researchers created the first quantum machine in 2009, and the achievement was named the "Breakthrough of the Year" by Science in 2010.
History
The first quantum machine was created on August 4, 2009, by Aaron D. O'Connell while pursuing his Ph.D. under the direction of Andrew N. Cleland and John M. Martinis at the University of California, Santa Barbara. O'Connell and his colleagues coupled together a mechanical resonator, similar to a tiny springboard, and a qubit, a device that can be in a superposition of two quantum states at the same time. They were able to make the resonator vibrate a small amount and a large amount simultaneously—an effect which would be impossible in classical physics. The mechanical resonator was just large enough to see with the naked eye—about as long as the width of a human hair.
The groundbreaking work was subsequently published in the journal Nature in March 2010. The journal Science declared the creation of the first quantum machine to be the "Breakthrough of the Year" of 2010.
Cooling to the ground state
In order to demonstrate the quantum mechanical behavior, the team first needed to cool the mechanical resonator until it was in its quantum ground state, the state with the lowest possible energy.
A temperature T ≪ hf/kB was required, where h is the Planck constant, f is the frequency of the resonator, and kB is the Boltzmann constant.
Previous teams of researchers had struggled with this stage, as a 1 MHz resonator, for example, would need to be cooled to the extremely low temperature of 50 μK. O'Connell's team constructed a different type of resonator, a film bulk acoustic resonator, with a much higher resonant frequency (6 GHz) which would hence reach its ground state at a (relatively) higher temperature (~0.1 K); this temperature could then be easily reached with a dilution refrigerator. In the experiment, the resonator was cooled to 25 mK.
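As a quick check of these figures, the crossover temperature T ≈ hf/kB can be computed directly. The following is a minimal sketch of the arithmetic (standard physical constants; the function name is illustrative, not from any cited source):

```python
# Ground-state crossover temperature T ~ h*f/k_B for a mechanical resonator:
# below this temperature, thermal phonons freeze out (k_B*T << h*f).
h = 6.62607015e-34    # Planck constant, J s
kB = 1.380649e-23     # Boltzmann constant, J/K

def crossover_temperature(frequency_hz):
    """Return h*f/k_B, the temperature scale for reaching the ground state."""
    return h * frequency_hz / kB

for f in (1e6, 6e9):  # a 1 MHz resonator vs the 6 GHz film bulk acoustic resonator
    print(f"f = {f:.0e} Hz -> T ~ {crossover_temperature(f):.1e} K")
# f = 1e+06 Hz -> T ~ 4.8e-05 K  (tens of microkelvin, very hard to reach)
# f = 6e+09 Hz -> T ~ 2.9e-01 K  (reachable with a dilution refrigerator)
```

This reproduces the roughly 50 μK and a-few-tenths-of-a-kelvin scales quoted above.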
Controlling the quantum state
The film bulk acoustic resonator was made of piezoelectric material, so that as it oscillated its changing shape created a changing electric signal, and conversely an electric signal could affect its oscillations. This property enabled the resonator to be coupled with a superconducting phase qubit, a device used in quantum computing whose quantum state can be accurately controlled.
In quantum mechanics, vibrations are made up of elementary vibrations called phonons. Cooling the resonator to its ground state can be seen as equivalent to removing all of the phonons. The team was then able to transfer individual phonons from the qubit to the resonator. The team was also able to transfer a superposition state, where the qubit was in a superposition of two states at the same time, onto the mechanical resonator. This means the resonator "literally vibrated a little and a lot at the same time", according to the American Association for the Advancement of Science. The vibrations lasted just a few nanoseconds before being broken down by disruptive outside influences. In the Nature paper, the team concluded "This demonstration provides strong evidence that quantum mechanics applies to a mechanical object large enough to be seen with the naked eye."
Notes
References
External links
Aaron D. O'Connell, December 2010, "A Macroscopic Mechanical Resonator Operated in the Quantum Limit" (Ph.D. thesis)
2009 introductions
Quantum mechanics
Resonators | Quantum machine | [
"Physics"
] | 827 | [
"Theoretical physics",
"Quantum mechanics"
] |
30,115,275 | https://en.wikipedia.org/wiki/Illumination%20problem | Illumination problems are a class of mathematical problems that study the illumination of rooms with mirrored walls by point light sources.
Original formulation
The original formulation was attributed to Ernst Straus in the 1950s and has been resolved. Straus asked whether a room with mirrored walls can always be illuminated by a single point light source, allowing for repeated reflection of light off the mirrored walls. Equivalently, in billiards terms: if a billiard table can be constructed in any required shape, is there a shape with two points such that a point-like ball struck from one can never pass through the other, assuming the ball continues indefinitely rather than stopping due to friction?
Penrose unilluminable room
The original problem was first solved in 1958 by Roger Penrose using ellipses to form the Penrose unilluminable room. He showed that there exists a room with curved walls that must always have dark regions if lit only by a single point source.
Polygonal rooms
This problem was also solved for polygonal rooms by George Tokarsky in 1995 for 2 and 3 dimensions: he showed that there exists an unilluminable polygonal 26-sided room with a "dark spot" which is not illuminated from another point in the room, even allowing for repeated reflections. These are rare cases in which a finite number of dark points (rather than regions) are unilluminable, and only from a fixed position of the point source.
In 1995, Tokarsky found the first polygonal unilluminable room which had 4 sides and two fixed boundary points. He also in 1996 found a 20-sided unilluminable room with two distinct interior points. In 1997, two different 24-sided rooms with the same properties were put forward by George Tokarsky and David Castro separately.
In 2016, Samuel Lelièvre, Thierry Monteil, and Barak Weiss showed that a light source in a polygonal room whose angles (in degrees) are all rational numbers will illuminate the entire polygon, with the possible exception of a finite number of points. In 2019 this was strengthened by Amit Wolecki who showed that for each such polygon, the number of pairs of points which do not illuminate each other is finite.
See also
Hadwiger conjecture (alternate formulation with illumination)
References
External links
"The Illumination Problem – Numberphile", on YouTube by Numberphile, Feb 28, 2017
"Penrose Unilluminable Room Is Impossible To Light", on YouTube by Steve Mould, May 19, 2022
"The mushroom's shape does not matter in Penrose's unilluminable room", on YouTube by Nils Berglund, Aug 13, 2022
"The Tokarsky original unilluminable room with 24 sides", on YouTube by George Tokarsky, Jun 16, 2022
"Egyptian hieroglyphs: An Odd Tokarsky unilluminable room", on YouTube by George Tokarsky, Jul 15, 2022
"Eureka! The first polygonal unilluminable room", on YouTube by George Tokarsky, Jul 29, 2022
An interactive demonstration, on Wolfram demonstrations project
Mathematical problems
Dynamical systems | Illumination problem | [
"Physics",
"Mathematics"
] | 653 | [
"Mathematical problems",
"Mechanics",
"Dynamical systems"
] |
30,121,233 | https://en.wikipedia.org/wiki/Binary%20expression%20tree | A binary expression tree is a specific kind of a binary tree used to represent expressions. Two common types of expressions that a binary expression tree can represent are algebraic and boolean. These trees can represent expressions that contain both unary and binary operators.
Like any binary tree, each node of a binary expression tree has zero, one, or two children. This restricted structure simplifies the processing of expression trees.
Construction of an expression tree
Example
The input in postfix notation is: a b + c d e + * *
Since the first two symbols are operands, one-node trees are created and pointers to them are pushed onto a stack. For convenience the stack will grow from left to right.
The next symbol is a '+'. It pops the two pointers to the trees, a new tree is formed, and a pointer to it is pushed onto the stack.
Next, c, d, and e are read. A one-node tree is created for each and a pointer to the corresponding tree is pushed onto the stack.
Continuing, a '+' is read, and it merges the last two trees.
Now, a '*' is read. The last two tree pointers are popped and a new tree is formed with a '*' as the root.
Finally, the last symbol is read. The two trees are merged and a pointer to the final tree remains on the stack.
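The stack procedure described above translates directly into code. The following sketch is illustrative (the Node class and function name are not from any particular library) and builds the tree for the postfix input a b + c d e + * *:

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def build_expression_tree(postfix_tokens):
    """Build a binary expression tree from postfix notation using a stack."""
    stack = []
    operators = {'+', '-', '*', '/', '^'}
    for token in postfix_tokens:
        if token in operators:
            right = stack.pop()   # the second operand was pushed last
            left = stack.pop()
            stack.append(Node(token, left, right))
        else:                     # operand: push a one-node tree
            stack.append(Node(token))
    return stack.pop()            # pointer to the final tree

tree = build_expression_tree("a b + c d e + * *".split())
# tree.value == '*', with (a + b) as its left subtree, matching the walkthrough.
```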
Algebraic expressions
Algebraic expression trees represent expressions that contain numbers, variables, and unary and binary operators. Some of the common operators are × (multiplication), ÷ (division), + (addition), − (subtraction), ^ (exponentiation), and - (negation). The operators are contained in the internal nodes of the tree, with the numbers and variables in the leaf nodes. The nodes of binary operators have two child nodes, and the unary operators have one child node.
Boolean expressions
Boolean expressions are represented very similarly to algebraic expressions, the only difference being the specific values and operators used. Boolean expressions use true and false as constant values, and the operators include ∧ (AND), ∨ (OR), and ¬ (NOT).
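Once built, such a tree can be evaluated by a recursive traversal. The sketch below reuses the illustrative Node class from the construction example above; spelled-out operator names stand in for the symbols ∧, ∨ and ¬, and the function name is an assumption for illustration:

```python
def evaluate(node, env):
    """Recursively evaluate a boolean expression tree.
    env maps variable names (leaf values) to True/False."""
    if node.value == 'AND':
        return evaluate(node.left, env) and evaluate(node.right, env)
    if node.value == 'OR':
        return evaluate(node.left, env) or evaluate(node.right, env)
    if node.value == 'NOT':          # unary operator: single child on the left
        return not evaluate(node.left, env)
    return env[node.value]           # leaf node: look up the variable

t = Node('AND', Node('x'), Node('NOT', Node('y')))
print(evaluate(t, {'x': True, 'y': False}))  # True
```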
See also
Expression (mathematics)
Term (logic)
Context-free grammar
Parse tree
Abstract syntax tree
References
Binary trees
Computer algebra | Binary expression tree | [
"Mathematics",
"Technology"
] | 470 | [
"Computer science",
"Computer algebra",
"Computational mathematics",
"Algebra"
] |
30,121,570 | https://en.wikipedia.org/wiki/Thermo%20galvanometer | The thermo-galvanometer is an instrument for measuring small electric currents. It was invented by William Duddell about 1900. The following is a description of the instrument taken from a trade catalog of Cambridge Scientific Instrument Company dated 1905:
For a long time the need of an instrument capable of accurately measuring small alternating currents has been keenly felt. The high resistance and self-induction of the coils of instruments of the electro-magnetic type frequently prevent their use. Electro-static instruments as at present constructed are not altogether suitable for measuring very small currents, unless a sufficient potential difference is available.
The thermo-galvanometer designed by Mr W. Duddell can be used for the measurement of extremely small currents to a high degree of accuracy. It has practically no self-induction or capacity and can therefore be used on a circuit of any frequency (even up to 120,000~ per sec.) and currents as small as twenty micro-amperes can be readily measured by it. It is equally correct on continuous and alternating currents. It can therefore be accurately standardized by continuous current and used without error on circuits of any frequency or wave-form.
The principle of the thermo-galvanometer is simple. The instrument consists of a resistance which is heated by the current to be measured, the heat from the resistance falling on the thermo-junction of a Boys radio-micrometer. The rise in temperature of the lower junction of the thermo-couple produces a current in the loop which is deflected by the magnetic field against the torsion of the quartz fibre.
References
Vladimir Karapetoff, Experimental Electrical Engineering and Manual for Electrical Testing for Engineers and for Students in Engineering Laboratories. Volume. 1 John Wiley & Sons, Inc. 1910. page 70
Cambridge Scientific Instrument Company Ltd. 1905 trade catalog.
Galvanometers
Historical scientific instruments | Thermo galvanometer | [
"Technology",
"Engineering"
] | 382 | [
"Galvanometers",
"Measuring instruments"
] |
50,859,529 | https://en.wikipedia.org/wiki/Alexander%20Duckham | Alexander Duckham (11 March 1877 – 1 February 1945) was an English chemist and businessman, best known for the development of machine lubricants. The son of an engineer, after university he specialised in lubrication, working briefly for Fleming's Oil Company before founding his own company, Alexander Duckham & Co, in Millwall in 1899.
By the outbreak of World War I, he was an authority on technological problems relating to lubrication, and the company went public in about 1920, relocating from Millwall to Hammersmith. By the time he died in 1945, Duckhams had assumed a dominant position for the supply of lubricants and corrosion inhibitors to the motor industry in Britain and other markets. A new manufacturing plant was opened in Staffordshire in 1968, and soon thereafter the company was taken over by BP.
Early career
Duckham was born in Blackheath, London, the second eldest son (his elder brother was Frederick and younger brother Sir Arthur Duckham) of a Falmouth-born mechanical and civil engineer, Frederic Eliot Duckham (1841 – 13 January 1918, Blackheath), who had patented improvements in governors for marine engines and invented a 'Hydrostatic Weighing Machine'. His mother was Maud Mary McDougall (1849–1921), sister of John McDougall of the flour-making family, which had a mill at Millwall Dock. His younger brother, Arthur Duckham, became one of the founders of the Institution of Chemical Engineers, and its first president. His elder brother, Frederick, also an engineer, was Director of Tank Design in World War I.
Upon leaving university in 1899, Alexander Duckham, who had worked briefly for Fleming's Oil Company, was encouraged by engineer Sir Alfred Yarrow, who lived nearby (Yarrow occupied Woodlands House in Mycenae Road, Westcombe Park for some years from 1896, close to the Duckham family home in Dartmouth Grove, Blackheath) to specialise in the study of lubrication, and was introduced to engineering firms with lubrication problems. Duckham established Alexander Duckham & Co in Millwall in 1899, and gradually assembled a team of engineers and chemists to whom he could delegate research work, freeing him to focus on lubricant production. Early customers included car dealer and racing driver Selwyn Edge who called weekly at Duckham's Millwall works for an oil change; Duckham, who bought his first car in 1899, also used to accompany Edge to Brooklands.
Yarrow and Lord Fisher subsequently encouraged Duckham to focus on sourcing raw materials for lubricants. From 1905 he helped pioneer the development of the Trinidad oil fields, including a deposit near Tabaquite of high-class crude oil suitable as a base for the preparation of lubricants, establishing a private company, Trinidad Central Oilfields, in 1911. The discovery and development of such lubricants was timely, coinciding with the evolution of internal combustion engines which demanded more advanced lubrication.
As well as being a successful businessman, Duckham was an early aviation pioneer and close friend of cross-channel aviator Louis Blériot – he paid for the stone memorial in Dover marking the place where Blériot landed in 1909 to complete the first flight across the English Channel in a heavier-than-air aircraft, and 25 years later hosted a dinner at London's Savoy Hotel marking the anniversary of the flight.
Duckhams
The outbreak of World War I in 1914 heightened the focus on mechanical efficiency, and the Duckham company was already established as the highest authority on technological problems in matters of lubrication. The company went public (c. 1920) soon after the war finished, and relocated from Millwall to Hammersmith in 1921.
By the time Alexander Duckham died in 1945 (he was succeeded as company chairman by his son Jack), Duckhams had assumed a dominant position in the supply of lubricants and corrosion inhibitors to the motor industry and other markets. By 1967, it was regarded as the largest independent lubricating oil company in the UK behind Castrol and the third largest supplier of engine oil to motorists, and produced the first multigrade oil for motorists. To cope with demand, a new manufacturing plant was opened in Aldridge, Staffordshire in 1968, shortly before the company was acquired by BP in 1969. Duckhams' Hammersmith site closed in 1979, was acquired by Richard Rogers' architects practice (today Rogers Stirk Harbour + Partners) in 1983, and was redeveloped to become the Thames Wharf Studios and the River Café.
Family
He married Violet Ethel Narraway in 1902, and they had five children, all born in Greenwich: Alec Narraway Duckham (born c. 1904); Millicent A. M. Duckham (c. 1905); Joan Ethel Duckham (c. 1906); Jack Eliot Duckham (c. 1908); and Ruth Edith Duckham (born 1918).
The family lived for some years from 1907 in Vanbrugh Castle, close to Greenwich Park. In 1920, Duckham donated the house (and another property, Rooks Hill House in Sevenoaks) to the RAF Benevolent Fund to be used as a school for the children of RAF personnel killed in service. Vanbrugh Castle was later sold after the number of pupils declined; sale proceeds were used to educate RAF children, with funds later (1997) transferred to a charitable trust, the Alexander Duckham Memorial Schools Trust.
References
Note
Citations
20th-century English chemists
English chemists
1877 births
1945 deaths
Tribologists
Lubrication | Alexander Duckham | [
"Materials_science"
] | 1,141 | [
"Tribology",
"Tribologists"
] |
50,871,480 | https://en.wikipedia.org/wiki/LONGi | LONGi Green Energy Technology Co. Ltd. () or LONGi Group (), formerly Xi'an Longi Silicon Materials Corporation, is a Chinese photovoltaics company, a major manufacturer of solar modules and a developer of solar power projects.
LONGi is the world's largest manufacturer of monocrystalline silicon wafers and is listed on the Shanghai Stock Exchange.
History
The company was founded on 14 February 2000 by Li Zhenguo as Xi'an Longi Silicon Materials Corporation, with its corporate headquarters in Xi'an, Shaanxi. It changed its name in February 2017 to LONGi Green Energy Technology to better reflect its wider manufacturing scope after its acquisition of LERRI Solar, and also dropped the "Xi'an" location from its name.
In early 2016, LONGi signed a $1.84 billion solar panel sales agreement with SunEdison Products Singapore and agreed to purchase silicon manufactured in South Korea. LONGi also took over SunEdison's Malaysian silicon plant.
In early 2018, LONGi announced plans to build a new 5 GW module assembly plant in the Chuzhou Economic and Technological Development Zone in China's Anhui province; pending an internal review process, the project would require an investment of approximately RMB 1.95 billion (US$300 million) and approximately 28 months of construction and start-up before manufacturing operations commenced.
In 2023, the U.S. Department of Commerce ruled that Vina Solar, a subsidiary of LONGi was circumventing tariffs for Chinese made products.
In 2024, LONGi announced plans to address workforce overcapacity with staff reductions of 30%. In April 2024, the European Commission initiated an investigation into LONGi over anti-competitive subsidies.
Subsidiaries and acquisitions
LERRI Solar Technology Co., Ltd. (a.k.a. LERRI Photovoltaic Technology, also LERRI Solar) was acquired by LONGi in 2014.
LONGi Solar
Operations
LONGi Silicon Materials is engaged in the research, manufacture and distribution of monocrystalline ingots. It is the world's largest monocrystalline silicon manufacturer, and broke world solar efficiency records three times within five months. Fast Company listed Xi'an LONGi Silicon Materials among its "Most Innovative Companies 2013" "for supplying the solar industry with high-quality silicon wafers at low cost". LONGi Solar, a subsidiary of LONGi Green Energy Technology, achieved an industry record of 23.6% conversion efficiency with its P-type monocrystalline PERC (passivated emitter rear cell) solar cells, a technology toward which an increasing number of manufacturers worldwide are migrating.
One production technique involves taking a silicon wafer, typically 1 to 2 mm thick, and making a multitude of parallel, transverse slices across the wafer, creating a large number of slivers that have a thickness of 50 micrometres and a width equal to the thickness of the original wafer. These slices are rotated 90 degrees, so that the surfaces corresponding to the faces of the original wafer become the edges of the slivers. The result is to convert, for example, a 150 mm diameter, 2 mm-thick wafer having an exposed silicon surface area of about 175 cm2 per side into about 1000 slivers having dimensions of 100 mm × 2 mm × 0.1 mm, yielding a total exposed silicon surface area of about 2000 cm2 per side. The electrical doping and contacts that had been on the face of the wafer are now located at the edges of the sliver, rather than at the front and rear as in the case of conventional wafer cells. This makes the cell sensitive from both the front and the rear (a property known as bifaciality).
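The area figures in the sliver example can be verified with a few lines of arithmetic; the following sketch simply reproduces the numbers quoted above.

```python
import math

# Original wafer: 150 mm diameter (7.5 cm radius), 2 mm thick.
wafer_area_cm2 = math.pi * 7.5 ** 2                 # ~176.7 cm2 per side

# Slivers: 100 mm x 2 mm x 0.1 mm, about 1000 of them per wafer.
sliver_face_cm2 = 10.0 * 0.2                        # 100 mm x 2 mm = 2 cm2
total_sliver_area_cm2 = 1000 * sliver_face_cm2      # ~2000 cm2 per side

print(f"wafer: {wafer_area_cm2:.0f} cm2, slivers: {total_sliver_area_cm2:.0f} cm2")
# Slicing yields roughly an order-of-magnitude increase in exposed area per side.
```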
Longi Silicon is a member of the Silicon Module Super League (SMSL), which had been a group of six major c-Si module suppliers in the solar PV industry until Longi was admitted. The other six members of the SMSL group are Canadian Solar, Hanwha Q CELLS, JA Solar, Jinko Solar, Trina Solar, and GCL.
Longi Silicon has been listed on the Shanghai Stock Exchange (security code: 601012) since April 2012.
LONGi has been called the fastest-growing PV manufacturer in the industry. LONGi's annual revenue in 2013 was derived entirely from selling around US$330 million of mono c-Si wafers, but by 2016 annual revenue had risen to approximately US$1.67 billion, a nearly 94% increase over the 2015 fiscal year, which had itself generated revenue growth of around 61% over the year before.
LONGi has manufacturing plants in mainland China, India and Malaysia, and has acquired production facilities from other companies, including from the American manufacturer SunEdison. Photon.Info reports that LONGi Green Energy is also considering opening a factory in the USA.
Products
P/N-type Mono-crystalline silicon wafer
P/N-type Mono-crystalline ingots
Bifacial solar cells
References
External links
Solar energy companies of China
Photovoltaics manufacturers
Manufacturing companies based in Xi'an
Manufacturing companies established in 2006
Renewable resource companies established in 2006
Energy in China
Science and technology in the People's Republic of China
Silicon photonics
Silicon wafer producers
Crystals
Solar power in China
Chinese brands
Chinese companies established in 2006
Companies in the FTSE China A50 Index | LONGi | [
"Chemistry",
"Materials_science",
"Engineering"
] | 1,114 | [
"Silicon photonics",
"Engineering companies",
"Crystallography",
"Crystals",
"Photovoltaics manufacturers",
"Nanotechnology"
] |
33,952,467 | https://en.wikipedia.org/wiki/Microcontinuity | In nonstandard analysis, a discipline within classical mathematics, microcontinuity (or S-continuity) of an internal function f at a point a is defined as follows:
for all x infinitely close to a, the value f(x) is infinitely close to f(a).
Here x runs through the domain of f. In formulas, this can be expressed as follows:
if x ≈ a then f(x) ≈ f(a).
For a function f defined on ℝ, the definition can be expressed in terms of the halo as follows: f is microcontinuous at c ∈ ℝ if and only if f(hal(c)) ⊆ hal(f(c)), where the natural extension of f to the hyperreals is still denoted f. Alternatively, the property of microcontinuity at c can be expressed by stating that the composition st ∘ f is constant on the halo of c, where "st" is the standard part function.
History
The modern property of continuity of a function was first defined by Bolzano in 1817. However, Bolzano's work was not noticed by the larger mathematical community until its rediscovery by Heine in the 1860s. Meanwhile, Cauchy's textbook Cours d'Analyse defined continuity in 1821 using infinitesimals as above.
Continuity and uniform continuity
The property of microcontinuity is typically applied to the natural extension f* of a real function f. Thus, f defined on a real interval I is continuous if and only if f* is microcontinuous at every point of I. Meanwhile, f is uniformly continuous on I if and only if f* is microcontinuous at every point (standard and nonstandard) of the natural extension I* of its domain I (see Davis, 1977, p. 96).
Example 1
The real function f(x) = 1/x on the open interval (0,1) is not uniformly continuous because the natural extension f* of f fails to be microcontinuous at a positive infinitesimal a. Indeed, for such an a, the values a and 2a are infinitely close, but the values of f*, namely 1/a and 1/(2a), are not infinitely close.
Example 2
The function f(x) = x² on ℝ is not uniformly continuous because f* fails to be microcontinuous at an infinite point H. Namely, setting e = 1/H and K = H + e, one easily sees that H and K are infinitely close but f*(H) and f*(K) are not infinitely close.
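The claim in Example 2 can be checked directly. Assuming e = 1/H as above, a one-line computation gives:

```latex
f^*(K) - f^*(H) = (H + e)^2 - H^2 = 2He + e^2 = 2 + \frac{1}{H^2}
```

The standard part of this difference is 2, so f*(H) and f*(K) are not infinitely close, even though K − H = 1/H is infinitesimal.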
Uniform convergence
Uniform convergence similarly admits a simplified definition in a hyperreal setting. Thus, a sequence f_n converges to f uniformly if for all x in the domain of f* and all infinite n, f_n*(x) is infinitely close to f*(x).
See also
Standard part function
Bibliography
Martin Davis (1977) Applied nonstandard analysis. Pure and Applied Mathematics. Wiley-Interscience [John Wiley & Sons], New York-London-Sydney. xii+181 pp.
Gordon, E. I.; Kusraev, A. G.; Kutateladze, S. S.: Infinitesimal analysis. Updated and revised translation of the 2001 Russian original. Translated by Kutateladze. Mathematics and its Applications, 544. Kluwer Academic Publishers, Dordrecht, 2002.
References
Nonstandard analysis
Theory of continuous functions | Microcontinuity | [
"Mathematics"
] | 644 | [
"Theory of continuous functions",
"Mathematical objects",
"Infinity",
"Nonstandard analysis",
"Topology",
"Mathematics of infinitesimals",
"Model theory"
] |
33,956,742 | https://en.wikipedia.org/wiki/Acheson%20process | The Acheson process is a method of synthesizing silicon carbide (SiC) and graphite invented by Edward Goodrich Acheson and patented by him in 1896.
Process
The process consists of heating a mixture of silicon dioxide (SiO2), in the form of silica or quartz sand, and carbon, in its elemental form as powdered coke, in an iron bowl.
In the furnace, the silicon dioxide, which sometimes also contains other additives such as ferric oxide and sawdust, is melted surrounding a graphite rod, which serves as a core. The rods are inserted so that they are held in contact with each other through the particles of coke, commonly called the coke bed. An electric current is passed through the graphite rods, which heats the mixture to 1700–2500 °C. The result of the carbothermic reaction is a layer of silicon carbide (especially in its alpha and beta phases) forming around the rod, and the emission of carbon monoxide (CO). There are four chemical reactions in the production of silicon carbide:
C + SiO2 → SiO + CO
SiO2 + CO → SiO + CO2
C + CO2 → 2CO
SiO + 2 C → SiC + CO
This overall process is highly endothermic, with a net reaction:
SiO2 + 3 C + 625.1 kJ → α-SiC + 2 CO
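For a sense of scale, the 625.1 kJ per mole of SiC in the net reaction can be converted into a theoretical minimum energy per kilogram of product. The sketch below performs this back-of-the-envelope arithmetic (the reaction enthalpy is taken from the equation above; the molar masses are standard values):

```python
# Minimum theoretical energy to produce 1 kg of SiC via the net reaction
# SiO2 + 3 C + 625.1 kJ -> alpha-SiC + 2 CO (per mole of SiC formed).
dH_kj_per_mol = 625.1
molar_mass_sic = 28.085 + 12.011      # g/mol: Si + C, about 40.10 g/mol

moles_per_kg = 1000.0 / molar_mass_sic
energy_mj_per_kg = dH_kj_per_mol * moles_per_kg / 1000.0

print(f"{energy_mj_per_kg:.1f} MJ per kg of SiC")   # ~15.6 MJ/kg
# This excludes heat losses, which is why real furnaces draw far more energy.
```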
Discovery
In 1890 Acheson attempted to synthesize diamond but ended up creating blue crystals of silicon carbide that he called carborundum. He found that the silicon vaporized when overheated, leaving graphite. He also discovered that when starting with carbon instead of silicon carbide, graphite was produced only when there was an impurity, such as silica, that would result in first producing a carbide. He patented the process of making graphite in 1896. After discovering this process, Acheson developed an efficient electric furnace based on resistive heating, the design of which is the basis of most silicon carbide manufacturing today.
Commercial production
The first commercial plant using the Acheson process was built by Acheson in Niagara Falls, New York, where hydroelectric plants nearby could cheaply produce the necessary power for the energy intensive process. By 1896, The Carborundum Company was producing 1 million pounds of carborundum. Many current silicon carbide plants use the same basic design as the first Acheson plant. In the first plant, sawdust and salt were added to the sand to control purity. The addition of salt was eliminated in the 1960s, due to it corroding steel structures. The addition of sawdust was stopped in some plants to reduce emissions.
To manufacture synthetic graphite items, carbon powder and silica are mixed with a binder, such as tar, pressed into shapes such as electrodes or crucibles, and baked. They are then surrounded with granulated carbon acting as a resistive element that heats them. In the more efficient Castner lengthwise graphitization furnace, the items to be graphitized, e.g. rods, are heated directly by placing them lengthwise end-to-end in contact with the carbon electrodes so that current flows through them; the surrounding granulated carbon acts as a thermal insulator, but otherwise the furnace is similar to the Acheson design.
To finish the items, the process is run for approximately 20 hours for a furnace approximately 9 meters long by 35 cm in width and 45 cm in depth; the resistance drops as the carbon heats, due to its negative temperature coefficient, causing the current to increase from its starting value. Cool-down takes weeks. The purity of graphite achievable using the process is 99.5%.
Uses
Silicon carbide was a useful material in jewelry making due to its abrasive properties, and this was the first commercial application of the Acheson process.
In the 1940s, first the Manhattan Project and then the Soviet atomic bomb project adopted the Acheson process for nuclear graphite manufacturing.
The first light-emitting diodes were produced using silicon carbide from the Acheson process. The potential use of silicon carbide as a semiconductor led to the development of the Lely process, which was based on the Acheson process, but allowed control over the purity of the silicon-carbide crystals.
The graphite became valuable as a lubricant and for producing high-purity electrodes.
Cancer correlation
Occupational exposures associated with the Acheson process are strongly linked to an increased risk of lung cancer.
References
Further reading
Chemical processes
Carcinogens
IARC Group 1 carcinogens | Acheson process | [
"Chemistry",
"Environmental_science"
] | 966 | [
"Toxicology",
"Chemical processes",
"nan",
"Chemical process engineering",
"Carcinogens"
] |
33,962,555 | https://en.wikipedia.org/wiki/Multiple-use%20water%20supply%20system | Multiple Use water Schemes (MUS) are low-cost, equitable water supply systems that provide communities with water for both domestic needs and high-value agricultural production, including rearing livestock. They are designed for use in rural areas, inhabited by smallholder farmers, and generally cover ten to 40 households, although some have served many more households.
Collaboration
The International Water Management Institute and International Development Enterprises collaborated on a project using MUS to help reduce poverty in India and Nepal. Between 2003 and 2008, 12 MUS systems were installed in Himalayan hilly areas serving a total of about 5000 households. A water poverty mapping technique helped identify the best areas to target. When the impact of the installed systems was evaluated, it showed that low initial investment costs (approximately US$200 per household) could be paid back within a year. This was because the households served with MUS were able to earn additional income of about US$190 per year through sale of surplus produce.
In a water supply system designed for a single use, such as irrigating crops, livestock might damage hardware if they try to access the water, and people needing water for domestic uses might find there is no water provided in months when it is not needed for watering crops. These problems can be overcome when designing water supply systems for multiple uses. For example, steps can be built to provide access for bathing or washing clothes, and access points can be provided to give livestock safe access to water. Sufficient water can be supplied that there is always some available for domestic uses, even at times when the crops do not need water. Other livelihood options can also be considered; for example, using water for fisheries, as well as rearing livestock, growing crops and domestic uses. MUS can benefit women; for example, by reducing the time they have to spend gathering water and by providing water close to their home with which they can grow produce to feed their families and sell on.
Gutu and Prowse (2017) offer some estimates from Ethiopia on farmers’ willingness to pay for a multiple-use water supply system. They find that willingness to pay is based on gender, the prevalence of waterborne disease, the time to collect water, contact with extension services, access to credit, level of income and location. Respondents would pay 3.43 per cent of average income to participate. Consideration of how gendered norms influence women’s access to extension, credit and local markets could extend the benefits of such schemes.
References
External links
Water Sample Testing & Hygiene Monitoring
How Much Water Does a Dripping Faucet Waste?
Water supply
Water management | Multiple-use water supply system | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 522 | [
"Hydrology",
"Water supply",
"Environmental engineering"
] |
53,689,676 | https://en.wikipedia.org/wiki/Worksoft | Worksoft, Inc. is a software testing company founded in 1998 and headquartered in Addison, Texas. The company provides an automation platform for test automation, business process discovery, and documentation supporting enterprise applications, including packaged and web apps.
In addition to its headquarters in Addison, Texas, the company has offices in London and Munich.
History
Worksoft was founded in 1998 by Linda Hayes, a co-founder of AutoTester, Inc, and was initially funded by a contract with Fidelity Investments for Y2K testing. Worksoft Certify was the first code-less automation tool designed for business analysts and is now a leader in the ERP automation industry. Texas-based Austin Ventures and California-based Crescendo Ventures were major investors. In 2010, Worksoft acquired TestFactory, a software testing company specializing in SAP.
In 2019, Worksoft was acquired by Marlin Equity Partners for an undisclosed sum.
Products
Worksoft Certify is a test automation platform focused on business process testing. Worksoft Certify can be used to test ERP applications, web apps, mobile apps, and more. The software is SAP certified for integration with SAP applications.
Other products include Worksoft Analyze, Worksoft Business Reporting Tool (BPP), Worksoft Execution Suite, and Process Capture 2.0.
References
Software companies established in 1998
1998 establishments in Texas
Software testing
Companies based in Addison, Texas
2019 mergers and acquisitions | Worksoft | [
"Engineering"
] | 287 | [
"Software engineering",
"Software testing"
] |
52,309,618 | https://en.wikipedia.org/wiki/Geology%20applications%20of%20Fourier%20transform%20infrared%20spectroscopy | Fourier transform infrared spectroscopy (FTIR) is a spectroscopic technique that has been used for analyzing the fundamental molecular structure of geological samples in recent decades. As in other infrared spectroscopy, the molecules in the sample are excited to a higher energy state due to the absorption of infrared (IR) radiation emitted from the IR source in the instrument, which results in vibrations of molecular bonds. The intrinsic physicochemical property of each particular molecule determines its corresponding IR absorbance peak, and therefore can provide characteristic fingerprints of functional groups (e.g. C-H, O-H, C=O, etc.).
In geosciences research, FTIR is applied extensively in the following applications:
Analysing the trace amount of water content in Nominally anhydrous minerals (NAMs)
Measuring volatile inclusions in glass and minerals
Estimating the explosive potential in volcanic settings
Analysing the chemotaxonomy of early life on Earth
Linking biological affinities of both microfossils and macrofossils
These applications are discussed in details in the later sections. Most of the geology applications of FTIR focus on the mid-infrared range, which is approximately 4000 to 400 cm−1.
Instrumentation
The fundamental components of a Fourier transform spectrometer include a polychromatic light source and a Michelson interferometer with a movable mirror. When light enters the interferometer, it is separated into two beams: 50% of the light reaches the static mirror and the other half reaches the movable mirror. The two light beams reflect from the mirrors and recombine as a single beam at the beam splitter. The combined beam travels through the sample and is finally collected by the detector. The retardation (total path difference) of the light beams between the static mirror and the movable mirror results in interference patterns. The IR absorption by the sample occurs at many frequencies and the resulting interferogram is composed of all frequencies except those absorbed. A mathematical operation, the Fourier transform, converts the raw data into a spectrum.
Advantages
The FTIR technique uses a polychromatic beam of light with a wide range of continuous frequencies simultaneously, and therefore allows a much higher speed of scanning versus the conventional monochromatic dispersive spectroscopy.
Without the slit used in dispersive spectroscopy, FTIR allows more light to enter the spectrometer and gives a higher signal-to-noise ratio, i.e. a less-disturbed signal.
The IR laser used has a known wavelength and the velocity of the movable mirror can be controlled accordingly. This stable setup allows a higher accuracy for spectrum measurement.
Sample characterization
Transmission FTIR, attenuated total reflectance (ATR)-FTIR, diffuse reflectance infrared Fourier transform (DRIFT) spectroscopy and reflectance micro-FTIR are commonly used for sample analysis.
Applications in geology
Volatiles diagnosis
The most commonly investigated volatiles are water and carbon dioxide, as they are the primary volatiles driving volcanic and magmatic processes. The absorption bands of total water and molecular water occur at approximately 3450 cm−1 and 1630 cm−1 respectively, and the absorption bands for CO2 and CO32− occur at 2350 cm−1 and 1430 cm−1 respectively. The phase of a volatile also changes the frequency of bond stretching and hence the observed wavenumber. For example, the band of solid and liquid CO2 occurs between 2336 and 2345 cm−1, while the CO2 gas phase shows two distinctive bands at 2338 cm−1 and 2361 cm−1. This is due to the energy difference introduced by the vibrational and rotational motion of gas molecules.
The modified Beer–Lambert law is commonly used in geoscience for converting the absorbance in the IR spectrum into a species concentration:
ω = 100 · M · A / (ε · l · ρ)
where ω is the wt. % of the species of interest within the sample; A is the absorbance of the species; M is the molar mass (in g mol−1); ε is the molar absorptivity (in L mol−1 cm−1); l is the sample thickness (in cm); and ρ is the density (in g L−1).
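A minimal helper implementing this conversion might look as follows; the numerical inputs in the example are illustrative placeholders, since the molar absorptivity ε is calibration- and composition-dependent:

```python
def species_wt_percent(absorbance, molar_mass_g_mol, epsilon_l_mol_cm,
                       thickness_cm, density_g_l):
    """Modified Beer-Lambert conversion: IR absorbance -> wt.% of a species."""
    return 100.0 * absorbance * molar_mass_g_mol / (
        epsilon_l_mol_cm * thickness_cm * density_g_l)

# Hypothetical example: total H2O band (~3450 cm-1) in a glass wafer.
# epsilon and density below are placeholders, not calibrated values.
w = species_wt_percent(absorbance=0.85, molar_mass_g_mol=18.02,
                       epsilon_l_mol_cm=75.0, thickness_cm=0.02,
                       density_g_l=2350.0)
print(f"H2O ~ {w:.2f} wt.%")   # ~0.43 wt.% for these inputs
```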
There are various applications in which the quantitative amounts of volatiles are determined using spectroscopic techniques. The following sections provide some examples:
Hydrous components in nominally anhydrous minerals
Nominally anhydrous minerals (NAMs) are minerals containing only trace to minor amounts of hydrous components, which occur only at crystal defects; NAM chemical formulas are normally written without hydrogen. NAMs such as olivine and orthopyroxene account for a large proportion of the mantle's volume. Individual minerals may contain only a very low content of OH, but their total mass can contribute significantly to the H2O reservoir of the Earth and other terrestrial planets. The low concentration of hydrous components (OH and H2O) can be analyzed with a Fourier transform spectrometer thanks to its high sensitivity. Water is thought to play a significant role in mantle rheology, either by hydrolytic weakening of the mineral structure or by lowering the partial-melting temperature. The presence of hydrous components within NAMs can therefore (1) provide information on the crystallization and melting environment of the initial mantle; and (2) help reconstruct the paleoenvironment of the early terrestrial planets.
Fluid and melt inclusions
Inclusions are small mineral crystals or parcels of foreign fluid trapped within a crystal. Melt inclusions and fluid inclusions can provide physical and chemical information about the geological environment in which the melt or fluid was trapped. A fluid inclusion is a bubble within a mineral that traps volatiles or microscopic minerals; a melt inclusion is a parcel of the parent melt from the initial crystallization environment held within a mineral. Because inclusions preserve the original melt, they can record the magmatic conditions under which the melt was near its liquidus. Inclusions can be particularly useful in petrological and volcanological studies.
Inclusions are usually microscopic (micrometre-scale) and contain very low concentrations of volatile species. By coupling a synchrotron light source to the FTIR spectrometer, the diameter of the IR beam can be reduced to as little as 3 μm. This allows higher accuracy, detecting only the targeted bubbles or melt parcels without contamination from the surrounding host mineral.
By incorporating other parameters (i.e. temperature, pressure and composition) obtained from microthermometry and from electron and ion microprobe analysis, it is possible to reconstruct the entrapment environment and further infer magma genesis and crustal storage. This FTIR approach has successfully detected H2O and CO2 in a number of studies. For example, water-saturated inclusions in olivine phenocrysts erupted at Stromboli (Sicily, Italy) record depressurization, and the unexpected occurrence of molecular CO2 in melt inclusions from the Phlegraean Volcanic District (Southern Italy) revealed the presence of a deep, CO2-rich, continuously degassing magma.
Evaluate the explosive potential volcanic dome
Vesiculation, i.e. the nucleation and growth of bubbles commonly initiates eruptions in volcanic domes. The evolution of vesiculation can be summarized in these steps:
The magma becomes progressively saturated with volatiles as water and carbon dioxide dissolve in it. Nucleation of bubbles starts when the magma is supersaturated with these volatiles.
Bubbles continue to grow by diffusive transfer of dissolved water from the magma. Stresses build up inside the volcanic dome.
The bubbles expand as the magma decompresses, and explosions eventually occur. This terminates the vesiculation.
In order to understand the eruption process and evaluate the explosive potential, FTIR spectromicroscopy is used to measure millimeter-scale variations in H2O in obsidian samples near the pumice outcrop. In the highly vesicular pumice, from which volatiles escaped during the explosion, the diffusive transfer of water from the magma host has already run to completion. In the glassy obsidian formed from cooling lava, by contrast, water diffusion has not yet completed, so the evolution of volatile diffusion is recorded within these samples. The H2O concentration measured by FTIR across the samples increases away from the vesicular pumice boundary. The shape of the water concentration profile represents a volatile-diffusion timescale, so the initiation and termination of vesiculation are recorded in the obsidian sample. The diffusion rate of H2O can be estimated based on the following 1D diffusion equation:
∂C/∂t = ∂/∂x [D(C, T, P) ∂C/∂x]
where C is the H2O concentration and D(C, T, P) is the diffusivity of H2O in the melt, which has an Arrhenian dependence on temperature (T), pressure (P) and H2O content (C).
When generating the diffusion model with the diffusion equation, the temperature and pressure can be fixed to a high-temperature, low-pressure condition resembling the lava-dome eruption environment. The maximum H2O content measured by the FTIR spectrometer is used as the initial value in the diffusion equation, representing a volatile-supersaturated condition. The duration of the vesiculation event is constrained by the decrease of water content across the sample as volatiles escape into the bubbles: the more gradual the change in the water-content curve, the longer the vesiculation event. The explosive potential of a volcanic dome can therefore be estimated from the water-content profile derived from the diffusion model.
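A minimal numerical sketch of such a diffusion model is shown below. For simplicity it uses an explicit finite-difference scheme with a constant diffusivity (real models make D depend on C, T and P as in the equation above), and every parameter value is an illustrative placeholder:

```python
import numpy as np

def diffuse_water(c0, dx_m, D_m2_s, t_total_s, dt_s):
    """Explicit 1D finite-difference solution of dC/dt = D d2C/dx2.
    c0: initial H2O profile (wt.%); boundary values are held fixed."""
    c = c0.copy()
    r = D_m2_s * dt_s / dx_m ** 2
    assert r <= 0.5, "explicit scheme unstable; reduce dt"
    for _ in range(int(t_total_s / dt_s)):
        c[1:-1] += r * (c[2:] - 2 * c[1:-1] + c[:-2])   # interior nodes only
    return c

# Illustrative setup: 10 mm profile, water escaping at the pumice boundary (x = 0).
n = 101
profile = np.full(n, 1.0)   # 1.0 wt.% H2O in the supersaturated melt
profile[0] = 0.1            # degassed value at the vesicular boundary
out = diffuse_water(profile, dx_m=1e-4, D_m2_s=1e-11, t_total_s=3.6e3, dt_s=0.2)
# The steepness of the modeled profile, compared with the measured one,
# constrains the vesiculation timescale.
```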
Establishing taxonomy of early life
For large fossils with well-preserved morphology, paleontologists may be able to recognize taxa relatively easily from their distinctive anatomy. However, for microfossils with simple morphology, compositional analysis by FTIR is an alternative way to identify the biological affinities of these species. The highly sensitive FTIR spectrometer can be used to study microfossils for which only a small number of specimens is available in nature. FTIR results can also assist the development of plant-fossil chemotaxonomy.
Aliphatic C-H stretching bands near 2900 cm−1, the aromatic C=C ring stretching band at 1600 cm−1, and C=O bands at 1710 cm−1 are some of the common target functional groups examined by paleontologists. The CH3/CH2 ratio is useful for distinguishing different groups of organisms (e.g. archaea, bacteria and eukarya), or even species within the same group (i.e. different plant species).
Linkage between acritarchs and microfossil taxa
Acritarchs are microorganisms characterized by their acid-resistant, organic-walled morphology, and they have existed from the Proterozoic to the present. There is no consensus on the common descent, evolutionary history or evolutionary relationships of acritarchs. They share similarities with cells or organelles of different origins, listed below:
Cysts of eukaryotes: eukaryotes are by definition organisms whose cells contain a nucleus and other organelles enclosed within membranes. The cyst is a dormant stage in the life cycle of many microeukaryotes, with a strengthened wall that protects the cell under unfavorable conditions.
Prokaryotic sheaths: the cell walls of single-celled organisms that lack membrane-bound organelles such as a nucleus;
Algae and other vegetative parts of multicellular organisms;
Crustacean egg cases.
Acritarch samples are collected from drill cores in places where Proterozoic microfossils have been reported, e.g. the Roper Group (1.5–1.4 Ga) and the Tanana Formation (ca. 590–565 Ma) in Australia, and the Ruyang Group, China (around 1.4–1.3 Ga). Comparison of the chain length and structure of modern eukaryotic microfossils and the acritarchs suggests possible affinities between some of the species. For example, the composition and structure of the Neoproterozoic acritarch Tanarium conoideum is consistent with algaenans, i.e. the resistant walls of green algae made up of long-chained methylenic polymers that can withstand the changing temperatures and pressures of geological history. The FTIR spectra obtained from both Tanarium conoideum and algaenans exhibit IR absorbance peaks for methylene (CH2) bending (c. 1400 cm−1) and stretching (c. 2900 cm−1).
Chemotaxonomy of plant fossils
Micro-structural analysis is a common complement to conventional morphological taxonomy for plant fossil classification, and FTIR spectroscopy can provide insightful information on the microstructure of different plant taxa. The cuticle is a waxy protective layer covering plant leaves and stems that prevents water loss. Its constituent waxy polymers are generally well preserved in plant fossils and can be used for functional group analysis. For example, the well-preserved cuticles of cordaitales fossils, an extinct order of plants, found in Sydney, Stellarton and Bay St. George show similar FTIR spectra. This result confirms previous morphology-based studies indicating that all these morphologically similar cordaitales originated from a single taxon.
References
Infrared spectroscopy
Geological techniques | Geology applications of Fourier transform infrared spectroscopy | [
"Physics",
"Chemistry"
] | 2,740 | [
"Infrared spectroscopy",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
52,311,547 | https://en.wikipedia.org/wiki/Quadruplanar%20inversor | The Quadruplanar inversor of Sylvester and Kempe is a generalization of Hart's inversor. Like Hart's inversor, is a mechanism that provides a perfect straight line motion without sliding guides.
The mechanism was described in 1875 by James Joseph Sylvester in the journal Nature.
Like Hart's inversor, it is based on an antiparallelogram, but rather than placing the fixed, input and output points on the sides (dividing them in fixed proportion so they are all similar), Sylvester recognized that the additional points could be displaced sideways off the sides, as long as they formed similar triangles. Hart's original form is simply the degenerate case of triangles with altitude zero.
Gallery
In these diagrams:
The antiparallelogram is highlighted in full opacity links.
Yellow Triangles and Green Triangles are similar.
Green Triangles are congruent with each other.
Yellow Triangles are congruent with each other.
Cyan links and Pink links are congruent.
Dashed links are additional appendages to allow for a link to travel rectilinearly.
Example 1 – Sylvester–Kempe Inversor
Example dimensions accompany the diagram, specifying the lengths of the cyan links, the pink links, and the shorter and longest sides of the green and yellow triangles.
Example 2 – Sylvester–Kempe Inversor
Example dimensions accompany the diagram, specifying the lengths of the cyan links, the pink links, and the shorter and longest sides of the green and yellow triangles.
Example 3 – Sylvester–Kempe Inversor
Example dimensions accompany the diagram, specifying the lengths of the cyan links, the pink links, and the shortest, intermediate and longest sides of the green and yellow triangles.
Example 4 – Kumara–Kampling Inversor
Created by Fumio Imai and Arglin Kampling. Rather than having the third joint of each triangular link displaced off to the side, the third joint can also be displaced collinear with the original links, allowing the links to remain simple bars.
Example dimensions accompany the diagram, specifying the lengths of the cyan, pink, green and yellow links.
See also
Hart's first inversor / Hart's antiparallelogram / Hart's W-frame, the origination of the Quadruplanar inversor.
Linkage (mechanical)
Straight line mechanism
Notes
References
External links
Quadruplanar Inversor Generalization – an interactive demo at GeoGebra for creating and simulating Quadruplanar Inversor linkages
A strong relationship between new and old inversion mechanisms Dijksman, E.A., Published in: Journal of Engineering for Industry : Transactions of the ASME, Published: 01/01/1971
https://americanhistory.si.edu/collections/search/object/nmah_1214012
https://alexandria.tue.nl/repository/freearticles/605221.pdf
Linkages (mechanical)
Linear motion
Straight line mechanisms | Quadruplanar inversor | [
"Physics"
] | 606 | [
"Physical phenomena",
"Motion (physics)",
"Linear motion"
] |
55,201,705 | https://en.wikipedia.org/wiki/Mathematical%20Methods%20of%20Classical%20Mechanics | Mathematical Methods of Classical Mechanics is a textbook by mathematician Vladimir I. Arnold. It was originally written in Russian, and later translated into English by A. Weinstein and K. Vogtmann. It is aimed at graduate students.
Contents
Part I: Newtonian Mechanics
Chapter 1: Experimental Facts
Chapter 2: Investigation of the Equations of Motion
Part II: Lagrangian Mechanics
Chapter 3: Variational Principles
Chapter 4: Lagrangian Mechanics on Manifolds
Chapter 5: Oscillations
Chapter 6: Rigid Bodies
Part III: Hamiltonian Mechanics
Chapter 7: Differential forms
Chapter 8: Symplectic Manifolds
Chapter 9: Canonical Formalism
Chapter 10: Introduction to Perturbation Theory
Appendices
Riemannian curvature
Geodesics of left-invariant metrics on Lie groups and the hydrodynamics of ideal fluids
Symplectic structures on algebraic manifolds
Contact structures
Dynamical systems with symmetries
Normal forms of quadratic Hamiltonians
Normal forms of Hamiltonian systems near stationary points and closed trajectories
Theory of perturbations of conditionally periodic motion and Kolmogorov's theorem
Poincaré's geometric theorem, its generalizations and applications
Multiplicities of characteristic frequencies, and ellipsoids depending on parameters
Short wave asymptotics
Lagrangian singularities
The Korteweg–de Vries equation
Poisson structures
On elliptic coordinates
Singularities of ray systems
Russian original and translations
The original Russian first edition, Математические методы классической механики, was published in 1974 by Nauka. A second edition was published in 1979, and a third in 1989. The book has since been translated into a number of other languages, including French, German, Japanese and Mandarin.
Reviews
The Bulletin of the American Mathematical Society said, "The [book] under review [...] written by a distinguished mathematician [...is one of] the first textbooks [to] successfully to present to students of mathematics and physics, [sic] classical mechanics in a modern setting."
A book review in the journal Celestial Mechanics said, "In summary, the author has succeeded in producing a mathematical synthesis of the science of dynamics. The book is well presented and beautifully translated [...] Arnold's book is pure poetry; one does not simply read it, one enjoys it."
See also
List of textbooks in classical and quantum mechanics
References
Bibliography
1974 non-fiction books
Classical mechanics
Graduate Texts in Mathematics
Physics textbooks
Mathematics textbooks | Mathematical Methods of Classical Mechanics | [
"Physics"
] | 534 | [
"Mechanics",
"Classical mechanics"
] |
55,205,298 | https://en.wikipedia.org/wiki/Mean%20glandular%20dose | In mammography, mean glandular dose (MGD) is a quantity used to describe the absorbed dose of radiation to the breast. It is based on a measurement of air kerma and conversion factors. MGD can be calculated from measurements made with poly(methyl methacrylate) (PMMA) blocks. It is often used to compare typical doses to patients between different centres or internationally, and is the preferred measure of the potential risk from mammography.
Calculation
MGD can be calculated from a measured incident air kerma at the top of the breast, K, as follows:
MGD = K g c s
Here g converts from incident air kerma to MGD, with a glandularity of 50%, based on breast thickness and half-value layer (HVL); c corrects for glandularity other than 50%, depending on the breast thickness and HVL, with two versions for ages 50–64 and 40–49; and s corrects for the x-ray spectrum in use, with a table of target/filter combinations.
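A minimal sketch of this calculation follows; the factor values are illustrative placeholders only, since g, c and s must be taken from published look-up tables for the actual breast thickness, HVL and target/filter combination:

```python
def mean_glandular_dose(k_air_mgy, g, c, s):
    """Mean glandular dose (mGy) from incident air kerma via MGD = K*g*c*s."""
    return k_air_mgy * g * c * s

# Hypothetical example: the values below are placeholders, not tabulated factors.
mgd = mean_glandular_dose(k_air_mgy=5.0,  # measured incident air kerma, mGy
                          g=0.37,         # 50%-glandularity conversion factor
                          c=1.02,         # glandularity correction
                          s=1.00)         # spectrum (target/filter) correction
print(f"MGD = {mgd:.2f} mGy")             # well under the 2.5 mGy limit cited below
```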
Applications
MGD is typically used to define limits on mammography exposures by national and international organisations such as the European Union and International Atomic Energy Agency, at <2.5 milligray (mGy) per exposure to a standard breast (4.5 cm PMMA).
In routine quality assurance testing of mammographic equipment, MGD measurements for a range of effective breast thicknesses with PMMA, and from real patient exposures, are widely recommended.
References
Radiology
Medical physics
Radiation protection | Mean glandular dose | [
"Physics"
] | 305 | [
"Applied and interdisciplinary physics",
"Medical physics"
] |
55,206,126 | https://en.wikipedia.org/wiki/Schuler%20Group | Schuler AG is a German company headquartered in Göppingen, Baden-Württemberg which operates in the field of forming technology and is the world's largest manufacturer of presses. The presses are used to create car body sheets and other car parts as well as items such as beverage and aerosol cans, coins, sinks, large pipes, and parts for electric motors.
The company has production sites in Germany, Switzerland, Brazil, USA and China and in addition to the automotive industry and its suppliers, it also supplies the household appliances and electrical industry, the forging, energy, aerospace and railway industries as well as mints.
In total, the company has a presence in 40 countries with its own sites and representatives.
As of December 31, 2016, the company employed 6,617 people and in the 2016 fiscal year, it achieved a turnover of €1.2 billion. Earnings before interest and taxes (EBIT) grew in 2015 to €95.4 million, the Group result was €77.4 million.
Schuler AG's shares were listed on the regulated market on the Frankfurt and Stuttgart stock exchanges. When the public float portion fell below 10% in 2012, Schuler fell off the SDAX share index. In 2014 the shares were delisted from the regulated stock exchange; today, the shares are only listed on the open market on the Munich stock exchange.
History
The company was founded by Louis Schuler in 1839 and produced the first sheet metal forming machines in 1852. In 1895, the first minting presses were exported to China. Schuler presented the world's first transfer press at the Exposition Universelle fair in Paris in 1900. In 1924, the first body panel press for mass production was delivered. Internationalization began in 1961. In 1999, Schuler went public and entered the field of laser technology with the acquisition of Held Lasertechnik in Dietzenbach, Germany. In 2007, Schuler acquired Müller Weingarten AG, which also included the company Umformtechnik Erfurt, along with others. This acquisition created a global leading provider of forming technology for metal processing with a market share of around 35 percent.
In the same year, Schuler launched its ServoDirect Technology for presses, which has now become the industry standard. It was followed by the TwinServo Technology in 2014.
A double-lever deep-drawing press manufactured by Schuler dating from 1928 was retained at the Automobilwerk Eisenach, and is exhibited outside the Automobile Welt museum in Eisenach as a technical monument, after having been in operation there until 1998.
When the public float portion fell below 10% in 2012, Schuler fell off the SDAX share index.
In May 2012, Austrian company Andritz AG acquired 38.5% of the shares in Schuler AG from the Schuler-Voith family, and made the shareholders an offer of €20 per share. As of February 15, 2013, Andritz reported a stake of 93.57 percent, after the competition authorities had given their approval for the acquisition.
In spring 2014, the Schuler AG Executive Board decided to apply for the Schuler AG shares to be delisted.
In 2017, the Schuler Innovation Tower at the Göppingen headquarters was officially opened.
As part of the sponsoring activities, Schuler supports projects in the field of science, research, education, social affairs and good citizenship at the various locations. The Louis Schuler Fund for Education and Science, for instance, is assigned the task of providing support for trainees and educational institutions in the field of technology.
References
External links
Official website
Companies based in Baden-Württemberg
Industrial machine manufacturers
Manufacturing companies established in 1839
German brands
German companies established in 1839 | Schuler Group | [
"Engineering"
] | 775 | [
"Industrial machine manufacturers",
"Industrial machinery"
] |
55,206,702 | https://en.wikipedia.org/wiki/Seidel%27s%20algorithm | Seidel's algorithm is an algorithm designed by Raimund Seidel in 1992 for the all-pairs-shortest-path problem for undirected, unweighted, connected graphs. It solves the problem in expected time for a graph with vertices, where is the exponent in the complexity of matrix multiplication. If only the distances between each pair of vertices are sought, the same time bound can be achieved in the worst case. Even though the algorithm is designed for connected graphs, it can be applied individually to each connected component of a graph with the same running time overall. There is an exception to the expected running time given above for computing the paths: if the expected running time becomes .
Details of the implementation
The core of the algorithm is a procedure that computes the length of the shortest-paths between any pair of vertices.
In the worst case this can be done in O(n^ω log n) time. Once the lengths are computed, the paths can be reconstructed using a Las Vegas algorithm whose expected running time is O(n^ω log n) for ω > 2 and O(n² log² n) for ω = 2.
Computing the shortest-paths lengths
The Python code below assumes the input graph is given as a 0-1 adjacency matrix A with zeros on the diagonal. It defines the function apd which returns a matrix D with entries D[i][j] giving the length of the shortest path between the vertices i and j. The matrix class used can be any matrix class implementation supporting the multiplication, exponentiation, and indexing operators (for example numpy.matrix).
def apd(A, n: int):
    """Compute the shortest-paths lengths."""
    # Base case: every pair of distinct vertices is adjacent, so the
    # adjacency matrix is already the distance matrix (all distances 1).
    if all(A[i][j] for i in range(n) for j in range(n) if i != j):
        return A
    # Z[i][j] > 0 exactly when a two-step path connects i and j.
    Z = A**2
    # B: adjacency matrix of the "squared" graph, in which i and j are
    # adjacent iff their distance in the original graph is 1 or 2.
    B = matrix(
        [
            [1 if i != j and (A[i][j] == 1 or Z[i][j] > 0) else 0 for j in range(n)]
            for i in range(n)
        ]
    )
    # Recurse: distances in the squared graph are the original distances
    # halved, rounded up.
    T = apd(B, n)
    X = T * A
    degree = [sum(A[i][j] for j in range(n)) for i in range(n)]
    # Parity correction: D[i][j] is either 2*T[i][j] or 2*T[i][j] - 1,
    # decided by comparing X[i][j] with T[i][j] * degree[j].
    D = matrix(
        [
            [
                2 * T[i][j] if X[i][j] >= T[i][j] * degree[j] else 2 * T[i][j] - 1
                for j in range(n)
            ]
            for i in range(n)
        ]
    )
    return D
The base case tests whether the input adjacency matrix describes a complete graph, in which case all shortest paths have length 1.
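As a usage sketch, the small matrix wrapper below is a hypothetical helper written only to provide the operators the code above requires (numpy is assumed); as the text notes, any class with the same interface would do.

    import numpy as np

    class matrix:
        """Hypothetical minimal wrapper giving apd() the operators it needs:
        '*' (matrix product), '**' (matrix power) and m[i][j] entry access."""
        def __init__(self, rows):
            self._a = np.array(rows, dtype=int)
        def __mul__(self, other):
            return matrix(self._a @ other._a)
        def __pow__(self, k):
            return matrix(np.linalg.matrix_power(self._a, k))
        def __getitem__(self, i):
            return self._a[i]  # a row; row[j] then yields the (i, j) entry

    # Path graph 0-1-2-3: the distance between vertices 0 and 3 is 3.
    A = matrix([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]])
    D = apd(A, 4)
    print(D[0][3])  # -> 3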
Graphs with weights from finite universes
Algorithms for undirected and directed graphs with weights from a finite universe also exist. The best known algorithm for the directed case, by Zwick in 1998, runs in Õ(M^(1/(4−ω)) · n^(2+1/(4−ω))) time, where M bounds the absolute values of the integer edge weights. This algorithm uses rectangular matrix multiplication instead of square matrix multiplication. Better upper bounds can be obtained if one uses the best rectangular matrix multiplication algorithm available instead of achieving rectangular multiplication via multiple square matrix multiplications. The best known algorithm for the undirected case, by Shoshan and Zwick in 1999, runs in Õ(M · n^ω) time. The original implementation of this algorithm was erroneous and has been corrected by Eirinakis, Williamson, and Subramani in 2016.
Notes
Graph algorithms
Polynomial-time problems
Computational problems in graph theory
Articles with example Python (programming language) code
Graph distance | Seidel's algorithm | [
"Mathematics"
] | 705 | [
"Computational problems in graph theory",
"Computational mathematics",
"Graph theory",
"Computational problems",
"Polynomial-time problems",
"Mathematical relations",
"Mathematical problems",
"Graph distance"
] |
55,208,562 | https://en.wikipedia.org/wiki/BioData%20Mining | BioData Mining is a peer-reviewed open access scientific journal covering data mining methods applied to computational biology and medicine established in 2008. It is published by BioMed Central and the editors-in-chief are Jason H. Moore and Nicholas Tatonetti (Cedars Sinai Medical Center).
Abstracting and indexing
The journal is abstracted and indexed in a number of bibliographic databases.
According to the Journal Citation Reports, the journal has a 2021 impact factor of 4.079.
References
External links
BioMed Central academic journals
Biomedical informatics journals
Creative Commons Attribution-licensed journals
Academic journals established in 2008
Continuous journals | BioData Mining | [
"Biology"
] | 122 | [
"Bioinformatics",
"Biomedical informatics journals"
] |
43,633,744 | https://en.wikipedia.org/wiki/Mesodinium%20rubrum | Mesodinium rubrum (or Myrionecta rubra) is a species of ciliates. It constitutes a plankton community and is found throughout the year, most abundantly in spring and fall, in coastal areas. Although discovered in 1908, its scientific importance came into light in the late 1960s when it attracted scientists by the recurrent red colouration it caused by forming massive blooms, that cause red tides in the oceans.
Unlike typical protozoans, M. rubrum can make its own nutrition by photosynthesis. The unusual autotrophic property was discovered in 2006 when genetic sequencing revealed that the photosynthesising organelles, plastids, were derived from the ciliate's principal food, the autotrophic algae called cryptomonads (or cryptophytes), which contain endosymbiont red algae whose internal chloroplasts (evolved via endosymbiosis with cyanobacteria) indirectly enable M. rubrum to photosynthesize using sunlight. The ciliate is thus both autotrophic and heterotrophic at the same time. This also indicates that it is an example of multiple-stage endosymbiosis in the form of kleptoplasty. Moreover, these “stolen” plastids can be further transferred to additional hosts, as seen in the case of predation of M. rubrum by dinoflagellate planktons of the genus Dinophysis.
In 2009, a new species of Gram-negative bacteria called Maritalea myrionectae was discovered from a cell culture of M. rubrum.
Description
M. rubrum is a free-living marine ciliate. It is reddish in colour and forms a dark-red mass during blooms. Its body is almost spherical, looking like a miniature sunflower with its radiating hair-like cilia on its body surface. It measures up to 100 μm in length and 75 μm in width. The body is superficially divided into two lobes due to the formation of a constriction at the centre. The constriction gives rise to a larger anterior lobe and a smaller posterior lobe. The cilia arise from the constriction. Using the cilia it can jump about 10-20 times its body length in one movement. Its nucleus is prominently situated at the centre, and is surrounded by organelles mostly derived from algae. For example, its cytoplasm contains numerous plastids, mitochondria and other nuclei. These organelles are properly separated such that the mitochondria are fully enclosed in a vacuole membrane and two endoplasmic reticulum membranes of the ciliate. This indicates that the ciliate is primarily a heterotroph, but after acquiring algal plastids, it transforms into an autotroph.
The endosymbiont
Genetic analysis showed that in the American coastal areas, the primary food of M. rubrum is the algae most closely related to the free-living Geminigera cryophila. But in Japanese coasts, the major algal species is Teleaulax amphioxeia. When these plastid-containing algae are ingested by the ciliate, they are not digested. The plastids remain functional and provide nutrition to the ciliate by photosynthesis. In order for the plastids to be normally active, they still require enzymes, which are synthesised by the sequestered algal nuclei. The single nucleus can survive and remain genetically active up to 30 days in the cytoplasm of the ciliate. As the retention time of the prey nuclei is short, an average M. rubrum cell may contain eight algal plastids per single prey nucleus and the nuclei need to be replaced by continuous feeding on fresh algae. Thus, the algal organelles are not permanently integrated.
References
External links
Myrionecta rubra at Phytopedia
World Register of Marine Species
Mesodinium
Endosymbiotic events
Protists described in 1908
Marine biology
Ciliate species | Mesodinium rubrum | [
"Biology"
] | 845 | [
"Endosymbiotic events",
"Symbiosis",
"Marine biology"
] |
43,635,021 | https://en.wikipedia.org/wiki/Lectican | Lecticans, also known as hyalectans, are a family of proteoglycans (a type protein that is attached to chains of negatively charged polysaccharides) that are components of the extracellular matrix. There are four members of the lectican family: aggrecan, brevican, neurocan, and versican. Lecticans interact with hyaluronic acid and tenascin-R to form a ternary complex.
Tissue distribution
Aggrecan is a major component of extracellular matrix in cartilage whereas versican is widely expressed in a number of connective tissues including those in vascular smooth muscle, skin epithelial cells, and the cells of central and peripheral nervous system. The expression of neurocan and brevican is largely restricted to neural tissues.
Structure
All four lecticans contain an N-terminal globular domain (G1 domain) that in turn contains an immunoglobulin V-set domain and a Link domain that binds hyaluronic acid; a long extended central domain (CS) that is modified with covalently attached sulfated glycosaminoglycan chains; and a C-terminal globular domain (G3 domain) consisting of one or more EGF repeats, a C-type lectin domain and a CRP-like domain. Aggrecan has in addition a globular domain (G2 domain) that is situated between the G1 and CS domains.
See also
Hyaladherin
References
Protein families | Lectican | [
"Biology"
] | 323 | [
"Protein families",
"Protein classification"
] |
43,635,566 | https://en.wikipedia.org/wiki/Loss%20free%20resistor | A loss free resistor (LFR) is a resistor that does not lose energy. The first implementation was due to Singer and it has been implemented in various settings.
Overview
Many power processing systems can be improved by the application of resistive elements. Resistors may be applied for waveshaping, damping of oscillatory waveforms, stabilization of unstable systems, and power flow balancing. The losses involved in the application of conventional resistors may be eliminated by the synthesis of artificial, loss-free resistive elements which replace the conventional ones. The conventional resistor converts the electrical energy absorbed at its terminals into heat; however, it has been found that the creation of a resistive characteristic is not necessarily accompanied by such energy conversion. It is possible to synthesize a Loss-Free Resistor (LFR) by the combination of a switched mode converter and a suitable control circuit. The LFR is a two-port element that has a resistive i-v curve at the input terminals. The power absorbed at the input is transferred to the source that powers the total system, so in principle no losses occur.
Basic LFR realization
The LFR realization is based on the control of a two-port element that has a time-variable transformer (TVT) or gyrator matrix. The realization of the controlled, time-variable transformer can be achieved by switched-mode circuits. Realization of a controlled gyrator can be obtained by the same types of circuits operated at current mode control. The input/output parameters of the TVT are given as follows: v_in = k·v_out and i_out = k·i_in,
where k is the voltage transfer ratio of the TVT. In this case, the required resistive characteristic is created at the input terminal (a-b) of the TVT. The output of the TVT is connected to the source U, which powers the total circuit. The voltage at the input is given by v_ab = k·U.
A conventional linear resistor R connected to the terminals (a-b) implies the following voltage/current relation: v_ab = R·i_ab.
So, by controlling the voltage transfer ratio of the switching converter (which realizes the TVT) such that the above equation is obeyed, a resistive characteristic is determined at terminals (a-b). In this case, the voltage transfer ratio k(R) is given by k(R) = R·i_ab / U,
where R is the resistive value of the synthesized LFR. In the case of realization by a controlled gyrator, the input/output parameters are given by i_in = g·v_out and i_out = g·v_in,
where g is the gyration conductance. The resistive characteristic is obtained by controlling the gyration conductance such that the following equation is obeyed: g = v_ab / (R·U).
By applying a switched-mode converter composed of loss-free elements (in principle only), the power absorbed at terminals (a-b) is transferred to the source U, so in principle the losses are eliminated.
The LFR is materialized by the combination of a controlled TVT or TVG and a signal-processing circuit (SPC), which controls the coupling network such that the equations above are obeyed. Methods of loss reduction by transferring energy to the source that powers the circuit are well known; however, these methods are usually applied to recover the energy trapped in storage elements. In those circuits, there is no continuous control of the coupling networks that transfer the recovered energy to the source. In the LFR, by contrast, the required resistive characteristic is obtained by the continuous control of the loss-free, storage-less two-port.
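A minimal numerical sketch of the resulting control law, assuming the TVT relations reconstructed above (all names and values are illustrative):

    def tvt_ratio(i_ab, R, U):
        """Control law k = R*i_ab/U: with v_ab = k*U this enforces
        v_ab = R*i_ab, i.e. a resistive characteristic at the input."""
        return R * i_ab / U

    R, U = 10.0, 48.0  # emulate a 10 ohm LFR against a 48 V source
    for i_ab in (0.5, 1.0, 2.0):
        k = tvt_ratio(i_ab, R, U)
        v_ab = k * U             # voltage created at the (a-b) input port
        p_in = v_ab * i_ab       # power absorbed at the input ...
        p_back = U * k * i_ab    # ... equals the power returned to the source
        print(f"i={i_ab:.1f} A, v={v_ab:.1f} V, P_in={p_in:.1f} W, P_back={p_back:.1f} W")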
Properties of LFR
The LFR is a two-port element that has the following characteristics:
an equivalent resistive characteristic R at the input terminals, and
a power source P at the output terminals.
The value of P is determined by the power consumed by the equivalent resistor R. This power is supplied (by the power source P) to the bus U that powers the total system. The TVT (and TVG) can be realized by a family of switched-mode circuits. The losses which practically occur in these circuits can be modeled by series and parallel resistors (rs and rp, respectively). Thus, the total circuit can be modeled by a cascade combination of those resistors and the TVT (or TVG).
References
Resistive components | Loss free resistor | [
"Physics"
] | 861 | [
"Resistive components",
"Physical quantities",
"Electrical resistance and conductance"
] |
43,637,988 | https://en.wikipedia.org/wiki/Lines%20of%20non-extension | In the field of biomechanics, the lines of non-extension are notional lines running across the human body along which body movement causes neither stretching or contraction. Discovered by Arthur Iberall in work beginning in the 1940s, as part of research into space suit design, they have been further developed by Dava Newman in the development of the Space Activity Suit.
They were originally mapped by Iberall by drawing a series of circles over a portion of the body and then watching their deformations as the wearer walked around or performed various tasks. The circles deform into ellipses as the skin stretches over the moving musculature, and these deformations were recorded. After a huge number of such measurements the data is then examined to find all of the possible deformations of the circles, and more importantly, the non-moving points on them where the original circle and the deformed ellipse intersect (at four points per circle). By mapping these points over the entire body, a series of lines are produced.
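A toy calculation of such non-moving points, assuming an idealized area-preserving stretch of the skin (real strain fields are measured rather than prescribed), might look as follows in Python:

    import numpy as np

    # A drawn circle x^2 + y^2 = 1 deforming into an area-preserving ellipse
    # (x/a)^2 + (y/b)^2 = 1 with b = 1/a. The four points where the circle
    # and the ellipse intersect are the candidate points of non-extension.
    def non_moving_points(a):
        b = 1.0 / a
        x2 = a**2 * (b**2 - 1.0) / (b**2 - a**2)  # from substituting y^2 = 1 - x^2
        x, y = np.sqrt(x2), np.sqrt(1.0 - x2)
        return [(sx * x, sy * y) for sx in (1, -1) for sy in (1, -1)]

    print(non_moving_points(1.2))  # four symmetric intersection points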
These lines may then be used to direct the placement of tension elements in a spacesuit to enable constant suit pressure regardless of the motion of the body.
References
Anatomy
Biomechanics
Spacesuits | Lines of non-extension | [
"Physics",
"Astronomy",
"Biology"
] | 252 | [
"Biomechanics",
"Outer space",
"Astronomy stubs",
"Mechanics",
"Outer space stubs",
"Anatomy"
] |
43,638,272 | https://en.wikipedia.org/wiki/Chikungunya%20vaccine | A Chikungunya vaccine is a vaccine intended to provide acquired immunity against the chikungunya virus.
The most commonly reported side effects include headache, fatigue, muscle pain, joint pain, fever, nausea and tenderness at the injection site.
The first chikungunya vaccine was approved for medical use in the United States in November 2023.
Medical uses
The chikungunya vaccine is indicated for the prevention of disease caused by chikungunya virus in individuals 18 years of age and older who are at high risk of exposure to the chikungunya virus.
History
The safety of the chikungunya vaccine was evaluated in two clinical studies conducted in North America in which about 3,500 participants 18 years of age and older received a dose of the vaccine with one study including about 1,000 participants who received a placebo. The effectiveness of the chikungunya vaccine is based on immune response data from a clinical study conducted in the United States in individuals 18 years of age and older. In this study, the immune response of 266 participants who received the vaccine was compared to the immune response of 96 participants who received placebo. The level of antibody evaluated in study participants was based on a level shown to be protective in non-human primates that had received blood from people who had been vaccinated. Almost all vaccine study participants achieved this antibody level.
The US Food and Drug Administration (FDA) granted the application for the chikungunya vaccine fast track, breakthrough therapy, and priority review designations. The FDA granted approval of Ixchiq to Valneva Austria GmbH.
Society and culture
Legal status
In May 2024, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Ixchiq, intended for the prevention of chikungunya disease in adults. The applicant for this medicinal product is Valneva Austria GmbH. Ixchiq was reviewed under EMA's accelerated assessment program. It contains the live attenuated chikungunya virus (CHIKV) Δ5nsP3 strain of the ECSA/IOL genotype. Ixchiq was approved for medical use in the European Union in June 2024.
Research
A phase II vaccine trial used a live, attenuated virus; 98% of those tested had developed viral resistance after 28 days, and 85% still showed resistance after one year. However, 8% of people reported transient joint pain, and attenuation was found to be due to only two mutations in the E2 glycoprotein. Alternative vaccine strategies have been developed, and show efficacy in mouse models.
In August 2014, researchers at the National Institute of Allergy and Infectious Diseases in the USA tested an experimental vaccine using virus-like particles (VLPs) instead of attenuated virus. All of the 25 people participating in this phase I trial developed strong immune responses.
As of 2015, a phase II trial was planned, using 400 adults aged 18 to 60, to take place at six locations in the Caribbean. In 2021, two vaccine manufacturers, one in France, the other in the United States, reported successful completion of phase II clinical trials.
References
Vaccines | Chikungunya vaccine | [
"Biology"
] | 676 | [
"Vaccination",
"Vaccines"
] |
43,638,795 | https://en.wikipedia.org/wiki/Gas%20blending | Gas blending is the process of mixing gases for a specific purpose where the composition of the resulting mixture is defined, and therefore, controlled.
A wide range of applications include scientific and industrial processes, food production and storage and breathing gases.
Gas mixtures are usually specified in terms of molar gas fraction (which is closely approximated by volumetric gas fraction for many permanent gases): by percentage, parts per thousand or parts per million. Volumetric gas fraction converts trivially to partial pressure ratio, following Dalton's law of partial pressures. Partial pressure blending at constant temperature is computationally simple, and pressure measurement is relatively inexpensive, but maintaining constant temperature during pressure changes requires significant delays for temperature equalization. Blending by mass fraction is unaffected by temperature variation during the process, but requires accurate measurement of mass or weight, and calculation of constituent masses from the specified molar ratio. Both partial pressure and mass fraction blending are used in practice.
Applications
Shielding gases for welding
Shielding gases are inert or semi-inert gases used in gas metal arc welding and gas tungsten arc welding to protect the weld area from oxygen and water vapour, which can reduce the quality of the weld or make the welding more difficult.
Gas metal arc welding (GMAW), or metal inert gas (MIG) welding, is a process that uses a continuous wire feed as a consumable electrode and an inert or semi-inert gas mixture to protect the weld from contamination.
Gas tungsten arc welding (GTAW), or tungsten inert gas (TIG) welding, is a manual welding process that uses a nonconsumable tungsten electrode, an inert or semi-inert gas mixture, and a separate filler material.
Modified atmosphere packaging in the food industry
Modified atmosphere packaging preserves fresh produce to improve delivered quality of the product and extend its life. The gas composition used to pack food products depends on the product. A high oxygen content helps to retain the red colour of meat, while low oxygen reduces mould growth in bread and vegetables.
Gas mixtures for brewing
Sparging: An inert gas such as nitrogen is bubbled through the wine, which removes the dissolved oxygen. Carbon dioxide is also removed and to ensure that an appropriate amount of carbon dioxide remains, a mixture of nitrogen and carbon dioxide may be used for the sparging gas.
Purging and blanketing: The removal of oxygen from the headspace above the wine in a container by flushing with a similar gas mixture to that used for sparging is called purging, and if it is left there it is called blanketing or inerting.
Breathing gas mixtures for diving
A breathing gas is a mixture of gaseous chemical elements and compounds used for respiration. The essential component for any breathing gas is a partial pressure of oxygen of between roughly 0.16 and 1.60 bar at the ambient pressure. The oxygen is usually the only metabolically active component unless the gas is an anaesthetic mixture. Some of the oxygen in the breathing gas is consumed by the metabolic processes, and the inert components are unchanged, and serve mainly to dilute the oxygen to an appropriate concentration, and are therefore also known as diluent gases.
Scuba diving
Gas blending for scuba diving is the filling of diving cylinders with non-air breathing gases such as nitrox, trimix and heliox. Use of these gases is generally intended to improve overall safety of the planned dive, by reducing the risk of decompression sickness and/or nitrogen narcosis, and may improve ease of breathing.
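For example, the limiting depth for a given oxygen fraction follows from the allowed oxygen partial pressure. A minimal sketch, assuming the common approximation that 10 m of seawater adds about 1 bar:

    def max_operating_depth(f_o2, ppo2_max=1.4):
        """Depth in metres of seawater at which the mix reaches the ppO2
        limit, using the approximation 10 m of seawater ~ 1 bar."""
        return (ppo2_max / f_o2 - 1.0) * 10.0

    print(round(max_operating_depth(0.32), 1))  # ~33.8 m for 32% nitrox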
Surface supplied and saturation diving
Gas blending for surface supplied and saturation diving may include the filling of bulk storage cylinders and bailout cylinders with breathing gases, but it also involves the mixing of breathing gases at lower pressure which are supplied directly to the diver or to the hyperbaric life-support system. Part of the operation of the life-support system is the replenishment of oxygen used by the occupants, and removal of the carbon dioxide waste product by the gas conditioning unit. This entails monitoring of the composition of the chamber gas and periodic addition of oxygen to the chamber gas at the internal pressure of the chamber.
The gas mixing unit is part of the life support equipment of a saturation system, along with other components which may include bulk gas storage, compressors, a helium recovery unit, bell and diver hot water supply, a gas conditioning unit and an emergency power supply.
Medical gas mixtures
The anesthetic machine is used to blend breathing gas for patients under anesthesia during surgery. The gas mixing and delivery system lets the anesthetist control oxygen fraction, nitrous oxide concentration and the concentration of volatile anesthetic agents.
The machine is usually supplied with oxygen (O2) and nitrous oxide (N2O) from low pressure lines and high pressure reserve cylinders, and the metered gas is mixed at ambient pressure, after which additional anesthetic agents may be added by a vaporizer, and the gas may be humidified. Air is used as a diluent to decrease oxygen concentration. In special cases other gases may also be added to the mixture. These may include carbon dioxide (CO2), used to stimulate respiration, and helium (He) to reduce resistance to flow or to enhance heat transfer.
Gas mixing systems may be mechanical, using conventional rotameter banks, or electronic, using proportional solenoids or pulsed injectors, and control may be manual or automatic.
Chemical production processes
Providing reactive gaseous materials for chemical production processes in the required ratio
Controlled atmosphere manufacture and storage
Protective gas mixtures may be used to exclude air or other gases from the surface of sensitive materials during processing.
Examples include melting of reactive metals such as magnesium, and heat treatment of steels.
Customized gas mixtures for analytical applications
Calibration gases:
Span gases are used for testing and calibrating gas detection equipment by exposing the sensor to a known concentration of a contaminant. The gases are used as a reference point to ensure correct readings after calibration and have very accurate composition, with a content of the gas to be detected close to the set value for the detector.
Zero gas is normally a gas free of the component to be measured, and as similar as practicable to the composition of the gas to be monitored, used to calibrate the zero point of the sensor.
Calibration gas mixtures are generally produced in batches by gravimetric or volumetric methods.
The gravimetric method uses sensitive and accurately calibrated scales to weigh the amounts of gases added into the cylinder. Precise measurement is required as inaccuracy or impurities can result in incorrect calibration. The container for calibration gas must be as close to perfectly clean as practicable. The cylinders may be cleaned by purging with high-purity nitrogen, then evacuated. For particularly critical mixtures, the cylinder may be heated while being evacuated to facilitate removal of any impurities adhering to the walls.
After filling, the gas mixture must be thoroughly mixed to ensure that all components are evenly distributed throughout the container to prevent possible variations on composition within the container. This is commonly done by rolling the container horizontally for 2 to 4 hours.
Methods
Several methods are available for gas blending. These may be distinguished as batch methods and continuous processes.
Batch methods
Batch gas blending requires the appropriate amounts of the constituent gases to be measured and mixed together until the mixture is homogeneous. The amounts are based on the mole (or molar) fractions, but measured either by volume or by mass. Volume measurement may be done indirectly by partial pressure, as the gases are often sequentially decanted into the same container for mixing, and therefore occupy the same volume. Weight measurement is generally used as a proxy for mass measurement as acceleration can usually be considered constant.
The mole fraction is also called the amount fraction, and is the number of molecules of a constituent divided by the total number of all molecules in the mixture. For example, a 50% oxygen, 50% helium mixture will contain approximately the same number of molecules of oxygen and helium. As both oxygen and helium approximate ideal gases at pressures below 200 bar, each will occupy the same volume at the same pressure and temperature, so they can be measured by volume at the same pressure, then mixed, or by partial pressure when decanted into the same container.
The mass fraction can be calculated from the molar fraction by multiplying the molar fraction by the molecular mass for each constituent, to find a constituent mass, and comparing it to the summed masses of all the constituents. The actual mass of each constituent needed for a mixture is calculated by multiplying the mass fraction by the desired mass of the mixture.
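A minimal Python sketch of this conversion (approximate molar masses, illustrative 50/50 heliox target):

    # Molar masses in g/mol (approximate)
    MOLAR_MASS = {"O2": 32.00, "He": 4.00, "N2": 28.01}

    def constituent_masses(mole_fractions, total_mass):
        """Convert mole fractions to constituent masses for a batch blend."""
        # mass fraction_i = x_i * M_i / sum_j (x_j * M_j)
        m_bar = sum(x * MOLAR_MASS[gas] for gas, x in mole_fractions.items())
        return {gas: total_mass * x * MOLAR_MASS[gas] / m_bar
                for gas, x in mole_fractions.items()}

    # 50/50 heliox: equal molecule counts, very unequal masses.
    print(constituent_masses({"O2": 0.5, "He": 0.5}, total_mass=1.0))
    # -> roughly {'O2': 0.889, 'He': 0.111}: the oxygen carries ~8x the mass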
Partial pressure blending
Also known as volumetric blending. This must be done at constant temperature for best accuracy, though it is possible to compensate for temperature changes; the accuracy of the compensation is limited by the accuracy of the temperatures measured before and after each gas is added to the mixture.
Partial pressure blending is commonly used for breathing gases for diving. The accuracy required for this application can be achieved by using a pressure gauge which reads accurately to 0.5 bar, and allowing the temperature to equilibrate after each gas is added.
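A minimal sketch of the standard top-up arithmetic, assuming ideal-gas behaviour, constant temperature and an empty cylinder (names and values are illustrative):

    def o2_top_up(p_final, f_o2_target, f_o2_air=0.209):
        """Partial pressure of pure O2 to add to an empty cylinder before
        topping up with air, so the full cylinder holds the target mix."""
        return p_final * (f_o2_target - f_o2_air) / (1.0 - f_o2_air)

    p_o2 = o2_top_up(p_final=200.0, f_o2_target=0.32)
    print(f"Add {p_o2:.1f} bar O2, then air to 200 bar")  # ~28.1 bar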
Mass fraction blending
Also known as gravimetric blending. This is relatively unaffected by temperature, and accuracy depends on the accuracy of mass measurement of the constituents.
Mass fraction blending is used where great accuracy of the mixture is critical, such as in calibration gases. The method is not suited to moving platforms where the accelerations can cause inaccurate measurement, and therefore is unsuitable for mixing diving gases on vessels.
Continuous processes
Additive
Constant flow blending – a controlled flow of the constituent gases is mixed to form the product. Blending may occur at ambient pressure or at a pressure setting above ambient but lower than supply gas pressures.
Constant mass flow supply: Precision mass flow controllers are used to control the flow rate of each gas for blending. Mass flow meters may be installed on the outputs of the mass flow controllers to monitor the output. The gases may be passed through a static mixer to ensure homogeneous output.
Continuous gas blending is used for some surface supplied diving applications, and for many chemical processes using reactive gas mixtures, particularly where there may be a need to alter the mixture during the operation or process.
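A minimal sketch of how per-gas flow setpoints follow from the target composition (illustrative units and names):

    def flow_setpoints(total_flow, mole_fractions):
        """Per-gas molar flow setpoints for constant-flow blending.
        Feeding these to mass flow controllers (after converting moles
        to mass with each gas's molar mass) yields the target mixture."""
        return {gas: total_flow * x for gas, x in mole_fractions.items()}

    print(flow_setpoints(10.0, {"O2": 0.32, "N2": 0.68}))  # units: e.g. slpm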
Subtractive
These processes start with a mixture of gases, usually air, and reduce the concentration of one or more of the constituents. These processes can be used for the production of Nitrox for scuba diving and deoxygenated air for blanketing purposes.
Pressure swing adsorption – Selective adsorption of gas on a medium which is reversible and proportional to pressure. Gas is loaded onto the medium during the high pressure phase and is released during the low pressure phase.
Membrane gas separation – Gas is forced through a semi-permeable membrane by a pressure difference. Some of the constituent gases pass through the membrane more easily than the others, and the output from the low pressure side is enriched with the gases which pass through more easily. Gases which are slower to pass through the membrane accumulate on the high pressure side and are continuously discharged to retain a steady concentration. The process may be repeated in several stages to increase concentrations.
Gas analysis
Gas mixtures must generally be analysed either in process or after blending for quality control. This is particularly important for breathing gas mixtures where errors can affect the health and safety of the end user.
Oxygen content is relatively simple to monitor using electro-galvanic cells and these are routinely used in the underwater diving industry for this purpose, though other methods may be more accurate and reliable.
References
See also
Gas blending for scuba diving
Industrial processes
Industrial gases | Gas blending | [
"Chemistry"
] | 2,382 | [
"Chemical process engineering",
"Industrial gases"
] |
46,801,868 | https://en.wikipedia.org/wiki/Modimelanotide | Modimelanotide (INN) (code names AP-214, ABT-719, ZP-1480) is a melanocortinergic peptide drug derived from α-melanocyte-stimulating hormone (α-MSH) which was under development by, at different times, Action Pharma, Abbott Laboratories, AbbVie, and Zealand for the treatment of acute kidney injury. It acts as a non-selective melanocortin receptor agonist, with IC50 values of 2.9 nM, 1.9 nM, 3.7 nM, and 110 nM at the MC1, MC3, MC4, and MC5 receptors. Modimelanotide failed clinical trials for acute kidney injury despite showing efficacy in animal models, and development was not further pursued.
See also
Afamelanotide
BMS-470,539
Bremelanotide
Melanotan II
PF-00446687
Setmelanotide
References
External links
Modimelanotide - AdisInsight
ZP1480 (ABT-719) Publications - Zealand Pharma
Melanocortin receptor agonists
Peptides
Abandoned drugs | Modimelanotide | [
"Chemistry"
] | 240 | [
"Biomolecules by chemical classification",
"Drug safety",
"Molecular biology",
"Peptides",
"Abandoned drugs"
] |
46,801,914 | https://en.wikipedia.org/wiki/%CE%92-Melanocyte-stimulating%20hormone | β-Melanocyte-stimulating hormone (β-MSH) is an endogenous peptide hormone and neuropeptide. It is a melanocortin, specifically, one of the three types of melanocyte-stimulating hormone (MSH), and is produced from proopiomelanocortin (POMC). It is an agonist of the MC1, MC3, MC4, and MC5 receptors.
β-MSH is also known to decrease food intake in animals such as rats and chickens, owing to the effect of proopiomelanocortin (POMC). In one experiment, chicks treated with β-MSH responded with a decrease in food and water intake, showing that β-MSH causes anorexigenic effects in chicks.
See also
α-Melanocyte-stimulating hormone
γ-Melanocyte-stimulating hormone
Adrenocorticotropic hormone
References
Human hormones
Melanocortin receptor agonists
Peptide hormones | Β-Melanocyte-stimulating hormone | [
"Chemistry",
"Biology"
] | 228 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry"
] |
46,801,915 | https://en.wikipedia.org/wiki/%CE%93-Melanocyte-stimulating%20hormone | γ-Melanocyte-stimulating hormone (γ-MSH) is an endogenous peptide hormone and neuropeptide. It is a melanocortin, specifically, one of the three types of melanocyte-stimulating hormone (MSH), and is produced from proopiomelanocortin (POMC). It is an agonist of the MC1, MC3, MC4, and MC5 receptors. It exists in three forms, γ1-MSH, γ2-MSH, and γ3-MSH.
γ-MSH regulates cardiovascular function. Its effects are mediated through a central neural pathway acting on the kidney rather than through direct modulation of tubular sodium transport. γ-MSH activates MC3R in renal tubular cells, limiting sodium absorption and thereby regulating sodium balance and blood pressure. If MC3R is absent, resistance to γ-MSH develops, which results in hypertension on a high-sodium diet (HSD).
See also
α-Melanocyte-stimulating hormone
β-Melanocyte-stimulating hormone
Adrenocorticotropic hormone
References
Human hormones
Melanocortin receptor agonists
Peptide hormones | Γ-Melanocyte-stimulating hormone | [
"Chemistry",
"Biology"
] | 260 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry"
] |
57,073,704 | https://en.wikipedia.org/wiki/Rhenium%20diselenide | Rhenium diselenide is an inorganic compound with the formula ReSe2. It has a layered structure where atoms are strongly bonded within each layer. The layers are held together by weak Van der Waals bonds, and can be easily peeled off from the bulk material.
Synthesis
Rhenium diselenide with a thickness as small as a triple atomic layer can be produced by chemical vapor deposition at ambient pressure. A mixture of Ar and hydrogen gases is passed through a tube whose ends are kept at different temperatures. The substrate and ReO3 powder are placed at the hot end, which is heated to 750 °C, and selenium powder is located at the cold end, which is kept at 250 °C.
2 ReO3 + 7 Se → 2 ReSe2 + 3 SeO2
Properties
As most other dichalcogenides of transition metals, rhenium diselenide has a layered structure where atoms are strongly bonded within each layer and the layers are held together by weak Van der Waals bonds. However, while most other layered dichalcogenides have a high (hexagonal) symmetry, ReSe2 has a very low triclinic symmetry, and this symmetry does not change from the bulk to monolayers.
References
Rhenium(IV) compounds
Selenides
Transition metal dichalcogenides
Monolayers | Rhenium diselenide | [
"Physics"
] | 277 | [
"Monolayers",
"Atoms",
"Matter"
] |
57,073,854 | https://en.wikipedia.org/wiki/Rhenium%20disulfide | Rhenium disulfide is an inorganic compound of rhenium and sulfur with the formula ReS2. It has a layered structure where atoms are strongly bonded within each layer. The layers are held together by weak Van der Waals bonds, and can be easily peeled off from the bulk material.
Production
ReS2 is found in nature as the mineral rheniite. It can be synthesized from the reaction between rhenium and sulfur at 1000 °C, or the decomposition of rhenium(VII) sulfide at 1100 °C:
Re + 2 S → ReS2
Re2S7 → 2 ReS2 + 3 S
Nanostructured ReS2 can usually be achieved through mechanical exfoliation, chemical vapor deposition (CVD), and chemical and liquid exfoliations. Larger crystals can be grown with the assistance of liquid carbonate flux at high pressure. It is widely used in electronic and optoelectronic device, energy storage, photocatalytic and electrocatalytic reactions.
Properties
It is a two-dimensional (2D) group VII transition metal dichalcogenide (TMD). ReS2 was isolated down to monolayers, which are only one unit cell in thickness, for the first time in 2014. These monolayers have shown layer-independent electrical, optical, and vibrational properties much different from those of other TMDs.
Structure
Bulk ReS2 has a layered structure and a platelet-like habit. Different crystal structures were proposed for ReS2 based on single-crystal X-ray diffraction studies. While all authors agree that the lattice is triclinic, the reported cell parameters and atomic arrangements slightly differ. The earliest work describes ReS2 in a triclinic unit cell (sp. gr. P-1, a = 0.6455 nm, b = 0.6362 nm, c = 0.6401 nm, α = 105.04°, β = 91.60°, γ = 118.97°) as a distorted variant of the CdCl2 prototype (1T structure, trigonal space group R-3m). In comparison with the ideal octahedral coordination of the metal atoms in CdCl2, the Re atoms in ReS2 are displaced from the centers of the surrounding S6 octahedra and form Re4 clusters that are linked to chains in the b direction. A later study proposed a more accurate description of the crystal structure. It reports a different triclinic cell (sp. gr. P-1, a = 0.6352 nm, b = 0.6446 nm, c = 1.2779 nm, α = 91.51°, β = 105.17°, γ = 118.97°) with the doubled c parameter and swapped a and b, α and β. There are two layers in this unit cell, related by symmetry centers, and the chains of clusters run along the a axis. Each layer forms parallelogram-shaped connected clusters with Re-Re distances of ca. 0.27-0.28 nm in the cluster, and ca. 0.29 nm between clusters. One more structure description of ReS2 was published in yet another triclinic cell (sp. gr. P-1, a = 0.6417 nm, b = 0.6510 nm, c = 0.6461 nm, α = 121.10°, β = 88.38°, γ = 106.47°) where only one layer is present and the centers of symmetry are in the Re layer. The current consensus is that the latter work might have overlooked the doubling of the c parameter captured in the earlier study.
Natural Occurrence
Rhenium disulfide is known in nature as the very rare mineral rheniite.
References
Rhenium(IV) compounds
Disulfides
Transition metal dichalcogenides
Monolayers | Rhenium disulfide | [
"Physics"
] | 786 | [
"Monolayers",
"Atoms",
"Matter"
] |
57,074,205 | https://en.wikipedia.org/wiki/SU%282%29%20color%20superconductivity | Several hundred metals, compounds, alloys and ceramics possess the property of superconductivity at low temperatures. The SU(2) color quark matter adjoins the list of superconducting systems. Although it is a mathematical abstraction, its properties are believed to be closely related to the SU(3) color quark matter, which exists in nature when ordinary matter is compressed at supranuclear densities above ~ .
Superconductors in lab
Superconducting materials are characterized by the loss of resistance and two parameters: a critical temperature Tc and a critical magnetic field that brings the superconductor to its normal state. In 1911, H. Kamerlingh Onnes discovered the superconductivity of mercury at a temperature below 4 K. Later, other substances with superconductivity at temperatures up to 30 K were found. Superconductors prevent the penetration of the external magnetic field into the sample when the magnetic field strength is less than the critical value. This effect was called the Meissner effect. High-temperature superconductivity was discovered in the 1980s. Of the known compounds, the highest critical temperature belongs to HgBa2Ca2Cu3O8+x.
Low-temperature superconductivity has found a theoretical explanation in the model of John Bardeen, Leon Cooper, and John Robert Schrieffer (BCS theory).
The physical basis of the model is the phenomenon of Cooper pairing of electrons. Since a pair of electrons carries an integer spin, the correlated states of the electrons can form a Bose–Einstein condensate. An equivalent formalism was developed by Nikolay Bogoliubov
and John George Valatin.
Cooper pairing of nucleons takes place in ordinary nuclei. The effect manifests itself in the Bethe–Weizsacker mass formula, the last pairing term of which describes the correlation energy of two nucleons. Because of the pairing, the binding energy of even–even nuclei systematically exceeds the binding energy of odd–even and odd–odd nuclei.
Superfluidity in neutron stars
The superfluid phase of neutron matter exists in neutron stars. The superfluidity is described by the BCS model with a realistic nucleon-nucleon interaction potential. By increasing the density of nuclear matter above the saturation density, quark matter is formed. It is expected that dense quark matter at low temperatures is a color superconductor.
In the case of the SU(3) color group, a Bose–Einstein condensate of the quark Cooper pairs carries an open color. To meet the requirement of confinement, a Bose–Einstein condensate of colorless 6-quark states is considered, or the projected BCS theory is used.
Superconductivity with dense two-color QCD
The BCS formalism is applicable without modifications to the description of quark matter with color group SU(2), where Cooper pairs are colorless. The Nambu–Jona-Lasinio model predicts the existence of the superconducting phase of SU(2) color quark matter at high densities.
This physical picture is confirmed in the Polyakov–Nambu–Jona-Lasinio model,
and also in lattice QCD models,
in which the properties of cold quark matter can be described based on the first principles of quantum chromodynamics. The possibility of modeling on the lattices of two-color QCD at finite chemical potentials for even numbers of the quark flavors is associated with the positive-definiteness
of the integral measure and the absence of a sign problem.
See also
QCD matter
Quark star
References
Phases of matter
Quantum chromodynamics
Quark matter
Superconductivity
Superfluidity | SU(2) color superconductivity | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 782 | [
"Electrical resistance and conductance",
"Physical phenomena",
"Phase transitions",
"Physical quantities",
"Quark matter",
"Superconductivity",
"Phases of matter",
"Materials science",
"Astrophysics",
"Superfluidity",
"Exotic matter",
"Condensed matter physics",
"Nuclear physics",
"Matter"... |
39,438,424 | https://en.wikipedia.org/wiki/Recalescence | Recalescence is an increase in temperature that occurs while cooling metal when a change in structure with an increase in entropy occurs. The heat responsible for the change in temperature is due to the change in entropy. When a structure transformation occurs the Gibbs free energy of both structures are more or less the same. Therefore, the process will be exothermic. The heat provided is the latent heat.
Recalescence also occurs after supercooling, when the supercooled liquid suddenly crystallizes, forming a solid but releasing heat in the process.
See also
Allotropy
Phase transition
Thermal analysis
References
Metallurgy
Phase transitions
Thermodynamic properties | Recalescence | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 137 | [
"Thermodynamics stubs",
"Chemical process stubs",
"Physical phenomena",
"Phase transitions",
"Thermodynamic properties",
"Physical quantities",
"Materials science stubs",
"Metallurgy",
"Quantity",
"Phases of matter",
"Critical phenomena",
"Materials science",
"Thermodynamics",
"nan",
"St... |
39,444,110 | https://en.wikipedia.org/wiki/Coherent%20effects%20in%20semiconductor%20optics | The interaction of matter with light, i.e., electromagnetic fields, is able to generate a coherent superposition of excited quantum states in the material. Coherent denotes the fact that the material excitations have a well defined phase relation which originates from the phase of the incident electromagnetic wave. Macroscopically, the superposition state of the material results in an optical polarization, i.e., a rapidly oscillating dipole density. The optical polarization is a genuine non-equilibrium quantity that decays to zero when the excited system relaxes to its equilibrium state after the electromagnetic pulse is switched off. Due to this decay which is called dephasing, coherent effects are observable only for a certain temporal duration after pulsed photoexcitation. Various materials such as atoms, molecules, metals, insulators, semiconductors are studied using coherent optical spectroscopy and such experiments and their theoretical analysis has revealed a wealth of insights on the involved matter states and their dynamical evolution.
This article focusses on coherent optical effects in semiconductors and semiconductor nanostructures. After an introduction into the basic principles, the semiconductor Bloch equations (abbreviated as SBEs) which are able to theoretically describe coherent semiconductor optics on the basis of a fully microscopic many-body quantum theory are introduced. Then, a few prominent examples for coherent effects in semiconductor optics are described all of which can be understood theoretically on the basis of the SBEs.
Starting point
Macroscopically, Maxwell's equations show that in the absence of free charges and currents an electromagnetic field interacts with matter via the optical polarization P. The wave equation for the electric field reads
∇²E − (1/c²) ∂²E/∂t² = μ₀ ∂²P/∂t²
and shows that the second derivative with respect to time of P, i.e., ∂²P/∂t², appears as a source term in the wave equation for the electric field E. Thus, for optically thin samples and measurements performed in the far-field, i.e., at distances significantly exceeding the optical wavelength, the emitted electric field resulting from the polarization is proportional to its second time derivative, i.e., E ∝ ∂²P/∂t². Therefore, measuring the dynamics of the emitted field provides direct information on the temporal evolution of the optical material polarization P.
Microscopically, the optical polarization arises from quantum mechanical transitions between different states of the material system. For the case of semiconductors, electromagnetic radiation with optical frequencies is able to move electrons from the valence (v) to the conduction (c) band. The macroscopic polarization is computed by summing over all microscopic transition dipoles via P = (1/V) Σ [d_cv p_cv + c.c.], where d_cv is the dipole matrix element which determines the strength of individual transitions between the states c and v, c.c. denotes the complex conjugate, and V is the appropriately chosen system volume.
If ε_c and ε_v are the energies of the conduction and valence band states, their dynamic quantum mechanical evolution is, according to the Schrödinger equation, given by the phase factors e^(−iε_c t/ħ) and e^(−iε_v t/ħ), respectively.
The superposition state described by |ψ(t)⟩ = a_v e^(−iε_v t/ħ) |v⟩ + a_c e^(−iε_c t/ħ) |c⟩ is evolving in time according to these phase factors.
Assuming that we start at t = 0 with the superposition a_v |v⟩ + a_c |c⟩, we have for the optical polarization
P(t) ∝ d_cv a_v* a_c e^(−i(ε_c − ε_v) t/ħ) + c.c.
Thus, P(t) is given by a summation over the microscopic transition dipoles, which all oscillate with frequencies corresponding to the energy differences between the involved quantum states.
Clearly, the optical polarization is a coherent quantity which is characterized by an amplitude and a phase.
Depending on the phase relationships of the microscopic transition dipoles, one may obtain constructive or destructive interference, in which the microscopic dipoles are in or out of phase, respectively, and temporal interference phenomena like quantum beats, in which the modulus of P varies as a function of time.
Ignoring many-body effects and the coupling to other quasi particles and to reservoirs, the dynamics of photoexcited two-level systems can be described by a set of two equations, the so-called optical Bloch equations.
These equations are named after Felix Bloch who formulated them in order to analyze the dynamics of spin systems in nuclear magnetic resonance.
The two-level Bloch equations read
iħ ∂p/∂t = ε p + Ω I
and
iħ ∂I/∂t = 2 (Ω* p − Ω p*).
Here, ε denotes the energy difference between the two states and I is the inversion, i.e., the difference in the occupations of the upper and the lower states.
The electric field E couples the microscopic polarization p to the product of the Rabi energy Ω = d·E and the inversion I.
In the absence of the driving electric field, i.e., for Ω = 0, the Bloch equation for p describes an oscillation, i.e., p(t) = p(0) e^(−iεt/ħ).
The optical Bloch equations enable a transparent analysis of several nonlinear optical experiments.
They are, however, only well suited for systems with optical transitions between isolated levels in which many-body interactions are of minor importance as is sometimes the case in atoms or small molecules.
In solid state systems, such as semiconductors and semiconductor nanostructures, an adequate description of the many-body Coulomb interaction and the coupling to additional degrees of freedom is essential and thus the optical Bloch equations are not applicable.
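As an illustration, the two-level Bloch equations given above can be integrated numerically. The following Python sketch (ħ = 1, the sign conventions used above, illustrative parameter values) drives the system with a resonant pulse and reproduces the expected full inversion of I:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Two-level optical Bloch equations with hbar = 1 and state
    # y = [Re p, Im p, I]. A resonant pulse (epsilon = 0 in the rotating
    # frame, constant Rabi energy) drives the inversion from -1 to +1.
    eps, omega = 0.0, 1.0

    def bloch(t, y):
        p_re, p_im, inv = y
        dp_re = eps * p_im                   # from i dp/dt = eps*p + Omega*I
        dp_im = -(eps * p_re + omega * inv)
        dinv = 4.0 * omega * p_im            # from i dI/dt = 2(Omega* p - Omega p*)
        return [dp_re, dp_im, dinv]

    t_pi = np.pi / (2.0 * omega)             # duration of a "pi pulse"
    sol = solve_ivp(bloch, (0.0, t_pi), [0.0, 0.0, -1.0], rtol=1e-9, atol=1e-12)
    print(f"I: -1 -> {sol.y[2, -1]:+.3f}")   # ~ +1.000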
The semiconductor Bloch equations (SBEs)
For a realistic description of optical processes in solid materials, it is essential to go beyond the simple picture of the optical Bloch equations and to treat many-body interactions that describe the coupling among the elementary material excitations, e.g., the Coulomb interaction between the electrons, and the coupling to other degrees of freedom, such as lattice vibrations, i.e., the electron-phonon coupling.
Within a semiclassical approach, where the light field is treated as a classical electromagnetic field and the material excitations are described quantum mechanically, all above mentioned effects can be treated microscopically on the basis of a many-body quantum theory.
For semiconductors the resulting system of equations are known as the semiconductor Bloch equations.
For the simplest case of a two-band model of a semiconductor, the SBEs can be written schematically as
iħ ∂p_k/∂t = ε_k p_k + (f_k^c − f_k^v) Ω_k + iħ ∂p_k/∂t|_corr,
ħ ∂f_k^c/∂t = 2 Im(Ω_k* p_k) + ħ ∂f_k^c/∂t|_corr,
ħ ∂f_k^v/∂t = −2 Im(Ω_k* p_k) + ħ ∂f_k^v/∂t|_corr.
Here p_k is the microscopic polarization, f_k^c and f_k^v are the electron occupations in the conduction and valence bands (c and v), respectively, and k denotes the crystal momentum.
As a result of the many-body Coulomb interaction and possibly further interaction processes, the transition energy ε_k and the Rabi energy Ω_k both depend on the state of the excited system, i.e., they are functions of the time-dependent polarizations and occupations p_k, f_k^c, and f_k^v at all crystal momenta k.
Due to this coupling among the excitations for all values of the crystal momentum k, the optical excitations in a semiconductor cannot be described on the level of isolated optical transitions but have to be treated as an interacting many-body quantum system.
A prominent and important result of the Coulomb interaction among the photoexcitations
is the appearance of strongly absorbing discrete excitonic resonances which show up in the absorption spectra of semiconductors spectrally below the fundamental band gap frequency.
Since an exciton consists of a negatively charged conduction band electron and a positively charged valence band hole (i.e., an electron missing in the valence band) which attract each other via the Coulomb interaction, excitons have a hydrogenic series of discrete absorption lines.
Due to the optical selection rules of typical III-V semiconductors such as gallium arsenide (GaAs), only the s-states, i.e., 1s, 2s, etc., can be optically excited and detected (see the article on the Wannier equation).
The many-body Coulomb interaction leads to significant complications since it results in an infinite hierarchy of dynamic equations for the microscopic correlation functions that describe the nonlinear optical response.
The terms given explicitly in the SBEs above arise from a treatment of the Coulomb interaction in the time-dependent Hartree–Fock approximation.
Whereas this level is sufficient to describe excitonic resonances, there are several further effects, e.g., excitation-induced dephasing, contributions from higher-order correlations like excitonic populations and biexcitonic resonances, which require one to treat so-called many-body correlation effects that are by definition beyond the Hartree–Fock level.
These contributions are formally included in the SBEs given above in the correlation terms denoted by ∂/∂t|_corr.
The systematic truncation of the many-body hierarchy and the development and analysis of controlled approximation schemes is an important topic in the microscopic theory of the optical processes in condensed matter systems.
Depending on the particular system and the excitation conditions, several approximation schemes have been developed and applied.
For highly excited systems, it is often sufficient to describe many-body Coulomb correlations using the second order Born approximation.
Such calculations were, in particular, able to successfully describe the spectra of semiconductor lasers, see article on semiconductor laser theory.
In the limit of weak light intensities, signatures of exciton complexes, in particular biexcitons, in the coherent nonlinear response have been analyzed using the dynamics controlled truncation scheme.
These two approaches and several other approximation schemes can be viewed as special cases of the so-called cluster expansion in which the nonlinear optical response is classified by correlation functions which explicitly take into account interactions between a certain maximum number of particles and factorize larger correlation functions into products of lower order ones.
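The bookkeeping behind this classification can be illustrated numerically. In the toy sketch below (Python), artificially correlated Gaussian "observables" stand in for operator expectation values: a two-operator average is split into its factorized singlet part and the correlated remainder that a truncation would either keep or drop. The correlation strength and means are arbitrary assumptions.

```python
# Toy numerical illustration of cluster-expansion bookkeeping: the
# two-operator expectation value <AB> is split into its factorized
# (singlet) part <A><B> and the correlated (doublet) remainder.
# Truncating the hierarchy means neglecting remainders above a chosen order.
import numpy as np

rng = np.random.default_rng(0)
cov = [[1.0, 0.6], [0.6, 1.0]]              # built-in correlation (assumed)
A, B = rng.multivariate_normal([0.3, -0.2], cov, size=200_000).T

AB = np.mean(A * B)                          # full two-particle expectation
singlet = np.mean(A) * np.mean(B)            # Hartree-Fock-like factorization
doublet = AB - singlet                       # genuine two-particle correlation

print(f"<AB> = {AB:.3f} = <A><B> ({singlet:.3f}) + Delta<AB> ({doublet:.3f})")
```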
Selected coherent effects
By nonlinear optical spectroscopy using ultrafast laser pulses with durations on the order of tens to hundreds of femtoseconds, several coherent effects have been observed and interpreted.
Such studies and their proper theoretical analysis have revealed a wealth of information on the nature of the photoexcited quantum states, the coupling among them, and their dynamical evolution on ultrashort time scales. In the following, a few important effects are briefly described.
Quantum beats involving excitons and exciton complexes
Quantum beats are observable in systems in which the total optical polarization is due to a finite number of discrete transition frequencies which are quantum mechanically coupled, e.g., by common ground or excited states.
Assuming for simplicity that all these transitions have the same dipole matrix element, after excitation with a short laser pulse at $t = 0$ the optical polarization of the system evolves as

$P(t) \propto \sum_{i} e^{-i \omega_{i} t},$

where the index $i$ labels the participating transitions.
A finite number of frequencies results in temporal modulations of the squared modulus of the polarization and thus of the intensity of the emitted electromagnetic field with time periods

$T_{ij} = \frac{2\pi}{|\omega_{i} - \omega_{j}|}.$
For the case of just two frequencies the squared modulus of the polarization is proportional to

$|P(t)|^{2} \propto 1 + \cos[(\omega_{1} - \omega_{2})\, t],$
i.e., due to the interference of two contributions with the same amplitude but different frequencies, the polarization varies between a maximum and zero.
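As a worked example of the beat-period formula, the following sketch (Python) converts an assumed 1 meV transition splitting, roughly the order of magnitude of heavy- and light-hole exciton splittings in GaAs quantum wells, into the corresponding beat period, and cross-checks it against the interference formula above.

```python
# Worked example of the beat-period formula T_ij = 2*pi / |w_i - w_j| for an
# assumed 1 meV splitting (illustrative value).
import numpy as np

HBAR_EV_S = 6.582119e-16                    # hbar in eV*s
dE = 1.0e-3                                 # transition splitting in eV (assumed)
T_beat = 2 * np.pi * HBAR_EV_S / dE
print(f"beat period: {T_beat * 1e12:.2f} ps")

# Cross-check against |P(t)|^2 ~ 1 + cos((w1 - w2) t):
t = np.linspace(0.0, 3 * T_beat, 3000)
P2 = 1 + np.cos(dE / HBAR_EV_S * t)
peaks = t[1:-1][(P2[1:-1] > P2[:-2]) & (P2[1:-1] > P2[2:])]
print(f"spacing of |P|^2 maxima: {np.diff(peaks).mean() * 1e12:.2f} ps")
```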
In semiconductors and semiconductor heterostructures, such as quantum wells, nonlinear optical quantum-beat spectroscopy has been widely used to investigate the temporal dynamics of excitonic resonances.
In particular, many pump-probe and four-wave-mixing measurements have explored the consequences of many-body effects, which, depending on the excitation conditions, may lead to, e.g., a coupling among different excitonic resonances via biexcitons and other Coulomb correlation contributions, and to a decay of the coherent dynamics by scattering and dephasing processes.
The theoretical analysis of such experiments in semiconductors requires a treatment on the basis of quantum mechanical many-body theory as is provided by the SBEs with many-body correlations incorporated on an adequate level.
Photon echoes of excitons
In nonlinear optics it is possible to reverse the destructive interference of so-called inhomogeneously broadened systems which contain a distribution of uncoupled subsystems with different resonance frequencies.
For example, consider a four-wave-mixing experiment in which the first short laser pulse excites all transitions at $t = 0$.
As a result of the destructive interference between the different frequencies the overall polarization decays to zero.
A second pulse arriving at $t = \tau$ is able to conjugate the phases of the individual microscopic polarizations, i.e., $p_{j} \to p_{j}^{*}$, of the inhomogeneously broadened system.
The subsequent unperturbed dynamical evolution of the polarizations leads to rephasing such that all polarizations are in phase at $t = 2\tau$, which results in a measurable macroscopic signal.
Thus, this so-called photon echo occurs since all individual polarizations are in phase and add up constructively at $t = 2\tau$.
Since the rephasing is only possible if the polarizations remain coherent, the loss of coherence can be determined by measuring the decay of the photon echo amplitude with increasing time delay.
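The rephasing argument can be reproduced with a minimal numerical sketch: the Python snippet below evolves an ensemble of microscopic polarizations with Gaussian-distributed detunings, conjugates their phases at $t = \tau$, and confirms that the macroscopic polarization vanishes in between and recovers at $t = 2\tau$. All units, the ensemble size, and the detuning spread are arbitrary, illustrative choices; relaxation is omitted.

```python
# Minimal photon-echo sketch: an inhomogeneous ensemble of polarizations with
# Gaussian detunings dephases after pulse 1 (t = 0), is phase-conjugated by
# pulse 2 (t = tau), and rephases into an echo at t = 2*tau.
import numpy as np

rng = np.random.default_rng(1)
delta = rng.normal(0.0, 1.0, 5000)          # inhomogeneous detunings (assumed)
tau = 5.0                                   # arrival time of pulse 2

def macroscopic_P(t):
    """|P(t)| for delta-like pulses; p_j -> p_j* applied at t = tau."""
    if t < tau:
        phases = delta * t                          # free dephasing
    else:
        phases = -delta * tau + delta * (t - tau)   # conjugated, rephasing
    return abs(np.mean(np.exp(1j * phases)))

times = np.linspace(0.0, 3 * tau, 601)
P = np.array([macroscopic_P(t) for t in times])
echo_window = times > tau
t_echo = times[echo_window][P[echo_window].argmax()]
print(f"|P| at t = tau: {macroscopic_P(tau):.4f}")        # fully dephased, ~0
print(f"echo peak at t = {t_echo:.2f} (= 2*tau), |P| = {P[echo_window].max():.3f}")
```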
When photon echo experiments are performed in semiconductors with exciton resonances, it is essential to include many-body effects in the theoretical analysis since they may qualitatively alter the dynamics. For example, numerical solutions of the SBEs have demonstrated that the dynamical reduction of the band gap which originates from the Coulomb interaction among the photoexcited electrons and holes is able to generate a photon echo even for resonant excitation of a single discrete exciton resonance with a pulse of sufficient intensity.
Besides the rather simple effect of inhomogeneous broadening, spatial fluctuations of the energy, i.e., disorder, which in semiconductor nanostructures may, e.g., arise from imperfections of the interfaces between different materials, can also lead to a decay of the photon echo amplitude with increasing time delay. To consistently treat this phenomenon of disorder-induced dephasing, the SBEs need to be solved including biexciton correlations.
As shown in Ref., such a microscopic theoretical approach is able to describe disorder-induced dephasing in good agreement with experimental results.
The excitonic optical Stark effect
In a pump-probe experiment one excites the system with a pump pulse and probes its dynamics with a (weak) test pulse.
With such experiments one can measure the so-called differential absorption $\delta\alpha$, which is defined as the difference between the probe absorption in the presence of the pump, $\alpha_{\mathrm{pump}}$, and the probe absorption without the pump, $\alpha_{0}$.
For resonant pumping of an optical resonance and when the pump precedes the test, the absorption change is usually negative in the vicinity of the resonance frequency.
This effect called bleaching arises from the fact that the excitation of the system with the pump pulse reduces the absorbance of the test pulse.
There may also be positive contributions to $\delta\alpha$ spectrally near the original absorption line due to resonance broadening, and at other spectral positions due to excited-state absorption, i.e., optical transitions to states such as biexcitons which are only possible if the system is in an excited state.
The bleaching and the positive contributions are generally present in both coherent and incoherent situations; in the latter the polarization vanishes but occupations in excited states are present.
For detuned pumping, i.e., when the frequency of the pump field is not identical with the frequency of the material transition, the resonance frequency shifts as a result of the light-matter coupling, an effect known as the optical Stark effect.
The optical Stark effect requires coherence, i.e., a non-vanishing optical polarization induced by the pump pulse, and thus decreases with increasing time delay between the pump and probe pulses and vanishes if the system has returned to its ground state.
As can be shown by solving the optical Bloch equations for a two-level system, the optical Stark effect should shift the resonance frequency to higher values if the pump frequency is smaller than the resonance frequency, and vice versa.
This is also the typical result of experiments performed on excitons in semiconductors.
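A minimal quantitative version of this two-level prediction uses the dressed-state (generalized Rabi) energies. The sketch below, with illustrative detunings and Rabi frequency in arbitrary angular-frequency units, reproduces the blue shift for pumping below the resonance and the red shift above it, together with the perturbative estimate $\Omega^{2}/(2|\Delta|)$; it is a textbook two-level result, not a solution of the SBEs.

```python
# Minimal sketch of the two-level optical Stark shift in the dressed-state
# picture: pumping below the resonance shifts the line up, and vice versa.
# Detunings and Rabi frequency are illustrative, in arbitrary angular units.
import numpy as np

def stark_shift(delta, omega_rabi):
    """Resonance shift for pump detuning delta = w_pump - w_0 (non-perturbative)."""
    magnitude = np.sqrt(delta**2 + omega_rabi**2) - abs(delta)
    return -np.sign(delta) * magnitude      # blue shift for delta < 0

for delta in (-5.0, +5.0):                  # pump below / above the resonance
    s = stark_shift(delta, omega_rabi=1.0)
    print(f"delta = {delta:+.1f}:  shift = {s:+.4f}  "
          f"(perturbative estimate {1.0 / (2 * abs(delta)):.4f})")
```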
The fact that in certain situations such predictions which are based on simple models fail to even qualitatively describe experiments in semiconductors and semiconductor nanostructures has received significant attention.
Such deviations arise because many-body effects typically dominate the optical response of semiconductors, so the SBEs, rather than the optical Bloch equations, must be solved to obtain an adequate understanding.
An important example was presented in Ref. where it was shown that many-body correlations arising from biexcitons are able to reverse the sign of the optical Stark effect. In contrast to the optical Bloch equations, the SBEs including coherent biexcitonic correlations were able to properly describe the experiments performed on semiconductor quantum wells.
Superradiance of excitons
Consider $N$ two-level systems at different positions in space.
Maxwell's equations lead to a coupling among all the optical resonances since the field emitted from a specific resonance interferes with the emitted fields of all other resonances.
As a result, the system is characterized by eigenmodes originating from the radiatively coupled optical resonances.
A spectacular situation arises if $N$ identical two-level systems are regularly arranged with distances that equal an integer multiple of $\lambda$, where $\lambda$ is the optical wavelength.
In this case, the emitted fields of all resonances interfere constructively and the system behaves effectively as a single system with an $N$-times stronger optical polarization.
Since the intensity of the emitted electromagnetic field is proportional to the squared modulus of the polarization, it scales initially as $N^{2}$.
Due to the cooperativity that originates from the coherent coupling of the subsystems, the radiative decay rate is increased by a factor of $N$, i.e., $\Gamma_{N} = N\,\Gamma$, where $\Gamma$ is the radiative decay rate of a single two-level system.
Thus the coherent optical polarization decays $N$-times faster, proportional to $e^{-N \Gamma t}$, than that of an isolated system.
As a result, the time-integrated emitted field intensity scales as $N$, since the initial factor of $N^{2}$ is multiplied by $1/N$, which arises from the time integral over the enhanced radiative decay.
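These scalings can be verified with a short numerical check: the sketch below integrates the emitted intensity $N^{2} e^{-2 N \Gamma t}$ for several $N$ and confirms that the time-integrated value grows only linearly in $N$. The single-emitter rate $\Gamma$ is an arbitrary illustrative value.

```python
# Numeric check of the superradiant scalings: peak intensity ~ N^2, decay rate
# ~ N*Gamma, hence time-integrated intensity ~ N.  Gamma is an arbitrary
# single-emitter radiative rate in inverse time units.
import numpy as np

gamma = 0.1
t = np.linspace(0.0, 400.0, 200_001)
dt = t[1] - t[0]
for N in (1, 2, 4, 8):
    intensity = N**2 * np.exp(-2 * N * gamma * t)   # |P(t)|^2
    integrated = intensity.sum() * dt               # analytic: N / (2*gamma)
    print(f"N = {N}: peak = {N**2:3d}, integrated = {integrated:7.3f} "
          f"(expected {N / (2 * gamma):7.3f})")
```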
This effect of superradiance has been demonstrated by monitoring the decay of the exciton polarization in suitably arranged semiconductor multiple quantum wells.
Due to superradiance introduced by the coherent radiative coupling among the quantum wells, the decay rate increases proportional to the number of quantum wells and is thus significantly more rapid than for a single quantum well.
The theoretical analysis of this phenomenon requires a consistent solution of Maxwell's equations together with the SBEs.
Concluding remarks
The few examples given above represent only a small subset of the many phenomena demonstrating that the coherent optical response of semiconductors and semiconductor nanostructures is strongly influenced by many-body effects.
Other interesting research directions which similarly require an adequate theoretical analysis including many-body interactions are, e.g., phototransport phenomena where optical fields generate and/or probe electronic currents, the combined spectroscopy with optical and terahertz fields, see article terahertz spectroscopy and technology, and the rapidly developing area of semiconductor quantum optics, see article semiconductor quantum optics with dots.
See also
Semiconductor luminescence equations
Semiconductor Bloch equations
Cluster-expansion approach
Semiconductor laser theory
Quantum beats
Spin echo
Stark effect
Superradiance
Further reading
References
Semiconductor materials | Coherent effects in semiconductor optics | [
"Chemistry"
] | 3,794 | [
"Semiconductor materials"
] |
32,378,301 | https://en.wikipedia.org/wiki/Inertial%20number | The Inertial number is a dimensionless quantity which quantifies the significance of dynamic effects on the flow of a granular material. It measures the ratio of inertial forces of grains to imposed forces: a small value corresponds to the quasi-static state, while a high value corresponds to the inertial state or even the "dynamic" state. It is given by:

$I = \frac{\dot{\gamma}\, d}{\sqrt{P / \rho}}$

where $\dot{\gamma}$ is the shear rate, $d$ the average particle diameter, $P$ the pressure and $\rho$ the density.
Generally three regimes are distinguished:
$I \lesssim 10^{-3}$: quasi-static flow
$10^{-3} \lesssim I \lesssim 10^{-1}$: dense flow
$I \gtrsim 10^{-1}$: collisional flow
One model of dense granular flows, the μ(I) rheology, asserts that the coefficient of friction μ of a granular material is a function of the inertial number only.
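For concreteness, the sketch below (Python) evaluates the inertial number for illustrative grain and loading parameters and classifies the result using the regime boundaries quoted above; all numbers are examples, not recommendations.

```python
# Minimal sketch: compute the inertial number I = gamma_dot * d / sqrt(P / rho)
# and classify the flow regime using the thresholds quoted above.
# The grain size, density, pressure and shear rates are illustrative.
import math

def inertial_number(shear_rate, d, pressure, density):
    """I = gamma_dot * d / sqrt(P / rho); all quantities in SI units."""
    return shear_rate * d / math.sqrt(pressure / density)

def regime(I):
    if I < 1e-3:
        return "quasi-static flow"
    return "dense flow" if I < 1e-1 else "collisional flow"

# Example: 1 mm grains of density 2500 kg/m^3 under 1 kPa confining pressure.
for gamma_dot in (0.1, 10.0, 1000.0):       # shear rates in 1/s (assumed)
    I = inertial_number(gamma_dot, d=1e-3, pressure=1e3, density=2500.0)
    print(f"gamma_dot = {gamma_dot:7.1f} 1/s  ->  I = {I:.2e}  ({regime(I)})")
```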
References
Granularity of materials | Inertial number | [
"Physics",
"Chemistry"
] | 162 | [
"Particle technology",
"Materials",
"Granularity of materials",
"Matter"
] |
32,383,927 | https://en.wikipedia.org/wiki/Exidia%20recisa | Exidia recisa is a species of fungus in the family Auriculariaceae. In the UK, it has the recommended English name of amber jelly. Basidiocarps (fruit bodies) are gelatinous, orange-brown, and turbinate (top-shaped). It typically grows on dead attached twigs and branches of willow and is found in Europe and possibly elsewhere, though it has long been confused with the North American Exidia crenata.
Taxonomy
The species was originally found growing on willow in Germany and was described in 1813 by L.P.F. Ditmar as Tremella recisa. It was transferred to the genus Exidia by Fries in 1822. Tremella salicum (the epithet means "of willow") has long been considered a synonym.
Molecular research, based on cladistic analysis of DNA sequences, has shown that Exidia recisa is part of a complex of similar species, including Exidia crenata in North America and Exidia yadongensis in eastern Asia.
The epithet "recisa" means "cut-off", with reference to the shape of the fruit bodies.
Description
Exidia recisa forms orange-brown or amber, gelatinous fruit bodies that are firm and shallowly conical at first, becoming lax and pendulous with age, and around across. The fruit bodies typically grow gregariously, but do not normally coalesce. The upper, spore-bearing surface is smooth and shiny, whilst the undersurface is smooth and matt. The fruit bodies attach to wood at a point, but do not have a stem. The spore print is white.
Microscopic characters
The microscopic characters are typical of the genus Exidia. The basidia are ellipsoid, septate, 8 to 15 by 6 to 10 μm. The spores are allantoid (sausage-shaped), 14 to 15 by 3 to 3.5 μm.
Similar species
In Europe, fruit bodies of Exidia repanda are similarly coloured and microscopically indistinguishable. The fruit bodies are button-shaped, however, never becoming conical and pendulous, and the species typically occurs on birch, never on willow. Fruit bodies of Exidia umbrinella are also similar, but the species only occurs on conifers and is uncommon. Exidia brunneola is also uncommon and occurs on poplar. The widespread E. glandulosa has much darker, blackish brown fruit bodies with sparse warts or small, peg-like projections on their surface.
Habitat and distribution
Exidia recisa is a wood-rotting species, typically found on dead attached twigs and branches. It was originally recorded on willow and most frequently occurs on this substrate, although it has also been reported on alder, and Prunus species. Exidia recisa typically fruits in autumn and winter. It is widely distributed in Europe, but its distribution elsewhere is uncertain because it has only recently been distinguished from morphologically similar species in North America and Asia. Based on DNA sequencing, it is said to be present in North America as well as the more common E. crenata.
References
Auriculariales
Fungi described in 1813
Fungi of Europe
Fungus species | Exidia recisa | [
"Biology"
] | 659 | [
"Fungi",
"Fungus species"
] |
32,385,683 | https://en.wikipedia.org/wiki/Rosick%C3%BDite | Rosickyite is a rare native element mineral that is a polymorph of sulfur. It crystallizes in the monoclinic crystal system and is a high temperature, high density polymorph. It occurs as soft, colorless to pale yellow crystals and efflorescences.
It was first described in 1930 for an occurrence in Havirna, near Letovice, Moravia, Czech Republic. It was named for Vojtĕch Rosický (1880–1942), of Masaryk University, Brno.
Rosickyite occurs in Death Valley within an evaporite layer produced by a microbial community. The otherwise unstable polymorph was produced and stabilized within a cyanobacteria-dominated layer.
References
Native element minerals
Monoclinic minerals
Minerals in space group 13
Sulfur
Polymorphism (materials science) | Rosickýite | [
"Materials_science",
"Engineering"
] | 179 | [
"Polymorphism (materials science)",
"Materials science"
] |
32,388,475 | https://en.wikipedia.org/wiki/Druggability | Druggability is a term used in drug discovery to describe a biological target (such as a protein) that is known to or is predicted to bind with high affinity to a drug. Furthermore, by definition, the binding of the drug to a druggable target must alter the function of the target with a therapeutic benefit to the patient. The concept of druggability is most often restricted to small molecules (low molecular weight organic substances) but also has been extended to include biologic medical products such as therapeutic monoclonal antibodies.
Drug discovery comprises a number of stages that lead from a biological hypothesis to an approved drug. Target identification is typically the starting point of the modern drug discovery process. Candidate targets may be selected based on a variety of experimental criteria. These criteria may include disease linkage (mutations in the protein are known to cause a disease), mechanistic rationale (for example, the protein is part of a regulatory pathway that is involved in the disease process), or genetic screens in model organisms. Disease relevance alone however is insufficient for a protein to become a drug target. In addition, the target must be druggable.
Prediction of druggability
If a drug has already been identified for a target, that target is by definition druggable. If no known drugs bind to a target, then druggability is implied or predicted using different methods that rely on evolutionary relationships, 3D-structural properties or other descriptors.
Precedence-based
A protein is predicted to be "druggable" if it is a member of a protein family for which other members of the family are known to be targeted by drugs (i.e., "guilt" by association). While this is a useful approximation of druggability, this definition has limitations for two main reasons: (1) it highlights only historically successful proteins, ignoring the possibility of a perfectly druggable, but yet undrugged protein family; and (2) assumes that all protein family members are equally druggable.
Structure-based
This relies on the availability of experimentally determined 3D structures or high quality homology models. A number of methods exist for this assessment of druggability, but all of them consist of three main components (a toy sketch of such a pipeline follows the list below):
Identifying cavities or pockets on the structure
Calculating physicochemical and geometric properties of the pocket
Assessing how these properties fit a training set of known druggable targets, typically using machine learning algorithms
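The following toy sketch (Python, requiring scikit-learn) mimics this three-step workflow with hand-made pocket descriptors and labels standing in for real cavity detection and curated training data; every feature name, number, and label here is hypothetical.

```python
# Toy sketch of the three-step structure-based druggability workflow described
# above.  Hand-made descriptors and a generic classifier stand in for real
# cavity detection and curated training sets; all values are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Steps 1-2 (stand-in): each detected pocket is reduced to a descriptor
# vector, e.g. [volume (A^3), depth (A), hydrophobic fraction, polar atoms].
train_pockets = np.array([
    [820.0, 14.2, 0.62, 11],    # druggable-like pockets (assumed labels)
    [950.0, 16.8, 0.58, 14],
    [700.0, 12.9, 0.66,  9],
    [150.0,  4.1, 0.21, 25],    # shallow, polar, undruggable-like pockets
    [210.0,  5.5, 0.30, 22],
    [120.0,  3.2, 0.18, 28],
])
train_labels = np.array([1, 1, 1, 0, 0, 0])   # 1 = druggable, 0 = not

# Step 3: fit a classifier on known examples and score a new pocket.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(train_pockets, train_labels)

new_pocket = np.array([[760.0, 13.5, 0.60, 12]])
print("P(druggable) =", model.predict_proba(new_pocket)[0, 1])
```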
Early work on introducing some of the parameters of structure-based druggability came from Abagyan and coworkers and then Fesik and coworkers, the latter by assessing the correlation of certain physicochemical parameters with hits from an NMR-based fragment screen. There has since been a number of publications reporting related methodologies.
There are several commercial tools and databases for structure-based druggability assessment. A publicly available database of pre-calculated druggability assessments for all structural domains within the Protein Data Bank (PDB) is provided through the ChEMBL's DrugEBIlity portal.
Structure-based druggability is usually used to identify suitable binding pocket for a small molecule; however, some studies have assessed 3D structures for the availability of grooves suitable for binding helical mimetics. This is an increasingly popular approach in addressing the druggability of protein-protein interactions.
Predictions based on other properties
As well as using 3D structure and family precedence, it is possible to estimate druggability using other properties of a protein such as features derived from the amino-acid sequence (feature-based druggability) which is applicable to assessing small-molecule based druggability or biotherapeutic-based druggability or the properties of ligands or compounds known to bind the protein (Ligand-based druggability).
The importance of training sets
All methods for assessing druggability are highly dependent on the training sets used to develop them. This highlights an important caveat in all the methods discussed above: which is that they have learned from the successes so far. The training sets are typically either databases of curated drug targets; screened targets databases (ChEMBL, BindingDB, PubChem etc.); or on manually compiled sets of 3D structure known by the developers to be druggable. As training sets improve and expand, the boundaries of druggability may also be expanded.
Undruggable targets
About 3% of human proteins are known to be "mode of action" drug targets, i.e., proteins through which approved drugs act. Another 7% of the human proteins interact with small molecule chemicals. Based on DrugCentral, 1795 human proteins are annotated to interact with 2455 approved drugs.
Furthermore, it is estimated that only 10-15% of human proteins are disease modifying while only 10-15% are druggable (there is no correlation between the two), meaning that only between 1 and 2.25% of disease modifying proteins are likely to be druggable. Hence it appears that the number of new undiscovered drug targets is very limited.
A potentially much larger percentage of proteins could be made druggable if protein–protein interactions could be disrupted by small molecules. However the majority of these interactions occur between relatively flat surfaces of the interacting protein partners and it is very difficult for small molecules to bind with high affinity to these surfaces. Hence these types of binding sites on proteins are generally thought to be undruggable but there has been some progress (by 2009) targeting these sites.
Chemoproteomics techniques have recently expanded the scope of what is deemed a druggable target through the identification of covalently modifiable sites across the proteome.
References
Further reading
External links
Drug discovery | Druggability | [
"Chemistry",
"Biology"
] | 1,162 | [
"Life sciences industry",
"Medicinal chemistry",
"Drug discovery"
] |
36,572,239 | https://en.wikipedia.org/wiki/Methyl%20phenkapton | Methyl phenkapton is an organophosphorus compound. It is highly toxic.
References
Acetylcholinesterase inhibitors
Organophosphate insecticides
Methoxy compounds
Thiophosphoryl compounds
Chloroarenes | Methyl phenkapton | [
"Chemistry"
] | 52 | [
"Functional groups",
"Thiophosphoryl compounds"
] |
36,572,267 | https://en.wikipedia.org/wiki/Methylphosphonyl%20dichloride | Methylphosphonyl dichloride (DC) or dichloro is an organophosphorus compound. It has commercial application in oligonucleotide synthesis, but is most notable as being a precursor to several chemical weapons agents. It is a white crystalline solid that melts slightly above room temperature.
Synthesis and reactions
Methylphosphonyl dichloride is produced by oxidation of methyldichlorophosphine with sulfuryl chloride:
CH3PCl2 + SO2Cl2 → CH3P(O)Cl2 + SOCl2
It can also be produced from a range of methylphosphonates (e.g. dimethyl methylphosphonate) via chlorination with thionyl chloride. Various amines catalyse this process.
With hydrogen fluoride or sodium fluoride, it can be used to produce methylphosphonyl difluoride. With alcohols, it converts to the dialkoxide:
CH3P(O)Cl2 + 2HOR → CH3P(O)(OR)2 + HCl
Safety
Methylphosphonyl dichloride is very toxic and reacts vigorously with water to release hydrochloric acid. It is also listed under Schedule 2 of the Chemical Weapons Convention as it is used in the production of organophosphorus nerve agents such as sarin and soman.
References
Nerve agent precursors
Organophosphorus compounds
Organic compounds with 1 carbon atom
Chlorides
Phosphine oxides | Methylphosphonyl dichloride | [
"Chemistry"
] | 317 | [
"Chlorides",
"Inorganic compounds",
"Salts",
"Organic compounds",
"Organic compounds with 1 carbon atom"
] |
36,575,142 | https://en.wikipedia.org/wiki/Methylmercuric%20dicyanamide | Methylmercuric dicyanamide is a chemical compound used as a fungicide for crops such as cereals, cotton, flax, sorghum, and sugar beets. As of 1998, the U.S. Environmental Protection Agency listed it as an unregistered pesticide in the United States. Although named as a dicyanamide, the major organic structure is a 2-cyanoguanidino group.
References
Cyanamides
Mercury(II) compounds | Methylmercuric dicyanamide | [
"Chemistry"
] | 102 | [
"Cyanamides",
"Functional groups"
] |
36,579,651 | https://en.wikipedia.org/wiki/Luteal%20support | Luteal support is the administration of medication, generally progesterone, progestins, hCG or GnRH agonists, to increase the success rate of implantation and early embryogenesis, thereby complementing and/or supporting the function of the corpus luteum. It can be combined with, for example, in vitro fertilization and ovulation induction.
Progesterone appears to be the best method of providing luteal phase support, with a relatively higher live birth rate than placebo, and a lower risk of ovarian hyperstimulation syndrome (OHSS) than hCG. Addition of other substances such as estrogen or hCG does not seem to improve outcomes.
Progesterone and progestins
The live birth rate is significantly higher with progesterone for luteal support in IVF cycles with or without intracytoplasmic sperm injection (ICSI). Co-treatment with GnRH agonists further improves outcomes, with a live birth rate risk difference (RD) of +16% (95% confidence interval +10 to +22%).
Routes and formulations
There is no evidence of any route of administration of progesterone or progestins being more beneficial than others for luteal support. The main ones are:
Oral administration of progesterone or progestin pills. Oral administration of progestins provides a live birth rate at least similar to that of vaginal progesterone capsules when used for luteal support in embryo transfer, with no evidence of an increased risk of miscarriage.
Intravaginal administration of gel, tablets or other inserts, such as endometrin. A weekly vaginal ring is an effective and safe method for intravaginal administration.
Intramuscular administration. Daily intramuscular injections of progesterone-in-oil (PIO) have been the standard route of administration, but are not FDA-approved for use in pregnancy.
Time of initiation
The time for beginning luteal support can be put in relation to various events:
In IVF, generally somewhere between the evening of oocyte retrieval and day 3 after oocyte retrieval, with weak evidence indicating that 2 days after oocyte retrieval may be optimal.
In artificial insemination, luteal support is generally started on the day of insemination, or 1 to 2 days after.
Duration
Luteal support given for a shorter duration than 7 weeks results in an increased risk of miscarriage in women with a dysfunctional corpus luteum (as can be diagnosed by blood tests for endogenous progesterone). In general, however, luteal support can safely be discontinued at the time of a positive pregnancy test (approximately 2 weeks after fertilization).
Other substances tested in luteal phase
The addition of estrogen or hCG as adjunctives to progesterone does not appear to affect pregnancy rate and live birth rate in IVF. In fact, luteal support with human chorionic gonadotropin (hCG) alone or as a supplement to progesterone has been associated with a higher risk of ovarian hyperstimulation syndrome (OHSS). Low molecular weight heparin as luteal support may improve the live birth rate but has substantial side effects and no reliable data on long-term effects. Glucocorticoids such as cortisol have limited evidence of efficacy as luteal support.
References
Assisted reproductive technology | Luteal support | [
"Biology"
] | 724 | [
"Assisted reproductive technology",
"Medical technology"
] |
49,780,148 | https://en.wikipedia.org/wiki/MatC%20family | The Malonate Uptake (MatC) family (TC# 2.A.101) is a constituent of the ion transporter (IT) superfamily. It consists of proteins from Gram-negative and Gram-positive bacteria (e.g., Xanthomonas, Rhizobium and Streptomyces species), simple eukaryotes (e.g., Chlamydomonas reinhardtii) and archaea (e.g., Methanococcus jannaschii). The proteins are of about 450 amino acyl residues in length with 12-14 putative transmembrane segments (TMSs). Closest functionally-characterized homologues are in the DASS (TC #2.A.47) family. One member of this family is a putative malonate transporter (MatC of Rhizobium leguminosarum bv trifolii, TC# 2.A.101.1.2).
See also
"fkbF - FkbF - Streptomyces hygroscopicus subsp. ascomyceticus - fkbF gene & protein".www.uniprot.org. Retrieved 2016-03-03.
"matC - Malonate carrier protein - Rhizobium leguminosarum - matC gene & protein". www.uniprot.org. Retrieved 2016-03-03.
Further reading
References
Protein families
Membrane proteins
Pumps
Transmembrane proteins
Transmembrane transporters
Transport proteins
Integral membrane proteins | MatC family | [
"Physics",
"Chemistry",
"Biology"
] | 327 | [
"Pumps",
"Turbomachinery",
"Protein classification",
"Physical systems",
"Hydraulics",
"Membrane proteins",
"Protein families"
] |
49,780,486 | https://en.wikipedia.org/wiki/Magnetochromism | Magnetochromism is the term applied when a chemical compound changes colour under the influence of a magnetic field. In particular the magneto-optical effects exhibited by complex mixed metal compounds are called magnetochromic when they occur in the visible region of the spectrum. Examples include K2V3O8, lithium molybdenum purple bronze Li0.9Mo6O17, and related mixed oxides. Reported magnetochromic compounds are multiferroic manganese tungsten oxide and multiferroic bismuth ferrite.
Magnetically–induced color change can also occur in aqueous solutions of colloidal Fe3O4 nanoparticles that are ~10 nm in diameter. Paramagnetic Fe3O4 particles are extracted from a petroleum–based ferrofluid or synthesized in a laboratory and then suspended in water. When exposed to a strengthening magnetic field these particles organize into chains that diffract light and cause the solution to change color from a brown to red, yellow, green and then blue. Manufacturers encapsulate microscopic droplets of this solution in a thin plastic film to create a magnetochromic magnetic field viewing screen.
References
Chromism | Magnetochromism | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 249 | [
"Spectrum (physical sciences)",
"Chromism",
"Materials science",
"Smart materials",
"Spectroscopy"
] |
42,201,889 | https://en.wikipedia.org/wiki/Graft%20polymer | In polymer chemistry, graft polymers are segmented copolymers with a linear backbone of one composite and randomly distributed branches of another composite. The picture labeled "graft polymer" shows how grafted chains of species B are covalently bonded to polymer species A. Although the side chains are structurally distinct from the main chain, the individual grafted chains may be homopolymers or copolymers.
Graft polymers have been synthesized for many decades and are especially used as impact resistant materials, thermoplastic elastomers, compatibilizers, or emulsifiers for the preparation of stable blends or alloys. One of the better-known examples of a graft polymer is a component used in high impact polystyrene, consisting of a polystyrene backbone with polybutadiene grafted chains.
General properties
Graft copolymers are branched copolymers in which the components of the side chain are structurally different from those of the main chain. Graft copolymers containing a larger quantity of side chains are capable of wormlike conformation, compact molecular dimension, and notable chain end effects due to their confined and tight fit structures.
The preparation of graft copolymers has been practised for decades. All of the usual synthesis methods can be employed to tailor the general physical properties of graft copolymers. They can be used for materials that are impact resistant, and are often used as thermoplastic elastomers, compatibilizers or emulsifiers for the preparation of stable blends or alloys. Generally, grafting methods for copolymer synthesis result in materials that are more thermostable than their homopolymer counterparts.
There are three methods of synthesis, grafting to, grafting from, and grafting through, that are used to construct a graft polymer.
Synthesis methods
There are many different approaches to synthesizing graft copolymers. Usually they employ familiar polymerization techniques that are commonly used such as atom transfer radical polymerization (ATRP), ring-opening metathesis polymerization (ROMP), anionic and cationic polymerizations, and free radical living polymerization. Some other less common polymerization include radiation-induced polymerization, ring-opening olefin metathesis polymerization, polycondensation reactions, and iniferter-induced polymerization.
Grafting to
The grafting to method involves the use of a backbone chain with functional groups A that are distributed randomly along the chain. The formation of the graft copolymer originates from the coupling reaction between the functional backbone and the end-groups of the branches that are reactive. These coupling reactions are made possible by modifying the backbone chemically. Common reaction mechanisms used to synthesize these copolymers include free-radical polymerization, anionic polymerization, atom-transfer radical-polymerization, and living polymerization techniques.
Copolymers that are prepared with the grafting-to method often utilize anionic polymerization techniques. This method uses a coupling reaction of the electrophilic groups of the backbone polymer and the propagation site of an anionic living polymer. This method would not be possible without the generation of a backbone polymer that has reactive groups. This method has become more popular with the rise of click chemistry. A high yield chemical reaction called atom transfer nitroxide radical coupling chemistry is used for the grafting-to method of polymerization.
Grafting from
In the grafting-from method, the macromolecular backbone is chemically modified in order to introduce active sites capable of initiating functionality. The initiating sites can be incorporated by copolymerization, can be incorporated in a post-polymerization reaction, or can already be a part of the polymer. If the number of active sites along the backbone participates in the formation of one branch, then the number of chains grafted to the macromolecule can be controlled by the number of active sites. Even though the number of grafted chains can be controlled, there may be a difference in the lengths of each grafted chain due to kinetic and steric hindrance effects.
Grafting from reactions have been conducted from polyethylene, polyvinylchloride, and polyisobutylene. Different techniques such as anionic grafting, cationic grafting, atom-transfer radical polymerization, and free-radical polymerization have been used in the synthesis of grafting from copolymers.
Graft copolymers that are employed with the grafting-from method are often synthesized with ATRP reactions and anionic and cationic grafting techniques.
Grafting through
The grafting through, also known as the macromonomer method, is one of the simpler ways of synthesizing a graft polymer with well defined side chains. Typically a monomer of lower molecular weight is copolymerized by free-radical polymerization with an acrylate-functionalized macromonomer. The ratio of monomer to macromonomer molar concentrations as well as their copolymerization behavior determines the number of chains that are grafted. As the reaction proceeds, the concentrations of monomer and macromonomer change, causing random placement of branches and formation of graft copolymers with different numbers of branches. This method allows branches to be added heterogeneously or homogeneously based on the reactivity ratio of the terminal functional group on the macromonomer to the monomer, as illustrated in the sketch following this section. The difference in distribution of grafts has significant effects on the physical properties of the grafted copolymer. Polyethylene, polysiloxanes and poly(ethylene oxide) are all macromonomers that have been incorporated in a polystyrene or poly(methyl acrylate) backbone.
The macromonomer (grafting through) method can be employed using any known polymerization technique. Living polymerizations give special control over the molecular weight, molecular weight distribution, and chain-end functionalization.
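The role of the reactivity ratios and of composition drift mentioned above can be illustrated with the classical Mayo-Lewis (copolymer composition) equation. In the sketch below the reactivity ratios and feed fractions are made-up example values, with the low-molecular-weight monomer as comonomer 1 and the macromonomer as comonomer 2.

```python
# Illustrative sketch of composition drift in grafting-through, using the
# Mayo-Lewis equation for the instantaneous fraction of monomer 1 entering
# the chain.  The reactivity ratios r1, r2 are made-up example values.
def mayo_lewis(f1, r1, r2):
    """Instantaneous copolymer fraction F1 from feed fraction f1."""
    f2 = 1.0 - f1
    num = r1 * f1**2 + f1 * f2
    den = r1 * f1**2 + 2 * f1 * f2 + r2 * f2**2
    return num / den

r1, r2 = 0.8, 0.3               # monomer vs. macromonomer reactivity (assumed)
for f1 in (0.95, 0.90, 0.80):   # feed drifts as the faster comonomer is consumed
    print(f"feed f1 = {f1:.2f}  ->  instantaneous F1 = {mayo_lewis(f1, r1, r2):.3f}")
```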
Applications
Graft copolymers have become widely studied due to their growing number of applications, such as drug delivery vehicles, surfactants, water filtration, and rheology modifiers. This interest stems from their unique structures relative to other copolymers such as alternating, periodic, statistical, and block copolymers.
Some common applications of graft copolymers include:
Membranes for the separation of gases or liquids
Hydrogels
Drug deliverers
Thermoplastic elastomers
Compatibilizers for polymer blends
Polymeric emulsifiers
Impact resistant plastics
High impact polystyrene
High impact polystyrene (HIPS) was discovered by Charles F. Fryling in 1961. HIPS is a low cost, plastic material that is easy to fabricate and often used for low strength structural applications when impact resistance, machinability, and low cost are required. Its major applications include machined prototypes, low-strength structural components, housings, and covers. In order to produce the graft polymer, polybutadiene (rubber) or any similar elastomeric polymer is dissolved in styrene and polymerized. This reaction allows for two simultaneous polymerizations, that of styrene to polystyrene and that of the graft polymerization of styrene-rubber. During commercial use, it can be prepared by graft copolymerization with additional polymer to give the product specific characteristics.
The advantages of HIPS includes:
FDA compliant
Good impact resistance
Excellent machinability
Good dimensional stability
Easy to paint and glue
Low cost
Excellent aesthetic qualities
New properties as a result of grafting
By grafting polymers onto polymer backbones, the final grafted copolymers gain new properties from their parent polymers. Specifically, cellulose graft copolymers have various different applications that are dependent on the structure of the polymer grafted onto the cellulose. Some of the new properties that cellulose gains from different monomers grafted onto it include:
Absorption of water
Improved elasticity
Hydrophilic/Hydrophobic character
Ion-exchange
Dye adsorption capabilities
Heat Resistance
Thermosensitivity
pH sensitivity
Antibacterial effect
These properties give new application to the ungrafted cellulose polymers that include:
Medical body fluid absorbent materials
Enhanced moisture absorbing ability in fabrics
Permselective membranes
Stronger nucleating properties than ungrafted cellulose, and adsorption of hazardous contaminants like heavy metal ions or dyes from aqueous solutions by temperature swing adsorption
Sensors and optical materials
Reducing agents for various carbonyl compounds
References
Polymers | Graft polymer | [
"Chemistry",
"Materials_science"
] | 1,771 | [
"Polymers",
"Polymer chemistry"
] |
42,203,738 | https://en.wikipedia.org/wiki/Wind-assisted%20propulsion | Wind-assisted propulsion is the practice of decreasing the fuel consumption of a merchant vessel through the use of sails or some other wind capture device. Sails used to be the primary means of propelling ships, but with the advent of the steam engine and the diesel engine, sails came to be used for recreational sailing only. In recent years with increasing fuel costs and an increased focus on reducing emissions, there has been increased interest in harnessing the power of the wind to propel commercial ships.
A key barrier to the implementation of any decarbonisation technology, and of wind-assisted ones in particular, that is frequently discussed in academia and industry is the availability of capital. On the one hand, shipping lenders have been reducing their commitments overall, while on the other hand, low-carbon newbuilds as well as retrofit projects entail higher-than-usual capital expenditure. Therefore, research effort is directed towards the development of shared economy and leasing business models, where benefits from reduced consumption of fossil fuels as well as gains from carbon allowances or reduced levies are shared among users, technology providers and operators.
Design
The mechanical means of converting the kinetic energy of the wind into thrust for a ship is the subject of much recent study. Where early ships designed primarily for sailing were designed around the sails that propelled them, commercial ships are now designed largely around the cargo that they carry, requiring a large clear deck and minimal overhead rigging in order to facilitate cargo handling. Another design consideration in designing a sail propulsion system for a commercial ship is that in order for it to be economically advantageous it cannot require a significantly larger crew to operate and it cannot compromise the stability of the ship. Taking into account these design criteria, three main concepts have emerged as the leading designs for wind-assisted propulsion: the “Wing Sail Concept,” the “Kite Sail,” and the “Flettner Rotor.”
Wingsail
As a result of rising oil prices in the 1980s, the US government commissioned a study on the economic feasibility of using wind assisted propulsion to reduce the fuel consumption of ships in the US Merchant Marine. This study considered several designs and concluded that a wingsail would be the most effective. The wingsail option studied consisted of an automated system of large rectangular solid sails supported by cylindrical masts. These would be symmetrical sails, which would allow a minimal amount of handling to maintain the sail orientation for different wind angles; however, this design was less efficient. A small freighter was outfitted with this system to evaluate its actual fuel gains, with the result that it was estimated to save between 15 and 25% of the vessel's fuel.
Kite sail
The kite sail concept has recently received a lot of interest. This rig consists of flying a gigantic kite from the bow of a ship using the traction developed by the kite to assist in pulling the ship through the water. Other concepts that have been explored were designed to have the kite rig alternately pull out and retract on a reel, driving a generator. The kite used in this setup is similar to the kites used by recreational kiteboarders, on a much larger scale. This design also allows users to expand its scale by flying multiple kites in a stacked arrangement.
The idea of using kites was, in 2012, the most popular form of wind-assisted propulsion on commercial ships, largely due to the low cost of retrofitting the system to existing ships, with minimal interference with existing structures. This system also allows a large amount of automation, using computer controls to determine the ideal kite angle and position. Using a kite allows the capture of wind at greater altitudes, where wind speed is higher and more consistent. This system has seen use on several ships, the most notable being a merchant ship chartered in 2009 by the US Military Sealift Command to evaluate the claims of efficiency and the feasibility of fitting this system to other ships.
Flettner rotor
The third design considered is the Flettner rotor. This is a large cylinder mounted upright on a ship's deck and mechanically spun. The effect of this spinning area in contact with the wind flowing around it creates a thrust effect that is used to propel the ship. Flettner Rotors were invented in the 1920s and have seen limited use since then. In 2010 a 10,000 dwt cargo ship was equipped with four Flettner Rotors to evaluate their role in increasing fuel efficiency. Since then, several cargo ships and a passenger ferry have been equipped with rotors.
The only parameter of the Flettner rotor requiring control is the rotational speed of the rotor, meaning this method of wind propulsion requires very little operator input. In comparison to kite sails, Flettner rotors often offer considerable efficiency gains for their size, given the dimensions of the sail or kite needed to produce comparable thrust, though the benefit depends on the prevailing wind conditions.
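An order-of-magnitude feel for rotor thrust can be obtained from the Kutta-Joukowski theorem with the ideal circulation of a spinning cylinder. The sketch below uses this potential-flow upper bound, which ignores flow separation and three-dimensional losses, and all rotor and wind figures are illustrative assumptions, not data for any real installation.

```python
# Rough order-of-magnitude sketch of Flettner-rotor thrust via the
# Kutta-Joukowski theorem with the ideal circulation of a spinning cylinder,
# Gamma = 2*pi*R^2*omega.  This potential-flow bound ignores separation and
# 3-D losses; all rotor and wind numbers below are illustrative assumptions.
import math

rho_air = 1.225          # air density (kg/m^3)
R, height = 1.5, 18.0    # rotor radius and span (m), assumed
rpm = 180.0              # rotor speed (assumed)
wind = 10.0              # apparent wind speed (m/s), assumed

omega = rpm * 2 * math.pi / 60.0
gamma = 2 * math.pi * R**2 * omega            # ideal bound circulation (m^2/s)
lift_per_span = rho_air * wind * gamma        # Kutta-Joukowski (N per m span)
print(f"spin ratio (surface speed / wind): {omega * R / wind:.1f}")
print(f"ideal lift force: {lift_per_span * height / 1e3:.0f} kN")
```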
Examples of 2018 Flettner rotor installations include :
Cruise ferry Viking Grace became the first passenger vessel with a rotor.
The liquid bulk tanker Maersk Pelican was retrofitted with two rotors.
The ultramax bulk carrier Afros received four rotors, which can be moved aside during port operations.
Implementation
The efficiency gains of these three propulsion assistance mechanisms are typically around 15–20% depending on the size of the system. As of 2009, shipping companies had been hesitant to install untested equipment. As of 2019, several initiatives were looking into the feasibility of cost-effective wind propulsion for commercial ships, including the Swedish Oceanbird concept for using wing sails, the Japanese Wind Challenger Project, and several coordinating associations.
See also
Pyxis Ocean, a bulk carrier retrofitted with wind-propulsion technology
Viking Grace, a rotor assisted cruise ship
Wind Surf, a wind assisted cruise ship
Hydrogen-powered ship
Nuclear marine propulsion
Internal drive propulsion
Integrated electric propulsion
Combined nuclear and steam propulsion
Astern propulsion
Marine propulsion
Air-independent propulsion
References
Marine propulsion
Wind | Wind-assisted propulsion | [
"Engineering"
] | 1,188 | [
"Marine propulsion",
"Marine engineering"
] |
42,204,798 | https://en.wikipedia.org/wiki/Tribonucleation | Tribonucleation is a mechanism that creates small gas bubbles by the action of making and breaking contact between solid surfaces immersed in a liquid containing dissolved gas. These small bubbles may then act as nuclei for the growth of bubbles when the pressure is reduced. As the formation of the nuclei occurs quite easily, the effect may occur in a human body engaged in light exercise, yet produce no symptoms. However tribonucleation may be a source of growing bubbles affecting scuba divers when ascending to the surface and is a potential cause of decompression sickness. The process has also been described as the basis for the cracking sound produced by the manipulation of human synovial joints.
References
Mechanisms (engineering)
Underwater diving physiology
Decompression theory | Tribonucleation | [
"Engineering"
] | 151 | [
"Mechanical engineering",
"Mechanisms (engineering)"
] |
42,206,064 | https://en.wikipedia.org/wiki/Raffaello%20D%27Andrea | Raffaello D’Andrea (born August 13, 1967, in Pordenone, Italy) is a Canadian-Italian-Swiss engineer, artist, and entrepreneur. He is professor of dynamic systems and control at ETH Zurich. He is a co-founder of Kiva Systems (now operating as Amazon Robotics), and the founder of Verity, an innovator in autonomous drones. He was the faculty advisor and system architect of the Cornell Robot Soccer Team, four time world champions at the annual RoboCup competition. He is a new media artist, whose work includes The Table, the Robotic Chair, and Flight Assembled Architecture. In 2013, D’Andrea co-founded ROBO Global, which launched the world's first exchange traded fund focused entirely on the theme of robotics and AI. ROBO Global was acquired by VettaFi in 2023.
D'Andrea was a speaker at TED Global 2013 and spoke at TED 2016. In 2016, he received the IEEE Robotics and Automation Award, and in 2020 he was elected a member of the National Academy of Engineering for contributions to the design and implementation of distributed automation systems for commercial applications.
Life
Born in Pordenone, Italy, D’Andrea moved to Canada in 1976, where he graduated valedictorian from Anderson Collegiate in Whitby, Ontario. He received a Bachelor of Applied Science from the University of Toronto, graduating in Engineering Science (Major in Electrical and Computer Engineering) in 1991 and winning the Wilson Medal as the top graduating student that year. In 1997 he received a Ph.D. in Electrical Engineering from the California Institute of Technology, under the supervision of John Doyle and Richard Murray.
He joined the Cornell faculty in 1997. While on sabbatical in 2003, he co-founded Kiva Systems with Mick Mountz and Peter Wurman. He became Kiva Systems’ chief technical advisor in 2007 when he was appointed professor of dynamic systems and control at ETH Zurich. He founded Verity with Markus Waibel and Markus Hehn in 2014.
Work
Academic work
After receiving his PhD in 1997, he joined the Cornell faculty as an assistant professor, where he was a founding member of the Systems Engineering program, and where he established robot soccer — a competition featuring fully autonomous robots — as the flagship, multidisciplinary team project. In addition to pioneering the use of semi-definite programming for the design of distributed control systems, he went on to lead the Cornell Robot Soccer Team to four world championships at international RoboCup competitions in Sweden, Australia, Italy, and Japan. D'Andrea received the Presidential Early Career Award for complex interconnected systems research in 2002.
After being appointed professor at ETH Zurich in 2007, D’Andrea established a research program that combined his broad interests and cemented his hands-on teaching style. His team engages in cutting-edge research by designing and building creative experimental platforms that allow them to explore the fundamental principles of robotics, control, and automation. His creations include the Flying Machine Arena, where flying robots perform aerial acrobatics, juggle balls, balance poles, and cooperate to build structures; the Distributed Flight Array, a flying platform consisting of multiple autonomous single propeller vehicles that are able to drive, dock with their peers, and fly in a coordinated fashion; the Balancing Cube, a dynamic sculpture that can balance on any of its edges or corners; Blind Juggling Machines that can juggle balls without seeing them, and without catching them; and the Cubli, a cube that can jump up, balance, and walk.
Entrepreneurial work
D’Andrea co-founded Kiva Systems in 2003 with Mick Mountz and Peter Wurman. He became chief technical advisor when he was appointed professor of dynamic systems and control at ETH Zurich in 2007. At Kiva, he led the systems architecture, robot design, robot navigation and coordination, and control algorithms efforts.
D’Andrea founded Verity in 2014 with Markus Hehn and Markus Waibel. The stated purpose of the company is "to develop autonomous indoor drone systems and related technologies for commercial applications." The company partnered with Cirque du Soleil to create Sparked, a live interaction between humans and quadcopters and has provided autonomous drone shows for large concert tours like Metallica's WorldWired Tour, Drake's Aubrey & the Three Migos Tour, Celine Dion's Courage World Tour, Justin Bieber's 2022 Justice World Tour, and the Australasian Dance Collective (ADC).
Since 2016, D'Andrea and Verity have been focused on delivering autonomous inventory drone systems for commercial warehouses to support inventory tracking and management, and other use cases. In 2023, IKEA announced the milestone of 100 Verity drones in use in its warehouses, and Maersk announced its use of the Verity system in its warehouses. In July 2023, Verity announced completion of a $43M Series B fundraising round that included Qualcomm Ventures.
Artistic work
D’Andrea and Canadian artist Max Dean unveiled their collaborative work The Table at the Venice Biennale in 2001. They orchestrate a scenario wherein a spectator, selected by the table, becomes a performer, who is now an object not only of the table's "attention", but also of the other viewers'. It is part of the permanent collection of the National Gallery of Canada (NGC).
The Robotic Chair was created by D’Andrea, Max Dean, and Canadian artist Matt Donovan. It is an ordinary looking chair that falls apart and re-assembles itself. It was first unveiled to the general public at IdeaCity in 2006. It is part of the permanent collection of the National Gallery of Canada (NGC).
D’Andrea and Swiss architects Gramazio & Kohler created Flight Assembled Architecture, the first architectural installation assembled by flying robots. It took place at the FRAC Centre Orléans in France in 2011–2012. The installation consists of 1,500 modules put into place by a multitude of quadrotor helicopters. Within the build, an architectural vision of a 600-metre high "vertical village" for 30,000 inhabitants unfolds as a model in 1:100 scale. It is in the permanent collection of the FRAC Centre.
Awards and honors
2020 National Academy of Engineering Member
2020 National Inventors Hall of Fame Inductee
2016 IEEE Robotics and Automation Award
2015 Engelberger Robotics Award
2008 IEEE/IFR Invention and Entrepreneurship Award
2002 Presidential Early Career Award
References
Electronics engineers
Living people
1967 births
Cornell University faculty
Recipients of the Presidential Early Career Award for Scientists and Engineers
Academic staff of ETH Zurich | Raffaello D'Andrea | [
"Engineering"
] | 1,328 | [
"Electronics engineers",
"Electronic engineering"
] |
42,210,520 | https://en.wikipedia.org/wiki/Chlorophyllum%20hortense | Chlorophyllum hortense is a species of agaric fungus in the family Agaricaceae.
Taxonomy
It was first described in 1914 by the American mycologist William Murrill who classified it as Lepiota hortensis.
In 1983 it was reclassified as Leucoagaricus hortensis by the British mycologist David Pegler.
In 2002 it was reclassified as Chlorophyllum hortense by Else C.Vellinga.
Description
Cap: 8-10cm wide when mature, starting convex and slightly umbonate before expanding. The surface is a dirty yellowish white colour, dry and covered in thread like filaments (fibrillose) whilst the centre disc is light brown and covered with large light brown woolly (floccose) scales. The cap edges are thick, rounded and the same colour as the cap surface with distinct striations.
Stem: 5-7cm long and 7-10mm thick, mostly equal in thickness across the length but sometimes slightly wider below the stem ring. The surface is smooth and white above the stem ring and usually brown and fibrillose below whilst the interior is tough and solid. The stem ring is thick, brown and located below or at the middle of the stem (inferior to median).
Gills: Free, crowded and white, unchanging in colour. There is a slight bulge in the middle of the gills (ventricose).
Spores: Ellipsoid and smooth. 8-9 x 6-7μm.
Habitat and distribution
The fungus is found in Australia and North America. In 2006, it was reported from China.
Murrill described the species from specimens collected in sandy soil in Alabama.
References
External links
Agaricaceae
Fungi described in 1914
Fungi of Asia
Fungi of Australia
Fungi of North America
Fungus species | Chlorophyllum hortense | [
"Biology"
] | 380 | [
"Fungi",
"Fungus species"
] |
33,964,670 | https://en.wikipedia.org/wiki/Acoustic%20lobing | Acoustic lobing refers to the radiation pattern of a combination of two or more loudspeaker drivers at a certain frequency, as seen looking at the speaker from its side. In most multi-way speakers, it is at the crossover frequency that the effects of lobing are of greatest concern, since this determines how well the speaker preserves the tonality of the original recorded content.
In practice, room effects and interactions mean that the ideal loudspeaker (or combination thereof) is not practically achievable. However a speaker that has the best dispersion at all frequencies of interest (especially the crossover frequency) will have the least colouration of sound - i.e., it will most faithfully reproduce the recorded material. Thus, an ideal speaker would have no lobes at all frequencies - in other words it will act as a point source radiating omnidirectionally at all frequencies. In practice all speakers will exhibit some amount of lobing at the crossover frequency. The primary reasons for this are the physical distance between the drivers, and the drivers' effective diameters relative to the frequency of interest.
Lobing is measured as having a comb filtering response (i.e., areas of peaks and dips) as the listening position varies vertically‡ w.r.t. the nominal on-axis position. Since a true spherical wavefront cannot be achieved in practice, designers try to make the lobe as wide as possible at the crossover frequency, such that at typical listening positions, the speaker appears omnidirectional.
Lobe formation
For the sake of simplicity, the following assumes two point sources separated by a distance d vertically‡, both radiating into half-space at a certain frequency f. Lobing can thus be expressed as a function of d and its relation to the wavelength λ. As d becomes comparable to (or larger than) λ, the acoustic wavefront becomes narrower, or more directive.
The following image shows a simplified representation of how two non-coincident drivers exhibit lobing (the difference between the lobing patterns is greatly exaggerated to demonstrate the effect):
The large black dot is the vertical listening position relative to the centre, at a certain fixed horizontal distance from the speaker. For wavelengths much greater than d, the wavefront is almost spherical (circular, when seen from the side) and the sound level is constant for a variety of such listening positions - the off-axis response of the speaker is almost omnidirectional. As the distance d approaches λ/4, the wavefront starts becoming narrower. At the listening position, the sound level is not the same as it would have been, had it been exactly midway between the drivers. The area where the sound level is constant for a given range of vertical positions (and fixed listening distance) is the lobe. Outside the lobe, the sound level is much less and this is what causes the speaker to have a change in tonality as one's listening height changes.
Note: For an individual driver this effect is known as directivity and is observable in both the vertical and horizontal planes, with d now the driver's diameter relative to the wavelength. The lobing pattern due to two or more drivers, by contrast, is primarily an effect in the vertical plane, resulting from the distance between the two drivers.
The physical reason for a lobe to form is the fact that at any point that is at a position unequal from both drivers, at certain frequencies (i.e., wavelengths) and depending on d and relative difference between the distances to the listening position, the wavefronts from each driver will interfere constructively or destructively. This constructive or destructive interference happens due to the relative phases of the waves from each driver as they reach the listening position.
Thus, for any given frequency, there will be a minimum distance from the speaker below which there will be radical changes in sound level as the listening position is changed vertically. And this distance becomes larger as the distance between the drivers increases. Thus, the best compromise is obtained when, for practical listening distances, we can choose drivers large enough to cover as much of the audio band as possible but at the same time small enough so they can be as closely spaced as possible as to appear as a point source for any practical listening distance.
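The two-source interference described above is easy to check numerically. The following sketch (illustrative values only: the spacing d, the crossover frequency f and the speed of sound are assumptions, not figures from this article) sums two ideal coherent point sources in the far field and prints the relative level at a few vertical angles:

```python
import numpy as np

c = 343.0    # speed of sound in air, m/s (assumed)
f = 2000.0   # assumed crossover frequency, Hz
d = 0.15     # assumed centre-to-centre driver spacing, m
lam = c / f  # wavelength at the crossover frequency

theta_deg = np.arange(-90, 91)  # vertical off-axis angle, degrees
theta = np.radians(theta_deg)

# Far-field path difference between the drivers is d*sin(theta); summing
# two unit-amplitude coherent sources gives the two-element pattern
# |cos(pi * d * sin(theta) / lambda)|.
response = np.abs(np.cos(np.pi * d / lam * np.sin(theta)))
spl = 20 * np.log10(np.maximum(response, 1e-6))  # dB relative to on-axis

for deg in (0, 15, 30, 45):
    print(f"{deg:3d} deg off-axis: {spl[90 + deg]:6.1f} dB")
```

With these assumed values d/λ ≈ 0.87, so the first null falls near 35° off-axis; shrinking d relative to λ pushes the null outward and widens the lobe, which is exactly the design compromise described above.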
‡ - The article assumes a typical loudspeaker configuration where multiple drivers are arranged vertically. Therefore, the lobing phenomenon is observable in the vertical plane. For horizontally arranged drivers, the lobing phenomenon would be observable in the horizontal plane.
References
Acoustics
Loudspeakers
Loudspeaker technology | Acoustic lobing | [
"Physics"
] | 938 | [
"Classical mechanics",
"Acoustics"
] |
33,966,160 | https://en.wikipedia.org/wiki/Micromixing | In pharmaceutics, micromixing is a process in which ingredient particles rearrange to form a blend. Development of pharmaceutical formulations requires understanding how the ingredients blend with each other and how the blending progresses through different stages. It is also important to establish in a scientific manner when the blending is considered complete, establishing the margins of blending performance, so that in production the blending is complete before the blending process stops.
Optimal blending
In order to achieve optimal blending, the micromixing process must be studied to determine mixing parameters such as blending time, blending speed, type and size of blender. When blending is performed too long, overblending may occur, with particles re-aggregating, resulting in segregation of the previously ideal blend.
Tools
Formulation scientists and technologists need tools to select ingredients for new formulations. Tablets contain multiple ingredients beyond the active pharmaceutical ingredients (API) such as fillers, tableting agents, disintegrants, and absorption enhancers or agents that slow down and control absorption. Choice of materials is important to assure the flow characteristics, potency, and absorption of specific formulations. In addition, proper particle size grades of the ingredients must be selected to produce an optimum blend for capsule filling.
Methods
In order to study the rate and uniformity of blending, destructive analytical methods, such as dissolution followed by chromatographic separation and detection, are often used. These methods require samples to be pulled from the blend, followed by time-consuming laboratory analysis. In production, such analysis delays may lengthen the time required for formulation development and production.
Hyperspectral imaging
Near-infrared hyperspectral imaging can show the distribution of ingredients in pharmaceutical tablets. In addition to laboratory analysis, imaging of near line pull-samples has been used to indicate whether the mixing endpoint has been achieved. However, such measurements were performed once blending was completed, and therefore, did not yield information about the progression of micromixing during the blending process.
References
Further reading
Marbach, Ralf. (2007). Multivariate Calibration: A Science-Based Method
Pharmaceutics
Pharmacy | Micromixing | [
"Chemistry"
] | 432 | [
"Pharmacology",
"Pharmacy"
] |
33,973,912 | https://en.wikipedia.org/wiki/Koenigs%20function | In mathematics, the Koenigs function is a function arising in complex analysis and dynamical systems. Introduced in 1884 by the French mathematician Gabriel Koenigs, it gives a canonical representation as dilations of a univalent holomorphic mapping, or a semigroup of mappings, of the unit disk in the complex numbers into itself.
Existence and uniqueness of Koenigs function
Let D be the unit disk in the complex numbers. Let f be a holomorphic function mapping D into itself, fixing the point 0, with f not identically 0 and f not an automorphism of D, i.e. a Möbius transformation defined by a matrix in SU(1,1).
By the Denjoy-Wolff theorem, f leaves invariant each disk |z| < r and the iterates of f converge uniformly on compacta to 0: in fact for 0 < r < 1,
|f(z)| ≤ M(r)|z| for |z| ≤ r, with M(r) < 1. Moreover f′(0) = λ with 0 < |λ| < 1.
Koenigs (1884) proved that there is a unique holomorphic function h defined on D, called the Koenigs function,
such that h(0) = 0, h′(0) = 1 and Schröder's equation is satisfied: h(f(z)) = λh(z).
The function h is the uniform limit on compacta of the normalized iterates g_n(z) = f^n(z)/λ^n.
Moreover, if f is univalent, so is h.
As a consequence, when f (and hence h) is univalent, D can be identified with the open domain U = h(D). Under this conformal identification, the mapping f becomes multiplication by λ, a dilation on U.
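As a small numerical illustration (our own example, not from Koenigs): for the disk self-map f(z) = z/(2 − z) one has f(0) = 0 and λ = f′(0) = 1/2, and one can check directly that h(z) = z/(1 − z) solves Schröder's equation h(f(z)) = λh(z). The normalized iterates converge to this h:

```python
# Sketch: the Koenigs function as the limit of the normalized iterates
# g_n(z) = f^n(z) / lambda^n, for the example map f(z) = z/(2 - z).
def f(z):
    return z / (2 - z)

lam = 0.5  # f'(0)

def g(z, n):
    """Normalized iterate g_n(z) = f^n(z) / lam^n."""
    for _ in range(n):
        z = f(z)
    return z / lam**n

for z0 in (0.3, 0.5 + 0.2j):
    print(z0, g(z0, 40), z0 / (1 - z0))  # iterate vs. the closed form h
```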
Proof
Uniqueness. If k is another solution then, by analyticity, it suffices to show that k = h near 0. Let H = h ∘ k^(-1)
near 0. Thus H(0) = 0, H′(0) = 1 and, for |z| small,
λH(z) = λh(k^(-1)(z)) = h(f(k^(-1)(z))) = H(λz).
Substituting into the power series for H, it follows that H(z) = z near 0. Hence k = h near 0.
Existence. Write f(z) = λz + z²u(z) with u holomorphic on D, and let g_n(z) = f^n(z)/λ^n. Then by the Schwarz lemma
|g_(n+1)(z) − g_n(z)| = |f(f^n(z)) − λf^n(z)|/|λ|^(n+1) ≤ C(r)|f^n(z)|²/|λ|^(n+1), where C(r) is the supremum of |u(w)| over |w| ≤ r.
On the other hand,
|f^n(z)| ≤ M(r)^n|z|.
Hence g_n converges uniformly for |z| ≤ r by the Weierstrass M-test, since M(r)² < |λ| for r sufficiently small.
Univalence. By Hurwitz's theorem, since each g_n is univalent and normalized, i.e. fixes 0 and has derivative 1 there, their limit h is also univalent.
Koenigs function of a semigroup
Let f_t (t ≥ 0) be a semigroup of holomorphic univalent mappings of D into itself fixing 0, defined
for t ∈ [0, ∞) with f_(s+t) = f_s ∘ f_t, such that
f_s is not an automorphism for s > 0;
f_s(z) is jointly continuous in s and z.
Each f_s with s > 0 has the same Koenigs function, cf. iterated function. In fact, if h is the Koenigs function of
f = f_1, then h(f_s(z)) satisfies Schröder's equation for f and hence is proportional to h.
Taking derivatives gives h(f_s(z)) = f_s′(0)h(z).
Hence h is the Koenigs function of f_s.
Structure of univalent semigroups
On the domain U = h(D), the maps f_s become multiplication by λ(s) = f_s′(0), a continuous semigroup.
So λ(s) = e^(μs), where μ is a uniquely determined solution of e^μ = λ with Re μ < 0. It follows that the semigroup is differentiable at 0. Let
v(z) = ∂_t f_t(z)|_(t=0),
a holomorphic function on D with v(0) = 0 and v′(0) = μ.
Then differentiating h(f_t(z)) = e^(μt)h(z) in t gives
h′(f_t(z)) ∂_t f_t(z) = μh(f_t(z)), so that v(z) = μh(z)/h′(z)
and
∂_t f_t(z) = v(f_t(z)), f_0(z) = z,
the flow equation for a vector field.
Restricting to the case with 0 < λ < 1, h(D) must be starlike so that
Re (zh′(z)/h(z)) ≥ 0.
Since the same result holds for the reciprocal,
Re (v(z)/z) ≤ 0,
so that v satisfies the conditions of Berkson and Porta for the infinitesimal generator of such a semigroup.
Conversely, reversing the above steps, any holomorphic vector field v(z) satisfying these conditions is associated to a semigroup f_t, with Koenigs function h determined by h′(z)/h(z) = μ/v(z) and h′(0) = 1.
Notes
References
ASIN: B0006BTAC2
Complex analysis
Dynamical systems
Types of functions | Koenigs function | [
"Physics",
"Mathematics"
] | 764 | [
"Functions and mappings",
"Mathematical objects",
"Mechanics",
"Mathematical relations",
"Types of functions",
"Dynamical systems"
] |
33,974,223 | https://en.wikipedia.org/wiki/Homotopy%20type%20theory | In mathematical logic and computer science, homotopy type theory (HoTT) refers to various lines of development of intuitionistic type theory, based on the interpretation of types as objects to which the intuition of (abstract) homotopy theory applies.
This includes, among other lines of work, the construction of homotopical and higher-categorical models for such type theories; the use of type theory as a logic (or internal language) for abstract homotopy theory and higher category theory; the development of mathematics within a type-theoretic foundation (including both previously existing mathematics and new mathematics that homotopical types make possible); and the formalization of each of these in computer proof assistants.
There is a large overlap between the work referred to as homotopy type theory, and that called the univalent foundations project. Although neither is precisely delineated, and the terms are sometimes used interchangeably, the choice of usage also sometimes corresponds to differences in viewpoint and emphasis. As such, this article may not represent the views of all researchers in the fields equally. This kind of variability is unavoidable when a field is in rapid flux.
History
Groupoid model
At one time, the idea that types in intensional type theory with their identity types could be regarded as groupoids was mathematical folklore. It was first made precise semantically in the 1994 paper of Martin Hofmann and Thomas Streicher called "The groupoid model refutes uniqueness of identity proofs", in which they showed that intensional type theory had a model in the category of groupoids. This was the first truly "homotopical" model of type theory, albeit only "1-dimensional" (the traditional models in the category of sets being homotopically 0-dimensional).
Their follow-up paper foreshadowed several later developments in homotopy type theory. For instance, they noted that the groupoid model satisfies a rule they called "universe extensionality", which is none other than the restriction to 1-types of the univalence axiom that Vladimir Voevodsky proposed ten years later. (The axiom for 1-types is notably simpler to formulate, however, since a coherent notion of "equivalence" is not required.) They also defined "categories with isomorphism as equality" and conjectured that in a model using higher-dimensional groupoids, for such categories one would have "equivalence is equality"; this was later proven by Benedikt Ahrens, Krzysztof Kapulkin, and Michael Shulman.
Early history: model categories and higher groupoids
The first higher-dimensional models of intensional type theory were constructed by Steve Awodey and his student Michael Warren in 2005 using Quillen model categories. These results were first presented in public at the conference FMCS 2006 at which Warren gave a talk titled "Homotopy models of intensional type theory", which also served as his thesis prospectus (the dissertation committee present were Awodey, Nicola Gambino and Alex Simpson). A summary is contained in Warren's thesis prospectus abstract.
At a subsequent workshop about identity types at Uppsala University in 2006 there were two talks about the relation between intensional type theory and factorization systems: one by Richard Garner, "Factorisation systems for type theory", and one by Michael Warren, "Model categories and intensional identity types". Related ideas were discussed in the talks by Steve Awodey, "Type theory of higher-dimensional categories", and Thomas Streicher, "Identity types vs. weak omega-groupoids: some ideas, some problems". At the same conference Benno van den Berg gave a talk titled "Types as weak omega-categories" where he outlined the ideas that later became the subject of a joint paper with Richard Garner.
All early constructions of higher dimensional models had to deal with the problem of coherence typical of models of dependent type theory, and various solutions were developed. One such was given in 2009 by Voevodsky, another in 2010 by van den Berg and Garner. A general solution, building on Voevodsky's construction, was eventually given by Lumsdaine and Warren in 2014.
At the PSSL86 in 2007 Awodey gave a talk titled "Homotopy type theory" (this was the first public usage of that term, which was coined by Awodey). Awodey and Warren summarized their results in the paper "Homotopy theoretic models of identity types", which was posted on the ArXiv preprint server in 2007 and published in 2009; a more detailed version appeared in Warren's thesis "Homotopy theoretic aspects of constructive type theory" in 2008.
At about the same time, Vladimir Voevodsky was independently investigating type theory in the context of the search of a language for practical formalization of mathematics. In September 2006 he posted to the Types mailing list "A very short note on homotopy lambda calculus", which sketched the outlines of a type theory with dependent products, sums and universes and of a model of this type theory in Kan simplicial sets. It began by saying "The homotopy λ-calculus is a hypothetical (at the moment) type system" and ended with "At the moment much of what I said above is at the level of conjectures. Even the definition of the model of TS in the homotopy category is non-trivial" referring to the complex coherence issues that were not resolved until 2009. This note included a syntactic definition of "equality types" that were claimed to be interpreted in the model by path-spaces, but did not consider Per Martin-Löf's rules for identity types. It also stratified the universes by homotopy dimension in addition to size, an idea that later was mostly discarded.
On the syntactic side, Benno van den Berg conjectured in 2006 that the tower of identity types of a type in intensional type theory should have the structure of an ω-category, and indeed a ω-groupoid, in the "globular, algebraic" sense of Michael Batanin. This was later proven independently by van den Berg and Garner in the paper "Types are weak omega-groupoids" (published 2008), and by Peter Lumsdaine in the paper "Weak ω-Categories from Intensional Type Theory" (published 2009) and as part of his 2010 Ph.D. thesis "Higher Categories from Type Theories".
The univalence axiom, synthetic homotopy theory, and higher inductive types
The concept of a univalent fibration was introduced by Voevodsky in early 2006.
However, because of the insistence of all presentations of the Martin-Löf type theory on the property that the identity types, in the empty context, may contain only reflexivity, Voevodsky did not recognize until 2009 that these identity types can be used in combination with the univalent universes. In particular, the idea that univalence can be introduced simply by adding an axiom to the existing Martin-Löf type theory appeared only in 2009.
Also in 2009, Voevodsky worked out more of the details of a model of type theory in Kan complexes, and observed that the existence of a universal Kan fibration could be used to resolve the coherence problems for categorical models of type theory. He also proved, using an idea of A. K. Bousfield, that this universal fibration was univalent: the associated fibration of pairwise homotopy equivalences between the fibers is equivalent to the paths-space fibration of the base.
To formulate univalence as an axiom Voevodsky found a way to define "equivalences" syntactically that had the important property that the type representing the statement "f is an equivalence" was (under the assumption of function extensionality) (-1)-truncated (i.e. contractible if inhabited). This enabled him to give a syntactic statement of univalence, generalizing Hofmann and Streicher's "universe extensionality" to higher dimensions. He was also able to use these definitions of equivalences and contractibility to start developing significant amounts of "synthetic homotopy theory" in the proof assistant Coq; this formed the basis of the library later called "Foundations" and eventually "UniMath".
Unification of the various threads began in February 2010 with an informal meeting at Carnegie Mellon University, where Voevodsky presented his model in Kan complexes to a group including Awodey, Warren, Lumsdaine, Robert Harper, Dan Licata, Michael Shulman, and others. This meeting produced the outlines of a proof (by Warren, Lumsdaine, Licata, and Shulman) that every homotopy equivalence is an equivalence (in Voevodsky's good coherent sense), based on the idea from category theory of improving equivalences to adjoint equivalences. Soon afterwards, Voevodsky proved that the univalence axiom implies function extensionality.
The next pivotal event was a mini-workshop at the Mathematical Research Institute of Oberwolfach in March 2011 organized by Steve Awodey, Richard Garner, Per Martin-Löf, and Vladimir Voevodsky, titled "The homotopy interpretation of constructive type theory". As part of a Coq tutorial for this workshop, Andrej Bauer wrote a small Coq library based on Voevodsky's ideas (but not actually using any of his code); this eventually became the kernel of the first version of the "HoTT" Coq library (the first commit of the latter by Michael Shulman notes "Development based on Andrej Bauer's files, with many ideas taken from Vladimir Voevodsky's files"). One of the most important things to come out of the Oberwolfach meeting was the basic idea of higher inductive types, due to Lumsdaine, Shulman, Bauer, and Warren. The participants also formulated a list of important open questions, such as whether the univalence axiom satisfies canonicity (still open, although some special cases have been resolved positively), whether the univalence axiom has nonstandard models (since answered positively by Shulman), and how to define (semi)simplicial types (still open in MLTT, although it can be done in Voevodsky's Homotopy Type System (HTS), a type theory with two equality types).
Soon after the Oberwolfach workshop, the Homotopy Type Theory website and blog was established, and the subject began to be popularized under that name. An idea of some of the important progress during this period can be obtained from the blog history.
Univalent foundations
The phrase "univalent foundations" is agreed by all to be closely related to homotopy type theory, but not everyone uses it in the same way. It was originally used by Vladimir Voevodsky to refer to his vision of a foundational system for mathematics in which the basic objects are homotopy types, based on a type theory satisfying § the univalence axiom, and formalized in a computer proof assistant.
As Voevodsky's work became integrated with the community of other researchers working on homotopy type theory, "univalent foundations" was sometimes used interchangeably with "homotopy type theory", and other times to refer only to its use as a foundational system (excluding, for example, the study of model-categorical semantics or computational metatheory). For instance, the subject of the IAS special year was officially given as "univalent foundations", although a lot of the work done there focused on semantics and metatheory in addition to foundations. The book produced by participants in the IAS program was titled "Homotopy type theory: Univalent foundations of mathematics"; although this could refer to either usage, since the book only discusses HoTT as a mathematical foundation.
Special Year on Univalent Foundations of Mathematics
In 2012–13 researchers at the Institute for Advanced Study held "A Special Year on Univalent Foundations of Mathematics". The special year brought together researchers in topology, computer science, category theory, and mathematical logic. The program was organized by Steve Awodey, Thierry Coquand and Vladimir Voevodsky.
During the program Peter Aczel, who was one of the participants, initiated a working group which investigated how to do type theory informally but rigorously, in a style that is analogous to ordinary mathematicians doing set theory. After initial experiments it became clear that this was not only possible but highly beneficial, and that a book (the so-called HoTT Book) could and should be written. Many other participants of the project then joined the effort with technical support, writing, proof reading, and offering ideas. Unusually for a mathematics text, it was developed collaboratively and in the open on GitHub, is released under a Creative Commons license that allows people to fork their own version of the book, and is both purchasable in print and downloadable free of charge.
More generally, the special year was a catalyst for the development of the entire subject; the HoTT Book was only one, albeit the most visible, result.
Official participants in the special year
Peter Aczel
Benedikt Ahrens
Thorsten Altenkirch
Steve Awodey
Bruno Barras
Andrej Bauer
Yves Bertot
Marc Bezem
Thierry Coquand
Eric Finster
Daniel Grayson
Hugo Herbelin
André Joyal
Dan Licata
Peter Lumsdaine
Assia Mahboubi
Per Martin-Löf
Sergey Melikhov
Alvaro Pelayo
Andrew Polonsky
Michael Shulman
Matthieu Sozeau
Bas Spitters
Benno van den Berg
Vladimir Voevodsky
Michael Warren
Noam Zeilberger
ACM Computing Reviews listed the book as a notable 2013 publication in the category "mathematics of computing".
Key concepts
"Propositions as types"
HoTT uses a modified version of the "propositions as types" interpretation of type theory, according to which types can also represent propositions and terms can then represent proofs. In HoTT, however, unlike in standard "propositions as types", a special role is played by 'mere propositions' which, roughly speaking, are those types having at most one term, up to propositional equality. These are more like conventional logical propositions than are general types, in that they are proof-irrelevant.
Equality
The fundamental concept of homotopy type theory is the path. In HoTT, the type a = b is the type of all paths from the point a to the point b. (Therefore, a proof that a point a equals a point b is the same thing as a path from the point a to the point b.) For any point a, there exists a path of type a = a, corresponding to the reflexive property of equality. A path of type a = b can be inverted, forming a path of type b = a, corresponding to the symmetric property of equality. Two paths of type a = b resp. b = c can be concatenated, forming a path of type a = c; this corresponds to the transitive property of equality.
Most importantly, given a path p : a = b, and a proof of some property P(a), the proof can be "transported" along the path p to yield a proof of the property P(b). (Equivalently stated, an object of type P(a) can be turned into an object of type P(b).) This corresponds to the substitution property of equality. Here, an important difference between HoTT and classical mathematics comes in. In classical mathematics, once the equality of two values a and b has been established, a and b may be used interchangeably thereafter, with no regard to any distinction between them. In homotopy type theory, however, there may be multiple different paths a = b, and transporting an object along two different paths will yield two different results. Therefore, in homotopy type theory, when applying the substitution property, it is necessary to state which path is being used.
In general, a "proposition" can have multiple different proofs. (For example, the type of all natural numbers, when considered as a proposition, has every natural number as a proof.) Even if a proposition has only one proof , the space of paths may be non-trivial in some way. A "mere proposition" is any type which either is empty, or contains only one point with a trivial path space.
Note that people write a = b for a =_A b, thereby leaving the type A of a and b implicit. Do not confuse it with id_A, denoting the identity function on A.
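The path operations just described can be sketched in a proof assistant. The following Lean 4 fragment is only an illustration: it uses Lean's built-in equality type, whose Prop-valued proofs are proof-irrelevant, so it shows the interface of paths (reflexivity, inversion, concatenation, transport) but not the higher path structure that distinguishes HoTT:

```lean
-- Paths as proofs of equality, with the operations described above.
example (A : Type) (a : A) : a = a := rfl                  -- reflexivity
example (A : Type) (a b : A) (p : a = b) : b = a := p.symm -- inversion
example (A : Type) (a b c : A) (p : a = b) (q : b = c) :
    a = c := p.trans q                                     -- concatenation
-- Transport: a proof of P a carried along p : a = b yields a proof of P b.
example (A : Type) (P : A → Prop) (a b : A) (p : a = b) (h : P a) :
    P b := p ▸ h
```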
Type equivalence
Two functions f, g : A → B are homotopic if they are identified pointwise; the homotopy f ~ g is defined as the dependent product Π(x : A). (f(x) = g(x)).
Equivalences between two types A and B belonging to some universe U are defined as functions f : A → B together with proofs of having retractions and sections with respect to homotopies:
isequiv(f) := (Σ(g : B → A). (g ∘ f ~ id_A)) × (Σ(h : B → A). (f ∘ h ~ id_B)), where A ≃ B := Σ(f : A → B). isequiv(f).
Together with the univalence axiom below, one receives a non-circular notion of "isomorphism" expanded to identity.
The univalence axiom
Having defined functions that are equivalences as above, one can show that there is a canonical way to turn paths to equivalences.
In other words, there is a function of the type
idtoeqv : (A = B) → (A ≃ B),
which expresses that types that are equal are, in particular, also equivalent.
The univalence axiom states that this function is itself an equivalence. Therefore, we have
(A = B) ≃ (A ≃ B).
"In other words, identity is equivalent to equivalence. In particular, one may say that 'equivalent types are identical'."
Martín Hötzel Escardó has shown that the property of univalence is independent of Martin-Löf Type Theory (MLTT).
This is because type equivalence is compatible with all constructions of the type theory.
Applications
Theorem proving
Advocates claim that HoTT allows mathematical proofs to be translated into a computer programming language for computer proof assistants much more easily than before. They argue this approach increases the potential for computers to check difficult proofs. However, these claims are not universally accepted, and many research efforts and proof assistants do not make use of HoTT.
HoTT adopts the univalence axiom, which relates the equality of logical-mathematical propositions to homotopy theory. An equation such as "a = b" is a mathematical proposition in which two different symbols have the same value. In homotopy type theory, this is taken to mean that the two shapes which represent the values of the symbols are topologically equivalent.
These equivalence relationships, ETH Zürich Institute for Theoretical Studies director Giovanni Felder argues, can be better formulated in homotopy theory because it is more comprehensive: Homotopy theory explains not only why "a equals b" but also how to derive this. In set theory, this information would have to be defined additionally, which, advocates argue, makes the translation of mathematical propositions into programming languages more difficult.
Computer programming
As of 2015, intense research work was underway to model and formally analyse the computational behavior of the univalence axiom in homotopy type theory.
Cubical type theory is one attempt to give computational content to homotopy type theory.
However, it is believed that certain objects, such as semi-simplicial types, cannot be constructed without reference to some notion of exact equality. Therefore, various two-level type theories have been developed which partition their types into fibrant types, which respect paths, and non-fibrant types, which do not. Cartesian cubical computational type theory is the first two-level type theory which gives a full computational interpretation to homotopy type theory.
See also
Calculus of constructions
Curry–Howard correspondence
Intuitionistic type theory
Homotopy hypothesis
Univalent foundations
Notes
References
Bibliography
The Univalent Foundations Program (2013), Homotopy Type Theory: Univalent Foundations of Mathematics, Institute for Advanced Study. (GitHub version cited in this article; also available as PDF.)
Further reading
David Corfield (2020), Modal Homotopy Type Theory: The Prospect of a New Logic for Philosophy, Oxford University Press.
Egbert Rijke (2022), Introduction to Homotopy Type Theory. Introductory textbook.
External links
Homotopy type theory wiki
Vladimir Voevodsky's webpage on the Univalent Foundations
Homotopy Type Theory and the Univalent Foundations of Mathematics by Steve Awodey
"Constructive Type Theory and Homotopy" – Video lecture by Steve Awodey at the Institute for Advanced Study
Libraries of formalized mathematics
(now integrated into UniMath, where further development takes place)
Foundations of mathematics
Type theory
Homotopy theory
Formal methods
Articles containing video clips | Homotopy type theory | [
"Mathematics",
"Engineering"
] | 4,237 | [
"Mathematical structures",
"Foundations of mathematics",
"Mathematical logic",
"Mathematical objects",
"Type theory",
"Software engineering",
"Formal methods"
] |
33,975,776 | https://en.wikipedia.org/wiki/Buchner%20ring%20expansion | The Buchner ring expansion is a two-step organic C-C bond forming reaction used to access 7-membered rings. The first step involves formation of a carbene from ethyl diazoacetate, which cyclopropanates an aromatic ring. The ring expansion occurs in the second step, with an electrocyclic reaction opening the cyclopropane ring to form the 7-membered ring.
History
The Buchner ring expansion reaction was first used in 1885 by Eduard Buchner and Theodor Curtius who prepared a carbene from ethyl diazoacetate for addition to benzene using both thermal and photochemical pathways in the synthesis of cycloheptatriene derivatives. The resulting product was a mixture of four isomeric carboxylic acids. Variations in the reaction arise from methods of carbene preparation. Advances in organometallic chemistry have resulted in increased selectivity of cycloheptatriene derivatives. In the 1980s it was found that dirhodium catalysts provide single cyclopropane isomers in high yields. Applications are found in medicine (drug syntheses) and material science (fullerene derivatives).
Preparation
Preparation of ethyl-diazoacetate
Buchner's first synthesis of cycloheptatriene derivatives in 1885 used photolysis and thermal conditions to generate the carbene. A procedure for preparation of the hazardous starting material needed for carbene generation in the Buchner reaction, ethyl-diazoacetate, is available in Organic Syntheses. In the procedure provided, Searle includes cautionary instructions due to the highly explosive nature of diazoacetic esters.
Preparation of the metal carbenoid
Synthesis of the carbene in the 1960s was focused on using copper catalysts for stereoselective cyclopropanation. Since the 1980s, dirhodium catalysts have been used to generate the carbenoid for cyclopropanation. The advent of organometallic chemistry has improved the selectivity of the product ratios of the cycloheptatriene derivatives through the choice of ligand on the carbenoid catalyst.
Mechanism
Step 1
The reaction mechanism of a Buchner ring expansion begins with carbene formation from ethyl-diazoacetate generated initially through photochemical or thermal reactions with extrusion of nitrogen.
The generated carbene adds to one of the double bonds of benzene to form the cyclopropane ring.
The advent of transition-metal catalysts provides alternative stereospecific methods for cyclopropanation. The choices for metals include Cu, Rh and Ru with a variety of ligands. The use of rhodium catalysts in the Buchner reaction for carbene generation reduces the number of products by producing predominantly the kinetic cycloheptatrienyl esters. Product mixtures of Buchner reactions catalysed by rhodium(II) under thermal conditions are less complicated. Wyatt et al. have studied the regioselectivity of the thermal Buchner reaction using Rh2(O2CCF3)4 and demonstrated that the electrophilic rhodium carbene prefers reaction at the more nucleophilic π-bonds of the aromatic ring.
The accepted carbene catalytic cycle was proposed by Yates in 1952. Initially the diazo compound oxidatively adds to the metal ligand complex. Following the extrusion of nitrogen the metal carbene is generated and reacts with an electron rich aromatic substance to reductively regenerate the metal catalyst completing the catalytic cycle.
Step 2
The second step of the Buchner reaction involves a pericyclic concerted ring expansion. Based on Woodward–Hoffmann rules, the electrocyclic opening of norcaradiene derivatives is a 6-electron disrotatory (π 4s + σ 2s), thermally allowed process.
The norcaradiene-cycloheptatriene equilibrium has been studied extensively. The position of the equilibrium depends upon steric, electronic and conformational effects. Due to conformational strain in the cyclopropane ring of the norcaradiene, the equilibrium lies on the side of the cycloheptatriene. The equilibrium may be shifted toward the norcaradiene by destabilization of the cycloheptatriene through bulky substitution (large sterically hindered groups, e.g. t-butyl) at C1 and C6.
Equilibrium may be altered by varying substitution at C7. Electron donating groups (EDG) favor the norcaradiene, while electron withdrawing groups (EWG) favor the cycloheptatriene.
The tautomerism of the norcaradiene and cycloheptatriene can be understood based on the Walsh cyclopropane molecular orbitals of the norcaradiene cyclopropane ring. Electronic rationalization for stabilization of the Walsh orbitals is possible for both electron withdrawing and electron donating groups at the C7 carbon. The molecular orbitals of electron withdrawing groups at C7 overlap with the HOMO Walsh orbitals of the cyclopropane ring causing a shortening of the C1-C6 bond. In the case of electron donating groups, orbital overlap is again possible now in the LUMO, resulting in an increase in antibonding character destabilizing the norcaradiene tautomer. The position of the equilibrium may be controlled depending on the carbene substituents.
Applications
Medicine
The importance of the Buchner ring expansion annulation chemistry is evident in the application of this synthetic sequence in the synthesis of biological compounds.
While studying an analogous reaction of carbene addition to thiophene, Stephen Matlin and Lam Chan applied the Buchner ring expansion method in 1981 to generate spiro derivatives of penicillin.
In 1998, Mander et al. synthesized the diterpenoid tropone, harringtonolide using the Buchner intramolecular ring expansion annulation chemistry. A rhodium catalyst (Rh2(mandelate)4) and DBU (1,8-diazabicyclo[5.4.0]undec-7-ene) were used to generate the carbene. This natural product was found to have antineoplastic and antiviral properties.
Danheiser et al. utilized intramolecular carbenoid generation to produce substituted azulenes through a Buchner type ring expansion. The anti-ulcer drug, Egualen (KT1-32) was synthesized using this ring expansion-annulation strategy with a rhodium catalyst (Rh2(OCOt-Bu)4) in ether.
Material science
The Buchner ring expansion method has been used to synthesize starting materials for applications in material science involving photovoltaic cells. The development of a donor-acceptor (D-A) interface composed of conducting polymer donors and buckminsterfullerene derivative acceptors create a phase-separated composite that enhances photoconductivity (available with only polymer donors) in the photoinduced charge transfer process of photovoltaic cells. The fullerene compounds can be functionalized for miscibility of C60 to increase efficiency of the solar cell depending upon the polymeric thin film synthesized.
Limitations
The disadvantages of the reaction involve side reactions of the carbene moiety. The choice of solvent for the reaction needs to be considered. In addition to the potential for carbon-hydrogen bond insertion reactions, carbon-halogen carbene insertion is possible when dichloromethane is used as the solvent.
Control for regioselectivity during the carbene addition is necessary to avoid side products resulting from conjugated cycloheptatriene isomers. Noels et al. used Rh(II) catalysts for carbene generation under mild reaction conditions (room temperature) to obtain regioselectively the kinetic non-conjugated cycloheptatriene isomer.
See also
Electrocyclic reaction
Cycloheptatriene
Carbene
References
Ring expansion reactions
Carbon-carbon bond forming reactions
Rearrangement reactions
Name reactions | Buchner ring expansion | [
"Chemistry"
] | 1,706 | [
"Ring expansion reactions",
"Carbon-carbon bond forming reactions",
"Organic reactions",
"Name reactions",
"Rearrangement reactions"
] |
48,494,934 | https://en.wikipedia.org/wiki/Supra-arcade%20downflows | Supra-arcade downflows (SADs) are sunward-traveling plasma voids that are sometimes observed in the Sun's outer atmosphere, or corona, during solar flares. In solar physics, an arcade refers to a bundle of coronal loops, and the prefix supra- indicates that the downflows appear above flare arcades. They were first described in 1999 using the Soft X-ray Telescope (SXT) on board the Yohkoh satellite. SADs are byproducts of the magnetic reconnection process that drives solar flares, but their precise cause remains unknown.
Observations
Description
SADs are dark, finger-like plasma voids that are sometimes observed descending through the hot, dense plasma above bright coronal loop arcades during solar flares. They were first reported for a flare and associated coronal mass ejection that occurred on January 20, 1999, and was observed by the SXT onboard Yohkoh. SADs are sometimes referred to as "tadpoles" for their shape and have since been identified in many other events. They tend to be most easily observed in the decay phase of long-duration flares, when sufficient plasma has accumulated above the flare arcade to make SADs visible, but they do begin earlier, during the rise phase. In addition to the SAD voids, there are related structures known as supra-arcade downflowing loops (SADLs). SADLs are retracting (shrinking) coronal loops that form as the overlying magnetic field is reconfigured during the flare. SADs and SADLs are thought to be manifestations of the same process viewed from different angles, such that SADLs are observed if the viewer's perspective is along the axis of the arcade (i.e. through the arch), while SADs are observed if the perspective is perpendicular to the arcade axis.
Basic properties
SADs typically begin 100–200 Mm above the photosphere and descend 20–50 Mm before dissipating near the top of the flare arcade after a few minutes. Sunward speeds generally fall between 50 and 500 km s−1 but may occasionally approach 1000 km s−1. As they fall, the downflows decelerate at rates of 0.1 to 2 km s−2. SADs appear dark because they are considerably less dense than the surrounding plasma, while their temperatures (100,000 to 10,000,000 K) do not differ significantly from their surroundings. Their cross-sectional areas range from a few million to 70 million km2 (for comparison, the cross-sectional area of the Moon is 9.5 million km2).
Instrumentation
SADs are typically observed using soft X-ray and Extreme Ultraviolet (EUV) telescopes that cover a wavelength range of roughly 10 to 1500 Angstroms (Å) and are sensitive to the high-temperature (100,000 to 10,000,000 K) coronal plasma through which the downflows move. These emissions are blocked by Earth's atmosphere, so observations are made using space observatories. The first detection was made by the Soft X-ray Telescope (SXT) onboard Yohkoh (1991–2001). Observations soon followed from the Transition Region and Coronal Explorer (TRACE, 1998–2010), an EUV imaging satellite, and the spectroscopic SUMER instrument on board the Solar and Heliospheric Observatory (SOHO, 1995–2016). More recently, studies on SADs have used data from the X-Ray Telescope (XRT) onboard Hinode (2006—present) and the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory (SDO, 2010—present). In addition to EUV and X-ray instruments, SADs may also be seen by white light coronagraphs such as the Large Angle and Spectrometric Coronagraph (LASCO) onboard SOHO, though these observations are less common.
Causes
SADs are widely accepted to be byproducts of magnetic reconnection, the physical process that drives solar flares by releasing energy stored in the Sun's magnetic field. Reconnection reconfigures the local magnetic field surrounding the flare site from a higher-energy (non-potential, stressed) state to a lower-energy (potential) state. This process is facilitated by the development of a current sheet, often preceded by or in tandem with a coronal mass ejection. As the field is being reconfigured, newly formed magnetic field lines are swept away from the reconnection site, producing outflows both toward and away from the solar surface, respectively referred to as downflows and upflows. SADs are believed to be related to reconnection downflows that perturb the hot, dense plasma that collects above flare arcades, but precisely how SADs form is uncertain and is an area of active research.
SADs were first interpreted as cross sections of magnetic flux tubes, which comprise coronal loops, that retract down due to magnetic tension after being formed at the reconnection site. This interpretation was later revised to suggest that SADs are instead wakes behind much smaller retracting loops (SADLs), rather than cross sections of the flux tubes themselves. Another possibility, also related to reconnection outflows, is that SADs arise from an instability, such as the Rayleigh-Taylor instability or a combination of the tearing mode and Kelvin-Helmholtz instabilities.
References
External links
Supra-Arcade Downflows - RHESSI Wiki (berkeley.edu)
NASA: Closeup of Solar 'Tadpoles' (nasa.gov)
Hinode/XRT: Supra-Arcade Downflows Post X-Flare (cfa.harvard.edu)
Hinode/XRT: Supra-Arcade Downflowing Loops (cfa.harvard.edu)
Astrophysics
Magnetohydrodynamics
Solar phenomena
Space physics
Sun | Supra-arcade downflows | [
"Physics",
"Chemistry",
"Astronomy"
] | 1,239 | [
"Physical phenomena",
"Outer space",
"Magnetohydrodynamics",
"Astrophysics",
"Space physics",
"Solar phenomena",
"Stellar phenomena",
"Astronomical sub-disciplines",
"Fluid dynamics"
] |
48,496,573 | https://en.wikipedia.org/wiki/Bhargava%20factorial | In mathematics, Bhargava's factorial function, or simply Bhargava factorial, is a certain generalization of the factorial function developed by the Fields Medal-winning mathematician Manjul Bhargava as part of his thesis at Harvard University in 1996. The Bhargava factorial has the property that many number-theoretic results involving the ordinary factorials remain true even when the factorials are replaced by the Bhargava factorials. Using an arbitrary infinite subset S of the set Z of integers, Bhargava associated a positive integer with every positive integer k, which he denoted by k!S, with the property that if one takes S = Z itself, then the integer associated with k, that is k!Z, would turn out to be the ordinary factorial of k.
Motivation for the generalization
The factorial of a non-negative integer n, denoted by n!, is the product of all positive integers less than or equal to n. For example, 5! = 5×4×3×2×1 = 120. By convention, the value of 0! is defined as 1. This classical factorial function appears prominently in many theorems in number theory. The following are a few of these theorems.
For any positive integers m and n, (m + n)! is a multiple of m! n!.
Let f(x) be a primitive integer polynomial, that is, a polynomial in which the coefficients are integers and are relatively prime to each other. If the degree of f(x) is k then the greatest common divisor of the set of values of f(x) for integer values of x is a divisor of k!.
Let a0, a1, a2, ... , an be any n + 1 integers. Then the product of their pairwise differences is a multiple of 0! 1! ... n!.
Let Z be the set of integers and n any positive integer. Then the number of polynomial functions from the ring of integers Z to the quotient ring Z/nZ is given by the product over k = 0, 1, ..., n − 1 of n/gcd(n, k!).
Bhargava posed to himself the following problem and obtained an affirmative answer: In the above theorems, can one replace the set of integers by some other set S (a subset of , or a subset of some ring) and define a function depending on S which assigns a value to each non-negative integer k, denoted by k!S, such that the statements obtained from the theorems given earlier by replacing k! by k!S remain true?
The generalisation
Let S be an arbitrary infinite subset of the set Z of integers.
Choose a prime number p.
Construct an ordered sequence {a0, a1, a2, ... } of numbers chosen from S as follows (such a sequence is called a p-ordering of S):
a0 is any arbitrary element of S.
a1 is any arbitrary element of S such that the highest power of p that divides a1 − a0 is minimum.
a2 is any arbitrary element of S such that the highest power of p that divides (a2 − a0)(a2 − a1) is minimum.
a3 is any arbitrary element of S such that the highest power of p that divides (a3 − a0)(a3 − a1)(a3 − a2) is minimum.
... and so on.
Construct a p-ordering of S for each prime number p. (For a given prime number p, the p-ordering of S is not unique.)
For each non-negative integer k, let vk(S, p) be the highest power of p that divides (ak − a0)(ak − a1)(ak − a2) ... (ak − ak − 1). The sequence {v0(S, p), v1(S, p), v2(S, p), v3(S, p), ... } is called the associated p-sequence of S. This is independent of any particular choice of p-ordering of S. (We assume that v0(S, p) = 1 always.)
The factorial of the integer k, associated with the infinite set S, is defined as k!S = ∏p vk(S, p), where the product is taken over all prime numbers p.
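The construction above is effective, and a direct (if naive) implementation is possible. The following sketch is our own illustration: truncating S to a finite window and the prime product to p < 100 are assumptions that are adequate only for small k, and the helper names are hypothetical:

```python
def val(n, p):
    """Exponent of the highest power of p dividing n (n nonzero)."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def vk(S, p, k):
    """v_k(S, p) via a greedy p-ordering of the finite list S."""
    order = [S[0]]        # a_0 is arbitrary
    rest = list(S[1:])
    e = 0
    for _ in range(k):
        # a_i minimises the power of p dividing the product of differences
        best = min(rest, key=lambda a: sum(val(a - b, p) for b in order))
        e = sum(val(best - b, p) for b in order)
        order.append(best)
        rest.remove(best)
    return p ** e

def bhargava_factorial(S, k, prime_bound=100):
    primes = [p for p in range(2, prime_bound)
              if all(p % q for q in range(2, p))]
    out = 1
    for p in primes:
        out *= vk(S, p, k)
    return out

# Sanity checks against the examples that follow: S = Z reproduces the
# ordinary factorial, and S = the set of primes gives 1, 1, 2, 24, 48, 5760.
Z = list(range(-60, 61))
P = [p for p in range(2, 400) if all(p % q for q in range(2, p))]
print([bhargava_factorial(Z, k) for k in range(7)])  # 1 1 2 6 24 120 720
print([bhargava_factorial(P, k) for k in range(6)])  # 1 1 2 24 48 5760
```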
Example: Factorials using set of prime numbers
Let S be the set of all prime numbers P = {2, 3, 5, 7, 11, ... }.
Choose p = 2 and form a p-ordering of P.
Choose a0 = 19 arbitrarily from P.
To choose a1:
The highest power of p that divides 2 − a0 = −17 is 2^0 = 1. Also, for any a ≠ 2 in P, a − a0 is divisible by 2. Hence, the highest power of p that divides (a1 − a0) is minimum when a1 = 2, and the minimum value is 2^0 = 1. Thus a1 is chosen as 2 and v1(P, 2) = 1.
To choose a2:
It can be seen that for each element a in P, the product x = (a − a0)(a − a1) = (a − 19)(a − 2) is divisible by 2. Also, when a = 5, x is divisible by 2 and it is not divisible by any higher power of 2. So, a2 may be chosen as 5. We have v2(P, 2) = 2.
To choose a3:
It can be seen that for each element a in P, the product x = (a − a0)(a − a1)(a − a2) = (a − 19)(a − 2)(a − 5) is divisible by 2^3 = 8. Also, when a = 17, x is divisible by 8 and it is not divisible by any higher power of 2. Choose a3 = 17. Also we have v3(P, 2) = 8.
To choose a4:
It can be seen that for each element a in P, the product x = (a − a0)(a − a1)(a − a2)(a − a3) = (a − 19)(a − 2)(a − 5)(a − 17) is divisible by 2^4 = 16. Also, when a = 23, x is divisible by 16 and it is not divisible by any higher power of 2. Choose a4 = 23. Also we have v4(P, 2) = 16.
To choose a5:
It can be seen that for each element a in P, the product x = (a − a0)(a − a1)(a − a2)(a − a3)(a − a4) = (a − 19)(a − 2)(a − 5)(a − 17)(a − 23) is divisible by 2^7 = 128. Also, when a = 31, x is divisible by 128 and it is not divisible by any higher power of 2. Choose a5 = 31. Also we have v5(P, 2) = 128.
The process is continued. Thus a 2-ordering of P is {19, 2, 5, 17, 23, 31, ... } and the associated 2-sequence is {1, 1, 2, 8, 16, 128, ... }, assuming that v0(P, 2) = 1.
For p = 3, one possible p-ordering of P is the sequence {2, 3, 7, 5, 13, 17, 19, ... } and the associated p-sequence of P is {1, 1, 1, 3, 3, 9, ... }.
For p = 5, one possible p-ordering of P is the sequence {2, 3, 5, 19, 11, 7, 13, ... } and the associated p-sequence is {1, 1, 1, 1, 1, 5, ...}.
It can be shown that for p ≥ 7, the first few elements of the associated p-sequences are {1, 1, 1, 1, 1, 1, ... }.
The first few factorials associated with the set of prime numbers are obtained as follows: 0!P = 1, 1!P = 1, 2!P = 2, 3!P = 8 × 3 = 24, 4!P = 16 × 3 = 48 and 5!P = 128 × 9 × 5 = 5760.
Table of values of vk(P, p) and k!P
Example: Factorials using the set of natural numbers
Let S be the set of natural numbers {0, 1, 2, 3, ... }.
For p = 2, the associated p-sequence is {1, 1, 2, 2, 8, 8, 16, 16, 128, 128, 256, 256, ... }.
For p = 3, the associated p-sequence is {1, 1, 1, 3, 3, 3, 9, 9, 9, 27, 27, 27, 81, 81, 81, ... }.
For p = 5, the associated p-sequence is {1, 1, 1, 1, 1, 5, 5, 5, 5, 5, 25, 25, 25, 25, 25, ... }.
For p = 7, the associated p-sequence is {1, 1, 1, 1, 1, 1, 1, 7, 7, 7, 7, 7, 7, 7, ... }.
... and so on.
Thus the first few factorials using the natural numbers are
0! = 1×1×1×1×1×... = 1.
1! = 1×1×1×1×1×... = 1.
2! = 2×1×1×1×1×... = 2.
3! = 2×3×1×1×1×... = 6.
4! = 8×3×1×1×1×... = 24.
5! = 8×3×5×1×1×... = 120.
6! = 16×9×5×1×1×... = 720.
Examples: Some general expressions
The following table contains the general expressions for k!S for some special cases of S.
References
Combinatorics
Factorial and binomial topics | Bhargava factorial | [
"Mathematics"
] | 2,128 | [
"Discrete mathematics",
"Factorial and binomial topics",
"Combinatorics"
] |
48,497,362 | https://en.wikipedia.org/wiki/Gorenstein%20scheme | In algebraic geometry, a Gorenstein scheme is a locally Noetherian scheme whose local rings are all Gorenstein. The canonical line bundle is defined for any Gorenstein scheme over a field, and its properties are much the same as in the special case of smooth schemes.
Related properties
For a Gorenstein scheme X of finite type over a field, f: X → Spec(k), the dualizing complex f!(k) on X is a line bundle (called the canonical bundle KX), viewed as a complex in degree −dim(X). If X is smooth of dimension n over k, the canonical bundle KX can be identified with the line bundle Ωn of top-degree differential forms.
Using the canonical bundle, Serre duality takes the same form for Gorenstein schemes as it does for smooth schemes.
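For concreteness, one hedged formulation (an illustration following the smooth case, for X projective over k, Gorenstein of pure dimension n, and F locally free) reads, in LaTeX:

```latex
% Serre duality on a projective Gorenstein k-scheme X of pure dimension n,
% for a locally free sheaf F, canonical bundle K_X, and k-linear dual (-)^*:
\[
  H^{i}(X, F) \;\cong\; H^{n-i}\!\left(X,\, F^{\vee} \otimes K_X \right)^{*},
  \qquad 0 \le i \le n .
\]
```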
Let X be a normal scheme of finite type over a field k. Then X is regular outside a closed subset of codimension at least 2. Let U be the open subset where X is regular; then the canonical bundle KU is a line bundle. The restriction from the divisor class group Cl(X) to Cl(U) is an isomorphism, and (since U is smooth) Cl(U) can be identified with the Picard group Pic(U). As a result, KU defines a linear equivalence class of Weil divisors on X. Any such divisor is called the canonical divisor KX. For a normal scheme X, the canonical divisor KX is said to be Q-Cartier if some positive multiple of the Weil divisor KX is Cartier. (This property does not depend on the choice of Weil divisor in its linear equivalence class.) Alternatively, normal schemes X with KX Q-Cartier are sometimes said to be Q-Gorenstein.
It is also useful to consider the normal schemes X for which the canonical divisor KX is Cartier. Such a scheme is sometimes said to be Q-Gorenstein of index 1. (Some authors use "Gorenstein" for this property, but that can lead to confusion.) A normal scheme X is Gorenstein (as defined above) if and only if KX is Cartier and X is Cohen–Macaulay.
Examples
An algebraic variety with local complete intersection singularities, for example any hypersurface in a smooth variety, is Gorenstein.
A variety X with quotient singularities over a field of characteristic zero is Cohen–Macaulay, and KX is Q-Cartier. The quotient variety of a vector space V by a linear action of a finite group G is Gorenstein if G maps into the subgroup SL(V) of linear transformations of determinant 1. By contrast, if X is the quotient of C2 by the cyclic group of order n acting by scalars, then KX is not Cartier (and so X is not Gorenstein) for n ≥ 3.
Generalizing the previous example, every variety X with klt (Kawamata log terminal) singularities over a field of characteristic zero is Cohen–Macaulay, and KX is Q-Cartier.
If a variety X has log canonical singularities, then KX is Q-Cartier, but X need not be Cohen–Macaulay. For example, any affine cone X over an abelian variety Y is log canonical, and KX is Cartier, but X is not Cohen–Macaulay when Y has dimension at least 2.
Notes
References
External links
Algebraic geometry
Algebraic varieties
Scheme theory | Gorenstein scheme | [
"Mathematics"
] | 738 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
48,497,658 | https://en.wikipedia.org/wiki/Geophysical%20signal%20analysis | Geophysical signal analysis is concerned with the detection and subsequent processing of signals. Any varying signal conveys valuable information, so to understand the information embedded in such signals we need to detect and extract data from such quantities. Geophysical signals are of extreme importance to us as they are information-bearing signals which carry data related to petroleum deposits beneath the surface as well as seismic data. Analysis of geophysical signals also offers a qualitative insight into the possibility of natural calamities such as earthquakes or volcanic eruptions.
Gravitational and magnetic fields are detected using extremely sensitive gravimeters and magnetometers respectively. Changes in the gravitational field are measured using devices such as atom interferometers. A superconducting quantum interference device (SQUID) is an extremely sensitive device which measures minute changes in the magnetic field. After detection, the data from these signals is extracted by performing spectral analysis, filtering and beamforming. These techniques can be used in oil exploration to estimate the position of underground objects and in harnessing geothermal energy.
Background
The position of underground objects can be determined by measuring the gradient in Earth's gravitational field. It is known that an object of larger mass "attracts" objects of considerably smaller mass. This force of attraction is explained by understanding the following topics.
Spatial and temporal frequency
Temporal frequency is the number of occurrences of an event in unit "time"; it is defined relative to time. The frequency of a wave, for example, can be X cycles per second. Spatial frequency, on the other hand, is the characteristic of any entity that varies periodically in space.
Digitizing in time and space domain
Digitizing of any signal has two aspects: "digitizing in the time domain" and "digitizing in the space domain". These concepts pertain to signals varying in space, time or both.
Time domain digitization is the process of measuring the amplitude of signal in discrete time intervals.
Space domain digitization is the process of measuring the amplitude of signal in discrete spatial domain. Ex: Measuring intensity of electromagnetic field at various spatial intervals.
Tensor
To explain the concept of a tensor, consider the definition of a vector: "A vector is a quantity having both magnitude and direction. Vectors are tensors of rank 1." There is only one basis vector for each component. Ex: Velocity is represented as Ai + Bj + Ck where i, j, k are unit vectors in the x, y, z directions respectively. We can see that there is a one-to-one mapping between the basis vectors and the components.
A tensor, on the other hand, has rank greater than one. The gravitational field gradient is an example of a rank-2 tensor.
The components of the gravitational field gradient fully characterize the gravitational forces acting on a body. They can be represented in matrix form as the symmetric 3 × 3 tensor Γ with entries Γij = ∂²U/(∂xi∂xj), where U is the gravitational potential.
Now that we are familiar with the concepts of gravity and tensors, a qualitative discussion of gravity and its significance in geophysical analysis can be made. A certain mass distribution creates a gravitational force field around it; in other words, the object under consideration has a finite mass M and hence bends the space around it. The gravitational field gradient is given by the gradient of the gravitational field vector, i.e. by the second spatial derivatives of the gravitational potential.
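As an illustration of these components, the gravity gradient tensor of a point mass M at the origin can be written in closed form as Γij = GM(3xixj − r²δij)/r⁵. The following sketch (with made-up values for M and the observation point; nothing here is specific to a real survey) evaluates it:

```python
import numpy as np

G = 6.674e-11                    # gravitational constant, m^3 kg^-1 s^-2
M = 1.0e9                        # assumed mass of a buried body, kg
x = np.array([50.0, 0.0, 30.0])  # assumed observation point, m

r = np.linalg.norm(x)
# Gamma_ij = dg_i/dx_j for the point-mass field g = -G*M*x/r^3
Gamma = G * M * (3.0 * np.outer(x, x) - r**2 * np.eye(3)) / r**5
print(Gamma)                     # symmetric and trace-free
```

The tensor is symmetric and trace-free (its trace vanishes by Laplace's equation in empty space), so only five of its nine components are independent; these are what a gradiometer survey actually maps.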
Existing approaches in geophysical signal recognition and analysis
Estimating the positions of the underground objects by measuring gravitational measurements
The method discussed here assumes that the mass distribution of the underground objects of interest is already known, and hence the problem of estimating their location boils down to parametric localisation. Say underground objects with centers of mass CM1, CM2, ..., CMn are located at positions p1, p2, ..., pn. The gravity gradient (the components of the gravity field) is measured using a spinning wheel with accelerometers, also called a gravity gradiometer. The instrument is positioned in different orientations to measure the respective components of the gravitational field. The values of the gravitational gradient tensor are calculated and analyzed. The analysis includes observing the contribution of each object under consideration. A maximum likelihood procedure is followed and the Cramér–Rao bound is computed to assess the quality of the location estimate.
Measurement of Earth’s magnetic fields
Magnetometers are used to measure magnetic fields and magnetic anomalies in the earth. The required sensitivity depends on the application; for example, variations in the geomagnetic field can be of the order of several aT (1 aT = 10⁻¹⁸ T). In such cases, specialized magnetometers such as the superconducting quantum interference device (SQUID) are used.
Jim Zimmerman co-developed the superconducting quantum interference device during his tenure at the Ford research lab. The events leading to the invention of the SQUID were, in fact, serendipitous: John Lambe, during his experiments on nuclear magnetic resonance, noticed that the electrical properties of indium varied due to changes in the magnetic field of the order of a few nT, but was not able to fully recognise the utility of this observation.
SQUIDs can detect magnetic fields of extremely low magnitude by virtue of their Josephson junctions. Jim Zimmerman pioneered the development of the SQUID by proposing a new approach to making Josephson junctions: he used niobium wires and niobium ribbons to form two Josephson junctions connected in parallel, with the ribbons acting as interruptions to the superconducting current flowing through the wires. The junctions are very sensitive to magnetic fields and hence are useful for measuring fields of the order of 10⁻¹⁸ T.
Measurement of seismic waves
Background
The motion of any mass is affected by gravitational fields: the motion of the planets is governed by the Sun's enormous gravitational field, and likewise a heavier object influences the motion of objects of smaller mass in its vicinity. This influence, however, is minute compared to the motion of heavenly bodies, and special instruments are required to measure such a small change.
Atom interferometer
Atom interferometers work on the principle of diffraction. The diffraction gratings are nano-fabricated materials with a separation of a quarter wavelength of light. When a beam of atoms passes through a diffraction grating, the atoms, due to their inherent wave nature, split and form interference fringes on a screen. An atom interferometer is very sensitive to changes in the positions of atoms; as a heavier object shifts the position of the atoms nearby, the displacement of the atoms can be measured by detecting the shift in the interference fringes.
Analysis of geophysical signals
Any signal conveys information in two ways:
Temporal and spatial variation of the data
Frequency variation of the data.
Spectral analysis
Fourier representation
The Fourier expansion of a time-domain signal is the representation of the signal as a sum of its frequency components, specifically a sum of sines and cosines. Joseph Fourier devised this representation to estimate the heat distribution of a body. The same approach can be followed to analyse multi-dimensional signals such as electromagnetic waves.
The 4-D Fourier representation of such signals is given by:
S(k, ω) = ∫∫ s(x, t) exp[−j(ωt − k′x)] dx dt
where ω represents temporal frequency and k represents spatial frequency (wavenumber).
s(x, t) is a four-dimensional space–time signal, which can be imagined as a superposition of travelling plane waves; for such plane waves, the planes of constant phase are perpendicular to the direction of propagation.
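A two-dimensional special case (one spatial axis, one time axis) makes the idea concrete: a 2-D FFT of a sensor-line recording maps it into the wavenumber–frequency domain, where the spectral peak of a plane wave reveals its propagation speed. This is a self-contained sketch with made-up sampling parameters:

```python
import numpy as np

# Synthetic space-time record: a plane wave sampled on a line of sensors.
nx, nt = 64, 256
dx, dt = 10.0, 0.01            # sensor spacing [m], sample interval [s]
c, f0 = 2000.0, 25.0           # propagation speed [m/s], frequency [Hz]
x = np.arange(nx) * dx
t = np.arange(nt) * dt
k0 = 2 * np.pi * f0 / c        # wavenumber of the plane wave [rad/m]
s = np.cos(2 * np.pi * f0 * t[None, :] - k0 * x[:, None])

# The 2-D FFT maps the record into the wavenumber-frequency (k, omega) domain.
S = np.fft.fftshift(np.fft.fft2(s))
k = np.fft.fftshift(np.fft.fftfreq(nx, dx)) * 2 * np.pi
w = np.fft.fftshift(np.fft.fftfreq(nt, dt)) * 2 * np.pi

# The spectral peak sits on the line omega = c*k; its slope gives the speed.
ik, iw = np.unravel_index(np.argmax(np.abs(S)), S.shape)
print(abs(w[iw] / k[ik]))      # ~2000 m/s
```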
Filtering
Simply put, the space–time filtering problem can be thought of as localizing the speed and direction of a particular signal. The design of filters for space–time signals follows an approach similar to that for 1-D signals: if the filter must extract frequency components in a particular non-zero range of frequencies, a band-pass filter with appropriate passband and stopband frequencies is determined. Similarly, in the case of multi-dimensional systems, the wavenumber–frequency response of a filter is designed to be unity in the desired region of (k, ω) space and zero elsewhere.
Beamforming
This approach is applied to filtering space–time signals and is designed to isolate signals travelling in a particular direction. One of the simplest filters is the weighted delay-and-sum beamformer: its output is formed by averaging weighted and delayed versions of the receiver signals. The delays are chosen so that the passband of the beamformer is directed towards a specific direction in space.
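A minimal delay-and-sum sketch follows (assumed array geometry and sampling; fractional delays are applied as phase shifts in the frequency domain, which is one of several possible implementations):

```python
import numpy as np

def delay_and_sum(signals, dt, delays, weights=None):
    """Weighted delay-and-sum beamformer.

    signals: (n_sensors, n_samples) array of receiver traces
    delays:  per-sensor steering delays [s] that align a wavefront from the
             look direction so its copies add coherently.
    """
    n_sensors, n_samples = signals.shape
    if weights is None:
        weights = np.ones(n_sensors) / n_sensors
    freqs = np.fft.rfftfreq(n_samples, dt)
    spectra = np.fft.rfft(signals, axis=1)
    # Multiplying by e^{-j 2 pi f tau} delays each trace by tau seconds.
    shifted = spectra * np.exp(-2j * np.pi * freqs[None, :] * delays[:, None])
    return np.fft.irfft((weights[:, None] * shifted).sum(axis=0), n_samples)

# Steering delays for an 8-element linear array and a plane wave from 30 deg:
# tau_m = m * d * sin(theta) / c   (d: spacing [m], c: wave speed [m/s]).
d, c, theta = 10.0, 2000.0, np.deg2rad(30)
delays = np.arange(8) * d * np.sin(theta) / c
out = delay_and_sum(np.random.default_rng(0).standard_normal((8, 512)),
                    dt=1e-3, delays=delays)
```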
Upward continuation in analyzing magnetic fields
This method can be used to estimate the depth of magnetic materials beneath the earth. The magnetic data is processed by spectral methods such as Fourier transforms; the FFT algorithm makes this spectral analysis fast and efficient, and the results can be used to produce contour maps. Upward continuation attenuates the wavenumber anomalies associated with shallow magnetic sources, so the remaining signal components carry information about objects situated deep beneath the earth.
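For a gridded potential-field map, the standard wavenumber-domain operator for continuing the field upward by a height h is multiplication by exp(−|k|·h). Below is a minimal sketch, assuming evenly gridded data; the grid values here are random stand-ins for a real survey:

```python
import numpy as np

def upward_continue(field, dx, dy, h):
    """Continue a gridded potential-field map upward by height h.

    In the wavenumber domain the operator is exp(-|k|*h), which damps high
    wavenumbers, i.e. the anomalies produced by shallow sources.
    """
    ny, nx = field.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, dy)
    k = np.hypot(*np.meshgrid(kx, ky))   # |k| at every grid node
    return np.real(np.fft.ifft2(np.fft.fft2(field) * np.exp(-k * h)))

# Example: continue a 100 m-spaced magnetic grid 500 m upward.
grid = np.random.default_rng(1).standard_normal((128, 128))
smooth = upward_continue(grid, dx=100.0, dy=100.0, h=500.0)
```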
Applications
Geothermal energy mapping
Gravity data is collected through detailed geophysical surveys. The gravitational field intensity is measured using a gravimeter, and elevations also have to be measured to account for height corrections. The Bouguer anomalies in the gravity data are then analysed; the Bouguer anomaly is a correction of the gravitational data which takes into account the heights of the different terrains.
The Bouguer data is coupled with other magnetic and seismic measurements of the region. Together, this data is instrumental in revealing the tectonic and structural geology of the area.
After the data is obtained, some of the observations from the Bouguer data indicate the following:
The Bouguer anomaly data is directly related to the subsurface topography of the region.
The positive Bouguer anomalies indicate igneous intrusions in the sub-surface.
The gravity studies also indicate the presence of sub-surface aquifers. Hence, it can be inferred that water circulating in the region of igneous intrusions may be a source of geothermal energy.
Oil exploration
The process of oil exploration starts with finding a layer of impermeable substance under which oil may be trapped. Until a well is drilled, one cannot accurately determine the presence of oil; however, the efficient use of geophysical techniques can detect a layer beneath which oil may be trapped.
There are several approaches to detecting such "oil traps":
The presence of flowing oil can cause minute changes in the gravitational and magnetic fields of the earth. These small changes in gravitational and magnetic fields can be picked up by sensitive gravimeters and magnetometers respectively.
In another approach, shock waves are sent beneath the surface. As the waves travel through the earth, they are reflected by the various rock layers. Sensors mounted at the surface measure the arrival times of the reflected waves, and the presence or absence of oil can be ascertained by analyzing these readings. Seismic reflection techniques can provide reasonably accurate information over a large area.
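As a small illustration of the reflection timing (all values assumed): for a single horizontal reflector and an assumed average velocity v, the measured two-way travel time t gives the reflector depth as d = v·t/2:

```python
# Depth of a reflector from two-way travel time (single horizontal layer).
v = 2500.0  # assumed average P-wave velocity of the overburden [m/s]
t = 1.6     # measured two-way travel time [s]
depth = v * t / 2.0  # the wave travels down and back up
print(depth)  # 2000.0 m
```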
References
Further reading
Geophysics | Geophysical signal analysis | [
"Physics"
] | 2,314 | [
"Applied and interdisciplinary physics",
"Geophysics"
] |
48,500,538 | https://en.wikipedia.org/wiki/Besov%20measure | In mathematics — specifically, in the fields of probability theory and inverse problems — Besov measures and associated Besov-distributed random variables are generalisations of the notions of Gaussian measures and random variables, Laplace distributions, and other classical distributions. They are particularly useful in the study of inverse problems on function spaces for which a Gaussian Bayesian prior is an inappropriate model. The construction of a Besov measure is similar to the construction of a Besov space, hence the nomenclature.
Definitions
Let H be a separable Hilbert space of functions defined on a domain D ⊆ Rd, and let {en : n ∈ N} be a complete orthonormal basis for H. Let s ∈ R and 1 ≤ p < ∞. For u = Σn un en ∈ H, define
‖u‖Xs,p = ( Σn≥1 n^(sp/d + p/2 − 1) |un|^p )^(1/p). This defines a norm on the subspace of H for which it is finite, and we let Xs,p denote the completion of this subspace with respect to this new norm. The motivation for these definitions arises from the fact that ‖u‖Xs,p is equivalent to the norm of u in the Besov space Bspp(D).
Let κ > 0 be a scale parameter, similar to the precision (the reciprocal of the variance) of a Gaussian measure. We now define an Xs,p-valued random variable u by
u = Σn≥1 n^−(s/d + 1/2 − 1/p) κ^(−1/p) ξn en, where the ξn are sampled independently and identically from the generalized Gaussian measure on R with Lebesgue probability density function proportional to exp(−|x|^p/2). Informally, u can be said to have a probability density function proportional to exp(−κ‖u‖Xs,p^p/2) with respect to infinite-dimensional Lebesgue measure (which does not make rigorous sense), and is therefore a natural candidate for a "typical" element of Xs,p (although this is not quite true; see below).
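A small numerical sketch of drawing such a random function follows (not from the article: it truncates the series, works on D = (0, 1) with the L²-orthonormal sine basis, and the truncation level and parameter values are arbitrary choices):

```python
import numpy as np
from scipy.stats import gennorm

def sample_besov(n_terms, s, p, kappa, d=1, n_grid=512, seed=0):
    """Draw a truncated Besov-type random function on (0, 1).

    u = sum_n n^{-(s/d + 1/2 - 1/p)} * kappa^{-1/p} * xi_n * e_n,
    with xi_n i.i.d., density proportional to exp(-|x|^p / 2).
    """
    # scipy's gennorm(p) has density prop. to exp(-|x|^p); scaling the draws
    # by 2^{1/p} turns that into density prop. to exp(-|x|^p / 2).
    xi = 2.0 ** (1.0 / p) * gennorm.rvs(p, size=n_terms, random_state=seed)
    n = np.arange(1, n_terms + 1)
    coeffs = n ** (-(s / d + 0.5 - 1.0 / p)) * kappa ** (-1.0 / p) * xi
    x = np.linspace(0.0, 1.0, n_grid)
    basis = np.sqrt(2.0) * np.sin(np.pi * np.outer(n, x))  # orthonormal e_n
    return x, coeffs @ basis

x, u = sample_besov(n_terms=200, s=1.0, p=1.0, kappa=1.0)  # Laplace-like draw
```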
Properties
It is easy to show that, when t ≤ s, the Xt,p norm is finite whenever the Xs,p norm is. Therefore, the spaces Xs,p and Xt,p are nested for t ≤ s:
Xs,p ⊆ Xt,p.
This is consistent with the usual nesting of smoothness classes of functions f: D → R:
for example, the Sobolev space H2(D) is a subspace of H1(D) and in turn of the Lebesgue space L2(D) = H0(D); the Hölder space C1(D) of continuously differentiable functions is a subspace of the space C0(D) of continuous functions.
It can be shown that the series defining u converges in Xt,p almost surely for any t < s − d / p, and therefore gives a well-defined Xt,p-valued random variable. Note that Xt,p is a larger space than Xs,p, and in fact the random variable u is almost surely not in the smaller space Xs,p. The space Xs,p is rather the Cameron–Martin space of this probability measure in the Gaussian case p = 2. The random variable u is said to be Besov distributed with parameters (κ, s, p), and the induced probability measure is called a Besov measure.
See also
References
Inverse problems
Measures (measure theory)
Theory of probability distributions | Besov measure | [
"Physics",
"Mathematics"
] | 597 | [
"Physical quantities",
"Measures (measure theory)",
"Applied mathematics",
"Quantity",
"Size",
"Inverse problems"
] |
48,502,346 | https://en.wikipedia.org/wiki/MDGRAPE-4 | MDGRAPE-4 is a special-purpose supercomputer for molecular dynamics simulations under development at the RIKEN Quantitative Biology Center (QBiC) in Suita, Osaka, Japan.
See also
RIKEN MDGRAPE-3
References
Riken
Supercomputers
Supercomputing in Japan
molecular dynamics | MDGRAPE-4 | [
"Physics",
"Chemistry",
"Technology"
] | 58 | [
"Supercomputers",
"Molecular physics",
"Supercomputing",
"Computer book stubs",
"Computational physics",
"Molecular dynamics",
"Computational chemistry",
"Computing stubs"
] |
48,503,281 | https://en.wikipedia.org/wiki/Ultrasound%20computer%20tomography | Ultrasound computer tomography (USCT), sometimes also called ultrasound computed tomography, ultrasound computerized tomography or simply ultrasound tomography, is a form of medical ultrasound tomography utilizing ultrasound waves as the physical phenomenon for imaging. It is mostly used for soft tissue medical imaging, especially breast imaging.
Description
Ultrasound computer tomographs use ultrasound waves to create images. In the first measurement step, a defined ultrasound wave is generated, typically with piezoelectric ultrasound transducers, transmitted in the direction of the measurement object and received by other (or the same) ultrasound transducers. While traversing and interacting with the object, the ultrasound wave is changed by the object and then carries information about it. After being recorded, this information can be extracted from the modulated waves and used to create an image of the object in a second step. Unlike X-rays or other physical phenomena that typically provide only one type of information, ultrasound provides several types of information about the object: the attenuation the wave's sound pressure experiences indicates the object's attenuation coefficient, the time-of-flight of the wave gives speed-of-sound information, and the scattered wave indicates the echogenicity of the object (e.g. refraction index, surface morphology, etc.). Unlike conventional ultrasound sonography, which uses phased-array technology for beamforming, most USCT systems utilize unfocused spherical waves for imaging. Most USCT systems aim for 3D imaging, either by synthesizing ("stacking") 2D images or by full 3D aperture setups. Another aim is quantitative rather than merely qualitative imaging.
The idea of ultrasound computer tomography goes back to the 1950s with analogue compounding setups; in the mid-1970s the first "computed" USCT systems were built, utilizing digital technology. The "computer" in the USCT concept indicates the heavy reliance on computationally intensive digital signal processing, image reconstruction and image processing algorithms for imaging. The successful realization of USCT systems in recent decades was made possible by the continuously growing computing power and data bandwidth provided by the digital revolution.
Setup
USCT systems designed for medical imaging of soft tissue typically aim for resolutions on the order of centimeters to millimeters and therefore require ultrasound waves with frequencies on the order of megahertz. Water is typically required as a low-attenuation transmission medium between the ultrasound transducers and the object to retain suitable sound pressures.
USCT systems share with tomography in general the fundamental architectural feature that the aperture, i.e. the active imaging elements, surrounds the object. Multiple design approaches exist for distributing the ultrasound transducers around the measurement object to form the aperture; there are mono-, bi- and multistatic transducer configurations. Common are 1D or 2D linear arrays of ultrasound transducers acting as emitters on one side of the object, with a similar array acting as receiver placed on the opposing side, forming a parallel setup; this is sometimes accompanied by the ability to move the arrays to gather more information from additional angles. While cost-efficient to build, the main disadvantage of such a setup is its limited ability (or inability) to gather reflection information, as the aperture is restricted to transmission information. Another aperture approach is a ring of transducers, sometimes with motorized lifting for gathering additional information over the height for 3D imaging ("stacking"). Full 3D setups, with no inherent need for aperture movement, exist in the form of apertures formed by semi-spherically distributed transducers. While these are the most expensive setups, they offer the advantage of nearly uniform data gathered from many directions. They are also fast in data acquisition, as they do not require time-costly mechanical movements.
Imaging methods and algorithms
Tomographic reconstruction methods used in USCT systems for imaging based on transmission information are the classical inverse Radon transform and the Fourier slice theorem together with derived algorithms (cone beam etc.); ART-based approaches are utilized as advanced alternatives. For high-resolution reflection imaging with reduced speckle noise, synthetic aperture focusing techniques (SAFT), similar to radar's SAR and sonar's SAS, are widely used. Iterative wave-equation inversion approaches originating in seismology are under academic research, but their use in real-world applications remains a challenge due to the enormous computational and memory burden.
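On the transmission side, the following minimal sketch shows filtered back-projection (the inverse Radon transform) with scikit-image routines. It is a generic parallel-beam example rather than a USCT-specific pipeline: real USCT transmission data would first be converted into projection form (e.g. time-of-flight sinograms), and the phantom and angle count are arbitrary choices:

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# Stand-in object: the classic Shepp-Logan phantom, downscaled for speed.
image = rescale(shepp_logan_phantom(), 0.5)

# Forward model: parallel-beam projections over 180 degrees (a sinogram).
theta = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)
sinogram = radon(image, theta=theta)

# Reconstruction: filtered back-projection, i.e. the inverse Radon transform.
reconstruction = iradon(sinogram, theta=theta, filter_name='ramp')
print(np.sqrt(np.mean((reconstruction - image) ** 2)))  # reconstruction RMSE
```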
Application and usage
Many USCT systems are designed for soft-tissue imaging and specifically for breast cancer diagnosis. As an ultrasound-based method with low sound pressures, USCT is a harmless, risk-free imaging method suitable for periodic screening. As USCT setups are fixed or motor-moved without direct contact with the breast, image reproduction is easier than with common, manually guided methods (e.g. breast ultrasound) which rely on the individual examiner's performance and experience. In comparison with conventional screening methods like mammography, USCT systems potentially offer increased specificity for breast cancer detection, as multiple properties characteristic of breast cancer are imaged at the same time: speed of sound, attenuation and morphology.
See also
Medical ultrasound
Tomography
Ultrasound transmission tomography
Ultrasound-modulated optical tomography
References
Tomography
Medical ultrasonography
Acoustics
Medical equipment | Ultrasound computer tomography | [
"Physics",
"Biology"
] | 1,078 | [
"Medical equipment",
"Classical mechanics",
"Acoustics",
"Medical technology"
] |
48,503,953 | https://en.wikipedia.org/wiki/Sperm%20thermotaxis | Sperm thermotaxis is a form of sperm guidance, in which sperm cells (spermatozoa) actively change their swimming direction according to a temperature gradient, swimming up the gradient. Thus far this process has been discovered in mammals only.
Background
The discovery of mammalian sperm chemotaxis and the realization that it can guide spermatozoa for short distances only, estimated at the order of millimeters, triggered a search for potential long-range guidance mechanisms. The findings that, at least in rabbits and pigs, a temperature difference exists within the oviduct, and that in rabbits this difference is established at ovulation by a temperature drop in the oviduct near the junction with the uterus (creating a temperature gradient between the sperm storage site and the fertilization site in the oviduct), led to the investigation of whether mammalian spermatozoa can respond to a temperature gradient by thermotaxis.
Establishing sperm thermotaxis as an active process
Mammalian sperm thermotaxis has so far been demonstrated in three species: humans, rabbits, and mice. This was done by two methods. One involved a Zigmond chamber, modified to make the temperature in each well separately controllable and measurable. A linear temperature gradient was established between the wells and the swimming of spermatozoa in this gradient was analyzed. A small fraction of the spermatozoa (on the order of ~10%), shown to be the capacitated cells, biased their swimming direction according to the gradient, moving towards the warmer temperature. The other method involved a two- or three-compartment separation tube placed within a thermoseparation device that maintains a linear temperature gradient. Sperm accumulation at the warmer end of the separation tube was much higher than accumulation at the same temperature in the absence of a temperature gradient. This gradient-dependent sperm accumulation was observed over a wide temperature range (29–41 °C).
Since temperature affects almost every process, much attention has been devoted to the question of whether the measurements, mentioned just above, truly demonstrate thermotaxis or whether they reflect another temperature-dependent process. The most pronounced effect of temperature in liquid is convection, which raised the concern that the apparent thermotactic response could have been a reflection of a passive drift in the liquid current or a rheotactic response to the current (rather than to the temperature gradient per se). Another concern was that the temperature could have changed the local pH of the buffer solution in which the spermatozoa are suspended. This could generate a pH gradient along the temperature gradient, and the spermatozoa might have responded to the formed pH gradient by chemotaxis. However, careful experimental examinations of all these possibilities with proper controls demonstrated that the measured responses to temperature are true thermotactic responses and that they are not a reflection of any other temperature-sensitive process, including rheotaxis and chemotaxis.
Behavioral mechanism of mammalian sperm thermotaxis
The behavioral mechanism of sperm thermotaxis has been so far only investigated in human spermatozoa. Like the behavioral mechanisms of bacterial chemotaxis and human sperm chemotaxis, the behavioral mechanism of human sperm thermotaxis appears to be stochastic rather than deterministic. Capacitated human spermatozoa swim in rather straight lines interrupted by turns and brief episodes of hyperactivation. Each such episode results in swimming in a new direction. When the spermatozoa sense a decrease in temperature, the frequency of turns and hyperactivation events increases due to increased flagellar-wave amplitude that results in enhanced side-to-side head displacement. With time, this response undergoes partial adaptation. The opposite happens in response to an increase in temperature. This suggests that when capacitated spermatozoa swim up a temperature gradient, turns are repressed and the spermatozoa continue swimming in the gradient direction. When they happen to swim down the gradient, they turn again and again until their swimming direction is again up the gradient.
Temperature sensing
The response of spermatozoa to temporal temperature changes even when the temperature is kept spatially constant suggests that, as in the case of human sperm chemotaxis, sperm thermotaxis involves temporal gradient sensing. In other words, spermatozoa apparently compare the temperature (or a temperature-dependent function) between consecutive time points. This, however, does not exclude the occurrence of spatial temperature sensing in addition to temporal sensing. Human spermatozoa can respond thermotactically within a wide temperature range (at least 29–41 °C). Within this range they preferentially accumulate at warmer temperatures rather than at a single specific preferred temperature. Remarkably, they can sense and thermotactically respond to temperature gradients as shallow as <0.014 °C/mm. This means that when human spermatozoa swim a distance equal to their body length (~46 μm), they respond to a temperature difference of less than 0.0006 °C.
Molecular mechanism
The molecular mechanism underlying thermotaxis, in general, and thermosensing with such extreme sensitivity, in particular, is obscure. It is known that, unlike other recognized thermosensors in mammals, the thermosensors for sperm thermotaxis do not seem to be temperature-sensitive ion channels. They are rather opsins, known to be G-protein-coupled receptors that act as photosensors in vision. The opsins are present in spermatozoa at specific sites, which depend on the species and the opsin type. They are involved in sperm thermotaxis via two signaling pathways—a phospholipase C signaling pathway and a cyclic-nucleotide pathway. The former was shown by pharmacological means in human spermatozoa to involve the enzyme phospholipase C, an inositol trisphosphate receptor calcium channel located on internal calcium stores, the calcium channel TRPC3, and intracellular calcium. The latter was hitherto shown to involve phosphodiesterase. Blocking both pathways fully inhibits sperm thermotaxis.
References
External links
Semen
Cell biology | Sperm thermotaxis | [
"Biology"
] | 1,306 | [
"Cell biology"
] |
48,505,403 | https://en.wikipedia.org/wiki/LADOL | Lagos Deep Offshore Logistics Base (LADOL), officially LADOL Free Zone, also known as LADOL Base or the initials LFZ, is an industrial Free Zone privately owned logistics and engineering facility located on an island in the Port of Apapa, Lagos, Nigeria.
LADOL was designed to provide logistics, engineering and other support services to offshore oil & gas exploration and production companies operating in and around West Africa.
History
LADOL's developer, LiLe, began the construction of the logistics and engineering base in 2001 and commenced full operations in 2006. In June 2006, LADOL was designated as a Free Zone pursuant to the Nigeria Export Processing Zones Act No. 63 1992. Completed at a cost of US$150 million, LADOL's initial infrastructure included: a 200m quay with 8.5m draft, a 25-ton/m2 high-load-bearing area and additional 30-ton bollards at either end that can accommodate up to six supply vessels and three heavy-lift vessels; a hotel; a warehouse; an office complex; roads; water treatment; and underground reticulation.
In 2015, with the support of Total Upstream Nigeria Limited, LADOL was further expanded to include a new US$300 million Floating Production Storage and Offloading (FPSO) vessel fabrication and integration facility. The facility, currently operated by SHI-MCI FZE, an incorporated joint venture driven by Nigeria's Local Content initiative between Samsung Heavy Industries and LADOL's shipyard operator, Mega-Construction and Integration FZE, was initiated to fabricate and integrate the Total Egina FPSO in Nigeria, as well as other similar projects expected to be carried out in Africa.
The next phase of LADOL's expansion has been reported to include a dry dock that will be the largest in West Africa and attract as many as 100,000 direct and indirect jobs.
References
External links
Companies based in Lagos
Engineering companies of Nigeria
Foreign direct investment
Industrial parks
Manufacturing companies established in 2000
Nigerian brands
Nigerian companies established in 2000
Offshore engineering
Oilfield services companies
Planned industrial developments
Privately held companies of Nigeria
Shipyards of Africa
Special economic zones | LADOL | [
"Engineering"
] | 430 | [
"Construction",
"Offshore engineering"
] |
48,505,670 | https://en.wikipedia.org/wiki/Macintyre%27s%20X-Ray%20Film | Macintyre's X-Ray Film is an 1896 documentary radiography film directed by Scottish medical doctor John Macintyre.
The film shows X-ray images of a frog's knee joint and an X-ray radiograph of an adult's heart and digestive tract (using bismuth as contrast). Each image was captured in 1/300th of a second.
Text from the film's title card reads:
"First XRay Cinematograph ever taken, shown by Dr. Macintyre at the London Royal Society, 1897."
The title card between the footage of images of the heart and stomach reads:
"XRay Photograph of adult, each Picture taken in the 300th part of a second. A series of these enable us to see a complete cycle of the movements of the heart. The movements of the digestive organs can also be seen and the joints of the body thus facilitating diagnosis of diseases of the bones and joints."
References
John Macintyre universitystory.gla.ac.uk
Macintyre's X-Ray Film youtube.com
JOHN MACINTYRE gdl.cdlr.strath.ac.uk
1896 films
1896 short films
1890s short documentary films
Black-and-white documentary films
British short documentary films
British silent short films
X-rays
Articles containing video clips
British black-and-white films
Scottish documentary films | Macintyre's X-Ray Film | [
"Physics"
] | 282 | [
"X-rays",
"Spectrum (physical sciences)",
"Electromagnetic spectrum"
] |
45,473,832 | https://en.wikipedia.org/wiki/Beyond%20CMOS | Beyond CMOS refers to possible future digital logic technologies beyond the scaling limits of CMOS technology, which limit device density and speed due to heating effects.
Beyond CMOS is the name of one of the 7 focus groups in ITRS 2.0 (2013) and in its successor, the International Roadmap for Devices and Systems.
CPUs using CMOS have been released since 1986 (e.g. the 12 MHz Intel 80386). As CMOS transistor dimensions were shrunk, clock speeds also increased. Since about 2004, CMOS CPU clock speeds have leveled off at about 3.5 GHz.
CMOS device sizes continue to shrink – see Intel's process–architecture–optimization model (and the older tick–tock model) and the ITRS:
22 nanometer Ivy Bridge in 2012
first 14 nanometer processors shipped in Q4 2014.
In May 2015, Samsung Electronics showed a 300 mm wafer of 10 nanometer FinFET chips.
It is not yet clear if CMOS transistors will still work below 3 nm. See 3 nanometer.
Comparisons of technology
Around 2010, the Nanoelectronics Research Initiative (NRI) studied various circuits in various technologies.
Nikonov benchmarked (theoretically) many technologies in 2012, and updated the comparison in 2014. The 2014 benchmarking included 11 electronic, 8 spintronic, 3 orbitronic, 2 ferroelectric, and 1 straintronic technology.
The 2015 ITRS 2.0 report included a detailed chapter on Beyond CMOS, covering RAM and logic gates.
Some areas of investigation
Magneto-Electric Spin-Orbit logic
tunnel junction devices, e.g. Tunnel field-effect transistor
indium antimonide transistors
carbon nanotube FET, e.g. CNT Tunnel field-effect transistor
graphene nanoribbons
molecular electronics
spintronics — many variants
future low-energy electronics technologies, ultra-low dissipation conduction paths, including
topological materials
exciton superfluids
photonics and optical computing
superconducting computing
rapid single-flux quantum (RSFQ)
Superconducting computing and RSFQ
Superconducting computing includes several beyond-CMOS technologies that use superconducting devices, namely Josephson junctions, for electronic signals processing and computing. One variant called rapid single-flux quantum (RSFQ) logic was considered promising by the NSA in a 2005 technology survey despite the drawback that available superconductors require cryogenic temperatures. More energy-efficient superconducting logic variants have been developed since 2005 and are being considered for use in large scale computing.
See also
International Technology Roadmap for Semiconductors
International Roadmap for Devices and Systems
Moore's law
MOSFET scaling
Nanostrain, a project to characterise piezoelectric materials for low power switches
S-PULSE, the EU Shrink-Path of Ultra-Low Power Superconducting Electronics initiative
Probabilistic complementary metal-oxide semiconductor (PCMOS)
References
Further reading
Nikonov, Dmitri E.; Young, Ian A. (December 2013). "Overview of Beyond-CMOS Devices and a Uniform Methodology for Their Benchmarking". Proceedings of the IEEE. 101 (12): 2498–2533. doi:10.1109/jproc.2013.2252317. ISSN 0018-9219.
Seabaugh, A.C. and Zhang, Q., 2010. Low-voltage tunnel transistors for beyond CMOS logic. Proceedings of the IEEE, 98(12), pp. 2095–2110.
Bernstein, K., Cavin, R.K., Porod, W., Seabaugh, A. and Welser, J., 2010. Device and architecture outlook for beyond CMOS switches. Proceedings of the IEEE, 98(12), pp. 2169–2184.
Manipatruni, S., Nikonov, D.E. and Young, I.A., 2018. Beyond CMOS computing with spin and polarization. Nature Physics, 14(4), pp. 338–343.
Banerjee, S.K., Register, L.F., Tutuc, E., Basu, D., Kim, S., Reddy, D. and MacDonald, A.H., 2010. Graphene for CMOS and beyond CMOS applications. Proceedings of the IEEE, 98(12), pp. 2032–2046.
Topaloglu, R.O. and Wong, H.S.P. eds., 2019. Beyond-CMOS technologies for next generation computer design. Berlin/Heidelberg, Germany: Springer.
Manipatruni, S., Nikonov, D.E., Lin, C.C., Gosavi, T.A., Liu, H., Prasad, B., Huang, Y.L., Bonturim, E., Ramesh, R. and Young, I.A., 2019. Scalable energy-efficient magnetoelectric spin–orbit logic. Nature, 565(7737), pp. 35–42.
External links
ITRS 2013 edition
EMERGING RESEARCH DEVICES SUMMARY
Process Integration, Devices and structures summary
Electronic design
Digital electronics
Logic families
Integrated circuits | Beyond CMOS | [
"Technology",
"Engineering"
] | 1,104 | [
"Computer engineering",
"Digital electronics",
"Electronic design",
"Electronic engineering",
"Design",
"Integrated circuits"
] |
45,478,390 | https://en.wikipedia.org/wiki/Junctionless%20nanowire%20transistor | A junctionless nanowire transistor (JLNT) is a type of field-effect transistor (FET) in which the channel consists of one or more nanowires and which does not contain any junctions.
Existing devices
Multiple JLNT devices have been manufactured in various labs:
Tyndall National Institute in Ireland
The JLNT is a nanowire-based transistor that has no gate junction. (Even the MOSFET has a gate junction, although its gate is electrically insulated from the controlled region.) Junctions are difficult to fabricate and, because they are a significant source of current leakage, they waste significant power and generate heat. Eliminating them held the promise of cheaper and denser microchips. The JLNT uses a simple nanowire of silicon surrounded by an electrically isolated "wedding ring" that acts to gate the flow of electrons through the wire. This method has been described as akin to squeezing a garden hose to control the flow of water through it. The nanowire is heavily n-doped, making it an excellent conductor. Crucially, the gate, comprising silicon, is heavily p-doped; its presence depletes the underlying silicon nanowire, thereby preventing carrier flow past the gate.
LAAS
A Junction-Less Vertical Nano-Wire FET (JLVNFET) manufacturing process was developed at the Laboratory for Analysis and Architecture of Systems (LAAS).
Electrical Behaviour
Thus the device is turned off not by a reverse bias voltage applied to the gate, as in a conventional MOSFET, but by full depletion of the channel. This depletion is caused by the work-function difference (contact potential) between the gate material and the doped silicon in the nanowire.
The JLNT uses bulk conduction instead of surface-channel conduction. The current drive is controlled by the doping concentration and not by the gate capacitance.
Germanium nanowires have also been used instead of silicon.
References
Junctionless Nanowire Transistor: Properties and Device Guidelines
Ferain Junctionless Transistors (pdf)
Transistor types
Nanoelectronics | Junctionless nanowire transistor | [
"Materials_science"
] | 431 | [
"Nanotechnology",
"Nanoelectronics"
] |
40,774,061 | https://en.wikipedia.org/wiki/NGC%204527 | NGC 4527 is a spiral galaxy in the constellation Virgo. It was discovered by German-British astronomer William Herschel on 23 February 1784.
NGC 4527 is a member of the M61 Group of galaxies, which is a member of the Virgo II Groups, a series of galaxies and galaxy clusters strung out from the southern edge of the Virgo Supercluster.
Characteristics
NGC 4527 is an intermediate spiral galaxy similar to the Andromeda Galaxy. Its distance is not well determined, but it is usually considered an outlying member of the Virgo Cluster of galaxies, placed within the subcluster known as the S Cloud.
Unlike the Andromeda Galaxy, NGC 4527 is also a starburst galaxy, with 2.5 billion solar masses of molecular hydrogen concentrated within its innermost regions. However, the starburst is still weak and seems to be in its earliest phases.
Supernovae
Three supernovae have been observed in NGC 4527:
Harlow Shapley discovered SN 1915A (type unknown, mag. 15.5) on 20 March 1915.
Several astronomers reported the discovery of SN 1991T (type Ia-pec, mag. 13) on 13 April 1991.
SN 2004gn (type Ic, mag. 16.6) was discovered on 1 December 2004 by the Lick Observatory Supernova Search (LOSS).
See also
List of NGC objects (4001–5000)
References
External links
Intermediate spiral galaxies
Virgo Cluster
4527
Virgo (constellation)
041789
07721
12315+0255
+01-32-101
17840223
Discoveries by William Herschel | NGC 4527 | [
"Astronomy"
] | 338 | [
"Virgo (constellation)",
"Constellations"
] |
40,774,899 | https://en.wikipedia.org/wiki/Homotopy%20colimit%20and%20limit | In mathematics, especially in algebraic topology, the homotopy limit and colimit are variants of the notions of limit and colimit extended to the homotopy category Ho(Top). The main idea is this: if we have a diagram F : I → Top, considered as an object in the homotopy category of diagrams Ho(Top^I) (where the homotopy equivalence of diagrams is considered pointwise), then the homotopy limit and colimit correspond to the cone and cocone, which are objects in the homotopy category Ho(Top^*), where * is the category with one object and one morphism. Note that this category is equivalent to the standard homotopy category Ho(Top), since a functor * → Top picks out an object of Top, and a natural transformation between two such functors corresponds to a continuous map of topological spaces. This construction can be generalized to model categories, which give techniques for constructing homotopy limits and colimits in terms of other homotopy categories, such as derived categories. Another perspective formalizing these kinds of constructions is the theory of derivators, a new framework for homotopical algebra.
Introductory examples
Homotopy pushout
The concept of homotopy colimit is a generalization of homotopy pushouts, such as the mapping cylinder used to define a cofibration. This notion is motivated by the following observation: the (ordinary) pushout of the diagram * ← S^(n−1) → D^n
is the space obtained by contracting the (n−1)-sphere (which is the boundary of the n-dimensional disk) to a single point. This space is homeomorphic to the n-sphere S^n. On the other hand, the pushout of the diagram * ← S^(n−1) → *
is a point. Therefore, even though the (contractible) disk D^n was replaced by a point (which is homotopy equivalent to the disk), the two pushouts are not homotopy (or weakly) equivalent.
Therefore, the pushout is not well-aligned with a principle of homotopy theory, which considers weakly equivalent spaces as carrying the same information: if one (or more) of the spaces used to form the pushout is replaced by a weakly equivalent space, the pushout is not guaranteed to stay weakly equivalent. The homotopy pushout rectifies this defect.
The homotopy pushout of two maps f : B → A and g : B → C of topological spaces is defined as
A ⊔ (B × [0, 1]) ⊔ C / ∼, with (b, 0) ∼ f(b) and (b, 1) ∼ g(b),
i.e., instead of gluing B directly to both A and C, a cylinder on B is glued at its two ends to A and C.
For example, the homotopy colimit of the diagram A ← A × B → B (whose maps are projections)
is the join A ∗ B.
It can be shown that the homotopy pushout does not share the defect of the ordinary pushout: replacing A, B and/or C by a homotopy equivalent space, the homotopy pushout stays homotopy equivalent as well. In this sense, the homotopy pushout treats homotopy equivalent spaces as well as the (ordinary) pushout does homeomorphic spaces.
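A standard worked special case (not spelled out in the original text): taking A = C = * in the double mapping cylinder collapses the two ends of the cylinder, which yields the (unreduced) suspension:

```latex
\[
\ast \;\cup_B^h\; \ast
\;=\; \bigl(B \times [0,1]\bigr) \big/ \bigl(B \times \{0\},\; B \times \{1\}\bigr)
\;=\; \Sigma B,
\qquad \text{e.g. } \Sigma S^{n-1} \cong S^{n},
\]
```

which recovers the sphere from the motivating example above.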
Composition of maps
Another useful and motivating example of a homotopy colimit is the construction of models for the homotopy colimit of a composition diagram of topological spaces, i.e. of a pair of composable maps X → Y → Z. One model for this colimit is the space obtained from the disjoint union of the mapping cylinders of X → Y and of Y → Z by gluing the two copies of Y via the evident equivalence relation (the original article illustrates this with pictures, which are omitted here). Because the diagram above can also be interpreted as a commutative diagram recording the composite map X → Z, general properties of categories yield a commutative diagram giving a homotopy colimit. One might guess that this simply adds a further mapping cylinder for the composite, but notice that a new cycle has been introduced to fill in the new data of the composition. This creates a technical problem, which can be solved using simplicial techniques; these give a general method for constructing models of homotopy colimits. The resulting diagram yields another model of the homotopy colimit, homotopy equivalent to the one given above (without the composite of the two maps).
Mapping telescope
The homotopy colimit of a sequence of spaces X1 → X2 → X3 → ⋯
is the mapping telescope. One example computation is taking the homotopy colimit of a sequence of cofibrations: for such a diagram, the ordinary colimit already gives a model of the homotopy colimit. This implies that the homotopy colimit of any mapping telescope can be computed by replacing the maps with cofibrations.
General definition
Homotopy limit
Treating examples such as the mapping telescope and the homotopy pushout on an equal footing can be achieved by considering an I-diagram of spaces, where I is some "indexing" category. This is a functor
X : I → Spaces, i.e., to each object i in I one assigns a space Xi, and maps between them according to the maps in I. The category of such diagrams is denoted Spaces^I.
There is a natural functor called the diagonal,
Δ : Spaces → Spaces^I, which sends any space X to the diagram consisting of X everywhere (and the identity of X as maps between them). In (ordinary) category theory, the right adjoint to this functor is the limit. The homotopy limit is defined by altering this situation: it is the right adjoint to
Δ′ : Spaces → Spaces^I, which sends a space X to the I-diagram which at an object i gives X^|N(I/i)|, the space of maps from |N(I/i)| to X.
Here I/i is the slice category (its objects are arrows j → i, where j is any object of I), N is the nerve of this category and |−| is the topological realization of this simplicial set.
Homotopy colimit
Similarly, one can define the colimit as the left adjoint to the diagonal functor Δ given above. To define a homotopy colimit, we must modify Δ in a different way. A homotopy colimit can be defined as the left adjoint to the functor Δ′ : Spaces → Spaces^I where
Δ′(X)(i) = X^|N(I^op/i)|,
where I^op is the opposite category of I. Although this is not the same as the functor Δ′ above, it shares the property that if the geometric realization of the nerve category |N(−)| is replaced by a point space, we recover the original functor Δ.
Examples
A homotopy pullback (or homotopy fiber-product) is the dual concept of a homotopy pushout. Concretely, given f : A → C and g : B → C, it can be constructed as
A ×^h_C B = {(a, γ, b) ∈ A × C^[0,1] × B : γ(0) = f(a), γ(1) = g(b)}.
For example, the homotopy fiber of f : X → Y over a point y is the homotopy pullback of f along the inclusion {y} → Y. The homotopy pullback of f along the identity is nothing but the mapping path space of f.
The universal property of a homotopy pullback yields the natural map A ×_C B → A ×^h_C B, a special case of a natural map from a limit to a homotopy limit. In the case of a homotopy fiber, this map is the inclusion of the fiber into the homotopy fiber.
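A standard worked instance (not from the original text, but a routine consequence of the formula above): pulling back two copies of the basepoint inclusion * → X against each other gives the based loop space, while leaving one leg free gives the contractible path space of the path fibration:

```latex
\[
\ast \times^{h}_{X} \ast
= \{\gamma : [0,1] \to X \;:\; \gamma(0) = \gamma(1) = x_0\}
= \Omega_{x_0} X,
\qquad
\ast \times^{h}_{X} X \simeq P_{x_0} X \simeq \ast .
\]
```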
Construction of colimits with simplicial replacements
Given a small category I and a diagram F : I → Top, we can construct the homotopy colimit using a simplicial replacement of the diagram. This is a simplicial space srep(F) whose space of n-simplices is the disjoint union of the spaces F(i_n), taken over all chains of n composable maps i_0 ← i_1 ← ⋯ ← i_n in the indexing category I. The homotopy colimit of F can then be constructed as the geometric realization of this simplicial space, hocolim F = |srep(F)|. Notice that this agrees with the picture given above for the composition diagram.
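In display form (a Bousfield–Kan-style simplicial replacement, with the coproduct indexed by n-chains of composable arrows in I; notation follows the sketch above):

```latex
\[
\operatorname{srep}(F)_n \;=\; \coprod_{i_0 \leftarrow i_1 \leftarrow \cdots \leftarrow i_n} F(i_n),
\qquad
\operatorname{hocolim}_{I} F \;\simeq\; \bigl|\operatorname{srep}(F)\bigr| .
\]
```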
Relation to the (ordinary) colimit and limit
There is always a map
hocolim F → colim F. Typically, this map is not a weak equivalence. For example, the homotopy pushout encountered above always maps to the ordinary pushout, and this map is typically not a weak equivalence: the join A ∗ B is not weakly equivalent to the pushout of the diagram A ← A × B → B, which is a point.
Further examples and applications
Just as the (inverse) limit is used to complete a ring (for example, the p-adic integers Zp are the inverse limit of the rings Z/pn), holim is used to complete a spectrum.
See also
Derivator
Homotopy fiber
Homotopy cofiber
Cohomology of categories
Spectral sequence of homotopy colimits
References
A Primer on Homotopy Colimits
Homotopy colimits in the category of small categories
Categories and Orbispaces
Further reading
Homotopy limit-colimit diagrams in stable model categories
pg.80 Homotopy Colimits and Limits
Homotopy theory
Category theory
Homotopical algebra | Homotopy colimit and limit | [
"Mathematics"
] | 1,669 | [
"Functions and mappings",
"Mathematical structures",
"Mathematical objects",
"Fields of abstract algebra",
"Mathematical relations",
"Category theory"
] |