id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
3,002,883 | https://en.wikipedia.org/wiki/MIDI%20Show%20Control | MIDI Show Control, or MSC, is a real-time System Exclusive extension of the international Musical Instrument Digital Interface (MIDI) standard. MSC enables all types of entertainment equipment to communicate with each other through the process of show control.
The MIDI Show Control protocol is a technical standard ratified by the MIDI Manufacturers Association in 1991 which allows entertainment control devices to talk with each other and with computers to perform show control functions in live and prerecorded entertainment applications. Just like musical MIDI, MSC does not transmit the actual show media - it simply transmits digital information about a multimedia performance.
How MSC works
When a cue is called by a user (typically a stage manager) or by a preprogrammed timeline in a show control software application, the show controller transmits one or more MSC messages from its 'MIDI Out' port. A typical MSC message conveys information such as the following (a byte-level sketch of one such message appears after this list):
the user has just called a cue
the cue is for lighting device 3
the cue is number 45.8
the cue is in cue list 7
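At the byte level, such a message is a Universal Real Time System Exclusive packet of the form F0 7F <device_ID> 02 <command_format> <command> <data> F7. The following sketch builds the message for the example above; the specific constants (0x01 for the lighting command format, 0x01 for the GO command, ASCII-encoded cue number and cue list separated by 0x00) are stated as assumptions to be checked against the published MSC specification rather than as a definitive implementation.

```python
def msc_go(device_id: int, cue: str, cue_list: str = "") -> bytes:
    """Assemble a (hypothetical) MIDI Show Control GO message as raw bytes."""
    SYSEX_START, SYSEX_END = 0xF0, 0xF7
    UNIVERSAL_REAL_TIME = 0x7F
    MSC_SUB_ID = 0x02
    LIGHTING_GENERAL = 0x01      # command format: lighting (assumed)
    GO = 0x01                    # command: GO (assumed)
    data = cue.encode("ascii")
    if cue_list:
        data += b"\x00" + cue_list.encode("ascii")
    return (bytes([SYSEX_START, UNIVERSAL_REAL_TIME, device_id,
                   MSC_SUB_ID, LIGHTING_GENERAL, GO])
            + data + bytes([SYSEX_END]))

# Lighting device 3, cue 45.8 in cue list 7:
print(msc_go(3, "45.8", "7").hex(" "))
# -> f0 7f 03 02 01 01 34 35 2e 38 00 37 f7
```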
MSC messages are serially transmitted in the same way as musical messages and are fully compatible with all conventional MIDI hardware; however, many modern MSC devices now use Ethernet communications for higher bandwidth and the flexibility afforded by networks. Other performance parameters are also transmitted, such as lighting desk submaster settings using MSC SET messages.
All cues that a media control device is capable of playing are assigned MSC messages within the Show Controller's cue list and they are transmitted from its 'MIDI Out' port at the appropriate show time, depending on the actions of the user and the show controller's internally timed sequences.
All MSC compatible instruments follow the MSC specification and thus transmit identical MSC messages for identical MSC events such as the playing of a certain cue on the media controller. Since they follow a published standard, all MSC devices can communicate with and understand each other, as well as with computers that have been programmed to understand MSC messages using the MSC Command Set. All MSC compatible instruments have a built-in MIDI interface and many now follow one of the various MIDI-over-Ethernet protocols.
History
To create the MSC spec, Charlie Richmond headed the USITT MIDI Forum on their Callboard Network in 1990, which included developers and designers from the theatre sound and lighting industry around the world. It is believed that this was the first international standard to be developed without a single physical meeting of the participants, and the full transcript of the discussion is available via External Links, below. The Forum created the MSC standard between January and September 1990. It was ratified by the MIDI Manufacturers Association (MMA) in January 1991 and by the Japan MIDI Standards Committee (JMSC) later that year, becoming a part of the standard MIDI specification in August 1991. The first show to fully use the MSC specification was the Magic Kingdom Parade at Walt Disney World's Magic Kingdom in September 1991.
MIDI Show Control software
See also
Sound design
Theatre lighting
References
External links
MIDI Manufacturers Association
MSC 1.0 PDF (still a valid subset, is now superseded by V1.1)
The Show Control Mailing List subscription page.
Creating the MIDI Show Control Spec on the USITT Callboard Forum in 1990-2
Computer file formats
Digital media
MIDI standards | MIDI Show Control | [
"Technology"
] | 639 | [
"Multimedia",
"Digital media"
] |
3,003,010 | https://en.wikipedia.org/wiki/Electroforming | Electroforming is a metal forming process in which parts are fabricated through electrodeposition on a model, known in the industry as a mandrel. Conductive (metallic) mandrels are treated to create a mechanical parting layer, or are chemically passivated to limit electroform adhesion to the mandrel and thereby allow its subsequent separation. Non-conductive (glass, silicon, plastic) mandrels require the deposition of a conductive layer prior to electrodeposition. Such layers can be deposited chemically, or using vacuum deposition techniques (e.g., gold sputtering). The outer surface of the mandrel forms the inner surface of the form.
The process involves passing direct current through an electrolyte containing salts of the metal being electroformed. The anode is the solid metal being electroformed, and the cathode is the mandrel, onto which the electroform gets plated (deposited). The process continues until the required electroform thickness is achieved. The mandrel is then either separated intact, melted away, or chemically dissolved.
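Because the deposit grows in proportion to the charge passed, Faraday's law of electrolysis gives a first-order estimate of how long a given thickness takes to build up. The sketch below estimates the thickness of an electroformed nickel layer from current, time and plated area; the 95% cathode efficiency and the example numbers are assumptions chosen for illustration only, not process guidance.

```python
# Faraday's-law estimate of electroformed nickel thickness (illustrative only).
FARADAY = 96485.0      # C/mol
M_NICKEL = 58.69       # g/mol
ELECTRONS = 2          # Ni2+ + 2e- -> Ni
DENSITY = 8.908        # g/cm^3

def nickel_thickness_um(current_a: float, hours: float,
                        area_cm2: float, efficiency: float = 0.95) -> float:
    charge = current_a * hours * 3600.0                    # coulombs passed
    mass_g = efficiency * charge * M_NICKEL / (ELECTRONS * FARADAY)
    return mass_g / (DENSITY * area_cm2) * 1e4             # cm -> micrometres

# Example: 2 A over a 100 cm^2 mandrel for 10 hours deposits roughly 230 um.
print(f"{nickel_thickness_um(2.0, 10.0, 100.0):.0f} um")
```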
The surface of the finished part that was in intimate contact with the mandrel is replicated in fine detail with respect to the original, and is not subject to the shrinkage that would normally be experienced in a foundry-cast metal object, or the tool marks of a milled part. The solution side of the part is less well defined, and that loss of definition increases with thickness of the deposit. In extreme cases, where a thickness of several millimetres is required, there is preferential build-up of material on sharp outside edges and corners. This tendency can be reduced by shielding, or a process known as periodic reverse, where the electroforming current is reversed for short periods and the excess is preferentially dissolved electrochemically. The finished form can either be the finished part, or can be used in a subsequent process to produce a positive of the original mandrel shape, such as with vinyl records or CD and DVD stamper manufacture.
In recent years, due to its ability to replicate a mandrel surface with practically no loss of fidelity, electroforming has taken on new importance in the fabrication of micro- and nano-scale metallic devices and in producing precision injection molds with micro- and nano-scale features for production of non-metallic micro-molded objects.
Process
In the basic electroforming process, an electrolytic bath is used to deposit nickel or other electroformable metal onto a conductive surface of a model (mandrel). Once the deposited material has been built up to the desired thickness, the electroform is parted from the substrate. This process allows the precise replication of the mandrel surface texture and geometry at low unit cost with high repeatability and excellent process control.
If the mandrel is made of a non-conductive material, then it can be coated with a thin conductive layer.
Advantages and disadvantages
The main advantage of electroforming is that it accurately replicates the external shape of the mandrel. Generally, machining a cavity accurately is more challenging than machining a convex shape; however, the opposite holds true for electroforming because the mandrel's exterior can be accurately machined and then used to electroform a precision cavity.
Compared to other basic metal forming processes (casting, forging, stamping, deep drawing, machining, and fabricating), electroforming is very effective when requirements call for extreme tolerances, complexity, or light weight. The precision and resolution inherent in the photo-lithographically produced conductive patterned substrate allows finer geometries to be produced to tighter tolerances while maintaining superior edge definition with a near-optical finish. Electroformed metal can be extremely pure, with superior properties over wrought metal due to its refined crystal structure. Multiple layers of electroformed metals can be bonded together, or to different substrate materials, to produce complex structures with "grown-on" flanges and bosses.
Tolerances of 1.5 to 3 nanometers have been reported.
A wide variety of shapes and sizes can be made by electroforming, the principal limitation being the need to part the product from the mandrel. Since the fabrication of a product requires only a single model or mandrel, low production quantities can be made economically.
See also
LIGA
Electrotyping
Electroplating
Electrochemical engineering
References
Further reading
Spiro, P. Electroforming: A comprehensive survey of theory, practice and commercial applications, London, 1971.
External links
Metal forming
Metallurgical processes
de:Galvanik | Electroforming | [
"Chemistry",
"Materials_science"
] | 944 | [
"Metallurgical processes",
"Metallurgy"
] |
3,003,070 | https://en.wikipedia.org/wiki/Empirical%20modelling | Empirical modelling refers to any kind of (computer) modelling based on empirical observations rather than on mathematically describable relationships of the system modelled.
Empirical Modelling
Empirical Modelling as a variety of empirical modelling
Empirical modelling is a generic term for activities that create models by observation and experiment. Empirical Modelling (with the initial letters capitalised, and often abbreviated to EM) refers to a specific variety of empirical modelling in which models are constructed following particular principles. Though the extent to which these principles can be applied to model-building without computers is an interesting issue (to be revisited below), there are at least two good reasons to consider Empirical Modelling in the first instance as computer-based. Without doubt, computer technologies have had a transformative impact where the full exploitation of Empirical Modelling principles is concerned. What is more, the conception of Empirical Modelling has been closely associated with thinking about the role of the computer in model-building.
An empirical model operates on a simple semantic principle: the maker observes a close correspondence between the behaviour of the model and that of its referent. The crafting of this correspondence can be 'empirical' in a wide variety of senses: it may entail a trial-and-error process, it may be based on computational approximation to analytic formulae, or it may be derived as a black-box relation that affords no insight into 'why it works'.
Empirical Modelling is rooted in the key principle of William James's radical empiricism, which postulates that all knowing is rooted in connections that are given-in-experience. Empirical Modelling aspires to craft the correspondence between the model and its referent in such a way that its derivation can be traced to connections given-in-experience. Making connections in experience is an essentially individual human activity that requires skill and is highly context-dependent. Examples of such connections include: identifying familiar objects in the stream of thought, associating natural language words with the objects to which they refer, and subliminally interpreting the rows and columns of a spreadsheet as exam results of particular students in particular subjects.
Principles
In Empirical Modelling, the process of construction is an incremental one in which the intermediate products are artefacts that evoke aspects of the intended (and sometimes emerging) referent through live interaction and observation. The connections evoked in this way have distinctive qualities: they are of their essence personal and experiential in character and are provisional in so far as they may be undermined, refined and reinforced as the model builder's experience and understanding of the referent develops. Following a precedent established by David Gooding in his account of the role that artefacts played in Michael Faraday's experimental investigation of electromagnetism, the intermediate products of the Empirical Modelling process are described as 'construals'. Gooding's account is a powerful illustration of how making construals can support the sense-making activities that lead to conceptual insights (cf. the contribution that Faraday's work made to electromagnetic theory) and to practical products (cf. Faraday's invention of the electric motor).
The activities associated with making a construal in the Empirical Modelling framework are depicted in Figure 1.
The eye icon at the centre of the figure represents the maker's observation of the current state of development of the construal and its referent. The two arrows emanating from the eye represent the connection given-in-experience between the construal and its referent that is established in the mind of the maker. This connection is crafted through experimental interaction with the construal under construction and its emerging referent. As in genuine experiment, the scope of the interactions that can be entertained by the maker is inconceivably broad. At the maker's discretion, the interactions that characterise the construal are those that respect the connection given in the maker's experience. As the Empirical Modelling process unfolds, the construal, the referent, the maker's understanding and the context for the maker's engagement co-evolve in such a way that:
the interactive experience that the construal affords is enhanced;
the interactive experience that characterises the referent is refined;
the repertoire of characteristic interactions with the construal and its referent is enlarged;
the contextual constraints on characteristic interactions with the construal and its referent are identified.
Empirical Modelling concepts
In Empirical Modelling, making and maintaining the connection given-in-experience between the construal and referent is based on three primary concepts: observables, dependencies and agency. Within both the construal and its referent, the maker identifies observables as entities that can take on a range of different values, and whose current values determine its current state. All state-changing interactions with the construal and referent are conceived as changes to the values of observables. A change to the value of one observable may be directly attributable to a change in the value of another observable, in which case these values are linked by a dependency. Changes to observable values are attributed to agents, amongst which the most important is the maker of the construal. When changes to observable values are observed to occur simultaneously, this can be construed as concurrent action on the part of different agents, or as concomitant changes to observables derived from a single agent action via dependencies. To craft the connection given-in-experience between the construal and referent, the maker constructs the construal in such a way that its observables, dependencies and agency correspond closely to those that are observed in the referent. To this end, the maker must conceive appropriate ways in which observables and agent actions in the referent can be given suitable experiential counterparts in the construal.
The semantic framework shown in Figure 1 resembles that adopted in working with spreadsheets, where the state that is currently displayed in the grid is meaningful only when experienced in conjunction with an external referent. In this setting, the cells serve as observables, their definitions specify the dependencies, and agency is enacted by changing the values or the definitions of cells. In making a construal, the maker explores the roles of each relevant agent by projecting agency upon it as if it were a human agent and identifying observables and dependencies from that perspective. By automating agency, construals can then be used to specify behaviours in much the same way that behaviours can be expressed using macros in conjunction with spreadsheets. In this way, animated construals can emulate program-like behaviours in which the intermediate states are meaningful and live to auditing by the maker.
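The observable/dependency/agency triad can be illustrated with a very small, spreadsheet-like dependency network. The sketch below is not the Empirical Modelling tool set itself; it is a hypothetical illustration in which an agent's redefinition of one observable propagates to the observables defined in terms of it, and all names in it are invented for the example.

```python
# A minimal, hypothetical dependency network: observables hold values,
# definitions relate them (like spreadsheet cells), and an agent's
# redefinition propagates through the network. Illustration only.
class Construal:
    def __init__(self):
        self.values = {}        # observable name -> current value
        self.definitions = {}   # observable name -> (function, dependencies)

    def define(self, name, value=None, fn=None, depends=()):
        if fn is None:
            self.values[name] = value              # independent observable
        else:
            self.definitions[name] = (fn, tuple(depends))
        self._propagate()

    def _propagate(self):
        changed = True
        while changed:                              # assumes acyclic dependencies
            changed = False
            for name, (fn, deps) in self.definitions.items():
                if all(d in self.values for d in deps):
                    new = fn(*(self.values[d] for d in deps))
                    if self.values.get(name) != new:
                        self.values[name] = new
                        changed = True

# Exam-results flavour: a mark observable and a grade that depends on it.
c = Construal()
c.define("mark", value=62)
c.define("grade", fn=lambda m: "pass" if m >= 40 else "fail", depends=["mark"])
print(c.values["grade"])       # pass
c.define("mark", value=35)     # the maker, acting as agent, redefines an observable
print(c.values["grade"])       # fail: the dependency keeps the grade consistent
```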
Environments to support Empirical Modelling
The development of computer environments for making construals has been an ongoing subject of research over the last thirty years. The many variants of such environments that have been implemented are based on common principles. The network of dependencies that currently connect observables is recorded as a family of definitions. Semantically such definitions resemble the definitions of spreadsheet cells, whereby changes to the values of observables on the right-hand side propagate so as to change the value of the observable on the left-hand side in a conceptually indivisible manner. The dependencies in these networks are acyclic but are also reconfigurable: redefining an observable may introduce a new definition that alters the dependency structure. Observables built into the environment include scalars, geometric and screen display elements; these can be elaborated using multi-level list structures. A dependency is typically represented by a definition which uses a relatively simple functional expression to relate the value of an observable to the values of other observables. Such functions have typically been expressed in fragments of simple procedural code, but the most recent variants of environments for making construals also enable dependency relations to be expressed by suitably contextualised families of definitions. The maker can interact with a construal by redefining existing observables or introducing new observables in an open-ended, unconstrained manner. Such interaction has a crucial role in the experimental activity that informs the incremental development of the construal. Triggered actions can be introduced to automate state-change: these perform redefinitions in response to specified changes in the values of observables.
Empirical Modelling as a broader view of computing
In Figure 1, identifying 'the computer' as the medium in which the construal is created is potentially misleading. The term COMPUTER is not merely a reference to a powerful computational device. In making construals, the primary emphasis is on the rich potential scope for interaction and perceptualisation that the computer enables when used in conjunction with other technologies and devices. The primary motivation for developing Empirical Modelling is to give a satisfactory account of computing that integrates these two complementary roles of the computer. The principles by which James and Dewey sought to reconcile perspectives on agency informed by logic and experience play a crucial role in achieving this integration.
The dual role for the computer implicit in Figure 1 is widely relevant to contemporary computing applications. On this basis, Empirical Modelling can be viewed as providing a foundation for a broader view of computing. This perspective is reflected in numerous Empirical Modelling publications on topics such as educational technology, computer-aided design and software development. Making construals has also been proposed as a suitable technique to support constructionism, as conceived by Seymour Papert, and to meet the guarantees for 'construction' as identified by Bruno Latour.
Empirical Modelling as generic sense-making?
The Turing machine provides the theoretical foundation for the role of the computer as a computational device: it can be regarded as modelling 'a mind following rules'. The practical applications of Empirical Modelling to date suggest that making construals is well-suited to supporting the supplementary role the computer can play in orchestrating rich experience. In particular, in keeping with the pragmatic philosophical stance of James and Dewey, making construals can fulfill an explanatory role by offering contingent explanations for human experience in contexts where computational rules cannot be invoked. In this respect, making construals may be regarded as modelling 'a mind making sense of a situation'.
In the same way that the Turing machine is a conceptual tool for understanding the nature of algorithms whose value is independent of the existence of the computer, Empirical Modelling principles and concepts may have generic relevance as a framework for thinking about sense-making without specific reference to the use of a computer. The contribution that William James's analysis of human experience makes to the concept of Empirical Modelling may be seen as evidence for this. By this token, Empirical Modelling principles may be an appropriate way to analyse varieties of empirical modelling that are not computer-based. For instance, it is plausible that the analysis in terms of observables, dependencies and agency that applies to interaction with electronic spreadsheets would also be appropriate for the manual spreadsheets that predated them.
Background
Empirical Modelling has been pioneered since the early 1980s by Meurig Beynon and the Empirical Modelling Research Group in Computer Science at the University of Warwick.
The term 'Empirical Modelling' (EM) has been adopted for this work since about 1995 to reflect the experiential basis of the modelling process in observation and experiment. Special purpose software supporting the central concepts of observable, dependency and agency has been under continuous development (mainly led by research students) since the late 1980s.
The principles and tools of EM have been used and developed by many hundreds of students within coursework, project work, and research theses. The undergraduate and MSc module 'Introduction to Empirical Modelling' was taught for many years up to 2013-14, until the retirement of Meurig Beynon and Steve Russ (authors of this article). There is a large website containing research and teaching material with an extensive collection of refereed publications and conference proceedings (see External links).
The term 'construal' has been used since the early 2000s for the artefacts, or models, made with EM tools. The term has been adapted from its use by David Gooding in the book 'Experiment and the Making of Meaning' (1990) to describe the emerging, provisional ideas that formed in Faraday's mind, and were recorded in his notebooks, as he investigated electromagnetism, and made the first electric motors, in the 1800s.
The main practical activity associated with EM - that of 'making construals' - was the subject of an Erasmus+ Project CONSTRUIT! (2014-2017; see External links).
See also
Multi-agent system
External links
http://www.dcs.warwick.ac.uk/modelling/ Empirical Modelling Research Group
https://warwick.ac.uk/fac/sci/dcs/research/em/welcome/ CONSTRUIT! Project web pages
Notes, References
Mathematical modeling | Empirical modelling | [
"Mathematics"
] | 2,704 | [
"Applied mathematics",
"Mathematical modeling"
] |
3,003,284 | https://en.wikipedia.org/wiki/Weighting | The process of frequency weighting involves emphasizing the contribution of particular aspects of a phenomenon (or of a set of data) over others to an outcome or result; thereby highlighting those aspects in comparison to others in the analysis. That is, rather than each variable in the data set contributing equally to the final result, some of the data is adjusted to make a greater contribution than others. This is analogous to the practice of adding (extra) weight to one side of a pair of scales in order to favour either the buyer or seller.
While weighting may be applied to a set of data, such as epidemiological data, it is more commonly applied to measurements of light, heat, sound, gamma radiation, and in fact any stimulus that is spread over a spectrum of frequencies.
Weighting in acoustics
Weighting and loudness
In the measurement of loudness, for example, a weighting filter is commonly used to emphasise frequencies around 3 to 6 kHz, where the human ear is most sensitive, while attenuating very high and very low frequencies to which the ear is insensitive. A commonly used weighting is the A-weighting curve, which results in units of dBA sound pressure level. Because the frequency response of human hearing varies with loudness, the A-weighting curve is strictly correct only at a loudness level of 40 phons, and other curves known as B-, C- and D-weighting are also used, the latter being particularly intended for the measurement of aircraft noise.
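The A-weighting curve can be evaluated directly from the rational magnitude function standardised in IEC 61672-1. The sketch below computes the relative response in decibels at a given frequency; the pole frequencies and the +2.00 dB normalisation (which places the response at 0 dB at 1 kHz) are quoted from that commonly published formulation and should be treated as assumptions to verify against the standard.

```python
import math

def a_weighting_db(f: float) -> float:
    """Relative A-weighted response in dB at frequency f (Hz)."""
    ra = (12194.0 ** 2 * f ** 4) / (
        (f ** 2 + 20.6 ** 2)
        * math.sqrt((f ** 2 + 107.7 ** 2) * (f ** 2 + 737.9 ** 2))
        * (f ** 2 + 12194.0 ** 2)
    )
    return 20.0 * math.log10(ra) + 2.00   # normalise so 1 kHz sits at 0 dB

for freq in (100.0, 1000.0, 4000.0):
    print(f"{freq:6.0f} Hz: {a_weighting_db(freq):+5.1f} dB")
# roughly -19.1 dB at 100 Hz, 0.0 dB at 1 kHz, about +1.0 dB at 4 kHz
```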
Weighting in audio measurement
In broadcasting and audio equipment measurements 468-weighting is the preferred weighting to use because it was specifically devised to allow subjectively valid measurements on noise, rather than pure tones. It is often not realised that equal loudness curves, and hence A-weighting, really apply only to tones, as tests with noise bands show increased sensitivity in the 5 to 7 kHz region on noise compared to tones.
Other weighting curves are used in rumble measurement and flutter measurement to properly assess subjective effect.
In each field of measurement, special units are used to indicate a weighted measurement as opposed to a basic physical measurement of energy level. For sound, the unit is the phon (1 kHz equivalent level).
In the fields of acoustics and audio engineering, it is common to use a standard curve referred to as A-weighting, one of a set that are said to be derived from equal-loudness contours.
Application to hearing in aquatic animals
Auditory frequency weighting functions for marine mammals were introduced by Southall et al. (2007).
Weighting in electromagnetism
Weighting and gamma rays
In the measurement of gamma rays or other ionising radiation, a radiation monitor or dosimeter will commonly use a filter to attenuate those energy levels or wavelengths that cause the least damage to the human body while letting through those that do the most damage, so that any source of radiation may be measured in terms of its true danger rather than just its strength. The resulting unit is the sievert or microsievert.
Weighting and television colour components
Another use of weighting is in television, in which the red, green and blue components of the signal are weighted according to their perceived brightness. This ensures compatibility with black and white receivers and also benefits noise performance and allows separation into meaningful luminance and chrominance signals for transmission.
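A concrete instance of such weighting is the luma signal of standard-definition television (ITU-R BT.601), in which green contributes far more than red or blue because the eye is most sensitive to it. The sketch below applies those published coefficients; it is illustrative only and ignores gamma correction and the other details of a real encoder.

```python
def luma_bt601(r: float, g: float, b: float) -> float:
    """Weighted luminance (luma) of normalised R, G, B components."""
    return 0.299 * r + 0.587 * g + 0.114 * b

print(luma_bt601(1.0, 1.0, 1.0))   # white  -> 1.0
print(luma_bt601(0.0, 1.0, 0.0))   # green alone already contributes 0.587
```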
Weighting and UV factor derivation for sun exposure
Skin damage due to sun exposure is very wavelength dependent over the UV range 295 to 325 nm, with power at the shorter wavelength causing around 30 times as much damage as the longer one. In the calculation of UV Index, a weighting curve is used which is known as the McKinlay-Diffey Erythema action spectrum.
See also
Audio quality measurement
G-weighting
ITU-R 468 noise weighting
M-weighting
Psophometric weighting
Weight function
Weighting filter
Z-weighting
References
External links
Noise measurement briefing
Calculator for A,C,U, and AU weighting values
A-weighting filter circuit for audio measurements
AES pro audio reference definition of "weighting filters"
What is a decibel?
Weighting filter according DIN EN 61672-1 2003-10 (DIN-IEC 651) Calculation: frequency f to dBA and dBC
Statistical analysis
Applied and interdisciplinary physics | Weighting | [
"Physics"
] | 886 | [
"Applied and interdisciplinary physics"
] |
5,491,659 | https://en.wikipedia.org/wiki/Complex%20metallic%20alloy | Complex metallic alloys (CMAs) or complex intermetallics (CIMs) are intermetallic compounds characterized by the following structural features:
large unit cells, comprising some tens up to thousands of atoms,
the presence of well-defined atom clusters, frequently of icosahedral point group symmetry,
the occurrence of inherent disorder in the ideal structure.
Overview
Complex metallic alloys is an umbrella term for intermetallic compounds with a relatively large unit cell. There is no precise definition of how large the unit cell of a complex metallic alloy has to be, but the broadest definition includes Zintl phases, skutterudites, and Heusler compounds at the simplest end, and quasicrystals at the most complex end.
Research
Following the invention of X-ray crystallography techniques in the 1910s, the atomic structure of many compounds was investigated. Most metals have relatively simple structures. However, in 1923 Linus Pauling reported on the structure of the intermetallic NaCd2, which had such a complicated structure he was unable to fully explain it. Thirty years later, he concluded that NaCd2 contains 384 sodium and 768 cadmium atoms in each unit cell.
Most physical properties of CMAs show distinct differences with respect to the behavior of normal metallic alloys and therefore these materials possess a high potential for technological application.
The European Commission funded the Network of Excellence CMA from 2005 to 2010, uniting 19 core groups in 12 countries. From this emerged the European Integrated Center for the Development of New Metallic Alloys and Compounds (previously C-MAC, now ECMetAC), which connects researchers at 21 universities.
Examples
Example phases are:
β-Mg2Al3: 1168 atoms per unit cell, face-centred cubic, atoms arranged in Friauf polyhedra.
ξ'–Al74Pd22Mn4: 318 atoms per unit cell, face-centred orthorhombic, atoms arranged in Mackay-type clusters.
(Bergman phase): 163 atoms per unit cell, body centred cubic, atoms arranged in Bergman clusters.
(Taylor phase): 204 atoms per unit cell, face-centred orthorhombic, atoms arranged in Mackay-type clusters.
See also
High-entropy alloys, alloys of multiple elements which ideally form no intermetallics
Holmium–magnesium–zinc quasicrystal
Frank–Kasper phases
Laves phase
Hume-Rothery rules
References
Further reading
Intermetallics
Crystal structure types | Complex metallic alloy | [
"Physics",
"Chemistry",
"Materials_science"
] | 501 | [
"Inorganic compounds",
"Metallurgy",
"Crystal structure types",
"Crystallography",
"Intermetallics",
"Condensed matter physics",
"Alloys"
] |
5,491,752 | https://en.wikipedia.org/wiki/Strict%20differentiability | In mathematics, strict differentiability is a modification of the usual notion of differentiability of functions that is particularly suited to p-adic analysis. In short, the definition is made more restrictive by allowing both points used in the difference quotient to "move".
Basic definition
The simplest setting in which strict differentiability can be considered, is that of a real-valued function defined on an interval I of the real line.
The function f : I → R is said to be strictly differentiable at a point a ∈ I if
lim_{(x,y) → (a,a)} (f(x) - f(y)) / (x - y)
exists, where the limit is to be considered as a limit in I × I, and of course requiring x ≠ y.
A strictly differentiable function is obviously differentiable, but the converse is wrong, as can be seen from counterexamples such as f(x) = x^2 sin(1/x) for x ≠ 0, f(0) = 0, which is differentiable at 0 (with derivative 0) but not strictly differentiable there.
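To see why a function of this kind fails to be strictly differentiable at 0, one can exhibit two ways of letting (x, y) approach (0, 0) that give different limits of the difference quotient. The computation below is a routine verification for the function named above (itself offered only as a representative counterexample), written out in LaTeX.

```latex
% Difference quotients of f(x) = x^2 \sin(1/x), f(0) = 0, near the origin.
% Along the axis y = 0 the quotient tends to f'(0) = 0:
%   (f(x) - f(0))/(x - 0) = x \sin(1/x) \to 0.
% But choosing x_n = 1/(2\pi n) and y_n = 1/(2\pi n + \pi/2), so that
% x_n - y_n = (\pi/2)\, x_n y_n, gives
\[
  \frac{f(x_n) - f(y_n)}{x_n - y_n}
  = \frac{0 - y_n^2}{\tfrac{\pi}{2}\, x_n y_n}
  = -\frac{2}{\pi}\cdot\frac{y_n}{x_n}
  \;\longrightarrow\; -\frac{2}{\pi} \neq 0 ,
\]
% so the two-variable limit defining strict differentiability does not exist at 0.
```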
One has, however, the equivalence of strict differentiability on an interval I and being of differentiability class C^1 on I (i.e. continuously differentiable).
In analogy with the Fréchet derivative, the previous definition can be generalized to the case where R is replaced by a Banach space E (such as R^n), requiring the existence of a continuous linear map L such that
lim_{(x,y) → (a,a), x ≠ y} ‖f(x) - f(y) - L(x - y)‖ / ‖x - y‖ = 0,
where the limit is defined in a natural way on E × E.
Motivation from p-adic analysis
In the p-adic setting, the usual definition of the derivative fails to have certain desirable properties. For instance, it is possible for a function that is not locally constant to have zero derivative everywhere. An example of this is furnished by the function F: Zp → Zp, where Zp is the ring of p-adic integers, defined by
One checks that the derivative of F, according to the usual definition of the derivative, exists and is zero everywhere, including at x = 0. That is, for any x in Zp,
lim_{h → 0} (F(x + h) - F(x)) / h = 0.
Nevertheless F fails to be locally constant at the origin.
The problem with this function is that the difference quotients
(F(x) - F(y)) / (x - y)
do not approach zero for x and y close to zero. For example, taking x = p^n - p^{2n} and y = p^n, the resulting quotient does not approach zero as n grows. The definition of strict differentiability avoids this problem by imposing a condition directly on the difference quotients.
Definition in p-adic case
Let K be a complete extension of Qp (for example K = Cp), and let X be a subset of K with no isolated points. Then a function F : X → K is said to be strictly differentiable at x = a if the limit
lim_{(x,y) → (a,a), x ≠ y} (F(x) - F(y)) / (x - y)
exists.
References
Number theory | Strict differentiability | [
"Mathematics"
] | 504 | [
"Discrete mathematics",
"Number theory"
] |
5,491,838 | https://en.wikipedia.org/wiki/Computational%20visualistics | Computational Visualistics is an interdisciplinary field focused on the use of computers to generate and analyze images.
Areas covered
In the study of images within computer science, the abstract data type "image" (or potentially several such types) is a central focus, along with its various implementations. Three main groups of algorithms are relevant to this data type in computational visualistics:
Algorithms from "image" to "image"
Algorithms from "image" to "image" involve image processing, which focuses on operations that convert one or more input images, possibly with additional non-image parameters, into an output image. These operations support various applications, including enhancing image quality through techniques like contrast enhancement, extracting features such as edge detection, and identifying and isolating patterns based on predefined criteria, such as the blue screen technique. The field also encompasses the development of compression algorithms, crucial for the efficient storage and transmission of image data.
Algorithms from "image" to "not-image"
Two disciplines focus on transforming images into non-pictorial data. The field of pattern recognition, although not limited to images, has made significant contributions to computational visualistics since the early 1950s. This work includes classifying information within images, such as identifying geometric shapes (e.g., circular regions), recognizing handwritten text, detecting spatial objects, and associating stylistic attributes. The goal is to map images to non-pictorial data types that describe various aspects of the images. In contrast, computer vision, a branch of artificial intelligence (AI), aims to enable computers to achieve visual perception akin to human vision. Problems in computer vision are considered semantic when their objectives closely align with human-like understanding of objects within images.
Algorithms from "not-image" to "image"
The exploration of how operations involving non-pictorial data types can generate images is particularly relevant in computer graphics and information visualization. Computer graphics focuses on creating images that represent spatial configurations of objects, often in a naturalistic manner, such as in virtual architecture. These image-generating algorithms typically start with data describing three-dimensional geometry and scene lighting, along with the optical properties of surfaces. In contrast, information visualization aims to depict various data types, especially those with non-visual components, using visual conventions such as color codes or icons. Fractal images, such as those of the Mandelbrot set, represent a borderline case in information visualization, where abstract mathematical properties are visualized.
Computational visualistics degree programmes
The field of computational visualistics was established at the University of Magdeburg, Germany, in the fall of 1996. Initiated by Thomas Strothotte, a professor of computer graphics, and supported by Jörg Schirra and an interdisciplinary team of researchers from the social and technical sciences and medicine, the program focuses on the application of computer science to image-related problems. The five-year diploma program emphasizes core computer science courses, including digital methods and electronic tools, and integrates courses on the use of images in the humanities. Students also develop communicative skills and apply their knowledge in practical areas such as biology and medicine, particularly in fields involving digital image data like microscopy and radiology. Bachelor's and Master's programs were introduced in 2006. The University of Koblenz features a similar degree program.
References
Further reading
Jochen Schneider, Thomas Strothotte & Winfried Marotzki (2003). Computational Visualistics, Media Informatics, and Virtual Communities. Deutscher Universitätsverlag.
Jörg R.J. Schirra (1999). "Computational Visualistics: Bridging the Two Cultures in a Multimedia Degree Programme". In: Forum Proceedings, ed.: Z. J. Pudlowski, p. 47–51,
Jörg R. J. Schirra (2000). "A New Theme for Educating New Engineers: Computational visualistics". In: Global Journal of Engineering Education, Vol. 4, No. 1, 73–82. (June 2000)
Jörg R. J. Schirra (2005). "Foundation of Computational Visualistics". Deutscher Universitätsverlag
Jörg R. J. Schirra (2005). "Computational Visualistics: Dealing with Pictures in Computer Science". In: K. Sachs-Hombach (Ed.): Bildwissenschaft zwischen Reflexion und Anwendung. Köln: Herbert von Halem Verlag, 2005, 494–509.
Jörg R. J. Schirra (2005) "Ein Disziplinen-Mandala für die Bildwissenschaft - Kleine Provokation zu einem Neuen Fach"" . In: Vol. I: Bildwissenschaft als interdisziplinäres Unternehmen. Eine Standortbestimmung. 2005, Köln: Herbert-von-Halem-Verlag
Bernhard Preim, Dirk Bartz (2007). Visualization in Medicine. Morgan Kaufmann, 2007.
Bernhard Preim, Charl Botha (2013). Visual Computing for Medicine. Morgan Kaufmann, 2013.
External links
Computational visualistics (degree programme at Otto-von-Guericke University Magdeburg, Germany)
Computervisualistik (degree programme at the University Koblenz-Landau, Germany)
Project Computational visualistics
Computational science | Computational visualistics | [
"Mathematics"
] | 1,130 | [
"Computational science",
"Applied mathematics"
] |
5,491,905 | https://en.wikipedia.org/wiki/Poly%284-vinylphenol%29 | Poly(4-vinylphenol), also called polyvinylphenol or PVP, is a plastic structurally similar to polystyrene. It is produced from the monomer 4-vinylphenol, which is also referred to as 4-hydroxystyrene.
PVP is used in electronics as a dielectric layer in organic transistors in organic TFT LCD displays. Thin films of cross-linked PVP can be used in this application, often in combination with pentacene. By varying the dielectric properties of PVP, the field-effect mobility of the TFTs can be tuned. Other applications include its use in photoresist materials, dielectric materials for energy storage, water-resistant adhesives and antimicrobial coatings. PVP, when mixed with a polyelectrolyte, has been demonstrated to moderately inhibit the growth of microorganisms. PVP has also been employed in gas sensors, such as by mixing polymer-carbon black with PVP to analyse organic solvents. PVP brushes are able to sense toxic gases such as hydrogen sulfide with microgravimetric techniques. Molecularly Imprinted Poly-4-vinylphenol can be produced for the selective electrochemical detection of small molecules, such as cotinine or nicotine.
PVP is typically prepared by free radical polymerization of 4-vinylphenol or a protected form of 4-vinylphenol. The protected monomers can be prepared from 4-hydroxybenzaldehyde, by vinylation of phenols, or by acylation of polystyrene followed by oxidation at room temperature. If poly(4-methoxystyrene) is produced, the methoxy group can be cleaved by treating it with trimethylsilyl iodide. There are several patents on the synthesis of 4-hydroxystyrene due to its importance in the development of photoresist materials. RAFT polymerization can be used to prepare well-defined PVP chains. This can be done by mediating free radical polymerization of acetoxystyrene, which is then followed by deacetylation. Nitroxide-mediated polymerization can also be used to prepare polyacetoxystyrene, which can be transformed into polyphenols by UV irradiation. ATRP can also be used for the preparation of defined block copolymers of PVP, by polymerization of 4-acetoxystyrene that is subsequently selectively hydrolysed.
See also
4-Vinylphenol
References
Organic polymers
Plastics
Vinyl polymers | Poly(4-vinylphenol) | [
"Physics",
"Chemistry"
] | 545 | [
"Organic polymers",
"Unsolved problems in physics",
"Organic compounds",
"Amorphous solids",
"Plastics"
] |
5,492,058 | https://en.wikipedia.org/wiki/Label%20printer%20applicator | A label printer applicator is a basic robot that can automatically print and apply pressure-sensitive labels to various products. Some types of labeling include shipping labeling, content labeling, graphic images, and labeling to comply with specific standards such as those of GS1 and the Universal Product Code (UPC). A pressure-sensitive label consists of a label substrate and adhesive.
First developed in the late 1970s, today there are over 70 manufacturers of these types of machines worldwide.
Design
Basic label printer applicators consist of three primary parts: a printer (or print engine), an applicator, and a method of handling labels and ribbons, referred to as media. Computing power also has the potential to increase the efficiency of label printer applicators.
Print engine
The print engine can be taken from an industrial table top printer, it can be a specifically designed module that can be "bolted" onto an applicator or it can be a proprietary element constructed by the printer applicator manufacturer. A print engine’s primary function is to accept data from a computer and print the data onto a label for application. This printing can be accomplished using either the direct thermal method or the thermal transfer method. Both methods heat up very fine elements (up to 600 per inch) on a print head. Direct thermal burns the image onto the face of specially designed label stock. This is the preferred method for shipping labels and is also very popular in Europe. The thermal transfer process utilizes a ribbon coated with wax, resin, or a hybrid of the two. It is then heated and melted onto the surface of the label substrate. Thermal transfer is the most popular method in the United States. The printer knows what to print via data communication from an outside software package, much like common inkjet printers. The software delivers data formatted in a specific layout and the printer reads the format based on its own driver.
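As an illustration of this data path, the sketch below composes a tiny label in ZPL (one widely used label-printer language, named here only as an example) and sends it to a print engine over the raw-printing TCP port 9100 that many networked printers accept. The host address, the port and the exact commands are assumptions about a typical setup, not a description of any specific machine or of the Fingerprint or MCL languages mentioned later in this article.

```python
import socket

def send_label(host: str, order_no: str, port: int = 9100) -> None:
    """Compose a minimal ZPL label and push it to a networked print engine."""
    zpl = (
        "^XA"                                          # start of label format
        "^FO50,50^A0N,40,40^FDShip to: ACME Co^FS"     # human-readable text field
        f"^FO50,120^BCN,100,Y,N,N^FD{order_no}^FS"     # Code 128 barcode field
        "^XZ"                                          # end of label format
    )
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(zpl.encode("ascii"))

# send_label("192.168.1.50", "ORDER-000123")   # example invocation (host assumed)
```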
Applicator
The applicator section delivers the label to the product. This can be accomplished by several methods. Typically application is achieved with a pneumatic or electric cylinder with a specially designed label pad. The cylinder will extend out and touch (tamp) the adhesive side of the label to a product. Variations of this method will extend the cylinder and then use air to blow the label to the product surface (tamp-blow). Another popular method is a blow-on system that will use a burst of air to deliver the label from the pad to the product surface without the use of a cylinder. Other methods can be used to wipe a label onto a surface, or even place two duplicate or unique labels on different sides of a product.
Media
Media handling controls how the label stock is delivered to the print engine. It also performs the separation of the label from its backing and rewinds the waste label backing that remains after label application. This process can be difficult since consistent tension must be maintained for the label to peel off the liner and onto the applicator. Too much tension can cause the liner to break, which requires the machine to be rethreaded.
Processor
Today, a fourth element to label printer applicators is emerging: computing power. Recently label printer applicators have been introduced which have the power to store large amounts of data. These machines can also control and harvest data from other input devices such as barcode scanners and weighing scales. These printer applicators can be programmed with special languages such as Fingerprint designed by Intermec for Intermec print engines or MCL (Macro Command Language), a Datamax programming language. Now label printer applicators can communicate directly with an array of devices and hosts on the production line without the aid of a computer.
See also
Automatic label placement
Document automation
Food labelling
Label printer
Packaging and labeling
Thermal printer
Thermal transfer printer
Computer printers
Packaging machinery | Label printer applicator | [
"Engineering"
] | 773 | [
"Packaging machinery",
"Industrial machinery"
] |
5,492,198 | https://en.wikipedia.org/wiki/Limit%20set | In mathematics, especially in the study of dynamical systems, a limit set is the state a dynamical system reaches after an infinite amount of time has passed, by going either forward or backward in time. Limit sets are important because they can be used to understand the long-term behavior of a dynamical system. A system that has reached its limiting set is said to be at equilibrium.
Types
fixed points
periodic orbits
limit cycles
attractors
In general, limit sets can be very complicated, as in the case of strange attractors, but for 2-dimensional dynamical systems the Poincaré–Bendixson theorem provides a simple characterization of all nonempty, compact ω-limit sets that contain at most finitely many fixed points: each such set is a fixed point, a periodic orbit, or a union of fixed points and homoclinic or heteroclinic orbits connecting those fixed points.
Definition for iterated functions
Let (X, d) be a metric space, and let f : X → X be a continuous function. The ω-limit set of a point x ∈ X, denoted by ω(x, f), is the set of cluster points of the forward orbit {f^n(x) : n ∈ ℕ} of the iterated function f. Hence, y ∈ ω(x, f) if and only if there is a strictly increasing sequence of natural numbers (n_k) such that f^{n_k}(x) → y as k → ∞. Another way to express this is
ω(x, f) = ⋂_{n ∈ ℕ} cl {f^k(x) : k ≥ n},
where cl S denotes the closure of the set S. The points in the limit set are non-wandering (but may not be recurrent points). The ω-limit set may also be formulated as the outer limit (limsup) of the sequence of singleton sets {f^n(x)}, since
ω(x, f) = ⋂_{n ≥ 0} cl ( ⋃_{k ≥ n} {f^k(x)} ).
If f is a homeomorphism (that is, a bicontinuous bijection), then the α-limit set is defined in a similar fashion, but for the backward orbit; i.e. α(x, f) = ω(x, f^{-1}).
Both sets are f-invariant, and if X is compact, they are compact and nonempty.
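A numerical illustration (not a proof) of the definition above: iterate a map, discard a long transient so the orbit has effectively reached its limit set, and collect the values it keeps returning to. The logistic map with r = 3.2, the transient length and the rounding tolerance below are all assumptions chosen for the sketch; for that parameter the orbit settles onto a period-2 ω-limit set.

```python
def logistic(x: float, r: float = 3.2) -> float:
    return r * x * (1.0 - x)

def approx_omega_limit(x0: float, transient: int = 10_000, keep: int = 1_000):
    """Approximate the omega-limit set of x0 under repeated application of `logistic`."""
    x = x0
    for _ in range(transient):       # let the orbit approach its limit set
        x = logistic(x)
    points = set()
    for _ in range(keep):            # record the cluster points, up to rounding
        x = logistic(x)
        points.add(round(x, 6))
    return sorted(points)

print(approx_omega_limit(0.1))       # approximately [0.513045, 0.799455]
```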
Definition for flows
Given a real dynamical system (T, X, φ) with flow φ : ℝ × X → X and a point x ∈ X, we call a point y an ω-limit point of x if there exists a sequence (t_n) in ℝ with t_n → ∞ such that
φ(t_n, x) → y as n → ∞.
For an orbit γ of (T, X, φ), we say that y is an ω-limit point of γ if it is an ω-limit point of some point on the orbit.
Analogously we call y an α-limit point of x if there exists a sequence (t_n) in ℝ with t_n → −∞ such that
φ(t_n, x) → y as n → ∞.
For an orbit γ of (T, X, φ), we say that y is an α-limit point of γ if it is an α-limit point of some point on the orbit.
The set of all ω-limit points (α-limit points) for a given orbit γ is called the ω-limit set (α-limit set) for γ and denoted ω(γ) (respectively α(γ)).
If the ω-limit set (α-limit set) is disjoint from the orbit γ, that is ω(γ) ∩ γ = ∅ (α(γ) ∩ γ = ∅), we call ω(γ) (α(γ)) an ω-limit cycle (α-limit cycle).
Alternatively the limit sets can be defined as
ω(γ) := ⋂_{s ∈ ℝ} cl {φ(t, x) : t > s}
and
α(γ) := ⋂_{s ∈ ℝ} cl {φ(t, x) : t < s},
where x is any point of the orbit γ.
Examples
For any periodic orbit γ of a dynamical system, ω(γ) = α(γ) = γ.
For any fixed point x_0 of a dynamical system, ω(x_0) = α(x_0) = {x_0}.
Properties
ω(γ) and α(γ) are closed;
if X is compact, then ω(γ) and α(γ) are nonempty, compact and connected;
ω(γ) and α(γ) are φ-invariant, that is φ(t, ω(γ)) = ω(γ) and φ(t, α(γ)) = α(γ) for all t ∈ ℝ.
See also
Julia set
Stable set
Limit cycle
Periodic point
Non-wandering set
Kleinian group
References
Further reading | Limit set | [
"Mathematics"
] | 615 | [
"Limit sets",
"Topology",
"Dynamical systems"
] |
5,492,199 | https://en.wikipedia.org/wiki/List%20of%20solid%20waste%20treatment%20technologies | This article lists different forms of solid waste treatment technologies and facilities employed in waste management infrastructure.
Waste handling facilities
Civic amenity site (CA site)
Transfer station
Established waste treatment technologies
Incineration
Landfill
Recycling
Specific to organic waste:
Anaerobic digestion
Composting
Windrow composting
Alternative waste treatment technologies
In the UK some of these are sometimes termed advanced waste treatment technologies
Biodrying
Gasification
Plasma gasification: Gasification assisted by plasma torches
Hydrothermal carbonization
Hydrothermal liquefaction
Mechanical biological treatment (sorting into selected fractions)
Refuse-derived fuel
Mechanical heat treatment
Molten salt oxidation
Pyrolysis
UASB (applied to solid wastes)
Waste autoclave
Specific to organic waste:
Bioconversion of biomass to mixed alcohol fuels
In-vessel composting
Landfarming
Sewage treatment
Tunnel composting
See also
Bioethanol
Biodiesel
List of waste management companies
List of wastewater treatment technologies
Pollution control
Waste-to-energy
Burn pit
References
Anaerobic digestion
Thermal treatment
Waste treatment technology
Solid waste treatment technologies | List of solid waste treatment technologies | [
"Chemistry",
"Engineering"
] | 213 | [
"Water treatment",
"Anaerobic digestion",
"Environmental engineering",
"Water technology",
"Waste treatment technology"
] |
5,492,505 | https://en.wikipedia.org/wiki/Bar%20induction | Bar induction is a reasoning principle used in intuitionistic mathematics, introduced by L. E. J. Brouwer. Bar induction's main use is the intuitionistic derivation of the fan theorem, a key result used in the derivation of the uniform continuity theorem.
It is also useful in giving constructive alternatives to other classical results.
The goal of the principle is to prove properties for all infinite sequences of natural numbers (called choice sequences in intuitionistic terminology), by inductively reducing them to properties of finite lists. Bar induction can also be used to prove properties about all choice sequences in a spread (a special kind of set).
Definition
Given a choice sequence α, any finite sequence consisting of its first n elements α(0), α(1), …, α(n−1) is called an initial segment of this choice sequence.
There are three forms of bar induction currently in the literature, each one places certain restrictions on a pair of predicates and the key differences are highlighted using bold font.
Decidable bar induction (BID)
Given two predicates R and A on finite sequences of natural numbers such that all of the following conditions hold:
every choice sequence contains at least one initial segment satisfying R at some point (this is expressed by saying that R is a bar);
R is decidable (i.e. our bar is decidable);
every finite sequence satisfying R also satisfies A (so A holds for every choice sequence beginning with the aforementioned finite sequence);
if all extensions of a finite sequence by one element satisfy A, then that finite sequence also satisfies A (this is sometimes referred to as A being upward hereditary);
then we can conclude that A holds for the empty sequence (i.e. A holds for all choice sequences starting with the empty sequence).
This principle of bar induction is favoured in the works of, A. S. Troelstra, S. C. Kleene and Albert Dragalin.
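Written symbolically, the schema above takes the following shape. The notation is an assumption made for the sketch: \bar{\alpha}n denotes the length-n initial segment of the choice sequence \alpha, s * \langle x \rangle the extension of the finite sequence s by the element x, and \langle\rangle the empty sequence.

```latex
% Decidable bar induction, one common symbolic rendering:
\begin{align*}
  &\forall \alpha\, \exists n\; R(\bar{\alpha}n)                         &&\text{($R$ is a bar)}\\
  &\forall s\, \bigl(R(s) \lor \lnot R(s)\bigr)                          &&\text{(the bar is decidable)}\\
  &\forall s\, \bigl(R(s) \rightarrow A(s)\bigr)                         &&\\
  &\forall s\, \bigl(\forall x\, A(s * \langle x \rangle) \rightarrow A(s)\bigr) &&\text{($A$ is upward hereditary)}\\
  &\quad\Longrightarrow\quad A(\langle\rangle)                           &&
\end{align*}
```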
Thin bar induction (BIT)
Given two predicates R and A on finite sequences of natural numbers such that all of the following conditions hold:
every choice sequence contains a unique initial segment satisfying R at some point (i.e. our bar is thin);
every finite sequence satisfying R also satisfies A;
if all extensions of a finite sequence by one element satisfy A, then that finite sequence also satisfies A;
then we can conclude that A holds for the empty sequence.
This principle of bar induction is favoured in the works of Joan Moschovakis and is (intuitionistically) provably equivalent to decidable bar induction.
Monotonic bar induction (BIM)
Given two predicates R and A on finite sequences of natural numbers such that all of the following conditions hold:
every choice sequence contains at least one initial segment satisfying R at some point;
once a finite sequence satisfies R, then every possible extension of that finite sequence also satisfies R (i.e. our bar is monotonic);
every finite sequence satisfying R also satisfies A;
if all extensions of a finite sequence by one element satisfy A, then that finite sequence also satisfies A;
then we can conclude that A holds for the empty sequence.
This principle of bar induction is used in the works of A. S. Troelstra, S. C. Kleene, Dragalin and Joan Moschovakis.
Relationships between these schemata and other information
The following results about these schemata can be intuitionistically proved:
(The symbol "" is a "turnstile".)
Unrestricted bar induction
An additional schema of bar induction was originally given as a theorem by Brouwer (1975) containing no "extra" restriction on R, under the name The Bar Theorem. However, the proof for this theorem was erroneous, and unrestricted bar induction is not considered to be intuitionistically valid (see Dummett 1977 pp 94–104 for a summary of why this is so). The schema of unrestricted bar induction is given below for completeness.
Given two predicates R and A on finite sequences of natural numbers such that all of the following conditions hold:
every choice sequence contains at least one initial segment satisfying R at some point;
every finite sequence satisfying R also satisfies A;
if all extensions of a finite sequence by one element satisfy A, then that finite sequence also satisfies A;
then we can conclude that A holds for the empty sequence.
Relations to other fields
In classical reverse mathematics, "bar induction" (BI) denotes the related principle stating that if a relation ≺ is a well-order, then we have the schema of transfinite induction over ≺ for arbitrary formulas.
References
Brouwer, L. E. J. Collected Works, Vol. I, Amsterdam: North-Holland (1975).
Michael Dummett, Elements of intuitionism, Clarendon Press (1977)
S. C. Kleene, R. E. Vesley, The foundations of intuitionistic mathematics: especially in relation to recursive functions, North-Holland (1965)
Michael Rathjen, The role of parameters in bar rule and bar induction, Journal of Symbolic Logic 56 (1991), no. 2, pp. 715–730.
A. S. Troelstra, Choice sequences, Clarendon Press (1977)
A. S. Troelstra and Dirk van Dalen, Constructivism in Mathematics, Studies in Logic and the Foundations of Mathematics, Elsevier (1988)
Constructivism (mathematics)
Mathematical induction | Bar induction | [
"Mathematics"
] | 1,097 | [
"Mathematical logic",
"Mathematical induction",
"Constructivism (mathematics)",
"Proof theory"
] |
5,492,801 | https://en.wikipedia.org/wiki/Anal%20plug | An anal plug (anal tampon or anal insert) is a medical device that is often used to treat fecal incontinence, the accidental passing of bowel movements, by physically blocking involuntary loss of fecal material. Feces are the solid remains of food that is not digested in the small intestine and is instead broken down by bacteria in the large intestine. Anal plugs vary in design and composition, but they are typically single-use, intra-anal, disposable devices made out of soft materials to contain fecal material and prevent it from leaking out of the rectum. The idea of an anal insert for fecal incontinence was first evaluated in a study of 10 participants with three different designs of anal inserts.
Use
Populations
Anal plugs may be beneficial to certain risk groups including, but not limited to, frail older people, women following childbirth, and people with some neurological or spinal diseases, severe cognitive impairment, urinary incontinence, or pelvic organ prolapse. Typically, anal plugs are used in people whose symptoms do not improve with typical treatments; these may include changes in diet, physical therapy, nerve stimulation targeting the sacral and tibial nerves, surgical repair of the anus, and use of a colostomy bag. Nerve stimulation involves the placement of electrodes near the nerves that travel through a person's hips and down their legs. Colostomy bags are bags that collect feces from the intestines through a surgically created hole in a person's belly.
Children with certain conditions, including spina bifida and anal atresia, may struggle with leaks even after physical therapy and other interventions, so they may benefit from using anal plugs. Spina bifida is a birth defect where a part of the spinal cord is not surrounded by the vertebrae; either the spinal cord is still in the back, just not surrounded by the vertebral bones, or it can be bulging out in a sac. Anal atresia is another birth defect where the rectum and/or anus is deformed: fecal incontinence is a side effect.
The one common feature of people who use anal plugs is they all experience fecal incontinence, which is both uncomfortable and embarrassing. Some people may use it temporarily during certain events or while they have certain temporary medical conditions, such as women recovering from childbirth; others may need to use anal plugs for the rest of their lives. Others may opt to also use perineal pads or undergarments such as diapers to prevent the soiling of oneself. Management of fecal incontinence is very person-dependent, as anally inserted devices may not be for everyone.
Benefits
The plug allows individuals control over their bowel movements and may decrease negative side effects due to leakage. People have reported suffering from fewer anal rashes, decreased soreness, and improved hygiene. Anal plugs can also be an affordable option: some countries with universal healthcare, like Germany, buy anal plugs for people who need them. During the development and approval of the Renew Insert, which is one of the types of anal plugs that have been developed, researchers found that on average people used 2.6 inserts per day. However, this plug is designed to be worn for twelve hours; the number of inserts needed per day increases when products that have to be changed four times a day are considered.
Whether anal plugs are a good choice varies based on the person. Trials have shown that anywhere from 25 to 80% of people were satisfied enough with the plug to continue using it.
Drawbacks
People who have to pay for anal plugs out of pocket may find them to be expensive. Although some countries, such as Germany, may subsidize the cost, private insurance may not cover anal plugs. Plugs have to be changed a minimum of twice a day, and some up to four times per day.
Tolerability
At the same time, discomfort, side effects, and occasional leaks from using anal plugs have been reported. A 2015 systematic review found that anal plugs may be helpful in treating fecal incontinence, provided that they are tolerated and that people actually use them. A 2001 study found that a majority of people could not tolerate an anal plug due to discomfort. Although only 20% decided to continue using the plug on a regular basis, anal plugs were generally successful at controlling fecal incontinence. Since anal plugs are considered an invasive strategy, they can result in pain, soreness, irritation, fecal urgency, and societal embarrassment. Bleeding hemorrhoids were a rare adverse event. There is not a lot of evidence reported on the efficacy of the different types of anal plugs, so the choice of plug can be up to people and their doctors. Other challenges of the plug include occasional slippage, which decreases the efficacy and increases discomfort, but people report more usage for occasions where anal leakage would be publicly troublesome. Anal plugs of smaller volume may resolve some people's discomfort while still providing protection against leakage.
Due to the complexity of fecal incontinence, the use of anal plugs is not well defined in guidelines and treatment pathways, which decreases medical professionals' comfort with prescribing anal plugs and advising on their use. Therefore, more studies are needed in order for healthcare providers and the general public to understand when anal plugs are an option worth considering.
Products
Anal plugs may come as tampons or inserts, as detailed below. Tampons are similar to menstruation tampons, which can expand and absorb substances to prevent leakage. Inserts provide a physical blockage and are inserted into the rectal cavity to stop feces from escaping. Plugs come in a variety of shapes, sizes, and materials. Polyurethane plugs have been found to be preferred over polyvinyl-alcohol plugs, and are also associated with less plug loss. If a plug becomes stuck in the rectum and does not come out with the next bowel movement, it is recommended to see a doctor.
Peristeen Anal Plug
The Peristeen (formerly Conveen) Anal Plug produced by Coloplast is a disposable polyurethane insert coated in a water-soluble film. When exposed to warmth and moisture in the anal canal, the film dissolves, allowing the plug to expand. It has a conical tip and a cord to tug on for removal. It comes in two sizes, and is inserted similarly to a suppository. This plug may remain inside the rectum for up to 12 hours.
A-tam Anal Tampon
The polyvinyl-alcohol anal tampons A-tam produced by Med SSE-System in Germany come in various sizes and shapes including cone, cylindrical, spiral, concave, convex, and ball-headed. The different shapes can address different concerns, based on anal sphincter muscle function, remaining muscle tissue, and gassiness. The plug should be soaked in warm water before insertion, similar to some rectal suppositories. While the plugs are single-use, the applicator may be used multiple times. Changing the plug every 6-8 hours, up to three times a day, and maintaining anal hygiene are also recommended. A prescription is needed for this product.
Renew Insert
Renew Inserts are single-use silicone plugs that come in two sizes and require a prescription. The large size is recommended for adults, while the regular size may be used for children and young adults. Each plug is made of two attached disks and comes connected to a fingertip applicator, which is disposed of after insertion. The top disk blocks the stool, while the bottom disk secures the insert in place to prevent displacement. It can be removed either by pulling it out or through defecation, removing the concern of losing the plug in the anal canal.
ProCon2
The ProCon2 device by Incontinent Control Devices, Inc. is composed of an inflatable balloon cuff attached to the end of a silicone catheter with vent holes at the tip for flatulence to escape. The catheter is inserted into the anal canal and the balloon cuff is inflated with water or air through a syringe attached to the exposed portion of the catheter. Once inflated, the exposed end of the catheter is pulled until the balloon cuff meets resistance within the anus. An infrared photo-interrupter sensor in the catheter senses stool in the anal canal, sending a notification to a pager. To remove the device, the catheter is cut to allow the balloon to deflate before pulling it out. Each device is made for single use only.
SURGISPON anal tampon
The SURGISPON anal tampon is made of a gamma-sterilized gelatin sponge and, unlike the previously mentioned products, is used to stop bleeding in anal and rectal surgeries and post-surgery bleeding and pain. The sponge can absorb up to 45 times its weight, liquefying and eventually being excreted in a bowel movement within 1-2 days. It also has an opening in the middle for gas to escape, and may be used as a carrier for delivery of antibiotics, thrombin, or chemotherapeutics. It is inserted either dry or wet, but for surgeries, it is inserted dry using a proctoscope.
Anal hygiene
Anal cleansing is the practice of proper sanitization and healthy hygiene in a person's anal area. It is usually done after defecation. It helps prevent pathogen exposure, which could lead to infections or diseases. The cleansing process involves rinsing the anal area with water and usually wiping it with toilet paper or baby wipes. Sometimes a hand is used to rub the area while rinsing it with water; other times a bidet may be used instead. A bidet is a plumbing fixture, usually installed as a separate unit beside the bathroom tub, used to wash one's inner buttocks and anal area.
Reasons to maintain anal hygiene:
Maintaining good anal hygiene contributes to an individual's overall health. Some of the key benefits include:
Prevent infections - maintaining anal hygiene reduces the risk of fungal infections and of other bacteria entering the anal area and, eventually, the rest of the body.
Prevent UTIs - a urinary tract infection (UTI) is an infection in any part of the urinary system. Ensuring proper wiping and good anal hygiene reduces the risk of bacteria spreading from the anal area to the urinary tract.
Prevent anal fissures - anal fissures are very small tears around the anus. These can be very uncomfortable and painful. Ensuring proper hygiene of the anal area helps reduce the chance of fissures.
Minimize unpleasant odor - a regular cleansing routine for the anal area helps control odor associated with fecal matter.
Assist in sexual health and the use of anal products - anal hygiene is very important for individuals who are sexually active, as well as for those using anal products. It helps reduce the risk of sexually transmitted infections (STIs), infections transmitted through sexual contact, and helps maintain comfort during these activities.
References
Gastroenterology
Incontinence
Defecation
Medical devices | Anal plug | [
"Biology"
] | 2,357 | [
"Incontinence",
"Excretion",
"Medical devices",
"Defecation",
"Medical technology"
] |
5,492,831 | https://en.wikipedia.org/wiki/History%20of%20Yahoo | Yahoo! was founded in January 1994 by Jerry Yang and David Filo, who were electrical engineering graduate students at Stanford University when they created a website named "Jerry and David's Guide to the World Wide Web". The Guide was a directory of other websites, organized in a hierarchy, as opposed to a searchable index of pages. In April 1994, Jerry and David's Guide to the World Wide Web was renamed "Yahoo!". The word "YAHOO" is a backronym for "Yet Another Hierarchically Organized Oracle" or "Yet Another Hierarchical Officious Oracle." The yahoo.com domain was created on January 18, 1995.
Yahoo! grew rapidly throughout the 1990s and diversified into a web portal, followed by numerous high-profile acquisitions. The company's stock price rose rapidly during the dot-com bubble and closed at an all-time high of US$118.75 in 2000. However, after the dot-com bubble burst, it reached an all-time low of $8.11 in 2001. Yahoo! formally rejected an acquisition bid from the Microsoft Corporation in 2008. In early 2012, Yahoo laid off 2,000 employees (14 percent of the workforce), the largest layoff in Yahoo!'s history.
Carol Bartz replaced co-founder Yang as chief executive officer in January 2009, but was fired by the board of directors in September 2011. Tim Morse was appointed as interim CEO following Bartz's departure. Former PayPal president Scott Thompson became CEO in January 2012 and after he resigned was replaced by Ross Levinsohn as the company's interim CEO on May 13, 2012. On July 16, former Google executive Marissa Mayer became the CEO of the company.
Mayer resigned as CEO of Yahoo in 2017, when the company was sold to Verizon for $4.48 billion following Yahoo's disclosure of security breaches. Guru Gowrappan was CEO of Yahoo from 2018 to 2021.
Jim Lanzone is the current CEO of Yahoo, appointed September 2021.
Early history (1994–1996)
When Jerry and David's Guide to the World Wide Web was renamed to Yahoo! in 1994, Yang and Filo said that "Yet Another Hierarchical Officious Oracle" was a suitable backronym for this name, but they insisted they had selected the name because they liked the word's general definition, as in Gulliver's Travels by Jonathan Swift: "rude, unsophisticated, uncouth." Its URL was akebono.stanford.edu/~yahoo.
The yahoo.com domain was created in January 1995, although by the end of 1994 Yahoo! had already received one million hits. Yang and Filo realized their website had massive business potential, and on March 2, 1995, Yahoo! was incorporated.
Yang and Filo sought the advice of entrepreneur Randy Adams for a recommendation of a venture capital firm, and Adams introduced them to Michael Moritz. On April 5, 1995, Michael Moritz of Sequoia Capital provided Yahoo! with two rounds of venture capital, raising approximately $3 million. On April 12, 1996, Yahoo! had its initial public offering, raising $33.8 million by selling 2.6 million shares at the opening bid of $13 each.
The word "Yahoo" had previously been trademarked for barbecue sauce, knives (by EBSCO Industries) and human propelled watercraft (by Old Town Canoe Co.). Therefore, in order to get control of the trademark, Yang and Filo added the exclamation mark to the name. However, the exclamation mark is often incorrectly omitted when referring to Yahoo!. Srinija Srinivasan, an alumna of Stanford University, was hired as Yahoo!'s fifth employee as "Ontological Yahoo!" to assist Yang and Filo with organizing the content on the internet.
Growth (1997–1999)
In the late 1990s, Yahoo!, MSN, Lycos, Excite, and other web portals were growing rapidly. Web portal providers moved to acquire companies to expand their range of services, generally with the goal of increasing the time each user stays within the portal.
On March 8, 1997, Yahoo! acquired online communications company Four11. Four11's webmail service, Rocketmail, became Yahoo! Mail. The company also acquired ClassicGames.com and re-branded it Yahoo! Games. Yahoo! acquired direct marketing company Yoyodyne Entertainment, Inc. on October 12, 1998. In January 1999, Yahoo! acquired web hosting provider GeoCities. Yahoo! also acquired eGroups, which became Yahoo! Groups in June 2000. It acquired Pager, an instant messaging service that was renamed Yahoo! Messenger a year later.
When acquiring companies, Yahoo! often changed the terms of service. For example, they claimed intellectual property rights for content on their servers, unlike the previous policies of the companies they acquired. As a result, many of the acquisitions were controversial and unpopular with users of the existing services.
Dot-com bubble (2000–2001)
Yahoo! stock doubled in price in the last month of 1999. On January 3, 2000, at the height of the dot-com boom, Yahoo! stock closed at a high of $118.75 a share. Sixteen days later, shares in Yahoo! Japan became the first stock in Japanese history to trade at over ¥100,000,000, reaching a price of 101.4 million yen ($962,140 at that time).
On February 7, 2000, yahoo.com was brought down for a few hours as the victim of a distributed denial of service attack (DDoS). On the next day, its shares rose about $16, or 4.5 percent as the failure was blamed on hackers rather than on an internal glitch, unlike a fault with eBay earlier that year.
During the dot-com boom, the cable news network CNBC reported that Yahoo! and eBay were discussing a 50/50 merger. Although the merger never materialized, the two companies decided to form a marketing/advertising alliance six years later in 2006.
On June 26, 2000, Yahoo! and Google signed an agreement which would engage the Google engine to power searches made on yahoo.com.
In 2000, Yahoo became one of the first companies to implement a BizOps or business operations team.
Post dot-com bubble (2002–2005)
Yahoo! was one of the few surviving companies after the dot-com bubble burst. Nevertheless, on September 26, 2001, Yahoo! stock closed at an all-time low of $8.11.
Yahoo! formed partnerships with telecommunications and Internet providers to create content-rich broadband services to compete with AOL. On June 3, 2002, SBC and Yahoo! launched a national co-branded dialup service. In July 2003, BT Openworld announced an alliance with Yahoo!
On August 23, 2005, Yahoo! and Verizon launched an integrated DSL service.
In late 2002, Yahoo! began to bolster its search services by acquiring other search engines. In December 2002, Yahoo! acquired Inktomi. In February 2005, Yahoo! acquired Konfabulator and rebranded it Yahoo! Widgets, a desktop application, and in July 2003, it acquired Overture Services, Inc. and its subsidiaries AltaVista and AlltheWeb. On February 18, 2004, Yahoo! dropped Google-powered results and returned to using its own technology to provide search results.
In March 2004, Yahoo! launched a paid inclusion program whereby commercial websites were guaranteed listings on the Yahoo! search engine after payment. This scheme was lucrative, but proved unpopular both with website marketers (who were reluctant to pay), and the public (who were unhappy about the paid-for listings being indistinguishable from other search results). In October 2006, Paid Inclusion ceased to guarantee any commercial listing and only helped the paid inclusion customers by crawling their site more often, by providing some statistics on the searches that led to the page, and posting additional smart links (provided by customers as feeds) below the actual url.
In 2004, in response to Google's release of Gmail, Yahoo! upgraded the storage of all free Yahoo! Mail accounts from 4 MB to 1 GB, and all Yahoo! Mail Plus accounts to 2 GB. On July 9, 2004, Yahoo! acquired e-mail provider Oddpost, adding an Ajax interface to Yahoo! Mail Beta. On August 24, 2005, Google released Google Talk, a voice over IP and instant messaging service competing with Yahoo! Messenger. On October 13, 2005, Yahoo! and Microsoft announced that Yahoo! and MSN Messenger would become interoperable. In 2007, Yahoo! removed the storage meters on Yahoo Mail, allowing users unlimited storage.
Yahoo! continued the acquisition of companies to expand its range of services, particularly Web 2.0 services. Yahoo! Launch became Yahoo! Music in February 2005. On March 20, 2005, Yahoo! purchased photo sharing service Flickr. That same month, the company launched its blogging and social networking service Yahoo! 360°. In June 2005, Yahoo! acquired blo.gs, a service based on RSS feed aggregation. Yahoo! purchased the online social event calendar Upcoming.org in October 2005. Yahoo! acquired social bookmark site del.icio.us in December 2005, and the playlist sharing community Webjay in January 2006.
Yahoo! (2006–2008)
Yahoo! Next was an incubation ground for future Yahoo! technologies in their beta testing phase, similar to Google Labs. It contained forums for Yahoo! users to give feedback to assist in the development of these future Yahoo! technologies.
In early 2006, Yahoo! offered users the opportunity to beta test a new version of the Yahoo! homepage. However, the test only supported the Internet Explorer and Mozilla Firefox browsers. Users of other browsers, such as Opera, criticized Yahoo! for this move; Yahoo! said it intended to support additional browsers in the future.
On August 27, 2007, Yahoo! released a new version of Yahoo! Mail. It added Yahoo! Messenger integration (which included Windows Live Messenger due to the networks' federation) and free text messages (although not necessarily free to the receiver) to mobile phones in the U.S., Canada, India, and the Philippines.
On January 29, 2008, Yahoo! announced that the company was laying off 1,000 employees, as the company had failed to effectively compete with industry search leader Google. The cuts represented 7% of the company's workforce of 14,300.
In February 2008, Yahoo! acquired Cambridge, Massachusetts-based Maven Networks, a supplier of internet video players and video advertising tools, for approximately $160 million.
Yahoo! announced on November 17, 2008, that Jerry Yang would be stepping down as CEO.
On December 10, 2008, Yahoo! began layoffs of 1,520 employees worldwide due to the global economic downturn.
Acquisition attempt by Microsoft
Microsoft and Yahoo! were in merger discussions in 2005, 2006, and 2007, that were ultimately unsuccessful. At the time, analysts were skeptical about the wisdom of a business combination by these two firms.
On February 1, 2008, after its friendly takeover offer was rebuffed by Yahoo!, Microsoft made an unsolicited takeover bid to buy Yahoo! for $44.6 billion in cash and stock. Days later, Yahoo! considered alternatives to the merger with Microsoft, including a merger with Internet giant Google or a potential transaction with News Corp. On February 11, 2008, Yahoo! rejected Microsoft's offer as "substantially undervaluing" Yahoo!'s brand, audience, investments, and growth prospects.
On February 22, two Detroit-based pension companies sued Yahoo! and its board of directors for allegedly breaching their duty to shareholders by opposing Microsoft's takeover bid and pursuing "value destructive" third-party deals. In early March, Google CEO Eric Schmidt went on record saying that he was concerned that a potential Microsoft-Yahoo! merger might hurt the internet by compromising its openness. The value of Microsoft's cash and stock offer declined with Microsoft's stock price, falling to $42.2 billion by April 4. On April 5, Microsoft CEO Steve Ballmer sent a letter to Yahoo!'s board of directors stating that if within three weeks they had not accepted the deal, Microsoft would approach shareholders directly in hopes of electing a new board and moving forward with merger talks (a hostile takeover). In response, Yahoo! stated on April 7 that they were not opposed to a merger, but that they wanted a better offer. In addition, they stated that Microsoft's "aggressive" approach was worsening their relationship and the chances of a "friendly" merger. Later the same day, Yahoo! stated that the original $44.6 billion offer was not acceptable. Following this, there was considerable discussion of having Time Warner's AOL and Yahoo! merge, instead of the originally proposed Microsoft deal.
On May 3, 2008, Microsoft withdrew the offer. During a meeting between Ballmer and Yang, Microsoft had offered to raise its offer by $5 billion to $33 per share, while Yahoo! demanded $37 per share. One of Ballmer's representatives suggested that Yang would implement a poison pill to make the takeover as difficult as possible, saying "They are going to burn the furniture if we go hostile. They are going to destroy the place."
Analysts said that Yahoo!'s shares, which closed at $28.67 per share on May 2, were likely to drop below $25 per share and perhaps as low as $20 per share on May 5, which would put significant pressure on Yang to engineer a turnaround of the company. Some suggested that institutional investors would file lawsuits against Yahoo!'s board of directors for not acting in shareholder interest by refusing Microsoft's offer.
On May 5, 2008, following Microsoft's withdrawal, Yahoo!'s stock dropped 15% lower to $23.02 per share in Monday trading and trimmed about $6 billion off of its market capitalization.
On June 12, 2008, Yahoo! announced that it had ended all talks with Microsoft about purchasing either part of the business (the search advertising business) or the entire company. Talks had taken place the previous weekend (June 8), during which Microsoft allegedly told Yahoo! that it was no longer interested in a purchase of the company at the price offered earlier – $33 per share. Also, on June 12, Yahoo! announced a non-exclusive search advertising alliance with Google. Upon this announcement, many executives and senior employees announced their plans to leave the company as they appeared to have lost confidence in Yahoo!'s strategies. According to market analysts, those pending departures impacted Wall Street's perception of the company.
On July 7, 2008, Microsoft said it would consider offering another bid for Yahoo! if the company's nine directors would be ousted at the annual meeting scheduled to be held on August 1, 2008. Microsoft believed it would be able to better negotiate with a new board.
Billionaire investor Carl Icahn, calling the current board irrational in its approach to talks with Microsoft, launched a proxy fight to replace Yahoo!'s board. On July 21, 2008, Yahoo! settled with Carl Icahn, agreeing to appoint him and two of his allies to an expanded board.
On November 20, 2008, almost 10 months after Microsoft's initial offer of $33 per share, Yahoo!'s stock (YHOO) dropped to a 52-week low, trading at only $8.94 per share.
On November 30, 2008, Microsoft offered to buy Yahoo!'s search business for $20 billion.
On July 29, 2009, a 10-year deal was announced giving Microsoft full access to Yahoo!'s search engine to be used in future Microsoft projects in its Bing search engine.
Under the deal, Microsoft was not required to pay any cash up front to Yahoo!. The day after the deal was announced, Yahoo!'s share price declined more than 10% to $15.14 per share, about 60% lower than Microsoft's takeover bid a year earlier.
Carol Bartz era (2009–2011)
On January 13, 2009, Yahoo! appointed Carol Bartz, former executive chairman of Autodesk, as its new chief executive officer and a member of the board of directors.
Yahoo! wished to change its direction after chief executive Carol Bartz replaced co-founder Jerry Yang.
In July 2009, Microsoft and Yahoo! agreed to a deal that would see Yahoo!'s websites use both Microsoft's search technology and search advertising. Yahoo! in turn became the sales team for banner advertising for both companies. While Microsoft would provide algorithmic search results, Yahoo! would control the presentation and personalization of results for searches on its pages.
On July 21, 2009, Yahoo! launched a new version of its front page, called Metro. The new page allowed users to customize it through the prominent "My Favorites" panel on the left side and integrate third-party web services and launch them within one page.
On October 28, 2009, Bartz told PCWorld that she struggled with the question of what Yahoo! is when she took over as CEO in January 2009. After talking to many users in about 10 countries, she said, Yahoo! executives concluded that users consider it their "home on the Internet."
In September 2011, Bartz sent an email to Yahoo! employees saying she was removed from her position at Yahoo! by the company's chairman Roy Bostock via a phone call. CFO Tim Morse was named as Interim CEO of the company.
Scott Thompson period (2012)
On January 4, 2012, Scott Thompson, former President of PayPal, was named the new chief executive officer.
Employee layoffs
In early 2012, after the appointment of Scott Thompson as the new CEO, many rumors spread about large layoffs looming. Kara Swisher, who covered Yahoo at All Things Digital, reported that Yahoo's Chief Product Officer Blake Irving resigned.
Andrei Broder, who was VP of computational advertising and chief scientist of the Advertising Product Group, as well as Jianchang (JC) Mao, who headed advertising sciences, left the company. This followed the departures of Yahoo! Labs head Prabhakar Raghavan who left for Google, and Raghu Ramakrishnan, who went to Microsoft.
On April 4, 2012, Yahoo announced a cut of 2,000 jobs, or about 14 percent of the 14,100 workers employed by Yahoo. Yahoo! said it would save around $375 million annually after the layoffs were completed at the end of 2012.
Facebook patent lawsuit
On March 14, 2012, Yahoo! filed a lawsuit against Facebook over the alleged infringement of 10 patents. Facebook responded by countersuing Yahoo!.
Reorganization
In an email memo sent to employees in April 2012, Scott Thompson reiterated his view that customers should come first at Yahoo. He defined customers as both users and advertisers. He also reorganized the company. The reorganization took effect on May 1, 2012, and organized Yahoo!'s operations into three major groups: Consumer, Regions, and Technology.
The Consumer group had three groups: Media, Connections, and Commerce. The customers of this group are the users of Yahoo!.
The Regions group operated three regions: Americas, APAC, and EMEA. The customers of this group are the advertisers of Yahoo!.
The Technology group included Core Platforms, and Central Technology. It provides technology and support to the other two major groups.
The Corporate group (Finance, Legal, and HR) remained unchanged and continued to support the new groups.
Thompson's College degree controversy
On May 3, 2012, news reported that Scott Thompson's biography at Yahoo was incorrect. The CEO's biography stated that he held a dual accounting and computer science degree from Stonehill College, whereas investigation revealed that Thompson's degree was solely in accounting, and not in computer science. The information came from Dan Loeb, founder of Third Point LLC, which held 5.8% of Yahoo! stock, and who had been trying to gain seats on the board of directors of Yahoo!
In response to this, Yahoo!'s board of directors formed a three-member committee to review Thompson's academic credentials and the vetting process that preceded his selection as CEO. The review committee's chairman was Alfred Amoroso, who joined Yahoo!'s board in February 2012. The other directors on the panel were John Hayes and Thomas McInerney, who both joined in April 2012. The committee retained Terry Bird as independent counsel.
Thompson replaced by Ross Levinsohn (interim)
On May 13, 2012, Scott Thompson was replaced by Ross Levinsohn as the company's interim CEO. In June 2012, Yahoo! hired former Google director Michael G. Barrett as its chief revenue officer.
Marissa Mayer era (2012–2017)
On July 16, 2012, former Google executive and Walmart corporate director Marissa Mayer was named Yahoo! CEO and President, becoming the youngest CEO of a Fortune 500 company.
On May 19, 2013, the Wall Street Journal reported that Yahoo's board had approved an all-cash deal to purchase the six-year-old blogging website Tumblr. The announcement was reported to signify a changing trend in the technology industry, as large corporations like Yahoo, Facebook, and Google acquired start-up Internet companies that generate low amounts of revenue as a way in which to connect with sizeable, fast-growing online communities. The Wall Street Journal stated that the purchase of Tumblr would satisfy the company's need for "a thriving social-networking and communications hub." Yahoo would pay $1.1 billion for Tumblr, and the company's CEO and founder David Karp would remain a large shareholder.
The revamp of the Yahoo-owned photography service Flickr was launched in Times Square, New York, U.S. on May 20, 2013, in an event attended by the city's mayor and a large contingent of journalists. Eleven billboards in Times Square advertised the website's new tagline "biggr, spectaculr, wherevr" as part of the launch, and Yahoo stated that it would provide Flickr users with a free terabyte of storage. The official announcement of the Tumblr acquisition was also included in the May 20 event.
The media reported on Yahoo!'s interest in the video streaming site Hulu on May 26, 2013. Under Mayer's leadership, Yahoo!'s bid was worth between $600 and $800 million, as a variety of options that consisted of different circumstances were put forward by the company. As of May 28, 2013, Yahoo!'s videos attracted 45 million unique visitors a month, while Hulu had 24 million visitors. The combination of the two audiences would have placed Yahoo! in the second-most popular position after Google and its subsidiary YouTube.
In July 2013, Yahoo Inc acquired Qwiki for $50 million.
On August 2, 2013, Yahoo Inc announced the acquisition of social web browser concern RockMelt. With the acquisition, the RockMelt team, including the concern's CEO Eric Vishria and CTO Tim Howes became part of Yahoo team. As a result, all the RockMelt apps and existing web services were scheduled to cease on August 31, 2013.
On August 7, 2013, at around midnight EDT, Yahoo! announced that it would be introducing the final version of the new logo on 5 September 2013 at 4:00 a.m. UTC. To mark the occasion, the company launched a "30 days of change" campaign that involved releasing a variation of the logo on each of the 30 days leading up to the revelation date.
Data collated by comScore during July 2013 revealed that more people in the U.S. visited Yahoo! websites during the month in comparison to Google websites. The occasion was the first time that Yahoo! outperformed Google since 2011. The data did not incorporate visit statistics for the Yahoo!-owned Tumblr website or mobile phone usage.
On February 11, 2014, Yahoo! acquired social diary company Wander.
On February 13, 2014, Yahoo! acquired Distill, a technical recruiting company.
On February 17, 2016, Yahoo! replaced Yahoo! Labs with Yahoo! Research.
On September 22, 2016, Yahoo disclosed a data breach in which hackers stole information associated with at least 500 million user accounts in late 2014. According to the BBC, this was the largest technical breach reported to date. Specific details of material taken include names, email addresses, telephone numbers, encrypted or unencrypted security questions and answers, dates of birth, and encrypted passwords. The breach used manufactured web cookies to falsify login credentials, allowing hackers to gain access to any account without needing a password. On December 14, 2016, a separate data breach, which had occurred around August 2013, was reported. This breach affected over 1 billion user accounts and was considered the largest discovered in the history of the Internet.
On January 9, 2017, Yahoo! CEO Marissa Mayer announced she would step down from Yahoo's board of directors if its $4.5 billion sale to Verizon went through. She also announced that when that deal closed, Yahoo! would rename itself Altaba.
References
External links
The History of Yahoo! – How It All Started ...
History of Silicon Valley
Yahoo!
Yahoo!
Yahoo! | History of Yahoo | [
"Technology"
] | 5,245 | [
"History of computer companies",
"History of computing"
] |
5,493,022 | https://en.wikipedia.org/wiki/Allocation%20concealment | In a randomized experiment, allocation concealment hides the sorting of trial participants into treatment groups so that this knowledge cannot be exploited. Adequate allocation concealment serves to prevent study participants from influencing treatment allocations for subjects. Studies with poor allocation concealment (or none at all) are prone to selection bias.
Some standard methods of ensuring allocation concealment include sequentially numbered, opaque, sealed envelopes (SNOSE); sequentially numbered containers; pharmacy controlled randomization; and central randomization. CONSORT guidelines recommend that allocation concealment methods be included in a study's protocol, and that the allocation concealment methods be reported in detail in their publication; however, a 2005 study determined that most clinical trials have unclear allocation concealment in their protocols, in their publications, or both. A 2008 study of 146 meta-analyses concluded that the results of randomized controlled trials with inadequate or unclear allocation concealment tended to be biased toward beneficial effects only if the trials' outcomes were subjective as opposed to objective.
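The logic of central randomization can be illustrated with a small, hypothetical sketch in Python: the allocation list is generated once and held only by a central office, and a recruiting site learns an assignment only after a participant has been irreversibly registered. The block size, arm names, and class interface below are illustrative assumptions, not features of any particular trial system.

import random

def make_allocation_list(n_blocks=25, block_size=4, seed=2024):
    # Permuted-block list of assignments, generated once and held centrally;
    # it is never shared with the recruiting sites.
    rng = random.Random(seed)
    allocations = []
    for _ in range(n_blocks):
        block = ["treatment"] * (block_size // 2) + ["control"] * (block_size // 2)
        rng.shuffle(block)
        allocations.extend(block)
    return allocations

class CentralRandomizer:
    def __init__(self, allocations):
        self._allocations = allocations
        self._registrations = []            # audit trail of enrolments

    def register(self, participant_id):
        # The assignment is revealed only after the registration is recorded,
        # so site staff cannot peek ahead or skip unfavourable allocations.
        arm = self._allocations[len(self._registrations)]
        self._registrations.append(participant_id)
        return arm

randomizer = CentralRandomizer(make_allocation_list())
print(randomizer.register("participant-001"))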
Allocation concealment is different from blinding. An allocation concealment method prevents influence on the randomization process, while blinding conceals the outcome of the randomization. However, allocation concealment may also be called "randomization blinding".
Impact
Without the use of allocation concealment, researchers may (consciously or unconsciously) place subjects expected to have good outcomes in the treatment group, and those expected to have poor outcomes in the control group. This introduces considerable bias in favor of treatment.
Naming
Allocation concealment has also been called randomization blinding, blinded randomization, and bias-reducing allocation, among other names. The term 'allocation concealment' was first introduced by Schulz et al. The authors justified the introduction of the term:
Subversion and fraud
Traditionally, each patient's treatment allocation was stored in a sealed envelope, which was opened to determine the treatment allocation. However, this system is prone to abuse. Reports of researchers opening envelopes prematurely or holding envelopes up to lights to determine their contents have led some researchers to argue that the use of sealed envelopes is no longer acceptable. Even so, sealed envelopes were still in use in some clinical trials.
Modern clinical trials often use centralized allocation concealment. Although considered more secure, central allocations are not completely immune from subversion. Typical and sometimes successful strategies include keeping a list of previous allocations (up to 15% of study personnel report keeping lists).
See also
Blinded experiment
Design of experiments
Randomized experiment
Metascience
Sealedenvelope.com—a provider of allocation concealment services
References
Design of experiments
Research
Scientific misconduct
Scientific method | Allocation concealment | [
"Technology"
] | 530 | [
"Scientific misconduct",
"Ethics of science and technology"
] |
5,493,064 | https://en.wikipedia.org/wiki/Kalpa%20%28time%29 | A kalpa is a long period of time (aeon) in Hindu and Buddhist cosmology, generally between the creation and recreation of a world or universe.
Etymology
Kalpa, in this context, means "a long period of time (aeon) related to the lifetime of the universe (creation)." It is derived from कॢप् (kḷp) together with -अ (-a), a nominalizing suffix.
Hinduism
In Hinduism, a kalpa is equal to 4.32 billion years, a "day of Brahma" (12-hour day proper) or one thousand mahayugas, measuring the duration of the world. Each kalpa is divided into 14 manvantara periods, each lasting 71 Yuga Cycles (306,720,000 years). Preceding the first and following each manvantara period is a juncture (sandhya) equal to the length of a Satya Yuga (1,728,000 years). A kalpa is followed by a pralaya (dissolution) of equal length, which together constitute a day and night of Brahma. A month of Brahma contains thirty such days and nights, or 259.2 billion years. According to the Mahabharata, 12 months of Brahma (=360 days) constitute his year, and 100 such years his life called a maha-kalpa (311.04 trillion years or 36,000 kalpa + 36,000 pralaya). Fifty years of Brahma are supposed to have elapsed, and we are now in the Shveta-Varaha Kalpa or the first day of his fifty-first year. At the end of a kalpa, the world is annihilated by fire.
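These figures are internally consistent, as a short arithmetic check shows (a minimal sketch in Python; the variable names are illustrative only):

# All durations are in ordinary (solar) years, as quoted above.
maha_yuga = 4_320_000                      # one Yuga Cycle
kalpa = 1_000 * maha_yuga                  # a day of Brahma: 4.32 billion years
manvantara = 71 * maha_yuga                # 306,720,000 years
sandhya = 1_728_000                        # juncture, the length of a Satya Yuga

# Fourteen manvantaras plus the fifteen junctures make up one kalpa.
assert 14 * manvantara + 15 * sandhya == kalpa

day_and_night = 2 * kalpa                  # a kalpa followed by a pralaya
month_of_brahma = 30 * day_and_night       # 259.2 billion years
year_of_brahma = 12 * month_of_brahma
maha_kalpa = 100 * year_of_brahma          # the life of Brahma
assert maha_kalpa == 311_040_000_000_000   # 311.04 trillion years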
The definition of a kalpa equaling 4.32 billion years is found in the Puranas—specifically Vishnu Purana and Bhagavata Purana.
The Matsya Purana (290.3–12) lists the names of 30 kalpas, each named by Brahma based on a significant event in the kalpa and the most glorious person in the beginning of the kalpa. These 30 kalpas or days (along with 30 pralayas or nights) form a 30-day month of Brahma.
The Vayu Purana has a different list of names for 33 kalpas, which G. V. Tagare describes as fanciful derivations.
Buddhism
In the Pali language of early Buddhism, the word kalpa takes the form kappa, and is mentioned in the assumed oldest scripture of Buddhism, the Sutta Nipata. This speaks of "Kappâtita: one who has gone beyond time, an Arahant". This part of the Buddhist manuscripts dates back to the middle part of the last millennium BCE.
Gautama Buddha claimed an incalculable number of Buddhas lived in previous kalpas: Vipassi Buddha 91 kalpas ago, Sikhi Buddha 31 kalpas ago, and three prior Buddhas in the present kalpa. He confines his teachings to the present kalpa, the duration of which he doesn't arithmetically define, but uses a similitude:
A similar similitude is found in the Mountain Pabbata Sutta (SN 15:5) of the Pali Canon:
Described in the Vibhanga division of the Abhidhamma Pitaka are sixteen rupa brahma lokas (worlds or planes) and four higher arupa brahma lokas, each attained through the imperfect, medial or perfect performance of the four states of jhāna (meditation), granting a duration of life measured in kalpas that exceed the top-most heavenly loka of 9.216 billion years:
1st jhāna leads to 3 lowest rupa lokas with respective lifespans of 1/3, 1/2 and 1 kalpa.
2nd jhāna leads to 3 higher rupa lokas with respective lifespans of 2, 4 and 8 kalpas.
3rd jhāna leads to 3 more higher rupa lokas with respective lifespans of 16, 32 and 64 kalpas.
4th jhāna leads to 7 highest rupa lokas with respective lifespans ranging from 500 to 16,000 kalpas, and 4 still higher arupa lokas with respective lifespans of 20,000; 40,000; 60,000 and 84,000 kalpas.
At the termination of each kalpa, the lower three rupa brahma lokas, attained through the 1st jhāna, and everything below them (six heavens, Earth, etc.) are destroyed by fire (seven suns), only to later again come into being.
In one explanation, there are four different lengths of kalpas. A regular kalpa is approximately 16 million years long (16,798,000 years), and a small kalpa is 1000 regular kalpas, or about 16.8 billion years. Further, a medium kalpa is roughly 336 billion years, the equivalent of 20 small kalpas. A great kalpa is four medium kalpas, or about 1.3 trillion years.
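The multiples quoted here can be checked in the same way (a brief sketch; the base figure of 16,798,000 years is the one given above):

regular_kalpa = 16_798_000
small_kalpa = 1_000 * regular_kalpa        # about 16.8 billion years
medium_kalpa = 20 * small_kalpa            # about 336 billion years
great_kalpa = 4 * medium_kalpa             # about 1.3 trillion years
assert great_kalpa == 1_343_840_000_000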
Gautama Buddha did not give the exact length of the maha-kalpa in terms of years. However, he gave several astounding analogies to understand it.
Imagine a huge empty cube at the beginning of a kalpa, approximately 16 miles on each side. Once every 100 years, you insert a tiny mustard seed into the cube. According to the Buddha, the huge cube will be filled even before the kalpa ends.
In one instance, when some monks wanted to know how many kalpas had elapsed so far, Buddha gave the below analogy:
If you count the total number of sand particles at the depths of the Ganga river, from where it begins to where it ends at the Bay of Bengal sea, even that number will be less than the number of passed kalpas.
Another definition of Kalpa is the world where Buddhas are born. There are generally 2 types of kalpa, Suñña-Kalpa and Asuñña-kalpa. The Suñña-Kalpa is the world where no Buddha is born. Asuñña-Kalpa is the world where at least one Buddha is born. There are 5 types of Asuñña-Kalpa:
Sāra-Kalpa – The world where one Buddha is born.
Maṇḍa-Kalpa – The world where two Buddhas are born.
Vara-Kalpa – The world where three Buddhas are born.
Sāramaṇḍa-Kalpa – The world where four Buddhas are born.
Bhadda-Kalpa – The world where five Buddhas are born.
The previous kalpa was the Vyuhakalpa (Glorious aeon), the present kalpa is called the Bhadrakalpa (Auspicious aeon), and the next kalpa will be the Nakshatrakalpa (Constellation aeon).
See also
Brahma
Hindu units of time
Kalpa (day of Brahma)
Manvantara (age of Manu)
Pralaya (period of dissolution)
Yuga Cycle (four yuga ages): Satya (Krita), Treta, Dvapara, and Kali
List of numbers in Hindu scriptures
References
External links
Kalpa names from various texts
Units of time
Buddhist philosophical concepts
Hindu philosophical concepts
Time in Buddhism
Time in Hinduism | Kalpa (time) | [
"Physics",
"Mathematics"
] | 1,556 | [
"Physical quantities",
"Time",
"Units of time",
"Quantity",
"Spacetime",
"Units of measurement"
] |
5,493,150 | https://en.wikipedia.org/wiki/Narrow-body%20aircraft | A narrow-body aircraft or single-aisle aircraft is an airliner arranged along a single aisle, permitting up to 6-abreast seating in a cabin less than 4 metres (13 ft) in width.
In contrast, a wide-body aircraft is a larger airliner usually configured with multiple aisles and a fuselage diameter of more than 5 metres (16 ft), allowing at least seven-abreast seating and often more travel classes.
Market
Historically, beginning in the late 1960s and continuing through the 1990s, twin engine narrow-body aircraft, such as the Boeing 737 Classic, McDonnell-Douglas MD-80 and Airbus A320 were primarily employed in short to medium-haul markets requiring neither the range nor the passenger-carrying capacity of that period's wide-body aircraft.
The re-engined Boeing 737 MAX and Airbus A320neo jets offer 500 miles more range, allowing them to operate 3,000-mile transatlantic flights between the eastern U.S. and Western Europe, previously dominated by wide-body aircraft.
Norwegian Air Shuttle, JetBlue and TAP Portugal plan to open direct routes between cheaper, smaller airports, bypassing airline hubs and offering lower fares.
The Boeing 737NG's 3,300-mile range is insufficient for fully laden transatlantic operations, so it operates at reduced capacity, like the Airbus A318, while the Airbus A321LR could replace the less fuel-efficient Boeing 757s used on these routes since their production ended in 2004.
Boeing will face competition and pricing pressure from the Embraer E-Jet E2 family, Airbus A220 (formerly Bombardier CSeries) and Comac C919.
Between 2016 and 2035, FlightGlobal expects 26,860 single-aisles to be delivered for almost $ billion, 45% Airbus A320 family ceo and neo and 43% Boeing 737 NG and max.
By June 2018, there were 10,572 Airbus A320neo and Boeing 737 MAX orders: 6,068 Airbuses (%, 2,295 with CFMs, 1,623 with PWs and 2,150 with not yet decided engines) and 4,504 Boeings (%); 3,446 in Asia-Pacific (%), 2,349 in Europe (%), 1,926 in North America (%), 912 in Latin America (%), 654 in Middle East (%), 72 in Africa (%) and 1,213 not yet bounded (%).
Many airlines have shown interest in the Airbus A321LR or its A321XLR derivative, and other extended-range models, for thin transatlantic and Asia-Pacific routes.
Examples
Six-abreast cabin
Five-abreast cabin
Four-abreast cabin
Three-abreast cabin
Two-abreast cabin
Image gallery
See also
List of regional airliners
Regional jet
Wide-body aircraft
Notes
References
Aircraft configurations
Airliners | Narrow-body aircraft | [
"Engineering"
] | 583 | [
"Aircraft configurations",
"Aerospace engineering"
] |
5,493,764 | https://en.wikipedia.org/wiki/Style%20line | A style line is a seam in a garment made primarily for the purpose of its visual effect, rather than for the purpose of shaping or structuring the garment. By contrast, a dart or pleat by itself would not be considered a style line because, although each can be used to produce a pleasing visual effect, their main purpose is to shape the garment by taking in ease or adding fullness respectively. There can, however, be some ambiguity, as when a dart is made as part of a seam which continues beyond the dart point. If the seam beyond the dart is straight, that is, not affecting the garment's fit, it would be considered a style line.
Sewing
Parts of clothing
Fashion design | Style line | [
"Technology",
"Engineering"
] | 143 | [
"Design",
"Fashion design",
"Components",
"Parts of clothing"
] |
5,493,795 | https://en.wikipedia.org/wiki/Stability%20theory | In mathematics, stability theory addresses the stability of solutions of differential equations and of trajectories of dynamical systems under small perturbations of initial conditions. The heat equation, for example, is a stable partial differential equation because small perturbations of initial data lead to small variations in temperature at a later time as a result of the maximum principle. In partial differential equations one may measure the distances between functions using Lp norms or the sup norm, while in differential geometry one may measure the distance between spaces using the Gromov–Hausdorff distance.
In dynamical systems, an orbit is called Lyapunov stable if the forward orbit of any point that starts in a small enough neighborhood of it stays in a small (but perhaps larger) neighborhood of it. Various criteria have been developed to prove stability or instability of an orbit. Under favorable circumstances, the question may be reduced to a well-studied problem involving eigenvalues of matrices. A more general method involves Lyapunov functions. In practice, any one of a number of different stability criteria is applied.
Overview in dynamical systems
Many parts of the qualitative theory of differential equations and dynamical systems deal with asymptotic properties of solutions and the trajectories—what happens with the system after a long period of time. The simplest kind of behavior is exhibited by equilibrium points, or fixed points, and by periodic orbits. If a particular orbit is well understood, it is natural to ask next whether a small change in the initial condition will lead to similar behavior. Stability theory addresses the following questions: Will a nearby orbit indefinitely stay close to a given orbit? Will it converge to the given orbit? In the former case, the orbit is called stable; in the latter case, it is called asymptotically stable and the given orbit is said to be attracting.
An equilibrium solution x_e of an autonomous system of first-order ordinary differential equations x' = f(x) is called:
stable if for every (small) ε > 0, there exists a δ > 0 such that every solution x(t) having initial conditions within distance δ of the equilibrium, i.e. ||x(t_0) - x_e|| < δ, remains within distance ε, i.e. ||x(t) - x_e|| < ε, for all t ≥ t_0.
asymptotically stable if it is stable and, in addition, there exists δ_0 > 0 such that whenever ||x(t_0) - x_e|| < δ_0 then x(t) → x_e as t → ∞.
Stability means that the trajectories do not change too much under small perturbations. The opposite situation, where a nearby orbit is getting repelled from the given orbit, is also of interest. In general, perturbing the initial state in some directions results in the trajectory asymptotically approaching the given one and in other directions to the trajectory getting away from it. There may also be directions for which the behavior of the perturbed orbit is more complicated (neither converging nor escaping completely), and then stability theory does not give sufficient information about the dynamics.
One of the key ideas in stability theory is that the qualitative behavior of an orbit under perturbations can be analyzed using the linearization of the system near the orbit. In particular, at each equilibrium of a smooth dynamical system with an n-dimensional phase space, there is a certain n×n matrix A whose eigenvalues characterize the behavior of the nearby points (Hartman–Grobman theorem). More precisely, if all eigenvalues are negative real numbers or complex numbers with negative real parts then the point is a stable attracting fixed point, and the nearby points converge to it at an exponential rate, cf Lyapunov stability and exponential stability. If none of the eigenvalues are purely imaginary (or zero) then the attracting and repelling directions are related to the eigenspaces of the matrix A with eigenvalues whose real part is negative and, respectively, positive. Analogous statements are known for perturbations of more complicated orbits.
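As a minimal illustration of this eigenvalue test (a sketch using NumPy; the matrix below is an arbitrary example, not one taken from the text):

import numpy as np

# Linearization x' = A x of a system near an equilibrium.
A = np.array([[-1.0,  2.0],
              [-2.0, -1.0]])               # eigenvalues -1 ± 2i

eigvals = np.linalg.eigvals(A)
if np.all(eigvals.real < 0):
    print("asymptotically stable (attracting) equilibrium:", eigvals)
elif np.any(eigvals.real > 0):
    print("unstable equilibrium:", eigvals)
else:
    print("marginal case; the linearization is inconclusive:", eigvals)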
Stability of fixed points in 2D
The paradigmatic case is the stability of the origin under the linear autonomous differential equation x' = Ax, where x ∈ R^2 and A is a 2-by-2 matrix with real entries.
We sometimes perform a change of basis x = Cy for some invertible matrix C, which gives y' = (C^(-1)AC)y. We say C^(-1)AC is "A in the new basis". Since det(C^(-1)AC) = det(A) and tr(C^(-1)AC) = tr(A), we can classify the stability of the origin using det(A) and tr(A), while freely using changes of basis.
Classification of stability types
If det(A) = 0, then the rank of A is zero or one.
If the rank is zero, then A = 0, and there is no flow.
If the rank is one, then the kernel and the image of A are both one-dimensional.
If the kernel equals the image, then let v span the kernel and let w be a preimage of v (so that Aw = v); then in the basis (v, w), A is the nilpotent matrix [[0, 1], [0, 0]], and the flow is a shearing along the direction of v. In this case, tr(A) = 0.
If the kernel and the image are different, then let v span the kernel and let w span the image; then in the basis (v, w), A is the diagonal matrix diag(0, λ) for some nonzero real number λ = tr(A).
If λ > 0, then the flow is unstable, diverging at a rate of e^(λt) from the kernel along parallel translates of the image.
If λ < 0, then the flow is stable, converging at a rate of e^(λt) to the kernel along parallel translates of the image.
If det(A) ≠ 0, we first find the Jordan normal form of the matrix, to obtain a basis in which A takes one of three possible forms:
A diagonal matrix diag(λ, μ), where λ and μ are nonzero real numbers.
If λ, μ > 0, then tr(A) > 0 and det(A) > 0. The origin is a source, with integral curves of the form y = c·|x|^(μ/λ).
Similarly, for λ, μ < 0 the origin is a sink, with integral curves of the same form.
If λ and μ have opposite signs, then det(A) < 0, and the origin is a saddle point, with integral curves of the form y = c·|x|^(μ/λ).
A Jordan block [[λ, 1], [0, λ]], where λ ≠ 0. This can be further simplified by a change of basis with C = diag(1, λ), after which A becomes λ·[[1, 1], [0, 1]], i.e. x' = λ(x + y), y' = λy. We can solve this explicitly with initial condition (x_0, y_0): the solution is x(t) = e^(λt)·(x_0 + λt·y_0), y(t) = e^(λt)·y_0. This case is called the "degenerate node". The integral curves in this basis are central dilations of the curve x = y·ln(y), plus the x-axis.
If λ > 0, then the origin is a degenerate source. Otherwise it is a degenerate sink.
In both cases, tr(A)^2 = 4·det(A).
A matrix of the form [[a, -b], [b, a]], where b ≠ 0; its eigenvalues are a ± bi. In this case, tr(A) = 2a and det(A) = a^2 + b^2, so tr(A)^2 - 4·det(A) = -4b^2 < 0.
If a < 0, then this is a spiral sink. In this case, tr(A) < 0. The integral lines are logarithmic spirals.
If a > 0, then this is a spiral source. In this case, tr(A) > 0. The integral lines are logarithmic spirals.
If a = 0, then this is a rotation ("neutral stability") at a rate of b, moving neither towards nor away from the origin. In this case, tr(A) = 0 and det(A) > 0. The integral lines are circles.
The summary is shown in the stability diagram on the right. In each case, except the case of tr(A)^2 = 4·det(A), the pair (tr(A), det(A)) allows a unique classification of the type of flow.
For the special case of tr(A)^2 = 4·det(A), there are two cases that cannot be distinguished by (tr(A), det(A)). In both cases, A has only one eigenvalue, with algebraic multiplicity 2.
If the eigenvalue has a two-dimensional eigenspace (geometric multiplicity 2), then the system is a central node (sometimes called a "star", or "dicritical node"), which is either a source (when the eigenvalue is positive) or a sink (when the eigenvalue is negative).
If it has a one-dimensional eigenspace (geometric multiplicity 1), then the system is a degenerate node (if the eigenvalue is nonzero) or a shearing flow (if the eigenvalue is zero).
Area-preserving flow
When tr(A) = 0, we have det(e^(At)) = e^(t·tr(A)) = 1, so the flow is area-preserving. In this case, the type of flow is classified by det(A).
If det(A) > 0, then it is a rotation ("neutral stability") around the origin.
If det(A) = 0, then it is a shearing flow.
If det(A) < 0, then the origin is a saddle point.
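The trace–determinant classification above translates directly into a short routine (a sketch; borderline cases that depend on exact equality of floating-point values, such as det(A) = 0 or tr(A)^2 = 4·det(A), are only handled schematically):

import numpy as np

def classify_2d(A):
    # Classify the origin of x' = A x for a real 2x2 matrix A,
    # using only tr(A) and det(A) as in the discussion above.
    tr, det = np.trace(A), np.linalg.det(A)
    disc = tr**2 - 4 * det
    if det < 0:
        return "saddle point"
    if det == 0:
        return "degenerate: no flow, shearing flow, or a line of equilibria"
    if disc == 0:
        return "star or degenerate node (source)" if tr > 0 else "star or degenerate node (sink)"
    kind = "node" if disc > 0 else ("spiral" if tr != 0 else "center (rotation)")
    if tr > 0:
        return "unstable " + kind + " (source)"
    if tr < 0:
        return "stable " + kind + " (sink)"
    return kind

print(classify_2d(np.array([[0.0, 1.0], [-1.0, 0.0]])))    # center (rotation)
print(classify_2d(np.array([[-1.0, 2.0], [-2.0, -1.0]])))  # stable spiral (sink)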
Stability of fixed points
The simplest kind of an orbit is a fixed point, or an equilibrium. If a mechanical system is in a stable equilibrium state then a small push will result in a localized motion, for example, small oscillations as in the case of a pendulum. In a system with damping, a stable equilibrium state is moreover asymptotically stable. On the other hand, for an unstable equilibrium, such as a ball resting on a top of a hill, certain small pushes will result in a motion with a large amplitude that may or may not converge to the original state.
There are useful tests of stability for the case of a linear system. Stability of a nonlinear system can often be inferred from the stability of its linearization.
Maps
Let f: R → R be a continuously differentiable function with a fixed point a, f(a) = a. Consider the dynamical system obtained by iterating the function f: x_(n+1) = f(x_n), n = 0, 1, 2, ….
The fixed point a is stable if the absolute value of the derivative of f at a is strictly less than 1, and unstable if it is strictly greater than 1. This is because near the point a, the function f has a linear approximation with slope f'(a): f(x) ≈ f(a) + f'(a)(x - a).
Thus x_(n+1) - a = f(x_n) - f(a) ≈ f'(a)(x_n - a),
which means that the derivative f'(a) measures the rate at which the successive iterates approach the fixed point or diverge from it. If the derivative at a is exactly 1 or -1, then more information is needed in order to decide stability.
There is an analogous criterion for a continuously differentiable map f: R^n → R^n with a fixed point a, expressed in terms of its Jacobian matrix at a, J = J_a(f). If all eigenvalues of J are real or complex numbers with absolute value strictly less than 1 then a is a stable fixed point; if at least one of them has absolute value strictly greater than 1 then a is unstable. Just as for n = 1, the case of the largest absolute value being 1 needs to be investigated further, as the Jacobian matrix test is inconclusive. The same criterion holds more generally for diffeomorphisms of a smooth manifold.
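The one-dimensional criterion |f'(a)| < 1 can be seen numerically on the logistic map (a sketch; the parameter value 2.5 and the starting point are arbitrary choices):

# Logistic map f(x) = r*x*(1 - x) with r = 2.5.
# Its nonzero fixed point is a = 1 - 1/r = 0.6, and f'(a) = r*(1 - 2a) = -0.5,
# so |f'(a)| < 1 and the fixed point is stable.
r = 2.5
f = lambda x: r * x * (1 - x)
a = 1 - 1 / r

x = 0.45                       # start near, but not at, the fixed point
for _ in range(30):
    x = f(x)
print(a, x)                    # the iterates converge towards a = 0.6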
Linear autonomous systems
The stability of fixed points of a system of constant coefficient linear differential equations of first order can be analyzed using the eigenvalues of the corresponding matrix.
An autonomous system x' = Ax,
where x(t) ∈ R^n and A is an n×n matrix with real entries, has the constant solution x(t) = 0.
(In a different language, the origin 0 ∈ R^n is an equilibrium point of the corresponding dynamical system.) This solution is asymptotically stable as t → ∞ ("in the future") if and only if for all eigenvalues λ of A, Re(λ) < 0. Similarly, it is asymptotically stable as t → -∞ ("in the past") if and only if for all eigenvalues λ of A, Re(λ) > 0. If there exists an eigenvalue λ of A with Re(λ) > 0 then the solution is unstable for t → ∞.
Application of this result in practice, in order to decide the stability of the origin for a linear system, is facilitated by the Routh–Hurwitz stability criterion. The eigenvalues of a matrix are the roots of its characteristic polynomial. A polynomial in one variable with real coefficients is called a Hurwitz polynomial if the real parts of all roots are strictly negative. The Routh–Hurwitz theorem implies a characterization of Hurwitz polynomials by means of an algorithm that avoids computing the roots.
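A bare-bones version of the Routh–Hurwitz test can be written in a few lines (a sketch; it assumes a positive leading coefficient and does not handle the special cases in which a zero appears in the first column of the array):

def is_hurwitz(coeffs):
    # coeffs = [a_n, a_{n-1}, ..., a_0] of a real polynomial with a_n > 0.
    # Returns True if every root has strictly negative real part.
    n = len(coeffs) - 1
    if n < 1:
        return True
    # First two rows of the Routh array, padded to equal length.
    rows = [list(coeffs[0::2]), list(coeffs[1::2])]
    width = len(rows[0])
    rows = [r + [0.0] * (width - len(r)) for r in rows]
    # Build the remaining rows of the array.
    for i in range(2, n + 1):
        prev, prev2 = rows[i - 1], rows[i - 2]
        if prev[0] == 0:
            raise ValueError("zero pivot: the special-case rules are needed")
        row = [(prev[0] * prev2[j + 1] - prev2[0] * prev[j + 1]) / prev[0]
               for j in range(width - 1)] + [0.0]
        rows.append(row)
    # Stable if and only if the first column shows no sign changes,
    # i.e. (with a_n > 0) all first-column entries are positive.
    return all(r[0] > 0 for r in rows)

print(is_hurwitz([1, 3, 2]))       # s^2 + 3s + 2: True (roots -1 and -2)
print(is_hurwitz([1, 1, 2, 8]))    # s^3 + s^2 + 2s + 8: False (two roots in the right half-plane)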
Non-linear autonomous systems
Asymptotic stability of fixed points of a non-linear system can often be established using the Hartman–Grobman theorem.
Suppose that v is a C^1 vector field in R^n which vanishes at a point p, v(p) = 0. Then the corresponding autonomous system x' = v(x)
has the constant solution x(t) = p.
Let J = J_p(v) be the Jacobian matrix of the vector field v at the point p. If all eigenvalues of J have strictly negative real part then the solution x(t) = p is asymptotically stable. This condition can be tested using the Routh–Hurwitz criterion.
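A typical use of this criterion (a sketch; the damped pendulum below is a standard textbook example chosen for illustration, not a system discussed in the text):

import numpy as np

# Damped pendulum: theta'' + c*theta' + sin(theta) = 0, written as a
# first-order system x' = v(x) with x = (theta, omega).
c = 0.5
def v(x):
    theta, omega = x
    return np.array([omega, -np.sin(theta) - c * omega])

p = np.array([0.0, 0.0])
assert np.allclose(v(p), 0.0)          # p is an equilibrium: v(p) = 0

# Jacobian of v at p.
J = np.array([[0.0, 1.0],
              [-np.cos(0.0), -c]])     # = [[0, 1], [-1, -0.5]]

eigs = np.linalg.eigvals(J)
print(eigs, "asymptotically stable" if np.all(eigs.real < 0) else "not settled by linearization")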
Lyapunov function for general dynamical systems
A general way to establish Lyapunov stability or asymptotic stability of a dynamical system is by means of Lyapunov functions.
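For example, for the scalar equation x' = -x^3 the linearization at the origin vanishes and is therefore inconclusive, but the function V(x) = x^2 satisfies dV/dt = 2x·x' = -2x^4 < 0 along solutions for all x ≠ 0, which establishes that the origin is asymptotically stable.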
See also
Chaos theory
Lyapunov stability
Hyperstability
Linear stability
Orbital stability
Stability criterion
Stability radius
Structural stability
von Neumann stability analysis
References
External links
Stable Equilibria by Michael Schreiber, The Wolfram Demonstrations Project.
Limit sets
Mathematical and quantitative methods (economics) | Stability theory | [
"Mathematics"
] | 2,335 | [
"Limit sets",
"Stability theory",
"Topology",
"Dynamical systems"
] |
5,494,349 | https://en.wikipedia.org/wiki/Meisenheimer%20complex | A Meisenheimer complex or Jackson–Meisenheimer complex in organic chemistry is a 1:1 reaction adduct between an arene carrying electron withdrawing groups and a nucleophile. These complexes are found as reactive intermediates in nucleophilic aromatic substitution but stable and isolated Meisenheimer salts are also known.
Background
The early development of this type of complex took place around the turn of the 20th century. In 1886 Janovski observed an intense violet color when he mixed meta-dinitrobenzene with an alcoholic solution of alkali. In 1895 Cornelis Adriaan Lobry van Troostenburg de Bruyn investigated a red substance formed in the reaction of trinitrobenzene with potassium hydroxide in methanol. In 1900 Jackson and Gazzolo reacted trinitroanisole with sodium methoxide and proposed a quinoid structure for the reaction product.
In 1902 Jakob Meisenheimer observed that by acidifying their reaction product, the starting material was recovered.
With three electron-withdrawing groups, the negative charge in the complex is located at one of the nitro groups, according to the quinoid model. With less electron-poor arenes, this charge is delocalized over the entire ring (structure to the right in scheme 1).
In one study a Meisenheimer arene (4,6-dinitrobenzofuroxan) was allowed to react with a strongly electron-releasing arene (1,3,5-tris(N-pyrrolidinyl)benzene) forming a zwitterionic Meisenheimer–Wheland complex. The Wheland intermediate is the name typically given to the cationic reactive intermediate formed in electrophilic aromatic substitution, and can be considered an oppositely charged analog of the negatively charged Meisenheimer complex formed in nucleophilic aromatic substitution. Hence, the simultaneous occurrence of the Wheland and Meisenheimer intermediates in the single zwitterionic complex shown below led to its description as a Meisenheimer–Wheland complex.
The structure of this complex was confirmed by NMR spectroscopy.
Janovski reaction
The Janovski reaction is the reaction of 1,3-dinitrobenzene with an enolizable ketone to give the Meisenheimer adduct.
Zimmermann reaction
In the Zimmermann reaction the Janovski adduct is oxidized with excess base to a strongly colored enolate with subsequent reduction of the dinitro compound to the aromatic nitro amine. This reaction is the basis of the Zimmermann test used for the detection of ketosteroids.
Eponyms
The Jackson–Meisenheimer complex was named after the American organic chemist, Charles Loring Jackson (1847–1935) and the German organic chemist, Jakob Meisenheimer (1876–1934).
The Janovski reaction was named for the Czech chemist, Jaroslav Janovski (1850–1907).
The Zimmermann reaction was named after the German chemist, Wilhelm Zimmermann (1910–1982).
Lastly, the Wheland intermediate was named after the American chemist, George Willard Wheland (1907–1976)
References
Reactive intermediates
Salts | Meisenheimer complex | [
"Chemistry"
] | 666 | [
"Organic compounds",
"Reactive intermediates",
"Physical organic chemistry",
"Salts"
] |
5,494,713 | https://en.wikipedia.org/wiki/Stable%20manifold%20theorem | In mathematics, especially in the study of dynamical systems and differential equations, the stable manifold theorem is an important result about the structure of the set of orbits approaching a given hyperbolic fixed point. It roughly states that the existence of a local diffeomorphism near a fixed point implies the existence of a local stable center manifold containing that fixed point. This manifold has dimension equal to the number of eigenvalues of the Jacobian matrix of the fixed point that are less than 1.
Stable manifold theorem
Let
f : U ⊂ R^n → R^n
be a smooth map with hyperbolic fixed point at p. We denote by W^s(p) the stable set and by W^u(p) the unstable set of p.
The theorem states that
W^s(p) is a smooth manifold and its tangent space has the same dimension as the stable space of the linearization of f at p.
W^u(p) is a smooth manifold and its tangent space has the same dimension as the unstable space of the linearization of f at p.
Accordingly W^s(p) is a stable manifold and W^u(p) is an unstable manifold.
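For a concrete (assumed) linear example, the dimension count in the theorem can be read off from the eigenvalues of the linearization, as in the sketch below: eigenvalues inside the unit circle span the tangent space of the stable manifold, those outside span that of the unstable manifold.

```python
import numpy as np

# Illustrative linearization Df(p) of a map at a hyperbolic fixed point
# (no eigenvalue has absolute value 1).
Df = np.array([[0.5, 0.0, 0.0],
               [0.0, 2.0, 0.0],
               [0.0, 0.0, 0.3]])

w, vecs = np.linalg.eig(Df)
dim_stable = int(np.sum(np.abs(w) < 1))    # dimension of the local stable manifold
dim_unstable = int(np.sum(np.abs(w) > 1))  # dimension of the local unstable manifold
print(dim_stable, dim_unstable)            # 2 1
```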
See also
Center manifold theorem
Lyapunov exponent
Notes
References
External links
Dynamical systems
Theorems in dynamical systems | Stable manifold theorem | [
"Physics",
"Mathematics"
] | 214 | [
"Theorems in dynamical systems",
"Mechanics",
"Mathematical problems",
"Mathematical theorems",
"Dynamical systems"
] |
5,495,025 | https://en.wikipedia.org/wiki/280%20%28number%29 | 280 (two hundred [and] eighty) is the natural number after 279 and before 281.
In mathematics
The denominator of the eighth harmonic number, 280 is an octagonal number. 280 is the smallest octagonal number that is a half of another octagonal number.
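These statements can be checked directly from the formula for octagonal numbers, O(n) = n(3n − 2); the short sketch below (an illustration, not part of the article) verifies that 280 = O(10), that its double 560 is also octagonal, and that no smaller octagonal number has an octagonal double.

```python
from math import isqrt

def is_octagonal(x: int) -> bool:
    # x is octagonal iff 3n^2 - 2n - x = 0 has a positive integer root n.
    if x < 1:
        return False
    disc = 4 + 12 * x
    r = isqrt(disc)
    return r * r == disc and (2 + r) % 6 == 0

octagonal = lambda n: n * (3 * n - 2)
print(octagonal(10), is_octagonal(2 * octagonal(10)))                           # 280 True
print([octagonal(n) for n in range(1, 11) if is_octagonal(2 * octagonal(n))])   # [280]
```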
There are 280 plane trees with ten nodes.
As a consequence of this, 18 people around a round table can shake hands with each other in 280 different non-crossing ways (this includes rotations).
References
Integers | 280 (number) | [
"Mathematics"
] | 100 | [
"Elementary mathematics",
"Integers",
"Mathematical objects",
"Numbers"
] |
5,495,129 | https://en.wikipedia.org/wiki/290%20%28number%29 | 290 (two hundred [and] ninety) is the natural number following 289 and preceding 291.
In mathematics
The product of three primes, 290 is a sphenic number, and the sum of four consecutive primes (67 + 71 + 73 + 79). The sum of the squares of the divisors of 17 is 290.
Not only is it a nontotient and a noncototient, it is also an untouchable number.
290 is the 16th member of the Mian–Chowla sequence; it cannot be obtained as the sum of any two previous terms in the sequence.
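The sequence can be reproduced with the usual greedy construction, sketched below; note that whether 290 is counted as the 16th or 17th term depends on whether indexing starts at zero or one.

```python
def mian_chowla(limit: int) -> list[int]:
    """Greedy construction: each new term must keep all pairwise sums distinct."""
    seq, sums = [1], {2}          # 2 = 1 + 1
    candidate = 2
    while seq[-1] < limit:
        new_sums = {candidate + a for a in seq} | {2 * candidate}
        if not (new_sums & sums):
            seq.append(candidate)
            sums |= new_sums
        candidate += 1
    return seq

terms = mian_chowla(290)
print(terms)           # [1, 2, 4, 8, 13, 21, 31, 45, 66, 81, 97, 123, 148, 182, 204, 252, 290]
print(290 in terms)    # True
```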
See also the Bhargava–Hanke 290 theorem.
References
Integers | 290 (number) | [
"Mathematics"
] | 142 | [
"Elementary mathematics",
"Integers",
"Mathematical objects",
"Numbers"
] |
5,495,176 | https://en.wikipedia.org/wiki/Kauri-butanol%20value | The kauri-butanol value ("Kb value") is an international, standardized measure of solvent power for a hydrocarbon solvent, and is governed by an ASTM standardized test, ASTM D1133. The result of this test is a scaleless index, usually referred to as the "Kb value". A higher Kb value means the solvent is more aggressive or active in the ability to dissolve certain materials. Mild solvents have low scores in the tens and twenties; powerful solvents like chlorinated solvents and naphthenic aromatic solvents (i.e. "High Sol 10", "High Sol 15") have ratings that are in the low hundreds.
In terms of the test itself, the kauri-butanol value (Kb) of a chemical shows the maximum amount of the hydrocarbon that can be added to a solution of kauri resin (a thick, gum-like material) in butanol (butyl alcohol) without causing cloudiness. Since kauri resin is readily soluble in butyl alcohol but not in most hydrocarbon solvents, the resin solution will tolerate only a certain amount of dilution. "Stronger" solvents such as benzene can be added in a greater amount (and thus have a higher Kb value) than "weaker" solvents like mineral spirits.
References
Product certification
Units of measurement
Kauri gum | Kauri-butanol value | [
"Physics",
"Mathematics"
] | 283 | [
"Kauri gum",
"Quantity",
"Unsolved problems in physics",
"Amorphous solids",
"Units of measurement"
] |
5,495,959 | https://en.wikipedia.org/wiki/Disability%20pretender | A disability pretender is a subculture term meaning a person who behaves as if they were disabled. It may be classified as a type of factitious disorder or as a medical fetishism.
One theory is that pretenders may be the "missing link" between devotees and wannabes, demonstrating an assumed continuum between those merely attracted to people with disabilities and those who actively wish to become disabled. Many wannabes use pretending as a way to appease the intense emotional pain related to having body integrity identity disorder.
Pretending takes a variety of forms. Some chatroom users on internet sites catering to devotees have complained that chat counterparts they assumed were female were revealed as male devotees. This form of pretending (where a devotee derives pleasure by pretending to be a disabled woman) may indicate a very broad predisposition to pretending among devotees.
Pretending includes dressing and acting in ways typical of disabled people, including making use of aids (glasses, hearing aids, braces, canes, inhalers, walking sticks, crutches, wheelchairs, mobility scooters, white canes etc.). Pretending may also take the form of a devotee persuading his or her sexual partner to play the role of a disabled person. Pretending may be practised in private, in intimacy, or in public, and may occupy surprisingly long periods. In the latter case, some pretenders hope that the disability may become permanent, such as through tissue necrosis caused by constricted blood supply.
People with this condition may refer to themselves as "transabled".
See also
Abasiophilia—the desire for people who limp and/or use leg braces, walking sticks, crutches, walkers or wheelchairs
Acrotomophilia—the desire for amputees
Andy Pipkin, a character from Little Britain, who pretends to be disabled
Apotemnophilia—sexual arousal based on the desire to be or appear as an amputee
Attraction to disability—the broad range of sexualised fascinations projected onto disabled people
Disability devotee ("dev")—one who desires disabled partners
Medical fetishism—a sexualised interest in observing medical practice and receiving medical treatment
Munchhausen's syndrome—individuals with this psychological disorder feign illness and/or self-harm
Body integrity identity disorder ("transabled")—individuals with this disorder believe they should have an impairment
References
Bruno, R. L., PhD, "Devotees, pretenders and wannabes: Two cases of Factitious Disability Disorder" The Journal of Sexuality and Disability, 1997, 15, pp. 243–260
this portal to the pretender web lists 12 pretender and three pretender/wannabe websites
Abnormal psychology
Disability | Disability pretender | [
"Biology"
] | 556 | [
"Behavioural sciences",
"Behavior",
"Abnormal psychology"
] |
5,496,263 | https://en.wikipedia.org/wiki/Supermalloy | Supermalloy is an alloy composed of nickel (75%), iron (20%), and molybdenum (5%). It is a high permeability ferromagnetic alloy used in magnetic cores and magnetic shielding in electrical components, such as pulse transformers and ultra-sensitive magnetic amplifiers. It has a resistivity of 0.6 Ω·mm2/m (or 6.0 x 10−7Ω·m), an extremely high relative magnetic permeability (approximately ), and a low coercivity. Supermalloy is used in manufacturing components for radio engineering, telephony, and telemechanics instruments.
References
Nickel alloys
Magnetic alloys | Supermalloy | [
"Chemistry",
"Materials_science",
"Engineering"
] | 146 | [
"Nickel alloys",
"Alloy stubs",
"Electric and magnetic fields in matter",
"Materials science",
"Magnetic alloys",
"Alloys"
] |
5,496,415 | https://en.wikipedia.org/wiki/Spin%20Hall%20effect | The spin Hall effect (SHE) is a transport phenomenon predicted by Russian physicists Mikhail I. Dyakonov and Vladimir I. Perel in 1971. It consists of the appearance of spin accumulation on the lateral surfaces of an electric current-carrying sample, the signs of the spin directions being opposite on the opposing boundaries. In a cylindrical wire, the current-induced surface spins will wind around the wire. When the current direction is reversed, the directions of spin orientation is also reversed.
Definition
The spin Hall effect is a transport phenomenon consisting of the appearance of spin accumulation on the lateral surfaces of a sample carrying electric current. The opposing surface boundaries will have spins of opposite sign. It is analogous to the classical Hall effect, where charges of opposite sign appear on the opposing lateral surfaces in an electric-current carrying sample in a magnetic field. In the case of the classical Hall effect the charge build up at the boundaries is in compensation for the Lorentz force acting on the charge carriers in the sample due to the magnetic field. No magnetic field is needed for the spin Hall effect which is a purely spin-based phenomenon. The spin Hall effect belongs to the same family as the anomalous Hall effect, known for a long time in ferromagnets, which also originates from spin–orbit interaction.
History
The spin Hall effect (direct and inverse) was predicted by Russian physicists Mikhail I. Dyakonov and Vladimir I. Perel in 1971. They also introduced for the first time the notion of spin current.
In 1983 Averkiev and Dyakonov proposed a way to measure the inverse spin Hall effect under optical spin orientation in semiconductors. The first experimental demonstration of the inverse spin Hall effect, based on this idea, was performed by Bakun et al. in 1984.
The term "spin Hall effect" was introduced by Hirsch who re-predicted this effect in 1999.
Experimentally, the (direct) spin Hall effect was observed in semiconductors more than 30 years after the original prediction.
Physical origin
Two possible mechanisms give origin to the spin Hall effect, in which an electric current (composed of moving charges) transforms into a spin current (a current of moving spins without charge flow). The original (extrinsic) mechanism devised by Dyakonov and Perel consisted of spin-dependent Mott scattering, where carriers with opposite spin diffuse in opposite directions when colliding with impurities in the material. The second mechanism is due to intrinsic properties of the material, where the carrier's trajectories are distorted due to spin–orbit interaction as a consequence of the asymmetries in the material.
One can intuitively picture the intrinsic effect by using the classical analogy between an electron and a spinning tennis ball. The tennis ball deviates from its straight path in air in a direction depending on the sense of rotation, also known as the Magnus effect. In a solid, the air is replaced by an effective electric field due to asymmetries in the material, the relative motion between the magnetic moment (associated to the spin) and the electric field creates a coupling that distorts the motion of the electrons.
Similar to the standard Hall effect, both the extrinsic and the intrinsic mechanisms lead to an accumulation of spins of opposite signs on opposing lateral boundaries.
Mathematical description
The spin current is described by a second-rank tensor q_ij, where the first index refers to the direction of flow, and the second one to the spin component that is flowing. Thus q_xy denotes the flow density of the y-component of spin in the x-direction. Introduce also the vector q_i of charge flow density (which is related to the normal current density j = eq), where e is the elementary charge. The coupling between spin and charge currents is due to spin-orbit interaction. It may be described in a very simple way by introducing a single dimensionless coupling parameter γ.
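A minimal numerical sketch of this coupling is given below. It assumes the phenomenological form q_ij = q⁰_ij − γ ε_ijk q⁰_k relating the spin-current correction to the primary charge flow; the sign convention, the value of γ and the primary currents are illustrative assumptions, not values from the text.

```python
import numpy as np

def levi_civita():
    eps = np.zeros((3, 3, 3))
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
    return eps

gamma = 1e-3                              # assumed dimensionless spin-orbit coupling
q0_charge = np.array([1.0, 0.0, 0.0])     # primary charge flow along x (arbitrary units)
q0_spin = np.zeros((3, 3))                # no primary spin current

# Spin-current correction driven by the charge flow (assumed sign convention).
q_spin = q0_spin - gamma * np.einsum('ijk,k->ij', levi_civita(), q0_charge)
print(q_spin)   # only q_yz and q_zy are nonzero, with opposite signs:
                # z-polarized spin flows along y, y-polarized spin flows along z.
```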
Spin Hall magnetoresistance
No magnetic field is needed for spin Hall effect. However, if a strong enough magnetic field is applied in the direction perpendicular to the orientation of the spins at the surfaces, spins will precess around the direction of the magnetic field and the spin Hall effect will disappear. Thus in the presence of magnetic field, the combined action of the direct and inverse spin Hall effect leads to a change of the sample resistance, an effect that is of second order in spin-orbit interaction. This was noted by Dyakonov and Perel already in 1971 and later elaborated in more detail by Dyakonov. In recent years, the spin Hall magnetoresistance was extensively studied experimentally both in magnetic and non-magnetic materials (heavy metals, such as Pt, Ta, Pd, where the spin-orbit interaction is strong).
Swapping spin currents
A transformation of spin currents consisting in interchanging (swapping) of the spin and flow directions (q_ij → q_ji) was predicted by Lifshits and Dyakonov. Thus a flow in the x-direction of spins polarized along y is transformed to a flow in the y-direction of spins polarized along x. This prediction has not yet been confirmed experimentally.
Optical monitoring
The direct and inverse spin Hall effect can be monitored by optical means. The spin accumulation induces circular polarization of the emitted light, as well as the Faraday (or Kerr) polarization rotation of the transmitted (or reflected) light. Observing the polarization of emitted light allows the spin Hall effect to be observed.
More recently, the existence of both direct and inverse effects was demonstrated not only in semiconductors, but also in metals.
Applications
The spin Hall effect can be used to manipulate electron spins electrically. For example, in combination with the electric stirring effect, the spin Hall effect leads to spin polarization in a localized conducting region.
Further reading
For a review of spin Hall effect, see for example:
See also
Quantum spin Hall effect
Spin Nernst effect
References
Hall effect
Condensed matter physics
Spintronics | Spin Hall effect | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,214 | [
"Physical phenomena",
"Hall effect",
"Spintronics",
"Phases of matter",
"Electric and magnetic fields in matter",
"Materials science",
"Electrical phenomena",
"Condensed matter physics",
"Solid state engineering",
"Matter"
] |
5,497,013 | https://en.wikipedia.org/wiki/Ketosteroid | A ketosteroid, or an oxosteroid, is a steroid in which a hydrogen atom has been replaced with a ketone (C=O) group.
A 17-ketosteroid is a ketosteroid in which the ketone is located specifically at the C17 position (in the upper right corner of most structure diagrams).
Examples of 17-ketosteroids include:
Androstenedione
Androstanedione
Androsterone
Dehydroepiandrosterone
Epiandrosterone
Epietiocholanolone
Etiocholanolone
17-Ketosteroids are endogenous steroid hormones.
See also
Hydroxysteroid
Hydroxysteroid dehydrogenase
External links
Ketones
Steroids | Ketosteroid | [
"Chemistry"
] | 162 | [
"Ketones",
"Functional groups"
] |
5,497,136 | https://en.wikipedia.org/wiki/Hydroxycorticosteroids | Hydroxycorticosteroids (OHCSs) are corticosteroids that have an additional hydroxy (-OH) group.
There are two main positions where the hydroxy group may be added: at carbon atom 11, and at carbon atom 17.
At the 11 position
11-hydroxycorticosteroids (11-OHCSs) include:
aldosterone
corticosterone
hydrocortisone
At the 17 position
17-hydroxycorticosteroids (17-OHCSs) include:
cortisone
hydrocortisone
External links
Corticosteroids | Hydroxycorticosteroids | [
"Chemistry"
] | 128 | [
"Organic chemistry stubs"
] |
5,497,343 | https://en.wikipedia.org/wiki/Axcelis%20Technologies | Axcelis Technologies, Inc. is an American company engaging in the design, manufacture, and servicing of capital equipment for the semiconductor manufacturing industry worldwide. It produces ion implantation systems, including high and medium current implanters, and high energy implanters, and curing systems used in the fabrication of semiconductor chips. The company was incorporated in 1995 and is headquartered in Beverly, Massachusetts, United States.
In 2000, Eaton Corporation spun off its semiconductor manufacturing equipment business as Axcelis Technologies.
On December 4, 2012 Axcelis Technologies decided "...that it will exit the dry-strip business and divest its dry-strip intellectual property and technology, including the advanced non-oxidizing process technology of its Integra product line, to Lam Research,...Axcelis will continue to ship its 300 mm dry-strip products through August 2013..."
In 2015, Axcelis sold its headquarters in a leaseback agreement.
See also
List of S&P 600 companies
References
External links
Companies listed on the Nasdaq
Equipment semiconductor companies
Companies based in Beverly, Massachusetts
Electronics companies of the United States | Axcelis Technologies | [
"Engineering"
] | 230 | [
"Equipment semiconductor companies",
"Semiconductor fabrication equipment"
] |
5,497,456 | https://en.wikipedia.org/wiki/Kinetic%20resolution | In organic chemistry, kinetic resolution is a means of differentiating two enantiomers in a racemic mixture. In kinetic resolution, two enantiomers react with different reaction rates in a chemical reaction with a chiral catalyst or reagent, resulting in an enantioenriched sample of the less reactive enantiomer. As opposed to chiral resolution, kinetic resolution does not rely on different physical properties of diastereomeric products, but rather on the different chemical properties of the racemic starting materials. The enantiomeric excess (ee) of the unreacted starting material continually rises as more product is formed, reaching 100% just before full completion of the reaction. Kinetic resolution relies upon differences in reactivity between enantiomers or enantiomeric complexes.
Kinetic resolution can be used for the preparation of chiral molecules in organic synthesis. Kinetic resolution reactions utilizing purely synthetic reagents and catalysts are much less common than the use of enzymatic kinetic resolution in application towards organic synthesis, although a number of useful synthetic techniques have been developed in the past 30 years.
History
The first reported kinetic resolution was achieved by Louis Pasteur. After reacting aqueous racemic ammonium tartrate with a mold from Penicillium glaucum, he reisolated the remaining tartrate and found it was levorotatory. The chiral microorganisms present in the mold catalyzed the metabolization of (R,R)-tartrate selectively, leaving an excess of (S,S)-tartrate.
Kinetic resolution by synthetic means was first reported by Marckwald and McKenzie in 1899 in the esterification of racemic mandelic acid with optically active (−)-menthol. With an excess of the racemic acid present, they observed the formation of the ester derived from (+)-mandelic acid to be quicker than the formation of the ester from (−)-mandelic acid. The unreacted acid was observed to have a slight excess of (−)-mandelic acid, and the ester was later shown to yield (+)-mandelic acid upon saponification. The importance of this observation was that, in theory, if a half equivalent of (−)-menthol had been used, a highly enantioenriched sample of (−)-mandelic acid could have been prepared. This observation led to the successful kinetic resolution of other chiral acids, the beginning of the use of kinetic resolution in organic chemistry.
Theory
Kinetic resolution is a possible method for irreversibly differentiating a pair of enantiomers due to (potentially) different activation energies. While both enantiomers are at the same Gibbs free energy level by definition, and the products of the reaction with both enantiomers are also at equal levels, the ΔG‡, or transition state energy, can differ. In the image below, the R enantiomer has a lower ΔG‡ and would thus react faster than the S enantiomer.
The ideal kinetic resolution is that in which only one enantiomer reacts, i.e. kR >> kS. The selectivity (s) of a kinetic resolution is related to the rate constants of the reaction of the R and S enantiomers, kR and kS respectively, by s = kR/kS, for kR > kS. This selectivity can also be referred to as the relative rates of reaction. This can be written in terms of the free energy difference between the high- and low-energy transition states, ΔΔG‡: s = kR/kS = exp(ΔΔG‡/RT).
The selectivity can also be expressed in terms of ee of the recovered starting material and conversion (c), if first-order kinetics (in substrate) are assumed.
If it is assumed that the S enantiomer of the starting material racemate will be recovered in excess, it is possible to express the concentrations (mole fractions) of the S and R enantiomers as
[S] = ½(1 − c)(1 + ee) and [R] = ½(1 − c)(1 − ee),
where ee is the ee of the starting material. Note that for c = 0, which signifies the beginning of the reaction, [S] = [R] = ½, where these signify the initial concentrations of the enantiomers. Then, for stoichiometric chiral resolving agent B*,
−d[S]/dt = kS[S][B*].
Note that, if the resolving agent is stoichiometric and achiral, with a chiral catalyst, the [B*] term does not appear. Regardless, with a similar expression for R, we can express s as
s = kR/kS = ln([R]/[R]0)/ln([S]/[S]0) = ln[(1 − c)(1 − ee)]/ln[(1 − c)(1 + ee)].
If we wish to express this in terms of the enantiomeric excess of the product, ee″, we must make use of the fact that, for products R′ and S′ from R and S, respectively,
[R′] + [S′] = c and ([R′] − [S′])/([R′] + [S′]) = ee″, so that [R′] = ½c(1 + ee″) and [S′] = ½c(1 − ee″).
From here, we see that
[R] = ½ − [R′] = ½[1 − c(1 + ee″)] and [S] = ½ − [S′] = ½[1 − c(1 − ee″)],
which gives us
[R]/[R]0 = 1 − c(1 + ee″) and [S]/[S]0 = 1 − c(1 − ee″),
which, when we plug into our expression for s derived above, yields
s = ln[1 − c(1 + ee″)]/ln[1 − c(1 − ee″)].
The conversion (c) and selectivity factor (s) can be expressed in terms of starting material and product enantiomeric excesses (ee and ee″, respectively) only:
c = ee/(ee + ee″) and s = ln[ee″(1 − ee)/(ee + ee″)]/ln[ee″(1 + ee)/(ee + ee″)].
Additionally, the expressions for c and ee can be parametrized to give explicit expressions for c and ee in terms of t. First, solving explicitly for [S] and [R] as functions of t yields
[S] = ½e^(−kS·t) and [R] = ½e^(−kR·t),
which, plugged into expressions for ee and c, gives
ee = (e^(−kS·t) − e^(−kR·t))/(e^(−kS·t) + e^(−kR·t)) and c = 1 − ½(e^(−kS·t) + e^(−kR·t)).
Without loss of generality, we can allow kS = 1, which gives kR = s, simplifying the above expressions. Similarly, an expression for ee″ as a function of t can be derived:
ee″ = (e^(−t) − e^(−s·t))/(2 − e^(−t) − e^(−s·t)).
Thus, plots of ee and ee″ vs. c can be generated with t as the parameter and different values of s generating different curves, as shown below.
As can be seen, high enantiomeric excesses are much more readily attainable for the unreacted starting material. There is however a tradeoff between ee and conversion, with higher ee (of the recovered substrate) obtained at higher conversion, and therefore lower isolated yield. For example, with a selectivity factor of just 10, 99% ee is possible with approximately 70% conversion, resulting in a yield of about 30%. In contrast, in order to get good ee's and yield of the product, very high selectivity factors are necessary. For example, with a selectivity factor of 10, ee″ above approximately 80% is unattainable, and significantly lower ee″ values are obtained for more realistic conversions. A selectivity in excess of 50 is required for highly enantioenriched product, in reasonable yield.
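The relations above are easy to evaluate numerically. The sketch below recovers s from an assumed conversion and starting-material ee (the sample values c = 0.72, ee = 0.99 are illustrative) and traces the parametric ee-versus-conversion curve for a given s.

```python
import numpy as np

def selectivity_from_sm(c: float, ee: float) -> float:
    """s from conversion c and ee of the recovered starting material."""
    return np.log((1 - c) * (1 - ee)) / np.log((1 - c) * (1 + ee))

def ee_vs_conversion(s: float, t: np.ndarray):
    """Parametric (c, ee) curve with k_S = 1 and k_R = s, as in the text."""
    S = 0.5 * np.exp(-t)
    R = 0.5 * np.exp(-s * t)
    return 1 - (S + R), (S - R) / (S + R)

print(round(selectivity_from_sm(0.72, 0.99), 1))   # about 10, matching the example above
c, ee = ee_vs_conversion(10, np.linspace(0.01, 5, 200))
print(float(np.interp(0.72, c, ee)))               # ee of recovered substrate near 72% conversion
```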
This is a simplified version of the true kinetics of kinetic resolution. The assumption that the reaction is first order in substrate is limiting, and it is possible that the dependence on substrate may depend on conversion, resulting in a much more complicated picture. As a result, a common approach is to measure and report only yields and ee's, as the formula for krel only applies to an idealized kinetic resolution. It is simple to consider an initial substrate-catalyst complex forming, which could negate the first-order kinetics. However, the general conclusions drawn are still helpful to understand the effect of selectivity and conversion on ee.
Practicality
With the advent of asymmetric catalysis, it is necessary to consider the practicality of utilizing kinetic resolution for the preparation of enantiopure products. Even for a product which can be attained through an asymmetric catalytic or auxiliary-based route, the racemate may be significantly less expensive than the enantiopure material, resulting in heightened cost-effectiveness even with the inherent "loss" of 50% of the material. The following have been proposed as necessary conditions for a practical kinetic resolution:
inexpensive racemate and catalyst
no appropriate enantioselective, chiral pool, or classical resolution route is possible
resolution proceeds selectively at low catalyst loadings
separation of starting material and product is easy
To date, a number of catalysts for kinetic resolution have been developed that satisfy most, if not all of the above criteria, making them highly practical for use in organic synthesis. The following sections will discuss a number of key examples.
Reactions utilizing synthetic reagents
Acylation reactions
Gregory Fu and colleagues have developed a methodology utilizing a chiral DMAP analogue to achieve excellent kinetic resolution of secondary alcohols. Initial studies utilizing ether as a solvent, low catalyst loadings (2 mol %), acetic anhydride as the acylating agent, and triethylamine at room temperature gave selectivities ranging from 14-52, corresponding to ee's of the recovered alcohol product as high as 99.2%. However, solvent screening proved that the use of tert-amyl alcohol increased both the reactivity and selectivity.
With the benchmark substrate 1-phenylethanol, this corresponded to 99% ee of the unreacted alcohol at 55% conversion when run at 0 °C. This system proved to be adept at resolution of a number of arylalkylcarbinols, with selectivities as high as 95 and low catalyst loadings of 1%, as shown below utilizing the (-)-enantiomer of the catalyst. This resulted in highly enantioenriched alcohols at very low conversions, giving excellent yields as well. In addition, the high selectivities result in highly enantioenriched acylated products, with a 90% ee sample of acylated alcohol for o-tolylmethylcarbinol, with s=71.
In addition, Fu reported the first highly selective acylation of racemic diols (as well as desymmetrization of meso diols). With low catalyst loading of 1%, enantioenriched diol was recovered in 98% ee and 43% yield, with the diacetate in 39% yield and 99% ee. The remainder of the material was recovered as a mixture of monoacetate.
The planar-chiral DMAP catalyst was also shown to be effective at kinetically resolving propargylic alcohols. In this case, though, selectivities were found to be highest without any base present. When run with 1 mol% of the catalyst at 0 °C, selectivities as high as 20 could be attained. The limitations of this method include the requirement of an unsaturated functionality, such as carbonyl or alkenes, at the remote alkynyl position. Alcohols resolved using the (+)-enantiomer of the DMAP catalyst are shown below.
Fu also showed his chiral DMAP catalyst's ability to resolve allylic alcohols.
Effective selectivity was dependent upon the presence of either a geminal or cis substituent to the alcohol-bearing group, with a notable exception of a trans-phenyl alcohol which exhibited the highest selectivity. Using 1-2.5 mol% of the (+)-enantiomer of the DMAP catalyst, the alcohols shown below were resolved in the presence of triethylamine.
While Fu's DMAP analogue catalyst worked exceptionally well to kinetically resolve racemic alcohols, it was not successful in use for the kinetic resolution of amines. A similar catalyst, PPY*, was developed that, in use with a novel acylating agent, allowed for the successful kinetic resolution of amines by acylation. With 10 mol% (−)-PPY* in chloroform at −50 °C, good to very good selectivities were observed in the acylation of amines, shown below. A similar protocol was developed for the kinetic resolution of indolines.
Epoxidations and dihydroxylations
The Sharpless epoxidation, developed by K. Barry Sharpless in 1980, has been utilized for the kinetic resolution of a racemic mixture of allylic alcohols. While extremely effective at resolving a number of allylic alcohols, this method has a number of drawbacks. Reaction times can run as long as 6 days, and the catalyst is not recyclable. However, the Sharpless asymmetric epoxidation kinetic resolution remains one of the most effective synthetic kinetic resolutions to date. A number of different tartrates can be used for the catalyst; a representative scheme is shown below utilizing diisopropyl tartrate. This method has seen general use on a number of secondary allylic alcohols.
Sharpless asymmetric dihydroxylation has also seen use as a method for kinetic resolution. This method is not widely used, however, since the same resolution can be accomplished in different manners that are more economical. Additionally, the Shi epoxidation has been shown to effect kinetic resolution of a limited selection of olefins. This method is also not widely used, but is of mechanistic interest.
Epoxide openings
While enantioselective epoxidations have been successfully achieved utilizing Sharpless epoxidation, Shi epoxidation, and Jacobsen epoxidation, none of these methods allows for the efficient asymmetric synthesis of terminal epoxides, which are key chiral building blocks. Due to the inexpensiveness of most racemic terminal epoxides and their inability to generally be subjected to classical resolution, an effective kinetic resolution of terminal epoxides would serve as a highly important synthetic methodology. In 1996, Jacobsen and coworkers developed a methodology for the kinetic resolution of epoxides via nucleophilic ring-opening with attack by an azide anion. The (R,R) catalyst is shown.
The catalyst could effectively, with loadings as low as 0.5 mol%, open the epoxide at the terminal position enantioselectively, yielding enantioenriched epoxide starting material and 1,2-azido alcohols. Yields are nearly quantitative and ee's were excellent (≥95% in nearly all cases). The 1,2-azido alcohols can be hydrogenated to give 1,2-amino alcohols, as shown below.
In 1997, Jacobsen's group published a methodology which improved upon their earlier work, allowing for the use of water as the nucleophile in the epoxide opening. Utilizing a nearly identical catalyst, ee's in excess of 98% for both the recovered starting material epoxide and 1,2-diol product were observed. In the example below, hydrolytic kinetic resolution (HKR) was carried out on a 58 gram scale, resulting in 26 g (44%) of the enantioenriched epoxide in >99% ee and 38 g (50%) of the diol in 98% ee.
A multitude of other substrates were examined, with yields of the recovered epoxide ranging from 36-48% for >99% ee. Jacobsen hydrolytic kinetic resolution can be used in tandem with Jacobsen epoxidation to yield enantiopure epoxides from certain olefins, as shown below. The first epoxidation yields a slightly enantioenriched epoxide, and subsequent kinetic resolution yields essentially a single enantiomer. The advantage of this approach is the ability to reduce the amount of hydrolytic cleavage necessary to achieve high enantioselectivity, allowing for overall yields up to approximately 90%, based on the olefin.
Ultimately, the Jacobsen epoxide opening kinetic resolutions produce high enantiomeric purity in the epoxide and product, in solvent-free or low-solvent conditions, and have been applicable on a large scale. The Jacobsen methodology for HKR in particular is extremely attractive since it can be carried out on a multiton scale and utilizes water as the nucleophile, resulting in extremely cost-effective industrial processes.
Despite impressive achievements, HKR has generally been applied to the resolution of simple terminal epoxides with one stereocentre. Quite recently, D. A. Devalankar et al. reported an elegant protocol involving a two-stereocentered Co-catalyzed HKR of racemic terminal epoxides bearing adjacent C–C binding substituents.
Oxidations
Ryōji Noyori and colleagues have developed a methodology for the kinetic resolution of benzylic and allylic secondary alcohols via transfer hydrogenation. The ruthenium complex catalyzes oxidation of the more reactive enantiomer by transfer hydrogenation to acetone, yielding an unreacted enantiopure alcohol, an oxidized ketone, and isopropanol. In the example illustrated below, exposure of 1-phenylethanol to the (S,S) enantiomer of the catalyst in the presence of acetone results in a 51% yield of 94% ee (R)-1-phenylethanol, along with 49% acetophenone and isopropanol as a byproduct.
This methodology is essentially the reverse of Noyori's asymmetric transfer hydrogenation of ketones, which yield enantioenriched alcohols via reduction. This limits the attractiveness of the kinetic resolution method, since there is a similar method to achieve the same products without the loss of half the material. Thus, the kinetic resolution would only be carried out in an instance for which the racemic alcohol was at least one half the price of the ketone or significantly easier to access.
In addition, Uemura and Hidai have developed a ruthenium catalyst for the kinetic resolution oxidation of benzylic alcohols, yielding highly enantioenriched alcohols in good yields.
The complex can, like Noyori's catalyst, effect transfer hydrogenation between a ketone and isopropanol to give an enantioenriched alcohol as well as effect kinetic resolution of a racemic alcohol, giving enantiopure alcohol (>99% ee) and oxidized ketone, with acetone as the byproduct. It is highly effective at reducing ketones enantioselectively, giving most benzylic alcohols in >99% ee and can resolve a number of racemic benzylic alcohols to give high yields (up to 49%) of single enantiomers, as shown below. This method has the same disadvantages as the Noyori kinetic resolution, namely that the alcohols can also be accessed via reduction of the ketones enantioselectively. Additionally, only one enantiomer of the catalyst has been reported.
Hydrogenation
Noyori has also demonstrated the kinetic resolution of allylic alcohols by asymmetric hydrogenation of the olefin.
Utilizing the Ru[BINAP] complex, selective hydrogenation can give high ee's of the unsaturated alcohol in addition to the hydrogenated alcohol, as shown below. Thus, a second hydrogenation of the enantioenriched allylic alcohol remaining will give enantiomerically pure samples of both enantiomers of the saturated alcohol. Noyori has resolved a number of allylic alcohols with good to excellent yields and good to excellent ee's (up to >99%).
Ring closing metathesis
Hoveyda and Schrock have developed a catalyst for ring-closing metathesis kinetic resolution of dienyl allylic alcohols. The molybdenum alkylidene catalyst selectively catalyzes one enantiomer to perform ring closing metathesis, resulting in an enantiopure alcohol, and an enantiopure closed ring, as shown below. The catalyst is most effective at resolving 1,6-dienes. However, slight structural changes in the substrate, such as increasing the inter-alkene distance to 1,7, can sometimes necessitate the use of a different catalyst, reducing the efficacy of this method.
Enzymatic reactions
Acylations
As with synthetic kinetic resolution procedures, enzymatic acylation kinetic resolutions have seen the broadest application in a synthetic context. Especially important has been the use of enzymatic kinetic resolution to efficiently and cheaply prepare amino acids. On a commercial scale, Degussa's methodology employing acylases is capable of resolving numerous natural and unnatural amino acids. The racemic mixtures can be prepared via Strecker synthesis, and the use of either porcine kidney acylase (for straight chain substrates) or an enzyme from the mold Aspergillus oryzae (for branched side chain substrates) can effectively yield enantioenriched amino acids in high (85-90%) yields. The unreacted starting material can be racemized in situ, thus making this a dynamic kinetic resolution.
In addition, lipases are used extensively for kinetic resolution in both academic and industrial settings.
Lipases have been used to resolve primary alcohols, secondary alcohols, a limited number of tertiary alcohols, carboxylic acids, diols, and even chiral allenes. Lipase from Pseudomonas cepacia (PSL) is the most widely used in the resolution of primary alcohols and has been used with vinyl acetate as an acylating agent to kinetically resolve the primary alcohols shown below.
For the resolution of secondary alcohols, Pseudomonas cepacia lipase (PSL-C) has been employed effectively to generate excellent ee's of the (R)-enantiomer of the alcohol. The use of isopropenyl acetate as the acylating agent results in acetone as the byproduct, which is effectively removed from the reaction using molecular sieves.
Oxidations and reductions
Baker's yeast (BY) has been utilized for the kinetic resolution of α-stereogenic carbonyl compounds. The enzyme selectively reduces one enantiomer, yielding a highly enantioenriched alcohol and ketone, as shown below.
Baker's yeast has also been used in the kinetic resolution of secondary benzylic alcohols by oxidation. While excellent ee's of the recovered alcohol have been reported, they typically require >60% conversion, resulting in diminished yields. Baker's yeast has also been used in the kinetic resolution via reduction of β-ketoesters. However, given the success of Noyori's resolution of the same substrates, detailed later in this article, this has not seen much use.
Dynamic kinetic resolution
Dynamic kinetic resolution (DKR) occurs when the starting material racemate is able to epimerize easily, resulting in an essentially racemic starting material mix at all points during the reaction. Then, the product derived from the enantiomer with the lower activation barrier can form in, theoretically, up to 100% yield. This is in contrast to standard kinetic resolution, which necessarily has a maximum yield of 50%. For this reason, dynamic kinetic resolution has extremely practical applications to organic synthesis. The observed dynamics are based on the Curtin-Hammett principle. The barrier to reaction of either enantiomer is necessarily higher than the barrier to epimerization, resulting in a kinetic well containing the racemate. This is equivalent to writing, for kR > kS, that k_rac > kR > kS, where k_rac is the rate constant for racemization of the starting material.
A number of excellent reviews have been published, most recently in 2008, detailing the theory and practical applications of DKR.
Noyori asymmetric hydrogenation
The Noyori asymmetric hydrogenation of ketones is an excellent example of dynamic kinetic resolution at work. The enantiomeric β-ketoesters can undergo epimerization, and the choice of chiral catalyst, typically of the form Ru[(R)-BINAP]X2, where X is a halogen, leads to one of the enantiomers reacting preferentially faster. The relative free energy for a representative reaction is shown below. As can be seen, the epimerization intermediate is lower in free energy than the transition states for hydrogenation, resulting in rapid racemization and high yields of a single enantiomer of the product.
The enantiomers interconvert through their common enol, which is the energetic minimum located between the enantiomers. The shown reaction yields a 93% ee sample of the anti product shown above. Solvent choice appears to have a major influence on the diastereoselectivity, as dichloromethane and methanol both show effectiveness for certain substrates. Noyori and others have also developed newer catalysts which have improved on both ee and diastereomeric ratio (dr).
Genêt and coworkers developed SYNPHOS, a BINAP analogue which forms ruthenium complexes, which perform highly selective asymmetric hydrogenations. Enantiopure Ru[SYNPHOS]Br2 was shown to selectively hydrogenate racemic α-amino-β-ketoesters to enantiopure aminoalcohols, as shown below utilizing (R)-SYNPHOS. 1,2-syn amino alcohols were prepared from benzoyl protected amino compounds, whereas anti products were prepared from hydrochloride salts of the amine.
Fu acylation modification
Recently, Gregory Fu and colleagues reported a modification of their earlier kinetic resolution work to produce an effective dynamic kinetic resolution. Using the ruthenium racemization catalyst shown to the right, and his planar chiral DMAP catalyst, Fu has demonstrated the dynamic kinetic resolution of secondary alcohols in up to 99% yield and 93% ee, as shown below. Work is ongoing to further develop the applications of the widely used DMAP catalyst to dynamic kinetic resolution.
Enzymatic dynamic kinetic resolutions
A number of enzymatic dynamic kinetic resolutions have been reported. A prime example using PSL effectively resolves racemic acyloins in the presence of triethylamine and vinyl acetate as the acylating agent. As shown below, the product was isolated in 75% yield and 97% ee. Without the presence of the base, regular kinetic resolution occurred, resulting in 45% yield of >99% ee acylated product and 53% of the starting material in 92% ee.
Another excellent, though not high-yielding, example is the kinetic resolution of (±)-8-amino-5,6,7,8-tetrahydroquinoline. When exposed to Candida antarctica lipase B (CALB) in toluene and ethyl acetate for 3–24 hours, normal kinetic resolution occurs, resulting in 45% yield of 97% ee of starting material and 45% yield of >97% ee acylated amine product. However, when the reaction is allowed to stir for 40–48 hours, racemic starting material and >60% of >95% ee acylated product are recovered.
Here, the unreacted starting material racemizes in situ via a dimeric enamine, resulting in a recovery of greater than 50% yield of the enantiopure acylated amine product.
Chemoenzymatic dynamic kinetic resolutions
There have been a number of reported procedures which take advantage of a chemical reagent/catalyst to perform racemization of the starting material and an enzyme to selectively react with one enantiomer, called chemoenzymatic dynamic kinetic resolutions. PSL-C was utilized along with a ruthenium catalyst (for racemization) to produce enantiopure (>95% ee) δ-hydroxylactones.
More recently, secondary alcohols have been resolved by Bäckvall with yields up to 99% and ee's up to >99% utilizing CALB and a ruthenium racemization complex.
A second type of chemoenzymatic dynamic kinetic resolution involves a π-allyl complex from an allylic acetate with palladium. Here, racemization occurs with loss of the acetate, forming a cationic complex with the transition metal center, as shown below. Palladium has been shown to facilitate this reaction, while ruthenium has been shown to effect a similar reaction, also shown below.
Parallel kinetic resolution
In parallel kinetic resolution (PKR), a racemic mixture reacts to form two non-enantiomeric products, often through completely different reaction pathways. With PKR, there is no tradeoff between conversion and ee, as the formed products are not enantiomers. One strategy for PKR is to remove the less reactive enantiomer (towards the desired chiral catalyst) from the reaction mixture by subjecting it to a second set of reaction conditions that preferentially react with it, ideally with an approximately equal reaction rate. Thus, both enantiomers are consumed in different pathways at equal rates. PKR experiments can be stereodivergent, regiodivergent, or structurally divergent. One of the most highly efficient PKR's reported to date was accomplished by Yoshito Kishi in 1998; CBS reduction of a racemic steroidal ketone resulted in stereoselective reduction, producing two diastereomers of >99% ee, as shown below.
PKR have also been accomplished with the use of enzyme catalysts. Using the fungus Mortierella isabellina NRRL 1757, reduction of racemic β-ketonitriles affords two diastereomers, which can be separated and re-oxidized to give highly enantiopure β-ketonitriles.
Highly synthetically useful parallel kinetic resolutions have truly yet to be discovered, however. A number of procedures have been discovered that give acceptable ee's and yields, but there are very few examples which give highly selective parallel kinetic resolution and not simply somewhat selective reactions. For example, Fu's parallel kinetic resolution of 4-alkynals yields very enantioenriched cyclobutanone in low yield and slightly enantioenriched cyclopentenone, as shown below.
In theory, parallel kinetic resolution can give the highest ee's of products, since only one enantiomer gives each desired product. For example, for two complementary reactions both with s=49, 100% conversion would give products in 50% yield and 96% ee. These same values would require s=200 for a simple kinetic resolution. As such, the promise of PKR continues to attract much attention. The Kishi CBS reduction remains one of the few examples to fulfill this promise.
See also
Chiral auxiliaries
Chiral pool synthesis
Chiral resolution
Enantioselective synthesis
References
Further reading
Dynamic Kinetic Resolutions. A MacMillan Group Meeting. Jake Wiener Link
Dynamic Kinetic Resolution:A Powerful Approach to Asymmetric Synthesis. Erik Alexanian Supergroup Meeting March 30, 2005 Link
Dynamic Kinetic Resolution: Practical Applications in Synthesis. Valerie Keller 3rd-Year Seminar November 1, 2001 Link
Kinetic Resolution. David Ebner Stoltz Group Literature Seminar. June 4, 2003 link
Kinetic Resolutions. UT Southwestern Presentation. link
Stereochemistry | Kinetic resolution | [
"Physics",
"Chemistry"
] | 6,390 | [
"Spacetime",
"Stereochemistry",
"Space",
"nan"
] |
5,497,461 | https://en.wikipedia.org/wiki/William%20Bate%20Hardy | Sir William Bate Hardy, FRS (6 April 1864 – 23 January 1934) was a British biologist and food scientist. The William Bate Hardy Prize is named in his honour.
Life
He was born in Erdington, a suburb of Birmingham, the son of William Hardy of Llangollen and his wife Sarah Bate. Educated at Framlingham College, he graduated with a Master of Arts from the University of Cambridge in 1888, where he carried out biochemical research. He first suggested the word hormone to E.H. Starling.
He was elected a Fellow of the Royal Society in June 1902, and delivered their Croonian Lecture in 1905, their Bakerian Lecture (jointly) in 1925 and won their Royal Medal in 1926.
Hardy delivered the Guthrie lecture to the Physical Society in 1916.
In 1920 Hardy, in cooperation with Sir Walter Morley Fletcher, the secretary of the Medical Research Committee, persuaded the trustees of the Sir William Dunn legacy to use the money for research in biochemistry and pathology. To this end they funded Professor Sir Frederick Gowland Hopkins (1861–1947) in Cambridge with a sum of £210,000 in 1920 for the advancement of his work in biochemistry. Two years later they endowed Professor Georges Dreyer (1873–1934) of the Oxford University with a sum of £100,000 for research in pathology. The money enabled each of the recipients to establish a chair and sophisticated teaching and research laboratories, the Sir William Dunn Institute of Biochemistry at Cambridge and the Sir William Dunn School of Pathology at Oxford. Between them, the two establishments have yielded ten Nobel Prize winners, including Hopkins, for the discovery of vitamins, and professors Howard Florey and Ernst Chain (Oxford), for their developmental work on penicillin.
Hardy also made significant contributions to the field of tribology. Alongside Ida Doubleday, he introduced the concept of boundary lubrication. Hardy was named as one of the 23 "Men of Tribology" by Duncan Dowson.
Hardy was knighted in 1925.
Death
Hardy died at his home in Cambridge on 23 January 1934.
His long-time friend, Sir James Hopwood Jeans, elected as president of the British Science Association after Hardy's death, briefly eulogized him in the opening address to the Association's September 1934 meeting in Aberdeen:
The journal, Nature, commented on his death in a two page article, lamenting that, "science has lost a great captain and Great Britain a great public servant."
Family
William Bate Hardy married Alice Mary Finch in Cambridge in 1898.
References
1864 births
1934 deaths
Scientists from Birmingham, West Midlands
English biologists
Fellows of the Royal Society
Royal Medal winners
Tribologists | William Bate Hardy | [
"Materials_science"
] | 542 | [
"Tribology",
"Tribologists"
] |
5,497,504 | https://en.wikipedia.org/wiki/Seasonal%20thermal%20energy%20storage | Seasonal thermal energy storage (STES), also known as inter-seasonal thermal energy storage,
is the storage of heat or cold for periods of up to several months. The thermal energy can be collected whenever it is available and be used whenever needed, such as in the opposing season. For example, heat from solar collectors or waste heat from air conditioning equipment can be gathered in hot months for space heating use when needed, including during winter months. Waste heat from industrial processes can similarly be stored and be used much later
or the natural cold of winter air can be stored for summertime air conditioning.
STES stores can serve district heating systems, as well as single buildings or complexes. Among seasonal storages used for heating, the design peak annual temperatures generally are in the range of , and the temperature difference occurring in the storage over the course of a year can be several tens of degrees. Some systems use a heat pump to help charge and discharge the storage during part or all of the cycle. For cooling applications, often only circulation pumps are used.
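The scale of such a store can be estimated from the sensible heat Q = V·ρ·c_p·ΔT of the storage medium. The sketch below uses water and an illustrative volume and temperature swing; all numbers are assumptions for illustration rather than data on any particular installation.

```python
RHO_WATER = 1000.0   # kg/m^3
CP_WATER = 4186.0    # J/(kg*K)

def stored_energy_kwh(volume_m3: float, delta_t_kelvin: float) -> float:
    """Sensible heat stored in a water volume cycled through delta_t_kelvin, in kWh."""
    joules = volume_m3 * RHO_WATER * CP_WATER * delta_t_kelvin
    return joules / 3.6e6

# Illustrative pit store: 200,000 m^3 of water cycled over a 40 K annual swing.
print(f"{stored_energy_kwh(200_000, 40):,.0f} kWh")   # roughly 9.3 million kWh
```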
Sorption and thermochemical heat storage are considered the most suitable for seasonal storage due to the theoretical absence of heat loss between charging and discharging. However, studies have shown that actual heat losses currently are usually significant.
Examples for district heating include Drake Landing Solar Community where ground storage provides 97% of yearly consumption without heat pumps,
and Danish pond storage with boosting.
STES technologies
There are several types of STES technology, covering a range of applications from single small buildings to community district heating networks. Generally, efficiency increases and the specific construction cost decreases with size.
Underground thermal energy storage
UTES (underground thermal energy storage), in which the storage medium may be geological strata ranging from earth or sand to solid bedrock, or aquifers.
UTES technologies include:
ATES (aquifer thermal energy storage). An ATES store is composed of a doublet, totaling two or more wells into a deep aquifer that is contained between impermeable geological layers above and below. One half of the doublet is for water extraction and the other half for reinjection, so the aquifer is kept in hydrological balance, with no net extraction. The heat (or cold) storage medium is the water and the substrate it occupies. Germany's Reichstag building has been both heated and cooled since 1999 with ATES stores, in two aquifers at different depths. In the Netherlands there are well over 1,000 ATES systems, which are now a standard construction option. A significant system has been operating at Richard Stockton College (New Jersey) for several years. ATES has a lower installation cost than borehole thermal energy storage (BTES) because usually fewer holes are drilled, but ATES has a higher operating cost. Also, ATES requires particular underground conditions to be feasible, including the presence of an aquifer.
BTES (borehole thermal energy storage). BTES stores can be constructed wherever boreholes can be drilled, and are composed of one to hundreds of vertical boreholes, typically in diameter. Systems of all sizes have been built, including many quite large. The strata can be anything from sand to crystalline hardrock, and depending on engineering factors the depth can be from . Spacings have ranged from . Thermal models can be used to predict seasonal temperature variation in the ground, including the establishment of a stable temperature regime which is achieved by matching the inputs and outputs of heat over one or more annual cycles. Warm-temperature seasonal heat stores can be created using borehole fields to store surplus heat captured in summer to actively raise the temperature of large thermal banks of soil so that heat can be extracted more easily (and more cheaply) in winter. Interseasonal Heat Transfer uses water circulating in pipes embedded in asphalt solar collectors to transfer heat to Thermal Banks created in borehole fields. A ground source heat pump is used in winter to extract the warmth from the Thermal Bank to provide space heating via underfloor heating. A high Coefficient of performance is obtained because the heat pump starts with a warm temperature of from the thermal store, instead of a cold temperature of from the ground. A BTES operating at Richard Stockton College since 1995 at a peak of about consists of 400 boreholes deep under a parking lot. It has a heat loss of 2% over six months. The upper temperature limit for a BTES store is due to characteristics of the PEX pipe used for BHEs, but most do not approach that limit. Boreholes can be either grout- or water-filled depending on geological conditions, and usually have a life expectancy in excess of 100 years. Both a BTES and its associated district heating system can be expanded incrementally after operation begins, as at Neckarsulm, Germany. BTES stores generally do not impair use of the land, and can exist under buildings, agricultural fields and parking lots. An example of one of the several kinds of STES illustrates well the capability of interseasonal heat storage. In Alberta, Canada, the homes of the Drake Landing Solar Community (in operation since 2007), get 97% of their year-round heat from a district heat system that is supplied by solar heat from solar-thermal panels on garage roofs. This feat – a world record – is enabled by interseasonal heat storage in a large mass of native rock that is under a central park. The thermal exchange occurs via a cluster of 144 boreholes, drilled into the earth. Each borehole is in diameter and contains a simple heat exchanger made of small diameter plastic pipe, through which water is circulated. No heat pumps are involved.
CTES (cavern or mine thermal energy storage). STES stores are possible in flooded mines, purpose-built chambers, or abandoned underground oil stores (e.g. those mined into crystalline hardrock in Norway), if they are close enough to a heat (or cold) source and market.
Energy Pilings. During construction of large buildings, BHE heat exchangers much like those used for BTES stores have been spiraled inside the cages of reinforcement bars for pilings, with concrete then poured in place. The pilings and surrounding strata then become the storage medium.
GIITS (geo interseasonal insulated thermal storage). During construction of any building with a primary slab floor, an area approximately the footprint of the building to be heated, and > 1 m in depth, is insulated on all 6 sides typically with HDPE closed cell insulation. Pipes are used to transfer solar energy into the insulated area, as well as extracting heat as required on demand. If there is significant internal ground water flow, remedial actions are needed to prevent it.
Surface and above ground technologies
Pit Storage. Lined, shallow dug pits that are filled with gravel and water as the storage medium are used for STES in many Danish district heating systems. Storage pits are covered with a layer of insulation and then soil, and are used for agriculture or other purposes. A system in Marstal, Denmark, includes a pit storage supplied with heat from a field of solar-thermal panels. It is initially providing 20% of the year-round heat for the village and is being expanded to provide twice that. The world's largest pit store () was commissioned in Vojens, Denmark, in 2015, and allows solar heat to provide 50% of the annual energy for the world's largest solar-enabled district heating system. In these Danish systems, a capital expenditure per capacity unit of between €0.4 and €0.6 per kWh could be achieved.
Large-scale thermal storage with water. Large scale STES water storage tanks can be built above ground, insulated, and then covered with soil.
Horizontal heat exchangers. For small installations, a heat exchanger of corrugated plastic pipe can be shallow-buried in a trench to create a STES.
Earth-bermed buildings. Stores heat passively in surrounding soil.
Salt hydrate technology. This technology achieves significantly higher storage densities than water-based heat storage. See Thermal energy storage: Salt hydrate technology
Conferences and organizations
The International Energy Agency's Energy Conservation through Energy Storage (ECES) Programme has held triennial global energy conferences since 1981. The conferences originally focused exclusively on STES, but now that those technologies are mature other topics such as phase change materials (PCM) and electrical energy storage are also being covered. Since 1985 each conference has had "stock" (for storage) at the end of its name; e.g. EcoStock, ThermaStock. They are held at various locations around the world. The most recent were InnoStock 2012 (the 12th International Conference on Thermal Energy Storage) in Lleida, Spain, and GreenStock 2015 in Beijing.
EnerStock 2018 will be held in Adana, Turkey in April 2018.
The IEA-ECES programme continues the work of the earlier International Council for Thermal Energy Storage which from 1978 to 1990 had a quarterly newsletter and was initially sponsored by the U.S. Department of Energy. The newsletter was initially called ATES Newsletter, and after BTES became a feasible technology it was changed to STES Newsletter.
Use of STES for small, passively heated buildings
Small passively heated buildings typically use the soil adjoining the building as a low-temperature seasonal heat store that in the annual cycle reaches a maximum temperature similar to average annual air temperature, with the temperature drawn down for heating in colder months. Such systems are a feature of building design, as some simple but significant differences from 'traditional' buildings are necessary. At a depth of about in the soil, the temperature is naturally stable within a year-round range, if the drawdown does not exceed the natural capacity for solar restoration of heat. Such storage systems operate within a narrow range of storage temperatures over the course of a year, as opposed to the other STES systems described above for which large annual temperature differences are intended.
Two basic passive solar building technologies were developed in the US during the 1970s and 1980s. They use direct heat conduction to and from thermally isolated, moisture-protected soil as a seasonal storage method for space heating, with direct conduction as the heat return mechanism. In one method, "passive annual heat storage" (PAHS), the building's windows and other exterior surfaces capture solar heat which is transferred by conduction through the floors, walls, and sometimes the roof, into adjoining thermally buffered soil. When the interior spaces are cooler than the storage medium, heat is conducted back to the living space.
The other method, “annualized geothermal solar” (AGS) uses a separate solar collector to capture heat. The collected heat is delivered to a storage device (soil, gravel bed or water tank) either passively by the convection of the heat transfer medium (e.g. air or water) or actively by pumping it. This method is usually implemented with a capacity designed for six months of heating.
Examples of the use of solar thermal storage from across the world include: Suffolk One, a college in East Anglia, England, which uses a thermal collector of pipe buried in the bus turning area to collect solar energy that is then stored in 18 boreholes, each deep, for use in winter heating. Drake Landing Solar Community in Canada uses solar thermal collectors on the garage roofs of 52 homes; the collected heat is stored in an array of deep boreholes. The ground can reach temperatures in excess of 70 °C, which is then used to heat the houses passively. The scheme has been running successfully since 2007. In Brædstrup, Denmark, an array of solar thermal collectors is used to collect some 4,000,000 kWh/year, similarly stored in an array of deep boreholes.
Liquid engineering
Architect Matyas Gutai obtained an EU grant to construct a house in Hungary that uses extensive water-filled wall panels as heat collectors and reservoirs, together with underground heat-storage water tanks. The design uses microprocessor control.
Small buildings with internal STES water tanks
A number of homes and small apartment buildings have demonstrated combining a large internal water tank for heat storage with roof-mounted solar-thermal collectors. Storage temperatures of are sufficient to supply both domestic hot water and space heating. The first such house was MIT Solar House #1, in 1939. An eight-unit apartment building in Oberburg, Switzerland, built in 1989, has three tanks that together store more heat than the building requires. Since 2011, that design has been replicated in new buildings.
In Berlin, the "Zero Heating Energy House" was built in 1997 as part of the IEA Task 13 low-energy housing demonstration project. It stores water at temperatures up to inside a tank in the basement.
A similar example was built in Ireland in 2009, as a prototype. The solar seasonal store consists of a tank, filled with water, which was installed in the ground, heavily insulated all around, to store heat from evacuated solar tubes during the year. The system was installed as an experiment to heat the world's first standardized pre-fabricated passive house in Galway, Ireland. The aim was to find out if this heat would be sufficient to eliminate the need for any electricity in the already highly efficient home during the winter months.
Thanks to improvements in glazing, zero-heating buildings are now possible without seasonal energy storage.
Use of STES in greenhouses
STES is also used extensively for the heating of greenhouses. ATES is the kind of storage commonly in use for this application. In summer, the greenhouse is cooled with ground water, pumped from the “cold well” in the aquifer. The water is heated in the process, and is returned to the “warm well” in the aquifer. When the greenhouse needs heat, such as to extend the growing season, water is withdrawn from the warm well, becomes chilled while serving its heating function, and is returned to the cold well. This is a very efficient system of free cooling, which uses only circulation pumps and no heat pumps.
Annualized geo-solar
Annualized geo-solar (AGS) enables passive solar heating in even cold, foggy north temperate areas. It uses the ground under or around a building as thermal mass to heat and cool the building. After a designed, conductive thermal lag of 6 months the heat is returned to, or removed from, the inhabited spaces of the building. In hot climates, exposing the collector to the frigid night sky in winter can cool the building in summer.
The six-month thermal lag is provided by about three meters (ten feet) of dirt. A six-meter-wide (20 ft) buried skirt of insulation around the building keeps rain and snow melt out of the dirt, which is usually under the building. The dirt provides radiant heating and cooling through the floor or walls. A thermal siphon moves the heat between the dirt and the solar collector. The solar collector may be a sheet-metal compartment in the roof, or a wide flat box on the side of a building or hill. The siphons may be made from plastic pipe and carry air. Using air prevents water leaks and water-caused corrosion. Plastic pipe doesn't corrode in damp earth, as metal ducts can.
AGS heating systems typically consist of:
A very well-insulated, energy efficient, eco-friendly living space;
Heat captured in the summer months from a sun-warmed sub-roof or attic space, a sunspace or greenhouse, a ground-based, flat-plate, thermosyphon collector, or other solar-heat collection device;
Heat transported from the collection source into (typically) the earth mass under the living space (for storage), this mass surrounded by a sub-surface perimeter "cape" or "umbrella" providing both insulation from easy heat-loss back up to the outdoors air and a barrier against moisture migration through that heat-storage mass;
A high-density floor whose thermal properties are designed to radiate heat back into the living space, but only after the proper sub-floor-insulation-regulated time-lag;
A control-scheme or system which activates (often PV-powered) fans and dampers, when the warm-season air is sensed to be hotter in the collection area(s) than in the storage mass, or allows the heat to be moved into the storage-zone by passive convection (often using a solar chimney and thermally activated dampers.)
Usually it requires several years for the storage earth-mass to fully preheat from the local at-depth soil temperature (which varies widely by region and site-orientation) to an optimum Fall level at which it can provide up to 100% of the heating requirements of the living space through the winter. This technology continues to evolve, with a range of variations (including active-return devices) being explored. The listserve where this innovation is most often discussed is "Organic Architecture" at Yahoo.
This system has been deployed almost exclusively in northern Europe. One system has been built at Drake Landing in North America. A more recent example is a do-it-yourself energy-neutral home in progress in Collinsville, Illinois, that will rely solely on annualized geo-solar for conditioning.
See also
Central solar heating
District heating
Geosolar
Ice house (building)
Ice pond
List of energy storage projects
Solar pond
Solar thermal collector
Thermal energy storage
Zero energy building
Zero heating building
References
External links
DOE EERE Research Reports
December 2005, Seasonal thermal store being fitted in an ENERGETIKhaus100
October 1998, Fujita Research report
Earth Notes: Milk Tanker Thermal Store with Heat Pump
Heliostats used for concentrating solar power (photos)
Wofati Eco building with annualized thermal inertia
Energy storage
Energy conservation
Energy recovery
Sustainable energy
Renewable energy
Appropriate technology
Geothermal energy
Solar architecture
Solar power | Seasonal thermal energy storage | [
"Engineering"
] | 3,640 | [
"Building engineering",
"Civil engineering",
"Architecture"
] |
5,497,779 | https://en.wikipedia.org/wiki/Carbinoxamine | Carbinoxamine is an antihistamine and anticholinergic agent. It is used for hay fever, vasomotor rhinitis, mild urticaria, angioedema, dermatographism and allergic conjunctivitis. Carbinoxamine is a histamine antagonist, specifically an H1-antagonist. The maleic acid salt of the levorotatory isomer is sold as the prescription drug rotoxamine.
It was patented in 1947 and came into medical use in 1953. It was first launched in the United States by the McNeil Corporation under the brand name Clistin. Carbinoxamine is available in various countries around the world by itself, combined with decongestants such as pseudoephedrine, and also with other ingredients including paracetamol, aspirin, and codeine.
Society and culture
In June 2006 the FDA announced that more than 120 branded pharmacy products containing carbinoxamine were being illegally marketed and demanded they be removed from the marketplace. This action was precipitated by twenty-one reported deaths in children under the age of two who had been administered carbinoxamine-containing products. Despite the fact that the drug had not been studied in this age group, a multitude of OTC preparations containing carbinoxamine were being marketed for infants and toddlers. At present, all carbinoxamine-containing formulations are approved only for adults or children ages 3 or older.
Brand names
Brand names include Clistin, Palgic, Rondec, Rhinopront, Ryvent.
Side effects
Continuous and/or cumulative use of anticholinergic medication, including first-generation antihistamines, is associated with a higher risk of cognitive decline and dementia in older people.
See also
Carbinoxamine/pseudoephedrine
Tofenacin
Clemastine
Captodiamine
References
H1 receptor antagonists
4-Chlorophenyl compounds
2-Pyridyl compounds
Ethers | Carbinoxamine | [
"Chemistry"
] | 412 | [
"Organic compounds",
"Functional groups",
"Ethers"
] |
5,498,023 | https://en.wikipedia.org/wiki/Placental%20lactogen | Placental lactogen, also referred to as chorionic somatomammotropin, is a polypeptide hormone produced by the placenta during pregnancy. It influences the metabolic processes of both the mother and fetus, aiding in the growth and development of the fetus. Classified within the somatotropin family, placental lactogen shares structural and functional similarities with growth hormone and pituitary prolactin. It has been identified in various mammals, including humans, monkeys, mice, cows, hamsters, and sheep. However, it has not been found in dogs and rabbits.
Classification of placental lactogen across mammalian species
The initial placental lactogen-related proteins were identified in rodents and are commonly categorized into two primary groups based on the timing of their secretion during pregnancy: those occurring during the mid-pregnancy stage, such as placental lactogen-I, and those occurring during the late-pregnancy stage, such as placental lactogen-II. Similarly, bovine placental lactogen exhibits diversity, through its molecular forms rather than secretion timing, with multiple isoforms differing in molecular weight and charge due to variations in glycosylation and truncated transcripts. While there are many shared characteristics, placental lactogen is synthesized by distinct trophoblast cell types. In sheep, for example, ovine placental lactogen is generated by binucleate cells.
References
Further reading
Placenta
Hormones of the pregnant female
Peptides | Placental lactogen | [
"Chemistry"
] | 317 | [
"Biomolecules by chemical classification",
"Peptides",
"Molecular biology"
] |
5,498,192 | https://en.wikipedia.org/wiki/Coin%20manipulation | Coin manipulation is the art of manipulating coins in skillful flourishes, usually on or around the hands. Although not always considered coin magic, the flourishes are sometimes used in magic shows. The difficulty of the trick ranges greatly, from some that take a few minutes to accomplish, to much more complex ones that can take months, even years, to master. One of the best-known flourishes is the relatively advanced coin walk.
Coin walk
The coin walk is a type of coin flourish in which a coin is flipped over the fingers to create the illusion of a coin walking across the back of the hand. It is one of the most famous coin manipulation feats. It is also known as the coin roll, knuckle roll, and the steeplechase flourish, and can also be performed with poker chips, slugs, or other similar implements.
The manipulation is generally performed on the first phalanx bone of each finger of one hand. After the coin has been flipped over by each phalanx, not including the smallest finger, the thumb brings the coin back under the hand and back to the index finger to repeat the sequence as many times as desired.
In popular culture
Van Heflin's character Sam performs this manipulation throughout the 1946 film The Strange Love of Martha Ivers. Woody Allen's character Miles Monroe also performs this to seduce Diane Keaton's character Luna in the 1973 film Sleeper.
The characters Peter, Walter, and Elizabeth Bishop perform this trick in the TV show Fringe. Steve Carell and Alan Arkin also perform this feat in tandem (almost a "dueling knuckle walk") in the movie The Incredible Burt Wonderstone. In the 2017 Indian Tamil film Bairavaa, the protagonist (Vijay) is shown doing this feat several times throughout the film. Connor, a protagonist in the 2018 game Detroit: Become Human, is shown doing various coin tricks as the player progresses through the game. Actor Val Kilmer can be seen doing the feat in both Tombstone as Doc Holliday and in the movie Real Genius, performing a double-handed continuous hand roll.
See also
Pen spinning
References
External links
Sleight of hand
Coins
Coin magic
Object manipulation | Coin manipulation | [
"Biology"
] | 445 | [
"Behavior",
"Object manipulation",
"Motor control"
] |
5,498,479 | https://en.wikipedia.org/wiki/Optimal%20job%20scheduling | Optimal job scheduling is a class of optimization problems related to scheduling. The inputs to such problems are a list of jobs (also called processes or tasks) and a list of machines (also called processors or workers). The required output is a schedule – an assignment of jobs to machines. The schedule should optimize a certain objective function. In the literature, problems of optimal job scheduling are often called machine scheduling, processor scheduling, multiprocessor scheduling, or just scheduling.
There are many different problems of optimal job scheduling, different in the nature of jobs, the nature of machines, the restrictions on the schedule, and the objective function. A convenient notation for optimal scheduling problems was introduced by Ronald Graham, Eugene Lawler, Jan Karel Lenstra and Alexander Rinnooy Kan. It consists of three fields: α, β and γ. Each field may be a comma separated list of words. The α field describes the machine environment, β the job characteristics and constraints, and γ the objective function. Since its introduction in the late 1970s the notation has been constantly extended, sometimes inconsistently. As a result, today there are some problems that appear with distinct notations in several papers.
Single-stage jobs vs. multi-stage jobs
In the simpler optimal job scheduling problems, each job j consists of a single execution phase, with a given processing time pj. In more complex variants, each job consists of several execution phases, which may be executed in sequence or in parallel.
Machine environments
In single-stage job scheduling problems, there are four main categories of machine environments:
1: Single-machine scheduling. There is a single machine.
P: Identical-machines scheduling. There are m parallel machines, and they are identical. Job j takes time pj on any machine it is scheduled to.
Q: Uniform-machines scheduling. There are m parallel machines, and they have different given speeds. Job j on machine i takes time pj/si, where si is the speed of machine i.
R: Unrelated-machines scheduling. There are m parallel machines, and they are unrelated – job j on machine i takes a machine-dependent time pij.
These letters might be followed by the number of machines, which is then fixed. For example, P2 indicates that there are two parallel identical machines. Pm indicates that there are m parallel identical machines, where m is a fixed parameter. In contrast, P indicates that there are m parallel identical machines, but m is not fixed (it is part of the input).
In multi-stage job scheduling problems, there are other options for the machine environments:
O: Open-shop problem. Every job j consists of m operations Oij for i = 1, …, m. The operations can be scheduled in any order. Operation Oij must be processed for pij units on machine i.
F: Flow-shop problem. Every job j consists of m operations Oij for i = 1, …, m, to be scheduled in the given order. Operation Oij must be processed for pij units on machine i.
J: Job-shop problem. Every job j consists of nj operations Oij for i = 1, …, nj, to be scheduled in that order. Operation Oij must be processed for pij units on a dedicated machine μij, with μij ≠ μi′j for i ≠ i′.
Job characteristics
All processing times are assumed to be integers. In some older research papers however they are assumed to be rationals.
pj = p, or pij = p: the processing time is equal for all jobs.
pj = 1, or pij = 1: the processing time is equal to 1 time-unit for all jobs.
rj: for each job j a release time rj is given before which it cannot be scheduled; the default is 0.
: an online problem. Jobs are revealed at their release times. In this context the performance of an algorithm is measured by its competitive ratio.
dj: for each job j a due date dj is given. The idea is that every job should complete before its due date and there is some penalty for jobs that complete late. This penalty is denoted in the objective value. The presence of the job characteristic dj is implicitly assumed and not denoted in the problem name, unless there are some restrictions, as for example dj = d, assuming that all due dates are equal to some given date.
: for each job a strict deadline is given. Every job must complete before its deadline.
pmtn: Jobs can be preempted and resumed possibly on another machine. Sometimes also denoted by 'prmp'.
sizej: Each job j comes with a number of machines sizej on which it must be scheduled at the same time. The default is 1. This is an important parameter in the variant called parallel task scheduling.
Precedence relations
Each pair of two jobs may or may not have a precedence relation. A precedence relation between two jobs means that one job must be finished before the other job. For example, if job i is a predecessor of job j in that order, job j can only start once job i is completed.
prec: There are no restrictions placed on the precedence relations.
chains: Each job is the predecessor of at most one other job and is preceded by at most one other job.
tree: The precedence relations must satisfy one of the two restrictions.
intree: Each node is the predecessor of at most one other job.
outtree: Each node is preceded by at most one other job.
opposing forest: If the graph of precedence relations is split into connected components, then each connected component is either an intree or outtree.
sp-graph: The graph of precedence relations is a series parallel graph.
bounded height: The length of the longest directed path is capped at a fixed value. (A directed path is a sequence of jobs where each job except the last is a predecessor of the next job in the sequence.)
level order: Each job has a level, which is the length of the longest directed path starting from that job. Each job with level k is a predecessor of every job with level k − 1.
interval order: Each job has an interval, and job i is a predecessor of job j if and only if the end of the interval of i is strictly less than the start of the interval of j.
In the presence of a precedence relation one might in addition assume time lags. The time lag between two jobs is the amount of time that must be waited after the first job is complete before the second job can begin. Formally, if job i precedes job j with time lag ℓ, then job j may start no earlier than ℓ time units after job i completes. If no time lag is specified then it is assumed to be zero. Time lags can also be negative. A negative time lag means that the second job can begin a fixed time before the first job finishes.
ℓ: The time lag is the same for each pair of jobs.
ℓij: Different pairs of jobs can have different time lags.
Transportation delays
: Between the completion of operation of job on machine and the start of operation of job on machine , there is a transportation delay of at least units.
: Between the completion of operation of job on machine and the start of operation of job on machine , there is a transportation delay of at least units.
: Machine dependent transportation delay. Between the completion of operation of job on machine and the start of operation of job on machine , there is a transportation delay of at least units.
: Machine pair dependent transportation delay. Between the completion of operation of job on machine and the start of operation of job on machine , there is a transportation delay of at least units.
: Job dependent transportation delay. Between the completion of operation of job on machine and the start of operation of job on machine , there is a transportation delay of at least units.
Various constraints
rcrc: Also known as Recirculation or flexible job shop. The promise on is lifted and for some pairs we might have . In other words, it is possible for different operations of the same job to be assigned to the same machine.
no-wait: The operation must start exactly when operation completes. In other words, once one operation of a job finishes, the next operation must begin immediately. Sometimes also denoted as 'nwt'.
no-idle: No machine may ever be idle between the start of its first execution to the end of its last execution.
: Multiprocessor tasks on identical parallel machines. The execution of job is done simultaneously on parallel machines.
: Multiprocessor tasks. Every job is given with a set of machines , and needs simultaneously all these machines for execution. Sometimes also denoted by 'MPT'.
: Multipurpose machines. Every job needs to be scheduled on one machine out of a given set . Sometimes also denoted by Mj.
Objective functions
Usually the goal is to minimize some objective value. One difference is the notation ΣUj, where the goal is to maximize the number of jobs that complete before their deadline. This is also called the throughput. The objective value can be a sum, possibly weighted by some given priority weights per job.
-: The absence of an objective value is denoted by a single dash. This means that the problem consists simply in producing a feasible scheduling, satisfying all given constraints.
Cj: the completion time of job j. Cmax is the maximum completion time; it is also known as the makespan. Sometimes we are interested in the mean completion time (the average of Cj over all j), which is sometimes denoted by mft (mean finish time).
Fj: The flow time of a job is the difference between its completion time and its release time, i.e. Fj = Cj − rj.
Lj: Lateness. Every job is given a due date dj. The lateness of job j is defined as Lj = Cj − dj. Sometimes Lmax is used to denote feasibility for a problem with deadlines. Indeed, using binary search, the complexity of the feasibility version is equivalent to the minimization of Lmax.
Uj: Throughput. Every job is given a due date dj. There is a unit profit for jobs that complete on time, i.e. Uj = 1 if Cj ≤ dj and Uj = 0 otherwise. Sometimes the meaning of Uj is inverted in the literature, which is equivalent when considering the decision version of the problem, but which makes a huge difference for approximations.
Tj: Tardiness. Every job is given a due date dj. The tardiness of job j is defined as Tj = max(0, Cj − dj).
Ej: Earliness. Every job is given a due date dj. The earliness of job j is defined as Ej = max(0, dj − Cj). This objective is important for just-in-time scheduling.
There are also variants with multiple objectives, but they are much less studied.
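As a worked illustration of these definitions, the following Python sketch evaluates several of the objectives for one fixed schedule. It is not from the source; the completion times, release times and due dates are hypothetical.

```python
# Minimal sketch: evaluating common scheduling objectives for a fixed schedule.
# The job data below are hypothetical, used only to illustrate the definitions.

jobs = [
    # (completion time Cj, release time rj, due date dj)
    (5, 0, 6),
    (9, 2, 7),
    (12, 4, 15),
]

C = [c for c, r, d in jobs]
F = [c - r for c, r, d in jobs]               # flow times Fj = Cj - rj
L = [c - d for c, r, d in jobs]               # lateness Lj = Cj - dj
T = [max(0, l) for l in L]                    # tardiness Tj = max(0, Cj - dj)
U = [1 if c <= d else 0 for c, r, d in jobs]  # on-time indicator Uj

print("Cmax  =", max(C))          # makespan
print("Lmax  =", max(L))          # maximum lateness
print("sum T =", sum(T))          # total tardiness
print("sum U =", sum(U))          # throughput (jobs completed on time)
print("mean finish time =", sum(C) / len(C))
```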
Examples
Here are some examples for problems defined using the above notation.
P2||Cmax – assigning each of the given jobs to one of two identical machines so as to minimize the maximum total processing time over the machines. This is an optimization version of the partition problem; a brief code sketch of this case follows the list below.
1|prec|Lmax – assigning to a single machine, processes with general precedence constraints, minimizing maximum lateness.
R|pmtn|ΣCj – assigning tasks to a variable number of unrelated parallel machines, allowing preemption, minimizing total completion time.
J3|pij = 1|Cmax – a 3-machine job shop problem with unit processing times, where the goal is to minimize the maximum completion time.
P|sizej|Cmax – assigning jobs to parallel identical machines, where each job comes with a number of machines on which it must be scheduled at the same time, minimizing maximum completion time. See parallel task scheduling.
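The sketch below illustrates the first example in the list above: a brute-force solver for the two-machine makespan problem, equivalent to the partition problem. The processing times are hypothetical and the exhaustive search is only practical for small instances.

```python
# Minimal sketch of the P2||Cmax example above: assign jobs to two identical
# machines to minimize the makespan.  Brute force over all assignments, so it
# is only usable for small instances; the processing times are made up.
from itertools import product

p = [3, 5, 6, 7, 4]  # hypothetical processing times pj

best_makespan, best_assignment = None, None
for assignment in product((0, 1), repeat=len(p)):
    loads = [0, 0]
    for job, machine in enumerate(assignment):
        loads[machine] += p[job]
    makespan = max(loads)
    if best_makespan is None or makespan < best_makespan:
        best_makespan, best_assignment = makespan, assignment

print("optimal makespan:", best_makespan)
print("machine of each job:", best_assignment)
```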
Other variants
All variants surveyed above are deterministic in that all data is known to the planner. There are also stochastic variants, in which the data is not known in advance, or can perturb randomly.
In a load balancing game, each job belongs to a strategic agent, who can decide where to schedule his job. The Nash equilibrium in this game may not be optimal. Aumann and Dombb assess the inefficiency of equilibrium in several load-balancing games.
See also
Fractional job scheduling
References
External links
Scheduling zoo (by Christoph Dürr, Sigrid Knust, Damien Prot, Óscar C. Vásquez): an online tool for searching an optimal scheduling problem using the notation.
Complexity results for scheduling problems (by Peter Brucker, Sigrid Knust): a classification of optimal scheduling problems by what is known on their runtime complexity. | Optimal job scheduling | [
"Engineering"
] | 2,375 | [
"Optimal scheduling",
"Industrial engineering"
] |
5,498,622 | https://en.wikipedia.org/wiki/Cell%20software%20development | Software development for the Cell microprocessor involves a mixture of conventional development practices for the PowerPC-compatible PPU core, and novel software development challenges with regard to the functionally reduced SPU coprocessors.
Linux on Cell
An open source software-based strategy was adopted to accelerate the development of a Cell BE ecosystem and to provide an environment to develop Cell applications, including a GCC-based Cell compiler, binutils and a port of the Linux operating system.
Octopiler
Octopiler is IBM's prototype compiler to allow software developers to write code for Cell processors.
Software portability
Adapting VMX for SPU
Differences between VMX and SPU
The VMX (Vector Multimedia Extensions) technology is conceptually similar to the vector model provided by the SPU processors, but there are many significant differences.
The VMX Java mode conforms to the Java Language Specification 1 subset of the default IEEE Standard, extended to include IEEE and C9X compliance where the Java standard falls silent. In a typical implementation, non-Java mode converts denormal values to zero but Java mode traps into an emulator when the processor encounters such a value.
The IBM PPE Vector/SIMD manual does not define operations for double-precision floating point, though IBM has published material implying certain double-precision performance numbers associated with the Cell PPE VMX technology.
Intrinsics
Compilers for Cell provide intrinsics to expose useful SPU instructions in C and C++. Instructions that differ only in the type of operand (such as a, ai, ah, ahi, fa, and dfa for addition) are typically represented by a single C/C++ intrinsic which selects the proper instruction based on the type of the operand.
Porting VMX code for SPU
There is a great body of code which has been developed for other IBM Power microprocessors that could potentially be adapted and recompiled to run on the SPU. This code base includes VMX code that runs under the PowerPC version of Apple's Mac OS X, where it is better known as Altivec. Depending on how many VMX specific features are involved, the adaptation involved can range anywhere from straightforward, to onerous, to completely impractical. The most important workloads for the SPU generally map quite well.
In some cases it is possible to port existing VMX code directly. If the VMX code is highly generic (makes few assumptions about the execution environment) the translation can be relatively straightforward. The two processors specify a different binary code format, so recompilation is required at a minimum. Even where instructions exist with the same behaviors, they do not have the same instruction names, so this must be mapped as well. IBM provides compiler intrinsics which take care of this mapping transparently as part of the development toolkit.
In many cases, however, a directly equivalent instruction does not exist. The workaround might be obvious or it might not. For example, if saturation behavior is required on the SPU, it can be coded by adding additional SPU instructions to accomplish this (with some loss of efficiency). At the other extreme, if Java floating-point semantics are required, this is almost impossible to achieve on the SPU processor. To achieve the same computation on the SPU might require that an entirely different algorithm be written from scratch.
The most important conceptual similarity between VMX and the SPU architecture is supporting the same vectorization model. For this reason, most algorithms adapted to Altivec will usually adapt successfully to the SPU architecture as well.
Local store exploitation
Transferring data between the local stores of different SPUs can have a large performance cost. The local stores of individual SPUs can be exploited using a variety of strategies.
Applications with high locality, such as dense matrix computations, represent an ideal workload class for the local stores in Cell BE.
Streaming computations can be efficiently accommodated using software pipelining of memory block transfers using a multi-buffering strategy.
The software cache offers a solution for random accesses.
More sophisticated applications can use multiple strategies for different data types.
References
The Cell Project at IBM Research
Optimizing Compiler for a CELL Processor
Using advanced compiler technology to exploit the performance of the Cell Broadband Engine architecture
Compiler Technology for Scalable Architectures
Cell BE architecture
Compilers
Vaporware | Cell software development | [
"Technology"
] | 888 | [
"Computer industry",
"Vaporware"
] |
5,498,670 | https://en.wikipedia.org/wiki/Power%20dividers%20and%20directional%20couplers | Power dividers (also power splitters and, when used in reverse, power combiners) and directional couplers are passive devices used mostly in the field of radio technology. They couple a defined amount of the electromagnetic power in a transmission line to a port enabling the signal to be used in another circuit. An essential feature of directional couplers is that they only couple power flowing in one direction. Power entering the output port is coupled to the isolated port but not to the coupled port. A directional coupler designed to split power equally between two ports is called a hybrid coupler.
Directional couplers are most frequently constructed from two coupled transmission lines set close enough together such that energy passing through one is coupled to the other. This technique is favoured at the microwave frequencies where transmission line designs are commonly used to implement many circuit elements. However, lumped component devices are also possible at lower frequencies, such as the audio frequencies encountered in telephony. Also at microwave frequencies, particularly the higher bands, waveguide designs can be used. Many of these waveguide couplers correspond to one of the conducting transmission line designs, but there are also types that are unique to waveguide.
Directional couplers and power dividers have many applications. These include providing a signal sample for measurement or monitoring, feedback, combining feeds to and from antennas, antenna beam forming, providing taps for cable distributed systems such as cable TV, and separating transmitted and received signals on telephone lines.
Notation and symbols
The symbols most often used for directional couplers are shown in figure 1. The symbol may have the coupling factor in dB marked on it. Directional couplers have four ports. Port 1 is the input port where power is applied. Port 3 is the coupled port where a portion of the power applied to port 1 appears. Port 2 is the transmitted port where the power from port 1 is outputted, less the portion that went to port 3. Directional couplers are frequently symmetrical so there also exists port 4, the isolated port. A portion of the power applied to port 2 will be coupled to port 4. However, the device is not normally used in this mode and port 4 is usually terminated with a matched load (typically 50 ohms). This termination can be internal to the device and port 4 is not accessible to the user. Effectively, this results in a 3-port device, hence the utility of the second symbol for directional couplers in figure 1.
Symbols of the form Pab in this article have the meaning "parameter P at port a due to an input at port b".
A symbol for power dividers is shown in figure 2. Power dividers and directional couplers are in all essentials the same class of device. Directional coupler tends to be used for 4-port devices that are only loosely coupled – that is, only a small fraction of the input power appears at the coupled port. Power divider is used for devices with tight coupling (commonly, a power divider will provide half the input power at each of its output ports – a 3 dB divider) and is usually considered a 3-port device.
Parameters
Common properties desired for all directional couplers are wide operational bandwidth, high directivity, and a good impedance match at all ports when the other ports are terminated in matched loads. Some of these, and other, general characteristics are discussed below.
Coupling factor
The coupling factor is defined as:

    C = 10 log10 (P3/P1) dB

where P1 is the input power at port 1 and P3 is the output power from the coupled port (see figure 1).
The coupling factor represents the primary property of a directional coupler. Coupling factor is a negative quantity, it cannot exceed 0 dB for a passive device, and in practice the coupling is not tighter than 3 dB, since more than this would result in more power output from the coupled port than power from the transmitted port – in effect their roles would be reversed. Although a negative quantity, the minus sign is frequently dropped (but still implied) in running text and diagrams and a few authors go so far as to define it as a positive quantity. Coupling is not constant, but varies with frequency. While different designs may reduce the variance, a perfectly flat coupler theoretically cannot be built. Directional couplers are specified in terms of the coupling accuracy at the frequency band center.
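As a small illustration of the definition above, the following Python sketch computes a coupling factor from port powers using the 10·log10(P3/P1) form; the sketch and its power values are hypothetical, not taken from the source.

```python
import math

def coupling_factor_db(p_in, p_coupled):
    """Coupling factor in dB, 10*log10(P3/P1); negative for a passive coupler."""
    return 10 * math.log10(p_coupled / p_in)

# Hypothetical example: 1 W into port 1, 10 mW emerging from the coupled port.
print(coupling_factor_db(1.0, 0.010))   # -> -20.0 dB
```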
Loss
The main line insertion loss from port 1 to port 2 (P1 – P2) is:
Insertion loss:
Part of this loss is due to some power going to the coupled port and is called coupling loss and is given by:
Coupling loss:
The insertion loss of an ideal directional coupler will consist entirely of the coupling loss. In a real directional coupler, however, the insertion loss consists of a combination of coupling loss, dielectric loss, conductor loss, and VSWR loss. Depending on the frequency range, coupling loss becomes less significant above coupling where the other losses constitute the majority of the total loss. The theoretical insertion loss (dB) vs coupling (dB) for a dissipationless coupler is shown in the graph of figure 3 and the table below.
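The trade-off described above – the main line of a dissipationless coupler loses only the power diverted to the coupled port – can be sketched numerically. This is an illustrative calculation rather than the data of figure 3, and it assumes coupling loss is the only main-line loss.

```python
import math

def insertion_loss_db(coupling_db):
    """Main-line loss of a dissipationless coupler, where the only loss
    is the fraction of input power diverted to the coupled port."""
    coupled_fraction = 10 ** (coupling_db / 10)      # e.g. -20 dB -> 0.01
    return 10 * math.log10(1 - coupled_fraction)     # remaining main-line power in dB

for c in (-3, -6, -10, -20, -30):
    print(f"coupling {c:>4} dB -> main-line loss {insertion_loss_db(c):7.3f} dB")
```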
Isolation
Isolation of a directional coupler can be defined as the difference in signal levels in dB between the input port and the isolated port when the two other ports are terminated by matched loads, or:
Isolation:
Isolation can also be defined between the two output ports. In this case, one of the output ports is used as the input; the other is considered the output port while the other two ports (input and isolated) are terminated by matched loads.
Consequently:
The isolation between the input and the isolated ports may be different from the isolation between the two output ports. For example, the isolation between ports 1 and 4 can be while the isolation between ports 2 and 3 can be a different value such as . Isolation can be estimated from the coupling plus return loss. The isolation should be as high as possible. In actual couplers the isolated port is never completely isolated. Some RF power will always be present. Waveguide directional couplers will have the best isolation.
Directivity
Directivity is directly related to isolation. It is defined as:
Directivity: D = 10 log10 (P3/P4) dB

where P3 is the output power from the coupled port and P4 is the power output from the isolated port.
The directivity should be as high as possible. The directivity is very high at the design frequency and is a more sensitive function of frequency because it depends on the cancellation of two wave components. Waveguide directional couplers will have the best directivity. Directivity is not directly measurable, and is calculated from the addition of the isolation and (negative) coupling measurements as:

    D = I + C

Note that if the positive definition of coupling is used, the formula results in:

    D = I − C
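Because directivity is derived rather than measured directly, a small helper following the relationship just stated (isolation plus the negative coupling figure) may make the bookkeeping clearer; the measurement values used here are hypothetical.

```python
def directivity_db(isolation_db, coupling_db):
    """Directivity from isolation and (negative) coupling, D = I + C.
    With coupling quoted as a positive number this becomes D = I - C."""
    return isolation_db + coupling_db

# Hypothetical measurements: 50 dB isolation and -20 dB coupling.
print(directivity_db(50.0, -20.0))  # -> 30.0 dB
```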
S-parameters
The S-matrix for an ideal (infinite isolation and perfectly matched) symmetrical directional coupler is given by,

    S = | 0  τ  κ  0 |
        | τ  0  0  κ |
        | κ  0  0  τ |
        | 0  κ  τ  0 |

where τ is the transmission coefficient and κ is the coupling coefficient.
In general, τ and κ are complex, frequency-dependent numbers. The zeroes on the matrix main diagonal are a consequence of perfect matching – power input to any port is not reflected back to that same port. The zeroes on the matrix antidiagonal are a consequence of perfect isolation between the input and isolated port.
For a passive lossless directional coupler, we must in addition have,

    |τ|² + |κ|² = 1,

since the power entering the input port must all leave by one of the other two ports.
Insertion loss is related to τ by:

    Insertion loss = 20 log10 |τ| dB

Coupling factor is related to κ by:

    Coupling factor = 20 log10 |κ| dB

Non-zero main diagonal entries are related to return loss, and non-zero antidiagonal entries are related to isolation by similar expressions.
Some authors define the port numbers with ports 3 and 4 interchanged. This results in a scattering matrix that is no longer all-zeroes on the antidiagonal.
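A numerical sketch of the ideal S-matrix discussed above, assuming a hypothetical coupling value and taking the coupling coefficient in quadrature with the transmission coefficient (as for a coupled-line coupler); it checks that the lossless condition makes the matrix unitary.

```python
import numpy as np

# Sketch: ideal symmetrical directional coupler with transmission tau and
# coupling kappa.  The numerical values are hypothetical; kappa is taken in
# quadrature with tau, as for a coupled-line coupler.
kappa = 1j * 10 ** (-20 / 20)                 # -20 dB coupling
tau = np.sqrt(1 - abs(kappa) ** 2)            # lossless: |tau|^2 + |kappa|^2 = 1

S = np.array([[0,     tau,   kappa, 0    ],
              [tau,   0,     0,     kappa],
              [kappa, 0,     0,     tau  ],
              [0,     kappa, tau,   0    ]])

# A lossless network has a unitary S-matrix.
print(np.allclose(S.conj().T @ S, np.eye(4)))            # True
print("coupling  =", 20 * np.log10(abs(kappa)), "dB")    # -20 dB
print("insertion =", 20 * np.log10(abs(tau)), "dB")      # about -0.04 dB
```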
Amplitude balance
This terminology defines the power difference in dB between the two output ports of a hybrid. In an ideal hybrid circuit, the difference should be 0 dB. However, in a practical device the amplitude balance is frequency dependent and departs from the ideal 0 dB difference.
Phase balance
The phase difference between the two output ports of a hybrid coupler should be 0°, 90°, or 180° depending on the type used. However, like amplitude balance, the phase difference is sensitive to the input frequency and typically will vary a few degrees.
Transmission line types
Directional couplers
Coupled transmission lines
The most common form of directional coupler is a pair of coupled transmission lines. They can be realised in a number of technologies including coaxial and the planar technologies (stripline and microstrip). An implementation in stripline is shown in figure 4 of a quarter-wavelength (λ/4) directional coupler. The power on the coupled line flows in the opposite direction to the power on the main line, hence the port arrangement is not the same as shown in figure 1, but the numbering remains the same. For this reason it is sometimes called a backward coupler.
The main line is the section between ports 1 and 2 and the coupled line is the section between ports 3 and 4. Since the directional coupler is a linear device, the notations on figure 1 are arbitrary. Any port can be the input, (an example is seen in figure 20) which will result in the directly connected port being the transmitted port, the adjacent port being the coupled port, and the diagonal port being the isolated port. On some directional couplers, the main line is designed for high power operation (large connectors), while the coupled port may use a small connector, such as an SMA connector. The internal load power rating may also limit operation on the coupled line.
Accuracy of coupling factor depends on the dimensional tolerances for the spacing of the two coupled lines. For planar printed technologies this comes down to the resolution of the printing process which determines the minimum track width that can be produced and also puts a limit on how close the lines can be placed to each other. This becomes a problem when very tight coupling is required and couplers often use a different design. However, tightly coupled lines can be produced in air stripline which also permits manufacture by printed planar technology. In this design the two lines are printed on opposite sides of the dielectric rather than side by side. The coupling of the two lines across their width is much greater than the coupling when they are edge-on to each other.
The λ/4 coupled-line design is good for coaxial and stripline implementations but does not work so well in the now popular microstrip format, although designs do exist. The reason for this is that microstrip is not a homogeneous medium – there are two different mediums above and below the transmission strip. This leads to transmission modes other than the usual TEM mode found in conductive circuits. The propagation velocities of even and odd modes are different leading to signal dispersion. A better solution for microstrip is a coupled line much shorter than λ/4, shown in figure 5, but this has the disadvantage of a coupling factor which rises noticeably with frequency. A variation of this design sometimes encountered has the coupled line a higher impedance than the main line such as shown in figure 6. This design is advantageous where the coupler is being fed to a detector for power monitoring. The higher impedance line results in a higher RF voltage for a given main line power making the work of the detector diode easier.
The frequency range specified by manufacturers is that of the coupled line. The main line response is much wider: for instance a coupler specified as might have a main line which could operate at . The coupled response is periodic with frequency. For example, a λ/4 coupled-line coupler will have responses at nλ/4 where n is an odd integer. This preferred response gets obvious when a short impulse on the main line is followed through the coupler. When the impulse on the main line reaches the coupled line a signal of the same polarity is induced on the coupled line similar to the response of an RC-high-pass. This leads to two non-inverted pulses on the coupled line that travel in opposite direction to each other. When the pulse on the main line leaves the coupled line an inverted signal is induced on the coupled line, triggering two inverted impulses that travel in opposite direction to each other. Both impulses on the coupled line that go in the same direction as the pulse on the main line are of opposite polarity. They cancel each other so there is no response on the exit of the coupled line in forward direction. This is the decoupled port. The pulses on the coupled line that travel in the opposite direction to the pulse on the main line are also of opposite polarity to each other but the second impulse is delayed by twice the delay of the parallel line. For a λ/4 coupled-line the total delay length is λ/2 so the second signal is inverted and this gives a maximum response on the coupled port.
A single λ/4 coupled section is good for bandwidths of less than an octave. To achieve greater bandwidths multiple λ/4 coupling sections are used. The design of such couplers proceeds in much the same way as the design of distributed-element filters. The sections of the coupler are treated as being sections of a filter, and by adjusting the coupling factor of each section the coupled port can be made to have any of the classic filter responses such as maximally flat (Butterworth filter), equal-ripple (Cauer filter), or a specified-ripple (Chebychev filter) response. Ripple is the maximum variation in output of the coupled port in its passband, usually quoted as plus or minus a value in dB from the nominal coupling factor.
It can be shown that coupled-line directional couplers have purely real and purely imaginary at all frequencies. This leads to a simplification of the S-matrix and the result that the coupled port is always in quadrature phase (90°) with the output port. Some applications make use of this phase difference. Letting , the ideal case of lossless operation simplifies to,
Branch-line coupler
The branch-line coupler consists of two parallel transmission lines physically coupled together with two or more branch lines between them. The branch lines are spaced λ/4 apart and represent sections of a multi-section filter design in the same way as the multiple sections of a coupled-line coupler except that here the coupling of each section is controlled with the impedance of the branch lines. The main and coupled line are of the system impedance. The more sections there are in the coupler, the higher is the ratio of impedances of the branch lines. High impedance lines have narrow tracks and this usually limits the design to three sections in planar formats due to manufacturing limitations. A similar limitation applies for coupling factors looser than ; low coupling also requires narrow tracks. Coupled lines are a better choice when loose coupling is required, but branch-line couplers are good for tight coupling and can be used for hybrids. Branch-line couplers usually do not have such a wide bandwidth as coupled lines. This style of coupler is good for implementing in high-power, air dielectric, solid bar formats as the rigid structure is easy to mechanically support.
Branch line couplers can be used as crossovers as an alternative to air bridges, which in some applications cause an unacceptable amount of coupling between the lines being crossed. An ideal branch-line crossover theoretically has no coupling between the two paths through it. The design is a 3-branch coupler equivalent to two 90° hybrid couplers connected in cascade. The result is effectively a 0 dB coupler. It will cross over the inputs to the diagonally opposite outputs with a phase delay of 90° in both lines.
Lange coupler
The construction of the Lange coupler is similar to the interdigital filter with paralleled lines interleaved to achieve the coupling. It is used for strong couplings in the range to .
Power dividers
The earliest transmission line power dividers were simple T-junctions. These suffer from very poor isolation between the output ports – a large part of the power reflected back from port 2 finds its way into port 3. It can be shown that it is not theoretically possible to simultaneously match all three ports of a passive, lossless three-port and poor isolation is unavoidable. It is, however, possible with four-ports and this is the fundamental reason why four-port devices are used to implement three-port power dividers: four-port devices can be designed so that power arriving at port 2 is split between port 1 and port 4 (which is terminated with a matching load) and none (in the ideal case) goes to port 3.
The term hybrid coupler originally applied to coupled-line directional couplers, that is, directional couplers in which the two outputs are each half the input power. This synonymously meant a quadrature coupler with outputs 90° out of phase. Now any matched 4-port with isolated arms and equal power division is called a hybrid or hybrid coupler. Other types can have different phase relationships. If 90°, it is a 90° hybrid, if 180°, a 180° hybrid and so on. In this article hybrid coupler without qualification means a coupled-line hybrid.
Wilkinson power divider
The Wilkinson power divider consists of two parallel uncoupled λ/4 transmission lines. The input is fed to both lines in parallel and the outputs are terminated with twice the system impedance bridged between them. The design can be realised in planar format but it has a more natural implementation in coax – in planar, the two lines have to be kept apart so that they do not couple but have to be brought together at their outputs so they can be terminated whereas in coax the lines can be run side-by-side relying on the coax outer conductors for screening. The Wilkinson power divider solves the matching problem of the simple T-junction: it has low VSWR at all ports and high isolation between output ports. The input and output impedances at each port are designed to be equal to the characteristic impedance of the microwave system. This is achieved by making the line impedance of the system impedance – for a system the Wilkinson lines are approximately
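A minimal sketch of the element values for the common equal-split Wilkinson design follows. The quarter-wave lines at √2 times the system impedance and the bridging resistor of twice the system impedance are the textbook relations for the equal-split case, assumed here rather than quoted from the text; the 50 Ω figure is only an example.

```python
import math

def wilkinson_equal_split(z0):
    """Element values for the classic equal-split Wilkinson divider:
    two quarter-wave lines of impedance sqrt(2)*Z0 and a 2*Z0 isolation
    resistor bridged between the output ports (textbook relations)."""
    return {"line impedance": math.sqrt(2) * z0,
            "isolation resistor": 2 * z0}

print(wilkinson_equal_split(50.0))
# {'line impedance': 70.71..., 'isolation resistor': 100.0}
```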
Hybrid coupler
Coupled-line directional couplers are described above. When the coupling is designed to be it is called a hybrid coupler. The S-matrix for an ideal, symmetric hybrid coupler reduces to;
The two output ports have a 90° phase difference (-i to −1) and so this is a 90° hybrid.
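A small numerical check of the quadrature behaviour, using one common sign convention for an ideal 3 dB hybrid; the exact phase reference varies between treatments, so the matrix below is illustrative rather than definitive.

```python
import numpy as np

# One common convention for an ideal 3 dB quadrature (90 degree) hybrid:
# equal-amplitude outputs at ports 2 and 3, 90 degrees apart in phase.
a = 1 / np.sqrt(2)
S = np.array([[0,        -1j * a,  -a,       0      ],
              [-1j * a,   0,        0,       -a     ],
              [-a,        0,        0,       -1j * a],
              [0,        -a,       -1j * a,   0     ]])

drive = np.array([1, 0, 0, 0])           # unit wave into port 1
out = S @ drive
print(abs(out) ** 2)                                      # half the power at ports 2 and 3
print(np.angle(out[2] / out[1], deg=True))                # +/-90 degrees: quadrature outputs
```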
Hybrid ring coupler
The hybrid ring coupler, also called the rat-race coupler, is a four-port directional coupler consisting of a 3λ/2 ring of transmission line with four lines at the intervals shown in figure 12. Power input at port 1 splits and travels both ways round the ring. At ports 2 and 3 the signal arrives in phase and adds whereas at port 4 it is out of phase and cancels. Ports 2 and 3 are in phase with each other, hence this is an example of a 0° hybrid. Figure 12 shows a planar implementation but this design can also be implemented in coax or waveguide. It is possible to produce a coupler with a coupling factor different from by making each λ/4 section of the ring alternately low and high impedance but for a coupler the entire ring is made of the port impedances – for a design the ring would be approximately .
The S-matrix for this hybrid is given by;
The hybrid ring is not symmetric on its ports; choosing a different port as the input does not necessarily produce the same results. With port 1 or port 3 as the input the hybrid ring is a 0° hybrid as stated. However using port 2 or port 4 as the input results in a 180° hybrid. This fact leads to another useful application of the hybrid ring: it can be used to produce sum (Σ) and difference (Δ) signals from two input signals as shown in figure 12. With inputs to ports 2 and 3, the Σ signal appears at port 1 and the Δ signal appears at port 4.
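The sum-and-difference behaviour can be checked numerically with one standard form of the ideal hybrid-ring S-matrix; the overall phase factor and sign placement depend on the chosen reference planes, so this form is illustrative, and the input amplitudes are hypothetical.

```python
import numpy as np

# One standard form of the ideal hybrid-ring (rat-race) S-matrix; the overall
# phase factor and the placement of the minus sign vary with reference planes.
S = (-1j / np.sqrt(2)) * np.array([[0,  1, 1,  0],
                                   [1,  0, 0, -1],
                                   [1,  0, 0,  1],
                                   [0, -1, 1,  0]])

a2, a3 = 0.7, 0.3                      # hypothetical coherent inputs at ports 2 and 3
drive = np.array([0, a2, a3, 0])
out = S @ drive

print(out[0])   # proportional to a2 + a3  (sum at port 1)
print(out[3])   # proportional to a3 - a2  (difference at port 4)
```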
Multiple output dividers
A typical power divider is shown in figure 13. Ideally, input power would be divided equally between the output ports. Dividers are made up of multiple couplers and, like couplers, may be reversed and used as multiplexers. The drawback is that for a four channel multiplexer, the output consists of only 1/4 the power from each, and is relatively inefficient. The reason for this is that at each combiner half the input power goes to port 4 and is dissipated in the termination load. If the two inputs were coherent the phases could be so arranged that cancellation occurred at port 4 and then all the power would go to port 1. However, multiplexer inputs are usually from entirely independent sources and therefore not coherent. Lossless multiplexing can only be done with filter networks.
Waveguide types
Waveguide directional couplers
Waveguide branch-line coupler
The branch-line coupler described above can also be implemented in waveguide.
Bethe-hole directional coupler
One of the most common, and simplest, waveguide directional couplers is the Bethe-hole directional coupler. This consists of two parallel waveguides, one stacked on top of the other, with a hole between them. Some of the power from one guide is launched through the hole into the other. The Bethe-hole coupler is another example of a backward coupler.
The concept of the Bethe-hole coupler can be extended by providing multiple holes. The holes are spaced λ/4 apart. The design of such couplers has parallels with the multiple section coupled transmission lines. Using multiple holes allows the bandwidth to be extended by designing the sections as a Butterworth, Chebyshev, or some other filter class. The hole size is chosen to give the desired coupling for each section of the filter. Design criteria are to achieve a substantially flat coupling together with high directivity over the desired band.
Riblet short-slot coupler
The Riblet short-slot coupler is two waveguides side-by-side with the side-wall in common instead of the long side as in the Bethe-hole coupler. A slot is cut in the sidewall to allow coupling. This design is frequently used to produce a coupler.
Schwinger reversed-phase coupler
The Schwinger reversed-phase coupler is another design using parallel waveguides, this time the long side of one is common with the short side-wall of the other. Two off-centre slots are cut between the waveguides spaced λ/4 apart. The Schwinger is a backward coupler. This design has the advantage of a substantially flat directivity response and the disadvantage of a strongly frequency-dependent coupling compared to the Bethe-hole coupler, which has little variation in coupling factor.
Moreno crossed-guide coupler
The Moreno crossed-guide coupler has two waveguides stacked one on top of the other like the Bethe-hole coupler but at right angles to each other instead of parallel. Two off-centre holes, usually cross-shaped are cut on the diagonal between the waveguides a distance apart. The Moreno coupler is good for tight coupling applications. It is a compromise between the properties of the Bethe-hole and Schwinger couplers with both coupling and directivity varying with frequency.
Waveguide power dividers
Waveguide hybrid ring
The hybrid ring discussed above can also be implemented in waveguide.
Magic tee
Coherent power division was first accomplished by means of simple Tee junctions. At microwave frequencies, waveguide tees have two possible forms – the E-plane and H-plane. These two junctions split power equally, but because of the different field configurations at the junction, the electric fields at the output arms are in phase for the H-plane tee and are 180° out of phase for the E-plane tee. The combination of these two tees to form a hybrid tee is known as the magic tee. The magic tee is a four-port component which can perform the vector sum (Σ) and difference (Δ) of two coherent microwave signals.
Discrete element types
Hybrid transformer
The standard 3 dB hybrid transformer is shown in figure 16. Power at port 1 is split equally between ports 2 and 3, but in antiphase to each other. The hybrid transformer is therefore a 180° hybrid. The centre tap is usually terminated internally, but it is possible to bring it out as port 4, in which case the hybrid can be used as a sum and difference hybrid. However, port 4 presents a different impedance from the other ports and will require an additional transformer for impedance conversion if this port is to be used at the same system impedance.
Hybrid transformers are commonly used in telecommunications for 2 to 4 wire conversion. Telephone handsets include such a converter to convert the 2-wire line to the 4 wires from the earpiece and mouthpiece.
Cross-connected transformers
For lower frequencies a compact broadband implementation by means of RF transformers is possible. In figure 17 a circuit is shown which is meant for weak coupling and can be understood along these lines: a signal comes in on one line pair. One transformer reduces the voltage of the signal, the other reduces the current. Therefore, the impedance is matched. The same argument holds for every other direction of a signal through the coupler. The relative sign of the induced voltage and current determines the direction of the outgoing signal.
The coupling in decibels is given by:
C = 20 log10(n) dB
where n is the secondary to primary turns ratio.
For a 3 dB coupling, that is, equal splitting of the signal between the transmitted port and the coupled port, the isolated port is terminated in twice the characteristic impedance of the system. A power divider based on this circuit has the two outputs 180° out of phase with each other, compared to λ/4 coupled lines, which have a 90° phase relationship.
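The following sketch simply evaluates the 20·log10(n) relation quoted above (so it inherits that relation as an assumption): a large turns ratio gives the weak coupling the circuit is intended for, while n = √2 corresponds to the 3 dB equal-split case.

```python
import math

def coupling_db(turns_ratio: float) -> float:
    """Coupling factor in dB for the cross-connected transformer coupler,
    assuming the C = 20*log10(n) relation quoted above."""
    return 20.0 * math.log10(turns_ratio)

# A large turns ratio gives the weak coupling the circuit is meant for:
print(coupling_db(10))                        # 20 dB coupler
# Equal (3 dB) splitting corresponds to n = sqrt(2):
print(round(coupling_db(math.sqrt(2)), 2))    # ~3.01 dB
```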
Resistive tee
A simple tee circuit of resistors can be used as a power divider as shown in figure 18. This circuit can also be implemented as a delta circuit by applying the Y-Δ transform. The delta form uses resistors that are equal to the system impedance. This can be advantageous because precision resistors of the value of the system impedance are always available for most system nominal impedances. The tee circuit has the benefits of simplicity, low cost, and intrinsically wide bandwidth. It has two major drawbacks. First, the circuit dissipates power since it is resistive: an equal split results in 6 dB of insertion loss instead of 3 dB. Second, the circuit has zero directivity, leading to very poor isolation between the output ports.
The insertion loss is not such a problem for an unequal split of power: a tap that couples only a small fraction of the power to port 3 adds very little insertion loss at port 2. Isolation can be improved at the expense of insertion loss at both output ports by replacing the output resistors with T pads. The isolation improvement is greater than the added insertion loss.
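A minimal sketch of where the 6 dB figure for the equal-split resistive tee comes from: with three arms of Z0/3 and every port terminated in Z0, the input is matched and the voltage reaching each output load is half the input voltage, i.e. one quarter of the power.

```python
import math

# Equal-split resistive tee in a matched system: each arm is Z0/3 and
# every port is terminated in Z0 (a sketch, not tied to any figure).
Z0 = 50.0
arm = Z0 / 3.0

# Impedance seen from the centre node towards one terminated output arm:
branch = arm + Z0
# Two such branches in parallel load the input arm:
parallel = branch * branch / (branch + branch)
assert abs((arm + parallel) - Z0) < 1e-9      # input port is matched to Z0

# Voltage transfer from the input port terminal to one output load:
v_centre = parallel / (arm + parallel)        # divider: input arm vs. centre
v_out = v_centre * Z0 / (arm + Z0)            # divider: output arm vs. its load
loss_db = -20 * math.log10(v_out)
print(round(loss_db, 2))   # ~6.02 dB, versus 3.01 dB for a lossless split
```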
6 dB resistive bridge hybrid
A true hybrid divider/coupler with, theoretically, infinite isolation and directivity can be made from a resistive bridge circuit. Like the tee circuit, the bridge has 6 dB of insertion loss. It has the disadvantage that it cannot be used with unbalanced circuits without the addition of transformers; however, it is ideal for balanced telecommunication lines if the insertion loss is not an issue. The resistors in the bridge which represent ports are not usually part of the device (with the exception of port 4, which may well be left permanently terminated internally), these being provided by the line terminations. The device thus consists essentially of two resistors (plus the port 4 termination).
Applications
Monitoring
The coupled output from the directional coupler can be used to monitor frequency and power level on the signal without interrupting the main power flow in the system (except for a power reduction – see figure 3).
Making use of isolation
If isolation is high, directional couplers are good for combining signals to feed a single line to a receiver for two-tone receiver tests. In figure 20, one signal enters port P3 and one enters port P2, while both exit port P1. The signal from port P3 to port P1 suffers the coupling loss, while the signal from port P2 to port P1 suffers only the much smaller mainline (through-path) loss. The internal load on the isolated port dissipates the signal losses from port P3 and port P2. If the isolators in figure 20 are neglected, the isolation measurement (port P2 to port P3) determines the amount of power from signal generator F2 that will be injected into signal generator F1. As the injection level increases, it may cause modulation of signal generator F1, or even injection phase locking. Because of the symmetry of the directional coupler, the reverse injection will happen with the same possible modulation problems of signal generator F2 by F1. Therefore, the isolators are used in figure 20 to effectively increase the isolation (or directivity) of the directional coupler. Consequently, the injection loss will be the isolation of the directional coupler plus the reverse isolation of the isolator.
Hybrids
Applications of the hybrid include monopulse comparators, mixers, power combiners, dividers, modulators, and phased array radar antenna systems. Both in-phase devices (such as the Wilkinson divider) and quadrature (90°) hybrid couplers may be used for coherent power divider applications. An example of quadrature hybrids being used in a coherent power combiner application is given in the next section.
An inexpensive version of the power divider is used in the home to divide cable TV or over-the-air TV signals to multiple TV sets and other devices. Multiport splitters with more than two output ports usually consist internally of a number of cascaded couplers. Domestic broadband internet service can be provided by cable TV companies (cable internet). The domestic user's internet cable modem is connected to one port of the splitter.
Power combiners
Since hybrid circuits are bi-directional, they can be used to coherently combine power as well as splitting it. In figure 21, an example is shown of a signal split up to feed multiple low power amplifiers, then recombined to feed a single antenna with high power.
The phases of the inputs to each power combiner are arranged such that the two inputs are 90° out of phase with each other. Since the coupled port of a hybrid combiner is 90° out of phase with the transmitted port, this causes the powers to add at the output of the combiner and to cancel at the isolated port: a representative example from figure 21 is shown in figure 22. Note that there is an additional fixed 90° phase shift to both ports at each combiner/divider which is not shown in the diagrams for simplicity. Applying in-phase power to both input ports would not get the desired result: the quadrature sum of the two inputs would appear at both output ports – that is half the total power out of each. This approach allows the use of numerous less expensive and lower-power amplifiers in the circuitry instead of a single high-power TWT. Yet another approach is to have each solid state amplifier (SSA) feed an antenna and let the power be combined in space or be used to feed a lens attached to an antenna.
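The quadrature-combining argument above can be checked numerically. The sketch below models an ideal 90° hybrid with one common convention (through path 1/√2, coupled path −j/√2; real devices and other texts may differ by fixed phase offsets): driving the two inputs 90° apart delivers essentially all the power to one output and none to the isolated port.

```python
SQRT2 = 2 ** 0.5

def hybrid_combine(a1: complex, a2: complex):
    """Ideal 90-degree hybrid used as a combiner (one common sign
    convention; real devices may differ by an overall phase)."""
    through = 1 / SQRT2
    coupled = -1j / SQRT2
    b_sum = through * a1 + coupled * a2   # combined output port
    b_iso = coupled * a1 + through * a2   # isolated (terminated) port
    return b_sum, b_iso

# Two equal-amplitude inputs, the second leading the first by 90 degrees:
b_sum, b_iso = hybrid_combine(1.0, 1j)
print(abs(b_sum) ** 2, abs(b_iso) ** 2)   # ~2.0 and ~0.0: all power combines
```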
Phase difference
The phase properties of a 90° hybrid coupler can be used to great advantage in microwave circuits. For example, in a balanced microwave amplifier the two input stages are fed through a hybrid coupler. The FET device normally has a very poor match and reflects much of the incident energy. However, since the devices are essentially identical the reflection coefficients from each device are equal. The reflected voltage from the FETs are in phase at the isolated port and are 180° different at the input port. Therefore, all of the reflected power from the FETs goes to the load at the isolated port and no power goes to the input port. This results in a good input match (low VSWR).
If phase-matched lines are used for an antenna input to a 180° hybrid coupler as shown in figure 23, a null will occur directly between the antennas. To receive a signal in that position, one would have to either change the hybrid type or line length. To reject a signal from a given direction, or create the difference pattern for a monopulse radar, this is a good approach.
Phase-difference couplers can be used to create beam tilt in a VHF FM radio station, by delaying the phase to the lower elements of an antenna array. More generally, phase-difference couplers, together with fixed phase delays and antenna arrays, are used in beam-forming networks such as the Butler matrix, to create a radio beam in any prescribed direction.
See also
Star coupler
Beam splitter
References
Bibliography
Stephen J. Bigelow, Joseph J. Carr, Steve Winder, Understanding telephone electronics Newnes, 2001 .
Geoff H. Bryant, Principles of Microwave Measurements, Institution of Electrical Engineers, 1993 .
Robert J. Chapuis, Amos E. Joel, 100 Years of Telephone Switching (1878–1978): Electronics, computers, and telephone switching (1960–1985), IOS Press, 2003 .
Walter Y. Chen, Home Networking Basis, Prentice Hall Professional, 2003 .
R. Comitangelo, D. Minervini, B. Piovano, "Beam forming networks of optimum size and compactness for multibeam antennas at 900 MHz", IEEE Antennas and Propagation Society International Symposium 1997, vol. 4, pp. 2127-2130, 1997.
Stephen A. Dyer, Survey of instrumentation and measurement Wiley-IEEE, 2001 .
Kyōhei Fujimoto, Mobile Antenna Systems Handbook, Artech House, 2008 .
Preston Gralla, How the Internet Works, Que Publishing, 1998 .
Ian Hickman, Practical Radio-frequency Handbook, Newnes, 2006 .
Apinya Innok, Peerapong Uthansakul, Monthippa Uthansakul, "Angular beamforming technique for MIMO beamforming system", International Journal of Antennas and Propagation, vol. 2012, iss. 11, December 2012.
Thomas Koryu Ishii, Handbook of Microwave Technology: Components and devices, Academic Press, 1995 .
Y. T. Lo, S. W. Lee, Antenna Handbook: Applications, Springer, 1993 .
Matthaei, George L.; Young, Leo and Jones, E. M. T. Microwave Filters, Impedance-Matching Networks, and Coupling Structures McGraw-Hill 1964
D. Morgan, A Handbook for EMC Testing and Measurement, IET, 1994 .
Antti V. Räisänen, Arto Lehto, Radio engineering for wireless communication and sensor applications, Artech House, 2003 .
K.R. Reddy, S. B. Badami, V. Balasubramanian, Oscillations And Waves, Universities Press, 1994 .
Peter Vizmuller, RF design guide: systems, circuits, and equations, Volume 1, Artech House, 1995 .
A. Franzen, Impulsantwort eines Leitungskopplers, CQ DL, vol. 7, pp. 28-31, 2020.
Radio electronics
Microwave technology
Distributed element circuits | Power dividers and directional couplers | [
"Engineering"
] | 7,229 | [
"Radio electronics",
"Electronic engineering",
"Distributed element circuits"
] |
5,498,706 | https://en.wikipedia.org/wiki/Current%20density | In electromagnetism, current density is the amount of charge per unit time that flows through a unit area of a chosen cross section. The current density vector is defined as a vector whose magnitude is the electric current per cross-sectional area at a given point in space, its direction being that of the motion of the positive charges at this point. In SI base units, the electric current density is measured in amperes per square metre.
Definition
Assume that ΔA (SI unit: m2) is a small surface centred at a given point M and orthogonal to the motion of the charges at M. If ΔI (SI unit: A) is the electric current flowing through ΔA, then the electric current density j at M is given by the limit:
j = lim(ΔA → 0) ΔI / ΔA,
with the surface ΔA remaining centred at M and orthogonal to the motion of the charges during the limit process.
The current density vector j is the vector whose magnitude is the electric current density, and whose direction is the same as the motion of the positive charges at M.
At a given time t, if v is the velocity of the charges at M, and dA is an infinitesimal surface centred at M and orthogonal to v, then during an amount of time dt, only the charge contained in the volume formed by dA and v dt will flow through dA. This charge is equal to ρ ||v|| dt dA, where ρ is the charge density at M. The electric current is I = ρ ||v|| dA, and it follows that the current density vector is the vector normal to dA (i.e. parallel to v) and of magnitude
j = ρ ||v||
The surface integral of j over a surface S, followed by an integral over the time duration t1 to t2, gives the total amount of charge flowing through the surface in that time (t2 − t1):
q = ∫(t1 to t2) ∬(S) j · dA dt
More concisely, this is the integral of the flux of j across S between t1 and t2.
The area required to calculate the flux is real or imaginary, flat or curved, either as a cross-sectional area or a surface. For example, for charge carriers passing through an electrical conductor, the area is the cross-section of the conductor, at the section considered.
The vector area is a combination of the magnitude of the area through which the charge carriers pass, A, and a unit vector normal to the area, n̂. The relation is A = A n̂.
The differential vector area dA = dA n̂ similarly follows from the definition given above.
If the current density j passes through the area at an angle θ to the area normal n̂, then
j · n̂ = j cos θ,
where · is the dot product of the unit vectors. That is, the component of current density passing through the surface (i.e. normal to it) is j cos θ, while the component of current density passing tangential to the area is j sin θ, but there is no current density actually passing through the area in the tangential direction. The only component of current density passing normal to the area is the cosine component.
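A short numeric sketch of the cosine rule just described, with arbitrary illustrative values: only the component of a uniform current density along the patch normal contributes to the current through the patch.

```python
import numpy as np

j = np.array([0.0, 0.0, 3.0e6])   # uniform current density, A/m^2 (arbitrary value)
area = 2.0e-6                     # flat patch area, m^2 (arbitrary value)
theta = np.radians(30.0)          # tilt of the patch normal away from j
normal = np.array([np.sin(theta), 0.0, np.cos(theta)])   # unit normal vector

I = j.dot(normal) * area          # only the cosine (normal) component counts
print(I)                                           # ~5.196 A
print(np.linalg.norm(j) * area * np.cos(theta))    # same number, |j| A cos(theta)
```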
Importance
Current density is important to the design of electrical and electronic systems.
Circuit performance depends strongly upon the designed current level, and the current density then is determined by the dimensions of the conducting elements. For example, as integrated circuits are reduced in size, despite the lower current demanded by smaller devices, there is a trend toward higher current densities to achieve higher device numbers in ever smaller chip areas. See Moore's law.
At high frequencies, the conducting region in a wire becomes confined near its surface which increases the current density in this region. This is known as the skin effect.
High current densities have undesirable consequences. Most electrical conductors have a finite, positive resistance, making them dissipate power in the form of heat. The current density must be kept sufficiently low to prevent the conductor from melting or burning up, the insulating material failing, or the desired electrical properties changing. At high current densities the material forming the interconnections actually moves, a phenomenon called electromigration. In superconductors excessive current density may generate a strong enough magnetic field to cause spontaneous loss of the superconductive property.
The analysis and observation of current density also is used to probe the physics underlying the nature of solids, including not only metals, but also semiconductors and insulators. An elaborate theoretical formalism has developed to explain many fundamental observations.
The current density is an important parameter in Ampère's circuital law (one of Maxwell's equations), which relates current density to magnetic field.
In special relativity theory, charge and current are combined into a 4-vector.
Calculation of current densities in matter
Free currents
Charge carriers which are free to move constitute a free current density, given by expressions such as those in this section.
Electric current is a coarse, average quantity that tells what is happening in an entire wire. At position r at time t, the distribution of charge flowing is described by the current density:
j(r, t) = ρ(r, t) v_d(r, t)
where
j(r, t) is the current density vector;
v_d(r, t) is the particles' average drift velocity (SI unit: m∙s−1);
ρ(r, t) = q n(r, t) is the charge density (SI unit: coulombs per cubic metre), in which
n(r, t) is the number of particles per unit volume ("number density") (SI unit: m−3);
q is the charge of the individual particles with density n (SI unit: coulombs).
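As an illustration of j = ρ v_d = n q v_d, the sketch below estimates the electron drift speed in a copper wire carrying 1 A through a 1 mm² cross-section; the free-electron density used for copper is a standard textbook figure and is an assumption here, not a value from this article.

```python
e = 1.602e-19        # elementary charge, C
n = 8.5e28           # free-electron density of copper, m^-3 (textbook value)
I = 1.0              # current, A
A = 1.0e-6           # cross-section, m^2 (1 mm^2)

j = I / A            # current density, A/m^2
v_drift = j / (n * e)
print(j, v_drift)    # 1e6 A/m^2 and roughly 7e-5 m/s
```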
A common approximation to the current density assumes the current simply is proportional to the electric field, as expressed by:
j = σ E
where E is the electric field and σ is the electrical conductivity.
Conductivity σ is the reciprocal (inverse) of electrical resistivity and has the SI units of siemens per metre (S⋅m−1), and E has the SI units of newtons per coulomb (N⋅C−1) or, equivalently, volts per metre (V⋅m−1).
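A minimal sketch of j = σE in the same spirit: the electric field needed to drive 1 A per mm² through copper, using a handbook conductivity of about 5.96 × 10⁷ S/m (an assumed value, not quoted in this article).

```python
sigma_cu = 5.96e7      # conductivity of copper, S/m (handbook value, assumed)
j = 1.0e6              # target current density, A/m^2 (1 A per mm^2)

E = j / sigma_cu       # electric field required, V/m
print(E)               # ~1.7e-2 V/m, i.e. roughly 17 mV per metre of wire
```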
A more fundamental approach to calculation of current density is based upon:
j(r, t) = ∫ d³r′ ∫(−∞ to t) σ(r − r′, t − t′) E(r′, t′) dt′
indicating the lag in response by the time dependence of σ, and the non-local nature of the response to the field by the spatial dependence of σ, both calculated in principle from an underlying microscopic analysis, for example, in the case of small enough fields, from the linear response function for the conductive behaviour in the material. See, for example, Giuliani & Vignale (2005) or Rammer (2007). The integral extends over the entire past history up to the present time.
The above conductivity and its associated current density reflect the fundamental mechanisms underlying charge transport in the medium, both in time and over distance.
A Fourier transform in space and time then results in:
j(k, ω) = σ(k, ω) E(k, ω)
where σ(k, ω) is now a complex function.
In many materials, for example, in crystalline materials, the conductivity is a tensor, and the current is not necessarily in the same direction as the applied field. Aside from the material properties themselves, the application of magnetic fields can alter conductive behaviour.
Polarization and magnetization currents
Currents arise in materials when there is a non-uniform distribution of charge.
In dielectric materials, there is a current density corresponding to the net movement of electric dipole moments per unit volume, i.e. the polarization P:
j_P = ∂P/∂t
Similarly with magnetic materials, circulations of the magnetic dipole moments per unit volume, i.e. the magnetization M, lead to magnetization currents:
j_M = ∇ × M
Together, these terms add up to form the bound current density in the material (resultant current due to movements of electric and magnetic dipole moments per unit volume):
j_b = j_P + j_M
Total current in materials
The total current is simply the sum of the free and bound currents:
j = j_f + j_b
Displacement current
There is also a displacement current corresponding to the time-varying electric displacement field D:
j_D = ∂D/∂t
which is an important term in Ampere's circuital law, one of Maxwell's equations, since absence of this term would not predict electromagnetic waves to propagate, or the time evolution of electric fields in general.
Continuity equation
Since charge is conserved, current density must satisfy a continuity equation. Here is a derivation from first principles.
The net flow out of some volume V (which can have an arbitrary shape but is fixed for the calculation) must equal the net change in charge held inside the volume:
∮(S) j · dA = −(d/dt) ∫(V) ρ dV
where ρ is the charge density, and dA is a surface element of the surface S enclosing the volume V. The surface integral on the left expresses the current outflow from the volume, and the negatively signed volume integral on the right expresses the decrease in the total charge inside the volume. From the divergence theorem:
∮(S) j · dA = ∫(V) (∇ · j) dV
Hence:
∫(V) (∇ · j + ∂ρ/∂t) dV = 0
This relation is valid for any volume, independent of size or location, which implies that:
∇ · j + ∂ρ/∂t = 0,
and this relation is called the continuity equation.
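The sketch below checks the one-dimensional form of the continuity equation, ∂ρ/∂t = −∂j/∂x, numerically: updating an arbitrary charge profile with a flux-form finite-difference step leaves the total charge unchanged up to round-off (the profile, velocity and grid are arbitrary illustrations).

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200)
dx = x[1] - x[0]
dt = 1e-4

rho = np.exp(-((x - 0.5) / 0.1) ** 2)     # arbitrary initial charge density
v = 0.3                                    # constant drift velocity
total_before = rho.sum() * dx

for _ in range(1000):
    j = rho * v                            # 1-D current density, j = rho * v
    # conservative (flux-form) update of d(rho)/dt = -d(j)/dx, periodic ends
    flux = np.concatenate(([j[-1]], j))
    rho -= dt / dx * np.diff(flux)

total_after = rho.sum() * dx
print(total_before, total_after)           # equal up to round-off: charge conserved
```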
In practice
In electrical wiring, the maximum current density (for a given temperature rating) can vary from 4 A⋅mm−2 for a wire with no air circulation around it, to over 6 A⋅mm−2 for a wire in free air. Regulations for building wiring list the maximum allowed current of each size of cable in differing conditions. For compact designs, such as windings of SMPS transformers, the value might be as low as 2 A⋅mm−2. If the wire is carrying high-frequency alternating currents, the skin effect may affect the distribution of the current across the section by concentrating the current on the surface of the conductor. In transformers designed for high frequencies, loss is reduced if Litz wire is used for the windings. This is made of multiple isolated wires in parallel with a diameter twice the skin depth. The isolated strands are twisted together to increase the total skin area and to reduce the resistance due to skin effects.
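A small helper illustrating the rule-of-thumb figures above: for a given permissible current density, the allowed current scales with the conductor cross-section. The wire diameter below is only an example; actual ratings should come from the applicable wiring regulations.

```python
import math

def max_current(diameter_mm: float, j_max_A_per_mm2: float) -> float:
    """Allowed current for a round conductor at a given current density."""
    area_mm2 = math.pi * (diameter_mm / 2) ** 2
    return j_max_A_per_mm2 * area_mm2

# A nominal 1.5 mm^2 conductor (~1.38 mm diameter) at the two limits quoted:
print(round(max_current(1.38, 4), 1))   # ~6 A with no air circulation
print(round(max_current(1.38, 6), 1))   # ~9 A in free air
```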
For the top and bottom layers of printed circuit boards, the maximum current density can be as high as 35 A⋅mm−2 with a copper thickness of 35 μm. Inner layers cannot dissipate as much heat as outer layers; designers of circuit boards avoid putting high-current traces on inner layers.
In the semiconductor field, the maximum current densities for different elements are given by the manufacturer. Exceeding those limits raises the following problems:
The Joule effect which increases the temperature of the component.
The electromigration effect which will erode the interconnection and eventually cause an open circuit.
The slow diffusion effect which, if exposed to high temperatures continuously, will move metallic ions and dopants away from where they should be. This effect is also synonymous with ageing.
The following table gives an idea of the maximum current density for various materials.
Even if manufacturers add some margin to their numbers, it is recommended to, at least, double the calculated section to improve the reliability, especially for high-quality electronics. One can also notice the importance of keeping electronic devices cool to avoid exposing them to electromigration and slow diffusion.
In biological organisms, ion channels regulate the flow of ions (for example, sodium, calcium, potassium) across the membrane in all cells. The membrane of a cell is assumed to act like a capacitor.
Current densities are usually expressed in pA⋅pF−1 (picoamperes per picofarad) (i.e., current divided by capacitance). Techniques exist to empirically measure capacitance and surface area of cells, which enables calculation of current densities for different cells. This enables researchers to compare ionic currents in cells of different sizes.
In gas discharge lamps, such as flashlamps, current density plays an important role in the output spectrum produced. Low current densities produce spectral line emission and tend to favour longer wavelengths. High current densities produce continuum emission and tend to favour shorter wavelengths. Low current densities for flash lamps are generally around 10 A⋅mm−2. High current densities can be more than 40 A⋅mm−2.
See also
Hall effect
Quantum Hall effect
Superconductivity
Electron mobility
Drift velocity
Effective mass
Electrical resistance
Sheet resistance
Speed of electricity
Electrical conduction
Green–Kubo relations
Green's function (many-body theory)
References
Electromagnetic quantities
Density
Area-specific quantities | Current density | [
"Physics",
"Mathematics"
] | 2,326 | [
"Electromagnetic quantities",
"Physical quantities",
"Area-specific quantities",
"Quantity",
"Mass",
"Density",
"Wikipedia categories named after physical quantities",
"Matter"
] |
5,498,814 | https://en.wikipedia.org/wiki/Bridge%20management%20system | A bridge management system (BMS) is a set of methodologies and procedures for managing information about bridges. Such system is capable of document and process data along the entire life cycle of the structure steps: project design, construction, monitoring, maintenance and end of operation.
First used in the literature in 1987, the acronym BMS is commonly used in structural engineering to refer to a single digital tool, or a combination of tools and software, that supports the documentation of every practice related to a single structure. Such a software architecture has to meet the needs of road asset managers interested in tracking the serviceability status of bridges through a workflow based mainly on four components: data inventory, cost and construction management, structural analysis and assessment, and maintenance planning. The implementation of a BMS is usually built on top of relational databases, geographic information systems (GIS) and building information modeling (BIM) platforms, also named bridge information modeling (BrIM), with photogrammetric and laser scanning processing software used to manage data collected during targeted inspections. The output of the whole procedure, as stated also in the national guidelines of several countries, usually consists of a prioritization of interventions on bridges classified into different risk levels according to the information collected and processed.
History
Since the late 1980s the structural health assessment and monitoring of bridges has represented a critical topic in the field of civil infrastructure management. In the 1990s, the Federal Highway Administration (FHWA) of the United States promoted and sponsored PONTIS and BRIDGEIT, two computerized platforms for viaduct inventory and monitoring named BMSs. In the following years, also outside the U.S., the growing need for organized and digitized road asset management led responsible national agencies to adopt increasingly complex solutions able to meet their objectives, such as building inventories and inspection databases, planning maintenance, repair and rehabilitation interventions in a systematic way, optimizing the allocation of financial resources, and increasing the safety of bridge users. Moreover, as of the 2020s, the occurrence of some significant bridge collapses and an increased sensitivity to the environmental impact of large structure management operations have led some national authorities, such as those of France and Italy, to issue national guidelines with detailed guidance for the development and adoption of multilevel BMSs to optimize bridge management.
System components
Researchers in the field of structural engineering have identified 4 main components for the implementation of a functional BMS:
Data inventory.
Cost and construction management.
Structural analysis and assessment.
Maintenance planning.
Data inventory
Data and information referring to each life cycle step of a bridge need to be collected and archived through a flexible approach, making it possible to update and access them efficiently. In commonly used BMSs, this goal is achieved by adopting database solutions that allow the documentation of data in different formats such as text, images, three-dimensional models and more. Indeed, the inventory usually includes technical drawings of the original project design, written reports from periodic in-situ inspections, numerical observation series recorded by installed sensors, but also geo-referenced data about the structure site as well as 3D scaled models that document the actual state of the bridge.
While the collection of historical documentation and project designs relies on the analogue and digital archives managed by road asset managers, the geometric data input implies the application of topographic techniques through dedicated surveys in the field. In particular, bridge inspections for the 3D reconstruction of a digital twin of the structure usually consist of survey campaigns using global navigation satellite system measurements, ground- and drone-based photogrammetry and laser scanning. Data management in this phase implies the use of geographic information systems, BIM and computer-aided design software, manipulating both 2D and 3D geo-referenced data. Resulting products include point clouds and meshes that serve as the basis for building information modeling processes. Bridge surveys can be repeated in different steps of the structure's life cycle, and their frequency depends on decision making, the prioritization of maintenance operations and national guidelines.
In addition to visual geomatic inspections, other nondestructive evaluation techniques are commonly adopted, allowing to collect data not limited to the accurate geometric reconstruction of the structure but also to the material conditions. In this case, the adoption of ground penetrating radar for detection of deterioration of the reinforcement in decks and infrared thermography for identification of delamination and degradation of bridge components is well documented in academic research and considered a complementary step to traditional visual inspection approaches.
Cost and construction management
An accurate implementation of a virtual digital twin or a BIM model of a structure has been considered the starting point for budget management and optimization since the early studies on BMSs. For example, it provides the opportunity to calculate the total cost of materials and specialized operators needed in the construction step, quantifying expenses in advance and consequently allowing better economic strategies to be adopted. Moreover, multi-temporal management of information referenced to specific portions of the bridge enables the definition of efficient timetables for material delivery planning, project progress monitoring and documentation, construction schedule improvement, and the coordination of workers and experts. In recent BMS applications, sustainability also plays a crucial role in the definition of procedures for cost optimization, adopting dedicated approaches such as life cycle assessment and the calculation of carbon footprint and energy consumption along the different phases of the bridge life cycle.
Structural analysis and assessment
Visual inspections often result in large amounts of data stored in the BMS inventory that serve as input for image-based processes for defect and damage detection. While traditional methods relied simply on human evaluation, computer vision techniques taking advantage of artificial intelligence and machine learning semi-automate the extraction of meaningful information from pictures taken during inspections. For example, recent applications of semantic segmentation allow the identification of elements affected by corrosion or other degradation phenomena, enabling experts to assign a grade of severity to the damage. Additional insights into the structure's condition are given by numerical simulations of fatigue behaviour with finite element method modeling. This is particularly valuable when data from in-depth detailed inspections or load tests are available, providing a rich information inventory also for simulations of stress behaviour and mechanics.
At a larger territorial scale, a similar grading approach is also applied to the evaluation of the whole road network in which the analysed structure is located. Such quantitative analyses are usually connected to the evaluation of road surface deformations with InSAR technologies or to the calculation and prediction of average daily traffic flow in GIS environments.
All the results coming from analysis, simulations and severity level classifications serve as input for the execution of the intervention prioritization, the core part of the maintenance planning component in a BMS framework.
Maintenance planning
The definition of operation schedules and more detailed inspections is a key function in the decision-making process of a BMS. Based on quantitative and qualitative data acquired during routine inspections and on the information processed in the structural analysis phase, BMS users need to identify priority interventions through a dedicated maintenance plan. This goal is achieved with the implementation of platforms and tools that enable stakeholders to explore data, results and observations and link them to detailed fact sheets reporting the health conditions of each structural element of the bridge.
Prioritization of interventions on single elements or on the whole structure is determined through a multi-criteria approach that considers the risk of defect or collapse. In particular, the process usually implies the computation of indexes quantifying hazard, vulnerability and exposure, from which a warning class is derived. Warning classes then serve as parameters for prioritizing the allocation of funds and operators for further detailed and more frequent monitoring of structures at risk. As a result, bridges whose structural integrity and serviceability are more affected are classified in higher warning classes, requiring targeted interventions. This operation is essential to determine whether special inspections with expert operators and specific tests (e.g. load tests) are required and whether any new or additional sensors (e.g. extensometers, accelerometers) need to be installed on the structures for continuous, targeted monitoring.
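A minimal sketch of the kind of multi-criteria prioritization described above. The composite score, weights and class thresholds here are invented purely for illustration; the real indexes and class boundaries are defined by the applicable national guidelines, not by this article.

```python
from dataclasses import dataclass

@dataclass
class BridgeAssessment:
    name: str
    hazard: float         # 0..1, e.g. from site seismicity/hydraulics
    vulnerability: float  # 0..1, e.g. from inspection defect grades
    exposure: float       # 0..1, e.g. from traffic volume / strategic role

def warning_class(b: BridgeAssessment) -> str:
    # Illustrative composite score; real guidelines combine the indexes
    # through their own (often non-numeric) decision tables.
    score = b.hazard * b.vulnerability * b.exposure
    if score < 0.05:
        return "low"
    if score < 0.15:
        return "medium-low"
    if score < 0.30:
        return "medium"
    if score < 0.50:
        return "medium-high"
    return "high"

bridges = [
    BridgeAssessment("Viaduct A", hazard=0.8, vulnerability=0.9, exposure=0.7),
    BridgeAssessment("Overpass B", hazard=0.3, vulnerability=0.4, exposure=0.5),
]
# Rank structures so the highest-risk ones are inspected first:
for b in sorted(bridges, key=lambda x: x.hazard * x.vulnerability * x.exposure,
                reverse=True):
    print(b.name, warning_class(b))
```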
National guidelines
In order to assess and quantify the health condition of the bridges located in their national territory, many countries have formulated a series of general indications and guidelines for the implementation of dedicated bridge management systems.
France
In 2019, the French Centre for Studies on Risks, the Environment, Mobility and Urban Planning (CEREMA), in collaboration with the French Institute of Science and Technology for Transport, Development and Networks, issued national guidelines proposing a multilevel methodology for the assessment and management of the risk of failure due to scour for bridges with foundations in water. The current version of the French guidelines refers only to bridge scour and hydraulic risk. The proposed methodology runs on four levels:
Summary analysis: qualitative risk analysis on a large scale and classification of the structure into three risk classes: low, medium, and high;
Simplified analysis: semi-quantitative analysis on bridges previously classified in the medium and high level of risk;
Detailed analysis: in-depth studies on high-risk structures with numerical modeling approaches
Risk management: identification of actions to improve the conditions and/or reduce the sensitivity of critical bridges.
Italy
Ensuring the safety and serviceability of road infrastructure has become an urgent matter in Italy, especially after the bridge collapses of the last decade. In response to the need for reliable and up-to-date information regarding bridge conditions, in 2020 the Italian Superior Council of Public Works developed the Guidelines on Risk Classification and Management. These guidelines establish a multi-level approach for documenting bridge characteristics, assessing their health through visual inspection and damage identification, and determining their risk classification based on hazard, exposure and vulnerability derived from the previous steps. Subsequently, depending on the assigned class, the number of investigation levels required to evaluate the structure's safety is determined. Road asset managers are then asked to establish and maintain a management system able to track interventions over time, documenting defects assessed on different portions of the bridge as well as environmental site conditions (hydraulics, geology, seismology). The guidelines identify six levels:
1. Collection of available data about bridge construction, accessing existing archives;
2. Visual inspection reports about structure geometry and bridge elements conditions;
3. Risk classification of the structure in one of the five attention classes, i.e., low, medium-low, medium, medium-high, and high;
4. Simplified safety assessment for bridges in medium or medium-high attention class;
5. Accurate safety assessment for bridges in the high attention class;
6. Resilience analysis at the network level. (Only drafted in the current version of the guidelines).
Examples
Below are examples of commonly used bridge management software:
inspectX is a web and tablet-based bridge management software designed by AssetIntel for efficient inspection, inventory, and maintenance of transportation infrastructure, featuring offline capabilities, compliance with SNBI standards, and integrated GIS tools.
Pontis, now known as AASHTOWare Bridge Management, a BMS software package sponsored by the U.S. Federal Highway Administration for the management of highway networks;
DANBRO+, computer-based BMS commonly used in Denmark;
SwissInspect, Swiss digital twin platform specialized in the management of civil infrastructures, mainly bridges;
INBEE, digital platform and mobile application that implement the Italian guidelines for bridge monitoring.
See also
Structural health monitoring
Management system
Digital twin
Structural engineering
Glossary of structural engineering
References
Bridges
Construction
Transportation engineering
Technology systems
Information systems
Management systems
Further reading | Bridge management system | [
"Technology",
"Engineering"
] | 2,282 | [
"Structural engineering",
"Systems engineering",
"Technology systems",
"Industrial engineering",
"Construction",
"Information systems",
"Information technology",
"Transportation engineering",
"Civil engineering",
"nan",
"Bridges"
] |
5,498,909 | https://en.wikipedia.org/wiki/Letterlike%20Symbols | Letterlike Symbols is a Unicode block containing 80 characters which are constructed mainly from the glyphs of one or more letters. In addition to this block, Unicode includes full styled mathematical alphabets, although Unicode does not explicitly categorize these characters as being "letterlike."
Symbols
Glyph variants
Variation selectors may be used to specify chancery (U+FE00) vs roundhand (U+FE01) forms, if the font supports them:
The remainder of the set is at Mathematical Alphanumeric Symbols.
Block
Emoji
The Letterlike Symbols block contains two emoji: U+2122 and U+2139.
The block has four standardized variants defined to specify emoji-style (U+FE0F VS16) or text presentation (U+FE0E VS15) for the two emoji, both of which default to a text presentation.
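A short Python sketch of the variation sequences described above: appending VS15 (U+FE0E) requests text presentation and VS16 (U+FE0F) requests emoji presentation for the block's two emoji; how they actually render depends on the font and platform.

```python
import unicodedata

TM, INFO = "\u2122", "\u2139"      # TRADE MARK SIGN, INFORMATION SOURCE
VS15, VS16 = "\ufe0e", "\ufe0f"    # text vs. emoji presentation selectors

for base in (TM, INFO):
    print(unicodedata.name(base))
    print("default:", base)        # both default to text presentation
    print("text:   ", base + VS15)
    print("emoji:  ", base + VS16)
```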
History
The following Unicode-related documents record the purpose and process of defining specific characters in the Letterlike Symbols block:
See also
Greek in Unicode
Latin script in Unicode
Unicode symbols
Mathematical operators and symbols in Unicode
Mathematical Alphanumeric Symbols (Unicode block)
Currency Symbols (Unicode block)
References
Unicode blocks
Typographical symbols | Letterlike Symbols | [
"Mathematics"
] | 248 | [
"Symbols",
"Typographical symbols"
] |
5,499,083 | https://en.wikipedia.org/wiki/Masterminds%20%281997%20film%29 | Masterminds is a 1997 American action comedy film directed by Roger Christian, written by Floyd Byers and starring Patrick Stewart, Vincent Kartheiser, Brenda Fricker, Brad Whitford, and Matt Craven. It tells the story of a computer engineering prodigy who matches wits with a security consultant who has taken over his stepsister's school that he used to go to as a ransom is demanded for their release.
Plot
Oswald "Ozzie" Paxton is a computer engineering prodigy and expert hacker whose actions often have his father Jake threatening to send him to military school if he does not shape up. One day, he begins an unauthorized download of a soon-to-be-released movie. His download is interrupted when his younger stepsister Melissa Randall enters his room without permission. The resulting squabble between them results in Jake and Melissa's mother Helen intervening. In the process, Jake discovers the illicit download and Helen punishes Ozzie, making him take Melissa to her private school Shady Glen.
He takes her there by skateboard, where they run into Principal Claire Maloney and security consultant Rafe Bentley. It is revealed that Maloney previously expelled Ozzie, and she explains to Bentley that security measures were introduced after the "science room burnout" that Ozzie caused. Before Ozzie can get out of the school, Bentley and his crew of "security guards" use a variety of firearms and tranquilizer dart guns to subdue several staff members, lock down the school, and hold the children hostage. Bentley has planned the stages of a ransom scheme involving their parents' corporations. Ozzie attempts to alert Melissa to the danger. She does not believe him, and he is subsequently chased by one of the gunmen. Using a bunsen burner and a vial of acid, he is able to subdue his pursuer. He then begins wreaking havoc with Bentley's computerized security system.
The police make several attempts to breach the school's perimeter only to run into automatic gunfire, rocket launchers, and mines. As a concession, Bentley releases most of the children, but keeps the ten richest like Melissa and demands a very large ransom for their return. Ozzie locates ten of the eleven children and rescues them, but Melissa has been taken by Bentley. He then places an improvised time bomb at the bottom of the school's indoor pool. He attempts to stop the ransom payment, but finds out too late that the man designated to deliver it named Foster Deroy was actually Bentley's confederate. Bentley ties Ozzie to a chair and leaves with his men, keeping Melissa as an insurance policy. They intend to escape through the sewer pipes using ATVs.
While Ozzie is struggling to free himself, the bomb explodes, flooding the school's lower levels and neutralizing nearly everyone there. Ozzie and his friend K-Dog seize an abandoned ATV and pursue Bentley. They rescue Melissa, but Bentley escapes with the ransom. However, Ozzie is able to blow the whistle on Deroy with a little help from Maloney, who also witnessed Rafe's actions. Through his cellphone, the police trace Rafe's employer to Larry Millard, the CEO of a rival corporation, who masterminded the plot so that the money intended for the bidding would be given to terrorists, allowing him to win a bidding war against the corporation run by Miles Lawrence that employs Jake.
Soon afterward, Bentley sees a light at the end of the tunnel only to discover that the light leads to a sewage reclamation plant. The money begins to sink into the sewage as police officers arrive to arrest him.
Cast
Patrick Stewart as Rafe Bentley, a security consultant who takes over Shady Glen.
Vincent Kartheiser as Oswald "Ozzie" Paxton, a computer-engineering prodigy and hacker who matches wits with Rafe.
Brenda Fricker as Claire Maloney, the principal of Shady Glen.
Bradley Whitford as Miles Lawrence, the CEO of a company that Rafe demands a ransom from after he previously fired Rafe for embezzlement.
Matt Craven as Jake Paxton, a businessman and the father of Ozzie.
Annabelle Gurwitch as Helen Randall, Ozzie's stepmother.
Jon Abrahams as "K-Dog", Ozzie's friend
Katie Stuart as Melissa Randall, Ozzie's stepsister.
Michael MacRae as Foster Deroy, the CFO of Miles' company and an ally of Rafe.
Callum Keith Rennie as Ollie, one of Rafe's minions.
Earl Pastko as Captain Jankel
Jason Schombing as Marvin
Michael David Simms as Colonel Duke
David Paul Grove as "Ferret", one of Rafe's minions.
Pamela Martin as TV Reporter
Teryl Rothery as Ms. Saunders
Vanessa Morley as Gabby Lawrence, the daughter of Miles who attends Shady Glen.
Jay Brazeau as Eliot, the gate guard at Shady Glen.
Michael Benyaer as Taxi Driver
Jim Byrnes as Larry Millard (uncredited), the CEO of a company that is the rival of Miles' company.
Production
On site locations included Hatley Castle in Colwood, British Columbia, as well as locations in Victoria and Vancouver. While on-site filming took place in British Columbia, Canada, studio filming took place in Shepperton Studios in England.
Performance
In a release from Studio Briefing, Masterminds was listed as a box office flop for the Labor Day box office weekend, grossing only $1.8 million.
Reception
On Rotten Tomatoes the film has an approval rating of 19% based on reviews from 16 critics.
Roger Ebert of the Chicago Sun-Times panned the film, saying "all of the pieces have been assembled from better films, but then there are few worse films to borrow from" but had some praise for Stewart "the sole remaining interest comes from the presence of Stewart."
References
External links
1997 films
American action comedy films
Films about computing
Films directed by Roger Christian
Films set in schools
Films shot in Vancouver
Films scored by Anthony Marinelli
Columbia Pictures films
1997 action comedy films
1997 comedy films
1990s English-language films
1990s American films
English-language action comedy films | Masterminds (1997 film) | [
"Technology"
] | 1,230 | [
"Works about computing",
"Films about computing"
] |
11,790,980 | https://en.wikipedia.org/wiki/Acrosporium%20tingitaninum | Acrosporium tingitaninum is an ascomycete fungus that is a plant pathogen.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Enigmatic Ascomycota taxa
Fungus species | Acrosporium tingitaninum | [
"Biology"
] | 51 | [
"Fungi",
"Fungus species"
] |
11,790,996 | https://en.wikipedia.org/wiki/Rosellinia%20subiculata | Rosellinia subiculata is a fungal plant pathogen infecting citruses. It is a uniperitheciate pyrenomycete in division Ascomycota. It can be distinguished by its scattered growth on decaying wood, its singular ostiole, and yellow subiculum.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal citrus diseases
Xylariales
Fungi described in 1882
Fungus species | Rosellinia subiculata | [
"Biology"
] | 91 | [
"Fungi",
"Fungus species"
] |
11,791,038 | https://en.wikipedia.org/wiki/Camarotella%20costaricensis | Camarotella costaricensis is a plant pathogen.
References
Fungal plant pathogens and diseases
Phyllachorales
Fungus species | Camarotella costaricensis | [
"Biology"
] | 29 | [
"Fungi",
"Fungus species"
] |
11,791,065 | https://en.wikipedia.org/wiki/S%20Orionis | S Orionis is an asymptotic giant branch star in the constellation Orion, approximately away. It varies regularly in brightness between extremes of magnitude 7.2 and 14 every 14 months.
Variability
S Orionis is a Mira variable that pulsates with an approximately 420-day cycle and varies in radius from 2.0 to 2.3 astronomical units. The pulsations have been followed with the VLTI and the VLBA, which measured an angular diameter varying between 7.9 and 9.7 mas.
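The quoted radii follow from the small-angle relation between angular diameter and distance. Since the distance value is elided in the lead above, the sketch below assumes roughly 480 parsecs purely for illustration; with that assumption the measured 7.9–9.7 mas diameters correspond to radii close to the 2.0–2.3 AU quoted.

```python
# Small-angle relation: linear size in AU = angle in arcsec * distance in pc.
DIST_PC = 480.0    # assumed distance for illustration, not taken from the article

def radius_au(angular_diameter_mas: float, distance_pc: float = DIST_PC) -> float:
    diameter_au = (angular_diameter_mas / 1000.0) * distance_pc
    return diameter_au / 2.0

print(round(radius_au(7.9), 1))    # ~1.9 AU
print(round(radius_au(9.7), 1))    # ~2.3 AU
```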
The mean period of variation has been shown to change over time, from less than 410 days to over 440 days. The variations are approximately sinusoidal, with a weak, not statistically significant, trend towards longer periods. The cycle of period changes is around 70 years within a total observation period of only about 100 years, so it is difficult to be certain about long-term behaviour. However, this behaviour is not expected to be the result of thermal pulses or evolutionary changes, and the cause is unknown.
Companion
S Orionis is listed in the Washington Double Star Catalog as a double star with a tenth magnitude companion 47" away. The companion is G0 star HD 294176.
Circumstellar environment
S Orionis is surrounded by masers and dust condensed from its cool stellar wind. The size of the dust shells varies as the star pulsates and changes temperature, from around 8 AU to 10 AU across. The positions of the masers have been measured very accurately using VLBI.
References
Orion (constellation)
Mira variables
M-type giants
Orionis, S
036090
Emission-line stars
025673
Durchmusterung objects | S Orionis | [
"Astronomy"
] | 342 | [
"Constellations",
"Orion (constellation)"
] |
11,791,198 | https://en.wikipedia.org/wiki/Bioclimatology | Bioclimatology is the interdisciplinary field of science that studies the interactions between the biosphere and the Earth's atmosphere on time scales of the order of seasons or longer (in contrast to biometeorology).
Examples of relevant processes
Climate processes largely control the distribution, size, shape and properties of living organisms on Earth. For instance, the general circulation of the atmosphere on a planetary scale broadly determines the location of large deserts or the regions subject to frequent precipitation, which, in turn, greatly determine which organisms can naturally survive in these environments. Furthermore, changes in climates, whether due to natural processes or to human interferences, may progressively modify these habitats and cause overpopulation or extinction of indigenous species.
The biosphere, for its part, and in particular continental vegetation, which constitutes over 99% of the total biomass, has played a critical role in establishing and maintaining the chemical composition of the Earth's atmosphere, especially during the early evolution of the planet (See History of Earth for more details on this topic). Currently, the terrestrial vegetation exchanges some 60 billion tons of carbon with the atmosphere on an annual basis (through processes of carbon fixation and carbon respiration), thereby playing a critical role in the carbon cycle. On a global and annual basis, small imbalances between these two major fluxes, as do occur through changes in land cover and land use, contribute to the current increase in atmospheric carbon dioxide.
References
M. I. Budyko (1974) Climate and Life, Academic Press, New York, 508 pp., .
David M. Gates (1980) Biophysical Ecology, Springer-Verlag, New York, 611 pp., .
Stephen H. Schneider and Randi Londer (1984) The Coevolution of Climate and Life, Sierra Club Books, San Francisco, 563 pp., .
Branches of meteorology
Branches of biology
Climatology
Ecology
Environmental science | Bioclimatology | [
"Biology",
"Environmental_science"
] | 391 | [
"Ecology",
"nan"
] |
11,792,920 | https://en.wikipedia.org/wiki/3-Mercaptopropane-1%2C2-diol | 3-Mercaptopropane-1,2-diol, also known as thioglycerol, is a chemical compound and thiol that is used as a matrix in fast atom bombardment mass spectrometry and liquid secondary ion mass spectrometry.
See also
Glycerol
Mercaptoethanol
References
Solvents
Thiols
Vicinal diols
Mass spectrometry | 3-Mercaptopropane-1,2-diol | [
"Physics",
"Chemistry"
] | 85 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Thiols",
"Organic compounds",
"Mass spectrometry",
"Matter"
] |
11,792,982 | https://en.wikipedia.org/wiki/Alternaria%20macrospora | Alternaria macrospora is a plant pathogen.
References
macrospora
Fungal plant pathogens and diseases
Eudicot diseases
Fungi described in 1904
Fungus species | Alternaria macrospora | [
"Biology"
] | 33 | [
"Fungi",
"Fungus species"
] |
11,793,041 | https://en.wikipedia.org/wiki/Colletotrichum%20sublineolum | Colletotrichum sublineola is a plant pathogen that causes anthracnose in wild rice and sorghum
Colletotrichum sublineola (wrongly named for many years as Colletotrichum sublineolum), is the causal agent of sorghum anthracnose, which is one of the most important diseases in sorghum and can cause losses up to 25%.
Symptoms
Symptoms appear as small circular red/orange lesions with distinct margins on the upper portion of the stalks, leaves and seeds. The lesions can measure 2mm-2 cm and can contain dark brown fungal structures. Brown sunken areas can also appear on the stems.
Management
Partners of the CABI-led programme Plantwise recommend several methods for preventing the spread of C. sublineola; these include planting two weeks after the onset of rains, planting resistant varieties or hybrids, and using certified seed from known seed dealers.
Crop rotation with other crops, including soybean, groundnuts, cowpea and chickpeas, can be used to prevent disease spread.
The disease can also be controlled by removing or burying crop residues after harvest. Plantwise partners, including the National Agriculture Research Organization in Uganda, also recommend removing alternate hosts such as Johnson grass and any volunteer sorghum plants in the field.
Sources
References
External links
USDA ARS Fungal Database
sublineolum
Fungal plant pathogens and diseases
Cereal diseases
Fungi described in 1913
Fungus species | Colletotrichum sublineolum | [
"Biology"
] | 299 | [
"Fungi",
"Fungus species"
] |
11,793,054 | https://en.wikipedia.org/wiki/Claviceps%20zizaniae | Claviceps zizaniae is a plant pathogen that causes ergot in the wild rice species Zizania aquatica and Z. palustris. Originally described in 1920 as Spermoedia zizaniae by Faith Fyles, it was transferred to Claviceps in 1959 by Maria E. Pantidou. The new combination, however, was not published validly as Pantidou "failed to provide a full and direct reference to the place of publication". The binomial was published validly by Scott Redhead and colleagues in 2009.
References
Fungi described in 1920
Fungal plant pathogens and diseases
Monocot diseases
Clavicipitaceae
Fungus species | Claviceps zizaniae | [
"Biology"
] | 137 | [
"Fungi",
"Fungus species"
] |
11,793,095 | https://en.wikipedia.org/wiki/Eballistra%20lineata | Eballistra lineata is a fungal plant pathogen that causes stem smut in rice.
References
Fungi described in 1882
Fungal plant pathogens and diseases
Rice diseases
Ustilaginomycotina
Taxa named by Mordecai Cubitt Cooke
Fungus species | Eballistra lineata | [
"Biology"
] | 52 | [
"Fungi",
"Fungus species"
] |
11,793,198 | https://en.wikipedia.org/wiki/Pleospora%20lycopersici | Pleospora lycopersici is a plant pathogen fungus infecting tomatoes.
It was originally found on the fruit of Lycopersicon (an old name for a tomato genus) in Belgium.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Tomato diseases
Pleosporaceae
Fungi described in 1921
Fungus species | Pleospora lycopersici | [
"Biology"
] | 77 | [
"Fungi",
"Fungus species"
] |
11,793,262 | https://en.wikipedia.org/wiki/Phoma%20destructiva | Phoma destructiva is a fungal plant pathogen infecting tomatoes and potatoes.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Potato diseases
Tomato diseases
destructiva
Fungi described in 1881
Fungus species | Phoma destructiva | [
"Biology"
] | 50 | [
"Fungi",
"Fungus species"
] |
11,793,306 | https://en.wikipedia.org/wiki/Septoria%20lycopersici | Septoria lycopersici is a fungal pathogen that is most commonly found infecting tomatoes. It causes one of the most destructive diseases of tomatoes and attacks tomatoes during any stage of development.
Host and symptoms
Septoria lycopersici infects tomato leaves via the stomata and also by direct penetration of epidermal cells. Symptoms generally include circular or angular lesions, most commonly found on the older, lower leaves of the plant. The lesions are generally 2–5 mm in diameter and have a greyish center with brown margins; they are a distinctive characteristic of S. lycopersici. Pycnidia, the fruiting bodies of the fungus, appear as dark brown structures in the center of the lesions and aid in identifying the pathogen. When the lesions become numerous, the leaves often turn yellow, then brown, shriveling up and eventually dropping off the plant altogether.
Environment
Septoria lycopersici prefers warm, wet, and humid conditions. Disease development occurs within a wide range of temperatures; however, the optimal temperatures lie between 20 and 25 degrees Celsius. High humidity and leaf wetness are also ideal for disease development. The initial source of inoculum for S. lycopersici results from overwintered resting structures such as mycelium and conidia within pycnidia which can be found on and in infected seed and within infected tomato debris left in the field. Spores spread to healthy tomato leaves by windblown water, splashing rain, irrigation, mechanical transmission, and through the activities of insects such as beetles, tomato worms, and aphids. Provided the environment is conducive for disease development, lesions usually develop within 5 days of infection.
Management
The effects of Septoria lycopersici can often be reduced through the implementation of a variety of management techniques. First and foremost, each season should begin as pathogen-free as possible. This can be accomplished by burning or destroying all infected plant tissues to prevent the spread of the primary innoculum. Crop rotation is also encouraged to avoid the re-infection of new foliage from overwintered inoculum. Improving air circulation around the plants through separation of rows and use of cages can also promote faster drying and reduction of splashing, thus reducing the spread of fungal spores. Drip irrigation and mulching also help with the reduction of splashing thus decreasing further inoculum dispersal. Fungicidal sprays should also be considered, though they do not cure already infected leaves, they protect uninfected leaves from becoming infected.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Tomato diseases
lycopersici
Fungi described in 1881
Fungus species | Septoria lycopersici | [
"Biology"
] | 571 | [
"Fungi",
"Fungus species"
] |
11,793,331 | https://en.wikipedia.org/wiki/Cercospora%20nicotianae | Cercospora nicotianae is a fungal plant pathogen.
References
nicotianae
Fungal plant pathogens and diseases
Taxa named by Benjamin Matlack Everhart
Fungi described in 1893
Fungus species | Cercospora nicotianae | [
"Biology"
] | 40 | [
"Fungi",
"Fungus species"
] |
11,793,435 | https://en.wikipedia.org/wiki/Hymenula%20affinis | Hymenula affinis is an ascomycete fungus that is a plant pathogen.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Enigmatic Ascomycota taxa
Fungus species | Hymenula affinis | [
"Biology"
] | 50 | [
"Fungi",
"Fungus species"
] |
11,793,449 | https://en.wikipedia.org/wiki/Fusarium%20affine | Fusarium affine is a fungal plant pathogen affecting tobacco.
See also
List of tobacco diseases
References
affine
Fungal plant pathogens and diseases
Tobacco diseases
Fungus species | Fusarium affine | [
"Biology"
] | 34 | [
"Fungi",
"Fungus species"
] |
11,793,494 | https://en.wikipedia.org/wiki/Gloeosporium%20theae-sinensis | Gloeosporium theae-sinensis (syn. Colletotrichum theae-sinensis) is a plant pathogen.
References
External links
Index Fungorum
USDA ARS Fungal Database
Dermateaceae
Fungal plant pathogens and diseases
Fungus species | Gloeosporium theae-sinensis | [
"Biology"
] | 54 | [
"Fungi",
"Fungus species"
] |
11,793,523 | https://en.wikipedia.org/wiki/Armillaria%20fuscipes | Armillaria fuscipes is a plant pathogen that causes Armillaria root rot on Pinus, coffee plants, tea and various hardwood trees. It is common in South Africa. The mycelium of the fungus is bioluminescent.
Host and symptoms
Armillaria root rot is a disease that affects a wide variety of trees and is caused by multiple species in the Armillaria species complex. Armillaria species are basidiomycete fungi. The symptoms of Armillaria spp. can vary greatly because of the wide host range and the different species of pathogen. The hosts of Armillaria fuscipes specifically are tropical members of the genus Pinus, Camellia sinensis (tea), and members of the genus Coffea. General symptoms of A. fuscipes include stunting of the plant, sparse foliage, and chlorosis of the leaves. For hosts in the genus Pinus, such as Pinus elliottii, P. kesiya, P. patula, and P. taeda, chlorosis of the needles of the infected plant is also a common symptom. Signs of this pathogen are white fans of hyphae that grow between the bark and wood of infected trees, as well as the black mycelial cords or rhizomorphs of the fungus growing in a net around the root system. The mycelium of A. fuscipes is bioluminescent, and the rhizomorphs are used to transfer nutrients over large distances to create fruiting bodies as well as to infect other trees. The fruiting bodies are brown and white mushrooms that emerge from the base of the tree. Cracking of the bark and resin leaking from the base of the tree are other symptoms, seen mostly in Pinus hosts.
Importance
Armillaria root rot caused by A. fuscipes can result in the death of many Pinus species grown in South Africa. The disease can spread from one tree to many and result in patches of dead trees over a considerable area. A. fuscipes is the major cause of armillaria root rot on tea in Kenya and has been found in other African countries. This has major economic implications for the tea industry in countries where the pathogen is prevalent, especially because of its wide distribution in Africa, ranging from South Africa as far north as Ethiopia. Kenya is the largest producer of tea in Africa, and tea accounts for 17–20% of the revenue made from exports. The way the disease spreads and its symptoms, which greatly affect yield, make it an important disease to control, primarily in places where the plants it affects are of economic importance. A. fuscipes can infect coffee plants as well, but it mostly affects stands of tea.
Management
Managing A. fuscipes can be difficult because removing the pathogen via the application of fungicides is not straightforward. While fumigation of the plants is an option for control, it is not often used because many fumigants, such as methyl bromide, are banned due to their extreme toxicity and adverse effects on the environment. Another option for controlling inoculum is mechanical removal of infected stumps and plant material. It is difficult to completely eradicate the pathogen in this manner, and the approach is invasive, expensive, and labor-intensive. Some newer and more promising methods of management include solarization of the soil and the application of Trichoderma harzianum to the soil as a biological control. In a German study, it was found that solarization for 10 weeks increased the soil temperature enough that the viability of the pathogen was almost eliminated. The application of T. harzianum was effective in controlling A. fuscipes in woody species, and when combined with 5 weeks of solarization, caused a total loss of pathogen viability. Breeding for resistance and increasing host vigor are also options for long-term management of this pathogen.
See also
List of Armillaria species
List of bioluminescent fungi
References
Bioluminescent fungi
fuscipes
Coffee diseases
Fungal tree pathogens and diseases
Fungi described in 1909
Fungus species | Armillaria fuscipes | [
"Biology"
] | 839 | [
"Fungi",
"Fungus species"
] |
11,793,549 | https://en.wikipedia.org/wiki/Cercospora%20theae | Cercospora theae is a fungal plant pathogen. It is the pathogen that causes bird's eye spot disease in tea plants.
References
theae
Fungal plant pathogens and diseases
Fungus species | Cercospora theae | [
"Biology"
] | 41 | [
"Fungi",
"Fungus species"
] |
11,793,626 | https://en.wikipedia.org/wiki/Pestalotia%20longiseta | Pestalotia longiseta is a plant pathogen infecting tea.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Tea diseases
Xylariales
Fungus species | Pestalotia longiseta | [
"Biology"
] | 43 | [
"Fungi",
"Fungus species"
] |
11,793,700 | https://en.wikipedia.org/wiki/Calonectria%20quinqueseptata | Calonectria quinqueseptata is a fungal plant pathogen.
References
Fungal plant pathogens and diseases
Nectriaceae
Fungi described in 1967
Fungus species | Calonectria quinqueseptata | [
"Biology"
] | 33 | [
"Fungi",
"Fungus species"
] |
11,793,837 | https://en.wikipedia.org/wiki/Sphaceloma%20theae | Sphaceloma theae is a plant pathogen infecting tea.
References
External links
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Tea diseases
Myriangiales
Fungi described in 1939
Fungus species | Sphaceloma theae | [
"Biology"
] | 43 | [
"Fungi",
"Fungus species"
] |
11,793,874 | https://en.wikipedia.org/wiki/Phaeoisariopsis%20bataticola | Phaeoisariopsis bataticola is a fungal plant pathogen infecting sweet potatoes.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Root vegetable diseases
Mycosphaerellaceae
Fungi described in 1976
Fungus species | Phaeoisariopsis bataticola | [
"Biology"
] | 54 | [
"Fungi",
"Fungus species"
] |
11,794,014 | https://en.wikipedia.org/wiki/Alternaria%20helianthi | Alternaria helianthi is a fungal plant pathogen causing a disease in sunflowers known as Alternaria blight of sunflower.
As a pathogen
Alternaria spp. plant pathogens are found across the world, affecting a variety of species, including sunflowers. Alternaria helianthi is a major defoliating pathogen in warm, humid climates such as those of India and Africa. Farmers and businesses produce sunflowers (Helianthus spp.) for oil, edible seeds, and ornamental flower displays. Several diseases threaten the production of sunflowers.
Leaf blight of sunflowers is one of the most devastating diseases of sunflowers and is caused by Alternaria helianthi (Hansford) Tubaki and Nishihara, a seed-borne pathogenic fungus. It was recorded in Japan and found to be the same as a fungus collected earlier on sunflower in Argentina, India, Tanzania, Uganda, and Zambia. Transportation of infected sunflowers and agricultural practices have spread the pathogen worldwide, causing serious losses in places such as India, where sunflowers are a main source of oil production. The pathogen that causes this disease is part of the genus Alternaria; it is ubiquitous and abundant and can pose a high mycotoxicological risk at harvest, devastating entire crops.
Alternaria helianthi is an ascomycete from the family Pleosporaceae. This pathogen produces simple (rarely branched) conidiophores that bear solitary conidia. These conidia are light brown, ellipsoid or broadly ovoid, and rarely form longitudinal septa.
Host(s) and symptoms
There are eight different species that cause yield loss of sunflowers; however, Alternaria helianthi is the primary causal agent and most widespread. The main hosts are sunflowers (Helianthus annuus); however, it has been proven that safflower (Carthamus tinctorius), noogoora burr (Xanthium pungens), cocklebur (Xanthium strumarium), and Bathurst burr (Xanthium spinosum) can serve as alternative hosts for the pathogen.
Symptoms
Leaf spots start as small, dark, angular spots that eventually turn into necrotic areas, resulting in defoliation. Defoliation is most prevalent starting at the lower leaves, where the microclimate is most favorable. Dark brown lesions are found on leaves, stems, petioles, and bracts. Stem lesions are normally narrow (1–3 mm) black streaks that can reach up to 3 cm long. The pathogen may cause linear spots on stems and water-soaked, sunken lesions on the back of the sunflower head. Some spots may have a yellow halo around them. Infection causes destruction of the flowers and early senescence.
Disease cycle
This pathogen overwinters on infected plant residues, but wild sunflowers may also serve as reservoirs. All species, including Alternaria helianthi, may also be seedborne. Alternaria spp. have no sexual or perfect stage; they multiply asexually by sporulation. The conidia germinate by producing one or more germ tubes. Disease progression relies heavily on the duration of leaf wetness following initial infection, as the germination of new spores can occur within days. Germination occurs best at temperatures below 26 degrees C and requires a minimum of 4 hours of leaf wetness for sporulation. The pathogen is dispersed by wind or splashing water onto the lower leaves of the sunflower. Young seedlings are more susceptible, but lower leaves on mature plants are frequently defoliated by Alternaria spp.
Germ tubes are produced by the conidia and grow across the leaf surface before forming an appressorium. The fungus then enters the host by penetrating the cuticle and epidermis; penetration through wounds and stomata has also been observed. The conidiophores then develop through the collapsed stomata. Conidiophores are 12 to 50 micrometers long and arise singly or in branches. The conidia emerge through the stomata and trichomes. At this stage, the conidia produced cause secondary infection and spread to other healthy plants. Under certain conditions, micro-cyclic conidia can be produced directly from the parent conidia.
Management
There are three main types of control for Alternaria helianthi: cultural practices, fungicides, and resistance. Cultural practices include removing wild and volunteer sunflower hosts, minimizing leaf wetness, and removing previous sunflower residues from the soil. The destruction of plant residue eliminates the source of inoculum. Additionally, crop rotation with non-Asteraceae crops or allowing for fallow periods can assist in management. Seed treatments are an option; however, multiple applications of different fungicides are more effective. Finally, disease resistance has been found in oilseed sunflowers, so there is a possibility of incorporating this resistance into other sunflower lines through hybridization.
References
helianthi
Fungal plant pathogens and diseases
Fungi described in 1943
Fungus species | Alternaria helianthi | [
"Biology"
] | 1,088 | [
"Fungi",
"Fungus species"
] |
11,794,065 | https://en.wikipedia.org/wiki/Coleosporium%20madiae | Coleosporium madiae is a plant pathogen.
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Pucciniales
Fungus species | Coleosporium madiae | [
"Biology"
] | 36 | [
"Fungi",
"Fungus species"
] |
11,794,266 | https://en.wikipedia.org/wiki/Colletotrichum%20fragariae | Colletotrichum fragariae is a fungal plant pathogen infecting strawberries. It is not a well-known fungus, and there are many similar, related fungi. It is part of the genus Colletotrichum. It causes the disease known as anthracnose, which typically occurs at the crown of the strawberry plant and is therefore often called crown rot, or anthracnose crown rot. The fungus also infects leaves, causing leaf spot, which is common among all Colletotrichum species; in C. fragariae, however, crown infection is more common than leaf infection. The fungus is also better at infecting younger strawberry plants and seedlings. The most common way to control the disease is with fungicides, which are harmful to the environment. Studies have investigated whether the fungus infects other hosts, but other than some weeds it is very specific to strawberries.
The occurrence of this fungus in strawberries fluctuates. It is one of the more deadly pathogens of the strawberry: once the fungus is inside and affects the crown, the strawberry is no longer able to reproduce or be consumed.
Morphology
Colletotrichum fragariae is a very small pathogen that can only be seen under a microscope. In a study by A.N. Brooks, the pathogen was described as tapering to the base, about 24 × 4.5 μm, with 3–5 septa (up to 9), occurring in fascicles, sometimes sinuous, brown, with the apical cell hyaline or light brown. The apical cell tapers to an open, truncate apex, and the apical cells of mature setae function as phialides, producing conidia (Brooks, 1931). The fungus also produces cylindrical conidia. It forms no above-ground body or fruiting body.
Ecology
Colletotrichum fragariae is found in subtropical and tropical moist lowland and montane forests. It has been found in North America, South America, and Asia. There are 66 records of this species in 5 countries, 85% of which were found in the US. Research has found that high soil fertility increases the fungus's ability to grow. Many research papers have examined what conditions the fungus prefers and how it fares in particular environments.
Reproduction
Colletotrichum fragariae is a smaller fungus that reproduces through asexual spores, as is true of all fungi in the genus Colletotrichum. Several growth stages are involved, including the flowering stage, fruiting stage, post-harvest, seedling stage, and vegetative growing stage.
See also
List of strawberry diseases
References
External links
fragariae
Fungal strawberry diseases
Fungi described in 1931
Fungus species | Colletotrichum fragariae | [
"Biology"
] | 622 | [
"Fungi",
"Fungus species"
] |
11,794,287 | https://en.wikipedia.org/wiki/Cercospora%20vexans | Cercospora vexans is a fungal plant pathogen.
References
vexans
Fungal plant pathogens and diseases
Fungus species | Cercospora vexans | [
"Biology"
] | 27 | [
"Fungi",
"Fungus species"
] |
11,794,556 | https://en.wikipedia.org/wiki/Amazon%20Simple%20Queue%20Service | Amazon Simple Queue Service (Amazon SQS) is a distributed message queuing service introduced by Amazon.com as a beta in late 2004, and generally available in mid 2006. It supports programmatic sending of messages via web service applications as a way to communicate over the Internet. SQS is intended to provide a highly scalable hosted message queue that resolves issues arising from the common producer–consumer problem or connectivity between producer and consumer.
Amazon SQS can be described as commoditization of the messaging service. Well-known examples of messaging service technologies include IBM WebSphere MQ and Microsoft Message Queuing. Unlike these technologies, users do not need to maintain their own server. Amazon does it for them and sells the SQS service at a per-use rate.
API
Amazon provides SDKs in several programming languages, including:
C++
Go
Java
JavaScript
Kotlin
.NET
PHP
Python
Ruby
Rust
Swift
A Java Message Service (JMS) 1.1 client for Amazon SQS was released in December 2014.
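To make the API concrete, here is a minimal sketch using the Python SDK (boto3). The queue name is a hypothetical placeholder, and AWS credentials are assumed to be configured in the environment; this is an illustration, not Amazon's reference usage.

```python
import boto3

# Create an SQS client (credentials and region are assumed to be configured
# via the environment or AWS config files).
sqs = boto3.client("sqs")

# Create (or look up) a queue; "example-queue" is a hypothetical name.
queue_url = sqs.create_queue(QueueName="example-queue")["QueueUrl"]

# Send a message: only the queue URL and the message body are required.
sqs.send_message(QueueUrl=queue_url, MessageBody="hello from SQS")

# Receive up to one message, waiting up to 10 seconds (long polling).
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1,
                           WaitTimeSeconds=10)
for msg in resp.get("Messages", []):
    print(msg["Body"])
```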
Authentication
Amazon SQS provides authentication procedures to allow for secure handling of data. Amazon uses its Amazon Web Services (AWS) identification for this, requiring users to have an AWS-enabled account with Amazon.com. AWS assigns a pair of related identifiers, the AWS access keys, to an AWS-enabled account to perform identification. The first identifier is a public 20-character Access Key, which is included in an AWS service request to identify the user. If the user is not using SOAP with WS-Security, a digital signature is calculated using the Secret Access Key, a 40-character private identifier. AWS uses the Access Key ID provided in a service request to look up the account's Secret Access Key and then calculates a digital signature with that key. If the signatures match, the user is considered authentic; if not, authentication fails and the request is not processed.
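The HMAC-based signing described above can be sketched roughly as follows. This is only an approximation for illustration: the exact canonical string-to-sign and signature version are defined by the AWS specification, and the keys shown are placeholders.

```python
import base64
import hashlib
import hmac

# Placeholder credentials -- a real pair is issued with an AWS account.
access_key_id = "AKIDEXAMPLEEXAMPLE"        # public 20-character Access Key
secret_access_key = "secretKeyExampleOnly"  # private Secret Access Key

# Simplified "string to sign" built from request parameters; the exact
# canonical format is defined by AWS and is only approximated here.
string_to_sign = "Action=SendMessage&AWSAccessKeyId=" + access_key_id

# The digital signature is an HMAC over the string to sign, keyed with the
# Secret Access Key; AWS recomputes this server-side and compares.
digest = hmac.new(secret_access_key.encode("utf-8"),
                  string_to_sign.encode("utf-8"),
                  hashlib.sha256).digest()
signature = base64.b64encode(digest).decode("ascii")
print(signature)
```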
Message delivery
Amazon SQS guarantees at-least-once delivery. Messages are stored on multiple servers for redundancy and to ensure availability. If a message is delivered while a server is not available, it may not be removed from that server's queue and may be resent. Amazon SQS does not guarantee that the recipient will receive messages in the order they were sent by the sender. If message ordering is important, the application must place sequencing information within the messages to allow for reordering after delivery, as sketched below.
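One possible way to carry sequencing information inside messages, as suggested above, is sketched here with boto3. The queue URL and the "seq" attribute name are hypothetical choices for illustration.

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # placeholder

# Sender: attach a sequence number to each message as a message attribute.
for i, body in enumerate(["first", "second", "third"]):
    sqs.send_message(
        QueueUrl=queue_url,
        MessageBody=body,
        MessageAttributes={"seq": {"DataType": "Number", "StringValue": str(i)}},
    )

# Receiver: messages may arrive out of order (and possibly more than once),
# so sort them by the sequence attribute after delivery.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10,
                           MessageAttributeNames=["seq"])
messages = resp.get("Messages", [])
messages.sort(key=lambda m: int(m["MessageAttributes"]["seq"]["StringValue"]))
```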
Messages can be of any type, and the data contained within is not restricted. Message bodies were initially limited to 8 KB in size, but the limit was raised to 64 KB on 2010-07-01 and to 256 KB on 2013-06-18. For larger messages, the user has a few options to work around this limitation. A large message can be split into multiple segments that are sent separately, or the message data can be stored using Amazon Simple Storage Service (Amazon S3) or Amazon DynamoDB, with just a pointer to the data transmitted in the SQS message. Amazon has made an Extended Client Library available for this purpose.
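The S3 pointer pattern mentioned above can be sketched as follows; the bucket, key, and queue URL are placeholders, and this hand-rolled version merely illustrates the idea behind Amazon's Extended Client Library rather than reproducing it.

```python
import json
import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

bucket = "example-large-payload-bucket"   # placeholder bucket
key = "payloads/msg-0001.bin"             # placeholder object key
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # placeholder

large_payload = b"x" * (1024 * 1024)      # larger than the 256 KB SQS body limit

# Producer: store the real payload in S3, then send only a small pointer over SQS.
s3.put_object(Bucket=bucket, Key=key, Body=large_payload)
sqs.send_message(QueueUrl=queue_url,
                 MessageBody=json.dumps({"s3_bucket": bucket, "s3_key": key}))

# Consumer: follow the pointer back to S3 to fetch the payload
# (assumes a message is available on the queue).
pointer = json.loads(
    sqs.receive_message(QueueUrl=queue_url)["Messages"][0]["Body"])
payload = s3.get_object(Bucket=pointer["s3_bucket"],
                        Key=pointer["s3_key"])["Body"].read()
```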
The service supports both unlimited queues and message traffic.
Message deletion
SQS does not automatically delete messages once they are sent. When a message is delivered, a receipt handle is generated for that delivery and sent to the recipient. These receipts are not sent with the message but in addition to it. SQS requires the recipient to provide the receipt handle in order to delete a message. This behaviour dates from 2008; previously, only the message ID was required for message deletion. Because the system is distributed, a message may be delivered more than once; in this case, the most recent receipt handle is needed to delete the message. Furthermore, the receipt handle may have other validity constraints; for instance, it may only be valid during the visibility timeout (see below).
Once a message is delivered, it has a visibility timeout to prevent other components from consuming it. The "clock" for the visibility timeout starts once the message is delivered, the default being 30 seconds. If the queue is not told to delete the message during this time, the message becomes visible again and will be redelivered.
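A sketch of the receive/delete cycle, including the receipt handle and visibility timeout discussed above, might look like this in boto3; the queue URL is a placeholder and the timeout values are example choices.

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # placeholder

# Receive a message; it stays hidden from other consumers for 30 seconds.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1,
                           VisibilityTimeout=30)
for msg in resp.get("Messages", []):
    handle = msg["ReceiptHandle"]   # issued per delivery, required for deletion

    # If processing needs longer than the visibility timeout, the timeout
    # for this delivery can be extended before the message becomes visible again.
    sqs.change_message_visibility(QueueUrl=queue_url, ReceiptHandle=handle,
                                  VisibilityTimeout=120)

    # ... process msg["Body"] here ...

    # Deleting requires the (most recent) receipt handle, not just the message ID.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=handle)
```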
Each queue also has a retention parameter, which defaults to 4 days. Any message residing in the queue for longer will be purged automatically. The retention period can be set by the user to anywhere from 1 minute to 14 days. If the retention period is changed while messages are already in the queue, any message that has been in the queue for longer than the new retention period will be purged.
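Adjusting the retention parameter can be sketched as follows; the value is given in seconds (here the 14-day maximum mentioned above), and the queue URL is a placeholder.

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # placeholder

# MessageRetentionPeriod is expressed in seconds; 14 days = 1,209,600 s.
sqs.set_queue_attributes(QueueUrl=queue_url,
                         Attributes={"MessageRetentionPeriod": "1209600"})
```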
Notable usage
Examples of companies that use SQS extensively include:
Dropbox
Netflix
Nextdoor
Amazon.com
See also
Java Message Service
Message queue
Message Queuing as a Service
Oracle Messaging Cloud Service
References
Amazon Web Services
Message-oriented middleware
Inter-process communication
Web services
Cloud platforms | Amazon Simple Queue Service | [
"Technology"
] | 1,004 | [
"Cloud platforms",
"Computing platforms"
] |
11,795,634 | https://en.wikipedia.org/wiki/Fourier%20algebra | Fourier and related algebras occur naturally in the harmonic analysis of locally compact groups. They play an important role in the duality theories of these groups. The Fourier–Stieltjes algebra and the Fourier–Stieltjes transform on the Fourier algebra of a locally compact group were introduced by Pierre Eymard in 1964.
Definition
Informal
Let G be a locally compact abelian group, and Ĝ the dual group of G. Then L¹(Ĝ) is the space of all functions on Ĝ which are integrable with respect to the Haar measure on Ĝ, and it has a Banach algebra structure where the product of two functions is convolution. We define A(G) to be the set of Fourier transforms of functions in L¹(Ĝ); it is a closed sub-algebra of C_b(G), the space of bounded continuous complex-valued functions on G with pointwise multiplication. We call A(G) the Fourier algebra of G.
Similarly, we write M(Ĝ) for the measure algebra on Ĝ, meaning the space of all finite regular Borel measures on Ĝ. We define B(G) to be the set of Fourier–Stieltjes transforms of measures in M(Ĝ). It is a closed sub-algebra of C_b(G), the space of bounded continuous complex-valued functions on G with pointwise multiplication. We call B(G) the Fourier–Stieltjes algebra of G. Equivalently, B(G) can be defined as the linear span of the set of continuous positive-definite functions on G.
Since L¹(Ĝ) is naturally included in M(Ĝ), and since the Fourier–Stieltjes transform of an L¹ function is just the Fourier transform of that function, we have that A(G) ⊆ B(G). In fact, A(G) is a closed ideal in B(G).
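For convenience, the objects and inclusions described in this informal definition can be collected in a single display (a restatement of the text above, not an additional result):

```latex
\[
  A(G) \;=\; \{\, \hat{f} : f \in L^{1}(\hat{G}) \,\},
  \qquad
  B(G) \;=\; \{\, \hat{\mu} : \mu \in M(\hat{G}) \,\},
  \qquad
  A(G) \subseteq B(G) \subseteq C_{b}(G).
\]
% A(G) is moreover a closed ideal in B(G); multiplication in all three algebras is pointwise.
```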
Formal
Let B(G) be a Fourier–Stieltjes algebra and A(G) be a Fourier algebra such that the locally compact group G is abelian. Let M(Ĝ) be the measure algebra of finite measures on Ĝ and let L¹(Ĝ) be the convolution algebra of integrable functions on Ĝ, where Ĝ is the character group of the abelian group G.
The Fourier–Stieltjes transform of a finite measure μ on Ĝ is the function μ̂ on G defined by
\[ \hat{\mu}(x) = \int_{\hat{G}} \overline{\chi(x)} \, d\mu(\chi), \qquad x \in G. \]
The space of these functions is an algebra under pointwise multiplication; it is isomorphic to the measure algebra M(Ĝ). Restricted to L¹(Ĝ), viewed as a subspace of M(Ĝ), the Fourier–Stieltjes transform is the Fourier transform on L¹(Ĝ) and its image is, by definition, the Fourier algebra A(G). The generalized Bochner theorem states that a measurable function on G is equal, almost everywhere, to the Fourier–Stieltjes transform of a non-negative finite measure on Ĝ if and only if it is positive definite. Thus, B(G) can be defined as the linear span of the set of continuous positive-definite functions on G. This definition is still valid when G is not abelian.
Helson–Kahane–Katznelson–Rudin theorem
Let A(G) be the Fourier algebra of a compact group G. Building upon the work of Wiener, Lévy, Gelfand, and Beurling, in 1959 Helson, Kahane, Katznelson, and Rudin proved that, when G is compact and abelian, a function f defined on a closed convex subset of the plane operates in A(G) if and only if f is real analytic. In 1969 Dunkl proved the result holds when G is compact and contains an infinite abelian subgroup.
References
"Functions that Operate in the Fourier Algebra of a Compact Group" Charles F. Dunkl Proceedings of the American Mathematical Society, Vol. 21, No. 3. (Jun., 1969), pp. 540–544. Stable URL:
"Functions which Operate in the Fourier Algebra of a Discrete Group" Leonede de Michele; Paolo M. Soardi, Proceedings of the American Mathematical Society, Vol. 45, No. 3. (Sep., 1974), pp. 389–392. Stable URL:
"Uniform Closures of Fourier-Stieltjes Algebras", Ching Chou, Proceedings of the American Mathematical Society, Vol. 77, No. 1. (Oct., 1979), pp. 99–102. Stable URL:
"Centralizers of the Fourier Algebra of an Amenable Group", P. F. Renaud, Proceedings of the American Mathematical Society, Vol. 32, No. 2. (Apr., 1972), pp. 539–542. Stable URL:
Harmonic analysis
Algebras | Fourier algebra | [
"Mathematics"
] | 887 | [
"Algebras",
"Mathematical structures",
"Algebraic structures"
] |
11,796,103 | https://en.wikipedia.org/wiki/Blackle | Blackle is an internet search engine powered by Google Programmable Search Engine. It was created by Tony Heap of Heap Media Australia with the goal of saving energy by displaying a black background with grayish-white text color on search results. As of July 2023, Blackle claims to have saved over 10.07 MWh of electrical energy.
Concept
The concept behind Blackle is that computer monitors can use less energy by displaying darker colors. Blackle's design is based on a study that tested a variety of CRT and LCD monitors. However, the energy-saving claims, especially for users of LCD screens with a constant backlight, are disputed.
This concept was brought to the attention of Heap Media by a blog post, estimating that Google could save 750 megawatt hours a year by using it for CRT screens. The homepage of Blackle provides a count of the number of watt hours claimed to have been saved by enabling this concept.
History
Blackle launched in January 2007, gaining popularity and being featured in multiple mainstream media outlets during that time.
Blackle International, which translated Blackle into Portuguese, French, Czech, Italian, and Dutch, was retired in 2019. The International page still exists, but every link listed has experienced link rot. As of 2021, the site is only available in English.
See also
Light-on-dark color scheme
Performance per watt
Comparison of web search engines
List of search engines
List of search engines by popularity
References
External links
Australian websites
Energy conservation
Environmental websites
Web services
Web service providers
Google
Computers and the environment
Internet properties established in 2007 | Blackle | [
"Technology"
] | 320 | [
"Computers and the environment",
"Computers",
"Computing and society"
] |
11,796,179 | https://en.wikipedia.org/wiki/High-intensity%20magnetic%20separator | In the recent past there were few alternatives for removing deleterious iron particles from a process stream. Magnetic separation was typically limited and only moderately effective. Magnetic separators that used permanent magnets could generate only low-intensity fields. These worked well in removing ferrous tramp but not fine paramagnetic particles. Thus high-intensity magnetic separators, which are effective in collecting paramagnetic particles, came into existence. These focus on the separation of very fine paramagnetic particles.
The current is passed through the coil, which creates a magnetic field that magnetizes the expanded steel matrix ring. The paramagnetic matrix material behaves like a magnet in the magnetic field and thereby attracts the fines. The ring is rinsed while it is in the magnetic field, and all the non-magnetic particles are carried away with the rinse water. Then, as the ring leaves the magnetic zone, it is flushed and a vacuum of about −0.3 bar is applied to remove the magnetic particles attached to the matrix ring.
Standard operating procedure
The high-gradient magnetic separator separates magnetic and non-magnetic particles (concentrate and tails) from the feed slurry. The feed comes from the intermediate thickener underflow pump through a linear screen and passive matrix. Tailings go to the tailings thickener, and the product goes to the throw launder through vacuum tanks.
Ion separation
Ion separation is another application of magnetic separation. The separation is driven by the magnetic field, which induces a separating force. The force then differentiates between heavier and lighter ions, causing the separation. This phenomenon has been demonstrated at test-bench and pilot scale.
References
Industrial processes
Separation processes
Magnetic devices | High-intensity magnetic separator | [
"Chemistry"
] | 340 | [
"nan",
"Separation processes"
] |
11,796,904 | https://en.wikipedia.org/wiki/Jainism%20and%20non-creationism | According to Jain doctrine, the universe and its constituents—soul, matter, space, time, and principles of motion—have always existed. Jainism does not support belief in a creator deity. All the constituents and actions are governed by universal natural laws. It is not possible to create matter out of nothing and hence the sum total of matter in the universe remains the same (similar to law of conservation of mass). Jain texts claim that the universe consists of jiva (life force or souls) and ajiva (lifeless objects). The soul of each living being is unique and uncreated and has existed during beginningless time.
The Jain theory of causation holds that a cause and its effect are always identical in nature and hence a conscious and immaterial entity like God cannot create a material entity like the universe. Furthermore, according to the Jain concept of divinity, any soul who destroys its karmas and desires achieves liberation (nirvana). A soul who destroys all its passions and desires has no desire to interfere in the working of the universe. Moral rewards and sufferings are not the work of a divine being, but a result of an innate moral order in the cosmos: a self-regulating mechanism whereby the individual reaps the fruits of their own actions through the workings of the karmas.
Through the ages, Jain philosophers have rejected and opposed the concept of any omnipotent creator god, and this has resulted in Jainism being labeled as nastika darsana, or an atheist philosophy by the rival religious philosophies. The theme of non-creationism and absence of omnipotent God and divine grace runs strongly in all the philosophical dimensions of Jainism, including its cosmology, karma, moksa and its moral code of conduct. Jainism asserts that a religious and virtuous life is possible without the idea of a creator god.
Jaina conception of the Universe
Jain scriptures reject God as the creator of the universe. Jainism offers an elaborate cosmology, including heavenly beings/devas. These heavenly beings are not viewed as creators, they are subject to suffering and change like all other living beings, and must eventually die. If godliness is defined as the state of having freed one's soul from karmas and the attainment of enlightenment/Nirvana and a god as one who exists in such a state, then those who have achieved such a state can be termed gods/Tirthankara. Thus, Mahavira was a god/Tirthankara.
According to Jains, this loka or universe is an entity, always existing in varying forms with no beginning or end. Jain texts describe the shape of the universe as similar to a man standing with legs apart and arms resting on his waist. Thus, the universe is narrow at the top, widens above the middle, narrows towards the middle, and once again becomes broad at the bottom.
Wheel of time
According to Jainism, time is beginningless and eternal. The cosmic wheel of time rotates ceaselessly. This cyclic nature eliminates the need for a creator, destroyer or external deity to maintain the universe.
The wheel of time is divided into two half-rotations, Utsarpiṇī or ascending time cycle and Avasarpiṇī, the descending time cycle, occurring continuously after each other. Utsarpiṇī is a period of progressive prosperity and happiness where the time spans and ages are at an increasing scale, while Avsarpiṇī is a period of increasing sorrow and immorality.
Concept of reality
This universe is made up of what Jainas call the six dravyas or substances classified as follows –
Jīva – The living substances
Ajīva – Non-Living Substances
Pudgala or Matter – Matter is solid, liquid, gas, energy, fine karmic materials and extra-fine matter or ultimate particles. Paramānu or ultimate particles are the basic building block of matter. One quality of paramānu and pudgala is permanence and indestructibility. It combines and changes its modes but its qualities remain the same. According to Jainism, it cannot be created nor destroyed.
Dharma-tattva or Medium of Motion and Adharma-tattva or Medium of Rest – Also known as Dharmāstikāya and Adharmāstikāya, they are distinct to Jain thought depicting motion and rest. They pervade the entire universe. Dharma-tattva and Adharma-tattva are by itself not motion or rest but mediate motion and rest in other bodies. Without dharmāstikāya motion is impossible and without adharmāstikāya rest is impossible in the Universe.
Ākāśa or Space – Space is a substance that accommodates living souls, matter, the principles of motion and rest, and time. It is all-pervading, infinite and made of infinite space-points.
Kāla or Time – Time is a real entity according to Jainism and all activities, changes or modifications are achieved only in time. Time is like a wheel with twelve spokes divided into descending and ascending: half with six stages of immense durations, each estimated at billions of "ocean years" (sagaropama). In each descending stage, sorrow increases and at each ascending stage, happiness and bliss increase.
These uncreated constituents of the universe impart dynamics upon the universe by interacting with each other. These constituents behave according to natural laws without interference from external entities. Dharma or true religion according to Jainism is vatthu sahāvo dhammo translated as "the intrinsic nature of a substance is its true dharma."
Material cause and effect
According to Jainism, causes are of two types – Upādanā kārana (substantial or material cause) and Nimitta kārana (instrumental cause). Upādanā kārana is always identical with its effect. For example, out of clay, you can only produce a clay pot; hence the clay is the upādanā kārana or material cause and the clay pot its effect. Wherever the effect is present, the cause is present and vice versa. The effect is always present in latent form in the material cause. For transforming the clay to a pot, the potter, the wheel, the stick and other operating agents are required that are merely nimitta or instrumental causes or catalysts in transformation. The material cause always remains the clay. Hence the cause and effect are always entirely identical in nature. A potter cannot be the material cause of the pot. If this were the case, then the potter might as well prepare the pot without any clay. But this is not so. Thus a clay pot can only be made from clay; gold ornaments can be made only from gold. Similarly, the different modes of existence of a soul are a result of activities of the soul itself. There cannot be any contradiction or exceptions.
In such a scenario, Jains argue that the material cause of a living soul with cetana (conscious entity) is always the soul itself and the cause of dead inert matter (non-cetana i.e. without any consciousness) is always the matter itself. If God is indeed the creator, then this is an impossible predication as the same cause will be responsible for two contradictory effects of cetana (life) and acetana (matter). This logically precludes an immaterial God (a conscious entity) from creating this universe, which is made up of material substances.
The soul
According to Jainism, one of the qualities of the soul is complete lordship of its own destiny. The soul alone chooses its actions and the soul alone reaps its consequences. No god or prophet or angel can interfere in the actions or the destiny of the soul. It is the soul alone who makes the necessary efforts to achieve liberation without any divine grace.
Jains frequently assert that “we are alone” in this world. Amongst the Twelve Contemplations (anupreksas) of Jains, one is the loneliness of one's soul and nature of the universe and transmigration. Hence only by cleansing our soul by our own actions can we help ourselves.
Jainism thus lays a strong emphasis on the efforts and the free will of the soul to achieve the desired goal of liberation.
Jaina conception of divinity
According to Jainism, gods can be categorized into Tīrthankaras, arihants or ordinary kevalins and siddhas. Jainism considers the devīs and devas to be celestial beings who dwell in heavens owing to meritorious deeds in their past lives.
Arihants
Arihants, also known as kevalins, are "gods" (supreme souls) in embodied states who ultimately become siddhas, or liberated souls, at the time of their nirvana. An arihant is a soul who has destroyed all passions, is totally unattached and without any desire and hence has destroyed the four ghātiyā karmas and attained Kevala jñāna, or omniscience. Such a soul still has a body and four aghātiyā karmas. An arhata, at the end of his lifespan, destroys his remaining aghātiyā karma and becomes a siddha.
Tīrthankaras
Tīrthankaras (also known as Jinas) are arihants who are teachers and revivers of the Jain philosophy. There are 24 Tīrthankaras in each time cycle; Mahāvīra was the 24th and last Tīrthankara of the current time cycle. Tīrthankaras are literally the ford makers who have shown the way to cross the ocean of rebirth and transmigration and hence have become a focus of reverence and worship amongst Jains. However it would be a mistake to regard the Tīrthankaras as gods analogous to the gods of the Hindu pantheon despite the superficial resemblances in Jain and Hindu way of worship. Tīrthankaras, like arhatas, ultimately become siddhas on liberation. Tīrthankaras, being liberated, are beyond any kind of transactions with the rest of the universe. They are not the beings who exercise any sort of creative activity or who have the capacity or ability to intervene in answers to prayers.
Siddhas
Ultimately, all arihants and Tīrthankaras become siddhas. A siddha is a soul who is permanently liberated from the transmigratory cycle of birth and death. Such a soul, having realized its true self, is free from all the karmas and embodiment. They are formless and dwell in Siddhashila (the realm of the liberated beings) at the apex of the universe in infinite bliss, infinite perception, infinite knowledge and infinite energy. Siddhahood is the ultimate goal of all souls.
Jains pray to these passionless gods not for any favours or rewards but rather pray to the qualities of the god with the objective of destroying the karmas and achieving godhood. This is best understood by the term – vandetadgunalabhdhaye i.e. we pray to the attributes of such gods to acquire such attributes”.
Heavenly beings – Demi-gods and demi-goddesses
Jainism describes the existence of śāsanadevatās and śāsanadevīs, the attendant gods and goddesses of Tīrthankaras, who create the samavasarana or the divine preaching assembly of a Tīrthankara.
Worship of such gods is considered as mithyātva or wrong belief leading to bondage of karmas.
Nature of karmas
According to Robert Zydendos, karma in Jainism can be considered a kind of system of laws, but natural rather than moral laws. In Jainism, actions that carry moral significance are considered to cause certain consequences in just the same way as, for instance, physical actions that do not carry any special moral significance. When one holds an apple in one's hand and then let go of the apple, the apple will fall: this is only natural. There is no judge, and no moral judgment involved, since this is a mechanical consequence of the physical action.
Hence in accordance with the natural karmic laws, consequences occur when one utters a lie, steals something, commits acts of senseless violence or leads the life of a debauchee. Rather than assume that moral rewards and retribution are the work of a divine judge, the Jains believe that there is an innate moral order to the cosmos, self-regulating through the workings of karma. Morality and ethics are important, not because of the personal whim of a fictional god, but because a life that is led in agreement with moral and ethical principles is beneficial: it leads to a decrease and finally to the total loss of karma, which means: to ever increasing happiness.
Karmas are often wrongly interpreted as a method for reward and punishment of a soul for its good and bad deeds. In Jainism, there is no question of there being any reward or punishment, as each soul is the master of its own destiny. The karmas can be said to represent a sum total of all unfulfilled desires of a soul. They enable the soul to experience the various themes of the lives that it desires to experience. They ultimately mature when the necessary supportive conditions required for maturity are fulfilled. Hence a soul may transmigrate from one life form to another for countless years, taking with it the karmas that it has earned, until it finds conditions that bring about the fruits.
Hence whatever suffering or pleasure that a soul may be experiencing now is on account of choices that it has made in past. That is why Jainism stresses pure thinking and moral behavior. Apart from Buddhism, perhaps Jainism is the only religion that does not invoke the fear of God as a reason for moral behavior.
The karmic theory in Jainism operates endogenously. Tirthankaras are not attributed "absolute godhood" under Jainism. Thus, even the Tirthankaras themselves have to go through the stages of emancipation, for attaining that state. While Buddhism does give a similar and to some extent a matching account for Gautama Buddha, Hinduism maintains a totally different theory where "divine grace" is needed for emancipation.
The following quote in Bhagavatī Ārādhanā (1616) sums up the predominance of karmas in Jain doctrine:-
Thus it is not the so-called all embracing omnipotent God, but the law of karma that is the all governing force responsible for the manifest differences in the status, attainments and happiness of all life forms. It operates as a self-sustaining mechanism as natural universal law, without any need of an external entity to manage them.
Jain opposition to creationism
Jain scriptures reject God as the creator of universe. 12th century Ācārya Hemacandra puts forth the Jain view of universe in the Yogaśāstra thus –
Besides scriptural authority, Jains also resorted to syllogism and deductive reasoning to refute the creationist theories. Various views on divinity and universe held by the vedics, sāmkhyas, mimimsas, Buddhists and other schools of thought were analysed, debated and repudiated by the various Jain Ācāryas. However the most eloquent refutation of this view is provided by Ācārya Jinasena in Mahāpurāna thus –
Reception
The Jaina position on God and religion from a perspective of a non-Jain can be summed up in the words of Anne Vallely.
Criticism
Jainism, along with Buddhism, has been categorized as atheist philosophy (i.e. Nāstika darśana) by the followers of Vedic religion. However, the word Nāstika corresponds more to "heterodox" than to "atheism".
Sinclair Stevenson, an Irish missionary, declared that "the heart of Jainism is empty” since it does not depend on beseeching an omnipotent God for salvation. While fervently appealing for them to accept Christianity, she says Jains believe strongly in forgiving others, and yet have no hope of forgiveness by a higher power. Jains believe that liberation is by personal effort, not an appeal for divine intervention.
If atheism is defined as disbelief in the existence of a god, then Jainism cannot be labeled as atheistic, as it not only believes in the existence of gods but also of the soul which can attain godhood. As Paul Dundas puts it – "while Jainism is, as we have seen, atheist in a limited sense of rejection of both the existence of a creator God and the possibility of intervention of such a being in human affairs, it nonetheless must be regarded as a theist religion in the more profound sense that it accepts the existence of divine principle, the paramātmā i.e. God, existing in potential state within all beings".
However the usage of the word "paramatma" is not entirely accurate as there is no concept of "param-atma" or supreme atma in Jainism. Each atma has its own unique identity and remains independent even after achieving moksha, unlike certain Hindu schools of thought where the atma merges with paramatma on achieving mukti.
The usage of the English word "God" is itself problematic and inappropriate in the context of Jainism as there is no concept of such entity - and no positive, active denial of such entity - in Jain philosophy. A siddha is an atma which has achieved moksha and the closest approximation in English would be "liberated soul".
See also
Cosmogony
Creation myth
Creationism
Hindu views on evolution
History of creationism
Notes
a. Self is not an effect as it is not produced by anything nor it is a cause as it does not produce anything. Samayasāra Gāthā 10.310 See Nayanara (2005b)
b. See Vācaka Umāsvāti's description of the Universe in his Tattvārthasutra and Ācārya Hemacandras description of the universe in Yogaśāstra “…Picture a man standing with his arms akimbo – This is how Jainas believe the Loka looks like. 4.103–6
c. See Kārtikeyānupreksā, 478 – Dharma is nothing but the real nature of an object. Just as the nature of fire is to burn and the nature of water is to produce a cooling effect, in the same manner, the essential nature of the soul is to seek self-realization and spiritual elevation .
d. Vamdittu savvasiddhe .... [Samaysara 1.1] See Samaysara of Ācārya Kundakunda, Tr. By Prof A. Chakaravarti, page 1 of main text – "Jainism recognizes plurality of selves not only in world of samsara but also in the liberated state or siddhahood which is a sort of a divine republic of perfect souls where each soul retains its individual personality and does not empty its contents into the cauldron of the absolute as is maintained by other systems of philosophy"
e. See Tattvārthasūtra 1.1 "samyagdarśanajñānacāritrānimoksamārgah" – Translated as "Rational Perception, Rational Knowledge and Rational Conduct constitutes the path to liberation."
f. See Sarvārthasiddhi "Moksa mārgasya netāram bhettāram karmabhubrutām jnātāram vishva tatvānām vande tadguna labhdhaye." Translated as "We pray to those who have led the path to salvation,who have destroyed the mountains of karma, and who know the reality of the universe. We pray to them to acquire their attributes."
g. See Samayasāra 3.99–100] "If soul were indeed the producer of alien substances, then he must be of that nature; as it is not so, he cannot be their creator"
h. See Hemcandrācārya, Yogaśāstra. "eik utpadyate janturek eiv vipadyate" Translated as "each one is born alone and dies alone."
i. "Nishpaadito Na Kenaapi Na Dhritah Kenachichch Sah Swayamsiddho Niradhaaro Gagane Kimtvavasthitah". see Ācārya Hemacandra, (1989). In: S. Bothara (ed.),Dr. A. S. Gopani (Tr.), Yogaśāstra(Sanskrit). Jaipur: Prakrit Bharti Academy. Sutra 4.106
j. This quote from Mahapurana finds a mention in “Salters Horners Advanced Physics” by Jonathan Allda, which contains various scientific theories on Universe. The author quotes this extract from Mahapurana to show that Cosmology (the study of Universe) is an ancient science, which today is still probing some of the deepest questions about the origins and future of the Universe. (P 268)
Citations
References
Creationism
God in Jainism
Jain cosmology
Jainism and science | Jainism and non-creationism | [
"Biology"
] | 4,323 | [
"Creationism",
"Biology theories",
"Obsolete biology theories"
] |
11,797,347 | https://en.wikipedia.org/wiki/Electrochemical%20equivalent | In chemistry, the electrochemical equivalent (Eq or Z) of a chemical element is the mass of that element (in grams) transported by a specific quantity of electricity, usually expressed in grams per coulomb of electric charge. The electrochemical equivalent of an element is measured with a voltameter.
Definition
The electrochemical equivalent of a substance is the mass of the substance deposited to one of the electrodes when a current of 1 ampere is passed for 1 second, i.e. a quantity of electricity of one coulomb is passed.
The formula for finding the electrochemical equivalent is as follows:
Z = m / Q
where m is the mass of the substance deposited and Q is the charge passed. Since Q = It, where I is the current applied and t is the time, we also have
Z = m / (It)
An alternative formula for finding the electrochemical equivalent is as follows:
Z = E / F
where E is the equivalent weight of the substance and F is the Faraday constant.
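As a numerical illustration of the formulas above, the following sketch computes Z for copper in a Cu²⁺ solution and the mass deposited by a given current; the current and time are arbitrary example values, and the atomic mass is rounded.

```python
# Worked example of Z = E / F and m = Z * I * t (illustrative values only).

FARADAY = 96485.0          # Faraday constant, C/mol

atomic_mass_cu = 63.55     # g/mol (rounded)
valency_cu = 2             # Cu2+ ions accept two electrons
equivalent_weight = atomic_mass_cu / valency_cu   # E, in grams

Z = equivalent_weight / FARADAY                   # electrochemical equivalent, g/C
print(f"Z(Cu) ≈ {Z:.3e} g/C")                     # about 3.29e-4 g/C

current = 2.0              # amperes (example value)
time = 600.0               # seconds (example value)
mass = Z * current * time  # grams deposited, from m = Z * I * t
print(f"mass deposited ≈ {mass:.3f} g")           # about 0.395 g
```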
Eq values of some elements in kg/C
References
Physical chemistry
Units of chemical measurement | Electrochemical equivalent | [
"Physics",
"Chemistry",
"Mathematics"
] | 191 | [
"Applied and interdisciplinary physics",
"Quantity",
"Chemical quantities",
"nan",
"Units of chemical measurement",
"Physical chemistry",
"Physical chemistry stubs",
"Units of measurement"
] |
11,797,534 | https://en.wikipedia.org/wiki/Density%20dependence | In population ecology, density-dependent processes occur when population growth rates are regulated by the density of a population. This article will focus on density dependence in the context of macroparasite life cycles.
Positive density-dependence
Positive density-dependence, density-dependent facilitation, or the Allee effect describes a situation in which population growth is facilitated by increased population density.
Examples
In dioecious (separate sex) obligatory parasites, mated female worms are required to complete a transmission cycle. At low parasite densities, the probability of a female worm encountering a male worm and forming a mating pair can become so low that reproduction is restricted due to single sex infections. At higher parasite densities, the probability of mating pairs forming and successful reproduction increases. This has been observed in the population dynamics of Schistosomes.
Positive density-dependence processes occur in macroparasite life cycles that rely on vectors with a cibarial armature, such as Anopheles or Culex mosquitoes. For Wuchereria bancrofti, a filarial nematode, well-developed cibarial armatures in vectors can damage ingested microfilariae and impede the development of infective L3 larvae. At low microfilariae densities, most microfilariae can be ruptured by teeth, preventing successful development of infective L3 larvae. As more larvae are ingested, the ones that become entangled in the teeth may protect the remaining larvae, which are then left undamaged during ingestion.
Positive density-dependence processes may also occur in macroparasite infections that lead to immunosuppression. Onchocerca volvulus infection promotes immunosuppressive processes within the human host that suppress immunity against incoming infective L3 larvae. This suppression of anti-parasite immunity causes parasite establishment rates to increase with higher parasite burden.
Negative density-dependence
Negative density-dependence, or density-dependent restriction, describes a situation in which population growth is curtailed by crowding, predators and competition.
In cell biology, it describes the reduction in cell division. When a cell population reaches a certain density, the amount of required growth factors and nutrients available to each cell becomes insufficient to allow continued cell growth.
This is also true for other organisms, because increased density means increased intraspecific competition. Greater competition means an individual makes a smaller contribution to the next generation, i.e. fewer offspring.
Density-dependent mortality can be overcompensating, undercompensating or exactly compensating.
There also exists density-independent inhibition, where other factors such as weather or environmental conditions and disturbances may affect a population's carrying capacity.
Crowding and competition are examples of density-dependent variables.
Examples
Density-dependent fecundity exists, where the birth rate falls as competition increases. In the context of gastrointestinal nematodes, the weight of female Ascaris lumbricoides and its rates of egg production decrease as host infection intensity increases. Thus, the per-capita contribution of each worm to transmission decreases as a function of infection intensity.
Parasite-induced vector mortality is a form of negative density-dependence. The Onchocerciasis life cycle involves transmission via a black fly vector. In this life-cycle, the life expectancy of the black fly vector decreases as the worm load ingested by the vector increases. Because O. volvulus microfilariae require at least seven days to mature into infective L3 larvae in the black fly, the worm load is restricted to levels that allow the black fly to survive for long enough to pass infective L3 larvae onto humans.
In macroparasite life cycles
In macroparasite life cycles, density-dependent processes can influence parasite fecundity, survival, and establishment. Density-dependent processes can act across multiple points of the macroparasite life cycle. For filarial worms, density-dependent processes can act at the host/vector interface or within the host/vector life-cycle stages. At the host/vector interface, density-dependence may influence the input of L3 larvae into the host's skin and the ingestion of microfilariae by the vector. Within the life-cycle stages taking place in the vector, density-dependence may influence the development of L3 larvae in vectors and vector life expectancy. Within the life-cycle stages taking place in the host, density-dependence may influence the development of microfilariae and host life expectancy.
In reality, combinations of negative (restriction) and positive (facilitation) density-dependent processes occur in the life cycles of parasites. However, the extent to which one process predominates over the other vary widely according to the parasite, vector, and host involved. This is illustrated by the W. bancrofti life cycle. In Culex mosquitoes, which lack a well-developed cibarial armature, restriction processes predominate. Thus, the number of L3 larvae per mosquito declines as the number of ingested microfilariae increases. Conversely, in Aedes and Anopheles mosquitoes, which have well-developed cibarial armatures, facilitation processes predominate. Consequently, the number of L3 larvae per mosquito increases as the number of ingested microfilariae increases.
Implications for parasite persistence and control
Negative density-dependent (restriction) processes contribute to the resilience of macroparasite populations. At high parasite populations, restriction processes tend to restrict population growth rates and contribute to the stability of these populations. Interventions that lead to a reduction in parasite populations will cause a relaxation of density-dependent restrictions, increasing per-capita rates of reproduction or survival, thereby contributing to population persistence and resilience.
Contrariwise, positive density-dependent or facilitation processes make elimination of a parasite population more likely. Facilitation processes cause the reproductive success of the parasite to decrease with lower worm burden. Thus, control measures that reduce parasite burden will automatically reduce per-capita reproductive success and increase the likelihood of elimination when facilitation processes predominate.
Extinction threshold
The extinction threshold refers to minimum parasite density level for the parasite to persist in a population. Interventions that reduce parasite density to a level below this threshold will ultimately lead to the extinction of that parasite in that population. Facilitation processes increase the extinction threshold, making it easier to achieve using parasite control interventions. Conversely, restriction processes complicates control measures by decreasing the extinction threshold.
Implications for parasite distribution
Anderson and Gordon (1982) propose that the distribution of macroparasites in a host population is regulated by a combination of positive and negative density-dependent processes. In overdispersed distributions, a small proportion of hosts harbour most of the parasite population. Positive density-dependent processes contribute to overdispersion of parasite populations, whereas negative density-dependent processes contribute to underdispersion of parasite populations. As mean parasite burden increases, negative density-dependent processes become more prominent and the distribution of the parasite population tends to become less overdispersed.
Consequently, interventions that lead to a reduction in parasite burden will tend to cause the parasite distribution to become overdispersed. For instance, time-series data for Onchocerciasis infection demonstrates that 10 years of vector control lead to reduced parasite burden with a more overdispersed distribution.
See also
Frequency-dependent selection
Plant density
References
External links
Density dependence
Eradicability of filarial diseases
Cellular processes
Epidemiology
Evolutionary biology concepts
Population dynamics | Density dependence | [
"Biology",
"Environmental_science"
] | 1,567 | [
"Evolutionary biology concepts",
"Cellular processes",
"Epidemiology",
"Cell cycle",
"Environmental social science"
] |
11,797,554 | https://en.wikipedia.org/wiki/Sclerotinia%20spermophila | Sclerotinia spermophila is a plant pathogen, infecting red clover, but can also be considered an animal pathogen.
References
External links
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Sclerotiniaceae
Fungi described in 1948
Fungus species | Sclerotinia spermophila | [
"Biology"
] | 54 | [
"Fungi",
"Fungus species"
] |
11,797,565 | https://en.wikipedia.org/wiki/Urophlyctis%20trifolii | Urophlyctis trifolii is a plant pathogen infecting red clover.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Blastocladiomycota
Fungus species | Urophlyctis trifolii | [
"Biology"
] | 48 | [
"Fungus stubs",
"Fungi",
"Fungus species"
] |
11,797,614 | https://en.wikipedia.org/wiki/Coryneum%20rhododendri | Coryneum rhododendri is a plant pathogen.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Diaporthales
Fungus species
Fungi described in 1832 | Coryneum rhododendri | [
"Biology"
] | 44 | [
"Fungi",
"Fungus species"
] |
11,797,761 | https://en.wikipedia.org/wiki/Ceratobasidium%20setariae | Ceratobasidium setariae is a fungal plant pathogen.
References
Fungal plant pathogens and diseases
Cantharellales
Fungi described in 1986
Fungus species | Ceratobasidium setariae | [
"Biology"
] | 33 | [
"Fungi",
"Fungus species"
] |
11,797,782 | https://en.wikipedia.org/wiki/Cercospora%20puderii | Cercospora puderii is a fungal plant pathogen.
References
puderii
Fungal plant pathogens and diseases
Fungus species | Cercospora puderii | [
"Biology"
] | 27 | [
"Fungi",
"Fungus species"
] |
11,797,800 | https://en.wikipedia.org/wiki/Phragmidium%20rosae-pimpinellifoliae | Phragmidium rosae-pimpinellifoliae is a species of fungus in the family Phragmidiaceae. A plant pathogen, it causes a rust on the stem, leaves, petioles and fruits of burnet rose and related hybrids. The fungus is found in Europe and North America.
References
Fungal plant pathogens and diseases
Rose diseases
Fungi described in 1873
Fungi of Europe
Fungi of North America
Fungus species | Phragmidium rosae-pimpinellifoliae | [
"Biology"
] | 88 | [
"Fungi",
"Fungus species"
] |
11,797,834 | https://en.wikipedia.org/wiki/Urocystis%20occulta | Urocystis occulta is a smut fungus which attacks the leaves and stalks of rye (Secale cereale).
It is found in Australia, Europe, and North America. The fungus was first described by German botanist Karl Friedrich Wilhelm Wallroth under the name Erysiphe occulta in 1833, before being renamed as Urocystis occulta in 1857 by German botanist and mycologist Gottlob Ludwig Rabenhorst.
References
Fungi described in 1833
Fungi of Australia
Fungi of Europe
Fungi of North America
Fungal plant pathogens and diseases
Rye diseases
Ustilaginomycotina
Taxa named by Karl Friedrich Wilhelm Wallroth
Fungus species | Urocystis occulta | [
"Biology"
] | 134 | [
"Fungus stubs",
"Fungi",
"Fungus species"
] |
11,797,854 | https://en.wikipedia.org/wiki/Double%20complex | In mathematics, specifically homological algebra, a double complex is a generalization of a chain complex where instead of having a $\mathbb{Z}$-grading, the objects in the bicomplex have a $\mathbb{Z} \times \mathbb{Z}$-grading. The most general definition of a double complex, or a bicomplex, is given with objects in an additive category $\mathcal{A}$. A bicomplex is a sequence of objects $C_{p,q}$ with two differentials, the horizontal differential $d^h_{p,q}\colon C_{p,q} \to C_{p-1,q}$ and the vertical differential $d^v_{p,q}\colon C_{p,q} \to C_{p,q-1}$, which have the compatibility relation $d^h_{p,q-1} \circ d^v_{p,q} = d^v_{p-1,q} \circ d^h_{p,q}$. Hence a double complex is a commutative diagram in which the rows and columns form chain complexes.
Some authors instead require that the squares anticommute, that is, $d^h_{p,q-1} \circ d^v_{p,q} + d^v_{p-1,q} \circ d^h_{p,q} = 0$.
This eases the definition of total complexes. By setting $\tilde{d}^v_{p,q} = (-1)^p\, d^v_{p,q}$, we can switch between having commutativity and anticommutativity. If the commutative definition is used, this alternating sign will have to show up in the definition of the total complex.
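For reference, the total complex that this sign convention feeds into can be written out explicitly. The display below is the usual textbook formula, supplied here as a standard convention rather than quoted from this article:

$$\operatorname{Tot}(C)_n \;=\; \bigoplus_{p+q=n} C_{p,q}, \qquad d\big|_{C_{p,q}} \;=\; d^h_{p,q} + (-1)^p\, d^v_{p,q},$$

so that $d \circ d = 0$ follows from $d^h \circ d^h = 0$, $d^v \circ d^v = 0$, and the commutativity of the squares.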
Examples
There are many natural examples of bicomplexes. In particular, for a Lie groupoid, there is a bicomplex associated to it which can be used to construct its de Rham complex.
Another common example of a bicomplex arises in Hodge theory, where on an almost complex manifold there is a bicomplex of differential forms whose components are complex-linear or anti-linear. For example, if $z_1, \ldots, z_n$ are the complex coordinates of $\mathbb{C}^n$ and $\bar{z}_1, \ldots, \bar{z}_n$ are the complex conjugates of these coordinates, a $(p,q)$-form is of the form $f \, dz_{i_1} \wedge \cdots \wedge dz_{i_p} \wedge d\bar{z}_{j_1} \wedge \cdots \wedge d\bar{z}_{j_q}$.
See also
Chain complex
Derived algebraic geometry
Additional applications
https://web.archive.org/web/20210708183754/http://www.dma.unifi.it/~vezzosi/papers/tou.pdf
Homological algebra
Additive categories | Double complex | [
"Mathematics"
] | 360 | [
"Mathematical structures",
"Additive categories",
"Fields of abstract algebra",
"Category theory",
"Homological algebra"
] |
11,797,855 | https://en.wikipedia.org/wiki/Septoria%20secalis | Septoria secalis is a fungal plant pathogen infecting rye; the disease it causes is known as Septoria leaf blotch (SLB).
Morphology & Biology
Septoria secalis causes a common disease that mainly attacks rye leaves. Small spots appear between leaf veins, elongate, then turn yellow-brown and become pale. The disease appears most often on seedling leaves during the autumn, but also affects adult plants.
Economic impact
Severe attacks of Septoria secalis can result in crop yield losses of between 10% and 40%. Common control measures include crop rotation, the ploughing in of plant debris, and fungicidal treatment of affected plants. Yan and Hunt (2001) found that in most years SLB is the primary yield-loss factor in Ontario, Canada. It is also a pathogen of concern in Europe.
References
Fungal plant pathogens and diseases
Rye diseases
secalis
Fungus species | Septoria secalis | [
"Biology"
] | 172 | [
"Fungi",
"Fungus species"
] |
11,797,882 | https://en.wikipedia.org/wiki/Mycosphaerella%20recutita | Mycosphaerella recutita is a fungal plant pathogen.
In Iceland, it is rather common on withered Elymus caninus, Festuca rubra and Hierochloe odorata.
See also
List of Mycosphaerella species
References
Fungal plant pathogens and diseases
recutita
Fungi described in 1823
Taxa named by Elias Magnus Fries
Fungus species | Mycosphaerella recutita | [
"Biology"
] | 75 | [
"Fungi",
"Fungus species"
] |
11,797,932 | https://en.wikipedia.org/wiki/Puccinia%20recondita | Puccinia recondita is a fungus species and plant pathogen belonging to the order of Pucciniales and family Pucciniaceae.
Distribution
This fungal species occurs worldwide.
Biology
It is a heteroecious, macrocyclic fungus with five distinct spore stages: teliospores, basidiospores, and urediniospores on cereal hosts, and pycniospores and aeciospores on the alternative plant hosts.
Host
These fungi are endoparasitic plant pathogens mainly infecting species in the families Balsaminaceae, Boraginaceae, Hydrophyllaceae, Ranunculaceae and Poaceae (especially wheat and rye). Puccinia recondita has also been found to cause 'brown rust' in wheat and triticale (a hybrid of wheat and rye). Symptoms of infestation are yellowish to brown spots and pustules on the leaf surfaces of the host plants. Brown rust is the most widespread and prevalent disease of wheat in South America, and is the most important wheat disease in Mexico.
It was originally found on the leaves of a species of Secale (grass) in France.
Subspecies and forms
Puccinia recondita f.sp. secalis - causes brown rust of rye.
In Iceland, Puccinia recondita ssp. borealis infects Agrostis canina, Anthoxanthum odoratum, Calamagrostis stricta, Hierochloe odorata and Thalictrum alpinum.
Gallery
See also
List of Puccinia species
Bibliography
George Baker Cummins: The Rust Fungi of Cereals, Grasses and Bamboos. Springer, Berlin 1971, ISBN 3-540-05336-0.
References
External links
Index Species Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Rye diseases
Wheat diseases
recondita
Fungi described in 1857
Fungus species | Puccinia recondita | [
"Biology"
] | 399 | [
"Fungi",
"Fungus species"
] |
11,797,973 | https://en.wikipedia.org/wiki/Tilletia%20laevis | Tilletia laevis is a plant pathogen that causes bunt on wheat.
It was used as a biological weapon by Iraq against Iran during the Iran–Iraq War in the 1980s.
References
External links
Index Fungorum
USDA ARS Fungal Database
Ustilaginomycotina
Fungal plant pathogens and diseases
Wheat diseases
Fungi described in 1873
Fungus species | Tilletia laevis | [
"Biology"
] | 72 | [
"Fungi",
"Fungus species"
] |
11,798,034 | https://en.wikipedia.org/wiki/Phakopsora%20pachyrhizi | Phakopsora pachyrhizi is a plant pathogen. It causes Asian soybean rust.
Hosts
Phakopsora pachyrhizi is an obligate biotrophic pathogen that causes Asian soybean rust. Phakopsora pachyrhizi is able to affect up to 31 different plant species that belong to 17 different genera under natural conditions. Experiments in laboratories were able to use P. pachyrhizi to infect 60 more plant species. The main hosts are Glycine max (soybean), Glycine soja (wild soybean), and Pachyrhizus erosus (Jicama).
Symptoms
The disease forms tan to dark-brown or reddish-brown lesions with one to many prominent, globe-like orifices, from which urediniospores are produced. In the initial stages, small yellow spots form on the leaf surface; these spots may be easier to observe with the aid of a light source. As the disease progresses, lesions form on the leaves, stems, pods, and petioles. Lesions are initially small, turning from gray to tan or brown as they enlarge and the disease becomes more severe. Volcano-shaped marks soon appear within the lesions.
Disease cycle
Phakopsora pachyrhizi is a fungus with wind-dispersed spores called urediniospores. These spores are unusual in that they do not need open stomata or natural openings in the leaves: urediniospores are able to penetrate the leaf surface directly. Pustules are visible after 10 days and can produce spores for three weeks. The disease reaches its climax when the crop begins flowering. The cycle of the pathogen continues until the crop is defoliated or until the environment becomes unfavorable to the pathogen.
The Asian soybean rust is a polycyclic disease: within the disease cycle, the asexual urediniospores keep infecting the same plant. Teliospores (sexual spores) are the survival spores that overwinter in the soil. Basidiospores are the spores that are able to contaminate an alternative host. The urediniospores need a minimum of six hours to infect leaves at a favorable temperature (between ).
Environment
The favorable conditions for the disease to progress are related to temperature, humidity, and wind. The appropriate temperature for the pathogen to be active is (more efficient between ). The humidity must be high, about 90% or more, for more than 12 hours. A significant amount of wind is also important for the pathogen to move from one plant to the other. Currently, in the United States, infected plants can be found in Florida, Georgia, Louisiana, and Texas.
Risk factors
Uredospores are wind-blown and are produced abundantly on the infected tissue of soybeans or other legume hosts.
Management
The disease is often controlled using the fungicides oxycarboxin, triforine, and triclopyr.
Phakopsora pachyrhizi is a pathogen that colonizes its host quickly; a plant can be severely infected in as little as 10 days. This makes the disease difficult to control, as it not only spreads quickly between plants but also progresses rapidly within them. It is therefore important to implement control techniques as soon as possible.
Genetic resistance
The disease may be controlled by using genetic resistance, but this has not exhibited great results and has not been durable because the soybean genome almost entirely lacks potential genes for ASR resistance. A gene from Cajanus cajan has shown promise when transferred to soybean. This method could be expanded to a wide array of genes in the entire family; as with native genes these are best deployed in combination due to P. pachyrhizi's ability to rapidly overcome resistance.
Chemical control
A second form of management that can work is using fungicides, but this is only efficient at early stages of the disease. The disease spreads fast and it is complicated to control after certain stages, so it is important to act with care around contaminated plants, as the spores can be attached to clothing and other materials and infect other plants.
Research
Genetic modification to dissect infection factors, including knockout of effectors, has proved difficult for this pathogen. Host-induced gene silencing may be the better tool for this purpose.
References
External links
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Soybean diseases
Pucciniales
Fungi described in 1914
Taxa named by Hans Sydow
Taxa named by Paul Sydow
Fungus species | Phakopsora pachyrhizi | [
"Biology"
] | 984 | [
"Fungi",
"Fungus species"
] |
11,798,042 | https://en.wikipedia.org/wiki/Cercospora%20kikuchii | Cercospora kikuchii is a fungal plant pathogen that affects soybeans. It results in both the Cercospora leaf blight and purple seed stain diseases on soybean and is found almost worldwide. C. kikuchii produces the toxin cercosporin, as do a number of other Cercospora species.
Symptoms
Seed: The disease on the host soybean can be identified by cercosporin, a light-activated red perylenequinone toxin with a molecular weight of 534. When exposed to light, cercosporin causes oxidative damage to the host cells' membranes, lipids, and proteins, resulting in cell death (Newman 2016). Purple seed stain caused by Cercospora can be recognized by several features. Infected seeds have spots that are usually pink or purple, ranging from minuscule to covering the whole seed (Li 2019); the discoloration spreads from the hilum area. Germination may be delayed and seedling size reduced when more than 50% of the seed is covered. Infected seeds may also have lower oil and higher protein content.
Cotyledons: Cotyledons will shrivel and fall off.
Stems: Stems will have deep red lesions about ¼ inch in size.
Leaves: Young leaves will have reddish-purple lesions on both upper and lower surfaces, ranging from minuscule to ½ inch in size. The leaf itself becomes purple, described as a dark bronzing. Severely infected upper leaves will fall off (Giesler). If this defoliation occurs during the period of seed fill, significant yield losses have been recorded (Schneider et al. 2009).
Environment
C. kikuchii has a large range of habitable environments, but it is most prolific, and the greatest threat, in top soybean-producing countries such as Brazil, China, the United States, and India (Soares et al. 2015). The disease thrives in conditions of high temperature and humidity: “Sporulation increases as temperature rises above 80 degrees Fahrenheit” (26.7 degrees Celsius) (Jeschke 2020). More wind and water (rain, river, ground water) result in wider spore distribution on plant tissue. The disease usually occurs later in the season, around August. It survives on crop residue for four months to four years (Ward, 2015), which means it thrives in no-till fields repeatedly planted to soybean (Schapaugh 2014).
Management
Extended crop rotation and residue incorporation reduces inoculum.
Seed treatments and foliar fungicides applied during growth stages R3–R5 can reduce blight incidence and severity.
Farmers can blend infected beans with clean beans and still sell them (Successful Farming Staff 2018). In 2019, Dr. Anne Dorrance also suggested planting varieties with resistance to Cercospora, planting disease-free seed, using proper plant spacing to prevent spore spread, and harvesting during dry periods.
References
2. Groenewald JZ, Nakashima C, Nishikawa J, Shin HD, Park JH, Jama AN, Groenewald M, Braun U, Crous PW (2013) Species concepts in Cercospora: spotting the weeds among the roses. Stud Mycol. 2013 Jun 30; 75(1): 115–170. Published online 2012 Oct 1. doi:10.3114/sim0012
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0222673
https://www.pioneer.com/us/agronomy/cercospora_leaf_blight.html
https://cropwatch.unl.edu/plantdisease/soybean/purple-seed-stain
https://www.agriculture.com/crops/some-soybean-fields-have-a-big-purple-problem
https://www.ilsoyadvisor.com/on-farm/ilsoyadvisor/disease-management-purple-seed-stain-soybeans
https://u.osu.edu/capstone/2019/04/03/cercospora-leaf-blight-and-purple-seed-stain/
kikuchii
Fungi described in 1925
Fungal plant pathogens and diseases
Soybean diseases
Fungus species | Cercospora kikuchii | [
"Biology"
] | 995 | [
"Fungi",
"Fungus species"
] |
11,798,049 | https://en.wikipedia.org/wiki/Microsphaera%20diffusa | Microsphaera diffusa is a plant pathogen. M. diffusa infections on soybeans are referred to as powdery mildew.
Importance:
Powdery mildew of soybean is an important disease that tends to cause epidemics about every 10–15 years in Wisconsin; the first epidemic there was observed in 1975 and several have occurred since. When 82% of the soybean leaf area is covered by M. diffusa, photosynthetic and transpiration rates are less than half those of healthy soybeans, reducing yield. Different studies have found different amounts of yield reduction due to powdery mildew: in Illinois, measured yield losses ranged up to 14 percent; in Iowa studies, losses were estimated at up to 10 bushels per acre; and in Wisconsin, the yield loss was up to 5 bushels per acre. Yield loss due to powdery mildew is also greater for soybeans planted late for a region than for early-planted soybeans.
Environment:
Temperature plays an important role in powdery mildew development. Powdery mildew favors cooler temperatures (65–77 degrees F), and temperatures above 30 degrees C appear to constrain disease development. Rainfall does not appear to affect the disease, but a shorter leaf-wetness duration appears to be a driver of the disease. Additionally, low relative humidity is required for disease development.
Management:
Variety selection is a tool that can be used to help combat powdery mildew. It's not entirely effective because no variety of soybean has complete resistance to powdery mildew, but there are definitely some varieties more susceptible than others. Resistance affects initial inoculation of the plant. The currently effective management tools are fungicides. They can be sprayed once powdery mildew is detected and they kill the spores. This affects the dispersal and secondary inoculation of the plant. Some examples of fungicides include Topsin M, Quadris, and Headline, with the last two being less effective. Another management practice is planting date. Early-planted soybeans tend to show less severity of powdery mildew than late-planted soybeans.
References
External links
USDA ARS Fungal Database
Fungal plant pathogens and diseases
diffusa
Soybean diseases
Fungus species | Microsphaera diffusa | [
"Biology"
] | 495 | [
"Fungi",
"Fungus species"
] |
11,798,095 | https://en.wikipedia.org/wiki/Phyllosticta%20sojaecola | Phyllosticta sojaecola is a plant pathogen infecting soybean.
Hosts and symptoms
It causes Phyllosticta leaf spot on soybeans, forming circular lesions with reddish-brown borders and light brown centers. The center of the lesion drops out over time, and visible pycnidia can be seen in older lesions. A common consequence of infection is reduced yield from the damaged leaves.
Disease cycle
Phyllosticta sojicola and all other members of the genus Phyllosticta are ascomycete fungi, with pathogenic species forming spots on leaves and some fruit. Phyllosticta sojicola emerges from infected plant debris in spring and spreads by wind and rain-splash onto healthy plants. While the infection method of Phyllosticta sojicola is unknown, other Phyllosticta species are known to infect leaves via an appressorium in a process that requires adequate moisture. Within mature lesions, the fungus forms pycnidia to overwinter and repeat the cycle. Phyllosticta sojicola can also survive on seeds and infect new fields through infected seed.
Environment and management
Phyllosticta sojicola prefers cool, moist conditions, as pycnidia require moist conditions to germinate. The pathogen can be managed by rotating to non-hosts and using tillage to remove infected residue. As infected seed can transmit the pathogen, seed testing is recommended to prevent introduction of disease.
See also
List of soybean diseases
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Peanut diseases
Soybean diseases
sojaecola
Fungi described in 1900
Fungus species | Phyllosticta sojaecola | [
"Biology"
] | 354 | [
"Fungi",
"Fungus species"
] |
11,798,131 | https://en.wikipedia.org/wiki/Septoria%20glycines | Septoria glycines is a fungal plant pathogen that causes leaf spot on soybean, a disease that is also known as brown spot. The disease leads to early defoliation of the plant, but does not normally cause severe reductions in yield. The fungus overwinters on infected soybean straw and is spread by wind dispersal or rain splash.
References
External links
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Soybean diseases
glycines
Fungi described in 1915
Fungus species | Septoria glycines | [
"Biology"
] | 103 | [
"Fungi",
"Fungus species"
] |
11,798,160 | https://en.wikipedia.org/wiki/Phialophora%20gregata | Phialophora gregata is a Deuteromycete fungus that is a plant pathogen which causes the disease commonly known as brown stem rot of soybean. P. gregata does not produce survival structures, but has the ability to overwinter as mycelium in decaying soybean residue.
Two strains of the fungus exist; genotype A causes both foliar and stem symptoms, while genotype B causes only stem symptoms. Common leaf symptoms are browning, chlorosis, and necrosis. Foliar symptoms often seen with genotype A are chlorosis, defoliation, and wilting.
Brown stem rot (BSR) of soybean is a common fungal disease in soybeans grown in the upper Midwest and Canada. BSR commonly reduces the yield of susceptible varieties by 10–30%, or up to 10 bu./acre in severe cases. It decreases both the number of beans per pod and bean size as a result of wilting, premature defoliation and lodging. In addition to decreasing yield, plants infected by BSR can be difficult to harvest due to lodging. University of Wisconsin Extension field crop pathologist Damon Smith ranks brown stem rot as the third most important soybean disease in Wisconsin. Brown stem rot can affect most susceptible soybean varieties in the north-central states, especially during cooler late-summer months.
There are many ways to manage Phialophora gregata. The most effective form of management is disease resistance, but crop rotation, tillage, SCN management, and changing the pH of the soil can also be effective.
Symptoms and signs
Phialophora gregata’s infection of a soybean plant is accompanied by browning of the plant’s vascular and pith tissues. The plant often exhibits chlorosis and necrosis, as well as leaf browning. Wilting and defoliation are also known to occur.
Signs of infection often go unnoticed until the reproductive stages of the plant's life cycle. The pathogen can be seen earlier by cutting open the stem, but symptoms generally do not become apparent until after soybean pod formation.
Depending on which strain infects the plant, and what the environmental conditions are, the effect is more or less potent. Genotype A causes browning of stems as well as foliar symptoms such as interveinal chlorosis, defoliation and wilting. Symptomatic leaves have a shriveled appearance, but remain attached to the stem. Genotype B causes only browning of stems.
Secondary symptoms of brown stem rot are stunting, premature death, decrease in seed number, reduced pod set, and decrease in seed size.
Disease from P. gregata is easily confused with Fusarium wilt, due to the similar vascular symptoms observed in both. The diseases could be differentiated through growth on isolation media. The two diseases can be further distinguished by splitting the stems. A split stem with Fusarium infection would have tan or light brown discoloration in the cortex and a normal white pith, while a split stem with P. gregata would have a discolored, reddish brown pith. Root rot and blue masses of spores are symptoms only caused by Fusarium.
Environment
The fungal pathogen Phialophora gregata, which causes brown stem rot (BSR) of soybeans, prefers conditions that are also optimal for soybean growth. Later-planted soybeans are more susceptible to BSR because cooler temperatures during early pod-forming stages make the plant most vulnerable. Early-season wet conditions can also favor early pathogen growth, often causing more dramatic effects later in the season. Foliar symptoms of BSR are favored when conditions are cool during flowering and pod formation. The pathogen proliferates in stem tissues when soil moisture is high and air temperatures remain near 60–75 degrees Fahrenheit; fungal growth of Phialophora gregata shuts down above 80 degrees Fahrenheit. Low water availability late in the season can also dramatically increase disease severity. As the disease is soilborne, it is not uncommon to find clusters of diseased plants together. Additionally, the prevalence of soybean cyst nematode (SCN) can affect the growth of Phialophora gregata: greater SCN populations can greatly increase the likelihood and impact of brown stem rot.
Disease cycle
The Phialophora gregata fungus is a deuteromycete with a monocyclic life cycle. There are two strains of Phialophora gregata, referred to as genotype A and genotype B. Genotype A causes both foliar and stem symptoms, while genotype B causes only stem symptoms.
The Phialophora gregata fungus produces no survival structures, but can overwinter as mycelium in decaying soybean residue. During overwintering, conidia are produced; these conidia are the inoculum for new plants in the spring. The amount of asexual reproduction that occurs during the winter affects the spring inoculum levels. Infection initially occurs in the roots of young soybean plants, and then spreads to the stem (and foliage, depending on the strain). Generally, early and severe foliar symptoms indicate that the yield losses will be heavier.
Economic significance
Brown stem rot of soybeans is a source of major crop loss. It is not uncommon for soybeans grown in management systems prone to brown stem rot to have yield losses of around 10%, with a maximum potential loss of 30%. It has been listed as the third most important disease of soybeans in Wisconsin. A study of samples from 2006 and 2007 found brown stem rot of soybean in nearly half of Iowa's counties.
Management
Brown stem rot can be managed using several techniques employed by the grower. Common techniques include crop rotation, tillage, variety selection, and soybean cyst nematode management. There are currently no seed treatments or fungicides available to prevent or protect against BSR.
Crop rotation
The easiest and most effective way to protect against Brown Stem Rot in soybeans is crop rotation. Phialophora gregata has no overwintering structures but instead lives in plant debris. Due to this, waiting until plant debris has decomposed (at least one full growing season) is the most effective way to control this disease. In cases of severe infection 2–3 years without planting soybeans in infected fields may be necessary.
Disease resistance
Given the presence of Phialophora gregata on much of the nation's soybean acres, research and development have gone into selecting soybean varieties that have greater resistance to BSR, although not immunity. Brown stem rot can produce yield loss even without obvious symptoms. Varieties with higher BSR tolerance ratings can be selected when choosing what to grow, but genetic resistance should not be relied upon when expected BSR pressure is high. Additionally, choosing varieties with higher ratings for tolerance to soybean cyst nematode can be effective.
Tillage
More decomposition of soybean residue results in less pathogen, as the fungus can only survive on soybean residue. Therefore, tillage can be effective. Once the soybean residue has decomposed, the survival of P. gregata is drastically decreased. It is common for farmers to practice both crop rotations and tillage in a cyclic fashion. This is done by conducting little to no tillage when a soybean crop is planted after corn, followed by intensive tillage when a corn crop is planted after soybean.
Management of soybean cyst nematode (Heterodera glycines)
P. gregata is often found to be more severe in the presence of SCN; soybean plants showing resistance to SCN have been found to produce greater yields. Soybean plants with resistance to both SCN and genotype A of P. gregata can grow normally, even when both pathogens are present. Given the correlation between SCN populations and disease impact of BSR it is important to control SCN. SCN can be controlled using rotation to non-susceptible crops, seed treatments, variety selection and nematicides.
Monitoring soil pH
Maintaining a soil pH near 6.5-7.5 can also help protect against BSR. There is evidence of significantly lower disease severity with a near neutral soil pH, although there is no evidence to suggest a neutral pH prevents BSR.
References
External links
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Soybean diseases
Eurotiomycetes
Fungi described in 1971
Fungus species | Phialophora gregata | [
"Biology"
] | 1,813 | [
"Fungi",
"Fungus species"
] |
11,798,248 | https://en.wikipedia.org/wiki/Penicillium%20glabrum | Penicillium glabrum is a plant pathogen infecting strawberries.
References
External links
USDA ARS Fungal Database
Fungal strawberry diseases
Fungi described in 1911
glabrum
Fungus species | Penicillium glabrum | [
"Biology"
] | 39 | [
"Fungi",
"Fungus species"
] |
11,798,258 | https://en.wikipedia.org/wiki/Pilidiella%20quercicola | Pilidiella quercicola is a plant pathogen infecting strawberries.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal strawberry diseases
Fungi described in 1927
Diaporthales
Fungus species | Pilidiella quercicola | [
"Biology"
] | 45 | [
"Fungi",
"Fungus species"
] |
11,798,290 | https://en.wikipedia.org/wiki/Pestalotia%20longisetula | Pestalotia longisetula is a plant pathogen causing strawberry fruit rot.
Hosts and symptoms
While P. longisetula is best known for infecting strawberry crops, it can also infect other plants, including apricots, peaches, guava, and tomato fruits. Some plants, such as beans, are immune to the disease. On average it takes about two weeks for mature plants to become fully infected, while plants at an earlier stage of growth spread the infection more slowly. Infected areas become covered with white mycelial growth and the host plant starts to rot from the skin to the core. The plant as a whole suffers as the leaves develop lesions bearing spores that spread the disease.
Infection
P. longisetula infects other plants through the leaves. Spores grow on the leaves and are spread by the wind. The disease thrives in areas with high humidity and high wind. Once a plant has been infected, the disease spreads throughout the leaves and then attacks the fruit, causing it to rot from the skin to the core. After eight days, most mature plants are completely infected and a new phase of the infection begins, spreading to neighbouring plants. The host plant dies in most circumstances. Using pesticides and growing strawberries in areas with low wind and low humidity can slow the progression of the infection.
Importance
Many countries depend on strawberry production for income. If the disease becomes established in an area, a proportion of the plants may already be lost by the time farmers detect it, and profits are lost. The countries most affected are those without access to pesticides or greenhouses to protect the plants.
References
External links
USDA ARS Fungal Database
Fungal strawberry diseases
Fungi described in 1961
Xylariales
Fungus species | Pestalotia longisetula | [
"Biology"
] | 353 | [
"Fungi",
"Fungus species"
] |
11,798,302 | https://en.wikipedia.org/wiki/Ganoderma%20tsugae | Ganoderma tsugae, also known as hemlock varnish shelf, is a flat polypore mushroom of the genus Ganoderma.
Habitat
In contrast to Ganoderma lucidum, to which it is closely related and which it closely resembles, G. tsugae tends to grow on conifers, especially hemlocks.
Uses
Like G. lucidum, G. tsugae is non-poisonous but generally considered inedible, because of its solid woody nature; however, teas and extracts made from its fruiting bodies supposedly allow medicinal use of the compounds it contains, although this is controversial within the scientific community. A hot water extraction or tea can be very effective for extracting the polysaccharides; however, an alcohol or alcohol/glycerin extraction method is more effective for the triterpenoids.
The fresh, soft growth of the "lip" of G. tsugae can be sautéed and prepared much like other edible mushrooms. While in this nascent stage it is not woody, it can still be tough and chewy.
Medicinal
Like G. lucidum, G. tsugae is purported to have medicinal properties, including use for dressing skin wounds. Though phylogenetic analysis has begun to better differentiate between many closely related species of Ganoderma, there is still disagreement as to which have the most medicinal properties. Natural and artificial variations (e.g. growing conditions and preparation) can also affect the species' medicinal value.
Studies in mice have shown that G. tsugae shows several potential medicinal benefits including anti-tumor activity through some of the active polysaccharides found in G. tsugae. G. tsugae has also been shown to significantly promote wound healing in mice as well as markedly increase the proliferation and migration of fibroblast cells in culture.
References
tsugae
Dietary supplements
Inedible fungi
Medicinal fungi
Fungus species | Ganoderma tsugae | [
"Biology"
] | 394 | [
"Fungi",
"Fungus species"
] |
11,798,314 | https://en.wikipedia.org/wiki/Control%20of%20International%20Trade%20in%20Endangered%20Species | Control of International Trade in Endangered Species, also known as COTES, is an organisation (1996) which complies with CITES.
COTES is used in the United Kingdom to convict wildlife crimes involving protected and endangered species.
References
Endangered species
Conservation in the United Kingdom | Control of International Trade in Endangered Species | [
"Biology"
] | 53 | [
"Biota by conservation status",
"Endangered species"
] |