id int64 39 79M | url stringlengths 31 227 | text stringlengths 6 334k | source stringlengths 1 150 ⌀ | categories listlengths 1 6 | token_count int64 3 71.8k | subcategories listlengths 0 30 |
|---|---|---|---|---|---|---|
16,372,307 | https://en.wikipedia.org/wiki/Combinatory%20categorial%20grammar | Combinatory categorial grammar (CCG) is an efficiently parsable, yet linguistically expressive grammar formalism. It has a transparent interface between surface syntax and underlying semantic representation, including predicate–argument structure, quantification and information structure. The formalism generates constituency-based structures (as opposed to dependency-based ones) and is therefore a type of phrase structure grammar (as opposed to a dependency grammar).
CCG relies on combinatory logic, which has the same expressive power as the lambda calculus, but builds its expressions differently. The first linguistic and psycholinguistic arguments for basing the grammar on combinators were put forth by Steedman and Szabolcsi.
More recent prominent proponents of the approach are Pauline Jacobson and Jason Baldridge. In these new approaches, the combinator B (the compositor) is useful in creating long-distance dependencies, as in "Who do you think Mary is talking about?" and the combinator W (the duplicator) is useful as the lexical interpretation of reflexive pronouns, as in "Mary talks about herself". Together with I (the identity mapping) and C (the permutator) these form a set of primitive, non-interdefinable combinators. Jacobson interprets personal pronouns as the combinator I, and their binding is aided by a complex combinator Z, as in "Mary lost her way". Z is definable using W and B.
Parts of the formalism
The CCG formalism defines a number of combinators (application, composition, and type-raising being the most common). These operate on syntactically-typed lexical items, by means of Natural deduction style proofs. The goal of the proof is to find some way of applying the combinators to a sequence of lexical items until no lexical item is unused in the proof. The resulting type after the proof is complete is the type of the whole expression. Thus, proving that some sequence of words is a sentence of some language amounts to proving that the words reduce to the type S.
Syntactic types
The syntactic type of a lexical item can be either a primitive type, such as S, N, or NP, or a complex type, such as S\NP, NP/N, or (S\NP)/NP.
The complex types, schematizable as X/Y and X\Y, denote functor types that take an argument of type Y and return an object of type X. A forward slash denotes that the argument should appear to the right, while a backslash denotes that the argument should appear on the left. Any type can stand in for X and Y here, making syntactic types in CCG a recursive type system.
Application combinators
The application combinators, often denoted by > for forward application and < for backward application, apply a lexical item with a functor type to an argument with an appropriate type. The definition of application is given as:

X/Y Y ⇒ X (>)
Y X\Y ⇒ X (<)
Composition combinators
The composition combinators, often denoted by >B for forward composition and <B for backward composition, are similar to function composition from mathematics, and can be defined as follows:

X/Y Y/Z ⇒ X/Z (>B)
Y\Z X\Y ⇒ X\Z (<B)
Type-raising combinators
The type-raising combinators, often denoted as >T for forward type-raising and <T for backward type-raising, take argument types (usually primitive types) to functor types, which take as their argument the functors that, before type-raising, would have taken them as arguments:

X ⇒ T/(T\X) (>T)
X ⇒ T\(T/X) (<T)
Example
The sentence "the dog bit John" has a number of different possible proofs. Below are a few of them. The variety of proofs demonstrates the fact that in CCG, sentences don't have a single structure, as in other models of grammar.
Let the types of these lexical items be the := NP/N, dog := N, bit := (S\NP)/NP, and John := NP.
We can perform the simplest proof (changing notation slightly for brevity) by first combining "the" with "dog" by forward application to obtain NP, then "bit" with "John" to obtain the verb phrase S\NP, and finally combining the resulting NP and S\NP by backward application to yield S.
Opting to type-raise and compose some, we could get a fully incremental, left-to-right proof. The ability to construct such a proof is an argument for the psycholinguistic plausibility of CCG, because listeners do in fact construct partial interpretations (syntactic and semantic) of utterances before they have been completed.
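To make the proof procedure concrete, the sketch below (illustrative Python, not part of any CCG implementation; the lexical type assignments are the textbook-style ones assumed above) reduces "the dog bit John" to S using forward and backward application alone:

```python
# Illustrative sketch: CCG categories as strings, reduced with forward (>)
# and backward (<) application only.  "NP/N" expects its argument (N) to the
# right; "S\NP" expects its argument (NP) to the left.

def split_functor(cat):
    """Split a complex category at its outermost (rightmost) slash."""
    depth = 0
    for i in range(len(cat) - 1, -1, -1):
        c = cat[i]
        if c == ')':
            depth += 1
        elif c == '(':
            depth -= 1
        elif c in '/\\' and depth == 0:
            return cat[:i].strip('()'), c, cat[i + 1:].strip('()')
    return None                                   # primitive category

def combine(left, right):
    """Forward application X/Y Y => X, or backward application Y X\\Y => X."""
    f = split_functor(left)
    if f and f[1] == '/' and f[2] == right:
        return f[0]
    f = split_functor(right)
    if f and f[1] == '\\' and f[2] == left:
        return f[0]
    return None

# Assumed, textbook-style lexicon for the example sentence.
lexicon = {"the": "NP/N", "dog": "N", "bit": "(S\\NP)/NP", "John": "NP"}
cats = [lexicon[w] for w in "the dog bit John".split()]

while len(cats) > 1:
    for i in range(len(cats) - 1):
        reduced = combine(cats[i], cats[i + 1])
        if reduced is not None:
            cats[i:i + 2] = [reduced]             # the dog=>NP, bit John=>S\NP, ...
            break
    else:
        break                                     # no rule applies
print(cats)                                       # ['S']: the string is a sentence
```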
Formal properties
CCGs are known to be able to generate the language {a^n b^n c^n d^n : n ≥ 0} (which is a non-context-free indexed language). A grammar for this language can be found in Vijay-Shanker and Weir (1994).
Vijay-Shanker and Weir (1994) demonstrates that Linear Indexed Grammars, Combinatory Categorial Grammars, Tree-adjoining Grammars, and Head Grammars are weakly equivalent formalisms, in that they all define the same string languages. Kuhlmann et al. (2015) show that this equivalence, and the ability of CCG to describe languages such as {a^n b^n c^n d^n}, rely crucially on the ability to restrict the use of the combinatory rules to certain categories, in ways not explained above.
See also
Categorial grammar
Combinatory logic
Embedded pushdown automaton
Link grammar
Type shifter
References
Baldridge, Jason (2002), "Lexically Specified Derivational Control in Combinatory Categorial Grammar." PhD Dissertation. Univ. of Edinburgh.
Curry, Haskell B. and Richard Feys (1958), Combinatory Logic, Vol. 1. North-Holland.
Jacobson, Pauline (1999), “Towards a variable-free semantics.” Linguistics and Philosophy 22, 1999. 117–184
Steedman, Mark (1987), “Combinatory grammars and parasitic gaps”. Natural Language and Linguistic Theory 5, 403–439.
Steedman, Mark (1996), Surface Structure and Interpretation. The MIT Press.
Steedman, Mark (2000), The Syntactic Process. The MIT Press.
Szabolcsi, Anna (1989), "Bound variables in syntax (are there any?)." Semantics and Contextual Expression, ed. by Bartsch, van Benthem, and van Emde Boas. Foris, 294–318.
Szabolcsi, Anna (1992), "Combinatory grammar and projection from the lexicon." Lexical Matters. CSLI Lecture Notes 24, ed. by Sag and Szabolcsi. Stanford, CSLI Publications. 241–269.
Szabolcsi, Anna (2003), “Binding on the fly: Cross-sentential anaphora in variable-free semantics”. Resource Sensitivity in Binding and Anaphora, ed. by Kruijff and Oehrle. Kluwer, 215–229.
Further reading
Michael Moortgat, Categorial Type Logics, Chapter Two in J. van Benthem and A. ter Meulen (eds.) Handbook of Logic and Language. Elsevier, 1997,
homepages.inf.ed.ac.uk
External links
The Combinatory Categorial Grammar Site
The ACL CCG wiki page (likely to be more up-to-date than this one)
Semantic Parsing with Combinatory Categorial Grammars – Tutorial describing general principles for building semantic parsers
Grammar frameworks
Combinatory logic
Type theory | Combinatory categorial grammar | [
"Mathematics"
] | 1,470 | [
"Type theory",
"Mathematical logic",
"Mathematical structures",
"Mathematical objects"
] |
16,373,249 | https://en.wikipedia.org/wiki/Structured-light%203D%20scanner | A structured-light 3D scanner is a device that measures the three-dimensional shape of an object by projecting light patterns—such as grids or stripes—onto it and capturing their deformation with cameras. This technique allows for precise surface reconstruction by analyzing the displacement of the projected patterns, which are processed into detailed 3D models using specialized algorithms.
Due to their high resolution and rapid scanning capabilities, structured-light 3D scanners are utilized in various fields, including industrial design, quality control, cultural heritage preservation, augmented reality gaming, and medical imaging. Compared to 3D laser scanning, structured-light scanners can offer advantages in speed and safety by using non-coherent light sources like LEDs or projectors instead of lasers. This approach allows for relatively quick data capture over large areas and reduces potential safety concerns associated with laser use. However, structured-light scanners can be affected by ambient lighting conditions and the reflective properties of the scanned objects.
Principle
Projecting a narrow band of light onto a three-dimensionally shaped surface produces a line of illumination that appears distorted from other perspectives than that of the projector, and can be used for geometric reconstruction of the surface shape (light section).
A faster and more versatile method is the projection of patterns consisting of many stripes at once, or of arbitrary fringes, as this allows for the acquisition of a multitude of samples simultaneously.
Seen from different viewpoints, the pattern appears geometrically distorted due to the surface shape of the object.
Although many other variants of structured light projection are possible, patterns of parallel stripes are widely used. The picture shows the geometrical deformation of a single stripe projected onto a simple 3D surface. The displacement of the stripes allows for an exact retrieval of the 3D coordinates of any details on the object's surface.
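Seen at its simplest (a sketch assuming an idealised, rectified projector-camera pair with baseline b and focal length f, rather than a fully calibrated setup), the displacement-to-depth relation is the usual triangulation formula:

```latex
% Idealised stripe triangulation: rectified projector--camera pair,
% baseline b, focal length f (in pixels), stripe disparity d at pixel (x, y):
Z \approx \frac{f\,b}{d}, \qquad X = \frac{x\,Z}{f}, \qquad Y = \frac{y\,Z}{f}
% so the measured displacement d of each stripe pixel fixes (X, Y, Z).
```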
Generation of light patterns
Two major methods of stripe pattern generation have been established: Laser interference and projection.
The laser interference method works with two wide planar laser beam fronts. Their interference results in regular, equidistant line patterns. Different pattern sizes can be obtained by changing the angle between these beams. The method allows for the exact and easy generation of very fine patterns with unlimited depth of field. Disadvantages are the high cost of implementation, difficulties in providing the ideal beam geometry, and effects typical of lasers, such as speckle noise and possible self-interference with beam parts reflected from objects. Typically, there is no means of modulating individual stripes, such as with Gray codes.
The projection method uses incoherent light and basically works like a video projector. Patterns are usually generated by passing light through a digital spatial light modulator, typically based on one of the three currently most widespread digital projection technologies, transmissive liquid crystal, reflective liquid crystal on silicon (LCOS) or digital light processing (DLP; moving micro mirror) modulators, which have various comparative advantages and disadvantages for this application. Other methods of projection could be and have been used, however.
Patterns generated by digital display projectors have small discontinuities due to the pixel boundaries in the displays. Sufficiently small boundaries however can practically be neglected as they are evened out by the slightest defocus.
A typical measuring assembly consists of one projector and at least one camera. For many applications, two cameras on opposite sides of the projector have been established as useful.
Invisible (or imperceptible) structured light uses structured light without interfering with other computer vision tasks for which the projected pattern will be confusing. Example methods include the use of infrared light or of extremely high framerates alternating between two exact opposite patterns.
Calibration
Geometric distortions by optics and perspective must be compensated by a calibration of the measuring equipment, using special calibration patterns and surfaces. A mathematical model is used for describing the imaging properties of projector and cameras. Essentially based on the simple geometric properties of a pinhole camera, the model also has to take into account the geometric distortions and optical aberration of projector and camera lenses. The parameters of the camera as well as its orientation in space can be determined by a series of calibration measurements, using photogrammetric bundle adjustment.
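As a rough illustration of the camera half of such a calibration (a minimal sketch using OpenCV's pinhole-plus-distortion model and a planar chessboard target; real fringe-projection systems additionally calibrate the projector and may refine everything with bundle adjustment, which is not shown here):

```python
import numpy as np
import cv2  # OpenCV: pinhole camera model with radial/tangential distortion

def calibrate_camera(images, pattern=(8, 6), square_mm=20.0):
    """Fit camera intrinsics from photos of a planar chessboard target.

    `images` is a list of grayscale views of an (assumed) 8x6 inner-corner
    chessboard with 20 mm squares; returns the camera matrix K, distortion
    coefficients, and the RMS reprojection error in pixels.
    """
    # 3D target coordinates of the corners (Z = 0 plane of the board).
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_mm

    obj_pts, img_pts = [], []
    for img in images:
        found, corners = cv2.findChessboardCorners(img, pattern)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)

    h, w = images[0].shape[:2]
    # Least-squares estimate of intrinsics and per-view pose (rvecs, tvecs).
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, (w, h), None, None)
    return K, dist, rms
```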
Analysis of stripe patterns
There are several depth cues contained in the observed stripe patterns. The displacement of any single stripe can directly be converted into 3D coordinates. For this purpose, the individual stripe has to be identified, which can for example be accomplished by tracing or counting stripes (pattern recognition method). Another common method projects alternating stripe patterns, resulting in binary Gray code sequences identifying the number of each individual stripe hitting the object.
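The Gray-code idea can be sketched as follows (illustrative Python; the pattern width and bit count are arbitrary choices, and thresholding of the camera images into bits is assumed to have happened already):

```python
import numpy as np

def gray_code_patterns(width, n_bits):
    """Return n_bits binary stripe patterns (one 0/1 value per projector
    column) whose bit sequence per column is that column's Gray code."""
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)                    # binary-reflected Gray code
    bits = (gray[None, :] >> np.arange(n_bits - 1, -1, -1)[:, None]) & 1
    return bits                                  # shape: (n_bits, width)

def decode_gray(bits):
    """Invert the Gray code: recover the column index from observed bits."""
    value = 0
    for b in bits:                               # most significant bit first
        value = (value << 1) | (int(b) ^ (value & 1))
    return value

patterns = gray_code_patterns(width=1024, n_bits=10)
# A camera pixel that saw the bit sequence of projector column 600
# decodes back to stripe number 600:
print(decode_gray(patterns[:, 600]))             # -> 600
```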
An important depth cue also results from the varying stripe widths along the object surface. Stripe width is a function of the steepness of a surface part, i.e. the first derivative of the elevation. Stripe frequency and phase deliver similar cues and can be analyzed by a Fourier transform. Finally, the wavelet transform has recently been discussed for the same purpose.
In many practical implementations, series of measurements combining pattern recognition, Gray codes and Fourier transform are obtained for a complete and unambiguous reconstruction of shapes.
Another method also belonging to the area of fringe projection has been demonstrated, utilizing the depth of field of the camera.
It is also possible to use projected patterns primarily as a means of structure insertion into scenes, for an essentially photogrammetric acquisition.
Precision and range
The optical resolution of fringe projection methods depends on the width of the stripes used and their optical quality. It is also limited by the wavelength of light.
An extreme reduction of stripe width proves inefficient due to limitations in depth of field, camera resolution and display resolution. Therefore, the phase shift method has become widely established: at least 3, and typically about 10, exposures are taken with slightly shifted stripes. The first theoretical derivations of this method relied on stripes with a sine-wave-shaped intensity modulation, but the method works with "rectangular" modulated stripes, as delivered by LCD or DLP displays, as well. By phase shifting, surface detail of e.g. 1/10 of the stripe pitch can be resolved.
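The core of the phase-shift computation can be sketched as follows (a simplified model assuming ideal sinusoidal stripes and equally spaced shifts; real systems must additionally unwrap the phase and correct for projector gamma and the rectangular stripe profile mentioned above):

```python
import numpy as np

def wrapped_phase(images):
    """Recover the wrapped stripe phase, per pixel, from N >= 3 equally
    phase-shifted exposures I_n = A + B*cos(phi + 2*pi*n/N)."""
    images = np.asarray(images, dtype=float)          # shape: (N, H, W)
    n = images.shape[0]
    deltas = 2 * np.pi * np.arange(n) / n
    s = np.tensordot(np.sin(deltas), images, axes=1)  # sum_n I_n * sin(delta_n)
    c = np.tensordot(np.cos(deltas), images, axes=1)  # sum_n I_n * cos(delta_n)
    return np.arctan2(-s, c)                          # wrapped phase in (-pi, pi]

# Tiny self-check with synthetic data: true phase 1.0 rad, 4 shifted exposures.
true_phi = 1.0
imgs = [5 + 2 * np.cos(true_phi + 2 * np.pi * k / 4) * np.ones((2, 2))
        for k in range(4)]
print(wrapped_phase(imgs)[0, 0])                      # ~1.0
```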
Current optical stripe pattern profilometry hence allows for detail resolutions down to the wavelength of light, below 1 micrometer in practice or, with larger stripe patterns, to approx. 1/10 of the stripe width. Concerning level accuracy, interpolating over several pixels of the acquired camera image can yield a reliable height resolution and also accuracy, down to 1/50 pixel.
Arbitrarily large objects can be measured with accordingly large stripe patterns and setups. Practical applications are documented involving objects several meters in size.
Typical accuracy figures are:
Planarity of a wide surface, to .
Shape of a motor combustion chamber to (elevation), yielding a volume accuracy 10 times better than with volumetric dosing.
Shape of an object large, to about
Radius of a blade edge of e.g. , to ±0.4 μm
Navigation
As the method can measure shapes from only one perspective at a time, complete 3D shapes have to be combined from different measurements in different angles. This can be accomplished by attaching marker points to the object and combining perspectives afterwards by matching these markers. The process can be automated, by mounting the object on a motorized turntable or CNC positioning device. Markers can as well be applied on a positioning device instead of the object itself.
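Combining two scans via matched marker points amounts to estimating a rigid transformation between the two coordinate frames; a minimal sketch (using the Kabsch/SVD solution, with made-up marker coordinates) looks like this:

```python
import numpy as np

def rigid_align(src, dst):
    """Best-fit rotation R and translation t mapping marker coordinates
    src (Nx3) onto dst (Nx3) in a least-squares sense (Kabsch algorithm)."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Markers as seen in scan A, and the same markers in scan B's coordinates.
a = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
theta = np.deg2rad(30)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
b = a @ Rz.T + np.array([5.0, 2.0, 1.0])

R, t = rigid_align(a, b)
# Applying (R, t) to every point of scan A moves it into scan B's frame.
print(np.allclose(a @ R.T + t, b))                # True
```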
The 3D data gathered can be used to retrieve CAD (computer aided design) data and models from existing components (reverse engineering), hand formed samples or sculptures, natural objects or artifacts.
Challenges
As with all optical methods, reflective or transparent surfaces raise difficulties. Reflections cause light to be reflected either away from the camera or right into its optics. In both cases, the dynamic range of the camera can be exceeded. Transparent or semi-transparent surfaces also cause major difficulties. In these cases, coating the surfaces with a thin opaque lacquer just for measuring purposes is a common practice. A recent method handles highly reflective and specular objects by inserting a 1-dimensional diffuser between the light source (e.g., projector) and the object to be scanned. Alternative optical techniques have been proposed for handling perfectly transparent and specular objects.
Double reflections and inter-reflections can cause the stripe pattern to be overlaid with unwanted light, entirely eliminating the chance for proper detection. Reflective cavities and concave objects are therefore difficult to handle. It is also hard to handle translucent materials, such as skin, marble, wax, plants and human tissue because of the phenomenon of sub-surface scattering. Recently, there has been an effort in the computer vision community to handle such optically complex scenes by re-designing the illumination patterns. These methods have shown promising 3D scanning results for traditionally difficult objects, such as highly specular metal concavities and translucent wax candles.
Speed
Although several patterns have to be taken per picture in most structured light variants, high-speed implementations are available for a number of applications, for example:
Inline precision inspection of components during the production process.
Health care applications, such as live measuring of human body shapes or the micro structures of human skin.
Motion picture applications have been proposed, for example the acquisition of spatial scene data for three-dimensional television.
Applications
Industrial Optical Metrology Systems (ATOS) from GOM GmbH utilize Structured Light technology to achieve high accuracy and scalability in measurements. These systems feature self-monitoring for calibration status, transformation accuracy, environmental changes, and part movement to ensure high-quality measuring data.
Google Project Tango SLAM (Simultaneous localization and mapping) uses depth technologies, including Structured Light, Time of Flight, and Stereo. Structured Light and Time of Flight require the use of an infrared (IR) projector and IR sensor; Stereo does not.
MainAxis srl produces a 3D scanner utilizing an advanced patented technology that enables 3D scanning in full color and with an acquisition time of a few microseconds, used in medical and other applications.
A technology by PrimeSense, used in an early version of Microsoft Kinect, used a pattern of projected infrared points to generate a dense 3D image. (Later on, the Microsoft Kinect switched to using a time-of-flight camera instead of structured light.)
Occipital
Structure Sensor uses a pattern of projected infrared points, calibrated to minimize distortion to generate a dense 3D image.
Structure Core uses a stereo camera that matches against a random pattern of projected infrared points to generate a dense 3D image.
Intel RealSense camera projects a series of infrared patterns to obtain the 3D structure.
Face ID system works by projecting more than 30,000 infrared dots onto a face and producing a 3D facial map.
VicoVR sensor uses a pattern of infrared points for skeletal tracking.
Chiaro Technologies uses a single engineered pattern of infrared points called Symbolic Light to stream 3D point clouds for industrial applications.
Made to measure fashion retailing
3D-Automated optical inspection
Precision shape measurement for production control (e.g. turbine blades)
Reverse engineering (obtaining precision CAD data from existing objects)
Volume measurement (e.g. combustion chamber volume in motors)
Classification of grinding materials and tools
Precision structure measurement of ground surfaces
Radius determination of cutting tool blades
Precision measurement of planarity
Documenting objects of cultural heritage
Capturing environments for augmented reality gaming
Skin surface measurement for cosmetics and medicine
Body shape measurement
Forensic science inspections
Road pavement structure and roughness
Wrinkle measurement on cloth and leather
Structured Illumination Microscopy
Measurement of topography of solar cells
3D vision system enables DHL's e-fulfillment robot
Software
3DUNDERWORLD SLS – OPEN SOURCE
DIY 3D scanner based on structured light and stereo vision in Python language
SLStudio—Open Source Real Time Structured Light
See also
Depth map
Kinect
Laser Dynamic Range Imager (LDRI)
Lidar
Light stage
Range imaging
Virtual cinematography
References
Sources
Fechteler, P., Eisert, P., Rurainsky, J.: Fast and High Resolution 3D Face Scanning Proc. of ICIP 2007
Fechteler, P., Eisert, P.: Adaptive Color Classification for Structured Light Systems Proc. of CVPR 2008
Kai Liu, Yongchang Wang, Daniel L. Lau, Qi Hao, Laurence G. Hassebrook: Gamma Model and its Analysis for Phase Measuring Profilometry. J. Opt. Soc. Am. A, 27: 553–562, 2010
Yongchang Wang, Kai Liu, Daniel L. Lau, Qi Hao, Laurence G. Hassebrook: Maximum SNR Pattern Strategy for Phase Shifting Methods in Structured Light Illumination, J. Opt. Soc. Am. A, 27(9), pp. 1962–1971, 2010
Hof, C., Hopermann, H.: Comparison of Replica- and In Vivo-Measurement of the Microtopography of Human Skin University of the Federal Armed Forces, Hamburg
Frankowski, G., Chen, M., Huth, T.: Real-time 3D Shape Measurement with Digital Stripe Projection by Texas Instruments Micromirror Devices (DMD) Proc. SPIE-Vol. 3958(2000), pp. 90–106
Frankowski, G., Chen, M., Huth, T.: Optical Measurement of the 3D-Coordinates and the Combustion Chamber Volume of Engine Cylinder Heads Proc. Of "Fringe 2001", pp. 593–598
Elena Stoykova, Jana Harizanova, Venteslav Sainov: Pattern Projection Profilometry for 3D Coordinates Measurement of Dynamic Scenes. In: Three Dimensional Television, Springer, 2008,
Song Zhang, Peisen Huang: High-resolution, Real-time 3-D Shape Measurement (PhD Dissertation, Stony Brook Univ., 2005)
Tao Peng: Algorithms and models for 3-D shape measurement using digital fringe projections (Ph.D. Dissertation, University of Maryland, USA. 2007)
W. Wilke: Segmentierung und Approximation großer Punktwolken (Dissertation Univ. Darmstadt, 2000)
G. Wiora: Optische 3D-Messtechnik Präzise Gestaltvermessung mit einem erweiterten Streifenprojektionsverfahren (Dissertation Univ. Heidelberg, 2001)
Klaus Körner, Ulrich Droste: Tiefenscannende Streifenprojektion (DSFP) University of Stuttgart (further English references on the site)
R. Morano, C. Ozturk, R. Conn, S. Dubin, S. Zietz, J. Nissano, "Structured light using pseudorandom codes", IEEE Transactions on Pattern Analysis and Machine Intelligence 20 (3)(1998)322–327
Further reading
Fringe, 2005, The 5th International Workshop on Automatic Processing of Fringe Patterns, Berlin: Springer, 2006.
3D imaging
Computer vision | Structured-light 3D scanner | [
"Engineering"
] | 3,007 | [
"Artificial intelligence engineering",
"Packaging machinery",
"Computer vision"
] |
16,374,761 | https://en.wikipedia.org/wiki/Piromidic%20acid | Piromidic acid is a quinolone antibiotic.
References
Quinolone antibiotics
Pyridopyrimidines
1-Pyrrolidinyl compounds
Carboxylic acids | Piromidic acid | [
"Chemistry"
] | 41 | [
"Carboxylic acids",
"Functional groups"
] |
16,374,797 | https://en.wikipedia.org/wiki/Oxolinic%20acid | Oxolinic acid is a quinolone antibiotic developed in Japan in the 1970s. Dosages 12–20 mg/kg orally administered for five to ten days. The antibiotic works by inhibiting the enzyme DNA gyrase. It also acts as a dopamine reuptake inhibitor and has stimulant effects in mice.
See also
Amfonelic acid
Fluoroquinolone
References
Quinolone antibiotics
Dopamine reuptake inhibitors
Carboxylic acids
Nitrogen heterocycles
Oxygen heterocycles
Heterocyclic compounds with 3 rings | Oxolinic acid | [
"Chemistry"
] | 121 | [
"Carboxylic acids",
"Functional groups"
] |
16,374,808 | https://en.wikipedia.org/wiki/Flumequine | Flumequine is a synthetic fluoroquinolone antibiotic used to treat bacterial infections. It is a first-generation fluoroquinolone antibacterial that has been removed from clinical use and is no longer being marketed. The marketing authorization of flumequine has been suspended throughout the EU. It kills bacteria by interfering with the enzymes that cause DNA to unwind and duplicate. Flumequine was used in veterinarian medicine for the treatment of enteric infections (all infections of the intestinal tract), as well as to treat cattle, swine, chickens, and fish, but only in a limited number of countries. It was occasionally used in France (and a few other European Countries) to treat urinary tract infections under the trade name Apurone. However this was a limited indication
because only minimal serum levels were achieved.
History
The first quinolone used was nalidixic acid (marketed in many countries as Negram), followed by the fluoroquinolone flumequine. The first-generation fluoroquinolone agents, such as flumequine, had poor distribution into the body tissues and limited activity. As such they were used mainly for the treatment of urinary tract infections. Flumequine (a benzoquinolizine) was first patented in 1973 (German patent) by Rikker Labs. Flumequine is a known antimicrobial compound described and claimed in U.S. Pat. No. 3,896,131 (Example 3), July 22, 1975. Flumequine is the first quinolone compound with a fluorine atom at the C6-position of the related quinolone basic molecular structure. Even though this was the first fluoroquinolone, it is often overlooked when classifying the drugs within this class by generations and excluded from such lists.
Though used frequently to treat farm animals and, on occasion, household pets, flumequine was also used to treat urinary tract infections in humans. Flumequine was used transiently to treat urinary infections until ocular toxicity, as well as liver damage and anaphylactic shock, was reported.
In 2008, the United States Food and Drug Administration (FDA) requested that all quinolone/fluoroquinolone drugs package inserts include a Black Boxed Warning concerning the risk of spontaneous tendon ruptures, which would have included flumequine. The FDA also requested that the manufacturers send out Dear Doctor Letters regarding this new warning. Such tendon problems have also been associated with flumequine.
Drug residue
The use of flumequine in food animals has sparked considerable debate. Significant and harmful residues of quinolones have been found in animals treated with quinolones and later slaughtered and sold as food products. There has been significant concern regarding the amount of flumequine residue found within food animals such as fish, poultry and cattle. In 2003 the Joint FAO/WHO Expert Committee on Food Additives (JECFA) withdrew the maximum residue limits (MRLs) for flumequine and carbadox based on evidence showing both are direct-acting genotoxic carcinogens; the Committee was therefore unable to establish an Acceptable Daily Intake (ADI) for human exposure to such residues. Subsequently, in 2006, the JECFA re-established the ADI, having received appropriate evidence, and MRLs were re-specified. The role of JECFA is to evaluate toxicology, residue chemistry and related information and make recommendations for acceptable daily intake (ADI) levels and maximum residue limits (MRLs). At its 16th session, held in May 2006, the Committee on Residues of Veterinary Drugs in Foods (CCRVDF) requested information on registered uses of flumequine. As the CCRVDF did not receive any information regarding the registered uses of flumequine that it had requested, the committee members agreed to discontinue work on the MRLs for flumequine in shrimp.
Licensed uses
Urinary tract infections (veterinary and human)
Availability
Veterinary use:
Solution; Oral; 20% (prescription only)
Solution; Oral; 10% (prescription only)
Human use:
Tablet; Oral; Flumequine 400 mg (discontinued)
Mode of action
Flumequine is a member of the quinolone antibiotics family, which are active against both Gram-positive and Gram-negative bacteria. It functions by inhibiting DNA gyrase, a type II topoisomerase, and topoisomerase IV, enzymes necessary to separate bacterial DNA, thereby inhibiting cell division.
This mechanism can also affect mammalian cell replication. In particular, some congeners of this drug family (for example those that contain the C-8 fluorine), display high activity not only against bacterial topoisomerases, but also against eukaryotic topoisomerases and are toxic to cultured mammalian cells and in vivo tumor models.
Although quinolones are highly toxic to mammalian cells in culture, its mechanism of cytotoxic action is not known. Quinolone induced DNA damage was first reported in 1986 (Hussy et al.).
Recent studies have demonstrated a correlation between mammalian cell cytotoxicity of the quinolones and the induction of micronuclei.
As such, some fluoroquinolones may cause injury to the chromosome of eukaryotic cells.
There continues to be considerable debate as to whether or not this DNA damage is to be considered one of the mechanisms of action concerning the severe adverse reactions experienced by some patients following fluoroquinolone therapy.
Adverse reactions
Flumequine was associated with severe ocular toxicity, which precluded its use in human patients. Drug-induced calculi (kidney stones) has been associated with such therapy as well. Anaphylactic shock induced by flumequine therapy has also been associated with its use. Anaphylactoid reactions such as shock, urticaria, and Quincke’s oedema have been reported to generally appear within two hours after taking the first tablet. There were eighteen reports listed within the WHO file in 1996. As with all drugs within this class, flumequine therapy may result in severe central nervous system (CNS) reactions, phototoxicity resulting in skin reactions like erythema, pruritus, urticaria and severe rashes, gastrointestinal and neurological disorders.
Drug interactions
Flumequine was found to have no effect on theophylline pharmacokinetics.
Chemistry
Flumequine is a 9-fluoro-6,7-dihydro-5-methyl-1-oxo-1H,5H-benzo[ij]quinolizine-2-carboxylic acid. The molecular formula is C14H12FNO3. It is a white powder, odorless, flavorless, insoluble in water but soluble in organic solvent.
Pharmacokinetics
Flumequine is considered to be well absorbed and is excreted in the urine and feces as the glucuronide conjugates of the parent drug and 7-hydroxyflumequine. It is eliminated within 168 hours post-dosing. However, studies concerning the calf liver showed additional unidentified residues, of which a new metabolite, ml, represented the major single metabolite 24 hours after the last dose and at all subsequent time points. The metabolite ml, which exhibited no antimicrobial activity, was present in both free and protein-bound fractions. The major residue found in the edible tissues of sheep, pigs, and chickens was parent drug together with minor amounts of the 7-hydroxy-metabolite. The only detected residue in trout was the parent drug.
See also
Adverse effects of fluoroquinolones
References
Fluoroquinolone antibiotics
Tetrahydroquinolines
Heterocyclic compounds with 3 rings
Withdrawn drugs
Enoic acids | Flumequine | [
"Chemistry"
] | 1,665 | [
"Drug safety",
"Withdrawn drugs"
] |
16,374,851 | https://en.wikipedia.org/wiki/Livestock%20grazing%20comparison | Livestock grazing comparison is a method of comparing the numbers and density of livestock grazing in agriculture. Various units of measurement are used, usually based on the grazing equivalent of one adult cow, or in some areas on that of one sheep. Many different schemes exist, giving various values to the grazing effect of different types of animal.
Use
Livestock grazing comparison units are used for assessing the overall effect on grazing land of different types of animals (or of mixtures of animals), expressed either as a total for a whole field or farm, or as units per hectare (ha) or acre. For example, using UK government Livestock Units (LUs) from the 2003 scheme a particular pasture field might be able to support 15 adult cattle or 25 horses or 100 sheep: in that scheme each of these would be regarded as being 15 LUs, or 1.5 LUs per hectare (about 0.6 LUs per acre).
Different species (and breeds) of livestock do not all graze in the same way, and this is also taken into account when deciding the appropriate number of units for grazing land. For example, horses naturally graze unevenly, eating short grass areas first and only grazing longer turf if there is insufficient short grass; cattle graze longer grass preferentially, tending to produce a uniform sward; goats tend to browse shrubs if these are available. As these feeding styles are complementary, a pasture may therefore support slightly more units of mixed species than of each species separately. Another consequence of different grazing styles is variation between species in the number of units that can lead to overgrazing – for example, horses may overgraze the short parts of a pasture even when taller grass is still available.
Livestock grazing comparison units are used by many governments to measure and control the intensity of farming. For example, until 2004 the UK Government had an extensification scheme which paid additional subsidy to farmers who kept their livestock at less than an average of 1.4 LUs per hectare.
Schemes
Although different schemes have similar aims, they vary in complexity and detail. For example, some schemes give no value to a young calf, but an additional value to a cow together with her calf at foot. Some give values to different-sized animals of the same species, or different values to the same species in different regions. Most schemes use a calculation based on the weight of the animal. Some use figures for animals of different sizes which are directly proportional to their weight – for example the 2006 UK Government scheme uses a figure for ruminants of the animal's weight (in kilogrammes) divided by 650. Others include an adjustment for the proportionally higher metabolic rate of smaller animals, according to Kleiber's law, which states that the metabolic rate of most animals varies according to their weight raised to the power of approximately 0.75. For example, the Food and Agriculture Organization's Tropical Livestock Unit is based on the weight of the animal raised to the power of 0.75, compared with the equivalent figure for a "tropical cow" of 250 kg.
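For illustration, the two kinds of weight-based calculation described above can be written as follows (a sketch using the 650 kg ruminant divisor of the 2006 UK scheme and the 250 kg reference weight of the FAO Tropical Livestock Unit; the function names are invented for the example):

```python
def uk_livestock_units(weight_kg):
    """2006 UK-style figure for ruminants: directly proportional to weight."""
    return weight_kg / 650.0

def tropical_livestock_units(weight_kg):
    """FAO-style TLU: metabolic (Kleiber) scaling relative to a 250 kg cow."""
    return (weight_kg / 250.0) ** 0.75

# A 650 kg cow is 1.0 UK LU; a 250 kg animal is 1.0 TLU, while a 25 kg sheep
# is about 0.18 TLU rather than the 0.10 a purely linear scale would give.
print(uk_livestock_units(650), tropical_livestock_units(250),
      round(tropical_livestock_units(25), 2))
```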
The following is a summary of some schemes in common use, using the most closely comparable categories:
Central Europe
The size of a livestock farm in Central Europe was traditionally given in Stößen (singular: Stoß). This unit of measurement was subsequently replaced by the grazing livestock unit or Großvieheinheit (GV).
Stoß
The Stoß is a unit of cattle stock density used in the Alps. For each Alm or Alp it is worked out how many Stoß (Swiss: Stössen) can be grazed (bestoßen); one cow equals one Stoß, 3 bulls equal 2 Stöße, a calf is Stoß, a horse of 1, 2 or 3 years old is worth 1, 2 or 3 Stöße, a pig equals , a goat or a sheep is Stoß.
In Switzerland a Normalstoß is defined as a Großvieheinheit that is "summered" for 100 days. For small livestock there are corresponding conversions. Depending on the quality of the Alp or Alm a full Stoß may require between 1/2 ha and 2 ha.
The Stoß is divided into feet or Füße. A full Stoß is the pasture required by a cow, and equals 4 Füße. Bulls, calves, etc., are a fraction of that, e.g. a one-year old bull needs 2 Füße.
Großvieheinheit
A Großvieheinheit (GV or GVE) is a conversion key used to compare different farm animals on the basis of their live weight. A Großvieheinheit represents 500 kilogrammes (roughly the weight of an adult bull). In the wild it excludes small animals like amphibians and insects, but is used for game in forestry and hunting.
Examples are:
Calf 50–100 kg = 0.1–0.2 GV
Young milk cow 450–650 kg = 0.9–1.3 GV
Milk cow = 1 GV
Horse = 0.8–1.5 GV
Boar = 0.3 GV
Domestic pig = 0.12 GV
Piglet = 0.01 GV
Sheep = 0.1 GV
100 Chickens = 0.8–1 GV
320 egg-laying chickens = 1 GV
A more precise unit is the "fodder-consuming livestock unit" or Raufutter verzehrende Großvieheinheit (RGV), which corrects the value above based on the demands of a given species and direct, near-natural supply of food (fibre-rich roughage) without concentrates.
The "tropical livestock unit" or (tropische Vieheinheit) or TLU is based on a live weight of 250 kg.
Aquaculture and hunting
Analogous units are:
Fish population (Fischbesatz) in fishing, is a measure of the stock of fish in a waterbody
Game population (Wildbesatz), in hunting is the stock of game in a reserve
References
External links
FAO discussion paper, explaining relationship between LSU, UBT and UGB.
Livestock
Agricultural research
Equivalent units | Livestock grazing comparison | [
"Mathematics"
] | 1,282 | [
"Equivalent units",
"Quantity",
"Equivalent quantities",
"Units of measurement"
] |
16,375,528 | https://en.wikipedia.org/wiki/IBM%204694 | The IBM 4694 was one of IBM's PC based point of sale (POS) systems, a successor to the IBM 4683 and IBM 4693. Introduced in 1991, the 4694 became a flagship model for the company's SurePOS system. The system consists of a PC-based controller, and PC-based POS Terminals—typically an IBM keyboard and monitor, or touch screen. The system requires the IBM 4694 computer which is used as a "Controller", or also more or less, as a server on the network. The controller can be set up to boot from a floppy disk, or from a main server on a network. The 4694 was a best-selling POS System, widely used in most large chain stores such as supermarkets, department stores and restaurants. The 4694 could still be seen in the wild at US Foot Locker locations until 2020.
This system was replaced with the IBM 4695.
See also
Digital Research
FlexOS
IBM 4680 OS
IBM 4690 OS
IBM Printer Model 4 (IBM 4694 Printer)
External links
4694 Photo Album at IBM.com
4694 | IBM 4694 | [
"Technology"
] | 240 | [
"Computing stubs",
"Computer hardware stubs"
] |
16,376,341 | https://en.wikipedia.org/wiki/Gas%20diffusion%20electrode | Gas diffusion electrodes (GDE) are electrodes with a conjunction of a solid, liquid and gaseous interface, and an electrical conducting catalyst supporting an electrochemical reaction between the liquid and the gaseous phase.
Principle
GDEs are used in fuel cells, where oxygen and hydrogen react at the gas diffusion electrodes, to form water, while converting the chemical bond energy into electrical energy. Usually the catalyst is fixed in a porous foil, so that the liquid and the gas can interact. Besides these wetting characteristics, the gas diffusion electrode must, of course, offer an optimal electric conductivity, in order to enable an electron transport with low ohmic resistance.
An important prerequisite for the operation of gas diffusion electrodes is that both the liquid and the gaseous phase coexist in the pore system of the electrodes which can be demonstrated with the Young–Laplace equation:
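For a cylindrical pore of radius r, the relevant capillary-pressure form of the Young–Laplace equation (a standard simplification, assuming a uniform circular pore) is:

```latex
% Young--Laplace (capillary pressure) balance in a cylindrical pore of radius r:
p \;=\; \frac{2\,\gamma\,\cos\theta}{r}
% p: gas pressure relative to the liquid phase, gamma: surface tension of the
% liquid, theta: contact angle between the liquid and the pore wall.
```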
The gas pressure p relates to the liquid in the pore system through the pore radius r, the surface tension γ of the liquid and the contact angle θ. This equation is to be taken only as a guide, because several of the parameters are unknown or difficult to determine. When the surface tension is considered, the difference in surface tension between the solid and the liquid has to be taken into account. But the surface tension of catalysts such as platinum on carbon or silver is hardly measurable. The contact angle on a flat surface can be determined with a microscope. A single pore, however, cannot be examined, so it is necessary to characterise the pore system of an entire electrode. Thus, in order to create separate electrode regions for liquid and gas, one can either create different pore radii r or create different wetting angles θ.
Sintered electrode
In this image of a sintered electrode it can be seen that three different grain sizes were used. The different layers were:
top layer of fine-grained material
layer from different groups
gas distribution layer of coarse-grained material
Most of the electrodes that were manufactured from 1950 to 1970 with the sintered method were for use in fuel cells. This type of production was dropped for economic reasons because the electrodes were thick and heavy, with a common thickness of 2 mm, while the individual layers had to be very thin and without defects. The sales price was too high and the electrodes could not be produced continuously.
Principle of operation
The principle of gas diffusion is illustrated in this diagram. The so-called gas distribution layer is located in the middle of the electrode. With only a small gas pressure, the electrolyte is displaced from this pore system. A small flow resistance ensures that the gas can freely flow inside the electrode. At a slightly higher gas pressure the electrolyte in the pore system is restricted to the work layer. The surface layer itself has such fine pores that, even when the pressure peaks, gas cannot flow through the electrode into the electrolyte. Such electrodes were produced by scattering and subsequent sintering or hot pressing. To produce multi-layered electrodes a fine-grained material was scattered in a mold and smoothed. Then, the other materials were applied in multiple layers and put under pressure. The production was not only error-prone but also time-consuming and difficult to automate.
Bonded electrode
Since about 1970, PTFE has been used as a binder to produce electrodes that have both hydrophilic and hydrophobic regions while remaining chemically stable. This means that, in places with a high proportion of PTFE, no electrolyte can penetrate the pore system, and vice versa. In that case the catalyst itself should be non-hydrophobic.
Variations
There are two technical variations to produce PTFE catalyst-mixtures:
Dispersion of water, PTFE, catalyst, emulsifiers, thickening agents...
Dry mixture of PTFE powder and catalyst powder
The dispersion route is chosen mainly for electrodes with polymer electrolytes, as successfully introduced in the proton exchange membrane fuel cell (PEM fuel cell) and in proton exchange membrane (PEM) or hydrochloric acid (HCL) membrane electrolysis. When used in liquid electrolyte, a dry process is more appropriate.
Also, in the dispersion route (through evaporation of water and sintering of the PTFEs at 340 °C) the mechanical pressing is skipped and the produced electrodes are very porous. With fast drying methods, cracks can form in the electrodes which can be penetrated by the liquid electrolyte. For applications with liquid electrolytes, such as the zinc-air battery or the alkaline fuel cell, the dry mixture method is used.
Catalyst
In acidic electrolytes the catalysts are usually precious metals like platinum, ruthenium, iridium and rhodium. In alkaline electrolytes, like zinc-air batteries and alkaline fuel cells, it is usual to use less expensive catalysts like carbon, manganese, silver, nickel foam or nickel mesh.
Application
At first solid electrodes were used in the Grove cell, Francis Thomas Bacon was the first to use gas diffusion electrodes for the Bacon fuel cell, converting hydrogen and oxygen at high temperature into electricity. Over the years, gas diffusion electrodes have been adapted for various other processes like:
Zinc-air battery since 1980
Nickel-metal hydride battery since 1990
Chlorine production by electrolysis of waste hydrochloric acid
Chloralkali process
Electrochemical reduction of carbon dioxide
Production
GDE is produced at all levels. It is not only used for research and development firms but for larger companies as well in the production of a membrane electrode assembly (MEA) that is in most cases used in a fuel cell or battery apparatus. Companies that specialize in high volume production of GDE include Johnson Matthey, Gore and Gaskatel. However, there are many companies which produce custom or low quantity GDE, allowing different shapes, catalysts and loadings to be evaluated as well, which include FuelCellStore, FuelCellsEtc, and many others.
See also
Anion exchange membrane
Concentration cell
Electrode potential
Glossary of fuel cell terms
Ion transport number
Ion selective electrode
Liquid junction potential
References
Electrodes
Fuel cells | Gas diffusion electrode | [
"Chemistry"
] | 1,284 | [
"Electrochemistry",
"Electrodes"
] |
16,378,585 | https://en.wikipedia.org/wiki/3-Nitrobenzanthrone | 3-Nitrobenzanthrone (3-nitro-7H-benz[de]anthracen-7-one) is a chemical compound emitted in diesel exhaust; it is a potent carcinogen. It produced the highest score ever reported in the Ames test, a standard measure of the cancer-causing potential of toxic chemicals, far greater than the previous known strongest (1,8-dinitropyrene, which is also found in diesel exhaust).
See also
Benzanthrone
References
Carcinogens
Nitroarenes
Ketones
Polycyclic aromatic compounds
IARC Group 2B carcinogens | 3-Nitrobenzanthrone | [
"Chemistry",
"Environmental_science"
] | 130 | [
"Ketones",
"Carcinogens",
"Toxicology",
"Functional groups"
] |
13,642,299 | https://en.wikipedia.org/wiki/Bead%20probe%20technology | Bead probe technology (BPT) is technique used to provide electrical access (called “nodal access”) to printed circuit board (PCB) circuitry for performing in-circuit testing (ICT). It makes use of small beads of solder placed onto the board's traces to allow measuring and controlling of the signals using a test probe. This permits test access to boards on which standard ICT test pads are not feasible due to space constraints.
Description
Bead probe technology is a probing method used to connect electronic test equipment to the device under test (DUT) within a bed-of-nails fixture. The technique was first used in the 1990s and was originally given the name "Waygood Bump" after one of its main proponents, Rex Waygood. Bead probes are also commonly referred to as solder bumps. They were designed for situations where less than 30 mil of space is available for test probe points on the PCB. They are used with standard ICT spring-loaded test probes to connect the test equipment to the DUT.
Construction
Bead probes are made from a very small "beads" of solder that fit atop of the PCB traces. They are manufactured using the same techniques as other solder features. Construction requires a hole to be opened in the solder mask, exposing the copper trace. This hole is sized to precisely control the amount of metal that forms the bead. Solder paste is applied to the location and reflowed. During reflow, solder flows and is drawn to the copper trace. Surface tension causes the bead to have a curved surface and rise above the solder mask, where it solidifies into a Bead Probe. The bead will be roughly obround in shape and may be 15-25 mils long. A properly constructed bead is the same width as the trace and just enough to clear the surrounding solder mask. The bead is then accessible for testing using a probe with a flat end, which can help compensate for the tolerance build up in the test fixture and PCB.
Advantages
Bead probe can be used in circuits where the pin-pitch is too fine to allow standard test pads. This is becoming more common as pin pitches continue to reduce, particularly in embedded devices. Typically bead probe widths are the width of the PCB traces with a length of about three times this. This allows a high degree of flexibility in their positioning, and can in some cases be applied retrospectively to existing layouts. Because of their small size, bead probes do not affect the signal quality of the signals transferring within the PCB trace. This is especially useful in high speed input/output (HSIO) interconnects, where a standard test pad would interfere with the signal.
Disadvantages
The soldering process that forms the bead probe leaves a coating of flux. Depending on the manufacturing process used, this flux can have varying levels of hardness. Flux with a waxy hardness can reduce the deformation force from the bead, preventing proper contact with the test probe during the first pass contact. This becomes less of an issue on subsequent contacts as the flux is displaced. Test probes with serrated ends of an appropriate size can also aid in measuring bead probes where flux is an issue.
Bead probes require the trace being tested to be located on the surface. This makes it unsuitable for testing high-density boards with many obscured or internal traces and buried vias.
Alternatives
Boundary scan integrates test components into the integrated circuits (ICs) mounted on the board, giving the ability to read or drive the ICs' pins. This allows for testing of interconnects for which physical access is not an option, such as BGA components or signal routes sandwiched between plane layers. A boundary scan controller uses four or more dedicated pins on the board to control test cells serially and receive the measured values. It has the disadvantage of needing board infrastructure to support boundary scan.
Test Access Component (TAC) uses a device such as a 0201 as a target for a large probe as in the solder bump examples. The advantage of this technique is that it provides two target points at each end of the package. The disadvantage of this technique is it can add process and cost to the PCB.
A technique has been described which opens up windows in the solder mask to create test points located directly on PCB tracks. This technique uses a conductive rubber tipped probe to contact the test point which could have a conductive Hot Air Solder Levelling (HASL) finish.
References
Printed circuit board manufacturing
Hardware testing | Bead probe technology | [
"Engineering"
] | 936 | [
"Electrical engineering",
"Electronic engineering",
"Printed circuit board manufacturing"
] |
13,642,373 | https://en.wikipedia.org/wiki/Upstream%20and%20downstream%20%28DNA%29 | In molecular biology and genetics, upstream and downstream both refer to relative positions of genetic code in DNA or RNA. Each strand of DNA or RNA has a 5' end and a 3' end, so named for the carbon position on the deoxyribose (or ribose) ring. By convention, upstream and downstream relate to the 5' to 3' direction respectively in which RNA transcription takes place. Upstream is toward the 5' end of the RNA molecule, and downstream is toward the 3' end. When considering double-stranded DNA, upstream is toward the 5' end of the coding strand for the gene in question and downstream is toward the 3' end. Due to the anti-parallel nature of DNA, this means the 3' end of the template strand is upstream of the gene and the 5' end is downstream.
Some genes on the same DNA molecule may be transcribed in opposite directions. This means the upstream and downstream areas of the molecule may change depending on which gene is used as the reference.
The terms upstream and downstream are sometimes also applied to a polypeptide sequence, where upstream refers to a region N-terminal and downstream to residues C-terminal of a reference point.
See also
Upstream and downstream (transduction)
References
Molecular biology
Orientation (geometry) | Upstream and downstream (DNA) | [
"Physics",
"Chemistry",
"Mathematics",
"Biology"
] | 260 | [
"Molecular biology stubs",
"Topology",
"Space",
"Geometry",
"Molecular biology",
"Biochemistry",
"Spacetime",
"Orientation (geometry)"
] |
13,642,379 | https://en.wikipedia.org/wiki/Upstream%20and%20downstream%20%28transduction%29 | The upstream signaling pathway is triggered by the binding of a signaling molecule, a ligand, to a receiving molecule, a receptor. Receptors and ligands exist in many different forms, and only recognize/bond to particular molecules. Upstream extracellular signaling transduce a variety of intracellular cascades.
Receptors and ligands are common upstream signaling molecules that dictate the downstream elements of the signal pathway. A plethora of different factors affect which ligands bind to which receptors and the downstream cellular response that they initiate.
TGF-β
TGF-β signaling begins when extracellular TGF-β ligands bind to type II and type I kinase receptors. Transforming growth factor-β (TGF-β) is a superfamily of cytokines that play a significant upstream role in the regulation of morphogenesis, homeostasis, cell proliferation, and differentiation. The significance of TGF-β is apparent from the human diseases that occur when TGF-β processes are disrupted, such as cancer and skeletal, intestinal and cardiovascular diseases. TGF-β is pleiotropic and multifunctional, meaning its members are able to act on a wide variety of cell types.
Mechanism
The effects of transforming growth factor-β (TGF-β) are determined by cellular context. Three kinds of contextual factors shape the TGF-β response: the signal transduction components, the transcriptional cofactors and the epigenetic state of the cell. The different ligands and receptors of TGF-β are significant as well in the composition of the signal transduction pathway.
the signal transduction components: ligand isoforms, ligand traps, co-receptors, receptor sub-types, inhibitory SMAD proteins, crosstalk inputs
the transcriptional cofactors of SMAD proteins: pluripotency factors, lineage regulators, DNA-binding cofactors, HATs and HDACs, SNF, chromatin readers
the epigenetic factors: heterochromatin, pluripotency marks, lineage marks, EMT marks, iPS cell marks, oncogenic marks.
Upstream pathway
The type II receptors phosphorylate the type I receptors; the type I receptors are then enabled to phosphorylate cytoplasmic R-Smads, which then act as transcriptional regulators. Signaling is initiated by the binding of TGF-β to its serine/threonine receptors, the type II and type I receptors on the cell membrane. Binding of a TGF-β family member induces assembly of a heterotetrameric complex of two type I and two type II receptors at the plasma membrane. Individual members of the TGF-β family bind to a characteristic combination of these type I and type II receptors. The type I receptors can be divided into two groups, depending on the cytoplasmic R-Smads that they bind and phosphorylate. The first group of type I receptors (Alk1/2/3/6) bind and activate the R-Smads Smad1/5/8. The second group of type I receptors (Alk4/5/7) act on the R-Smads Smad2/3. The phosphorylated R-Smads then form complexes and the signals are funneled through two regulatory Smad (R-Smad) channels (Smad1/5/8 or Smad2/3). After the ligand-receptor complexes phosphorylate the cytoplasmic R-Smads, the signal is sent through Smad1/5/8 or Smad2/3. This leads to the downstream signal cascade and cellular gene targeting.
Downstream pathway
TGF-β regulates multiple downstream processes and cellular functions. The pathway is highly variable based on cellular context. TGF-β downstream signaling cascade includes regulation of cell growth, cell proliferation, cell differentiation, and apoptosis.
See also
Upstream and downstream (DNA)
References
Molecular biology
Orientation (geometry) | Upstream and downstream (transduction) | [
"Physics",
"Chemistry",
"Mathematics",
"Biology"
] | 840 | [
"Topology",
"Space",
"Geometry",
"Molecular biology",
"Biochemistry",
"Spacetime",
"Orientation (geometry)"
] |
13,643,452 | https://en.wikipedia.org/wiki/Pole%20cell | In early Drosophila development, the embryo passes through thirteen nuclear divisions (karyokinesis) without cytokinesis, resulting in a multinucleate cell (generally referred to as a syncytium, but strictly a coenocyte). Pole cells are the cells that form at the polar ends of the Drosophila egg, which begin the adult germ cells. Pole plasm functions to bud the pole cells, as well as restore fertilization, even when the cell was previously sterile.
Formation
During early development of Drosophila, pole plasm assembles at the posterior pole of the Drosophila embryo, allowing determination of the abdominal patterning. Late in oogenesis, polar organelles, which are electro-negative granules, are present in the pole plasm. As the pole plasm matures further, it continues to contain polar granules into the development of germ cells, which develop into adult germ cells. Serine protease activity occurs less than 2 hours after the budding of the pole cells from the pole plasm, and ends just prior to the movement of the pole cells via gastrulation. The patterning of the pole cells is determined by the activation of oskar, which acts in the determination of body patterning segments. Pole cells begin their migration in a cluster in the midgut primordium. To reach their final destination, pole cells must migrate through the epithelial wall. It is known that the cells migrate through the epithelial wall, but little is known about the mechanisms used to do so.
References
Mitosis | Pole cell | [
"Biology"
] | 334 | [
"Cellular processes",
"Mitosis"
] |
13,643,487 | https://en.wikipedia.org/wiki/ABP%20Induction%20Systems | ABP Induction Systems is a global industrial firm that develops and integrates induction-related equipment and services for foundries, forges, tube and pipe producers, general manufacturers using heating equipment, and manufacturers of microelectronics. With foundry headquarters in Dortmund, Germany, induction heating headquarters in Massillon Ohio United States, and operations in China, Sweden, Thailand, Russia, Mexico, India, Japan, and Brazil, ABP operates worldwide.
History
In 1903, the predecessor of ABP, ASEA (Allmänna Svenska Elektriska Aktiebolaget) in Sweden built the first induction channel furnace for foundry operations.
In order to expand the know-how of the company and to consolidate its market position, ASEA decided to merge in 1988 with Brown Boveri from Switzerland and to form ABB (Asea Brown Boveri). The new firm added sophisticated control, automation and information technology to the products, and supplemented them with consulting, planning, start-up, and training services. Today their components and systems are used in nearly every foundry throughout the world.
In 2005, with the backing of a group of experienced foundry industry investors, the Foundry Systems group was acquired from ABB, and ABP Induction, LLC was born.
In 2008 ABP Induction and the Pillar Induction Company combined their operations into ABP Induction.
In January 2011, Ajax Tocco Magnethermic acquired the assets and intellectual property formerly known as Pillar from ABP Induction. The sale included the Brookfield, Wisconsin, and Sterling Heights, Michigan operations. ABP's induction melting operations in North Brunswick, NJ, and Massillon, OH, were unaffected, as were its international operations including ABP Induction Systems Shanghai, previously known as Pillar Shanghai.
Locations
ABP Induction Systems has locations in 11 countries:
ABP Induction LLC (Foundry Division), North Brunswick, NJ.
ABP Induction Systems GmbH, Dortmund.
ABP Induction Furnaces (PTY) Ltd. Johannesburg.
ABP Induction Systems (Shanghai) Co. Ltd, No.118, Shanghai.
ABP Induction Systems Pvt. Ltd, Vadodara.
ABP Induction Systems, S. de R.L. de C.V., Santa Catarina, N.L.
ABP Induction Systems GmbH, Moscow.
ABP Induction AB, Norberg.
ABP Induction Ltd, Pathumthanee.
Biuro Techniczno Handlowe, Katowice.
ABP Induction Systems K.K, Kobe.
References
External links
Official website
Manufacturing companies based in Dortmund
Industrial furnaces | ABP Induction Systems | [
"Chemistry"
] | 519 | [
"Metallurgical processes",
"Industrial furnaces"
] |
13,643,525 | https://en.wikipedia.org/wiki/Norwegian%20Black%20List | The Norwegian Black List (Fremmedartslista) is an overview of alien species in Norway, with ecological risk assessments for some of the species. The Norwegian Black List was first published in 2007 by the Norwegian Biodiversity Information Centre and developed in cooperation with 18 scientific experts from six research institutions.
The 2007 Norwegian Black List is the first issue, and is compiled as a counterpart to the Norwegian Red List of 2006.
The 2007 Norwegian Black List
The 2007 Norwegian Black List contains a total of 2483 species of plants, animals and other organisms, 217 of which are risk assessed. A set of criteria has been developed to ensure a standardised assessment of the ecological consequences of alien species.
The assessed species are placed in categories according to the risk they represent.
High risk – 93 species
Unknown risk – 83 species
Low risk – 41 species
Alien species on Svalbard, Bjørnøya and Jan Mayen are not assessed.
Result
Among the 93 species found to threaten the natural local biodiversity are bacteria, macroalgae, microalgae, pseudofungi, fungi, mosses, vascular plants, comb jellies, flatworms, roundworms, crustaceans, arachnids, insects, snails, bivalves, tunicates, fishes and mammals.
Among the high-risk vascular plants are Heracleum tromsoensis (also known as Heracleum persicum), sycamore maple (Acer pseudoplatanus) and garden lupin (Lupinus polyphyllus). Among the flatworms is Gyrodactylus salaris; among the crustaceans, the red king crab (Paralithodes camtschaticus) and the American lobster (Homarus americanus). Five species of mammals are noted as high-risk species: the West European hedgehog, European rabbit, southern vole, American mink and raccoon.
See also
IUCN Red List
References
External links
The 2007 Norwegian Black List – artsdatabanken.no
Nature conservation in Norway
Introduced species
Invasive species | Norwegian Black List | [
"Biology"
] | 422 | [
"Pests (organism)",
"Invasive species"
] |
13,643,527 | https://en.wikipedia.org/wiki/Historical%20definitions%20of%20races%20in%20India | Various attempts have been made, under the British Raj and since, to classify the population of India according to a racial typology. After independence, in pursuance of the government's policy to discourage distinctions between communities based on race, the 1951 Census of India did away with racial classifications. Today, the national Census of independent India does not recognise any racial groups in India.
Some scholars of the colonial epoch attempted to find a method to classify the various groups of India according to the predominant racial theories popular at that time in Europe. This scheme of racial classification was used by the British census of India, which was often integrated with caste system considerations.
Great races
Scientific racism of the late 19th and early 20th centuries divided humans into three races based on "common physical characteristics": Caucasoid, Mongoloid, and Negroid.
American anthropologist Carleton S. Coon wrote that "India is the easternmost outpost of the Caucasian racial region" and defined the Indid race that occupies the Indian subcontinent as beginning in the Khyber Pass. John Montgomery Cooper, an American ethnologist and Roman Catholic priest, on 26 April 1945 in a hearing before the United States Senate "To Permit all people from India residing in the United States to be Naturalised" recorded:
The theory propounded by German comparative philologists in the 1840s and 1850s "maintained that the speakers of Indo-European languages in India, Persia, and Europe were of the same culture and race." This led to a distinction between the Indo-Aryan peoples of northern India and the Dravidian peoples, located mostly in southern India with pockets in the Baluchistan Province in the northwest and in the eastern corner of the Bihar Province.
Although anthropologists classify Dravidians as Caucasoid with the "Mediterranean-Caucasoid" type being the most predominant, the racial status of the Dravidians was initially disputed. In 1898, ethnographer Friedrich Ratzel remarked about the "Mongolian features" of Dravidians, resulting in what he described as his "hypothesis of their [Dravidians] close connection with the population of Tibet", whom he adds "Tibetans may be decidedly reckoned in the Mongol race". In 1899, Science summarised Ratzel's findings over India with,
Edgar Thurston named what he called Homo Dravida and described it close to Australoids, with Caucasoid (Indo-Aryan) admixture. As evidence, he adduced the use of the boomerang by Kallar and Maravar warriors and the proficiency at tree-climbing among both the Kadirs of the Anamalai hills and the Dayaks of Borneo. In 1900, anthropologist Joseph Deniker said,
Deniker grouped Dravidians as a "subrace" under "Curly or Wavy Hair Dark Skin" in which he also includes the Ethiopian and Australian. Also, Deniker mentions that the "Indian race has its typical representatives among the Afghans, the Rajputs, the Brahmins and most of North India but it has undergone numerous alterations as a consequence with crosses with Assyriod, Dravidian, Mongol, Turkish, Arab and other elements."
In 1915, Arnold Wright said,
Wright also mentions that Richard Lydekker and Flowers classified Dravidians as Caucasian. Later, Carleton S. Coon, in his book The Races of Europe (1939), reaffirmed this assessment and classified the Dravidians as Caucasoid due to their "Caucasoid skull structure" and other physical traits such as noses, eyes and hair. Twentieth-century anthropologists likewise classified Dravidians as Caucasoid, with the "Mediterranean-Caucasoid" type being the most predominant.
See also
Mongoloid race
Brown people
Asian people
Ethnic groups of South Asia
Indian people
Caste system in India
Genetics and archaeogenetics of South Asia
mtDNA haplogroups in populations of South Asia
Y-DNA haplogroups in populations of South Asia
Anglo-Indian
Afro-Asians
Indo-African (disambiguation)
Indian South Africans
Afro-Asians in South Asia
Telingan
References
India
British Empire
Indigenous peoples of South Asia
Scientific racism | Historical definitions of races in India | [
"Biology"
] | 867 | [
"Biology theories",
"Obsolete biology theories",
"Scientific racism"
] |
13,643,581 | https://en.wikipedia.org/wiki/Phenylpropylaminopentane | 1-Phenyl-2-propylaminopentane (PPAP; developmental code name MK-306) is an experimental drug related to selegiline which acts as a catecholaminergic activity enhancer (CAE).
PPAP is a CAE and enhances the nerve impulse propagation-mediated release of norepinephrine and dopamine. It produces psychostimulant-like effects in animals. The drug is a phenethylamine and amphetamine derivative and was derived from selegiline.
PPAP was first described in the literature in 1988 and in the first major paper in 1992. It led to the development of the improved monoaminergic activity enhancer (MAE) benzofuranylpropylaminopentane (BPAP) in 1999. PPAP was a reference compound for studying the MAE system for many years. However, it was superseded by BPAP, which is more potent, selective, and also enhances serotonin. There has been interest in PPAP for potential clinical use in humans, including in the treatment of depression, attention deficit hyperactivity disorder (ADHD), and Alzheimer's disease.
Pharmacology
Pharmacodynamics
Catecholaminergic activity enhancer
PPAP is classified as a catecholaminergic activity enhancer (CAE), a drug that stimulates the impulse propagation-mediated release of the catecholamine neurotransmitters norepinephrine and dopamine in the brain.
Unlike stimulants such as amphetamine, which release a flood of monoamine neurotransmitters in an uncontrolled manner, (–)-PPAP instead only increases the amount of neurotransmitters that get released when a neuron is stimulated by receiving an impulse from a neighboring neuron. Both amphetamine and (–)-PPAP promote the release of monoamines; however, while amphetamine causes neurons to release neurotransmitter stores into the synapse regardless of external input, (–)-PPAP does not alter the pattern of neurotransmitter release but instead causes a larger amount of neurotransmitter than normal to be released each time the neuron is stimulated.
Recent findings have suggested that known synthetic monoaminergic activity enhancers (MAEs) like PPAP, BPAP, and selegiline may exert their effects via trace amine-associated receptor 1 (TAAR1) agonism. This was evidenced by the TAAR1 antagonist EPPTB reversing the MAE effects of BPAP and selegiline, among other findings. Another compound, rasagiline, has likewise been found to reverse the effects of MAEs, and has been proposed as a possible TAAR1 antagonist.
The therapeutic index for PPAP in animal models is greater than that of amphetamine while producing comparable improvements in learning, retention, and antidepressant effects. It has been found to reduce deficits induced by the dopamine depleting agent tetrabenazine in the shuttle box learning test in rats.
PPAP and selegiline are much less potent than BPAP as MAEs. Whereas PPAP and selegiline are active at doses of 1 to 5 mg/kg in vivo in rats, BPAP is active at doses of 0.05 to 10 mg/kg. BPAP is 130 times as potent as selegiline in the shuttle box test. In contrast to BPAP however, the MAE effects of PPAP and selegiline are not reversed by the BPAP antagonist 3-F-BPAP. In addition, whereas PPAP and selegiline are selective as MAEs of norepinephrine and dopamine, BPAP is a MAE of not only norepinephrine and dopamine but also of serotonin.
Other actions
Unlike the related CAE selegiline, (–)-PPAP has no activity as a monoamine oxidase inhibitor.
Chemistry
PPAP, also known as α,N-dipropylphenethylamine or as α-desmethyl-α,N-dipropylamphetamine, is a substituted phenethylamine and amphetamine derivative. It was derived from structural modification of selegiline (L-deprenyl; (R)-(–)-N,α-dimethyl-N-2-propynylphenethylamine).
Both racemic PPAP and subsequently its more active (–)- or (2R)-enantiomer (–)-PPAP have been employed in the literature.
PPAP is similar in chemical structure to propylamphetamine (N-propylamphetamine; PAL-424), but has an extended α-alkyl chain. It is also similar in structure to α-propylphenethylamine (PAL-550), but has an extended N-alkyl chain. A more well-known derivative of α-propylphenethylamine is pentedrone (α-propyl-β-keto-N-methylphenethylamine). N-Propylamphetamine and α-propylphenethylamine act as low-potency dopamine reuptake inhibitors ( = 1,013nM and 2,596nM, respectively) and are inactive as dopamine releasing agents in vitro.
A related MAE, BPAP, is a substituted benzofuran derivative and tryptamine relative that was derived from structural modification of PPAP. It was developed by replacement of the benzene ring in PPAP with a benzofuran ring. Another related MAE, indolylpropylaminopentane (IPAP), is a tryptamine derivative that is the analogue of PPAP in which the benzene ring has been replaced with an indole ring.
PPAP (MK-306) and its (–)-enantiomer (–)-PPAP must not be confused with the sigma receptor ligand R(−)-N-(3-phenyl-n-propyl)-1-phenyl-2-aminopropane ((–)-PPAP—same abbreviation) or with the cephamycin antibiotic cefoxitin (MK-306—same developmental code name).
History
Racemic PPAP (MK-306) was first described in the scientific literature in 1988 and a series of papers characterizing it were published in the early 1990s. The first major paper on the drug was published in 1992. It was synthesized by József Knoll and colleagues. The potencies of the different enantiomers of PPAP were assessed in 1994. Subsequent papers have employed (–)-PPAP.
Several patents of PPAP have been published.
The development of PPAP was critical in elucidating that the CAE effects of selegiline are unrelated to its monoamine oxidase inhibition. For many years, PPAP served as a reference compound in studying MAEs. However, it was eventually superseded by BPAP, which was discovered in 1999. This MAE is more potent and selective than PPAP and, in contrast to PPAP and selegiline, also enhances serotonin.
Research
PPAP has been proposed as a potential therapeutic agent for attention deficit hyperactivity disorder (ADHD), Alzheimer's disease, and depression based on preclinical findings. The developers of PPAP attempted to have it clinically studied, but were unsuccessful and it was never assessed in humans.
References
Antidepressants
Antiparkinsonian agents
Designer drugs
Drugs with unknown mechanisms of action
Enantiopure drugs
Experimental drugs
Monoaminergic activity enhancers
Phenethylamines
Pro-motivational agents
Stimulants
Substituted amphetamines
TAAR1 agonists | Phenylpropylaminopentane | [
"Chemistry"
] | 1,662 | [
"Stereochemistry",
"Enantiopure drugs"
] |
13,643,762 | https://en.wikipedia.org/wiki/Edixa%20Reflex | The Edixa Reflex cameras, introduced in 1954, were West Germany's most popular own series of SLR's with focal plane shutter. The original name of the first Edixa SLR was Komet. The Wirgin company had to change the name after complaints of two other companies with equally named products. Since 1955 the cameras got additional slow shutter speeds, and since 1956 cameras with aperture release shifter for the M42 lenses were available. Until 1959 four lines of Edixa SLRs were introduced:
Type A, with shutter speeds up to 1/1000 sec.
Type B, with aperture release mechanics
Type C, with meter
Type D, with exposure times up to 9 sec.
In 1960 the types B, C and D received the rapid-return mirror and improved shutter mechanics. Type A was replaced by the type S, which had a slower shutter. A special feature of this camera series was the exchangeable viewfinder unit. A simple top-viewfinder and a pentaprism finder were available. In 1960 the Model B had a name change and became the Edixa-Mat Flex Model B, the word Reflex being shortened to Flex, and it featured shutter speeds from 1/25 to 1/1000 of a second, the instant return mirror, automatic aperture actuation and interchangeable viewfinders. The waist-level finder was standard and the pentaprism was an optional extra. The retail price in the UK in 1960 was about £48.
External links
Edixa Reflex by Sylvain Halgand (French)
Edixa Reflex A at Schaum-Holzappel's (German source)
Wirgin Edixa SLRs at ukcamera.com
Single-lens reflex cameras
Wirgin cameras | Edixa Reflex | [
"Technology"
] | 360 | [
"System cameras",
"Single-lens reflex cameras"
] |
13,643,769 | https://en.wikipedia.org/wiki/Drawbar%20%28haulage%29 | A drawbar is a solid coupling between a hauling vehicle and its hauled load. Drawbars are in common use with rail transport; road trailers, both large and small, industrial and recreational; and agricultural equipment.
Agriculture and horse-drawn vehicles
Agricultural equipment is hauled by a tractor-mounted drawbar. Specialist agricultural tools such as ploughs are attached to specialist drawbars which have functions in addition to transmitting tractive force. This was partly made redundant with Ferguson's development of the three-point linkage in his famous TE20.
A wooden drawbar extends from the front of a wagon, cart, chariot or other horse-drawn vehicles to between the horses. A steel drawbar attaches a three-point hitch or other farm implement to a tractor.
Road
A drawbar is a towing or pushing connection between a tractive vehicle and its load.
Light vehicles
On light vehicles, the most common coupling is an A-frame drawbar coupled to a 1 7/8 inch or 50 mm tow ball. These drawbars transmit around 10% of the gross trailer weight through the coupling.
Heavy vehicles
The direction of haulage may be push or pull, though pushing tends to be for a pair of ballast tractors working together, one pulling and the other pushing an exceptional load on a specialist trailer. The most common drawbar configuration for heavy vehicles is an A-frame drawbar at the front of a full trailer that connects to a tow coupling on a hauling vehicle.
On heavy vehicles, the drawbar is coupled using a drawbar eye, typically of 40 mm or 50 mm diameter, connected to a bolt and pin coupling. Commonly seen brands include Ringfeder, V. Orlandi and Jost Rockinger. These drawbars transmit little or no downwards force through the coupling.
The drawbar should not be confused with the fifth wheel coupling. The drawbar requires a trailer which either loads the drawbar lightly (for example a small boat trailer or caravan), or loads it with the weight of the coupling components only (larger trailers, usually but not always with a steerable hauled axle, front or rear). By contrast, the fifth wheel is designed to transmit a proportion of the load's weight to the hauling vehicle. The drawbar configuration is mostly seen on hydraulic modular trailer and ballast tractor combinations used to haul oversize loads which require a special trailer and tractor.
Drawbar eye
A drawbar eye, also called a tow eye, is a mechanical part that connects an independent trailer or dolly via a drawbar coupling to a tractor. The eye is connected to the front end of a drawbar by bolting, flange-mounting or welding. Most are made from high-tension material to bear heavy loads while being pulled by the tractor. The eye is made in the shape of an "i" with a hole at the top, which is locked in the drawbar coupling, while the lower part is mounted to the drawbar, making it an essential connector between the drawbar and the drawbar coupling. The drawbar eye is used in many heavy transport operations around the world. It is mostly used for agriculture equipment, construction equipment, road trains, dolly trailers, full trailers and hydraulic modular trailers.
Rail
Two or more passenger or freight cars may be attached by means of a drawbar rather than a coupler. At each end of the permanently coupled vehicles there is a regular coupler, such as the North American Janney coupler or the Russian SA3 coupler. The use of a drawbar eliminates slack action.
Rail applications
MR-90
MR-63
MR-73
MPM-10
Drawgear
See also
Ballast tractor
Drawbar force gauge
Drawgear
Fifth Wheel and Gooseneck
Fifth wheel coupling
Jumper cable
Ringfeder
Three point hitch
Tow hitch
References
External links
Agricultural machinery
Couplers
Trucks
Transport operations
Articulated vehicles
Heavy equipment | Drawbar (haulage) | [
"Physics"
] | 768 | [
"Physical systems",
"Transport",
"Transport operations"
] |
13,644,049 | https://en.wikipedia.org/wiki/Michel%20Kervaire | Michel André Kervaire (26 April 1927 – 19 November 2007) was a French mathematician who made significant contributions to topology and algebra.
He introduced the Kervaire semi-characteristic. He was the first to show the existence of topological n-manifolds with no differentiable structure (using the Kervaire invariant), and (with John Milnor) computed the number of exotic spheres in dimensions greater than four. He is also well known for fundamental contributions to high-dimensional knot theory. The solution of the Kervaire invariant problem was announced by Michael Hopkins in Edinburgh on 21 April 2009.
Education
He was the son of André Kervaire (a French industrialist) and Nelly Derancourt. After completing high school in France, Kervaire pursued his studies at ETH Zurich (1947–1952), receiving a Ph.D. in 1955. His thesis, entitled Courbure intégrale généralisée et homotopie, was written under the direction of Heinz Hopf and Beno Eckmann.
Career
Kervaire was a professor at New York University's Courant Institute from 1959 to 1971, and then at the University of Geneva from 1971 to 1997, when he retired. He received an honorary doctorate from the University of Neuchâtel in 1986; he was also an honorary member of the Swiss Mathematical Society.
See also
Homology sphere
Kervaire manifold
Plus construction
Selected publications
This paper describes the structure of the group of smooth structures on an n-sphere for n > 4.
Notes
References
External links
Michel Kervaire's work in surgery and knot theory (Slides of lectures given by Andrew Ranicki at the Kervaire Memorial Symposium, Geneva, February 2009)
20th-century French mathematicians
20th-century Swiss mathematicians
Topologists
Algebraists
Courant Institute of Mathematical Sciences faculty
ETH Zurich alumni
People from Częstochowa
1927 births
2007 deaths
Academic staff of the University of Geneva
Swiss expatriates in the United States | Michel Kervaire | [
"Mathematics"
] | 400 | [
"Topologists",
"Topology",
"Algebra",
"Algebraists"
] |
13,645,483 | https://en.wikipedia.org/wiki/Samsung%20Contact | Samsung Contact was an enterprise email and groupware server that ran on Linux and HP-UX. It provided email, calendars and other collaborative software. It could be accessed from many different clients, most notably Microsoft Outlook. It was based on HP OpenMail, which was licensed from Hewlett-Packard.
History
Development began in November 2001. Following a major reorganization of its sponsor entity, Samsung SDS, most of the original core developers were laid off in 2003. Samsung Contact was discontinued at the end of 2007.
See also
Scalix
References
Contact | Samsung Contact | [
"Engineering"
] | 114 | [
"Software engineering",
"Software engineering stubs"
] |
13,646,046 | https://en.wikipedia.org/wiki/R2A%20agar | R2A agar (Reasoner's 2A agar) is a culture medium developed to study bacteria which normally inhabit potable water. These bacteria tend to be slow-growing species and would quickly be suppressed by faster-growing species on a richer culture medium.
Since its development in 1985, it has been found to allow the culturing of many other bacteria that will not readily grow on fuller, complex organic media.
Typical composition (% w/v)
Proteose peptone, 0.05%
Casamino acids, 0.05%
Yeast extract, 0.05%
Dextrose, 0.05%
Soluble starch, 0.05%
Dipotassium phosphate, 0.03%
Magnesium sulfate, 0.005%
Sodium pyruvate, 0.03%
Agar, 1.5%
Final pH 7.2 ± 0.2 @ 25 °C
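Because the composition is given as % w/v (grams of ingredient per 100 mL of medium), the weigh-out for a given batch is simple arithmetic. The sketch below assumes a hypothetical 1-litre batch; the percentages are those listed above, while the batch volume and the script itself are illustrative only and not from the article.

```python
# % w/v means grams of ingredient per 100 mL of medium.
composition_w_v = {
    "proteose peptone": 0.05,
    "casamino acids": 0.05,
    "yeast extract": 0.05,
    "dextrose": 0.05,
    "soluble starch": 0.05,
    "dipotassium phosphate": 0.03,
    "magnesium sulfate": 0.005,
    "sodium pyruvate": 0.03,
    "agar": 1.5,
}

batch_volume_ml = 1000  # hypothetical 1 L batch
for ingredient, percent in composition_w_v.items():
    grams = percent / 100 * batch_volume_ml
    print(f"{ingredient}: {grams:g} g")  # e.g. agar: 15 g, dextrose: 0.5 g
```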
References
Microbiological media
Cell culture media | R2A agar | [
"Biology"
] | 192 | [
"Microbiological media",
"Microbiology equipment"
] |
13,646,381 | https://en.wikipedia.org/wiki/Open%20Virtualization%20Format | Open Virtualization Format (OVF) is an open standard for packaging and distributing virtual appliances or, more generally, software to be run in virtual machines.
The standard describes an "open, secure, portable, efficient and extensible format for the packaging and distribution of software to be run in virtual machines". The OVF standard is not tied to any particular hypervisor or instruction set architecture. The unit of packaging and distribution is a so-called OVF Package which may contain one or more virtual systems each of which can be deployed to a virtual machine.
History
In September 2007 VMware, Dell, HP, IBM, Microsoft and XenSource submitted to the Distributed Management Task Force (DMTF) a proposal for OVF, then named "Open Virtual Machine Format".
The DMTF subsequently released the OVF Specification V1.0.0 as a preliminary standard in September, 2008, and V1.1.0 in January, 2010. In January 2013, DMTF released the second version of the standard, OVF 2.0 which applies to emerging cloud use cases and provides important developments from OVF 1.0 including improved network configuration support and package encryption capabilities for safe delivery.
ANSI has ratified OVF 1.1.0 as ANSI standard INCITS 469-2010.
OVF 1.1 was adopted in August 2011 by ISO/IEC JTC 1/SC 38 of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) as an International Standard ISO/IEC 17203.
OVF 2.0 brings an enhanced set of capabilities to the packaging of virtual machines, making the standard applicable to a broader range of cloud use cases that are emerging as the industry enters the cloud era. The most significant improvements include support for network configuration along with the ability to encrypt the package to ensure safe delivery.
Design
An OVF package consists of several files placed in one directory. An OVF package always contains exactly one OVF descriptor (a file with extension .ovf). The OVF descriptor is an XML file which describes the packaged virtual machine; it contains the metadata for the OVF package, such as name, hardware requirements, references to the other files in the OVF package and human-readable descriptions. In addition to the OVF descriptor, the OVF package will typically contain one or more disk images, and optionally certificate files and other auxiliary files.
The entire directory can be distributed as an Open Virtual Appliance (OVA) package, which is a tar archive file with the OVF directory inside.
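As a rough illustration of the OVA layout just described, the sketch below bundles a descriptor and a disk image into a tar archive using Python's standard tarfile module. The file names are made up for the example, and placing the .ovf descriptor first reflects a common tooling convention under this assumption rather than anything verified here.

```python
import tarfile

# Hypothetical package contents; real appliances may also carry
# a manifest (.mf) and certificate (.cert) file.
files = ["appliance.ovf", "disk1.vmdk"]

# An OVA is a plain (uncompressed) tar archive; the descriptor is
# conventionally added first so tools can read it without scanning.
with tarfile.open("appliance.ova", "w") as ova:
    for name in files:
        ova.add(name)
```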
Industry support
OVF has generally been broadly accepted. Several virtualization players in the industry have announced support for OVF.
See also
VHD (file format)
VMDK
References
External links
DMTF OVF Whitepaper
VMware OVF Whitepaper
Computer standards
DMTF standards
ISO/IEC standards
Open standards
Virtualization | Open Virtualization Format | [
"Technology",
"Engineering"
] | 630 | [
"Computer standards",
"DMTF standards",
"Virtualization",
"Computer networks engineering"
] |
13,646,468 | https://en.wikipedia.org/wiki/Tank%20leaching | In metallurgical processes tank leaching is a hydrometallurgical method of extracting valuable material (usually metals) from ore.
Tank vs. vat leaching
Factors
Tank leaching is usually differentiated from vat leaching on the following factors:
In tank leaching the material is ground sufficiently fine to form a slurry or pulp, which can flow under gravity or when pumped. In vat leaching a typically coarser material is placed in the vat for leaching, which reduces the cost of size reduction;
Tanks are typically equipped with agitators, baffles and gas introduction equipment designed to maintain the solids in suspension in the slurry and achieve leaching. Vats usually do not contain much internal equipment, except for agitators.
Tank leaching is typically continuous, while vat leaching is operated in a batch fashion; this is not always the case, however, and commercial processes using continuous vat leaching have been tested;
Typically the retention time required for vat leaching is more than that for tank leaching to achieve the same percentage of recovery of the valuable material being leached;
In a tank leach the slurry is moved, while in a vat leach the solids remain in the vat, and solution is moved.
Processes
Tank and vat leaching involves placing ore, usually after size reduction and classification, into large tanks or vats at ambient operating conditions containing a leaching solution and allowing the valuable material to leach from the ore into solution.
In tank leaching the ground, classified solids are already mixed with water to form a slurry or pulp, and this is pumped into the tanks. Leaching reagents are added to the tanks to achieve the leaching reaction. In a continuous system the slurry will then either overflow from one tank to the next, or be pumped to the next tank. Ultimately the “pregnant” solution is separated from the slurry using some form of liquid/solid separation process, and the solution passes on to the next phase of recovery.
In vat leaching the solids are loaded into the vat, once full the vat is flooded with a leaching solution. The solution drains from the tank, and is either recycled back into the vat or is pumped to the next step of the recovery process. Vat leach units are rectangular containers (drums, barrels, tanks or vats), usually very big and made of wood or concrete, lined with material resistant to the leaching media. The treated ore is usually coarse.
The vats are usually run sequentially to maximize the contact time between the ore and the reagent. In such a series the leachate collected from one container is added to another vat with fresher ore.
As mentioned previously, tanks are equipped with agitators to keep the solids in suspension and improve the solid-liquid-gas contact. Agitation is further assisted by the use of tank baffles, which increase the efficiency of agitation and prevent centrifuging of slurries in circular tanks.
Extraction efficiency factors
Aside from chemical requirements several key factors influence extraction efficiency:
Retention time - refers to the time spent in the leaching system by the solids. This is calculated as the total volumetric capacity of the leach tanks divided by the volumetric throughput of the solid/liquid slurry (a worked sketch of this calculation follows this list). Retention time is commonly measured in hours for precious metals recovery. A sequence of leach tanks is referred to as a leach "train", and retention time is measured considering the total volume of the leach train. The desired retention time is determined during the testing phase, and the system is then designed to achieve this.
Size - The ore must be ground to a size that exposes the desired mineral to the leaching agent (referred to as “liberation”), and in tank leaching this must be a size that can be suspended by the agitator. In vat leaching this is the size that is the most economically viable, where the recovery achieved as ore is ground finer is balanced against the increased cost of processing the material.
Slurry density - The slurry density (percent solids) determines retention time. The settling rate and viscosity of the slurry are functions of the slurry density. The viscosity, in turn, controls the gas mass transfer and the leaching rate.
Numbers of tanks - Agitated tank leach circuits are typically designed with no less than four tanks and preferably more to prevent short-circuiting of the slurry through the tanks.
Dissolved gas - Gas is often injected below the agitator or into the vat to obtain the desired dissolved gas levels – typically oxygen, in some base metal plants sulphur dioxide may be required.
Reagents - Adding and maintaining the appropriate amount of reagents throughout the leach circuit is critical to a successful operation. Adding insufficient quantities of reagents reduces the metal recovery but adding excess reagents increases the operating costs without recovering enough additional metal to cover the cost of the reagents.
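The retention-time calculation referenced in the list above is straightforward division. In the minimal sketch below, the tank volume, number of tanks and slurry throughput are hypothetical figures chosen for the example, not values from the article.

```python
def retention_time_hours(tank_volume_m3: float, num_tanks: int,
                         slurry_throughput_m3_per_h: float) -> float:
    """Retention time = total volume of the leach train / volumetric slurry throughput."""
    total_volume = tank_volume_m3 * num_tanks
    return total_volume / slurry_throughput_m3_per_h

# Hypothetical example: a train of 6 tanks of 500 m^3 each,
# fed with 125 m^3/h of slurry, gives 24 hours of retention.
print(retention_time_hours(500, 6, 125))  # -> 24.0
```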
The tank leaching method is commonly used to extract gold and silver from ore, such as with the Sepro Leach Reactor.
References
Metallurgical processes | Tank leaching | [
"Chemistry",
"Materials_science"
] | 1,065 | [
"Metallurgical processes",
"Metallurgy"
] |
13,646,690 | https://en.wikipedia.org/wiki/False%20morel | The name false morel is given to several species of mushroom which bear a resemblance to the highly regarded true morels of the genus Morchella. Like Morchella, false morels are members of the Pezizales, but within that group represent several unrelated taxa scattered through the families Morchellaceae, Discinaceae, and Helvellaceae, with the epithet "false morel" most often ascribed to members of the genus Gyromitra.
Compared to morels
When gathering morels for eating, care must be taken to distinguish them from potentially poisonous lookalikes. While a great many morel lookalikes, and even morels themselves, are toxic or cause gastrointestinal upset when consumed raw, some, such as Gyromitra esculenta, remain toxic even after conventional cooking methods. Although some false morels can be eaten without ill effect, others can cause severe gastrointestinal upset, loss of muscular coordination (including cardiac muscle), or even death. Incidents of poisoning usually occur when they are eaten in large quantities, inadequately cooked, or over several days in a row. Some species contain gyromitrin, a toxic and carcinogenic organic compound, which is hydrolyzed in the body into monomethylhydrazine (MMH). Gyromitra esculenta in particular has been reported to be responsible for up to 23% of mushroom fatalities each year in Poland. G. esculenta, regarded as delicious, is known to be potentially deadly when eaten fresh, but research in the 1990s showed that toxins remain even after proper treatment. While many people freely eat false morels, potentially even toxic species, without apparent harm, some people have developed acute toxicity and recent evidence suggests that there may be long-term health risks as well.
The key morphological features distinguishing some of the false morels from true morels are as follows:
Gyromitra species often have a "wrinkled" or "cerebral" (brain-like) appearance to the cap due to multiple wrinkles and folds, rather than the honeycomb appearance of true morels due to ridges and pits. Some species of Gyromitra do not contain gyromitrin, but are potentially easy to confuse with Gyromitra esculenta and other toxic species in the areas where their ranges overlap.
Gyromitra esculenta has a cap that is usually reddish-brown in colour, but sometimes also chestnut, purplish-brown, or dark brown.
Gyromitra species are typically chambered in longitudinal section, while Verpa species contain a cottony substance inside their stem, in contrast to true morels which are always hollow.
The caps of Verpa species (V. bohemica, V. conica and others) are attached to the stem only at the apex (top of the cap), unlike true morels which have caps that are attached to the stem at, or near the base of the cap, or halfway along the stem ("half-free morels"). The easiest way to distinguish Verpa species from Morchella species is to slice them longitudinally. Since all known Verpa species are safe to eat if prepared similarly to morels, there is little to no risk in mistaking them for morels.
See also
Gyromitrin, a toxic chemical in Gyromitra fungi
References
Pezizales
Fungus common names | False morel | [
"Biology"
] | 706 | [
"Fungus common names",
"Fungi",
"Common names of organisms"
] |
13,647,371 | https://en.wikipedia.org/wiki/MRS%20agar | De Man–Rogosa–Sharpe agar, often abbreviated to MRS, is a selective culture medium designed to favour the luxuriant growth of Lactobacilli for lab study. Developed in 1960, this medium was named for its inventors, , , and . It contains sodium acetate, which suppresses the growth of many competing bacteria (although some other Lactobacillales, like Leuconostoc and Pediococcus, may grow). This medium has a clear brown colour.
Typical composition
MRS agar typically contains (w/v):
1.0% peptone
1.0% beef extract
0.4% yeast extract
2.0% glucose
0.5% sodium acetate trihydrate
0.1% polysorbate 80 (also known as Tween 80)
0.2% dipotassium hydrogen phosphate
0.2% triammonium citrate
0.02% magnesium sulfate heptahydrate
0.005% manganese sulfate tetrahydrate
1.0% agar
pH adjusted to 6.2 at 25 °C
The yeast/meat extracts and peptone provide sources of carbon, nitrogen, and vitamins for general bacterial growth. The yeast extract also contains vitamins and amino acids required by Lactobacilli. Polysorbate 80 is a surfactant which assists in nutrient uptake by Lactobacilli. Magnesium sulfate and manganese sulfate provide cations used in metabolism.
See also
MacConkey agar (culture medium designed to grow Gram-negative bacteria and differentiate them for lactose fermentation).
References
Microbiological media | MRS agar | [
"Biology"
] | 340 | [
"Microbiological media",
"Microbiology equipment"
] |
13,648,127 | https://en.wikipedia.org/wiki/ESA%20Centre%20for%20Earth%20Observation | The ESA Centre for Earth Observation (also known as the European Space Research Institute or ESRIN) is a research centre belonging to the European Space Agency (ESA), located in Frascati (Rome) Italy. It is dedicated to research involving earth observation data taken from satellites, among other specialised activities. The establishment currently hosts the European Space Agency's development team for the Vega launcher.
History
ESLAR, a laboratory for advanced research, was created in 1966 mainly to break the political deadlock over the location of ESLAB. Later renamed ESRIN, an acronym for European Space Research Institute, ESLAR was based in Frascati (Italy). The ESRO Convention describes ESRIN's role in the following manner:
The facility began acquiring data from environmental satellites within the Earthnet programme in the 1970s.
See also
European Astronaut Centre (EAC)
European Centre for Space Applications and Telecommunications (ECSAT)
European Space Agency (ESA)
European Space Astronomy Centre (ESAC)
European Space Operations Centre (ESOC)
European Space Research and Technology Centre (ESTEC)
European Space Tracking Network (ESTRACK)
Guiana Space Centre (CSG)
References
External links
European Space Agency facilities
1966 establishments in Italy
Remote sensing organizations | ESA Centre for Earth Observation | [
"Astronomy"
] | 245 | [
"Outer space stubs",
"Outer space",
"Astronomy stubs"
] |
13,649,015 | https://en.wikipedia.org/wiki/Wey%20%28unit%29 |
The wey or weight (Old English: waege, "weight") was an English unit of weight and dry volume by at least 900 AD, when it began to be mentioned in surviving legal codes.
Weight
A statute of Edgar the Peaceful set a price floor on wool by threatening both the seller and purchaser who agreed to trade a wool wey for less than 120 pence (i.e., ½ pound of sterling silver per wey), but the wey itself varied over time and by location. The wey was standardized as 14 stones of 12½ merchants' pounds each (175 lbs. or around 76.5 kg) by the time of the Assize of Weights and Measures. This wey was applied to lead, soap, and cheese, as well as wool. Two wey made a sack, 12 a load, and 24 a last.
The wool wey was later figured as 2 hundredweight of 8 stone of 14 avoirdupois pounds each (224 lbs. or about 101.7 kg).
The Suffolk wey was 356 avoirdupois pounds (around 161.5 kg). It was used as a measure for butter and cheese.
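A brief sketch of the arithmetic behind the figures above. The avoirdupois pound of 0.45359237 kg is the modern exact definition; the mass of the merchants' pound is not stated in the article and is back-calculated here from its own 175 lb ≈ 76.5 kg equivalence, so it should be read as an illustration rather than a historical value.

```python
AVOIRDUPOIS_LB_KG = 0.45359237   # modern exact definition
MERCHANTS_LB_KG = 76.5 / 175     # ~0.437 kg, inferred from the article's own figures

# Assize-era wool wey: 14 stone of 12.5 merchants' pounds each
assize_wey_lb = 14 * 12.5        # 175 merchants' pounds
print(round(assize_wey_lb * MERCHANTS_LB_KG, 1))   # 76.5

# Later wool wey: 2 hundredweight of 8 stone of 14 avoirdupois pounds each
wool_wey_lb = 2 * 8 * 14         # 224 lb
print(round(wool_wey_lb * AVOIRDUPOIS_LB_KG, 1))   # 101.6

# Suffolk wey of 356 avoirdupois pounds
print(round(356 * AVOIRDUPOIS_LB_KG, 1))           # 161.5
```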
Volume
As a measure of volume for dry commodities, it denoted roughly 40 bushels or .
See also
English units
Stone, sack, last, & load
Whey (unit)
References
Units of mass
Units of volume
Obsolete units of measurement | Wey (unit) | [
"Physics",
"Mathematics"
] | 292 | [
"Obsolete units of measurement",
"Matter",
"Units of volume",
"Quantity",
"Units of mass",
"Mass",
"Units of measurement"
] |
13,649,130 | https://en.wikipedia.org/wiki/ISO/IEC%20JTC%201/SC%2022 | ISO/IEC JTC 1/SC 22 Programming languages, their environments and system software interfaces is a standardization subcommittee of the Joint Technical Committee ISO/IEC JTC 1 of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) that develops and facilitates standards within the fields of programming languages, their environments and system software interfaces. ISO/IEC JTC 1/SC 22 is also sometimes referred to as the "portability subcommittee". The international secretariat of ISO/IEC JTC 1/SC 22 is the American National Standards Institute (ANSI), located in the United States.
History
ISO/IEC JTC 1/SC 22 was created in 1985, with the intention of creating a JTC 1 subcommittee that would address standardization within the field of programming languages, their environments and system software interfaces. Before the creation of ISO/IEC JTC 1/SC 22, programming language standardization was addressed by ISO TC 97/SC 5. Many of the original working groups of ISO/IEC JTC 1/SC 22 were inherited from a number of the working groups of ISO TC 97/SC 5 during its reorganization, including ISO/IEC JTC 1/SC 22/WG 2 – Pascal (originally ISO TC 97/SC 5/WG 4), ISO/IEC JTC 1/SC 22/WG 4 – COBOL (originally ISO TC 97/SC 5/ WG 8), and ISO/IEC JTC 1/SC 22/WG 5 – Fortran (originally ISO TC 97/SC 5/WG 9). Since then, ISO/IEC JTC 1/SC 22 has created and disbanded many of its working groups in response to the changing standardization needs of programming languages, their environments and system software interfaces.
Scope and mission
The scope of ISO/IEC JTC 1/SC 22 is the standardization of programming languages (such as COBOL, Fortran, Ada, C, C++, and Prolog), their environments (such as POSIX and Linux), and systems software interfaces, such as:
Specification techniques
Common facilities and interfaces
ISO/IEC JTC 1/SC 22 also produces common language-independent specifications to facilitate standardized bindings between programming languages and system services, as well as greater interaction between programs written in different languages.
The scope of ISO/IEC JTC 1/SC 22 does not include specialized languages or environments within the program of work of other subcommittees or technical committees.
The mission of ISO/IEC JTC 1/SC 22 is to improve portability of applications, productivity and mobility of programmers, and compatibility of applications over time within high level programming environments. The three main goals of ISO/IEC JTC 1/SC 22 are:
To support the current global investment in software applications through programming languages standardization
To improve programming language standardization based on previous specification experience in the field
To respond to emerging technological opportunities
Structure
Although ISO/IEC JTC 1/SC 22 has had a total of 24 working groups (WGs), many have been disbanded when the focus of the working group was no longer applicable to the current standardization needs. ISO/IEC JTC 1/SC 22 is currently made up of eight (8) active working groups, each of which carries out specific tasks in standards development within the field of programming languages, their environments and system software interfaces. The focus of each working group is described in the group’s terms of reference. Working groups of ISO/IEC JTC 1/SC 22 are:
Collaborations
ISO/IEC JTC 1/SC 22 works in close collaboration with a number of other organizations or subcommittees, some internal to ISO, and others external to it. Organizations in liaison with ISO/IEC JTC 1/SC 22, internal to ISO are:
ISO/IEC JTC 1/SC 2, Coded character sets
ISO/IEC JTC 1/SC 7, Software and systems engineering
ISO/IEC JTC 1/SC 27, IT Security techniques
ISO/TC 37, Terminology and other language and content resources
ISO/TC 215, Health informatics
Organizations in liaison to ISO/IEC JTC 1/SC 22 that are external to ISO are:
Ecma International
Linux Foundation
Association for Computing Machinery Special Interest Group on Ada (ACM SIGAda)
Ada-Europe
MISRA
Member countries
Countries pay a fee to ISO to be members of subcommittees.
The 23 "P" (participating) members of ISO/IEC JTC 1/SC 22 are: Austria, Bulgaria, Canada, China, Czech Republic, Denmark, Finland, France, Germany, Israel, Italy, Japan, Kazakhstan, Republic of Korea, Netherlands, Poland, Russian Federation, Slovenia, Spain, Switzerland, Ukraine, United Kingdom, and United States of America.
The 21 "O" (observing) members of ISO/IEC JTC 1/SC 22 are: Argentina, Belgium, Bosnia and Herzegovina, Cuba, Egypt, Ghana, Greece, Hungary, Iceland, India, Indonesia, Islamic Republic of Iran, Ireland, Democratic People’s Republic of Korea, Malaysia, New Zealand, Norway, Portugal, Romania, Serbia, and Thailand.
Published standards and technical reports
ISO/IEC JTC 1/SC 22 currently has 98 published standards in programming languages, their environments and system software interfaces. Some standards published by ISO/IEC JTC 1/SC 22 within this field include:
See also
ISO/IEC JTC 1
List of ISO standards
American National Standards Institute
International Organization for Standardization
International Electrotechnical Commission
References
External links
ISO/IEC JTC 1/SC 22 page at ISO
022
Programming language standards | ISO/IEC JTC 1/SC 22 | [
"Technology"
] | 1,130 | [
"Computer standards",
"Programming language standards"
] |
13,649,448 | https://en.wikipedia.org/wiki/Obesogen | Obesogens are certain chemical compounds that are hypothesised to disrupt normal development and balance of lipid metabolism, which in some cases, can lead to obesity. Obesogens may be functionally defined as chemicals that inappropriately alter lipid homeostasis and fat storage, change metabolic setpoints, disrupt energy balance or modify the regulation of appetite and satiety to promote fat accumulation and obesity.
There are many different proposed mechanisms through which obesogens can interfere with the body's adipose tissue biology. These mechanisms include alterations in the action of metabolic sensors; dysregulation of sex steroid synthesis, action or breakdown; changes in the central integration of energy balance including the regulation of appetite and satiety; and reprogramming of metabolic setpoints. Some of these proposed pathways include inappropriate modulation of nuclear receptor function which therefore allows the compounds to be classified as endocrine disrupting chemicals that act to mimic hormones in the body, altering the normal homeostasis maintained by the endocrine system.
Obesogens have been detected in the body both as a result of intentional administration of obesogenic chemicals in the form of pharmaceutical drugs such as diethylstilbestrol, selective serotonin reuptake inhibitors, and thiazolidinedione and as a result of unintentional exposure to environmental obesogens such as tributyltin, bisphenol A, diethylhexylphthalate, and perfluorooctanoate.
The term obesogen was coined in 2006 by Felix Grün and Bruce Blumberg of the University of California, Irvine.
Mechanisms of action
There are many ways in which obesogenic drugs and chemicals can disrupt the body's adipose tissue biology. The three main mechanisms of action include
alterations in the action of metabolic sensors in which obesogens mimic metabolic ligands acting to either block or upregulate hormone receptors
dysregulation of sex steroid synthesis, in which they alter the ratio of sex hormones leading to changes in their control of lipid balance
changes in the central integration of energy balance including the regulation of appetite and satiety in the brain and the reprogramming of metabolic setpoints.
Metabolic sensors
Obesogenic drugs and chemicals have been shown to target transcription regulators found in gene networks that function to control intracellular lipid homeostasis and proliferation and differentiation on adipocytes. The major group of regulators that is targeted is a group of nuclear hormone receptors known as peroxisome proliferator activated receptors (PPARα, δ, and γ). These hormone receptors sense a variety of metabolic ligands including lipophilic hormones, dietary fatty acids and their metabolites, and, depending on the varying levels of these ligands, control transcription of genes involved in balancing the changes in lipid balance in the body. To become active and properly function as metabolic sensors and transcription regulators, the PPAR receptors must heterodimerize with another receptor known as the 9-cis retinoic acid receptor (RXR). The RXR receptor itself is the second major target of obesogens next to the PPAR receptors.
The PPARα receptor, when complexed with RXR and activated by the binding of a lipid, promotes peroxisome proliferation leading to increased fatty acid β-oxidation. Substances, such a xenobiotics that target and act as agonists of PPARα, typically act to reduce overall serum concentrations of lipids. In contrast, the PPARγ receptor, when complexed with RXR and activated by the binding of fatty acids or their derivatives, promotes lipid biosynthesis and storage of lipids is favored over fatty acid oxidation. In addition, activation promotes differentiation of preadipocytes and the conversion of mesenchymal progenitor cells to preadipocytes in adipose tissues. Substances that target and act as agonists of PPARγ/RXR complex typically act to increase overall serum concentrations of lipids.
Obesogens that target the PPARγ/RXR complex mimic the metabolic ligands and activate the receptor leading to upregulation of lipid accumulation which explains their obesogenic effects. However, in the case of obesogens that target the PPARα/RXR complex, which when stimulated reduces adipose mass and body weight, there are a few explanations as to how they promote obesity.
The ligand binding pockets of PPARs are very large and relatively nonspecific, allowing different isoforms of the receptor (PPARα, δ, and γ) to be activated by the same agonist ligands or their metabolites. In addition, fatty acid oxidation stimulated by PPARα requires continuous stimulation, while only a single activation event of PPARγ is required to permanently increase adipocyte differentiation and number. Therefore, it may be the case that metabolites of PPARα-targeting obesogens are also activating PPARγ, providing the single activation event needed to potentially lead to a pro-adipogenic response.
A second explanation points to specific PPARα targeters that have been shown to additionally cause abnormal transcriptional regulation of testicular steroidogenesis when introduced during fetal development. This abnormal regulation leads to a decreased level of androgen in the body which, itself, is obesogenic.
Finally, if PPARα activation occurs during critical periods of development, the resulting decrease in lipid concentration in the developing fetus is recognized by the fetal brain as undernourishment. In this case, the developing brain makes what will become permanent changes to the body's metabolic control, leading to long-term upregulation of lipid storage and maintenance.
Sex steroid dysregulation
Sex steroids normally play a significant role in lipid balance in the body. Aided by other peptide hormones such as growth hormone, they act against the lipid accumulation mediated by insulin and cortisol by mobilizing lipid stores that are present. Exposure to obesogens often leads to a deficiency or change in the ratio between androgen and estrogen sex steroid levels, which modifies this method of lipid balance resulting in lowered growth hormone secretion, hypocortisolemia (low levels of circulating cortisol), and increased resistance to insulin effects.
This alteration in sex steroid levels due to obesogens can vary enormously according to both the sex of the exposed individual as well as the timing of the exposure. If the chemicals are introduced at critical windows of development, the vulnerability of an individual to their effects is much higher than if exposure occurs later in adulthood. It has been shown that obesogenic effects are apparent in female mice exposed to both phytoestrogens and DES during their neonatal periods of development, as they, though born with a lower birth weight, almost always developed obesity, high leptin levels, and altered glucose response pathways. Both phytoestrogen and DES exposed male mice did not develop obesity and, rather, showed decreased body weights with increased exposure confirming the role of gender differences in exposure response. Further studies have shown positive correlations for serum BPA levels with obese females in the human population, along with other xenoestrogen compounds suggesting the parallel roles that these effects may be having on humans.
Central balance of energy
While hormone receptors tend to be the most obvious candidates for targets of obesogens, central mechanisms that balance and regulate the body's nutritional changes on a day-to-day basis as a whole cannot be overlooked. The HPA axis (hypothalamic-pituitary-adrenal) is involved in controlling appetite and energy homeostasis circuits which are mediated by a large number of monoaminoergic, peptidergic (use of hormones as neurotransmitters), and endocannabinoid signals that come from the digestive tract, adipose tissues, and from within the brain. It is these types of signals that provide a likely target for obesogens that have shown to have weight altering effects.
Neuroendocrine effects
Neurological disorders may enhance the susceptibility to develop the metabolic syndrome that includes obesity. Many neuropharmaceuticals used to alter behavioral pathways in patients with neurological disorders have been shown to have metabolism-altering side effects leading to obesogenic phenotypes as well. These findings give evidence to conclude that an increase in lipid accumulation can result from the targeting of neurotransmitter receptors by foreign chemicals.
Peptidergic hormones
Several peptidergic hormone pathways controlling appetite and energy balance —such as those involving ghrelin, neuropeptide Y, and agouti-related peptide — are particularly sensitive to changes in nuclear receptor signaling pathways and can therefore be easily altered by the introduction of endocrine disruptors. Such an alteration can lead to induced feelings of hunger and decreased feelings of fullness causing an increase in food intake and inability to feel satisfied, both characteristic of obesity.
Some xenoestrogens such as BPA, nonylphenol, and DEHP have all been shown to act in this way, altering NPY expression and significantly shifting the feeding behaviors of exposed mice. In addition, organotins such as trimethyltin (TMT), triethyltin (TET), and tributyltin (TBT) compounds can exert their effects through similar pathways. TBT can locally disrupt aromatase regulation in the hypothalamus, causing the responses of the HPA axis to hormones to become abnormal. TMT works in a similar but unique way, initially inducing NPY and NPY2 receptor expression, an effect later counteracted by neuronal degeneration in lesions that causes a decrease in signaling ability.
While an increase in food intake is often the case after exposure, weight gain involves the body's maintenance of its metabolic setpoint as well. Given this information, it is particularly important to note that exposure during development and initial programming of these setpoints can be extremely significant throughout the remainder of life.
Endocannabinoid signaling
A wide range of environmental organotins that mimic peptidergic hormones in the HPA axis, as mentioned before, additionally mimic lipid activators of the cannabinoid system and inhibit AMPK activity. Endocannabinoid levels are high in those suffering from obesity due to hyperactivity of cannabinoid signalling pathways. It is these high levels that have been found to be closely associated with increased fat stores, linking the lipid activator mimics to the actual disease.
Programming of metabolic set points
Regions in the hypothalamus control the responses that establish an individual's metabolic setpoint and metabolic efficiency. These responses are adaptive in that they vary according to the individual's needs, always working to restore the metabolic setpoint through the increase or decrease of metabolic functions depending on varying energy needs. Since the system is adaptive, it would be expected to restore equilibrium if the lipid balance were altered by hormones via the mechanisms mentioned above. However, since obesogenic phenotypes persist, it can be concluded that adaptive response components of the hypothalamus may be a target of obesogens as well.
Body composition is very much predetermined before birth and changes rarely occur in adulthood. Adipocyte numbers increase during development and come to a plateau, after which adipocytes are restricted to mostly hypertrophic growth and don't seem to change much in cell number. This is demonstrated by the difficulty in altering somatotypes or more simply by the difficulty that goes along with trying to lose weight past a certain point.
A study on polybrominated diphenyl ethers (PBDEs), chemicals commonly used in flame retardants, made apparent their role in altering the function of the thyroid hormone axis. This finding leads to increased concern, as neonatal thyroid status plays a large role in the integration of maternal environmental signals during development in the womb that are used for long-term body weight programming.
Pharmaceutical obesogens
Obesogens can enter the body, and obesogenic effects can arise, as side effects of the intentional administration of obesogenic chemicals in the form of pharmaceutical drugs. These pharmaceutical obesogens can show their effects through a variety of targets.
Metabolic sensors
Thiazolidinediones (TZDs), such as rosiglitazone and pioglitazone, are used to treat diabetes. These drugs act as agonists of the PPAR-γ receptor, leading to insulin-sensitizing effects that can improve glycemic control and serum triglyceride levels. Despite the positive effects these chemicals can have in treating diabetes, their administration also leads to unwanted PPAR-γ-mediated side effects such as peripheral edema, which can be followed by persistent weight gain if the drug is used over a long period of time. These side effects are particularly prominent in patients with type 2 diabetes, a disease that tends to result from an overabundance of adipose tissue.
Sex steroid dysregulation
Diethylstilbestrol (DES) is a synthetic estrogen that was once prescribed to women to decrease the risk of miscarriage until it was found to be causing abnormalities in exposed offspring. This same chemical has been shown to cause weight gain in female mice when exposed during neonatal development. While exposure didn't lead to an abnormal birth weight, significant weight gain occurred much later in adulthood.
Central integration of energy balance
Selective serotonin reuptake inhibitors (SSRI) (e.g. paroxetine), tricyclic antidepressants (e.g. amitriptyline), tetracyclic antidepressants (e.g. mirtazapine) and atypical antipsychotics (e.g. clozapine) are all neuropharmaceuticals that target neurotransmitter receptors that are involved with brain circuits that regulate behavior. Often the function of these receptors overlaps with metabolism regulation, such as that of the H1 receptor which when activated decreases AMPK activity. As a result, the administration of these drugs can have side effects including increased lipid accumulation that can result in obesity.
Metabolic setpoints
The mechanisms behind the function of SSRIs, tricyclic antidepressants, and atypical antipsychotics give them all potential roles in the alteration of metabolic setpoints. TZDs in particular have been linked to regulatory function in the HPT axis; however, no conclusive evidence has been found thus far, and further research is required to confirm these hypotheses.
Environmental obesogens
While obesogens can be introduced to the body intentionally via administration of obesogenic pharmaceuticals, exposure can also occur through chemical exposure to obesogens found in the environment such as organotins and xenobiotics.
Organotins
Particular members of the organotin class of persistent organic pollutants (POPs), namely tributyltin (TBT) and triphenyltin (TPT), are highly selective and act as very potent agonists of both the retinoid X receptors (RXR α, β, and γ) and PPARγ. This ability to target both receptors at the same time is more effective than single-receptor activation, as adipogenic signaling can be mediated through both components of the heterodimer complex. This highly effective activation mechanism can have detrimental, long-term adipogenic effects, especially if exposure occurs during development and early life.
Organotins (tin-based chemicals), used in marine anti-fouling paints, wood catalysts, plasticizers, slimicides, in industrial water systems, and fungicides on food have recently been linked to obesogenic properties when introduced in the body. Human exposure to these major environmental sources most commonly occurs through ingestion of contaminated seafood, agricultural products, and drinking water as well as from exposure to leaching from plastics.
Although studies that have directly measured organotin levels in human tissue and blood are limited, it is considered very probable that a portion of the general population is vulnerable to organotin exposure at levels high enough to activate RXRs and PPARγ. The heavy use of organotins in both plastics and agriculture, as well as the high receptor affinity of these chemicals, further supports this conclusion.
Liver samples from the late 1990s in Europe and Asia contained on average 6 and 84 ng/g wet wt respectively for total organotin levels, while later studies found levels of total organotins in US blood samples averaged around 21 ng/mL, with TBT comprising around 8 ng/mL (~27 nM). Even more recent analyses of European blood samples found the predominant species to be TPT rather than TBT, at 0.09 and 0.67 ng/mL (~0.5-2 nM). Only occasional trace amounts of TBT were found. These results indicate that organotin exposure in humans, while present among many different populations, can vary in terms of the type of organotin and the level of exposure from region to region.
Other xenobiotics
Other common xenobiotics found in the environment have been shown to have PPAR activity, posing even further threats to metabolic balance. BPA from polycarbonate plastics, phthalate plasticizers used to soften PVC plastics, and various perfluoroalkyl compounds (PFCs) that are widely used as surfactants and surface repellents in consumer products are all potentially obesogenic when introduced into the body. Phthalates and PFCs in particular have been found to function as agonists for one or more of the PPARs. Additionally, metabolites of DEHP such as MEHP also activate PPARγ, leading to a proadipogenic response.
Public health implications
Although research on endocrine disruptors or "obesogens" is still emerging, the public health implications so far have mainly surrounded obesity, diabetes, and cardiovascular disease.
Obesity has become a pandemic, increasing for all population groups. From 1980 to 2008, the rates of obesity have doubled for adults and tripled for children. In the U.S. alone, it has been estimated that almost 100 million individuals are obese. Traditional thinking suggested that diet and exercise alone were the main contributors to obesity; however, current experimental evidence shows that obesogens might be part of the cause.
Obesity may lead to potentially debilitating chronic diseases such as diabetes, and certain environmental exposures, or obesogens, have been directly linked to Type II diabetes mellitus (T2DM).
Potential obesogens in everyday life
Obesogens can be found in many things, from water bottles to microwaveable popcorn, and from nonstick pans to shower curtains. People interact with them on a daily basis, both intentionally and unintentionally, at work, school and home. They are an unnecessary and mostly preventable potential hazard to health, which can have a large impact on how individuals gain and lose weight.
Bisphenol-A (BPA) is an industrial chemical and organic compound that has been used in the production of plastics and resins for over a half-century. It is used in products such as toys, medical devices, plastic food and beverage containers, shower curtains, dental sealants and compounds, and register receipts. BPA has been shown to seep into food sources from containers or into the body just by handling products made from it. Certain researchers suggest that BPA actually decreases the number of fat cells in the body while increasing the size of those remaining; as a result, no difference in weight may be apparent, and an individual may even be likely to gain more.
Nicotine is a chemical found in tobacco products and certain insecticides. As an obesogen, nicotine mostly acts on prenatal development after maternal smoking occurs. A strong association has been made between maternal smoking and childhood overweight/obesity, with nicotine as the single causal agent.
Arsenic is a metalloid (i.e., an element with some metallic properties) found in and on most naturally occurring substances on Earth. It can be found in the soil, ground water, air, and in small concentrations in food. Arsenic has many applications such as in the production of insecticides, herbicides, pesticides and electronic devices. The development of diabetes has been linked to arsenic exposure from drinking water and occupational contact.
Pesticides are substances used to prevent, destroy, repel or mitigate pests, and they have been used throughout all of recorded history. Some pesticides persist for short periods of time, while others persist for long periods and are considered persistent organic pollutants (POPs). Several cross-sectional studies have identified pesticides as obesogens, linking them to obesity, diabetes and other morbidities.
Some pharmaceutical drugs are also potentially obesogens. From 2005–2008, 11% of Americans aged 12 and over took antidepressant medications. Certain antidepressants, known as selective serotonin reuptake inhibitors (SSRIs), are potentially adding to the almost 100 million obese individuals in the U.S. A key function of SSRI antidepressants is to regulate the serotonin reuptake transporter (SERT) which can affect food intake and lipid accumulation leading to obesity.
Organotins such as tributyltin (TBT) and triphenyltin (TPT) are endocrine disruptors that have been shown to increase triglyceride storage in adipocytes. Although they have been widely used in the marine industry since the 1960s, other common sources of human exposure include contaminated seafood and shellfish, fungicides on crops and as antifungal agents used in wood treatments, industrial water systems and textiles. Organotins are also being used in the manufacture of PVC plastics and have been identified in drinking water and food supplies.
Perfluorooctanoic acid (PFOA) is a surfactant used for reduction of friction, and it is also used in nonstick cookware. PFOA has been detected in the blood of more than 98% of the general US population. It is a potential endocrine disruptor. Animal studies have shown that prenatal exposure to PFOA is linked to obesity when reaching adulthood.
Future research
Most of the environmental obesogens currently identified are classified either as chemical mimics of metabolic hormones throughout the body or as mimics of neurotransmitters within the brain. Because they fall into these two categories, extensive opportunities for complex interactions and varied sites of action, as well as multiple molecular targets, are open for consideration. Changing dose ranges tend to result in varying phenotypes, and timing of exposure, gender, and gender predisposition introduce even more levels of complexity in how these substances affect the human body.
Because the mechanisms behind the different effects of obesogens are so complex and not well understood, the role they play in the current obesity epidemic may be greater than once thought. Epigenetic changes due to obesogen exposure must also be considered as a possibility, as they open up the potential for misregulated metabolic functions to be passed on from generation to generation. Epigenetic processes via hypermethylation of regulatory regions could lead to overexpression of different proteins and, therefore, amplification of acquired environmental effects. Research will be required to gain a better understanding of the mechanisms of action of these chemicals before the extent of the risk of exposure can be determined and methods of prevention and removal from the environment can be established.
Naturally occurring obesogens
Fructose is a naturally occurring obesogen found in several food products. It promotes the development of diabetes and increases the amount of fat stored in the liver, which results in weight gain. It is found in sweets and sweetened beverages.
Genistein is a naturally occurring obesogen found in soybeans and soy products. Genistein has been found to decrease mammary tumors in animal models. Genistein belongs to a family of phytoestrogens. Phytoestrogens are used to help humans with menopausal relief and the prevention of hormonal cancers.
Obesogen prevention
Preventing the effects of obesogens in the human body is crucial for maintaining healthy body weights. Ways individuals can proactively counter these effects include regular exercise, a healthy diet, quality sleep, and stress management. Practicing these healthy habits can in turn reduce the effects of obesogens within the body.
See also
Bariatrics
Obesity
Childhood obesity
Orexigenic
Epidemiology of obesity
Epidemiology of childhood obesity
References
Further reading
Risk factors for obesity
Nutrition
Body shape
Receptors | Obesogen | [
"Chemistry"
] | 5,125 | [
"Receptors",
"Signal transduction"
] |
13,649,456 | https://en.wikipedia.org/wiki/Stantec | Stantec Inc. is an international professional services company in the design and consulting industry. The company was founded in 1954, as D. R. Stanley Associates in Edmonton, Alberta, Canada. Stantec provides professional consulting services in planning, engineering, architecture, interior design, landscape architecture, surveying, environmental sciences, project management, and project economics for infrastructure and facilities projects. The company provides services on projects around the world, with over 30,000 employees operating out of more than 450 locations in North America and across offices on six continents.
History
Don Stanley was the first Canadian to earn a Ph.D. in environmental engineering. Attending Harvard University on a Rockefeller Foundation scholarship, he earned his doctorate in 1952 and two years later founded D.R. Stanley & Associates, working as the sole proprietor out of an office in Edmonton, Alberta. In 1955, Stanley hired a retired railway engineer, Herb Roblin, and a former chief bridge engineer for the provincial government, Louis Grimble. The firm was renamed Stanley, Grimble and Roblin Ltd. and, with the two new partners' transportation backgrounds, the firm diversified quickly.
The 1970s were boom years for Stanley Associates, but with the advance of the sharp recession of the 1980s, Stanley was ready to turn the company over to his second-in-command, Ron Triffo, in 1983. Triffo held a bachelor's degree in civil engineering from the University of Manitoba and an MSc in Engineering from the University of Illinois. In 1983, when Alberta's economy was struggling in response to the Canadian government's National Energy Program, Triffo became president and COO, while Stanley retained his role as CEO and chair. “We had cut our staff in half from 400 to about 200,” Triffo said. “We really started to think about a new way of doing things for the company. We were heavily involved in Alberta in a big, big way and therefore very vulnerable to the up and down cycles of the province. We decided we had to diversify the company in a discipline sense. We had to become more than just a civil engineering company and we had to diversify geographically.”
The company started its diversification by forming an urban development company under another name, IMC, which grew to 200 people. The diversification of Stanley Associates occurred by acquisition as well. The firm expanded into British Columbia and Saskatchewan and internationally, beginning a corporate move into central Canada. Stanley also made its first U.S. acquisition, in Phoenix, establishing a base for specialty services and future expansion in the US Southwest.
Following the success of IMC, Stanley Associates' various practices operated under boutique names, with as many as 20 different companies. By the early 1990s, the companies were placed under the umbrella of Stanley Technology Group, and most subsidiaries featured the name Stanley in their name. Staff numbers neared 900 and the firm went public on the Toronto Stock Exchange in 1994.
In 1998, Triffo stepped into the role of board chair, where he remained until retiring in 2011. Tony Franceschini, then vice president of the Commercial/Institutional sector and a board member, became president and CEO. Franceschini began his career with a consulting engineering firm in Toronto, Ontario in 1975 after graduating from the University of Waterloo with a degree in civil engineering, where he worked with Triffo.
The year Franceschini became president and CEO, Stantec had 2,000 employees in 40 offices and reported $185.5 million in gross revenue. “Our vision is to grow the company into a 10,000 employee, billion-dollar firm by 2008,” Franceschini said. Franceschini was instrumental in launching the new global, single-brand identity, Stantec, which enabled the company's services to be delivered through an integrated approach. “The move was a major achievement –– in a two-month period, we sought and received shareholder approval to change the name of over 30 companies,” says Franceschini.
Stantec was listed on the New York Stock Exchange in 2005. Franceschini retired in May 2009. Bob Gomes was appointed president and CEO. Like Stantec's previous three presidents, Gomes is a licensed engineer who earned his degree in civil engineering from the University of Alberta and joined Stantec in 1988.
Between 2008 and 2011, gross revenue increased from $1.4 billion to $1.7 billion, the Stantec team grew from 8,000 staff to over 12,000 staff, the company acquired 16 firms, and strengthened its presence in markets across North America and internationally. According to Reuters, US Edition, "Its services are offered through more than 170 locations in North America and four locations internationally".
In May 2016, Stantec signed a definitive merger agreement with MWH Global, Inc. worth $1.04 billion CAD, making Stantec Inc. one of the world's top three global design firms. The merger was expected to boost revenues by 60% a year, and it increased the number of employees from 15,000 to 22,000.
Bob Gomes retired from Stantec in 2017 and is a current member of the board of directors. In January 2018, Gord Johnston became CEO of Stantec. Johnston has 30 years of private and public sector experience in the design and project management of infrastructure projects. Johnston has bachelor of science and master of engineering degrees in civil engineering from the University of Alberta, and is a registered professional engineer, certified project management professional, and Envision Sustainability Professional.
Growth
Stantec has 31,000 employees and 400 locations on six continents worldwide.
Acquisitions
Stantec has acquired over 130 firms since 1994. Acquisitions include MWH Global, Inc., KBR, Inc. Infrastructure Americas Division, Dessau engineering, Fay, Spofford & Thorndike, Sparling, Traffic Design Group, ZETCON Engineering, and Hydrock.
Major projects
Panama Canal Expansion
Stantec Tower; 2014
Open Hearth Park; 2013
Telus Spark; 2011
Blatchford Community; 2010
Anthony Henday Drive Southeast; 2003-2007
Keystone Pipeline & Keystone Expansion Project; 2005
The Apollo – Saturn V Visitor Center; 1961-1972
Headquarters
On July 17, 2013, Stantec announced that it was going to initiate a request for proposal (RFP) to consolidate its Edmonton headquarters and other Edmonton offices into a single building. On August 26, 2014 the company announced that it had signed a 15-year lease with Edmonton's Arena District Partnership, a joint venture between the Katz Group and WAM Developments.
Stantec Tower is located in Edmonton's new downtown Ice District and is a 66-storey, mixed-use skyscraper. It is the tallest building in Edmonton and in Western Canada, and the tallest building in Canada outside of Toronto.
See also
Lists of companies on the TSX
S&P/TSX 60
References
External links
Stantec Achieves International Certification of its Environmental Management System
Stantec reports solid operational results for 2011 year-end
Forbes Profile: Robert J. (Bob) Gomes
"Don't Wait for the Bottom" - Alberta Venture
Companies listed on the New York Stock Exchange
Companies listed on the Toronto Stock Exchange
Companies based in Edmonton
Construction and civil engineering companies of Canada
Engineering companies of Canada
Architecture firms of Canada
Mining engineering companies
1954 establishments in Alberta
Construction and civil engineering companies established in 1954
Canadian companies established in 1954 | Stantec | [
"Engineering"
] | 1,479 | [
"Mining engineering companies",
"Mining engineering",
"Engineering companies"
] |
13,649,480 | https://en.wikipedia.org/wiki/Syndicat%20de%20l%27Architecture | The Syndicat de l'Architecture is a French labor union for architects co-founded by Jean Nouvel.
External links
Website
Architecture organizations
Trade unions in France
Year of establishment missing | Syndicat de l'Architecture | [
"Engineering"
] | 39 | [
"Architecture organizations",
"Architecture"
] |
13,649,912 | https://en.wikipedia.org/wiki/Linear%20partial%20information | Linear partial information (LPI) is a method of making decisions based on insufficient or fuzzy information. LPI was introduced in 1970 by Polish–Swiss mathematician Edward Kofler (1911–2007) to simplify decision processes. Compared to other methods the LPI-fuzziness is algorithmically simple and particularly in decision making, more practically oriented. Instead of an indicator function the decision maker linearizes any fuzziness by establishing of linear restrictions for fuzzy probability distributions or normalized weights. In the LPI-procedure the decision maker linearizes any fuzziness instead of applying a membership function. This can be done by establishing stochastic and non-stochastic LPI-relations. A mixed stochastic and non-stochastic fuzzification is often a basis for the LPI-procedure. By using the LPI-methods any fuzziness in any decision situation can be considered on the base of the linear fuzzy logic.
Definition
Any Stochastic Partial Information SPI(p), which can be considered as a solution of a linear inequality system, is called Linear Partial Information LPI(p) about probability p. It can be considered as an LPI-fuzzification of the probability p corresponding to the concepts of linear fuzzy logic.
Applications
The MaxEmin Principle
To obtain the maximally warranted expected value, the decision maker has to choose the strategy which maximizes the minimal expected value. This procedure leads to the MaxEmin principle and is an extension of Bernoulli's principle.
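As a concrete illustration, the sketch below (Python with NumPy and SciPy) applies the MaxEmin principle to a small decision problem. The payoff matrix and the linear restrictions on the state probabilities are assumptions made for the example and are not taken from Kofler's publications; each strategy's guaranteed expected value is the minimum of its expected payoff over the LPI polytope, found with a linear program, and the strategy with the largest guaranteed value is chosen.

```python
# A minimal sketch of the MaxEmin principle under linear partial information.
# The payoffs and the LPI restrictions below are hypothetical, for illustration only.
import numpy as np
from scipy.optimize import linprog

# Payoff matrix: rows are strategies, columns are states of nature.
payoffs = np.array([[3.0, 1.0, 4.0],
                    [2.0, 2.5, 2.0],
                    [5.0, 0.5, 1.0]])

# LPI about the state probabilities p = (p1, p2, p3): p1 >= p2 >= p3,
# together with p >= 0 and p1 + p2 + p3 = 1.
A_ub = np.array([[-1.0, 1.0, 0.0],    # p2 - p1 <= 0
                 [0.0, -1.0, 1.0]])   # p3 - p2 <= 0
b_ub = np.zeros(2)
A_eq = np.ones((1, 3))
b_eq = np.array([1.0])

def guaranteed_expected_value(row):
    """Worst-case (minimal) expected payoff of one strategy over the LPI polytope."""
    res = linprog(c=row, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0.0, 1.0)] * 3, method="highs")
    return res.fun

worst_cases = [guaranteed_expected_value(row) for row in payoffs]
print("guaranteed expected values:", np.round(worst_cases, 3))
print("MaxEmin strategy index:", int(np.argmax(worst_cases)))
```

Because the expected value is linear in the probabilities, each inner minimum is attained at a vertex of the LPI polytope, which is why a linear program suffices for the inner step.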
The MaxWmin Principle
This principle leads to the maximal guaranteed weight function, regarding the extreme weights.
The Prognostic Decision Principle (PDP)
This principle is based on the prognosis interpretation of strategies under fuzziness.
Fuzzy equilibrium and stability
Despite the fuzziness of information, it is often necessary to choose the optimal, most cautious strategy, for example in economic planning, in conflict situations or in daily decisions. This is impossible without the concept of fuzzy equilibrium. The concept of fuzzy stability is considered as an extension into a time interval, taking into account the corresponding stability area of the decision maker. The more complex the model, the softer the choice that has to be considered.
The idea of fuzzy equilibrium is based on the optimization principles. Therefore, the MaxEmin-, MaxGmin- and PDP-stability have to be analyzed. The violation of these principles often leads to wrong predictions and decisions.
LPI equilibrium point
Considering a given LPI-decision model, as a convolution of the corresponding fuzzy states or a disturbance set, the fuzzy equilibrium strategy remains the most cautious one, despite the presence of the fuzziness. Any deviation from this strategy can cause a loss for the decision maker.
See also
Edward Kofler
Fuzzy set
Fuzzy logic
Game theory
Defuzzification
Stochastic process
Deterministic
Probability distribution
Uncertainty
Vagueness
Optimization (mathematics)
Logic
List of set theory topics
Selected references
Edward Kofler – Equilibrium Points, Stability and Regulation in Fuzzy Optimisation Systems under Linear Partial Stochastic Information (LPI), Proceedings of the International Congress of Cybernetics and Systems, AFCET, Paris 1984, pp. 233–240
Edward Kofler – Decision Making under Linear Partial Information. Proceedings of the European Congress EUFIT, Aachen, 1994, pp. 891–896.
Edward Kofler – Linear Partial Information with Applications. Proceedings of ISFL 1997 (International Symposium on Fuzzy Logic), Zurich, 1997, p. 235–239.
Edward Kofler – Entscheidungen bei teilweise bekannter Verteilung der Zustände, Zeitschrift für OR, Vol. 18/3, 1974
Edward Kofler – Extensive Spiele bei unvollständiger Information, in Information in der Wirtschaft, Gesellschaft für Wirtschafts- und Sozialwissenschaften, Band 126, Berlin 1982
External links
Tools for establishing dominance with linear partial information and attribute hierarchy
Linear Partial Information with applications
Linear Partial Information (LPI) with applications to the U.S. economic policy
Practical decision making with Linear Partial Information (LPI)
Stochastic programming with fuzzy linear partial information on probability distribution
One-shot decisions under Linear Partial Information
Information theory
Fuzzy logic
Randomized algorithms
Algorithmic information theory
Decision theory
Mathematical modeling | Linear partial information | [
"Mathematics",
"Technology",
"Engineering"
] | 885 | [
"Telecommunications engineering",
"Mathematical modeling",
"Applied mathematics",
"Computer science",
"Information theory"
] |
13,649,932 | https://en.wikipedia.org/wiki/Structural%20pipe%20fitting | A structural pipe fitting, also known as a slip on pipe fitting, clamp or pipe clamp is used to build structures such as handrails, guardrails, and other types of pipe or tubular structure. They can also be used to build furniture and theatrical riggings. The fittings slip on the pipe and are usually locked down with a set screw. The set screw can then be tightened with a simple hex wrench. Because of the modular design of standard fittings, assembly is easy, only simple hand tools are required, and risks from welding a structure are eliminated.
Other advantages of using structural pipe fittings are easy installation and reconfigurable design. Since there are no permanent welds in the structure, the set screws of the fittings can simply be loosened, allowing them to be repositioned. The project can be disassembled and stored if needed, or even taken apart with fittings and pipe recycled into a new project.
Fittings used for strong structures are galvanised malleable iron castings, and come in many styles such as elbows, tees, crosses, reducers and flanges. The fittings are not threaded; they simply lock onto the pipe with the supplied hex set screws.
See also
Kee Klamp
References
Structural engineering | Structural pipe fitting | [
"Engineering"
] | 272 | [
"Structural engineering",
"Civil engineering",
"Construction"
] |
13,650,583 | https://en.wikipedia.org/wiki/Malliavin%27s%20absolute%20continuity%20lemma | In mathematics — specifically, in measure theory — Malliavin's absolute continuity lemma is a result due to the French mathematician Paul Malliavin that plays a foundational rôle in the regularity (smoothness) theorems of the Malliavin calculus. Malliavin's lemma gives a sufficient condition for a finite Borel measure to be absolutely continuous with respect to Lebesgue measure.
Statement of the lemma
Let μ be a finite Borel measure on n-dimensional Euclidean space Rn. Suppose that, for every x ∈ Rn, there exists a constant C = C(x) such that
|∫Rn Dφ(y)(x) dμ(y)| ≤ C ||φ||∞
for every C∞ function φ : Rn → R with compact support. Then μ is absolutely continuous with respect to n-dimensional Lebesgue measure λn on Rn. In the above, Dφ(y) denotes the Fréchet derivative of φ at y and ||φ||∞ denotes the supremum norm of φ.
References
(See section 1.3)
Lemmas in analysis
Measure theory
Malliavin calculus | Malliavin's absolute continuity lemma | [
"Mathematics"
] | 214 | [
"Theorems in mathematical analysis",
"Calculus",
"Malliavin calculus",
"Lemmas in mathematical analysis",
"Lemmas"
] |
13,651,046 | https://en.wikipedia.org/wiki/Double%20layer%20%28plasma%20physics%29 | A double layer is a structure in a plasma consisting of two parallel layers of opposite electrical charge. The sheets of charge, which are not necessarily planar, produce localised excursions of electric potential, resulting in a relatively strong electric field between the layers and weaker but more extensive compensating fields outside, which restore the global potential. Ions and electrons within the double layer are accelerated, decelerated, or deflected by the electric field, depending on their direction of motion.
Double layers can be created in discharge tubes, where sustained energy is provided within the layer for electron acceleration by an external power source. Double layers are claimed to have been observed in the aurora and are invoked in astrophysical applications. Similarly, a double layer in the auroral region requires some external driver to produce electron acceleration.
Electrostatic double layers are especially common in current-carrying plasmas, and are very thin (typically tens of Debye lengths), compared to the sizes of the plasmas that contain them. Other names for a double layer are electrostatic double layer, electric double layer, and plasma double layer. The term ‘electrostatic shock’ in the magnetosphere has been applied to electric fields oriented at an oblique angle to the magnetic field in such a way that the perpendicular electric field is much stronger than the parallel electric field. In laser physics, a double layer is sometimes called an ambipolar electric field.
Double layers are conceptually related to the concept of a 'sheath' (see Debye sheath). An early review of double layers from laboratory experiment and simulations is provided by Torvén.
Classification
Double layers may be classified in the following ways:
Weak and strong double layers. The strength of a double layer is expressed as the ratio of the potential drop in comparison with the plasma's equivalent thermal energy, or in comparison with the rest mass energy of the electrons. A double layer is said to be strong if the potential drop within the layer is greater than the equivalent thermal energy of the plasma's components.
Relativistic or non-relativistic double layers. A double layer is said to be relativistic if the potential drop within the layer is comparable to the rest mass energy of the electron (~511 keV). Double layers of such energy are to be found in laboratory experiments. The charge density is low between the two opposing potential regions, and the double layer is similar to the charge distribution in a capacitor in that respect.
Current carrying double layers These double layers may be generated by current-driven plasma instabilities that amplify variations of the plasma density. One example of these instabilities is the Farley–Buneman instability, which occurs when the streaming velocity of electrons (basically the current density divided by the electron density) exceeds the electron thermal velocity of the plasma. It occurs in collisional plasmas having a neutral component, and is driven by drift currents.
Current-free double layers These occur at the boundary between plasma regions with different plasma properties. A plasma may have a higher electron temperature, and thermal velocity, on one side of a boundary layer than on the other. The same may apply for plasma densities. Charged particles exchanged between the regions may enable potential differences to be maintained between them locally. The overall charge density, as in all double layers, will be neutral.
Potential imbalance will be neutralised by electron (1&3) and ion (2&4) migration, unless the potential gradients are sustained by an external energy source. Under most laboratory situations, unlike outer space conditions, charged particles may effectively originate within the double layer, by ionization at the anode or cathode, and be sustained.
The figure shows the localised perturbation of potential produced by an idealised double layer consisting of two oppositely charged discs. The perturbation is zero at a distance from the double layer in every direction.
If an incident charged particle, such as a precipitating auroral electron, encounters such a static or quasistatic structure in the magnetosphere, provided that the particle energy exceeds half the electric potential difference within the double layer, it will pass through without any net change in energy. Incident particles with less energy than this will also experience no net change in energy but will undergo more overall deflection.
Four distinct regions of a double layer can be identified, which affect charged particles passing through it, or within it:
A positive potential side of the double layer where electrons are accelerated towards it;
A positive potential within the double layer where electrons are decelerated;
A negative potential within the double layer where electrons are decelerated; and
A negative potential side of the double layer where electrons are accelerated.
Double layers will tend to be transient in the magnetosphere, as any charge imbalance will become neutralised, unless there is a sustained external source of energy to maintain them as there is under laboratory conditions.
Formation mechanisms
The details of the formation mechanism depend on the environment of the plasma (e.g. double layers in the laboratory, ionosphere, solar wind, nuclear fusion, etc.). Proposed mechanisms for their formation have included:
1971: Between plasmas of different temperatures
1976: In laboratory plasmas
1982: Disruption of a neutral current sheet
1983: Injection of non-neutral electron current into a cold plasma
1985: Increasing the current density in a plasma
1986: In the accretion column of a neutron star
1986: By pinches in cosmic plasma regions
1987: In a plasma constrained by a magnetic mirror
1988: By an electrical discharge
1988: Current-driven instabilities (strong double layers)
1988: Spacecraft-ejected electron beams
1989: From shock waves in a plasma
2000: Laser radiation
2002: When magnetic field-aligned currents encounter density cavities
2003: By the incidence of plasma on the dark side of the Moon's surface. See picture.
Features and characteristics
Thickness: The production of a double layer requires regions with a significant excess of positive or negative charge, that is, where quasi-neutrality is violated. In general, quasi-neutrality can only be violated on scales of the Debye length. The thickness of a double layer is of the order of ten Debye lengths, which is a few centimeters in the ionosphere, a few tens of meters in the interplanetary medium, and tens of kilometers in the intergalactic medium.
Electrostatic potential distribution: As described under double layer classification above, there are effectively four distinct regions of a double layer where incoming charged particles will be accelerated or decelerated along their trajectory . Within the double layer the two opposing charge distributions will tend to become neutralised by internal charged particle motion.
Particle flux: For non-relativistic current carrying double layers, electrons carry most of the current. The Langmuir condition states that the ratio of the electron and the ion current across the layer is given by the square root of the mass ratio of the ions to the electrons. For relativistic double layers the current ratio is 1; i.e. the current is carried equally by electrons and ions.
Energy supply: The instantaneous voltage drop across a current-carrying double layer is proportional to the total current, and is similar to that across a resistive element (or load), which dissipates energy in an electric circuit. A double layer cannot supply net energy on its own.
Stability: Double layers in laboratory plasmas may be stable or unstable depending on the parameter regime. Various types of instabilities may occur, often arising due to the formation of beams of ions and electrons. Unstable double layers are noisy in the sense that they produce oscillations across a wide frequency band. A lack of plasma stability may also lead to a sudden change in configuration often referred to as an explosion (and hence exploding double layer). In one example, the region enclosed in the double layer rapidly expands and evolves. An explosion of this type was first discovered in mercury arc rectifiers used in high-power direct-current transmission lines, where the voltage drop across the device was seen to increase by several orders of magnitude. Double layers may also drift, usually in the direction of the emitted electron beam, and in this respect are natural analogues to the smooth-bore magnetron
Magnetised plasmas: Double layers can form in both magnetised and unmagnetised plasmas.
Cellular nature: While double layers are relatively thin, they will spread over the entire cross surface of a laboratory container. Likewise where adjacent plasma regions have different properties, double layers will form and tend to cellularise the different regions.
Energy transfer: Double layers can facilitate the transfer of electrical energy into kinetic energy, dW/dt=I•ΔV where I is the electric current dissipating energy into a double layer with a voltage drop of ΔV. Alfvén points out that the current may well consist exclusively of low-energy particles. Torvén et al. have postulated that plasma may spontaneously transfer magnetically stored energy into kinetic energy by electric double layers. No credible mechanism for producing such double layers has been presented, however. Ion thrusters can provide a more direct case of energy transfer from opposing potentials in the form of double layers produced by an external electric field.
Oblique double layer: An oblique double layer has electric fields that are not parallel to the ambient magnetic field; i.e., it is not field-aligned.
Simulation: Double layers may be modelled using kinetic computer models like particle-in-cell (PIC) simulations. In some cases the plasma is treated as essentially one- or two-dimensional to reduce the computational cost of a simulation.
Bohm Criterion: A double layer cannot exist under all circumstances. In order to produce an electric field that vanishes at the boundaries of the double layer, an existence criterion says that there is a maximum to the temperature of the ambient plasma. This is the so-called Bohm criterion.
Bio-physical analogy: A model of plasma double layers has been used to investigate their applicability to understanding ion transport across biological cell membranes. Brazilian researchers have noted that "Concepts like charge neutrality, Debye length, and double layer are very useful to explain the electrical properties of a cellular membrane." Plasma physicist Hannes Alfvén also noted the association of double layers with cellular structure, as had Irving Langmuir before him, who coined the term "plasma" after its resemblance to blood cells.
History
It was already known in the 1920s that a plasma has a limited capacity for current maintenance; Irving Langmuir characterized double layers in the laboratory and called these structures double-sheaths. In the 1950s a thorough study of double layers started in the laboratory. Many groups are still working on this topic theoretically, experimentally and numerically. It was first proposed by Hannes Alfvén (the developer of magnetohydrodynamics from laboratory experiments) that the polar lights or Aurora Borealis are created by electrons accelerated in the magnetosphere of the Earth. He supposed that the electrons were accelerated electrostatically by an electric field localized in a small volume bounded by two charged regions, and the so-called double layer would accelerate electrons earthwards. Since then other mechanisms involving wave-particle interactions have been proposed as being feasible, from extensive spatial and temporal in situ studies of auroral particle characteristics.
Many investigations of the magnetosphere and auroral regions have been made using rockets and satellites. McIlwain discovered from a rocket flight in 1960 that the energy spectrum of auroral electrons exhibited a peak that was thought then to be too sharp to be produced by a random process and which suggested, therefore, that an ordered process was responsible. It was reported in 1977 that satellites had detected the signature of double layers as electrostatic shocks in the magnetosphere. Indications of electric fields parallel to the geomagnetic field lines were obtained by the Viking satellite, which measured the differential potential structures in the magnetosphere with probes mounted on 40 m long booms. These probes measured the local particle density and the potential difference between two points 80 m apart. Asymmetric potential excursions with respect to 0 V were measured, and interpreted as a double layer with a net potential within the region. Magnetospheric double layers typically have a strength (where the electron temperature is assumed to lie in the range ) and are therefore weak. A series of such double layers would tend to merge, much like a string of bar magnets, and dissipate, even within a rarefied plasma. It has yet to be explained how any overall localised charge distribution in the form of double layers might provide a source of energy for auroral electrons precipitated into the atmosphere.
Interpretation of the FAST spacecraft data proposed strong double layers in the auroral acceleration region. Strong double layers have also been reported in the downward current region by Andersson et al. Parallel electric fields with amplitudes reaching nearly 1 V/m were inferred to be confined to a thin layer of approximately 10 Debye lengths. It is stated that the structures moved ‘at roughly the ion acoustic speed in the direction of the accelerated electrons, i.e., anti-earthward.’ That raises a question of what role, if any, double layers might play in accelerating auroral electrons that are precipitated downwards into the atmosphere from the magnetosphere. Double layers have also been found in the Earth's magnetosphere by the space missions Cluster and MMS.
The possible role of precipitating electrons from 1-10keV themselves generating such observed double layers or electric fields has seldom been considered or analysed. Equally, the general question of how such double layers might be generated from an alternative source of energy, or what the spatial distribution of electric charge might be to produce net energy changes, is seldom addressed. Under laboratory conditions an external power supply is available.
In the laboratory, double layers can be created in different devices. They are investigated in double plasma machines, triple plasma machines, and Q-machines. The stationary potential structures that can be measured in these machines agree very well with what one would expect theoretically. An example of a laboratory double layer can be seen in the figure below, taken from Torvén and Lindberg (1980), where we can see how well-defined and confined is the potential drop of a double layer in a double plasma machine.
One of the interesting aspects of the experiment by Torvén and Lindberg (1980) is that not only did they measure the potential structure in the double plasma machine but they also found high-frequency fluctuating electric fields at the high-potential side of the double layer (also shown in the figure). These fluctuations are probably due to a beam-plasma interaction outside the double layer, which excites plasma turbulence. Their observations are consistent with experiments on electromagnetic radiation emitted by double layers in a double plasma machine by Volwerk (1993), who, however, also observed radiation from the double layer itself.
The power of these fluctuations has a maximum around the plasma frequency of the ambient plasma. It was later reported that the electrostatic high-frequency fluctuations near the double layer can be concentrated in a narrow region, sometimes called the hf-spike. Subsequently, both radio emissions, near the plasma frequency, and whistler waves at much lower frequencies were seen to emerge from this region. Similar whistler wave structures were observed together with electron beams near Saturn's moon Enceladus, suggesting the possible presence of a double layer at lower altitude.
A recent development in double layer experiments in the laboratory is the investigation of so-called stairstep double layers. It has been observed that a potential drop in a plasma column can be divided into different parts. Transitions from a single double layer into two-, three-, or greater-step double layers are strongly sensitive to the boundary conditions of the plasma.
Unlike experiments in the laboratory, the concept of such double layers in the magnetosphere, and any role in creating the aurora, suffers from there so far being no identified steady source of energy. The electric potential characteristic of double layers might however indicate that, those observed in the auroral zone are a secondary product of precipitating electrons that have been energized in other ways, such as by electrostatic waves.
Some scientists have suggested a role of double layers in solar flares. Establishing such a role indirectly is even harder to verify than postulating double layers as accelerators of auroral electrons within the Earth's magnetosphere. Serious questions have been raised on their role even there.
Footnotes
External links
Numerical modeling of low-pressure plasmas: applications to electric double layers (2006, PDF), A. Meige, PhD thesis
References
Alfvén, H., On the theory of magnetic storms and aurorae, Tellus, 10, 104, 1958.
Peratt, A., Physics of the Plasma Universe, 1991
Raadu, M.,A., The physics of double layers and their role in astrophysics, Physics Reports, 178, 25–97, 1989.
Plasma phenomena
"Physics"
] | 3,464 | [
"Plasma phenomena",
"Physical phenomena",
"Plasma physics"
] |
13,651,081 | https://en.wikipedia.org/wiki/Data%20proliferation | Data proliferation refers to the prodigious amount of data, structured and unstructured, that businesses and governments continue to generate at an unprecedented rate and the usability problems that result from attempting to store and manage that data. While originally pertaining to problems associated with paper documentation, data proliferation has become a major problem in primary and secondary data storage on computers.
While digital storage has become cheaper, the associated costs, from raw power to maintenance and from metadata to search engines, have not kept up with the proliferation of data. Although the power required to maintain a unit of data has fallen, the cost of facilities which house the digital storage has tended to rise.
Data proliferation has been documented as a problem for the U.S. military since August 1971, in particular regarding the excessive documentation submitted during the acquisition of major weapon systems. Efforts to mitigate data proliferation and the problems associated with it are ongoing.
Problems caused
The problem of data proliferation is affecting all areas of commerce as a result of the availability of relatively inexpensive data storage devices. This has made it very easy to dump data into secondary storage immediately after its window of usability has passed. This masks problems that could gravely affect the profitability of businesses and the efficient functioning of health services, police and security forces, local and national governments, and many other types of organizations. Data proliferation is problematic for several reasons:
Difficulty when trying to find and retrieve information. At Xerox, on average it takes employees more than one hour per week to find hard-copy documents, costing $2,152 a year to manage and store them. For businesses with more than 10 employees, this increases to almost two hours per week at $5,760 per year. In large networks of primary and secondary data storage, problems finding electronic data are analogous to problems finding hard copy data.
Data loss and legal liability when data is disorganized, not properly replicated, or cannot be found promptly. In April 2005, the Ameritrade Holding Corporation told 200,000 current and past customers that a tape containing confidential information had been lost or destroyed in transit. In May of the same year, Time Warner Incorporated reported that 40 tapes containing personal data on 600,000 current and former employees had been lost en route to a storage facility. In March 2005, a Florida judge hearing a $2.7 billion lawsuit against Morgan Stanley issued an "adverse inference order" against the company for "willful and gross abuse of its discovery obligations." The judge cited Morgan Stanley for repeatedly finding misplaced tapes of e-mail messages long after the company had claimed that it had turned over all such tapes to the court.
Increased manpower requirements to manage increasingly chaotic data storage resources.
Slower networks and application performance due to excess traffic as users search and search again for the material they need.
High cost in terms of the energy resources required to operate storage hardware. A 100 terabyte system will cost up to $35,040 a year to run—not counting cooling costs.
Proposed solutions
Applications that better utilize modern technology
Reductions in duplicate data (especially as caused by data movement)
Improvement of metadata structures
Improvement of file and storage transfer structures
User education and discipline
The implementation of Information Lifecycle Management solutions to eliminate low-value information as early as possible before putting the rest into actively managed long-term storage in which it can be quickly and cheaply accessed.
See also
Backup
Digital Asset Management
Disk storage
Document management system
Hierarchical storage management
Information Lifecycle Management
Information repository
Magnetic tape data storage
Retention schedule
References
Content management systems
Data management | Data proliferation | [
"Technology"
] | 713 | [
"Data management",
"Data"
] |
13,651,338 | https://en.wikipedia.org/wiki/Interface%20and%20colloid%20science | Interface and colloid science is an interdisciplinary intersection of branches of chemistry, physics, nanoscience and other fields dealing with colloids, heterogeneous systems consisting of a mechanical mixture of particles between 1 nm and 1000 nm dispersed in a continuous medium. A colloidal solution is a heterogeneous mixture in which the particle size of the substance is intermediate between a true solution and a suspension, i.e. between 1–1000 nm. Smoke from a fire is an example of a colloidal system in which tiny particles of solid float in air. Just like true solutions, colloidal particles are small and cannot be seen by the naked eye. They easily pass through filter paper. But colloidal particles are big enough to be blocked by parchment paper or animal membrane.
Interface and colloid science has applications and ramifications in the chemical industry, pharmaceuticals, biotechnology, ceramics, minerals, nanotechnology, and microfluidics, among others.
There are many books dedicated to this scientific discipline, and there is a glossary of terms, Nomenclature in Dispersion Science and Technology, published by the US National Institute of Standards and Technology.
See also
Interface (matter)
Electrokinetic phenomena
Surface science
References
External links
Max Planck Institute of Colloids and Interfaces
American Chemical Society division of Colloid & Surface Chemistry
Chemical mixtures
Colloidal chemistry
Condensed matter physics | Interface and colloid science | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 282 | [
"Colloidal chemistry",
"Phases of matter",
"Materials science",
"Colloids",
"Surface science",
"Chemical mixtures",
"Condensed matter physics",
"nan",
"Matter"
] |
13,651,655 | https://en.wikipedia.org/wiki/Mixer-wagon | A mixer-wagon, or diet feeder, is a specialist agricultural machine used for accurately weighing, mixing and distributing total mixed ration (TMR) for ruminant farm animals, in particular cattle and most commonly, dairy cattle.
Trailed mixer-wagons vary in size from 5 m3 to more than 45 m3. Some self-propelled mixer-wagons may be bigger than this. Displacement varies according to ration dry matter: more water means more weight. With dry (45% DM; dry matter) rations, a 14 m3 mixer-wagon such as the one pictured may contain 3 tonnes fully loaded, or enough for about 60 Holstein cows.
A mixer-wagon commonly consists of:
a trailed chassis, for coupling to a tractor power unit, and fitted with one or more usually braked axles, and fitted with a road-legal lighting system.
a mixing body, attached to the chassis by four weighing sensors, one at each corner. There are three main types of mixing body:
paddle, whereby a central axial shaft turns a series of paddles which rotate the contents and provide front to back mixing in the mixer-wagon.
vertical auger, with one, two or three augers used to move the contents from top to bottom.
horizontal auger, containing between one and five augers, used to circulate the contents from front to back and bottom to top.
a digital weighing computer working through the above-mentioned weighing sensors. Such a computer can typically memorise 9 or more different rations and 99 or more ingredients.
The system can weigh in all the ingredients of a ration in any chosen order and weigh out according to chosen quantities corresponding to the needs of different groups of animals. As each ingredient is loaded, a visual and audible signal alerts the operator when the required amount is reached. The flashing light and "beep" system repeats faster and faster until the ingredient is complete, when the signal becomes continuous for 2 seconds. The computer then shifts to the next ingredient and displays the product name and weight to be loaded.
Loading is generally done using the mixer-wagon cutting and loading device, a high capacity tractor loader, or a telescopic handler.
the mixing paddle, rotors or augers are connected to the tractor Power Take-Off (PTO) through a reducer system, provided by a planetary gearbox, a step-down pulley and chain system, or both.
a set of stationary knives, against which long fibre may be chopped by the forcing action of the mixing rotor.
a hydraulic door to seal the ration in during mixing, thus permitting the use of liquid feeds such as molasses.
an unloading system, consisting of a simple hydraulically adjustable chute, up to a hydraulic powered conveyor belt.
Self-propelled mixer-wagons are mounted on a lorry chassis or may be specialist self-loading machines.
See also
Feed mixer
References
Cattle
Dairy farming technology
Intensive farming
Agricultural machinery | Mixer-wagon | [
"Chemistry"
] | 598 | [
"Eutrophication",
"Intensive farming"
] |
13,651,683 | https://en.wikipedia.org/wiki/Spectral%20clustering | In multivariate statistics, spectral clustering techniques make use of the spectrum (eigenvalues) of the similarity matrix of the data to perform dimensionality reduction before clustering in fewer dimensions. The similarity matrix is provided as an input and consists of a quantitative assessment of the relative similarity of each pair of points in the dataset.
In application to image segmentation, spectral clustering is known as segmentation-based object categorization.
Definitions
Given an enumerated set of data points, the similarity matrix may be defined as a symmetric matrix A, where A_ij ≥ 0 represents a measure of the similarity between data points with indices i and j. The general approach to spectral clustering is to use a standard clustering method (there are many such methods, k-means is discussed below) on relevant eigenvectors of a Laplacian matrix of A. There are many different ways to define a Laplacian which have different mathematical interpretations, and so the clustering will also have different interpretations. The eigenvectors that are relevant are the ones that correspond to several smallest eigenvalues of the Laplacian except for the smallest eigenvalue which will have a value of 0. For computational efficiency, these eigenvectors are often computed as the eigenvectors corresponding to the largest several eigenvalues of a function of the Laplacian.
Laplacian matrix
Spectral clustering is well known to relate to partitioning of a mass-spring system, where each mass is associated with a data point and each spring stiffness corresponds to a weight of an edge describing a similarity of the two related data points, as in the spring system. Specifically, the classical reference explains that the eigenvalue problem describing transversal vibration modes of a mass-spring system is exactly the same as the eigenvalue problem for the graph Laplacian matrix defined as
L := D − A,
where D is the diagonal degree matrix with entries D_ii = Σ_j A_ij,
and A is the adjacency matrix.
The masses that are tightly connected by the springs in the mass-spring system evidently move together from the equilibrium position in low-frequency vibration modes, so that the components of the eigenvectors corresponding to the smallest eigenvalues of the graph Laplacian can be used for meaningful clustering of the masses. For example, assuming that all the springs and the masses are identical in the 2-dimensional spring system pictured, one would intuitively expect that the loosest connected masses on the right-hand side of the system would move with the largest amplitude and in the opposite direction to the rest of the masses when the system is shaken — and this expectation will be confirmed by analyzing components of the eigenvectors of the graph Laplacian corresponding to the smallest eigenvalues, i.e., the smallest vibration frequencies.
Laplacian matrix normalization
The goal of normalization is to make the diagonal entries of the Laplacian matrix all equal to one, scaling the off-diagonal entries correspondingly. In a weighted graph, a vertex may have a large degree because of a small number of connected edges but with large weights just as well as due to a large number of connected edges with unit weights.
A popular normalized spectral clustering technique is the normalized cuts algorithm or Shi–Malik algorithm introduced by Jianbo Shi and Jitendra Malik, commonly used for image segmentation. It partitions points into two sets based on the eigenvector v corresponding to the second-smallest eigenvalue of the symmetric normalized Laplacian defined as
L^sym := I − D^(−1/2) A D^(−1/2).
The vector v is also the eigenvector corresponding to the second-largest eigenvalue of the symmetrically normalized adjacency matrix D^(−1/2) A D^(−1/2).
The random walk (or left) normalized Laplacian is defined as
L^rw := D^(−1) L = I − D^(−1) A
and can also be used for spectral clustering. A mathematically equivalent algorithm takes the eigenvector u corresponding to the largest eigenvalue of the random walk normalized adjacency matrix P = D^(−1) A.
The eigenvector v of the symmetrically normalized Laplacian and the eigenvector u of the left normalized Laplacian are related by the identity v = D^(1/2) u.
Cluster analysis via Spectral Embedding
Knowing the n-by-k matrix V of selected eigenvectors, the mapping — called spectral embedding — of the original n data points is performed to a k-dimensional vector space using the rows of V. Now the analysis is reduced to clustering vectors with k components, which may be done in various ways.
In the simplest case k = 1, the selected single eigenvector v, called the Fiedler vector, corresponds to the second smallest eigenvalue. Using the components of v, one can place all points whose component in v is positive in one set and the rest in the other, thus bi-partitioning the graph and labeling the data points with two labels. This sign-based approach follows the intuitive explanation of spectral clustering via the mass-spring model — in the low frequency vibration mode that the Fiedler vector represents, one cluster data points identified with mutually strongly connected masses would move together in one direction, while in the complement cluster data points identified with remaining masses would move together in the opposite direction. The algorithm can be used for hierarchical clustering by repeatedly partitioning the subsets in the same fashion.
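The sign-based bi-partitioning can be illustrated with a short sketch (not part of the original article) using NumPy and SciPy; the helper name, the toy similarity matrix and the weights are illustrative assumptions only.

import numpy as np
from scipy.linalg import eigh

def fiedler_bipartition(A):
    """Split nodes into two clusters by the sign of the Fiedler vector."""
    degrees = A.sum(axis=1)
    L = np.diag(degrees) - A            # unnormalized graph Laplacian L = D - A
    _, vecs = eigh(L)                   # eigh returns eigenvalues in ascending order
    fiedler = vecs[:, 1]                # eigenvector of the second-smallest eigenvalue
    return (fiedler > 0).astype(int)    # positive components form one cluster

# Two tightly connected pairs joined by weak links: expect [0, 0, 1, 1] (up to sign).
A = np.array([[0.0, 1.0, 0.01, 0.0],
              [1.0, 0.0, 0.0, 0.01],
              [0.01, 0.0, 0.0, 1.0],
              [0.0, 0.01, 1.0, 0.0]])
print(fiedler_bipartition(A))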
In the general case k > 1, any vector clustering technique can be used, e.g., DBSCAN.
Algorithms
Basic Algorithm
Calculate the Laplacian (or the normalized Laplacian)
Calculate the first k eigenvectors (the eigenvectors corresponding to the k smallest eigenvalues of the Laplacian)
Consider the matrix formed by the first k eigenvectors; the i-th row defines the features of graph node i
Cluster the graph nodes based on these features (e.g., using k-means clustering)
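A minimal sketch of these four steps, assuming a precomputed similarity matrix and the k-means implementation from scikit-learn (the function name and interface are illustrative, not prescribed by the article):

import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def spectral_clustering(A, k):
    d = A.sum(axis=1)
    L = np.diag(d) - A                          # step 1: (unnormalized) Laplacian
    _, vecs = eigh(L)                           # step 2: eigenvectors, ascending eigenvalues
    U = vecs[:, :k]                             # step 3: n-by-k matrix; row i = features of node i
    return KMeans(n_clusters=k, n_init=10).fit_predict(U)   # step 4: cluster the rows

In practice, scikit-learn's SpectralClustering class with affinity='precomputed' wraps essentially the same pipeline.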
If the similarity matrix has not already been explicitly constructed, the efficiency of spectral clustering may be improved if the solution to the corresponding eigenvalue problem is performed in a matrix-free fashion (without explicitly manipulating or even computing the similarity matrix), as in the Lanczos algorithm.
For large-sized graphs, the second eigenvalue of the (normalized) graph Laplacian matrix is often ill-conditioned, leading to slow convergence of iterative eigenvalue solvers. Preconditioning is a key technology accelerating the convergence, e.g., in the matrix-free LOBPCG method. Spectral clustering has been successfully applied on large graphs by first identifying their community structure, and then clustering communities.
Spectral clustering is closely related to nonlinear dimensionality reduction, and dimension reduction techniques such as locally-linear embedding can be used to reduce errors from noise or outliers.
Costs
Denoting the number of the data points by n, it is important to estimate the memory footprint and compute time, or number of arithmetic operations (AO) performed, as a function of n. No matter the algorithm of the spectral clustering, the two main costly items are the construction of the graph Laplacian and determining its eigenvectors for the spectral embedding. The last step — determining the labels from the n-by-k matrix of eigenvectors — is typically the least expensive, requiring only a number of AO linear in n and creating just an n-by-1 vector of the labels in memory.
The need to construct the graph Laplacian is common for all distance- or correlation-based clustering methods. Computing the eigenvectors is specific to spectral clustering only.
Constructing graph Laplacian
The graph Laplacian can be and commonly is constructed from the adjacency matrix. The construction can be performed matrix-free, i.e., without explicitly forming the matrix of the graph Laplacian and with no extra AO. It can also be performed in-place of the adjacency matrix without increasing the memory footprint. Either way, the costs of constructing the graph Laplacian are essentially determined by the costs of constructing the n-by-n graph adjacency matrix.
Moreover, a normalized Laplacian has exactly the same eigenvectors as the normalized adjacency matrix, but with the order of the eigenvalues reversed. Thus, instead of computing the eigenvectors corresponding to the smallest eigenvalues of the normalized Laplacian, one can equivalently compute the eigenvectors corresponding to the largest eigenvalues of the normalized adjacency matrix, without even talking about the Laplacian matrix.
Naive constructions of the graph adjacency matrix, e.g., using the RBF kernel, make it dense, thus requiring memory and AO quadratic in n to determine all of the n^2 entries of the matrix. The Nyström method can be used to approximate the similarity matrix, but the approximate matrix is not elementwise positive, i.e., it cannot be interpreted as a distance-based similarity.
Algorithms to construct the graph adjacency matrix as a sparse matrix are typically based on a nearest neighbor search, which estimate or sample a neighborhood of a given data point for nearest neighbors, and compute non-zero entries of the adjacency matrix by comparing only pairs of the neighbors. The number of the selected nearest neighbors thus determines the number of non-zero entries, and is often fixed so that the memory footprint of the n-by-n graph adjacency matrix is only O(n), only O(n) sequential arithmetic operations are needed to compute the non-zero entries, and the calculations can be trivially run in parallel.
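As an illustration (under assumed data sizes and parameter choices), such a sparse nearest-neighbour affinity matrix can be built with scikit-learn:

import numpy as np
from sklearn.neighbors import kneighbors_graph

X = np.random.rand(1000, 3)          # placeholder data: n = 1000 points in 3 dimensions
W = kneighbors_graph(X, n_neighbors=10, mode='connectivity', include_self=False)
A = 0.5 * (W + W.T)                  # symmetrize; sparse matrix with O(n) non-zeros for fixed n_neighbors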
Computing eigenvectors
The cost of computing the n-by-k (with k much smaller than n) matrix of selected eigenvectors of the graph Laplacian is normally proportional to the cost of multiplication of the n-by-n graph Laplacian matrix by a vector, which varies greatly whether the graph Laplacian matrix is dense or sparse. For the dense case the cost thus is O(n^2). The O(n^3) cost very commonly cited in the literature comes from choosing k = n and is clearly misleading, since, e.g., k = 2 in a hierarchical spectral clustering as determined by the Fiedler vector.
In the sparse case of the n-by-n graph Laplacian matrix with O(n) non-zero entries, the cost of the matrix-vector product and thus of computing the n-by-k (with k much smaller than n) matrix of selected eigenvectors is O(n), with the memory footprint also only O(n) — both are the optimal lower bounds of complexity of clustering n data points. Moreover, matrix-free eigenvalue solvers such as LOBPCG can efficiently run in parallel, e.g., on multiple GPUs with distributed memory, resulting not only in high quality clusters, which spectral clustering is famous for, but also in top performance.
Software
Free software implementing spectral clustering is available in large open source projects like scikit-learn using LOBPCG with multigrid preconditioning or ARPACK, MLlib for pseudo-eigenvector clustering using the power iteration method, and R.
Relationship with other clustering methods
The ideas behind spectral clustering may not be immediately obvious. It may be useful to highlight relationships with other methods. In particular, it can be described in the context of kernel clustering methods, which reveals several similarities with other approaches.
Relationship with k-means
The weighted kernel k-means problem
shares the objective function with the spectral clustering problem, which can be optimized directly by multi-level methods.
Relationship to DBSCAN
In the trivial case of determining connected graph components — the optimal clusters with no edges cut — spectral clustering is also related to a spectral version of DBSCAN clustering that finds density-connected components.
Measures to compare clusterings
Ravi Kannan, Santosh Vempala and Adrian Vetta proposed a bicriteria measure to define the quality of a given clustering. They said that a clustering was an (α, ε)-clustering if the conductance of each cluster (in the clustering) was at least α and the weight of the inter-cluster edges was at most ε fraction of the total weight of all the edges in the graph. They also look at two approximation algorithms in the same paper.
History and related literatures
Spectral clustering has a long history. Spectral clustering as a machine learning method was popularized by Shi & Malik and Ng, Jordan, & Weiss.
Ideas and network measures related to spectral clustering also play an important role in a number of applications apparently different from clustering problems. For instance, networks with stronger spectral partitions take longer to converge in opinion-updating models used in sociology and economics.
See also
Affinity propagation
Kernel principal component analysis
Cluster analysis
Spectral graph theory
References
Cluster analysis algorithms
Algebraic graph theory | Spectral clustering | [
"Mathematics"
] | 2,551 | [
"Mathematical relations",
"Graph theory",
"Algebra",
"Algebraic graph theory"
] |
13,652,006 | https://en.wikipedia.org/wiki/Close%20studding | Close studding is a form of timber work used in timber-framed buildings in which vertical timbers (studs) are set close together, dividing the wall into narrow panels. Rather than being a structural feature, the primary aim of close studding is to produce an impressive front.
Close studding first appeared in England in the 13th century and was commonly used there from the mid-15th century until the end of the 17th century. It was also common in France from the 15th century.
Description
Although close studding is defined by the distance between the vertical timbers, the spacing used is variable, up to a maximum of around 2 feet (600 mm). Studs can either span the full height of the storey or be divided by a middle (or intermediate) rail. To give the frame stability, some form of diagonal bracing is required. Limewash and coloured paints would have been used to enhance the pattern.
History and usage
The use of close studding possibly originated in East Anglia, where the technique was employed in the earliest surviving timber walls thought to date from the early 13th century. Among the earliest examples outside East Anglia are St Michael's Church, Baddiley in Cheshire (1308) and Mancetter Manor in Warwickshire (c. 1330). It became fashionable in England around 1400, and by the middle of the 15th century close studding was widely used across that country. Its popularity coincided with the dominance of the Perpendicular style of architecture, with its emphasis on verticals. Close studding remained in common use in England until the end of the 17th century. Close-studded buildings dating from the 15th and 16th centuries are also seen in France, and some experts believe the technique might have originated there. Close studding is very common in the Normandy region of France.
Compared with square framing, close studding uses a lot of timber and is time-consuming to construct; it was therefore particularly employed for buildings of relatively high status. Public buildings such as guildhalls, market halls, churches and inns often employed close studding. It was also used for private houses of the wealthy, particularly townhouses but also the more prosperous farmhouses. Close studding was not usually employed in outbuildings, although occasional examples exist, such as the Gunthwaite Hall barn in Barnsley. Although most examples occur in entirely timber-framed buildings, close studding was also used on the upper storeys of houses with a stone or brick ground storey; examples include the Dragon Hall in Norwich and the Café 'Cave St-Vincent' in Compiègne, France.
With its lavish use of timber, close studding was extravagant and was seen as a status symbol. This led to it being faked with paint or even cosmetic planking. The heavy timber consumption probably also contributed to the decline in the use of close studding from the end of the 17th century, with a reduced supply of domestic hardwood as well as increased competition for timber.
Variations
Regional variation occurred across England in the use of the middle rail, which was common in the midlands but rare in the east and south east. Variation in bracing is also seen. Some close-studded buildings, mainly dated before the mid-16th century, have arch or tension bracing to the exterior; examples include the Guildhall in Lavenham and the Chantry House in Bunbury. In later use, however, braces were usually constructed on the interior and concealed by plaster panelling.
Close studding was sometimes used in association with decorative panel work or close panelling, particularly from the end of the 16th century. In such buildings, the lower storey would usually employ close studding, while the upper storeys would have small square panels with or without ornamentation. Examples include the White Lion in Congleton and Moat Farm in Longdon. An ornamental effect was also sometimes obtained with herringbone or chevron bracing between the uprights.
Selected examples
Good examples of the various forms of the technique include:
Churches
Church of St James and St Paul, Marton, Cheshire: close studding with middle rail (c. 1370)
St Michael's Church, Baddiley, Cheshire: the chancel has close studding without a middle rail, with later brick infill (1308)
St Michael and All Angels Church, Altcar, West Lancashire: mostly close studded with middle rail (a much later example of 1879)
St Peter's Church, Melverley, Shropshire: close studding with middle rail (late 15th century)
Inns and cafés
Bear's Head Hotel, Brereton, Cheshire: close studding with two rails (1615)
Café 'Cave St-Vincent', Compiègne, France: close studding with braces on upper storey over brick ground floor with stone trimming (15th century)
Crown Hotel, Nantwich, Cheshire: close studding on all three storeys with middle rail (c. 1584)
String of Horses Inn, originally at Frankwell, Shrewsbury, Shropshire, now at Avoncroft Museum of Historic Buildings: close studding with middle rail on both ground and first storeys (1576)
White Lion, Congleton, Cheshire: lower storey has close studding, with decorative panelling above (early 16th century)
The Falcon, Chester, Cheshire, formerly a town house, now a public house, which has close studding on its east front at the level of the Chester Rows.
Private houses
Chantry House, Bunbury, Cheshire: very close studding, with tension braces and arch bracing and no middle rail. (1527)
Gawsworth Old Rectory, Gawsworth, Cheshire: close studding with middle rail and arch bracing (late 16th century)
Greyfriars, Worcester, Worcestershire: close studding with middle rail to both storeys (c.1480–1500)
Mancetter Manor, Mancetter, Warwickshire: close studding with plaster infill (c. 1330)
Moat Farm, Longdon, Worcestershire: close studding with middle rail on ground floor; upper floor mixes square framing and decorative panelling
Moss Hall, Audlem, Cheshire: close studding with middle rails to each storey, with no decorative panelling (1616)
Paycocke's, Coggeshall, Essex: the main elevation has close studding on both storeys, with a middle rail on the ground floor (c. 1500)
Public halls
Booth Hall or Round House, Evesham, Worcestershire: close studding with middle rail on all three storeys (late 15th century)
Dragon Hall, Norwich, Norfolk: close studding without middle rail to first floor, over brick and flint ground floor (14th century)
Guildhall, Lavenham, Suffolk: close studding to all storeys, with tension braces and no middle rail (early 16th century)
Moot Hall, Fordwich, Kent: close-studded overhanging first storey with brick or plaster infill and no middle rail; the ground floor in brick and flint was rebuilt at a later date (early 15th century)
Town residence, Albi, France: close studding on three storeys (16th century)
See also
Poteaux-sur-sol construction in the historical region of North America known as New France, which can have a similar appearance
Notes and references
Sources
Harris R. Discovering Timber-framed Buildings (Shire Publications, Princes Risborough; 2003) ()
McKenna L. Timber Framed Buildings in Cheshire (Cheshire County Council; 1994) ()
Brooks A, Pevsner N. Worcestershire: The Buildings of England (revised edn) (Yale University Press; 2007) ()
Building
Timber framing | Close studding | [
"Technology",
"Engineering"
] | 1,552 | [
"Structural system",
"Building",
"Construction",
"Timber framing"
] |
7,422,265 | https://en.wikipedia.org/wiki/Lehmer%20mean | In mathematics, the Lehmer mean of a tuple x of positive real numbers, named after Derrick Henry Lehmer, is defined as L_p(x) = (Σ_k x_k^p) / (Σ_k x_k^(p-1)).
The weighted Lehmer mean with respect to a tuple w of positive weights is defined as L_{p,w}(x) = (Σ_k w_k x_k^p) / (Σ_k w_k x_k^(p-1)).
The Lehmer mean is an alternative to power means
for interpolating between minimum and maximum via arithmetic mean and harmonic mean.
Properties
The derivative of p ↦ L_p(x) is non-negative, thus this function is monotonically increasing in p, and the inequality L_p(x) ≤ L_q(x) for p ≤ q holds.
The derivative of the weighted Lehmer mean is:
Special cases
L_{-∞}(x) (the limit as p → -∞) is the minimum of the elements of x.
L_0(x) is the harmonic mean.
L_{1/2}(x, y) is the geometric mean of the two values x and y.
L_1(x) is the arithmetic mean.
L_2(x) is the contraharmonic mean.
L_{+∞}(x) (the limit as p → +∞) is the maximum of the elements of x. Sketch of a proof: Without loss of generality let x_1, ..., x_k be the values which equal the maximum. Then L_p(x) = x_1 · (k + Σ_{j>k} (x_j/x_1)^p) / (k + Σ_{j>k} (x_j/x_1)^(p-1)); since x_j/x_1 < 1 for j > k, both sums tend to k as p → ∞, so L_p(x) → x_1, the maximum.
Applications
Signal processing
Like a power mean, a Lehmer mean serves as a non-linear moving average which is shifted towards small signal values for small p and emphasizes big signal values for big p. Given an efficient implementation of a moving arithmetic mean called smooth, you can implement a moving Lehmer mean according to the following Haskell code.
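-- Pointwise quotient of the smoothed p-th powers by the smoothed (p-1)-th powers;
-- 'smooth' can be any moving arithmetic mean of type [a] -> [a].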
lehmerSmooth :: Floating a => ([a] -> [a]) -> a -> [a] -> [a]
lehmerSmooth smooth p xs =
zipWith (/)
(smooth (map (**p) xs))
(smooth (map (**(p-1)) xs))
For big p it can serve as an envelope detector on a rectified signal.
For small p it can serve as a baseline detector on a mass spectrum.
Gonzalez and Woods call this a "contraharmonic mean filter", described for varying values of p (however, as above, the contraharmonic mean can refer to the specific case p = 2). Their convention is to substitute p with the order of the filter Q:
Q=0 is the arithmetic mean. Positive Q can reduce pepper noise and negative Q can reduce salt noise.
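A minimal sketch of such a filter on a 1-D signal, assuming NumPy (the window length, the sample signal and the function name are illustrative assumptions, not taken from the article):

import numpy as np

def contraharmonic_filter(x, window, Q):
    """Moving contraharmonic (Lehmer-type) filter of order Q; Q = 0 gives the arithmetic moving average."""
    kernel = np.ones(window) / window
    num = np.convolve(x ** (Q + 1), kernel, mode='same')
    den = np.convolve(x ** Q, kernel, mode='same')
    return num / den

signal = np.array([1.0, 1.0, 0.0, 1.0, 5.0, 1.0, 1.0])   # one pepper (0.0) and one salt (5.0) outlier
print(contraharmonic_filter(signal, 3, 1.5))              # positive Q pulls the pepper sample back up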
See also
Mean
Power mean
Notes
External links
Lehmer Mean at MathWorld
Means
Articles with example Haskell code | Lehmer mean | [
"Physics",
"Mathematics"
] | 441 | [
"Means",
"Mathematical analysis",
"Point (geometry)",
"Geometric centers",
"Symmetry"
] |
7,422,711 | https://en.wikipedia.org/wiki/Energy%20budget | An energy budget is a balance sheet of energy income against expenditure. It is studied in the field of Energetics which deals with the study of energy transfer and transformation from one form to another. Calorie is the basic unit of measurement. An organism in a laboratory experiment is an open thermodynamic system, exchanging energy with its surroundings in three ways - heat, work and the potential energy of biochemical compounds.
Organisms use ingested food resources (C=consumption) as building blocks in the synthesis of tissues (P=production) and as fuel in the metabolic process that power this synthesis and other physiological processes (R=respiratory loss). Some of the resources are lost as waste products (F=faecal loss, U=urinary loss). All these aspects of metabolism can be represented in energy units. The basic model of energy budget may be shown as:
P = C - R - U - F or
P = C - (R + U + F) or
C = P + R + U + F
All the aspects of metabolism can be represented in energy units (e.g. joules (J); 1 calorie ≈ 4.2 J, so 1 kilocalorie ≈ 4.2 kJ).
Energy used for metabolism will be
R = C - (F + U + P)
Energy used in the maintenance will be
R + F + U = C - P
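A worked numerical example of this bookkeeping (illustrative figures only, not taken from the article):

# Energy budget bookkeeping: P = C - (R + U + F)
C, R, U, F = 1000.0, 600.0, 50.0, 250.0    # consumption, respiration, urinary and faecal losses (J)
P = C - (R + U + F)                         # energy available for production: 100.0 J
maintenance = R + F + U                     # equals C - P: 900.0 J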
Endothermy and ectothermy
Energy budget allocation varies for endotherms and ectotherms. Ectotherms rely on the environment as a heat source while endotherms maintain their body temperature through the regulation of metabolic processes. The heat produced in association with metabolic processes facilitates the active lifestyles of endotherms and their ability to travel far distances over a range of temperatures in the search for food. Ectotherms are limited by the ambient temperature of the environment around them but the lack of substantial metabolic heat production accounts for an energetically inexpensive metabolic rate. The energy demands for ectotherms are generally one tenth of that required for endotherms.
References
Kumar, Ranjan (1999): Studies on Bioenergics modelling in a fresh water fish, Mystus vittatus (Bloch), Ph.D. thesis, Magadh University, Bodh Gaya.
B.R. Braaten (1976): Bioenergetics - a review on Methodology. In: Halver J. E. and K. Tiews (eds). Finfish nutrition and Finfish Technology vol. II, pp. 461–504. Berlin, Hennemann.
Brett, J. R. (1962) and T. D. D. Groves (1979): Physiological energetics. In: W.S. Hoar, D.J. Randall and J. R. Brett(eds). In: Fish Physiology, Vol VII. PP.279–352. N.Y.; A.P.
Cui, Y and R. J. Wootton (1988): Bioenergetics of growth of a cyprinid Phoxinus phoxinus : the effect of ration, temperature, and body size on food consumption, faecal and nitrogen excretion. J. Fish. Biol, 33: 431–443.
Elliott, J.M. and L. Persson (1978): The estimation of daily rate of food consumption for fish. J. Anim. Ecol. 47,977.
Fischer, Z (1983): The elements of energy balance in grass carp (Ctenophayngodon idella) part-IV, consumption rate of grass carp fed on different types of food.
Kerr, S.R. (1982): Estimating the energy budgets of actively predatory fish. Can. J.Fish Aqual. Sci, 39-371.
Kleiber, M. (1961): The fire of life - An Introduction to animal Energetics. Wiley, New York
Prabhakar, A. K. (1997): Studies on energy budget in a siluroid fish, Heteropneustes fossilis (Bloch), Ph.D. thesis, Magadh University, Bodh Gaya.
Ray, A. K and B. C. Patra (1987): Method for collecting fish faeces for studying the digestibility of feeds J. Inland. Fish Soc. India. 19 (I) 71–73.
Sengupta, A. and Amitta Moitra (1996): Energy Budget in relation to various dietary conditions in snake headed murrel, Channa punctatus: Proc. 83rd ISCA: ABS No. 95: pp. 56.
Staples, D.J. and M. Nomura (1976): Influence of body size and food ration on the energy budget of rainbow trout, Salmo gairdneri (Rechardson). J. Fish Biol. 9, 29.
Von Bertalanfly, L. (1957): Quantitative law in Metabolism. Quartz. Rev. biol. 32: 217–231
Warren, C.E. and G.E. Davies (1967): Laboratory studies on the feeding bioenergetics and growth of fish. In: Gerking, S.D. (eds). The biological basis for freshwater Fish Production. pp. 175–214. Oxford, Blackwell.
Budgets
Biology | Energy budget | [
"Biology"
] | 1,114 | [
"Physiology"
] |
7,423,236 | https://en.wikipedia.org/wiki/Serrurier%20truss | A Serrurier truss is used in telescope tube assembly construction. The design was created in 1935 by engineer Mark U. Serrurier when he was working on the Mount Palomar Hale Telescope. The design solves the problem of truss flexing by supporting the primary objective mirror and the secondary mirror by two sets of opposing trusses before and after the declination pivot. The trusses are designed to have an equal amount of flexure, which allows the optics to stay on a common optical axis. When flexing, the "top" truss resists tension and the "bottom" truss resists compression. This has the effect of keeping the optical elements parallel to each other. The net result is all of the optical elements stay in collimation regardless of the orientation of the telescope.
Some Serrurier truss designs end the truss members with a short flexible rod creating a more ideal "parallel motion flexure" system, to allow maximum parallelism of optical elements under gravitational load. Since truss members work primarily in tension and compression, there is no appreciable loss of stiffness due to the bending of the end flexures.
Certain designs used by amateur telescope makers, specifically truss tube Dobsonians that use a single truss, are sometimes called "Serrurier truss" designs. These single truss designs are used for their rigidity and do perform the function of keeping the optical elements parallel, but since they lack the opposing truss that keeps optics on the same optical axis they are not technically "Serrurier trusses".
Other examples of Serrurier truss designs:
See also
List of telescope parts and construction
Footnotes
References
Learner, Richard. "The Legacy of the 200-inch", Sky&Telescope, April 1986, pp. 349–353
Diffrient, Roy. "Flexure of a Serrurier Truss", Sky&Telescope, February 1994, pp. 91–94
astro.caltech.edu - Reflecting Telescopes
Optical telescope components
Articles containing video clips
"Technology"
] | 420 | [
"Optical telescope components",
"Components"
] |
7,423,263 | https://en.wikipedia.org/wiki/Control%20reconfiguration | Control reconfiguration is an active approach in control theory to achieve fault-tolerant control for dynamic systems. It is used when severe faults, such as actuator or sensor outages, cause a break-up of the control loop, which must be restructured to prevent failure at the system level. In addition to loop restructuring, the controller parameters must be adjusted to accommodate changed plant dynamics. Control reconfiguration is a building block toward increasing the dependability of systems under feedback control.
Reconfiguration problem
Fault modelling
The figure to the right shows a plant controlled by a controller in a standard control loop.
The nominal linear model of the plant is
ẋ(t) = A x(t) + B u(t), y(t) = C x(t).
The plant subject to a fault (indicated by a red arrow in the figure) is modelled in general by
ẋ(t) = A_f x(t) + B_f u(t), y(t) = C_f x(t),
where the subscript f indicates that the system is faulty. This approach models multiplicative faults by modified system matrices. Specifically, actuator faults are represented by the new input matrix B_f, sensor faults are represented by the output map C_f, and internal plant faults are represented by the system matrix A_f.
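As a brief illustration (assumed numbers and helper name, not from the article), a total outage of one actuator can be represented by zeroing the corresponding column of the input matrix, giving a faulty matrix B_f while A and C stay nominal:

import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -0.5]])     # nominal system matrix
B = np.array([[0.0, 1.0], [1.0, 0.5]])       # nominal input matrix (two actuators)
C = np.eye(2)                                # nominal output map

def actuator_outage(B, j):
    """Return B_f for a complete outage of actuator j (column j set to zero)."""
    B_f = B.copy()
    B_f[:, j] = 0.0
    return B_f

B_f = actuator_outage(B, 1)                  # fault: the second actuator is lost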
The upper part of the figure shows a supervisory loop consisting of fault detection and isolation (FDI) and reconfiguration which changes the loop by
choosing new input and output signals from the set of all available inputs and outputs to reach the control goal,
changing the controller internals (including dynamic structure and parameters),
adjusting the reference input.
To this end, the vectors of inputs and outputs contain all available signals, not just those used by the controller in fault-free operation.
Alternative scenarios can model faults as an additive external signal influencing the state derivatives and outputs as follows:
ẋ(t) = A x(t) + B u(t) + f_x(t), y(t) = C x(t) + f_y(t),
where f_x and f_y denote the additive fault signals.
Reconfiguration goals
The goal of reconfiguration is to keep the reconfigured control-loop performance sufficient for preventing plant shutdown. The following goals are distinguished:
Stabilization
Equilibrium recovery
Output trajectory recovery
State trajectory recovery
Transient time response recovery
Internal stability of the reconfigured closed loop is usually the minimum requirement. The equilibrium recovery goal (also referred to as weak goal) refers to the steady-state output equilibrium which the reconfigured loop reaches after a given constant input. This equilibrium must equal the nominal equilibrium under the same input (as time tends to infinity). This goal ensures steady-state reference tracking after reconfiguration. The output trajectory recovery goal (also referred to as strong goal) is even stricter. It requires that the dynamic response to an input must equal the nominal response at all times. Further restrictions are imposed by the state trajectory recovery goal, which requires that the state trajectory be restored to the nominal case by the reconfiguration under any input.
Usually a combination of goals is pursued in practice, such as the equilibrium-recovery goal with stability.
The question whether or not these or similar goals can be reached for specific faults is addressed by reconfigurability analysis.
Reconfiguration approaches
Fault hiding
This paradigm aims at keeping the nominal controller in the loop. To this end, a reconfiguration block can be placed between the faulty plant and the nominal controller. Together with the faulty plant, it forms the reconfigured plant. The reconfiguration block has to fulfill the requirement that the behaviour of the reconfigured plant matches the behaviour of the nominal, that is fault-free plant.
Linear model following
In linear model following, the aim is to recover a formal feature of the nominal closed loop. In the classical pseudo-inverse method, the closed-loop system matrix A + B·K of a state-feedback control structure is used. The new controller K_f is found such that the reconfigured closed-loop matrix A_f + B_f·K_f approximates the nominal one in the sense of an induced matrix norm.
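A minimal sketch of this idea, assuming state feedback of the form u = K·x (the matrices and the helper name are illustrative assumptions):

import numpy as np

def pim_reconfigure(A, B, K, A_f, B_f):
    """Pseudo-inverse method: choose K_f so that A_f + B_f K_f approximates A + B K."""
    target = A + B @ K                                     # nominal closed-loop system matrix
    K_f = np.linalg.pinv(B_f) @ (target - A_f)             # least-squares solution via the pseudo-inverse
    gap = np.linalg.norm(target - (A_f + B_f @ K_f), 2)    # induced 2-norm of the remaining mismatch
    return K_f, gap

Note that the pseudo-inverse method minimises the matrix mismatch but does not by itself guarantee stability of the reconfigured loop, which must be checked separately.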
In perfect model following, a dynamic compensator is introduced to allow for the exact recovery of the complete loop behaviour under certain conditions.
In eigenstructure assignment, the nominal closed-loop eigenvalues and eigenvectors (the eigenstructure) are recovered after a fault.
Optimisation-based control schemes
Optimisation control schemes include: linear-quadratic regulator design (LQR), model predictive control (MPC) and eigenstructure assignment methods.
Probabilistic approaches
Some probabilistic approaches have been developed.
Learning control
Learning-based approaches include learning automata, neural networks, and similar techniques.
Mathematical tools and frameworks
The methods by which reconfiguration is achieved differ considerably. The following list gives an overview of mathematical approaches that are commonly used.
Adaptive control (AC)
Disturbance decoupling (DD)
Eigenstructure assignment (EA)
Gain scheduling (GS)/linear parameter varying (LPV)
Generalised internal model control (GIMC)
Intelligent control (IC)
Linear matrix inequality (LMI)
Linear-quadratic regulator (LQR)
Model following (MF)
Model predictive control (MPC)
Pseudo-inverse method (PIM)
Robust control techniques
See also
Prior to control reconfiguration, it must be at least determined whether a fault has occurred (fault detection) and if so, which components are affected (fault isolation). Preferably, a model of the faulty plant should be provided (fault identification). These questions are addressed by fault diagnosis methods.
Fault accommodation is another common approach to achieve fault tolerance. In contrast to control reconfiguration, accommodation is limited to internal controller changes. The sets of signals manipulated and measured by the controller are fixed, which means that the loop cannot be restructured.
References
Further reading
.
.
Control theory
Cybernetics
Control engineering
Fault tolerance | Control reconfiguration | [
"Mathematics",
"Engineering"
] | 1,130 | [
"Reliability engineering",
"Applied mathematics",
"Control theory",
"Fault tolerance",
"Control engineering",
"Dynamical systems"
] |
7,423,338 | https://en.wikipedia.org/wiki/Stolarsky%20mean | In mathematics, the Stolarsky mean is a generalization of the logarithmic mean. It was introduced by Kenneth B. Stolarsky in 1975.
Definition
For two positive real numbers x, y with x ≠ y the Stolarsky mean is defined as:
S_p(x, y) = ( (x^p − y^p) / (p (x − y)) )^(1/(p−1)).
Derivation
It is derived from the mean value theorem, which states that a secant line, cutting the graph of a differentiable function f at (x, f(x)) and (y, f(y)), has the same slope as a line tangent to the graph at some point ξ in the interval (x, y).
The Stolarsky mean is obtained by ξ = (f′)^(−1)( (f(x) − f(y)) / (x − y) )
when choosing f(x) = x^p.
Special cases
S_{-∞}(x, y) (the limit as p → -∞) is the minimum.
S_{-1}(x, y) is the geometric mean.
S_0(x, y) (the limit as p → 0) is the logarithmic mean. It can be obtained from the mean value theorem by choosing f(x) = ln x.
S_{1/2}(x, y) is the power mean with exponent 1/2.
S_1(x, y) (the limit as p → 1) is the identric mean. It can be obtained from the mean value theorem by choosing f(x) = x·ln x.
S_2(x, y) is the arithmetic mean.
S_3(x, y) = ((x² + x·y + y²)/3)^(1/2) is a connection to the quadratic mean and the geometric mean.
S_{+∞}(x, y) (the limit as p → +∞) is the maximum.
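A small numerical sketch of the two-variable definition and a few of the special cases above (the function name and test values are arbitrary):

def stolarsky(x, y, p):
    """Two-variable Stolarsky mean for p not in {0, 1} and x != y."""
    if x == y:
        return x
    return ((x**p - y**p) / (p * (x - y))) ** (1.0 / (p - 1))

x, y = 2.0, 8.0
print(stolarsky(x, y, 2))      # 5.0  -> arithmetic mean (x + y) / 2
print(stolarsky(x, y, -1))     # 4.0  -> geometric mean sqrt(x * y)
print(stolarsky(x, y, 1e-6))   # ~4.3281 -> approaches the logarithmic mean (x - y) / (ln x - ln y)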
Generalizations
One can generalize the mean to n + 1 variables by considering the mean value theorem for divided differences for the nth derivative.
One obtains the corresponding mean of n + 1 variables again for f(x) = x^p.
See also
Mean
References
Means | Stolarsky mean | [
"Physics",
"Mathematics"
] | 229 | [
"Means",
"Mathematical analysis",
"Point (geometry)",
"Geometric centers",
"Symmetry"
] |
7,423,424 | https://en.wikipedia.org/wiki/Identric%20mean | The identric mean of two positive real numbers x, y is defined as I(x, y) = (1/e) · (x^x / y^y)^(1/(x − y)) for x ≠ y, with I(x, x) = x.
It can be derived from the mean value theorem by considering the secant of the graph of the function x ↦ x·ln x. It can be generalized to more variables by the mean value theorem for divided differences. The identric mean is a special case of the Stolarsky mean.
See also
Mean
Logarithmic mean
References
Means | Identric mean | [
"Physics",
"Mathematics"
] | 83 | [
"Means",
"Mathematical analysis",
"Point (geometry)",
"Geometric centers",
"Symmetry"
] |
7,423,545 | https://en.wikipedia.org/wiki/IBM%20App%20Connect%20Enterprise | IBM App Connect Enterprise (abbreviated as IBM ACE, formerly known as IBM Integration Bus (IIB), WebSphere Message Broker (WMB), WebSphere Business Integration Message Broker (WBIMB) and WebSphere MQSeries Integrator (WMQI), and started life as MQSeries Systems Integrator (MQSI)) is IBM's integration software offering, allowing business information to flow between disparate applications across multiple hardware and software platforms. Rules can be applied to the data flowing through user-authored integrations to route and transform the information. The product can be used as an Enterprise Service Bus supplying a communication channel between applications and services in a service-oriented architecture. From V11, App Connect supports container-native deployments with highly optimised container start-up times.
IBM ACE provides capabilities to build integration flows needed to support diverse integration requirements through a set of connectors to a range of data sources, including packaged applications, files, mobile devices, messaging systems, and databases. A benefit of using IBM ACE is that the tool enables existing applications for Web Services without costly legacy application rewrites. IBM ACE avoids the point-to-point strain on development resources by connecting any application or service over multiple protocols, including SOAP, HTTP and JMS. Modern secure authentication mechanisms, including the ability to perform actions on behalf of masquerading or delegate users, through MQ, HTTP and SOAP nodes are supported such as LDAP, X-AUTH, O-AUTH, and two-way SSL.
A major focus of IBM ACE in its recent releases has been the capability of the product's runtime to be fully hosted in a cloud. Hosting the runtime in the cloud provides certain advantages and potential cost savings compared to hosting the runtime on premises as it simplifies the maintenance and application of OS-level patches which can sometimes be disruptive to business continuity. Also, cloud hosting of IBM ACE runtime allows easy expansion of capacity by adding more horsepower to the CPU configuration of a cloud environment or by adding additional nodes in an Active-Active configuration. An additional advantage of maintaining IBM ACE runtime in the cloud is the ability to configure access to your IBM ACE functionality separate and apart from your internal network using DataPower or API Connect devices. This allows people or services on the public internet to access your Enterprise Service Bus without passing through your internal network, which can be a more secure configuration than if your ESB was deployed to your internal on premises network.
IBM ACE embeds a Common Language Runtime to invoke any .NET logic as part of an integration. It also includes full support for the Visual Studio development environment, including the integrated debugger and code templates. IBM Integration Bus includes a comprehensive set of patterns and samples that demonstrate bi-directional connectivity with both Microsoft Dynamics CRM and MSMQ. Several improvements have been made to this current release, among them the ability to configure runtime parameters using a property file that is part of the deployed artifacts contained in the BAR ('broker archive') file. Previously, the only way to configure runtime parameters was to run an MQSI command on the command line. This new way of configuration is referred to as a policy document and can be created with the new Policy Editor. Policy documents can be stored in a source code control system and a different policy can exist for different environments (DEV, INT, QA, PROD).
IBM ACE is compatible with several virtualization platforms right out-of-the-box, Docker being a prime example. With IBM ACE, you can download an IBM ACE runtime image from the global Docker repository and run it locally. Because IBM ACE has its administrative console built right into the runtime, once the Docker image is active on your local machine, you can run all the configuration and administration commands needed to fully activate any message flow or deploy any BAR file. In fact, you can construct message flows that are microservices and package these microservices into a Docker deployable object directly. Because message flows and BAR files can contain Policy files, this node configuration can be automatic, and little or no human intervention is needed to complete the application deployment.
Features
IBM represents the following features as key differentiators of the IBM ACE product when compared to other industry products that provide the services of an Enterprise Service Bus or Micro-services integration service:
Simplicity and productivity
Simplified process for installation: The process to deploy and configure IBM ACE so that an integration developer can use the IBM ACE Toolkit to start creating applications is simplified and quicker to complete.
Tutorials Gallery: From the Tutorials Gallery an integration developer can install, deploy, and test sample integration flows.
Shared libraries: Shared libraries are introduced in V10 to share resources between multiple applications. Libraries in previous versions of IBM Integration Bus are static libraries.
Removal of the WebSphere MQ prerequisite: WebSphere MQ is no longer a prerequisite for using IBM ACE on distributed platforms, which means that you can develop and deploy applications independently of WebSphere MQ.
Universal and independent
Graphical data mapping
Industry-specific and relevant
Dynamic and intelligent
High-performing and scalable
Discovery Connectors
Optimised container deployments
Built-in unit testing, with mocks, batch creation of tests integrated with CI/CD pipelines.
IBM delivers the IBM ACE software either as a traditional on-premises install (deployed to VMs, bare metal, or container-native environments; IBM ACE is also a key technology in IBM Cloud Pak for Integration (CP4i)) or as an IBM-administered cloud environment. The Integration services in a cloud environment reduce capital expenditures, increase application and hardware availability, and offload the skills for managing an Integration service environment to IBM cloud engineers. This promotes the ability of end users to focus on developing integration flows rather than installing, configuring, and managing the IBM ACE software. The offering is intended to be compatible with the on-premises product. Within the constraints of a cloud environment, users can use the same development tooling for both cloud and on-premises software, and the assets that are generated can be deployed to either.
History
Originally IBM partnered with NEON (New Era of Networks) Inc., a company that was acquired by Sybase in 2001. In 2000, IBM wrote its own product, called 'MQSeries Integrator' (or 'MQSI' for short). Versions of MQSI ran up to version 2.0. The product was added to the WebSphere family and re-branded 'WebSphere MQ Integrator', at version 2.1.
After 2.1 the version numbers became more synchronized with the rest of the WebSphere family and jumped to version 5.0. The name changed to 'WebSphere Business Integration Message Broker' (WBIMB). In this version the development environment was redesigned using Eclipse and support for Web services was integrated into the product.
Since version 6.0 the product has been known as 'WebSphere Message Broker'. WebSphere Message Broker version 7.0 was announced in October 2009, and WebSphere Message Broker version 8.0 was announced in October 2011
In April 2013, IBM announced that the WebSphere Message Broker product was undergoing another rebranding name change. IBM Integration Bus version 9 includes new nodes such as the Decision Service node which enables content based routing based on a rules engine and requires IBM WebSphere Operational Decision Management product. The IBM WebSphere Enterprise Service Bus product has been discontinued with the release of IBM Integration Bus and IBM is offering transitional licenses to move to IBM Integration Bus. The WebSphere Message Broker Transfer License for WebSphere Enterprise Service Bus enables customers to exchange some or all of their WebSphere Enterprise Service Bus license entitlements for WebSphere Message Broker license entitlements. Following the license transfer, entitlement to use WebSphere Enterprise Service Bus will be reduced or cease. This reflects the WebSphere Enterprise Service Bus license entitlements being relinquished during the exchange. IBM announced at Impact 2013 that WESB will be end-of-life in five years and no further feature development of the WESB product will occur.
In 2018 IBM App Connect Enterprise V11 was released which enabled the deployment of container native micro-services style integration services as well as continued support of Enterprise Service Bus (ESB) deployments. In 2021 App Connect Enterprise V12 was released with many enhanced capabilities such as optimised container deployments reducing container start-up times and resource requirements. IBM App Connect Enterprise V12 also featured the use of 'Discovery Connectors', enabling integration developers to discover objects in systems such as Saas and Cloud, as well as discoverable on-premise applications.
Components
IBM App Connect Enterprise consists of the following components:
An integration server process hosts threads called message flows to route, transform, and enrich in-flight messages. Application programs connect to and send messages to the integration server, and receive messages from the integration server. Integration servers can exist independently or as part of a set owned by an integration node (formerly known as a Broker).
IBM ACE Toolkit is an Eclipse-based tool that developers use to construct message flows and transformation artifacts using editors to work with specific types of resources. Context-sensitive help is available to developers throughout the Toolkit and various wizards provide quick-start capability on certain tasks. Application developers work in separate instances of the Toolkit to develop resources associated with message flows. The Toolkit connects to the integration servers or integration nodes to which the message flows are deployed.
IBM App Connect web user interface (UI) enables System Administrators to view and manage integration resources through an HTTP client without any additional management software. It connects to a single port on an integration server or integration node, provides a view of all deployed integration flows, and gives System Administrators access to important operational features such as data record and replay, Business Transaction Monitoring (BTM), statistics and accounting data for deployed message flows that monitor the performance of integrations, and an administration audit log. (The web UI supersedes the Eclipse-based Explorer from earlier versions).
How App Connect works
A SOA developer or integration developer defines message flows in the IBM ACE Toolkit by including several message flow nodes, each of which represents a set of actions that define a processing step. How the message flow nodes are joined determines which processing steps are carried out, in which order, and under which conditions. A message flow includes an input node that provides the source of the messages that are processed, which can be processed in one or more ways, and optionally deliver through one or more output nodes. The message is received as a bit stream, without representational structure or format, and is converted by a parser into a tree structure that is used internally in the message flow. Before the message is delivered to a final destination, it is converted back into a bit stream.
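The overall shape of such a flow — parse the incoming bit stream into a tree, transform it, and serialize it back — can be illustrated with a deliberately simplified, product-independent sketch (this is not IBM ACE code or its API; the JSON payload and field names are invented for illustration):

import json

def input_node(bit_stream: bytes) -> dict:
    return json.loads(bit_stream)                    # parser: bit stream -> logical tree

def compute_node(tree: dict) -> dict:
    out = dict(tree)
    out["total"] = sum(item["price"] for item in tree["items"])   # an enrichment step
    return out

def output_node(tree: dict) -> bytes:
    return json.dumps(tree).encode()                 # serialize the tree back to a bit stream

message = b'{"order": 42, "items": [{"price": 9.5}, {"price": 0.5}]}'
print(output_node(compute_node(input_node(message))))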
IBM App Connect supports a wide variety of data formats, including standards-based formats (such as XML, DFDL, JSON and CSV), industry formats (such as HL7, EDI, SWIFT and ISO-based standards), and custom formats. A comprehensive range of operations can be performed on data, including routing, filtering, enrichment, multicast for publish-subscribe, sequencing, and aggregation. These flexible integration capabilities are able to support the customer's choice of solution architecture, including service-oriented, event-oriented, data-driven, and file-based (batch or real-time). IBM App Connect unifies the Business Process Management grid, providing the workhorse behind how to do something, taking directions from other BPM tooling which tells IBM App Connect what to do.
IBM App Connect includes a set of performance monitoring tools that visually portray current server throughput rates, showing various metrics such as elapsed and CPU time in ways that immediately draw attention to performance bottlenecks and spikes in demand. You can drill down into granular details, such as rates for individual connectors, and the tools enable you to correlate performance information with configuration changes so that you can quickly determine the performance impact of specific configuration changes, resource metrics can also be emitted to show what resources are being used by an integration service.
In version 7 and earlier, the primary way general text and binary messages were modeled and parsed was through a container called a message set and associated 'MRM' parser. From version 8 onwards such messages are modeled and parsed using a new open technology called DFDL from the Open Grid Forum. This is IBM's strategic technology for modeling and parsing general text and binary data. The MRM parser and message sets remain a fully supported part of the product; in order to use message sets, a developer must enable them as they are disabled by default to encourage the adoption of the DFDL technology for its ease of use and superior performance characteristics.
IBM App Connect supports policy-driven traffic shaping that enables greater visibility for system administrators and operational control over workload. Traffic shaping enables system administrators to meet the demands when the quantity of new endpoints (such as mobile and cloud applications) exponentially increases by adjusting available system resources to meet that new demand, delay or redirect the traffic to cope with load spikes. The traffic monitoring enables notifications to system administrators and other business stakeholders which increases business awareness and enables trend discovery.
Overview
IBM App Connect reduces the cost and complexity of IT systems by unifying the method a company uses to implement interfaces between disparate systems. The integration node runtime forms the Enterprise Service Bus of a service-oriented architecture, efficiently increasing the flexibility of connecting unlike systems into a unified, homogeneous architecture. Independent integration servers can also be deployed to containers, offering a micro-services method of integration and allowing App Connect integration services to be managed by container orchestrators such as OpenShift, Kubernetes and others. A key feature of IBM App Connect is the ability to abstract the business logic away from transport or protocol specifics.
IBM App Connect also provides deployment flexibility by supporting not only the ESB pattern but also container-native deployments, separating the Integration Servers — lightweight processes hosting the integration flows — from the ESB pattern. These Integration Servers and flows can be deployed across containers managed by orchestration services such as Red Hat OpenShift, Kubernetes, Docker Swarm and others. Furthermore, these Integration Servers are optimised for container deployments by loading only the resources that are needed to run an integration, offering fast start-up times with reduced resource utilisation.
The IBM ACE Toolkit enables developers to graphically design mediations, known as message flows, and related artifacts. Once developed, these resources can be packaged into a broker archive (BAR) file and deployed to an integration node runtime environment or a container. At this point, the integration node is able to continually process messages according to the logic described by the message flow. A wide variety of data formats are supported, and may be modeled using standard XML Schema and DFDL schema, JSON and others. After modeling, a developer can create transformations between various formats using nodes supplied in the Toolkit, either graphically using a Mapping node, or programmatically using a Compute node using Java, ESQL, or .Net.
IBM App Connect message flows can be used in a service-oriented architecture, and if properly designed by Middleware Analysts, integrated into event-driven SOA schemas, sometimes referred to as SOA 2.0, and/or deployed as micro-services in container-native deployments. Businesses rely on the processing of events, which might be part of a business process, such as issuing a trade order, purchasing an insurance policy, reading data using a sensor, or monitoring information gathered about IT infrastructure performance. IBM App Connect includes complex-event-processing capabilities that enable analysis of events to perform validation, enrichment, transformation and intelligent routing of messages based on a set of business rules.
A developer creates message flows in a cyclical workflow, probably more agile than most other software development. Developers will create a message flow, generate a BAR file, deploy the message flow contained in the BAR file, test the message flow and repeat as necessary to achieve reliable functionality.
Market position
Based on earnings reported for IBM's 1Q13, annualized revenue for IBM's middleware software unit increased to US$14 billion (up $7bn from 2011). License and maintenance revenue for IBM middleware products reached $7bn in 2011. In 2012, IBM expected an increase in both market share and total market increase of ten percent. The worldwide application infrastructure and middleware software market grew 9.9 percent in 2011 to $19.4bn, according to Gartner. Gartner reported that IBM continues to be number one in other growing and key areas including the Enterprise Service Bus Suites, Message Oriented Middleware Market, the Transaction Processing Monitor market and Integration Appliances.
Expected performance
IBM publishes performance reports for IBM Integration Bus V10 and App Connect Enterprise V11, App Connect V12 reports can be requested for both ESB and Container measurements. The reports provide sample throughput figures. Performance varies depending on message sizes, message volumes, processing complexity (such as complexity of message transformations), system capacities (CPU, memory, network, etc.), software version and patch levels, configuration settings, and other factors. Some published tests demonstrate message rates in excess of 10,000 per second in particular configurations.
Message flow nodes available
A developer can choose from many pre-designed message flow 'nodes', which are used to build up a message flow. Nodes have different purposes. Some nodes map data from one format to another (for instance, Cobol Copybook to canonical XML). Other nodes evaluate content of data and route the flow differently based on certain criteria
Message flow node types
There are many types of node that can be used in developing message flows; the following node transformation technology options are available:
Graphical Mapping content
eXtensible Stylesheet Language Transformations (XSLT)
Java
Smart Connectors, Discovery of objects; Salesforce and others
.NET
PHP
JSON with validation
HTTP Synch and Asynch
RESTful
API V3
Extended Structured Query Language (ESQL)
JMS
Database
MQ's Managed File Transfer
Connect:Direct (Managed File Transfer)
File/FTP
Kafka
MQTT
CICS
IMS
TCP/IP Sockets client and server.
Flow Routing and Ordering: Filter, Label, route to label, route, flow order, resequence, sequence, passthru
Callable flows - Secure calling of message flows across hybrid deployments
Error handling: TryCatch, Throw, Validate, Trace
Grouping: Aggregation, Collection, scatter, gather
Security
Sub flows
Timer
SAP
PeopleSoft
JD Edwards
SCA
IBM Transformation Extender (formerly known as Ascential DataStage TX, DataStage TX and Mercator Integration Broker). Available as a separate licensing option
Email
Decision Support node. This node allows the Program to invoke business rules that run on a component of IBM Decision Server that is provided with the Program. Use of this component is supported only via Decision Service nodes. The Program license provides entitlement for the Licensee to make use of Decision Service nodes for development and functional test uses. Refer to the IBM Integration Bus License Information text for details about the program-unique terms.
Localization
IBM Integration Bus on distributed systems has been localized to the following cultures:
Brazilian Portuguese
Chinese (Simplified)
Chinese (Traditional)
French
German
Italian
Japanese
Korean
Spanish
US English
Polish
Russian
Turkish
Patterns
A pattern captures a commonly recurring solution to a problem (e.g. Request-Reply pattern). The specification of a pattern describes the problem being addressed, why the problem is important, and any constraints on the solution. Patterns typically emerge from common usage and the application of a particular product or technology. A pattern can be used to generate customized solutions to a recurring problem in an efficient way. We can do this pattern recognition or development through a process called service-oriented modeling.
Version 7 introduced patterns that:
Provide guidance in implementing solutions
Increase development efficiency because resources are generated from a set of predefined templates
Improve quality through asset reuse and common implementation of functions such as error handling and logging
The patterns cover a range of categories including file processing, application integration, and message based integration.
Pattern examples
Fire-and-Forget (FaF)
Request-Reply (RR)
Aggregation (Ag)
Sequential (Seq)
Supported platforms
Operating systems
Currently available platforms for IBM Integration Bus are:
AIX
HP-UX (IA-64)
Solaris (SPARC and x86-64)
Linux (IA-32, x86-64, PPC and IBM Z)
Microsoft Windows
z/OS
See also
Comparison of business integration software
References
What's new
App Connect documentation
Message Broker
Middleware | IBM App Connect Enterprise | [
"Technology",
"Engineering"
] | 4,262 | [
"Software engineering",
"Middleware",
"IT infrastructure"
] |
7,423,668 | https://en.wikipedia.org/wiki/Message%20broker | A message broker (also known as an integration broker or interface engine) is an intermediary computer program module that translates a message from the formal messaging protocol of the sender to the formal messaging protocol of the receiver. Message brokers are elements in telecommunication or computer networks where software applications communicate by exchanging formally-defined messages. Message brokers are a building block of message-oriented middleware (MOM) but are typically not a replacement for traditional middleware like MOM and remote procedure call (RPC).
Overview
A message broker is an architectural pattern for message validation, transformation, and routing. It mediates communication among applications, minimizing the mutual awareness that applications should have of each other in order to be able to exchange messages, effectively implementing decoupling.
Purpose
The primary purpose of a broker is to take incoming messages from applications and perform some action on them. Message brokers can decouple end-points, meet specific non-functional requirements, and facilitate reuse of intermediary functions. For example, a message broker may be used to manage a workload queue or message queue for multiple receivers, providing reliable storage, guaranteed message delivery and perhaps transaction management.
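The decoupling described here can be illustrated with a minimal, hypothetical topic-based broker sketch in Python (the class and method names are illustrative assumptions, not the API of any product listed below): producers publish to a named topic on the broker, subscribers register callbacks for that topic, and neither side needs to know the other exists.

from collections import defaultdict

class Broker:
    """Toy topic-based message broker: a store-and-forward intermediary."""

    def __init__(self):
        self.queues = defaultdict(list)       # topic -> retained messages
        self.subscribers = defaultdict(list)  # topic -> delivery callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Keep the message, then deliver it to every current subscriber.
        self.queues[topic].append(message)
        for deliver in self.subscribers[topic]:
            deliver(message)

broker = Broker()
broker.subscribe("orders", lambda m: print("billing saw:", m))
broker.subscribe("orders", lambda m: print("shipping saw:", m))
broker.publish("orders", {"id": 1, "sku": "A-42"})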
Life cycle
The following represent other examples of actions that might be handled by the broker:
Route messages to one or more destinations
Transform messages to an alternative representation
Perform message aggregation, decomposing messages into multiple messages and sending them to their destination, then recomposing the responses into one message to return to the user
Interact with an external repository to augment a message or store it
Invoke web services to retrieve data
Respond to events or errors
Provide content and topic-based message routing using the publish–subscribe pattern
Message brokers are generally based on one of two fundamental architectures: hub-and-spoke and message bus. In the first, a central server acts as the mechanism that provides integration services, whereas with the latter, the message broker is a communication backbone or distributed service that acts on the bus. Additionally, a more scalable multi-hub approach can be used to integrate multiple brokers.
Real-time Semantics
Message brokers that are purpose-built to achieve time-bounded communications with end-to-end predictability allow for the development of real-time systems that require execution predictability. Frequently, systems with real-time requirements involve interaction with the real world (robotics, vehicle automation, software-defined radio, etc.).
The Object Management Group Real-time CORBA specification provides a theoretical foundation for predictable communications technologies by levying the following requirements:
For the purposes of this specification, "end-to-end predictability" of timeliness in a fixed priority CORBA system is defined to mean:
• respecting thread priorities between client and server for resolving resource contention during the processing of CORBA invocations;
• bounding the duration of thread priority inversions during end-to-end processing;
• bounding the latencies of operation invocations
List of message broker software
Amazon Web Services (AWS) Amazon MQ
Amazon Web Services (AWS) Kinesis
Apache
Apache ActiveMQ
Apache Artemis
Apache Camel
Apache Kafka
Apache Qpid
Apache Thrift
Apache Pulsar
Cloverleaf (Enovation Lifeline - NL)
Comverse Message Broker (Comverse Technology)
Coreflux Coreflux MQTT Broker
Eclipse Mosquitto MQTT Broker (Eclipse Foundation)
EMQX EMQX MQTT Broker
Enduro/X Transactional Message Queue (TMQ)
Financial Fusion Message Broker (Sybase)
Fuse Message Broker (enterprise ActiveMQ)
Gearman
Google Cloud Pub/Sub (Google)
HiveMQ HiveMQ MQTT Broker
HornetQ (Red Hat) (Now part of Apache Artemis)
IBM App Connect
IBM MQ
JBoss Messaging (JBoss)
JORAM
Microsoft Azure Service Bus (Microsoft)
Microsoft BizTalk Server (Microsoft)
MigratoryData (a publish/subscribe WebSockets message broker written to address the C10M problem)
NATS (MIT Open Source License, written in Go)
NanoMQ MQTT Broker for IoT Edge
Open Message Queue
Oracle Message Broker (Oracle Corporation)
ORBexpress (OIS)
ORBexpress written in Ada
ORBexpress written in C#
ORBexpress written in C++
ORBexpress written in Java
RabbitMQ (Mozilla Public License, written in Erlang)
Redpanda (implements the Apache Kafka API, written in C++)
Redis, an open-source, in-memory data structure store, used as a database, cache and message broker
SAP PI (SAP AG)
SMC SMC Platform
Solace PubSub+
Spread Toolkit
Tarantool, a NoSQL database, with a set of stored procedures for message queues
TIBCO Enterprise Message Service
WSO2 Message Broker
ZeroMQ
See also
Broker injection
Publish–subscribe pattern
MQTT
Comparison of business integration software
Message-oriented middleware
References
Message-oriented middleware
Middleware
Software design patterns | Message broker | [
"Technology",
"Engineering"
] | 1,023 | [
"Software engineering",
"Middleware",
"IT infrastructure"
] |
7,424,236 | https://en.wikipedia.org/wiki/Marantz%20PMD-660 | Manufactured by Marantz, the Marantz PMD-660 is a portable, solid-state, compact flash audio field recorder. It has 2 XLR (balanced) inputs, 2 line-in inputs, and 2 internal microphones, and can record in raw WAV or MP3 formats. It is powered by four (non-rechargeable) AA-sized batteries, which provide 3.5 to 4 hours of uninterrupted recording.
Uses
As a field recorder, the PMD-660 is designed to be used outside of a controlled studio environment. Uses include electronic news gathering (ENG), podcasting, and live music recording.
External links
Oade PMD660 Mods
Customer Reviews at amazon
Transom.org PMD660 review
Marantz products
Sound recording technology | Marantz PMD-660 | [
"Technology"
] | 163 | [
"Recording devices",
"Sound recording technology"
] |
7,424,505 | https://en.wikipedia.org/wiki/Iometer | Iometer is an I/O subsystem measurement and characterization tool for single and clustered systems. It is used as a benchmark and troubleshooting tool and is easily configured to replicate the behaviour of many popular applications. One commonly quoted measurement provided by the tool is IOPS.
History
Created by Intel Corporation (Sean Hefty, David Levine and Fab Tillier are listed by the Iometer About dialog as the developers), the tool was officially announced at the Intel Developer Forum (IDF) on 17 February 1998. In 2001 Intel discontinued development and subsequently handed the sources to the Open Source Development Lab for release under the Intel Open Source License. On 15 November 2001 the Iometer project was registered at SourceForge.net and an initial version was made available. Experiencing no further development, the project was relaunched by Daniel Scheibli in February 2003. Since then it has been driven by an international group of individuals who have been improving and porting the product to additional platforms.
Functionality
Iometer is based on a client–server model, where one instance of the Iometer graphical user interface manages one or more 'managers' (each one representing a separate Dynamo.exe process), which perform the I/O with one or more worker threads. Iometer performs asynchronous I/O, accessing files or block devices (the latter allowing the file system buffers to be bypassed).
Iometer allows the configuration of disk parameters such as the 'Maximum Disk Size', 'Starting Disk Sector' and '# of Outstanding I/Os'. These settings define a test file, and the 'Access Specifications' then configure the I/O patterns applied to that file.
Configurable items within the Access Specifications are:
Transfer Request Size
Percent Random/Sequential Distribution
Percent Read/Write Distribution
Aligned I/Os
Reply Size
TCP/IP Status
Burstiness
In conjunction with the Access Specifications, Iometer allows the specifications to be cycled with incrementing outstanding I/Os, either exponentially or linearly. The tool outputs 50 parameters into a .CSV file, allowing multiple applications to analyse and generate graphs and reports on the measured performance.
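As an illustration of how such a result file is typically consumed, the sketch below aggregates per-target throughput from an exported CSV. The column names used here ("Target", "IOps", "MiBps") are illustrative assumptions; the exact headers among the roughly 50 exported parameters depend on the Iometer version and test configuration.

import csv

def summarize(path):
    """Print IOPS and throughput per target from an Iometer-style CSV export."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            print(f'{row["Target"]}: {float(row["IOps"]):.0f} IOPS, '
                  f'{float(row["MiBps"]):.1f} MiB/s')

summarize("iometer_results.csv")  # hypothetical file name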
See also
DiskSpd
References
External links
Iometer Project homepage
Iometer Project on SourceForge.net
Input/output
Benchmarks (computing) | Iometer | [
"Technology"
] | 487 | [
"Benchmarks (computing)",
"Computing comparisons",
"Computer performance"
] |
7,425,607 | https://en.wikipedia.org/wiki/List%20of%20largest%20clock%20faces | A list of permanent working clocks with the largest faces in the world. Entries include all clocks with faces at least in diameter. Clocks can be located on the exterior or interior of buildings, and towers as well as on the ground as is the case with floral clock faces.
Temporarily installed clocks
A list of clocks with the largest faces that have been installed as temporary structures. Inclusion in this list follows the criteria for the list above except for the temporary nature of the clock.
Demolished clocks
A list of the largest faced clocks that have been destroyed or demolished since their construction. Inclusion in this list follows the criteria for the list above except for the fact that the clocks are no longer extant.
See also
List of clocks
List of tallest clock towers
References | List of largest clock faces | [
"Physics",
"Technology",
"Engineering"
] | 148 | [
"Physical systems",
"Machines",
"Clocks",
"Measuring instruments"
] |
7,425,653 | https://en.wikipedia.org/wiki/MARIACHI | MARIACHI, the Mixed Apparatus for Radar Investigation of Cosmic-rays of High Ionization, is an apparatus for the detection of ultra-high-energy cosmic rays (UHECR) via bi-static radar interferometry using VHF transmitters.
MARIACHI is also the name of the research project created and directed by Brookhaven National Laboratory (BNL) on Long Island, New York, initially intended to verify the concept that VHF signals can be reflected off the ionization patch produced by a cosmic ray shower. Project emphasis subsequently shifted to the attempted detection of radio wave reflections from a high energy ionization beam apparatus located at BNL's NASA Space Radiation Laboratory.
Its inventors hope the MARIACHI apparatus will detect UHECR over much larger areas than previously possible, and that it will also detect ultra-high-energy neutrino flux. The ground array detectors are scintillator arrays that are built and operated by high school students and teachers.
The MARIACHI project, being in essence a public outreach project for high school and undergraduate students more than a full-scale science experiment, has continued in a sporadic fashion since its conception in the late 2000s. For example, a high school in New York continued MARIACHI measurements over an 8-year period between 2008 and 2016; the results of these measurements were published in 2016. Measurements have also been performed by other institutions (high schools, community colleges, and others).
The main researcher behind MARIACHI is Helio Takai (Brookhaven National Laboratory, Stony Brook University, as of 2019 Pratt Institute).
References
Further reading
External links
Implementation of ground-based scintillation detectors as a tool for studying cosmic ray activity- Matthew Lucia, University of Notre Dame; Matthew Captaine, St. Norbert College; Dima Vavilov, Michael Marx, Department of Physics and Astronomy, Stony Brook University
Cosmic-ray experiments | MARIACHI | [
"Physics",
"Astronomy"
] | 380 | [
"Astrophysics stubs",
"Astronomy stubs",
"Astrophysics"
] |
7,426,352 | https://en.wikipedia.org/wiki/Planetary%20Data%20System | The Planetary Data System (PDS) is a distributed data system that NASA uses to archive data collected by Solar System missions.
The PDS is an active archive that makes available well documented, peer reviewed planetary data to the research community. The data comes from orbital, landed and robotic missions and ground-based support data associated with those missions. It is managed by NASA Headquarters' Planetary Sciences Division.
PDS archiving philosophy
The main objective of the PDS is to maintain a planetary data archive that will withstand the test of time such that future generations of scientists can access, understand and use preexisting planetary data. The PDS tries to ensure compatibility of the archive by adhering to strict standards of storage media, archiving formats, and required documentation.
Storage media
One critical component of the PDS archive is the storage media. The data must be stored effectively and efficiently with no degradation of the data over the archive's lifespan. Therefore, the physical media must have large capacity and must remain readable over many years. PDS is migrating toward electronic storage as its "standard" media.
Archiving formats
The format of the data is also important. In general, transparent, non-proprietary formats are best. When a proprietary format is submitted to the archive (such as a Microsoft Word document) an accompanying plain text file is also required. It is assumed that the scientists of the future will at least be able to make sense of regular ASCII bytes even if the proprietary software and support ceases to exist. PDS allows figures and illustrations to be included in the archive as individual images. PDS adheres to many other standards including, but not limited to, special directory and file naming conventions and label requirements. Each file in the PDS archive is accompanied by a searchable label (attached or detached) that describes the file content.
Archiving documents
The archive must be complete and be able to stand alone. There is no guarantee that the people who originally worked with and submitted the data to the archive will be available in the future to field questions regarding the data, its calibration or the mission. Therefore, the archive must include good descriptive documentation of how the spacecraft and its instruments worked, how the data were collected and calibrated, and what the data mean. The quality of the documentation is examined during a mission independent PDS peer review.
Nodes
The PDS is composed of 8 nodes, 6 science discipline nodes and 2 support nodes. In addition, there are several subnodes and data nodes whose exact status tends to change over time.
Science discipline nodes
Atmospheres Node – handles non-imaging atmospheric data (New Mexico State University)
Geosciences Node – handles data of the surfaces and interiors of terrestrial planetary bodies (Washington University in St. Louis)
Cartography and Imaging Science Node – archives many of the larger planetary image data collections (Astrogeology Research Program of the United States Geological Survey, and Jet Propulsion Laboratory)
Planetary Plasma Interaction (PPI) Node – handles data consisting of the interaction between the solar wind and planetary winds with planetary magnetospheres, ionospheres and surfaces (University of California, Los Angeles)
Ring-Moon Systems Node – handles archiving, cataloging, and distributing planetary data of ring systems, moons, and planets (SETI Institute)
Small Bodies Node (SBN) – handles asteroid, comet and planetary dust data (University of Maryland, College Park)
Comet Subnode (University of Maryland, College Park)
Asteroid/Interplanetary Dust Subnode (Planetary Science Institute)
Support nodes
Engineering Node – provides systems engineering support to the PDS (Jet Propulsion Laboratory)
Navigation and Ancillary Information Facility (NAIF) Node – maintains the SPICE information system (Jet Propulsion Laboratory)
Organizational structure
The PDS is divided into a number of science discipline "nodes" which are individually curated by planetary scientists.
The PDS Management Council serves as the technical policy board of the PDS, and provides findings for NASA with respect to planetary science data management, ensures coordination among the nodes, guarantees responsiveness to customer needs, and monitors the appropriate uses of evolving information technologies that may make PDS tasks both more efficient and more cost effective. It is formed by the principal investigators of the science discipline nodes, along with the leaders of the Technical Support Nodes, the Project Manager, and Deputy Project Manager.
The Solar System Exploration Data Services Office at the Goddard Space Flight Center handles PDS Project Management.
Roadmap 2017–2026
NASA and the PDS recently engaged in development of a Roadmap for the period 2017 to 2026. The purpose of the roadmap effort was to outline a strategy for moving forward in planetary data archiving under the auspices of a rapidly growing data volume (nearly 1 petabyte at present), new computing capabilities, tools, and facilities, and a growing community of planetary science investigators.
See also
ESA Planetary Science Archive
International Planetary Data Alliance (IPDA)
NASA Astrophysics Data System (ADS)
NASA Spacecraft Planet Instrument C-matrix Events (SPICE)
NASA/IPAC Extragalactic Database (NED)
Parameter Value Language (markup language)
SIMBAD
References
External links
Official NASA PDS site
Atmospheres Node
Cartography and Imaging Sciences Node
Geosciences Node
Planetary Plasma Interactions Node
Ring-Moon Systems Node
Small Bodies Node
Navigation and Ancillary Information Facility Node
PDS Project Management Office
Goddard Space Flight Center
Astronomical databases | Planetary Data System | [
"Astronomy"
] | 1,091 | [
"Astronomical databases",
"Works about astronomy"
] |
7,426,937 | https://en.wikipedia.org/wiki/Ahoy%21 | Ahoy! was a computer magazine published between January 1984 and January 1989 in the US, covering all Commodore color computers, primarily the Commodore 64 and Amiga.
History
The first issue of Ahoy! was published in January 1984. The magazine was published monthly by Ion International and was headquartered in New York City. It published many games in BASIC and machine language, occasionally also printing assembly language source code. Ahoy! published a checksum program called Flankspeed for entering machine language listings.
Ahoy!'s AmigaUser was a related but separate publication dedicated to the Amiga. It was spun off from a series of columns in Ahoy! with the same title, and the first two issues were published instead of the parent magazine in May and August 1988.
References
External links
Gallery of covers and downloadable archive of disks
Monthly magazines published in the United States
Commodore 8-bit computer magazines
Defunct computer magazines published in the United States
Magazines established in 1984
Magazines disestablished in 1989
Defunct magazines published in New York City
1984 establishments in New York City
1989 disestablishments in New York (state) | Ahoy! | [
"Technology"
] | 220 | [
"Computing stubs",
"Computer magazine stubs"
] |
7,427,473 | https://en.wikipedia.org/wiki/List%20of%20Microsoft%20Office%20filename%20extensions | The following is a list of filename extensions used by programs in the Microsoft Office suite.
Word
Legacy
Legacy filename extensions denote binary Microsoft Word formats that became outdated with the release of Microsoft Office 2007. Although the latest version of Microsoft Word can still open them, they are no longer developed. Legacy filename extensions include:
.doc – Legacy Word document; Microsoft Office refers to them as "Microsoft Word 97–2003 Document"
.dot – Legacy Word templates; officially designated "Microsoft Word 97–2003 Template"
.wbk – Legacy Word document backup; referred as "Microsoft Word Backup Document"
OOXML
The Office Open XML (OOXML) format was introduced with Microsoft Office 2007 and has been the default format of Microsoft Word ever since. Pertaining file extensions include:
.docx – Word document
.docm – Word macro-enabled document; same as docx, but may contain macros and scripts
.dotx – Word template
.dotm – Word macro-enabled template; same as dotx, but may contain macros and scripts
Other formats
.pdf – PDF documents
.wll – Word add-in
.wwl – Word add-in
Excel
Legacy
Legacy filename extensions denote binary Microsoft Excel formats that became outdated with the release of Microsoft Office 2007. Although the latest version of Microsoft Excel can still open them, they are no longer developed. Legacy filename extensions include:
.xls – Legacy Excel worksheets; officially designated "Microsoft Excel 97–2003 Worksheet" or "Microsoft Excel 5.0/95 Workbook"
.xlt – Legacy Excel templates; officially designated "Microsoft Excel 97–2003 Template"
.xlm – Legacy Excel macro
OOXML
The Office Open XML (OOXML) format was introduced with Microsoft Office 2007 and has been the default format of Microsoft Excel ever since. Excel-related file extensions of this format include:
.xlsx – Excel workbook
.xlsm – Excel macro-enabled workbook; same as xlsx but may contain macros and scripts
.xltx – Excel template
.xltm – Excel macro-enabled template; same as xltx but may contain macros and scripts
Other formats
Microsoft Excel uses dedicated file formats that are not part of OOXML and use the following extensions:
.xlsb – Excel binary worksheet (BIFF12)
.xla – Excel add-in that can contain macros
.xlam – Excel macro-enabled add-in
.xll – Excel XLL add-in; a form of DLL-based add-in
.xlw – Excel work space; previously known as "workbook"
.xll_ – Excel 4 for Mac add-in
.xla_ - Excel 4 for Mac add-in
.xla5 – Excel 5 for Mac add-in
.xla8 – Excel 98 for Mac add-in
PowerPoint
Legacy
.ppt – Legacy PowerPoint presentation
.pot – Legacy PowerPoint template
.pps – Legacy PowerPoint slideshow
.ppa – Legacy PowerPoint add-in
OOXML
.pptx – PowerPoint presentation
.pptm – PowerPoint macro-enabled presentation
.potx – PowerPoint template
.potm – PowerPoint macro-enabled template
.ppam – PowerPoint add-in
.ppsx – PowerPoint slideshow
.ppsm – PowerPoint macro-enabled slideshow
.sldx – PowerPoint slide
.sldm – PowerPoint macro-enabled slide
.ppam – PowerPoint add-in
Access
Microsoft Access 2007 introduced new file extensions:
.accda – Access add-in file
.accdb – Access Database
.accde – The file extension for Office Access 2007 files that are in "execute only" mode. ACCDE files have all Visual Basic for Applications (VBA) source code hidden. A user of an ACCDE file can only execute VBA code, but not view or modify it. ACCDE takes the place of the MDE file extension.
.accdr – a file extension that enables a database to be opened in runtime mode. Simply changing a database's file extension from .accdb to .accdr creates a "locked-down" version of the Office Access database; changing the extension back to .accdb restores full functionality.
.accdt – The file extension for Access Database Templates.
.accdu – Access add-in file
Other
OneNote
.one – OneNote export file
Outlook
.ecf – Outlook 2013+ add-in file
Publisher
.pub – a Microsoft Publisher publication
See also
Microsoft Office
Microsoft Office XML formats
Filename extension
Alphabetical list of file extensions
Office Open XML
External links
Introducing the Microsoft Office (2007) Open XML File Formats
Introduction to new file-name extensions
References
Microsoft Office filename extensions
File extensions
Office Open XML | List of Microsoft Office filename extensions | [
"Technology"
] | 986 | [
"Computing-related lists",
"Lists of file formats"
] |
7,427,811 | https://en.wikipedia.org/wiki/Perceptual%20Audio%20Coder | Perceptual Audio Coder (PAC) is a lossy audio compression algorithm. It is used by Sirius Satellite Radio for their digital audio radio service.
Development
The original version of PAC developed by James Johnston and Anibal Ferreira at AT&T's Bell Labs has a flexible format and bitrate. It provides efficient compression of high-quality audio over a variety of formats from 16 kbit/s for a monophonic channel to 1024 kbit/s for a 5.1 format with four or six auxiliary audio channels, and provisions for an ancillary (fixed rate) and auxiliary (variable rate) side data channel. For stereo audio signals, it is claimed that it provides near-CD quality at about 56-64 kbit/s, with transparent coding at bit rates approaching 128 kbit/s.
Over the years PAC has evolved considerably. A known software implementation of this codec is CelestialTech's AudioLib. Later, it was considerably improved and renamed to ePAC (enhanced Perceptual Audio Coder) by Lucent, available in the AudioVeda music library manager.
iBiquity initially tested PAC for the HD-Radio IBOC digital radio upgrade for FM and AM, but chose an MPEG4-derived codec, HE-AAC, instead. MPEG-2 AAC is substantially similar to the original AT&T PAC algorithm written by Johnston and Ferreira, including the specifics of stereo pair coding, bitstream sectioning, handling of 1 or 2 channels at a time, multiple codebooks responding to the same largest absolute value, and block switching triggers. The version of PAC tested for the MPEG-NBC (later to become AAC) trials used 1024/128 sample block lengths, rather than 512/128 sample block lengths.
See also
MP3
References
Audio codecs | Perceptual Audio Coder | [
"Technology"
] | 374 | [
"Computing stubs"
] |
7,428,060 | https://en.wikipedia.org/wiki/Simdesk | Simdesk, fully known as Simdesk Technologies, Inc., formerly Internet Access Technologies, was a Houston-based software as a service provider of on demand messaging and collaboration tools for business. It was founded by Ray C. Davis in 1999. Early in the company's history, it was sold to municipal authorities. The company began to commercially offer Simdesk direct to small businesses in March 2006. There were several Simdesk resellers, including KDDI in Japan.
Discontinuation
On May 1, 2008, Simdesk ceased operations, terminating retail hosted services for SMB and individual customers in the United States and Latin America. It was announced that externally hosted services based on Simdesk's platform license would remain in place. (As of 2009, the URLs have gone dark.) Its future direction has not been announced.
References
Further reading
External links
Simdesk homepage
Cache of the Simdesk homepage
KDDI Secure Share
Simdesk at Startup Houston
Simdesk: No comment
BlogHouston
Companies established in 1999
Defunct software companies of the United States
Companies disestablished in 2008
Companies based in Houston | Simdesk | [
"Technology"
] | 238 | [
"Computing stubs",
"Computer company stubs"
] |
7,428,170 | https://en.wikipedia.org/wiki/Origin%20recognition%20complex | In molecular biology, origin recognition complex (ORC) is a multi-subunit DNA binding complex (6 subunits) that binds in all eukaryotes and archaea in an ATP-dependent manner to origins of replication. The subunits of this complex are encoded by the ORC1, ORC2, ORC3, ORC4, ORC5 and ORC6 genes. ORC is a central component for eukaryotic DNA replication, and remains bound to chromatin at replication origins throughout the cell cycle.
ORC directs DNA replication throughout the genome and is required for its initiation. ORC and Noc3p bound at replication origins serve as the foundation for assembly of the pre-replication complex (pre-RC), which includes Cdc6, Tah11 (a.k.a. Cdt1), and the Mcm2-Mcm7 complex. Pre-RC assembly during G1 is required for replication licensing of chromosomes prior to DNA synthesis during S phase. Cell cycle-regulated phosphorylation of Orc2, Orc6, Cdc6, and MCM by the cyclin-dependent protein kinase Cdc28 regulates initiation of DNA replication, including blocking reinitiation in G2/M phase.
The ORC is present throughout the cell cycle bound to replication origins, but is only active in late mitosis and early G1.
In yeast, ORC also plays a role in the establishment of silencing at the mating-type loci Hidden MAT Left (HML) and Hidden MAT Right (HMR). ORC participates in the assembly of transcriptionally silent chromatin at HML and HMR by recruiting the Sir1 silencing protein to the HML and HMR silencers.
Both Orc1 and Orc5 bind ATP, though only Orc1 has ATPase activity. The binding of ATP by Orc1 is required for ORC binding to DNA and is essential for cell viability. The ATPase activity of Orc1 is involved in formation of the pre-RC. ATP binding by Orc5 is crucial for the stability of ORC as a whole. Only the Orc1-5 subunits are required for origin binding; Orc6 is essential for maintenance of pre-RCs once formed. Interactions within ORC suggest that Orc2-3-6 may form a core complex. A 2020 report suggests that budding yeast ORC dimerizes in a cell cycle dependent manner to control licensing.
Proteins
The following proteins are present in the ORC:
Archaea feature a simplified version of the ORC, Mcm, and, as a consequence, of the combined pre-RC. Instead of using six different Mcm proteins to form a pseudo-symmetrical heterohexamer, all six subunits in the archaeal MCM are identical. They usually have multiple proteins that are homologous to both Cdc6 and Orc1, some of which perform the function of both. Unlike eukaryotic ORC, these do not always form a complex; in fact, they have divergent complex structures when complexes do form. Sulfolobus islandicus also uses a Cdt1 homologue to recognize one of its replication origins.
Autonomously replicating sequences
Budding yeast
Autonomously Replicating Sequences (ARS), first discovered in budding yeast, are integral to the success of the ORC. These 100-200bp sequences facilitate replication activity during S phase. ARSs can be placed at any novel location of the chromosomes of budding yeast and will facilitate replication from those sites. A highly conserved sequence of 11bp (known as the A element) is thought to be essential for origin function in budding yeast. The ORC was originally identified by its ability to bind to the A element of the ARS in budding yeast.
Animals
Animal cells contain a much more cryptic version of an ARS, with no conserved sequences found as of yet. Here, replication origins gather into bundles called replicon clusters. Each cluster's replicons are similar in length, but individual clusters have replicons of varying length. These replicons all have similar basic residues to which the ORC binds, which in many ways mimic the conserved 11bp A element. All of these clusters are simultaneously activated during S phase.
Role in pre-RC assembly
The ORC is essential for the loading of MCM complexes (pre-RC) onto DNA. This process is dependent on the ORC, Noc3, Cdc6, and Cdt1, and involves several ATP-controlled recruiting events. First, the ORC, Noc3p and Cdc6 form a complex on origin DNA (marked by ARS-type regions). New ORC/Noc3/Cdc6 complexes then recruit Cdt1/Mcm2-7 molecules to the site. Once this large ORC/Noc3/Cdc6/Cdt1/Mcm2-7 complex is formed, the ORC/Noc3/Cdc6/Cdt1 molecules work together to load Mcm2-7 onto the DNA itself through hydrolysis of ATP by Cdc6. Cdc6's ATPase activity is dependent on both the ORC and origin DNA. This destabilizes Cdt1 on the DNA, which falls off the complex, leading to the loading of Mcm2-7 onto the DNA. The structures of the ORC and MCM, as well as of the intermediate OCCM complex, have been resolved.
Origin binding activity
Although the ORC is composed of six discrete subunits, only one of these has been found to be significant - ORC1. In vivo studies have shown that Lys-263 and Arg-367 are the basic residues responsible for faithful ORC loading. These molecules represent the above-mentioned ARS. ORC1 interacts with ATP and these basic residues in order to bind the ORC to origin DNA. It has been established that this occurs far before replication, and that the ORC itself is already bound to Origin DNA by the time any Mcm2-7 loading occurs. When Mcm2-7 is first loaded it completely encircles the DNA and helicase activity is inhibited. In S phase, the Mcm2-7 complex interacts with helicase cofactors Cdc45 and GINS to isolate a single DNA strand, unwind the origin, and begin replication down the chromosome. In order to have bidirectional replication, this process happens twice at an origin. Both loading events are mediated by one ORC via an identical process as the first.
See also
Cyclin dependant kinases (CDK)
Cyclins
DNA helicase
DnaA
Pre-replication complex
References
Further reading
Protein complexes
DNA-binding proteins
Protein families | Origin recognition complex | [
"Biology"
] | 1,401 | [
"Protein families",
"Protein classification"
] |
7,428,585 | https://en.wikipedia.org/wiki/Bag%20%28puzzle%29 | Bag (also called Corral or Cave) is a binary-determination logic puzzle published by Nikoli.
Rules
Bag is played on a rectangular grid, usually of dashed lines, in which numbers appear in some of the cells.
The object is to draw a single, continuous loop along the lines of the grid that contains all the numbers on the grid. Each number indicates the total number of cells visible in any orthogonal direction before a line of the loop is reached, plus the cell itself. For example, a 2-cell sees exactly one other cell in some orthogonal direction before a wall of the loop is reached.
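The counting rule can be expressed as a small check. The sketch below assumes the loop is represented simply by marking each cell as inside or outside it; this encoding is an illustrative assumption rather than a standard representation.

def visible_cells(inside, r, c):
    """Clue value for cell (r, c): the cell itself plus the cells seen in each
    orthogonal direction before leaving the region enclosed by the loop.
    `inside` is a 2-D list of booleans (True = the cell is inside the loop)."""
    rows, cols = len(inside), len(inside[0])
    total = 1  # the numbered cell itself
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        rr, cc = r + dr, c + dc
        while 0 <= rr < rows and 0 <= cc < cols and inside[rr][cc]:
            total += 1
            rr += dr
            cc += dc
    return total

# A 3x3 region entirely inside the loop: the centre cell sees itself plus its
# four orthogonal neighbours, so a "5" clue would be satisfied there.
region = [[True] * 3 for _ in range(3)]
assert visible_cells(region, 1, 1) == 5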
Solution methods
The easiest starting place is to find a "maximum cell"; that is, a numbered cell whose number can only be satisfied if the walls are at the maximum distance possible. For example, in a 10x10 grid which has not started to be solved, a 19-cell is a maximum cell, since if the four walls are not at the edges of the grid, the number of cells visible would not be enough. After making some progress, "minimum cells" appear, whose numbers can only be satisfied if the walls are at the minimum distance possible.
Many of the solution methods for Bag are very similar to those used for Kuromasu, as the rules are also very similar. The most notable difference is the use of the loop as a part of the solution, as opposed to shaded cells.
Computational complexity
Decision question (Friedman, 2002): Does a given instance of Corral Puzzle have a solution?
This decision question is NP-complete. This is proven by reducing the decision problem of deciding the 3-colorability of a planar graph, which is known to be NP-complete, to a Corral Puzzle.
See also
List of Nikoli puzzle types
:Category:Logic puzzles
References
External links
Nikoli's Japanese page on Bag
Better Know a Corral - describes some helpful solution strategies.
Informative page on Corral, mostly in German.
Logic puzzles
Japanese games
NP-complete problems | Bag (puzzle) | [
"Mathematics"
] | 412 | [
"NP-complete problems",
"Mathematical problems",
"Computational problems"
] |
7,428,842 | https://en.wikipedia.org/wiki/Content%20format | A content format is an encoded format for converting a specific type of data to displayable information. Content formats are used in recording and transmission to prepare data for observation or interpretation. This includes both analog and digitized content. Content formats may be recorded and read by either natural or manufactured tools and mechanisms.
In addition to converting data to information, a content format may include the encryption and/or scrambling of that information. Multiple content formats may be contained within a single section of a storage medium (e.g. track, disk sector, computer file, document, page, column) or transmitted via a single channel (e.g. wire, carrier wave) of a transmission medium. With multimedia, multiple tracks containing multiple content formats are presented simultaneously. Content formats may either be recorded in secondary signal processing methods such as a software container format (e.g. digital audio, digital video) or recorded in the primary format (e.g. spectrogram, pictogram).
Observable data is often known as raw data, or raw content. A primary raw content format may be directly observable (e.g. image, sound, motion, smell, sensation) or physical data which only requires hardware to display it, such as a phonographic needle and diaphragm or a projector lamp and magnifying glass.
There have been countless content formats throughout history. The following are examples of some common content formats and content format categories (covering: sensory experience, model, and language used for encoding information):
See also
Communication
Representation (arts)
Content carrier signals
Content multiplexing format
Signal transmission
Wireless content transmission
Data storage device
Recording format
Data compression
Analog television: NTSC, PAL and SECAM
References
Computer-mediated communication
Mass media technology
Data management
Recording
Film and video technology
Sound production technology | Content format | [
"Technology"
] | 370 | [
"Information and communications technology",
"Mass media technology",
"Data management",
"Information systems",
"Data",
"Computing and society",
"Computer-mediated communication"
] |
7,428,961 | https://en.wikipedia.org/wiki/Hadamard%20code | The Hadamard code is an error-correcting code named after the French mathematician Jacques Hadamard that is used for error detection and correction when transmitting messages over very noisy or unreliable channels. In 1971, the code was used to transmit photos of Mars back to Earth from the NASA space probe Mariner 9. Because of its unique mathematical properties, the Hadamard code is not only used by engineers, but also intensely studied in coding theory, mathematics, and theoretical computer science.
The Hadamard code is also known under the names Walsh code, Walsh family, and Walsh–Hadamard code in recognition of the American mathematician Joseph Leonard Walsh.
The Hadamard code is an example of a linear code of length $2^k$ over a binary alphabet.
Unfortunately, this term is somewhat ambiguous as some references assume a message length of $k$ while others assume a message length of $k+1$.
In this article, the first case is called the Hadamard code while the second is called the augmented Hadamard code.
The Hadamard code is unique in that each non-zero codeword has a Hamming weight of exactly $2^{k-1}$, which implies that the distance of the code is also $2^{k-1}$.
In standard coding theory notation for block codes, the Hadamard code is a $[2^k, k, 2^{k-1}]_2$-code, that is, it is a linear code over a binary alphabet, has block length $2^k$, message length (or dimension) $k$, and minimum distance $2^{k-1}$.
The block length is very large compared to the message length, but on the other hand, errors can be corrected even in extremely noisy conditions.
The augmented Hadamard code is a slightly improved version of the Hadamard code; it is a $[2^k, k+1, 2^{k-1}]_2$-code and thus has a slightly better rate while maintaining the relative distance of $1/2$, and is thus preferred in practical applications.
In communication theory, this is simply called the Hadamard code and it is the same as the first order Reed–Muller code over the binary alphabet.
Normally, Hadamard codes are based on Sylvester's construction of Hadamard matrices, but the term “Hadamard code” is also used to refer to codes constructed from arbitrary Hadamard matrices, which are not necessarily of Sylvester type.
In general, such a code is not linear.
Such codes were first constructed by Raj Chandra Bose and Sharadchandra Shankar Shrikhande in 1959.
If n is the size of the Hadamard matrix, the code has parameters $(n, 2n, n/2)$, meaning it is a not-necessarily-linear binary code with 2n codewords of block length n and minimal distance n/2. The construction and decoding scheme described below apply for general n, but the property of linearity and the identification with Reed–Muller codes require that n be a power of 2 and that the Hadamard matrix be equivalent to the matrix constructed by Sylvester's method.
The Hadamard code is a locally decodable code, which provides a way to recover parts of the original message with high probability, while only looking at a small fraction of the received word. This gives rise to applications in computational complexity theory and particularly in the design of probabilistically checkable proofs.
Since the relative distance of the Hadamard code is 1/2, normally one can only hope to recover from at most a 1/4 fraction of error. Using list decoding, however, it is possible to compute a short list of possible candidate messages as long as fewer than a $\tfrac{1}{2}-\epsilon$ fraction of the bits in the received word have been corrupted.
In code-division multiple access (CDMA) communication, the Hadamard code is referred to as Walsh Code, and is used to define individual communication channels. It is usual in the CDMA literature to refer to codewords as “codes”. Each user will use a different codeword, or “code”, to modulate their signal. Because Walsh codewords are mathematically orthogonal, a Walsh-encoded signal appears as random noise to a CDMA capable mobile terminal, unless that terminal uses the same codeword as the one used to encode the incoming signal.
History
Hadamard code is the name that is most commonly used for this code in the literature. However, in modern use these error correcting codes are referred to as Walsh–Hadamard codes.
There is a reason for this:
Jacques Hadamard did not invent the code himself, but he defined Hadamard matrices around 1893, long before the first error-correcting code, the Hamming code, was developed in the 1940s.
The Hadamard code is based on Hadamard matrices, and while there are many different Hadamard matrices that could be used here, normally only Sylvester's construction of Hadamard matrices is used to obtain the codewords of the Hadamard code.
James Joseph Sylvester developed his construction of Hadamard matrices in 1867, which actually predates Hadamard's work on Hadamard matrices. Hence the name Hadamard code is disputed and sometimes the code is called Walsh code, honoring the American mathematician Joseph Leonard Walsh.
An augmented Hadamard code was used during the 1971 Mariner 9 mission to correct for picture transmission errors. The binary values used during this mission were 6 bits long, which represented 64 grayscale values.
Because of limitations of the quality of the alignment of the transmitter at the time (due to Doppler Tracking Loop issues) the maximum useful data length was about 30 bits. Instead of using a repetition code, a [32, 6, 16] Hadamard code was used.
Errors of up to 7 bits per 32-bit word could be corrected using this scheme. Compared to a 5-repetition code, the error correcting properties of this Hadamard code are much better, yet its rate is comparable. The efficient decoding algorithm was an important factor in the decision to use this code.
The circuitry used was called the "Green Machine". It employed the fast Fourier transform which can increase the decoding speed by a factor of three. Since the 1990s use of this code by space programs has more or less ceased, and the NASA Deep Space Network does not support this error correction scheme for its dishes that are greater than 26 m.
Constructions
While all Hadamard codes are based on Hadamard matrices, the constructions differ in subtle ways for different scientific fields, authors, and uses. Engineers, who use the codes for data transmission, and coding theorists, who analyse extremal properties of codes, typically want the rate of the code to be as high as possible, even if this means that the construction becomes mathematically slightly less elegant.
On the other hand, for many applications of Hadamard codes in theoretical computer science it is not so important to achieve the optimal rate, and hence simpler constructions of Hadamard codes are preferred since they can be analyzed more elegantly.
Construction using inner products
When given a binary message $x \in \{0,1\}^k$ of length $k$, the Hadamard code encodes the message into a codeword $\text{Had}(x)$ using an encoding function $\text{Had}\colon \{0,1\}^k \to \{0,1\}^{2^k}$.
This function makes use of the inner product $\langle x, y \rangle$ of two vectors $x, y \in \{0,1\}^k$, which is defined as follows: $\langle x, y \rangle = \sum_{i=1}^{k} x_i y_i \ \bmod\ 2$.
Then the Hadamard encoding of $x$ is defined as the sequence of all inner products with $x$: $\text{Had}(x) = \big(\langle x, y \rangle\big)_{y \in \{0,1\}^k}$.
As mentioned above, the augmented Hadamard code is used in practice since the Hadamard code itself is somewhat wasteful.
This is because, if the first bit of $y$ is zero, $y_1 = 0$, then the inner product contains no information whatsoever about $x_1$, and hence, it is impossible to fully decode $x$ from those positions of the codeword alone.
On the other hand, when the codeword is restricted to the positions where $y_1 = 1$, it is still possible to fully decode $x$.
Hence it makes sense to restrict the Hadamard code to these positions, which gives rise to the augmented Hadamard encoding of $x$; that is, $\text{pHad}(x) = \big(\langle x, y \rangle\big)_{y \in \{1\} \times \{0,1\}^{k-1}}$.
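A minimal Python sketch of this encoding, and of its restriction to the positions whose index starts with a one, follows; it is a direct transcription of the definitions above rather than an efficient implementation.

from itertools import product

def hadamard_encode(x):
    """Encode a k-bit message x (a tuple of 0/1) into the 2^k-bit Hadamard
    codeword, with positions indexed by all y in {0,1}^k in lexicographic order."""
    return [sum(xi * yi for xi, yi in zip(x, y)) % 2
            for y in product((0, 1), repeat=len(x))]

def augmented_encode(x):
    # Keep only the positions whose index y has first bit 1.
    return [sum(xi * yi for xi, yi in zip(x, y)) % 2
            for y in product((0, 1), repeat=len(x)) if y[0] == 1]

c = hadamard_encode((1, 0, 1))
print(len(c), sum(c))   # 8 positions, exactly 4 ones (weight 2^(k-1))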
Construction using a generator matrix
The Hadamard code is a linear code, and all linear codes can be generated by a generator matrix $G$. This is a matrix such that $\text{Had}(x) = x \cdot G$ holds for all $x \in \{0,1\}^k$, where the message $x$ is viewed as a row vector and the vector–matrix product is understood in the vector space over the finite field $\mathbb{F}_2$. In particular, an equivalent way to write the inner product definition for the Hadamard code arises by using the generator matrix whose columns consist of all strings $y$ of length $k$, that is, $G = \begin{pmatrix} y_1 & y_2 & \cdots & y_{2^k} \end{pmatrix}$,
where $y_i \in \{0,1\}^k$ is the $i$-th binary vector in lexicographical order.
For example, the generator matrix for the Hadamard code of dimension $k = 3$ is: $G = \begin{pmatrix} 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 \end{pmatrix}$.
The matrix $G$ is a $k \times 2^k$ matrix and gives rise to the linear operator $\text{Had}\colon \{0,1\}^k \to \{0,1\}^{2^k}$.
The generator matrix of the augmented Hadamard code is obtained by restricting the matrix to the columns whose first entry is one.
For example, the generator matrix for the augmented Hadamard code of dimension $k = 3$ is: $G' = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 \end{pmatrix}$.
Then $\text{pHad}\colon \{0,1\}^k \to \{0,1\}^{2^{k-1}}$ is a linear mapping with $\text{pHad}(x) = x \cdot G'$.
For general $k$, the generator matrix of the augmented Hadamard code is a parity-check matrix for the extended Hamming code of length $2^{k-1}$ and dimension $2^{k-1} - k$, which makes the augmented Hadamard code the dual code of the extended Hamming code.
Hence an alternative way to define the Hadamard code is in terms of its parity-check matrix: the parity-check matrix of the Hadamard code is equal to the generator matrix of the Hamming code.
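A minimal sketch of the generator-matrix view, constructing the matrix whose columns are all length-k binary strings and checking that encoding is a vector–matrix product over GF(2):

import numpy as np
from itertools import product

def generator_matrix(k):
    """k x 2^k generator matrix whose columns are all binary strings of
    length k in lexicographic order."""
    return np.array(list(product((0, 1), repeat=k))).T

G = generator_matrix(3)          # 3 x 8 matrix
x = np.array([1, 0, 1])
codeword = x @ G % 2             # the same list of inner products <x, y>
print(codeword)                  # [0 1 0 1 1 0 1 0]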
Construction using general Hadamard matrices
Hadamard codes are obtained from an n-by-n Hadamard matrix H. In particular, the 2n codewords of the code are the rows of H and the rows of −H. To obtain a code over the alphabet {0,1}, the mapping −1 ↦ 1, 1 ↦ 0, or, equivalently, x ↦ (1 − x)/2, is applied to the matrix elements. That the minimum distance of the code is n/2 follows from the defining property of Hadamard matrices, namely that their rows are mutually orthogonal. This implies that two distinct rows of a Hadamard matrix differ in exactly n/2 positions, and, since negation of a row does not affect orthogonality, that any row of H differs from any row of −H in n/2 positions as well, except when the rows correspond, in which case they differ in n positions.
To get the augmented Hadamard code above with $n = 2^{k-1}$, the chosen Hadamard matrix H has to be of Sylvester type, which gives rise to a message length of $\log_2(2n) = k$.
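A minimal sketch of Sylvester's construction and of the resulting codewords (the rows of H and −H under the mapping x ↦ (1 − x)/2), illustrating the minimum distance n/2:

import numpy as np

def sylvester(k):
    """Sylvester-type Hadamard matrix of order 2^k with entries +1/-1."""
    H = np.array([[1]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

H = sylvester(3)                                  # 8 x 8, rows mutually orthogonal
codewords = [(1 - row) // 2 for row in np.vstack([H, -H])]
d = min(int(np.sum(a != b))
        for i, a in enumerate(codewords) for b in codewords[i + 1:])
print(d)                                          # 4, i.e. n/2 for n = 8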
Distance
The distance of a code is the minimum Hamming distance between any two distinct codewords, i.e., the minimum number of positions at which two distinct codewords differ. Since the Walsh–Hadamard code is a linear code, the distance is equal to the minimum Hamming weight among all of its non-zero codewords. All non-zero codewords of the Walsh–Hadamard code have a Hamming weight of exactly $2^{k-1}$ by the following argument.
Let $x \in \{0,1\}^k$ be a non-zero message. Then the following value is exactly equal to the fraction of positions in the codeword that are equal to one: $\Pr_{y \in \{0,1\}^k}\big[\langle x, y \rangle = 1\big]$.
The fact that the latter value is exactly $1/2$ is called the random subsum principle. To see that it is true, assume without loss of generality that $x_1 = 1$.
Then, when conditioned on the values of $y_2, \dots, y_k$, the event $\langle x, y \rangle = 1$ is equivalent to $y_1 = b$ for some $b \in \{0,1\}$ depending on $x_2, \dots, x_k$ and $y_2, \dots, y_k$. The probability that $y_1 = b$ happens is exactly $1/2$. Thus, in fact, all non-zero codewords of the Hadamard code have relative Hamming weight $1/2$, and thus, its relative distance is $1/2$.
The relative distance of the augmented Hadamard code is $1/2$ as well, but it no longer has the property that every non-zero codeword has weight exactly $2^{k-1}$ since the all-ones vector is a codeword of the augmented Hadamard code. This is because the vector $x = 10^{k-1}$ encodes to $\text{pHad}(10^{k-1}) = 1^{2^{k-1}}$. Furthermore, whenever $x$ is non-zero and not the vector $10^{k-1}$, the random subsum principle applies again, and the relative weight of $\text{pHad}(x)$ is exactly $1/2$.
Local decodability
A locally decodable code is a code that allows a single bit of the original message to be recovered with high probability by only looking at a small portion of the received word.
A code is $q$-query locally decodable if a message bit, $x_i$, can be recovered by checking $q$ bits of the received word. More formally, a code, $C\colon \{0,1\}^k \to \{0,1\}^n$, is $(q, \delta, \epsilon)$-locally decodable, if there exists a probabilistic decoder, $D\colon \{0,1\}^n \to \{0,1\}^k$, such that (Note: $\Delta(x, y)$ represents the Hamming distance between vectors $x$ and $y$):
For every $x \in \{0,1\}^k$, every $y \in \{0,1\}^n$, and every $i \in [k]$: $\Delta(y, C(x)) \le \delta n$ implies that $\Pr\big[D(y)_i = x_i\big] \ge \tfrac{1}{2} + \epsilon$.
Theorem 1: The Walsh–Hadamard code is $\big(2, \delta, \tfrac{1}{2} - 2\delta\big)$-locally decodable for all $0 \le \delta \le \tfrac{1}{4}$.
Lemma 1: For all codewords $c$ in a Walsh–Hadamard code $C$, $c_i + c_j = c_{i \oplus j}$, where $c_i$ and $c_j$ represent the bits in $c$ in positions $i$ and $j$ respectively, and $c_{i \oplus j}$ represents the bit at position $i \oplus j$.
Proof of lemma 1
Let $c$ be the codeword in $C$ corresponding to message $x$.
Let $G$ be the generator matrix of $C$, and let $g_i$ denote its $i$-th column.
By definition, $c_i = x \cdot g_i$. From this, $c_i + c_j = x \cdot g_i + x \cdot g_j = x \cdot (g_i + g_j)$. By the construction of $G$, $g_i + g_j = g_{i \oplus j}$. Therefore, by substitution, $c_i + c_j = x \cdot g_{i \oplus j} = c_{i \oplus j}$.
Proof of theorem 1
To prove theorem 1 we will construct a decoding algorithm and prove its correctness.
Algorithm
Input: Received word $w \in \{0,1\}^n$
For each $i \in \{1, \dots, k\}$:
Pick $j \in \{0,1\}^k$ uniformly at random.
Pick $j' \in \{0,1\}^k$ such that $j \oplus j' = e_i$, where $e_i$ is the $i$-th standard basis vector and $j \oplus j'$ is the bitwise xor of $j$ and $j'$.
$x_i \leftarrow w_j \oplus w_{j'}$.
Output: Message $x = (x_1, \dots, x_k)$
Proof of correctness
For any message, $x$, and received word $w$ such that $w$ differs from $c = C(x)$ on at most a $\delta$ fraction of bits, $x_i$ can be decoded with probability at least $1 - 2\delta$.
By lemma 1, $c_j \oplus c_{j'} = c_{j \oplus j'} = c_{e_i} = x_i$. Since $j$ and $j'$ are picked uniformly, the probability that $w_j \ne c_j$ is at most $\delta$. Similarly, the probability that $w_{j'} \ne c_{j'}$ is at most $\delta$. By the union bound, the probability that either $w_j$ or $w_{j'}$ does not match the corresponding bit in $c$ is at most $2\delta$. If both $w_j$ and $w_{j'}$ correspond to $c$, then lemma 1 applies, and therefore, the proper value of $x_i$ will be computed. Therefore, the probability that $x_i$ is decoded properly is at least $1 - 2\delta$. Therefore, $\epsilon = \tfrac{1}{2} - 2\delta$, and for $\epsilon$ to be positive, $0 \le \delta \le \tfrac{1}{4}$.
Therefore, the Walsh–Hadamard code is $\big(2, \delta, \tfrac{1}{2} - 2\delta\big)$-locally decodable for $0 \le \delta \le \tfrac{1}{4}$.
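A sketch of the two-query decoder described above, assuming codeword positions are indexed by length-k binary vectors in the lexicographic order used in the construction section:

import random

def index_of(y):
    """Position of the index vector y in lexicographic order."""
    pos = 0
    for bit in y:
        pos = 2 * pos + bit
    return pos

def locally_decode_bit(w, k, i):
    """Two-query local decoder for message bit i (0-based) from a possibly
    corrupted 2^k-bit word w: read positions j and j xor e_i and xor them."""
    j = tuple(random.randint(0, 1) for _ in range(k))
    e_i = tuple(1 if t == i else 0 for t in range(k))
    j2 = tuple(a ^ b for a, b in zip(j, e_i))
    return w[index_of(j)] ^ w[index_of(j2)]

# With the hadamard_encode sketch above: corrupt a small fraction of the
# codeword of x = (1, 0, 1) and each bit is still recovered with probability
# at least 1 - 2*delta.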
Optimality
For k ≤ 7 the linear Hadamard codes have been proven optimal in the sense of minimum distance.
See also
Zadoff–Chu sequence — improve over the Walsh–Hadamard codes
References
Further reading
(xiv+225 pages)
Coding theory
Error detection and correction | Hadamard code | [
"Mathematics",
"Engineering"
] | 2,842 | [
"Discrete mathematics",
"Coding theory",
"Reliability engineering",
"Error detection and correction"
] |
7,429,696 | https://en.wikipedia.org/wiki/C6H8O6 | The molecular formula C6H8O6 (molar mass: 176.124 g/mol) may be:
Ascorbic acid (vitamin C)
Erythorbic acid
Glucuronolactone
Propane-1,2,3-tricarboxylic acid
Triformin | C6H8O6 | [
"Chemistry"
] | 79 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
7,430,072 | https://en.wikipedia.org/wiki/Noise%20spectral%20density | In communications, noise spectral density (NSD), noise power density, noise power spectral density, or simply noise density (N0) is the power spectral density of noise or the noise power per unit of bandwidth. It has dimension of power over frequency, whose SI unit is watt per hertz (equivalent to watt-second or joule).
It is commonly used in link budgets as the denominator of the important figure-of-merit ratios, such as carrier-to-noise-density ratio as well as Eb/N0 and Es/N0.
If the noise is one-sided white noise, i.e., constant with frequency, then the total noise power N integrated over a bandwidth B is N = BN0 (for double-sided white noise, the bandwidth is doubled, so N is BN0/2). This is utilized in signal-to-noise ratio calculations.
For thermal noise, its spectral density is given by N0 = kT, where k is the Boltzmann constant in joules per kelvin, and T is the receiver system noise temperature in kelvins.
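For example, at the commonly assumed reference temperature of 290 K, the thermal noise density works out to $N_0 = kT = (1.380649 \times 10^{-23}\,\mathrm{J/K}) \times (290\,\mathrm{K}) \approx 4.0 \times 10^{-21}\,\mathrm{W/Hz} \approx -174\,\mathrm{dBm/Hz}$, so the total noise power in a 1 MHz bandwidth is $N = N_0 B \approx 4.0 \times 10^{-15}\,\mathrm{W}$ (about −114 dBm).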
The noise amplitude spectral density is the square root of the noise power spectral density, and is given in units such as $\mathrm{V}/\sqrt{\mathrm{Hz}}$.
See also
Noise-equivalent bandwidth
Spectral density estimation
Welch's method
References
Noise (electronics)
Frequency-domain analysis
Acoustics | Noise spectral density | [
"Physics"
] | 278 | [
"Spectrum (physical sciences)",
"Frequency-domain analysis",
"Classical mechanics",
"Acoustics"
] |
7,430,120 | https://en.wikipedia.org/wiki/Kuehneromyces%20mutabilis | Kuehneromyces mutabilis (synonym: Pholiota mutabilis), commonly known as the sheathed woodtuft, is a species of fungus that grows in clumps on dead wood. It is edible but strongly resembles the deadly poisonous Galerina marginata.
Description
The clustered shiny convex caps grow up to in diameter. They are very hygrophanous; in a damp state they are shiny and greasy with a deep orange-brown colour towards the rim; often there is a disc of lighter (less sodden) flesh in the middle. In a dry state they are cinnamon-coloured.
The gills are initially light and later cinnamon brown, and are sometimes somewhat decurrent (running down the stem).
The stipe is 8–10 cm long by about 0.5–1 cm in diameter with a ring which separates the bare, smooth light cinnamon upper part from the darker brown shaggily scaly lower part. This type of stem is sometimes described as "booted".
Similar species
It resembles the deadly poisonous Galerina marginata. Although a typical K. mutabilis is easily distinguished from a typical G. marginata by the "booted" stipe which is shaggy below the ring, this character is not reliable and G. marginata can also have scales. The main differences are:
While they are both hygrophanous, K. mutabilis dries from the centre outwards (so having a lighter colour in the centre) and G. marginata dries from the edge inwards.
the stem is scaly below the ring in K. mutabilis, but normally fibrously silky in G. marginata.
K. mutabilis has a pleasant mushroom smell and mild taste, whereas G. marginata tastes and smells mealy.
Distribution and habitat
Kuehneromyces mutabilis is found in Australia, Asia (in the Caucasus, Siberia, and Japan), North America, and Europe. In Europe, it can be found from Southern Europe to Iceland and Scandinavia.
This species always grows on wood, generally on stumps of broad-leaved trees (especially beech, birch and alder), and rarely on conifers.
It is found from April to late October, and also in the remaining winter months where conditions are mild. It is often seen at times when there are few other fungi in evidence.
Edibility
Some guides caution that K. mutabilis is not safe to consume as it could be confused with the deadly poisonous Galerina marginata, even by people who are quite knowledgeable.
The caps of this mushroom can be fried or used for flavouring in sauces and soups (the stems being too tough to eat). The flavour is best after cooking.
References
Sources
This article is partly translated from the German page.
Marcel Bon : The Mushrooms and Toadstools of Britain and North-Western Europe (Hodder & Stoughton, 1987).
Régis Courtecuisse, Bernard Duhem : Guide des champignons de France et d'Europe (Delachaux & Niestlé, 1994–2000).
External links
Pholiota mutabilis, from Smith AH & Hesler LR. (1968). The North American Species of Pholiota. (Archived at Mykoweb.com.)
Pholiota mutabilis by Michael Kuo, MushroomExpert.Com, November, 2007.
Kuehneromyces mutabilis by Roger Philips, RogersMushrooms (website).
“Kuehneromyces mutabilis” by Robert Sasata, Healing-Mushrooms.net, February, 2008.
Edible fungi
Fungi described in 1871
Fungi of Europe
Fungi of North America
Strophariaceae
Taxa named by Jacob Christian Schäffer
Fungi of Iceland
Fungus species | Kuehneromyces mutabilis | [
"Biology"
] | 783 | [
"Fungi",
"Fungus species"
] |
7,430,174 | https://en.wikipedia.org/wiki/Discrete%20dipole%20approximation | Discrete dipole approximation (DDA), also known as coupled dipole approximation, is a method for computing scattering of radiation by particles of arbitrary shape and by periodic structures. Given a target of arbitrary geometry, one seeks to calculate its scattering and absorption properties by an approximation of the continuum target by a finite array of small polarizable dipoles. This technique is used in a variety of applications including nanophotonics, radar scattering, aerosol physics and astrophysics.
Basic concepts
The basic idea of the DDA was introduced in 1964 by DeVoe who applied it to study the optical properties of molecular aggregates; retardation effects were not included, so DeVoe's treatment was limited to aggregates that were small compared with the wavelength. The DDA, including retardation effects, was proposed in 1973 by Purcell and Pennypacker
who used it to study interstellar dust grains. Simply stated, the DDA is an approximation of the continuum target by a finite array of polarizable points. The points acquire dipole moments in response to the local electric field. The dipoles interact with one another via their electric fields, so the DDA is also sometimes referred to as the coupled dipole approximation.
Nature provides the physical inspiration for the DDA: in 1909 Lorentz
showed that the dielectric properties of a substance could be directly related to the polarizabilities of the individual atoms of which it was composed, with a particularly simple and exact relationship, the Clausius-Mossotti relation (or Lorentz-Lorenz), when the atoms are located on a cubical lattice. We may expect that, just as a continuum representation of a solid is appropriate on length scales that are large compared with the interatomic spacing, an array of polarizable points can accurately approximate the response of a continuum target on length scales that are large compared with the interdipole separation.
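For reference, the prescription mentioned here can be stated compactly. The following is the standard Clausius-Mossotti (Lorentz-Lorenz) form quoted in the DDA literature for dipoles on a cubic lattice of spacing d, written in Gaussian units; it is a schematic statement and not tied to any particular implementation:

```latex
% Clausius-Mossotti polarizability of a dipole representing a cubic cell of
% side d filled with material of relative permittivity \varepsilon:
\alpha^{\mathrm{CM}} \;=\; \frac{3\,d^{3}}{4\pi}\,
  \frac{\varepsilon - 1}{\varepsilon + 2}
```

Practical DDA codes typically apply further corrections to this polarizability, such as radiative-reaction and lattice-dispersion terms, to improve accuracy at finite wavelength.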
For a finite array of point dipoles the scattering problem may be solved exactly, so the only approximation that is present in the DDA is the replacement of the continuum target by an array of N-point dipoles. The replacement requires specification of both the geometry (location of the dipoles) and the dipole polarizabilities. For monochromatic incident waves the self-consistent solution for the oscillating dipole moments may be found; from these the absorption and scattering cross sections are computed. If DDA solutions are obtained for two independent polarizations of the incident wave, then the complete amplitude scattering matrix can be determined.
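To make the structure of this self-consistent solution concrete, the sketch below solves the coupled-dipole equations for a handful of dipoles in the quasi-static limit (no retardation, in the spirit of DeVoe's early treatment). It is only an illustration: the positions, the scalar polarizability, and the incident field are placeholder values, and a real DDA code would use the full retarded interaction tensor and an iterative solver rather than the dense direct solve shown here.

```python
# Minimal quasi-static coupled-dipole sketch (illustrative only).
# Each dipole moment satisfies P_j = alpha * (E_inc,j + sum_k G_jk P_k),
# i.e. the linear system (I/alpha - G) P = E_inc.
import numpy as np

def solve_coupled_dipoles(positions, alpha, e_inc):
    """Return the (N, 3) dipole moments for scalar polarizability alpha."""
    n = len(positions)
    a_mat = np.eye(3 * n, dtype=complex) / alpha  # diagonal blocks I/alpha
    for j in range(n):
        for k in range(n):
            if j == k:
                continue
            r_vec = positions[j] - positions[k]
            r = np.linalg.norm(r_vec)
            rhat = r_vec / r
            # Static field at dipole j due to a unit dipole at k:
            # G = (3 rhat rhat^T - I) / r^3
            g = (3.0 * np.outer(rhat, rhat) - np.eye(3)) / r**3
            a_mat[3 * j:3 * j + 3, 3 * k:3 * k + 3] = -g
    p = np.linalg.solve(a_mat, e_inc.reshape(-1).astype(complex))
    return p.reshape(n, 3)

# Two dipoles one unit apart along x, incident field polarized along z.
positions = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
e_inc = np.tile([0.0, 0.0, 1.0], (2, 1))
print(solve_coupled_dipoles(positions, alpha=0.05, e_inc=e_inc))
```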
Alternatively, the DDA can be derived from the volume integral equation for the electric field. This highlights that the point-dipole approximation is equivalent to a discretization of the integral equation, so the discretization error decreases with decreasing dipole size.
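Schematically, and with the details of the self-term omitted, that integral equation can be written as follows (Gaussian units; the precise form varies between formulations):

```latex
% Volume integral equation for the total field inside the scatterer;
% \overline{\mathbf{G}} is the free-space dyadic Green's tensor and the
% integral is understood with an appropriate exclusion-volume self-term.
\mathbf{E}(\mathbf{r}) \;=\; \mathbf{E}_{\mathrm{inc}}(\mathbf{r})
  + \int_{V} \overline{\mathbf{G}}(\mathbf{r},\mathbf{r}')\,
    \chi(\mathbf{r}')\,\mathbf{E}(\mathbf{r}')\,\mathrm{d}^{3}r',
\qquad
\chi(\mathbf{r}) \;=\; \frac{\varepsilon(\mathbf{r}) - 1}{4\pi}
```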
With the recognition that the polarizabilities may be tensors, the DDA can readily be applied to anisotropic materials. The extension of the DDA to treat materials with nonzero magnetic susceptibility is also straightforward, although for most applications magnetic effects are negligible.
There are several reviews of the DDA method.
The method was improved by Draine, Flatau, and Goodman, who applied the fast Fourier transform to solve fast convolution problems arising in the discrete dipole approximation (DDA). This allowed for the calculation of scattering by large targets. They distributed an open-source code DDSCAT.
There are now several DDA implementations, extensions to periodic targets, and particles placed on or near a plane substrate. Comparisons with exact techniques have also been published.
Other aspects, such as the validity criteria of the discrete dipole approximation, were published. The DDA was also extended to employ rectangular or cuboid dipoles, which are more efficient for highly oblate or prolate particles.
Fast Fourier Transform for fast convolution calculations
The Fast Fourier Transform (FFT) method was introduced in 1991 by Goodman, Draine, and Flatau for the discrete dipole approximation. They utilized a 3D FFT GPFA written by Clive Temperton. The interaction matrix was extended to twice its original size to incorporate negative lags by mirroring and reversing the interaction matrix. Several variants have been developed since then. Barrowes, Teixeira, and Kong in 2001 developed a code that uses block reordering, zero padding, and a reconstruction algorithm, claiming minimal memory usage. McDonald, Golden, and Jennings in 2009 used a 1D FFT code and extended the interaction matrix in the x, y, and z directions of the FFT calculations, suggesting memory savings due to this approach. This variant was also implemented in the MATLAB 2021 code by Shabaninezhad and Ramakrishna. Other techniques to accelerate convolutions have been suggested in a general context along with faster evaluations of Fast Fourier Transforms arising in DDA problem solvers.
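The reason the FFT applies at all is that, on a regular lattice, the dipole-dipole interaction depends only on the separation between cells, so each matrix-vector product is a discrete convolution. The sketch below is a one-dimensional illustration of the same idea, not taken from any of the codes mentioned above: a Toeplitz matrix-vector product evaluated in O(N log N) time by embedding the matrix in a circulant one of twice the size, which is where the doubling and the mirrored "negative lags" come from.

```python
# Illustrative 1D analogue of the FFT-accelerated interaction product:
# a Toeplitz matrix-vector multiply via circulant embedding of size 2N.
import numpy as np

def toeplitz_matvec_fft(first_col, first_row, x):
    n = len(x)
    # First column of the 2N circulant: the Toeplitz column, a padding zero,
    # then the reversed first row (the mirrored negative lags).
    c = np.concatenate([first_col, [0.0], first_row[1:][::-1]])
    x_pad = np.concatenate([x, np.zeros(n)])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x_pad))
    return y[:n].real

# Check against the dense Toeplitz product on random data.
rng = np.random.default_rng(0)
col, row = rng.normal(size=8), rng.normal(size=8)
row[0] = col[0]  # Toeplitz consistency: shared diagonal element
x = rng.normal(size=8)
dense = np.array([[col[i - j] if i >= j else row[j - i] for j in range(8)]
                  for i in range(8)])
assert np.allclose(dense @ x, toeplitz_matvec_fft(col, row, x))
```

In a 3D DDA code the same construction is applied along each lattice dimension to 3x3 tensor blocks, which is why the interaction array is extended to twice its size in every direction.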
Conjugate gradient iteration schemes and preconditioning
Some of the early calculations of the polarization vector were based on
direct inversion and the implementation of the conjugate gradient method by Petravic and Kuo-Petravic. Subsequently, many other conjugate gradient methods have been tested. Advances in the preconditioning of linear systems of equations arising in the DDA setup have also been reported.
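For orientation, the sketch below shows the basic conjugate-gradient iteration for a symmetric positive-definite system A x = b. It is only meant to illustrate the family of Krylov-subspace methods referred to here; practical DDA solvers use variants suited to the complex symmetric DDA matrix (such as CG applied to the normal equations, Bi-CGSTAB, or QMR), with the matrix-vector product supplied by the FFT machinery described above.

```python
# Minimal conjugate-gradient solver for a symmetric positive-definite system.
import numpy as np

def conjugate_gradient(matvec, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - matvec(x)          # initial residual
    p = r.copy()               # initial search direction
    rs_old = np.dot(r, r)
    for _ in range(max_iter):
        ap = matvec(p)
        alpha = rs_old / np.dot(p, ap)
        x = x + alpha * p
        r = r - alpha * ap
        rs_new = np.dot(r, r)
        if np.sqrt(rs_new) < tol * np.linalg.norm(b):
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Quick check on a random well-conditioned SPD system.
rng = np.random.default_rng(1)
m = rng.normal(size=(20, 20))
a = m @ m.T + 20.0 * np.eye(20)
b = rng.normal(size=20)
x = conjugate_gradient(lambda v: a @ v, b)
assert np.allclose(a @ x, b, atol=1e-6)
```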
Thermal discrete dipole approximation
Thermal discrete dipole approximation is an extension of the original DDA to simulations of near-field heat transfer between 3D arbitrarily-shaped objects.
Discrete dipole approximation codes
Most of the codes apply to arbitrary-shaped inhomogeneous nonmagnetic particles and particle systems in free space or homogeneous dielectric host medium. The calculated quantities typically include the Mueller matrices, integral cross-sections (extinction, absorption, and scattering), internal fields and angle-resolved scattered fields (phase function). There are some published comparisons of existing DDA codes.
General-purpose open-source DDA codes
These codes typically use regular grids (cubical or rectangular cuboid), a conjugate gradient method to solve the large system of linear equations, and FFT acceleration of the matrix-vector products based on the convolution theorem. The complexity of this approach is almost linear in the number of dipoles for both time and memory.
Specialized DDA codes
This list includes codes that do not qualify for the previous section. The reasons may include the following: the source code is not available, FFT acceleration is absent or reduced, or the code focuses on specific applications that do not allow easy calculation of standard scattering quantities.
Gallery of shapes
See also
Computational electromagnetics
Mie theory
Finite-difference time-domain method
Method of moments (electromagnetics)
References
Computational science
Electrodynamics
Scattering
Scattering, absorption and radiative transfer (optics)
Computational electromagnetics | Discrete dipole approximation | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics"
] | 1,364 | [
"Computational electromagnetics",
" absorption and radiative transfer (optics)",
"Applied mathematics",
"Computational physics",
"Computational science",
"Scattering",
"Condensed matter physics",
"Particle physics",
"Nuclear physics",
"Electrodynamics",
"Dynamical systems"
] |
7,430,224 | https://en.wikipedia.org/wiki/Bipolar%20outflow | A bipolar outflow comprises two continuous flows of gas from the poles of a star. Bipolar outflows may be associated with protostars (young, forming stars), or with evolved post-AGB stars (often in the form of bipolar nebulae).
Protostars
In the case of a young star, the bipolar outflow is driven by a dense, collimated jet. These astrophysical jets are narrower than the outflow and very difficult to observe directly. However, supersonic shock fronts along the jet heat the gas in and around the jet to thousands of degrees. These pockets of hot gas radiate at infrared wavelengths and thus can be detected with telescopes like the United Kingdom Infrared Telescope (UKIRT). They often appear as discrete knots or arcs along the beam of the jet. They are usually called molecular bow shocks, since the knots are usually curved like the bow wave at the front of a ship.
Occurrence
Typically, molecular bow shocks are observed in ro-vibrational emission from hot molecular hydrogen. These objects are known as molecular hydrogen emission-line objects, or MHOs.
Bipolar outflows are usually observed in emission from warm carbon monoxide molecules with millimeter-wave telescopes like the James Clerk Maxwell Telescope, though other trace molecules can be used. Bipolar outflows are often found in dense, dark clouds. They tend to be associated with the very youngest stars (ages less than 10,000 years) and are closely related to the molecular bow shocks. Indeed, the bow shocks are thought to sweep up or "entrain" dense gas from the surrounding cloud to form the bipolar outflow.
Jets from more evolved young stars - T Tauri stars - produce similar bow shocks, though these are visible at optical wavelengths and are called Herbig–Haro objects (HH objects). T Tauri stars are usually found in less dense environments. The absence of surrounding gas and dust means that HH objects are less effective at entraining molecular gas. Consequently, they are less likely to be associated with visible bipolar outflows.
The presence of a bipolar outflow shows that the central star is still accumulating material from the surrounding cloud via an accretion disk. The outflow relieves the build-up of angular momentum as material spirals down onto the central star through the accretion disk. The magnetised material in these protoplanetary jets is rotating and comes from a wide area in the protostellar disk.
Bipolar outflows are also ejected from evolved stars, such as proto-planetary nebulae, planetary nebulae, and post-AGB stars. Direct imaging of proto-planetary nebulae and planetary nebulae has shown the presence of outflows ejected by these systems. Large spectroscopic radial velocity monitoring campaigns have revealed the presence of high-velocity outflows or jets from post-AGB stars. The origin of these jets is the presence of a binary companion, where mass-transfer and accretion onto one of the stars lead to the creation of an accretion disk, from which matter is ejected. The presence of a magnetic field causes the eventual ejection and collimation of the matter, forming a bipolar outflow or jet.
In both cases, bipolar outflows consist largely of molecular gas. They can travel at tens or possibly even hundreds of kilometers per second, and in the case of young stars extend over a parsec in length.
Galactic outflow
Massive galactic molecular outflows may have the physical conditions, such as high gas densities, needed to form stars. This star-formation mode could contribute to the morphological evolution of galaxies.
See also
Accretion disc
Astrophysical jet
Bipolar nebula
Herbig–Haro object
Planetary nebula
References
Reipurth B., Bally J. (2001), "Herbig–Haro flows: probes of early stellar evolution", Annual Review of Astronomy and Astrophysics, vol. 39, p. 403-455
Davis C. J., Eisloeffel J. (1995), "Near-infrared imaging in H2 of molecular (CO) outflows from young stars", Astronomy and Astrophysics, vol. 300, p. 851-869.
Kwok S. (2000), The origin and evolution of Planetary Nebulae, Cambridge Astrophysics Series, Cambridge University Press.
Chen Z., Frank A., Blackman E. G., Nordhaus J. and Carroll-Nellenback J., (2017), "Mass Transfer and Disc Formation in AGB Binary systems", Monthly Notices of the Royal Astronomical Society, https://doi.org/10.1093/mnras/stx680
External links
A General Catalogue of Herbig–Haro Objects
A Catalogue of Molecular Hydrogen Emission-Line Objects in Outflows from Young Stars: MHO Catalogue
Stellar astronomy | Bipolar outflow | [
"Astronomy"
] | 994 | [
"Astronomical sub-disciplines",
"Stellar astronomy"
] |
7,430,232 | https://en.wikipedia.org/wiki/Project%20Highwater | Project Highwater was an experiment carried out as part of two of the test flights of NASA's Saturn I launch vehicle (using battleship upper stages), successfully launched into a sub-orbital trajectory from Cape Canaveral, Florida. The Highwater experiment sought to determine the effect of a large volume of water suddenly released into the ionosphere. The project answered questions about the effect of the diffusion of propellants in the event that a rocket was destroyed at high altitude.
The first flight, SA-2, took place on April 25, 1962. After the flight test of the rocket was complete and first stage shutdown occurred, explosive charges on the dummy upper stages destroyed the rocket and released of ballast water weighing into the upper atmosphere at an altitude of , eventually reaching an apex of .
The second flight, SA-3, launched on November 16, 1962, and involved the same payload. The ballast water was explosively released at the flight's peak altitude of . For both of these experiments, the resulting ice clouds expanded to several miles in diameter and lightning-like radio disturbances were recorded.
See also
High-altitude nuclear explosion - other high altitude explosive tests
References
Further reading
1962 in spaceflight
NASA programs
Military projects of the United States
Water and the environment
Spacecraft launched by Saturn rockets
Saturn I | Project Highwater | [
"Engineering"
] | 261 | [
"Military projects of the United States",
"Military projects"
] |
7,430,340 | https://en.wikipedia.org/wiki/Oort%20limit | The Oort limit is a theoretical location at the outer limits of the Oort cloud, where the number of comets and minor planets orbiting the Sun drops drastically, or drops to zero. The exact location of such a limit, if one exists, is uncertain. About 100 of the roughly 3,500 known comets travel more than 5,000 AU from the Sun, and a very few travel as far as 20,000 AU from the Sun.
So far, it appears that rather than showing a sudden drop in the number of comets orbiting the Sun at about 50,000 AU, the Oort cloud thins out fairly uniformly with increasing distance from the Sun. Current observations indicate that the Oort limit lies somewhere around 50,000 AU (0.8 ly) from the Sun.
Notes
Of these known comets, the majority (>2000) were discovered using the SOHO telescope, and are mostly sungrazing comets from the Kreutz Sungrazers. Of the other comets, about half are long-period comets, orbiting several hundred astronomical units out or farther. Considering this, Oort cloud comets are fairly common.
See also
Kuiper Cliff
References
The Encyclopedia of Astrobiology, Astronomy, and Spaceflight
Oort cloud
Trans-Neptunian region
Jan Oort | Oort limit | [
"Astronomy"
] | 263 | [
"Astronomical hypotheses",
"Oort cloud",
"Trans-Neptunian region",
"Solar System"
] |
7,430,578 | https://en.wikipedia.org/wiki/Molecular%20biophysics | Molecular biophysics is a rapidly evolving interdisciplinary area of research that combines concepts in physics, chemistry, engineering, mathematics and biology. It seeks to understand biomolecular systems and explain biological function in terms of molecular structure, structural organization, and dynamic behaviour at various levels of complexity (from single molecules to supramolecular structures, viruses and small living systems). This discipline covers topics such as the measurement of molecular forces, molecular associations, allosteric interactions, Brownian motion, and cable theory. Additional areas of study can be found on Outline of Biophysics. The discipline has required development of specialized equipment and procedures capable of imaging and manipulating minute living structures, as well as novel experimental approaches.
Overview
Molecular biophysics typically addresses biological questions similar to those in biochemistry and molecular biology, seeking to find the physical underpinnings of biomolecular phenomena. Scientists in this field conduct research concerned with understanding the interactions between the various systems of a cell, including the interactions between DNA, RNA and protein biosynthesis, as well as how these interactions are regulated. A great variety of techniques are used to answer these questions.
Fluorescent imaging techniques, as well as electron microscopy, X-ray crystallography, NMR spectroscopy, atomic force microscopy (AFM) and small-angle scattering (SAS) both with X-rays and neutrons (SAXS/SANS) are often used to visualize structures of biological significance. Protein dynamics can be observed by neutron spin echo spectroscopy. Conformational change in structure can be measured using techniques such as dual polarisation interferometry, circular dichroism, SAXS and SANS. Direct manipulation of molecules using optical tweezers or AFM, can also be used to monitor biological events where forces and distances are at the nanoscale. Molecular biophysicists often consider complex biological events as systems of interacting entities which can be understood e.g. through statistical mechanics, thermodynamics and chemical kinetics. By drawing knowledge and experimental techniques from a wide variety of disciplines, biophysicists are often able to directly observe, model or even manipulate the structures and interactions of individual molecules or complexes of molecules.
Areas of research
Computational biology
Computational biology involves the development and application of data-analytical and theoretical methods, mathematical modeling and computational simulation techniques to the study of biological, ecological, behavioral, and social systems. The field is broadly defined and includes foundations in biology, applied mathematics, statistics, biochemistry, chemistry, biophysics, molecular biology, genetics, genomics, computer science and evolution. Computational biology has become an important part of developing emerging technologies for the field of biology.
Molecular modelling encompasses all methods, theoretical and computational, used to model or mimic the behaviour of molecules. The methods are used in the fields of computational chemistry, drug design, computational biology and materials science to study molecular systems ranging from small chemical systems to large biological molecules and material assemblies.
Membrane biophysics
Membrane biophysics is the study of biological membrane structure and function using physical, computational, mathematical, and biophysical methods. A combination of these methods can be used to create phase diagrams of different types of membranes, which yields information on thermodynamic behavior of a membrane and its components. As opposed to membrane biology, membrane biophysics focuses on quantitative information and modeling of various membrane phenomena, such as lipid raft formation, rates of lipid and cholesterol flip-flop, protein-lipid coupling, and the effect of bending and elasticity functions of membranes on inter-cell connections.
Motor proteins
Motor proteins are a class of molecular motors that can move along the cytoplasm of animal cells. They convert chemical energy into mechanical work by the hydrolysis of ATP. A good example is the muscle protein myosin which "motors" the contraction of muscle fibers in animals. Motor proteins are the driving force behind most active transport of proteins and vesicles in the cytoplasm. Kinesins and cytoplasmic dyneins play essential roles in intracellular transport such as axonal transport and in the formation of the spindle apparatus and the separation of the chromosomes during mitosis and meiosis. Axonemal dynein, found in cilia and flagella, is crucial to cell motility, for example in spermatozoa, and fluid transport, for example in trachea.
Some biological machines are motor proteins, such as myosin, which is responsible for muscle contraction, kinesin, which moves cargo inside cells away from the nucleus along microtubules, and dynein, which moves cargo inside cells towards the nucleus and produces the axonemal beating of motile cilia and flagella. "[I]n effect, the [motile cilium] is a nanomachine composed of perhaps over 600 proteins in molecular complexes, many of which also function independently as nanomachines... Flexible linkers allow the mobile protein domains connected by them to recruit their binding partners and induce long-range allostery via protein domain dynamics." Other biological machines are responsible for energy production, for example ATP synthase, which harnesses energy from proton gradients across membranes to drive a turbine-like motion used to synthesise ATP, the energy currency of a cell. Still other machines are responsible for gene expression, including DNA polymerases for replicating DNA, RNA polymerases for producing mRNA, the spliceosome for removing introns, and the ribosome for synthesising proteins. These machines and their nanoscale dynamics are far more complex than any molecular machines that have yet been artificially constructed.
These molecular motors are the essential agents of movement in living organisms. In general terms, a motor is a device that consumes energy in one form and converts it into motion or mechanical work; for example, many protein-based molecular motors harness the chemical free energy released by the hydrolysis of ATP in order to perform mechanical work. In terms of energetic efficiency, this type of motor can be superior to currently available man-made motors.
Richard Feynman theorized about the future of nanomedicine. He wrote about the idea of a medical use for biological machines. Feynman and Albert Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would be possible to (as Feynman put it) "swallow the doctor". The idea was discussed in Feynman's 1959 essay "There's Plenty of Room at the Bottom".
These biological machines might have applications in nanomedicine. For example, they could be used to identify and destroy cancer cells. Molecular nanotechnology is a speculative subfield of nanotechnology regarding the possibility of engineering molecular assemblers, biological machines which could re-order matter at a molecular or atomic scale. Nanomedicine would make use of these nanorobots, introduced into the body, to repair or detect damages and infections. Molecular nanotechnology is highly theoretical, seeking to anticipate what inventions nanotechnology might yield and to propose an agenda for future inquiry. The proposed elements of molecular nanotechnology, such as molecular assemblers and nanorobots are far beyond current capabilities.
Protein folding
Protein folding is the physical process by which a protein chain acquires its native 3-dimensional structure, a conformation that is usually biologically functional, in an expeditious and reproducible manner. It is the physical process by which a polypeptide folds into its characteristic and functional three-dimensional structure from a random coil.
Each protein exists as an unfolded polypeptide or random coil when translated from a sequence of mRNA to a linear chain of amino acids. This polypeptide lacks any stable (long-lasting) three-dimensional structure (the left hand side of the first figure). As the polypeptide chain is being synthesized by a ribosome, the linear chain begins to fold into its three-dimensional structure. Folding begins to occur even during the translation of the polypeptide chain. Amino acids interact with each other to produce a well-defined three-dimensional structure, the folded protein (the right-hand side of the figure), known as the native state. The resulting three-dimensional structure is determined by the amino acid sequence or primary structure (Anfinsen's dogma).
Protein structure determination
As the three-dimensional structure of a protein brings with it an understanding of its function and biological context, great effort is placed on observing the structures of proteins. X-ray crystallography was the primary method used in the 20th century to solve the structures of proteins in their crystalline form. Since the early 2000s, cryogenic electron microscopy has been used to solve the structures of proteins closer to their native state, as well as to observe cellular structures.
Protein structure prediction
Protein structure prediction is the inference of the three-dimensional structure of a protein from its amino acid sequence—that is, the prediction of its folding and its secondary and tertiary structure from its primary structure. Structure prediction is fundamentally different from the inverse problem of protein design. Protein structure prediction is one of the most important goals pursued by bioinformatics and theoretical chemistry; it is highly important in medicine, drug design, biotechnology, and the design of novel enzymes. Every two years, the performance of current methods is assessed in the CASP experiment (Critical Assessment of Techniques for Protein Structure Prediction). A continuous evaluation of protein structure prediction web servers is performed by the community project CAMEO3D.
The challenge in predicting protein structures is that no physical model exists that can fully predict protein tertiary structures from their amino acid sequence. This problem is known as the de novo protein structure prediction problem and is one of the great problems of modern science. AlphaFold, an artificial intelligence program, is able to accurately predict the structures of proteins with genetic homology to other proteins that have been previously solved. However, this is not a solution to the de novo problem, as it relies on a database of prior data, which biases its predictions. The solution to the de novo protein structure prediction problem must be a purely physical model that simulates protein folding in its native environment, resulting in the in silico observation of protein structures and dynamics that were never previously observed.
Spectroscopy
Spectroscopic techniques like NMR, spin label electron spin resonance, Raman spectroscopy, infrared spectroscopy, circular dichroism, and so on have been widely used to understand structural dynamics of important biomolecules and intermolecular interactions.
See also
Small angle scattering
Biophysical chemistry
Biophysics
Biophysical Society
Cryo-electron microscopy (cryo-EM)
Dual-polarization interferometry and circular dichroism
Electron paramagnetic resonance (EPR)
European Biophysical Societies' Association
Index of biophysics articles
List of publications in biology – Biophysics
List of publications in physics – Biophysics
List of biophysicists
Outline of biophysics
Mass spectrometry
Medical biophysics
Membrane biophysics
Multiangle light scattering
Neurophysics
Nuclear magnetic resonance spectroscopy of proteins (NMR)
Physiomics
Proteolysis
Ultrafast laser spectroscopy
Virophysics
Macromolecular crystallography
References | Molecular biophysics | [
"Chemistry"
] | 2,278 | [
"Molecular biophysics",
"Molecular biology"
] |
7,431,100 | https://en.wikipedia.org/wiki/Paul%20Gray%20%28information%20technology%29 | Paul Gray (1930 – May 10, 2012) was an American information systems pioneer, and Professor Emeritus at Claremont Graduate University where he was the founding chair of The School of Information Systems and Technology. The School of Information Systems and Technology at Claremont Graduate University is the home of the Paul Gray PC Museum.
Biography
Gray received a PhD in Operations Research from Stanford University in 1968.
He has been a member of the faculty at Stanford, Georgia Institute of Technology, University of Southern California, Southern Methodist University, University of California at Irvine, and Claremont Graduate University. Gray served as secretary of The Institute of Management Sciences from 1975 to 1979, vice president at large from 1983 to 1986, and president from 1992 to 1993.
Gray was the founding editor of CAIS, the Communications of the Association for Information Systems and served as editor-in-chief from 1999 to 2006. He is on the editorial board of CAIS and eight other journals.
In 1999, he was elected Fellow of the Association for Information Systems. In 2000, he was named Educator of the Year by EDSIG. In 2002, he was elected Fellow of the Institute for Operations Research and Management Science (INFORMS). In 2002, he received the LEO Award of the Association for Information Systems for Lifetime Achievement. In 2003, he received the INFORMS George E. Kimball Medal. In 2006, he received the Lifetime Achievement Award of SIGMIS.
Paul Gray, professor emeritus at Claremont Graduate University (CGU), died on May 10, 2012, from injuries suffered in a car crash. He was 81.
Selected publications
Gray is the author of 13 books and over 140 professional articles. Books, a selection:
Gray, P., Watson, H. J., King, W. R., & McLean, E. R. (1997). Management of information systems. Dryden Press.
Negash, Solomon, and Paul Gray. Business intelligence. Springer Berlin Heidelberg, 2008.
Articles, a selection:
Gray, Paul. "Group decision support systems." Decision Support Systems 3.3 (1987): 233-242.
Gorgone, John, and Paul Gray. "MSIS 2000: model curriculum and guidelines for graduate degree programs in information." Communications of the AIS 3.1es (2000): 1.
Ives, B., Valacich, J., Watson, R. T., Zmud, R., Alavi, M., Baskerville, R., ... & Whinston, A. B. (2002). What every business student needs to know about information systems. Communications of the Association for Information Systems, 9(30), 1-18.
Gorgone, J. T., Gray, P., Stohr, E. A., Valacich, J. S., & Wigand, R. T. (2006). "MSIS 2006: model curriculum and guidelines for graduate degree programs in information systems." ACM SIGCSE Bulletin, 38(2), 121-196.
References
External links
Kimball Medal awarding
CGU mourns the loss of emeritus professor Paul Gray
Biography of Paul Gray from the Institute for Operations Research and the Management Sciences
1930 births
2012 deaths
American operations researchers
Information systems researchers
Claremont Graduate University faculty
People from Orange County, California
Fellows of the Institute for Operations Research and the Management Sciences | Paul Gray (information technology) | [
"Technology"
] | 687 | [
"Information systems",
"Information systems researchers"
] |
12,082,283 | https://en.wikipedia.org/wiki/Human%20vestigiality | In the context of human evolution, vestigiality involves those traits occurring in humans that have lost all or most of their original function through evolution. Although structures called vestigial often appear functionless, they may retain lesser functions or develop minor new ones. In some cases, structures once identified as vestigial simply had an unrecognized function. Vestigial organs are sometimes called rudimentary organs. Many human characteristics are also vestigial in other primates and related animals.
History
Charles Darwin listed a number of putative human vestigial features, which he termed rudimentary, in The Descent of Man (1871). These included the muscles of the ear; wisdom teeth; the appendix; the tail bone; body hair; and the semilunar fold in the corner of the eye. Darwin also commented on the sporadic nature of many vestigial features, particularly musculature. Making reference to the work of the anatomist William Turner, Darwin highlighted a number of sporadic muscles that he identified as vestigial remnants of the panniculus carnosus, particularly the sternalis muscle.
In 1893, Robert Wiedersheim published The Structure of Man, a book on human anatomy and its relevance to evolutionary history. This book contains a list of 86 human organs he considered vestigial, which he called "wholly or in part functionless, some appearing in the Embryo alone, others present during Life constantly or inconstantly. For the greater part Organs which may be rightly termed Vestigial." His list of supposedly vestigial organs included many of the examples on this page as well as others then mistakenly believed to be purely vestigial, such as the pineal gland, the thymus gland, and the pituitary gland. Some of these organs that had lost their obvious, original functions later turned out to have retained functions that had gone unrecognized before the discovery of hormones or many of the functions and tissues of the immune system. Examples included:
the role of the pineal in the regulation of the circadian rhythm (neither the function nor even the existence of melatonin was yet known);
discovery of the role of the thymus in the immune system lay many decades in the future; it remained a mystery until the mid-20th century;
the pituitary and hypothalamus, with their many and varied hormones, were far from understood, let alone the complexity of their interrelationships.
Historically, there was a trend not only to dismiss the appendix as being uselessly vestigial, but an anatomical hazard liable to dangerous inflammation. As late as the mid-20th century, many reputable authorities conceded it no beneficial function. This was a view supported, or perhaps inspired, by Darwin himself in the 1874 edition of his book The Descent of Man, and Selection in Relation to Sex. The organ's patent liability to appendicitis and poorly understood role left it open to blame for a number of possibly unrelated conditions. For example, in 1916, a surgeon claimed that removal of the appendix had cured several cases of trifacial neuralgia and other nerve pain about the head and face, even though he said the evidence for appendicitis in those patients was inconclusive. The discovery of hormones and hormonal principles, notably by Bayliss and Starling, argued against these views, but in the early 20th century, a great deal of fundamental research remained to be done on the functions of large parts of the digestive tract. In 1916, an author found it necessary to argue against the idea that the colon had no important function and that "the ultimate disappearance of the appendix is a coordinate action and not necessarily associated with such frequent inflammations as we are witnessing in the human".
There had been a long history of doubt about such dismissive views. Around 1920, the surgeon Kenelm Hutchinson Digby documented previous observations, going back more than 30 years, that suggested lymphatic tissues, such as the tonsils and appendix, might have substantial immunological functions.
Anatomical
Appendix
The appendix was once believed to be a vestige of a redundant organ that in ancestral species had digestive functions, much as it still does in extant species in which intestinal flora hydrolyze cellulose and similar indigestible plant materials. This view has changed in recent decades, with research suggesting that the appendix may serve an important purpose. In particular, it may serve as a reservoir for beneficial gut bacteria, possibly to allow the bacteria to reestablish in the colon during recovery from diarrhea or other illnesses.
Some herbivorous animals, such as rabbits, have a terminal vermiform appendix and cecum that apparently bear patches of tissue with immune functions and that may also be important in maintaining the composition of intestinal flora. It does not seem to have much digestive function, if any, and is not present in all herbivores, even those with large caeca. As shown in the accompanying pictures, the human appendix typically is about comparable to that of the rabbit's in size, though the caecum is reduced to a single bulge where the ileum empties into the colon. Some carnivorous animals have appendices too, but few have more than vestigial caeca. In line with the possibility that vestigial organs develop new functions, some research suggests that the appendix may guard against the loss of symbiotic bacteria that aid in digestion, though that is unlikely to be a novel function, given the presence of vermiform appendices in many herbivores. Intestinal bacterial populations entrenched in the appendix may support quick reestablishment of the flora of the large intestine after an illness, poisoning, or after an antibiotic treatment depletes or otherwise causes harmful changes to the bacterial population of the colon.
A 2013 study refutes the idea of an inverse relationship between cecum size and appendix size and presence. It is widely present in Euarchontoglires (a superorder of mammals that includes rodents, lagomorphs and primates) and has also evolved independently in the diprotodont marsupials and monotremes, and is highly diverse in size and shape, which could suggest it is not vestigial. Researchers deduce that the appendix has the ability to protect good bacteria in the gut: when the gut is affected by diarrhea or another illness that cleans out the intestines, the good bacteria in the appendix can repopulate the digestive system and keep the person healthy.
Coccyx
The coccyx, or tailbone, is the remnant of a lost tail. All mammals have a tail at some point in their development; in humans, it is present for a period of 4 weeks, during stages 14 to 22 of human embryogenesis. This tail is most prominent in human embryos 31–35 days old. The tailbone, at the end of the spine, has lost its original function in assisting balance and mobility, though it still serves some secondary functions, such as being an attachment point for muscles, which explains why it has not degraded further.
In rare cases, congenital defect results in a short tail-like structure being present at birth. Twenty-three cases of human babies born with such a structure have been reported in the medical literature since 1884. In these cases, the spine and skull were determined to be entirely normal. The only abnormality was that of a tail approximately 12 centimeters long. These tails, though of no deleterious effect, were almost always surgically removed.
Wisdom teeth
Wisdom teeth are vestigial third molars that human ancestors used to help in grinding down plant tissue. The common postulation is that their skulls had larger jaws with more teeth, which were possibly used to help chew down foliage to compensate for a lack of ability to efficiently digest the cellulose that makes up a plant cell wall. As human diets changed, smaller jaws were naturally selected, but the third molars, or "wisdom teeth", still commonly develop in human mouths.
Agenesis (failure to develop) of wisdom teeth in human populations ranges from zero in Tasmanian Aboriginals to nearly 100% in indigenous Mexicans. The difference is related to the PAX9 gene (and perhaps other genes).
Vomeronasal organ
In some animals, the vomeronasal organ (VNO) is part of a second, completely separate sense of smell, known as the accessory olfactory system. Many studies have been performed to find if there is an actual presence of a VNO in adult human beings. Trotier et al. estimate that around 92% of their subjects who had not had septal surgery had at least one intact VNO. Kjaer and Fisher Hansen, on the other hand, found that the VNO structure disappeared during fetal development as it does for some primates. Smith and Bhatnagar (2000) asserted that Kjaer and Fisher Hansen simply missed the structure in older fetuses. Won (2000) found evidence of a VNO in 13 of his 22 cadavers (59.1%) and in 22 of his 78 living patients (28.2%). Given these findings, some scientists have argued that there is a VNO in adult human beings. Most have sought to identify the opening of the vomeronasal organ in humans, rather than identify the tubular epithelial structure itself. Thus it has been argued that such studies, employing macroscopic observational methods, have sometimes missed or even misidentified the vomeronasal organ.
Among studies that use microanatomical methods, there is no reported evidence that human beings have active sensory neurons like those in other animals' working vomeronasal systems. Furthermore, no evidence suggests there are nerve and axon connections between any existing sensory receptor cells in the adult human VNO and the brain. Likewise, there is no evidence of any accessory olfactory bulb in adult human beings, and the key genes involved in other mammals' VNO function have become pseudogenes in human beings. Therefore, while the presence of a structure in adult human beings is debated, a review of the scientific literature by Tristram Wyatt concluded, "most in the field ... are sceptical about the likelihood of a functional VNO in adult human beings on current evidence."
Ear
The ears of a macaque monkey and most other monkeys have far more developed muscles than those of humans, and therefore have the capability to move their ears to better hear potential threats. Humans and other primates such as the orangutan and chimpanzee however have ear muscles that are minimally developed and non-functional, yet still large enough to be identifiable. A muscle attached to the ear that cannot move the ear, for whatever reason, can no longer be said to have any biological function. In humans there is variability in these muscles, such that some people are able to move their ears in various directions, and it can be possible for others to gain such movement by repeated trials. In such primates, the inability to move the ear is compensated mainly by the ability to turn the head on a horizontal plane, an ability which is not common to most monkeys—a function once provided by one structure is now replaced by another.
The outer structure of the ear also shows some vestigial features, such as the node or point on the helix of the ear known as Darwin's tubercle which is found in around 10% of the population.
Eye
The plica semilunaris is a small fold of tissue on the inside corner of the eye. It is the vestigial remnant of the nictitating membrane, i.e., third eyelid, an organ that is fully functional in some other species of mammals. Its associated muscles are also vestigial. Only one species of primate, the Calabar angwantibo, is known to have a functioning nictitating membrane.
The orbitalis muscle is a vestigial or rudimentary nonstriated muscle (smooth muscle) of the eye that crosses from the infraorbital groove and sphenomaxillary fissure and is intimately united with the periosteum of the orbit. It was described by Johannes Peter Müller and is often called Müller's muscle. The muscle forms an important part of the lateral orbital wall in some animals, but in humans it is not known to have any significant function.
Reproductive system
Genitalia
In the internal genitalia of each human sex, there are some residual organs of mesonephric and paramesonephric ducts during embryonic development:
Gartner's duct
Epoophoron
Vesicular appendages of epoophoron
Paroophoron
Human vestigial structures also include leftover embryological remnants that once served a function during development, such as the belly button, and analogous structures between biological sexes. For example, men are also born with two nipples, which are not known to serve a function compared to women. In regards to genitourinary development, both internal and external genitalia of male and female fetuses have the ability to fully or partially form their analogous phenotype of the opposite biological sex if exposed to a lack/overabundance of androgens or the SRY gene during fetal development. Examples of vestigial remnants of genitourinary development include the hymen, which is a membrane that surrounds or partially covers the external vaginal opening that derives from the sinus tubercle during fetal development and is homologous to the male seminal colliculus. Some researchers have hypothesized that the persistence of the hymen may be to provide temporary protection from infection, as it separates the vaginal lumen from the urogenital sinus cavity during development. Other examples include the glans penis and the clitoris, the labia minora and the ventral penis, and the ovarian follicles and the seminiferous tubules.
In modern times, there is controversy regarding whether the foreskin is a vital or vestigial structure. In 1949, British physician Douglas Gairdner noted that the foreskin plays an important protective role in newborns. He wrote, "It is often stated that the prepuce is a vestigial structure devoid of function ... However, it seems to be no accident that during the years when the child is incontinent the glans is completely clothed by the prepuce, for, deprived of this protection, the glans becomes susceptible to injury from contact with sodden clothes or napkin." During the physical act of sex, the foreskin reduces friction, which can reduce the need for additional sources of lubrication. "Some medical researchers, however, claim circumcised men enjoy sex just fine and that, in view of recent research on HIV transmission, the foreskin causes more trouble than it's worth." The area of the outer foreskin measures between 7 and 100 cm², and the inner foreskin measures between 18 and 68 cm², which is a wide range. Regarding vestigial structures, Charles Darwin wrote, "An organ, when rendered useless, may well be variable, for its variations cannot be checked by natural selection."
Musculature
A number of muscles in the human body are thought to be vestigial, either by virtue of being greatly reduced in size compared to homologous muscles in other species, by having become principally tendonous, or by being highly variable in their frequency within or between populations.
Head
The occipitalis minor is a muscle in the back of the head which normally joins to the auricular muscles of the ear. This muscle is very sporadic in frequency—always present in Malays, present in 56% of Africans, 50% of Japanese, and 36% of Europeans, and nonexistent in the Khoikhoi people of southwestern Africa and in Melanesians. Other small muscles in the head associated with the occipital region and the post-auricular muscle complex are often variable in their frequency.
The platysma, a quadrangular (four-sided) muscle in a sheet-like configuration, is a vestigial remnant of the panniculus carnosus of animals. In horses, it is the muscle that allows the animal to flick a fly off its back.
Face
In many animals, the upper lip and sinus area is associated with whiskers or vibrissae which serve a sensory function. In humans, these whiskers do not exist but there are still sporadic cases where elements of the associated vibrissal capsular muscles or sinus hair muscles can be found. Based on histological studies of the upper lips of 20 cadavers, Tamatsu et al. found that structures resembling such muscles were present in 35% (7/20) of their specimens.
Arm
The palmaris longus muscle is seen as a small tendon between the flexor carpi radialis and the flexor carpi ulnaris, although it is not always present. The muscle is absent in about 14% of the population; however, this varies greatly with ethnicity. It is believed that this muscle actively participated in the arboreal locomotion of primates, but currently has no function, as it does not provide additional grip strength. One study has shown the prevalence of palmaris longus agenesis in 500 Indian patients to be 17.2% (8% bilateral and 9.2% unilateral). The palmaris is a popular source of tendon material for grafts, and this has prompted studies which have shown that the absence of the palmaris does not have any appreciable effect on grip strength.
The levator claviculae muscle in the posterior triangle of the neck is a supernumerary muscle present in only 2–3% of all people but nearly always present in most mammalian species, including gibbons and orangutans.
Torso
The pyramidalis muscle of the abdomen is a small and triangular muscle, anterior to the rectus abdominis, and contained in the rectus sheath. It is absent in 20% of humans and when absent, the lower end of the rectus then becomes proportionately increased in size. Anatomical studies suggest that the forces generated by the pyramidalis muscles are relatively small.
The latissimus dorsi muscle of the back has several sporadic variations. One particular variant is the existence of the dorsoepitrochlearis or latissimocondyloideus muscle which is a muscle passing from the tendon of the latissimus dorsi to the long head of the triceps brachii. It is notable due to its well developed character in other apes and monkeys, where it is an important climbing muscle, namely the dorsoepitrochlearis brachii. This muscle is found in ≈5% of humans.
Leg
The plantaris muscle is composed of a thin muscle belly and a long thin tendon. The muscle belly is approximately long, and is absent in 7–10% of the human population. It has some weak functionality in moving the knee and ankle but is generally considered redundant and is often used as a source of tendon for grafts. The long, thin tendon of the plantaris is humorously called "the freshman's nerve", as it is often mistaken for a nerve by new medical students.
Tongue
Another example of human vestigiality occurs in the tongue, specifically the chondroglossus muscle. In a morphological study of 100 Japanese cadavers, it was found that 86% of fibers identified were solid and bundled in the appropriate way to facilitate speech and mastication. The other 14% of fibers were short, thin and sparse – nearly useless, and thus concluded to be of vestigial origin.
Breasts
Extra nipples or breasts sometimes appear along the mammary lines of humans, appearing as a remnant of mammalian ancestors who possessed more than two nipples or breasts. One 2021 report demonstrated that all healthy young men and women who participated in an anatomic study of the front surface of the body exhibited 8 pairs of focal fat mounds running along the embryological mammary ridges from axillae to the upper inner thighs. These were always located in the same relative anatomic sites – analogous to the loci of breasts in other placental mammals – and often had nipple-like moles or extra hairs located atop the mounds. Therefore, focal fatty prominences on the fronts of human torsos likely represent chains of vestigial breasts composed of primordial breast fat.
Behavioral
Humans also bear some vestigial behaviors and reflexes.
Goose bumps
The formation of goose bumps in humans under stress is a vestigial reflex; a possible function in the distant evolutionary ancestors of humanity was to raise the body's hair, making the ancestor appear larger and scaring off predators. Raising the hair is also used to trap an extra layer of air, keeping an animal warm. Due to the diminished amount of hair in humans, the reflex formation of goose bumps when cold is also vestigial.
Palmar grasp reflex
The palmar grasp reflex is thought to be a vestigial behavior in human infants. When placing a finger or object to the palm of an infant, it will securely grasp it. This grasp is found to be rather strong. Some infants—37% according to a 1932 study—are able to support their own weight from a rod, although there is no way they can cling to their mother. The grasp is also evident in the feet. When a baby is sitting down, its prehensile feet assume a curled-in posture, similar to that observed in an adult chimp. An ancestral primate would have had sufficient body hair to which an infant could cling, unlike modern humans, thus allowing its mother to escape from danger, such as climbing up a tree in the presence of a predator without having to occupy her hands holding her baby.
Hiccup
It has been proposed that the hiccup is an evolutionary remnant of earlier amphibian respiration. Amphibians such as tadpoles gulp air and water across their gills via a rather simple motor reflex akin to mammalian hiccuping. The motor pathways that enable hiccuping form early during fetal development, before the motor pathways that enable normal lung ventilation form. Additionally, hiccups and amphibian gulping are inhibited by elevated CO2 and may be stopped by GABAB receptor agonists, illustrating a possible shared physiology and evolutionary heritage. These proposals may explain why premature infants spend 2.5% of their time hiccuping, possibly gulping like amphibians, as their lungs are not yet fully formed. Fetal intrauterine hiccups are of two types. The physiological type occurs before 28 weeks after conception and tends to last five to ten minutes. These hiccups are part of fetal development and are associated with the myelination of the phrenic nerve, which primarily controls the thoracic diaphragm. The phylogeny hypothesis explains how the hiccup reflex might have evolved, and if there is not an explanation, it may explain hiccups as an evolutionary remnant, held over from our amphibious ancestors.
This hypothesis has been questioned because of the existence of the afferent loop of the reflex, the fact that it does not explain the reason for glottic closure, and because the very short contraction of the hiccup is unlikely to have a significant strengthening effect on the slow-twitch muscles of respiration.
Pseudogenes
There are many pseudogenes present in the human genome. One example of this is L-gulonolactone oxidase, a gene that is functional in most other mammals and produces an enzyme that synthesizes vitamin C. In humans and other members of the suborder Haplorrhini, a mutation disabled the gene and made it unable to produce the enzyme. However, the remains of the gene are still present in the human genome.
See also
Color blindness
Deprecation
Myopia
References
Further reading
Evolutionary biology concepts
Human anatomy
Human evolution
Human physiology | Human vestigiality | [
"Biology"
] | 4,968 | [
"Evolutionary biology concepts"
] |
12,082,736 | https://en.wikipedia.org/wiki/Internet-mediated%20research | Internet-mediated research (IMR) is research conducted through the medium of the Internet. In the medical field, it pertains to the practice of gathering medical, biomedical or health-related research data via the internet directly from research subjects. The subject uses a web browser to view and respond to questionnaires that are included in an approved medical research protocol. Other fields such as geography also use IMR as a research tool.
The primary Internet-mediated research is classified into three main types: online questionnaires, virtual interviews, and virtual ethnographies. There is also the case of secondary Internet research, which involves the use of the Internet in the location of secondary information sources such as journal databases, newspapers, and digital archives, among others. Some sources, however, exclude this type in their conceptualization of IMR.
In a traditional medical research study, the principal investigator, research coordinator, or other study staff conducts an interview with the research subject and records the information on a paper or electronic case report form. Using IMR, the research subject instead responds to a questionnaire without the guidance of a research staff member, often performing the action at a time and place disassociated from the research clinic, using only a computer connected to the internet and a standard browser.
Recently, the medical community has begun to study whether there are differences between IMR data and traditionally collected data.
References
External links
Ethics Guidelines for Internet-mediated Research
Methodological Issues in Internet-Mediated Research: A Randomized Comparison of Internet Versus Mailed Questionnaires
Internet Mediated Research: A Critical Reflection upon the Practice of Using Instant Messenger for Higher Educational Research Interviewing
Qualitative Approaches in Internet-Mediated Research: Opportunities, Issues, Possibilities
Epidemiology | Internet-mediated research | [
"Environmental_science"
] | 356 | [
"Epidemiology",
"Environmental social science"
] |
12,083,298 | https://en.wikipedia.org/wiki/List%20of%20renewable%20energy%20organizations | This is a list of notable renewable energy organizations:
Associations
Bioenergy
World Bioenergy Association
Biomass Thermal Energy Council (BTEC)
Pellet Fuels Institute
Geothermal energy
Geothermal Energy Association
Geothermal Rising
Global Geothermal Alliance
Hydropower
International Hydropower Association (IHA) (International)
National Hydropower Association (US)
Renewable energy
Agency for Non-conventional Energy and Rural Technology (ANERT), Kerala, India
American Council on Renewable Energy
American Solar Energy Society
Clean Energy States Alliance (CESA)
EKOenergy
Energy-Quest
Environmental and Energy Study Institute
EurObserv'ER
European Renewable Energy Council
Green Power Forum
International Renewable Energy Agency (IRENA)
International Renewable Energy Alliance (REN Alliance)
Office of Energy Efficiency and Renewable Energy
REN21
Renewable and Appropriate Energy Laboratory
Renewable Energy and Energy Efficiency Partnership (REEEP)
RenewableUK
Renewable Fuels Association
Rocky Mountain Institute
SustainableEnergy
Trans-Mediterranean Renewable Energy Cooperation
World Council for Renewable Energy
The World Renewable Energy Association (WoREA)
Solar energy
International Solar Alliance(ISA)
International Solar Energy Society
Solar Cookers International
Solar Energy Industries Association (SEIA)
Wadebridge Renewable Energy Network (WREN)
Wind energy
American Wind Energy Association
Citizen Partnerships for Offshore Wind (CPOW)
Global Wind Energy Council
WindEurope
World Wind Energy Association
Educational and research institutions
Renewable energy
Centre for Renewable Energy Systems Technology (CREST) at Loughborough University
NaREC (UK National Renewable Energy Centre)
National Renewable Energy Laboratory (NREL)
RES - The School for Renewable Energy Science (University in Iceland and University in Akureyri)
Norwegian Centre for Renewable Energy (SFFE) at NTNU, SINTEF.
Centre for Alternative Technology (CAT)
Solar energy
Clean Energy Institute (CEI) at the University of Washington
Florida Solar Energy Center (FSEC)
Plataforma Solar de Almería (PSA)
See also
List of countries by renewable electricity production
List of renewable energy topics by country
List of photovoltaics companies
List of large wind farms
List of environmental organizations
List of anti-nuclear groups
List of photovoltaics research institutes
Renewable
Organizations
Renewable energy commercialization | List of renewable energy organizations | [
"Engineering"
] | 426 | [
"Renewable energy organizations",
"Energy organizations"
] |
12,083,359 | https://en.wikipedia.org/wiki/Ramboll | Rambøll Group A/S, also known as "Ramboll", is a Danish multinational architecture, engineering, and consulting company. In the past 25 years, the company has expanded from a business mainly focused on the Nordic region to having offices in more than 35 countries, with more than 18,000 employees working on projects across the world. Much of the company's activity is centred on Europe and North America, but it is also active in emerging markets. In 2023, Ramboll was listed among the world's top 15 international design firms.
The company's main work and solutions are for clients in the Buildings, Transport, Energy, Environment & Health, Water, Management Consulting, and Architecture & Landscape sectors.
History
1945–1991: Foundation and initial growth
Ramboll was founded in October 1945 as Rambøll & Hannemann in Copenhagen by a pair of engineers, Børge Johannes Rambøll (1911–2009) and Johan Georg Hannemann (1907–1980). Both had studied at the Technical University of Denmark (DTU) and were heavily motivated to be involved in the rebuilding effort following the devastation of the Second World War. One of the first undertakings of the newly-formed company was the construction of a ferris wheel for Copenhagen’s Tivoli Gardens.
During 1950, Rambøll & Hannemann built Denmark’s first giant radio transmission mast; despite a length of 142m, this mast weighed just 28 tonnes, being 12 tonnes light and using 30 percent less steel than competing designs. Later that decade, the company secured major contracts with the Danish broadcast engineering services (‘Radioingeniørtjenesten’) to erect broadcast towers across both Denmark and Norway. This experience contributed to future undertakings, including work on high-tension-line towers for power plants as well as with the Norwegian telephone directorate.
By the start of the 1960s, the firm had around 30 employees; by the end of the decade, it had expanded to 170 employees as the undertakings it was involved in became not only more numerous but also more diversified. During the 1960s, the company worked on incineration plants and waste management projects for the first time; environmental affairs proved to be a key new area of growth. Rambøll & Hannemann started engineering what it referred to as future-proof buildings, such as the 16-storey National Hospital at the heart of Copenhagen (opened in 1970).
During 1972, the ownership of the company was transferred to a newly-created employee-controlled foundation. The stated aims of this move included the desire for all profits generated to be used to continue the development of Rambøll & Hannemann, to safeguard its long-term future and independence, as well as to benefit its employees, clients and communities. By this point, the company had offices in both Copenhagen and Aarhus; a branch office was opened in Oslo in 1976. During the 1980s, Børge Rambøll formulated the Ramboll Philosophy, which has since served as the basis for the organisation's values, culture and working practices.
1991–2003: Expansion in the Nordic region
In 1991, the company merged with B. Højlund Rasmussen A/S, greatly expanding its multidisciplinary reach. The combined entity initially traded as Rambøll, Hannemann & Højlund; however, during the mid-1990s, the company name was shortened to Rambøll.
During the late 1990s, the company decided to make use of digital tools in the execution of a railway electrification scheme that it had been tasked with. Covering 350 km of tracks and 20,000 steel masts, it was considered to be one of Ramboll's landmark projects at that time.
During 2003, Ramboll merged with rival company Scandiaconsult; the resulting company was the largest consulting engineering firm in the Nordic region. Around this time, Danish ceased to be the business' corporate language as an increasing focus on international operations took hold.
2003–present: International growth
In August 2006, the company acquired the Norwegian firm Storvik & Co.
In August 2007, Ramboll bought the privately owned UK based engineering firm Whitbybird. At the time of the acquisition, Whitbybird employed 680 people based at offices throughout the UK as well as in Italy, India and the United Arab Emirates.
During April 2008, Ramboll's presence in India was strengthened by acquiring the Indian telecom design company ImIsoft.
In March 2011, Ramboll bought the privately owned UK based engineering firm Gifford. Gifford also has offices around the world.
During March 2011, Ramboll acquired the power engineering section of DONG Energy (now Ørsted A/S), DONG Energy Power.
In July 2011, Ramboll Informatik was divested to the Danish IT company KMD.
During 2014, Ramboll acquired the US-based global consultancy, ENVIRON, adding more than 1,500 environmental and health science specialists in 21 countries.
In 2018, Ramboll acquired North American engineering and design consultancy OBG (formerly O'Brien & Gere), adding 950 consultants to Ramboll's North American presence. As of 1 January 2019, Ramboll Americas consisted of engineering and science experts across Brazil, Canada, Mexico and the United States.
In December 2019, Ramboll announced the acquisition of Henning Larsen Architects, effective on 2 January 2020.
During 2020–2021, Ramboll acquired Web Structures.
In August 2023, Ramboll announced the acquisition of the German consultancy firm civity Management Consultants.
In August 2024, Ramboll announced the acquisition of K2 Management.
Ownership
Almost all shares in Ramboll Group A/S are owned by the Ramboll Foundation (approx. 96.9%). The remainder are owned by Ramboll employees and Ramboll Group A/S.
Organisation
Ramboll Group A/S includes a number of primary business units within Markets and Geographies spanning the EU and US, and with branches and offices in 35 countries.
Management
Ramboll's corporate governance comprises the Group Board of Directors, the Group Executive Board, the Group Leadership Team, and Corporate Management. The Board of Directors is responsible for management of Ramboll Group A/S; while the Executive Board is responsible for day-to-day operation of Ramboll Group A/S.
Business units
Romania
Denmark
Sweden
Norway
Finland
Germany
India
UK
Americas
Asia-Pacific
Management Consulting
Energy
Architecture & Landscape
Environment & Health
Water
Buildings
Transport
Large scale projects
Ramboll has been involved in many large-scale projects, both domestically and internationally. During the early 2010s, the company announced that it was re-orientating itself towards major infrastructure works in regions such as the Middle East, Russia and eastern Europe.
In Denmark, one of the company's most significant undertakings has been the Oresund Bridge (1995–1999), connecting Copenhagen, Denmark with Malmö, Sweden. The bridge is one of the most important pieces of infrastructure in Denmark. The international European route E20 runs across the bridge, as does the Oresund Railway Line. The firm was also involved in the planning and construction of the Great Belt Bridge (1988–1998). This bridge connects Halsskov on Zealand with Knudshoved on Funen, 18 kilometres to its west; a two-track railway and a four-lane motorway had to be built, aligned via the small islet Sprogø in the middle of the Great Belt.
Ramboll was the leading engineer on the new Royal Danish Opera, The Copenhagen Opera House. As the lead consultant on the project, Ramboll delivered engineering design, fire & safety, project management, structural engineering, geophysical engineering, geotechnical engineering, HVAC engineering, electrical engineering, bridge engineering, traffic engineering, traffic planning and traffic safety services. This was carried out between 2001 and 2004. A characteristic feature of the Opera building is the gigantic roof covering the entire building stretching all the way to the harbour front. Measuring 158 metres x 90 metres, the Opera roof is one of the largest roof constructions in the world. The innovative design of the roof, which Ramboll has projected in cooperation with Henning Larsen Architects, was the reason for the Opera winning "The 2008 IABSE Outstanding Structure Award".
Ramboll were the structural engineers for the new Tate Modern extension, opened on 17 June 2016 in London, the world's most visited museum of modern art. The company has also provided services to Network Rail's Digital Railway programme.
Ramboll is currently working on several projects concerning linking the infrastructure of the Nordic countries. Among these are projects under the Trans-European Networks and the Fehmarn Belt Fixed Link, the world's longest immersed tunnel.
Internationally, Ramboll has also marked itself by being involved in projects such as Chicago Lakeside Development, Ferrari World in Abu Dhabi, King Abdullah Petroleum Studies and Research Center in Saudi Arabia, the National Museum of Art, Architecture and Design in Oslo, and infrastructure upgrades on the Falkland Islands. Perhaps most unusually, Ramboll has undertaken work in Antarctica.
References
External links
The Ramboll Group homepage
The Ramboll Foundation homepage
International engineering consulting firms
Construction and civil engineering companies of Denmark
Architecture firms of Denmark
Service companies based in Copenhagen
Architecture firms based in Copenhagen
Companies based in Copenhagen Municipality
Construction and civil engineering companies established in 1945
Design companies established in 1945
Danish companies established in 1945 | Ramboll | [
"Engineering"
] | 1,934 | [
"Engineering consulting firms",
"International engineering consulting firms"
] |
12,083,623 | https://en.wikipedia.org/wiki/Thiolutin | Thiolutin is a sulfur-containing antibiotic, which is a potent inhibitor of bacterial and yeast RNA polymerases. It was found to inhibit in vitro RNA synthesis directed by all three yeast RNA polymerases (I, II, and III). Thiolutin is also an inhibitor of mannan and glucan formation in Saccharomyces cerevisiae and used for the analysis of mRNA stability. Studies have shown that thiolutin inhibits adhesion of human umbilical vein endothelial cells (HUVECs) to vitronectin and thus suppresses tumor cell-induced angiogenesis in vivo.
Thiolutin is formed in submerged fermentation by several strains of Streptomyces luteosporeus. Some sources erroneously specify "aureothricin" as a synonym of thiolutin. Aureothricin is an antibiotic very similar to thiolutin, and is created as a by-product during the thiolutin fermentation.
References
Antibiotics
Organic disulfides
Lactams
Acetamides
Enones | Thiolutin | [
"Chemistry",
"Biology"
] | 233 | [
"Pharmacology",
"Biotechnology products",
"Medicinal chemistry stubs",
"Antibiotics",
"Pharmacology stubs",
"Biocides"
] |
12,083,818 | https://en.wikipedia.org/wiki/Filling%20area%20conjecture | In differential geometry, Mikhail Gromov's filling area conjecture asserts that the hemisphere has minimum area among the orientable surfaces that fill a closed curve of given length without introducing shortcuts between its points.
Definitions and statement of the conjecture
Every smooth surface or curve in Euclidean space is a metric space, in which the (intrinsic) distance d_M(x, y) between two points x and y of M is defined as the infimum of the lengths of the curves that go from x to y along M. For example, on a closed curve C of length 2L, for each point x of the curve there is a unique other point of the curve (called the antipodal of x) at distance L from x.
A compact surface M fills a closed curve C if its border (also called boundary, denoted ∂M) is the curve C. The filling M is said to be isometric if for any two points x, y of the boundary curve C, the distance d_M(x, y) between them along M is the same as (not less than) the distance along the boundary. In other words, to fill a curve isometrically is to fill it without introducing shortcuts.
Question: How small can be the area of a surface that isometrically fills its boundary curve, of given length?
For example, in three-dimensional Euclidean space, the circle
C = {(x, y, 0) : x^2 + y^2 = 1}
(of length 2π) is filled by the flat disk
D = {(x, y, 0) : x^2 + y^2 ≤ 1},
which is not an isometric filling, because any straight chord along it is a shortcut. In contrast, the hemisphere
H = {(x, y, z) : x^2 + y^2 + z^2 = 1 and z ≥ 0}
is an isometric filling of the same circle C, which has twice the area of the flat disk. Is this the minimum possible area?
The surface can be imagined as made of a flexible but non-stretchable material that allows it to be moved around and bent in Euclidean space. None of these transformations modifies the area of the surface or the length of the curves drawn on it, which are the magnitudes relevant to the problem. The surface can be removed from Euclidean space altogether, obtaining a Riemannian surface, which is an abstract smooth surface with a Riemannian metric that encodes the lengths and area. Reciprocally, according to the Nash-Kuiper theorem, any Riemannian surface with boundary can be embedded in Euclidean space preserving the lengths and area specified by the Riemannian metric. Thus the filling problem can be stated equivalently as a question about Riemannian surfaces, which are not placed in Euclidean space in any particular way.
Conjecture (Gromov's filling area conjecture, 1983): The hemisphere has minimum area among the orientable compact Riemannian surfaces that fill isometrically their boundary curve, of given length.
Gromov's proof for the case of Riemannian disks
In the same paper where Gromov stated the conjecture, he proved that
the hemisphere has least area among the Riemannian surfaces that isometrically fill a circle of given length, and are homeomorphic to a disk.
Proof: Let M be a Riemannian disk that isometrically fills its boundary of length 2L. Glue each point x of the boundary with its antipodal point x′, defined as the unique point of the boundary that is at the maximum possible distance L from x. Gluing in this way we obtain a closed Riemannian surface that is homeomorphic to the real projective plane and whose systole (the length of the shortest non-contractible curve) is equal to L. (And reciprocally, if we cut open a Riemannian projective plane along a shortest noncontractible loop of length L, we obtain a disk that fills isometrically its boundary of length 2L.) Thus the minimum area that the isometric filling can have is equal to the minimum area that a Riemannian projective plane of systole L can have. But then Pu's systolic inequality asserts precisely that a Riemannian projective plane of given systole has minimum area if and only if it is round (that is, obtained from a Euclidean sphere by identifying each point with its opposite). The area of this round projective plane equals the area of the hemisphere (because each of them has half the area of the sphere).
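For reference, Pu's systolic inequality invoked in the last step can be stated as follows (a standard formulation, added here for convenience rather than quoted from this article's sources): for every Riemannian metric g on the real projective plane,
Area(RP^2, g) ≥ (2/π) · sys(g)^2,
with equality exactly for the round metrics. Taking sys(g) = L recovers the minimum area 2L^2/π, which is the area of the hemisphere bounded by a circle of length 2L.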
The proof of Pu's inequality relies, in turn, on the uniformization theorem.
Fillings with Finsler metrics
In 2001, Sergei Ivanov presented another way to prove that the hemisphere has smallest area among isometric fillings homeomorphic to a disk. His argument does not employ the uniformization theorem and is based instead on the topological fact that two curves on a disk must cross if their four endpoints are on the boundary and interlaced. Moreover, Ivanov's proof applies more generally to disks with Finsler metrics, which differ from Riemannian metrics in that they need not satisfy the Pythagorean equation at the infinitesimal level. The area of a Finsler surface can be defined in various inequivalent ways, and the one employed here is the Holmes–Thompson area, which coincides with the usual area when the metric is Riemannian. What Ivanov proved is that
The hemisphere has minimum Holmes–Thompson area among Finsler disks that isometrically fill a closed curve of given length.
Let M be a Finsler disk that isometrically fills its boundary of length 2L. We may assume that M is the standard round disk in R^2, and the Finsler metric is smooth and strongly convex. The Holmes–Thompson area of the filling can be computed by the formula
Area_HT(M) = (1/π) ∫_M |B*_x| dx,
where for each point x ∈ M, the set B*_x ⊆ T*_x M is the dual unit ball of the norm at x (the unit ball of the dual norm), and |B*_x| is its usual area as a subset of R^2 ≅ T*_x M.
Choose a collection of n boundary points p_0, p_1, ..., p_{n−1}, listed in counterclockwise order. For each point p_i, we define on M the scalar function f_i(x) = d_M(p_i, x). These functions have the following properties:
Each function f_i is Lipschitz on M and therefore (by Rademacher's theorem) differentiable at almost every point x ∈ M.
If f_i is differentiable at an interior point x of M, then there is a unique shortest curve from p_i to x (parametrized with unit speed), that arrives at x with a speed v_i. The differential df_i(x) has dual norm 1 and is the unique such covector φ satisfying φ(v_i) = 1.
In each point x where all the functions f_i are differentiable, the covectors df_i(x) are distinct and placed in counterclockwise order on the dual unit sphere ∂B*_x. Indeed, they must be distinct because different geodesics cannot arrive at x with the same speed. Also, if three of these covectors df_i(x), df_j(x), df_k(x) (for some i < j < k) appeared in inverted order, then two of the three shortest curves from the points p_i, p_j, p_k to x would cross each other, which is not possible.
In summary, for almost every interior point x ∈ M, the covectors df_i(x) are vertices, listed in counterclockwise order, of a convex polygon inscribed in the dual unit ball B*_x. The area of this polygon is (1/2) ∑_i df_i(x) ∧ df_{i+1}(x) (where the index i + 1 is computed modulo n). Therefore we have a lower bound
Area_HT(M) ≥ (1/π) ∫_M (1/2) ∑_i df_i ∧ df_{i+1}
for the area of the filling. If we define the 1-form ω = (1/2) ∑_i f_i df_{i+1}, then we can rewrite this lower bound using the Stokes formula as
Area_HT(M) ≥ (1/π) ∫_{∂M} ω.
The boundary integral that appears here is defined in terms of the distance functions f_i restricted to the boundary, which do not depend on the isometric filling. The result of the integral therefore depends only on the placement of the points p_i on the circle of length 2L. We omitted the computation, and expressed the result in terms of the lengths L_i of each counterclockwise boundary arc from a point p_i to the following point p_{i+1}. The computation is valid only if each L_i < L.
In summary, our lower bound for the area of the Finsler isometric filling converges to 2L^2/π as the collection of points is densified. This implies that
Area_HT(M) ≥ 2L^2/π = Area_HT(hemisphere),
as we had to prove.
Unlike the Riemannian case, there is a great variety of Finsler disks that isometrically fill a closed curve and have the same Holmes–Thompson area as the hemisphere. If the Hausdorff area is used instead, then the minimality of the hemisphere still holds, but the hemisphere becomes the unique minimizer. This follows from Ivanov's theorem since the Hausdorff area of a Finsler manifold is never less than the Holmes–Thompson area, and the two areas are equal if and only if the metric is Riemannian.
Non-minimality of the hemisphere among rational fillings with Finsler metrics
A Euclidean disk that fills a circle can be replaced, without decreasing the distances between boundary points, by a Finsler disk that fills the same circle N = 10 times (in the sense that its boundary wraps around the circle N times), but whose Holmes–Thompson area is less than N times the area of the disk. For the hemisphere, a similar replacement can be found. In other words, the filling area conjecture is false if Finsler 2-chains with rational coefficients are allowed as fillings, instead of orientable surfaces (which can be considered as 2-chains with integer coefficients).
Riemannian fillings of genus one and hyperellipticity
An orientable Riemannian surface of genus one that isometrically fills the circle cannot have less area than the hemisphere. The proof in this case again starts by gluing antipodal points of the boundary. The non-orientable closed surface obtained in this way has an orientable double cover of genus two, and is therefore hyperelliptic. The proof then exploits a formula by J. Hersch from integral geometry. Namely, consider the family of figure-8 loops on a football, with the self-intersection point at the equator. Hersch's formula expresses the area of a metric in the conformal class of the football, as an average of the energies of the figure-8 loops from the family. An application of Hersch's formula to the hyperelliptic quotient of the Riemann surface proves the filling area conjecture in this case.
Almost flat manifolds are minimal fillings of their boundary distances
If a Riemannian manifold M (of any dimension) is almost flat (more precisely, M is a region of R^n with a Riemannian metric that is ε-near the standard Euclidean metric), then M is a volume minimizer: it cannot be replaced by an orientable Riemannian manifold that fills the same boundary and has less volume without reducing the distance between some boundary points. This implies that if a piece of sphere is sufficiently small (and therefore, nearly flat), then it is a volume minimizer. If this theorem can be extended to large regions (namely, to the whole hemisphere), then the filling area conjecture is true. It has been conjectured that all simple Riemannian manifolds (those that are convex at their boundary, and where every two points are joined by a unique geodesic) are volume minimizers.
The proof that each almost flat manifold M is a volume minimizer involves embedding M in L^∞(∂M), and then showing that any isometric replacement of M can also be mapped into the same space L^∞(∂M), and projected onto M, without increasing its volume. This implies that the replacement has no less volume than the original manifold M.
See also
Filling radius
Pu's inequality
Systolic geometry
References
Conjectures
Unsolved problems in geometry
Riemannian geometry
Differential geometry
Differential geometry of surfaces
Surfaces
Area
Systolic geometry | Filling area conjecture | [
"Physics",
"Mathematics"
] | 2,233 | [
"Scalar physical quantities",
"Unsolved problems in mathematics",
"Geometry problems",
"Physical quantities",
"Quantity",
"Unsolved problems in geometry",
"Size",
"Conjectures",
"Wikipedia categories named after physical quantities",
"Mathematical problems",
"Area"
] |
12,084,179 | https://en.wikipedia.org/wiki/Schotten%E2%80%93Baumann%20reaction | The Schotten–Baumann reaction is a method to synthesize amides from amines and acid chlorides:
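As an illustration (the original reaction scheme is not reproduced in this text), the overall transformation with a primary amine can be sketched as
R−COCl + H2N−R′ → R−CO−NH−R′ + HCl,
with the liberated HCl taken up by an added base, classically aqueous sodium hydroxide.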
Schotten–Baumann reaction also refers to the conversion of acid chloride to esters. The reaction was first described in 1883 by German chemists Carl Schotten and Eugen Baumann.
The name "Schotten–Baumann reaction conditions" often indicate the use of a two-phase solvent system, consisting of water and an organic solvent. The base within the water phase neutralizes the acid, generated in the reaction, while the starting materials and product remain in the organic phase, often dichloromethane or diethyl ether.
Applications
The Schotten–Baumann reaction or reaction conditions are widely used in organic chemistry.
Examples:
synthesis of N-vanillyl nonanamide, also known as synthetic capsaicin
synthesis of benzamide from benzoyl chloride and a phenethylamine
synthesis of flutamide, a nonsteroidal antiandrogen
acylation of a benzylamine with acetyl chloride (acetic anhydride is an alternative)
In the Fischer peptide synthesis (Emil Fischer, 1903), an α-chloro acid chloride is condensed with the ester of an amino acid. The ester is then hydrolyzed and the acid converted to the acid chloride, enabling the extension of the peptide chain by another unit. In a final step the chloride atom is replaced by an amino group, completing the peptide synthesis.
Further reading
See also
Lumière–Barbier method
References
Carbon-heteroatom bond forming reactions
Amide synthesis reactions
1883 in science
1883 in Germany
Name reactions | Schotten–Baumann reaction | [
"Chemistry"
] | 348 | [
"Organic reactions",
"Name reactions",
"Amide synthesis reactions",
"Carbon-heteroatom bond forming reactions",
"Condensation reactions"
] |
12,084,557 | https://en.wikipedia.org/wiki/Sustainable%20planting | Sustainable planting is an approach to planting design and landscaping-gardening.
Practical examples
When creating new roads or widening current roads, the Nevada Department of Transportation will reserve topsoil and native plants for donation.
The Grain for Green Program in China pays farmers to convert their retired farmland back into forests or other natural landscapes.
See also
Sustainable landscaping
Sustainable gardening
Sustainable landscape architecture
References
Sustainable gardening
Landscape architecture
Sustainable agriculture
Garden plants | Sustainable planting | [
"Engineering"
] | 82 | [
"Landscape architecture",
"Architecture"
] |
12,084,606 | https://en.wikipedia.org/wiki/Yalda%20Night | Yaldā Night, or Chelle Night (also Chellah Night), is an ancient festival in Iran, Afghanistan, Azerbaijan, Uzbekistan, Tajikistan, and Turkmenistan that is celebrated on the winter solstice. This corresponds to the night of December 20/21 (±1) in the Gregorian calendar, and to the night between the last day of the ninth month (Azar) and the first day of the tenth month (Dey) of the Iranian solar calendar. The longest and darkest night of the year is a time when friends and family gather together to eat, drink and read poetry (especially Hafez) and Shahnameh until well after midnight. Fruits and nuts are eaten and pomegranates and watermelons are particularly significant. The red colour in these fruits symbolizes the crimson hues of dawn and the glow of life. The poems of Divan-e Hafez, which can be found in the bookcases of most Iranian families, are read or recited on various occasions such as this festival and Nowruz. Shab-e Yalda was officially added to UNESCO's Intangible Cultural Heritage Lists in December 2022.
Names
The longest and darkest night of the year marks "the night opening the initial forty-day period of the three-month winter", from which the name Chelleh, "fortieth", derives. There are altogether three 40-day periods, one in summer, and two in winter. The two winter periods are known as the "great Chelleh" period (40 full days), followed/overlapped by the "small Chelleh" period (20 days + 20 nights = 40 nights and days). Shab-e Chelleh is the night opening the "great Chelleh" period, that is, the night between the last day of autumn and the first day of winter. The other name of the festival, 'Yaldā', is ultimately a borrowing from Syriac-speaking Christians. According to Dehkhoda, "Yalda is a Syriac word meaning birthday, and because people have adapted Yalda night with the nativity of Messiah, it's called the name; however, the celebration of Christmas (Noël) established on December 25, is set as the birthday of Jesus. Yalda is the beginning of winter and the last night of autumn, and it is the longest night of the year". In the first century, significant numbers of Eastern Christians were settled in Parthian and Sasanian territories, where they had received protection from religious persecution. Through them, Iranians (i.e. Parthians, Persians etc.) came in contact with Christian religious observances, including, it seems, Nestorian Christian Yalda, which in Syriac (a Middle Aramaic dialect) literally means "birth" but in a religious context was also the Syriac Christian proper name for Christmas, and which—because it fell nine months after Annunciation—was celebrated on the eve of the winter solstice. The Christian festival's name passed to the non-Christian neighbors and although it is not clear when and where the Syriac term was borrowed into Persian, gradually 'Shab-e Yalda' and 'Shab-e Chelleh' became synonymous and the two are used interchangeably.
History
Yalda Night was one of the holy nights in ancient Iran and included in the official calendar of the Iranian Achaemenid Empire from at least 502 BCE under Darius I. Many of its modern festivities and customs remain unchanged from this period.
Ancient peoples such as the Aryans and Indo-Europeans were well attuned to natural phenomena such as the changing of seasons, as their daily activities were dictated by the availability of sunlight, while their crops were impacted by climate and weather. They found that the shortest days are the last days of autumn and the first night of winter, and that immediately after, the days gradually become longer and the nights shorter. As such, the winter solstice, as the longest night, was called "The night of sun’s birth (Mehr)" and considered to mark the beginning of the year.
The Iranian calendar
The Iranian (Persian) calendar was founded and framed by Hakim Omar Khayyam.
The history of Persian calendars initially points back to the time when the region of modern-day Persia celebrated their new years according to the Zoroastrian calendar. As Zoroastrianism was then the main religion in the region, their years consisted of "Exactly 365 days, distributed among twelve months of 30 days each plus five special month-less days, known popularly as the ‘stolen ones’, or, in religious parlance, as the ‘five Gatha Days'".
Before the creation of the Solar Hijri calendar, the Jalali calendar was put in place through the order of Sulṭān Jalāl al-Dīn Malikshāh-i Saljūqī in the 5th c. A.H. According to the Biographical Encyclopedia of Astronomers, “After the death of Yazdigird III (the last king of the Sassanid dynasty), the Yazdigirdī Calendar, as a solar one, gradually lost its position, and the Hijrī Calendar replaced it”.
Yalda Night is celebrated on the winter solstice, the longest and darkest night of the year.
Customs and traditions
In Zoroastrian tradition the longest and darkest night of the year was a particularly inauspicious day, and the practices of what is now known as "Shab-e Chelleh/Yalda" were originally customs intended to protect people from evil (see dews) during that long night, at which time the evil forces of Ahriman were imagined to be at their peak. People were advised to stay awake most of the night, lest misfortune should befall them, and people would then gather in the safety of groups of friends and relatives, share the last remaining fruits from the summer, and find ways to pass the long night together in good company. The next day (i.e. the first day of Dae month) was then a day of celebration, and (at least in the 10th century, as recorded by Al-Biruni), the festival of the first day of Dae month was known as Ḵorram-ruz (joyful day) or Navad-ruz (ninety days [left to Nowruz]). Although the religious significance of the long dark night has been lost, the old traditions of staying up late in the company of friends and family have been retained in Iranian culture to the present day.
References to other older festivals held around the winter solstice are known from both Middle Persian texts as well as texts of the early Islamic period. In the 10th century, Al-Biruni mentions the mid-year festival (Maidyarem Gahanbar) that ran from . This festival is generally assumed to have been originally on the winter solstice, and which gradually shifted through the introduction of intercalation. Al-Biruni also records an Adar Jashan festival of fire celebrated on the intersection of Adar day (9th) of Adar month (9th), which is the last autumn month. This was probably the same as the fire festival called Shahrevaragan (Shahrivar day of Shahrivar month), which marked the beginning of winter in Tokarestan. In 1979, journalist Hashem Razi theorized that Mehregan the day-name festival of Mithra that in pre-Islamic times was celebrated on the autumn equinox and is today still celebrated in the autumn had in early Islamic times shifted to the winter solstice. Razi based his hypothesis on the fact that some of the poetry of the early Islamic era refers to Mihragan in connection with snow and cold. Razi's theory has a significant following on the Internet, but while Razi's documentation has been academically accepted, his adduction has not. Sufism's Chella, which is a 40-day period of retreat and fasting, is also unrelated to winter solstice festival.
Food plays a central role in the present-day form of the celebrations. In most parts of Iran the extended family come together and enjoy a fine dinner. A wide variety of fruits and sweetmeats specifically prepared or kept for this night are served. Foods common to the celebration include watermelon, pomegranate, nuts, and dried fruit. These items and more are commonly placed on a korsi, which people sit around. In some areas it is custom that forty varieties of edibles should be served during the ceremony of the night of Chelleh.
Light-hearted superstitions run high on the night of Chelleh. These superstitions, however, are primarily associated with consumption. For instance, it is believed that consuming watermelons on the night of Chelleh will ensure the health and well-being of the individual during the months of summer by protecting him from falling victim to excessive heat or disease produced by hot humors. In Khorasan, there is a belief that whoever eats carrots, pears, pomegranates, and green olives will be protected against the harmful bite of insects, especially scorpions. Eating garlic on this night protects one against pains in the joints.
In Khorasan, one attractive custom was, and still is, the preparation of Kafbikh, a traditional Iranian sweet made in Khorasan, especially in the cities of Gonabad and Birjand. It is made for Yalda.
After dinner the older individuals entertain the others by telling them tales and anecdotes. Another favorite and prevalent pastime of the night of Chelleh is fāl-e Ḥāfeẓ, which is divination using the Dīvān of Hafez (i.e. bibliomancy). It is believed that one should not divine by the Dīvān of Hafez more than three times, however, or the poet may get angry.
Activities common to the festival include staying up past midnight, conversation, drinking, reading poems out loud, telling stories and jokes, and, for some, dancing. Prior to the invention and prevalence of electricity, decorating and lighting the house and yard with candles was also part of the tradition, but few have continued this tradition. Another tradition is giving dried fruits and nuts and gift to family and friends specially to the bride, wrapped in tulle and tied with ribbon (similar to wedding and shower "party favors") in khorasan giving gift to the bride was obligatory.
Gallery
See also
Dongzhi Festival
Hanukkah
Mehregan
Nowruz
Tirgan
Footnotes
References
Group 1
Group 2
External links
Article about Yalda night on Irpersiatour website
Festivals in Iran
December observances
Persian culture
Observances set by the Solar Hijri calendar
Winter events in Iran
Winter solstice
Intangible Cultural Heritage of Iran | Yalda Night | [
"Astronomy"
] | 2,245 | [
"Astronomical events",
"Winter solstice"
] |
12,085,248 | https://en.wikipedia.org/wiki/Pumping%20%28oil%20well%29 | In the context of oil wells, pumping is a routine operation involving injecting fluids into the well. Pumping may either be done by rigging up to the kill wing valve on the Xmas tree or, if an intervention rig-up is present, pumping into the riser through a T-piece (a small section of riser with a connection on the side). Pumping is most routinely done to protect the well against scale and hydrates through the pumping of scale inhibitors and methanol. Pumping of kill weight brine may be done for the purposes of well kills, and more exotic chemicals may be pumped from surface for cleaning the lower completion or stimulating the reservoir (though these types of jobs are more frequently done with coiled tubing for extra precision).
Importance of knowing quantity
Work involving wells is fraught with difficulties as there is often very little information about the real time condition of the completion. This lack of knowledge also covers potential damage and even loss of well integrity. Therefore, it is essential for the operator to pay attention to the pressures as recorded and to the quantity pumped. A premature increase in pressure is sign of a potential blockage and continuing to pump risks burst pressure retaining components. Pumping more than an anticipated amount of fluid is a sign of a loss of integrity and a potential leak path somewhere. In either of these two situations, pumping must be stopped and the potential causes analysed.
Compressed volumes
It is vital to know the effective capacity of the completion being filled in order to understand what are sensible volumes. If pumping is to continue until reaching a desired pressurisation, then the compressibility of the fluid will become significant. It is therefore important to know how much the fluid will compress under pressure to know how much extra fluid is expected to be required.
As a rule of thumb in the oilfield, compression is governed by the equation:
ΔV = k · P · V
where ΔV is the change in volume, P is the pressure at surface and V is the volume of fluid unpressurised. k is a compression factor, approximately 3.5×10−6 psi−1.
For example, a volume of 300 bbl is to be filled with brine and pressurised to 3000 psi at the surface. The compression is
ΔV = 3.5×10−6 psi−1 × 3000 psi × 300 bbl ≈ 3.15 bbl.
Therefore, it is expected that 303.15 bbl are required to accomplish this task. If 3000 psi is achieved prior to this quantity being pumped, a blockage is to be suspected. If after pumping 303 bbl, pressurisation is not achieved, a leak is to be suspected.
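A minimal sketch of this bookkeeping is given below; the compressibility factor is the rule-of-thumb value quoted above, while the function names and the tolerance used to flag anomalies are illustrative choices rather than any industry standard.

```python
# Minimal sketch of the pumped-volume bookkeeping described above.
# K_COMPRESSIBILITY is the rule-of-thumb factor quoted in the text;
# the anomaly tolerance is an illustrative choice.
K_COMPRESSIBILITY = 3.5e-6  # per psi

def required_volume_bbl(unpressurised_bbl: float, surface_pressure_psi: float) -> float:
    """Total barrels expected in order to reach the target surface pressure."""
    delta_v = K_COMPRESSIBILITY * surface_pressure_psi * unpressurised_bbl
    return unpressurised_bbl + delta_v

def assess_job(pumped_bbl: float, unpressurised_bbl: float,
               surface_pressure_psi: float, tolerance_bbl: float = 1.0) -> str:
    """Interpret the volume pumped at the point the job stopped
    (either because target pressure was reached or pumping was halted)."""
    expected = required_volume_bbl(unpressurised_bbl, surface_pressure_psi)
    if pumped_bbl < expected - tolerance_bbl:
        return "target pressure reached after too little fluid: suspect a blockage"
    if pumped_bbl > expected + tolerance_bbl:
        return "expected volume exceeded without reaching pressure: suspect a leak"
    return "within expectation"

# Worked example from the text: 300 bbl pressurised to 3,000 psi.
print(required_volume_bbl(300, 3000))   # 303.15
print(assess_job(310, 300, 3000))       # suspect a leak
```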
Oil wells | Pumping (oil well) | [
"Chemistry"
] | 509 | [
"Petroleum technology",
"Oil wells"
] |
12,085,294 | https://en.wikipedia.org/wiki/Climate%20Change%20and%20Emissions%20Management%20Amendment%20Act |
The Climate Change and Emissions Management Amendment Act of Alberta was the first law of its type to impose greenhouse gas cuts on large industrial facilities.
Starting from July 1, 2007, Alberta facilities that emit more than 100,000 tonnes of greenhouse gases per year will be required to reduce their emissions intensity by 12% under the Climate Change and Emissions Management Amendment Act.
Companies have three ways to meet their reductions: they can make operating improvements, buy an Alberta-based credit, or contribute to the Climate Change and Emissions Management Fund.
The regulations apply to about 100 large facilities which emit more than 100,000 tonnes of greenhouse gases a year. Those facilities account for about 70% of Alberta's industrial greenhouse gas emissions.
The annual cost of compliance is estimated to be $177 million - or less than one-tenth of one per cent of Alberta's nominal GDP ($242 billion in 2006).
Alberta-based credits
A facility can purchase credits from large emitters that have reduced their emissions intensity beyond their 12 per cent target. They can also purchase credits from facilities whose emissions are below the 100,000-tonne threshold but are voluntarily reducing their emissions.
The projects must have legitimate greenhouse gas reductions in the province.
Climate Change and Emissions Management Fund
A third option would be for companies to pay $15 for every tonne over their reduction target. The money will be put into the Climate Change and Emissions Management Fund, which will be directed to strategic projects or transformative technology aimed at reducing greenhouse gas emissions in the province.
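As an illustration of this option (using entirely hypothetical facility figures, since the act expresses obligations in terms of emissions-intensity targets rather than any single formula), the payment could be estimated along these lines:

```python
# Illustrative calculation of the fund-payment compliance option described
# above; the facility's figures here are made up for demonstration only.
PRICE_PER_TONNE = 15.0   # dollars per tonne over target
INTENSITY_CUT = 0.12     # required 12% emissions-intensity reduction

def fund_payment(baseline_intensity: float, actual_intensity: float,
                 production_units: float) -> float:
    """Payment if the whole shortfall is covered through the fund.

    baseline_intensity -- tonnes of CO2e per unit of production (baseline)
    actual_intensity   -- tonnes of CO2e per unit achieved this year
    production_units   -- units of production this year
    """
    target_intensity = baseline_intensity * (1 - INTENSITY_CUT)
    excess_per_unit = max(0.0, actual_intensity - target_intensity)
    excess_tonnes = excess_per_unit * production_units
    return excess_tonnes * PRICE_PER_TONNE

# Hypothetical facility: baseline 0.50 t/unit, achieved 0.47 t/unit, 1,000,000 units.
# Target is 0.44 t/unit, so 30,000 tonnes over target -> $450,000 to the fund.
print(fund_payment(0.50, 0.47, 1_000_000))
```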
According to the Climate Change and Emissions Management Amendment Act, funds may be used only for purposes related to reducing emissions of specified gases or improving Alberta's ability to adapt to climate change; including without limitation, the following purposes:
energy conservation and energy efficiency;
demonstration and use of new technologies that emphasize reductions in specified gas emissions in the discovery, recovery, processing, transportation and use of Alberta's energy resources;
demonstration and use of new technologies that emphasize reductions in specified gas emissions through the use of alternative energy and renewable energy sources;
demonstration and use of specified gas capture, use and storage technology;
development of opportunities for removal of specified gases from the atmosphere through sequestration by sinks;
measurement of the natural removal and storage of carbon;
climate change adaptation programs and measures;
paying salaries, fees, expenses, liabilities or other costs incurred by a delegated authority in carrying out a duty or function of or exercising a power of the Minister in respect of the Fund that has been delegated to the delegated authority, if authorized by the regulations.
References
External links
Climate Change and Emissions Management Act
Alberta Government, Environment, Climate Change
2007 in Alberta
2007 in the environment
2007 in Canadian law
Alberta provincial legislation
Climate change in Canada
Climate change law
Environmental law in Canada
Emissions trading
Carbon finance
Emissions reduction
Environmental tax
Taxation in Canada
Environment of Alberta
Carbon pricing in Canada | Climate Change and Emissions Management Amendment Act | [
"Chemistry"
] | 576 | [
"Greenhouse gases",
"Emissions reduction"
] |
12,085,484 | https://en.wikipedia.org/wiki/Slepian%27s%20lemma | In probability theory, Slepian's lemma (1962), named after David Slepian, is a Gaussian comparison inequality. It states that for centered Gaussian random vectors X = (X_1, ..., X_n) and Y = (Y_1, ..., Y_n) in R^n satisfying E[X_i^2] = E[Y_i^2] for all i and E[X_i X_j] ≤ E[Y_i Y_j] for all i, j,
the following inequality holds for all real numbers u_1, ..., u_n:
P(X_1 ≤ u_1, ..., X_n ≤ u_n) ≤ P(Y_1 ≤ u_1, ..., Y_n ≤ u_n),
or equivalently,
P(X_1 > u_1 or ... or X_n > u_n) ≥ P(Y_1 > u_1 or ... or Y_n > u_n).
While this intuitive-seeming result is true for Gaussian processes, it is not in general true for other random variables—not even those with expectation 0.
As a corollary, if (X_t)_{t ≥ 0} is a centered stationary Gaussian process such that E[X_0 X_t] ≥ 0 for all t, it holds for any real number c and any s, u ≥ 0 that
P(sup_{t ≤ s+u} X_t ≤ c) ≥ P(sup_{t ≤ s} X_t ≤ c) · P(sup_{t ≤ u} X_t ≤ c).
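As a quick illustration (not part of the original statement), the inequality can be checked numerically for a pair of bivariate Gaussian vectors; the correlations, thresholds and sample size below are arbitrary choices.

```python
# Minimal Monte Carlo sketch of Slepian's comparison inequality for two
# centered bivariate Gaussian vectors with equal variances but different
# correlations (rho_X <= rho_Y). All numbers here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 200_000
u = np.array([0.5, 0.5])  # thresholds u_1, u_2

def joint_below(rho: float, size: int) -> float:
    """Estimate P(X_1 <= u_1, X_2 <= u_2) for a centered Gaussian pair
    with unit variances and correlation rho."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    x = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=size)
    return float(np.mean(np.all(x <= u, axis=1)))

p_x = joint_below(rho=0.1, size=n_samples)  # less correlated pair (X)
p_y = joint_below(rho=0.8, size=n_samples)  # more correlated pair (Y)

# Slepian's lemma predicts p_x <= p_y (up to Monte Carlo error).
print(f"P(X_1<=u_1, X_2<=u_2) ~ {p_x:.4f}")
print(f"P(Y_1<=u_1, Y_2<=u_2) ~ {p_y:.4f}")
```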
History
Slepian's lemma was first proven by Slepian in 1962, and has since been used in reliability theory, extreme value theory and areas of pure probability. It has also been re-proven in several different forms.
References
Slepian, D. "The One-Sided Barrier Problem for Gaussian Noise", Bell System Technical Journal (1962), pp 463–501.
Huffer, F. "Slepian's inequality via the central limit theorem", Canadian Journal of Statistics (1986), pp 367–370.
Ledoux, M., Talagrand, M. "Probability in Banach Spaces", Springer Verlag, Berlin 1991, pp 75.
Lemmas | Slepian's lemma | [
"Mathematics"
] | 265 | [
"Mathematical theorems",
"Mathematical problems",
"Lemmas"
] |
12,086,046 | https://en.wikipedia.org/wiki/The%20Filter | The Filter is a company that develops TV personalisation and recommendation products for broadcasters and video services, together with the data science behind them. Founded in 2004, it has ties to musician Peter Gabriel and is based in Bath, UK. In March 2022, The Filter was acquired by the Amsterdam-headquartered end-to-end video streaming provider, 24i.
History
The idea behind The Filter was devised by musician Peter Gabriel and software entrepreneur Martin Hopkins. Gabriel foresaw that the growth of digital technologies would lead to such large volume of content becoming available that users would need filters to help them find what was relevant to them. In 2004 he was introduced to Hopkins, who had written a piece of software to manage his extensive music collection. The software learned tastes and preferences and utilised artificial intelligence to generate playlists and recommendations. With investment from the founders and from venture capital firm Eden Ventures, they launched Exabre in 2004, and promoted The Filter as a site providing music and movie recommendations directly to consumers.
Although the venture was successful, reaching an average of 800,000 unique visitors per month, in 2009 The Filter modified its business model to licensing the recommendation engine to other businesses. To date, this strategy has proved successful, and the company has secured large contracts, particularly in the US.
Executives and Board of Directors
Peter Gabriel has been involved in various media, music and technology businesses since 1987, when he founded the Real World Group, comprising Real World Studios, Real World Records, and later Real World Multi Media and Real World Films. In 2000, he was co-founder and board member of OD2 (On Demand Distribution), which became the leading European platform provider for the distribution of online music (acquired in 2004 by Loudeye of Seattle, Washington). In 2005, he acquired Solid State Logic with David Engelke. Gabriel remains an advisor and investor at The Filter.
Clients
Since 2009, The Filter has secured contracts tailoring its relevance platform for a number of digital content providers such as Nokia, Dailymotion, BT TV, NBC.com, Warner Brothers, Vudu, we7 and Sony Music. In 2014 The Filter began offering its personalisation services to online retailers, securing its first contracts with Maplin Electronics and Liberty of London.
Awards
In January 2015, AUPEO Personal Radio was named as a CES Innovation Awards Honoree. The service uses The Filter's technology to provide feed and metadata aggregation as well as radio station personalisation.
In May 2011 Music Week Magazine nominated the Nokia Gig Finder app (developed by Ovi) as a finalist for its Mobile Music App of the year award. This app utilises The Filter's technology to learn music tastes and recommend the best and most relevant live events happening near the user. In 2009 The Filter was selected by the UKTI (UK Trade & Investment) for its Digital Mission to SXSW in Austin, Texas.
The Filter was a recipient of the Red Herring 100 Europe 2008 - awarded to the best European tech start-ups, and were also selected to partake in Webmission08. Webmission is a UK initiative backed by Techcrunch, Bebo, Sun Microsystems and Oracle Corporation (among others) that aims to bring the 20 most innovative tech companies that are "ready to do business in the US or potentially attract a US investor." The Filter was also chosen as one of six finalists from over 600 entries in the Popkomm Innovation in Music Awards in October 2008.
References
External links
The Filter official website
British entertainment websites
Companies based in Bath, Somerset
Software companies established in 2006
Recommender systems
2006 establishments in England
British companies established in 2006
2022 mergers and acquisitions | The Filter | [
"Technology"
] | 749 | [
"Information systems",
"Recommender systems"
] |
12,086,079 | https://en.wikipedia.org/wiki/F3%20%28font%20format%29 | F3 is an outline font format created by Folio, Inc. Sun Microsystems acquired Folio in 1988, and included 57 F3 fonts and the F3 interpreter, TypeScaler, in its OpenWindows desktop environment. The font format allowed for hinting. The extension of F3 Font Format outlines is .f3b.
References
External links
Font formats
Sun Microsystems software | F3 (font format) | [
"Technology"
] | 81 | [
"Computing stubs",
"Digital typography stubs"
] |
12,086,235 | https://en.wikipedia.org/wiki/Instituto%20de%20Medicina%20Molecular | The Instituto de Medicina Molecular João Lobo Antunes (Institute of Molecular Medicine), or iMM for short, is an associated research institution of the University of Lisbon, in Lisbon, Portugal.
IMM is devoted to human genome research with the aim of contributing to a better understanding of disease mechanisms, developing novel predictive tests, improving diagnostics tools, and developing new therapeutic approaches.
History
IMM was created in November 2001, as a result from the association of 5 research centres from the University of Lisbon Medical School: the Biology and Molecular Pathology Centre (CEBIP), the Lisbon Neurosciences Centre (CNL), the Microcirculation and Vascular Pathobiology Centre (CMBV), the Gastroenterology Centre (CG), and the Nutrition and Metabolism Centre (CNB).
In 2003, the Molecular Pathobiology Research Centre (CIPM) of the Portuguese Institute of Oncology Francisco Gentil (IPOFG) became an associate member of IMM.
Historically, IMM benefited from the full integration of academic researchers into the Lisbon Medical School who initiated their academic training and scientific careers at Instituto Gulbenkian de Ciência (IGC), in Oeiras, one of the first national institutions to introduce and make use of state-of-the-art cell and molecular biology techniques.
The IMM is now known as Instituto de Medicina Molecular João Lobo Antunes, to honour one of its founders and president (2001-2014), Professor João Lobo Antunes. Maria do Carmo-Fonseca is the current president of IMM, having served before as IMM Executive Director since its creation. The current executive director is the malaria researcher Maria Mota.
References
External links
Official site
Members
Medical research institutes in Portugal
Biotechnology organizations
University of Lisbon
2001 establishments in Portugal
Organizations established in 2001 | Instituto de Medicina Molecular | [
"Engineering",
"Biology"
] | 378 | [
"Biotechnology organizations"
] |
12,086,708 | https://en.wikipedia.org/wiki/Bebugging | Bebugging (or fault seeding or error seeding) is a popular software engineering technique used in the 1970s to measure test coverage. Known bugs are randomly added to a program source code and the software tester is tasked to find them. The percentage of the known bugs not found gives an indication of the real bugs that remain.
The term "bebugging" was first mentioned in The Psychology of Computer Programming (1970), where Gerald M. Weinberg described the use of the method as a way of training, motivating, and evaluating programmers, not as a measure of faults remaining in a program. The approach was borrowed from the SAGE system, where it was used to keep operators watching radar screens alert. Here's a quote from the original use of the term:
An early application of bebugging was Harlan Mills's fault seeding approach which was later refined by stratified fault-seeding. These techniques worked by adding a number of known faults to a software system for the purpose of monitoring the rate of detection and removal. This assumed that it is possible to estimate the number of remaining faults in a software system still to be detected by a particular test methodology.
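A minimal sketch of the kind of estimate that fault seeding enables is shown below; it assumes seeded and real faults are equally likely to be detected, and the numbers are purely illustrative.

```python
# Minimal sketch of the fault-seeding (capture-recapture style) estimate,
# assuming seeded and indigenous faults are equally likely to be detected.
def estimate_remaining_faults(seeded: int, seeded_found: int, real_found: int) -> float:
    """Estimate how many real (indigenous) faults remain undetected.

    seeded       -- number of known faults deliberately inserted
    seeded_found -- how many of the seeded faults the testers detected
    real_found   -- how many genuine (non-seeded) faults the testers detected
    """
    if seeded_found == 0:
        raise ValueError("No seeded faults found; detection rate is unknown.")
    detection_rate = seeded_found / seeded        # proxy for test effectiveness
    estimated_total_real = real_found / detection_rate
    return estimated_total_real - real_found      # estimated faults still undetected

# Example: 50 faults seeded, 40 of them found, plus 120 real faults found.
# Detection rate ~80%, so roughly 150 real faults in total, ~30 still hiding.
print(estimate_remaining_faults(seeded=50, seeded_found=40, real_found=120))
```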
Bebugging is a type of fault injection.
See also
Fault injection
Mutation testing
References
Software testing | Bebugging | [
"Engineering"
] | 261 | [
"Software engineering",
"Software testing"
] |
12,086,946 | https://en.wikipedia.org/wiki/Water%20metering | Water metering is the practice of measuring water use. Water meters measure the volume of water used by residential and commercial building units that are supplied with water by a public water supply system. They are also used to determine flow through a particular portion of the system.
In most of the world water meters are calibrated in cubic metres (m3) or litres, but in the United States and some other countries water meters are calibrated in cubic feet (ft3) or US gallons on a mechanical or electronic register. Modern meters typically can display rate-of-flow in addition to total volume.
Several types of water meters are in common use, and may be characterized by the flow measurement method, the type of end-user, the required flow rates, and accuracy requirements.
Water metering is changing rapidly with the advent of smart metering technology and various innovations.
In North America, standards for manufacturing water meters are set by the American Water Works Association. Outside of North America, most countries use ISO standards.
Types of metering technologies
There are two common approaches to flow measurement: displacement and velocity, each making use of a variety of technologies. Common displacement designs include oscillating piston and nutating disc meters. Velocity-based designs include single- and multi-jet meters and turbine meters.
There are also non-mechanical designs, for example, electromagnetic and ultrasonic meters, and meters designed for special uses. Most meters in a typical water distribution system are designed to measure cold potable water only. Specialty hot water meters are designed with materials that can withstand higher temperatures. Meters for reclaimed water have special lavender register covers to signify that the water should not be used for drinking.
Additionally, there are electromechanical meters, like prepaid water meters and automatic meter reading meters. The latter integrate an electronic measurement component and an LCD with a mechanical water meter. Mechanical water meters normally use a reed switch, Hall-effect sensor, or photoelectric coding register as the signal output. After processing by the microcontroller unit (MCU) in the electronic module, the data are transmitted to the LCD or output to an information management system.
Water meters are generally owned, read and maintained by a public water provider such as a city, rural water association or private water company. In some cases an owner of a mobile home park, apartment complex or commercial building may be billed by a utility based on the reading of one meter, with the costs shared among the tenants based on some sort of key (size of flat, number of inhabitants or by separately tracking the water consumption of each unit in what is called submetering).
Displacement water meters
Displacement meters are commonly referred to as Positive Displacement, or "PD" meters. Two common types are oscillating piston meters and nutating disk meters. Either method relies on the water to physically displace the moving measuring element in direct proportion to the amount of water that passes through the meter. The piston or disk moves a magnet that drives the register.
PD meters are generally very accurate at the low-to-moderate flow rates typical of residential and small commercial users and commonly range in size from 5/8" to 2". Because displacement meters require that all water flows through the meter to "push" the measuring element, they generally are not practical in large commercial applications requiring high flow rates or low-pressure loss. PD meters normally have a built-in strainer to protect the measuring element from rocks or other debris that could stop or break the measuring element. PD meters normally have bronze, brass or plastic bodies with internal measuring chambers made of moulded plastics and stainless steel.
Velocity water meters
A velocity-type meter measures the velocity of flow through a meter of known internal capacity. The speed of the flow can then be converted into a volume of flow to determine the usage. There are several types of meters that measure water flow velocity, including jet meters (single-jet and multi-jet), turbine meters, propeller meters and mag meters. Most velocity-based meters have an adjustment vane for calibrating the meter to the required accuracy.
Multi-jet meters
Multi-jet meters are very accurate in small sizes and are commonly used in to sizes for residential and small commercial users. Multi-jet meters use multiple ports surrounding an internal chamber to create multiple jets of water against a turbine, whose rotation speed depends on the velocity of water flow. Multi-jets are very accurate at low flow rates, but there are no large size meters since they do not have the straight-through flow path needed for the high flow rates used in large pipe diameters. Multi-jet meters generally have an internal strainer element that can protect the jet ports from getting clogged. Multi-jet meters normally have bronze alloy bodies or outer casings, with internal measuring parts made from modern thermoplastics and stainless steel.
Turbine meters
Turbine meters are less accurate than displacement and jet meters at low flow rates, but the measuring element does not occupy or severely restrict the entire path of flow. The flow direction is generally straight through the meter, allowing for higher flow rates and less pressure loss than displacement-type meters. They are the meter of choice for large commercial users, fire protection and as master meters for the water distribution system. Strainers are generally required to be installed in front of the meter to protect the measuring element from gravel or other debris that could enter the water distribution system. Turbine meters are generally available for to or higher pipe sizes. Turbine meter bodies are commonly made of bronze, cast iron or ductile iron. Internal turbine elements can be plastic or non-corrosive metal alloys. They are accurate in normal working conditions but are greatly affected by the flow profile and fluid conditions.
Fire meters are a specialized type of turbine meter meeting the high flow rates requirements for fire protection. They are often approved by Underwriters Laboratories (UL) or Factory Mutual (FM) for use in fire protection.
Fire hydrant meters are a specialized type of portable turbine meter attached to a fire hydrant to measure water flowing out of the hydrant. The meters are normally made of aluminium to keep their weight low. Utilities often require them for measuring water used on construction sites, for pool filling, or where a permanent meter has not yet been installed.
Compound meters
A compound meter is used where high flow rates are necessary, but where at times there are also smaller rates of flow that need to be accurately measured. Compound meters have two measuring elements and a check valve to regulate flow between them. At high flow rates, water is normally diverted primarily or completely to the high flow element. The high flow element is typically a turbine meter. When flow rates drop to where the high flow element cannot measure accurately, a check valve closes to divert water to a smaller element that can measure the lower flow rates accurately. The low flow element is typically a multi-jet or PD meter. By adding the values registered by the high and low elements, the utility has a record of the total consumption of water flowing through the meter.
Electromagnetic meters
Magnetic flow meters, commonly referred to as "mag meters", are technically a velocity-type water meter, except that they use electromagnetic properties to determine the water flow velocity, rather than the mechanical means used by jet and turbine meters. Mag meters use the physics principle of Faraday's law of induction for measurement and require AC or DC electricity from a power line or battery to operate the electromagnets. Since mag meters have no mechanical measuring element, they normally have the advantage of being able to measure flow in either direction, and use electronics for measuring and totalizing the flow. Mag meters can also be useful for measuring raw (untreated/unfiltered) water and waste-water since there is no mechanical measuring element to get clogged or damaged by debris flowing through the meter. Strainers are not required with mag meters since there is no measuring element in the stream of flow that could be damaged. Since stray electrical energy flowing through the flow tube can cause inaccurate readings, most mag meters are installed with either grounding rings or grounding electrodes to divert stray electricity away from the electrodes used to measure the flow inside the flow tube.
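As an illustrative sketch (an idealized textbook form, not any particular manufacturer's calibration), the electrode voltage induced according to Faraday's law is proportional to the mean flow velocity, from which the volumetric flow follows using the known tube bore:

\[ U = k\,B\,D\,\bar{v}, \qquad Q = \bar{v}\,\frac{\pi D^{2}}{4}, \]

where U is the induced voltage, B the magnetic flux density, D the electrode spacing (the flow tube's inner diameter), \bar{v} the mean velocity, k an instrument constant, and Q the volumetric flow rate.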
Ultrasonic meters
Ultrasonic water meters use one or more ultrasonic transducers to send ultrasonic sound waves through the fluid to determine the velocity of the water. Since the cross-sectional area of the meter body is a fixed and known value, once the velocity of water is detected, the volume of water passing through the meter can be calculated with very high accuracy. Because water density changes with temperature, most ultrasonic water meters also measure the water temperature as a component of the volume calculation.
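A minimal Python sketch of the volume calculation described above; the 50 mm bore, one-second sampling interval and the specific numbers are illustrative assumptions rather than values from any particular meter:

import math

def volumetric_flow(velocity_m_per_s, bore_diameter_m):
    # Fixed, known cross-sectional area of the meter body
    area = math.pi * (bore_diameter_m / 2) ** 2
    return velocity_m_per_s * area  # m^3/s

def totalize(velocities, bore_diameter_m=0.050, interval_s=1.0):
    # Integrate flow over each sampling interval to accumulate consumption
    return sum(volumetric_flow(v, bore_diameter_m) * interval_s for v in velocities)

# A steady 0.1 m/s in a 50 mm meter for one hour is roughly 0.707 m^3
print(round(totalize([0.1] * 3600), 3))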
There are two primary ultrasonic measurement technologies used in water metering:
Doppler effect meters, which use the Doppler effect to determine the velocity of water passing through the meter.
Transit-time meters, which measure the time required for the ultrasonic signal to pass between two or more fixed points inside the meter.
Ultrasonic meters may either be of flow-through or "clamp-on" design. Flow-through designs are those where the water passes directly through the meter, and are typically found in residential or commercial applications. Clamp-on designs are generally used for larger diameters where the sensors are mounted to the exterior of pipes, etc.
Ultrasonic water meters are highly accurate devices, with residential models capable of measuring flow rates as low as 1 liter per hour (L/h). They feature wide flow measurement ranges and, because there are no internal mechanical components to wear out, they require minimal maintenance and offer stable long-term operation over a long lifespan.
Although relatively new to some markets, including the American water utility sector, ultrasonic water meters have been well-established in Europe, Asia, and other regions. Their growing popularity is driven by the increasing demand for reliable, low-maintenance, and durable metering solutions suitable for diverse climates and water supply conditions.
Furthermore, the integration of smart meter technology with ultrasonic systems is accelerating their adoption worldwide, as utility providers seek more efficient and accurate data collection methods.
Coriolis water meters
A Coriolis water meter is a precision instrument used to measure the mass flow rate and density of fluids, including water, by utilizing the Coriolis effect. Unlike traditional mechanical meters with moving parts, Coriolis meters use oscillating tubes through which the fluid flows. As the fluid passes through the tubes, it induces a phase shift in the oscillation, which is detected by sensors and is directly proportional to the mass flow rate.
Additionally, the meter can determine the fluid's density by analyzing the natural frequency of the oscillating tubes. This dual measurement capability provides high accuracy and reliability, making Coriolis meters particularly suitable for industrial applications requiring precise flow measurements. Coriolis meters also have a wide dynamic range owing to the linear nature of the signal created while measuring flow.
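In the idealized relations usually quoted for Coriolis meters (a sketch with generic calibration constants, not a specific vendor's formula), the measured phase (time) shift between the pickoff sensors is proportional to the mass flow rate, and the density follows from the period of the tube's natural oscillation:

\[ \dot{m} = K_{m}\,\Delta t, \qquad \rho = C_{1}\,\tau^{2} + C_{2}, \]

where \Delta t is the phase shift, \tau the oscillation period, and K_m, C_1 and C_2 are calibration constants determined for the individual meter.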
However, their high cost often limits their use in residential or municipal water metering.
Water meter length and diameter
The dimensions of water meters, including tube length and diameter, are standardized to ensure compatibility with plumbing systems and compliance with regulatory frameworks. These dimensions are typically defined in terms of nominal pipe size (NPS) in the United States and nominal diameter (DN) in Europe, with corresponding measurements in inches and millimeters, respectively. Installation lengths are also standardized, differing between the United States and Europe to ensure interchangeability within regional plumbing systems.
Diameter
Water meter diameters are standardized based on expected flow rates and usage scenarios, with regional differences in specifications:
United States: Diameters are specified in nominal pipe size (NPS), measured in inches. Common sizes include:
Residential meters: 5/8 inch (commonly referred to as "5/8 x 3/4 inch"), 3/4 inch, and 1 inch.
Commercial/Industrial meters: Sizes range from 1½ inches to 12 inches or larger, depending on flow requirements and system design.
Europe: Diameters are defined in nominal diameter (DN), measured in millimeters. Common sizes include:
Residential meters: DN15 (15 mm), DN20 (20 mm), and DN25 (25 mm).
Commercial/Industrial meters: DN40 (40 mm) to DN300 (300 mm), with larger sizes available for high-capacity systems.
Length
The installation length (distance between the connection points) of a water meter varies between regions:
United States:
Residential meters: Standard lengths are 7½ inches, 9 inches, or 12 inches, as specified by the American Water Works Association (AWWA) Standard C700.
Europe:
Residential meters: Standard lengths include 110 mm, 165 mm, and 190 mm, conforming to ISO 4064 standards.
Commercial/Industrial meters: In both regions, larger meters often have lengths exceeding 300 mm (12 inches), with exact dimensions tailored to the application and local plumbing requirements.
Water meter index display
Registers
There are several types of registers on water meters. A standard register normally has a dial similar to a clock face, with gradations around the perimeter indicating the measuring unit and amounts smaller than the lowest digit of an odometer-style wheel display like that in a car; the wheels and the sweep hand together show the total volume used. Modern registers are normally driven by a magnetic coupling between a magnet in the measuring chamber attached to the measuring element and another attached to the bottom of the register. Gears in the register convert the motion of the measuring element to the proper usage increment for display on the sweep hand and the odometer-style wheels. Many registers also have a leak detector: a small visible disk or hand that is geared closer to the rotation speed of the drive magnet, so that very small flows that would be visually undetectable on the regular sweep hand can be seen.
With Automatic Meter Reading, manufacturers have developed pulse or encoder registers to produce electronic output for radio transmitters, reading storage devices, and data logging devices. Pulse meters send a digital or analog electronic pulse to a recording device. Encoder registers have an electronic means permitting an external device to interrogate the register to obtain either the position of the wheels or a stored electronic reading. Frequent transmissions of consumption data can be used to give smart meter functionality.
LCD
There are also some specialized types of registers such as meters with an LCD instead of mechanical wheels, and registers to output data or pulses to a variety of recording and controller devices. For industrial applications, the output is often 4-20 mA analog for recording or controlling different flow rates in addition to totalization.
Water meter reading
Different size meters indicate different resolutions of the reading. One rotation of the sweep hand may be equivalent to 10 gallons or to 1,000 gallons (1 to 100 ft³, 0.1 to 10 m³). If one rotation of the hand represents 10 gallons, the meter has a 10-gallon sweep. Sometimes the last number(s) of the wheel display are non-rotating or printed on the dial face. The fixed zero number(s) are represented by the position of the rotating sweep hand. For example, if one rotation of the hand is 10 gallons, the sweep hand is on 7, and the wheel display shows 123456 plus a fixed zero, the actual total usage would be 1,234,567 gallons.
In the United States most utilities bill only to the nearest 100 or 1,000 gallons (10 to 100 ft³, 1 to 10 m³), and often only read the leftmost 4 or 5 numbers on the display wheels. Using the above example, they would read and bill 1,234, rounding to 1,234,000 gallons based on a 1,000-gallon billing resolution. The most common rounding for a particular size meter is often indicated by differently coloured number wheels, the ones ignored being black, and the ones used for billing being white.
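The worked example above can be expressed as a short calculation; this is a sketch of the arithmetic only, with the 10-gallon sweep, single fixed zero and 1,000-gallon billing resolution taken from the example rather than from any standard:

def register_reading(wheels, fixed_zeros, sweep_hand, sweep_size=10):
    # Each sweep-hand gradation is one tenth of a full sweep (1 gallon on a 10-gallon sweep)
    per_gradation = sweep_size // 10
    return wheels * 10 ** fixed_zeros + sweep_hand * per_gradation

total = register_reading(wheels=123456, fixed_zeros=1, sweep_hand=7)  # 1,234,567 gallons
billed = (total // 1000) * 1000                                       # 1,234,000 gallons at 1,000-gallon resolution
print(total, billed)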
Water meter smart metering technologies and usage
Smart metering technologies for water meters refer to advanced systems that enable real-time monitoring, data collection, and analysis of water usage through digital and connected devices. Unlike traditional mechanical water meters, smart meters are equipped with electronic components that measure water flow and transmit the data wirelessly to utilities and consumers. Key technologies include Automated Meter Reading (AMR), which provides one-way communication to collect usage data, and Advanced Metering Infrastructure (AMI), which supports two-way communication for enhanced features such as remote monitoring, leak detection, and dynamic billing. Smart water meters are integrated with Internet of Things (IoT) platforms, allowing for more efficient water management, reduced waste, and improved customer engagement.
RF technologies and protocols
Radio Frequency (RF) technologies form the backbone of smart metering systems by enabling wireless communication between meters and utility networks. Several RF technologies and protocols are widely used in smart water metering:
Wireless M-Bus (WMBus): WMBus, compliant with the European EN 13757 standard, is widely adopted across Europe for water, gas, and electricity metering. It offers secure, reliable, and energy-efficient communication tailored for utility applications. Data collected by this means is sent to the network using a WMBus gateway (see below).
Wize technology: A protocol based on the 169 MHz frequency band, Wize is designed for long-range, low-power communication. It is commonly used in Europe for water and gas metering due to its excellent signal penetration and scalability.
LoRaWAN: LoRaWAN is valued for its long-range and low-power capabilities, making it suitable for large-scale deployments in both rural and urban settings. It is widely used in industrial and municipal applications.
Zigbee: Known for its ability to create mesh networks, Zigbee is often used in urban environments where dense connectivity is required. It is energy-efficient and supports secure communication.
NB-IoT and Cat-M: Narrowband Internet of Things (NB-IoT) and LTE Cat-M are cellular-based technologies that enable direct communication with cellular networks. These protocols are particularly suitable for large-scale deployments in areas with existing cellular infrastructure, offering extended battery life and robust coverage.
Encoder receiver transmitter (ERT) technology is a widely used communication system in utility metering, particularly in the United States. Water meters are connected through a cable to an external unit called a Meter Interface Unit (MIU); the ability to transition between wired and wireless systems in this way has made ERT a popular choice for utility providers seeking efficient and scalable metering solutions.
Application-layer protocols in smart metering
Application-layer protocols operate above RF communication technologies to standardize data exchange, ensure interoperability, and enhance device functionality. These protocols enable seamless integration of meters into broader utility and Internet of Things (IoT) ecosystems.
DLMS/COSEM (Device Language Message Specification/Companion Specification for Energy Metering) is one of the most widely adopted protocols in smart metering. It provides a flexible and standardized framework for data exchange between metering devices and utility systems. The protocol supports various communication technologies, including RF, wired, and cellular networks, and facilitates secure data transfer, structured data management, and remote monitoring.
LwM2M (Lightweight Machine to Machine) is a protocol specifically designed for IoT devices, offering efficient resource management and secure communication over constrained networks. Its lightweight design makes it ideal for smart water meters and other low-power devices. LwM2M supports remote configuration, firmware updates, and real-time monitoring, enabling enhanced functionality and scalability in metering systems.
Other application-layer protocols, such as MQTT (Message Queuing Telemetry Transport) and CoAP (Constrained Application Protocol), are also utilized in smart metering systems, particularly in IoT-centric deployments. These protocols focus on low-bandwidth, high-efficiency communication, ensuring reliable data exchange in diverse environments.
Smart water metering system: infrastructure overview
A smart water metering system integrates advanced water meters, communication networks, and centralized platforms like the Head-End System (HES) and Meter Data Management System (MDMS). Smart meters collect data on water usage, pressure, and anomalies, transmitting it through wireless networks. The HES aggregates and validates this data, forwarding it to the MDMS, which performs advanced analytics, trend reporting, and billing integration.
WMBus gateway for water meter remote reading
A WMBUS Gateway (Wireless M-Bus Gateway) is a communication device that enables remote reading of water meters by bridging the gap between the water meters equipped with Wireless M-Bus communication modules and centralized data collection systems. The gateway typically operates on standard frequencies such as 868 MHz (Europe) or other ISM bands.
WMBUS gateways can be deployed as fixed gateways, installed at permanent locations to continuously collect data from meters within range, or as part of mobile solutions, such as drive-by or walk-by systems, where data is collected via handheld devices or vehicles equipped with receivers as they pass by the meters.
In some cases, electricity meters with integrated communication modules are also utilized as fixed gateways to collect data from nearby water and gas meters, leveraging their existing infrastructure to minimize deployment costs.
The collected data is then transmitted to a central server via technologies like GSM, GPRS, LTE, or Ethernet for analysis and management.
Applications and benefits
The adoption of these RF technologies and protocols enables seamless integration of smart water meters into utility systems, offering several advantages:
Improved Efficiency: Automated data collection reduces manual labor and errors.
Enhanced Leak Detection: Real-time monitoring helps identify and address leaks promptly.
Dynamic Billing: Enables more accurate and flexible billing based on real-time usage.
Sustainability: Supports water conservation by providing detailed consumption insights.
Prevalence
Water metering is common for residential and commercial drinking water supply in many countries, as well as for industrial self-supply with water. However, it is less common in irrigated agriculture, which is the major water user worldwide. Water metering is also uncommon for piped drinking water supply in rural areas and small towns, although there are examples of successful metering in rural areas in developing countries, such as in El Salvador.
Metering of water supplied by utilities to residential, commercial and industrial users is common in most developed countries, except for the United Kingdom where only about 52% of users are metered. In some developing countries metering is very common, such as in Chile where it stands at 96%, while in others it still remains low, such as in Argentina.
The percentage of residential water metering in selected cities in developing countries is as follows:
99% in Santiago de Chile (1998)
96% in Abidjan, Ivory Coast (1987)
62% in cities in Guatemala (2000)
30% in Lima, Peru (1991)
28% in Kathmandu, Nepal (2001)
2% in Buenos Aires, Argentina (1992)
Nearly two-thirds of OECD countries meter more than 90% of single-family houses. A few are also expanding their metering of apartments (e.g., France and Germany).
Benefits
The benefits of metering are that:
in conjunction with volumetric pricing it provides an incentive for water conservation,
it helps to detect water leaks in the distribution network, thus providing a basis for reducing the amount of non-revenue water;
it is a precondition for quantity-targeting of water subsidies to the poor.
Costs
The costs of metering include:
Investment costs to purchase, install and replace meters,
Recurring costs to read meters and issue bills based on consumption instead of bills based on monthly flat fees.
While the cost of purchasing residential meters is low, the total life cycle costs of metering are high. For example, retrofitting flats in large buildings with meters for every flat can involve major and thus costly plumbing work.
Problems
Problems associated with metering arise particularly in the case of intermittent supply, which is common in many developing countries. Sudden changes in pressure can damage meters to the extent that many meters in cities in developing countries are not functional. Also, some types of meters become less accurate as they age, and under-registering consumption leads to lower revenues if defective meters are not regularly replaced. Many types of meters also register air flows, which can lead to over-registration of consumption, especially in systems with intermittent supply, when water supply is re-established and the incoming water pushes air through the meters.
Displacement water meters do not distinguish between air and water; both are counted as fluid. Two regulatory requirements address this, although water companies and meter manufacturers do not always comply, effectively charging for air as if it were water: a measuring system shall be equipped with an effective air/vapor eliminator or other automatic means to prevent the passage of air/vapor through the meter (Handbook 44 – 2019, 3.30. S.2.1.), and measuring systems shall incorporate a gas elimination device for the proper elimination of any air or undissolved gases which may be contained in the liquid before it enters the meter.
Water meter standards and certification
Water meter measurement standards and certification
Water meters are subject to measurement standards and certifications to ensure their accuracy, reliability, and compliance with regulatory requirements. The most widely recognized standards include the ISO 4064 series and the OIML R49 standards, which define the performance, accuracy classes, and testing procedures for water meters.
In the European Union, compliance with the Measuring Instruments Directive (MID) is mandatory for water meters sold within member states, ensuring conformity with harmonized European standards.
In the United States, water meters typically adhere to the AWWA (American Water Works Association) C700 series standards, which specify design, materials, and performance criteria.
In Australia and New Zealand, water meters must comply with the AS 3565 standard.
Certification processes for water meters often include testing for accuracy under varying flow rates, durability under environmental stress, and long-term stability.
Water meter potability standards and certification
Water meters used in potable water systems are required to meet stringent standards to ensure they do not contaminate the water supply or alter its quality. These standards address materials, coatings, and designs that come into contact with drinking water.
In the United States, compliance with NSF/ANSI 61 is mandatory, setting limits on leachable contaminants from water system components.
The European Union mandates conformity with the Regulation (EU) 305/2011 (Construction Products Regulation), alongside national certifications like
United Kingdom: WRAS Approval,
Germany: KTW Guideline,
France: ACS Certification (Attestation de Conformité Sanitaire),
Italy: DM 174/2004
In Australia and New Zealand, the AS/NZS 4020 standard governs the suitability of products for use with potable water, focusing on factors such as taste, color, and toxicity.
In Latin America, countries like Brazil and Mexico often reference international standards such as those from NSF International.
Environmental constraints
Water meters are frequently installed in environments where they are exposed to rain, flooding, and dust, necessitating robust protection to maintain accurate and reliable operation. An IP68 rating indicates that a device is completely dust-tight and can withstand continuous immersion in water beyond 1 meter depth, as specified by the manufacturer.
To achieve such protection, manufacturers employ various ingress protection mechanisms:
Potting with Epoxy or Silicone Gel: Encapsulating electronic components in materials like epoxy resin or silicone gel provides a robust barrier against water ingress. Epoxy offers strong adhesion and durability, while silicone gel provides flexibility and thermal stability.
Sealing and Desiccants for Humidity Control: Incorporating desiccants within the meter's enclosure helps absorb moisture, maintaining low humidity levels and preventing condensation that could lead to corrosion or electrical failures.
Innovation in water metering
Additional sensors
Additional sensors integrated into water meters are being explored as part of proof-of-concept (PoC) projects to enhance functionality and provide more detailed insights into water usage and system performance.
These innovations aim to address challenges such as leak detection, water quality monitoring, and reverse flow detection.
For instance,
Pressure sensors are being tested to identify anomalies like pipe bursts or blockages,
Temperature sensors are evaluated for their ability to detect freezing risks or thermal variations in water supply systems.
Acoustic sensors are tested in PoC systems for leak detection by analyzing sound patterns and vibrations within pipes.
Data analytics
The data collected by the smart meters is analyzed to provide insights into water usage patterns, peak consumption times, and potential issues like leaks or inefficiencies in the system. Utilities can use this data to optimize water distribution and address problems proactively.
Effect on consumption
There is disagreement as to the effect of metering and water pricing on water consumption. The price elasticity of metered water demand varies greatly depending on local conditions. The effect of volumetric water pricing on consumption tends to be higher if the water bill represents a significant portion of household expenditures.
There is evidence from the UK that there is an instant drop in consumption of some 10% when meters are installed, although in most instances consumption is not directly measured prior to meter installation, so the benefits are uncertain. Whilst metered water users in the UK do use less than unmetered users, in most areas metering is not compulsory for homes built before 1990, so the metered customers are to some extent a self-selecting group. There is also concern that water metering could be socially regressive, as householders on low incomes are less able to invest in water efficiency measures and may experience water poverty (defined as when a household spends more than 3% of net income on water and sewage services).
In Hamburg, Germany, domestic water consumption for metered flats (112 liter/capita/day) was 18% lower than for unmetered flats (137 liter/capita/day) in 1992.
Calibration and verification bench
Water meter calibration and testing benches employ various methods to evaluate the accuracy and performance of water meters. Each method caters to specific testing requirements, such as flow range, precision, or scalability.
Once the water flow is controlled, various measurement methods are employed to assess the performance and accuracy of water meters. These methods focus on comparing the meter's readings to a reference standard.
Start-stop method
A basic and widely used approach where the flow is initiated and stopped over a fixed period or volume. The meter's reading is compared against a precisely measured reference volume, offering reliable results for low to medium flow rates.
Gravimetric method
This method involves collecting the fluid over a known period, typically 60 seconds, and measuring its mass using high-precision weighing scales; dividing the mass by the water's density gives the reference volume against which the meter reading is compared.
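A minimal Python sketch of the gravimetric comparison; the collected mass, the meter reading, the water density and the idea of reporting the result as a percentage error are illustrative assumptions:

def gravimetric_error(meter_litres, collected_kg, density_kg_per_litre=0.998):
    # Reference volume from the weighed mass, then percentage error of the meter against it
    reference_litres = collected_kg / density_kg_per_litre
    return 100.0 * (meter_litres - reference_litres) / reference_litres

# Meter registered 50.2 L while 49.90 kg of water was collected over 60 seconds
print(f"{gravimetric_error(50.2, 49.90):+.2f}%")  # about +0.40%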
Volume comparator method
This method uses a calibrated reference device, such as a piston prover or master meter, to compare the water volume measured by the test meter. It is highly precise and suitable for meters requiring strict compliance with standards.
Real-time dynamic measurement
Continuous flow systems use real-time data acquisition to monitor and compare the meter's readings with those from a calibrated sensor. This modern method enables fast and efficient testing, especially for high-volume operations.
Prepaid and postpaid water meters
Meters can be prepaid or postpaid, depending on the payment method. Most mechanical type water meters are of the postpaid type, as are electromagnetic and ultrasonic meters. With prepaid water meters, the user purchases and prepays for a given amount of water from a vending station. The amount of water credited is entered on media such as an IC card or an RF-type card; the main difference between the two is whether the card must make physical contact with the processing part of the prepaid water meter. In some areas, a prepaid water meter uses a keypad as the interface for inputting the water credit.
Main suppliers
Sagemcom
Kamstrup
DH Metering Europe
Honeywell / Elster (ex Kent, ex Magnol, ex Wateau/Wameter)
Farnier
Hydrometer
Itron (ex Actaris, ex Schlumberger, ex Compagnie des Compteurs or CDC)
Maddalena
Smarteo Water (ex Polier Water)
Sappel and Hydrometer (Diehl group)
Sensus (ex Sensus Metering Systems, ex Invensys, ex Socam)
Tagus
Zenner
Arad
See also
Advanced metering infrastructure
American Water Works Association
Automated meter reading
Curb box
Drinking water
Electricity meter
Flow measurement
Gas meter
Meter data management
Public utility
Residential water use
Utility submeter
Water conservation
References
Further reading
American Water Works Association, Manual of Water Supply Practices M6: Water Meters — Selection, Installation, Testing, and Maintenance
American Water Works Association standards C700-02: Cold-Water Meters—Displacement Type, Bronze Main Case
American Water Works Association standards C701-02: Cold-Water Meters—Turbine Type
American Water Works Association standards C702-01: Cold-Water Meters—Compound Type
American Water Works Association standards C703-96: Cold-Water Meters—Fire Service Type
American Water Works Association standards C707-05: Encoder-Type Remote-Registration Systems for Cold-Water Meters
American Water Works Association standards C708-05: Cold-Water Meters Multijet Type
External links
Manual Water Meters http://watflux.in/manual-water-meters/
Water Measurement Manual of the United States Bureau of Reclamation
How to read different size water meters (PDF file)
How Meter Keeps Tab On The Water You Use, Popular Science, July 1950: a very detailed article with good illustrations
Typical Prepaid water meter introduction (PDF file)
Kosher Smart Water Meters
Installation Guidelines for Electromagnetic Flow Meter based on CGWA norms
Flow meters
Public services
Water industry
Water supply
Water technology | Water metering | [
"Chemistry",
"Technology",
"Engineering",
"Environmental_science"
] | 6,905 | [
"Hydrology",
"Measuring instruments",
"Water industry",
"Flow meters",
"Environmental engineering",
"Water technology",
"Water supply",
"Fluid dynamics"
] |
12,087,300 | https://en.wikipedia.org/wiki/Minimum%20bounding%20box | In geometry, the minimum bounding box or smallest bounding box (also known as the minimum enclosing box or smallest enclosing box) for a point set S in N dimensions is the box with the smallest measure (area, volume, or hypervolume in higher dimensions) within which all the points lie. When other kinds of measure are used, the minimum box is usually called accordingly, e.g., "minimum-perimeter bounding box".
The minimum bounding box of a point set is the same as the minimum bounding box of its convex hull, a fact which may be used heuristically to speed up computation.
In the two-dimensional case it is called the minimum bounding rectangle.
Axis-aligned minimum bounding box
The axis-aligned minimum bounding box (or AABB) for a given point set is its minimum bounding box subject to the constraint that the edges of the box are parallel to the (Cartesian) coordinate axes. It is the Cartesian product of N intervals each of which is defined by the minimal and maximal value of the corresponding coordinate for the points in S.
Axis-aligned minimal bounding boxes are used as an approximate location of an object in question and as a very simple descriptor of its shape. For example, in computational geometry and its applications when it is required to find intersections in the set of objects, the initial check is the intersections between their MBBs. Since it is usually a much less expensive operation than the check of the actual intersection (because it only requires comparisons of coordinates), it allows quickly excluding checks of the pairs that are far apart.
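A minimal Python sketch of both ideas above: building the axis-aligned box as the product of per-coordinate min–max intervals, and using cheap box overlap as a pre-check before any exact intersection test (the function names are illustrative):

def aabb(points):
    # One (min, max) interval per coordinate, taken over all points
    dims = range(len(points[0]))
    mins = tuple(min(p[i] for p in points) for i in dims)
    maxs = tuple(max(p[i] for p in points) for i in dims)
    return mins, maxs

def aabb_overlap(a, b):
    # Boxes intersect exactly when their intervals overlap in every coordinate
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(len(amin)))

box1 = aabb([(0, 0), (2, 1), (1, 3)])   # ((0, 0), (2, 3))
box2 = aabb([(1.5, 2.5), (4, 4)])       # ((1.5, 2.5), (4, 4))
print(aabb_overlap(box1, box2))         # True, so an exact intersection test would still be needed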
Arbitrarily oriented minimum bounding box
The arbitrarily oriented minimum bounding box is the minimum bounding box, calculated subject to no constraints as to the orientation of the result. Minimum bounding box algorithms based on the rotating calipers method can be used to find the minimum-area or minimum-perimeter bounding box of a two-dimensional convex polygon in linear time, and of a three-dimensional point set in the time it takes to construct its convex hull followed by a linear-time computation. A three-dimensional rotating calipers algorithm can find the minimum-volume arbitrarily-oriented bounding box of a three-dimensional point set in cubic time. Matlab implementations of the latter, as well as of methods offering an optimal compromise between accuracy and CPU time, are available.
Object-oriented minimum bounding box
In the case where an object has its own local coordinate system, it can be useful to store a bounding box relative to these axes, which requires no transformation as the object's own transformation changes.
Digital image processing
In digital image processing, the bounding box is merely the coordinates of the rectangular border that fully encloses a digital image when it is placed over a page, a canvas, a screen or other similar bidimensional background.
See also
Bounding sphere
Bounding volume
Minimum bounding rectangle
Darboux integral
References
Geometry
Geometric algorithms | Minimum bounding box | [
"Mathematics"
] | 614 | [
"Geometry"
] |
12,087,798 | https://en.wikipedia.org/wiki/Johnson%20circles | In geometry, a set of Johnson circles comprises three circles of equal radius r sharing one common point of intersection H. In such a configuration the circles usually have a total of four intersections (points where at least two of them meet): the common point H that they all share, and for each of the three pairs of circles one more intersection point (referred to here as their 2-wise intersection). If any two of the circles happen to osculate, they only have H as a common point, and it will then be considered that H is their 2-wise intersection as well; if they should coincide, we declare their 2-wise intersection to be the point diametrically opposite H. The three 2-wise intersection points define the reference triangle of the figure. The concept is named after Roger Arthur Johnson.
Properties
The centers of the Johnson circles lie on a circle of the same radius r as the Johnson circles, centered at the common point H. These centers form the Johnson triangle.
The circle centered at H with radius 2r, known as the anticomplementary circle, is tangent to each of the Johnson circles. The three tangent points are reflections of point H about the vertices of the Johnson triangle.
The points of tangency between the Johnson circles and the anticomplementary circle form another triangle, called the anticomplementary triangle of the reference triangle. It is similar to the Johnson triangle, and is homothetic to it by a factor 2 centered at H, their common circumcenter.
Johnson's theorem: The 2-wise intersection points of the Johnson circles (the vertices of the reference triangle) lie on a circle of the same radius r as the Johnson circles. This property is also well known in Romania as The 5 lei coin problem of Gheorghe Țițeica.
The reference triangle is in fact congruent to the Johnson triangle, and is homothetic to it by a factor −1.
The point H is the orthocenter of the reference triangle and the circumcenter of the Johnson triangle.
The homothetic center of the Johnson triangle and the reference triangle is their common nine-point center.
Proofs
Property 1 is obvious from the definition.
Property 2 is also clear: for any circle of radius r and any point P on it, the circle of radius 2r centered at P is tangent to it at the point of the original circle diametrically opposite P; this applies in particular to P = H for each Johnson circle, giving the anticomplementary circle.
Property 3 in the formulation of the homothety immediately follows; the triangle of points of tangency is known as the anticomplementary triangle.
For properties 4 and 5, first observe that any two of the three Johnson circles are interchanged by the reflection in the line connecting H and their 2-wise intersection (or in their common tangent at H if these points should coincide), and this reflection also interchanges the two vertices of the anticomplementary triangle lying on these circles. The 2-wise intersection point therefore is the midpoint of a side of the anticomplementary triangle, and lies on the perpendicular bisector of this side. Now the midpoints of the sides of any triangle are the images of its vertices by a homothety with factor −½, centered at the barycenter of the triangle. Applied to the anticomplementary triangle, which is itself obtained from the Johnson triangle by a homothety with factor 2, it follows from composition of homotheties that the reference triangle is homothetic to the Johnson triangle by a factor −1. Since such a homothety is a congruence, this gives property 5, and also the Johnson circles theorem since congruent triangles have circumscribed circles of equal radius.
For property 6, it was already established that the perpendicular bisectors of the sides of the anticomplementary triangle all pass through the point H; since each such side is parallel to a side of the reference triangle, these perpendicular bisectors are also the altitudes of the reference triangle.
Property 7 follows immediately from property 6, since the homothetic center whose factor is −1 must lie at the midpoint of the segment joining the circumcenters of the reference triangle and of the Johnson triangle; the latter circumcenter is the orthocenter of the reference triangle, and the reference triangle's nine-point center is known to be that midpoint. Since the central symmetry also maps the orthocenter of the reference triangle to that of the Johnson triangle, the homothetic center is also the nine-point center of the Johnson triangle.
There is also an algebraic proof of the Johnson circles theorem, using a simple vector computation. There are vectors u, v and w, all of length r, such that the Johnson circles are centered respectively at H + u, H + v and H + w. Then the 2-wise intersection points are respectively H + u + v, H + u + w and H + v + w, and the point H + u + v + w clearly has distance r to any of those 2-wise intersection points.
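Stated as a one-line computation with the notation above, for any choice of two of the three vectors:

\[ \bigl|\,(H+\vec{u}+\vec{v}+\vec{w}) - (H+\vec{u}+\vec{v})\,\bigr| = |\vec{w}| = r, \]

and similarly for the other two 2-wise intersection points, so all three lie on a circle of radius r centered at H + u + v + w.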
Further properties
The three Johnson circles can be considered the reflections of the circumcircle of the reference triangle about each of the three sides of the reference triangle. Furthermore, under the reflections about the three sides of the reference triangle, its orthocenter H maps to three points on the circumcircle of the reference triangle that form the vertices of the circum-orthic triangle, its circumcenter maps onto the vertices of the Johnson triangle and its Euler line (the line passing through its circumcenter, centroid and orthocenter) generates three lines that are concurrent at X(110).
The Johnson triangle and its reference triangle share the same nine-point center, the same Euler line and the same nine-point circle. The six points formed from the vertices of the reference triangle and its Johnson triangle all lie on the Johnson circumconic that is centered at the nine-point center and that has the point X(216) of the reference triangle as its perspector. The circumconic and the circumcircle share a fourth point, X(110) of the reference triangle.
Finally there are two interesting and documented circumcubics that pass through the six vertices of the reference triangle and its Johnson triangle as well as the circumcenter, the orthocenter and the nine-point center. The first is known as the first Musselman cubic – K026. This cubic also passes through the six vertices of the medial triangle and the medial triangle of the Johnson triangle. The second cubic is known as the Euler central cubic – K044. This cubic also passes through the six vertices of the orthic triangle and the orthic triangle of the Johnson triangle.
The X(i) point notation is the Clark Kimberling ETC classification of triangle centers.
External links
F. M. Jackson and
F. M. Jackson and
Bernard Gibert Circumcubic K026
Bernard Gibert Circumcubic K044
Clark Kimberling, "Encyclopedia of triangle centers". (Lists some 3000 interesting points associated with any triangle.)
References
Triangle geometry
Circles | Johnson circles | [
"Mathematics"
] | 1,403 | [
"Circles",
"Pi"
] |
12,088,404 | https://en.wikipedia.org/wiki/Quantitative%20precipitation%20forecast | The quantitative precipitation forecast (abbreviated QPF) is the expected amount of melted precipitation accumulated over a specified time period over a specified area. A QPF will be created when precipitation amounts reaching a minimum threshold are expected during the forecast's valid period. Valid periods of precipitation forecasts are normally synoptic hours such as 00:00, 06:00, 12:00 and 18:00 GMT. Terrain is considered in QPFs by use of topography or based upon climatological precipitation patterns from observations with fine detail. Starting in the mid-to-late 1990s, QPFs were used within hydrologic forecast models to simulate impacts on rivers throughout the United States. Forecast models show significant sensitivity to humidity levels within the planetary boundary layer (the lowest levels of the atmosphere), a sensitivity that decreases with height. QPF can be generated on a quantitative basis, forecasting amounts, or on a qualitative basis, forecasting the probability of a specific amount. Radar imagery forecasting techniques show higher skill than model forecasts within 6 to 7 hours of the time of the radar image. The forecasts can be verified through use of rain gauge measurements, weather radar estimates, or a combination of both. Various skill scores can be determined to measure the value of the rainfall forecast.
Use of radar
Algorithms exist to forecast rainfall based on short term radar trends, within a matter of hours. Radar imagery forecasting techniques show higher skill than model forecasts within 6 to 7 hours of the time of the radar image.
Use of forecast models
In the past, the forecaster was responsible for generating the entire weather forecast based upon available observations. Today, meteorologists' input is generally confined to choosing a model based on various parameters, such as model biases and performance. Using a consensus of forecast models, as well as ensemble members of the various models, can help reduce forecast error. However, regardless of how small the average error becomes with any individual system, large errors within any particular piece of guidance are still possible on any given model run. Professionals are required to interpret the model data into weather forecasts that are understandable to the lay person. Professionals can use knowledge of local effects which may be too small in size to be resolved by the model to add information to the forecast. As an example, terrain is considered in the QPF process by using topography or climatological precipitation patterns from observations with fine detail. Using model guidance and comparing the various forecast fields to climatology helps forecasters better anticipate extreme events, such as the excessive precipitation associated with later flood events. While increasing accuracy of forecast models implies that humans may no longer be needed in the forecast process at some point in the future, there is currently still a need for human intervention.
Nowcasting
The forecasting of the precipitation within the next six hours is often referred to as nowcasting. In this time range it is possible to forecast smaller features such as individual showers and thunderstorms with reasonable accuracy, as well as other features too small to be resolved by a computer model. A human given the latest radar, satellite and observational data will be able to make a better analysis of the small-scale features present and so will be able to make a more accurate forecast for the following few hours. However, there are now expert systems that use those data and mesoscale numerical models to make better extrapolations, including the evolution of those features in time.
Ensemble forecasting
The detail that can be given in a forecast decreases with time as forecast errors increase. There comes a point when the errors are so large that the forecast has no correlation with the actual state of the atmosphere. Looking at a single forecast model does not indicate how likely that forecast is to be correct. Ensemble forecasting entails the production of many forecasts to reflect the uncertainty in the initial state of the atmosphere (due to errors in the observations and insufficient sampling). The range of different forecasts produced can then assess the uncertainty in the forecast. Ensemble forecasts are increasingly being used for operational weather forecasting (for example at the European Centre for Medium-Range Weather Forecasts (ECMWF), the National Centers for Environmental Prediction (NCEP), and the Canadian Forecasting Center). Ensemble mean forecasts for precipitation have the same problems associated with their use in other fields, as they average out more extreme values, and therefore have limited usefulness for extreme events. In the case of the SREF ensemble mean, used within the United States, this decreasing usefulness begins at relatively low precipitation amounts.
Probability approach
In addition to graphical rainfall forecasts showing quantitative amounts, rainfall forecasts can be made describing the probabilities of certain rainfall amounts being met. This allows the forecaster to assign the degree of uncertainty to the forecast. This technique is considered to be informative relative to climatology. This method has been used for years within National Weather Service forecasts, as a period's chance of rain equals the chance that a measurable amount of precipitation will fall at any particular spot. In this case, it is known as probability of precipitation. These probabilities can be derived from a deterministic forecast using computer post-processing.
Entities which generate rainfall forecasts
Australia
The Bureau of Meteorology began a method of forecasting rainfall using a combination, or ensemble, of different forecast models in 2006. It is termed The Poor Man's Ensemble (PME). Its forecasts are more accurate over time than any of the individual models composing the ensemble. The PME is quick to produce, and is available through their Water and the Land page on their website.
Hong Kong
The Hong Kong Observatory generates short-term rainstorm warnings for systems that are expected to accumulate a certain amount of rainfall per hour over the next few hours. It uses three levels of warning, with the amber, red and black warnings indicating successively higher expected hourly rainfall intensities.
United States
Within the United States, the Hydrometeorological Prediction Center, River Forecast Centers, and local forecast offices within the National Weather Service create precipitation forecasts for up to five days in the future for amounts at or above a minimum threshold. Starting in the mid-to-late 1990s, QPFs were used within hydrologic forecast models to simulate the impact of rainfall on river stages.
Verification
Rainfall forecasts can be verified a number of ways. Rain gauge observations can be gridded into areal averages, which are then compared to the grids for the forecast models. Weather radar estimates can be used outright, or corrected for rain gauge observations.
Several statistical scores can be based on the observed and forecast fields. One, known as a bias, compares the size of the forecast field to the observed field, with the goal of a score of 1. The threat score involves the intersection of the forecast and observed sets, with a maximum possible verification score of 1. The probability of detection, or POD, is found by dividing the overlap between the forecast and observed fields by the size of the observed field: the goal here is a score of 1. The critical success index, or CSI, divides the overlap between the forecast and observed fields by the combined size of the forecast and observed fields: the goal here is a score of 1. The false alarm rate, or FAR, divides the area of the forecast which does not overlap the observed field by the size of the forecasted area. The goal value in this measure is zero.
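The scores above can be sketched for gridded yes/no precipitation fields as follows; the toy grids, the use of plain 0/1 lists and the absence of any threshold handling are simplifying assumptions rather than how operational verification systems are built:

def verification_scores(forecast, observed):
    # Inputs are binary (0/1) grids of equal size, flattened to lists
    hits = sum(f and o for f, o in zip(forecast, observed))  # overlap of the two fields
    fcst_area = sum(forecast)
    obs_area = sum(observed)
    false_alarms = fcst_area - hits
    misses = obs_area - hits
    return {
        "bias": fcst_area / obs_area,                  # goal: 1
        "pod": hits / obs_area,                        # probability of detection, goal: 1
        "csi": hits / (hits + misses + false_alarms),  # critical success index, goal: 1
        "far": false_alarms / fcst_area,               # false alarm rate, goal: 0
    }

fcst = [1, 1, 1, 0, 0, 1, 0, 0]
obs = [1, 1, 0, 0, 1, 1, 0, 0]
print(verification_scores(fcst, obs))  # bias 1.0, pod 0.75, csi 0.6, far 0.25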
For tropical cyclones that impact the United States, the GFS global forecast model has performed best with regard to its rainfall forecasts over the last few years, outperforming the NAM and ECMWF forecast models.
See also
Tropical cyclone rainfall forecasting
European Flood Alert System: using QPF and EPS for flood forecasting
References
External links
Hydrometeorological Prediction Center QPF for the lower 48 United States
Irrigation controller using QPF
Weather forecasting
Hydrology | Quantitative precipitation forecast | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,602 | [
"Hydrology",
"Environmental engineering"
] |
12,088,522 | https://en.wikipedia.org/wiki/Specification%20pattern | In computer programming, the specification pattern is a particular software design pattern, whereby business rules can be recombined by chaining the business rules together using boolean logic. The pattern is frequently used in the context of domain-driven design.
A specification pattern outlines a business rule that is combinable with other business rules. In this pattern, a unit of business logic inherits its functionality from the abstract aggregate Composite Specification class. The Composite Specification class has one function called IsSatisfiedBy that returns a boolean value. After instantiation, the specification is "chained" with other specifications, making new specifications easily maintainable, yet highly customizable business logic. Furthermore, upon instantiation the business logic may, through method invocation or inversion of control, have its state altered in order to become a delegate of other classes such as a persistence repository.
As a consequence of performing runtime composition of high-level business/domain logic, the Specification pattern is a convenient tool for converting ad-hoc user search criteria into low level logic to be processed by repositories.
Since a specification is an encapsulation of logic in a reusable form it is very simple to thoroughly unit test, and when used in this context is also an implementation of the humble object pattern.
Code examples
C#
public interface ISpecification
{
bool IsSatisfiedBy(object candidate);
ISpecification And(ISpecification other);
ISpecification AndNot(ISpecification other);
ISpecification Or(ISpecification other);
ISpecification OrNot(ISpecification other);
ISpecification Not();
}
public abstract class CompositeSpecification : ISpecification
{
public abstract bool IsSatisfiedBy(object candidate);
public ISpecification And(ISpecification other)
{
return new AndSpecification(this, other);
}
public ISpecification AndNot(ISpecification other)
{
return new AndNotSpecification(this, other);
}
public ISpecification Or(ISpecification other)
{
return new OrSpecification(this, other);
}
public ISpecification OrNot(ISpecification other)
{
return new OrNotSpecification(this, other);
}
public ISpecification Not()
{
return new NotSpecification(this);
}
}
public class AndSpecification : CompositeSpecification
{
private ISpecification leftCondition;
private ISpecification rightCondition;
public AndSpecification(ISpecification left, ISpecification right)
{
leftCondition = left;
rightCondition = right;
}
public override bool IsSatisfiedBy(object candidate)
{
return leftCondition.IsSatisfiedBy(candidate) && rightCondition.IsSatisfiedBy(candidate);
}
}
public class AndNotSpecification : CompositeSpecification
{
private ISpecification leftCondition;
private ISpecification rightCondition;
public AndNotSpecification(ISpecification left, ISpecification right)
{
leftCondition = left;
rightCondition = right;
}
public override bool IsSatisfiedBy(object candidate)
{
return leftCondition.IsSatisfiedBy(candidate) && !rightCondition.IsSatisfiedBy(candidate);
}
}
public class OrSpecification : CompositeSpecification
{
private ISpecification leftCondition;
private ISpecification rightCondition;
public OrSpecification(ISpecification left, ISpecification right)
{
leftCondition = left;
rightCondition = right;
}
public override bool IsSatisfiedBy(object candidate)
{
return leftCondition.IsSatisfiedBy(candidate) || rightCondition.IsSatisfiedBy(candidate);
}
}
public class OrNotSpecification : CompositeSpecification
{
private ISpecification leftCondition;
private ISpecification rightCondition;
public OrNotSpecification(ISpecification left, ISpecification right)
{
leftCondition = left;
rightCondition = right;
}
public override bool IsSatisfiedBy(object candidate)
{
return leftCondition.IsSatisfiedBy(candidate) || !rightCondition.IsSatisfiedBy(candidate);
}
}
public class NotSpecification : CompositeSpecification
{
private ISpecification Wrapped;
public NotSpecification(ISpecification x)
{
Wrapped = x;
}
public override bool IsSatisfiedBy(object candidate)
{
return !Wrapped.IsSatisfiedBy(candidate);
}
}
C# 6.0 with generics
public interface ISpecification<T>
{
bool IsSatisfiedBy(T candidate);
ISpecification<T> And(ISpecification<T> other);
ISpecification<T> AndNot(ISpecification<T> other);
ISpecification<T> Or(ISpecification<T> other);
ISpecification<T> OrNot(ISpecification<T> other);
ISpecification<T> Not();
}
public abstract class LinqSpecification<T> : CompositeSpecification<T>
{
public abstract Expression<Func<T, bool>> AsExpression();
public override bool IsSatisfiedBy(T candidate) => AsExpression().Compile()(candidate);
}
public abstract class CompositeSpecification<T> : ISpecification<T>
{
public abstract bool IsSatisfiedBy(T candidate);
public ISpecification<T> And(ISpecification<T> other) => new AndSpecification<T>(this, other);
public ISpecification<T> AndNot(ISpecification<T> other) => new AndNotSpecification<T>(this, other);
public ISpecification<T> Or(ISpecification<T> other) => new OrSpecification<T>(this, other);
public ISpecification<T> OrNot(ISpecification<T> other) => new OrNotSpecification<T>(this, other);
public ISpecification<T> Not() => new NotSpecification<T>(this);
}
public class AndSpecification<T> : CompositeSpecification<T>
{
ISpecification<T> left;
ISpecification<T> right;
public AndSpecification(ISpecification<T> left, ISpecification<T> right)
{
this.left = left;
this.right = right;
}
public override bool IsSatisfiedBy(T candidate) => left.IsSatisfiedBy(candidate) && right.IsSatisfiedBy(candidate);
}
public class AndNotSpecification<T> : CompositeSpecification<T>
{
ISpecification<T> left;
ISpecification<T> right;
public AndNotSpecification(ISpecification<T> left, ISpecification<T> right)
{
this.left = left;
this.right = right;
}
public override bool IsSatisfiedBy(T candidate) => left.IsSatisfiedBy(candidate) && !right.IsSatisfiedBy(candidate);
}
public class OrSpecification<T> : CompositeSpecification<T>
{
ISpecification<T> left;
ISpecification<T> right;
public OrSpecification(ISpecification<T> left, ISpecification<T> right)
{
this.left = left;
this.right = right;
}
public override bool IsSatisfiedBy(T candidate) => left.IsSatisfiedBy(candidate) || right.IsSatisfiedBy(candidate);
}
public class OrNotSpecification<T> : CompositeSpecification<T>
{
ISpecification<T> left;
ISpecification<T> right;
public OrNotSpecification(ISpecification<T> left, ISpecification<T> right)
{
this.left = left;
this.right = right;
}
public override bool IsSatisfiedBy(T candidate) => left.IsSatisfiedBy(candidate) || !right.IsSatisfiedBy(candidate);
}
public class NotSpecification<T> : CompositeSpecification<T>
{
ISpecification<T> other;
public NotSpecification(ISpecification<T> other) => this.other = other;
public override bool IsSatisfiedBy(T candidate) => !other.IsSatisfiedBy(candidate);
}
Python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Any
class BaseSpecification(ABC):
@abstractmethod
def is_satisfied_by(self, candidate: Any) -> bool:
raise NotImplementedError()
def __call__(self, candidate: Any) -> bool:
return self.is_satisfied_by(candidate)
def __and__(self, other: "BaseSpecification") -> "AndSpecification":
return AndSpecification(self, other)
def __or__(self, other: "BaseSpecification") -> "OrSpecification":
return OrSpecification(self, other)
def __neg__(self) -> "NotSpecification":
return NotSpecification(self)
@dataclass(frozen=True)
class AndSpecification(BaseSpecification):
first: BaseSpecification
second: BaseSpecification
def is_satisfied_by(self, candidate: Any) -> bool:
return self.first.is_satisfied_by(candidate) and self.second.is_satisfied_by(candidate)
@dataclass(frozen=True)
class OrSpecification(BaseSpecification):
first: BaseSpecification
second: BaseSpecification
def is_satisfied_by(self, candidate: Any) -> bool:
return self.first.is_satisfied_by(candidate) or self.second.is_satisfied_by(candidate)
@dataclass(frozen=True)
class NotSpecification(BaseSpecification):
subject: BaseSpecification
def is_satisfied_by(self, candidate: Any) -> bool:
return not self.subject.is_satisfied_by(candidate)
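A brief usage sketch of the Python implementation above; the Invoice class and the two concrete specifications are illustrative examples, not part of the pattern itself:

from dataclasses import dataclass

@dataclass
class Invoice:
    amount: float
    is_paid: bool

class LargeInvoiceSpecification(BaseSpecification):
    def is_satisfied_by(self, candidate: Invoice) -> bool:
        return candidate.amount > 1000

class PaidInvoiceSpecification(BaseSpecification):
    def is_satisfied_by(self, candidate: Invoice) -> bool:
        return candidate.is_paid

# Business rules are chained with the operators defined on BaseSpecification
needs_reminder = LargeInvoiceSpecification() & -PaidInvoiceSpecification()
print(needs_reminder(Invoice(amount=2500.0, is_paid=False)))  # True
print(needs_reminder(Invoice(amount=250.0, is_paid=False)))   # False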
C++
template <class T>
class ISpecification
{
public:
virtual ~ISpecification() = default;
virtual bool IsSatisfiedBy(T Candidate) const = 0;
virtual ISpecification<T>* And(const ISpecification<T>& Other) const = 0;
virtual ISpecification<T>* AndNot(const ISpecification<T>& Other) const = 0;
virtual ISpecification<T>* Or(const ISpecification<T>& Other) const = 0;
virtual ISpecification<T>* OrNot(const ISpecification<T>& Other) const = 0;
virtual ISpecification<T>* Not() const = 0;
};
template <class T>
class CompositeSpecification : public ISpecification<T>
{
public:
virtual bool IsSatisfiedBy(T Candidate) const override = 0;
virtual ISpecification<T>* And(const ISpecification<T>& Other) const override;
virtual ISpecification<T>* AndNot(const ISpecification<T>& Other) const override;
virtual ISpecification<T>* Or(const ISpecification<T>& Other) const override;
virtual ISpecification<T>* OrNot(const ISpecification<T>& Other) const override;
virtual ISpecification<T>* Not() const override;
};
template <class T>
class AndSpecification final : public CompositeSpecification<T>
{
public:
const ISpecification<T>& Left;
const ISpecification<T>& Right;
AndSpecification(const ISpecification<T>& InLeft, const ISpecification<T>& InRight)
: Left(InLeft),
Right(InRight) { }
virtual bool IsSatisfiedBy(T Candidate) const override
{
return Left.IsSatisfiedBy(Candidate) && Right.IsSatisfiedBy(Candidate);
}
};
template <class T>
ISpecification<T>* CompositeSpecification<T>::And(const ISpecification<T>& Other) const
{
return new AndSpecification<T>(*this, Other);
}
template <class T>
class AndNotSpecification final : public CompositeSpecification<T>
{
public:
const ISpecification<T>& Left;
const ISpecification<T>& Right;
AndNotSpecification(const ISpecification<T>& InLeft, const ISpecification<T>& InRight)
: Left(InLeft),
Right(InRight) { }
virtual bool IsSatisfiedBy(T Candidate) const override
{
return Left.IsSatisfiedBy(Candidate) && !Right.IsSatisfiedBy(Candidate);
}
};
template <class T>
class OrSpecification final : public CompositeSpecification<T>
{
public:
const ISpecification<T>& Left;
const ISpecification<T>& Right;
OrSpecification(const ISpecification<T>& InLeft, const ISpecification<T>& InRight)
: Left(InLeft),
Right(InRight) { }
virtual bool IsSatisfiedBy(T Candidate) const override
{
return Left.IsSatisfiedBy(Candidate) || Right.IsSatisfiedBy(Candidate);
}
};
template <class T>
class OrNotSpecification final : public CompositeSpecification<T>
{
public:
const ISpecification<T>& Left;
const ISpecification<T>& Right;
OrNotSpecification(const ISpecification<T>& InLeft, const ISpecification<T>& InRight)
: Left(InLeft),
Right(InRight) { }
virtual bool IsSatisfiedBy(T Candidate) const override
{
return Left.IsSatisfiedBy(Candidate) || !Right.IsSatisfiedBy(Candidate);
}
};
template <class T>
class NotSpecification final : public CompositeSpecification<T>
{
public:
const ISpecification<T>& Other;
NotSpecification(const ISpecification<T>& InOther)
: Other(InOther) { }
virtual bool IsSatisfiedBy(T Candidate) const override
{
return !Other.IsSatisfiedBy(Candidate);
}
};
template <class T>
ISpecification<T>* CompositeSpecification<T>::AndNot(const ISpecification<T>& Other) const
{
return new AndNotSpecification<T>(*this, Other);
}
template <class T>
ISpecification<T>* CompositeSpecification<T>::Or(const ISpecification<T>& Other) const
{
return new OrSpecification<T>(*this, Other);
}
template <class T>
ISpecification<T>* CompositeSpecification<T>::OrNot(const ISpecification<T>& Other) const
{
return new OrNotSpecification<T>(*this, Other);
}
template <class T>
ISpecification<T>* CompositeSpecification<T>::Not() const
{
return new NotSpecification<T>(*this);
}
TypeScript
export interface ISpecification {
isSatisfiedBy(candidate: unknown): boolean;
and(other: ISpecification): ISpecification;
andNot(other: ISpecification): ISpecification;
or(other: ISpecification): ISpecification;
orNot(other: ISpecification): ISpecification;
not(): ISpecification;
}
export abstract class CompositeSpecification implements ISpecification {
abstract isSatisfiedBy(candidate: unknown): boolean;
and(other: ISpecification): ISpecification {
return new AndSpecification(this, other);
}
andNot(other: ISpecification): ISpecification {
return new AndNotSpecification(this, other);
}
or(other: ISpecification): ISpecification {
return new OrSpecification(this, other);
}
orNot(other: ISpecification): ISpecification {
return new OrNotSpecification(this, other);
}
not(): ISpecification {
return new NotSpecification(this);
}
}
export class AndSpecification extends CompositeSpecification {
constructor(private leftCondition: ISpecification, private rightCondition: ISpecification) {
super();
}
isSatisfiedBy(candidate: unknown): boolean {
return this.leftCondition.isSatisfiedBy(candidate) && this.rightCondition.isSatisfiedBy(candidate);
}
}
export class AndNotSpecification extends CompositeSpecification {
constructor(private leftCondition: ISpecification, private rightCondition: ISpecification) {
super();
}
isSatisfiedBy(candidate: unknown): boolean {
return this.leftCondition.isSatisfiedBy(candidate) && !this.rightCondition.isSatisfiedBy(candidate);
}
}
export class OrSpecification extends CompositeSpecification {
constructor(private leftCondition: ISpecification, private rightCondition: ISpecification) {
super();
}
isSatisfiedBy(candidate: unknown): boolean {
return this.leftCondition.isSatisfiedBy(candidate) || this.rightCondition.isSatisfiedBy(candidate);
}
}
export class OrNotSpecification extends CompositeSpecification {
constructor(private leftCondition: ISpecification, private rightCondition: ISpecification) {
super();
}
isSatisfiedBy(candidate: unknown): boolean {
return this.leftCondition.isSatisfiedBy(candidate) || !this.rightCondition.isSatisfiedBy(candidate);
}
}
export class NotSpecification extends CompositeSpecification {
constructor(private wrapped: ISpecification) {
super();
}
isSatisfiedBy(candidate: unknown): boolean {
return !this.wrapped.isSatisfiedBy(candidate);
}
}
Example of use
In the next example, invoices are retrieved and sent to a collection agency if:
they are overdue,
notices have been sent, and
they are not already with the collection agency.
This example is meant to show how the logic is 'chained' together.
This usage example assumes a previously defined OverDueSpecification class that is satisfied when an invoice's due date is 30 days or older, a NoticeSentSpecification class that is satisfied when three notices have been sent to the customer, and an InCollectionSpecification class that is satisfied when an invoice has already been sent to the collection agency. The implementation of these classes is not important here (an illustrative sketch of such classes is given after the usage code below).
Using these three specifications, a new specification called SendToCollection is created, which will be satisfied when an invoice is overdue, notices have been sent to the customer, and the invoice is not already with the collection agency.
var overDue = new OverDueSpecification();
var noticeSent = new NoticeSentSpecification();
var inCollection = new InCollectionSpecification();
// Example of specification pattern logic chaining
var sendToCollection = overDue.And(noticeSent).And(inCollection.Not());
var invoiceCollection = Service.GetInvoices();
foreach (var currentInvoice in invoiceCollection)
{
if (sendToCollection.IsSatisfiedBy(currentInvoice))
{
currentInvoice.SendToCollection();
}
}
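As an illustration only (not part of the original article), the three assumed specification classes might be sketched in Python, reusing the BaseSpecification class from the Python example above; the Invoice attributes (due_date, notices_sent, in_collection), the helper get_invoices(), and the exact thresholds are hypothetical, taken from the description of the classes above.
from datetime import date, timedelta
class OverDueSpecification(BaseSpecification):
    def is_satisfied_by(self, candidate: Any) -> bool:
        # Satisfied when the invoice's due date is 30 days or older.
        return candidate.due_date <= date.today() - timedelta(days=30)
class NoticeSentSpecification(BaseSpecification):
    def is_satisfied_by(self, candidate: Any) -> bool:
        # Satisfied when three notices have been sent to the customer.
        return candidate.notices_sent >= 3
class InCollectionSpecification(BaseSpecification):
    def is_satisfied_by(self, candidate: Any) -> bool:
        # Satisfied when the invoice is already with the collection agency.
        return candidate.in_collection
# Chaining with the operator overloads defined on BaseSpecification:
send_to_collection = OverDueSpecification() & NoticeSentSpecification() & -InCollectionSpecification()
for invoice in get_invoices():  # get_invoices() is assumed to return the invoices to check
    if send_to_collection.is_satisfied_by(invoice):
        invoice.send_to_collection()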
References
External links
Specifications by Eric Evans and Martin Fowler
The Specification Pattern: A Primer by Matt Berther
The Specification Pattern: A Four Part Introduction using VB.Net by Richard Dalton
The Specification Pattern in PHP by Moshe Brevda
Happyr Doctrine Specification in PHP by Happyr
The Specification Pattern in Swift by Simon Strandgaard
The Specification Pattern in TypeScript and JavaScript by Thiago Delgado Pinto
specification pattern in flash actionscript 3 by Rolf Vreijdenberger
Architectural pattern (computer science)
Software design patterns
Programming language comparisons
Articles with example C Sharp code
Articles with example C++ code
Articles with example JavaScript code
Articles with example Python (programming language) code | Specification pattern | [
"Technology"
] | 4,784 | [
"Programming language comparisons",
"Computing comparisons"
] |
12,088,839 | https://en.wikipedia.org/wiki/Boroxine | Boroxine (B3H3O3) is a 6-membered heterocyclic compound composed of alternating oxygen and singly-hydrogenated boron atoms. Boroxine derivatives (boronic anhydrides) such as trimethylboroxine and triphenylboroxine also make up a broader class of compounds called boroxines. These compounds are solids that are usually in equilibrium with their respective boronic acids at room temperature. Besides being used in theoretical studies, boroxine is primarily used in the production of optics.
Structure and bonding
Three-coordinate compounds of boron typically exhibit trigonal planar geometry, therefore the boroxine ring is locked in a planar geometry as well. These compounds are isoelectronic to benzene. With the vacant p-orbital on the boron atoms, they may possess some aromatic character. Boron single-bonds on boroxine compounds are mostly s-character. Ethyl-substituted boroxine has B-O bond lengths of 1.384 Å and B-C bond lengths of 1.565 Å. Phenyl-substituted boroxine has similar bond lengths of 1.386 Å and 1.546 Å respectively, showing that the substituent has little effect on the boroxine ring size.
Substitutions onto a boroxine ring determine its crystal structure. Alkyl-substituted boroxines have the simplest crystal structure. These molecules stack on top of each other, aligning an oxygen atom from one molecule with a boron atom in another, leaving each boron atom between two other oxygen atoms. This forms a tube out of the individual boroxine rings. The intermolecular B-O distance of ethyl-substituted boroxine is 3.462 Å, which is much longer than the B-O bond distance of 1.384 Å. The crystal structure of phenyl-substituted boroxine is more complex. The interaction between the vacant p-orbitals in the boron atoms and the π-electrons in the aromatic, phenyl-substituents cause a different crystal structure. The boroxine ring of one molecule is stacked between two phenyl rings of other molecules. This arrangement allows the phenyl-substituents to donate π-electron density to the vacant boron p-orbitals.
Synthesis
The parent boroxine (cyclo-(HBO)3) is prepared in small quantities as a low pressure gas by high temperature reaction of water and elemental boron or reaction of various boranes (B2H6 or B5H9) with O2. It is thermodynamically unstable with respect to disproportionation to diborane and boron oxide. Some reactivity studies and an IR spectrum are reported, but it is otherwise not well characterized.
As discovered in the 1930s, substituted boroxines (cyclo-(RBO)3, R = alkyl or aryl) are generally produced from their corresponding boronic acids by dehydration. This dehydration can be done either by a drying agent or by heating under a high vacuum.
Trimethylboroxine can be synthesized by reacting carbon monoxide with diborane (B2H6) and lithium borohydride (LiBH4) as a catalyst (or reaction of borane–tetrahydrofuran or borane–(dimethyl sulfide) in the presence of sodium borohydride):
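The equation for this carbonylation is not reproduced in the text above; a balanced form consistent with the description (reconstructed here, with the borohydride acting only as a catalyst) is
$\ce{3 B2H6 + 6 CO -> 2 (CH3BO)3}$.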
Reactions
Trimethylboroxine is used in the methylation of various aryl halides through palladium-catalyzed Suzuki-Miyaura coupling reactions:
$\ce{C6H5X + (CH3BO)3 ->[\ce{K2CO3, Pd(PPh3)4}][\text{dioxane}] C6H5CH3}$ (X = Br, I)
Another form of the Suzuki-Miyaura coupling reaction exhibits selectivity to aryl chlorides:
Boroxines have also been examined as precursors to monomeric oxoborane, HB≡O. This compound quickly converts back to the cyclic boroxine, even at low temperatures.
References
Boron heterocycles
Inorganic compounds
Six-membered rings | Boroxine | [
"Chemistry"
] | 924 | [
"Inorganic compounds"
] |
12,089,135 | https://en.wikipedia.org/wiki/Meerkat%20%28vehicle%29 | The Meerkat is the lead vehicle in the Interim Vehicle Mounted Mine Detector VMMD system, which evolved from a system known as Chubby.
The first units were delivered in 1998.
The system is manufactured by the Rolling Stock Division (RSD) of DCD-Dorbyl, a mechanical engineering conglomerate in South Africa.
The Meerkat resembles a cross between a dune buggy and a grader, with a pair of horizontally mounted rectangular panels, one each side, where the grader's blade would be.
The Meerkat is intended only to detect land mines - the related Husky and its trailers are intended to dispose of detected mines.
With its tire pressures lowered, the Meerkat produces a relatively low pressure on the ground below the tires, enabling it to pass over pressure-sensitive mines designed to destroy heavy vehicles. Even if it detonates a mine, the floor of the driver's cabin is armored and the structural components are designed to be quickly replaced.
References
External links
DCD-Dorbyl Rolling Stock Division (RSD)
Military vehicles of South Africa
Military engineering vehicles
Military vehicles introduced in the 1990s | Meerkat (vehicle) | [
"Engineering"
] | 232 | [
"Engineering vehicles",
"Military engineering",
"Military engineering vehicles"
] |
12,089,705 | https://en.wikipedia.org/wiki/Continuous%20function%20%28set%20theory%29 | In set theory, a continuous function is a sequence of ordinals such that the values assumed at limit stages are the limits (limit suprema and limit infima) of all values at previous stages. More formally, let γ be an ordinal, and be a γ-sequence of ordinals. Then s is continuous if at every limit ordinal β < γ,
and
Alternatively, if s is an increasing function then s is continuous if s: γ → range(s) is a continuous function when the sets are each equipped with the order topology. These continuous functions are often used in cofinalities and cardinal numbers.
A normal function is a function that is both continuous and strictly increasing.
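For illustration (an example supplied here rather than taken from the source), consider the γ-sequence $s_{\alpha} = \omega \cdot \alpha$. At every limit ordinal $\beta < \gamma$,
$s_{\beta} = \omega \cdot \beta = \sup\{\omega \cdot \alpha : \alpha < \beta\} = \limsup\{s_{\alpha} : \alpha < \beta\} = \liminf\{s_{\alpha} : \alpha < \beta\}$,
so s is continuous; since it is also strictly increasing, it is a normal function.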
References
Thomas Jech. Set Theory, 3rd millennium ed., 2002, Springer Monographs in Mathematics, Springer.
Set theory
Ordinal numbers | Continuous function (set theory) | [
"Mathematics"
] | 177 | [
"Ordinal numbers",
"Set theory",
"Mathematical logic",
"Mathematical objects",
"Mathematical logic stubs",
"Order theory",
"Numbers"
] |
12,089,795 | https://en.wikipedia.org/wiki/Mackmyra%20Whisky | Mackmyra Whisky was a Swedish single malt whisky distillery. On August 19, 2024, Mackmyra Svensk Whisky AB filed for bankruptcy. After the bankruptcy, more than 50 organizations expressed interest in buying Mackmyra, a level of interest reported to be unprecedented in Swedish history.
It is named after the village and manor of Mackmyra, where the first distillery was established, in the residential district of Valbo, south-west of Gävle. The toponym is commonly suggested as deriving from a regional word for gnats (Swedish: mack) and mire (Swedish: myr). However, gnats have all but disappeared from present-day Mackmyra, due to the gradual post-glacial rebound of the land that followed the melting of the ice sheets some 10,000 years ago.
Mackmyra Svensk Whisky AB is a publicly traded company, listed in December 2011 on Nasdaq OMX's alternative-investment market First North. The company has about 45 employees with annual revenues of around SEK 100 million, and its biggest shareholder is the Swedish farmer's co-op Lantmännen.
History
Mackmyra's history started in 1998 at a Swedish winter resort, where eight friends from the Royal Institute of Technology met up for a ski trip. When they noticed that each of them had brought along a bottle of malt whisky for the host, a conversation started about the manufacturing of a Swedish whisky. The following year a company was founded, and after years of experimenting with 170 different recipes, they finally settled on two recipes in 2002. That same year a new distillery was built in the old mill and power station at Mackmyra, which went on stream in October. The first limited edition single malt whisky, Preludium 01, launched in February 2006 and sold out in less than 20 minutes.
Production
All ingredients used in the production are sourced within a 75-mile radius from Mackmyra, except for the yeast, which is from Rotebro. The water undergoes a natural filtration process in an esker nearby and is only sterilized with a high-intensity UV light. The peat is from a local bog near Österfärnebo, and the distillery uses barley from Dalarna and Strömsta Manor in Enköping.
Mackmyra bottles all of its wares in their natural color, without additives, and ages their spirits in four different cask types: bourbon, sherry, Swedish oak and in a special signature cask made from classic American bourbon casks and Swedish oak. The whisky is generally matured 50 meters below ground in the disused Bodås Mine in Hofors, and most releases have been at cask strength, except for The First Edition and Mackmyra Brukswhisky. Mackmyra filed for insolvency in 2024.
Distilleries
Mackmyra has two active distilleries. The first went on stream at Mackmyra in 2002, featuring a full-sized pot still from Forsyth's in Rothes, Scotland, Swedish stainless steel washbacks and a German mash tun, with a production capacity of 600,000 bottles a year.
A second distillery, about 6 miles east of Mackmyra village, was built and went on stream in 2011. The project cost has been estimated at SEK 50 million, featuring two full-sized pot stills with a production capacity of 1.8 M bottles a year. It's seven storeys high, using gravity to power many internal processes within the distillery, resulting in about 45% less energy use compared to the first distillery.
Products
Standard Range
The First Edition (ABV 46.1%) - Introduced in 2008, and the first Swedish whisky produced in large volumes since Skeppets Whisky.
Mackmyra Brukswhisky (ABV 41.1%) – Introduced in 2010, and sold internationally as The Swedish Whisky.
Mackmyra Svensk Rök (ABV 46.1%) – Introduced in 2013, and the first Swedish single malt whisky with a smoky flavor.
Special edition bottlings
Mackmyra Midvinter – A limited edition series, launched in November 2013.
Mackmyra Midnattssol - A limited editions series, launched in May 2014.
Mackmyra Moment – A series of whiskies from handpicked casks selected by the master blender.
Mackmyra Reserve – A single barrel whisky made to order and stored until ready to drink in 30-litre casks. The customer picks recipe and cask type.
Mackmyra 10 år – 10-year-old limited edition whisky.
Past special edition bottlings
Mackmyra Preludium – 2006-2007
Mackmyra Privus – 2006
Mackmyra Special – 2008-2013
Other spirits
Vit Hund (ABV 46.1%) - An unmatured raw whisky
Bee (ABV 22%) - A whisky and honey liqueur
Awards
Mackmyra have won several awards at international spirit competitions. For example:
Mackmyra Brukswhisky has been named "European Whisky of the Year" by Jim Murray in the Whisky Bible, and was awarded gold by the International Wine and Spirit Competition (IWSC) in 2010.
In 2012, Mackmyra received a trophy as the "European Spirits Producer of the Year" from the IWSC, and was awarded gold for Moment Skog and Mackmyra Special 08. Gold had also previously been awarded in 2011 for The First Edition, Moment Drivved, Moment Medvind and Mackmyra Reserve.
In 2013 the distillery was awarded Gold Outstanding by the IWSC and Three Golden Stars by the International Taste and Quality Institute for Moment Glöd single malt whisky.
See also
List of whisky brands
Single malt whisky
References
External links
Official Mackmyra Website (in English)
Swedish Whisky (in English)
Distilleries
Swedish distilled drinks
Food and drink companies of Sweden
Companies based in Gävleborg County | Mackmyra Whisky | [
"Chemistry"
] | 1,231 | [
"Distilleries",
"Distillation"
] |
12,090,211 | https://en.wikipedia.org/wiki/WS-Federation | WS-Federation (Web Services Federation) is an Identity Federation specification, developed by a group of companies: BEA Systems, BMC Software, CA Inc. (along with Layer 7 Technologies now a part of CA Inc.), IBM, Microsoft, Novell, Hewlett Packard Enterprise, and VeriSign. Part of the larger Web Services Security framework, WS-Federation defines mechanisms for allowing different security realms to broker information on identities, identity attributes and authentication.
Associated specifications
The following draft specifications are associated with WS-Security:
WS-SecureConversation
WS-Federation
WS-Authorization
WS-Policy
WS-Trust
WS-Privacy
See also
List of Web service specifications
Web Services
SAML
XACML
Liberty Alliance
OpenID
External links
WS-Federation 1.2 specification
Whitepaper: Understanding WS-Federation
Whitepaper: Federation of Identities in a Web Services world
Computer access control
Federated identity
Identity management
Identity management systems
Web service specifications | WS-Federation | [
"Technology",
"Engineering"
] | 200 | [
"Computing stubs",
"Cybersecurity engineering",
"Computer access control",
"Computer network stubs"
] |
12,092,048 | https://en.wikipedia.org/wiki/Capillary%20length | The capillary length or capillary constant is a length scaling factor that relates gravity and surface tension. It is a fundamental physical property that governs the behavior of menisci, and is found when body forces (gravity) and surface forces (Laplace pressure) are in equilibrium.
The pressure of a static fluid does not depend on the shape, total mass or surface area of the fluid. It is directly proportional to the fluid's specific weight – the force exerted by gravity over a specific volume – and its vertical height. However, a fluid also experiences pressure that is induced by surface tension, commonly referred to as the Young–Laplace pressure. Surface tension originates from cohesive forces between molecules, and in the bulk of the fluid, molecules experience attractive forces from all directions. The surface of a fluid is curved because exposed molecules on the surface have fewer neighboring interactions, resulting in a net force that contracts the surface. There exists a pressure difference on either side of this curvature, and when this balances out the pressure due to gravity, one can rearrange to find the capillary length.
In the case of a fluid–fluid interface, for example a drop of water immersed in another liquid, the capillary length, denoted $\lambda_{\rm c}$ or $l_{\rm c}$, is most commonly given by the formula
$\lambda_{\rm c} = \sqrt{\frac{\gamma}{\Delta\rho\, g}}$,
where $\gamma$ is the surface tension of the fluid interface, $g$ is the gravitational acceleration and $\Delta\rho$ is the mass density difference of the fluids. The capillary length is sometimes denoted $\kappa^{-1}$ in relation to the mathematical notation for curvature. The term capillary constant is somewhat misleading, because it is important to recognize that $\lambda_{\rm c}$ is a composition of variable quantities: for example, the value of the surface tension will vary with temperature, and the density difference will change depending on the fluids involved at the interface. However, if these conditions are known, the capillary length can be considered a constant for any given liquid, and be used in numerous fluid mechanical problems to scale the derived equations such that they are valid for any fluid. For molecular fluids, the interfacial tensions and density differences are typically of the order of tens of mN m−1 and of 1 g mL−1 respectively, resulting in a capillary length of about 2.7 mm for water and air at room temperature on earth. On the other hand, the capillary length would be roughly 6–7 mm for water–air on the moon. For a soap bubble, the surface tension must be divided by the mean film thickness, resulting in a capillary length of the order of meters in air. The equation for $\lambda_{\rm c}$ can also be found with an extra term, most often used when normalising the capillary height.
Origin
Theoretical
One way to derive the capillary length theoretically is to imagine a liquid droplet at the point where surface tension balances gravity.
Let there be a spherical droplet with radius $R$.
The characteristic Laplace pressure $P_{\gamma}$, due to surface tension, is equal to
$P_{\gamma} = \frac{2\gamma}{R}$,
where $\gamma$ is the surface tension. The pressure due to gravity (hydrostatic pressure) of a column of liquid is given by
$P_{h} = \rho g h$,
where $\rho$ is the droplet density, $g$ the gravitational acceleration, and $h$ is the height of the droplet.
At the point where the Laplace pressure balances out the pressure due to gravity, $P_{\gamma} = P_{h}$, and taking the height of the droplet to be comparable to its diameter, $h = 2R$,
$\frac{2\gamma}{R} = \rho g \,(2R) \quad\Longrightarrow\quad R = \sqrt{\frac{\gamma}{\rho g}} = \lambda_{\rm c}$.
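As a quick numerical check (values inserted here for illustration: $\gamma \approx 72$ mN m−1, $\rho \approx 1000$ kg m−3, $g \approx 9.81$ m s−2),
$\lambda_{\rm c} = \sqrt{\frac{0.072}{1000 \times 9.81}}\ \mathrm{m} \approx 2.7\ \mathrm{mm}$,
which is the millimeter scale quoted for water and air above.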
Relationship with the Eötvös number
The above derivation can be used when dealing with the Eötvös number, a dimensionless quantity that represents the ratio between the gravitational forces and the surface tension of the liquid. Although it was introduced by Loránd Eötvös in 1886, the quantity has since become more closely associated with Wilfrid Noel Bond, and it is now generally referred to as the Bond number in recent literature.
The Bond number can be written so that it includes a characteristic length, normally the radius of curvature of the liquid, and the capillary length:
$\mathrm{Bo} = \frac{\Delta\rho\, g R^{2}}{\gamma}$,
with parameters defined above and $R$ the radius of curvature.
Therefore the Bond number can be written as
$\mathrm{Bo} = \left(\frac{R}{\lambda_{\rm c}}\right)^{2}$,
with $\lambda_{\rm c}$ the capillary length.
If the bond number is set to 1, then the characteristic length is the capillary length.
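For example (a numerical illustration added here), a water drop of radius 1 mm in air has
$\mathrm{Bo} = \left(\frac{R}{\lambda_{\rm c}}\right)^{2} = \left(\frac{1\ \mathrm{mm}}{2.7\ \mathrm{mm}}\right)^{2} \approx 0.14$,
so surface tension dominates and the drop stays nearly spherical, whereas a centimeter-sized drop has $\mathrm{Bo} \approx 14$ and is flattened by gravity.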
Experimental
The capillary length can also be found through the manipulation of many different physical phenomena. One method is to focus on capillary action, which is the attraction of a liquid's surface to a surrounding solid.
Association with Jurin's law
Jurin's law is a quantitative law that shows that the maximum height that can be achieved by a liquid in a capillary tube is inversely proportional to the diameter of the tube. The law can be illustrated mathematically during capillary uplift, which is a traditional experiment measuring the height of a liquid in a capillary tube. When a capillary tube is inserted into a liquid, the liquid will rise or fall in the tube, due to an imbalance in pressure. The characteristic height is the distance from the bottom of the meniscus to the base, and exists when the Laplace pressure and the pressure due to gravity are balanced. One can reorganize to show the capillary length as a function of surface tension and gravity,
$h = \frac{2\gamma \cos\theta}{\rho g\, r} \quad\Longleftrightarrow\quad \lambda_{\rm c}^{2} = \frac{h\, r}{2\cos\theta}$,
with $h$ the height of the liquid, $r$ the radius of the capillary tube, and $\theta$ the contact angle.
The contact angle is defined as the angle formed by the intersection of the liquid-solid interface and the liquid–vapour interface. The size of the angle quantifies the wettability of the liquid, i.e., the interaction between the liquid and the solid surface. A contact angle of $\theta = 0$ can be considered perfect wetting, giving
$\lambda_{\rm c}^{2} = \frac{h\, r}{2}$.
Thus $\lambda_{\rm c}$ forms a cyclical three-factor equation with $h$ and $r$.
This property is usually used by physicists to estimate the height a liquid will rise in a particular capillary tube of known radius, without the need for an experiment. When the characteristic height of the liquid is sufficiently less than the capillary length, the effect of hydrostatic pressure due to gravity can be neglected.
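As a worked example (numbers assumed here for illustration), for water ($\lambda_{\rm c} \approx 2.7$ mm) in a clean glass capillary of radius $r = 0.5$ mm with $\theta \approx 0$,
$h = \frac{2\lambda_{\rm c}^{2}\cos\theta}{r} \approx \frac{2 \times (2.7\ \mathrm{mm})^{2}}{0.5\ \mathrm{mm}} \approx 29\ \mathrm{mm}$.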
Using the same premises of capillary rise, one can find the capillary length as a function of the volume increase, and wetting perimeter of the capillary walls.
Association with a sessile droplet
Another way to find the capillary length is to use different pressure points inside a sessile droplet, each point having a radius of curvature, and equate them through the Laplace pressure equation. This time the equation is solved for the height of the meniscus level, which again can be used to give the capillary length.
The shape of a sessile droplet is directly proportional to whether the radius is greater than or less than the capillary length. Microdrops are droplets with radius smaller than the capillary length, and their shape is governed solely by surface tension, forming a spherical cap shape. If a droplet has a radius larger than the capillary length, they are known as macrodrops and the gravitational forces will dominate. Macrodrops will be 'flattened' by gravity and the height of the droplet will be reduced.
History
The investigations in capillarity stem back as far as Leonardo da Vinci; however, the idea of capillary length was not developed until much later. Fundamentally the capillary length is a product of the work of Thomas Young and Pierre Laplace. They both appreciated that surface tension arose from cohesive forces between particles and that the shape of a liquid's surface reflected the short range of these forces. At the turn of the 19th century they independently derived pressure equations, but due to notation and presentation, Laplace often gets the credit. The equation showed that the pressure within a curved surface between two static fluids is always greater than that outside of a curved surface, but the pressure will decrease to zero as the radius approaches infinity. Since the force is perpendicular to the surface and acts towards the centre of the curvature, a liquid will rise when the surface is concave and depress when convex. This was a mathematical explanation of the work published by James Jurin in 1719, where he quantified a relationship between the maximum height taken by a liquid in a capillary tube and its diameter – Jurin's law. The capillary length evolved from the use of the Laplace pressure equation at the point it balanced the pressure due to gravity, and is sometimes called the Laplace capillary constant, after being introduced by Laplace in 1806.
In nature
Bubbles
Like a droplet, bubbles are round because cohesive forces pull their molecules into the tightest possible grouping, a sphere. Due to the trapped air inside the bubble, it is impossible for the surface area to shrink to zero, hence the pressure inside the bubble is greater than outside; if the pressures were equal, the bubble would simply collapse. This pressure difference can be calculated from Laplace's pressure equation,
$\Delta P = \frac{2\gamma}{R}$.
For a soap bubble, there exist two boundary surfaces, internal and external, and therefore two contributions to the excess pressure, and Laplace's formula doubles to
$\Delta P = \frac{4\gamma}{R}$.
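For a sense of scale (a numerical illustration added here, taking a typical soap-solution surface tension of roughly 25 mN m−1), a soap bubble of radius 1 cm carries an excess pressure of only
$\Delta P = \frac{4 \times 0.025\ \mathrm{N\,m^{-1}}}{0.01\ \mathrm{m}} = 10\ \mathrm{Pa}$.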
The capillary length can then be worked out in much the same way, except that the thickness of the film, $e$, must be taken into account, as the bubble has a hollow center, unlike the droplet, which is solid. Instead of a solid sphere of liquid as in the above derivation, the weight of a bubble resides entirely in its thin liquid film, whose weight per unit area is $\rho g e$,
with $R$ and $e$ the radius and thickness of the bubble respectively.
As above, the Laplace and hydrostatic pressures are equated, resulting in the order-of-magnitude estimate
$\frac{4\gamma}{R} \sim \rho g e \quad\Longrightarrow\quad R_{\max} \sim \frac{4\gamma}{\rho g e}$,
consistent with dividing the surface tension by the mean film thickness as noted above. Thus the capillary length contributes to a physicochemical limit that dictates the maximum size a soap bubble can take.
See also
Capillarity
Surface tension
Pressure
Bond number
Jurin's law
Young–Laplace equation
References
Fluid dynamics | Capillary length | [
"Chemistry",
"Engineering"
] | 1,937 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
17,598,460 | https://en.wikipedia.org/wiki/Clique-sum | In graph theory, a branch of mathematics, a clique sum (or clique-sum) is a way of combining two graphs by gluing them together at a clique, analogous to the connected sum operation in topology. If two graphs G and H each contain cliques of equal size, the clique-sum of G and H is formed from their disjoint union by identifying pairs of vertices in these two cliques to form a single shared clique, and then deleting all the clique edges (the original definition, based on the notion of set sum) or possibly deleting some of the clique edges (a loosening of the definition). A k-clique-sum is a clique-sum in which both cliques have exactly (or sometimes, at most) k vertices. One may also form clique-sums and k-clique-sums of more than two graphs, by repeated application of the clique-sum operation.
Different sources disagree on which edges should be removed as part of a clique-sum operation. In some contexts, such as the decomposition of chordal graphs or strangulated graphs, no edges should be removed. In other contexts, such as the SPQR-tree decomposition of graphs into their 3-vertex-connected components, all edges should be removed. And in yet other contexts, such as the graph structure theorem for minor-closed families of simple graphs, it is natural to allow the set of removed edges to be specified as part of the operation.
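For concreteness, a small Python sketch of the operation is given below (an illustration added here, not part of the article; the adjacency-dictionary representation, the function name clique_sum, and the optional delete_edges argument are arbitrary choices). It identifies the listed clique vertices of the two graphs and optionally deletes a chosen subset of the shared clique edges, matching the loosened definition discussed above.
def clique_sum(g, h, clique_g, clique_h, delete_edges=frozenset()):
    # g and h are graphs given as dicts mapping each vertex to a set of neighbours.
    # clique_g and clique_h list corresponding vertices of a clique in g and in h;
    # the vertices of h are relabelled so that the two cliques are identified.
    if len(clique_g) != len(clique_h):
        raise ValueError("the two cliques must have the same size")
    rename = dict(zip(clique_h, clique_g))
    for v in h:
        if v not in rename:
            rename[v] = ("h", v)  # fresh label that cannot clash with g
    # Start from a copy of g, then add the relabelled edges of h.
    result = {v: set(nbrs) for v, nbrs in g.items()}
    for v, nbrs in h.items():
        result.setdefault(rename[v], set()).update(rename[u] for u in nbrs)
    # Make the adjacency relation symmetric.
    for v, nbrs in list(result.items()):
        for u in nbrs:
            result.setdefault(u, set()).add(v)
    # Optionally delete some of the shared clique edges (given as frozensets of endpoints).
    for edge in delete_edges:
        u, v = tuple(edge)
        result[u].discard(v)
        result[v].discard(u)
    return result
# Example: the 2-clique-sum of two triangles along an identified edge.
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
h = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}
diamond = clique_sum(g, h, [0, 1], ["a", "b"])  # keeping the shared edge gives the diamond graph
cycle = clique_sum(g, h, [0, 1], ["a", "b"], {frozenset((0, 1))})  # deleting it gives a four-cycle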
Related concepts
Clique-sums have a close connection with treewidth: If two graphs have treewidth at most k, so does their k-clique-sum. Every tree is the 1-clique-sum of its edges. Every series–parallel graph, or more generally every graph with treewidth at most two, may be formed as a 2-clique-sum of triangles. The same type of result extends to larger values of k: every graph with treewidth at most k may be formed as a clique-sum of graphs with at most k + 1 vertices; this is necessarily a k-clique-sum.
There is also a close connection between clique-sums and graph connectivity: if a graph is not (k + 1)-vertex-connected (so that there exists a set of k vertices the removal of which disconnects the graph) then it may be represented as a k-clique-sum of smaller graphs. For instance, the SPQR tree of a biconnected graph is a representation of the graph as a 2-clique-sum of its triconnected components.
Application in graph structure theory
Clique-sums are important in graph structure theory, where they are used to characterize certain families of graphs as the graphs formed by clique-sums of simpler graphs. The first result of this type was a theorem of Wagner (1937), who proved that the graphs that do not have a five-vertex complete graph as a minor are the 3-clique-sums of planar graphs with the eight-vertex Wagner graph; this structure theorem can be used to show that the four color theorem is equivalent to the case k = 5 of the Hadwiger conjecture. The chordal graphs are exactly the graphs that can be formed by clique-sums of cliques without deleting any edges, and the strangulated graphs are the graphs that can be formed by clique-sums of cliques and maximal planar graphs without deleting edges. The graphs in which every induced cycle of length four or greater forms a minimal separator of the graph (its removal partitions the graph into two or more disconnected components, and no subset of the cycle has the same property) are exactly the clique-sums of cliques and maximal planar graphs, again without edge deletions. Clique-sums of chordal graphs and series–parallel graphs have also been used to characterize the partial matrices having positive definite completions.
It is possible to derive a clique-sum decomposition for any graph family closed under graph minor operations: the graphs in every minor-closed family may be formed from clique-sums of graphs that are "nearly embedded" on surfaces of bounded genus, meaning that the embedding is allowed to omit a small number of apexes (vertices that may be connected to an arbitrary subset of the other vertices) and vortices (graphs with low pathwidth that replace faces of the surface embedding). These characterizations have been used as an important tool in the construction of approximation algorithms and subexponential-time exact algorithms for NP-complete optimization problems on minor-closed graph families.
Generalizations
The theory of clique-sums may also be generalized from graphs to matroids. Notably, Seymour's decomposition theorem characterizes the regular matroids (the matroids representable by totally unimodular matrices) as the 3-sums of graphic matroids (the matroids representing spanning trees in a graph), cographic matroids, and a certain 10-element matroid.
Notes
References
Graph operations
Graph minor theory | Clique-sum | [
"Mathematics"
] | 1,069 | [
"Mathematical relations",
"Graph minor theory",
"Graph theory",
"Graph operations"
] |
17,599,513 | https://en.wikipedia.org/wiki/Nitinol%20biocompatibility | Nitinol biocompatibility is an important factor in biomedical applications. Nitinol (NiTi), which is formed by alloying nickel and titanium (~ 50% Ni), is a shape-memory alloy with superelastic properties more similar to that of bone, when compared to stainless steel, another commonly used biomaterial. Biomedical applications that utilize nitinol include stents, heart valve tools, bone anchors, staples, septal defect devices and implants. It is a commonly used biomaterial especially in the development of stent technology.
Metal implants containing a combination of biocompatible metals or used in conjunction with other biomaterials are often considered the standard for many implant types. Passivation is a process that removes corrosive implant elements from the implant-body interface and creates an oxide layer on the surface of the implant. The process is important for making biomaterials more biocompatible.
Overview of common passivation methods
When materials are introduced to the body it is important not only that the material does not damage the body, but also that the environment of the body does not damage the implant. One method that prevents the negative effects resulting from this interaction is called passivation.
In general, passivation is considered to be a process that creates a non-reactive layer at the surface of materials, such that the material may be protected from damage caused by the environment. Passivation can be accomplished through many mechanisms. Passive layers can be made through the assembly of monolayers through polymer grafting. Often, for corrosion protection, passive layers are created through the formation of oxide or nitride layers at the surface.
Oxide films
Passivation often occurs naturally in some metals like titanium, a metal that often forms an oxide layer mostly composed of TiO2. This process occurs spontaneously as the enthalpy of formation of TiO2 is negative. In alloys, such as nitinol, the formation of an oxide layer not only protects against corrosion, but also removes Ni atoms from the surface of the material. Removing certain elements from the surface of materials is another form of passivation. In nitinol, the removal of Ni is important, because Ni is toxic if leached into the body. Stainless steel is commonly passivated by the removal of iron from the surface through the use of acids and heat. Nitric acid is commonly used as a mild oxidant to create the thin oxide film on the surface of materials that protects against corrosion.
Electropolishing
Another mode of passivation involves polishing. Mechanical polishing removes many surface impurities and crystal structure breaks that may promote corrosion. Electropolishing is even more effective, because it does not leave the scratches that mechanical polishing will. Electropolishing is accomplished by creating electrochemical cells where the material of interest is used as the anode. The surface will have jagged qualities where certain points are higher than others. In this cell the current density will be higher at the higher points and cause those points to dissolve at a higher rate than the lower points, thus smoothing the surface. Crystal lattice point impurities will also be removed as the current will force these high-energy impurities to dissolve from the surface.
Coatings
Another commonly used method of passivation is accomplished through coating the material with polymer layers. Layers composed of polyurethane have been used to improve biocompatibility, but have seen limited success. Coating materials with biologically similar molecules has seen much better success. For example, phosphorylcholine surface modified stents have exhibited reduced thrombogenic activity. Passivation is an extremely important area of research for biomedical applications, as the body is a harsh environment for materials and materials can damage the body through leaching and corrosion. All of the above passivation methods have been used in the development of nitinol biomaterials to produce the most biocompatible implants.
Influence of surface passivation on biocompatibility
Surface passivation techniques can greatly increase the corrosion resistance of nitinol. In order for nitinol to have the desired superelastic and shape memory properties, heat treatment is required. After heat treatment, the surface oxide layer contains a larger concentration of nickel in the form of NiO2 and NiO. This increase in nickel has been attributed to the diffusion of nickel out of the bulk material and into the surface layer during elevated temperature treatments. Surface characterization methods have shown that some surface passivation treatments decrease the concentration of NiO2 and NiO within the surface layer, leaving a higher concentration of the more stable TiO2 than in raw, heat-treated nitinol.
The decrease in nickel concentration in the surface layer of nitinol is correlated with a greater corrosion resistance. A potentiodynamic test is commonly employed to measure a material’s resistance to corrosion. This test determines the electrical potential at which a material begins to corrode. The measurement is called the pitting or breakdown potential. After passivation in a nitric acid solution, nitinol stent components showed significantly higher breakdown potentials than those that were unpassivated. In fact, there are many surface treatments that can greatly enhance the breakdown potentials of nitinol. These treatments include mechanical polishing, electropolishing, and chemical treatments such as nitric oxide submersion, etching of the raw surface oxide layer, and pickling to break down bulk material near the surface.
Thrombogenicity, a material’s tendency to induce clot formation, is an important factor that determines the biocompatibility of any biomaterial that comes into contact with the bloodstream. There are two proteins, fibrinogen and albumin, that first adsorb to the surface of a foreign object in contact with blood. It has been suggested that fibrinogen may cause platelet activation due to a breakdown of the protein structure as it interacts with high energy grain boundaries on certain surfaces. Albumin on the other hand, inhibits platelet activation. This implies that there are two mechanisms which can help lower thrombogenicity, an amorphous surface layer where there will be no grain boundary interactions with fibrinogen, and a surface with a higher affinity to albumin than fibrinogen.
Just as thrombogenicity is important in determining the suitability of other biomaterials, it is equally important with nitinol as a stent material. Currently, when stents are implanted, the patient receives antiaggregant therapy for a year or more in order to prevent the formation of a clot near the stent. By the time the drug therapy has ceased, ideally, a layer of endothelial cells, which line the inside of blood vessels, would coat the outside of the stent. The stent is then effectively integrated into the surrounding tissue and no longer in direct contact with the blood. There have been many attempts made using surface treatments to create stents that are more biocompatible and less thrombogenic, in an attempt to reduce the need for extensive antiplatelet therapy. Surface layers that are higher in nickel concentration cause less clotting due to albumin’s affinity to nickel. This is the opposite of the surface layer characteristics that increase corrosion resistance. In vitro tests use indicators of thrombosis, such as platelet, Tyrosine aminotransferase, and β-TG levels. Surface treatments that have, to some extent, lowered thrombogenicity in vitro are:
Electropolishing
Sandblasting
Polyurethane coatings
Aluminum coatings
Another area of research involves binding various pharmaceutical agents such as heparin to the surface of the stent. These drug-eluting stents show promise in further reducing thrombogenicity while not compromising corrosion resistance.
Welding
New advances with micro laser welding have vastly improved the quality of medical devices made with nitinol.
Remarks
Nitinol is an important alloy for use in medical devices, due to its exceptional biocompatibility, especially in the areas of corrosion resistance and thrombogenicity. Corrosion resistance is enhanced through methods that produce a uniform titanium dioxide layer on the surface with very few defects and impurities. Thrombogenicity is lowered on nitinol surfaces that contain nickel, so processes that retain nickel oxides in the surface layer are beneficial. The use of coatings has also been shown to greatly improve biocompatibility.
Because implanted devices contact the surface of the material, surface science plays an integral role in research aimed at enhancing biocompatibility, and in the development of new biomaterials. The development and improvement of nitinol as an implant material, from characterizing and improving the oxide layer to developing coatings, has been based largely on surface science.
Research is underway to produce better, more biocompatible, coatings. This research involves producing a coating that is very much like biologic material in order to further lessen the foreign body reaction. Biocomposite coatings containing cells or protein coatings are being explored for use with nitinol as well as many other biomaterials.
Current research/further reading
The U.S. Food and Drug Administration lists recently approved stent technology on their Heart Health Online site.
Angioplasty.org contains information on current stent research, as well as other issues relating to the circulatory system. Including:
Drug-eluting Stents
Current developments in stent technology.
Advancements in circulatory imaging technology.
Use of biocomposites for medical applications:
Orthopaedic
Dental
ISO and FDA set standards for evaluating and determining biocompatibility. ISO 10993 Standards-"Biological Evaluation of Medical Devices"
References
Nickel–titanium alloys
Transplantation medicine
Immunology | Nitinol biocompatibility | [
"Biology"
] | 2,007 | [
"Immunology"
] |