id (int64, 39–79M) | url (string, lengths 31–227) | text (string, lengths 6–334k) | source (string, lengths 1–150, nullable) | categories (list, lengths 1–6) | token_count (int64, 3–71.8k) | subcategories (list, lengths 0–30) |
|---|---|---|---|---|---|---|
16,234,885 | https://en.wikipedia.org/wiki/Surgical%20planning | Surgical planning is the preoperative method of pre-visualising a surgical intervention in order to predefine the surgical steps and, in the context of computer-assisted surgery, the bone segment navigation.
Surgical planning is most important in neurosurgery and oral and maxillofacial surgery. The transfer of the surgical plan to the patient is generally made using a medical navigation system.
Principles of surgical planning
The imaging dataset used for surgical planning is mainly based on CT or MRI. In oral and maxillofacial surgery, a different, more "traditional" form of surgical planning can be used for orthognathic surgery, based on cast models fixed into an articulator.
History of the concept
Surgical planning requires a 3D image of the patient. The starting point was provided by G. Hounsfield in the 1970s, who used CT to record data about the anatomical situation of patients. In the 1980s, advances were made by the radiologist M. Vannier and his team, who created the first computed three-dimensional reconstruction from a CT dataset. In the early 1990s, surgical planning was performed using stereolithographic models. During the late 1990s, the first fully computer-based virtual surgical planning was carried out for osteotomies and then transferred to the operating theatre by a navigation system. Currently, 3D-printed models are also used to plan a procedure and improve patient outcomes.
The first commercially available neurosurgical planning systems appeared in the 1990s (the StealthStation by Medtronic, the VectorVision by Brainlab). As newer imaging modalities emerged providing increasing anatomical and functional detail for the patient in the 2000s, these surgical planning systems started to incorporate virtual reality technology to facilitate the visualisation and manipulation of the 3D data. One example of such systems is the Dextroscope, manufactured by Volume Interactions Pte Ltd. The Dextroscope is mostly used in the planning of complex neurosurgical procedures.
References
Oral and maxillofacial surgery
Health informatics
Radiology
Tomography
Computer-assisted surgery | Surgical planning | [
"Biology"
] | 429 | [
"Health informatics",
"Medical technology"
] |
16,234,982 | https://en.wikipedia.org/wiki/3D%20reconstruction | In computer vision and computer graphics, 3D reconstruction is the process of capturing the shape and appearance of real objects.
This process can be accomplished either by active or passive methods. If the model is allowed to change its shape in time, this is referred to as non-rigid or spatio-temporal reconstruction.
Motivation and applications
3D reconstruction has long been a challenging research goal. Using 3D reconstruction, one can determine an object's 3D profile as well as the 3D coordinates of any point on that profile. The 3D reconstruction of objects is a general scientific problem and a core technology of a wide variety of fields, such as Computer Aided Geometric Design (CAGD), computer graphics, computer animation, computer vision, medical imaging, computational science, virtual reality, digital media, etc. For instance, a patient's lesion information can be presented in 3D on the computer, which offers a new and accurate approach to diagnosis and thus has vital clinical value. Digital elevation models can be reconstructed using methods such as airborne laser altimetry or synthetic aperture radar.
Active methods
Active methods, i.e. range data methods, reconstruct the 3D profile from a given depth map by a numerical approximation approach and build the object in the scene based on the model. These methods actively interfere with the reconstructed object, either mechanically or radiometrically, using rangefinders to acquire the depth map, e.g. structured light, laser range finders and other active sensing techniques. A simple example of a mechanical method would use a depth gauge to measure the distance to a rotating object placed on a turntable. More applicable radiometric methods emit radiation towards the object and then measure its reflected part. Examples range from moving light sources, colored visible light and time-of-flight lasers to microwaves or 3D ultrasound. See 3D scanning for more details.
Passive methods
Passive methods of 3D reconstruction do not interfere with the reconstructed object; they only use a sensor to measure the radiance reflected or emitted by the object's surface to infer its 3D structure through image understanding. Typically, the sensor is an image sensor in a camera sensitive to visible light and the input to the method is a set of digital images (one, two or more) or video. In this case we talk about image-based reconstruction and the output is a 3D model. By comparison to active methods, passive methods can be applied to a wider range of situations.
Monocular cues methods
Monocular cues methods use one or more images from a single viewpoint (camera) to proceed to 3D reconstruction. They make use of 2D characteristics (e.g. silhouettes, shading and texture) to measure 3D shape, which is why this family is also named shape-from-X, where X can be silhouettes, shading, texture, etc. 3D reconstruction through monocular cues is simple and quick, and only one appropriate digital image is needed, so a single camera is adequate. Technically, it avoids stereo correspondence, which is fairly complex.
Shape-from-shading By analysing the shading information in the image and assuming Lambertian reflectance, the normal (and hence depth) information of the object surface is recovered for reconstruction.
Photometric Stereo This approach is more sophisticated than shape-from-shading. Images taken under different lighting conditions are used to solve for the depth and surface orientation. Note that more than one image is required by this approach.
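As an illustration of the idea, here is a minimal Python/numpy sketch of Lambertian photometric stereo, assuming distant point lights with known unit directions; the function and variable names are illustrative rather than taken from any particular library:

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Estimate per-pixel surface normals and albedo from k images taken
    under k known, distant lighting directions (Lambertian model).

    images:     array of shape (k, h, w) with pixel intensities
    light_dirs: array of shape (k, 3) with unit lighting directions
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                 # stack intensities as a (k, h*w) matrix
    # Lambertian model: I = L @ G, where G = albedo * normal; solve in least squares.
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # shape (3, h*w)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)                # unit surface normals
    return normals.reshape(3, h, w), albedo.reshape(h, w)
```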
Shape-from-texture Suppose an object has a smooth surface covered by replicated texture units; its projection from 3D to 2D then introduces distortion and perspective effects. The distortion and perspective measured in the 2D image provide cues for inversely solving for the normal and depth information of the object surface.
Machine Learning Based Solutions Machine learning enables learning the correspondence between subtle features in the input and the respective 3D equivalent. Deep neural networks have been shown to be highly effective for 3D reconstruction from a single color image. This works even for non-photorealistic input images such as sketches. Thanks to the high level of accuracy in the reconstructed 3D features, deep learning based methods have been employed for biomedical engineering applications, for example to reconstruct CT imagery from X-rays.
Stereo vision
Stereo vision obtains the 3-dimensional geometric information of an object from multiple images, inspired by research on the human visual system. The results are presented in the form of depth maps. Images of an object acquired by two cameras simultaneously from different viewing angles, or by a single camera at different times from different viewing angles, are used to restore its 3D geometric information and reconstruct its 3D profile and location. This is more direct than monocular methods such as shape-from-shading.
The binocular stereo vision method requires two identical cameras with parallel optical axes observing the same object, acquiring two images from different points of view. In terms of trigonometric relations, depth information can be calculated from the disparity. The binocular stereo vision method is well developed and reliably produces favorable 3D reconstructions, performing better than many other 3D reconstruction approaches. Unfortunately, it is computationally intensive, and it performs rather poorly when the baseline distance is large.
Problem statement and basics
The approach of using binocular stereo vision to acquire an object's 3D geometric information is based on visual disparity. In a simple schematic diagram of horizontally aligned binocular stereo vision, b is the baseline between the projective centers of the two cameras.
The origin of the camera's coordinate system is at the optical center of the camera's lens. Actually, the camera's image plane is behind the optical center of the lens; to simplify the calculation, however, the image plane is drawn in front of the optical center at the focal distance f. The u-axis and v-axis of the image coordinate system have the same directions as the x-axis and y-axis of the camera coordinate system respectively, and the origin of the image coordinate system lies at the intersection of the imaging plane and the optical axis. Suppose a world point P(x, y, z) has corresponding image points p1(u1, v1) and p2(u2, v2) on the left and right image planes respectively. Assuming the two cameras lie in the same plane, the y-coordinates of p1 and p2 are identical, i.e. v1 = v2. According to the trigonometric (similar-triangle) relations,

u1 = f·x/z, u2 = f·(x − b)/z, v1 = v2 = f·y/z,

where (x, y, z) are the coordinates of P in the left camera's coordinate system and f is the focal length of the camera.

Visual disparity is defined as the difference in image point location of a certain world point acquired by the two cameras,

d = u1 − u2 = f·b/z,

based on which the coordinates of P can be worked out:

z = f·b/d, x = u1·z/f, y = v1·z/f.

Therefore, once the coordinates of the image points and the parameters of the two cameras are known, the 3D coordinates of the point can be determined.
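The relations above translate directly into code; the short sketch below (assuming rectified cameras with the same focal length f and baseline b, with hypothetical function names) recovers the 3D coordinates of a matched pixel pair:

```python
import numpy as np

def triangulate_point(u1, v1, u2, f, b):
    """Recover 3D coordinates, in the left camera frame, of a point imaged at
    (u1, v1) in the left view and (u2, v2 = v1) in the right view.

    f: focal length (in the same units as the image coordinates, e.g. pixels)
    b: baseline between the two projective centers
    """
    d = u1 - u2                      # visual disparity
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    z = f * b / d                    # depth from disparity
    return np.array([u1 * z / f, v1 * z / f, z])

# Example: f = 700 px, b = 0.12 m, matched pixels (320, 40) and (290, 40) -> depth 2.8 m
print(triangulate_point(320.0, 40.0, 290.0, f=700.0, b=0.12))
```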
The 3D reconstruction consists of the following sections:
Image acquisition
2D digital image acquisition is the information source of 3D reconstruction. Commonly used 3D reconstruction is based on two or more images, although it may employ only one image in some cases. The various methods of image acquisition depend on the occasion and purpose of the specific application. Not only must the requirements of the application be met, but the visual disparity, illumination, camera performance and features of the scene should also be considered.
Camera calibration
Camera calibration in binocular stereo vision refers to determining the mapping relationship between the image points in the two views and the space coordinates of the corresponding points in the 3D scene. Camera calibration is a basic and essential part of 3D reconstruction via binocular stereo vision.
Feature extraction
The aim of feature extraction is to obtain the characteristics of the images on which the stereo correspondence process operates. As a result, the characteristics of the images are closely linked to the choice of matching method. There is no universally applicable theory for feature extraction, which has led to a great diversity of stereo correspondence approaches in binocular stereo vision research.
Stereo correspondence
Stereo correspondence establishes the correspondence between primitive elements in the images, i.e. it matches the image points of the same world point across the two images. Certain interfering factors in the scene should be taken into account, e.g. illumination, noise and the physical characteristics of the surface.
Restoration
From a precise correspondence, combined with the camera location parameters, the 3D geometric information can be recovered without difficulty. Because the accuracy of the 3D reconstruction depends on the precision of the correspondence, the error in the camera location parameters and so on, the previous procedures must be carried out carefully to achieve a relatively accurate 3D reconstruction.
3D Reconstruction of medical images
Clinical routines of diagnosis, patient follow-up, computer-assisted surgery, surgical planning, etc. are facilitated by accurate 3D models of the desired part of the human anatomy. The main motivations behind 3D reconstruction include:
Improved accuracy due to multi-view aggregation.
Detailed surface estimates.
Can be used to plan, simulate, guide, or otherwise assist a surgeon in performing a medical procedure.
The precise position and orientation of the patient's anatomy can be determined.
Helps in a number of clinical areas, such as radiotherapy planning and treatment verification, spinal surgery, hip replacement, neurointerventions and aortic stenting.
Applications:
3D reconstruction has applications in many fields. They include:
Pavement engineering
Medicine
Free-viewpoint video reconstruction
Robotic mapping
City planning
Tomographic reconstruction
Gaming
Virtual environments and virtual tourism
Earth observation
Archaeology
Augmented reality
Reverse engineering
Motion capture
3D object recognition, gesture recognition and hand tracking
Problem Statement:
Most algorithms available for 3D reconstruction are extremely slow and cannot be used in real time. Though the algorithms presented are still in their infancy, they have the potential for fast computation.
Existing Approaches:
Delaunay and alpha-shapes
The Delaunay method involves extracting tetrahedra from the initial point cloud. The idea of a 'shape' for a set of points in space is given by the concept of alpha-shapes. Given a finite point set S and the real parameter alpha, the alpha-shape of S is a polytope (the generalization to any dimension of a two-dimensional polygon and a three-dimensional polyhedron) which is neither necessarily convex nor necessarily connected. For a large enough value of alpha, the alpha-shape is identical to the convex hull of S. The algorithm proposed by Edelsbrunner and Mücke eliminates all tetrahedra that are delimited by a surrounding sphere larger than α. The surface is then obtained from the external triangles of the resulting tetrahedra.
Another algorithm, called Tight Cocone, labels the initial tetrahedra as interior and exterior. The triangles on the boundary between the interior and exterior tetrahedra generate the resulting surface.
Both methods have recently been extended to reconstruct point clouds with noise. In these methods the quality of the points determines the feasibility of the method. Since the whole point cloud is used for the precise triangulation, points on the surface with an error above the threshold will be explicitly represented in the reconstructed geometry.
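A rough Python sketch of the alpha criterion described above, using SciPy's Delaunay triangulation: it keeps only tetrahedra whose circumsphere radius is below alpha and returns their boundary triangles. This is a simplified illustration, not the full Edelsbrunner-Mücke algorithm:

```python
import numpy as np
from scipy.spatial import Delaunay
from itertools import combinations
from collections import Counter

def alpha_surface(points, alpha):
    """Keep Delaunay tetrahedra with circumsphere radius < alpha and return
    the triangles that lie on the boundary of the kept tetrahedra."""
    kept = []
    for simplex in Delaunay(points).simplices:           # each simplex: 4 point indices
        p = points[simplex]
        # circumcentre c solves 2*(p_i - p_0) . c = |p_i|^2 - |p_0|^2 for i = 1..3
        A = 2.0 * (p[1:] - p[0])
        rhs = np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)
        try:
            c = np.linalg.solve(A, rhs)
        except np.linalg.LinAlgError:
            continue                                      # skip degenerate tetrahedra
        if np.linalg.norm(c - p[0]) < alpha:
            kept.append(simplex)
    # a face belonging to exactly one kept tetrahedron lies on the surface
    faces = Counter(tuple(sorted(f)) for s in kept for f in combinations(s, 3))
    return [f for f, n in faces.items() if n == 1]

print(len(alpha_surface(np.random.rand(200, 3), alpha=0.3)), "boundary triangles")
```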
Zero set Methods
Reconstruction of the surface is performed using a distance function which assigns to each point in space a signed distance to the surface S. A contouring algorithm is used to extract the zero set, which is used to obtain a polygonal representation of the object. Thus, the problem of reconstructing a surface from a disorganized point cloud is reduced to defining an appropriate function f with a zero value at the sampled points and a nonzero value elsewhere. The marching cubes algorithm established the use of such methods. There are different variants of this approach: some use a discrete function f, while others adjust a polyharmonic radial basis function to the initial point set. Approaches based on moving least squares, basis functions with local support, and the Poisson equation have also been used. Loss of geometric precision in areas of extreme curvature, i.e. corners and edges, is one of the main issues encountered. Furthermore, pre-processing of the data by applying some kind of filtering technique also affects the definition of the corners by softening them. There are several studies on post-processing techniques used in the reconstruction for the detection and refinement of corners, but these methods increase the complexity of the solution.
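As a minimal illustration of the zero-set idea, the sketch below samples an analytic signed distance function on a grid and extracts its zero level set with marching cubes (via scikit-image); in a real reconstruction the function f would instead be fitted to the point cloud, e.g. with radial basis functions or a Poisson solve:

```python
import numpy as np
from skimage import measure   # scikit-image

# Signed distance function of a sphere of radius 0.3: negative inside, positive outside.
x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
f = np.sqrt(x**2 + y**2 + z**2) - 0.3

# Marching cubes turns the zero level set of f into a triangle mesh.
verts, faces, normals, values = measure.marching_cubes(f, level=0.0)
print(verts.shape, faces.shape)   # mesh vertices and triangles approximating f = 0
```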
VR Technique
In the volume rendering (VR) technique, the entire volume of the object is visualized as semi-transparent. Images are formed by projecting rays through the volume data; along each ray, opacity and color need to be calculated at every voxel, and the information calculated along each ray is then aggregated into a pixel on the image plane. This technique lets us see the entire compact structure of the object comprehensively. Since the technique requires an enormous amount of calculation and therefore powerful computers, it is appropriate for low-contrast data. Two main methods for projecting rays can be considered as follows:
Object-order method: Projecting rays go through volume from back to front (from volume to image plane).
Image-order or ray-casting method: Projecting rays go through the volume from front to back (from the image plane to the volume). There exist other methods of compositing the image, and the appropriate method depends on the user's purposes. Common methods in medical imaging are MIP (maximum intensity projection), MinIP (minimum intensity projection), AC (alpha compositing) and NPVR (non-photorealistic volume rendering).
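For parallel rays cast along a grid axis, two of these compositing rules reduce to a few lines of array code; the sketch below (illustrative names, grey values only) shows MIP and simple front-to-back alpha compositing:

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum intensity projection: each output pixel keeps the largest
    voxel value along its (axis-aligned) ray."""
    return volume.max(axis=axis)

def alpha_composite(volume, opacity, axis=0):
    """Front-to-back alpha compositing of grey values along one axis."""
    vol = np.moveaxis(volume, axis, 0)
    op = np.moveaxis(opacity, axis, 0)
    color = np.zeros(vol.shape[1:])
    alpha = np.zeros(vol.shape[1:])
    for c, a in zip(vol, op):              # march through the volume slice by slice
        color += (1.0 - alpha) * a * c
        alpha += (1.0 - alpha) * a
    return color

vol = np.random.rand(32, 64, 64)
print(mip(vol).shape, alpha_composite(vol, 0.05 * np.ones_like(vol)).shape)
```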
Voxel Grid
In this filtering technique the input space is sampled using a grid of 3D voxels to reduce the number of points. For each voxel, a centroid is chosen as the representative of all the points lying within it. There are two approaches: selecting the center of the voxel or selecting the centroid of the points lying within the voxel. Computing the mean of the internal points has a higher computational cost but offers better results. Thus, a subset of the input space is obtained that roughly represents the underlying surface. The voxel grid method presents the same problems as other filtering techniques: the impossibility of defining the final number of points that represent the surface, loss of geometric information due to the reduction of the points inside each voxel, and sensitivity to noisy input spaces.
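A minimal numpy sketch of the voxel grid filter described above, supporting both choices of representative point (the names are illustrative; point-cloud libraries such as PCL and Open3D provide tuned versions of this filter):

```python
import numpy as np

def voxel_grid_filter(points, voxel_size, use_point_centroid=True):
    """Downsample a point cloud: every occupied voxel is replaced by one
    representative point, either the centroid of the points inside it
    (more accurate, more costly) or the center of the voxel itself."""
    idx = np.floor(points / voxel_size).astype(np.int64)          # voxel index of each point
    keys, inverse = np.unique(idx, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    if use_point_centroid:
        counts = np.bincount(inverse).astype(float)
        out = np.empty((len(keys), points.shape[1]))
        for dim in range(points.shape[1]):
            out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
        return out
    return (keys + 0.5) * voxel_size                              # voxel centers

cloud = np.random.rand(10000, 3)
print(voxel_grid_filter(cloud, 0.1).shape)
```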
See also
3D modeling
3D data acquisition and object reconstruction
3D reconstruction from multiple images
3D scanner
3D SEM surface reconstruction
4D reconstruction
Depth map
Kinect
Photogrammetry
Stereoscopy
Structure from motion
References
External links
Synthesizing 3D Shapes via Modeling Multi-View Depth Maps and Silhouettes with Deep Generative Networks - Generate and reconstruct 3D shapes via modeling multi-view depth maps or silhouettes.
http://www.nature.com/subjects/3d-reconstruction#news-and-comment
http://6.869.csail.mit.edu/fa13/lectures/lecture11shapefromX.pdf
http://research.microsoft.com/apps/search/default.aspx?q=3d+reconstruction
https://research.google.com/search.html#q=3D%20reconstruction
3D computer graphics
3D imaging
Computer vision | 3D reconstruction | [
"Engineering"
] | 3,000 | [
"Artificial intelligence engineering",
"Packaging machinery",
"Computer vision"
] |
16,239,927 | https://en.wikipedia.org/wiki/Nimrod%20%28synchrotron%29 | Nimrod (National Institute Machine Radiating on Downs, "the Mighty Hunter" Nimrod; name attributed to W. Galbraith) was a 7 GeV proton synchrotron operating at the Rutherford Appleton Laboratory in the United Kingdom between 1964 and 1978. Nimrod delivered its last particles at 17:00 hrs on 6 June 1978. Although roughly contemporary with the CERN PS, its conservative design used the "weak focussing" principle instead of the much more cost-effective "strong focussing" technique, which would have enabled a machine of the same cost to reach much higher energies.
The design and construction of Nimrod was carried out at a capital cost of approximately £11 million. It was used for studies of nuclear and sub-nuclear phenomena.
Nimrod was dismantled and the space it occupied reused for the synchrotron of the ISIS neutron source.
Magnet power supply
The magnet power supply included two motor-alternator-flywheel sets. Each drive motor was rated 5,000 HP, each flywheel weighed 30 tonnes, and each alternator was rated 60 MVA at 12.8 kV. Magnet currents would pulse at 10,550 A.
References
External links
http://www.isis.stfc.ac.uk/about-isis/target-station-2/publications/issue-1-september-20038209.pdf
Nuclear research institutes
Particle physics facilities
Research institutes in Oxfordshire
Synchrotron radiation facilities
Vale of White Horse | Nimrod (synchrotron) | [
"Materials_science",
"Engineering"
] | 301 | [
"Nuclear research institutes",
"Materials testing",
"Nuclear organizations",
"Synchrotron radiation facilities"
] |
16,242,787 | https://en.wikipedia.org/wiki/V725%20Sagittarii | V725 Sagittarii is a variable star in the southern constellation of Sagittarius. As recently as a century ago, it was a Population II Cepheid; its transformation was documented by Henrietta Swope beginning in 1937, and is one of the most exciting and instructive events in variable-star astronomy. The star has varied between apparent visual magnitude 12.3 and 14.3.
Prior to 1926, this star showed the appearance of being an irregular variable. It then became a Population II Cepheid showing a regular light curve with a period of 12 days. Monitoring showed a gradual increase to a 21 day period by 1935, but did not show a corresponding change in brightness. The star was mostly ignored until 1967–68 when it was seen to vary by 0.4 magnitude with a 50 day period. Steady observation thereafter showed that the star had experienced a thermal flash and performed a loop on the H-R diagram. It migrated from the asymptotic giant branch (AGB) to the Cepheid instability strip and then back to the AGB.
In 1973, the spectral class of V725 Sagittarii was estimated to be between F8 and G2 and similar to a type Ib supergiant. In 1994 it was observed to be G8 based on the spectral lines of metals and later than F8 based on the hydrogen lines. In 2006, it was reported that in 2000 V725 Sagittarii was an early M star with emission lines. In 2010, the spectral type was estimated from its colours and other properties to be K4III, although possibly late K.
References
External links
AAVSO Variable Star of the Season. Autumn 2006
Semiregular variable stars
Sagittarius (constellation)
Sagittarii, V725 | V725 Sagittarii | [
"Astronomy"
] | 371 | [
"Sagittarius (constellation)",
"Constellations"
] |
16,244,315 | https://en.wikipedia.org/wiki/Polar%20Class | Polar Class (PC) refers to the ice class assigned to a ship by a classification society based on the Unified Requirements for Polar Class Ships developed by the International Association of Classification Societies (IACS). Seven Polar Classes are defined in the rules, ranging from PC 1 for year-round operation in all polar waters to PC 7 for summer and autumn operation in thin first-year ice.
The IACS Polar Class rules should not be confused with International Code for Ships Operating in Polar Waters (Polar Code) by the International Maritime Organization (IMO).
Background
The development of the Polar Class rules began in the 1990s with an international effort to harmonize the requirements for marine operations in polar waters in order to protect life, property and the environment. The guidelines developed by the International Maritime Organization (IMO), which were later incorporated in the Polar Code, made reference to compliance with the Unified Requirements for Polar Ships developed by the International Association of Classification Societies (IACS). In May 1996, an "Ad-Hoc Group to establish Unified Requirements for Polar Ships (AHG/PSR)" was established, with one working group concentrating on the structural requirements and another working on machinery-related issues. The first IACS Polar Class rules were published in 2007.
Prior to the development of the unified requirements, each classification society had its own set of ice class rules, ranging from Baltic ice classes intended for operation in first-year ice to higher vessel categories, including icebreakers, intended for operations in polar waters. When developing the upper and lower boundaries for the Polar Classes, it was agreed that the highest Polar Class vessels (PC 1) should be capable of operating safely anywhere in Arctic or Antarctic waters at any time of the year, while the lower boundary was set to existing tonnage operating during the summer season, most of which followed the Baltic ice classes with some upgrades and additions. The lowest Polar Class (PC 7) was thus set to a level similar to the Finnish-Swedish ice class 1A. The definition of operational conditions for each Polar Class was intentionally left vague due to the wide variety of ship operations carried out in polar waters.
Definition
Polar Class notations
The IACS has established seven different Polar Class notations, ranging from PC 1 (highest) to PC 7 (lowest), with each level corresponding to the operational capability and strength of the vessel. The descriptions of the ice conditions in which ships of each Polar Class are intended to operate are based on the World Meteorological Organization (WMO) Sea Ice Nomenclature. These definitions are intended to guide owners, designers and administrations in selecting the appropriate Polar Class to match the intended voyage or service of the vessel. Ships with sufficient power and strength to undertake "aggressive operations in ice-covered waters", such as escort and ice management operations, can be assigned the additional notation "Icebreaker".
The two lowest Polar Classes (PC 6 and PC 7) are roughly equivalent to the two highest Finnish-Swedish ice classes (1A Super and 1A, respectively). However, unlike the Baltic ice classes intended for operation only in first-year sea ice, even the lowest Polar Classes consider the possibility of encountering multi-year ice ("old ice inclusions").
Requirements
In the Polar Class rules, the hull of the vessel is divided longitudinally into four regions: "bow", "bow intermediate", "midbody" and "stern". All longitudinal regions except the bow are further divided vertically into "bottom", "lower" and "icebelt" regions. For each region, a design ice load is calculated based on the dimensions, hull geometry, and ice class of the vessel. This ice load is then used to determine the scantlings and steel grades of structural elements such as shell plating and frames in each location. The design scenario used to determine the ice loads is a glancing collision with a floating ice floe.
In addition to structural details, the Polar Class rules have requirements for machinery systems such as the main propulsion, steering gear, and systems essential for the safety of the crew and survivability of the vessel. For example, propeller-ice interaction should be taken into account in the propeller design, cooling systems and sea water inlets should be designed to work also in ice-covered waters, and the ballast tanks should be provided with effective means of preventing freezing.
Although the rules generally require the ships to have suitable hull form and sufficient propulsion power to operate independently and at continuous speed in ice conditions corresponding to their Polar Class, the ice-going capability requirements of the vessel are not clearly defined in terms of speed or ice thickness. In practice, this means that the Polar Class of the vessel may not reflect the actual icebreaking capability of the vessel.
Polar Class ships
The IACS Polar Class rules apply for ships contracted for construction on or after 1 July 2007. This means that while vessels built prior to this date may have an equivalent or even higher level of ice strengthening, they are not officially assigned a Polar Class and may not in fact fulfill all the requirements in the unified requirements. In addition, particularly Russian ships and icebreakers are assigned ice classes only according to the requirements of the Russian Maritime Register of Shipping, which maintains its own ice class rules parallel to the IACS Polar Class rules.
Although numerous ships have been built to the two least hardened Polar Classes, PC6 and PC7, only a small number of ships have been assigned ice class PC5 or higher.
Polar Class 5
A number of research vessels intended for scientific missions in the polar regions have been built to a PC5 rating: the South African S. A. Agulhas II in 2012, the American Sikuliaq in 2014, and the British RRS Sir David Attenborough in 2020. In addition, the PC5 Antarctic vessel Almirante Viel is under construction for the Chilean Navy.
In 2012, the Royal Canadian Navy awarded a shipbuilding contract for the construction of six to eight Arctic Offshore Patrol Ships (AOPS) rated at PC5. HMCS Harry DeWolf and HMCS Margaret Brooke have entered service, HMCS Max Bernays is undergoing post-acceptance trials, and HMCS William Hall, HMCS Frédérick Rolette and HMCS Robert Hampton Gray are under construction. Two additional ships have been ordered for the Canadian Coast Guard.
Four cruise ships have been built with a PC5 rating: National Geographic Endurance (delivered in 2020) and National Geographic Resolution (2021) for Lindblad Expeditions, and SH Minerva (2021) and SH Vega (2022) for Swan Hellenic.
Polar Class 4
The 2012-built drillship Stena IceMAX has a hull strengthened according to PC4 requirements. However, the vessel does not feature an icebreaking hull and is designed to operate primarily in pre-broken ("managed") ice.
The Canadian shipping company Fednav operates two PC4 rated bulk carriers, 2014-built Nunavik and 2021-built Arvik I. The 28,000-tonne vessels are primarily used to transport nickel ore from Raglan Mine in the Canadian Arctic.
In 2015, the hull of the Finnish 1986-built icebreaker Otso was reinforced with additional steel to PC4 level to allow the vessel to support seismic surveys in the Arctic during the summer months.
The Finnish LNG-powered icebreaker Polaris, built in 2016, is rated PC4 with an additional Lloyd's Register class notation "Icebreaker(+)". The latter part of the notation refers to additional structural strengthening based on analysis of the vessel's operational profile and potential ice loading scenarios.
The interim icebreakers CCGS Captain Molly Kool, CCGS Jean Goodwill, and CCGS Vincent Massey, built in 2000–01 and acquired by the Canadian Coast Guard in 2018, will be upgraded to PC4 rating as part of the vessels' conversion to Canadian service.
The new PC4 polar logistics vessel of the Argentine Navy intended to complement the country's existing icebreaker ARA Almirante Irízar in Antarctica is currently in design stage.
The Japan Agency for Marine-Earth Science and Technology (JAMSTEC) is in the process of acquiring a new PC4 rated icebreaker for researching the Arctic region.
The Swedish Maritime Administration is in the process of acquiring 2–3 new icebreakers rated PC4 Icebreaker(+). The first icebreaker is expected to enter service in 2027.
The new Canadian Coast Guard Multi-Purpose Vessels (MPV) will be rated PC4 Icebreaker(+). Sixteen vessels will be built by Seaspan in the 2020s and 2030s, and the first vessel is expected to enter service in 2028.
Polar Class 3
The first PC3 vessels were two heavy load carriers, Audax and Pugnax, built for the Netherlands-based ZPMC-Red Box Energy Services in 2016. The vessels, capable of breaking ice independently, were built for year-round transportation of LNG liquefaction plant modules to Sabetta.
Although usually referred to by their Russian Maritime Register of Shipping ice class Arc7, the fifteen first-generation Yamalmax LNG carriers built in 2016–2019 as well as the arctic condensate tankers Boris Sokolov (built in 2018) and Yuriy Kuchiev (2019) serving the Yamal LNG project also have PC3 rating from Bureau Veritas.
In April 2015, it was reported that Edison Chouest would build two PC3 anchor handling tug supply vessels (AHTS) for Alaskan operations. However, the construction of the vessels due for delivery by the end of 2016 was later cancelled following Shell Oil's decision to halt Arctic oil exploration.
Three polar research vessels have been built with a PC3 rating: Kronprins Haakon for the Norwegian Polar Institute in 2018, Xue Long 2 for the Polar Research Institute of China in 2019, and Nuyina for the Australian Antarctic Division in 2021. Kronprins Haakon also has the additional notation "Icebreaker", while Nuyina's notation includes Lloyd's Register's "Icebreaker(+)" notation.
The Finnish multipurpose icebreakers Fennica and Nordica, built in the early 1990s, were assigned PC3 rating as part of the vessels' Polar Code certification in 2019.
There are no PC3 rated vessels under construction.
Polar Class 2
The only PC2 rated vessel in service is an expedition cruise ship operated by the French company Compagnie du Ponant. The 270-passenger vessel, capable of breaking thick multi-year ice and taking passengers to the North Pole, was delivered in 2021.
The United States Coast Guard has ordered two of three planned PC2 rated heavy polar icebreakers referred to as Polar Security Cutters. Construction of the first vessel has been delayed by several years, and it is now not expected to be delivered to the U.S. Coast Guard until at least 2028. While the mid-1970s icebreakers that these Polar Security Cutters are intended to replace are sometimes referred to as Polar-class icebreakers, they do not carry a PC rating.
The future Canadian Coast Guard polar icebreakers are designed to a PC2 rating with the additional notation "Icebreaker(+)". While a single vessel was initially scheduled for delivery in 2017, the National Shipbuilding Strategy has since been revised to include two such icebreakers, the first of which is planned to enter service by December 2029.
Germany signed the order for a replacement for the 1982-built research icebreaker Polarstern in December 2024. While the old Polarstern was built to Germanischer Lloyd ice class ARC3, the replacement Polarstern 2 will be a PC2 ship.
Polar Class 1
No ships have been built, are under construction, or are planned to PC1, the highest ice class specified by the IACS.
Notes
References
External links
Unified Requirements for Polar Class ships, International Association of Classification Societies (IACS)
Shipbuilding
Icebreakers
Sea ice | Polar Class | [
"Physics",
"Engineering"
] | 2,424 | [
"Physical phenomena",
"Earth phenomena",
"Sea ice",
"Shipbuilding",
"Marine engineering"
] |
16,244,706 | https://en.wikipedia.org/wiki/Von%20Neumann%27s%20inequality | In operator theory, von Neumann's inequality, due to John von Neumann, states that, for a fixed contraction T, the polynomial functional calculus map is itself a contraction.
Statement
For a contraction T acting on a Hilbert space and a polynomial p, the operator norm of p(T) is bounded by the supremum of |p(z)| over the closed unit disk:

||p(T)|| ≤ sup { |p(z)| : |z| ≤ 1 }.
Proof
The inequality can be proved by considering the unitary dilation of T, for which the inequality is obvious.
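The inequality is easy to check numerically. The sketch below builds a random matrix contraction, evaluates a polynomial of it by Horner's rule, and compares its operator norm with the maximum of |p| on the unit circle (which, by the maximum modulus principle, equals the supremum over the closed disk); the polynomial and dimensions are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random contraction: scale a matrix so its operator (spectral) norm is at most 1.
M = rng.standard_normal((5, 5))
T = M / (np.linalg.norm(M, 2) + 1e-12)

coeffs = np.array([0.3, -1.0, 0.5, 2.0])       # p(z) = 0.3 - z + 0.5 z^2 + 2 z^3

pT = np.zeros_like(T)                          # evaluate p(T) by Horner's rule
for c in coeffs[::-1]:
    pT = pT @ T + c * np.eye(5)

lhs = np.linalg.norm(pT, 2)                    # operator norm of p(T)
z = np.exp(2j * np.pi * np.linspace(0, 1, 2000, endpoint=False))
rhs = np.abs(np.polyval(coeffs[::-1], z)).max()   # sup of |p| on the unit circle

print(lhs <= rhs + 1e-9, lhs, rhs)             # von Neumann's inequality: lhs <= rhs
```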
Generalizations
This inequality is a specific case of Matsaev's conjecture, which states that for any polynomial P and any contraction T on Lp,

||P(T)||_(Lp → Lp) ≤ ||P(S)||_(ℓp → ℓp),

where S is the right-shift operator on ℓp. The von Neumann inequality proves the conjecture for p = 2, and for p = 1 and p = ∞ it is true by straightforward calculation.
S. W. Drury showed in 2011 that the conjecture fails in the general case.
References
See also
Crouzeix's conjecture
Operator theory
Inequalities
John von Neumann | Von Neumann's inequality | [
"Mathematics"
] | 195 | [
"Binary relations",
"Mathematical relations",
"Inequalities (mathematics)",
"Mathematical problems",
"Mathematical theorems"
] |
16,246,246 | https://en.wikipedia.org/wiki/Digital%20perm | A digital perm is a perm that uses hot rods with the temperature regulated by a machine with a digital display, hence the name. The process is otherwise similar to that of a traditional perm. The name "digital perm" is trademarked by a Japanese company, Paimore Co. Hairstylists usually call it a "hot perm."
A normal perm basically requires only the perm solution. A digital perm requires a (different) solution plus heat. This type of perm is popular in several countries, including South Korea and Japan.
Difference between a normal perm and a digital perm
The biggest difference between other perms and a digital perm is the shape and texture of the wave created by the digital process. A normal perm, or "cold perm", makes the wave most prominent when the hair is wet and loose when it is dry; the hair tends to look moist and hang in locks. A digital perm makes the wave most prominent when the hair is dry and loose when it is wet. Therefore, the dry, curly look of a curling iron or hot curler can be created.
Digital perms thermally recondition the hair, though the chemicals and processing are similar to a straight perm. The hair often feels softer, smoother, and shinier after a digital perm.
Cost and time of a digital perm
The price depends on the hair salon, but a digital perm is usually a little more expensive than a cold perm. Also, some hair salons have systems where the machine can only be used on one client at a time, in which case the price can be a lot higher.
The time it takes to perm the hair also depends on the hair salon and the hair type, but it usually takes longer than a cold perm. In some cases, it takes about the same time, but different salons use different solutions and machines, so the time varies.
Styling
A cold perm makes the hair most wavy when it is wet, so adding styling gel or foam while it is wet and letting it air-dry makes the wave most prominent. A digital perm makes the hair wavy when it is dry, so it can be dried with a blow dryer and the curls shaped by hand. Styling is very easy: if the curls are set in the morning, they can be revived at the end of the day, when the wave loosens, by curling the hair around a finger.
See also
Haircut
List of hairstyles
References
Further reading
Liu, Christine, Le Gala Hair Group: Introducing the digital perm, Boston's Weekly Dig, Wednesday, January 31, 2007, Issue 9.5.
Pastor, Pam, Hi-tech hair, Philippine Daily Inquirer
Hairdressing
Hairstyles
2000s neologisms
2000s in technology
Temperature control
Japanese inventions | Digital perm | [
"Technology"
] | 585 | [
"Home automation",
"Temperature control"
] |
16,250,540 | https://en.wikipedia.org/wiki/Hillyard%2C%20Inc. | Hillyard, Inc. (earlier known as Hillyard Disinfectant Company and Hillyard Chemical Company) is a privately owned cleaning products company in St. Joseph, Missouri with a speciality in providing products for cleaning and maintenance of wood basketball courts.
The company fielded two Amateur Athletic Union national champion basketball teams in the 1920s and was instrumental in the founding of the Basketball Hall of Fame (where an exhibit celebrates its contributions to the sport).
In 2007 the company had an estimated $120 million in sales and employed 600 people.
Newton S. Hillyard founded the company in 1907 as a cleaning supplies manufacturer. Hillyard's son Marvin asked him to sponsor a basketball team. N.S. then developed the company's signature cleaning supplies that made the floors "less oily."
In 1920 the company moved to a new building at the company headquarters that included a 90-by-140-foot wood gymnasium floor, claimed to be the largest west of the Mississippi River, on which the company tested gym seals and finishes.
The company then sponsored the Hillyard Shine Alls basketball team that won the Amateur Athletic Union national championships in 1926 and 1927 (and also played in two other AAU national championships in 1923 and 1925). The team was led by Forrest DeBernardi.
During the period of Hillyard's dominance, charges surfaced that Hillyard was paying its amateur players or that the players had no-show jobs at the plant. The controversy passed without any formal action being taken against the company.
Elliot C. Spratt, a Hillyard son-in-law, was the founding president of the Basketball Hall of Fame.
The National Association of Basketball Coaches gives the Newton S. Hillyard Memorial Award to its outgoing president.
Other Hillyard family members including Haskell Hillyard have received the John W. Bunn Lifetime Achievement Award at the Basketball Hall of Fame.
Hillyard plays a major role at Missouri Western State University. Spratt Stadium is named for Elliot C. Spratt. The Hillyard Tip Off Classic is a basketball tournament at the school.
References
External links
hillyard.com
Companies based in Missouri
Cleaning products
Naismith Memorial Basketball Hall of Fame inductees
Chemical companies established in 1907
1907 establishments in Missouri | Hillyard, Inc. | [
"Chemistry"
] | 444 | [
"Cleaning products",
"Products of chemical industry"
] |
16,250,616 | https://en.wikipedia.org/wiki/Epicentral%20distance | Epicentral distance refers to the ground distance from the epicenter to a specified point. Generally, for earthquakes of the same magnitude, the smaller the epicentral distance, the heavier the damage caused by the earthquake; conversely, as the epicentral distance increases, the damage caused by the earthquake gradually decreases. Due to the limitations of seismometers designed in the early years, some seismic magnitude scales begin to show errors when the epicentral distance from the observation point exceeds a certain range. In seismology, the epicentral distance of distant earthquakes is usually given in ° (degrees), while that of near earthquakes is given in km; regardless of distance, Δ is used as the symbol for the epicentral distance.
Measuring method
S-P time difference method
Even if the focal depth of an earthquake is very large, it can still have a very short epicentral distance. When measuring the epicentral distance of an earthquake with a small epicentral distance, the onset of the P wave is read first, and then the arrival of the S wave is confirmed. The epicentral distance Δ is then found from a travel-time table using the difference between the arrival times of the P wave and the S wave.
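As a rough illustration (not a substitute for a travel-time table), assuming straight ray paths and constant crustal velocities of about 6.0 km/s for P waves and 3.5 km/s for S waves, the epicentral distance follows directly from the S-P time difference:

```python
def epicentral_distance_km(sp_seconds, vp=6.0, vs=3.5):
    """Rough epicentral distance from the S-P arrival-time difference,
    assuming constant P and S wave speeds (km/s) and straight ray paths:
    Delta = t_SP / (1/vs - 1/vp) = t_SP * vp * vs / (vp - vs)."""
    return sp_seconds * vp * vs / (vp - vs)

# An S wave arriving 10 s after the P wave puts the epicenter roughly 84 km away.
print(epicentral_distance_km(10.0))
```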
Other Methods
If the source is very far away, that is, when the epicentral distance is greater than 105°, the epicentral distance cannot be determined using the S-P time difference method, so it must be determined from P, PKP, PP, SKS, PS, and other phases.
Correlation with seismic measurement
Definition of near earthquake magnitude
In 1935, in the absence of mature seismic magnitude scales, two seismologists from the California Institute of Technology, Charles Francis Richter and Beno Gutenberg, designed the Richter magnitude scale to study the earthquakes that occurred in California, USA. In order to keep the results from being negative, Richter defined an earthquake producing a maximum horizontal displacement of 1 μm (also the highest accuracy and precision of the Wood-Anderson torsion seismometer) on a seismometer at an epicentral distance of 100 km as a magnitude 0 earthquake. According to this definition, if the amplitude of the seismic wave measured by a Wood-Anderson torsion seismometer at an epicentral distance of 100 km is 1 mm, then the magnitude is 3. Although Richter et al. attempted to make the results non-negative, modern precision seismographs often record earthquakes with negative magnitudes, as the local magnitude scale has no strict upper or lower limit. Moreover, due to the limitations of the Wood-Anderson torsion seismometer used in the original design of the Richter scale, the scale is not applicable if the local magnitude ML is greater than 6.8 or the epicentral distance exceeds about 600 km from the observation point.
Calculation of surface wave magnitude
The epicentral distance is one of the important parameters for calculating the surface wave magnitude. The equation for calculating the surface wave magnitude is

Ms = log10(A/T)max + σ(Δ),

where A represents the maximum particle displacement in the surface wave (the vector sum of the two horizontal components), in micrometers; T represents the corresponding period, in seconds; Δ is the epicentral distance, in degrees; and σ(Δ) is a gauge (calibration) function. Generally, the expression for the gauge function is

σ(Δ) = 1.66·log10(Δ) + 3.5.

According to GB 17740-1999, the two horizontal displacements must be measured at the same time or within one-eighth of a period; if the two displacements have different periods, a weighted summation must be used. Here AN is the displacement in the north-south direction, in micrometers; AE is the displacement in the east-west direction, in micrometers; TN is the period corresponding to AN, in seconds; and TE is the period corresponding to AE, in seconds.
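A small Python sketch of this calculation, assuming the two horizontal components share the same period so that the simple vector sum applies (function name and sample values are illustrative):

```python
import math

def surface_wave_magnitude(a_north_um, a_east_um, period_s, delta_deg):
    """Surface wave magnitude Ms from the formula above.

    a_north_um, a_east_um: horizontal ground displacements in micrometers
    period_s:              corresponding period in seconds
    delta_deg:             epicentral distance in degrees
    """
    a = math.hypot(a_north_um, a_east_um)         # combined horizontal displacement
    sigma = 1.66 * math.log10(delta_deg) + 3.5    # gauge (calibration) function
    return math.log10(a / period_s) + sigma

# e.g. 80 um on each component, 20 s period, 40 degrees away -> Ms of about 6.9
print(round(surface_wave_magnitude(80.0, 80.0, 20.0, 40.0), 2))
```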
It can be seen that the surface wave period to be used differs with epicentral distance; suitable period values are generally read from standard tables.
Rapid report of large earthquakes with surface wave magnitude
For the rapid reporting of large earthquakes, studying the attenuation characteristics of body waves at short epicentral distances (Δ ≤ 15°) and establishing a better conversion relationship between the body wave magnitude MB and the surface wave magnitude MS are effective ways to improve the precision of rapidly reported magnitudes. Research on the measurement of the body wave magnitude MB recorded by the short-period instruments DD-1 and VGK is also meaningful quantitative work in this direction.
Correlation with epicenter
Before the 20th century, the method of determining the epicenter was generally the geometric center method. Since the beginning of the 20th century, as the technology of seismometers and other instruments gradually matured, the single station measurement method and the network measurement method were developed. Comparing the three methods, the network measurement method has the highest accuracy and the geometric center method the lowest, owing to the influence of the uneven crustal structure on the propagation of seismic rays.
Geometric center method
Before the 20th century, in the absence of instrumental records, the epicenter of an earthquake was determined as the macroscopic epicenter based on the extent of damage, that is, the geometric center of the epicentral area (the area near the epicenter where the damage was most severe). Because the precise extent of this area could not be determined, errors were common.
Single station measurement method
Because the various seismic waves propagate at different speeds through different regions and depths, the waves with faster speeds arrive at the station first, followed by the other waves, producing a time difference. The epicentral distance, focal depth, and time differences between the various recorded waves can be compiled into travel-time curves and travel-time tables suited to local use. When an earthquake occurs somewhere, the analyst can measure the time differences between the various waves on the seismogram and calculate the epicentral distance by comparing them with the prepared travel-time table or by applying a formula. Subsequently, the azimuth must be determined: transforming the initial-motion amplitudes in the two horizontal directions into ground-motion displacements, the azimuth angle can be determined using trigonometric functions. After the azimuth and the epicentral distance have been calculated, the epicenter position can easily be found. This method is called the single station measurement method.
Network measurement method
When the epicentral distance has been calculated from at least three seismic stations, the location of the epicenter can be determined by trilateration. This method of measuring the epicenter with instruments, whose result is commonly known as the microscopic epicenter, is called the network measurement method. Specifically, a circle is drawn on the map around each of the three stations, with the epicentral distance, scaled to the map, as the radius. The intersections of each pair of circles are then connected, and the common intersection of the three chords gives the epicenter, whose latitude and longitude can then be read off (geographic coordinate system).
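On a local flat-map approximation the circle-intersection construction reduces to a small linear system; the sketch below (hypothetical station coordinates and distances) finds the point whose distances to three stations best match the measured epicentral distances:

```python
import numpy as np

def locate_epicenter(stations_xy, distances_km):
    """Trilateration sketch: subtract the circle equation of the first station
    from the others to get a linear system in the epicenter coordinates (x, y),
    then solve it in the least-squares sense."""
    s = np.asarray(stations_xy, dtype=float)
    d = np.asarray(distances_km, dtype=float)
    A = 2.0 * (s[1:] - s[0])
    b = (d[0] ** 2 - d[1:] ** 2) + np.sum(s[1:] ** 2, axis=1) - np.sum(s[0] ** 2)
    xy, *_ = np.linalg.lstsq(A, b, rcond=None)
    return xy

# Stations at (0, 0), (100, 0) and (0, 80) km with distances 50, 72 and 60 km (made-up values)
print(locate_epicenter([(0, 0), (100, 0), (0, 80)], [50, 72, 60]))
```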
Others
Seismic classification
Epicentral distance also plays a unique role in earthquake classification. The same earthquake is called differently when observed at different distances, near and far. According to epicentral distance, earthquakes can be divided into three categories:
Local earthquake: Δ<100km
Near earthquake: 100km ≤ Δ ≤ 1000km
Distant earthquake: Δ>1000km
Seismic phase study
With different epicentral distances, the seismic phases appear as different patterns on the seismogram, owing to the combined effects of the source, the focal depth, and the propagation of the seismic rays. Therefore, the determination of seismic parameters differs with epicentral distance. Given the epicentral distance from the observation points, it is easier to distinguish the complex and varied seismic phases, which are generally identified from the overall appearance of the seismic records. The size, distance, and depth of earthquakes each have distinct characteristics: the closer the source, the shorter the duration of the vibration; the farther the source, the longer the duration.
Notes
References
Earthquakes
Earthquakes in the United States
Measurement
Earth sciences
Seismology | Epicentral distance | [
"Physics",
"Mathematics"
] | 1,630 | [
"Quantity",
"Physical quantities",
"Measurement",
"Size"
] |
4,213,424 | https://en.wikipedia.org/wiki/Four-valued%20logic | In logic, a four-valued logic is any logic with four truth values. Several types of four-valued logic have been advanced.
Belnap
Nuel Belnap considered the challenge of question answering by computer in 1975. Noting human fallibility, he was concerned with the case where two contradictory facts were loaded into memory, and then a query was made. "We all know about the fecundity of contradictions in two-valued logic: contradictions are never isolated, infecting as they do the whole system." Belnap proposed a four-valued logic as a means of containing contradiction.
He called the table of values A4: Its possible values are true, false, both (true and false), and neither (true nor false). Belnap's logic is designed to cope with multiple information sources such that if only true is found then true is assigned, if only false is found then false is assigned, if some sources say true and others say false then both is assigned, and if no information is given by any information source then neither is assigned. These four values correspond to the elements of the power set based on {T, F}.
T is the supremum and F the infimum in the logical lattice where None and Both are in the wings. Belnap has this interpretation: "The worst thing is to be told something is false simpliciter. You are better off (it is one of your hopes) in either being told nothing about it, or being told both that it is true and also that it is false; while of course best of all is to be told that it is true." Belnap notes that "paradoxes of implication" (A&~A)→B and A→(B∨~B) are avoided in his 4-valued system.
Logical connectives
Belnap addressed the challenge of extending logical connectives to A4. Since A4 is the power set on {T, F}, its elements are ordered by inclusion, making it a lattice with Both at the supremum and None at the infimum, and T and F on the wings. Referring to Dana Scott, he assumes the connectives are Scott-continuous or monotonic functions. First he extends negation by deducing that ¬Both = Both and ¬None = None. To extend And and Or, monotonicity goes only so far. Belnap uses the lattice equivalence (a & b = a if and only if a ∨ b = b) to fill out the tables for these connectives. He finds None & Both = F while None ∨ Both = T.
The result is a second lattice L4 called the "logical lattice", where A4 is the "approximation lattice" determining Scott continuity.
Implementation using two bits
Let one bit be assigned for each truth value: 01=T and 10=F with 00=N and 11=B.
Then the subset relation in the power set on {T, F} corresponds to the order ab ≤ cd iff a ≤ c and b ≤ d in the two-bit representation. Belnap calls the lattice associated with this order the "approximation lattice".
The logic associated with two-bit variables can be incorporated into computer hardware.
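A minimal Python sketch of this two-bit encoding, treating each value as a pair of evidence bits ("told false", "told true") so that the strings 00, 01, 10 and 11 match None, T, F and Both as above; the connective definitions reproduce Belnap's tables:

```python
# Each value is a bit pair (f, t): f = "told it is false", t = "told it is true",
# so (0,1)=T, (1,0)=F, (0,0)=None, (1,1)=Both, matching the encoding above.
N, T, F, B = (0, 0), (0, 1), (1, 0), (1, 1)
NAMES = {N: "None", T: "True", F: "False", B: "Both"}

def neg(a):
    f, t = a
    return (t, f)                       # swap evidence; Both and None are fixed points

def conj(a, b):
    return (a[0] | b[0], a[1] & b[1])   # false-evidence ORs, true-evidence ANDs

def disj(a, b):
    return (a[0] & b[0], a[1] | b[1])   # dual of conjunction

print(NAMES[conj(N, B)])    # False  (None & Both = F, as noted earlier)
print(NAMES[disj(N, B)])    # True   (None v Both = T)
print(NAMES[neg(B)])        # Both
```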
Matrix transitions
As a discrete system, the four-valued logic illustrates a set of states subject to transitions by logical matrices to form a transition system. An input of two bits transitions to an output of two bits through matrix multiplication.
There are sixteen logical matrices that are 2x2, and four logical vectors that act as inputs and outputs of the matrix transitions:
X = {A, B, C, D} = {(0,1), (1,0), (0,0), (1,1)}.
When C is input, the output is always C. Four of the sixteen have zero in one corner only, so the output of vector-matrix multiplication with Boolean arithmetic is always D, except for C input.
Nine further logical matrices need description to fill out the labelled transition system in which the matrices label the transitions. Excluding C, the inputs A, B, and D are considered in order and the outputs in X are expressed as a triple; for example, the code ABD belongs to the identity matrix ((1,0),(0,1)).
The asymmetric matrices differ in their action on row versus column vectors; the row convention is used here. One pair of such matrices has the codes BBB and AAA, and another pair has the codes CDB and DCA.
The remaining operations on X are expressed with matrices with three zeros, so outputs include C for a third of the inputs. The codes are CAA, BCA, ACA, and CBB in these cases.
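The code of any of the sixteen matrices can be computed mechanically; the sketch below applies row-vector-times-matrix multiplication with Boolean arithmetic to the inputs A, B and D (input C always yields C and is omitted), using the identity matrix and the all-ones matrix as illustrative examples:

```python
import numpy as np

X = {"A": (0, 1), "B": (1, 0), "C": (0, 0), "D": (1, 1)}
NAME = {v: k for k, v in X.items()}

def code(matrix):
    """Output triple for inputs A, B, D under row-vector times matrix
    multiplication with Boolean arithmetic (a nonzero sum of products is 1)."""
    M = np.array(matrix, dtype=int)
    out = []
    for key in ("A", "B", "D"):
        v = np.array(X[key], dtype=int)
        w = tuple(int(x > 0) for x in v @ M)
        out.append(NAME[w])
    return "".join(out)

print(code([[1, 0], [0, 1]]))   # ABD: the identity matrix fixes every vector
print(code([[1, 1], [1, 1]]))   # DDD: the all-ones matrix sends every nonzero vector to D
```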
Applications
A four-valued logic was established by IEEE with the standard IEEE 1364: It models signal values in digital circuits. The four values are 1, 0, Z and X. 1 and 0 stand for Boolean true and false, Z stands for high impedance or open circuit and X stands for don't care (e.g., the value has no effect). This logic is itself a subset of the 9-valued logic standard called IEEE 1164 and implemented in Very High Speed Integrated Circuit Hardware Description Language, VHDL's std_logic.
One should not confuse four-valued mathematical logic (using operators, truth tables, syllogisms, propositional calculus, theorems and so on) with communication protocols that are built using binary logic but display responses with four possible states implemented with Boolean-like values: for instance, the SAE J1939 standard, used for CAN data transmission in heavy road vehicles, has four logical (Boolean-like) values: False, True, Error Condition, and Not installed (represented by the values 0–3). Error Condition means there is a technical problem obstructing data acquisition. The logic for this is, for example, True AND Error Condition = Error Condition. Not installed is used for a feature that does not exist in the vehicle and should be disregarded in logical calculations. On CAN, fixed data messages containing many signal values each are usually sent, so a signal representing a not-installed feature will be sent anyway.
Split bit proposed gate
Carbon nanotube field-effect transistors (CNFETs) have been used to create logic gates from carbon nanotubes. An anticipated demand for data storage in the Internet of Things (IoT) provides the motivation. A proposal has been made for a 32 nm process application using a split bit-gate: "By using CNFET technology in 32 nm node by the proposed SQI gate, two split bit-lines QSRAM architectures have been suggested to address the issue of increasing demand for storage capacity in IoT/IoVT applications. Peripheral circuits such as a novel quaternary to binary decoder for QSRAM have been offered."
References
See also
Tetralemma in Ancient Greek and Indian logics
Catuṣkoṭi in Buddhist logic
Dialetheism, the idea that a statement can be both true and false
Further reading
Hardware description languages
Many-valued logic | Four-valued logic | [
"Engineering"
] | 1,447 | [
"Electronic engineering",
"Hardware description languages"
] |
4,213,750 | https://en.wikipedia.org/wiki/Salt%20water%20chlorination | Salt water chlorination is a process that uses dissolved salt (1000–4000 ppm or 1–4 g/L) for the chlorination of swimming pools and hot tubs. The chlorine generator (also known as salt cell, salt generator, salt chlorinator, or SWG) uses electrolysis in the presence of dissolved salt to produce chlorine gas or its dissolved forms, hypochlorous acid and sodium hypochlorite, which are already commonly used as sanitizing agents in pools. Hydrogen is also produced as a byproduct.
Distinction from traditional pool chlorination
The presence of chlorine in traditional swimming pools can be described as a combination of free available chlorine (FAC) and combined available chlorine (CAC). While FAC is composed of the free chlorine that is available for disinfecting the water, the CAC includes chloramines, which are formed by the reaction of FAC with amines (introduced into the pool by human perspiration, saliva, mucus, urine, and other biologics, and by insects and other pests). Chloramines are responsible for the "chlorine smell" of pools, as well as skin and eye irritation. These problems are the result of insufficient levels of free available chlorine, and indicate a pool that must be "shocked" by the addition of 5–10 times the normal amount of chlorine. In saltwater pools, the generator uses electrolysis to continuously produce free chlorine. As such, a saltwater pool or hot tub is not actually chlorine-free; it simply utilizes added salt and a chlorine generator instead of direct addition of chlorine. It also burns off chloramines in the same manner as traditional shock (oxidizer). As with traditionally chlorinated pools, saltwater pools must be monitored in order to maintain proper water chemistry. Low chlorine levels can be caused by insufficient salt, incorrect (low) chlorine-generation setting on the SWG unit, higher-than-normal chlorine demand, low stabilizer, sun exposure, insufficient pump speed, or mechanical issues with the chlorine generator. Salt count can be lowered due to splash-out, backwashing, and dilution via rainwater.
Health concerns
Research has shown that because saltwater pools still use chlorine sanitization, they generate the same disinfection byproducts (DBPs) that are present in traditional pools. Of highest concern are haloketones and trihalomethanes (THMs), of which the predominant form is bromoform. Very high levels of bromoform (up to 1.3 mg per liter, or 13 times the World Health Organization's guideline value) have been found in some public saltwater swimming pools.
History
Saltwater chlorine generators first appeared commercially in New Zealand in the early 1970s (the Aquatech IG4500); manufacturers have been producing them in the United States since the early 1980s.
Operation
The chlorinator cell consists of parallel titanium plates coated with ruthenium and sometimes iridium. Older models make use of perforated (or mesh) plates rather than solid plates. Electrolysis naturally attracts calcium and other minerals to the plates. Thus, depending on water chemistry and magnitude of use, the cell will require periodic cleaning in a mild acid solution (1 part HCl to 15 parts water) which will remove the buildup of calcium compound crystals, such as calcium carbonate or calcium nitrate. Excessive buildup can reduce the effectiveness of the cell. Running the chlorinator for long periods with insufficient salt in the pool can strip the coating off the cell which then requires an expensive replacement, as can using too strong an acid wash.
Saltwater pools can also require stabilizer (cyanuric acid) to help stop the sun's UV rays from breaking down free chlorine in the pool. Usual levels are 20–50 ppm. They also require the pH to be kept between 7.2 and 7.8, with the chlorine being more effective if the pH is kept closer to 7.2. Average salt levels are usually in the 3,000–5,000 ppm range, much less than the ocean, which has salt levels of around 35,000 ppm. In swimming pools, salt is typically poured across the bottom and swept with the pool brush until it dissolves; if concentrated brine is allowed into the return-water system it can cause the chlorinator cell to malfunction due to overconductivity.
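Because salinity is quoted in ppm, which for pool water is effectively milligrams of salt per liter, the dose needed to reach a target level is simple arithmetic. The following minimal sketch uses invented pool figures purely for illustration.

```python
def salt_to_add_kg(pool_volume_liters: float,
                   current_ppm: float,
                   target_ppm: float) -> float:
    """Kilograms of salt needed to raise salinity to the target.

    1 ppm is taken as 1 mg of salt per liter of water, so the ppm
    deficit times the volume in liters gives milligrams of salt.
    """
    deficit_mg = max(target_ppm - current_ppm, 0.0) * pool_volume_liters
    return deficit_mg / 1e6  # mg -> kg

# Hypothetical example: a 50,000 L pool at 2,800 ppm raised to 3,500 ppm
print(salt_to_add_kg(50_000, 2_800, 3_500))  # 35.0 kg
```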
Salt water chlorination produces an excess of hydroxide ions, and this requires the frequent addition of hydrochloric acid (HCl, also known as muriatic acid) to maintain pH. The initial chlorine chemistry is as follows.
4NaCl → 4Na+ + 4Cl− Salt dissolves in water.
4Na+ + 4Cl− → 4Na+ + 2Cl2 By electrolysis.
4Na+ + 4H2O → 4Na+ + 4OH− + 2H2 By electrolysis.
2Cl2 + 2H2O → 2HClO + 2H+ + 2Cl− Hydrolysis of aqueous chlorine gas.
2HClO → HClO + ClO− + H+ Dissociation of hypochlorous acid at pH 7.5 and 25 °C.
4NaCl + 3H2O → 4Na+ + HClO + ClO− + OH− + 2Cl− + 2H2 Net of all the above.
Addition of hydrochloric acid to restore the pH to 7.5:
HCl + 4Na+ + HClO + ClO− + OH− + 2Cl− + 2H2 → HClO + ClO− + H2O + 4Na+ + 3Cl− + 2H2.
4NaCl + HCl + 2H2O → HClO + ClO− + 4Na+ + 3Cl− + 2H2 Net of the last two.
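As a worked check of the net equation, the yield of active chlorine species per unit of salt follows directly from the stoichiometry. The sketch below assumes complete single-pass conversion, which is an idealisation of a real recirculating cell.

```python
M_NACL = 58.44  # molar mass of NaCl, g/mol

def active_chlorine_mol(salt_g: float) -> float:
    """Moles of active chlorine species (HClO + ClO-) produced.

    Per the net equation above, 4 mol NaCl yield 2 mol of active
    species (1 HClO + 1 ClO-), i.e. 0.5 mol per mol of salt.
    """
    return (salt_g / M_NACL) * 2.0 / 4.0

print(f"{active_chlorine_mol(1000.0):.2f} mol per kg of salt")  # ~8.56 mol
```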
Benefits and disadvantages
The benefits of salt systems in pools are the convenience and the constant delivery of pure chlorine-based sanitizer. The reduction of irritating chloramines versus traditional chlorinating methods and the "softening" effect of electrolysis reducing dissolved alkali minerals in the water are also perceived as benefits. For some people who have sensitivities to chlorine, these systems may be less irritating.
Disadvantages are the initial cost of the system, maintenance, and the cost of replacement cells. Salt is corrosive and will damage some metals and some improperly sealed stone. However, as the ideal saline concentration of a salt-chlorinated pool is very low (below 3,500 ppm, the threshold for human perception of salt by taste; seawater is about ten times this concentration), damage usually occurs due to improperly maintained pool chemistry or improper maintenance of the electrolytic cell. Pool equipment manufacturers typically will not warrant stainless steel products damaged by saline pools.
Calcium and other alkali precipitate buildup will occur naturally on the cathode plate, and sometimes in the pool itself as "scaling". Regular maintenance of the cell is necessary; failure to do so will reduce the effectiveness of the cell. Certain designs of saline chlorinators use a "reverse-polarity" method that will regularly switch the roles of the two electrodes between anode and cathode, causing this calcium buildup to dissolve off the accumulating electrode. Such systems reduce but do not eliminate the need to clean the electrolytic cell and the occurrence of calcium scale in the water.
As chlorine is generated, the pH will rise, making the chlorine less effective. Many systems with chemistry automation can sense the rising pH and automatically introduce either CO2 or hydrochloric acid to bring the pH back to the target level. Automation systems will also manage sanitizer levels by monitoring the ORP or redox level of the water; this allows only the needed amount of chlorine to be generated based on demand.
Sodium bromide can be used instead of sodium chloride, which produces a bromine pool. The benefits and downsides are the same as those of a salt system. It is not necessary to use a chloride-based acid to balance the pH. Also, bromine is only effective as a sanitizer, not as an oxidizer, leaving a need for adding a "shock" such as hydrogen peroxide or any chlorine-based shock to burn off inorganic waste and free up combined bromines. This extra step is not needed in a sodium chloride system, as chlorine is effective as both a sanitizer and an oxidizer. A user would only need to "super chlorinate" or increase chlorine production of the cell occasionally. That would normally be less than once a week or after heavy bather loads.
References
Swimming pools
Water treatment
Chlorine | Salt water chlorination | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,876 | [
"Water technology",
"Water treatment",
"Water pollution",
"Environmental engineering"
] |
4,213,869 | https://en.wikipedia.org/wiki/Ministry%20of%20Petroleum%20and%20Mineral%20Resources%20%28Egypt%29 | The first independent Ministry of Petroleum was established in March 1973 to manage the political role of petroleum resources before the war of 1973, in view of the strategic significance of having a political body that sets general petroleum strategy on new bases in line with the requirements of the country at that stage. At the top of its priority list is providing the local market's needs for petroleum products, petrochemicals and mineral resources, and contributing to achieving the targeted growth rates of the national economy.
Functions and duties
The Ministry of Petroleum and Mineral Resources sets up policies and strategies for the Petroleum Sector and its five entities for implementation.
The petroleum policy is based on increasing the reserves as well as production of crude oil and natural gas through intensifying the upstream activities.
Working on developing and building human cadres, capable of carrying out responsibilities, to be achieved within the comprehensive program, currently, being executed to develop and modernize the Petroleum Sector.
Working on transforming Egypt into a Regional Hub for Oil and Gas Trading.
The petroleum sector's vision
Achieve financial sustainability.
Become a leading regional Oil and Gas hub.
Be a role model for the future of modernized Egypt.
Take into consideration the Sector Core Values.
Core Values: safety, innovation, ethics, transparency and efficiency
Strategic objectives of the ministry of petroleum and mineral resources
Meeting the demands of the domestic market for petroleum and petrochemical products, mineral resources as well as achieving the target of the national economy growth rates.
Securing oil and natural gas supplies through expanding upstream activities, diversification of resources and working towards modifying the energy mix.
Achieving the optimum value-added of natural resources.
Advancing a national high efficiency manpower.
Maintaining environmental standards and sustainable development.
Transforming Egypt into a Regional Hub for Oil & Gas Trading.
Developing and modernizing the Petroleum Sector to meet the demands of the current era.
The ministry's hierarchy
The petroleum sector in Egypt is made up of state-owned entities including: the Egyptian General Petroleum Corporation (EGPC), the Egyptian Natural Gas Holding Company (EGAS), the Egyptian Petrochemicals Holding Company (ECHEM), Ganoub El Wadi Petroleum Holding Company (GANOPE), and the Egyptian General Authority for Mineral Resources.
Previous petroleum ministers
Sherif Ismail (July 2013 – September 2015)
Sherif Haddara (May 2013 – July 2013)
Osama Kamal (August 2012 – May 2013)
Abdullah Ghorab (March 2011 – August 2012)
Mahmoud Latif (February 2011 – March 2011)
Sameh Fahmi (October 1999 – February 2011)
Hamdi Al Banbi (May 1991 – October 1999)
Abdel Hadi Kandil (July 1984 – May 1991)
Ahmed Ezzettin Hilal (March 1973 – July 1984)
Ali Waly (May 1971)
See also
Energy in Egypt
References
External links
Ministry of Petroleum Official website
The Egyptian General Petroleum Corporation (EGPC)
The Egyptian Natural Gas Holding Company
The Egyptian Petrochemicals Holding Company
Ganoub El-Wadi Petroleum Holding Company
Egypt's Cabinet Database
Petroleum
Fossil fuels in Egypt
Mining in Egypt
Petroleum politics
Egypt
1972 establishments in Egypt
Ministries established in 1972 | Ministry of Petroleum and Mineral Resources (Egypt) | [
"Chemistry",
"Engineering"
] | 639 | [
"Petroleum politics",
"Petroleum stubs",
"Petroleum",
"Energy organizations",
"Energy ministries"
] |
4,214,075 | https://en.wikipedia.org/wiki/Intrinsic%20safety | Intrinsic safety (IS) is a protection technique for safe operation of electrical equipment in hazardous areas by limiting the energy, electrical and thermal, available for ignition. In signal and control circuits that can operate with low currents and voltages, the intrinsic safety approach simplifies circuits and reduces installation cost over other protection methods. Areas with dangerous concentrations of flammable gases or dust are found in applications such as petrochemical refineries and mines. As a discipline, it is an application of inherent safety in instrumentation. High-power circuits such as electric motors or lighting cannot use intrinsic safety methods for protection.
Intrinsic safety devices can be subdivided into:
Intrinsically safe apparatus
Associated apparatus
Intrinsically safe apparatus
Intrinsically safe apparatus is electrical equipment whose connected circuits are intrinsically safe circuits while in the hazardous area.
Associated apparatus
Associated apparatus is electrical equipment that has both intrinsically safe and non-intrinsically safe circuits, designed so that the non-intrinsically safe circuits cannot negatively affect the intrinsically safe circuits. Such apparatus is normally installed outside the hazardous area.
Intrinsically safe circuit
An intrinsically safe circuit is designed to not be capable of causing ignition of a given explosive atmosphere, by any spark or any thermal effect under normal operation and specified fault conditions.
Operating and design principles
In normal use, electrical equipment often creates tiny electric arcs (internal sparks) in switches, motor brushes, connectors, and in other places. Compact electrical equipment generates heat as well, which under some circumstances can become an ignition source.
There are multiple ways to make equipment safe for use in explosion-hazardous areas. Intrinsic safety (denoted by "i" in the ATEX and IECEx explosion classifications) is one of several available methods for electrical equipment; see Types of protection for more information.
For handheld electronics, intrinsic safety is the only realistic method that allows a functional device to be explosion protected. A device which is termed "intrinsically safe" has been designed to be incapable of producing heat or spark sufficient to ignite an explosive atmosphere, even if the device has experienced deterioration or has been damaged.
There are several considerations in designing intrinsically safe electronics devices:
reducing or eliminating internal sparking.
controlling component temperatures.
eliminating component spacing that would allow dust to short a circuit.
Elimination of spark potential within components is accomplished by limiting the available energy in any given circuit and the system as a whole.
Temperature becomes an issue under certain fault conditions, such as an internal short in a semiconductor device, because the temperature of a component can rise to a level that can ignite some explosive gases, even in normal use.
Safeguards, such as current limiting by resistors and fuses, must be employed to ensure that under no circumstance can a component reach a temperature that could cause autoignition of a combustible atmosphere. In the highly compact electronic devices used today, PCBs often have component spacings that create the possibility of an arc between components if dust or other particulate matter works into the circuitry; thus component spacing, siting and isolation become important to the design.
The primary concept behind intrinsic safety is the restriction of available electrical and thermal energy in the system so that ignition of a hazardous atmosphere (explosive gas or dust) cannot occur. This is achieved by ensuring that only low voltages and currents enter the hazardous area, and that no significant energy storage is possible.
One of the most common methods of protection is to limit electric current with series resistors (of types that always fail open) and to limit voltage with multiple zener diodes. In zener barriers, dangerous incoming potentials are shunted to ground; in galvanic isolation barriers there is no direct connection between the safe-area and hazardous-area circuits, as a layer of insulation is interposed between the two. Certification standards for intrinsic safety designs (mainly IEC 60079-11, and since 2015 also IEC TS 60079-39) generally require that the barrier not exceed approved levels of voltage and current even with specified damage to its limiting components.
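As an illustration of the energy-limiting arithmetic behind such barriers, the sketch below computes the usual worst-case "entity parameters" of a simple shunt-zener barrier. The 28 V / 300 Ω figures are assumed for illustration only; actual certification applies the ignition curves and safety factors of IEC 60079-11, which this sketch does not replace.

```python
def barrier_entity_parameters(zener_voltage_v: float,
                              series_resistance_ohm: float):
    """Worst-case outputs of a simple shunt-zener safety barrier.

    Uo: maximum open-circuit voltage (clamped by the zener diodes)
    Io: maximum short-circuit current (limited by the series resistor)
    Po: maximum power into a matched resistive load, Uo * Io / 4
    """
    uo = zener_voltage_v
    io = uo / series_resistance_ohm
    po = uo * io / 4.0  # maximum power transfer into a matched load
    return uo, io, po

uo, io, po = barrier_entity_parameters(28.0, 300.0)
print(f"Uo = {uo:.1f} V, Io = {io * 1000:.1f} mA, Po = {po:.2f} W")
# Uo = 28.0 V, Io = 93.3 mA, Po = 0.65 W
```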
Equipment or instrumentation for use in a hazardous area will be designed to operate with low voltage and current, and will be designed without any large capacitors or inductors that could discharge in a spark. The instrument will be connected, using approved wiring methods, back to a control panel in a non-hazardous area that contains safety barriers. The safety barriers ensure that, in normal operation, and with the application of faults according to the equipment protection level (EPL), even if accidental contact occurs between the instrument circuit and other power sources, no more than the approved voltage and current enters the hazardous area.
For example, during marine transfer operations when flammable products are transferred between the marine terminal and tanker ships or barges, two-way radio communication needs to be constantly maintained in case the transfer needs to stop for unforeseen reasons such as a spill. The United States Coast Guard requires that the two way radio must be certified as intrinsically safe.
Another example is intrinsically safe or explosion-proof mobile phones used in explosive atmospheres, such as refineries. Intrinsically safe mobile phones must meet special battery design criteria in order to achieve UL, ATEX directive, or IECEx certification for use in explosive atmospheres.
Only properly designed battery-operated, self-contained devices can be intrinsically safe by themselves. Other field devices and wiring are intrinsically safe only when employed in a properly designed IS system. Requirements for intrinsically safe electrical systems are given in the IEC 60079 series of standards.
Certifying agencies
Standards for intrinsic protection are mainly developed by the International Electrotechnical Commission (IEC), but other agencies also develop standards for intrinsic safety. Agencies may be run by governments or may be composed of members from insurance companies, manufacturers, and industries with an interest in safety standards. Certifying agencies allow manufacturers to affix a label or mark to identify that the equipment has been designed to the relevant product safety standards. Examples of such agencies in North America are the Factory Mutual Research Corporation, which certifies radios, and Underwriters Laboratories (UL), which certifies mobile phones; in Canada there is the Canadian Standards Association. In the EU the standard for intrinsic safety certification is the CENELEC standard EN 60079-11, and equipment shall be certified according to the ATEX directive, while in other countries around the world the IEC standards are followed. To facilitate world trade, standards agencies around the world engage in harmonization activity so that intrinsically safe equipment manufactured in one country eventually might be approved for use in another without redundant, expensive testing and documentation.
See also
ATEX directive
References
Intrinsic safety on-line assessment tool
IEC 60079-11:2023
Further reading
Redding, R.J., Intrinsic Safety: Safe Use of Electronics in Hazardous Locations. McGraw-Hill European technical and industrial programme. 1971.
Paul, V., '"The earthing of intrinsically safe barriers on offshore transportable equipment". IMarEST. Proceedings of IMarEST - Part A - Journal of Marine Engineering and Technology, Volume 2009, Number 14, April 2009, pp. 3–17(15)
Electrical safety
Explosion protection
Natural gas safety | Intrinsic safety | [
"Chemistry",
"Engineering"
] | 1,450 | [
"Explosion protection",
"Natural gas safety",
"Combustion engineering",
"Natural gas technology",
"Explosions"
] |
4,214,828 | https://en.wikipedia.org/wiki/Safety%20Provisions%20%28Building%29%20Convention%2C%201937 | Safety Provisions (Building) Convention, 1937 is an International Labour Organization Convention.
It was established in 1937.
Ratifications and denunciations
As of January 2023, the convention had been ratified by 30 states. However, eleven of the ratifying states have automatically denounced the treaty because of subsequent ratification of conventions that automatically trigger denunciation of the 1937 treaty.
External links
Text.
Ratifications.
Health treaties
International Labour Organization conventions
Occupational safety and health treaties
Treaties concluded in 1937
Treaties entered into force in 1942
Treaties of Belgium
Treaties of the People's Republic of Bulgaria
Treaties of Burundi
Treaties of the Central African Republic
Treaties of the Republic of the Congo (Léopoldville)
Treaties of Egypt
Treaties of the French Fourth Republic
Treaties of Greece
Treaties of Guinea
Treaties of Honduras
Treaties of Ireland
Treaties of Malta
Treaties of Mauritania
Treaties of the Netherlands
Treaties of Peru
Treaties of the Polish People's Republic
Treaties of Rwanda
Treaties of Francoist Spain
Treaties of Suriname
Treaties of Switzerland
Treaties of Tunisia
Construction law
1937 in labor relations | Safety Provisions (Building) Convention, 1937 | [
"Engineering"
] | 200 | [
"Construction",
"Construction law"
] |
4,215,135 | https://en.wikipedia.org/wiki/Atomic%20mirror | In physics, an atomic mirror is a device which reflects neutral atoms in a way similar to the way a conventional mirror reflects visible light. Atomic mirrors can be made of electric fields or magnetic fields, electromagnetic waves or just a silicon wafer; in the last case, atoms are reflected by the attractive tail of the van der Waals interaction (see quantum reflection). Such reflection is efficient when the normal component of the wavenumber of the atoms is small or comparable to the effective depth of the attraction potential (roughly, the distance at which the potential becomes comparable to the kinetic energy of the atom). To reduce the normal component, most atomic mirrors operate at grazing incidence.
At grazing incidence, the efficiency of the quantum reflection can be enhanced by a surface covered with ridges (ridged mirror).
The set of narrow ridges reduces the van der Waals attraction of atoms to the surfaces and enhances the reflection. Each ridge blocks part of the wavefront, causing Fresnel diffraction.
Such a mirror can be interpreted in terms of the Zeno effect.
We may assume that the atom is "absorbed" or "measured" at the ridges. Frequent measuring (narrowly spaced ridges) suppresses the transition of the particle to the half-space with absorbers, causing specular reflection. At large separation between thin ridges, the reflectivity of the ridged mirror is determined by a dimensionless momentum and does not depend on the origin of the wave; therefore, it is suitable for the reflection of atoms.
Applications
Atomic interferometry
See also
Quantum reflection
Ridged mirror
Zeno effect
Atomic nanoscope
Atom laser
References
Atomic, molecular, and optical physics | Atomic mirror | [
"Physics",
"Chemistry"
] | 335 | [
"Atomic",
" molecular",
" and optical physics"
] |
4,215,591 | https://en.wikipedia.org/wiki/International%20Committee%20on%20Taxonomy%20of%20Viruses | The International Committee on Taxonomy of Viruses (ICTV) authorizes and organizes the taxonomic classification of and the nomenclature for viruses. The ICTV develops a universal taxonomic scheme for viruses, and thus has the means to appropriately describe, name, and classify every virus taxon. The members of the International Committee on Taxonomy of Viruses are considered expert virologists. The ICTV was formed from and is governed by the Virology Division of the International Union of Microbiological Societies. Detailed work, such as identifying new taxa and delimiting the boundaries of species, genera, families, etc. typically is performed by study groups of experts in the families.
History
The International Committee on Nomenclature of Viruses (ICNV) was established in 1966, at the International Congress for Microbiology in Moscow, to standardize the naming of virus taxa. The ICNV published its first report in 1971. For viruses infecting vertebrates, the first report included 19 genera, 2 families, and a further 24 unclassified groups.
The ICNV was renamed the International Committee on Taxonomy of Viruses in 1974.
Organisational structure
The organisation is divided into an executive committee, which includes members and executives with fixed-term elected roles, as well as directly appointed heads of seven subcommittees. Each subcommittee head, in turn, appoints numerous 'study groups', which each consist of one chair and a variable number of members dedicated to the taxonomy of a specific taxon, such as an order or family. This structure may be visualised as follows:
Executive committee
President
Vice-president
Secretaries
Business Secretary
Proposals Secretary
Data Secretary
Chairs – positions: 7 (one for each subcommittee)
Elected members – positions: 11
Subcommittees
Animal DNA Viruses and Retroviruses Subcommittee – study groups: 18
Animal dsRNA and ssRNA- Viruses Subcommittee – study groups: 24
Animal ssRNA+ Viruses Subcommittee – study groups: 16
Bacterial Viruses Subcommittee – study groups: 20
Archaeal Viruses Subcommittee – study groups: 11
Fungal and Protist Viruses Subcommittee – study groups: 12
Plant Viruses Subcommittee – study groups: 22
Objectives
The objectives of the International Committee on Taxonomy of Viruses are:
To develop an internationally agreed taxonomy for viruses.
To establish internationally agreed names for virus taxa.
To communicate the decisions reached concerning the classification and nomenclature of viruses to virologists by holding meetings and publishing reports.
To maintain an official index of agreed names of virus taxa.
To study the effects of viruses on modern society and their behaviour.
Principles of nomenclature
The ICTV's essential principles of virus nomenclature are:
Stability
To avoid or reject the use of names which might cause error or confusion
To avoid the unnecessary creation of names
The ICTV's universal virus classification system uses a slightly modified version of the standard biological classification system. It only recognises the taxa order, family, subfamily, genus, and species. When it is uncertain how to classify a species into a genus but its classification in a family is clear, it will be classified as an unassigned species of that family. Many taxa remain unranked. There are also, in GenBank, sequences assigned to 3,142 "species" which are not accounted for in the ICTV report (due to the way GenBank works, however, the actual number of proper species is probably significantly smaller). The number of unidentified virus sequences is only expected to increase as the rate of virus sequencing increases dramatically. In 2017, the ICTV endorsed a proposal to adapt the classification of viruses in order to keep up better with the growth of available sequences.
The ICTV has been strikingly successful in achieving stability since its inception; every genus and family recognized in the 1980s, for example, continued to be in use as of 2005.
Naming and changing taxa
Proposals for new names, name changes, and the establishment and taxonomic placement of taxa are handled by the executive committee of the ICTV in the form of proposals. All relevant ICTV subcommittees and study groups are consulted prior to a decision being taken.
The name of a taxon has no official status until it has been approved by ICTV, and names will only be accepted if they are linked to approved hierarchical taxa. If no suitable name is proposed for a taxon, the taxon may be approved and the name be left undecided until the adoption of an acceptable international name, when one is proposed to and accepted by ICTV. Names must not convey a meaning for the taxon which would seem to either exclude viruses which are rightfully members of that taxon, exclude members which might one day belong to that taxon, or include viruses which are members of different taxa.
There is no principle of priority for virology, so that a name in current use cannot be invalidated by claiming priority.
Rules for taxa
Species
Since 2020, the Viral Code requires the use of binomial names for new species: a genus followed by a specific epithet. A species name must provide an appropriately unambiguous identification of the species.
Before then, a more liberal naming system was in effect: a species name shall consist of as few words as practicable but must not consist only of a host name and the word virus. Numbers, letters, or combinations thereof may be used as species epithets where such numbers and letters are already widely used. However, newly designated serial numbers, letters or combinations thereof are not acceptable alone as species epithets. If a number or letter series is in existence it may be continued.
Genera
A virus genus is a group of related species that share some significant properties and often only differ in host range and virulence. A genus name must be a single word ending in the suffix -virus. Approval of a new genus must be accompanied by the approval of a type species.
Subfamilies
A subfamily is a group of genera sharing certain common characters. The taxon shall be used only when it is needed to solve a complex hierarchical problem. A subfamily name must be a single word ending in the suffix -virinae.
Families
A family is a group of genera, whether or not these are organized into subfamilies, sharing certain common characters with each other. A family name must be a single word ending in the suffix -viridae.
Orders
An order is a group of families sharing certain common characters. An order name must be a single word ending in the suffix -virales.
Rules for sub-viral agents
Rules concerned with the classification of viruses shall also apply to the classification of viroids. The formal endings for taxa of viroids are the word viroid for species, the suffix -viroid for genera, the suffix -viroinae for sub-families, should this taxon be needed, and -viroidae for families. A similar system is in use for satellites and viriforms, substituting -vir- in normal taxa endings with -satellit- and -viriform-.
Retrotransposons are considered to be viruses in classification and nomenclature. Prions are not classified as viruses but are assigned an arbitrary classification as seems useful to workers in the particular fields.
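The formal endings above can be summarised programmatically. The sketch below covers only the suffixes quoted in this section (the satellite and viriform endings follow the stated substitution pattern and are omitted), and it treats any multi-word name as a species binomial; it is a simplification, not an official ICTV tool.

```python
# Ordered so that longer, more specific endings are tested first.
RANK_SUFFIXES = [
    ("virales", "order"),
    ("viroidae", "viroid family"),
    ("viroinae", "viroid subfamily"),
    ("viridae", "family"),
    ("virinae", "subfamily"),
    ("viroid", "viroid genus"),
    ("virus", "genus"),
]

def rank_of(taxon_name: str) -> str:
    """Infer the taxonomic rank of a formal name from its ending."""
    if " " in taxon_name:
        return "species (multi-word binomial)"
    lower = taxon_name.lower()
    for suffix, rank in RANK_SUFFIXES:
        if lower.endswith(suffix):
            return rank
    return "unknown"

print(rank_of("Nidovirales"))      # order
print(rank_of("Coronaviridae"))    # family
print(rank_of("Betacoronavirus"))  # genus
```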
Rules for orthography
In formal taxonomic usage the accepted names of virus orders, families, subfamilies, and genera are printed in italics and the first letters of the names are capitalized.
Species names are printed in italics and have the first letter of the first word capitalized. Other words are not capitalized unless they are proper nouns, or parts of proper nouns.
In formal usage, the name of the taxon shall precede the term for the taxonomic unit.
Classification of viruses discovered by metagenomics
Acknowledging the importance of viral metagenomics, the ICTV recognizes that genomes assembled from metagenomic data represent actual viruses and encourages their official classification following the same procedures as those used for viruses isolated and characterized using classical virology approaches.
ICTV reports
The ICTV has published reports of virus taxonomy about twice a decade since 1971 (listed below - "Reports"). The ninth ICTV report was published in December 2011; the content is now freely available through the ICTV website. Beginning in 2017, the tenth ICTV report was published online on the ICTV website and is free to access with individual chapters updated on a rolling basis. The 2018 and onward taxonomy is available online, including a downloadable Excel spreadsheet of all recognized species.
ICTVdb database
ICTVdB is a species and isolate database intended to serve as a companion to the ICTV taxonomy database. The development of ICTVdB has been supported by the ICTV since 1991 and was initially intended to aid taxonomic research. The database classifies viruses based primarily on their chemical characteristics, genomic type, nucleic acid replication, diseases, vectors, and geographical distribution, among other characteristics.
The database was developed at the Australian National University with support of the US National Science Foundation, and sponsored by the American Type Culture Collection. It uses the Description Language for Taxonomy (DELTA) system, a world standard for taxonomic data exchange, developed at Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO). DELTA is able to store a wide diversity of data and translate it into a language suitable for traditional reports and web publication. For example, ICTVdB does not itself contain genomic sequence information but can convert DELTA data into NEXUS format. It can also handle large data inputs and is suited to compiling long lists of virus properties, text comments, and images.
ICTVdB has grown in concept and capability to become a major reference resource and research tool; in 1999 it was receiving over 30,000 combined online hits per day from its main site at the Australian National University, and two mirror sites based in the UK and United States.
In 2011, the ICTV decided to suspend the ICTVdb project and web site. This decision was made after it became apparent that the taxonomy provided on the site was many years out of date, and that some of the information on the site was inaccurate due to problems with how the database was being queried and processed to support the natural language output of the ICTVdb web site. The ICTV has begun discussions on how best to fix these problems, but decided that the time frame for updates and error correction were sufficiently long that it was best to take the site down rather than perpetuate the release of inaccurate information. As of August 2013, the database remains on hold. According to some views, "ICTV should also promote the use of a public database to replace the ICTV database as a store of the primary metadata of individual viruses, and should publish abstracts of the ICTV Reports in that database, so that they are 'Open Access'." The database was revived in 2017.
Reports
The reports are also available online.
ICTV 10th (online) Report
See also
Glossary of scientific naming
Virus classification
Bioinformatics
References
External links
Viral Bioinformatics Resource Center
Taxonomy (biology) organizations
Nomenclature codes
Systems of virus taxonomy
Organizations established in 1966
Virology organizations | International Committee on Taxonomy of Viruses | [
"Biology"
] | 2,207 | [
"Nomenclature codes",
"Taxonomy (biology) organizations",
"Biological nomenclature",
"Taxonomy (biology)"
] |
4,215,766 | https://en.wikipedia.org/wiki/Transportable%20Applications%20Environment | The Transportable Applications Environment (TAE) was a rapid prototyping graphical user interface development environment created by NASA in the 1980s. It was available for use on DEC VAX ULTRIX, DEC RISC ULTRIX, Sun, VAX/VMS, Silicon Graphics, HP9000, and IBM RS/6000 based systems.
References
NASA online | Transportable Applications Environment | [
"Technology"
] | 72 | [
"Computing stubs"
] |
4,215,881 | https://en.wikipedia.org/wiki/Active%20fire%20protection | Active fire protection (AFP) is an integral part of fire protection. AFP is characterized by items and/or systems that require a certain amount of motion and response in order to work, in contrast to passive fire protection.
Categories
Manual fire suppression
Manual fire suppression includes the use of a fire blanket, fire extinguisher, or a standpipe system.
Fire blanket
A fire blanket is a sheet of fire-retardant material that is designed to be placed over a fire to smother it. Small fire blankets are meant for incipient-stage fires; they are normally made of fiberglass or Kevlar. Larger ones can be found in laboratories and factories, and are designed to be wrapped around a person whose clothes have caught fire.
Fire extinguisher
Fire extinguishers are devices that contain and discharge a substance that extinguishes or puts out a fire. These handheld devices come in a wide range of sizes, but the most common are portable fire extinguishers, typically weighing up to 15 kg in total. These can be easily handled and operated by one person, and can be wall-mounted, carried on a fire extinguisher trolley, or housed inside a cabinet. Fire extinguishers are among the most common manual fire suppression devices and are required in all commercial buildings and vehicles. They can be used with little to no training and are meant for small, incipient-stage fires. The most common extinguisher is the ABC extinguisher, found in most offices and homes; it can be used on ordinary fires, liquid fires, and electrical fires. There are also special extinguishers for kitchen fires and for use on burning metals, those being Class K and Class D respectively.
Standpipe
Standpipes are installed in most large, multistory buildings. There are two types of standpipes: dry and wet. Most standpipes are dry systems and cannot be used by the public. Dry systems require a fire engine to pump water into the system. Most dry systems do not have pre-connected hoses and require firefighters to bring in the hose. In wet systems, there is always water in the pipes and they can be used by anyone. Wet systems have hoses so building occupants can try to extinguish fires. Wet systems are becoming less common as the number of sprinkler systems being installed increases. In some systems, firefighters have the option of pumping into a Fire Department Connection (FDC), which increases the water pressure at a standpipe in the event of a fire pump failure or loss of pressure. Typically, these systems pressurize the sprinkler system or the standpipe, but not both at the same time.
Automatic fire suppression
Automatic control means are any form of suppression that requires no human intervention; these can include a fire sprinkler system, a gaseous clean agent, or an automatic foam suppression system. Most automatic suppression systems are found in large commercial kitchens or other high-risk areas.
Sprinkler systems
Fire sprinkler systems are installed in all types of buildings, commercial and residential. They are usually located at ceiling level and are connected to a reliable water source, most commonly municipal water supply. A typical automatic sprinkler system operates when heat at the site of a fire causes a fusible link or glass component in the sprinkler head to fail, thereby releasing the water from the sprinkler head. This means that only the sprinkler heads at the fire location actuate – not all the sprinklers on a floor or in a building. However, certain systems, such as deluge systems, do spray water from all heads in the same zone upon actuation. Sprinkler systems help to reduce the growth of a fire, thereby increasing life safety and limiting structural damage.
Gaseous clean agent
Gaseous clean agents are installed to result in less fire and water damage than sprinklers, such as in computer rooms. The system works by flooding an area with a gas which interferes with the fire tetrahedron. These systems are often found in areas where people are not going to be present when the system is activated such as datacenters, cooling systems, and other industrial applications. Activating a gaseous clean agent system when people are present can cause injury or death, and are usually equipped with an audible notification system to warn any potential occupants to evacuate the area.
Foam suppression system
Automatic foam suppression systems come in three main forms: low expansion, medium expansion, and high expansion.
Low expansion
Low expansion foam expands less than 20 times its original size. These systems can be installed in a variety of places but are commonly found where hydrocarbons are stored. Low expansion foam systems that use film-forming agents work by laying a blanket of foam over the burning liquid to both cool it down and suppress its vapors.
Medium expansion
Medium expansion foam expands between 20 and 200 times its original size. These can be installed in outdoor settings like transfer stations or for use in open pits. Medium foam is used outdoors because it is denser than high expansion and will not blow away as easily. It works by covering what is on fire in a thick blanket of foam to smother it and suppress vapors.
High expansion
High expansion foam expands between 200 and 1000 times its original size. These systems are commonly installed in large-volume areas like airplane hangars, mine shafts, and ship holds. They are normally installed indoors and make a very light foam, extinguishing the fire by rapid smothering and cooling. The high rate of expansion enables the foam to fill large areas quickly. When used on LNG tanks, the foam provides an added insulation layer that helps reduce the vapor rate.
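The three expansion classes can be expressed as a simple classifier. How the exact boundary values (20, 200, 1000) are assigned is an assumption here, since the text gives only ranges.

```python
def foam_expansion_class(expansion_ratio: float) -> str:
    """Classify a foam by its expansion ratio (final / original volume)."""
    if expansion_ratio < 20:
        return "low expansion"
    if expansion_ratio <= 200:
        return "medium expansion"
    if expansion_ratio <= 1000:
        return "high expansion"
    return "outside the ranges described here"

print(foam_expansion_class(8))    # low expansion
print(foam_expansion_class(500))  # high expansion
```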
Electronically controlled
Electronically controlled nozzles are powered by electrical energy that is generated and supplied by fire detection and control devices; they are typically closed until activated.
Ignitable liquid drainage floor assembly (ILDFA)
ILDFA uses a dual approach, combining a water-based fire suppression system in conjunction with a hollow, perforated flooring system to drain and remove spilled flammable liquid. This approach reduces the risk of pool fires inside infrastructure by diverting any leaked fuel away from potential ignition sources or by extinguishing any flammable liquid fire by depriving it of oxygen once it is removed.
Fire detection
Fire detection works using smoke or heat sensors. These systems are an effective tool for alerting people in the immediate vicinity of where the fire is detected, but building regulations require an integrated fire detection system. Such a system not only alerts people throughout the building by triggering the fire alarm, it can also summon emergency services. There are two types of systems available: addressable and conventional. Addressable systems monitor the specific location of each device (e.g. smoke detector, call point or sounder), which means that in the event of a fire or other emergency you know exactly where the problem is. This saves precious time and helps the emergency services prevent loss of life and serious damage. Conventional systems can only determine that the problem is in a general area and are thus better suited to small sites.
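The difference between the two panel types can be made concrete with a small sketch; the device registry and zone map below are invented for illustration.

```python
# An addressable panel knows each device's individual address.
ADDRESSABLE_DEVICES = {
    17: "smoke detector, 3rd floor east corridor",
    42: "call point, main lobby",
}

# A conventional panel only knows which zone circuit tripped.
CONVENTIONAL_ZONES = {1: "ground floor", 2: "upper floors"}

def report(addressable: bool, identifier: int) -> str:
    if addressable:
        return f"Alarm at: {ADDRESSABLE_DEVICES[identifier]}"
    return f"Alarm somewhere in zone: {CONVENTIONAL_ZONES[identifier]}"

print(report(True, 17))   # exact device location
print(report(False, 1))   # general area only
```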
When the fire detection system is activated, it can also send an alert to the local fire department, broadcast a prerecorded warning message, and unlock the building's access control system.
Hypoxic air fire prevention
Fire can be prevented by hypoxic air. Hypoxic air fire prevention systems, also known as oxygen reduction systems are new automatic fire prevention systems that permanently reduce the oxygen concentration inside the protected volumes so that ignition or fire spreading cannot occur. Unlike traditional fire suppression systems that usually extinguish fire after it is detected, hypoxic air is able to prevent fires. At lower altitudes hypoxic air is safe to breathe for healthy individuals.
Construction and maintenance
All AFP systems are required to be installed and maintained in accordance with strict guidelines in order to maintain compliance with the local building code and the fire code.
AFP works alongside modern architectural designs and construction materials and fire safety education to prevent, retard, and suppress structural fires.
See also
Fire damper
Fire hydrant
Fire protection engineering
References
External links
Treatise on Active and Passive Fire Protection from UK Government
When Fire Strikes, Stop, Drop and... Sing? – Article about acoustic fire suppression, Scientific American, January 24, 2008
Karlsruhe Institute of Technology (KIT) - Forschungsstelle für Brandschutztechnik
Fire protection | Active fire protection | [
"Engineering"
] | 1,726 | [
"Building engineering",
"Fire protection"
] |
4,216,002 | https://en.wikipedia.org/wiki/Near-field%20scanning%20optical%20microscope | Near-field scanning optical microscopy (NSOM) or scanning near-field optical microscopy (SNOM) is a microscopy technique for nanostructure investigation that breaks the far field resolution limit by exploiting the properties of evanescent waves. In SNOM, the excitation laser light is focused through an aperture with a diameter smaller than the excitation wavelength, resulting in an evanescent field (or near-field) on the far side of the aperture. When the sample is scanned at a small distance below the aperture, the optical resolution of transmitted or reflected light is limited only by the diameter of the aperture. In particular, lateral resolution of 6 nm and vertical resolution of 2–5 nm have been demonstrated.
As in optical microscopy, the contrast mechanism can be easily adapted to study different properties, such as refractive index, chemical structure and local stress. Dynamic properties can also be studied at a sub-wavelength scale using this technique.
NSOM/SNOM is a form of scanning probe microscopy.
History
Edward Hutchinson Synge is given credit for conceiving and developing the idea for an imaging instrument that would image by exciting and collecting diffraction in the near field. His original idea, proposed in 1928, was based on the use of intense, nearly planar light from an arc under pressure behind a thin, opaque metal film with a small orifice of about 100 nm. The orifice was to remain within 100 nm of the surface, and information was to be collected by point-by-point scanning. He foresaw the illumination and the detector movement being the biggest technical difficulties. John A. O'Keefe developed similar theories in 1956. He thought the moving of the pinhole or the detector when it is so close to the sample would be the most likely issue that could prevent the realization of such an instrument. It was Ash and Nicholls at University College London who, in 1972, first broke Abbe's diffraction limit using microwave radiation with a wavelength of 3 cm. A line grating was resolved with a resolution of λ0/60. A decade later, a patent on an optical near-field microscope was filed by Dieter Pohl, followed in 1984 by the first paper that used visible radiation for near-field scanning. The near-field optical (NFO) microscope involved a sub-wavelength aperture at the apex of a metal-coated, sharply pointed transparent tip, and a feedback mechanism to maintain a constant distance of a few nanometers between the sample and the probe. Lewis et al. were also aware of the potential of an NFO microscope at this time. They reported first results in 1986, confirming super-resolution. In both experiments, details below 50 nm (about λ0/10) in size could be recognized.
Theory
According to Abbe's theory of image formation, developed in 1873, the resolving capability of an optical component is ultimately limited by the spreading out of each image point due to diffraction. Unless the aperture of the optical component is large enough to collect all the diffracted light, the finer aspects of the image will not correspond exactly to the object. The minimum resolution (d) for the optical component is thus limited by its aperture size, and expressed by the Rayleigh criterion:
d = 0.61 λ0 / NA
Here, λ0 is the wavelength in vacuum; NA is the numerical aperture for the optical component (maximum 1.3–1.4 for modern objectives with a very high magnification factor). Thus, the resolution limit is usually around λ0/2 for conventional optical microscopy.
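For a rough sense of scale, the criterion can be evaluated for typical values; the wavelength, NA and aperture figures below are assumptions chosen for illustration.

```python
def rayleigh_resolution_nm(wavelength_nm: float, numerical_aperture: float) -> float:
    """Far-field minimum resolvable separation, d = 0.61 * lambda0 / NA."""
    return 0.61 * wavelength_nm / numerical_aperture

# Green light through a high-NA oil-immersion objective:
print(f"{rayleigh_resolution_nm(550.0, 1.4):.0f} nm")  # ~240 nm, roughly lambda0/2

# In NSOM, an aperture of e.g. ~50 nm sets the resolution instead,
# several times finer than the far-field limit above.
```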
This treatment takes into account only the light diffracted into the far-field that propagates without any restrictions. NSOM makes use of evanescent or non propagating fields that exist only near the surface of the object. These fields carry the high frequency spatial information about the object and have intensities that drop off exponentially with distance from the object. Because of this, the detector must be placed very close to the sample in the near field zone, typically a few nanometers. As a result, near field microscopy remains primarily a surface inspection technique. The detector is then rastered across the sample using a piezoelectric stage. The scanning can either be done at a constant height or with regulated height by using a feedback mechanism.
Modes of operation
Aperture and apertureless operation
NSOM can be operated in a so-called aperture mode or in an apertureless mode. As illustrated, the tips used in the apertureless mode are very sharp and do not have a metal coating.
Though there are many issues associated with the apertured tips (heating, artifacts, contrast, sensitivity, topology and interference among others), aperture mode remains more popular. This is primarily because apertureless mode is even more complex to set up and operate, and is not understood as well. There are five primary modes of apertured NSOM operation and four primary modes of apertureless NSOM operation. The major ones are illustrated in the next figure.
Some types of NSOM operation utilize a campanile probe, which has a square pyramid shape with two facets coated with a metal. Such a probe has a high signal collection efficiency (>90%) and no frequency cutoff. Another alternative is "active tip" schemes, where the tip is functionalized with active light sources such as a fluorescent dye or even a light emitting diode that enables fluorescence excitation.
The merits of aperture and apertureless NSOM configurations can be merged in a hybrid probe design, which contains a metallic tip attached to the side of a tapered optical fiber. At visible range (400 nm to 900 nm), about 50% of the incident light can be focused to the tip apex, which is around 5 nm in radius. This hybrid probe can deliver the excitation light through the fiber to realize tip-enhanced Raman spectroscopy (TERS) at tip apex, and collect the Raman signals through the same fiber. The lens-free fiber-in-fiber-out STM-NSOM-TERS has been demonstrated.
Feedback mechanisms
Feedback mechanisms are usually used to achieve high resolution and artifact-free images, since the tip must be positioned within a few nanometers of the surface. Some of these mechanisms are constant-force feedback and shear-force feedback.
Constant force feedback mode is similar to the feedback mechanism used in atomic force microscopy (AFM). Experiments can be performed in contact, intermittent contact, and non-contact modes.
In shear force feedback mode, a tuning fork is mounted alongside the tip and made to oscillate at its resonance frequency. The amplitude is closely related to the tip-surface distance, and thus used as a feedback mechanism.
Contrast
It is possible to take advantage of the various contrast techniques available to optical microscopy through NSOM but with much higher resolution. By using the change in the polarization of light or the intensity of the light as a function of the incident wavelength, it is possible to make use of contrast enhancing techniques such as staining, fluorescence, phase contrast and differential interference contrast. It is also possible to provide contrast using the change in refractive index, reflectivity, local stress and magnetic properties amongst others.
Instrumentation and standard setup
The primary components of an NSOM setup are the light source, feedback mechanism, the scanning tip, the detector and the piezoelectric sample stage. The light source is usually a laser focused into an optical fiber through a polarizer, a beam splitter and a coupler. The polarizer and the beam splitter would serve to remove stray light from the returning reflected light. The scanning tip, depending upon the operation mode, is usually a pulled or stretched optical fiber coated with metal except at the tip or just a standard AFM cantilever with a hole in the center of the pyramidal tip. Standard optical detectors, such as avalanche photodiode, photomultiplier tube (PMT) or CCD, can be used. Highly specialized NSOM techniques, Raman NSOM for example, have much more stringent detector requirements.
Near-field spectroscopy
As the name implies, information is collected by spectroscopic means instead of imaging in the near field regime. Through near field spectroscopy (NFS), one can probe spectroscopically with sub-wavelength resolution. Raman SNOM and fluorescence SNOM are two of the most popular NFS techniques as they allow for the identification of nanosized features with chemical contrast. Some of the common near-field spectroscopic techniques are below.
Direct local Raman NSOM is based on Raman spectroscopy. Aperture Raman NSOM is limited by very hot and blunt tips, and by long collection times. However, apertureless NSOM can be used to achieve high Raman scattering efficiency factors (around 40). Topological artifacts make it hard to implement this technique for rough surfaces.
Tip-enhanced Raman spectroscopy (TERS) is an offshoot of surface-enhanced Raman spectroscopy (SERS). This technique can be used in an apertureless shear-force NSOM setup, or by using an AFM tip coated with gold or silver. The Raman signal is found to be significantly enhanced under the AFM tip. This technique has been used to resolve local variations in the Raman spectra along a single-walled nanotube. A highly sensitive optoacoustic spectrometer must be used for the detection of the Raman signal.
Fluorescence NSOM is a highly popular and sensitive technique which makes use of fluorescence for near-field imaging, and is especially suited for biological applications. The technique of choice here is apertureless, with emission collected back through the fiber in constant shear-force mode. This technique uses merocyanine-based dyes embedded in an appropriate resin. Edge filters are used for removal of all primary laser light. Resolution as low as 10 nm can be achieved using this technique.
Near field infrared spectrometry and near-field dielectric microscopy use near-field probes to combine sub-micron microscopy with localized IR spectroscopy.
The nano-FTIR method is a broadband nanoscale spectroscopy that combines apertureless NSOM with broadband illumination and FTIR detection to obtain a complete infrared spectrum at every spatial location. Sensitivity to a single molecular complex and nanoscale resolution down to 10 nm have been demonstrated with nano-FTIR.
The nanofocusing technique can create a nanometer-scale "white" light source at the tip apex, which can be used to illuminate a sample at near-field for spectroscopic analysis. The interband optical transitions in individual single-walled carbon nanotubes are imaged and a spatial resolution around 6 nm has been reported.
Artifacts
NSOM can be vulnerable to artifacts that do not come from the intended contrast mode. The most common sources of artifacts in NSOM are tip breakage during scanning, striped contrast, displaced optical contrast, local far-field light concentration, and topographic artifacts.
In apertureless NSOM, also known as scattering-type SNOM or s-SNOM, many of these artifacts are eliminated or can be avoided by proper technique application.
Limitations
One limitation is the very short working distance and extremely shallow depth of field. NSOM is normally limited to surface studies; however, it can be applied to subsurface investigations within the corresponding depth of field. In shear-force mode and other contact operation, it is not well suited to studying soft materials. It has long scan times for large sample areas or high-resolution imaging.
An additional limitation is the predominant orientation of the polarization state of the interrogating light in the near-field of the scanning tip. Metallic scanning tips naturally orient the polarization state perpendicular to the sample surface. Other techniques, like anisotropic terahertz microspectroscopy utilize in-plane polarimetry to study physical properties inaccessible to near-field scanning optical microscopes including the spatial dependence of intramolecular vibrations in anisotropic molecules.
See also
Fluorescence spectroscopy
Nano-optics
Near-field optics
References
External links
Scanning probe microscopy
Cell imaging
Laboratory equipment
Microscopy
Optical microscopy | Near-field scanning optical microscope | [
"Chemistry",
"Materials_science",
"Biology"
] | 2,466 | [
"Optical microscopy",
"Measuring instruments",
"Microscopes",
"Scanning probe microscopy",
"Microscopy",
"Nanotechnology",
"Cell imaging"
] |
4,216,621 | https://en.wikipedia.org/wiki/Allose | Allose is an aldohexose sugar. It is a rare monosaccharide that occurs as a 6-O-cinnamyl glycoside in the leaves of the African shrub Protea rubropilosa. Extracts from the fresh-water alga Ochromonas malhamensis contain this sugar, though of unknown absolute configuration. It is soluble in water and practically insoluble in methanol.
Reduction of allose by catalytic hydrogenation produces allitol, an obscure sugar alcohol that is rarely used in the chemical industry.
Allose is a C-3 epimer of glucose.
Notes
References
Carbohydrates, edited by P.M. Collins, Chapman and Hall.
Aldohexoses
Pyranoses | Allose | [
"Chemistry"
] | 156 | [] |
4,216,648 | https://en.wikipedia.org/wiki/Gulose | Gulose is an aldohexose sugar. It is a monosaccharide that is very rare in nature, but has been found in archaea, bacteria and eukaryotes. It also exists as a syrup with a sweet taste. It is soluble in water and slightly soluble in methanol. Neither the D- nor the L-form is fermentable by yeast.
D-Gulose is a C-3 epimer of D-galactose and a C-5 epimer of L-mannose.
References
Aldohexoses
Pyranoses | Gulose | [
"Chemistry"
] | 123 | [] |
4,216,670 | https://en.wikipedia.org/wiki/Maitland%20Jones%20Jr. | Maitland Jones Jr. (born November 23, 1937) is an American experimental chemist. Jones worked at Princeton University in his research lab from 1964 until his 2007 retirement. He then taught at New York University from 2007 until his dismissal in 2022. He is known for changing how the subject of organic chemistry is taught to undergraduate students, through writing a popular textbook, Organic Chemistry, and re-shaping the course from simple rote learning to one that focuses on scientific problem solving.
Education
Jones earned a Bachelor of Science, Master of Science, and PhD from Yale University.
Career
Jones' field of expertise is reactive intermediates, with particular emphasis on carbenes. He has published extensively in the field of quantum organic chemistry, particularly focusing on the mechanism of quantum molecular reactions. His interest areas include carbenes, carboranes, and heterocycles. Over the course of almost forty years, he and his research group have published 225 papers, averaging some five papers per year or one paper per active group member per year.
Jones is also the author of Organic Chemistry texts. He is credited with the naming of bullvalene, named after William "Bull" Doering, under whom Jones studied as a graduate student at Yale University.
He ran his research lab, the Jones Lab, at Princeton from 1964 to 2004. During this time, he published papers with 63 undergraduates, 30 graduate students, and 34 postdoctoral fellows and visitors.
Teaching
Jones is credited as being among the early adopters of distance education technology, in the late 1960s, using the Victor Electrowriter Remote Blackboard (VERB) system.
After retiring from Princeton in 2007, Jones taught organic chemistry at New York University until spring 2022 on an annual contract basis. NYU offers different classes to students majoring in chemistry and to pre-med students, and Jones was assigned to teach aspiring doctors. His contract at NYU was not renewed in 2022 after students complained that the class was too hard and did not provide adequate academic support. Jones said that the premed students' study skills and ability to focus had been declining during the previous decade, and had then declined dramatically after the interruption of the COVID-19 pandemic. The non-renewal of Jones' contract concerned professors inside and outside of NYU. Alán Aspuru-Guzik, a professor at the University of Toronto, suggested that the student petition points to a "premed culture" in which the most important outcome of a course is the grade for the medical school application, and to the effects of social media distractions such as TikTok on the amount of studying done by students. The head of NYU's chemistry department, Mark Tuckerman, said that the decision not to renew Jones' contract was made against the department's recommendation, which was to have Jones teach organic chemistry to students majoring in chemistry, because chemistry majors would appreciate Jones' high standards.
Textbooks
Jones is the first author of an influential textbook on Organic Chemistry. The book, first published in 1997, is now in its fifth edition (2014).
Organic Chemistry, Jones, M. Jr.; Fleming, S. A. W. W. Norton, New York, 1997.
Instructor's Manual and Supplementary Problems Set for Organic Chemistry, Jones, M. Jr.; Ovaska, T. W. W. Norton, New York, 1997.
Study Guide for Organic Chemistry, Jones, M. Jr.; Gingrich, H. L. W. W. Norton, New York, 1997.
Study Guide for Organic Chemistry, Third Edition, Jones, M. Jr.; Gingrich, H. L. W. W. Norton, New York, 2004.
How to Survive and Thrive in Organic Chemistry for Dummies, Second Edition, Jones, M. Jr.; Gingrich, H. L. W. W. Norton, New York, 2004.
Academic experience
Postdoctoral Fellow, Yale University (1963)
Postdoctoral Fellow, University of Wisconsin–Madison (1963–1964)
Instructor in Chemistry, Princeton University (1964–1966)
Assistant Professor, Princeton University (1966–1970)
Visiting Assistant Professor, Columbia University (1969–1970)
Associate Professor, Princeton University (1970–1973)
Professor, Princeton University (1973–2007)
Visiting Professor, Vrije Universiteit, Amsterdam (1973–1974, 1978)
David B. Jones Professor of Chemistry, Princeton University (1983–2007)
Visiting Professor, Harvard University (1986)
Visiting Professor, Kiev Polytechnic Institute (1990)
Visiting Professor, Fudan University (1994)
Professor, New York University (2007–2022)
Awards and honors
David B. Jones Professor of Chemistry (Princeton University)
References
External links
Faculty website
1937 births
Living people
21st-century American chemists
Princeton University faculty
Yale University alumni
American organic chemists
New York University faculty | Maitland Jones Jr. | [
"Chemistry"
] | 972 | [
"Organic chemists",
"American organic chemists"
] |
4,216,673 | https://en.wikipedia.org/wiki/Altrose | Altrose is an aldohexose sugar. D-Altrose is an unnatural monosaccharide. It is soluble in water and practically insoluble in methanol. However, L-altrose has been isolated from strains of the bacterium Butyrivibrio fibrisolvens.
Altrose is a C-3 epimer of mannose. The ring conformation of α-altropyranoside is flexible compared to most other aldohexopyranosides, with idose as an exception. In solution, different derivatives of altrose have been shown to occupy the 4C1, OS2, and 1C4 conformations.
References
Aldohexoses
Furanoses
Pyranoses | Altrose | [
"Chemistry"
] | 151 | [] |
4,216,687 | https://en.wikipedia.org/wiki/Talose | Talose is an aldohexose sugar. It is an unnatural monosaccharide, that is soluble in water and slightly soluble in methanol. Some etymologists suggest that talose's name derives from the automaton of Greek mythology named Talos, but the relevance is unclear.
Talose is a C-2 epimer of galactose and a C-4 epimer of mannose.
References
Aldohexoses
Pyranoses | Talose | [
"Chemistry"
] | 100 | [] |
4,216,735 | https://en.wikipedia.org/wiki/Analyser | An analyser (British English) or analyzer (American English; see spelling differences) is a tool used to analyze data. For example, a gas analyzer tool is used to analyze gases. It examines the given data and tries to find patterns and relationships. An analyser can be a piece of hardware or software.
Autoanalysers are machines that perform their work with little human involvement.
Operation
Analysis can be done directly on samples, or the analyser can process data acquired from a remote sensor. The source of samples for automatic sampling is commonly some kind of industrial process. Analysers that are connected to a process and conduct automatic sampling can be called online (or on-line) analysers or sometimes inline (or in-line) analysers. For inline analysis, a sensor can be placed in a process vessel or stream of flowing material. Another method of online analysis is allowing a sample stream to flow from the process equipment into an analyser, sometimes conditioning the sample stream, e.g., by reducing pressure or changing the sample temperature. Many analysers are not designed to withstand high pressure. Such sampling is typically for fluids (either liquids or gases). If the sample stream is not substantially modified by the analyser, it can be returned to the process. Otherwise, the sample stream is discarded; for example, if reagents were added.
Pressure can be lowered by a pressure reducing valve. Such valves may be used to control the flow rate to the online analyser. The temperature of a hot sample may be lowered by use of an online sample cooler. Analysis can be done periodically (for example, every 15 minutes), or continuously. For periodic sampling, valves (or other devices) can be switched open to allow a fluid sample stream to flow to the analyser and shut when not sampling.
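To make the periodic-sampling scheme concrete, the following is a minimal sketch of such a control loop. The `valve` and `analyser` objects and all timing values are hypothetical illustrations, not taken from any particular instrument.

```python
import time

SAMPLE_INTERVAL_S = 15 * 60   # e.g., one sample every 15 minutes
FLUSH_TIME_S = 30             # time to let fresh fluid displace the old

def run_periodic_sampling(valve, analyser):
    """Toy control loop for a periodic online analyser.

    `valve` and `analyser` are hypothetical interfaces: the valve admits
    the sample stream to the analyser, and the line is flushed briefly so
    the reading reflects fresh process fluid, not stagnant line contents.
    """
    while True:
        valve.open()                      # admit the sample stream
        time.sleep(FLUSH_TIME_S)          # flush the sample line
        reading = analyser.measure()      # e.g., conductivity, pH, ...
        valve.close()                     # stop sampling between readings
        print(f"sample value: {reading}")
        time.sleep(SAMPLE_INTERVAL_S - FLUSH_TIME_S)
```

A continuous analyser would simply omit the valve switching and read the sensor at a fixed polling rate.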
Some methods of inline analysis are so simple, such as electrical conductivity or pH, that the instruments are usually not even called analysers. Salinity determined from simple online analysis is often derived from a conductivity measurement whose output signal is calibrated in terms of salinity concentration (for example, ppm of NaCl). Various other types of analyses can be devised. Physical properties measured can include electrical conductivity (or, effectively, electrical resistivity), refractive index, and radioactivity. A simple process that uses inline electrical conductivity determination is water purification, where the output water is tested for how effectively salts have been removed. Electrical conductivity variations include cation and anion conductivity. Chromatography such as ion chromatography or HPLC often tests the output stream continuously by measuring electrical conductivity, particularly cation or anion conductivity, refractive index, colorimetry, or ultraviolet/visible absorbance at a certain wavelength. Inline, online, and offline analysers are available for other types of analytes. Many of these add reagents to the samples or sample streams.
Types of analysers
Automated analyser
Breathalyzer (breath analyzer)
Bus analyser
Differential analyser – early analogue computer
Electron microprobe
Lexical analyser
Logic analyser
Network analyser
Protocol analyser (packet sniffer)
Quadrupole mass analyser
Spectrum analyser
Vector signal analyser
References
Measuring instruments | Analyser | [
"Technology",
"Engineering"
] | 682 | [
"Measuring instruments"
] |
4,217,131 | https://en.wikipedia.org/wiki/Ditellurium%20decafluoride | Ditellurium decafluoride, Te2F10, was widely reported in the literature, but the material believed to be Te2F10 has been shown to be teflic anhydride, F5TeOTeF5. An account of how this error occurred was given by P. M. Watkins.
If it existed, it would be valence isoelectronic with disulfur decafluoride, and have a similar structure.
References
Tellurium compounds
Fluorides
Nonmetal halides
Chalcohalides
Hypothetical chemical compounds | Ditellurium decafluoride | [
"Chemistry"
] | 114 | [
"Inorganic compounds",
"Theoretical chemistry stubs",
"Hypotheses in chemistry",
"Salts",
"Chalcohalides",
"Theoretical chemistry",
"Hypothetical chemical compounds",
"Fluorides"
] |
4,217,297 | https://en.wikipedia.org/wiki/Electromagnetic%20hypersensitivity | Electromagnetic hypersensitivity (EHS) is a claimed sensitivity to electromagnetic fields, to which adverse symptoms are attributed. EHS has no scientific basis and is not a recognized medical diagnosis, although it is generally accepted that the experience of EHS symptoms is of psychosomatic origin. Claims are characterized by a "variety of non-specific symptoms, which afflicted individuals attribute to exposure to electromagnetic fields". Attempts to justify the claim that EHS is caused by exposure to electromagnetic fields have amounted to pseudoscience.
Those who are self-diagnosed with EHS report adverse reactions to electromagnetic fields at intensities well below the maximum levels permitted by international radiation safety standards. Provocation trials have found that such claimants are unable to distinguish between exposure and non-exposure to electromagnetic fields. A systematic review of medical research in 2011 found no convincing scientific evidence for symptoms being caused by electromagnetic fields. Since then, several double-blind experiments have shown that people who report electromagnetic hypersensitivity are unable to detect the presence of electromagnetic fields and are as likely to report ill health following a sham exposure as they are following exposure to genuine electromagnetic fields, suggesting the cause in these cases is the nocebo effect.
As of 2005, the WHO recommended that claims of EHS be clinically evaluated in order to identify and rule out alternative diagnoses for the reported symptoms. Cognitive behavioral therapy and management of comorbid psychiatric disorders may be helpful in managing the condition.
Some people who feel they are sensitive to electromagnetic fields may seek to reduce their exposure or use alternative medicine. Government agencies have enforced false advertising claims against companies selling devices to shield against EM radiation.
Signs and symptoms
There are no specific symptoms associated with claims of EHS, and the reported symptoms range widely among individuals. They include headache, fatigue, stress, sleep disturbances, skin prickling, burning sensations and rashes, and aches and pains in the muscles. In severe cases such symptoms can be a real and sometimes disabling problem for the affected person, causing psychological distress. There is no scientific basis to link such symptoms to electromagnetic field exposure.
The prevalence of some reported symptoms is geographically or culturally dependent and does not imply "a causal relationship between symptoms and attributed exposure". Many such reported symptoms overlap with other syndromes known as symptom-based conditions, functional somatic syndromes, and IEI (idiopathic environmental intolerance).
Those reporting electromagnetic hypersensitivity usually describe different levels of susceptibility to electric fields, magnetic fields, and various frequencies of electromagnetic waves. Devices implicated include fluorescent and low-energy lights, mobile, cordless/portable phones, and Wi-Fi. A 2001 survey found that people self-diagnosing as EHS related their symptoms most frequently to cell sites (74%), followed by mobile phones (36%), cordless phones (29%), and power lines (27%). Surveys of people with EHS have found no consistent pattern to these symptoms.
Causes
Most blinded conscious provocation studies have failed to show a correlation between exposure and symptoms. An example is a 2007 study where 17 individuals who showed symptoms in an open test were exposed variously to real mobile phones or sham ones. The individuals showed discomfort at the mobile phones regardless of whether the phones were genuine. These results suggest that psychological mechanisms play a role in causing or exacerbating EHS symptoms. In 2010, Rubin et al. published a follow-up to their 2005 review, bringing the totals to 46 double-blind experiments and 1175 people with self-diagnosed EHS. Neither review found robust evidence to support the hypothesis that electromagnetic exposure causes EHS, nor have other studies. They also concluded that the studies supported the role of the nocebo effect in triggering acute symptoms in those with EHS.
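As an illustration of how such provocation trials are scored, the sketch below applies a one-sided binomial test to a participant's exposure judgements. The trial counts are invented for the example and do not come from any of the studies cited here.

```python
from scipy.stats import binomtest

# Invented example data: a participant judges "field on" vs "field off"
# in 40 randomized, double-blind trials and is correct 22 times.
n_trials, n_correct = 40, 22

# Null hypothesis: the participant cannot detect the field, so each
# judgement is correct with probability 0.5 (a coin flip).
result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"p-value = {result.pvalue:.3f}")  # about 0.32: consistent with chance
```

A detection rate this close to 50% gives no evidence of genuine field perception, which is the pattern the blinded studies above report.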
Diagnosis
Electromagnetic hypersensitivity is not an accepted diagnosis; medically there is no case definition or clinical practice guideline and no test to identify it, nor is there an agreed-upon definition with which to conduct clinical research.
Complaints of electromagnetic hypersensitivity may mask organic or psychiatric illness: in a recent psychological model of mental disorder, Sébastien Point proposed to consider it as a specific phobia. Diagnosis of those underlying conditions involves investigating and identifying possible known medical causes of any symptoms observed. It may require both a thorough medical evaluation to identify and treat any specific conditions that may be responsible for the symptoms, and a psychological evaluation to identify alternative psychiatric/psychological conditions that may be responsible or contribute to the symptoms.
Symptoms may also be brought on by imagining that exposure is causing harm, an example of the nocebo effect. Studies have shown that reports of symptoms are more closely associated with belief that one is being exposed than with actual exposure.
Management
Whatever the cause of symptoms attributed to EHS, it can be a debilitating condition that benefits from treatment or management. Cognitive behavioral therapy has shown some success helping people cope with the condition.
As of 2005, WHO recommended that people presenting with claims of EHS be evaluated to determine if they have a medical condition that may be causing the symptoms the person is attributing to EHS, that they have a psychological evaluation, and that the person's environment be evaluated for issues like air or noise pollution that may be causing problems.
A variety of pseudoscientific devices are marketed to those who fear that they are being harmed by electromagnetic fields. The US Federal Trade Commission has warned about scams that involve selling products purported to protect against cell phone radiation. In the UK, a product called 5GBioShield was identified by Trading Standards as a "scam" device. Its manufacturers claimed that it could mitigate harms from phone radiation, but British authorities determined that the device was merely a USB drive.
Prevalence
In 1997, before Wi-Fi, Bluetooth and 3G technology, a group of scientists attempted to estimate the number of people reporting "subjective symptoms" from electromagnetic fields for the European Commission. They estimated that electromagnetic sensitivity occurred in "less than a few cases per million of the population" (based on centres of occupational medicine in the UK, Italy and France) or up to "a few tenths of a per cent of the population" (based on self-aid groups in Denmark, Ireland and Sweden). In 2005, the UK Health Protection Agency reviewed this and several other studies for prevalence figures and concluded that "the differences in prevalence were at least partly due to the differences in available information and media attention around electromagnetic hypersensitivity that exist in different countries" and that "Similar views have been expressed by other commentators". The authors noted that most of the studies focused on computer monitors (VDUs), so the "findings cannot apply in full" to other forms of EMF exposure such as radio waves from mobile phones and base stations.
In 2007, a UK survey aimed at a randomly selected group of 20,000 people found a prevalence of 4% for symptoms self-attributed to electromagnetic exposure.
A 2013 study using telephone surveys in Taiwan concluded that the rates of IEI-EMF were in decline within the country, despite previous expectations of a rise in prevalence as electronic devices became more widespread. Rates declined from 13% in 2007 to 5% in 2013. The study also referred to apparent declines in the Netherlands (from 7% in 2009 to 4% in 2011) and in Germany (from 10% in 2009 to 7% in 2013). More women than men believed themselves to be electromagnetically hypersensitive.
In 2021, physicist Sébastien Point noted that both the prevalence of electrohypersensitivity and its gender ratio (roughly two electrohypersensitive women for every electrohypersensitive man) are similar to those of specific phobias, which, according to him, reinforces the hypothesis that electrohypersensitivity is a new specific phobia.
Society and culture
In 2010, a cell tower operator in South Africa revealed at a public meeting that the tower that nearby residents were blaming for their EHS symptoms had been turned off over six weeks before the meeting, making it a highly unlikely cause of EHS symptoms.
In February 2014, the UK Advertising Standards Authority found that claims of harm from electromagnetic radiation, made in a product advertisement, were unsubstantiated and misleading.
People have sued for damages due to harm claimed from electromagnetic radiation. In 2012, a New Mexico judge dismissed a lawsuit in which a person sued his neighbor, claiming to have been harmed by EM radiation from his neighbor's cordless telephones, dimmer switches, chargers, Wi-Fi and other devices. The plaintiff brought the testimony of his doctor, who also believed she had EHS, and a person who represented himself as a neurotoxicologist; the judge found none of their testimony credible. In 2015, parents of a boy at a school in Southborough, Massachusetts, alleged that the school's Wi-Fi was making the boy sick.
In November 2015, a depressed teenage girl in England died by suicide. This act was attributed to EHS by her parents and taken up by tabloids and EHS advocates.
The public position of the EU's Scientific Committee on Emerging and Newly Identified Health Risks (SCENIHR) to the European Commission is that "new improved studies on the association between radio frequency fields from broadcast transmitters and childhood cancer provide evidence against such an association." But "data on the health effects of intermediate frequency fields used, for example, in metal detectors or anti-theft devices in shops, are still lacking." The SCENIHR called for research to continue.
Some people who feel they are sensitive to electromagnetic fields self-treat by trying to reduce their exposure to electromagnetic sources, by disconnecting or removing electrical devices, shielding or screening themselves or their residences, and using alternative medicine. In Sweden, some municipalities provide disability grants to people who claim to have EHS in order to have abatement work done in their homes, even though the public health authority does not recognize EHS as an actual medical condition; towns in Halland do not provide such funds, and this decision was challenged and upheld in court.
The United States National Radio Quiet Zone is an area where wireless signals are restricted for scientific research purposes, and some people who believe they have EHS have relocated there to seek relief.
Gro Harlem Brundtland, former prime minister of Norway and Director general of the World Health Organization, claims to have EHS. In 2015, she said that she had been sensitive for 25 years.
The 2022 documentary Electric Malady examines the life of a Swedish man who claims to have EHS.
The crime drama television series Better Call Saul, the prequel to Breaking Bad, features the character Chuck McGill, who claims to have EHS.
See also
Wireless electronic devices and health
Electromagnetic radiation and health
Bioelectromagnetics – the study of the interaction between electromagnetic fields and biological entities
Microwave auditory effect
List of questionable diseases
Radiophobia – the fear of ionizing radiation, originating in the early 1900s
Wind turbine syndrome
Tinfoil hat – a popular stereotype and slang term for paranoia, persecutory delusions, pseudoscience and conspiracy theories
References
External links
Radiofrequency Electromagnetic Energy and Health: Research Needs from the Australian Radiation Protection and Nuclear Safety Agency (ARPANSA) (Technical Report 178 – published June 2017)
Alternative diagnoses
Pseudoscience
Somatic psychology
Radiation health effects
Wireless | Electromagnetic hypersensitivity | [
"Chemistry",
"Materials_science",
"Engineering"
] | 2,316 | [
"Radiation health effects",
"Telecommunications engineering",
"Wireless",
"Radiation effects",
"Radioactivity"
] |
4,217,326 | https://en.wikipedia.org/wiki/Intracluster%20medium | In astronomy, the intracluster medium (ICM) is the superheated plasma that permeates a galaxy cluster. The gas consists mainly of ionized hydrogen and helium and accounts for most of the baryonic material in galaxy clusters. The ICM is heated to temperatures on the order of 10 to 100 megakelvins, emitting strong X-ray radiation.
Composition
The ICM is composed primarily of ordinary baryons, mainly ionised hydrogen and helium. This plasma is enriched with heavier elements, including iron. The average amount of heavier elements relative to hydrogen, known as metallicity in astronomy, ranges from a third to a half of the value in the sun. Studying the chemical composition of the ICM as a function of radius has shown that the cores of galaxy clusters are more metal-rich than their outskirts. In some clusters (e.g. the Centaurus cluster) the metallicity of the gas can rise above that of the sun. Because of the gravitational field of a cluster, metal-enriched gas ejected by supernovae remains gravitationally bound to the cluster as part of the ICM. By observing the ICM at varying redshifts, which corresponds to looking at different epochs in the evolution of the Universe, astronomers can obtain a historical record of element production in galaxies.
Roughly 15% of a galaxy cluster's mass resides in the ICM. The stars and galaxies contribute only around 5% to the total mass. It is theorized that most of the mass in a galaxy cluster consists of dark matter and not baryonic matter. For the Virgo Cluster, the ICM contains roughly 3 × 10¹⁴ M☉ while the total mass of the cluster is estimated to be 1.2 × 10¹⁵ M☉.
Although the ICM on the whole contains the bulk of a cluster's baryons, it is not very dense, with typical values of 10⁻³ particles per cubic centimeter. The mean free path of the particles is roughly 10¹⁶ m, or about one lightyear. The density of the ICM rises towards the centre of the cluster with a relatively strong peak. In addition, the temperature of the ICM typically drops to 1/2 or 1/3 of the outer value in the central regions. Once the density of the plasma reaches a critical value, interactions between the ions become frequent enough to ensure cooling via X-ray radiation.
Observing the intracluster medium
As the ICM is at such high temperatures, it emits X-ray radiation, mainly by the bremsstrahlung process and X-ray emission lines from the heavy elements. These X-rays can be observed using an X-ray telescope, and analysis of this data makes it possible to determine the physical conditions of the plasma, including its temperature, density, and metallicity.
Measurements of the temperature and density profiles in galaxy clusters allow for a determination of the mass distribution profile of the ICM through hydrostatic equilibrium modeling. The mass distributions determined from these methods reveal masses that far exceed the luminous mass seen and are thus a strong indication of dark matter in galaxy clusters.
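A minimal sketch of that hydrostatic estimate is given below, using the standard hydrostatic-equilibrium mass formula for a spherically symmetric cluster. The function name and the choice of mean molecular weight μ ≈ 0.6 are illustrative assumptions, not tied to any particular analysis pipeline.

```python
import numpy as np

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
K_B = 1.381e-23   # Boltzmann constant, J/K
M_P = 1.673e-27   # proton mass, kg
MU = 0.6          # assumed mean molecular weight of an ionised H/He plasma

def hydrostatic_mass(r, temperature, density):
    """Cumulative mass M(<r) from the standard hydrostatic-equilibrium
    estimator, given radial profiles of ICM temperature (K) and gas
    density (any units; only its logarithmic slope enters), with r in
    metres. Returns the enclosed mass in kg at each radius."""
    ln_r = np.log(r)
    dln_rho = np.gradient(np.log(density), ln_r)    # d ln(rho) / d ln(r)
    dln_T = np.gradient(np.log(temperature), ln_r)  # d ln(T) / d ln(r)
    return -(K_B * temperature * r) / (G * MU * M_P) * (dln_rho + dln_T)
```

Because the logarithmic density and temperature slopes are negative in the outskirts, the estimator returns a positive enclosed mass that can be compared with the luminous mass.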
Inverse Compton scattering of low energy photons through interactions with the relativistic electrons in the ICM cause distortions in the spectrum of the cosmic microwave background radiation (CMB), known as the Sunyaev–Zel'dovich effect. These temperature distortions in the CMB can be used by telescopes such as the South Pole Telescope to detect dense clusters of galaxies at high redshifts.
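For reference, the magnitude of the thermal Sunyaev–Zel'dovich distortion is conventionally expressed through the dimensionless Compton parameter, which integrates the electron pressure of the ICM along the line of sight (a standard textbook relation, stated here for orientation):

y = (σ_T / (m_e c²)) ∫ n_e k_B T_e dl

where σ_T is the Thomson cross-section and n_e and T_e are the electron density and temperature; in the Rayleigh–Jeans part of the CMB spectrum the resulting temperature decrement is ΔT/T = −2y.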
In December 2022, the James Webb Space Telescope was reported to be studying the faint light emitted in the intracluster medium, which a 2018 study found to be an "accurate luminous tracer of dark matter".
Cooling flows
Plasma in regions of the cluster with a cooling time shorter than the age of the system should be cooling due to strong X-ray radiation, whose emission is proportional to the density squared. Since the density of the ICM is highest towards the centre of the cluster, the radiative cooling time there drops significantly. The central cooled gas can no longer support the weight of the external hot gas, and the pressure gradient drives what is known as a cooling flow, in which the hot gas from the external regions flows slowly towards the centre of the cluster. This inflow would result in regions of cold gas and thus regions of new star formation. Recently, however, with the launch of new X-ray telescopes such as the Chandra X-ray Observatory, images of galaxy clusters with better spatial resolution have been taken. These new images do not show signs of new star formation on the order of what was historically predicted, motivating research into the mechanisms that prevent the central ICM from cooling.
Heating
There are two popular explanations of the mechanisms that prevent the central ICM from cooling: feedback from active galactic nuclei through injection of relativistic jets of plasma and sloshing of the ICM plasma during mergers with subclusters. The relativistic jets of material from active galactic nuclei can be seen in images taken by telescopes with high angular resolution such as the Chandra X-ray Observatory.
See also
Interstellar medium
References
Large-scale structure of the cosmos
Extragalactic astronomy
Outer space
Space plasmas
Intergalactic media | Intracluster medium | [
"Physics",
"Astronomy"
] | 1,072 | [
"Space plasmas",
"Galaxy clusters",
"Outer space",
"Intergalactic media",
"Astrophysics",
"Extragalactic astronomy",
"Astronomical objects",
"Astronomical sub-disciplines"
] |
4,217,714 | https://en.wikipedia.org/wiki/Salience%20%28neuroscience%29 | Salience (also called saliency, from Latin saliō meaning “leap, spring”) is the property by which some thing stands out. Salient events are an attentional mechanism by which organisms learn and survive; those organisms can focus their limited perceptual and cognitive resources on the pertinent (that is, salient) subset of the sensory data available to them.
Saliency typically arises from contrasts between items and their neighborhood: for example, a red dot surrounded by white dots, a flickering message indicator on an answering machine, or a loud noise in an otherwise quiet environment. Saliency detection is often studied in the context of the visual system, but similar mechanisms operate in other sensory systems. What is salient can be influenced by training: for example, particular letters can become salient for human subjects through training. Training can also depend on a sequence of necessary events, each of which has to be salient in turn for the training to succeed; the alternative is failure, as in an illustrated sequence for tying a bowline. Even the first illustration contains a salient detail: the rope must cross over, and not under, the bitter end of the rope (the end that remains fixed rather than free to move). Failure to notice that this first salient condition has not been satisfied means the knot will fail to hold, even when the remaining salient events have been satisfied.
When attention deployment is driven by salient stimuli, it is considered to be bottom-up, memory-free, and reactive. Conversely, attention can also be guided by top-down, memory-dependent, or anticipatory mechanisms, such as when looking ahead of moving objects or sideways before crossing streets. Humans and other animals have difficulty paying attention to more than one item simultaneously, so they are faced with the challenge of continuously integrating and prioritizing different bottom-up and top-down influences.
Neuroanatomy
The brain component named the hippocampus helps with the assessment of salience and context by using past memories to filter new incoming stimuli, and placing those that are most important into long term memory. The entorhinal cortex is the pathway into and out of the hippocampus, and is an important part of the brain's memory network; research shows that it is a brain region that suffers damage early on in Alzheimer's disease, one of the effects of which is altered (diminished) salience.
The pulvinar nuclei (in the thalamus) modulate physical/perceptual salience in attentional selection.
One group of neurons (i.e., D1-type medium spiny neurons) within the nucleus accumbens shell (NAcc shell) assigns appetitive motivational salience ("want" and "desire", which includes a motivational component), aka incentive salience, to rewarding stimuli, while another group of neurons (i.e., D2-type medium spiny neurons) within the NAcc shell assigns aversive motivational salience to aversive stimuli.
The primary visual cortex (V1) generates a bottom-up saliency map from visual inputs to guide reflexive attentional shifts or gaze shifts. According to V1 Saliency Hypothesis, the saliency of a location is higher when V1 neurons give higher responses to that location relative to V1 neurons' responses to other visual locations. For example, a unique red item among green items, or a unique vertical bar among horizontal bars, is salient since it evokes higher V1 responses and attracts attention or gaze. The V1 neural responses are sent to the superior colliculus to guide gaze shifts to the salient locations. A fingerprint of the saliency map in V1 is that attention or gaze can be captured by the location of an eye-of-origin singleton in visual inputs, e.g., a bar uniquely shown to the left eye in a background of many other bars shown to the right eye, even when observers cannot tell the difference between the singleton and the background bars.
In psychology
The term is widely used in the study of perception and cognition to refer to any aspect of a stimulus that, for any of many reasons, stands out from the rest. Salience may be the result of emotional, motivational or cognitive factors and is not necessarily associated with physical factors such as intensity, clarity or size. Although salience is thought to determine attentional selection, salience associated with physical factors does not necessarily influence selection of a stimulus.
Salience bias
Salience bias (also referred to as perceptual salience) is a cognitive bias that predisposes individuals to focus on or attend to items, information, or stimuli that are more prominent, visible, or emotionally striking, as opposed to stimuli that are unremarkable or less salient, even though this difference is often irrelevant by objective standards. The American Psychological Association (APA) defines the salience hypothesis as a theory of perception in which "motivationally significant" information is more readily perceived than information with little or less significant motivational importance. Perceptual salience (salience bias) is linked to the vividness effect, whereby a more pronounced response is produced by a vivid perception of a stimulus than by mere knowledge of the stimulus. Salience bias assumes that more dynamic, conspicuous, or distinctive stimuli engage attention more than less prominent stimuli and disproportionately impact decision making; it is a bias that favors more salient information.
Application
Cognitive Psychology
Salience bias, like all other cognitive biases, is a concept applicable to various disciplines. For example, cognitive psychology investigates cognitive functions and processes, such as perception, attention, memory, problem solving, and decision making, all of which can be influenced by salience bias. Salience bias acts to combat cognitive overload by focusing attention on prominent stimuli, which affects how individuals perceive the world, as other, less vivid stimuli that could add to or change this perception are ignored. Human attention gravitates towards novel and relevant stimuli and unconsciously filters out less prominent information, demonstrating salience bias, which in turn influences behavior, since human behavior is affected by what is attended to. Behavioral economists Tversky and Kahneman also suggest that the retrieval of instances from memory is influenced by their salience: witnessing or experiencing an event first-hand has a greater impact than learning about it in a less salient way, such as by reading about it, implying that memory is affected by salience.
Language
It is also relevant in language understanding and acquisition. Focusing on more salient phenomena allows people to detect language patterns and dialect variations more easily, making dialect categorization more efficient.
Social Behavior
Furthermore, social behaviors and interactions can also be influenced by perceptual salience. Changes in the perceptual salience of an individual heavily influences their social behavior and subjective experience of their social interactions, confirming a “social salience effect”. Social salience relates to how individuals perceive and respond to other people.
Behavioral Science
The connection between salience bias and other heuristics, like availability and representativeness, links it to the fields of behavioral science and behavioral economics. Salience bias is closely related to the availability heuristic in behavioral economics, being based on the influence of information vividness and visibility, such as recency or frequency, on judgements. For example, humans have bounded rationality, which refers to their limited ability to be rational in decision making, due to a limited capacity to process information and limited cognitive ability. Heuristics, such as availability, are employed to reduce the complexity of cognitive and social tasks or judgements, in order to decrease the cognitive load that results from bounded rationality. Despite the effectiveness of heuristics in doing so, they are limited by the systematic errors that occur, often the result of influencing biases such as salience. This can lead to misdirected or misinformed judgements, based on an overemphasis or overweighting of certain, more salient information. For example, the irrational behavior of procrastination occurs because costs in the present, like sacrificing free time, are disproportionately salient compared to future costs, because present costs are more vivid. More prominent information is more readily available than less salient information, and thus has a larger impact on decision making and behavior, resulting in errors in judgement.
Other fields such as philosophy, economics, finance, and political science have also investigated the effects of salience, such as in relation to taxes, where salience bias is applied to real-world behaviors, affecting systems like the economy. The existence of salience bias in humans can make behavior more predictable and this bias can be leveraged to influence behavior, such as through nudges.
Evaluation
Salience bias is one of many explanations for why humans deviate from rational decision making: by being overly focused on or biased to the most visible data and ignoring other potentially important information that could result in a more reasonable judgment. As a concept it is supported in psychological and economic literature, through its relationship with the availability heuristic outlined by Tversky and Kahneman, and its applicability to behaviors relevant to multiple disciplines, such as economics.
Despite this support, salience bias is limited for various reasons, one being that it is difficult to quantify, operationalize, and universally define. Salience is often confused with other terms in the literature; for example, one article states that salience, defined as a cognitive bias referring to "visibility and prominence", is often confused with terms like transparency and complexity in the public finance literature. This confusion undermines the importance of salience bias as an individual term, and therefore obscures the influence it has on tax-related behavior. Likewise, the APA definition of salience refers to motivational importance, which is based on subjective judgement, adding to the difficulty. According to psychologist S. Taylor, "some people are more salient than others", and these differences can further bias judgements.
Biased judgements have far-reaching consequences, beyond poor decision making, such as overgeneralizing and stereotyping. Studies into solo status or token integration demonstrate this. The token is an individual in a group different to the other members in that social environment, like a female in an all-male workplace. The token is viewed as symbolic of their social group, whereby judgments made about the solo individual predict judgements of their social group, which can result in inaccurate perceptions of that group and potential stereotyping. The distinctiveness of the individual in that environment “fosters a salience bias” and hence predisposes those generalized judgements, positive or negative.
In interaction design
Salience in design draws from the cognitive aspects of attention and applies them to the making of 2D and 3D objects. When designing computer and screen interfaces, salience helps draw attention to certain objects, like buttons, and signal affordance, so designers can utilize this aspect of perception to guide users.
There are several variables used to direct attention:
Color. Hue, saturation, and value can all be used to call attention to areas or objects within an interface, and de-emphasize others.
Size. Object size and proportion to surrounding elements creates visual hierarchy, both in interactive elements like buttons, but also within informative elements like text.
Position. An object's orientation or spatial arrangement in relation to the surrounding objects creates differentiation to invite action.
Accessibility
A consideration for salience in interaction design is accessibility. Many interfaces used today rely on visual salience for guiding user interaction, and people with disabilities like color-blindness may have trouble interacting with interfaces using color or contrast to create salience.
Aberrant salience hypothesis of schizophrenia
Kapur (2003) proposed that a hyperdopaminergic state, at a "brain" level of description, leads to an aberrant assignment of salience to the elements of one's experience, at a "mind" level. These aberrant salience attributions have been associated with altered activities in the mesolimbic system, including the striatum, the amygdala, the hippocampus, the parahippocampal gyrus, the anterior cingulate cortex and the insula. Dopamine mediates the conversion of the neural representation of an external stimulus from a neutral bit of information into an attractive or aversive entity, i.e. a salient event. Symptoms of schizophrenia may arise out of 'the aberrant assignment of salience to external objects and internal representations', and antipsychotic medications reduce positive symptoms by attenuating aberrant motivational salience via blockade of the dopamine D2 receptors (Kapur, 2003).
Alternative areas of investigation include supplementary motor areas, frontal eye fields and parietal eye fields. These areas of the brain are involved with calculating predictions and visual salience. Changing expectations on where to look restructures these areas of the brain. This cognitive repatterning can result in some of the symptoms found in such disorders.
Visual saliency modeling
In the domain of psychology, efforts have been made in modeling the mechanism of human attention, including the learning of prioritizing the different bottom-up and top-down influences.
In the domain of computer vision, efforts have been made in modeling the mechanism of human attention, especially the bottom-up attentional mechanism, including both spatial and temporal attention. Such a process is also called visual saliency detection.
Generally speaking, there are two kinds of models that mimic the bottom-up saliency mechanism. One is based on spatial contrast analysis: for example, a center-surround mechanism is used to define saliency across scales, which is inspired by the putative neural mechanism. The other is based on frequency-domain analysis: one such approach uses the amplitude spectrum to assign saliency to rarely occurring magnitudes, while Guo et al. use the phase spectrum instead.
Recently, Li et al. introduced a system that uses both the amplitude and the phase information.
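To make the frequency-domain family concrete, here is a minimal sketch in the spirit of the spectral-residual approach (the amplitude-spectrum method mentioned above). Parameter values such as the filter sizes are illustrative, not prescribed by the original papers.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(gray):
    """Bottom-up saliency map in the spirit of the spectral-residual
    method. `gray` is a 2-D float array holding a small grayscale image
    (the original recipe downscales the input to roughly 64x64 first)."""
    spectrum = np.fft.fft2(gray)
    log_amplitude = np.log(np.abs(spectrum) + 1e-8)  # avoid log(0)
    phase = np.angle(spectrum)

    # The "spectral residual" is the log-amplitude minus its local
    # average; it captures the statistically unexpected (salient) part.
    residual = log_amplitude - uniform_filter(log_amplitude, size=3)

    # Reconstruct with the original phase; squared magnitude is the raw map.
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2

    # Smooth and normalise for display.
    saliency = gaussian_filter(saliency, sigma=2.5)
    return saliency / saliency.max()
```

Bright regions of the returned map mark image locations whose spectral content is unusual relative to the rest of the scene, a simple proxy for bottom-up visual saliency.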
A key limitation in many such approaches is their computational complexity, leading to less than real-time performance even on modern computer hardware. Some recent work attempts to overcome these issues at the expense of saliency detection quality under some conditions. Other work suggests that saliency and associated speed-accuracy phenomena may be fundamental mechanisms determined during recognition through gradient descent, and need not be spatial in nature.
See also
References
External links
iLab at the University of Southern California
Scholarpedia article on visual saliency by Prof. Laurent Itti
Saliency map at Scholarpedia
Cognitive neuroscience
Neuropsychology
Attention
Computer vision | Salience (neuroscience) | [
"Engineering"
] | 3,039 | [
"Artificial intelligence engineering",
"Packaging machinery",
"Computer vision"
] |
4,217,717 | https://en.wikipedia.org/wiki/Moore%20v.%20Regents%20of%20the%20University%20of%20California | Moore v. Regents of the University of California was a landmark Supreme Court of California decision. Filed on July 9, 1990, it dealt with the issue of property rights to one's own cells taken in samples by doctors or researchers.
In 1976, John Moore was treated for hairy cell leukemia by physician David Golde, a cancer researcher at the UCLA Medical Center. Moore's cancer cells were later developed into a cell line that was commercialized by Golde and UCLA. The California Supreme Court ruled that a hospital patient's discarded blood and tissue samples are not his personal property and that individuals do not have rights to a share in the profits earned from commercial products or research derived from their cells. Following this decision, most U.S. courts have ruled against family members who sue researchers and universities over the "improper commercialization" of their dead family member's body parts.
Background
John Moore first visited UCLA Medical Center on October 5, 1976, after he was diagnosed with hairy cell leukemia. Physician and cancer researcher David Golde took samples of Moore's blood, bone marrow, and other bodily fluids to confirm the diagnosis and recommended a splenectomy because of the potentially fatal amount of swelling in Moore's spleen. Moore signed a written consent form, authorizing the procedure. It said the hospital could "dispose of any severed tissue or member by cremation", and his spleen was removed by surgeons, who were not named as defendants, at UCLA Medical Center.
Moore's blood profile returned to normal after only a few days, and further examination of his spleen led Golde to discover that Moore's blood cells were unique in that they produced a protein that stimulated the growth of white blood cells, which help to protect the body from infections.
Moore moved to Seattle, Washington, after his surgery and returned to the UCLA Medical Center for follow-up visits with Golde several times, between 1976 and 1983. After a few years of traveling back to Los Angeles to see Golde and to have samples taken of bone marrow, blood, and semen, Moore asked about transferring his care to a doctor closer to home. In response, Golde offered to cover the expense of Moore's airfare and accommodations in Los Angeles, and Moore agreed to continue.
In 1983, Moore became suspicious about a new consent form he was asked to sign that said, "I (do, do not) voluntarily grant to the University of California all rights I, or my heirs, may have in any cell line or any other potential product which might be developed from the blood and/or bone marrow obtained from me". Moore initially signed the consent but refused at later visits and eventually gave the form to an attorney, who then discovered a patent on Moore's cell line, dubbed "Mo", which had been issued to the regents of UCLA in 1984. It named Golde and his research assistant as the inventors. Under an agreement with Genetics Institute, Golde became a paid consultant and acquired the rights to 75,000 shares of common stock in the patent. Genetics Institute also agreed to pay Golde and the regents at least $330,000 over three years, in exchange for exclusive access to the materials and research performed on the cell line and products derived from it.
Lawsuit
After learning of the patent, Moore filed a lawsuit for a share in the potential profits from products or research that had been derived from his cell line, without his knowledge or consent. Moore's lawsuit alleged that Golde had been aware of the potential for financial benefit when medical consent was obtained, but he had concealed that from Moore. The claim was rejected by the Los Angeles Superior Court, but in 1988, the California Court of Appeal ruled that blood and tissue samples were one's own personal property and that patients could have a right to share in profits derived from them.
According to the Los Angeles Times, "Moore later negotiated what he called a 'token' settlement with UCLA that covered his legal fees based on the fact that he wasn't informed and hadn't agreed to the research."
Parties to the dispute
Moore brought suit against defendants Dr. David W. Golde, a physician who attended Moore at UCLA Medical Center; the Regents of the University of California, who own and operate the university; Shirley G. Quan, a researcher employed by the Regents; Genetics Institute, Inc.; and Sandoz Pharmaceuticals Corporation and related entities.
Decision
The court found that Moore had no property rights to his discarded cells or to any profits made from them. However, the research physician had an obligation to reveal his financial interest in the materials that were harvested from Moore, who could thus bring a claim for any injury that he sustained by the physician's failure to disclose his interests.
The opinion, written by Justice Edward Panelli, was joined by three of the seven judges of the Supreme Court of California.
The majority opinion first looked at Moore's claim of property interests under existing law. The court first rejected the argument that a person has an absolute right to the unique products of their body, because his products were not unique: the cells are "no more unique to Moore than the number of vertebrae in the spine or the chemical formula of hemoglobin".
The court then rejected the argument that his spleen should be protected as property in order to protect Moore's privacy and dignity. The court held that his interests were already protected by informed consent, and took existing laws requiring the destruction of excised human organs as some indication that the legislature intended to prevent patients from possessing their extracted organs. Finally, the property at issue may not have been Moore's cells but the cell line created from his cells.
The court then looked at the policy behind having Moore's cells considered property. Because conversion of property is a strict liability tort, the court feared that extending property rights to include organs would have a chilling effect on medical research. Laboratories doing research receive a large volume of medical samples and cannot be expected to know or discover whether somewhere down the line their samples were illegally converted. Furthermore, Moore's interests in his bodily integrity and privacy are protected by the requirement of informed consent, which must also include disclosure of the physician's economic interests.
Justice Arabian wrote a concurring opinion, stating that the deep philosophical, moral and religious issues presented by the case could not be decided by the court.
Justice Broussard concurred in part and dissented in part.
Justice Mosk dissented, stating that Moore could have been denied some property rights and given others. At the very least, Moore had the "right to do with his own tissue what the defendants did with it". That is, as soon as the tissue was removed, Moore had at least the right to choose to sell it to a laboratory or to have it destroyed. Thus, there would be no necessity to hold labs strictly liable for conversion when property rights could be broken up, to allow Moore to extract a significant portion of the economic value created by his tissue. Furthermore, to prove damages from informed consent, Moore would have to have proved that if he were properly informed, neither he, nor a reasonable person would have consented to the procedure. Thus, Moore's chances of proving damages through informed consent were slim. Also, he could not consent to the procedure but reserve the right to sell his organs. Finally, Moore could sue only his doctor, nobody else, for failing to adequately inform him. Thus, he was unlikely to win, could not extract the economic value of his tissue even if he had refused consent, and could not sue the parties that might be exploiting him.
Aftermath
Moore's cancer went into remission from 1976 to 1996 following the removal of his spleen. He died from the cancer in October 2001.
The Michael Crichton book Next, while specifically mentioning the case, extrapolates its possible legal ramifications with a patient called Frank Burnet. Further, the 2010 book The Immortal Life of Henrietta Lacks by Rebecca Skloot and its 2017 film adaptation discuss this case and its precedent with regards to the Lacks Family.
See also
HeLa
References
Sources
External links
Full text opinion in HTML format - courtesy of California Continuing Education of the Bar
Full text opinion in PDF format (archived)
Case brief by LexisNexis
Supreme Court of California case law
Bioethics
1990 in United States case law
University of California litigation
1990 in California
United States property case law | Moore v. Regents of the University of California | [
"Technology"
] | 1,701 | [
"Bioethics",
"Ethics of science and technology"
] |
4,217,791 | https://en.wikipedia.org/wiki/Koilocyte | A koilocyte is a squamous epithelial cell that has undergone a number of structural changes, which occur as a result of infection of the cell by human papillomavirus (HPV). Identification of these cells by pathologists can be useful in diagnosing various HPV-associated lesions.
Koilocytosis
Koilocytosis or koilocytic atypia or koilocytotic atypia are terms used in histology and cytology to describe the presence of koilocytes in a specimen.
Koilocytes may have the following cellular changes:
Nuclear enlargement (two to three times normal size).
Irregularity of the nuclear membrane contour, creating a wrinkled or raisinoid appearance.
A darker than normal staining pattern in the nucleus, known as hyperchromasia.
A clear area around the nucleus, known as a perinuclear halo or perinuclear cytoplasmic vacuolization.
Collectively, these types of changes are called a cytopathic effect; various types of cytopathic effect can be seen in many different cell types infected by many different viruses. Infection of cells with HPV causes the specific cytopathic effects seen in koilocytes.
Pathogenesis
The atypical features seen in cells displaying koilocytosis result from the action of the E5 and E6 oncoproteins produced by HPV. These proteins break down keratin in HPV-infected cells, resulting in the perinuclear halo and nuclear enlargement typical of koilocytes. The E6 oncoprotein, along with E7, is also responsible for the dysregulation of the cell cycle that results in squamous cell dysplasia. The E6 and E7 oncoproteins do this by binding and inhibiting the tumor suppressor genes p53 and RB, respectively. This promotes progression of cells through the cell cycle without appropriate repair of DNA damage, resulting in dysplasia. Due to the ability of HPV to cause cellular dysplasia, koilocytes are found in a number of potentially precancerous lesions.
Visualization of koilocytes
Koilocytes can be visualized microscopically when tissue is collected, fixed, and stained. Though koilocytes can be found in lesions in a number of locations, cervical cytology samples, commonly known as Pap smears, frequently contain koilocytes. In order to visualize koilocytes collected from the cervix, the tissue is stained with the Papanicolaou stain. Another way koilocytes can be visualized is by fixation of tissue with formalin and staining with hematoxylin and eosin, commonly known as H&E. These stains give the cytoplasm and nuclei of cells characteristic colors and allows for visualization of the nuclear enlargement and irregularity, hyperchromasia, and perinuclear halo that are typical of koilocytes.
Lesions containing koilocytes
Koilocytes may be found in potentially precancerous cervical, oral and anal lesions.
Cervical lesions
Atypical squamous cells of undetermined significance (ASC-US)
When examining cytologic specimens, a diagnosis of ASC-US is given if squamous cells are suspicious for low-grade squamous intraepithelial lesion (LSIL) but do not fulfill the criteria. This may be due to limitations in the quality of the specimen, or because the abnormalities in the cells are milder than those seen in LSIL. Cells in this category display koilocyte-like changes such as vacuolization, but not enough changes to definitively diagnose LSIL. A diagnosis of ASC-US warrants further follow-up to better characterize the extent of the abnormal cells.
Low-grade squamous intraepithelial lesion (LSIL)
In LSIL of the cervix, definitive koilocytes are present. In addition, squamous cells commonly display binucleation and mitoses are present, signifying increased cellular division. However, these changes are primarily limited to upper cell layers in the epithelium, no mitoses are found higher than the lower one third of epithelium, and the basal layer of cells remains a discrete layer. This differentiates this lesion from high-grade squamous intraepithelial lesion (HSIL) of the cervix.
Oral lesions
Verruca vulgaris
Verruca vulgaris, or common warts, may arise in the oral mucosa. These lesions are associated with HPV subtypes 1, 6, 11, and 57. Histopathology of these lesions displays koilocytes in the epithelium.
Oropharyngeal cancer
Approximately 50 percent of oropharyngeal cancers are associated with HPV infection. Koilocytosis is the most common cytopathic effect present in HPV-related oropharyngeal cancers. However, the current standard of care for these tumors includes verification of HPV status using methodologies other than the histopathologic presence or absence of koilocytes alone. These methodologies include polymerase chain reaction (PCR), in situ hybridization (ISH), and immunohistochemistry (IHC).
Anal lesions
Anal intraepithelial neoplasia
Histopathologic changes seen in LSIL of the cervix can also be seen in anal epithelium. Koilocytes are characteristic of LSIL in the anus. In contrast to LSIL, HSIL in the anus consists of abnormal basaloid cells replacing more than half of the anal epithelium.
Interpretation
These changes occur in the presence of human papillomavirus and can occasionally lead to cervical intraepithelial neoplasia; if left untreated, some lesions may eventually progress to malignant cancer.
References
Papillomavirus
Epithelial cells
Cervical cancer | Koilocyte | [
"Biology"
] | 1,254 | [
"Viruses",
"Papillomavirus"
] |
4,218,036 | https://en.wikipedia.org/wiki/Tebbe%27s%20reagent | Tebbe's reagent is the organometallic compound with the formula (C5H5)2TiCH2ClAl(CH3)2. It is used in the methylidenation of carbonyl compounds; that is, it converts organic compounds containing the R2C=O group into the related R2C=CH2 derivative. It is a red solid that is pyrophoric in air, and thus is typically handled with air-free techniques. It was originally synthesized by Fred Tebbe at DuPont Central Research.
Tebbe's reagent contains two tetrahedral metal centers linked by a pair of bridging ligands. The titanium has two cyclopentadienyl (C5H5, or Cp) rings and the aluminium has two methyl groups. The titanium and aluminium atoms are linked together by both a methylene bridge (-CH2-) and a chlorine atom in a nearly square-planar (Ti–CH2–Al–Cl) geometry. The Tebbe reagent was the first reported compound in which a methylene bridge connects a transition metal (Ti) and a main group metal (Al).
Preparation
The Tebbe reagent is synthesized from titanocene dichloride and trimethylaluminium in toluene solution.
Cp2TiCl2 + 2 Al(CH3)3 → CH4 + Cp2TiCH2AlCl(CH3)2 + Al(CH3)2Cl
After about 3 days, the product is obtained after recrystallization to remove Al(CH3)2Cl. Although syntheses using the isolated Tebbe reagent give a cleaner product, successful procedures using the reagent "in situ" have been reported. Instead of isolating the Tebbe reagent, the solution is merely cooled in an ice bath or dry ice bath before adding the starting material.
An alternative but less convenient synthesis entails the use of dimethyltitanocene (Petasis reagent):
Cp2Ti(CH3)2 + Al(CH3)2Cl → Cp2TiCH2AlCl(CH3)2 + CH4
One drawback to this method, aside from requiring Cp2Ti(CH3)2, is the difficulty of separating product from unreacted starting reagent.
Reaction mechanism
Tebbe's reagent itself does not react with carbonyl compounds, but must first be treated with a mild Lewis base, such as pyridine, which generates the active Schrock carbene.
Also analogous to the Wittig reagent, the reactivity appears to be driven by the high oxophilicity of Ti(IV). The Schrock carbene (1) reacts with carbonyl compounds (2) to give a postulated oxatitanacyclobutane intermediate (3). This cyclic intermediate has never been directly isolated, presumably because it breaks down immediately to produce the desired alkene (5).
Scope
The Tebbe reagent is used in organic synthesis for carbonyl methylidenation.
This conversion can also be effected using the Wittig reaction, although the Tebbe reagent is more efficient especially for sterically encumbered carbonyls. Furthermore, the Tebbe reagent is less basic than the Wittig reagent and does not give the β-elimination products.
Methylidenation reactions also occur for aldehydes as well as esters, lactones and amides. The Tebbe reagent converts esters and lactones to enol ethers and amides to enamines. In compounds containing both ketone and ester groups, the ketone selectively reacts in the presence of one equivalent of the Tebbe reagent.
The Tebbe reagent methylidenates carbonyls without racemizing a chiral α carbon. For this reason, the Tebbe reagent has found applications in reactions of sugars where maintenance of stereochemistry can be critical.
The Tebbe reagent reacts with acid chlorides to form titanium enolates by replacing Cl−.
Modifications
It is possible to modify Tebbe's reagent through the use of different ligands. This can alter the reactivity of the complex, allowing for a broader range of reactions. For example, cyclopropanation can be achieved using a chlorinated analogue.
See also
Related organotitanium reagents and reactions
Kulinkovich reaction
Petasis reagent
Lombardo reagent
McMurry reaction
Related methylidenation reactions
Nysted reagent
Peterson olefination
Wittig reaction
Kauffmann olefination
References
Reagents for organic chemistry
Titanocenes
Organoaluminium compounds
Chloro complexes
Titanium(IV) compounds | Tebbe's reagent | [
"Chemistry"
] | 995 | [
"Reagents for organic chemistry"
] |
4,218,673 | https://en.wikipedia.org/wiki/Spatial%20relation | A spatial relation specifies how some object is located in space in relation to some reference object. When the reference object is much bigger than the object to locate, the latter is often represented by a point. The reference object is often represented by a bounding box.
In anatomy, it might be the case that a spatial relation is not fully applicable. Thus, a degree of applicability is defined which specifies, from 0 to 100%, how strongly a spatial relation holds. Often researchers concentrate on defining the applicability function for various spatial relations.
In spatial databases and geospatial topology the spatial relations are used for spatial analysis and constraint specifications.
Spatial relations play a central role in cognitive development (for walking, catching objects, and understanding object behaviour), in robotic natural-features navigation, and in many other areas.
Commonly used types of spatial relations are: topological, directional and distance relations.
Topological relations
The DE-9IM model expresses important space relations which are invariant to rotation, translation and scaling transformations.
For any two spatial objects a and b, that can be points, lines and/or polygonal areas, there are 9 relations derived from DE-9IM.
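As a concrete illustration (a sketch, not part of the original article), the DE-9IM matrix and its named predicates are implemented in GIS libraries such as the Python package shapely; the geometries and variable names below are hypothetical:

```python
# A minimal sketch using the Python GIS library "shapely" (an assumption:
# the article does not name any library; geometries here are hypothetical).
from shapely.geometry import Point, Polygon

a = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])  # reference area
b = Point(2, 2)                                # object inside a
c = Point(10, 10)                              # object outside a

# relate() returns the raw DE-9IM intersection matrix as a 9-character
# string over {0, 1, 2, F}, one character per interior/boundary/exterior pair.
print(a.relate(b))

# Named predicates are derived from patterns in that matrix:
print(a.contains(b))   # True:  b lies within a
print(b.within(a))     # True:  converse of contains
print(a.disjoint(c))   # True:  a and c share no points
print(a.touches(c))    # False: no boundary-only contact
```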
Directional relations
Directional relations can again be differentiated into external directional relations and internal directional relations. An internal directional relation specifies where an object is located inside the reference object, while an external relation specifies where the object is located outside of the reference object.
Examples for internal directional relations: left; on the back; athwart, abaft
Examples for external directional relations: on the right of; behind; in front of, abeam, astern
Distance relations
Distance relations specify how far the object is away from the reference object.
Examples are: at; nearby; in the vicinity; far away
Relations by class
Reference objects represented by a bounding box or another kind of "spatial envelope" that encloses its borders can be denoted with the maximum number of dimensions of this envelope: '0' for punctual objects, '1' for linear objects, '2' for planar objects, '3' for volumetric objects. So, any object, in a 2D modeling, can be classified as point, line or area according to its delimitation. Then, a type of spatial relation can be expressed by the class of the objects that participate in the relation:
point-point relations: ...
point-line relations:
point-area relations:
line-line relations:
line-area relations:
area-area relations:
More complex modeling schemas can represent an object as a composition of simple sub-objects. Examples: represent in an astronomical map a star by a point and a binary star by two points; represent in a geographical map a river with a line, for its source stream, and with a strip-area, for the rest of the river. These schemas can use the above classes, uniform composition classes (multi-point, multi-line and multi-area) and heterogeneous composition (points+lines as "objects of dimension 1", points+lines+areas as "objects of dimension 2").
Two internal components of a complex object can express (the above) binary relations between them, and ternary relations, using the whole object as a frame of reference. Some relations can be expressed by an abstract component, such the center of mass of the binary star, or a center line of the river.
Temporal references
For human thinking, spatial relations include qualities like size, distance, volume, order, and also time:
Stockdale and Possin discuss the many ways in which people with difficulty establishing spatial and temporal relationships can face problems in ordinary situations.
See also
Anatomical terms of location
Dimensionally Extended nine-Intersection Model (DE-9IM)
Water-level task
Allen's interval algebra (temporal analog)
Commonsense reasoning
References
Cognitive science
Space | Spatial relation | [
"Physics",
"Mathematics"
] | 782 | [
"Spacetime",
"Space",
"Geometry"
] |
4,218,742 | https://en.wikipedia.org/wiki/Q10%20%28temperature%20coefficient%29 | {{DISPLAYTITLE:Q10 (temperature coefficient)}}
The Q10 temperature coefficient is a measure of the temperature sensitivity of a process, based on the rates of chemical reactions: it is the factor by which the rate changes for every 10 °C rise in temperature.
The Q10 is calculated as:

$$Q_{10} = \left(\frac{R_2}{R_1}\right)^{10\,^{\circ}\mathrm{C}/(T_2 - T_1)}$$

where:
R1 and R2 are the rates measured at temperatures T1 and T2, respectively;
T1 and T2 are the temperatures, in degrees Celsius or kelvin.

Rewriting this equation, the assumption behind Q10 is that the reaction rate R depends exponentially on temperature:

$$R_2 = R_1\, Q_{10}^{(T_2 - T_1)/(10\,^{\circ}\mathrm{C})}$$
Q10 is a unitless quantity, as it is the factor by which a rate changes, and is a useful way to express the temperature dependence of a process.
For most biological systems, the Q10 value is ~ 2 to 3.
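A minimal sketch of this calculation (illustrative numbers only, not taken from the article) implements both the Q10 formula and its exponential rewriting:

```python
# Q10 from two rate measurements, and the inverse prediction.
def q10(r1, r2, t1, t2):
    """Temperature coefficient from rates r1, r2 measured at t1, t2 (deg C or K)."""
    return (r2 / r1) ** (10.0 / (t2 - t1))

def rate_at(r1, t1, t2, q):
    """Predicted rate at t2, given rate r1 at t1 and a known Q10 of q."""
    return r1 * q ** ((t2 - t1) / 10.0)

# A process that doubles its rate over a 10-degree rise has Q10 = 2:
print(q10(r1=1.0, r2=2.0, t1=20.0, t2=30.0))      # 2.0
# With Q10 = 2, a 5-degree rise multiplies the rate by sqrt(2):
print(rate_at(r1=1.0, t1=20.0, t2=25.0, q=2.0))   # ~1.414
```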
In muscle performance
The temperature of a muscle has a significant effect on the velocity and power of the muscle contraction, with performance generally declining with decreasing temperatures and increasing with rising temperatures. The Q10 coefficient represents the degree of temperature dependence a muscle exhibits as measured by contraction rates. A Q10 of 1.0 indicates thermal independence of a muscle whereas an increasing Q10 value indicates increasing thermal dependence. Values less than 1.0 indicate a negative or inverse thermal dependence, i.e., a decrease in muscle performance as temperature increases.
Q10 values for biological processes vary with temperature. Decreasing muscle temperature results in a substantial decline of muscle performance such that a 10 degree Celsius temperature decrease results in at least a 50% decline in muscle performance. Persons who have fallen into icy water may gradually lose the ability to swim or grasp safety lines due to this effect, although other effects such as atrial fibrillation are a more immediate cause of drowning deaths. At some minimum temperature biological systems do not function at all, but performance increases with rising temperature (Q10 of 2-4) to a maximum performance level and thermal independence (Q10 of 1.0-1.5). With continued increase in temperature, performance decreases rapidly (Q10 of 0.2-0.8) up to a maximum temperature at which all biological function again ceases.
Within vertebrates, different skeletal muscle activity has correspondingly different thermal dependencies. The rate of muscle twitch contractions and relaxations are thermally dependent (Q10 of 2.0-2.5), whereas maximum contraction, e.g., tetanic contraction, is thermally independent.
Muscles of some ectothermic species, e.g., sharks, show less thermal dependence at lower temperatures than those of endothermic species.
See also
Arrhenius equation
Arrhenius plot
Isotonic (exercise physiology)
Isometric exercise
Skeletal striated muscle
Tetanic contraction
References
Ecological metrics
Chemical kinetics | Q10 (temperature coefficient) | [
"Chemistry",
"Mathematics"
] | 529 | [
"Chemical reaction engineering",
"Metrics",
"Ecological metrics",
"Quantity",
"Chemical kinetics"
] |
4,218,888 | https://en.wikipedia.org/wiki/Zygotene | Zygotene (from Greek, "paired threads") is the second stage of prophase I during meiosis, the specialized cell division that reduces the chromosome number by half to produce haploid gametes. It follows the leptotene stage and is followed by the pachytene stage.
Synapsis completion
The key event during zygotene is the completion of synapsis between homologous chromosomes. Synapsis began during the previous leptotene stage, with the homologous chromosomes starting to pair together and associate lengthwise, facilitated by the synaptonemal complex protein structure.
In zygotene, the synaptonemal complex forms more extensively between the paired chromosomes. It zips the homologs together along their entire length, with the lateral elements of the complex associated with each chromosome and the central region holding them together. This allows intimate pairing and genetic recombination events.
Chromosome condensation
The chromosomes continue condensing during zygotene into distinct threadlike structures. Each chromosome now appears thicker as the sister chromatids are closely aligned.
Recombination nodules
As synapsis completes, proteinaceous recombination nodules begin to appear along the synaptonemal complex between the homologous chromosomes. These represent sites of genetic crossover events, where exchange of chromosomal segments occurs between the non-sister chromatids.
Key recombination proteins like MLH1/3 and MSH4/5 mark the sites of crossover formation. The number and positioning of these crossovers is regulated to ensure at least one crossover per chromosome arm for proper segregation in later meiotic stages.
Transition to pachytene
Once synapsis and crossing over are complete, the cell transitions to the pachytene stage of prophase I. Pachytene features fully condensed and paired chromosomes along their length, with distinctly visible recombination nodules.
Importance
The zygotene stage is crucial for genetic recombination and proper chromosome segregation in meiosis. Defects in synapsis, recombination, or crossover regulation can lead to aneuploidy and chromosomal abnormalities in gametes.
References
Cellular processes
Meiosis | Zygotene | [
"Biology"
] | 465 | [
"Molecular genetics",
"Meiosis",
"Cellular processes"
] |
4,218,901 | https://en.wikipedia.org/wiki/Pachytene | The pachytene stage (/ˈpækɪtiːn/ PAK-i-teen; from Greek words meaning "thick threads"), also known as pachynema, is the third stage of prophase I during meiosis, the specialized cell division that reduces chromosome number by half to produce haploid gametes. It follows the zygotene stage and is followed by the diplotene stage.
Synapsed chromosomes
During pachytene, the homologous chromosomes are fully synapsed along their lengths by the completed synaptonemal complex protein structure formed in the previous stages. This holds the homologs closely paired, allowing intimate DNA interactions.
Chromosome condensation
The chromosomes reach their highest level of condensation during pachytene. Each chromosome consists of two closely associated sister chromatids along their entire length. The chromosomes appear as distinct, well-defined threadlike structures under the microscope. Sex chromosomes, however, are not wholly identical, and only exchange information over a small region of homology called the pseudoautosomal region.
Recombination nodules
Multiple recombination nodules are distinctly visible along the paired homologous chromosomes. These proteinaceous structures mark the sites of genetic crossover events between the non-sister chromatids that were initiated during zygotene.
Proteins like MLH1 and MLH3 stabilize the crossover events, ensuring at least one obligatory crossover per chromosome arm. This gives each chromosome a minimum of two crossover sites. Additional crossovers are also possible but regulated.
DNA repair
During pachytene, any unresolved DNA double-strand breaks from previous recombination events are repaired. Mismatch repair proteins help correct any errors in base pairing between the homologs.
Treatment of male mice during meiosis with gamma radiation causes DNA damage. Homologous recombination is the principal mechanism of DNA repair acting during meiosis. From the leptotene to early pachytene stages of meiosis, exogenous damage triggered the massive presence of gamma-H2AX (which forms when DNA double-strand breaks appear) throughout the nucleus, and this was associated with DNA repair mediated by the homologous recombination components, the DMC1 and RAD51 proteins.
The meiotic sex checkpoint
Pachytene is also a stage where a critical checkpoint operates to monitor proper chromosome synapsis and recombination. Errors detected at this stage can arrest the meiotic cell cycle and trigger apoptosis (programmed cell death) of the defective cell.
Transition to diplotene
Once crossover events are stabilized, the synaptonemal complex disassembles and chromosomes begin to gradually desynapse as the cell transitions into the diplotene stage.
Importance
The pachytene stage is essential for the extensive genetic recombination and accurate chromosome segregation in meiosis. Defects at this stage can lead to aneuploidy and nondisjunction.
References
Meiosis
Cellular processes | Pachytene | [
"Biology"
] | 628 | [
"Molecular genetics",
"Cellular processes",
"Meiosis"
] |
4,219,037 | https://en.wikipedia.org/wiki/Starred%20transform | In applied mathematics, the starred transform, or star transform, is a discrete-time variation of the Laplace transform, so-named because of the asterisk or "star" in the customary notation of the sampled signals.
The transform $X^{*}(s)$ is an operator applied to a continuous-time function $x(t)$, which is transformed to a function $X^{*}(s)$ in the following manner:

$$X^{*}(s) = \mathcal{L}\left[x(t)\cdot \delta_T(t)\right] = \mathcal{L}\left[x^{*}(t)\right] = \sum_{n=0}^{\infty} x(nT)\, e^{-nTs},$$

where $\delta_T(t) = \sum_{n=0}^{\infty}\delta(t - nT)$ is a Dirac comb function, with period of time T.
The starred transform is a convenient mathematical abstraction that represents the Laplace transform of an impulse-sampled function $x^{*}(t) = x(t)\,\delta_T(t)$, which is the output of an ideal sampler, whose input is a continuous function, $x(t)$.
The starred transform is similar to the Z transform, with a simple change of variables, where the starred transform is explicitly declared in terms of the sampling period (T), while the Z transform is performed on a discrete signal and is independent of the sampling period. This makes the starred transform a de-normalized version of the one-sided Z-transform, as it restores the dependence on sampling parameter T.
Relation to Laplace transform
Since $x^{*}(t) = x(t)\cdot\delta_T(t)$, we have $X^{*}(s) = \mathcal{L}[x^{*}(t)]$, where:

$$X(s) = \mathcal{L}[x(t)], \qquad \Delta_T(s) = \mathcal{L}[\delta_T(t)] = \frac{1}{1 - e^{-Ts}}.$$

Then per the convolution theorem, the starred transform is equivalent to the complex convolution of $X(s)$ and $\Delta_T(s)$, hence:

$$X^{*}(s) = \frac{1}{2\pi j}\int_{c-j\infty}^{c+j\infty} X(p)\,\frac{1}{1 - e^{-T(s-p)}}\,dp.$$

This line integration is equivalent to integration in the positive sense along a closed contour formed by such a line and an infinite semicircle that encloses the poles of $X(p)$ in the left half-plane of p. The result of such an integration (per the residue theorem) would be:

$$X^{*}(s) = \sum_{\lambda = \text{poles of } X(s)} \operatorname{Res}_{p=\lambda}\left[X(p)\,\frac{1}{1 - e^{-T(s-p)}}\right].$$

Alternatively, the aforementioned line integration is equivalent to integration in the negative sense along a closed contour formed by such a line and an infinite semicircle that encloses the infinite poles of $\frac{1}{1 - e^{-T(s-p)}}$ in the right half-plane of p. The result of such an integration would be:

$$X^{*}(s) = \frac{1}{T}\sum_{k=-\infty}^{\infty} X(s - jk\omega_s) + \frac{x(0)}{2}, \qquad \omega_s = \frac{2\pi}{T}.$$
Relation to Z transform
Given a Z-transform, X(z), the corresponding starred transform is a simple substitution:

$$X^{*}(s) = X(z)\Big|_{z = e^{sT}}$$

This substitution restores the dependence on T.
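As a numerical sanity check of this substitution (a sketch with assumed values, not part of the original text), the truncated defining series of the starred transform can be compared against the closed form obtained from the Z-transform of an exponentially sampled signal:

```python
# For x(t) = exp(-a t), the truncated series  X*(s) ~ sum_n x(nT) e^{-s n T}
# should match the closed form 1 / (1 - e^{-(s + a) T}), which is exactly
# the Z-transform of the sampled geometric sequence evaluated at z = e^{sT}.
import cmath

a, T = 1.0, 0.1
s = 0.5 + 2.0j                 # any s in the region of convergence, Re(s) > -a

series = sum(cmath.exp(-(s + a) * n * T) for n in range(5000))
closed = 1.0 / (1.0 - cmath.exp(-(s + a) * T))

print(abs(series - closed) < 1e-9)   # True
```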
The substitution is interchangeable; that is, X(z) can be recovered from $X^{*}(s)$ by substituting $z$ back for $e^{sT}$.
Properties of the starred transform
Property 1: $X^{*}(s)$ is periodic in $s$ with period $j\omega_s$, where $\omega_s = 2\pi/T$; that is, $X^{*}(s + jm\omega_s) = X^{*}(s)$ for any integer m.
Property 2: If $X(s)$ has a pole at $s = s_1$, then $X^{*}(s)$ must have poles at $s = s_1 + jm\omega_s$, where $m = 0, \pm 1, \pm 2, \ldots$
Citations
References
Phillips and Nagle, "Digital Control System Analysis and Design", 3rd Edition, Prentice Hall, 1995.
Transforms | Starred transform | [
"Mathematics"
] | 452 | [
"Mathematical objects",
"Functions and mappings",
"Mathematical relations",
"Transforms"
] |
4,219,048 | https://en.wikipedia.org/wiki/Tactical%20Airborne%20Reconnaissance%20Pod%20System | The Tactical Airborne Reconnaissance Pod System (TARPS) was a large and sophisticated camera pod carried by the Grumman F-14 Tomcat. It contains three camera bays with different type cameras which are pointed down at passing terrain. It was originally designed to provide an interim aerial reconnaissance capability until a dedicated F/A-18 Hornet reconnaissance version could be fielded. TARPS was pressed into service upon arrival in the fleet in 1981, and remained in use up to the end of Tomcat service in 2006.
TARPS pod and Tomcat interoperability
The pod itself is long and weighs , and is carried on the starboard side of the tunnel between the engine nacelles. The F-14A and F-14B Tomcats had to be specially modified to carry the TARPS pod, which involved routing of control wiring from the rear cockpit and environmental control system (ECS) connections to the pod. Standard allowance was at least three TARPS aircraft per designated squadron (only one per airwing). All F-14Ds were modified to be TARPS capable, which allowed greater flexibility in scheduling aircraft and conducting maintenance. A control panel is fitted to the rear cockpit and the RIO has total control over pod operation except for a pilot-controlled button that can activate cameras as selected by the RIO (but seldom used).
Camera bays
Each of the camera bays was designed to carry different cameras for specific tasks on reconnaissance missions. The forward bay held a 150 mm (6") focal length serial frame camera (KS-87) on a two-position rotating mount which could direct the camera's view straight down or be moved to a 45° angle for a forward oblique view. The second bay or middle bay of the TARPS pod originally held the 230 mm (9") focal length KA-99 panoramic camera which rotated from horizon to horizon and could be used for side oblique photography. Each image in the wide field of view position produced a 91 cm (36") negative. The KA-99 could carry up to of film that could be exhausted if not managed carefully by the RIO. The third camera bay held an infrared line scanner camera used for night missions or daylight mission traces. All TARPS cameras were monitored by a device called a CIPDU in the tail cone section of the pod that provided camera status to maintenance personnel and, during flight, recorded aircraft position data onto the camera imagery for intel analysis. An electrical umbilical cord connected the pod to the control panel that was positioned on the left side of the rear cockpit. A hose from the ECS of the F-14 cooled/heated the internals of the pod in flight and kept the appropriate humidity levels constant. In 1987 VF-111 was the first squadron to deploy with a KS-153 camera system in bay two. The KS-153 used a 610 mm (24") lens and was used for stand-off photography in the Persian Gulf. During Operation Desert Shield the KS-153 was used to monitor the no-fly zones in Iraq.
Tomcat TARPS squadrons were staffed with Navy photographer's mates and Avionics Technicians that maintained the cameras and worked with the carrier to process the imagery. TARPS squadrons also included an extra Intelligence officer and Intelligence Specialists to help plan TARPS missions and exploit the imagery afterwards. The TARPS shop maintained the cameras and removed or loaded the pod when and if needed. Wet film processing was conducted in a processing room connected to the ship's Intelligence Center (CVIC), where the Intelligence Specialists had a dedicated space with a light table for analyzing the hundreds of feet of film and exploiting the data.
TARPS missions
The TARPS pod provided capability for the Tomcat to conduct a variety of reconnaissance tasking including:
mapping (the Tomcat software was also upgraded to assist with this demanding and painstaking mission)
pre and post strike bomb damage assessment
standoff oblique photography
maritime ship surveillance
Upgrades
Although TARPS was originally planned to be an interim solution, combat experience with VF-32 over Lebanon in 1983 resulted in upgrades to the TARPS camera suite and to the aircraft's survivability. Since the KA-99 camera was designed for low-to-medium altitude missions, the Tomcats were forced to fly as low as over active anti-aircraft artillery (AAA) and surface-to-air missile (SAM) sites in the Bekaa Valley, again by VF-32, resulting in 6th Fleet requesting higher altitude cameras such as had been available in the dedicated reconnaissance platforms such as the RA-5C, RF-8 and RF-4. As a result, the first set of four KA-93 910 mm (36") focal length Long Range Optic (LOROP) cameras were shipped to Naval Air Station Oceana in the spring of 1984 for deployment with the next Tomcat TARPS squadron. VF-102 conducted an operational evaluation (OPEVAL) of the cameras en route to the MED in expectation of flying them over Lebanon, but the crisis had cooled down by then. The cameras then became forward deployed assets and cross-decked between TARPS squadrons. Later, KS-153 LOROP cameras were also procured and also used as forward deployed assets. The KS-87 camera bay was eventually upgraded with a digital sensor so that imagery could be captured onto a PCMCIA Type II card for debrief, but could also be transmitted as desired by the RIO.
The TARPS mission first exposed the Tomcat to the AAA and SAM threat on a routine basis and spurred upgrades not only to the cameras, but to the aircraft itself. The existing Radar Homing and Warning (RHAW) gear, the ALR-45/50, was vintage Vietnam era and could not keep up with the latest threats of the SA-5 and SA-6 missiles, both present in several threat countries in the Mediterranean. As such, TARPS Tomcats were provided with an Expanded Chaff Adapter (ECA) rail that provided 120 extra expendable rounds and another rail that mounted an ALQ-167 "Bullwinkle" jammer. Eventually, the F-14B arrived with the improved ALR-67 RHAW gear capable of keeping pace with the latest threats. Prior to that, some Tomcat squadrons used modified "Fuzz-buster" automotive police radar detectors mounted ad hoc on the pilot's glare shield to detect threats not handled by the ALR-45/50.
Operational history
TARPS was immediately impressed into the Cold War and used for surveillance of Soviet ships at sea and in their anchorages sometimes from over distant from patrolling aircraft carriers in the classic cat and mouse tactics of that era.
TARPS resulted in Tomcats being put in harm's way shortly after it was introduced to the fleet in 1981. VF-102 Tomcats had inadvertently been fired on by AAA and a single SA-2 SAM over Somalia in April 1983 while conducting peacetime mapping prior to a major exercise. A few months later VF-32 conducted TARPS missions in support of the invasion of Grenada and went on to join VF-143 and VF-31 in flying missions in the Eastern Med, where three carriers had gathered to respond to the crisis in Lebanon. Thus, TARPS was responsible for the Tomcat's first sustained combat baptism of fire when the crisis in Lebanon heated up in 1983, requiring daily overflights over hostile AAA and SAMs. During operation El Dorado Canyon in 1986, Libya launched SCUD missiles at a US outpost on an island in the Mediterranean and VF-102 flew TARPS to ascertain if there had been any damage.
Initially, TARPS was not a priority on the air tasking order during Desert Shield/Storm due to the availability of strategic assets like the U-2/TR-1 and plentiful USAF RF-4 units. However, once Desert Storm started, the demand for real-time intel overwhelmed the other assets and TARPS missions were called upon to meet the demand. Immediately, it became obvious that Tomcats were favored for in-country missions over the RF-4 as they required no escort and needed less fuel pre- and post-mission, which was a real concern at the time. TARPS continued to be utilized post Desert Storm, and training was modified to take into account medium-altitude tactics such as were flown in Desert Storm. Prior to that, the majority of TARPS training missions were low-altitude overland and over-water navigation and imagery. Only mapping was flown at medium altitudes. TARPS was used routinely in Operation Southern Watch over Iraq and called upon in Bosnia in 1995 and then again over Kosovo in 1999. The advent of LANTIRN into Tomcat operations provided a useful complement to TARPS. Since both systems need the same real estate in the rear cockpit for sensor operation control panels, they cannot be mounted on the aircraft at the same time, but they can be flown in formation, yielding the best of both systems.
TARPS was used in the United States in 1993 when areas of the Mississippi River flooded. The Federal Emergency Management Agency (FEMA) requested TARPS flights be taken over the area to determine which locations were hardest hit. TARPS has also been used for hurricane damage assessment. TARPS was also used to assess damages following the Waco siege in 1993, as well as damage to the Alfred P. Murrah Federal Building following the Oklahoma City bombing. In addition, TARPS equipped F-14s were used for DEA intel missions for anti-drug operations in the early 1990s.
Notes
References
External links
FAS Intelligence Resource Program: TARPS
TARPS page at GlobalSecurity web site, retrieved 23 November 2006
Aircraft instruments | Tactical Airborne Reconnaissance Pod System | [
"Technology",
"Engineering"
] | 1,947 | [
"Aircraft instruments",
"Measuring instruments"
] |
4,219,070 | https://en.wikipedia.org/wiki/Tellurium%20tetrafluoride | Tellurium tetrafluoride, TeF4, is a stable, white, hygroscopic crystalline solid and is one of two fluorides of tellurium. The other binary fluoride is tellurium hexafluoride. The widely reported Te2F10 has been shown to be F5TeOTeF5. There are other tellurium compounds that contain fluorine, but only the two mentioned contain solely tellurium and fluorine. Tellurium difluoride, TeF2, and ditellurium difluoride, Te2F2, are not known.
Preparation
Tellurium tetrafluoride can be prepared by the following reaction:
TeO2 + 2SF4 → TeF4 + 2SOF2
It is also prepared by reacting nitryl fluoride with tellurium or from the elements at 0 °C or by reacting selenium tetrafluoride with tellurium dioxide at 80 °C.
Fluorine in nitrogen can react with TeCl2 or TeBr2 to form TeF4. PbF2 will also fluorinate tellurium to TeF4.
Reactivity
Tellurium tetrafluoride will react with water or silica and forms tellurium oxides. Copper, silver, gold or nickel will react with tellurium tetrafluoride at 185 °C. It does not react with platinum. It is soluble in SbF5 and will precipitate out the complex TeF4SbF5.
Properties
Tellurium tetrafluoride melts at 130 °C and decomposes to tellurium hexafluoride at 194 °C. In the solid phase, it consists of infinite chains of TeF3F2/2 in an octahedral geometry. A lone pair of electrons occupies the sixth position.
References
R.B. King; Inorganic Chemistry of Main Group Elements, VCH Publishers, New York, 1995.
W.C. Cooper; Tellurium, VanNostrand Reinhold Company, New York, 1971.
Tellurium(IV) compounds
Fluorides
Tellurium halides
Chalcohalides | Tellurium tetrafluoride | [
"Chemistry"
] | 455 | [
"Inorganic compounds",
"Fluorides",
"Chalcohalides",
"Salts"
] |
4,219,085 | https://en.wikipedia.org/wiki/3C%20449 | 3C 449 is a low-redshift (z = 0.017) Fanaroff and Riley class I radio galaxy. It is thought to contain a highly warped circumnuclear disk surrounding the central active galactic nucleus (AGN). The name signifies that it was the 449th object (ordered by right ascension) of the Third Cambridge Catalog of Radio Sources (3C), published in 1959.
When observed by the Very Large Array, the galaxy features two symmetrical radio jets that end in lobes, plus an unresolved core. The jets are relativistic near the core, but their speed is greatly reduced at about 10 arcseconds (which corresponds to about 5 kiloparsecs at the distance of the galaxy) from the core. The lobes appear complex, with plumes and wiggles. The north lobe is elongated while the end of the south lobe is round. The total apparent size of the radio features is about 30 arcminutes. Both lobes lean towards the west, indicating they are pushed that way by external gas formed during a galaxy merger within the last 1.3–1.6 billion years.
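For readers wanting to reproduce the quoted angular-to-physical scale, the small-angle conversion can be sketched as follows (an illustrative back-of-envelope computation; the exact kiloparsec figure depends on the assumed Hubble constant, which the article does not state):

```python
# A back-of-envelope check of the "10 arcsec ~ 5 kpc" scale (hypothetical
# script; H0 values shown for illustration only).
C_KM_S = 299_792.458          # speed of light, km/s
ARCSEC_PER_RAD = 206_265.0

z = 0.017                     # redshift of 3C 449
for H0 in (50.0, 70.0):       # Hubble constant, km/s/Mpc
    d_mpc = C_KM_S * z / H0                        # low-redshift Hubble-law distance
    kpc = d_mpc * 1_000 * (10.0 / ARCSEC_PER_RAD)  # physical size of 10 arcsec
    print(f"H0 = {H0:.0f}: d ~ {d_mpc:.0f} Mpc, 10 arcsec ~ {kpc:.1f} kpc")
# Older, smaller H0 values (~50) reproduce the ~5 kpc figure; H0 = 70 gives ~3.5 kpc.
```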
3C 449 is the most prominent member of the Zwicky 2231.2+3732 galaxy cluster. The halo of 3C 449 is connected via a bridge with another galaxy located 37 arcseconds to the north.
Images
References
External links
Radio images and data from the 3CRR Atlas
Astrophysical Journal article about 3C 449 (Tremblay et al. 2006)
Simbad 3C 449
Radio galaxies
Lenticular galaxies
12064
449
Lacerta | 3C 449 | [
"Astronomy"
] | 340 | [
"Lacerta",
"Galaxy stubs",
"Astronomy stubs",
"Constellations"
] |
4,219,265 | https://en.wikipedia.org/wiki/1%2C8-Diazafluoren-9-one | 1,8-Diazafluoren-9-one (DFO) is an aromatic ketone first synthesized in 1950. It is used to find fingerprints on porous surfaces. It makes fingerprints glow when they are lit by blue-green light.
DFO reacts with amino acids present in the fingerprint to form highly fluorescent derivatives. Excitation with light at ~470 nm results in emission at ~570 nm.
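These wavelengths can be converted to photon energies with the standard E = hc/λ relation (a worked conversion of the figures quoted above, not additional measured data):

```python
# Photon energies for the quoted wavelengths, via E = hc / lambda
# (hc ~ 1239.84 eV*nm).
HC_EV_NM = 1239.84

e_excitation = HC_EV_NM / 470.0   # ~2.64 eV (blue-green excitation light)
e_emission   = HC_EV_NM / 570.0   # ~2.18 eV (yellow-orange emission)
print(f"excitation: {e_excitation:.2f} eV, emission: {e_emission:.2f} eV")
print(f"Stokes shift: {e_excitation - e_emission:.2f} eV")   # ~0.46 eV
```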
References
External links
RCMP report on 1,8-diazafluoren-9-one
Ketones
Forensic chemicals
Nitrogen heterocycles
Heterocyclic compounds with 3 rings | 1,8-Diazafluoren-9-one | [
"Chemistry"
] | 129 | [
"Ketones",
"Functional groups"
] |
4,219,489 | https://en.wikipedia.org/wiki/Rotating%20radio%20transient | Rotating radio transients (RRATs) are sources of short, moderately bright, radio pulses, which were first discovered in 2006. RRATs are thought to be pulsars, i.e. rotating magnetised neutron stars which emit more sporadically and/or with higher pulse-to-pulse variability than the bulk of the known pulsars. The working definition of a RRAT is a pulsar that is more easily discovered in searches for bright single pulses than in Fourier-domain periodicity searches, so that 'RRAT' is little more than a label (of how they are discovered) and does not represent a class of objects distinct from pulsars. Over 100 have been reported.
General characteristics
Pulses from RRATs are short in duration, lasting only a few milliseconds. The pulses are comparable to the brightest single pulses observed from pulsars, with flux densities of a few jansky at 1.4 GHz. Andrew Lyne, a radio astronomer involved in the discovery of RRATs, "guesses that there are only a few dozen brighter radio sources in the sky." The time intervals between detected bursts range from seconds (one pulse period) to hours. Thus radio emission from RRATs is typically only detectable for less than one second per day.
The sporadic emission from RRATs means that they are usually not detectable in standard periodicity searches which use Fourier techniques. Nevertheless, underlying periodicity in RRATs can be determined by finding the greatest common divisor of the intervals between detected pulses. This yields the maximum possible period; once many pulse arrival times have been determined, shorter periods (smaller by an integer factor) can be deemed statistically unlikely. The periods thus determined for RRATs are on the order of 1 second or longer, implying that the pulses are likely to be coming from rotating neutron stars, and led to the name "Rotating Radio Transient" being given. The periods seen in some RRATs are longer than in most radio pulsars, somewhat expected for sources which are (by definition) discovered in searches for individual pulses. Monitoring of RRATs for the past few years has revealed that they are slowing down. For some of the known RRATs this slow-down rate, while small, is larger than that for typical pulsars, which is again more in line with that of magnetars.
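The greatest-common-divisor idea described above can be sketched in a few lines; the following toy search (hypothetical arrival times and parameter values, not from any real RRAT) scans downward for the largest period that divides all pulse gaps to within a timing tolerance:

```python
# A toy implementation of the GCD-style period search (hypothetical arrival
# times and tolerances; real searches must handle timing noise and missed pulses).
def underlying_period(arrivals, p_min=0.1, p_max=10.0, step=1e-4, tol=1e-3):
    """Largest period p such that every pulse gap is ~ an integer multiple of p."""
    gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
    n_steps = int((p_max - p_min) / step)
    for k in range(n_steps + 1):
        p = p_max - k * step   # scan downward: the first hit is the largest p
        if all(round(g / p) >= 1 and abs(g - round(g / p) * p) < tol for g in gaps):
            return p
    return None

# Pulses detected on the 0th, 3rd, 7th, 12th and 18th turns of a 1.25 s rotator:
times = [0.0, 3.75, 8.75, 15.0, 22.5]
print(underlying_period(times))   # ~1.25; an integer-divisor ambiguity always remains
```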
The neutron star nature of RRATs was further confirmed when X-ray observations of the RRAT J1819-1458 were made using the space-based Chandra X-ray Observatory.
Cooling neutron stars have temperatures of order 1 million kelvins and so thermally emit at X-ray wavelengths. Measurement of an x-ray spectrum allows the temperature to be determined, assuming it is thermal emission from the surface of a neutron star. The resulting temperature for RRAT J1819-1458 is much cooler than that found on the surface of magnetars, and suggests that despite some shared properties between RRATs and magnetars, they belong to different populations of neutron stars. None of the other pulsars identified as RRATs has yet been detected in X-ray observation. This is in fact the only detection of these sources outside of the radio band.
Discovery
After the discovery of pulsars in 1967, searches for more pulsars relied on two key characteristics of pulsar pulses in order to distinguish pulsars from noise caused by terrestrial radio signals. The first is the periodic nature of pulsars. By performing periodicity searches through data, "pulsars are detected with much higher signal-to-noise ratios" than when simply looking for individual pulses. The second defining characteristic of pulsar signals is the dispersion in frequency of an individual pulse, due to the frequency dependence of the phase velocity of an electromagnetic wave that travels through an ionized medium. As the interstellar medium features an ionized component, waves traveling from a pulsar to Earth are dispersed, and thus pulsar surveys also focused on searching for dispersed waves. The importance of the combination of the two characteristics is such that in initial data processing from the Parkes Multibeam Pulsar Survey, which is the largest pulsar survey to date, "no search sensitive to single dispersed pulses was included."
After the survey itself had finished, searches began for single dispersed pulses. About a quarter of the pulsars already detected by the survey were found by searching for single dispersed pulses, but there were 17 sources of single dispersed pulses which were not thought to be associated with a pulsar. During follow-up observations, a few of these were found to be pulsars that had been missed in periodicity searches, but 11 sources were characterized by single dispersed pulses, with irregular intervals between pulses lasting from minutes to hours.
Over 100 have been reported, with dispersion measures up to 764 cm−3 pc.
Possible pulse mechanisms
In order to explain the irregularity of RRAT pulses, we note that most of the pulsars which have been labelled as RRATs are entirely consistent with pulsars which have regular underlying emission that is simply undetectable due to the low intrinsic brightness or large distance of the sources. However, assuming that when we do not detect pulses from these pulsars they are truly 'off', several authors have proposed mechanisms whereby such sporadic emission could be explained. For example, as pulsars gradually lose energy, they approach what is called the pulsar "death valley", a theoretical region in period–period-derivative space where the pulsar emission mechanism is thought to fail; emission may become sporadic as pulsars approach this region. However, although this is consistent with some of the behavior of RRATs, the RRATs with known periods and period derivatives do not lie near canonical death regions. Another suggestion is that asteroids might form in the debris of the supernova that formed the neutron star, and infall of this debris into the light cone of RRATs and some other types of pulsars might cause some of the irregular behavior observed. Since most RRATs have large dispersion measures that indicate larger distances, this, combined with their otherwise similar emission properties, means some RRATs could simply be due to the telescope detection threshold. Nevertheless, the possibility that RRATs share a similar emission mechanism with pulsars that show so-called "giant pulses" cannot be excluded. Fully understanding the emission mechanisms of RRATs would require directly observing the debris surrounding a neutron star, which is not possible now but may be possible in the future with the Square Kilometer Array. Nevertheless, as more RRATs are detected by observatories such as Arecibo, the Green Bank Telescope, and the Parkes Observatory at which RRATs were first discovered, some of the characteristics of RRATs may become clearer.
See also
Accretion-powered pulsar
Anomalous X-ray pulsar
Fast radio burst—have large DM with some confirmed at cosmological distances
Soft gamma repeater
References
External links
Astronomers Discover Peek-A-Boo Stars
New Kind of Star Found. SciAm 2006
Neutron stars
Astronomical events | Rotating radio transient | [
"Astronomy"
] | 1,451 | [
"Rotating radio transients",
"Astronomical events"
] |
4,219,501 | https://en.wikipedia.org/wiki/Andon%20%28manufacturing%29 | In manufacturing, andon () is a system which notifies managerial, maintenance, and other workers of a quality or process problem. The alert can be activated manually by a worker using a pullcord or button or may be activated automatically by the production equipment itself. The system may include a means to pause production so the issue can be corrected. Some modern alert systems incorporate audio alarms, text, or other displays; stack lights are among the most commonly used.
“Andon” is a loanword from Japanese, originally meaning paper lantern; Japanese manufacturers began its quality-control usage.
Details
An andon system is one of the principal elements of the Jidoka quality control method pioneered by Toyota as part of the Toyota Production System and therefore now part of the lean production approach.
The principle of andon works as follows: if a production issue arises on the production line, the operator at the affected workstation triggers an alert by pulling down the andon cord. Since 2014, Toyota has slowly been replacing the andon cord with an "andon button", which can be operated wirelessly and reduces the clutter of tangled cables on the production floor, helping to avoid tripping incidents. The system gives workers the ability, and moreover the empowerment, to stop production when a defect is found and immediately call for assistance. Common reasons for manual activation of the andon are:
Part shortage
Defects created or found
Tools/machines malfunction
Existence of a safety problem.
All work on the production line is stopped until a solution has been found. The alerts may be logged to a database so that they can be studied as part of a continual improvement process. Once the problem has been diagnosed and fixed, a second pull of the andon cord authorizes production to resume.
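As a toy illustration of this stop/resume flow (purely hypothetical; real andon systems are industrial hardware and software, not a few lines of Python), the two-pull protocol can be modeled as a small state machine:

```python
# Toy model of the andon flow: first pull stops the line and logs the alert;
# a second pull, after the fix, resumes production.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class AndonStation:
    station_id: str
    running: bool = True
    log: list = field(default_factory=list)   # stands in for the alert database

    def pull_cord(self, reason: str = "") -> None:
        if self.running:
            self.running = False
            self.log.append((datetime.now(), self.station_id, "STOP", reason))
            print(f"{self.station_id}: line stopped - {reason}")
        else:
            self.running = True
            self.log.append((datetime.now(), self.station_id, "RESUME", reason))
            print(f"{self.station_id}: problem fixed, line resumed")

line = AndonStation("station-07")
line.pull_cord("part shortage")   # first pull: stop and call for assistance
line.pull_cord("restocked")       # second pull: authorize production to resume
```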
The system typically indicates where the alert was generated, and may also provide a description of the issue. Modern andon systems can include text, graphics, or audio elements. Audio alerts may be done with coded tones, music with different tunes corresponding to the various alerts, or prerecorded verbal messages.
History
The concept/process of giving a non-management (production line) worker the authority to stop the production line because of a suspected quality issue is often attributed to W. Edwards Deming and others who developed what became Kaizen after World War II. Many attribute Japan's rise from wartime ashes to the world's second largest economy (the Japanese economic miracle) to its post-war industrial innovations:
Better design of products to improve service
Higher level of uniform product quality
Improvement of product testing in the workplace and in research centers
Greater sales through side [global] markets
See also
Stack light (commonly used in andon and lean manufacturing initiatives)
References
Japanese business terms
Lean manufacturing
Manufacturing in Japan
Toyota Production System | Andon (manufacturing) | [
"Engineering"
] | 568 | [
"Lean manufacturing"
] |
4,219,628 | https://en.wikipedia.org/wiki/NGC%201872 | NGC 1872 is an open cluster within the Large Magellanic Cloud in the constellation Dorado. It was discovered by James Dunlop in 1826.
NGC 1872 has characteristics of both globular clusters and open clusters - it is visually as rich as a typical globular but is much younger, and, like many open clusters, has bluer stars. Such intermediate clusters are common in the Large Magellanic Cloud.
Gallery
References
External links
Open clusters
Dorado
1872
Large Magellanic Cloud | NGC 1872 | [
"Astronomy"
] | 99 | [
"Dorado",
"Constellations"
] |
4,219,822 | https://en.wikipedia.org/wiki/NGC%201850 | NGC 1850 is a double cluster and a super star cluster in the Dorado constellation, located in the northwest part of the bar of the Large Magellanic Cloud, at a distance of from the Sun. It was discovered by Scottish astronomer James Dunlop in 1826.
This is an unusual cluster system because the main distribution of stars is like a globular cluster, but unlike the globular clusters of the Milky Way it is composed of young stars. The only similar object in the Milky Way is Westerlund 1. The main cluster has the appearance of a globular cluster with an age of Myr. The second is a more loosely distributed sub-cluster with an age of Myr, located at an angular separation of to the west of the main cluster. There are indications of interactions between the two, with the larger component being irregular and showing a tail toward the northwest.
The main cluster is around 100 million years old, with a tidal radius of 10 light years and an overall radius of 16 light years. It has an estimated mass of 42,000 times the mass of the Sun. The stellar component is split into two main sequence populations, with about a quarter of the stars in a blue (hotter) group and the rest in a redder (cooler) population. The cluster is embedded in an ionization region designated Henize 103.
The much younger subcluster, often designated NGC 1850A, contains a number of young, massive O/B-type stars that are on or near the main sequence, distributed up to from the central clump. Seven subcluster members have masses of , and two of those are . Lower-mass members up to are still in the pre-main-sequence stage. The age distribution of the subcluster members indicates that star formation has been active almost constantly since the subcluster's formation. The eastern side of the cluster is more obscured and has fewer OB stars.
In November 2021, astronomers using MUSE on the Very Large Telescope reported the discovery of a stellar-mass black hole in NGC 1850 by viewing its influence on the motion of a star in close proximity, the first direct dynamical detection of a black hole in a young massive cluster.
Gallery
References
External links
NGC 1850: Not Found in the Milky Way (Astronomy Picture of the Day)
Globular clusters
Open clusters
Large Magellanic Cloud
Dorado
1850 | NGC 1850 | [
"Astronomy"
] | 476 | [
"Dorado",
"Constellations"
] |
4,219,944 | https://en.wikipedia.org/wiki/NGC%202080 | NGC 2080, also known as the Ghost Head Nebula, is a star-forming region and emission nebula to the south of the 30 Doradus (Tarantula) nebula, in the southern constellation Dorado. It belongs to the Large Magellanic Cloud, a satellite galaxy to the Milky Way, which is at a distance of 168,000 light years. NGC 2080 was discovered by John Frederick William Herschel in 1834. The Ghost Head Nebula has a diameter of 50 light-years and is named for the two distinct white patches it possesses, called the "eyes of the ghost". The western patch, called A1, has a bubble in the center which was created by the young, massive star it contains. The eastern patch, called A2, has several young stars in a newly formed cluster, but they are still obscured by their originating dust cloud. Because neither dust cloud has dissipated due to the stellar radiation, astronomers have deduced that both sets of stars formed within the past 10,000 years. These stars together have begun to create a bubble in the nebula with their outpourings of material, called stellar wind.
The presence of stars also greatly influences the color of the nebula. The western portion of the nebula has a dominant oxygen emission line because of a powerful star on the nebula's outskirts; this colors it green. The rest of the nebula's outskirts have a red hue due to the ionization of hydrogen. Because both hydrogen and oxygen are ionized in the central region, it appears pale yellow; when hydrogen is energized enough to emit a second wavelength of light, it appears blue, as in the area surrounding A1 and A2.
NGC 2080 should not be confused with the Ghost Nebula (Sh2-136) or the Little Ghost Nebula (NGC 6369).
See also
List of NGC objects (2001–3000)
References
External links
SEDS
Hubble Sends Season's Greetings from the Cosmos to Earth – Hubble Space Telescope news release
Release on NGC 2080 at ESA/Hubble
2080
Dorado
Diffuse nebulae
Large Magellanic Cloud
Tarantula Nebula
Discoveries by John Herschel | NGC 2080 | [
"Astronomy"
] | 435 | [
"Dorado",
"Constellations"
] |
4,220,182 | https://en.wikipedia.org/wiki/IC%202944 | IC 2944, also known as the Running Chicken Nebula, the Lambda Centauri Nebula or the λ Centauri Nebula, is an open cluster with an associated emission nebula found in the constellation Centaurus, near the star λ Centauri. It features Bok globules, which are frequently a site of active star formation. However, no evidence for star formation has been found in any of the globules in IC 2944. Other designations for IC 2944 include RCW 62, G40 and G42.
The ESO Very Large Telescope image on the right is a close-up of a set of Bok globules discovered in IC 2944 by astronomer A. David Thackeray in 1950. These globules are now known as Thackeray's Globules. In 2MASS images, 6 stars are visible within the largest globule.
The region of nebulosity visible in modern images includes both IC 2944 and IC 2948, as well as the fainter IC 2872 nearby. IC 2948 is the brightest emission and reflection nebula towards the southeast, while IC 2944 is the cluster of stars and surrounding nebulosity stretching towards λ Centauri. IC 2944 gets the nickname "Running Chicken Nebula" from a group of stars that resemble a running chicken. The star Lambda Centauri lies just outside IC 2944. The nebula is 6,500 light-years from Earth.
References
External links
IC 2944 at ESA/Hubble
Centaurus
2944
Bright nebula IC 2944
100b
Star-forming regions | IC 2944 | [
"Astronomy"
] | 330 | [
"Centaurus",
"Constellations"
] |
11,976,532 | https://en.wikipedia.org/wiki/TOP500 | The TOP500 project ranks and details the 500 most powerful non-distributed computer systems in the world. The project was started in 1993 and publishes an updated list of the supercomputers twice a year. The first of these updates always coincides with the International Supercomputing Conference in June, and the second is presented at the ACM/IEEE Supercomputing Conference in November. The project aims to provide a reliable basis for tracking and detecting trends in high-performance computing and bases rankings on HPL benchmarks, a portable implementation of the high-performance LINPACK benchmark written in Fortran for distributed-memory computers.
The most recent edition of TOP500 was published in November 2024 as the 64th edition of TOP500, while the next edition of TOP500 will be published in June 2025 as the 65th edition of TOP500. As of November 2024, the United States' El Capitan is the most powerful supercomputer in the TOP500, reaching 1742 petaFlops (1.742 exaFlops) on the LINPACK benchmarks. As of 2018, the United States has by far the highest share of total computing power on the list (nearly 50%). As of 2024, the United States has the highest number of systems with 173 supercomputers; China is in second place with 63, and Germany is third at 40.
The 59th edition of TOP500, published in June 2022, was the first edition of TOP500 to feature only 64-bit supercomputers; as of June 2022, 32-bit supercomputers are no longer listed. The TOP500 list is compiled by Jack Dongarra of the University of Tennessee, Knoxville, Erich Strohmaier and Horst Simon of the National Energy Research Scientific Computing Center (NERSC) and Lawrence Berkeley National Laboratory (LBNL), and, until his death in 2014, Hans Meuer of the University of Mannheim, Germany. The TOP500 project also includes lists such as Green500 (measuring energy efficiency) and HPCG (based on the High Performance Conjugate Gradients benchmark, which stresses memory and interconnect performance rather than raw floating-point throughput).
History
In the early 1990s, a new definition of supercomputer was needed to produce meaningful statistics. After experimenting with metrics based on processor count in 1992, the idea arose at the University of Mannheim to use a detailed listing of installed systems as the basis. In early 1993, Jack Dongarra was persuaded to join the project with his LINPACK benchmarks. A first test version was produced in May 1993, partly based on data available on the Internet, including the following sources:
"List of the World's Most Powerful Computing Sites" maintained by Gunter Ahrendt
David Kahaner, the director of the Asian Technology Information Program (ATIP), published a report in 1992, titled "Kahaner Report on Supercomputer in Japan", which contained an immense amount of data.
The information from those sources was used for the first two lists. Since June 1993, the TOP500 has been produced twice a year based on site and vendor submissions only. Since 1993, performance of the ranked position has grown steadily in accordance with Moore's law, doubling roughly every 14 months. In June 2018, Summit was fastest with an Rpeak of 187.6593 PFLOPS. For comparison, this is over 1,432,513 times faster than the Connection Machine CM-5/1024 (1,024 cores), which was the fastest system in November 1993 (twenty-five years prior) with an Rpeak of 131.0 GFLOPS.
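The doubling time implied by these two figures can be checked directly (a worked example using only the numbers quoted in this paragraph):

```python
# Growth factor and doubling time between the 1993 CM-5/1024 and 2018 Summit.
import math

rpeak_1993 = 131.0e9        # CM-5/1024 Rpeak, FLOPS (131.0 GFLOPS)
rpeak_2018 = 187.6593e15    # Summit Rpeak, FLOPS (187.6593 PFLOPS)
months = 25 * 12            # November 1993 -> June 2018, ~25 years

doublings = math.log2(rpeak_2018 / rpeak_1993)
print(rpeak_2018 / rpeak_1993)   # ~1.43e6, the "over 1,432,513 times" factor
print(months / doublings)        # ~14.7 months per doubling, matching the claim
```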
Architecture and operating systems
All supercomputers currently on the TOP500 are 64-bit supercomputers, mostly based on CPUs with the x86-64 instruction set architecture: 384 are Intel EM64T-based and 101 are AMD AMD64-based, with the latter including the top eight supercomputers. The 15 other supercomputers are all based on RISC architectures, including six based on ARM64 and seven based on the Power ISA used by IBM Power microprocessors.
In recent years, heterogeneous computing has dominated the TOP500, mostly using Nvidia's graphics processing units (GPUs) or Intel's x86-based Xeon Phi as coprocessors. This is because of better performance-per-watt ratios and higher absolute performance. AMD GPUs have taken the top spot and displaced Nvidia in the top-10 portion of the list. The recent exceptions include the aforementioned Fugaku, Sunway TaihuLight, and K computer. Tianhe-2A is also an interesting exception, as US sanctions prevented use of Xeon Phi; instead, it was upgraded to use the Chinese-designed Matrix-2000 accelerators.
Two computers which first appeared on the list in 2018 were based on architectures new to the TOP500. One was a new x86-64 microarchitecture from Chinese manufacturer Sugon, using Hygon Dhyana CPUs (these resulted from a collaboration with AMD, and are a minor variant of Zen-based AMD EPYC) and was ranked 38th, now 117th, and the other was the first ARM-based computer on the list using Cavium ThunderX2 CPUs. Before the ascendancy of 32-bit x86 and later 64-bit x86-64 in the early 2000s, a variety of RISC processor families made up most TOP500 supercomputers, including SPARC, MIPS, PA-RISC, and Alpha.
All the fastest supercomputers since the Earth Simulator supercomputer have used operating systems based on Linux. Indeed, all the listed supercomputers now use an operating system based on the Linux kernel.
Since November 2015, no computer on the list runs Windows (while Microsoft reappeared on the list in 2021 with Ubuntu based on Linux). In November 2014, Windows Azure cloud computer was no longer on the list of fastest supercomputers (its best rank was 165th in 2012), leaving the Shanghai Supercomputer Center's Magic Cube as the only Windows-based supercomputer on the list, until it also dropped off the list. It was ranked 436th in its last appearance on the list released in June 2015, while its best rank was 11th in 2008. There are no longer any Mac OS computers on the list. It had at most five such systems at a time, one more than the Windows systems that came later, while the total performance share for Windows was higher. Their relative performance share of the whole list was however similar, and never high for either. In 2004, the System X supercomputer based on Mac OS X (Xserve, with 2,200 PowerPC 970 processors) once ranked 7th place.
It has been well over a decade since MIPS systems dropped entirely off the list, though the Gyoukou supercomputer that jumped to 4th place in November 2017 had a MIPS-based design as a small part of its coprocessors. Use of 2,048-core coprocessors (each with eight additional 6-core MIPS processors, so that they no longer need to rely on an external Intel Xeon E5 host processor) made the supercomputer much more energy efficient than the other top 10 (i.e. it was 5th on Green500, and other such ZettaScaler-2.2-based systems take the first three spots there). At 19.86 million cores, it was by far the largest system by core count, with almost double that of the then-best manycore system, the Chinese Sunway TaihuLight.
TOP500
As of November 2024, the number one supercomputer is El Capitan, and the leader on Green500 is JEDI, a Bull Sequana XH3000 system using the Nvidia Grace Hopper GH200 Superchip. In June 2022, the top 4 systems of Graph500 used both AMD CPUs and AMD accelerators.
After an upgrade for the 56th TOP500 in November 2020, Summit, previously the fastest supercomputer, is currently the highest-ranked IBM-made supercomputer, with IBM POWER9 CPUs. Sequoia became the last IBM Blue Gene/Q model to drop completely off the list; it had been ranked 10th on the 52nd list (and 1st on the June 2012, 41st list, after an upgrade).
Microsoft is back on the TOP500 list with six Microsoft Azure instances (which are benchmarked with Ubuntu, so all the supercomputers are still Linux-based), with CPUs and GPUs from the same vendors; the fastest one is currently 11th, and another, older and slower, previously made 10th. Amazon is present with one AWS instance, currently ranked 64th (it was previously ranked 40th). The number of Arm-based supercomputers is 6; currently all Arm-based supercomputers use the same Fujitsu CPU as the number 2 system, with the next one previously ranked 13th, now 25th.
Legend:
Rank – Position within the TOP500 ranking. In the TOP500 list table, the computers are ordered first by their Rmax value. In the case of equal performance (Rmax value) for different computers, the order is by Rpeak; for sites that have the same computer, the order is by memory size and then alphabetically (a minimal sketch of this ordering rule follows the legend).
Rmax – The highest score measured using the LINPACK benchmark suite. This is the number used to rank the computers, measured in quadrillions of 64-bit floating-point operations per second, i.e., petaFLOPS.
Rpeak – The theoretical peak performance of the system, computed in petaFLOPS.
Name – Some supercomputers are unique, at least in their location, and are thus named by their owner.
Model – The computing platform as it is marketed.
Processor – The instruction set architecture or processor microarchitecture, alongside GPU and accelerators where available.
Interconnect – The interconnect between computing nodes. InfiniBand is the most used (38%) by performance share, while Gigabit Ethernet is the most used (54%) by number of computers.
Manufacturer – The manufacturer of the platform and hardware.
Site – The name of the facility operating the supercomputer.
Country – The country in which the computer is located.
Year – The year of installation or last major update.
Operating system – The operating system that the computer uses.
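The ordering rule described under "Rank" can be expressed as a composite sort key. Below is a minimal sketch, assuming hypothetical field names and made-up values (Rmax and Rpeak in petaFLOPS); the site-level memory/alphabetical tie-break is simplified to a name comparison.

```python
# A minimal sketch of the TOP500 ordering rule: Rmax descending, then Rpeak,
# then memory size, then name. Field names and values are hypothetical.

def top500_sort_key(system):
    return (-system["rmax"], -system["rpeak"], -system["memory_tb"], system["name"])

systems = [
    {"name": "Alpha", "rmax": 1194.0, "rpeak": 1679.8, "memory_tb": 9200},
    {"name": "Beta",  "rmax": 1194.0, "rpeak": 1679.8, "memory_tb": 4600},
    {"name": "Gamma", "rmax": 442.0,  "rpeak": 537.2,  "memory_tb": 4800},
]

for rank, s in enumerate(sorted(systems, key=top500_sort_key), start=1):
    print(rank, s["name"])  # Alpha edges out Beta on the memory tie-break
```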
Other rankings
Top countries
Numbers below represent the number of computers in the TOP500 that are in each of the listed countries or territories. As of 2024, the United States has the most supercomputers on the list, with 173 machines. The United States also has the highest aggregate computational power, at 6,324 petaflops Rmax, with Japan second (919 Pflop/s) and Germany third (396 Pflop/s).
Fastest supercomputer in TOP500 by country
(As of November 2023)
Systems ranked
HPE Cray El Capitan (Lawrence Livermore National Laboratory, November 2024 – present)
HPE Cray Frontier (Oak Ridge National Laboratory, June 2022 – November 2024)
Supercomputer Fugaku (Riken Center for Computational Science, June 2020 – June 2022)
IBM Summit (Oak Ridge National Laboratory, June 2018 – June 2020)
NRCPC Sunway TaihuLight (National Supercomputing Center in Wuxi, June 2016 – November 2017)
NUDT Tianhe-2A (National Supercomputing Center of Guangzhou, June 2013 – June 2016)
Cray Titan (Oak Ridge National Laboratory, November 2012 – June 2013)
IBM Sequoia Blue Gene/Q (Lawrence Livermore National Laboratory, June 2012 – November 2012)
Fujitsu K computer (Riken Advanced Institute for Computational Science, June 2011 – June 2012)
NUDT Tianhe-1A (National Supercomputing Center of Tianjin, November 2010 – June 2011)
Cray Jaguar (Oak Ridge National Laboratory, November 2009 – November 2010)
IBM Roadrunner (Los Alamos National Laboratory, June 2008 – November 2009)
IBM Blue Gene/L (Lawrence Livermore National Laboratory, November 2004 – June 2008)
NEC Earth Simulator (Earth Simulator Center, June 2002 – November 2004)
IBM ASCI White (Lawrence Livermore National Laboratory, November 2000 – June 2002)
Intel ASCI Red (Sandia National Laboratories, June 1997 – November 2000)
Hitachi CP-PACS (University of Tsukuba, November 1996 – June 1997)
Hitachi SR2201 (University of Tokyo, June 1996 – November 1996)
Fujitsu Numerical Wind Tunnel (National Aerospace Laboratory of Japan, November 1994 – June 1996)
Intel Paragon XP/S140 (Sandia National Laboratories, June 1994 – November 1994)
Fujitsu Numerical Wind Tunnel (National Aerospace Laboratory of Japan, November 1993 – June 1994)
TMC CM-5 (Los Alamos National Laboratory, June 1993 – November 1993)
Additional statistics
By number of systems:
Note: the operating systems of all TOP500 systems are Linux-family based; "Linux" in the statistics refers to generic Linux.
Sunway TaihuLight is the system with the most CPU cores (10,649,600), and Tianhe-2 has the most GPU/accelerator cores (4,554,752). Aurora is the system with the greatest power consumption, at 38,698 kilowatts.
New developments in supercomputing
In November 2014, it was announced that the United States was developing two new supercomputers to exceed China's Tianhe-2 and take its place as the world's fastest supercomputer. The two computers, Sierra and Summit, will each exceed Tianhe-2's 55 peak petaflops; Summit, the more powerful of the two, will deliver 150–300 peak petaflops. On 10 April 2015, US government agencies banned Nvidia from selling chips to supercomputing centers in China as "acting contrary to the national security ... interests of the United States", and banned Intel Corporation from providing Xeon chips to China due to their use, according to the US, in researching nuclear weapons, research to which US export control law bans US companies from contributing: "The Department of Commerce refused, saying it was concerned about nuclear research being done with the machine."
On 29 July 2015, President Obama signed an executive order creating a National Strategic Computing Initiative calling for the accelerated development of an exascale (1000 petaflop) system and funding research into post-semiconductor computing.
In June 2016, the Japanese firm Fujitsu announced at the International Supercomputing Conference that its future exascale supercomputer would feature processors of its own design implementing the ARMv8 architecture. The Flagship2020 program, by Fujitsu for RIKEN, planned to break the exaflops barrier by 2020 with the Fugaku supercomputer (and "it looks like China and France have a chance to do so and that the United States is content – for the moment at least – to wait until 2023 to break through the exaflops barrier"). These processors will also implement extensions to the ARMv8 architecture equivalent to HPC-ACE2 that Fujitsu is developing with Arm.
In June 2016, Sunway TaihuLight became the No. 1 system with 93 petaflop/s (PFLOP/s) on the Linpack benchmark.
In November 2016, Piz Daint was upgraded, moving it from 8th to 3rd place and leaving the US with no systems in the top 3 for the second time.
Inspur, based in Jinan, China, is one of the largest HPC system manufacturers. Inspur became the third manufacturer, after IBM and HP, to have built a 64-way system. The company has registered over $10 billion in revenue and has provided a number of systems to countries such as Sudan, Zimbabwe, Saudi Arabia and Venezuela. Inspur was also a major technology partner behind both the Tianhe-2 and Taihu supercomputers, which occupied the top two positions of the TOP500 list until November 2017. Inspur and Supermicro released several platforms aimed at GPU-based HPC, such as SR-AI and AGX-2, in May 2017.
In June 2018, Summit, an IBM-built system at the Oak Ridge National Laboratory (ORNL) in Tennessee, US, took the No. 1 spot with a performance of 122.3 petaflop/s (PFLOP/s), and Sierra, a very similar system at the Lawrence Livermore National Laboratory in California, took the No. 3 spot. These systems also took the first two spots on the HPCG benchmark. Due to Summit and Sierra, the US took back the lead as consumer of HPC performance, with 38.2% of the overall installed performance, while China was second with 29.1%. For the first time ever, the leading HPC manufacturer was not a US company: Lenovo took the lead with 23.8% of systems installed, followed by HPE with 15.8%, Inspur with 13.6%, Cray with 11.2%, and Sugon with 11%.
On 18 March 2019, the United States Department of Energy and Intel announced the first exaFLOP supercomputer would be operational at Argonne National Laboratory by the end of 2021. The computer, named Aurora, was delivered to Argonne by Intel and Cray.
On 7 May 2019, The U.S. Department of Energy announced a contract with Cray to build the "Frontier" supercomputer at Oak Ridge National Laboratory. Frontier is anticipated to be operational in 2021 and, with a performance of greater than 1.5 exaflops, should then be the world's most powerful computer.
Since June 2019, all TOP500 systems deliver a petaflop or more on the High Performance Linpack (HPL) benchmark, with the entry level to the list now at 1.022 petaflops.
In May 2022, the Frontier supercomputer broke the exascale barrier, completing more than a quintillion 64-bit floating point arithmetic calculations per second. Frontier clocked in at approximately 1.1 exaflops, beating out the previous record-holder, Fugaku.
Large machines not on the list
Some major systems are not on the list. A prominent example is NCSA's Blue Waters, whose operators publicly announced the decision not to participate in the list because they do not feel it accurately indicates the ability of any system to do useful work. Other organizations decide not to list systems for security and/or commercial competitiveness reasons. One such example is the OceanLight supercomputer at the National Supercomputing Center in Qingdao, completed in March 2021, which was submitted for, and won, the Gordon Bell Prize. The computer is an exaflop machine, but it was not submitted to the TOP500 list; the first exaflop machine submitted to the list was Frontier. Analysts suspected that the reason the NSCQ did not submit what would otherwise have been the world's first exascale supercomputer was to avoid inflaming political sentiments and fears within the United States, in the context of the United States–China trade war. Purpose-built machines that are not capable of running, or do not run, the benchmark are also excluded, such as the RIKEN MDGRAPE-3 and MDGRAPE-4.
A Google Tensor Processing Unit v4 pod is capable of 1.1 exaflops of peak performance, while TPU v5p claims over 4 exaflops in bfloat16 floating-point format; however, these units are highly specialized to run machine learning workloads, and the TOP500 measures a specific benchmark algorithm using a specific numeric precision.
In March 2024, Meta AI disclosed the operation of two data centers, each with 24,576 H100 GPUs, almost twice as many as the Microsoft Azure Eagle (No. 3 as of September 2024), which could have placed them 3rd and 4th in the TOP500, but neither has been benchmarked. During the company's Q3 2024 earnings call in October, Mark Zuckerberg disclosed the use of a cluster with over 100,000 H100s.
The xAI Memphis Supercluster (also known as "Colossus") allegedly features 100,000 of the same H100 GPUs, which could have put it in first place, but it is reportedly not in full operation due to power shortages.
Computers and architectures that have dropped off the list
IBM Roadrunner is no longer on the list (nor is any other using the Cell coprocessor, or PowerXCell).
Although Itanium-based systems reached second rank in 2004, none now remain.
Similarly, (non-SIMD-style) vector processors (NEC-based, such as the Earth Simulator, which was the fastest in 2002) have also fallen off the list, as have the Sun Starfire computers that occupied many spots in the past.
The last non-Linux computers on the list, two AIX systems running on POWER7 (ranked 494th and 495th in July 2017, originally 86th and 85th), dropped off the list in November 2017.
Notes
The first edition of TOP500 to feature only 64-bit supercomputers was the 59th edition, published in June 2022; as of that list, TOP500 features only 64-bit supercomputers.
The world’s most powerful supercomputers are from the United States and Japan.
See also
Computer science
Computing
Graph500
Green500
HPC Challenge Benchmark
Instructions per second
LINPACK benchmarks
List of fastest computers
References
External links
LINPACK benchmarks at TOP500
Supercomputer benchmarks
Supercomputer sites
Top lists | TOP500 | [
"Technology"
] | 4,572 | [
"Supercomputers",
"Supercomputing"
] |
11,977,034 | https://en.wikipedia.org/wiki/Forel-Ule%20scale | The Forel-Ule scale is a method to estimate the color of bodies of water. The scale provides a visual estimate of the color of a body of water, and it is used in limnology and oceanography with the aim of measuring the water's transparency and classifying its biological activity, dissolved substances, and suspended sediments.
The color scale of 21 different colors can be created using either a set of liquid vials or a set of color lighting filters in a white frame.
The classic Forel-Ule scale uses a set of liquid vials of multiple colors. Together, the vials represent a standardized color palette, created using small transparent glass tubes containing water colored with different concentrations of stable inorganic salts. By mixing different chemicals (distilled water, ammonia, copper sulfate, potassium chromate, and cobalt sulfate), a standard color scale is produced in a set of numbered vials (1–21). The set of vials is then compared with the color of the water body. The result is a color index for the water body, which gives an indication of the transparency of the water and thus helps to classify overall biological activity. The color graduations correspond to open sea and lake water colors as they appear to an observer ashore or on board a vessel. The method is often used in conjunction with a Secchi disk submerged to half the Secchi depth, so that the color can be judged against a white background.
A set of color lighting filters against a white background can also be used as a Forel-Ule scale, called a Modern FU plastic scale. High-quality lighting filters of many colors are combined with one another to create the 21 colors of the traditional Forel-Ule Scale when viewed against a white background, such as white plexiglass.
History
The method was developed by François-Alphonse Forel and was extended three years later with greenish-brown to dark brown colors by the German limnologist Wilhelm Ule. The Forel-Ule scale was a simple but adequate way to classify the color of rivers, lakes, seas and oceans. Forel-Ule scale observations, along with temperature, salinity, bathymetry, and Secchi depth, are among the oldest oceanographic parameters, dating back to the 1800s.
Role in citizen science
In the Netherlands, the Citizen's Observatory for Coast and Ocean Optical Monitoring (Citclops) project has begun crowdsourcing water color measurements from citizen scientists, who estimate the color of the water against the Forel-Ule scale using a smartphone app called "Eye on water."
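A sketch of the kind of colour matching such an app might perform, classifying a measured colour by nearest hue angle, is shown below. The reference colours in the table are rough placeholders rather than the published calibration values for the 21 FU classes, which would be taken from the Citclops literature.

```python
# Illustrative nearest-Forel-Ule-class matching by hue angle. The reference
# colours below are placeholders, NOT the published FU calibration.
import colorsys

FU_REFERENCE = [        # (FU index, illustrative sRGB triplet)
    (1,  (33, 88, 188)),    # indigo blue
    (6,  (60, 130, 150)),   # blue-green
    (11, (100, 150, 80)),   # green
    (16, (130, 120, 60)),   # green-brown
    (21, (90, 60, 30)),     # cola brown
]

def hue_deg(rgb):
    r, g, b = (c / 255.0 for c in rgb)
    return colorsys.rgb_to_hsv(r, g, b)[0] * 360.0

def hue_dist(h1, h2):
    d = abs(h1 - h2) % 360.0
    return min(d, 360.0 - d)  # wraparound-safe angular distance

def nearest_fu(measured_rgb):
    mh = hue_deg(measured_rgb)
    return min(FU_REFERENCE, key=lambda fu: hue_dist(hue_deg(fu[1]), mh))[0]

print(nearest_fu((80, 140, 120)))  # greenish sample -> FU 6 with these placeholders
```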
See also
Citizen science
Color of water
Munsell color system
Ocean color
Pt/Co scale
Secchi disk
Water quality
References
External links
Official website
Oceanography
Color scales
Color
Water | Forel-Ule scale | [
"Physics",
"Environmental_science"
] | 573 | [
"Water",
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
11,977,151 | https://en.wikipedia.org/wiki/Triconex | Triconex is a Schneider Electric brand that supplies products, systems, and services for safety, critical control, and turbo-machinery applications. Triconex also use its name for its hardware devices that use its TriStation application software. Triconex products are based on patented Triple modular redundancy (TMR) industrial safety-shutdown technology. Today, Triconex TMR products operate globally in more than 11,500 installations.
Company history
The history of Triconex was published in the book The History of a Safer World by Gary L. Wilkinson. The company was founded in September 1983 by Jon Wimer in Santa Ana, California and began operations in March 1984. The company was founded as a venture-capital funded private company. The business plan was written by Wimer and Peter Pitsker, an automation industry veteran and Stanford graduate. They presented the plan for a TMR (triple modular redundant) system named "Tricon" that would improve the safety and reliability of industrial applications. Among the customers they targeted were the petro-chemical giants, such as Exxon, Shell, Chevron, and BP.
Pitsker and Wimer presented the business plan to Los Angeles-based investor Chuck Cole, who was also a professor at USC. Cole was interested, so he contacted his personal attorney, future two-time Los Angeles mayor Richard Riordan. Riordan agreed to invest $50,000 and Cole's venture capital team matched it, providing the seed money for Triconex. Wimer hired computer architect Ken Brody away from another computer manufacturer as vice president of research and development and the number 2 employee, and Brody in turn hired Wing N. Toy from Bell Labs. After two years, however, the company nearly failed due to the expense and complications of testing a new safety system. In February 1986, founder Wimer left the company and the board asked a seasoned executive, William K. Barkovitz, to become CEO; Barkovitz ended up leading the company for nine years. By the end of his tenure, Triconex had become the leading safety system in a market it largely created, made acquisitions, and completed an initial public offering. In January 1994, Triconex was acquired by the British-based Siebe for $90 million.
The hardware architect of the company was Gary Hufton, and the software development manager was Glen Alleman. Along with Wing N. Toy (the lead engineer of the fault-tolerant ESS telephone switch), they led a small successful engineering team that built the first Tricon system, sold in June 1986. Soon after, Exxon became a customer and Honeywell agreed to distribute the Tricon. Among the software engineers who worked for Triconex were Phil Huber and Dennis Morin, who later left the company to found Wonderware.
System
The Triconex system is based on the patented TMR technology, supports up to Safety Integrity Level 3 (SIL 3), and is usually used as a safety system rather than a control system.
Operating theory
Fault tolerance in the Tricon is achieved by means of a Triple-Modular Redundant (TMR) architecture. The Tricon provides error-free, uninterrupted control in the presence of either hard failures of components, or transient faults from internal or external sources. The Tricon is designed with a fully triplicated architecture throughout, from the input modules through the Main Processors to the output modules. Every I/O module houses the circuitry for three independent legs. Each leg on the input modules reads the process data and passes that information to its respective Main Processor. The three Main Processors communicate with each other using a proprietary high-speed bus system called the TriBus. Once per scan, the three Main Processors synchronize and communicate with their two neighbors over the TriBus. The Tricon votes digital input data, compares output data, and sends copies of analog input data to each Main Processor. The Main Processors execute the user written application and send outputs generated by the application to the output modules. In addition to voting the input data, the TriBus votes the output data. This is done on the output modules as close to the field as possible to detect and compensate for any errors between the Tricon voting and the final output driven to the field.
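A minimal software sketch of the voting behaviour described above follows. The real Tricon implements voting in triplicated hardware and firmware on the TriBus and output modules; the analog mid-value selection shown here is a common TMR approach and an assumption, not a quote from the product documentation.

```python
# 2-out-of-3 voting for triplicated signals (illustrative sketch only).
import statistics

def vote_digital(a: bool, b: bool, c: bool) -> bool:
    """Majority vote: any single faulted leg is outvoted."""
    return (a and b) or (a and c) or (b and c)

def vote_analog(a: float, b: float, c: float) -> float:
    """Mid-value select: the median masks one wildly wrong leg."""
    return statistics.median([a, b, c])

print(vote_digital(True, True, False))  # True  (one disagreeing leg masked)
print(vote_analog(4.02, 4.01, 9.99))    # 4.02  (outlier leg outvoted)
```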
Hardware
The Triconex system usually consists of the following typical modules:
Main Processor modules (triple).
Communication module(s).
Input and output modules: analog and/or digital, operating singly or with a hot spare (standby).
Power supply modules (redundant).
Backplane(s) (chassis) that can hold the previous modules.
System cabinet(s): can compact one or more chassis in one cabinet.
Marshalling cabinets to adapt and standardize interface connections between the field instruments and the Triconex system cabinets.
Human machine interface (HMI) to monitor the events.
Engineering workstation (EWS) for programming, monitoring, troubleshooting, and updating.
Software
The Triconex main processors communicate with the TriStation 1131 application software to download, update and/or monitor programs. These programs are written in one of the following languages:
Function Block Diagram,
Ladder Diagram,
Structured Text (a Pascal-like language), or
Cause and Effect Matrix Programmable Language (CEMPLE).
(Function Block Diagram, Ladder Diagram and Structured Text are defined in IEC 1131-3.)
In addition, Sequence of Events (SOE) recorder software and diagnostic monitor software are provided.
Triton malware
In December 2017, it was reported that the safety systems of an unidentified power station, believed to be in Saudi Arabia, were compromised when the Triconex industrial safety technology made by Schneider Electric SE was targeted in what is believed to have been a state-sponsored attack. The computer security company Symantec claimed that the malware, known as "Triton", exploited a vulnerability in computers running the Microsoft Windows operating system.
References and notes
Further reading
Triton is the world's most murderous malware, and it's spreading March 5, 2019 MIT Technology Review
External links
Triconex Safety Systems, Schneider Electric
Control engineering | Triconex | [
"Engineering"
] | 1,249 | [
"Control engineering"
] |
11,977,518 | https://en.wikipedia.org/wiki/Gyrochronology | Gyrochronology is a method for estimating the age of a low-mass (cool) main sequence star (spectral class F8 V or later) from its rotation period. The term is derived from the Greek words gyros, chronos and logos, roughly translated as rotation, age, and study respectively. It was coined in 2003 by Sydney Barnes to describe the associated procedure for deriving stellar ages, and developed extensively in empirical form in 2007.
Gyrochronology builds on the work of Andrew Skumanich, who found that the average value of (v sin i) for several open clusters was inversely proportional to the square root of the cluster's age. In the expression (v sin i), (v) is the velocity at the star's equator and (i) is the inclination angle of the star's axis of rotation, which is generally an unmeasurable quantity. The gyrochronology method depends on the relationship between the rotation period and the mass of low-mass main-sequence stars of the same age, which was verified by early work on the Hyades open cluster. The associated age estimate for a star is known as the gyrochronological age.
Overview
The basic idea underlying gyrochronology is that the rotation period P, of a cool main-sequence star is a deterministic function of its age t and its mass M (or a suitable substitute such as color). Although main sequence stars of a given mass form with a range of rotation periods, their periods increase rapidly and converge to a well defined value as they lose angular momentum through magnetically channelled stellar winds. Therefore, their periods converge to a certain function of age and mass, mathematically denoted by P=P(t,M). Consequently, cool stars do not occupy the entire 3-dimensional parameter space of (mass, age, period), but instead define a 2-dimensional surface in this P-t-M space. Therefore, measuring two of these variables yields the third. Of these quantities, the mass (color) and the rotation period are the easier variables to measure, providing access to the star's age, otherwise difficult to obtain.
In order to determine the shape of this P=P(t,M) surface, the rotation periods and photometric colors (mass) of stars in clusters of known age are measured. Data has been accumulated from several clusters younger than one billion years (Gyr) of age and one cluster with an age of 2.5 Gyr. Another data point on the surface is from the Sun with an age of 4.56 Gyr and a rotation period of 25 days. Using these results, the ages of a large number of cool galactic field stars can be derived with 10% precision.
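The shape of this surface can be illustrated with the empirical colour-period-age relation of Barnes (2007), P = t**n * a*(B-V - c)**b, with P in days and t in Myr. A sketch follows; the coefficient values (n = 0.5189, a = 0.7725, b = 0.601, c = 0.4) are quoted from that calibration, and the code should be treated as illustrative rather than as a calibrated tool.

```python
# Illustrative gyrochronology age estimate from the Barnes (2007) relation
# P = t**n * a*(B-V - c)**b, with P in days and t in Myr. Coefficients are
# quoted from that paper; this is a sketch, not a calibrated tool.

def gyro_age_myr(period_days: float, b_minus_v: float,
                 n: float = 0.5189, a: float = 0.7725,
                 b: float = 0.601, c: float = 0.4) -> float:
    f = a * (b_minus_v - c) ** b           # colour (mass) dependence f(B-V)
    return (period_days / f) ** (1.0 / n)  # invert g(t) = t**n for the age

# Sanity check with the Sun (P ~ 25 d, B-V = 0.66): roughly 4,000 Myr,
# in the neighbourhood of the solar age of 4,560 Myr.
print(round(gyro_age_myr(25.4, 0.66)))
```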
Magnetic stellar wind braking increases the rotation period of the star and is important in stars with convective envelopes. Stars with a color index greater than about (B-V) = 0.47 mag (the Sun's color index is 0.66 mag) have convective envelopes, while more massive stars have radiative envelopes. These lower-mass stars also spend a considerable amount of time on a pre-main-sequence Hayashi track, where they are nearly fully convective.
See also
Nucleocosmochronology
References
Further reading
Concepts in astrophysics
Space science | Gyrochronology | [
"Physics",
"Astronomy"
] | 691 | [
"Space science",
"Outer space",
"Concepts in astrophysics",
"Astrophysics"
] |
11,977,778 | https://en.wikipedia.org/wiki/French%20Bronze | French Bronze is a form of bronze typically consisting of 91% copper, 2% tin, 6% zinc, and 1% lead.
Other uses
The term French bronze was also used in connection with cheap zinc statuettes and other articles, which were finished to resemble real bronze, and some older texts call the faux-bronze finish itself "French bronze". Its composition was typically 5 parts hematite powder to 8 parts lead oxide, formed into a paste with spirits of wine. Variations in tint could be obtained by varying the proportions. The preparation was applied to the article to be bronzed with a soft brush, then polished with a hard brush after it had dried.
Notes
Bronze
Brass
Copper alloys | French Bronze | [
"Chemistry"
] | 142 | [
"Alloys",
"Copper alloys"
] |
11,978,154 | https://en.wikipedia.org/wiki/Journal%20of%20Biomolecular%20NMR | The Journal of Biomolecular NMR publishes research on technical developments and innovative applications of nuclear magnetic resonance spectroscopy for the study of structure and dynamic properties of biopolymers in solution, liquid crystals, solids and mixed environments. Some of the main topics include experimental and computational approaches for the determination of three-dimensional structures of proteins and nucleic acids, advancements in the automated analysis of NMR spectra, and new methods to probe and interpret molecular motions.
The journal was founded in 1991 by Kurt Wüthrich, who later received the Nobel Prize in Chemistry in 2002 for his seminal contributions to the field of NMR. The current editor-in-chief is Gerhard Wagner (Harvard Medical School).
According to the Journal Citation Reports, the journal has a 2020 impact factor of 2.835.
Associate Editors
Alongside editor-in-chief Gerhard Wagner, the associate editors of the Journal of Biomolecular NMR are:
Ad Bax (NIH, USA)
Martin Billeter (Göteborg University, Sweden)
Lewis E. Kay (University of Toronto, Canada)
Rob Kaptein (Utrecht University, The Netherlands)
Gottfried Otting (Australian National University, Australia)
Arthur G. Palmer (Columbia University, USA)
Tatyana Polenova (University of Delaware, USA), and
Bernd Reif (TU Munich, Germany)
Most cited articles
According to the Web of Science, as of August 2018, there are seven Journal of Biomolecular NMR articles with over 1,500 citations:
– cited 9,252 times.
– cited 3,527 times.
– cited 3,199 times.
– cited 2,540 times.
– cited 2,288 times.
– cited 1,781 times.
– cited 1,723 times.
References
External links
Biochemistry journals
Physics journals
Springer Science+Business Media academic journals
English-language journals
Academic journals established in 1991
Monthly journals | Journal of Biomolecular NMR | [
"Chemistry"
] | 389 | [
"Biochemistry journals",
"Biochemistry literature"
] |
11,978,476 | https://en.wikipedia.org/wiki/Genetic%20stock%20center | Genetic stock centers are collections of pure genetic stock available for use in research. They are often housed at research universities, and include everything from single cell life to plants, fish, and small mammals such as mice and rats. Genetic Stock Centers often charge for research stock on a two tier scale, with non profit researchers getting stock at a lower cost than commercial researchers. Dr Myron Gordon, for example, established the Xiphophorus genetic stock center in 1939 to raise pure strains when he realized that certain Xiphophorus hybrids would be useful in cancer research. He understood that his research could not be duplicated by other scientists without pure genetic stock to use as a base. The strains that Dr Gordon started remain pure and are used to this day.
References
External links
MGSC web site
Xiphophorus Genetic Stock Center
Yale CGSC
USC collection of field mice
Biobanks | Genetic stock center | [
"Biology"
] | 181 | [
"Bioinformatics",
"Biobanks"
] |
11,978,989 | https://en.wikipedia.org/wiki/Telerate | Telerate was a US company providing financial data to market participants, specialising in commercial paper and bond prices. It was a pioneer in the electronic distribution of real-time market information in the 1970s. With its main innovation being to extend the technology that was used to obtain live stock prices, via Telequote, Quotron or Stockmaster to other sectors of the financial industry, such as corporate debt, currencies, interest rates and commodities.
The company was founded by Neil Hirsch and became a major provider of market data through the 1970s and 1980s. It was bought by Dow Jones & Company in 1990, but the hedonistic lifestyle of its founder and senior managers clashed with the strait-laced culture of Dow Jones, causing friction within the company. Dow Jones's aim was to use Telerate to compete against the market-dominant Reuters; however, Dow Jones lost focus and the business was eventually consigned to a backwater. It was sold a number of times and renamed Bridge Telerate and later Moneyline Telerate.
Reuters eventually bought the remains of Telerate in 2005, marking the end of the company as Reuters absorbed the business into its own market data unit. Telerate also lost numerous customers in the process: many clients had chosen Telerate as an alternative to Reuters and were not happy to have those products now under Reuters' roof, while some customers who had enjoyed advantageous deals from Telerate found that Reuters was unwilling to renew them on those terms.
History
Early years
The company was founded in 1969 by Neil Hirsch, a 21-year-old who had been hired by the U.S. broker Merrill Lynch, with $2 million of venture capital.
Neil Hirsch later attracted new investors, including Bernie Cantor, owner of the government securities broker Cantor Fitzgerald. The company saw strong growth because of its innovative technology and relatively low costs compared to its main rivals. However, the success and new wealth allowed Neil Hirsch to indulge in what was described by Telerate insider John Jessop as "a hedonistic lifestyle that involved drugs and alcohol in quantities that some observers saw as life-threatening".
Co-owner Bernie Cantor attracted much early controversy by using Telerate as a vehicle for advertising his company's trading prices, the first broker to do so, angering many of its customers, including such Wall Street giants as Merrill Lynch, Bankers Trust and Chemical Bank.
By 1971, the company was prepared for an IPO, but before that was completed it was approached by the bond broker Cantor Fitzgerald, which took a 25% share of its capital in 1972.
By the mid-1970s Telerate had a monopoly on information on the price of U.S. Treasury bonds, and in 1977 the company made a profit of $1 million. Cantor Fitzgerald increased its stake to 70%. That same year, Telerate entered into an alliance with the Associated Press and Dow Jones & Company to create a joint venture called AP-Dow Jones.
In 1981, as Telerate addressed the market for financial information internationally, it faced strong competition from the market-dominant Reuters. Cantor Fitzgerald decided to sell its 89% stake, which went to the British investment group Exco International for $75 million, with the rest of the capital remaining in the hands of company management. That year Telerate made a net profit of $13.6 million. Customers typically paid $540–$700 per month for each terminal; 8,000 terminals were installed in North America, plus an additional 2,500 in 21 countries.
Association with Dow Jones
In the spring of 1983, three months after its IPO, Telerate created a subsidiary called AP-Dow Jones Telerate Co. for its international activities, in which it held 49.9% while the Associated Press and Dow Jones & Company each held 25.05%. The company's main competitor, Reuters, grew in popularity, and Telerate's share price collapsed in the autumn of 1984. Neil S. Hirsch complained that the company was undervalued. The company came under pressure to launch "Telerate II", software that could run on IBM PCs.
Dow Jones & Company and the Associated Press developed an integrated service that could be delivered over Telerate and Quotron technology to rival Reuters' services. Within the financial community, "club" Reuters opposed "club" Telerate, and the rivalry developed into a technology race between the two camps.
Dow Jones & Company acquired a 32% stake in 1985 for $285 million, valuing Telerate at $800 million, then invested a further $415 million to reach 56% in September 1987, just before the stock market crash of October 1987. Despite this, Dow Jones continued to invest in the business, putting in another $148 million the next year and taking its share to 67%. Telerate then launched the Matrix system in response to Reuters' "Advanced Reuters Terminal" (ART) service.
The needs of traders and portfolio managers were, however, neglected by both Reuters and Dow Jones Telerate, which allowed a new niche financial data provider, Bloomberg, to start taking market share with its Bloomberg terminal. Within Dow Jones, Telerate was gradually marginalized and its services were eventually integrated with those of Dow Jones Newswire.
End of the company
In 1998, Bridge Information Systems, then the fourth-largest provider of market information services behind Reuters, Dow Jones, and Bloomberg, agreed to buy the troubled Telerate business from Dow Jones for $510 million. The Dow Jones board had urged the sale, despite taking a significant loss, due to what it perceived as insurmountable competition from its two biggest rivals, Reuters and Bloomberg, particularly as Telerate lacked the more complex historical pricing information and other analytical software that investors were looking for. Bridge Information Systems faced competition in the acquisition from Cantor Fitzgerald, which was interested in re-acquiring an interest in the business. However, the sale was completed with Bridge, and the business was renamed Bridge Telerate.
In 2001, Bridge Telerate was sold to MoneyLine Network for just $10 million as part of the Bridge Information Systems bankruptcy proceedings. As part of the deal, MoneyLine reached an agreement with Reuters for the collection and aggregation of market data and other services over a three-to-four-year transition period, as well as an agreement with SAVVIS Communications Corporation for network services, so that it could continue to offer Telerate services. The business was renamed MoneyLine Telerate. The relationship with Reuters was troublesome, however, and led to a major dispute in 2003, when Reuters threatened to cut Telerate's data feeds; the cutoff was only narrowly avoided.
The business continued to decline, and by 2005 the company was no longer publicly traded, being majority-owned by One Equity Partners, the domestic venture capital arm of JPMorgan Chase. In June of that year, One Equity Partners sold the remains of the Moneyline Telerate business to Reuters for approximately $175 million. This marked the end of the Telerate brand, as Reuters absorbed the business into its own market data unit.
References
American companies established in 1969
American companies disestablished in 2005
Financial services companies established in 1969
Financial services companies disestablished in 2005
1990 mergers and acquisitions
2005 mergers and acquisitions
Financial data vendors
Market data
Electronic trading systems
Companies based in New York (state) | Telerate | [
"Technology"
] | 1,449 | [
"Market data",
"Data"
] |
11,979,055 | https://en.wikipedia.org/wiki/Banana%20peel | A banana peel, called banana skin in British English, is the outer covering of a banana. Banana peels are used as food for animals, an ingredient in cooking, in water purification, for manufacturing of several biochemical products as well as for jokes and comical situations.
There are several methods to remove a peel from a banana.
Use
Bananas are a popular fruit consumed worldwide, with yearly production of over 165 million tonnes in 2011. Once the peel is removed, the fruit can be eaten raw or cooked, and the peel is generally discarded; this discarding generates a significant amount of organic waste.
Banana peels are sometimes used as feedstock for cattle, goats, pigs, monkeys, poultry, rabbits, fish, zebras and several other species, typically on small farms in regions where bananas are grown. There are some concerns over the impact of tannins contained in the peels on animals that consume them.
The nutritional value of banana peel depends on the stage of maturity and the cultivar; for example, plantain peels contain less fibre than dessert banana peels, and lignin content increases with ripening (from 7 to 15% of dry matter). On average, banana peels contain 6-9% protein and 20-30% fibre (measured as NDF) on a dry matter basis. Green plantain peels contain 40% starch, which is transformed into sugars after ripening; green banana peels contain much less starch (about 15%), while ripe banana peels contain up to 30% free sugars.
Banana peels are also used for water purification, to produce ethanol, cellulase, laccase, as fertilizer and in composting.
Culinary use
Cooking with banana peel is common in Southeast Asian, Indian and Venezuelan cuisine where the peel of bananas and plantains is used in recipes. In April 2019, a vegan pulled pork recipe using banana peel by food blogger Melissa Copeland aka The Stingy Vegan went viral. In 2020, The Great British Bake Off winner Nadiya Hussain revealed she uses banana peels as an alternative to pulled pork when making burgers in an effort to reduce food waste. Later that year television chef Nigella Lawson used banana skin as an ingredient for a curry on her BBC show.
In comical context
Banana peel is also part of the classic physical comedy slapstick visual gag, the "slipping on a banana peel". This gag was already seen as classic in 1920s America. It can be traced to the late 19th century, when banana peel waste was considered a public hazard in a number of American towns. Although banana peel-slipping jokes date to at least 1854, they became much more popular, beginning in the late-1860s, when the large-scale importation of bananas made them more readily available. Vaudeville comedian Cal Stewart included banana peel jokes in one of the earliest comedy albums, Uncle Josh in a Department Store in 1903. Before banana peels were readily available, orange peels and other fruit skins were recognized as slippery and used in jokes, but not to the extent of banana peels. Slipping on a banana peel was at one point a real concern with municipal ordinances governing the disposal of the peel.
The coefficient of friction of banana peel on a linoleum surface was measured at just 0.07, about half that of lubricated metal on metal. Researchers attribute this to the crushing of the natural polysaccharide follicular gel, releasing a homogenous sol. This finding was awarded the 2014 Ig Nobel Prize for physics.
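For scale, a friction coefficient can be converted into the slope at which sliding begins using the inclined-plane relation tan θ = μ. The sketch below assumes the simple Coulomb friction model, and the shoe-sole value is an illustrative assumption for comparison.

```python
# What a friction coefficient of 0.07 implies under the Coulomb model:
# a block on a ramp starts to slide once tan(theta) exceeds mu.
import math

cases = [
    ("banana peel on linoleum", 0.07),          # value reported in the study above
    ("typical shoe sole (illustrative)", 0.5),  # assumed comparison value
]
for label, mu in cases:
    theta = math.degrees(math.atan(mu))
    print(f"{label}: slides beyond about {theta:.1f} degrees of slope")
```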
MythBusters later tested the gag to see whether it is as effective as portrayed in media. The results were labeled "busted": slipping on a banana peel every time one is trodden upon is an exaggeration of fiction, and much more specific circumstances are needed for the scenario to occur reliably in reality.
Peeling methods
Most people peel a banana by cutting or snapping the stem and dividing the peel into sections while pulling them away from the bared fruit. Another way of peeling a banana works in the opposite direction, starting from the end with the brownish floral residue, a way usually perceived as "upside down".
When the tip of a banana is pinched with two fingers, it will split and the peel comes off in two clean sections. The inner fibres, or "strings", between the fruit and the peel will remain attached to the peel and the stem of the banana can be used as a handle when eating the banana.
Psychoactive effects of banana peels
There has been a widespread belief that banana peels contain a psychoactive substance, and that smoking them may produce a "high", or a sense of relaxation. This belief, which may be a rumor or urban legend, is often associated with the 1966 song "Mellow Yellow" by Donovan. A recipe for the extraction of the fictional chemical bananadine is found in The Anarchist Cookbook of 1971.
References
External links
The Funniest Fruit: A Brief History of Banana Humor
How Could a Banana Peel Cause You to Slip Up?
Why hippies thought smoking banana peels could get you high
How To Use Banana Peels In The Garden
Bananas
Biological waste
Bananas in popular culture | Banana peel | [
"Biology"
] | 1,064 | [
"nan"
] |
11,979,169 | https://en.wikipedia.org/wiki/National%20Transportation%20Communications%20for%20Intelligent%20Transportation%20System%20Protocol | The National Transportation Communications for Intelligent Transportation System Protocol (NTCIP) is a family of standards designed to achieve interoperability and interchangeability between computers and electronic traffic control equipment from different manufacturers.
NTCIP has been around for over 20 years but is increasingly used in smart city initiatives and by technology suppliers. For example, riders who want to know when the next bus will arrive at their stop use apps built on NTCIP, such as in the Siemens initiatives in Seattle and elsewhere. In the future, NTCIP will be used for two-way communication between vehicles and traffic signals, such as the ability for buses to control traffic lights, as done by SinWaves.
The protocol is the product of a joint standardization project guided by the Joint Committee on the NTCIP, which is composed of six representatives each from the National Electrical Manufacturers Association (NEMA), the American Association of State Highway and Transportation Officials (AASHTO), and the Institute of Transportation Engineers (ITE). The Joint Committee has in turn formed 14 technical working groups to develop and maintain the standards, and has initiated or produced over 50 standards and information reports.
The project receives funding under a contract with the United States Department of Transportation (USDOT) and is part of a wider effort to develop a comprehensive family of intelligent transportation system (ITS) standards.
History of the NTCIP Development
NEMA initiated the development of the NTCIP in 1992. In early 1993, the US Federal Highway Administration (FHWA) brought together transportation industry representatives to discuss obstacles to installing field equipment for new Intelligent Transportation Systems (ITS). The representatives said that the number one priority was the need for an industry-wide standard data communications protocol. Since the NEMA Transportation Section members had already started work on a new industry standard, they offered to expedite and expand the scope of their activities.
The key objectives of the new NTCIP protocol were the interchangeability of similar roadside devices, and the interoperability of different types of devices on the same communications channel.
In 1996, the FHWA suggested a partnership of standards developing organizations to expand both user and industry involvement. AASHTO and ITE signed an agreement with NEMA to establish the Joint Committee on the NTCIP, and to work together on developing and maintaining the NTCIP standards.
NTCIP Communications Standards
Center to Field Device Communications
NTCIP has enabled center-to-field communication and command/control of equipment from different manufacturers to be specified, procured, deployed, and tested. NTCIP communications standards for field devices are listed below (the corresponding NTCIP document number is shown in parentheses):
Traffic signals (NTCIP 1202)
Dynamic message signs (NTCIP 1203)
Environmental sensor stations (NTCIP 1204)
Closed circuit television cameras (NTCIP 1205)
Vehicle count stations (NTCIP 1206)
Freeway ramp meters (NTCIP 1207)
Video switches (NTCIP 1208)
Transportation sensor systems (NTCIP 1209)
Field master stations for traffic signals (NTCIP 1210)
Transit priority at traffic signals (NTCIP 1211)
Street lights (NTCIP 1213)
Center to Center Communications
Center to center (C2C) communication involves peer-to-peer communications between computers involved in information exchange in real-time transportation management in a many-to-many network. This type of communication is similar to the Internet, in that any center can request information from, or provide information to, any number of other centers.
An example of center to center communications is two traffic management centers that exchange real-time information about the inventory and status of traffic control devices. This allows each center system to know what timing plan, for example, the other center system is running to allow traffic signal coordination across center geographic boundaries. Other examples of this type of communication include:
Two or more traffic signal systems exchanging information (including second-by-second status changes) to achieve coordinated operation of traffic signals managed by the different systems and to enable personnel at one center to monitor the status of signals operated from another center;
A transit system reporting schedule adherence exceptions to a transit customer information system and to a regional traveler information system, while also asking a traffic signal management system to instruct its signals to give priority to a behind-schedule transit vehicle;
An emergency management system reporting an incident to a freeway management system, to a traffic signal management system, to two transit management systems and to a traveler information system;
A freeway management system informing an emergency management system of a warning message just posted on a dynamic message sign on the freeway in response to its notification of an incident; and
A weather monitoring system (environmental sensors) informing a freeway management system of ice forming on the roadway so that the freeway management system is able to post warning messages on dynamic message signs as appropriate.
NTCIP communications standards for center-to-center communications are listed below (the corresponding NTCIP document number is shown in parentheses):
Data Exchange - DATEX-ASN (NTCIP 2304)
Web Services - XML (NTCIP 2306)
The NTCIP has coordinated with other information level standards development organizations during development of the center-to-center application profiles and supports the: ITE Traffic Management Data Dictionary (ITE TMDD), IEEE 1512 Incident Management (IEEE 1512), APTA Transit Communications Interface Profiles (APTA TCIP), and SAE J2354 Advanced Traveler Information Systems standards.
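Under the XML profile (NTCIP 2306), a center-to-center request travels as an XML document over web services. The hedged sketch below shows the general pattern; the endpoint URL and the element names are hypothetical placeholders, since real message sets come from information-level standards such as the ITE TMDD.

```python
# Hypothetical center-to-center request over an NTCIP 2306-style web service.
# The endpoint and the XML element names are illustrative placeholders only.
import urllib.request

ENDPOINT = "http://tmc.example.org/ntcip-c2c"  # hypothetical peer center

request_xml = """<?xml version="1.0" encoding="UTF-8"?>
<deviceInformationRequest>
  <deviceType>dynamic message sign</deviceType>
  <informationType>device status</informationType>
</deviceInformationRequest>"""

req = urllib.request.Request(
    ENDPOINT,
    data=request_xml.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8"},
)
with urllib.request.urlopen(req) as resp:  # would need a live peer center
    print(resp.read().decode("utf-8"))
```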
NTCIP Standards Framework
The NTCIP Framework is based primarily on the open standards of the Internet Engineering Task Force (IETF), World Wide Web Consortium (W3C), and ISO, plus NTCIP data dictionary standards specific for the task of ITS device communications. A layered, or modular, approach to communications standards, is used to represent data communications between two computers or other electronic devices.
NTCIP refers to "levels" rather than "layers" to distinguish its hierarchical architecture from those defined by the Open System Interconnection Reference Model (OSI Model) of ISO and by the Internet Engineering Task Force (IETF). The five NTCIP levels are the information, application, transport, subnetwork, and plant levels.
The NTCIP Framework comprises the Information, Application, Transport, Subnetwork, and Plant Levels, described below.
To ensure a working system, deployers should select and specify at least one NTCIP protocol or profile at each level. A discussion of each level, and NTCIP standards that apply at that level, follows:
NTCIP Information Level — Information standards define the meaning of data and messages and generally deal with ITS information (rather than information about the communications network). This is similar to defining a dictionary and phrase list within a language. These standards are above the traditional ISO seven-layer OSI model. Information level standards represent the functionality of the system to be implemented.
NTCIP Application Level — Application standards define the rules and procedures for exchanging information data. The rules may include definitions of proper grammar and syntax of a single statement, as well as the sequence of allowed statements. This is similar to combining words and phrases to form a sentence, or a complete thought, and defining the rules for greeting each other and exchanging information. These standards are roughly equivalent to the Session, Presentation and Application Layers of the OSI model.
NTCIP Transport Level — Transport standards define the rules and procedures for exchanging the Application data between point 'A' and point 'X' on a network, including any necessary routing, message disassembly/re-assembly and network management functions. This is similar to the rules and procedures used by the telephone company to connect two remotely located telephones. Transportation level standards are roughly equivalent to the Transport and Network Layers of the OSI model.
NTCIP Subnetwork Level — Subnetwork standards define the rules and procedures for exchanging data between two 'adjacent' devices over some communications media. This is equivalent to the rules used by the telephone company to exchange data over a cellular link versus the rules used to exchange data over a twisted pair copper wire. These standards are roughly equivalent to the Data Link and Physical Layers of the OSI model.
NTCIP Plant Level — The Plant Level is shown in the NTCIP Framework only as a means of providing a point of reference to those learning about NTCIP. The Plant Level includes the communications infrastructure over which NTCIP communications standards are to be used and has a direct impact on the selection of an appropriate Subnetwork Level for use over the selected communications infrastructure. The NTCIP standards do not prescribe any one media type over another. In most cases, communications media selections are made early in the design phase.
The NTCIP Framework does not preclude combinations beyond those expressly indicated on the diagram.
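In practice, NTCIP center-to-field communication is typically carried over SNMP at the Application Level, so a generic SNMP library can poll a field device. Below is a hedged sketch using pysnmp, assuming a hypothetical device address; the OID is a placeholder under the NEMA enterprise tree, and the exact object identifiers must be taken from the relevant NTCIP data dictionary (e.g., NTCIP 1203 for dynamic message signs).

```python
# Hedged sketch: SNMP GET against an NTCIP field device using pysnmp.
# The device address and OID below are illustrative placeholders.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

DEVICE_IP = "192.0.2.10"  # hypothetical sign controller
ILLUSTRATIVE_OID = "1.3.6.1.4.1.1206.4.2.3.5.8.1.3.1.1"  # placeholder OID

error_indication, error_status, error_index, var_binds = next(
    getCmd(SnmpEngine(),
           CommunityData("public", mpModel=0),      # SNMPv1 community
           UdpTransportTarget((DEVICE_IP, 161)),
           ContextData(),
           ObjectType(ObjectIdentity(ILLUSTRATIVE_OID)))
)

if error_indication:
    print("request failed:", error_indication)
else:
    for name, value in var_binds:
        print(name.prettyPrint(), "=", value.prettyPrint())
```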
References
External links
NTCIP (Official website)
The NTCIP Guide (NTCIP 9001 v04)
Testing Guide for NTCIP Center-to-Field Communications (NTCIP 9012 v01)
US Department of Transportation ITS Standards Program (USDOT)
National Electrical Manufacturers Association (NEMA)
American Association of State Highway and Transportation Officials (AASHTO)
Institute of Transportation Engineers (ITE)
Intelligent transportation systems
Application layer protocols
Open standards
Traffic signals | National Transportation Communications for Intelligent Transportation System Protocol | [
"Technology"
] | 1,914 | [
"Warning systems",
"Intelligent transportation systems",
"Information systems",
"Transport systems"
] |
11,979,828 | https://en.wikipedia.org/wiki/James%20J.%20Kay | James J. Kay (June 18, 1954 – May 30, 2004) was an ecological scientist and policy-maker. He was a respected physicist best known for his theoretical work on complexity and thermodynamics.
Biography
James Kay held a BS in physics from McGill University and a Ph.D. in systems design engineering from the University of Waterloo. His Ph.D. thesis was entitled Self-Organization in Living Systems. Much of his work relates to integrating thermodynamics into an understanding of self-organization in biological systems. For example, when water in a pot is heated, it will spontaneously form convection currents such as Bénard cells. This is an example where, as the amount of energy available to a system increases, the system self-organizes to dissipate energy more efficiently. Kay examined how similar types of self-organization can occur within living systems at the level of individual organisms and ecosystems; in other words, organisms and ecosystems evolve to use the maximum amount of energy available to them. This has been backed up by studies showing that more mature ecosystems, such as old-growth forests, are cooler (i.e., dissipate more incoming energy) than clear-cuts or bare rock receiving the same amount of energy.
Kay was an associate professor of environment and resource studies at the University of Waterloo, with cross-appointments in systems design engineering, geography, management sciences, and the School of Planning. He was also cross-posted to the School of Rural Planning and Development at the University of Guelph.
Public Policy
Local
Kay was founding chair of the University of Waterloo's Greening the Campus Committee (1990–1996), which is responsible for overseeing the transition to a sustainable campus. He was also a founding member of the City of Kitchener's Environment Committee, which developed a Strategic Plan for the Environment and an ecosystem-based plan for the Huron Natural Area. He sat on the committee which developed the award-winning (Canadian Institute of Planners) bicycle master plan for Kitchener, and was on the city's committee for the transition to a hydrogen economy.
Provincial and National
Kay served as an adviser to the Ontario Ministry of the Environment and delivered guest lectures to the National Ministry of the Environment. He served on the Long Term Ecosystem Research and Monitoring Panel of the Royal Society of Canada.
International
Kay was a member of the Royal Swedish Academy of Sciences, Beijer Institute, Working Group on Complex Ecological Economic Systems Modeling. He was also an active member of the United States National Science Foundation Advisory Committee on Environmental Research and Education.
Publications
Waltner-Toews, D., Kay, J.J., and Lister, N. "The Ecosystem Approach: Complexity, Uncertainty, and Managing for Sustainability" for the Columbia University press series: Complexity in Ecological Systems. New York: Columbia University Press, 2008.
Manuel-Navarrete, D., Kay J.J., and Dolderman D. 2004. "Ecological integrity discourses: linking ecology with cultural transformation." Human Ecology Review 11.3: 215–229.
Murray, T., Kay, J., Waltner-Toews, D., Raez-Luna, E.; 2002. "Linking Human and Ecosystem Health on the Amazon Frontier: An Adaptive Ecosystem Approach", in Aguirre, A. A., R. S. Ostfeld, C. A. House, G. M. Tabor and M. C. Pearl (eds.), Conservation Medicine: Ecological Health in Practice, Oxford University Press. (Chapter 23)
Waltner-Toews D., Kay, J., 2002. "An Ecosystem Approach to Health", LEISA, 18:1, March 2002.
Kay, J., 2002, "On Complexity Theory, Exergy and Industrial Ecology: Some Implications for Construction Ecology" in Kibert, C., Sendzimir, J. (eds), Guy, B., Construction Ecology: Nature as a Basis for Green Buildings, Spon Press, pp. 72–107.
Boyle, M., Kay. J., and Pond, B., 2001. Monitoring in Support of Policy: an Adaptive Ecosystem Approach, in Munn, T., (editor in chief), Vol.4 Encyclopedia of Global Environmental Change, London, John Wiley and Son. pp. 116–137.
Regier, H.A., Kay, J.J., 2001. "Phase Shifts Or Flip-Flops In Complex Systems", in Munn, R., editor in chief. Vol. 5, Encyclopedia of global environmental change. London: Wiley; 2001; pp. 422–429.
Kay, J, 2001. "Ecosystems, Science and Sustainability", in Ulgiati, S., Brown, M.T., Giampietro, M., Herendeen, R., Mayumi, K., (eds) Proceedings of the international workshop: Advances in Energy Studies: exploring supplies, constraints and strategies, Porto Venere, Italy, 23–27 May 2000 pp. 319–328
Kay, J, Allen, T., Fraser, R., Luvall, J., Ulanowicz, R., 2001. "Can we use energy based indicators to characterize and measure the status of ecosystems, human, disturbed and natural?" in Ulgiati, S., Brown, M.T., Giampietro, M., Herendeen, R., Mayumi, K., (eds) Proceedings of the international workshop: Advances in Energy Studies: exploring supplies, constraints and strategies, Porto Venere, Italy, 23–27 May 2000 pp 121–133.
Kay. J., Regier, H., 2000. "Uncertainty, Complexity, And Ecological Integrity: Insights from an Ecosystem Approach", in P. Crabbe, A. Holland, L. Ryszkowski and L. Westra (eds), Implementing Ecological Integrity: Restoring Regional and Global Environmental and Human Health, Kluwer, NATO Science Series, Environmental Security pp. 121–156.
Kay. J. 2000. "Ecosystems as Self-organizing Holarchic Open Systems : Narratives and the Second Law of Thermodynamics" in Sven Erik Jorgensen, Felix Muller (eds), Handbook of Ecosystems Theories and Management, CRC Press – Lewis Publishers. pp 135–160
Waltner-Toews, D., Murray, T., Kay, J., Gitau, T., Raez-Luna, E., McDermot, J., 2000, "One Assumption, Two Observations, Some Guiding Questions and a Process for the Investigation and Practice of Agroecosystem Health", in Jabbar, M.A., Peden, D.G., Saleem, M., Li Pub, H. (eds), Agro-ecosystems, natural resources management and human health related research in East Africa: Proceedings of an IDRC-ILRI international workshop held at ILRI, Addis Ababa, Ethiopia, 11–15 May 1998. Published by International Livestock Research Institute, Nairobi. pp. 7–14
Lister, N., Kay, J.J., 1999, "Celebrating Diversity: Adaptive Planning and Biodiversity Conservation", in S. Bocking (ed), Biodiversity in Canada: An Introduction to Environmental Studies, Broadview Press. pp. 189–218.
Kay. J., Regier, H., Boyle, M. and Francis, G. 1999. "An Ecosystem Approach for Sustainability: Addressing the Challenge of Complexity" Futures Vol 31, #7, Sept. 1999, pp. 721–742.
Kay, J.J., Foster, J., 1999, "About Teaching Systems Thinking" in Savage, G., Roe, P. (eds), Proceedings of the HKK conference, 14–16 June 1999, University of Waterloo, Ontario, pp. 165–172
Kay. J., Regier, H., 1999. "An Ecosystem Approach to Erie's Ecology" in M. Munawar, T.Edsall, I.F. Munawar, (eds), International Symposium. The State of Lake Erie (SOLE) - Past, Present and Future. A tribute to Drs. Joe Leach & Henry Regier, Backhuys Academic Publishers, Netherlands, pp. 511–533
Regier, H.A., Kay, J.J., 1996 "An Heuristic Model of Transformations of the Aquatic Ecosystems of the Great Lakes-St. Lawrence River Basin", Journal of Aquatic Ecosystem Health, Vol. 5: pp. 3–21
Schneider, E.D, Kay, J.J., 1995, "Order from Disorder: The Thermodynamics of Complexity in Biology", in Michael P. Murphy, Luke A.J. O'Neill (ed), "What is Life: The Next Fifty Years. Reflections on the Future of Biology", Cambridge University Press, pp. 161–172
Schneider, E.D, Kay, J.J., 1994 "Complexity and Thermodynamics: Towards a New Ecology", Futures 24 (6) pp. 626–647, August 1994
Kay, J., Schneider, E.D., 1994, "Embracing Complexity, The Challenge of the Ecosystem Approach", Alternatives Vol 20 No.3 pp. 32–38
Schneider, E.D, Kay, J.J., 1994, "Life as a Manifestation of the Second Law of Thermodynamics", Mathematical and Computer Modelling, Vol 19, No. 6-8, pp. 25–48. Also available in pdf format. Included in Readings in Ecology (Oxford University Press, 1999).
Tobias, T., Kay, J.J., 1994, "The Bush Harvest in the Northern Village of Pinehouse" Arctic Vol 47, No. 3. pp. 207–221.
Kay, J.J., 1993, "On the Nature of Ecological Integrity: Some Closing Comments" in S. Woodley, J. Kay, G. Francis (Eds.), 1993. Ecological Integrity and the Management of Ecosystems, St. Lucie Press, Delray, Florida, pp. 201–212.
Schneider, E.D, Kay, J.J., 1993, "Exergy Degradation, Thermodynamics, and the Development of Ecosystems" in Tsatsaronis G., Szargut, J., Kolenda, Z., Ziebik, Z.,(eds) Energy, Systems, and Ecology, Volume 1, Proceedings of ENSEC' 93, July 5–9 Cracow, Poland., pp. 33–42.
Kay, J.J., Schneider, E.D., 1992. "Thermodynamics and Measures of Ecosystem Integrity" in Ecological Indicators, Volume 1, D.H. McKenzie, D.E. Hyatt, V.J. Mc Donald (eds.), Proceedings of the International Symposium on Ecological Indicators, Fort Lauderdale, Florida, Elsevier, pp. 159–182.
Kay, J.J., 1991. "A Non-equilibrium Thermodynamic Framework for Discussing Ecosystem Integrity", Environmental Management, Vol 15, No.4, pp. 483–495
Kay, J.J., L. Graham, R.E., Ulanowicz. 1989. "A Detailed Guide to Network Analysis" in Network Analysis in Marine Ecology: Methods and Applications, F. Wulff, J. Field, K. Mann (Eds.), pp. 15–61, Springer-Verlag.
See also
List of University of Waterloo people
References
Luvall, J.C.; Holbo, H.R. (1989). "Measurements of short-term thermal responses of coniferous forest canopies using thermal scanner data". Remote Sensing of Environment 27: 1–10.
Holbo, H.R.; Luvall, J.C. (1989). "Modeling surface temperature distributions in forest landscapes". Remote Sensing of Environment 27: 11–24.
Canadian physicists
Canadian ecologists
Canadian systems scientists
Academic staff of the University of Waterloo
1954 births
2004 deaths
Industrial ecology
Theoretical physicists
McGill University Faculty of Science alumni
University of Waterloo alumni
Academic staff of the University of Guelph | James J. Kay | [
"Physics",
"Chemistry",
"Engineering"
] | 2,554 | [
"Theoretical physics",
"Industrial engineering",
"Environmental engineering",
"Industrial ecology",
"Theoretical physicists"
] |
11,980,877 | https://en.wikipedia.org/wiki/Loewe%20additivity | In toxicodynamics and pharmacodynamics, Loewe additivity (or dose additivity) is one of several common reference models used for measuring the effects of drug combinations.
Definition
Let $d_1$ and $d_2$ be doses of compounds 1 and 2 producing in combination an effect $e$. We denote by $D_{e1}$ and $D_{e2}$ the doses of compounds 1 and 2 required to produce effect $e$ alone (assuming these conditions uniquely define them, i.e. that the individual dose-response functions are bijective).
The ratio of the equally effective individual doses, $D_{e2}/D_{e1}$, quantifies the potency of compound 1 relative to that of compound 2.
The quantity $d_2\,D_{e1}/D_{e2}$ can be interpreted as the dose of compound 2 converted into the corresponding dose of compound 1, after accounting for the difference in potency.
Loewe additivity is defined as the situation where $D_{e1} = d_1 + d_2\,D_{e1}/D_{e2}$, or equivalently
$$\frac{d_1}{D_{e1}} + \frac{d_2}{D_{e2}} = 1.$$
Geometrically, Loewe additivity is the situation where the isoboles (contours of constant effect $e$) are straight segments joining the points $(D_{e1}, 0)$ and $(0, D_{e2})$ in the dose domain $(d_1, d_2)$.
If we denote by $f_1$, $f_2$ and $f_\mathrm{mix}$ the dose-response functions of compound 1, compound 2 and of the mixture respectively, then dose additivity holds when
$$\frac{d_1}{f_1^{-1}\left(f_\mathrm{mix}(d_1+d_2)\right)} + \frac{d_2}{f_2^{-1}\left(f_\mathrm{mix}(d_1+d_2)\right)} = 1.$$
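The definition can be made concrete with a short numerical sketch. The following Python fragment computes the Loewe interaction index $d_1/D_{e1} + d_2/D_{e2}$ under hypothetical Hill-type dose-response curves; the function names and all parameter values are illustrative assumptions, not data from any particular study.

```python
# Minimal sketch: Loewe interaction index for a dose pair, assuming
# hypothetical Hill-type dose-response curves for the two compounds.
# All parameters (Emax, EC50, Hill slope) are illustrative only.

def inverse_hill(effect, emax, ec50, h):
    """Dose of a single compound required to produce `effect` on its own,
    inverting the Hill curve effect = emax * d**h / (ec50**h + d**h)."""
    return ec50 * (effect / (emax - effect)) ** (1.0 / h)

def interaction_index(d1, d2, effect, params1, params2):
    """d1/D_e1 + d2/D_e2: equals 1 under Loewe additivity,
    < 1 suggests synergy, > 1 suggests antagonism (under this model)."""
    D1 = inverse_hill(effect, *params1)  # dose of compound 1 alone giving `effect`
    D2 = inverse_hill(effect, *params2)  # dose of compound 2 alone giving `effect`
    return d1 / D1 + d2 / D2

# Compound 1: Emax 1, EC50 1.0, slope 1; compound 2: Emax 1, EC50 4.0, slope 1.
# Suppose the combination (0.25, 1.0) is observed to produce effect 0.5.
idx = interaction_index(0.25, 1.0, 0.5, (1.0, 1.0, 1.0), (1.0, 4.0, 1.0))
print(round(idx, 3))  # 0.5 -> less than 1, i.e. synergistic under this reference model
```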
Testing
The Loewe additivity equation provides a prediction of the dose combination eliciting a given effect. Departure from Loewe additivity can be assessed informally by comparing this prediction to observations. In toxicology this comparison is summarised by the model deviation ratio (MDR), the ratio of the dose (or concentration) predicted under Loewe additivity to the dose actually observed to produce the effect.
The approach can be made more formal by deriving approximate p-values with Monte Carlo simulation, as implemented in the R package MDR.
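A minimal sketch of the MDR calculation for a fixed-ratio mixture is shown below; it computes only the ratio of the dose-additive prediction to a hypothetical observation and does not reproduce the Monte Carlo p-value procedure of the R package. All numbers are illustrative.

```python
# Minimal sketch of the model deviation ratio (MDR) for a fixed-ratio mixture:
# MDR = EC50 predicted under dose additivity / EC50 observed for the mixture.
# Values near 1 are consistent with additivity; the figures here are invented.

def loewe_predicted_ec50(proportions, ec50s):
    """EC50 of a mixture predicted under Loewe/dose additivity.
    proportions: fraction of each compound in the mixture (summing to 1)
    ec50s: EC50 of each compound applied on its own"""
    return 1.0 / sum(p / e for p, e in zip(proportions, ec50s))

predicted = loewe_predicted_ec50([0.5, 0.5], [2.0, 8.0])  # = 3.2
observed = 1.6                                            # hypothetical measurement
print(predicted / observed)  # 2.0 -> departure from additivity (possible synergy)
```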
References
Clinical pharmacology | Loewe additivity | [
"Chemistry"
] | 310 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs",
"Clinical pharmacology"
] |
11,981,306 | https://en.wikipedia.org/wiki/Vinyl%20bromide%20%28data%20page%29 | This page provides supplementary chemical data on vinyl bromide.
Material Safety Data Sheet
The handling of this chemical may require notable safety precautions. It is highly recommended that you seek the Material Safety Data Sheet (MSDS) for this chemical from a reliable source such as SIRI, and follow its directions.
Structure and properties
Thermodynamic properties
Spectral data
References
Chemical data pages
Chemical data pages cleanup | Vinyl bromide (data page) | [
"Chemistry"
] | 82 | [
"Chemical data pages",
"nan"
] |
11,981,336 | https://en.wikipedia.org/wiki/Control%20plane | In network routing, the control plane is the part of the router architecture that is concerned with establishing the network topology, or the information in a routing table that defines what to do with incoming packets. Control plane functions, such as participating in routing protocols, run in the architectural control element. In most cases, the routing table contains a list of destination addresses and the outgoing interface or interfaces associated with each. Control plane logic also can identify certain packets to be discarded, as well as preferential treatment of certain packets for which a high quality of service is defined by such mechanisms as differentiated services.
Depending on the specific router implementation, there may be a separate forwarding information base that is populated by the control plane, but used by the high-speed forwarding plane to look up packets and decide how to handle them.
In computing, the control plane is the part of the software that configures and shuts down the data plane. By contrast, the data plane is the part of the software that processes the data requests. The data plane is also sometimes referred to as the forwarding plane.
The distinction has proven useful in the networking field where it originated, as it separates the concerns: the data plane is optimized for speed of processing, and for simplicity and regularity. The control plane is optimized for customizability, handling policies, handling exceptional situations, and in general facilitating and simplifying the data plane processing.
The conceptual separation of the data plane from the control plane has been done for years. An early example is Unix, where the basic file operations open and close belong to the control plane, while read and write belong to the data plane.
Building the unicast routing table
A major function of the control plane is deciding which routes go into the main routing table. "Main" refers to the table that holds the unicast routes that are active. Multicast routing may require an additional routing table for multicast routes. Several routing protocols, e.g. IS-IS, OSPF and BGP, maintain internal databases of candidate routes, which are promoted when a route fails or when a routing policy is changed.
Several different information sources may provide information about a route to a given destination, but the router must select the "best" route to install into the routing table. In some cases, there may be multiple routes of equal "quality", and the router may install all of them and load-share across them.
Sources of routing information
There are three general sources of routing information:
Information on the status of directly connected hardware and software-defined interfaces
Manually configured static routes
Information from (dynamic) routing protocols
Local interface information
Routers forward traffic that enters on an input interface and leaves on an output interface, subject to filtering and other local rules. While routers usually forward from one physical (e.g., Ethernet, serial) to another physical interface, it is also possible to define multiple logical interfaces on a physical interface. A physical Ethernet interface, for example, can have logical interfaces in several virtual LANs defined by IEEE 802.1Q VLAN headers.
When an interface has an address configured in a subnet, such as 192.0.2.1 in the 192.0.2.0/24 (i.e., subnet mask 255.255.255.0) subnet, and that interface is considered "up" by the router, the router thus has a directly connected route to 192.0.2.0/24. If a routing protocol offered another router's route to that same subnet, the routing table installation software will normally ignore the dynamic route and prefer the directly connected route.
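The derivation of a connected route from an interface address, and its preference over a dynamic route to the same prefix, can be illustrated with a short sketch using Python's standard ipaddress module; the preference values and route records are hypothetical and not those of any particular vendor.

```python
# Sketch: an interface address of 192.0.2.1/24 implies a directly connected
# route to 192.0.2.0/24 (mask 255.255.255.0), and that route is preferred
# over a dynamic route to the same prefix. Preference values are invented.
from ipaddress import ip_interface

iface = ip_interface("192.0.2.1/24")
connected_prefix = iface.network
print(connected_prefix, connected_prefix.netmask)   # 192.0.2.0/24 255.255.255.0

candidate_routes = [
    {"prefix": connected_prefix, "source": "connected", "preference": 0},
    {"prefix": connected_prefix, "source": "ospf",      "preference": 110},
]
best = min(candidate_routes, key=lambda r: r["preference"])  # lower preference wins
print(best["source"])                                        # connected
```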
There also may be software-only interfaces on the router, which it treats as if they were locally connected. For example, most implementations have a "null" software-defined interface. Packets having this interface as a next hop will be discarded, which can be a very efficient way to filter traffic. Routers usually can route traffic faster than they can examine it and compare it to filters, so, if the criterion for discarding is the packet's destination address, "blackholing" the traffic will be more efficient than explicit filters.
Other software defined interfaces that are treated as directly connected, as long as they are active, are interfaces associated with tunneling protocols such as Generic Routing Encapsulation (GRE) or Multiprotocol Label Switching (MPLS). Loopback interfaces are virtual interfaces that are considered directly connected interfaces.
Static routes
Router configuration rules may contain static routes. A static route minimally has a destination address, a prefix length or subnet mask, and a definition where to send packets for the route. That definition can refer to a local interface on the router, or a next-hop address that could be on the far end of a subnet to which the router is connected. The next-hop address could also be on a subnet that is directly connected, and, before the router can determine if the static route is usable, it must do a recursive lookup of the next hop address in the local routing table. If the next-hop address is reachable, the static route is usable, but if the next-hop is unreachable, the route is ignored.
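The recursive next-hop lookup described above can be sketched as follows; the routing-table contents and the recursion depth limit are hypothetical and serve only to illustrate the idea.

```python
# Sketch: a static route is usable only if its next hop resolves, possibly
# through further static routes, to a directly connected subnet.
from ipaddress import ip_address, ip_network

routing_table = {
    ip_network("192.0.2.0/24"):    {"type": "connected", "interface": "eth0"},
    ip_network("198.51.100.0/24"): {"type": "static", "next_hop": ip_address("192.0.2.254")},
}

def next_hop_resolves(next_hop, table, depth=4):
    """True if the next hop falls within a connected subnet, following at
    most `depth` levels of recursion through other static routes."""
    if depth == 0:
        return False
    for prefix, route in table.items():
        if next_hop in prefix:
            if route["type"] == "connected":
                return True
            if route["type"] == "static":
                return next_hop_resolves(route["next_hop"], table, depth - 1)
    return False

print(next_hop_resolves(ip_address("192.0.2.254"), routing_table))  # True  -> static route usable
print(next_hop_resolves(ip_address("203.0.113.1"), routing_table))  # False -> route ignored
```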
Static routes also may have preference factors used to select the best static route to the same destination. One application is called a floating static route, where the static route is less preferred than a route from any routing protocol. The static route, which might use a dialup link or other slow medium, activates only when the dynamic routing protocol(s) cannot provide a route to the destination.
Static routes that are more preferred than any dynamic route also can be very useful, especially when using traffic engineering principles to make certain traffic go over a specific path with an engineered quality of service.
Dynamic routing protocols
See routing protocols. The routing table manager, according to implementation and configuration rules, may select a particular route or routes from those advertised by various routing protocols.
Installing unicast routes
Different implementations have different sets of preferences for routing information, and these are not standardized among IP routers. It is fair to say that subnets on directly connected active interfaces are always preferred. Beyond that, however, there will be differences.
Implementers generally have a numerical preference, which Cisco calls an "administrative distance", for route selection. The lower the preference, the more desirable the route. Cisco's IOS implementation makes exterior BGP the most preferred source of dynamic routing information, while Nortel RS makes intra-area OSPF most preferred.
The general order of selecting routes to install is as follows (a code sketch of this decision logic appears after the list):
If the route is not in the routing table, install it.
If the route is "more specific" than an existing route, install it in addition to the existing routes. "More specific" means that it has a longer prefix. A /28 route, with a subnet mask of 255.255.255.240, is more specific than a /24 route, with a subnet mask of 255.255.255.0.
If the route is of equal specificity to a route already in the routing table, but comes from a more preferred source of routing information, replace the route in the table.
If the route is of equal specificity to a route in the routing table, yet comes from a source of the same preference,
Discard it if the route has a higher metric than the existing route
Replace the existing route if the new route has a lower metric
If the routes are of equal metric and the router supports load-sharing, add the new route and designate it as part of a load-sharing group. Typically, implementations will support a maximum number of routes that load-share to the same destination. If that maximum is already in the table, the new route is usually dropped.
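A minimal sketch of this decision logic, using hypothetical route records and preference values rather than any vendor's actual implementation:

```python
# Sketch of the route-installation order listed above. Route records and the
# load-sharing limit are hypothetical; real implementations differ in detail.

def install(route, table, max_ecmp=4):
    """Decide what to do with a candidate route.
    route: dict with 'prefix', 'preference' (lower is better), 'metric' (lower is better)
    table: dict mapping prefix -> list of installed routes (a load-sharing group)"""
    existing = table.get(route["prefix"])
    if existing is None:
        # Not present; a more specific prefix simply coexists under its own key.
        table[route["prefix"]] = [route]
        return "installed"
    best = existing[0]
    if route["preference"] < best["preference"]:
        table[route["prefix"]] = [route]
        return "replaced (more preferred source)"
    if route["preference"] > best["preference"]:
        return "discarded (less preferred source)"
    if route["metric"] > best["metric"]:
        return "discarded (higher metric)"
    if route["metric"] < best["metric"]:
        table[route["prefix"]] = [route]
        return "replaced (lower metric)"
    if len(existing) < max_ecmp:
        existing.append(route)       # equal metric: load-share
        return "added to load-sharing group"
    return "discarded (load-sharing limit reached)"

table = {}
print(install({"prefix": "10.0.0.0/24", "preference": 110, "metric": 20}, table))  # installed
print(install({"prefix": "10.0.0.0/24", "preference": 20,  "metric": 5},  table))  # replaced (more preferred source)
print(install({"prefix": "10.0.0.0/28", "preference": 110, "metric": 20}, table))  # installed (more specific prefix)
print(install({"prefix": "10.0.0.0/24", "preference": 20,  "metric": 5},  table))  # added to load-sharing group
```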
Routing table vs. forwarding information base
See forwarding plane for more detail, but each implementation has its own means of updating the forwarding information base (FIB) with new routes installed in the routing table (the routing information base, RIB). If the FIB is in one-to-one correspondence with the RIB, the new route is installed in the FIB after it is in the RIB. If the FIB is smaller than the RIB, and the FIB uses a hash table or other data structure that does not easily update, the existing FIB might be invalidated and replaced with a new one computed from the updated RIB.
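A simplified sketch of deriving a FIB from an updated RIB is shown below; the data structures are hypothetical reductions of what real routers use.

```python
# Sketch: rebuild the forwarding information base (FIB) from the RIB,
# keeping only the best next hop per prefix. Contents are hypothetical.

def build_fib(rib):
    """rib: prefix -> list of candidate routes; returns prefix -> next hop."""
    fib = {}
    for prefix, routes in rib.items():
        best = min(routes, key=lambda r: (r["preference"], r["metric"]))
        fib[prefix] = best["next_hop"]
    return fib

rib = {
    "10.0.0.0/24": [
        {"next_hop": "192.0.2.1", "preference": 110, "metric": 20},
        {"next_hop": "192.0.2.9", "preference": 20,  "metric": 5},
    ],
}
fib = build_fib(rib)                                           # initial FIB
rib["10.0.0.0/24"].append(
    {"next_hop": "192.0.2.5", "preference": 5, "metric": 1})   # new route enters the RIB
fib = build_fib(rib)                                           # FIB recomputed from the RIB
print(fib)                                                     # {'10.0.0.0/24': '192.0.2.5'}
```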
Multicast routing tables
Multicast routing builds on unicast routing. Each multicast group to which the local router can route has a multicast routing table entry with a next hop for the group, rather than for a specific destination as in unicast routing.
There can be multicast static routes as well as learning dynamic multicast routes from a protocol such as Protocol Independent Multicast (PIM).
See also
Management plane
Data plane
References
Internet architecture | Control plane | [
"Technology"
] | 1,816 | [
"Internet architecture",
"IT infrastructure"
] |
11,982,591 | https://en.wikipedia.org/wiki/Smart%20cut | Smart cut is a technological process that enables the transfer of very fine layers of crystalline silicon material onto a mechanical support. It was invented by Michel Bruel of CEA-Leti, and was protected by US patent 5374564. The application of this technological procedure is mainly in the production of silicon-on-insulator (SOI) wafer substrates.
The role of SOI is to electronically insulate a fine layer of monocrystalline silicon from the rest of the silicon wafer; an ultra-thin silicon film is transferred to a mechanical support, thereby introducing an intermediate, insulating layer. Semiconductor manufacturers can then fabricate integrated circuits on the top layer of the SOI wafers using the same processes they would use on plain silicon wafers.
The sequence of illustrations pictorially describes the process involved in fabricating SOI wafers using the smart cut technology.
References
See also
Silicon on insulator
Soitec
CEA-Leti
Microtechnology
Materials science
Semiconductor device fabrication
Semiconductor technology | Smart cut | [
"Physics",
"Materials_science",
"Engineering"
] | 206 | [
"Applied and interdisciplinary physics",
"Microtechnology",
"Materials science",
"Semiconductor device fabrication",
"nan",
"Semiconductor technology"
] |
11,982,808 | https://en.wikipedia.org/wiki/Volvo%20B8444S%20engine | The B8444S is an automobile V8 engine manufactured by Yamaha Motor Corporation for Volvo Cars. It was built in Japan and based on Volvo designs.
Usage
Volvo began offering a V8 engine in its large P2-platform automobiles in 2005. It was initially offered only in the Volvo XC90 but later found its way into the second-generation Volvo S80, mated to a Japanese six-speed Aisin Seiki AWTF80-SC transmission and a Swedish Haldex all-wheel-drive (AWD) system. The 4.4 L Volvo V8 engine was built by Yamaha in Japan to Volvo's design and specifications.
Although the B8444S shares its Yamaha origins, its transverse layout, and its 60-degree bank angle with the Ford SHO V8, officials of all three companies involved insist that the Volvo V8 is not related to the SHO engine; the die-cast, open-deck aluminum Volvo block is clearly different from the sand-cast, closed-deck aluminum SHO engine block, although the two engines share many common dimensions, including bore centres, stroke, bearing journal diameters, and deck height.
As revealed in BBC's Top Gear show (Series 14, Episode 5), this basic engine is also used in the Noble M600, albeit longitudinally mounted and, with the addition of Garrett AiResearch twin-turbochargers, developing substantially more power. The engine also features MoTeC M190 and Injector Dynamics ID725 electronic fuel injection. The Noble unit is custom-built by a third-party firm expressly for Noble Cars UK.
Volvo discontinued the engine following the company's change in ownership and management in August 2010; until that point, Volvo Cars had been owned by Ford. The new management intended to offer a single engine family across all Volvo models, ultimately a four-cylinder design.
Applications:
2005 Volvo XC90
2006 Volvo S80
2010 Noble M600 (twin-turbocharged)
2014 Volvo S60 (V8 Supercars racecar)
Specification
As a Volvo V8, this engine follows the usual Volvo engine naming system. The engine is called the Volvo B8444S: B for bensin (gasoline), 8 for the number of cylinders, 44 for the total displacement of 4.4 litres, the last 4 for the number of valves per cylinder, and S for suction, meaning it is naturally aspirated. The engine also uses original Volvo parts.
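As an illustration only, the designation can be decoded mechanically. The short sketch below is not an official Volvo tool; the regular expression and labels are assumptions based on the description above.

```python
# Sketch: decode a Volvo engine designation such as "B8444S" into its parts,
# following the naming convention described above. Illustrative only.
import re

def decode_designation(name):
    m = re.fullmatch(r"(B)(\d)(\d\d)(\d)(S)", name)
    if not m:
        raise ValueError("unrecognised designation")
    fuel, cylinders, displacement, valves, aspiration = m.groups()
    return {
        "fuel": "petrol (bensin)",                      # B
        "cylinders": int(cylinders),                    # 8
        "displacement_litres": int(displacement) / 10,  # 44 -> 4.4
        "valves_per_cylinder": int(valves),             # 4
        "aspiration": "naturally aspirated (suction)",  # S
    }

print(decode_designation("B8444S"))
```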
The engine is an aluminium DOHC V8 with a 60-degree cylinder bank angle. The engine block and heads are cast from aluminium, keeping the engine's weight comparatively low.
To retain a 90-degree firing interval with its 60-degree bank angle and cross-plane crankshaft, the B8444S used offset crank journals.
Originally debuted in the Volvo XC90, which previously used 5- or 6-cylinder transverse inline engines, the B8444S had a number of significant packaging challenges to overcome. To save space and enable transverse orientation, the alternator is mounted directly to the engine block without brackets, the exhaust camshafts are linked to the intake camshafts with smaller secondary chains, and the left-hand cylinder bank is offset from its counterpart by half a cylinder's width. These tactics resulted in what was, at the time, the most compact V8 for its 4.4 L displacement.
The B8444S also made strides in emissions standards as the first V8 engine to meet the Ultra-low-emission vehicle (ULEV II) standard. The emissions standards were met using a combination of four catalytic converters and continuous variable valve timing.
Image caption: Volvo V8 badge on an XC90
Motorsport
A 5.0 L version was developed for use in Volvo S60s by Garry Rogers Motorsport in the V8 Supercars series between 2014 and 2016.
Marine
The engine block is also used for the Yamaha F300V8, F350V8, and XTO Offshore outboards. The displacement ranges from 5.3 to 5.6 litres.
This displacement increase was achieved by lengthening the stroke. The heads are also modified to "reverse flow" types, in which the inlet ports are on the outside of the engine and the exhaust ports exit inwards, towards the vee of the engine. This allows a single exhaust exit path from the center of the engine, which aids packaging the unit into an outboard form factor. The compression ratio was also dropped to 9.6:1. While not ideal from an efficiency standpoint, this reduces heat and stress on the engine and so increases durability, an essential attribute for marine duty. It also allows 87-octane-rated fuels to be used.
The latest development of this outboard, called the "XTO Offshore", has an increased displacement of 5.6 litres, achieved by enlarging the bore diameter. The stroke remains the same, but the compression ratio is now 12.0:1. It also utilizes a gasoline direct injection system, a first for a 4-stroke outboard engine, and it requires 89-octane-rated fuel.
References
B8444S
Yamaha engines
Gasoline engines by model
Automobile engines
V8 engines | Volvo B8444S engine | [
"Technology"
] | 1,038 | [
"Engines",
"Automobile engines"
] |
11,983,362 | https://en.wikipedia.org/wiki/Chicago%20Varnish%20Company%20Building | The Chicago Varnish Company Building is a building built in 1895 as the headquarters of one of the leading varnish manufacturers in the United States, the Chicago Varnish Company. The building is a rare example of Dutch Renaissance Revival-style architecture in Chicago, and is marked by a steeply pitched roof paired with stepped gables of red brick and light stone in contrasting colors. The building was designed by Henry Ives Cobb, a nationally recognized architect whose other significant works include the former Chicago Historical Society Building, the Newberry Library, and the original buildings for the University of Chicago campus. The building was listed on the National Register of Historic Places on June 14, 2001, and was designated a Chicago Landmark on July 25, 2001.
After an extensive rehabilitation, including replacement of the multi-gabled clay tile roof and rebuilding the stepped parapets, Harry Caray's Italian Steakhouse opened in the building on October 23, 1987. The restaurant has received numerous awards for its food and service, and features many items of memorabilia, including a "Holy Cow" wearing the trademark Harry Caray eyeglasses that was sourced from Chicago's CowParade.
The building is distinctive for its use of the Dutch Renaissance revival style, with its stepped gables, steeply-pitched tile roof, and contrasting brick and stone masonry.
The building and its restoration received the Chicago Landmarks Preservation Excellence award in 2006 for its careful restoration of the Ludowici roof.
See also
Chicago Landmark
References
External links
Chicago Varnish Company Building
Commercial buildings on the National Register of Historic Places in Chicago
Chicago Landmarks
Office buildings completed in 1895
Stepped gables | Chicago Varnish Company Building | [
"Chemistry",
"Engineering"
] | 321 | [
"Varnishes",
"Coatings",
"Stepped gables",
"Architecture"
] |
11,983,936 | https://en.wikipedia.org/wiki/Jinkanpo%20Atsugi%20Incinerator | The Enviro-Tech Incinerator Complex (Atsugi Incinerator) was a waste incinerator located in Ayase, Kanagawa Prefecture, Japan, (formerly Jinkanpo/Shinkampo). It began operation on March 3, 1980 and was closed on April 30, 2001. The incinerator was located near Naval Air Facility Atsugi, a base manned partly by several thousand United States Navy members and their families.
Throughout its history, the incinerator reportedly blew toxic and carcinogenic emissions over the neighbouring base facilities. The incinerator's owners, arrested and jailed on charges of tax evasion, neglected the maintenance of the facility. The pollution became such a health concern for the American residents that the U.S. military authorities allowed those who showed signs of adverse health effects to leave early (service members are usually stationed at the base for a tour of three years). Many U.S. service members reported sickness, and a few died from cancer shortly after moving back to the United States. However, the US Navy has not formally established a connection between their exposure and their disease. For a time, the base required service members to undergo medical screenings before being stationed there, in order to ensure that they had no medical condition that would be worsened by the poor air quality.
In May 2001, the Japanese government purchased the plant for nearly 40 million dollars and shut it down following a United States Department of Justice lawsuit against the private incinerator owner. Dismantling was completed by the end of that year. Some former residents of Atsugi NAF still complain of health problems related to the incinerator's emissions and report that the USN has been reluctant to address their concerns. The incinerator contaminated the base, especially the housing area, with dioxin, heavy metals, and other deadly toxins. In June 2007, the USN's Environmental Health Center announced that it would conduct a study of the health of those stationed at NAF Atsugi during the time the incinerator was in operation.
The Navy and Marine Corps Public Health Center has stated that a new health study is currently underway and should be released in the summer of 2009.
References
Notes
Web
History of the United States Navy
United States military in Japan
Buildings and structures in Kanagawa Prefecture
Japan–United States relations
Law of Japan
Pollution in Japan
Incinerators
Environmental disasters in Japan | Jinkanpo Atsugi Incinerator | [
"Chemistry"
] | 497 | [
"Incinerators",
"Incineration"
] |
11,984,510 | https://en.wikipedia.org/wiki/Integrated%20Ballistics%20Identification%20System | The Integrated Ballistics Identification System, or IBIS, is the brand of the Automated firearms identification system manufactured by Forensic Technology WAI, Inc., of Montreal, Canada.
Use
IBIS has been adopted as the platform of the National Integrated Ballistic Information Network (NIBIN) program, which is run by the United States Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF). NIBIN tracks about 100,000 guns used in crimes. The integration of technology into about 220 sites across the continental US and its territories facilitates sharing of information between different law enforcement groups. The rapid dissemination of ballistics information, in turn, allows for tracking of gun-specific information and connection of a particular firearm to multiple crimes irrespective of geographic location. A National Research Council report has found that with the NIBIN dataset, a bullet retrieved from a crime scene will generate about 10 possible matches, with about a 75-95% chance of a successful match.
While some groups have advocated laws requiring all firearms sold be test-fired and registered in such a system, success has been mixed. In 2005, a Maryland State Police report recommended a law requiring all handguns sold in the state be registered in their IBIS system be repealed, as at the cost of $2.5 million the system had not produced "any meaningful hits". The Maryland system was shut down in 2015 due to its ineffectiveness. By 2008, the New York COBIS system, which costs $4 million per year, had not produced any hits leading to prosecutions in 7 years of operation. The system has been more successful when used to track guns used by and found on criminals.
In television
IBIS is frequently mentioned in modern television programs, fictional and otherwise, that use forensics to aid in solving crimes. These television shows include CSI: Crime Scene Investigation and its spinoffs, amongst others. Forensic Technology helped develop an interactive exhibit, 'CSI: The Experience' that showcased the company's technology.
See also
National Ballistics Intelligence Service, a similar body in the United Kingdom
References
External links
1. https://web.archive.org/web/20070711154331/http://www.nibin.gov/ is the official Web site for the NIBIN, the National Integrated Ballistics Information Network.
2. http://www.fti-ibis.com is the Web site for the developer and supporter of IBIS technology, Forensic Technology Incorporated.
Ballistics
Forensic software | Integrated Ballistics Identification System | [
"Physics"
] | 506 | [
"Applied and interdisciplinary physics",
"Ballistics"
] |
11,984,678 | https://en.wikipedia.org/wiki/Phenotypic%20testing%20of%20mycobacteria | In microbiology, the phenotypic testing of mycobacteria uses a number of methods. The most-commonly used phenotypic tests to identify and distinguish Mycobacterium strains and species from each other are described below.
Tests
Acetamide as sole C and N sources
Medium: KH2PO4 (0.5 g), MgSO4·7H2O (0.5 g), purified agar (20 g), distilled water (1000 ml). The medium is supplemented with acetamide to a final concentration of 0.02 M, adjusted to a pH of 7.0 and sterilized by autoclaving at 115°C for 30 minutes. After sloping, the medium is inoculated with one loop of the cultures and incubated. Growth is read after incubation for two weeks (rapid growers) or four weeks (slow growers).
Arylsulfatase test
Arylsulfatase enzyme is present in most mycobacteria. The rate by which arylsulfatase enzyme breaks down phenolphthalein disulfate into phenolphthalein (which forms a red color in the presence of sodium bicarbonate) and other salts is used to differentiate certain strains of Mycobacteria. 3 day arylsulfatase test is used to identify potentially pathogenic rapid growers such as M. fortuitum and M. chelonae. Slow growing M. marinum and M. szulgai are positive in the 14-day arylsulfatase test.
Catalase, semiquantitative activity
Most mycobacteria produce the enzyme catalase, but they vary in the quantity produced. Also, some forms of catalase are inactivated by heating at 68°C for 20 minutes (others are stable). Organisms producing the enzyme catalase have the ability to decompose hydrogen peroxide into water and free oxygen. The test differs from that used to detect catalase in other types of bacteria by using 30% hydrogen peroxide in a strong detergent solution (10% polysorbate 80).
Citrate
Sole carbon source
Egg medium
Growth on Löwenstein–Jensen medium (LJ medium)
L-Glutamate
Sole carbon and nitrogen source
Growth rate
The growth rate is the length of time required to form mature colonies visible without magnification on solid media. Mycobacteria forming colonies visible to the naked eye within seven days on subculture are known as rapid growers, while those requiring longer periods are termed slow growers.
Iron uptake
The ability to take up iron from an inorganic iron containing reagent helps differentiate some species of mycobacteria.
Lebek medium
Lebek is a semisolid medium used to test the oxygen preferences of mycobacterial isolates. Aerophilic growth is indicated by growth on (and above) the surface of the glass wall of the tube; microaerophilic growth is indicated by growth below the surface.
MacConkey agar without crystal violet
Niacin accumulation (paper strip method)
Niacin is formed as a metabolic byproduct by all mycobacteria, but some species possess an enzyme that converts free niacin to niacin ribonucleotide. M. tuberculosis (and some other species) lack this enzyme, and accumulate niacin as a water-soluble byproduct in the culture medium.
Nitrate reduction
Mycobacteria containing nitroreductase catalyze the reduction from nitrate to nitrite. The presence of nitrite in the test medium is detected by addition of sulfanilamide and n-naphthylethylendiamine. If nitrate is present, red diazonium dye is formed.
Photoreactivity of mycobacteria
Some mycobacteria produce carotenoid pigments without light; others require photoactivation for pigment production. Photochromogens produce non-pigmented colonies when grown in the dark, and pigmented colonies after exposure to light and re-incubation. Scotochromogens produce deep-yellow-to-orange colonies when grown in either light or darkness. Non-photochromogens are non-pigmented in light and darkness or have a pale-yellow, buff or tan pigment which does not intensify after light exposure.
Picrate tolerance
Grows on Sauton agar containing picric acid (0.2% w/v) after three weeks
Pigmentation
Some mycobacteria produce carotenoid pigments without light; others require photoactivation for pigment production (see photoreactivity, above).
Pyrazinamide sensitivity (PZA)
The deamidation of pyrazinamide to pyrazinoic acid (assumed to be the active component of the drug pyrazinamide) in four days is a useful physiologic characteristic by which M. tuberculosis-complex members can be distinguished.
Sodium chloride tolerance
Growth on LJ medium containing 5% NaCl
Thiophene-2-carboxylic acid hydrazide (TCH) sensitivity
The growth of M. bovis and M. africanum subtype II is inhibited by thiophene-2-carboxylic acid hydrazide; growth of M. tuberculosis and M. africanum subtype I is uninhibited.
Polysorbate 80 hydrolysis
A test for lipase using polysorbate 80 (polyoxyethylene sorbitan monooleate, a detergent). Certain mycobacteria possess a lipase that splits it into oleic acid and polyoxyethylated sorbitol. The test solution also contains phenol red, which is stabilised by the polysorbate 80; when the polysorbate 80 is hydrolysed, the phenol red changes from yellow to pink.
Urease (adaptation to mycobacteria)
With an inoculation loop, several loopfuls of mycobacteria test colonies are transferred to 0.5 mL of urease substrate, mixed to emulsify and incubated at 35 °C for three days; a colour change (from amber-yellow to pink-red) is sought.
References
Bacteriology
Microbiology techniques | Phenotypic testing of mycobacteria | [
"Chemistry",
"Biology"
] | 1,319 | [
"Microbiology techniques"
] |
11,984,730 | https://en.wikipedia.org/wiki/Index%20of%20human%20sexuality%20articles | Human sexuality covers a broad range of topics, including the physiological, psychological, social, cultural, political, philosophical, ethical, moral, theological, legal and spiritual or religious aspects of sex and human sexual behavior.
Articles pertaining to human sexuality include:
!$@
-phil-
$pread
A
A Mind of Its Own: A Cultural History of the Penis
A Return to Love
Abasiophilia
Abortion
Abstinence
Abstinence-only sex education
Abstinence, be faithful, use a condom
Accidental incest
Acrosome
Acrosome reaction
Acrotomophilia
Activin and inhibin
Adolescent sexuality
Adolescent sexuality in the United States
Adult Check
Adult Industry Medical Health Care Foundation
Adult Verification System
Adult video arcade
Adult video game
Adultery
Advanced maternal age
Affair
Affection
Affectional bond
Affectional orientation
African-American culture and sexual orientation
Agalmatophilia
Age at first marriage
Age disparity in sexual relationships
Age of consent
Age of consent reform
Ageplay
Ages of consent in Africa
Ages of consent in Asia
Ages of consent in Europe
Ages of consent in North America
Ages of consent in Oceania
Ages of consent in South America
AIDS
AIDS in the pornographic film industry
Alan Soble
Albanian sworn virgins
Alcohol and sex
All About Love: New Visions
Alt porn
Alt.sex
Alt.sex.stories
American Association of Sexuality Educators, Counselors and Therapists
American Birth Control League
American Fertility Association
American Institute of Bisexuality
American Journal of Sexuality Education
Anal beads
Anal fingering
Anal masturbation
Anal sex
Anaphrodisiac
Anarchism and issues related to love and sex
Anatomically correct doll
Ancient Greek eros
Andrology
Androphilia and gynephilia
Anilingus
Animal roleplay
Anovulatory cycle
Anthropophilia in animals
Anti-pornography movement
Antisexualism
Aphanisis
Aphrodisiac
Apotemnophilia
Aquaphilia (fetish)
Armpit fetishism
Arse Elektronika
Artificial hymen
Asexuality
Asherman's syndrome
Ass to mouth
Assortative mating
Astroglide
Athenian pederasty
AtomAge
Attachment in adults
Attachment in children
Attachment measures
Attachment theory
Attraction to disability
Attraction to transgender people
Autagonistophilia
Autassassinophilia
Autoerotic fatality
Autoeroticism
Autosadism
Ayoni
B
Balloon fetish
Banjee
Bare Behind Bars
Bareback (sexual act)
Barley-Break
Baseball metaphors for sex
BDSM
BDSM and the law
BDSM in culture and media
Beate Uhse-Rotermund
Beate Uhse Erotic Museum
Beginning of pregnancy controversy
Benandanti
Bend Over Boyfriend
Benjamin scale
Berl Kutchinsky
Beyaz (drug)
Biastophilia
Biblical courtship
Bidder's organ
Bikini waxing
Biology and sexual orientation
Birth control
Birth Control (film)
Birth Control Council of America
Birth control movement in the United States
Birth control sabotage
Birth dearth
Birth rate
Bisexual pornography
Bisexual pride flag
Bisexuality
Blanchard's transsexualism typology
Blood–testis barrier
Blood fetishism
Blue balls
Blunder Broad
Bob Champion
Body inflation
Body odor and subconscious human sexual attraction
Boink
Bokanovsky's process
Bondage (BDSM)
Bonk: The Curious Coupling of Science and Sex
Boot fetishism
Born-again virgin
Boston Corbett
Boston Medical Group
Boyfriend
Bracha L. Ettinger
Breast
Breast binding
Breast fetishism
British Journal of Sexual Medicine
British Society for the Study of Sex Psychology
Broken heart
Brotherly love (philosophy)
Buddhism and sexual orientation
Buddhism and sexuality
Bugchasing
Bukkake (sexual practice)
Bundling (tradition)
Bunga bunga
Burusera
Butt plug
Buttocks
Buttocks eroticism
C
Camel toe
Camgirl
Candaulism
Capacitation
Cass identity model
Casting couch
Castration
Casual relationship
Casual sex
Catfight
Catholic sex abuse cases
Catholicism and sexuality
Celibacy
Certified Sex Therapist
Chickenhawk (gay slang)
Chickenhead (sexuality)
Child-on-child sexual abuse
Child sex
Child sex tourism
Child sexual abuse
Child sexuality
Childbirth
Choice USA
Chremastistophilia
Christian side hug
Christianity and sexual orientation
Chronophilia
Cicisbeo
Circle jerk (sexual practice)
Circumcision
Clinical vampirism
Clitoral enlargement methods
Clitoral erection
Clitoral pump
Clitoris
Clitorism
Clothing fetish
Club wear
Co-sleeping
Cock ring
Cock and ball torture
Cockle bread
Coitus reservatus
Compassionate love
Compersion
Compulsory sterilization
Concept Foundation
Concubinage
Condom
Condom fatigue
Condoms, needles, and negotiation
Conjugal love
Conjugal visit
Conscience clause (medical)
Consecrated virgin
Consent
Consent in BDSM
Constitutional growth delay
Contraception
Contraception in the Republic of Ireland
Contraceptive security
Coolidge effect
Coprophilia
Corrective rape
Cortical reaction
Couple costume
Courtly love
Courtship
Courtship disorder
Covert incest
Crab louse
Creampie (sexual act)
Cretan pederasty
Crime of passion
Criminal transmission of HIV
Cross dressing
Cruising for sex
Crush fetish
Crystallization (love)
Cuban National Center for Sex Education
Cuckold
Cuckquean
Cuddle party
Cum shot
Cunnilingus
Cunt
Cupboard love
Curial response to Catholic sex abuse cases
Cutty-sark (witch)
Cyber sex
Cybersex
Cytoplasmic incompatibility
Cytoplasmic transfer
D
Dacryphilia
Damsel in distress
Dartos fascia
Date rape
Davian behavior
David Reimer
Day of Conception
De amore (Andreas Capellanus)
Dear John letter
Debagging
Decrement table
Deep-throating
Delayed ejaculation
Delayed puberty
Demographics of sexual orientation
Dendrophilia (paraphilia)
Dental dam
Desire (emotion)
Desire discrepancy
Deus caritas est
Dhat syndrome
Dildo
Diotima of Mantinea
Dippoldism
Dirty Sanchez (sexual act)
Dirty talk
Discipline (BDSM)
Document 12-571-3570
Dogging (sexual slang)
Doll fetish
Domination and submission
Dominatrix
Domnei
"Don't Stand So Close to Me"
Don Juan
Donkey punch
DontDateHimGirl.com
Douche
Downblouse
Droit du seigneur
Dry enema
Dry sex
Dual protection
Dutch Society for Sexual Reform
Dydd Santes Dwynwen
Dyspareunia
E
Easterlin hypothesis
Ecclesiastical response to Catholic sex abuse cases
Écriture féminine
Edging (sexual practice)
Education for Citizenship (Spain)
Effects of pornography
Effeminacy
Egg cell
Ego-dystonic sexual orientation
Ejaculation
Eli Coleman
Élisabeth Badinter
Emasculation
Embryo transfer
Emergency contraceptive availability by country
Emetophilia
Emotional affair
Emotional intimacy
Encyclopedia of Pleasure
Endocrinology
Enema
Enjo kōsai
Environment and sexual orientation
Enzyte
Ephebophilia
Erectile dysfunction
Erection
Erogenous zone
Eros (concept)
Erotic Awards
Erotic electrostimulation
Erotic humiliation
Erotic hypnosis
Erotic lactation
Erotic literature
Erotic massage
Erotic sexual denial
Erotic spanking
Erotica
Eroticism
Eroto-comatose lucidity
Erotolepsy
Erotomania
Erotophilia
Erotophobia
Erotosexual
Eskimo kissing
Estrogen
Ethnic pornography
Eve Kosofsky Sedgwick
Evolutionary psychology
Ex-gay movement
Exhibitionism
Exoletus
ExtenZe
F
Facesitting
Facial
Falling in love
Fallopian tubes
Family planning
Family planning in India
Family planning in Iran
Family planning in Pakistan
Fans of X-Rated Entertainment
Fat fetishism
Fear of commitment
Felching
Fellatio
Female condom
Female copulatory vocalization
Female ejaculation
Female genital cutting
Female hysteria
Female infertility
Female reproductive system
Female sex tourism
Female sexual arousal disorder
Female sodomy
Female submission
Feminism
Feminist sex wars
Feminist sexology
Feminist views of pornography
Feminist views on BDSM
Feminization (activity)
Fertility
Fertility-development controversy
Fertility and intelligence
Fertility rite
Fertility symbol
Fetish magazine
Fetish model
Fetus
Fictosexuality
Fingering
Fisting
Fixation (psychology)
Fleshlight
Flirting
Flogging
Follicular phase
Food and sexuality
Food play
Foot fetishism
Footjob
Forced orgasm
Foreplay
Foreskin
Foreskin restoration
Formicophilia
Fornication
Foursome (group sex)
Foxy boxing
Frank Harris
Fraternal birth order and male sexual orientation
Free love
Free Speech Coalition
Free union
French kiss
Friend zone
Frot
Frotteurism
Fuck
Fuck for Forest
Fur massage
G
Gametangium
Gamete
Gametogenesis
Gang bang
Gang bang pornography
Gang rape
Gangbang
Gay
Gay bomb
Gay Kids
Gay pornography
Geeta Nargund
Gender
Gender and crime
Gender apartheid
Gender identity
Gender identity disorder
Gender identity disorder in children
Gender paradigm
Gender segregation and Islam
Genetic sexual attraction
Genital corpuscles
Genital modification and mutilation
Genital piercing
Genital play
Genital wart
Genitourinary medicine
Genophobia
George Santayana
Georges Bataille
Geriatric sexology
German Society for Social-Scientific Sexuality Research
Gerontophilia
Gestation period
Giles' theory of sexual desire
Girlfriend
Girlfriend experience
Glans
Gloria E. Anzaldúa
Glory hole
Glove fetishism
Gofraid Donn
Gokkun
Golden Age of Porn
Gonadarche
Gonadotropin
Gonadotropin preparations
Gonocyte
Gonorrhea
Gratification disorder
Greek love
Greek words for love
Griselda Pollock
Groping
Gross reproduction rate
Grotesque body
Group sex
Groupie
Growing Up (1971 film)
Guy Hocquenghem
Gynaecology
Gynoecium
Gynophobia
H
Habitual abortion
Hair fetishism
Haitian Vodou and sexual orientation
Hand fetishism
Handedness and sexual orientation
Handjob
Hare Krishna movement and sexual orientation
Harmful to Minors
Harry Crookshank
Hatred
Head shaving
Heather Corinna
Heavy petting
Hebephilia
Heihaizi
Hélène Cixous
Hentai
Hepatitis
Herpes simplex virus
Herpes support groups
Heteroflexible
Heterosexual–homosexual continuum
Heterosexuality
Hey Nineteen
Hickey
Hierophilia
Hijra (South Asia)
Hirsutophilia
History of attachment theory
History of erotic depictions
History of evolutionary psychology
History of homosexuality
History of human sexuality
History of masturbation
History of narcissism
History of prostitution
History of sex in India
HIV
Hogging (sexual practice)
Homoeroticism
Homophobia
Homosexuality
Homosexuality and psychology
Hostile work environment
Hot or Not
House party
How to Have Sex in an Epidemic: One Approach
Hug
Hugs and kisses
Human anus
Human bonding
Human female sexuality
Human fertilization
Human gonad
Human male sexuality
Human penis
Human population control
Human reproduction
Human reproductive system
Human sexual activity
Human sexual response cycle
Human sexuality
Human sterilization (surgical procedure)
Hybristophilia
Hydatid of Morgagni
Hydrocele testis
Hyperactivation
Hypergamy
Hypergonadism
Hypersexual disorder
Hypersexuality
Hypoactive sexual desire disorder
Hypogonadism
Hyposexuality
I
Ideal Marriage: Its Physiology and Technique
Identity (social science)
Imagery of nude celebrities
Immanuel Kant
Impact play
Implantation (human embryo)
Imprinting (psychology)
In Praise of the Stepmother
In vitro fertilisation
Incest
Incest in popular culture
Income and fertility
Indecent exposure
Index of BDSM articles
Infertility
Infidelity
Inis Beag
Insemination
Inside Deep Throat
Institut für Sexualwissenschaft
Institute for Advanced Study of Human Sexuality
Instruction and Advice for the Young Bride
Intercrural sex
Interferon tau
International Academy of Sex Research
International Fetish Day
International Mr. Leather
Internet addiction disorder
Internet pornography
Internet relationship
Interpersonal attraction
Interracial personals
Intersex
Intersex flag
Intersex human rights
Intimate relationship
Irrumatio
Is the School House the Proper Place to Teach Raw Sex?
Ishq
Islam and sexual orientation
Islamic sexual jurisprudence
It's Perfectly Normal
It's So Amazing
It girl
J
Jacques Hassoun
Jacques Lacan
Jailbait
Jealousy
Jewish views on love
Jewish views on marriage
John D'Emilio
John Sutcliffe (designer)
John William Lloyd
Jolan Chang
Jonathan David Katz
Josephine Mutzenbacher
Jouissance
Judaism and sexual orientation
Judith Butler
Julia Kristeva
K
K-Y Jelly
Kagema
Kama sutra
Kanashimi no Belladonna
Kegel exercise
Ken Marcus
Khosrow and Shirin
Kidding Aside
KinK
Kinky sex
Kinsey Institute for Research in Sex, Gender, and Reproduction
Kinsey scale
Kiss
Kiss chase
Kissing traditions
Kizzy: Mum at 14
Klein Sexual Orientation Grid
Klismaphilia
Koro (medicine)
Kukeri
L
L word
Lack (manque)
Lactation
Lafayette Morehouse
Laskey, Jaggard and Brown v United Kingdom
Latent homosexuality
Latex and PVC fetishism
Latex clothing
Lawrence v. Texas
Layla and Majnun
Leather fetishism
Leather Pride flag
Leather subculture
Legal objections to pornography in the United States
Legal recognition of intersex people
Lesbian
Lesbian erotica
Lesbianism
Leydig cell
LGBT
LGBT sex education
LGBT themes in speculative fiction
LGBTI Health Summit
Li Yannian (musician)
Libertine
Libido
Life partner
Limbic resonance
Limbic revision
Limerence
Lingerie
List of anarchist pornographic projects and models
List of BDSM equipment
List of BDSM organizations
List of bondage positions
List of fertility deities
List of films that most frequently use the word "fuck"
List of hentai authors
List of homologues of the human reproductive system
List of PAN dating software
List of paraphilias
List of pornographic book publishers
List of pornographic magazines
List of prostitutes and courtesans
List of sex positions
List of sexology journals
List of sexology organizations
List of sexual slang
List of sovereign states and dependent territories by fertility rate
List of topics on sexual ethics
Living and Growing
Living apart together
Lolita
London amora
Long-acting reversible contraceptive
Lookism
Lost Girls (graphic novel)
Lotion play
Love
Love-in
Love–hate relationship
Love & Respect
Love (sculpture)
Love addiction
Love at first sight
Love dart
Love Is...
Love letter
Love magic
Love padlocks
Love styles
Love triangle
Lovegety
Lovemap
Lovesickness
Lovestruck
Loyalty
Luce Irigaray
Lust
Lust murder
M
Macrophilia
Magnus Hirschfeld Medal
Making out
Making sense of abstinence
Male accessory gland
Male dominance (BDSM)
Male infertility
Male prostitute
Male reproductive system
Male submission
Male waxing
Mama-san
Mammary intercourse
Mandarin Chinese profanity
Manual sex
Marital rape
Marquis de Sade
Marriage
Marriage and Morals
Marriageable age
Married Love
Masters and Johnson
Masters and Johnson Institute
Masturbate-a-thon
Masturbation
Mat (Russian profanity)
Maternal bond
Maternal health
Mechanics of human sexuality
Mechanophilia
Media coverage of Catholic sexual abuse cases
Medical abortion
Medical fetishism
Meet market
Men who have sex with men
Ménage à trois
Menarche
Menstruation
Meretrix
Michael Uebel
Michel Foucault
Mighty Jill Off
Mile high club
MILF Island
MILF pornography
Minors and abortion
Mirror stage
Misattribution of arousal
Misogyny
Mister Leather Europe
Mistress (lover)
Modern primitive
Monique Wittig
Monogamy
Monosexuality
Mosley v News Group Newspapers
Mosley v United Kingdom
Muscle worship
Mutual masturbation
My Mom's Having a Baby
Mysophilia
N
Naked Ambition: An R Rated Look at an X Rated Industry
Naked Science
Naked Women's Wrestling League
Name of the Father
Nanpa
Narcissistic parents
Narratophilia
National Birth Control League
National Gamete Donation Trust
National Longitudinal Study of Adolescent Health
National Sexuality Resource Center
National Survey of Sexual Health and Behavior
Natural fertility
Navel fetishism
Necrophilia
Necrophilia in popular culture
Neotantra
Net reproduction rate
Neuroscience and sexual orientation
New relationship energy
Nice guy
Nidamental gland
Nightwork: Sexuality, Pleasure, and Corporate Masculinity in a Tokyo Hostess Club
Nin-imma
Nipple
Nipple clamp
No Kidding!
No Secrets (Adult Protection)
Non-heterosexual
Non-penetrative sex
North American Man/Boy Love Association
Nose fetishism
Nyotaimori
O
Object sexuality
Objet petit a
Obscene phone call
Obscenity
Obsessive love disorder
Obstetrics
Oculophilia
Odalisque
Odaxelagnia
Omorashi
On-again, off-again relationship
Oncofertility Consortium
One-child policy
One sex two sex theory
OneChild
OneTaste
Online dating service
Oocyte selection
Oogamy
Open relationship
Operation Spanner
Opportunistic breeders
Oragenitalism
Oral sex
Orgasm
Orgy
Othermother
Otto Gross
Our Bodies, Ourselves
Our Whole Lives
Outline of human sexuality
Outline of relationships
Ovary
Ovotestis
P
Paddle (spanking)
Pansexual
Pansexual Pride flag
Pansexuality
Paraphilia
Paraphilic infantilism
Partialism
Party and play
Paternal bond
Pearl Index
Pearl necklace (sexual act)
Pederasty
Pederasty in ancient Greece
Pedophilia
Peer Health Exchange
Pegging (sexual practice)
Pelvic congestion syndrome
Penile fracture
Penile plethysmograph
Penile implant
Penile-vaginal intercourse
Penis captivus
Penis enlargement
Penis extension
Penis sleeve
People v. Jovanovic
Perineum
Period of viability
Persistent genital arousal disorder
Personal lubricant
Perversion
Perversion for Profit
Peter Abelard
Petroleum jelly
Peyronie's disease
Phalloorchoalgolagnia
Phallus
Philosophy of love
Philosophy of sex
Phone sex
Physical attractiveness
Physical intimacy
Physiology
Pillow talk
Pinafore eroticism
Piquerism
Platonic love
Play piercing
Playboy (lifestyle)
Playgirl
Playing doctor
Plietesials
Plurisexuality
Plushophilia
Polyamory
Polyfidelity
Polymorphous perversity
Polysexuality
Pompoir
POPLINE
Population Council
Porcine zona pellucida
Porn groove
Porn Sunday
Pornographic film actor
Pornography
Pornography addiction
Pornography in Italy
Pornophobia
Pornosonic
Post-coital tristesse
Post Office (game)
Postorgasmic illness syndrome
Precocious puberty
Pregnancy
Pregnancy fetishism
Pregnancy over age 50
Premarital sex
Premature ejaculation
Premature ovarian failure
Premenstrual stress syndrome
Prenatal development
Prenatal hormones and sexual orientation
Priapism
Prick Up Your Ears (Family Guy)
Primal scene
Primary and secondary (relationship)
Prison rape
Prison sexuality
Privacy mode
Private Case
Promiscuity
Prostaglandin 2 alpha
Prostate
Prostate massage
Prostitution
Prostitution in Asia
Prudence and the Pill
Psychoanalysis
Psychology of sexual monogamy
Psychopathia Sexualis
Psychosexual disorder
Pubarche
Puberty
Pubic hair
Public display of affection
Public indecency
Public sex
Puppy love
Purity test
Pussy
Putative father registry
Q
Queer
Queer heterosexuality
Queer pornography
Questioning (sexuality and gender)
Quickie
Quiverfull
R
R v Brown
R v Peacock
R. v. Hess; R. v. Nguyen
R. v. Stevens
Rainbow flag (LGBT movement)
Randa Mai
Rape
Rape by deception
Rating site
Rectal prolapse
Rectum
Red Triangle (family planning)
Reefer Madness (2003 book)
Reflectoporn
Refractory period (sex)
Regina Lynn
Relationship breakup
Religion and sexuality
Religious views on love
Religious views on pornography
Remarriage
Reproductive health
Reproductive Health Bill
Reproductive justice
Reproductive life plan
Reproductive medicine
Reproductive rights
Reproductive system disease
Robert Reid-Pharr
Robot fetishism
Roger T. Pipe
Roland Barthes
Roman Catholic sex abuse cases by country
Romance (love)
Romantic friendship
Rubber fetishism
Rusty trombone
S
Sadism and masochism in fiction
Sadomasochism
Safe sex
Salirophilia
Same gender loving
San Francisco Armory
San Francisco Sex Information
Sanky-panky
Sarah Kofman
Savage Grace
Savage Love
Schizoanalysis
Scientology and sexual orientation
Scopophilia
Scrotal inflation
Scrotum
Seasonal breeder
Secret admirer
Secret Museum, Naples
Section 63 of the Criminal Justice and Immigration Act 2008
Seduction
Seduction community
Seishitsu
Self-love
Semen extender
Sensual play
Serial monogamy
Serial rape
Serosorting
Service-oriented (sexuality)
SESAMO
Settlements and bankruptcies in Catholic sex abuse cases
Seven minutes in heaven
Sex
Sex-positive feminism
Sex-positive movement
Sex Addicts Anonymous
Sex after pregnancy
Sex and drugs
Sex and Love Addicts Anonymous
Sex and sexuality in speculative fiction
Sex and the law
Sex assignment
Sex at Dawn
Sex club
Sex doll
Sex education
Sex education in the United States
Sex in advertising
Sex in space
Sex industry
Sex machine
Sex magic
Sex manual
Sex museum
Sex organ
Sex party
Sex positions
Sex scandal
Sex segregation
Sex shop
Sex steroid
Sex strike
Sex surrogate
Sex symbol
Sex therapy
Sex tourism
Sex toy
Sex toy party
Sex Week at Yale
Sex work
Sex worker
Sex workers' rights
Sex, gender and the Roman Catholic Church
Sex: The Revolution
Sexaholics Anonymous
Sexercises
Sexism
Sexless marriage
Sexological testing
Sexology
Sexting
Sexual abstinence
Sexual abuse
Sexual activity during pregnancy
Sexual addiction
Sexual and Reproductive Health Matters
Sexual anorexia
Sexual arousal
Sexual arousal disorder
Sexual assault
Sexual Attitude Reassessment
Sexual attraction
Sexual bimaturism
Sexual capital
Sexual Compulsives Anonymous
Sexual consent
Sexual desire
Sexual dysfunction
Sexual ethics
Sexual fantasy
Sexual fetishism
Sexual field
Sexual frustration
Sexual function
Sexual harassment
Sexual health clinic
Sexual identity
Sexual Identity Therapy
Sexual inhibition
Sexual intercourse
Sexual intimacy
Sexual meanings
Sexual medicine
Sexual minority
Sexual misconduct
Sexual morality
Sexual narcissism
Sexual network
Sexual norm
Sexual objectification
Sexual orientation
Sexual orientation and gender identity at the United Nations
Sexual orientation and military service
Sexual orientation and the Canadian military
Sexual orientation and the military of the Netherlands
Sexual orientation and the military of the United Kingdom
Sexual orientation and the United States military
Sexual orientation change efforts
Sexual orientation hypothesis
Sexual partner
Sexual penetration
Sexual Personae
Sexual practices between men
Sexual practices between women
Sexual repression
Sexual reproduction
Sexual revolution
Sexual ritual
Sexual roleplay
Sexual script
Sexual selection in human evolution
Sexual slang
Sexual stigma
Sexual stimulation
Sexual sublimation
Sexual tension
Sexual violence
Sexual Violence: Opposing Viewpoints
Sexuality and disability
Sexuality and The Church of Jesus Christ of Latter-day Saints
Sexuality in ancient Rome
Sexuality in Ancient Rome
Sexuality in Christian demonology
Sexuality in Islam
Sexuality in Japan
Sexuality in music videos
Sexuality in older age
Sexuality in South Korea
Sexuality in Star Trek
Sexuality in China
Sexuality in the Philippines
Sexuality Information and Education Council of the United States
Sexuality of Abraham Lincoln
Sexuality of Adolf Hitler
Sexuality of David and Jonathan
Sexuality of Jesus
Sexuality of William Shakespeare
Sexualization
Sexually active life expectancy
Sexually suggestive
Sexually transmitted infection
Shalom bayit
Shelf (organization)
Shemale
Shettles Method
Shidduch
Shoe fetishism
Short-arm inspection
Sikhism and sexual orientation
Sima Qian
Simone de Beauvoir
Singles Awareness Day
Singles event
Sinthome
Situational sexual behavior
Skoptic syndrome
Sleep sex
Slut
Smirting
Smoking fetishism
Snowballing (sexual practice)
Social impact of thong underwear
Society for the Scientific Study of Sexuality
Sociobiological theories of rape
Sociosexual orientation
Sodomy
Sodomy law
Soggy biscuit
Somnophilia
Soulmate
Spandex fetishism
Spectatoring
Sperm
Sperm heteromorphism
Sperm motility
Sperm Wars
Spermalege
Spermarche
Spermatheca
Spermatid
Spermatogenesis
Spermatogenesis arrest
Spermatorrhea
Spermatozoon
Spermicide
Spin the bottle
Spinster
Spiritual marriage
Spirituality
Spooning (cuddling)
Sporogenesis
Stag film
Stalag fiction
Star-crossed
State v. Limon
Statutory rape
Stigma (1972 film)
Stillbirth
Stimulation of nipples
Strap-on dildo
Strip club
Strip poker
Stripper
Sub-replacement fertility
Sublimation (psychology)
Sumata
Survivors Healing Center
Survivors of Incest Anonymous
Suzanne Lilar
Swedish Association for Sexuality Education
Swing club
Swinging
Sybian
Syphilis
T
Tamakeri
Tanner scale
Tantric sex
Taoist sexual practices
Teabagging
Teenage pregnancy
Teenage pregnancy and sexual health in the United Kingdom
Teledildonics
Testicle
The ABC of Sex Education for Trainables
The Abortion Pill (film)
The birds and the bees
The Chapman Report
The Education of Shelby Knox
The Enchanter
The Encyclopædia of Sexual Knowledge
The Erotic Review
The Ethical Slut
The Family Doctor
The Four Loves
The G Spot and Other Recent Discoveries About Human Sexuality
The History of Sexuality
The Imaginary (psychoanalysis)
The Little Red Schoolbook
The Man Who Would Be Queen
The Seminars of Jacques Lacan
The Sexual Life of Savages in North-Western Melanesia
The Symbolic
The Theory of Flight
The Trouble With Normal (book)
Thelarche
Theology
Therapeutic abortion
Threesome
Thy Neighbor's Wife (book)
Tickling game
Timeline of sexual orientation and medicine
Title X
Toothing
Top, bottom and versatile
Torture Garden (fetish club)
Total fertility rate
Total fertility rates by federal subjects of Russia
Tough love
Trans woman
Transactional sex
Transgender
Transgender pornography
Transgender Pride flag
Transgender sexuality
Transition nuclear protein
Transsexualism
Transvestic fetishism
Transvestism
Tribadism
Tunica albuginea
Turkey slap
Twenty Five Years of an Artist
Two-spirit
U
UK Adult Film and Television Awards
Unconditional love
Unisex
Unitarian Universalism and sexual orientation
Unrequited love
Unsimulated sex in film
Upskirt
Urethral intercourse
Urethral sounding
Urogenital triangle
Urolagnia
Urology
Urophagia
Uterine serpin
Uterus
V
Vagina
Vaginal lubrication
Vaginismus
Valentine's Day
Vanilla sex
Venous leak
Venus 2000
Venus Butterfly
Vibrator
Violet Blue (author)
Virginity
Virginity pledge
Virginity test
Virility
Virtual sex
Voltaire
Voluntary Parenthood League
Vorarephilia
Voulez-vous coucher avec moi?
Voyeurism
Vulnerability and Care Theory of Love
Vulva
W
Wakashū
Walk of shame
Wanker
War rape
Warming lubricant
Wax play
Wayne DuMond
Westermarck effect
Wet and messy fetishism
Wet Lubricants
Wetlook
Where Do Teenagers Come From?
Why Is Sex Fun?
Wildlife contraceptive
Windmill Theatre
Womb veil
Women who have sex with women
World Association for Sexual Health
Wreath money
X
XBIZ
XBIZ Award
XRCO Award
XXXchurch.com
Y
Yoni
Youth Internet Safety Survey
Z
Zestra
Zina
Zona pellucida
Zooerasty
Zoophilia
Zoosadism
Zoroastrianism and sexual orientation
Zygote
Zygote intrafallopian transfer
See also
Outline of human sexuality
Sexuality-related lists
Human sexuality topics | Index of human sexuality articles | [
"Biology"
] | 5,323 | [
"Human sexuality",
"Behavior",
"Human behavior",
"Sexuality"
] |
7,338,342 | https://en.wikipedia.org/wiki/Homology%20manifold | In mathematics, a homology manifold (or generalized manifold)
is a locally compact topological space X that looks locally like a topological manifold from the point of view of homology theory.
Definition
A homology G-manifold (without boundary) of dimension n over an abelian group G of coefficients is a locally compact topological space X with finite G-cohomological dimension such that for any x∈X, the local homology groups
$H_{p,x}(X; G) = H_p(X, X \setminus \{x\}; G)$
are trivial unless p=n, in which case they are isomorphic to G. Here H is some homology theory, usually singular homology. Homology manifolds are the same as homology Z-manifolds.
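To see why every ordinary n-manifold satisfies this condition, one can compute the local homology at a point with singular homology; the following standard excision argument, sketched here for a point x with a Euclidean chart neighbourhood, gives exactly the required groups:

```latex
% Local homology of an n-manifold X at a point x (coefficients in G).
% Excision replaces X by a chart around x; the long exact sequence of
% the pair then reduces the computation to a sphere.
H_p(X, X \setminus \{x\}; G)
  \cong H_p(\mathbb{R}^n, \mathbb{R}^n \setminus \{0\}; G)
  \cong \tilde{H}_{p-1}(S^{n-1}; G)
  \cong
  \begin{cases}
    G & p = n,\\
    0 & p \neq n.
  \end{cases}
```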
More generally, one can define homology manifolds with boundary, by allowing the local homology groups to vanish
at some points, which are of course called the boundary of the homology manifold. The boundary of an n-dimensional first-countable homology manifold is an n−1 dimensional homology manifold (without boundary).
Examples
Any topological manifold is a homology manifold.
An example of a homology manifold that is not a manifold is the suspension of a homology sphere that is not a sphere.
Properties
If X×Y is a topological manifold, then X and Y are homology manifolds.
References
Algebraic topology
Generalized manifolds | Homology manifold | [
"Mathematics"
] | 260 | [
"Topology stubs",
"Fields of abstract algebra",
"Topology",
"Algebraic topology"
] |
7,338,545 | https://en.wikipedia.org/wiki/Quincha | Quincha is a traditional construction system that uses, fundamentally, wood and cane or giant reed forming an earthquake-proof framework that is covered in mud and plaster.
History
Quincha is a Spanish term widely known in Latin America, borrowed from Quechua qincha (kincha in Kichwa). Even though Spanish and Portuguese are closely related languages, in this case, the Portuguese equivalent is completely different: Pau-a-pique.
Historically, quincha has been utilized in the Spanish and Portuguese colonies throughout the different regions of the Americas. The construction technology is said to have existed for at least 8,000 years. In Peru, it is a popular construction design in the coastal regions. It is also adopted in urban centers after the incidence of earthquakes such as the case of the rebuilding of the city of Trujillo after the 1759 earthquake.
Construction
The framework or wattle is a main feature of traditional quincha. It is constructed by interweaving pieces of wood, cane, or bamboo and is covered with a mixture of mud and straw (or daub). It is then covered on both sides with a thin lime plaster finish, which serves as a sort of wall or ceiling panels.
Quincha is known for its flexibility since it can be shaped into different designs. For example, the builders of the church at San Jose at Ingenio, Nazca, modified quincha to construct its ornate twin-towered facade. Its resistance to earthquakes is attributed to the combination of heavy mass (which also provides thermal insulation) and a timber-frame structure. The lattice design of its framework also gives a quincha building stability, allowing it to shake during an earthquake without damage.
A modern iteration of quincha is called quincha metallica, a method developed by the Chilean architect Marcelo Cortés. In this system, steel and welded wire mesh are used instead of bamboo or cane to create the matrix that holds the mud, which is itself improved through the addition of lime to control the clay's expansion and improve its resistance to water.
See also
Wattle and daub
References
Soil-based building materials | Quincha | [
"Engineering"
] | 435 | [
"Architecture stubs",
"Architecture"
] |
7,338,650 | https://en.wikipedia.org/wiki/Process%20variable | In control theory, a process variable (PV; also process value or process parameter) is the current measured value of a particular part of a process which is being monitored or controlled. An example of this would be the temperature of a furnace. The current temperature is the process variable, while the desired temperature is known as the set-point (SP).
Control system use
Measurement of process variables is essential in control systems for controlling a process. The value of the process variable is continuously monitored so that control may be exerted.
Four commonly measured variables that affect chemical and physical processes are pressure, temperature, level and flow, but there are in fact a large number of measured quantities, which for international purposes are expressed in the International System of Units (SI).
The SP-PV error is used to exert control on a process so that the value of PV equals the value of the SP. A classic use of this is in the
PID controller.
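As a rough illustration of how the SP-PV error drives control, here is a minimal sketch of a proportional-only controller; the first-order "furnace" model, the gain and the ambient-loss term are invented for the example:

```python
# Minimal proportional controller acting on the SP-PV error.
# The process model, gain Kp and loss coefficient are illustrative
# assumptions, not values from any real controller.

def simulate(sp=200.0, pv=20.0, kp=0.2, steps=200, dt=1.0):
    """Drive the process variable (PV) toward the set-point (SP)."""
    for _ in range(steps):
        error = sp - pv           # the SP-PV error
        heat_input = kp * error   # control output (P term only)
        # toy first-order furnace: PV rises with heat, loses some to ambient
        pv += dt * (heat_input - 0.05 * (pv - 20.0))
    return pv

print(f"PV after control: {simulate():.1f}")  # settles below SP
```

Note that a proportional-only loop settles with a steady-state offset between PV and SP; the integral term of a full PID controller exists precisely to remove that offset.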
References
Control theory | Process variable | [
"Mathematics"
] | 193 | [
"Applied mathematics",
"Control theory",
"Dynamical systems"
] |
7,338,660 | https://en.wikipedia.org/wiki/Medical%20narcissism | Medical narcissism is a term coined by John Banja in his book, Medical Errors and Medical Narcissism.
Banja defines "medical narcissism" as the need of health professionals to preserve their self-esteem leading to the compromise of error disclosure to patients.
In the book he explores the psychological, ethical and legal effects of medical errors and the extent to which a need to constantly assert their competence can cause otherwise capable, and even exceptional, professionals to fall into narcissistic traps.
He claims that:
References
Practice of medicine
Narcissism | Medical narcissism | [
"Biology"
] | 118 | [
"Behavior",
"Narcissism",
"Human behavior"
] |
7,338,787 | https://en.wikipedia.org/wiki/Resources%2C%20Events%2C%20Agents | Resources, events, agents (REA) is a model of how an accounting system can be re-engineered for the computer age. REA was originally proposed in 1982 by William E. McCarthy as a generalized accounting model, and contained the concepts of resources, events and agents (McCarthy 1982).
REA is a standard approach in teaching accounting information systems (AIS). In business practice, REA has influenced IBM Scalable Architecture for Financial Reporting, REATechnology, and ISO 15944-4. Fallon and Polovina (2013) have shown how REA can also add value when modelling current ERP business processes by providing a tool which increases the understanding of the implementation and underlying data model.
Description
The REA model gets rid of many accounting objects that are not necessary in the computer age. Most visible of these are debits and credits—double-entry bookkeeping disappears in an REA system. Many general ledger accounts also disappear, at least as persistent objects; e.g., accounts receivable or accounts payable. The computer can generate these accounts in real time using source document records.
REA treats the accounting system as a virtual representation of the actual business. In other words, it creates computer objects that directly represent real-world-business objects. In computer science terms, REA is an ontology. The real objects included in the REA model are:
goods, services or money, i.e., resources
business transactions or agreements that affect resources, i.e., events
people or other human agencies (other companies, etc.), i.e., agents
These objects contrast with conventional accounting terms such as asset or liability, which are less directly tied to real-world objects. For example, a conventional accounting asset such as goodwill is not an REA resource.
There is a separate REA model for each business process in the company. A business process roughly corresponds to a functional department, or a function in Michael Porter's value chain. Examples of business processes would be sales, purchases, conversion or manufacturing, human resources, and financing.
At the heart of each REA model there is usually a pair of events, linked by an exchange relationship, typically referred to as the "duality" relation. One of these events usually represents a resource being given away or lost, while the other represents a resource being received or gained. For example, in the sales process, one event would be "sales"—where goods are given up—and the other would be "cash receipt", where cash is received. These two events are linked: a cash receipt occurs in exchange for a sale, and vice versa. The duality relationship can be more complex, e.g., in the manufacturing process, it would often involve more than two events (see Dunn et al. [2004] for examples).
REA systems have usually been modeled as relational databases with entity-relationship diagrams, though this is not compulsory.
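As a concrete, deliberately simplified sketch of such a relational model, the following schema captures resources, events, agents and the duality link for a sales process (the table and column names are illustrative assumptions, not part of any REA standard):

```python
# Illustrative REA schema for a sales process, using SQLite from Python.
# Table and column names are assumptions for the example, not a standard.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE agent    (agent_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE resource (resource_id INTEGER PRIMARY KEY, description TEXT);

-- one row per economic event, e.g. a 'sale' or a 'cash_receipt'
CREATE TABLE event (
    event_id    INTEGER PRIMARY KEY,
    kind        TEXT,
    resource_id INTEGER REFERENCES resource(resource_id),
    from_agent  INTEGER REFERENCES agent(agent_id),
    to_agent    INTEGER REFERENCES agent(agent_id)
);

-- duality: pairs the give event with the take event it is exchanged for
CREATE TABLE duality (
    give_event INTEGER REFERENCES event(event_id),
    take_event INTEGER REFERENCES event(event_id)
);
""")
```

In such a model, accounts receivable need not be stored at all: they can be derived on demand as sale events not yet paired with a cash receipt through the duality table, mirroring the point above about generating such accounts in real time from source records.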
The philosophy of REA draws on the idea of reusable Design Patterns, though REA patterns are used to describe databases rather than object-oriented programs, and are quite different from the 23 canonical patterns in the original design patterns book by Gamma et al. Research in REA emphasizes patterns (e.g., Hruby et al. 2006). Here is an example of the basic REA pattern shown as an E-R diagram:
The pattern is extended to encompass commitments (promises to engage in transactions, e.g., a sales order), policies, and other constructs. Dunn et al. (2004) provide a good overview at an undergraduate level (for accounting majors), while Hruby et al. (2006) is an advanced reference for computer scientists. Here is a diagram of an extended REA pattern (from Hruby et al. 2006).
REA is a continuing influence on the electronic commerce standard ebXML, with W. McCarthy actively involved in the standards committee. The competing XBRL GL standard however is at odds with the REA concept, as it closely mimics double-entry book-keeping.
REA is now recognised by The Open Group within the TOGAF standard (an industry standard enterprise framework), as one of the modelling tools which is useful for modelling business processes.
Further reading
Hruby, P., Kiehn, J., Scheller, C. V. (2006). Model-Driven Design Using Business Patterns. Springer.
Dunn, C., Cherrington, J. O., Hollander, A. S. (2004) Enterprise Information Systems: A Pattern-Based Approach. McGraw-Hill/Irwin.
Hollander, A. S., Denna, E., Cherrington, J. O. (1999) Accounting, Information Technology, and Business Solutions. McGraw-Hill/Irwin.
Geerts, L. G., McCarthy, E. W. (2002, Vol.3) An Ontological Analysis of the Primitives of the Extended-REA Enterprise Information Architecture. The International Journal of Accounting Information Systems, pp. 1–16
Geerts, L. G., McCarthy, E. W. (2000) The Ontological Foundation of REA Enterprise Information Systems. Working paper, Michigan State University
McCarthy, E. W. (July 1982) The REA Accounting Model: A Generalized Framework for Accounting Systems in a Shared Data Environment. The Accounting Review, pp. 554–78
A Reference Ontology for Accounting.
Fallon, R. L., Polovina, S. (2013) REA Analysis of SAP HCM; Some Initial Findings, Proceedings of the 3rd CUBIST (Combining and Uniting Business Intelligence with Semantic Technologies) Workshop pp. 31-43
Partridge, Chris, (2002) Steps towards the development of a reference ontology for accounting
REA Enterprise Source Ontology.
REA Technology.
References
Accounting systems
Database management systems | Resources, Events, Agents | [
"Technology"
] | 1,210 | [
"Information systems",
"Accounting systems"
] |
7,338,978 | https://en.wikipedia.org/wiki/Tortilla%20Wall | The Tortilla Wall is a term given to a 14-mile (22.5 kilometer) section of United States border fence between the Otay Mesa border crossing in San Diego, California, and the Pacific Ocean.
This "San Diego wall" was completed in the early 1990s. While there are other walls at various points along the border, the Tortilla Wall is the longest to date. No other wall sections have evolved distinct names, so the name is often used to describe the entire set of walled defensive structures.
The Tortilla Wall is marked with graffiti, crosses, photos, pictures and remembrances of migrants who died trying to illegally enter the United States.
Effectiveness
The effectiveness of the wall has been significant according to U.S. Congressional testimony by Representative Ed Royce:
...apprehensions along the region with a security fence dropped from 202,000 in 1992 to 9,000 in 1994.
The building of the Tortilla Wall is generally considered by Mexicans to be an unfriendly gesture.
It is a symbol of the controversial immigration issue. It is argued that the wall simply forces illegal border crossings to be moved to the more dangerous area of the Arizona desert.
Expansion of the wall
In 2006, the U.S. Congress passed the Secure Fence Act of 2006
which authorized spending $1.2 billion to build 700 miles (1,100 km) of additional fencing on the southern border facing Mexico.
Anecdotal wall stories
Tunnels under the wall are still a common way to illegally cross the border. Some tunnels are quite sophisticated. One such tunnel created by smugglers ran from Tijuana to San Diego and included a concrete floor as well as electricity. Other tunnels have included steel rails, while some tunnels are simply dirt passageways or connect to sewer or drain systems.
As a stunt, a circus cannon was placed on the south side of the wall and an acrobat was blasted over the wall into Border Field State Park in the U.S. He had his passport with him.
See also
List of walls
Roosevelt Reservation
References
External links
Otay Mesa Port of Entry
Bureau of Transportation Statistics Border Crossing Information
Demography
Mexico–United States border
Walls | Tortilla Wall | [
"Environmental_science"
] | 445 | [
"Demography",
"Environmental social science"
] |
7,338,992 | https://en.wikipedia.org/wiki/Thermally%20conductive%20pad | In computing and electronics, thermal pads (also called thermally conductive pad or thermal interface pad) are pre-formed rectangles of solid material (often paraffin wax or silicone based) commonly found on the underside of heatsinks to aid the conduction of heat away from the component being cooled (such as a CPU or another chip) and into the heatsink (usually made from aluminium or copper). Thermal pads and thermal compound are used to fill air gaps caused by imperfectly flat or smooth surfaces which should be in thermal contact; they would not be needed between perfectly flat and smooth surfaces. Thermal pads are relatively firm at room temperature, but become soft and are able to fill gaps at higher temperatures. Some, but not all, types of chip carriers include thermal pads in their design.
Thermal pads are an alternative to thermal paste as a thermal interface material. AMD and Intel have included thermal pads on the bottom of heatsinks shipped with some of their processors, as they are cleaner and generally easier to install. However, thermal pads conduct heat less effectively than a minimal amount of thermal paste.
See also
Computer cooling
Hot-melt adhesive
Phase-change material
Thermal adhesive
Thermal paste
List of thermal conductivities
References
Computer hardware cooling
Cooling technology
Heat conduction
Thermally conductive pad | Thermally conductive pad | [
"Physics",
"Chemistry"
] | 265 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Materials stubs",
"Materials",
"Thermodynamics",
"Heat conduction",
"Matter"
] |
7,339,097 | https://en.wikipedia.org/wiki/MPEG%20program%20stream | Program stream (PS or MPEG-PS) is a container format for multiplexing digital audio, video and more. The PS format is specified in MPEG-1 Part 1 (ISO/IEC 11172-1) and MPEG-2 Part 1, Systems (ISO/IEC standard 13818-1/ITU-T H.222.0). The MPEG-2 Program Stream is analogous and similar to ISO/IEC 11172 Systems layer and it is forward compatible.
Program streams are used on DVD-Video discs and HD DVD video discs, but with some restrictions and extensions. The filename extensions are VOB and EVO respectively.
Coding structure
Program streams are created by combining one or more Packetized Elementary Streams (PES), which have a common time base, into a single stream. The format is designed for reasonably reliable media such as disks, in contrast to the MPEG transport stream, which is intended for data transmission in which loss of data is likely. Program streams have variable-size records and make minimal use of start codes, which would make over-the-air reception difficult, but they have less overhead. The program stream coding layer allows only one program of one or more elementary streams to be packaged into a single stream, in contrast to the transport stream, which allows multiple programs.
MPEG-2 Program stream can contain MPEG-1 Part 2 video, MPEG-2 Part 2 video, MPEG-1 Part 3 audio (MP3, MP2, MP1) or MPEG-2 Part 3 audio. It can also contain MPEG-4 Part 2 video, MPEG-2 Part 7 audio (AAC) or MPEG-4 Part 3 (AAC) audio, but they are rarely used. The MPEG-2 Program stream has provisions for non-standard data (e.g. AC-3 audio or subtitles) in the form of so-called private streams. International Organization for Standardization authorized SMPTE Registration Authority, LLC as the registration authority for MPEG-2 format identifiers. It publishes a list of compression formats which can be encapsulated in MPEG-2 transport stream and program stream.
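To make the byte-level layout concrete, here is a minimal sketch that scans a program stream for start codes. It inspects only the 0x000001 prefix and the following stream ID byte, and deliberately skips real demultiplexing work (packet lengths, stuffing, and pack-header fields such as the system clock reference); the file name is a placeholder:

```python
# Minimal scan of an MPEG program stream for start codes.
# Only the 0x000001 prefix and the stream ID byte are interpreted;
# a real demuxer must also parse packet lengths and pack headers.

PACK_START = 0xBA      # pack header
SYSTEM_HEADER = 0xBB
PRIVATE_1 = 0xBD       # private stream 1 (e.g. AC-3 or subtitles on DVDs)

def scan(data: bytes):
    i = 0
    while i < len(data) - 3:
        if data[i] == 0 and data[i + 1] == 0 and data[i + 2] == 1:
            sid = data[i + 3]
            if sid == PACK_START:
                yield i, "pack header"
            elif sid == SYSTEM_HEADER:
                yield i, "system header"
            elif sid == PRIVATE_1:
                yield i, "private stream 1 PES"
            elif 0xC0 <= sid <= 0xDF:
                yield i, f"MPEG audio PES (0x{sid:02X})"
            elif 0xE0 <= sid <= 0xEF:
                yield i, f"video PES (0x{sid:02X})"
            i += 4
        else:
            i += 1

for offset, kind in scan(open("movie.vob", "rb").read()):  # placeholder file
    print(offset, kind)
```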
Coding details
See also
Elementary stream
MPEG transport stream
References
External links
MPEG-2
Official MPEG web site
BBC On MPEG
RFC 3555 - MIME Type Registration of RTP Payload Formats (video/MP2P, video/MP1S)
Digital container formats
MPEG
MPEG-2
ITU-T recommendations | MPEG program stream | [
"Technology"
] | 505 | [
"Multimedia",
"MPEG"
] |
7,339,428 | https://en.wikipedia.org/wiki/Muirfield%20Seamount | The Muirfield Seamount is a submarine mountain located in the Indian Ocean approximately 130 kilometres (70 nautical miles) southwest of the Cocos (Keeling) Islands. The Cocos Islands are an Australian territory, and therefore the Muirfield Seamount is within Australia's Exclusive Economic Zone (EEZ). The Muirfield Seamount is a submerged archipelago, approximately in diameter and below the surface of the sea. A 1999 biological survey of the seamount performed by the Australian Commonwealth Scientific and Industrial Research Organisation (CSIRO) revealed that the area is depauperate.
The Muirfield Seamount was discovered accidentally in 1973 when the cargo ship MV Muirfield (a merchant vessel named after Muirfield, Scotland), underway in waters charted as deep open ocean, suddenly struck an unknown object, resulting in extensive damage to her keel. In 1983, a Royal Australian Navy survey ship surveyed the area where Muirfield was damaged, and charted in detail this previously unsuspected hazard to navigation.
The dramatic accidental discovery of the Muirfield Seamount is often cited as an example of limitations in the vertical datum accuracy of some offshore areas as represented on nautical charts, especially on small-scale charts. More recently, in 2005 the submarine USS San Francisco ran into an uncharted seamount about 560 kilometers (350 statute miles) south of Guam at high speed, sustaining serious damage and killing one seaman.
See also
Graveyard Seamounts
Jasper Seamount
Muirfield Reef
Mud volcano
Sedlo Seamount
South Chamorro Seamount
References
External links
Physical oceanography
Seamounts of the Indian Ocean
Former islands from the last glacial maximum | Muirfield Seamount | [
"Physics"
] | 331 | [
"Applied and interdisciplinary physics",
"Physical oceanography"
] |
7,339,649 | https://en.wikipedia.org/wiki/Preventive%20action | A preventive action is a change implemented to address a weakness in a management system that is not yet responsible for causing nonconforming product or service.
Candidates for preventive action generally result from suggestions from customers or participants in the process, but preventive action is a proactive process to identify opportunities for improvement rather than a simple reaction to identified problems or complaints. Apart from the review of the operational procedures, preventive action might involve analysis of data, including trend and risk analyses and proficiency-testing results.
The focus for preventive actions is to avoid creating nonconformances, but also commonly includes improvements in efficiency. Preventive actions can address technical requirements related to the product or service supplied or to the internal management system.
Many organizations require that when opportunities to improve are identified or if preventive action is required, action plans are developed, implemented and monitored to reduce the likelihood of nonconformities and to take advantage of the opportunities for improvement. Additionally, a thorough preventive action process will include the application of controls to ensure that the preventive actions are effective.
In some settings, corrective action is used as an encompassing term that includes remedial actions, corrective actions and preventive actions.
Risk and decision making
Preventive actions rely upon the consequences of change. Once a change is made, the risks it introduces should be taken into consideration. In this case, preventive actions aim to minimize or, where possible, eliminate those risks.
Risks arise when little is known and understood about a particular situation. The chance of risk is minimized when one has better knowledge of the opportunities and consequences that could follow a situation. In order to reduce risk, a full analysis of the potential best and worst results is required. Before considering any plan, people should be aware of the consequences of both success and failure. Not only the internal aspects of an organisation - the capability, expertise and willingness of staff - but also the external aspects - stakeholders, customers, clients - should be assessed.
Strategic risk management defines an organisation's approach to risk in terms of conditions, attitudes and expertise. It identifies the possible areas of risk and assures that the proper approach is used. Operational risk management then ensures that the steps for minimizing or eliminating the risk are followed. A strategic approach to risk management includes studying the environment and being aware of the issues that must be considered in any situation.
Risks can occur due to a range of unexpected events outside of the organisation's control, such as political instability, changes in currency, or changes in the weather that could lead to a change in customer behavior.
Therefore, it is important for an organisation to know and understand what events could take place, where and why. Managers should prioritize preventive actions in order to anticipate these kinds of issues, especially focusing on:
Patterns of behavior
Accidents
Single events and errors
"Patterns of behavior" relates to the morale and motivation of people. The effects of human behavior (such as victimization, bullying, harassment and discrimination) could affect confidence, weakening the relationships meant to lead to performance.
Accidents could happen anytime and anywhere. Thus, an organisation has to assure that the accidents are kept to a minimal level. In this situation preventive actions should focus more on the nature and quality of the working environment, safety aspects and technology.
Single events and errors are very hard to be managed and impossible to be eliminated. The risk should be kept at a minimum through supervision systems, regular inspections and procedures.
In order to perform a change, an organisation has to do a forecast, deeply understanding where that event could lead and its consequences. Thus, the risk of a particular event and its probability of occurring should be clear. Using this information, one can understand and better make future decisions, proposal and initiatives.
Examples in management
Preventive actions differ from one organisation to another. Their number is vast; common examples include:
Assessing business trends
Monitoring processes
Notifications regarding any situation
Perform risk analysis
Assessing new technology
Regular training and checking
Recovery planning
Safety and security policies
Audit analysis
Technology safety and security
Nowadays, due to fast changes in engineering, there is a strong emphasis on enhancing the safety and security of technology, and more powerful safety analysis techniques are constantly being developed. As safety and security issues can occur at any time, intentionally or not, preventive strategies against loss or hacking are continually strengthened. These actions focus on the possible causes of a problem rather than on solving an already critical situation.
Computing
Computer security tries to defend computers by ensuring that their networks are not accessed or disrupted. It employs different tactics to protect against attackers, creating barriers or lines of defense through firewalls or encryption. However, losses also result from actions not executed properly (such as human errors) or from system errors among components.
Losses can be prevented through preventive strategies and tactics. Security analysts can identify possible attackers, highlighting their motives, capabilities and purpose. With proper knowledge, security experts can assess their own system and identify the most suitable defense strategy. Tracing is one of the methods used to find issues or deficiencies in a system.
Focusing first on strategy rather than tactics can be achieved by adopting a new system-theoretic causality model, recently developed to provide a more powerful approach to engineering for safety. Accident causality models are either traditional, attributing accidents to human error, or more complex, attributing them to faulty interactions between components and to system errors.
STAMP (System-Theoretic Accident Model and Processes) is a model of accident causality used in investigating potential accidents that can occur. In this case, issues are seen as results of inadequate control of the safety components used.
Nowadays more powerful systems that analyse safety have been created. STPA (System-Theoretic Process Analysis) uses such techniques, being based on the STAMP model of causality. Once the cause is identified, STPA examines the system, creating a proper scenario that could solve the issue.
Information systems
Regarding technology, not only the safety and security of computers and isolated devices can be threatened, but also that of entire complex information systems. As not all decisions made in an organisation are based on known rules, an analytical manager will examine the situation in detail and anticipate potential issues that could occur. However, many decisions can have a great impact on some aspect of the organisation and cannot be easily reversed.
Thus, modelling and simulation play the role of preventive actions, being applied early in the design of a process, when real factual data is not available. A model is an abstract representation that includes all aspects of a process, so that its potential impact can be better analysed. Such a representation can be produced before implementation through business process modeling (BPM).
On one hand, there are deterministic systems that rely on the input data and are capable of predicting accurate output. On the other hand, there are probabilistic systems, which do not forecast with complete accuracy. However, both deterministic and probabilistic systems need earlier actions that could prevent issues.
Analysis and design are among the most important activities performed before starting up a business. During analysis, one gains a better understanding of the potential of the business, with a diagrammatic model ensuring agreement between IT professionals and system users. System design determines the way in which the system will work, and is eventually followed by system building.
In society
Preventive healthcare
Preventive healthcare or preventive medicine refers to the measures taken in order to prevent diseases rather than to treat them once they occur. As there is a wide range of diseases in the world, there is also a wide variety of factors that influence these health disorders, such as environment, genetics and lifestyle. Preventive healthcare relies on anticipating diseases before they occur. These preventive methods include:
regular check-ups with a doctor, in order to identify risk factors or to monitor different diseases
screening, such as screening for chronic diseases (cancer, diabetes, heart disease)
vaccinations
maintaining a healthy lifestyle, through healthy eating and regular exercise
avoiding some harmful habits, such as tobacco or alcohol
life insurance
However, these traditional healthcare strategies are not the only actions that can prevent disease. A very important step is recognizing, and being aware of, certain health changes that can turn into real health threats. Examples of minor problems that people usually do not take seriously are numerous, such as involuntary weight loss, persistent coughs, body changes and other aches and pains. Upon noticing such a disorder, people can take action by consulting a specialist in order to keep the situation from getting worse.
Crime prevention
Crime prevention relies on the actions that defend and fight against criminals and crimes, such as murder, robbery, burglary, blackmail, hijacking or smuggling.
Criminologists focus on preventing the risks that can lead to crime rather than reacting to crimes that have already occurred.
There is a great number of techniques used in reducing crime. These could be split up into ones at a large scale, such as strategies implemented by a society or community, and others at a smaller one, such as personal security.
Examples of collective strategies preventing criminality:
Increasing capacity of the police in an area
Investing in jails
Monitoring areas
Supporting the exchange of information regarding violent activities and events
Enhancing security
Introducing violence preventing behavior in education
However, in most cases people tend to rely on their own personal skills and capabilities to help them prevent and defend against criminal attacks. For example:
Self-defense training
Securing goods
Avoiding wilderness
Anti-terrorism operation
Preventive actions taken against acts of terrorism can take the form of either a preventive lockdown (a preemptive lockdown to mitigate the risk) or an emergency lockdown (during or after the occurrence of the risk).
The August 2019 clampdown in Jammu and Kashmir is an example of a preventive lockdown intended to eliminate the risk to the lives of civilians from militants, violent protesters and stone-pelters.
See also
Corrective and Preventive Action (CAPA)
Preventive diplomacy
Risk management
Preventive lockdown
References
Quality management
Prevention | Preventive action | [
"Engineering"
] | 2,056 | [
"Systems engineering",
"Reliability engineering"
] |
7,339,924 | https://en.wikipedia.org/wiki/Copper%20pour | In electronics, the term copper pour refers to an area on a printed circuit board filled with copper (the metal used to make connections in printed circuit boards). Copper pour is commonly used to create a ground plane. Another reason for using copper pour is to reduce the amount of etching fluid used during manufacturing.
Features
A distinctive feature of copper pour is the backoff (or stand-off) - a minimum distance kept between the copper pour and any tracks or pads not belonging to the same electrical net. A copper pour therefore looks as if it flows around other components, with the exception of pads that are connected to the copper pour using thermal connections.
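The backoff can be thought of as a simple design-rule distance check. A minimal sketch follows, reducing pads to circles and checking one clearance value (the 0.3 mm figure is an arbitrary assumption, not a recommended design rule):

```python
# Backoff (stand-off) check: copper pour must keep a minimum clearance
# from any pad that is not on the same electrical net.
# Pads are reduced to circles; the clearance value is an assumption.
import math

BACKOFF_MM = 0.3

def pour_may_fill(point, pads):
    """True if the pour may place copper at `point` (x, y) without
    violating the backoff around any foreign-net pad."""
    for (px, py, radius, same_net) in pads:
        if same_net:
            continue  # pads on the pour's own net connect via thermal reliefs
        if math.hypot(point[0] - px, point[1] - py) < radius + BACKOFF_MM:
            return False
    return True

print(pour_may_fill((0.0, 0.0), [(0.2, 0.0, 0.1, False)]))  # False: too close
```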
Many early PCBs have a "hatched copper pour", sometimes called a "cherry pie lattice". PCB designers today almost always use solid areas of copper pour that completely cover the remaining area outside those tracks, pads, and stand-off regions.
While solid copper pour provides better resistive characteristics, hatched copper pour is used to balance the heat and dilatation on both sides of the board in order to avoid warping of certain substrates. Heating might cause gas bubbles between a solid copper pour and certain substrates. Furthermore, it may be possible to adjust the impedance of high-frequency traces by using hatched copper pour in order to achieve better signal quality.
See also
References
External links
Integrated Circuit Components
Printed Circuit Board Assembly
Printed circuit board manufacturing | Copper pour | [
"Engineering"
] | 281 | [
"Electrical engineering",
"Electronic engineering",
"Printed circuit board manufacturing"
] |
7,340,915 | https://en.wikipedia.org/wiki/Thorpe%20reaction | The Thorpe reaction is a chemical reaction described as a self-condensation of aliphatic nitriles catalyzed by base to form enamines. The reaction was discovered by Jocelyn Field Thorpe.
Thorpe–Ziegler reaction
The Thorpe–Ziegler reaction (named after Jocelyn Field Thorpe and Karl Ziegler), or Ziegler method, is the intramolecular modification with a dinitrile as a reactant and a cyclic ketone as the final reaction product after acidic hydrolysis. The reaction is conceptually related to the Dieckmann condensation.
References
External links
Thorpe-Ziegler reaction: 4-Phosphorinanone, 1-phenyl- Organic Syntheses, Coll. Vol. 6, p. 932 (1988); Vol. 53, p. 98 (1973) Link
Carbon-carbon bond forming reactions
Condensation reactions
Name reactions | Thorpe reaction | [
"Chemistry"
] | 190 | [
"Name reactions",
"Condensation reactions",
"Carbon-carbon bond forming reactions",
"Organic reactions"
] |
7,342,233 | https://en.wikipedia.org/wiki/Hassan%20Aref | Hassan Aref (Arabic: حسن عارف), (28 September 1950 – 9 September 2011) was the Reynolds Metals Professor in the Department of Engineering Science and Mechanics at Virginia Tech, and the Niels Bohr Visiting Professor at the Technical University of Denmark.
Education
He was educated at the University of Copenhagen Niels Bohr Institute, graduating in 1975 with a cand. scient degree in Physics and Mathematics. Subsequently he received a PhD degree in Physics from Cornell University in 1980.
Career
Academia and research
Prior to joining Virginia Tech, where he served as Dean of Engineering from 2003 to 2005, Aref was Head of the Department of Theoretical and Applied Mechanics at the University of Illinois at Urbana-Champaign for a decade, 1992-2003. Before that he was on the faculty of the University of California, San Diego, split between the Department of Applied Mechanics and Engineering Science and the Institute of Geophysics and Planetary Physics, 1985-1992. Simultaneously, he was Chief Scientist at the San Diego Supercomputer Center for three years, 1989-1992. Aref started his faculty career in the Division of Engineering at Brown University, 1980-85.
Editorial work
Throughout his career Aref was involved in editorial work. He was Associate Editor of Journal of Fluid Mechanics 1984-94, founding editor with David Crighton of Cambridge Texts in Applied Mathematics, and served on the editorial board of Theoretical and Computational Fluid Dynamics and as co-editor of Advances in Applied Mechanics. He served on the editorial boards of Physics of Fluids, Physical Review E, and Regular and Chaotic Dynamics.
Notable research
Fluid mechanics
Aref was the author of some 80 articles in leading journals in the field of fluid mechanics. He has also authored chapters in several books, edited two collections of papers, and given presentations at conferences and universities around the world. Aref received the 2000 Otto Laporte Award from the American Physical Society for this work and for his work on vortex dynamics for which he is also well known.
Positions on scientific committees
Aref served as chair of the Division of Fluid Dynamics of the American Physical Society. He chaired the US National Committee on Theoretical and Applied Mechanics and has served on advisory boards for several professional societies. He was a member of the Executive Committee of the Congress Committee of the International Union of Theoretical and Applied Mechanics (IUTAM), a member of the National Academies Board on International Scientific Organizations, and a member of the Board of the Society of Engineering Science. He served as Secretary for the Midwest Mechanics Seminar, 1994-2003.
Aref was president of the 20th International Congress of Theoretical and Applied Mechanics, held in Chicago in 2000. In the more than 70 years of these significant congresses, they have been held three times in the USA: in 1938 in Boston, MA, with MIT and Harvard University as the host institutions; in 1968 with Stanford University as the host; and in 2000 with a consortium led by the University of Illinois, Urbana-Champaign as the host.
Personal life and death
Hassan Aref was born in Alexandria, Egypt. Previously a citizen of Canada, he acquired U.S. citizenship in 1998. He died from an aortic dissection.
Honors and awards
2011 Geoffrey Ingram Taylor Medal
2011 Honorary Doctorate, Technical University of Denmark
2006 Niels Bohr Visiting Professor, Technical University of Denmark
2003 Reynolds Metals Professor, Virginia Tech
2001 Fellow, World Innovation Foundation
2000 Otto Laporte Award, American Physical Society "For his pioneering contributions to the study of chaotic motion in fluids, scientific computation, and vortex dynamics, and most notably for the development of the concept of chaotic advection."
2000 Fellow, American Academy of Mechanics
1994 Toshiba Keio Lecture, Keio University, Japan
1991 Westinghouse Distinguished Lectureship, University of Michigan
1991 Lecturer, Midwest Mechanics Seminar
1988 Fellow, American Physical Society "For the elucidation of chaotic motion in few-vortex problems and particle advection, and for the development of numerical methods based on many-vortex interactions."
1988 Stanley Corrsin Lectureship, The Johns Hopkins University
1986 Foreign Member, Danish Centre for Applied Mathematics and Mechanics
1985 Presidential Young Investigator Award, National Science Foundation
1975 NATO Fellowship; Cornell University Graduate Fellowship, 1975–1980
References
External links
Personal web page at Virginia Tech:
Personal web page at the University of Illinois at Urbana–Champaign:
Vortex Dynamics Blog by Hassan Aref
Hassan Aref's blog Blog by Hassan Aref
Author profile in the database zbMATH
1950 births
Cornell University alumni
University of Copenhagen alumni
University of California, San Diego faculty
Egyptian emigrants to the United States
University of Illinois Urbana-Champaign faculty
Virginia Tech faculty
American physicists
Egyptian physicists
Fellows of the American Physical Society
Fluid dynamicists
2011 deaths
Brown University faculty | Hassan Aref | [
"Chemistry"
] | 938 | [
"Fluid dynamicists",
"Fluid dynamics"
] |
7,342,571 | https://en.wikipedia.org/wiki/Selective%20calling | In a conventional, analog two-way radio system, a standard radio has noise squelch or carrier squelch, which allows a radio to receive all transmissions. Selective calling is used to address a subset of all two-way radios on a single radio frequency channel. Where more than one user is on the same channel (co-channel users), selective calling can address a subset of all receivers or can direct a call to a single radio. Selective calling features fit into two major categories—individual calling and group calling. Individual calls generally have longer time-constants: it takes more air-time to call an individual radio unit than to call a large group of radios.
Selective calling is akin to the use of a lock on a door. A radio with carrier squelch is unlocked and will let any signal in. Selective calling locks out all signals except ones with the correct "key", in this case a specific digital code. Selective calling systems can overlap; e.g. a radio may have CTCSS and DTMF calling.
Selective calling prevents the user from hearing others on a shared channel. It does not eliminate interference from co-channel users (other users on the same radio channel). If two users try to talk at the same time, the signal will be affected by the other party using the channel.
Some selective calling systems experience falsing. In other words, the decoder activates when a valid signal is not present. Falsing may come from a maintenance problem or poor engineering.
Group calling
In conventional FM two-way radio systems, the most common form of selective calling is CTCSS, which is based on a sub-audible tone. One implementation of this system is by Motorola and is called Private Line, or PL. Radios made by nearly any manufacturer will work acceptably with existing systems using CTCSS. The system allows groups of radios to remain muted while other users are talking on the channel. In business and industrial systems, as many as 50 sets of users could share the same channel without having to listen to calls for each other's staffs. In government systems, users can avoid having to hear users outside their own agency. (Government channels are usually separated by distance between user groups. Only one local user group is assigned to a channel.)
In uses where missed calls are allowable, selective calling can also hide the presence of interfering signals such as receiver-produced intermodulation. Receivers with poor specifications—such as scanners or low-cost mobile radios—cannot reject the unwanted signals on nearby channels in urban environments. The interference will still be present and will still degrade system performance but by using selective calling the user will not have to hear the noises produced by receiving the interference.
In the United States, Federal Communications Commission rules require users of selective calling to monitor the channel, i.e. switch to carrier squelch before transmitting. In other words, the user must monitor (listen) to make sure the channel is not in use by someone on another selective calling code before transmitting. To enforce this rule, base stations often have a monitor switch on the microphone. The push-to-talk button is split into two segments. One segment turns the selective calling off. The other segment of the button transmits. A mechanical interlock prevents the transmit button from being pressed until the monitor button is down. This is called "compulsory monitor before transmit". In mobile radios, microphones are stored in a hang-up box. When the microphone is pulled out of the hang-up, the radio reverts to carrier squelch and the selective calling feature is disabled. The user automatically monitors—verifies no one else is using the channel—by pulling the microphone out of the hang-up box. Hand-held radios sometimes have LED indicators that show when the channel is in use.
CTCSS
CTCSS (Continuous Tone-Coded Squelch System) superimposes any one of about 50 continuous audio tones on the transmitted signal, ranging from 67 to 254 Hz. At any time when the transmitter is on, the tone is encoded on the signal. CTCSS is often called PL tone (for Private Line, a trademark of Motorola), or simply tone squelch. General Electric's implementation of CTCSS is called Channel Guard (or CG). When RCA was in the land mobile radio business, their brand name was Quiet Channel (or QC). Tone codes may universally be described by their tone frequency (for example, 131.8 Hz).
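On the encode side, the tone is simply summed with the voice audio at a low level for the duration of the transmission. A minimal sketch follows; the choice of 131.8 Hz, the sample rate and the mixing level are example assumptions, not recommended settings:

```python
# Superimpose a CTCSS tone on voice audio (illustrative sketch only).
# Tone frequency, sample rate and mixing level are example assumptions.
import numpy as np

RATE = 8000        # samples per second
TONE_HZ = 131.8    # one of the standard CTCSS tone frequencies

def add_ctcss(voice: np.ndarray, level: float = 0.15) -> np.ndarray:
    """Return transmit audio: voice plus a continuous sub-audible tone."""
    t = np.arange(len(voice)) / RATE
    return voice + level * np.sin(2 * np.pi * TONE_HZ * t)

tx_audio = add_ctcss(np.zeros(RATE))  # one second of "silence" plus tone
```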
SelCall
Selcall (Selective Calling) transmits a burst of five in-band audio tones to initiate the conversation. This feature is common in European systems. In a simplex system, the 5-tone sequence just opens the speaker of the desired partner. In a repeater system, another CTCSS tone, tone-burst or 5-tone sequence is needed to activate the company's repeater, depending on the system's design. If the called radio is within reach of the sender, it answers the incoming call with its stored receipt tone. Systems using Selcall are sometimes referred to as CCIR or ZVEI systems, after the specific tone encoding schemes they use. On the European continent, the ZVEI scheme is common, while in Great Britain CCIR is widely used.
In the same way that a single CTCSS tone would be used on an entire group of radios, a single five-tone sequence is used in a group of radios. All radios also have their own private callnumber stored, to be reached for an individual conversation instead of a group call. In either way the radio speaker turns on as soon as the fifth tone of a valid sequence is decoded. In case of a group call, a short announcement tone is generated on the radios speaker. In case of a private call, the receipt tone is transmitted back to the sender and then the receive path is open. The speaker stays on until the carrier squelch detects that the carrier is no longer being received. At that point, the speaker mutes and the decoder resets. The receiver speaker turns off and remains muted until another valid five-tone sequence is decoded.
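The encode side of such a call can be sketched as below. The digit-to-frequency table is a deliberate placeholder (a real encoder would use the published CCIR or ZVEI table), and the per-tone duration is an assumption:

```python
# Five-tone SelCall encoder sketch. The digit-to-frequency table is a
# placeholder; a real system would substitute its CCIR or ZVEI table.
import numpy as np

RATE = 8000
TONE_MS = 70                                        # assumed tone duration
TABLE = {d: 1000.0 + 100.0 * d for d in range(10)}  # hypothetical tones

def encode(callnumber: str) -> np.ndarray:
    """Concatenate one sine burst per digit of the call number."""
    n = int(RATE * TONE_MS / 1000)
    t = np.arange(n) / RATE
    return np.concatenate(
        [np.sin(2 * np.pi * TABLE[int(d)] * t) for d in callnumber])

burst = encode("12345")  # audio for a five-tone sequence
```

Real encoders also substitute a dedicated repeat tone when the same digit occurs twice in succession, a detail this sketch omits.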
A similar tone format is used for one-way tone-and-voice radio paging in the US. It is informally known as Reach format.
DCS
DCS or Digital-Coded Squelch superimposes a continuous stream of FSK digital data, at 134.5 baud, on the transmitted signal. In the same way that a single CTCSS tone would be used on an entire group of radios, the same DCS code is used in a group of radios. DCS is also referred to as DPL tone (for Digital Private Line, a trademark of Motorola), and likewise, GE's implementation of DCS is referred to a Digital Channel Guard (or DCG).
Some equipment uses a 136 Hz square wave turn off code. The turn-off signal is sent for one- to three-tenths of a second (100–300 ms) at the end of a transmission to mute the audio so that a squelch crash is not heard. Radios with DCS options are generally compatible provided the radio's encoder-decoder will use the same code as radios in the existing system. Codes are usually described as three octal digits (for example, 054). Some DCS codes are inverted data of others: one code with the marks and spaces inverted may form a different valid DCS code (413 is equivalent to 054 inverted). Because of the use of the 136 Hz code, many receivers will decode a DCS signal when tuned to the CTCSS tone of 136.5 Hz (depending on receiver system tolerance).
XTCSS
XTCSS is the newest signaling technique and provides 99 codes with the added advantage of 'silent operation'. XTCSS-fitted radios are intended to enjoy more privacy and flexibility of operation. XTCSS is implemented using a combination of CTCSS and in-band signaling.
Tone burst or single tone
Tone burst is an obsolete method of selective calling where the radio transmits a single 0.5- to 1.5-second audio tone at the beginning of each transmission. This scheme existed before circuitry for CTCSS had been developed. This method was in wide use in the United States from the 1950s through the 1980s. Human spaceflight operations made frequent use of this method.
In the same way that a single CTCSS tone would be used on an entire group of radios, a single burst tone is used in a group of radios. The radio speaker turns on as soon as the tone is decoded and the speaker stays on until the carrier squelch detects that the carrier is no longer being received. At that point, the speaker mutes and the decoder resets. The receiver speaker turns off and remains muted until another valid burst tone is decoded.
In some cases, burst tones were used to select repeaters. By changing tones, the mobile radio would actuate a different repeater site. A typical tone scheme might use the tones 1,800 Hz, 2,000 Hz, 2,200 Hz, 2,400 Hz, and 2,552 Hz. This was the scheme used by most State of California agencies during the era when tone burst was in use. Some systems have been observed to use tones as low as 800 Hz. The default or standard five Motorola tones used for single tone format as of the 1980s: 1,350 Hz, 1,500 Hz, 1,650 Hz, 1,800 Hz, 1,950 Hz. These were identified in system documentation for a number of remote control equipment models as well as sales brochures for Motorola Syntor and Micor mobile radio Systems 90 accessories. A common tone burst frequency used by many amateur radio systems in Europe is 1,750 Hz.
In German public service radio networks, the calltones 1,750 Hz (Tone I) and 2,135 Hz (Tone II) are used to activate different repeaters or call an operator. To double the calling features, tones are sent either as a short call (1,000 ms) or a long call (> 2,000 ms).
In well-designed systems, repeaters or radios usually included an audio notch filter that reduced the volume of the tone at the speaker.
A variation to the single tone scheme was seen in one-way paging receivers. In some two-tone sequential systems, sending 4–8 seconds of the second tone pages all receivers which have a code including the second tone. This is sometimes referred to as long tone B. Receivers made by Plectron and often used to page volunteer firefighters use a long single tone. The decoder in the typical Plectron receiver would not decode the tone as a valid call unless it was present for at least two to four seconds (a very long variation of the burst tone).
Conventional analog individual calling
In individual calling, a specific radio is called. Most individual calling schemes involve a sequence of tones. Most schemes have a dozen to thousands of possible individual codes. As a practical matter, more than about two hundred radios on a single channel make an unusable level of traffic. So 1,000 individual calls will usually be more than needed.
Individual calls are usually event-based. For example, a tow truck may be called to give the driver an assignment or an ambulance may be called with an emergency call.
Some Motorola pagers could decode four different individual 5-tone signals (see SelCall above). Some fire departments used this feature to implement an individual signal (using the first of the four signals), a station based signal (i.e. paging everyone from one fire station, using the second signal), a region-based signal (i.e., everyone in the northwest region, using the third signal), and an all-call (every fireman, using the fourth signal).
DTMF
In dual-tone multi-frequency (DTMF) selective calling, the radio is alerted by a string of digits. Systems typically use 2 to 7 digits. These can be dialed from a traditional telephone keypad connected to a radio or may be generated as a string of DTMF digits by an automatic encoder. In some systems, a dispatching computer is connected to a DTMF encoder via a serial (RS-232) cable: the computer sends commands to the encoder, which generates a pre-defined digit string that is then sent to the transmitter.
On FM two-way radios, digits are usually sent at a level that equals two-thirds (66%) of system deviation. For example, in a ±5 kHz deviation system, the DTMF encoder is set to produce 3.3 kHz of transmitter deviation (modulation), or less. In systems with solid received signals, tone levels are sometimes set very low so radio users are not forced to listen to them at a high level. Keeping the DTMF tone modulation below the system maximum preserves the clean sine wave produced by the encoder. Sending digits at higher levels causes the transmitter's circuits that are designed to prevent over-modulation to distort or clip the waveform of the tones. Distorted waveforms may not decode properly or may include harmonics that cause falsing. Digits are usually sent at a minimum of 55 milliseconds (ms) in length with at least 55 ms of silence between each digit. Some decoders may require much longer-duration digits. DTMF digits consist of paired tones: a row tone and a column tone. The levels of row and column tones must be similar in order for a decoder to interpret them reliably.
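The row-and-column pairing and the 55 ms timing are easy to show in code. In the sketch below the DTMF frequency grid is the standard one, while the sample rate and amplitudes are example choices:

```python
# DTMF digit generator: each digit is one row tone plus one column tone,
# sent for 55 ms with 55 ms of silence between digits.
import numpy as np

RATE = 8000
ROWS = [697, 770, 852, 941]        # Hz
COLS = [1209, 1336, 1477, 1633]    # Hz
KEYS = "123A456B789C*0#D"          # keypad laid out row by row

def digit(ch: str, ms: int = 55) -> np.ndarray:
    i = KEYS.index(ch)
    row, col = ROWS[i // 4], COLS[i % 4]
    t = np.arange(int(RATE * ms / 1000)) / RATE
    # equal row and column levels, so a decoder can interpret the pair
    return 0.5 * np.sin(2 * np.pi * row * t) + 0.5 * np.sin(2 * np.pi * col * t)

def dial(digits: str) -> np.ndarray:
    gap = np.zeros(int(RATE * 0.055))
    return np.concatenate([np.concatenate([digit(d), gap]) for d in digits])

audio = dial("0123")  # the digit string sent to alert a radio
```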
Radios with DTMF decoders may monitor all system traffic or remain muted until called, depending on the system design. When the radio receives the correct digit string, it may momentarily buzz or sound a Sonalert. An indicator light may turn on and remain latched on. In most systems, the radio's receive audio would latch on after receiving a valid digit string if normally muted.
Many companies have trademarked names for their DTMF features. For example, Motorola calls their DTMF options, Touch Call. Because DTMF is a standardized format, most of the features are interchangeable. Generally, any radio that is equipped to decode the digit string 0-1-2-3 would be compatible with any system using DTMF.
Some systems use DTMF for push-to-talk unit ID. Each time the push-to-talk is pressed, the radio sends a string of DTMF digits. Each radio has a unique string of digits. This allows the base station to know who last called or who last pressed the push-to-talk.
Two-tone sequential
Two-tone sequential, also known as 1+1, is a selective calling method originally used in one-way, tone-and-voice paging receivers. Many companies have their own names for two-tone sequential options. General Electric Mobile Radio called it Type 99. Motorola called it Quik-Call II. For example, the encoder sends a single tone followed by 50 to 1,000 milliseconds of silence and then a second tone. Decoders look for a valid first tone followed by a valid second tone within a defined length of time (a time window). For example, a decoder detecting a valid first tone might allow up to 2 seconds for a valid second tone to be decoded. If no valid second tone is decoded within 2 seconds, the decoder resets and waits for another valid first tone.
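The decode window described above can be sketched as a small state machine. Tone detection itself (in practice a filter bank or Goertzel detector with a tolerance band) is abstracted into frequency events here, and the tone pair is illustrative:

```python
# Two-tone sequential decoder window logic. Tone detection is assumed
# to happen elsewhere; this models only the first/second tone window.
import time

WINDOW_S = 2.0   # maximum wait for a valid second tone, per the example

class TwoToneDecoder:
    def __init__(self, tone_a: float, tone_b: float):
        self.tone_a, self.tone_b = tone_a, tone_b
        self.armed_at = None          # time a valid first tone was decoded

    def on_tone(self, freq: float, now: float | None = None) -> bool:
        """Feed one detected tone; return True when a valid call completes."""
        now = time.monotonic() if now is None else now
        if self.armed_at is not None and now - self.armed_at > WINDOW_S:
            self.armed_at = None      # window expired: reset and start over
        if self.armed_at is None:
            if freq == self.tone_a:
                self.armed_at = now   # valid first tone: open the window
            return False
        if freq == self.tone_b:
            self.armed_at = None
            return True               # valid call: unmute / alert the user
        return False

dec = TwoToneDecoder(358.9, 433.7)    # illustrative tone pair
dec.on_tone(358.9, now=0.0)
assert dec.on_tone(433.7, now=1.0)    # second tone inside the window
```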
A widely varied set of tone plans or schemes is used for these systems. Some tone plans use tone frequencies which are close or overlap with tones used by other coding plans. For example, one plan might use very narrow filters and specify a tone of 702.3 Hz. Another may use a simple filter of capacitors and inductors and specify a tone of 700 Hz. A decoder might not be able to tell the difference between these two tones because they are so close in frequency. Systems generally use tones from a single, designed tone plan. Individual tone plans are engineered to avoid overlapping or nearby tone frequencies that may cause falsing. Some systems use CTCSS subaudible tones as the tones composing the two-tone sequence. For example, a two-tone sequence might consist of 123.0 Hz followed by 203.5 Hz.
On FM two-way radios, tones are usually sent at a level that equals two-thirds of system deviation. For example, in a ±5 kHz deviation system, the tone encoder is set to produce 3.3 kHz of transmitter deviation (modulation), or less. Because the tones are audible, in systems with solid received signals, tone levels are sometimes set lower so that radio users are not forced to listen to them at a high level. Keeping the tone modulation below system maximum preserves the clean sine wave produced by the encoder. Sending tones at higher levels causes the transmitter's circuits that are designed to prevent over-modulation to distort or clip the waveform of the tones. Distorted waveforms may not decode properly or may include harmonics that cause falsing. Tones are usually sent at a minimum of 500 milliseconds (ms) to 3 seconds (3,000 ms) in length.
Radios with two-tone sequential decoders may monitor all system traffic or remain muted until called, depending on the system design. When the radio receives the correct tones in the proper sequence, it may momentarily buzz or sound a Sonalert. An indicator light may turn on and remain latched on. In most systems, the radio's receive audio would latch on if normally muted. In systems using a combination of audible tone sequences and CTCSS, it is common practice to turn off the CTCSS encode while the two-tone sequence is sent. This means system users with CTCSS decoders do not have to listen to the paging tones.
Quik-Call I
Quik-Call I, also known as 2+2, is a selective calling method originally used in one-way paging receivers. The Quik-Call name is a trademark of Motorola. It sends a pair of tones followed by 50 to 1,000 milliseconds of silence and then a second pair of tones. Decoders look for a valid first tone pair followed by a valid second tone pair within a defined length of time, (a time window). For example, a decoder detecting a valid first tone pair might allow up to 2 seconds for a valid second tone pair to be decoded. If no valid second tone is decoded within 2 seconds, the decoder resets and waits for another valid first tone pair. The system is less susceptible to falsing because it employs pairs of tone decoders that must detect valid tone pairs simultaneously.
Quik-Call I is most famous for use in fire departments. The 1970s television show, Emergency!, depicted its use for base station ringdowns in the Los Angeles County Fire Department. In some systems, mobile radios had decoder options built into them. In Motorola mobile equipment, the decoders were housed in a box that bolted onto the radio control head. In the 1960s, it was also used to actuate tube-type receivers used to call out volunteer firefighters or to trigger sirens used to call out volunteers.
Radios with Quik-Call I decoders may monitor all system traffic or remain muted until called, depending on the system design. When the radio receives the correct tone pairs in the proper sequence, it may momentarily buzz or sound a Sonalert. An indicator light may turn on and remain latched on. In most systems, the radio's receive audio would latch on if normally muted. In the Emergency! television show, the decoder turned on the lighting, activated the overhead loudspeakers, activated the horn/klaxon, and probably turned off cooking appliances.
MDC-600 and MDC-1200
MDC, also known as MDC-1200 and MDC-600, is a low-speed Motorola data system using audio frequency shift keying, (AFSK). MDC-600 uses a 600 baud data rate. MDC-1200 uses a 1,200 baud data rate. Systems employ either one of the two baud rates. Mark and space tones are 1,200 Hz and 1,800 Hz. The data are sent in bursts over the radio system's voice channel.
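As a rough sketch of what audio frequency shift keying at these parameters looks like, the following Python fragment synthesizes a mark/space tone burst for an arbitrary bit pattern. It models only the 1,200/1,800 Hz tones at 1,200 baud described above; the real MDC framing, preamble, and error coding are not represented, and the sample rate and bit pattern are arbitrary choices.

```python
import math

# Illustrative AFSK tone keying at MDC-1200 parameters: 1,200 baud,
# mark = 1,200 Hz, space = 1,800 Hz. Real MDC framing/CRC is not modeled.
SAMPLE_RATE = 8000        # samples per second (arbitrary choice)
BAUD = 1200
MARK_HZ, SPACE_HZ = 1200.0, 1800.0

def afsk_samples(bits):
    samples = []
    phase = 0.0
    samples_per_bit = SAMPLE_RATE / BAUD
    for bit in bits:
        freq = MARK_HZ if bit else SPACE_HZ
        for _ in range(int(samples_per_bit)):
            phase += 2 * math.pi * freq / SAMPLE_RATE   # continuous-phase keying
            samples.append(math.sin(phase))
    return samples

burst = afsk_samples([1, 0, 1, 1, 0, 0, 1, 0])
print(len(burst), "samples for an 8-bit burst")   # 48 samples at 8 kHz
```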
Motorola radios with MDC options have an option allowing the radio to filter out data bursts from the receive audio. Instead of hearing the AFSK data, the user hears a short chirp from the radio speaker each time a data burst occurs. (The user must turn on this feature in the radio's option programming settings).
MDC signaling includes a number of features: unit ID, status buttons, emergency button, and selective calling. These features are programmable and could be used in any combination desired by the user. They are typically incorporated in high-end analog FM radios made by Motorola. In addition to Motorola, two other companies make compatible base station decoders for MDC-1200.
Other in-band signaling
Modat
Modat, also written MODAT, is an obsolete Motorola data system using a sequence of seven audio tones similar to the five-tone-sequential Selcall format. Some systems still use Modat today. Modat is used for unit ID and emergency buttons, rather than for selective calling. In a typical installation, each radio in a system is assigned a unique seven-tone code. Each time the radio's push-to-talk button is pressed, the radio transmits the seven tone sequence at the beginning of the transmission. To prevent the user from talking while the tone sequence is broadcast, the seven-tone sequence is played over the two-way radio receiver's speaker.
Modat tone sequences are described as either a six-digit or seven-character string. For example, a single Modat code could be described as either 698R124 or 6988124 (where the "R" tone indicated "repeat the last digit"). The data format coming from a Modat decoder is unclear.
Modat features are programmable and could be used in any combination desired by the user. For example, some systems use only push-to-talk unit ID or only emergency button. Others may use both. One setting that is adjustable is the length of time from push-to-talk press until the tone sequence starts. This delays the start of the tone sequence to allow systems with long time constants in CTCSS decoders or voting comparators to open an audio path. In addition to Motorola, other companies make add-on encoders that can modify a different brand of radio to work with a Modat system.
Modat unit ID systems are frequently heard from radios on Barbour television productions, such as the Cops television show, portraying southern California law enforcement agencies in the 1980s.
Out-of-band individual calling
Trunked radio systems have built-in unit ID and selective calling features. Each trunked system has its own unique features. See the article for a specific system to learn more.
Two-way radio systems using digital modulation schemes such as TDMA can embed unit ID and selective calling into the data stream multiplexed in parallel with the voice. See the article pertaining to a specific system to learn more.
References
Radio technology | Selective calling | [
"Technology",
"Engineering"
] | 4,849 | [
"Information and communications technology",
"Telecommunications engineering",
"Radio technology"
] |
7,343,721 | https://en.wikipedia.org/wiki/Table%20of%20AMD%20processors |
References
See also
List of AMD microprocessors
List of AMD CPU microarchitectures
List of AMD mobile microprocessors
List of AMD Athlon microprocessors
List of AMD Athlon XP microprocessors
List of AMD Athlon 64 microprocessors
List of AMD Athlon X2 microprocessors
List of AMD Duron microprocessors
List of AMD Sempron microprocessors
List of AMD Turion microprocessors
List of AMD Opteron microprocessors
List of AMD Epyc microprocessors
List of AMD Phenom microprocessors
List of AMD FX microprocessors
List of AMD Ryzen microprocessors
List of AMD processors with 3D graphics
List of Intel microprocessors
List of Intel CPU microarchitectures
Comparison of Intel processors
AMD
AMD Processors | Table of AMD processors | [
"Technology"
] | 210 | [
"Computing comparisons"
] |
7,343,746 | https://en.wikipedia.org/wiki/Adaptive%20capacity | Adaptive capacity relates to the capacity of systems, institutions, humans and other organisms to adjust to potential damage, to take advantage of opportunities, or to respond to consequences. In the context of ecosystems, adaptive capacity is determined by genetic diversity of species, biodiversity of particular ecosystems in specific landscapes or biome regions. In the context of coupled socio-ecological social systems, adaptive capacity is commonly associated with the following characteristics: Firstly, the ability of institutions and networks to learn, and store knowledge and experience. Secondly, the creative flexibility in decision making, transitioning and problem solving. And thirdly, the existence of power structures that are responsive and consider the needs of all stakeholders.
In the context of climate change adaptation, adaptive capacity depends on the inter-relationship of social, political, economic, technological and institutional factors operating at a variety of scales. Some of these are generic, and others are exposure-specific.
Benefits
Adaptive capacity confers resilience to perturbation, giving ecological and human social systems the ability to reconfigure themselves with minimum loss of function. In ecological systems, this resilience shows as net primary productivity and maintenance of biomass and biodiversity, and the stability of hydrological cycles. In human social systems it is demonstrated by the stability of social relations, the maintenance of social capital and economic prosperity.
Building adaptive capacity is particularly important in the context of climate change, where it refers to a latent capacity - in terms of resources and assets - from which adaptations can be made as required depending on future circumstances. Since future climate is likely to be different from the present climate, developing adaptive capacity is a prerequisite for the adaptation that can reduce the potential negative effects of exposure to climate change. In climate change, adaptive capacity, along with hazard, exposure and vulnerability, is a key component that contributes to risk, or the potential for harm or impact.
Characteristics
Adaptive capacity can be enhanced in a number of different ways. A report by the Overseas Development Institute introduces the local adaptive capacity framework (LAC), featuring five core characteristics of adaptive capacity. These include:
Asset base: the availability of a diverse range of key livelihood assets that allow households or communities to respond to evolving circumstances
Institutions and entitlements: the existence of an appropriate and evolving institutional environment that allows for access and entitlement to key assets and capitals
Knowledge and information: the ability households and communities have to generate, receive, assess and disseminate knowledge and information in support of appropriate adaptation options
Innovation: the system creates an enabling environment to foster innovation, experimentation and the ability to explore niche solutions in order to take advantage of new opportunities
Flexible forward-looking decision-making and governance: the system is able to anticipate, incorporate and respond to changes with regards to its governance structures and future planning.
Many development interventions - such as social protection programmes and efforts to promote social safety nets - can play important roles in promoting aspects of adaptive capacity.
Relationship between adaptive capacity, states and strategies
Adaptive capacity is associated with r and K selection strategies in ecology and with a movement from explosive positive feedback to sustainable negative feedback loops in social systems and technologies.
The Resilience Alliance shows how the replacement of the logistic-curve r-phase positive feedback by the K-strategy negative feedback is an important part of adaptive capacity. The r strategy is associated with situations of low complexity, high resilience, and growing potential. K strategies are associated with situations of high complexity, high potential and high resilience, but if the perturbations exceed certain limits, adaptive capacity may be exceeded and the system collapses into another so-called Omega state, of low potential, low complexity and low resilience.
In the context of climate change
Common enablers of adaptive capacity
An enabler, also known as a promoter or driver, represents a set of factors and conditions which can help to build and develop resilience. In a 2001 IPCC report focusing on impacts, adaptation, and vulnerability, six factors were identified as promoters of adaptive capacity. These characteristics contribute to the development and strengthening of adaptive capacity. For instance, a stable and prosperous economy is crucial, as it enables better management of the costs associated with adaptation. Generally, developed and wealthier nations are more prepared to face the impacts of climate change. Access to technology at various levels (local, regional, and national) and in all sectors is essential for staying informed about resource distribution, land use, and extraction practices. Additionally, clearly delineating roles and responsibilities for executing adaptation strategies is important at national, regional, and local levels. Discussion forums and consultations are established to disseminate climate information, ensuring clear communication and collaboration. Social institutions aim to distribute resources equitably, recognizing that power imbalances can hinder adaptive capacity. It's vital to protect existing systems with high adaptive capacity, such as traditional societies, from potential compromises resulting from modern development trajectories.
Common barriers of adaptive capacity
A barrier is an obstacle surmounted through collective efforts, creative management, mindset shifts, and adjustments in resource distribution, land uses, and institutions. Barriers are often confused with limits; however, the distinguishing feature between the two is that limits cannot be overcome. Barriers are crucial to consider when assessing the level of adaptive capacity within a group, community, and organization, as they block or hinder adaptation actions. Various types of barriers including historical, political, financial, and natural can be identified. They can be either internal or external and can block or hinder the implementation of an adaptation action and consequently lower adaptive capacity. An external barrier is a factor that falls outside an organization/community/individual's control. For example, a common external barrier is the absence of land available for individuals or enterprises to relocate while faced with a major climatic event such as flooding or wildfires. An internal barrier is typically affected by an organization/community/individual's beliefs and perceptions concerning climate change. For example, a common internal barrier is people's reluctance to relocate from flood-prone regions (owing to their livelihood dependence), the costs of land or property, or insufficient awareness regarding the potential flooding risks amid projected climate alterations.
Common organizational barriers include a disconnect between government recommendations/policies and concrete actions made by actors and organizations. Scholars point to other significant barriers that may impede adaptation action, like the lack of resources, financial incentives for long-term planning, and a lack of knowledge related to climate change adaptation. Another common barrier is skepticism regarding the severity and urgency of climate impacts. Local knowledge of technical, climate-adapted solutions is instrumental for organizational adaptation, but opportunities to harness this knowledge can be missed due to skeptical beliefs.
See also
Adaptability
References
Environmental economics
Social concepts | Adaptive capacity | [
"Environmental_science"
] | 1,353 | [
"Environmental economics",
"Environmental social science"
] |
7,343,944 | https://en.wikipedia.org/wiki/Connect%20Business%20Information%20Network | Connect Business Information Network, formerly known as MacNET, was a proprietary dial-up online network with a graphic user interface similar to AppleLink.
Launch
Mike Muller, a former VP of Apple Computer, launched MacNET in 1988. The mainframe end was programmed by Robert Lissner, the author of AppleWorks. The terminal software, also called MacNET, was sold through Macintosh software outlets and the network charged an hourly use fee.
Growth and decline
In the early years, customers first had to purchase disk-based software as well as pay hourly online fees. There were two groups of customers: one was members of the general public, while the second was special interest or corporate customers who would see additional dedicated content not available to the general public. The general public could make use of email, a 15-minute delayed stock price server, public message base, and download libraries. During the first year of operation, growth was significant, as MacNET represented the first time that a graphic user interface (GUI) was widely available to customers who had previously been limited to the command line interface of CompuServe and GEnie. At launch, a forum titled Mac Symposium managed by Stuart Gitlow was launched using the freeware and shareware libraries of LaserBoard and BMUG as a starting point. When the PC software became available, T. Bradley Tanner launched a comparable forum, PC Symposium.
Use grew rapidly during the first years, but there was significant competition from America Online when that service launched one year after MacNET had launched on the Macintosh platform. While AOL had comparable hourly rates, they offered their software free of charge, distributing it widely both by direct mail and by user group and magazine distribution. Eventually, the MacNET service name and the company name were changed to CONNECT and the company began to focus on its special interest and corporate customers. Forum management, using Lissner's back end interface, was much simpler on CONNECT than it was using Rainman, the back end interface for AOL's forums, thereby keeping CONNECT viable for longer than it might have been otherwise.
The software remained MacNET on the Mac side and PCNet came out for the PC market. By the early 2000s, Connect became web-based and closed within several years of the widespread adoption of the WWW standard.
References
Bulletin board systems | Connect Business Information Network | [
"Technology"
] | 470 | [
"Computing stubs",
"World Wide Web stubs"
] |
7,344,293 | https://en.wikipedia.org/wiki/Zero%20sound | Zero sound is the name given by Lev Landau in 1957 to the unique quantum vibrations in quantum Fermi liquids. The zero sound can no longer be thought of as a simple wave of compression and rarefaction, but rather a fluctuation in space and time of the quasiparticles' momentum distribution function. As the shape of Fermi distribution function changes slightly (or largely), zero sound propagates in the direction for the head of Fermi surface with no change of the density of the liquid. Predictions and subsequent experimental observations of zero sound was one of the key confirmation on the correctness of Landau's Fermi liquid theory.
Derivation from Boltzmann transport equation
The Boltzmann transport equation for general systems in the semiclassical limit gives, for a Fermi liquid,

∂n_p/∂t + ∇_p ε_p · ∇_r n_p − ∇_r ε_p · ∇_p n_p = I[n_p],

where n_p(r, t) is the density of quasiparticles (here we ignore spin) with momentum p and position r at time t, ε_p is the energy of a quasiparticle of momentum p, and I[n_p] is the collision functional (n⁰_p and ε⁰_p denote the equilibrium distribution and the energy in the equilibrium distribution). The semiclassical limit assumes that n_p fluctuates with angular frequency ω and wavelength λ, which are much lower than ε_F/ħ and much longer than ħ/p_F respectively, where ε_F and p_F are the Fermi energy and momentum respectively, around which n_p is nontrivial. To first order in the fluctuation from equilibrium, δn_p = n_p − n⁰_p, the equation becomes

∂δn_p/∂t + v_p · ∇_r δn_p − (∂n⁰_p/∂ε_p) v_p · ∇_r δε_p = I[δn_p],  with v_p = ∇_p ε⁰_p.
When the quasiparticle's mean free path is much shorter than the wavelength (equivalently, when the relaxation time τ satisfies ωτ ≪ 1), ordinary sound waves ("first sound") propagate with little absorption. But at low temperatures T (where τ and the mean free path scale as T⁻²), the mean free path exceeds the wavelength, and as a result the collision functional becomes negligible. Zero sound occurs in this collisionless limit.
In the Fermi liquid theory, the energy of a quasiparticle of momentum p is

ε_p = ε⁰_p + Σ_{p′} f(p, p′) δn_{p′},

where f(p, p′) is the appropriately normalized Landau parameter, and

δn_{p′} = n_{p′} − n⁰_{p′}.
The approximated transport equation then has plane wave solutions

δn_p(r, t) = δ(ε_p − ε_F) ν(p̂) e^{i(k·r − ωt)},

with the perturbation ν(p̂) on the Fermi surface given by

(ω − v_F k·p̂) ν(p̂) = v_F k·p̂ ∫ (dΩ′/4π) F(p̂, p̂′) ν(p̂′),

where F is the dimensionless Landau parameter.
This functional operator equation gives the dispersion relation for the zero sound waves with frequency ω and wave vector k. The transport equation is valid in the regime where ħω ≪ ε_F and ħk ≪ p_F.
In many systems, F(p̂, p̂′) only slowly depends on the angle between p̂ and p̂′. If F is an angle-independent constant F₀ with F₀ > 0 (note that this constraint is stricter than the Pomeranchuk instability), then the wave has the form ν(p̂) ∝ cos θ/(s − cos θ) and dispersion relation

(s/2) ln[(s + 1)/(s − 1)] − 1 = 1/F₀,

where s = ω/(k v_F) is the ratio of zero sound phase velocity to Fermi velocity and θ is the angle between p̂ and the propagation direction. If the first two Legendre components of the Landau parameter are significant, F₀ and F₁, the system also admits an asymmetric zero sound wave solution (where φ and θ are the azimuthal and polar angle of p̂ about the propagation direction) and a corresponding dispersion relation.
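For the isotropic case, the F₀-only dispersion relation above can be solved numerically for s. Below is a minimal Python sketch, assuming that standard form of the relation and using simple bisection (the bracketing interval and example F₀ values are arbitrary choices):

```python
import math

# Solve (s/2) * ln((s+1)/(s-1)) - 1 = 1/F0 for s = omega / (k * v_F), with s > 1.
# Assumes the isotropic (F0-only) zero-sound dispersion relation quoted above.
def lhs(s):
    return 0.5 * s * math.log((s + 1.0) / (s - 1.0)) - 1.0

def zero_sound_speed(F0, lo=1.0 + 1e-12, hi=1e6):
    target = 1.0 / F0
    for _ in range(200):              # bisection; lhs(s) decreases from +inf to 0
        mid = 0.5 * (lo + hi)
        if lhs(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# For weak repulsion (small F0 > 0) s stays just above 1; for strong repulsion
# s grows roughly as sqrt(F0 / 3).
for F0 in (0.5, 2.0, 10.0):
    print(F0, zero_sound_speed(F0))
```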
See also
Second sound
Third sound
References
Further reading
Statistical mechanics
Condensed matter physics
Lev Landau | Zero sound | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 546 | [
"Phases of matter",
"Materials science",
"Condensed matter physics",
"Statistical mechanics",
"Matter"
] |
7,344,320 | https://en.wikipedia.org/wiki/Fixed-point%20iteration | In numerical analysis, fixed-point iteration is a method of computing fixed points of a function.
More specifically, given a function f defined on the real numbers with real values and given a point x₀ in the domain of f, the fixed-point iteration is

x_{n+1} = f(x_n),  n = 0, 1, 2, …,

which gives rise to the sequence x₀, x₁, x₂, … of iterated function applications x₀, f(x₀), f(f(x₀)), … which is hoped to converge to a point x_fix. If f is continuous, then one can prove that the obtained x_fix is a fixed point of f, i.e., f(x_fix) = x_fix.
More generally, the function f can be defined on any metric space with values in that same space.
Examples
A first simple and useful example is the Babylonian method for computing the square root of a > 0, which consists in taking f(x) = (x + a/x)/2, i.e. the mean value of x and a/x, to approach the limit x = √a (from whatever starting point x₀ > 0). This is a special case of Newton's method quoted below.
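A minimal Python sketch of this iteration (the tolerance and starting point are arbitrary choices):

```python
def babylonian_sqrt(a, x0=1.0, tol=1e-12, max_iter=100):
    """Fixed-point iteration x_{n+1} = (x_n + a/x_n) / 2, converging to sqrt(a)."""
    x = x0
    for _ in range(max_iter):
        x_next = 0.5 * (x + a / x)
        if abs(x_next - x) < tol:   # stop once successive iterates agree
            return x_next
        x = x_next
    return x

print(babylonian_sqrt(2.0))   # ~1.4142135623730951
```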
The fixed-point iteration x_{n+1} = cos x_n converges to the unique fixed point of the function f(x) = cos x for any starting point x₀. This example does satisfy (at the latest after the first iteration step) the assumptions of the Banach fixed-point theorem. Hence, the error after n steps satisfies |x_n − x| ≤ qⁿ/(1 − q) |x₁ − x₀| = C qⁿ (where we can take q = 0.85, if we start from x₀ = 1). When the error is less than a multiple of qⁿ for some constant q, we say that we have linear convergence. The Banach fixed-point theorem allows one to obtain fixed-point iterations with linear convergence.
The requirement that f is continuous is important, as the following example shows. The iteration x_{n+1} = f(x_n) with f(x) = x/2 for x ≠ 0 and f(0) = 1 converges to 0 for all values of x₀. However, 0 is not a fixed point of the function f, as this function is not continuous at x = 0, and in fact f has no fixed points.
Attracting fixed points
An attracting fixed point of a function f is a fixed point x_fix of f with a neighborhood U of "close enough" points around x_fix such that for any value of x₀ in U, the fixed-point iteration sequence
x₀, f(x₀), f(f(x₀)), … is contained in U and converges to x_fix. The basin of attraction of x_fix is the largest such neighborhood U.
The natural cosine function ("natural" means in radians, not degrees or other units) has exactly one fixed point, and that fixed point is attracting. In this case, "close enough" is not a stringent criterion at all. To demonstrate this, start with any real number and repeatedly press the cos key on a calculator (checking first that the calculator is in "radians" mode). It eventually converges to the Dottie number (about 0.739085133), which is a fixed point. That is where the graph of the cosine function intersects the line y = x.
Not all fixed points are attracting. For example, 0 is a fixed point of the function f(x) = 2x, but iteration of this function for any value other than zero rapidly diverges. We say that the fixed point of f(x) = 2x is repelling.
An attracting fixed point is said to be a stable fixed point if it is also Lyapunov stable.
A fixed point is said to be a neutrally stable fixed point if it is Lyapunov stable but not attracting. The center of a linear homogeneous differential equation of the second order is an example of a neutrally stable fixed point.
Multiple attracting points can be collected in an attracting fixed set.
Banach fixed-point theorem
The Banach fixed-point theorem gives a sufficient condition for the existence of attracting fixed points. A contraction mapping function f defined on a complete metric space has precisely one fixed point, and the fixed-point iteration is attracted towards that fixed point for any initial guess x₀ in the domain of the function. Common special cases are that (1) f is defined on the real line with real values and is Lipschitz continuous with Lipschitz constant L < 1, and (2) the function f is continuously differentiable in an open neighbourhood of a fixed point x_fix, and |f′(x_fix)| < 1.
Although there are other fixed-point theorems, this one in particular is very useful because not all fixed-points are attractive. When constructing a fixed-point iteration, it is very important to make sure it converges to the fixed point. We can usually use the Banach fixed-point theorem to show that the fixed point is attractive.
Attractors
Attracting fixed points are a special case of a wider mathematical concept of attractors. Fixed-point iterations are a discrete dynamical system on one variable. Bifurcation theory studies dynamical systems and classifies various behaviors such as attracting fixed points, periodic orbits, or strange attractors. An example system is the logistic map.
Iterative methods
In computational mathematics, an iterative method is a mathematical procedure that uses an initial value to generate a sequence of improving approximate solutions for a class of problems, in which the n-th approximation is derived from the previous ones. Convergent fixed-point iterations are mathematically rigorous formalizations of iterative methods.
Iterative method examples
Convergence acceleration
The speed of convergence of the iteration sequence can be increased by using a convergence acceleration method such as Anderson acceleration and Aitken's delta-squared process. The application of Aitken's method to fixed-point iteration is known as Steffensen's method, and it can be shown that Steffensen's method yields a rate of convergence that is at least quadratic.
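A short illustrative sketch of Aitken's delta-squared process applied repeatedly to a fixed-point map (a basic Steffensen-type step); the iteration count and the guard against a vanishing denominator are simplifications:

```python
import math

def aitken_accelerate(f, x0, n=5):
    """Apply Aitken's delta-squared extrapolation to the fixed-point sequence of f."""
    x = x0
    for _ in range(n):
        x1, x2 = f(x), f(f(x))
        denom = x2 - 2.0 * x1 + x
        if denom == 0.0:                    # sequence already (numerically) converged
            return x
        x = x - (x1 - x) ** 2 / denom       # Aitken extrapolation (Steffensen step)
    return x

print(aitken_accelerate(math.cos, 1.0))     # converges quickly to the Dottie number
```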
Chaos game
The term chaos game refers to a method of generating the fixed point of any iterated function system (IFS). Starting with any point x₀, successive iterations are formed as x_{k+1} = f_r(x_k), where f_r is a member of the given IFS randomly selected for each iteration. Hence the chaos game is a randomized fixed-point iteration. The chaos game allows plotting the general shape of a fractal such as the Sierpinski triangle by repeating the iterative process a large number of times. More mathematically, the iterations converge to the fixed point of the IFS. Whenever x₀ belongs to the attractor of the IFS, all iterations x_k stay inside the attractor and, with probability 1, form a dense set in the latter.
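A minimal chaos-game sketch for the Sierpinski triangle IFS, in which each randomly chosen map moves the current point halfway toward one of three vertices (the vertex coordinates, starting point, and point count are arbitrary):

```python
import random

def chaos_game(n_points=10000, seed=0):
    """Chaos game for the Sierpinski triangle: random halfway maps toward the vertices."""
    random.seed(seed)
    vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]
    x, y = 0.25, 0.25                        # arbitrary starting point
    points = []
    for _ in range(n_points):
        vx, vy = random.choice(vertices)     # pick one IFS map at random
        x, y = (x + vx) / 2.0, (y + vy) / 2.0
        points.append((x, y))
    return points

pts = chaos_game()
print(len(pts), pts[-1])   # plotting pts reveals the Sierpinski triangle
```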
See also
Fixed-point combinator
Cobweb plot
Markov chain
Infinite compositions of analytic functions
Rate of convergence
References
Further reading
External links
Fixed-point algorithms online
Fixed-point iteration online calculator (Mathematical Assistant on Web)
Root-finding algorithms
Iterative methods | Fixed-point iteration | [
"Mathematics"
] | 1,251 | [
"Theorems in mathematical analysis",
"Fixed-point theorems",
"Theorems in topology"
] |
7,344,637 | https://en.wikipedia.org/wiki/Desert%20%28particle%20physics%29 | In the Grand Unified Theory of particle physics (GUT), the desert refers to a theorized gap in energy scales, between approximately the electroweak energy scale–conventionally defined as roughly the vacuum expectation value or VeV of the Higgs field (about 246 GeV)–and the GUT scale, in which no unknown interactions appear.
It can also be described as a gap in the lengths involved, with no new physics below 10^−18 m (the currently probed length scale) and above 10^−31 m (the GUT length scale).
The idea of the desert was motivated by the observation of approximate, order of magnitude, gauge coupling unification at the GUT scale. When the values of the gauge coupling constants of the weak nuclear, strong nuclear, and electromagnetic forces are plotted as a function of energy, the 3 values appear to nearly converge to a common single value at very high energies. This was one theoretical motivation for Grand Unified Theories themselves, and adding new interactions at any intermediate energy scale generally disrupts this gauge coupling unification. The disruption arises from the new quantum fields (the new forces and particles), which introduce new coupling constants and new interactions that modify the existing Standard Model coupling constants at higher energies. The fact that the convergence in the Standard Model is actually inexact, however, is one of the key theoretical arguments against the Desert, since making the unification exact requires new physics below the GUT scale.
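The near-convergence can be illustrated with the standard one-loop renormalization-group running of the three Standard Model gauge couplings. The sketch below uses the well-known one-loop coefficients and rough values of the inverse couplings at the Z mass; it is a back-of-the-envelope illustration only, not a precision analysis.

```python
import math

# One-loop running: 1/alpha_i(mu) = 1/alpha_i(M_Z) - (b_i / 2*pi) * ln(mu / M_Z).
# Standard Model one-loop coefficients (GUT-normalized hypercharge):
B = {"U(1)_Y": 41.0 / 10.0, "SU(2)_L": -19.0 / 6.0, "SU(3)_c": -7.0}
# Approximate inverse couplings at M_Z (illustrative values only).
ALPHA_INV_MZ = {"U(1)_Y": 59.0, "SU(2)_L": 29.6, "SU(3)_c": 8.5}
M_Z = 91.19  # GeV

def alpha_inv(group, mu_gev):
    return ALPHA_INV_MZ[group] - B[group] / (2.0 * math.pi) * math.log(mu_gev / M_Z)

for mu in (1e3, 1e10, 1e15, 1e16):
    print(mu, {g: round(alpha_inv(g, mu), 1) for g in B})
# Around 10^14-10^16 GeV the three values approach each other but do not meet exactly.
```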
Standard model particles
All the Standard Model particles were discovered well below the energy scale of approximately 10^12 eV, or 1 TeV. The heaviest Standard Model particle is the top quark, with a mass of approximately 173 GeV.
The desert
Above these energies, desert theory with the assumption of supersymmetry predicts no particles will be discovered until reaching the scale of approximately 10^25 eV. According to the theory, measurements of TeV-scale physics at the Large Hadron Collider (LHC) and the near-future International Linear Collider (ILC) will allow extrapolation all the way up to the GUT scale.
The particle desert's negative implication is that experimental physics will simply have nothing more fundamental to discover, over a very long period of time. Depending on the rate of the increase in experiment energies, this period might be a hundred years or more. Presumably, even if the energy achieved in the LHC, ~10^13 eV, were increased by up to 12 orders of magnitude, this would only result in producing more copious amounts of the particles known today, with no underlying structure being probed. The aforementioned timespan might be shortened by observing the GUT scale through a radical development in accelerator physics, or by a non-accelerator observational technology, such as examining tremendously high energy cosmic ray events, or another, yet undeveloped technology.
Alternatives to the desert exhibit particles and interactions unfolding with every few orders of magnitude increase in the energy scale.
MSSM desert
With the Minimal Supersymmetric Standard Model, adjustment of parameters can make the grand unification exact. This unification is not unique.
Such exact gauge unification is a generic feature of supersymmetric models, and remains a major theoretical motivation for developing them. Such models automatically introduce new particles ("superpartners") at a new energy scale associated with the breaking of the new symmetry, ruling out the conventional energy desert. They can, however, contain an analogous "desert" between the new energy scale and the GUT scale.
Mirror matter desert
Scenarios like the Katoptron model can also lead to exact unification after a similar energetic desert. If the known neutrino masses are due to a seesaw mechanism, the new heavy neutrino states must have masses below the GUT scale in order to produce the observed O(1 meV) masses. Indicative examples of the order of magnitude of the corresponding masses and fermion mixing parameters in accordance with experimental data have been calculated within the context of katoptrons.
Evidence
As of 2019, the LHC has excluded the existence of many new particles up to masses of a few TeV, or about 10x the mass of the top quark. Other indirect evidence in favor of a large energy desert for a certain distance above the electroweak scale (or even no particles at all beyond this scale) includes:
The absence of any observed proton decays, which has already ruled out many new physics models that can produce them up to (and beyond) the GUT scale.
Precision measurements of known particles and processes, such as extremely rare particle decays, have already indirectly probed energy scales up to 1 PeV (10^6 GeV) without finding any confirmed deviations from the Standard Model. This significantly constrains any new physics that might exist below those energies.
Research from experimental data on the cosmological constant, LIGO noise, and pulsar timing, suggests it's very unlikely that there are any new particles with masses much higher than those which can be found in the standard model or the Large Hadron Collider. However, this research has also indicated that quantum gravity or perturbative quantum field theory will become strongly coupled before 1 PeV, leading to other new physics in the TeVs.
The observed Higgs boson decay modes and rates are so far consistent with the Standard Model.
Counter evidence
So far there is no direct evidence of new fundamental particles with masses between the electroweak and GUT scale, consistent with the desert. However, there are some theories about why such particles might exist:
The leading theoretical explanations of neutrino masses, the various seesaw models, all require new heavy neutrino states below the GUT scale.
Both weakly interacting massive particles (WIMP) and axion models for dark matter require the new, long-lived particles to have masses far below the GUT scale.
In the Standard Model, there is no physics which stabilizes the Higgs boson mass to its actual observed value. Since the actual value is far below the GUT scale, whatever new physics ultimately does stabilize it must become apparent at lower energies too.
Precision measurements have produced several outstanding discrepancies with the Standard Model in recent years. These include anomalies in certain B meson decays and a discrepancy in the measured value of the Muon g-2 (anomalous magnetic moment). Depending on the results of currently ongoing experiments, these effects may already indicate the existence of unknown new particles below about 100 TeV.
References
External links
Grand Unified Theory
Physics beyond the Standard Model | Desert (particle physics) | [
"Physics"
] | 1,317 | [
"Unsolved problems in physics",
"Physics beyond the Standard Model",
"Grand Unified Theory",
"Particle physics"
] |
7,344,825 | https://en.wikipedia.org/wiki/Rubottom%20oxidation | The Rubottom oxidation is a useful, high-yielding chemical reaction between silyl enol ethers and peroxyacids to give the corresponding α-hydroxy carbonyl product. The mechanism of the reaction was proposed in its original disclosure by A.G. Brook with further evidence later supplied by George M. Rubottom. After a Prilezhaev-type oxidation of the silyl enol ether with the peroxyacid to form the siloxy oxirane intermediate, acid-catalyzed ring-opening yields an oxocarbenium ion. This intermediate then participates in a 1,4-silyl migration (Brook rearrangement) to give an α-siloxy carbonyl derivative that can be readily converted to the α-hydroxy carbonyl compound in the presence of acid, base, or a fluoride source.
Reaction mechanism
History
In 1974, three independent groups reported on the reaction now known as the Rubottom oxidation: A.G. Brook, A. Hassner, and G.M. Rubottom. Considerable precedent for the reaction already existed. For instance, it was known as early as the 1930s that highly enolizable β-dicarbonyl compounds would react with peroxyacids, although it was not until the 1950s and 60s that the products were shown to be α-hydroxy β-dicarbonyl compounds.
Considerable work was done by A.G. Brook during the 1950s on the mechanisms of organosilicon migrations, which are now known as Brook rearrangements. In 1974, C.H. Heathcock described the ozonolysis of silyl enol ethers to give a carboxylic acid product via oxidative cleavage, where silyl migrations were observed as side reactions and exclusively in the case of a bicyclic system.
General features
The original implementations of the Rubottom oxidation featured the peroxyacid meta-chloroperoxybenzoic acid (mCPBA) as the oxidant in dichloromethane (DCM), in the case of Hassner and Brook, and hexanes for Rubottom. While the reaction has been tweaked and modified since 1974, mCPBA is still commonly used as the oxidant with slightly more variation in the solvent choice. DCM remains the most common solvent followed by various hydrocarbon solvents including pentane and toluene. Notably, the reaction proceeds at relatively low temperatures and heating beyond room temperature is not necessary. Low temperatures allow the standard Rubottom oxidation conditions to be amenable with a variety of sensitive functionalities making it ideal for complex molecule synthesis (See synthetic examples below). Silyl enol ether substrates can be prepared regioselectively from ketones or aldehydes by employing thermodynamic or kinetic control to the enolization prior to trapping with the desired organosilicon source (usually a chloride or triflate e.g. TBSCl or TBSOTf). As illustrated by the synthetic examples below, silyl enol ethers can be isolated prior to exposure to the reaction conditions, or the crude material can be immediately subjected to oxidation without isolation. Both acyclic and cyclic silyl enol ether derivatives can be prepared in this way and subsequently be used as substrates in the Rubottom oxidation. Below are some representative Rubottom oxidation products synthesized in the seminal papers.
In 1978, Rubottom showed that siloxy 1,3 dienes, derived from acyclic or cyclic enones could also serve as substrates for the Rubottom oxidation to forge α-hydroxy enones after treatment with triethyl ammonium fluoride. These substrates give a single regioisomer under the reaction conditions due to the electron-rich nature of the silyl enol pi-bond (See synthesis of Periplanone B below).
Modifications and improvements
The Rubottom oxidation has remained largely unchanged since its initial disclosure, but one of the major drawbacks of standard conditions is the acidic environment, which can lead to unwanted side reactions and degradation. A simple sodium bicarbonate buffer system is commonly employed to alleviate this issue, which is especially problematic in bicyclic and other complex molecule syntheses (see synthetic examples). The introduction of chiral oxidants has also allowed for the synthesis of enantiopure α-hydroxy carbonyl derivatives from their corresponding silyl enol ethers. The first example of an enantioselective Rubottom oxidation was published by F.A. Davis in 1987 and showcased the Davis chiral oxaziridine methodology (Davis oxidation) to give good yields but modest enantiomeric excesses. In 1992, K.B. Sharpless showed that the asymmetric dihydroxylation conditions developed in his group could be harnessed to give either (R)- or (S)- α-hydroxy ketones from the corresponding silyl enol ethers depending on which Chinchona alkaloid-derived chiral ligands were employed. The groups of Y. Shi and W. Adam published another enantioselective variant of the Rubottom oxidation in 1998 using the Shi chiral ketone in the presence of oxone in a buffered system to furnish α-hydroxy ketones in high yield and high enantiomeric excess. The Adam group also published another paper in 1998 utilizing manganese(III)-(Salen)complexes in the presence of NaOCl (bleach) as the oxidant and 4-phenylpyridine N-oxide as an additive in a phosphate buffered system. This methodology also gave high yields and enentioselectivities for silyl enol ethers as well as silyl ketene acetals derived from esters.
Along with chiral oxidants, variants of mCPBA have been examined. Stankovic and Espenson published a variation of the Rubottom oxidation where methyltrioxorhenium is used as a catalytic oxidant in the presence of stoichiometric hydrogen peroxide. This methodology gives acyclic and cyclic α-hydroxy ketones in high yield with a cheap, commercially available oxidant. An inherent problem with mCPBA is its inability to oxidize silyl ketene acetals. In order to synthesize α-hydroxy esters, different oxidants are needed such as NaOCl (see above), lead(IV) acetate, or a hypofluorous acid-acetonitrile (HOF-ACN) complex. The Rubottom group found that lead(IV) acetate in DCM or benzene gave good yields of acyclic and cyclic α-hydroxy esters after treatment of the crude reaction mixture with triethylammonium fluoride. Later, the highly electrophilic HOF-ACN complex was used by S. Rozen to oxidize a variety of electron rich silyl enol ethers, silyl ketene acetals, and bis(silyl acetals), derived from carboxylic acids, in good yields at or below room temperature.
Applications in synthesis
The following examples represent only a small portion of syntheses that highlight the use of the Rubottom oxidation to install an important α-hydroxy functionality. Some of the major features of the following syntheses include the use of buffered conditions to protect sensitive substrates and the diastereoselective installation of the α-hydroxy group due to substrate controlled facial bias. For more examples see refs
The Rubottom oxidation was used in the synthesis of periplanone B, a sex pheromone excreted by the female American cockroach. The synthesis employed an anionic oxy-Cope rearrangement coupled to a Rubottom oxidation. After heating in the presence of potassium hydride (KH) and 18-crown-6 (18-C-6) to effect the anionic oxy-Cope, the enolate intermediate was trapped with trimethylsilyl chloride (TMSCl). The silyl enol ether intermediate could then be treated with mCPBA under Rubottom oxidation conditions to give the desired α-hydroxy carbonyl compound that could then be carried on to (±)-periplanone B and its diastereomers to prove its structure.
Brevisamide, a proposed biosynthetic precursor for a polyether marine toxin, was synthesized by Ghosh and Li, one step of which is a Rubottom oxidation of the cyclic silyl enol ether under buffered conditions. Chiral chromium catalyst B was developed by the Jacobsen group and confers high levels of enantio- and diastereoselectivity. The stereocenters conveniently set in the Diels-Alder reaction direct the oxidation to the less hindered face, giving a single diastereomer, which could then be carried on in 14 more steps to Brevisamide.
Wang and coworkers developed a robust, kilogram-scale synthesis of the potent derivative 2S-hydroxymutilin from pleuromutilin, an antibiotic produced by various species of basidiomycetes. Basic hydrolysis to remove the hydroxyl ester moiety of pleuromutilin yielded mutilin. Subsequent treatment with lithium hexamethyldisilazide (LiHMDS) and TMSCl gave the TMS-protected silyl enol ether, which was immediately subjected to an acetic acid- (HOAc) pyridine- (Py) buffered Rubottom oxidation before acidic hydrolysis to afford 2S-hydroxymutilin. This highly optimized sequence features two important aspects. First, the authors originally generated the silyl enol ether using triethylamine, which gave a mixture of the desired kinetic product, (shown below) the undesired thermodynamic product, and hydrolysis back to mutilin. The authors blamed the formation of the acidic triethylammonium (pKa = 10.6) byproduct for the undesired side products and remedied this by using the LiHMDS to exclusively form the desired kinetic product with no acid-catalyzed side reactions due to the significantly lower acidity of the protonated product (pKa = 26). Second, while oxidation occurred from the desired convex face of the silyl enol ether, the authors saw a significant number of overoxidation products that they attributed to the stability of the oxocarbenium ion intermediate under sodium bicarbonate buffered conditions. They hypothesized that the increased lifetime of the intermediate species would allow for over oxidation to occur. After a significant amount of optimization, it was found that an HOAc/Py buffer trapped the oxocarbenium intermediate and prevented overoxidation to exclusively give 2S-hydroxymutilin after hydrolysis of the silyl protecting groups.
Ovalicin, fumagillin, and their derivatives exhibit strong anti-angiogenesis properties and have seen numerous total syntheses since their isolation. Corey and Dittami reported the first total synthesis of racemic ovalicin in 1985 followed by two asymmetric syntheses reported in 1994 by Samadi and Corey which featured a chiral pool strategy from L-quebrachitol and an asymmetric dihydroxylation, respectively. In 2010, Yadav and coworkers reported a route that intercepted the Samadi route from the chiral pool starting material D-ribose. A standard Rubottom oxidation gives a single stereoisomer due to substrate control and represents the key stereogenic step in the route to the Samadi ketone. Once synthesized, the Samadi ketone could be elaborated to (−)-ovalicin through known steps.
Velutinol A was first synthesized by Isaka and coworkers. The authors show that the high regioselectivity of this reaction is directed by the hydroxyl group syn to the ring-fusion proton. Reactions where the stereochemistry of the hydroxyl group is inverted saw lower regioselectivity, and removal of the hydroxyl group gave the exclusive formation of the other regioisomer. It is likely that the close proximity of the hydroxyl group in the syn isomer acidifies the ring-fusion proton through hydrogen-bonding interactions, thus facilitating regioselective deprotonation by triethylamine. The silyl enol ether was then treated with excess mCPBA to facilitate a “double” Rubottom oxidation to give the exo product with both hydroxyl groups on the outside of the fused ring system. This dihydroxy product was then transformed into Velutinol A in three additional steps.
The Clive group utilized the Rubottom oxidation in the synthesis of an advanced intermediate for their degradation studies of the cholesterol-lowering fungal metabolite mevinolin. This interesting sequence features the addition of excess n-butyllithium (BuLi) in the presence of lithium diisopropylamide (LDA) for full conversion of the bicyclic ketone derivative to the corresponding silyl enol ether. Without BuLi the authors report a maximum yield of only 72%. Subsequent buffered Rubottom oxidation conditions with sodium bicarbonate in ethyl acetate afforded the α-hydroxy ketone as a single diastereomer.
The Falk group synthesized various derivatives of phosphatidyl-D-myo-inositol to aid in the study of the various phosphatidylinositol 3-kinase (PI3K) cell signaling pathways. Their route to the collection of substrate analogs exploits a substrate-controlled stereoselective Rubottom oxidation using dimethyl dioxirane(DMDO) as the oxidant and catalytic camphorsulfonic acid (CSA) to aid in hydrolysis. For protecting groups see ref
Problems and shortcomings
While the Rubottom oxidation generally gives good yields and is highly scalable (see 2S-hydroxymutilin synthesis), there are still some problems with the reaction. As mentioned above, the acidic reaction conditions are not tolerated by many complex substrates, but this can be abrogated with the use of buffer systems. Poor atom economy is also a major issue with the reaction because it requires stoichiometric oxidant, which generates large amounts of waste. Peroxides can also be dangerous to work with. mCPBA is known to detonate from shock or sparks.
α-Hydroxylation of related compounds
Although silyl enol ethers of aldehydes and ketones are the traditional substrates for the Rubottom oxidation, as mentioned above, silyl ketene acetals and bis (silyl acetals) can be oxidized to their α-hydroxy ester or carboxylic acid derivatives using lead(IV) acetate or hypofluorous acid-acetonitrile (HOF–ACN). However, these α-hydroxylations do not proceed via silyl enol ether intermediates and are therefore not technically Rubottom oxidations. Various oxidants can be used to oxidize many of these carbonyl derivatives after they are converted to their respective enolate or related anion. Some common oxidants are peroxy acids, molecular oxygen, and hypervalent iodine reagents.
References
Bibliography
Kürti, L.; Czakó, B. (2005) Strategic Applications of Named Reactions in Organic Synthesis, Elsevier, .
Li, J.J. (2009) Name Reactions: A Collection of Detailed Mechanisms and Synthetic Applications, 4th Edition, Springer,
External links
Organic Chemistry Portal
Myers' Handouts
Organic oxidation reactions
Name reactions | Rubottom oxidation | [
"Chemistry"
] | 3,354 | [
"Name reactions",
"Organic oxidation reactions",
"Organic redox reactions",
"Organic reactions"
] |
7,344,897 | https://en.wikipedia.org/wiki/Vladimir%20Vari%C4%87ak | Vladimir Varićak (sometimes also spelled Vladimir Varičak; March 1, 1865 – January 17, 1942) was a Croatian Serb mathematician and theoretical physicist.
Biography
Varićak, an ethnic Serb, was born on March 1, 1865, in the village of Švica near Otočac, Austrian Empire (present-day Croatia). He studied physics and mathematics at the University of Zagreb from 1883 to 1887. He made his PhD in 1889 and got his habilitation in 1895. In 1899 he became professor of mathematics in Zagreb, where he gave lectures until his death in 1942.
From 1903 to 1908 he wrote on hyperbolic geometry (or Bolyai–Lobachevskian geometry). In 1910, following a 1909 publication of Sommerfeld, he applied hyperbolic geometry to the special theory of relativity. Sommerfeld, using the imaginary form of Minkowski space, had shown in his 1909 paper that the Einstein formula for combination of velocities is most clearly understandable as a formula for triangular addition on the surface of a sphere of imaginary radius. Varićak reinterpreted this result as showing that rapidity combines by the triangle rule in hyperbolic space. This is a fundamental result for the hyperbolic theory which was demonstrated later by other approaches by Robb (1911) and Borel (1913). The 1910 papers also dealt with several applications of the hyperbolic theory to optics. In 1911 Varićak was invited to speak to the Deutsche Mathematiker-Vereinigung in Karlsruhe on his work. He continued to develop the hyperbolic reinterpretation of Einstein's theory collecting his results in 1924 in a textbook, Darstellung der Relativitätstheorie im drei-dimensionalen Lobatschefskijschen Raume (Relativity in Three-Dimensional Lobachevski Space), now available in English. In the period 1909 to 1913 Varićak had correspondence with Albert Einstein concerning rotation and length contraction where Varićak's interpretations differed from those of Einstein. Concerning length contraction Varićak said that in Einstein's interpretation the contraction is only an "apparent" or a "psychological" phenomenon due to the convention of clock measurements whereas in the Lorentz theory it was an objective phenomenon. Einstein published a brief rebuttal, saying that his interpretation of the contraction was closer to Lorentz's.
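In the collinear case the hyperbolic reading is easy to verify: rapidities, the hyperbolic angles w = artanh(v/c), simply add under Einstein's velocity composition. A small illustrative check in Python (the chosen velocities are arbitrary):

```python
import math

# For collinear velocities, Einstein's composition law
#   beta = (beta1 + beta2) / (1 + beta1 * beta2)
# corresponds to adding rapidities w = artanh(beta), the hyperbolic angles
# Varicak worked with; the general (non-collinear) case becomes the
# hyperbolic triangle rule mentioned above.
def compose(beta1, beta2):
    return (beta1 + beta2) / (1.0 + beta1 * beta2)

beta1, beta2 = 0.6, 0.7
w_sum = math.atanh(beta1) + math.atanh(beta2)
print(compose(beta1, beta2), math.tanh(w_sum))   # both ~0.9155
```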
Walter (1999) re-examined Minkowski's non-Euclidean geometry. He begins by analysis of "the tip of a four-dimensional velocity vector" and notes Minkowski's equations where "both hypersurfaces provide a basis for a well-known model of non-Euclidean space of constant negative curvature, popularized by Helmholtz." In fact it is known as the hyperboloid model of hyperbolic geometry.
Walter goes on to say:
More than any other mathematician, Varićak devoted himself to the development of the non-euclidean style [of relativity], unfolding Minkowski's image of velocity-vector relations in hyperbolic space, and recapitulating a variety of results in terms of hyperbolic functions. The use of hyperbolic trigonometry was shown by Varićak to entail significant notational advantages. For example, he relayed the interpretation put forth by Hergloz and Klein of the Lorentz transformation as a displacement in hyperbolic space, and indicated simple expressions for proper time and the aberration of light in terms of a hyperbolic argument.
Varićak is also known as a high school teacher of Milutin Milanković and of Mileva Marić, the first wife of Einstein, and as a university instructor of Đuro Kurepa.
Varićak made scholarly contributions on the life and work of Ruđer Bošković (1711–1787). These are listed in the biography of Kurepa (1965) cited below.
Of special interest for the history of relativity is that Varićak also edited and published a little-known 1755 paper of Boscovich in Latin entitled "On absolute motion – if it is possible to distinguish it from relative motion" ("Of Space and Time"). Varićak said that the paper "contains many remarkably clear and radical ideas regarding the relativity of space, time and motion."
Although having a Serbian origin and being an Orthodox and later Greek Catholic, he disputed and dismissed the thesis that Ruđer Bošković was a Serb.
He was a member of the Yugoslav Academy of Sciences and Arts, the Czech Academy of Sciences, the Serbian Academy of Sciences and Arts, the Croatian Society for Natural Science, and the Yugoslav Mathematical Society.
See also
Ehrenfest paradox
Publications
Varićak, V. (1908) "Zur nichteuklidischen analytischen Geometrie", Proceedings of the International Congress of Mathematicians, Bd. II, SS. 213–26.
Wikisource translation: Application of Lobachevskian Geometry in the Theory of Relativity
Wikisource translation: The Theory of Relativity and Lobachevskian Geometry
Wikisource translation: The Reflection of Light at Moving Mirrors
Wikisource translation: On Ehrenfest's Paradox
Wikisource translation: On the Non-Euclidean Interpretation of the Theory of Relativity
Varićak, V.(1924) Darstellung der Relativitatstheorie im drei=dimensionalen Lobatschefskijschen Raume, Zagreb (Narodni Novini); English translation (2007) Relativity in three dimensional Lobachevski Space, A.F. Kracklauer translator , at Amazon.com.
A complete list of Varićak's publications on all subjects is given in the following paper:
Notes
External links
"Vladimir Varićak" at the University of Zagreb
Croatian mathematicians
Croatian physicists
Yugoslav mathematicians
Yugoslav physicists
Mathematicians from Austria-Hungary
Mathematical physicists
Relativity theorists
Faculty of Science, University of Zagreb alumni
Academic staff of the University of Zagreb
Rectors of the University of Zagreb
1865 births
1942 deaths
People from Otočac
Members of the Croatian Academy of Sciences and Arts
Serbs of Croatia
Members of the Serbian Academy of Sciences and Arts | Vladimir Varićak | [
"Physics"
] | 1,261 | [
"Relativity theorists",
"Theory of relativity"
] |
7,345,401 | https://en.wikipedia.org/wiki/Trional | Trional (Methylsulfonal) is a sedative-hypnotic and anesthetic drug with GABAergic actions. It has similar effects to sulfonal, except it is faster acting.
History
Trional was prepared and introduced by Eugen Baumann and Alfred Kast in 1888.
Cultural references
Appeared in Agatha Christie's Murder on the Orient Express, And Then There Were None, and other novels such as John Bude's The Lake District Murder as a sleep-inducing sedative; and in In Search of Lost Time (Sodom and Gomorrah) by Marcel Proust as a hypnotic. Sax Rohmer also references trional in his novel Dope.
See also
Sulfonal
Tetronal
References
Hypnotics
GABAA receptor positive allosteric modulators
Sulfones | Trional | [
"Chemistry",
"Biology"
] | 169 | [
"Hypnotics",
"Behavior",
"Functional groups",
"Sulfones",
"Sleep"
] |
7,345,405 | https://en.wikipedia.org/wiki/Kendall%27s%20notation | In queueing theory, a discipline within the mathematical theory of probability, Kendall's notation (or sometimes Kendall notation) is the standard system used to describe and classify a queueing node. D. G. Kendall proposed describing queueing models using three factors written A/S/c in 1953 where A denotes the time between arrivals to the queue, S the service time distribution and c the number of service channels open at the node. It has since been extended to A/S/c/K/N/D where K is the capacity of the queue, N is the size of the population of jobs to be served, and D is the queueing discipline.
When the final three parameters are not specified (e.g. M/M/1 queue), it is assumed K = ∞, N = ∞ and D = FIFO.
First example: M/M/1 queue
An M/M/1 queue means that the time between arrivals is Markovian (M), i.e. the inter-arrival time follows an exponential distribution with parameter λ. The second M means that the service time is also Markovian: it follows an exponential distribution with parameter μ. The last parameter is the number of service channels, which is one (1).
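As an illustrative sketch (not part of the notation itself; the function below simply evaluates the standard textbook steady-state formulas for the M/M/1 queue, which hold only when λ < μ), the notation maps directly onto closed-form performance metrics:

def mm1_metrics(lam, mu):
    # Textbook steady-state results for an M/M/1 queue.
    # lam: arrival rate (lambda), mu: service rate; requires lam < mu.
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = lam / mu               # server utilization
    L = rho / (1 - rho)          # mean number of customers in the system
    W = L / lam                  # mean time in the system (Little's law)
    Lq = L - rho                 # mean number waiting in the queue
    Wq = Lq / lam                # mean waiting time in the queue
    return rho, L, W, Lq, Wq

print(mm1_metrics(lam=2.0, mu=3.0))  # approx. (0.667, 2.0, 1.0, 1.333, 0.667)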
Description of the parameters
In this section, we describe the parameters A/S/c/K/N/D from left to right.
A: The arrival process
A code describing the arrival process. The codes used include M ("Markovian" or "memoryless": a Poisson process, i.e. exponentially distributed inter-arrival times), D (deterministic, i.e. fixed inter-arrival times), Ek (Erlang distribution with k phases), and G or GI (a general, i.e. arbitrary, distribution).
S: The service time distribution
This gives the distribution of the time taken to serve a customer. Common notations include M (exponentially distributed service times), D (deterministic service times), Ek (Erlang distribution with k phases), and G (a general distribution).
c: The number of servers
The number of service channels (or servers). The M/M/1 queue has a single server and the M/M/c queue c servers.
K: The number of places in the queue
The capacity of queue, or the maximum number of customers allowed in the queue. When the number is at this maximum, further arrivals are turned away. If this number is omitted, the capacity is assumed to be unlimited, or infinite.
Note: This is sometimes denoted c + K where K is the buffer size, the number of places in the queue above the number of servers c.
N: The calling population
The size of the calling source, i.e. the population from which the customers come. A small population will significantly affect the effective arrival rate because, as more customers accumulate in the system, fewer free customers are available to arrive. If this number is omitted, the population is assumed to be unlimited, or infinite.
D: The queue's discipline
The service discipline or priority order in which jobs in the queue, or waiting line, are served. Common disciplines include FIFO/FCFS (first in, first out / first come, first served; the default), LIFO/LCFS (last in, first out), SIRO (service in random order), PQ (priority queueing), and PS (processor sharing).
Note: An alternative notation practice is to record the queue discipline before the population and system capacity, with or without enclosing parenthesis. This does not normally cause confusion because the notation is different.
References
Mathematical notation
Single queueing nodes | Kendall's notation | [
"Mathematics"
] | 593 | [
"nan"
] |
7,346,789 | https://en.wikipedia.org/wiki/Ceres%20Connection | The Ceres Connection is a cooperative program between MIT's Lincoln Laboratory and the Society for Science and the Public dedicated to promoting science education. It names asteroids discovered under the LINEAR project after teachers and contesting students who performed outstandingly in the following Society for Science and the Public competitions: the Discovery Channel Young Scientist Challenge, the Intel Science Talent Search, the Intel International Science and Engineering Fair.
Since 2002, over 200 asteroids have been named each year through this program.
See also
Naming of asteroids
Lincoln Near-Earth Asteroid Research
Society for Science and the Public
External links
Official page
Asteroids
Science competitions | Ceres Connection | [
"Technology"
] | 118 | [
"Science and technology awards",
"Science competitions"
] |
7,346,797 | https://en.wikipedia.org/wiki/Polyimide%20foam | Polyimide foam is a foam originally designed for NASA by Inspec Foams Inc. under the brand name Solimide. Its primary purposes are as an insulator (such as for rocket fuels) and acoustic damper. NASA engineered the product to have relatively low outgassing (a problem in vacuum and aboard spacecraft), desirable thermal and acoustic performance, as well as uniformity during distribution and application. Typical uses of the foam include ducting, duct/piping insulation, structural components, and strengthening of hollow components while remaining lightweight. In addition to thermal and acoustic properties, polyimide foam is fire resistant, lightweight and non-toxic.
See also
References
Foams | Polyimide foam | [
"Chemistry"
] | 140 | [
"Foams",
"Chemical process stubs"
] |
7,347,241 | https://en.wikipedia.org/wiki/Spreading%20activation | Spreading activation is a method for searching associative networks, biological and artificial neural networks, or semantic networks. The search process is initiated by labeling a set of source nodes (e.g. concepts in a semantic network) with weights or "activation" and then iteratively propagating or "spreading" that activation out to other nodes linked to the source nodes. Most often these "weights" are real values that decay as activation propagates through the network. When the weights are discrete this process is often referred to as marker passing. Activation may originate from alternate paths, identified by distinct markers, and terminate when two alternate paths reach the same node. However brain studies show that several different brain areas play an important role in semantic processing.
Spreading activation in semantic networks was invented as a model in cognitive psychology to explain the fan-out effect.
Spreading activation can also be applied in information retrieval, by means of a network of nodes representing documents and terms contained in those documents.
Cognitive psychology
As it relates to cognitive psychology, spreading activation is the theory of how the brain iterates through a network of associated ideas to retrieve specific information. The spreading activation theory presents the array of concepts within our memory as cognitive units, each consisting of a node and its associated elements or characteristics, all connected together by edges. A spreading activation network can be represented schematically, in a sort of web diagram with shorter lines between two nodes meaning the ideas are more closely related and will typically be associated more quickly to the original concept. In memory psychology, the spreading activation model holds that people organize their knowledge of the world based on their personal experiences, which in turn form the network of ideas that is the person's knowledge of the world.
When a word (the target) is preceded by an associated word (the prime) in word recognition tasks, participants seem to perform better in the amount of time that it takes them to respond. For instance, subjects respond faster to the word "doctor" when it is preceded by "nurse" than when it is preceded by an unrelated word like "carrot". This semantic priming effect with words that are close in meaning within the cognitive network has been seen in a wide range of tasks given by experimenters, ranging from sentence verification to lexical decision and naming.
As another example, if the original concept is "red" and the concept "vehicles" is primed, they are much more likely to say "fire engine" instead of something unrelated to vehicles, such as "cherries". If instead "fruits" was primed, they would likely name "cherries" and continue on from there. The activation of pathways in the network has everything to do with how closely linked two concepts are by meaning, as well as how a subject is primed.
Algorithm
A directed graph is populated by nodes [1...N], each having an associated activation value A[i], which is a real number in the range [0.0 ... 1.0]. An edge connects a source node [i] with a target node [j]. Each edge has an associated weight W[i, j], usually a real number in the range [0.0 ... 1.0].
Parameters:
Firing threshold F, a real number in the range [0.0 ... 1.0]
Decay factor D, a real number in the range [0.0 ... 1.0]
Steps:
Initialize the graph setting all activation values A [ i ] to zero. Set one or more origin nodes to an initial activation value greater than the firing threshold F. A typical initial value is 1.0.
For each unfired node [ i ] in the graph having an activation value A [ i ] greater than the node firing threshold F:
For each Link [ i, j ] connecting the source node [ i ] with target node [ j ], adjust A [ j ] = A [ j ] + (A [ i ] * W [ i, j ] * D) where D is the decay factor.
If a target node receives an adjustment to its activation value so that it would exceed 1.0, then set its new activation value to 1.0. Likewise maintain 0.0 as a lower bound on the target node's activation value should it receive an adjustment to below 0.0.
Once a node has fired it may not fire again, although variations of the basic algorithm permit repeated firings and loops through the graph.
Nodes receiving a new activation value that exceeds the firing threshold F are marked for firing on the next spreading activation cycle.
If activation originates from more than one node, a variation of the algorithm permits marker passing to distinguish the paths by which activation is spread over the graph.
The procedure terminates when either there are no more nodes to fire or in the case of marker passing from multiple origins, when a node is reached from more than one path. Variations of the algorithm that permit repeated node firings and activation loops in the graph, terminate after a steady activation state, with respect to some delta, is reached, or when a maximum number of iterations is exceeded.
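A minimal Python sketch of the basic algorithm above (the function name and data layout are illustrative choices, not from the article; it implements the single-firing variant without marker passing):

def spreading_activation(edges, origins, firing_threshold=0.3, decay=0.85):
    # edges: dict mapping a source node to a list of (target, weight) pairs,
    # with weights in [0.0, 1.0]. Returns the final activation of every node.
    activation = {}
    for src, targets in edges.items():
        activation.setdefault(src, 0.0)
        for tgt, _w in targets:
            activation.setdefault(tgt, 0.0)
    for node in origins:
        activation[node] = 1.0          # origin nodes start fully activated
    fired = set()
    frontier = [n for n in origins if activation[n] > firing_threshold]
    while frontier:
        node = frontier.pop()
        if node in fired:
            continue                    # basic variant: a node fires only once
        fired.add(node)
        for target, weight in edges.get(node, []):
            adjusted = activation[target] + activation[node] * weight * decay
            activation[target] = min(1.0, max(0.0, adjusted))  # clamp to [0, 1]
            if activation[target] > firing_threshold and target not in fired:
                frontier.append(target)
    return activation

network = {"red": [("fire engine", 0.9), ("cherries", 0.6)],
           "fire engine": [("vehicles", 0.8)]}
print(spreading_activation(network, origins=["red"]))
# {'red': 1.0, 'fire engine': 0.765, 'cherries': 0.51, 'vehicles': 0.5202}

Priming "red" activates "fire engine" strongly and "vehicles" more weakly, mirroring the example from the cognitive psychology section above.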
Examples
See also
Connectionism
Notes
References
Nils J. Nilsson. "Artificial Intelligence: A New Synthesis". Morgan Kaufmann Publishers, Inc., San Francisco, California, 1998, pages 121-122
Rodriguez, M.A., " Grammar-Based Random Walkers in Semantic Networks", Knowledge-Based Systems, 21(7), 727-739, , 2008.
Karalyn Patterson, Peter J. Nestor & Timothy T. Rogers "Where do you know what you know? The representation of semantic knowledge in the human brain", Nature Reviews Neuroscience 8, 976-987 (December 2007)
Semantics
Psycholinguistics
Memory
Artificial intelligence
Algorithms
Search algorithms
Graph algorithms | Spreading activation | [
"Mathematics"
] | 1,178 | [
"Algorithms",
"Mathematical logic",
"Applied mathematics"
] |
7,347,295 | https://en.wikipedia.org/wiki/Avaz%20Twist%20Tower | The Avaz Twist Tower is a 40 story, 175m tall skyscraper in Sarajevo, Bosnia and Herzegovina. It is the headquarters for Dnevni avaz, a Bosnian newspaper company. The tower is located in the Marijin Dvor city neighborhood, Sarajevo's central municipality. Construction began in 2006 and was finished two years later in 2008. The tower is notable for its twisted facade. As of 2016, it was the tallest skyscraper in Bosnia and Herzegovina. In 2009, German company Schuco chose the tower amongst the 10 most beautiful buildings in the world.
See also
List of twisted buildings
References
External links
Official Page of the Tower
Dnevni Avaz's Portal
Avaz Twist Tower at Sarajevo-construction
Avaz Twist Tower at skyscrapernews.com
Buildings and structures in Sarajevo
Skyscraper office buildings in Bosnia and Herzegovina
Centar, Sarajevo
Architecture in Bosnia and Herzegovina
Postmodern architecture
Twisted buildings and structures
Office buildings completed in 2008
2008 establishments in Bosnia and Herzegovina | Avaz Twist Tower | [
"Engineering"
] | 197 | [
"Postmodern architecture",
"Architecture"
] |
7,347,399 | https://en.wikipedia.org/wiki/Whitewater%20Canal | The Whitewater Canal, which was built between and , spanned a distance of and stretched from Lawrenceburg, Indiana on the Ohio River to Hagerstown, Indiana near the West Fork of the White River.
History
Birth of a canal
As with most transportation improvements during the early nineteenth century, industry paved the way within individual states. After successful canal development projects further east in the United States, it was not long until canals were dug across the Midwest. The opening of the Erie Canal in 1825 paved the way for improvement projects across the United States and changed the course of American transportation history. The Erie Canal was an immediate financial success. This set the precedent for future canals and proved canals could provide a viable contribution to local economies.
There was the need for a high-speed transportation system that could link the Whitewater Valley to the Ohio River. Before the canal, farmers had to transport their goods and livestock to Cincinnati, Ohio on badly rutted and often impassable roads. The journey to Cincinnati could take several days.
In 1836 the Indiana State Legislature approved the Mammoth Internal Improvement Act, which allowed for the development of the Whitewater Canal and a host of other improvements throughout Indiana.
Construction
The Whitewater Canal was built based on an 1834 survey conducted by Charles Hutchens. The design called for a canal seventy-six miles long, starting at Nettle Creek near Hagerstown and following the river valley through Connersville and Brookville into Harrison, Ohio, then back into Indiana to finish at Lawrenceburg. Over that length the canal dropped roughly 490 feet. This was a very ambitious route, as it was quite steep and required carrying the canal over the Whitewater on an aqueduct at Laurel, as well as over several streams of lesser size. The grade was far steeper than that of comparable canals: the Whitewater descended 6.4 feet per mile, compared to the Chesapeake & Ohio at 2.9 feet per mile, the Erie at 1.7 feet per mile, and the Wabash & Erie at 1 foot per mile. The steepness became a problem whenever heavy rains came.
Because of the steep grade, the canal required 56 locks and seven dams.
The canal was started as a state project and ground was broken on September 13, 1836. The first boat arrived in Brookville from Lawrenceburg on June 8, 1839. Because of budget problems construction was suspended in August 1839 not to be resumed until 1842.
In 1842, the state of Indiana transferred its ownership in the canal to the White Water Valley Canal Company which was required to complete the canal to Cambridge City in five years. By 1843 boats were arriving in Laurel. 1845 saw the canal operating into Connersville. The canal company was running out of money and borrowed from Henry Valette of Cincinnati to finish the canal into Cambridge City from Connersville. From Cambridge City to Hagerstown the Canal was built by the Hagerstown Canal Company and was finished in 1847.
Canal decline
The Whitewater Canal was a short venture, but it left a lasting mark on the communities it traveled through. The canal development project was funded under the Act of 1836 and was allotted $1,400,000 to build the canal through the Whitewater Valley. This was a huge sum at the time and investors did not take out many loans due to the prediction that they stood to make considerable profit. It was the Mammoth Internal Improvement Act of 1836 that ended up straining the coffers of the State of Indiana. Indiana went bankrupt during the summer of 1839, and canal construction was halted until 1842.
In November 1847 the Whitewater Valley flooded and many sections of the canal were washed out. The section between Harrison and Lawrenceburg was never rebuilt. This effectively ended the canal era in Lawrenceburg after only eight years of service, and only a few months after the canal was finished to Hagerstown. It was ten months before the canal was again operational north of Harrison. Debt incurred to finance repairs in 1847 were a serious problem for the rest of the canals active history.
White Water Valley Canal Company
The White Water Valley Canal Company was granted a charter by the Indiana General Assembly of 1825–26. The company was set up after the State of Indiana could no longer afford to finish the Whitewater Canal system. The White Water Valley Canal Company finished the canal through Cambridge City, Indiana.
It constructed the Canal House at Connersville in 1842. The building was added to the National Register of Historic Places in 1973.
Hagerstown Canal Company
Hagerstown was supposed to be the northernmost terminus of the Whitewater Canal, but after the state went bankrupt, Hagerstown was forced to finance and construct its own canal to Cambridge City. The Hagerstown Canal Company completed an eight-mile (13 km) long canal between Hagerstown and Cambridge City in 1847.
Cincinnati And Whitewater Canal
A connecting canal built to reach Cincinnati was known as the Cincinnati and Whitewater Canal. This canal was built by Ohio interests and went from Harrison to Cincinnati. It was completed in 1843 and replaced Lawrenceburg as the end of the line after the 1847 November flood. This stretch of canal closed in 1862 and was used as a railroad right-of-way at that time. A canal tunnel constructed to obviate a ridgeline at Cleves still exists, although badly silted up.
The Whitewater Canal today
Little is left of the Whitewater Canal today. Some towpath was bought by the Whitewater Valley Railroad Company and has been used in various train operations over the years. A section of the rail line is still in use as a tourist railroad. The Whitewater Valley Railroad operates between Connersville, Indiana and Metamora, Indiana. The remains of many of the canal locks on this section of the canal can still be seen as well as the diversion dam near Laurel, Indiana that was rebuilt in the 1940s and provides water for the restored canal section in Metamora as well as the mill. The restored grain mill in Metamora which runs on water provided by the canal shows that transportation was not the only use of the canal. Hydro power was in use for many decades after the canal was closed as a transportation route and even was used to generate electricity in Connersville in the early part of the 20th century.
The most visible area of the Whitewater Canal that exists today is in Metamora. This section, from the Laurel Feeder Dam to Brookville, was listed on the National Register of Historic Places in 1973 as the Whitewater Canal Historic District. The district encompasses 1 contributing building and 31 contributing structures, including the Metamora Roller Mill, Laurel Feeder Dam, Duck Creek Aqueduct, and Millville Lock. Here the canal era is recreated and tourists can stroll through a nineteenth-century town, with museums, shopping, and eateries; visitors can even take a horse-drawn boat ride on the canal.
Civil Engineering Landmark status
An aqueduct carries the canal over Duck Creek at Metamora. It is a twentieth-century reconstruction of the wooden aqueduct built in 1846 to replace an earlier one that was washed out by a flood, and was listed as a National Historic Civil Engineering Landmark by the American Society of Civil Engineers (ASCE) in 1992.
Cities along the canal
Cincinnati, Ohio - by way of Cincinnati and Whitewater Canal and Ohio River
Harrison, Ohio
Lawrenceburg, Indiana
Brookville, Indiana
Metamora, Indiana
Laurel, Indiana
Connersville, Indiana
Milton, Indiana
Cambridge City, Indiana
Hagerstown, Indiana
Gallery
See also
Canal
List of canals in the United States
List of State Historic Sites in Indiana
Whitewater Valley Railroad
Wabash and Erie Canal
Indiana Central Canal
Indiana Mammoth Improvement Act
References
Further reading
Gordon Mitchell "The Whitewater Canals of Indiana and Ohio". Professional Surveyor. 2009. Frederick, MD. Online-only
External links
Canal Construction in Indiana - The Indiana Historian, September 1997
Canal Society of Indiana
White Water Valley Railroad
Cincinnati and Whitewater Canal
Canals in Indiana
Canals on the National Register of Historic Places in Indiana
Historic districts on the National Register of Historic Places in Indiana
Transportation buildings and structures in Dearborn County, Indiana
Transportation buildings and structures in Franklin County, Indiana
History of Cincinnati
Transportation buildings and structures in Fayette County, Indiana
Transportation buildings and structures in Wayne County, Indiana
Canals opened in 1847
Historic districts in Franklin County, Indiana
National Register of Historic Places in Franklin County, Indiana
1847 establishments in Indiana
Historic Civil Engineering Landmarks | Whitewater Canal | [
"Engineering"
] | 1,684 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |
7,347,446 | https://en.wikipedia.org/wiki/Robocopy | Robocopy is a command-line file transfer utility for Microsoft Windows. Robocopy is functionally more comprehensive than the COPY command and XCOPY, but replaces neither. Created by Kevin Allen and first released as part of the Windows NT 4.0 Resource Kit, it has been a standard feature of Windows since Windows Vista and Windows Server 2008.
Features
Robocopy provides features not found in the built-in Windows COPY and XCOPY commands, including the following:
Recovering from temporary loss of network connectivity (Incomplete files are marked with a date stamp of 1970-01-01 and contain a recovery record so Robocopy knows where to continue from).
Detecting and skipping NTFS junction points, which, under certain circumstances, may cause copying failures because of infinite loops (with the /XJ switch).
Preserving any combination of the following: file contents, attributes, metadata (e.g., original timestamps), NTFS ACLs (DACLs, SACLs, and owner). For example, it is possible to copy ACLs from one file to another. Before version XP026, however, this capability was limited to files only, not folders.
Utilizing the Windows NT "Backup Files and Directories" privilege (SeBackupPrivilege, normally not available to standard user accounts) to bypass NTFS ACLs that would otherwise impede transfer (requires the /B switch).
Persistence by default, with a programmable number of automatic retries if a file cannot be copied.
The mirror mode, which keeps two directory trees synchronized by also deleting files in the destination that are not present in the source.
Skipping files already in the destination folder with identical size and timestamp.
Progress indicator
Support for paths exceeding 259 characters, up to a theoretical limit of about 32,000 characters.
Multithreaded copying (introduced with Windows 7 and Windows Server 2008 R2).
Return codes (used in automation).
Compression
Since Windows Server 2019 and Windows 10, Robocopy supports SMB compression for transferring files across a network. If the /compress switch is specified, the destination computer supports SMB compression, and the files being copied are compressible, the operation can enjoy significant performance improvements. The SMB compression adds inline whitespace compression to file transfers. Compression is also available with the XCOPY command and Hyper-V live migration with SMB.
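For example (a minimal sketch with assumed share and folder names; the switch only has an effect when both endpoints support SMB compression):

robocopy \\fileserver\share C:\data /E /COMPRESS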
Examples of use
Here are some examples of usage, which is not case-sensitive. If more than one option is specified, they must be separated by spaces.
Example 1
Copy directory contents of the source to the destination (including file data, attributes and timestamps), recursively with empty directories (/E):
Robocopy "C:\Directory A" "C:\Directory B" /E
If directory names have non-standard characters, such as spaces, they must be enclosed in double quotes, as is usual in the command line.
Example 2
Copy directory recursively (/E), copy all file information (/COPYALL, equivalent to /COPY:DATSOU, D=Data, A=Attributes, T=Timestamps, S=Security=NTFS ACLs, O=Owner info, U=Auditing info), do not retry locked files (/R:0; the default number of retries on failed copies is 1 million), and preserve the original directories' timestamps (/DCOPY:T, requires version XP026 or later):
Robocopy C:\A C:\B /COPYALL /E /R:0 /DCOPY:T
Example 3
Mirror A to B, destroying any files in B that are not present in A (/MIR), copy files in resume mode (/Z) in case network connection is lost:
Robocopy C:\A \\backupserver\B /MIR /Z
For the full reference, see the Microsoft TechNet Robocopy page.
Syntactic focus on copying folders
Robocopy syntax is markedly different from its predecessors (copy and xcopy), in that it accepts only folder names, without trailing backslash, as its source and destination arguments. File names and wildcard characters (such as * and ?) are not valid as source or destination arguments; files may be selected or excluded using the optional "file" filtering argument (which supports wildcards) along with various other options.
For example, to copy two files from folder c:\bar to c:\baz, the following syntax is used:
robocopy c:\bar c:\baz file1.txt file2.db
And to copy all PDF files from c:\bar to c:\baz:
robocopy c:\bar c:\baz *.pdf
The files named are copied only from the folder selected for copying; fully qualified path names are not supported.
CAUTION: A long-standing issue with Robocopy means that if you back up from the root folder of a drive (e.g., D:\), the destination files will be given attributes including SH (system and hidden). This means that they will be invisible to normal access (including DIR in cmd.exe). To fix this, add /A-:SH to the robocopy command line - or run an ATTRIB command to remove the attributes afterwards.
Output
Robocopy outputs to the screen, or optionally to a log file, the names of all the directories it encounters, in alphabetical order. Each name is preceded by the number of files in the directory that fulfill the criteria for being copied. If the directory does not yet exist in the target, it is marked "New Dir"; if the directory is empty and the /E option is not used, or it contains no files meeting the criteria, a new directory will not be created.
If the /NFL (no file names in log) option is not used, the files being copied will be listed after the name of the directory they are in.
At the end of the output is a table giving numbers of directories, files, and bytes. For each of these, the table gives the total number found in the source, the number copied (including directories marked "New Dir" even if they are not copied), the number skipped (because they already exist in the target), and the number of mismatches, FAILED, and extras. "Failed" can mean that there was an I/O error that prevented a file being copied, or that access was denied. There is also a row of time taken (in which the time spent on failed files seems to be in the wrong column).
Bandwidth throttling
Robocopy's "inter-packet gap" (IPG) option allows some control over the network bandwidth used in a session. In theory, the following formula expresses the delay (D, in milliseconds) required to simulate a desired bandwidth (BD, in kilobits per second) over a network link with an available bandwidth of BA kbps: D = (BA − BD) × 512 × 1000 / (BA × BD).
In practice, however, some experimentation is usually required to find a suitable delay, due to factors such as the nature and volume of other traffic on the network. The methodology employed by the IPG option may not offer the same level of control provided by some other bandwidth-throttling technologies, such as BITS (which is used by Windows Update and BranchCache).
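As a worked illustration (the figures here are assumptions for the example, not from the original text): throttling a copy to a desired BD = 256 kbps on a link with BA = 1024 kbps available gives D = (1024 − 256) × 512 × 1000 / (1024 × 256) = 1500 ms, so the session would be started as:

robocopy \\server\source C:\destination /E /IPG:1500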
Limitations
Robocopy does not copy open files. Any process may open files for exclusive read access by withholding the FILE_SHARE_READ flag during opening. Normally Volume Shadow Copy Service is used for such situations, but Robocopy does not use it. Consequently, Robocopy is not suitable for backing up live operating system volumes. However, a separate utility such as ShadowSpawn (under MIT License) or DiskShadow (included with Windows Server 2008), can be used beforehand to create a shadow copy of a given volume, which Robocopy can then back up.
Robocopy versions on systems older than Windows Vista do not mirror properly. They ignore changed security attributes of previously mirrored files.
When specifying the /MT[:n] option to enable multithreaded copying, the /NP option (which disables reporting of the progress percentage for files) is ignored. By default, the /MT switch uses 8 threads; n specifies a different number of threads.
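For example (assumed paths; a minimal illustration of the switch), the following copies a directory tree using 16 threads instead of the default 8:

robocopy C:\Source D:\Dest /E /MT:16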
GUI
Although Robocopy itself is a command-line tool, Microsoft TechNet provided a GUI front-end called Robocopy GUI. It was developed by Derk Benisch, a systems engineer with the MSN Search group at Microsoft, and required .NET Framework 2.0. It included a copy of Robocopy version XP026. It is no longer available from Microsoft, but may be downloaded from the Internet Archive's Wayback Machine.
There are non-Microsoft GUIs for Robocopy:
RoboCopy GUI by PC Assist Software v3.0 (includes job scheduling) (April 19, 2024)
Cinchoo's ChoEazyCopy, Simple and powerful RoboCopy GUI v2.0.0.1 (March 11, 2022)
"Easy RoboCopy", latest version 1.0.16 released on January 11, 2022.
"WinRoboCopy" revision 1.3.5953.40896 released on April 19, 2016.
RoboCop RoboCopy, Robocopy GUI Skin and script generator with Progress Monitoring, 10 September 2015.
A program by SH-Soft, also called "Robocopy GUI" v1.0.0.24 (October 8, 2005).
Ken Tamaru of Microsoft developed a copying program with functionality similar to Robocopy, called RichCopy, which was discontinued in 2010. It is not based on Robocopy and does not require .NET Framework.
Versions
All versions of Robocopy store their version number and release date in their executable file header, viewable with File Explorer or PowerShell. Some of them (not all) report their version numbers in their textual output.
See also
List of file copying software
Command line
List of DOS commands
rsync
GUI
SyncToy
Ultracopier
References
External links
Robocopy documentation on Microsoft Learn
RoboCopy documentation on SS64.com
File copy utilities | Robocopy | [
"Technology"
] | 2,132 | [
"Windows commands",
"Computing commands"
] |
7,348,421 | https://en.wikipedia.org/wiki/Gargantua%20%28comics%29 | Gargantua (Edward Cobert; initially known as Leviathan) is a fictional character appearing in American comic books published by Marvel Comics.
Publication history
Gargantua first appeared in The New Defenders #126 (December 1983), and was created by writer J. M. DeMatteis and artist Alan Kupperberg.
Fictional character biography
Edward Cobert was initially a S.H.I.E.L.D. agent and scientist before testing the Project: Lazarus serum on himself, permanently transforming him into a giant form with limited intelligence. After being subdued by S.H.I.E.L.D. and dubbed Leviathan, he escapes custody and battles the Defenders on multiple occasions.
In subsequent appearances, Cobert becomes known as Gargantua, joins Doctor Octopus' incarnation of the Masters of Evil, and serves the Mad Thinker.
Powers and abilities
After subjecting himself to artificial cellular enhancement, Gargantua possesses enhanced strength and size, and can grow further by drawing mass from another dimension. He has limited intelligence as a side effect.
Edward Cobert was a S.H.I.E.L.D. Academy graduate and earned a Ph.D. in biochemistry before his transformation.
Other versions
Gargantua appears in JLA/Avengers #4 as a brainwashed minion of Krona.
References
External links
Gargantua at Marvel.com
Characters created by J. M. DeMatteis
Comics characters introduced in 1983
Fictional biochemists
Fictional special forces personnel
Marvel Comics characters with superhuman durability or invulnerability
Marvel Comics characters with superhuman strength
Marvel Comics giants
Marvel Comics male supervillains
Marvel Comics mutates
Marvel Comics scientists
Marvel Comics spies | Gargantua (comics) | [
"Chemistry"
] | 344 | [
"Fictional biochemists",
"Biochemists"
] |
7,348,443 | https://en.wikipedia.org/wiki/Air%20blaster | An air blaster or air cannon is a de-clogging device with two main components: a pressure vessel (storing air pressure) and a triggering mechanism (high speed release of compressed air). They are permanently installed on silos, bins and hoppers for powdery materials, and are used to prevent caking and to allow maximum storage capacity. They are also used in the film and theatre industries to project simulated debris from explosions, and as surprise effects in Halloween haunts and other attractions.
Air blasters do not need any specific air supply. Available plant air is enough with a minimum of 4 bar air pressure (60 psi or 400 kPa), although 5 to 6 bar are preferred for better results (75 to 90 psi). The average air consumption is moderate, and depends on the number of firings per hour, size of the pressure vessel, and number of blasters installed. For instance, a 50-liter air blaster consumes 0.60 Nm³/hour at 6 bar air pressure (90 psi or 600 kPa), with 2 firings per hour.
When the air in the pressure vessel is quickly released, the blast, called the impact force, evacuates material sticking to the container's walls (referred to as "rat holing"), and breaks potential accumulation points for subsequent clogging ("bridging"). The blasts are usually programmed with an automatic sequencer.
Operating principle
Phase 1: Air feeding: Air from the compressor passes through a 3/2-way solenoid feed valve and the Quick Release Valve (QRV), and reaches the triggering mechanism with its piston disc in the closed position. The air reservoir is then pressurized in less than 15 seconds, depending on the air pressure and air volume used.
Phase 2: Waiting: An air pressure equilibrium between air circuit, triggering mechanism, and pressure vessel is created.
Phase 3: Blasting: When activated, a solenoid valve purges the air circuit, creating an air vacuum. The piston inside the triggering mechanism is then abruptly pushed back by the negative pressure, creating a sudden blast from the air contained in the pressure vessel. This phase is measured in milliseconds.
Then the cycle repeats again at Phase 1.
Design criteria and construction
An efficient air blaster should be designed to ensure:
Complete safety for the operators, thus avoiding harsh rodding or other manual cleaning methods;
A sturdy design, able to cope with the most severe operating conditions;
Easy maintenance, due to an easily accessible triggering device;
A metal-to-metal construction design, making the air blaster extremely reliable even in harsh environments (such as exposure to heat and/or dust);
A cost-effective solution for all customers that prevents hopper, bin, and silo discharge interruption, as well as process disruption.
Construction
Usually, two versions exist:
A high temperature version: mainly for heat exchanger and cooler applications to remove clogging and to avoid costly plant stoppages and downtime.
A low temperature version: to eliminate build-up and dead stock for powdery and granular materials thus preventing caking and allowing optimization of storage capacity.
Installation
Air blasters solve problems that occur in cement factories, among other industries, where blockages form in preheater towers (kiln inlet, cyclones, riser ducts, etc.) and in grate coolers; removing these blockages provides substantial savings.
Sources
https://www.martin-eng.com/
https://www.martin-eng.com/content/product_subcategory/491/air-cannons-products
Staminair website
https://www.standard-industrie.com/en/
INWET website
Industrial equipment
Hardware (mechanical)
Cleaning tools | Air blaster | [
"Physics",
"Technology",
"Engineering"
] | 768 | [
"Machines",
"Physical systems",
"Construction",
"nan",
"Hardware (mechanical)"
] |