Columns: id (int64), url (string), text (string), source (string), categories (string, 160 classes), token_count (int64)
986,353
https://en.wikipedia.org/wiki/Information%20leakage
Information leakage happens whenever a system that is designed to be closed to an eavesdropper reveals some information to unauthorized parties nonetheless. In other words: Information leakage occurs when secret information correlates with, or can be correlated with, observable information. For example, when designing an encrypted instant messaging network, a network engineer without the capacity to crack encryption codes could see when messages are transmitted, even if he could not read them. Risk vectors A modern example of information leakage is the leakage of secret information via data compression, by using variations in data compression ratio to reveal correlations between known (or deliberately injected) plaintext and secret data combined in a single compressed stream. Another example is the key leakage that can occur when using some public-key systems when cryptographic nonce values used in signing operations are insufficiently random. Bad randomness cannot ensure the proper functioning of a cryptographic system; even in benign circumstances, it can easily produce crackable keys and cause key leakage. Information leakage can sometimes be deliberate: for example, an algorithmic converter may be shipped that intentionally leaks small amounts of information, in order to provide its creator with the ability to intercept the users' messages, while still allowing the user to maintain an illusion that the system is secure. This sort of deliberate leakage is sometimes known as a subliminal channel. Generally, only very advanced systems employ defenses against information leakage. The following countermeasures are commonly implemented: Use steganography to hide the fact that a message is transmitted at all. Use chaffing to make it unclear to whom messages are transmitted (but this does not hide from others the fact that messages are transmitted). For busy re-transmitting proxies, such as a Mixmaster node: randomly delay and shuffle the order of outbound packets - this will assist in disguising a given message's path, especially if there are multiple, popular forwarding nodes, such as are employed with Mixmaster mail forwarding. When a data value is no longer going to be used, erase it from the memory. See also Kleptographic attack Side-channel attack Traffic analysis References Cryptography
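A minimal sketch of the compression-ratio leak described above, assuming Python and zlib; the "secret" and the injected strings are purely illustrative. Compressing attacker-supplied text together with a secret makes the compressed length observably depend on how much the two overlap, which is exactly the correlation between secret and observable information defined above.

import zlib

SECRET = b"password=hunter2;"  # hypothetical secret mixed into the stream

def observed_length(injected: bytes) -> int:
    # What an eavesdropper can see: the size of the combined compressed stream.
    return len(zlib.compress(SECRET + injected, 9))

# Injected text that matches the secret compresses better (DEFLATE back-references),
# so the observable length leaks information about the secret's content.
print(observed_length(b"password=hunter2;"))   # full overlap: typically shortest
print(observed_length(b"password=qwerty2;"))   # partial overlap: typically longer
print(observed_length(b"qwertyuiopasdfghj"))   # no overlap: typically longest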
Information leakage
Mathematics,Engineering
448
11,648,907
https://en.wikipedia.org/wiki/Video%20buffering%20verifier
The Video Buffering Verifier (VBV) is a theoretical MPEG video buffer model, used to ensure that an encoded video stream can be correctly buffered and played back at the decoder device. By definition, the VBV shall not overflow nor underflow when its input is a compliant stream (except in the case of low_delay). It is therefore important when encoding such a stream that it comply with the VBV requirements. One way to think of the VBV is to consider both a maximum bitrate and a maximum buffer size. To reason about it, one needs to know how quickly video data arrives in the buffer; because the bitrate of coded video is constantly changing, there is no single constant arrival rate. The larger question is how long the buffer can go before it overflows. A larger buffer size simply means that the decoder will tolerate high bitrates for longer periods of time, but no buffer is infinite, so eventually even a large buffer will overflow. Operation Modes There are two operational modes of VBV: Constant Bit Rate (CBR) and Variable Bit Rate (VBR). In CBR, the decoder's buffer is filled over time at a constant data rate. In VBR, the buffer is filled at a non-constant rate. In both cases, data is removed from the buffer in varying chunks, depending on the actual size of the coded frames. Standards In the H.264 and VC-1 standards, the VBV is replaced with a generalized version called the Hypothetical Reference Decoder (HRD). References MPEG Tutorial - Video Buffering Verifier Film and video technology MPEG
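The buffer-fullness reasoning above can be made concrete with a small simulation. This is a hedged sketch of a CBR-style check in the spirit of the VBV, not the normative model from any MPEG specification; the bitrate, frame rate, buffer size and frame sizes are illustrative.

def vbv_check(frame_bits, bitrate=4_000_000, fps=25, buffer_bits=1_500_000):
    # Bits arrive at a constant rate; each coded frame is removed in one chunk.
    fill_per_frame = bitrate / fps
    fullness = buffer_bits            # assume the buffer starts full
    problems = []
    for i, size in enumerate(frame_bits):
        if size > fullness:
            problems.append((i, "underflow: frame larger than the buffered data"))
        fullness -= size
        fullness = min(fullness + fill_per_frame, buffer_bits)
        # a stricter CBR model would flag hitting the cap here as an overflow
    return problems

# A burst of large frames drains the buffer faster than the channel refills it.
frames = [120_000] * 10 + [600_000] * 5 + [120_000] * 10
print(vbv_check(frames) or "stream fits this simplified model")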
Video buffering verifier
Technology
353
75,049,414
https://en.wikipedia.org/wiki/Tremella%20olens
Tremella olens is a species of fungus in the family Tremellaceae. It produces soft, whitish, lobed to frondose, gelatinous basidiocarps (fruit bodies) and is parasitic on other fungi on dead branches of broad-leaved trees. It was originally described from Tasmania. Taxonomy Tremella olens was first published in 1860 by British mycologist Miles Joseph Berkeley based on a collection made in Tasmania. Description Fruit bodies are soft, gelatinous, whitish, and lobed. Microscopically, the basidia are tremelloid (ellipsoid, with oblique to vertical septa), 4-celled. The basidiospores are ellipsoid, smooth, 7.5 to 8.5 by 5.5 to 6.5 μm. Similar species Tremella olens belongs to a complex of similar species that have been differentiated by DNA sequencing and minor microscopic features. Tremella fibulifera and T. subfibulifera were both originally described from Brazil; Tremella neofibulifera and T. lloydiae-candidae were originally described from Japan; Tremella australe, T. cheejenii, T. guangxiensis, and T. latispora were originally described from China. Tremella fuciformis is a white species also recorded from Australia, but fruit bodies have thin, erect fronds, often crisped at the edges. Habitat and distribution Tremella olens is a parasite on lignicolous fungi, but its host species is unknown. It is found on dead, attached or fallen branches of broad-leaved trees. The species was originally described from Tasmania and has also been reported from Christmas Island. Reports from Venezuela and Jamaica refer to the South American species T. fibulifera or T. subfibulifera. Reports from Cameroon and Sabah belong to the species complex, but which species is uncertain. References olens Fungi described in 1860 Fungi of Australia Taxa named by Miles Joseph Berkeley Fungus species Parasitic fungi
Tremella olens
Biology
431
23,665
https://en.wikipedia.org/wiki/Pixel
In digital imaging, a pixel (abbreviated px), pel, or picture element is the smallest addressable element in a raster image, or the smallest addressable element in a dot matrix display device. In most digital display devices, pixels are the smallest element that can be manipulated through software. Each pixel is a sample of an original image; more samples typically provide more accurate representations of the original. The intensity of each pixel is variable. In color imaging systems, a color is typically represented by three or four component intensities such as red, green, and blue, or cyan, magenta, yellow, and black. In some contexts (such as descriptions of camera sensors), pixel refers to a single scalar element of a multi-component representation (called a photosite in the camera sensor context, although sensel is sometimes used), while in yet other contexts (like MRI) it may refer to a set of component intensities for a spatial position. Software on early consumer computers was necessarily rendered at a low resolution, with large pixels visible to the naked eye; graphics made under these limitations may be called pixel art, especially in reference to video games. Modern computers and displays, however, can easily render orders of magnitude more pixels than was previously possible, necessitating the use of large measurements like the megapixel (one million pixels). Etymology The word pixel is a combination of pix (from "pictures", shortened to "pics") and el (for "element"); similar formations with 'el' include the words voxel and texel. The word pix appeared in Variety magazine headlines in 1932, as an abbreviation for the word pictures, in reference to movies. By 1938, "pix" was being used in reference to still pictures by photojournalists. The word "pixel" was first published in 1965 by Frederic C. Billingsley of JPL, to describe the picture elements of scanned images from space probes to the Moon and Mars. Billingsley had learned the word from Keith E. McFarland, at the Link Division of General Precision in Palo Alto, who in turn said he did not know where it originated. McFarland said simply it was "in use at the time". The concept of a "picture element" dates to the earliest days of television, for example as "Bildpunkt" (the German word for pixel, literally 'picture point') in the 1888 German patent of Paul Nipkow. According to various etymologies, the earliest publication of the term picture element itself was in Wireless World magazine in 1927, though it had been used earlier in various U.S. patents filed as early as 1911. Some authors explain pixel as picture cell, as early as 1972. In graphics and in image and video processing, pel is often used instead of pixel. For example, IBM used it in their Technical Reference for the original PC. Pixilation, spelled with a second i, is an unrelated filmmaking technique that dates to the beginnings of cinema, in which live actors are posed frame by frame and photographed to create stop-motion animation. An archaic British word meaning "possession by spirits (pixies)", the term has been used to describe the animation process since the early 1950s; various animators, including Norman McLaren and Grant Munro, are credited with popularizing it. Technical A pixel is generally thought of as the smallest single component of a digital image. However, the definition is highly context-sensitive.
For example, there can be "printed pixels" in a page, or pixels carried by electronic signals, or represented by digital values, or pixels on a display device, or pixels in a digital camera (photosensor elements). This list is not exhaustive and, depending on context, synonyms include pel, sample, byte, bit, dot, and spot. Pixels can be used as a unit of measure such as: 2400 pixels per inch, 640 pixels per line, or spaced 10 pixels apart. The measures "dots per inch" (dpi) and "pixels per inch" (ppi) are sometimes used interchangeably, but have distinct meanings, especially for printer devices, where dpi is a measure of the printer's density of dot (e.g. ink droplet) placement. For example, a high-quality photographic image may be printed with 600 ppi on a 1200 dpi inkjet printer. Even higher dpi numbers, such as the 4800 dpi quoted by printer manufacturers since 2002, do not mean much in terms of achievable resolution. The more pixels used to represent an image, the closer the result can resemble the original. The number of pixels in an image is sometimes called the resolution, though resolution has a more specific definition. Pixel counts can be expressed as a single number, as in a "three-megapixel" digital camera, which has a nominal three million pixels, or as a pair of numbers, as in a "640 by 480 display", which has 640 pixels from side to side and 480 from top to bottom (as in a VGA display) and therefore has a total number of 640 × 480 = 307,200 pixels, or 0.3 megapixels. The pixels, or color samples, that form a digitized image (such as a JPEG file used on a web page) may or may not be in one-to-one correspondence with screen pixels, depending on how a computer displays an image. In computing, an image composed of pixels is known as a bitmapped image or a raster image. The word raster originates from television scanning patterns, and has been widely used to describe similar halftone printing and storage techniques. Sampling patterns For convenience, pixels are normally arranged in a regular two-dimensional grid. By using this arrangement, many common operations can be implemented by uniformly applying the same operation to each pixel independently. Other arrangements of pixels are possible, with some sampling patterns even changing the shape (or kernel) of each pixel across the image. For this reason, care must be taken when acquiring an image on one device and displaying it on another, or when converting image data from one pixel format to another. For example: LCD screens typically use a staggered grid, where the red, green, and blue components are sampled at slightly different locations. Subpixel rendering is a technology which takes advantage of these differences to improve the rendering of text on LCD screens. The vast majority of color digital cameras use a Bayer filter, resulting in a regular grid of pixels where the color of each pixel depends on its position on the grid. A clipmap uses a hierarchical sampling pattern, where the size of the support of each pixel depends on its location within the hierarchy. Warped grids are used when the underlying geometry is non-planar, such as images of the earth from space. The use of non-uniform grids is an active research area, attempting to bypass the traditional Nyquist limit. 
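The arithmetic in this passage is simple enough to capture in a few lines; the Python helpers below (the names are my own) just re-derive the examples given: the 640 × 480 VGA display, a nominal three-megapixel camera, and a 600 ppi print.

def megapixels(width: int, height: int) -> float:
    return width * height / 1_000_000

def print_size_inches(width_px: int, height_px: int, ppi: float):
    # Physical size when the image is laid down at a given pixels-per-inch.
    return (width_px / ppi, height_px / ppi)

print(megapixels(640, 480))                # 0.3072 -> the "0.3 megapixels" above
print(megapixels(2048, 1536))              # 3.145728 -> a nominal "3-megapixel" camera
print(print_size_inches(2048, 1536, 600))  # about 3.4 x 2.6 inches at 600 ppi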
Pixels on computer monitors are normally "square" (that is, have equal horizontal and vertical sampling pitch); pixels in other systems are often "rectangular" (that is, have unequal horizontal and vertical sampling pitch – oblong in shape), as are digital video formats with diverse aspect ratios, such as the anamorphic widescreen formats of the Rec. 601 digital video standard. Resolution of computer monitors Computer monitors (and TV sets) generally have a fixed native resolution, which depends on the monitor and its size. See below for historical exceptions. Computers can use pixels to display an image, often an abstract image that represents a GUI. The resolution of this image is called the display resolution and is determined by the video card of the computer. Flat-panel monitors (and TV sets), e.g. OLED or LCD monitors, or E-ink, also use pixels to display an image and have a native resolution, which should (ideally) be matched to the video card resolution. Each pixel is made up of triads, with the number of these triads determining the native resolution. On older, historically available, CRT monitors the resolution was possibly adjustable (still lower than what modern monitors achieve), while on some such monitors (or TV sets) the beam sweep rate was fixed, resulting in a fixed native resolution. Most CRT monitors do not have a fixed beam sweep rate, meaning they do not have a native resolution at all – instead they have a set of resolutions that are equally well supported. To produce the sharpest images possible on a flat-panel, e.g. OLED or LCD, the user must ensure the display resolution of the computer matches the native resolution of the monitor. Resolution of telescopes The pixel scale used in astronomy is the angular distance between two objects on the sky that fall one pixel apart on the detector (CCD or infrared chip). The pixel scale s measured in radians is the ratio of the pixel spacing p and the focal length f of the preceding optics, s = p/f. (The focal length is the product of the focal ratio and the diameter of the associated lens or mirror.) Because s is usually expressed in units of arcseconds per pixel, because 1 radian equals (180/π) × 3600 ≈ 206,265 arcseconds, and because focal lengths are often given in millimeters and pixel sizes in micrometers, which yields another factor of 1,000, the formula is often quoted as s [arcsec/pixel] ≈ 206.265 × p [μm] / f [mm]. Bits per pixel The number of distinct colors that can be represented by a pixel depends on the number of bits per pixel (bpp). A 1 bpp image uses 1 bit for each pixel, so each pixel can be either on or off. Each additional bit doubles the number of colors available, so a 2 bpp image can have 4 colors, and a 3 bpp image can have 8 colors: 1 bpp, 2^1 = 2 colors (monochrome) 2 bpp, 2^2 = 4 colors 3 bpp, 2^3 = 8 colors 4 bpp, 2^4 = 16 colors 8 bpp, 2^8 = 256 colors 16 bpp, 2^16 = 65,536 colors ("Highcolor") 24 bpp, 2^24 = 16,777,216 colors ("Truecolor") For color depths of 15 or more bits per pixel, the depth is normally the sum of the bits allocated to each of the red, green, and blue components. Highcolor, usually meaning 16 bpp, normally has five bits for red and blue each, and six bits for green, as the human eye is more sensitive to errors in green than in the other two primary colors. For applications involving transparency, the 16 bits may be divided into five bits each of red, green, and blue, with one bit left for transparency. A 24-bit depth allows 8 bits per component.
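The two formulas in this passage, the 2^bpp colour count and the astronomical pixel scale, can be checked directly; this is a small Python sketch, and the telescope numbers (9 μm pixels, 2000 mm focal length) are illustrative rather than taken from the text.

def colors(bpp: int) -> int:
    # Number of distinct values representable with bpp bits per pixel.
    return 2 ** bpp

def pixel_scale_arcsec(pixel_um: float, focal_length_mm: float) -> float:
    # s [arcsec/pixel] = 206.265 * pixel size [um] / focal length [mm]
    # (206,265 arcseconds per radian, and a factor of 1,000 between um and mm).
    return 206.265 * pixel_um / focal_length_mm

for bpp in (1, 2, 4, 8, 16, 24):
    print(bpp, colors(bpp))              # 2, 4, 16, 256, 65536, 16777216

print(pixel_scale_arcsec(9.0, 2000.0))   # about 0.93 arcsec per pixel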
On some systems, 32-bit depth is available: this means that each 24-bit pixel has an extra 8 bits to describe its opacity (for purposes of combining with another image). Subpixels Many display and image-acquisition systems are not capable of displaying or sensing the different color channels at the same site. Therefore, the pixel grid is divided into single-color regions that contribute to the displayed or sensed color when viewed at a distance. In some displays, such as LCD, LED, and plasma displays, these single-color regions are separately addressable elements, which have come to be known as subpixels, mostly RGB colors. For example, LCDs typically divide each pixel vertically into three subpixels. When the square pixel is divided into three subpixels, each subpixel is necessarily rectangular. In display industry terminology, subpixels are often referred to as pixels, as they are the basic addressable elements from a hardware point of view, and hence the term pixel circuits rather than subpixel circuits is used. Most digital camera image sensors use single-color sensor regions, for example using the Bayer filter pattern, and in the camera industry these are known as pixels just like in the display industry, not subpixels. For systems with subpixels, two different approaches can be taken: The subpixels can be ignored, with full-color pixels being treated as the smallest addressable imaging element; or The subpixels can be included in rendering calculations, which requires more analysis and processing time, but can produce apparently superior images in some cases. This latter approach, referred to as subpixel rendering, uses knowledge of pixel geometry to manipulate the three colored subpixels separately, producing an increase in the apparent resolution of color displays. CRT displays use red-green-blue-masked phosphor areas dictated by a mesh grid called the shadow mask; aligning these with the displayed pixel raster would require a difficult calibration step, so CRTs do not use subpixel rendering. The concept of subpixels is related to samples. Logical pixel In graphic design, web design, and user interfaces, a "pixel" may refer to a fixed length rather than a true pixel on the screen to accommodate different pixel densities. A typical definition, such as in CSS, is that a "physical" pixel is 1/96 of an inch. Doing so ensures a given element will display at the same size no matter what screen resolution views it. There may, however, be some further adjustments between a "physical" pixel and an on-screen logical pixel. As screens are viewed at different distances (consider a phone, a computer display, and a TV), the desired length (a "reference pixel") is scaled relative to a reference viewing distance (28 inches in CSS). In addition, as true screen pixel densities are rarely multiples of 96 dpi, some rounding is often applied so that a logical pixel is an integer number of actual pixels. Doing so avoids render artifacts. The final "pixel" obtained after these two steps becomes the "anchor" on which all other absolute measurements (e.g. the "centimeter") are based. Worked example, with a 2160p TV placed away from the viewer: Calculate the scaled pixel size as . Calculate the DPI of the TV as . Calculate the real-pixel count per logical-pixel as . A browser will then choose to use the 1.721× pixel size, or round to a 2× ratio.
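The two-step logical-pixel scaling described above can be sketched as follows. The 1/96 inch physical pixel and 28 inch reference viewing distance are the usual CSS values; the screen size and viewing distance passed in at the bottom are made-up numbers, not the elided figures from the worked example.

import math

def device_pixels_per_logical_pixel(width_px, height_px, diagonal_in,
                                    viewing_distance_in, reference_distance_in=28.0):
    dpi = math.hypot(width_px, height_px) / diagonal_in
    # 1. scale the 1/96 in reference pixel with the actual viewing distance
    scaled_pixel_in = (1 / 96) * (viewing_distance_in / reference_distance_in)
    # 2. count how many real pixels that length covers on this screen
    ratio = scaled_pixel_in * dpi
    return ratio, round(ratio)     # browsers often round to an integer ratio

exact, rounded = device_pixels_per_logical_pixel(3840, 2160, 65, 70)
print(f"{exact:.3f} real pixels per logical pixel, or a rounded {rounded}x ratio")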
Megapixel A megapixel (MP) is one million pixels; the term is used not only for the number of pixels in an image but also to express the number of image sensor elements of digital cameras or the number of display elements of digital displays. For example, a camera that makes a 2048 × 1536 pixel image (3,145,728 finished image pixels) typically uses a few extra rows and columns of sensor elements and is commonly said to have "3.2 megapixels" or "3.4 megapixels", depending on whether the number reported is the "effective" or the "total" pixel count. The number of pixels is sometimes quoted as the "resolution" of a photo. This measure of resolution can be calculated by multiplying the width and height of a sensor in pixels. Digital cameras use photosensitive electronics, either charge-coupled device (CCD) or complementary metal–oxide–semiconductor (CMOS) image sensors, consisting of a large number of single sensor elements, each of which records a measured intensity level. In most digital cameras, the sensor array is covered with a patterned color filter mosaic having red, green, and blue regions in the Bayer filter arrangement so that each sensor element can record the intensity of a single primary color of light. The camera interpolates the color information of neighboring sensor elements, through a process called demosaicing, to create the final image. These sensor elements are often called "pixels", even though they only record one channel (only red or green or blue) of the final color image. Thus, two of the three color channels for each sensor must be interpolated and a so-called N-megapixel camera that produces an N-megapixel image provides only one-third of the information that an image of the same size could get from a scanner. Thus, certain color contrasts may look fuzzier than others, depending on the allocation of the primary colors (green has twice as many elements as red or blue in the Bayer arrangement). DxO Labs invented the Perceptual MegaPixel (P-MPix) to measure the sharpness that a camera produces when paired to a particular lens – as opposed to the MP a manufacturer states for a camera product, which is based only on the camera's sensor. The new P-MPix claims to be a more accurate and relevant value for photographers to consider when weighing up camera sharpness. As of mid-2013, the Sigma 35 mm f/1.4 DG HSM lens mounted on a Nikon D800 has the highest measured P-MPix. However, at 23 MP, it still loses more than one-third of the D800's 36.3 MP sensor. In August 2019, Xiaomi released the Redmi Note 8 Pro as the world's first smartphone with a 64 MP camera. On December 12, 2019, Samsung released the Samsung A71, which also has a 64 MP camera. In late 2019, Xiaomi announced the first camera phone with a 108 MP, 1/1.33-inch sensor. The sensor is larger than the 1/2.3-inch sensors found in most bridge cameras. One new method to add megapixels has been introduced in a Micro Four Thirds System camera, which only uses a 16 MP sensor but can produce a 64 MP RAW (40 MP JPEG) image by making two exposures, shifting the sensor by a half pixel between them. Using a tripod to keep the multiple shots level, the 16 MP exposures are then combined into a unified 64 MP image.
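Since green photosites are twice as numerous as red or blue ones in a Bayer mosaic, the split of an N-megapixel sensor between channels is easy to compute; the Python sketch below (the names are mine) also notes that only one of the three output channels per pixel is actually measured, the rest being interpolated by demosaicing.

def bayer_breakdown(megapixels: float) -> dict:
    sites = megapixels * 1_000_000
    return {
        "green_sites": sites / 2,     # half of a Bayer mosaic is green
        "red_sites": sites / 4,
        "blue_sites": sites / 4,
        "measured_fraction_of_rgb_values": 1 / 3,  # the other two are interpolated
    }

print(bayer_breakdown(24))   # a nominal 24 MP sensor: 12M green, 6M red, 6M blue sites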
See also Computer display standard Dexel Gigapixel image Image resolution Intrapixel and Interpixel processing LCD crosstalk PenTile matrix family Pixel advertising Pixel art Pixel art scaling algorithms Pixel aspect ratio Pixelation Pixelization Point (typography) Glossary of video terms Voxel Vector graphics References External links A Pixel Is Not A Little Square: Microsoft Memo by computer graphics pioneer Alvy Ray Smith. "Pixels and Me", 2016 lecture by Richard F. Lyon at the Computer History Museum Square and non-Square Pixels: Technical info on pixel aspect ratios of modern video standards (480i, 576i, 1080i, 720p), plus software implications. Computer graphics data structures Digital geometry Digital imaging Digital photography Display technology Image processing Television technology
Pixel
Technology,Engineering
3,856
37,504,915
https://en.wikipedia.org/wiki/Phosphinooxazolines
Phosphinooxazolines (often abbreviated PHOX) are a class of chiral ligands used in asymmetric catalysis. Colorless solids, PHOX ligands feature a tertiary phosphine group, often diphenyl, and an oxazoline ligand in the ortho position. The oxazoline, which carries the stereogenic center, coordinates through nitrogen, the result being that PHOX ligands are P,N-chelating ligands. Most phosphine ligands used in asymmetric catalysis are diphosphines, so the PHOX ligands are distinctive. Some evidence exists that PHOX ligands are hemilabile. Synthesis The synthesis of phosphinooxazolines is modular. Methods exist for installing the phosphine ligand before the oxazoline and the reverse. Commonly a phenyloxazoline is combined with a source of diphenylphosphine. Methods for doing this depend on the nature of the substituent in the X position: When X = fluorine, coupling involves anionic displacement with an alkali metal diphenylphosphide. 2-Bromoaryl oxazolines can be converted into a Grignard reagent, which reacts with chlorodiphenylphosphine. Alternatively, the aryl bromide can be coupled with diphenylphosphine via a copper iodide-catalysed reaction. Aryl oxazolines undergo directed ortho lithiation, and the resulting 2-lithio derivative then can be treated with chlorodiphenylphosphine. Of these methods, the copper iodide-catalysed method is popular. Catalysis Phosphinooxazoline complexes have been widely tested in homogeneous catalysis. Allylic substitutions PHOX-based palladium complexes catalyse enantioselective allylic substitutions. Substitutions include allylic alkylations (Tsuji-Trost reaction), aminations, and sulfonylations. Heck Reaction Palladium complexes containing chiral phosphinooxazolines are efficient catalysts for the Heck reaction. Pd-PHOX catalysts have also been used for intramolecular Heck reactions and examples exist where they have been shown to be superior to more common ligands such as BINAP. Asymmetric Hydrogenation In asymmetric hydrogenation iridium complexes of phosphinooxazolines catalyse 'classic' hydrogenation. Related ruthenium and palladium catalysts effect transfer hydrogenation. In addition to theoretical studies, the structural and kinetic properties of these catalysts have also been investigated. See also Other oxazoline based ligands (S)-iPr-PHOX - A specific PHOX ligand Bisoxazolines (BOX) Trisoxazolines (TRISOX) Structurally related ligands Trost ligand Diphenyl-2-pyridylphosphine References Catalysis Coordination chemistry Ligands Oxazolines Phosphines
Phosphinooxazolines
Chemistry
643
1,529,485
https://en.wikipedia.org/wiki/Neighbourhood%20%28mathematics%29
In topology and related areas of mathematics, a neighbourhood (or neighborhood) is one of the basic concepts in a topological space. It is closely related to the concepts of open set and interior. Intuitively speaking, a neighbourhood of a point is a set of points containing that point where one can move some amount in any direction away from that point without leaving the set. Definitions Neighbourhood of a point If X is a topological space and p is a point in X, then a neighbourhood of p is a subset V of X that includes an open set U containing p, that is, p ∈ U ⊆ V ⊆ X. This is equivalent to the point p belonging to the topological interior of V in X. The neighbourhood V need not be an open subset of X. When V is open (resp. closed, compact, etc.) in X, it is called an open neighbourhood (resp. closed neighbourhood, compact neighbourhood, etc.). Some authors require neighbourhoods to be open, so it is important to note their conventions. A set that is a neighbourhood of each of its points is open, since it can be expressed as the union of open sets containing each of its points. A closed rectangle, as illustrated in the figure, is not a neighbourhood of all its points; points on the edges or corners of the rectangle are not contained in any open set that is contained within the rectangle. The collection of all neighbourhoods of a point is called the neighbourhood system at the point. Neighbourhood of a set If S is a subset of a topological space X, then a neighbourhood of S is a set V that includes an open set U containing S, that is, S ⊆ U ⊆ V ⊆ X. It follows that a set V is a neighbourhood of S if and only if it is a neighbourhood of all the points in S. Furthermore, V is a neighbourhood of S if and only if S is a subset of the interior of V. A neighbourhood of S that is also an open subset of X is called an open neighbourhood of S. The neighbourhood of a point is just a special case of this definition. In a metric space In a metric space M = (X, d), a set V is a neighbourhood of a point p if there exists an open ball with center p and radius r > 0 such that B(p; r) = {x ∈ X : d(x, p) < r} is contained in V. V is called a uniform neighbourhood of a set S if there exists a positive number r such that for all elements p of S, B(p; r) is contained in V. Under the same condition, for r > 0, the r-neighbourhood S_r of a set S is the set of all points in X that are at distance less than r from S (or equivalently, S_r is the union of all the open balls of radius r that are centered at a point in S): S_r = ⋃_{p ∈ S} B(p; r). It directly follows that an r-neighbourhood is a uniform neighbourhood, and that a set is a uniform neighbourhood if and only if it contains an r-neighbourhood for some value of r. Examples Given the set of real numbers ℝ with the usual Euclidean metric and a subset V defined as V := ⋃_{n ∈ ℕ} B(n; 1/n), then V is a neighbourhood for the set ℕ of natural numbers, but is not a uniform neighbourhood of this set. Topology from neighbourhoods The above definition is useful if the notion of open set is already defined. There is an alternative way to define a topology, by first defining the neighbourhood system, and then open sets as those sets containing a neighbourhood of each of their points. A neighbourhood system on X is the assignment of a filter N(x) of subsets of X to each x in X, such that the point x is an element of each U in N(x); and each U in N(x) contains some V in N(x) such that, for each y in V, U is in N(y). One can show that both definitions are compatible, that is, the topology obtained from the neighbourhood system defined using open sets is the original one, and vice versa when starting out from a neighbourhood system.
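For reference, the metric-space definitions above can be restated compactly in LaTeX, together with the natural-numbers example worked out; the notation (d, p, r, S, V, B(p; r)) follows the reconstructed text.

\begin{align*}
  &V \text{ is a neighbourhood of } p:         && \exists\, r > 0:\; B(p; r) \subseteq V,\\
  &V \text{ is a uniform neighbourhood of } S: && \exists\, r > 0\ \forall p \in S:\; B(p; r) \subseteq V,\\
  &r\text{-neighbourhood of } S:               && S_r = \bigcup_{p \in S} B(p; r).
\end{align*}
% Example: V = \bigcup_{n \in \mathbb{N}} B(n; 1/n) is a neighbourhood of \mathbb{N},
% since each n lies in its own ball, but it is not a uniform neighbourhood:
% for any fixed r > 0 there is an n with 1/n < r, so B(n; r) is not contained in V.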
Uniform neighbourhoods In a uniform space S = (X, Φ), V is called a uniform neighbourhood of P if there exists an entourage U ∈ Φ such that V contains all points of X that are U-close to some point of P; that is, U[x] ⊆ V for all x ∈ P. Deleted neighbourhood A deleted neighbourhood of a point p (sometimes called a punctured neighbourhood) is a neighbourhood of p without {p}. For instance, the interval (−1, 1) = {y : −1 < y < 1} is a neighbourhood of p = 0 in the real line, so the set (−1, 0) ∪ (0, 1) = (−1, 1) \ {0} is a deleted neighbourhood of 0. A deleted neighbourhood of a given point is not in fact a neighbourhood of the point. The concept of deleted neighbourhood occurs in the definition of the limit of a function and in the definition of limit points (among other things). See also Notes References General topology Mathematical analysis
Neighbourhood (mathematics)
Mathematics
799
67,061,566
https://en.wikipedia.org/wiki/PJ352%E2%80%9315
PSO J352.4034–15.3373, or PJ352-15, is a quasar with an astrophysical jet ascribed to a billion-solar-mass supermassive black hole. It is one of the brightest objects so far discovered. Its discovery, using the Chandra X-Ray Observatory, was reported in March 2021. At 12.7 billion light years from Earth, the X-ray jet became an observational distance record at the time of its discovery. See also List of the most distant astronomical objects List of quasars References Quasars Supermassive black holes Astronomical objects discovered in 2021 Aquarius (constellation)
PJ352–15
Physics,Astronomy
137
20,490,804
https://en.wikipedia.org/wiki/CI-988
CI-988 (PD-134,308) is a drug which acts as a cholecystokinin antagonist, selective for the CCKB subtype. In animal studies it showed anxiolytic effects and potentiated the analgesic action of both morphine and endogenous opioid peptides, as well as preventing the development of tolerance to opioids and reducing symptoms of withdrawal. Consequently, it was hoped that it might have clinical applications for the treatment of pain and anxiety in humans, but trial results were disappointing with only minimal therapeutic effects observed even at high doses. The reason for the failure of CI-988 and other CCKB antagonists in humans despite their apparent promise in pre-clinical animal studies is unclear, although poor pharmacokinetic properties of the currently available drugs are a possible explanation, and CCKB antagonists are still being researched for possible uses as adjuvants to boost the activity of other drugs. References Cholecystokinin antagonists Adamantanes Tryptamines Carboxylic acids Carboxamides Carbamates
CI-988
Chemistry
228
75,259,344
https://en.wikipedia.org/wiki/Tatreez
Tatreez is a form of traditional Palestinian embroidery. Tatreez, meaning ‘embroidery’ in Arabic, is used to refer to the traditional style of embroidery practiced in Palestine and Palestinian diaspora communities. The contemporary form of tatreez is often dated back to the 19th century, but the style of cross-stitch embroidery called fallahi has been practiced amongst Arab communities in the Mediterranean for centuries. The embroidery is particularly associated with embellishments on traditional dress (the thobe), with the motifs and colors representing regional identity and social relationships. Tatreez is commonly used on garments and includes a variety of symbols including birds, trees and flowers. The craft was originally practiced in rural areas of Palestine, but is now common across the Palestinian diaspora. In 2021, the art of embroidery in Palestine was recognized by UNESCO as an important intangible cultural heritage. According to Reem Kassis, this style of embroidery in particular is often celebrated as one of the richest and most exquisite. Historically, each village in Palestine had its own tatreez patterns, with unique designs telling stories about the local people, legends, animals and plants, and various beliefs people had. The different styles of tatreez have become less distinct and have continued to evolve with the diaspora. The practice of tatreez has acquired an additional politicized significance within the context of Palestinian displacement and resistance. Tatreez patterns have incorporated nationalist symbolism in the wake of the Nakba, the Intilaqa, and the Six-Day War, and the practice remains imbued with social significance as a way to embody and propagate cultural heritage. Origins Hanan K. Munayyer, a scholar of Palestinian dress, considers one of the first examples of Tatreez-like embroidery to be a fragment of geometric silk cross-stitch embroidery from 11th-century Alexandria. Dresses found in a 1283 CE burial in Lebanon represent the earliest intact garments with Tatreez embroidery, and strongly resemble the regional dress of Ramallah. Many popular Tatreez motifs mimic elements of Arab architectural and mosaic design. For example, the cup and acanthus leaf motif is often found in stone carvings. Historically, the materials used for Tatreez embroidery were from the local area of the embroiderer. Silk was cultivated in Palestine from the sixth to the nineteenth century in order to make the embroidery thread. Before industrialization, the fabric was woven by men in home looms. Plant dyes, such as indigo or madder, were used to color the embroidery thread. Terminology The cross-stitch mode of embroidery is known as fallahi, from the Arabic word denoting a rural person, ‘fallah.’ This is the most common mode of embroidery throughout Palestine, but certain regions are known for idiosyncratic techniques. Bethlehem embroidery usually uses a satin stitch known as tahiri, and may incorporate metallic thread embroidery (qasab), because the thobe material was often finely woven and unsuitable for fallahi. Embroidered aspects of a thobe were often decorated before being assembled. The parts of a thobe that would often be embroidered include the chest panel (qabbah), which has patterns and colors that vary from region to region and identify the wearer easily. Other commonly embroidered areas of the thobe include the radah (shoulderpiece), side skirt panels (benayiq), the front of the skirt (hiijer), and the lower back panel (shinyar).
Prominent motifs A completed piece of tatreez comprises many motifs. These motifs each carry symbolic meanings, telling a story through textile. Different motifs are passed down through generations, becoming identifiers of particular villages, towns, regions, or even families. The variation reflects the deep connection between tatreez and Palestinian heritage - motifs depict the environment, history, and even daily life of Palestinians. Tree of life: Also known as the Cypress Tree, this motif is one of the most popular, seen across regions. Symbolizing longevity, resilience, and stability, the cypress tree is often depicted in symmetrical patterns, reflecting its persistent role in the natural environment of Palestine. Palm tree: Another common motif is the palm tree, though it is seen most frequently in relation to thobes from Ramallah. The palm tree is reflective of the Palestinian natural landscape; it symbolizes fertility and life, as does Palestine itself. Khemet el Basha: A motif related to Palestinian history, the Khemet el Basha is also referred to as the Pasha's Tent, associated with a high-ranking official from the Ottoman Empire. Star of Bethlehem: As its name suggests, this motif is especially popular in the region of Bethlehem, communicating messages of love and family with its ties to the Canaanite goddess of fertility. Border Motifs: Bordering elements also play a significant role in the symbolic nature of Tatreez and Palestinian thobes. Often framing larger motifs or the edges of a garment, borders depict feathers, roses, soap, triangles, and toothed lines. Each completed thobe, with a tapestry of motifs, is a direct reflection of the personal circumstances of the artist, displaying social and marital status, wealth, village of origin, even communicating personality. Regional differences In each region, the embroidery serves as a form of aesthetic and cultural expression and carries deep social significance, reflecting the community's identity and traditions. For instance, in the northern regions like Bethlehem and Ramallah, the embroidery is distinguished by its intricate motifs and the use of luxurious materials, reflecting the historical and religious importance of the area. Bethlehem is especially well-known for its extravagant use of silver and gold threads in wedding attire, representing prestige and wealth. Ramallah's embroidery features detailed geometric and floral patterns, often adorning handwoven white linen used in traditional dresses, marking celebrations and significant life events. On the other hand, the southern areas, such as Gaza and Hebron, exhibit distinct styles. Although Gazan dresses are simpler and reflect the practical needs of everyday life, they are exquisitely adorned with vibrant embroidery down the sleeves and hems. Hebron, known for its vivid cross-stitch designs on thick linen, exhibits a strong aesthetic with rich reds and greens that represent a bond with the land and local traditions. In addition to adding beauty to the clothing, the tatreez of each region is a form of identity and expression passed down through the generations, reflecting the artistry and resilience of Palestinian culture. Tatreez after the Nakba After the 1948 ethnic cleansing and displacement of over 700,000 Palestinians (known as the Nakba, Arabic for catastrophe), the practice of tatreez was disrupted, as was every aspect of daily life.
Countless treasured works were lost as the vast majority of Palestinians became refugees. Before the Nakba, embroidery was largely a family business, the technique passed down through generations of women. After Palestinians were torn away from almost every aspect of their everyday lives, tatreez, like other routine practices, became something to preserve. The process of “heritagization” turned smaller aspects of Palestinian life into hallmarks of their existence, and resistance. Tatreez also became a necessary part of the economy for refugees: the practice was commodified as women sold their art to support their families. Many women’s organizations cropped up around this time, centered around giving women the supplies to take back their craft and their lives. Tatreez after the 1967 Six-Day War A second tatreez revival occurred in the 1970s and 1980s, after the Six-Day War. Commercialization of tatreez increased, with the market drawing international attention to Palestine. At this point, practicing and teaching tatreez became a revolutionary act. The Palestine Liberation Organization even established tatreez workshops in refugee camps. “In short, in the 1970s and 80s, people extended resistance and the political struggle to culture and other nonpolitical domains, like the domestic sphere, that were less exposed to immediate Israeli repression: This culture resistance was imbued with folklore.” Embroidery gave Palestinian women work, power, and identity, though at this time the art was no longer just women’s work. It is now not only a symbol for Palestinian heritage but a new form of mobilization and activism. Tatreez and the Intifadas Throughout the First Intifada, nationalist art and imagery had a strong influence on resistance, and vice versa. This escalated Israeli censorship of Palestinian art, to the extent that the colors of the flag itself were not permitted to be shown in public. Tatreez, along with other tasks traditionally performed by women, was not only publicized but politicized into a format for nationalist rebellion. Using tatreez to display the Palestinian flag on thobes became a popular form of resistance for Palestinian women. These dresses came to be known as “intifada dresses” or “flag dresses”. Thobes became an excellent medium for the Palestinian flag after it was banned in public places. Women could wear their “intifada dresses” in their homes or other private places, and even if displayed in public at a protest, the dresses could not be removed from their bodies. See also Palestinian handicrafts Palestinian traditional costumes References Palestinian handicrafts History of Palestine (region) Embroidery Palestinian inventions National symbols of the State of Palestine Textile design Palestinian arts
Tatreez
Engineering
1,917
13,110,217
https://en.wikipedia.org/wiki/Fique
Fique is a natural fibre that grows in the leaves of plants in the genus Furcraea. Common names include fique, cabuya, pita, penca, penco, maguey, cabui, chuchao and coquiza. History The Indigenous peoples of the Americas extracted and used the fique fibers to make garments, ropes, and hammocks—among many things—for several centuries before the arrival of Spanish conquerors. In the 17th century, Dutch colonists carried the plant from their Brazilian colonies in Pernambuco to the island of Mauritius. The native inhabitants of the island learned to use the fibre and gave it their own local names. The fibre was also introduced to St. Helena, India, Sri Lanka, Algeria, Madagascar, East Africa, Mexico and Costa Rica. In the 18th century, in Dagua, Valle del Cauca, Colombia, the priest Feliciano Villalobos started the first rope and wrapping materials manufacturing industry; his products were made of fique. In 1880 the Colombian government reported a yearly production of three million kilograms of fibres, the exportation to Venezuela of two million, the fabrication of five million pairs of alpargatas and four million metres of rope. Between 1970 and 1975 the fique industry suffered a crisis brought about by the development of polypropylene, which costs less and is produced faster. Today, fique is considered the Colombian national fibre and is used in the fabrication of ethnic products and Colombian handicrafts; recently (since July 2007) it has also been used for the heat protectors (handmade in Barichara) placed around the Colombian coffee cups sold in the Juan Valdez coffee shops worldwide. Uses Packing: The main use of the Colombian cabuya is for the fabrication of sacks and packages for agriculture. According to the number of threads, the products are classified as: Dense: 6000 to 10,000 threads per square metre. Used for flour and small grains such as rice. Semidense: 4800 to 5500. Used for bigger grains such as coffee and beans. Loose: 300 to 360. Used for fruits, vegetables and panela. Ropes: with cabuya one can make very resistant ropes and strings of different calibres, from threads to manilas one inch in diameter. Such ropes are used in the industries of transportation, construction, sailing and many others. Arriería accessories: many of the elements used on pack animals, such as enjalmas, cinchas, retrancas, lazos, pretales, tapa de enjalma, and cinchos are handmade with fique. Tapestry: the mixed and crude cabuya is used in rugs and tapestry of different size and quality. The fibres can be stained with different organic materials, such as avocado seed, achiote and eucalyptus cortex. Others: handcrafts, purses, bags, handbags, mattresses, curtains, shoes, umbrellas, baskets and many other products. Subproducts Pulp: Used to produce organic fertilizer and paper. Leaf juice: Can be used for the fabrication of soap, fungicides, alcoholic beverages (homemade tapetusa), organic fuel and animal food. Floral stem: The strong floral stem of the fique plant is used in the construction of houses and ladders. Bulbs: The pickled terminal bulbs of the plant are edible. Medicinal uses: Peasants use the leaves in topical preparations for the treatment of boils. The extract of leaves is used against horse lice. Cultivation Fique can be obtained from several species of Furcraea, such as F. macrophylla Baker, F. cabuya Trel., F. andina Trel., and F. castilla. Depending on the processing of the fiber and the species used, many varieties of fique fibers can be obtained.
Among others: Main varieties Ceniza (ash-colored) Espinosa (rough texture) Castilla or Golden border Sisal Secondary varieties Cabuya verde (green) Uña de águila (eagle nail) Negra común (black common) Chachagueña Genoia Tunosa común (common spiked) Jardineña Espadilla Rabo de chucha (opossum tail). Optimal conditions for the growing of the fique plant are: Temperature: 19 °C – 23 °C Altitude: 1,300 m – 1,900 m Annual rainfall: 1,000 mm – 1,600 mm Sunlight: 5–6 hours/day Soil: dry, rich in silicates. Fique crops bring nitrogen to the soil, improving its fertility. The plant is very adaptable to different ecological conditions. A fique plant can produce 1 to 6 kg of fiber each year. Diseases Llaga macana or rayadilla: a viral disease that attacks all varieties of fique and all the parts of the plant, especially in crops over 1900 m altitude. The disease has no chemical control. It must be managed with preventive measures. Pink disease: caused by the fungus Erythricium salmonicolor. The disease damages the leaves, disrupting the fibers. Treatment is undertaken with copper-based fungicides. Peasants treat this disease by applying ashes to the base of the leaves. Leaf cochineal (Diaspis bromelia): caused by a parasitic insect. Leaf beetle: a beetle that perforates the base of the leaves. References Furcraea Fiber plants Biodegradable materials Cellulose Crops originating from Colombia Crops originating from Ecuador Crops originating from Peru Crops originating from South America Culture of Colombia Garden plants of South America Plants described in 1915
Fique
Physics,Chemistry
1,147
61,011,970
https://en.wikipedia.org/wiki/Estradiol%20benzoate/estradiol%20valerate/norethisterone%20acetate/testosterone%20enanthate
Estradiol benzoate/estradiol valerate/norethisterone acetate/testosterone enanthate (EB/EV/NETA/TE), sold under the brand name Ablacton, is an injectable combination medication of estradiol benzoate (EB), an estrogen, estradiol valerate (EV), an estrogen, norethisterone acetate (NETA), a progestin, and testosterone enanthate (TE), an androgen/anabolic steroid, which has been used to suppress lactation in women. It contains 5 mg EB, 8 mg EV, 20 mg NETA, and 180 mg TE in oil solution and is provided in the form of ampoules. It is given as a single intramuscular injection following childbirth. The medication was manufactured by Schering and was previously marketed in Italy and Spain, but is no longer available. See also List of combined sex-hormonal preparations § Estrogens, progestogens, and androgens References Abandoned drugs Combined estrogen–progestogen–androgen formulations
Estradiol benzoate/estradiol valerate/norethisterone acetate/testosterone enanthate
Chemistry
240
21,039,740
https://en.wikipedia.org/wiki/Lithium%20tetrachloroaluminate
Lithium tetrachloroaluminate is an inorganic compound with the formula LiAlCl4. It consists of lithium cations Li+ and tetrahedral tetrachloroaluminate anions AlCl4−. Uses Lithium tetrachloroaluminate is used in some lithium batteries. A solution of lithium tetrachloroaluminate in thionyl chloride is the liquid cathode and electrolyte in those batteries, e.g. the lithium-thionyl chloride cell. Another cathode-electrolyte formulation is lithium tetrachloroaluminate + thionyl chloride + sulfur dioxide + bromine. Reactions It reacts violently with water, alcohols and oxidizing agents. Upon exposure to heat or fire, it decomposes, emitting irritating and toxic fumes and smoke of hydrogen chloride, lithium oxide and aluminium oxide. Toxicity Upon contact with skin, it causes burns. Inhalation causes coughing and corrosive injuries to the respiratory system, which can lead to pneumonia. This compound is extremely destructive to the mucous tissues. It may cause pulmonary edema and edema of the larynx, laryngitis and edema of the bronchi, leading to shortness of breath. It may cause damage to the eyes, headache and nausea. If swallowed, it may cause damage. References Lithium salts Tetrachloroaluminates
Lithium tetrachloroaluminate
Chemistry
275
37,364,815
https://en.wikipedia.org/wiki/Triptolide
Triptolide is a diterpenoid epoxide which is produced by the thunder god vine, Tripterygium wilfordii. It has in vitro and in vivo activities against mouse models of polycystic kidney disease and pancreatic cancer, but its physical properties and severe toxicity limit its therapeutic potential. Consequently, a synthetic water-soluble prodrug, minnelide, is being studied clinically instead. Triptolide is a component of ContraPest, a contraceptive pest control liquid used to reduce rat populations in the United States. Mechanism of action Several putative target proteins of triptolide have been reported, including polycystin-2, ADAM10, DCTPP1, TAB1, and XPB. Multiple triptolide-resistant mutations exist in XPB (ERCC3) and its partner protein GTF2H4. However, no triptolide-resistant mutations were found in polycystin-2, ADAM10, DCTPP1 and TAB1. Cys342 of XPB was identified as the residue that undergoes covalent modification by the 12,13-epoxide group of triptolide, and the XPB-C342T mutant rendered the T7115 cell line nearly completely resistant to triptolide. The level of resistance conferred by the C342T mutation is about 100-fold higher than the most triptolide-resistant mutants previously identified. Together, these results validate XPB as a target responsible for the antiproliferative activity of triptolide. The disruption of super-enhancer networks has also been suggested as a mechanism of action. Water-soluble prodrugs Minnelide is a more water-soluble synthetic prodrug of triptolide which is converted to triptolide in vivo. In a preclinical mouse model of pancreatic cancer, it was "even more effective than gemcitabine". Its Phase II clinical trials are expected to conclude in February 2019. Glutriptolide, a glucose conjugate of triptolide with better solubility and lower toxicity, did not inhibit XPB activity in vitro, but exhibited tumor control in vivo, which is likely due to sustained stepwise release of active triptolide within cancer cells. A second generation glutriptolide has been recently reported for targeting hypoxic cancer cells with increased glucose transporter expression. References Diterpenes Epoxides Furanones Secondary alcohols Isopropyl compounds Plant toxins
Triptolide
Chemistry
519
3,470,484
https://en.wikipedia.org/wiki/Enhanced%20biological%20phosphorus%20removal
Enhanced biological phosphorus removal (EBPR) is a sewage treatment configuration applied to activated sludge systems for the removal of phosphate. The common element in EBPR implementations is the presence of an anaerobic tank (nitrate and oxygen are absent) prior to the aeration tank. Under these conditions a group of heterotrophic bacteria, called polyphosphate-accumulating organisms (PAO), are selectively enriched in the bacterial community within the activated sludge. In the subsequent aerobic phase, these bacteria can accumulate large quantities of polyphosphate within their cells and the removal of phosphorus is said to be enhanced. EBPR can also integrate biological nitrogen removal through the addition of an anoxic zone where nitrate is present, exploiting the denitrification capabilities of some PAO. Generally speaking, all bacteria contain a fraction (1-2%) of phosphorus in their biomass due to its presence in cellular components, such as membrane phospholipids and DNA. Therefore, as bacteria in a wastewater treatment plant consume nutrients in the wastewater, they grow and phosphorus is incorporated into the bacterial biomass. When PAOs grow they not only consume phosphorus for cellular components but also accumulate large quantities of polyphosphate within their cells. Thus, the phosphorus fraction of phosphorus-accumulating biomass is 5-7%. In mixed bacterial cultures the phosphorus content will be at most 3-4% of the total organic mass. If additional chemical precipitation takes place, for example to reach discharge limits, the P-content could be higher, but that is not affected by EBPR. This biomass is then separated from the treated (purified) water at the end of the process and the phosphorus is thus removed. Thus, if PAOs are selectively enriched by the EBPR configuration, considerably more phosphorus is removed, compared to the relatively poor phosphorus removal in conventional activated sludge systems. See also List of waste-water treatment technologies References Further reading External links Handbook Biological Waste Water Treatment - Principles, Configuration and Model EPBR Metagenomics: The Solution to Pollution is Biotechnological Revolution - A Review from the Science Creative Quarterly Website of the Technische Universität Darmstadt and the CEEP about Phosphorus Recovery Biotechnology Waste treatment technology
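A back-of-the-envelope mass balance shows why the higher phosphorus fraction matters; this Python sketch uses the 1-2% and 5-7% figures quoted above, while the sludge production and influent phosphorus load are made-up illustrative numbers.

def p_removed_kg_per_day(sludge_kg_per_day: float, p_fraction: float) -> float:
    # Phosphorus leaving the plant with the wasted biomass.
    return sludge_kg_per_day * p_fraction

sludge = 800        # kg dry sludge wasted per day (illustrative plant)
influent_p = 50     # kg P per day entering the plant (illustrative)

conventional = p_removed_kg_per_day(sludge, 0.02)  # ~2 % P in ordinary biomass
ebpr = p_removed_kg_per_day(sludge, 0.06)          # ~6 % P with enriched PAOs

print(f"conventional sludge: {conventional:.0f} kg P/d ({conventional / influent_p:.0%} of the load)")
print(f"EBPR sludge:         {ebpr:.0f} kg P/d ({ebpr / influent_p:.0%} of the load)")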
Enhanced biological phosphorus removal
Chemistry,Engineering,Biology
456
636,222
https://en.wikipedia.org/wiki/Chyle
Chyle () is a milky bodily fluid consisting of lymph and emulsified fats, or free fatty acids (FFAs). It is formed in the small intestine during digestion of fatty foods, and taken up by lymph vessels specifically known as lacteals. The lipids in the chyle are colloidally suspended in chylomicrons. Clinical significance A chyle fistula occurs when defect(s) of lymphatic vessel(s) result in leakage of lymphatic fluid, typically accumulating in the thoracic (pleural) or abdominal (peritoneal) cavities, leading to a chylous pleural effusion (chylothorax) or chylous ascites, respectively. Diagnosis of a chyle fistula may be accomplished by analysis of pleural/peritoneal fluid. Identifying the source (localizing the lymphatic defect) is often challenging, but may be accomplished with lymphangiography, which is occasionally associated with a serendipitous therapeutic effect (resolution of the leak), thought to be secondary to a sclerosant effect of the lymphangiography contrast. Due to the extreme friability of the lymphatic vessels, direct repair of defects is impractical. Therefore, treatment of chyle fistulae relies upon either decreased production of lymphatic fluid to allow for healing of lymphatic defect(s) or permanent diversion of lymphatic fluid away from lymphatic defect(s). Decreased production of lymphatic fluid may be accomplished by dietary restriction (or complete replacement of oral intake with total parenteral nutrition), as well as by the medications octreotide (a synthetic analogue of the hormone somatostatin) and orlistat (a lipase inhibitor that decreases absorption of dietary fats). Permanent diversion of lymphatic fluid may be accomplished by thoracic duct embolization (a needle-based procedure to occlude the duct by depositing glue/embolic material into it) or by thoracic duct ligation (an open surgical procedure to occlude the duct by suturing tightly around it). See also Chyme References Body fluids Digestive system Lymphatic system
Chyle
Biology
490
29,159,433
https://en.wikipedia.org/wiki/ML%20domain
The MD-2-related lipid-recognition (ML) domain is implicated in lipid recognition, particularly in the recognition of pathogen related products. It has an immunoglobulin-like beta-sandwich fold similar to that of immunoglobulin E-set domains. This domain is present in the following proteins: Epididymal secretory protein E1 (also known as Niemann-Pick C2 protein), which is known to bind cholesterol. Niemann-Pick disease type C2 is a fatal hereditary disease characterised by accumulation of low-density lipoprotein-derived cholesterol in lysosomes. House dust mite allergen proteins such as Der f 2 from Dermatophagoides farinae and Der p 2 from Dermatophagoides pteronyssinus. Human proteins containing this domain LY86; LY96; MMD-1; References Protein domains Protein families
ML domain
Biology
198
30,487,327
https://en.wikipedia.org/wiki/Cundall%20%28engineering%20consultancy%29
Cundall is a multi-disciplinary engineering consultancy. Originally based in Newcastle and Edinburgh, the company has since spread its operations across five continents. The firm was founded in 1976 on the basis that it would offer a more client-focused service with a multi-disciplinary and ecologically friendly approach to projects. Five years after its establishment, Cundall expanded into London, and thereafter various other locations, securing increasingly prominent work as a result. During the 1990s and 2000s, a new generation of partners gradually took on operations from Cundall's original founding partners; the company's first managing director, David Dryden, was appointed in 2002. Significant expansion of the company occurred during the 2010s, although job losses occurred during the COVID-19 pandemic of the early 2020s. Cundall has frequently advocated for environmental sustainability and sympathetic development; it plans for all of the firm's undertakings to achieve net zero carbon by 2030. Development During 1976, Cundall was established, having been co-founded by Geoffrey Cundall, Rick Carr, Michael Burch, David Gandy and Bernard Johnston. A common belief held by the founders was that the construction industry was chaotic and could be better organised; Cundall thus sought to deliver projects via a people-centric multi-disciplinary approach that incorporated structural, civil, electrical and mechanical engineering. As directed by several of its founders, the company has long maintained an emphasis on ecologically friendly development. Referred to as low energy design early on, sustainability in Cundall's undertakings was pursued from the firm's early years. Company representatives have often publicly spoken out on the topic and promoted the incorporation of low energy solutions and new technologies to minimise environmental impact and increase efficiency; the company has also set a goal for all of its undertakings to achieve net zero carbon by 2030. Initially, the firm's activities were centred around the northern cities of Newcastle and Edinburgh. During 1981, at the urging of Carr, the company's London office was established; Carr and Laurie Clark secured numerous key clients in London that led to Cundall being awarded roles in numerous high profile projects, including the headquarters of several major firms, such as Swiss Bank, British Airways and Deutsche Bank. The next three offices opened by the firm were in Birmingham, Manchester, and Sydney - the latter being the start of Cundall's international expansion. The company has pursued a strategy of wholly organic expansion, not only in terms of geographic coverage but also in terms of the disciplines offered to prospective clients, such as IT, geotechnics, fire protection engineering, lighting design and acoustics. During 1989, founder Geoffrey Cundall departed the firm; he died in early 2015. During 2002, David Dryden was appointed as Cundall's first managing partner; a new generation of partners gradually took on day-to-day operations of the firm from the remaining founders around this time. In early 2009, Cundall took legal action against a hotel group over its failure to pay for work performed on a luxury hotel adjacent to St Paul's cathedral in London. The 2010s saw significant expansion of the company.
By 2014, Cundall was operating numerous offices around the world; in the United Kingdom, it had offices in London, Newcastle, Edinburgh, Birmingham, Belfast, and Manchester; its Australian offices were in Sydney, Perth, Melbourne, Brisbane, and Adelaide; the Asian offices included Hong Kong, Shanghai, Manila, and Singapore; its Middle East and North African (MENA) offices were in Dubai, Doha, and Tripoli; and its European offices were in Dublin, Bucharest, Paphos, Madrid, and Wroclaw. In 2014, Tomás Neeson took over as managing partner. In July 2020, amid the COVID-19 pandemic, the firm announced that it would shed as many as 40 jobs; at the time, it employed 550 UK-based employees. In July 2024, Rick Carr, one of Cundall's founding partners and a key figure at the firm for almost half a century, died. Awards The Construction Skills Cut the Carbon Award, 2013 The Building Awards. The open BIM Build Qatar Live, 2012 Build Qatar Live. The Legacy Award – Sustainability, 2012 West Midland Centre for Consulting Excellence. Consultancy Practice of the Year, 2012 Constructing Excellence in the North East Awards. Australia's Zero Carbon Sustainable house: Collaborative Future, 2012 Zero Carbon Challenge. Romania Green Building Council Awards, two awards: Sustainable Company of the Year, 2011. Green Service Provider of the Year, 2011. Sustainable Consultant of the year award, 2010 Building Sustainability Awards. Most Sustainable Remediation Project, 2010 Remediation Innovation Awards. Research, Studies and Consulting Award, 2010 ACE Engineering Excellence awards. Cadbury Bournville Place, two awards: 2009 British Council for Offices (BCO) Awards, Corporate Workplace, Regional Award. 2009 Royal Institute of British Architects (RIBA) Award, West Midlands Region. David Clark awarded the 2008 Sustainability Champion of the year award, UK Sustainable Building Services Awards 2008. 180 Great Portland Street, London, 2008 British Council for Offices (BCO) Awards, Innovation category. ISG Headquarters, Aldgate House, London. Three major awards at the 2007 British Council for Offices (BCO) Awards. Best of the best award National award, fit-out of workplace Regional award, London, fit-out of workplace In 2016, Cundall won the Consultant of the Year award at the Construction News Awards, as organised by Construction News.
Selected projects United Kingdom 2 Snow Hill, Birmingham BA Waterside, Harmondsworth, London Centre Point, London UK Astronomy Technology Centre, Edinburgh Vodafone headquarters building, Newbury Excelsior Academy, Newcastle Mann Island Buildings, Liverpool One Hyde Park, London Sage Group headquarters building, Newcastle Wellcome Trust Gibbs Building, Euston Road, London Lambeth Academy, London University of St Andrews Arts Faculty Building, St Andrews Cadbury Bournville Place, Bournville, Birmingham New Street Square, London Capital One, Loxley House, Nottingham Durham Gateway, Durham University, Durham Tyneside Cinema, Newcastle Bede Academy, Northumberland Aston University Engineering Academy, Birmingham Australia 1 Bligh Street, Sydney St Leonard's College (Melbourne) Sustainability Centre, Melbourne 480 Queen Street, Brisbane 30 The Bond, Sydney Coca-Cola Place, Sydney Mildura Airport, Victoria (Australia) Royal Children's Hospital, Melbourne Sydney Airport, Sydney Westfield Sydney, Sydney Milson Island Recreation Centre, New South Wales Rouse Hill Town Centre, Sydney Ravenswood School for Girls, Sydney MENA Deloitte Emaar Square, UAE Desert Canyon Resort, UAE Dubawi Island, UAE Jumeirah Zabeel Saray, UAE Nurai Island, UAE Porto Dubai Island, UAE Regulation and Supervision Bureau (RSB) Office, UAE Tiara United Towers, UAE TNS, Makeen Tower, UAE Libyan European Hospital, Libya Santa Monica Beach Resort, Boa Vista Island, Cape Verde Europe General Electric Headquarters, Madrid, Spain Paris Data Centre, Paris, France Bukowice – Low energy detached house, Bokowice, Poland Stara Mennica, Warsaw, Poland Facebook Luleå, Luleå, Sweden Colosseum Shopping Centre, Bucharest, Romania Cultural Buildings, Tasnad Refurbishment, Tasnad, Romania Dealul Lomb, Cluj, Romania Hampton Hotel, Brasov, Romania Italiana 24, Bucharest, Romania Vatra Dornei Hotel, Vatra Dornei, Romania Asia Shenzhen Office Building, Hong Kong Eaton Hotel Chiller Replacement, Hong Kong Happy Valley Data Centre, Hong Kong Hong Kong Children's Hospital, Hong Kong Hong Kong Science Park, Phase 3, Hong Kong Jurong Data Centre, Jurong, Singapore Ascendas iHub Suzhou, Suzhou, China Corporate fit-out, Shanghai, China Da Zhongli, Shanghai, China Hakkasan, Shanghai, China HASSELL Shanghai Studio, China MGM MACAU, Macau, China UNICO Restaurant at The Bund, Shanghai, China Taiwan Tower International Competition, Taichung, Taiwan Gallery References External links Official site Construction and civil engineering companies of the United Kingdom Engineering consulting firms of the United Kingdom International engineering consulting firms Companies based in Newcastle upon Tyne Construction and civil engineering companies established in 1976 1976 establishments in England
Cundall (engineering consultancy)
Engineering
1,697
4,887,687
https://en.wikipedia.org/wiki/List%20mining
List mining can be defined as the use, for purposes of scientific research, of messages sent to Internet-based electronic mailing lists. List mining raises novel issues in Internet research ethics. These ethical issues are especially important for health related lists. Some questions that need to be considered by a Research Ethics Committee (or an Institutional Review Board) when reviewing research proposals that involve list mining include these: Are participants in mailing lists "research subjects"? Should those participants in a health related electronic mailing list who were the original sources of messages sent to such lists be regarded as "research subjects"? If so, then several ethical issues need to be considered. These include those pertaining to privacy, informed consent, whether the research is intrusive and has potential for harm, and whether the list should be perceived as "private" or "public" space. Are participants in mailing lists "published authors"? Should those who were the sources of messages sent to such lists be regarded as "published authors"? Or, perhaps, as "amateur authors"? If so, there are issues of copyright and proper attribution to be considered if messages sent to such lists are cited verbatim. Even short excerpts from such messages raise such issues. Are participants in mailing lists "members of a community"? Participants on mailing lists such as electronic support groups may regard themselves as members of an online "community". Are they? To provide an answer to this question, characteristics of various types of communities need to be defined and considered. For example, if one defining characteristic of a community is "self-identification as community", then virtual groups often have this characteristic. However, if "geographic localization" or "legitimate political authority" are considered to be other defining characteristics of a community, then virtual groups rarely or never possess this characteristic. Of particular importance are virtual groups that, instead of being supportive, may endanger public health in some way. Examples would be mailing lists that attempt to promote actions that may be illegal (such as inciting race hatred), or actions that may be unpopular, but not currently illegal (such as promoting the sale of cigarettes to adults). From a perspective of Internet research ethics, judgements about the potential of particular mailing lists to cause more harms than benefits should be made by a Research Ethics Committee (or an Institutional Review Board), rather than by the researchers themselves. See also Electronic mailing list Internet research ethics External links Till, J. E. "List mining" raises novel issues in research ethics. BMJ 2006; 332(7547): 939 (Rapid Response, 24 April 2006) Read Rapid Responses; BMJ 2006(6 May); 332(7549): 1095 Letter Eysenbach, G. and Till, J. E. Ethical issues in qualitative research on internet communities. : BMJ 2001(Nov 10);323(7321):1103-5. Full text. Bruckman, A. Internet Research Ethics: Studying the Amateur Artist: A Perspective on Disguising Data Collected in Human Subjects Research on the Internet. Part of a collection of papers from members of a panel organized for the Computer Ethics: Philosophical Enquiries (CEPE) conference held at Lancaster University, December 14–16, 2001. Abstract. Galegher, J., Sproull, L., and Keisler, S. Legitimacy, Authority, and Community in Electronic Support Groups. Written Communication 1998(Oct); 15(4): 493–530. Archived text. Weijer, C., Emanuel, E.J. Ethics. 
Protecting communities in biomedical research. Science 2000(Aug 18); 289(5482): 1142–4. PubMed citation Chapman, S. Respect for privacy of groups that endanger public health? BMJ 2001; 323(7321): 1103 (Rapid Response, 12 November 2001). Read Rapid Responses. Madge, C. Developing a geographers' agenda for online research ethics. Prog Hum Geogr 2007; 31(5): 654–74. Abstract Ruttan, S. The Internet, Access, Accuracy and Abuse. Sandra Ruttan blog, 29 September 2007. Blog post Internet ethics
List mining
Technology
880
14,718,234
https://en.wikipedia.org/wiki/Johnsen%E2%80%93Rahbek%20effect
The Johnsen–Rahbek effect occurs when an electric potential is applied across the boundary between a metallic surface and the surface of a semiconducting material or a polyelectrolyte. Under these conditions an attractive force appears, whose magnitude depends on the voltage and the specific materials involved. The attractive force is much larger than would be produced by Coulombic attraction. The effect is named after Danish engineers F. A. Johnsen and K. Rahbek, the first to investigate the effect at length. References External links "Edison's Loud-Speaking Telephone" Electrical engineering Classical mechanics
Johnsen–Rahbek effect
Physics,Engineering
124
22,405,720
https://en.wikipedia.org/wiki/CS-BLAST
CS-BLAST (Context-Specific BLAST) is a protein sequence search tool that extends BLAST (Basic Local Alignment Search Tool), using context-specific mutation probabilities. More specifically, CS-BLAST derives context-specific amino-acid similarities on each query sequence from short windows on the query sequences. Using CS-BLAST doubles sensitivity and significantly improves alignment quality without a loss of speed in comparison to BLAST. CSI-BLAST (Context-Specific Iterated BLAST) is the context-specific analog of PSI-BLAST (Position-Specific Iterated BLAST), which computes the mutation profile with substitution probabilities and mixes it with the query profile. Both of these programs are available as web servers and for free download. Background Homology is the relationship between biological structures or sequences derived from a common ancestor. Homologous proteins (proteins that have common ancestry) are inferred from their sequence similarity. Inferring homologous relationships involves calculating scores of aligned pairs minus penalties for gaps. Aligning pairs of proteins identifies regions of similarity indicating a relationship between the two, or more, proteins. In order to have a homologous relationship, the sum of scores over all the aligned pairs of amino acids or nucleotides must be sufficiently high [2]. Standard methods of sequence comparisons use a substitution matrix to accomplish this [4]. Similarities between amino acids or nucleotides are quantified in these substitution matrices. The substitution score S(x, y) of amino acids x and y can be written as S(x, y) = log [ P(y | x) / P(y) ], where P(y | x) denotes the probability of amino acid x mutating into amino acid y [2]. In a large set of sequence alignments, counting the number of occurrences of each amino acid as well as the number of aligned pairs allows one to derive the probabilities P(y) and P(y | x). Since protein sequences need to maintain a stable structure, a residue's substitution probabilities are largely determined by the structural context of where it is found. As a result, substitution matrices are trained for structural contexts. Since context information is encoded in transition probabilities between states, mixing mutation probabilities from substitution matrices weighted for corresponding states achieves improved alignment qualities when compared to standard substitution matrices. CS-BLAST improves further upon this concept. A figure in the original article illustrates the sequence-to-sequence and profile-to-sequence equivalence with the alignment matrix; the query profile results from artificial mutations, with bar heights proportional to the corresponding amino acid probabilities. Its caption reads: “Sequence search/alignment algorithms find the path that maximizes the sum of similarity scores (color-coded blue to red). Substitution matrix scores are equivalent to profile scores if the sequence profile (colored histogram) is generated from the query sequence by adding artificial mutations with the substitution matrix pseudocount scheme. Histogram bar heights represent the fraction of amino acids in profile columns”. Performance CS-BLAST greatly improves alignment quality over the entire range of sequence identities and especially for difficult alignments in comparison to regular BLAST and PSI-BLAST.
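As an illustration of the log-odds relationship described in the Background section above, the short sketch below derives substitution scores S(x, y) = log(P(y | x) / P(y)) from counts of aligned pairs. The toy alphabet, the list of aligned pairs and the pseudocount value are all invented for illustration; they are not data or code from CS-BLAST.

```python
import math
from collections import Counter

# Toy data: aligned amino-acid pairs (x, y) observed in a small set of
# alignments. Real matrices are estimated from millions of pairs; these
# counts are invented purely for illustration.
aligned_pairs = [("A", "A"), ("A", "V"), ("V", "A"), ("V", "V"),
                 ("A", "A"), ("L", "V"), ("L", "L"), ("V", "L")]

pair_counts = Counter(aligned_pairs)
x_counts = Counter(x for x, _ in aligned_pairs)
y_counts = Counter(y for _, y in aligned_pairs)
total = len(aligned_pairs)

def substitution_score(x, y, pseudocount=0.5):
    """S(x, y) = log( P(y|x) / P(y) ), with a small pseudocount to avoid log(0)."""
    p_y_given_x = (pair_counts[(x, y)] + pseudocount) / (x_counts[x] + pseudocount * len(y_counts))
    p_y = (y_counts[y] + pseudocount) / (total + pseudocount * len(y_counts))
    return math.log(p_y_given_x / p_y)

for x, y in [("A", "A"), ("A", "V"), ("L", "A")]:
    print(f"S({x},{y}) = {substitution_score(x, y):+.2f}")
```

Conserved pairings come out with positive scores and unexpected pairings with negative scores, which is the behaviour a substitution matrix is meant to capture.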
PSI-BLAST (Position-Specific Iterated BLAST) runs at about the same speed per iteration as regular BLAST, but is able to detect weaker sequence similarities that are still biologically relevant. Alignment quality is based on alignment sensitivity and alignment precision. Alignment Quality Alignment sensitivity is measured by comparing the number of correctly aligned residue pairs to the total number of structurally alignable pairs. This is calculated with the fraction: (pairs correctly aligned)/(pairs structurally alignable) Alignment precision is measured by the correctness of aligned residue pairs. This is calculated with the fraction: (pairs correctly aligned)/(pairs aligned) Search Performance Biegert and Söding evaluated homology detection with a benchmark that compares CS-BLAST to BLAST, counting true positives from the same superfamily versus false positives of pairs from different folds. A second benchmark plots true positives (on a different scale than the first) against false positives for PSI-BLAST and CSI-BLAST, comparing the two over one to five iterations. CS-BLAST offers improved sensitivity and alignment quality in sequence comparison. Sequence searches with CS-BLAST are more than twice as sensitive as BLAST. It produces higher quality alignments and generates reliable E-values without a loss of speed. CS-BLAST detects 139% more homologous proteins at a cumulative error rate of 20%. At a 10% error rate, 138% more homologs are detected, and for the easiest cases at a 1% error rate, CS-BLAST was still 96% more effective than BLAST. Additionally, CS-BLAST in 2 iterations is more sensitive than 5 iterations of PSI-BLAST. About 15% more homologs were detected in comparison. Method The CS-BLAST method derives similarities between sequence context-specific amino acids for 13-residue windows centered on each residue. CS-BLAST works by generating a sequence profile for a query sequence by using context-specific mutations and then jumpstarting a profile-to-sequence search method. CS-BLAST starts by predicting the expected mutation probabilities for each position. For a certain residue, a sequence window of the surrounding residues (six on either side of the central residue) is selected. Then, Biegert and Söding compared the sequence window to a library with thousands of context profiles. The library is generated by clustering a representative set of sequence profile windows. The actual prediction of mutation probabilities is achieved by weighted mixing of the central columns of the most similar context profiles. This aligns short profiles that are nonhomologous and ungapped, which gives higher weight to better matching profiles, making them easier to detect. A sequence profile represents a multiple alignment of homologous sequences and describes what amino acids are likely to occur at each position in related sequences. With this method substitution matrices are unnecessary. In addition, there is no need for transition probabilities as a result of the fact that context information is encoded within the context profiles. This makes computation simpler and allows for runtime to be scaled linearly instead of quadratically. The context-specific mutation probability, the probability of observing a specific amino acid in a homologous sequence given a context, is calculated by a weighted mixing of the amino acids in the central columns of the most similar context profiles.
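To make the weighted-mixing idea concrete, here is a small self-contained sketch. The tiny alphabet, the three 5-residue "library" profiles, the window-similarity function and the temperature parameter are all invented for illustration; the real CS-BLAST library holds thousands of 13-residue context profiles and uses its own similarity and weighting scheme.

```python
import math

AA = "ACDE"  # tiny toy alphabet; real profiles use all 20 amino acids

# Invented "context profile library": each entry has a per-position window
# profile and an amino-acid distribution for its central column.
library = [
    {"window": ["A", "A", "C", "A", "A"], "central": {"A": 0.7, "C": 0.1, "D": 0.1, "E": 0.1}},
    {"window": ["D", "E", "D", "E", "D"], "central": {"A": 0.1, "C": 0.1, "D": 0.5, "E": 0.3}},
    {"window": ["C", "C", "C", "C", "C"], "central": {"A": 0.1, "C": 0.7, "D": 0.1, "E": 0.1}},
]

def similarity(query_window, profile_window):
    """Crude window similarity: fraction of matching positions (illustrative only)."""
    return sum(q == p for q, p in zip(query_window, profile_window)) / len(query_window)

def context_specific_probabilities(query_window, temperature=0.2):
    """Mix central columns of library profiles, weighted by window similarity."""
    weights = [math.exp(similarity(query_window, e["window"]) / temperature) for e in library]
    z = sum(weights)
    return {a: sum(w * e["central"][a] for w, e in zip(weights, library)) / z for a in AA}

query_window = ["A", "C", "C", "A", "A"]  # window around one query residue
probs = context_specific_probabilities(query_window)
print({a: round(p, 3) for a, p in probs.items()})
```

Windows that resemble a library profile dominate the mixture, so the predicted amino-acid distribution for the central residue reflects its sequence context rather than a single context-independent substitution matrix.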
In the calculation of expected mutation probabilities for a specific residue at a certain position, the library of context profiles all contribute based on their similarity to the context-specific sequence profile for the query sequence. Models In predicting substitution probabilities using only the amino acid's local sequence context, one gains the advantage of not needing to know the structure of the query protein while still allowing for the detection of more homologous proteins than standard substitution matrices [4]. Biegert and Söding's approach to predicting substitution probabilities was based on a generative model. In another paper, in collaboration with Angermüller, they developed a discriminative machine learning method that improves prediction accuracy [2]. Generative Model Given an observed variable x and a target variable y, a generative model defines the probabilities P(x | y) and P(y) separately. In order to predict the unobserved target variable y, Bayes' theorem, P(y | x) = P(x | y) P(y) / P(x), is used. A generative model, as the name suggests, allows one to generate new data points (x, y). The joint distribution is described as P(x, y) = P(x | y) P(y). To train a generative model, the parameters are chosen to maximize the joint probability P(x, y) over the training data. Discriminative Model The discriminative model is a logistic regression maximum entropy classifier. With the discriminative model, the goal is to predict a context-specific substitution probability given a query sequence. The discriminative approach for modeling substitution probabilities, where the context c describes a sequence of amino acids around a position of a sequence, is based on context states. Context states k are characterized by the parameters emission weights, a bias weight, and context weights [2]. Emission probabilities from a context state are given by its emission weights: for each of the 20 amino acids a, the emission probability P(a | k) is proportional to the exponential of the emission weight of a in context state k. In the discriminative approach, the probability for a context state given a context is modeled directly by the exponential of an affine function of the context count profile, with a normalization constant Z(c) that normalizes the probability to 1: P(k | c) = exp(b_k + Σ_j Σ_a λ_k(j, a) c(j, a)) / Z(c), where c is the context count profile, b_k is the bias weight, λ_k are the context weights, the first summation runs over the positions j of the context window and the second over the 20 amino acid types a. As with the generative model, the target distribution is obtained by mixing the emission probabilities of each context state weighted by the similarity. Using CS-BLAST The MPI Bioinformatics toolkit is an interactive website and service that allows anyone to do comprehensive and collaborative protein analysis with a variety of different tools including CS-BLAST as well as PSI-BLAST [1]. The toolkit allows input of a protein sequence and selection of options to customize the analysis, and it can also forward the output to other tools. See also Sequence alignment software Multiple sequence alignment Position-specific scoring matrix BLAST (Basic Local Alignment Search Tool) HH-suite software package References [1] Alva, Vikram, Seung-Zin Nam, Johannes Söding, and Andrei N. Lupas. “The MPI Bioinformatics Toolkit as an Integrative Platform for Advanced Protein Sequence and Structure Analysis.” Nucleic Acids Research 44.Web server Issue (2016): W410-415. NCBI. Web. 2 Nov. 2016. [2] Angermüller, Christof, Andreas Biegert, and Johannes Söding. “Discriminative Modelling of Context-specific Amino Acid Substitution Properties.” BIOINFORMATICS 28.24 (2012): 3240-247. Oxford Journals. Web. 2 Nov. 2016. [3] Altschul, Stephen F., et al.
“Gapped BLAST and PSI-BLAST: A New Generation of Protein Database Search Programs.” Nucleic Acids Research 25.17 (1997): 3389-402. Oxford University Press. Print. [4] Biegert, A., and J. Söding. “Sequence Context-specific Profiles for Homology Searching.” Proceedings of the National Academy of Sciences 106.10 (2009): 3770-3775. PNAS. Web. 23 Oct. 2016. External links CS-BLAST — free server at University of Munich (LMU) CS-BLAST — free server at Max-Planck Institute in Tuebingen CS-BLAST source code Bioinformatics software Computational science
CS-BLAST
Mathematics,Biology
2,197
25,451,462
https://en.wikipedia.org/wiki/SEMAT
SEMAT (Software Engineering Method and Theory) is an initiative to reshape software engineering such that software engineering qualifies as a rigorous discipline. The initiative was launched in December 2009 by Ivar Jacobson, Bertrand Meyer, and Richard Soley with a call for action statement and a vision statement. The initiative was envisioned as a multi-year effort for bridging the gap between the developer community and the academic community and for creating a community giving value to the whole software community. The work is now structured in four different but strongly related areas: Practice, Education, Theory, and Community. The Practice area primarily addresses practices. The Education area is concerned with all issues related to training for both the developers and the academics including students. The Theory area is primarily addressing the search for a General Theory in Software Engineering. Finally, the Community area works with setting up legal entities, creating websites and community growth. It was expected that the Practice area, the Education area and the Theory area would at some point in time integrate in a way of value to all of them: the Practice area would be a "customer" of the Theory area, and direct the research to useful results for the developer community. The Theory area would give a solid and practical platform for the Practice area. And, the Education area would communicate the results in proper ways. Practice area The first step was here to develop a common ground or a kernel including the essence of software engineering – things we always have, always do, always produce when developing software. The second step was envisioned to add value on top of this kernel in the form of a library of practices to be composed to become specific methods, specific for all kinds of reasons such as the preferences of the team using it, kind of software being built, etc. The first step is as of this writing just about to be concluded. The results are a kernel including universal elements for software development – called the Essence Kernel, and a language – called the Essence Language - to describe these elements (and elements built on top of the kernel (practices, methods, and more). Essence, including both the kernel and language, has been published as an OMG standard in beta status in July 2013 and is expected to become a formally adopted standard in early 2014. The second step has just started, and the Practice area will be divided into a number of separate but interconnected tracks: the practice (library track), the tool track are so far identified and work has started or is about to get started. The practice track is currently working on a Users Guide. Education area The area focuses on leveraging the work of SEMAT in software engineering education, both within academia and industry. It promotes global education based on a common ground called Essence. The area's target groups are instructors such as university professors and industrial coaches as well as their students and learning practitioners. The goal of the area is to create educational courses and course materials that are internationally viable, identify pedagogical approaches that are appropriate and effective for specific target groups and disseminate experience and lessons learned. The area includes members from a number of universities and institutes worldwide. Most members have already been involved in leveraging aspects of SEMAT in the context of their software engineering courses. 
They are gathering their resources and starting a common venture towards defining a new generation of SEMAT-powered software engineering curricula. As of 2018, some studies of utilizing Essence in educational settings exist. One example of the use of Essence in university education was a software engineering course carried out in Norwegian University of Science and Technology. A study was conducted by introducing Essence into a project-based software engineering course, with the aim of understanding what difficulties the students faced in using Essence, and whether they considered it to have been useful. The results indicated that Essence could also be useful for novice software engineers by (1) encouraging them to look up and study new practices and methods in order to create their own, (2) encouraging them to adjust their way-of-working reflectively and in a situation-specific manner, (3) helping them structure their way of working. The findings of another study introducing students to Essence through a digital game supported these findings: the students felt that Essence will be useful to them in future, real-world projects, and that they wish to utilize it in them. Theory area An important part of SEMAT is that a general theory of software engineering is planned to emerge with significant benefits. A series of workshops held under the title SEMAT Workshop on a General Theory of Software Engineering (GTSE) are a key component in awareness building around general theories. In addition to community awareness building, SEMAT also aims to contribute with a specific general theory of software engineering. This theory should be solidly based on the SEMAT Essence language and kernel, and should support software engineering practitioners' goal-oriented decision making. As argued elsewhere, such support is predicated on the predictive capabilities of the theory. Thus, the SEMAT Essence should be augmented to allow the prediction of critical software engineering phenomena. The GTSE workshop series assists in the development of the SEMAT general software engineering theory by engaging a larger community in the search for, development of, and evaluation of promising theories, which may be used as a base for the SEMAT theory. Organizational structure Main organization SEMAT is chaired by Sumeet S. Malhotra of Tata Consultancy Services. The CEO of the organization is Ste Nadin of Fujitsu. The Executive Management Committee of SEMAT are Ivar Jacobson, Ste Nadin, Sumeet S. Malhotra, Paul E. McMahon, Michael Goedicke and Cecile Peraire. Japan Chapter Japan Chapter was established in April 2013, and it has more than 250 members as of November 2013. Member activities include carrying out seminars about SEMAT, considering utilization of SEMAT Essence for integrating different requirements engineering techniques and body of knowledges (BoKs), and translating articles into Japanese. Korea Chapter The chapter was inaugurated with about 50 members in October 2013. Member activities include: 2e Consulting started rewriting their IT service engagement methods using the Essence kernel, and uEngine Solutions started developing a tool to orchestrate Essence-kernel based practices into a project method. Korean government supported KAIST to conduct research in Essence. Latin American Chapter Semat Latin American Chapter was created in August 2011 in Medellin (Colombia) by Ivar Jacobson during the Latin American Software Engineering Symposium. 
This Chapter has 9 Executive Committee members from Colombia, Venezuela, Peru, Brazil, Argentina, Chile, and Mexico, chaired by Dr. Carlos Zapata from Colombia. More than 80 people signed the initial declaration of the Chapter and nowadays the Chapter members are in charge of disseminating the Semat ideas in all Latin America. Chapter members have participated in various Latin American conferences, including the Latin American Conference on Informatics (CLEI), the Ibero American Software Engineering and Knowledge Engineering Journeys (JIISIC), the Colombian Computing Conference (CCC), and the Chilean Computing Meeting (ECC). The Chapter contributed in the submission sent in response to the OMG call for proposals and currently studies didactic strategies for teaching the Semat kernel by games, theoretical studies about some kernel elements, and practical representations of several software development and quality methods by using the Semat kernel. Some of the members also translated the Essence book and some other Semat materials and papers into Spanish. Russia Chapter Russian Chapter has about 20 members. A few universities have incorporated SEMAT in their training courses , including Moscow State University, Moscow Institute of Physics and Technology, Higher School of Economics, Moscow State University of Economics, Statistics, and Informatics. The chapter and some commercial companies are carrying out seminars about SEMAT. INCOSE Russian Chapter is working on an extension of SEMAT to Systems Engineering. EC-leasing is working on an extension of the Kernel for Software Life Cycle. Russian Chapter attended in two conferences: Actual Problems of System and Software Engineering and SECR with SEMAT section and articles. Translation of the Essence book into Russian is in progress. Practical Applications of SEMAT Ideas developed by the SEMAT community have been applied by both industry and academia. Notable examples include: Reinsurance company Munich Re have assembled a family of "collaboration models" to cover the whole spectrum of software and application work. Four collaboration models — exploratory, standard, maintenance, and support — have been built on the same kernel from the same set of 12 practices. Tools supporting SEMAT The first tool that supported the authoring and development of SEMAT practices based on a kernel was the EssWork Practice Workbench tool provided by Ivar Jacobson International. The Practice Workbench tool was made available to the SEMAT community in June 2012 and is now publicly available and free to use. The Practice Workbench is an Integrated Practice Development Environment with support for collaborative practice and method development. 
Key features of the Practice Workbench include: Interactive presentation of the Essence Kernel Practice authoring and extension using the Essence Language Method composition Innovative card-based representation Publication of methods, practices and kernels as card-based HTML web-sites Export to the EssWork deployment environment Other publicly available tools supporting SEMAT's Essence include: SematAcc, the Essence Accelerator System, designed to speed up the learning of Essence Theory in Software Engineering and to easily test it with any software project The Essence Board Game, intended to teach the basics of Essence in a fun fashion Essencery, an Open Source alternative for composing methods using the Essence graphical language syntax References External links The SEMAT Initiative: A Call for Action Why We Need a Theory for Software Engineering Methods Need Theory SEMAT - Software Engineering Method and Theory The Essence of Software Engineering: The SEMAT Kernel Software engineering organizations Software engineering Operations research
SEMAT
Mathematics,Technology,Engineering
1,980
27,240,024
https://en.wikipedia.org/wiki/Gudea%20cylinders
The Gudea cylinders are a pair of terracotta cylinders dating to , on which is written in cuneiform a Sumerian myth called the Building of Ningirsu's temple. The cylinders were made by Gudea, the ruler of Lagash, and were found in 1877 during excavations at Telloh (ancient Girsu), Iraq and are now displayed in the Louvre in Paris, France. They are the largest cuneiform cylinders yet discovered and contain the longest known text written in the Sumerian language. Compilation Discovery The cylinders were found in a drain by Ernest de Sarzec under the Eninnu temple complex at Telloh, the ancient ruins of the Sumerian holy city of Girsu, during the first season of excavations in 1877. They were found next to a building known as the Agaren, where a brick pillar (pictured) was found containing an inscription describing its construction by Gudea within Eninnu during the Second Dynasty of Lagash. The Agaren was described on the pillar as a place of judgement, or mercy seat, and it is thought that the cylinders were either kept there or elsewhere in the Eninnu. They are thought to have fallen into the drain during the destruction of Girsu generations later. In 1878 the cylinders were shipped to Paris, France where they remain on display today at the Louvre, Department of Near East antiquities, Richelieu, ground floor, room 2, accession numbers MNB 1511 and MNB 1512. Description The two cylinders were labelled A and B, with A being 61 cm high with a diameter of 32 cm and B being 56 cm with a diameter of 33 cm. The cylinders were hollow with perforations in the centre for mounting. These were originally found with clay plugs filling the holes, and the cylinders themselves filled with an unknown type of plaster. The clay shells of the cylinders are approximately 2.5 to 3 cm thick. Both cylinders were cracked and in need of restoration and the Louvre still holds 12 cylinder fragments, some of which can be used to restore a section of cylinder B. Cylinder A contains thirty columns and cylinder B twenty four. These columns are divided into between sixteen and thirty-five cases per column containing between one and six lines per case. The cuneiform was meant to be read with the cylinders in a horizontal position and is a typical form used between the Akkadian Empire and the Ur III dynasty, typical of inscriptions dating to the 2nd Dynasty of Lagash. Script differences in the shapes of certain signs indicate that the cylinders were written by different scribes. Translations and commentaries Detailed reproductions of the cylinders were made by Ernest de Sarzec in his excavation reports which are still used in modern times. The first translation and transliteration was published by Francois Threau-Dangin in 1905. Another edition with a notable concordance was published by Ira Maurice Price in 1927. Further translations were made by M. Lambert and R. Tournay in 1948, Adam Falkenstein in 1953, Giorgio Castellino in 1977, Thorkild Jacobsen in 1987, and Dietz Otto Edzard in 1997. The latest translation by the Electronic Text Corpus of Sumerian Literature (ETCSL) project was provided by Joachim Krecher with legacy material from Hermann Behrens and Bram Jagersma. Samuel Noah Kramer also published a detailed commentary in 1966 and in 1988. 
Herbert Sauren proposed that the text of the cylinders comprised a ritual play, enactment or pageant that was performed during yearly temple dedication festivities and that certain sections of both cylinders narrate the script and give the ritual order of events of a seven-day festival. This proposition was met with limited acceptance. Composition Interpretation of the text faces substantial limitations for modern scholars, who are not the intended recipients of the information and do not share a common knowledge of the ancient world and the background behind the literature. Irene Winter points out that understanding the story demands "the viewer's prior knowledge and correct identification of the scene – a process of 'matching' rather than 'reading' of imagery itself qua narrative." The hero of the story is Gudea (statue pictured), king of the city-state of Lagash at the end of the third millennium BC. A large quantity of sculpted and inscribed artifacts have survived pertaining to his reconstruction and dedication of the Eninnu, the temple of Ningursu, the patron deity of Lagash. These include foundation nails (pictured), building plans (pictured) and pictorial accounts sculpted on limestone stelae. The temple, Eninnu was a formidable complex of buildings, likely including the E-pa, Kasurra and sanctuary of Bau among others. There are no substantial architectural remains of Gudea's buildings, so the text is the best record of his achievements. Cylinder X Some fragments of another Gudea inscription were found that could not be pieced together with the two in the Louvre. This has led some scholars to suggest that there was a missing cylinder preceding the texts recovered. It has been argued that the two cylinders present a balanced and complete literary with a line at the end of Cylinder A having been suggested by Falkenstein to mark the middle of the composition. This colophon has however also been suggested to mark the cylinder itself as the middle one in a group of three. The opening of cylinder A also shows similarities to the openings of other myths with the destinies of heaven and earth being determined. Various conjectures have been made regarding the supposed contents of an initial cylinder. Victor Hurowitz suggested it may have contained an introductory hymn praising Ningirsu and Lagash. Thorkild Jacobsen suggested it may have explained why a relatively recent similar temple built by Ur-baba (or Ur-bau), Gudea's father-in-law "was deemed insufficient". Cylinder A Cylinder A opens on a day in the distant past when destinies were determined with Enlil, the highest god in the Sumerian pantheon, in session with the Divine Council and looking with admiration at his son Ningirsu (another name for Ninurta) and his city, Lagash. Ningirsu responds that his governor will build a temple dedicated to great accomplishments. Gudea is then sent a dream where a giant man – with wings, a crown, and two lions – commanded him to build the E-ninnu temple. Two figures then appear: a woman holding a gold stylus, and a hero holding a lapis lazuli tablet on which he drew the plan of a house. The hero placed bricks in a brick mold and carrying basket, in front of Gudea – while a donkey gestured impatiently with its hoof. After waking, Gudea could not understand the dream so traveled to visit the goddess Nanse by canal for interpretation of the oracle. Gudea stops at several shrines on the route to make offerings to various other deities. 
Nanse explains that the giant man is her brother Ningirsu, and the woman with the golden stylus is Nisaba goddess of writing, directing him to lay out the temple astronomically aligned with the "holy stars". The hero is Nindub an architect-god surveying the plan of the temple. The donkey was supposed to represent Gudea himself, eager to get on with the building work. Nanse instructs Gudea to build Ningirsu – a decorated chariot with emblem, weapons, and drums, which he does and takes into the temple with "Ushumgalkalama", his minstrel or harp (bull-shaped harp sound-box pictured). He is rewarded with Ningirsu's blessing and a second dream where he is given more detailed instructions of the structure. Gudea then instructs the people of Lagash and gives judgement on the city with a 'code of ethics and morals'. Gudea takes to the work zealously and measures the building site, then lays the first brick in a festive ritual. Materials for the construction are brought from over a wide area including Susa, Elam, Magan Meluhha and Lebanon. Cedars of Lebanon are apparently floated down from Lebanon on the Euphrates and the "Iturungal" canal to Girsu. He is then sent a third dream revealing the different form and character of the temples. The construction of the structure is then detailed with the laying of the foundations, involving participation from the Anunnaki including Enki, Nanse, and Bau. Different parts of the temple are described along with its furnishings and the cylinder concludes with a hymn of praise to it. Lines 738 to 758 describe the house being finished with "kohl" and a type of plaster from the "edin" canal: Thorkild Jacobsen considered this "Idedin" canal referred to an unidentified "Desert Canal", which he considered "probably refers to an abandoned canal bed that had filled with the characteristic purplish dune sand still seen in southern Iraq." Cylinder B The second cylinder begins with a narrative hymn starting with a prayer to the Anunnaki. Gudea then announces the house ready for the accommodation of Ningirsu and his wife Bau. Food and drink are prepared, incense is lit and a ceremony is organized to welcome the gods into their home. The city is then judged again and a number of deities are appointed by Enki to fill various positions within the structure. These include a gatekeeper, bailiff, butler, chamberlain, coachman, goatherd, gamekeeper, grain and fisheries inspectors, musicians, armourers and a messenger. After a scene of sacred marriage between Ningirsu and Bau, a seven-day celebration is given by Gudea for Ningirsu with a banquet dedicated to Anu, Enlil and Ninmah (Ninhursag), the major gods of Sumer, who are all in attendance. The text closes with lines of praise for Ningirsu and the Eninnu temple. The building of Ningirsu's temple The modern name for the myth contained on both cylinders is "The building of Ningirsu's temple". Ningirsu was associated with the yearly spring rains, a force essential to early irrigation agriculture. Thorkild Jacobsen describes the temple as an intensely sacred place and a visual assurance of the presence of the god in the community, suggesting the structure was "in a mystical sense, one with him." The element "Ninnu" in the name of the temple "E-Ninnu" is a name of Ningirsu with the full form of its name, "E-Ninnu-Imdugud-babbara" meaning "house Ninnu, the flashing thunderbird". It is directly referred to as thunderbird in Gudea's second dream and in his blessing of it. 
Later use Preceded by the Kesh temple hymn, the Gudea cylinders are one of the first ritual temple building stories ever recorded. The style, traditions and format of the account has notable similarities to those in the Bible such as the building of the tabernacle of Moses in and . Victor Hurowitz has also noted similarities to the later account of the construction of Solomon's temple in 1 Kings 6:1–38, 1 Kings Chapter 7, and Chapter 8 and in the Book of Chronicles. See also Atra-Hasis Babylonian literature Barton Cylinder Deluge (mythology) Debate between sheep and grain Debate between bird and fish Debate between Winter and Summer Enlil and Ninlil Eridu Genesis Gilgamesh flood myth Hymn to Enlil Kesh temple hymn Lament for Ur Old Babylonian oracle Self-praise of Shulgi (Shulgi D) Sumerian literature Sumerian religion The Garden of Eden References Further reading Edzard, D.O., Gudea and His Dynasty (The Royal Inscriptions of Mesopotamia. Early Periods, 3, I). Toronto/Buffalo/London: University of Toronto Press, 68–101, 1997. Falkenstein, Adam, Grammatik der Sprache Gudeas von Lagas, I-II (Analecta Orientalia, 29–30). Roma: Pontificium Institutum Biblicum, 1949–1950. Falkenstein, Adam – von Soden, Wolfram, Sumerische und akkadische Hymnen und Gebete.Zürich/Stuttgart: Artemis, 192–213, 1953. Jacobsen, Th., The Harps that Once ... Sumerian Poetry in Translation. New Haven/London: Yale University Press, 386–444: 1987. Suter, C.E., "Gudeas vermeintliche Segnungen des Eninnu", Zeitschrift für Assyriologie 87, 1–10: partial source transliteration, partial translation, commentaries, 1997. Witzel, M., Gudea. Inscriptiones: Statuae A-L. Cylindri A & B. Roma: Pontificio Isituto Biblico, fol. 8–14,1, 1932. External links Louvre – The Gudea Cylinders Cylinder A – The building of Ningirsu's temple., Black, J.A., Cunningham, G., Robson, E., and Zólyomi, G., The Electronic Text Corpus of Sumerian Literature, Oxford 1998–. Cylinder B – The building of Ningirsu's temple., Black, J.A., Cunningham, G., Robson, E., and Zólyomi, G., The Electronic Text Corpus of Sumerian Literature, Oxford 1998–. Composite text of Cylinder A: "The building of Ningirsu's temple, The Electronic Text Corpus of Sumerian Literature, Oxford 1998–. CDLI Full transcription of Cylinder A CDLI Full transcription of Cylinder B Composite text of Cylinder B: "The building of Ningirsu's temple, The Electronic Text Corpus of Sumerian Literature, Oxford 1998–. Bibliography – The Electronic Text Corpus of Sumerian Literature, Oxford 1998–. 22nd-century BC literature 1877 archaeological discoveries Ancient Near and Middle East clay objects Sumerian literature Clay tablets Mesopotamian mythology Creation myths Religious cosmologies Near Eastern and Middle Eastern antiquities in the Louvre Archaeological discoveries in Iraq Lagash Terracotta
Gudea cylinders
Astronomy
2,978
3,251,788
https://en.wikipedia.org/wiki/Lithol%20Rubine%20BK
Lithol Rubine BK is a reddish synthetic azo dye. It has the appearance of a red powder and is magenta when printed. It is slightly soluble in hot water, insoluble in cold water, and insoluble in ethanol. When dissolved in dimethylformamide, its absorption maximum lies at about 442 nm. It is usually supplied as a calcium salt. It is prepared by azo coupling with 3-hydroxy-2-naphthoic acid. It is used to dye plastics, paints, printing inks, and for textile printing. It is normally used as a standard magenta in the three and four color printing processes. When used as a food dye, it has E number E180. It is used to color cheese rind, and it is a component in some lip balms. References Azo dyes Food colorings Inks Calcium compounds Benzenesulfonates 2-Naphthols Naphthoic acids E-number additives
Lithol Rubine BK
Chemistry
208
51,933,299
https://en.wikipedia.org/wiki/Limbatustoxin
Limbatustoxin (LbTX; α-KTx 1.4) is an ion channel toxin from the venom of the Centruroides limbatus scorpion. This toxin is a selective blocker of BK channels, calcium-activated potassium channels. Etymology and source Limbatustoxin is purified from the venom of the Centruroides limbatus, a bark scorpion that lives in Central America. Chemistry Limbatustoxin (LbTX; α-KTx 1.4) is a 37-amino acid peptide, which belongs to the α-KTx 1.x subfamily, a group of short peptides consisting of 36-37 amino acid residues and three disulfide bridges. LbTX displays 57% sequence homology with charybdotoxin and 70% sequence homology with iberiotoxin. LbTX contains a β-sheet formed by three anti-parallel β-strands on one side of the molecule and a helix on the other side. This structure is important for binding to BK channels. Target and mode of action Limbatustoxin is highly selective for calcium-activated potassium channels, also called maxi-K channels, slo1 or BK (big potassium) channels. These channels play an important role in the excitability of neurons and the control of muscle contractions. Residues on the β-sheet face of the helix as well as residues in the turn between the helix and the second anti-parallel strand and in the second and third strands of the β-sheet are critical for the binding of the toxin to the BK channel. When binding to the channel, limbatustoxin is known to block and inhibit the function of the BK channel. It seems likely that limbatustoxin modifies the gating mechanism of the channel, because the toxin binds to the β-subunit of the channel, which has a modulating function on the gating of the channel. Based on the 70% homology with iberiotoxin, it seems likely that limbatustoxin selectively inhibits the current through the BK channel by decreasing the probability of opening and the time that the channel is open. Toxicity The venom causes local burning pain and systemic symptoms, such as paresthesias, flushing, hypertension and wheezing. It is not considered dangerous to humans. References Ion channel toxins Scorpion toxins Toxins
Limbatustoxin
Environmental_science
486
38,210,237
https://en.wikipedia.org/wiki/32%20Pegasi
32 Pegasi is a binary star system in the northern constellation of Pegasus. It is visible to the naked eye as a faint, blue-white hued point of light with an apparent visual magnitude of 4.81. The system is located approximately 560 light years away from the Sun based on parallax, and is drifting further away with a radial velocity of +11.4 km/s. The brighter member of this system, designated component Aa, has visual magnitude 4.83 with a stellar classification of B9III, matching a late B-type star with the luminosity class of a giant. It is spinning with a projected rotational velocity of 60 km/s, and is radiating 541 times the luminosity of the Sun from its photosphere at an effective temperature of 11,403 K. The fainter secondary, component Ab, is of magnitude 8.86 with an angular separation of along a position angle of 288° from the primary, as of 2005. Visual companions include component B, at a 70.7″ separation from the primary and magnitude 10.73; C, at a separation from B of 3.2″ and magnitude 12.4; as well as D (separation from A of 42.8″ and magnitude 11.9) and E (separation from A of 58.3″ and magnitude 11.9). References B-type giants Binary stars Pegasus (constellation) BD+27 4299 Pegasi, 32 212097 110371 8522
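As a rough consistency check on the figures quoted above, the sketch below relates the apparent magnitude and the parallax-based distance through the standard distance modulus. It uses only the article's quoted values, ignores interstellar extinction and bolometric corrections, and the implied parallax is derived here rather than taken from a catalogue.

```python
import math

# Relating the quoted apparent magnitude and distance of 32 Pegasi through
# the standard distance modulus m - M = 5*log10(d_pc) - 5. This is only a
# rough check; extinction and bolometric corrections are ignored.

apparent_mag = 4.81          # apparent visual magnitude (from the article)
distance_ly = 560.0          # distance in light years (from the article)

LY_PER_PARSEC = 3.2616
distance_pc = distance_ly / LY_PER_PARSEC

absolute_mag = apparent_mag - 5 * math.log10(distance_pc) + 5
parallax_mas = 1000.0 / distance_pc   # implied parallax in milliarcseconds

print(f"distance         ≈ {distance_pc:.0f} pc")
print(f"implied parallax ≈ {parallax_mas:.1f} mas")
print(f"absolute V mag   ≈ {absolute_mag:+.2f}")
```

The resulting absolute visual magnitude of roughly -1.4 is in the range expected for a late B-type giant, consistent with the classification and luminosity quoted above.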
32 Pegasi
Astronomy
308
32,688,617
https://en.wikipedia.org/wiki/MChip
mChip is a portable blood test device which is capable of diagnosing an HIV or syphilis infection within 15 minutes and could be used effectively against HIV/AIDS in developing countries. The mChip costs about US$ 1 and the entire diagnostic kit costs about US$ 100. mChip was developed so that people in regions with poor health facilities can access portable diagnosis for HIV/AIDS rather than travelling long distances to go to clinics for diagnosis. Background A lateral flow test is one of the blood testing methods used, in which a blood sample or oral fluid is placed on a strip of paper. In this method, a colored band indicates infection. People in less developed regions like Sub-Saharan Africa are adversely affected by HIV/AIDS and have very limited access to clinical labs or hospitals. Estimates indicate that there are about 22.5 million people with HIV/AIDS in such regions, and hence there is a high demand for portable blood test devices. Hence devices like mChip will be able to diagnose HIV/AIDS in such regions. Development mChip was developed by scientists at Columbia University in New York City. Initial testing of this device was undertaken in a village in Rwanda, where, according to the World Health Organization, approximately 3 percent of the population has HIV/AIDS. Of the 400 volunteers who turned up for testing, 399 were correctly diagnosed with an accuracy of nearly 100 percent. mChip was also tested for its effectiveness in diagnosing syphilis, where, out of the 67 volunteers who turned up for testing, 63 were correctly diagnosed with an accuracy of nearly 94 percent. The mChip resembles a credit card in appearance and is estimated to cost just US$ 1. Operation The operation of mChip is similar to that of ELISA (Enzyme-Linked Immunosorbent Assay). The ELISA can be performed to evaluate either the presence of antigen or the presence of antibody in a sample. It is a useful tool for determining serum antibody concentrations such as with the HIV test. The mChip contains 10 zones which detect the passage of a small amount (about 1μl) of blood. The results can be obtained in a color-coded format in about 15 minutes. See also Blood test HIV test BDNA test Pregnancy test Reference ranges for blood tests Schumm test References Blood tests
MChip
Chemistry
473
45,197,429
https://en.wikipedia.org/wiki/Modified%20aldol%20tandem%20reaction
A modified aldol tandem reaction is a sequential chemical transformation that combines an aldol reaction with other chemical reactions that generate enolates. Enolates are a common building block in chemical syntheses and are typically formed by the addition of base to a ketone or aldehyde. Modified aldol tandem reactions allow similar reactivity to be produced without the need for a base, which may have adverse effects in a given chemical synthesis. A representative example is the decarboxylative aldol reaction (Figure "Modified aldol tandem reaction, decarboxylative aldol reaction as an example"), where the enolate is generated via a decarboxylation reaction mediated by either transition metals or organocatalysts. A key advantage of this reaction over other types of aldol reaction is the selective generation of an enolate in the presence of aldehydes. This allows for the directed aldol reaction to produce a desired cross aldol. Transition metals have been used to mediate the modified aldol tandem reaction. Allyl β-keto carboxylates can be used as substrates for the palladium-mediated decarboxylative aldol reaction (Figure "Palladium-mediated decarboxylative aldol reaction with allyl β-keto carboxylates"). The allyl group can be removed by palladium; the following decarboxylation reaction selectively generates the enolate at the β-keto group, which can further react with an aldehyde to generate aldols. Using a decarboxylation reaction to generate an enolate is a common strategy in biosynthetic pathways such as polyketide synthesis, where a malonic acid half thioester can be converted to the corresponding enolate for the Claisen condensation reaction. Inspired by this, a modified tandem aldol reaction has been developed using the malonic acid half thioester as the enolate source. A copper-based catalyst system has been developed for efficient aldol generation under mild conditions (Figure "Decarboxylative aldol reaction with malonic acid half thioester"). References Organic reactions
Modified aldol tandem reaction
Chemistry
435
2,238,152
https://en.wikipedia.org/wiki/Association%20scheme
The theory of association schemes arose in statistics, in the theory of experimental design for the analysis of variance. In mathematics, association schemes belong to both algebra and combinatorics. In algebraic combinatorics, association schemes provide a unified approach to many topics, for example combinatorial designs and the theory of error-correcting codes. In algebra, association schemes generalize groups, and the theory of association schemes generalizes the character theory of linear representations of groups. Definition An n-class association scheme consists of a set X together with a partition S of X × X into n + 1 binary relations, R0, R1, ..., Rn which satisfy: R0 = {(x, x) : x ∈ X}; it is called the identity relation. Defining R* = {(y, x) : (x, y) ∈ R}, if R in S, then R* in S. If (x, y) ∈ Rk, the number of z ∈ X such that (x, z) ∈ Ri and (z, y) ∈ Rj is a constant p^k_ij depending on i, j, k, but not on the particular choice of x and y. An association scheme is commutative if p^k_ij = p^k_ji for all i, j and k. Most authors assume this property. Note, however, that while the notion of an association scheme generalizes the notion of a group, the notion of a commutative association scheme only generalizes the notion of a commutative group. A symmetric association scheme is one in which each Ri is a symmetric relation. That is: if (x, y) ∈ Ri, then (y, x) ∈ Ri. (Or equivalently, Ri* = Ri.) Every symmetric association scheme is commutative. Two points x and y are called i th associates if (x, y) ∈ Ri. The definition states that if x and y are i th associates then so are y and x. Every pair of points are i th associates for exactly one i. Each point is its own zeroth associate while distinct points are never zeroth associates. If x and y are k th associates then the number of points z which are both i th associates of x and j th associates of y is a constant p^k_ij. Graph interpretation and adjacency matrices A symmetric association scheme can be visualized as a complete graph with labeled edges. The graph has v vertices, one for each point of X, and the edge joining vertices x and y is labeled i if x and y are i th associates. Each edge has a unique label, and the number of triangles with a fixed base labeled k having the other edges labeled i and j is a constant p^k_ij, depending on i, j, k but not on the choice of the base. In particular, each vertex is incident with exactly ni edges labeled i; ni is the valency of the relation Ri. There are also loops labeled 0 at each vertex x, corresponding to R0. The relations are described by their adjacency matrices. Ai is the adjacency matrix of Ri for i = 0, ..., n and is a v × v matrix with rows and columns labeled by the points of X. The definition of a symmetric association scheme is equivalent to saying that the Ai are v × v (0,1)-matrices which satisfy I. Ai is symmetric, II. A0 + A1 + ... + An = J (the all-ones matrix), III. A0 = I, IV. AiAj = Σk p^k_ij Ak = AjAi. The (x, y)-th entry of the left side of (IV) is the number of paths of length two between x and y with labels i and j in the graph. Note that the rows and columns of Ai contain ni 1's: AiJ = JAi = niJ. Terminology The numbers p^k_ij are called the parameters of the scheme. They are also referred to as the structural constants. History The term association scheme is due to but the concept is already inherent in . These authors were studying what statisticians have called partially balanced incomplete block designs (PBIBDs). The subject became an object of algebraic interest with the publication of and the introduction of the Bose–Mesner algebra. The most important contribution to the theory was the thesis of P. Delsarte who recognized and fully used the connections with coding theory and design theory. Generalizations have been studied by D. G. Higman (coherent configurations) and B. Weisfeiler (distance regular graphs).

Basic facts p^j_i0 = δij, i.e., if i ≠ j then p^j_i0 = 0, and the only i such that p^j_i0 ≠ 0 is i = j. Σj p^k_ij = ni; this is because the relations R0, ..., Rn partition X × X. The Bose–Mesner algebra The adjacency matrices A0, A1, ..., An of the graphs generate a commutative and associative algebra M (over the real or complex numbers) both for the matrix product and the pointwise product. This associative, commutative algebra is called the Bose–Mesner algebra of the association scheme. Since the matrices in M are symmetric and commute with each other, they can be diagonalized simultaneously. Therefore, M is semi-simple and has a unique basis of primitive idempotents E0, E1, ..., En. There is another algebra of matrices which is isomorphic to M, and is often easier to work with. Examples The Johnson scheme, denoted by J(v, k), is defined as follows. Let S be a set with v elements. The points of the scheme J(v, k) are the subsets of S with k elements. Two k-element subsets A, B of S are i th associates when their intersection has size k − i. The Hamming scheme, denoted by H(n, q), is defined as follows. The points of H(n, q) are the q^n ordered n-tuples over a set of size q. Two n-tuples x, y are said to be i th associates if they disagree in exactly i coordinates. E.g., if x = (1,0,1,1), y = (1,1,1,1), z = (0,0,1,1), then x and y are 1st associates, x and z are 1st associates and y and z are 2nd associates in H(4,2). A distance-regular graph, G, forms an association scheme by defining two vertices to be i th associates if their distance is i. A finite group G yields an association scheme on X = G, with a class Rg for each group element, as follows: for each g ∈ G let Rg = {(x, y) : y = xg}, where xg denotes the group operation. The class of the group identity is R0. This association scheme is commutative if and only if G is abelian. A specific 3-class association scheme: Let A(3) be the following association scheme with three associate classes on the set X = {1,2,3,4,5,6}. The (i, j) entry is s if elements i and j are in relation Rs. Coding theory The Hamming scheme and the Johnson scheme are of major significance in classical coding theory. In coding theory, association scheme theory is mainly concerned with the distance of a code. The linear programming method produces upper bounds for the size of a code with given minimum distance, and lower bounds for the size of a design with a given strength. The most specific results are obtained in the case where the underlying association scheme satisfies certain polynomial properties; this leads one into the realm of orthogonal polynomials. In particular, some universal bounds are derived for codes and designs in polynomial-type association schemes. In classical coding theory, dealing with codes in a Hamming scheme, the MacWilliams transform involves a family of orthogonal polynomials known as the Krawtchouk polynomials. These polynomials give the eigenvalues of the distance relation matrices of the Hamming scheme. See also Block design Bose–Mesner algebra Combinatorial design Notes References Design of experiments Analysis of variance Algebraic combinatorics Representation theory
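The defining conditions above can be checked directly on a small example. The following sketch is not part of the original article; it assumes NumPy is available and uses the small Hamming scheme H(3, 2) for brevity (the article's worked example uses H(4, 2), which works the same way). It builds the adjacency matrices Ai and verifies conditions I–IV numerically, recovering the intersection numbers p^k_ij along the way.

```python
# Build the Hamming scheme H(3, 2) and check the adjacency-matrix identities I-IV.
import itertools
import numpy as np

n, q = 3, 2                      # H(3, 2): points are binary 3-tuples
points = list(itertools.product(range(q), repeat=n))
v = len(points)                  # v = q**n = 8

def hamming(x, y):
    """Number of coordinates in which x and y disagree (the associate class)."""
    return sum(a != b for a, b in zip(x, y))

# Adjacency matrices A_0, ..., A_n: (A_i)[x][y] = 1 iff x and y are i-th associates.
A = [np.array([[1 if hamming(x, y) == i else 0 for y in points] for x in points])
     for i in range(n + 1)]

J = np.ones((v, v), dtype=int)
assert all(np.array_equal(Ai, Ai.T) for Ai in A)       # I:   each A_i is symmetric
assert np.array_equal(sum(A), J)                        # II:  the A_i sum to the all-ones matrix
assert np.array_equal(A[0], np.eye(v, dtype=int))       # III: A_0 is the identity

# IV: A_i A_j = sum_k p^k_ij A_k with constant intersection numbers p^k_ij.
p = np.zeros((n + 1, n + 1, n + 1), dtype=int)
for i in range(n + 1):
    for j in range(n + 1):
        prod = A[i] @ A[j]
        for k in range(n + 1):
            entries = prod[A[k] == 1]
            assert len(set(entries)) == 1               # constant on each class R_k
            p[k, i, j] = entries[0]
        assert np.array_equal(prod, sum(p[k, i, j] * A[k] for k in range(n + 1)))

print(p[1])   # the matrix (p^1_ij) of intersection numbers for the class R_1
```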
Association scheme
Mathematics
1,503
10,499,099
https://en.wikipedia.org/wiki/NGC%205824
NGC 5824 is a globular cluster in the constellation Lupus, almost on its western border with Centaurus. Astronomers James Dunlop (1826), John Herschel (1831) and E.E. Barnard (1882) all claim to have independently discovered the cluster. It is condensed and may be observed with small telescopes, but larger apertures are required to resolve its stellar core. A stellar stream, known as the Triangulum stellar stream, is thought to have originated from NGC 5824. It is located quite far from NGC 5824 and is part of its leading tail. Meanwhile, its trailing tail has also been detected, spanning about 50 degrees through the sky. References External links Globular clusters Lupus (constellation) 5824
NGC 5824
Astronomy
152
4,635,792
https://en.wikipedia.org/wiki/Dexfenfluramine
Dexfenfluramine, formerly sold under the brand name Redux, is a serotonergic drug that was used as an appetite suppressant to promote weight loss. It is the d-enantiomer of fenfluramine and is structurally similar to amphetamine, but lacks any psychologically stimulating effects. Dexfenfluramine was, for some years in the mid-1990s, approved by the United States Food and Drug Administration (FDA) for the purposes of weight loss. However, following multiple concerns about its cardiovascular side effects, the FDA withdrew the approval in 1997. After it was removed from the US market, dexfenfluramine was also withdrawn from other global markets. It was later superseded by sibutramine, which, although initially considered a safer alternative to both dexfenfluramine and fenfluramine, was likewise removed from the US market in 2010. The drug was developed by Interneuron Pharmaceuticals, a company co-founded by Richard Wurtman, aimed at marketing discoveries by Massachusetts Institute of Technology scientists. Interneuron licensed the patent to Wyeth-Ayerst Laboratories. Although some optimism prevailed at the time of its release that it might herald a new approach, there remained some reservations amongst neurologists, twenty-two of whom petitioned the FDA to delay approval. Their concern was based on the work of George A. Ricaurte, whose techniques and conclusions were later questioned. See also Benfluorex Fenfluramine Levofenfluramine Norfenfluramine References External links Drug description Dexfenfluramine hydrochloride Questions and Answers about Withdrawal of Fenfluramine (Pondimin) and Dexfenfluramine (Redux) Frontline: Dangerous prescriptions—Interview with Leo Lutwak, M.D. in which he discusses the side effects of fenfluramine, its successor Redux, and the Fen-Phen combination 5-HT2B agonists Cardiotoxins Enantiopure drugs Serotonin receptor agonists Serotonin releasing agents Substituted amphetamines Trifluoromethyl compounds Withdrawn anti-obesity drugs Withdrawn drugs
Dexfenfluramine
Chemistry
451
29,119,741
https://en.wikipedia.org/wiki/Axes%20conventions
In ballistics and flight dynamics, axes conventions are standardized ways of establishing the location and orientation of coordinate axes for use as a frame of reference. Mobile objects are normally tracked from an external frame considered fixed. Other frames can be defined on those mobile objects to deal with relative positions for other objects. Finally, attitudes or orientations can be described by a relationship between the external frame and the one defined over the mobile object. The orientation of a vehicle is normally referred to as attitude. It is described normally by the orientation of a frame fixed in the body relative to a fixed reference frame. The attitude is described by attitude coordinates, and consists of at least three coordinates. While from a geometrical point of view the different methods to describe orientations are defined using only some reference frames, in engineering applications it is important also to describe how these frames are attached to the lab and the body in motion. Due to the special importance of international conventions in air vehicles, several organizations have published standards to be followed. For example, German DIN has published the DIN 9300 norm for aircraft (adopted by ISO as ISO 1151–2:1985). Earth bounded axes conventions World reference frames: ENU and NED Basically, as lab frame or reference frame, there are two kinds of conventions for the frames: East, North, Up (ENU), used in geography North, East, Down (NED), used especially in aerospace These frames are referenced with respect to global reference frames such as the Earth-centered, Earth-fixed (ECEF) non-inertial system. World reference frames for attitude description To establish a standard convention to describe attitudes, it is required to establish at least the axes of the reference system and the axes of the rigid body or vehicle. When an ambiguous notation system is used (such as Euler angles) the convention used should also be stated. Nevertheless, most used notations (matrices and quaternions) are unambiguous. Tait–Bryan angles are often used to describe a vehicle's attitude with respect to a chosen reference frame, though any other notation can be used. The positive x-axis in vehicles always points in the direction of movement. For the positive y- and z-axes, there are two different conventions: In the case of land vehicles like cars, tanks etc., which use the ENU-system (East-North-Up) as external reference (World frame), the vehicle's (body's) positive y- or pitch axis always points to its left, and the positive z- or yaw axis always points up. World frame's origin is fixed at the center of gravity of the vehicle. By contrast, in the case of air and sea vehicles like submarines, ships, airplanes etc., which use the NED-system (North-East-Down) as external reference (World frame), the vehicle's (body's) positive y- or pitch axis always points to its right, and its positive z- or yaw axis always points down. World frame's origin is fixed at the center of gravity of the vehicle. 
Finally, in the case of space vehicles like the Space Shuttle etc., a modification of the latter convention is used, where the vehicle's (body's) positive y- or pitch axis again always points to its right, and its positive z- or yaw axis always points down, but “down” now may have two different meanings: If a so-called local frame is used as external reference, its positive z-axis points “down” to the center of the Earth as it does in the case of the earlier mentioned NED-system, but if the inertial frame is used as reference, its positive z-axis will point now to the north celestial pole, and its positive x-axis to the Vernal Equinox or some other reference meridian. Frames mounted on vehicles Especially for aircraft, these frames do not need to agree with the earth-bound frames in the up-down line. It must be agreed what ENU and NED mean in this context. Conventions for land vehicles For land vehicles it is rare to describe their complete orientation, except when speaking about electronic stability control or satellite navigation. In this case, the convention is normally the one of the adjacent drawing, where RPY stands for roll-pitch-yaw. Conventions for sea vehicles As for aircraft, the same terminology is used for the motion of ships and boats. Some words commonly used were introduced in maritime navigation. For example, the yaw angle, or heading, has a nautical origin, with the meaning of "bending out of the course". Etymologically, it is related to the verb 'to go'. It is related to the concept of bearing. It is typically assigned the shorthand notation ψ. Conventions for aircraft local reference frames Coordinates to describe an aircraft attitude (Heading, Elevation and Bank) are normally given relative to a reference control frame located in a control tower, and therefore ENU, relative to the position of the control tower on the Earth's surface. Coordinates to describe observations made from an aircraft are normally given relative to its intrinsic axes, but normally using as positive the coordinate pointing downwards, where the interesting points are located. Therefore, they are normally NED. These axes are normally taken so that the X axis is the longitudinal axis pointing ahead, the Z axis is the vertical axis pointing downwards, and the Y axis is the lateral one, pointing in such a way that the frame is right-handed. The motion of an aircraft is often described in terms of rotation about these axes, so rotation about the X-axis is called rolling, rotation about the Y-axis is called pitching, and rotation about the Z-axis is called yawing. Frames for space navigation For satellites orbiting the Earth it is normal to use the Equatorial coordinate system. The projection of the Earth's equator onto the celestial sphere is called the celestial equator. Similarly, the projections of the Earth's north and south geographic poles become the north and south celestial poles, respectively. Deep space satellites use other celestial coordinate systems, such as the Ecliptic coordinate system. Local conventions for space ships as satellites If the goal is to keep the shuttle during its orbits in a constant attitude with respect to the sky, e.g. in order to perform certain astronomical observations, the preferred reference is the inertial frame, and the RPY angle vector (0|0|0) describes an attitude then, where the shuttle's wings are kept permanently parallel to the Earth's equator, its nose points permanently to the vernal equinox, and its belly towards the northern polar star (see picture). 
(Note that rockets and missiles more commonly follow the conventions for aircraft where the RPY angle vector (0|0|0) points north, rather than toward the vernal equinox). On the other hand, if the goal is to keep the shuttle during its orbits in a constant attitude with respect to the surface of the Earth, the preferred reference will be the local frame, with the RPY angle vector (0|0|0) describing an attitude where the shuttle's wings are parallel to the Earth's surface, its nose points to its heading, and its belly down towards the centre of the Earth (see picture). Frames used to describe attitudes Normally the frames used to describe a vehicle's local observations are the same frames used to describe its attitude with respect to the ground tracking stations, i.e. if an ENU frame is used in a tracking station, ENU frames are also used onboard, and these frames are also used to reference local observations. An important case in which this does not apply is aircraft. Aircraft observations are performed downwards and therefore the NED axes convention normally applies. Nevertheless, when attitudes with respect to ground stations are given, a relationship between the local earth-bound frame and the onboard ENU frame is used. See also Attitude dynamics and control (spacecraft) Euler's rotation theorem Gyroscope Triad Method Rotation formalisms in three dimensions Geographic coordinate system Astronomical coordinate systems References Euclidean symmetries Rotation in three dimensions
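As a concrete illustration of the ENU and NED world frames described above, the following minimal sketch (not part of the original article) converts a vector between the two conventions; the relationship is simply an axis swap with a sign change on the vertical component.

```python
# Minimal sketch of the ENU <-> NED relationship as a fixed permutation/sign-flip.
import numpy as np

# Rows pick (North, East, Down) out of an (East, North, Up) vector.
ENU_TO_NED = np.array([[0, 1, 0],
                       [1, 0, 0],
                       [0, 0, -1]])

def enu_to_ned(v_enu):
    """Convert a vector expressed in ENU axes to NED axes."""
    return ENU_TO_NED @ np.asarray(v_enu)

def ned_to_enu(v_ned):
    """The same matrix converts back: it is symmetric, orthogonal, and its own inverse."""
    return ENU_TO_NED @ np.asarray(v_ned)

# Example: a velocity of 3 m/s east, 4 m/s north, 1 m/s up ...
print(enu_to_ned([3.0, 4.0, 1.0]))   # -> [ 4.  3. -1.]  (north, east, down)
```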
Axes conventions
Physics,Mathematics
1,653
33,664,739
https://en.wikipedia.org/wiki/Centre%20for%20Nanoscience%20and%20Quantum%20Information
The Centre for Nanoscience and Quantum Information (abbreviated NSQI) is a research center within the University of Bristol. The center opened in 2009 and was initially intended to serve multiple institutions; however, it was eventually absorbed into the School of Physics of the University of Bristol in 2016. The building was designed to provide a unique ultra-low-vibration research space, with some claims calling it "the quietest building in the world". The Building Building layout The building has four floors, housing the following facilities: The basement is for the most audio-sensitive experimental tasks. It houses seven low noise labs, two ultra-low noise labs, an anechoic chamber, a class 1000 cleanroom, and three prep labs. The ground floor houses Quantum Information labs, staff offices, a seminar room, and a foyer. The first floor houses offices for researchers, students and operations staff associated with QETLabs, the QE-CDT and affiliated quantum technologies groups. The second Floor houses a Quantum Engineering Technology lab, a main inter-disciplinary communal lab that includes a clean room area, several annex labs, and offices. The third floor is a maintenance floor, housing building infrastructure (water, electricity, steam, etc.). Building design features The building was designed by Architect Percy Thomas of Capita Architecture in 2004 and built by Willmott Dixon. The criteria set for the research space exceeded any standard vibration criterion curve, and required significant design and engineering solutions. Low vibrations The primary source of noise for researchers at the nanoscale is mechanical vibration. Activities within a building generate noise that can travel through the structure and vibrations created outside (such as from road traffic) can travel through the ground and enter the building. The following methods were employed to reduce vibration generation and penetration into the lab space: The main structure of the building is massive. It is built atop 2 meter thick concrete foundations and has thick concrete floors, measuring one-half meter of thickness. All plant machinery is housed on the third floor, as far from the low-noise lab space as possible. All services and plant machinery is suspended on springs, rubber pads or damper pads to reduce coupling between the mechanism and the building. All services are balanced to reduce turbulence within pipe and ductwork. All corridors are floating, separate from the main structure, stopping vibrations crossing the floor and major foot traffic from affecting the building. The lift shaft is decoupled from the building structure. The building is decoupled from the building next door. All services pass through a flexible hose coupling before entering the low noise labs. All Low noise labs have a seven tonne concrete isolation block set on damper pads, within the ground slab. Both ultra-low noise labs have either a 23-tonne or 27-tonne concrete isolation block supported by pneumatic rams. The block is T-shaped in cross-section, to keep the center of gravity lower (reducing wobble within the block). The block is surrounded by a floating floor to allow researchers to inhabit the room without disturbing their measuring equipment. Labs have an adjacent control room. Control equipment can be removed from the labs and installed in the neighboring control room. The control room has its own isolation block and is heavily soundproofed. 
Conduits allow cables to run between the labs, allowing experiments to be overseen from control rooms for their duration. Soundproofing Experimental rooms are far from the busy University precinct, dug underground and in an area that is not used for teaching or as thoroughfare routes. The thickness of the floor ensures that little sound penetrates across, and the walls between labs and doors of the labs are soundproof. Plant machinery being located on the top floor further reduces noise, and the services are tuned as precisely as possible to reduce any sounds from the water supply, chilled water system or air vents. Low electrical noise Many of the experiments in the center involve recording small electrical currents (as low as a few picoamps). External electrical noise disturbs the measurements. Each basement research lab is designed as a Faraday cage, and all service pipework changes to plastic before entering the lab. For the data network, optical fiber is used instead of copper cabling. All labs are also supplied with an independent earth contact and 'clean' power supply, the mains having been filtered by a 1:1 transformer. References University of Bristol Quantum information science Nanotechnology institutions
Centre for Nanoscience and Quantum Information
Materials_science
892
54,651,190
https://en.wikipedia.org/wiki/Para-Nitroblebbistatin
para-Nitroblebbistatin is a non-phototoxic, photostable myosin inhibitor with low fluorescence. Its myosin inhibitory properties are very similar to those of blebbistatin. Myosin specificity Applications para-Nitroblebbistatin has been successfully used in fluorescent imaging experiments involving myosin IIA-GFP expressing live dendritic cells and synaptophysin-pHluorin expressing live neurons. References Imaging Tertiary alcohols Nitrogen heterocycles Ketones 4-Nitrophenyl compounds Heterocyclic compounds with 3 rings
Para-Nitroblebbistatin
Chemistry
129
14,655,670
https://en.wikipedia.org/wiki/Oil%20megaprojects%20%282003%29
This page summarizes projects that brought more than of new liquid fuel capacity to market with the first production of fuel beginning in 2003. This is part of the Wikipedia summary of Oil Megaprojects—see that page for further details. 2003 saw 30 projects come on stream with an aggregate capacity of when full production was reached (which may not have been in 2003). Quick Links to Other Years Detailed Project Table for 2003 See also 2003 world oil market chronology References 2003 Oil fields Proposed energy projects Projects established in 2003 2003 in the environment 2003 in technology
Oil megaprojects (2003)
Engineering
112
75,428,852
https://en.wikipedia.org/wiki/Adebrelimab
Adebrelimab is a drug that is being evaluated for the treatment of solid tumors. Adebrelimab is recombinant humanized IgG4 monoclonal antibody with specificity for PD-L1. In 2023, adebrelimab was approved for use in China for the treatment of small cell lung cancer. References Cancer immunotherapy Monoclonal antibodies
Adebrelimab
Chemistry
84
13,205,517
https://en.wikipedia.org/wiki/Krapcho%20decarboxylation
Krapcho decarboxylation is a chemical reaction used to manipulate certain organic esters. This reaction applies to esters with a beta electron-withdrawing group (EWG). The reaction proceeds by nucleophilic dealkylation of the ester by the halide followed by decarboxylation, followed by hydrolysis of the resulting stabilized carbanion. Reaction conditions The reaction is carried in dipolar aprotic solvents such as dimethyl sulfoxide (DMSO) at high temperatures, often around 150 °C. A variety of salts assist in the reaction including NaCl, LiCl, KCN, and NaCN. It is suggested that the salts were not necessary for reaction, but greatly accelerates the reaction when compared to the reaction with water alone. The ester must contain an EWG in the beta position . The reaction works best with a methyl esters. which are more susceptible to SN2 reactions. Mechanisms The mechanisms are still not fully uncovered. However, the following are suggested mechanisms for two different substituents: α,α-Disubstituted Ester For an α,α-disubstituted ester, it is suggested that the anion in the salt attacks the R3 in an SN2 fashion, kicking off R3 and leaving a negative charge on the oxygen. Then, decarboxylation occurs to produce a carbanion intermediate. The intermediate picks up a hydrogen from water to form the products. The byproducts of the reaction (X-R3 and CO2) are often lost as gases, which helps drive the reaction; entropy increases and Le Chatelier's principle takes place. α-Monosubstituted Ester For an α-monosubstituted ester, it is speculated that the anion in the salt attacks the carbonyl group to form a negative charge on the oxygen, which then cleaves off the cyanoester. With the addition of water, the cyanoester is then hydrolyzed to form CO2 and alcohol, and the carbanion intermediate is protonated. The byproduct of this reaction (CO2) is also lost as gas, which helps drive the reaction; entropy increases and Le Chatelier's principle takes place. Advantages The Krapcho decarboxylation is a comparatively simpler method to manipulate malonic esters because it cleaves only one ester group, without affecting the other ester group. The conventional method involves saponification to form carboxylic acids, followed by decarboxylation to cleave the carboxylic acids, and an esterification step to regenerate the esters. Additionally, Krapcho decarboxylation avoids harsh alkaline or acidic conditions. References Elimination reactions Substitution reactions Name reactions
Krapcho decarboxylation
Chemistry
589
6,959,832
https://en.wikipedia.org/wiki/Cation%20channels%20of%20sperm
The cation channels of sperm also known as Catsper channels or CatSper, are ion channels that are related to the two-pore channels and distantly related to TRP channels. The four members of this family form voltage-gated Ca2+ channels that seem to be specific to sperm. As sperm encounter the more alkaline environment of the female reproductive tract, CatSper channels become activated by the altered ion concentration. These channels are required for proper fertilization. The study of these channels has been slow because they do not traffic to the cell membrane in many heterologous systems. There are several factors that can activate the CatSper calcium channel, depending on species. In the human, the channel is activated by progesterone released by the oocyte. Progesterone binds to the protein ABHD2 which is present in the sperm plasma membrane, which causes ABHD2 to cleave an inhibitor of CatSper (2-arachidonoylglycerol) into arachidonic acid and glycerol. The human CatSper channel is pH-sensitive, and requires a high-pH environment. CatSper plays a key role in mediating hyperactive motility – prior to fertilization, sperm become entrapped within the fingerlike projections of the microvilli of the oviduct. In order for the sperm to fertilize the oocyte, CatSper must be present in order to initiate hyperactive motility, allowing the sperm to escape the microvilli and reach the oocyte for fertilization. Certain substances act as agonist or inhibitor of CatSper (e. g. Pregnenolone sulfate is an agonist, pristimerin and lupeol are inhibitors). Of the four members of the Catsper family, Catsper1 is found in the primary piece of sperm. Catsper1 plays an important role in evoked Ca2+ entry and regulation of hyperactivation in sperm. Catsper2 is localized in the sperm tail and is responsible for regulation of hyperactivation. Catsper3 and Catsper4 are found in both, the testes and sperm and play an important role in the motility of hyperactivated sperm. In humans, CatSper is distributed in quadrilateral nanodomains along the principal piece. Although Catsper seems to play an important role in sperm function, Catspers1-4 null mice have been found to have normal testicular histology, sperm counts and morphology, which is indicative of normal progression of spermatogenesis. See also Acrosome reaction References External links GeneReviews/NCBI/NIH/UW entry on CATSPER-Related Male Infertility Ion channels Voltage-gated ion channels Transmembrane proteins Electrophysiology Calcium channels
Cation channels of sperm
Chemistry
580
12,851,824
https://en.wikipedia.org/wiki/Flexible%20cable
Flexible cables, or 'continuous-flex' cables, are electrical cables specially designed to cope with the tight bending radii and physical stress associated with moving applications, such as inside cable carriers. Due to increasing demands within the field of automation technology in the 1980s, such as increasing loads, moving cables guided inside cable carriers often failed, although the cable carriers themselves did not. In extreme cases, failures caused by "corkscrews" and core ruptures brought entire production lines to a standstill, at high cost. As a result, specialized, highly flexible cables were developed with unique characteristics to differentiate them from standard designs. These are sometimes called “chain-suitable,” “high-flex,” or “continuous flex” cables. A higher level of flexibility means the service life of a cable inside a cable carrier can be greatly extended. A normal cable typically manages 50,000 cycles, but a dynamic cable can complete between one and three million cycles. Construction Flexible cables can be divided into two types: those with conductors stranded in layers inside the cable, and those that have bundled or braided conductors. Stranding in layers Stranding in layers is easier to produce, and therefore usually less expensive. The cable cores are stranded firmly and left relatively long in several layers around the center and are then enclosed in an extruded tube-shaped jacket. In the case of shielded cables, the cores are wrapped up with fleece or foils. However, this type of construction means that, during the bending process, the inner radius compresses and the outer radius stretches as the cable core moves. Initially, this works quite well, because the elasticity of the material is still sufficient, but material fatigue can set in and cause permanent deformations. The cores move and begin to make their own compressing and stretching zones, which can lead to a “corkscrew” shape, and ultimately, core rupture. Stranding in bundles The second type of construction uses the cable construction technique of braiding conductors around a tension-proof centre instead of layering them. Eliminating multiple layers guarantees a uniform bend radius across each conductor. At any point where the cable flexes, the path of any core moves quickly from the inside to the outside of the cable. The result is that no single core compresses near the inside of the bend or stretches near the outside of the bend—which reduces overall stresses. An outer jacket is still required to prevent the cores from untwisting. A pressure-filled jacket, rather than a simple extruded jacket, is preferable here. This fills all the gussets around the cores and ensures that the cores cannot untwist. The resulting dynamic cable is often stiffer than a standard cable, but lasts longer in applications where it must constantly flex. Use within cable carriers/drag chains Flexible cables are versatile and used in a number of different applications. When used within construction, flexible cables often need to be protected using cable carriers/drag chains. This is not only for the health and safety of workers but also for the protection of the cable itself. If a cable is damaged, the damage is likely to be costly to repair. Similarly, if cables are left to hang mid-air or run along the floor, there is a heightened risk of injury. References List of flexible cable suppliers Another list from the Kellysearch industrial directory Engineering equipment Electrical circuits
Flexible cable
Engineering
694
1,460,183
https://en.wikipedia.org/wiki/Swale%20%28landform%29
A swale is a shady spot, or a sunken or marshy place. In US usage in particular, it is a shallow channel with gently sloping sides. Such a swale may be either natural or human-made. Artificial swales are often infiltration basins, designed to manage water runoff, filter pollutants, and increase rainwater infiltration. Bioswales are swales that involve the inclusion of plants or vegetation in their construction, specifically. On land The use of swales has been popularized as a rainwater-harvesting and soil-conservation strategy by Bill Mollison, David Holmgren, and other advocates of permaculture. In this context a swale is usually a water-harvesting ditch on contour, also called a contour bund. Swales as used in permaculture are designed by permaculturalists to slow and capture runoff by spreading it horizontally across the landscape (along an elevation contour line), facilitating runoff infiltration into the soil. This archetypal form of swale is a dug-out, sloped, often grassed or "ditch" or "lull" in the landform. One option involves piling the soil onto a new bank on the still lower slope, in which case a bund or berm is formed, mitigating the natural (and often hardscape-increased) risks to slopes below and to any linked watercourse from flash flooding. In arid and seasonally dry places, vegetation (existing or planted) in the swale benefits heavily from the concentration of runoff. Trees and shrubs along the swale can provide shade and mulch which decrease evaporation. On beaches The term "swale" or "beach swale" is also used to describe long, narrow, usually shallow troughs between ridges or sandbars on a beach, that run parallel to the shoreline. See also Contour trenching Gutter Keyline design Rain garden Stormwater Water-sensitive urban design References External links Fact Sheet: Dry and Wet Vegetated Swales from Federal Highway Administration Wetlands of the Great Lakes: The Beach Swale & Dune and Swale Types from Michigan State University Video showing swales used to rehabilitate desert terrain Environmental engineering Landforms Landscape Sustainable gardening Sustainable design Water Water pollution es:Zanja
Swale (landform)
Chemistry,Engineering,Environmental_science
472
45,004,618
https://en.wikipedia.org/wiki/Hydrodenitrogenation
Hydrodenitrogenation (HDN) is an industrial process for the removal of nitrogen from petroleum. Organonitrogen compounds, even though they occur at low levels, are undesirable because they cause poisoning of downstream catalysts. Furthermore, upon combustion, organonitrogen compounds generate NOx, a pollutant. HDN is effected as general hydroprocessing, which traditionally focuses on hydrodesulfurization (HDS) because sulfur compounds are even more problematic. To some extent, hydrodeoxygenation (HDO) is also effected. Typical organonitrogen compounds in petroleum include quinolines and porphyrins and their derivatives. The total nitrogen content is typically less than 1% and the targeted levels are in the ppm range. As described in organic geochemistry, organonitrogen compounds are derivatives or degradation products of the compounds in the living matter that comprised the precursor to fossil fuels. In HDN, the organonitrogen compounds are treated at high temperatures with hydrogen in the presence of a catalyst, the net transformation being: R3N + 3 H2 → 3 RH + NH3 The catalysts generally consist of cobalt and nickel as well as molybdenum disulfide or less often tungsten disulfide supported on alumina. The precise composition of the catalyst, i.e. Co/Ni and Mo/W ratios, are tuned for particular feedstocks. A wide variety of catalyst compositions have been considered, including metal phosphides. References Oil refining Chemical processes Natural gas technology
Hydrodenitrogenation
Chemistry
327
33,289,253
https://en.wikipedia.org/wiki/Glycoside%20hydrolase%20family%207
In molecular biology, glycoside hydrolase family 7 is a family of glycoside hydrolases , which are a widespread group of enzymes that hydrolyse the glycosidic bond between two or more carbohydrates, or between a carbohydrate and a non-carbohydrate moiety. A classification system for glycoside hydrolases, based on sequence similarity, has led to the definition of >100 different families. This classification is available on the CAZy web site, and also discussed at CAZypedia, an online encyclopedia of carbohydrate active enzymes. Glycoside hydrolase family 7 CAZY GH_7 comprises enzymes with several known activities including endoglucanase () and cellobiohydrolase (). These enzymes were formerly known as cellulase family C. Exoglucanases and cellobiohydrolases play a role in the conversion of cellulose to glucose by cutting the disaccharide cellobiose from the non-reducing end of the cellulose polymer chain. Structurally, cellulases and xylanases frequently consist of a catalytic domain joined to a cellulose-binding domain (CBD) via a linker region that is rich in proline and/or hydroxy-amino acids. In type I exoglucanases, the CBD domain is found at the C-terminal extremity of these enzyme (this short domain forms a hairpin loop structure stabilised by 2 disulphide bridges). References EC 3.2.1 Glycoside hydrolase families Protein families
Glycoside hydrolase family 7
Biology
344
77,263,284
https://en.wikipedia.org/wiki/Syntrichia%20caninervis
Syntrichia caninervis, also known as steppe screw moss, is a desert moss species distributed throughout the world. As an extremophile, it is able to withstand desiccation under dry conditions with little access to water and is commonly found in hypolithic communities. It makes use of a novel adaptation to the desert environment to harvest and collect water sources such as dew, fog, snow, and rain, using tiny hairs instead of roots. In laboratory experiments, S. caninervis has shown the ability to survive in a simulated Martian environment. Description The plant was first described by English bryologist William Mitten (1819–1906) to the Linnean Society of London in May 1858, with a description published in their journal in February 1859. It belongs to the Syntrichia genus and the Pottiaceae family. It is commonly known as steppe screw moss. Distribution and habitat S. caninervis has a widespread global distribution and is an extremophile commonly found in extreme desert environments and hypolithic communities with the capacity to withstand desiccation under dry conditions. It has been observed growing in China, Mongolia, Siberia, central and southwestern Asia, Europe, and North America. In Tibet, Antarctica, and circumpolar regions, it is part of the biological soil crust, which is a resilient type of ground cover often found in arid lands. In North America, the plant is found throughout the western and northwestern United States and in two western Canadian provinces. In the United States, it is found as far east as New Mexico, Colorado, Wyoming and Montana, all the way through Idaho, Utah, Arizona, and Nevada, and as far west as California, Oregon, and Washington. Two of the most common plant communities in the United States are found in the Mojave Desert and in the Columbia River drainage basin. In Canada, it is found in British Columbia and Alberta. Extremophile characteristics Drought tolerance S. caninervis is well-known for its ability to tolerate drought conditions, making it well-adapted to desert environments. Among these adaptions is its tiny hairs on the leaves that allow it to exploit multiple different sources of water, such as dew, fog, snow, and rain. Another example is its ability to photosynthesize once remoistened after desiccation. Extreme temperature tolerance Research has shown that S. caninervis can survive freezing temperatures as low as (in liquid nitrogen) for up to 30 days. It has also demonstrated the ability to withstand storage at for up to 5 years. In both cases, the moss was able to regenerate upon thawing, with dehydrated specimens showing faster recovery compared to hydrated ones. Radiation resistance S. caninervis exhibits remarkable tolerance to gamma radiation. It can survive exposure to doses of up to 500 Gy, which is lethal to most plants and far exceeds the lethal dose for humans (around 50 Gy). Some studies have even suggested that exposure to 500 Gy of gamma radiation may promote the plant's growth. Simulated Martian conditions In laboratory experiments, S. caninervis has demonstrated the ability to survive simulated Martian conditions. These conditions included an atmosphere composed of 95% CO₂, temperature fluctuations between , high levels of UV radiation, and low atmospheric pressure. Dried moss plants achieved a 100% regeneration rate within 30 days after being subjected to these conditions for up to 7 days. 
Varieties The Global Biodiversity Information Facility lists the following five varieties for Syntrichia caninervis: Syntrichia caninervis var. abranchesii (Luisier) R.H.zander Syntrichia caninervis var. astrakhanica Ignatov, Ignatova - Suragina Syntrichia caninervis var. caninervis Mitt. Syntrichia caninervis var. gypsophila (J.J.Amann ex G.Roth) Ochyra Syntrichia caninervis var. pseudodesertorum (Vondr.) M.T.Gallego References Extremophiles Pottiaceae Plants described in 1877
Syntrichia caninervis
Biology,Environmental_science
824
53,332,613
https://en.wikipedia.org/wiki/Huawei%20P10
The Huawei P10 is an Android phablet smartphone manufactured by Huawei. Announced at Mobile World Congress 2017 on 26 February 2017, the P10 is the successor to the Huawei P9 and was succeeded by the Huawei P20 in 2018. Specifications Hardware The P10 is constructed with a metal chassis, available in various color finishes. Two color options, "Dazzling Blue" and "Dazzling Gold", feature a patterned "Hyper Diamond Cut" finish which reduces its susceptibility to fingerprints. The front of the device features a button-like fingerprint reader, which can also be used for gesture-based navigation. The P10 features a 5.1-inch 1080p display. A larger version, known as the P10 Plus, features a 5.5-inch 1440p display. The P10 utilizes Huawei's octa-core Kirin 960 system-on-chip, with four 1.84 GHz Cortex-A53 cores and four Cortex-A73 cores at 2.36 GHz. The P10 utilizes 4 GB of RAM, while the P10 Plus uses 4 or 6 GB. It comes with 32, 64, or 128 GB of internal storage. Camera Like the P9, the P10 utilizes a dual camera array on the back with Leica lenses, consisting of a monochrome 20-megapixel sensor and a 12-megapixel color sensor. The P10 cameras utilize an f/2.2 aperture. Software The P10 ships with Android 7.0 "Nougat" and Huawei's EMUI software suite. An update to Android 8.0 "Oreo" and EMUI 8.1 was released in March 2018. In March 2019, Huawei released an update to Android 9.0 "Pie" and EMUI 9. Reception The Huawei P10 received mixed reviews. The Verge felt that the design of the P10 was "attractive" and an "up-to-date" derivative of the iPhone 6 (noting its slim build and other accenting), although arguing that the "Hyper Diamond Cut" finish made the device feel cheaper than intended. It was also noted that the fingerprint sensor's swiping gestures made Android more difficult to navigate. The camera was praised for having "dramatically better image quality than its closest competitors", its software and effects, and for lacking a noticeable "bump" protrusion around its lenses. In conclusion, it was argued that the P10 was overpriced and otherwise developed "without confidence or direction", although it received some upgrades from last year's model, such a camera upgrade (addition of OIS, brighter aperture, 4K video recording, 2x lossless zoom). Some P10 devices utilize LPDDR3 RAM instead of LPDDR4, while its storage memory is mixed between eMMC 5.1 and Universal Flash Storage (UFS) 2.0 or 2.1 components. Huawei faced complaints over the variances between devices, with Chinese users noting differences in benchmark performance between these different memory types. In April 2017, Huawei defended the differences as the standard practice of sourcing specific components from multiple sources in order to meet market demand, also citing an industry-wide shortage of flash memory. References External links Mobile phones introduced in 2017 Android (operating system) devices Mobile phones with multiple rear cameras Mobile phones with 4K video recording Discontinued flagship smartphones Huawei smartphones Mobile phones with infrared transmitter
Huawei P10
Technology
715
59,524
https://en.wikipedia.org/wiki/Next-Generation%20Secure%20Computing%20Base
The Next-Generation Secure Computing Base (NGSCB; codenamed Palladium and also known as Trusted Windows) is a software architecture designed by Microsoft which claimed to provide users of the Windows operating system with better privacy, security, and system integrity. NGSCB was the result of years of research and development within Microsoft to create a secure computing solution that equaled the security of closed platforms such as set-top boxes while simultaneously preserving the backward compatibility, flexibility, and openness of the Windows operating system. Microsoft's primary stated objective with NGSCB was to "protect software from software." Part of the Trustworthy Computing initiative when unveiled in 2002, NGSCB was to be integrated with Windows Vista, then known as "Longhorn." NGSCB relied on hardware designed by the Trusted Computing Group to produce a parallel operation environment hosted by a new hypervisor (referred to as a sort of kernel in documentation) called the "Nexus" that existed alongside Windows and provided new applications with features such as hardware-based process isolation, data encryption based on integrity measurements, authentication of a local or remote machine or software configuration, and encrypted paths for user authentication and graphics output. NGSCB would facilitate the creation and distribution of digital rights management (DRM) policies pertaining to the use of information. NGSCB was subject to much controversy during its development, with critics contending that it would impose restrictions on users, enforce vendor lock-in, and undermine fair use rights and open-source software. It was first demonstrated by Microsoft at WinHEC 2003 before undergoing a revision in 2004 that would enable earlier applications to benefit from its functionality. Reports indicated in 2005 that Microsoft would change its plans with NGSCB so that it could ship Windows Vista by its self-imposed deadline year, 2006; instead, Microsoft would ship only part of the architecture, BitLocker, which can optionally use the Trusted Platform Module to validate the integrity of boot and system files prior to operating system startup. Development of NGSCB spanned approximately a decade before its cancellation, the lengthiest development period of a major feature intended for Windows Vista. NGSCB differed from technologies Microsoft billed as "pillars of Windows Vista"—Windows Presentation Foundation, Windows Communication Foundation, and WinFS—during its development in that it was not built with the .NET Framework and did not focus on managed code software development. NGSCB has yet to fully materialize; however, aspects of it are available in features such as BitLocker of Windows Vista, Measured Boot and UEFI of Windows 8, Certificate Attestation of Windows 8.1, Device Guard of Windows 10, and Device Encryption in Windows 11 Home editions, with TPM 2.0 mandatory for installation. History Early development Development of NGSCB began in 1997 after Peter Biddle conceived of new ways to protect content on personal computers. Biddle enlisted assistance from members of the Microsoft Research division, and other core contributors eventually included Blair Dillaway, Brian LaMacchia, Bryan Willman, Butler Lampson, John DeTreville, John Manferdelli, Marcus Peinado, and Paul England. 
Adam Barr, a former Microsoft employee who worked to secure the remote boot feature during development of Windows 2000, was approached by Biddle and colleagues during his tenure with an initiative tentatively known as "Trusted Windows," which aimed to protect DVD content from being copied. To this end, Lampson proposed the use of a hypervisor to execute a limited operating system dedicated to DVD playback alongside Windows 2000. Patents for a DRM operating system were later filed in 1999 by England, DeTreville and Lampson; Lampson noted that these patents were for NGSCB. Biddle and colleagues realized by 1999 that NGSCB was more applicable to privacy and security than content protection, and the project was formally given the green light by Microsoft in October 2001. During WinHEC 1999, Biddle discussed intent to create a "trusted" architecture for Windows to leverage new hardware to promote confidence and security while preserving backward compatibility with previous software. On October 11, 1999, the Trusted Computing Platform Alliance, a consortium of various technology companies including Compaq, Hewlett-Packard, IBM, Intel, and Microsoft, was formed in an effort to promote personal computing confidence and security. The TCPA released detailed specifications for a trusted computing platform with focus on features such as code validation and encryption based on integrity measurements, hardware-based key storage, and machine authentication; these features required a new hardware component designed by the TCPA called the "Trusted Platform Module" (referred to as a "Security Support Component", "Security CoProcessor", or "Security Support Processor" in early NGSCB documentation). At WinHEC 2000, Microsoft released a technical presentation on the topics of protection of privacy, security, and intellectual property titled "Privacy, Security, and Content in Windows Platforms", which focused on turning Windows into a "platform of trust" for computer security, user content, and user privacy. Notable in the presentation is the contention that "there is no difference between privacy protection, computer security, and content protection"—"assurances of trust must be universally true". Microsoft reiterated these claims at WinHEC 2001. NGSCB intended to protect all forms of content, unlike traditional rights management schemes, which focus only on the protection of audio tracks or movies instead of the users they have the potential to protect, which made it, in Biddle's words, "egalitarian". As "Palladium" Microsoft held its first design review for the NGSCB in April 2002, with approximately 37 companies under a non-disclosure agreement. NGSCB was publicly unveiled under its codename "Palladium" in a June 2002 article by Steven Levy for Newsweek that focused on its design, feature set, and origin. Levy briefly described potential features: access control, authentication, authorization, DRM, encryption, as well as protection from junk mail and malware, with example policies being email accessible only to an intended recipient and Microsoft Word documents readable for only a week after their creation; Microsoft later released a guide clarifying these assertions as being hyperbolic; namely, that NGSCB would not intrinsically enforce content protection, or protect against junk mail or malware. Instead, it would provide a platform on which developers could build new solutions that did not exist, by isolating applications and storing secrets for them. 
Microsoft was not sure whether to "expose the feature in the Control Panel or present it as a separate utility," but NGSCB would be an opt-in solution—disabled by default. Microsoft PressPass later interviewed John Manferdelli, who restated and expanded on many of the key points discussed in the article by Newsweek. Manferdelli described it as an evolutionary platform for Windows in July, articulating how "'Palladium' will not require DRM, and DRM will not require 'Palladium'." Microsoft sought a group program manager in August to assist in leading the development of several Microsoft technologies including NGSCB. Paul Otellini announced Intel's support for NGSCB with a set of chipset, platform, and processor technologies codenamed "LaGrande" at Intel Developer Forum 2002, which would provide an NGSCB hardware foundation and preserve backward compatibility with previous software. As NGSCB NGSCB was known as "Palladium" until January 24, 2003, when Microsoft announced it had been renamed as "Next-Generation Secure Computing Base." Project manager Mario Juarez stated this name was chosen to avoid legal action from an unnamed company which had acquired the rights to the "Palladium" name, as well as to reflect Microsoft's commitment to NGSCB in the upcoming decade. Juarez acknowledged the previous name was controversial, but denied it was changed by Microsoft to dodge criticism. The Trusted Computing Platform Alliance was superseded by the Trusted Computing Group in April 2003. A principal goal of the new consortium was to produce a Trusted Platform Module (TPM) specification compatible with NGSCB; the previous specification, TPM 1.1, did not meet its requirements. TPM 1.2 was designed for compliance with NGSCB and introduced many features for such platforms. The first TPM 1.2 specification, Revision 62, was released in 2003. Biddle emphasized in June 2003 that hardware vendors and software developers were vital to NGSCB. Microsoft publicly demonstrated NGSCB for the first time at WinHEC 2003, where it protected data in memory from an attacker; prevented access to—and alerted the user of—an application that had been changed; and prevented a remote administration tool from capturing an instant messaging conversation. Despite Microsoft's desire to demonstrate NGSCB on hardware, software emulation was required, as few hardware components were available. Biddle reiterated that NGSCB was a set of evolutionary enhancements to Windows, basing this assessment on preserved backward compatibility and employed concepts in use before its development, but said the capabilities and scenarios it would enable would be revolutionary. Microsoft also revealed its multi-year roadmap for NGSCB, with the next major development milestone scheduled for the Professional Developers Conference, indicating that subsequent versions would ship concurrently with pre-release builds of Windows Vista; however, news reports suggested that NGSCB would not be integrated with Windows Vista when released, but would instead be made available as separate software for the operating system. Microsoft also announced details related to adoption and deployment of NGSCB at WinHEC 2003, stating that it would create a new value proposition for customers without significantly increasing the cost of computers; NGSCB adoption during the year of its introductory release was not anticipated and immediate support for servers was not expected. 
On the last day of the conference, Biddle said NGSCB needed to provide users with a way to differentiate between secured and unsecured windows—that a secure window should be "noticeably different" to help protect users from spoofing attacks; Nvidia was the earliest to announce this feature. WinHEC 2003 represented an important development milestone for NGSCB. Microsoft dedicated several hours to presentations and released many technical whitepapers, and companies including Atmel, Comodo Group, Fujitsu, and SafeNet produced preliminary hardware for the demonstration. Microsoft also demonstrated NGSCB at several U.S. campuses in California and in New York in June 2003. NGSCB was among the topics discussed during Microsoft's PDC 2003 with a pre-beta software development kit, known as the Developer Preview, being distributed to attendees. The Developer Preview was the first time that Microsoft made NGSCB code available to the developer community and was offered by the company as an educational opportunity for NGSCB software development. With this release, Microsoft stated that it was primarily focused on supporting business and enterprise applications and scenarios with the first version of the NGSCB scheduled to ship with Windows Vista, adding that it intended to address consumers with a subsequent version of the technology, but did not provide an estimated time of delivery for this version. At the conference, Jim Allchin said that Microsoft was continuing to work with hardware vendors so that they would be able to support the technology, and Bill Gates expected a new generation of central processing units (CPUs) to offer full support. Following PDC 2003, NGSCB was demonstrated again on prototype hardware during the annual RSA Security conference in November. Microsoft announced at WinHEC 2004 that it would revise NGSCB in response to feedback from customers and independent software vendors who did not desire to rewrite their existing programs in order to benefit from its functionality; the revision would also provide more direct support for Windows with protected environments for the operating system, its components, and applications, instead of it being an environment only for itself and new applications. The NGSCB secure input feature would also undergo a significant revision based on cost assessments, hardware requirements, and usability issues of the previous implementation. There were subsequent reports that Microsoft would cease developing NGSCB; Microsoft denied these reports and reaffirmed its commitment to delivery. Additional reports published later that year suggested that Microsoft would make even more changes based on feedback from the industry. Microsoft's lack of continual updates on NGSCB progress in 2005 caused industry insiders to speculate that NGSCB had been cancelled. At the Microsoft Management Summit event, Steve Ballmer said that the company would build on the security foundation it had started with the NGSCB to create a new set of virtualization technologies for Windows, which were later Hyper-V. Reports during WinHEC 2005 indicated Microsoft scaled back its plans for NGSCB, so that it could ship Windows Vista—which had already been beset by numerous delays and even a "development reset"—within a reasonable timeframe; instead of isolating components, NGSCB would offer "Secure Startup" ("BitLocker Drive Encryption") to encrypt disk volumes and validate both pre-boot firmware and operating system components. 
Microsoft intended to deliver other aspects of NGSCB later. Jim Allchin stated NGSCB would "marry hardware and software to gain better security", which was instrumental in the development of BitLocker. Architecture and technical details A complete Microsoft-based Trusted Computing-enabled system will consist not only of software components developed by Microsoft but also of hardware components developed by the Trusted Computing Group. The majority of features introduced by NGSCB are heavily reliant on specialized hardware and so will not operate on PCs predating 2004. In current Trusted Computing specifications, there are two hardware components: the Trusted Platform Module (TPM), which will provide secure storage of cryptographic keys and a secure cryptographic co-processor, and a curtained memory feature in the CPU. In NGSCB, there are two software components, the Nexus, a security kernel that is part of the Operating System that provides a secure environment (Nexus mode) for trusted code to run in, and Nexus Computing Agents (NCAs), trusted modules which run in Nexus mode within NGSCB-enabled applications. Secure storage and attestation At the time of manufacture, a cryptographic key is generated and stored within the TPM. This key is never transmitted to any other component, and the TPM is designed in such a way that it is extremely difficult to retrieve the stored key by reverse engineering or any other method, even to the owner. Applications can pass data encrypted with this key to be decrypted by the TPM, but the TPM will only do so under certain strict conditions. Specifically, decrypted data will only ever be passed to authenticated, trusted applications, and will only ever be stored in curtained memory, making it inaccessible to other applications and the Operating System. Although the TPM can only store a single cryptographic key securely, secure storage of arbitrary data is by extension possible by encrypting the data such that it may only be decrypted using the securely stored key. The TPM is also able to produce a cryptographic signature based on its hidden key. This signature may be verified by the user or by any third party, and so can therefore be used to provide remote attestation that the computer is in a secure state. Curtained memory NGSCB also relies on a curtained memory feature provided by the CPU. Data within curtained memory can only be accessed by the application to which it belongs, and not by any other application or the Operating System. The attestation features of the TPM can be used to confirm to a trusted application that it is genuinely running in curtained memory; it is therefore very difficult for anyone, including the owner, to trick a trusted application into running outside of curtained memory. This in turn makes reverse engineering of a trusted application extremely difficult. Applications NGSCB-enabled applications are to be split into two distinct parts, the NCA, a trusted module with access to a limited Application Programming Interface (API), and an untrusted portion, which has access to the full Windows API. Any code which deals with NGSCB functions must be located within the NCA. The reason for this split is that the Windows API has developed over many years and is as a result extremely complex and difficult to audit for security bugs. To maximize security, trusted code is required to use a smaller, carefully audited API. Where security is not paramount, the full API is available. 
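To make the sealed storage and attestation mechanisms described above more concrete, the following is a minimal Python sketch of the policy they embody: data is bound at sealing time to a measurement (a hash) of the sealing program, and the TPM releases it only to a program that presents the same measurement. The class and method names (TpmModel, seal, unseal) are hypothetical illustrations written for this article, not the actual TPM, Nexus, or NGSCB APIs; a real TPM would also encrypt the sealed blob and enforce the check in hardware, whereas this toy model only authenticates it.

```python
import hashlib
import hmac
import os

class TpmModel:
    """Toy model of NGSCB-style sealed storage; illustrative only."""

    def __init__(self):
        # Stands in for the root key generated at manufacture; it never leaves the chip.
        self._root_key = os.urandom(32)

    def _binding_key(self, measurement: bytes) -> bytes:
        # Derive a key that depends on the measurement of the requesting program.
        return hmac.new(self._root_key, measurement, hashlib.sha256).digest()

    def seal(self, secret: bytes, measurement: bytes) -> bytes:
        # Bind the secret to the sealing program's measurement with a MAC.
        tag = hmac.new(self._binding_key(measurement), secret, hashlib.sha256).digest()
        return tag + secret

    def unseal(self, blob: bytes, measurement: bytes) -> bytes:
        # Release the secret only if the caller's measurement matches the sealer's.
        tag, secret = blob[:32], blob[32:]
        expected = hmac.new(self._binding_key(measurement), secret, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise PermissionError("measurement does not match the sealing program")
        return secret

tpm = TpmModel()
trusted = hashlib.sha256(b"trusted agent code").digest()
blob = tpm.seal(b"database password", trusted)
assert tpm.unseal(blob, trusted) == b"database password"
# A modified program hashes differently, so unsealing is refused:
tampered = hashlib.sha256(b"tampered agent code").digest()
try:
    tpm.unseal(blob, tampered)
except PermissionError as error:
    print("unseal refused:", error)
```

The same binding idea underlies remote attestation: instead of checking a tag locally, the TPM signs the measurement with its hidden key so that a third party can verify the machine's state.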
Uses and scenarios NGSCB enables new categories of applications and scenarios. Examples of uses cited by Microsoft include decentralized access control policies; digital rights management services for consumers, content providers, and enterprises; protected instant messaging conversations and online transactions; and more secure forms of machine health compliance, network authentication, and remote access. NGSCB-secured virtual private network access was one of the earliest scenarios envisaged by Microsoft. NGSCB can also strengthen software update mechanisms such as those belonging to antivirus software or Windows Update. An early NGSCB privacy scenario conceived of by Microsoft is the "wine purchase scenario," where a user can safely conduct a transaction with an online merchant without divulging personally identifiable information during the transaction. With the release of the NGSCB Developer Preview during PDC 2003, Microsoft emphasized the following enterprise applications and scenarios: document signing, secured data viewing, secured instant messaging, and secured plug-ins for emailing. WinHEC 2004 scenarios During WinHEC 2004, Microsoft revealed two features based on its revision of NGSCB, Cornerstone and Code Integrity Rooting: Cornerstone would protect a user's login and authentication information by securely transmitting it to NGSCB-protected Windows components for validation, finalizing the user authentication process by releasing access to the SYSKEY if validation was successful. It was intended to protect data on laptops that had been lost or stolen to prevent hackers or thieves from accessing it even if they had performed a software-based attack or booted into an alternative operating system. Code Integrity Rooting would validate boot and system files prior to the startup of Microsoft Windows. If validation of these components failed, the SYSKEY would not be released. BitLocker is the combination of these features; "Cornerstone" was the codename of BitLocker, and BitLocker validates pre-boot firmware and operating system components before boot, which protects SYSKEY from unauthorized access; an unsuccessful validation prohibits access to a protected system. Reception Reaction to NGSCB after its unveiling by Newsweek was largely negative. While its security features were praised, critics contended that NGSCB could be used to impose restrictions on users; lock-out competing software vendors; and undermine fair use rights and open source software such as Linux. Microsoft's characterization of NGSCB as a security technology was subject to criticism as its origin focused on DRM. NGSCB's announcement occurred only a few years after Microsoft was accused of anti-competitive practices during the United States v. Microsoft Corporation antitrust case, a detail which called the company's intentions for the technology into question—NGSCB was regarded as an effort by the company to maintain its dominance in the personal computing industry. The notion of a "Trusted Windows" architecture—one that implied Windows itself was untrustworthy—would also be a source of contention within the company itself. After NGSCB's unveiling, Microsoft drew frequent comparisons to Big Brother, an oppressive dictator of a totalitarian state in George Orwell's dystopian novel Nineteen Eighty-Four. The Electronic Privacy Information Center legislative counsel, Chris Hoofnagle, described Microsoft's characterization of the NGSCB as "Orwellian." 
Big Brother Awards bestowed Microsoft with an award because of NGSCB. Bill Gates addressed these comments at a homeland security conference by stating that NGSCB "can make our country more secure and prevent the nightmare vision of George Orwell at the same time." Steven Levy—the author who unveiled the existence of the NGSCB—claimed in a 2004 front-page article for Newsweek that NGSCB could eventually lead to an "information infrastructure that encourages censorship, surveillance, and suppression of the creative impulse where anonymity is outlawed and every penny spent is accounted for." However, Microsoft outlined a scenario enabled by NGSCB that allows a user to conduct a transaction without divulging personally identifiable information. Ross Anderson of Cambridge University was among the most vocal critics of NGSCB and of Trusted Computing. Anderson alleged that the technologies were designed to satisfy federal agency requirements; enable content providers and other third-parties to remotely monitor or delete data in users' machines; use certificate revocation lists to ensure that only content deemed "legitimate" could be copied; and use unique identifiers to revoke or validate files; he compared this to the attempts by the Soviet Union to "register and control all typewriters and fax machines." Anderson also claimed that the TPM could control the execution of applications on a user's machine and, because of this, bestowed to it a derisive "Fritz Chip" name in reference to United States Senator Ernest "Fritz" Hollings, who had recently proposed DRM legislation such as the Consumer Broadband and Digital Television Promotion Act for consumer electronic devices. Anderson's report was referenced extensively in the news media and appeared in publications such as BBC News, The New York Times, and The Register. David Safford of IBM Research stated that Anderson presented several technical errors within his report, namely that the proposed capabilities did not exist within any specification and that many were beyond the scope of trusted platform design. Anderson later alleged that BitLocker was designed to facilitate DRM and to lock out competing software on an encrypted system, and, in spite of his allegation that NGSCB was designed for federal agencies, advocated for Microsoft to add a backdoor to BitLocker. Similar sentiments were expressed by Richard Stallman, founder of the GNU Project and Free Software Foundation, who alleged that Trusted Computing technologies were designed to enforce DRM and to prevent users from running unlicensed software. In 2015, Stallman stated that "the TPM has proved a total failure" for DRM and that "there are reasons to think that it will not be feasible to use them for DRM." After the release of Anderson's report, Microsoft stated in an NGSCB FAQ that "enhancements to Windows under the NGSCB architecture have no mechanism for filtering content, nor do they provide a mechanism for proactively searching the Internet for 'illegal' content [...] Microsoft is firmly opposed to putting 'policing functions' into nexus-aware PCs and does not intend to do so" and that the idea was in direct opposition with the design goals set forth for NGSCB, which was "built on the premise that no policy will be imposed that is not approved by the user." 
Concerns about the NGSCB TPM were also raised in that it would use what are essentially unique machine identifiers, which drew comparisons to the Intel Pentium III processor serial number, a unique hardware identification number of the 1990s viewed as a risk to end-user privacy. NGSCB, however, mandates that disclosure or use of the keys provided by the TPM be based solely on user discretion; in contrast, Intel's Pentium III included a unique serial number that could potentially be revealed to any application. NGSCB, also unlike Intel's Pentium III, would provide optional features to allow users to indirectly identify themselves to external requestors. In response to concerns that NGSCB would take control away from users for the sake of content providers, Bill Gates stated that the latter should "provide their content in easily accessible forms or else it ends up encouraging piracy." Bryan Willman, Marcus Peinado, Paul England, and Peter Biddle—four NGSCB engineers—realized early during the development of NGSCB that DRM would ultimately fail in its efforts to prevent piracy. In 2002, the group released a paper titled "The Darknet and the Future of Content Distribution" that outlined how content protection mechanisms are demonstrably futile. The paper's premise circulated within Microsoft during the late 1990s and was a source of controversy within Microsoft; Biddle stated that the company almost terminated his employment as a result of the paper's release. A 2003 report published by Harvard University researchers suggested that NGSCB and similar technologies could facilitate the secure distribution of copyrighted content across peer-to-peer networks. Not all assessments were negative. Paul Thurrott praised NGSCB, stating that it was "Microsoft's Trustworthy Computing initiative made real" and that it would "form the basis of next-generation computer systems." Scott Bekker of Redmond Magazine stated that NGSCB was misunderstood because of its controversy and that it appeared to be a "promising, user-controlled defense against privacy intrusions and security violations." In February 2004, In-Stat/MDR, publisher of the Microprocessor Report, bestowed NGSCB with its Best Technology award. Malcom Crompton, Australian Privacy Commissioner, stated that "NGSCB has great privacy enhancing potential [...] Microsoft has recognised there is a privacy issue [...] we should all work with them, give them the benefit of the doubt and urge them to do the right thing." When Microsoft announced at WinHEC 2004 that it would be revising NGSCB so that previous applications would not have to be rewritten, Martin Reynolds of Gartner praised the company for this decision as it would create a "more sophisticated" version of NGSCB that would simplify development. David Wilson, writing for South China Morning Post, defended NGSCB by saying that "attacking the latest Microsoft monster is an international blood sport" and that "even if Microsoft had a new technology capable of ending Third World hunger and First World obesity, digital seers would still lambaste it because they view Bill Gates as a grey incarnation of Satan." Microsoft noted that negative reaction to NGSCB gradually waned after events such as the USENIX Annual Technical Conference in 2003, and several Fortune 500 companies also expressed interest in it. 
When reports in 2005 announced that Microsoft would scale back its plans and incorporate only BitLocker with Windows Vista, concerns pertaining to digital rights management, erosion of user rights, and vendor lock-in remained. In 2008, Biddle stated that negative perception was the most significant contributing factor responsible for the cessation of NGSCB's development. Vulnerability In a 2003 article, Dan Boneh and David Brumley indicated that projects like NGSCB may be vulnerable to timing attacks. See also Microsoft Pluton Secure Boot Trusted Execution Technology Trusted Computing Trusted Platform Module Intel Management Engine References External links Microsoft's NGSCB home page (Archived on 2006-07-05) Trusted Computing Group home page System Integrity Team blog — team blog for NGSCB technologies (Archived on 2008-10-21) Security WMI Providers Reference on MSDN, including BitLocker Drive Encryption and Trusted Platform Module (both components of NGSCB) TPM Base Services on MSDN Development Considerations for Nexus Computing Agents Cryptographic software Discontinued Windows components Disk encryption Microsoft criticisms and controversies Microsoft initiatives Microsoft Windows security technology Trusted computing Windows Vista
Next-Generation Secure Computing Base
Mathematics,Engineering
5,595
46,220,142
https://en.wikipedia.org/wiki/Beta%20Microscopii
Beta Microscopii (Beta Mic), Latinized from β Microscopii, is a solitary star in the constellation Microscopium. It is close to the lower limit of stars that are visible to the naked eye, having an apparent visual magnitude of 6.05. Based upon its annual parallax shift as seen from Earth, this star is located 502 light years away from the Sun. At that distance, the visual magnitude is diminished by an extinction factor of 0.19 due to interstellar dust. Beta Mic has a stellar classification of A1 IV, indicating that it is an evolved A-type subgiant. Older sources give it a class of A2 Vn, suggesting that it is an A-type main-sequence star with nebulous absorption lines due to rapid rotation. Consistent with the older classification, the star is spinning rapidly, with a high projected rotational velocity. The star has 2.96 times the mass of the Sun and, due to its evolved status, an enlarged radius. It radiates at 77.2 times the luminosity of the Sun from its photosphere, whose effective temperature gives the star a white hue. Beta Mic has a solar metallicity and is estimated to be around 340 million years old. References Microscopium Microscopii, Beta Durchmusterung objects 198529 102989 7979 A-type subgiants Microscopii, 32
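As a rough worked example of how the quoted apparent magnitude, distance, and extinction combine (a sketch using the standard distance-modulus relation and assuming 502 light years is about 154 parsecs; the resulting absolute magnitude is not stated in the article and is only illustrative):

```latex
M_V = m_V - 5\log_{10}\!\left(\frac{d}{10\,\mathrm{pc}}\right) - A_V
    \approx 6.05 - 5\log_{10}(15.4) - 0.19 \approx -0.1
```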
Beta Microscopii
Astronomy
295
65,296,384
https://en.wikipedia.org/wiki/Andrew%20Pollard%20%28immunologist%29
Sir Andrew John Pollard (born 29 August 1965) is the Ashall Professor of Infection & Immunity at the University of Oxford and a Fellow of St Cross College, Oxford. He is an honorary consultant paediatrician at John Radcliffe Hospital and the director of the Oxford Vaccine Group. He is the chief investigator on the University of Oxford COVID-19 Vaccine (ChAdOx-1 n-CoV-19) trials and has led research on vaccines for many life-threatening infectious diseases, including typhoid fever, Neisseria meningitidis, Haemophilus influenzae type b, streptococcus pneumoniae, pertussis, influenza, rabies, and Ebola. Because "In order to prevent any perceived conflict of interest it was agreed that the Joint Committee on Vaccination and Immunisation (JCVI) Chair (Professor Andrew Pollard), who is involved in the development of a SARS-CoV-2 vaccine at Oxford, would recuse himself from all JCVI COVID-19 meetings", JCVI Deputy Chair Professor Anthony Harnden acts in his stead on these matters. Pollard was awarded the coveted James Spence Medal by the Royal College of Paediatrics and Child Health (RCPCH) in 2022. Education Pollard attended St Peter's Catholic School, Bournemouth, where he was head boy. He attended Guy's Hospital Medical School graduating with a BSc in 1986, and subsequently obtained an MBBS from the University of London (1989) at St Bartholomew's Hospital Medical School, where he was awarded the Wheelwright's Prize in Paediatrics (1988) and Honours Colours. After house jobs at Barts and Whipps Cross Hospital and working as an A&E senior house officer at the Whittington Hospital, London, he trained in Paediatrics at Birmingham Children's Hospital, UK, specialising in Paediatric Infectious Diseases at St Mary's Hospital, London, and at British Columbia Children's Hospital, Vancouver. He obtained his PhD at St Mary's Hospital, from the University of London in 1999. Career He chaired the scientific panel of the Spencer Dayman Meningitis Laboratories Charitable Trust (2002–2006) and was a member of the scientific committee of the Meningitis Research Foundation (2009–2014). He is currently chair of trustees of the Knoop Trust and a trustee of the Jenner Vaccine Foundation. Pollard has been the chair of the UK's JCVI since 2013, but does not participate in the COVID-19 vaccine Committee. Pollard has been a member of the WHO Strategic Advisory Group of Experts (SAGE) on Immunization since 2016. He was Director of Graduate Studies in the Department of Paediatrics at the University of Oxford 2012-2020 and was Vice-Master of the University of Oxford's St Cross College, Oxford 2017–2021 and remains a Fellow of the college. He has been a member of the British Commission on Human Medicines' Clinical Trials, Biologicals and Vaccines expert advisory group since 2013, and chaired the European Medicines Agency Scientific Advisory Group on Vaccines over the years between 2012 and 2020. Honours and awards Pollard has received multiple awards throughout his career. For example, he received the “Science Honor and Truth Award” of the Instituto de Patologia en la Altura in La Paz, Bolivia in 2002. In 2020, Pollard received the Oxford University Vice Chancellor's Innovation Award for his work on typhoid vaccines. In 2021, Pollard was knighted in the Birthday Honours for services to public health, particularly during the COVID-19 pandemic. In 2022, Brazil awarded him the Order of Medical Merit. He was elected as a Fellow of the Royal Society (FRS) in 2024. 
Publications Pollard has published five books (including one on mountaineering), six book chapters, 12 conference papers, and 647 journal articles. His most cited works are: Personal life Pollard is an avid runner, cyclist, and mountaineer. References External links PubMed search for Andrew J. Pollard 1965 births Living people People educated at St Peter's Catholic School, Bournemouth Alumni of King's College London Alumni of the Medical College of St Bartholomew's Hospital Alumni of Imperial College London Vaccinologists British immunologists Fellows of the Academy of Medical Sciences (United Kingdom) Academics of the University of Oxford Fellows of the Higher Education Academy Fellows of St Cross College, Oxford Knights Bachelor Vaccination advocates Fellows of the Royal College of Paediatrics and Child Health Fellows of the Royal Society
Andrew Pollard (immunologist)
Biology
930
2,167,630
https://en.wikipedia.org/wiki/James%20Till
James Edgar Till (born August 25, 1931) is a University of Toronto biophysicist, best known for demonstrating – in a partnership with Ernest McCulloch – the existence of stem cells. Early work Till was born in Lloydminster, which is located on the border between Saskatchewan and Alberta. The family farm was located north of Lloydminster, in Alberta; the eastern margin of the farm was the Alberta–Saskatchewan boundary. He attended the University of Saskatchewan with scholarships awarded by the Standard Oil Company and the National Research Council, graduating with a B.Sc. in 1952 and a M.Sc. in physics in 1954. Some of his early work was conducted with Harold E. Johns, a pioneer in cobalt-60 radiotherapy. Till proceeded to Yale University, where he received a Ph.D. in biophysics in 1957. He then became a post-doctoral fellow at the University of Toronto. Stem cells Harold E. Johns recruited Till to the Ontario Cancer Institute at Princess Margaret Hospital shortly after he completed his work at Yale. Subsequently, Till chose to work with Ernest McCulloch at the University of Toronto. Thus, the older physician's insight was combined with the younger physicist's rigorous and thorough nature. In the early 1960s, McCulloch and Till started a series of experiments that involved injecting bone marrow cells into irradiated mice. They observed that small raised lumps grew on the spleens of the mice, in proportion to the number of bone marrow cells injected. Till and McCulloch dubbed the lumps 'spleen colonies', and speculated that each lump arose from a single marrow cell: perhaps a stem cell. In later work, Till & McCulloch were joined by graduate student Andy Becker. They cemented their stem cell theory and in 1963 published their results in Nature. In the same year, in collaboration with Lou Siminovitch, a trailblazer for molecular biology in Canada, they obtained evidence that these same marrow cells were capable of self-renewal, a crucial aspect of the functional definition of stem cells that they had formulated. In 1969, Till became a Fellow of the Royal Society of Canada. Later career In the 1980s Till's focus shifted, moving gradually into evaluation of cancer therapies, quality of life issues, and Internet research, including Internet research ethics and the ethics of List mining. Till holds the distinguished title of University Professor Emeritus at the University of Toronto. Recently, Till has been a vocal proponent of open access to scientific publications. Until 2019, Till was an editorial member of the open access journal Journal of Medical Internet Research. Till was a founding member of the Board of Directors of the Canadian Stem Cell Foundation (no longer active). Honours 1969, he and Ernest A. McCulloch were awarded the Canada Gairdner International Award 1993, awarded Robert L. Noble Prize by the National Cancer Institute of Canada, now the research arm of the Canadian Cancer Society 1994, made an Officer of the Order of Canada 2000, made a Fellow of the Royal Society of London 2004, inducted into the Canadian Medical Hall of Fame 2005, he and Ernest A. McCulloch were awarded the Albert Lasker Award for Basic Medical Research 2006, made a member of Order of Ontario 2018, awarded Edogawa-NICHE Prize Selected publications External links Canadian Medical Hall of Fame entry James Till CV, Community of Science Joint publications by Till and McCulloch, 1961-1969; full text courtesy University of Toronto Follow Jim Till on twitter James E. 
Till archival papers held at the University of Toronto Archives and Records Management Services U of Toronto researcher James Till receives International Honour Inaugural Edogawa NICHE Prize awarded to Prof James Till 1931 births Living people Canadian cancer researchers Canadian fellows of the Royal Society Fellows of the Royal Society of Canada Members of the Order of Ontario Officers of the Order of Canada Stem cell researchers University of Saskatchewan alumni People from Lloydminster Recipients of the Albert Lasker Award for Basic Medical Research Yale University alumni Canadian biophysicists 20th-century Canadian scientists 21st-century Canadian scientists Scientists from Saskatchewan
James Till
Biology
815
72,482,500
https://en.wikipedia.org/wiki/Subdivision%20%28simplicial%20complex%29
A subdivision (also called refinement) of a simplicial complex is another simplicial complex in which, intuitively, one or more simplices of the original complex have been partitioned into smaller simplices. The most commonly used subdivision is the barycentric subdivision, but the term is more general. The subdivision is defined in slightly different ways in different contexts. In geometric simplicial complexes Let K be a geometric simplicial complex (GSC). A subdivision of K is a GSC L such that: |K| = |L|, that is, the union of simplices in K equals the union of simplices in L (they cover the same region in space). each simplex of L is contained in some simplex of K. As an example, let K be a GSC containing a single triangle {A,B,C} (with all its faces and vertices). Let D be a point on the face AB. Let L be the complex containing the two triangles {A,D,C} and {B,D,C} (with all their faces and vertices). Then L is a subdivision of K, since the two triangles {A,D,C} and {B,D,C} are both contained in {A,B,C}, and similarly the faces {A,D}, {D,B} are contained in the face {A,B}, and the face {D,C} is contained in {A,B,C}. Subdivision by starring One way to obtain a subdivision of K is to pick an arbitrary point x in |K|, remove each simplex s in K that contains x, and replace it with the closure of the following set of simplices: { x∗t : t is a face of s that does not contain x }, where x∗t is the join of the point x and the face t. This process is called starring at x. A stellar subdivision is a subdivision obtained by sequentially starring at different points. A derived subdivision is a subdivision obtained by the following inductive process. Star each 1-dimensional simplex (a segment) at some internal point; Star each 2-dimensional simplex at some internal point, over the subdivision of the 1-dimensional simplices; ... Star each k-dimensional simplex at some internal point, over the subdivision of the (k-1)-dimensional simplices. The barycentric subdivision is a derived subdivision where the points used for starring are always barycenters of simplices. For example, if D, E, F, G are the barycenters of {A,B}, {A,C}, {B,C}, {A,B,C} respectively, then the first barycentric subdivision of {A,B,C} is the closure of {A,D,G}, {B,D,G}, {A,E,G}, {C,E,G}, {B,F,G}, {C,F,G}. Iterated subdivisions can be used to attain arbitrarily fine triangulations of a given polyhedron. In abstract simplicial complexes Let K be an abstract simplicial complex (ASC). The face poset of K is a poset made of all nonempty simplices of K, ordered by inclusion (which is a partial order). For example, the face-poset of the closure of {A,B,C} is the poset with the following chains: {A} < {A,B} < {A,B,C} {A} < {A,C} < {A,B,C} {B} < {A,B} < {A,B,C} {B} < {B,C} < {A,B,C} {C} < {A,C} < {A,B,C} {C} < {B,C} < {A,B,C} The order complex of a poset P is an ASC whose vertices are the elements of P and whose simplices are the chains of P. The first barycentric subdivision of an ASC K is the order complex of its face poset. The order complex of the above poset is the closure of the following simplices: { {A} , {A,B} , {A,B,C} } { {A} , {A,C} , {A,B,C} } { {B} , {A,B} , {A,B,C} } { {B} , {B,C} , {A,B,C} } { {C} , {A,C} , {A,B,C} } { {C} , {B,C} , {A,B,C} } Note that this ASC is isomorphic to the ASC {A,D,G}, {B,D,G}, {A,E,G}, {C,E,G}, {B,F,G}, {C,F,G}, with the assignment: A={A}, B={B}, C={C}, D={A,B}, E={A,C}, F={B,C}, G={A,B,C}. The geometric realization of the subdivision of K is always homeomorphic to the geometric realization of K. Simplicial sets References
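The abstract definition above is mechanical enough to compute directly. The following Python sketch, written for this article rather than taken from any library, builds the face poset of an abstract simplicial complex and enumerates its chains, which are exactly the simplices of the first barycentric subdivision; running it on the triangle {A,B,C} reproduces the six maximal simplices listed in the example.

```python
from itertools import combinations

def closure(maximal_simplices):
    """All nonempty faces of the given simplices (the complex they generate)."""
    faces = set()
    for s in maximal_simplices:
        s = frozenset(s)
        for k in range(1, len(s) + 1):
            faces.update(map(frozenset, combinations(s, k)))
    return faces

def barycentric_subdivision(maximal_simplices):
    """First barycentric subdivision of an abstract simplicial complex,
    computed as the order complex of its face poset: vertices are the
    nonempty faces, and simplices are chains under strict inclusion."""
    faces = closure(maximal_simplices)

    def chains_ending_at(top):
        # Every chain whose largest element is `top` (naive recursion;
        # fine for small complexes, exponential in general).
        result = [(top,)]
        for g in faces:
            if g < top:  # proper subset, i.e. a strictly smaller face
                result.extend(chain + (top,) for chain in chains_ending_at(g))
        return result

    chains = []
    for f in faces:
        chains.extend(chains_ending_at(f))
    return chains

if __name__ == "__main__":
    triangle = [{"A", "B", "C"}]
    subdivision = barycentric_subdivision(triangle)
    # The maximal chains (length 3) correspond to the six triangles
    # {A,D,G}, {B,D,G}, {A,E,G}, {C,E,G}, {B,F,G}, {C,F,G} of the example.
    for chain in (c for c in subdivision if len(c) == 3):
        print([sorted(face) for face in chain])
```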
Subdivision (simplicial complex)
Mathematics
1,170
46,670,461
https://en.wikipedia.org/wiki/Pentyl%20nitrite
Pentyl nitrite is a chemical compound with the molecular formula C5H11ONO, classified as an alkyl nitrite, used as an antihypertensive medicine. It is also used to treat cyanide poisoning. It is one of the active ingredients of a recreational drug known as poppers. References Alkyl nitrites Inhalants Amyl esters
Pentyl nitrite
Chemistry
78
14,755,566
https://en.wikipedia.org/wiki/Glypican%203
Glypican-3 is a protein that, in humans, is encoded by the GPC3 gene. The GPC3 gene is located on human X chromosome (Xq26) where the most common gene (Isoform 2, GenBank Accession No.: NP_004475) encodes a 70-kDa core protein with 580 amino acids. Three variants have been detected that encode alternatively spliced forms termed Isoforms 1 (NP_001158089), Isoform 3 (NP_001158090) and Isoform 4 (NP_001158091). Structure and function The protein core of GPC3 consists of two subunits, where the N-terminal subunit has a size of ~40 kDa and the C-terminal subunit is ~30 kDa. Six glypicans (GPC1-6) have been identified in mammals. Cell surface heparan sulfate proteoglycans are composed of a membrane-associated protein core substituted with a variable number of heparan sulfate chains. Members of the glypican-related integral membrane proteoglycan family (GRIPS) contain a core protein anchored to the cytoplasmic membrane via a glycosyl phosphatidylinositol linkage. These proteins may play a role in the control of cell division and growth regulation. GPC3 has been found to regulate Wnt/β-catenin and Yap signaling pathways. GPC3 interacts with both Wnt and frizzled (FZD) to form a complex and triggers downstream signaling. The core protein of GPC3 may serve as a co-receptor or a receiver for Wnt. A cysteine-rich domain at the N-lobe of GPC3 has been identified as a hydrophobic groove that interacts with Wnt3a. Blocking the Wnt binding domain on GPC3 using the HN3 single domain antibody can inhibit Wnt activation. Wnt also recognizes a heparan sulfate structure on GPC3, which contains IdoA2S and GlcNS6S, and that the 3-O-sulfation in GlcNS6S3S significantly enhances the binding of Wnt to heparan sulfate. GPC3 also modulates Yap signaling. It interacts with FAT1, a potential upstream cell surface receptor of YAP1 in human cells. GPC3 is also found to bind Alpha-fetoprotein in liver cancer. Disease linkage Deletion mutations in this gene are associated with Simpson–Golabi–Behmel syndrome. Diagnostic utility Glypican 3 immunostaining has utility for differentiating hepatocellular carcinoma (HCC) and dysplastic changes in cirrhotic livers; HCC stains with glypican 3, while liver with dysplastic changes and/or cirrhotic changes does not. Using the YP7 murine monoclonal antibody, GPC3 protein expression is found in HCC, not in normal liver and cholangiocarcinoma. The YP7 murine antibody has been humanized and named as 'hYP7'. GPC3 is also expressed to a lesser degree in melanoma, ovarian clear-cell carcinomas, yolk sac tumors, neuroblastoma, hepatoblastoma, Wilms' tumor cells, and other tumors. However, the significance of GPC3 as a diagnostic tool for human tumors other than HCC is unclear. Therapeutic potential To validate GPC3 as a therapeutic target in liver cancer, the anti-GPC3 therapeutic antibodies GC33, YP7, HN3 and HS20 have been made and widely tested. The laboratory of Dr. Mitchell Ho at the National Cancer Institute, NIH (Bethesda, Maryland, US) has generated YP7 murine monoclonal antibody that recognizes the C-lobe of GPC3 by hybridoma technology. The antibody has been humanized (named hYP7) via antibody engineering for clinical applications. The Ho lab has also identified the human single-domain antibody ('human nanobody') HN3 targeting the N-lobe of GPC3 and the human monoclonal antibody HS20 targeting the heparan sulfate chains on GPC3 by phage display technology. Both HN3 and HS20 antibodies inhibit Wnt signaling in liver cancer cells . 
The immunotoxins based on HN3, the antibody-drug conjugates based on hYP7, and the T-cell-engaging bispecific antibodies derived from YP7 and GC33 have been developed for treating liver cancer. The chimeric antigen receptor (CAR) T cell immunotherapies based on GC33, hYP7 and HN3 are being reported at various stages of development for treating liver cancer. In mice with xenograft or orthotopic liver tumors, CAR (hYP7) T cells can eliminate GPC3-positive cancer cells by inducing perforin- and granzyme-mediated cell death and reducing Wnt signaling in tumor cells. CAR (hYP7) T cells are being evaluated in a clinical trial at the NIH. See also Glypican References Further reading External links GeneReviews/NIH/NCBI/UW entry on Simpson-Golabi-Behmel Syndrome Immunologic tests
Glypican 3
Biology
1,145
11,143,722
https://en.wikipedia.org/wiki/Mef2
In the field of molecular biology, myocyte enhancer factor-2 (Mef2) proteins are a family of transcription factors which through control of gene expression are important regulators of cellular differentiation and consequently play a critical role in embryonic development. In adult organisms, Mef2 proteins mediate the stress response in some tissues. Mef2 proteins contain both MADS-box and Mef2 DNA-binding domains. Discovery Mef2 was originally identified as a transcription factor complex through promoter analysis of the muscle creatine kinase (mck) gene to identify nuclear factors interacting with the mck enhancer region during muscle differentiation. Three human mRNA coding sequences designated RSRF (Related to Serum Response Factor) were cloned and shown to dimerize, bind a consensus sequence similar to the one present in the MCK enhancer region, and drive transcription. RSRFs were subsequently demonstrated to encode human genes now named Mef2A, Mef2B and Mef2D. Species distribution The Mef2 gene is widely expressed in all branches of eukaryotes from yeast to humans. While Drosophila has a single Mef2 gene, vertebrates have at least four versions of the Mef2 gene (human versions are denoted as MEF2A, MEF2B, MEF2C, and MEF2D), all expressed in distinct but overlapping patterns during embryogenesis through adulthood. Sequence and structure All of the mammalian Mef2 genes share approximately 50% overall amino acid identity and about 95% similarity throughout the highly conserved N-terminal MADS-box and Mef2 domains, however their sequences diverge in their C-terminal transactivation domain (see figure to the right). The MADS-box serves as the minimal DNA-binding domain, however an adjacent 29-amino acid extension called the Mef2 domain is required for high affinity DNA-binding and dimerization. Through an interaction with the MADS-box, Mef2 transcription factors have the ability to homo- and heterodimerize, and a classic nuclear localization sequence (NLS) in the C-terminus of Mef2A, -C, and – D ensures nuclear localization of the protein. D-Mef2 and human MEF2B lack this conserved NLS but are still found in the nucleus. Function Development In Drosophila, Mef2 regulates muscle development. Mammalian Mef2 can cooperate with bHLH transcription factors to turn non-muscle cells in culture into muscle. bHLH factors can activate Mef2c expression, which then acts to maintain its own expression. Loss of Mef2c in neural crest cells results in craniofacial defects in the developing embryo and neonatal death caused by blocking of the upper airway passages. Mef2c upregulates the expression of the homeodomain transcription factors DLX5 and DLX6, two transcription factors that are necessary for craniofacial development. Stress response In adult tissues, Mef2 proteins regulate the stress-response during cardiac hypertrophy and tissue remodeling in cardiac and skeletal muscle. Cardiovascular system Mef2 is a critical regulator in heart development and cardiac gene expression. In vertebrates, there are four genes in the Mef2 transcription factor family: Mef2a, Mef2b, Mef2c, and Mef2d. Each is expressed at specific times during development. Mef2c, the first gene to be expressed in the heart, is necessary for the development of the anterior (secondary) heart field (AHF), which helps to form components of the cardiac outflow tract and most of the right ventricle. 
In addition, Mef2 genes are implicated in activating gene expression to aid in sprouting angiogenesis, the formation of new blood vessels from existing vessels. Knockout studies In mice, knockout studies of Mef2c have demonstrated the crucial role that it plays in heart development. Mice without Mef2c die during embryonic day 9.5–10 with major heart defects, including improper looping, outflow tract abnormalities, and complete lack of the right ventricle. This indicates improper differentiation of the anterior heart field. When Mef2c is knocked out specifically in the AHF, the mice die at birth with a range of outflow tract defects and severe cyanosis. Thus, Mef2 is necessary for many aspects of heart development, specifically by regulating the anterior heart field. Additional Information MEF2, Myocyte Enhancer Factor 2, is a transcription factor family with four members: MEF2A, B, C, and D. Each MEF2 gene is located on a specific chromosome. MEF2 is known to be involved in the development and the looping of the heart (Chen). MEF2 is necessary for myocyte differentiation and gene activation (Black). Both roles contribute to the heart structure, and if there is a disruption with MEF2 in embryonic development, it can lead to two phenotypic problems (Karamboulas). The type-I phenotype involves severe malformations of the heart, while the type-II phenotype looks normal but has a thin-walled myocardium, which can cause cardiac insufficiency. Another problem that can arise involves the MEF2C gene: MEF2C is known to be directly related to congenital heart disease when associated with Tdgf1 (teratocarcinoma-derived growth factor 1). If MEF2C improperly regulates Tdgf1, developmental defects arise, especially within the embryonic development of the heart (Chen). MEF2C interacts with the protein Tdgf1 through the Ca2+ signaling pathway, which is required to regulate different mechanisms. MicroRNAs, small non-coding RNAs, also play a specific role in regulating MEF2C; the expression of congenital heart disease is upregulated due to the downregulation of the microRNA miR-29C (Chen). A few other known diseases associated with the MEF2 family are liver fibrosis, cancers, and neurodegenerative diseases (Chen). References Black, Brian L., and Richard M. Cripps. “Myocyte Enhancer Factor 2 Transcription Factors in Heart Development and Disease.” Heart Development and Regeneration, 2010, pp. 673–699, doi:10.1016/b978-0-12-381332-9.00030-x. Chen, Xiao, et al. “MEF2 Signaling and Human Diseases.” Oncotarget, vol. 8, no. 67, 2017, pp. 112152–112165, doi:10.18632/oncotarget.22899. Karamboulas, C., et al. “Disruption of MEF2 Activity in Cardiomyoblasts Inhibits Cardiomyogenesis.” Journal of Cell Science, vol. 120, no. 1, 2006, pp. 4315–4318, doi:10.1242/jcs.03369. External links OrthoDB Orthology in all Eukaryotes Drosophila melanogaster genes Transcription factors
Mef2
Chemistry,Biology
1,535
67,640,100
https://en.wikipedia.org/wiki/Tetrodocain
Tetrodocain is a medical injection produced by the Korea Jangsaeng Joint Venture Company in North Korea. The injection was first claimed to have been invented by the company in 2004. According to the state-run Korean Central News Agency (KCNA), the main ingredient of the injection is tetrodotoxin, isolated from pufferfish poison, and it operates as an anaesthetic. It has been sold for international export on sites based in Russia and China. KCNA described the medicine as efficacious in treating a wide range of diseases, including cancer, tuberculosis, chronic hepatitis, pancreatitis and HIV/AIDS. These claims have been deemed to be either exaggerated or false. The North Korean government also marketed its use in drug detoxification from narcotics such as opium, cocaine and heroin. Related works North Korea released a supposed clinical research thesis about the use of tetrodocain as an anaesthetic in 2015. See also Traditional Korean medicine Kumdang-2 Neo-Viagra-Y.R. Royal Blood-Fresh References Healthcare in North Korea Traditional Korean medicine Drugs
Tetrodocain
Chemistry
229
5,658,096
https://en.wikipedia.org/wiki/Exotic%20material
Exotic materials are materials, most commonly metals and alloys, that have high strength and hardness. The term does not refer to metals that are merely rare, but to those with exceptionally strong characteristics. Exotic materials are used for high-performance tasks. Exotic materials can include plastics, superalloys, semiconductors, superconductors, and ceramics. Exotic metals and alloys Examples of metals and alloys that can be exotic: Chromium Cobalt Hastelloy Inconel Mercury (element) (aka quicksilver, hydrargyrum) Molybdenum Monel Platinum Tantalum Stainless Steel Titanium Tungsten or Wolframite Waspaloy Materials with high alloy content, known as super alloys or exotic alloys, offer enhanced performance properties including excellent strength and durability, and resistance to oxidation, corrosion and deformation at high temperatures or under extreme pressure. Because of these properties, super alloys make the best spring materials for demanding working conditions, which can be encountered across various industry sectors, including the automotive, marine and aerospace sectors as well as oil and gas extraction, thermal processing, petrochemical processing and power generation. Notes See also Exotic matter Materials
Exotic material
Physics
224
3,226,546
https://en.wikipedia.org/wiki/Ronald%20Drever
Ronald William Prest Drever (26 October 1931 – 7 March 2017) was a Scottish experimental physicist. He was a professor emeritus at the California Institute of Technology, co-founded the LIGO project, and was a co-inventor of the Pound–Drever–Hall technique for laser stabilisation, as well as the Hughes–Drever experiment. This work was instrumental in the first detection of gravitational waves in September 2015. Drever died on 7 March 2017, aged 85, seven months before his colleagues Rainer Weiss, Kip Thorne, and Barry Barish won the Nobel Prize in Physics for their work on the observation of gravitational waves. The trio of Drever, Thorne and Weiss shared several major physics prizes in 2016, so it is widely believed that Drever would have won the Nobel Prize in the place of Barry Barish had he not died before the Nobel Committee made their decision. Education Drever was educated at Glasgow Academy followed by University of Glasgow where he was awarded a bachelor's degree in 1953 followed by a PhD in 1959 for research on orbital electron capture using proportional counters. Career and research After receiving his PhD from the University of Glasgow in 1959, Drever initiated the Glasgow project to detect gravitational waves in the sixties, after which he established the University’s first dedicated gravitational wave research group in 1970. The same year Drever was recruited to form a gravitational wave program at Caltech. In 1984 Drever left Glasgow to work full-time at Caltech. Drever's contributions to the design and implementation of the LIGO interferometers were critically important to their ability to function in the extreme sensitivity realm required for detection of gravitational waves (10−23 strain). Drever's final work involved the development of magnetically levitated optical tables for seismic isolation of experimental apparatus. Honors and awards Drever was recognized by numerous awards including: Fellowship of the American Physical Society (1998) Inducted into the American Academy of Arts and Sciences (2002) Shared the Einstein Prize (2007) with Rainer Weiss The Special Breakthrough Prize in Fundamental Physics (2016) The Gruber Prize in Cosmology (2016) The Shaw Prize (2016) (together with Kip Thorne and Rainer Weiss). The Kavli Prize in Astrophysics (2016). Smithsonian, American Ingenuity Award (2016) The Harvey Prize (2016) Fellowship of the Norwegian Academy of Science and Letters Artistic inspiration Robert Crawford wrote a meditation on the life of Ronald Drever. Further reading Marcia Bartusiak, Einstein's Unfinished Symphony (Joseph Henry Press, Washington D.C., 2000) - Contains coverage of his work with gravity wave detectors, including LIGO References 1931 births 2017 deaths Scottish physicists Experimental physicists Laser researchers Fellows of the American Academy of Arts and Sciences California Institute of Technology faculty Gravitational-wave astronomy Members of the Norwegian Academy of Science and Letters Kavli Prize laureates in Astrophysics Fellows of the American Physical Society British expatriate academics in the United States Scottish expatriates in the United States People from Bishopton People educated at the Glasgow Academy 20th-century Scottish scientists Alumni of the University of Glasgow
Ronald Drever
Physics,Astronomy
634
360,507
https://en.wikipedia.org/wiki/Gimel%20function
In axiomatic set theory, the gimel function is the following function mapping cardinal numbers to cardinal numbers: $\gimel\colon\kappa\mapsto\kappa^{\mathrm{cf}(\kappa)}$, where cf denotes the cofinality function; the gimel function is used for studying the continuum function and the cardinal exponentiation function. The symbol $\gimel$ is a serif form of the Hebrew letter gimel. Values of the gimel function The gimel function has the property $\gimel(\kappa)>\kappa$ for all infinite cardinals $\kappa$ by König's theorem. For regular cardinals $\kappa$, $\gimel(\kappa)=2^{\kappa}$, and Easton's theorem says we don't know much about the values of this function. For singular $\kappa$, upper bounds for $\gimel(\kappa)$ can be found from Shelah's PCF theory. The gimel hypothesis The gimel hypothesis states that $\gimel(\kappa)=\max(2^{\mathrm{cf}(\kappa)},\kappa^{+})$. In essence, this means that $\gimel(\kappa)$ for singular $\kappa$ is the smallest value allowed by the axioms of Zermelo–Fraenkel set theory (assuming consistency). Under this hypothesis cardinal exponentiation is simplified, though not to the extent of the continuum hypothesis (which implies the gimel hypothesis). Reducing the exponentiation function to the gimel function showed that all cardinal exponentiation is determined (recursively) by the gimel function as follows. If $\kappa$ is an infinite regular cardinal (in particular any infinite successor) then $2^{\kappa}=\gimel(\kappa)$ If $\kappa$ is infinite and singular and the continuum function is eventually constant below $\kappa$ then $2^{\kappa}=2^{<\kappa}\cdot\gimel(\kappa)$ If $\kappa$ is a limit and the continuum function is not eventually constant below $\kappa$ then $2^{\kappa}=\gimel(2^{<\kappa})$ The remaining rules hold whenever $\kappa$ and $\lambda$ are both infinite: If $\kappa\le\lambda$ then $\kappa^{\lambda}=2^{\lambda}$ If $\mu^{\lambda}\ge\kappa$ for some $\mu<\kappa$ then $\kappa^{\lambda}=\mu^{\lambda}$ If $\kappa>\lambda$ and $\mu^{\lambda}<\kappa$ for all $\mu<\kappa$ and $\mathrm{cf}(\kappa)>\lambda$ then $\kappa^{\lambda}=\kappa$ If $\kappa>\lambda$ and $\mu^{\lambda}<\kappa$ for all $\mu<\kappa$ and $\mathrm{cf}(\kappa)\le\lambda$ then $\kappa^{\lambda}=\gimel(\kappa)$ See also Aleph number Beth number References Thomas Jech, Set Theory, 3rd millennium ed., 2003, Springer Monographs in Mathematics, Springer, . Cardinal numbers
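As a worked illustration of the rules above (a standard example, assuming only ZFC):

```latex
\aleph_\omega^{\aleph_0} \;=\;
\begin{cases}
2^{\aleph_0}, & \text{if } 2^{\aleph_0} \ge \aleph_\omega,\\[2pt]
\gimel(\aleph_\omega), & \text{if } 2^{\aleph_0} < \aleph_\omega.
\end{cases}
```

In the first case the rule $\kappa^{\lambda}=\mu^{\lambda}$ applies with $\mu=\aleph_0$, since $\aleph_0^{\aleph_0}=2^{\aleph_0}\ge\aleph_\omega$; in the second case the Hausdorff formula gives $\aleph_n^{\aleph_0}=\max(\aleph_n,2^{\aleph_0})<\aleph_\omega$ for every finite n, so the last rule applies with $\mathrm{cf}(\aleph_\omega)=\aleph_0$.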
Gimel function
Mathematics
339
28,809,645
https://en.wikipedia.org/wiki/NetPoint
NetPoint is a graphically-oriented project planning and scheduling software application first released for commercial use in 2009. NetPoint's headquarters are located in Chicago, Illinois. The application uses a time-scaled activity network diagram to facilitate interactive project planning and collaboration. NetPoint provides planning, scheduling, resource management, and other project controls functions. NetPoint is capable of calculating schedules using both the Critical Path Method (CPM) as well as the Graphical Path Method (GPM). Schedules created in NetPoint can be exported for use in Primavera, Microsoft Project, and other CPM-based Project management software. See also Project planning Project management Project management software Comparison of project management software Schedule (project management) Critical path method References External links Gilbane: Interactive Scheduling Mosaic: List of Scheduling Tools CPM in Construction Management: List of CPM Software Project management software Critical Path Scheduling Schedule (project management)
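Since the article refers to the Critical Path Method, a brief generic sketch of CPM's forward and backward passes may help readers unfamiliar with it. This is an illustration of the method itself, written for this article; it says nothing about NetPoint's actual implementation, data formats, or the Graphical Path Method.

```python
from collections import defaultdict

def critical_path(durations, predecessors):
    """durations: {activity: duration}; predecessors: {activity: [earlier activities]}."""
    # Forward pass: earliest start (es) and earliest finish (ef).
    es, ef = {}, {}
    def early(a):
        if a not in ef:
            es[a] = max((early(p) for p in predecessors.get(a, [])), default=0)
            ef[a] = es[a] + durations[a]
        return ef[a]
    for a in durations:
        early(a)
    project_end = max(ef.values())

    # Backward pass: latest finish (lf) and latest start (ls).
    successors = defaultdict(list)
    for a, preds in predecessors.items():
        for p in preds:
            successors[p].append(a)
    ls, lf = {}, {}
    def late(a):
        if a not in ls:
            lf[a] = min((late(s) for s in successors[a]), default=project_end)
            ls[a] = lf[a] - durations[a]
        return ls[a]
    for a in durations:
        late(a)

    # Activities with zero total float form the critical path.
    critical = [a for a in durations if es[a] == ls[a]]
    return es, ls, critical

# Example: B and C both depend on A; D depends on B and C.
durations = {"A": 3, "B": 5, "C": 2, "D": 4}
predecessors = {"B": ["A"], "C": ["A"], "D": ["B", "C"]}
print(critical_path(durations, predecessors)[2])  # ['A', 'B', 'D']
```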
NetPoint
Physics
184
26,724,003
https://en.wikipedia.org/wiki/Solid-state%20dye%20laser
A solid-state dye laser (SSDL) is a solid-state laser in which the gain medium is a laser dye-doped organic matrix such as poly(methyl methacrylate) (PMMA), rather than a liquid solution of the dye. These lasers are also referred to as solid-state organic lasers and solid-state dye-doped polymer lasers. SSDLs were introduced in 1967 by Soffer and McFarland. Organic gain media In the 1990s, new forms of improved PMMA, such as modified PMMA, with high optical quality characteristics were introduced. Gain media research for SSDL has been rather active in the 21st century, and various new dye-doped solid-state organic matrices have been discovered. Notable among these new gain media are organic-inorganic dye-doped polymer-nanoparticle composites. An additional form of organic-inorganic dye-doped solid-state laser gain media are the ORMOSILs. High performance solid-state dye laser oscillators This improved gain medium was central to the demonstration of the first tunable narrow-linewidth solid-state dye laser oscillators, by Duarte, which were later optimized to deliver pulse emission in the kW regime in nearly diffraction limited beams with single-longitudinal-mode laser linewidths of ≈ 350 MHz (or ≈ 0.0004 nm, at a laser wavelength of 590 nm). These tunable laser oscillators use multiple-prism grating architectures yielding very high intracavity dispersions that can be nicely quantified using the multiple-prism grating equations. Distributed feedback and waveguide solid-state dye lasers Additional developments in solid-state dye lasers were demonstrated with the introduction of distributed feedback laser designs in 1999 and distributed feedback waveguides in 2002. See also Laser linewidth Organic laser Organic photonics Tunable laser Multiple-prism grating laser oscillator References Solid-state lasers
Solid-state dye laser
Chemistry
409
4,068,867
https://en.wikipedia.org/wiki/Projective%20cone
A projective cone (or just cone) in projective geometry is the union of all lines that intersect a projective subspace R (the apex of the cone) and an arbitrary subset A (the basis) of some other subspace S, disjoint from R. In the special case that R is a single point, S is a plane, and A is a conic section on S, the projective cone is a conical surface; hence the name. Definition Let X be a projective space over some field K, and R, S be disjoint subspaces of X. Let A be an arbitrary subset of S. Then we define RA, the cone with top R and basis A, as follows: When A is empty, RA = A. When A is not empty, RA consists of all those points on a line connecting a point on R and a point on A. Properties As R and S are disjoint, one may deduce from linear algebra and the definition of a projective space that every point on RA not in R or A is on exactly one line connecting a point in R and a point in A. (RA) ∩ S = A When K is the finite field of order q, then |RA| = |R| + q^(r+1)·|A|, where r = dim(R). See also Cone (geometry) Cone (algebraic geometry) Cone (topology) Cone (linear algebra) Conic section Ruled surface Hyperboloid Projective geometry
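For instance (a worked check of the counting formula): take R to be a single point, so r = 0, and take A to be a conic with q + 1 points in a plane of PG(3, q) disjoint from R. The formula then gives

```latex
|RA| = |R| + q^{\,r+1}\,|A| = 1 + q(q+1) = q^{2} + q + 1,
```

which is the familiar number of points of a quadric cone in PG(3, q).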
Projective cone
Mathematics
292
1,179,005
https://en.wikipedia.org/wiki/Cerulean
The color cerulean (American English) or caerulean (British English, Commonwealth English), is a variety of the hue of blue that may range from a light azure blue to a more intense sky blue, and may be mixed as well with the hue of green. The first recorded use of cerulean as a color name in English was in 1590. The word is derived from the Latin word caeruleus (), "dark blue, blue, or blue-green", which in turn probably derives from caerulum, diminutive of caelum, "heaven, sky". "Cerulean blue" is the name of a blue-green pigment consisting of cobalt stannate (). The pigment was first synthesized in the late eighteenth century by Albrecht Höpfner, a Swiss chemist, and it was known as Höpfner blue during the first half of the nineteenth century. Art suppliers began referring to cobalt stannate as cerulean in the second half of the nineteenth century. It was not widely used by artists until the 1870s when it became available in oil paint. Pigment characteristics The primary chemical constituent of the pigment is cobalt(II) stannate (). The pigment is a greenish-blue color. In watercolor, it has a slight chalkiness. When used in oil paint, it loses this quality. Today, cobalt chromate is sometimes marketed under the cerulean blue name but is darker and greener than the cobalt stannate version. The chromate makes excellent turquoise colors and is identified by Rex Art and some other manufacturers as "cobalt turquoise". Cerulean is inert with good light resistance, and it exhibits a high degree of stability in both watercolor and acrylic paint. History Cobalt stannate pigment was first synthesized in 1789 by the Swiss chemist Albrecht Höpfner by heating roasted cobalt and tin oxides together. Subsequently, there was limited German production under the name of Cölinblau. It was generally known as Höpfner blue from the late eighteenth century until the middle of the nineteenth century. In the late 1850s, art suppliers begin referring to the pigment as "ceruleum" blue. The London Times of 28 December 1859 had an advertisement for "Caeruleum, a new permanent color prepared for the use of artists." Ure's Dictionary of Arts from 1875 describes the pigment as "Caeruleum . . . consisting of stannate of protoxide of cobalt, mixed with stannic acid and sulphate of lime." Cerulean was also referred to as coeurleum, cerulium, bleu céleste (celestial blue). Other nineteenth century English pigment names included "ceruleum blue" and "corruleum blue". By 1935, Max Doerner referred to the pigment as cerulean, as do most modern sources, though ceruleum is still used. Some sources claim that cerulean blue was first marketed in the United Kingdom by colourman George Rowney, as "coeruleum" in the early 1860s. However, the British firm of Roberson was buying "Blue No. 58 (Cerulium)" from a German firm of Frauenknecht and Stotz prior to Rowney. Cerulean blue was only available as a watercolor in the 1860s and was not widely adopted until the 1870s when it was used in oil paint. It was popular with artists including Claude Monet, Paul Signac, and Picasso. Van Gogh created his own approximation of cerulean blue using a mixture of cobalt blue, cadmium yellow, and white. Notable occurrences In 1877, Monet had added the pigment to his palette, using it in a painting from his series La Gare Saint-Lazare (now in the National Gallery, London). The blues in the painting include cobalt and cerulean blue, with some areas of ultramarine. 
Laboratory analysis conducted by the National Gallery identified a relatively pure example of cerulean blue pigment in the shadows of the station's canopy. Researchers at the National Gallery suggested that "cerulean probably offered a pigment of sufficiently greenish tone to displace Prussian blue, which may not have been popular by this time." Berthe Morisot painted the blue coat of the woman in her Summer's Day, 1879, in cerulean blue in conjunction with artificial ultramarine and cobalt blue. When the United Nations was formed at the end of World War II, it adopted cerulean blue for its emblem. The designer Oliver Lundquist stated that he chose the color because it was "the opposite of red, the color of war." In the Catholic Church, cerulean vestments are permitted on certain Marian feast days, primarily the Immaculate Conception, in dioceses currently or formerly under the Spanish Crown. Other color variations Pale cerulean Pantone, in a press release, declared a pale hue of cerulean, which it calls cerulean, the "color of the millennium". The source of this color is the "Pantone Textile Paper eXtended (TPX)" color list, color #15-4020 TPX—Cerulean. Cerulean (Crayola) This bright tone of cerulean is the color called cerulean by Crayola crayons. Cerulean frost Cerulean frost is one of the colors in the special set of metallic colored Crayola crayons called Silver Swirls, the colors of which were formulated by Crayola in 1990. Curious Blue Curious Blue is one of the brighter-toned colors of cerulean. In nature Cerulean cuckooshrike Cerulean kingfisher Cerulean flycatcher Cerulean warbler Cerulean-capped manakin See also The Devil Wears Prada (film) § Cerulean sweater speech Pusher (The X-Files episode) § "Cerulean blue is a gentle breeze" List of colors Pigment Blue pigments Explanatory notes References External links A page on Cerulean Blue Cerulean blue at ColourLex Quaternary colors Pigments Inorganic pigments Shades of azure Shades of blue Shades of cyan Bird colours Cobalt compounds
Cerulean
Chemistry
1,289
15,062,488
https://en.wikipedia.org/wiki/IZMIRAN
The Pushkov Institute of Terrestrial Magnetism, Ionosphere and Radiowave Propagation of the Russian Academy of Sciences (IZMIRAN) is a scientific institution of the Russian Academy of Sciences. The institute was founded in 1939 by Nikolay Pushkov. The institute has run several satellite programmes: CORONAS - Complex ORbital Near Earth Solar Activities (experiment ended) COMPASS (Kompas) - Complex Orbital Magneto-Plasma Autonomous Small Satellite (see 2001 in spaceflight) Interheliozond Intercosmos-19 (Cosmos-1809) - research into the structure of the Earth's ionosphere and the electromagnetic processes within it (experiment ended) Prognoz - a series of magnetometer satellites APEX - Active Plasma Experiments References External links Official IZMIRAN website, Russian/English mixed content Solar-Terrestrial Physics Division of IZMIRAN Complex Orbital Magneto-Plasma Autonomous Small Satellite IZMIRAN Geophysical Situation Forecasting Center 1939 establishments in the Soviet Union Earth science research institutes Ionosphere Institutes of the Russian Academy of Sciences Research institutes in the Soviet Union Troitsky Administrative Okrug Troitsk Settlement Research institutes established in 1939
IZMIRAN
Physics,Astronomy
230
13,920,709
https://en.wikipedia.org/wiki/Seifallah%20Randjbar-Daemi
Seifallah Randjbar-Daemi (born 1950) is an Iranian theoretical physicist. He is currently an emeritus scientist at the International Centre for Theoretical Physics. Education and Academic career Seifallah Randjbar-Daemi received his PhD in 1980 from Imperial College London, University of London, UK. Randjbar-Daemi's contributions are in the areas of theoretical high-energy physics, quantum field theory, superstring theory, supersymmetry and supergravity theories in all dimensions, and cosmology. He collaborated closely with Abdus Salam, at both the scientific and the humanitarian level, from his student days at Imperial College onwards. He joined the International Centre for Theoretical Physics (ICTP) in 1988 as a research physicist and coordinator of the High Energy Section. Previously he had been at the Department of Theoretical Physics, University of Zurich. In 1994, he was appointed head of the High Energy Group at ICTP. In August 2005, he was promoted to assistant director of ICTP. From April 2011 he served as acting deputy director until his retirement on 31 December 2015. He initiated several programmes at ICTP, for instance the Diploma Programme and the special basic physics programme for Sub-Saharan African students. He contributed to ICTP in many ways, at times beyond his formal professional responsibilities. When ICTP was on the verge of a shutdown due to a major financial crisis in 1991, his interaction with the scientific community of Iran prompted the minister of science and higher education to provide ICTP with a loan, which ensured the continuation of ICTP's operation. Because of his outstanding contributions in promoting science and technology in developing countries, he received the Spirit of Abdus Salam Award in 2016. See also List of theoretical physicists References External links ICTP web page Seifallah Randjbar-Daemi's papers in SPIRES database Spirit of Abdus Salam Award 2016 1950 births Living people 21st-century Iranian physicists Alumni of Imperial College London Theoretical physicists Scientists from Tabriz
Seifallah Randjbar-Daemi
Physics
416
23,568,909
https://en.wikipedia.org/wiki/Superagonist
In the field of pharmacology, a superagonist is a type of agonist that is capable of producing a maximal response greater than the endogenous agonist for the target receptor, and thus has an efficacy of more than 100%. For example, goserelin is a superagonist of the gonadotropin-releasing hormone receptor. See also Agonist References Receptor agonists
Superagonist
Chemistry
85
30,876,071
https://en.wikipedia.org/wiki/Electric%20dipole%20moment
The electric dipole moment is a measure of the separation of positive and negative electrical charges within a system: that is, a measure of the system's overall polarity. The SI unit for electric dipole moment is the coulomb-metre (C⋅m). The debye (D) is another unit of measurement used in atomic physics and chemistry. Theoretically, an electric dipole is defined by the first-order term of the multipole expansion; it consists of two equal and opposite charges that are infinitesimally close together, although real dipoles have separated charge. Elementary definition Often in physics, the dimensions of an object can be ignored so it can be treated as a pointlike object, i.e. a point particle. Point particles with electric charge are referred to as point charges. Two point charges, one with charge and the other one with charge separated by a distance , constitute an electric dipole (a simple case of an electric multipole). For this case, the electric dipole moment has a magnitude and is directed from the negative charge to the positive one. A stronger mathematical definition is to use vector algebra, since a quantity with magnitude and direction, like the dipole moment of two point charges, can be expressed in vector form where is the displacement vector pointing from the negative charge to the positive charge. The electric dipole moment vector also points from the negative charge to the positive charge. With this definition the dipole direction tends to align itself with an external electric field (and note that the electric flux lines produced by the charges of the dipole itself, which point from positive charge to negative charge, then tend to oppose the flux lines of the external field). Note that this sign convention is used in physics, while the opposite sign convention for the dipole, from the positive charge to the negative charge, is used in chemistry. An idealization of this two-charge system is the electrical point dipole consisting of two (infinite) charges only infinitesimally separated, but with a finite . This quantity is used in the definition of polarization density. Energy and torque An object with an electric dipole moment p is subject to a torque τ when placed in an external electric field E. The torque tends to align the dipole with the field. A dipole aligned parallel to an electric field has lower potential energy than a dipole making some non-zero angle with it. For a spatially uniform electric field across the small region occupied by the dipole, the energy U and the torque are given by The scalar dot "" product and the negative sign shows the potential energy minimises when the dipole is parallel with the field, maximises when it is antiparallel, and is zero when it is perpendicular. The symbol "" refers to the vector cross product. The E-field vector and the dipole vector define a plane, and the torque is directed normal to that plane with the direction given by the right-hand rule. A dipole in such a uniform field may twist and oscillate, but receives no overall net force with no linear acceleration of the dipole. The dipole twists to align with the external field. However, in a non-uniform electric field a dipole may indeed receive a net force since the force on one end of the dipole no longer balances that on the other end. It can be shown that this net force is generally parallel to the dipole moment. 
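The inline expressions in this passage did not survive extraction. As a rough numerical sketch of the elementary relations just described, assuming only the standard physics conventions (the moment of a charge pair is p = q d with d pointing from the negative to the positive charge, the energy in a uniform field is U = −p·E and the torque is τ = p × E; all numerical values are illustrative, not taken from the article):

import numpy as np

q = 1.6e-19                         # charge magnitude in coulombs (illustrative value)
d = np.array([0.0, 0.0, 1.0e-10])   # displacement from -q to +q, metres
p = q * d                           # electric dipole moment vector, C*m

E = np.array([0.0, 1.0e5, 1.0e5])   # uniform external field, V/m (illustrative)

U = -np.dot(p, E)                   # potential energy, minimal when p is parallel to E
tau = np.cross(p, E)                # torque tending to align p with E

print("p   =", p, "C*m")
print("U   =", U, "J")
print("tau =", tau, "N*m")

The sign of U confirms the statement above: U is lowest when p and E are parallel, zero when they are perpendicular, and largest when they are antiparallel.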
Expression (general case) More generally, for a continuous distribution of charge confined to a volume V, the corresponding expression for the dipole moment is: where r locates the point of observation and d3r′ denotes an elementary volume in V. For an array of point charges, the charge density becomes a sum of Dirac delta functions: where each ri is a vector from some reference point to the charge qi. Substitution into the above integration formula provides: This expression is equivalent to the previous expression in the case of charge neutrality and . For two opposite charges, denoting the location of the positive charge of the pair as r+ and the location of the negative charge as r−: showing that the dipole moment vector is directed from the negative charge to the positive charge because the position vector of a point is directed outward from the origin to that point. The dipole moment is particularly useful in the context of an overall neutral system of charges, such as a pair of opposite charges or a neutral conductor in a uniform electric field. For such a system, visualized as an array of paired opposite charges, the relation for electric dipole moment is: where r is the point of observation and di = ri − ri, ri being the position of the negative charge in the dipole i, and ri the position of the positive charge. This is the vector sum of the individual dipole moments of the neutral charge pairs. (Because of overall charge neutrality, the dipole moment is independent of the observer's position r.) Thus, the value of p is independent of the choice of reference point, provided the overall charge of the system is zero. When discussing the dipole moment of a non-neutral system, such as the dipole moment of the proton, a dependence on the choice of reference point arises. In such cases it is conventional to choose the reference point to be the center of mass of the system, not some arbitrary origin. This choice is not only a matter of convention: the notion of dipole moment is essentially derived from the mechanical notion of torque, and as in mechanics, it is computationally and theoretically useful to choose the center of mass as the observation point. For a charged molecule the center of charge should be the reference point instead of the center of mass. For neutral systems the reference point is not important, and the dipole moment is an intrinsic property of the system. Potential and field of an electric dipole An ideal dipole consists of two opposite charges with infinitesimal separation. We compute the potential and field of such an ideal dipole starting with two opposite charges at separation , and taking the limit as . Two closely spaced opposite charges ±q have a potential of the form: corresponding to the charge density by Coulomb's law, where the charge separation is: Let R denote the position vector relative to the midpoint , and the corresponding unit vector: Taylor expansion in (see multipole expansion and quadrupole) expresses this potential as a series. where higher order terms in the series are vanishing at large distances, R, compared to d. Here, the electric dipole moment p is, as above: The result for the dipole potential also can be expressed as: which relates the dipole potential to that of a point charge. A key point is that the potential of the dipole falls off faster with distance R than that of the point charge. 
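The dipole-moment expressions referred to in this passage are conventionally written as follows; this is a hedged reconstruction in the notation of the surrounding text, not a quotation of the lost formulas:

\[
\mathbf{p}(\mathbf{r}) = \int_V \rho(\mathbf{r}')\,(\mathbf{r}'-\mathbf{r})\,d^3\mathbf{r}', \qquad
\mathbf{p}(\mathbf{r}) = \sum_{i=1}^{N} q_i\,(\mathbf{r}_i-\mathbf{r}),
\]
and for a neutral pair of charges \(\pm q\) located at \(\mathbf{r}_\pm\),
\[
\mathbf{p} = q\,(\mathbf{r}_+-\mathbf{r}_-) = q\,\mathbf{d},
\]
which is independent of the observation point \(\mathbf{r}\) because \(\sum_i q_i = 0\).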
The electric field of the dipole is the negative gradient of the potential, leading to: Thus, although two closely spaced opposite charges are not quite an ideal electric dipole (because their potential at short distances is not that of a dipole), at distances much larger than their separation, their dipole moment p appears directly in their potential and field. As the two charges are brought closer together (d is made smaller), the dipole term in the multipole expansion based on the ratio d/R becomes the only significant term at ever closer distances R, and in the limit of infinitesimal separation the dipole term in this expansion is all that matters. As d is made infinitesimal, however, the dipole charge must be made to increase to hold p constant. This limiting process results in a "point dipole". Dipole moment density and polarization density The dipole moment of an array of charges, determines the degree of polarity of the array, but for a neutral array it is simply a vector property of the array with no information about the array's absolute location. The dipole moment density of the array p(r) contains both the location of the array and its dipole moment. When it comes time to calculate the electric field in some region containing the array, Maxwell's equations are solved, and the information about the charge array is contained in the polarization density P(r) of Maxwell's equations. Depending upon how fine-grained an assessment of the electric field is required, more or less information about the charge array will have to be expressed by P(r). As explained below, sometimes it is sufficiently accurate to take P(r) = p(r). Sometimes a more detailed description is needed (for example, supplementing the dipole moment density with an additional quadrupole density) and sometimes even more elaborate versions of P(r) are necessary. It now is explored just in what way the polarization density P(r) that enters Maxwell's equations is related to the dipole moment p of an overall neutral array of charges, and also to the dipole moment density p(r) (which describes not only the dipole moment, but also the array location). Only static situations are considered in what follows, so P(r) has no time dependence, and there is no displacement current. First is some discussion of the polarization density P(r). That discussion is followed with several particular examples. A formulation of Maxwell's equations based upon division of charges and currents into "free" and "bound" charges and currents leads to introduction of the D- and P-fields: where P is called the polarization density. In this formulation, the divergence of this equation yields: and as the divergence term in E is the total charge, and ρf is "free charge", we are left with the relation: with ρb as the bound charge, by which is meant the difference between the total and the free charge densities. As an aside, in the absence of magnetic effects, Maxwell's equations specify that which implies Applying Helmholtz decomposition: for some scalar potential φ, and: Suppose the charges are divided into free and bound, and the potential is divided into Satisfaction of the boundary conditions upon φ may be divided arbitrarily between φf and φb because only the sum φ must satisfy these conditions. It follows that P is simply proportional to the electric field due to the charges selected as bound, with boundary conditions that prove convenient. In particular, when no free charge is present, one possible choice is . 
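The potential and field just discussed are, for an ideal dipole p placed at the origin and evaluated away from the dipole itself, usually written as (standard SI-unit results, reconstructed here because the original formulas were stripped):

\[
\varphi(\mathbf{R}) = \frac{1}{4\pi\varepsilon_0}\,\frac{\mathbf{p}\cdot\hat{\mathbf{R}}}{R^{2}}, \qquad
\mathbf{E}(\mathbf{R}) = -\nabla\varphi = \frac{1}{4\pi\varepsilon_0}\,\frac{3(\mathbf{p}\cdot\hat{\mathbf{R}})\hat{\mathbf{R}}-\mathbf{p}}{R^{3}},
\]
so the dipole potential falls off as \(1/R^{2}\) and its field as \(1/R^{3}\), one power of \(R\) faster than the corresponding point-charge expressions.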
Next is discussed how several different dipole moment descriptions of a medium relate to the polarization entering Maxwell's equations. Medium with charge and dipole densities As described next, a model for polarization moment density p(r) results in a polarization restricted to the same model. For a smoothly varying dipole moment distribution p(r), the corresponding bound charge density is simply as we will establish shortly via integration by parts. However, if p(r) exhibits an abrupt step in dipole moment at a boundary between two regions, ∇·p(r) results in a surface charge component of bound charge. This surface charge can be treated through a surface integral, or by using discontinuity conditions at the boundary, as illustrated in the various examples below. As a first example relating dipole moment to polarization, consider a medium made up of a continuous charge density ρ(r) and a continuous dipole moment distribution p(r). The potential at a position r is: where ρ(r) is the unpaired charge density, and p(r) is the dipole moment density. Using an identity: the polarization integral can be transformed: where the vector identity was used in the last steps. The first term can be transformed to an integral over the surface bounding the volume of integration, and contributes a surface charge density, discussed later. Putting this result back into the potential, and ignoring the surface charge for now: where the volume integration extends only up to the bounding surface, and does not include this surface. The potential is determined by the total charge, which the above shows consists of: showing that: In short, the dipole moment density p(r) plays the role of the polarization density P for this medium. Notice, p(r) has a non-zero divergence equal to the bound charge density (as modeled in this approximation). It may be noted that this approach can be extended to include all the multipoles: dipole, quadrupole, etc. Using the relation: the polarization density is found to be: where the added terms are meant to indicate contributions from higher multipoles. Evidently, inclusion of higher multipoles signifies that the polarization density P no longer is determined by a dipole moment density p alone. For example, in considering scattering from a charge array, different multipoles scatter an electromagnetic wave differently and independently, requiring a representation of the charges that goes beyond the dipole approximation. Surface charge Above, discussion was deferred for the first term in the expression for the potential due to the dipoles. Integrating the divergence results in a surface charge. The figure at the right provides an intuitive idea of why a surface charge arises. The figure shows a uniform array of identical dipoles between two surfaces. Internally, the heads and tails of dipoles are adjacent and cancel. At the bounding surfaces, however, no cancellation occurs. Instead, on one surface the dipole heads create a positive surface charge, while at the opposite surface the dipole tails create a negative surface charge. These two opposite surface charges create a net electric field in a direction opposite to the direction of the dipoles. This idea is given mathematical form using the potential expression above. Ignoring the free charge, the potential is: Using the divergence theorem, the divergence term transforms into the surface integral: with dA0 an element of surface area of the volume. 
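In the notation of this section, the outcome of the integration by parts sketched above is commonly summarized by the following pair of relations (again a hedged reconstruction of standard results rather than the article's own lost formulas):

\[
\rho_b(\mathbf{r}) = -\nabla\cdot\mathbf{p}(\mathbf{r}), \qquad
\varphi(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0}\int_V \frac{\rho(\mathbf{r}') - \nabla'\cdot\mathbf{p}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,d^3\mathbf{r}' \;+\; \text{(surface term)},
\]
which is why identifying \(\mathbf{P}(\mathbf{r})\) with \(\mathbf{p}(\mathbf{r})\) reproduces Maxwell's relation \(\nabla\cdot\mathbf{P} = -\rho_b\) within this approximation.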
In the event that p(r) is a constant, only the surface term survives: with dA0 an elementary area of the surface bounding the charges. In words, the potential due to a constant p inside the surface is equivalent to that of a surface charge which is positive for surface elements with a component in the direction of p and negative for surface elements pointed oppositely. (Usually the direction of a surface element is taken to be that of the outward normal to the surface at the location of the element.) If the bounding surface is a sphere, and the point of observation is at the center of this sphere, the integration over the surface of the sphere is zero: the positive and negative surface charge contributions to the potential cancel. If the point of observation is off-center, however, a net potential can result (depending upon the situation) because the positive and negative charges are at different distances from the point of observation. The field due to the surface charge is: which, at the center of a spherical bounding surface is not zero (the fields of negative and positive charges on opposite sides of the center add because both fields point the same way) but is instead: If we suppose the polarization of the dipoles was induced by an external field, the polarization field opposes the applied field and sometimes is called a depolarization field. In the case when the polarization is outside a spherical cavity, the field in the cavity due to the surrounding dipoles is in the same direction as the polarization. In particular, if the electric susceptibility is introduced through the approximation: where , in this case and in the following, represent the external field which induces the polarization. Then: Whenever χ(r) is used to model a step discontinuity at the boundary between two regions, the step produces a surface charge layer. For example, integrating along a normal to the bounding surface from a point just interior to one surface to another point just exterior: where An, Ωn indicate the area and volume of an elementary region straddling the boundary between the regions, and a unit normal to the surface. The right side vanishes as the volume shrinks, inasmuch as ρb is finite, indicating a discontinuity in E, and therefore a surface charge. That is, where the modeled medium includes a step in permittivity, the polarization density corresponding to the dipole moment density necessarily includes the contribution of a surface charge. A physically more realistic modeling of p(r) would have the dipole moment density drop off rapidly, but smoothly to zero at the boundary of the confining region, rather than making a sudden step to zero density. Then the surface charge will not concentrate in an infinitely thin surface, but instead, being the divergence of a smoothly varying dipole moment density, will distribute itself throughout a thin, but finite transition layer. Dielectric sphere in uniform external electric field The above general remarks about surface charge are made more concrete by considering the example of a dielectric sphere in a uniform electric field. The sphere is found to adopt a surface charge related to the dipole moment of its interior. A uniform external electric field is supposed to point in the z-direction, and spherical polar coordinates are introduced so the potential created by this field is: The sphere is assumed to be described by a dielectric constant κ, that is, and inside the sphere the potential satisfies Laplace's equation. 
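For the uniform-dipole-density case described above, the surface density and the depolarization field take the familiar forms (standard textbook results, stated explicitly here because the original expressions were lost in extraction):

\[
\sigma_b = \mathbf{p}\cdot\hat{\mathbf{n}}, \qquad
\mathbf{E}_{\text{inside}} = -\frac{\mathbf{p}}{3\varepsilon_0} \quad\text{(uniformly polarized sphere)},
\]
where \(\hat{\mathbf{n}}\) is the outward normal. The minus sign expresses the depolarizing character of the field, which is in fact uniform throughout the interior of the sphere and not only at its centre; for polarization surrounding a spherical cavity the corresponding cavity field has the opposite sign, in agreement with the text.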
Skipping a few details, the solution inside the sphere is: while outside the sphere: At large distances, φ> → φ∞ so B = −E∞ . Continuity of potential and of the radial component of displacement D = κε0E determine the other two constants. Supposing the radius of the sphere is R, As a consequence, the potential is: which is the potential due to applied field and, in addition, a dipole in the direction of the applied field (the z-direction) of dipole moment: or, per unit volume: The factor is called the Clausius–Mossotti factor and shows that the induced polarization flips sign if . Of course, this cannot happen in this example, but in an example with two different dielectrics κ is replaced by the ratio of the inner to outer region dielectric constants, which can be greater or smaller than one. The potential inside the sphere is: leading to the field inside the sphere: showing the depolarizing effect of the dipole. Notice that the field inside the sphere is uniform and parallel to the applied field. The dipole moment is uniform throughout the interior of the sphere. The surface charge density on the sphere is the difference between the radial field components: This linear dielectric example shows that the dielectric constant treatment is equivalent to the uniform dipole moment model and leads to zero charge everywhere except for the surface charge at the boundary of the sphere. General media If observation is confined to regions sufficiently remote from a system of charges, a multipole expansion of the exact polarization density can be made. By truncating this expansion (for example, retaining only the dipole terms, or only the dipole and quadrupole terms, or etc.), the results of the previous section are regained. In particular, truncating the expansion at the dipole term, the result is indistinguishable from the polarization density generated by a uniform dipole moment confined to the charge region. To the accuracy of this dipole approximation, as shown in the previous section, the dipole moment density p(r) (which includes not only p but the location of p) serves as P(r). At locations inside the charge array, to connect an array of paired charges to an approximation involving only a dipole moment density p(r) requires additional considerations. The simplest approximation is to replace the charge array with a model of ideal (infinitesimally spaced) dipoles. In particular, as in the example above that uses a constant dipole moment density confined to a finite region, a surface charge and depolarization field results. A more general version of this model (which allows the polarization to vary with position) is the customary approach using electric susceptibility or electrical permittivity. A more complex model of the point charge array introduces an effective medium by averaging the microscopic charges; for example, the averaging can arrange that only dipole fields play a role. A related approach is to divide the charges into those nearby the point of observation, and those far enough away to allow a multipole expansion. The nearby charges then give rise to local field effects. In a common model of this type, the distant charges are treated as a homogeneous medium using a dielectric constant, and the nearby charges are treated only in a dipole approximation. The approximation of a medium or an array of charges by only dipoles and their associated dipole moment density is sometimes called the point dipole approximation, the discrete dipole approximation, or simply the dipole approximation. 
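As a quick numerical check of the dielectric-sphere example worked through earlier in this section: the sketch below uses the standard textbook results E_in = 3E∞/(κ+2), p = 4πε0R³E∞(κ−1)/(κ+2) and σ(θ) = 3ε0[(κ−1)/(κ+2)]E∞cosθ; the particular values of κ, R and E∞ are illustrative assumptions, not data from the article.

import numpy as np

eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
kappa = 4.0                # relative dielectric constant of the sphere (illustrative)
R = 0.01                   # sphere radius, m
E_inf = 1.0e4              # applied uniform field along z, V/m

cm = (kappa - 1.0) / (kappa + 2.0)             # Clausius-Mossotti factor
p_z = 4.0 * np.pi * eps0 * R**3 * cm * E_inf   # induced dipole moment, C*m
E_in = 3.0 * E_inf / (kappa + 2.0)             # uniform field inside the sphere, V/m
P_z = 3.0 * eps0 * cm * E_inf                  # uniform polarization inside, C/m^2

# bound surface-charge density sigma(theta) = P_z * cos(theta)
theta = np.linspace(0.0, np.pi, 5)
sigma = P_z * np.cos(theta)

print("Clausius-Mossotti factor:", cm)
print("induced dipole moment p_z [C m]:", p_z)
print("interior field E_in [V/m]:", E_in)
print("surface charge at theta = 0..pi [C/m^2]:", sigma)

Note that E_in is smaller than E_inf for kappa > 1, which is the depolarizing effect described above, and that the surface charge changes sign between the "upstream" and "downstream" poles of the sphere.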
Electric dipole moments of fundamental particles Not to be confused with the magnetic dipole moments of particles, much experimental work is continuing on measuring the electric dipole moments (EDM; or anomalous electric dipole moment) of fundamental and composite particles, namely those of the electron and neutron, respectively. As EDMs violate both the parity (P) and time-reversal (T) symmetries, their values yield a mostly model-independent measure of CP-violation in nature (assuming CPT symmetry is valid). Therefore, values for these EDMs place strong constraints upon the scale of CP-violation that extensions to the standard model of particle physics may allow. Current generations of experiments are designed to be sensitive to the supersymmetry range of EDMs, providing complementary experiments to those done at the LHC. Indeed, many theories are inconsistent with the current limits and have effectively been ruled out, and established theory permits a much larger value than these limits, leading to the strong CP problem and prompting searches for new particles such as the axion. We know at least in the Yukawa sector from neutral kaon oscillations that CP is broken. Experiments have been performed to measure the electric dipole moment of various particles like the electron and the neutron. Many models beyond the standard model with additional CP-violating terms generically predict a nonzero electric dipole moment and are hence sensitive to such new physics. Instanton corrections from a nonzero θ term in quantum chromodynamics predict a nonzero electric dipole moment for the neutron and proton, which have not been observed in experiments (where the best bounds come from analysing neutrons). This is the strong CP problem and is a prediction of chiral perturbation theory. Dipole moments of molecules Dipole moments in molecules are responsible for the behavior of a substance in the presence of external electric fields. The dipoles tend to be aligned to the external field which can be constant or time-dependent. This effect forms the basis of a modern experimental technique called dielectric spectroscopy. Dipole moments can be found in common molecules such as water and also in biomolecules such as proteins. By means of the total dipole moment of some material one can compute the dielectric constant which is related to the more intuitive concept of conductivity. If is the total dipole moment of the sample, then the dielectric constant is given by where k is a constant and is the time correlation function of the total dipole moment. In general the total dipole moment have contributions coming from translations and rotations of the molecules in the sample, Therefore, the dielectric constant (and the conductivity) has contributions from both terms. This approach can be generalized to compute the frequency dependent dielectric function. It is possible to calculate dipole moments from electronic structure theory, either as a response to constant electric fields or from the density matrix. Such values however are not directly comparable to experiment due to the potential presence of nuclear quantum effects, which can be substantial for even simple systems like the ammonia molecule. Coupled cluster theory (especially CCSD(T)) can give very accurate dipole moments, although it is possible to get reasonable estimates (within about 5%) from density functional theory, especially if hybrid or double hybrid functionals are employed. 
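A minimal sketch of the point-charge picture of a molecular dipole (this is not the electronic-structure or coupled-cluster calculation described above): the geometry and partial charges below are rough, illustrative values for a water-like molecule, and the conversion assumes 1 D ≈ 3.33564×10⁻³⁰ C·m.

import numpy as np

# rough water-like geometry (angstroms) and partial charges (units of e) -- illustrative only
coords = np.array([
    [ 0.0000, 0.0000, 0.0000],   # O
    [ 0.9572, 0.0000, 0.0000],   # H
    [-0.2400, 0.9266, 0.0000],   # H
]) * 1.0e-10                     # convert angstrom -> metres

charges = np.array([-0.82, 0.41, 0.41]) * 1.602176634e-19   # SPC-like partial charges, coulombs

p = (charges[:, None] * coords).sum(axis=0)   # dipole moment vector, C*m (neutral molecule)
p_debye = np.linalg.norm(p) / 3.33564e-30

print("dipole moment vector:", p, "C*m")
print("magnitude: %.2f D" % p_debye)

With these assumed charges the magnitude comes out near 2.3 D, of the same order as commonly quoted values for water, but the point of the sketch is the bookkeeping (sum of q_i r_i for a neutral system), not the specific number.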
The dipole moment of a molecule can also be calculated based on the molecular structure using the concept of group contribution methods. See also Anomalous magnetic dipole moment Bond dipole moment Neutron electric dipole moment Electron electric dipole moment Toroidal dipole moment Dynamic toroidal dipole Multipole expansion Multipole moments Solid harmonics Axial multipole moments Cylindrical multipole moments Spherical multipole moments Laplace expansion Legendre polynomials Notes References Further reading External links Electric Dipole Moment – from Eric Weisstein's World of Physics Electrostatic Dipole Multiphysics Model Electric dipole moment Electromagnetic quantities
Electric dipole moment
Physics,Mathematics
5,067
4,899,414
https://en.wikipedia.org/wiki/Quasi-bialgebra
In mathematics, quasi-bialgebras are a generalization of bialgebras: they were first defined by the Ukrainian mathematician Vladimir Drinfeld in 1990. A quasi-bialgebra differs from a bialgebra by having coassociativity replaced by an invertible element which controls the non-coassociativity. One of their key properties is that the corresponding category of modules forms a tensor category. Definition A quasi-bialgebra is an algebra over a field equipped with morphisms of algebras along with invertible elements , and such that the following identities hold: Where and are called the comultiplication and counit, and are called the right and left unit constraints (resp.), and is sometimes called the Drinfeld associator. This definition is constructed so that the category is a tensor category under the usual vector space tensor product, and in fact this can be taken as the definition instead of the list of above identities. Since many of the quasi-bialgebras that appear "in nature" have trivial unit constraints, ie. the definition may sometimes be given with this assumed. Note that a bialgebra is just a quasi-bialgebra with trivial unit and associativity constraints: and . Braided quasi-bialgebras A braided quasi-bialgebra (also called a quasi-triangular quasi-bialgebra) is a quasi-bialgebra whose corresponding tensor category is braided. Equivalently, by analogy with braided bialgebras, we can construct a notion of a universal R-matrix which controls the non-cocommutativity of a quasi-bialgebra. The definition is the same as in the braided bialgebra case except for additional complications in the formulas caused by adding in the associator. Proposition: A quasi-bialgebra is braided if it has a universal R-matrix, ie an invertible element such that the following 3 identities hold: Where, for every , is the monomial with in the th spot, where any omitted numbers correspond to the identity in that spot. Finally we extend this by linearity to all of . Again, similar to the braided bialgebra case, this universal R-matrix satisfies (a non-associative version of) the Yang–Baxter equation: Twisting Given a quasi-bialgebra, further quasi-bialgebras can be generated by twisting (from now on we will assume ) . If is a quasi-bialgebra and is an invertible element such that , set Then, the set is also a quasi-bialgebra obtained by twisting by F, which is called a twist or gauge transformation. If was a braided quasi-bialgebra with universal R-matrix , then so is with universal R-matrix (using the notation from the above section). However, the twist of a bialgebra is only in general a quasi-bialgebra. Twistings fulfill many expected properties. For example, twisting by and then is equivalent to twisting by , and twisting by then recovers the original quasi-bialgebra. Twistings have the important property that they induce categorical equivalences on the tensor category of modules: Theorem: Let , be quasi-bialgebras, let be the twisting of by , and let there exist an isomorphism: . Then the induced tensor functor is a tensor category equivalence between and . Where . Moreover, if is an isomorphism of braided quasi-bialgebras, then the above induced functor is a braided tensor category equivalence. Usage Quasi-bialgebras form the basis of the study of quasi-Hopf algebras and further to the study of Drinfeld twists and the representations in terms of F-matrices associated with finite-dimensional irreducible representations of quantum affine algebra. 
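The identities referred to above did not survive extraction. In the frequently assumed case of trivial unit constraints (l = r = 1), the defining conditions are usually stated as follows; this is a hedged reconstruction following Drinfeld's conventions, and the placement of the associator may differ by a convention-dependent inverse in other sources:

\[
(\mathrm{id}\otimes\Delta)\circ\Delta(a) \;=\; \Phi\,\bigl[(\Delta\otimes\mathrm{id})\circ\Delta(a)\bigr]\,\Phi^{-1}, \qquad a\in\mathcal{A},
\]
\[
(\mathrm{id}\otimes\mathrm{id}\otimes\Delta)(\Phi)\,(\Delta\otimes\mathrm{id}\otimes\mathrm{id})(\Phi) \;=\; (1\otimes\Phi)\,(\mathrm{id}\otimes\Delta\otimes\mathrm{id})(\Phi)\,(\Phi\otimes 1),
\]
\[
(\varepsilon\otimes\mathrm{id})\circ\Delta = \mathrm{id} = (\mathrm{id}\otimes\varepsilon)\circ\Delta, \qquad (\mathrm{id}\otimes\varepsilon\otimes\mathrm{id})(\Phi) = 1\otimes 1.
\]
In the same conventions, a twist by an invertible \(F\in\mathcal{A}\otimes\mathcal{A}\) with \((\varepsilon\otimes\mathrm{id})(F)=(\mathrm{id}\otimes\varepsilon)(F)=1\) acts by \(\Delta_F(a)=F\Delta(a)F^{-1}\) and \(\Phi_F=(1\otimes F)\,(\mathrm{id}\otimes\Delta)(F)\,\Phi\,(\Delta\otimes\mathrm{id})(F^{-1})\,(F^{-1}\otimes 1)\).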
F-matrices can be used to factorize the corresponding R-matrix. This leads to applications in statistical mechanics, as quantum affine algebras, and their representations give rise to solutions of the Yang–Baxter equation, a solvability condition for various statistical models, allowing characteristics of the model to be deduced from its corresponding quantum affine algebra. The study of F-matrices has been applied to models such as the XXZ in the framework of the Algebraic Bethe ansatz. See also Bialgebra Hopf algebra Quasi-Hopf algebra References Further reading Vladimir Drinfeld, Quasi-Hopf algebras, Leningrad Math J. 1 (1989), 1419-1457 J.M. Maillet and J. Sanchez de Santos, Drinfeld Twists and Algebraic Bethe Ansatz, Amer. Math. Soc. Transl. (2) Vol. 201, 2000 Coalgebras Non-associative algebras
Quasi-bialgebra
Mathematics
1,032
23,984
https://en.wikipedia.org/wiki/Pyxis
Pyxis is a small and faint constellation in the southern sky. Abbreviated from Pyxis Nautica, its name is Latin for a mariner's compass (contrasting with Circinus, which represents a draftsman's compasses). Pyxis was introduced by Nicolas-Louis de Lacaille in the 18th century, and is counted among the 88 modern constellations. The plane of the Milky Way passes through Pyxis. A faint constellation, its three brightest stars—Alpha, Beta and Gamma Pyxidis—are in a rough line. At magnitude 3.68, Alpha is the constellation's brightest star. It is a blue-white star approximately distant and around 22,000 times as luminous as the Sun. Pyxis is located close to the stars that formed the old constellation Argo Navis, the ship of Jason and the Argonauts. Parts of Argo Navis were the Carina (the keel or hull), the Puppis (the stern), and the Vela (the sails). These eventually became their own constellations. In the 19th century, John Herschel suggested renaming Pyxis to Malus (meaning the mast) but the suggestion was not followed. T Pyxidis, located about 4 degrees northeast of Alpha Pyxidis, is a recurrent nova that has flared up to magnitude 7 every few decades. Also, three star systems in Pyxis have confirmed exoplanets. The Pyxis globular cluster is situated about 130,000 light-years away in the galactic halo. This region was not thought to contain globular clusters. The possibility has been raised that this object might have escaped from the Large Magellanic Cloud. History In ancient Chinese astronomy, Alpha, Beta, and Gamma Pyxidis formed part of Tianmiao, a celestial temple honouring the ancestors of the emperor, along with stars from neighbouring Antlia. The French astronomer Nicolas-Louis de Lacaille first described the constellation in French as la Boussole (the Marine Compass) in 1752, after he had observed and catalogued almost 10,000 southern stars during a two-year stay at the Cape of Good Hope. He devised fourteen new constellations in uncharted regions of the Southern Celestial Hemisphere not visible from Europe. All but one honoured instruments that symbolised the Age of Enlightenment. Lacaille Latinised the name to Pixis [sic] Nautica on his 1763 chart. The Ancient Greeks identified the four main stars of Pyxis as the mast of the mythological Jason's ship, Argo Navis. German astronomer Johann Bode defined the constellation Lochium Funis, the Log and Line—a nautical device once used for measuring speed and distance travelled at sea—around Pyxis in his 1801 star atlas, but the depiction did not survive. In 1844 John Herschel attempted to resurrect the classical configuration of Argo Navis by renaming it Malus the Mast, a suggestion followed by Francis Baily, but Benjamin Gould restored Lacaille's nomenclature. For instance, Alpha Pyxidis is referenced as α Mali in an old catalog of the United States Naval Observatory (star 3766, page 97). Characteristics Covering 220.8 square degrees and hence 0.535% of the sky, Pyxis ranks 65th of the 88 modern constellations by area. Its position in the Southern Celestial Hemisphere means that the whole constellation is visible to observers south of 52°N. It is most visible in the evening sky in February and March. A small constellation, it is bordered by Hydra to the north, Puppis to the west, Vela to the south, and Antlia to the east. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Pyx". 
The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of eight sides (illustrated in infobox). In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −17.41° and −37.29°. Features Stars Lacaille gave Bayer designations to ten stars now named Alpha to Lambda Pyxidis, skipping the Greek letters iota and kappa. Although a nautical element, the constellation was not an integral part of the old Argo Navis and hence did not share in the original Bayer designations of that constellation, which were split between Carina, Vela and Puppis. Pyxis is a faint constellation, its three brightest stars—Alpha, Beta and Gamma Pyxidis—forming a rough line. Overall, there are 41 stars within the constellation's borders with apparent magnitudes brighter than or equal to 6.5. With an apparent magnitude of 3.68, Alpha Pyxidis is the brightest star in the constellation. Located 880 ± 30 light-years distant from Earth, it is a blue-white giant star of spectral type B1.5III that is around 22,000 times as luminous as the Sun and has 9.4 ± 0.7 times its diameter. It began life with a mass 12.1 ± 0.6 times that of the Sun, almost 15 million years ago. Its light is dimmed by 30% due to interstellar dust, so would have a brighter magnitude of 3.31 if not for this. The second brightest star at magnitude 3.97 is Beta Pyxidis, a yellow bright giant or supergiant of spectral type G7Ib-II that is around 435 times as luminous as the Sun, lying 420 ± 10 light-years distant away from Earth. It has a companion star of magnitude 12.5 separated by 9 arcseconds. Gamma Pyxidis is a star of magnitude 4.02 that lies 207 ± 2 light-years distant. It is an orange giant of spectral type K3III that has cooled and swollen to 3.7 times the diameter of the Sun after exhausting its core hydrogen. Kappa Pyxidis was catalogued but not given a Bayer designation by Lacaille, but Gould felt the star was bright enough to warrant a letter. Kappa has a magnitude of 4.62 and is 560 ± 50 light-years distant. An orange giant of spectral type K4/K5III, Kappa has a luminosity approximately 965 times that of the Sun. It is separated by 2.1 arcseconds from a magnitude 10 star. Theta Pyxidis is a red giant of spectral type M1III and semi-regular variable with two measured periods of 13 and 98.3 days, and an average magnitude of 4.71, and is 500 ± 30 light-years distant from Earth. It has expanded to approximately 54 times the diameter of the Sun. Located around 4 degrees northeast of Alpha is T Pyxidis, a binary star system composed of a white dwarf with around 0.8 times the Sun's mass and a red dwarf that orbit each other every 1.8 hours. This system is located around 15,500 light-years away from Earth. A recurrent nova, it has brightened to the 7th magnitude in the years 1890, 1902, 1920, 1944, 1966 and 2011 from a baseline of around 14th magnitude. These outbursts are thought to be due to the white dwarf accreting material from its companion and ejecting periodically. TY Pyxidis is an eclipsing binary star whose apparent magnitude ranges from 6.85 to 7.5 over 3.2 days. The two components are both of spectral type G5IV with a diameter 2.2 times, and mass 1.2 times that of the Sun, and revolve around each other every 3.2 days. The system is classified as a RS Canum Venaticorum variable, a binary system with prominent starspot activity, and lies 184 ± 5 light-years away. 
The system emits X-rays, and analysing the emission curve over time led researchers to conclude that there was a loop of material arcing between the two stars. RZ Pyxidis is another eclipsing binary system, made up of two young stars less than 200,000 years old. Both are hot blue-white stars of spectral type B7V and are around 2.5 times the size of the Sun. One is around five times as luminous as the Sun and the other around four times as luminous. The system is classified as a Beta Lyrae variable, the apparent magnitude varying from 8.83 to 9.72 over 0.66 days. XX Pyxidis is one of the more-studied members of a class of stars known as Delta Scuti variables—short period (six hours at most) pulsating stars that have been used as standard candles and as subjects to study astroseismology. Astronomers made more sense of its pulsations when it became clear that it is also a binary star system. The main star is a white main sequence star of spectral type A4V that is around 1.85 ± 0.05 times as massive as the Sun. Its companion is most likely a red dwarf of spectral type M3V, around 0.3 times as massive as the Sun. The two are very close—possibly only 3 times the diameter of the Sun between them—and orbit each other every 1.15 days. The brighter star is deformed into an egg shape. AK Pyxidis is a red giant of spectral type M5III and semi-regular variable that varies between magnitudes 6.09 and 6.51. Its pulsations take place over multiple periods simultaneously of 55.5, 57.9, 86.7, 162.9 and 232.6 days. UZ Pyxidis is another semi-regular variable red giant, this time a carbon star, that is around 3560 times as luminous as the Sun with a surface temperature of 3482 K, located 2116 light-years away from Earth. It varies between magnitudes 6.99 and 7.83 over 159 days. VY Pyxidis is a BL Herculis variable (type II Cepheid), ranging between apparent magnitudes 7.13 and 7.40 over a period of 1.24 days. Located around 650 light-years distant, it shines with a luminosity approximately 45 times that of the Sun. The closest star to Earth in the constellation is Gliese 318, a white dwarf of spectral class DA5 and magnitude 11.85. Its distance has been calculated to be 26 light-years, or 28.7 ± 0.5 light-years distant from Earth. It has around 45% of the Sun's mass, yet only 0.15% of its luminosity. WISEPC J083641.12-185947.2 is a brown dwarf of spectral type T8p located around 72 light-years from Earth. Discovered by infrared astronomy in 2011, it has a magnitude of 18.79. Planetary systems Pyxis is home to three stars with confirmed planetary systems—all discovered by Doppler spectroscopy. A hot Jupiter, HD 73256 b, that orbits HD 73256 every 2.55 days, was discovered using the CORALIE spectrograph in 2003. The host star is a yellow star of spectral type G9V that has 69% of our Sun's luminosity, 89% of its diameter and 105% of its mass. Around 119 light-years away, it shines with an apparent magnitude of 8.08 and is around a billion years old. HD 73267 b was discovered with the High Accuracy Radial Velocity Planet Searcher (HARPS) in 2008. It orbits HD 73267 every 1260 days, a 7 billion-year-old star of spectral type G5V that is around 89% as massive as the Sun. A red dwarf of spectral type M2.5V that has around 42% the Sun's mass, Gliese 317 is orbited by two gas giant planets. Around 50 light-years distant from Earth, it is a good candidate for future searches for more terrestrial rocky planets. 
Deep sky objects Pyxis lies in the plane of the Milky Way, although part of the eastern edge is dark, with material obscuring our galaxy arm there. NGC 2818 is a planetary nebula that lies within a dim open cluster of magnitude 8.2. NGC 2818A is an open cluster that lies on line of sight with it. K 1-2 is a planetary nebula whose central star is a spectroscopic binary composed of two stars in close orbit with jets emanating from the system. The surface temperature of one component has been estimated at as high as 85,000 K. NGC 2627 is an open cluster of magnitude 8.4 that is visible in binoculars. Discovered in 1995, the Pyxis globular cluster is a 13.3 ± 1.3 billion year-old globular cluster situated around 130,000 light-years distant from Earth and around 133,000 light-years distant from the centre of the Milky Way—a region not previously thought to contain globular clusters. Located in the galactic halo, it was noted to lie on the same plane as the Large Magellanic Cloud and the possibility has been raised that it might be an escaped object from that galaxy. NGC 2613 is a spiral galaxy of magnitude 10.5 which appears spindle-shaped as it is almost edge-on to observers on Earth. Henize 2-10 is a dwarf galaxy which lies 30 million light-years away. It has a black hole of around a million solar masses at its centre. Known as a starburst galaxy due to very high rates of star formation, it has a bluish colour due to the huge numbers of young stars within it. See also Pyxis (Chinese astronomy) Notes References Southern constellations Constellations listed by Lacaille
Pyxis
Astronomy
2,877
1,524,006
https://en.wikipedia.org/wiki/Shirt-sleeve%20environment
"Shirt-sleeve environment" is a term used in aircraft design to describe the interior of an aircraft in which no special clothing need be worn. Early aircraft had no internal pressurization, so the crews of those that reached the stratosphere had to be garbed to withstand the low temperature and pressure of the air outside. Respirator masks needed to cover the mouth and nose. Silk socks were worn to retain heat. Sometimes leather clothing, such as boots, were electrically heated. When jet fighter aircraft reached still higher altitudes, something similar to a space suit had to be worn, and pilots of the highest reconnaissance aircraft wore real space suits. Commercial jet airliners fly in the stratosphere, but because they are pressurized, they could be said to have a shirt-sleeve environment. Crews of the US Apollo spacecraft always began the flight phases of launch, docking, and re-entry in space suits, although they could remove them for many hours. The Soviets tried to perfect this to save weight. This worked well, until an accidental depressurization on entry resulted in the deaths of an entire Soyuz crew. Protocols were changed shortly thereafter to require at least partial spacesuits. Early Soyuz spacecraft had no provision for space suits in the re-entry module, although the orbital module was intended for use as an airlock. Thus these operated in a shirt-sleeve environment except for spacewalks. This term is also used in science fiction to describe an alien planet with an atmosphere breathable by humans without special equipment. The Space Shuttle's Spacelab Habitable module was an area with expanded volume for astronauts to work in a shirt sleeve environment and had space for equipment racks and related support equipment for operations in Low Earth orbit. One of the goals for MOLAB rover was to achieve a shirt-sleeve environment (compared to the Lunar Roving Vehicle which was open to space and required the use of space suits to operate). One of the considerations was the habitable volume that could be occupied. References Aerospace engineering Safety engineering Human habitats
Shirt-sleeve environment
Engineering
414
58,448,044
https://en.wikipedia.org/wiki/Aspergillus%20cavernicola
Aspergillus cavernicola (also named A. amylovorus) is a species of fungus in the genus Aspergillus. It is from the Cavernicolus section. The species was first described in 1969. It has been isolated from the wall of a cave in Romania and from wheat starch in Ukraine. Growth and morphology A. cavernicola has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below. References cavernicola Fungi described in 1969 Fungus species
Aspergillus cavernicola
Biology
134
12,528,789
https://en.wikipedia.org/wiki/Barry%20Azzopardi
Barry Azzopardi (1947–2017) was a professor of chemical engineering specialising in multiphase flow research. He was a chartered engineer and Fellow of the Institution of Chemical Engineers. Life Azzopardi was born in Gibraltar, obtained a BTech degree in chemical engineering at the University of Bradford in 1972 and a PhD in the same subject at the University of Exeter in 1977. He died 22 May 2017 due to cancer. Career After work at the University of Oxford and the AERE, Harwell he took up the position of Lady Trent Professor of Chemical Engineering at the University of Nottingham in 1990 and Head of the Department of Chemical Engineering until 1997. He retained this position until his death in 2017. He was an editor of Chemical Engineering Research and Design: Part A, an Official Journal of the European Federation of Chemical Engineering (EFCE) and had been an organiser and member of scientific committees of the Assembly of World Conferences on Experimental Heat Transfer, fluid mechanics and thermodynamics, as well as the 2007 International Conference on Multiphase Flow (ICMF). At these, and other, conferences he gave keynote lectures. He was also the chair of the EFCE Working Party on Multi-Phase Flow from 1991 to 2000. Selected publications YANG, L. and AZZOPARDI, B. J., 2007. Phase split of liquid-liquid two-phase flow at a horizontal T-junction. Internal Journal of Multiphase Flow, 33(2), 207–216. AZZOPARDI, B., 2006. Flow controlled critical heat flux: developments in annular flow modelling. Archives of Thermodynamics, 27(2), 3–22. LESTER, E., BLOOD, P., DENYER, J., GIDDINGS, D., AZZOPARDI, B. and POLIAKOFF, M., 2006. Reaction engineering: the supercritical water hydrothermal synthesis of nano-particles. Journal of Supercritical Fluids, 37(2), 209–214. MAK, C. Y., OMEBERE-IYARI, N. K. and AZZOPARDI, B. J., 2006. The split of vertical two-phase flow at a small diameter T-junction. Chemical Engineering Science, 61(19), 6261–6272. YANG, L., AZZOPARDI, B.J., BELGHAZI, A. and NAKANISHI, S., 2006. Phase separation of liquid-liquid two-phase flow at a T-junction. AIChE Journal, 52(1), 141–149. CHONG, L. Y., AZZOPARDI, B. J. and BATE, D. J., 2005. Calculation of considerations at which dryout occurs in the serpentine channels of fired reboilers. Chemical Engineering Research & Design, 83(4), 412–422. DAS, G., DAS, P. K. and AZZOPARDI, B. J., 2005. The split of stratified gas–liquid flow at a small diameter T-junction. International Journal of Multiphase Flow, 31(4), 514–528. EASTWICK, C.N., HEUBNER, K., AZZOPARDI, B.J., SIMMONS, K.A., YOUNG, C. and MORRISON, R., 2005. Film flow around bearing chamber support structures. In: Proceedings of ASME Turbo Expo 2005: Power for Land, Sea and Air, Reno, USA. New York: ASME Press, 3. HANKINS, N., HILAL, N., OGUNBIYI, O. O. and AZZOPARDI, B., 2005. Inverted polarity micellar enhanced ultrafiltration for the treatment of heavy metal polluted wastewater. Desalination, 185(1–3), 185–202. References External links University of Nottingham, Barry Azzopardi 1947 births 2017 deaths British chemical engineers Academics of the University of Nottingham Gibraltarian emigrants to England Fellows of the Institution of Chemical Engineers Chemical engineering academics Gibraltarian scientists
Barry Azzopardi
Chemistry
863
23,897,117
https://en.wikipedia.org/wiki/Slope%20stability%20analysis
Slope stability analysis is a static or dynamic, analytical or empirical method to evaluate the stability of slopes of soil- and rock-fill dams, embankments, excavated slopes, and natural slopes in soil and rock. It is performed to assess the safe design of human-made or natural slopes (e.g. embankments, road cuts, open-pit mines, excavations, landfills, etc.) and their equilibrium conditions. Slope stability is the resistance of an inclined surface to failure by sliding or collapsing. The main objectives of slope stability analysis are to find endangered areas, investigate potential failure mechanisms, determine the sensitivity of the slope to different triggering mechanisms, design optimal slopes with regard to safety, reliability and economics, and design possible remedial measures, e.g. barriers and stabilization. Successful design of a slope requires geological information and site characteristics, e.g. properties of the soil/rock mass, slope geometry, groundwater conditions, alternation of materials by faulting, joint or discontinuity systems, movements and tension in joints, earthquake activity, etc. The presence of water has a detrimental effect on slope stability. Water pressure acting in the pore spaces, fractures or other discontinuities in the materials that make up the pit slope will reduce the strength of those materials. The choice of the correct analysis technique depends on both site conditions and the potential mode of failure, with careful consideration being given to the varying strengths, weaknesses and limitations inherent in each methodology. Before the computer age, stability analysis was performed graphically or with a hand-held calculator. Today, engineers have many analysis software options, ranging from simple limit equilibrium techniques through computational limit analysis approaches (e.g. finite element limit analysis, discontinuity layout optimization) to complex and sophisticated numerical solutions (finite-/distinct-element codes). The engineer must fully understand the limitations of each technique. For example, limit equilibrium is the most commonly used and simplest solution method, but it can become inadequate if the slope fails by complex mechanisms (e.g. internal deformation and brittle fracture, progressive creep, liquefaction of weaker soil layers, etc.). In these cases, more sophisticated numerical modelling techniques should be used. Also, even for very simple slopes, the results obtained with typical limit equilibrium methods currently in use (Bishop, Spencer, etc.) may differ considerably. In addition, the use of risk assessment concepts is increasing. Risk assessment is concerned with both the consequence of slope failure and the probability of failure (both require an understanding of the failure mechanism). Limit equilibrium analysis Conventional methods of slope stability analysis can be divided into three groups: kinematic analysis, limit equilibrium analysis, and rock fall simulators. Most slope stability analysis computer programs are based on the limit equilibrium concept for a two- or three-dimensional model. Two-dimensional sections are analyzed assuming plane strain conditions. Stability analyses of two-dimensional slope geometries using simple analytical approaches can provide important insights into the initial design and risk assessment of slopes. Limit equilibrium methods investigate the equilibrium of a soil mass tending to slide down under the influence of gravity.
Translational or rotational movement is considered on an assumed or known potential slip surface below the soil or rock mass. In rock slope engineering, methods may be highly significant to simple block failure along distinct discontinuities. All these methods are based on the comparison of forces, moments, or stresses resisting movement of the mass with those that can cause unstable motion (disturbing forces). The output of the analysis is a factor of safety, defined as the ratio of the shear strength (or, alternatively, an equivalent measure of shear resistance or capacity) to the shear stress (or other equivalent measure) required for equilibrium. If the value of factor of safety is less than 1.0, the slope is unstable. All limit equilibrium methods assume that the shear strengths of the materials along the potential failure surface are governed by linear (Mohr-Coulomb) or non-linear relationships between shear strength and the normal stress on the failure surface. The most commonly used variation is Terzaghi's theory of shear strength which states that where is the shear strength of the interface, is the effective stress ( is the total stress normal to the interface and is the pore water pressure on the interface), is the effective friction angle, and is the effective cohesion. The methods of slices is the most popular limit equilibrium technique. In this approach, the soil mass is discretized into vertical slices. Several versions of the method are in use. These variations can produce different results (factor of safety) because of different assumptions and inter-slice boundary conditions. The location of the interface is typically unknown but can be found using numerical optimization methods. For example, functional slope design considers the critical slip surface to be the location where that has the lowest value of factor of safety from a range of possible surfaces. A wide variety of slope stability software use the limit equilibrium concept with automatic critical slip surface determination. Typical slope stability software can analyze the stability of generally layered soil slopes, embankments, earth cuts, and anchored sheeting structures. Earthquake effects, external loading, groundwater conditions, stabilization forces (i.e., anchors, geo-reinforcements etc.) can also be included. Analytical techniques: Method of slices Many slope stability analysis tools use various versions of the methods of slices such as Bishop simplified, Ordinary method of slices (Swedish circle method/Petterson/Fellenius), Spencer, Sarma etc. Sarma and Spencer are called rigorous methods because they satisfy all three conditions of equilibrium: force equilibrium in horizontal and vertical direction and moment equilibrium condition. Rigorous methods can provide more accurate results than non-rigorous methods. Bishop simplified or Fellenius are non-rigorous methods satisfying only some of the equilibrium conditions and making some simplifying assumptions. Some of these approaches are discussed below. Swedish Slip Circle Method of Analysis The Swedish Slip Circle method assumes that the friction angle of the soil or rock is equal to zero, i.e., . In other words, when friction angle is considered to be zero, the effective stress term goes to zero, thus equating the shear strength to the cohesion parameter of the given soil. The Swedish slip circle method assumes a circular failure interface, and analyzes stress and strength parameters using circular geometry and statics. 
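The Mohr–Coulomb/Terzaghi relation and the factor-of-safety definition quoted above are conventionally written as follows (a reconstruction of standard formulas, with symbols matching the surrounding text):

\[
\tau_f = c' + (\sigma_n - u)\tan\varphi', \qquad
F = \frac{\tau_f}{\tau_{\text{mobilised}}} = \frac{\text{shear strength available}}{\text{shear stress required for equilibrium}},
\]
with instability indicated when \(F < 1\).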
Ordinary Method of Slices
In the method of slices, also called OMS or the Fellenius method, the sliding mass above the failure surface is divided into a number of slices. The forces acting on each slice are obtained by considering the mechanical (force and moment) equilibrium for the slices. Each slice is considered on its own and interactions between slices are neglected because the resultant forces are parallel to the base of each slice. However, Newton's third law is not satisfied by this method because, in general, the resultants on the left and right of a slice do not have the same magnitude and are not collinear. This allows for a simple static equilibrium calculation, considering only soil weight, along with shear and normal stresses along the failure plane. Both the friction angle and cohesion can be considered for each slice. In the general case of the method of slices, the normal and shear forces between adjacent slices constrain each slice and make the problem statically indeterminate when they are included in the computation. For the ordinary method of slices these interslice forces are neglected: a force balance normal to the base of each slice gives the normal force, a moment balance about the common centre of rotation is written for all slices taken together (loads on the surface being ignored), and Terzaghi's strength theory is used to express the maximum shear resistance available along each slice base. The factor of safety is the ratio of the maximum resisting moment from Terzaghi's theory to the driving moment, which for a circular slip surface reduces to

F = Σ_i [ c′ l_i + (W_i cos α_i − u_i l_i) tan φ′ ] / Σ_i W_i sin α_i

where i is the slice index, W_i is the weight of slice i, α_i is the inclination of its base, l_i is the length of its base, u_i is the pore pressure acting on the base, and c′ and φ′ are the effective cohesion and effective angle of internal friction.

Modified Bishop's Method of Analysis
The Modified Bishop's method is slightly different from the ordinary method of slices in that normal interaction forces between adjacent slices are assumed to be collinear and the resultant interslice shear force is zero. The approach was proposed by Alan W. Bishop of Imperial College. The constraint introduced by the normal forces between slices makes the problem statically indeterminate. As a result, iterative methods have to be used to solve for the factor of safety. The method has been shown to produce factor of safety values within a few percent of the "correct" values. The factor of safety for moment equilibrium in Bishop's method can be expressed as

F = Σ_i [ (c′ b_i + (W_i − u_i b_i) tan φ′) / m_{α,i} ] / Σ_i W_i sin α_i, with m_{α,i} = cos α_i (1 + tan α_i tan φ′ / F)

where, as before, i is the slice index, c′ is the effective cohesion, φ′ is the effective angle of internal friction, b_i is the width of each slice, W_i is the weight of each slice, u_i is the water pressure at the base of each slice, and α_i is the inclination of the slice base. An iterative method has to be used to solve for F because the factor of safety appears on both the left and right hand sides of the equation.
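Because the factor of safety appears on both sides of Bishop's expression, it is normally obtained by fixed-point iteration. The sketch below is a minimal illustration of that iteration under the same assumptions as the formula above; the slice data and material parameters are hypothetical, and the snippet is not a substitute for validated geotechnical software.

import numpy as np

# Hypothetical slices for one trial circle: weight W (kN/m), base
# inclination alpha, slice width b (m) and pore pressure u (kPa).
W = np.array([120.0, 260.0, 340.0, 300.0, 170.0])
alpha = np.radians([-8.0, 5.0, 18.0, 30.0, 42.0])
b = np.array([2.0, 2.0, 2.0, 2.0, 2.0])
u = np.array([10.0, 25.0, 30.0, 20.0, 5.0])
c_eff = 8.0                    # effective cohesion, kPa (assumed)
phi_eff = np.radians(28.0)     # effective friction angle (assumed)

def bishop_expression(F):
    """Evaluate Bishop's simplified formula for a given trial value of F."""
    m_alpha = np.cos(alpha) * (1.0 + np.tan(alpha) * np.tan(phi_eff) / F)
    resisting = np.sum((c_eff * b + (W - u * b) * np.tan(phi_eff)) / m_alpha)
    return resisting / np.sum(W * np.sin(alpha))

F = 1.0                        # starting guess, e.g. an ordinary-method estimate
for _ in range(50):            # iterate until the value stops changing
    F_new = bishop_expression(F)
    if abs(F_new - F) < 1e-6:
        break
    F = F_new
print(f"Bishop simplified factor of safety: {F:.3f}")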
Lorimer's method
Lorimer's Method is a technique for evaluating slope stability in cohesive soils. It differs from Bishop's Method in that it uses a clothoid slip surface in place of a circle. This mode of failure was determined experimentally to account for effects of particle cementation. The method was developed in the 1930s by Gerhardt Lorimer (Dec 20, 1894 - Oct 19, 1961), a student of geotechnical pioneer Karl von Terzaghi.

Spencer's Method
Spencer's Method of analysis requires a computer program capable of cyclic algorithms, but makes slope stability analysis easier. Spencer's algorithm satisfies all equilibria (horizontal, vertical and driving moment) on each slice. The method allows for unconstrained slip planes and can therefore determine the factor of safety along any slip surface. The rigid equilibrium and unconstrained slip surface result in more precise safety factors than, for example, Bishop's Method or the Ordinary Method of Slices.

Sarma method
The Sarma method, proposed by Sarada K. Sarma of Imperial College, is a limit equilibrium technique used to assess the stability of slopes under seismic conditions. It may also be used for static conditions if the value of the horizontal load is taken as zero. The method can analyse a wide range of slope failures as it may accommodate a multi-wedge failure mechanism and is therefore not restricted to planar or circular failure surfaces. It may provide information about the factor of safety or about the critical acceleration required to cause collapse.

Comparisons
The assumptions made by a number of limit equilibrium methods are listed in the table below. The table below shows the statical equilibrium conditions satisfied by some of the popular limit equilibrium methods.

Rock slope stability analysis
Rock slope stability analysis based on limit equilibrium techniques may consider the following modes of failure:
Planar failure -> case of the rock mass sliding on a single surface (special case of the general wedge type of failure); two-dimensional analysis may be used according to the concept of a block resting on an inclined plane at limit equilibrium
Polygonal failure -> sliding of natural rock usually takes place on polygonally-shaped surfaces; the calculation is based on certain assumptions (e.g. sliding on a polygonal surface composed of N parts is kinematically possible only if at least (N - 1) internal shear surfaces develop; the rock mass is divided into blocks by internal shear surfaces; blocks are considered to be rigid; no tensile strength is permitted etc.)
Wedge failure -> three-dimensional analysis enables modelling of the wedge sliding on two planes in a direction along the line of intersection
Toppling failure -> long thin rock columns formed by steeply dipping discontinuities may rotate about a pivot point located at the lowest corner of the block; the sum of the moments causing toppling of a block (i.e. the horizontal weight component of the block and the sum of the driving forces from adjacent blocks behind the block under consideration) is compared to the sum of the moments resisting toppling (i.e. the vertical weight component of the block and the sum of the resisting forces from adjacent blocks in front of the block under consideration); toppling occurs if the driving moments exceed the resisting moments

Limit analysis
A more rigorous approach to slope stability analysis is limit analysis. Unlike limit equilibrium analysis, which makes ad hoc though often reasonable assumptions, limit analysis is based on rigorous plasticity theory. This enables, among other things, the computation of upper and lower bounds on the true factor of safety.
Programs based on limit analysis include:
OptumG2 (2014-): general purpose software for geotechnical applications (also includes elastoplasticity, seepage, consolidation, staged construction, tunneling, and other relevant geotechnical analysis types).
LimitState:GEO (2008-): general purpose geotechnical software application based on Discontinuity layout optimization for plane strain problems including slope stability.

Stereographic and kinematic analysis
Kinematic analysis examines which modes of failure can possibly occur in the rock mass. Analysis requires the detailed evaluation of the rock mass structure and the geometry of existing discontinuities contributing to block instability. Stereographic representation (stereonets) of the planes and lines is used. Stereonets are useful for analyzing discontinuous rock blocks. The program DIPS allows for the visualization of structural data using stereonets, determination of the kinematic feasibility of the rock mass, and statistical analysis of the discontinuity properties.

Rockfall simulators
Rock slope stability analysis may be used to design protective measures near or around structures endangered by falling blocks. Rockfall simulators determine travel paths and trajectories of unstable blocks separated from a rock slope face. The analytical solution method described by Hungr & Evans treats the rock block as a point with mass and velocity moving on a ballistic trajectory with regard to potential contact with the slope surface. The calculation requires two restitution coefficients that depend on fragment shape, slope surface roughness, momentum and deformational properties and on the chance of certain conditions in a given impact.

Numerical methods of analysis
Numerical modelling techniques provide an approximate solution to problems which otherwise cannot be solved by conventional methods, e.g. complex geometry, material anisotropy, non-linear behavior, in situ stresses. Numerical analysis allows for material deformation and failure, modelling of pore pressures, creep deformation, dynamic loading, assessing effects of parameter variations etc. However, numerical modelling is restricted by some limitations. For example, input parameters are not usually measured and the availability of these data is generally poor. The user should also be aware of boundary effects, meshing errors, and hardware memory and time restrictions. Numerical methods used for slope stability analysis can be divided into three main groups: continuum, discontinuum and hybrid modelling.

Continuum modelling
Modelling of the continuum is suitable for the analysis of soil slopes, massive intact rock or heavily jointed rock masses. This approach includes the finite-difference and finite element methods, which discretize the whole mass into a finite number of elements with the help of a generated mesh. In the finite-difference method (FDM), differential equilibrium equations (i.e. strain-displacement and stress-strain relations) are solved. The finite element method (FEM) uses approximations to the connectivity of elements, and continuity of displacements and stresses between elements. Most numerical codes allow modelling of discrete fractures, e.g. bedding planes and faults. Several constitutive models are usually available, e.g. elasticity, elasto-plasticity, strain-softening, elasto-viscoplasticity etc.

Discontinuum modelling
The discontinuum approach is useful for rock slopes controlled by discontinuity behaviour.
The rock mass is considered as an aggregation of distinct, interacting blocks subjected to external loads and assumed to undergo motion with time. This methodology is collectively called the discrete-element method (DEM). Discontinuum modelling allows for sliding between the blocks or particles. The DEM is based on the repeated solution of the dynamic equation of equilibrium for each block until the boundary conditions and the laws of contact and motion are satisfied. Discontinuum modelling is among the most commonly applied numerical approaches to rock slope analysis, and the following variations of the DEM exist:
distinct-element method
Discontinuous Deformation Analysis (DDA)
particle flow codes
The distinct-element approach describes the mechanical behaviour of both the discontinuities and the solid material. This methodology is based on a force-displacement law (specifying the interaction between the deformable rock blocks) and a law of motion (determining displacements caused in the blocks by out-of-balance forces). Joints are treated as boundary conditions. Deformable blocks are discretized into internal constant-strain elements. The discontinuum program UDEC (Universal distinct element code) is suitable for highly jointed rock slopes subjected to static or dynamic loading. Two-dimensional analysis of the translational failure mechanism allows for simulating large displacements and modelling deformation or material yielding. The three-dimensional discontinuum code 3DEC allows modelling of multiple intersecting discontinuities and is therefore suitable for the analysis of wedge instabilities or of the influence of rock support (e.g. rockbolts, cables). In Discontinuous Deformation Analysis (DDA) displacements are unknowns and equilibrium equations are then solved in a manner analogous to the finite element method. Each unit of the finite-element-type mesh represents an isolated block bounded by discontinuities. An advantage of this methodology is the possibility of modelling large deformations, rigid body movements, and coupling or failure states between rock blocks. A discontinuous rock mass can be modelled with the help of the distinct-element methodology in the form of a particle flow code, e.g. the program PFC2D/3D. Spherical particles interact through frictional sliding contacts. Simulation of joint-bounded blocks may be realized through specified bond strengths. The law of motion is repeatedly applied to each particle and the force-displacement law to each contact. The particle flow methodology enables modelling of granular flow, fracture of intact rock, translational block movements, dynamic response to blasting or seismicity, and deformation between particles caused by shear or tensile forces. These codes also allow the modelling of subsequent failure processes of a rock slope.

Hybrid/coupled modelling
Hybrid codes involve the coupling of various methodologies to maximize their key advantages, e.g. limit equilibrium analysis combined with finite element groundwater flow and stress analysis; coupled particle flow and finite-difference analyses; and hydro-mechanically coupled finite element and material point methods for simulating the entire process of rainfall-induced landslides. Hybrid techniques allow investigation of piping slope failures and of the influence of high groundwater pressures on the failure of weak rock slopes. Coupled finite-distinct-element codes provide for the modelling of both intact rock behavior and the development and behavior of fractures.

See also
References
Further reading
Coduto, Donald P. (1998). Geotechnical Engineering: Principles and Practices. Prentice-Hall.
Fredlund, D. G., H. Rahardjo, M. D. Fredlund (2014). Unsaturated Soil Mechanics in Engineering Practice. Wiley-Interscience.
External links
Landslide analysis, prevention and mitigation
Slope stability analysis
Environmental_science
4,000
22,149,789
https://en.wikipedia.org/wiki/Weingarten%20function
In mathematics, Weingarten functions are rational functions indexed by partitions of integers that can be used to calculate integrals of products of matrix coefficients over classical groups. They were first studied by Weingarten, who found their asymptotic behavior, and named by Collins, who evaluated them explicitly for the unitary group.

Unitary groups
Weingarten functions are used for evaluating integrals over the unitary group U_d of products of matrix coefficients of the form

∫_{U_d} U_{i_1 j_1} ⋯ U_{i_q j_q} U*_{i'_1 j'_1} ⋯ U*_{i'_q j'_q} dU,

where * denotes complex conjugation. Note that U*_{ij} = (U†)_{ji}, where U† is the conjugate transpose of U, so one can interpret the above expression as a matrix element of a product of copies of U and U†. This integral is equal to

Σ_{σ,τ ∈ S_q} δ_{i_1 i'_{σ(1)}} ⋯ δ_{i_q i'_{σ(q)}} δ_{j_1 j'_{τ(1)}} ⋯ δ_{j_q j'_{τ(q)}} Wg(στ^{-1}, d),

where Wg is the Weingarten function, given by

Wg(σ, d) = (1/(q!)^2) Σ_λ χ^λ(1)^2 χ^λ(σ) / s_{λ,d}(1),

where the sum is over all partitions λ of q. Here χ^λ is the character of S_q corresponding to the partition λ and s_λ is the Schur polynomial of λ, so that s_{λ,d}(1) is the dimension of the representation of U_d corresponding to λ. The Weingarten functions are rational functions in d. They can have poles for small values of d, which cancel out in the formula above. There is an alternative inequivalent definition of Weingarten functions, where one only sums over partitions with at most d parts. This is no longer a rational function of d, but is finite for all positive integers d. The two sorts of Weingarten functions coincide for d larger than q, and either can be used in the formula for the integral.

Values of the Weingarten function for simple permutations
The first few Weingarten functions Wg(σ, d) are
Wg(∅, d) = 1 (the trivial case where q = 0)
Wg((1), d) = 1/d
Wg((1,1), d) = 1/(d^2 − 1)
Wg((2), d) = −1/(d(d^2 − 1))
where permutations σ are denoted by their cycle shapes. There exist computer algebra programs to produce these expressions.

Explicit expressions for the integrals in the first cases
The explicit expressions for the integrals of first- and second-degree polynomials, obtained via the formula above, are:

∫_{U_d} U_{ij} U*_{kl} dU = δ_{ik} δ_{jl} / d,

∫_{U_d} U_{i_1 j_1} U_{i_2 j_2} U*_{k_1 l_1} U*_{k_2 l_2} dU = (δ_{i_1 k_1} δ_{i_2 k_2} δ_{j_1 l_1} δ_{j_2 l_2} + δ_{i_1 k_2} δ_{i_2 k_1} δ_{j_1 l_2} δ_{j_2 l_1}) / (d^2 − 1) − (δ_{i_1 k_1} δ_{i_2 k_2} δ_{j_1 l_2} δ_{j_2 l_1} + δ_{i_1 k_2} δ_{i_2 k_1} δ_{j_1 l_1} δ_{j_2 l_2}) / (d(d^2 − 1)).

Asymptotic behavior
For large d, the Weingarten function Wg has the asymptotic behavior

Wg(σ, d) = d^{−(q + |σ|)} Π_i (−1)^{C_i − 1} c_{C_i − 1} (1 + O(d^{−2})),

where the permutation σ is a product of cycles of lengths C_i, c_n = (2n)!/(n!(n + 1)!) is a Catalan number, and |σ| is the smallest number of transpositions that σ is a product of. There exists a diagrammatic method to systematically calculate the integrals over the unitary group as a power series in 1/d.

Orthogonal and symplectic groups
For orthogonal and symplectic groups the Weingarten functions were evaluated by Collins and Śniady. Their theory is similar to the case of the unitary group. They are parameterized by partitions such that all parts have even size.

External links
References
Random matrices Mathematical physics
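The first-degree integral quoted above is easy to verify numerically. The following sketch is an illustration added alongside the article rather than part of it: it draws Haar-distributed random unitaries (via QR decomposition of a complex Ginibre matrix) and compares the sample mean of U_ij conj(U_kl) with the Weingarten prediction δ_ik δ_jl / d. All variable names are arbitrary.

import numpy as np

def haar_unitary(d, rng):
    # Haar-distributed unitary: QR-decompose a complex Ginibre matrix and
    # absorb the phases of R's diagonal into Q (Mezzadri's construction).
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    phases = np.diag(r) / np.abs(np.diag(r))
    return q * phases                      # rescale each column by a unit phase

rng = np.random.default_rng(0)
d, samples = 4, 20000
acc = np.zeros((d, d, d, d), dtype=complex)
for _ in range(samples):
    u = haar_unitary(d, rng)
    acc += np.einsum('ij,kl->ijkl', u, u.conj())   # accumulate U_ij * conj(U_kl)
acc /= samples

# Weingarten prediction for q = 1: E[U_ij conj(U_kl)] = delta_ik delta_jl / d
prediction = np.einsum('ik,jl->ijkl', np.eye(d), np.eye(d)) / d
print(np.max(np.abs(acc - prediction)))            # shrinks roughly like 1/sqrt(samples)

Higher moments can be checked the same way against the second-degree expression above, at the cost of more samples.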
Weingarten function
Physics,Mathematics
558
43,962,754
https://en.wikipedia.org/wiki/Gliese%20251
Gliese 251, also known as HIP 33226 or HD 265866, is a star located 18.2 light-years away from the Solar System. Located in the constellation of Gemini, it is the nearest star in this constellation. It lies near the boundary with Auriga, 49 arcminutes away from the bright star Theta Geminorum; due to its apparent magnitude of +9.89 it cannot be observed with the naked eye. The closest star to Gliese 251 is QY Aurigae, which is located 3.5 light-years away. Gliese 251 is a red dwarf of spectral type M3V with an effective temperature of about 3,300 K. Its mass has been measured to be around 0.36 solar masses and its radius is about 36% of the solar radius. Its metallicity is likely slightly less than that of the Sun. Observations at infrared wavelengths rule out the presence of a circumstellar disk around it.

Planetary system
In 2019, two candidate planets were detected by the radial velocity method to orbit Gliese 251, with orbital periods of 1.74 and 607 days. However, a new study in 2020 using CARMENES data refuted both candidates, finding that both signals were caused by stellar activity. Based on the CARMENES data, the team announced that Gliese 251 is orbited by a single super-Earth (Gliese 251 b) with an orbital period of 14.238 days.

See also
List of exoplanets discovered in 2020 - Gliese 251 b

References
Notes
Gemini (constellation) M-type main-sequence stars 265866 0251 J06544902+3316058 033226 0294 Planetary systems with one confirmed planet TIC objects
Gliese 251
Astronomy
364
51,021,329
https://en.wikipedia.org/wiki/HIP%2041378
HIP 41378 is a star located 346 light-years away in the constellation of Cancer. The star has an apparent magnitude of 8.92. This F-type main sequence dwarf has a mass of and a radius of . It has a surface temperature of about . Planetary system In 2016, the K2 Kepler mission discovered five planets around HIP 41378, with sizes ranging from 2 times the size of Earth to the size of Jupiter, out to about 1 AU for the outermost planet. The semi-major axes were not known until K2 Haute-Provence Observatory radial velocity data was obtained in 2019. Also, a sixth non-transiting planet, HIP 41378 g, was discovered, along with speculation that additional planets may exist between HIP 41378 g and HIP 41378 d. The planet HIP 41378 f was also found to likely have optically-thick rings or a highly extended atmosphere. See also Planetary system References F-type main-sequence stars 2016 in outer space Planetary systems with six confirmed planets 041378 Durchmusterung objects Cancer (constellation)
HIP 41378
Astronomy
226
48,526,070
https://en.wikipedia.org/wiki/Fort%20des%20Bordes
The Feste Zastrow, renamed Fort des Bordes by the French in 1919, is a military structure located in the district of Boric in Metz. It is part of the first fortified belt of the forts of Metz. Buried since the construction of the Eastern expressway in 1968, it is covered by a green space, although some remnants of the fort remain visible.

Historical context
The first fortified belt of Metz consists of the forts of Saint-Privat (1870), Queuleu (1867), des Bordes (1870), Saint-Julien (1867), Gambetta, Déroulède, Decaen, Plappeville (1867) and St. Quentin (1867), most of them unfinished or still in the planning stages in 1870, when the Franco-Prussian War broke out. During the annexation, the German garrison of Metz oscillated between 15,000 and 20,000 men at the beginning of the period and exceeded 25,000 men before the First World War, as the city gradually became the most important stronghold of the German Reich.

Construction and facilities
Fort Zastrow was designed in the spirit of the "detached forts" concept developed by Hans Alexis von Biehler in Germany. The goal was to form a discontinuous enclosure around Metz with strong artillery batteries spaced within the range of their guns. As the interval between Fort Queuleu (Feste Goeben) and Fort Saint-Julien (Feste Manteuffel) was judged excessive, it was decided to strengthen the fortified line by building Fort Zastrow. The Feste Zastrow was quickly built by German engineers between 1873 and 1875.

Successive assignments
From 1890, the relief of the garrison was provided by the fortress troops of the XVI Corps stationed at Metz and Thionville. The Fort des Bordes served as a depot and barracks from 1873 to 1918. It was then taken over by the French army in 1919 and served as an internment camp for draft evaders and deserters. In 1940, the fort was occupied again by the German army. It was not remilitarised after 1945. The Fort des Bordes was closed permanently in 1954.

Second World War
In early September 1944, at the beginning of the Battle of Metz, the German command integrated the defenses around Metz. On September 2, 1944, Metz was declared a fortress of the Reich by Hitler. The fortress was to be defended to the last by the German troops, whose leaders had all sworn an oath to the Führer. The next day, September 3, 1944, General Krause, then commander of the fortress of Metz, established his main command post in the barracks of Fort Alvensleben. Fort Plappeville was indeed the center of the defenses of Metz, with Fort Manstein (Girardin) to the west, commanded by SS Colonel Joachim von Siegroth, Fort Zastrow (Les Bordes) to the north, held by SS Colonel Wagner, and Fort Prince August von Württemberg (Saint-Privat) to the south, held by SS Colonel Ernst Kemper. The same day, the troops of General Krause took position on a line running from Pagny-sur-Moselle to Mondelange, passing to the west of Metz through Chambley-Bussières, Mars-la-Tour, Jarny and Briey. On November 9, 1944, no fewer than 1,299 heavy bombers, both B-17s and B-24s, dropped hundreds of bombs on fortifications and strategic points in the combat zone of the Third Army. Most bombers dropped their bombs without visibility from over 20,000 feet, so they often missed. In Metz, 689 loads of bombs hit the seven forts designated as priority targets. The bombing did little more than cause collateral damage. At Thionville and Saarbrücken, the result was inconclusive, proving once again the inadequacy of massive bombing of military targets.
But one by one, the isolated forts, attacked by the advancing troops of the 95th Infantry Division to the north and the 5th Infantry Division to the south, succumbed. The Fort des Bordes was taken by the 5th Infantry Division, part of George S. Patton's Third Army, on November 21, 1944, at the end of the Battle of Metz. Fort Jeanne-d'Arc was the last of the forts of Metz to surrender. Determined German resistance, bad weather and floods, inopportunity, and a general tendency to underestimate the firepower of the fortifications of Metz helped slow the US offensive, giving the German Army the opportunity to withdraw in good order to the Saar. The objective of the German staff, which was to save time by stalling US troops for as long as possible before they could advance to the front of the Siegfried Line, was largely achieved.

See also
Fortifications of Metz
Battle of Metz

References
Bibliography
Fortifications of Metz World War II defensive lines
Fort des Bordes
Engineering
1,000
5,145,378
https://en.wikipedia.org/wiki/SkyBitz
SkyBitz is an American company based in Herndon, Virginia, that provides machine to machine (M2M) products for the tracking and management of mobile assets. Parent company Telular Corporation is a fully owned subsidiary of Avista Capital Partners, a private equity firm based in the United States. SkyBitz is a remote asset tracking and information management service provider, specializing in real-time decision-making tools for companies with unpowered assets such as tractor-trailers, intermodal containers, chassis, rail cars, power generators, heavy equipment, and other assets. The company's asset tracking products are delivered using a software as a service (SaaS) model to commercial, transportation, military, and public safety customers, including sensitive shipment haulers of Arms, Ammunition, and Explosives (AA&E) cargos. With the acquisition of commercial telematics companies Reltima and GPS North America in 2015, SkyBitz entered the local fleet management market. Overview SkyBitz was founded in 1993 in the Washington, D.C., metropolitan area under the name Eagle Eye Technologies, Inc. and used both DARPA and the United States Air Force SBIR funding to develop its patented Global Locating System (GLS). GLS technology differs from traditional Global Positioning System (GPS) technology in that positioning calculations are done centrally and automatically at an operations center instead of locally on each device. In early 2002, the company—now called SkyBitz—announced the commercial launch of its GLS technology. By 2007 SkyBitz had acquired more than 400 customers in North America and added two new products to its existing suite. It was ranked in Inc. 5000 for the first time that year, as well as in numerous Fast 500 programs from Deloitte Touche Tohmatsu Limited. By 2012 SkyBitz had captured the interest of Telular Corporation, a global leader in connecting businesses and machines over wireless networks. The acquisition of SkyBitz by Telular for a total of $42 million was announced in December 2012. Less than a year later, private equity firm Avista Capital Partners completed its acquisition of Telular for a 31 percent premium to its stock market price. SkyBitz operates as a wholly owned subsidiary of Telular and remains headquartered in the Washington, D.C., metropolitan area. Today SkyBitz continues to utilize its GLS technology and also offers a variety of other asset tracking solutions based on a portfolio of GPS units and sensors. They operate over many communications networks, ranging from Low Earth orbit and geostationary satellite systems to GSM/GPRS networks. In 2014 SkyBitz won two awards from M2M Evolution Magazine for its GPS-operating asset management product—the M2M Product of the Year Award for Exceptional Innovation and the M2M Evolution Asset Tracking Award. In 2014 SkyBitz also began offering a subscription service, which amortizes all of the costs into one monthly recurring fee. Traditionally technology products and services are sold through the combination of an initial capital expense for the hardware and a monthly service fee. The SkyBitz subscription includes the hardware costs in a monthly subscription. History Research and development In 1992 Matthew Schor started the company as Eagle Eye Location Services Corp. The firm was awarded its first SBIR contract from DARPA in July 1994 titled "Miniature, Affordable Satellite Beacon". 
In November 1995 the company changed its name to Eagle Eye Technologies, Inc., and the Defense Advanced Research Projects Agency (DARPA) awarded Eagle Eye a $500,000 contract to miniaturize the RF front end of the satellite tag transceiver. In 1997 the Virginia Center for Innovative Technology provided a research grant that allowed Eagle Eye to work with professor Warren Stutzman at Virginia Tech in Blacksburg, Virginia to further develop their technology. In 1998 the United States Air Force awarded Eagle Eye the first of another two contracts to miniaturize the tracking system and reduce its power consumption. The company's chief scientist, Mark Sullivan, as well as the vice president, James Kilfeather, received patents on technologies used by SkyBitz. The company's chief technical officer, Jay Brosius, led the design effort for the communications protocol (Slotted Aloha) and architecture. In October 1998 World Wireless Communications of Utah announced an agreement to produce a prototype mobile terminal employing Global Locating System (GLS) for asset location. Eagle Eye was able to demonstrate the GLS technology via the MSAT geostationary satellite built by Hughes Space and Electronics (now owned by Boeing). In July 2000, Zero Gravity Venture Partners invested in a Series A venture capital round. Raising capital at this time could not have been more fortuitous for SkyBitz, because the unforeseen events of 9/11 made raising capital even more difficult. It took an extended year to close the next round of financing and bring the product and service to market. The dedication of the entire team culminated in the pre-production manufacturing of the product with Solectron, development of the web-based service, and field testing with early key accounts. Landstar had tested 500 units and found that SkyBitz met their requirements. These efforts ultimately attracted new investors to set the stage for market introduction in 2002. Commercial launch In early 2002, the company hired Michael Fitzgerald as CEO. Michael Fitzgerald had a stellar career in the trucking industry including one of the original management team at Federal Express. The company subsequently announced the launch of the GLS technology. In November 2002, AIG Highstar Capital and Cordova Ventures led an $18 million Series B investment. Helen Bentley Delich via Helen Bentley & Associates received $34,000 to assist and advise SkyBitz as it commercially launched into the trucking and transportation industry. Landstar System, one of the largest transportation firms in the U.S., adopted the service in January 2003. In early 2003, SkyBitz announced its existing manufacturing relationship with Solectron, which included new product introduction (NPI), test development, printed circuit board layout and design, card assembly and design of the external housing for SkyBitz's mobile terminal. Soon after, the company received a $1 million grant from the Transportation Security Administration for implementing GLS at the Port of Long Beach by the Maritime Administration. To support this trial, Matthew Schor was interviewed on MSNBC in February 2003 on how U.S. ports were vulnerable to attack and how technology could help. In May 2003 it was announced that the Defense Threat Reduction Agency issued a contract to Eagle Eye Technologies, Inc. for communication studies in support of its mission. In early 2003 the company then hired Andy Wood as CEO to replace Mr. Fitzgerald. 
In January 2004, Inverness Capital and Motorola made a $16 million Series C investment in SkyBitz. In April 2004, Andy Wood resigned as CEO and the company's CFO, Rick Burtner, became CEO. In 2005, SkyBitz acquired customers in the transportation industry including R&R Trucking, Tri-State Motor, Quality Distribution and J Rayl. The company was also selected as a "2005 Future 50" technology company by SmartCEO Magazine for its strategic direction and customer growth. In 2006, SkyBitz announced smart sensor tracking technology to optimize trailer utilization, improve reporting and maximize security. The company was named a “Rising Star” in Deloitte & Touche USA LLP's Technology Fast 50 program for the state of Virginia. SkyBitz also became the official tracking solution for the delivery of the United States Capitol Christmas Tree. Also in 2006, Bob Blair joined the company as CFO. In February 2007, the Canadian Imperial Bank of Commerce (via CIBC Capital Partners) led the fourth round of funding totaling $10 million. In October 2007, Homaira Akbari replaced Burtner as CEO. By the end of 2007 SkyBitz had acquired more than 400 customers in North America and added two new products: a cargo sensor and tractor/trailer ID. The company was also ranked in Inc. 500, the Deloitte Wireless Fast 50, Deloitte Technology Fast 500, The Deloitte Technology Fast 50 for the states of Virginia and Maryland, and the Heavy Duty Trucking Nifty Fifty Award. Expansion In January 2009, SkyBitz and research firm CSMG, the strategy division of TMNG Global, announced new research quantifying the benefits of remote asset management. Then in April 2009, SkyBitz expanded its sales coverage into Canada with a partnership with ELM Technologies. Also in April, a case study was released by the Defense Advanced Research Projects Agency (DARPA) highlighting SkyBitz technology. Later in 2009 the company launched a new terrestrial-based tracking solution on Kore Networks, announced it received Defense Transportation Tracking System II (DTTS) certification by The Military Surface Deployment and Distribution Commands (SDDC), and launched a new asset tracking software for trailer leasing companies. In 2010, SkyBitz announced a strategic partnership with Iridium Communications Inc. By 2012 SkyBitz had launched a new Iridium-based global solution. This quick expansion phase resulted in SkyBitz being named to Inc. 5000 for five consecutive years from 2007 to 2011. It also drew the attention of Telular Corporation, an American company that utilizes wireless technology to provide security alarm systems and liquid tank level monitoring. Telular, under CEO Joseph Beatty, acquired SkyBitz in December 2012 for a purchase price of $42 million ($35 million cash and $7 million newly issued shares of common stock). The acquisition increased the size of Telular by about 70 percent and created the largest "pure player" asset tracking and management company in the world. Former Senior Vice President of Sales Henry Popplewell was named Senior Vice President and General Manager of SkyBitz. In 2013 Telular was sold to private equity firm Avista Capital Partners through a tender offer for all outstanding shares of Telular stock at a 31 percent premium. Beatty stepped down as CEO in conjunction with the deal. Doug Milner was named CEO of Telular Corporation in August 2013. 
Also in 2013, SkyBitz launched a new custom-built cellular product that would win two awards from M2M Evolution magazine—the M2M Product of the Year Award for Exceptional Innovation and the M2M Evolution Asset Tracking Award. In 2014 SkyBitz introduced a new purchase option that eliminated initial capital expenses for hardware. Traditionally, technology products and services in the transportation and logistics industry are sold with the initial capital expense and a monthly service fee. The new SkyBitz offering is a subscription-based solution that requires only a monthly fee, representing a shift in purchasing style in the industry. U.S. XPress Enterprises, Interstate Distributor, and Transport America were early adopters of the new SaaS-based business model. In February 2015, SkyBitz announced a partnership with Pressure Systems International, Inc. (PSI), the "world leader in automatic tire inflation systems". Through the collaboration, joint customers gained the ability to view tire status and trailer location together in real-time. The partnership was aimed at reducing maintenance costs for fleets of trailers. Also in 2015, Telular acquired two companies that were integrated under the SkyBitz division. The acquisition of these two companies, GPS North America and Reltima, allowed SkyBitz to enter the local fleet management market. In October 2017, SkyBitz's parent company Telular announced the partnership between SkyBitz and Velociti, a provider of "technology deployment services". The focus of the partnership is to provide North American customers with faster deployment times of SkyBitz's technologies across their fleets. In April 2018, SkyBitz became a "NASPO approved technology vendor". This partnership allows SkyBitz to work with government contractors in all 50 states, the District of Columbia, and the organized United States territories. References Transportation engineering Electronics companies of the United States Trucking industry in the United States American companies established in 1993 Electronics companies established in 1993 1993 establishments in Virginia Companies based in Fairfax County, Virginia
SkyBitz
Engineering
2,447
4,338,718
https://en.wikipedia.org/wiki/PortMidi
PortMidi is a computer library for real time input and output of MIDI data. It is designed to be portable to many different operating systems. PortMidi is part of the PortMusic project. See also PortAudio External links portmidi.h – definition of the API and contains the documentation for PortMidi Audio libraries Computer libraries
PortMidi
Technology
71
13,677,019
https://en.wikipedia.org/wiki/Cold%20drop
A cold drop is a term used in Spain and France that has commonly come to refer to any high-impact rainfall event occurring in the autumn along the Spanish Mediterranean coast or across France. In Europe, cold drops are among the characteristic features of the Mediterranean climate. The Spanish-language name gota fría was directly adapted from the term Kaltlufttropfen ("cold air drop") introduced by German meteorologists, and became very popular in 1980s Spain as a blanket term to refer to any high-impact rainfall event. In the Spanish Levante, these events are typically caused by the interaction of upper-level low-pressure systems, strangled and ultimately detached from the zonal (eastward) circulation and displaying stationary or retrograde (westward) motion, with humid, warmer air masses that form over an overheated Mediterranean Sea in the autumn. The Spanish equivalent of cut-off low is DANA (Depresión Aislada en Niveles Altos). Such recurring synoptic configurations are not necessarily associated with cold drop events.

Occurrence
Spain
If a sudden cut-off in the jet stream takes place (particularly over the Atlantic Ocean), a pocket of cold air detaches from the main jet stream, moving southward over the Pyrenees into the warm air over Spain, causing its most dramatic effects in the southeast of Spain, particularly along the Spanish Mediterranean coast, especially in the Valencian Community. The torrential rain caused by a cold drop can result in devastation caused by torrents and flash floods. This phenomenon is associated with extremely violent downpours and storms, but is not always accompanied by significant rainfall. For heavy rainfall to occur, high atmospheric instability in the lower air layers needs to combine with a significant amount of moisture.

Disasters
The great Valencia flood on 14 October 1957 was the result of a three-day-long cold drop and caused the deaths of at least 81 people. The Vallès floods on 25 September 1962 in the province of Barcelona were caused by a cold drop (gota fría), producing heavy rain and overflowing the Llobregat and Besòs rivers. The official death toll was 617. On the night of 29-30 October 2024, a DANA event caused considerable loss of life and extensive damage, especially in the Valencian Community and the provinces of Albacete, Almería, and Málaga.

Other areas
Cut-off lows are apparent near the Sierra Nevada de Santa Marta in the Colombian Caribbean, with peaks surpassing 5 km in altitude in close proximity to a warm sea. They can also occur elsewhere in the southern hemisphere, such as in South Africa, Namibia, South America and southern Australia. In the northern hemisphere, besides Southern Europe and France, they can occur in China and Siberia, the North Pacific, the Northeastern United States and the northeast Atlantic.

See also
Cold-core low
Cold pool
Polar vortex

Notes
References
Flood Types of cyclone Atmospheric dynamics Meteorological phenomena Cold Weather events Climate of Spain Climate of France
Cold drop
Physics,Chemistry,Environmental_science
571
1,262,571
https://en.wikipedia.org/wiki/Diazo
In organic chemistry, the diazo group is an organic moiety consisting of two linked nitrogen atoms at the terminal position. Overall charge-neutral organic compounds containing the diazo group bound to a carbon atom are called diazo compounds or diazoalkanes and are described by the general structural formula R2C=N2. The simplest example of a diazo compound is diazomethane, CH2N2. Diazo compounds (R2C=N2) should not be confused with azo compounds (R−N=N−R) or with diazonium compounds (R−N2+).

Structure
The electronic structure of diazo compounds is characterized by π electron density delocalized over the α-carbon and the two nitrogen atoms, along with an orthogonal π system with electron density delocalized over only the terminal nitrogen atoms. Because all octet-rule-satisfying resonance forms of diazo compounds have formal charges, they are members of a class of compounds known as 1,3-dipoles. Some of the most stable diazo compounds are α-diazo-β-diketones and α-diazo-β-diesters, in which the electron density is further delocalized into an electron-withdrawing carbonyl group. In contrast, most diazoalkanes without electron-withdrawing substituents, including diazomethane itself, are explosive. A commercially relevant diazo compound is ethyl diazoacetate (N2CHCOOEt). A group of isomeric compounds with only a few similar properties are the diazirines, in which the carbon and the two nitrogens are linked as a ring. Four resonance structures can be drawn. Compounds with the diazo moiety should be distinguished from diazonium compounds, which have the same terminal azo group but bear an overall positive charge, and from azo compounds, in which the azo group bridges two organic substituents.

History
Diazo compounds were first produced by Peter Griess, who had discovered a versatile new chemical reaction, as detailed in his 1858 paper "Preliminary notice on the influence of nitrous acid on aminonitro- and aminodinitrophenol."

Synthesis
Several methods exist for the preparation of diazo compounds.

From amines
Alpha-acceptor-substituted primary aliphatic amines R-CH2-NH2 (R = COOR, CN, CHO, COR) react with nitrous acid to generate the diazo compound.

From diazomethyl compounds
An example of an electrophilic substitution using a diazomethyl compound is the reaction between an acyl halide and diazomethane, for example the first step in the Arndt-Eistert synthesis.

By diazo transfer
In diazo transfer, certain carbon acids react with tosyl azide in the presence of a weak base such as triethylamine or DBU. The byproduct is the corresponding tosylamide (p-toluenesulfonamide). This reaction is also called the Regitz diazo transfer. Examples are the syntheses of tert-butyl diazoacetate and diazomalonate. Methyl phenyldiazoacetate is generated in this way by treating methyl phenylacetate with p-acetamidobenzenesulfonyl azide in the presence of base. The mechanism involves attack of the enolate at the terminal nitrogen, proton transfer, and expulsion of the anion of the sulfonamide. Use of the β-carbonyl aldehyde leads to a deformylative variant of the Regitz transfer, which is useful for the preparation of diazo compounds stabilized by only one carbonyl group.

From N-alkyl-N-nitroso compounds
Diazo compounds can be obtained in an elimination reaction of N-alkyl-N-nitroso compounds, such as in the synthesis of diazomethane from Diazald or MNNG. (The mechanism shown here is one possibility. For an alternative mechanism for the analogous formation of diazomethane from an N-nitrososulfonamide, see the page on Diazald.)
From hydrazones
Hydrazones are oxidized (dehydrogenated), for example with silver oxide or mercury oxide, as in the synthesis of 2-diazopropane from acetone hydrazone. Other oxidizing reagents are lead tetraacetate, manganese dioxide and the Swern reagent. Tosyl hydrazones R2C=N-NHTs are reacted with a base, for example triethylamine, as in the synthesis of crotyl diazoacetate and in the synthesis of phenyldiazomethane from PhCH=N-NHTs and sodium methoxide. Reaction of a carbonyl group with the hydrazine 1,2-bis(tert-butyldimethylsilyl)hydrazine to form the hydrazone, followed by reaction with the iodane difluoroiodobenzene, yields the diazo compound.

From azides
One method is described for the synthesis of diazo compounds from azides using phosphines.

Reactions
In cycloadditions
Diazo compounds react as 1,3-dipoles in diazoalkane 1,3-dipolar cycloadditions.

As carbene precursors
Diazo compounds are used as precursors to carbenes, which are generated by thermolysis or photolysis, for example in the Wolff rearrangement. (In this regard, they resemble diazirines.) As such they are used in cyclopropanation, for example in the reaction of ethyl diazoacetate with styrene. Certain diazo compounds can couple to form alkenes in a formal carbene dimerization reaction. Diazo compounds are intermediates in the Bamford–Stevens reaction of tosylhydrazones to alkenes, again with a carbene intermediate. In the Doyle–Kirmse reaction, certain diazo compounds react with allyl sulfides to give the homoallyl sulfide. Intramolecular reactions of diazocarbonyl compounds provide access to cyclopropanes. In the Buchner ring expansion, diazo compounds react with aromatic rings with ring expansion.

As nucleophile
The Buchner-Curtius-Schlotterbeck reaction yields ketones from aldehydes and aliphatic diazo compounds. The reaction type is nucleophilic addition.

Occurrence in nature
Several families of naturally occurring products feature the diazo group. The kinamycins and lomaiviticins are DNA-cleaving natural products, with the diazo functionality as their "warhead". In the presence of a reducing agent, loss of N2 occurs to generate a DNA-cleaving fluorenyl radical. One biochemical process for diazo formation is the L-aspartate-nitro-succinate (ANS) pathway. It involves a sequence of enzyme-mediated redox reactions to generate nitrite by way of a nitrosuccinic acid intermediate. This pathway appears to be active in several different Streptomyces species, and homologous genes appear widespread in actinobacteria.

See also
Reprography
Whiteprint

Notes
References
Functional groups
Diazo
Chemistry
1,479
15,057,268
https://en.wikipedia.org/wiki/Grassy%20Knob%20Wilderness
Grassy Knob Wilderness is a wilderness area in the Klamath Mountains of southwestern Oregon, within the Rogue River-Siskiyou National Forest. It was designated wilderness by the United States Congress in 1984 and now comprises a total of . Like most wilderness areas in Oregon, Grassy Knob is managed by the Forest Service. Topography Elevations in Grassy Knob Wilderness range from near sea level to at the summit of Grassy Knob. Many small streams tumble for short distances over waterfalls and through ravines in the Wilderness. The primary drainage is Dry Creek, a tributary of the Sixes River, and marks the northern boundary of the Wilderness, while the Elk River marks the southern border. The Elk has Wild and Scenic designation. Vegetation Grassy Knob Wilderness is home to the Port Orford cedar, including some stands of old growth with some trunks exceeding six feet in diameter. Old growth Douglas fir exists in the Wilderness as well. Fish Both the Elk and Sixes Rivers are major steelhead and salmon streams. Some claim that the Elk River is the most productive salmon stream of its size outside of Alaska. Wild native cutthroat trout can also be found here. Recreation Popular recreational activities in the Grassy Knob Wilderness include fishing, wildlife watching and hiking, though there are very few established trails. The Wilderness also offers extraordinary opportunities for solitude. See also List of Oregon Wildernesses List of U.S. Wilderness Areas Wilderness Act List of old growth forests References External links Grassy Knob Wilderness - Rogue River-Siskiyou National Forest Rogue River-Siskiyou National Forest - Wild and Scenic Elk River Klamath Mountains Rogue River-Siskiyou National Forest Protected areas of Curry County, Oregon Wilderness areas of Oregon Old-growth forests 1984 establishments in Oregon Protected areas established in 1984
Grassy Knob Wilderness
Biology
349
11,774,456
https://en.wikipedia.org/wiki/Phoma%20draconis
Phoma draconis is a fungal plant pathogen. See also List of foliage plant diseases (Agavaceae) References External links Index Fungorum USDA ARS Fungal Database Fungal plant pathogens and diseases draconis Fungi described in 1983 Fungus species
Phoma draconis
Biology
52
11,015,023
https://en.wikipedia.org/wiki/Selective%20exposure%20theory
Selective exposure is a theory within the practice of psychology, often used in media and communication research, that historically refers to individuals' tendency to favor information which reinforces their pre-existing views while avoiding contradictory information. Selective exposure has also been known and defined as "congeniality bias" or "confirmation bias" in various texts throughout the years. According to the historical use of the term, people tend to select specific aspects of exposed information which they incorporate into their mindset. These selections are made based on their perspectives, beliefs, attitudes, and decisions. People can mentally dissect the information they are exposed to and select favorable evidence, while ignoring the unfavorable. The foundation of this theory is rooted in cognitive dissonance theory, which asserts that when individuals are confronted with contrasting ideas, certain mental defense mechanisms are activated to produce harmony between new ideas and pre-existing beliefs, which results in cognitive equilibrium. Cognitive equilibrium, which is defined as a state of balance between a person's mental representation of the world and his or her environment, is crucial to understanding selective exposure theory. According to Jean Piaget, when a mismatch occurs, people find it to be "inherently dissatisfying". Selective exposure relies on the assumption that one will continue to seek out information on an issue even after an individual has taken a stance on it. The position that a person has taken will be colored by various factors of that issue that are reinforced during the decision-making process. According to Stroud (2008), theoretically, selective exposure occurs when people's beliefs guide their media selections. Selective exposure has been displayed in various contexts such as self-serving situations and situations in which people hold prejudices regarding outgroups, particular opinions, and personal and group-related issues. Perceived usefulness of information, perceived norm of fairness, and curiosity about valuable information are three factors that can counteract selective exposure. Also of note is the theory of "Selective Participation" proposed by Sir Godson David in 2024. This theory suggests that individuals have the ability to selectively participate in certain aspects of events or activities that are most meaningful or important to them, while being fully aware of the consequences of neglecting other aspects. In this theory, individuals may prioritize certain elements of an event based on personal values, interests, or goals, and may choose to invest their time, energy, and resources in these specific areas. They may also make conscious decisions to limit participation in other aspects of the event, recognizing that they cannot engage fully in all aspects simultaneously. By selectively participating in specific aspects of events, individuals can focus on what matters most to them, optimize their resources and efforts in those areas, and compensate for any potential neglect in other areas. This approach may allow individuals to maintain a sense of control, satisfaction, and well-being while navigating complex events or activities. Overall, the theory of Selective Participation emphasizes the importance of intentional decision-making and prioritization in event participation, acknowledging that individuals have the agency to choose where to direct their time and attention based on their individual preferences and goals.
Effect on decision-making Individual versus group decision-making Selective exposure can often affect the decisions people make as individuals or as groups because they may be unwilling to change their views and beliefs either collectively or on their own, despite conflicting and reliable information. An example of the effects of selective exposure is the series of events leading up to the Bay of Pigs Invasion in 1961. President John F. Kennedy was given the go ahead by his advisers to authorize the invasion of Cuba by poorly trained expatriates despite overwhelming evidence that it was a foolish and ill-conceived tactical maneuver. The advisers were so eager to please the President that they confirmed their cognitive bias for the invasion rather than challenging the faulty plan. Changing beliefs about one's self, other people, and the world are three variables as to why people fear new information. A variety of studies has shown that selective exposure effects can occur in the context of both individual and group decision making. Numerous situational variables have been identified that increase the tendency toward selective exposure. Social psychology, specifically, includes research with a variety of situational factors and related psychological processes that eventually persuade a person to make a quality decision. Additionally, from a psychological perspective, the effects of selective exposure can both stem from motivational and cognitive accounts. Effect of information quantity According to research study by Fischer, Schulz-Hardt, et al. (2008), the quantity of decision-relevant information that the participants were exposed to had a significant effect on their levels of selective exposure. A group for which only two pieces of decision-relevant information were given had experienced lower levels of selective exposure than the other group who had ten pieces of information to evaluate. This research brought more attention to the cognitive processes of individuals when they are presented with a very small amount of decision-consistent and decision-inconsistent information. The study showed that in situations such as this, an individual becomes more doubtful of their initial decision due to the unavailability of resources. They begin to think that there is not enough data or evidence in this particular field in which they are told to make a decision about. Because of this, the subject becomes more critical of their initial thought process and focuses on both decision-consistent and inconsistent sources, thus decreasing his level of selective exposure. For the group who had plentiful pieces of information, this factor made them confident in their initial decision because they felt comfort from the fact that their decision topic was well-supported by a large number of resources. Therefore, the availability of decision-relevant and irrelevant information surrounding individuals can influence the level of selective exposure experienced during the process of decision-making. Selective exposure is prevalent within singular individuals and groups of people and can influence either to reject new ideas or information that is not commensurate with the original ideal. In Jonas et al. (2001) empirical studies were done on four different experiments investigating individuals' and groups' decision making. This article suggests that confirmation bias is prevalent in decision making. Those who find new information often draw their attention towards areas where they hold personal attachment. 
Thus, people are driven toward pieces of information that are coherent with their own expectations or beliefs as a result of this selective exposure theory occurring in action. Throughout the process of the four experiments, generalization is always considered valid and confirmation bias is always present when seeking new information and making decisions. Accuracy motivation and defense motivation Fischer and Greitemeyer (2010) explored individuals' decision making in terms of selective exposure to confirmatory information. Selective exposure posed that individuals make their decisions based on information that is consistent with their decision rather than information that is inconsistent. Recent research has shown that "Confirmatory Information Search" was responsible for the 2008 bankruptcy of the Lehman Brothers Investment Bank which then triggered the Global Financial Crisis. In the zeal for profit and economic gain, politicians, investors, and financial advisors ignored the mathematical evidence that foretold the housing market crash in favor of flimsy justifications for upholding the status quo. Researchers explain that subjects have the tendency to seek and select information using their integrative model. There are two primary motivations for selective exposure: Accuracy Motivation and Defense Motivation. Accuracy Motivation explains that an individual is motivated to be accurate in their decision making and Defense Motivation explains that one seeks confirmatory information to support their beliefs and justify their decisions. Accuracy motivation is not always beneficial within the context of selective exposure and can instead be counterintuitive, increasing the amount of selective exposure. Defense motivation can lead to reduced levels of selective exposure. Personal attributes Selective exposure avoids information inconsistent with one's beliefs and attitudes. For example, former Vice President Dick Cheney would only enter a hotel room after the television was turned on and tuned to a conservative television channel. When analyzing a person's decision-making skills, his or her unique process of gathering relevant information is not the only factor taken into account. Fischer et al. (2010) found it important to consider the information source itself, otherwise explained as the physical being that provided the source of information. Selective exposure research generally neglects the influence of indirect decision-related attributes, such as physical appearance. In Fischer et al. (2010) two studies hypothesized that physically attractive information sources resulted in decision makers to be more selective in searching and reviewing decision-relevant information. Researchers explored the impact of social information and its level of physical attractiveness. The data was then analyzed and used to support the idea that selective exposure existed for those who needed to make a decision. Therefore, the more attractive an information source was, the more positive and detailed the subject was with making the decision. Physical attractiveness affects an individual's decision because the perception of quality improves. Physically attractive information sources increased the quality of consistent information needed to make decisions and further increased the selective exposure in decision-relevant information, supporting the researchers' hypothesis. Both studies concluded that attractiveness is driven by a different selection and evaluation of decision-consistent information. 
Decision makers allow factors such as physical attractiveness to affect everyday decisions due to the workings of selective exposure. In another study, selective exposure was examined in terms of individual confidence. Individuals can control the amount of selective exposure they experience depending on whether they have low or high self-esteem. Individuals who maintain higher confidence levels reduce the amount of selective exposure. Albarracín and Mitchell (2004) hypothesized that those who displayed higher confidence levels were more willing to seek out information both consistent and inconsistent with their views. The phrase "decision-consistent information" describes the tendency to actively seek decision-relevant information. Selective exposure occurs when individuals search for information and show systematic preferences towards ideas that are consistent, rather than inconsistent, with their beliefs. On the contrary, those who exhibited low levels of confidence were more inclined to examine information that did not agree with their views. The researchers found that in three out of five studies participants showed more confidence and scored higher on the Defensive Confidence Scale, which serves as evidence that their hypothesis was correct. Bozo et al. (2009) investigated anxiety about death and compared it across various age groups in relation to health-promoting behaviors. Researchers analyzed the data using terror management theory and found that age had no direct effect on specific behaviors. The researchers had expected that a fear of death would yield health-promoting behaviors in young adults. When individuals are reminded of their own death, it causes stress and anxiety, but eventually leads to positive changes in their health behaviors. Their conclusions showed that older adults were consistently better at promoting and practicing good health behaviors, without thinking about death, compared to young adults. Young adults were less motivated to change and practice health-promoting behaviors because they used selective exposure to confirm their prior beliefs. Selective exposure thus creates barriers between the behaviors of different age groups, but there is no specific age at which people change their behaviors. Though physical appearance will impact one's personal decision regarding an idea presented, a study conducted by Van Dillen, Papies, and Hofmann (2013) suggests a way to decrease the influence of personal attributes and selective exposure on decision-making. The results from this study showed that people do pay more attention to physically attractive or tempting stimuli; however, this phenomenon can be decreased by increasing the "cognitive load." In this study, increasing cognitive activity led to a decreased impact of physical appearance and selective exposure on the individual's impression of the idea presented. This is explained by acknowledging that we are instinctively drawn to certain physical attributes, but if the required resources for this attraction are otherwise engaged at the time, then we might not notice these attributes to an equal extent. For example, if a person is simultaneously engaging in a mentally challenging activity during the time of exposure, then it is likely that less attention will be paid to appearance, which leads to a decreased impact of selective exposure on decision-making.
Theories accounting for selective exposure Cognitive dissonance theory Leon Festinger is widely considered the father of modern social psychology, as important a figure to that field as Freud was to clinical psychology and Piaget was to developmental psychology. He was considered to be one of the most significant social psychologists of the 20th century. His work demonstrated that it is possible to use the scientific method to investigate complex and significant social phenomena without reducing them to the mechanistic connections between stimulus and response that were the basis of behaviorism. Festinger proposed the groundbreaking theory of cognitive dissonance that has become the foundation of selective exposure theory today, even though Festinger was considered an "avant-garde" psychologist when he first proposed it in 1957. In an ironic twist, Festinger realized that he himself was a victim of the effects of selective exposure. He was a heavy smoker his entire life, and when he was diagnosed with terminal cancer in 1989, he was said to have joked, "Make sure that everyone knows that it wasn't lung cancer!" Cognitive dissonance theory explains that when a person either consciously or unconsciously realizes they hold conflicting attitudes, thoughts, or beliefs, they experience mental discomfort. Because of this, an individual will avoid such conflicting information in the future since it produces this discomfort, and they will gravitate towards messages sympathetic to their own previously held conceptions. Decision makers are unable to evaluate information quality independently on their own (Fischer, Jonas, Dieter & Kastenmüller, 2008). When there is a conflict between pre-existing views and information encountered, individuals will experience an unpleasant and self-threatening state of aversive arousal which will motivate them to reduce it through selective exposure. They will begin to prefer information that supports their original decision and neglect conflicting information. Individuals will then seek out confirmatory information to defend their positions and reach the goal of dissonance reduction. Cognitive dissonance theory holds that dissonance is a psychological state of tension that people are motivated to reduce. Dissonance causes feelings of unhappiness, discomfort, or distress. Festinger asserted the following: "These two elements are in a dissonant relation if, considering these two alone, the obverse of one element would follow from the other." To reduce dissonance, people add consonant cognitions or change evaluations for one or both cognitions in order to make them more consistent mentally. Such experience of psychological discomfort was found to drive individuals to avoid counterattitudinal information as a dissonance-reduction strategy. In Festinger's theory, there are two basic hypotheses: 1) The existence of dissonance, being psychologically uncomfortable, will motivate the person to try to reduce the dissonance and achieve consonance. 2) When dissonance is present, in addition to trying to reduce it, the person will actively avoid situations and information which would likely increase the dissonance. The theory of cognitive dissonance was developed in the mid-1950s to explain why people of strong convictions are so resistant to changing their beliefs even in the face of undeniable contradictory evidence. It occurs when people feel an attachment to and responsibility for a decision, position or behavior.
It increases the motivation to justify their positions through selective exposure to confirmatory information (Fischer, 2011). Fischer suggested that people have an inner need to ensure that their beliefs and behaviors are consistent. In an experiment that employed commitment manipulations, those manipulations impacted perceived decision certainty. Participants were free to choose attitude-consistent and inconsistent information to write an essay. Those who wrote an attitude-consistent essay showed higher levels of confirmatory information search (Fischer, 2011). The levels and magnitude of dissonance also play a role. Selective exposure to consistent information is likely under certain levels of dissonance. At high levels, a person is expected to seek out information that increases dissonance, because the best strategy to reduce dissonance would then be to alter one's attitude or decision (Smith et al., 2008). Subsequent research on selective exposure within dissonance theory produced weak empirical support until the theory was revised and new methods, more conducive to measuring selective exposure, were implemented. To date, scholars still argue that the empirical results supporting the selective exposure hypothesis are mixed, possibly due to problems with the methods of the experimental studies conducted. Another possible reason for the mixed results may be the failure to simulate an authentic media environment in the experiments. According to Festinger, the motivation to seek or avoid information depends on the magnitude of dissonance experienced (Smith et al., 2008). It is observed that there is a tendency for people to seek new information or select information that supports their beliefs in order to reduce dissonance. There exist three possibilities which will affect the extent of dissonance: Relative absence of dissonance. When little or no dissonance exists, there is little or no motivation to seek new information. For example, when there is an absence of dissonance, the lack of motivation to attend or avoid a lecture on 'The Advantages of Automobiles with Very High Horsepower Engines' will be independent of whether the car a new owner has recently purchased has a high or low horsepower engine. However, it is important to note the difference between a situation in which there is no dissonance and one in which the information has no relevance to present or future behavior. For the latter, accidental exposure, which the new car owner does not avoid, will not introduce any dissonance; while for the former individual, who also does not avoid information, dissonance may be accidentally introduced. The presence of moderate amounts of dissonance. The existence of dissonance and the consequent pressure to reduce it will lead to an active search for information, which will then lead people to avoid information that will increase dissonance. However, when faced with a potential source of information, there will be an ambiguous cognition to which a subject will react in terms of individual expectations about it. If the subject expects the cognition to increase dissonance, they will avoid it. In the event that one's expectations are proven wrong, the attempt at dissonance reduction may result in increasing it instead. It may in turn lead to a situation of active avoidance. The presence of extremely large amounts of dissonance. If two cognitive elements exist in a dissonant relationship, the maximum possible magnitude of dissonance is set by the resistance to change of the less resistant element.
If the dissonance becomes greater than the resistance to change, then the least resistant elements of cognition will be changed, reducing dissonance. When dissonance is close to the maximum limit, one may actively seek out and expose oneself to dissonance-increasing information. If an individual can increase dissonance to the point where it is greater than the resistance to change, he will change the cognitive elements involved, reducing or even eliminating dissonance. Once dissonance is increased sufficiently, an individual may bring himself to change, hence eliminating all dissonance. The reduction in cognitive dissonance following a decision can be achieved by selectively looking for decision-consonant information and avoiding contradictory information. The objective is to reduce the discrepancy between the cognitions, but which strategy will be chosen is not explicitly addressed by dissonance theory. It will depend on the quantity and quality of the information available inside and outside the cognitive system. Klapper's selective exposure In the early 1960s, Columbia University researcher Joseph T. Klapper asserted in his book The Effects of Mass Communication that audiences were not passive targets of political and commercial propaganda from mass media, but that mass media reinforces previously held convictions. Throughout the book, he argued that the media have only a small amount of power to influence people and, most of the time, merely reinforce our preexisting attitudes and beliefs. He argued that the media effects of relaying or spreading new public messages or ideas were minimal because there is a wide variety of ways in which individuals filter such content. Due to this tendency, Klapper argued that media content must be able to ignite some type of cognitive activity in an individual in order to communicate its message. Prior to Klapper's research, the prevailing opinion was that mass media had a substantial power to sway individual opinion and that audiences were passive consumers of prevailing media propaganda. However, by the time of the release of The Effects of Mass Communication, many studies had led to the conclusion that many specifically targeted messages were completely ineffective. Klapper's research showed that individuals gravitated towards media messages that bolstered previously held convictions set by peer groups, societal influences, and family structures, and that the acceptance of these messages did not change over time when audiences were presented with more recent media influence. Klapper noted from his review of research in the social sciences that, given the abundance of content within the mass media, audiences were selective about the types of programming that they consumed. Adults would patronize media that was appropriate for their demographics, and children would eschew media that was boring to them. So individuals would either accept or reject a mass media message based upon internal filters innate to that person. The following are Klapper's five mediating factors and conditions that affect people: Predispositions and the related processes of selective exposure, selective perception, and selective retention. The groups, and the norms of groups, to which the audience members belong. Interpersonal dissemination of the content of communication. The exercise of opinion leadership. The nature of mass media in a free enterprise society. Three basic concepts: Selective exposure – people keep away from communication of opposite hue.
Selective perception – if people are confronted with unsympathetic material, they either do not perceive it or reinterpret it to fit their existing opinion. Selective retention – refers to the process of categorizing and interpreting information in a way that favors one category or interpretation over another; furthermore, people simply forget unsympathetic material. Groups and group norms work as mediators. For example, one can be strongly disinclined to change to the Democratic Party if their family has voted Republican for a long time. In this case, the person's predisposition toward the political party is already set, so they do not perceive information about the Democratic Party or change their voting behavior because of mass communication. Klapper's third assumption is the interpersonal dissemination of mass communication. If someone has already been exposed to a view by close friends, which creates a predisposition toward it, this will lead to increased exposure to corresponding mass communication and eventually reinforce the existing opinion. An opinion leader is also a crucial factor in forming one's predisposition and can lead someone to be exposed to mass communication. The nature of commercial mass media also leads people to select certain types of media content. Cognitive economy model This newer model combines the motivational and cognitive processes of selective exposure. In the past, selective exposure had been studied from a motivational standpoint. For instance, the reason behind the existence of selective exposure was that people felt motivated to decrease the level of dissonance they felt while encountering inconsistent information. They also felt motivated to defend their decisions and positions, so they achieved this goal by exposing themselves to consistent information only. However, the cognitive economy model not only takes into account the motivational aspects, but also focuses on the cognitive processes of each individual. For instance, this model proposes that people cannot evaluate the quality of inconsistent information objectively and fairly because they tend to store more of the consistent information and use it as their reference point. Thus, inconsistent information is often observed with a more critical eye in comparison to consistent information. According to this model, the levels of selective exposure experienced during the decision-making process also depend on how much cognitive energy people are willing to invest. Just as people tend to be careful with their finances, they are careful with cognitive energy, that is, the amount of time and effort they are willing to spend evaluating all the evidence for their decisions. People are hesitant to use this energy; they tend to be careful so they do not waste it. Thus, this model suggests that selective exposure does not happen in separate stages. Rather, it is a combined process of individuals' motivations and their management of cognitive energy. Implications Media Recent studies have provided empirical evidence for the pervasive influence of selective exposure on the population at large through mass media. Researchers have found that individual media consumers will seek out programs to suit their individual emotional and cognitive needs. Individuals will seek out palliative forms of media during times of economic crisis to fulfill a "strong surveillance need", to decrease chronic dissatisfaction with life circumstances, and to fulfill needs for companionship.
Consumers tend to select media content that expresses and confirms their own ideas while avoiding information that argues against their opinion. A study conducted in 2012 showed that this type of selective exposure affects pornography consumption as well. Individuals with low levels of life satisfaction are more likely to have casual sex after consuming pornography that is congruent with their attitudes, while disregarding content that challenges their inherently permissive 'no strings attached' attitudes. Music selection is also affected by selective exposure. A 2014 study conducted by Christa L. Taylor and Ronald S. Friedman at the University at Albany, SUNY, found that mood congruence was affected by self-regulation of music mood choices. Subjects in the study chose happy music when feeling angry or neutral but listened to sad music when they themselves were sad. The choice of sad music given a sad mood was due less to mood-mirroring than to subjects' aversion to listening to happy music that was cognitively dissonant with their mood. Politics is more likely to inspire ongoing selective exposure among consumers, as opposed to single exposure decisions. For example, in their 2009 meta-analysis of selective exposure theory, Hart et al. reported that "A 2004 survey by The Pew Research Center for the People & the Press (2006) found that Republicans are about 1.5 times more likely to report watching Fox News regularly than are Democrats (34% for Republicans and 20% of Democrats). In contrast, Democrats are 1.5 times more likely to report watching CNN regularly than Republicans (28% of Democrats vs. 19% of Republicans). Even more striking, Republicans are approximately five times more likely than Democrats to report watching "The O'Reilly Factor" regularly and are seven times more likely to report listening to "Rush Limbaugh" regularly." As a result, when the opinions of Republicans who only tune into conservative media outlets were compared to those of their fellow conservatives in a study by Stroud (2010), their beliefs were found to be more polarized. The same result was obtained in the study of liberals as well. Due to this greater tendency toward selective exposure, current political campaigns have been characterized as extremely partisan and polarized. As Bennett and Iyengar (2008) commented, "The new, more diversified information environment makes it not only more feasible for consumers to seek out news they might find agreeable but also provides a strong economic incentive for news organizations to cater to their viewers' political preferences." Selective exposure thus plays a role in shaping and reinforcing individuals' political attitudes. In the context of these findings, Stroud (2008) comments, "The findings presented here should at least raise the eyebrows of those concerned with the noncommercial role of the press in our democratic system, with its role in providing the public with the tools to be good citizens." The role of public broadcasting, in its noncommercial capacity, is to counterbalance media outlets that deliberately devote their coverage to one political direction and thus drive selective exposure and political division in a democracy. Many academic studies on selective exposure, however, are based on the electoral system and media system of the United States. Countries with strong public service broadcasting, like many European countries, on the other hand, have less selective exposure based on political ideology or political party.
In Sweden, for instance, there were no differences in selective exposure to public service news between the political left and right over a period of 30 years. In early research, selective exposure originally provided an explanation for limited media effects. The "limited effects" model of communication emerged in the 1940s with a shift in the media effects paradigm. This shift suggested that while the media has effects on consumers' behavior, such as their voting behavior, these effects are limited and influenced indirectly by interpersonal discussions and the influence of opinion leaders. Selective exposure was considered one necessary factor in the early studies of media's limited power over citizens' attitudes and behaviors. Political ads involve selective exposure as well, because people are more likely to favor a politician who agrees with their own beliefs. Another significant effect of selective exposure comes from Stroud (2010), who analyzed the relationship between partisan selective exposure and political polarization. Using data from the 2004 National Annenberg Election Survey, analysts found that over time partisan selective exposure leads to polarization. This process is plausible because people can easily create or have access to blogs, websites, chats, and online forums where those with similar views and political ideologies can congregate. Much of the research has also shown that political interaction online tends to be polarized. Further evidence for this polarization in the political blogosphere can be found in Lawrence et al.'s (2010) study of blog readership, which found that people tend to read blogs that reinforce rather than challenge their political beliefs. According to Cass Sunstein's book, Republic.com, the presence of selective exposure on the web creates an environment that breeds political polarization and extremism. Due to easy access to social media and other online resources, people are "likely to hold even stronger views than the ones they started with, and when these views are problematic, they are likely to manifest increasing hatred toward those espousing contrary beliefs." This illustrates how selective exposure can influence an individual's political beliefs and subsequently their participation in the political system. One of the major academic debates on the concept of selective exposure is whether selective exposure contributes to people's exposure to diverse viewpoints or to polarization. Scheufele and Nisbet (2012) discuss the effects of encountering disagreement on democratic citizenship. Ideally, true civil deliberation among citizens would be the rational exchange of non-like-minded views (or disagreement). However, many of us tend to avoid disagreement on a regular basis because we do not like to confront others who hold views that are strongly opposed to our own. In this sense, the authors question whether exposure to non-like-minded information has positive or negative effects on democratic citizenship. While there are mixed findings about people's willingness to participate in political processes when they encounter disagreement, the authors argue that the issue of selectivity needs to be further examined in order to understand whether there is a truly deliberative discourse in the online media environment. See also Cherry picking Selection bias Voldemort effect References Bibliography Media studies Sociology of technology
Selective exposure theory
Technology
6,299
30,818,436
https://en.wikipedia.org/wiki/CAN/ULC%20S801
CAN/ULC S801: Standard on Electric Utility Workplace Electrical Safety for Generation, Transmission and Distribution. This National Standard of Canada applies to the construction, operation, maintenance and replacement of electric utility systems that are used to generate, transform, transmit, distribute and deliver electrical power or energy to consumer services or their equivalent. Purpose Live Working (high voltage) may present potential safety risks to workers and the general public. CAN/ULC-S801 gives electric utilities a foundation for safe working environments for their employees across Canada. CAN/ULC-S801 provides a complete safety guide addressing numerous electric utility workplace safety concerns, such as: Fundamental requirements Minimum approach distances for working near or on energized electrical lines or equipment Protective tools, equipment & devices Working on energized electrical lines and equipment Arc flash protection Radio frequency hazards Working on isolated electric utility systems Working near electric utility systems See also Electric Utility High-voltage hazards Arc Flash Lockout-Tagout Canadian Electrical Code Protective clothing References Electrical safety Construction industry of Canada Electrical standards Standards of Canada
CAN/ULC S801
Physics
216
61,142,179
https://en.wikipedia.org/wiki/C18H18N2O
The molecular formula C18H18N2O (molar mass: 278.348 g/mol) may refer to: AC-262,536 Demexiptiline Mariptiline Proquazone Molecular formulas
C18H18N2O
Physics,Chemistry
65
49,115,109
https://en.wikipedia.org/wiki/Albert%20de%20Grossouvre
Marie Félix Albert Durand de Grossouvre (23 August 1849, Bourges – 18 May 1932, Bourges) was a French geologist, best known for his research in the fields of stratigraphy and paleontology. Biography He studied at the École Polytechnique and the École des Mines de Paris, and afterwards worked as a mining engineer in his hometown of Bourges. In 1889 he attained the post of chief mining engineer. He conducted stratigraphic investigations throughout France, and in the process, uncovered numerous fossils, most notably ammonites. As a cartographer, he participated in the creation of geological maps of central France (Issoudun, Châteauroux, Valençay). In 1878 he was one of the founders of the Société scientifique, historique et archéologique de la Corrèze. He was an officer of the Légion d'honneur, and in 1913 became a correspondent member of the Académie des sciences (mineralogy section). In 1894 he circumscribed the ammonite subfamily Acanthoceratinae. He was also the taxonomic authority of the ammonite genera Barroisiceras, Gaudryceras, Hauericeras, Kossmaticeras and Peroniceras. Published works His major work on stratigraphy and paleontology, "Recherches sur la craie supérieure" (1893–1901), was published in two volumes (4 tomes). In 1911 he authored a book on the history of Bourges, titled "Le vieux Bourges". His other noteworthy written efforts include: Etude sur les gisements de phosphate de chaux du centre de la France, 1885 – Study on the deposits of phosphate of lime in central France. Etude sur les gisements de minerai de fer du centre de la France, 1886 – Study on the deposits of iron ore in central France. Sur le terrain crétacé dans le Sud-Ouest du bassin de Paris, 1889 – On the Cretaceous strata in the southwest of the Paris Basin. Contributions à la stratigraphie des Pyrénées, 1892 – Contributions to the stratigraphy of the Pyrénées. Sur l'Ammonites peramplus et quelques autres fossiles Turoniens, 1899 – On Ammonites peramplus and some other Turonian fossils. Sur quelques fossiles Crétacés de Madagascar, 1899 – On some Cretaceous fossils from Madagascar. Crétacé de la Touraine et du Maine, 1900 – The Cretaceous strata of Touraine and Maine. Etude paleogéographique sur le détroit de Poitiers, 1901 – Paleogeographic study of the strait of Poitiers. Sur le crétacé des Corbières, 1912 – On the Cretaceous strata of the Corbières Massif. References 1849 births 1932 deaths Scientists from Bourges French geologists French paleontologists Mining engineers
Albert de Grossouvre
Engineering
607
32,500
https://en.wikipedia.org/wiki/Vacuum%20pump
A vacuum pump is a type of pump device that draws gas particles from a sealed volume in order to leave behind a partial vacuum. The first vacuum pump was invented in 1650 by Otto von Guericke, and was preceded by the suction pump, which dates to antiquity. History Early pumps The predecessor to the vacuum pump was the suction pump. Dual-action suction pumps were found in the city of Pompeii. Arabic engineer Al-Jazari later described dual-action suction pumps as part of water-raising machines in the 13th century. He also said that a suction pump was used in siphons to discharge Greek fire. The suction pump later appeared in medieval Europe from the 15th century. By the 17th century, water pump designs had improved to the point that they produced measurable vacuums, but this was not immediately understood. What was known was that suction pumps could not pull water beyond a certain height: 18 Florentine yards according to a measurement taken around 1635. This limit was a concern in irrigation projects, mine drainage, and decorative water fountains planned by the Duke of Tuscany, so the duke commissioned Galileo Galilei to investigate the problem. Galileo suggested, incorrectly, in his Two New Sciences (1638) that the column of a water pump will break of its own weight when the water has been lifted to 34 feet. Other scientists took up the challenge, including Gasparo Berti, who replicated it by building the first water barometer in Rome in 1639. Berti's barometer produced a vacuum above the water column, but he could not explain it. A breakthrough was made by Galileo's student Evangelista Torricelli in 1643. Building upon Galileo's notes, he built the first mercury barometer and wrote a convincing argument that the space at the top was a vacuum. The height of the column was then limited to the maximum weight that atmospheric pressure could support; this is the limiting height of a suction pump. In 1650, Otto von Guericke invented the first vacuum pump. Four years later, he conducted his famous Magdeburg hemispheres experiment, showing that teams of horses could not separate two hemispheres from which the air had been evacuated. Robert Boyle improved Guericke's design and conducted experiments on the properties of vacuum. Robert Hooke also helped Boyle produce an air pump that helped to produce the vacuum. By 1709, Francis Hauksbee improved on the design further with his two-cylinder pump, where two pistons worked via a rack-and-pinion design that reportedly "gave a vacuum within about one inch of mercury of perfect." This design remained popular and only slightly changed until well into the nineteenth century. 19th century Heinrich Geissler invented the mercury displacement pump in 1855 and achieved a record vacuum of about 10 Pa (0.1 Torr). A number of electrical properties become observable at this vacuum level, and this renewed interest in vacuum. This, in turn, led to the development of the vacuum tube. The Sprengel pump was a widely used vacuum producer of this time. 20th century The early 20th century saw the invention of many types of vacuum pump, including the molecular drag pump, the diffusion pump, and the turbomolecular pump. Types Pumps can be broadly categorized according to three techniques: positive displacement, momentum transfer, and entrapment. Positive displacement pumps use a mechanism to repeatedly expand a cavity, allow gases to flow in from the chamber, seal off the cavity, and exhaust it to the atmosphere.
Momentum transfer pumps, also called molecular pumps, use high-speed jets of dense fluid or high-speed rotating blades to knock gas molecules out of the chamber. Entrapment pumps capture gases in a solid or adsorbed state; this includes cryopumps, getters, and ion pumps. Positive displacement pumps are the most effective for low vacuums. Momentum transfer pumps, in conjunction with one or two positive displacement pumps, are the most common configuration used to achieve high vacuums. In this configuration the positive displacement pump serves two purposes. First it obtains a rough vacuum in the vessel being evacuated before the momentum transfer pump can be used to obtain the high vacuum, as momentum transfer pumps cannot start pumping at atmospheric pressures. Second the positive displacement pump backs up the momentum transfer pump by evacuating to low vacuum the accumulation of displaced molecules in the high vacuum pump. Entrapment pumps can be added to reach ultrahigh vacuums, but they require periodic regeneration of the surfaces that trap air molecules or ions. Due to this requirement their available operational time can be unacceptably short in low and high vacuums, thus limiting their use to ultrahigh vacuums. Pumps also differ in details like manufacturing tolerances, sealing material, pressure, flow, admission or no admission of oil vapor, service intervals, reliability, tolerance to dust, tolerance to chemicals, tolerance to liquids and vibration. Positive displacement pump A partial vacuum may be generated by increasing the volume of a container. To continue evacuating a chamber indefinitely without requiring infinite growth, a compartment of the vacuum can be repeatedly closed off, exhausted, and expanded again. This is the principle behind a positive displacement pump, for example the manual water pump. Inside the pump, a mechanism expands a small sealed cavity to reduce its pressure below that of the atmosphere. Because of the pressure differential, some fluid from the chamber (or the well, in our example) is pushed into the pump's small cavity. The pump's cavity is then sealed from the chamber, opened to the atmosphere, and squeezed back to a minute size. More sophisticated systems are used for most industrial applications, but the basic principle of cyclic volume removal is the same: Rotary vane pump, the most common Diaphragm pump, zero oil contamination Liquid ring high resistance to dust Piston pump, fluctuating vacuum Scroll pump, highest speed dry pump Screw pump (10 Pa) Wankel pump External vane pump Roots blower, also called a booster pump, has highest pumping speeds but low compression ratio Multistage Roots pump that combine several stages providing high pumping speed with better compression ratio Toepler pump Lobe pump The base pressure of a rubber- and plastic-sealed piston pump system is typically 1 to 50 kPa, while a scroll pump might reach 10 Pa (when new) and a rotary vane oil pump with a clean and empty metallic chamber can easily achieve 0.1 Pa. A positive displacement vacuum pump moves the same volume of gas with each cycle, so its pumping speed is constant unless it is overcome by backstreaming. Momentum transfer pump In a momentum transfer pump (or kinetic pump), gas molecules are accelerated from the vacuum side to the exhaust side (which is usually maintained at a reduced pressure by a positive displacement pump). Momentum transfer pumping is only possible below pressures of about 0.1 kPa. 
Matter flows differently at different pressures based on the laws of fluid dynamics. At atmospheric pressure and mild vacuums, molecules interact with each other and push on their neighboring molecules in what is known as viscous flow. When the distance between the molecules increases, the molecules interact with the walls of the chamber more often than with the other molecules, and molecular pumping becomes more effective than positive displacement pumping. This regime is generally called high vacuum. Molecular pumps sweep out a larger area than mechanical pumps, and do so more frequently, making them capable of much higher pumping speeds. They do this at the expense of the seal between the vacuum and their exhaust. Since there is no seal, a small pressure at the exhaust can easily cause backstreaming through the pump; this is called stall. In high vacuum, however, pressure gradients have little effect on fluid flows, and molecular pumps can attain their full potential. The two main types of molecular pumps are the diffusion pump and the turbomolecular pump. Both types of pumps blow out gas molecules that diffuse into the pump by imparting momentum to the gas molecules. Diffusion pumps blow out gas molecules with jets of an oil or mercury vapor, while turbomolecular pumps use high-speed fans to push the gas. Both of these pumps will stall and fail to pump if exhausted directly to atmospheric pressure, so they must be exhausted to a lower-grade vacuum created by a mechanical pump, in this case called a backing pump. As with positive displacement pumps, the base pressure will be reached when leakage, outgassing, and backstreaming equal the pump speed, but now minimizing leakage and outgassing to a level comparable to backstreaming becomes much more difficult. Entrapment pump An entrapment pump may be a cryopump, which uses cold temperatures to condense gases to a solid or adsorbed state, a chemical pump, which reacts with gases to produce a solid residue, or an ion pump, which uses strong electrical fields to ionize gases and propel the ions into a solid substrate. A cryomodule uses cryopumping. Other types are the sorption pump, non-evaporative getter pump, and titanium sublimation pump (a type of evaporative getter that can be used repeatedly). Other types Regenerative pump Regenerative pumps utilize the vortex behavior of the fluid (air). The construction is based on a hybrid concept combining the centrifugal pump and the turbopump. Usually it consists of several sets of perpendicular teeth on the rotor that circulate air molecules inside stationary hollow grooves, like a multistage centrifugal pump. They can reach 1×10−5 mbar (0.001 Pa) (when combined with a Holweck pump) and exhaust directly to atmospheric pressure. Examples of such pumps are the Edwards EPX (technical paper) and the Pfeiffer OnTool™ Booster 150. This design is sometimes referred to as a side channel pump. Due to the high pumping rate from atmosphere to high vacuum, and the low contamination possible because the bearing can be installed on the exhaust side, this type of pump is used in load locks in semiconductor manufacturing processes. This type of pump suffers from high power consumption (~1 kW) compared to a turbomolecular pump (<100 W) at low pressure, since most of the power is consumed in exhausting against atmospheric pressure. This can be reduced by nearly 10 times by backing it with a small pump.
More examples Additional types of pump include the: Venturi vacuum pump (aspirator) (10 to 30 kPa) Steam ejector (vacuum depends on the number of stages, but can be very low) Performance measures Pumping speed refers to the volume flow rate of a pump at its inlet, often measured in volume per unit of time. Momentum transfer and entrapment pumps are more effective on some gases than others, so the pumping rate can be different for each of the gases being pumped, and the average volume flow rate of the pump will vary depending on the chemical composition of the gases remaining in the chamber. Throughput refers to the pumping speed multiplied by the gas pressure at the inlet, and is measured in units of pressure·volume/unit time. At a constant temperature, throughput is proportional to the number of molecules being pumped per unit time, and therefore to the mass flow rate of the pump. When discussing a leak in the system or backstreaming through the pump, throughput refers to the volume leak rate multiplied by the pressure at the vacuum side of the leak, so the leak throughput can be compared to the pump throughput. Positive displacement and momentum transfer pumps have a constant volume flow rate (pumping speed), but as the chamber's pressure drops, this volume contains less and less mass. So although the pumping speed remains constant, the throughput and mass flow rate drop exponentially. Meanwhile, the leakage, evaporation, sublimation and backstreaming rates continue to produce a constant throughput into the system. Techniques Vacuum pumps are combined with chambers and operational procedures into a wide variety of vacuum systems. Sometimes more than one pump will be used (in series or in parallel) in a single application. A partial vacuum, or rough vacuum, can be created using a positive displacement pump that transports a gas load from an inlet port to an outlet (exhaust) port. Because of their mechanical limitations, such pumps can only achieve a low vacuum. To achieve a higher vacuum, other techniques must then be used, typically in series (usually following an initial fast pump down with a positive displacement pump). Some examples might be use of an oil sealed rotary vane pump (the most common positive displacement pump) backing a diffusion pump, or a dry scroll pump backing a turbomolecular pump. There are other combinations depending on the level of vacuum being sought. Achieving high vacuum is difficult because all of the materials exposed to the vacuum must be carefully evaluated for their outgassing and vapor pressure properties. For example, oils, greases, and rubber or plastic gaskets used as seals for the vacuum chamber must not boil off when exposed to the vacuum, or the gases they produce would prevent the creation of the desired degree of vacuum. Often, all of the surfaces exposed to the vacuum must be baked at high temperature to drive off adsorbed gases. Outgassing can also be reduced simply by desiccation prior to vacuum pumping. High-vacuum systems generally require metal chambers with metal gasket seals such as Klein flanges or ISO flanges, rather than the rubber gaskets more common in low vacuum chamber seals. The system must be clean and free of organic matter to minimize outgassing. All materials, solid or liquid, have a small vapour pressure, and their outgassing becomes important when the vacuum pressure falls below this vapour pressure. 
As a result, many materials that work well in low vacuums, such as epoxy, will become a source of outgassing at higher vacuums. With these standard precautions, vacuums of 1 mPa are easily achieved with an assortment of molecular pumps. With careful design and operation, 1 μPa is possible. Several types of pumps may be used in sequence or in parallel. In a typical pumpdown sequence, a positive displacement pump would be used to remove most of the gas from a chamber, starting from atmosphere (760 Torr, 101 kPa) to 25 Torr (3 kPa). Then a sorption pump would be used to bring the pressure down to 10−4 Torr (10 mPa). A cryopump or turbomolecular pump would be used to bring the pressure further down to 10−8 Torr (1 μPa). An additional ion pump can be started below 10−6 Torr to remove gases which are not adequately handled by a cryopump or turbo pump, such as helium or hydrogen. Ultra-high vacuum generally requires custom-built equipment, strict operational procedures, and a fair amount of trial-and-error. Ultra-high vacuum systems are usually made of stainless steel with metal-gasketed vacuum flanges. The system is usually baked, preferably under vacuum, to temporarily raise the vapour pressure of all outgassing materials in the system and boil them off. If necessary, this outgassing of the system can also be performed at room temperature, but this takes much more time. Once the bulk of the outgassing materials are boiled off and evacuated, the system may be cooled to lower vapour pressures to minimize residual outgassing during actual operation. Some systems are cooled well below room temperature by liquid nitrogen to shut down residual outgassing and simultaneously cryopump the system. In ultra-high vacuum systems, some very odd leakage paths and outgassing sources must be considered. The water absorption of aluminium and palladium becomes an unacceptable source of outgassing, and even the absorptivity of hard metals such as stainless steel or titanium must be considered. Some oils and greases will boil off in extreme vacuums. The porosity of the metallic vacuum chamber walls may have to be considered, and the grain direction of the metallic flanges should be parallel to the flange face. The impact of molecular size must be considered. Smaller molecules can leak in more easily and are more easily absorbed by certain materials, and molecular pumps are less effective at pumping gases with lower molecular weights. A system may be able to evacuate nitrogen (the main component of air) to the desired vacuum, but the chamber could still be full of residual atmospheric hydrogen and helium. Vessels lined with a highly gas-permeable material such as palladium (which is a high-capacity hydrogen sponge) create special outgassing problems. 
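To make the throughput relation and the staged pumpdown described above concrete, the following Python sketch computes throughput as pumping speed times inlet pressure and estimates the idealized time to rough a chamber down from atmosphere. It is an illustrative sketch only: it assumes a constant pumping speed and ignores outgassing, leaks, backstreaming and conductance losses, and the chamber volume and pump speed are made-up example values rather than figures from this article.

import math

def throughput(pumping_speed_m3_s, inlet_pressure_pa):
    # Throughput Q = S * p, in Pa*m^3/s: pumping speed times inlet pressure.
    return pumping_speed_m3_s * inlet_pressure_pa

def pumpdown_time(volume_m3, pumping_speed_m3_s, p_start_pa, p_end_pa):
    # With constant speed S and no outgassing or leaks, p(t) = p_start * exp(-S*t/V),
    # so the time to reach p_end is t = (V/S) * ln(p_start/p_end).
    return (volume_m3 / pumping_speed_m3_s) * math.log(p_start_pa / p_end_pa)

# Example: a 0.1 m^3 chamber and a 10 L/s (0.01 m^3/s) roughing pump, taken from
# atmosphere (about 101 kPa) to 3 kPa, roughly the 760 Torr to 25 Torr step above.
print(throughput(0.01, 101000))                  # about 1010 Pa*m^3/s at the start
print(pumpdown_time(0.1, 0.01, 101000, 3000))    # about 35 s under these assumptions

Under these idealized assumptions the pressure falls exponentially, which matches the observation above that the throughput and mass flow rate of a constant-speed pump drop as the chamber pressure drops.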
Applications Vacuum pumps are used in many industrial and scientific processes, including: Vacuum deaerator Composite plastic moulding processes; Production of most types of electric lamps, vacuum tubes, and CRTs where the device is either left evacuated or re-filled with a specific gas or gas mixture; Semiconductor processing, notably ion implantation, dry etch and PVD, ALD, PECVD and CVD deposition and so on in photolithography; Electron microscopy; Medical processes that require suction; Uranium enrichment; Medical applications such as radiotherapy, radiosurgery and radiopharmacy; Analytical instrumentation to analyse gas, liquid, solid, surface and bio materials; Mass spectrometers to create a high vacuum between the ion source and the detector; vacuum coating on glass, metal and plastics for decoration, for durability and for energy saving, such as low-emissivity glass, hard coating for engine components (as in Formula One), ophthalmic coating, milking machines and other equipment in dairy sheds; Vacuum impregnation of porous products such as wood or electric motor windings; Air conditioning service (removing all contaminants from the system before charging with refrigerant); Trash compactor; Vacuum engineering; Sewage systems (see EN1091:1997 standards); Freeze drying; and Fusion research. In the field of oil regeneration and re-refining, vacuum pumps create a low vacuum for oil dehydration and a high vacuum for oil purification. A vacuum may be used to power, or provide assistance to mechanical devices. In hybrid and diesel engine motor vehicles, a pump fitted on the engine (usually on the camshaft) is used to produce a vacuum. In petrol engines, instead, the vacuum is typically obtained as a side-effect of the operation of the engine and the flow restriction created by the throttle plate but may be also supplemented by an electrically operated vacuum pump to boost braking assistance or improve fuel consumption. This vacuum may then be used to power the following motor vehicle components: vacuum servo booster for the hydraulic brakes, motors that move dampers in the ventilation system, throttle driver in the cruise control servomechanism, door locks or trunk releases. In an aircraft, the vacuum source is often used to power gyroscopes in the various flight instruments. To prevent the complete loss of instrumentation in the event of an electrical failure, the instrument panel is deliberately designed with certain instruments powered by electricity and other instruments powered by the vacuum source. Depending on the application, some vacuum pumps may either be electrically driven (using electric current) or pneumatically-driven (using air pressure), or powered and actuated by other means. Hazards Old vacuum-pump oils that were produced before circa 1980 often contain a mixture of several different dangerous polychlorinated biphenyls (PCBs), which are highly toxic, carcinogenic, persistent organic pollutants. See also An Experiment on a Bird in the Air Pump Vacuum sewerage References Bibliography External links Pumps Pumps Pumps 1640s introductions 1642 beginnings Gas technologies German inventions 17th-century inventions
Vacuum pump
Physics,Chemistry,Engineering
4,016
310,889
https://en.wikipedia.org/wiki/Coproduct
In category theory, the coproduct, or categorical sum, is a construction which includes as examples the disjoint union of sets and of topological spaces, the free product of groups, and the direct sum of modules and vector spaces. The coproduct of a family of objects is essentially the "least specific" object to which each object in the family admits a morphism. It is the category-theoretic dual notion to the categorical product, which means the definition is the same as the product but with all arrows reversed. Despite this seemingly innocuous change in the name and notation, coproducts can be and typically are dramatically different from products within a given category. Definition Let C be a category and let X1 and X2 be objects of C. An object is called the coproduct of X1 and X2, written X1 ∐ X2 or X1 ⊕ X2, or sometimes simply X1 + X2, if there exist morphisms i1 : X1 → X1 ∐ X2 and i2 : X2 → X1 ∐ X2 satisfying the following universal property: for any object Y and any morphisms f1 : X1 → Y and f2 : X2 → Y, there exists a unique morphism f : X1 ∐ X2 → Y such that f1 = f ∘ i1 and f2 = f ∘ i2. The unique arrow f making the corresponding diagram commute may be denoted f1 ∐ f2, f1 ⊕ f2, f1 + f2, or [f1, f2]. The morphisms i1 and i2 are called the canonical injections, although they need not be injections or even monic. The definition of a coproduct can be extended to an arbitrary family of objects indexed by a set J. The coproduct of the family {Xj : j ∈ J} is an object X together with a collection of morphisms ij : Xj → X such that, for any object Y and any collection of morphisms fj : Xj → Y, there exists a unique morphism f : X → Y such that fj = f ∘ ij for each j in J. The coproduct of the family {Xj} is often denoted ∐j∈J Xj or ⊕j∈J Xj. Sometimes the morphism f may be denoted [fj]j∈J to indicate its dependence on the individual fj. Examples The coproduct in the category of sets is simply the disjoint union, with the maps ij being the inclusion maps. Unlike direct products, coproducts in other categories are not all obviously based on the notion for sets, because unions don't behave well with respect to preserving operations (e.g. the union of two groups need not be a group), and so coproducts in different categories can be dramatically different from each other. For example, the coproduct in the category of groups, called the free product, is quite complicated. On the other hand, in the category of abelian groups (and equally for vector spaces), the coproduct, called the direct sum, consists of the elements of the direct product which have only finitely many nonzero terms. (It therefore coincides exactly with the direct product in the case of finitely many factors.) Given a commutative ring R, the coproduct in the category of commutative R-algebras is the tensor product. In the category of (noncommutative) R-algebras, the coproduct is a quotient of the tensor algebra (see free product of associative algebras). In the case of topological spaces, coproducts are disjoint unions with their disjoint union topologies. That is, it is a disjoint union of the underlying sets, and the open sets are sets open in each of the spaces, in a rather evident sense. In the category of pointed spaces, fundamental in homotopy theory, the coproduct is the wedge sum (which amounts to joining a collection of spaces with base points at a common base point).
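As a concrete illustration of the set-theoretic example above, the following Python sketch builds the coproduct of two sets as a tagged disjoint union, together with the canonical injections and the copairing map provided by the universal property. The function names are ad hoc choices for illustration and not part of any library.

def inj1(x):
    # Canonical injection i1 : X1 -> X1 + X2 (tag elements from the first set with 0).
    return (0, x)

def inj2(y):
    # Canonical injection i2 : X2 -> X1 + X2 (tag elements from the second set with 1).
    return (1, y)

def coproduct(X1, X2):
    # The disjoint union X1 + X2 as a set of tagged elements.
    return {inj1(x) for x in X1} | {inj2(y) for y in X2}

def copair(f1, f2):
    # The unique mediating map [f1, f2] : X1 + X2 -> Y with [f1, f2] o i1 = f1 and [f1, f2] o i2 = f2.
    def f(tagged):
        tag, value = tagged
        return f1(value) if tag == 0 else f2(value)
    return f

# Example: {1, 2} and {2, 3} have 4 elements in their coproduct, not 3, because the
# shared element 2 is kept twice under different tags.
X = coproduct({1, 2}, {2, 3})
f = copair(lambda n: n * 10, lambda n: -n)
assert len(X) == 4
assert f(inj1(2)) == 20 and f(inj2(2)) == -2   # the equations of the universal property

The tagging is what makes the union "disjoint": elements that appear in both input sets remain distinguishable, which is exactly the property that a plain union lacks.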
The concept of disjoint union secretly underlies the above examples: the direct sum of abelian groups is the group generated by the "almost" disjoint union (disjoint union of all nonzero elements, together with a common zero), and similarly for vector spaces: the space spanned by the "almost" disjoint union; the free product for groups is generated by the set of all letters from a similar "almost disjoint" union where no two elements from different sets are allowed to commute. This pattern holds for any variety in the sense of universal algebra. The coproduct in the category of Banach spaces with short maps is the l1 sum, which cannot be so easily conceptualized as an "almost disjoint" sum, but does have a unit ball almost-disjointly generated by the unit balls of the cofactors. The coproduct of a poset category is the join operation. Discussion The coproduct construction given above is actually a special case of a colimit in category theory. The coproduct in a category C can be defined as the colimit of any functor from a discrete category J into C. Not every family {Xj} will have a coproduct in general, but if it does, then the coproduct is unique in a strong sense: if X and Y are two coproducts of the family {Xj}, with canonical injections ij : Xj → X and kj : Xj → Y, then (by the definition of coproducts) there exists a unique isomorphism f : X → Y such that f ∘ ij = kj for each j in J. As with any universal property, the coproduct can be understood as a universal morphism. Let Δ : C → C × C be the diagonal functor which assigns to each object X the ordered pair (X, X) and to each morphism f the pair (f, f). Then the coproduct X1 ∐ X2 in C is given by a universal morphism to the functor Δ from the object (X1, X2) in C × C. The coproduct indexed by the empty set (that is, an empty coproduct) is the same as an initial object in C. If J is a set such that all coproducts for families indexed with J exist, then it is possible to choose the coproducts in a compatible fashion so that the coproduct turns into a functor C^J → C. The coproduct of the family {Xj} is then often denoted by ∐j Xj, and the maps ij are known as the natural injections. Letting HomC(U, V) denote the set of all morphisms from U to V in C (that is, a hom-set in C), we have a natural isomorphism HomC(∐j∈J Xj, Y) ≅ ∏j∈J HomC(Xj, Y), given by the bijection which maps every tuple of morphisms (fj)j∈J in ∏j∈J HomC(Xj, Y) (a product in Set, the category of sets, which is the Cartesian product, so it is a tuple of morphisms) to the morphism [fj]j∈J : ∐j∈J Xj → Y. That this map is a surjection follows from the commutativity of the diagram: any morphism f : ∐j∈J Xj → Y is the coproduct of the tuple (f ∘ ij)j∈J. That it is an injection follows from the universal construction, which stipulates the uniqueness of such maps. The naturality of the isomorphism is also a consequence of the diagram. Thus the contravariant hom-functor changes coproducts into products. Stated another way, the hom-functor, viewed as a functor from the opposite category C^op to Set, is continuous; it preserves limits (a coproduct in C is a product in C^op). If J is a finite set, say J = {1, ..., n}, then the coproduct of objects X1, ..., Xn is often denoted by X1 ⊕ ... ⊕ Xn. Suppose all finite coproducts exist in C, coproduct functors have been chosen as above, and 0 denotes the initial object of C corresponding to the empty coproduct. We then have natural isomorphisms X ⊕ (Y ⊕ Z) ≅ (X ⊕ Y) ⊕ Z ≅ X ⊕ Y ⊕ Z, X ⊕ 0 ≅ 0 ⊕ X ≅ X, and X ⊕ Y ≅ Y ⊕ X. These properties are formally similar to those of a commutative monoid; a category with finite coproducts is an example of a symmetric monoidal category. If the category has a zero object 0, then we have a unique morphism X → 0 (since 0 is terminal) and thus a morphism X ⊕ Y → 0 ⊕ Y. Since 0 is also initial, we have a canonical isomorphism 0 ⊕ Y ≅ Y as in the preceding paragraph. We thus have morphisms X ⊕ Y → X and X ⊕ Y → Y, by which we infer a canonical morphism X ⊕ Y → X × Y.
This may be extended by induction to a canonical morphism from any finite coproduct to the corresponding product. This morphism need not in general be an isomorphism; in Grp it is a proper epimorphism, while in Set* (the category of pointed sets) it is a proper monomorphism. In any preadditive category, this morphism is an isomorphism and the corresponding object is known as the biproduct. A category with all finite biproducts is known as a semiadditive category. If all families of objects indexed by J have coproducts in C, then the coproduct comprises a functor C^J → C. Note that, like the product, this functor is covariant. See also Product Limits and colimits Coequalizer Direct limit References External links Interactive Web page which generates examples of coproducts in the category of finite sets. Written by Jocelyn Paine. Limits (category theory)
Coproduct
Mathematics
1,766
72,458,942
https://en.wikipedia.org/wiki/Amanita%20suballiacea
Amanita suballiacea is a species of Amanita found on the US coast of the Gulf of Mexico, occurring with Quercus and Pinus. References External links suballiacea Fungi of North America Fungus species
Amanita suballiacea
Biology
46
8,533,740
https://en.wikipedia.org/wiki/Wenker%20synthesis
The Wenker synthesis is an organic reaction converting a beta amino alcohol to an aziridine with the help of sulfuric acid. It is used industrially for the synthesis of aziridine itself. The original Wenker synthesis of aziridine itself takes place in two steps. In the first step ethanolamine is reacted with sulfuric acid at high temperatures (250 °C) to form the sulfate monoester. This salt is then reacted with sodium hydroxide in the second step forming aziridine. The base abstracts an amine proton enabling it to displace the sulfate group. A modification of this reaction involving lower reaction temperatures (140–180 °C) and therefore reduced charring increases the yield of the intermediate. The Wenker synthesis protocol using trans-2-aminocyclooctanol, available from reaction of ammonia with the epoxide of cyclooctene, gives a mixture of cyclooctenimine (the Wenker aziridine product) and cyclooctanone (a competing Hofmann elimination product). Further reading References Aziridines Nitrogen heterocycle forming reactions Heterocycle forming reactions Substitution reactions Name reactions
Wenker synthesis
Chemistry
238
10,928,330
https://en.wikipedia.org/wiki/Hollow%20atom
Hollow atoms (discovered in 1990 by a French team of researchers around Jean-Pierre Briand) are short-lived multiply excited neutral atoms which carry a large part of their Z electrons (Z: the projectile nuclear charge) in high-n levels while inner shells remain (transiently) empty. Hollow atoms are exotic atomic species in which all, or most, electrons lie in excited states while the innermost shells are empty. These atomic species were first observed during the interaction of highly charged ions with surfaces. This population inversion typically arises for about 100 femtoseconds during the interaction of a slow highly charged ion (HCI) with a solid surface. Despite this limited lifetime, the formation and decay of a hollow atom can be conveniently studied from ejected electrons and soft X-rays, and from the trajectories, energy loss and final charge state distribution of surface-scattered projectiles. For impact on insulator surfaces, the potential energy carried by the hollow atom may also cause the release of target atoms and ions via potential sputtering and the formation of nanostructures on the surface. External links Review article on hollow atoms. EU Network ITS-LEIF MPI Heidelberg, Germany NIST, USA TU Wien, Austria Atoms References
Hollow atom
Physics
254
2,657,761
https://en.wikipedia.org/wiki/Indole-3-butyric%20acid
Indole-3-butyric acid (1H-indole-3-butanoic acid, IBA) is a white to light-yellow crystalline solid, with the molecular formula C12H13NO2. It melts at 125°C at atmospheric pressure and decomposes before boiling. IBA is a plant hormone in the auxin family and is an ingredient in many commercial horticultural plant rooting products. Plant hormone Since IBA is not completely soluble in water, it is typically dissolved in 75% or purer alcohol for use in plant rooting, making a solution of between 10,000 and 50,000 ppm. This alcohol solution is then diluted with distilled water to the desired concentration. IBA is also available as a salt, which is soluble in water. The solution should be kept in a cool, dark place for best results. This compound had been thought to be strictly synthetic; however, it was reported that the compound was isolated from leaves and seeds of maize and other species. In maize, IBA has been shown to be biosynthesized in vivo from IAA and other compounds as precursors. This chemical may also be extracted from any member of the Salix (willow) genus. Plant tissue culture In plant tissue culture IBA and other auxins are used to initiate root formation in vitro in a procedure called micropropagation. Micropropagation of plants is the process of using small samples of plants called explants and causing them to undergo growth of differentiated or undifferentiated cells. In connection with cytokinins like kinetin, auxins like IBA can be used to cause the formation of masses of undifferentiated cells called callus. Callus formation is often used as a first step in micropropagation, where the callus cells are then induced to form other tissues, such as roots, by exposing them to certain hormones, like auxins, that promote rooting. The process of callus to root formation is called indirect organogenesis, whereas if roots are formed from the explant directly it is called direct organogenesis. In a study of Camellia sinensis, the effects of three different auxins, IBA, IAA and NAA, were examined to determine the relative effect of each auxin on root formation. According to the results for this species, IBA was shown to produce a higher yield of roots compared to the other auxins. This finding is in agreement with other studies, in which IBA is the most commonly used auxin for root formation. Mechanism Although the exact mechanism by which IBA works is still largely unknown, genetic evidence has been found that suggests that IBA may be converted into IAA through a process similar to β-oxidation of fatty acids. The conversion of IBA to IAA then suggests that IBA works as a storage sink for IAA in plants. There is other evidence that suggests that IBA is not converted to IAA but acts as an auxin on its own. References External links Auxins Indoles Carboxylic acids Plant growth regulators
Indole-3-butyric acid
Chemistry
636
10,324,687
https://en.wikipedia.org/wiki/Geometric%20median
In geometry, the geometric median of a discrete point set in a Euclidean space is the point minimizing the sum of distances to the sample points. This generalizes the median, which has the property of minimizing the sum of distances or absolute differences for one-dimensional data. It is also known as the spatial median, Euclidean minisum point, Torricelli point, or 1-median. It provides a measure of central tendency in higher dimensions and it is a standard problem in facility location, i.e., locating a facility to minimize the cost of transportation. The geometric median is an important estimator of location in statistics, because it minimizes the sum of the L2 distances of the samples. It is to be compared to the mean, which minimizes the sum of the squared L2 distances; and to the coordinate-wise median which minimizes the sum of the L1 distances. The more general k-median problem asks for the location of k cluster centers minimizing the sum of L2 distances from each sample point to its nearest center. The special case of the problem for three points in the plane (that is, m = 3 and n = 2 in the definition below) is sometimes also known as Fermat's problem; it arises in the construction of minimal Steiner trees, and was originally posed as a problem by Pierre de Fermat and solved by Evangelista Torricelli. Its solution is now known as the Fermat point of the triangle formed by the three sample points. The geometric median may in turn be generalized to the problem of minimizing the sum of weighted distances, known as the Weber problem after Alfred Weber's discussion of the problem in his 1909 book on facility location. Some sources instead call Weber's problem the Fermat–Weber problem, but others use this name for the unweighted geometric median problem. provides a survey of the geometric median problem. See for generalizations of the problem to non-discrete point sets. Definition Formally, for a given set of m points x1, ..., xm with each xi in ℝn, the geometric median is defined as the minimizer of the sum of the L2 distances, arg min over y in ℝn of Σi ‖xi − y‖. Here, arg min means the value of the argument which minimizes the sum. In this case, it is the point in n-dimensional Euclidean space from where the sum of all Euclidean distances to the xi's is minimum. Properties For the 1-dimensional case, the geometric median coincides with the median. This is because the univariate median also minimizes the sum of distances from the points. (More precisely, if the points are p1, ..., pn, in that order, the geometric median is the middle point p(n+1)/2 if n is odd, but is not uniquely determined if n is even, when it can be any point in the line segment between the two middle points pn/2 and p(n/2)+1.) The geometric median is unique whenever the points are not collinear. The geometric median is equivariant for Euclidean similarity transformations, including translation and rotation. This means that one would get the same result either by transforming the geometric median, or by applying the same transformation to the sample data and finding the geometric median of the transformed data. This property follows from the fact that the geometric median is defined only from pairwise distances, and does not depend on the system of orthogonal Cartesian coordinates by which the sample data is represented. In contrast, the component-wise median for a multivariate data set is not in general rotation invariant, nor is it independent of the choice of coordinates. The geometric median has a breakdown point of 0.5.
That is, up to half of the sample data may be arbitrarily corrupted, and the median of the samples will still provide a robust estimator for the location of the uncorrupted data. Special cases For 3 (non-collinear) points, if any angle of the triangle formed by those points is 120° or more, then the geometric median is the point at the vertex of that angle. If all the angles are less than 120°, the geometric median is the point inside the triangle which subtends an angle of 120° to each of the three pairs of triangle vertices. This is also known as the Fermat point of the triangle formed by the three vertices. (If the three points are collinear then the geometric median is the point between the two other points, as is the case with a one-dimensional median.) For 4 coplanar points, if one of the four points is inside the triangle formed by the other three points, then the geometric median is that point. Otherwise, the four points form a convex quadrilateral and the geometric median is the crossing point of the diagonals of the quadrilateral. The geometric median of four coplanar points is the same as the unique Radon point of the four points. Computation Despite the geometric median's being an easy-to-understand concept, computing it poses a challenge. The centroid or center of mass, defined similarly to the geometric median as minimizing the sum of the squares of the distances to each point, can be found by a simple formula — its coordinates are the averages of the coordinates of the points — but it has been shown that no explicit formula, nor an exact algorithm involving only arithmetic operations and kth roots, can exist in general for the geometric median. Therefore, only numerical or symbolic approximations to the solution of this problem are possible under this model of computation. However, it is straightforward to calculate an approximation to the geometric median using an iterative procedure in which each step produces a more accurate approximation. Procedures of this type can be derived from the fact that the sum of distances to the sample points is a convex function, since the distance to each sample point is convex and the sum of convex functions remains convex. Therefore, procedures that decrease the sum of distances at each step cannot get trapped in a local optimum. One common approach of this type, called Weiszfeld's algorithm after the work of Endre Weiszfeld, is a form of iteratively re-weighted least squares. This algorithm defines a set of weights that are inversely proportional to the distances from the current estimate to the sample points, and creates a new estimate that is the weighted average of the sample according to these weights. That is, the new estimate is y(k+1) = (Σi xi / ‖xi − y(k)‖) / (Σi 1 / ‖xi − y(k)‖). This method converges for almost all initial positions, but may fail to converge when one of its estimates falls on one of the given points. It can be modified to handle these cases so that it converges for all initial points. describe more sophisticated geometric optimization procedures for finding approximately optimal solutions to this problem. show how to compute the geometric median to arbitrary precision in nearly linear time. Note also that the problem can be formulated as a second-order cone program, which can be solved in polynomial time using common optimization solvers. Characterization of the geometric median If y is distinct from all the given points, xi, then y is the geometric median if and only if it satisfies: 0 = Σi (xi − y) / ‖xi − y‖. This is equivalent to: y = (Σi xi / ‖xi − y‖) / (Σi 1 / ‖xi − y‖), which is closely related to Weiszfeld's algorithm.
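The Weiszfeld iteration just described can be sketched in a few lines of Python; the starting point (the centroid), the stopping tolerance, and the handling of an iterate that lands exactly on a data point are implementation choices assumed here, not prescribed by the algorithm as stated above.

```python
import numpy as np

def geometric_median(points, tol=1e-7, max_iter=1000):
    """Approximate the geometric median of an (m, n) array of points
    using Weiszfeld's iteratively re-weighted averaging."""
    points = np.asarray(points, dtype=float)
    y = points.mean(axis=0)          # start from the centroid (a common choice)
    for _ in range(max_iter):
        d = np.linalg.norm(points - y, axis=1)
        if np.any(d < 1e-12):        # iterate coincides with a sample point;
            return y                 # a more careful variant would perturb instead
        w = 1.0 / d                  # weights inversely proportional to distances
        y_new = (points * w[:, None]).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < tol:
            return y_new
        y = y_new
    return y

# Example: three points forming a triangle with all angles below 120 degrees,
# so the result approximates the Fermat point of the triangle.
print(geometric_median([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]]))
```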
In general, y is the geometric median if and only if there are vectors ui such that: 0 = Σi ui, where for xi ≠ y, ui = (xi − y) / ‖xi − y‖, and for xi = y, ‖ui‖ ≤ 1. An equivalent formulation of this condition is that the norm of the sum, over the xi ≠ y, of (xi − y) / ‖xi − y‖ is at most the number of sample points equal to y. It can be seen as a generalization of the median property, in the sense that any partition of the points, in particular as induced by any hyperplane through y, has the same and opposite sum of positive directions from y on each side. In the one dimensional case, the hyperplane is the point y itself, and the sum of directions simplifies to the (directed) counting measure. Generalizations The geometric median can be generalized from Euclidean spaces to general Riemannian manifolds (and even metric spaces) using the same idea which is used to define the Fréchet mean on a Riemannian manifold. Let be a Riemannian manifold with corresponding distance function , let be weights summing to 1, and let be observations from . Then we define the weighted geometric median (or weighted Fréchet median) of the data points as . If all the weights are equal, we say simply that is the geometric median. See also Medoid Geometric median absolute deviation Notes References Means Multivariate statistics Nonparametric statistics Mathematical optimization Geometric algorithms Descriptive statistics
Geometric median
Physics,Mathematics
1,696
35,369,355
https://en.wikipedia.org/wiki/Nuclear%20equivalence
According to the principle of nuclear equivalence, the nuclei of essentially all differentiated adult cells of an individual are genetically (though not necessarily metabolically) identical to one another and to the nucleus of the zygote from which they descended. This means that virtually all somatic cells in an adult have the same genes. However, different cells express different subsets of these genes. The evidence for nuclear equivalence comes from cases in which differentiated cells or their nuclei have been found to retain the potential of directing the development of the entire organism. Such cells or nuclei are said to exhibit totipotency. References Cell biology
Nuclear equivalence
Biology
124
9,737
https://en.wikipedia.org/wiki/Eugenics
Eugenics is a set of beliefs and practices that aim to improve the genetic quality of a human population. Historically, eugenicists have attempted to alter the frequency of various human phenotypes by inhibiting the fertility of people and groups they considered inferior, or promoting that of those considered superior. The contemporary history of eugenics began in the late 19th century, when a popular eugenics movement emerged in the United Kingdom, and then spread to many countries, including the United States, Canada, Australia, and most European countries (e.g. Sweden and Germany). In this period, people from across the political spectrum espoused eugenic ideas. Consequently, many countries adopted eugenic policies, intended to improve the quality of their populations' genetic stock. Historically, the idea of eugenics has been used to argue for a broad array of practices ranging from prenatal care for mothers deemed genetically desirable to the forced sterilization and murder of those deemed unfit. To population geneticists, the term has included the avoidance of inbreeding without altering allele frequencies; for example, British-Indian scientist J. B. S. Haldane wrote in 1940 that "the motor bus, by breaking up inbred village communities, was a powerful eugenic agent." Debate as to what exactly counts as eugenics continues today. Early eugenicists were mostly concerned with factors of measured intelligence that often correlated strongly with social class. Although it originated as a progressive social movement in the 19th century, in contemporary usage in the 21st century, the term is closely associated with scientific racism. New, liberal eugenics seeks to dissociate itself from old, authoritarian eugenics by rejecting coercive state programs and relying on parental choice. Common distinctions Eugenic programs included both positive measures, such as encouraging individuals deemed particularly "fit" to reproduce, and negative measures, such as marriage prohibitions and forced sterilization of people deemed unfit for reproduction. In other words, positive eugenics is aimed at encouraging reproduction among the genetically advantaged, for example, the eminently intelligent, the healthy, and the successful. Possible approaches include financial and political stimuli, targeted demographic analyses, in vitro fertilization, egg transplants, and cloning. Negative eugenics aimed to eliminate, through sterilization or segregation, those deemed physically, mentally, or morally "undesirable". This includes abortions, sterilization, and other methods of family planning. Both positive and negative eugenics can be coercive; in Nazi Germany, for example, abortion was illegal for women deemed by the state to be fit. As opposed to "euthenics" Historical eugenics Ancient and medieval origins Academic origins The term eugenics and its modern field of study were first formulated by Francis Galton in 1883, directly drawing on the recent work delineating natural selection by his half-cousin Charles Darwin. He published his observations and conclusions chiefly in his influential book Inquiries into Human Faculty and Its Development. Galton himself defined it as "the study of all agencies under human control which can improve or impair the racial quality of future generations". The first to systematically apply Darwinian theory to human relations, Galton believed that various desirable human qualities were also hereditary ones, although Darwin strongly disagreed with this elaboration of his theory.
Eugenics became an academic discipline at many colleges and universities and received funding from various sources. Organizations were formed to win public support for and to sway opinion towards responsible eugenic values in parenthood, including the British Eugenics Education Society of 1907 and the American Eugenics Society of 1921. Both sought support from leading clergymen and modified their message to meet religious ideals. In 1909, the Anglican clergymen William Inge and James Peile both wrote for the Eugenics Education Society. Inge was an invited speaker at the 1921 International Eugenics Conference, which was also endorsed by the Roman Catholic Archbishop of New York Patrick Joseph Hayes. Three International Eugenics Conferences presented a global venue for eugenicists, with meetings in 1912 in London, and in 1921 and 1932 in New York City. Eugenic policies in the United States were first implemented by state-level legislators in the early 1900s. Eugenic policies also took root in France, Germany, and Great Britain. Later, in the 1920s and 1930s, the eugenic policy of sterilizing certain mental patients was implemented in other countries including Belgium, Brazil, Canada, Japan and Sweden. Frederick Osborn's 1937 journal article "Development of a Eugenic Philosophy" framed eugenics as a social philosophy—a philosophy with implications for social order. That definition is not universally accepted. Osborn advocated for higher rates of sexual reproduction among people with desired traits ("positive eugenics") or reduced rates of sexual reproduction or sterilization of people with less-desired or undesired traits ("negative eugenics"). In addition to being practiced in a number of countries, eugenics was internationally organized through the International Federation of Eugenics Organizations. Its scientific aspects were carried on through research bodies such as the Kaiser Wilhelm Institute of Anthropology, Human Heredity, and Eugenics, the Cold Spring Harbor Carnegie Institution for Experimental Evolution, and the Eugenics Record Office. Politically, the movement advocated measures such as sterilization laws. In its moral dimension, eugenics rejected the doctrine that all human beings are born equal and redefined moral worth purely in terms of genetic fitness. Its racist elements included pursuit of a pure "Nordic race" or "Aryan" genetic pool and the eventual elimination of "unfit" races. Many leading British politicians subscribed to the theories of eugenics. Winston Churchill supported the British Eugenics Society and was an honorary vice president for the organization. Churchill believed that eugenics could solve "race deterioration" and reduce crime and poverty. As a social movement, eugenics reached its greatest popularity in the early decades of the 20th century, when it was practiced around the world and promoted by governments, institutions, and influential individuals. Many countries enacted various eugenics policies, including: genetic screenings, birth control, promoting differential birth rates, marriage restrictions, segregation (both racial segregation and sequestering the mentally ill), compulsory sterilization, forced abortions or forced pregnancies, ultimately culminating in genocide. 
By 2014, gene selection (rather than "people selection") was made possible through advances in genome editing, leading to what is sometimes called new eugenics, also known as "neo-eugenics", "consumer eugenics", or "liberal eugenics", which focuses on individual freedom and allegedly pulls away from racism, sexism or a focus on intelligence. Early opposition Early critics of the philosophy of eugenics included the American sociologist Lester Frank Ward, the English writer G. K. Chesterton, and Scottish tuberculosis pioneer and author Halliday Sutherland. Ward's 1913 article "Eugenics, Euthenics, and Eudemics", Chesterton's 1917 book Eugenics and Other Evils, and Franz Boas' 1916 article "Eugenics" (published in The Scientific Monthly) were all harshly critical of the rapidly growing movement. Several biologists were also antagonistic to the eugenics movement, including Lancelot Hogben. Other biologists who were themselves eugenicists, such as J. B. S. Haldane and R. A. Fisher, however, also expressed skepticism about the belief that sterilization of "defectives" (i.e. a purely negative eugenics) would lead to the disappearance of undesirable genetic traits. Among institutions, the Catholic Church was an opponent of state-enforced sterilizations, but accepted isolating people with hereditary diseases so as not to let them reproduce. Attempts by the Eugenics Education Society to persuade the British government to legalize voluntary sterilization were opposed by Catholics and by the Labour Party. The American Eugenics Society initially gained some Catholic supporters, but Catholic support declined following the 1930 papal encyclical Casti connubii. In this, Pope Pius XI explicitly condemned sterilization laws: "Public magistrates have no direct power over the bodies of their subjects; therefore, where no crime has taken place and there is no cause present for grave punishment, they can never directly harm, or tamper with the integrity of the body, either for the reasons of eugenics or for any other reason." In fact, more generally, "[m]uch of the opposition to eugenics during that era, at least in Europe, came from the right." The eugenicists' political successes in Germany and Scandinavia were not at all matched in such countries as Poland and Czechoslovakia, even though measures had been proposed there, largely because of the Catholic church's moderating influence. Concerns over human devolution Dysgenics Compulsory sterilization Eugenic feminism North American eugenics Eugenics in Mexico Nazism and the decline of eugenics The scientific reputation of eugenics started to decline in the 1930s, a time when Ernst Rüdin used eugenics as a justification for the racial policies of Nazi Germany. Adolf Hitler had praised and incorporated eugenic ideas in Mein Kampf in 1925 and emulated eugenic legislation for the sterilization of "defectives" that had been pioneered in the United States once he took power. Some common early 20th century eugenics methods involved identifying and classifying individuals and their families, including the poor, mentally ill, blind, deaf, developmentally disabled, promiscuous women, homosexuals, and racial groups (such as the Roma and Jews in Nazi Germany) as "degenerate" or "unfit", and therefore led to segregation, institutionalization, sterilization, and even mass murder.
The Nazi policy of identifying German citizens deemed mentally or physically unfit and then systematically killing them with poison gas, referred to as the Aktion T4 campaign, is understood by historians to have paved the way for the Holocaust. By the end of World War II, many eugenics laws were abandoned, having become associated with Nazi Germany. H. G. Wells, who had called for "the sterilization of failures" in 1904, stated in his 1940 book The Rights of Man: Or What Are We Fighting For? that among the human rights, which he believed should be available to all people, was "a prohibition on mutilation, sterilization, torture, and any bodily punishment". After World War II, the practice of "imposing measures intended to prevent births within [a national, ethnical, racial or religious] group" fell within the definition of the new international crime of genocide, set out in the Convention on the Prevention and Punishment of the Crime of Genocide. The Charter of Fundamental Rights of the European Union also proclaims "the prohibition of eugenic practices, in particular those aiming at selection of persons". In Singapore Lee Kuan Yew, the founding father of Singapore, actively promoted eugenics as late as 1983. In 1984, Singapore began providing financial incentives to highly educated women to encourage them to have more children. For this purpose, the "Graduate Mother Scheme" was introduced to encourage graduate women to marry at the same rate as the rest of the populace. The incentives were extremely unpopular and regarded as eugenic, and were seen as discriminatory towards Singapore's non-Chinese ethnic population. In 1985, the incentives were partly abandoned as ineffective, while the government matchmaking agency, the Social Development Network, remains active. Modern eugenics Developments in genetic, genomic, and reproductive technologies at the beginning of the 21st century have raised numerous questions regarding the ethical status of eugenics, sparking renewed interest in the topic. Liberal eugenics, also called new eugenics, aims to make genetic interventions morally acceptable by rejecting coercive state programs and relying on parental choice. Bioethicist Nicholas Agar, who coined the term, argues for example that the state should only intervene to forbid interventions that excessively limit a child’s ability to shape their own future. Unlike "authoritarian" or "old" eugenics, liberal eugenics draws on modern scientific knowledge of genomics to enable informed choices aimed at improving well-being. Julian Savulescu further argues that some eugenic practices like prenatal screening for Down syndrome are already widely practiced, without being labeled "eugenics", as they are seen as enhancing freedom rather than restricting it. Some critics, such as UC Berkeley sociologist Troy Duster, have argued that modern genetics is a "back door to eugenics". This view was shared by then-White House Assistant Director for Forensic Sciences, Tania Simoncelli, who stated in a 2003 publication by the Population and Development Program at Hampshire College that advances in pre-implantation genetic diagnosis (PGD) are moving society to a "new era of eugenics", and that, unlike Nazi eugenics, modern eugenics is consumer driven and market based, "where children are increasingly regarded as made-to-order consumer products".
The United Nations' International Bioethics Committee also noted that while human genetic engineering should not be confused with the 20th century eugenics movements, it nonetheless challenges the idea of human equality and opens up new forms of discrimination and stigmatization for those who do not want or cannot afford the technology. In 2025, geneticist Peter Visscher published a paper in Nature, arguing that genome editing of human embryos and germ cells may become feasible in the 21st century, and raising ethical considerations in the context of previous eugenics movements. A response argued that human embryo genetic editing is "unsafe and unproven". Nature also published an editorial, stating: "The fear that polygenic gene editing could be used for eugenics looms large among them, and is, in part, why no country currently allows genome editing in a human embryo, even for single variants". Contested scientific status One general concern that many bring to the table is that the reduced genetic diversity that some argue to be a likely feature of long-term, species-wide eugenics plans could eventually result in inbreeding depression, increased spread of infectious disease, and decreased resilience to changes in the environment. Arguments for scientific validity In his original lecture "Darwinism, Medical Progress and Eugenics", Karl Pearson claimed that everything concerning eugenics fell into the field of medicine. Anthropologist Aleš Hrdlička said in 1918 that "[t]he growing science of eugenics will essentially become applied anthropology." The economist John Maynard Keynes was a lifelong proponent of eugenics and described it as a branch of sociology. In a 2006 newspaper article, Richard Dawkins said that discussion regarding eugenics was inhibited by the shadow of Nazi misuse, to the extent that some scientists would not admit that breeding humans for certain abilities is at all possible. He believes that it is not physically different from breeding domestic animals for traits such as speed or herding skill. Dawkins felt that enough time had elapsed to at least ask just what the ethical differences were between breeding for ability versus training athletes or forcing children to take music lessons, though he could think of persuasive reasons to draw the distinction. Objections to scientific validity Amanda Caleb, Professor of Medical Humanities at Geisinger Commonwealth School of Medicine, says "Eugenic laws and policies are now understood as part of a specious devotion to a pseudoscience that actively dehumanizes to support political agendas and not true science or medicine." The first major challenge to conventional eugenics based on genetic inheritance was made in 1915 by Thomas Hunt Morgan. He demonstrated the occurrence of genetic mutation outside of inheritance with the discovery of a fruit fly (Drosophila melanogaster) that hatched with white eyes from a family with red eyes, showing that major genetic changes can occur outside of inheritance. Additionally, Morgan criticized the view that certain traits, such as intelligence and criminality, were hereditary because these traits were subjective. Pleiotropy occurs when one gene influences multiple, seemingly unrelated phenotypic traits, an example being phenylketonuria, which is a human disease that affects multiple systems but is caused by one gene defect.
Andrzej Pękalski, from the University of Wroclaw, argues that eugenics can cause harmful loss of genetic diversity if a eugenics program selects against a pleiotropic gene that is also associated with a positive trait. Pękalski uses the example of a coercive government eugenics program that prohibits people with myopia from breeding but has the unintended consequence of also selecting against high intelligence since the two go together. While the science of genetics has increasingly provided means by which certain characteristics and conditions can be identified and understood, given the complexity of human genetics, culture, and psychology, at this point there is no agreed objective means of determining which traits might be ultimately desirable or undesirable. Some conditions such as sickle-cell disease and cystic fibrosis respectively confer immunity to malaria and resistance to cholera when a single copy of the recessive allele is contained within the genotype of the individual, so eliminating these genes is undesirable in places where such diseases are common. Edwin Black, journalist, historian, and author of War Against the Weak, argues that eugenics is often deemed a pseudoscience because what is defined as a genetic improvement of a desired trait is a cultural choice rather than a matter that can be determined through objective scientific inquiry. This aspect of eugenics is often considered to be tainted with scientific racism and pseudoscience. Contested ethical status Contemporary ethical opposition In a book directly addressed to socialist eugenicist J.B.S. Haldane and his once-influential Daedalus, Bertrand Russell had one serious objection of his own: eugenic policies might simply end up being used to reproduce existing power relations "rather than to make men happy." Environmental ethicist Bill McKibben argued against germinal choice technology and other advanced biotechnological strategies for human enhancement. He writes that it would be morally wrong for humans to tamper with fundamental aspects of themselves (or their children) in an attempt to overcome universal human limitations, such as vulnerability to aging, maximum life span and biological constraints on physical and cognitive ability. Attempts to "improve" themselves through such manipulation would remove limitations that provide a necessary context for the experience of meaningful human choice. He claims that human lives would no longer seem meaningful in a world where such limitations could be overcome with technology. Even the goal of using germinal choice technology for clearly therapeutic purposes should be relinquished, he argues, since it would inevitably produce temptations to tamper with such things as cognitive capacities. He argues that it is possible for societies to benefit from renouncing particular technologies, using Ming China, Tokugawa Japan and the contemporary Amish as examples. Contemporary ethical advocacy Bioethicist Stephen Wilkinson has said that some aspects of modern genetics can be classified as eugenics, but that this classification does not inherently make modern genetics immoral. Historian Nathaniel C. Comfort has claimed that the change from state-led reproductive-genetic decision-making to individual choice has moderated the worst abuses of eugenics by transferring the decision-making process from the state to patients and their families.
In their book published in 2000, From Chance to Choice: Genetics and Justice, bioethicists Allen Buchanan, Dan Brock, Norman Daniels and Daniel Wikler argued that liberal societies have an obligation to encourage as wide an adoption of eugenic enhancement technologies as possible (so long as such policies do not infringe on individuals' reproductive rights or exert undue pressures on prospective parents to use these technologies) in order to maximize public health and minimize the inequalities that may result from both natural genetic endowments and unequal access to genetic enhancements. In science fiction The novel Brave New World by the English author Aldous Huxley (1931) is a dystopian social science fiction novel which is set in a futuristic World State, whose citizens are environmentally engineered into an intelligence-based social hierarchy. Various works by the author Robert A. Heinlein mention the Howard Foundation, a group which attempts to improve human longevity through selective breeding. Among Frank Herbert's other works, the Dune series, starting with the eponymous 1965 novel, describes selective breeding by a powerful sisterhood, the Bene Gesserit, to produce a supernormal male being, the Kwisatz Haderach. The Star Trek franchise features a race of genetically engineered humans which is known as "Augments"; the most notable of them is Khan Noonien Singh. These "supermen" were the cause of the Eugenics Wars, a dark period in Earth's fictional history, before they were deposed and exiled. They appear in many of the franchise's story arcs, most frequently as villains. The film Gattaca (1997) provides a fictional example of a dystopian society that uses eugenics to decide what people are capable of and their place in the world. The title alludes to the letters G, A, T and C, the four nucleobases of DNA, and depicts the possible consequences of genetic discrimination in the present societal framework. Relegated to the role of a cleaner owing to his genetically projected death at age 32 due to a heart condition (being told: "The only way you'll see the inside of a spaceship is if you were cleaning it"), the protagonist observes enhanced astronauts as they demonstrate their superhuman athleticism. Although it was not a box office success, it was critically acclaimed and influenced the debate over human genetic engineering in the public consciousness. As to its accuracy, its production company, Sony Pictures, consulted with a gene therapy researcher and prominent critic of eugenics known to have stated that "[w]e should not step over the line that delineates treatment from enhancement", W. French Anderson, to ensure that the portrayal of science was realistic. Disputing their success in this mission, Philip Yam of Scientific American called the film "science bashing" and Nature's Kevin Davies called it a "surprisingly pedestrian affair", while molecular biologist Lee Silver described its extreme determinism as "a straw man". In his 2018 book Blueprint, the behavioral geneticist Robert Plomin writes that while Gattaca warned of the dangers of genetic information being used by a totalitarian state, genetic testing could also favor better meritocracy in democratic societies which already administer a variety of standardized tests to select people for education and employment. He suggests that polygenic scores might supplement testing in a manner that is essentially free of biases.
See also Ableism Bioconservatism Culling Dor Yeshorim Dysgenics Eugenic feminism Genetic engineering Genetic enhancement Hereditarianism Heritability of IQ New eugenics Mendelian traits in humans Simple Mendelian genetics in humans Moral enhancement Project Prevention Social Darwinism Wrongful life Eugenics in France References Notes Further reading Paul, Diane B.; Spencer, Hamish G. (1998). "Did Eugenics Rest on an Elementary Mistake?" (PDF). In: The Politics of Heredity: Essays on Eugenics, Biomedicine, and the Nature-Nurture Debate, SUNY Press (pp. 102–118) Gantsho, Luvuyo (2022). "The principle of procreative beneficence and its implications for genetic engineering." Theoretical Medicine and Bioethics 43 (5):307-328. . Harris, John (2009). "Enhancements are a Moral Obligation." In J. Savulescu & N. Bostrom (Eds.), Human Enhancement, Oxford University Press, pp. 131–154 Kamm, Frances (2010). "What Is And Is Not Wrong With Enhancement?" In Julian Savulescu & Nick Bostrom (eds.), Human Enhancement. Oxford University Press. Kamm, Frances (2005). "Is There a Problem with Enhancement?", The American Journal of Bioethics, 5(3), 5–14. PMID 16006376 . Ranisch, Robert (2022). "Procreative Beneficence and Genome Editing", The American Journal of Bioethics, 22(9), 20–22. . Robertson, John (2021). Children of Choice: Freedom and the New Reproductive Technologies. Princeton University Press, . Saunders, Ben (2015). "Why Procreative Preferences May be Moral – And Why it May not Matter if They Aren't." Bioethics, 29(7), 499–506. . Savulescu, Julian (2001). Procreative beneficence: why we should select the best children. Bioethics. 15(5–6): pp. 413–26 Singer, Peter (2010). "Parental Choice and Human Improvement." In Julian Savulescu & Nick Bostrom (eds.), Human Enhancement. Oxford University Press. South, David (1993). Award-winning research on history of eugenics reaps honours. Hannah Institute for the History of Medicine Number 19 Fall 1993, p. 3 Wikler, Daniel (1999). "Can we learn from eugenics?" (PDF). J Med Ethics. 25(2):183-94. . PMID 10226926; PMCID: PMC479205. External links Embryo Editing for Intelligence: A cost-benefit analysis of CRISPR-based editing for intelligence with 2015-2016 state-of-the-art Embryo Selection For Intelligence: A cost-benefit analysis of the marginal cost of IVF-based embryo selection for intelligence and other traits with 2016-2017 state-of-the-art Eugenics: Its Origin and Development (1883–Present) by the National Human Genome Research Institute (30 November 2021) Eugenics and Scientific Racism Fact Sheet by the National Human Genome Research Institute (3 November 2021) Ableism Applied genetics Bioethics Nazism Pseudo-scholarship Pseudoscience Racism Technological utopianism White supremacy
Eugenics
Technology
5,337
69,832,190
https://en.wikipedia.org/wiki/Institute%20of%20Particle%20Physics
The Institute of Particle Physics (IPP) is a Canadian organization that fosters expertise in particle physics research and advanced education. IPP is a nonprofit organization operated by the institutional and individual members for the benefit of particle physics research in Canada. IPP supported projects can be accessed on the group's website. Currently, the IPP Scientific Council administers the IPP Research Scientist Program. The IPP director and council focus on future planning, advocacy with funding sources, and on its activities in international public relations. History The IPP was established in 1971 to administer anticipated funds from the National Research Council Canada to steer the Canadian program working at Fermilab, Argonne National Lab, and SLAC National Accelerator Laboratory. IPP formed a Scientific Council, elected by the membership, to be responsible for the Scientific program and the operation of the Institute. IPP council vetted projects and advocated within the funding regime and internationally. Eventually, the Natural Sciences and Engineering Research Council (NSERC) developed better communication with, and funding model for, experimental groups, alleviating the need for IPP to directly administer research grant funds. Community support Long range planning An important part of the Institute of Particle Physics’ mission is to coordinate community input for long range planning exercises. This involves solicitations of community input, hosting of town hall meetings where the projects underway and future projects are discussed, and the concerns of the community can be aired. This input results in the preparation of a brief, usually solicited by NSERC, that serves as input to the Subatomic Physics long range planning exercise. IPP Early Career Theory Fellowship The Institute of Particle Physics Early Career Theory Fellowship is designed to enable outstanding theory PhD students and postdoctoral researchers to be present for a period at an international university, laboratory, or institute. The purpose of the fellowship is to encourage scientific collaboration between theorists in Canada and those abroad, and also to enhance the career prospects of the junior researcher. IPP high school teacher awards The Institute of Particle Physics has supported Canadian high school teachers attending the CERN high school teacher program. IPP summer student program The Institute of Particle Physics supports Canadian undergraduate students participating in the CERN summer student program. References External links Official website 1971 establishments in Canada Particle physics Science education in Canada
Institute of Particle Physics
Physics
462
32,095,347
https://en.wikipedia.org/wiki/Reststrahlen%20effect
The Reststrahlen effect (German: “residual rays”) is a reflectance phenomenon in which electromagnetic radiation within a narrow energy band cannot propagate within a given medium due to a change in refractive index concurrent with the specific absorbance band of the medium in question; this narrow energy band is termed the Reststrahlen band. As a result of this inability to propagate, normally incident Reststrahlen band radiation experiences strong reflection or total reflection from that medium. The energies at which Reststrahlen bands occur vary and are particular to the individual compound. Numerous physical attributes of a compound will have an effect on the appearance of the Reststrahlen band. These include phonon band-gap, particle/grain size, strongly absorbing compounds, and compounds with optically opaque bands in the infrared. Appearance The term Reststrahlen was coined following the observation by Heinrich Rubens in 1898 that repeated reflection of an infrared beam at the surface of a given material suppresses radiation at all wavelengths except for certain spectral intervals; Rubens detected wavelengths around 60 μm. The measured intensity for these special intervals (the Reststrahlen range) indicates a reflectance of up to 80% or even more, while the maximum reflectance due to infrared bands of dielectric materials is usually <10%. After four reflections, the intensity of the latter is reduced by a factor of 10−4 compared to the intensity of the incident radiation, while the light in the Reststrahlen range can maintain 40% of its original intensity by the time it reaches the detector. Obviously, this contrast increases with the number of reflections and explains the observation made by Rubens and the term Reststrahlen (residual rays) used to describe this spectral selection. Reststrahlen bands manifest in diffuse reflectance infrared absorption spectra as complete band reversal, or in infrared emission spectra as a minimum in emissivity. Application The Reststrahlen effect is used to investigate the properties of semiconductors; it is also used in geophysics and meteorology. See also Absorbance Ellipsometry Emissivity Transmittance Reflectivity Lyddane–Sachs–Teller relation References Elachi, C. et al. (2006) Introduction to the physics and techniques of remote sensing. John Wiley and Sons. Griffiths, P.R. (1983) Fourier transform infrared spectrometry. Science, 222, 297–302. Goldberg, A. et al. (2003) Detection of buried land mines using a dual-band LWIR/LWIR QWIP focal plane array. Infrared Physics & Technology, 44 (5–6), 427–437. Anderson, M. S. et al. (2005) Fourier transform infrared spectroscopy for Mars science. Rev. Sci. Instrum., 76 (3). Spectroscopy Infrared technology Scientific techniques
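The near-total reflectance inside a Reststrahlen band can be illustrated with a single Lorentz-oscillator model of the dielectric function; the parameter values in the sketch below are generic placeholders rather than measured data for any particular material.

```python
import numpy as np

# Single Lorentz-oscillator model of an ionic crystal (illustrative parameters only).
eps_inf = 3.0      # high-frequency dielectric constant
eps_s   = 9.0      # static dielectric constant
w_TO    = 1.0      # transverse-optical phonon frequency (arbitrary units)
gamma   = 0.02     # damping

w = np.linspace(0.3, 2.5, 500)
eps = eps_inf + (eps_s - eps_inf) * w_TO**2 / (w_TO**2 - w**2 - 1j * gamma * w)

# Normal-incidence reflectance from the complex refractive index n = sqrt(eps).
n = np.sqrt(eps)
R = np.abs((n - 1) / (n + 1))**2

# Between w_TO and w_LO (Lyddane-Sachs-Teller: w_LO = w_TO*sqrt(eps_s/eps_inf))
# the real part of eps is negative, the wave cannot propagate, and R approaches 1.
w_LO = w_TO * np.sqrt(eps_s / eps_inf)
band = (w > w_TO) & (w < w_LO)
print("peak reflectance inside the Reststrahlen band: %.2f" % R[band].max())
```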
Reststrahlen effect
Physics,Chemistry
589
15,642,481
https://en.wikipedia.org/wiki/Furaneol
Furaneol, or strawberry furanone, is an organic compound used in the flavor and perfume industry. It is formally a derivative of furan. It is a white or colorless solid that is soluble in water and organic solvents. Odor and occurrence Although malodorous at high concentrations, it exhibits a sweet strawberry aroma when dilute. It is found in strawberries and a variety of other fruits and it is partly responsible for the smell of fresh pineapple. It is also an important component of the odours of buckwheat, and tomato. Furaneol accumulation during ripening has been observed in strawberries and can reach a high concentration of 37 μg/g. Furaneol acetate The acetate ester of furaneol, also known as caramel acetate and strawberry acetate, is also popular with flavorists to achieve a fatty toffee taste and it is used in traces in perfumery to add a sweet gourmand note. Stereoisomerism Furaneol has two enantiomers, (R)-(+)-furaneol and (S)-(−)-furaneol. The (R)-form is mainly responsible for the smell. Biosynthesis It is one of several products from the dehydration of glucose. Its immediate biosynthetic precursor is the glucoside, derived from dehydration of sucrose. References Flavors Enones Enols Dihydrofurans Sweet-smelling chemicals
Furaneol
Chemistry
313
61,078,954
https://en.wikipedia.org/wiki/Red%20Hook%20Wi-Fi
Red Hook Wi-Fi is a free-to-use, Wi-Fi mesh network that provides internet access to the Red Hook neighborhood of Brooklyn, New York. It is operated by the Red Hook Initiative. Background Due to the location of Red Hook, Brooklyn, between the Red Hook Channel and the Buttermilk Channel, many of its residents face various challenges in accessing broadband service. A survey found that many people in the area accessed the internet primarily through mobile phones and that over 30% of the population did not have broadband access at home. Beginning in Fall 2011, the Red Hook Initiative (RHI), a Brooklyn non-profit, approached the Open Technology Institute about collaborating on a community wireless network. RHI wanted a way to communicate with the residents immediately around its community center. When the network was initially launched, it had support for up to 150 simultaneous users and ran on an open-software platform called Commotion. Hurricane Sandy In 2012, after Hurricane Sandy struck the area and many internet and communication systems went down throughout much of the city, Red Hook remained connected through its mesh network, and the headquarters of the Red Hook Initiative became a hub for volunteer coordination, donation collection, and food distribution as residents came to the Red Hook Initiative's office to charge their devices and connect to the internet. Shortly afterwards, the Federal Emergency Management Agency (FEMA) connected Red Hook Wi-Fi to its satellite system, linking itself, the residents and the Red Cross into a communication matrix that could be used to find out about emergency relief, food banks, and shelter locations. After the relief efforts had finished, a team led by the Red Hook Initiative continued to make improvements to the mesh network by installing nano stations powered by solar panels on rooftops around the Red Hook neighborhood. Though the Red Hook Wi-Fi project was already in the works before Hurricane Sandy struck, it gained additional media attention after the storm. In 2015, Red Hook Wi-Fi was selected from a group of 27 finalists competing in the Resiliency Innovations for a Stronger Economy to be part of the city's resiliency initiative. References External links Mesh networking Red Hook, Brooklyn Community networks
Red Hook Wi-Fi
Technology
437
31,199,340
https://en.wikipedia.org/wiki/Persistence%20%28discontinuity%29
Persistence determines the possibilities of relative movement along a discontinuity in a soil or rock mass in geotechnical engineering. Discontinuities are usually differentiated into persistent, non-persistent, and abutting discontinuities. Persistent discontinuity A persistent discontinuity is a continuous plane in a soil or rock mass. Shear displacement takes place if the shear stress along the discontinuity plane exceeds the shear strength of the discontinuity plane. Non-persistent discontinuity A non-persistent discontinuity ends in intact soil or rock. Before movement of the material on both sides of a non-persistent discontinuity is possible, the discontinuity has to extend and break through intact material. As intact material virtually always has far higher shear strength than the discontinuity, a non-persistent discontinuity will have larger shear strength than a persistent discontinuity. Abutting discontinuity An abutting discontinuity abuts against another discontinuity. Abutting discontinuities might continue at the other side of the intersecting discontinuity, however, with a displacement to give so-called ‘stepped planes’. Shear displacement along the discontinuity can take place if the shear strength along the discontinuity plane is exceeded, and the blocks of material against which the discontinuity abuts can move or break. Anisotropic persistence A discontinuity might be persistent in the dip direction but not persistent perpendicular to the dip direction, or vice versa. See also Discontinuity (geotechnical engineering) Rock mechanics Shear strength (discontinuity) References Soil mechanics Rock mass classification
Persistence (discontinuity)
Physics
353
571,325
https://en.wikipedia.org/wiki/Sample%20and%20hold
In electronics, a sample and hold (also known as sample and follow) circuit is an analog device that samples (captures, takes) the voltage of a continuously varying analog signal and holds (locks, freezes) its value at a constant level for a specified minimum period of time. Sample and hold circuits and related peak detectors are the elementary analog memory devices. They are typically used in analog-to-digital converters to eliminate variations in input signal that can corrupt the conversion process. They are also used in electronic music, for instance to impart a random quality to successively-played notes. A typical sample and hold circuit stores electric charge in a capacitor and contains at least one switching device such as a FET (field effect transistor) switch and normally one operational amplifier. To sample the input signal, the switch connects the capacitor to the output of a buffer amplifier. The buffer amplifier charges or discharges the capacitor so that the voltage across the capacitor is practically equal, or proportional to, input voltage. In hold mode, the switch disconnects the capacitor from the buffer. The capacitor is invariably discharged by its own leakage currents and useful load currents, which makes the circuit inherently volatile, but the loss of voltage (voltage drop) within a specified hold time remains within an acceptable error margin for all but the most demanding applications. Purpose Sample and hold circuits are used in linear systems. In some kinds of analog-to-digital converters (ADCs), the input is compared to a voltage generated internally from a digital-to-analog converter (DAC). The circuit tries a series of values and stops converting once the voltages are equal, within some defined error margin. If the input value was permitted to change during this comparison process, the resulting conversion would be inaccurate and possibly unrelated to the true input value. Such successive approximation converters will often incorporate internal sample and hold circuitry. In addition, sample and hold circuits are often used when multiple samples need to be measured at the same time. Each value is sampled and held, using a common sample clock. For practically all commercial liquid crystal active matrix displays based on TN, IPS or VA electro-optic LC cells (excluding bi-stable phenomena), each pixel represents a small capacitor, which has to be periodically charged to a level corresponding to the greyscale value (contrast) desired for a picture element. In order to maintain the level during a scanning cycle (frame period), an additional electric capacitor is attached in parallel to each LC pixel to better hold the voltage. A thin-film FET switch is addressed to select a particular LC pixel and charge the picture information for it. In contrast to an S/H in general electronics, there is no output operational amplifier and no electrical signal AO. Instead, the charge on the hold capacitors controls the deformation of the LC molecules and thereby the optical effect as its output. The invention of this concept and its implementation in thin-film technology have been honored with the IEEE Jun-ichi Nishizawa Medal. During a scanning cycle, the picture doesn't follow the input signal. This does not allow the eye to refresh and can lead to blurring during motion sequences, also the transition is visible between frames because the backlight is constantly illuminated, adding to display motion blur. 
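The track-and-freeze behaviour described above can be sketched numerically; the sampling interval and the droop term standing in for capacitor leakage in the following Python fragment are illustrative assumptions, not values from any particular circuit.

```python
import numpy as np

def sample_and_hold(signal, sample_every, droop_per_step=0.0):
    """Idealised sample-and-hold: capture the input every `sample_every` steps
    and hold that value, optionally decaying it slightly to mimic capacitor leakage."""
    held = np.empty_like(signal, dtype=float)
    value = signal[0]
    for i, x in enumerate(signal):
        if i % sample_every == 0:        # switch closed: capacitor tracks the input
            value = x
        else:                            # switch open: value held, minus droop
            value *= (1.0 - droop_per_step)
        held[i] = value
    return held

t = np.linspace(0, 1, 1000)
analog = np.sin(2 * np.pi * 5 * t)           # a 5 Hz test signal
staircase = sample_and_hold(analog, 50)      # hold each sample for 50 steps
print(staircase[:5], staircase[48:53])       # values around a sampling instant
```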
Sample and hold circuits are also frequently found on synthesizers, either as a discrete module or as an integral component. They are used to take periodic samples of an incoming signal, typically as a source of modulation for other components of the synthesizer. When a sample and hold circuit is plugged into a white noise generator the result is a sequence of random values, which - depending on the amplitude of modulation - can be used to provide subtle variations in a signal or wildly varying random tones. Implementation To keep the input voltage as stable as possible, it is essential that the capacitor have very low leakage, and that it not be loaded to any significant degree which calls for a very high input impedance. See also Analog signal to discrete time interval converter Notes References Paul Horowitz, Winfield Hill (2001 ed.). The Art of Electronics. Cambridge University Press. . Alan P. Kefauver, David Patschke (2007). Fundamentals of digital audio. A-R Editions, Inc. . Analog Devices 21 page Tutorial "Sample and Hold Amplifiers" http://www.analog.com/static/imported-files/tutorials/MT-090.pdf Applications of Monolithic Sample and hold Amplifiers-Intersil Electronic circuits Digital signal processing
Sample and hold
Engineering
946
48,432,026
https://en.wikipedia.org/wiki/Vserv%20Digital%20Services
Vserv is a platform for mobile marketing and commerce. The platform has a mobile internet user base in India and Southeast Asia with insights on users in these markets. Founded in 2010, Vserv has over 500 million user profiles and is backed by Maverick Capital, IDG Ventures India and Epiphany Ventures. History Vserv was founded in January 2010 by Dippak Khurana and Ashay Padwal. Key offerings Vserv Smart Data platform In October 2014, Vserv launched its Vserv Smart Data platform for mobile marketing. This platform aggregates data from Vserv's data management platform which is augmented by combining data from multiple sources such as telcos, apps and offline partners. Vserv Smart Data builds user personas and triggers intent signals in real-time to help drive results for marketers, telcos and app publishers. Vserv has partnerships with telcos for accelerating its smart data offering to users in India and Southeast Asia. Vserv Commerce solution In May 2015, Vserv entered the commerce space. The company provides solutions for Telcos and DTH operators. With the integration of buy buttons within ads, Telcos can provide segmented offers to their customers. For DTH operators, this solution enables them to sell segmented channel packs to their customers. Products AppWrapper AppWrapper allows developers and publishers to enable app development lifecycle services like ad monetization, analytics, bug tracking and in-app purchases. Funding In January 2010, Vserv received an initial investment of US$175,000 from Ajay Adiseshann, who also runs a mobile payment company called PayMate. In July 2011, Vserv received a first round of funding of US$2.7 million by IDG Ventures India, in July 2011. In September 2012, Vserv raised US$4 million from Epiphany Ventures and IDG Ventures India. In March 2015, Vserv raised US$11 million from Maverick Capital Ventures. In 2015, Vserv raised US$15 million from hedge fund Maverick Capital and venture capital firm IDG Ventures India, signalling a selective revival of investor interest in local ad tech firms. Data, analytics, and decisioning company Experian has picked up a strategic stake in smart data mobile marketing platform Vserv Digital Services. While the deal size and amount were undisclosed, the company stated the investment was in line with its vision to boost financial inclusion, by ensuring a friction-free digital onboarding experience for its consumers. References Mobile marketing Indian companies established in 2010
Vserv Digital Services
Technology
528
3,428,791
https://en.wikipedia.org/wiki/Blue%20hour
The blue hour (from French ; ) is the period of twilight (in the morning or evening, around the nautical stage) when the Sun is at a significant depth below the horizon. During this time, the remaining sunlight takes on a mostly blue shade. This shade differs from the colour of the sky on a clear day, which is caused by Rayleigh scattering. The blue hour occurs when the Sun is far enough below the horizon so that the sunlight's blue wavelengths dominate due to the Chappuis absorption caused by ozone. Since the term is colloquial, it lacks an official definition such as dawn, dusk, or the three stages of twilight. Rather, blue hour refers to the state of natural lighting that usually occurs around the nautical stage of the twilight period (at dawn or dusk). The blue hour is shorter in regions near the equator due to the sun rising and setting at steep angles. In places closer to the poles, the illumination and twilight periods are longer as the sun rises and sets at shallower angles. Explanation and times of occurrence The still commonly presented incorrect explanation claims that Earth's post-sunset and pre-sunrise atmosphere solely receives and disperses the sun's shorter blue wavelengths and scatters the longer, reddish wavelengths to explain why the hue of this hour is so blue. In fact, the blue hour occurs when the Sun is far enough below the horizon so that the sunlight's blue wavelengths dominate due to the Chappuis absorption caused by ozone. When the sky is clear, the blue hour can be a colourful spectacle, with the indirect sunlight tinting the sky yellow, orange, red, and blue. This effect is caused by the relative diffusibility of shorter wavelengths (bluer rays) of visible light versus the longer wavelengths (redder rays). During the blue "hour", red light passes through space while blue light is scattered in the atmosphere, and thus reaches Earth's surface. Blue hour usually lasts about 20–96 minutes right after sunset and right before sunrise. Time of year, location, and air quality all have an influence on the exact time of blue hour. For instance in Egypt (every 21st of June), when sunset is at 7:59 PM: blue hour occurs from 7:59 PM to 9:35 PM. When sunrise is at 5:54 AM: blue hour occurs from 4:17 AM to 5:54 AM. Golden hour occurs from 5:54 AM to 6:28 AM and from 7:25 PM to 7:59 PM. Art Photography Many artists value this period for the quality of the soft light. Although the blue hour does not have an official definition, the blue color spectrum is most prominent when the Sun is between 4° and 8° below the horizon. Photographers use blue hour for the tranquil mood it sets. When photographing during blue hour it can be favourable to capture subjects that have artificial light sources, such as buildings, monuments, cityscapes, or bridges. See also Color temperature Green flash Golden hour (photography) Notes References External links Blue hour mobile application (iOS, Iphone/iPad) bluehoursite.com: Everything about Blue Hour and Night Photography (news, articles, tips and calculator) Twilight Calculator, Golden Hour/Blue Hour table Earth phenomena Parts of a day Visibility Night Atmospheric optical phenomena Photography
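As a rough illustration of the angle-based description above (blue colour most prominent with the Sun about 4° to 8° below the horizon), the following Python sketch scans a simplified solar-elevation formula for the evening window; it ignores the equation of time and atmospheric refraction, and the latitude and date are placeholder values, so the resulting times are approximate only.

```python
import numpy as np

def solar_elevation(lat_deg, day_of_year, solar_hour):
    """Approximate solar elevation (degrees), ignoring the equation of time
    and refraction -- a rough illustration, not an ephemeris."""
    decl = -23.44 * np.cos(np.radians(360.0 / 365.0 * (day_of_year + 10)))
    hour_angle = 15.0 * (solar_hour - 12.0)          # degrees from solar noon
    lat, dec, ha = map(np.radians, (lat_deg, decl, hour_angle))
    sin_alt = np.sin(lat) * np.sin(dec) + np.cos(lat) * np.cos(dec) * np.cos(ha)
    return np.degrees(np.arcsin(sin_alt))

# Evening blue-hour window: solar elevation between -4 and -8 degrees.
lat, day = 48.85, 172              # placeholder values (roughly Paris, late June)
hours = np.linspace(12, 24, 2881)  # scan the afternoon and evening in 15 s steps
alt = solar_elevation(lat, day, hours)
window = hours[(alt <= -4.0) & (alt >= -8.0)]
if window.size:
    print("evening blue hour (approx. solar time): %.2f h to %.2f h"
          % (window.min(), window.max()))
```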
Blue hour
Physics,Astronomy,Mathematics,Technology
681