id: int64 (39 to 79M)
url: string (31 to 227 characters)
text: string (6 to 334k characters)
source: string (1 to 150 characters)
categories: list (1 to 6 items)
token_count: int64 (3 to 71.8k)
subcategories: list (0 to 30 items)
1,571,731
https://en.wikipedia.org/wiki/Astragal
An astragal is a moulding profile composed of a half-round surface surrounded by two flat planes (fillets). An astragal is sometimes referred to as a miniature torus. It can be an architectural element used at the top or base of a column, but is also employed as a framing device on furniture and woodwork. The word "astragal" comes from the Greek word for "ankle joint". On doors An astragal is commonly used to seal between a pair of doors. The astragal closes the clearance gap created by bevels on one or both mating doors, and helps deaden sound. The vertical member (molding) attaches to a stile on one of a pair of either sliding or swinging doors, against which the other door seals when closed. Exterior astragals are kerfed for weatherstripping. The weatherstripping at the bottom of garage doors is also referred to as an astragal. An astragal may also be known as a "meeting stile seal". It is sometimes confused with the wooden trim that divides the panes of a multi-light window or door, known as a muntin. References Architectural elements Woodworking tools
Astragal
[ "Technology", "Engineering" ]
263
[ "Building engineering", "Architectural elements", "Components", "Architecture" ]
1,571,780
https://en.wikipedia.org/wiki/Condensation%20algorithm
The condensation algorithm (Conditional Density Propagation) is a computer vision algorithm. The principal application is to detect and track the contour of objects moving in a cluttered environment. Object tracking is one of the more basic and difficult aspects of computer vision and is generally a prerequisite to object recognition. Being able to identify which pixels in an image make up the contour of an object is a non-trivial problem. Condensation is a probabilistic algorithm that attempts to solve this problem. The algorithm itself is described in detail by Isard and Blake in a publication in the International Journal of Computer Vision in 1998. One of the most interesting facets of the algorithm is that it does not compute on every pixel of the image. Rather, pixels to process are chosen at random, and only a subset of the pixels end up being processed. Multiple hypotheses about what is moving are supported naturally by the probabilistic nature of the approach. The evaluation functions come largely from previous work in the area and include many standard statistical approaches. The original part of this work is the application of particle filter estimation techniques. The algorithm's creation was inspired by the inability of Kalman filtering to perform object tracking well in the presence of significant background clutter. The presence of clutter tends to produce probability distributions for the object state which are multi-modal and therefore poorly modeled by the Kalman filter. The condensation algorithm in its most general form requires no assumptions about the probability distributions of the object or measurements. Algorithm overview The condensation algorithm seeks to solve the problem of estimating the conformation of an object described by a state vector x_t at time t, given observations z_1, ..., z_t of the detected features in the images up to and including the current time. The algorithm outputs an estimate of the state conditional probability density p(x_t | z_1, ..., z_t) by applying a nonlinear filter based on factored sampling, and can be thought of as a development of a Monte Carlo method. p(x_t | z_1, ..., z_t) is a representation of the probability of possible conformations for the objects based on previous conformations and measurements. The condensation algorithm is a generative model since it models the joint distribution of the object and the observer. The conditional density of the object at the current time is estimated as a weighted, time-indexed sample set {s_t^(n), n = 1, ..., N} with weights pi_t^(n). N is a parameter determining the number of samples chosen. A realization of p(x_t | z_1, ..., z_t) is obtained by sampling with replacement from the set {s_t^(n)}, with probability equal to the corresponding weight pi_t^(n). The assumptions that object dynamics form a temporal Markov chain and that observations are independent of each other and the dynamics facilitate the implementation of the condensation algorithm. The first assumption allows the dynamics of the object to be entirely determined by the conditional density p(x_t | x_{t-1}). The model of the system dynamics determined by p(x_t | x_{t-1}) must also be selected for the algorithm, and generally includes both deterministic and stochastic dynamics. The algorithm can be summarized by initialization at time t = 0 and three steps at each time t: Initialization Form the initial sample set {s_0^(n)} and weights pi_0^(n) by sampling according to the prior distribution p(x_0). For example, specify p(x_0) as Gaussian and set the weights equal to each other. Iterative procedure Sample with replacement N times from the set {s_{t-1}^(n)}, choosing each sample with probability pi_{t-1}^(n), to generate a realization of p(x_{t-1} | z_1, ..., z_{t-1}). Apply the learned dynamics p(x_t | x_{t-1}) to each element of this new set, to generate a new set {s_t^(n)}. To take into account the current observation z_t, set pi_t^(n) = p(z_t | x_t = s_t^(n)) for each element, then normalize the weights. This algorithm outputs the probability distribution p(x_t | z_1, ..., z_t), which can be directly used to calculate the mean position of the tracked object, as well as the other moments of the tracked object. Cumulative weights can instead be used to achieve a more efficient sampling. Implementation considerations Since object-tracking can be a real-time objective, consideration of algorithm efficiency becomes important. The condensation algorithm is relatively simple when compared to the computational intensity of the Riccati equation required for Kalman filtering. The parameter N, which determines the number of samples in the sample set, will clearly hold a trade-off in efficiency versus performance. One way to increase efficiency of the algorithm is by selecting a low degree of freedom model for representing the shape of the object. The model used by Isard 1998 is a linear parameterization of B-splines in which the splines are limited to certain configurations. Suitable configurations were found by analytically determining combinations of contours from multiple views, of the object in different poses, and through principal component analysis (PCA) on the deforming object. Isard and Blake model the object dynamics as a second order difference equation with deterministic and stochastic components: x_t - x_bar = A (x_{t-1} - x_bar) + B w_t, where x_bar is the mean value of the state, A and B are matrices representing the deterministic and stochastic components of the dynamical model respectively, and w_t is a vector of independent standard normal variables. x_bar, A, and B are estimated via maximum likelihood estimation while the object performs typical movements. The observation model p(z_t | x_t) cannot be directly estimated from the data, requiring assumptions to be made in order to estimate it. Isard 1998 assumes that the clutter which may make the object not visible is a Poisson random process with spatial density lambda, and that any true target measurement is unbiased and normally distributed with standard deviation sigma. The basic condensation algorithm is used to track a single object in time. It is possible to extend the condensation algorithm using a single probability distribution to describe the likely states of multiple objects to track multiple objects in a scene at the same time. Since clutter can cause the object probability distribution to split into multiple peaks, each peak represents a hypothesis about the object configuration. Smoothing is a statistical technique of conditioning the distribution based on both past and future measurements once the tracking is complete in order to reduce the effects of multiple peaks. Smoothing cannot be directly done in real-time since it requires information of future measurements.
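To make the resample-predict-measure loop concrete, the following is a minimal sketch of one condensation iteration in Python with NumPy. It is illustrative rather than Isard and Blake's implementation: the one-dimensional toy state, the Gaussian observation likelihood, and the helper names (condensation_step, observe_likelihood) are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def condensation_step(samples, weights, A, B, x_bar, observe_likelihood):
    """One iteration of the condensation (particle filter) loop.

    samples: (N, d) array of state samples s_{t-1}^(n)
    weights: (N,) array of normalized weights pi_{t-1}^(n)
    A, B:    deterministic / stochastic dynamics matrices
    x_bar:   mean state of the learned motion model
    observe_likelihood: callable returning p(z_t | x_t = s) for each sample
    """
    N, d = samples.shape
    # 1. Resample: draw N samples with replacement, with probability pi^(n).
    idx = rng.choice(N, size=N, p=weights)
    resampled = samples[idx]
    # 2. Predict: apply the learned dynamics x_t - x_bar = A (x_{t-1} - x_bar) + B w_t.
    noise = rng.standard_normal((N, d))
    predicted = x_bar + (resampled - x_bar) @ A.T + noise @ B.T
    # 3. Measure: weight each sample by the observation likelihood, then normalize.
    new_weights = observe_likelihood(predicted)
    new_weights /= new_weights.sum()
    return predicted, new_weights

# Toy usage: 1-D state, Gaussian observation model with std sigma (an assumption).
sigma = 0.5
z_t = 1.2  # current observation
like = lambda s: np.exp(-0.5 * ((s[:, 0] - z_t) / sigma) ** 2)

N = 100
samples = rng.standard_normal((N, 1))   # prior p(x_0): standard Gaussian
weights = np.full(N, 1.0 / N)           # equal initial weights
A = np.array([[0.9]]); B = np.array([[0.3]]); x_bar = np.zeros(1)
samples, weights = condensation_step(samples, weights, A, B, x_bar, like)
print("estimated mean state:", (weights[:, None] * samples).sum(axis=0))
```

As the article notes, the naive weighted draw in step 1 can be replaced by cumulative weights and binary search to achieve more efficient sampling.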
Applications The algorithm can be used for vision-based robot localization of mobile robots. Instead of tracking the position of an object in the scene, however, the position of the camera platform is tracked. This allows the camera platform to be globally localized given a visual map of the environment. Extensions of the condensation algorithm have also been used to recognize human gestures in image sequences. This application of the condensation algorithm impacts the range of human–computer interactions possible. It has been used to recognize simple gestures of a user at a whiteboard to control actions such as selecting regions of the boards to print or save them. Other extensions have also been used for tracking multiple cars in the same scene. The condensation algorithm has also been used for face recognition in a video sequence. Resources An implementation of the condensation algorithm in C can be found on Michael Isard's website. An implementation in MATLAB can be found on the Mathworks File Exchange. An example of implementation using the OpenCV library can be found on the OpenCV forums. See also Particle filter – Condensation is the application of Sampling Importance Resampling (SIR) estimation to contour tracking References Computer vision
Condensation algorithm
[ "Engineering" ]
1,382
[ "Artificial intelligence engineering", "Packaging machinery", "Computer vision" ]
1,571,800
https://en.wikipedia.org/wiki/HD%2010647
HD 10647 (q1 Eridani) is a 6th-magnitude yellow-white dwarf star, 57 light-years away in the constellation of Eridanus. The star is visible to the unaided eye under very dark skies. It is slightly hotter and more luminous than the Sun, and at 1.75 billion years old, it is also younger. An extrasolar planet was discovered orbiting this star in 2003. Planetary system In 2003, Michel Mayor's team announced the discovery of a new planet, HD 10647 b, in Paris at the XIX IAP Colloquium "Extrasolar Planets: Today & Tomorrow". The Anglo-Australian Planet Search team initially did not detect the planet in 2004, though a solution was obtained by 2006. The CORALIE data was finally published in 2013. The IRAS infrared space telescope detected an excess of infrared radiation from the star, indicating a possible circumstellar disk. Among the 300 nearest Sun-like stars, this disk has the highest fractional luminosity. It is unusually bright, but not unusually massive; the lower bound of the mass is 8 times that of the Earth. The inclination of the disk is relatively high, and the disk is asymmetrical, being more extended in the northeast direction than the southwest. It extends from 34 astronomical units (AU) at the inner edge to 134 AU at the outer edge. The inner edge is sharp, suggesting the existence of a planet that carved out the edge. HD 10647 b, with a semimajor axis of about 2 AU, is too far from the disk to be responsible. However, other potential planets may be responsible for this feature. There is some evidence for an additional, warm asteroid belt-like component further in, at 3 to 10 AU away from the star. References External links Sky Map: HD 10647 Eridani, Q1 Eridanus (constellation) 010647 007978 0506 F-type main-sequence stars Eridani, 5 Durchmusterung objects 3109 Planetary systems with one confirmed planet
HD 10647
[ "Astronomy" ]
430
[ "Eridanus (constellation)", "Constellations" ]
1,571,836
https://en.wikipedia.org/wiki/Azoth
Azoth is a universal remedy or potent solvent sought after in alchemy, similar to (but distinct from) the alkahest, another alchemical substance. The quest for Azoth was the crux of numerous alchemical endeavors, symbolized by the Caduceus. Initially coined to denote an esoteric formula pursued by alchemists, akin to the Philosopher's Stone, the term Azoth later evolved into a poetic expression for the element mercury. The etymology of 'Azoth' traces to Medieval Latin as a modification of 'azoc,' ultimately derived from the Arabic al-za'buq (الزئبق), meaning 'the mercury.' The scientific community does not recognize the existence of this substance. The myth of Azoth may stem from misinterpreted observations of solvents like mercury, capable of dissolving gold. Additionally, the myth might have been fueled by the occult inclinations of alchemists, who rooted their chemical explorations in superstition and dogma. Description Azoth was believed to be the essential agent of transformation in alchemy. It is the name given by ancient alchemists to mercury, which they believed to be the animating spirit hidden in all matter that makes transmutation possible. The word comes from the Arabic al-zā'būq which means "mercury". The word occurs in the writings of many early alchemists, such as Zosimos, Olympiodorus, and Jābir ibn Hayyān (Geber). Mystical traditions and philosophy Azoth has also been linked to various mystical and spiritual practices beyond alchemy. In the context of Renaissance magic, it was often associated with the idea of spiritual enlightenment and the purification of the soul. Some mystical traditions regarded Azoth as a metaphor for the internal transformation required to achieve a higher state of consciousness. It was thought to embody the process of turning base human traits into divine virtues, akin to the transformation of base metals into gold. This spiritual interpretation of Azoth influenced numerous esoteric and hermetic schools of thought, contributing to its lasting legacy in Western mystical traditions. Additionally, Azoth's connection to mercury and its fluid, transformative properties also made it a symbol of adaptability and change in broader philosophical contexts. In the Kabbalah, Azoth is related to the Ein Soph or 'the Endless One'. See also Anima mundi Panacea (medicine) Prima materia Viriditas References External links Interpretation of Azoth of the Philosophers (by Dennis William Hauck) What is the Azoth? and The Azoth Ritual at Azothalchemy.org Alchemical substances Mythological medicines and drugs
Azoth
[ "Chemistry" ]
556
[ "Alchemical substances" ]
1,571,840
https://en.wikipedia.org/wiki/HD%2045350
HD 45350 is a solar analog star with an exoplanetary companion in the northern constellation of Auriga. It has an apparent visual magnitude of 7.89, which means it is an 8th magnitude star that is too dim to be readily visible to the naked eye. The system is located at a distance of 153 light-years from the Sun based on parallax measurements, but is drifting closer with a radial velocity of −21 km/s. This is an ordinary G-type main-sequence star with a stellar classification of G5 V, which indicates it is generating energy through core hydrogen fusion. Age estimates are in the range of 6–7 billion years and it has an absolute magnitude of 4.45, placing it about 0.8 magnitudes above the main sequence. The star is chromospherically quiet but metal-rich with a projected rotational velocity of 4.7 km/s. The mass of the star is about the same as the Sun's, but it is 24% larger in radius and radiates a 43% higher luminosity. The star HD 45350 is named Lucilinburhuc. The name was selected in the NameExoWorlds campaign by Luxembourg, during the 100th anniversary of the IAU. The Lucilinburhuc fortress was built in 963 by the founder of Luxembourg, Count Siegfried. The 2019–2020 class 3B of the Echternach high school in Luxembourg won the contest to name both the star and its planet. Planetary system In January 2005, the discovery of a very eccentric extrasolar planet orbiting the star was announced by the California and Carnegie Planet Search team. See also List of extrasolar planets References External links G-type main-sequence stars Solar analogs Planetary systems with one confirmed planet Auriga BD+39 1637 045350 030860
HD 45350
[ "Astronomy" ]
385
[ "Auriga", "Constellations" ]
1,571,859
https://en.wikipedia.org/wiki/Hydrazoic%20acid
Hydrazoic acid, also known as hydrogen azide, azic acid or azoimide, is a compound with the chemical formula HN3. It is a colorless, volatile, and explosive liquid at room temperature and pressure. It is a compound of nitrogen and hydrogen, and is therefore a pnictogen hydride. It was first isolated in 1890 by Theodor Curtius. The acid has few applications, but its conjugate base, the azide ion, is useful in specialized processes. Hydrazoic acid, like its fellow mineral acids, is soluble in water. Undiluted hydrazoic acid is dangerously explosive, with a standard enthalpy of formation ΔfH° (l, 298 K) = +264 kJ/mol. When dilute, the gas and aqueous solutions (<10%) can be safely prepared but should be used immediately; because of its low boiling point, hydrazoic acid is enriched upon evaporation and condensation, such that dilute solutions incapable of explosion can form droplets in the headspace of the container or reactor that are capable of explosion. Production The acid is usually formed by acidification of an azide salt like sodium azide. Normally solutions of sodium azide in water contain trace quantities of hydrazoic acid in equilibrium with the azide salt, but introduction of a stronger acid can convert the primary species in solution to hydrazoic acid. The pure acid may be subsequently obtained by fractional distillation as an extremely explosive colorless liquid with an unpleasant smell. Its aqueous solution can also be prepared by treatment of barium azide solution with dilute sulfuric acid, filtering off the insoluble barium sulfate. It was originally prepared by the reaction of aqueous hydrazine with nitrous acid: N2H4 + HNO2 → HN3 + 2 H2O. With the hydrazinium cation this reaction is written as: N2H5+ + HNO2 → HN3 + H+ + 2 H2O. Other oxidizing agents, such as hydrogen peroxide, nitrosyl chloride, trichloramine or nitric acid, can also be used to produce hydrazoic acid from hydrazine. Destruction prior to disposal Hydrazoic acid reacts with nitrous acid: HN3 + HNO2 → N2 + N2O + H2O. This reaction is unusual in that it involves compounds with nitrogen in four different oxidation states. Reactions In its properties hydrazoic acid shows some analogy to the halogen acids, since it forms poorly soluble (in water) lead, silver and mercury(I) salts. The metallic salts all crystallize in the anhydrous form and decompose on heating, leaving a residue of the pure metal. It is a weak acid (pKa = 4.75). Its heavy metal salts are explosive and readily interact with the alkyl iodides. Azides of heavier alkali metals (excluding lithium) or alkaline earth metals are not explosive, but decompose in a more controlled way upon heating, releasing spectroscopically pure nitrogen gas. Solutions of hydrazoic acid dissolve many metals (e.g. zinc, iron) with liberation of hydrogen and formation of salts, which are called azides (formerly also called azoimides or hydrazoates). Hydrazoic acid may react with carbonyl derivatives, including aldehydes, ketones, and carboxylic acids, to give an amine or amide, with expulsion of nitrogen. This is called the Schmidt reaction or Schmidt rearrangement. Dissolution in the strongest acids produces explosive salts containing the aminodiazonium ion H2N3+ (such as the hexafluoroantimonate salt [H2N3][SbF6]). The H2N3+ ion is isoelectronic with diazomethane CH2N2. The decomposition of hydrazoic acid, triggered by shock, friction, spark, etc., produces nitrogen and hydrogen: 2 HN3 → 3 N2 + H2. Hydrazoic acid undergoes unimolecular decomposition at sufficient energy: HN3 → N2 + NH. The lowest energy pathway produces NH in the triplet state, making it a spin-forbidden reaction. This is one of the few reactions whose rate has been determined for specific amounts of vibrational energy in the ground electronic state, by laser photodissociation studies. In addition, these unimolecular rates have been analyzed theoretically, and the experimental and calculated rates are in reasonable agreement.
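A rough sense of why the shock decomposition is so energetic follows directly from the enthalpy of formation quoted above. The short sketch below does the arithmetic; it is a back-of-the-envelope estimate only, using just ΔfH° for liquid HN3 and ignoring entropy and phase corrections.

```python
# Back-of-the-envelope reaction enthalpy for the shock-initiated decomposition
#   2 HN3 (l) -> 3 N2 (g) + H2 (g)
# N2 and H2 are elements in their standard states, so their Delta_f H is zero;
# the only input is Delta_f H(HN3, l) = +264 kJ/mol quoted in the text.
dHf_HN3 = 264.0  # kJ/mol, liquid hydrazoic acid

dH_rxn = (3 * 0.0 + 1 * 0.0) - 2 * dHf_HN3  # products minus reactants
print(f"Delta_r H = {dH_rxn:.0f} kJ per 2 mol HN3")    # -528 kJ
print(f"          = {dH_rxn / 2:.0f} kJ per mol HN3")  # -264 kJ released
```

In other words, every mole of HN3 that decomposes releases its full positive enthalpy of formation as heat, while also producing two moles of gas, which is consistent with the compound's sensitivity to shock, friction, and spark.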
Toxicity Hydrazoic acid is volatile and highly toxic. It has a pungent smell and its vapor can cause violent headaches. The compound acts as a non-cumulative poison. Applications 2-Furonitrile, a pharmaceutical intermediate and potential artificial sweetening agent, has been prepared in good yield by treating furfural with a mixture of hydrazoic acid (HN3) and perchloric acid (HClO4) in the presence of magnesium perchlorate in benzene solution at 35 °C. The all gas-phase iodine laser (AGIL) mixes gaseous hydrazoic acid with chlorine to produce excited nitrogen chloride, which is then used to cause iodine to lase; this avoids the liquid chemistry requirements of COIL lasers. References External links OSHA: Hydrazoic Acid Acids Azides Nitrogen hydrides Explosive chemicals Explosive gases Foul-smelling chemicals
Hydrazoic acid
[ "Chemistry" ]
1,014
[ "Explosive chemicals", "Azides", "Acids", "Explosive gases" ]
1,571,876
https://en.wikipedia.org/wiki/HD%20114386
HD 114386 is a star with a pair of orbiting exoplanets in the southern constellation of Centaurus. It has an apparent visual magnitude of 8.73, which means it cannot be viewed with the naked eye but can be seen with a telescope or good binoculars. Based on parallax measurements, the system is located at a distance of 91 light-years from the Sun. It is receding with a radial velocity of 33.4 km/s, and shows a high proper motion across the celestial sphere. The spectrum of HD 114386 yields a stellar classification of K3 V, matching a K-type main-sequence star, or orange dwarf. It has 76% of the mass of the Sun and 73% of the Sun's radius. HD 114386 is a much older star than the Sun, with an estimated age of roughly nine billion years. The abundance of iron in the stellar atmosphere, a measure of the star's metallicity, is nearly solar. It is rather dim compared to the Sun, radiating just 28% of the luminosity of the Sun from its photosphere at an effective temperature of 4,926 K. Planetary system In 2004, the Geneva Extrasolar Planet Search Team announced the discovery of an extrasolar planet orbiting the star. The preliminary data for a second exoplanet was released in 2011. See also 47 Ursae Majoris List of extrasolar planets References K-type main-sequence stars Planetary systems with two confirmed planets Centaurus Durchmusterung objects 114386 064295
HD 114386
[ "Astronomy" ]
328
[ "Centaurus", "Constellations" ]
1,571,880
https://en.wikipedia.org/wiki/Graphic%20designer
A graphic designer is a professional who practices the discipline of graphic design, either within companies or organizations or independently. They are professionals in design and visual communication whose primary focus is transforming linguistic messages into graphic form, whether tangible or intangible. They are responsible for planning, designing, projecting, and conveying messages or ideas through visual communication. Graphic design is an in-demand profession with significant job opportunities, since it builds on current technology and can be practiced online from anywhere in the world. Generally, a graphic designer works in areas such as branding, corporate identity, advertising, technical and artistic drawing, multimedia, etc. The profession draws on many academic fields during a designer's university career: human anatomy, psychology, photography, painting and printing techniques, mathematics, marketing, digital animation, and 3D modeling, and some professionals complement these skills with programming. This breadth gives the designer a comprehensive view of a company across the three essential factors usually evaluated: structure, team, and product. Graphic designers may work with individual clients or collaborate with larger teams; clear communication is crucial here, because misunderstandings can lead to setbacks. Professional requirements for graphic designers vary from one place to another. Designers must undergo specialized training, including advanced education and practical experience (internship), to develop skills and expertise in the workplace, which is necessary to obtain a credential that allows them to practice the profession. Practical, technical, and academic requirements to become a graphic designer vary by country or jurisdiction, although the formal study of design in academic institutions has played a crucial role in the overall development of the profession. Salary According to the Bureau of Labor Statistics, the median salary for graphic designers was $58,900 as of May 2023. The bottom 10% earned less than $36,420 while the top 10% earned more than $100,450. Qualifications Designers should be able to solve visual communication problems or challenges. In doing so, the designer must identify the communications issue, gather and analyze information related to the issue, and generate potential approaches aimed at solving the problem. Iterative prototyping and user testing can be used to determine the success or failure of a visual solution. Approaches to a communications problem are developed in the context of an audience and a media channel. Graphic designers must understand the social and cultural norms of that audience in order to develop visual solutions that are perceived as relevant, understandable and effective. Speaking directly with individuals from the target audience can prevent complications. Graphic designers should also have a thorough understanding of production and rendering methods. Some of the technologies and methods of production are drawing, offset printing, photography, and time-based and interactive media (film, video, computer multimedia). Frequently, designers are also called upon to manage color in different media. For instance, graphic designers use different color models for digital and print advertisements. RGB (red, green, blue) is an additive color model used for digital media designs, while the CMYK color model is made up of subtractive colors (cyan, magenta, yellow, and black) and is used in designing print media. The reason for the different models is that colors look different on screen and when printed onto paper; for example, colors appear darker on paper than on screen.
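To illustrate the relationship between the two models, here is a minimal sketch of the textbook naive RGB-to-CMYK conversion in Python. The formula is an assumption for illustration only; production print workflows convert through ICC color profiles rather than this direct arithmetic.

```python
def rgb_to_cmyk(r: int, g: int, b: int) -> tuple[float, float, float, float]:
    """Naive conversion from additive RGB (0-255) to subtractive CMYK (0-1).

    K (black) is taken as the lack of brightness; C, M, Y are the remaining
    subtractive components after the black is factored out.
    """
    if (r, g, b) == (0, 0, 0):
        return 0.0, 0.0, 0.0, 1.0  # pure black
    r1, g1, b1 = r / 255, g / 255, b / 255
    k = 1 - max(r1, g1, b1)
    c = (1 - r1 - k) / (1 - k)
    m = (1 - g1 - k) / (1 - k)
    y = (1 - b1 - k) / (1 - k)
    return c, m, y, k

# A saturated on-screen orange maps almost entirely into magenta and yellow ink.
print(rgb_to_cmyk(255, 128, 0))  # -> (0.0, ~0.498, 1.0, 0.0)
```

Because this arithmetic cannot model ink absorption or paper brightness, nominally equivalent values still look darker in print, which is why designers proof print work in CMYK rather than trusting the on-screen RGB preview.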
See also Graphic arts Graphic design occupations List of graphic designers Mood board References Computer occupations Computational fields of study Mass media occupations Visual arts occupations Office and administrative support occupations
Graphic designer
[ "Technology" ]
702
[ "Computational fields of study", "Computer occupations", "Computing and society" ]
1,571,916
https://en.wikipedia.org/wiki/Piloti
Pilotis, or piers, are supports such as columns, pillars, or stilts that lift a building above ground or water. They are traditionally found in wooden stilt and pole dwellings such as fishermen's huts in Asia and Scandinavia, and in elevated houses such as old Queenslanders in Australia's tropical northern state, where they are called "stumps". Pilotis are a fixture of modern architecture, and were recommended by the modern architect Le Corbusier in his manifesto, the Five Points of Architecture. Function In modern architecture, pilotis are ground-level supporting columns. A prime example is Le Corbusier's Villa Savoye in Poissy, France. Another is Patrick Gwynne's The Homewood in Surrey, England. Beyond their support function, the pilotis (or piers) raise the architectural volume, lighten it, and free a space for circulation under the construction. They refine a building's connection with the land by allowing for parking, a garden or a driveway below, while lending a sense of floating and lightness to the architecture itself. In hurricane-prone areas, pilotis may be used to raise the inhabited space of a building above typical storm surge levels. Le Corbusier used them in a variety of forms, from slender posts to the massive Brutalist look of the Marseilles Housing Unit (1945–1952), with a range of bases, inclusions and surfaces. This was part of Le Corbusier's idea of machine-like efficiency, in which land, people and buildings would work together optimally. Notes References Article: "Pilote, die" in: Pevsner, Honour, Fleming: Lexikon der Weltarchitektur, München 1987 Primer on architecture of the Musée Picardie, accessed 2009-06-20 "pilotis", The Urban Conservation Glossary, University of Dundee, online version accessed 2009-06-20 Architectural elements
Piloti
[ "Technology", "Engineering" ]
395
[ "Building engineering", "Architectural elements", "Components", "Architecture" ]
1,571,927
https://en.wikipedia.org/wiki/Social%20support
Social support is the perception and actuality that one is cared for, has assistance available from other people, and, most popularly, that one is part of a supportive social network. These supportive resources can be emotional (e.g., nurturance), informational (e.g., advice), or companionship (e.g., sense of belonging), and they can be tangible (e.g., financial assistance) or intangible (e.g., personal advice). Social support can be measured as the perception that one has assistance available, the actual received assistance, or the degree to which a person is integrated in a social network. Support can come from many sources, such as family, friends, pets, neighbors, coworkers, organizations, etc. Social support is studied across a wide range of disciplines including psychology, communications, medicine, sociology, nursing, public health, education, rehabilitation, and social work. Social support has been linked to many benefits for both physical and mental health, but "social support" (e.g., gossiping about friends) is not always beneficial. Social support theories and models were the subject of intensive academic study in the 1980s and 1990s, and are linked to the development of caregiver and payment models, and community delivery systems in the US and around the world. Two main models have been proposed to describe the link between social support and health: the buffering hypothesis and the direct effects hypothesis. Gender and cultural differences in social support have been found in fields such as education "which may not control for age, disability, income and social status, ethnic and racial, or other significant factors". Categories and definitions Distinctions in measurement Social support can be categorized and measured in several different ways. There are four common functions of social support: Emotional support is the offering of empathy, concern, affection, love, trust, acceptance, intimacy, encouragement, or caring. It is the warmth and nurturance provided by sources of social support. Providing emotional support can let the individual know that he or she is valued. Tangible support is the provision of financial assistance, material goods, or services. Also called instrumental support, this form of social support encompasses the concrete, direct ways people assist others. Informational support is the provision of advice, guidance, suggestions, or useful information to someone. This type of information has the potential to help others problem-solve. Companionship support is the type of support that gives someone a sense of social belonging (and is also called belonging). This can be seen as the presence of companions to engage in shared social activities. Formerly, it was also referred to as "esteem support" or "appraisal support", but these have since developed into alternative forms of support under the name "appraisal support" along with normative and instrumental support. Researchers also commonly make a distinction between perceived and received support. Perceived support refers to a recipient's subjective judgment that providers will offer (or have offered) effective help during times of need. Received support (also called enacted support) refers to specific supportive actions (e.g., advice or reassurance) offered by providers during times of need. Furthermore, social support can be measured in terms of structural support or functional support.
Structural support (also called social integration) refers to the extent to which a recipient is connected within a social network, like the number of social ties or how integrated a person is within his or her social network. Family relationships, friends, and membership in clubs and organizations contribute to social integration. Functional support looks at the specific functions that members in this social network can provide, such as the emotional, instrumental, informational, and companionship support listed above. Data suggests that emotional support may play a more significant role in protecting individuals from the deleterious effects of stress than structural means of support, such as social involvement or activity. These different types of social support have different patterns of correlations with health, personality, and personal relationships. For example, perceived support is consistently linked to better mental health whereas received support and social integration are not. In fact, research indicates that perceived social support that is untapped can be more effective and beneficial than utilized social support. Some have suggested that invisible support, a form of support where the person has support without his or her awareness, may be the most beneficial. This view has been complicated, however, by more recent research suggesting the effects of invisible social support – as with visible support – are moderated by provider, recipient, and contextual factors such as recipients' perceptions of providers' responsiveness to their needs, or the quality of the relationship between the support provider and recipient. Sources Social support can come from a variety of sources, including (but not limited to): family, friends, romantic partners, pets, community ties, and coworkers. Sources of support can be natural (e.g., family and friends) or more formal (e.g., mental health specialists or community organizations). The source of the social support is an important determinant of its effectiveness as a coping strategy. Support from a romantic partner is associated with health benefits, particularly for men. However, one study has found that although support from spouses buffered the negative effects of work stress, it did not buffer the relationship between marital and parental stresses, because the spouses were implicated in these situations. However, work-family specific support was more effective at alleviating work-family stress that feeds into marital and parental stress. Employee humor is negatively associated with burnout, and positively associated with health and stress-coping effectiveness. Additionally, social support from friends did provide a buffer in response to marital stress, because they were less implicated in the marital dynamic. Early familial social support has been shown to be important in children's abilities to develop social competencies, and supportive parental relationships have also had benefits for college-aged students. Teacher and school personnel support have been shown to be stronger than other relationships of support. This is hypothesized to result from family and friend relationships being subject to conflicts, whereas school relationships are more stable. Online social support Social support is also available among social media sites. As technology advances, the availability of online support increases. Social support can be offered through social media websites such as blogs, Facebook groups, health forums, and online support groups.
Early theories and research into Internet use tended to suggest negative implications for offline social networks (e.g., fears that Internet use would undermine desire for face-to-face interaction) and users' well-being. However, additional work showed null or even positive effects, contributing to a more nuanced understanding of online social processes. Emerging data increasingly suggest that, as with offline support, the effects of online social support are shaped by support provider, recipient, and contextual factors. For example, the interpersonal-connection-behaviors framework reconciles conflicts in the research literature by suggesting that social network site use is likely to contribute to well-being when users engage in ways that foster meaningful interpersonal connection. Conversely, use may harm well-being when users engage in passive consumption of social media. Online support can be similar to face-to-face social support, but may also offer convenience, anonymity, and non-judgmental interactions. Online sources such as social media may be less redundant sources of social support for users with relatively little in-person support compared to persons with high in-person support. Online sources may be especially important as potential social support resources for individuals with limited offline support, and may be related to physical and psychological well-being. However, socially isolated individuals may also be more drawn to computer-mediated vs. in-person forms of interaction, which may contribute to bidirectional associations between online social activity and isolation or depression. Support sought through social media can also provide users with emotional comfort that relates them to others while creating awareness about particular health issues. Research conducted by Winzelberg et al. evaluated an online support group for women with breast cancer and found that participants were able to form fulfilling supportive relationships in an asynchronous format; this form of support proved effective in reducing participants' scores on measures of depression, perceived stress, and cancer-related trauma, with similar benefits reported even in the context of IVF treatment. This type of online communication can increase the ability to cope with stress. Social support through social media is potentially available to anyone with Internet access and allows users to create relationships and receive encouragement for a variety of issues, including rare conditions or circumstances. Coulson claims online support groups provide a unique opportunity for health professionals to learn about the experiences and views of individuals. This type of social support can also benefit users by providing them with a variety of information. Seeking informational social support allows users to access suggestions, advice, and information regarding health concerns or recovery. Many need social support, and its availability on social media may broaden access to a wider range of people in need. Both experimental and correlational research have indicated that increased social network site use can lead to greater perceived social support and increased social capital, both of which predict enhanced well-being. An increasing number of interventions aim to create or enhance social support in online communities. While preliminary data often suggest such programs may be well received by users and may yield benefits, additional research is needed to more clearly establish the effectiveness of many such interventions.
Until the late 2010s, research examining online social support tended to use ad hoc instruments or measures that were adapted from offline research, resulting in the possibility that measures were not well-suited for measuring online support, or had weak or unknown psychometric properties. Instruments specifically developed to measure social support in online contexts include the Online Social Support Scale (which has sub scales for esteem/emotional support, social companionship, informational support, and instrumental support) and the Online Social Experiences Measure (which simultaneously assesses positive and negative aspects of online social activity and has predictive validity regarding cardiovascular implications of online social support). Links to mental and physical health Benefits Mental health Social support profile is associated with increased psychological well-being in the workplace and in response to important life events. There is ample evidence that social support aids in lowering problems related to one's mental health. As reported by Cutrona, Russell, and Rose, elderly individuals in their studies whose relationships elevated their self-esteem were less likely to experience a decline in their health. In stressful times, social support helps people reduce psychological distress (e.g., anxiety or depression). Social support can simultaneously function as a problem-focused coping strategy (e.g., receiving tangible information that helps resolve an issue) and an emotion-focused coping strategy (e.g., used to regulate emotional responses that arise from the stressful event). Social support has been found to promote psychological adjustment in conditions with chronic high stress like HIV, rheumatoid arthritis, cancer, stroke, and coronary artery disease. A lack of social support, by contrast, has been associated with risks to an individual's mental health. Research also shows that social support acts as a buffer, protecting individuals' mental and physical health against certain life stressors. Additionally, social support has been associated with various acute and chronic pain variables (for more information, see Chronic pain). People with low social support report more sub-clinical symptoms of depression and anxiety than do people with high social support. In addition, people with low social support have higher rates of major mental disorder than those with high support. These include post-traumatic stress disorder (PTSD), panic disorder, social phobia, major depressive disorder, dysthymic disorder, and eating disorders. Among people with schizophrenia, those with low social support have more symptoms of the disorder. In addition, people with low support have more suicidal ideation, and more alcohol and (illicit and prescription) drug problems. Similar results have been found among children. Religious coping has especially been shown to correlate positively with positive psychological adjustment to stressors with enhancement of faith-based social support hypothesized as the likely mechanism of effect. However, more recent research reveals the role of religiosity/spirituality in enhancing social support may be overstated and in fact disappears when the personality traits of "agreeableness" and "conscientiousness" are also included as predictors. In a 2013 study, Akey et al.
did a qualitative study of 34 men and women diagnosed with an eating disorder and used the Health Belief Model (HBM) to explain the reasons why they forgo seeking social support. Many people with eating disorders have a low perceived susceptibility, which can be explained as a sense of denial about their illness. Their perceived severity of the illness is affected by those to whom they compare themselves, often resulting in people believing their illness is not severe enough to seek support. Due to poor past experiences or educated speculation, the perception of benefits for seeking social support is relatively low. The number of perceived barriers towards seeking social support often prevents people with eating disorders from getting the support they need to better cope with their illness. Such barriers include fear of social stigma, financial resources, and availability and quality of support. Self-efficacy may also explain why people with eating disorders do not seek social support, because they may not know how to properly express their need for help. This research has helped to create a better understanding of why individuals with eating disorders do not seek social support, and may lead to increased efforts to make such support more available. Eating disorders are classified as mental illnesses but can also have physical health repercussions. Creating a strong social support system for those affected by eating disorders may help such individuals to have a higher quality of both mental and physical health. Various studies have been performed examining the effects of social support on psychological distress. Interest in the implications of social support was triggered by a series of articles published in the mid-1970s, each reviewing literature examining the association between psychiatric disorders and factors such as change in marital status, geographic mobility, and social disintegration. Researchers realized that the theme present in each of these situations is the absence of adequate social support and the disruption of social networks. This observed relationship sparked numerous studies concerning the effects of social support on mental health. One particular study documented the effects of social support as a coping strategy on psychological distress in response to stressful work and life events among police officers. Talking things over among coworkers was the most frequent form of coping utilized while on duty, whereas most police officers kept issues to themselves while off duty. The study found that the social support between co-workers significantly buffered the relationship between work-related events and distress. Other studies have examined the social support systems of single mothers. One study by D'Ercole demonstrated that the effects of social support vary in both form and function and can have drastically different effects depending upon the individual. The study found that supportive relationships with friends and co-workers, rather than task-related support from family, were positively related to the mother's psychological well-being. D'Ercole hypothesizes that friends of a single parent offer a chance to socialize, match experiences, and be part of a network of peers. These types of exchanges may be more spontaneous and less obligatory than those between relatives. Additionally, co-workers can provide a community away from domestic life, relief from family demands, a source of recognition, and feelings of competence.
D'Ercole also found an interesting statistical interaction whereby social support from co-workers decreased the experience of stress only in lower income individuals. The author hypothesizes that single women who earn more money are more likely to hold more demanding jobs which require more formal and less dependent relationships. Additionally, those women who earn higher incomes are more likely to be in positions of power, where relationships are more competitive than supportive. Many studies have been dedicated specifically to understanding the effects of social support in individuals with post-traumatic stress disorder (PTSD). In a study by Haden et al., when victims of severe trauma perceived high levels of social support and engaged in interpersonal coping styles, they were less likely to develop severe PTSD when compared to those who perceived lower levels of social support. These results suggest that high levels of social support alleviate the strong positive association between level of injury and severity of PTSD, and thus serve as a powerful protective factor. In general, data shows that the support of family and friends has a positive influence on an individual's ability to cope with trauma. In fact, a meta-analysis by Brewin et al. found that social support was the strongest predictor, accounting for 40% of the variance, in PTSD severity. However, perceived social support may be directly affected by the severity of the trauma. In some cases, support decreases with increases in trauma severity. College students have also been the target of various studies on the effects of social support on coping. Reports between 1990 and 2003 showed college stresses were increasing in severity. Studies have also shown that college students' perceptions of social support have shifted from viewing support as stable to viewing it as variable and fluctuating. In the face of such mounting stress, students naturally seek support from family and friends in order to alleviate psychological distress. A study by Chao found a significant two-way correlation between perceived stress and social support, as well as a significant three-way correlation between perceived stress, social support, and dysfunctional coping. The results indicated that high levels of dysfunctional coping deteriorated the association between stress and well-being at both high and low levels of social support, suggesting that dysfunctional coping can deteriorate the positive buffering action of social support on well-being. Students who reported social support were found more likely to engage in less healthy activities, including sedentary behavior, drug and alcohol use, and too much or too little sleep. Lack of social support in college students is also strongly related to life dissatisfaction and suicidal behavior. Physical health Social support has a clearly demonstrated link to physical health outcomes in individuals, with numerous ties to physical health including mortality. People with low social support are at a much higher risk of death from a variety of diseases (e.g., cancer or cardiovascular disease). Numerous studies have shown that people with higher social support have an increased likelihood for survival. Individuals with lower levels of social support have: more cardiovascular disease, more inflammation and less effective immune system functioning, more complications during pregnancy, and more functional disability and pain associated with rheumatoid arthritis, among many other findings.
Conversely, higher rates of social support have been associated with numerous positive outcomes, including faster recovery from coronary artery surgery, less susceptibility to herpes attacks, a lower likelihood of showing age-related cognitive decline, and better diabetes control. People with higher social support are also less likely to develop colds and are able to recover faster if they are ill from a cold. There is sufficient evidence linking cardiovascular, neuroendocrine, and immune system function with higher levels of social support. Social support predicts less atherosclerosis and is associated with the slower progression of an already diagnosed cardiovascular disease. There is also a clearly demonstrated link between social support and better immune function, especially in older adults. While links have been shown between neuroendocrine functionality and social support, further understanding is required before specific significant claims can be made. Social support is also hypothesized to be beneficial in the recovery from less severe cancers. Research focuses on breast cancer, but in more serious cancers factors such as severity and spread are difficult to measure in the context of impacts of social support. The field of physical health often struggles with the combination of variables set by external factors that are difficult to control, such as the entangled impact of life events on social support and the buffering impact these events have. There are serious ethical concerns involved with controlling too many factors of social support in individuals, leading to an interesting crossroads in the research. Costs Social support is integrated into service delivery schemes and is sometimes a primary service provided by governmental contracted entities (e.g., companionship, peer services, family caregivers). Community services known by the nomenclature community support, and workers by a similar title, Direct Support Professional, have a base in social and community support "ideology". All supportive services from supported employment to supported housing, family support, educational support, and supported living are based upon the relationship between "informal and formal" supports, and "paid and unpaid caregivers". Inclusion studies, based upon affiliation and friendship or their converse, have a similar theoretical basis, as do "person-centered support" strategies. Social support theories are often found in "real life" in cultural, music and arts communities, and as might be expected within religious communities. Social support is integral in theories of aging, and the "social care systems" have often been challenged (e.g., creativity throughout the lifespan, extra retirement hours). Ed Skarnulis' (state director) adage, "Support, don't supplant the family" applies to other forms of social support networks. Although there are many benefits to social support, it is not always beneficial. It has been proposed that in order for social support to be beneficial, the social support desired by the individual has to match the support given to him or her; this is known as the matching hypothesis. Psychological stress may increase if a different type of support is provided than what the recipient wishes to receive (e.g., informational support is given when emotional support is sought). Additionally, elevated levels of perceived stress can impact the effect of social support on health-related outcomes. Other costs have been associated with social support.
For example, received support has not been linked consistently to either physical or mental health; perhaps surprisingly, received support has sometimes been linked to worse mental health. Additionally, if social support is overly intrusive, it can increase stress. When discussing social support, it is important to consider the possibility that the social support system is actually an antagonistic influence on an individual. Two dominant models There are two dominant hypotheses addressing the link between social support and health: the buffering hypothesis and the direct effects hypothesis. The main difference between these two hypotheses is that the direct effects hypothesis predicts that social support is beneficial all the time, while the buffering hypothesis predicts that social support is mostly beneficial during stressful times. Evidence has been found for both hypotheses. In the buffering hypothesis, social support protects (or "buffers") people from the bad effects of stressful life events (e.g., death of a spouse, job loss). Evidence for stress buffering is found when the correlation between stressful events and poor health is weaker for people with high social support than for people with low social support. The weak correlation between stress and health for people with high social support is often interpreted to mean that social support has protected people from stress. Stress buffering is more likely to be observed for perceived support than for social integration or received support. The theoretical concept or construct of resiliency is associated with coping theories. In the direct effects (also called main effects) hypothesis, people with high social support are in better health than people with low social support, regardless of stress. In addition to showing buffering effects, perceived support also shows consistent direct effects for mental health outcomes. Both perceived support and social integration show main effects for physical health outcomes. However, received (enacted) support rarely shows main effects. Theories to explain the links Several theories have been proposed to explain social support's link to health. Stress and coping social support theory dominates social support research and is designed to explain the buffering hypothesis described above. According to this theory, social support protects people from the bad health effects of stressful events (i.e., stress buffering) by influencing how people think about and cope with the events. A 2018 example is the effect of school shootings on children's well-being, health, and future. According to stress and coping theory, events are stressful insofar as people have negative thoughts about the event (appraisal) and cope ineffectively. Coping consists of deliberate, conscious actions such as problem solving or relaxation. As applied to social support, stress and coping theory suggests that social support promotes adaptive appraisal and coping. Evidence for stress and coping social support theory is found in studies that observe stress buffering effects for perceived social support. One problem with this theory is that, as described previously, stress buffering is not seen for social integration, and that received support is typically not linked to better health outcomes. Relational regulation theory (RRT) is another theory, which is designed to explain main effects (the direct effects hypothesis) between perceived support and mental health.
As mentioned previously, perceived support has been found to have both buffering and direct effects on mental health. RRT was proposed in order to explain perceived support's main effects on mental health, which cannot be explained by the stress and coping theory. RRT hypothesizes that the link between perceived support and mental health comes from people regulating their emotions through ordinary conversations and shared activities rather than through conversations on how to cope with stress. This regulation is relational in that the support providers, conversation topics and activities that help regulate emotion are primarily a matter of personal taste. This is supported by previous work showing that the largest part of perceived support is relational in nature. Life-span theory is another theory to explain the links of social support and health, which emphasizes the differences between perceived and received support. According to this theory, social support develops throughout the life span, but especially in childhood attachment with parents. Social support develops along with adaptive personality traits such as low hostility, low neuroticism, high optimism, as well as social and coping skills. Together, support and other aspects of personality ("psychological theories") influence health largely by promoting health practices (e.g., exercise and weight management) and by preventing health-related stressors (e.g., job loss, divorce). Evidence for life-span theory includes that a portion of perceived support is trait-like, and that perceived support is linked to adaptive personality characteristics and attachment experiences. Lifespan theories originated in university Schools of Human Ecology, are aligned with family theories, and have been researched through federal centers for decades (e.g., University of Kansas, Beach Center for Families; Cornell University, School of Human Ecology). Of the Big Five personality traits, agreeableness is associated with people receiving the most social support and having the least-strained relationships at work and home. Receiving support from a supervisor in the workplace is associated with alleviating tensions both at work and at home, as are an employee's interdependency and idiocentrism. Biological pathways Many studies have tried to identify biopsychosocial pathways for the link between social support and health. Social support has been found to positively impact the immune, neuroendocrine, and cardiovascular systems. Although these systems are listed separately here, evidence has shown that these systems can interact and affect each other. Immune system: Social support is generally associated with better immune function. For example, being more socially integrated is correlated with lower levels of inflammation (as measured by C-reactive protein, a marker of inflammation), and people with more social support have a lower susceptibility to the common cold. Neuroendocrine system: Social support has been linked to lower cortisol ("stress hormone") levels in response to stress. Neuroimaging work has found that social support decreases activation of regions in the brain associated with social distress, and that this diminished activity was also related to lowered cortisol levels. Cardiovascular system: Social support has been found to lower cardiovascular reactivity to stressors. It has been found to lower blood pressure and heart rates, which are known to benefit the cardiovascular system.
Though many benefits have been found, not all research indicates positive effects of social support on these systems. For example, sometimes the presence of a support figure can lead to increased neuroendocrine and physiological activity. Support groups Social support groups can be a source of informational support, by providing valuable educational information, and emotional support, including encouragement from people experiencing similar circumstances. Studies have generally found beneficial effects for social support group interventions for various conditions, including Internet support groups. These groups may be termed "self-help" groups in some countries, may be offered by non-profit organizations, and, as of 2018, may be paid for as part of governmental reimbursement schemes. According to Drebing, previous studies have shown that those going to support groups later show enhanced social support; participation in groups such as Alcoholics Anonymous (AA) and Narcotics Anonymous (NA) was positively correlated with continued participation in subsequent groups and with abstaining from the addiction. Because correlation does not equal causation, attending those meetings does not itself cause one to abstain from relapsing into old habits; rather, attendance has been shown to be helpful in establishing sobriety. While many support groups meet where discussions can happen face to face, evidence shows that online support offers many of the same benefits. Coulson found that discussion forums offer several additional benefits, such as improved coping and an overall sense of well-being. Providing support There are both costs and benefits to providing support to others. Providing long-term care or support for someone else is a chronic stressor that has been associated with anxiety, depression, alterations in the immune system, and increased mortality. Thus, family caregivers and university personnel alike have advocated both for respite or relief and for higher payments related to ongoing, long-term caregiving. However, providing support has also been associated with health benefits. In fact, providing instrumental support to friends, relatives, and neighbors, or emotional support to spouses, has been linked to a significant decrease in the risk for mortality. Researchers found that within couples where one partner has been diagnosed with breast cancer, not only does the spouse with the illness benefit from the provision and receipt of support, but so does the spouse with no illness. Relationship well-being was found to be the area that benefited most for the spouses of those with breast cancer. Also, a recent neuroimaging study found that giving support to a significant other during a distressful experience increased activation in reward areas of the brain. Social defense system In 1959, Isabel Menzies Lyth identified that a threat to people's identity in a group where they share similar characteristics develops a defense system within the group, one stemming from emotions experienced by group members that are difficult to articulate, cope with, and find solutions to. Together with external pressure for efficiency, a collusive and injunctive system develops that is resistant to change, supports the members' activities, and prohibits others from performing their major tasks. Gender and culture Gender differences Gender differences have been found in social support research. Women provide more social support to others and are more engaged in their social networks.
Evidence has also supported the notion that women may be better providers of social support. In addition to being more involved in the giving of support, women are also more likely to seek out social support to deal with stress, especially from their spouses. However, one study indicates that there are no differences in the extent to which men and women seek appraisal, informational, and instrumental types of support. Rather, the big difference lies in seeking emotional support. Additionally, social support may be more beneficial to women. Shelley Taylor and her colleagues have suggested that these gender differences in social support may stem from the biological difference between men and women in how they respond to stress (i.e., fight or flight versus tend and befriend). Married men are less likely to be depressed than non-married men after the presence of a particular stressor because men are able to delegate their emotional burdens to their partner. Women, in turn, have been shown to be influenced by, and to act more in reaction to, social context than men. It has been found that men's behaviors are overall more asocial, with less regard to the impact their coping may have upon others, while women are more prosocial, with importance placed on how their coping affects the people around them. This may explain why women are more likely to experience psychological problems such as depression and anxiety, based on how they receive and process stressors. In general, women are likely to find situations more stressful than men are. It is important to note that when the perceived stress level is the same, men and women show far fewer differences in how they seek and use social support. Cultural differences Although social support is thought to be a universal resource, cultural differences exist in social support. In many Asian cultures, the person is seen more as part of a collective unit of society, whereas Western cultures are more individualistic and conceptualize social support as a transaction in which one person seeks help from another. In more interdependent Eastern cultures, people are less inclined to enlist the help of others. For example, European Americans have been found to call upon their social relationships for social support more often than Asian Americans or Asians during stressful occasions, and Asian Americans expect social support to be less helpful than European Americans do. These differences in social support may be rooted in different cultural ideas about social groups. It is important to note that these differences are stronger in emotional support than in instrumental support. Additionally, ethnic differences in social support from family and friends have been found. Cultural differences in coping strategies other than social support also exist. One study shows that Koreans are more likely to report substance abuse than European Americans are. Further, European Americans are more likely to exercise in order to cope than Koreans. Some cultural explanations are that Asians are less likely to seek social support for fear of disrupting the harmony of their relationships and that they are more inclined to settle their problems independently and avoid criticism. However, these differences are not found among Asian Americans relative to their European American counterparts. Different cultures provide social support in different ways. In African American households, support is limited. Many Black mothers raise their children without a male figure.
Women struggle with job opportunities due to job biases and racial discrimination. Many Black women face this harsh reality, which can push them into poverty. When there is poverty within the home, the main focus is making sure the bills are paid, sometimes causing children to take on adult roles at a very young age. A woman trying to balance both the mom and dad roles has less capacity to give the moral support certain kids need. See also Family support Interpersonal emotion regulation Invisible support Narcissistic supply Respect Peer support Prosocial behavior Social capital Social connection Social undermining Stress (psychological) Supported employment Supported housing Welfare Social Support Questionnaire Help-seeking References Social networks Clinical psychology Communication theory Social influence Economic sociology
Social support
[ "Biology" ]
7,152
[ "Behavioural sciences", "Behavior", "Clinical psychology" ]
1,571,968
https://en.wikipedia.org/wiki/Gerhard%20Ringel
Gerhard Ringel (October 28, 1919 in Kollnbrunn, Austria – June 24, 2008 in Santa Cruz, California) was a German mathematician. He was one of the pioneers in graph theory and contributed significantly to the proof of the Heawood conjecture (now the Ringel–Youngs theorem), a mathematical problem closely linked with the four color theorem. Although born in Austria, Ringel was raised in Czechoslovakia and attended Charles University before being drafted into the Wehrmacht in 1940 (after Germany had taken control of much of what had been Czechoslovakia). After the war, Ringel was held for over four years in a Soviet prisoner of war camp. He earned his PhD from the University of Bonn in 1951 with a thesis written under the supervision of Emanuel Sperner and Ernst Peschl. Ringel started his academic career as professor at the Free University Berlin. In 1970 he left Germany due to bureaucratic consequences of the German student movement, and continued his career at the University of California, Santa Cruz, having been invited there by his coauthor, John W. T. (Ted) Youngs. He was awarded honorary doctorate degrees from the Karlsruhe Institute of Technology and the Free University of Berlin. Besides his mathematical skills, he was a widely recognized entomologist, with his main emphasis on collecting and breeding butterflies. Prior to his death, he gave his outstanding collection of butterflies to the UCSC Museum of Natural History Collections. Publications References 1919 births 2008 deaths People from Gänserndorf District 20th-century American mathematicians 21st-century American mathematicians 20th-century German mathematicians 21st-century German mathematicians Graph theorists University of Bonn alumni Academic staff of the Free University of Berlin University of California, Santa Cruz faculty Charles University alumni German emigrants to Czechoslovakia Immigrants to the United States German military personnel of World War II German prisoners of war in World War II held by the Soviet Union
Gerhard Ringel
[ "Mathematics" ]
377
[ "Mathematical relations", "Graph theory", "Graph theorists" ]
1,572,074
https://en.wikipedia.org/wiki/Hammer%20drill
A hammer drill, also known as a percussion drill or impact drill, is a power tool used chiefly for drilling in hard materials. It is a type of rotary drill with an impact mechanism that generates a hammering motion. The percussive mechanism provides a rapid succession of short hammer thrusts to pulverize the material to be bored, so as to provide quicker drilling with less effort. If a hammer drill's impact mechanism can be switched off, the tool can be used like a conventional drill to also perform tasks such as screwdriving. History Ancient China's principal drilling technique, percussive drilling, was invented during the Han dynasty. The process involved two to six men jumping on a lever at rhythmic intervals to raise a heavy iron bit attached to long bamboo cables from a bamboo derrick. Utilizing cast iron bits and tools constructed of bamboo, the early Chinese were able to use percussion drilling to drill holes to a depth of . The construction of large wells took more than two to three generations of workers to complete. The cable tool drilling machines developed by the early Chinese involved raising and dropping a heavy string of drilling tools to crush through rocks into diminutive fragments. In addition, the Chinese also used a cutting head secured to bamboo rods to drill to depths of . The raising and dropping of the bamboo drill strings allowed the drilling machine to penetrate less dense and unconsolidated rock formations. In 1848 J.J. Couch invented the first pneumatic percussion drill. The origin of the first hammer drill is a matter of contention. German company Fein patented a "drill with electro-pneumatic striking mechanism" in 1914. German company Bosch mass-produced the first "Bosch-Hammer" around 1932. The US company Milwaukee Electric Tool Corporation states that in 1935, it was selling a lightweight electric hammer drill (cam-action). Hand-cranked percussion drills were made in the UK in the mid-twentieth century. Design Hammer drills have a cam-action or percussion hammering mechanism, in which two sets of toothed gears mechanically interact with each other to hammer while rotating the drill bit. With cam-action drills, the chuck has a mechanism whereby the entire chuck and bit move forward and backward on the axis of rotation. This type of drill is often used with or without the hammer action, but it is not possible to use the hammer action alone as it is the rotation over the cams which causes the hammer motion. A hammer drill has a specially designed clutch that allows it to not only spin the drill bit, but also to punch it in and out (along the axis of the bit). The actual distance the bit travels in and out and the force of its blow are both very small, and the hammering action is very rapid—thousands of "BPM" (blows per minute) or "IPM" (impacts per minute). Although each blow is of relatively low force, these thousands of blows per minute are more than adequate to break up concrete or brick, using the masonry drill bit's carbide wedge to pulverize it for the spiral flutes to whisk away. For this reason, a hammer drill drills much faster than a regular drill through concrete, brick, and thick lumber. In standardized drilling speed tests, the most effective hammer drills improve drilling speeds by upwards of 30% compared to completing the same task with the hammer mode disabled. Hammer drills are increasingly powered by cordless technology. Uses Holes in hard materials are needed for anchor bolts, concrete screws, and wall plugs.
Hammer drills are not typically used for production construction drilling, but rather for occasional drilling of holes into concrete, masonry or stone. They are also used to drill holes in concrete footings to pin concrete wall forms and to drill holes in concrete floors to pin wall framing. Slotted drive shaft or slotted drive system (SDS) rotary drills are more commonly used as dedicated masonry drilling tools in construction. The system was designed by Bosch in 1975 and stands for "Stecken – Drehen – Sichern", which is German for "Insert – Twist – Secure". Hammer drills almost always have a lever or switch that locks off the special "hammer clutch," turning the tool into a conventional drill for wood or metal work. Hammer drills are more expensive and bulkier than regular drills, but are preferable for applications where the material to be drilled (whether concrete block or wood studs) is unknown. For example, an electrician mounting an electrical box to a wall would be able to use the same hammer drill to drill into either wood studs (hammer disabled) or masonry walls (hammer enabled). See also References External links NIOSH Sound Power and Vibrations Database New York City Quiet Vendor Guidelines Power tools Hand-held power tools
Hammer drill
[ "Physics" ]
963
[ "Power (physics)", "Power tools", "Physical quantities" ]
1,572,108
https://en.wikipedia.org/wiki/The%20Red%20Balloon
The Red Balloon (Le Ballon rouge) is a 1956 French fantasy comedy-drama featurette written, produced, and directed by Albert Lamorisse. The thirty-four-minute short, which follows the adventures of a young boy who one day finds a sentient, mute, red balloon, was filmed in the Ménilmontant neighborhood of Paris. Lamorisse used his children as actors in the film. His son, Pascal, plays himself in the main role, and his daughter, Sabine, portrays a young girl. The film won numerous awards, including the Academy Award for Best Original Screenplay for Lamorisse in 1956 and the Palme d'Or for short films at the 1956 Cannes Film Festival. It also became popular with children and educators. It is the only short film to win the Oscar for Best Original Screenplay. Plot The film follows Pascal (Pascal Lamorisse), a young boy who discovers a large, helium-filled red balloon on his way to school one morning. As he plays with it, he realizes it has a mind of its own. The balloon begins to follow him wherever he goes, never straying far, and sometimes floating outside his apartment window since his mother will not allow it inside. As Pascal and the balloon wander through the streets of Paris, they draw a lot of attention and envy from other children. At one point, the balloon enters his classroom, causing an uproar among his classmates. This alerts the principal, who locks Pascal in his office. Later, after being set free, Pascal and the balloon encounter a young girl (Sabine Lamorisse) with a blue balloon that also seems to have a mind of its own, just like his. One Sunday, Pascal is told to leave the balloon at home while he and his mother go to church. However, the balloon follows them through an open window and into the church, where a scolding beadle leads them out. As Pascal and the balloon continue to explore the neighborhood, a gang of older boys, envious of the balloon, steals it while Pascal is inside a bakery. He manages to retrieve it, but the boys eventually catch up to them after a chase through narrow alleys. They hold Pascal back as they bring the balloon down with slingshots and stones, and one of them finally destroys it by stomping on it. The film ends with all the other balloons in Paris coming to Pascal's aid, lifting him up, and taking him on a cluster balloon ride over the city. Themes The film, set in post-World War II Paris, features a dark and grey mise-en-scène that adds a somber tone to the setting and mood. In contrast, the bright red balloon serves as a symbol of hope and light throughout the film. The cluster balloon ride in the final scene can also be interpreted as a religious or spiritual metaphor. For example, when the balloon is destroyed, its "spirit" seems to live on through all the other balloons in the city, which some view as a metaphor for Christ. Themes of self-realization and loneliness are also present in the film. Additionally, the theme of innocence is a central focus, as the film shows how a cynical world is transformed into a magical one through the eyes of a child, highlighting the power of innocence and imagination. Author Myles P. Breen has identified thematic and stylistic elements in the film that reflect the qualities of poetry. Breen supports this view by quoting film theorist Christian Metz, who states, "In a poem, there is no story line, and nothing intrudes between the author and the reader." Breen categorizes the film as a "filmic poem," partly due to its loose, non-narrative structure.
Production The film serves as a visual record of the Belleville and Ménilmontant areas of Paris, which had fallen into decay by the 1960s. This decline led the Parisian government to demolish much of the area as part of a slum-clearance effort. While some of the site was rebuilt with housing projects, the rest remained wasteland for 20 years. Many of the locations featured in the film no longer exist, including one of the bakeries, the school, the famous staircase located just beyond the equally famous café "Au Repos de la Montagne," the steep steps of passage Julien Lacroix where Pascal finds the balloon, and the empty lot where many of the battles take place. Today, the Parc de Belleville stands in that area. However, some locations remain intact, such as the apartment where Pascal lives with his mother at 15, rue du Transvaal, the Église Notre-Dame-de-la-Croix de Ménilmontant, and the Pyrénées-Ménilmontant bus stop at the intersection of rue des Pyrénées and rue de Ménilmontant. Lamorisse, a former auditor at the Institut des hautes études cinématographiques (IDHEC), employed a crew composed entirely of IDHEC graduates for the film. The main role of Pascal is played by Lamorisse's son, Pascal Lamorisse. French singer Renaud and his brother appear at the end of the film as twin brothers in red coats. They were cast in the roles through their uncle, Edmond Séchan, the film's director of photography. Release The film premiered and opened nationwide in France on 19 October 1956; it was released in the United Kingdom on 23 December 1956 (as the supporting film to the 1956 Royal Performance Film The Battle of the River Plate, which ensured it a wide distribution) and in the United States on 11 March 1957. The film has been featured in many festivals over the years, including the Wisconsin International Children's Film Festival; the Los Angeles Outfest Gay and Lesbian Film Festival; the Wisconsin Film Festival; and others. The film, in its American television premiere, was introduced by then-actor Ronald Reagan as an episode of the CBS anthology series General Electric Theater on 2 April 1961. The film is popular in elementary classrooms throughout the United States and Canada. A four-minute clip is on the rotating list of programming on Classic Arts Showcase. Reception Since its first release in 1956, the film has generally received overwhelmingly favorable reviews from critics. The film critic for The New York Times, Bosley Crowther, hailed the simple tale and praised director Lamorisse, writing: "Yet with the sensitive cooperation of his own beguiling son and with the gray-blue atmosphere of an old Paris quarter as the background for the shiny balloon, he has got here a tender, humorous drama of the ingenuousness of a child and, indeed, a poignant symbolization of dreams and the cruelty of those who puncture them." When the film was re-released in the United States in late 2006 by Janus Films, Entertainment Weekly magazine film critic Owen Gleiberman praised its direction and simple story line that reminded him of his youth, and wrote: "More than any other children's film, The Red Balloon turns me into a kid again whenever I see it...[to] see The Red Balloon is to laugh, and cry, at the impossible joy of being a child again." Film critic Brian Gibson wrote: "So far, this seems a post-Occupation France happy to forget the blood and death of Adolf Hitler's war a decade earlier. 
But soon people’s occasional, playful efforts to grab the floating, carefree balloon become grasping and destructive. In a gorgeous sequence, light streaming down alleys as children's shoes clack and clatter on the cobblestones, the balloon bouncing between the walls, Pascal is hunted down for his floating pet. Its ballooning sense of hope and freedom is deflated by a fierce, squabbling mass. Then, fortunately, it floats off, with the breeze of magic-realism, into a feeling of escape and peace, The Red Balloon taking hold of Pascal, lifting him out of this rigid, petty, earthbound life." In a review in The Washington Post, critic Philip Kennicott had a cynical view: "[The film takes] place in a world of lies. Innocent lies? Not necessarily. The Red Balloon may be the most seamless fusion of capitalism and Christianity ever put on film. A young boy invests in a red balloon the love of which places him on the outside of society. The balloon is hunted down and killed on a barren hilltop—think Calvary—by a mob of cruel boys. The ending, a bizarre emotional sucker punch, is straight out of the New Testament. Thus is investment rewarded—with Christian transcendence or, at least, an old-fashioned Assumption. This might be sweet. Or it might be a very cynical reduction of the primary impulse to religious faith." The review aggregator Rotten Tomatoes reported that 95% of critics gave the film a positive review, based on twenty reviews. The critical consensus reads: "The Red Balloon invests the simplest of narratives with spectacular visual inventiveness, making for a singularly wondrous portrait of innocence." Accolades Prix Louis Delluc: Prix Louis Delluc; Albert Lamorisse, 1956. Cannes Film Festival: Palme d'Or du court métrage/Golden Palm; Best Short Film, Albert Lamorisse, 1956. British Academy of Film and Television Arts: BAFTA Award; Special Award, France, 1957. Academy Awards: Oscar; Best Writing, Original Screenplay, Albert Lamorisse, 1957. National Board of Review: Top Foreign Films, 1957. Legacy In 1960, Lamorisse released a second film, Stowaway in the Sky, which also starred Pascal and was a spiritual successor to the film. Bob Godfrey's and Zlatko Grgic's 1979 animated film Dream Doll has a very similar plot and ending to the film, except instead of a boy being obsessed with a red balloon, the protagonist is a man obsessed with an inflatable nude woman. A stage adaptation by Anthony Clark was performed at the Royal National Theatre in 1996. Don Hertzfeldt's 1997 short film Billy's Balloon, which also showed at Cannes, is a parody of the film. The music video for "Son of Sam" by Elliott Smith, from his 2000 album Figure 8, is a direct homage to the film. Hou Hsiao-hsien's 2007 film Flight of the Red Balloon is a direct homage to the film. A boy with a bright red balloon is featured in the epilogue of Damien Chazelle's 2016 musical film La La Land. The Pascal and Sabine restaurant in Asbury Park, New Jersey is named in honor of the film. Guitarist Keith Calmes' album Follow the Red Balloon is named as an homage to the spirit of Pascal and Sabine. In The Simpsons episode "The Crepes of Wrath", Bart returns from France bearing gifts for his family; his gift to Maggie is a red balloon. The red balloon appears in three images (on pages 162 and 163) of Jacques Tardi's Du Rififi à Menilmontant (Casterman, 2024), where private investigator Nestor Burma perambulates in the 20ème arrondissement during Christmas season, 1957. This is an original story by Tardi.
Merchandise Home media The film was first released on VHS by Embassy Home Entertainment in 1984. A laserdisc of it was later released by The Criterion Collection in 1986, and was produced by Criterion, Janus Films, and Voyager Press. Included in it was Lamorisse's award-winning short White Mane (1953). A DVD version became available in 2008, and a Blu-ray version was released in the United Kingdom on January 18, 2010; it has now been confirmed as region-free. Book A tie-in book was first published by Doubleday Books (now part of Penguin Random House) in 1957, using black-and-white and color stills from the film, with added prose. It was highly acclaimed and went on to be named a New York Times Best Illustrated Children's Book of the Year. Lamorisse was credited as its sole author. References External links The Red Balloon at Janus Films (official web site) The Red Balloon information site and DVD/Blu-ray review at DVD Beaver (includes images) Le Ballon rouge at Cinefeed 1950s French-language films 1950s fantasy comedy-drama films French fantasy comedy-drama films French comedy-drama short films Balloons 1950s children's fantasy films Films directed by Albert Lamorisse Films set in the 1950s Films shot in Paris Films whose writer won the Best Original Screenplay Academy Award Louis Delluc Prize winners Short Film Palme d'Or winners 1956 comedy-drama films 1956 films 1950s French films Films scored by Maurice Le Roux
The Red Balloon
[ "Chemistry" ]
2,593
[ "Balloons", "Fluid dynamics" ]
1,572,316
https://en.wikipedia.org/wiki/Off-balance-sheet
In accounting, "off-balance-sheet" (OBS), or incognito leverage, usually describes an asset, debt, or financing activity not on the company's balance sheet. Total return swaps are an example of an off-balance-sheet item. Some companies may have significant amounts of off-balance-sheet assets and liabilities. For example, financial institutions often offer asset management or brokerage services to their clients. The assets managed or brokered as part of these offered services (often securities) usually belong to the individual clients directly or in trust, although the company provides management, depository or other services to the client. The company itself has no direct claim to the assets, so it does not record them on its balance sheet (they are off-balance-sheet assets), while it usually has some basic fiduciary duties with respect to the client. Financial institutions may report off-balance-sheet items in their accounting statements formally, and may also refer to "assets under management", a figure that may include on- and off-balance-sheet items. Under previous accounting rules both in the United States (U.S. GAAP) and internationally (IFRS), operating leases were off-balance-sheet financing. Under current accounting rules (ASC 842, IFRS 16), operating leases are on the balance sheet. Financial obligations of unconsolidated subsidiaries (because they are not wholly owned by the parent) may also be off-balance-sheet. Such obligations were part of the accounting fraud at Enron. The formal accounting distinction between on- and off-balance-sheet items can be quite detailed and will depend to some degree on management judgments, but in general terms, an item should appear on the company's balance sheet if it is an asset or liability that the company owns or is legally responsible for; uncertain assets or liabilities must also meet tests of being probable, measurable and meaningful. For example, a company that is being sued for damages would not include the potential legal liability on its balance sheet until a legal judgment against it is likely and the amount of the judgment can be estimated; if the amount at risk is small, it may not appear on the company's accounts until a judgment is rendered. Differences between on and off balance sheets Traditionally, banks lend to borrowers under tight lending standards, keep loans on their balance sheets and retain credit risk—the risk that borrowers will default (be unable to repay interest and principal as specified in the loan contract). In contrast, securitization enables banks to remove loans from balance sheets and transfer the credit risk associated with those loans. Therefore, two types of items are of interest: on balance sheet and off balance sheet. The former is represented by traditional loans, since banks indicate loans on the asset side of their balance sheets. However, securitized loans are represented off the balance sheet, because securitization involves selling the loans to a third party (the loan originator and the borrower being the first two parties). Banks disclose details of securitized assets only in notes to their financial statements. Banking example A bank may have substantial sums in off-balance-sheet accounts, and the distinction between these accounts may not seem obvious. For example, when a bank has a customer who deposits $1 million in a regular bank deposit account, the bank has a $1 million liability. 
If the customer chooses to transfer the deposit to a money market mutual fund account sponsored by the same bank, the $1 million would not be a liability of the bank, but an amount held in trust for the client (formally as shares or units in a form of collective fund). If the funds are used to purchase stock, the stock is similarly not owned by the bank, and does not appear as an asset or liability of the bank. If the client subsequently sells the stock and deposits the proceeds in a regular bank account, these would now again appear as a liability of the bank. As an example, UBS had CHF 60.31 billion in undrawn irrevocable credit facilities off its balance sheet in 2008 (US$60.37 billion). Citibank had US$960 billion in off-balance-sheet assets in 2010, which amounted to 6% of the GDP of the United States. References External links Off-Balance-Sheet Entities: The Good, The Bad And The Ugly – Investopedia Depository Institutions: Off-Balance-Sheet Items – Federal Reserve Accounting systems
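To make the on/off-balance-sheet distinction in the banking example above concrete, here is a toy Python sketch; the class and figures are hypothetical illustrations, not an accounting system or the cited banks' actual books:

```python
# Toy model of the banking example: customer deposits sit on the balance
# sheet as liabilities, while client assets held in trust (e.g., money
# market fund units) are tracked off the balance sheet.
class Bank:
    def __init__(self):
        self.deposit_liabilities = {}   # on-balance-sheet
        self.client_trust_assets = {}   # off-balance-sheet

    def take_deposit(self, client, amount):
        self.deposit_liabilities[client] = (
            self.deposit_liabilities.get(client, 0) + amount)

    def move_to_money_market(self, client, amount):
        # The bank's liability is extinguished; the money becomes a
        # client asset held in trust, so it leaves the balance sheet.
        self.deposit_liabilities[client] -= amount
        self.client_trust_assets[client] = (
            self.client_trust_assets.get(client, 0) + amount)

bank = Bank()
bank.take_deposit("customer", 1_000_000)
print(bank.deposit_liabilities)   # {'customer': 1000000} (on the books)
bank.move_to_money_market("customer", 1_000_000)
print(bank.deposit_liabilities)   # {'customer': 0}
print(bank.client_trust_assets)   # {'customer': 1000000} (off the books)
```

Selling the fund units and redepositing the proceeds would simply reverse the move, returning the amount to the bank's liabilities, mirroring the stock example above.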
Off-balance-sheet
[ "Technology" ]
922
[ "Information systems", "Accounting systems" ]
1,572,317
https://en.wikipedia.org/wiki/HD%20150706
HD 150706 is a star with an orbiting exoplanet in the northern constellation of Ursa Minor. It is located 92 light years away from the Sun, based on parallax measurements. At that distance, it is not visible to the unaided eye. However, with an apparent visual magnitude of 7.02, it is an easy target for binoculars. It is located only about 10° from the northern celestial pole, so it is always visible in the northern hemisphere except near the equator. Likewise, it is never visible in most of the southern hemisphere. The star is drifting closer to the Sun with a radial velocity of −17.2 km/s. The Sun-like spectrum of HD 150706 marks it as a G-type main-sequence star with a stellar classification of G0V. It has a similar mass, radius, and metallicity to the Sun. The star is radiating 1.18 times the luminosity of the Sun from its photosphere at an effective temperature of 5,921 K. It displays magnetic activity in its chromosphere in the form of star spots. Age estimates are poorly bounded, ranging from 1.16 up to 5.1 billion years. Based on an infrared excess, a dusty debris disk is orbiting the star. There is a hole in the center of this disk with a radius of . It may be kept free of dust by a planetary system. Exoplanet The existence of an exoplanet orbiting this star was announced at the Scientific Frontiers in Research on Extrasolar Planets conference in 2002. The claimed planet had a minimum mass equal to the mass of Jupiter and was thought to be located in an elliptical orbit with a period of 264 days. However, independent measurements of the star failed to confirm the existence of this planet. A different planet was discovered in the system in 2012; this Jupiter-twin completes one orbit in roughly 16 years. Its eccentricity and orbit are very poorly constrained. In 2023, the inclination and true mass of HD 150706 b were determined via astrometry, and its orbit was revised, finding a substantially wider but still poorly constrained orbit with a period of about 36 years. See also HD 149143 List of extrasolar planets References External links G-type main-sequence stars Circumstellar disks Planetary systems with one confirmed planet Ursa Minor BD+80 0519 150706 080902 0632
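As a quick illustration of how a parallax measurement translates into a distance figure like the 92 light years quoted above, here is a generic Python sketch; the formula d = 1/p is standard astrometry, and the sample parallax is an illustrative value consistent with the quoted distance, not this star's published catalogue entry:

```python
# Distance from trigonometric parallax: d [parsecs] = 1 / p [arcseconds].
LY_PER_PARSEC = 3.2616  # light years in one parsec

def distance_ly(parallax_mas: float) -> float:
    """Convert a parallax in milliarcseconds to a distance in light years."""
    parsecs = 1000.0 / parallax_mas
    return parsecs * LY_PER_PARSEC

# A parallax near 35.5 mas corresponds to roughly 92 light years:
print(round(distance_ly(35.5)))  # 92
```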
HD 150706
[ "Astronomy" ]
494
[ "Ursa Minor", "Constellations" ]
1,572,371
https://en.wikipedia.org/wiki/Allan%20J.%20C.%20Cunningham
Allan Joseph Champneys Cunningham (1842–1928) was a British-Indian mathematician. Biography Born in Delhi, Cunningham was the son of Sir Alexander Cunningham, archaeologist and the founder of the Archaeological Survey of India. He started a military career with the East India Company's Bengal Engineers at a young age. From 1871 to 1881, he was instructor in mathematics at the Indian Institute of Technology Roorkee (IIT Roorkee). Upon returning to the United Kingdom in 1881, he continued teaching at military institutes in Chatham, Dublin and Shorncliffe. He left the army in 1891. He spent the rest of his life studying number theory. He applied his expertise to finding factors of large numbers of the form a^n ± b^n, such as Mersenne numbers (2^n − 1) and Fermat numbers (2^(2^n) + 1), which have b = 1. His work is continued in the Cunningham project. References External links Number Theory Web, Allan Joseph Champneys Cunningham (based on the obituary by A.E. Western). 1842 births 1928 deaths People from Delhi British people in colonial India Mathematicians from British India 19th-century British mathematicians 20th-century British mathematicians Number theorists British East India Company Army officers
Allan J. C. Cunningham
[ "Mathematics" ]
240
[ "Number theorists", "Number theory" ]
1,572,435
https://en.wikipedia.org/wiki/OGLE-TR-132
OGLE-TR-132 is a distant magnitude 15.72 star in the star fields of the constellation Carina. Because of its great distance, about 4,900 light-years, and its location in a crowded field, it was not notable in any way. Because its apparent brightness changes when its planet transits, the star has been given the variable star designation V742 Carinae. The star's spectral type is F. A yellow-white, very metal-rich dwarf star, it is slightly hotter and more luminous than the Sun. Planetary system In 2003 the Optical Gravitational Lensing Experiment (OGLE) detected periodic dimming in the star's light curve, indicating a transiting, planetary-sized object. Since low-mass red dwarfs and brown dwarfs may mimic a planet, radial velocity measurements were necessary to calculate the mass of the body. In 2004 the object was proved to be a new transiting extrasolar planet, OGLE-TR-132b. See also Optical Gravitational Lensing Experiment OGLE-TR-113 Lists of exoplanets References External links F-type main-sequence stars Planetary transit variables Carina (constellation) Planetary systems with one confirmed planet Carinae, V742
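Transit detection of the kind described above rests on simple geometry: the fractional dimming of the star equals the sky-projected area ratio of planet to star. The following Python sketch uses the textbook formula with generic Sun and Jupiter radii rather than this system's measured values:

```python
# Transit depth: fractional drop in flux = (R_planet / R_star) ** 2.
R_SUN_KM = 696_000.0      # nominal solar radius
R_JUPITER_KM = 71_492.0   # nominal Jovian equatorial radius

def transit_depth(r_planet_km: float, r_star_km: float) -> float:
    """Fraction of starlight blocked by a planet crossing the stellar disk."""
    return (r_planet_km / r_star_km) ** 2

# A Jupiter-sized planet crossing a Sun-like star dims it by about 1%,
# which also shows why a low-mass star or brown dwarf of similar radius
# can mimic a planetary transit until radial velocities weigh the body.
print(f"{transit_depth(R_JUPITER_KM, R_SUN_KM):.3%}")  # ~1.055%
```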
OGLE-TR-132
[ "Astronomy" ]
249
[ "Carina (constellation)", "Constellations" ]
1,572,446
https://en.wikipedia.org/wiki/Cunningham%20Project
The Cunningham Project is a collaborative effort started in 1925 to factor numbers of the form b^n ± 1 for b = 2, 3, 5, 6, 7, 10, 11, 12 and large n. The project is named after Allan Joseph Champneys Cunningham, who published the first version of the table together with Herbert J. Woodall. There are three printed versions of the table, the most recent published in 2002, as well as an online version by Samuel Wagstaff. The current limits of the exponents are maintained in the project's tables. Factors of Cunningham number Two types of factors can be derived from a Cunningham number without having to use a factorization algorithm: algebraic factors of binomial numbers (e.g. difference of two squares and sum of two cubes), which depend on the exponent, and aurifeuillean factors, which depend on both the base and the exponent. Algebraic factors From elementary algebra, b^(kn) − 1 is divisible by b^n − 1 for all k, and b^(kn) + 1 is divisible by b^n + 1 for odd k. In addition, b^(2n) − 1 = (b^n − 1)(b^n + 1). Thus, when m divides n, b^m − 1 and b^m + 1 are factors of b^n − 1 if the quotient of n over m is even; only the first number is a factor if the quotient is odd. And b^m + 1 is a factor of b^n + 1, if m divides n and the quotient is odd. In fact, b^n − 1 = ∏_{d|n} Φ_d(b) and b^n + 1 = ∏_{d|2n, d∤n} Φ_d(b), where Φ_d denotes the d-th cyclotomic polynomial. Aurifeuillean factors When the number is of a particular form (the exact expression varies with the base), aurifeuillean factorization may be used, which gives a product of two or three numbers. For the Cunningham project bases, the aurifeuillean factorization expresses the number as a product of factors conventionally written F, L and M. Let b = s^2 × k with k squarefree; if one of the following conditions holds, then Φ_n(b) has an aurifeuillean factorization: (i) k ≡ 1 (mod 4) and n ≡ k (mod 2k), or (ii) k ≡ 2, 3 (mod 4) and n ≡ 2k (mod 4k). Other factors Once the algebraic and aurifeuillean factors are removed, the other factors of b^n ± 1 are always of the form 2kn + 1, since the factors of b^n − 1 are all factors of Φ_n(b), and the factors of b^n + 1 are all factors of Φ_(2n)(b). When n is prime, both algebraic and aurifeuillean factors are not possible, except the trivial factors (b − 1 for b^n − 1 and b + 1 for b^n + 1). For Mersenne numbers, the trivial factors are not possible for prime n, so all factors are of the form 2kn + 1. In general, all factors of (b^n − 1)/(b − 1) are of the form 2kn + 1, where b ≥ 2 and n is prime, except when n divides b − 1, in which case (b^n − 1)/(b − 1) is divisible by n itself. Cunningham numbers of the form b^n − 1 can only be prime if b = 2 and n is prime, assuming that n ≥ 2; these are the Mersenne numbers. Numbers of the form b^n + 1 can only be prime if b is even and n is a power of 2, again assuming n ≥ 2; these are the generalized Fermat numbers, which are Fermat numbers when b = 2. Any factor of a Fermat number 2^(2^n) + 1 is of the form k·2^(n+2) + 1. Notation b^n − 1 is denoted as b,n−. Similarly, b^n + 1 is denoted as b,n+. When dealing with numbers of the form required for aurifeuillean factorization, b,nL and b,nM are used to denote L and M in the products above. References to b,n− and b,n+ are to the number with all algebraic and aurifeuillean factors removed. For example, Mersenne numbers are of the form 2,n− and Fermat numbers are of the form 2,2^n+; the number Aurifeuille factored in 1871 was the product of 2,58L and 2,58M.
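The divisibility rules and the 2kn + 1 form above lend themselves to a quick computational illustration. The following Python sketch (illustrative only, not the project's actual tooling) strips the algebraic factors from b^n − 1 and then searches for the remaining factors among candidates of the form 2kn + 1:

```python
from math import gcd

def algebraic_factors(b, n):
    """Algebraic divisors of b^n - 1 per the identities above:
    b^m - 1 divides b^n - 1 whenever m divides n, and
    b^m + 1 divides b^n - 1 whenever the quotient n/m is even."""
    factors = []
    for m in range(1, n):
        if n % m == 0:
            factors.append(b**m - 1)
            if (n // m) % 2 == 0:
                factors.append(b**m + 1)
    return factors

def primitive_part(b, n):
    """Divide b^n - 1 by everything it shares with its algebraic factors."""
    remainder = b**n - 1
    for f in algebraic_factors(b, n):
        g = gcd(remainder, f)
        while g > 1:
            remainder //= g
            g = gcd(remainder, g)
    return remainder

def factors_2kn_plus_1(b, n, k_limit=100_000):
    """For prime n, trial-divide the primitive part of b^n - 1 by
    candidates of the form 2kn + 1, as predicted above."""
    target = primitive_part(b, n)
    found = []
    for k in range(1, k_limit + 1):
        if target == 1:
            break
        candidate = 2 * k * n + 1
        while target % candidate == 0:
            found.append(candidate)
            target //= candidate
    return found, target

# 2^11 - 1 = 2047 = 23 * 89, and both factors are congruent to 1 mod 22:
print(factors_2kn_plus_1(2, 11))  # ([23, 89], 1)
```

Here 23 = 2·1·11 + 1 and 89 = 2·4·11 + 1, matching the 2kn + 1 form; a leftover value greater than 1 would simply indicate a factor beyond the search limit.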
See also Cunningham number ECMNET and NFS@Home, two collaborations working for the Cunningham project References External links Cunningham project homepage Factorizations of b^n ± 1, b = 2, 3, 5, 6, 7, 10, 11, 12 Up to High Powers, second edition Factorizations of b^n ± 1, b = 2, 3, 5, 6, 7, 10, 11, 12 Up to High Powers, third edition Main table of The Cunningham project Older main table of The Cunningham project Main table of The third edition of the Cunningham book Machine-readable Cunningham tables The Cunningham Project Brent-Montgomery-te Riele table (Cunningham tables for higher bases (bases 13 ≤ b ≤ 99, perfect powers excluded, since a power of b^n is also a power of b)) Online factor collection Cunningham project on Prime Wiki Cunningham project on PrimePages Number theory
Cunningham Project
[ "Mathematics" ]
1,033
[ "Discrete mathematics", "Number theory" ]
1,572,523
https://en.wikipedia.org/wiki/On-board%20diagnostics
On-board diagnostics (OBD) is a term referring to a vehicle's self-diagnostic and reporting capability. In the United States, this capability is a requirement to comply with federal emissions standards to detect failures that may increase the vehicle tailpipe emissions to more than 150% of the standard to which it was originally certified. OBD systems give the vehicle owner or repair technician access to the status of the various vehicle sub-systems. The amount of diagnostic information available via OBD has varied widely since its introduction in the early 1980s versions of onboard vehicle computers. Early versions of OBD would simply illuminate a tell-tale light if a problem was detected, but would not provide any information as to the nature of the problem. Modern OBD implementations use a standardized digital communications port to provide real-time data and diagnostic trouble codes which allow malfunctions within the vehicle to be rapidly identified. History 1968: Volkswagen introduces the first on-board computer system in its fuel-injected Type 3 models. This system is entirely analog with no diagnostic capabilities. 1975: Bosch and Bendix EFI systems are adopted by major automotive manufacturers to improve tailpipe (exhaust) emissions. These systems are also analog, though some provide rudimentary diagnostic capability through factory tools, such as the Kent Moore J-25400, compatible with the Datsun 280Z, and the Cadillac Seville. 1980: General Motors introduces the first data link on their 1980 Cadillac Eldorado and Seville models. Diagnostic Trouble Codes (DTCs) are displayed through the electronic climate control system's digital readout when in diagnostic mode. 1981: General Motors introduces its "Computer Command Control" system on all US passenger vehicles for model year 1981. Included in this system is a proprietary 5-pin ALDL that interfaces with the Engine Control Module (ECM) to initiate a diagnostic request and provide a serial data stream. The protocol communicates at 160 baud with pulse-width modulation (PWM) signaling and monitors all engine management functions. It reports real-time sensor data, component overrides, and Diagnostic Trouble Codes. The specification for this link is defined by GM's Emissions Control System Project Center document XDE-5024B. 1982: RCA defines an analog STE/ICE (simplified test equipment for internal combustion engines) vehicle diagnostic standard for the US Army, used in the CUCV, M60 tank, and other military vehicles of the era. 1986: General Motors introduces an upgraded version of the ALDL protocol, which communicates at 8192 baud with half-duplex UART signaling on some models. 1988: The California Air Resources Board (CARB) requires that all new vehicles sold in California from 1988 onward have some basic OBD capability (such as detecting problems with fuel metering and exhaust gas recirculation). These requirements are generally referred to as "OBD-I", though this name is a retronym applied after the introduction of OBD-II. The data link connector and its position are not standardized, nor is the data protocol. The Society of Automotive Engineers (SAE) recommends a standardized diagnostic connector and set of diagnostic test signals. ~1994: Motivated by a desire for a state-wide emissions testing program, the CARB issues the OBD-II specification and mandates that it be adopted for all cars sold in California starting in model year 1996 (see CCR Title 13 Section 1968.1 and 40 CFR Part 86 Section 86.094).
The DTCs and connectors suggested by the SAE are incorporated into this specification. 1996: The OBD-II specification is made mandatory for all passenger cars and petrol-powered light trucks with a gross vehicle weight rating less than in the United States. The OBD-II specification is also made mandatory for all petrol-powered vehicles with California emissions with a gross vehicle weight rating up to . 1997: The OBD-II specification is made mandatory for California emissions diesel-engined vehicles with a gross vehicle weight rating up to . 2001: The European Union makes EOBD mandatory for all petrol vehicles sold in the European Union, starting in MY2001 (see European emission standards Directive 98/69/EC). 2004: The European Union makes EOBD mandatory for all diesel vehicles sold in the European Union. All petrol-powered vehicles in the United States with a gross vehicle weight rating of up to are required to have OBD-II. 2006: All vehicles manufactured in Australia and New Zealand are required to be OBD-II compliant after January 1, 2006. All vehicles in the United States of gross vehicle weight rating and under are required to have OBD-II. 2007: All California emissions vehicles over gross vehicle weight rating are required to support EMD/EMD+ or OBD-II. 2008: All cars sold in the United States are required to use the ISO 15765-4 signaling standard (a variant of the Controller Area Network (CAN) bus). 2008: Certain light vehicles in China are required by the Environmental Protection Administration Office to implement OBD (standard GB18352) by July 1, 2008. Some regional exemptions may apply. 2010: Start of required phase-in of the OBD-II specification to all vehicles with a gross vehicle weight rating of and above; the phase-in was completed by the 2013 model year. Vehicles that did not have OBD-II during this time period were required to have EMD/EMD+. Standard interfaces ALDL GM's ALDL (Assembly Line Diagnostic Link) is sometimes referred to as a predecessor to, or a manufacturer's proprietary version of, an OBD-I diagnostic starting in 1981. This interface was made in different varieties and changed with powertrain control modules (PCM, also called ECM or ECU). Different versions had slight differences in pin-outs and baud rates. Earlier versions used a 160 baud rate, while later versions went up to 8192 baud and used bi-directional communications to the PCM. OBD-I The regulatory intent of OBD-I was to encourage auto manufacturers to design reliable emission control systems that remain effective for the vehicle's "useful life". The hope was that by forcing annual emissions testing for California starting in 1988, and denying registration to vehicles that did not pass, drivers would tend to purchase vehicles that would more reliably pass the test. OBD-I was largely unsuccessful, as the means of reporting emissions-specific diagnostic information was not standardized. Technical difficulties with obtaining standardized and reliable emissions information from all vehicles led to an inability to implement the annual testing program effectively. The Diagnostic Trouble Codes (DTCs) of OBD-I vehicles can usually be found without an expensive scan tool. Each manufacturer used their own Diagnostic Link Connector (DLC), DLC location, DTC definitions, and procedure to read the DTCs from the vehicle. DTCs from OBD-I cars are often read through the blinking patterns of the 'Check Engine Light' (CEL) or 'Service Engine Soon' (SES) light.
By connecting certain pins of the diagnostic connector, the 'Check Engine' light will blink out a two-digit number that corresponds to a specific error condition. The DTCs of some OBD-I cars are interpreted in different ways, however. Cadillac fuel-injected vehicles are equipped with actual onboard diagnostics, providing trouble codes, actuator tests and sensor data through the new digital Electronic Climate Control display. Holding down 'Off' and 'Warmer' for several seconds activates the diagnostic mode without the need for an external scan tool. Some Honda engine computers are equipped with LEDs that light up in a specific pattern to indicate the DTC. General Motors, some 1989–1995 Ford vehicles (DCL), and some 1989–1995 Toyota/Lexus vehicles have a live sensor data stream available; however, many other OBD-I equipped vehicles do not. OBD-I vehicles have fewer DTCs available than OBD-II equipped vehicles. OBD-1.5 OBD 1.5 refers to a partial implementation of OBD-II which General Motors used on some vehicles in 1994, 1995, and 1996. (GM did not use the term OBD 1.5 in the documentation for these vehicles — they simply have an OBD and an OBD-II section in the service manual.) For example, the 1994–1995 model year Corvettes have one post-catalyst oxygen sensor (although they have two catalytic converters), and have a subset of the OBD-II codes implemented. This hybrid system was present on GM B-body cars (the Chevrolet Caprice, Impala, and Buick Roadmaster) for the 1994–1995 model years, H-body cars for 1994–1995, W-body cars (Buick Regal, Chevrolet Lumina (1995 only), Chevrolet Monte Carlo (1995 only), Pontiac Grand Prix, Oldsmobile Cutlass Supreme) for 1994–1995, L-body (Chevrolet Beretta/Corsica) for 1994–1995, Y-body (Chevrolet Corvette) for 1994–1995, on the F-body (Chevrolet Camaro and Pontiac Firebird) for 1995 and on the J-Body (Chevrolet Cavalier and Pontiac Sunfire) and N-Body (Buick Skylark, Oldsmobile Achieva, Pontiac Grand Am) for 1995 and 1996, and also for North American-delivered 1994–1995 Saab vehicles with the naturally aspirated 2.3. The pinout for the ALDL connection on these cars is as follows: For ALDL connections, pin 9 is the data stream, pins 4 and 5 are ground, and pin 16 is the battery voltage. An OBD 1.5 compatible scan tool is required to read codes generated by OBD 1.5. Additional vehicle-specific diagnostic and control circuits are also available on this connector. For instance, on the Corvette there are interfaces for the Class 2 serial data stream from the PCM, the CCM diagnostic terminal, the radio data stream, the airbag system, the selective ride control system, the low tire pressure warning system, and the passive keyless entry system. OBD 1.5 was also used in the Ford Scorpio from 1995. OBD-II OBD-II is an improvement over OBD-I in both capability and standardization. The OBD-II standard specifies the type of diagnostic connector and its pinout, the electrical signalling protocols available, and the messaging format. It also provides a candidate list of vehicle parameters to monitor along with how to encode the data for each. There is a pin in the connector that provides power for the scan tool from the vehicle battery, which eliminates the need to connect a scan tool to a power source separately. However, some technicians might still connect the scan tool to an auxiliary power source to protect data in the unusual event that a vehicle experiences a loss of electrical power due to a malfunction.
Finally, the OBD-II standard provides an extensible list of DTCs. As a result of this standardization, a single device can query the on-board computer(s) in any vehicle. The OBD-II connector itself comes in two variants, type A and type B (described below). OBD-II standardization was prompted by emissions requirements, and though only emission-related codes and data are required to be transmitted through it, most manufacturers have made the OBD-II Data Link Connector the only one in the vehicle through which all systems are diagnosed and programmed. OBD-II Diagnostic Trouble Codes are 4-digit, preceded by a letter: P for powertrain (engine and transmission), B for body, C for chassis, and U for network. OBD-II diagnostic connector The OBD-II specification provides for a standardized hardware interface — the female 16-pin (2x8) J1962 connector, where type A is used for 12-volt vehicles and type B for 24-volt vehicles. Unlike the OBD-I connector, which was sometimes found under the bonnet of the vehicle, the OBD-II connector is required to be within of the steering wheel (unless an exemption is applied for by the manufacturer, in which case it is still somewhere within reach of the driver). SAE J1962 defines the pinout of the connector as: The assignment of unspecified pins is left to the vehicle manufacturer's discretion. EOBD The European on-board diagnostics (EOBD) regulations are the European equivalent of OBD-II, and apply to all passenger cars of category M1 (with no more than 8 passenger seats and a Gross Vehicle Weight rating of or less) first registered within EU member states since January 1, 2001 for petrol-engined cars and since January 1, 2004 for diesel-engined cars. For newly introduced models, the regulation dates applied a year earlier – January 1, 2000 for petrol and January 1, 2003, for diesel. For passenger cars with a Gross Vehicle Weight rating of greater than 2500 kg and for light commercial vehicles, the regulation dates applied from January 1, 2002, for petrol models, and January 1, 2007, for diesel models. The technical implementation of EOBD is essentially the same as OBD-II, with the same SAE J1962 diagnostic link connector and signal protocols being used. With Euro V and Euro VI emission standards, EOBD emission thresholds are lower than those of the previous Euro III and IV. EOBD fault codes Each of the EOBD fault codes consists of five characters: a letter, followed by four numbers. The letter refers to the system being interrogated, e.g. Pxxxx would refer to the powertrain system. The next character would be a 0 if the code complies with the EOBD standard, so it should look like P0xxx. The next character would refer to the subsystem. P00xx – Fuel and Air Metering and Auxiliary Emission Controls. P01xx – Fuel and Air Metering. P02xx – Fuel and Air Metering (Injector Circuit). P03xx – Ignition System or Misfire. P04xx – Auxiliary Emissions Controls. P05xx – Vehicle Speed Controls and Idle Control System. P06xx – Computer Output Circuit. P07xx – Transmission. P08xx – Transmission. The following two characters refer to the individual fault within each subsystem. EOBD2 The term "EOBD2" is marketing speak used by some vehicle manufacturers to refer to manufacturer-specific features that are not actually part of the OBD or EOBD standard. In this case, "E" stands for Enhanced. JOBD JOBD is a version of OBD-II for vehicles sold in Japan.
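The letter-plus-four-digit codes described above travel on the wire as two raw bytes. Below is a minimal Python sketch of the commonly documented SAE J2012/ISO 15031-6 bit layout; it is an illustrative decoder, not a complete implementation of the standard:

```python
# Decode a Diagnostic Trouble Code from its two raw bytes: the top two
# bits of the first byte select the system letter, the next two bits give
# the first digit (0-3), and the remaining three nibbles read as hex.
SYSTEM_LETTER = {0b00: "P", 0b01: "C", 0b10: "B", 0b11: "U"}

def decode_dtc(high: int, low: int) -> str:
    letter = SYSTEM_LETTER[(high >> 6) & 0b11]
    first_digit = (high >> 4) & 0b11
    return f"{letter}{first_digit}{high & 0x0F:X}{low:02X}"

print(decode_dtc(0x01, 0x33))  # "P0133" -- a powertrain (P01xx) code
print(decode_dtc(0xC1, 0x23))  # "U0123" -- a network code
```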
ADR 79/01 & 79/02 (Australian OBD standard) The ADR 79/01 (Vehicle Standard (Australian Design Rule 79/01 – Emission Control for Light Vehicles) 2005) standard is the Australian equivalent of OBD-II. It applies to all vehicles of category M1 and N1 with a Gross Vehicle Weight rating of or less, registered from new within Australia and produced since January 1, 2006 for petrol-engined cars and since January 1, 2007 for diesel-engined cars. For newly introduced models, the regulation dates applied a year earlier – January 1, 2005 for petrol and January 1, 2006, for diesel. The ADR 79/01 standard was supplemented by the ADR 79/02 standard, which imposed tighter emissions restrictions, applicable to all vehicles of class M1 and N1 with a Gross Vehicle Weight rating of 3500 kg or less, from July 1, 2008, for new models, and July 1, 2010, for all models. The technical implementation of this standard is essentially the same as OBD-II, with the same SAE J1962 diagnostic link connector and signal protocols being used. EMD/EMD+ In North America, EMD and EMD+ are on-board diagnostic systems that were used on vehicles with a gross vehicle weight rating of or more between the 2007 and 2012 model years if those vehicles did not already implement OBD-II. EMD was used on California emissions vehicles between model years 2007 and 2009 that did not already have OBD-II. EMD was required to monitor fuel delivery, exhaust gas recirculation, the diesel particulate filter (on diesel engines), and emissions-related powertrain control module inputs and outputs for circuit continuity, data rationality, and output functionality. EMD+ was used on model year 2010-2012 California and Federal petrol-engined vehicles with a gross vehicle weight rating of over ; it added the ability to monitor nitrogen oxide catalyst performance. EMD and EMD+ are similar to OBD-I in logic but use the same SAE J1962 data connector and CAN bus as OBD-II systems. OBD-II signal protocols Five signaling protocols are permitted with the OBD-II interface. Most vehicles implement only one of the protocols. It is often possible to deduce the protocol used based on which pins are present on the J1962 connector: SAE J1850 PWM (pulse-width modulation — 41.6 kbit/s, standard of the Ford Motor Company): pin 2: Bus+; pin 10: Bus–; high voltage is +5 V; message length is restricted to 12 bytes, including CRC; employs a multi-master arbitration scheme called 'Carrier Sense Multiple Access with Non-Destructive Arbitration' (CSMA/NDA). SAE J1850 VPW (variable pulse width — 10.4/41.6 kbit/s, standard of General Motors): pin 2: Bus+; bus idles low; high voltage is +7 V; decision point is +3.5 V; message length is restricted to 12 bytes, including CRC; employs CSMA/NDA. ISO 9141-2: this protocol has an asynchronous serial data rate of 10.4 kbit/s. It is somewhat similar to RS-232; however, the signal levels are different, and communications happen on a single, bidirectional line without additional handshake signals. ISO 9141-2 is primarily used in Chrysler, European, and Asian vehicles. Pin 7: K-line; pin 15: L-line (optional); UART signaling; the K-line idles high, with a 510 ohm resistor to Vbatt; the active/dominant state is driven low with an open-collector driver; maximum message length is 260 bytes, with a data field of at most 255 bytes. ISO 14230 KWP2000 (Keyword Protocol 2000): pin 7: K-line; pin 15: L-line (optional); physical layer identical to ISO 9141-2; data rate 1.2 to 10.4 kbaud; a message may contain up to 255 bytes in the data field. ISO 15765 CAN (250 kbit/s or 500 kbit/s).
The CAN protocol was developed by Bosch for automotive and industrial control. Unlike other OBD protocols, variants are widely used outside of the automotive industry. CAN was not permitted as an OBD-II protocol for U.S. vehicles prior to model year 2003, but as of 2008 all vehicles sold in the US are required to implement CAN as one of their signaling protocols.
pin 6: CAN High
pin 14: CAN Low

All OBD-II pinouts use the same connector, but different pins are used, with the exception of pin 4 (battery ground) and pin 16 (battery positive).

OBD-II diagnostic data available
OBD-II provides access to data from the engine control unit (ECU) and offers a valuable source of information when troubleshooting problems inside a vehicle. The SAE J1979 standard defines a method for requesting various diagnostic data and a list of standard parameters that might be available from the ECU. The various available parameters are addressed by "parameter identification numbers" or PIDs, which are defined in J1979. For a list of basic PIDs, their definitions, and the formulas to convert raw OBD-II output to meaningful diagnostic units, see OBD-II PIDs (a short decoding sketch follows the service list below). Manufacturers are not required to implement all PIDs listed in J1979, and they are allowed to include proprietary PIDs that are not listed. The PID request and data retrieval system gives access to real-time performance data as well as flagged DTCs. For a list of generic OBD-II DTCs suggested by the SAE, see Table of OBD-II Codes. Individual manufacturers often enhance the OBD-II code set with additional proprietary DTCs.

Mode of operation/OBD services
Here is a basic introduction to the OBD communication protocol according to ISO 15031. In SAE J1979 these "modes" were renamed to "services", starting in 2003.

Service / Mode $01 shows current sensor live data from PIDs ("Parameter IDs"). See OBD-II PIDs#Service_01 for an extensive list.

Service / Mode $02 makes Freeze Frame data accessible via the same PIDs. See OBD-II PIDs#Service_02 for a list.

Service / Mode $03 lists the emission-related "confirmed" diagnostic trouble codes stored. It either displays numeric, 4-digit codes identifying the faults or maps them to a letter (P, B, U, C) plus 4 digits. See #OBD-II_diagnostic_trouble_codes.

Service / Mode $04 is used to clear emission-related diagnostic information. This includes clearing the stored pending/confirmed DTCs and Freeze Frame data.

Service / Mode $05 displays the oxygen sensor monitor screen and the test results gathered about the oxygen sensor. The following test values are available for diagnostics:
$01 Rich-to-lean O2 sensor threshold voltage
$02 Lean-to-rich O2 sensor threshold voltage
$03 Low sensor voltage threshold for switch time measurement
$04 High sensor voltage threshold for switch time measurement
$05 Rich-to-lean switch time in ms
$06 Lean-to-rich switch time in ms
$07 Minimum voltage for test
$08 Maximum voltage for test
$09 Time between voltage transitions in ms
See OBD-II PIDs#Service_05 for a list.

Service / Mode $06 is a request for on-board monitoring test results for continuously and non-continuously monitored systems. There are typically a minimum value, a maximum value, and a current value for each non-continuous monitor.

Service / Mode $07 is a request for emission-related diagnostic trouble codes detected during the current or last completed driving cycle. It enables the external test equipment to obtain "pending" diagnostic trouble codes detected during the current or last completed driving cycle for emission-related components/systems. This is used by service technicians after a vehicle repair, and after clearing diagnostic information, to see test results after a single driving cycle and determine if the repair has fixed the problem. See #OBD-II_diagnostic_trouble_codes.

Service / Mode $08 could enable the off-board test device to control the operation of an on-board system, test, or component.

Service / Mode $09 is used to retrieve vehicle information. Among others, the following information is available:
VIN (Vehicle Identification Number): vehicle ID
CALID (Calibration Identification): ID for the software installed on the ECU
CVN (Calibration Verification Number): number used to verify the integrity of the vehicle software. The manufacturer is responsible for determining the method of calculating CVN(s), e.g. using a checksum.
In-use performance counters. For a petrol engine: catalyst, primary oxygen sensor, evaporative system, EGR system, VVT system, secondary air system, and secondary oxygen sensor. For a diesel engine: NMHC catalyst, NOx reduction catalyst, NOx absorber, particulate matter filter, exhaust gas sensor, EGR system, VVT system, boost pressure control, and fuel system.
See OBD-II PIDs#Service_09 for an extensive list.

Service / Mode $0A lists the emission-related "permanent" diagnostic trouble codes stored. As per CARB, any diagnostic trouble code that is commanding the MIL on and is stored in non-volatile memory shall be logged as a permanent fault code. See #OBD-II_diagnostic_trouble_codes.
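The J1979 conversions mentioned above are simple arithmetic on the returned data bytes. Here is a minimal Python sketch, assuming the widely documented formulas for three common PIDs; the response bytes are invented example values, not output from a real vehicle.

```python
# Minimal J1979 PID decoding sketch (assumed, widely documented formulas).
def decode_pid(pid: int, data: bytes) -> float:
    """Convert raw Service $01 data bytes to engineering units."""
    a = data[0]
    if pid == 0x05:                  # engine coolant temperature, degrees C
        return a - 40
    if pid == 0x0C:                  # engine RPM: two data bytes A and B
        return (256 * a + data[1]) / 4
    if pid == 0x0D:                  # vehicle speed, km/h
        return a
    raise ValueError(f"PID 0x{pid:02X} not handled in this sketch")

# Invented example bytes: a reply "41 0C 1A F8" carries A=0x1A, B=0xF8.
print(decode_pid(0x0C, bytes([0x1A, 0xF8])))  # 1726.0 rpm
print(decode_pid(0x05, bytes([0x7B])))        # 83 degrees C
```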
Applications
Various tools are available that plug into the OBD connector to access OBD functions. These range from simple generic consumer-level tools to highly sophisticated OEM dealership tools to vehicle telematics devices.

Hand-held scan tools
A range of rugged hand-held scan tools is available. Simple fault code readers/reset tools are mostly aimed at the consumer level. Professional hand-held scan tools may possess more advanced functions:
Access more advanced diagnostics
Set manufacturer- or vehicle-specific ECU parameters
Access and control other control units, such as air bag or ABS
Real-time monitoring or graphing of engine parameters to facilitate diagnosis or tuning

Mobile device-based tools and analysis
Mobile device applications allow mobile devices such as cell phones and tablets to display and manipulate OBD-II data accessed via USB adaptor cables or Bluetooth adapters plugged into the car's OBD-II connector. Newer devices on the market are equipped with GPS sensors and the ability to transmit vehicle location and diagnostics data over a cellular network. Modern OBD-II devices can therefore be used, for example, to locate vehicles and monitor driving behaviour in addition to reading Diagnostic Trouble Codes (DTCs). Even more advanced devices allow users to reset engine DTCs, effectively turning off warning lights in the dashboard; however, resetting the codes does not address the underlying issues, and in worst-case scenarios this can even lead to engine damage where the source issue is serious and left unattended for long periods.

OBD-II software
An OBD-II software package installed on a computer (Windows, Mac, or Linux) can help diagnose the on-board system, read and erase DTCs, turn off the MIL, show real-time data, and measure vehicle fuel economy. To use OBD-II software, one needs an OBD-II adapter (commonly using Bluetooth, Wi-Fi or USB) plugged into the OBD-II port to enable the vehicle to connect with the computer where the software is installed.
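To make the adapter-to-software handshake concrete, here is a minimal Python sketch of querying an ELM327-compatible adapter (the interpreter IC mentioned in the next section) over a serial port. It assumes the third-party pyserial package; the port name and baud rate are illustrative assumptions, not part of any standard.

```python
# Sketch: request engine RPM from an ELM327-compatible OBD-II adapter.
# Assumes the third-party "pyserial" package and an invented port name.
import serial

def query(ser: serial.Serial, cmd: str) -> str:
    """Send one command and read the reply up to the ELM327 '>' prompt."""
    ser.write((cmd + "\r").encode("ascii"))
    response = b""
    while True:
        chunk = ser.read(1)
        if not chunk or chunk == b">":   # prompt reached or read timed out
            break
        response += chunk
    return response.decode("ascii", errors="replace").strip()

with serial.Serial("/dev/ttyUSB0", 38400, timeout=2) as ser:  # assumed port/baud
    query(ser, "ATZ")            # reset the adapter
    query(ser, "ATE0")           # turn command echo off
    print(query(ser, "010C"))    # Service $01, PID $0C: e.g. "41 0C 1A F8"
    # The two data bytes A and B convert to RPM as (256*A + B) / 4.
```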
PC-based scan tools and analysis platforms
A PC-based OBD analysis tool converts the OBD-II signals to a serial data standard (USB or serial port) used by PCs or Macs. The software then decodes the received data into a visual display. Many popular interfaces are based on the ELM327 or STN OBD interpreter ICs, both of which read all five generic OBD-II protocols. Some adapters now use the J2534 API, allowing them to access OBD-II protocols for both cars and trucks. In addition to the functions of a hand-held scan tool, PC-based tools generally offer:
Large storage capacity for data logging and other functions
Higher-resolution screens than handheld tools
The ability to use multiple software programs, adding flexibility
Identification and clearing of fault codes
Data shown in intuitive graphs and charts
The extent to which a PC tool may access manufacturer- or vehicle-specific ECU diagnostics varies between software products, as it does between hand-held scanners.

Data loggers
Data loggers are designed to capture vehicle data while the vehicle is in normal operation, for later analysis. Data logging uses include:
Engine and vehicle monitoring under normal operation, for diagnosis or tuning.
Some US auto insurance companies offer reduced premiums if OBD-II vehicle data loggers or cameras are installed – and if the driver's behaviour meets requirements. This is a form of auto insurance risk selection.
Monitoring of driver behaviour by fleet vehicle operators.
Analysis of vehicle "black box" data may be performed periodically, automatically transmitted wirelessly to a third party, or retrieved for forensic analysis after an event such as an accident, traffic infringement or mechanical fault.

Emission testing
In the United States, many states now use OBD-II testing instead of tailpipe testing in OBD-II compliant vehicles (1996 and newer). Since OBD-II stores trouble codes for emissions equipment, the testing computer can query the vehicle's on-board computer and verify that there are no emission-related trouble codes and that the vehicle is in compliance with emission standards for the model year in which it was manufactured. In the Netherlands, vehicles from 2006 onward receive a yearly EOBD emission check.

Driver's supplementary vehicle instrumentation
Driver's supplementary vehicle instrumentation is instrumentation installed in a vehicle in addition to that provided by the vehicle manufacturer and intended for display to the driver during normal operation. This is opposed to scanners used primarily for active fault diagnosis, tuning, or hidden data logging. Auto enthusiasts have traditionally installed additional gauges such as manifold vacuum, battery current, etc. The OBD standard interface has enabled a new generation of enthusiast instrumentation accessing the full range of vehicle data used for diagnostics, and derived data such as instantaneous fuel economy. Instrumentation may take the form of dedicated trip computers, carputers, or interfaces to PDAs, smartphones, or a Garmin navigation unit. As a carputer is essentially a PC, the same software could be loaded as for PC-based scan tools and vice versa, so the distinction is only in the reason for use of the software. These enthusiast systems may also include some functionality similar to the other scan tools.

Vehicle telematics
OBD-II information is commonly used by vehicle telematics devices that perform fleet tracking, monitor fuel efficiency, prevent unsafe driving, and support remote diagnostics and Pay-As-You-Drive insurance.
Although originally not intended for the above purposes, commonly supported OBD-II data such as vehicle speed, RPM, and fuel level allow GPS-based fleet tracking devices to monitor vehicle idling times, speeding, and over-revving. By monitoring OBD-II DTCs, a company can know immediately if one of its vehicles has an engine problem and, by interpreting the code, the nature of the problem. OBD-II data can also be used to detect reckless driving in real time, based on the sensor data provided through the OBD port; this detection is done by adding a complex event processing (CEP) engine to the backend and on the client's interface. OBD-II is also monitored to block mobile phones while driving and to record trip data for insurance purposes.

OBD-II diagnostic trouble codes
OBD-II diagnostic trouble codes (DTCs) are five characters long, with the first letter indicating a category and the remaining four characters being a hexadecimal number (a decoding sketch follows this section). The first character, representing the category, can only be one of the following four letters, given here with their associated meanings. (This restriction is due to only two bits of memory being used to indicate the category when DTCs are stored and transmitted.)
P – Powertrain (engine, transmission and ignition)
C – Chassis (includes ABS and brake fluid)
B – Body (includes air conditioning and airbag)
U – Network (wiring bus)
The second character is a number in the range 0–3. (This restriction is again due to memory storage limitations.)
0 – Indicates a generic (SAE-defined) code.
1 – Indicates a manufacturer-specific (OEM) code.
2 – Category dependent: for the 'P' category this indicates a generic (SAE-defined) code; for other categories it indicates a manufacturer-specific (OEM) code.
3 – Category dependent: for the 'P' category this indicates a code that has been 'jointly' defined; for other categories it is reserved for future use.
The third character may denote a particular vehicle system that the fault relates to:
0 – Fuel and air metering and auxiliary emission controls
1 – Fuel and air metering
2 – Fuel and air metering (injector circuit)
3 – Ignition systems or misfires
4 – Auxiliary emission controls
5 – Vehicle speed control and idle control systems
6 – Computer and output circuit
7 – Transmission
8 – Transmission
A–F – Hybrid trouble codes
Finally, the fourth and fifth characters define the exact problem detected.
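To illustrate the two-bits-per-field packing described above, here is a minimal Python sketch that unpacks a raw two-byte DTC, as returned by Service $03, into its five-character form; the example bytes are invented.

```python
# Sketch: unpack a raw two-byte DTC into its five-character form.
# Layout: 2 bits -> category letter, 2 bits -> second character,
# remaining 12 bits -> three hexadecimal digits.
CATEGORY = {0b00: "P", 0b01: "C", 0b10: "B", 0b11: "U"}

def decode_dtc(high: int, low: int) -> str:
    letter = CATEGORY[(high >> 6) & 0b11]   # top two bits of the first byte
    second = (high >> 4) & 0b11             # next two bits, always 0-3
    digits = ((high & 0x0F) << 8) | low     # last three hex digits
    return f"{letter}{second}{digits:03X}"

# Invented example bytes: 0x01, 0x33 decode to the generic code P0133.
print(decode_dtc(0x01, 0x33))  # P0133
```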
Standards documents

SAE standards documents on OBD-II
J1962 – Defines the physical connector used for the OBD-II interface.
J1850 – Defines a serial data protocol. There are 2 variants: 10.4 kbit/s (single wire, VPW) and 41.6 kbit/s (two wire, PWM). Mainly used by US manufacturers, also known as PCI (Chrysler, 10.4K), Class 2 (GM, 10.4K), and SCP (Ford, 41.6K).
J1978 – Defines minimal operating standards for OBD-II scan tools.
J1979 – Defines standards for diagnostic test modes.
J2012 – Defines standards for trouble codes and definitions.
J2178-1 – Defines standards for network message header formats and physical address assignments.
J2178-2 – Gives data parameter definitions.
J2178-3 – Defines standards for network message frame IDs for single-byte headers.
J2178-4 – Defines standards for network messages with three-byte headers.
J2284-3 – Defines 500K CAN physical and data link layer.
J2411 – Describes the GMLAN (single-wire CAN) protocol, used in newer GM vehicles. Often accessible on the OBD connector as pin 1 on newer GM vehicles.

SAE standards documents on HD (heavy duty) OBD
J1939 – Defines a data protocol for heavy-duty commercial vehicles.

ISO standards
ISO 9141: Road vehicles – Diagnostic systems. International Organization for Standardization, 1989.
Part 1: Requirements for interchange of digital information
Part 2: CARB requirements for interchange of digital information
Part 3: Verification of the communication between vehicle and OBD II scan tool
ISO 11898: Road vehicles – Controller area network (CAN). International Organization for Standardization, 2003.
Part 1: Data link layer and physical signalling
Part 2: High-speed medium access unit
Part 3: Low-speed, fault-tolerant, medium-dependent interface
Part 4: Time-triggered communication
ISO 14230: Road vehicles – Diagnostic systems – Keyword Protocol 2000. International Organization for Standardization, 1999.
Part 1: Physical layer
Part 2: Data link layer
Part 3: Application layer
Part 4: Requirements for emission-related systems
ISO 15031: Communication between vehicle and external equipment for emissions-related diagnostics. International Organization for Standardization, 2010.
Part 1: General information and use case definition
Part 2: Guidance on terms, definitions, abbreviations and acronyms
Part 3: Diagnostic connector and related electrical circuits, specification and use
Part 4: External test equipment
Part 5: Emissions-related diagnostic services
Part 6: Diagnostic trouble code definitions
Part 7: Data link security
ISO 15765: Road vehicles – Diagnostics on Controller Area Networks (CAN). International Organization for Standardization, 2004.
Part 1: General information
Part 2: Network layer services (ISO 15765-2)
Part 3: Implementation of unified diagnostic services (UDS on CAN)
Part 4: Requirements for emissions-related systems

Security issues
Researchers at the University of Washington and the University of California examined the security around OBD and found that they were able to gain control over many vehicle components via the interface. Furthermore, they were able to upload new firmware into the engine control units. Their conclusion is that vehicle embedded systems are not designed with security in mind. There have been reports of thieves using specialist OBD reprogramming devices to enable them to steal cars without the use of a key. The primary causes of this vulnerability lie in the tendency for vehicle manufacturers to extend the bus for purposes other than those for which it was designed, and the lack of authentication and authorization in the OBD specifications, which instead rely largely on security through obscurity.

See also
OBD-II PIDs ("Parameter IDs")
Unified Diagnostic Services
Engine control unit
Immobiliser

References
Birnbaum, Ralph and Truglia, Jerry. Getting to Know OBD II. New York, 2000.
SAE International. On-Board Diagnostics for Light and Medium Duty Vehicles Standards Manual. Pennsylvania, 2003.

External links
Directive 98/69/EC of the European Parliament and of the Council of 13 October 1998
National OBD Clearing House, Center for Automotive Science and Technology at Weber State University
United States Environmental Protection Agency OBD information for repair technicians, vehicle owners, and manufacturers

Automotive technologies Industrial computing Vehicle security systems
On-board diagnostics
[ "Technology", "Engineering" ]
7,474
[ "Industrial computing", "Industrial engineering", "Automation" ]
1,572,553
https://en.wikipedia.org/wiki/Shravana
Shravana (Devanagari: श्रवण), also known as Thiruvonam in Tamil and Malayalam (Tamil: திருவோணம், Malayalam: തിരുവോണം), is the 22nd nakshatra (Devanagari: नक्षत्र) or lunar mansion as used in Hindu astronomy, the Hindu calendar and Hindu astrology. It belongs to the constellation Makara (Devanagari: मकर), a legendary sea creature resembling a crocodile, or Capricorn. The name alludes to Shravan, a mythological character who attained repute due to his utmost devotion to his aged and blind parents. Lord Venkateswara of Tirupati and Lord Oppiliappan near Kumbakonam, who married Markandeya Rishi's daughter Bhuvalli, are believed to have been born in this nakshatra in the Bhadrapada maasa. Onam, the biggest festival of Kerala, is celebrated on this nakshatra in the Malayalam month of Chingam. Traditional Hindu given names are determined by which pada (quarter) of a nakshatra the Ascendant/Lagna was in at the time of birth. In the case of Shravana nakshatra, the given name would begin with one of the following syllables:
Khi (Devanagari: खी)
Khu (Devanagari: खू)
Khe (Devanagari: खे)
Kho (Devanagari: खो)

References

Nakshatra
Shravana
[ "Astronomy" ]
314
[ "Nakshatra", "Constellations" ]
1,572,605
https://en.wikipedia.org/wiki/Mehdi%20Golshani
Mehdi Golshani (Persian: مهدی گلشنی, born 1939 in Isfahan, Iran) is a contemporary Iranian theoretical physicist, academic, scholar, philosopher and distinguished professor at Sharif University of Technology. He is also a member of the Iranian Science and Culture Hall of Fame, a senior fellow of the Academy of Sciences of Iran and a founding fellow of the Institute for Studies in Theoretical Physics and Mathematics. He is a former member of the Supreme Council of the Cultural Revolution.

History
He received his B.Sc. in Physics from Tehran University in 1959 and his Ph.D. in Physics, with a specialization in particle physics, in 1969 from the University of California, Berkeley. The title of his doctoral dissertation is "Electron impact excitation of heavily ionized atoms".

Life

Career
Mehdi Golshani is a distinguished lecturer. His main research areas include foundational physics, particle physics, physical cosmology and the philosophical implications of quantum mechanics. He is known as a thinker for his writings on science, religion and their interrelation. Golshani is the founder and chairman of the Faculty of Philosophy of Science at Sharif University of Technology. He is also the director of the Institute of Humanities and Cultural Studies, Tehran, Iran, and a professor in the Physics Department of Sharif University of Technology, as well as a senior fellow of the School of Physics at the Institute for Studies in Theoretical Physics and Mathematics (IPM). He is a member of the American Association of Physics Teachers and the Center for Theology and Natural Science, as well as a senior associate of the International Centre for Theoretical Physics, Trieste, Italy. He is also a member of the Philosophy of Science Association, Michigan, U.S., and the European Society for the Study of Science and Theology. He has been among the winners of the first year of the Templeton Science & Religion course program and also among the former judges of the Templeton Prize. Golshani is a fellow of the Islamic World Academy of Sciences (IAS). He has written numerous books and articles on physics, philosophy of physics, science and religion, as well as science and theology. In most of Golshani's works, there is a clear attempt to help revive the scientific spirit in the Muslim world.

Views

On the foundation of quantum mechanics
He is mainly concerned with the orthodox interpretation of quantum mechanics and possible more realistic alternatives, particularly Bohmian mechanics.

On the interrelationship of science and religion
He is a Muslim scientist and thinker who has deep roots in both science and religion.

On Christianity and the development of modern science
The biblical world view has had a significant impact on the development of science. Professor Mehdi Golshani connotes a connection between belief in the Biblical God and scientific breakthroughs by stating that Copernicus, Kepler, Galileo, Boyle, Newton and many other founders of science were all devout Christians. Western science was largely constructed within the framework of a Christian world view, and was influenced by the following Biblical concepts:

Quotes
"The conception of an omniscient and omnipotent personal God, [w]ho made everything in accordance with a rational plan and purpose, contributed to the notion of a rationally structured creation".
"The notion of a transcendent God, [w]ho exists separate from His creation, served to counter the notion that the physical world, or any part of it, is sacred.
Since the entire physical world is a mere creation, it was thus a fit object of study and transformation".
"Since man was made in the image of God (Gen. 1:26), which included rationality and creativity, it was deemed possible that man could discern the rational structure of the physical universe that God had made".
"The cultural mandate, which appointed man to be God's steward over creation (Gen. 1:28), provided the motivation for studying nature and for applying that study towards practical ends, at the same time glorifying God for His wisdom and goodness".
"In the popular mind, the two greatest historical conflicts between science and religion have been those involving Galileo and Darwin."
"The Galileo affair, in the early 17th century, was a complex dispute, inflamed by politics and personalities. It was primarily a family squabble within Christianity. Two different scientific research programs clashed, each program supported by its own group of Christian scientists. The central issue was the epistemological question of how to determine absolute motion. Should the absolute frame of reference be set by Biblical standards, by Aristotelian philosophy, by mathematical simplicity [...] or by other considerations? The difficulty was that the observational data in themselves can yield information only about relative motion. The question of absolute motion must thus be settled by extra-scientific definitions and considerations. As is now widely recognized, the resolution of this issue depends largely on one's worldview assumptions".
"The conflict precipitated by Darwin concerns primarily origins. How did life, in all its manifold forms, come to be? The dispute is not so much about observations of living things, fossils, geological formations, etc. but how to explain how they came to be. As such, the conflict involves questions concerning the ultimate nature of reality (e.g., can mind be explained entirely in terms of matter?), eschatology (e.g., does man have a non-material soul that survives physical death?) [...] and causation (e.g., does the origin of life require special divine acts?). Again, a central issue is one of epistemology: what role should divine revelation (e.g., the Bible) play in interpreting the results of observational science, in choosing the theories of science [...] and in informing our view of origins, etc.? Here, too, it is clear that this conflict is rooted in a clash of opposing extra-scientific presuppositions".

Works

Books
تحليلى بر ديدگاههاى فلسفى فيزيكدانان معاصر (A Probe into the Philosophical Viewpoints of Contemporary Physicists), in Persian.
علم دینی و علم سکولار (Religious Science and Secular Science), in Persian.
Golshani, Mehdi. Holy Quran and the Sciences of Nature. Paperback ed. Studies in Contemporary Philosophical Th., 1997.
From Physics to Metaphysics. Institute for Humanities and Cultural Studies, Tehran, 1998.
Golshani, Mehdi. Can Science Dispense with Religion? Hardcover ed. I.H.C.S., 1998.
English Translation of the Holy Qur'an, Vol. 1. Islamic Propagation Organization, Tehran, 1991.

As a contributor
"The Sciences of Nature in an Islamic Perspective" in The Concept of Nature in Science & Theology (SSTh 4/1996), ed. by N. H. Gregersen et al. (Geneva: Labor et Fides, 1998), pp. 56–62.
Golshani, M. and Shojai, A. "Direct Particle Quantum Interaction" in Contemporary Fundamental Physics, 1, ed. by V. V. Dvoeglazov (Huntington, New York: Nova Publishers, Inc., 2000), p. 270.
"Ways of Understanding Nature in the Qur’anic Perspective" in The Interplay between Scientific and Theological Worldviews (SSTh 6/1998), ed. by N. H. Gregersen et al. (Geneva: Labor et Fides, 1999), p. 183. "Philosophy of Science from the Qur’anic Perspective" in Towards Islamization of Disciplines (Hendon, Virginia: International Institute of Islamic Thought, 1989), p. 71. "Theistic Science" in God for the Twenty First Century (USA: John Templeton Foundation, 2000). "Have Physicists Been Able to Dispense with Philosophy?" in Recent Advances in Relativity Theory, ed. by M. C. Duffy & M. Wegener (Palm Harbor, Fl. : Hadronic Press, 2001) p. 90. "The Ladder of God" in Faith in Science: Scientists Search for Truth (London: Routledge, Fall 2001). "Causality in the Islamic Outlook and in Modern Physics" in Studies in Science and Theology, Vol. 8, ed. by N. H. Gregersen (ESSSAT, Fall 2001). Articles Golshani, Mehdi. "Does Science Offer Evidence of a Transcendent Reality and Purpose:." Islam and Science (Refereed) 1 (2003): 45-65. Golshani, Mehdi. "Some Important Questions Concerning the Relationship Between Science and Religion." Islam and Science 3.1 (2003): 63-83. Scientific Papers References External links Homepage - Sharif Univ. of Tech. Webpage - Islamic World Academy of Sciences Webpage - Institute for Studies in Theoretical Physics and Mathematics Webpage - Centre for Islam and Science Interview (audio) - Meta Library Quantum physicists 20th-century Iranian physicists Members of the International Society for Science and Religion Particle physicists Scientists from Isfahan Academic staff of Sharif University of Technology University of California, Berkeley alumni University of Tehran alumni 1939 births Living people Recipients of the Order of Knowledge Iranian Science and Culture Hall of Fame recipients in Mathematics and Physics Iran's Book of the Year Awards recipients Muslim evolutionists
Mehdi Golshani
[ "Physics" ]
1,933
[ "Quantum mechanics", "Quantum physicists", "Particle physicists", "Particle physics" ]
1,572,728
https://en.wikipedia.org/wiki/Rho%20Coronae%20Borealis
Rho Coronae Borealis (ρ CrB, ρ Coronae Borealis) is a yellow dwarf star approximately 57 light-years away in the constellation of Corona Borealis. The star is thought to be similar to the Sun, with nearly the same mass, radius, and luminosity. It is orbited by four known exoplanets.

Stellar properties
Rho Coronae Borealis is a yellow main-sequence star of the spectral type G0V. The star is thought to have 96 percent of the Sun's mass, along with 1.3 times its radius and 1.7 times its luminosity. It may only be 51 to 65 percent as enriched with elements heavier than hydrogen (based on its abundance of iron) and is likely somewhat older than the Sun, at around ten billion years old. The rotation period of Rho Coronae Borealis is approximately 20 days, even though at this age stars are hypothesized to have decoupled their rotational evolution and magnetic activity. Multiple star catalogs list a 10th-magnitude companion about two arc-minutes away, but it is an unrelated background object.

Planetary system
An extrasolar planet in a 39.8-day orbit around Rho Coronae Borealis was discovered in 1997 by observing the star's radial velocity variations. This detection method only gives a lower limit on the true mass of the companion. In 2001, preliminary Hipparcos astrometric satellite data indicated that the orbital inclination of the star's companion was 0.5°, nearly face-on, implying that its mass was as much as 115 times Jupiter's. A paper published in 2011 supported this claim using a new reduction of the astrometric data, with an updated mass value of 169.7 times Jupiter and a 3σ confidence region of 100.1 to 199.6 Jupiter masses. Such a massive body would be a dim red dwarf star, not a planet. In 2016, however, a paper was published that used interferometry to rule out any stellar companions to this star, in addition to detecting a second planetary companion in a 102-day orbit. Another two planets were discovered in 2023. The evolution of the parent star, nearing the conclusion of its life cycle, has been regarded as a model for the potential evolution of our planetary system. This is especially relevant for predicting whether the Sun will eventually engulf the Earth at the end of its own life cycle (cf. Future of Earth).

Circumstellar material
In October 1999, astronomers at the University of Arizona announced the existence of a circumstellar disk around the star. Follow-up observations with the Spitzer Space Telescope failed to detect any infrared excess at 24- or 70-micrometre wavelengths, which would be expected if a disk were present. No evidence for a disk was detected in observations with the Herschel Space Observatory either.

See also
List of exoplanets discovered before 2000 – Rho Coronae Borealis b
List of exoplanets discovered in 2016 – Rho Coronae Borealis c
List of exoplanets discovered in 2023 – Rho Coronae Borealis d and Rho Coronae Borealis e

References

External links

Coronae Borealis, Rho Corona Borealis Coronae Borealis, 15 143761 078459 5968 G-type main-sequence stars Solar analogs 9537 BD+33 2663 J16010264+3318124 Planetary systems with four confirmed planets
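As background to the "lower limit" caveat in the planetary system section above: the radial-velocity method measures only the product m sin i. Below is a minimal Python sketch of the standard relation, valid when the planet's mass is small compared with the star's; the semi-amplitude K is an assumed illustrative value, not a figure quoted in this article.

```python
# Sketch: minimum mass (m sin i) from radial-velocity parameters, assuming
# m_planet << m_star:
#   m sin i = K * (P / (2*pi*G))**(1/3) * M_star**(2/3) * sqrt(1 - e**2)
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # solar mass, kg
M_JUP = 1.898e27    # Jupiter mass, kg

def msini_in_jupiters(k_ms: float, p_days: float, e: float, mstar_suns: float) -> float:
    p = p_days * 86400.0                    # orbital period in seconds
    m = mstar_suns * M_SUN
    msini = k_ms * (p / (2 * math.pi * G)) ** (1 / 3) * m ** (2 / 3) * math.sqrt(1 - e * e)
    return msini / M_JUP

# The article's 39.8-day period with an assumed K = 67 m/s, a near-circular
# orbit, and the quoted 0.96 solar-mass star: roughly one Jupiter mass.
print(round(msini_in_jupiters(67.0, 39.8, 0.04, 0.96), 2))  # ~1.09
```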
Rho Coronae Borealis
[ "Astronomy" ]
707
[ "Corona Borealis", "Constellations" ]
1,572,792
https://en.wikipedia.org/wiki/Colonial%20architecture
Colonial architecture is a hybrid architectural style that arose as colonists combined architectural styles from their country of origin with design characteristics of the settled country. Colonists frequently built houses and buildings in a style that was familiar to them but with local characteristics more suited to their new climate. Below are links to specific articles about colonial architecture, specifically of the modern colonies:

Spanish colonial architecture
Spanish colonial architecture is still found in the former colonies of the Spanish Empire in the Americas and in the Philippines. In Mexico, it is found in the Historic center of Mexico City, Puebla, Zacatecas, Querétaro, Guanajuato, and Morelia. Antigua Guatemala in Guatemala is also known for its well-preserved Spanish colonial style architecture. Other cities known for Spanish colonial heritage are the Ciudad Colonial of Santo Domingo, the ports of Cartagena, Colombia, and Old San Juan in Puerto Rico.
North America (Viceroyalty of New Spain): New Spanish Baroque; Spanish Colonial Revival architecture
Caribbean: Spanish West Indies
South America: Viceroyalty of Peru, Viceroyalty of New Granada, and Viceroyalty of the Río de la Plata
Asia (Spanish East Indies): Earthquake Baroque; Bahay na Bato

Portuguese colonial architecture
Portuguese colonial architecture is most visible in Brazil, Madeira, North Africa and Sub-Saharan Africa, Macau, the Malaysian city of Malacca, the city of Goa in India, and the Moluccas and Java in Indonesia.
Asia: Sino-Portuguese architecture
South America

British colonial architecture
British colonial architecture is most visible in North America, the British West Indies, South Asia, Australia, New Zealand and South Africa.
North America: American colonial architecture; Federal Architecture; First Period; Colonial Georgian architecture; British colonial architecture in Canada
South Asia: British colonial architecture in India; British colonial architecture in Pakistan; Colonial architecture in Sri Lanka
Australia: Colonial architecture of Australia; Federation architecture
Asia-Pacific: British colonial architecture in Hong Kong; British colonial architecture in Singapore; British Consulate at Takao

French colonial architecture
French colonial architecture is most visible in North America and Indochina.
Indochina
North America: French colonial architecture in North America
South Asia: French colonial architecture in India

Dutch colonial architecture
Dutch colonial architecture is most visible in Indonesia (especially Java and Sumatra), the United States, South Asia, and South Africa. In Indonesia, formerly the Dutch East Indies, colonial architecture was studied academically and developed into a new tropical architectural form which emphasized conforming to the tropical climate of the Indies rather than completely imitating the architectural language of the Dutch colonists.
Indonesia: Dutch colonial architecture of Indonesia; Old Indies Style; Indies Empire style; New Indies Style
North America: Dutch colonial architecture in North America; Dutch Colonial Revival architecture
South Asia: Dutch colonial architecture in India; Colonial architecture in Sri Lanka
South Africa: Cape Dutch architecture

Italian colonial architecture
Eritrea was Italy's first African colony. Its first capital, Massawa, contains a large amount of early Italian colonial architecture, characterized by historicism and inspiration from Venetian Gothic and Italian Neoclassical architecture.
The colonial architecture and orthogonal street grid of Asmara, the colony's second capital, was inscribed as a UNESCO World Heritage Site in 2017. Much of the city's colonial architecture dates to the fascist era, during which Benito Mussolini encouraged architects and planners to transform the city into a "Little Rome". Somalia also contains a wide range of Italian colonial architecture dating back to its colonial era. In Mogadishu, the residence of most of the colony's eventual 50,000 Italian residents, colonial architects undertook large planning projects and erected monuments such as the still-extant triumphal arch dedicated to Umberto I, the largely destroyed Cathedral of Mogadiscio, and various government buildings. The Italian-built Villa Somalia remains Somalia's presidential residence. Unlike colonial schemes in Libya and Eritrea, Italian colonial authorities built within existing cities in Somalia, not building new villages or towns for settlers. Before the consolidation of Italian Cyrenaica and Italian Tripolitania, Libya's colonial masters undertook significant building projects in Italian styles, such as the construction of Tripoli's Cathedral, built in a Venetian Gothic style. Following the founding of Italian Libya, Italian Fascist architecture became the standard for the massive infrastructural and settlement-related projects that Mussolini's Italy undertook. In cities such as Tripoli and Benghazi, colonial architects and urban planners undertook large-scale urban projects, such as the construction of Benghazi's monumental Lungomare (sea-walk), new urban districts for Italian settlers, and Catholic religious buildings, including Benghazi's and Tripoli's cathedrals. The fascist government's constructions were usually characterized by use of the Italian Rationalist and Neoclassical styles. Starting in 1938, the colony's Public Works Department sponsored the building of 27 new villages meant for Italian settlement, mostly in Cyrenaica, which epitomized a Rationalism informed by local Arab architectural mores. Giovanni Pellegrini, one of the most prominent designers of these agrarian villages, attempted to synthesize Arab and Italian architecture into settlements best fitted to Cyrenaica's arid climate. Italy's occupation of the Dodecanese left a significant number of modernist and art deco buildings throughout the archipelago. Colonial architects also constructed several new towns and villages, such as Portolago, now known as Lakki. In contrast with much of the built remnants of Italian colonialism in Africa, Italian architecture in the Dodecanese often remains in good repair. Italy's brief colonial undertaking in Albania resulted in a prominent collection of Rationalist buildings, including the Bank of Albania, the Prime Minister's Office, and the National Theatre.

See also
Colonial Revival architecture
American colonial architecture

References

External links
Website on colonial architecture with 29,000 pictures of colonial buildings around the world, by Gauvin Alexander Bailey of Queen's University, funded by the Social Sciences and Humanities Research Council of Canada and the National Endowment for the Humanities

Architectural styles Architectural history History of European colonialism
Colonial architecture
[ "Engineering" ]
1,172
[ "Architectural history", "Architecture" ]
1,572,814
https://en.wikipedia.org/wiki/Di-tert-butyl%20ether
Di-tert-butyl ether is a tertiary ether, primarily of theoretical interest as the simplest member of the class of di-tertiary ethers.

See also
Ether
Methyl tert-butyl ether
Dimethyl ether
Diethyl ether
Diisopropyl ether

References

Dialkyl ethers Ether solvents Tert-butyl compounds Symmetrical ethers
Di-tert-butyl ether
[ "Chemistry" ]
74
[]
1,572,831
https://en.wikipedia.org/wiki/Plant%20senescence
Plant senescence is the process of aging in plants. Plants exhibit both stress-induced and age-related developmental aging. Chlorophyll degradation during leaf senescence unmasks other pigments, such as the carotenoids (e.g. xanthophylls) and anthocyanins, which are the cause of autumn leaf color in deciduous trees. Leaf senescence has the important function of recycling nutrients, mostly nitrogen, to growing and storage organs of the plant. Unlike animals, plants continually form new organs, and older organs undergo a highly regulated senescence program to maximize nutrient export.

Hormonal regulation of senescence
Programmed senescence appears to be heavily influenced by plant hormones. The hormones abscisic acid, ethylene, jasmonic acid and salicylic acid are accepted by most scientists as promoters of senescence, but at least one source also lists gibberellins, brassinosteroids and strigolactone as being involved. Cytokinins help to maintain the plant cell, and expression of cytokinin biosynthesis genes late in development prevents leaf senescence. A withdrawal of cytokinin, or an inability of the cell to perceive it, may cause the cell to undergo apoptosis or senescence. In addition, mutants that cannot perceive ethylene show delayed senescence. Genome-wide comparison of mRNAs expressed during dark-induced senescence with those expressed during age-related developmental senescence demonstrates that jasmonic acid and ethylene are more important for dark-induced (stress-related) senescence, while salicylic acid is more important for developmental senescence.

Annual versus perennial benefits
Some plants have evolved into annuals, which die off at the end of each season and leave seeds for the next, whereas closely related plants in the same family have evolved to live as perennials. This may be a programmed "strategy" for the plants. The benefit of an annual strategy may be genetic diversity, as one set of genes does not continue year after year; a new mix is produced each year. Secondly, being annual may allow the plants a better survival strategy, since the plant can put most of its accumulated energy and resources into seed production rather than saving some for the plant to overwinter, which would limit seed production. Conversely, the perennial strategy may sometimes be the more effective survival strategy, because the plant has a head start every spring with growing points, roots, and stored energy that have survived through the winter. In trees, for example, the structure can be built on year after year, so that the tree and root structure can become larger, stronger, and capable of producing more fruit and seed than the year before, out-competing other plants for light, water, nutrients, and space. This strategy can fail when environmental conditions change rapidly: if a certain bug quickly takes advantage and kills all of the nearly identical perennials, there is a far smaller chance that a random mutation will slow the bug, compared with more diverse annuals.

Plant self-pruning
There is a speculative hypothesis on how and why a plant induces part of itself to die off. The theory holds that leaves and roots are routinely pruned off during the growing season, whether the plant is annual or perennial. This pruning is applied mainly to mature leaves and roots, for one of two reasons: either the pruned leaves and roots are no longer efficient enough in nutrient acquisition, or energy and resources are needed in another part of the plant because that part is faltering in its resource acquisition.
Poor productivity reasons for plant self-pruning – the plant rarely prunes young dividing meristematic cells, but if a fully grown mature cell is no longer acquiring the nutrients that it should acquire, then it is pruned.
Shoot efficiency self-pruning reasons – for instance, presumably a mature shoot cell must on average produce enough sugar, and acquire enough oxygen and carbon dioxide, to support both itself and a similar-sized root cell. Indeed, since plants are obviously "interested" in growing, it is arguable that the "directive" of the average shoot cell is to "show a profit" and produce or acquire more sugar and gases than is necessary to support both it and a similar-sized root cell. If this "profit" is not shown, the shoot cell is killed off and resources are redistributed to other "promising" young shoots or leaves in the hope that they will be more productive.
Root efficiency self-pruning reasons – similarly, a mature root cell must acquire, on average, more than enough minerals and water to support both itself and a similar-sized shoot cell that does not acquire water and minerals. If this does not happen, the root is killed off and resources are sent to new young root candidates.
Shortage/need-based reasons for plant self-pruning – this is the other side of efficiency problems.
Shoot shortages – if a shoot is not getting enough root-derived minerals and water, the idea is that it will kill part of itself off and send the resources to the root to make more roots.
Root shortages – the idea here is that if the root is not getting enough shoot-derived sugar and gases, it will kill part of itself off and send resources to the shoot to allow more shoot growth.

This is an oversimplification, in that it is arguable that some shoot and root cells serve functions other than acquiring nutrients. In these cases, whether they are pruned or not would be "calculated" by the plant using some other criteria. It is also arguable that, for example, mature nutrient-acquiring shoot cells would have to acquire more than enough shoot nutrients to support both themselves and their share of the shoot and root cells that do not acquire sugar and gases, whether these are of a structural, reproductive, immature, or plain root nature. The reason a plant does not impose efficiency demands on immature cells is that most immature cells are part of so-called dormant buds. These are kept small and non-dividing until the plant needs them, and are found in buds, for instance at the base of every lateral stem.

Theory of hormonal induction of senescence
There is little theory on how plants induce themselves to senesce, although it is reasonably widely accepted that some of it is done hormonally. Botanists generally concentrate on ethylene and abscisic acid as culprits in senescence, but neglect gibberellin and brassinosteroid, which inhibit root growth if they do not cause actual root pruning. This is perhaps because roots are below the ground and thus harder to study.

Shoot pruning – it is now known that ethylene induces the shedding of leaves much more than abscisic acid does. ABA originally received its name because it was discovered to have a role in leaf abscission. Its role is now seen to be minor, occurring only in special cases.

Hormonal shoot pruning theory – a newer, simple theory holds that even though ethylene may be responsible for the final act of leaf shedding, it is ABA and strigolactones that induce senescence in leaves, through a runaway positive feedback mechanism.
What supposedly happens is that ABA and strigolactones are released mostly by mature leaves under water and/or mineral shortages. ABA and strigolactones act in mature leaf cells by pushing out minerals, water, sugar, gases and even the growth hormones auxin and cytokinin (and possibly jasmonic and salicylic acid in addition). This causes even more ABA and strigolactones to be made, until the leaf is drained of all nutrients. When conditions get particularly bad in the emptying mature leaf cell, it experiences sugar and oxygen deficiencies, leading to gibberellin and finally ethylene emanation. When the leaf senses ethylene, it knows it is time to abscise.

Root pruning – the concept that plants prune their roots in the same kind of way as they abscise leaves is not a well-discussed topic among plant scientists, although the phenomenon undoubtedly exists. Since gibberellin, brassinosteroid and ethylene are known to inhibit root growth, it takes just a little imagination to assume they perform the same role as ethylene does in the shoot, that is, to prune the roots too.

Hormonal root pruning theory – in the new theory, just as in the shoot, GA, BA and Eth are seen both to be induced by shortages in the roots (sugar shortages for GA/BA; oxygen shortages, and perhaps excess carbon dioxide, for Eth) and to push sugar and oxygen, as well as minerals, water and the growth hormones, out of the root cell, causing a positive feedback loop that results in the emptying and death of the root cell. The final death knell for a root might be strigolactone or, most probably, ABA, as these indicate substances that should be abundant in the root; if the root cannot even support itself with these nutrients, then it should be senesced.

Parallels to cell division – the theory, perhaps even more controversially, asserts that just as both auxin and cytokinin seem to be needed before a plant cell divides, in the same way perhaps ethylene and GA/BA (and ABA and strigolactones) are needed before a cell will senesce.

Seed senescence
Seed germination performance is a major determinant of crop yield. Deterioration of seed quality with age is associated with the accumulation of DNA damage. In dry, aging rye seeds, DNA damage accumulates as embryos lose viability. Dry seeds of Vicia faba accumulate DNA damage with time in storage, and undergo DNA repair upon germination. In Arabidopsis, a DNA ligase is employed in the repair of DNA single- and double-strand breaks during seed germination, and this ligase is an important determinant of seed longevity. In eukaryotes, the cellular repair response to DNA damage is orchestrated, in part, by the DNA damage checkpoint kinase ATM. ATM has a major role in controlling the germination of aged seeds by integrating progression through germination with the repair response to the DNA damage accumulated during the dry quiescent state.

See also
Ageing
Senescence
DNA damage theory of aging

References
Special issue about plant senescence in Plant Biology, volume 10, issue s1

External links
The Adaptive Reasons For And The Physiological Causes Of Senescence In Annual Plants
The Start at a General Theory of Plant Senescence

Plant physiology Senescence in non-human organisms
Plant senescence
[ "Biology" ]
2,171
[ "Plant physiology", "Senescence in non-human organisms", "Senescence", "Plants" ]
1,572,904
https://en.wikipedia.org/wiki/Mecoptera
Mecoptera (from the Greek: mecos = "long", ptera = "wings") is an order of insects in the superorder Holometabola with about six hundred species in nine families worldwide. Mecopterans are sometimes called scorpionflies after their largest family, Panorpidae, in which the males have enlarged genitals raised over the body that look similar to the stingers of scorpions, and long beaklike rostra. The Bittacidae, or hangingflies, are another prominent family and are known for their elaborate mating rituals, in which females choose mates based on the quality of gift prey offered to them by the males. A smaller group is the snow scorpionflies, family Boreidae, adults of which are sometimes seen walking on snowfields. In contrast, the majority of species in the order inhabit moist environments in tropical locations.

The Mecoptera are closely related to the Siphonaptera (fleas), and a little more distantly to the Diptera (true flies). They are somewhat fly-like in appearance, being small to medium-sized insects with long slender bodies and narrow membranous wings. Most breed in moist environments such as leaf litter or moss, and the eggs may not hatch until the wet season arrives. The larvae are caterpillar-like and mostly feed on vegetable matter, and the non-feeding pupae may pass through a diapause until weather conditions are favorable.

Early Mecoptera may have played an important role in pollinating extinct species of gymnosperms before the evolution of other insect pollinators such as bees. Adults of modern species are overwhelmingly predators or consumers of dead organisms. In a few areas, some species are the first insects to arrive at a cadaver, making them useful in forensic entomology.

Diversity
Mecopterans are small to medium-sized insects. There are about six hundred extant species known, divided into thirty-four genera in nine families. The majority of the species are contained in the families Panorpidae and Bittacidae. Besides these, there are about four hundred known fossil species in about eighty-seven genera, which are more diverse than the living members of the order. The group is sometimes called the scorpionflies, from the turned-up "tail" of the male's genitalia in the Panorpidae. The distribution of mecopterans is worldwide; the greatest diversity at the species level is in the Afrotropic and Palearctic realms, but there is greater diversity at the generic and family level in the Neotropic, Nearctic and Australasian realms. They are absent from Madagascar and many islands and island groups; this may demonstrate that their dispersal ability is low, with Trinidad, Taiwan and Japan, where they are found, having had recent land bridges to the nearest continental land masses.

Evolution and phylogeny

Taxonomic history
The European scorpionfly was named Panorpa communis by Linnaeus in 1758. The Mecoptera were named by Alpheus Hyatt and Jennie Maria Arms in 1891. The name is from the Greek mecos, meaning long, and ptera, meaning wings. The families of Mecoptera are well accepted by taxonomists, but their relationships have been debated. In 1987, R. Willman treated the Mecoptera as a clade, containing the Boreidae as sister to the Meropeidae, but in 2002 Michael F. Whiting declared the Mecoptera so defined as paraphyletic, with the Boreidae as sister to another order, the Siphonaptera (fleas).

Fossil history
Among the earliest members of the Mecoptera are the Nannochoristidae of Upper Permian age.
Fossil Mecoptera become abundant and diverse during the Mesozoic, for example in China, where panorpids such as Jurassipanorpa, hangingflies (Bittacidae and Cimbrophlebiidae), and Orthophlebiidae have been found. Extinct Mecoptera species may have been important pollinators of early gymnosperm seed plants from the late Middle Jurassic to the mid–Early Cretaceous, before other pollinating groups such as the bees evolved. These were mainly wind-pollinated plants, but fossil mecopterans had siphon-feeding apparatus that could have fertilized these early gymnosperms by feeding on their nectar and pollen. The lack of iron enrichment in their fossilized probosces rules out their use for drinking blood. Eleven species have been identified from three families, Mesopsychidae, Aneuretopsychidae, and Pseudopolycentropodidae, within the clade Aneuretopsychina. Their body lengths ranged from the small Parapolycentropus burmiticus to the larger Lichnomesopsyche gloriae, and the proboscis itself could be substantial in length. It has been suggested that these mecopterans transferred pollen on their mouthparts and head surfaces, as bee flies and hoverflies do today, but no such associated pollen has been found, even when the insects were finely preserved in Eocene Baltic amber. They likely pollinated plants such as Caytoniaceae, Cheirolepidiaceae, and Gnetales, which have ovulate organs that are either poorly suited for wind pollination or have structures that could support long-proboscid fluid feeding. The Aneuretopsychina were the most diverse group of mecopterans from the latest Permian to the Middle Triassic, taking the place of the Permochoristidae. During the Late Triassic through the Middle Jurassic, Aneuretopsychina species were gradually replaced by species from the Parachoristidae and Orthophlebiidae. Modern mecopteran families are derived from the Orthophlebiidae.

External relationships
Mecoptera have special importance for the evolution of the insects. Two of the most important insect orders, Lepidoptera (butterflies and moths) and Diptera (true flies), along with Trichoptera (caddisflies), probably evolved from ancestors belonging to, or closely related to, the Mecoptera. Evidence includes anatomical and biochemical similarities as well as transitional fossils, such as Permotanyderus and Choristotanyderus, which lie between the Mecoptera and Diptera. The group was once much more widespread and diverse than it is now, with four suborders during the Mesozoic. It is unclear as of 2020 whether the Mecoptera form a single clade, or whether the Siphonaptera (fleas) sit inside that clade, so that the traditional "Mecoptera" taxon is paraphyletic. However, the earlier suggestion that the Siphonaptera are sister to the Boreidae is not supported; instead, there is the possibility that they are sister to another mecopteran family, the Nannochoristidae. The two alternative hypotheses are: (a) Mecoptera is paraphyletic, containing the Siphonaptera; (b) Mecoptera is monophyletic, sister to the Siphonaptera.

Internal relationships
All the families were formerly treated as part of a single order, Mecoptera. The relationships between the families are, however, a matter of debate. The cladogram of Cracraft and Donoghue (2004) places the Nannochoristidae as a separate order, and the Boreidae, as the sister group to the Siphonaptera, also as its own order. The Eomeropidae is suggested to be the sister group to the rest of the Mecoptera, with the position of the Bittacidae unclear.
Of those other families, the Meropeidae is the most basal, and the relationships of the rest are not completely clear.

Biology

Morphology
Mecoptera are small to medium-sized insects with long beaklike rostra, membranous wings and slender, elongated bodies. They have relatively simple mouthparts, with a long labium, long mandibles and fleshy palps, which resemble those of the more primitive true flies. Like many other insects, they possess compound eyes on the sides of their heads, and three ocelli on the top. The antennae are filiform (thread-shaped) and contain multiple segments. The fore and hind wings are similar in shape, being long and narrow, with numerous cross-veins, and somewhat resembling those of primitive insects such as mayflies. A few genera, however, have reduced wings, or have lost them altogether. The abdomen is cylindrical with eleven segments, the first of which is fused to the metathorax. The cerci consist of one or two segments. The abdomen typically curves upwards in the male, superficially resembling the tail of a scorpion, with the tip containing an enlarged structure called the genital bulb. The caterpillar-like larvae have hard sclerotised heads with mandibles (jaws), short true legs on the thorax, prolegs on the first eight abdominal segments, and a suction disc or pair of hooks on the terminal tenth segment. The pupae have free appendages rather than being secured within a cocoon (they are exarate).

Ecology
Mecopterans mostly inhabit moist environments, although a few species are found in semi-desert habitats. Scorpionflies, family Panorpidae, generally live in broad-leaf woodlands with plentiful damp leaf litter. Snow scorpionflies, family Boreidae, appear in winter and can be seen on snowfields and on moss; the larvae are able to jump like fleas. Hangingflies, family Bittacidae, occur in forests, grassland and caves with high moisture levels. They mostly breed among mosses, in leaf litter and other moist places, but their reproductive habits have been little studied, and at least one species, Nannochorista philpotti, has aquatic larvae. Adult mecopterans are mostly scavengers, feeding on decaying vegetation and the soft bodies of dead invertebrates. Panorpa raid spider webs to feed on trapped insects and even the spiders themselves, and hangingflies capture flies and moths with their specially modified legs. Some groups consume pollen, nectar, midge larvae, carrion and moss fragments. Most mecopterans live in moist environments; in hotter climates, the adults may therefore be active and visible only for short periods of the year.

Mating behaviour
Various courtship behaviours have been observed among mecopterans, with males often emitting pheromones to attract mates. The male may provide an edible gift, such as a dead insect or a brown salivary secretion, to the female. Some boreids have hook-like wings which the male uses to pick up and place the female on his back while copulating. Male panorpids vibrate their wings or even stridulate while approaching a female. Hangingflies (Bittacidae) provide a nuptial meal in the form of a captured insect prey item, such as a caterpillar, bug, or fly. The male attracts a female with a pheromone from vesicles on his abdomen; he retracts these once a female is nearby, and presents her with the prey. While she evaluates the gift, he locates her genitalia with his. If she stays to eat the prey, his genitalia attach to hers, and the female lowers herself into an upside-down hanging position, eating the prey while mating.
Larger prey result in longer mating times. In Hylobittacus apicalis, prey long give between 1 and 17 minutes of mating. Larger males of that species give prey as big as houseflies, earning up to 29 minutes of mating, maximal sperm transfer, more oviposition, and a refractory period during which the female does not mate with other males: all of these increase the number of offspring the male is likely to have. Life-cycle The female lays the eggs in close contact with moisture, and the eggs typically absorb water and increase in size after deposition. In species that live in hot conditions, the eggs may not hatch for several months, the larvae only emerging when the dry season has finished. More typically, however, they hatch after a relatively short period of time. The larvae are usually quite caterpillar-like, with short, clawed, true legs, and a number of abdominal prolegs. They have sclerotised heads with mandibulate mouthparts. Larvae possess compound eyes, which is unique among holometabolous insects. The tenth abdominal segment bears either a suction disc, or, less commonly, a pair of hooks. They generally eat vegetation or scavenge for dead insects, although some predatory larvae are known. The larva crawls into the soil or decaying wood to pupate, and does not spin a cocoon. The pupae are exarate, meaning the limbs are free of the body, and are able to move their mandibles, but are otherwise entirely nonmotile. In drier environments, they may spend several months in diapause, before emerging as adults once the conditions are more suitable. Interaction with humans Forensic entomology makes use of scorpionflies' habit of feeding on human corpses. In areas where the family Panorpidae occurs, such as the eastern United States, these scorpionflies can be the first insects to arrive at a donated human cadaver, and remain on a corpse for one or two days. The presence of scorpionflies thus indicates that a body must be fresh. Scorpionflies are sometimes described as looking "sinister", particularly from the male's raised "tail" resembling a scorpion's sting. A popular but incorrect belief is that they can sting with their tails. References External links Mecoptera at the Tree of Life Mecoptera image gallery at myrmecos.net Video of Mecoptera from Austria Mecoptera in UK on BBC wildlife website (third image in) Insect orders Extant Permian first appearances Paraphyletic groups
Mecoptera
[ "Biology" ]
2,934
[ "Phylogenetics", "Paraphyletic groups" ]
1,572,934
https://en.wikipedia.org/wiki/HD%2020367
HD 20367 is a star in the constellation of Aries, close to the border with the Perseus constellation. It is a yellow-white hued star that is a challenge to view with the naked eye, having an apparent visual magnitude of 6.40. Based upon parallax measurements, it is located 85 light years from the Sun. It is drifting further away with a radial velocity of +6.5 km/s. Based upon its movement through space, it is a candidate member of the Ursa Major Moving Group of co-moving stars that probably share a common origin. This object is a late F-type main-sequence star with a stellar classification of F8V. It is about three billion years old and is spinning with a projected rotational velocity of 5.5 km/s. The star is 12% larger and 13% more massive than the Sun. It is radiating 1.58 times the luminosity of the Sun from its photosphere at an effective temperature of 6,100 K. Claims of a planetary system In June 2002, an announcement was made that a Jupiter-mass or larger extrasolar planet had been found orbiting the star, with a period of and an eccentricity of 0.32. The eccentric nature of this planet's orbit meant that it would spend part of each circuit around the star outside the habitable zone. However, subsequent observations in 2009 put the existence of this planet in doubt. See also List of extrasolar planets References External links HIP 15323 Catalog Image HD 20367 Sky map F-type main-sequence stars Hypothetical planetary systems Aries (constellation) BD+30 0520 020367 015323
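The quoted radius, temperature, and luminosity can be cross-checked against the Stefan–Boltzmann relation, L/Lsun = (R/Rsun)^2 · (T/Tsun)^4. The following Python sketch is illustrative only and not part of the source article; the solar effective temperature of 5772 K is an assumed reference value.

```python
# Check the quoted luminosity of HD 20367 from its radius and temperature
# via the Stefan-Boltzmann relation: L/Lsun = (R/Rsun)**2 * (T/Tsun)**4.
T_SUN = 5772.0    # assumed solar effective temperature in kelvin

r_ratio = 1.12    # star is 12% larger than the Sun (from the article)
t_eff = 6100.0    # effective temperature in kelvin (from the article)

l_ratio = r_ratio**2 * (t_eff / T_SUN)**4
print(f"L/Lsun = {l_ratio:.2f}")   # ~1.56, close to the quoted 1.58
```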
HD 20367
[ "Astronomy" ]
344
[ "Aries (constellation)", "Constellations" ]
1,572,944
https://en.wikipedia.org/wiki/1%2C4-Dioxane
1,4-Dioxane is a heterocyclic organic compound, classified as an ether. It is a colorless liquid with a faint sweet odor similar to that of diethyl ether. The compound is often called simply dioxane because the other dioxane isomers (1,2- and 1,3-) are rarely encountered. Dioxane is used as a solvent for a variety of practical applications as well as in the laboratory, and also as a stabilizer for the transport of chlorinated hydrocarbons in aluminium containers. Synthesis Dioxane is produced by the acid-catalysed dehydration of diethylene glycol, which in turn is obtained from the hydrolysis of ethylene oxide. In 1985, the global production capacity for dioxane was between 11,000 and 14,000 tons. In 1990, the total U.S. production volume of dioxane was between 5,250 and 9,150 tons. Structure The dioxane molecule is centrosymmetric, meaning that it adopts a chair conformation, typical of relatives of cyclohexane. However, the molecule is conformationally flexible, and the boat conformation is easily adopted, e.g. in the chelation of metal cations. Dioxane resembles a smaller crown ether with only two ethyleneoxyl units. Uses Trichloroethane transport In the 1980s, most of the dioxane produced was used as a stabilizer for 1,1,1-trichloroethane for storage and transport in aluminium containers. Normally aluminium is protected by a passivating oxide layer, but when these layers are disturbed, the metallic aluminium reacts with trichloroethane to give aluminium trichloride, which in turn catalyses the dehydrohalogenation of the remaining trichloroethane to vinylidene chloride and hydrogen chloride. Dioxane "poisons" this catalysis reaction by forming an adduct with aluminium trichloride. As a solvent Dioxane is used in a variety of applications as a versatile aprotic solvent, e.g. for inks, adhesives, and cellulose esters. It is substituted for tetrahydrofuran (THF) in some processes, because of its lower toxicity and higher boiling point (101 °C, versus 66 °C for THF). While diethyl ether is rather insoluble in water, dioxane is miscible and in fact is hygroscopic. At standard pressure, the mixture of water and dioxane in the ratio 17.9:82.1 by mass is a positive azeotrope that boils at 87.6 °C. The oxygen atoms are weakly Lewis-basic. It forms adducts with a variety of Lewis acids. It is classified as a hard base and its base parameters in the ECW model are EB = 1.86 and CB = 1.29. Dioxane produces coordination polymers by linking metal centers. In this way, it is used to drive the Schlenk equilibrium, allowing the synthesis of dialkyl magnesium compounds. Dimethylmagnesium is prepared in this manner: 2 CH3MgBr + C4H8O2 → MgBr2(C4H8O2) + (CH3)2Mg Spectroscopy Dioxane is used as an internal standard for nuclear magnetic resonance spectroscopy in deuterium oxide. Toxicology Safety Dioxane has an LD50 of 5170 mg/kg in rats. It is irritating to the eyes and respiratory tract. Exposure may cause damage to the central nervous system, liver and kidneys. In a 1978 mortality study conducted on workers exposed to 1,4-dioxane, the observed number of deaths from cancer was not significantly different from the expected number. Dioxane is classified by the National Toxicology Program as "reasonably anticipated to be a human carcinogen". It is also classified by the IARC as a Group 2B carcinogen: possibly carcinogenic to humans because it is a known carcinogen in other animals. 
The United States Environmental Protection Agency classifies dioxane as a probable human carcinogen (having observed an increased incidence of cancer in controlled animal studies, but not in epidemiological studies of workers using the compound), and a known irritant (with a no-observed-adverse-effects level of 400 milligrams per cubic meter) at concentrations significantly higher than those found in commercial products. Animal studies in rats suggest that the greatest health risk is associated with inhalation of vapors in the pure form. The State of New York has adopted a first-in-the-nation drinking water standard for 1,4-dioxane and set a maximum contaminant level of 1 part per billion. Explosion hazard Like some other ethers, dioxane combines with atmospheric oxygen upon prolonged exposure to air to form potentially explosive peroxides. Distillation of these mixtures is dangerous. Storage over metallic sodium could limit the risk of peroxide accumulation. Environment Dioxane tends to concentrate in the water and has little affinity for soil. It is resistant to abiotic degradation in the environment, and was formerly thought to also resist biodegradation. However, more recent studies since the 2000s have found that it can be biodegraded through a number of pathways, suggesting that bioremediation can be used to treat 1,4-dioxane contaminated water. Dioxane has affected groundwater supplies in several areas. Dioxane at the level of 1 μg/L (~1 ppb) has been detected in many locations in the US. In the U.S. state of New Hampshire, it had been found at 67 sites in 2010, ranging in concentration from 2 ppb to over 11,000 ppb. Thirty of these sites are solid waste landfills, most of which have been closed for years. In 2019, the Southern Environmental Law Center successfully sued Greensboro, North Carolina, over its wastewater treatment after 1,4-dioxane was found at 20 times above EPA safe levels in the Haw River. Cosmetics As a byproduct of the ethoxylation process, a route to some ingredients found in cleansing and moisturizing products, dioxane can contaminate cosmetics and personal care products such as deodorants, perfumes, shampoos, toothpastes and mouthwashes. The ethoxylation process makes the cleansing agents, such as sodium laureth sulfate and ammonium laureth sulfate, less abrasive and offers enhanced foaming characteristics. 1,4-Dioxane is found in small amounts in some cosmetics, and its use in cosmetics remains unregulated in both China and the U.S. Research has found the chemical in ethoxylated raw ingredients and in off-the-shelf cosmetic products. The Environmental Working Group (EWG) found that 97% of hair relaxers, 57% of baby soaps and 22% of all products in Skin Deep, their database for cosmetic products, are contaminated with 1,4-dioxane. Since 1979, the U.S. Food and Drug Administration (FDA) has conducted tests on cosmetic raw materials and finished products for the levels of 1,4-dioxane. 1,4-Dioxane was present in ethoxylated raw ingredients at levels up to 1410 ppm (~0.14%wt), and at levels up to 279 ppm (~0.03%wt) in off-the-shelf cosmetic products. Levels of 1,4-dioxane exceeding 85 ppm (~0.01%wt) in children's shampoos indicate that close monitoring of raw materials and finished products is warranted. While the FDA encourages manufacturers to remove 1,4-dioxane, it is not required by federal law. On 9 December 2019, New York passed a bill to ban the sale of cosmetics with more than 10 ppm of 1,4-dioxane as of the end of 2022. 
The law will also prevent the sale of household cleaning and personal care products containing more than 2 ppm of 1,4-dioxane at the end of 2022. See also Dioxolane 9-crown-3 Dioxane tetraketone Oxalic anhydride Dioxanone References Dioxanes Ether solvents IARC Group 2B carcinogens Crown ethers Sweet-smelling chemicals
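The parts-per-million figures quoted in the cosmetics surveys above convert to weight percent by dividing by 10,000 (1 ppm by mass = 0.0001 wt%). A minimal illustrative Python sketch, not part of the source article:

```python
# Convert contamination levels from ppm (by mass) to weight percent.
# 1 ppm = 1 mg/kg = 1e-4 wt%, so wt% = ppm / 10_000.
def ppm_to_wt_percent(ppm: float) -> float:
    return ppm / 10_000.0

for ppm in (1410, 279, 85):   # levels quoted in the FDA surveys above
    print(f"{ppm} ppm = {ppm_to_wt_percent(ppm):.3f} wt%")
# 1410 ppm = 0.141 wt%, 279 ppm = 0.028 wt%, 85 ppm = 0.009 wt%
```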
1,4-Dioxane
[ "Chemistry" ]
1,762
[ "Crown ethers" ]
1,573,046
https://en.wikipedia.org/wiki/HD%20130322
HD 130322 is a star with a close-orbiting exoplanet in the constellation of Virgo. The distance to this system is 104 light years, as determined using parallax measurements. It is drifting closer to the Sun with a radial velocity of −12.4 km/s. With an apparent visual magnitude of 8.04, it is too dim to be visible to the naked eye, requiring binoculars or a small telescope to view. Being almost exactly on the celestial equator, the star is visible everywhere in the world except for the North Pole. The star shows a high proper motion, traversing the celestial sphere at an angular rate of . The spectrum of this star presents as a K-type main-sequence star, an orange dwarf, with a stellar classification of K0V. The star has 92% of the mass of the Sun and 85% of the Sun's radius. It is spinning with a rotation period of 26.5 days. HD 130322 is radiating 62% of the luminosity of the Sun from its photosphere at an effective temperature of 5,387 K. It is estimated to be around six billion years old. The star HD 130322 is named Mönch and its companion is Eiger. The names were selected in the NameExoWorlds campaign by Switzerland, during the 100th anniversary of the IAU. Mönch and Eiger are prominent peaks of the Bernese Alps. Planetary system In 2000, an extrasolar planet was discovered orbiting the star using Doppler spectroscopy. As the inclination of the orbital plane is unknown, only a lower bound on the mass can be estimated. Most likely this is a hot Jupiter, as it orbits close to the host star and has at least the mass of Jupiter. The star rotates at an inclination of 76 degrees relative to Earth. It has been assumed that the planet shares that inclination. However, several "hot Jupiters" are known to be oblique relative to the stellar axis. See also Lists of exoplanets References K-type main-sequence stars Planetary systems with one confirmed planet Virgo (constellation) Durchmusterung objects 130322 072339
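Radial-velocity detections constrain only the product m·sin(i), so the true mass follows by dividing the minimum mass by the sine of the orbital inclination. A minimal Python sketch, illustrative only: the minimum mass of 1.0 Jupiter mass below is an assumed placeholder, and taking the orbit at the star's 76-degree spin inclination is itself the assumption discussed above.

```python
import math

# Radial velocity gives only the minimum mass m*sin(i); if the orbit shares
# the star's measured spin inclination, the true mass is m_min / sin(i).
m_sin_i = 1.0           # assumed minimum mass in Jupiter masses (placeholder)
inclination_deg = 76.0  # stellar spin inclination quoted in the article

true_mass = m_sin_i / math.sin(math.radians(inclination_deg))
print(f"true mass ~ {true_mass:.2f} Jupiter masses")  # ~1.03 M_J
```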
HD 130322
[ "Astronomy" ]
443
[ "Virgo (constellation)", "Constellations" ]
1,573,057
https://en.wikipedia.org/wiki/John%20F.%20Allen%20%28physicist%29
John Frank Allen, FRS FRSE (May 5, 1908 – April 22, 2001) was a Canadian physicist. At the same time as Pyotr Leonidovich Kapitsa in Moscow, Don Misener and Allen independently discovered the superfluid phase of matter in 1937 using liquid helium in the Royal Society Mond Laboratory in Cambridge, England. Life Allen was born in Winnipeg; he was also known as Jack Allen. His father, Frank Allen, was a professor of physics at the University of Manitoba. John Allen studied physics initially at the University of Manitoba, where he received his bachelor's degree in 1928. Afterwards, he went to the University of Toronto to pursue postgraduate studies. He obtained his master's degree in 1930 and undertook his PhD with John McLennan, working on superconductivity. There he developed and built his first cryostat, which was taken by John McLennan for a demonstration of superconductivity in a public lecture to the Royal Institution in London. He obtained his PhD degree in 1933. With a two-year US National Research Council Fellowship which he obtained in 1933, he worked as a postdoctoral researcher at Caltech between 1933 and 1935. In 1935, he joined the Mond Laboratory of the Royal Society in Cambridge to work with Pyotr Kapitsa on low temperature experiments. However, Kapitsa had been unable to return from a visit to his mother in the Soviet Union in 1934 and never came back to Cambridge. So John Allen worked independently of Kapitsa on the properties of helium at very low temperatures and reported the discovery of superfluidity in helium; his report and Kapitsa's were published side by side in Nature in January 1938. Despite the independent discovery at about the same time, the Nobel Prize for superfluidity was awarded only to Kapitsa, in 1978. Allen stayed in Cambridge until 1947, when he took up an appointment as a professor of natural philosophy at the University of St Andrews, Scotland. In 1949, he was elected a Fellow of the Royal Society. During his tenure at the University of St Andrews, he was twice dean of the Faculty of Science, and oversaw the creation of a separate Faculty of Applied Science at Dundee as well as the development of the Science complex on the North Haugh in St Andrews, which opened in 1966. He was chair of the Very Low Temperature Commission of the International Union of Pure and Applied Physics from 1966 to 1969 and a member of the British National Committee for Physics of the Royal Society. In 1978, he retired, retaining emeritus status until his death. Allen received an honorary doctorate from Heriot-Watt University in 1984. The building of the School of Physics and Astronomy of the University of St Andrews is named after John Allen, as is the library in the J.F. Allen building. He died of a stroke in St Andrews, Fife, on 22 April 2001. Family Allen married Elfriede Hiebert in 1933. The two later divorced. They had one adopted son. Scientific work During his work on low temperature physics, Allen developed a number of techniques that are still in use today. In 1937, he introduced the O-ring for use as a seal for vacuum systems. In 1947, he further invented indium gaskets to create leak-tight seals for low temperature applications. In 1937, Allen discovered superfluid helium together with his student Don Misener in the Mond laboratory in Cambridge, independently of Pyotr Kapitsa in Moscow. 
His student, Ernest Ganz, later observed the second sound in liquid helium, and Allen and his collaborator possibly also measured the third sound that occurs in thin films; however, they did not report their results. When World War II broke out, he worked on projects supporting the army; these included the development of on-board oxygen generators for bombers and a variable time fuse for anti-aircraft shells. Allen also used a movie camera to film his experiments, such as the superfluid helium fountain, which he discovered in 1938 with the help of a pocket flashlight. Over a ten-year period Allen made a movie of the various two-fluid phenomena exhibited by liquid helium-4. The photography of these effects was a real challenge, because liquid helium-4 is essentially transparent. This unique colour movie (the fifth edition was completed in 1982) is one of Allen's great legacies to physics. He was an early user of moving images to document experiments and inform students and the general public. At some stage (likely in 1984) he modified the long-running St. Andrews pitch drop experiment to bring its setup closer to that of the University of Queensland's similar pitch-drop experiment. See also Timeline of low-temperature technology John Allen's video on superfluid helium References Canadian physicists Fellows of the Royal Society Fellows of the Royal Society of Edinburgh Canadian people of Scottish descent Scottish physicists Academics of the University of St Andrews Canadian expatriate academics in the United Kingdom 1908 births 2001 deaths Superfluidity University of Manitoba alumni University of Toronto alumni Scientists from Winnipeg Academics from Winnipeg
John F. Allen (physicist)
[ "Physics", "Chemistry", "Materials_science" ]
1,037
[ "Physical phenomena", "Phase transitions", "Phases of matter", "Superfluidity", "Condensed matter physics", "Exotic matter", "Matter", "Fluid dynamics" ]
1,573,063
https://en.wikipedia.org/wiki/Don%20Misener
Don Misener (A.D. Misener) (1911–1996) was a physicist. Along with Pyotr Leonidovich Kapitsa and John F. Allen, Misener discovered the superfluid phase of matter in 1937. Misener was a graduate student at the University of Toronto in 1935. He joined Allen at Cambridge University in about 1937. Misener later returned to Canada to work at the University of Western Ontario. Journal references E. F. Burton, J. O. Wilhelm, and A. D. Misener, Trans. Roy. Soc. Can. 28(111) p. 65 (1934) J. F. Allen and A. D. Misener, Flow of Liquid Helium II, Nature 141(3558) p. 75 (8 Jan 1938) A. D. Misener, The Specific Heat of Superconducting Mercury, Indium and Thallium, Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences, 174(957) pp. 262–272 (1940) See also Timeline of low-temperature technology Allan Griffin A Brief History of Our Understanding of BEC: From Bose to Beliaev arXiv:cond-mat/9901123 p. 5 References External links U of T and the Discover of Superfluidity Reference to the late Don Misener Canadian physicists University of Toronto alumni Superfluidity 1911 births 1996 deaths Presidents of the Canadian Association of Physicists
Don Misener
[ "Physics", "Chemistry", "Materials_science" ]
302
[ "Physical phenomena", "Phase transitions", "Phases of matter", "Superfluidity", "Condensed matter physics", "Exotic matter", "Matter", "Fluid dynamics" ]
1,573,097
https://en.wikipedia.org/wiki/Dimethoxyethane
Dimethoxyethane, also known as glyme, monoglyme, dimethyl glycol, ethylene glycol dimethyl ether, dimethyl cellosolve, and DME, is a colorless, aprotic liquid ether that is used as a solvent, especially in batteries. Dimethoxyethane is miscible with water. Production Monoglyme is produced industrially by the reaction of dimethyl ether with ethylene oxide: CH3OCH3 + CH2CH2O → CH3OCH2CH2OCH3 Applications as solvent and ligand Together with a high-permittivity solvent (e.g. propylene carbonate), dimethoxyethane is used as the low-viscosity component of the solvent for electrolytes of lithium batteries. In the laboratory, DME is used as a coordinating solvent. Dimethoxyethane is often used as a higher-boiling-point alternative to diethyl ether and tetrahydrofuran. Dimethoxyethane acts as a bidentate ligand for some metal cations. It is therefore often used in organometallic chemistry. Grignard reactions and hydride reductions are typical applications. It is also suitable for palladium-catalyzed reactions including Suzuki reactions and Stille couplings. Dimethoxyethane is also a good solvent for oligo- and polysaccharides. Sodium naphthalide dissolved in dimethoxyethane is used as a PTFE etching solution that removes fluorine atoms from the surface, which are replaced by oxygen, hydrogen, and water. This also physically etches the surface, preparing it for better adhesion. References External links Clariant Glymes Homepage www.glymes.com 1,2-Dimethoxyethane - chemical product info: properties, production, applications. International Chemical Safety Card 1568 Chemical hazard links Glycol ethers Ether solvents Ligands Hazardous air pollutants
Dimethoxyethane
[ "Chemistry" ]
430
[ "Ligands", "Coordination chemistry" ]
1,573,182
https://en.wikipedia.org/wiki/Behavior%20change%20%28public%20health%29
Behavior change, in the context of public health, refers to efforts put in place to change people's personal habits and attitudes, to prevent disease. Behavior change in public health can take place at several levels and is known as social and behavior change (SBC). Increasingly, efforts focus on prevention of disease to save healthcare costs. This is particularly important in low and middle income countries, where supply side health interventions have come under increased scrutiny because of the cost. Aims The 3-4-50 concept outlines that there are three behaviors (poor diet, little to no physical activity, and smoking), that lead to four diseases (heart disease/stroke, diabetes, cancer, pulmonary disease), that account for 50% of deaths worldwide. This is why so much emphasis in public health interventions has been on changing behaviors or intervening early on to decrease the negative impacts that come with these behaviors. With successful intervention, there is the possibility of decreasing healthcare costs by a drastic amount, as well as general costs to society (morbidity and mortality). A good public health intervention is not only defined by the results it creates, but also by the number of levels it reaches on the socioecological model (individual, interpersonal, community and/or environment). The challenge that public health interventions face is generalizability: what may work in one community may not work in others. However, Healthy People 2020 sets national objectives, to be accomplished over 10 years, aimed at improving the health of all Americans. Health conditions and infections are associated with risky behaviors. Tobacco use, alcoholism, multiple sex partners, substance use, reckless driving, obesity, or unprotected sexual intercourse are some examples. Human beings have, in principle, control over their conduct. Behavior modification can contribute to the success of self-control and health-enhancing behaviors. Risky behaviors can be eliminated and health-enhancing behaviors adopted, including physical exercise, weight control, preventive nutrition, dental hygiene, condom use, and accident prevention. Health behavior change refers to the motivational, volitional, and action based processes of abandoning such health-compromising behaviors in favor of adopting and maintaining health-enhancing behaviors. Addiction that is associated with risky behavior may have a genetic component. Theories Behavior change programs tend to focus on a few behavioral change theories which gained ground in the 1980s. These theories share a major commonality in defining individual actions as the locus of change. Behavior change programs that are usually focused on activities that help a person or a community to reflect upon their risk behaviors and change them to reduce their risk and vulnerability are known as interventions. Examples include: "transtheoretical (stages of change) model of behavior change", "theory of reasoned action", "health belief model", "theory of planned behavior", "diffusion of innovation", and the health action process approach. Developments in health behavior change theories since the late 1990s have focused on incorporating disparate theories of health behavior change into a single unified theory. Individual and interpersonal Health belief model: It is a psychological model attempting to provide an explanation and prediction of health behaviors through a focus on the attitudes and beliefs of individuals. 
It is based on the belief that the perception an individual has determines their success in taking on that behavior change. Factors: perceived susceptibility/severity/benefits/barriers, readiness to act, cues to action, and self-efficacy. Protection motivation theory: Focuses on understanding the fear appeal that mediates behavior change and describes how threat and coping appraisals determine whether coping with a health threat is adaptive or maladaptive. Factors: perceived severity, vulnerability, response efficacy. Transtheoretical model: This theory uses "stages of change" to create a nexus between powerful principles and processes of behavior change derived from leading theories of behavior change. Incorporates aspects of the integrative biopsychosocial model. Self-regulation theory: Embodies the belief that people have control over their own behavior change journey, as long as they have the resources and understanding to do so. Aims to create long-term effects for particular situations and contexts. Mainly focuses on stopping negative behaviors. Relapse prevention model: Focuses on immediate determinants and underhanded antecedent behaviors/factors that contribute and/or lead to relapse. Aims to identify high-risk situations and work with participants to cope with such conditions. Factors: self-efficacy, stimulus control. Behaviorist learning theory: Aims to understand the prior context of behavior development that leads to certain consequences. Social cognitive theory: Explains behavior learning through observation and social contexts. Centered on the belief that behavior develops in the context of the environment through psychological processes. Factors: self-efficacy, knowledge, behavioral capability, goal setting, outcome expectations, observational learning, reciprocal determinism, reinforcement. Self-determination theory: Centers around support for natural and/or intrinsic tendencies in behavior and provides participants with healthy and effective ways to work with those. Factors: autonomy, competence, and skills. Theory of planned behavior: Aims to predict an individual's specific plan to engage in a behavior (time and place), and applies to behaviors over which people can exert self-control. Factors: behavioral intent, evaluation of risks and behavior. Health action process approach: HAPA suggests that the adoption, initiation, and maintenance of health behaviors should be conceived of as a structured process including a motivation phase and a volition phase. The former describes the intention formation while the latter refers to planning and action (initiative, maintenance, recovery). Community Community-based participatory research (CBPR): Utilizes community–researcher partnership and collaboration. People in the designated community work with the researcher to play an active role as well as being the subjects of the study. Diffusion of innovations: Seeks to explain how new ideas and behaviors are communicated and spread throughout groups. Factors: relative advantage, compatibility, complexity, trial-ability, observability. Tools Behavior change communication (BCC) Behavior change communication, or BCC, is an approach to behavior change focused on communication. It is also known as social and behavior change communication, or SBCC. The assumption is that through communication of some kind, individuals and communities can somehow be persuaded to behave in ways that will make their lives safer and healthier. BCC was first employed in HIV and TB prevention projects. 
More recently, its ambit has grown to encompass any communication activity whose goal is to help individuals and communities select and practice behavior that will positively impact their health, such as immunization, cervical cancer check-ups, employing single-use syringes, etc. List of behavior change strategies Motivational interviewing Goal-oriented technique for eliciting and strengthening intrinsic motivation for change. Behavioral contract Intent formation, making a commitment, being ready to change (usually written). Knowledge Educational information through behavior, consequences and benefits, getting help, acquisition of skills. Behavioral capabilities Skill development through practice, modeling, imitation, reenacting, rehearsing. Choices Building autonomy and intrinsic motivation through relevance, interests and control Graded tasks Planning ahead Anticipate barriers Problem solving Self-reporting Self-adjustment Rewards Stimulus control Social support Examples Organizations, foundations and programs Johns Hopkins Center for Communication Programs specializes in health-related BCC (behavior change communication) programs, primarily in developing countries. It includes programs in reproductive health and family planning, malaria, and HIV/AIDS. Development Media International uses mass media to promote healthy behaviors in Burkina Faso, DRC and Mozambique. Young 1ove provides information to youth to reduce the spread of HIV/AIDS in Botswana. Science of Behavior Change (SOBC) aims to promote basic research on the initiation, personalization, and maintenance of behavior change. Chocolate Moose Media, founded by Firdaus Kharas in 1995, creates animated public service announcement content for health-and-social-justice behaviour change communications. Physical activity and diet: Look AHEAD (Action for Health in Diabetes), Shape-up Somerville, Diabetes Prevention Program (DPP) Quitting smoking: The Truth Initiative, Campaign for Tobacco-Free Kids, Family Smoking Prevention and Tobacco Control 2009 Care groups are groups of 10–15 volunteer, community-based health educators who regularly meet together. Barrier analysis is a rapid assessment tool used in behavior change projects to identify behavioral determinants. Community-led total sanitation is a behaviour change tool used in the sanitation sector for mainly rural settings in developing countries with the aim to stop open defecation. The method uses shame, disgust, and to some extent peer pressure, which lead to the "spontaneous" construction and long-term use of toilets after an initial triggering process has taken place. See also Behavior change methods Behavioural change theories Design for behavior change Global Handwashing Day Lifestyle medicine Nudge theory References Human behavior Public health Sanitation
Behavior change (public health)
[ "Biology" ]
1,785
[ "Behavior", "Human behavior" ]
1,573,372
https://en.wikipedia.org/wiki/Scale%20%28map%29
The scale of a map is the ratio of a distance on the map to the corresponding distance on the ground. This simple concept is complicated by the curvature of the Earth's surface, which forces scale to vary across a map. Because of this variation, the concept of scale becomes meaningful in two distinct ways. The first way is the ratio of the size of the generating globe to the size of the Earth. The generating globe is a conceptual model to which the Earth is shrunk and from which the map is projected. The ratio of the Earth's size to the generating globe's size is called the nominal scale (also called principal scale or representative fraction). Many maps state the nominal scale and may even display a bar scale (sometimes merely called a "scale") to represent it. The second distinct concept of scale applies to the variation in scale across a map. It is the ratio of the mapped point's scale to the nominal scale. In this case 'scale' means the scale factor (also called point scale or particular scale). If the region of the map is small enough to ignore Earth's curvature, such as in a town plan, then a single value can be used as the scale without causing measurement errors. In maps covering larger areas, or the whole Earth, the map's scale may be less useful or even useless in measuring distances. The map projection becomes critical in understanding how scale varies throughout the map. When scale varies noticeably, it can be accounted for as the scale factor. Tissot's indicatrix is often used to illustrate the variation of point scale across a map. History The foundations for quantitative map scaling go back to ancient China, with textual evidence that the idea of map scaling was understood by the second century BC. Ancient Chinese surveyors and cartographers had ample technical resources used to produce maps such as counting rods, carpenter's squares, plumb lines, compasses for drawing circles, and sighting tubes for measuring inclination. Reference frames postulating a nascent coordinate system for identifying locations were hinted at by ancient Chinese astronomers who divided the sky into various sectors or lunar lodges. The Chinese cartographer and geographer Pei Xiu of the Three Kingdoms period created a set of large-area maps that were drawn to scale. He produced a set of principles that stressed the importance of consistent scaling, directional measurements, and adjustments in land measurements in the terrain that was being mapped. Terminology Representation of scale Map scales may be expressed in words (a lexical scale), as a ratio, or as a fraction. Examples are: 'one centimetre to one hundred metres'    or    1:10,000   or    1/10,000 'one inch to one mile'    or    1:63,360    or    1/63,360 'one centimetre to one thousand kilometres'   or   1:100,000,000    or    1/100,000,000.  (The ratio would usually be abbreviated to 1:100M) Bar scale vs. lexical scale In addition to the above, many maps carry one or more (graphical) bar scales. For example, some modern British maps have three bar scales, one each for kilometres, miles and nautical miles. A lexical scale in a language known to the user may be easier to visualise than a ratio: if the scale is an inch to two miles and the map user can see two villages that are about two inches apart on the map, then it is easy to work out that the villages are about four miles apart on the ground. A lexical scale may cause problems if it is expressed in a language that the user does not understand or in obsolete or ill-defined units. 
For example, a scale of one inch to a furlong (1:7920) will be understood by many older people in countries where Imperial units used to be taught in schools. But a scale of one pouce to one league may be about 1:144,000, depending on the cartographer's choice of the many possible definitions for a league, and only a minority of modern users will be familiar with the units used. Large scale, medium scale, small scale Contrast with spatial scale. A small-scale map covers a large region, such as the whole world, a continent or a large nation. In other words, small-scale maps show large areas of land in a small space. They are called small scale because the representative fraction is relatively small. Large-scale maps show smaller areas in more detail, as county maps or town plans might. Such maps are called large scale because the representative fraction is relatively large. For instance a town plan, which is a large-scale map, might be on a scale of 1:10,000, whereas the world map, which is a small-scale map, might be on a scale of 1:100,000,000. The following table describes typical ranges for these scales but should not be considered authoritative because there is no standard: The terms are sometimes used in the absolute sense of the table, but other times in a relative sense. For example, a map reader whose work refers solely to large-scale maps (as tabulated above) might refer to a map at 1:500,000 as small-scale. In the English language, the word large-scale is often used to mean "extensive". However, as explained above, cartographers use the term "large scale" to refer to less extensive maps – those that show a smaller area. Maps that show an extensive area are "small scale" maps. This can be a cause of confusion. Scale variation Mapping large areas causes noticeable distortions because it significantly flattens the curved surface of the earth. How distortion gets distributed depends on the map projection. Scale varies across the map, and the stated map scale is only an approximation. This is discussed in detail below. Large-scale maps with curvature neglected The region over which the earth can be regarded as flat depends on the accuracy of the survey measurements. If measured only to the nearest metre, then curvature of the earth is undetectable over a meridian distance of about 100 km and over an east-west line of about 80 km (at a latitude of 45 degrees). If surveyed to the nearest millimetre, then curvature is undetectable over a meridian distance of about 10 km and over an east-west line of about 8 km. Thus a plan of New York City accurate to one metre or a building site plan accurate to one millimetre would both satisfy the above conditions for the neglect of curvature. They can be treated by plane surveying and mapped by scale drawings in which any two points at the same distance on the drawing are at the same distance on the ground. True ground distances are calculated by measuring the distance on the map and then multiplying by the inverse of the scale fraction or, equivalently, simply using dividers to transfer the separation between the points on the map to a bar scale on the map. Point scale (or particular scale) As proved by Gauss's Theorema Egregium, a sphere (or ellipsoid) cannot be projected onto a plane without distortion. This is commonly illustrated by the impossibility of smoothing an orange peel onto a flat surface without tearing and deforming it. The only true representation of a sphere at constant scale is another sphere such as a globe. 
Given the limited practical size of globes, we must use maps for detailed mapping. Maps require projections. A projection implies distortion: A constant separation on the map does not correspond to a constant separation on the ground. While a map may display a graphical bar scale, the scale must be used with the understanding that it will be accurate on only some lines of the map. (This is discussed further in the examples in the following sections.) Let P be a point at latitude φ and longitude λ on the sphere (or ellipsoid). Let Q be a neighbouring point and let α be the angle between the element PQ and the meridian at P: this angle is the azimuth angle of the element PQ. Let P' and Q' be corresponding points on the projection. The angle between the direction P'Q' and the projection of the meridian is the bearing β. In general α ≠ β. Comment: this precise distinction between azimuth (on the Earth's surface) and bearing (on the map) is not universally observed, many writers using the terms almost interchangeably. Definition: the point scale at P is the ratio of the two distances P'Q' and PQ in the limit that Q approaches P. We write this as μ(λ,φ,α) = lim_{Q→P} P'Q'/PQ, where the notation indicates that the point scale is a function of the position of P and also the direction of the element PQ. Definition: if P and Q lie on the same meridian (α = 0), the meridian scale is denoted by h(λ,φ). Definition: if P and Q lie on the same parallel (α = π/2), the parallel scale is denoted by k(λ,φ). Definition: if the point scale depends only on position, not on direction, we say that it is isotropic and conventionally denote its value in any direction by the parallel scale factor k(λ,φ). Definition: A map projection is said to be conformal if the angle between a pair of lines intersecting at a point P is the same as the angle between the projected lines at the projected point P', for all pairs of lines intersecting at point P. A conformal map has an isotropic scale factor. Conversely isotropic scale factors across the map imply a conformal projection. Isotropy of scale implies that small elements are stretched equally in all directions, that is the shape of a small element is preserved. This is the property of orthomorphism (from Greek 'right shape'). The qualification 'small' means that at some given accuracy of measurement no change can be detected in the scale factor over the element. Since conformal projections have an isotropic scale factor they have also been called orthomorphic projections. For example, the Mercator projection is conformal since it is constructed to preserve angles and its scale factor is isotropic, a function of latitude only: Mercator does preserve shape in small regions. Definition: on a conformal projection with an isotropic scale, points which have the same scale value may be joined to form the isoscale lines. These are not plotted on maps for end users but they feature in many of the standard texts. (See Snyder pages 203—206.) The representative fraction (RF) or principal scale There are two conventions used in setting down the equations of any given projection. For example, the equirectangular cylindrical projection may be written as cartographers: x = aλ, y = aφ; mathematicians: x = λ, y = φ. Here we shall adopt the first of these conventions (following the usage in the surveys by Snyder). Clearly the above projection equations define positions on a huge cylinder wrapped around the Earth and then unrolled. We say that these coordinates define the projection map which must be distinguished logically from the actual printed (or viewed) maps. 
If the definition of point scale in the previous section is in terms of the projection map then we can expect the scale factors to be close to unity. For normal tangent cylindrical projections the scale along the equator is k=1 and in general the scale changes as we move off the equator. Analysis of scale on the projection map is an investigation of the change of k away from its true value of unity. Actual printed maps are produced from the projection map by a constant scaling denoted by a ratio such as 1:100M (for whole world maps) or 1:10000 (for such as town plans). To avoid confusion in the use of the word 'scale' this constant scale fraction is called the representative fraction (RF) of the printed map and it is to be identified with the ratio printed on the map. The actual printed map coordinates for the equirectangular cylindrical projection are printed map: x = (RF)·aλ, y = (RF)·aφ. This convention allows a clear distinction of the intrinsic projection scaling and the reduction scaling. From this point we ignore the RF and work with the projection map. Visualisation of point scale: the Tissot indicatrix Consider a small circle on the surface of the Earth centred at a point P at latitude φ and longitude λ. Since the point scale varies with position and direction the projection of the circle on the projection will be distorted. Tissot proved that, as long as the distortion is not too great, the circle will become an ellipse on the projection. In general the dimension, shape and orientation of the ellipse will change over the projection. Superimposing these distortion ellipses on the map projection conveys the way in which the point scale is changing over the map. The distortion ellipse is known as Tissot's indicatrix. The example shown here is the Winkel tripel projection, the standard projection for world maps made by the National Geographic Society. The minimum distortion is on the central meridian at latitudes of 30 degrees (North and South). Point scale for normal cylindrical projections of the sphere The key to a quantitative understanding of scale is to consider an infinitesimal element on the sphere. The figure shows a point P at latitude φ and longitude λ on the sphere. The point Q is at latitude φ + δφ and longitude λ + δλ. The lines PK and MQ are arcs of meridians of length a·δφ, where a is the radius of the sphere and φ is in radian measure. The lines PM and KQ are arcs of parallel circles of length (a cos φ)·δλ, with λ in radian measure. In deriving a point property of the projection at P it suffices to take an infinitesimal element PMQK of the surface: in the limit of Q approaching P such an element tends to an infinitesimally small planar rectangle. Normal cylindrical projections of the sphere have x = aλ and y equal to a function of latitude only, y = y(φ). Therefore, the infinitesimal element PMQK on the sphere projects to an infinitesimal element P'M'Q'K' which is an exact rectangle with a base δx = a·δλ and height δy. By comparing the elements on sphere and projection we can immediately deduce expressions for the scale factors on parallels and meridians. (The treatment of scale in a general direction may be found below.) parallel scale factor k(φ) = P'M'/PM = a·δλ/(a cos φ·δλ) = sec φ; meridian scale factor h(φ) = P'K'/PK = δy/(a·δφ) = y′(φ)/a. Note that the parallel scale factor k = sec φ is independent of the definition of y(φ), so it is the same for all normal cylindrical projections. 
It is useful to note that at latitude 30 degrees the parallel scale is k = sec 30° = 1.15; at latitude 45 degrees the parallel scale is k = sec 45° = 1.414; at latitude 60 degrees the parallel scale is k = sec 60° = 2; at latitude 80 degrees the parallel scale is k = sec 80° = 5.76; at latitude 85 degrees the parallel scale is k = sec 85° = 11.5. The following examples illustrate three normal cylindrical projections and in each case the variation of scale with position and direction is illustrated by the use of Tissot's indicatrix. Three examples of normal cylindrical projection The equirectangular projection The equirectangular projection, also known as the Plate Carrée (French for "flat square") or (somewhat misleadingly) the equidistant projection, is defined by x = aλ, y = aφ, where a is the radius of the sphere, λ is the longitude from the central meridian of the projection (here taken as the Greenwich meridian at λ = 0) and φ is the latitude. Note that λ and φ are in radians (obtained by multiplying the degree measure by a factor of π/180). The longitude λ is in the range [−π, π] and the latitude φ is in the range [−π/2, π/2]. Since y′(φ) = a, the previous section gives parallel scale k = sec φ, meridian scale h = 1. For the calculation of the point scale in an arbitrary direction see addendum. The figure illustrates the Tissot indicatrix for this projection. On the equator h=k=1 and the circular elements are undistorted on projection. At higher latitudes the circles are distorted into an ellipse given by stretching in the parallel direction only: there is no distortion in the meridian direction. The ratio of the major axis to the minor axis is sec φ. Clearly the area of the ellipse increases by the same factor. It is instructive to consider the use of bar scales that might appear on a printed version of this projection. The scale is true (k=1) on the equator so that multiplying its length on a printed map by the inverse of the RF (or principal scale) gives the actual circumference of the Earth. The bar scale on the map is also drawn at the true scale so that transferring a separation between two points on the equator to the bar scale will give the correct distance between those points. The same is true on the meridians. On a parallel other than the equator the scale is sec φ so when we transfer a separation from a parallel to the bar scale we must divide the bar scale distance by this factor to obtain the distance between the points when measured along the parallel (which is not the true distance along a great circle). On a line at a bearing of say 45 degrees (β = 45°) the scale is continuously varying with latitude and transferring a separation along the line to the bar scale does not give a distance related to the true distance in any simple way. (But see addendum). Even if a distance along this line of constant planar angle could be worked out, its relevance is questionable since such a line on the projection corresponds to a complicated curve on the sphere. For these reasons bar scales on small-scale maps must be used with extreme caution. Mercator projection The Mercator projection maps the sphere to a rectangle (of infinite extent in the y-direction) by the equations x = aλ, y = a ln tan(π/4 + φ/2), where a, λ and φ are as in the previous example. Since y′(φ) = a sec φ, the scale factors are: parallel scale k = sec φ, meridian scale h = sec φ. In the mathematical addendum it is shown that the point scale in an arbitrary direction is also equal to sec φ so the scale is isotropic (same in all directions), its magnitude increasing with latitude as sec φ. In the Tissot diagram each infinitesimal circular element preserves its shape but is enlarged more and more as the latitude increases. 
Lambert's equal area projection Lambert's equal area projection maps the sphere to a finite rectangle by the equations x = aλ, y = a sin φ, where a, λ and φ are as in the previous example. Since y′(φ) = a cos φ, the scale factors are parallel scale k = sec φ, meridian scale h = cos φ. The calculation of the point scale in an arbitrary direction is given below. The vertical and horizontal scales now compensate each other (hk=1) and in the Tissot diagram each infinitesimal circular element is distorted into an ellipse of the same area as the undistorted circles on the equator. Graphs of scale factors The graph shows the variation of the scale factors for the above three examples. The top plot shows the isotropic Mercator scale function: the scale on the parallel is the same as the scale on the meridian. The other plots show the meridian scale factor for the Equirectangular projection (h=1) and for the Lambert equal area projection. These last two projections have a parallel scale identical to that of the Mercator plot. For the Lambert note that the parallel scale (as Mercator A) increases with latitude and the meridian scale (C) decreases with latitude in such a way that hk=1, guaranteeing area conservation. Scale variation on the Mercator projection The Mercator point scale is unity on the equator because it is such that the auxiliary cylinder used in its construction is tangential to the Earth at the equator. For this reason the usual projection should be called a tangent projection. The scale varies with latitude as k = sec φ. Since sec φ tends to infinity as we approach the poles the Mercator map is grossly distorted at high latitudes and for this reason the projection is totally inappropriate for world maps (unless we are discussing navigation and rhumb lines). However, at a latitude of about 25 degrees the value of sec φ is about 1.1 so Mercator is accurate to within 10% in a strip of width 50 degrees centred on the equator. Narrower strips are better: a strip of width 16 degrees (centred on the equator) is accurate to within 1% or 1 part in 100. A standard criterion for good large-scale maps is that the accuracy should be within 4 parts in 10,000, or 0.04%, corresponding to k = 1.0004. Since sec φ attains this value at φ = 1.62 degrees (see figure below, red line), the tangent Mercator projection is highly accurate within a strip of width 3.24 degrees centred on the equator. This corresponds to a north-south distance of about 360 km. Within this strip Mercator is very good, highly accurate and shape preserving because it is conformal (angle preserving). These observations prompted the development of the transverse Mercator projections in which a meridian is treated 'like an equator' of the projection so that we obtain an accurate map within a narrow distance of that meridian. Such maps are good for countries aligned nearly north-south (like Great Britain) and a set of 60 such maps is used for the Universal Transverse Mercator (UTM). Note that in both these projections (which are based on various ellipsoids) the transformation equations for x and y and the expression for the scale factor are complicated functions of both latitude and longitude. Secant, or modified, projections The basic idea of a secant projection is that the sphere is projected to a cylinder which intersects the sphere at two parallels, say φ1 north and south. Clearly the scale is now true at these latitudes whereas parallels beneath these latitudes are contracted by the projection and their (parallel) scale factor must be less than one. 
The result is that deviation of the scale from unity is reduced over a wider range of latitudes. As an example, one possible secant Mercator projection is defined by x = 0.9996·aλ, y = 0.9996·a ln tan(π/4 + φ/2). The numeric multipliers do not alter the shape of the projection but it does mean that the scale factors are modified: secant Mercator scale, k = 0.9996 sec φ. Thus the scale on the equator is 0.9996; the scale is k = 1 at a latitude φ1 given by sec φ1 = 1/0.9996, so that φ1 = 1.62 degrees; and k = 1.0004 at a latitude φ2 given by sec φ2 = 1.0004/0.9996, for which φ2 = 2.29 degrees. Therefore, the projection has 0.9996 ≤ k ≤ 1.0004, that is an accuracy of 0.04%, over a wider strip of 4.58 degrees (compared with 3.24 degrees for the tangent form). This is illustrated by the lower (green) curve in the figure of the previous section. Such narrow zones of high accuracy are used in the UTM and the British OSGB projection, both of which are secant, transverse Mercator on the ellipsoid with the scale on the central meridian constant at k0 = 0.9996. The isoscale lines with k = 1 are slightly curved lines approximately 180 km east and west of the central meridian. The maximum value of the scale factor is 1.001 for UTM and 1.0007 for OSGB. The lines of unit scale at latitude φ1 (north and south), where the cylindrical projection surface intersects the sphere, are the standard parallels of the secant projection. Whilst a narrow band with k close to unity is important for high accuracy mapping at a large scale, for world maps much wider spaced standard parallels are used to control the scale variation. Examples are Behrmann with standard parallels at 30N, 30S. Gall equal area with standard parallels at 45N, 45S. The scale plots for the latter are shown below compared with the Lambert equal area scale factors. In the latter the equator is a single standard parallel and the parallel scale increases from k=1 to compensate the decrease in the meridian scale. For the Gall the parallel scale is reduced at the equator (to k=0.707) whilst the meridian scale is increased (to k=1.414). This gives rise to the gross distortion of shape in the Gall-Peters projection. (On the globe Africa is about as long as it is broad). Note that the meridian and parallel scales are both unity on the standard parallels. Mathematical addendum For normal cylindrical projections the geometry of the infinitesimal elements gives (a) tan α = (a cos φ·δλ)/(a·δφ) and (b) tan β = δx/δy = (a·δλ)/(y′(φ)·δφ). The relationship between the angles β and α is (c) tan β = (a sec φ/y′(φ))·tan α. For the Mercator projection y′(φ) = a sec φ, giving tan β = tan α: angles are preserved. (Hardly surprising since this is the relation used to derive Mercator). For the equidistant and Lambert projections we have y′(φ) = a and y′(φ) = a cos φ respectively, so the relationship between β and α depends upon the latitude φ. Denote the point scale at P when the infinitesimal element PQ makes an angle α with the meridian by μ(α). It is given by the ratio of distances: μ(α) = P'Q'/PQ = √(δx² + δy²)/√(a²·δφ² + a² cos²φ·δλ²). Setting δx = a·δλ and δy = y′(φ)·δφ and substituting from equations (a) and (b) respectively gives μ(α) = h cos α/cos β = k sin α/sin β. For the projections other than Mercator we must first calculate β from α and φ using equation (c), before we can find μ(α). For example, the equirectangular projection has y′ = a so that tan β = sec φ·tan α and μ(α) = cos α/cos β. If we consider a line of constant slope β on the projection, both the corresponding value of α and the scale factor along the line are complicated functions of φ. There is no simple way of transferring a general finite separation to a bar scale and obtaining meaningful results. Ratio symbol While the colon is often used to express ratios, Unicode can express a symbol specific to ratios, being slightly raised: ∶ (U+2236). See also Geographic distance Scale (analytical tool) Scale (ratio) Scaling (geometry) Spatial scale On Exactitude in Science Notes References Cartography Chinese inventions Measurement
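The scale factors derived above are easy to check numerically. The following Python sketch is illustrative only and not part of the source article; it evaluates the parallel scale k = sec φ, the meridian scales of the three example projections, and the latitude bounds of the tangent and secant Mercator accuracy strips.

```python
import math

def sec(phi_deg: float) -> float:
    """Parallel scale k = sec(phi), shared by all normal cylindrical projections."""
    return 1.0 / math.cos(math.radians(phi_deg))

# Parallel scale values quoted in the article.
for phi in (30, 45, 60, 80, 85):
    print(f"k({phi} deg) = {sec(phi):.3f}")   # 1.155, 1.414, 2.000, 5.759, 11.474

# Meridian scale h = y'(phi)/a for the three example projections.
h_equirectangular = lambda phi: 1.0
h_mercator        = lambda phi: sec(phi)
h_lambert         = lambda phi: math.cos(math.radians(phi))

# Half-width of the strip in which the tangent Mercator stays within k <= 1.0004.
phi1 = math.degrees(math.acos(1 / 1.0004))
print(f"tangent strip: +/- {phi1:.2f} deg")   # ~1.62 deg, i.e. a 3.24-degree strip

# Secant Mercator (k = 0.9996 sec phi): k reaches 1.0004 at
phi2 = math.degrees(math.acos(0.9996 / 1.0004))
print(f"secant strip:  +/- {phi2:.2f} deg")   # ~2.29 deg, i.e. a 4.58-degree strip
```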
Scale (map)
[ "Physics", "Mathematics" ]
5,085
[ "Quantity", "Physical quantities", "Measurement", "Size" ]
1,573,393
https://en.wikipedia.org/wiki/Conversion%20%28chemistry%29
Conversion and its related terms yield and selectivity are important terms in chemical reaction engineering. They are described as ratios of how much of a reactant has reacted (X — conversion, normally between zero and one), how much of a desired product was formed (Y — yield, normally also between zero and one) and how much desired product was formed in ratio to the undesired product(s) (S — selectivity). There are conflicting definitions in the literature for selectivity and yield, so each author's intended definition should be verified. Conversion can be defined for (semi-)batch and continuous reactors and as instantaneous and overall conversion. Assumptions The following assumptions are made: The following chemical reaction takes place: νA A + νB B → νC C + νD D, where the νi are the stoichiometric coefficients. For multiple parallel reactions, the definitions can also be applied, either per reaction or using the limiting reaction. Batch reaction assumes all reactants are added at the beginning. Semi-Batch reaction assumes some reactants are added at the beginning and the rest fed during the batch. Continuous reaction assumes reactants are fed and products leave the reactor continuously and in steady state. Conversion Conversion can be separated into instantaneous conversion and overall conversion. For continuous processes the two are the same, for batch and semi-batch there are important differences. Furthermore, for multiple reactants, conversion can be defined overall or per reactant. Instantaneous conversion Semi-batch In this setting there are different definitions. One definition regards the instantaneous conversion as the ratio of the instantaneously converted amount to the amount fed at any point in time: Xinst = −(dni/dt)reaction/ṅi,feed, with dni/dt as the change of moles with time of species i. This ratio can become larger than 1. It can be used to indicate whether reservoirs are built up and it is ideally close to 1. When the feed stops, its value is not defined. In semi-batch polymerisation, the instantaneous conversion is defined as the total mass of polymer divided by the total mass of monomer fed: X = mpolymer/mmonomer,fed. Overall conversion Batch (This is the generally stated form): Xi = (ni,0 − ni)/ni,0. Semi-batch Total conversion of the formulation: Xi = (ni,total − ni)/ni,total, where ni,total is the total amount of species i in the complete formulation. Total conversion of the fed reactants: Xi(t) = (ni,fed(t) − ni(t))/ni,fed(t), where ni,fed(t) is the amount fed up to time t. Continuous (This is the generally stated form): Xi = (ṅi,in − ṅi,out)/ṅi,in. Yield Yield in general refers to the amount of a specific product (p in 1..m) formed per mole of reactant consumed (Definition 1). However, it is also defined as the amount of product produced per amount of product that could be produced (Definition 2). If not all of the limiting reactant has reacted, the two definitions contradict each other. Combining those two also means that stoichiometry needs to be taken into account and that yield has to be based on the limiting reactant (k in 1..n): Yp = (np − np,0)/nk,0 · νk/νp. Continuous The version normally found in the literature: Yp = (ṅp,out − ṅp,in)/ṅk,in · νk/νp. Selectivity Instantaneous selectivity is the production rate of one component per production rate of another component. For overall selectivity the same problem of the conflicting definitions exists. Generally, it is defined as the number of moles of desired product per the number of moles of undesired product (Definition 1). However, the definition of the total amount of reactant to form a product per total amount of reactant consumed is also used (Definition 2), as well as the total amount of desired product formed per total amount of limiting reactant consumed (Definition 3). This last definition is the same as definition 1 for yield. 
Combination

For batch and continuous reactors (semi-batch needs to be checked more carefully) and the definitions marked as the ones generally found in the literature, the three concepts can be combined:

$X_k \cdot S_p = Y_p$

For a process with the only reaction A → B this means that S = 1 and Y = X.

Abstract example

For the following abstract example, the following calculation can be performed with the above definitions, either in a batch or a continuous reactor.

A → B
A → C

B is the desired product. There are 100 mol of A at the beginning or at the entry to the continuous reactor, and 10 mol A, 72 mol B and 18 mol C at the end of the reaction or at the exit of the continuous reactor. The three properties are found to be:

$X_A = \frac{100 - 10}{100} = 0.9$
$Y_B = \frac{72}{100} = 0.72$
$S_B = \frac{72}{100 - 10} = 0.8$

The property $X_A \cdot S_B = Y_B$ holds ($0.9 \cdot 0.8 = 0.72$). In this reaction, 90% of substance A is converted (consumed), but only 80% of that 90% is converted to the desired substance B, and 20% to the undesired by-product C. So, the conversion of A is 90%, the selectivity for B is 80% and the yield of substance B is 72%.

Literature

Werner Kullbach: Mengenberechnungen in der Chemie [Quantity calculations in chemistry]. Verlag Chemie, Weinheim 1980.
Eberhard Aust, Burkhard Bittner: Stöchiometrie – Chemisches Rechnen [Stoichiometry – chemical calculation]. CICERO-Verlag, Pegnitz, 4th edition, 2011.
Uwe Hillebrand: Stöchiometrie. Eine Einführung in die Grundlagen mit Beispielen und Übungsaufgaben [Stoichiometry: an introduction to the fundamentals with examples and exercises]. 2nd edition, Springer-Verlag, Berlin/Heidelberg 2009.

References

Stoichiometry
Conversion (chemistry)
[ "Chemistry" ]
1,069
[ "Stoichiometry", "Chemical reaction engineering", "nan" ]
1,573,422
https://en.wikipedia.org/wiki/Leachate
A leachate is any liquid that, in the course of passing through matter, extracts soluble or suspended solids, or any other component of the material through which it has passed.

Leachate is a widely used term in the environmental sciences, where it has the specific meaning of a liquid that has dissolved or entrained environmentally harmful substances that may then enter the environment. It is most commonly used in the context of land-filling of putrescible or industrial waste. In the narrow environmental context, leachate is therefore any liquid material that drains from land or stockpiled material and contains significantly elevated concentrations of undesirable material derived from the material that it has passed through.

Landfill leachate

Leachate from a landfill varies widely in composition depending on the age of the landfill and the type of waste that it contains. It usually contains both dissolved and suspended material. The generation of leachate is caused principally by precipitation percolating through waste deposited in a landfill. Once in contact with decomposing solid waste, the percolating water becomes contaminated, and if it then flows out of the waste material it is termed leachate. Additional leachate volume is produced during this decomposition of carbonaceous material, which produces a wide range of other materials including methane, carbon dioxide and a complex mixture of organic acids, aldehydes, alcohols and simple sugars.

The risks of leachate generation can be mitigated by properly designed and engineered landfill sites, such as those constructed on geologically impermeable materials or sites that use impermeable liners made of geomembranes or engineered clay. The use of linings is now mandatory within the United States, Australia and the European Union, except where the waste is deemed inert. In addition, most toxic and difficult materials are now specifically excluded from landfilling. However, despite much stricter statutory controls, leachates from modern sites are often found to contain a range of contaminants stemming from illegal activity or legally discarded household and domestic products. In a 2012 survey performed in New York State, all surveyed double-lined landfill cells had leakage rates of less than 500 liters per hectare per day, and average leakage rates were much lower than for landfills built according to older standards before 1992.

Composition of landfill leachate

When water percolates through waste, it promotes and assists the process of decomposition by bacteria and fungi. These processes in turn release by-products of decomposition and rapidly use up any available oxygen, creating an anoxic environment. In actively decomposing waste, the temperature rises and the pH falls rapidly, with the result that many metal ions that are relatively insoluble at neutral pH become dissolved in the developing leachate. The decomposition processes themselves release more water, which adds to the volume of leachate. Leachate also reacts with materials that are not themselves prone to decomposition, such as fire ash, cement-based building materials and gypsum-based materials, changing its chemical composition. In sites with large volumes of building waste, especially those containing gypsum plaster, the reaction of leachate with the gypsum can generate large volumes of hydrogen sulfide, which may be released in the leachate and may also form a large component of the landfill gas.
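Because precipitation percolating through the waste is the principal driver of leachate generation (see above), a first-order estimate of leachate volume is often made with a simple water balance. The following Python sketch is illustrative only; the area, rainfall and loss figures are assumptions, not values from this article:

# Rough water-balance estimate of annual leachate generation:
# percolation = precipitation - runoff - evapotranspiration.
area_m2 = 2.0e4      # 2-hectare landfill cell (assumed)
precip_mm = 800.0    # annual precipitation (assumed)
runoff_frac = 0.15   # assumed fraction of rainfall lost as surface runoff
evapo_mm = 450.0     # assumed annual evapotranspiration

percolation_mm = precip_mm * (1.0 - runoff_frac) - evapo_mm
leachate_m3 = max(percolation_mm, 0.0) / 1000.0 * area_m2
print(f"Estimated leachate generation: {leachate_m3:.0f} m^3/year")  # ~4600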
The physical appearance of leachate when it emerges from a typical landfill site is a strongly odoured black-, yellow- or orange-coloured cloudy liquid. The smell is acidic and offensive and may be very pervasive because of hydrogen-, nitrogen- and sulfur-rich organic species such as mercaptans.

In a landfill that receives a mixture of municipal, commercial, and mixed industrial waste but excludes significant amounts of concentrated chemical waste, landfill leachate may be characterized as a water-based solution of four groups of contaminants: dissolved organic matter (alcohols, acids, aldehydes, short-chain sugars, etc.), inorganic macro components (common cations and anions including sulfate, chloride, iron, aluminium, zinc and ammonia), heavy metals (Pb, Ni, Cu, Hg), and xenobiotic organic compounds such as halogenated organics (PCBs, dioxins, etc.).

A number of complex organic contaminants have also been detected in landfill leachates. Samples from raw and treated landfill leachate yielded 58 complex organic contaminants, including 2-OH-benzothiazole in 84% of the samples and perfluorooctanoic acid in 68%. Bisphenol A, valsartan and 2-OH-benzothiazole had the highest average concentrations in raw leachates, after biological treatment and after reverse osmosis, respectively.

Leachate management

In older landfills and those with no membrane between the waste and the underlying geology, leachate is free to leave the waste and flow directly into the groundwater. In such cases, high concentrations of leachate are often found in nearby springs and flushes. As leachate first emerges it can be black in colour, anoxic, and possibly effervescent, with dissolved and entrained gases. As it becomes oxygenated it tends to turn brown or yellow because of the presence of iron salts in solution and in suspension. It also quickly develops a bacterial flora, often comprising substantial growths of Sphaerotilus natans.

History of landfill leachate collection

In the UK in the late 1960s, central Government policy was to ensure that new landfill sites were chosen with permeable underlying geological strata to avoid the build-up of leachate. This policy was dubbed "dilute and disperse". However, following a number of cases where this policy was seen to be failing, and an exposé in The Sunday Times of serious environmental damage being caused by inappropriate disposal of industrial wastes, both policy and the law were changed. The Deposit of Poisonous Wastes Act 1972, together with the 1974 Local Government Act, made local government responsible for waste disposal and for the enforcement of environmental standards regarding waste disposal. Proposed landfill locations also had to be justified not only geographically but also scientifically. Many European countries decided to select landfill sites in groundwater-free clay geological conditions or to require that the site have an engineered lining.

In the wake of European advancements, the United States increased its development of leachate retaining and collection systems. This quickly led from lining in principle to the use of multiple lining layers in all landfills (excepting those that are truly inert).

Goals of leachate collection systems

The primary criterion for design of the leachate system is that all leachate be collected and removed from the landfill at a rate sufficient to prevent an unacceptable hydraulic head occurring at any point over the lining system.
Components of leachate collection systems

There are many components to a collection system, including pumps, manholes, discharge lines and liquid-level monitors. However, four main components govern the overall efficiency of the system: liners, filters, pumps and sumps.

Liners

Natural and synthetic liners may be utilized both as a collection device and as a means of isolating leachate within the fill to protect the soil and groundwater below. The chief concern is the ability of a liner to maintain integrity and impermeability over the life of the landfill. Subsurface water monitoring, leachate collection, and clay liners are commonly included in the design and construction of a waste landfill.

To effectively serve the purpose of containing leachate in a landfill, a liner system must possess a number of physical properties. The liner must have high tensile strength, flexibility, and elongation without failure. It is also important that the liner resist abrasion, puncture, and chemical degradation by leachate. Lastly, the liner must withstand temperature variation, must resist UV light (which leads most liners to be black), must be easily installed, and must be economical.

There are several types of liners used in leachate control and collection: geomembranes, geosynthetic clay liners, geotextiles, geogrids, geonets, and geocomposites. Each style of liner has specific uses and abilities. Geomembranes are used to provide a barrier between mobile polluting substances released from wastes and the groundwater. In the closing of landfills, geomembranes are used to provide a low-permeability cover barrier to prevent the intrusion of rain water. Geosynthetic clay liners (GCLs) are fabricated by distributing sodium bentonite in a uniform thickness between woven and non-woven geotextiles. Sodium bentonite has a low permeability, which makes GCLs a suitable alternative to clay liners in a composite liner system. Geotextiles are used as separation between two different types of soils to prevent contamination of the lower layer by the upper layer. Geotextiles also act as a cushion to protect synthetic layers against puncture from underlying and overlying rocks. Geogrids are structural synthetic materials used in slope veneer stability to create stability for cover soils over synthetic liners, or as soil reinforcement in steep slopes. Geonets are synthetic drainage materials that are often used in lieu of sand and gravel; they can take the place of drainage sand, thus increasing the landfill space available for waste. Geocomposites are a combination of synthetic materials that are ordinarily used singly. A common type of geocomposite is a geonet that is heat-bonded to two layers of geotextile, one on each side. The geocomposite serves as a filter and drainage medium.

Geosynthetic clay liners are a type of combination liner. One advantage of using a GCL is the ability to order exact amounts of the liner; ordering precise amounts from the manufacturer prevents surplus and over-spending. Another advantage is that the liner can be used in areas without an adequate clay source. On the other hand, GCLs are heavy and cumbersome, and their installation is very labor-intensive. In addition to being arduous under normal conditions, installation may have to be cancelled during damp conditions, because the bentonite would absorb the moisture, making the job even more burdensome and tedious.
Leachate drainage system

The leachate drainage system is responsible for the collection and transport of the leachate collected inside the liner. The pipe dimensions, type, and layout must all be planned with the weight and pressure of the waste and of transport vehicles in mind. The pipes are located on the floor of the cell, and an enormous amount of weight and pressure lies above the network. To support this, the pipes can be either flexible or rigid, but the joints connecting the pipes yield better results if the connections are flexible. An alternative to placing the collection system underneath the waste is to position the conduits in trenches or above grade.

The collection pipe network of a leachate collection system drains, collects, and transports leachate through the drainage layer to a collection sump, where it is removed for treatment or disposal. The pipes also serve as drains within the drainage layer to minimize the mounding of leachate in the layer. These pipes are designed with cuts that are inclined at 120 degrees, preventing entry of solid particles.

Filters

The filter layer is used above the drainage layer in leachate collection. Two types of filters are typically used in engineering practice: granular and geotextile. Granular filters consist of one or more soil layers having a coarser gradation in the direction of the seepage than the soil to be protected.

Sumps or leachate well

As liquid enters the landfill cell, it moves down through the filter, passes through the pipe network, and collects in the sump. When collection systems are planned, the number, location, and size of the sumps are vital to an efficient operation. The foremost concern in designing sumps is the amount of leachate and liquid expected; areas in which rainfall is higher than average typically have larger sumps. A further criterion for sump planning is pump capacity, which has an inverse relationship with sump size: if the pump capacity is low, the volume of the sump should be larger than average. Collection pipes typically convey the leachate by gravity to one or more sumps, depending upon the size of the area drained. Leachate collected in the sump is removed by pumping to a vehicle, to a holding facility for subsequent vehicle pickup, or to an on-site treatment facility.

Sump dimensions are governed by the amount of leachate to be stored, pump capacity, and minimum pump drawdown. The volume of the sump must be sufficient to hold the maximum amount of leachate anticipated between pump cycles, plus an additional volume equal to the minimum pump drawdown volume. Sump size should also take into account dimensional requirements for conducting maintenance and inspection activities. Sump pumps may operate with preset cycling times or, if leachate flow is less predictable, the pump may be automatically switched on when the leachate reaches a predetermined level.
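The sizing rule above — storage for one pump cycle plus the minimum drawdown volume — can be illustrated with a short Python sketch. The inflow, cycle time and drawdown figures are assumptions chosen for illustration, not design values:

# Minimal sump-sizing check: the sump must hold the leachate expected
# between pump cycles plus the minimum pump drawdown volume.
inflow_m3_per_h = 0.8  # assumed peak leachate inflow to the sump
cycle_h = 6.0          # assumed time between pump cycles
drawdown_m3 = 1.5      # assumed minimum pump drawdown volume

required_m3 = inflow_m3_per_h * cycle_h + drawdown_m3
print(f"Minimum sump volume: {required_m3:.1f} m^3")  # 6.3 m^3
# Note the inverse relationship: a lower-capacity pump lengthens the
# cycle time, so the required sump volume grows accordingly.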
Membrane and collection for treatment

More modern landfills in the developed world have some form of membrane separating the waste from the surrounding ground, and in such sites there is often a series of leachate collection pipes laid on the membrane to convey the leachate to a collection or treatment location. An example of a treatment system with only minor membrane use is the Nantmel Landfill Site. All membranes are porous to a limited extent, so that over time low volumes of leachate will cross the membrane; membranes are, however, designed so that these seepage volumes are low enough never to have a measurable adverse impact on the quality of the receiving groundwater.

A more significant risk may be the failure or abandonment of the leachate collection system. Such systems are prone to internal failure, as landfills undergo large internal movements while waste decomposes unevenly, buckling and distorting the pipes. If a leachate collection system fails, leachate levels will slowly build up in a site and may even over-top the containing membrane and flow out into the environment. Rising leachate levels can also wet waste masses that were previously dry, triggering further active decomposition and leachate generation. Thus, what appears to be a stabilised and inactive site can become re-activated, restart significant gas production and exhibit significant changes in finished ground levels.

Re-injection into landfill

One method of leachate management, more common in uncontained sites, was leachate re-circulation, in which leachate was collected and re-injected into the waste mass. This process greatly accelerated decomposition and therefore gas production, and had the effect of converting some leachate volume into landfill gas, reducing the overall volume of leachate for disposal. However, it also tended to increase substantially the concentrations of contaminant materials, making the leachate more difficult to treat.

Treatment

The most common method of handling collected leachate is on-site treatment, in which the leachate is pumped from the sump into treatment tanks. The leachate may then be mixed with chemical reagents to modify the pH, to coagulate and settle solids, and to reduce the concentration of hazardous matter. Traditional treatment involved a modified form of activated sludge to substantially reduce the dissolved organic content; nutrient imbalance can cause difficulties in maintaining an effective biological treatment stage. The treated liquid is rarely of sufficient quality to be released to the environment and may be tankered or piped to a local sewage treatment facility; the decision depends on the age of the landfill and on the limit of water quality that must be achieved after treatment.

With high conductivity, leachate is hard to treat biologically or chemically. Treatment with reverse osmosis is also limited, resulting in low recoveries and fouling of the RO membranes. Reverse osmosis applicability is limited by conductivity, organics, and scaling inorganic elements such as CaSO4, Si, and Ba.

Removal to sewer system

In some older landfills, leachate was directed to the sewers, but this can cause a number of problems. Toxic metals from leachate passing through the sewage treatment plant concentrate in the sewage sludge, making it difficult or dangerous to dispose of the sludge without incurring a risk to the environment.
In Europe, regulations and controls have improved in recent decades, and toxic wastes are no longer permitted to be disposed of in municipal solid waste landfills, so in most developed countries the metals problem has diminished. Paradoxically, however, as sewage treatment plant discharges are being improved throughout Europe and many other countries, the plant operators are finding that leachates are difficult waste streams to treat: they contain very high ammoniacal nitrogen concentrations, are usually very acidic, are often anoxic and, if received in large volumes relative to the incoming sewage flow, lack the phosphorus needed to prevent nutrient starvation for the biological communities that perform the sewage treatment processes. However, within ageing municipal solid waste landfills this may not be a problem, as the pH returns close to neutral after the initial stage of acidogenic leachate decomposition. Many sewer undertakers limit the maximum ammoniacal nitrogen concentration in their sewers to 250 mg/L to protect sewer maintenance workers, as the WHO's maximum occupational safety limit would be exceeded above pH 9 to 10, which is often the highest pH allowed in sewer discharges. Many older leachate streams also contained a variety of synthetic organic species and their decomposition products, some of which had the potential to be acutely damaging to the environment.

Environmental impact

The risks from waste leachate are due to its high organic contaminant concentrations and high concentration of ammonia. Pathogenic microorganisms that might be present in it are often cited as the most important risk, but pathogenic organism counts reduce rapidly with time in the landfill, so this applies only to the freshest leachate. Toxic substances may, however, be present in variable concentrations, and their presence is related to the nature of the waste deposited.

Most landfills containing organic material will produce methane, some of which dissolves in the leachate. This could, in theory, be released in poorly ventilated areas of the treatment plant. All plants in Europe must now be assessed under the EU ATEX Directive and zoned where explosion risks are identified, to prevent future accidents. The most important requirement is the prevention of the discharge of dissolved methane from untreated leachate into public sewers, and most sewage treatment authorities limit the permissible discharge concentration of dissolved methane to 0.14 mg/L, or 1/10 of the lower explosive limit. This entails methane stripping from the leachate.

The greatest environmental risks occur in the discharges from older sites constructed before modern engineering standards became mandatory, and from sites in the developing world where modern standards have not been applied. There are also substantial risks from illegal sites and ad-hoc sites used by organizations outside the law to dispose of waste materials.

Leachate streams running directly into the aquatic environment have both an acute and a chronic impact on the environment, which may be very severe and can greatly reduce the populations of sensitive species and overall biodiversity. Where toxic metals and organics are present, this can lead to chronic toxin accumulation in both local and far-distant populations. Rivers impacted by leachate are often yellow in appearance and often support severe overgrowths of sewage fungus.
Contemporary research on assessment techniques and remedial technologies for environmental issues originating from landfill leachate has been reviewed in the journal Critical Reviews in Environmental Science and Technology. A possible ecological threat to the aquatic environment due to the occurrence of organic micropollutants in raw and treated landfill leachates has also been reported.

Problems and failures with collection systems

Leachate collection systems can experience many problems, including clogging with mud or silt. Bioclogging can be exacerbated by the growth of micro-organisms in the conduit; the conditions in leachate collection systems are ideal for micro-organisms to multiply. Chemical reactions in the leachate may also cause clogging through the generation of solid residues. The chemical composition of leachate can weaken pipe walls, which may then fail.

Other types of leachate

Leachate can also be produced from land that was contaminated by chemicals or toxic materials used in industrial activities such as factories, mines or storage sites. Composting sites in areas of high rainfall also produce leachate. Leachate is associated with stockpiled coal and with waste materials from metal ore mining and other rock extraction processes, especially those in which sulfide-containing materials are exposed to air, producing sulfuric acid, often with elevated metal concentrations.

In the context of civil engineering (more specifically reinforced concrete design), leachate refers to the effluent of pavement wash-off (which may include melting snow and ice with salt) that permeates through the cement paste onto the surface of the steel reinforcement, thereby catalyzing its oxidation and degradation. Leachates can also be genotoxic in nature.

References

Anaerobic digestion
Biodegradable waste management
Environmental soil science
Liquid-solid separation
Leachate
[ "Chemistry", "Engineering", "Environmental_science" ]
4,454
[ "Separation processes by phases", "Biodegradable waste management", "Biodegradation", "Anaerobic digestion", "Environmental engineering", "Water technology", "Environmental soil science", "Liquid-solid separation" ]
1,573,430
https://en.wikipedia.org/wiki/Electrostatic%20generator
An electrostatic generator, or electrostatic machine, is an electrical generator that produces static electricity, or electricity at high voltage and low continuous current. The knowledge of static electricity dates back to the earliest civilizations, but for millennia it remained merely an interesting and mystifying phenomenon, without a theory to explain its behavior, and it was often confused with magnetism. By the end of the 17th century, researchers had developed practical means of generating electricity by friction, but the development of electrostatic machines did not begin in earnest until the 18th century, when they became fundamental instruments in the studies of the new science of electricity.

Electrostatic generators operate by using manual (or other) power to transform mechanical work into electric energy, or by using electric currents. Manual electrostatic generators develop electrostatic charges of opposite signs, imparted to two conductors, using only electric forces, and work by using moving plates, drums, or belts to carry electric charge to a high-potential electrode.

Description

Electrostatic machines are typically used in science classrooms to safely demonstrate electrical forces and high-voltage phenomena. The elevated potential differences achieved have also been used for a variety of practical applications, such as operating X-ray tubes, particle accelerators, spectroscopy, medical applications, sterilization of food, and nuclear physics experiments. Electrostatic generators such as the Van de Graaff generator, and variations such as the Pelletron, also find use in physics research.

Electrostatic generators can be divided into categories depending on how the charge is generated:
Friction machines use the triboelectric effect (electricity generated by contact or friction)
Influence machines use electrostatic induction
Others

Friction machines

History

The first electrostatic generators are called friction machines because of the friction in the generation process. A primitive form of frictional machine was invented around 1663 by Otto von Guericke, using a sulphur globe that could be rotated and rubbed by hand. It may not actually have been rotated during use and was not intended to produce electricity (but rather "cosmic virtues"), but it inspired many later machines that used rotating globes. Isaac Newton suggested the use of a glass globe instead of a sulphur one. About 1706, Francis Hauksbee improved the basic design with his frictional electrical machine, which enabled a glass sphere to be rotated rapidly against a woollen cloth.

Generators were further advanced when, about 1730, Prof. Georg Matthias Bose of Wittenberg added a collecting conductor (an insulated tube or cylinder supported on silk strings). Bose was the first to employ the "prime conductor" in such machines, consisting of an iron rod held in the hand of a person whose body was insulated by standing on a block of resin. In 1746, William Watson's machine had a large wheel turning several glass globes, with a sword and a gun barrel suspended from silk cords for its prime conductors. Johann Heinrich Winckler, professor of physics at Leipzig, substituted a leather cushion for the hand. During 1746, Jan Ingenhousz invented electric machines made of plate glass. Experiments with the electric machine were largely aided by the invention of the Leyden jar.
This early form of the capacitor, with conductive coatings on either side of the glass, can accumulate a charge of electricity when connected with a source of electromotive force. The electric machine was soon further improved by Andrew (Andreas) Gordon, a Scotsman and professor at Erfurt, who substituted a glass cylinder in place of a glass globe, and by Giessing of Leipzig, who added a "rubber" consisting of a cushion of woollen material. The collector, consisting of a series of metal points, was added to the machine by Benjamin Wilson about 1746, and in 1762 John Canton of England (also the inventor of the first pith-ball electroscope) improved the efficiency of electric machines by sprinkling an amalgam of tin over the surface of the rubber.

In 1768, Jesse Ramsden constructed a widely used version of a plate electrical generator. In 1783, the Dutch scientist Martin van Marum of Haarlem designed a large electrostatic machine of high quality, with glass disks 1.65 meters in diameter, for his experiments. Capable of producing voltage with either polarity, it was built under his supervision by John Cuthbertson of Amsterdam the following year. The generator is currently on display at the Teylers Museum in Haarlem. In 1785, N. Rouland constructed a silk-belted machine that rubbed two grounded tubes covered with hare fur. Edward Nairne developed an electrostatic generator for medical purposes in 1787 that had the ability to generate either positive or negative electricity, the first of these being collected from the prime conductor carrying the collecting points and the second from another prime conductor carrying the friction pad. The Winter machine possessed higher efficiency than earlier friction machines. In the 1830s, Georg Ohm possessed a machine similar to the Van Marum machine for his research; it is now at the Deutsches Museum, Munich, Germany. In 1840, the Woodward machine was developed by improving the 1768 Ramsden machine, placing the prime conductor above the disk(s). Also in 1840, the Armstrong hydroelectric machine was developed, using steam as a charge carrier.

Friction operation

The presence of a surface charge imbalance means that objects will exhibit attractive or repulsive forces. This surface charge imbalance, which leads to static electricity, can be generated by touching two differing surfaces together and then separating them, due to the phenomenon of the triboelectric effect. Rubbing two non-conductive objects can generate a great amount of static electricity. This is not the result of friction as such; two non-conductive surfaces can become charged by just being placed one on top of the other. Since most surfaces have a rough texture, however, it takes longer to achieve charging through contact than through rubbing, because rubbing objects together increases the amount of adhesive contact between the two surfaces. Insulators, i.e., substances that do not conduct electricity, are usually good at both generating and holding a surface charge. Some examples of these substances are rubber, plastic, glass, and pith. Conductive objects in contact generate a charge imbalance too, but retain the charges only if insulated. The presence of electric current does not detract from the electrostatic forces, the sparking, the corona discharge, or other such phenomena; both phenomena can exist simultaneously in the same system.
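The sign of the charge each surface acquires in contact charging is commonly summarized by a triboelectric series. The following Python sketch is a toy illustration only; the five-material ordering is a rough textbook approximation, not measured data:

# Simplified triboelectric series, ordered from most positive-charging
# to most negative-charging (a rough textbook approximation).
SERIES = ["glass", "wool", "silk", "amber", "rubber"]

def rubbing_result(a, b):
    """Predict the charge signs when materials a and b are rubbed together."""
    ia, ib = SERIES.index(a), SERIES.index(b)
    if ia == ib:
        return f"{a} and {b}: little or no net charging expected"
    pos, neg = (a, b) if ia < ib else (b, a)
    return f"{pos} charges positively, {neg} charges negatively"

print(rubbing_result("glass", "silk"))   # glass charges positively
print(rubbing_result("wool", "rubber"))  # wool charges positively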
Influence machines

History

Frictional machines were, in time, gradually superseded by the second class of instrument mentioned above, namely influence machines. These operate by electrostatic induction and convert mechanical work into electrostatic energy by the aid of a small initial charge which is continually being replenished and reinforced. The first suggestion of an influence machine appears to have grown out of the invention of Volta's electrophorus. The electrophorus is a single-plate capacitor used to produce imbalances of electric charge via the process of electrostatic induction. The next step came when Abraham Bennet, the inventor of the gold leaf electroscope, described a "doubler of electricity" (Phil. Trans., 1787) as a device similar to the electrophorus, but one that could amplify a small charge by means of repeated manual operations with three insulated plates, in order to make it observable in an electroscope.

In 1788, William Nicholson proposed his rotating doubler, which can be considered the first rotating influence machine. His instrument was described as "an instrument which by turning a winch produces the two states of electricity without friction or communication with the earth" (Phil. Trans., 1788, p. 403). Nicholson later described a "spinning condenser" apparatus as a better instrument for measurements. Erasmus Darwin, W. Wilson, G. C. Bohnenberger, and (later, in 1841) J. C. E. Péclet developed various modifications of Bennet's 1787 device. Francis Ronalds automated the generation process in 1816 by adapting a pendulum bob as one of the plates, driven by clockwork or a steam engine; he created the device to power his electric telegraph. Others, including T. Cavallo (who developed the "Cavallo multiplier", a charge multiplier using simple addition, in 1795), John Read, Charles Bernard Desormes, and Jean Nicolas Pierre Hachette, developed further various forms of rotating doublers. In 1798, the German scientist and preacher Gottlieb Christoph Bohnenberger described the Bohnenberger machine, along with several other doublers of the Bennet and Nicholson types, in a book. The most interesting of these were described in the Annalen der Physik (1801).

Giuseppe Belli, in 1831, developed a simple symmetrical doubler which consisted of two curved metal plates between which revolved a pair of plates carried on an insulating stem. It was the first symmetrical influence machine, with identical structures for both terminals. This apparatus was reinvented several times: by C. F. Varley, who patented a high-power version in 1860; by Lord Kelvin (the "replenisher") in 1868; and, more recently, by A. D. Moore (the "dirod"). Lord Kelvin also devised a combined influence machine and electromagnetic machine, commonly called a mouse mill, for electrifying the ink in connection with his siphon recorder, and a water-drop electrostatic generator (1867), which he called the "water-dropping condenser".

Holtz machine

Between 1864 and 1880, W. T. B. Holtz constructed and described a large number of influence machines which were considered the most advanced developments of the time. In one form, the Holtz machine consisted of a glass disk mounted on a horizontal axis which could be made to rotate at a considerable speed by a multiplying gear, interacting with induction plates mounted in a fixed disk close to it. In 1865, August J. I. Toepler developed an influence machine that consisted of two disks fixed on the same shaft and rotating in the same direction.
In 1868, the Schwedoff machine had a curious structure to increase the output current. Also in 1868, several mixed friction–influence machines were developed, including the Kundt machine and the Carré machine. In 1866, the Piche machine (or Bertsch machine) had been developed. In 1869, H. Julius Smith received the American patent for a portable and airtight device that was designed to ignite powder. Also in 1869, sectorless machines were investigated in Germany by Poggendorff.

The action and efficiency of influence machines were further investigated by F. Rossetti, A. Righi, and Friedrich Kohlrausch. E. E. N. Mascart, A. Roiti, and E. Bouchotte also examined the efficiency and current-producing power of influence machines. In 1871, sectorless machines were investigated by Musaeus. In 1872, Righi's electrometer was developed; it was one of the first antecedents of the Van de Graaff generator. In 1873, Leyser developed the Leyser machine, a variation of the Holtz machine. In 1880, Robert Voss (a Berlin instrument maker) devised a form of machine in which he claimed that the principles of Toepler and Holtz were combined. The same structure also became known as the Toepler–Holtz machine.

Wimshurst machine

In 1878, the British inventor James Wimshurst started his studies of electrostatic generators, improving the Holtz machine into a powerful version with multiple disks. The classical Wimshurst machine, which became the most popular form of influence machine, was reported to the scientific community by 1883, although machines with very similar structures had previously been described by Holtz and Musaeus. In 1885, one of the largest-ever Wimshurst machines was built in England (it is now at the Chicago Museum of Science and Industry).

The Wimshurst machine is comparatively simple; like all influence machines, it works by electrostatic induction of charges, meaning that it uses even the slightest existing charge to create and accumulate more charge, and repeats this process for as long as the machine is in action. Wimshurst machines are composed of:
two insulated disks attached to pulleys of opposite rotation; the disks have small conductive (usually metal) plates on their outward-facing sides;
two double-ended brushes that serve as charge stabilizers and are also the place where induction happens, creating the new charges to be collected;
two pairs of collecting combs, which are, as the name implies, the collectors of the electrical charge produced by the machine;
two Leyden jars, the capacitors of the machine;
a pair of electrodes, for the transfer of charges once they have been sufficiently accumulated.
The simple structure and components of the Wimshurst machine make it a common choice for homemade electrostatic experiments and demonstrations, characteristics that contributed to its popularity.

In 1887, Weinhold modified the Leyser machine with a system of vertical metal-bar inductors with wooden cylinders close to the disk, to avoid polarity reversals. M. L. Lebiez described the Lebiez machine, which was essentially a simplified Voss machine (L'Électricien, April 1895, pp. 225–227). In 1893, Louis Bonetti patented a machine with the structure of the Wimshurst machine but without metal sectors in the disks. This machine is significantly more powerful than the sectored version, but it must usually be started with an externally applied charge.

Pidgeon machine

In 1898, the Pidgeon machine was developed with a unique setup by W. R. Pidgeon.
On October 28 that year, Pidgeon presented this machine to the Physical Society after several years of investigation into influence machines (beginning at the start of the decade). The device was later reported in the Philosophical Magazine (December 1898, p. 564) and the Electrical Review (Vol. XLV, p. 748). A Pidgeon machine possesses fixed electrostatic inductors arranged in a manner that increases the electrostatic induction effect, and its electrical output is at least double that of typical machines of this type (except when it is overtaxed). The essential features of the Pidgeon machine are, first, the combination of the rotating support and the fixed support for inducing charge, and, second, the improved insulation of all parts of the machine (but more especially of the generator's carriers). Pidgeon machines are a combination of a Wimshurst machine and a Voss machine, with special features adapted to reduce the amount of charge leakage. Pidgeon machines excite themselves more readily than the best of these types of machines. In addition, Pidgeon investigated higher-current "triplex" section machines (or "double machines with a single central disk") with enclosed sectors, and went on to receive British Patent 22517 (1899) for this type of machine.

Multiple-disk machines and "triplex" electrostatic machines (generators with three disks) were also developed extensively around the turn of the 20th century. In 1900, F. Tudsbury discovered that enclosing a generator in a metallic chamber containing compressed air or, better, carbon dioxide greatly improved its performance, owing to the increase in the breakdown voltage of the compressed gas and the reduction of leakage across the plates and insulating supports. In 1903, Alfred Wehrsen patented an ebonite rotating disk possessing embedded sectors with button contacts at the disk surface. In 1907, Heinrich Wommelsdorf reported a variation of the Holtz machine using this disk and inductors embedded in celluloid plates (DE154175; "Wehrsen machine"). Wommelsdorf also developed several high-performance electrostatic generators, of which the best known were his "condenser machines" (1920). These were single-disk machines, using disks with embedded sectors that were accessed at the edges.

Van de Graaff

The Van de Graaff generator was invented by the American physicist Robert J. Van de Graaff in 1929 at MIT as a particle accelerator. The first model was demonstrated in October 1929. In the Van de Graaff machine, an insulating belt transports electric charge to the interior of an insulated hollow metal high-voltage terminal, where it is transferred to the terminal by a "comb" of metal points. The advantage of the design is that, since there is no electric field in the interior of the terminal, the charge on the belt can continue to be discharged onto the terminal regardless of how high the voltage on the terminal is. Thus the only limit to the voltage on the machine is ionization of the air next to the terminal, which occurs when the electric field at the terminal exceeds the dielectric strength of air, about 30 kV per centimeter. Since the highest electric field is produced at sharp points and edges, the terminal is made in the form of a smooth hollow sphere; the larger the diameter, the higher the voltage attained. The first machine used a silk ribbon bought at a five-and-dime store as the charge transport belt.
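This size–voltage relationship can be made quantitative: for an isolated sphere at potential V, the field at the surface is E = V/r, so breakdown sets in when V ≈ E × r. A short illustrative Python calculation (the terminal radii are assumptions chosen for illustration):

# Maximum terminal voltage of a spherical Van de Graaff terminal,
# limited by air breakdown: E_surface = V / r, so V_max ~ E_breakdown * r.
E_BREAKDOWN = 3.0e6  # dielectric strength of air in V/m (about 30 kV/cm)

for r_m in (0.15, 0.50):  # illustrative terminal radii in meters
    v_max = E_BREAKDOWN * r_m
    print(f"r = {r_m:.2f} m -> V_max ~ {v_max / 1e3:.0f} kV")
# r = 0.15 m -> ~450 kV; r = 0.50 m -> ~1500 kV, consistent with larger
# terminals (or pressurized insulating gas) attaining higher voltages.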
In 1931, a version able to produce 1,000,000 volts was described in a patent disclosure. The Van de Graaff generator was a successful particle accelerator, producing the highest energies until the late 1930s, when the cyclotron superseded it. The voltage on open-air Van de Graaff machines is limited to a few million volts by air breakdown. Higher voltages, up to about 25 megavolts, were achieved by enclosing the generator inside a tank of pressurized insulating gas. This type of Van de Graaff particle accelerator is still used in medicine and research. Other variations were also invented for physics research, such as the Pelletron, which uses a chain with alternating insulating and conducting links for charge transport.

Small Van de Graaff generators are commonly used in science museums and science education to demonstrate the principles of static electricity. A popular demonstration is to have a person touch the high-voltage terminal while standing on an insulated support; the high voltage charges the person's hair, causing the strands to stand out from the head.

Others

Not all electrostatic generators use the triboelectric effect or electrostatic induction. Electric charges can also be generated by electric currents directly. Examples are ionizers and ESD guns.

Applications

Gridded ion thruster

EWICON

An electrostatic vaneless ion wind generator, the EWICON, has been developed by the School of Electrical Engineering, Mathematics and Computer Science at Delft University of Technology (TU Delft). It stands near Mecanoo, an architecture firm. The main developers were Johan Smit and Dhiradj Djairam. Other than the wind, it has no moving parts: it is powered by the wind carrying away charged particles from its collector. The design suffers from poor efficiency.

Dutch Windwheel

The technology developed for the EWICON has been reused in the Dutch Windwheel.

Air ioniser

Fringe science and devices

These generators have been used, sometimes inappropriately and with some controversy, to support various fringe science investigations. In 1911, George Samuel Piggott received a patent for a compact double machine enclosed within a pressurized box for his experiments concerning radiotelegraphy and "antigravity". Much later (in the 1960s), a machine known as "Testatika" was built by the engineer Paul Baumann and promoted by a Swiss community, the Methernitha. Testatika is an electromagnetic generator based on the 1898 Pidgeon electrostatic machine, said to produce "free energy" available directly from the environment.

See also

Electrostatic motor
Electrometer (also known as the "electroscope")
Electret
Static electricity

References

Further reading

G. C. Bohnenberger: Beschreibung unterschiedlicher Elektrizitätsverdoppler von einer neuen Einrichtung nebst einer Anzahl von Versuchen über verschiedene Gegenstände der Elektrizitätslehre [Description of different electricity doublers of a new design, along with a number of experiments on various subjects of electricity]. Tübingen, 1798.
Wilhelm Holtz: On the higher charge on insulating surfaces by side pull and the transfer of this principle to the construction of induction machines. In: Johann Poggendorff, C. G. Barth (eds): Annalen der Physik und Chemie, 130, Leipzig 1867, pp. 128–136.
Wilhelm Holtz: The influence machine. In: F. Poske (ed.): Journal for Physical and Chemical Education. Julius Springer, Berlin 1904 (seventeenth year, fourth issue).
O. Lehmann: Dr. J. Frick's physical technique. Vol. 2, Friedrich Vieweg und Sohn, Braunschweig 1909, p. 797 (Section 2).
F. Poske: New forms of influence machines. In: F. Poske (ed.): Journal for Physical and Chemical Education. Julius Springer, Berlin 1893 (seventh year, second issue).
C. L. Stong: "Electrostatic motors are powered by electric field of the Earth". October 1974. (PDF)
Oleg D. Jefimenko: Electrostatic Motors: Their History, Types, and Principles of Operation. Electret Scientific, Star City, 1973.
G. W. Francis (author) and Oleg D. Jefimenko (editor): Electrostatic Experiments: An Encyclopedia of Early Electrostatic Experiments, Demonstrations, Devices, and Apparatus. Electret Scientific, Star City, 2005.
V. E. Johnson: Modern High-Speed Influence Machines; Their principles, construction and applications to radiography, radio-telegraphy, spark photography, electro-culture, electro-therapeutics, high-tension gas ignition, and the testing of materials. ASIN B0000EFPCO.
J. Clerk Maxwell: Treatise on Electricity and Magnetism (2nd ed., Oxford, 1881), vol. i, p. 294.
Joseph David Everett: Electricity (expansion of part iii of Augustin Privat-Deschanel's "Natural Philosophy") (London, 1901), ch. iv, p. 20.
A. Winkelmann: Handbuch der Physik (Breslau, 1905), vol. iv, pp. 50–58 (contains a large number of references to original papers).
J. Gray: Electrical Influence Machines, Their Historical Development and Modern Forms [with instruction on making them] (London, 1903). (J. A. F.)
Silvanus P. Thompson: The Influence Machine from Nicholson – 1788 to 1888. Journ. Soc. Tel. Eng., 1888, 17, p. 569.
John Munro: The Story of Electricity (The Project Gutenberg Etext).
A. D. Moore (editor): Electrostatics and its Applications. Wiley, New York, 1973.
Oleg D. Jefimenko (with D. K. Walker): "Electrostatic motors". Phys. Teach. 9, 121–129 (1971).

External links

Electrostatic Generator – Interactive Java Tutorial, National High Magnetic Field Laboratory
"How it works : Electricity". triquartz.co.uk.

Electrical generators
Electrostatics
Historical scientific instruments
Electrostatic generator
[ "Physics", "Technology" ]
4,823
[ "Physical systems", "Electrical generators", "Machines" ]
1,573,491
https://en.wikipedia.org/wiki/Quattor
Quattor is a generic open-source tool-kit used to install, configure, and manage computers. Quattor was originally developed in the framework of the European Data Grid project (2001–2004). Since its first release in 2003, Quattor has been maintained and extended by a volunteer community of users and developers, primarily from the community of grid system administrators.

The Quattor tool-kit, like other configuration management systems, reduces the staff effort required to maintain a cluster and facilitates reliable change management. However, three unique features make it particularly attractive for managing grid resources:
Federated Management: The open, modular nature of the tool-kit permits system administrators at different institutes to share the management of their distributed resources.
Shared Configuration and Management Efficiency: Quattor encourages the re-use of configuration information in such a way that it can be distributed and used with little or no modification at different sites, facilitating the distribution of best practices without the need for each site to implement configuration changes.
Coherent Site Model: Quattor allows an administrator to develop a site model that, once constructed, can be used to manage a range of different resources, such as real machines, virtual machines and cloud resources.
These features are also attractive beyond the grid context. This has been confirmed by the growing adoption of Quattor by both large commercial organisations and academic institutions, most of them using the tool-kit to manage their grid and non-grid systems consistently.

Principles

The challenge of structuring and sharing components in a collaborative system is not new; over the years, programming language designers have attacked this problem from many angles. While trends change, the basic principles are well understood. Features such as encapsulation, abstraction, modularity, and typing produce clear benefits. We believe that similar principles apply when sharing configuration information across administrative domains.

The Quattor configuration tool-kit derives its architecture from LCFG, improving on it in several respects. At the core of Quattor is Pan, a high-level, typed language with flexible include mechanisms, a range of data structures, and validation features familiar to modern programmers. Pan allows collaborating administrators to build up a complex set of configuration templates describing service types, hardware components, configuration parameters, users, etc. The use of a high-level language facilitates code reuse in a way that goes beyond cut-and-paste of configuration snippets.

The principles embodied in Quattor are in line with those established within the system administration community. In particular, all managed nodes retrieve their configurations from a configuration server backed by a source-control system (or systems, in the case of devolved management). This allows individual nodes to be recreated in the case of hardware failure.

Quattor handles both distributed and traditional (single-site) infrastructures. Devolved management includes the following features: consistency over a multi-site infrastructure, multiple management points, and the ability to accommodate the specific needs of constituent sites. There is no single "correct" model for a devolved infrastructure, so great flexibility is needed in the architecture of the configuration system itself. Sometimes a set of highly autonomous sites wish to collaborate loosely.
In this case, each site will host a fairly comprehensive set of configuration servers, with common configuration information being retrieved from a shared database and integrated with the local configuration. Distributing the management task can potentially introduce new costs. For example, transmitting configuration information over the WAN introduces latency and security concerns. Quattor allows servers to be placed at appropriate locations in the infrastructure to reduce latency, and the use of standard tools and protocols means that existing security systems (such as a public key infrastructure) can be harnessed to encrypt and authenticate communications.

Quattor Architecture

Configuration management system

Quattor's configuration management system is composed of a configuration database that stores high-level configuration templates, the Pan compiler that validates templates and translates them to XML or JSON profiles, and a machine profile repository that serves the profiles to client nodes. Only the Pan compiler is strictly necessary in a Quattor system; the other two subsystems can be replaced by any service providing similar functionality.

Devolved management in a cross-domain environment requires users to be authenticated and their operations to be authorized. For the configuration database, X.509 certificates can be used because of the support offered by many standard tools, and access control lists (ACLs) because they allow fine-grained control (an ACL can be attached to each template). When many users interact with the system, conflicts and misconfiguration may arise, which require a roll-back mechanism; for this purpose, a simple concurrent transaction mechanism, based on standard version control systems, was implemented.

Quattor's modular architecture allows the three configuration management subsystems to be deployed in either a distributed or a centralized fashion. In the distributed approach, profile compilation (at the development stage) is carried out on client systems, templates are then checked into a suitable database, and finally the deployment is initiated by invoking a separate operation on the server. The centralized approach provides strict control of configuration data: the compilation burden is placed on the central server, and users can only access and modify templates via a dedicated interface. Since the two paradigms provide essentially the same functionality, the choice between them depends on which fits the management model of an organization better. For instance, the centralized approach fits large computer centres well because of its strictly controlled work-flow, whereas multi-site organizations such as GRIF prefer the distributed approach because it allows different parts of the whole configuration set to be handled autonomously.

Pan language

The Pan language compiler panc sits at the core of the Quattor tool-kit. It compiles machine configurations written in the Pan configuration language by system administrators and produces XML or JSON files (profiles) that are easily consumed by Quattor clients. The Pan language itself has a simple, declarative syntax that allows the simultaneous definition of configuration information and an associated schema. In this section, we focus only on the Pan features that are relevant to the devolved management of distributed sites: validation, configuration reuse, and modularization.
Validation. The extensive validation features in the Pan language maximize the probability of finding configuration problems at compile time, minimizing costly clean-ups of deployed misconfiguration. Pan enables system administrators to define atomic or compound types with associated validation functions; when a part of the configuration schema is bound to a type, the declared constraints are automatically enforced.

Configuration reuse. Pan allows identification and reuse of configuration information through "structure templates". These identify small, reusable chunks of Pan-level configuration information which can be used whenever an administrator identifies an invariant (or nearly invariant) configuration sub-tree.

Modularization. With respect to the original design, two new features have been developed to promote modularization and large-scale reuse of configurations: the name-spacing and load-path mechanisms. A full site configuration typically consists of a large number of templates organized into directories and subdirectories. The Pan template name-spacing mimics (and enforces) this organization, much as is done in the Java language. The name-space hierarchy is independent of the configuration schema. The configuration schema is often organized by low-level services such as firewall settings for ports, account generation, log rotation entries, cron entries, and the like. In contrast, the Pan templates are usually organized based on other criteria, such as high-level services (web server, mail server, etc.) or the responsible person/group. The name-spacing allows various parts of the configuration to be separated and identified.

To effectively modularize part of the configuration for reuse, administrators must be able to import the modules easily into a site's configuration and to customize them. Users of the Pan compiler combine a load-path with the name-spacing to achieve this. The compiler uses the load-path to search multiple root directories for particular named templates; the first version found on the load-path is the one used by the compiler. This allows modules to be kept in a pristine state while allowing sites to override any particular template. Further, module developers can expose global variables to parameterize a module, permitting a system administrator to use it without having to understand the inner workings of its templates. Quattor Working Group (QWG) templates are used to configure grid middleware services. The QWG templates use all of the features of Pan to allow distributed sites to share grid middleware expertise.

Automated installation management

A key feature for administering large distributed infrastructures is the ability to automatically install machines, possibly from a remote location. To this purpose, Quattor provides a modular framework called the Automated Installation Infrastructure (AII). This framework is responsible for translating the configuration parameters embodied in node profiles into installation instructions suitable for use by standard installation tools. Current AII modules use node profiles to configure DHCP servers, PXE boot and Kickstart-guided installations. Normally AII is set up with an install server at each site. However, the above-mentioned technologies allow the transparent implementation of multi-site installations, by setting up a central server and appropriate relays using standard protocols.
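To give a flavour of the Pan features described above (typed validation, binding a schema to a configuration path, and assigning values in an object template), the fragment below is an illustrative sketch only: the template names, paths and site layout are invented for this example and do not come from any real Quattor configuration. In practice, the two templates would live in separate files.

# Hypothetical declaration template defining a small, validated schema.
declaration template site/types;

# A port must be a long integer in the valid TCP/UDP range.
type port = long(1..65535);

# A service record with a mandatory host name and a validated port.
type service = {
    "host" : string
    "port" : port
};

# Hypothetical machine profile binding the schema and assigning values.
object template node1.example.org;

include 'site/types';

bind "/software/services/web" = service;

"/software/services/web/host" = "node1.example.org";
"/software/services/web/port" = 8080;

When such a profile is compiled, the declared constraints are enforced: a port outside the range 1..65535, or a missing field, would be rejected at compile time, which is the validation behaviour this section describes.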
Node configuration management In Quattor, managed nodes handle their configuration process autonomously; all actions are initiated locally, once the configuration profile has been retrieved from the repository. Each node has a set of configuration agents (components), each registered with a particular part of the configuration schema. For example, the component that manages user accounts is registered with the path /software/components/accounts. A dispatcher program running on the node analyses the freshly retrieved configuration for changes in the relevant sections, and triggers the appropriate components. Run-time dependencies may be expressed in the node's profile, so that a partial order can be enforced on component execution. For example, it is important that the user accounts component runs before the file creation component, to ensure that file ownership can be correctly specified. By design, no control loop is provided for ensuring the correct execution of configuration components. Site administrators typically use standard monitoring systems to detect and respond to configuration failures. Nagios and Lemon are both used at Quattor sites for this purpose. In fact, Lemon has been developed in tandem with Quattor, and provides sensors to detect failures in Quattor component execution. While nodes normally update themselves automatically, administrators can configure the system to disable automatic change deployment. This is crucial in a devolved system where the responsibilities for modifying and for deploying the configuration may be separated. A typical scenario is that top-level administrators manage the shared configuration of multiple remote sites and local managers apply it according to their policies. For instance, software updates might be scheduled at different times. See also Comparison of open source configuration management software References External links Quattor Homepage Quattor Case Studies StratusLab Project Quattor on Ohloh LISA 08 Paper Journal of Grid Computing Paper Documentation on the Pan language Unix software Configuration management System administration
Quattor
[ "Technology", "Engineering" ]
2,290
[ "Systems engineering", "Information systems", "Configuration management", "System administration" ]
1,573,690
https://en.wikipedia.org/wiki/Sling%20%28implant%29
In surgery, a sling is an implant that is intended to provide additional support to a particular tissue. It usually consists of a synthetic mesh material in the shape of a narrow ribbon, but is sometimes a biomaterial (bovine or porcine) or the patient's own tissue. The ends are usually attached to a fixed body part such as the skeleton. In stress incontinence In stress incontinence, a sling is a potential method of treatment, and is placed under the urethra through one vaginal incision and two small abdominal incisions. The idea is to replace the deficient pelvic floor muscles and provide a backboard of support under the urethra. For this purpose, the Pelvicol implant (a porcine dermal sling) had a patient-determined success rate comparable with tension-free vaginal tape (TVT). In female genital prolapse Slings can also be used in the surgical management of female genital prolapse. Chin sling A chin sling is a synthetic lining used in chin augmentation to lift the tissues under the chin and neck. The sling is surgically implanted under the skin of the chin and hooked behind the ears, giving a more youthful appearance, and reversing the effects of aging such as accumulated fat, lost skin elasticity and stretched muscle lining, all of which cause the neck to droop and sag. References Implants (medicine) Oral and maxillofacial surgery Otorhinolaryngology Plastic surgery Medical devices
Sling (implant)
[ "Biology" ]
308
[ "Medical devices", "Medical technology" ]
1,573,860
https://en.wikipedia.org/wiki/Gilliflower
A gilliflower or gillyflower is the carnation or a similar plant of the genus Dianthus, especially the clove pink Dianthus caryophyllus. The name is also applied to other plants with fragrant flowers, such as the stock (Matthiola incana) and the wallflower. The name derives from the French giroflée, ultimately from the Greek karyophyllon ("nut-leaf"), the spice called clove, the association deriving from the flower's scent. Gilliflowers were allegedly referenced as payment for peppercorn rent in medieval feudal-tenure contracts. For example, in 1262 in Bedfordshire a tenant held an area of land called The Hyde "for the rent of one clove of gilliflower", and Elmore Court in Gloucester was granted to the Guise family by John De Burgh for the rent of "The clove of one Gillyflower" each year. In Kent in the 13th century, Bartholomew de Badlesmere, upon an exchange made between King Edward I and himself, received a royal grant in fee of a manor and chapel, to hold in socage, "by the service of paying one pair of clove gilliflowers", by the hands of the Sheriff. However, it is more likely that the rent was paid in the form of actual cloves (in Latin, gariofilum; the flower was later named after the spice, via French), cloves and peppercorns both being exotic spices. An old recipe for gilliflower wine is mentioned in the Cornish Recipes Ancient & Modern dated to 1753: "To 3 gallons water put 6lbs of the best powder sugar; boil together for the space of 1/2 an hour; keep skimming; let it stand to cool. Beet up 3 ounces of syrup of betony, with a large spoonful of ale yeast, put into liquor & brew it well; put a peck of gilliflowers free of stalks; let work fore 3 days covered with a cloth; strain & cask for 3-4 weeks, then bottle." In popular culture A rose and a gillyflower appear on the station badge of RAF Waterbeach in Cambridgeshire, and subsequently on the badge of 39 Engineer Regiment based at Waterbeach Barracks. A rose and gillyflower were demanded by the owner of the land on which Waterbeach Abbey was built, in the 12th century. Gilliflowers are mentioned by Mrs. Lovett in the song "Wait" from the Sondheim musical Sweeney Todd. They appear in the novel La Faute de l'Abbé Mouret (aka Abbe Mouret's Transgression or The Sin of Father Mouret) by Émile Zola, part of the Les Rougon-Macquart series. Charles Ryder calls them gillyflowers, and they grow under his student window at Oxford in the novel Brideshead Revisited. Shakespeare's Perdita is scathing about gilliflowers, or "streaked gillyvors", in Act IV, Sc 4 of his Winter's Tale, because they are cross-fertilized by humans, rather than by Nature: "I have heard it said/There is an art which in their piedness shares/With great creating Nature ... I'll not put/The dibble in earth to set one slip of them." In the ballad Clerk Saunders, the ghost of Saunders tells May Margaret of the fate of those women who die in labour: "Their beds are made in the heavens high,/Down at the foot of our good Lord's knee,/Weel set about wi' gillyflowers;/I wot, sweet company for to see." References Plant common names Garden plants
Gilliflower
[ "Biology" ]
795
[ "Plants", "Plant common names", "Common names of organisms" ]
1,573,974
https://en.wikipedia.org/wiki/Ken%20Alibek
Kanatzhan "Kanat" Baizakovich Alibekov (born 1950), known as Kenneth "Ken" Alibek since 1992, is a Kazakh-American microbiologist, bioweaponeer, and biological warfare administrative management expert. He was the first deputy director of Biopreparat. During his career in Soviet bioweaponry development in the late 1970s and 1980s, Alibekov managed projects that included weaponizing glanders and Marburg hemorrhagic fever, and created Russia's first tularemia bomb. His most prominent accomplishment was the creation of a new "battle strain" of anthrax, known as "Strain 836", later described by the Los Angeles Times as "the most virulent and vicious strain of anthrax known to man". In 1992, he defected to the United States; he has since become an American citizen and made his living as a biodefense consultant, speaker, and entrepreneur. He had actively participated in the development of biodefense strategy for the U.S. government, and between 1998 and 2005 he testified several times before the U.S. Congress and other governments on biotechnology issues, saying he was “convinced that Russia’s biological weapons program has not been completely dismantled”. In 1994, Alibek received a congressional award, a bronze Barkley medal awarded in recognition of distinguished public service and his contribution to world peace. In 2002, Alibek told United Press International that there is concern that monkeypox could be engineered into a biological weapon. Ohio-based Locus Fermentation Solutions hired Alibek in 2015 as executive vice president for research and development of biologically active molecules for different applications. Early life and education Alibek was born Kanat Alibekov in Kauchuk, in the Kazakh SSR of the Soviet Union (present-day Kazakhstan), to a Kazakh family. He grew up in Almaty, the republic's former capital. He is a certified oncologist, a doctor of science, doctor of philosophy and a doctor of medicine. Career Alibek's academic performance while studying military medicine at the Tomsk Medical Institute and his family's noted patriotism led to his selection to work for Biopreparat, the secret biological weapons program overseen by the Soviet Union's Council of Ministers. His first assignment in 1975 was to the Eastern European Branch of the Institute of Applied Biochemistry (IAB) near Omutninsk, a combined pesticide production facility and reserve biological weapons production plant intended for activation in a time of war. At Omutninsk, Alibek mastered the art and science of formulating and evaluating nutrient media and cultivation conditions for the optimization of microbial growth. While there, he expanded his medical school laboratory skills into the complex skill set required for industrial-level production of microorganisms and their toxins. After a year at Omutninsk, Alibek was transferred to the Siberian Branch of the IAB near Berdsk (another name of the branch was the Berdsk scientific and production base). With the assistance of a colleague, he designed and constructed a microbiology research and development laboratory that worked on techniques to optimize the production of biological formulations. After several promotions, Alibek was transferred back to Omutninsk, where he rose to the position of deputy director. He was soon transferred to the Kazakhstan Scientific and Production Base in Stepnogorsk (another reserve BW facility) to become the new director of that facility. 
Officially, he was deputy director of the Progress Scientific and Production Association, a manufacturer of fertilizer and pesticide. At Stepnogorsk, Alibek created an efficient industrial-scale assembly line for biological formulations. In a time of war, the assembly line could be used to produce weaponized anthrax. Continued successes in science and biotechnology led to more promotions, which resulted in a transfer to Moscow. Biopreparat In Moscow, Alibek began his service as deputy chief of the biosafety directorate at Biopreparat. He was promoted in 1988 to first deputy director of Biopreparat, where he oversaw not only the biological weapons facilities but also the significant number of pharmaceutical facilities that produced antibiotics, vaccines, sera, and interferon for the public. In response to a spring 1990 announcement that the Ministry of Medical and Microbiological Industry was to be reorganized, Alibek drafted and forwarded a memo to then-General Secretary Mikhail Gorbachev proposing the cessation of Biopreparat's biological weapons work. Gorbachev approved the proposal, but an additional paragraph was secretly inserted into Alibek's draft, resulting in a presidential decree that ordered the end of Biopreparat's biological weapons work but also required it to remain prepared for future bioweapons production. Alibek used his position at Biopreparat and the authority granted to him by the first part of the decree to begin dismantling biological weapons production and testing capabilities at a number of research and development facilities, including Stepnogorsk, Kol'tsovo, Obolensk, and others. He also negotiated a concurrent appointment to a Biopreparat facility called Biomash. Biomash designed and produced technical equipment for microbial cultivation and testing. He planned to increase the proportion of its products sent to hospitals and civilian medical laboratories beyond the 40% allocated at the time. Following the dissolution of the Soviet Union in December 1991, Alibek was placed in charge of intensive preparations for inspections of Soviet biological facilities by a joint American and British delegation. When he participated in the subsequent Soviet inspection of American facilities, his suspicion that the U.S. did not have an offensive bioweapons program was confirmed before his return to Russia. In January 1992, not long after his return from the U.S., Alibek protested against Russia's continuation of bioweapons work and resigned from both the Russian Army and Biopreparat. Immigration to the United States In October 1992, Alibek and his family emigrated to the United States. After moving to the U.S., Alibekov provided the government with a detailed accounting of the former Soviet biological weapons program. During a CIA debriefing, Alibek described the Soviet efforts to weaponize a particularly virulent smallpox strain, producing hundreds of tons of the virus that could be disseminated with bombs or ballistic missiles. Information about the Soviet biological weapons program had already been provided in 1989 by the defector Vladimir Pasechnik. Alibekov has testified before the U.S. Congress several times and has provided guidance to U.S. intelligence, policy, national security, and medical communities. 
He was the impetus behind the creation of a biodefense graduate program at the Schar School of Policy and Government at George Mason University, serving as Distinguished Professor of Medical Microbiology and the program's Director of Education. He also developed the plans for the university's biosafety level three (BSL-3) research facility and secured $40 million of grants from the federal and state governments for its construction. From 1993 to 1999, Alibek took on multiple R&D roles, including visiting scientist at the National Institutes of Health, researching novel antigenic, potentially immunogenic substances for the development of a tuberculosis vaccine; project manager at SRS Technologies, where he researched, analyzed and developed detailed synthesis reports regarding the biotechnological capabilities of foreign countries; and program manager at Battelle Memorial Institute, overseeing research projects in medical biotechnology, biosynthesis and fermentation equipment. In 1999, Alibek published an autobiographical account of his work in the Soviet Union and his defection. Commenting on the prospect of Iraq getting hold of smallpox or anthrax, Alibek said, "there is no doubt that Saddam Hussein has weapons of mass destruction." However, no biological weapons were later found in Iraq. Entrepreneur and research administrator Alibek was president, chief scientific officer, and chief executive officer at AFG Biosolutions, Inc. in Gaithersburg, Maryland, where he and his scientific team continued their development of advanced solutions for antimicrobial immunity. Motivated by the lack of affordable anti-cancer therapies available in Eastern Europe and Central Asia, AFG was using Alibek's biotechnology experience to plan, build, and manage a new pharmaceutical production facility designed specifically to address this problem. Alibek created a new pharmaceutical production company, MaxWell Biocorporation (MWB), in 2006 and served as its chief executive officer and president. Based in Washington, D.C., with several subsidiaries and affiliates in the U.S. and Ukraine, MWB's main stated goal was to create a new, large-scale, high-technology, ultra-modern pharmaceutical fill-and-finish facility in Ukraine. Off-patent generic pharmaceuticals produced at this site were intended to target severe oncological, cardiological, immunological, and chronic infectious diseases. Construction of the Boryspil facility began in April 2007 and was completed in March 2008; initial production was scheduled to begin in 2008. The stated intention was that high-quality pharmaceuticals would be produced and become an affordable source of therapy for millions of underprivileged people who had no therapeutic options. Alibek stepped down as President of MWB in the summer of 2008, shortly after the facility opened. Alibek's main research focus was developing novel forms of therapy for late-stage oncological diseases and other chronic degenerative pathologies and disorders. He focuses on the role of chronic viral and bacterial infections in causing age-related diseases and premature aging. Additionally, he develops and implements novel systemic immunotherapy methods for late-stage cancer patients. Throughout his career, Alibek has published nine research articles on the role of infectious diseases in cancer. 
Work in Kazakhstan In 2010, Alibek was invited to begin working in Kazakhstan as head of the Department of Chemistry and Biology at the School of Science and Technology of Nazarbayev University in Astana, where he was engaged in the development of anti-cancer and life-prolonging drugs; he was also chairman of the board of the Republican Scientific Center for Emergency Medical Care and headed the National Scientific Center for Oncology and Transplantation. During his stay, he published a number of articles in research journals and taught courses in various fields of biology and medicine. He focused on a possible role of chronic infections, metabolic disorders, and immunosuppression in cancer development. In 2011, he was awarded a prize from the Deputy Prime Minister for his contribution to the development of the educational system in Kazakhstan. In 2014, he was awarded a medal by the Minister of Education and Science of Kazakhstan for his contribution to research in Kazakhstan. He continued to work as a research and medical administrator and as a professor. However, after seven years, no significant scientific results had emerged from Alibek's work. During these seven years, Alibek received more than 1 billion tenge from the budget for the "New Systemic Therapy for Cancer Tumors" project he tried to implement. The promising Swedish technique remained a general concept, and no panacea for cancer treatment appeared. Three patent applications submitted by Alibek were rejected by the National Institute of Intellectual Property of the Ministry of Justice of the Republic of Kazakhstan for lack of novelty. In 2016, Alibek was chosen as one of the nominees in the "Science" category of the national project El Tulgasy, which was designed to select the most significant citizens of Kazakhstan who are associated with national achievements. More than 350,000 people voted in this project, and Alibek placed 10th in his category. COVID-19 Alibek has experience in vaccine development for pandemics. In 2006, his article on new principles for developing these vaccines was published in Future Medicine. In January 2020, Alibek issued a warning about COVID-19 and its potential as a global problem. His research on safe methods of protection against the virus ahead of a vaccine was later published in the journal Research Ideas and Outcomes (RIO). He also wrote two chapters on methods to protect against the COVID-19 pandemic in the book Defending Against Biothreats: What We Can Learn from the Coronavirus Pandemic to Enhance U.S. Defenses Against Pandemics and Biological Weapons. In 2021, Alibek offered a free seminar on antiviral biodefence in a world of epidemic uncertainties. Autism research Starting in 2007, Alibek began researching autism, based on his background as a board-certified oncologist and his own personal connection to the disorder through his daughter, Mary. He supports the idea that the disorder is the result of prenatal viral and bacterial infections. Multiple studies have been conducted with patients who have autism spectrum disorder, including a 2018-2019 study with 57 patients, a 2021-2023 study with 142 patients and a 2023 study with 32 patients. In addition, more than 1,000 children have been treated using the protocol. His patients are located predominantly in nations of the former Soviet Union, such as Ukraine, and he consults mainly using free telemedicine services. 
During these studies, specific inflammation markers, along with biochemical and neuropsychiatric parameters, were identified as an objective measure of improvement and symptom reduction. Alibek has published six studies in peer-reviewed journals about the causes and treatment of autism, and has one issued U.S. patent and three U.S. patent filings on his novel approach to treatment. Criticism In a September 2003 news release, Alibek and another professor suggested, based on their laboratory research, that the smallpox vaccination might increase a person's resistance to HIV. The work was rejected after peer review by the Journal of the American Medical Association and The Lancet and is no longer being pursued. According to smallpox expert and former White House science advisor Donald Henderson, "This is a theory that... does not hold up at all, and it does not make any sense from a biologic point of view...This idea...was straight off the wall. I would put no credence in it at all." In 2010, an article coauthored by Alibek appeared in the scientific journal BMC Immunology that outlined the results of their research showing that prior immunization with the vaccine Dryvax may confer resistance to HIV replication. Alibek has also promoted "Dr. Ken Alibek's Immune System Support Formula," a dietary supplement sold over the Internet. It contains vitamins, minerals, and a proprietary bacterial mix that purportedly "bolsters the immune system". Personal life Alibek has a wife and five children (two sons and three daughters); one of his daughters is autistic. Publications Books Alibek, Ken and Steven Handelman (1999), Biohazard: The Chilling True Story of the Largest Covert Biological Weapons Program in the World – Told from Inside by the Man Who Ran It, Random House. "The Anthrax Vaccine: Is It Safe? Does It Work?" (2002), Reviewer. National Academy Press, Washington, D.C., Institute of Medicine. Biological Threats and Terrorism: Assessing the Science and Response Capabilities (2002), Workshop Summary, Contributor. National Academy Press, Washington, D.C., Institute of Medicine. Weinstein, R.S. and K. Alibek (2003), Biological and Chemical Terrorism: A Guide for Healthcare Providers and First Responders, Thieme Medical Publishing, New York. Alibek, K., et al. (2003), Biological Weapons, Bio-Prep, Louisiana. Fong, I. and K. Alibek (2005), Bioterrorism and Infectious Agents: A New Dilemma for the 21st Century, Springer. Fong, I. and K. Alibek (2006), New and Evolving Infections of the 21st Century, Springer. Fleitz, Alibek, Bryen, Rosett, Chang, DeSutter, Elliott, Faddis, Geraghty, Gibson (2020), Defending Against Biothreats: What We Can Learn from the Coronavirus Pandemic to Enhance U.S. Defenses Against Pandemics and Biological Weapons Book chapters "Firepower in the Lab: Automation in the Fight Against Infectious Diseases and Bioterrorism" (2001), Chapter 15 of Biological Weapons: Past, Present, and Future, National Academy Press, Washington, D.C., Institute of Medicine. Jane's Chem-Bio Handbook (2002), Second Edition, F. R. Sidell, W. C. Patrick, T. R. Dashiell, K. Alibek, Jane's Information Group, Alexandria, VA. K. Alibek, C. Lobanova, "Modulation of Innate Immunity to Protect Against Biological Weapon Threat" (2006). In: Microorganisms and Bioterrorism, Springer. Op-Eds The New York Times "Russia's Deadly Expertise", March 27, 1998. "Smallpox Could Still Be a Danger", May 24, 1999. The Wall Street Journal "Russia Retains Biological Weapons Capability", February 2000. 
"Bioterror: A Very Real Threat", October, 2001. The Washington Post "Anthrax under the Microscope", with Matthew Meselson, November 5, 2002. Selected Congressional Testimony Testimony before the Joint Economic Committee, May 1998: "Terrorist and Intelligence Operations: Potential Impact on the US Economy" Testimony before the Senate Select Committee on Intelligence, June, 1999 Testimony before the House Armed Services Committee, October, 1999 Testimony before the House Armed Services Committee, May, 2000 Testimony before the House Subcommittee on National Security, Veterans Affairs, and International Relations of the Committee on Government Reform, October 12, 2001: "Combating Terrorism: Assessing the Threat of a Biological Weapons Attack", House Serial No. 107-103 Testimony before the House Committee on International Relations, December, 2001: "Russia, Iraq, and Other Potential Sources of Anthrax, Smallpox, and Other Bioterrorist Weapons" Testimony before the Senate Subcommittee on Departments of Labor, Health and Human Services, Education, and Related Agencies of the Committee on Appropriations, November, 2001 Testimony before the Subcommittee on Prevention of Nuclear and Biological Attack, Committee on Homeland Security, US House of Representatives, July 28, 2005: "Implementing a National Biodefense Strategy" House Permanent Select Committee on Intelligence, March 1999 Biological Warfare Threats Testimony before the House Subcommittee on Prevention of Nuclear and Biological Attack, July 13, 2005: "Engineering Bio-terror Agents: Lessons Learned from the Offensive US and Russian Biological Weapons Programs" Awards, Presentations and Distinctions 2014 Kazakhstan Government's Prime Minister Award “For Distinguished Contribution to Science” 2011 Kazakhstan Government's Vice Prime Minister Award “For the Development of Kazakhstan Education System” 2007 “Panacea Award” for Innovations in Medical and Pharmaceutical Industries, Kyiv (Ukraine) 2005 Lecturer for on a “Russian-American Security Program of Harvard University’s John Kennedy Center for Government Studies 2005 Senior Fellow, Center for Advanced Defense Studies, Washington DC 2004 Outstanding Faculty Member Award, George Mason University 2002 Business Forward Magazine Award: “Deals of the Year” for One of the Largest Federal Research Contracts for Small Businesses 2000 (Davos, Switzerland), 2002 (New York) Invited speaker to the World Economic Forum 1994 Congressional Award: Bronze medal named after Albane W. Barkley - Awarded by the U.S. Government in Recognition of Distinguished Public Service 1989 A Colonel of Medical Services, awarded by the USSR's Minister of Defense 1983 Medal “For Combat Merits” by the USSR's Minister of Defense References Further reading "Interview Dr. Ken Alibek", Journal of Homeland Security, September 18, 2000 External links 1950 births Living people 20th-century American biologists 2001 anthrax attacks American people of Kazakhstani descent Kazakhstani emigrants to the United States Kazakhstani scientists People from Almaty Region People related to biological warfare Siberian State Medical University alumni Soviet biological weapons program Soviet microbiologists Soviet military doctors The Heritage Foundation
Ken Alibek
[ "Biology" ]
4,134
[ "People related to biological warfare", "Biological warfare" ]
1,573,991
https://en.wikipedia.org/wiki/Hilbert%27s%20fourth%20problem
In mathematics, Hilbert's fourth problem in the 1900 list of Hilbert's problems is a foundational question in geometry. In one statement derived from the original, it was to find, up to an isomorphism, all geometries that have an axiomatic system of the classical geometry (Euclidean, hyperbolic and elliptic), with those axioms of congruence that involve the concept of the angle dropped, and the "triangle inequality", regarded as an axiom, added. If one assumes the continuity axiom in addition, then, in the case of the Euclidean plane, we come to the problem posed by Jean Gaston Darboux: "To determine all the calculus of variation problems in the plane whose solutions are all the plane straight lines." There are several interpretations of the original statement of David Hilbert. Nevertheless, a solution was sought, with the German mathematician Georg Hamel being the first to contribute to the solution of Hilbert's fourth problem. A recognized solution was given by Soviet mathematician Aleksei Pogorelov in 1973. In 1976, Armenian mathematician Rouben V. Ambartzumian proposed another proof of Hilbert's fourth problem. Original statement Hilbert discusses the existence of non-Euclidean geometry and non-Archimedean geometry ...a geometry in which all the axioms of ordinary euclidean geometry hold, and in particular all the congruence axioms except the one of the congruence of triangles (or all except the theorem of the equality of the base angles in the isosceles triangle), and in which, besides, the proposition that in every triangle the sum of two sides is greater than the third is assumed as a particular axiom. Due to the idea that a 'straight line' is defined as the shortest path between two points, he mentions how congruence of triangles is necessary for Euclid's proof that a straight line in the plane is the shortest distance between two points. He summarizes as follows: The theorem of the straight line as the shortest distance between two points and the essentially equivalent theorem of Euclid about the sides of a triangle, play an important part not only in number theory but also in the theory of surfaces and in the calculus of variations. For this reason, and because I believe that the thorough investigation of the conditions for the validity of this theorem will throw a new light upon the idea of distance, as well as upon other elementary ideas, e. g., upon the idea of the plane, and the possibility of its definition by means of the idea of the straight line, the construction and systematic treatment of the geometries here possible seem to me desirable. Flat metrics Desargues's theorem: If two triangles lie on a plane such that the lines connecting corresponding vertices of the triangles meet at one point, then the three points, at which the prolongations of three pairs of corresponding sides of the triangles intersect, lie on one line. The necessary condition for solving Hilbert's fourth problem is the requirement that a metric space that satisfies the axioms of this problem should be Desarguesian, i.e.: if the space is of dimension 2, then Desargues's theorem and its inverse should hold; if the space is of dimension greater than 2, then any three points should lie on one plane. 
For Desarguesian spaces Georg Hamel proved that every solution of Hilbert's fourth problem can be represented in a real projective space $RP^n$ or in a convex domain of $RP^n$ if one determines the congruence of segments by equality of their lengths in a special metric for which the lines of the projective space are geodesics. Metrics of this type are called flat or projective. Thus, the solution of Hilbert's fourth problem was reduced to the solution of the problem of constructive determination of all complete flat metrics. Hamel solved this problem under the assumption of high regularity of the metric. However, as simple examples show, the class of regular flat metrics is smaller than the class of all flat metrics. The axioms of the geometries under consideration imply only continuity of the metrics. Therefore, to solve Hilbert's fourth problem completely it is necessary to determine constructively all the continuous flat metrics. Prehistory of Hilbert's fourth problem Before 1900, the Cayley–Klein model of Lobachevsky geometry in the unit disk was already known; in it, geodesic lines are chords of the disk and the distance between points is defined as the logarithm of the cross-ratio of a quadruple. For two-dimensional Riemannian metrics, Eugenio Beltrami (1835–1900) proved that the flat metrics are the metrics of constant curvature. For multidimensional Riemannian metrics this statement was proved by E. Cartan in 1930. In 1890, for solving problems in the theory of numbers, Hermann Minkowski introduced a notion of space that is nowadays called a finite-dimensional Banach space. Minkowski space Let $F_0$ be a compact convex hypersurface in a Euclidean space, containing the origin $O$ in its interior; $F_0$ serves as the indicatrix (unit sphere) of the metric, and it can be irregular. The length of a vector $OA$ is defined as the factor by which $OA$ must be divided in order to bring its endpoint onto $F_0$. A space with this metric is called a Minkowski space, and the metric so defined is flat. Finsler spaces Let $M$ and $TM$ be a smooth finite-dimensional manifold and its tangent bundle, respectively. A function $F: TM \to [0, +\infty)$ is called a Finsler metric if it is smooth on $TM \setminus \{0\}$ and, for any point $x \in M$, the restriction of $F$ to the tangent space $T_xM$ is a Minkowski norm; $(M, F)$ is then a Finsler space. Hilbert's geometry Let $U$ be a bounded open convex set with boundary of class $C^2$ and positive normal curvatures. Similarly to the Lobachevsky space, the hypersurface $\partial U$ is called the absolute of Hilbert's geometry. For $x, y \in U$, let $p$ and $q$ be the points at which the line through $x$ and $y$ meets $\partial U$, with $p$ on the side of $x$ and $q$ on the side of $y$; Hilbert's distance is then defined via the cross-ratio by $d_U(x, y) = \tfrac{1}{2} \ln \frac{|qx|\,|py|}{|qy|\,|px|}$. The distance induces the Hilbert–Finsler metric on $U$. The metric is symmetric and flat. In 1895, Hilbert introduced this metric as a generalization of the Lobachevsky geometry. If the hypersurface $\partial U$ is an ellipsoid, then we have the Lobachevsky geometry. Funk metric In 1930, Funk introduced a non-symmetric metric. It is defined in a domain bounded by a closed convex hypersurface and is also flat. σ-metrics Sufficient condition for flat metrics Georg Hamel was the first to contribute to the solution of Hilbert's fourth problem. He proved the following statement. Theorem. A regular Finsler metric $F(x, y)$ is flat if and only if it satisfies the conditions $\frac{\partial^2 F}{\partial x^i \partial y^j} = \frac{\partial^2 F}{\partial x^j \partial y^i}$. Crofton formula Consider the set of all oriented lines on a plane. Each line is defined by the parameters $p$ and $\varphi$, where $p$ is the distance from the origin to the line, and $\varphi$ is the angle between the line and the x-axis. Then the set of all oriented lines is homeomorphic to a circular cylinder of radius 1 with the area element $dS = dp\,d\varphi$. Let $\gamma$ be a rectifiable curve on a plane. 
Then the length of $\gamma$ is $L = \tfrac{1}{4} \iint_{\Omega} n(p, \varphi)\,dp\,d\varphi$, where $\Omega$ is the set of lines that intersect the curve $\gamma$, and $n(p, \varphi)$ is the number of intersections of the line with $\gamma$. Crofton proved this statement in 1870. A similar statement holds for a projective space. Blaschke–Busemann measure In 1966, in his talk at the International Mathematical Congress in Moscow, Herbert Busemann introduced a new class of flat metrics. On the set of lines of the projective plane he introduced a completely additive non-negative measure $\sigma$, which satisfies the following conditions: $\sigma(\tau_P) = 0$, where $\tau_P$ is the set of straight lines passing through a point $P$; $\sigma(\tau_X) > 0$, where $\tau_X$ is the set of straight lines passing through some set $X$ that contains a straight line segment; $\sigma$ is finite. If we consider a $\sigma$-metric in an arbitrary convex domain $\Omega$ of a projective space, then condition 3) should be replaced by the following: for any set $H$ such that $H$ is contained in $\Omega$ and the closure of $H$ does not intersect the boundary of $\Omega$, the inequality $\sigma(\tau_H) < \infty$ holds. Using this measure, the $\sigma$-metric on $\Omega$ is defined by $\rho(x, y) = \sigma(\tau_{[x, y]})$, where $\tau_{[x, y]}$ is the set of straight lines that intersect the segment $[x, y]$. The triangle inequality for this metric follows from Pasch's theorem. Theorem. The $\sigma$-metric on $\Omega$ is flat, i.e., the geodesics are the straight lines of the projective space. But Busemann was far from the idea that $\sigma$-metrics exhaust all flat metrics. He wrote, "The freedom in the choice of a metric with given geodesics is for non-Riemannian metrics so great that it may be doubted whether there really exists a convincing characterization of all Desarguesian spaces". Two-dimensional case Pogorelov's theorem The following theorem was proved by Pogorelov in 1973. Theorem. Any two-dimensional continuous complete flat metric is a $\sigma$-metric. Thus Hilbert's fourth problem for the two-dimensional case was completely solved. A consequence of this is that if you glue two copies of the same planar convex shape boundary to boundary, with an angle twist between them, you obtain a 3D object without crease lines, the two faces being developable. Ambartsumian's proofs In 1976, Ambartsumian proposed another proof of Hilbert's fourth problem. His proof uses the fact that in the two-dimensional case the whole measure can be restored by its values on biangles, and thus be defined on triangles in the same way as the area of a triangle is defined on a sphere. Since the triangle inequality holds, it follows that this measure is positive on non-degenerate triangles and is determined on all Borel sets. However, this structure cannot be generalized to higher dimensions because of Hilbert's third problem, solved by Max Dehn. In the two-dimensional case, polygons with the same area are scissors-congruent. As was shown by Dehn, this is not true in higher dimensions. Three-dimensional case For the three-dimensional case Pogorelov proved the following theorem. Theorem. Any three-dimensional regular complete flat metric is a $\sigma$-metric. However, in the three-dimensional case $\sigma$-measures can take either positive or negative values. The necessary and sufficient conditions for the regular metric defined by the set function $\sigma$ to be flat are the following three conditions: the value of $\sigma$ on any plane equals zero, the value of $\sigma$ on any cone is non-negative, and the value of $\sigma$ is positive if the cone contains interior points. Moreover, Pogorelov showed that any complete continuous flat metric in the three-dimensional case is the limit of regular $\sigma$-metrics with uniform convergence on any compact sub-domain of the metric's domain. He called them generalized $\sigma$-metrics. 
Thus Pogorelov could prove the following statement. Theorem. In the three-dimensional case any complete continuous flat metric is a $\sigma$-metric in the generalized sense. Busemann, in his review of Pogorelov's book "Hilbert's Fourth Problem", wrote, "In the spirit of the time Hilbert restricted himself to n = 2, 3 and so does Pogorelov. However, this has doubtless pedagogical reasons, because he addresses a wide class of readers. The real difference is between n = 2 and n>2. Pogorelov's method works for n>3, but requires greater technicalities". Multidimensional case The multi-dimensional case of the fourth Hilbert problem was studied by Szabo. In 1986, he proved, as he wrote, the generalized Pogorelov theorem. Theorem. Each n-dimensional Desarguesian space of sufficiently high smoothness class is generated by the Blaschke–Busemann construction. A $\sigma$-measure that generates a flat metric has the following properties: the $\sigma$-measure of hyperplanes passing through a fixed point is equal to zero; the $\sigma$-measure of the set of hyperplanes intersecting two segments [x, y], [y, z], where x, y and z are not collinear, is positive. An example was also given of a flat metric not generated by the Blaschke–Busemann construction. Szabo described all continuous flat metrics in terms of generalized functions. Hilbert's fourth problem and convex bodies Hilbert's fourth problem is also closely related to the properties of convex bodies. A convex polyhedron is called a zonotope if it is the Minkowski sum of segments. A convex body which is a limit of zonotopes in the Blaschke–Hausdorff metric is called a zonoid. For zonoids, the support function is represented by $h(x) = \int_{S^{n-1}} |\langle x, u \rangle| \, d\mu(u) \quad (1)$, where $\mu$ is an even positive Borel measure on the sphere $S^{n-1}$. The Minkowski space is generated by the Blaschke–Busemann construction if and only if the support function of the indicatrix has the form (1), where $\mu$ is an even, but not necessarily positive, Borel measure. The bodies bounded by such hypersurfaces are called generalized zonoids. The octahedron $|x_1| + |x_2| + |x_3| \le 1$ in the Euclidean space $E^3$ is not a generalized zonoid. From the above statement it follows that the flat metric of the Minkowski space whose unit ball is the octahedron is not generated by the Blaschke–Busemann construction. Generalizations of Hilbert's fourth problem A correspondence has been found between planar n-dimensional Finsler metrics and special symplectic forms on the Grassmann manifold. Periodic solutions of Hilbert's fourth problem have also been considered: Let (M, g) be a compact locally Euclidean Riemannian manifold. Suppose that a Finsler metric on M with the same geodesics as in the metric g is given. Then the Finsler metric is the sum of a locally Minkowski metric and a closed 1-form. Let (M, g) be a compact symmetric Riemannian space of rank greater than one. If F is a symmetric Finsler metric whose geodesics coincide with the geodesics of the Riemannian metric g, then (M, F) is a symmetric Finsler space. The analogue of this theorem for rank-one symmetric spaces has not been proven yet. Another exposition of Hilbert's fourth problem can be found in the work of Paiva. Unsolved problems Hilbert's fourth problem for non-symmetric Finsler metrics has not yet been solved. The description of the metrics for which k-planes minimize the k-area has not been given (Busemann). References Further reading Foundations of geometry Geometry problems
Hilbert's fourth problem
[ "Mathematics" ]
2,953
[ "Geometry problems", "Mathematical axioms", "Hilbert's problems", "Foundations of geometry", "Geometry", "Mathematical problems" ]
1,574,029
https://en.wikipedia.org/wiki/Robin%20Hill%20%28biochemist%29
Robert Hill FRS (2 April 1899 – 15 March 1991), known as Robin Hill, was a British plant biochemist who, in 1939, demonstrated the 'Hill reaction' of photosynthesis, proving that oxygen is evolved during the light-requiring steps of photosynthesis. He also made significant contributions to the development of the Z-scheme of oxygenic photosynthesis. Education and early life Hill was born in New Milverton, a suburb of Leamington Spa, Warwickshire. He was educated at Bedales School, where he became interested in biology and astronomy (he published a paper on sunspots in 1917), and Emmanuel College, Cambridge, where he read Natural Sciences. During the First World War he served in the Anti-gas Department of the Royal Engineers. Career In 1922, he joined the Department of Biochemistry at Cambridge, where he was directed to research haemoglobin. He published a number of papers on haemoglobin, and in 1926 he began to work with David Keilin on the haem-containing protein cytochrome c. In 1932, he commenced work on plant biochemistry, focusing on photosynthesis and the oxygen evolution of chloroplasts, leading to the discovery of the 'Hill reaction'. From 1943, Hill's work was funded by the Agricultural Research Council (ARC), although he continued to work in the Cambridge Biochemistry Department. Hill continued to receive most of his recognition for his earlier work on photosynthesis, and beginning in the late 1950s, his work concentrated on the energetics of photosynthesis. In collaboration with Fay Bendall, he made his second great contribution to photosynthesis research with the discovery of the 'Z scheme' of electron transport. Hill retired from the ARC in 1966, although his research at Cambridge continued until his death in 1991. In his later years Hill worked on the issue of the application of the second law of thermodynamics to photosynthesis. He was an expert on natural dyes and cultivated plants such as madder and woad. He painted watercolours using pigments he himself extracted. In the 1920s, he developed a fish-eye camera and used it to take stereoscopic whole-sky images, recording cloud patterns in three dimensions. Awards and honours The Robert Hill Institute at the University of Sheffield, from which he received an honorary degree in 1990, was named after him. Hill was elected a Fellow of the Royal Society (FRS) in 1946. He was awarded the Royal Medal in 1963, and the Copley Medal in 1987. References 1899 births 1991 deaths People from Leamington Spa Alumni of Emmanuel College, Cambridge English biochemists Fellows of the Royal Society Foreign associates of the National Academy of Sciences Recipients of the Copley Medal Researchers of photosynthesis Royal Medal winners People educated at Bedales School
Robin Hill (biochemist)
[ "Chemistry" ]
580
[ "Biochemists", "Photochemists", "Photosynthesis", "Researchers of photosynthesis" ]
1,574,036
https://en.wikipedia.org/wiki/ICL%20Direct%20Machine%20Environment
Direct Machine Environment, abbreviated DME, was a mainframe environment for the ICL 2900 Series of computing systems from International Computers Limited that was developed in the 1970s. DME was more or less an ICL 1900 order code processor implemented in microcode, which permitted the ICL 1900 series executive, operating systems and program libraries to operate on the ICL 2900 series. Reason for Development At the time the 2900 series was introduced, most companies that had computers had large teams of programmers to write their applications. DME was developed so that customers could buy the new hardware and run their 1900 or System 4 applications whilst they developed their replacement VME applications. This led to some users running DME and VME alternately on the same machine for some years. Unfortunately, it also led to situations where development teams were left waiting for machine time to run their new applications. This, and the fact that some users were not moving to the new system, led ICL to develop a system called Concurrent Machine Environment (CME), under which VME ran DME as a subsystem, enabling 1900 and System 4 applications to be run on a 2900 or Series 39 machine alongside VME applications. References Direct Machine Environment
ICL Direct Machine Environment
[ "Technology" ]
239
[ "Operating system stubs", "Computing stubs" ]
1,574,095
https://en.wikipedia.org/wiki/Hamilton%20O.%20Smith
Hamilton Othanel Smith (born August 23, 1931 in New York) is an American microbiologist and Nobel laureate. Smith graduated from University Laboratory High School of Urbana, Illinois. He attended the University of Illinois at Urbana-Champaign, but in 1950 transferred to the University of California, Berkeley, where he earned his B.A. in Mathematics in 1952. He received his medical degree from Johns Hopkins School of Medicine in 1956. Between 1956 and 1957 Smith worked for the Washington University in St. Louis Medical Service. In 1975, he was awarded a Guggenheim Fellowship, which he spent at the University of Zurich. In 1970, Smith and Kent W. Wilcox discovered the first type II restriction enzyme, which is now known as HindII. Smith went on to discover DNA methylases that constitute the other half of the bacterial host restriction and modification systems, as hypothesized by Werner Arber of Switzerland. He was awarded the Nobel Prize in Physiology or Medicine in 1978 for discovering type II restriction enzymes, with Werner Arber and Daniel Nathans as co-recipients. He later became a leading figure in the nascent field of genomics, when in 1995 he and a team at The Institute for Genomic Research sequenced the first bacterial genome, that of Haemophilus influenzae. H. influenzae was the same organism in which Smith had discovered restriction enzymes in the late 1960s. He subsequently played a key role in the sequencing of many of the early genomes at The Institute for Genomic Research, and in the assembly of the human genome at Celera Genomics, which he joined when it was founded in 1998. More recently, he has directed a team at the J. Craig Venter Institute that works towards creating a partially synthetic bacterium, Mycoplasma laboratorium. In 2003 the same group synthetically assembled the genome of a virus, the Phi X 174 bacteriophage. Smith is scientific director of privately held Synthetic Genomics, which was founded in 2005 by Craig Venter to continue this work. Synthetic Genomics is working to produce biofuels on an industrial scale using recombinant algae and other microorganisms. References This article incorporates CC-BY-2.5 text from the reference Further reading External links American microbiologists Members of the United States National Academy of Sciences 21st-century American biologists Phage workers Nobel laureates in Physiology or Medicine American Nobel laureates University of California, Berkeley alumni Johns Hopkins University alumni 1931 births Living people University Laboratory High School (Urbana, Illinois) alumni Biotechnologists Human Genome Project scientists Washington University School of Medicine faculty
Hamilton O. Smith
[ "Engineering" ]
529
[ "Human Genome Project scientists" ]
1,574,412
https://en.wikipedia.org/wiki/Norman%20L.%20Bowen
Norman Levi Bowen FRS (June 21, 1887 – September 11, 1956) was a Canadian geologist. Bowen "revolutionized experimental petrology and our understanding of mineral crystallization". Beginning geology students are familiar with Bowen's reaction series, depicting how different minerals crystallize under varying pressures and temperatures. Career Bowen conducted experimental research at the Geophysical Laboratory of the Carnegie Institution of Washington from 1912 to 1937. He published The Evolution of the Igneous Rocks in 1928. This book set the stage for a geochemical and geophysical foundation for the study of rocks and minerals. Personal life Born in Kingston, Ontario, Canada, Bowen married Mary Lamont in 1911, and they had a daughter, Catherine. Awards and honours Bowen was elected to the American Academy of Arts and Sciences in 1921, the American Philosophical Society in 1930, and the United States National Academy of Sciences in 1935. He was awarded the Penrose Medal of the Geological Society of America in 1941 and served as their president in 1945. He was elected a Foreign Member of the Royal Society (ForMemRS) in 1949. The Norman L. Bowen Award, awarded annually by the American Geophysical Union, is named in his honour. The astronauts of Apollo 17 named a small lunar crater after him. References Further reading Norman L. Bowen, science.ca Profile. Available from: http://www.science.ca/scientists/scientistprofile.php?pID=271 Yoder, H. S., Jr. Norman L. Bowen: The Experimental Approach to Petrology. GSA Today 5 (1998): 10–11. Available: https://web.archive.org/web/20130501133735/http://www.gsahist.org/gsat/gt98may10_11.pdf Yoder, H. S., Jr. Norman L. Bowen (1887–1956), MIT Class of 1912, First Predoctoral Fellow of the Geophysical Laboratory. Earth Sciences History 1 (1992): 45–55. Available: https://web.archive.org/web/20050306092622/http://vgp.agu.org/bowen_paper/bowen_paper.html Norman Levi Bowen Papers, 1907–1980 (Bulk 1907–1955), Geophysical Laboratory, Carnegie Institution of Washington, D.C., Finding aid written by: Jennifer Snyder, March 2004, PDF available: https://web.archive.org/web/20050222182019/http://hq.ciw.edu/legacy/findingaids/bowen.pdf Strickler, Mike, Ask GeoMan..., What is Bowen's Reaction Series?, http://homework.uoregon.edu/mstrick/AskGeoMan/geoQuerry32.html https://web.archive.org/web/20050325182852/http://carnegieinstitution.org/legacy/findingaids/bowen.html Bowen bibliography site https://web.archive.org/web/20051003223226/http://www.agu.org/inside/awardees.html#Bowen List of Bowen Award winners Canadian geochemists Petrologists Fellows of the Royal Society of Canada Penrose Medal winners 1887 births 1956 deaths People from Kingston, Ontario Wollaston Medal winners Foreign members of the Royal Society Members of the United States National Academy of Sciences Presidents of the Geological Society of America Canadian fellows of the Royal Society Members of the American Philosophical Society
Norman L. Bowen
[ "Chemistry" ]
751
[ "Geochemists", "Canadian geochemists" ]
1,574,433
https://en.wikipedia.org/wiki/Homesickness
Homesickness is the distress caused by being away from home. Its cognitive hallmark is preoccupying thoughts of home and attachment objects. Sufferers typically report a combination of depressive and anxious symptoms, withdrawn behavior and difficulty focusing on topics unrelated to home. Homesickness is experienced by both children and adults; the affected person may be taking a short trip to a nearby place, such as summer camp, or may be taking a long trip or have moved to a different country. In its mild form, homesickness prompts the development of coping skills and motivates healthy attachment behaviors, such as renewing contact with loved ones. Nearly all people miss something about home when they are away, making homesickness a nearly universal experience. However, intense homesickness can be painful and debilitating. Historical references Homesickness is an ancient phenomenon, mentioned in both the Old Testament books of Exodus and Psalm 137:1 ("By the rivers of Babylon, there we sat down, yea, we wept, when we remembered Zion") as well as Homer's Odyssey, whose opening scene features Athena arguing with Zeus to bring Odysseus home because he is homesick ("...longing for his wife and his homecoming..."). The Greek physician Hippocrates (c. 460–377 BC) believed that homesickness (also called "heimveh", from German "Heimweh", or a "nostalgic reaction") was caused by a surfeit of black bile in the blood. In more recent history, homesickness ("Heimweh") is first mentioned specifically in connection with Swiss people spending long periods abroad in Europe, in a document dating back to 1651. This was a normal phenomenon among the many Swiss mercenaries serving different countries and rulers across Europe at that time. It was not uncommon for them to stay away from home for many years and, if lucky enough, to return home alive. The phenomenon was at first thought to affect only Swiss people; this view was later revised, probably because the large migration streams across Europe revealed the same symptoms in others, and homesickness thus found its way into general German medical literature in the 19th century. American contemporary histories, such as Susan J. Matt's Homesickness: An American History, describe experiences of homesickness in colonists, immigrants, gold miners, soldiers, explorers and others spending time away from home. First understood as a brain lesion, homesickness is now known to be a form of normative psychopathology that reflects the strength of a person's attachment to home, native culture and loved ones, as well as their ability to regulate their emotions and adjust to novelty. Cross-cultural research, with populations as diverse as refugees and boarding school students, suggests considerable agreement on the definition of homesickness. Additional historical perspectives on homesickness and place attachment can be found in books by van Tilburg & Vingerhoets, Matt, and Williams. Diagnosis and epidemiology Whereas separation anxiety disorder is characterized by "inappropriate and excessive fear or anxiety concerning separation from those to whom the individual is attached", symptoms of homesickness are most prominent after a separation and include both depression and anxiety. In DSM terms, homesickness may be related to separation anxiety disorder, but it is perhaps best categorized as either an adjustment disorder with mixed anxiety and depressed mood (309.28) or, for immigrants and foreign students, as a V62.4, Acculturation Difficulty. 
As noted above, researchers use the following definition: "Homesickness is the distress or impairment caused by an actual or anticipated separation from home. Its cognitive hallmark is preoccupying thoughts of home and attachment objects." Recent pathogenic models support the possibility that homesickness reflects both insecure attachment and a variety of emotional and cognitive vulnerabilities, such as little previous experience away from home and negative attitudes about the novel environment. The prevalence of homesickness varies and depends on the population studied and the way homesickness is measured. One way to conceptualize homesickness prevalence is as a function of severity. Nearly all people miss something about home when they are away, so the absolute prevalence of homesickness is close to 100%, mostly in a mild form. Roughly 20% of university students and children at summer camp rate themselves at or above the midpoint on numerical rating scales of homesickness severity. 5–7% of students and campers report intense homesickness associated with severe symptoms of anxiety and depression. In adverse or painful environments, such as the hospital or the battlefield, intense homesickness is far more prevalent. In one study, 50% of hospitalized children scored themselves at or above the midpoint on a numerical homesickness intensity scale (compared to 20% of children at summer camp). Soldiers report even more intense homesickness, sometimes to the point of suicidal misery. Aversive environmental elements, such as the trauma associated with war, exacerbate homesickness and other mental health problems. Homesickness is a normative pathology that can take on clinical relevance in its moderate and severe forms. Risk and protective factors Risk factors (constructs which increase the likelihood or intensity of homesickness) and protective factors (constructs that decrease the likelihood or intensity of homesickness) vary by population. For example, the environmental stressors associated with a ship at sea, a hospital, a military boot camp or a foreign country may exacerbate homesickness and complicate treatment. Generally speaking, however, risk and protective factors transcend age and environment. Risk factors The risk factors for homesickness fall into five categories: experience, personality, family, attitude and environment. More is known about some of these factors in adults (especially personality factors) because more homesickness research has been performed with older populations. However, a growing body of research is elucidating the etiology of homesickness in younger populations, including children at summer camp, hospitalized children and students. Experience factors: Younger age; little previous experience away from home (for which age can be a proxy); little or no previous experience in the novel environment; little or no previous experience venturing out without primary caregivers. Attitude factors: The belief that homesickness will be strong; negative first impressions and low expectations for the new environment; perceived absence of social support; high perceived demands (e.g., on academic, vocational or sports performance); great perceived distance from home. Personality factors: Insecure attachment relationship with primary caregivers; low perceived control over the timing and nature of the separation from home; anxious or depressed feelings in the months prior to the separation; low self-directedness; high harm avoidance; rigidity; a wishful-thinking coping style. 
Family factors: Low decision control (e.g., caregivers forcing young children to spend time away from home against their wishes).

Protective factors
Factors which mitigate the prevalence or intensity of homesickness are essentially the inverse of the risk factors cited above. Effective coping (reviewed in the following section) also diminishes the intensity of homesickness over time. Prior to a separation, however, key protective factors can be identified. Positive adjustment to separation from home is generally associated with the following factors:

Experience factors: Older age; substantial previous experience away from home (for which age can be a proxy); previous experience in the novel environment; previous experience venturing out without primary caregivers.
Attitude factors: The belief that homesickness will be mild; positive first impressions and high expectations for the new environment; perceptions of social support; low perceived demands (e.g., on academic or vocational performance); short perceived distance from home.
Personality factors: Secure attachment relationship with primary caregivers; high perceived control over the timing and nature of the separation from home; good mental health in the months prior to the separation; high self-directedness; adventure-seeking; flexibility; an instrumental coping style.
Family factors: High decision control (e.g., caregivers including a young person in the decision to spend time away from home); individuals making their own choice about military service; supportive caregiving; caregivers who express confidence and optimism about the separation (e.g., "Have a great time away. I know you'll do great.").
Environmental factors: Low cultural contrast (e.g., same language, similar customs, familiar food in the new environment); physical and emotional safety; few changes to familiar daily schedule; plenty of information about the new place prior to relocation; feeling welcome and accepted in the new place.

Theories of coping
Many psychologists argue that research into the causes of homesickness is valuable for three reasons. First, homesickness is experienced by millions of people who spend time away from home (see McCann, 1941, for an early review), including children at boarding schools, residential summer camps and hospitals. Second, severe homesickness is associated with significant distress and impairment. There is evidence that homesick persons present with non-traumatic physical ailments significantly more often than their non-homesick peers. Homesick boys and girls complain about somatic problems and exhibit more internalising and externalising behaviour problems than their non-homesick peers. Homesick first-year college students are three times more likely to drop out of school than their non-homesick peers. Other data have pointed to concentration and academic problems in homesick students. Maladjustment to separation from home has also been documented in hospitalized young people and is generally associated with slower recovery. See Thurber & Walton (2012) for a review. Third, learning more about how people cope with homesickness is a helpful guide to designing treatment programs. By complementing existing theories of depression, anxiety and attachment, a better theoretical understanding of homesickness can shape applied interventions. Among the most relevant theories that could shape interventions are those concerned with Learned Helplessness and Control Beliefs.
Learned helplessness predicts that persons who develop a belief that they cannot influence or adjust to their circumstance of separation from home will become depressed and make fewer attempts to change that circumstance. Control beliefs theory predicts that negative affect is most likely in persons who perceive personal incompetence in the separation environment (e.g., poor social skills at a summer camp or university) and who perceive contingency uncertainty (e.g., uncertainty about whether friendly behavior will garner friends). Although these are not the only broad etiologic theories that inform homesickness, both theories hinge on control, the perception of which "reflects the fundamental human need for competence" (Skinner, 1995, p. 8). This is particularly relevant to coping, because people's choice of how to respond to a stressor hinges partly on their perception of a stressor's controllability. An equally important coping factor is social connection, which for many people is the antidote to homesickness. As the results of several studies have suggested, social connection is a powerful mediator of homesickness intensity.

Ways of coping
The most effective way of coping with homesickness is mixed and layered. Mixed coping is that which involves both primary goals (changing circumstances) and secondary goals (adjusting to circumstances). Layered coping is that which involves more than one method. This kind of sophisticated coping is learned through experience, such as brief periods away from home without parents. As an example of mixed and layered coping, one study revealed the following method-goal combinations to be the most frequent and effective ways for boys and girls:

Doing something fun (observable method) to forget about being homesick (secondary goal)
Thinking positively and feeling grateful (unobservable method) to feel better (secondary goal)
Simply changing feelings and attitudes (unobservable method) to be happy (secondary goal)
Reframing time (unobservable method) in order to perceive the time away as shorter (secondary goal)
Renewing a connection with home, through letter writing (observable method), to feel closer to home (secondary goal)
Talking with someone (observable method) who could provide support and help them make new friends (primary goal)

Sometimes, people will engage in wishful thinking, attempt to arrange a shorter stay or (rarely) break rules or act violently in order to be sent home. These ways of coping are rarely effective and can produce unintended negative side effects.

See also
Nostalgia
Third culture kid
Sehnsucht
Hiraeth
Saudade
Separation anxiety disorder
Culture shock

References

External links
"Preventing and Treating Homesickness" – American Academy of Pediatrics clinical report published in the journal "Pediatrics"

Emotions Pediatrics Travel Adjustment disorders Depression (mood)
Homesickness
[ "Physics" ]
2,612
[ "Physical systems", "Transport", "Travel" ]
1,574,457
https://en.wikipedia.org/wiki/BioBlitz
A BioBlitz, also written without capitals as bioblitz, is an intense period of biological surveying in an attempt to record all the living species within a designated area. Groups of scientists, naturalists, and volunteers conduct an intensive field study over a continuous time period (usually 24 hours). There is a public component to many BioBlitzes, with the goal of getting the public interested in biodiversity. To encourage more public participation, these BioBlitzes are often held in urban parks or nature reserves close to cities. Research into the best practices for a successful BioBlitz has found that collaboration with local natural history museums can improve public participation. In addition, BioBlitzes have been shown to be a successful tool in teaching post-secondary students about biodiversity.

Features
A BioBlitz has different opportunities and benefits than a traditional, scientific field study. Some of these potential benefits include:

Enjoyment – Instead of a highly structured and measured field survey, this sort of event has the atmosphere of a festival. The short time frame makes the search more exciting.
Local – The concept of biodiversity tends to be associated with coral reefs or tropical rainforests. A BioBlitz offers the chance for people to visit a nearby setting and see that local parks have biodiversity and are important to conserve.
Science – These one-day events gather basic taxonomic information on some groups of species.
Meet the Scientists – A BioBlitz encourages people to meet working scientists and ask them questions.
Identifying rare and unique species/groups – When volunteers and scientists work together, they are able to identify uncommon or special habitats for protection and management and, in some cases, rare species may be uncovered.
Documenting species occurrence – BioBlitzes do not provide a complete species inventory for a site, but they provide a species list which makes a basis for a more complete inventory and will often show what area or what taxon would benefit from further study.
Increases interest in science – BioBlitzes help to build interest from the general public in science and environmental studies by enabling direct communication and inclusive activities.

History
The term "BioBlitz" was first coined by U.S. National Park Service naturalist Susan Rudy while assisting with the first BioBlitz, held at Kenilworth Aquatic Gardens, Washington, D.C. in 1996. Approximately 1,000 species were identified at this first event. This first accounting of biodiversity was organized by Sam Droege (USGS) and Dan Roddy (NPS) with the assistance of other government scientists. The public and especially the news media were invited. Since the success of the first BioBlitz, many organizations around the world have repeated this concept, and most BioBlitzes now contain a public component so that adults, kids, teens and anyone interested can join experts and scientists in the field. Participating in these hands-on field studies is a fun and exciting way for people to learn about biodiversity and better understand how to protect it.

In 1998, Harvard biologist E.O. Wilson and Massachusetts wildlife expert Peter Alden developed a program to catalog the organisms around Walden Pond. This led to a statewide program known as Biodiversity Days. This concept is very similar to a BioBlitz and occasionally the two terms are used interchangeably.

A variation on the BioBlitz, the Blogger Blitz, began in 2007.
Rather than gather volunteers and scientists at one location, participant blogs pledged to conduct individual surveys of biodiversity. These results were then compiled and mapped. The purpose of this blitz is not to survey down to species level across all taxonomic groups, but rather to raise awareness about biodiversity and provide a general snapshot of diversity.

From 2007 through 2016, the National Geographic Society and the US National Park Service partnered to put on a BioBlitz in a different national park each year, culminating in a BioBlitz across the National Park Service in 2016 as part of the National Park Service Centennial Celebration. The iNaturalist platform was used as the recording tool for the 2014, 2015, and 2016 Centennial BioBlitzes in this series. Highlights of the 2016 nationwide BioBlitz include:

The National Parks BioBlitz—Washington, D.C. was the cornerstone of the national event. Nearly 300 scientists and experts led more than 2,600 students and thousands of members of the general public in all 13 of the National Capital Region's parks. As of the closing ceremony on May 21, nearly 900 species were recorded from this area alone.
The Biodiversity Festival at Constitution Gardens on the National Mall served as a window to events across the country, with regular live feeds featuring species discoveries on jumbo screens located on the National Mall.
E. O. Wilson, "father of biodiversity", was a significant part of the pre-BioBlitz events, including the Special Speaker Series at the American Association for the Advancement of Science and the 2016 National Parks BioBlitz Scientist Dinner at National Geographic Headquarters on Thursday, May 19.
The BioBlitz Dance was a common activity throughout the festival weekend. Participants danced with John Griffith, founder of the dance, on the main stage several times at Constitution Gardens, and on the jumbotron from other park units across the nation.
National parks and participating partners shared their BioBlitz activities via social media, using the hashtags #BioBlitz2016 and #FindYourPark. During the weekend's event, #BioBlitz2016 ranked in the top 10 on Twitter.
At Cabrillo National Monument, Green Abalone (Haliotis fulgens) was documented. For the past thirty years, abalone have faced substantial conservation concerns due to overharvesting and disease. Their presence in the Cabrillo Rocky Intertidal Zone can be described as ephemeral at best.
Knife River Indian Villages National Historic Site conducted an ArcheoBlitz. A centuries-old bison tooth was found at Big Hidatsa Village, which was occupied from about 1740 to 1850. DNA extracted from this tooth can provide data on bison populations before their near-extinction at the end of the 19th century, a useful comparison for managers of modern herds.
At Great Smoky Mountains National Park, experts teamed up with about 100 5th graders. Together they set out to explore pollinators and succeeded in discovering nearly 200 species. While it is too early to tell if they found any new species, they have added significant information to the park's database.
Craters of the Moon National Monument and Preserve conducted a lichen survey and added several new species to their park list. One of those identified is Xanthoria elegans. This species of lichen survived an 18-month exposure to solar UV radiation, cosmic rays, vacuum and varying temperatures in an experiment performed by the ESA outside of the ISS.
Channel Islands National Park broadcast a dive with oceanographer and National Geographic Explorer, Dr.
Sylvia Earle, with support from the National Park Trust. The feed was featured online and on the jumbotrons on the National Mall and enabled the public to follow the exploration of one of the richest marine ecosystems in the world, the giant kelp forest.
The National Parks BioBlitz used the iNaturalist app to deliver real-time information on species finds. Verified data will be included in National Park Service databases and international databases tracking biodiversity on the planet. This application can be used by parks and citizen scientists well into the future.

Beginning with the 2010 NPS/NGS BioBlitz at Biscayne National Park, NPS initiated a corps of Biodiversity Youth Ambassadors. Each year through 2016, a student ambassador was selected by the host park to participate in the BioBlitz and assist in raising biodiversity awareness among their peers and in their home communities. In addition to the new NCR Biodiversity Youth Ambassador, Ms. Katherine Hagan, Ms. Mikaila Ulmer, 11, was selected to be the National Park Service Biodiversity Youth Ambassador representing the President's Pollinator Conservation Initiative for the National Park Service.

BioBlitzes by country

Australia
The Woodland Watch Project (part of the World Wide Fund for Nature, WWF) organised BioBlitzes in the wheatbelt area of Western Australia in 2002, 2003, 2004, 2006, 2008, 2009 and 2010. Two 'SpiderBlitzes' (variants of the BioBlitz concept) were organised in 2007 and 2008 in the wheatbelt by WWF to focus attention on threatened trapdoor spiders and their unique habitats.
Wheatbelt Natural Resource Management (Wheatbelt NRM) ran a BioBlitz around the wheatbelt town of Korrelocking in 2012.
The Discovery Circle program (UniSA) ran two BioBlitzes at a park in Salisbury and wetlands at Marion, South Australia.
The Atlas of Life in the Coastal Wilderness (www.alcw.org.au) has run three successful BioBlitzes – in Bermagui 2012, Pambula 2014 and Mimosa Rocks National Park 2014. The Atlas of Life works in association with the Atlas of Living Australia (the national biodiversity database).
The Bob Brown Foundation runs an annual takayna BioBlitz in Tasmania, Australia. The takayna BioBlitz is a festival of science in nature, held in one of the world's last truly wild places. This event brings together scientists, experts, naturalists and members of the public for a weekend of environmental scientific discovery. See: bobbrown.org.au
Tarkine BioBlitz, 19–22 November 2015, was the first BioBlitz in Tasmania. More than 100 people surveyed moorland, rainforest, rivers and coastline in the remote Tarkine region in support of the Bob Brown Foundation's campaign for a Tarkine National Park to protect the natural values of the region.
Melbourne City Council conducted a BioBlitz in 2014 and 2016, engaging citizens in nature conservation in cities.

Canada

Active BioBlitzes
The Robert Bateman Get to Know BioBlitz started in 2010 to celebrate the International Year of Biodiversity. In a partnership with Parks Canada, many sites across Canada celebrated BioBlitzes on the International Day of Biodiversity (May 22).
British Columbia: There has been an annual BioBlitz in Whistler, BC since 2007. The 2013 BioBlitz reported 497 species.
Metro Vancouver has hosted its annual BioBlitz at Burnaby Lake Regional Park since 2010. This BioBlitz has much public participation, with many activities including pond-dipping, nature walks and meeting live animals up close.
The species count currently stands at 488, including a western screech owl, red-legged frog, brassy minnows and common fern which, despite its name, had never been found in the area before.
Ontario: The Royal Ontario Museum and several other organizations have sponsored BioBlitzes in the Toronto area since 2012, with the 2015 event scheduled for the Don River watershed. The 2014 Humber BioBlitz had over 500 participants and counted 1,560 species, including 2 spiders that were new to Canada. The Rouge National Urban Park hosted a BioBlitz event on June 24 and 25 of 2017. The previous BioBlitz at the park was held in 2013, where over 1,700 species of flora and fauna were identified.
New Brunswick: The New Brunswick Museum has held an annual BioBlitz since 2009 in Protected Natural Areas (PNA) around the province. Scientists spend two weeks each year in the field, alternating June in one year with August in the next to catch seasonally available biodiversity. The BioBlitz was held in Jacquet River Gorge PNA in 2009–2010, Caledonia Gorge PNA in 2011–2012, Grand Lake PNA in 2013–2014, Nepisiguit PNA in 2015–2016, and Spednic Lake PNA in 2017–2018. The 2013–2014 BioBlitzes were the subject of a documentary.

Inactive and historic BioBlitzes
The Canadian Biodiversity Institute held numerous BioBlitzes between 1997 and 2001.
Victoria's Beacon Hill has had two BioBlitzes, in April 2007 and October 2007, celebrating the biodiversity of the region. Beacon Hill has since been a site for ArborBlitzes, which focus on identifying all the trees within the park.
Saint Mary's University (Halifax) held BioBlitzes in Nova Scotia between 2008 and 2010.
The Warren Lake BioBlitz was scheduled for 11–13 August 2011. Warren Lake is on the east side of Cape Breton Highlands National Park. A hiking trail circumnavigates the lake and was considered the border of the BioBlitz, giving the event an extensive aquatic focus.
Stanley Park in Vancouver held BioBlitzes between 2011 and 2013.
Harrison Hot Springs had a BioBlitz in July 2011 to highlight the biodiversity of species in the Fraser Valley.

Hong Kong
In Traditional Chinese this has been referred to as 生態速查 ("ecological quick check"). Hong Kong's first BioBlitz was organized by the Tai Tam Tuk Foundation from 24 to 25 October 2015. 50 experts leading 300 secondary students recorded more than 680 species in 30 hours, covering marine, terrestrial and intertidal habitats, in the Tai Tam Site of Special Scientific Interest (SSSI). This event came as part of the 'Biodiversity Festival 2015', an Agriculture, Fisheries and Conservation Department (AFCD) led project that encompassed many events, exhibitions and seminars, and formed a major section of Hong Kong's Biodiversity Strategy and Action Plan (BSAP). Highlights included 2 species of moth that are extremely rare and native to Hong Kong, the first official record of coral in Tai Tam Bay and the first official record of juvenile horseshoe crabs on Hong Kong Island. Data are made available through the online platform iSpot.
BioBlitz@CityU was a competition in the small wooded park on the university campus, organized by City University of Hong Kong on 4 March 2016.
On 21–22 October 2017, the Lung Fu Shan Environmental Education Centre organized its first BioBlitz. This centre was jointly established by the Environmental Protection Department of the HK Government and The University of Hong Kong in 2008.
100 participants and volunteers found 151 species in Lung Fu Shan within 24 hours, with the guidance of 11 experts. In 2018 this was expanded into separate BioBlitz surveys of four animal groups: birds; butterflies; (other) insects; and amphibians and reptiles. Another BioBlitz was planned for 2019.
The Tai Tam Tuk Foundation organized its second BioBlitz on 3–4 November 2017. It translated the iNaturalist app and slideshow into Chinese, with the help of the Hong Kong Explorers Initiative and the technical support of Scott Loarie and Alex Shepard from iNaturalist.org, for better data collection among local participants. It also organized the pilot self-guided activity "DIY BioBlitz", with the help of the Environmental Life Science Society, HKU, alongside the teacher training in this event. Data are made available at https://www.inaturalist.org/projects/hk-bioblitz-2017 This event was subvented by the Agriculture, Fisheries and Conservation Department of the HK Government.
In January 2019 the Hong Kong BioBlitz @ Hong Kong Park was carried out in Hong Kong Park, utilizing iNaturalist and experts from the Natural History Museum, London, and the Tai Tam Tuk Eco Education Centre.
With the popularity of the City Nature Challenge in Hong Kong since its first participation in 2018, BioBlitzes have increasingly been combined with this and other iNaturalist-based challenges, such as the Hong Kong Inter-School City Nature Challenge.

Hungary
BioBlitz events in Hungary have been organized by the Hungarian Biodiversity Research Society http://www.biodiverzitasnap.hu/ since 2006, starting with the eco-village Gyürüfű and its surroundings in Baranya County. Since then the Society has organized BioBlitz events (also called Biodiversity Days) every year, sometimes even several events a year, during which 60–80 experts and researchers contribute to a thorough snapshot inventory of a chosen area in Hungary, and from time to time in cross-border areas in joint projects with neighbouring countries. The Hungarian Biodiversity Research Society invites local inhabitants and the interested public to join its events, and focuses its outreach on young local and regional pupils and their teachers, as well as students from Hungary and abroad. The BioBlitz events take place in partnership with the local national park directorates, municipalities and civil organisations. A more recent approach involves high school students, during their obligatory community/voluntary work, in research and field work on biodiversity and nature protection, based upon long-term cooperation contracts with schools and educational centres. The main goals pursued by the Hungarian Biodiversity Research Society are to promote the correct understanding of biodiversity in its true context, based upon data collection, monitoring, research and expertise, passing on knowledge from generation to generation, and outreach to the broader public. It also aims to strengthen national and international networks. The results of the BioBlitz events are published in print and online media and serve mainly as a basis for maintenance guidelines for protected areas and for appropriate natural-resource management, but also for educational purposes.

Ireland
Ireland's BioBlitz event has been held annually since 2010, established by the National Biodiversity Data Centre http://www.biodiversityireland.ie/ to celebrate the International Year of Biodiversity.
A unique feature of this event is that a number of parks throughout the island compete against each other to see which site records the most species over a 24-hour period. The event is usually held on the third weekend in May each year. In 2010, the first year it was held, Connemara National Park won the competition having recorded 542 species. In 2011, Killarney National Park won the event having recorded an astonishing tally of 1,088 species. Crawfordsburn Country Park won in 2012 having recorded 984 species. All of the data are made available through an online mapping system, Biodiversity Maps http://maps.biodiversityireland.ie/# and hard copy species lists are produced http://bioblitz.biodiversityireland.ie/bioblitz-species-lists-now-available/ The event is coordinated by the National Biodiversity Data Centre, which maintains a special website http://bioblitz.biodiversityireland.ie/ each year so that progress with the event can be tracked online. Building on the success of BioBlitz in Ireland, support is provided for a special 'Local BioBlitz Challenge' for local sites. On 14–15 June 2013 Limerick City hosted the first Urban BioBlitz in Ireland. On May 1, 2014, the first Intervarsity BioBlitz was held with support from the National Biodiversity Data Centre. University College Cork, National University of Ireland Galway, Trinity, Dublin City University and Dundalk IT all competed to count biodiversity on campus, with NUIG being the inaugural winner.

Israel
On April 24, 2014, the first BioBlitz in Israel took place in Yeruham lake park. The event was supported by Ben Gurion University of the Negev. 531 different species were found. A second BioBlitz was scheduled to take place on March 26, 2015.

Malaysia
Since 2011 the Malaysian Nature Society has held an annual birdwatching BioBlitz named "MY Garden Birdwatch".

México
Since 2019 the Rancho Komchén de los Pájaros has held an annual BioBlitz. Results are available on iNaturalist.

New Zealand
Landcare Research, in conjunction with colleagues in other institutes and agencies, held BioBlitzes in Auckland in 2004, 2005, 2006, and 2008, and in Christchurch in 2005. A BioBlitz was planned for early April 2009 in Christchurch. Other New Zealand BioBlitzes have been held in Hamilton and in Wellington. The first marine BioBlitz occurred on the Wellington South Coast over a month, since a marine BioBlitz is trickier weather-wise than a terrestrial one. In March 2012 Forest and Bird organised a BioBlitz on the Denniston Plateau on the West Coast of the South Island, the site of the proposed Escarpment Mine Project.

Pakistan
The first BioBlitz in Pakistan was organized at Hazarganji Chiltan National Park on April 15, 2023, by The First Steps School.

Poland
The first BioBlitz in Poland was organized in Sopot in May 2008 by the Polish Scientific Committee on Oceanic Research of the Polish Academy of Sciences.

Portugal
Faro was the first city in Portugal to have a BioBlitz, in October 2009.

Singapore
The Singapore National Parks (NParks) Community in Nature (CIN) program has been running BioBlitzes in various parks and gardens across Singapore to coincide with the International Day for Biological Diversity.

Slovenia
Slovenia's first BioBlitz took place on May 19/20, 2017, in Draga (in central Slovenia).
The event was conducted during the project "Invazivke nikoli ne počivajo: Ozaveščanje o in preprečevanje negativnega vpliva invazivnih vrst na evropsko ogrožene vrste" ("Invasive species never rest: Raising awareness of and preventing the negative impact of invasive species on endangered European species") and supported by the Slovene Ministry of the Environment and Spatial Planning. The event was held in cooperation with Societas herpetologica slovenica, the Botanical Society of Slovenia, the Centre for Cartography of Fauna and Flora, and the Slovene Dragonfly Society. During the event, 124 experts participated and 1,588 different species were found.
BioBlitz Slovenia 2018 was held in Rače (in northeastern Slovenia) on June 15/16. Altogether, 71 experts from 21 different organisations participated, and at the end of the 24-hour event 934 species or higher taxa were identified. BioBlitz Slovenia 2018 was organised by four NGOs: Societas herpetologica slovenica, the Slovene Dragonfly Society, the Botanical Society of Slovenia, and the Centre for Cartography of Fauna and Flora.
A third BioBlitz Slovenia took place on May 17/18, 2019, in the Lož Karst Field. As a part of the project "Še smo tu – domorodne vrste še nismo izrinjene" ("We are still here – native species have not yet been pushed out"), it was supported by the Slovene Ministry of the Environment and Spatial Planning. Eighty experts participated and 899 different species were found. BioBlitz Slovenia 2019 was organised by three NGOs: Societas herpetologica slovenica, the Slovene Dragonfly Society, and the Centre for Cartography of Fauna and Flora.
The results of the events are published in print and online media and journals, together with the lists of species. BioBlitz Slovenia became a traditional annual event and has its own webpage.

Spain
In Formentera (Balearic Islands), during the Posidonia Festival 2008, there was a BioBlitz.
Barcelona (Catalonia) has hosted a BioBlitz yearly since 2010, organized by Barcelona City Council, the University of Barcelona and the Natural History Museum of Barcelona, in collaboration with several naturalist and scientific societies. The first BioBlitzBcn was held in June 2010 at Laberint d'Horta and Parc de la Ciutadella; the second in October 2011 at Jardí Botànic de Barcelona; the third in May 2012 at Jardí Botànic Històric.
The University of Almería has organized the AmBioBlitz every April since 2018, with the collaboration of CECOUAL (Centre of Scientific Collections of the University of Almería) and Observation.org.
The Pablo de Olavide University, in Seville, planned to host its first BioBlitz in April 2021, in collaboration with Observation.org and the Biological Station of Doñana-CSIC.

Sweden
Sweden's first BioBlitz was organized in Röttle (Gränna) on 4–5 August 2012. On 7–8 September 2012 a BioBlitz was organized in Fliseryd near the river Emån; a total of 345 species were reported in this former industrial site on islands in the river. Sweden's fourth BioBlitz was scheduled for Högsby on 5–6 June 2014.

Taiwan
The Taipei 228 Peace Park BioBlitz on December 20, 2008, sponsored by the Taiwan Forestry Bureau and the National Taiwan Museum, found more than 180 plants, 11 birds and 1 mammal.

Trinidad & Tobago
Tucker Valley Bioblitz 2012 was the first bioblitz in Trinidad and possibly the Caribbean. It was organised by Mike G. Rutherford, curator of the University of the West Indies Zoology Museum (UWIZM), with help from the Trinidad and Tobago Field Naturalists' Club (TTFNC), and was sponsored by First Citizens Bank. The 24-hour event found 654 species – 211 plants and 443 animals.
Arima Valley Bioblitz 2013 was based at the Asa Wright Nature Centre.
The event found 139 vertebrates, 247 invertebrates, 30 fungi, 7 diatoms and 317 plants, making a total of 740 species.
Nariva Swamp Bioblitz 2014 was based at the Forestry Division Field Station near Bush Bush Forest Reserve; the teams found 742 species.
Charlotteville Bioblitz 2015 was the first event to take place in Tobago. Based at the Environmental Research Institute Charlotteville (ERIC), it had a large marine component, and altogether 1,044 species were recorded.
Port of Spain Bioblitz 2016 took the event to the nation's capital and included a Nature Fair with over 20 local NGOs, government organisations and charity groups putting on a biodiversity and environmental display. 762 species were found in and around the city.
Icacos Bioblitz 2017 was the final event organised by Rutherford; it took the bioblitz to the far south-west of Trinidad and recorded 769 species.
Toco Bioblitz 2018 was organised by a committee made up of TTFNC members and staff from the University of the West Indies Department of Life Sciences at the St. Augustine campus. The north-east corner of Trinidad yielded 906 species records.

Türkiye
The first BioBlitz event in Kocaeli Province was held in Ormanya on September 17, 2021, with the support of Kocaeli Metropolitan Municipality. 113 different species were found at the event.

United Kingdom
The Natural History Consortium hosts the National BioBlitz Network, offering free resources for running a BioBlitz event and the national BioBlitz calendar (www.bnhc.org.uk). Examples of regions and organisations which have held BioBlitz events include:

First UK Marine BioBlitz, undertaken by the Marine Biological Association and the Natural History Museum together with other partners – Wembury, South Devon, 2009
Bristol – Organised by Bristol Natural History Consortium
Northumberland – Organised by Northumberland Biodiversity Network
New Forest National Park – Organised by New Forest National Park Authority
Swansea – Organised by Swansea City Council
Cairngorms – Organised by Cairngorms Biodiversity
Dundee – Organised by Dundee City Council
Leicester – Organised by Leicester City and County Council
Isle of Wight – Organised by Isle of Wight Council
London – Organised by OPAL
Derby – Organised by Derby City Council
Brighton – Organised by Sussex Wildlife Trust
Bath – Organised by Bristol Natural History Consortium
Mothecombe, Devon – Marine and coastal BioBlitz – Organised by OPAL and the Marine Biological Association
Jersey – Organised by the Durrell Wildlife Conservation Trust
Fife – Organised by Fife Coast and Countryside Trust and "Celebrating Fife 2010"
Cambridge – Organised by Cambridge University
Lincolnshire – Organised by Lincolnshire Wildlife Trust
Nottingham – Organised by Nottinghamshire Biodiversity Action Group
Flintshire – Organised by Flintshire County Council
North Ayrshire – Organised by North Ayrshire Council
Lancashire – Organised by Lancashire Wildlife Trust
Kent – Organised by Kent Wildlife Trust
Corfe Mullen – Organised by Corfe Mullen Nature Watch
Cornwall – Organised by ERCCIS
North Devon – Organised by Coastwise North Devon
Sandford – Organised by Ambios
Mount Edgcumbe – Marine and coastal BioBlitz organised by the Marine Biological Association

United States
Alaska: The Chugach National Forest and the Alaska Department of Fish & Game Diversity Program organized the first BioBlitz in Southcentral Alaska on July 23 and 24, 2011, to coincide with the International Year of Forests.
Arizona: More than 5,500 people, including 2,000 students and 150 scientists, attended the 2011 Saguaro BioBlitz (October 21–22) and discovered 859 species during the 24-hour inventory period. Included in that total were more than 400 species, mostly invertebrate animals and non-vascular plants, which were previously unknown in the park. The accompanying Biodiversity Festival had an integrated art program that included pieces featuring local species, created by local students, seniors, and artists.
California: The Santa Monica Mountains NPS/National Geographic Society BioBlitz (May 30–31, 2008) was accomplished through collaboration with the Santa Monica Mountains Conservancy, California State Parks, and the Los Angeles Recreation and Parks Department. Six thousand participants discovered more than 1,700 species during the 24-hour inventory period.
California: The San Diego Zoo Institute for Conservation Research hosted a BioBlitz in the San Dieguito River Park on the north shore of Lake Hodges in Escondido on April 25–26.
California: The San Diego Natural History Museum began hosting a yearly BioBlitz starting in 2008. The 2008 BioBlitz was held in Balboa Park, and in 2009 the event was held at Mission Trails Regional Park on May 1–2.
California: The Santa Barbara Botanic Garden organized a BioBlitz of its natural spaces in May 2007.
California: Golden Gate National Recreation Area: On March 28–29, 2014, participants in the BioBlitz at Golden Gate area park sites, including Pt. Reyes National Seashore, Muir Woods National Monument, the Presidio of San Francisco, Mori Point, and Rancho Corral de Tierra, observed and recorded biodiversity in habitats ranging from the redwood canopy to windswept beaches. Highlights included the first ever canopy survey of redwoods at Muir Woods; the first-ever park sighting of a climbing salamander in Muir Woods; sightings of great horned, spotted, barred and saw-whet owls; and a mountain lion at Corral de Tierra.
Colorado: The National Wildlife Federation has been providing a toolset based on the eNature.com species data in the Denver/Boulder metropolitan area since 2004. Results are available online.
Colorado: On August 24–25, 2012, more than 150 scientists joined forces with 5,000 people of all ages and backgrounds to seek out the living creatures in Rocky Mountain National Park. Inventories took place in various ecological life zones, including ponderosa pine forests, the subalpine region, the tundra, and mountain meadows. Among the overall total of 490 species discovered, 138 were previously unknown to be in the park. A companion festival at the Estes Park Fairgrounds advanced and celebrated public awareness of biodiversity.
Connecticut: The Center for Conservation and Biodiversity and the Connecticut State Museum of Natural History have held nine BioBlitz events since 1999. The current record for a single Connecticut BioBlitz was set June 3–4, 2016, in a 5-mile radius around the Two Rivers Magnet School in East Hartford, where 2,765 species were recorded in the 24-hour period. Many of the organisms sighted in the 2016 BioBlitz were documented in an online iNaturalist project. The previous record was set in 2001 at Tarrywile Park in Danbury, where 2,519 species were recorded in the 24-hour period.
District of Columbia: A BioBlitz at the Kenilworth Park and Aquatic Gardens in Washington, D.C. in 1996 found approximately 1,000 species.
Washington, D.C., 2007: The National Geographic Society held a BioBlitz in Rock Creek Park on May 18–19.
The event was later featured in a segment of the TV series Wild Chronicles, which airs on PBS. Participants included J. Michael Fay, Sylvia Earle, and Boyd Matson. The first National Park Service/National Geographic Society BioBlitz took place on May 18–19, 2007. A wide breadth of taxonomic groups was examined, including amphibians and reptiles, invertebrates, birds, fish, fungi, mammals, plants, insects, and more. The total number of species found was 661 over a 24-hour period.
Florida: In Manatee County, the local government's Department of Natural Resources (formerly Conservation Lands Management) has sponsored annual BioBlitz events every spring since 2007. The surveys rotate between the county's different parks and preserves. This event, however, involves only a 12-hour survey instead of the standard 24 hours.
Florida: On April 30 – May 1, 2010, 2,500 citizen scientists worked with their professional counterparts to explore life in one of the nation's largest marine national parks, Biscayne National Park. More than 800 species were found, including a number of species rare to the park, such as the mangrove cuckoo and silver hairstreak butterfly. Also, 11 species of lichen and 22 species of ants were found that had not previously been documented in the park.
Hawaii: At Punahou School, a biannual BioBlitz is organized by the students. The event examines certain parts of the campus and has been held there since the summer of 2008. The BioBlitz there happens once in winter and once in summer.
Hawai'i: At Hawai'i Volcanoes National Park in 2015, working under the theme of I ka nānā no a 'ike ("By observing, one learns"), traditional Hawaiian cultural practitioners, "alakai'i," were integrated into the survey teams, providing a holistic approach to the research and exploration activities. More than 170 leading scientists and alakai'i teamed with thousands of public participants of all ages to explore one of the most fascinating biological landscapes in the world. Together they documented species that thrive in ecosystems from sea level to the summit of Kīlauea Volcano. Exciting finds included 22 new species added to the park's species list, and sightings of 73 threatened species, including the nēnē and Kamehameha butterfly. The number of fungi species on the park's list more than doubled, with 17 new fungi documented at the close of the event.
Illinois: The Field Museum of Natural History and other organizations held a BioBlitz in Chicago in 2002. There have been several BioBlitzes in parts of the forest preserves of Cook and Lake Counties.
Indiana: Indiana Dunes National Park – On May 15–16, 2009, more than 150 scientists, assisted by 2,000 grade school students and other members of the public, explored the sand dunes, lake shore, forests, wetlands, prairie, and streams of the recreation area. The excitement persevered through driving rain and high winds and resulted in the discovery of more than 1,200 species.
Louisiana: The NPS/National Geographic Society BioBlitz at Jean Lafitte National Historical Park and Preserve (May 17–18, 2013) brought together leading scientists and naturalists from around the country and local citizens of all ages. Inventories included herpetofaunal counts, aquatic and terrestrial invertebrate inventories, avifauna observations, and native and non-native plant surveys. Participants also used technology, such as tree cameras and smartphones, to record and understand the diverse ecosystems of this unique national park.
At the time of the event's closing ceremony, 458 species had been identified, including a rare Louisiana milk snake, 288 plants, and 122 invertebrate species.
Maine: The Maine Entomological Society and other organizations have been holding entomological BioBlitzes at Acadia National Park every summer since 2003. Results of the 2003–2011 blitzes were summarized by Chandler et al., 2012, showing that 1,605 species representing 348 families of insects were taken and identified over the 8-year period. Many were new to the Park fauna, and a significant number were also new to the known state fauna.
Maryland/DC/Virginia, 2006: The Nature Conservancy sponsored a Potomac Gorge BioBlitz where more than 130 field biologists and experienced naturalists volunteered their expertise in an effort to see how many species they could find. During a 30-hour survey period from Saturday, June 24, through Sunday, June 25, their surveys revealed more than 1,000 species.
Maryland: The Jug Bay BioBlitz was sponsored by the Maryland-National Capital Park and Planning Commission's (M-NCPPC) Patuxent River Park staff and rangers, May 30–31, 2009.
Massachusetts: A 2006 collaboration between the Boston Museum of Science and the Cape Cod Museum of Natural History was the first BioBlitz in a series sponsored by the E.O. Wilson Biodiversity Foundation and the first to utilize CyberTracker and NatureMapping technologies for data collection.
On June 25–26, 2010, a BioBlitz was held in Falmouth, Massachusetts, using town conservation land and adjacent land owned by the 300 Committee (T3C), Falmouth's land trust. Surveys for 15 taxa were planned, and about 120 volunteers participated. A preliminary estimate of 930 species was reported, a number expected to increase as data were finalized, with full results to be published later in 2010 on the T3C website.
On September 29, 2010, the TDWG Techno/BioBlitz was held alongside the Annual Biodiversity Information Standards Conference in Woods Hole.
On July 8, 2019, the Great Walden BioBlitz was held at Walden Pond, Massachusetts, surveying a five-mile radius around Walden Woods. Organized by Peter Alden in honor of E.O. Wilson's 90th birthday, it was the 30th Massachusetts BioBlitz; public participants were encouraged to explore Walden Woods and Minute Man NHP using the iNaturalist phone app to help document species.
Minnesota: A group of organizations including the Bell Museum of Natural History has sponsored BioBlitzes in natural areas in or near the Twin Cities yearly in June since 2004.
Missouri: Sponsored by the Academy of Science of St. Louis, partners from the public, academic and corporate sectors collaborate on the Academy of Science-St. Louis BioBlitz at urban parks, such as Forest Park in St. Louis. Held at least once a year since 2006, the academy's BioBlitz has hosted future BioBlitz leaders from throughout the country and is a signature event of one of the oldest academies of science in the USA. www.academyofsciencestl.org
New Hampshire: Odiorne Point State Park: The Seacoast Science Center has been hosting an annual BioBlitz! in September since 2003. The park's diversity of coastal habitats provides BioBlitzers the opportunity to find marine, freshwater and terrestrial species. The Center compiles and maintains each year's data.
Squam Lakes, 2008:
The Squam Lakes Natural Science Center, in collaboration with the Squam Lakes Association and the Squam Lakes Conservation Society, in cooperation with the Holderness Conservation Commission, the US Forest Service Hubbard Brook Experimental Forest, UNH Cooperative Extension, Plymouth State University, the NH Fish and Game Department, and Ecosystem Management Consultants.

New Jersey
State Highlands, NJ.
Gateway National Recreation Area, Sandy Hook Unit, 2011: On Sept. 16–17, science students, along with park staff and over 150 volunteers, located nearly 450 species, mostly birds, terrestrial plants and invertebrates.
Gateway National Recreation Area, Sandy Hook Unit, 2015: On September 18–19, the American Littoral Society, in partnership with the National Park Service, hosted the second Sandy Hook BioBlitz. Over 150 scientists, naturalists, and volunteers raced against the clock to identify as many species as possible. This BioBlitz found 75 birds, 12 fungi/lichen, 21 fish, 2 reptiles/amphibians, 44 marine invertebrates, 2 insects, 13 mammals, 15 aquatic plants, and 87 terrestrial plants.

New York State
New York City
Central Park, 2003: This BioBlitz found more than 800 species, including 393 species of plants, 78 of moths, 14 fungi, 10 spiders, 9 dragonflies, 2 tardigrades, 102 other invertebrates, 7 mammals, 3 turtles, 46 birds and 2 frog species.
Central Park, 2006: In collaboration with the E.O. Wilson Biodiversity Foundation, the Explorers Club, the American Museum of Natural History and the Boston Museum of Science. This was the first bioblitz in history to incorporate the collection and analysis of microorganisms.
Central Park, 2013: On August 27–28, 2013, a BioBlitz at Central Park was held in partnership with Macaulay Honors College of CUNY. With help from the Central Park Conservancy, over 350 Macaulay students worked with nearly 30 scientists and cataloged more than 460 species.
New York Botanical Garden in the Bronx, 2014: September 6 and 7, in partnership with Macaulay Honors College of CUNY.
The Saw Mill River watershed in Westchester County, September 2009: Groundwork Hudson Valley, leading the Saw Mill River Coalition, conducted a Saw Mill River BioBlitz on September 25–26 with more than 50 scientists from a wide variety of fields. A concurrent conference on the health of the river was held at Pace University in Pleasantville that was open to the public and had activities geared for children. The event was funded by a grant from the Westchester Community Foundation, with additional support from the US EPA and the NYS/DEC Hudson River Estuary Program. Major co-sponsors joining the effort were Westchester County Parks, Recreation and Conservation; Teatown Lake Reservation; Pace University's Department of Biology and Health Sciences; Pace University's Academy for Applied Environmental Studies; Sigma Xi: The Scientific Research Society; Greenburgh Nature Center; and the Saw Mill River Audubon.
North Carolina: The North Carolina Botanical Garden, in collaboration with the Morehead Planetarium, sponsors an annual BioBlitz in September on garden-owned property.
Ohio: The Geauga Park District has hosted an annual BioBlitz at different park district properties since 2003.
Oklahoma: The Oklahoma Biological Survey has hosted an annual BioBlitz at different locations around Oklahoma starting in 2001. Their 2010 BioBlitz was scheduled for October 8–9 at Kaw Lake in north-central Oklahoma, with a base camp at Camp McFadden.
Pennsylvania: Phipps Conservatory hosted a BioBlitz on June 10, 2018, in Pittsburgh.
Rhode Island: The Rhode Island Natural History Survey has conducted a BioBlitz at a different site in the state every year since 2000, including a "backyard bioblitz" held in 2020, during the COVID-19 pandemic. Rhode Island BioBlitz may be the longest running annual BioBlitz in the world. In the 23 events through 2022, the average participation was 163 and the average species count was 1,022; the record participation of 302 people and the record species count of 1,308 species were both set at the Jamestown, Rhode Island BioBlitz of 2012.
Vermont: The Vermont Institute of Natural Science held a BioBlitz in 2004 at Hartford.
Washington: BioBlitzes have been conducted using NatureTracker software on PDAs for conservation planning.
Wisconsin: The Milwaukee Public Museum (MPM) hosts an annual BioBlitz program that began in 2015. MPM events have occurred at Schlitz Audubon Nature Center in Milwaukee (2015), Grant Park in South Milwaukee (2016), Fox River Park in Waukesha (2017), Lake Farm County Park/Capitol Springs Recreation Area in Madison (2018), Riveredge Nature Center in Saukville (2019), and Whitnall Park in Franklin (2020).
The non-profit Biodiversity Project held three Great Lakes BioBlitzes with support from the Wisconsin Coastal Management Program and NOAA in 2004. The sites were Riverside Park in Milwaukee; Baird Creek Parkway in Green Bay; and Wisconsin Point in Superior.

See also
Australian Bird Count (ABC)
Bush Blitz – an Australian Government variant of the concept, co-funded by BHP Billiton with the participation of Earthwatch Australia
Breeding Bird Survey
Christmas Bird Count (CBC) (in the Western Hemisphere)
City Nature Challenge
Seabird Colony Register (SCR)
The EBCC Atlas of European Breeding Birds
Tucson Bird Count (TBC) (in Arizona, US)

References

External links
BioBlitzes at National Geographic
National BioBlitz Network (United Kingdom)

Biological censuses Biodiversity Citizen science Ecological experiments
BioBlitz
[ "Biology" ]
9,068
[ "Biodiversity" ]
1,574,901
https://en.wikipedia.org/wiki/Cardinality%20of%20the%20continuum
In set theory, the cardinality of the continuum is the cardinality or "size" of the set of real numbers $\mathbb{R}$, sometimes called the continuum. It is an infinite cardinal number and is denoted by $\mathfrak{c}$ (lowercase Fraktur "c") or $|\mathbb{R}|$.

The real numbers $\mathbb{R}$ are more numerous than the natural numbers $\mathbb{N}$. Moreover, $\mathbb{R}$ has the same number of elements as the power set of $\mathbb{N}$. Symbolically, if the cardinality of $\mathbb{N}$ is denoted as $\aleph_0$, the cardinality of the continuum is
$$\mathfrak{c} = 2^{\aleph_0} > \aleph_0.$$
This was proven by Georg Cantor in his uncountability proof of 1874, part of his groundbreaking study of different infinities. The inequality $\aleph_0 < \mathfrak{c}$ was later stated more simply in his diagonal argument in 1891. Cantor defined cardinality in terms of bijective functions: two sets have the same cardinality if, and only if, there exists a bijective function between them.

Between any two real numbers a < b, no matter how close they are to each other, there are always infinitely many other real numbers, and Cantor showed that they are as many as those contained in the whole set of real numbers. In other words, the open interval (a,b) is equinumerous with $\mathbb{R}$, as well as with several other infinite sets, such as any n-dimensional Euclidean space $\mathbb{R}^n$ (see space filling curve). That is,
$$|(a,b)| = |\mathbb{R}| = |\mathbb{R}^n|.$$
The smallest infinite cardinal number is $\aleph_0$ (aleph-null). The second smallest is $\aleph_1$ (aleph-one). The continuum hypothesis, which asserts that there are no sets whose cardinality is strictly between $\aleph_0$ and $\mathfrak{c}$, means that $\mathfrak{c} = \aleph_1$. The truth or falsity of this hypothesis is undecidable and cannot be proven within the widely used Zermelo–Fraenkel set theory with axiom of choice (ZFC).

Properties

Uncountability
Georg Cantor introduced the concept of cardinality to compare the sizes of infinite sets. He famously showed that the set of real numbers is uncountably infinite. That is, $\mathfrak{c}$ is strictly greater than the cardinality of the natural numbers, $\aleph_0$:
$$\aleph_0 < \mathfrak{c}.$$
In practice, this means that there are strictly more real numbers than there are integers. Cantor proved this statement in several different ways. For more information on this topic, see Cantor's first uncountability proof and Cantor's diagonal argument.

Cardinal equalities
A variation of Cantor's diagonal argument can be used to prove Cantor's theorem, which states that the cardinality of any set is strictly less than that of its power set. That is, $|A| < 2^{|A|}$ (and so the power set $\wp(\mathbb{N})$ of the natural numbers $\mathbb{N}$ is uncountable). In fact, the cardinality of $\wp(\mathbb{N})$, by definition $2^{\aleph_0}$, is equal to $\mathfrak{c}$. This can be shown by providing one-to-one mappings in both directions between subsets of a countably infinite set and real numbers, and applying the Cantor–Bernstein–Schroeder theorem, according to which two sets with one-to-one mappings in both directions have the same cardinality. In one direction, reals can be equated with Dedekind cuts, sets of rational numbers, or with their binary expansions. In the other direction, the binary expansions of numbers in the half-open interval $[0,1)$, viewed as sets of positions where the expansion is one, almost give a one-to-one mapping from subsets of a countable set (the set of positions in the expansions) to real numbers, but it fails to be one-to-one for numbers with terminating binary expansions, which can also be represented by a non-terminating expansion that ends in a repeating sequence of 1s. This can be made into a one-to-one mapping by adding one to the non-terminating repeating-1 expansions, mapping them into $[1,2)$.
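As a concrete sketch of the coding just described, one explicit version of the map from subsets of a countable set to real numbers can be written as follows (the function name $f$ and the worked values are illustrative additions, not notation from the original argument):

\[
  f(S) \;=\; \sum_{n \in S} 2^{-(n+1)} \;\in\; [0,1), \qquad S \subseteq \mathbb{N}.
\]

For example, $f(\{0, 2, 4, \dots\}) = \tfrac{1}{2} + \tfrac{1}{8} + \tfrac{1}{32} + \cdots = \tfrac{2}{3}$, while the collision $0.0111\ldots_2 = 0.1000\ldots_2 = \tfrac{1}{2}$ shows exactly why the adjustment that shifts the non-terminating repeating-1 expansions into $[1,2)$ is needed for injectivity.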
Thus, we conclude that
$$\mathfrak{c} = |\wp(\mathbb{N})| = 2^{\aleph_0}.$$

The cardinal equality $\mathfrak{c}^2 = \mathfrak{c}$ can be demonstrated using cardinal arithmetic:
$$\mathfrak{c}^2 = \left(2^{\aleph_0}\right)^2 = 2^{2 \cdot \aleph_0} = 2^{\aleph_0} = \mathfrak{c}.$$
By using the rules of cardinal arithmetic, one can also show that
$$\mathfrak{c}^{\aleph_0} = \aleph_0^{\aleph_0} = n^{\aleph_0} = \mathfrak{c}^n = \aleph_0 \cdot \mathfrak{c} = n \cdot \mathfrak{c} = \mathfrak{c},$$
where n is any finite cardinal ≥ 2, and
$$\mathfrak{c}^{\mathfrak{c}} = \left(2^{\aleph_0}\right)^{\mathfrak{c}} = 2^{\mathfrak{c} \cdot \aleph_0} = 2^{\mathfrak{c}},$$
where $2^{\mathfrak{c}}$ is the cardinality of the power set of R, and $2^{\mathfrak{c}} > \mathfrak{c}$.

Alternative explanation for $\mathfrak{c} = 2^{\aleph_0}$
Every real number has at least one infinite decimal expansion. For example,
$$1/2 = 0.50000\ldots,\qquad 1/3 = 0.33333\ldots,\qquad \pi = 3.14159\ldots$$
(This is true even in the case the expansion repeats, as in the first two examples.) In any given case, the number of decimal places is countable since they can be put into a one-to-one correspondence with the set of natural numbers $\mathbb{N}$. This makes it sensible to talk about, say, the first, the one-hundredth, or the millionth decimal place of π. Since the natural numbers have cardinality $\aleph_0$, each real number has $\aleph_0$ digits in its expansion.

Since each real number can be broken into an integer part and a decimal fraction, we get:
$$\mathfrak{c} \le \aleph_0 \cdot 10^{\aleph_0} \le 2^{\aleph_0} \cdot \left(2^4\right)^{\aleph_0} = 2^{\aleph_0 + 4 \cdot \aleph_0} = 2^{\aleph_0},$$
where we used the fact that $\aleph_0 + 4 \cdot \aleph_0 = \aleph_0$.

On the other hand, if we map $2 = \{0,1\}$ to $\{3,7\}$ and consider that decimal fractions containing only 3 or 7 are only a part of the real numbers, then we get
$$2^{\aleph_0} \le \mathfrak{c}$$
and thus
$$\mathfrak{c} = 2^{\aleph_0}.$$

Beth numbers
The sequence of beth numbers is defined by setting $\beth_0 = \aleph_0$ and $\beth_{k+1} = 2^{\beth_k}$. So $\mathfrak{c}$ is the second beth number, beth-one:
$$\mathfrak{c} = \beth_1.$$
The third beth number, beth-two, is the cardinality of the power set of $\mathbb{R}$ (i.e. the set of all subsets of the real line):
$$2^{\mathfrak{c}} = \beth_2.$$

The continuum hypothesis
The continuum hypothesis asserts that $\mathfrak{c}$ is also the second aleph number, $\aleph_1$. In other words, the continuum hypothesis states that there is no set whose cardinality lies strictly between $\aleph_0$ and $\mathfrak{c}$. This statement is now known to be independent of the axioms of Zermelo–Fraenkel set theory with the axiom of choice (ZFC), as shown by Kurt Gödel and Paul Cohen. That is, both the hypothesis and its negation are consistent with these axioms. In fact, for every nonzero natural number n, the equality $\mathfrak{c} = \aleph_n$ is independent of ZFC (the case $n = 1$ being the continuum hypothesis). The same is true for most other alephs, although in some cases, equality can be ruled out by König's theorem on the grounds of cofinality (e.g. $\mathfrak{c} \neq \aleph_\omega$). In particular, $\mathfrak{c}$ could be either $\aleph_1$ or $\aleph_{\omega_1}$, where $\omega_1$ is the first uncountable ordinal, so it could be either a successor cardinal or a limit cardinal, and either a regular cardinal or a singular cardinal.

Sets with cardinality of the continuum
A great many sets studied in mathematics have cardinality equal to $\mathfrak{c}$. Some common examples are the following:
the real numbers $\mathbb{R}$
any (nondegenerate) closed or open interval in $\mathbb{R}$ (such as the unit interval [0,1])
the irrational numbers
the transcendental numbers
the Cantor set
Euclidean space $\mathbb{R}^n$
the complex numbers $\mathbb{C}$
the power set of the natural numbers (the set of all subsets of the natural numbers)
the set of sequences of real numbers
the set of all continuous functions from $\mathbb{R}$ to $\mathbb{R}$

Sets with greater cardinality
Sets with cardinality greater than $\mathfrak{c}$ include:
the set of all subsets of $\mathbb{R}$ (i.e., the power set $\wp(\mathbb{R})$)
the set $2^{\mathbb{R}}$ of indicator functions defined on subsets of the reals (the set $2^{\mathbb{R}}$ is isomorphic to $\wp(\mathbb{R})$ – the indicator function chooses elements of each subset to include)
the set of all functions from $\mathbb{R}$ to $\mathbb{R}$
the Lebesgue σ-algebra of $\mathbb{R}$, i.e., the set of all Lebesgue measurable sets in $\mathbb{R}$
the set of all Lebesgue-integrable functions from $\mathbb{R}$ to $\mathbb{R}$
the set of all Lebesgue-measurable functions from $\mathbb{R}$ to $\mathbb{R}$
the Stone–Čech compactifications of $\mathbb{N}$, $\mathbb{Q}$, and $\mathbb{R}$
the set of all automorphisms of the (discrete) field of complex numbers.
These all have cardinality $2^{\mathfrak{c}} = \beth_2$ (beth two).

See also
Cardinal characteristic of the continuum

References

Bibliography
Paul Halmos, Naive set theory. Princeton, NJ: D. Van Nostrand Company, 1960. Reprinted by Springer-Verlag, New York, 1974 (Springer-Verlag edition).
Jech, Thomas, 2003. Set Theory: The Third Millennium Edition, Revised and Expanded. Springer.
Kunen, Kenneth, 1980. Set Theory: An Introduction to Independence Proofs. Elsevier.

Cardinal numbers Set theory Infinity
Cardinality of the continuum
[ "Mathematics" ]
1,530
[ "Cardinal numbers", "Set theory", "Mathematical logic", "Mathematical objects", "Infinity", "Numbers" ]
1,574,904
https://en.wikipedia.org/wiki/Prince%20Rupert%27s%20drop
Prince Rupert's drops (also known as Dutch tears or Batavian tears) are toughened glass beads created by dripping molten glass into cold water, which causes it to solidify into a tadpole-shaped droplet with a long, thin tail. These droplets are characterized internally by very high residual stresses, which give rise to counter-intuitive properties, such as the ability to withstand a blow from a hammer or a bullet on the bulbous end without breaking, while exhibiting explosive disintegration if the tail end is even slightly damaged. In nature, similar structures are produced under certain conditions in volcanic lava and are known as Pele's tears.

The drops are named after Prince Rupert of the Rhine, who brought them to England in 1660, although they were reportedly being produced in the Netherlands earlier in the 17th century and had probably been known to glassmakers for much longer. They were studied as scientific curiosities by the Royal Society, and the unraveling of the principles of their unusual properties probably led to the development of the process for the production of toughened glass, patented in 1874. Research carried out in the 20th and 21st centuries shed further light on the reasons for the drops' contradictory properties.

Description

Prince Rupert's drops are produced by dropping molten glass drops into cold water. The glass rapidly cools and solidifies in the water from the outside inward. This thermal quenching may be described by means of a simplified model of a rapidly cooled sphere. Prince Rupert's drops have remained a scientific curiosity for nearly 400 years due to two unusual mechanical properties: when the tail is snipped, the drop disintegrates explosively into powder, whereas the bulbous head can withstand compressive forces of up to .

The explosive disintegration arises from multiple crack bifurcation events when the tail is cut: a single crack is accelerated in the tensile residual stress field in the center of the tail and bifurcates after it reaches a critical velocity of . Given these high speeds, the disintegration process due to crack bifurcation can only be inferred by looking into the tail and employing a high-speed camera. This is perhaps why this curious property of the drops remained unexplained for centuries. The second unusual property of the drops, namely the strength of the heads, is a direct consequence of the large compressive residual stresses, up to , that exist in the vicinity of the head's outer surface. This stress distribution is measured by using glass's natural property of stress-induced birefringence and by employing techniques of 3D photoelasticity. The high fracture toughness due to residual compressive stresses makes Prince Rupert's drops one of the earliest examples of toughened glass.

History

It has been suggested that methods for making the drops have been known to glassmakers since at least the times of the Roman Empire. Sometimes attributed to Dutch inventor Cornelis Drebbel, the drops were often referred to as lacrymae Borussicae (Prussian tears) or lacrymae Batavicae (Dutch tears) in contemporary accounts. Verifiable accounts of the drops from Mecklenburg in North Germany appear as early as 1625. The secret of how to make them remained in the Mecklenburg area for some time, although the drops were disseminated across Europe from there, for sale as toys or curiosities.
The Dutch scientist Constantijn Huygens asked Margaret Cavendish, Duchess of Newcastle, to investigate the properties of the drops; her opinion, after carrying out experiments, was that a small amount of volatile liquid was trapped inside. Although Prince Rupert did not discover the drops, he was responsible for bringing them to Britain in 1660. He gave them to King Charles II, who in turn delivered them in 1661 to the Royal Society (which had been created the previous year) for scientific study. Several early publications from the Royal Society give accounts of the drops and describe experiments performed. Among these publications was Micrographia of 1665 by Robert Hooke, who would later formulate Hooke's law. His publication correctly laid out most of what can be said about Prince Rupert's drops, within the limits of what was then understood about elasticity (to which Hooke himself later contributed) and about the failure of brittle materials from the propagation of cracks. A fuller understanding of crack propagation had to wait until the work of A. A. Griffith in 1920.

In 1994, Srinivasan Chandrasekar, an engineering professor at Purdue University, and Munawar Chaudhri, head of the materials group at the University of Cambridge, used high-speed framing photography to observe the drop-shattering process and concluded that while the surface of the drops experiences highly compressive stresses, the inside experiences high tension, creating a state of unstable equilibrium which can easily be disturbed by breaking the tail. However, this left open the question of how the stresses are distributed throughout a Prince Rupert's drop. In a further study published in 2017, the team collaborated with Hillar Aben, a professor at Tallinn University of Technology in Estonia, using a transmission polariscope to measure the optical retardation of light from a red LED as it travelled through the glass drop, and used the data to construct the stress distribution throughout the drop. This showed that the heads of the drops have a much higher surface compressive stress than previously thought, at up to , but that this surface compressive layer is also thin, only about 10% of the diameter of the head of a drop. This gives the surface a high fracture strength, which means it is necessary to create a crack that enters the interior tension zone to break the droplet. As cracks on the surface tend to grow parallel to the surface, they cannot enter the tension zone, but a disturbance in the tail allows cracks to do so.

A scholarly account of the early history of Prince Rupert's drops is given in the Notes and Records of the Royal Society of London, where much of the early scientific study of the drops was performed.

Scientific uses

The study of the drops probably inspired the process of producing toughened glass by quenching. It was patented in England by Parisian Francois Barthelemy Alfred Royer de la Bastie in 1874, just one year after V. De Luynes had published accounts of his experiments with them. Since at least the 19th century, it has been known that formations similar to Prince Rupert's drops are produced under certain conditions in volcanic lava. More recently, researchers at the University of Bristol and the University of Iceland have studied the glass particles produced by explosive fragmentation of Prince Rupert's drops in the laboratory to better understand magma fragmentation and ash formation driven by stored thermal stresses in active volcanoes.
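The polariscope measurements described above rest on the stress-optic law: light passing through stressed glass accumulates an optical retardation proportional to the difference of the principal stresses along its path. A minimal Python sketch of that relation follows; the stress-optic coefficient used is a typical order of magnitude for glass, an assumed value rather than a figure from the studies above.

def principal_stress_difference(retardation_nm, path_m, c_brewster=2.5):
    # Stress-optic law: retardation = C * t * (sigma1 - sigma2),
    # so sigma1 - sigma2 = retardation / (C * t).
    # C is the stress-optic coefficient; 1 brewster = 1e-12 Pa^-1.
    c = c_brewster * 1e-12           # Pa^-1
    delta = retardation_nm * 1e-9    # metres
    return delta / (c * path_m)      # Pa

# e.g. 600 nm of retardation accumulated over a 4 mm light path:
print(principal_stress_difference(600, 0.004) / 1e6, "MPa")  # 60.0 MPa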
Literary references Because of their use as a party piece, Prince Rupert's drops became widely known in the late 17th century—far more than today. It can be seen that educated people (or those in "society") were expected to be familiar with them, from their use in the literature of the day. Samuel Butler used them as a metaphor in his poem Hudibras in 1663, and Pepys refers to them in his diary. The drops were immortalized in a verse of the anonymous Ballad of Gresham College (1663): Diarist George Templeton Strong wrote (volume 4, p. 122) of a hazardous sudden breaking up of pedestrian-bearing ice in New York City's East River during the winter of 1867 that "The ice flashed into fragments all at once like a Prince Rupert's drop." Alfred Jarry's 1902 novel Supermale makes reference to the drops in an analogy for the molten glass drops falling from a failed device meant to pass eleven thousand volts of electricity through the supermale's body. Sigmund Freud, discussing the dissolution of military groups in Group Psychology and the Analysis of the Ego (1921), notes the panic that results from the loss of the leader: "The group vanishes in dust, like a Prince Rupert's drop when its tail is broken off." E. R. Eddison's 1935 novel Mistress of Mistresses references Rupert's drops in the last chapter as Fiorinda sets off a whole set of them. In the 1940 detective novel There Came Both Mist and Snow by Michael Innes (J. I. M. Stewart), a character incorrectly refers to them as "Verona drops"; the error is corrected towards the end of the novel by the detective Sir John Appleby. In his 1943 novella Conjure Wife, Fritz Leiber uses Prince Rupert drops as a metaphor for the volatility of several characters' personalities. These small-town college faculty people seem to be placid and impervious, but "explode" at a mere "flick of the filament". Peter Carey devotes a chapter to the drops in his 1988 novel Oscar and Lucinda. The title-giving suite to progressive rock band King Crimson's 1970 third studio album Lizard includes both parts referring to a fictionalised version of Prince Rupert as well as an extended section called "The Battle of Glass Tears". See also Bologna bottle References Further reading Sir Robert Moray (1661). "An Account of the Glass Drops", Royal Society (transcribed, archive reference). External links PrinceRupertsDrop.com High-speed slow-motion video demonstrations. Video showing the making and the breaking of Prince Rupert's Drops from the Museum of Glass Popular Science article with a video detailing Prince Rupert's Drops Former Mythbusters Adam Savage and Jamie Hyneman demonstrate Rupert's Drops, including diagram of internal stresses Glass types Science demonstrations Novelty items Fluid mechanics
Prince Rupert's drop
[ "Engineering" ]
1,963
[ "Civil engineering", "Fluid mechanics" ]
1,574,954
https://en.wikipedia.org/wiki/A%20Treatise%20on%20the%20Astrolabe
A Treatise on the Astrolabe is a medieval instruction manual on the astrolabe by Geoffrey Chaucer. It was completed in 1391. It describes both the form and the proper use of the instrument, and stands out as a prose technical work from a writer better known for poetry, written in English rather than the more typical Latin.

Significance

The Treatise is considered the "oldest work in English written upon an elaborate scientific instrument". It is admired for its clarity in explaining difficult concepts, although modern readers lacking an actual astrolabe may find the details hard to follow. Robinson believes it indicates that, had Chaucer written more freely composed prose, it would have been superior to his translations of Boethius and Renaut de Louhans.

Chaucer's exact source is undetermined, but most of his 'conclusions' go back, directly or indirectly, to Compositio et Operatio Astrolabii, a Latin translation of Messahala's Arabic treatise of the 8th century. His description of the instrument amplifies Messahala's, and Chaucer's indebtedness to Messahala was recognised by John Selden and established by Walter William Skeat. Mark Harvey Liddell held that Chaucer drew on De Sphaera of John de Sacrobosco for the substance of his astronomical definitions and descriptions, but the non-correspondence in language suggests the probable use of an alternative compilation. A collotype facsimile of the second part of the Latin text of Messahala (the portion which is parallel to Chaucer's) is found in Skeat's Treatise On The Astrolabe and in Gunther's Chaucer and Messahalla on the Astrolabe. Paul Kunitzsch argued that the treatise on the astrolabe long attributed to Messahala was in fact written by Ibn al-Saffar.

Language

The work is written in the free-flowing Middle English of that time (1391). Chaucer explains this departure from the norm thus: "This treatis, ..., wol I shewe the ... in Englissh, for Latyn ne canst thou yit but small". Chaucer proceeds to labour the point somewhat: "Grekes ... in Grek; and to Arabiens in Arabik, and to Jewes in Ebrew, and to Latyn folk in Latyn; whiche Latyn folk had hem [conclusions] first out of othere dyverse languages, and writen hem in her owne tunge, that is to seyn, in Latyn." He continues to explain that it is easier for a child to understand things in his own language than to struggle with unfamiliar grammar, a commonplace idea today but radical in the fourteenth century. Finally, he appeals to royalty. Philippa Roet, Chaucer's wife, was a lady-in-waiting to Philippa of Hainault, Edward III's queen. She was also a sister of Katherine Swynford, John of Gaunt's wife. Chaucer's appeal is an early version of the phrase "the King's English": "And preie God save the King, that is lord of this language, ..."

Manuscripts

Skeat identifies 22 manuscripts of varying quality. The best he labels A, B and C, which are MS. Dd. 3.53 (part 2) in the Cambridge University Library, MS. E Museo 54 in the Bodleian Library, and MS. Rawlinson Misc. 1262, also in the Bodleian. A and B were apparently written by the same scribe, but A has been corrected by another hand. Skeat observes that the errors are just those described in "Chaucers Wordes unto Adam, His Owne Scriveyn":

"So ofte a-daye I mot thy werk renewe,
"It to correcte and eek to rubbe and scrape;
"And al is thorough thy negligence and rape."

A has indeed been rubbed and scraped, then corrected by another hand. This latter scribe Skeat believes to be a better writer than the first.
The insertion of diagrams was entrusted to this second writer. A and B were apparently written in London about the year 1400, that is, some 9 years after the original composition. Manuscript C is also early, perhaps 1420, and closely agrees with A.

Audience

Chaucer opens with the words "Lyte Lowys my sone". In the past a question arose whether this Lowys was Chaucer's son or some other child with whom he was in close contact. Kittredge suggested that it could be Lewis Clifford, a son of a friend and possibly a godson of Chaucer's. As evidence he advanced that Lewis Clifford died in October 1391, the year of the composition, which could explain its abandonment. Robinson, though, reports the finding of a document by Professor Manly "recently" (as of 1957) which links one Lewis Chaucer with Geoffrey's eldest child, Thomas Chaucer. The likelihood therefore is that the dedication can be taken at face value. Chaucer had an eye to the wider public as well. In the prologue he says: "Now wol I preie mekely every discret persone that redith or herith this litel tretys..."

Structure

The work was planned to have an introduction and five sections:

A description of the astrolabe
A rudimentary course in using the instrument
Various tables of longitudes, latitudes, declinations, etc.
A "theorike" (theory) of the motion of the celestial bodies, in particular a table showing the "very moving of the moon"
An introduction to the broader field of "astrologie", a word which at the time referred to the entire span of what we now divide into astrology and astronomy

Part 1 is complete and extant. Part 2 is also extant, with certain caveats described below. Part 3, if it ever existed, is not extant as part of the Treatise. Part 4 was, in the opinion of Skeat, probably never written. Part 5 also was probably never written, which Skeat approves of. Indeed, he draws attention to Chaucer's comment at the end of conclusion 4: "Natheles these ben observaunces of judicial matere and rytes of payens, in whiche my spirit hath no feith, no knowing of her horoscopum."

Part 1

The whole of this section describes the form of an astrolabe. The astrolabe is based on a large plate ("The moder" or "mother") which is arranged to hang vertically from a thumb ring. It has "a large hool, that resceiveth in hir wombe the thin plates". The back of the astrolabe is engraved with various scales (see Skeat's sketch below). Mounted on the back is a sighting rule (Skeat's fig 3, below), "a brod rule, that hath on either end a square plate perced with certein holes". To hold it all together there is a "pyn" with a "littel wegge" (wedge), as shown below at Skeat's fig 7. Into the "womb" various thin plates can be inserted which are designed for a particular place: "compowned after the latitude of Oxenforde". These plates show the star map. Surmounting them is a "riet" or "rete", which is a pierced framework carrying the major stars, shown at fig 9. Outside all is another rule, this time not with sighting holes, mounted on the common pivot, see fig 6.

Part 2

Part 2 consists of around 40 propositions or descriptions of things that can be done with the astrolabe. The exact number is uncertain since, of the later propositions, some are of disputed or doubtful authenticity. Skeat accepts that propositions 1-40 are unambiguously genuine. Robinson generally follows Skeat's reasoning. These first 40 propositions form the canon of part 2; the propositions that follow are usually labeled "Supplementary Propositions."
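Several of the Part 2 conclusions turn on finding the hour of the day from the observed altitude of the sun. Chaucer's procedure is graphical, worked with the rete and a latitude plate; for comparison, the sketch below gives the underlying spherical-astronomy relation in modern Python form, a restatement, not anything from the Treatise:

import math

def hours_from_noon(alt_deg, lat_deg, decl_deg):
    # Solve sin(alt) = sin(lat)*sin(decl) + cos(lat)*cos(decl)*cos(H)
    # for the sun's hour angle H, then convert to hours (15 deg per hour).
    alt, lat, decl = map(math.radians, (alt_deg, lat_deg, decl_deg))
    cos_h = (math.sin(alt) - math.sin(lat) * math.sin(decl)) / (
        math.cos(lat) * math.cos(decl))
    h = math.degrees(math.acos(max(-1.0, min(1.0, cos_h))))
    return h / 15.0

# Sun at 30 degrees altitude, latitude of Oxford (~51.75 N), at an equinox:
print(hours_from_noon(30, 51.75, 0))  # about 2.4 hours before or after noon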
The astrolabe

The astrolabe was a sophisticated precision instrument. With it one could determine the date, the time (when the sky was clear), the positions of stars, the passage of the zodiac, and latitude on the earth's surface, as well as reckon tides and carry out basic surveying. Care must be taken not to dismiss the astrological aspects: quite apart from any mystical interpretation, astrological terminology was used for what today would be recognized as astronomy. Determining when the sun entered a house (or sign) of the zodiac was a precise determination of the calendar. Skeat produced a number of sketches to accompany his edition. The stars listed on the rim of the rete of the drawings in the Treatise are given with their modern names.

See also
The equatorie of the planetis by John Westwyk

References

Footnotes

Citations

Bibliography

External links
Plain-text format (with line numbering): Part 1, Part 2, from eChaucer
The text of A Treatise on the Astrolabe – presented in Middle English and Modern English side-by-side.
A Treatise on the Astrolabe – a verb database (language analysis, description of the astrolabe and Middle English period)

1391 books Medieval literature History of astronomy Astronomy books Astrological texts Works by Geoffrey Chaucer Treatises Handbooks and manuals
A Treatise on the Astrolabe
[ "Astronomy" ]
2,005
[ "Astronomy books", "Works about astronomy", "History of astronomy" ]
1,574,968
https://en.wikipedia.org/wiki/Comparison%20of%20document%20markup%20languages
The following tables compare general and technical information for a number of document markup languages. Please see the individual markup languages' articles for further information.

General information

Basic general information about the markup languages: creator, version, etc. Note: While Rich Text Format (RTF) is human readable, it is not considered to be a markup language and is thus excluded from the table.

Characteristics

Some characteristics of the markup languages.

Notes

See also
List of document markup languages
Comparison of Office Open XML and OpenDocument
Comparison of e-book formats
Comparison of data serialization formats

Document markup languages Comparison of document markup languages
Comparison of document markup languages
[ "Technology" ]
132
[ "Computing comparisons", "Markup language comparisons" ]
1,575,082
https://en.wikipedia.org/wiki/JSON
JSON (JavaScript Object Notation, pronounced /ˈdʒeɪsən/ or /ˈdʒeɪˌsɒn/) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of name–value pairs and arrays (or other serializable values). It is a commonly used data format with diverse uses in electronic data interchange, including that of web applications with servers. JSON is a language-independent data format. It was derived from JavaScript, but many modern programming languages include code to generate and parse JSON-format data. JSON filenames use the extension .json. Douglas Crockford originally specified the JSON format in the early 2000s. He and Chip Morningstar sent the first JSON message in April 2001.

Naming and pronunciation

The 2017 international standard (ECMA-404 and ISO/IEC 21778:2017) specifies that "JSON" is "pronounced /ˈdʒeɪ.sən/, as in 'Jason and The Argonauts'". The first (2013) edition of ECMA-404 did not address the pronunciation. The UNIX and Linux System Administration Handbook states, "Douglas Crockford, who named and promoted the JSON format, says it's pronounced like the name Jason. But somehow, 'JAY-sawn' seems to have become more common in the technical community." Crockford said in 2011, "There's a lot of argument about how you pronounce that, but I strictly don't care."

Standards

After RFC 4627 had been available as its "informational" specification since 2006, JSON was first standardized in 2013, as ECMA-404. RFC 8259, published in 2017, is the current version of the Internet Standard STD 90, and it remains consistent with ECMA-404. That same year, JSON was also standardized as ISO/IEC 21778:2017. The ECMA and ISO/IEC standards describe only the allowed syntax, whereas the RFC covers some security and interoperability considerations.

History

JSON grew out of a need for a real-time server-to-browser session communication protocol without using browser plugins such as Flash or Java applets, the dominant methods used in the early 2000s. Crockford first specified and popularized the JSON format. The acronym originated at State Software, a company cofounded by Crockford and others in March 2001. The cofounders agreed to build a system that used standard browser capabilities and provided an abstraction layer for Web developers to create stateful Web applications that had a persistent duplex connection to a Web server by holding two Hypertext Transfer Protocol (HTTP) connections open and recycling them before standard browser time-outs if no further data were exchanged. The cofounders had a round-table discussion and voted on whether to call the data format JSML (JavaScript Markup Language) or JSON (JavaScript Object Notation), as well as under what license type to make it available. The JSON.org website was launched in 2001. In December 2005, Yahoo! began offering some of its Web services in JSON.

A precursor to the JSON libraries was used in a children's digital asset trading game project named Cartoon Orbit at Communities.com, which used a browser-side plug-in with a proprietary messaging format to manipulate DHTML elements. Upon discovery of early Ajax capabilities, digiGroups, Noosh, and others used frames to pass information into the user browsers' visual field without refreshing a Web application's visual context, realizing real-time rich Web applications using only the standard HTTP, HTML, and JavaScript capabilities of Netscape 4.0.5+ and Internet Explorer 5+.
Crockford then found that JavaScript could be used as an object-based messaging format for such a system. The system was sold to Sun Microsystems, Amazon.com, and EDS. JSON was based on a subset of the JavaScript scripting language (specifically, Standard ECMA-262 3rd Edition, December 1999) and is commonly used with JavaScript, but it is a language-independent data format. Code for parsing and generating JSON data is readily available in many programming languages. JSON's website lists JSON libraries by language.

In October 2013, Ecma International published the first edition of its JSON standard, ECMA-404. That same year, RFC 7158 used ECMA-404 as a reference. In 2014, RFC 7159 became the main reference for JSON's Internet uses, superseding RFC 4627 and RFC 7158 (but preserving ECMA-262 and ECMA-404 as main references). In November 2017, ISO/IEC JTC 1/SC 22 published ISO/IEC 21778:2017 as an international standard. On December 13, 2017, the Internet Engineering Task Force obsoleted RFC 7159 when it published RFC 8259, which is the current version of the Internet Standard STD 90.

Crockford added a clause to the JSON license stating, "The Software shall be used for Good, not Evil", in order to open-source the JSON libraries while mocking corporate lawyers and those who are overly pedantic. On the other hand, this clause led to license compatibility problems of the JSON license with other open-source licenses, since open-source software and free software usually imply no restrictions on the purpose of use.

Syntax

The following example shows a possible JSON representation describing a person.

{
  "first_name": "John",
  "last_name": "Smith",
  "is_alive": true,
  "age": 27,
  "address": {
    "street_address": "21 2nd Street",
    "city": "New York",
    "state": "NY",
    "postal_code": "10021-3100"
  },
  "phone_numbers": [
    { "type": "home", "number": "212 555-1234" },
    { "type": "office", "number": "646 555-4567" }
  ],
  "children": [ "Catherine", "Thomas", "Trevor" ],
  "spouse": null
}

Character encoding

Although Crockford originally asserted that JSON is a strict subset of JavaScript and ECMAScript, his specification actually allows valid JSON documents that are not valid JavaScript; JSON allows the Unicode line terminators U+2028 LINE SEPARATOR and U+2029 PARAGRAPH SEPARATOR to appear unescaped in quoted strings, while ECMAScript 2018 and older do not. This is a consequence of JSON disallowing only "control characters". For maximum portability, these characters are backslash-escaped. JSON exchange in an open ecosystem must be encoded in UTF-8. The encoding supports the full Unicode character set, including those characters outside the Basic Multilingual Plane (U+0000 to U+FFFF). However, if escaped, those characters must be written using UTF-16 surrogate pairs. For example, to include the Emoji character in JSON:

{ "face": "😐" }
// or
{ "face": "\uD83D\uDE10" }

JSON became a strict subset of ECMAScript as of the language's 2019 revision.

Data types

JSON's basic data types are:

Number: a signed decimal number that may contain a fractional part and may use exponential E notation but cannot include non-numbers such as NaN. The format makes no distinction between integer and floating-point. JavaScript uses IEEE-754 double-precision floating-point format for all its numeric values (later also supporting BigInt), but other languages implementing JSON may encode numbers differently.
String: a sequence of zero or more Unicode characters. Strings are delimited with double quotation marks and support a backslash escaping syntax.
Boolean: either of the values true or false.
Array: an ordered list of zero or more elements, each of which may be of any type. Arrays use square bracket notation with comma-separated elements.
Object: a collection of name–value pairs where the names (also called keys) are strings. The current ECMA standard states, "The JSON syntax does not impose any restrictions on the strings used as names, does not require that name strings be unique, and does not assign any significance to the ordering of name/value pairs." Objects are delimited with curly brackets and use commas to separate each pair, while within each pair, the colon ':' character separates the key or name from its value.
null: an empty value, using the word null.

Whitespace is allowed and ignored around or between syntactic elements (values and punctuation, but not within a string value). Four specific characters are considered whitespace for this purpose: space, horizontal tab, line feed, and carriage return. In particular, the byte order mark must not be generated by a conforming implementation (though it may be accepted when parsing JSON). JSON does not provide syntax for comments.

Early versions of JSON (such as specified by RFC 4627) required that a valid JSON text must consist of only an object or an array type, which could contain other types within them. This restriction was dropped in RFC 7158, where a JSON text was redefined as any serialized value.

Numbers in JSON are agnostic with regard to their representation within programming languages. While this allows for numbers of arbitrary precision to be serialized, it may lead to portability issues. For example, since no differentiation is made between integer and floating-point values, some implementations may treat 42, 42.0, and 4.2E+1 as the same number, while others may not. The JSON standard makes no requirements regarding implementation details such as overflow, underflow, loss of precision, rounding, or signed zeros, but it does recommend expecting no more than IEEE 754 binary64 precision for "good interoperability". There is no inherent precision loss in serializing a machine-level binary representation of a floating-point number (like binary64) into a human-readable decimal representation (like numbers in JSON) and back, since there exist published algorithms to do this exactly and optimally.

Comments were intentionally excluded from JSON. In 2012, Douglas Crockford described his design decision thus: "I removed comments from JSON because I saw people were using them to hold parsing directives, a practice which would have destroyed interoperability."

JSON disallows "trailing commas", a comma after the last value inside a data structure. Trailing commas are a common feature of JSON derivatives to improve ease of use.

Interoperability

RFC 8259 describes certain aspects of JSON syntax that, while legal per the specifications, can cause interoperability problems.

Certain JSON implementations only accept JSON texts representing an object or an array. For interoperability, applications interchanging JSON should transmit messages that are objects or arrays.
The specifications allow JSON objects that contain multiple members with the same name. The behavior of implementations processing objects with duplicate names is unpredictable. For interoperability, applications should avoid duplicate names when transmitting JSON objects.
The specifications specifically say that the order of members in JSON objects is not significant. For interoperability, applications should avoid assigning meaning to member ordering even if the parsing software makes that ordering visible.
While the specifications place no limits on the magnitude or precision of JSON number literals, the widely used JavaScript implementation stores them as IEEE 754 "binary64" quantities. For interoperability, applications should avoid transmitting numbers that cannot be represented in this way, for example, 1E400 or 3.141592653589793238462643383279.
While the specifications do not constrain the character encoding of the Unicode characters in a JSON text, the vast majority of implementations assume UTF-8 encoding; for interoperability, applications should always and only encode JSON messages in UTF-8.
The specifications do not forbid transmitting byte sequences that incorrectly represent Unicode characters. For interoperability, applications should transmit messages containing no such byte sequences.
The specification does not constrain how applications go about comparing Unicode strings. For interoperability, applications should always perform such comparisons code unit by code unit.

In 2015, the IETF published RFC 7493, describing the "I-JSON Message Format", a restricted profile of JSON that constrains the syntax and processing of JSON to avoid, as much as possible, these interoperability issues.
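These caveats are easy to observe from any mainstream parser. A short demonstration with Python's standard json module, showing one implementation's choices rather than behavior guaranteed by the specifications:

import json

# Duplicate member names: RFC 8259 calls the behavior unpredictable.
# CPython's parser keeps the last occurrence; other parsers may differ.
print(json.loads('{"a": 1, "a": 2}'))   # {'a': 2}

# Numbers beyond IEEE 754 binary64: 1E400 overflows to infinity here,
# and the extra digits of pi are silently rounded to the nearest double.
print(json.loads('1E400'))                             # inf
print(json.loads('3.141592653589793238462643383279'))  # 3.141592653589793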
Semantics

While JSON provides a syntactic framework for data interchange, unambiguous data interchange also requires agreement between producer and consumer on the semantics of specific use of the JSON syntax. One example of where such an agreement is necessary is the serialization of data types that are not part of the JSON standard, for example, dates and regular expressions.

Metadata and schema

The official MIME type for JSON text is application/json, and most modern implementations have adopted this. Legacy MIME types include text/json, text/x-json, and text/javascript. The standard filename extension is .json.

JSON Schema specifies a JSON-based format to define the structure of JSON data for validation, documentation, and interaction control. It provides a contract for the JSON data required by a given application and how that data can be modified. JSON Schema is based on the concepts from XML Schema (XSD) but is JSON-based. As in XSD, the same serialization/deserialization tools can be used both for the schema and data, and it is self-describing. It is specified in an Internet Draft at the IETF, with the latest version as of 2024 being "Draft 2020-12". There are several validators available for different programming languages, each with varying levels of conformance. The JSON standard does not support object references, but an IETF draft standard for JSON-based object references exists.

Uses

JSON-RPC is a remote procedure call (RPC) protocol built on JSON, as a replacement for XML-RPC or SOAP. It is a simple protocol that defines only a handful of data types and commands. JSON-RPC lets a system send notifications (information to the server that does not require a response) and multiple calls to the server that can be answered out of order.

Asynchronous JavaScript and JSON (or AJAJ) refers to the same dynamic web page methodology as Ajax, but instead of XML, JSON is the data format. AJAJ is a web development technique that provides for the ability of a web page to request new data after it has loaded into the web browser. Typically, it renders new data from the server in response to user actions on that web page. For example, client-side code may send what the user types into a search box to the server, which immediately responds with a drop-down list of matching database items.
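As an illustration of the JSON-RPC protocol mentioned above, here is a minimal JSON-RPC 2.0 request/response pair built with Python's standard library; the method name and parameters are illustrative, and transport (HTTP, sockets, and so on) is out of scope:

import json

request = json.dumps({
    "jsonrpc": "2.0",
    "method": "subtract",   # hypothetical method exposed by some server
    "params": [42, 23],
    "id": 1,
})
response = json.dumps({"jsonrpc": "2.0", "result": 19, "id": 1})

# A notification carries no "id" and expects no response:
notification = json.dumps({"jsonrpc": "2.0", "method": "heartbeat"})

print(json.loads(request)["method"], json.loads(response)["result"])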
JSON has seen ad hoc usage as a configuration language. However, it does not support comments. In 2012, Douglas Crockford, JSON's creator, had this to say about comments in JSON when used as a configuration language: "I know that the lack of comments makes some people sad, but it shouldn't. Suppose you are using JSON to keep configuration files, which you would like to annotate. Go ahead and insert all the comments you like. Then pipe it through JSMin before handing it to your JSON parser."

MongoDB uses JSON-like data for its document-oriented database. Some relational databases, such as PostgreSQL and MySQL, have added support for native JSON data types. This allows developers to store JSON data directly in a relational database without having to convert it to another data format.

Safety

JSON being a subset of JavaScript can lead to the misconception that it is safe to pass JSON texts to the JavaScript eval() function. This is not safe, because certain valid JSON texts, specifically those containing U+2028 LINE SEPARATOR or U+2029 PARAGRAPH SEPARATOR, were not valid JavaScript code until JavaScript specifications were updated in 2019, and so older engines may not support them. To avoid the many pitfalls caused by executing arbitrary code from the Internet, a new function, JSON.parse(), was first added to the fifth edition of ECMAScript, which as of 2017 is supported by all major browsers. For non-supported browsers, an API-compatible JavaScript library is provided by Douglas Crockford. In addition, the TC39 proposal "Subsume JSON" made ECMAScript a strict JSON superset as of the language's 2019 revision. Various JSON parser implementations have suffered from denial-of-service attacks and mass assignment vulnerabilities.

Alternatives

JSON is promoted as a low-overhead alternative to XML, as both of these formats have widespread support for creation, reading, and decoding in the real-world situations where they are commonly used. Apart from XML, examples could include CSV and supersets of JSON. Google Protocol Buffers can fill this role, although it is not a data interchange language. CBOR has a superset of the JSON data types, but it is not text-based. Ion is also a superset of JSON, with a wider range of primary types, annotations, comments, and allowing trailing commas.

XML

XML has been used to describe structured data and to serialize objects. Various XML-based protocols exist to represent the same kind of data structures as JSON for the same kind of data interchange purposes. Data can be encoded in XML in several ways. The most expansive form using tag pairs results in a much larger (in character count) representation than JSON, but if data is stored in attributes and 'short tag' form where the closing tag is replaced with />, the representation is often about the same size as JSON or just a little larger. However, an XML attribute can only have a single value and each attribute can appear at most once on each element.

XML separates "data" from "metadata" (via the use of elements and attributes), while JSON does not have such a concept. Another key difference is the addressing of values. JSON has objects with a simple "key" to "value" mapping, whereas in XML addressing happens on "nodes", which all receive a unique ID via the XML processor. Additionally, the XML standard defines a common attribute, xml:id, that can be used by the user to set an ID explicitly.
XML tag names cannot contain any of the characters !"#$%&'()*+,/;<=>?@[\]^`{|}~, nor a space character, and cannot begin with -, ., or a numeric digit, whereas JSON keys can (even if quotation mark and backslash must be escaped). XML values are strings of characters, with no built-in type safety. XML has the concept of schema, which permits strong typing, user-defined types, predefined tags, and formal structure, allowing for formal validation of an XML stream. JSON has several types built in and has a similar schema concept in JSON Schema. XML supports comments, while JSON does not.

Supersets

Support for comments and other features has been deemed useful, which has led to several nonstandard JSON supersets being created. Among them are HJSON, HOCON, and JSON5 (which, despite its name, is not the fifth version of JSON).

YAML

YAML version 1.2 is a superset of JSON; prior versions were not strictly compatible. For example, escaping a slash with a backslash is valid in JSON, but was not valid in YAML. YAML supports comments, while JSON does not.

CSON

CSON ("CoffeeScript Object Notation") uses significant indentation, unquoted keys, and assumes an outer object declaration. It was used for configuring GitHub's Atom text editor. There is also an unrelated project called CSON ("Cursive Script Object Notation") that is more syntactically similar to JSON.

HOCON

HOCON ("Human-Optimized Config Object Notation") is a format for human-readable data, and a superset of JSON. Its uses include:

It is primarily used in conjunction with the Play framework, and is developed by Lightbend. It is also supported as a configuration format for .NET projects via Akka.NET and Puppet.
TIBCO Streaming: HOCON is the primary configuration file format for the TIBCO Streaming family of products (StreamBase, LiveView, and Artifact Management Server) as of TIBCO Streaming Release 10.
It is also the primary configuration file format for several subsystems of Exabeam Advanced Analytics.
Jitsi uses it as the "new" config system, with .properties files as a fallback.

JSON5

JSON5 ("JSON5 Data Interchange Format") is an extension of JSON syntax that, just like JSON, is also valid JavaScript syntax. The specification was started in 2012 and finished in 2018 with version 1.0.0. The main differences to JSON syntax are:

Optional trailing commas
Unquoted object keys
Single-quoted and multiline strings
Additional number formats
Comments

JSON5 syntax is supported in some software as an extension of JSON syntax, for instance in SQLite.

JSONC

JSONC (JSON with Comments) is a subset of JSON5 used in Microsoft's Visual Studio Code:

it supports single-line comments (//) and block comments (/* */)
it accepts trailing commas, but they are discouraged and the editor will display a warning
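A strict JSON parser rejects exactly the conveniences these supersets add. A quick check with Python's standard json module:

import json

for text in ('{"a": 1,}',            # trailing comma (JSON5 allows it)
             '{"a": 1} // comment',  # comment (JSONC/JSON5 allow it)
             "{'a': 1}"):            # single-quoted strings (JSON5)
    try:
        json.loads(text)
    except json.JSONDecodeError as err:
        print(f"rejected {text!r}: {err.msg}")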
Derivatives

Several serialization formats have been built on or from the JSON specification. Examples include:

GeoJSON, a format designed for representing simple geographical features
JSON-LD, a method of encoding linked data using JSON
JSON-RPC, a remote procedure call protocol encoded in JSON
JsonML, a lightweight markup language used to map between XML and JSON
Smile (data interchange format)
UBJSON, a binary computer data interchange format imitating JSON, but requiring fewer bytes of data

See also
BSON
Comparison of data serialization formats
Amazon Ion, a superset of JSON (though limited to UTF-8, like JSON for interchange, unlike general JSON)
Jackson (API)
jaql – a functional data processing and query language most commonly used for JSON query processing
jq – a "JSON query language" and high-level programming language
JSONiq – a JSON-oriented query and processing language based on XQuery
JSON streaming
S-expression

Notes

References

External links

Ajax (programming) Articles with example JavaScript code Computer-related introductions in 2001 Ecma standards ISO standards Markup languages Open formats
JSON
[ "Technology" ]
4,761
[ "Computer standards", "Ecma standards" ]
1,575,127
https://en.wikipedia.org/wiki/Mimesis%20%28mathematics%29
In mathematics, mimesis is the quality of a numerical method which imitates some properties of the continuum problem. The goal of numerical analysis is to approximate the continuum, so instead of solving a partial differential equation one aims to solve a discrete version of the continuum problem. Properties of the continuum problem commonly imitated by numerical methods are conservation laws, solution symmetries, and fundamental identities and theorems of vector and tensor calculus like the divergence theorem. Both finite difference and finite element methods can be mimetic; it depends on the properties that the method has. For example, a mixed finite element method applied to Darcy flows strictly conserves the mass of the flowing fluid. The term geometric integration denotes the same philosophy.

References

Numerical differential equations
Mimesis (mathematics)
[ "Mathematics" ]
152
[ "Applied mathematics", "Applied mathematics stubs" ]
1,575,168
https://en.wikipedia.org/wiki/Subgiant
A subgiant is a star that is brighter than a normal main-sequence star of the same spectral class, but not as bright as giant stars. The term subgiant is applied both to a particular spectral luminosity class and to a stage in the evolution of a star.

Yerkes luminosity class IV

The term subgiant was first used in 1930 for class G and early K stars with absolute magnitudes between +2.5 and +4. These were noted as being part of a continuum of stars between obvious main-sequence stars such as the Sun and obvious giant stars such as Aldebaran, although less numerous than either the main-sequence or the giant stars.

The Yerkes spectral classification system is a two-dimensional scheme that uses a letter and number combination to denote the temperature of a star (e.g. A5 or M1) and a Roman numeral to indicate the luminosity relative to other stars of the same temperature. Luminosity class IV stars are the subgiants, located between main-sequence stars (luminosity class V) and red giants (luminosity class III).

Rather than defining absolute features, a typical approach to determining a spectral luminosity class is to compare similar spectra against standard stars. Many line ratios and profiles are sensitive to gravity, and therefore make useful luminosity indicators, but some of the most useful spectral features for each spectral class are:

O: relative strength of N emission and He absorption; strong emission is more luminous
B: Balmer line profiles, and strength of O lines
A: Balmer line profiles; broader wings mean less luminous
F: line strengths of Fe, Ti, and Sr
G: Sr and Fe line strengths, and wing widths in the Ca H and K lines
K: Ca H and K line profiles, Sr/Fe line ratios, and MgH and TiO line strengths
M: strength of the 422.6 nm Ca line and TiO bands

Morgan and Keenan listed examples of stars in luminosity class IV when they established the two-dimensional classification scheme:

B0: γ Cassiopeiae, δ Scorpii
B0.5: β Scorpii
B1: ο Persei, β Cephei
B2: γ Orionis, π Scorpii, θ Ophiuchi, λ Scorpii
B2.5: γ Pegasi, ζ Cassiopeiae
B3: ι Herculis
B5: τ Herculis
A2: β Aurigae, λ Ursae Majoris, β Serpentis
A3: δ Herculis
F2: δ Geminorum, ζ Serpentis
F5: Procyon, 110 Herculis
F6: τ Boötis, θ Boötis, γ Serpentis
F8: 50 Andromedae, θ Draconis
G0: η Boötis, ζ Herculis
G2: μ2 Cancri
G5: μ Herculis
G8: β Aquilae
K0: η Cephei
K1: γ Cephei

Later analysis showed that some of these were blended spectra from double stars and some were variable, and the standards have been expanded to many more stars, but many of the original stars are still considered standards of the subgiant luminosity class. O-class stars and stars cooler than K1 are rarely given subgiant luminosity classes.

Subgiant branch

The subgiant branch is a stage in the evolution of low- to intermediate-mass stars. Stars with a subgiant spectral type are not always on the evolutionary subgiant branch, and vice versa. For example, the stars FK Com and 31 Com both lie in the Hertzsprung Gap and are likely evolutionary subgiants, but both are often assigned giant luminosity classes. The spectral classification can be influenced by metallicity, rotation, unusual chemical peculiarities, etc. The initial stages of the subgiant branch in a star like the Sun are prolonged, with little external indication of the internal changes. Approaches to identifying evolutionary subgiants include chemical abundances, such as lithium, which is depleted in subgiants, and coronal emission strength.
As the fraction of hydrogen remaining in the core of a main-sequence star decreases, the core temperature increases and so the rate of fusion increases. This causes stars to evolve slowly to higher luminosities as they age and broadens the main-sequence band in the Hertzsprung–Russell diagram.

Once a main-sequence star ceases to fuse hydrogen in its core, the core begins to collapse under its own weight. This causes it to increase in temperature, and hydrogen fuses in a shell outside the core, which provides more energy than core hydrogen burning. Low- and intermediate-mass stars expand and cool until, at about 5,000 K, they begin to increase in luminosity in a stage known as the red-giant branch. The transition from the main sequence to the red-giant branch is known as the subgiant branch. The shape and duration of the subgiant branch varies for stars of different masses, due to differences in the internal configuration of the star.

Very-low-mass stars

Stars less massive than about are convective throughout most of the star. These stars continue to fuse hydrogen in their cores until essentially the entire star has been converted to helium, and they do not develop into subgiants. Stars of this mass have main-sequence lifetimes many times longer than the current age of the Universe.

to

Stars with 40 percent the mass of the Sun and larger have non-convective cores with a strong temperature gradient from the centre outwards. When they exhaust hydrogen at the core of the star, the shell of hydrogen surrounding the central core continues to fuse without interruption. The star is considered to be a subgiant at this point, although there is little change visible from the exterior.

As the fusing hydrogen shell converts its mass into helium, the helium is separated towards the core, where it very slowly increases the mass of the non-fusing core of nearly pure helium plasma. As this takes place, the fusing hydrogen shell gradually migrates outward, which swells the outer envelope of the star to subgiant size, from two to ten times the original radius of the star when it was on the main sequence. The expansion of the outer layers into the subgiant size nearly balances the increased energy generated by hydrogen shell fusion, so the star nearly maintains its surface temperature, and the spectral class of the star changes very little at the lower end of this range of star mass. Because the subgiant surface area radiating the energy is so much larger, the circumstellar habitable zone, where planetary orbits will be in the range to form liquid water, is shifted much farther out into any planetary system. The surface area of a sphere is 4πr², so a sphere with twice the radius will release 400% as much energy at the surface, and a sphere with ten times the radius 10,000% as much (see the sketch at the end of this subsection).

The helium core mass is below the Schönberg–Chandrasekhar limit and remains in thermal equilibrium with the fusing hydrogen shell. Its mass continues to increase, and the star very slowly expands as the hydrogen shell migrates outwards. Any increase in energy output from the shell goes into expanding the envelope of the star, and the luminosity stays approximately constant. The subgiant branch for these stars is short, horizontal, and heavily populated, as visible in very old clusters.

After one to eight billion years, the helium core becomes too massive to support its own weight and becomes degenerate. Its temperature increases, the rate of fusion in the hydrogen shell increases, the outer layers become strongly convective, and the luminosity increases at approximately the same effective temperature. The star is now on the red-giant branch.
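The surface-area scaling quoted above follows directly from the Stefan–Boltzmann law, L = 4πR²σT⁴. A small Python sketch, using standard solar constants, with the factor-of-two radius simply an illustrative choice:

import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
R_SUN, L_SUN, T_SUN = 6.957e8, 3.828e26, 5772.0  # metres, watts, kelvin

def luminosity(radius_m, t_eff_k):
    # L = 4 * pi * R^2 * sigma * T^4: at fixed surface temperature,
    # luminosity scales with surface area, i.e. with radius squared.
    return 4.0 * math.pi * radius_m**2 * SIGMA * t_eff_k**4

# A subgiant swollen to twice the Sun's radius at the same temperature:
L2 = luminosity(2 * R_SUN, T_SUN)
print(L2 / L_SUN)             # ~4, i.e. 400% of the solar output
# The liquid-water distance scales as sqrt(L), so the habitable zone
# around such a star sits roughly twice as far out:
print(math.sqrt(L2 / L_SUN))  # ~2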
Mass

Stars as massive as the Sun and larger have a convective core on the main sequence. They develop a more massive helium core, taking up a larger fraction of the star, before they exhaust the hydrogen in the entire convective region. Fusion in the star then ceases entirely and the core begins to contract and increase in temperature. The entire star contracts and increases in temperature, with the radiated luminosity actually increasing despite the lack of fusion. This continues for several million years before the core becomes hot enough to ignite hydrogen in a shell, which reverses the temperature and luminosity increase, and the star starts to expand and cool. This hook is generally defined as the end of the main sequence and the start of the subgiant branch in these stars.

The core of stars below about is still below the Schönberg–Chandrasekhar limit, but hydrogen shell fusion quickly increases the mass of the core beyond that limit. More-massive stars already have cores above the Schönberg–Chandrasekhar mass when they leave the main sequence. The exact initial mass at which stars will show a hook and at which they will leave the main sequence with cores above the Schönberg–Chandrasekhar limit depends on the metallicity and the degree of overshooting in the convective core. Low metallicity causes the central part of even low-mass cores to be convectively unstable, and overshooting causes the core to be larger when hydrogen becomes exhausted.

Once the core exceeds the Schönberg–Chandrasekhar limit, it can no longer remain in thermal equilibrium with the hydrogen shell. It contracts, and the outer layers of the star expand and cool. The energy required to expand the outer envelope causes the radiated luminosity to decrease. When the outer layers cool sufficiently, they become opaque and force convection to begin outside the fusing shell. The expansion stops, and the radiated luminosity begins to increase, which is defined as the start of the red-giant branch for these stars. Stars with an initial mass of approximately can develop a degenerate helium core before this point, and that will cause the star to enter the red-giant branch as for lower-mass stars.

The core contraction and envelope expansion is very rapid, taking only a few million years. In this time the temperature of the star will cool from its main-sequence value of 6,000–30,000 K to around 5,000 K. Relatively few stars are seen in this stage of their evolution, and there is an apparent gap in the H–R diagram known as the Hertzsprung gap. It is most obvious in clusters from a few hundred million to a few billion years old.

Massive stars

Beyond about , depending on metallicity, stars have hot massive convective cores on the main sequence due to CNO cycle fusion. Hydrogen shell fusion and subsequent core helium fusion begin soon after core hydrogen exhaustion, before the star can reach the red-giant branch. Such stars, for example early B main-sequence stars, experience a brief and shortened subgiant branch before becoming supergiants. They may also be assigned a giant spectral luminosity class during this transition.
In very massive O-class main-sequence stars, the transition from main sequence to giant to supergiant occurs over a very narrow range of temperature and luminosity, sometimes even before core hydrogen fusion has ended, and the subgiant class is rarely used. Values for the surface gravity, log(g), of O-class stars are around 3.6 cgs for giants and 3.9 for dwarfs. For comparison, typical log(g) values for K-class stars are 1.59 (Aldebaran) and 4.37 (α Centauri B), leaving plenty of scope to classify subgiants such as η Cephei, with log(g) of 3.47. Examples of massive subgiant stars include θ2 Orionis A and the primary star of the δ Circini system, both class O stars with masses of over .

Properties

This table shows the typical lifetimes on the main sequence (MS) and subgiant branch (SB), as well as any hook duration between core hydrogen exhaustion and the onset of shell burning, for stars with different initial masses, all at solar metallicity (Z = 0.02). Also shown are the helium core mass, surface effective temperature, radius, and luminosity at the start and end of the subgiant branch for each star. The end of the subgiant branch is defined to be when the core becomes degenerate or when the luminosity starts to increase.

In general, stars with lower metallicity are smaller and hotter than stars with higher metallicity. For subgiants, this is complicated by different ages and core masses at the main-sequence turnoff. Low-metallicity stars develop a larger helium core before leaving the main sequence, hence lower-mass stars show a hook at the start of the subgiant branch. The helium core mass of a Z=0.001 (extreme population II) star at the end of the main sequence is nearly double that of a Z=0.02 (population I) star. The low-metallicity star is also over 1,000 K hotter and over twice as luminous at the start of the subgiant branch. The difference in temperature is less pronounced at the end of the subgiant branch, but the low-metallicity star is larger and nearly four times as luminous. Similar differences exist in the evolution of stars with other masses, and key values such as the mass of a star that will become a supergiant instead of reaching the red-giant branch are lower at low metallicity.

Subgiants in the H–R diagram

A Hertzsprung–Russell (H–R) diagram is a scatter plot of stars with temperature or spectral type on the x-axis and absolute magnitude or luminosity on the y-axis. H–R diagrams of all stars show a clear diagonal main-sequence band containing the majority of stars, a significant number of red giants (and white dwarfs, if sufficiently faint stars are observed), with relatively few stars in other parts of the diagram.

Subgiants occupy a region above (i.e. more luminous than) the main-sequence stars and below the giant stars. There are relatively few on most H–R diagrams because the time spent as a subgiant is much less than the time spent on the main sequence or as a giant star. Hot, class B, subgiants are barely distinguishable from the main-sequence stars, while cooler subgiants fill a relatively large gap between cool main-sequence stars and the red giants. Below approximately spectral type K3 the region between the main sequence and red giants is entirely empty, with no subgiants.

Stellar evolutionary tracks can be plotted on an H–R diagram. For a particular mass, these trace the position of a star throughout its life, and show a track from the initial main-sequence position, along the subgiant branch, to the giant branch.
When an H–R diagram is plotted for a group of stars which all have the same age, such as a cluster, the subgiant branch may be visible as a band of stars between the main-sequence turnoff point and the red-giant branch. The subgiant branch is only visible if the cluster is sufficiently old that stars have evolved away from the main sequence, which requires several billion years. Globular clusters such as ω Centauri and old open clusters such as M67 are sufficiently old that they show a pronounced subgiant branch in their color–magnitude diagrams. ω Centauri actually shows several separate subgiant branches, for reasons that are still not fully understood but appear to represent stellar populations of different ages within the cluster.

Variability

Several types of variable star include subgiants:

Beta Cephei variables, early B main-sequence and subgiant stars
Slowly pulsating B-type stars, mid to late B main-sequence and subgiant stars
Delta Scuti variables, late A and early F main-sequence and subgiant stars

Subgiants more massive than the Sun cross the Cepheid instability strip, called the first crossing since they may cross the strip again later on a blue loop. In the range, this includes Delta Scuti variables such as β Cas. At higher masses the stars would pulsate as Classical Cepheid variables while crossing the instability strip, but massive subgiant evolution is very rapid and it is difficult to detect examples. SV Vulpeculae has been proposed as a subgiant on its first crossing, but was subsequently determined to be on its second crossing.

Planets

Planets in orbit around subgiant stars include Kappa Andromedae b, Kepler-36 b and c, TOI-4603 b, and HD 224693 b.

References

Bibliography

External links
Post-main sequence evolution through helium burning
Long period variables – period luminosity relations and classification in the Gaia Mission

Star types
Subgiant
[ "Astronomy" ]
3,443
[ "Star types", "Astronomical classification systems" ]
1,575,176
https://en.wikipedia.org/wiki/List%20of%20style%20sheet%20languages
The following is a list of style sheet languages. Standard Cascading Style Sheets (CSS) Document Style Semantics and Specification Language (DSSSL) Extensible Stylesheet Language (XSL) Non-standard JavaScript Style Sheets (JSSS) Formatting Output Specification Instance (FOSI) Syntactically Awesome Style Sheets (Sass) Less (Less) Stylus SMIL Timesheets Stylesheet languages
List of style sheet languages
[ "Technology" ]
91
[ "Computing-related lists", "Lists of computer languages" ]
1,575,447
https://en.wikipedia.org/wiki/Shear%20modulus
In materials science, shear modulus or modulus of rigidity, denoted by G, or sometimes S or μ, is a measure of the elastic shear stiffness of a material and is defined as the ratio of shear stress to the shear strain: G = τ_xy/γ_xy = (F/A)/(Δx/l) = Fl/(A Δx), where τ_xy = F/A is the shear stress, F is the force which acts, A is the area on which the force acts, and γ_xy = Δx/l is the shear strain. In engineering, γ = Δx/l = tan θ; elsewhere, γ = θ. Here Δx is the transverse displacement and l is the initial length of the area. The derived SI unit of shear modulus is the pascal (Pa), although it is usually expressed in gigapascals (GPa) or in thousand pounds per square inch (ksi). Its dimensional form is M1L−1T−2, replacing force by mass times acceleration. Explanation The shear modulus is one of several quantities for measuring the stiffness of materials. All of them arise in the generalized Hooke's law: Young's modulus E describes the material's strain response to uniaxial stress in the direction of this stress (like pulling on the ends of a wire or putting a weight on top of a column, with the wire getting longer and the column losing height), the Poisson's ratio ν describes the response in the directions orthogonal to this uniaxial stress (the wire getting thinner and the column thicker), the bulk modulus K describes the material's response to (uniform) hydrostatic pressure (like the pressure at the bottom of the ocean or a deep swimming pool), the shear modulus G describes the material's response to shear stress (like cutting it with dull scissors). These moduli are not independent, and for isotropic materials they are connected via the equations E = 2G(1 + ν) and E = 3K(1 − 2ν). The shear modulus is concerned with the deformation of a solid when it experiences a force parallel to one of its surfaces while its opposite face experiences an opposing force (such as friction). In the case of an object shaped like a rectangular prism, it will deform into a parallelepiped. Anisotropic materials such as wood, paper and also essentially all single crystals exhibit differing material response to stress or strain when tested in different directions. In this case, one may need to use the full tensor-expression of the elastic constants, rather than a single scalar value. One possible definition of a fluid would be a material with zero shear modulus. Shear waves In homogeneous and isotropic solids, there are two kinds of waves, pressure waves and shear waves. The velocity of a shear wave, v_s, is controlled by the shear modulus: v_s = √(G/ρ), where G is the shear modulus and ρ is the solid's density. Shear modulus of metals The shear modulus of metals is usually observed to decrease with increasing temperature. At high pressures, the shear modulus also appears to increase with the applied pressure. Correlations between the melting temperature, vacancy formation energy, and the shear modulus have been observed in many metals. Several models exist that attempt to predict the shear modulus of metals (and possibly that of alloys). Shear modulus models that have been used in plastic flow computations include: the Varshni-Chen-Gray model, used in conjunction with the Mechanical Threshold Stress (MTS) plastic flow stress model; the Steinberg-Cochran-Guinan (SCG) shear modulus model, used in conjunction with the Steinberg-Cochran-Guinan-Lund (SCGL) flow stress model; and the Nadal and LePoac (NP) shear modulus model that uses Lindemann theory to determine the temperature dependence and the SCG model for pressure dependence of the shear modulus.
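Before turning to the specific metal models below, here is a minimal Python sketch of the defining relations above. The sample dimensions, force, and density are made-up illustrative values (chosen so that G comes out near a steel-like 80 GPa), not measurements from any referenced source:

```python
import math

# Hypothetical simple shear test on a block of material (illustrative values).
F = 2.0e4    # tangential force applied to the top face, N
A = 1.0e-2   # area of the face the force acts on, m^2
dx = 2.5e-6  # transverse displacement of the top face, m
l = 0.1      # initial height of the sample, m

tau = F / A       # shear stress tau_xy = F/A, in Pa
gamma = dx / l    # shear strain gamma_xy = dx/l, dimensionless
G = tau / gamma   # shear modulus G = tau/gamma, in Pa

rho = 7850.0                # density, kg/m^3 (roughly that of steel)
v_s = math.sqrt(G / rho)    # shear-wave speed v_s = sqrt(G/rho), m/s

print(f"G = {G / 1e9:.1f} GPa, shear-wave speed = {v_s:.0f} m/s")
```

With these numbers the script prints G = 80.0 GPa and a shear-wave speed of about 3,200 m/s, which is in the right ballpark for steel.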
Varshni-Chen-Gray model The Varshni-Chen-Gray model (sometimes referred to as the Varshni equation) has the form: μ(T) = μ0 − D/(exp(T0/T) − 1), where μ0 is the shear modulus at 0 K, and D and T0 are material constants. SCG model The Steinberg-Cochran-Guinan (SCG) shear modulus model is pressure dependent and has the form μ(p,T) = μ0 + (∂μ/∂p) p/η^(1/3) + (∂μ/∂T)(T − 300), where η = ρ/ρ0 is the compression, μ0 is the shear modulus at the reference state (T = 300 K, p = 0, η = 1), p is the pressure, and T is the temperature. NP model The Nadal-Le Poac (NP) shear modulus model is a modified version of the SCG model. The empirical temperature dependence of the shear modulus in the SCG model is replaced with an equation based on Lindemann melting theory. The NP shear modulus model has the form: μ(p,T) = (1/J(T̂)) [(μ0 + (∂μ/∂p) p/η^(1/3))(1 − T̂) + (ρ/(C m)) k_b T], with C = ((6π²)^(2/3)/3) f², where J(T̂) = 1 + exp[−(1 + 1/ζ)/(1 + ζ/(1 − T̂))] for T̂ = T/T_m ∈ [0, 1 + ζ], μ0 is the shear modulus at absolute zero and ambient pressure, T_m is the melting temperature, ζ is a material parameter, k_b is the Boltzmann constant, m is the atomic mass, and f is the Lindemann constant. Shear relaxation modulus The shear relaxation modulus G(t) is the time-dependent generalization of the shear modulus G. See also Elasticity tensor Dynamic modulus Impulse excitation technique Shear strength Seismic moment References Materials science Shear strength Elasticity (physics) Mechanical quantities
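The two simpler model forms above translate directly into code. The following hedged Python sketch implements the Varshni-Chen-Gray and SCG expressions exactly as written; the parameter values passed at the bottom are placeholders for illustration, not calibrated constants for any real metal:

```python
import math

def varshni_shear_modulus(T, mu0, D, T0):
    """Varshni-Chen-Gray form: mu(T) = mu0 - D / (exp(T0/T) - 1)."""
    return mu0 - D / (math.exp(T0 / T) - 1.0)

def scg_shear_modulus(p, T, mu0, dmu_dp, dmu_dT, eta):
    """SCG form: mu(p,T) = mu0 + (dmu/dp) p / eta**(1/3) + (dmu/dT) (T - 300).

    eta = rho/rho0 is the compression; the reference state is
    T = 300 K, p = 0, eta = 1, where mu equals mu0.
    """
    return mu0 + dmu_dp * p / eta ** (1.0 / 3.0) + dmu_dT * (T - 300.0)

# Placeholder parameters, illustrative only (moduli in Pa, T in K, p in Pa).
print(varshni_shear_modulus(T=500.0, mu0=48.0e9, D=3.0e9, T0=200.0))
print(scg_shear_modulus(p=1.0e9, T=500.0, mu0=48.0e9,
                        dmu_dp=1.8, dmu_dT=-2.0e7, eta=1.02))
```

Note that dmu_dT is negative in the example call, matching the observation above that the shear modulus of metals decreases with increasing temperature.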
Shear modulus
[ "Physics", "Materials_science", "Mathematics", "Engineering" ]
999
[ "Structural engineering", "Physical phenomena", "Mechanical quantities", "Applied and interdisciplinary physics", "Physical quantities", "Elasticity (physics)", "Deformation (mechanics)", "Quantity", "Shear strength", "Materials science", "Mechanics", "nan", "Mechanical engineering", "Phys...
1,575,583
https://en.wikipedia.org/wiki/Acid%20gas
Acid gas is a particular typology of natural gas or any other gas mixture containing significant quantities of hydrogen sulfide (H2S), carbon dioxide (CO2), or similar acidic gases. A gas is determined to be acidic or not after it is mixed with water. The pH scale ranges from 0 to 14; anything above 7 is basic, while anything below 7 is acidic. Water has a neutral pH of 7, so once a gas is mixed with water, if the resulting mixture has a pH of less than 7, it is an acidic gas; if the pH is more than 7, it is an alkaline gas. The terms acid gas and sour gas are often incorrectly treated as synonyms. Strictly speaking, a sour gas is any gas that specifically contains hydrogen sulfide in significant amounts; an acid gas is any gas that contains significant amounts of acidic gases such as carbon dioxide (CO2) or hydrogen sulfide. Thus, carbon dioxide by itself is an acid gas but not a sour gas. Dangers of acid gas Once a process burns a gas containing an acidic mixture, that acid gas is released into the atmosphere. This causes one of manufacturing's most detrimental effects on the environment: acid rain. The acidic gases burned from one power plant can travel hundreds of miles after the gas mixes with water molecules in the atmosphere. The compounds then fall to the earth again in different forms of precipitation (acid rain) and can cause respiratory health issues in humans, kill plants and wildlife, erode structures and buildings, and contaminate water sources. Acid gases are also hazardous in ways other than polluting the environment. Acid gases can be extremely flammable and explosive under pressure, so they must be kept away from heat, sparks, or open flames. Hydrogen sulfide is a toxic gas; it can cause breathing problems, asphyxiation, and death. It is also very corrosive to metals, which restricts the materials that can be used for piping and other equipment for handling sour gas, as many metals are sensitive to sulfide stress cracking. Carbon dioxide at concentrations of 7% to 10.1% causes dizziness, headache, visual and hearing dysfunction, and unconsciousness within a few minutes to an hour. Concentrations above 17% are lethal with exposure of more than one minute. Processing and safety Before a raw natural gas containing hydrogen sulfide and/or carbon dioxide can be used, the raw gas must be treated to reduce impurities to acceptable levels, and this is commonly done with an amine gas treating process. There are physical and chemical absorption processes for removing the toxic properties of these gases, both of which involve the syngas being washed with a lean solvent in an absorber to remove the H2S. Once the toxic gas leaves the bottom of the absorber, it is sent to a regenerator where the solution is further stripped with steam at much lower pressures to remove the sulfur from the gas. The removed H2S is most often subsequently converted to by-product elemental sulfur in a Claus process or alternatively converted to valuable sulfuric acid in a WSA Process unit. Processes within oil refineries or natural-gas processing plants that remove mercaptans and/or hydrogen sulfide are commonly referred to as "sweetening" processes because they result in products which no longer have the sour, foul odors of mercaptans and hydrogen sulfide. See also Oil refinery Rectisol Selexol References Natural gas Oil refining Industrial gases
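As a trivial illustration of the pH rule described above, a short Python helper (the function name is made up for this sketch; the thresholds simply restate the definition):

```python
def classify_gas_by_solution_ph(ph: float) -> str:
    """Classify a gas by the pH of its mixture with water (pH 7 = neutral)."""
    if not 0.0 <= ph <= 14.0:
        raise ValueError("pH is expected to lie on the 0-14 scale")
    if ph < 7.0:
        return "acidic gas"   # solution is acidic, so the gas is an acid gas
    if ph > 7.0:
        return "alkaline gas"
    return "neutral"

print(classify_gas_by_solution_ph(3.9))  # CO2 dissolved in water gives an acidic solution
```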
Acid gas
[ "Chemistry" ]
709
[ "Chemical process engineering", "Petroleum technology", "Industrial gases", "Oil refining" ]
1,575,603
https://en.wikipedia.org/wiki/Metal%20Gear%20Solid%204%3A%20Guns%20of%20the%20Patriots
Metal Gear Solid 4: Guns of the Patriots is a 2008 action-adventure stealth video game developed by Kojima Productions and published by Konami for the PlayStation 3. It is the sixth Metal Gear game directed by Hideo Kojima. Set five years after the events of Metal Gear Solid 2: Sons of Liberty, the story centers on a prematurely aged Solid Snake, now known as Old Snake, as he goes on one last mission to assassinate his nemesis Liquid Snake, who now inhabits the body of his former henchman Revolver Ocelot under the guise of Liquid Ocelot, before he takes control of the Sons of the Patriots, an A.I. system that controls the activities of PMCs worldwide. The game was released on June 12, 2008. Guns of the Patriots received universal acclaim, with praise for its gameplay, graphics, characters, and emotional weight, while criticism centered on its plot, which was seen as convoluted, and its emphasis on cutscenes. The game garnered Game of the Year awards from several major gaming publications. It is one of the most significant titles for the seventh generation of video game consoles, as its release caused a boost in sales of the PlayStation 3, and the game had sold six million copies worldwide by 2014. Gameplay In Guns of the Patriots, players assume the role of an aged Solid Snake (colloquially referred to as Old Snake), using stealth, close-quarters combat, and traditional Metal Gear combat. The overhead third-person view camera of earlier games has been replaced by a streamlined view and over-the-shoulder camera for aiming a weapon, with an optional first-person view like a first-person shooter at the toggle of a button. A further addition to gameplay mechanics is the Psyche Meter. Psyche is decreased by non-lethal attacks and is influenced by battlefield psychology. Stressors (including temperature extremes, foul smells, and being hunted by the enemy) increase Snake's stress gauge, eventually depleting his Psyche. Adverse effects include difficulty in aiming, more frequent back pain, and the possibility of Snake passing out upon receiving damage. If Snake kills too many enemies in a short amount of time, he will have a vision of Liquid and vomit, greatly reducing his Psyche. Among the available methods of restoring Psyche are eating, drinking, smoking, and reading an adult magazine. Snake has a few gadgets to aid him in battle. The OctoCamo suit mimics the appearance and texture of any surface in a similar fashion to an octopus, decreasing the probability of Snake being noticed. Additionally, FaceCamo is made available to players after they defeat Laughing Octopus. FaceCamo can be worn by Solid Snake on his face, and it can be set to either work in tandem with the OctoCamo or instead mimic the face of other in-game characters. To get access to these unique FaceCamos, players have to complete certain in-game requirements first. When the FaceCamo is worn with OctoCamo, under ideal conditions, Snake's stealth quotient can reach 100%. The Solid Eye device highlights items and enemies and can operate in a night vision and a binocular mode. It also offers a baseline map, which indicates the location of nearby units. The latter function is also performed by the Threat Ring, a visualization of Snake's senses that deforms based on nearby unit proximity and relays them to the player. Metal Gear Mk.II (later replaced with Mk.III) is a small support robot that always tags along with Snake and offers codec functionality and access to the in-game menu for a large part of Snake's mission.
It can be remotely controlled to stun enemies, provide reconnaissance and interact with the environment. Its design is based on the namesake robot from Snatcher, a game designed by Hideo Kojima. It is also controlled during the beginning of each separate "Act", although the player is not able to utilize its capabilities during this time. Whenever the Drebin menu is available, weapons, attachments, and ammunition can be purchased via Drebin Points (DPs), awarded for on-site procurement of weapons already in the inventory and by initiating specific scripted events or destroying Unmanned Vehicles. The conversion rate between weapons and DPs depends on current battlefield conditions, with more-intense fighting yielding higher prices. Drebin would also purchase items from the player at a discounted price, especially at certain points in the story and on certain days in real life. The game may also be finished without killing anyone, using non-lethal weapons. The Virtual Range, similar to the Virtual Reality training of previous titles, functions as a test facility for weapon performance and gameplay controls. Synopsis Setting Guns of the Patriots is set within an alternate history timeline in which the Cold War continued into the 1990s and ended just before the turn of the century. The events themselves take place in 2014, five years after Sons of Liberty, and form the final chapter in the storyline covering the character of Solid Snake, providing conclusions to the events that led up to Guns of the Patriots. The world's economy relies on continuous civil wars fought by private military companies (PMCs), which outnumber government military forces. Soldiers are equipped with nanomachines that monitor and enhance their performance on the battlefield, controlled by a vast network known as the Sons of the Patriots (SOP) system. Revolver Ocelot, missing since the events of Sons of Liberty, is possessed by the will of Liquid Snake and re-emerges from hiding to launch an insurrection against the Patriots, a secret cabal that manipulates global affairs from the shadows. In the meantime, Solid Snake is experiencing accelerated aging and has about a year left to live. He is living on board the airplane Nomad with Dr. Hal Emmerich, nicknamed Otacon, and Olga Gurlukovich's daughter, Sunny, a child prodigy in computer programming. Since the aftermath of the Big Shell incident, Raiden has drifted away from Rose, who had apparently suffered a miscarriage of their child and gone to live with Snake's former commander, Colonel Roy Campbell; Raiden himself has become a cyborg ninja fighting against the Patriots. Meryl Silverburgh commands a PMC inspection unit in the U.S. military, which includes Johnny Sasaki. Plot After learning that he has only a year to live due to Werner syndrome-like symptoms, Snake is tasked by Campbell, now working with the United Nations Security Council, to assassinate Liquid Ocelot. In a Middle-Eastern war zone occupied by one of Liquid's PMCs, Snake meets Drebin, an arms dealer who injects Snake with nanomachines to use the latest weaponry, and Meryl. Snake reaches Liquid, but the latter transmits a signal that incapacitates those nearby with nanomachines. Snake sees Dr. Naomi Hunter, who departs with Liquid. In South America, Snake locates Naomi, herself a captive of Liquid. She explains that Liquid plans to use the biometrics of Big Boss to access and take command of the Patriots' firearms control system.
Snake learns his accelerated aging is due to intentional genetic mutations as a human clone, and that the FOXDIE virus inside him will also mutate within months, spreading to the general populace and causing a deadly pandemic. Liquid's PMC soldiers kidnap Naomi, but Snake rescues her, assisted by Drebin and Raiden, and they escape, though Raiden is injured by Vamp. In order to heal Raiden, Snake locates an Eastern-European resistance group, which happens to possess the comatose body of Big Boss. Its leader EVA reveals she was the surrogate mother to Snake and Liquid through the cloning process. They move Big Boss' body by boat while Liquid's PMC soldiers attack decoys. Liquid captures the body and obtains the biometrics, intending to infiltrate the Patriots' system by using the repaired artificial intelligence (AI) core, GW, as a trojan. U.S. soldiers arrive to arrest Liquid, but he kills them after disabling their firearms. Big Boss' body is incinerated, and Snake saves EVA from the fire when she tries to save the body, but both are badly injured in the process. Naomi leaves with Liquid, but Otacon tracks them. EVA dies from her injuries. Snake and Otacon learn that Liquid aims to destroy the Patriots' master AI with a nuclear strike, allowing GW to take control. They deduce that Liquid intends to use Metal Gear REX at Shadow Moses in Alaska, due to its ability to destroy the AI, which is in a satellite in space, while also being outside the control of The Patriots. There, Snake is ambushed by Vamp, accompanied by Naomi, but Vamp is killed when his self-healing nanomachines are disabled via injection. Naomi reveals she has terminal cancer, and, overcome with guilt for her mistakes, she disables the nanomachines keeping her alive and dies. Snake and Raiden use REX to escape and fend off Liquid piloting a Metal Gear RAY. Liquid reveals Outer Haven, a modified Arsenal Gear. Raiden saves Snake from being crushed before the USS Missouri, captained by Mei Ling, arrives and forces Haven to retreat. Snake, Meryl, and Johnny board Haven when it surfaces in order to launch the nuke. At the core, Snake installs a computer virus coded by Naomi and Sunny into Liquid's trojan, which destroys both the core AI as well as the entire Patriot network controlling global affairs, leaving the necessities for civilization to survive. Atop Haven, Liquid explains to Snake that he allowed the virus' installation in order to destroy the Patriots. The two fight, with Snake victorious and Liquid becoming Ocelot again before dying. Meryl reconciles with her father, Campbell, and marries Johnny. Raiden reunites with Rose after learning their child was not miscarried and that her marriage to Campbell was a ruse to protect them from the Patriots. Snake visits the grave of The Boss at Arlington National Cemetery. Snake, feeling that he has no further purpose and must prevent an epidemic from his FOXDIE, attempts suicide, but hesitates at the last moment. Snake is then met by Big Boss with a vegetative Zero, with Big Boss explaining that the body burned in Europe was Solidus Snake. He then reveals that the Patriots were founded by himself, Zero, EVA, Ocelot, Sigint, and Para-Medic to realize the will of the Boss, Big Boss' mentor. Differing interpretations split the group into two factions: Zero's, which stood for government control of society to prevent conflict, and Big Boss', where soldiers fought for personal beliefs, unrestrained by governments. Zero consigned control to AI networks, the Patriots.
After Big Boss' downfall in Zanzibar, the Patriots placed him in an induced coma, and later initiated the war economy, a vision far from the Boss' will. Ocelot and EVA planned to restore Big Boss by destroying the Patriots, with the possession of Ocelot through Liquid being a ruse to draw the Patriots' attention. Big Boss then kills Zero by cutting off his life support. He informs Snake that the nanomachines from Drebin contained FOXDIE, engineered by the Patriots to kill EVA, Ocelot, and Big Boss. With the new strain supplanting the mutated strain, Snake poses no risk of becoming a biological weapon unless he lives long enough for it to mutate. After understanding his mentor's will and telling Snake to find a new reason to keep living, Big Boss dies beside the Boss' grave. Snake decides to live the time he has left peacefully with Otacon and Sunny. Development Metal Gear Solid 4 started development due to fan demand. Series creator Hideo Kojima had previously directed the prequel Metal Gear Solid 3: Snake Eater, which was meant to end the series. Fans' demand for a sequel to Metal Gear Solid 2: Sons of Liberty that would clear up the mysteries Kojima had wanted to leave to the players' interpretations resulted in the making of Metal Gear Solid 4. Kojima announced that he would be retiring as director of the Metal Gear series after Snake Eater, and would leave his position open to another person for Metal Gear Solid 4. As a joke, the new director was announced as Alan Smithee; in R, a 400-page book bundled with Metal Gear Solid 3's Japanese Premium Package, the director was revealed to be Shuyo Murata, co-writer of Snake Eater and director of Zone of the Enders: The 2nd Runner. He also contributed easter eggs to Sons of Liberty and Metal Gear: Ghost Babel. It was announced that Kojima would be co-directing the game with Murata after substantial negative fan reaction, including death threats. Kojima wished to implement a new style of gameplay which was set in a full-scale war zone. Kojima also wanted to retain the stealth elements from previous entries in the series, which made the team abandon the original "No Place to Hide" concept. The only announced war zone before release was the Middle East. Using several locations emphasized Kojima's original intention to present the world in full-scale armed conflict. Solid Snake was physically aged to portray to the player the games' overarching theme, SENSE, and to assign them to a character whose task was to pass moral values to future generations. Kojima's initial ending for Guns of the Patriots would have had Snake and Otacon turning themselves in for breaking the law, and subsequently being convicted and executed. This was avoided after negative feedback from the development team. Snake's experience across the series made the creation of new enemies challenging and encouraged staff to create groups of non-human enemies to rival Snake. During development, the game's exclusivity was continuously questioned, even after Kojima officially confirmed the exclusivity several times. The exclusivity of the game was still in doubt from non-PlayStation 3 owners for a long period after the initial release, with the company confirming that the 25th Anniversary edition of the game released in late 2012 was still a PlayStation 3 exclusive.
Upon the release of Metal Gear Solid: The Legacy Collection, Kojima once again firmly denied any chance of the game's release on any other console, stating that an "Xbox 360 version [will not be] released, because an Xbox 360 version of MGS4 hasn't gone on sale" and that "the amount of data in MGS4 is just too enormous". The game was publicly announced first at E3 2005, by means of a humorous and slightly abstract gag machinima using characters from Snake Eater, under the slogan of "No Place to Hide". The title was described as "essentially finished" by January 2008 and went through extensive beta testing. At Destination PlayStation on February 26, 2008, Sony announced that the game would be released worldwide on June 12, 2008, along with the special Guns of the Patriots-themed PlayStation 3 bundle. It was announced that Guns of the Patriots is the first PlayStation 3 game that uses a 50GB dual layer Blu-ray Disc, even with the use of file compression. The budget for the game has been estimated at between US$50 and 70 million. Kenichiro Imaizumi from Kojima Productions denied this, stating that if it had cost that much, the game would have been released for multiple platforms. One of the main objectives of the budget was research of environments the game would feature. Music The score to Metal Gear Solid 4 was led by Harry Gregson-Williams, his third Metal Gear Solid soundtrack, and Nobuko Toda, who provided music for Metal Gear Acid and Metal Gear Acid 2. Other contributors are Konami employees Shuichi Kobori, Kazuma Jinnouchi, Akihiro Honda, and Sota Fujimori. GEM Impact employees Yoshitaka Suzuki and Takahiro Izutani, directed by Norihiko Hibino, also made compositions late in the game's production. It was revealed in an interview with Norihiko Hibino that the team, in fact, wrote 90 minutes of music for the game's cutscenes, only 15 minutes of which made its way onto the official soundtrack. There are two vocal themes for the game. The opening theme, "Love Theme", is sung by Jackie Presti and composed by Nobuko Toda. The ending theme, "Here's to You", is sung by Lisbeth Scott. Before the release of the game, "MGS4 – Theme of Love – Smash Bros. Brawl Version" was provided for Super Smash Bros. Brawl in the Shadow Moses Island level. The "Metal Gear Solid Main Theme", composed by Tappi "Tappy" Iwase, was notably omitted from the soundtrack, as it had been from the soundtrack of Metal Gear Solid: Portable Ops. In an interview with Electronic Gaming Monthly, Norihiko Hibino stated that the company had difficulties with "Russian composers who said we stole their music", referring to an occasion when a group of Russian games journalists presented Hideo Kojima with a composition by Georgy Sviridov and claimed this had been plagiarised to create the theme. Hibino states that "they didn't actually" but the company was "too sensitive about the situation" and elected to drop the theme. The official soundtrack was released on May 28, 2008, by Konami Digital Entertainment under the catalog number GFCA-98/9. It consists of two discs of music and 47 tracks. A soundtrack album was also packaged with Metal Gear Solid 4: Guns of the Patriots Limited Edition. Trophy support In August 2009, when asked if there would be a patch to add PlayStation Network Trophies to the game, Kojima Productions' Sean Eyestone asked people to "stay patient". This led to speculation that an updated version of the game in the vein of Substance or Subsistence would be released alongside a Trophy patch.
In November 2010, an updated Greatest Hits box art of the game was released, which in the top right-hand corner boasted the addition of Trophies to the game. This was later reported as a typo, and removed from later printings. False reports of an "incoming Trophy Patch" often appeared, usually on internet forums and on April Fools' Day, some even going so far as to present a mock-up Trophy listing. In July 2012, a patch was announced that would include Trophies for the game, which was later released on August 6, 2012. In addition to the support for Trophies, the patch also allowed a full install of the game onto the hard drive to remove the installs between acts. Marketing At a press conference on May 13, 2008, Hideo Kojima announced a marketing campaign and agreements with several companies to promote the game. Apple computers and monitors feature in the game, and an Apple iPod is an in-game item that Snake can use to change the background music, listen to in-game podcasts and collect hidden songs scattered throughout the game. ReGain Energy Drinks are used in the game as a Psyche gauge booster, and mobile phones from Sony Ericsson are used, specifically by Naomi and Vamp. In addition, the motorcycles featured in the game are a Triumph Bonneville and Speed Triple. Konami and Ubisoft put an unlockable costume in the game for Snake, Altaïr from the Ubisoft stealth game Assassin's Creed. The costume was initially revealed on April Fool's Day 2008; Kojima later announced that it would actually be in the game, unlockable by doing "something special". To obtain the attire, the player must acquire the Assassin Emblem (a nod to the game's title) or input a password in the Extras section. Konami had originally planned to organize grand launch events in Tokyo, but some of them were canceled with the "safety of participants in mind" in light of the Akihabara massacre on June 8, 2008. On June 15, 2009, a year after its release, Konami re-released Guns of the Patriots as part of Sony's Greatest Hits collection. Release Metal Gear Solid 4 includes the Starter Pack for Metal Gear Online 2 (MGO2). MGO2 features up to 16-player online tactical battles and incorporates several gameplay elements from Metal Gear Solid 4, including the SOP system that allows players to have a visual confirmation of their teammates' position and battle status. MGO2 also allows fully customizable characters. The Starter Pack allows players to engage in sneaking missions, where Old Snake and Metal Gear Mk.II acquire dog tags from other human contestants, along with standard Deathmatch, Team Deathmatch, and several special modes. Expansion packs, offering more maps and playable special characters (Mei Ling, Meryl, Akiba, Liquid Ocelot, Raiden, and Vamp), can be purchased via the MGO2 menu item "MGO Shop (PlayStation Network)", or via MGO2 or Konami's shop. The PlayStation Wallet is used for the first option and a credit card for the latter two. Metal Gear Online 2 was completely shut down on June 13, 2012. On June 19, 2008, Konami released the Metal Gear Solid 4 Database onto the PlayStation Store in North America and Japan, and one week later on the European store. The Database is a downloadable application for PlayStation 3 that catalogs every piece of Metal Gear lore from all the canonical entries in the series released up to Metal Gear Solid 4 in the form of an encyclopedia (browsable by alphabet and category), a timeline, and character relationship diagrams.
Highlighted words in each article link to related articles, and it keeps track of which ones the user has already read. The Database automatically locks any items related to the game, in order to prevent the leaking of spoilers to players who have not completed the game yet. In order to reveal these articles, the user must have a completed Guns of the Patriots game save that was created on the same console and with a version of the game from the same region as their account. Editions and bundles A Limited Edition was released simultaneously with the game's standard edition, as an enhanced counterpart. The limited edition contains Guns of the Patriots, a box with artwork by Yoji Shinkawa, a Blu-ray Disc containing two "making of" documentaries, and a partial game soundtrack containing only songs written by Harry Gregson-Williams. The Limited Edition was available exclusively at GameStop in the United States and EB Games in Canada, while a similar bundle with an additional 6-inch 'Olive Drab' Old Snake Figurine was made available at Play.com in the United Kingdom. It is also included in the 40GB Limited Edition PlayStation 3 Metal Gear Solid 4 bundle. In North America, a bundle containing an 80GB PlayStation 3, a DualShock 3 wireless controller, a downloadable game coupon for Pain, and a copy of Metal Gear Solid 4: Guns of the Patriots was released for US$499 on June 12, 2008, to coincide with the release of the standalone edition. Japan saw the release of the Guns of the Patriots Welcome Box that contains the game itself, a DualShock 3 controller, a Sixaxis controller, and a 40GB PlayStation 3 in either black, white, or silver. Sony also announced a limited edition pre-order bundle containing Guns of the Patriots Limited Edition and a matte grey (officially titled Gunmetal Grey) 40GB PlayStation 3. First announced in Japan on March 18, 2008, at a cost of JPY¥51,800, the bundle sold out by March 25, 2008. An identical bundle was available in North America for pre-order on May 19, 2008, in "very limited" supply for US$600 at Konami's official website. David Reeves announced a similar bundle for Europe which includes a 40GB PlayStation 3, the game itself and a DualShock 3 controller. A downloadable version of the game was released on PlayStation Store at the end of 2014. It was also briefly available as a PlayStation Now rental title in North America, before being delisted by Konami due to undisclosed reasons. It was added again to the service in March 2019. Downloadable content The game received new content through PlayStation Network (in-game downloads) between 2008 and 2009. A total of 49 free add-ons have been released, which include 12 additional OctoCamo patterns, 12 MGS4 Integral podcasts (the Japanese version got 18 Guns of the HIDECHAN!Radio podcasts instead), and 25 iPod songs (a twenty-sixth song, named "Chair Race", was removed for copyright reasons in 2011). On July 16, 2014, it was announced on the Japanese site that all DLC downloads would terminate on July 31; however, a workaround was made in January 2015. As of today, this is the only way to obtain the content on an unmodified console, as it was never made available on PlayStation Store. Related media Metal Gear Solid Touch for iOS is a "touch shooting" game that revisits the plot and action of Guns of the Patriots through the touch interface. Developers of the game LittleBigPlanet, Media Molecule, released an expansion pack based on Guns of the Patriots on December 23, 2008.
It includes character skins for Old Snake, Raiden, Meryl, and Screaming Mantis, as well as a Metal Gear-themed set of levels. In October 2011, Konami and Hasbro produced a special Guns of the Patriots-themed version of Risk, which had playing pieces based on the game's various characters, plus a battle map based on Outer Haven. The game's characters can also be used as special allies. A novelization of Guns of the Patriots written by Project Itoh was published by Kadokawa Shoten in Japan on the day of the game's release. An English-language translation of the novel was published in North America in June 2012. Reception Critical reception Metal Gear Solid 4 received universal acclaim, according to review aggregator website Metacritic. The first review was a 10/10 from PlayStation Official Magazine – UK, which commented "MGS4 shifts gears constantly, innovating again and again". The game was awarded 10/10 by Game Informer, as well as a perfect score in all categories (graphics, control, sound, and fun factor) from GamePro, Famitsu (40/40), and Empire. The game received a 9.9/10 from IGN United Kingdom, a 9.5/10 from IGN Australia, and a 10/10 from IGN USA. IGN was quoted in a video review, saying it is "one of the best games ever made". Edge and Eurogamer both gave the game 8/10. GameSpot gave it a 10/10, saying "Metal Gear Solid 4: Guns of the Patriots is the most technically stunning video game ever made", making Guns of the Patriots one of only five games ever to receive a perfect 10 from both IGN and GameSpot, and the sixth of only nine games to receive a perfect ten from GameSpot overall. IGN also included the game at No. 59 in its list of Top 100 Games of a Generation. Reviewers acclaimed the manner in which the title concludes the series. Eurogamer stated: "You could not ask for a funnier, cleverer, more ambitious or inspired or over-the-top conclusion", and IGN found that the result "refines the MGS formula and introduces just enough new (or respectfully influenced) ideas to ensure that it stands on its own as a game". Edge concluded that "it is faithful to its fans, its premise and its heart, delivering an experience that is, in so many ways, without equal", while IGN UK described it simply as "the ultimate Metal Gear game" and "a dazzling, heart-lifting, voyage of discovery". The game was also described as being unusually sad and depressing for a video game. Kotaku said "Metal Gear Solid 4 is so unusual in that it's the rare game that asks them to be interested in something else: a march toward defeat, an interactive tragedy." The new control scheme ("the ideal balance of intuitiveness and range"), camouflage system, and shift to more free-form, replayable gameplay, in particular the Drebin Points system and alternatives to stealthy play, were particularly highly praised with a few minor annoyances. The variety of set-piece events, details such as the psyche meter, and healthy provision of secrets were also remarked upon. Eurogamer tempered their overall praise with concern that one of the chapters may induce ennui but that the game quickly recovered, while Edge expressed mild disappointment that the Beauty and the Beast Unit compares poorly to the previous title's main foes, the Cobra Unit. The game was lauded for its technological and artistic achievements, with Edge describing the Otacon character as "the real star", and "a gaming revolution", while they found the game's score to be superior to that of many Hollywood offerings.
The magazine felt that the few visual shortfalls, such as texture detail, did nothing to detract from the game's overall quality. IGN UK commented that the attention to detail in both visuals and audio represented "sublime brilliance", and remarked upon innovations such as the use of split-screen. Criticism of the game was largely leveled at the storyline, which reviewers found at times to be confusing or poorly executed, with IGN UK advising players to revisit the earlier titles for clarity. The overall result was praised as emotionally engaging and topical, and characters such as Liquid Ocelot were singled out for the quality of their depiction. It was generally conceded that although the cutscenes were more intrusive than they needed to be, comprising "about half of the content of the game" by one estimate, and "might make you crave action, or wonder why they couldn't have been turned into interactive sequences", the style was somewhat appropriate given the rest of the series ("in many ways it's a vindication of Kojima's unique interpretation of the videogame medium"), and unlikely to trouble fans. The addition of a pause function for these story sequences was universally welcomed. Edge and Eurogamer concluded that although the game represents an apotheosis of the series style, it ultimately fails to revitalize it, and would not win over new fans. IGN UK was concerned that the game's hype and widespread praise may lead to disappointment but felt that the game was a "masterpiece". Review restrictions Several publications have commented on limitations given to pre-release reviewers by Konami, including discussion of the length of cutscenes and the size of the PlayStation 3 installation. These limitations resulted in Electronic Gaming Monthly delaying its review. In lieu of a review, the magazine printed a roundtable discussion about the game. Kojima Productions spokesperson Ryan Payton has since explained more specifically what the NDA restricts, and has amended "some items [that] are outdated and require more explanation." He also listed the length of install times, noting that the restrictions were intended to prevent spoilers regarding what occurs during the installations. Following this statement, gaming site GameSpot published a blog entry in which it claimed it would be unable to review the game either, saying Konami had withheld review code because of non-compliance with the limitations. The article originally implied that the absence of a review was due to GameSpot's refusal to attend the Boot Camp event at Kojima Productions' offices. It has since been revised to state that the Boot Camp was a mid-development feedback and PR exercise, and would not have led to a review in any case. The day before Konami's restrictions were to be lifted, Electronic Gaming Monthly (EGM) reviewer Jeremy Parish clarified the reasons for the self-imposed review embargo, dispelling rumors of a disagreement between Konami and EGM on the review conditions in a lengthy blog commentary. His review of the game appeared on the website shortly after. Sales According to Konami, the game shipped over 3 million units worldwide on the day of its release. According to Enterbrain, Guns of the Patriots sold 476,334 copies in its first four days on sale in Japan, which includes copies bundled with the PlayStation 3, and caused a boost in PlayStation 3 sales. The PlayStation 3, which at the time sold about 10,000 units in a given week, went on to sell 77,208 units in the game's debut week.
It was the 11th best-selling game in Japan in 2008, selling 686,254 copies. According to Chart-Track, the game is the second fastest-selling PlayStation 3 title in the United Kingdom after Grand Theft Auto IV and was below Metal Gear Solid 2: Sons of Liberty's 2002 opening-weekend figure by 14,000 copies; the sales of the PlayStation 3 increased by a "minimal" seven percent over the opening weekend. Konami has reported that Guns of the Patriots sold over one million copies across Europe in its first week, with 25,000 limited-edition copies "snapped up almost immediately". The game received a Platinum sales award from the Entertainment and Leisure Software Publishers Association, indicating sales of at least 300,000 copies in the United Kingdom. In the United States, it was the best-selling game in June 2008, selling 774,600 copies (nearly one million if the number of copies bundled with the PlayStation 3 console was included), causing console sales to double over the previous month according to the NPD Group. It became one of the best-selling PlayStation 3 video games, and was the best-selling PlayStation 3 exclusive until the release of Gran Turismo 5. As a result, Guns of the Patriots is one of the Platinum/Greatest Hits range of best-selling games, and helped to increase sales for the Metal Gear franchise and record profits for Konami in 2008–2010. On May 21, 2014, Ryan Payton, a former employee of Kojima Productions who worked on Metal Gear Solid 4, stated that the game had sold 6 million copies. Awards Following the critical acclaim it received upon its release, Guns of the Patriots won many Game of the Year awards from international outlets, including GameSpot, Gamezine, and PALGN, along with a significant number of Readers' Choice awards, and awards directed towards its story-telling, graphical, and voice-acting aspects. GameSpot praised the game significantly, and awarded it "Game of the Year", "Best PS3 Game", "Best Graphics (Technical)", "Best Boss Battles", "Best Story", "Best Voice Acting", "Most Memorable Moment", and "Best Action/Adventure Game". IGN awarded the game "Best PS3 Game of 2008", "Best Graphics Technology", "Best Original Score", and "Best Action Game". PALGN awarded it "Game of the Year", "PS3 Game of the Year", and "Best Visuals". PC World heralded it with "Game of the Year". Playfire awarded it "Game of the Year", "Best Action/Adventure Game", and "Best Graphics". It also won "Game of the Year" from the Portuguese Eurogamer. On NeoGAF, it was awarded "Game of the Year". The German site 4PLAYER.de gave it "Game of the Year". GamePro awarded it "Best PS3 Game of 2008" and "Best Action/Adventure Game". 1UP.com gave it "Game of the Year", "Best PS3 Game", "Best Action Game", and "Best Audiovisual Experience". Fox News Channel awarded the game "Best PS3 Game of 2008" and "Best Game of 2008". GameSpy awarded it "Best PS3 Action Game". From Gamezine, it won "Game of the Year" and "Best PS3 Game". Giant Bomb gave it "Best PS3-Only Game", "Best Graphics", and "Most Satisfying Sequel". In the Golden Joystick 2008 awards, it was awarded "Best PS3 Game". At Tokyo Game Show 2009, it received the Grand Award (alongside Mario Kart Wii) and the Award of Excellence. The readers of PlayStation Official Magazine – UK voted it the 5th best PlayStation title released.
At the 12th Annual Interactive Achievement Awards, the Academy of Interactive Arts & Sciences awarded Guns of the Patriots "Outstanding Achievement in Original Music Composition", along with receiving nominations for "Overall Game of the Year", "Console Game of the Year", and outstanding achievement in "Animation", "Character Performance" (Old Snake), "Game Direction", and "Visual Engineering". Notes References Further reading External links 2008 video games Action-adventure games Alternate history video games Fiction about augmented reality Cancelled Xbox 360 games Cyberpunk video games Dystopian video games Fiction about hypnosis Japan Game Awards' Game of the Year winners Fiction about malware Metal Gear video games Fiction about nanotechnology PlayStation 3 games PlayStation 3-only games Postmodern works Propaganda in fiction Single-player video games Stealth video games Third-person shooters Video game sequels Video games designed by Hideo Kojima Video games developed in Japan Video games directed by Hideo Kojima Video games involved in plagiarism controversies Video games produced by Hideo Kojima Video games set in 2014 Video games set in Alaska Video games set in Europe Video games set in the Middle East Video games set in South America Video games with commentaries Video games about artificial intelligence Video games about old age Video games scored by Harry Gregson-Williams Video games scored by Takahiro Izutani Viz Media novels
Metal Gear Solid 4: Guns of the Patriots
[ "Materials_science" ]
7,716
[ "Fiction about nanotechnology", "Nanotechnology" ]
10,950,869
https://en.wikipedia.org/wiki/Orthonormal%20function%20system
An orthonormal function system (ONS) is an orthonormal basis in a vector space of functions. References Linear algebra Functional analysis
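As a standard textbook illustration (supplied here for concreteness, not drawn from the article itself), the scaled sine functions form an orthonormal function system in L²([0,1]):

```latex
% A classical orthonormal function system in L^2([0,1]):
%   \phi_n(x) = \sqrt{2}\,\sin(n\pi x), \qquad n = 1, 2, 3, \dots
% Orthonormality means the pairwise inner products satisfy
\int_0^1 \phi_m(x)\,\phi_n(x)\,dx
  = 2\int_0^1 \sin(m\pi x)\sin(n\pi x)\,dx
  = \delta_{mn},
% i.e. distinct functions are orthogonal and each has unit norm, so a
% square-integrable f expands as f = \sum_n \langle f, \phi_n\rangle\,\phi_n.
```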
Orthonormal function system
[ "Mathematics" ]
34
[ "Mathematical analysis", "Functions and mappings", "Functional analysis", "Mathematical analysis stubs", "Mathematical objects", "Mathematical relations", "Linear algebra", "Algebra" ]
10,952,377
https://en.wikipedia.org/wiki/Material-handling%20equipment
Material handling equipment (MHE) is mechanical equipment used for the movement, storage, control, and protection of materials, goods and products throughout the process of manufacturing, distribution, consumption, and disposal. The different types of equipment can be classified into four major categories: transport equipment, positioning equipment, unit load formation equipment, and storage equipment. Transport equipment Transport equipment is used to move material from one location to another (e.g., between workplaces, between a loading dock and a storage area, etc.), while positioning equipment is used to manipulate material at a single location. The major subcategories of transport equipment are conveyors, cranes, and industrial trucks. Material can also be transported manually using no equipment. Conveyors Conveyors are used when material is to be moved frequently between specific points over a fixed path and when there is a sufficient flow volume to justify the fixed conveyor investment. Different types of conveyors can be characterized by the type of product being handled (unit load or bulk load), the conveyor's location (in-floor, on-floor, or overhead), and whether or not loads can accumulate on the conveyor. Accumulation allows intermittent movement of each unit of material transported along the conveyor, while all units move simultaneously on conveyors without accumulation capability. For example, while both the roller and flat-belt are unit-load on-floor conveyors, the roller provides accumulation capability while the flat-belt does not; similarly, both the power-and-free and trolley are unit-load overhead conveyors, with the power-and-free designed to include an extra track in order to provide the accumulation capability lacking in the trolley conveyor. Examples of bulk-handling conveyors include the magnetic-belt, troughed-belt, bucket, and screw conveyors. A sortation conveyor system is used for merging, identifying, inducting, and separating products to be conveyed to specific destinations, and typically consists of flat-belt, roller, and chute conveyor segments together with various moveable arms and/or pop-up wheels and chains that deflect, push, or pull products to different destinations. Cranes Cranes are used to transport loads over variable (horizontal and vertical) paths within a restricted area and when there is insufficient (or intermittent) flow volume such that the use of a conveyor cannot be justified. Cranes provide more flexibility in movement than conveyors because the loads handled can be more varied with respect to their shape and weight. Cranes provide less flexibility in movement than industrial trucks because they can only operate within a restricted area, though some can operate on a portable base. Most cranes utilize trolley-and-tracks for horizontal movement and hoists for vertical movement, although manipulators can be used if precise positioning of the load is required. The most common cranes include the jib, bridge, gantry, and stacker cranes. Industrial trucks Industrial trucks are trucks that are not licensed to travel on public roads (commercial trucks are licensed to travel on public roads). Industrial trucks are used to move materials over variable paths and when there is insufficient (or intermittent) flow volume such that the use of a conveyor cannot be justified.
They provide more flexibility in movement than conveyors and cranes because there are no restrictions on the area covered, and they provide vertical movement if the truck has lifting capabilities. Different types of industrial trucks can be characterized by whether or not they have forks for handling pallets, provide powered or require manual lifting and travel capabilities, allow the operator to ride on the truck or require that the operator walk with the truck during travel, provide load stacking capability, and whether or not they can operate in narrow aisles. Hand trucks (including carts and dollies), the simplest type of industrial truck, cannot transport or stack pallets, are non-powered, and require the operator to walk. A pallet jack, which cannot stack a pallet, uses front wheels mounted inside the end of forks that extend to the floor, as the pallet is only lifted enough to clear the floor for subsequent travel. A counterbalanced lift truck (sometimes referred to as a forklift truck, but other attachments besides forks can be used) can transport and stack pallets and allows the operator to ride on the truck. The weight of the vehicle (and operator) behind the front wheels of the truck counterbalances the weight of the load (and the weight of the vehicle beyond the front wheels); the front wheels act as a fulcrum or pivot point. Narrow-aisle trucks usually require that the operator stand up while riding in order to reduce the truck's turning radius. Reach mechanisms and outrigger arms that straddle and support a load can be used in addition to just the counterbalance of the truck. On a turret truck, the forks rotate during stacking, eliminating the need for the truck itself to turn in narrow aisles. An order picker allows the operator to be lifted with the load to allow for less-than-pallet-load picking. Automated guided vehicles (AGVs) are industrial trucks that can transport loads without requiring a human operator. Rail or wheel steered transfer carts are preferred for areas that do not have favourable conditions for the operation of forklifts. Rail transfer carts are carts that can move on the rail line. Wheel steered transfer carts can move independently of the route with battery powered energy systems. An electric tug is a small battery-powered, pedestrian-operated machine capable of either pushing or pulling a significantly heavier load than itself. Manual Handling Equipment Commonly used to assist in moving smaller loads where larger equipment would struggle, manual handling equipment such as pallet trucks, trolleys, and sack trucks can be an essential part of any material-handling operation. Yard ramp A yard ramp, sometimes called a mobile yard ramp, is a movable metal ramp for loading and unloading of vehicles. A yard ramp is placed at the back of a vehicle to provide access for forklifts to ascend the ramp. Using a yard ramp for vehicle loading or unloading allows the work to be carried out by a forklift. Positioning equipment Positioning equipment is used to handle material at a single location. It can be used at a workplace to feed, orient, load/unload, or otherwise manipulate materials so that they are in the correct position for subsequent handling, machining, transport, or storage.
As compared to manual handling, the use of positioning equipment can raise the productivity of each worker when the frequency of handling is high, improve product quality and limit damage to materials and equipment when the item handled is heavy or awkward to hold and damage is likely through human error or inattention, and can reduce fatigue and injuries when the environment is hazardous or inaccessible. In many cases, positioning equipment is required for and can be justified by the ergonomic requirements of a task. Examples of positioning equipment include lift/tilt/turn tables, hoists, balancers, manipulators, and industrial robots. Manipulators act as “muscle multipliers” by counterbalancing the weight of a load so that an operator lifts only a small portion (1%) of the load's weight, and they fill the gap between hoists and industrial robots: they can be used for a wider range of positioning tasks than hoists and are more flexible than industrial robots due to their use of manual control. They can be powered manually, electrically, or pneumatically, and a manipulator's end-effector can be equipped with mechanical grippers, vacuum grippers, electromechanical grippers, or other tooling. Unit load formation equipment Unit load formation equipment is used to restrict materials so that they maintain their integrity when handled as a single load during transport and for storage. If materials are self-restraining (e.g., a single part or interlocking parts), then they can be formed into a unit load with no equipment. Examples of unit load formation equipment include pallets, skids, slipsheets, tote pans, bins/baskets, cartons, bags, and crates. A pallet is a platform made of wood (the most common), paper, plastic, rubber, or metal with enough clearance beneath its top surface (or face) to enable the insertion of forks for subsequent lifting purposes. A slipsheet is a thick piece of paper, corrugated fiber, or plastic upon which a load is placed and has tabs that can be grabbed by special push/pull lift truck attachments. They are used in place of a pallet to reduce weight and volume, but loading/unloading is slower. Storage equipment Storage equipment is used for holding or buffering materials over a period of time. The design of each type of storage equipment, along with its use in warehouse design, represents a trade-off between minimizing handling costs, by making material easily accessible, and maximizing the utilization of space (or cube). If materials are stacked directly on the floor, then no storage equipment is required, but, on average, each different item in storage will have a stack only half full; to increase cube utilization, storage racks can be used to allow multiple stacks of different items to occupy the same floor space at different levels. The use of racks becomes preferable to floor storage as the number of units per item requiring storage decreases. Similarly, the depth at which units of an item are stored affects cube utilization in proportion to the number of units per item requiring storage. Pallets can be stored using single- and double-deep racks when the number of units per item is small, while pallet-flow and push-back racks are used when the units per item are mid-range, and floor-storage or drive-in racks are used when the number of units per item is large, with drive-in providing support for pallet loads that cannot be stacked on top of each other.
Individual cartons can either be picked from pallet loads or can be stored in carton-flow racks, which are designed to allow first-in, first-out (FIFO) carton access. For individual piece storage, bin shelving, storage drawers, carousels, and A-frames can be used. Engineered systems Engineered systems are automated solutions designed to streamline and optimize material handling processes. An automatic storage/retrieval system (AS/RS) is an integrated computer-controlled storage system that combines storage medium, transport mechanism, and controls with various levels of automation for fast and accurate random storage of products and materials. Identification and Control Equipment Equipment used to collect and communicate the information that is used to coordinate the flow of materials within a facility and between a facility and its suppliers and customers. The identification of materials and associated control can be performed manually with no specialized equipment. See also Automated guided vehicle Automated storage and retrieval system Bulk material handling Caster Drum handler Electric track vehicle system Electric tug Forklift truck Industrial robot Material handling Packaging machinery Pallet Pallet inverter Pallet racking Slip sheet Telescopic handler Warehouse Notes References Chu, H.K., Egbelu, P.J., and Wu, C.T., 1995, "ADVISOR: A computer-aided material handling equipment selection system", Int. J. Prod. Res., 33(12):3311−3329. Kay, M.G., 2012, Material Handling Equipment, Retrieved 2014-10-02. Kulwiec, R.A., Ed., 1985, Materials Handling Handbook, 2nd Ed., New York: Wiley. Mulcahy, D.E., 1999, Materials Handling Handbook, New York: McGraw-Hill. Tompkins, J.A., White, J.A., Bozer, Y.A., and Tanchoco, J.M.A., 2003, Facilities Planning, 3rd Ed., Wiley, Appendix 5.B. External links College Industry Council on Material Handling Education (CICMHE) European Federation of Materials Handling Industrial Truck Association Material Handling Equipment Distributors Association Material Handling Equipment Taxonomy Material Handling Industry Equipment Industrial equipment
Material-handling equipment
[ "Physics", "Engineering" ]
2,464
[ "Materials", "Material handling", "Matter" ]
10,952,814
https://en.wikipedia.org/wiki/Zionts%E2%80%93Wallenius%20method
Within computer science, the Zionts–Wallenius method is an interactive method used to find a best solution to a multi-criteria optimization problem. Detail Specifically, it can help a user solve a linear programming problem having more than one (linear) objective. A user is asked to respond to comparisons between feasible solutions or to choose directions of change desired in each iteration. Provided that certain mathematical assumptions hold, the method finds an optimal solution; a simplified sketch of the interactive idea is given after the reference below. References Zionts, S. and J. Wallenius, “An Interactive Programming Method for Solving the Multiple Criteria Problem,” Management Science, Vol. 22, No. 6, pp. 652–663, 1976. Optimization algorithms and methods
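A minimal Python sketch of the interactive weighted-sum idea follows. This is not the published Zionts–Wallenius algorithm, which progressively restricts the weight space and moves between efficient extreme points of the feasible region; it is a simplified stand-in showing how a decision maker's answers to pairwise comparisons can steer the weights of a multi-objective linear program. The toy feasible region, the objectives, and the simulated decision maker are all invented for illustration.

import numpy as np
from scipy.optimize import linprog

# Toy problem: two linear objectives (rows of C) over x1 + x2 <= 4, 0 <= xi <= 3.
A_ub = np.array([[1.0, 1.0]])
b_ub = np.array([4.0])
C = np.array([[1.0, 0.2],    # objective 1
              [0.2, 1.0]])   # objective 2
bounds = [(0.0, 3.0), (0.0, 3.0)]

def solve_weighted(w):
    # linprog minimizes, so negate the weighted objective to maximize it.
    return linprog(-(w @ C), A_ub=A_ub, b_ub=b_ub, bounds=bounds).x

def dm_prefers(x, y, hidden_w=np.array([0.7, 0.3])):
    # Simulated decision maker: answers comparisons using a utility
    # function the method itself never sees directly.
    return hidden_w @ C @ x > hidden_w @ C @ y

rng = np.random.default_rng(0)
w = np.array([0.5, 0.5])             # start from equal weights
best = solve_weighted(w)
for _ in range(20):                  # question-and-answer iterations
    trial = np.clip(w + rng.uniform(-0.2, 0.2, 2), 0.01, None)
    trial /= trial.sum()
    candidate = solve_weighted(trial)
    if dm_prefers(candidate, best):  # the answer steers the weights
        w, best = trial, candidate

print("final weights:", w, "solution:", best)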
Zionts–Wallenius method
[ "Mathematics" ]
139
[ "Applied mathematics", "Applied mathematics stubs" ]
10,953,228
https://en.wikipedia.org/wiki/Translocase
Translocase is a general term for a protein that assists in moving another molecule, usually across a cell membrane. These enzymes catalyze the movement of ions or molecules across membranes or their separation within membranes. The reaction is designated as a transfer from "side 1" to "side 2" because the designations "in" and "out", which had previously been used, can be ambiguous. Translocases are the most common secretion system in Gram-positive bacteria. It is also a historical term for the protein now called elongation factor G, due to its function in moving the transfer RNA (tRNA) and messenger RNA (mRNA) through the ribosome. History The enzyme classification and nomenclature list was first approved by the International Union of Biochemistry in 1961. Six enzyme classes had been recognized based on the type of chemical reaction catalyzed, including oxidoreductases (EC 1), transferases (EC 2), hydrolases (EC 3), lyases (EC 4), isomerases (EC 5) and ligases (EC 6). However, it became apparent that none of these could describe the important group of enzymes that catalyse the movement of ions or molecules across membranes or their separation within membranes. Several of these involve the hydrolysis of ATP and had been previously classified as ATPases (EC 3.6.3.-), although the hydrolytic reaction is not their primary function. In August 2018, the International Union of Biochemistry and Molecular Biology classified these enzymes under a new enzyme class (EC) of translocases (EC 7). Mechanism of catalysis The reaction most translocases catalyse is: AX + B[side 1] = A + X + B[side 2] A clear example of an enzyme that follows this scheme is H+-transporting two-sector ATPase: ATP + H2O + 4 H+[side 1] = ADP + phosphate + 4 H+[side 2] This ATPase carries out the dephosphorylation of ATP into ADP while it transports H+ to the other side of the membrane. However, other enzymes that also fall into this category do not follow the same reaction scheme. This is the case of ascorbate ferrireductase: ascorbate[side 1] + Fe(III)[side 2] = monodehydroascorbate[side 1] + Fe(II)[side 2] in which the enzyme transports only an electron while catalysing an oxidoreductase reaction between a molecule and an inorganic cation located on different sides of the membrane. Function The basic function, as already mentioned, is to "catalyse the movement of ions or molecules across membranes or their separation within membranes". This form of membrane transport is classified under active membrane transport, an energy-requiring process of pumping molecules and ions across membranes against a concentration gradient. The biological importance of translocases rests primarily on this critical function: they provide movement across the cell's membrane in many essential cellular processes, such as: Oxidative phosphorylation ADP/ATP translocase (ANT) imports adenosine diphosphate ADP from the cytosol and exports ATP from the mitochondrial matrix, which are key transport steps for oxidative phosphorylation in eukaryotic organisms. ADP from the cytosol is transported back into the mitochondrion for ATP synthesis, and the synthesised ATP, produced from oxidative phosphorylation, is exported out of the mitochondrion for use in the cytosol, providing the cells with their main energy currency. 
Protein import into mitochondria Hundreds of proteins encoded by the nucleus are required for mitochondrial metabolism, growth, division, and partitioning to daughter cells, and all of these proteins must be imported into the organelle. Translocase of the outer membrane (TOM) and translocase of the inner membrane (TIM) mediate the import of proteins into the mitochondrion. The translocase of the outer membrane (TOM) sorts proteins via several mechanisms either directly to the outer membrane, the intermembrane space, or the translocase of the inner membrane (TIM). Then, generally, the TIM23 machinery mediates protein translocation into the matrix and the TIM22 machinery mediates insertion into the inner membrane. Fatty acids import into mitochondria (Carnitine Shuttle System) Carnitine-acylcarnitine translocase (CACT) catalyzes both unidirectional transport of carnitine and carnitine/acylcarnitine exchange in the inner mitochondrial membrane, allowing the import of long-chain fatty acids into the mitochondria where they are oxidized by the β-oxidation pathway. The mitochondrial membrane is impermeable to long-chain fatty acids, hence the need for this translocation. Classification The enzyme subclasses designate the types of components that are being transferred, and the sub-subclasses indicate the reaction processes that provide the driving force for the translocation (a small worked illustration of this numbering scheme appears after the examples below). EC 7.1 Catalysing the translocation of hydrons This subclass contains translocases that catalyze the translocation of hydrons. Based on the reaction they are linked to, EC 7.1 can be further classified into: EC 7.1.1 Hydron translocation or charge separation linked to oxidoreductase reactions EC 7.1.2 Hydron translocation linked to the hydrolysis of a nucleoside triphosphate EC 7.1.3 Hydron translocation linked to the hydrolysis of diphosphate An important translocase contained in this group is ATP synthase, also known as EC 7.1.2.2. EC 7.2 Catalysing the translocation of inorganic cations and their chelates This subclass contains translocases that transfer inorganic cations (metal cations). Based on the reaction they are linked to, EC 7.2 can be further classified into: EC 7.2.1 Translocation of inorganic cations linked to oxidoreductase reactions EC 7.2.2 Translocation of inorganic cations linked to the hydrolysis of a nucleoside triphosphate EC 7.2.4 Translocation of inorganic cations linked to decarboxylation An important translocase contained in this group is the Na+/K+ pump, also known as EC 7.2.2.13. EC 7.3 Catalysing the translocation of inorganic anions This subclass contains translocases that transfer inorganic anions. Subclasses are based on the reaction processes that provide the driving force for the translocation. At present only one subclass is represented: EC 7.3.2 Translocation of inorganic anions linked to the hydrolysis of a nucleoside triphosphate. 7.3.2.1 ABC-type phosphate transporter The expected taxonomic range for this enzyme is: Eukaryota, Bacteria. A bacterial enzyme that interacts with an extracytoplasmic substrate binding protein and mediates the high affinity uptake of phosphate anions. Unlike P-type ATPases, it does not undergo phosphorylation during the transport process. 
ATP + H2O + phosphate [phosphate-binding protein][side 1] = ADP + phosphate + phosphate [side 2] + [phosphate-binding protein][side 1] 7.3.2.2 ABC-type phosphonate transporter The enzyme, found in bacteria, interacts with an extracytoplasmic substrate binding protein and mediates the import of phosphonate and organophosphate anions. ATP + H2O + phosphonate [phosphonate-binding protein][side 1] = ADP + phosphate + phosphonate [side 2] + [phosphonate-binding protein][side 1] 7.3.2.3 ABC-type sulfate transporter The expected taxonomic range for this enzyme is: Eukaryota, Bacteria. The enzyme from Escherichia coli can interact with either of two periplasmic binding proteins and mediates the high affinity uptake of sulfate and thiosulfate. May also be involved in the uptake of selenite, selenate and possibly molybdate. Does not undergo phosphorylation during the transport. ATP + H2O + sulfate [sulfate-binding protein][side 1] = ADP + phosphate + sulfate [side 2] + [sulfate-binding protein][side 1] 7.3.2.4 ABC-type nitrate transporter The expected taxonomic range for this enzyme is: Eukaryota, Bacteria. The enzyme, found in bacteria, interacts with an extracytoplasmic substrate binding protein and mediates the import of nitrate, nitrite, and cyanate. ATP + H2O + nitrate [nitrate-binding protein][side 1] = ADP + phosphate + nitrate [side 2] + [nitrate-binding protein][side 1] 7.3.2.5 ABC-type molybdate transporter The expected taxonomic range for this enzyme is: Archaea, Eukaryota, Bacteria. The enzyme, found in bacteria, interacts with an extracytoplasmic substrate binding protein and mediates the high-affinity import of molybdate and tungstate. Does not undergo phosphorylation during the transport process. ATP + H2O + molybdate [molybdate-binding protein][side 1] = ADP + phosphate + molybdate [side 2] + [molybdate-binding protein][side 1] 7.3.2.6 ABC-type tungstate transporter The expected taxonomic range for this enzyme is: Archaea, Bacteria. The enzyme, characterized from the archaeon Pyrococcus furiosus, the Gram-positive bacterium Eubacterium acidaminophilum and the Gram-negative bacterium Campylobacter jejuni, interacts with an extracytoplasmic substrate binding protein and mediates the import of tungstate into the cell for incorporation into tungsten-dependent enzymes. ATP + H2O + tungstate [tungstate-binding protein][side 1] = ADP + phosphate + tungstate [side 2] + [tungstate-binding protein][side 1] EC 7.4 Catalysing the translocation of amino acids and peptides Subclasses are based on the reaction processes that provide the driving force for the translocation. At present there is only one subclass: EC 7.4.2 Translocation of amino acids and peptides linked to the hydrolysis of a nucleoside triphosphate. 7.4.2.1 ABC-type polar-amino-acid transporter The expected taxonomic range for this enzyme is: Eukaryota, Bacteria. The enzyme, found in bacteria, interacts with an extracytoplasmic substrate binding protein and mediates the import of polar amino acids. This entry comprises bacterial enzymes that import histidine, arginine, lysine, glutamine, glutamate, aspartate, ornithine, octopine and nopaline. ATP + H2O + polar amino acid [polar amino acid-binding protein][side 1] = ADP + phosphate + polar amino acid [side 2] + [polar amino acid-binding protein][side 1] 7.4.2.2 ABC-type nonpolar-amino-acid transporter The expected taxonomic range for this enzyme is: Eukaryota, Bacteria. The enzyme, found in bacteria, interacts with an extracytoplasmic substrate binding protein. 
This entry comprises enzymes that import leucine, isoleucine and valine. ATP + H2O + nonpolar amino acid [nonpolar amino acid-binding protein][side 1] = ADP + phosphate + nonpolar amino acid [side 2] + [nonpolar amino acid-binding protein][side 1] 7.4.2.3 ABC-type mitochondrial protein-transporting ATPase The expected taxonomic range for this enzyme is: Eukaryota, Bacteria. A non-phosphorylated, non-ABC (ATP-binding cassette) ATPase involved in the transport of proteins or preproteins into mitochondria using the TIM (translocase of the inner membrane) protein complex. TIM is the protein transport machinery of the mitochondrial inner membrane that contains three essential TIM proteins: Tim17 and Tim23 are thought to build a preprotein translocation channel while Tim44 interacts transiently with the matrix heat-shock protein Hsp70 to form an ATP-driven import motor. ATP + H2O + mitochondrial protein [side 1] = ADP + phosphate + mitochondrial protein [side 2] 7.4.2.4 ABC-type chloroplast protein-transporting ATPase The enzyme appears in viruses and cellular organisms. Involved in the transport of proteins or preproteins into the chloroplast stroma (several ATPases may participate in this process). ATP + H2O + chloroplast protein [side 1] = ADP + phosphate + chloroplast protein [side 2] 7.4.2.5 ABC-type protein transporter The expected taxonomic range for this enzyme is: Eukaryota, Bacteria. This entry stands for a family of bacterial enzymes that are dedicated to the secretion of one or several closely related proteins belonging to the toxin, protease and lipase families. Examples from Gram-negative bacteria include α-hemolysin, cyclolysin, colicin V and siderophores, while examples from Gram-positive bacteria include bacteriocin, subtilin, competence factor and pediocin. ATP + H2O + protein [side 1] = ADP + phosphate + protein [side 2] 7.4.2.6 ABC-type oligopeptide transporter A bacterial enzyme that interacts with an extracytoplasmic substrate binding protein and mediates the import of oligopeptides of varying nature. The binding protein determines the specificity of the system. Does not undergo phosphorylation during the transport process. ATP + H2O + oligopeptide [oligopeptide-binding protein][side 1] = ADP + phosphate + oligopeptide [side 2] + [oligopeptide-binding protein][side 1] 7.4.2.7 ABC-type alpha-factor-pheromone transporter The enzyme, which appears in viruses and cellular organisms, is characterized by the presence of two similar ATP-binding domains/proteins and two integral membrane domains/proteins. Does not undergo phosphorylation during the transport process. A yeast enzyme of this type exports the α-factor sex pheromone. ATP + H2O + alpha factor [side 1] = ADP + phosphate + alpha factor [side 2] 7.4.2.8 ABC-type protein-secreting ATPase The expected taxonomic range for this enzyme is: Archaea, Bacteria. A non-phosphorylated, non-ABC (ATP-binding cassette) ATPase that is involved in protein transport. ATP + H2O + cellular protein [side 1] = ADP + phosphate + cellular protein [side 2] 7.4.2.9 ABC-type dipeptide transporter The enzyme appears in viruses and cellular organisms. An ATP-binding cassette (ABC) type transporter, characterized by the presence of two similar ATP-binding domains/proteins and two integral membrane domains/proteins. A bacterial enzyme that interacts with an extracytoplasmic substrate binding protein and mediates the uptake of dipeptides and tripeptides. 
ATP + H2O + dipeptide [dipeptide-binding protein][side 1] = ADP + phosphate + dipeptide [side 2] + [dipeptide-binding protein][side 1] 7.4.2.10 ABC-type glutathione transporter A prokaryotic ATP-binding cassette (ABC) type transporter, characterized by the presence of two similar ATP-binding domains/proteins and two integral membrane domains/proteins. The enzyme from the bacterium Escherichia coli is a heterotrimeric complex that interacts with an extracytoplasmic substrate binding protein to mediate the uptake of glutathione. ATP + H2O + glutathione [glutathione-binding protein][side 1] = ADP + phosphate + glutathione [side 2] + [glutathione-binding protein][side 1] 7.4.2.11 ABC-type methionine transporter A bacterial enzyme that interacts with an extracytoplasmic substrate binding protein and functions to import methionine. (1) ATP + H2O + L-methionine [methionine-binding protein][side 1] = ADP + phosphate + L-methionine [side 2] + [methionine-binding protein][side 1] (2) ATP + H2O + D-methionine [methionine-binding protein][side 1] = ADP + phosphate + D-methionine [side 2] + [methionine-binding protein][side 1] 7.4.2.12 ABC-type cystine transporter A bacterial enzyme that interacts with an extracytoplasmic substrate binding protein and mediates the high affinity import of trace cystine. The enzyme from Escherichia coli K-12 can import both isomers of cystine and a variety of related molecules including djenkolate, lanthionine, diaminopimelate and homocystine. (1) ATP + H2O + L-cystine [cystine-binding protein][side 1] = ADP + phosphate + L-cystine [side 2] + [cystine-binding protein][side 1] (2) ATP + H2O + D-cystine [cystine-binding protein][side 1] = ADP + phosphate + D-cystine [side 2] + [cystine-binding protein][side 1] EC 7.5 Catalysing the translocation of carbohydrates and their derivatives EC 7.5.2 Linked to the hydrolysis of a nucleoside triphosphate EC 7.6 Catalysing the translocation of other compounds EC 7.6.2 Linked to the hydrolysis of a nucleoside triphosphate Examples ornithine translocase (SLC25A15), associated with ornithine translocase deficiency. carnitine-acylcarnitine translocase (SLC25A20), associated with carnitine-acylcarnitine translocase deficiency. Translocase of outer mitochondrial membrane 40 (TOMM40), a protein encoded by the TOMM40 gene, whose alleles differentially impact the risk for Alzheimer's disease. References Translocases Membrane proteins Solute carrier family
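The EC 7 numbers above follow a fixed pattern: class.subclass.sub-subclass.serial number. A tiny Python illustration of parsing that pattern is sketched below; the dictionaries hold only entries taken from this article, and the helper function is ours, not part of any standard enzyme-nomenclature library:

# Subclass names and two example entries, all drawn from the text above.
ec7_subclasses = {
    "7.1": "translocation of hydrons",
    "7.2": "translocation of inorganic cations and their chelates",
    "7.3": "translocation of inorganic anions",
    "7.4": "translocation of amino acids and peptides",
    "7.5": "translocation of carbohydrates and their derivatives",
    "7.6": "translocation of other compounds",
}
examples = {"7.1.2.2": "ATP synthase", "7.2.2.13": "Na+/K+ pump"}

def describe(ec_number: str) -> str:
    # An EC number reads class.subclass.sub-subclass.serial, e.g. 7.2.2.13.
    subclass = ".".join(ec_number.split(".")[:2])
    name = examples.get(ec_number, "unnamed entry")
    return f"EC {ec_number} ({name}): {ec7_subclasses.get(subclass, 'unknown')}"

print(describe("7.2.2.13"))
# EC 7.2.2.13 (Na+/K+ pump): translocation of inorganic cations and their chelates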
Translocase
[ "Biology" ]
4,100
[ "Protein classification", "Membrane proteins" ]
10,953,559
https://en.wikipedia.org/wiki/Glycerol-3-phosphate%20dehydrogenase
Glycerol-3-phosphate dehydrogenase (GPDH) is an enzyme that catalyzes the reversible redox conversion of dihydroxyacetone phosphate (also known by the outdated name glycerone phosphate) to sn-glycerol 3-phosphate. Glycerol-3-phosphate dehydrogenase serves as a major link between carbohydrate metabolism and lipid metabolism. It is also a major contributor of electrons to the electron transport chain in the mitochondria. Older terms for glycerol-3-phosphate dehydrogenase include alpha glycerol-3-phosphate dehydrogenase (alphaGPDH) and glycerolphosphate dehydrogenase (GPDH). However, glycerol-3-phosphate dehydrogenase is not the same as glyceraldehyde 3-phosphate dehydrogenase (GAPDH), whose substrate is an aldehyde, not an alcohol. Metabolic function GPDH plays a major role in lipid biosynthesis. Through the reduction of dihydroxyacetone phosphate into glycerol 3-phosphate, GPDH allows the prompt dephosphorylation of glycerol 3-phosphate into glycerol. Additionally, GPDH is one of the enzymes involved in maintaining the redox potential across the inner mitochondrial membrane. Reaction The NAD+/NADH coenzyme couple acts as an electron reservoir for metabolic redox reactions, carrying electrons from one reaction to another. Most of these metabolic reactions occur in the mitochondria. To regenerate NAD+ for further use, NADH pools in the cytosol must be reoxidized. Since the mitochondrial inner membrane is impermeable to both NADH and NAD+, these cannot be freely exchanged between the cytosol and mitochondrial matrix. One way to shuttle this reducing equivalent across the membrane is through the glycerol-3-phosphate shuttle, which employs the two forms of GPDH: Cytosolic GPDH, or GPD1, is localized to the outer membrane of the mitochondria facing the cytosol, and catalyzes the reduction of dihydroxyacetone phosphate into glycerol-3-phosphate. In conjunction, mitochondrial GPDH, or GPD2, is embedded on the outer surface of the inner mitochondrial membrane, overlooking the cytosol, and catalyzes the oxidation of glycerol-3-phosphate to dihydroxyacetone phosphate. The reactions catalyzed by cytosolic (soluble) and mitochondrial GPDH are as follows: dihydroxyacetone phosphate + NADH + H+ = sn-glycerol 3-phosphate + NAD+ (cytosolic, NAD+-linked) sn-glycerol 3-phosphate + quinone = dihydroxyacetone phosphate + quinol (mitochondrial, FAD-dependent) Variants There are two forms of GPDH: the cytosolic NAD+-linked form and the mitochondrial FAD-dependent form. The following human genes encode proteins with GPDH enzymatic activity: GPD1 Cytosolic glycerol-3-phosphate dehydrogenase (GPD1) is an NAD+-dependent enzyme that reduces dihydroxyacetone phosphate to glycerol-3-phosphate. Simultaneously, NADH is oxidized to NAD+ in the following reaction: dihydroxyacetone phosphate + NADH + H+ = sn-glycerol 3-phosphate + NAD+ As a result, NAD+ is regenerated for further metabolic activity. GPD1 consists of two subunits, and reacts with dihydroxyacetone phosphate and NAD+ through the following interaction (caption of Figure 4, the putative active site): The phosphate group of DHAP is half-encircled by the side-chain of Arg269, and interacts with Arg269 and Gly268 directly by hydrogen bonds (not shown). The conserved residues Lys204, Asn205, Asp260 and Thr264 form a stable hydrogen bonding network. The other hydrogen bonding network includes residues Lys120 and Asp260, as well as an ordered water molecule (with a B-factor of 16.4 Å2), which hydrogen bonds to Gly149 and Asn151 (not shown). In these two electrostatic networks, only the ε-NH3+ group of Lys204 is the nearest to the C2 atom of DHAP (3.4 Å). GPD2 Mitochondrial glycerol-3-phosphate dehydrogenase (GPD2) catalyzes the irreversible oxidation of glycerol-3-phosphate to dihydroxyacetone phosphate and concomitantly transfers two electrons from FAD to the electron transport chain. GPD2 consists of 4 identical subunits. 
Response to environmental stresses Studies indicate that GPDH is mostly unaffected by pH changes: neither GPD1 nor GPD2 is favored under particular pH conditions. At high salt concentrations (e.g., NaCl), GPD1 activity is enhanced over GPD2, since an increase in the salinity of the medium leads to an accumulation of glycerol in response. Changes in temperature do not appear to favor either GPD1 or GPD2. Glycerol-3-phosphate shuttle The cytosolic and mitochondrial glycerol-3-phosphate dehydrogenases work in concert. Oxidation of cytoplasmic NADH by the cytosolic form of the enzyme creates glycerol-3-phosphate from dihydroxyacetone phosphate. Once the glycerol-3-phosphate has moved through the outer mitochondrial membrane, it can then be oxidised by a separate isoform of glycerol-3-phosphate dehydrogenase that uses quinone as an oxidant and FAD as a co-factor. As a result, there is a net loss in energy, comparable to one molecule of ATP (see the arithmetic sketch below). The combined action of these enzymes maintains the NAD+/NADH ratio that allows for continuous operation of metabolism. Role in disease The fundamental role of GPDH in maintaining the NAD+/NADH potential, as well as its role in lipid metabolism, makes GPDH a factor in lipid imbalance diseases, such as obesity. Enhanced GPDH activity, particularly of GPD2, leads to an increase in glycerol production. Since glycerol is a main building block in lipid metabolism, its abundance can easily lead to an increase in triglyceride accumulation at a cellular level. As a result, there is a tendency to form adipose tissue, leading to an accumulation of fat that favors obesity. GPDH has also been found to play a role in Brugada syndrome. Mutations in the gene encoding GPD1 have been shown to cause defects in the electron transport chain. The resulting disturbance of NAD+/NADH levels in the cell is believed to contribute to defects in cardiac sodium ion channel regulation and can lead to a lethal arrhythmia during infancy. Pharmacological target The mitochondrial isoform of G3P dehydrogenase is thought to be inhibited by metformin, a first-line drug for type 2 diabetes. Biological research Sarcophaga barbata was used to study the oxidation of L-3-glycerophosphate in mitochondria. It was found that L-3-glycerophosphate does not enter the mitochondrial matrix, unlike pyruvate. This helped localize the L-3-glycerophosphate-flavoprotein oxidoreductase, which is on the inner membrane of the mitochondria. Structure Glycerol-3-phosphate dehydrogenase consists of two protein domains. The N-terminal domain is an NAD-binding domain, and the C-terminus acts as a substrate-binding domain. However, dimer and tetramer interface residues are involved in GAPDH-RNA binding, as GAPDH can exhibit several moonlighting activities, including the modulation of RNA binding and/or stability. See also substrate pages: glycerol 3-phosphate, dihydroxyacetone phosphate related topics: glycerol phosphate shuttle, creatine kinase, glycolysis, gluconeogenesis References Further reading External links equivalent entries: GPDH Yeast genome database GO term: GPDH Protein domains EC 1.1.1
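The "comparable to one molecule of ATP" figure can be checked against commonly cited textbook P/O estimates. The numbers in the short Python sketch below are those textbook approximations, not values taken from this article:

# Cytosolic NADH delivered via the malate-aspartate shuttle enters the chain
# as NADH; the glycerol-3-phosphate shuttle instead hands its electrons to
# FAD and the quinone pool.
ATP_PER_NADH = 2.5    # commonly cited estimate (assumed here)
ATP_PER_FADH2 = 1.5   # electrons entering at the quinone level (assumed here)

net_loss = ATP_PER_NADH - ATP_PER_FADH2
print(f"approximate cost per NADH shuttled: {net_loss} ATP")  # ~1 ATP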
Glycerol-3-phosphate dehydrogenase
[ "Biology" ]
1,689
[ "Protein domains", "Protein classification" ]
10,954,236
https://en.wikipedia.org/wiki/Michael%20Somogyi
Michael Somogyi (March 7, 1883 – July 21, 1971) was a Hungarian-American professor of biochemistry at Washington University in St. Louis and the Jewish Hospital of St. Louis. He prepared the first insulin treatment given to a child with diabetes in the US in October 1922. Somogyi later showed that excessive insulin makes diabetes unstable; the Chronic Somogyi rebound bears his name. Career Somogyi was born on March 7, 1883, in the village of Zsámánd in Hungary (today Reinersdorf, part of Heiligenbrunn, Austria). He graduated in chemical engineering from the University of Budapest in 1905. After an additional year as an assistant in biochemistry, Somogyi went to the United States, where he eventually found a position as an assistant in biochemistry at Cornell University (1906–1908). He returned to Budapest, where he worked at the Municipal Laboratory for the next decade. In 1914, he received his Ph.D. from the University of Budapest, submitting a dissertation on catalytic hydrogenation. During World War I he was in charge of providing food to the destitute. Somogyi was invited to return to the United States by Philip A. Shaffer, whom he had known at Cornell. In 1922 Somogyi became an instructor in biochemistry at Washington University in St. Louis. There Somogyi worked with Shaffer and Edward Adelbert Doisy on insulin preparation and insulin's use in the treatment of diabetes. In 1926, Somogyi became the first biochemist on the staff of the new Jewish Hospital of St. Louis, where he worked closely with physicians. He directed the hospital's clinical laboratory until he retired in 1957. Research Insulin was discovered in 1921 by Frederick Banting, Charles Best and John Macleod at the University of Toronto. At a time when most child diabetics lived no more than a few months or years, insulin offered the hope of extending lives. Somogyi worked with Philip A. Shaffer and Edward Adelbert Doisy on insulin preparation and insulin's use in the treatment of diabetes. He developed a method for extracting insulin from the pancreases of dogs. In 1922 doctors treated the first diabetic American child, a baby boy, with Somogyi's insulin. Somogyi also developed a quicker, less expensive method for screening for diabetes, using sodium carbonate, urine, and heat. This led to the development of popular tests, including several varieties of urine sugar comparator from the Aloe Company of Saint Louis, Missouri. Urine Sugar Test kits were also produced by Eli Lilly and Company. In 1938 Somogyi published findings showing that excessive insulin can make diabetes management unstable and increase the difficulty of treatment. The Chronic Somogyi rebound, a form of post-hypoglycemic hyperglycemia that Somogyi theorized could occur as a defensive mechanism, is named for him. It can be confused with the dawn phenomenon, and whether Somogyi's theory is correct is still contested. In 1949, Somogyi argued against the use of high doses of insulin on the grounds that it was a potentially dangerous form of treatment. He also argued that many diabetic patients could successfully manage their conditions through a combination of diet and weight loss. In 1969, Somogyi had a stroke. He died on July 21, 1971. References External links Finding Aid to The Dr. Michael Somogyi Collection, 1912–1979 (bulk 1924–1970) at the Science History Institute (For full finding aid, click on 'Dr. Michael Somogyi Collection Finding Aid'.) Finding Aid to Photographs from the Dr. 
Michael Somogyi Collection, 1912–1971 (bulk 1950s) at the Science History Institute (For full finding aid, click on 'Dr. Michael Somogyi Collection Finding Aid'.) 1883 births 1971 deaths 20th-century American biochemists Emigrants from Austria-Hungary to the United States Budapest University alumni Clinical chemists Hungarian biochemists Washington University in St. Louis faculty Cornell University staff
Michael Somogyi
[ "Chemistry" ]
855
[ "Biochemists", "Clinical chemists" ]
10,954,379
https://en.wikipedia.org/wiki/International%20Society%20for%20Phylogenetic%20Nomenclature
The International Society for Phylogenetic Nomenclature was established to encourage and facilitate the development and use of, and communication about, phylogenetic nomenclature. It organizes periodic scientific meetings and is overseeing the completion and implementation of the PhyloCode. History and meetings The International Society for Phylogenetic Nomenclature (ISPN) was established at the first international phylogenetic nomenclature meeting, which convened at the Muséum national d'histoire naturelle in Paris on July 6–9, 2004. At the second meeting (2006), rules concerning the choice of name for crown clades were discussed, along with rules to clarify the use of binomial species names in the context of phylogenetic nomenclature and to enhance the information content of these names (regarding the monophyly or paraphyly of the genus name, considered a prenomen, in the context of the PhyloCode). It was also decided then to expand the CPN (Committee on Phylogenetic Nomenclature) from nine to twelve members. The third meeting convened at Dalhousie University in Halifax, from July 20 to 22, 2008. The editors of the Companion Volume presented a progress report, and a demonstration of the RegNum on-line registration database was given. Both are important to the society because both are required to implement the PhyloCode. Other discussions at the meeting covered the problem of hybrids in rank-based and phylogenetic nomenclature, phyloinformatics, and teaching phylogenetic nomenclature. Shortly after that meeting, the ISPN was admitted as a scientific member of IUBS, the International Union of Biological Sciences, to which other regulating bodies of biological nomenclature (such as the International Association for Plant Taxonomy and the International Commission on Zoological Nomenclature, among others) also belong. References Biological nomenclature Phylogenetics
International Society for Phylogenetic Nomenclature
[ "Biology" ]
355
[ "Bioinformatics", "Phylogenetics", "Biological nomenclature", "Taxonomy (biology)" ]
10,956,197
https://en.wikipedia.org/wiki/Balanced%20circuit
In electrical engineering, a balanced circuit is electronic circuitry for use with a balanced line, or the balanced line itself. Balanced lines are a common method of transmitting many types of electrical signals between two points on two wires. In a balanced line, the two signal lines are of a matched impedance to help ensure that interference, induced in the line, is common-mode and can be removed at the receiving end by circuitry with good common-mode rejection. To maintain the balance, circuit blocks which interface to the line or are connected in the line must also be balanced. Balanced lines work because the interfering noise from the surrounding environment induces equal noise voltages into both wires. By measuring the voltage difference between the two wires at the receiving end, the original signal is recovered while the noise is rejected. Any inequality in the noise induced in each wire is an imbalance and will result in the noise not being fully rejected. One requirement for balance is that both wires are an equal distance from the noise source. This is often achieved by placing the wires as close together as possible and twisting them together. Another requirement is that the impedance to ground (or to whichever reference point is being used by the difference detector) is the same for both conductors at all points along the length of the line. If one wire has a higher impedance to ground it will tend to have a higher noise induced, destroying the balance. Balance and symmetry A balanced circuit will normally show a symmetry of its components about a horizontal line midway between the two conductors (example in figure 3). This is different from what is normally meant by a symmetrical circuit, which is a circuit showing symmetry of its components about a vertical line at its midpoint. An example of a symmetrical circuit is shown in figure 2. Circuits designed for use with balanced lines will often be designed to be both balanced and symmetrical as shown in figure 4. The advantages of symmetry are that the same impedance is presented at both ports and that the circuit has the same effect on signals travelling in both directions on the line. Balance and symmetry are usually associated with reflected horizontal and vertical physical symmetry respectively as shown in figures 1 to 4. However, physical symmetry is not a necessary requirement for these conditions. It is only necessary that the electrical impedances are symmetrical. It is possible to design circuits that are not physically symmetrical but which have equivalent impedances which are symmetrical. Balanced signals and balanced circuits A balanced signal is one where the voltages on each wire are symmetrical with respect to ground (or some other reference). That is, the signals are inverted with respect to each other. A balanced circuit is a circuit where the two sides have identical transmission characteristics in all respects. A balanced line is a line in which the two wires will carry balanced currents (that is, equal and opposite currents) when balanced (symmetrical) voltages are applied. The condition for balance of lines and circuits will be met, in the case of passive circuitry, if the impedances are balanced. The line and circuit remain balanced, and the benefits of common-mode noise rejection continue to apply, whether or not the applied signal is itself balanced (symmetrical), always provided that the generator producing that signal maintains the impedance balance of the line. 
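The common-mode rejection described above is easy to demonstrate numerically. In the Python sketch below, the signal amplitudes, frequencies and hum level are invented purely for illustration:

import numpy as np

# The wanted signal is sent differentially; hum couples equally into both wires.
t = np.linspace(0.0, 1e-3, 1000)
signal = 0.5 * np.sin(2 * np.pi * 1e3 * t)   # wanted 1 kHz signal (assumed)
hum = 0.2 * np.sin(2 * np.pi * 50 * t)       # 50 Hz interference (assumed)

wire_a = +signal / 2 + hum                   # one leg of the pair
wire_b = -signal / 2 + hum                   # the other leg

received = wire_a - wire_b                   # differential receiver
print(np.allclose(received, signal))         # True: the common-mode hum cancels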
Driving and receiving circuits There are a number of ways that a balanced line can be driven and the signal detected. In all methods, for the continued benefit of good noise immunity, it is essential that the driving and receiving circuits maintain the impedance balance of the line. It is also essential that the receiving circuit detects only differential signals and rejects common-mode signals. It is not essential (although it is often the case) that the transmitted signal is balanced, that is, symmetrical about ground. Transformer balance The conceptually simplest way to connect to a balanced line is through transformers at each end, as shown in figure 5. Transformers were the original method of making such connections in telephony, and before the advent of active circuitry were the only way. In the telephony application they are known as repeating coils. Transformers have the additional advantage of completely isolating (or "floating") the line from earth and earth loop currents, which are an undesirable possibility with other methods. The side of the transformer facing the line, in a good quality design, will have the winding laid in two parts (often with a centre tap provided) which are carefully balanced to maintain the line balance. Line side and equipment side windings are more useful concepts than the more usual primary and secondary windings when discussing these kinds of transformers. At the sending end the line side winding is the secondary, but at the receiving end the line side winding is the primary. When discussing a two-wire circuit, primary and secondary cease to have any meaning at all, since signals are flowing in both directions at once. The equipment side winding of the transformer does not need to be so carefully balanced. In fact, one leg of the equipment side can be earthed without affecting the balance on the line, as shown in figure 5. With transformers, the sending and receiving circuitry can be entirely unbalanced, with the transformer providing the balancing. Active balance Active balance is achieved using differential amplifiers at each end of the line. An op-amp implementation of this is shown in figure 6; other circuitry is possible. Unlike transformer balance, there is no isolation of the circuitry from the line. Each of the two wires is driven by one of a pair of op-amp circuits which are identical except that one is inverting and one is non-inverting. Each one produces an asymmetrical signal individually, but together they drive the line with a symmetrical signal. The output impedances of the two amps are equal, so the impedance balance of the line is maintained. While it is not possible to create an isolated drive with op-amp circuitry alone, it is possible to create a floating output. This is important if one leg of the line might become grounded or connected to some other voltage reference. Grounding one leg of the line in the circuit of figure 6 will result in the line voltage being halved, since only one op-amp is now providing signal. To achieve a floating output, additional feedback paths are required between the two op-amps, resulting in a more complex circuit than figure 6, but still avoiding the expense of a transformer. A floating op-amp output can only float within the limits of the op-amp's supply rails. An isolated output can be achieved without transformers with the addition of opto-isolators. Impedance balance As noted above, it is possible to drive a balanced line with a single-ended signal and still maintain the line balance. This is represented in outline in figure 7. 
The amplifier driving one leg of the line through a resistor is assumed to be an ideal (that is, zero output impedance) single-ended output amp. The other leg is connected from ground through another resistor of the same value. The impedance to ground of both legs is the same and the line remains balanced. The receiving amplifier still rejects any common-mode noise as it has a differential input. On the other hand, the line signal is not symmetrical. The voltages at the input to the two legs, V+ and V−, follow from simple circuit analysis of this arrangement: V+ = V(R + Zin)/(2R + Zin) and V− = VR/(2R + Zin), where V is the output voltage of the driving amplifier, R is the value of each resistor, and Zin is the input impedance of the line (a numeric check is sketched after the references below). These are clearly not symmetrical, since V− is much smaller than V+. They are not even of opposite polarities. In audio applications V− is usually so small it can be taken as zero. Balanced to unbalanced interfacing A circuit that has the specific purpose of allowing interfacing between balanced and unbalanced circuits is called a balun. A balun could be a transformer with one leg earthed on the unbalanced side, as described in the transformer balance section above. Other circuits are possible, such as autotransformers or active circuits. Connectors Common connectors used with balanced circuits include modular connectors on telephone instruments and broadband data, and XLR connectors for professional audio. 1/4" tip/ring/sleeve (TRS) phone connectors were once widely used on manual switchboards and other telephone infrastructure. Such connectors are now more commonly seen in miniature sizes (2.5 and 3.5 mm) being used for unbalanced stereo audio; however, professional audio equipment such as mixing consoles still commonly use balanced and unbalanced "line-level" connections with 1/4" phone jacks. References Bibliography Rod Elliot, Uwe Beis, "Balanced transmitter and receiver II", Elliot Sound Products, 1 April 2002, accessed and archived 7 October 2015. A. J. Peyton, V. Walsh, Analog electronics with Op Amps: a source book of practical circuits, Cambridge University Press, 1993. Mike Rivers, "Balanced and unbalanced connections", Presonus, accessed and archived 7 October 2015. G. Randy Slone, Electricity and electronics, McGraw-Hill Professional, 2000. Daniel M. Thompson, Understanding audio: getting the most out of your project or professional recording studio, Hal Leonard Corporation, 2005. Gabriel Vasilescu, Electronic noise and interfering signals, Springer, 2005. Jerry C. Whitaker, The resource handbook of electronics, CRC Press, 2001. Jerry C. Whitaker, Master handbook of audio production: a guide to standards, equipment, and system design, McGraw-Hill Professional, 2003. Electrical circuits Communication circuits
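A quick numeric check of the expressions above, in Python, with illustrative audio-like component values (the voltage and impedance figures are assumptions, not values from the text):

# Single-ended drive of a balanced line: one leg driven through R, the other
# tied to ground through an equal R (the impedance-balance case of figure 7).
V = 1.0         # amplifier output voltage, volts (assumed)
R = 100.0       # build-out resistor on each leg, ohms (assumed)
Zin = 10_000.0  # bridging input impedance of the line/load, ohms (assumed)

I = V / (2 * R + Zin)    # a single loop current flows through both legs
V_plus = I * (R + Zin)   # driven leg, relative to ground    (~0.990 V)
V_minus = I * R          # grounded-resistor leg              (~0.0098 V)

print(V_plus, V_minus)   # same polarity, and V_minus is nearly zero
print(V_plus - V_minus)  # the differential signal equals I * Zin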
Balanced circuit
[ "Engineering" ]
1,922
[ "Telecommunications engineering", "Electronic engineering", "Electrical engineering", "Communication circuits", "Electrical circuits" ]
10,956,431
https://en.wikipedia.org/wiki/Flight%20of%20the%20Red%20Balloon
Flight of the Red Balloon () is a 2007 French-Taiwanese film directed by Hou Hsiao-hsien. It is the first part in a new series of films produced by Musée d'Orsay, and tells the story of a French family as seen through the eyes of a Chinese student. The film was shot in August and September 2006 on location in Paris. This is Hou's first non-Asian film. It references the classic 1956 French short The Red Balloon directed by Albert Lamorisse. The film opened the Un Certain Regard section of the Cannes Film Festival in May 2007. Plot Suzanne, a puppeteer, lives with her young son Simon in an apartment in Paris. While Suzanne is busy producing her new Chinese puppet play based on an ancient Chinese text (求妻煮海人), she hires a Chinese film student, Song, as Simon's new nanny. For her college project — a homage to Albert Lamorisse's famous 1956 film The Red Balloon — Song starts to film Simon. She develops a good relationship with mother and son, and translates for Suzanne's masterclass with a Chinese puppet master. While Simon's older sister Louise is about to graduate from a high school in Brussels, Suzanne plans for Louise to apply for colleges in Paris. For that, she tries to evict her downstairs tenant Marc, who has repeatedly failed to pay his rent, while arguing on the phone with Pierre, Simon's father, who has gone off to Canada for two years to write a novel and who is rarely in touch. Simon visits the Musée d'Orsay on a school trip, where his class is shown The Ball, a painting by Félix Vallotton in which a child chases a red ball. Cast Juliette Binoche as Suzanne Simon Iteanu as Simon Hippolyte Girardot as Marc Fang Song as Song Louise Margolin as Louise Anna Sigalevitch as Anna Charles-Edouard Renault as Lorenzo Li Chuan-Zan (李傳燦, son of Li Tian-Lu) as the puppet master Critical reception Rotten Tomatoes reported that 81% of 91 sampled critics and 85% of top critics gave the film positive reviews, with an average rating of 6.9 out of 10. J. Hoberman, writing in The Village Voice, was particularly appreciative of the film, stating: "Flight of the Red Balloon is contemplative but never static, and punctuated by passages of pure cinema". Kate Stables of Sight & Sound also highly praised the film: "Finding a serene and contemplative beauty in the quotidian world has long been Taiwanese master-minimalist Hou Hsiao Hsien's stock in trade... Hou brings the same depth and deliberation to the film's Parisian exterior... ultimately, it is cinema that is the film's sacred repository of memory and creativity." Jonathan Rosenbaum of the Chicago Reader praised it as "A relatively slight but sturdy work by Taiwanese master Hou Hsiao-hsien, this slice of contemporary urban life more or less does for Paris what his Cafe Lumiere did for Tokyo, albeit with less minimalism and more overt emotion." On the other hand, Duane Byrge of The Hollywood Reporter was not impressed: "The imagery of the classic movie, where a spirited red balloon wafts unpredictably over Paris, never even attempts to reach a metaphorical height, nor does it even engage us compositionally." Top ten lists The film appeared on several critics' top ten lists of the best films of 2008. 1st - J. 
Hoberman, The Village Voice 1st - Reverse Shot 2nd - Nick Schager, Slant Magazine 3rd - Liam Lacey, The Globe and Mail 4th - Manohla Dargis, The New York Times 5th - Andrew O'Hehir, Salon.com 5th - Michael Phillips, Chicago Tribune 9th - Ty Burr, The Boston Globe Awards The film won the FIPRESCI Prize at the 2007 Valladolid International Film Festival. References External links 2007 films Taiwanese drama films 2000s French-language films French drama films Films directed by Hou Hsiao-hsien Balloons Musée d'Orsay Puppet films 2000s French films BAC Films films
Flight of the Red Balloon
[ "Chemistry" ]
869
[ "Balloons", "Fluid dynamics" ]
10,957,754
https://en.wikipedia.org/wiki/CD137
CD137, a member of the tumor necrosis factor (TNF) receptor family, is a type 1 transmembrane protein, expressed on surfaces of leukocytes and non-immune cells. Its alternative names are tumor necrosis factor receptor superfamily member 9 (TNFRSF9), 4-1BB, and induced by lymphocyte activation (ILA). It is of interest to immunologists as a co-stimulatory immune checkpoint molecule, and as a potential target in cancer immunotherapy. Expression CD137 is only expressed on the cell surface after T cell activation. When T cells are activated by antigen-presenting cells (APCs), CD137 becomes embedded in the membranes of CD4+ and CD8+ T cells. CD137 is a costimulatory molecule functioning to stimulate T cell proliferation, dendritic cell maturation, and promotion of B cell antibody secretion. As a T cell co-stimulator, T cell receptor (TCR) and CD28 signaling causes expression of CD137 on T cell membranes. When CD137 then reacts with the CD137 ligand, it leads to CD137 upregulation. This is a form of self-regulation, or a positive feedback cycle. When CD137 interacts with its ligand, it leads to T cell cytokine production and T cell proliferation, among other signaling pathway responses. Other cells that express CD137 include both immune cells (e.g., monocytes, natural killer cells, dendritic cells, follicular dendritic cells (FDCs), and regulatory T cells) and non-immune cells (e.g., chondrocytes, neurons, astrocytes, microglia and endothelial cells). Regulation of the immune system CD137 and its ligand both induce signaling cascades upon interaction, a phenomenon known as bidirectional signal transduction. The CD137/ligand complex is also involved in regulation of the immune system. The CD137 ligand is a type-II transmembrane glycoprotein expressed on APCs. The CD137 ligand is normally expressed at low levels, but can have increased expression in the presence of pathogen-associated molecular patterns (PAMPs) or proinflammatory immune responses like IL-1 secretion. Cross-linking CD137 on activated T cells not only results in T cell proliferation via increased IL-2 secretion; surviving cells also contribute to expanding immune system memory and augmenting T cell cytolytic activity. Atherosclerosis Inflammation Atherosclerosis is a disease linked to cardiovascular disease (CVD) and associated with cardiac inflammation, in the form of lesions in arterial walls and other vasculature. Treatments designed to target the CD137 molecules expressed on immune cell surfaces often lead to T cell proliferation, as CD137 stimulation allows T cells to continue through the cell cycle. In this way, CD137 is often referred to as an immune checkpoint. This proliferation eventually leads to other immune cell responses and secretion of proinflammatory cytokines, which result in exaggerated inflammatory responses that exacerbate atherosclerosis. Ongoing studies are researching CD137 as a biomarker for atherosclerosis as well as CD137 antagonists as potential therapeutics to reduce the symptoms associated with the condition. Endothelial cells The mechanism connecting CD137 bidirectional signaling to the promotion of atherosclerosis is related to CD137 mediation of endothelial cell damage. When the CD137/CD137L complex interacts with endothelial cells, including those lining vascular structures, it induces the upregulation of molecules that promote inflammation and damage. 
For instance, increases in adhesion molecules, including vascular cell adhesion molecule-1 (VCAM-1) or intercellular adhesion molecule-1 (ICAM-1), on endothelial cells cause recruitment of immune cells like macrophages and neutrophils. When they arrive, these cells initiate proinflammatory responses including cytokine secretion. In chronic cases, this results in excessive inflammation of the endothelial tissue, leading to cell damage and the formation of atherosclerotic inflammatory lesions. Interactions CD137 has been shown to interact with TRAF2. As a drug target Cancer immunotherapy CD137 is also involved in cancer, having been found upregulated in cancerous cell lines. CD137/ligand stimulation has been found to lead to stronger anti-tumor responses due to cytotoxic T cell activation and is being examined as a possible anticancer therapy. Current cancer immunotherapy treatments use monoclonal antibodies (mAbs) to target and kill cancer cells. Cancer cells upregulate cell surface CD137; however, the reason behind this remains unclear. What is known is that mAbs targeting CD137 are successful in fighting cancer, as they not only mark cancer cells but also allow for CD8+ T cell activation and increased IFN-gamma secretion, as per CD137's function as a costimulatory molecule. This enables the affected individual's immune system to actively target and kill cancer cells that express CD137 on their cell surfaces. Currently, Utomilumab is the only mAb targeting CD137 on the market. Urelumab trials were temporarily halted due to risk of liver toxicity. Utomilumab trials resulted in the drug being cleared for therapeutic use. Utomilumab Utomilumab (PF-05082566) targets this receptor to stimulate a more intense immune system attack on cancers. It is a fully human IgG2 monoclonal antibody. It is in early clinical trials, with 5 trials active. In recent years, there has been a reignited interest in 4-1BB immunotherapy. Currently, several anti-4-1BB antibodies and recombinant proteins are in various stages of clinical trials. See also 4-1BB ligand Urelumab References External links Further reading Oncology Immunology
CD137
[ "Biology" ]
1,283
[ "Immunology" ]
10,958,185
https://en.wikipedia.org/wiki/Car%20dependency
Car dependency is a phenomenon in urban planning wherein existing and planned infrastructure prioritizes the use of automobiles over other modes of transportation, such as public transport, bicycles, and walking. Car dependency has been associated with a more polluting transport system compared to systems where all transportation modes are treated more equally. Car infrastructure is often paid for by governments from general taxes rather than gasoline taxes, or is mandated by governments. For instance, many cities have minimum parking requirements for new housing, which in practice require developers to "subsidize" drivers. In some places, bicycles and rickshaws are banned from using road space. The road lobby plays an important role in maintaining car dependency, arguing that car infrastructure is good for economic growth. Description In many modern cities, automobiles are convenient and sometimes necessary to move easily. When it comes to automobile use, there is a spiraling effect where traffic congestion produces the 'demand' for more and bigger roads and the removal of 'impediments' to traffic flow, such as pedestrians, signalized crossings, traffic lights, cyclists, and various forms of street-based public transit such as trams. These measures make automobile use more advantageous at the expense of other modes of transport, inducing greater traffic volumes. Additionally, the urban design of cities adjusts to the needs of automobiles in terms of movement and space. Buildings are replaced by parking lots. Open-air shopping streets are replaced by enclosed shopping malls. Walk-in banks and fast-food stores are replaced by drive-in versions of themselves that are inconveniently located for pedestrians. Town centers with a mixture of commercial, retail, and entertainment functions are replaced by single-function business parks, 'category-killer' retail boxes, and 'multiplex' entertainment complexes, each surrounded by large tracts of parking. These kinds of environments require automobiles to access them, thus inducing even more traffic onto the increased road space. This results in congestion, and the cycle above continues. Roads get ever bigger, consuming ever greater tracts of land previously used for housing, manufacturing, and other socially and economically useful purposes. Public transit becomes less viable and socially stigmatized, eventually becoming a minority form of transportation. People's choices and freedoms to live functional lives without the use of the car are greatly reduced. Such cities are automobile-dependent. Automobile dependency is seen primarily as an issue of environmental sustainability due to the consumption of non-renewable resources and the production of greenhouse gases responsible for global warming. It is also an issue of social and cultural sustainability. Like gated communities, the private automobile produces physical separation between people and reduces the opportunities for unstructured social encounters that are a significant aspect of social capital formation and maintenance in urban environments. Origins of car dependency As automobile use rose drastically in the 1910s, American road administrators favored building roads to accommodate traffic. Administrators and engineers in the interwar period spent their resources making small adjustments to accommodate traffic, such as widening lanes and adding parking spaces, as opposed to larger projects that would change the built environment altogether. 
American cities began to tear out tram systems in the 1920s. Car dependency itself saw its formation around the Second World War, when urban infrastructure began to be built exclusively around the car. The resultant economic and built environment restructuring allowed wide adoption of automobile use. In the United States, the expansive manufacturing infrastructure, increase in consumerism, and the establishment of the Interstate Highway System set forth the conditions for car dependence in communities. In 1956, the Highway Trust Fund was established in America, reinvesting gasoline taxes back into car-based infrastructure. Urban design factors Land-use (zoning) In 1916 the first zoning ordinance was introduced in New York City, the 1916 Zoning Resolution. Zoning was created as a means of organizing specific land uses in a city so as to avoid potentially harmful adjacencies like heavy manufacturing and residential districts, which were common in large urban areas in the 19th and early 20th centuries. Zoning code also determines the permitted residential building types and densities in specific areas of a city, by defining such things as single-family homes and multi-family residential buildings as being allowed as of right or not in certain areas. The overall effect of zoning in the last century has been to create areas of the city with similar land use patterns in cities that had previously been a mix of heterogeneous residential and business uses. The problem is particularly severe right outside of cities, in suburban areas located around the periphery of a city where strict zoning codes almost exclusively allow for single-family detached housing. Strict zoning codes that result in a heavily segregated built environment between residential and commercial land uses contribute to car dependency by making it nearly impossible to access all one's given needs, such as housing, work, school and recreation, without the use of a car. One key solution to the spatial problems caused by zoning would be a robust public transportation network. There is also currently a movement to amend older zoning ordinances to create more mixed-use zones in cities that combine residential and commercial land uses within the same building or within walking distance, to create the so-called 15-minute city. Parking minimums are also a part of modern zoning codes, and contribute to car dependency through a process known as induced demand. Parking minimums require a certain number of parking spots based on the land use of a building and are often designed in zoning codes to represent the maximum possible need at any given time. This has resulted in American cities having nearly eight parking spaces for every car, creating cities almost fully dedicated to parking, from free on-street parking to parking lots up to three times the size of the businesses they serve. This prevalence of parking has eroded the competition from other forms of transportation, such that driving becomes the de facto choice for many people even when alternatives do exist. Street design The design of city roads can contribute significantly to the perceived and actual need to use a car over other modes of transportation in daily life. In the urban context, car dependence is induced in greater numbers by design factors that operate in opposite directions: first, design that makes driving easier, and second, design that makes all other forms of transportation more difficult. 
Frequently these two forces overlap in a compounding effect to induce more car dependence in an area that would have potential for a more heterogeneous mix of transportation options. These factors include things like the width of roads, which makes driving faster and therefore 'easier' while also creating a less safe environment for pedestrians or cyclists who share the same road. The prevalence of on-street parking on most residential and commercial streets also makes driving easier while taking away street space that could be used for protected bike lanes, dedicated bus lanes, or other forms of public transportation. Negative externalities of automobiles According to the Handbook on estimation of external costs in the transport sector produced by Delft University, the main reference in the European Union for assessing the externalities of cars, the main external costs of driving a car are: congestion and scarcity costs, collision costs, air pollution costs, noise pollution costs, climate change costs, costs for nature and landscape, costs for water pollution, costs for soil pollution, and costs of energy dependency. Other negative externalities may include increased cost of building infrastructure, inefficient use of space and energy, pollution and per capita fatalities. Addressing the issue There are a number of planning and design approaches to redressing automobile dependency, known variously as New Urbanism, transit-oriented development, and smart growth. Most of these approaches focus on the physical urban design, urban density and land-use zoning of cities. Paul Mees argued that investment in good public transit, centralized management by the public sector and appropriate policy priorities are more significant than issues of urban form and density. Removal of minimum parking requirements from building codes can alleviate the problems generated by car dependency. Minimum parking requirements occupy valuable space that otherwise can be used for housing. However, removal of minimum parking requirements will require implementation of additional policies to manage the increase in alternative parking methods. There are, of course, many who argue against a number of the details within any of the complex arguments related to this topic, particularly relationships between urban density and transit viability, or the nature of viable alternatives to automobiles that provide the same degree of flexibility and speed. There is also research into the future of automobility itself in terms of shared usage, size reduction, road-space management and more sustainable fuel sources. Car-sharing is one example of a solution to automobile dependency. Research has shown that in the United States, services like Zipcar have reduced demand by about 500,000 cars. In the developing world, companies like eHi, Carrot, Zazcar and Zoom have replicated or modified Zipcar's business model to improve urban transportation, to provide a broader audience with greater access to the benefits of a car, and to provide "last-mile" connectivity between public transportation and an individual's destination. Car sharing also reduces private vehicle ownership. Urban sprawl and smart growth Whether smart growth does or can reduce problems of automobile dependency associated with urban sprawl has been fiercely contested for several decades. The influential study in 1989 by Peter Newman and Jeff Kenworthy compared 32 cities across North America, Australia, Europe and Asia. 
The study has been criticised for its methodology, but its main finding, that denser cities, particularly in Asia, have lower car use than sprawling cities, particularly in North America, has been largely accepted, although the relationship is clearer at the extremes across continents than it is within countries, where conditions are more similar. Within cities, studies from across many countries (mainly in the developed world) have shown that denser urban areas with a greater mixture of land use and better public transport tend to have lower car use than less dense suburban and exurban residential areas. This usually holds true even after controlling for socio-economic factors such as differences in household composition and income. This does not necessarily imply that suburban sprawl causes high car use, however. One confounding factor, which has been the subject of many studies, is residential self-selection: people who prefer to drive tend to move towards low-density suburbs, whereas people who prefer to walk, cycle or use transit tend to move towards higher density urban areas, better served by public transport. Some studies have found that, when self-selection is controlled for, the built environment has no significant effect on travel behaviour. More recent studies using more sophisticated methodologies have generally rejected these findings: density, land use and public transport accessibility can influence travel behaviour, although social and economic factors, particularly household income, usually exert a stronger influence. The paradox of intensification Reviewing the evidence on urban intensification, smart growth and their effects on automobile use, Melia et al. (2011) found support for the arguments of both supporters and opponents of smart growth. Planning policies that increase population densities in urban areas do tend to reduce car use, but the effect is weak, so doubling the population density of a particular area will not halve the frequency or distance of car use. These findings led them to propose the paradox of intensification: All other things being equal, urban intensification which increases population density will reduce per capita car use, with benefits to the global environment, but will also increase concentrations of motor traffic, worsening the local environment in those locations where it occurs. At the citywide level, it may be possible, through a range of positive measures, to counteract the increases in traffic and congestion that would otherwise result from increasing population densities: Freiburg im Breisgau in Germany is one example of a city which has been more successful in reducing automobile dependency and constraining increases in traffic despite substantial increases in population density. This study also reviewed evidence on the local effects of building at higher densities. At the level of the neighbourhood or individual development, positive measures (like improvements to public transport) will usually be insufficient to counteract the traffic effect of increasing population density. This leaves policy-makers with four choices: intensify and accept the local consequences; sprawl and accept the wider consequences; compromise with some element of both; or intensify accompanied by more direct measures such as parking restrictions, closing roads to traffic, and car-free zones. A toy calculation of the paradox is sketched below.
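The paradox can be made concrete with a toy calculation. The elasticity value below is an illustrative assumption, not an empirical estimate from the studies cited above:

```python
# Toy illustration of the paradox of intensification: per-capita car use
# falls with density, but traffic concentrated in the area can still rise.

def car_use_per_capita(density, elasticity=0.3, k=1.0):
    """Assume per-capita car use scales as density**(-elasticity)."""
    return k * density ** (-elasticity)

for d in (1.0, 2.0):  # relative density before and after intensification
    per_capita = car_use_per_capita(d)
    local_traffic = d * per_capita        # total traffic generated in the area
    print(f"density x{d:.0f}: per-capita use {per_capita:.2f}, "
          f"local traffic {local_traffic:.2f}")

# With an assumed elasticity of 0.3, doubling density cuts per-capita car
# use by about 19% but raises traffic in the intensified area by about 62%.
```

The numbers change with the assumed elasticity, but the qualitative point holds for any elasticity between 0 and 1: per-capita use falls, while locally concentrated traffic rises.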
See also Automotive industry Accessibility (transport) Automotive city Car costs Car-free movement Cycling infrastructure Effects of the car on societies Forced rider Jevons paradox Peak car Sedentary lifestyle Sustainable transport Transit-oriented development Transport divide Urban planning Walkability 2008–2010 automotive industry crisis Notes and references Bibliography External links Automobile Dependency (TDM Encyclopedia), Victoria Transport Policy Institute Smart Cities concept cars at MIT Urban design Sustainable urban planning Sustainable transport Car ownership Automotive industry Industries (economics) History of the automobile
Car dependency
[ "Physics" ]
2,563
[ "Physical systems", "Transport", "Sustainable transport" ]
10,958,349
https://en.wikipedia.org/wiki/Air-mass%20thunderstorm
An air-mass thunderstorm, also called an "ordinary", "single cell", "isolated" or "garden variety" thunderstorm, is a thunderstorm that is generally weak and usually not severe. These storms form in environments where at least some amount of Convective Available Potential Energy (CAPE) is present, but with very low levels of wind shear and helicity. The lifting source, which is a crucial factor in thunderstorm development, is usually the result of uneven heating of the surface, though they can be induced by weather fronts and other low-level boundaries associated with wind convergence. The energy needed for these storms to form comes in the form of insolation, or solar radiation. Air-mass thunderstorms do not move quickly, last no longer than an hour, and pose the threats of lightning as well as showery light, moderate, or heavy rainfall. Heavy rainfall can interfere with microwave transmissions within the atmosphere. Lightning characteristics are related to characteristics of the parent thunderstorm, and lightning can ignite wildfires near thunderstorms with minimal rainfall. On unusual occasions there can be a weak downburst and small hail. They are common in temperate zones during a summer afternoon. Like all thunderstorms, their motion is determined by the mean-layer wind field within which they form. When the deep-layer wind flow is light, outflow boundary progression will determine storm movement. Since thunderstorms can be a hazard to aviation, pilots are advised to fly above any haze layers within regions of better visibility and to avoid flying under the anvil of these thunderstorms, which can be a region where hail falls from the parent thunderstorm. Vertical wind shear is also a hazard near the base of thunderstorms which have generated outflow boundaries. Life cycle The trigger for the lift of the initial cumulus cloud can be insolation heating the ground and producing thermals, areas where two winds converge and force air upwards, or where winds blow over terrain of increasing elevation. The moisture rapidly cools into liquid drops of water due to the cooler temperatures at high altitude, which appear as cumulus clouds. As the water vapor condenses into liquid, latent heat is released, which warms the air, causing it to become less dense than the surrounding dry air. The air tends to rise in an updraft through the process of convection (hence the term convective precipitation). This creates a low-pressure zone beneath the forming thunderstorm, otherwise known as a cumulonimbus cloud. In a typical thunderstorm, approximately 5×10⁸ kg of water vapor is lifted into the Earth's atmosphere. As they form in areas of minimal vertical wind shear, the thunderstorm's rainfall creates a moist and relatively cool outflow boundary which undercuts the storm's low-level inflow and quickly causes dissipation. Waterspouts, small hail, and strong wind gusts can occur in association with these thunderstorms. Common locations of appearance Also known as single cell thunderstorms, these are the typical summer thunderstorms in many temperate locales. They also occur in the cool unstable air which often follows the passage of a cold front from the sea during winter. Within a cluster of thunderstorms, the term "cell" refers to each separate principal updraft. Thunderstorm cells occasionally form in isolation, as the occurrence of one thunderstorm can develop an outflow boundary which sets up new thunderstorm development.
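The 5×10⁸ kg figure quoted above implies an enormous energy release on condensation. The arithmetic is straightforward; treating all of the lifted vapor as condensing is a simplification:

```python
# Order-of-magnitude check on the energy released when the quoted
# 5 x 10^8 kg of water vapor condenses in a typical thunderstorm.

L_V = 2.26e6          # J/kg, latent heat of vaporization of water
mass_vapor = 5e8      # kg, water vapor lifted in a typical storm (figure from text)

energy_joules = mass_vapor * L_V
print(f"{energy_joules:.2e} J released")                  # ~1.1e15 J

# For scale: one megaton of TNT is about 4.184e15 J.
print(f"~{energy_joules / 4.184e15:.2f} megatons TNT equivalent")
```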
Such air-mass storms are rarely severe and are a result of local atmospheric instability; hence the term "air mass thunderstorm". When such storms have a brief period of severe weather associated with them, it is known as a pulse severe storm. Pulse severe storms are poorly organized due to the minimal vertical wind shear in the storm's environment and occur randomly in time and space, making them difficult to forecast. Between formation and dissipation, single cell thunderstorms normally last 20–30 minutes. Motion The two major ways thunderstorms move are via advection by the wind and propagation along outflow boundaries towards sources of greater heat and moisture. Many thunderstorms move with the mean wind speed through the Earth's troposphere, the lowest layer of the Earth's atmosphere. Younger thunderstorms are steered by winds closer to the Earth's surface than more mature thunderstorms, as they tend not to be as tall. If the gust front, or leading edge of the outflow boundary, moves ahead of the thunderstorm, the thunderstorm's motion will move in tandem with the gust front. This is more of a factor with thunderstorms with heavy precipitation (HP), such as air-mass thunderstorms. When thunderstorms merge, which is most likely when numerous thunderstorms exist in proximity to each other, the motion of the stronger thunderstorm normally dictates the future motion of the merged cell. The stronger the mean wind, the less likely other processes will be involved in storm motion. On weather radar, storms are tracked by using a prominent feature and tracking it from scan to scan (a simple sketch of this idea appears below). Convective precipitation Convective rain, or showery precipitation, occurs from cumulonimbus clouds. It falls as showers with rapidly changing intensity. Convective precipitation falls over a certain area for a relatively short time, as convective clouds such as thunderstorms have limited horizontal extent. Most precipitation in the tropics appears to be convective. Graupel and hail are good indicators of convective precipitation and thunderstorms. In mid-latitudes, convective precipitation is intermittent and often associated with baroclinic boundaries such as cold fronts, squall lines, and warm fronts. High rainfall rates are associated with thunderstorms with larger raindrops. Heavy rainfall leads to fading of microwave transmissions above the frequency of 10 gigahertz (GHz), and the fading is more severe above frequencies of 15 GHz. Lightning Relationships have been found between lightning frequency and the height of precipitation within thunderstorms. Thunderstorms whose radar returns extend above a certain height are associated with storms which have more than ten lightning flashes per minute. There is also a correlation between the total lightning rate and the size of the thunderstorm, its updraft velocity, and the amount of graupel over land. The same relationships fail over tropical oceans, however. Lightning from low precipitation (LP) thunderstorms is one of the leading causes of wildfires. Aviation concerns In areas where these thunderstorms form in isolation and horizontal visibility is good, pilots can evade these storms rather easily. In more moist atmospheres, which become hazy, pilots navigate above the haze layer in order to get a better vantage point of these storms. Flying under the anvil of thunderstorms is not advised, as hail is more likely to fall in such areas outside the thunderstorm's main rain shaft.
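Relating back to the scan-to-scan tracking mentioned in the Motion section above, the following is a minimal sketch of estimating storm motion from the displacement of an echo centroid between two radar scans. The coordinates and scan interval are illustrative, not from any particular radar product:

```python
import numpy as np

scan_interval_s = 300.0                           # 5 minutes between scans

echoes_t0 = np.array([[12.0, 40.0], [13.0, 41.0], [12.5, 40.5]])  # km east/north
echoes_t1 = echoes_t0 + np.array([1.5, 0.9])      # the same cells on the next scan

# Storm motion = centroid displacement divided by the time between scans
displacement_km = echoes_t1.mean(axis=0) - echoes_t0.mean(axis=0)
velocity_ms = displacement_km * 1000.0 / scan_interval_s

speed = np.hypot(*velocity_ms)
bearing = np.degrees(np.arctan2(velocity_ms[1], velocity_ms[0]))
print(f"storm motion ~{speed:.1f} m/s, {bearing:.0f} degrees north of east")
```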
When an outflow boundary forms due to a shallow layer of rain-cooled air spreading out near ground level from the parent thunderstorm, both speed and directional wind shear can result at the leading edge of the three-dimensional boundary. The stronger the outflow boundary is, the stronger the resultant vertical wind shear will become. See also Heat lightning References External links Glossary - NOAA's National Weather Service Atmospheric electricity Weather hazards to aircraft Mesoscale meteorology Storm Weather hazards Severe weather and convection
Air-mass thunderstorm
[ "Physics" ]
1,500
[ "Physical phenomena", "Weather hazards", "Weather", "Atmospheric electricity", "Electrical phenomena" ]
10,958,505
https://en.wikipedia.org/wiki/Destination%20sign
A destination sign (North American English) or destination indicator/destination blind (British English) is a sign mounted on the front, side or rear of a public transport vehicle, such as a bus, tram/streetcar or light rail vehicle, that displays the vehicle's route number and destination, or the route's number and name on transit systems using route names. The main such sign, mounted on the front of the vehicle, usually located above (or at the top of) the windshield, is often called the headsign, most likely from the fact that these signs are located on the front, or head, end of the vehicle. Depending on the type of the sign, it might also display intermediate points on the current route, or a road that comprises a significant amount of the route, especially if the route is particularly long and its final terminus by itself is not very helpful in determining where the vehicle is going. Technology types Several different types of technology have been used for destination signs, from simple rigid placards held in place by a frame or clips, to rollsigns, to various types of computerized, and more recently electronically controlled, signs, such as flip-dot, LCD or LED displays. All of these can still be found in use today, but most transit-vehicle destination signs now in use in North America and Europe are electronic signs. In the US, the Americans with Disabilities Act of 1990 specifies certain design criteria for transit-vehicle destination signs, such as maximum and minimum character height-to-width ratio and contrast level, to ensure the signs are sufficiently readable to visually impaired persons. In the 2010s, LED signs replaced flip-dot signs as the most common type of destination sign in new buses and rail transit vehicles. Rollsign For many decades, the most common type of multiple-option destination sign was the rollsign (or bus blind, curtain sign, destination blind, or tram scroll): a roll of flexible material with pre-printed route numbers/letters and destinations (or route names), which is turned by the vehicle operator at the end of the route when reversing direction, either by a hand crank or by holding a switch if the sign mechanism is motorized. These rollsigns were usually made of linen until the 1960s/70s, when Mylar (a type of PET film) became the most common material used for them. They can also be made of other material, such as Tyvek. In the 1990s rollsigns were still commonly seen in older public transport vehicles, and were sometimes used in modern vehicles of that time. Since the 1980s, they have largely been supplanted by electronic signs. A digital display may be somewhat less readable, but is easier to change between routes/destinations and to update for changes to a transit system's route network. However, given the long life of public transit vehicles and of well-made sign rolls, some transit systems continue to use these devices in the present day. The roll is attached to metal tubes at the top and bottom, and flanges at the ends of the tubes are inserted into a mechanism which controls the rolling of the sign. The upper and lower rollers are positioned sufficiently far apart to permit a complete "reading" (a destination or route name) to be displayed, and a strip light is located behind the blind to illuminate it at night.
When the display needs to be changed, the driver/operator/conductor turns a handle/crank, or holds a switch if the sign mechanism is motorized, which engages one roller to gather up the blind and disengages the other, until the desired display is found. A small viewing window in the back of the signbox (the compartment housing the sign mechanism) permits the driver to see an indication of what is being shown on the exterior. Automatic changing of rollsign/blind displays, through electronic control, has been possible since at least the 1970s, but is an option that primarily has been used on rail systems, where a metro train or articulated tram can have several separate signboxes each, and only infrequently on buses, where it is comparatively easy for the driver to change the display. These signs are controlled by a computer through an interface in the driver's cabin. Barcodes are printed on the reverse of the blind, and as the computer rolls the blind an optical sensor reads the barcodes until reaching the code for the requested display. The on-board computer is normally programmed with information on the order of the displays, and can be reprogrammed using its non-volatile memory should the blind/roll be changed. Although these sign systems are normally accurate, over time the blind becomes dirty and the computer may not be able to read the markings well, occasionally leading to incorrect displays. For buses, this disadvantage is outweighed by no longer needing to change each destination blind separately by hand; when changing routes, this could mean resetting up to seven different blinds. Automatic-setting rollsigns are common on many light rail and subway/metro systems in North America. Most Transport for London buses use a standard system with up and down buttons to change the destination shown on the blinds and a manual override using a crank. The blind system is integrated with a system controlling announcements and passenger information, which uses satellites to download stop data in sequential order. It uses GPS to determine that a bus has departed a stop and to announce the next stop. As of 2024, TfL no longer requires the McKenna-brand motorised blind system to be installed on London buses, with most operators ordering new vehicles with McKenna-brand Mobitec 'Luminator' LED displays or Hanover high-density LED displays after a fleet of Optare Solo SRs was put into service on the Hampstead Garden Suburb routes with LED displays fitted in 2017. Plastic sign Plastic signs are inserted by the driver into a slot at the front of the bus before a service run. In Hong Kong, plastic signs had been used since the mid-1990s on Kowloon Motor Bus (KMB) and Long Win Bus (LWB) buses to replace rollsigns on the existing fleet, and became standard equipment until 2000, when electronic displays became mainstream, with the exception of single-decker buses, presumably because the number of destinations in the network was so large that rolling the destination between every trip was impractical. These buses were equipped with a destination sign slot in which two plastic destination signs could be placed, such that the driver could press a button to flip them at the terminus, and one slot each at the front, side and back of the bus for the route number only. All buses with plastic signs were retired in 2017 upon completing 18 years of service.
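The barcode-driven blind positioning described above can be sketched as a simple control loop. The display names, code table, and error handling below are hypothetical illustrations, not taken from any real sign controller:

```python
# Schematic sketch: wind the blind while an optical sensor reads codes
# until the requested display is found, with a fault path for an
# unreadable (dirty) blind.

BLIND_ORDER = ["10 CITY CENTRE", "10 DEPOT", "12 AIRPORT", "NOT IN SERVICE"]

class BlindController:
    def __init__(self):
        self.position = 0                       # index of the display currently shown

    def read_barcode(self):
        """Stand-in for the optical sensor reading the code behind the blind."""
        return BLIND_ORDER[self.position]

    def set_display(self, requested):
        steps = 0
        while self.read_barcode() != requested:
            self.position = (self.position + 1) % len(BLIND_ORDER)  # wind one frame
            steps += 1
            if steps > len(BLIND_ORDER):        # code never matched: possible misread
                raise RuntimeError("barcode not found; display may be incorrect")
        return self.read_barcode()

ctrl = BlindController()
print(ctrl.set_display("12 AIRPORT"))
```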
Flip-disc display In the United States, the first electronic destination signs for buses were developed by Luminator in the mid-1970s and became available to transit operators in the late 1970s, but did not become common until the 1980s. These are known as flip-disc, or "flip-dot", displays. Some transit systems still use these today. Flap display Another technology that has been employed for destination signs is the split-flap display, or Solari display, but outside Italy this technology was never common in transit vehicles. Such displays were more often used at transit hubs and at airports to display arrival and departure information, rather than as destination signs on transit vehicles. Electronic displays Starting in the early 1990s, and becoming the primary type of destination sign by the end of the decade, electronic displays consist of liquid crystal display (LCD) or light-emitting diode (LED) panels that can show animated text, colors (in the case of LED signs), and a potentially unlimited number of routes (so long as they are programmed into the vehicle's sign controller unit; some sign controller units may also allow the driver to enter the route number and the destination text through a keypad if required). In many systems, the vehicle has three integrated signs: the front sign over the windshield and the side sign over the passenger entrance, both showing the route number and destination, and a rear sign usually showing the route number. An internal sign, which can also provide other kinds of information such as the current stop and the next one in addition to the route number and destination, may also be installed. Some such signs also have the capability of changing on-the-fly as the vehicle moves along its route, with the help of GPS technology, serial interfaces and a vehicle tracking system. See also Platform display Notes References External links www.rollsigngallery.com Rollsign Gallery, showing the history of public transit through their destination signs – USA, Canada, overseas Public transport information systems Display technology Signage
Destination sign
[ "Technology", "Engineering" ]
1,771
[ "Public transport information systems", "Information systems", "Electronic engineering", "Display technology" ]
10,959,253
https://en.wikipedia.org/wiki/Roland%20SP-404
The Roland SP-404 Sampling Workstation is a sampler made by Roland Corporation. Released in 2005, it is part of the SP family and picks up where Boss Corporation's SP-505 sampler left off. The sampler was succeeded by the SP-555 in 2008, but was later given its own upgrade as the Roland SP-404SX Linear Wave Sampler in 2009. Another upgrade, the Roland SP-404A Linear Wave Sampler, was released in 2017. A third upgrade, the SP-404MKII, was released in 2021. The Roland SP-404 has played a major role in shaping the sound of the popular hip hop subgenre known as lo-fi hip hop. Features OG / SX / A Having the traditional features of the Roland Grooveboxes, the 404 has the ability to record audio directly via line/mic, or import/export industry-standard WAV and AIF files via CompactFlash card. An onboard pattern sequencer allows up to 8,000 notes to be recorded in real time. Pattern data can be quantized, and up to 24 patterns, each 1–99 measures long, can be stored in the internal memory. Using a 1 GB CompactFlash card, sampling times can be as long as approximately 772 minutes in Lo-Fi mode, or up to 386 minutes in Standard mode (a rough cross-check of these figures appears below). However, the 404 (along with its own upgrades) lacks the D-Beam feature of the previous SP-808 and SP-606 installments. Although the first bank comes with preset samples that are protected, these samples can be removed by holding "cancel" while powering the unit on, which allows the samples in the protected bank to be deleted. MKII Released in 2021, the 404MKII was a significant upgrade, while retaining the unit's original form factor. Among the new features were an OLED display capable of showing sample waveforms, 16 GB of internal memory, USB-C, and 32-note polyphony. The device was also given a new "Skip Back Sampling" feature with the addition of a "Mark" button. This feature gives users the option to capture the most recent 40 seconds of audio with the press of the button. In popular culture A number of musicians have used either the SP-303 and/or the SP-404 as part of their production and performance. These include Jel, Odd Nosdam, Alias, J Dilla, Madlib, JPEGMAFIA, Joji, MF Doom, Jneiro Jarel, Milo, Flying Lotus, James Blake, Samiyam, Ras G, Teebs, Grimes, Pictureplane, Four Tet, Beck Hansen, Bradford Cox of Deerhunter and Atlas Sound, Radiohead, Animal Collective, Spindrift, Toro y Moi, Broadcast, John Maus, Ellie Goulding, El Guincho, Illmind, Dibia$e, Jim James of My Morning Jacket, Oneohtrix Point Never (Daniel Lopatin), Matt Mondanile of Ducktails, Devo, Mad Professor, and many others. Jan Linton was hired by Roland in 2005 to produce a promotional European sound card for the SP404. References Further reading External links Roland SP-404 Sampling Workstation, Roland SP-404SX/SP-404A Linear Wave Sampler SP-404 Samplers (musical instrument) Grooveboxes Music sequencers Sound modules Music workstations Hip-hop production Japanese inventions
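As a rough cross-check of the sampling times quoted above, the implied data rate can be inferred by simple division. This is a back-of-the-envelope sketch; the SP-404's actual on-card data format is not specified here, so the byte counts are assumptions:

```python
# Infer the data rates implied by the quoted recording times for a 1 GB card.

CARD_BYTES = 1_000_000_000        # assume a decimal gigabyte

standard_minutes = 386
lofi_minutes = 772

rate_standard = CARD_BYTES / (standard_minutes * 60)   # bytes per second
rate_lofi = CARD_BYTES / (lofi_minutes * 60)

print(f"standard mode implies ~{rate_standard / 1000:.1f} kB/s")  # ~43.2 kB/s
print(f"lo-fi mode implies ~{rate_lofi / 1000:.1f} kB/s")         # half that

# Consistency check: halving the data rate doubles the recording time.
assert abs(rate_standard / rate_lofi - 2) < 0.01
```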
Roland SP-404
[ "Engineering" ]
710
[ "Music sequencers", "Automation" ]
10,959,358
https://en.wikipedia.org/wiki/Convergent%20extension
Convergent extension (CE), sometimes called convergence and extension (C&E), is the process by which the tissue of an embryo is restructured to converge (narrow) along one axis and extend (elongate) along a perpendicular axis by cellular movement. Example and explanation An example of this process is where the anteroposterior axis (the axis drawn between the head and tail end of an embryo) becomes longer as the lateral tissues (those that make up the left and right sides of the embryo) move in towards the dorsal midline (the middle of the back of the animal). This process plays a crucial role in shaping the body plan during embryogenesis and occurs during gastrulation, neurulation, axis elongation, and organogenesis in both vertebrate and invertebrate embryos. In chordate animals, this process is utilized within a vast range of cell populations, from the smaller populations in the notochord of the sea squirt (ascidian) to the larger populations of the dorsal mesoderm and neural ectoderm of frogs (Xenopus) and fish. Many characteristics of convergent extension are conserved in the teleost fish, the bird, and very likely within mammals at the molecular, cellular, and tissue level. In amphibians and fish Convergent extension has been primarily studied in frogs and fish due to their large embryo size and their development outside of a maternal host (in egg clutches in the water, as opposed to in a uterus). Within frogs and fish, however, there exist fundamental differences in how convergent extension is achieved. Frog embryogenesis utilizes cell rearrangement as the sole driver of this process. Fish, on the other hand, utilize both cell rearrangement and directed migration. Cellular rearrangement is the process by which individual cells of a tissue rearrange to reshape the tissue as a whole, while cellular migration is the directed movement of a single cell or small group of cells across a substrate such as a membrane or tissue. Gastrulation in the frog (Xenopus), as in other amphibians, serves as an excellent example of the role of convergent extension in embryogenesis. During gastrulation in frogs, the driving force of convergent extension is the morphogenic activity of the presumptive dorsal mesodermal cells; this activity is driven by the mesenchymal cells that lie beneath the presumptive mesodermal and neural tissues. These tissues exist within the involuting marginal zone (IMZ) of the embryo, which lies between the vegetal endoderm and the posterior neural tissue. The IMZ is integral to gastrulation, and R. Keller et al. eloquently exemplify the importance of convergent extension in Xenopus gastrulation. "…the IMZ, true to its name, involutes or rolls over the blastoporal lip and turns inside out. As it involutes, it converges along the mediolateral axis and extends along the future anterior-posterior axis of the notochordal and somitic mesoderm. Convergence and extension of these tissues squeezes the blastopore shut and simultaneously elongate the body axis. Elongation continues through the neurula and tailbud stages…As these involuted dorsal mesodermal tissues converge and extend on the inside of the gastrula, the presumptive posterior neural tissue converges and extends on the outside of the embryo, parallel to the underlying mesoderm, and then rolls up to form the neural tube, which later forms the hindbrain and spinal cord of the central nervous system".
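As a purely geometric toy illustration of cell rearrangement (not a biological model), the sketch below tracks a rectangular "tissue" in which columns of cells intercalate toward the midline, so that the array narrows and, with cell number conserved, lengthens along the perpendicular axis:

```python
def intercalate(total_cells, width, steps):
    """Each step, one column of cells from each side wedges into the
    midline, reducing the tissue's width by two; conserving cell
    number forces the tissue to lengthen along the other axis."""
    for _ in range(steps):
        if width < 3:
            break
        width -= 2
    length = total_cells / width
    return width, length

w, l = intercalate(total_cells=50, width=10, steps=3)
print(f"converged to {w} cells wide, extended to {l:.1f} cells long")
# width 10 -> 4, length 5 -> 12.5: convergence along one axis,
# extension along the perpendicular one
```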
Should convergent extension be interrupted or incomplete, the resulting organism will have a short anteroposterior axis, a wide notochord, and a broad, open neural tube. Cellular mechanisms The cellular signals required for convergent extension are not fully understood; however, it is known that the non-canonical Wnt signaling pathway plays an important role. Current research is shedding light on the cellular mechanisms of convergent extension, and recently the planar cell polarity (PCP) pathway was implicated in regulating the cell polarity of the factors involved in convergent extension. This is an interesting development, as the PCP pathway is an integral and well-studied pathway in flies, but had classically been thought not to be used by vertebrates. In addition to the involvement of the non-canonical Wnt and PCP pathways in convergent extension, the down-regulation of certain cell-cell adhesion molecules, such as C-cadherin and fibronectin/integrin interactions, may also play a role. Reduction of the activity of these cell-cell adhesion molecules allows cells undergoing convergent extension to move more freely. Consistent with a role for reduced cell-cell adhesion in convergent extension, when cell-cell adhesion is not reduced, convergent extension cannot occur. References Developmental biology
Convergent extension
[ "Biology" ]
1,042
[ "Behavior", "Developmental biology", "Reproduction" ]
10,960,215
https://en.wikipedia.org/wiki/Reverse%20sensitivity
Reverse sensitivity is a term from the New Zealand planning system. It describes the impacts of newer land uses on prior activities in mixed-use areas, where some newer activities tend to limit the ability of established ones to continue. A key instance is the impact of new residential development on mixed-use neighbourhoods as an area goes through a process of gentrification. Such prior uses might be entertainment, commercial or industrial uses. New residents tend to have expectations of a level of amenity comparable to suburban residential areas and will complain about noise from established uses. This has typically imposed economic burdens or operational limitations on the prior uses that reduce their viability, forcing them to close down or move. The concept of reverse sensitivity suggests that a reversal of this approach is possible and that the burden of providing residential amenity in mixed-use environments should fall to the developers of new residential buildings in those areas. Planning schemes can regulate these issues via zoning ordinances. References Reverse Sensitivity What is reverse sensitivity? Urban planning in New Zealand Zoning Urban design
Reverse sensitivity
[ "Engineering" ]
212
[ "Construction", "Zoning" ]
2,217,326
https://en.wikipedia.org/wiki/Gallium%28III%29%20hydroxide
Gallium hydroxide is an inorganic compound with the chemical formula Ga(OH)3. It is formed as a gel following the addition of ammonia to solutions of gallium(III) salts. It is also found in nature as the rare mineral söhngeite, which is reported to contain octahedrally coordinated gallium atoms. Gallium hydroxide is amphoteric. In strongly acidic conditions, the gallium ion, Ga3+, is formed. In strongly basic conditions, Ga(OH)4− (tetrahydroxogallate(III)) is formed. Salts of Ga(OH)4− are sometimes called gallates. References Hydroxides Gallium compounds
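The amphoteric behaviour described above corresponds to two limiting reactions, written here in LaTeX notation as a simplified summary (hydration of the ions is omitted):

```latex
\mathrm{Ga(OH)_3} + 3\,\mathrm{H^+} \longrightarrow \mathrm{Ga^{3+}} + 3\,\mathrm{H_2O}
  \qquad \text{(strongly acidic)}

\mathrm{Ga(OH)_3} + \mathrm{OH^-} \longrightarrow \mathrm{Ga(OH)_4^-}
  \qquad \text{(strongly basic)}
```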
Gallium(III) hydroxide
[ "Chemistry" ]
116
[ "Inorganic compounds", "Bases (chemistry)", "Hydroxides", "Inorganic compound stubs" ]
2,217,487
https://en.wikipedia.org/wiki/GG45
GG45 (GigaGate 45) and ARJ45 (Augmented RJ45) are two related connectors for Category 7, Category 7A, and Category 8 telecommunication cabling. The GG45 interface and related implementations are developed and sold by Nexans S.A., while the ARJ45 interface and related implementations are developed and sold by Bel Fuse Inc. The electrical performance of each is compliant with IEC 61076-3-110, as published by the International Electrotechnical Commission. Furthermore, the ARJ45 connector meets the mechanical dimensions specified in IEC 61076-3-110. Details The GG45 and ARJ45 connectors operate in the frequency band between 600 MHz and 5 GHz with shielded twisted pair and twinax cables. To reduce crosstalk, two of the four pairs have been moved so that each pair occupies one corner. GG45 is a variant of ARJ45 that allows for cables terminated with male 8P8C (AKA RJ45) connectors to be plugged into GG45 jacks. However, GG45 cables cannot plug into 8P8C jacks as a protrusion on the socket is designed to activate a switch on the jack for the alternative contact positions. Combined with an internal system of Faraday cages, the GG45 interface therefore has plenty of headroom, plus the ability to migrate to higher speed service by upgrading to Category 7A patch cords that activate the switch in the jack. There are two main variants of GG45/ARJ45: GG45 or ARJ45 HD is the full connector with 12 contacts, providing a Category 6 cable interface (100/250 MHz) for older devices as well as the new interface. ARJ45 HS is the version without the Cat-6–compatible contacts, for a total of 8 contacts.

   1 2 3 4 5 6 7 8
 |‾‾█‾█‾█‾█‾█‾█‾█‾█‾‾|
 |                   |
 |                   |
 |_█_█____▒▒▒____█_█_|
  3′6′    |   |   4′5′
          |___|

Pinout of GG45 and ARJ45 HD sockets. The protrusion ▒▒▒ activates a switch, redirecting the 3-6 and 4-5 pairs to the corners on a GG45 jack (3′ and 6′, and 4′ and 5′). ARJ45 HS omits the Cat-6–compatible 3-6 and 4-5 pairs.

See also TERA References External links ARJ45 Modular Connector LANmark GG45 Connector IEC 61076-3 Networking hardware
GG45
[ "Engineering" ]
542
[ "Computer networks engineering", "Networking hardware" ]
2,217,599
https://en.wikipedia.org/wiki/Circular%20symmetry
In geometry, circular symmetry is a type of continuous symmetry for a planar object that can be rotated by any arbitrary angle and map onto itself. Rotational circular symmetry is isomorphic with the circle group in the complex plane, or the special orthogonal group SO(2), and the unitary group U(1). Reflective circular symmetry is isomorphic with the orthogonal group O(2). Two dimensions A 2-dimensional object with circular symmetry would consist of concentric circles and annular domains. Rotational circular symmetry has every cyclic symmetry group Zn as a subgroup symmetry. Reflective circular symmetry has every dihedral symmetry group Dihn as a subgroup symmetry. Three dimensions In three dimensions, a surface or solid of revolution has circular symmetry around an axis, also called cylindrical symmetry or axial symmetry. An example is a right circular cone. Circular symmetry in three dimensions has all pyramidal symmetries, Cnv, as subgroups. A double-cone, bicone, cylinder, toroid and spheroid have circular symmetry, and in addition have a bilateral symmetry perpendicular to the axis of the system (or half-cylindrical symmetry). These reflective circular symmetries have all discrete prismatic symmetries, Dnh, as subgroups. Four dimensions In four dimensions, an object can have circular symmetry on two orthogonal axis planes, called duocylindrical symmetry. For example, the duocylinder and Clifford torus have circular symmetry in two orthogonal axes. A spherinder has spherical symmetry in one 3-space, and circular symmetry in the orthogonal direction. Spherical symmetry An analogous 3-dimensional equivalent term is spherical symmetry. Rotational spherical symmetry is isomorphic with the rotation group SO(3), and can be parametrized by the Davenport chained rotations pitch, yaw, and roll. Rotational spherical symmetry has all the discrete chiral 3D point groups as subgroups. Reflectional spherical symmetry is isomorphic with the orthogonal group O(3) and has the 3-dimensional discrete point groups as subgroups. A scalar field has spherical symmetry if it depends only on the distance to the origin, such as the potential of a central force. A vector field has spherical symmetry if it points radially inward or outward with a magnitude and orientation (inward/outward) depending only on the distance to the origin, such as a central force. See also Isotropy Rotational symmetry Particle in a spherically symmetric potential Gauss's theorem References Symmetry Rotation
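The field conditions stated above can be written compactly; the symbols f and g for the radial profiles are chosen here for illustration:

```latex
\text{scalar field:}\quad \varphi(\mathbf{x}) = f\!\left(\lVert\mathbf{x}\rVert\right)

\text{vector field:}\quad \mathbf{F}(\mathbf{x})
  = g\!\left(\lVert\mathbf{x}\rVert\right)\,\frac{\mathbf{x}}{\lVert\mathbf{x}\rVert}
```

A central force such as an inverse-square attraction corresponds to the choice g(r) = −k/r² for some constant k > 0.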
Circular symmetry
[ "Physics", "Mathematics" ]
499
[ "Physical phenomena", "Classical mechanics", "Rotation", "Motion (physics)", "Geometry", "Symmetry" ]
2,217,619
https://en.wikipedia.org/wiki/Paper%20pallet
A paper pallet or ecopallet is a shipping or display pallet made from paperboard. Construction Paper shipping pallets are made of corrugated fiberboard or engineered laminated paperboard, sometimes with partial wood decks. Some are made of paperboard composite honeycomb. Several designs have been developed. See also Unit load References Further reading Fiedler, R. M., "Distribution Packaging Technology", IoPP, 1995 McKinlay, A. H., "Transport Packaging", IoPP, 2004 Soroka, W., "Fundamentals of Packaging Technology", IoPP, 2002 Yam, K. L., "Encyclopedia of Packaging Technology", John Wiley & Sons, 2009 Pallets
Paper pallet
[ "Physics" ]
146
[ "Physical systems", "Transport", "Transport stubs" ]
2,217,712
https://en.wikipedia.org/wiki/Solenoid%20%28mathematics%29
This page discusses a class of topological groups. For the wrapped loop of wire, see Solenoid. In mathematics, a solenoid is a compact connected topological space (i.e. a continuum) that may be obtained as the inverse limit of an inverse system of topological groups and continuous homomorphisms (Si, fi), where each Si is a circle and fi : Si+1 → Si is the map that uniformly wraps the circle Si+1 ni times (ni ≥ 2) around the circle Si. This construction can be carried out geometrically in the three-dimensional Euclidean space R3. A solenoid is a one-dimensional homogeneous indecomposable continuum that has the structure of an abelian compact topological group. Solenoids were first introduced by Vietoris for the ni = 2 case, and by van Dantzig for the ni = n case, where n ≥ 2 is fixed. Such a solenoid arises as a one-dimensional expanding attractor, or Smale–Williams attractor, and forms an important example in the theory of hyperbolic dynamical systems. Construction Geometric construction and the Smale–Williams attractor Each solenoid may be constructed as the intersection of a nested system of embedded solid tori in R3. Fix a sequence of natural numbers {ni}, ni ≥ 2. Let T0 = S1 × D be a solid torus. For each i ≥ 0, choose a solid torus Ti+1 that is wrapped longitudinally ni times inside the solid torus Ti. Then their intersection is homeomorphic to the solenoid constructed as the inverse limit of the system of circles with the maps determined by the sequence {ni}. Here is a variant of this construction isolated by Stephen Smale as an example of an expanding attractor in the theory of smooth dynamical systems. Denote the angular coordinate on the circle S1 by t (it is defined mod 2π) and consider the complex coordinate z on the two-dimensional unit disk D. Let f be the map of the solid torus T = S1 × D into itself given by the explicit formula

f(t, z) = (2t, ¼z + ½e^{it}).

This map is a smooth embedding of T into itself that preserves the foliation by meridional disks (the constants 1/2 and 1/4 are somewhat arbitrary, but it is essential that 1/4 < 1/2 and 1/4 + 1/2 < 1). If T is imagined as a rubber tube, the map f stretches it in the longitudinal direction, contracts each meridional disk, and wraps the deformed tube twice inside T with twisting, but without self-intersections. The hyperbolic set Λ of the discrete dynamical system (T, f) is the intersection of the sequence of nested solid tori described above, where Ti is the image of T under the ith iteration of the map f. This set is a one-dimensional (in the sense of topological dimension) attractor, and the dynamics of f on Λ has the following interesting properties: meridional disks are the stable manifolds, each of which intersects Λ over a Cantor set; periodic points of f are dense in Λ; the map f is topologically transitive on Λ. The general theory of solenoids and expanding attractors, not necessarily one-dimensional, was developed by R. F. Williams and involves a projective system of infinitely many copies of a compact branched manifold in place of the circle, together with an expanding self-immersion. Construction in toroidal coordinates In toroidal coordinates with radius R, the solenoid can be given an explicit parametrization, involving adjustable shape-parameters subject to a constraint that keeps successive windings inside the torus. The topology of the solenoid is then just the subset topology induced by the Euclidean topology on R3; since the parametrization is bijective, this topology can be pulled back to the parametrizing set, which thereby itself becomes the solenoid.
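A minimal numerical sketch of Smale's explicit map above: iterate f on random points of the solid torus, after which the points lie close to the attractor Λ. The embedding of S1 × D into R3 chosen at the end is one conventional choice, not part of the original construction:

```python
import numpy as np

def f(t, z):
    """One step of the Smale-Williams map on the solid torus S^1 x D:
    double the angle, contract the meridional disk by 1/4, and offset
    by e^{it}/2 so the image winds twice around without intersecting
    itself (matching the formula in the text)."""
    return (2.0 * t) % (2.0 * np.pi), 0.25 * z + 0.5 * np.exp(1j * t)

rng = np.random.default_rng(0)
t = rng.uniform(0.0, 2.0 * np.pi, 20000)             # angular coordinate on S^1
z = rng.uniform(-1, 1, 20000) + 1j * rng.uniform(-1, 1, 20000)
mask = np.abs(z) <= 1.0                               # keep points inside the disk D
t, z = t[mask], z[mask]

for _ in range(12):      # iterate; the points converge toward the attractor
    t, z = f(t, z)

# Embed S^1 x D in R^3 to see the nested-tori picture of the solenoid
x = (2.0 + z.real) * np.cos(t)
y = (2.0 + z.real) * np.sin(t)
w = z.imag
```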
The explicit parametrization also allows the inverse limit maps to be constructed explicitly. Construction by symbolic dynamics Viewed as a set, the solenoid is just a Cantor-continuum of circles, wired together in a particular way. This suggests a construction by symbolic dynamics, where we start with a circle as a "racetrack" and append an "odometer" to keep track of which circle we are on. Define the solenoid as the set S1 × ∏i Z/niZ, whose second factor is the odometer. Addition on the odometer is defined with carrying, in the same way as for p-adic numbers. Addition on the solenoid is then defined by advancing the angular coordinate around the racetrack and incrementing the odometer by one each time the angular coordinate completes a full turn. The topology on the solenoid is generated by the basis consisting of the subsets U × C, where U is any open interval in S1 and C is the set of all elements of the odometer starting with a given initial segment. Pathological properties Solenoids are compact metrizable spaces that are connected, but not locally connected or path connected. This is reflected in their pathological behavior with respect to various homology theories, in contrast with the standard properties of homology for simplicial complexes. In Čech homology, one can construct a non-exact long homology sequence using a solenoid. In Steenrod-style homology theories, the 0th homology group of a solenoid may have a fairly complicated structure, even though a solenoid is a connected space. See also Protorus, a class of topological groups that includes the solenoids Pontryagin duality Inverse limit p-adic number Profinite integer References D. van Dantzig, Ueber topologisch homogene Kontinua, Fund. Math. 15 (1930), pp. 102–125 Clark Robinson, Dynamical systems: Stability, Symbolic Dynamics and Chaos, 2nd edition, CRC Press, 1998 S. Smale, Differentiable dynamical systems, Bull. of the AMS, 73 (1967), pp. 747–817 L. Vietoris, Über den höheren Zusammenhang kompakter Räume und eine Klasse von zusammenhangstreuen Abbildungen, Math. Ann. 97 (1927), pp. 454–472 Robert F. Williams, Expanding attractors, Publ. Math. IHÉS, t. 43 (1974), pp. 169–203 Further reading Topological groups Continuum theory Algebraic topology Ring theory Number theory P-adic numbers
Solenoid (mathematics)
[ "Mathematics" ]
1,273
[ "P-adic numbers", "Discrete mathematics", "Ring theory", "Algebraic topology", "Space (mathematics)", "Topological spaces", "Fields of abstract algebra", "Topology", "Topological groups", "Number theory" ]
2,217,816
https://en.wikipedia.org/wiki/J.%20Heinrich%20Matthaei
Johannes Heinrich Matthaei (born 4 May 1929) is a German biochemist. He is best known for his unique contribution to solving the genetic code on 15 May 1961. Career While a post-doctoral visitor in the laboratory of Marshall Warren Nirenberg at the NIH in Bethesda, Maryland, he discovered that a synthetic RNA polynucleotide composed of repeating uridylic acid residues (uracil) coded for a polypeptide chain containing just one kind of amino acid, phenylalanine. In scientific terms, he discovered that poly-U codes for polyphenylalanine, and hence the coding unit for this amino acid is composed of a series of Us or, as we now know the genetic code is read in triplets, the codon for phenylalanine is UUU. This single experiment opened the way to the solution of the genetic code. It was for this and later work on the genetic code that Nirenberg shared the Nobel Prize in Physiology or Medicine. In addition, Matthaei and his co-workers in the following years published a multitude of results concerning the early understanding of the form and function of the genetic code. Why Matthaei, who personally deciphered the genetic code, was excluded from this scientific prize is one of the Nobel Prize controversies. Later, Matthaei was a director at the Max Planck Society in Göttingen. Bibliography References See also Rheinische Friedrich-Wilhelms-Universität Bonn German inventors and discoverers Genetic code German biochemists History of genetics 1929 births Living people University of Bonn alumni Max Planck Institute directors
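The logic of the poly-U experiment can be made concrete with a short sketch of triplet reading. The one-entry codon table below contains only the assignment established by the experiment; the full genetic code has 64 codons:

```python
# Read a synthetic RNA in non-overlapping triplets, as in the
# Nirenberg-Matthaei poly-U experiment.

CODON_TABLE = {"UUU": "Phe"}   # the assignment established on 15 May 1961

def translate(rna):
    peptide = []
    for i in range(0, len(rna) - len(rna) % 3, 3):   # step through codons
        peptide.append(CODON_TABLE.get(rna[i:i + 3], "?"))
    return "-".join(peptide)

print(translate("U" * 12))     # Phe-Phe-Phe-Phe: poly-U yields polyphenylalanine
```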
J. Heinrich Matthaei
[ "Chemistry" ]
339
[ "Biochemistry stubs", "Biochemists", "Biochemist stubs" ]
2,217,896
https://en.wikipedia.org/wiki/STN%20display
A STN (super-twisted nematic) display is a type of liquid-crystal display (LCD). An LCD is a flat-panel display that uses liquid crystals that change their properties when exposed to an electric field, which can be used to create images. This change is called the twisted nematic (TN) field effect. Earlier TN displays twisted the liquid crystal molecules at a 90-degree angle. STN displays improved on that by twisting the liquid crystal molecules at a much greater angle, typically between 180 and 270 degrees. This allows for a sharper image and passive-matrix addressing, a simpler way to control the pixels in an LCD. While STN displays were once common in various electronic devices, they have been largely replaced by TFT (thin-film transistor) displays. Development In 1982, C. M. Waters and E. P. Raynes patented STN displays, and by 1984 researchers at Brown Boveri (later ABB) built the first prototype STN matrix display, with 540 × 270 pixels. A key challenge was finding a way to address more pixels efficiently. Standard TN displays were not ideal for this because their voltage response was not steep enough. STN displays, with their 180-270 degree twist, offered a solution: the greater twist allows for a clearer distinction between on and off states, making them suitable for passive-matrix addressing with more pixels. The main advantages of STN LCDs are their lower power consumption and affordability. They can also be made purely reflective for sunlight readability. In the late 1980s, they were used in portable computers and handheld devices like the Nintendo Game Boy. While still found in some simple digital products like calculators, STN displays have largely been replaced by TFT LCDs, which offer superior image quality and faster response times. Variants CSTN (color super-twist nematic) is a color variant of STN displays, developed by Sharp. It uses red, green, and blue filters to create color images. Early CSTN displays had limitations like slow response times and ghosting. However, advancements have improved response times to 100 ms (still longer than the 8 ms for TFT), widened viewing angles to 140 degrees, and enhanced color quality, making them a more competitive option at about half the cost of TFT displays. A newer passive-matrix technology, high-performance addressing (HPA), offers even better performance than CSTN. Other STN display variations were introduced, attempting to improve image quality and response times. They include: Double layer STN: Uses an extra compensating layer to provide a sharper image. DSTN (double STN or dual-scan STN): The screen is divided into halves, and each half is scanned simultaneously, thereby doubling the number of lines refreshed per second and providing a sharper appearance. Widely used on early portable computers. FSTN (film compensated STN, formulated STN or filtered STN): Uses a film compensating layer between the STN display and rear polarizer for added sharpness and contrast. Used on early portable computers before adoption of DSTN. FRSTN (fast-response STN) FFSTN (double film compensated STN) MSTN (monochrome STN) References External links Display Types and Technologies Display technology Swiss inventions
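The reason a steeper electro-optical response permits more multiplexed rows can be quantified with the Alt–Pleshko limit for passive-matrix addressing. The sketch below simply evaluates that standard bound; it is a general result for passive matrices, applied here for illustration rather than anything specific to STN:

```python
import math

def alt_pleshko_selection_ratio(n_rows):
    """Maximum ratio of RMS 'on' to 'off' pixel voltage achievable
    when multiplexing an N-row passive matrix (Alt & Pleshko, 1974)."""
    return math.sqrt((math.sqrt(n_rows) + 1) / (math.sqrt(n_rows) - 1))

for n in (16, 64, 240, 480):
    print(f"{n:4d} rows: Von/Voff <= {alt_pleshko_selection_ratio(n):.3f}")

# The ratio approaches 1 as the row count grows, so a display multiplexed
# over hundreds of rows needs the very steep threshold that the 180-270
# degree twist of STN provides in order to distinguish on from off.
```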
STN display
[ "Engineering" ]
683
[ "Electronic engineering", "Display technology" ]
2,217,996
https://en.wikipedia.org/wiki/Reverse%20video
Reverse video (or invert video or inverse video or reverse screen) is a computer display technique whereby the background and text color values are inverted. On older computers, displays were usually designed to display text on a black background by default. For emphasis, the color scheme was swapped to a bright background with dark text. Nowadays the two tend to be switched, since most computers today default to a white background. The opposite of reverse video is known as true video. Video is usually reversed by inverting the brightness values of the pixels in the involved region of the display. If there are 256 levels of brightness, encoded as 0 to 255, the value 255 becomes 0 and vice versa. A value of 1 becomes 254, 2 becomes 253, and so on: n is swapped for (r − 1) − n, for r levels of brightness. This is occasionally called a ones' complement. If the source image is of middle brightness, reverse video can be difficult to see: 127 becomes 128, for example, which is only one level of brightness different. The computer displays where it was most commonly used were monochrome and only displayed two values, so this issue seldom arose. Reverse video is commonly used in software programs as a visual aid to highlight a selection that has been made, helping prevent description errors, where an intended action is performed on an object that is not the one intended. It is more common in modern desktop environments to change the background to other colors such as blue, or to use a semi-transparent background to "highlight" the selected text. On a terminal understanding ANSI escape sequences, the reverse video function is activated using the escape sequence CSI 7 m (which equals SGR 7). Accessibility Reverse video is also sometimes used for accessibility reasons. When most computer displays were light-on-dark, it was found that users looking back and forth between white paper and a dark screen would experience eyestrain due to their pupils constantly dilating and contracting. Flicker was also an issue with early white-background displays. Today, people with visual impairments such as ocular toxocariasis may find it less tiring to the eyes to work with a predominantly black screen, since modern operating systems usually display a lot of white in normal use. For the same white-dominant reason, reverse video is an efficient way to read or write text in a dark environment, since the darkness of the screen may blend into the darkness of the environment. A number of operating systems, graphical programs, and websites offer dark modes, which serve a similar purpose to what was originally true video and is now reverse video on modern systems. References Display technology
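The inversion rule above translates directly into code; a minimal sketch:

```python
# Invert brightness values: each value n maps to (levels - 1) - n.

def invert(pixels, levels=256):
    return [(levels - 1) - n for n in pixels]

print(invert([0, 1, 2, 127, 255]))   # [255, 254, 253, 128, 0]
# Note the middle-brightness case from the text: 127 becomes 128,
# a difference of only one level.
```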
Reverse video
[ "Engineering" ]
532
[ "Electronic engineering", "Display technology" ]
2,218,004
https://en.wikipedia.org/wiki/Amyloid%20plaques
Amyloid plaques (also known as neuritic plaques, amyloid beta plaques or senile plaques) are extracellular deposits of amyloid beta (Aβ) protein that present mainly in the grey matter of the brain. Degenerative neuronal elements and an abundance of microglia and astrocytes can be associated with amyloid plaques. Some plaques occur in the brain as a result of aging, but large numbers of plaques and neurofibrillary tangles are characteristic features of Alzheimer's disease. The plaques are highly variable in shape and size; in tissue sections immunostained for Aβ, their sizes follow a log-normal distribution, with an average plaque area of 400–450 square micrometers (μm²). The smallest plaques (less than 200 μm²), which often consist of diffuse deposits of Aβ, are particularly numerous. Plaques form when Aβ misfolds and aggregates into oligomers and longer polymers, the latter of which are characteristic of amyloid. History In 1892, Paul Blocq and Gheorghe Marinescu first described the presence of plaques in grey matter. They referred to the plaques as 'nodules of neuroglial sclerosis'. In 1898, Emil Redlich reported plaques in three patients, two of whom had clinically verified dementia. Redlich used the term 'miliary sclerosis' to describe plaques because he thought they resembled millet seeds, and he was the first to refer to the lesions as 'plaques'. In the early 20th century, Oskar Fischer noted their similarity to actinomyces 'Drusen' (geode-like lesions), leading him to call the degenerative process 'drusige Nekrose'. Alois Alzheimer is often credited with first linking plaques to dementia in a 1906 presentation (published in 1907), but this short report focused mainly on neurofibrillary tangles, and plaques were only briefly mentioned. Alzheimer's first substantive description of plaques appeared in 1911. In contrast, Oskar Fischer published a series of comprehensive investigations of plaques and dementia in 1907, 1910 and 1912. By 1911, Max Bielschowsky proposed the amyloid nature of plaque deposits. This was later confirmed by Paul Divry, who showed that plaques stained with the dye Congo red show the optical property of birefringence, which is characteristic of amyloids in general. In 1911, Teofil Simchowicz introduced the term 'senile plaques' to denote their frequent presence in the brains of older individuals. In 1968, a quantitative analysis confirmed the association of senile plaques with dementia. The term 'neuritic plaques' was used in 1973 to designate plaques that include abnormal neuronal processes (neurites). An advance in 1984 and 1985 was the identification of Aβ as the protein that forms the cores of plaques. This discovery led to the generation of new tools to study plaques, particularly antibodies to Aβ, and presented a molecular target for the development of potential therapies for Alzheimer's disease. The generation of amyloid beta Amyloid beta (Aβ) is a small protein, most often 40 or 42 amino acids in length, that is released from a longer parent protein called the Aβ-precursor protein (APP). APP is produced by many types of cell in the body, but it is especially abundant in neurons. It is a single-pass transmembrane protein, passing once through cellular membranes. The Aβ segment of APP is partly within the membrane and partly outside of the membrane.
To liberate Aβ, APP is sequentially cleaved by two enzymes: first, by beta secretase (or β-amyloid cleaving enzyme (BACE)) outside the membrane, and second, by gamma secretase (γ-secretase), an enzyme complex within the membrane. The sequential actions of these secretases result in Aβ protein fragments that are released into the extracellular space. In addition to Aβ peptides that are 40 or 42 amino acids long, several less abundant Aβ fragments are also generated. Aβ can be chemically modified in various ways, and the length of the protein and its chemical modifications can influence both its tendency to aggregate and its toxicity. Identification Amyloid plaques are visible with the light microscope using a variety of staining techniques, including silver stains, Congo red, Thioflavin, cresyl violet, the PAS reaction, and luminescent conjugated oligothiophenes (LCOs). These methods often stain different components of the plaques, and they vary in their sensitivity. Plaques may also be visualized immunohistochemically with antibodies directed against Aβ or other components of the lesions. Immunohistochemical stains are especially useful because they are both sensitive and specific for antigens that are associated with plaques. Composition The Aβ deposits that comprise amyloid plaques are variable in size and appearance. Under the light microscope, they range from small, wispy accumulations that are a few microns in diameter to much larger dense or diffuse masses. So-called 'classical plaques' consist of a compact Aβ-amyloid core that is surrounded by a corona of somewhat less densely packed Aβ. Classical plaques also include abnormal, swollen neuronal processes (neurites) deriving from many different types of neurons, along with activated astrocytes and microglia. Abnormal neurites and activated glial cells are not typical of most diffuse plaques, and it has been suggested that diffuse deposits are an early stage in the development of plaques. Anatomical distribution Dietmar Thal and his colleagues have proposed a sequence of stages of plaque formation in the brains of Alzheimer patients. In Phase 1, plaques appear in the neocortex; in Phase 2, they appear in the allocortex, hippocampal formation and amygdala; in Phase 3, the basal ganglia and diencephalon are affected; in Phase 4, plaques appear in the midbrain and medulla oblongata; and in Phase 5, they appear in the pons and cerebellum. Thus, in end-stage Alzheimer's disease, plaques can be found in most parts of the brain. They are uncommon in the spinal cord. Formation and spread The normal function of Aβ is not certain, but plaques arise when the protein misfolds and begins to accumulate in the brain by a process of molecular templating ('seeding'). Mathias Jucker and Lary Walker have likened this process to the formation and spread of prions in diseases known as spongiform encephalopathies or prion diseases. According to the prion paradigm, certain proteins misfold into shapes that are rich in beta-sheet secondary structure. Involvement in disease Abundant Aβ plaques, along with neurofibrillary tangles consisting of aggregated tau protein, are the two lesions that are required for the neuropathological diagnosis of Alzheimer's disease. Although the number of neurofibrillary tangles correlates more strongly with the degree of dementia than does the number of plaques, genetic and pathological findings indicate that Aβ plays a central role in the risk, onset, and progression of Alzheimer's disease.
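The sequential-cleavage logic described at the start of this section can be caricatured as string slicing. The sequence labels and cut positions below are purely hypothetical placeholders, not real APP coordinates:

```python
# Schematic sketch of the two secretase cuts that release Abeta from APP.

APP = "N_TERMINAL_ECTODOMAIN|ABETA_SEGMENT|C_TERMINAL_TAIL"

def beta_secretase(protein):
    """First cut, outside the membrane, at the Abeta N-terminus."""
    ecto, rest = protein.split("|", 1)
    return rest                       # membrane-bound C-terminal fragment

def gamma_secretase(fragment, abeta_length):
    """Second cut, within the membrane; a variable cut position yields
    Abeta species of slightly different lengths (e.g. 40 or 42)."""
    return fragment[:abeta_length]

ctf = beta_secretase(APP)
abeta = gamma_secretase(ctf, abeta_length=len("ABETA_SEGMENT"))
print(abeta)                          # the released Abeta peptide
```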
The diagnosis of Alzheimer's disease typically requires a microscopic analysis of plaques and tangles in brain tissue, usually at autopsy. However, Aβ plaques (along with cerebral Aβ-amyloid angiopathy) can be detected in the brains of living subjects by preparing radiolabeled agents that bind selectively to Aβ deposits in the brain after being infused into the blood. The ligands cross the blood–brain barrier and attach to aggregated Aβ, and their retention in the brain is assessed by positron emission tomography. In addition, the presence of plaques and tangles can be estimated by measuring the amounts of the Aβ and tau proteins in the cerebrospinal fluid. Occurrence The probability of having plaques in the brain increases with advancing age. From the age of 60 years (10%) to the age of 80 years (60%), the proportion of people with senile plaques increases linearly. Women are slightly more likely to have plaques than are men. Both plaques and Alzheimer's disease also are more common in aging persons with trisomy-21 (Down syndrome). This is thought to result from the excess production of Aβ because the APP gene is on chromosome 21, which exists as three copies in Down syndrome. Amyloid plaques naturally occur in the aging brains of nonhuman species ranging from birds to great apes. In nonhuman primates, which are the closest biological relatives of humans, plaques have been found in all species examined thus far. Neurofibrillary tangles are rare, however, and no nonhuman species has been shown to have dementia along with the complete neuropathology of Alzheimer's disease. Research Both human samples and experimental models of Alzheimer's disease have been used to study the biochemical, cytological, and inflammatory characteristics of amyloid plaques. Experimental studies have focused not only on delineating mechanisms by which plaques arise and proliferate, but also on discovering methods by which they can be detected (and potentially prevented/removed) in the living brain. However, several aspects of amyloid biology are still under investigation. For example, recent evidence has suggested that amyloid plaque formation is linked to brain microvascular trauma. Other research implicates chronic inflammation of the brain and immune dysfunction of the nervous system. The environmental, physiological or genetic risk factors for plaque formation in Alzheimer's disease are under preliminary research. See also Prion Proteopathy References Histopathology
Amyloid plaques
[ "Chemistry" ]
2,003
[ "Histopathology", "Microscopy" ]
2,218,035
https://en.wikipedia.org/wiki/LMS%20color%20space
LMS (long, medium, short) is a color space which represents the response of the three types of cones of the human eye, named for their responsivity (sensitivity) peaks at long, medium, and short wavelengths. The numerical range is generally not specified, except that the lower end is generally bounded by zero. It is common to use the LMS color space when performing chromatic adaptation (estimating the appearance of a sample under a different illuminant). It is also useful in the study of color blindness, when one or more cone types are defective. Definition The cone response functions l̄(λ), m̄(λ), s̄(λ) are the color matching functions for the LMS color space. The chromaticity coordinates (L, M, S) for a spectral distribution J(λ) are defined as L = ∫ J(λ) l̄(λ) dλ, M = ∫ J(λ) m̄(λ) dλ, S = ∫ J(λ) s̄(λ) dλ. The cone response functions are normalized to have their maxima equal to unity. XYZ to LMS Typically, colors to be adapted chromatically will be specified in a color space other than LMS (e.g. sRGB). The chromatic adaptation matrix in the diagonal von Kries transform method, however, operates on tristimulus values in the LMS color space. Since colors in most colorspaces can be transformed to the XYZ color space, only one additional transformation matrix is required for any color space to be adapted chromatically: to transform colors from the XYZ color space to the LMS color space. In addition, many color adaptation methods, or color appearance models (CAMs), run a von Kries-style diagonal matrix transform in a slightly modified, LMS-like, space instead. They may refer to it simply as LMS, as RGB, or as ργβ. The following text uses the "RGB" naming, but do note that the resulting space has nothing to do with the additive color model called RGB. The chromatic adaptation transform (CAT) matrices for some CAMs in terms of CIEXYZ coordinates are presented here. The matrices, in conjunction with the XYZ data defined for the standard observer, implicitly define a "cone" response for each cell type. Notes: All tristimulus values are normally calculated using the CIE 1931 2° standard colorimetric observer. Unless specified otherwise, the CAT matrices are normalized (the elements in a row add up to 1) so the tristimulus values for an equal-energy illuminant (X=Y=Z), like CIE Illuminant E, produce equal LMS values. Hunt, RLAB The Hunt and RLAB color appearance models use the Hunt–Pointer–Estevez transformation matrix (M_HPE) for conversion from CIE XYZ to LMS. This is the transformation matrix which was originally used in conjunction with the von Kries transform method, and is therefore also called the von Kries transformation matrix (M_vonKries). Equal-energy illuminants:
L = 0.38971 X + 0.68898 Y − 0.07868 Z
M = −0.22981 X + 1.18340 Y + 0.04641 Z
S = 1.00000 Z
Normalized to D65:
L = 0.4002 X + 0.7076 Y − 0.0808 Z
M = −0.2263 X + 1.1653 Y + 0.0457 Z
S = 0.9182 Z
Bradford's spectrally sharpened matrix (LLAB, CIECAM97s) The original CIECAM97s color appearance model uses the Bradford transformation matrix (M_BFD) (as does the LLAB color appearance model). This is a "spectrally sharpened" transformation matrix (i.e. the L and M cone response curves are narrower and more distinct from each other). The Bradford transformation matrix was supposed to work in conjunction with a modified von Kries transform method which introduced a small non-linearity in the S (blue) channel. However, outside of CIECAM97s and LLAB this is often neglected and the Bradford transformation matrix is used in conjunction with the linear von Kries transform method, explicitly so in ICC profiles. A "spectrally sharpened" matrix is believed to improve chromatic adaptation especially for blue colors, but does not work as a real cone-describing LMS space for later human vision processing. 
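As an illustration of the linear von Kries method just described, the following Python sketch adapts an XYZ color from one illuminant to another through the Bradford matrix. It is a minimal sketch rather than a production color pipeline: the matrix entries are the commonly published Bradford coefficients, the white points are rounded example values for CIE D65 and CIE Illuminant A, and the function name is invented here. The same routine works with the CAT02 or CAM16 matrices given below by swapping in a different matrix.

```python
import numpy as np

# Bradford "spectrally sharpened" matrix, XYZ -> sharpened RGB,
# as commonly published for the linear von Kries transform method.
M_BFD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

def adapt_xyz(xyz, white_src, white_dst, m=M_BFD):
    """Diagonal von Kries chromatic adaptation of one XYZ color."""
    rgb    = m @ np.asarray(xyz, float)
    rgb_ws = m @ np.asarray(white_src, float)   # source white response
    rgb_wd = m @ np.asarray(white_dst, float)   # destination white response
    # Each sharpened channel is rescaled by the ratio of the white responses.
    return np.linalg.inv(m) @ (rgb * rgb_wd / rgb_ws)

# Example: move a color from CIE D65 to CIE Illuminant A (rounded whites).
D65 = [0.9505, 1.0000, 1.0890]
A   = [1.0985, 1.0000, 0.3558]
print(adapt_xyz([0.4, 0.3, 0.2], D65, A))
```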
Although the outputs are called "LMS" in the original LLAB incarnation, CIECAM97s uses a different "RGB" name to highlight that this space does not really reflect cone cells; hence the different names here. LLAB proceeds by taking the post-adaptation XYZ values and performing a CIELAB-like treatment to get the visual correlates. On the other hand, CIECAM97s takes the post-adaptation XYZ values back into the Hunt LMS space, and works from there to model the vision system's calculation of color properties. Later CIECAMs A revised version of CIECAM97s switches back to a linear transform method and introduces a corresponding transformation matrix (M_CAT97s). The sharpened transformation matrix in CIECAM02 (M_CAT02) is:
R = 0.7328 X + 0.4296 Y − 0.1624 Z
G = −0.7036 X + 1.6975 Y + 0.0061 Z
B = 0.0030 X + 0.0136 Y + 0.9834 Z
CAM16 uses a different matrix:
R = 0.401288 X + 0.650173 Y − 0.051461 Z
G = −0.250268 X + 1.204414 Y + 0.045854 Z
B = −0.002079 X + 0.048952 Y + 0.953127 Z
As in CIECAM97s, after adaptation, the colors are converted to the traditional Hunt–Pointer–Estévez LMS for final prediction of visual results. Physiological CMFs From a physiological point of view, the LMS color space describes a more fundamental level of human visual response, so it makes more sense to define the physiopsychological XYZ by LMS, rather than the other way around. A set of physiologically-based LMS functions was proposed by Stockman & Sharpe in 2000. The functions have been published in a technical report by the CIE in 2006 (CIE 170). The functions are derived from Stiles and Burch RGB CMF data, combined with newer measurements about the contribution of each cone in the RGB functions. To adjust from the 10° data to 2°, assumptions about photopigment density difference and data about the absorption of light by pigment in the lens and the macula lutea are used. The Stockman & Sharpe functions can then be turned into a set of three color-matching functions similar to the CIE 1931 functions. Let l̄(λ), m̄(λ), s̄(λ) be the three cone response functions, and let x̄_F(λ), ȳ_F(λ), z̄_F(λ) be the new XYZ color matching functions. Then, by definition, the new XYZ color matching functions are (x̄_F, ȳ_F, z̄_F)ᵀ = T (l̄, m̄, s̄)ᵀ, where T is a fixed 3×3 transformation matrix whose numerical entries are given in CIE 170. For any spectral distribution J(λ), let (L, M, S) be the LMS chromaticity coordinates for J(λ), and let (X_F, Y_F, Z_F) be the corresponding new XYZ chromaticity coordinates. Then (X_F, Y_F, Z_F)ᵀ = T (L, M, S)ᵀ. The inverse matrix T⁻¹, which maps the new coordinates back to LMS, can be compared with the corresponding matrices for traditional XYZ. The above development has the advantage of basing the new X_F Y_F Z_F color matching functions on the physiologically-based LMS cone response functions. In addition, it offers a one-to-one relationship between the LMS chromaticity coordinates and the new X_F Y_F Z_F chromaticity coordinates, which was not the case for the CIE 1931 color matching functions. The transformation for a particular color between LMS and the CIE 1931 XYZ space is not unique. It rather depends highly on the particular form of the spectral distribution J(λ) producing the given color. There is no fixed 3×3 matrix which will transform between the CIE 1931 XYZ coordinates and the LMS coordinates, even for a particular color, much less the entire gamut of colors. Any such transformation will be an approximation at best, generally requiring certain assumptions about the spectral distributions producing the color. For example, if the spectral distributions are constrained to be the result of mixing three monochromatic sources (as was done in the measurement of the CIE 1931 and the Stiles and Burch color matching functions), then there will be a one-to-one relationship between the LMS and CIE 1931 XYZ coordinates of a particular color. As of Nov 28, 2023, the CIE 170-2 CMFs are proposals that have yet to be ratified by the full TC 1-36 committee or by the CIE. 
Quantal CMF For theoretical purposes, it is often convenient to characterize radiation in terms of photons rather than energy. The energy E of a photon is given by the Planck relation E = hν = hc/λ, where E is the energy per photon, h is the Planck constant, c is the speed of light, ν is the frequency of the radiation and λ is the wavelength. A spectral radiative quantity in terms of energy, J_E(λ), is converted to its quantal form J_Q(λ) by dividing by the energy per photon: J_Q(λ) = J_E(λ) · λ/(hc). For example, if J_E(λ) is spectral radiance with the unit W/m²/sr/m, then the quantal equivalent J_Q(λ) characterizes that radiation with the unit photons/s/m²/sr/m. If C_Eλi(λ) (i = 1, 2, 3) are the three energy-based color matching functions for a particular color space (the LMS color space for the purposes of this article), then the tristimulus values may be expressed in terms of the quantal radiative quantity by C_i = ∫ C_Eλi(λ) J_E(λ) dλ = hc ∫ C_Eλi(λ) J_Q(λ)/λ dλ. Define the quantal color matching functions C_Qλi(λ) = (C_Eλi(λ)/λ) / (C_Eλi(λ_i max)/λ_i max), where λ_i max is the wavelength at which C_Eλi(λ)/λ is maximized. Define the quantal tristimulus values Q_i = ∫ C_Qλi(λ) J_Q(λ) dλ. Note that, as with the energy-based functions, the peak value of C_Qλi(λ) will be equal to unity. Using the above equation for the energy tristimulus values, C_i = (hc/λ_i max) C_Eλi(λ_i max) Q_i. For the LMS color space, λ_i max ≈ {566, 541, 441} nm, so that hc/λ_i max ≈ {3.51, 3.67, 4.50} × 10⁻¹⁹ J/photon. Applications Color blindness The LMS color space can be used to emulate the way color-blind people see color. An early emulation of dichromats was produced by Brettel et al. in 1997 and was rated favorably by actual patients. An example of a state-of-the-art method is Machado et al. 2009. A related application is making color filters for color-blind people to more easily notice differences in color, a process known as daltonization. Image processing JPEG XL uses an XYB color space derived from LMS. This can be interpreted as a hybrid color theory where L and M are opponents but S is handled in a trichromatic way, justified by the lower spatial density of S cones. In practical terms, this allows for using less data for storing blue signals without losing much perceived quality. The colorspace originates from Guetzli's butteraugli metric, and was passed down to JPEG XL via Google's Pik project. See also Color balance Color vision Luminous efficiency function Trichromacy References Color space Color blindness
LMS color space
[ "Mathematics" ]
2,152
[ "Color space", "Space (mathematics)", "Metric spaces" ]
2,218,040
https://en.wikipedia.org/wiki/Crystallographic%20restriction%20theorem
The crystallographic restriction theorem in its basic form was based on the observation that the rotational symmetries of a crystal are usually limited to 2-fold, 3-fold, 4-fold, and 6-fold. However, quasicrystals can occur with other diffraction pattern symmetries, such as 5-fold; these were not discovered until 1982, by Dan Shechtman. Crystals are modeled as discrete lattices, generated by a list of independent finite translations. Because discreteness requires that the spacings between lattice points have a lower bound, the group of rotational symmetries of the lattice at any point must be a finite group (alternatively, a single point is the only system allowing for infinite rotational symmetry). The strength of the theorem is that not all finite groups are compatible with a discrete lattice; in any dimension, we will have only a finite number of compatible groups. Dimensions 2 and 3 The special cases of 2D (wallpaper groups) and 3D (space groups) are most heavily used in applications, and they can be treated together. Lattice proof A rotation symmetry in dimension 2 or 3 must move a lattice point to a succession of other lattice points in the same plane, generating a regular polygon of coplanar lattice points. We now confine our attention to the plane in which the symmetry acts, illustrated with lattice vectors in the figure. Now consider an 8-fold rotation, and the displacement vectors between adjacent points of the polygon. If a displacement exists between any two lattice points, then that same displacement is repeated everywhere in the lattice. So collect all the edge displacements to begin at a single lattice point. The edge vectors become radial vectors, and their 8-fold symmetry implies a regular octagon of lattice points around the collection point. But this is impossible, because the new octagon is about 80% as large as the original. The significance of the shrinking is that it is unlimited. The same construction can be repeated with the new octagon, and again and again until the distance between lattice points is as small as we like; thus no discrete lattice can have 8-fold symmetry. The same argument applies to any k-fold rotation, for k greater than 6. A shrinking argument also eliminates 5-fold symmetry. Consider a regular pentagon of lattice points. If it exists, then we can take every other edge displacement and (head-to-tail) assemble a 5-point star, with the last edge returning to the starting point. The vertices of such a star are again vertices of a regular pentagon with 5-fold symmetry, but about 60% smaller than the original. Thus the theorem is proved. The existence of quasicrystals and Penrose tilings shows that the assumption of a linear translation is necessary. Penrose tilings may have 5-fold rotational symmetry and a discrete lattice, and any local neighborhood of the tiling is repeated infinitely many times, but there is no linear translation for the tiling as a whole. And without the discrete lattice assumption, the above construction not only fails to reach a contradiction, but produces a (non-discrete) counterexample. Thus 5-fold rotational symmetry cannot be eliminated by an argument missing either of those assumptions. A Penrose tiling of the whole (infinite) plane can only have exact 5-fold rotational symmetry (of the whole tiling) about a single point, however, whereas the 4-fold and 6-fold lattices have infinitely many centres of rotational symmetry. Trigonometry proof Consider two lattice points A and B separated by a translation vector r. 
Consider an angle α such that a rotation of angle α about any lattice point is a symmetry of the lattice. Rotating about point B by α maps point A to a new point A′. Similarly, rotating about point A by −α (the inverse rotation, also a symmetry) maps B to a point B′. Since both rotations mentioned are symmetry operations, A′ and B′ must both be lattice points. Due to the periodicity of the crystal, the new vector r′ which connects them must be equal to an integer multiple of r: r′ = mr, with m an integer. The four translation vectors, three of length r and one, connecting A′ and B′, of length r′, form a trapezium. Therefore, r′ is also given by: r′ = (1 − 2 cos α)r. Combining the two equations gives: 2 cos α = 1 − m ≡ M, where M is also an integer. Bearing in mind that |2 cos α| ≤ 2, we have allowed integers M ∈ {−2, −1, 0, 1, 2}. Solving for possible values of α reveals that the only values in the 0° to 180° range are 0°, 60°, 90°, 120°, and 180°. In radians, the only allowed rotations consistent with lattice periodicity are given by 2π/n, where n = 1, 2, 3, 4, 6. This corresponds to 1-, 2-, 3-, 4-, and 6-fold symmetry, respectively, and therefore excludes the possibility of 5-fold or greater than 6-fold symmetry. Short trigonometry proof Consider a line of atoms A-O-B, separated by distance a. Rotate the entire row by θ = +2π/n and θ = −2π/n, with point O kept fixed. After the rotation by +2π/n, A is moved to the lattice point C and after the rotation by −2π/n, B is moved to the lattice point D. Due to the assumed periodicity of the lattice, the two lattice points C and D will also be in a line directly below the initial row; moreover C and D will be separated by r = ma, with m an integer. But by trigonometry, the separation between these points is: 2a cos(2π/n). Equating the two relations gives: 2 cos(2π/n) = m, i.e. cos(2π/n) = m/2. This is satisfied by only n = 1, 2, 3, 4, 6. Matrix proof For an alternative proof, consider matrix properties. The sum of the diagonal elements of a matrix is called the trace of the matrix. In 2D and 3D every rotation is a planar rotation, and the trace is a function of the angle alone. For a 2D rotation, the trace is 2 cos θ; for a 3D rotation, 1 + 2 cos θ. Examples Consider a 60° (6-fold) rotation matrix with respect to an orthonormal basis in 2D. The trace is precisely 1, an integer. Consider a 45° (8-fold) rotation matrix. The trace is 2 cos 45° = √2, not an integer. Selecting a basis formed from vectors that span the lattice, neither orthogonality nor unit length is guaranteed, only linear independence. However, the trace of the rotation matrix is the same with respect to any basis. The trace is a similarity invariant under linear transformations. In the lattice basis, the rotation operation must map every lattice point into an integer number of lattice vectors, so the entries of the rotation matrix in the lattice basis – and hence the trace – are necessarily integers. As in the other proofs, this implies that the only allowed rotational symmetries correspond to 1-, 2-, 3-, 4- or 6-fold invariance. For example, wallpapers and crystals cannot be rotated by 45° and remain invariant; the only possible angles are 360°, 180°, 120°, 90° and 60°. Example Consider a 60° (360°/6) rotation matrix with respect to the oblique lattice basis for a tiling by equilateral triangles; in the basis (a₁, a₂) one such matrix is ((0, −1), (1, 1)). The trace is still 1. The determinant (always +1 for a rotation) is also preserved. The general crystallographic restriction on rotations does not guarantee that a rotation will be compatible with a specific lattice. For example, a 60° rotation will not work with a square lattice; nor will a 90° rotation work with a rectangular lattice. 
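The trace argument can be checked mechanically. The following Python sketch computes the order of an integer matrix by repeated multiplication; the example matrices are standard 6-fold and 4-fold rotations in their lattice bases, while the helper names are invented here.

```python
# Orders of 2x2 integer matrices, illustrating the trace criterion:
# in a lattice basis a rotation has an integer trace 2*cos(theta),
# so the trace must be one of -2, -1, 0, 1, 2.

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def order(m, limit=100):
    """Smallest k >= 1 with m^k = I, or None if not found up to limit."""
    power = m
    for k in range(1, limit + 1):
        if power == [[1, 0], [0, 1]]:
            return k
        power = mat_mul(power, m)
    return None

sixfold  = [[0, -1], [1, 1]]   # 60-degree rotation, triangular-lattice basis
fourfold = [[0, -1], [1, 0]]   # 90-degree rotation, square-lattice basis
for m in (sixfold, fourfold):
    print(m, "trace:", m[0][0] + m[1][1], "order:", order(m))
# An 8-fold rotation would need trace 2*cos(45 deg) = sqrt(2), which is not
# an integer, so no 2x2 integer matrix has order 8.
```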
Higher dimensions When the dimension of the lattice rises to four or more, rotations need no longer be planar; the 2D proof is inadequate. However, restrictions still apply, though more symmetries are permissible. For example, the hypercubic lattice has an eightfold rotational symmetry, corresponding to an eightfold rotational symmetry of the hypercube. This is of interest, not just for mathematics, but for the physics of quasicrystals under the cut-and-project theory. In this view, a 3D quasicrystal with 8-fold rotation symmetry might be described as the projection of a slab cut from a 4D lattice. One 4D rotation matrix realizing this eightfold symmetry of the hypercube (and the cross-polytope) is the signed permutation (x₁, x₂, x₃, x₄) ↦ (−x₄, x₁, x₂, x₃); transformed to suitable new coordinates, it becomes a rotation both by 45° (in the first two dimensions) and by 135° (in the last two). Projecting a slab of hypercubes along the first two dimensions of the new coordinates produces an Ammann–Beenker tiling (another such tiling is produced by projecting along the last two dimensions), which therefore also has 8-fold rotational symmetry on average. The A4 lattice and F4 lattice have order 10 and order 12 rotational symmetries, respectively. To state the restriction for all dimensions, it is convenient to shift attention away from rotations alone and concentrate on the integer matrices. We say that a matrix A has order k when its k-th power (but no lower), A^k, equals the identity. Thus a 6-fold rotation matrix in the equilateral triangle basis is an integer matrix with order 6. Let Ord_N denote the set of integers that can be the order of an N×N integer matrix. For example, Ord_2 = {1, 2, 3, 4, 6}. We wish to state an explicit formula for Ord_N. Define a function ψ based on Euler's totient function φ; it will map positive integers to non-negative integers. For an odd prime p and a positive integer k, set ψ(p^k) equal to the totient function value φ(p^k), which in this case is p^k − p^(k−1). Do the same for ψ(2^k) when k > 1. Set ψ(2) and ψ(1) to 0. Using the fundamental theorem of arithmetic, we can write any other positive integer uniquely as a product of prime powers, m = ∏_α p_α^(k_α); set ψ(m) = ∑_α ψ(p_α^(k_α)). This differs from the totient itself, because it is a sum instead of a product. The crystallographic restriction in general form states that Ord_N consists of those positive integers m such that ψ(m) ≤ N. Smallest dimension for a given order:
m    : 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
ψ(m) : 0 0 2 2 4 2 6 4 6  4 10  4 12  6  6  8 16  6 18  6  8 10 22  6 20 12 18  8 28  6 30
For m > 2, the values of ψ(m) are equal to twice the algebraic degree of cos(2π/m); therefore, ψ(m) is strictly less than m and reaches this maximum value (m − 1) if and only if m is a prime. These additional symmetries do not allow a planar slice to have, say, 8-fold rotation symmetry. In the plane, the 2D restrictions still apply. Thus the cuts used to model quasicrystals necessarily have thickness. Integer matrices are not limited to rotations; for example, a reflection is also a symmetry of order 2. But by insisting on determinant +1, we can restrict the matrices to proper rotations. 
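The general form of the restriction is easy to compute. A minimal Python sketch (helper names are invented here) implements ψ as defined above and reproduces Ord_N by direct search:

```python
# psi(m) as defined above, and Ord_N = {m : psi(m) <= N}.

def prime_factorization(n):
    """Return {prime: exponent} for n >= 1, by trial division."""
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def psi(m):
    total = 0
    for p, k in prime_factorization(m).items():
        if p == 2 and k == 1:
            continue                # psi(2) = 0 by definition
        total += p**k - p**(k - 1)  # Euler's totient of p^k
    return total                    # psi(1) = 0 (empty sum)

def ord_set(n, search_limit=100):
    return [m for m in range(1, search_limit + 1) if psi(m) <= n]

print(ord_set(2))   # [1, 2, 3, 4, 6] -- the familiar 2D/3D restriction
print(ord_set(4))   # adds 5, 8, 10 and 12, as stated below for 4D/5D
```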
Formulation in terms of isometries The crystallographic restriction theorem can be formulated in terms of isometries of Euclidean space. A set of isometries can form a group. By a discrete isometry group we will mean an isometry group that maps each point to a discrete subset of R^N, i.e. the orbit of any point is a set of isolated points. With this terminology, the crystallographic restriction theorem in two and three dimensions can be formulated as follows. For every discrete isometry group in two- and three-dimensional space which includes translations spanning the whole space, all isometries of finite order are of order 1, 2, 3, 4 or 6. Isometries of order n include, but are not restricted to, n-fold rotations. The theorem also excludes S8, S12, D4d, and D6d (see point groups in three dimensions), even though they have 4- and 6-fold rotational symmetry only. Rotational symmetry of any order about an axis is compatible with translational symmetry along that axis. The result in the table above implies that for every discrete isometry group in four- and five-dimensional space which includes translations spanning the whole space, all isometries of finite order are of order 1, 2, 3, 4, 5, 6, 8, 10, or 12. All isometries of finite order in six- and seven-dimensional space are of order 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 14, 15, 18, 20, 24 or 30. See also Crystallographic point group Crystallography Notes References External links The crystallographic restriction The crystallographic restriction theorem by CSIC Crystallography Group theory Theorems in algebra Articles containing proofs
Crystallographic restriction theorem
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
2,920
[ "Theorems in algebra", "Materials science", "Group theory", "Crystallography", "Fields of abstract algebra", "Condensed matter physics", "Mathematical problems", "Articles containing proofs", "Mathematical theorems", "Algebra" ]
2,218,269
https://en.wikipedia.org/wiki/Polymersome
In biotechnology, polymersomes are a class of artificial vesicles, tiny hollow spheres that enclose a solution. Polymersomes are made using amphiphilic synthetic block copolymers to form the vesicle membrane, and have radii ranging from 50 nm to 5 μm or more. Most reported polymersomes contain an aqueous solution in their core and are useful for encapsulating and protecting sensitive molecules, such as drugs, enzymes, other proteins and peptides, and DNA and RNA fragments. The polymersome membrane provides a physical barrier that isolates the encapsulated material from external materials, such as those found in biological systems. Synthosomes are polymersomes engineered to contain channels (transmembrane proteins) that allow certain chemicals to pass through the membrane, into or out of the vesicle. This allows for the collection or enzymatic modification of these substances. The term "polymersome" for vesicles made from block copolymers was coined in 1999. Polymersomes are similar to liposomes, which are vesicles formed from naturally occurring lipids. While having many of the properties of natural liposomes, polymersomes exhibit increased stability and reduced permeability. Furthermore, the use of synthetic polymers enables designers to manipulate the characteristics of the membrane and thus control permeability, release rates, stability and other properties of the polymersome. Preparation Several different block copolymer architectures have been used to create polymersomes. The most frequently used are linear diblock or triblock copolymers. In these cases, the block copolymer has one block that is hydrophobic; the other block or blocks are hydrophilic. Other architectures used include comb copolymers, where the backbone block is hydrophilic and the comb branches are hydrophobic, and dendronized block copolymers, where the dendrimer portion is hydrophilic. In the case of diblock, comb and dendronized copolymers the polymersome membrane has the same bilayer morphology as a liposome, with the hydrophobic blocks of the two layers facing each other in the interior of the membrane. In the case of triblock copolymers the membrane is a monolayer that mimics a bilayer, the central block filling the role of the two facing hydrophobic blocks of a bilayer. In general, polymersomes can be prepared by the methods used in the preparation of liposomes: film rehydration, direct injection, or dissolution. Uses Polymersomes that contain active enzymes and that provide a way to selectively transport substrates for conversion by those enzymes have been described as nanoreactors. Polymersomes have been used to create controlled release drug delivery systems. Similar to coating liposomes with polyethylene glycol, polymersomes can be made invisible to the immune system if the hydrophilic block consists of polyethylene glycol. Thus, polymersomes are useful carriers for targeted medication. For in vivo applications, polymersomes are de facto limited to the use of FDA-approved polymers, as most pharmaceutical firms are unlikely to develop novel polymers due to cost issues. 
Fortunately, there are a number of such polymers available, with varying properties, including: Hydrophilic blocks Poly(ethylene glycol) (PEG/PEO) Poly(2-methyloxazoline) Hydrophobic blocks Polydimethylsiloxane (PDMS) Poly(caprolactone) (PCL) Poly(lactide) (PLA) Poly(methyl methacrylate) (PMMA) If enough of the block copolymer molecules that make up a polymersome are cross-linked, the polymersome can be made into a transportable powder. Polymersomes can be used to make an artificial cell if hemoglobin and other components are added. The first artificial cell was made by Thomas Chang. See also Cell (biology) Liposome Polymer Copolymer Artificial cell References Biomolecules Polymers Immunology Pharmacokinetics
Polymersome
[ "Chemistry", "Materials_science", "Biology" ]
861
[ "Pharmacology", "Natural products", "Biochemistry", "Pharmacokinetics", "Organic compounds", "Immunology", "Polymer chemistry", "Biomolecules", "Structural biology", "Polymers", "Molecular biology" ]
2,218,352
https://en.wikipedia.org/wiki/Blum%20integer
In mathematics, a natural number n is a Blum integer if n = p × q is a semiprime for which p and q are distinct prime numbers congruent to 3 mod 4. That is, p and q must be of the form 4t + 3, for some integer t. Integers of this form are referred to as Blum primes. This means that the factors of a Blum integer are Gaussian primes with no imaginary part. The first few Blum integers are 21, 33, 57, 69, 77, 93, 129, 133, 141, 161, 177, 201, 209, 213, 217, 237, 249, 253, 301, 309, 321, 329, 341, 381, 393, 413, 417, 437, 453, 469, 473, 489, 497, ... The integers were named for computer scientist Manuel Blum. Properties Given n a Blum integer, let Qn be the set of all quadratic residues modulo n that are coprime to n, and let a ∈ Qn. Then: a has four square roots modulo n, exactly one of which is also in Qn The unique square root of a in Qn is called the principal square root of a modulo n The function f : Qn → Qn defined by f(x) = x² mod n is a permutation. The inverse function of f is: f⁻¹(x) = x^(((p−1)(q−1)+4)/8) mod n. For every Blum integer n, −1 has a Jacobi symbol mod n of +1, although −1 is not a quadratic residue of n. No Blum integer is the sum of two squares. History Before modern factoring algorithms, such as MPQS and NFS, were developed, it was thought to be useful to select Blum integers as RSA moduli. This is no longer regarded as a useful precaution, since MPQS and NFS are able to factor Blum integers with the same ease as RSA moduli constructed from randomly selected primes. References Integer sequences
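The permutation property and the inverse exponent given above can be checked with a short computation. The following Python sketch (variable names are invented here) verifies both for the Blum integer n = 77:

```python
# Squaring as a permutation of the quadratic residues modulo a Blum
# integer, with inverse exponent ((p-1)(q-1)+4)/8.
from math import gcd

p, q = 7, 11                 # distinct primes, both congruent to 3 mod 4
n = p * q                    # 77, a Blum integer

# Quadratic residues modulo n that are coprime to n.
Qn = sorted({x * x % n for x in range(1, n) if gcd(x, n) == 1})

d = ((p - 1) * (q - 1) + 4) // 8    # inverse exponent (here d = 8)
for a in Qn:
    b = a * a % n                   # f(a) = a^2 mod n stays in Qn
    assert pow(b, d, n) == a        # the principal square root recovers a
print("|Qn| =", len(Qn), "- squaring permutes Qn and x**d inverts it")
```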
Blum integer
[ "Mathematics" ]
413
[ "Sequences and series", "Integer sequences", "Mathematical structures", "Recreational mathematics", "Mathematical objects", "Combinatorics", "Numbers", "Number theory" ]
2,218,355
https://en.wikipedia.org/wiki/Astronomia%20nova
Astronomia nova (English: New Astronomy) is a book, published in 1609, that contains the results of the astronomer Johannes Kepler's ten-year-long investigation of the motion of Mars. One of the most significant books in the history of astronomy, the Astronomia nova provided strong arguments for heliocentrism and contributed valuable insight into the movement of the planets. This included the first mention of the planets' elliptical paths and of their motion as that of free-floating bodies, as opposed to objects on rotating spheres. It is recognized as one of the most important works of the Scientific Revolution. Background Prior to Kepler, Nicolaus Copernicus proposed in 1543 that the Earth and other planets orbit the Sun. The Copernican model of the Solar System was regarded as a device to explain the observed positions of the planets rather than a physical description. Kepler sought and proposed physical causes for planetary motion. His work is primarily based on the research of his mentor, Tycho Brahe. The two, though close in their work, had a tumultuous relationship. Regardless, in 1601 on his deathbed, Brahe asked Kepler to make sure that he did not "die in vain," and to continue the development of his model of the Solar System. Kepler would instead write the Astronomia nova, in which he rejects the Tychonic system, as well as the Ptolemaic system and the Copernican system. Some scholars have speculated that Kepler's dislike for Brahe may have had a hand in his rejection of the Tychonic system and formation of a new one. By 1602, Kepler set to work on determining the orbit pattern of Mars, keeping David Fabricius informed of his progress. He suggested the possibility of an oval orbit to Fabricius by early 1604, though he was not believed. Later in the year, Kepler wrote back with his discovery of Mars's elliptical orbit. The manuscript for Astronomia nova was completed by September 1607, and was in print by August 1609. Structure and summary In English, the full title of his work is the New Astronomy, Based upon Causes, or Celestial Physics, Treated by Means of Commentaries on the Motions of the Star Mars, from the Observations of Tycho Brahe, Gent. For over 650 pages (in the English translation), Kepler walks his readers, step by step, through his process of discovery. The discussion of scripture in the Astronomia nova's introduction was the most widely distributed of Kepler's works in the seventeenth century. The introduction outlines the four steps Kepler took during his research. The first step is his claim that the Sun itself, and not any imaginary point near the Sun (as in the Copernican system), is the point where all the planes of the eccentrics of the planets intersect, or the center of the orbits of the planets. The second step consists of Kepler placing the Sun as the center and mover of the other planets. This step also contains Kepler's reply to objections against placing the Sun at the center of the universe, including objections based on scripture. In reply to scripture, he argues that it is not meant to claim physical dogma, and the content should be taken spiritually. In the third step, he posits that the Sun is the source of the motion of all planets, using Brahe's proof based on comets that planets do not rotate on orbs. The fourth step consists of describing the path of planets as not a circle, but an oval. 
As the Astronomia nova proper starts, Kepler demonstrates that the Tychonic, Ptolemaic, and Copernican systems are indistinguishable on the basis of observations alone. The three models predict the same positions for the planets in the near term, although they diverge from historical observations, and fail in their ability to predict future planetary positions by a small, though absolutely measurable amount. Kepler here introduces his famous diagram of the movement of Mars in relation to Earth if Earth remained unmoving at the center of its orbit. The diagram shows that Mars's orbit would be completely imperfect and never follow along the same path. Kepler discusses all his work at great length throughout the book. He addresses this length in the sixteenth chapter: If thou art bored with this wearisome method of calculation, take pity on me, who had to go through with at least seventy repetitions of it, at a very great loss of time. Kepler, in a very important step, also questions the assumption that the planets move around some point in their orbit at a uniform rate. He finds that computing critical measurements based upon the Sun's actual position in the sky, instead of the Sun's "mean" position, injects a significant degree of uncertainty into the models, opening the path for further investigations. The idea that the planets do not move at a uniform rate, but at a speed that varies inversely with their distance from the Sun, was completely revolutionary and would become his second law (discovered before his first). Kepler, in his calculations leading to his second law, made multiple mathematical errors, which luckily cancelled each other out "as if by miracle." Given this second law, he puts forth in Chapter 33 that the Sun is the engine that moves the planets. To describe the motion of the planets, he claims the Sun emits a physical species, analogous to the light it also emits, which pushes the planets along. He also suggests a second force within every planet itself that pulls it towards the Sun to keep it from spiraling off into space. Kepler then attempts to find the true shape of planetary orbits, which he determines is elliptical. His initial attempt to define the orbit of Mars as a circle was off by only eight minutes of arc, but this was enough for him to dedicate six years to resolve the discrepancy. The data seemed to produce a symmetrical oviform curve inside of his predicted circle. He first tested an egg shape, then engineered a theory of an orbit which oscillates in diameter, and returned to the egg. Finally, in early 1605, he geometrically tested an ellipse, which he had previously assumed to be too simple a solution for earlier astronomers to have overlooked. Ironically, he had already derived this solution trigonometrically many months earlier. As he says, I laid [the original equation] aside, and fell back on ellipses, believing that this was quite a different hypothesis, whereas the two, as I shall prove in the next chapter, are one in the same... Ah, what a foolish bird I have been! Kepler's laws The Astronomia nova records the discovery of the first two of the three principles known today as Kepler's laws of planetary motion, which are: That the planets move in elliptical orbits with the Sun at one focus. That the speed of the planet changes at each moment such that the time between two positions is always proportional to the area swept out on the orbit between these positions. Kepler discovered the "second law" before the first. 
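The area law lends itself to a numerical illustration. The sketch below — a modern shortcut using Newton's method, not Kepler's own procedure, with arbitrary example values for the eccentricity and period — solves Kepler's equation M = E − e·sin E and shows the swept area growing uniformly in time:

```python
# For an ellipse with semi-axes a, b, the area swept since perihelion is
# (a*b/2) * (E - e*sin(E)), i.e. proportional to the mean anomaly M,
# which grows uniformly in time: equal times give equal areas.
import math

e, a = 0.2056, 1.0              # Mercury-like eccentricity, unit semi-major axis
b = a * math.sqrt(1 - e * e)    # semi-minor axis
T = 1.0                         # orbital period, arbitrary units

def eccentric_anomaly(t, tol=1e-12):
    M = 2 * math.pi * t / T     # mean anomaly
    E = M
    while True:                 # Newton's method on E - e*sin(E) - M = 0
        dE = (E - e * math.sin(E) - M) / (1 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            return E

for t in (0.125, 0.25, 0.375, 0.5):
    E = eccentric_anomaly(t)
    x = a * (math.cos(E) - e)   # position, with the Sun at one focus
    y = b * math.sin(E)
    swept = 0.5 * a * b * (E - e * math.sin(E))
    print(f"t={t:5.3f}  position=({x:+.3f}, {y:+.3f})  swept area={swept:.4f}")
```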
He presented his second law in two different forms: In Chapter 32 he states that the speed of the planet varies inversely with its distance from the Sun, and therefore he could measure changes in the position of the planet by adding up all the distance measures, or looking at the area along an orbital arc. This is his so-called "distance law". In Chapter 59, he states that a radius from the Sun to a planet sweeps out equal areas in equal times. This is his so-called "area law". However, Kepler's "area-time principle" did not facilitate easy calculation of planetary positions. Kepler could divide up the orbit into an arbitrary number of parts, compute the planet's position for each one of these, and then refer all questions to a table, but he could not determine the position of the planet at each and every individual moment because the speed of the planet was always changing. This paradox, referred to as the "Kepler problem," prompted the development of calculus. A decade after the publication of the Astronomia nova, Kepler discovered his "third law", published in his 1619 Harmonices Mundi (Harmonies of the world). He found that the ratio of the cube of the length of the semi-major axis of each planet's orbit to the square of its orbital period is the same for all planets. Kepler's knowledge of gravity In his introductory discussion of a moving earth, Kepler addressed the question of how the Earth could hold its parts together if it moved away from the center of the universe which, according to Aristotelian physics, was the place toward which all heavy bodies naturally moved. Kepler proposed an attractive force similar to magnetism, which may have been known to Newton. Gravity is a mutual corporeal disposition among kindred bodies to unite or join together; thus the earth attracts a stone much more than the stone seeks the earth. (The magnetic faculty is another example of this sort).... If two stones were set near one another in some place in the world outside the sphere of influence of a third kindred body, these stones, like two magnetic bodies, would come together in an intermediate place, each approaching the other by a space proportional to the bulk [moles] of the other.... For it follows that if the earth's power of attraction will be much more likely to extend to the moon and far beyond, and accordingly, that nothing that consists to any extent whatever of terrestrial material, carried up on high, ever escapes the grasp of this mighty power of attraction. Kepler discusses the Moon's gravitational effect upon the tides as follows: The sphere of the attractive virtue which is in the moon extends as far as the earth, and entices up the waters; but as the moon flies rapidly across the zenith, and the waters cannot follow so quickly, a flow of the ocean is occasioned in the torrid zone towards the westward. If the attractive virtue of the moon extends as far as the earth, it follows with greater reason that the attractive virtue of the earth extends as far as the moon and much farther; and, in short, nothing which consists of earthly substance anyhow constituted although thrown up to any height, can ever escape the powerful operation of this attractive virtue. Kepler also clarifies the concept of lightness in terms of relative density, in opposition to the Aristotelian concept of the absolute nature or quality of lightness as follows. His argument could easily be applied today to something like the flight of a hot air balloon. 
Nothing which consists of corporeal matter is absolutely light, but that is comparatively lighter which is rarer, either by its own nature, or by accidental heat. And it is not to be thought that light bodies are escaping to the surface of the universe while they are carried upwards, or that they are not attracted by the earth. They are attracted, but in a less degree, and so are driven outwards by the heavy bodies; which being done, they stop, and are kept by the earth in their own place. In reference to Kepler's discussion relating to gravitation, Walter William Bryant makes the following statement in his book Kepler (1920). ...the Introduction to Kepler's "Commentaries on the Motion of Mars," always regarded as his most valuable work, must have been known to Newton, so that no such incident as the fall of an apple was required to provide a necessary and sufficient explanation of the genesis of his Theory of Universal Gravitation. Kepler's glimpse at such a theory could have been no more than a glimpse, for he went no further with it. This seems a pity, as it is far less fanciful than many of his ideas, though not free from the "virtues" and "animal faculties," that correspond to Gilbert's "spirits and humours". Kepler considered that this attraction was mutual and was proportional to the bulk of the bodies, but he considered it to have a limited range and he did not consider whether or how this force may have varied with distance. Furthermore, this attraction only acted between "kindred bodies"—bodies of a similar nature, a nature which he did not clearly define. Kepler's idea differed significantly from Newton's later concept of gravitation and it can be "better thought of as an episode in the struggle for heliocentrism than as a step toward Universal gravitation." Kepler sent Galileo the book while the latter was working on his Dialogue Concerning the Two Chief World Systems (published in 1632, two years after Kepler's death). Galileo had been trying to determine the path of an object falling from rest towards the center of the Earth, but used a semicircular orbit in his calculation. Commemoration The 2009 International Year of Astronomy commemorated the 400th anniversary of the publication of this work. Notes References Johannes Kepler, New Astronomy, translated by William H. Donahue, Cambridge: Cambridge Univ. Pr., 1992. Kepler's Astronomia Nova cosmology & astronomy External links Astronomia nova by Johannes Kepler, 1609, in Latin, full text scan Astronomia nova by Johannes Kepler, 1609, in Latin, full text at archive.org Origins of Modernity - Kepler: Astronomia nova 1609 books Astronomy books 17th-century books in Latin Physics books Historical physics publications History of astronomy 1609 in science Works by Johannes Kepler
Astronomia nova
[ "Astronomy" ]
2,761
[ "Astronomy books", "Works about astronomy", "History of astronomy" ]
2,218,687
https://en.wikipedia.org/wiki/Common%20Interface
In Digital Video Broadcasting (DVB), the Common Interface (also called DVB-CI) is a technology which allows decryption of pay TV channels. Pay TV stations want to choose which encryption method to use. The Common Interface allows TV manufacturers to support many different pay TV stations, by allowing exchangeable conditional-access modules (CAMs) for various encryption schemes to be plugged in. The Common Interface is the connection between the TV tuner (TV or set-top box) and the module that decrypts the TV signal (CAM). This module, in turn, then accepts the pay-to-view subscriber card, which contains the access keys and permissions. The host (TV or set-top box) is responsible for tuning to pay TV channels and demodulation of the RF signal, while the CAM is responsible for CA descrambling. The Common Interface allows them to communicate with each other. All Common Interface equipment must comply with the EN 50221-1997 standard. This is a defined standard that enables the addition of a CAM in a DTV receiver to adapt it to different kinds of cryptography. The EN 50221 specification allows many types of modules but only the CAM has found popularity because of the pay TV market. Indeed, one of Digital Video Broadcasting's main strengths is the option of implementing the required conditional access capability on the Common Interface. This allows broadcasters to use modules containing solutions from different suppliers, thus increasing their choice of anti-piracy options. Mode of operation A DVB receiver may have one or two slots implementing the Common Interface (CI). The CI uses the conditional-access module (PCMCIA) connector and conforms to the Common Scrambling Algorithm (CSA), the standard specifying that such a receiver must be able to accept DES (Data Encryption Standard) keys at intervals of a few milliseconds, and use them to decode private channels according to a specific algorithm. Those algorithms are proprietary to individual suppliers: each supplier uses its own algorithms, and there is no defined standard for them. As the full MPEG-2 transport data stream comes out of the demodulator and error-correction units, the DTV receiver sends it through the card plugged into the Common Interface, before it is processed by the MPEG demultiplexer in the receiver. If several CI cards are present, the MPEG transport data stream will be passed sequentially through all these cards. An embedded CAM may not physically exist, as it may be implemented in software on the CPU. In such a case, only the smart card reader normally in the CAM is fitted, and not the PCMCIA-type CI slots. Even though the Common Interface was created to resolve cryptography issues, it can have other functions using other types of modules, such as Web browser, iDTV (Interactive Television), and so forth. In Europe, DVB-CI is obligatory in all iDTV terminals. The host sends an encrypted MPEG transport stream to the CAM and the CAM sends the decrypted transport stream back to the host. The CAM often contains a smart-card reader. Standards DVB-CI The normative DVB-CI standard EN 50221 was defined in 1997 by CENELEC, the European Committee for Electrotechnical Standardization. According to the Common Interface scheme: host : A device where module(s) can be connected; for example, an integrated receiver/decoder (IRD), a VCR, a PC, etc. 
module: A small device, not working by itself, designed to run specialized tasks in association with a host; for example, a conditional access subsystem, an electronic program guide application module, or a module providing resources required by an application but not provided directly by the host. The specification defines only two aspects, two logical interfaces to be included on the same physical interface. The first interface is the MPEG-2 transport stream. The link and physical layers are defined in this specification and the higher layers are defined in the MPEG-2 specifications. The second interface, the command interface, carries commands between the host (receiver) and the module. The specification does not define the operation or functionality of a conditional access system application on the module. The applications that may be performed by a module communicating across the interface are not limited to conditional access or to those described in this specification. More than one module may be supported concurrently. The common interface shares many features of the PC Card standard (PCMCIA). By reducing the widths of the address and data buses it has been possible to include a bi-directional parallel transport stream interface. Transport Stream Interface (TSI) The transport stream format is specified by IEC 13818-1 and is the MPEG-2 TS format. Command Interface In addition there is a command interface for communication between the host and module. This communication is in the form of a layered protocol stack which allows the host and module to share resources. For example, the module can request the current date and time from the host. To use this service, the module opens a session to the "Date-Time" resource provided by the host. Or, the module can ask the host to display a message on the TV screen and can then read keypresses from the host remote control. This is done by opening a session to the host's Man-Machine Interface (MMI) resource. This resource also allows the CAM to request and receive PIN numbers. Some of the resources defined by DVB-CI are de facto optional. For example, the host could contain a modem for communication over a telephone line allowing the CAM to implement pay-per-view. This can be done by opening a session to the host's Low-Speed Communication (LSC) resource (assuming that the host announced the availability of this resource). The Host Control resource (allowing the CAM to request forced tuning) may also be absent in some hosts. The definitely mandatory resources are the Resource Manager, Application Information and Conditional Access Support ones. The first two of these three are necessary for the initial handshaking between the CAM and its host, while the CA Support resource is necessary for descrambling the selected channels. The Command Interface is extensible and there are several specification documents available which describe these extensions (e.g. ETSI TS 101 699). However, these extensions have often not proved popular with manufacturers. CI+ Definition CI+ (also known as CI Plus or Common Interface Plus) is a specification that extends the original DVB Common Interface standard (DVB-CI, sometimes referred to as DVB-CIv1). The main addition introduced by CI+ is a form of copy protection between a CI+ conditional-access module (referenced by the spec as CICAM, while CI+ CAM seems to be a more precise abbreviation) and the television receiver (host). CI+ is backward-compatible with DVB-CIv1. 
Old television receivers which have a CIv1 CI slot can be used with a CI+ CAM and vice versa, but only for viewing those TV programs which are not marked as CI+ protected. History Initial versions The CI+ specification was developed by the consumer electronics firms Panasonic, Philips, Samsung and Sony, as well as the pay-TV technology company SmarDTV and the fabless chip maker Neotion. A first draft of the specification was put up for review in January 2008 as the V1.00 CI Plus Specification. The establishment of the Trusted Authority has been completed and an official security certification lab appointed. In 2009, versions 1.1 and 1.2 were released. The 1.2 version became the first one to be massively deployed. The main features added to the original DVB-CI standard by CI+ v1.2 are: Content Control (allows re-encryption of video and audio on their way from the CI+ CAM to its host); coordination of CAM firmware upgrades between the CAM and its host; the "CI Plus browser", i.e. support of MHEG-5 applications running on a CI+ host, launched by a CI+ CAM and being able to communicate with it; and support of IP communication, added to DVB-CI's Low-Speed Communication (LSC) resource (but without renaming it to "High-Speed"). The spec does not state explicitly for each feature whether it is mandatory or optional. The mandatory feature (as it is actually the main raison d'être of CI+) is Content Control. An optional feature of the v1.2 version is the "PVR Resource"; this can be concluded from the fact that it does not appear in newer CI+ spec versions. CI+ v1.3 In 2011, version 1.3 of the CI+ spec was released (later replaced with CI+ v1.3.1 and then with CI+ v1.3.2, still commonly referenced as CI+ v1.3). The main features added by CI+ v1.3 to CI+ v1.2 are: various enhancements of the Content Control mechanism; coordination of parental-control PIN code handling between the CAM and its host; better IP communication support (increased data throughput); VOD support; and a new Operator Profile resource allowing the CAM to adapt non-standard broadcast-specific service information to the standard DVB format understandable by the host. CI+ v1.4 With the development of CI+, the standard has now come under the umbrella of the DVB standards organization. In 2014, DVB released the ETSI TS 103 205 V1.1.1 specification, defining what is often referred to as "CI+ v1.4". The main features added by ETSI TS 103 205 to CI+ v1.3 are: multi-tuner support; URI (usage rules information) extensions (the most prominent being the addition of a trick-mode enable/disable flag); IP-delivered video support; watermarking and transcoding capability; extended communication functionality supporting IP multicast and a hybrid type of communication (hybrid communication means here that IP multicast data arrive at the module over the transport stream interface); CI Plus™ browser extensions (interaction channel, streaming, video scaling etc.); letting a CI+ CAM determine whether its host supports an advanced application environment (e.g. HbbTV or MHP) and, if so, launch a corresponding application; and allowing CI+ CAM applications to be represented in the host's channel line-up in the form of virtual channels. CI+ v2.0 In 2018, ETSI published the second-generation DVB-CI standard (often referred to as CI+ v2.0): TS 103 605 V1.1.1. The main evolution of this version is the addition of USB as a physical layer, replacing the aging PC Card interface. 
Certification CI+ host and CAM test tool development, testing and certification are carried out by Resillion (formerly Eurofins Digital Testing, formerly Digital TV Labs) in the UK (Bristol) and China (Shenzhen). How it works Content protection By making use of certificates issued by a trusted certification authority, a secure authenticated channel (SAC) is formed between a CI+ CAM and the television receiver (host). This SAC is used to generate a shared key, unique to each CAM–host pair, which protects from unauthorized copying any content marked in the associated URI (Usage Rules Info) as content that needs to be re-encrypted on its way from the CAM to the host after removal of the original CA or DRM scrambling (under the original CI standard, decrypted content could be sent over the PCMCIA interface only in unscrambled form). Revocation The CI+ standard allows revocation of compromised CI+ hosts. This is done by broadcasting a Service Operator Certificate Revocation List (SOCRL) in a DSM-CC data carousel. If the CAM detects that its host's ID, model or brand is listed in the SOCRL (and is not listed in the optional SOCWL – Service Operator Certificate White List), the CAM must refuse to descramble content marked in the CI+ URI as protected. A SOCRL is created and signed by the CI+ Root-of-Trust at the request of a Service Operator. To prevent replay of outdated SOCRLs and SOCWLs, they must be broadcast in combination with an RSD (Revocation Signaling Data) table which specifies the last versions of the SOCRL and SOCWL and their location in the DSM-CC data carousel. The RSD also must be signed. Enhanced MMI A CI+ 1.3 compliant host device must implement an MHEG-5 interactive TV engine to manage navigation of the user within an interactive TV application, using its device remote control. Support of the MHP or HbbTV interactive TV engines is also optional. CI+ 1.4 hosts may optionally support the MHEG-5 interactive TV engine. Operators (partial list) The following operators have currently rolled out CI+ support or plan to do so: Albania Digitalb Tring TV Bulgaria Blizoo – launched CI+ in 2014 Bulsatcom – launched CI+ v1.3 Belgium Telenet – launched CI+ in June 2013 Télésat and TV Vlaanderen VOO – launched CI+ in September 2015 Croatia evotv – launched CI+ v1.3 France Canal+ – launched the "Canal Ready" label for devices able to receive the Canal+ channel Germany HD+ Kabel Deutschland – NDS CAM KBW Sky Deutschland – NDS CAM Tele Columbus Italy Mediaset Premium (digital terrestrial television) – needs a CI+ slot on an HD television to descramble the high-definition channel Premium Calcio HD. Tivùsat Luxembourg Eltrona Netherlands Caiway – launched CI+ in October 2009 Delta NV – launched CI+ in 2010 Kabel Noord – launched CI+ in 2010 Ziggo – launched CI+ in September 2009 (2011 in former UPC areas), SMiT and Neotion CAM modules are used Poland Cyfrowy Polsat – Nagravision CAM UPC Poland Platforma Canal+ Romania UPC Romania (now Vodafone) – launched CI+ in April 2012 RCS & RDS (Digi TV) – starting November 2013 Focus Sat – starting March 2020, previously compatible with 3rd-party CIv1 or registered CI+ Conax modules Orange TV Telekom (formerly Romtelecom/Dolce) Russia Akado TV Serbia SBB Supernova Sweden Boxer Com Hem Viasat Switzerland UPC Cablecom – starting June 2010 Turkey D-Smart Teledünya United Kingdom Top Up TV In July 2009 the largest cable operator in the Netherlands, Ziggo, announced that it would support CI+ based Integrated Digital Television sets (IDTVs) actively. 
In September 2009 the first batch of 15,000 SMiT (Shenzhen State Micro Technology Co., Ltd.) CI+ CAMs was offered by various Dutch retailers, followed in October 2009 by the first batch of Neotion CAMs. Other supporters included Canal+, and the conditional access companies Irdeto and Conax. In 2009, NDS (now Cisco) announced that it would support Kabel Deutschland in deploying CI+ to its customers. In 2014, CI+ CAMs with Cisco VideoGuard CA, manufactured by SMiT, were deployed at D-Smart, KDG (Kabel Deutschland), KBW, Sky Deutschland, and Tele Columbus. Compatible TV sets (partial list) LG 2010 models all LD and LE series also MFT models MXX80D Many of Samsung's new LCD, LCD LED and plasma model variants with CI+ compatible motherboards, although there were some incompatibilities between TVs and UPC and RCS-RDS CI+ modules, even with models certified by UPC and RCS-RDS. Some problems were solved by upgrading the firmware of the TV; others were solved by simply replacing (in many cases under warranty) the motherboard. Some Samsung models require an adaptor for non-standard CI module sockets. Many of Sony's models, including the Bravia W5500 series. Some older models needed a firmware update. Philips 5000 and 9000 series LCD TVs (required firmware update pending, according to Ziggo) Panasonic early models (until early 2011) with CI+ slots needed new firmware to be fully CI+ compatible (as of 2010). All incompatibility problems were solved by software and firmware updates, or sometimes by using a CI+ card or module with other firmware. All models produced after early 2011 are fully compatible with CI+. Some Tesco Technika models. Many Vestel-based TV sets indicate that they are CI+ certified in their shop mode (or demo mode) or simply by a sticker attached to the front of the set. In many cases, CI+ compatibility of Vestel sets is mentioned on the package. Embedded Common Interface A new ETSI working group will be working on the Embedded Common Interface (ECI). See also Conditional-access module (CAM) DVB-CPCM Free-to-view HDCP Television encryption References External links CI Plus official web site Official CI Plus test lab Official CI Plus DigiCert EN 50221 Specification LinuxTV entry for common interface ETSI TS 101 699 - DVB Extensions to the Common Interface Specification R206-001:1998 - Guidelines for Implementation and Use of the Common Interface for DVB Decoder Applications Consortium DVB CENELEC CI Plus Specification V1.2 (2009-04) CI Plus Specification V1.3.1 (2011-09) ETSI TS 103 205 V1.1.1 (aka CI+ V1.4) Open source and open hardware CI implementation (Joker TV) Gerard O'Driscoll, The Essential Guide to Digital Set-Top Boxes and Interactive TV, reprinted April 2000 Jerry Whitaker, Television Receivers, 2001 Digital Video Broadcasting Digital rights management standards Conditional-access television broadcasting Set-top box
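The revocation logic described above reduces to a small membership test. The following is a minimal sketch in Python; the data structures (plain sets of identifiers) and the function name are illustrative assumptions, not the CI+ wire formats or any standardized API.

```python
# Minimal sketch of the CAM-side revocation decision described above.
# The SOCRL/SOCWL are modeled as plain sets of identifiers; in the real
# system they arrive signed in a DSM-CC carousel and are certificate-based.

def may_descramble(host_id: str, host_model: str, host_brand: str,
                   socrl: set[str], socwl: set[str],
                   content_is_ci_plus_protected: bool) -> bool:
    """Return True if the CAM may descramble the content for this host."""
    if not content_is_ci_plus_protected:
        return True  # unprotected content is unaffected by revocation
    revoked = {host_id, host_model, host_brand} & socrl
    whitelisted = host_id in socwl  # optional white list overrides revocation
    return not revoked or whitelisted

# Example: a revoked model that is individually whitelisted may still descramble.
print(may_descramble("HOST123", "ModelX", "BrandY",
                     socrl={"ModelX"}, socwl={"HOST123"},
                     content_is_ci_plus_protected=True))  # True
```

This captures only the decision rule; signature verification of the SOCRL, SOCWL and RSD, and replay protection via version numbers, sit outside this sketch.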
Common Interface
[ "Technology" ]
3,674
[ "Computer standards", "Digital rights management standards" ]
2,218,718
https://en.wikipedia.org/wiki/Sodium%20manganate
Sodium manganate is the inorganic compound with the formula Na2MnO4. This deep green solid is a rarely encountered analogue of the related salt K2MnO4. Sodium manganate is rare because it cannot be readily prepared from the oxidation of manganese dioxide and sodium hydroxide. Instead this oxidation reaction tends to stop at producing sodium hypomanganate, Na3MnO4, and even this Mn(V) salt is unstable in solution. Sodium manganate can be produced by reduction of sodium permanganate under basic conditions: 4 NaOH + 4 NaMnO4 → 4 Na2MnO4 + 2 H2O + O2 Because Na2MnO4 is difficult to prepare, sodium permanganate is more expensive than potassium permanganate. References Manganates Sodium compounds
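As a quick sanity check on the equation above, the following sketch verifies that it is atom-balanced. The element counts are hard-coded from the formulas; this is purely illustrative arithmetic, not part of the source.

```python
# Check that the permanganate-reduction equation above is atom-balanced.
from collections import Counter

def side(*terms):
    """Sum element counts over (coefficient, formula-dict) terms."""
    total = Counter()
    for coeff, formula in terms:
        for element, n in formula.items():
            total[element] += coeff * n
    return total

NaOH    = {"Na": 1, "O": 1, "H": 1}
NaMnO4  = {"Na": 1, "Mn": 1, "O": 4}
Na2MnO4 = {"Na": 2, "Mn": 1, "O": 4}
H2O     = {"H": 2, "O": 1}
O2      = {"O": 2}

lhs = side((4, NaOH), (4, NaMnO4))
rhs = side((4, Na2MnO4), (2, H2O), (1, O2))
print(lhs == rhs)  # True: 8 Na, 4 Mn, 20 O, 4 H on each side
```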
Sodium manganate
[ "Chemistry" ]
166
[ "Salts", "Manganates", "Inorganic compounds", "Inorganic compound stubs" ]
2,218,753
https://en.wikipedia.org/wiki/Adatom
An adatom is an atom that lies on a crystal surface, and can be thought of as the opposite of a surface vacancy. This term is used in surface chemistry and epitaxy, when describing single atoms lying on surfaces and surface roughness. The word is a portmanteau of "adsorbed atom". A single atom, a cluster of atoms, or a molecule or cluster of molecules may all be referred to by the general term "adparticle". This is often a thermodynamically unfavorable state. However, cases such as graphene may provide counter-examples. Growth "Adatom" is a portmanteau word, short for adsorbed atom. When an atom arrives at a crystal surface, it is adsorbed by the periodic potential of the crystal, thus becoming an adatom. The minima of this potential form a network of adsorption sites on the surface. There are different types of adsorption sites. Each of these sites corresponds to a different structure of the surface. There are five different types of adsorption sites, which are: on a terrace, where the adsorption site is on top of the surface layer that is growing; at the step edge, which is next to the growing layer; in the kink of a growing layer; in the step edge of a growing layer; and in the surface layer, where the adsorption site is inside the lower layer. Of these adsorption site types, kink sites play the most important role in crystal growth. Kink density is a major factor in growth kinetics. Attachment of an atom to a kink site, or removal of an atom from a kink, does not change the free surface energy of the crystal, since the number of broken bonds does not change. It follows that the chemical potential of an atom in the kink site is equal to that of the crystal, which means that the kink site is the one adsorption site type where an adatom becomes a part of the crystal. Depending on the crystallography, or at higher growth temperatures (an entropy effect), the crystal surface becomes rough, producing a greater number of kinks. This means that adatoms have a greater chance of arriving at a kink site and becoming part of the crystal. This is the normal mechanism of growth. Conversely, a lower growth temperature gives a smooth surface, which means that there is a higher number of terrace adsorption sites. There are still kink sites, but these are only found at the edges of steps. The crystal then grows only through "lateral motion of the steps". This type of growth is called the layer mechanism of growth. How the adatoms grow on the surface depends on which interaction is the strongest and on what the surface looks like. If the adatom-adatom interaction is the strongest, adatoms are more likely to create pyramids of adatoms on the surface. If the adatom-surface interaction is the strongest, the adatoms are more likely to arrange themselves in such a way as to create layers on the surface. But it also depends on the origins of the steps on the surface. In total there are five different types of layer growth: normal growth, step-flow growth, layer-by-layer growth, multilayer (or three-dimensional island) growth, and spiral growth. Step-flow growth is observed on stair-like surfaces. These surfaces have a geometry with vicinal steps separated by "atomically flat low-index terraces". When adatoms attach to the edges of the steps, they move along the surface until they find a kink site to attach to and become part of the crystal. 
However, if the kink density is not high enough, so that not all adatoms arrive at one of the kinks, additional steps are created on the terraces (as if a flat surface carried small two-dimensional islands), leading to a mixed growth mode and a change in layer growth type, from step-flow to layer-by-layer growth. In layer-by-layer growth, the adatom-surface interaction is the strongest. A new layer is created through 2D islands, which are created on the surface. The islands grow until they spread out over the entire surface, and then the next layer starts to grow. This growth is named Frank-Van der Merwe (FM) growth. In some cases the cycle of making new layers in layer-by-layer growth is broken by kinetic constraints. In these cases, growth in higher layers starts before lower layers are finished, which means three-dimensional islands are created, and a new type of growth, called multilayer growth, starts instead of layer-by-layer growth. Multilayer growth can be divided into Volmer-Weber growth and Stranski-Krastanov growth. If the crystal surface contains a screw dislocation, a different type of growth, called spiral growth, might take place. Around the screw dislocation, a spiral shape is seen during growth. As the screw dislocation causes a growth spiral that does not disappear, islands might not be needed to cause crystal growth. The adatoms are bound to the surface through epitaxy. In this process, new layers of a crystal are created through the attachment of new atoms. This can be through a chemical reaction, or through heating a new film or centrifuging it. Generally, the particles that are used to form a new layer will not always be adsorbed. To create bonds with the surface, energy is needed, and not every particle has the needed amount of energy to attach at that part of the surface (for different parts, different energies are needed). If one has an incoming flux F of particles, part of it will be adsorbed, given by the adsorption flux F_ads = sF, where s is the sticking coefficient. Not only does this coefficient depend on the surface and on the energy of the incoming atom, but also on the chemical nature of both the particle and the surface. If both the particle and surface are made of a substance that easily reacts with other particles, it is easier for the atoms to stick to the surface. Surface thermodynamics Looking at the thermodynamics at the surface of the film, bonds are broken, which requires energy, and bonds are formed, which releases energy. The thermodynamics involved were modeled by Walther Kossel and Ivan Stranski in the 1920s. This model is called the terrace ledge kink (TLK) model. The adatom can create more than one bond with the crystal, depending on the structure of the crystal. If it is a simple cubic lattice, the adatom can have up to 6 bonds, whereas in a face-centered cubic lattice, it can have up to 12 nearest neighbors. The more bonds created, the more energy is released on binding, making it harder to desorb the adatom. A special site for an adatom is a kink, where exactly half of the bonds with the surface can be created, also called the "half-crystal position". Magnetic adatoms Adatoms, due to having fewer bonds than the other atoms in the crystal, have unbound electrons. These electrons have spin and therefore a magnetic moment. This magnetic moment has no preference for orientation until an external influence, like a magnetic field, is present. 
The structure of the adatoms on a surface can be adjusted by changing the external magnetic field. Through this method theoretical situations, such as the atomic chain, can be simulated. Quantum mechanics needs to be taken into account when using adatoms due to the small scale. The magnetic field created by an atom is caused mostly by the orbit and spin of the electrons. The proton's and neutron's magnetic moments are negligible compared to that of the electron, due to their larger masses. When an atom with free electrons is inside an external magnetic field, its magnetic moment aligns with the external field because this lowers its energy. This is why bound electrons do not display this magnetic moment; they already have a favorable energy state and it is unfavorable to change. The magnetization of a (magnetically aligned) atom is given by M = N g_j μ_B j B_j(g_j μ_B j B / k_B T), where N is the number of electrons, g_j is the g-factor, μ_B is the Bohr magneton, k_B is the Boltzmann constant, T is the temperature, j is the total angular momentum quantum number, B is the applied field, and B_j is the Brillouin function. This formula holds under the assumption that the magnetic energy of an electron is given by E = −μ·B and there is no exchange interaction. Movement across a surface The movement of adatoms across a surface can be described by the Burton, Cabrera and Frank (BCF) model. The model treats adatoms as a 2D gas on top of the surface. The adatoms diffuse with a diffusion constant D; they are desorbed back to the medium above with a rate of 1/τ_s per atom and adsorbed with flux F. The diffusion constant can, when the concentration of particles is small, be expressed as D = a² ν_0 exp(−E_D / k_B T), where a is the hopping distance for the atom, E_D is the energy needed to pass the diffusion barrier, and ν_0 is the attempt frequency. With n denoting the adatom concentration, the BCF model obeys the continuity equation ∂n/∂t = F − n/τ_s + D∇²n. Combining the steady state (∂n/∂t = 0) with boundary conditions that fix the adatom concentration at the step edges leads to an expression for the velocity of the adatoms at each adsorption site. Applications In 2012, scientists at the University of New South Wales were able to use phosphine to precisely and deterministically place a single phosphorus atom onto a surface of epitaxial silicon. The resulting adatom created what is described as a single-atom transistor. Just as a chemical structural formula pinpoints the locations of the groups attached to a particular molecule, the location of each dopant atom or molecule in a silicon-based transistor can thus be identified, along with the associated characteristics of the device based on those locations. Mapping the dopant substances would then give the exact characteristics of any given semiconductor device, once all locations are known. With the technology available nowadays it is possible to create a linear chain of adatoms on top of an epitaxial film. With this, one can analyse theoretical situations. Furthermore, Usami et al. were able to create quantum wells by adding Si atoms to a SiGe bulk crystal. Within these wells they observed photoluminescence from excitons confined in the wells. References Surface science
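To make the Arrhenius form of the diffusion constant concrete, here is a rough numerical sketch. The parameter values (3 Å hop distance, 10¹³ Hz attempt frequency, 0.5 eV barrier) are illustrative assumptions, not data for any particular surface, and some conventions include an extra geometric prefactor of 1/4 that is omitted here.

```python
# Rough numerical sketch of the surface diffusion constant described above,
# D = a^2 * nu0 * exp(-E_D / (k_B * T)). All parameter values are invented
# for illustration.

import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def diffusion_constant(a_m: float, nu0_hz: float, e_d_ev: float, t_k: float) -> float:
    """Adatom diffusion constant in m^2/s for hop distance a_m (m),
    attempt frequency nu0_hz (Hz), barrier e_d_ev (eV), temperature t_k (K)."""
    return a_m**2 * nu0_hz * math.exp(-e_d_ev / (K_B * t_k))

# Typical-order values: 3 angstrom hops, 1e13 Hz attempts, 0.5 eV barrier.
for t in (300, 600, 900):
    print(t, diffusion_constant(3e-10, 1e13, 0.5, t))
```

The exponential dependence on temperature is the point of the exercise: raising T from 300 K to 600 K increases D by many orders of magnitude, which is why growth mode depends so strongly on temperature.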
Adatom
[ "Physics", "Chemistry", "Materials_science" ]
2,199
[ "Condensed matter physics", "Surface science" ]
2,218,814
https://en.wikipedia.org/wiki/Northern%20hairy-nosed%20wombat
The northern hairy-nosed wombat (Lasiorhinus krefftii) or yaminon is one of three extant species of Australian marsupials known as wombats. It is one of the rarest land mammals in the world and is critically endangered. Its historical range extended across New South Wales, Victoria, and Queensland, and as recently as 100 years ago it was considered to have become extinct, but in the 1930s a population of about 30 individuals was discovered in one place, within what is now Epping Forest National Park in Queensland. With the species threatened by wild dogs, the Queensland Government built a predator-proof fence around all wombat habitat at Epping Forest National Park in 2002. Insurance populations have since been translocated to two other locations to ensure the species survives threats such as fire, flood, or disease. In 2003, the total population consisted of 113 individuals, including only around 30 breeding females. After recording an estimated 230 individuals in 2015, the number was up to over 300 by 2021, and over 400 by 2024. Taxonomy English naturalist Richard Owen described the species in 1873. The genus name Lasiorhinus comes from the Greek words lasios, meaning hairy or shaggy, and rhinos, meaning nose. The widely accepted common name is northern hairy-nosed wombat, based on the historical range of the species, as well as the fur, or "whiskers", on its nose. In some older literature, it is referred to as the Queensland hairy-nosed wombat. The northern hairy-nosed wombat shares its genus with one other extant species, the southern hairy-nosed wombat, while the common wombat is in the genus Vombatus. Both Lasiorhinus species differ morphologically from the common wombat by their silkier fur, broader hairy noses, and longer ears. Description In general, all species of wombat are heavily built, with large heads and short, powerful legs. They have strong claws to dig their burrows, where they live much of the time. It usually takes about a day for an individual to dig a burrow. Northern hairy-nosed wombats have bodies covered in soft, grey fur; the fur on their noses sets them apart from the common wombat. They have longer, more pointed ears and a much broader muzzle than the other two species. Individuals can be 35 cm high, up to 1 m long and weigh up to 40 kg. The species exhibits sexual dimorphism, with females being somewhat larger than males due to the presence of an extra layer of fat. They are slightly larger than the common wombat and able to breed somewhat faster (giving birth to two young every three years on average). The northern hairy-nosed wombat's nose is very important to its survival because it has very poor eyesight and must detect its food in the dark through smell. Examination of the wombat's digestive tract shows that the elastic properties of the ends of their large intestines are capable of turning liquid excrement into cubical scat. Distribution and habitat Northern hairy-nosed wombats require deep sandy soils in which to dig their burrows, and a year-round supply of grass, which is their primary food. These areas usually occur in open eucalypt woodlands. At Epping Forest National Park, northern hairy-nosed wombats construct their burrows in deep, sandy soils on levee banks which were deposited by a creek that no longer flows through the area. They forage in areas of heavy clay soils adjacent to the sandy soils, but do not dig burrows in these areas, which become waterlogged in the wet seasons. 
In the park, burrows are often associated with native bauhinia trees (Lysiphyllum hookeri). This tree has a spreading growth form, and its roots probably provide stability for the extensive burrows dug by the wombats. By the 1980s the range of the northern hairy-nosed wombat had become restricted to a single site in Epping Forest National Park in east-central Queensland, north-west of Clermont. Insurance populations have since been established at two locations near St George, at the Richard Underwood Nature Refuge in 2009, and in the Powrunna State Forest in 2024, with plans for a fourth site by 2041. Behaviour The northern hairy-nosed wombat is nocturnal, living underground in networks of burrows. They avoid coming above ground during harsh weather, as their burrows maintain a constant humidity and temperature. They have been known to share burrows with up to 10 individuals, equally divided by sex. Young are usually born during the wet season, between November and April. When rain is abundant, 50–80% of the females in the population will breed, giving birth to one offspring at a time. Juveniles stay in their mothers' pouches for 8 to 9 months, and are weaned at 12 months of age. The fat reserves and low metabolic rate of this species permit northern hairy-nosed wombats to go without food for several days when food is scarce. Even when they do feed every day, it is only for 6 hours a day in the winter and 2 hours in the summer, significantly less than a similar-sized kangaroo, which feeds for at least 18 hours a day. Their diet consists of native grasses: black speargrass (Heteropogon contortus), bottle washer grasses (Enneapogon spp.), golden beard grass (Chrysopogon fallax), and three-awned grass (Aristida spp.), as well as various types of roots. The teeth continue to grow beyond the juvenile period, and are worn down by the abrasive grasses they eat. Its habitat has become infested with African buffel grass, a grass species introduced for cattle grazing. The buffel grass outcompetes the more nutritious native grasses on which the wombat prefers to feed by limiting their quantity, forcing the wombat to travel further to find the native grasses it prefers, and leading to a reduction in their biomass. Conservation Status The conservation status of the northern hairy-nosed wombat is as follows: Critically Endangered, per the IUCN (last assessed 15 June 2015); Critically Endangered, under the Australian Environment Protection and Biodiversity Conservation Act 1999 (EPBC Act); and Critically Endangered, under the Nature Conservation Act 1992 (Qld). On 15 February 2018, the federal Department of the Environment and Energy (DoEE) upgraded the conservation status from Endangered to Critically Endangered under the EPBC Act to better align with the International Union for Conservation of Nature's (IUCN) Red List of Threatened Species. Due to its status under the EPBC Act, it is listed on the Species Profile and Threats Database (SPRAT). Threats Originally there were two main groups of hairy-nosed wombats (the other species being the southern hairy-nosed wombat, Lasiorhinus latifrons) that were separated by Spencer Gulf in South Australia. Both species experienced a population decline between 1870 and 1920, with the main influences being culling by agriculturalists, competition for food with introduced and feral species, and predation. Threats to the northern hairy-nosed wombat include small population size, predation, competition for food, disease, floods, droughts, wildfires, and habitat loss. 
Its small, highly localised population makes the species especially vulnerable to natural disasters. Wild dogs are the wombat's primary predator, but the spread of invasive herbivores such as the European rabbit and the actions of landowners have also contributed to their decline. There have been two reports of male northern hairy-nosed wombats contracting a fungal infection caused by Emmonsia parva, a soil saprophytic fungus. It is likely that the northern hairy-nosed wombats are inhaling the infection from the soil. Counter-measures Since around 1993, the Queensland Government's Department of Environment and Science (DES) and its predecessors have led a recovery program for the species, supported by the Glencore mining company and The Wombat Foundation. To combat the vulnerability of this species, a number of conservation projects have been put into action in the 21st century. One example was the construction of a two-metre-high, predator-proof fence around the park in 2000. A second, insurance colony of this species of wombat was established at Richard Underwood Nature Refuge (RUNR) at Yarran Downs, near St George in southern Queensland, in 2008. The reserve is surrounded by a predator-proof fence. In 2021 the Australian Wildlife Conservancy (AWC), a private conservation organisation, formed a partnership with DES to collaborate on research and management of the animals in the sanctuary. In October 2023 AWC signed an agreement with DES to care for the wombats in the Richard Underwood Nature Refuge, while DES would focus on the Epping Forest population. In 2006, researchers performed a study to analyse the demography of the northern hairy-nosed wombat, by using double-sided tape in the burrows to collect hair from the wombats. Through DNA analysis, they found that the ratio of female to male wombats was 1:2.25 in the population of approximately 113 wombats. These findings allowed researchers to understand the demographics of this species, and opened up further research to better understand why there is such a significant imbalance between males and females in the wild. Within Epping Forest National Park, increased attention and funds have been given to wombat research and population monitoring, fire management, maintenance of the predator-proof fence, general management, control of predators and competitors, and elimination of invasive plant species. In addition, the species recovery plan of 2004 to 2008 included communication and community involvement in saving the species, and worked to increase the population in the wild and establish other populations within the wombat's historical range. There is also a volunteer caretaker program that allows volunteers to contribute to monitoring the population and keeping the predator fence in good repair. In addition, DNA fingerprint identification of wombat hairs allows research to be conducted without an invasive trapping or radio-tracking program. Studies have also been conducted to assess diet and nutrition. Population increases Due to the combined efforts of these forces, the northern hairy-nosed wombat population has been slowly making a comeback. After having been considered extinct, a population of about 30 was discovered in the Epping Forest in the 1930s, and only 35 individuals were counted in the early 1980s. In 2003, the total population consisted of 113 individuals, including only around 30 breeding females. 
In the census taken in 2013, the estimated population was 196 individuals, with an additional 9 individuals at the RUNR at Yarran Downs. In 2016 the population was estimated to be 250 individuals. In May 2021, researchers found that the population had increased to over 300 individuals. In June 2024, the total population was reported as being over 400 individuals, including 18 at the RUNR and 15 newly translocated to the Powrunna State Forest. References Critically endangered fauna of Australia Vombatiforms Mammals of Queensland Mammals of New South Wales Marsupials of Australia EDGE species Nature Conservation Act endangered biota Taxa named by Richard Owen Mammals described in 1873
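The population figures quoted above imply the following compound annual growth rates. The year/count pairs come from the text; "over 300" (2021) and "over 400" (2024) are rounded down, so the later rates are lower bounds, and census and estimation methods differ between years, so this is only indicative arithmetic.

```python
# Compound annual growth implied by the reported population figures.
counts = {2003: 113, 2013: 196, 2016: 250, 2021: 300, 2024: 400}

years = sorted(counts)
for y0, y1 in zip(years, years[1:]):
    rate = (counts[y1] / counts[y0]) ** (1 / (y1 - y0)) - 1
    print(f"{y0}-{y1}: {rate:.1%} per year")
```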
Northern hairy-nosed wombat
[ "Biology" ]
2,271
[ "EDGE species", "Biodiversity" ]
2,218,925
https://en.wikipedia.org/wiki/Stellar%20engine
Stellar engines are a class of hypothetical megastructures which use the resources of a star to generate available work (also called exergy). For instance, they can use the energy of the star to produce mechanical, electrical or chemical work, or they can use the impulse of the light emitted by the star to produce thrust, able to control the motion of a star system. The concept was introduced by Bădescu and Cathcart. The variants which produce thrust may accelerate a star and anything orbiting it in a given direction. The creation of such a system would make its builders a type-II civilization on the Kardashev scale. Classes Three classes of stellar engines have been defined. Class A (Shkadov thruster) One of the simplest examples of a stellar engine is the Shkadov thruster (named after Dr. Leonid Shkadov, who first proposed it), or a class-A stellar engine. Such an engine is a stellar propulsion system, consisting of an enormous mirror/light sail (actually a massive type of solar statite large enough to classify as a megastructure) which would balance gravitational attraction towards and radiation pressure away from the star. Since the radiation pressure of the star would now be asymmetrical, i.e. more radiation being emitted in one direction as compared to another, the "excess" radiation pressure acts as net thrust, accelerating the star in the direction of the hovering statite. Such thrust and acceleration would be very slight, but such a system could be stable for millennia. Any planetary system attached to the star would be "dragged" along by its parent star. For a star such as the Sun, with luminosity 3.85×10²⁶ W and mass 1.99×10³⁰ kg, the total thrust produced by reflecting half of the solar output would be 1.28×10¹⁸ N. After a period of one million years this would yield an imparted speed of 20 m/s, with a displacement from the original position of 0.03 light-years. After one billion years, the speed would be 20 km/s and the displacement 34,000 light-years, a little over a third of the estimated width of the Milky Way galaxy. Class B A class-B stellar engine consists of two concentric spheres around a star. The inner sphere (which may be identified with a Dyson shell) receives energy from the star and becomes hotter than the outer sphere. The difference in temperature between the two spheres drives thermal engines able to provide mechanical work. Unlike the Shkadov thruster, a class-B stellar engine is not propulsive. Class C A class-C stellar engine, such as the Badescu–Cathcart engine, combines the two other classes, employing both the propulsive aspects of the Shkadov thruster and the energy-generating aspects of a class-B engine. A higher-temperature Dyson shell partially covered by a mirror, combined with an outer sphere at a lower temperature, would be one incarnation of such a system. The non-spherical mirror ensures conversion of light impulse into effective thrust (like a class-A stellar engine) while the difference in temperature may be used to convert star energy into mechanical work (like a class-B stellar engine). Note that such a system suffers from the same stabilization problems as a non-propulsive shell, as would a Dyson swarm with a large statite mirror. A Dyson bubble variant is already a Shkadov thruster (provided that the arrangement of statite components is asymmetrical); adding energy extraction capability to the components seems an almost trivial extension. Caplan thruster Astronomer Matthew E. 
Caplan of Illinois State University has proposed a type of stellar engine that uses concentrated stellar energy (repurposing the mirror statites from class A) to excite certain regions of the outer surface of the star and create beams of solar wind for collection by a multi-Bussard ramjet assembly. The ramjets would produce directed plasma to stabilize its orbit and jets of oxygen-14 to push the star. Using rudimentary calculations that assume maximum efficiency, Caplan estimates that the Bussard engine would use 10¹² kg of solar material per second to produce a maximum acceleration of 10⁻⁹ m/s², yielding a velocity of 200 km/s after 5 million years and a distance of 10 parsecs over 1 million years. While theoretically the Bussard engine would work for 100 million years, given the mass loss rate of the Sun, Caplan deems 10 million years to be sufficient for stellar collision avoidance. His proposal was commissioned by the German educational YouTube channel Kurzgesagt. Svoronos Star Tug Alexander A. Svoronos of Yale University proposed the 'Star Tug', a concept that combines aspects of the Shkadov thruster and Caplan engine to produce an even more powerful and efficient mechanism for controlling a star's movement. Essentially, it replaces the giant parabolic mirror of the Shkadov thruster with an engine powered by mass lifted from the star, similar to the Caplan engine. However, instead of pushing a star from behind with a beam of thrust, as the Caplan engine does, it pulls the star from the front via its gravitational link to it, as the Shkadov thruster does. As a result, it only needs to produce a single beam of thrust (toward but narrowly missing the star), whereas the Caplan engine must produce two beams of thrust (one to push the star from behind and negate the force of gravity between the engine and the star, and one to propel the system as a whole forward). The result is that the Svoronos Star Tug is a much more efficient engine capable of significantly higher accelerations and maximum velocities. The Svoronos Star Tug can, in principle (assuming perfect efficiency), accelerate the Sun to ~27% of the speed of light (after burning enough of the Sun's mass to transition it to a brown dwarf). See also Dyson spheres References Stellar engine (article at the website of the Encyclopedia of Astrobiology, Astronomy and Spaceflight) Solar Travel (Astronomy Today, Exploration Section) Megastructures Hypothetical technology Interstellar travel Hypothetical spacecraft Engine
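The Shkadov-thruster figures quoted above can be reproduced with back-of-the-envelope kinematics. The sketch below assumes the net thrust from reflecting half the stellar output back past the star is F = L/c (momentum transfer 2×(L/2)/c) and applies constant-acceleration formulas; it is a consistency check on the quoted numbers, not part of either proposal.

```python
# Back-of-the-envelope check of the Shkadov-thruster figures quoted above.
C = 2.998e8      # speed of light, m/s
YEAR = 3.156e7   # seconds per year
LY = 9.461e15    # metres per light-year
L_SUN = 3.85e26  # solar luminosity, W
M_SUN = 1.99e30  # solar mass, kg

thrust = L_SUN / C      # ~1.28e18 N
accel = thrust / M_SUN  # ~6.4e-13 m/s^2

for years in (1e6, 1e9):
    t = years * YEAR
    v = accel * t                # v = a*t
    d = 0.5 * accel * t**2 / LY  # d = a*t^2/2, in light-years
    print(f"{years:.0e} yr: v = {v:.3g} m/s, d = {d:.3g} ly")
# ~20 m/s and ~0.03 ly after 1e6 yr; ~2e4 m/s (20 km/s) and ~34,000 ly after 1e9 yr.
```

The same constant-acceleration arithmetic with a = 10⁻⁹ m/s² reproduces the order of magnitude of the Caplan figures (roughly 10⁵ m/s after 5 million years and tens of parsecs per million years).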
Stellar engine
[ "Astronomy", "Technology" ]
1,286
[ "Exploratory engineering", "Astronomical hypotheses", "Hypothetical spacecraft", "Interstellar travel", "Megastructures" ]
2,218,957
https://en.wikipedia.org/wiki/Routing%20and%20wavelength%20assignment
The routing and wavelength assignment (RWA) problem is an optical networking problem with the goal of maximizing the number of optical connections. Definition The general objective of the RWA problem is to maximize the number of established connections. Each connection request must be given a route and wavelength. The wavelength must be consistent for the entire path, unless the usage of wavelength converters is assumed. Two connection requests can share the same optical link, provided a different wavelength is used. The RWA problem can be formally defined in an integer linear program (ILP); the formulation described here follows the literature. The objective is to maximize the total number of established connections, subject to wavelength-continuity and link-capacity constraints. The formulation involves the number of source-destination pairs, the number of connections established for each source-destination pair, the number of links, the number of wavelengths, the set of paths used to route connections, a matrix which shows which source-destination pairs are active, a matrix which shows which links are active, and a route and wavelength assignment matrix. Note that the above formulation assumes that the traffic demands are known a priori. This type of problem is known as Static Lightpath Establishment (SLE). The above formulation also does not consider signal quality. The SLE RWA problem has been shown to be NP-complete; the proof involves a reduction to the graph colorability problem. In other words, solving the SLE RWA problem is as complex as finding the chromatic number of a general graph. Given that dynamic RWA is more complex than static RWA, it must be the case that dynamic RWA is also NP-complete. Another NP-completeness proof involves a reduction to the multi-commodity flow problem. The RWA problem is further complicated by the need to consider signal quality. Many of the optical impairments are nonlinear, so a standard shortest path algorithm can't be used to solve them optimally even if we know the exact state of the network. This is usually not a safe assumption, so solutions need to be efficient using only limited network information. Methodology Given the complexity of RWA, there are two general methodologies for solving the problem: The first method is solving the routing portion first, and then assigning a wavelength second. Three types of route selection are Fixed Path Routing, Fixed Alternate Routing, and Adaptive Routing. The second approach is to consider both route selection and wavelength assignment jointly. First routing, then wavelength assignment Routing algorithms Fixed path routing Fixed path routing is the simplest approach to finding a lightpath. The same fixed route for a given source and destination pair is always used. Typically this path is computed ahead of time using a shortest path algorithm, such as Dijkstra's algorithm. While this approach is very simple, the performance is usually not sufficient. If resources along the fixed path are in use, future connection requests will be blocked even though other paths may exist. The SP-1 (Shortest Path, 1 Probe) algorithm is an example of a Fixed Path Routing solution. This algorithm calculates the shortest path using the number of optical routers as the cost function. A single probe is used to establish the connection using the shortest path. The running time is the cost of Dijkstra's algorithm, O(E + N log N) with a Fibonacci-heap implementation, where E is the number of edges and N is the number of routers; it is just a constant if a predetermined path is used. This definition of SP-1 uses the hop count as the cost function, as sketched below. 
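The following is a minimal Python sketch of the hop-count variant of SP-1 using Dijkstra's algorithm. The adjacency-dict graph format and function name are illustrative assumptions, not taken from any RWA paper.

```python
# Sketch of SP-1-style fixed path routing: Dijkstra with hop count as cost.
import heapq

def shortest_path(graph: dict, src, dst):
    """graph: node -> iterable of neighbor nodes; returns a hop-count path."""
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v in graph.get(u, ()):
            nd = d + 1  # hop count; another metric (e.g. EDFA count) could go here
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None  # destination unreachable
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

net = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(shortest_path(net, "A", "D"))  # a two-hop path, e.g. ['A', 'B', 'D']
```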
The SP-1 algorithm could be extended to use different cost functions, such as the number of EDFAs. Fixed alternate routing Fixed alternate routing is an extension of fixed path routing. Instead of having just one fixed route for a given source and destination pair, several routes are stored. The probes can be sent in a serial or parallel fashion. For each connection request, the source node attempts to find a connection on each of the paths. If all of the paths fail, then the connection is blocked. If multiple paths are available, only one of them is utilized. The SP-k (Shortest Path, k Probes, k > 1) algorithm is an example of Fixed Alternate Routing. This algorithm calculates the k shortest paths using the number of optical routers as the cost function. The running time using Yen's algorithm is O(kN(E + N log N)), where E is the number of edges, N is the number of routers, and k is the number of paths. The running time is a constant factor if the paths are precomputed. Adaptive routing The major issue with both fixed path routing and fixed alternate routing is that neither algorithm takes into account the current state of the network. If the predetermined paths are not available, the connection request will become blocked even though other paths may exist. Fixed Path Routing and Fixed Alternate Routing are also not quality aware. For these reasons, most of the research in RWA is currently taking place in adaptive algorithms. Five examples of Adaptive Routing are LORA, PABR, IA-BF, IA-FF, and AQoS. Adaptive algorithms fall into two categories: traditional and physically aware. Traditional adaptive algorithms do not consider signal quality; physically aware adaptive algorithms do. Traditional adaptive RWA The main idea behind the lexicographical routing algorithm (LORA) is to route connection requests away from congested areas of the network, increasing the probability that connection requests will be accepted. This is accomplished by setting the cost of each link to β^w, where β is a parameter that can be dynamically adjusted according to the traffic load and w is the number of wavelengths in use on the link. A standard shortest path algorithm can then be used to find the path. This requires each optical switch to broadcast recent usage information periodically. Note that LORA does not consider any physical impairments. When β is equal to one, the LORA algorithm is identical to the SP algorithm. Increasing the value of β will increase the bias towards less used routes. The optimal value of β can be calculated using the well-known hill climbing algorithm. The optimal values of β were between 1.1 and 1.2 in the proposal. Physically aware adaptive RWA The physically aware backward reservation algorithm (PABR) is an extension of LORA. PABR is able to improve performance in two ways: considering physical impairments and improved wavelength selection. As PABR is searching for an optical path, paths with an unacceptable signal quality due to linear impairments are pruned. In other words, PABR is LORA with an additional quality constraint. Note that PABR can only consider linear impairments; the nonlinear impairments cannot be estimated in a distributed environment because they require global traffic knowledge. PABR also considers signal quality when making the wavelength selection. It accomplishes this by removing from consideration all wavelengths with an unacceptable signal quality level. 
The wavelength-selection approach just described is called Quality First Fit; it is discussed in the following section. Both LORA and PABR can be implemented with either single-probing or multi-probing. With the maximum number of probes denoted k, the variants are written LORA-k and PABR-k. With single-probing, only one path is selected by the route selection. With multi-probing, multiple paths are attempted in parallel, increasing the probability of connection success. Other routing approaches IA-BF - The Impairment Aware Best Fit (IA-BF) algorithm is a distributed approach that depends upon a large amount of communication to use global information to always pick the shortest available path and wavelength. This is accomplished through the use of serial multi-probing. The shortest available path and wavelength are attempted first, and upon failure, the second shortest available path and wavelength are attempted. This process continues until a successful path and wavelength have been found or all wavelengths have been attempted. The multi-probing approach allows IA-BF to outperform both PABR-1 and LORA-1; however, as the number of probes increases, the performance of the algorithms is similar. IA-FF - Impairment Aware First Fit (IA-FF) is a simple extension of IA-BF. Instead of picking the wavelengths in terms of the minimum cost, the wavelengths are selected in order according to their index. IA-BF tends to outperform IA-FF under most scenarios. AQoS - Adaptive Quality of Service (AQoS) is unique in a couple of ways. First, each node maintains two counters, whose purpose is to determine which issue is the bigger factor in blocking: path and wavelength availability, or quality requirements. The algorithm chooses routes differently based upon the larger issue. Another distinction is that AQoS uses the Q-factor as the link cost. The cost of a link is calculated from the number of lightpaths on the link and the quality-factor measurements of the lightpath at the source and destination nodes of the link. The repeated quality-factor estimations are computationally very expensive. This algorithm is a single-probing approach; the multi-probing approach, which the paper names ALT-AQoS (alternate AQoS), is a simple extension of the same basic idea. Wavelength assignment Two of the most common methods for wavelength assignment are First Fit and Random Fit. First Fit chooses the available wavelength with the lowest index. Random Fit determines which wavelengths are available and then chooses randomly amongst them. The complexity of both algorithms is O(W), where W is the number of wavelengths. First Fit outperforms Random Fit. Quality First Fit and Quality Random Fit extend First Fit and Random Fit to consider signal quality, eliminating from consideration wavelengths which have an unacceptable signal quality. The complexity of these algorithms is higher though, as up to W calls to estimate the Q-factor are required. There are several other wavelength assignment algorithms: Least Used, Most Used, Min Product, Least Loaded, Max Sum, and Relative Capacity Loss. Most Used significantly outperforms Least Used, and slightly outperforms First Fit. Min Product, Least Loaded, Max Sum, and Relative Capacity Loss all try to choose a wavelength that minimizes the probability that future requests will be blocked. 
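The First Fit and Quality First Fit policies described above can be sketched in a few lines. The set-of-indices model of free wavelengths and the quality callback (standing in for a Q-factor estimator) are illustrative assumptions.

```python
# Minimal sketches of the First Fit and Quality First Fit policies above.

def first_fit(available: set[int]) -> int | None:
    """Pick the available wavelength with the lowest index."""
    return min(available) if available else None

def quality_first_fit(available: set[int], q_ok) -> int | None:
    """Try wavelengths in index order, skipping any with unacceptable quality.
    q_ok is a callback, e.g. a Q-factor estimator returning True/False."""
    for w in sorted(available):
        if q_ok(w):
            return w
    return None

free = {2, 5, 0, 7}
print(first_fit(free))                            # 0
print(quality_first_fit(free, lambda w: w != 0))  # 2 (wavelength 0 fails quality)
```

Note the cost asymmetry mentioned in the text: first_fit needs only a scan of the free set, while quality_first_fit may invoke the (expensive) quality estimate once per wavelength in the worst case.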
A significant disadvantage of these blocking-aware algorithms (Min Product, Least Loaded, Max Sum, and Relative Capacity Loss) is that they require significant communication overhead, making them impractical to implement unless the network has a centralized control structure. Joint routing and wavelength assignment An alternate approach to selecting a route and wavelength separately is to consider them jointly. These approaches tend to be more theoretical and less practical. As this is an NP-complete problem, an exact solution is likely not possible. The approximation techniques usually aren't very useful either, as they require centralized control and, usually, predefined traffic demands. Two joint approaches are ILP formulation and Island Hopping. The ILP formulation listed above can be solved using a traditional ILP solver. This is typically done by temporarily relaxing the integer constraints, solving the problem optimally, and converting the real solution to an integer solution. Additional constraints can be added and the process repeated indefinitely using a branch and bound approach. One line of work reports an algorithm which can be used to solve a constrained RWA problem efficiently and optimally; the authors study a constrained routing and spectrum assignment (RSA) problem, which can be reduced to a constrained RWA problem by requesting one slice, where the constraint limits the path length. Later work reports a generalized Dijkstra algorithm, which can be used to efficiently and optimally solve the RWA, RSA, and routing, modulation, and spectrum assignment (RMSA) problems, without the limit on the path length. References Fiber-optic communications Telecommunication theory NP-complete problems
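To make the joint-RWA objective concrete, here is a brute-force sketch that maximizes the number of accepted demands on a toy instance. Exhaustive search is exponential and only feasible for tiny inputs; it illustrates the objective described above, not a practical algorithm, and the data format is an assumption.

```python
# Brute-force illustration of the joint RWA objective: maximize accepted
# demands, where each accepted demand needs one candidate path and one
# wavelength that is free on every link of that path.
from itertools import product

def solve_rwa(demands, paths, num_wavelengths):
    """demands: list of (src, dst); paths: dict (src, dst) -> list of link tuples."""
    best = 0
    # Each demand picks (path index, wavelength) or is rejected (None).
    options = [
        [None] + [(p, w) for p in range(len(paths[d]))
                  for w in range(num_wavelengths)]
        for d in demands
    ]
    for choice in product(*options):
        used = set()  # (link, wavelength) pairs already taken
        accepted, ok = 0, True
        for d, pick in zip(demands, choice):
            if pick is None:
                continue
            p, w = pick
            links = {(link, w) for link in paths[d][p]}
            if links & used:
                ok = False  # wavelength clash on a shared link
                break
            used |= links
            accepted += 1
        if ok:
            best = max(best, accepted)
    return best

paths = {("A", "C"): [("AB", "BC")], ("B", "C"): [("BC",)]}
demands = [("A", "C"), ("B", "C")]
print(solve_rwa(demands, paths, num_wavelengths=2))  # 2: BC carries two wavelengths
print(solve_rwa(demands, paths, num_wavelengths=1))  # 1: both demands contend for BC
```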
Routing and wavelength assignment
[ "Mathematics" ]
2,368
[ "NP-complete problems", "Mathematical problems", "Computational problems" ]
2,219,011
https://en.wikipedia.org/wiki/Proofs%20involving%20the%20addition%20of%20natural%20numbers
This article contains mathematical proofs for some properties of addition of the natural numbers: the additive identity, commutativity, and associativity. These proofs are used in the article Addition of natural numbers. Definitions This article will use the Peano axioms for the definition of natural numbers. With these axioms, addition is defined from the constant 0 and the successor function S(a) by the two rules a + 0 = a [A1] and a + S(b) = S(a + b) [A2]. For the proof of commutativity, it is useful to give the name "1" to the successor of 0; that is, 1 = S(0). For every natural number a, one has a + 1 = a + S(0) = S(a + 0) = S(a). Proof of associativity We prove associativity by first fixing natural numbers a and b and applying induction on the natural number c. For the base case c = 0, (a + b) + 0 = a + b = a + (b + 0). Each equation follows by definition [A1]; the first with a + b, the second with b. Now, for the induction. We assume the induction hypothesis, namely we assume that for some natural number c, (a + b) + c = a + (b + c). Then it follows that (a + b) + S(c) = S((a + b) + c) = S(a + (b + c)) = a + S(b + c) = a + (b + S(c)). In other words, the induction hypothesis holds for S(c). Therefore, the induction on c is complete. Proof of identity element Definition [A1] states directly that 0 is a right identity. We prove that 0 is a left identity by induction on the natural number a. For the base case a = 0, 0 + 0 = 0 by definition [A1]. Now we assume the induction hypothesis, that 0 + a = a. Then 0 + S(a) = S(0 + a) = S(a). This completes the induction on a. Proof of commutativity We prove commutativity (a + b = b + a) by applying induction on the natural number b. First we prove the base cases b = 0 and b = S(0) = 1 (i.e. we prove that 0 and 1 commute with everything). The base case b = 0 follows immediately from the identity element property (0 is an additive identity), which has been proved above: a + 0 = a = 0 + a. Next we will prove the base case b = 1, that 1 commutes with everything, i.e. for all natural numbers a, we have a + 1 = 1 + a. We will prove this by induction on a (an induction proof within an induction proof). We have proved that 0 commutes with everything, so in particular, 0 commutes with 1: for a = 0, we have 0 + 1 = 1 + 0. Now, suppose a + 1 = 1 + a. Then S(a) + 1 = S(a) + S(0) = S(S(a) + 0) = S(S(a)) = S(a + 1) = S(1 + a) = 1 + S(a). This completes the induction on a, and so we have proved the base case b = 1. Now, suppose that for all natural numbers a, we have a + b = b + a. We must show that for all natural numbers a, we have a + S(b) = S(b) + a. We have a + S(b) = S(a + b) = S(b + a) = b + S(a) = b + (a + 1) = b + (1 + a) = (b + 1) + a = S(b) + a. This completes the induction on b. See also Binary operation Proof Ring References Edmund Landau, Foundations of Analysis, Chelsea Pub Co. Article proofs Abstract algebra Elementary algebra Operations on numbers
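The definitions and the first two proofs above can be checked mechanically. The following is a minimal sketch in Lean 4, using a local Peano-style datatype N rather than Lean's built-in Nat; the commutativity proof follows the same inner-induction pattern as in the text and is omitted for brevity.

```lean
-- A minimal Lean 4 sketch of [A1], [A2] and two of the proofs above.
inductive N where
  | zero : N
  | succ : N → N

open N

def add : N → N → N
  | a, zero   => a                 -- [A1]
  | a, succ b => succ (add a b)    -- [A2]

-- Associativity, by induction on c, mirroring the chain of equalities above.
theorem add_assoc (a b c : N) : add (add a b) c = add a (add b c) := by
  induction c with
  | zero => rfl
  | succ c ih =>
    show succ (add (add a b) c) = succ (add a (add b c))
    rw [ih]

-- 0 is a left identity, by induction on a.
theorem zero_add (a : N) : add zero a = a := by
  induction a with
  | zero => rfl
  | succ a ih =>
    show succ (add zero a) = succ a
    rw [ih]
```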
Proofs involving the addition of natural numbers
[ "Mathematics" ]
659
[ "Elementary algebra", "Elementary mathematics", "Article proofs", "Arithmetic", "Abstract algebra", "Operations on numbers", "Algebra" ]
2,219,115
https://en.wikipedia.org/wiki/Deficits%20in%20attention%2C%20motor%20control%20and%20perception
DAMP (deficits in attention, motor control, and perception) is a psychiatric concept conceived by Christopher Gillberg, defined by the presence of five properties: problems of attention, gross and fine motor skills, perceptual deficits, and speech-language impairments. While routinely diagnosed in Scandinavian countries, the diagnosis has been rejected in the rest of the world. Minor cases of DAMP are roughly defined as a combination of developmental coordination disorder (DCD) and a pervading attention deficit. DAMP is similar to minimal brain dysfunction (MBD), a concept that was formulated in the 1960s, and which has since been recognised as attention deficit hyperactivity disorder. Both concepts are related to certain psychiatric conditions, such as hyperactivity. The concept of MBD was strongly criticized by Sir Michael Rutter [Gillberg, 2003, p. 904] and several other researchers, and this led to its abandonment in the 1980s. At the same time, research showed that something similar was needed. One alternative concept was attention-deficit hyperactivity disorder (ADHD). Gillberg proposed another alternative: DAMP. Gillberg's concept was formulated in the early 1980s, and the term itself was introduced in a paper that Gillberg published in 1986 (see Gillberg [1986]). DAMP is essentially MBD without the etiological assumptions. The concept of DAMP met with considerable criticism. For example, Sir Michael Rutter stated that the concept of DAMP (unlike ADHD) was "muddled" and "lacks both internal coherence and external discriminative validity ... it has no demonstrated treatment or prognostic implications"; he concluded that the concept should be abandoned. Another example is the criticism of Per-Anders Rydelius, Professor of Child Psychiatry at the Karolinska Institute, who argued that the definition of DAMP was too vague: "the borderline between DAMP and conduct disorders [is] unclear ... the borderline between DAMP and ADHD [is] unclear"; he concluded that "the concept is in need of revision". And in 2000, Eva Kärfve, a sociologist at the University of Lund, published a book which argued that Gillberg's work on DAMP should be rejected. Perhaps the strongest criticism of DAMP is that Gillberg and his co-workers in Gothenburg are almost the only people doing research on DAMP. Indeed, in a review of DAMP published by Gillberg in 2003, it was noted that there were only "about 50" research papers that had been published on DAMP and that the "vast majority of these have either originated in the author's own clinical and research setting or have been supervised and/or co-authored by him" [Gillberg, 2003, p. 904]. This is in contrast to ADHD, on which "several thousand papers" had been published [Gillberg, 2003, p. 905]. As far as clinical practice goes, DAMP has been primarily accepted only in Gillberg's native Sweden and in Denmark [Gillberg, 2003, p. 904], and even in those countries, acceptance is mixed. In 2003, Gillberg revised his definition of DAMP. The new definition is as follows: ADHD as defined in DSM-IV; developmental coordination disorder (DCD) as defined in DSM-IV; condition not better accounted for by cerebral palsy; and IQ should be higher than about 50 [Gillberg, 2003: box 1]. (In the WHO system, this would be a hyperkinetic disorder combined with a developmental disorder of motor function.) About half of children with ADHD are believed to also have DCD [Gillberg, 2003; Martin et al., 2006]. Strong criticism of DAMP, however, has continued. 
In particular, it has been observed that "the validity and utility of DAMP will remain unclear until stronger evidence of the special status of the overlap between its constituent disorders is provided". In 2005, there was an hour-long television program broadcast on Swedish TV, questioning why Sweden, almost alone in the world, would accept the DAMP construct. The program featured critical commentary from Sir Michael Rutter. It also considered some of the controversies over Gillberg's Gothenburg Study of Children with DAMP. The concept of DAMP (deficits in attention, motor control, and perception) has been in clinical use in Scandinavia for about 20 years. DAMP is diagnosed on the basis of concomitant attention deficit/hyperactivity disorder and developmental coordination disorder in children who do not have a severe learning disability or cerebral palsy. In clinically severe form, it affects about 1.5% of the general population of 7-year-old children; 3-6% are affected by more moderate variants. Boys are overrepresented; girls are currently probably underdiagnosed. There are many comorbid problems/overlapping conditions, including conduct disorder, depression/anxiety, and academic failure. There is a strong link with autism spectrum disorders in severe DAMP. Familial factors and pre- and perinatal risk factors account for much of the variance. Psychosocial risk factors appear to increase the risk of marked psychiatric abnormality in DAMP. In one study, the outcome in early adulthood was psychosocially poor in almost 60% of unmedicated cases. There are effective interventions available for many of the problems encountered in DAMP. Notes References Andersson, Emelie (2004), Debatten om DAMP: En kontroversstudie (University of Stockholm). [In Swedish] Bagge, Peter (5 July 2005), "Forskarstrid: DAMP ifrågasätts från fler än ett håll", Sveriges Television. (Summary of televised show, in Swedish.) Gallup, Raymond; Miller, Clifford G.; Elinder, Leif R.; Brante, Thomas; Kärfve, Eva; Josephson, Staffan (July 2005), "Rapid Responses", British Medical Journal. Kärfve, Eva (2000), Hjärnspöken: DAMP och hotet mot folkhälsan, Stockholm: Brutus Östlings Bokförlag. [In Swedish.] Rasmussen N.H. (17 November 2003), "Deficits in attention, motor control, and perception: a brief review", Archives of Disease in Childhood eLetters. 1986 neologisms 1986 quotations Attention disorders Motor control Perception da:DAMP
Deficits in attention, motor control and perception
[ "Biology" ]
1,329
[ "Behavior", "Motor control" ]
2,219,122
https://en.wikipedia.org/wiki/Geelong%20Keys
The Geelong Keys were a set of five keys discovered in 1847 at Limeburners Point, on the southern shore of Corio Bay, near Geelong, Victoria, Australia. Charles La Trobe, Superintendent of the Port Phillip District and a keen amateur geologist, was examining marine deposits revealed by excavations associated with lime production in the area. A worker showed him two of a set of five keys he claimed to have found the day before, in a layer of shells down an excavation for a lime kiln at some distance from the shoreline. La Trobe was fascinated by the find and believed, from their appearance, that the keys were between 100 and 150 years old (~1700-1750 AD). Since the 1802 expedition of Matthew Flinders is the earliest proven European presence in the vicinity, writer Kenneth McIntyre has suggested the keys may have originated with some earlier European explorers of the region, possibly the Portuguese. McIntyre has connected the discovery of the Geelong Keys with the presence of the so-called Mahogany Ship, further west on Victoria's Shipwreck Coast, claiming that it could be another possible relic of early Portuguese exploration. The editor of the Geelong Advertiser at the time, James Harrison, noted that metal objects were often embedded in new diggings to detect the leaching of payable metal. For instance, a copper oxide coating on the keys would suggest the presence of copper. Ronald Gunn, a friend of La Trobe, told the Royal Society of Victoria in September 1849 that he had gone to Geelong to investigate the discovery. On questioning the limeburner he found that the keys had not, in fact, been dug out of the shell layer but were found with shells at the bottom of the pit. It was only assumed that they had fallen from the upper layer. In practice they could have fallen from any layer, including the top of the excavation. The keys were the subject of two pamphlets published by the Royal Society of Victoria in the 1870s. The first of these suggested that the depth at which the keys allegedly lay indicated an age closer to 200–300 years. The second pamphlet repudiated this claim and was based on an interview with a limeburner, who said that the keys may have been dropped down a hole to that depth. Research in the 1980s by geologist Edmund Gill, and engineer and historian Peter Alsop, showed the age of the deposit in which the keys were supposedly found was between 2330 and 2800 years. According to Gill and Alsop, La Trobe's error is quite understandable, given that in 1847 most people thought the earth was only 6000 years old. The keys themselves, and all original drawings of them, have been lost. By the time they were shown to La Trobe, one had been lost by the limeburners' children and one had been given to a passer-by. La Trobe gave one to his friend Ronald Gunn and the other two went to the Melbourne Mechanics' Institute, from which they were lost after the institute went bankrupt. The keys are referred to in the children's books The Voyage of the Poppykettle and The Unchosen Land by Robert Ingpen. In the stories, the keys are used as ballast in a clay-pot ship sailed by migrant Peruvian gnomes. The stories were so popular in Ingpen's hometown, Geelong, that a fountain and an annual Poppykettle Festival celebrate the mythical landing of the "hairy Peruvians". References External links Letter from R.C. 
Gunn respecting the discovery of keys (Royal Society of Victoria, 1875) (item 81) The White Hat Guide to 7 Mysteries of Victoria: The Geelong Keys Mystery 1847 in Australia 1847 archaeological discoveries 19th century in Victoria (state) Australian folklore Geelong History of Australia (1788–1850) Lost objects Pre-1606 contact with Australia
Geelong Keys
[ "Physics" ]
770
[ "Lost objects", "Physical objects", "Matter" ]
2,219,538
https://en.wikipedia.org/wiki/Calcium%20signaling
Calcium signaling is the use of calcium ions (Ca2+) to communicate and drive intracellular processes, often as a step in signal transduction. Ca2+ is important for cellular signalling because once it enters the cytosol it exerts allosteric regulatory effects on many enzymes and proteins. Ca2+ can act in signal transduction resulting from activation of ion channels or as a second messenger caused by indirect signal transduction pathways such as G protein-coupled receptors. Concentration regulation The resting concentration of Ca2+ in the cytoplasm is normally maintained around 100 nM. This is 20,000- to 100,000-fold lower than the typical extracellular concentration. To maintain this low concentration, Ca2+ is actively pumped from the cytosol to the extracellular space, the endoplasmic reticulum (ER), and sometimes into the mitochondria. Certain proteins of the cytoplasm and organelles act as buffers by binding Ca2+. Signaling occurs when the cell is stimulated to release Ca2+ ions from intracellular stores, and/or when Ca2+ enters the cell through plasma membrane ion channels. Under certain conditions, the intracellular Ca2+ concentration may begin to oscillate at a specific frequency. Phospholipase C pathway Specific signals can trigger a sudden increase in cytoplasmic Ca2+ levels to 500–1,000 nM by opening channels in the ER or the plasma membrane. The most common signaling pathway that increases cytoplasmic calcium concentration is the phospholipase C (PLC) pathway. Many cell surface receptors, including G protein-coupled receptors and receptor tyrosine kinases, activate the PLC enzyme. PLC hydrolyses the membrane phospholipid PIP2 to form IP3 and diacylglycerol (DAG), two classic second messengers. DAG remains attached to the plasma membrane and recruits protein kinase C (PKC). IP3 diffuses to the ER and binds to the IP3 receptor. The IP3 receptor serves as a Ca2+ channel, and releases Ca2+ from the ER. The Ca2+ ions bind to PKC and other proteins and activate them. Depletion from the endoplasmic reticulum Depletion of Ca2+ from the ER will lead to Ca2+ entry from outside the cell by activation of "Store-Operated Channels" (SOCs). This inflow of Ca2+ is referred to as the Ca2+-release-activated Ca2+ current (ICRAC). The mechanisms through which ICRAC occurs are still under investigation, although several studies have linked Orai1 and STIM1 in a proposed model of store-operated calcium influx. Recent studies have also cited phospholipase A2 beta, nicotinic acid adenine dinucleotide phosphate (NAADP), and the protein STIM1 as possible mediators of ICRAC. As a second messenger Calcium is a ubiquitous second messenger with wide-ranging physiological roles. These include muscle contraction, neuronal transmission (as in an excitatory synapse), cellular motility (including the movement of flagella and cilia), fertilization, cell growth (proliferation), neurogenesis, learning and memory as with synaptic plasticity, and secretion of saliva. High levels of cytoplasmic Ca2+ can also cause the cell to undergo apoptosis. Other biochemical roles of calcium include regulating enzyme activity, permeability of ion channels, activity of ion pumps, and components of the cytoskeleton. Many Ca2+-mediated events occur when the released Ca2+ binds to and activates the regulatory protein calmodulin. Calmodulin may activate Ca2+-calmodulin-dependent protein kinases, or may act directly on other effector proteins. 
Besides calmodulin, many other Ca2+-binding proteins mediate the biological effects of Ca2+. In muscle contractions Contractions of skeletal muscle fibers are triggered by electrical stimulation, which depolarizes the transverse tubular junctions. Once depolarization occurs, the sarcoplasmic reticulum (SR) releases Ca2+ into the myoplasm, where it binds to a number of calcium-sensitive buffers. The Ca2+ in the myoplasm then diffuses to the Ca2+ regulatory sites on the thin filaments, and this leads to the actual contraction of the muscle. Contractions of smooth muscle fibers depend on how the Ca2+ influx occurs. When a Ca2+ influx occurs, cross-bridges form between myosin and actin, leading to contraction of the muscle fibers. Influxes of extracellular Ca2+ may occur by diffusion through ion channels, with three possible results. The first is a uniform increase in the Ca2+ concentration throughout the cell, which is responsible for regulating vascular diameter. The second is a rapid, time-dependent change in the membrane potential, which leads to a very quick and uniform increase in Ca2+; this can cause the spontaneous release of neurotransmitters by sympathetic or parasympathetic nerves. The third is a specific, localized subplasmalemmal Ca2+ release, which increases the activation of protein kinases and is seen in cardiac muscle, where it contributes to excitation-contraction coupling. Ca2+ may also come from internal stores in the SR, released through ryanodine receptors (RyRs) or IP3 receptors. RyR-mediated Ca2+ release is spontaneous and localized, and has been observed in a number of smooth muscle tissues, including arteries, the portal vein, the urinary bladder, the ureter, the airways, and the gastrointestinal tract. IP3-mediated Ca2+ release is triggered by activation of the IP3 receptor on the SR. These releases are often spontaneous and localized, as seen in the colon and portal vein, but may grow into a global Ca2+ wave, as observed in many vascular tissues. 
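The steep dependence of force on myoplasmic Ca2+ just described is commonly summarized with a Hill-type pCa-force curve, reflecting cooperative Ca2+ binding to the thin-filament regulatory sites. The sketch below illustrates that standard relationship only in outline; the half-activation concentration and Hill coefficient are hypothetical placeholders, not measured values for any particular muscle.

```python
# Hill-function sketch of the Ca2+-force relation: force rises steeply
# once myoplasmic Ca2+ crosses the activation range, because Ca2+ binds
# the thin-filament regulatory sites cooperatively. kd_nm and hill are
# illustrative placeholders, not measured values.

def relative_force(ca_nm, kd_nm=1000.0, hill=3.0):
    """Approximate fraction of maximal force at a given [Ca2+] in nM."""
    return ca_nm**hill / (kd_nm**hill + ca_nm**hill)

for ca in (100, 300, 1000, 3000, 10000):  # resting up to tetanic levels
    print(f"[Ca2+] = {ca:6d} nM -> relative force ~ {relative_force(ca):.2f}")
```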
In neurons In neurons, concomitant increases in cytosolic and mitochondrial Ca2+ are important for synchronizing neuronal electrical activity with mitochondrial energy metabolism. Mitochondrial matrix Ca2+ levels can reach the tens of μM necessary to activate isocitrate dehydrogenase, one of the key regulatory enzymes of the Krebs cycle. In neurons, the ER may serve as a network integrating numerous extracellular and intracellular signals in a binary membrane system with the plasma membrane. This association with the plasma membrane has given rise to the relatively new view of the ER as "a neuron within a neuron." The ER's structural characteristics, its ability to act as a Ca2+ sink, and its specific Ca2+-releasing proteins create a system that can produce regenerative waves of Ca2+ release, which may communicate both locally and globally within the cell. These Ca2+ signals integrate extracellular and intracellular fluxes and have been implicated in synaptic plasticity, memory, neurotransmitter release, neuronal excitability, and long-term changes at the level of gene transcription. ER stress is also related to Ca2+ signaling and, along with the unfolded protein response, can cause ER-associated degradation (ERAD) and autophagy. Astrocytes interact directly with neurons by releasing gliotransmitters. These transmitters enable communication with neurons and are released when calcium levels rise within astrocytes, drawn from internal stores; this increase in calcium can also be triggered by other neurotransmitters. Examples of gliotransmitters include ATP and glutamate. Activation raises the cytosolic calcium concentration from about 100 nanomolar to 1 micromolar. In fertilization A Ca2+ influx during fertilization has been observed in many species as the trigger for development of the oocyte. These influxes may occur as a single increase in concentration, as seen in fish and echinoderms, or as oscillations, as observed in mammals. The triggers of these Ca2+ influxes can differ. Influxes have been observed to occur via membrane Ca2+ conduits and from Ca2+ stores in the sperm. Sperm have also been observed to bind membrane receptors that lead to a release of Ca2+ from the ER. The sperm may also release a soluble factor specific to its species, which prevents cross-species fertilization from occurring. These soluble factors activate IP3 production, which causes a Ca2+ release from the ER via IP3 receptors. Some model systems, such as mammals, combine these mechanisms. Once the Ca2+ is released from the ER, the egg begins forming a fused pronucleus and restarting the mitotic cell cycle. Ca2+ release is also responsible for activating NAD+ kinase, which leads to membrane biosynthesis, and for the exocytosis of the oocyte's cortical granules, which forms the hyaline layer and provides the slow block to polyspermy. See also Nanodomain European Calcium Society References Further reading Cell signaling Signal transduction Calcium signaling
Calcium signaling
[ "Chemistry", "Biology" ]
1,964
[ "Biochemistry", "Neurochemistry", "Calcium signaling", "Signal transduction" ]
2,219,552
https://en.wikipedia.org/wiki/Heavens-Above
Heavens-Above is a non-profit website developed and maintained by Chris Peat as Heavens-Above GmbH. The website is dedicated to helping people observe and track satellites orbiting the Earth without the need for optical equipment such as binoculars or telescopes. It provides detailed star charts showing the trajectory of a satellite against the background of stars as seen during a pass. Special attention is paid to the ISS, Starlink satellites, and others. Space Shuttle missions were tracked until the program was retired in July 2011, and Iridium flares until May 2018. The website also offers information on currently visible comets and asteroids, planet details, and other miscellaneous information. Sky & Telescope magazine described Heavens-Above as "the most popular website for tracking satellites." Users click on a map of the world to set their viewing location. The site then lists visible objects, their brightness, and the time and direction in which to look. Data are given for space stations, rockets, satellites, and space junk, as well as for the Sun, Moon, and planets. The authors also offer a freeware mobile app that shows similar information for the user's location. See also List of satellite pass predictors References External links Astronomy websites Non-profit organisations based in Germany
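Pass predictions of this kind reduce to propagating a satellite's published two-line element set (TLE) and searching for intervals when the satellite sits above an observer's local horizon. The sketch below uses the third-party Python library skyfield to do this; the TLE, observer coordinates, and search window are placeholder examples, and nothing here describes Heavens-Above's actual implementation.

```python
# Sketch of a Heavens-Above-style pass prediction: propagate a TLE with
# SGP4 (via the skyfield library) and report when the satellite rises
# above, culminates over, and sets below a 10-degree altitude threshold
# for a given observer. The ISS TLE below is a stale example from 2014;
# fetch a current one (e.g. from celestrak.org) and move the search
# window before relying on the output.
from skyfield.api import EarthSatellite, load, wgs84

line1 = "1 25544U 98067A   14020.93268519  .00009878  00000-0  18200-3 0  5082"
line2 = "2 25544  51.6498 109.4756 0003572  55.9686 274.8005 15.49815350868473"

ts = load.timescale()
iss = EarthSatellite(line1, line2, "ISS (ZARYA)", ts)
observer = wgs84.latlon(48.1372, 11.5756)  # Munich, an arbitrary example

# Search a one-day window near the TLE epoch; SGP4 loses accuracy far
# from the epoch, so match the window to the TLE you actually load.
t0 = ts.utc(2014, 1, 21)
t1 = ts.utc(2014, 1, 22)

# find_events returns event codes 0 = rise, 1 = culminate, 2 = set,
# all relative to the chosen altitude threshold.
times, events = iss.find_events(observer, t0, t1, altitude_degrees=10.0)
for t, event in zip(times, events):
    label = ("rises above 10 deg", "culminates", "sets below 10 deg")[event]
    print(t.utc_strftime("%Y-%m-%d %H:%M:%S UTC"), label)
```

A brightness estimate, like those the site provides, would additionally require the satellite's intrinsic magnitude and the Sun-satellite-observer geometry, which this sketch omits.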
Heavens-Above
[ "Astronomy" ]
253
[ "Works about astronomy", "Astronomy websites" ]