Dataset columns: id (int64, 580 to 79M); url (string, lengths 31 to 175); text (string, lengths 9 to 245k); source (string, lengths 1 to 109); categories (string, 160 classes); token_count (int64, 3 to 51.8k)
2,327,311
https://en.wikipedia.org/wiki/Quality%2C%20cost%2C%20delivery
Quality, cost, delivery (QCD), sometimes expanded to quality, cost, delivery, morale, safety (QCDMS), is a management approach originally developed by the British automotive industry. QCD assesses different components of the production process and provides feedback in the form of facts and figures that help managers make logical decisions. By using the gathered data, it is easier for organizations to prioritize their future goals. QCD helps break down processes to organize and prioritize efforts before they grow overwhelming. QCD is a "three-dimensional" approach. If there is a problem with even one dimension, the others will inevitably suffer as well. One dimension cannot be sacrificed for the sake of the other two. Quality Quality is the ability of a product or service to meet and exceed customer expectations. It is the result of the efficiency of the entire production process formed of people, material, and machinery. Customer requirements determine the quality scope. Quality is a competitive advantage; poor quality often results in bad business. U.S. business organizations in the 1970s focused more on cost and productivity. That approach led to Japanese businesses capturing a major share of the U.S. market. It was not until the late 1970s and the beginning of the 1980s that the quality factor drastically shifted and became a strategic approach, created by Harvard professor David Garvin. This approach focuses on preventing mistakes and puts a great emphasis on customer satisfaction. Quality basis David A. Garvin lists eight dimensions of quality: Performance refers to a product's primary operating characteristics. For example, for a vehicle audio system, those characteristics include sound quality, surround sound, and Wi-Fi connectivity. Conformance refers to the degree to which a certain product meets the customer's expectations. Special features or extras are additional features of a product or service. An example of extras could be free meals on an airplane or Internet access for a TV. Aesthetics refer to a product's looks, sound, feel, smell, or taste. Aesthetics are subjective; thus, achieving total customer satisfaction is impossible. For example, not all customers like the smell of a certain perfume. Durability refers to how long the product lasts before it has to be replaced. Better raw materials and manufacturing processes can improve durability. For home appliances and automobiles, durability is a primary characteristic of quality. Reliability refers to the time until a product breaks down and has to be repaired, but not replaced. This feature is very important for products that have expensive maintenance. Serviceability is defined by speed, courtesy, competence and ease of repair. Customers want products that are quickly and easily serviceable. Perceived quality may be affected by the high price or the good aesthetics of a product. Product components The quality of a product depends almost entirely on the quality of its raw material. Suppliers and manufacturers must work together to eliminate defects and achieve higher quality. Small and medium-sized enterprises (SMEs) should discuss with their suppliers how quality improvements can affect the overall performance of the supply chain. Quality assurance can reduce testing, scrapping, reworks, and production costs. Consequences of poor quality Business loss: Poor quality results in unsatisfied customers and business loss, especially where customers can easily switch to a competitor.
Reduced productivity: Poor-quality products must often be reworked or scrapped entirely, which diminishes usable output. Higher operating costs: Harrington argued that poor quality affects costs. Counterintuitively, higher costs are attached to offering lower-quality products and services. Cost and scheduling problems can be reduced by avoiding the production of poor-quality goods and services. Costs The biggest costs in most businesses are the four basic types of manufacturing costs: Raw materials Direct labour Variable overhead – production costs that increase or decrease depending on the quantity produced. For example, electricity is a variable overhead. If a company increases production, it will also increase the usage of equipment, which will result in a higher electricity bill. Fixed overhead In addition, there are business costs that stay the same, regardless of the production output. Business costs include: Salaries for employees who do not work directly on the production line (e.g. security guards or safety inspectors). Depreciation costs Occupancy costs (e.g., property taxes and building insurance) Businesses desire to reduce costs to increase their operating profit and bottom line. Cost reduction strategies include: Minimizing supplier costs Adopting lean manufacturing Eliminating waste Delivery Logistics are an essential part of providing good customer service on time. Logistics customer service can be separated into three elements: Pre-transaction elements (before delivery) Transaction elements (during delivery) Post-transaction elements (after delivery) Benefits QCD offers a method of measuring both simple and complicated business processes. It also represents a basis for comparing businesses: for example, a business measuring a supplier's delivery performance may compare its findings with the business's own performance. Flexibility The "quality, cost, delivery, and flexibility" (QCDF) approach includes flexibility as the capacity to adapt to changes or modifications in the input quality, output quality, product specifications, and delivery schedules. Profitability There are seven measures used to increase profitability. Not right first time (NRFT) Not getting things right the first time means wasted resources, effort and time. This all leads to excessive costs for the company and poor-quality, high-priced products for the customer. NRFT measures the quality of a product and is expressed in "number of defective parts per million". The number of defective products is divided by the total quantity of finished products. This figure is then multiplied by 10^6 to get the number of defective parts per million. NRFT can be measured internally (defective parts identified within the production process) or externally (defective parts identified outside the production process, e.g. by the supplier or the customer). Delivery schedule achievement (DSA) DSA analyses how well a supplier delivers what the customer wants and when they want it. The goal is to achieve 100% on-time delivery without any special deliveries or overtime payments, which only increase the delivery cost. DSA measures the actual delivery performance against the planned delivery schedule. Failed deliveries include: "Not on time" deliveries – both late and early. "Incorrect quantity" deliveries. Both "not on time" and "incorrect quantity" deliveries. People productivity (PP) PP is measured by the time it takes (in staff hours) to produce a good-quality product.
Obtaining high PP is only possible when: Most employees' work adds value to the process. Non-value added work is reduced as much as possible. Waste is completely eliminated. Stock turns (ST) The ST ratio shows how quickly a company turns raw materials into finished, ready-to-be-sold products. The quicker the better. A low ST means that money is tied up in stock, and the company has fewer funds to invest in other parts of its business. Overall equipment effectiveness (OEE) The OEE shows how well a company uses its equipment and staff. OEE is calculated on the basis of three elements: Availability – compares the planned and the actual time of the process run. For example, if a machine is planned to run 100 hours a week, but in reality runs only 50, then the availability is 50%. Performance – compares the ideal output and the actual output. For example, if a certain process is planned to take 10 minutes, but actually takes 20, then the performance is 50%. Quality – to show the quality of a product, a company has to compare the number of good parts produced with the total parts produced. If it produces 100 parts per hour but only 50 of them are of saleable standard, then quality is running at 50%. Value added per person (VAPP) VAPP shows how well people are used to turn raw materials into finished goods. In order to calculate VAPP, three things need to be taken into account: The sales value of a unit after production (output value). The raw material value of a unit before production (input value). The number of direct production process employees. Floor space utilisation (FSU) FSU measures the sales revenue generated by a square metre of factory floor space. Usually, to achieve higher FSU, the floor space has to be reduced. That means eliminating inventory and reducing the necessary space to a minimum. See also Project management triangle Trilemma References Quality management Cost engineering
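A minimal sketch of the NRFT and OEE arithmetic described in the article; all input numbers below are invented examples, not figures from the text.

```python
# NRFT: defective parts per million = (defective / total produced) * 10^6.
# OEE: the product of its three elements (availability, performance, quality).

def nrft_ppm(defective, total):
    """Not right first time, expressed as defective parts per million."""
    return defective / total * 10**6

def oee(availability, performance, quality):
    """Overall equipment effectiveness as the product of its three elements."""
    return availability * performance * quality

# Invented example: 50 defective parts out of 200,000 produced -> 250 ppm.
print(nrft_ppm(50, 200_000))

# Using the article's illustrative 50% figures for each element:
# OEE = 0.5 * 0.5 * 0.5 = 0.125, i.e. 12.5%.
print(oee(0.5, 0.5, 0.5))
```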
Quality, cost, delivery
Engineering
1,717
35,945,180
https://en.wikipedia.org/wiki/Radio%20object%20with%20continuous%20optical%20spectrum
Radio Objects with Continuous Optical Spectra (abbreviated ROCOS; also referred to as ROCOSes) are a group of about 80 astrophysical objects characterized by optical spectra anomalously devoid of emission or absorption features, which makes it impossible to determine their distances and locations in relation to our galaxy. They are considered to be a subclass of blazars, and are similar in their spectral characteristics to DC-dwarfs and single stellar-mass black holes. Discovery and study Radio Objects with Continuous Optical Spectra, or ROCOSes, were discovered in the 1970s. Among the discoverers was a group of Soviet astrophysicists, who studied them at the Crimean Astrophysical Observatory and the Special Astrophysical Observatory of the Russian Academy of Sciences, using the former's 2.6-meter optical telescope and the latter's 6-meter optical telescope (BTA-6), along with a 1000-channel photon counter and photometers. The group published their findings in a series of articles in the Russian scientific journals Astronomy Letters and Astronomy Reports. Criteria An astronomical radio object is classified as a ROCOS if it possesses (a) an optical image with stellar appearance, which is identified with a radio source, and (b) no emission or absorption features in its optical spectrum, except for those due to the galactic interstellar medium, with a signal-to-noise ratio at the level of those observable for quasar candidates. About 8% of the known astronomical radio objects satisfy these two criteria and are considered ROCOSes. Properties The absence of distinct emission or absorption lines in the ROCOSes' spectra makes them very similar in this regard to highly polarized quasars (HPQ), BL Lac objects, and single stellar-mass black holes. The absence of optical spectral features also makes it impossible to use redshift to determine their distances or even to ascertain whether they are located within or outside our galaxy. References Astrophysics Astronomical radio sources Radio astronomy
Radio object with continuous optical spectrum
Physics,Astronomy
401
7,625,671
https://en.wikipedia.org/wiki/Tetralemma
The tetralemma is a figure that features prominently in the logic of India. Definition It states that with reference to any logical proposition (or axiom) X, there are four possibilities: X (affirmation), not-X (negation), both X and not-X (both), and neither X nor not-X (neither). Catuskoti The history of fourfold negation, the Catuskoti (Sanskrit), is evident in the logico-epistemological tradition of India, given the categorical nomenclature Indian logic in Western discourse. Subsumed within the auspice of Indian logic, 'Buddhist logic' has been particularly focused in its employment of the fourfold negation, as evidenced by the traditions of Nagarjuna and the Madhyamaka, particularly the school of Madhyamaka given the retroactive nomenclature of Prasangika by the Tibetan Buddhist logico-epistemological tradition. The tetralemma was also used as a form of inquiry rather than logic in the Nasadiya Sukta of the Rigveda (creation hymn), though it seems to have been rarely used as a tool of logic before Buddhism. See also Catuṣkoṭi, a similar concept in Indian philosophy De Morgan's laws Dialetheism Logical connective Paraconsistent logic Prasangika Pyrrhonism Semiotic square Two-truths doctrine References External links Wiktionary definition of tetralemma History of logic Logic Lemmas
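A minimal sketch of the four positions in modern propositional notation. This is one common classical reading, not the article's own formalization; in particular, rendering the fourth position as the negation of the disjunction is an assumption, and other formalizations exist in the literature on the catuṣkoṭi.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
For a proposition $X$, one common rendering of the four positions:
\begin{align*}
\text{affirmation:} \quad & X \\
\text{negation:} \quad & \neg X \\
\text{both:} \quad & X \land \neg X \\
\text{neither:} \quad & \neg(X \lor \neg X)
\end{align*}
\end{document}
```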
Tetralemma
Mathematics
290
18,399,611
https://en.wikipedia.org/wiki/Shift-invariant%20system
In signal processing, a shift-invariant system is the discrete equivalent of a time-invariant system, defined such that if y(n) is the response of the system to x(n), then y(n−k) is the response of the system to x(n−k) for any shift k. That is, in a shift-invariant system, the contemporaneous response of the output variable to a given value of the input variable does not depend on when the input occurs; time shifts are irrelevant in this regard. Applications Because digital systems need not be causal, some operations can be implemented in the digital domain that cannot be implemented using discrete analog components. Digital filters that require finite numbers of future values can be implemented while the analog counterparts cannot. Notes References Oppenheim, Schafer, Digital Signal Processing, Prentice Hall, 1975. See also LTI system theory Control theory
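A minimal numerical sketch of this definition, not from the article: it checks that shifting the input of an assumed shift-invariant system (here a 3-tap moving average, an invented example) shifts the output identically.

```python
import numpy as np

def system(x):
    # Causal 3-tap moving average; samples before n = 0 are taken as zero.
    kernel = np.ones(3) / 3
    return np.convolve(x, kernel)[: len(x)]

def shift(x, k):
    # Delay x by k samples, padding the front with zeros.
    return np.concatenate([np.zeros(k), x[: len(x) - k]])

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
k = 5

# Shift invariance: the response to x(n - k) equals y(n - k).
lhs = system(shift(x, k))   # response to the shifted input
rhs = shift(system(x), k)   # shifted original response
print(np.allclose(lhs, rhs))  # True for this shift-invariant system
```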
Shift-invariant system
Mathematics
159
13,591,037
https://en.wikipedia.org/wiki/Bucillamine
Bucillamine is an antirheumatic agent developed from tiopronin. Its activity is mediated by the two thiol groups that the molecule contains. Research done in the USA showed positive transplant preservation properties. Bucillamine is currently being investigated for COVID-19 drug repurposing. Bucillamine has a well-known safety profile and has been prescribed for the treatment of rheumatoid arthritis in Japan and South Korea for over 30 years. It is a cysteine derivative with 2 thiol groups that is 16-fold more potent than acetylcysteine (NAC) as a thiol donor in vivo, giving it vastly superior function in restoring glutathione and therefore greater potential to prevent acute lung injury during influenza infection. Bucillamine has also been shown to prevent oxidative and reperfusion injury in heart and liver tissues. Bucillamine has both proven safety and a proven mechanism of action similar to that of NAC, but with much higher potency, mitigating the previous obstacles to using thiols therapeutically. It is hypothesized that similar processes related to reactive oxygen species (ROS) are involved in acute lung injury during SARS-CoV-2 infection, possibly justifying the investigation of bucillamine as an intervention for COVID-19. On July 31, 2020, the U.S. Food and Drug Administration (FDA) approved Revive Therapeutics Ltd. to proceed with a randomized, double-blind, placebo-controlled confirmatory Phase 3 clinical trial protocol to evaluate the safety and efficacy of bucillamine in patients with mild-moderate COVID-19. References Antirheumatic products Carboxylic acids Propionamides Thiols
Bucillamine
Chemistry
363
78,110,876
https://en.wikipedia.org/wiki/Perseus-Taurus%20Shell
The Perseus-Taurus Shell is a near-spherical cavity in the interstellar medium, 500 light-years wide, located between the Perseus and Taurus constellations. A team from the Harvard-Smithsonian Center for Astrophysics led by Catherine Zucker and Shmuel Bialy discovered the structure in 2021. Scientists believe that it appeared following the explosions of ancient supernovae. Molecular clouds surround the sphere-shaped cavity. References Superbubbles
Perseus-Taurus Shell
Astronomy
94
14,410,427
https://en.wikipedia.org/wiki/GPR183
G-protein coupled receptor 183, also known as Epstein-Barr virus-induced G-protein coupled receptor 2 (EBI2), is a protein (GPCR) expressed on the surface of some immune cells, namely B cells and T cells; in humans it is encoded by the GPR183 gene. Expression of EBI2 is one critical mediator of immune cell localization within lymph nodes, responsible in part for the coordination of B cell, T cell, and dendritic cell movement and interaction following antigen exposure. EBI2 is a receptor for oxysterols. The most potent activator is 7α,25-dihydroxycholesterol (7α,25-OHC), with other oxysterols exhibiting varying affinities for the receptor. Oxysterol gradients drive chemotaxis, attracting EBI2-expressing cells to locations of high ligand concentration. The GPR183 gene was identified due to its upregulation during Epstein-Barr virus infection of the Burkitt's lymphoma cell line BL41, hence its name: EBI2. Tissue distribution and function B cells EBI2 helps B cell homing to the outer follicular region within a lymph node. Approximately three hours following B cell exposure to plasma-soluble antigen, EBI2 is upregulated via the transcription factor BRRF1. More surface receptors binding the oxysterol ligand results in cellular migration up the gradient, to the outer follicular region. The reason for this early migration is still unknown; however, because soluble antigen enters lymph nodes via afferent lymphatic vasculature, near the outer region of the follicle, it is hypothesized that B cell movement is motivated by increased exposure to the antigen. Six hours after antigen exposure, EBI2 is downregulated to low levels, permitting the B cells to migrate to the border between the B cell and T cell zones of the lymph node. Here, B cells interact with T helper cells previously activated by antigen-presenting dendritic cells. Though CCR7 is the dominant receptor in this stage of B cell migration, EBI2 is still critical: its low expression contributes to organized interaction along the T zone border that maximizes interactions with T cells. Following B cell receptor and CD40 co-stimulation, EBI2 is again upregulated. The B cells thus move back toward the outer follicular space, where they begin cell division. At this point, a B cell either downregulates EBI2 expression in order to enter a germinal center or maintains EBI2 expression and remains in outer follicular regions. In germinal centers (GC), B cells downregulate the receptor via the transcriptional repressor B-cell lymphoma-6 (BCL6) and, following somatic hypermutation, differentiate into long-lived antibody-secreting plasma cells or memory B cells. EBI2 must turn off to move B cells to the germinal center from the periphery, and must turn on for B cells to exit the germinal center and re-enter the periphery. Meanwhile, those remaining outside the follicle differentiate into plasmablasts, eventually becoming short-lived plasma cells. Thus, EBI2 expression modulates B cell differentiation by directing cells toward or away from germinal centers. T cells EBI2 also regulates intra-lymphatic T cell migration. Mature T helper cells upregulate EBI2 to follow the oxysterol gradient, migrating to the outer edges of the T cell zone to receive signals from antigen-presenting dendritic cells arriving from the tissues. This migration is critical as the resulting T cell-DC interaction induces T helper cell differentiation into T follicular helper cells.
In concert with upregulation of CXCR5, the downregulation of EBI2 helps T follicular helper cells move toward the follicle center to help B cells undergoing affinity maturation in germinal centers. Dendritic cells EBI2 expression on CD4+ dendritic cells is a key initiator of immune response. Antigen-activated dendritic cells are driven to lymph node bridging channels via the oxysterol-EBI2 pathway. In the spleen, bridging channels connect the marginal zone, where dendritic cells pick up plasma-soluble antigen, to the T cell zone, where they present antigen to T helper cells. This results in T cell proliferation and differentiation. Localization to bridging channels is also associated with dendritic cell reception of lymphotoxin beta signaling, which augments their blood pathogen uptake, resulting in an increase in T cell responses. Ligand Oxysterols bind to and activate EBI2. The highest affinity oxysterol ligand is 7α,25-dihydroxycholesterol (7α,25-OHC), formed by enzymatic oxidation of cholesterol by the hydroxylases CH25H and CYP7B1. 7α,25-OHC is concentrated in bridging channels and the outer perimeter of B cell follicles. Conversely, it is not present in follicle centers, germinal centers, or the T zone. The enzymes responsible for ligand biosynthesis, CH25H and CYP7B1, are unsurprisingly abundant in lymphoid stromal cells. On the other hand, the enzyme that deactivates the ligand, HSD3B7, is highly concentrated in the area where the ligand concentration should be lowest: the T zone. Though it is not a cytokine, the EBI2 ligand acts much like a chemokine in that its gradient drives cellular migration. Virus infection GPR183 plays a crucial role in driving inflammation in the lungs during severe viral respiratory infections such as influenza A virus (IAV) and SARS-CoV-2. Studies using preclinical murine models of infection revealed that the activation of GPR183 by oxidized cholesterols leads to the recruitment of monocytes/macrophages and the production of inflammatory cytokines in the lungs. References Further reading G protein-coupled receptors
GPR183
Chemistry
1,337
58,625,869
https://en.wikipedia.org/wiki/Joy%20Carter
Joy E. Carter (née Randlesome; born 26 December 1955) is a British geologist and academic, specialising in geochemistry. From 2006 to 2021 she held the position of Vice-Chancellor of the University of Winchester. She previously taught at the University of Reading, the University of Derby, and the University of Glamorgan; she served as a pro-vice-chancellor at Glamorgan. She has additionally served since 2013 as chair of GuildHE, an organisation representing the heads of British higher education institutions, and from 2011 was chair of the Cathedrals Group, an association of British universities and university colleges with religious foundations. Early life and education Carter was born on 26 December 1955. She studied at the University of Durham, graduating with a Bachelor of Science (BSc) degree in 1977, and at the University of Lancaster, graduating with a Doctor of Philosophy (PhD) degree in 1980. Her doctoral thesis was titled "The geochemistry of mercury in estuarine mixing and sedimentation". Honours In March 2013, Carter was appointed a Deputy Lieutenant (DL) to the Lord Lieutenant of Hampshire. In the 2018 New Year Honours, she was appointed a Commander of the Order of the British Empire (CBE) "for services to higher education". Selected works References 1955 births Living people British geochemists British women geologists Women geochemists Academics of the University of Reading Academics of the University of Derby Academics of the University of Glamorgan Commanders of the Order of the British Empire Academics of the University of Winchester Deputy lieutenants of Hampshire Fellows of the Geological Society of London Place of birth missing (living people) Alumni of Lancaster University Alumni of Trevelyan College, Durham
Joy Carter
Chemistry
336
34,004,150
https://en.wikipedia.org/wiki/Amaz%C3%B4nia-1
Amazônia-1, or SSR-1 (in Portuguese: Satélite de Sensoriamento Remoto-1), is the first Earth observation satellite developed by Brazil, with help from Argentina's INVAP, which provided the main computer, attitude controls and sensors, and the training of Brazilian engineers. It was launched at 04:54:00 UTC (10:24:00 IST) on 28 February 2021. Operations will be conducted jointly with the China-Brazil Earth Resources Satellite program (CBERS-4) satellite. Background In the early 1990s, the design of the SSR (Satélite de Sensoriamento Remoto) satellites, the Amazônia-1 precursors, was revised, and Instituto Nacional de Pesquisas Espaciais (INPE) technicians proposed replacing the polar orbit with an equatorial orbit; this proposal was accepted. That made sense at the time, as Brazil already had polar orbit coverage with the CBERS satellites. SSR-1 suffered several delays, whether from lack of resources or bid disputes. The effective start only occurred in 2001, when a contract was signed for the development of a multi-mission platform specifically, at the time, for this purpose. In 2001, a joint study by INPE and the German Aerospace Center (DLR) was published, which found that most of the SSR-1 requirements could be met by two sensors: a VIS/NIR camera and a MIR camera. However, with the publication of the PNAE (Programa Nacional de Atividades Espaciais, Brazil's National Space Activities Program) review in 2005, SSR-1 ceased to be a priority. Update Between September and October 2012, a structural model of the Amazônia-1 satellite was subjected to a series of vibration tests. In the latest review of the PNAE, published in January 2013, Amazônia-1 resurfaced with the same name, and successors were even planned (Amazônia-1B in 2017 and Amazônia-2 in 2018). However, with polar orbit as a design feature, the release dates of these satellites could not be met. The Amazônia-1 schedule was already delayed by two years. The satellite was originally supposed to launch on a Brazilian VLS-1 rocket, but that program was cancelled. The satellite was successfully launched on 28 February 2021 aboard ISRO's Polar Satellite Launch Vehicle (PSLV-C51) from the First Launch Pad of Satish Dhawan Space Centre. The cost of the launch was nearly USD 26 million. Post-launch On March 2, 2021, journalist and science communicator Salvador Nogueira reported that, according to trackers in the United States, the satellite might be tipping over in its orbit, but that the situation was not irreversible. This occurred after the satellite was put into "mission mode", which triggered a safety program in which the satellite assumed an attitude that ensured its solar panels were exposed to the Sun. The journalist later posted on Twitter that the situation may have been due to the satellite's release and that it had already been resolved, while awaiting word from INPE. Later, Clezio di Nardin, INPE's director, confirmed that the satellite was operating normally and going through the qualification phase, which would last until March 15. The position of Clezio di Nardin and of Marcos Pontes, Minister of Science, was that nothing unusual had happened. Features The current design features are as follows: Orbit: Sun-synchronous orbit Period of Earth imaging: 4 days Optical sighting wide imaging (camera with 3 bands in the visible (VIS) and 1 band in the near-infrared (NIR)) Observation range: with resolution. Platform: Multi-Mission Platform (MMP) Weight: Instruments Advanced Wide Field Imager (AWFI) is a resolution camera. Amazônia-2 The Amazônia-2 satellite was planned for launch in 2022 to replace its predecessor.
Gallery See also Brazilian space program References Spacecraft launched in 2021 2021 in Brazil Earth observation satellites of Brazil Earth imaging satellites
Amazônia-1
Astronomy
809
8,206,739
https://en.wikipedia.org/wiki/The%20Einstein%20Theory%20of%20Relativity
The Einstein Theory of Relativity (1923) is a silent animated short film directed by Dave Fleischer and released by Fleischer Studios. History In August 1922, Scientific American published an article explaining its position that a silent film would be unsuccessful in presenting the theory of relativity to the general public, arguing that only as part of a broader educational package including lecture and text would such a film be successful. Scientific American then went on to review frames from an unnamed German film reported to be financially successful. Six months later, on February 8, 1923, the Fleischers released their relativity film, produced in collaboration with popular science journalist Garrett P. Serviss to accompany his book on the same topic. Two versions of the Fleischer film are reported to exist: a shorter two-reel (20 minute) edit intended for general theater audiences, and a longer five-reel (50 minute) version intended for educational use. The Fleischers lifted footage from the German predecessor, Die Grundlagen der Einsteinschen Relativitäts-Theorie, directed by Hanns-Walter Kornblum, for inclusion in their film. Presented here are images from the Fleischer film and the German film. If actual footage was not recycled into The Einstein Theory of Relativity, these images and text from the Scientific American article suggest that original visual elements from the German film were. This film, like much of the Fleischers' work, has fallen into the public domain. Unlike Fleischer Studios' Superman or Betty Boop cartoons, The Einstein Theory of Relativity has very few existing prints and is available in 16mm from only a few specialized film preservation organizations. References External links The Einstein Theory of Relativity DVD of the film bundled with guidebook by Garrett P. Serviss (and including another Fleischer documentary, Evolution), from Apogee Books. 1923 films 1923 animated short films 1923 documentary films 1920s American animated films 1920s educational films 1920s English-language films American educational films American silent short films Fleischer Studios short films Short films directed by Dave Fleischer Surviving American silent films Theory of relativity English-language short films American animated black-and-white films
The Einstein Theory of Relativity
Physics
442
24,715,625
https://en.wikipedia.org/wiki/Eurocode%201%3A%20Actions%20on%20structures
In the Eurocode series of European standards (EN) related to construction, Eurocode 1: Actions on structures (abbreviated EN 1991 or, informally, EC 1) describes how to design load-bearing structures. It includes characteristic values for various types of loads and densities for all materials which are likely to be used in construction. Eurocode 1 is divided into a number of parts. Part 1-1: Densities, self-weight, imposed loads for buildings EN 1991-1-1 gives design guidance and actions for the structural design of buildings and civil engineering works, including some geotechnical aspects, for the following subjects: Densities of construction materials and stored materials. Self-weight of construction works. Imposed loads for buildings. Part 1-2: Actions on structures exposed to fire Part 1-2 of EN 1991 deals with thermal and mechanical actions on structures exposed to fire. It is intended to be used in conjunction with the fire design Parts of EN 1992 to EN 1996 and EN 1999, which give rules for designing structures for fire resistance. Part 1-2 of EN 1991 contains thermal actions related to nominal and physically based thermal actions. More data and models for physically based thermal actions are given in annexes. Part 1-2 of EN 1991 gives general principles and application rules in connection with thermal and mechanical actions to be used in conjunction with EN 1990, EN 1991-1-1, EN 1991-1-3 and EN 1991-1-4. Part 1-3: General actions - Snow loads EN 1991-1-3 gives guidance to determine the values of loads due to snow to be used for the structural design of buildings and civil engineering works. It applies to sites at altitudes below 1500 m, although treatments of snow loads for altitudes above 1500 m may be found in the National Annexes. Part 1-4: General actions - Wind actions EN 1991-1-4 gives guidance on the determination of natural wind actions for the structural design of building and civil engineering works for each of the loaded areas under consideration. This includes the whole structure, parts of the structure, or elements attached to the structure, e.g. components, cladding units and their fixings, and safety and noise barriers. EN 1991-1-4 is applicable to: Buildings and civil engineering works with heights up to 200 m. Bridges having no span greater than 200 m, provided that they satisfy the criteria for dynamic response. Part 1-5: General actions - Thermal actions EN 1991-1-5 gives principles and rules for calculating thermal actions on buildings, bridges and other structures, including their structural elements. Principles needed for cladding and other appendages of buildings are also provided. EN 1991-1-5 describes the changes in the temperature of structural elements. Characteristic values of thermal actions are presented for use in the design of structures which are exposed to daily and seasonal climatic changes. Structures not so exposed may not need to be considered for thermal actions. Part 1-6: General actions - Actions during execution EN 1991-1-6 provides principles and general rules for the determination of actions which should be taken into account during the execution of buildings and civil engineering works. Part 1-7: General actions - Accidental actions EN 1991-1-7 provides rules for safeguarding buildings and other civil engineering works against accidental actions. For buildings, EN 1991-1-7 also provides strategies to limit the consequences of localised failure caused by an unspecified accidental event.
The recommended strategies for accidental actions range from the provision of measures to prevent or reduce the accidental action to that of designing the structure to sustain the action. In this context, specific rules are given for accidental actions caused by impact and internal explosions. Localised failure of a building structure, however, may result from a wide range of events that could possibly affect the building during its lifespan. Such events may not necessarily be anticipated by the designer. This Part does not specifically deal with accidental actions caused by external explosions, warfare and terrorist activities, or the residual stability of buildings or other civil engineering works damaged by seismic action or fire, etc. However, for buildings, adoption of the robustness strategies given in Annex A for safeguarding against the consequences of localised failure should ensure that the extent of the collapse of a building, if any, will not be disproportionate to the cause of the localised failure. This Part does not apply to dust explosions in silos (see EN 1991-4), to impact from traffic travelling on the bridge deck, or to structures designed to accept ship impact in normal operating conditions, e.g. quay walls and breasting dolphins. See also Structural robustness. Part 2: Traffic loads on bridges EN 1991-2 defines imposed loads (models and representative values) associated with road traffic, pedestrian actions and rail traffic which include, when relevant, dynamic effects and centrifugal, braking and acceleration actions and actions for accidental design situations. Contents Definitions and symbols. Loading principles. Design situations. Loads (road bridges), imposed loads due to traffic actions, their conditions of mutual combination and of combination with pedestrian and cycle traffic, other actions. Loads (footways, cycle tracks and footbridges), imposed loads, other actions. Loads (railway bridges), imposed loads due to rail traffic, other actions. Part 3: Actions induced by cranes and machinery EN 1991-3 specifies imposed loads (models and representative values) associated with cranes on runway beams and stationary machines which include, when relevant, dynamic effects and braking, acceleration and accidental forces. Contents Common definitions and notations. Actions induced by cranes on runways. Actions induced by stationary machines. Part 4: Silos and tanks EN 1991-4 provides general principles and actions for the structural design of silos for the storage of particulate solids and tanks for the storage of fluids, and shall be used in conjunction with EN 1990: Basis of Design, other parts of EN 1991, and EN 1992 to EN 1999. External links The EN Eurocodes EN 1991: Actions on structures EN 1991 - Eurocode 1: Actions on structures - "Eurocodes: Background and applications" workshop Bridge design
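As a hedged illustration of the kind of calculation Part 1-3 governs: the characteristic snow load on a roof is conventionally determined as s = μi · Ce · Ct · sk. This formula and the sketch below are not quoted from the article; the coefficient values are invented examples, and in practice all values come from the standard and its National Annexes.

```python
# Sketch of the EN 1991-1-3 roof snow load, s = mu_i * C_e * C_t * s_k.
# All inputs below are invented examples, not values from the article.

def snow_load(mu_i, c_e, c_t, s_k):
    """Roof snow load s in kN/m^2.

    mu_i: roof shape coefficient
    c_e:  exposure coefficient
    c_t:  thermal coefficient
    s_k:  characteristic ground snow load (kN/m^2)
    """
    return mu_i * c_e * c_t * s_k

# Example: flat roof, normal exposure and thermal conditions,
# assumed ground snow load of 1.5 kN/m^2.
print(snow_load(0.8, 1.0, 1.0, 1.5))  # 1.2 kN/m^2
```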
Eurocode 1: Actions on structures
Engineering
1,227
23,661,513
https://en.wikipedia.org/wiki/C23H46N6O13
The molecular formula C23H46N6O13 (molar mass: 614.64 g/mol, exact mass: 614.3123 u) may refer to: Neomycin Molecular formulas
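As a quick check of the molar mass quoted above, a minimal sketch using standard atomic weights; the small difference from the quoted 614.64 g/mol reflects the particular weight table assumed here.

```python
# Recompute the molar mass of C23H46N6O13 from standard atomic weights.
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def molar_mass(formula_counts):
    """Sum of atomic weight times element count over the formula."""
    return sum(ATOMIC_WEIGHTS[el] * n for el, n in formula_counts.items())

print(round(molar_mass({"C": 23, "H": 46, "N": 6, "O": 13}), 2))  # ~614.65
```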
C23H46N6O13
Physics,Chemistry
62
14,809,774
https://en.wikipedia.org/wiki/ZNF281
Zinc finger protein 281 is a protein that in humans is encoded by the ZNF281 gene. See also Zinc finger References Further reading External links Transcription factors
ZNF281
Chemistry,Biology
34
349,632
https://en.wikipedia.org/wiki/Clothes%20iron
A clothes iron (also flatiron, smoothing iron, dry iron, steam iron or simply iron) is a small appliance that, when heated, is used to press clothes to remove wrinkles and unwanted creases. Domestic irons generally operate over a range of temperatures. It is named for the metal (iron) of which the device was historically made, and the use of it is generally called ironing, the final step in the process of laundering clothes. Ironing works by loosening the ties between the long chains of molecules that exist in polymer fiber materials. With the heat and the weight of the ironing plate, the fibers are stretched and the fabric maintains its new shape when cool. Some materials, such as cotton, require the use of water to loosen the intermolecular bonds. History and development Before the introduction of electricity, irons were heated by combustion, either in a fire or with some internal arrangement. Such an iron was made as a solid piece of iron with a handle and was heated, for example, on a wood stove and used to smooth clothes. It can also be called a smoothing iron. An "electric flatiron" was invented by American Henry W. Seely and patented on June 6, 1882. It was heavy and took a long time to heat. The UK Electricity Association is reported to have said that an electric iron with a carbon arc appeared in France in 1880, but this is considered doubtful. Two of the oldest sorts of iron were either containers filled with a burning substance, or solid lumps of metal which could be heated directly. Metal pans filled with hot coals were used for smoothing fabrics in China in the 1st century BC. A later design consisted of an iron box which could be filled with hot coals, which had to be periodically aerated by attaching a bellows. In the late nineteenth and early twentieth centuries, there were many irons in use that were heated by fuels such as kerosene, ethanol, whale oil, natural gas, carbide gas (acetylene, as with carbide lamps), or even gasoline. Some houses were equipped with a system of pipes for distributing natural gas or carbide gas to different rooms in order to operate appliances such as irons, in addition to lights. Despite the risk of fire, liquid-fuel irons were sold in U.S. rural areas up through World War II. In Kerala, India, burning coconut shells were traditionally used as an alternative to charcoal due to their comparable heating capacity. This method is still employed as a backup option, particularly during frequent power outages. Other box irons had heated metal inserts instead of hot coals. From the 17th century, sadirons or sad irons (from Middle English "sad", meaning "solid", used in English through the 1800s) began to be used. They were thick slabs of cast iron, triangular and with a handle, heated in a fire or on a stove. These were also called flat irons. A laundry worker would employ a cluster of solid irons that were heated from a single source: as the iron currently in use cooled down, it could be quickly replaced by a hot one. In the industrialized world, these designs have been superseded by the electric iron, which uses resistive heating from an electric current. The hot plate, called the sole plate, is made of aluminium or stainless steel polished to be as smooth as possible; it is sometimes coated with a low-friction heat-resistant plastic to reduce friction below that of the metal plate. The heating element is controlled by a thermostat that switches the current on and off to maintain the selected temperature.
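A rough sketch of the thermostat behaviour just described, switching the heating element on and off around a setpoint; the setpoint, hysteresis band, and heating/cooling rates below are invented example values, not specifications from the article.

```python
# Bang-bang (on/off) thermostat control with a hysteresis band, as used to
# hold a sole plate near the selected temperature. All numbers are invented.

def thermostat_step(temp_c, heater_on, setpoint_c=180.0, band_c=5.0):
    """Return the new heater state given the current sole-plate temperature."""
    if temp_c >= setpoint_c + band_c:
        return False  # too hot: switch the element off
    if temp_c <= setpoint_c - band_c:
        return True   # too cold: switch the element on
    return heater_on  # inside the band: keep the current state

# Simulate a crude heat-up and regulation cycle.
temp, heater = 20.0, False
for _ in range(60):
    heater = thermostat_step(temp, heater)
    temp += 4.0 if heater else -2.0  # invented heating/cooling rates per step
print(round(temp, 1), heater)  # settles near the setpoint, oscillating in the band
```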
The invention of the resistively heated electric iron is credited to Henry W. Seely of New York City in 1882. In the same year an iron heated by a carbon arc was introduced in France, but was too dangerous to be successful. The early electric irons had no easy way to control their temperature, and the first thermostatically controlled electric iron appeared in the 1920s. The first commercially available electric steam iron was introduced in 1926 by a New York drying and cleaning company, Eldec, but was not a commercial success. The patent for an electric steam iron and dampener was issued to Max Skolnik of Chicago in 1934. In 1938, Skolnik granted the Steam-O-Matic Corporation of New York the exclusive right to manufacture steam-electric irons. This was the first steam iron to achieve any degree of popularity, and led the way to more widespread use of the electric steam iron during the 1940s and 1950s. Types and names Historically, irons have had several variations and have thus been called by many names: Flatiron (American English), flat iron (British English) or smoothing iron The general name for a hand-held iron consisting simply of a handle and a solid, flat, metal base, and named for the flat ironing face used to smooth clothes. Sad iron or sadiron Mentioned above, meaning "solid" or heavy iron, where the base is a solid block of metal, sometimes used to refer to irons with heavier bases than a typical "flatiron". Box iron, ironing box, charcoal iron, ox-tongue iron or slug iron Mentioned above; the base is a container, into which hot coals or a metal brick or slug can be inserted to keep the iron heated. The ox-tongue iron is named for the particular shape of the insert, referred to as an ox-tongue slug. Goose, tailor's goose or, in Scots, gusing iron A type of flat iron or sad iron named for the goose-like curve in its neck, and (in the case of "tailor's goose") its usage by tailors. Goffering iron This type of iron, now obsolete, consists of a metal cylinder oriented horizontally on a stand. It was used to iron ruffs and collars. Hygiene Proper ironing of clothes has proven to be an effective method to avoid infections like those caused by lice.
Features Modern irons for home use can have the following features: A design that allows the iron to be set down, usually standing on its end, without the hot soleplate touching anything that could be damaged; A thermostat ensuring maintenance of a constant temperature; A temperature control dial allowing the user to select the operating temperatures (usually marked with types of cloth rather than temperatures: "silk", "wool", "cotton", "linen", etc.); An electrical cord with heat-resistant silicone rubber insulation; Injection of steam through the fabric during the ironing process; A water reservoir inside the iron used for steam generation; An indicator showing the amount of water left in the reservoir; Constant steam: constantly sends steam through the hot part of the iron into the clothes; Steam burst: sends a burst of steam through the clothes when the user presses a button; (advanced feature) Dial controlling the amount of steam to emit as a constant stream; (advanced feature) Anti-drip system; Cord control: the point at which the cord attaches to the iron has a spring to hold the cord out of the way while ironing and likewise when setting down the iron (prevents fires, is more convenient, etc.); A retractable cord for easy storage; (advanced feature) Non-stick coating along the sole plate to help the iron glide across the fabric; (advanced feature) Anti-burn control: if the iron is left flat (possibly touching clothes) for too long, the iron shuts off to prevent scorching and fires; (advanced feature) Energy saving control: if the iron is left undisturbed for several (10 or 15) minutes, the iron shuts off; Cordless irons: the iron is placed on a stand for a short period to warm up, using thermal mass to stay hot while ironing. These are useful for light loads only. Battery power is not viable for irons, as they require more power than practical batteries can provide. (advanced feature) 3-way automatic shut-off; (advanced feature) Self-cleaning; (advanced feature) Anti-scale to help remove lime scale buildup from using hard water for a long time; (advanced feature) Vertical steam to help remove creases and wrinkles by holding an iron vertically and steaming material close to it. Collections One of the world's larger collections of irons, comprising 1300 historical examples of irons from Germany and the rest of the world, is housed in Gochsheim Castle, near Karlsruhe, Germany. Many ethnographical museums around the world have collections of irons. In Ukraine, for example, about 150 irons are part of the exhibition of the Radomysl Castle in Ukraine. Ironing center An ironing center, steam ironing station, or steam generator iron is a device consisting of a clothes iron and a separate steam-generating tank. By having a separate tank, the ironing unit can generate more steam than a conventional iron, making steam ironing faster. Such ironing facilities take longer to warm up than conventional irons, and cost more.
See also Dadeumi, a mechanical way to smooth clothing, once traditional in Korea Flatiron Building, of cross-section like a flatiron Flatiron gunboat, flatiron-shaped in plan view Hair iron Home robot Mangle (machine) Soldering iron Trouser press Mary Florence Potts, inventor of the detachable cold wooden handle for irons References External links Charcoal and other antique irons from the White River Valley Museum Antique Irons from the Virtual Museum of Textile Arts 1882 introductions Home appliances Laundry equipment 19th-century inventions Ancient inventions British inventions Textile tools
Clothes iron
Physics,Technology
1,995
24,144,551
https://en.wikipedia.org/wiki/C25H36O6
The molecular formula C25H36O6 (molar mass: 432.54 g/mol, exact mass: 432.251189 u) may refer to: Coicenal C Hydrocortisone 17-butyrate Hydrocortisone 21-butyrate Pseudopterosin A Molecular formulas
C25H36O6
Physics,Chemistry
83
18,016,634
https://en.wikipedia.org/wiki/Gluten%20immunochemistry
The immunochemistry of Triticeae glutens is important in several inflammatory diseases. It can be subdivided into innate responses (direct stimulation of the immune system), class II mediated presentation (HLA DQ), class I mediated stimulation of killer cells, and antibody recognition. The responses to gluten proteins and polypeptide regions differ according to the type of gluten sensitivity. The response is also dependent on the genetic makeup of the human leukocyte antigen genes. In gluten sensitive enteropathy, there are four types of recognition: innate immunity (a form of cellular immunity priming), HLA-DQ mediated presentation, and antibody recognition of gliadin and of transglutaminase. With idiopathic gluten sensitivity, only antibody recognition of gliadin has been resolved. In wheat allergy, the response pathways are mediated through IgE against other wheat proteins and other forms of gliadin. Innate immunity Innate immunity to gluten refers to an immune response that works independently of T-cell receptor or antibody recognition of the 'innate' peptide. This peptide acts directly on cells, such as monocytes, stimulating their growth and differentiation. Innate immunity to gluten is complicated by an apparent role gluten has in bypassing normal host defense and peptide exclusion mechanisms in the gut. While not truly innate, these activities allow gliadin to enter areas where many lymphocytes patrol. In bypassing these filters, gliadin alters the normal behavior of both digestive cells, called enterocytes or epithelial cells, and lymphocytes. This increases the potential of causing sensitivity (see Underlying conditions). One potential explanation of why certain people become sensitive is that these individuals may not produce adequate peptidases in some areas of the gut, allowing these peptides to survive. Another explanation for some may be that food chemicals or drugs are weakening the defenses. This can be the case with ω5-gliadin allergy accompanied by salicylate sensitivity. There is no clear reasoning, either from genetics or from long-term studies of susceptible individuals, why these gut peptide restrictions would change. Once inside, α-9 gliadin 31–55 shows the ability to activate undifferentiated immune cells that then proliferate and also produce inflammatory cytokines, notably interleukin 15 (IL-15). This produces a number of downstream responses that are pro-inflammatory. The other peptide that may have innate behavior is the "CXCR3" receptor binding peptide; the receptor exists on enterocytes, the brush border membrane cells. The peptide displaces an immune factor and signals the disruption of the membrane seal, the tight junctions, between cells. Alpha gliadin 31–43 Gluten bears an innate response peptide (IRP) found on α-9 gliadin at positions 31–43 and on α-3, 4, 5, 8, and 11 gliadins. The IRP lies within a 25 amino-acid long region that is resistant to pancreatic proteases. The 25mer is also resistant to brush border membrane peptidases of the small intestine in coeliacs. The IRP induces the rapid expression of interleukin 15 (IL15) and other factors; thus the IRP activates the immune system. Studies show that, while in normal individuals the peptide is trimmed over time to produce an inactive peptide, in coeliacs a 19mer may lose a residue from one end or the other, yet after prolonged incubation 50% remains intact.
Intraepithelial lymphocytes and IL15 The release of IL15 is a major factor in coeliac disease, as IL15 has been found to attract the intraepithelial lymphocytes (IEL) that characterize Marsh grade 1 and 2 coeliac disease. Lymphocytes attracted by IL-15 bear markers enriched on natural killer cells rather than on normal helper T-cells. One hypothesis is that IL-15 induces the highly inflammatory Th1 response that activates T-helper cells (DQ2 restricted, gliadin specific) that then orchestrate the destructive response, but the reason why inflammatory cells develop prior to gliadin specific helper cells is not known. The IRP response differs from typical responses that stimulate IL15 release, such as viral infection. In addition, other cytokines such as IL12 and IL2, which are typically associated with T-helper cell stimulation, are not involved. In these two ways the innate peptide activation of T-cells in coeliac disease is unusual. IL-15 appears to induce increases in MICA and NKG2D that may increase brush-border cell killing. In addition, innate immunity to the IRP is involved in coeliac disease, dermatitis herpetiformis and possibly juvenile diabetes. The IRP targets monocytes and increases the production of IL-15 by an HLA-DQ independent pathway; a subsequent study showed that both this region and the "33mer" could create the same response, in cells from both treated coeliacs and non-coeliacs. However, unlike the non-coeliacs, the treated coeliac cells produce the disease marker nitrite. This indicates another abnormality in people with coeliac disease that allows stimulation to proceed past the normal healthy state. Despite extensive study, no genetic association for this stands out at present, implicating other environmental factors in the defect. Infiltrating peptides Some alpha gliadins have other direct-acting properties. Other gliadin peptides, one in a glutamine-rich region and another, "QVLQQSTYQLLQELCCQHLW", bind the chemoattractant receptor CXCR3. Gliadin binds to, blocks, and displaces a factor, I-TAC, that binds this receptor. In the process it recruits more CXCR3 receptor and increases MyD88 and zonulin expression. The factor it displaces, I-TAC, is a T-cell attractant. This peptide may also be involved in increased risk for type 1 diabetes, as zonulin production is also a factor. This triggering of zonulin ultimately results in the degradation of tight junctions, allowing large solutes, such as proteolytically resistant gliadin fragments, to enter behind the brush border membrane cells. One study examined the effect of ω-5 gliadin, the primary cause of WD-EIA, and found increased permeability of intestinal cells. Other studies show that IgE reactivity to ω-5 gliadin increases greatly when it is deamidated or crosslinked to transglutaminase. HLA class I restrictions to gliadin HLA class I restrictions to gliadin are not well characterized. HLA-A2 presentation has been investigated. The HLA-A antigens can mediate apoptosis in autoimmune disease, and HLA-A*0201 in conjunction with the HLA-DQ8 haplotype has been documented. The class I sites were found on the carboxyl end of gliadin at positions 123–131, 144–152, and 172–180. The involvement of class I responses may be minor, since antibodies to transglutaminase correlate with pathogenesis, and recognition of extracellular matrix and cell surface transglutaminase can explain the destruction within coeliac disease. This process involves antibody-dependent cellular cytotoxicity.
With regard to a receptor called Fas, commonly called the "death receptor", enterocytes appear to overexpress the receptor in coeliac lesions, and there is speculation that class I presentation of gliadin, tTG or other peptides invokes this signalling. The role of the class I receptor in cell-mediated programmed death of enterocytes is not known. MIC These proteins are called MHC class I polypeptide-related sequence A and B. Discovered by sequence homology analysis, these proteins are found on the surface of enterocytes of the small intestine and are believed to play a role in disease. Studies to date have revealed no mutation that would increase risk for MICA. HLA-DQ recognition of gluten HLA-DQ proteins present polypeptide regions of proteins of about 9 amino acids and larger in size (10 to 14 residues involved in binding is common for gliadin) to T lymphocytes. Gliadin proteins can be adsorbed by APCs. After digestion in the lysozomes of APCs, gliadin peptides can be recycled to the cell's surface bound to DQ, or they can be bound and presented directly from the cell surface. The major source of inflammatory gluten is dietary gluten. Optimal reactivity of gliadin occurs when the protein is partially digested by small intestinal lysozyme and trypsin into proteolytic digests. These polypeptides of gluten can then make their way behind the epithelial layer of cells (membrane), where APCs and T-cells reside in the lamina propria. (See: Underlying conditions.) The APC bearing DQ-gliadin peptide on its surface can bind to T-cells that have an antibody-like T-cell receptor that specifically recognizes DQ2.5 with gliadin. The complex (APC-DQ-gliadin) thus stimulates the gliadin specific T-cells to divide. These cells cause B-cells that recognize gliadin to proliferate. The B-cells mature into plasma cells producing anti-gliadin antibodies. This does not cause coeliac disease and is an unknown factor in idiopathic disease. Enteropathy is believed to occur when tissue transglutaminase (tTG) covalently links itself to gliadin peptides that enter the lamina propria of the intestinal villus. The resulting structure can be presented by APCs (with the same gliadin recognizing DQ isoforms) to T-cells, and B-cells can produce anti-transglutaminase antibodies. This appears to result in the destruction of the villi. The release of gliadin by transglutaminase does not lessen disease. When tTG-gliadin undergoes hydrolysis, the result is deamidated gliadin. Deamidated gliadin peptides are more inflammatory relative to natural peptides. Deamidated gliadin is also found in foods that have added gluten, such as wheat bread and food pastes. The major gluten proteins that are involved in coeliac disease are the α-gliadin isoforms. Alpha gliadin is composed of repeated motifs that, when digested, can be presented by HLA-DQ molecules. DQ2.5 recognizes several motifs in gluten proteins, and therefore HLA-DQ can recognize many motifs on each gliadin (see Understanding DQ haplotypes and DQ isoforms). However, numbers of different proteins from the grass tribe Triticeae have been found to carry motifs presented by HLA DQ2.5 and DQ8. Wheat has a large number of these proteins because its genome contains chromosomes derived from two goat grass species and a primitive wheat species. The positions of these motifs in different species, strains and isoforms may vary because of insertions and deletions in sequence.
There are a large number of wheat variants, and a large number of gliadins in each variant, and thus many potential sites. These proteins, once identified and sequenced, can be surveyed by sequence homology searches. HLA-DQ2.5 HLA-DQ recognition of gliadin is critical to the pathogenesis of gluten-sensitive enteropathy; it also appears to be involved in idiopathic gluten sensitivity (see Understanding DQ haplotypes and DQ isoforms). HLA-DQ2 primarily presents gliadins with the DQ2.5 (DQ α5-β2) isoform. DQA1*0202:DQB1*0201 homozygotes (DQ α2-β2) also appear to be able to present pathogenic gliadin peptides, but a smaller set with lower binding affinity. DQ2.5 and α-gliadin Many of these gliadin motifs are substrates for tissue transglutaminase and therefore can be modified by deamidation in the gut to create more inflammatory peptides. The most important recognition appears to be directed toward the α-/β-gliadins. As an example of the repetition of a motif across many proteins, the α-2 gliadin (57–68) and (62–75) sites are also found on α-4 and α-9 gliadin. Many gliadins contain the "α-20 motif", which is found in wheat and other Triticeae genera (see also: "α-20" gliadin motifs). Alpha-2 secalin, the glutinous protein in rye, is composed of two amino-terminal overlapping T-cell sites at positions (8–19) and (13–23). A2-gliadin Although T-cell responses to many prolamins can be found in coeliac disease, one particular gliadin, α2-gliadin, appears to be the focus of T-cells. These responses were dependent on prior treatment with tissue transglutaminase. α2-gliadin differs from the other α-gliadins specifically because it contains an insert of 14 amino acids. This particular insertion creates six T-cell sites where, in the most similar gliadins, there are two or fewer sites. The sites belong to three epitope groups, "α-I", "α-II", and "α-III". The insertion also creates a larger region of α-gliadin that is resistant to gastrointestinal proteases. The smallest digest of trypsin and chymotrypsin for the region is a 33mer. This particular region has three tissue transglutaminase sites, two of which lie within the 14 amino acid insertion; maximal stimulation is found with the deamidated sequence, with more than 80% reduction in response for the native, un-deamidated sequence at this position. Because of the density of T-cell sites on the "33mer" and its affinity when deamidated, it may be best treated as a single T-cell site of much higher affinity. This site alone may fulfill all the T-helper cell adaptive immune requirements with HLA-DQ2.5 involvement in some coeliac disease. DQ2.5 and γ-gliadin While gamma gliadin is not as important to DQ2.5 mediated disease as α-2 gliadin, there are a number of identified motifs. The gamma epitopes identified are DQ2-"γ-I", -"γ-II" (γ30), -"γ-III", -"γ-IV", -"γ-VI" and -"γ-VII". Some of these epitopes are recognized in children who do not have T-cell reactivities toward α-2 gliadin. A 26 residue proteolytically resistant fragment has been found on γ-5 gliadin, positions 26–51, that has multiple transglutaminase and T-cell epitopes. This site has five overlapping T-cell sites of DQ2-"γ-II", -"γ-III", -"γ-IV", and "γ-glia 2". Computer analysis of 156 prolamins and glutelins revealed many more resistant fragments; one, a γ-gliadin fragment containing 4 epitopes, was 68 amino acids in length. DQ2 and glutelins Triticeae glutelins are presented by DQ2 in some coeliacs.
In wheat, the low molecular weight glutenins often share structural similarity with the prolamins of related Triticeae species. Three motifs, K1-like (46–60), pGH3-like (41–59) and GF1 (33–51), have been identified. High molecular weight glutenin has also been identified as a potentially toxic protein. Some of the HMW glutenins show increased response after transglutaminase treatment, indicating that the sites may be similar to the alpha-gliadin and gamma-gliadin T-cell sites. DQ2.2 restricted gliadin sites DQ2.2 can present a smaller number of lower-affinity sites relative to DQ2.5. Some of these sites are found on γ-gliadin, the gliadin most similar to the prolamins of other Triticeae genera and apparently closest to the ancestral form. Antigen-presenting cells bearing DQ2.2 can present alpha-gliadin sites, for example the alpha-II region of the "33mer", and therefore the "33mer" may have a role in DQ2.2-bearing individuals, but the binding capacity is substantially lower. HLA-DQ8 HLA-DQ8 confers susceptibility to coeliac disease in a fashion somewhat similar to DQ2.5. The frequencies of DQ8 homozygotes, DQ2.5/DQ8 and DQ8/DQ2.2 among patients are higher than expected based on levels in the general population (see: Understanding DQ haplotypes and DQ isoforms). HLA-DQ8 is generally not as involved in the most severe complications, and it does not recognize the "33mer" of α-2 gliadin to the same degree as DQ2.5. A smaller number of gliadin (prolamin) peptides are presented by HLA-DQ8. A few studies have been done on the adaptive immune response of DQ8+/DQ2− individuals. DQ8 appears to rely much more on adaptive immunity to the carboxyl half of the alpha gliadins. In addition, it appears to react with gamma gliadin to a degree comparable to DQ2.5. T-cell responses to the high molecular weight glutenins may be more important in DQ8-mediated than in DQ2.5-mediated coeliac disease. Antibody recognition Antibody recognition of gluten is complex. Direct binding to gluten, as by anti-gliadin antibodies, has an ambiguous pathogenesis in coeliac disease. The crosslinking of gliadin with tissue transglutaminase leads to the production of anti-transglutaminase antibodies, but this is mediated through T-cell recognition of gliadin. The allergic recognition of gliadin by mast cells and eosinophils in the presence of IgE has notable direct consequences, such as exercise-induced anaphylaxis. Anti-gliadin antibodies, like those detected in coeliac disease, bind to the α-2 gliadin (57–73) site. This site is within the T-cell reactive "33mer" presented by DQ2.5. There has been some suggestion that wheat plays a role in juvenile diabetes, as antibodies to the non-glutinous seed storage protein glb-1 (a globulin) are implicated in crossreactive autoantigenic antibodies that destroy islet cells in the pancreas. Anti-gliadin antibodies have also been found to bind synapsin I. Antibodies to omega-gliadin and the HMW glutenin subunits are found most commonly in individuals with exercise-induced anaphylaxis and Baker's allergy, and these represent a potent class of gluten allergens. Non-glutinous proteins in wheat are also allergens; these include LTP (albumin/globulin), thioredoxin-hB, and wheat flour peroxidase. A particular 5-residue peptide, the Gln-Gln-Gln-Pro-Pro motif, has been identified as a major wheat allergen. Taming Triticeae immunochemistry New immunogenic motifs appear in the literature almost monthly, and new gliadin and Triticeae protein sequences appear that contain these motifs.
The HLA-DQ2.5-restricted peptide "IIQPQQPAQ", which produced approximately 50 identical-sequence hits in an NCBI BLAST search, is one of several dozen known motifs, and only a small fraction of Triticeae gluten variants have been examined. For this reason the immunochemistry is best discussed at the level of the Triticeae, because it is clear that the special immunological properties of these proteins have basal affinities to this taxon, appearing concentrated in wheat as a result of its three genomes. Some current studies claim that removing the toxicity of gliadins from wheat is plausible, but, as the above illustrates, the problem is monumental. There are many gluten proteins: three genomes, each with many genes for the alpha, gamma, and omega gliadins. Many genomic loci are present for each motif, and there are many motifs, some still unknown. Different strains of Triticeae exist for different industrial applications: durum for pasta and food pastes, two types of barley for beer, and bread wheats adapted to different areas with different growing conditions. Replacing these motifs is not a plausible task, since a contamination of 0.02% wheat in a gluten-free diet is considered to be pathogenic; it would require replacing the motifs in all known regional varieties, potentially thousands of genetic modifications. Class I and antibody responses are downstream of Class II recognition and are of little remedial value to change. The innate response peptide could be a silver bullet, assuming there is only one such peptide per protein and only a few genomic loci bearing the protein. Unresolved questions relevant to a complete understanding of immune responses to gluten include: Why is the rate of late-onset gluten sensitivity rising rapidly? Is this truly a wheat problem, or something that is being done to wheat, or to those who are eating wheat (for example, are communicable diseases a trigger)? Some individuals are susceptible by genetics (early onset), but many late-onset cases could have different triggers, because nothing genetically separates the 30 to 40% of people who could develop Triticeae sensitivity from the ~1% who, in their lifetime, will have some level of this disease. Another way to make wheat less immunogenic is to insert proteolytic sites in the longer motifs (the 25-mer and 33-mer), facilitating more complete digestion. References Immune system Gluten
Gluten immunochemistry
Biology
4,793
46,768,792
https://en.wikipedia.org/wiki/Great%20Wall%20of%20Sand
The "Great Wall of Sand" is a series of land reclamation (artificial island building) projects by the People's Republic of China (PRC) in the Spratly Islands area of the South China Sea between late 2013 to late 2016. 2013–2016 Spratlys reclamations In late 2013, the PRC embarked on very large scale reclamations at seven locations in order to strengthen territorial claims to the region demarcated by the nine-dash line. The artificial islands were created by dredging sand onto reefs which were then concreted to make permanent structures. By the time of the 2015 Shangri-La Dialogue, over of new land had been created. By December 2016 it had reached and "'significant' weapons systems, including anti-aircraft and anti-missile systems" had been installed. The name "Great Wall of Sand" was first used in March 2015 by U.S. Admiral Harry Harris, who was commander of the Pacific Fleet. The PRC states that the construction is for "improving the working and living conditions of people stationed on these islands", and that, "China is aiming to provide shelter, aid in navigation, weather forecasts and fishery assistance to ships of various countries passing through the sea." Defence analysts IHS Jane's states that it is a "methodical, well planned campaign to create a chain of air and sea-capable fortresses". These "military-ready" installations include sea-walls and deep-water ports, barracks and notably include runways on three of the reclaimed "islands", including Fiery Cross Reef, Mischief Reef and Subi Reef. Aside from geo-political tensions, concerns have been raised about the environmental impact on fragile reef ecosystems through the destruction of habitat, pollution and interruption of migration routes. The Asia Maritime Transparency Initiative's "Island tracker" has listed the following locations as the sites of the PRC island reclamation activities: Total reclaimed area by PRC on 7 reefs: approx. Machinery used The PRC used hundreds of dredges and barges including a giant self-propelled dredger, the Tian Jing Hao. Built in 2009 in China, the Tian Jing Hao is a long seagoing cutter suction dredger designed by German engineering company Vosta LMG; (Lübecker Maschinenbau Gesellschaft (de)). At 6,017 gross tons, with a dredging capacity of 4500 m3/h, it is credited as being the largest of its type in Asia. It has been operating on Cuarteron Reef, the Gaven Reefs, and at Fiery Cross Reef. Strategic importance More than half of the world's annual merchant fleet tonnage passes through the Strait of Malacca, Sunda Strait, and Lombok Strait, with the majority continuing on into the South China Sea. Tanker traffic through the Strait of Malacca leading into the South China Sea is more than three times greater than Suez Canal traffic, and well over five times more than the Panama Canal. The People's Republic of China (PRC) has stated its unilateral claim to almost the entire body of water. Legal issues Territorial waters of an artificial island As the Mischief and Subi Reefs were under water prior to reclamations, they are considered by the Third United Nations Conference on the Law of the Sea (UNCLOS III) as "sea bed" in "international waters". Although the PRC had ratified a limited UNCLOS III not allowing innocent passage of war ships, according to the UNCLOS III, features built on the sea bed cannot have territorial waters. 
2016 Permanent Court of Arbitration ruling on the construction of artificial islands outside a state's Exclusive Economic Zone (EEZ) UNCLOS contains the following provision: Article 60(1) - In the exclusive economic zone, the coastal State shall have the exclusive right to construct and to authorize and regulate the construction, operation and use of: (a) artificial islands. If China's artificial islands fell within its EEZ, then it would be within its rights to construct them (although this would still be subject to the environmental provisions of UNCLOS). In 2016, the Permanent Court of Arbitration (PCA) reached a decision in the dispute between the Philippines and China, resolving the question of whether China's artificial islands in the Spratly archipelago fell within its own EEZ. It was held that China's EEZ did not extend to the Spratly chain, that China therefore lacked the authority to construct an artificial island in that region, and that China had consequently infringed on the Philippines' EEZ by constructing and maintaining an artificial island on Mischief Reef. This violated the Philippines' "exclusive right to construct and to authorize and regulate the construction, operation and use of… artificial islands" as stated in Article 60(1)(a) of UNCLOS. China disputed the outcome of the PCA's decision, labelling it "null and void". China had argued that Taiping Island should be classified as an 'island' under international law, which would have extended China's claimed EEZ another 200 nm from Taiping Island, as 'islands' are given the same status as mainland territory under UNCLOS. However, as Taiping Island was "incapable of self-sufficiently providing for a stable community of inhabitants", it fell under UNCLOS as a 'rock' rather than a naturally occurring island. This gave China only a 12 nm territorial sea surrounding Taiping Island and meant it could not extend its EEZ to the Spratly archipelago. Moreover, the mere creation of artificial islands is not enough to create an EEZ within which those islands would then fall, as outlined in Article 60(8) of UNCLOS. Environmental legal issues The PRC has ratified UNCLOS III; the convention establishes general obligations for safeguarding the marine environment and protecting freedom of scientific research on the high seas, and also creates an innovative legal regime for controlling mineral resource exploitation in deep seabed areas beyond national jurisdiction, through an International Seabed Authority and the common heritage of mankind principle.
UNCLOS contains the following environmental commitments: Article 192 - "States have the obligation to protect and preserve the marine environment" Article 194(2) - "States shall take all measures necessary to ensure that activities under their jurisdiction or control are so conducted as not to cause damage by pollution to other States and their environment" Article 194(5) - "The measures taken in accordance with this Part shall include those necessary to protect and preserve rare or fragile ecosystems as well as the habitat of depleted, threatened or endangered species and other forms of marine life" In addition to UNCLOS, China is a party to the Convention on Biological Diversity, which contains the following provision: Article 3 - "States have… the responsibility to ensure that activities within their jurisdiction or control do not cause damage to the environment of other States or of areas beyond the limits of national jurisdiction" China's land reclamation efforts and creation of channels for ships have destroyed portions of reefs, killing coral and other organisms in the process. In the process of island building, the sediment deposited on the reefs "can wash back into the sea, forming plumes that can smother marine life and could be laced with heavy metals, oil and other chemicals from the ships and shore facilities being built." These plumes damage coral tissue and can block sunlight from organisms such as reef-building corals, which depend on that sunlight to survive. In the Spratly archipelago, China engaged in shallow-water dredging, removing "not only sand and gravel, but also the ecosystems of the lagoon and the reef flat, important parts of a reef." The damaged reefs from which Chinese dredgers gathered sand and gravel "may not fully recover for up to 10 to 15 years." The reefs on which the dredged-up sand and gravel is placed suffer obvious harm, as coral can no longer grow beneath an artificial island. Placing artificial islands on top of reefs also harms fisheries, as these reefs help replenish depleted fish stocks in the South China Sea's coastal areas. The negative marine impact of China's artificial island building violates Articles 192 and 194(5) of UNCLOS. It also violates Article 194(2) of UNCLOS and Article 3 of the Convention on Biological Diversity, as the artificial island creation has occurred outside of China's EEZ and within that of other states. Therefore, the damage is to other states and their environment (per Article 194(2) of UNCLOS), and at the very least the damage is in an area beyond the limits of national jurisdiction (per Article 3 of the Convention on Biological Diversity). Regional concept According to Chinese sources, the concept was invented in 1972 by Vietnam's Bureau of Survey and Cartography under the Office of Premier Phạm Văn Đồng, which printed "The World Atlas" and said, "The chain of islands from the Nansha and Xisha Islands to Hainan Island, Taiwan Island, the Penghu Islands and the Zhoushan Islands are shaped like a bow and constitute a Great Wall defending the China mainland." Reactions States Australia – Opposed to "any coercive or unilateral actions to change the status quo in the South and East China Sea", Australia continues to fly routine surveillance operations and exercise the right to freedom of navigation in international airspace "in accordance with the international civil aviation convention, and the United Nations Convention on the Law of the Sea."
Amid rising Australia-China diplomatic tensions in 2020, Australia strengthened its opposition by making a submission to the United Nations declaring the works "completely unlawful", following the position of its ally the United States. China – Following confrontations between US P-8A Poseidon aircraft and the Chinese Navy over the constructions in May 2015, China stated that it has "the right to engage in monitoring in the relevant air space and waters to protect the country's sovereignty and prevent accidents at sea." South Korea – No official stance; it maintains an "increasingly notable silence on freedom of navigation in the South China Sea". United States – The construction is considered to be a key motivating factor behind the Obama administration's "Asia Pivot" military strategy. It believes "that China's activities in the South China Sea are driven by nationalism, part of a wider strategy aimed at undercutting US influence in Asia." It has declared that it would operate military aircraft in the region "'in accordance with international law in disputed areas of the South China Sea' and would continue to do so 'consistent with the rights freedoms and lawful uses of the sea.'" Since October 2015, when the USS Lassen passed close to man-made land built upon Subi Reef, the US has been conducting freedom of navigation operations (FONOPs) near the artificial islands approximately every three months using Arleigh Burke-class guided-missile destroyers. In 2020, amid rising diplomatic and economic tensions in US-China relations, the United States declared that "Beijing's claims to offshore resources across most of the South China Sea are completely unlawful, as is its campaign of bullying to control them." Organizations ASEAN – The Association of Southeast Asian Nations stated that the constructions "may undermine peace, security and stability" in the region as well as having a strongly negative impact on the marine environment and fishery stocks. G7 – In a "Declaration on maritime security" before the 41st G7 summit, the G7 stated that, "We continue to observe the situation in the East and South China Seas and are concerned by any unilateral actions, such as large scale land reclamation, which change the status quo and increase tensions. We strongly oppose any attempt to assert territorial or maritime claims through the use of intimidation, coercion or force." In July 2016, the Permanent Court of Arbitration in The Hague issued a decision stating that China has no historic title over the area. Ecological impact Aside from geo-political tensions, concerns have been raised about the environmental impact on fragile reef ecosystems through the destruction of habitat, pollution and interruption of migration routes. These new islands are built on reefs previously below the level of the sea. To back-fill these seven artificial islands to a height of a few meters, China had to destroy the surrounding reefs and pump up vast quantities of sand and coral, resulting in significant and irreversible damage to the environment. Frank Muller-Karger, professor of biological oceanography at the University of South Florida, said sediment "can wash back into the sea, forming plumes that can smother marine life and could be laced with heavy metals, oil and other chemicals from the ships and shore facilities being built." Such plumes threaten the biologically diverse reefs throughout the Spratlys, which Dr. Muller-Karger said may have trouble surviving in sediment-laden water.
Rupert Wingfield-Hayes, visiting the vicinity of the Philippine-controlled island of Pagasa by plane and boat, said he saw Chinese fishermen poaching and destroying the reefs on a massive scale. As he watched Chinese fishermen poach endangered species like massive giant clams, he noted: "None of this proves China is protecting the poachers. But nor does Beijing appear to be doing anything to stop them. The poachers we saw showed absolutely no sign of fear when they saw our cameras filming them". He concludes: "However shocking the reef plundering I witnessed, it is as nothing compared to the environmental destruction wrought by China's massive island building programme nearby. The latest island China has just completed at Mischief Reef is more than 9km (six miles) long. That is 9km of living reef that is now buried under millions of tonnes of sand and gravel." A 2014 United Nations Environment Programme (UNEP) report noted that "Sand is rarer than one thinks." "The average price of sand imported by Singapore was US$3 per tonne from 1995 to 2001, but the price increased to US$190 per tonne from 2003 to 2005". Although the Philippines and the PRC had both ratified UNCLOS III, in the case of Johnson South Reef, Hughes Reef and Mischief Reef the PRC dredged sand for free in the EEZ the Philippines had claimed since 1978, arguing this to be the "waters of China's Nansha Islands". "Although the consequences of substrate mining are hidden, they are tremendous. Aggregate particles that are too fine to be used are rejected by dredging boats, releasing vast dust plumes and changing water turbidity". John McManus, a professor of marine biology and ecology at the University of Miami, said: "The worst thing anyone can do to a coral reef is to bury it under tons of sand and gravel ... There are global security concerns associated with the damage. It is likely broad enough to reduce fish stocks in the world's most fish-dependent region." He explained that the reason "the world has heard little about the damage inflicted by the People's Republic of China to the reefs is that the experts can't get to them", and noted: "I have colleagues from the Philippines, Taiwan, PRC, Vietnam and Malaysia who have worked in the Spratly area. Most would not be able to get near the artificial islands except possibly some from PRC, and those would not be able to release their findings". See also Foreign policy of China Great Wall of China Great Firewall Nine-dash line References Territorial disputes of China Land reclamation South China Sea History of the Spratly Islands Coral reefs Artificial islands of Asia Chinese irredentism
Great Wall of Sand
Biology
3,197
413,216
https://en.wikipedia.org/wiki/Chloroacetic%20acid
Chloroacetic acid, industrially known as monochloroacetic acid (MCA), is the organochlorine compound with the formula ClCH2CO2H. This carboxylic acid is a useful building block in organic synthesis. It is a colorless solid. Related compounds are dichloroacetic acid and trichloroacetic acid. Production Chloroacetic acid was first prepared (in impure form) by the French chemist Félix LeBlanc (1813–1886) in 1843 by chlorinating acetic acid in the presence of sunlight, and in 1857 (in pure form) by the German chemist Reinhold Hoffmann (1831–1919) by refluxing glacial acetic acid in the presence of chlorine and sunlight, and then by the French chemist Charles Adolphe Wurtz by hydrolysis of chloroacetyl chloride (ClCH2COCl), also in 1857. Chloroacetic acid is prepared industrially by two routes. The predominant method involves chlorination of acetic acid, with acetic anhydride as a catalyst: CH3CO2H + Cl2 → ClCH2CO2H + HCl. This route suffers from the production of dichloroacetic acid and trichloroacetic acid as impurities, which are difficult to separate by distillation; these arise by further chlorination: ClCH2CO2H + Cl2 → Cl2CHCO2H + HCl, and onward to Cl3CCO2H. The second method entails hydrolysis of trichloroethylene: CCl2=CHCl + 2 H2O → ClCH2CO2H + 2 HCl. The hydrolysis is conducted at 130–140 °C in a concentrated (at least 75%) solution of sulfuric acid. This method produces a highly pure product, unlike the halogenation route. However, the significant quantities of HCl released have led to the increased popularity of the halogenation route. Approximately 420,000 tonnes are produced globally per year. Uses and reactions Most reactions take advantage of the high reactivity of the C–Cl bond. In its largest-scale application, chloroacetic acid is used to prepare the thickening agents carboxymethyl cellulose and carboxymethyl starch. Chloroacetic acid is also used in the production of phenoxy herbicides by etherification with chlorophenols. In this way 2-methyl-4-chlorophenoxyacetic acid (MCPA), 2,4-dichlorophenoxyacetic acid (2,4-D), and 2,4,5-trichlorophenoxyacetic acid (2,4,5-T) are produced. It is the precursor to the herbicide glyphosate and the insecticide dimethoate. Chloroacetic acid is converted to chloroacetyl chloride, a precursor to adrenaline (epinephrine). Displacement of chloride by sulfide gives thioglycolic acid, which is used as a stabilizer in PVC and a component in some cosmetics. Illustrative of its usefulness in organic chemistry is the O-alkylation of salicylaldehyde with chloroacetic acid, followed by decarboxylation of the resulting ether, producing benzofuran. Safety Like other chloroacetic acids and related halocarbons, chloroacetic acid is a hazardous alkylating agent. The LD50 for rats is 76 mg/kg. It is classified as an extremely hazardous substance in the United States as defined in Section 302 of the U.S. Emergency Planning and Community Right-to-Know Act (42 U.S.C. 11002), and is subject to strict reporting requirements by facilities which produce, store, or use it in significant quantities. See also Fluoroacetic acid References External links Sublimed crystals of the acid in a brown glass bottle (photo) Acetic acids Alkylating agents Organochlorides Organic compounds with 2 carbon atoms
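As a quick arithmetic check on the two production routes above, the short Python sketch below verifies that the reconstructed equations are atom-balanced; the formulas are standard and the equations mirror those given in the text, but the dictionary representation is just an illustrative convenience.

# Verify atom balance for the two chloroacetic acid production routes
# described above. Each species is written as an atom-count dictionary.
def add_atoms(*species):
    total = {}
    for counts in species:
        for atom, n in counts.items():
            total[atom] = total.get(atom, 0) + n
    return total

acetic_acid       = {"C": 2, "H": 4, "O": 2}           # CH3CO2H
chlorine          = {"Cl": 2}                           # Cl2
chloroacetic      = {"C": 2, "H": 3, "Cl": 1, "O": 2}   # ClCH2CO2H
hcl               = {"H": 1, "Cl": 1}                   # HCl
trichloroethylene = {"C": 2, "H": 1, "Cl": 3}           # CCl2=CHCl
water             = {"H": 2, "O": 1}                    # H2O

# Route 1: CH3CO2H + Cl2 -> ClCH2CO2H + HCl
assert add_atoms(acetic_acid, chlorine) == add_atoms(chloroacetic, hcl)

# Route 2: CCl2=CHCl + 2 H2O -> ClCH2CO2H + 2 HCl
assert add_atoms(trichloroethylene, water, water) == add_atoms(chloroacetic, hcl, hcl)

print("Both routes are atom-balanced.")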
Chloroacetic acid
Chemistry
771
16,021,637
https://en.wikipedia.org/wiki/Sliding%20window%20based%20part-of-speech%20tagging
Sliding window based part-of-speech tagging is used to part-of-speech tag a text. A high percentage of words in a natural language are words which, out of context, can be assigned more than one part of speech. The percentage of these ambiguous words is typically around 30%, although it depends greatly on the language. Solving this problem is very important in many areas of natural language processing. For example, in machine translation, changing the part-of-speech of a word can dramatically change its translation. Sliding window based part-of-speech taggers are programs which assign a single part-of-speech to a given lexical form of a word, by looking at a fixed-size "window" of words around the word to be disambiguated. The two main advantages of this approach are: It is possible to train the tagger automatically, removing the need to tag a corpus by hand. The tagger can be implemented as a finite state automaton (Mealy machine). Formal definition Let Γ be the set of grammatical tags of the application, that is, the set of all possible tags which may be assigned to a word, and let W be the vocabulary of the application. Let T : W → P(Γ) be a function for morphological analysis which assigns each word w its set of possible tags, T(w) ⊆ Γ; it can be implemented by a full-form lexicon or a morphological analyser. Let Σ be the set of word classes, which in general will be a partition of W with the restriction that for each class σ ∈ Σ all of the words w ∈ σ receive the same set of tags, that is, all of the words in each word class belong to the same ambiguity class. Normally, Σ is constructed so that for high-frequency words each word class contains a single word, while for low-frequency words each word class corresponds to a single ambiguity class. This allows good performance for high-frequency ambiguous words without requiring too many parameters for the tagger. With these definitions it is possible to state the problem in the following way: given a text w[1] w[2] ... w[L], each word w[t] is assigned a word class σ[t] (either by using the lexicon or the morphological analyser) in order to get an ambiguously tagged text σ[1] σ[2] ... σ[L]. The job of the tagger is to get a tagged text γ[1] γ[2] ... γ[L] (with γ[t] ∈ T(w[t])) as correct as possible. A statistical tagger looks for the most probable tag sequence for an ambiguously tagged text: γ* = argmax_γ p(γ[1] ... γ[L] | σ[1] ... σ[L]). Using Bayes' formula, this is converted into: γ* = argmax_γ p(γ[1] ... γ[L]) p(σ[1] ... σ[L] | γ[1] ... γ[L]), where p(γ[1] ... γ[L]) is the probability of the tag sequence (syntactic probability) and p(σ[1] ... σ[L] | γ[1] ... γ[L]) is the probability that this tag sequence corresponds to the text (lexical probability). In a Markov model, these probabilities are approximated as products. The syntactic probabilities are modelled by a first-order Markov process: p(γ[1] ... γ[L]) = ∏_{t=0..L} p(γ[t+1] | γ[t]), where γ[0] and γ[L+1] are delimiter symbols. Lexical probabilities are independent of context: p(σ[1] ... σ[L] | γ[1] ... γ[L]) = ∏_{t=1..L} p(σ[t] | γ[t]). One form of tagging is to approximate the first probability formula using only a local context: p(γ[t]) ≈ p(γ[t] | C(−) σ[t] C(+)), where C(−) = σ[t−N(−)] ... σ[t−1] is the left context of size N(−) and C(+) = σ[t+1] ... σ[t+N(+)] is the right context of size N(+). In this way the sliding window algorithm only has to take into account a context of size N(−) + N(+) + 1. For most applications N(−) = N(+) = 1. For example, to tag the ambiguous word "run" in the sentence "He runs from danger", only the tags of the words "He" and "from" need to be taken into account, as in the sketch below. Further reading Sanchez-Villamil, E., Forcada, M. L., and Carrasco, R. C. (2005). "Unsupervised training of a finite-state sliding-window part-of-speech tagger". Lecture Notes in Computer Science / Lecture Notes in Artificial Intelligence, vol. 3230, p. 454–463. Computational linguistics
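The window approximation above lends itself to a very small implementation. The following Python sketch estimates p(tag | left class, right class) from a toy hand-tagged corpus and then disambiguates with N(−) = N(+) = 1; the corpus, the tagset, and the treatment of each word as its own word class are illustrative assumptions (the method in the article trains without a tagged corpus, which this sketch does not attempt).

# Minimal supervised sliding-window tagger sketch (one word class of
# context on each side). Toy corpus and tagset are illustrative.
from collections import Counter, defaultdict

# (word, correct tag) pairs; "runs" is ambiguous between NOUN and VERB.
corpus = [
    [("he", "PRON"), ("runs", "VERB"), ("from", "PREP"), ("danger", "NOUN")],
    [("the", "DET"), ("runs", "NOUN"), ("were", "VERB"), ("long", "ADJ")],
]

# Lexicon of possible tags per word (stands in for the function T).
lexicon = defaultdict(set)
for sentence in corpus:
    for word, tag in sentence:
        lexicon[word].add(tag)

# Count tags seen in each context (previous word, next word).
# "#" is the sentence delimiter symbol.
context_counts = defaultdict(Counter)
for sentence in corpus:
    padded = [("#", "#")] + sentence + [("#", "#")]
    for i in range(1, len(padded) - 1):
        word, tag = padded[i]
        context = (padded[i - 1][0], padded[i + 1][0])
        context_counts[context][tag] += 1

def tag_word(prev_word, word, next_word):
    """Pick the most frequent compatible tag for this context, falling
    back to an arbitrary lexicon tag when the context was never seen."""
    candidates = lexicon[word]
    if len(candidates) == 1:
        return next(iter(candidates))
    counts = context_counts[(prev_word, next_word)]
    best = [t for t, _ in counts.most_common() if t in candidates]
    return best[0] if best else sorted(candidates)[0]

print(tag_word("he", "runs", "from"))   # expected output: VERB

Because the disambiguation decision depends only on a bounded window of word classes, the trained model can be compiled into the finite-state (Mealy machine) form mentioned above.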
Sliding window based part-of-speech tagging
Technology
736
25,067,334
https://en.wikipedia.org/wiki/Industrial%20fire
An industrial fire is a type of industrial disaster involving a conflagration which occurs in an industrial setting. Industrial fires often, but not always, occur together with explosions. They are most likely to occur in facilities where large quantities of flammable material are present, such as petroleum, petroleum products such as petrochemicals, or natural gas. Processing flammable materials such as hydrocarbons in units at high temperature and/or high pressure makes the hazards more severe. Facilities with such combustible material include oil refineries, tank farms (oil depots), natural gas processing plants, and chemical plants, particularly petrochemical plants. Such facilities often have their own fire departments for firefighting. Sometimes dusts or powders are vulnerable to combustion, and their ignition can cause dust explosions. Severe industrial fires have involved multiple injuries, loss of life, costly financial loss, and damage to the surrounding community or environment. Process Hazard Analysis (PHA) is a set of organized and systematic assessments of the potential hazards of an industrial process, used to analyze potential causes and consequences of fires, explosions, releases of toxic or flammable chemicals, and major spills of hazardous chemicals. Industrial fires such as the 2012 Amuay refinery explosion and the 1930 Standard Oil refinery fire serve as stark reminders of the risks inherent in industrial activities involving flammable materials, and they underscore the importance of robust safety measures and protocols to prevent and mitigate such disasters. PHA plays a critical role here: by systematically identifying and analyzing the causes and consequences of fires, explosions, chemical releases, and spills, it enables industrial facilities to address vulnerabilities proactively and to implement preventive measures that reduce the likelihood of accidents. In facilities where flammable materials are processed at high temperatures and pressures, the risk of fire and explosion is heightened; oil refineries, chemical plants, and other industrial sites handling combustible substances must therefore adhere to stringent safety standards and regulations to safeguard workers, the surrounding community, and the environment. Safety measures and regulations vary depending on the local, state or federal agency jurisdiction. The presence of on-site fire departments in industrial facilities reflects a proactive approach to emergency response: through regular training, drills, and simulation exercises, these fire departments are equipped to contain and extinguish fires swiftly, reducing the risk of widespread damage and loss. As industrial processes evolve and technologies advance, continuous vigilance, adherence to best practices, and a strong commitment to safety remain paramount, and the integration of PHA into safety management practices enhances preparedness, identifies vulnerabilities, and promotes a culture of safety across industrial operations.
References External links Types of fire
Industrial fire
Chemistry
632
1,569,100
https://en.wikipedia.org/wiki/OGLE-TR-10
OGLE-TR-10 is a distant, magnitude 16 star in the constellation of Sagittarius, located near the Galactic Center. It is listed as an eclipsing variable star, with the eclipses caused by the passage of its planet, as noted in the discovery papers. Planetary system The star hosts OGLE-TR-10b, a transiting planet found by the Optical Gravitational Lensing Experiment (OGLE) survey in 2002. See also Optical Gravitational Lensing Experiment or OGLE List of extrasolar planets References External links Planetary transit variables Sagittarius (constellation) G-type main-sequence stars Planetary systems with one confirmed planet Sagittarii, V5125
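Since the eclipses here are planetary transits, their depth follows directly from the ratio of planet to star radius: depth ≈ (Rp/R⋆)². The Python sketch below uses a Jupiter-sized planet and a Sun-sized star as illustrative assumptions, not measured values for the OGLE-TR-10 system.

# Approximate transit depth for a Jupiter-sized planet crossing a
# Sun-sized star. These radii are illustrative assumptions, not
# measured values for OGLE-TR-10b.
R_JUPITER_KM = 71_492.0   # equatorial radius of Jupiter
R_SUN_KM = 695_700.0      # nominal solar radius

depth = (R_JUPITER_KM / R_SUN_KM) ** 2
print(f"Fractional dip in brightness: {depth:.4f} (~{depth * 100:.2f}%)")
# Roughly a 1% dip, which is the scale of photometric signal the
# OGLE survey detects in transit searches.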
OGLE-TR-10
Astronomy
145
36,079,037
https://en.wikipedia.org/wiki/Unit%20Rig
Unit Rig was a manufacturer of haul trucks, sold under the brand name Lectra Haul. History Unit Rig was founded in 1935 by Hugh S. Chancey and two partners, Jerry R. Underwood and William C. Guier, who formed a partnership to build a rotary drill rig for oil field work that was more mobile than existing designs. The group based the company in Tulsa, Oklahoma. By 1947, the partnership between the men was beginning to break down when Underwood died, leaving Chancey and Guier as the remaining partners. Guier eventually took control of Unit Rig and in 1951 sold it to Kenneth W. Davis, who already had several oil-field related businesses under the parent company name of Kendavis Industries International. During the 1950s, Unit Rig began to diversify its products away from the limited oil-field market and looked toward mining products to utilize its manufacturing facilities. R. G. LeTourneau had already adapted compact electric drive wheels to construction machinery with great success, prompting Unit Rig to investigate the possibility of building a truck and finding a suitable client to take the finished machine. By 1960, the M-64 prototype truck was completed, using General Electric drive systems and featuring special Goodyear low-pressure tires for the suspension. This truck was not a success; however, Unit Rig went on to be a very successful maker of off-highway dump trucks sold under the brand name of Lectra Haul (so called because of their electric drive system). In the 1970s, a large order for M200 trucks was received, to be shipped to the USSR. At that time, the US government did not allow trade with the Soviet Union. The company negotiated a deal with the Canadian government, allowing shipment from Canada, provided the company could verify the use of 40% Canadian materials and labor. The trucks required large quantities of steel, the majority of which was purchased from Canadian companies and shipped to the Tulsa plant for fabrication and machining. Final assembly was done in Canada. The M200 utilized a diesel engine to generate power for the GE electric wheel motors. The wheel motors were the most expensive component of the M200. In 1976 the wheel motors cost $64,000 apiece, which equates to approximately $500,000 each in today's dollars. Lectra Haul was eventually sold to Terex, then to Bucyrus Erie, which was taken over by Caterpillar Inc. around June 2011. Some Lectra Haul trucks are still sold alongside Caterpillar's own trucks but are branded as Unit Rig. Unit Rig trucks enjoyed a reputation for a simplified, customer-friendly design with very little requirement for parts during operation. Large fleets of Unit Rig trucks were sold in countries within the Arctic Circle, and they proved to be survivors. Their rock-solid performance forced competitors to spend huge resources on R&D to match it, not only in sub-zero conditions but also in sub-Saharan heat. Unit Rig had just four engineers to design entire new trucks, along with a small but very experienced support group. In the early 1990s, Unit Rig brought out the first truck designed for a minimum of 8,000 hours of operation per year, and successfully proved it; 8,000 hours of annual operation corresponds to an availability of over 91%. Unit Rig launched the industry's first 150-ton AC-drive truck (MT3300AC) with the GE Invertex drive, followed by the 240-ton AC MT4400AC. Both trucks wrote new chapters in TCO (total cost of ownership), an indicator of the lifetime expense a client incurs on parts, fuel and capital.
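As a rough check on the inflation-adjusted wheel-motor price quoted above, the sketch below compounds an assumed average US inflation rate in Python; the rate and year span are illustrative assumptions, not official CPI figures.

# Rough inflation adjustment of the 1976 wheel-motor price. The 4.3%
# average annual rate is an assumed illustrative figure, not CPI data.
price_1976 = 64_000
years = 2024 - 1976
avg_inflation = 0.043

adjusted = price_1976 * (1 + avg_inflation) ** years
print(f"${adjusted:,.0f} in 2024 dollars")
# With this assumed rate the result lands near the ~$500,000 figure
# cited in the text; CPI-based estimates will differ somewhat.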
Unit Rig's truck design philosophy has been likened to the open-source approach later popularized by Linux. In 1998, Unit Rig executed the mining industry's largest single order, supplying 160 trucks to Coal India Limited. Products Early models: M64; M85 (Kennecott Copper Co.'s Chino Mines at Santa Rita, NM, received the first three M85s produced, serial numbers 52, 53, and 54; serial #51 was the factory prototype); M100; M120; M200 (the first 200-tonne-capacity truck with two axles). Second generation: MK24; MK30 (became the MT3000); MK33 (became the MT3300); MK36; MT1900 (upgraded to the MT2050, then the MT2120, then became the MT4000). Third generation (square appearance): MT3000; MT3300; MT3600; MT3700; MT4000. Fourth generation (rounded appearance; sold as Lectra Haul, Terex and Bucyrus, and now badged Unit Rig under Caterpillar ownership): MT2700; MT3000; MT3300; MT3600; MT3700; MT4400; MT5500; Bucyrus MT6300AC (400-ton-class truck). Coal haulers: BD145 (became the BD30 unitized bottom-dump hauler); BD160; BD180; BD240/270. See also Haul truck References Orlemann, Eric (2012). Haulpak and Lectra Haul: The World's Greatest Off-Highway Earthmoving Trucks. Wisconsin: Iconografix. Haul trucks Mining equipment companies Trucks of the United States
Unit Rig
Engineering
1,041
53,378,620
https://en.wikipedia.org/wiki/Erycina%20pusilla
Erycina pusilla is a species of flowering plant, a tiny orchid with an overall size of 2.5 to 3.5 cm, from the orchid family, Orchidaceae. The species is native to Mexico, Belize, Central America, South America and Trinidad. The leaves are shaped like a lance head (lanceolate) and arranged in a fan. Unlike other similar orchids, E. pusilla never develops lengthwise-folded (conduplicate) leaves or extra storage organs (pseudobulbs). The blooming season is from fall to spring. It produces solitary light-yellow orchid-shaped flowers. Relative to the overall plant size, these flowers can be quite large (1 to 2.5 cm). The lateral sepals are united near the flower base. Compared to other orchids, E. pusilla has a short life cycle (about 17 months). It can reach adulthood in just one season, while the majority of orchids take up to 5 years to reach maturity. Name It is commonly known as the tiny psygmorchis, due to its miniature size. The current scientific name is Erycina pusilla. The etymology of its scientific name refers to its beauty and tiny size: "Erycina" is a byname of the Roman goddess of beauty, Venus (Venus of Eryx), and "pusilla" is Latin for "very little". It was formerly classified in the genus Psygmorchis, due to its fan-shaped leaves ("psygmos" is Greek for fan). Synonyms Homotypic synonyms: Epidendrum pusillum L., Sp. Pl. ed. 2: 1352 (1763) Cymbidium pusillum (L.) Sw., Nova Acta Regiae Soc. Sci. Upsal. 6: 74 (1799). Oncidium pusillum (L.) Mutel, Mém. Soc. Roy. Centr. Agric. Dépt. N. 1835-1836: 84 (1837). Tolumnia pusilla (L.) Hoehne, Ic. Orch. Bras.: 231 (1949). Psygmorchis pusilla (L.) Dodson & Dressler, Phytologia 24: 288 (1972). Heterotypic synonyms: Oncidium iridifolium Kunth in F.W.H.von Humboldt, A.J.A.Bonpland & C.S.Kunth, Nov. Gen. Sp. 1: 344 (1816). Epidendrum ventilabrum Vell., Fl. Flumin. 9: t. 32 (1831). Oncidium allemanii Barb.Rodr., Gen. Spec. Orchid. 2: 185 (1882). Oncidium pusillum var. megalanthum Schltr., Repert. Spec. Nov. Regni Veg. Beih. 27: 115 (1924). Psygmorchis allemanii (Barb.Rodr.) Garay & Stacy, Bradea 1: 408 (1974). Erycina allemanii (Barb.Rodr.) N.H.Williams & M.W.Chase, Lindleyana 16: 136 (2001). Distribution and habitat Erycina pusilla can be found in the neotropical region, including South and Central America, the southern Mexican lowlands, the Caribbean islands and southern Florida. Its habitat consists of humid forests at a range of elevations, with temperatures varying from warm to hot. Like many orchids, E. pusilla grows harmlessly upon other plants. It gets moisture and nutrients from the surroundings without affecting the host plant (commensalism). Its quick development permits this orchid to grow on relatively short-lived sites such as twigs or even leaves of bushes and trees, such as coffee plants or hibiscus. For this reason, it is usually classified as a twig epiphyte. Use in science Erycina pusilla is a promising model candidate for Oncidium research. Its relatively tiny size and short life cycle facilitate its cultivation. Additionally, it has the ability to complete its life cycle in vitro. Functional genomic research is easier because E. pusilla has only 6 chromosomes and a small genome (1.5 pg per 1C nucleus). Another aspect that favors the use of this orchid in research is the rarity of pollination and seed production in nature, which reduces the risk of undesired propagation of transgenic lines. The rapid growth and the low chromosome numbers make E.
pusilla an excellent parent for traditional hybridization methods. All these characteristics make E. pusilla a promising model not only for research but also for commercial breeding. Beyond research and commercial purposes, E. pusilla also has medicinal applications: the whole plant, cooked and ingested, is used to treat colic and stomachache, and a boiled whole-plant preparation is used as a wash for lacerations, cuts and wounds. In vitro cultivation Sporadic flowering in flasks was first reported by Livingston (1962), although in vitro cultivation was not established until 2007. The primary culture of E. pusilla becomes a callus after about one month of cultivation. Three months later it reaches the leaf stage, and after eight months the flowering stage begins. After two and a half months more, E. pusilla produces fruits. A new cycle can start from a new primary culture: a protocorm-like body (PLB) in vitro. Genome The transcriptome sequence of E. pusilla is available (Orchidstra database). Some basic molecular resources have also been established, including the sequencing of the chloroplast genome, the transcriptome and a BAC library. A miRNA database of E. pusilla, including the identification of miRNA biosynthesis-related genes and miRNA families, was established in 2013. The chloroplast genome has been sequenced efficiently and economically by using the BAC library and next-generation sequencing. The chloroplast genome of E. pusilla is 143,164 bp in size and contains a pair of inverted repeats (IRa and IRb) of 23,439 bp separated by large and small single-copy regions of 84,189 and 12,097 bp, respectively. These results show that the chloroplast gene order of E. pusilla is similar to that of Oncidium. In Taiwan, differing hybridization compatibility of E. pusilla with Oncidium, Rodriguezia and Tolumnia was found by crossing it with several important Oncidiinae orchids. MADS-box genes Due to their role in plant growth, the characterization of MADS-box genes in E. pusilla has become a hot topic for both researchers and commercial orchid breeders. MADS-box genes encode MADS-domain proteins, which are generally transcription factors. In plants, these proteins control key developmental processes throughout almost all life stages. To date, 28 MADS-box genes have been isolated in E. pusilla, namely EpMADS1 to EpMADS28. Nearly all of them contain introns greater than 10 kb, which reflects the complexity of the E. pusilla genome. Many EpMADS genes have expression patterns similar to those of MADS-box genes in Arabidopsis. The 28 proteins encoded by the E. pusilla MADS-box genes are classified as type I or type II based on BLASTP analyses. References Other sources Pridgeon, A.M., Cribb, P.J., Chase, M.A. & Rasmussen, F. eds. (1999). Genera Orchidacearum Vols 1–3. Oxford Univ. Press. Berg Pana, H. 2005. Handbuch der Orchideen-Namen. Dictionary of Orchid Names. Dizionario dei nomi delle orchidee. Ulmer, Stuttgart. Shu-Hong Lee, Chia-Wen Li, Chia-Hui Liau, Pao-Yi Chang, Li-Jen Liao, Choun-Sea Lin & Ming-Tsair Chan (2015). "Establishment of an Agrobacterium-mediated genetic transformation procedure for the experimental model orchid Erycina pusilla". Plant Cell, Tissue and Organ Culture 120(1): 211–220. External links Oncidiinae Plant models Orchids of Central America Orchids of Belize
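The chloroplast genome figures quoted above can be sanity-checked with a few lines of Python, since the two inverted repeats plus the large and small single-copy regions should sum to the full genome length; the numbers are taken directly from the text.

# Check that the reported chloroplast regions sum to the genome size.
GENOME_BP = 143_164
INVERTED_REPEAT_BP = 23_439   # each of IRa and IRb
LSC_BP = 84_189               # large single-copy region
SSC_BP = 12_097               # small single-copy region

total = 2 * INVERTED_REPEAT_BP + LSC_BP + SSC_BP
assert total == GENOME_BP, f"expected {GENOME_BP}, got {total}"
print(f"Regions sum to {total:,} bp, matching the reported genome size.")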
Erycina pusilla
Biology
1,800
14,722,972
https://en.wikipedia.org/wiki/Colony%20stimulating%20factor%201%20receptor
Colony stimulating factor 1 receptor (CSF1R), also known as macrophage colony-stimulating factor receptor (M-CSFR) and CD115 (Cluster of Differentiation 115), is a cell-surface protein encoded by the human CSF1R gene (also known as c-FMS). CSF1R is a receptor that can be activated by two ligands: colony stimulating factor 1 (CSF-1) and interleukin-34 (IL-34). CSF1R is highly expressed in myeloid cells, and CSF1R signaling is necessary for the survival, proliferation, and differentiation of many myeloid cell types in vivo and in vitro. CSF1R signaling is involved in many diseases and is targeted in therapies for cancer, neurodegeneration, and inflammatory bone diseases. Gene In the human genome, the CSF1R gene is located on chromosome 5 (5q32), and in mice the Csf1r gene is located on chromosome 18 (18D). CSF1R is 60.002 kilobases (kb) in length. Hematopoietic stem cells express CSF1R at low levels, but CSF1R is highly expressed in more differentiated myeloid cell types such as monocytes, macrophages, osteoclasts, myeloid dendritic cells, microglia, and Paneth cells. CSF1R expression is controlled by two alternative promoters that are active in specific tissue types. Exon 1 of CSF1R is specifically transcribed in trophoblastic cells, whereas exon 2 is specifically transcribed in macrophages. Activation of CSF1R transcription is regulated by several transcription factors including Ets and PU.1. Macrophage expression of the CSF1R gene is regulated by the promoter upstream of exon 2 and another highly conserved region termed the fms intronic regulatory element (FIRE). The FIRE is a 250-bp region in intron 2 that regulates transcript elongation during transcription of CSF1R in macrophages. Specific deletion of FIRE prevents differentiation of only certain macrophage types, such as brain microglia and macrophages in the skin, kidney, heart, and peritoneum, whereas deletion of the entire mouse Csf1r gene broadly prevents macrophage differentiation, causing profound developmental defects. Additionally, the first intron of the CSF1R gene contains a transcriptionally inactive ribosomal protein L7 processed pseudogene, oriented in the opposite direction to the CSF1R gene. Protein CSF1R, the protein encoded by the CSF1R gene, is a tyrosine kinase transmembrane receptor and a member of the CSF1/PDGF receptor family of tyrosine-protein kinases. CSF1R has 972 amino acids, is predicted to have a molecular weight of 107.984 kilodaltons, and is composed of an extracellular and a cytoplasmic domain. The extracellular domain has 3 N-terminal immunoglobulin (Ig) domains (D1-D3) which bind ligand, 2 Ig domains (D4-D5) which stabilize the ligand, a linker region, and a single-pass transmembrane helix. The cytoplasmic domain has a juxtamembrane domain and a tyrosine kinase domain that is interrupted by a kinase insert domain. At rest, the juxtamembrane domain of CSF1R adopts an autoinhibitory position to prevent signaling by the CSF1R cytosolic domain. Upon binding of ligand to the extracellular Ig domains, CSF1R dimerizes noncovalently and autophosphorylates several tyrosine residues. This first wave of CSF1R tyrosine phosphorylation creates phosphotyrosine-binding domains to which effector proteins can bind and initiate various cellular responses. Many proteins become tyrosine phosphorylated in response to CSF1R signaling, including p85, Cbl, and Gab3, which are important for the survival, differentiation, chemotaxis, and actin cytoskeleton of myeloid cells.
The first wave of tyrosine phosphorylation also leads to the covalent dimerization of CSF1R via disulfide bonds. Covalent CSF1R dimerization is important for a series of modifications to CSF1R itself, including a second wave of tyrosine phosphorylation, serine phosphorylation, ubiquitination, and eventually endocytosis, which terminates signaling by trafficking the ligand-CSF1R complex to the lysosome for degradation. Colony stimulating factor 1 (CSF-1) and interleukin-34 (IL-34) are both CSF1R ligands. Both ligands regulate myeloid cell survival, proliferation, and differentiation, but CSF-1 and IL-34 differ in their structure, distribution in the body, and the specific cellular signaling cascades triggered upon binding to CSF1R. Function Osteoclasts Osteoclasts are multi-nucleated cells that reabsorb and remove bone, which is critical for the growth of new bone and the maintenance of bone strength. Osteoclasts are critical for the bone remodeling cycle, which is achieved by the building of bone by osteoblasts, reabsorption by osteoclasts, and remodeling by osteoblasts. Osteoclast precursor cells and mature osteoclasts require stimulation of CSF1R for survival. Blockage of CSF1R signaling prevents osteoclast precursor cells from proliferating, maturing, and fusing into multi-nucleated cells. Stimulation of CSF1R promotes osteoclastogenesis (differentiation of monocytes into osteoclasts). CSF1R signaling in osteoclast precursors promotes survival by upregulation of the Bcl-X(L) protein, an inhibitor of pro-apoptotic caspase-9. CSF1R signaling in mature osteoclasts promotes survival by stimulating mTOR/S6 kinase and the Na/HCO3 co-transporter NBCn1. CSF1R signaling also directly regulates osteoclast function. Osteoclasts migrate along the bone surface, then adhere to the bone to degrade and reabsorb the bone matrix. CSF1R signaling positively regulates this behavior, increasing osteoclast chemotaxis and bone reabsorption. Monocytes and macrophages Monocytes and macrophages are mononuclear phagocytes. Monocytes circulate in the blood and are capable of differentiating into macrophages or dendritic cells, and macrophages are terminally differentiated tissue-resident cells. CSF1R signaling is necessary for the differentiation of microglia and Langerhans cells, which are derived from yolk sac progenitor cells with high expression of CSF1R. CSF1R signaling is only partially required for other tissue macrophages, and it is not necessary for monocytopoiesis (production of monocytes and macrophages) from hematopoietic stem cells. Macrophages of the thymus and lymph nodes are almost completely independent of CSF1R signaling. In macrophages whose survival is fully or partially dependent on CSF1R signaling, CSF1R promotes survival by activating PI3K. CSF1R signaling also regulates macrophage function. One function of CSF1R signaling is to promote tissue protection and healing following damage. Damage to the kidney causes upregulation of CSF-1 and CSF1R in tubular epithelial cells. This promotes proliferation and survival of injured tubular epithelial cells and promotes anti-inflammatory phenotypes in resident macrophages to promote kidney healing. Lastly, activation of CSF1R is a strong chemokinetic signal, inducing macrophage polarization and chemotaxis towards the source of CSF1R ligand. This macrophage response requires rapid morphological changes, which are achieved by remodeling of the actin cytoskeleton via the Src/Pyk2 and PI3K signaling pathways.
Microglia Microglia are the tissue-resident phagocytes of the central nervous system. CSF1R signaling promotes migration of primitive microglia precursor cells from the embryonic yolk sac to the developing brain prior to the formation of the blood-brain barrier. In perinatal development, microglia are instrumental in synaptic pruning, a process in which microglia phagocytose weak and inactive synapses via binding of microglial complement receptor 3 (CR3) (a complex of CD11b and CD18) to synapse-bound iC3b. Csf1r loss-of-function inhibits synaptic pruning and leads to excessive non-functional synapses in the brain. In adulthood, CSF1R is required for the proliferation and survival of microglia. Inhibition of CSF1R signaling in adulthood causes near-complete (>99%) depletion (death) of brain microglia; however, reversal of CSF1R inhibition stimulates the remaining microglia to proliferate and repopulate microglia-free niches in the brain. Production of the CSF1R ligands CSF-1 and IL-34 is increased in the brain following injury or viral infection, which directs microglia to proliferate and execute immune responses. Neural progenitor cells CSF1R signaling has been found to play important roles in non-myeloid cells such as neural progenitor cells, multipotent cells that are able to self-renew or terminally differentiate into neurons, astrocytes and oligodendrocytes. Mice with Csf1r loss-of-function have significantly more neural progenitor cells in generative zones and fewer mature neurons in forebrain laminae, due to failure of progenitor cell maturation and radial migration. These phenotypes were also seen in animals with Csf1r conditionally knocked out specifically in neural progenitor cells, suggesting that CSF1R signaling by neural progenitor cells is important for the maturation of certain neurons. Studies using cultured neural progenitor cells also show that CSF1R signaling stimulates neural progenitor cell maturation. Germline cells CSF1R is expressed in oocytes, the trophoblast, and fertilized embryos prior to implantation in the uterus. Studies using early mouse embryos in vitro have shown that activation of CSF1R stimulates formation of the blastocyst cavity and enhances the number of trophoblast cells. Csf1r loss-of-function mice exhibit several reproductive system abnormalities in the estrous cycle and ovulation rates, as well as reduced antral follicles and ovarian macrophages. It is not clear whether ovulation dysfunction in Csf1r loss-of-function mice is due to loss of the protective effects of ovarian macrophages or loss of CSF1R signaling in oocytes themselves. Clinical significance Bone disease Bone remodeling is regulated by mutual cross-regulation between osteoclasts and osteoblasts. As a result, dysfunction of CSF1R signaling directly affects bone reabsorption (osteoclasts) and indirectly affects bone deposition (osteoblasts). In inflammatory arthritis conditions such as rheumatoid arthritis, psoriatic arthritis, and Crohn's disease, the proinflammatory cytokine TNF-α is secreted by synovial macrophages, which stimulates stromal cells and osteoblasts to produce CSF-1. Increased CSF-1 promotes proliferation of osteoclasts and osteoclast precursors and increases osteoclast bone reabsorption. This pathogenic increase in osteoclast activity causes abnormal bone loss, or osteolysis. In animal models of rheumatoid arthritis, administration of CSF-1 increases the severity of disease, whereas Csf1r loss-of-function reduces inflammation and joint erosion.
In a rare bone disease called Gorham-Stout disease, elevated production of CSF-1 by lymphatic endothelial cells similarly produces excessive osteoclastogenesis and osteolysis. Additionally, postmenopausal loss of estrogen has also been found to impact CSF1R signaling and cause osteoporosis. Estrogen deficiency causes osteoporosis by upregulating production of TNF-α by activated T cells. As in inflammatory arthritis, TNF-α stimulates stromal cells to produce CSF-1, which increases CSF1R signaling in osteoclasts. Cancer Tumor-associated macrophages (TAMs) react to early-stage cancers with anti-inflammatory immune responses that support tumor survival at the expense of healthy tissue. Tumor infiltration by CSF1R-expressing TAMs yields a negative prognosis and is correlated with poor survival rates for individuals with lymphoma and solid tumors. The tumor microenvironment often produces high levels of CSF-1, creating a positive feedback loop in which the tumor stimulates survival of TAMs and TAMs promote tumor survival and growth. Thus, CSF1R signaling in TAMs is associated with tumor survival, angiogenesis, therapy resistance, and metastasis. Production of CSF-1 by brain tumors called glioblastomas causes microglia (brain-resident macrophages) to exhibit immunosuppressive, tumor-permissive phenotypes. CSF1R inhibition in mouse glioblastoma models is beneficial and improves survival by inhibiting the tumor-promoting functions of microglia. Mouse models of breast cancer also show that Csf1r loss-of-function delays TAM infiltration and metastasis. Because anti-cancer macrophages and microglia rely on GM-CSF and IFN-γ signaling instead of CSF-1, inhibition of CSF1R signaling has been posited as a therapeutic strategy in cancer to preferentially deplete tumor-permissive TAMs. Additionally, mutations in the CSF1R gene itself are associated with certain cancers such as chronic myelomonocytic leukemia and type M4 acute myeloblastic leukemia. Neurological disorders Adult-onset leukoencephalopathy Because of the importance of the CSF1R gene in myeloid cell survival, maturation, and function, loss-of-function in both inherited copies of the CSF1R gene causes postnatal mortality. Heterozygous mutations in the CSF1R gene prevent downstream CSF1R signaling and cause an autosomal dominant neurodegenerative disease called adult-onset leukoencephalopathy, which is characterized by dementia, executive dysfunction, and seizures. Partial loss of CSF1R in adult-onset leukoencephalopathy causes microglia to exhibit morphological and functional deficits (impaired cytokine production and phagocytosis), which are associated with axonal damage, demyelination, and neuronal loss. Signaling by a DAP12-TREM2 complex in microglia is downstream of CSF1R signaling and is needed for microglial phagocytosis of cellular debris and maintenance of brain homeostasis. TREM2 deficiency in cultured myeloid cells prevents the stimulation of proliferation by treatment with CSF-1. Similarities between Nasu-Hakola disease (caused by mutations in either DAP12 or TREM2) and adult-onset leukoencephalopathy suggest that partial loss of microglial CSF1R signaling promotes neurodegeneration. Defects in neurogenesis and neuronal survival are also seen in adult-onset leukoencephalopathy due to impaired CSF1R signaling in neural progenitor cells. Other brain diseases and disorders CSF1R signaling is involved in several diseases and disorders of the central nervous system.
Research using animal models of epilepsy (kainic acid-induced seizures) suggests that CSF-1 signaling during seizures protects neurons by activating neuronal CREB signaling. CSF1R agonism during seizures increases neuronal survival whereas neuron-specific Csf1r loss-of-function worsens kainic acid excitotoxicity, suggesting that CSF1R signaling in neurons directly protects against seizure-related neuronal damage. Although CSF1R signaling is beneficial in certain contexts, it is detrimental in diseases where microglia drive tissue damage. In Charcot-Marie-Tooth disease type 1, CSF-1 secretion from endoneurial cells stimulates proliferation and activation of macrophages and microglia that cause demyelination. Likewise, in multiple sclerosis, CSF1R signaling supports the survival of inflammatory microglia which promote demyelination. Prophylactic CSF1R inhibition reduces demyelination in the experimental autoimmune encephalomyelitis animal model. The role of CSF1R signaling in Alzheimer's disease is more complicated because microglia both protect and damage the brain in response to Alzheimer's disease pathology. CSF-1 stimulates primary cultured human microglia to phagocytose toxic Aβ1–42 peptides. Microglia also initiate TREM2-dependent immune responses to amyloid plaques which protect neurons. However, Alzheimer's disease microglia also excessively secrete inflammatory cytokines and prune synapses, promoting synapse loss, neuronal death, and cognitive impairment. Both CSF1R stimulation and inhibition improve cognitive function in Alzheimer's disease models. Thus, microglia seem to have both protective and neurotoxic functions during Alzheimer's disease neurodegeneration. Similar findings have been reported in lesion studies of the mouse brain, which showed that inhibition of CSF1R after lesioning improves recovery but inhibition during lesioning worsens recovery. CSF1R-targeting therapies for neurological disorders may therefore impact both detrimental and beneficial microglia functions. Therapeutics Because TAM CSF1R signaling is tumor-permissive and can cause tumor treatment-resistance, CSF1R signaling is a promising therapeutic target in the treatment of cancer. Several studies have investigated the efficacy of CSF1R inhibitors as monotherapy and as combination therapy in refractory and metastatic cancers. Several small molecule inhibitors and monoclonal antibodies targeting CSF1R are in clinical development for cancer therapy (Table 2). Pexidartinib (PLX3397) is a small molecule tyrosine kinase inhibitor of CSF1R (as well as cKIT, FLT3, and VEGFR) with the most clinical development so far. Several completed and ongoing clinical trials have tested the efficacy and safety of pexidartinib as a monotherapy for c-kit-mutated melanoma, prostate cancer, glioblastoma, classical Hodgkin lymphoma, neurofibroma, sarcoma, and leukemias. In 2019, pexidartinib was FDA-approved for treatment of diffuse-type tenosynovial giant cell tumors, a non-malignant tumor that develops from the synovial tissue lining the joints. Safety of CSF1R inhibition The safety of CSF1R inhibitors has been extensively characterized in clinical trials for the different small molecules and monoclonal antibodies in Table 2. In some studies, CSF1R inhibitors were not found to have dose-limiting toxicity, while other studies did observe toxicity at high doses and have defined a maximum tolerated dose.
Across multiple studies, the most frequent adverse effects included fatigue, elevated liver enzymes (creatine kinase, lactate dehydrogenase, aspartate aminotransferase, alanine transaminase), edema, nausea, lacrimation, and reduced appetite, but no signs of liver toxicity were found. There are some differences in the side effects of monoclonal antibody compared to small molecule CSF1R inhibitors. Edema was more common with monoclonal antibody treatment compared to small molecules, suggesting that immune response to monoclonal antibodies may drive some side effects. Additionally, some small molecule inhibitors are not specific for CSF1R, and off-target effects could explain observed side effects. For example, pexidartinib treatment was found to change hair color, presumably through its impact on KIT kinase. Overall, CSF1R inhibitors have favorable safety profiles with limited toxicity. Controversy CSF1R inhibitors such as PLX5622 are widely used to study the role of microglia in mouse preclinical models of Alzheimer's disease, stroke, traumatic brain injury, and aging. PLX5622 is typically used for microglia research because PLX5622 has higher brain bioavailability and CSF1R-specificity compared to other CSF1R inhibitors such as PLX3397. In 2020, researchers David Hume (University of Queensland) and Kim Green (UCI) published a letter in the academic journal PNAS defending the use of small molecule CSF1R inhibitors to study microglia in brain disease. This letter was in response to a primary research paper published in PNAS by corresponding author Eleftherios Paschalis (HMS) and others, which provided evidence that microglia research using PLX5622 is confounded by CSF1R inhibition in peripheral macrophages. Paschalis and colleagues published a subsequent letter in PNAS defending the findings of their published research. Interactions Colony stimulating factor 1 receptor has been shown to interact with: Cbl gene, FYN, Grb2, and Suppressor of cytokine signaling 1. This receptor is also linked with the cells of MPS. See also Cluster of differentiation Mouse models of breast cancer metastasis Pimicotinib References Further reading External links Clusters of differentiation Immunoglobulin superfamily cytokine receptors Tyrosine kinase receptors
Colony stimulating factor 1 receptor
Chemistry
4,689
10,377,624
https://en.wikipedia.org/wiki/WOSTEP
WOSTEP, the Watchmakers of Switzerland Training and Educational Program, is an internationally recognized professional qualification in the maintenance and care of fine-quality watches. It was devised by the Centre Suisse de Formation et de Perfectionnement Horloger and is sponsored by manufacturers and retailers within the horological industry in Switzerland. Origin During the 1960s, and at the request of the U.S. Government, the Swiss government, together with the Federation of the Swiss Watch Industry FH, created what would eventually evolve into WOSTEP. It was originally designed to train American watchmakers in techniques of watchmaking that had developed in Geneva and the Jura mountains from the 16th century onward. It is important to understand that at the time of the founding of WOSTEP, America was losing its title as "world's largest watch producer" to the Soviet Union (mostly making poor-quality everyday watches). As American watch companies continued to slide into oblivion after the end of World War II, some were able to stay current by purchasing movements from Swiss companies, even establishing their own subsidiaries in Switzerland (e.g. Waltham Watch Company, Hamilton Watch Company, Benrus, Bulova), keeping them going another 10–20 years before folding completely in the U.S. It was for this reason that the U.S. requested some sort of formalized training for its best watchmakers. There had always been a small number of imports of ultra-fine Swiss watches, but after WWII, the number of watches imported as partial or complete watches increased exponentially. These "modern" watch movements were markedly different from the products of American companies, which had grown out of 100 years of mass production of quality watches. American manufacturers were unable to develop new products or methods of competing, and they were destroyed in record time. Worldwide training programs The Federation developed an 11-month training program in which a watchmaker was flown to Neuchâtel, Switzerland, and trained by any one of the many talented instructors who have worked at WOSTEP over the years. Alternatively, candidates can attend one of a limited number of partner schools outside Switzerland. Recent changes in structure have assured the survival of WOSTEP as a foundation, with a beautiful lakefront chateau converted to the school building. With the retirement of long-time director Antoine Simonin and his wife, the next generation has taken the reins and continues to develop courses for full training used throughout the world at the WOSTEP-Partnership Schools. The school also provides a variety of industry-specific training to companies and practicing watchmakers. References External links Simonin Publishers Institute of Swiss Watchmakers Horological organizations Horology
WOSTEP
Physics
537
9,396,720
https://en.wikipedia.org/wiki/Objective-collapse%20theory
Objective-collapse theories, also known as spontaneous collapse models or dynamical reduction models, are proposed solutions to the measurement problem in quantum mechanics. As with other interpretations of quantum mechanics, they are possible explanations of why and how quantum measurements always give definite outcomes, not a superposition of them as predicted by the Schrödinger equation, and more generally how the classical world emerges from quantum theory. The fundamental idea is that the unitary evolution of the wave function describing the state of a quantum system is approximate. It works well for microscopic systems, but progressively loses its validity when the mass/complexity of the system increases. In collapse theories, the Schrödinger equation is supplemented with additional nonlinear and stochastic terms (spontaneous collapses) which localize the wave function in space. The resulting dynamics is such that for microscopic isolated systems, the new terms have a negligible effect; therefore, the usual quantum properties are recovered, apart from very tiny deviations. Such deviations can potentially be detected in dedicated experiments, and efforts are increasing worldwide towards testing them. An inbuilt amplification mechanism makes sure that for macroscopic systems consisting of many particles, the collapse becomes stronger than the quantum dynamics. Then their wave function is always well-localized in space, so well-localized that it behaves, for all practical purposes, like a point moving in space according to Newton's laws. In this sense, collapse models provide a unified description of microscopic and macroscopic systems, avoiding the conceptual problems associated with measurements in quantum theory. The most well-known examples of such theories are: Ghirardi–Rimini–Weber (GRW) model Continuous spontaneous localization (CSL) model Diósi–Penrose (DP) model Collapse theories stand in opposition to many-worlds interpretation theories, in that they hold that a process of wave function collapse curtails the branching of the wave function and removes unobserved behaviour. History of collapse theories Philip Pearle's 1976 paper pioneered the quantum nonlinear stochastic equations to model the collapse of the wave function in a dynamical way; this formalism was later used for the CSL model. However, these models lacked the character of "universality" of the dynamics, i.e. its applicability to an arbitrary physical system (at least at the non-relativistic level), a necessary condition for any model to become a viable option. The next major advance came in 1986, when Ghirardi, Rimini and Weber published the paper with the meaningful title "Unified dynamics for microscopic and macroscopic systems", where they presented what is now known as the GRW model, after the initials of the authors. The model has two guiding principles: The position basis states are used in the dynamic state reduction (the "preferred basis" is position); The modification must reduce superpositions for macroscopic objects without altering the microscopic predictions. In 1990 the efforts of the GRW group on one side, and of P. Pearle on the other side, were brought together in formulating the Continuous Spontaneous Localization (CSL) model, where the Schrödinger dynamics and a randomly fluctuating classical field produce collapse into spatially localized eigenstates. In the late 1980s and 1990s, Diósi and Penrose and others independently formulated the idea that the wave function collapse is related to gravity.
The dynamical equation is structurally similar to the CSL equation. Most popular models Three models are most widely discussed in the literature: Ghirardi–Rimini–Weber (GRW) model: It is assumed that each constituent of a physical system independently undergoes spontaneous collapses. The collapses are random in time, distributed according to a Poisson distribution; they are random in space and are more likely to occur where the wave function is larger. In between collapses, the wave function evolves according to the Schrödinger equation. For composite systems, the collapse on each constituent causes the collapse of the center-of-mass wave function. Continuous spontaneous localization (CSL) model: The Schrödinger equation is supplemented with a nonlinear and stochastic diffusion process driven by a suitably chosen universal noise coupled to the mass-density of the system, which counteracts the quantum spread of the wave function. As for the GRW model, the larger the system, the stronger the collapse, thus explaining the quantum-to-classical transition as a progressive breakdown of quantum linearity as the system's mass increases. The CSL model is formulated in terms of identical particles. Diósi–Penrose (DP) model: Diósi and Penrose formulated the idea that gravity is responsible for the collapse of the wave function. Penrose argued that, in a quantum gravity scenario where a spatial superposition creates the superposition of two different spacetime curvatures, gravity does not tolerate such superpositions and spontaneously collapses them. He also provided a phenomenological formula for the collapse time. Independently and prior to Penrose, Diósi presented a dynamical model that collapses the wave function with the same time scale suggested by Penrose. The Quantum Mechanics with Universal Position Localization (QMUPL) model should also be mentioned, an extension of the GRW model for identical particles formulated by Tumulka, which proves several important mathematical results regarding the collapse equations. In all models listed so far, the noise responsible for the collapse is Markovian (memoryless): either a Poisson process in the discrete GRW model, or a white noise in the continuous models. The models can be generalized to include arbitrary (colored) noises, possibly with a frequency cutoff: the CSL model has been extended to its colored version (cCSL), as well as the QMUPL model (cQMUPL). In these new models the collapse properties remain basically unaltered, but specific physical predictions can change significantly. In all collapse models, the noise responsible for the collapse breaks quantum mechanical linearity and unitarity, and thus the collapse dynamics cannot be described within standard quantum mechanics. Because the noise responsible for the collapse induces Brownian motion on each constituent of a physical system, energy is not conserved: the kinetic energy increases at a constant rate. Such a feature can be modified, without altering the collapse properties, by including appropriate dissipative effects in the dynamics. This has been achieved for the GRW, CSL, QMUPL and DP models, obtaining their dissipative counterparts (dGRW, dCSL, dQMUPL, dDP). The QMUPL model has been further generalized to include both colored noise as well as dissipative effects (dcQMUPL model). Tests of collapse models Collapse models modify the Schrödinger equation; therefore, they make predictions that differ from standard quantum mechanical predictions.
Although the deviations are difficult to detect, there is a growing number of experiments searching for spontaneous collapse effects. They can be classified into two groups: Interferometric experiments. They are refined versions of the double-slit experiment, showing the wave nature of matter (and light). The modern versions are meant to increase the mass of the system, the time of flight, and/or the delocalization distance in order to create ever larger superpositions. The most prominent experiments of this kind are with atoms, molecules and phonons. Non-interferometric experiments. They are based on the fact that the collapse noise, besides collapsing the wave function, also induces a diffusion on top of particles' motion, which is always active, even when the wave function is already localized. Experiments of this kind involve cold atoms, opto-mechanical systems, gravitational wave detectors, and underground experiments. Problems and criticisms of collapse theories Violation of the principle of the conservation of energy According to collapse theories, energy is not conserved, even for isolated particles. More precisely, in the GRW, CSL and DP models the kinetic energy increases at a constant rate, which is small but non-zero. This is often presented as an unavoidable consequence of Heisenberg's uncertainty principle: the collapse in position causes a larger uncertainty in momentum. This explanation is wrong; in collapse theories the collapse in position also determines a localization in momentum, driving the wave function to an almost minimum uncertainty state both in position and in momentum, compatibly with Heisenberg's principle. The reason the energy increases is that the collapse noise diffuses the particle, thus accelerating it. This is the same situation as in classical Brownian motion, and similarly this increase can be stopped by adding dissipative effects. Dissipative versions of the QMUPL, GRW, CSL and DP models exist, where the collapse properties are left unaltered with respect to the original models, while the energy thermalizes to a finite value (therefore it can even decrease, depending on its initial value). Still, in the dissipative models the energy is not strictly conserved. A resolution to this situation might come from considering the noise as a dynamical variable with its own energy, which is exchanged with the quantum system in such a way that the energy of the total system-plus-noise is conserved. Relativistic collapse models One of the biggest challenges in collapse theories is to make them compatible with relativistic requirements. The GRW, CSL and DP models are not. The biggest difficulty is how to combine the nonlocal character of the collapse, which is necessary in order to make it compatible with the experimentally verified violation of Bell inequalities, with the relativistic principle of locality. Models exist that attempt to generalize in a relativistic sense the GRW and CSL models, but their status as relativistic theories is still unclear. The formulation of a proper Lorentz-covariant theory of continuous objective collapse is still a matter of research. Tails problem In all collapse theories, the wave function is never fully contained within one (small) region of space, because the Schrödinger term of the dynamics will always spread it outside. Therefore, wave functions always contain tails stretching out to infinity, although their "weight" is smaller in larger systems.
Critics of collapse theories argue that it is not clear how to interpret these tails. Two distinct problems have been discussed in the literature. The first is the "bare" tails problem: it is not clear how to interpret these tails because they amount to the system never being really fully localized in space. A special case of this problem is known as the "counting anomaly". Supporters of collapse theories mostly dismiss this criticism as a misunderstanding of the theory, since in the context of dynamical collapse theories the absolute square of the wave function is interpreted as an actual matter density. In this case, the tails merely represent an immeasurably small amount of smeared-out matter. This leads into the second problem, however, the so-called "structured tails problem": it is not clear how to interpret these tails because, even though their "amount of matter" is small, that matter is structured like a perfectly legitimate world. Thus, after the box is opened and Schrödinger's cat has collapsed to the "alive" state, there still exists a tail of the wave function containing a low-matter-density structure shaped like a dead cat. Collapse theorists have offered a range of possible solutions to the structured tails problem, but it remains an open problem. See also Interpretation of quantum mechanics Many-worlds interpretation Philosophy of information Philosophy of physics Quantum information Quantum entanglement Coherence (physics) Quantum decoherence EPR paradox Quantum Zeno effect Measurement problem Measurement in quantum mechanics Wave function collapse Quantum gravity References External links Giancarlo Ghirardi, Collapse Theories, Stanford Encyclopedia of Philosophy (First published Thu Mar 7, 2002; substantive revision Fri May 15, 2020) Interpretations of quantum mechanics Quantum measurement
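As a worked illustration of how a discrete collapse model localizes the wave function, the following is a minimal sketch of the standard GRW localization ("hit") process; the parameter values (collapse rate λ and localization width r_C) are the conventional GRW choices from the literature, not figures quoted in this article.

```latex
% Minimal sketch of a GRW hit, using the conventional parameters
% \lambda ~ 10^{-16} s^{-1} per particle and r_C ~ 10^{-7} m.
\[
  \psi(\mathbf{q}) \;\longrightarrow\;
  \frac{L_{\mathbf{x}}\,\psi(\mathbf{q})}{\lVert L_{\mathbf{x}}\,\psi \rVert},
  \qquad
  L_{\mathbf{x}} = \left(\pi r_C^{2}\right)^{-3/4}
  \exp\!\left(-\frac{(\hat{\mathbf{q}}-\mathbf{x})^{2}}{2 r_C^{2}}\right),
\]
\[
  p(\mathbf{x}) = \lVert L_{\mathbf{x}}\,\psi \rVert^{2}
  \quad \text{(probability density for the collapse centre } \mathbf{x}\text{).}
\]
% For N constituents the effective hit rate is N\lambda: this is the
% amplification mechanism described above, by which macroscopic
% superpositions collapse almost instantly while isolated microscopic
% systems are barely affected.
```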
Objective-collapse theory
Physics
2,434
3,397,589
https://en.wikipedia.org/wiki/HD%201237
HD 1237 is a binary star system approximately 57 light-years away in the constellation of Hydrus (the Water Snake). The visible star in the system, A, is considered to be a solar analog because its mass is close to that of the Sun. HD 1237 differs from the Sun in that it is much younger, has a higher metallicity, has a much cooler temperature, and is in a binary system. In 2000, an extrasolar planet was confirmed to orbit the star. The system is of note for being a relatively Sun-like star not very far from the Sun that is home to an extrasolar planet. Stellar components As a nearby Sun-like star, the last decade has seen HD 1237 A studied carefully for the first time, especially after its substellar companion was discovered. It is currently believed to be 800 million years old, though age estimates range from 150 million to 8.8 billion years depending on the method used for the determination. The star is more enriched with iron than the Sun, is chromospherically active, and rotates around its axis more quickly than the Sun. The secondary star was discovered in 2006 during a deep imaging survey conducted at the European Southern Observatory using the Very Large Telescope. HD 1237 B is an M4 red dwarf star at a projected separation of 68 AU. Planetary system Announced in 2000, the Jovian planet GJ 3021 b (GJ 3021 being an alternate, less-used designation for this star) orbits about 0.5 astronomical units from HD 1237 A with a minimum mass 3.37 times that of Jupiter, as determined by measuring variations in the radial velocity of the star. A study published in 2001 suggested that the usual inability to determine the orbital inclination of an extrasolar planet through radial velocity measurement had caused this mass to be severely underestimated. The astrometric orbit gives an orbital inclination of 11.8° and a mass of 16 Jupiter masses, which would make the object a brown dwarf. However, later analysis showed that Hipparcos was not sensitive enough to accurately determine astrometric orbits for substellar companions, which means the inclination (and hence the true mass) of the planet are still unknown. See also Epsilon Reticuli HD 196885 List of extrasolar planets References CD-80 9 3021 001237 001292 Hydrus Planetary systems with one confirmed planet Binary stars G-type main-sequence stars J00161266-7951042
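The relation between the radial-velocity minimum mass and the astrometric mass quoted above is m_true = m_min / sin i, since radial velocity alone only measures m sin i. A short, self-contained Python check using only the figures given in this article reproduces the ~16 Jupiter-mass estimate:

```python
import math

m_min = 3.37   # minimum mass (m sin i) of GJ 3021 b, in Jupiter masses
i_deg = 11.8   # orbital inclination from the (later disputed) astrometric orbit

# If the inclination i were known, the true mass would follow by dividing
# the radial-velocity minimum mass by sin i.
m_true = m_min / math.sin(math.radians(i_deg))

print(f"true mass = {m_true:.1f} Jupiter masses")  # ~16.5, matching the ~16 M_J in the text
```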
HD 1237
Astronomy
503
21,620,243
https://en.wikipedia.org/wiki/Affinity%20electrophoresis
Affinity electrophoresis is a general name for many analytical methods used in biochemistry and biotechnology. Both qualitative and quantitative information may be obtained through affinity electrophoresis. Cross electrophoresis, the first affinity electrophoresis method, was created by Nakamura et al. Enzyme-substrate complexes have been detected using cross electrophoresis. The methods include the so-called electrophoretic mobility shift assay, charge shift electrophoresis and affinity capillary electrophoresis. The methods are based on changes in the electrophoretic pattern of molecules (mainly macromolecules) through biospecific interaction or complex formation. The interaction or binding of a molecule, charged or uncharged, will normally change the electrophoretic properties of the molecule. Membrane proteins may be identified by a shift in mobility induced by a charged detergent. Nucleic acids or nucleic acid fragments may be characterized by their affinity to other molecules. The methods have been used for estimation of binding constants, as for instance in lectin affinity electrophoresis, or for characterization of molecules with specific features like glycan content or ligand binding. For enzymes and other ligand-binding proteins, one-dimensional affinity electrophoresis, similar to counter electrophoresis or to "rocket immunoelectrophoresis", may be used as an alternative method of quantifying the protein. Some of the methods are similar to affinity chromatography in their use of immobilized ligands. Types and methods Currently, there is ongoing research in developing new ways of utilizing the knowledge already associated with affinity electrophoresis to improve its functionality and speed, as well as attempts to improve already established methods and tailor them towards performing specific tasks. Agarose gel electrophoresis A type of electrophoretic mobility shift assay (AMSA), agarose gel electrophoresis is used to separate protein-bound amino acid complexes from free amino acids. Using a low voltage (~10 V/cm) to minimize the risk of heat damage, electricity is run across an agarose gel. When agarose is dissolved in a hot buffered solution (50 to 55 degrees Celsius), it produces a viscous solution, but when cooled, it solidifies as a gel. Serum proteins, hemoglobin, nucleic acids, polymerase chain reaction products, etc. are all separated using this method. Agarose's fixed sulfate groups can cause enhanced electroendosmosis, which lowers band resolution. Utilizing ultrapure agarose gel with little sulfate content can prevent this. Rapid agarose gel electrophoresis This technique utilizes a high voltage with a 0.5× Tris-borate buffer run across an agarose gel. It differs from traditional agarose gel electrophoresis by utilizing a higher voltage to facilitate a shorter run time as well as yield a higher band resolution. Other factors included in developing the technique of rapid agarose gel electrophoresis are gel thickness and the percentage of agarose within the gel. Boronate affinity electrophoresis Boronate affinity electrophoresis utilizes boronic acid infused acrylamide gels to purify NAD-RNA. This purification allows researchers to easily measure the kinetic activity of NAD-RNA decapping enzymes.
Affinity capillary electrophoresis Affinity capillary electrophoresis (ACE) refers to a number of techniques which rely on specific and nonspecific binding interactions to facilitate separation and detection, in accordance with the theory of electromigration. Using the intermolecular interactions between molecules occurring in free solution or immobilized onto a solid support, ACE allows for the separation and quantitation of analyte concentrations and of binding and dissociation constants between molecules. As affinity probes in ACE, fluorophore-labeled compounds with affinities for the target molecules are employed. With ACE, scientists hope to develop strong-binding drug candidates, understand and measure enzymatic activity, and characterize the charges on proteins. Affinity capillary electrophoresis can be divided into three distinct techniques: non-equilibrium electrophoresis of equilibrated sample mixtures, dynamic equilibrium ACE, and affinity-based ACE. Non-equilibrium electrophoresis of equilibrated sample mixtures is generally used in the separation and study of binding interactions of large proteins and involves combining both the analyte and its receptor molecule in a premixed sample. These receptor molecules often take the form of affinity probes consisting of fluorophore-labeled molecules that will bind to target molecules that are mixed with the sample being tested. This mixture, and its subsequent complexes, are then separated through capillary electrophoresis. Because the original mixture of analyte and receptor molecule was bound together in an equilibrium, the slow dissociation of these two bound molecules during the electrophoretic experiment will result in their separation and a subsequent shift in equilibrium towards further dissociation. The characteristic smear pattern produced by the slow release of the analyte from the complex during the experiment can be used to calculate the dissociation constant of the complex. Dynamic equilibrium ACE involves the combination of the analyte found in the sample and its receptor molecule found in the buffered solution in the capillary tube, so that binding and separation only occur in the instrument. It is assumed for dynamic equilibrium affinity capillary electrophoresis that ligand-receptor binding occurs rapidly when the analyte and buffer are mixed. Binding constants are generally derived from this technique based upon the peak migration shift of the receptor, which is dependent upon the concentration of the analyte in the sample. Affinity-based capillary electrophoresis, also known as capillary electroaffinity chromatography (CEC), involves the binding of the analyte in the sample to an immobilized receptor molecule on the capillary wall, microbeads, or microchannels. CEC offers the highest separation efficacy of all three ACE techniques, as non-matrixed sample components are washed away and the ligand can then be released and analyzed. Affinity capillary electrophoresis takes the advantages of capillary electrophoresis and applies them to the study of protein interactions. ACE is advantageous because it has a high separation efficiency, has a shorter analysis time, can be run at physiological pH, and involves low consumption of ligand/molecules. In addition, the composition of the protein of interest does not have to be known in order to run ACE studies. The main disadvantage, though, is that it does not give much stoichiometric information about the reaction being studied.
Affinity-trap polyacrylamide gel electrophoresis Affinity-trap polyacrylamide gel electrophoresis (AT-PAGE) has become one of the most popular methods of protein separation. This is not only due to its separation qualities, but also because it can be used in conjunction with a variety of other analytic methods, such as mass spectrometry and western blotting. In addition to helping isolate and purify proteins from biological samples, AT-PAGE is anticipated to be helpful in analyses of variations in the expression of particular proteins as well as in investigations of posttranslational modifications of proteins. This method utilizes a two-step approach. First, a protein sample is run through a polyacrylamide gel using electrophoresis. Then, the sample is transferred to a different polyacrylamide gel (the affinity-trap gel) where affinity probes are immobilized. The proteins that do not have affinity for the affinity probes pass through the affinity-trap gel, and proteins with affinity for the probes will be "trapped" by the immobile affinity probes. These trapped proteins are then visualized and identified using mass spectrometry after in-gel digestion. Phosphate affinity electrophoresis Phosphate affinity electrophoresis utilizes an affinity probe consisting of a molecule that binds specifically to divalent phosphate ions in neutral aqueous solution, known as a "Phos-tag". This method also utilizes a separation gel in which an acrylamide-pendant Phos-tag monomer is copolymerized. Phosphorylated proteins migrate slowly in the gel compared to non-phosphorylated proteins. This technique gives the researcher the ability to observe differences in the phosphorylation states of any given protein. It allows for the detection of distinct bands even in protein molecules that have the same number of phosphorylated amino acid residues but are phosphorylated at different amino acid locations. See also Immunoelectrophoresis References External links Comprehensive texts edited by Niels H. Axelsen in Scandinavian Journal of Immunology, 1975 Volume 4 Supplement Electrophoresis Molecular biology Protein methods Laboratory techniques Protein–protein interaction assays
Affinity electrophoresis
Chemistry,Biology
1,859
37,058,689
https://en.wikipedia.org/wiki/Water%20sampler
A water sampler is a device for field collection of one or more samples of water for testing. There are many different designs of water samplers. Selection of a particular sampler type depends on the type of analysis to be performed (e.g. ambient water quality or wastewater), the type of water source (e.g. a lake or pond, small stream or large river, coastal waters or deep ocean) and other factors such as ambient environmental conditions (e.g. collection of stormwater during a rain event vs. ambient water sampling during dry weather). Some sampler devices are designed for manual collection (a grab sample). Composite samplers can be configured to collect multiple samples over a specified time period or flow regime. See also Rosette sampler References Environmental science Sampler
Water sampler
Chemistry,Environmental_science
165
1,522,333
https://en.wikipedia.org/wiki/Gamma%20Velorum
Gamma Velorum is a quadruple star system in the constellation Vela. This name is the Bayer designation for the star, which is Latinised from γ Velorum and abbreviated γ Vel. At a combined magnitude of +1.72, it is one of the brightest stars in the night sky, and contains by far the closest and brightest Wolf–Rayet star. It has the traditional name Suhail al Muhlif and the modern name Regor, but neither is approved by the International Astronomical Union, making it the brightest star by apparent magnitude without an IAU-approved name. The γ Velorum system includes a pair of stars separated by 41″, each of which is also a spectroscopic binary system. γ2 Velorum, the brighter of the visible pair, contains the Wolf–Rayet star and a blue supergiant, while γ1 Velorum contains a blue giant and an unseen companion. Distance Gamma Velorum is close enough to have accurate parallax measurements as well as distance estimates by more indirect means. The Hipparcos parallax for γ2 implies a distance of 342 parsecs (pc). A dynamical parallax derived from calculations of the orbital parameters gives a value of 336 pc, similar to spectrophotometric derivations. A VLTI-based interferometry measurement of the distance gives a slightly larger value of 368 ± 51 pc. All these distances are somewhat less than the commonly assumed distance of 450 pc for the Vela OB2 association, which is the closest grouping of young massive stars. Stellar system The Gamma Velorum system is composed of at least four stars. The brightest member, γ2 Velorum or γ Velorum A, is a spectroscopic binary composed of a blue supergiant of spectral class O7.5 and a massive Wolf–Rayet star. The binary has an orbital period of 78.5 days and a separation varying from 0.8 to 1.6 astronomical units. Both the Wolf–Rayet star and the blue supergiant are likely to end their lives as Type Ib supernovae; they are among the nearest supernova candidates to the Sun. The Wolf–Rayet star has traditionally been regarded as the primary since its emission lines dominate the spectrum, but the O star is visually brighter and also more luminous. For clarity, the components are now often referred to as WR and O. The bright (apparent magnitude +4.2) γ1 Velorum or γ Velorum B is a spectroscopic binary with a period of 1.48 days. Only the primary is detected, and it is a blue-white giant. It is separated from the Wolf–Rayet binary by 41.2″, easily resolved with binoculars. The two are too close to be separated with the naked eye, appearing together as a single star of apparent magnitude 1.72 (γ2 alone averaging 1.83). Gamma Velorum has several fainter companions that share a common motion and are likely to be members of the Vela OB2 association. The magnitude +7.3 CD-46 3848 is a white F0 star 62.3 arcseconds from the A component. At 93.5 arcseconds is another binary star, an F0 star of magnitude +9.2. Gamma Velorum is associated with several hundred pre-main-sequence stars within less than a degree. The ages of these stars are estimated to be at least 5 million years. Etymology The Arabic name is al Suhail al Muḥlīf. Al Muhlif refers to the oath-taker, and al Suhail is originally derived from a word meaning the plain. Suhail is used for at least three other stars: Canopus, λ Velorum (al Suhail al Wazn) and ζ Puppis (Suhail Hadar). Suhail is also a common Arabic male first name. In Chinese, a name meaning Celestial Earth God's Temple refers to an asterism consisting of γ2 Velorum, δ Velorum, κ Velorum and b Velorum.
Consequently, γ2 Velorum itself is known as "the First Star of Celestial Earth God's Temple". The name Regor ("Roger" spelled in reverse) was invented as a practical joke by the Apollo 1 astronaut Gus Grissom for his fellow astronaut Roger Chaffee. Due to the exotic nature of its spectrum (bright emission lines in lieu of dark absorption lines), it has also been dubbed the "Spectral Gem of the Southern Skies". See also Gamma Cassiopeiae, informally named Navi for astronaut Virgil Ivan "Gus" Grissom Iota Ursae Majoris, informally named Dnoces for astronaut Ed White References O-type giants B-type giants Wolf–Rayet stars Spectroscopic binaries 6 Gum Nebula Vela (constellation) Velorum, Gamma 3207 Durchmusterung objects 068273 039953 Regor TIC objects Southern pole stars
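The combined magnitude quoted in the article above can be checked directly: apparent magnitudes are logarithmic, so components combine through their fluxes, m = −2.5 log₁₀(Σᵢ 10^(−0.4 mᵢ)). A small Python sketch using only the two magnitudes given in the article (γ2 ≈ 1.83, γ1 ≈ 4.2):

```python
import math

# Apparent magnitudes given in the article for the two visible components.
m_gamma2 = 1.83   # average brightness of the gamma-2 pair
m_gamma1 = 4.2    # the gamma-1 pair

# Combine the components' fluxes, not their magnitudes.
total_flux = 10 ** (-0.4 * m_gamma2) + 10 ** (-0.4 * m_gamma1)
combined = -2.5 * math.log10(total_flux)

print(f"combined magnitude = {combined:.2f}")  # ~1.72, matching the value in the text
```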
Gamma Velorum
Astronomy
1,065
30,981,930
https://en.wikipedia.org/wiki/List%20of%20integrals%20of%20Gaussian%20functions
In the expressions in this article, φ(x) = exp(−x²/2)/√(2π) is the standard normal probability density function, Φ(x) = ∫₋∞ˣ φ(t) dt = (1 + erf(x/√2))/2 is the corresponding cumulative distribution function (where erf is the error function), and T(h, a) is Owen's T function. Owen has an extensive list of Gaussian-type integrals; only a subset is given below. Indefinite integrals In the previous two integrals, n!! is the double factorial: for even n it is equal to the product of all even numbers from 2 to n, and for odd n it is the product of all odd numbers from 1 to n; additionally it is assumed that 0!! = (−1)!! = 1. Definite integrals References Gaussian functions Gaussian function
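The article's formula list does not survive in this extract. As a hedged illustration of the kind of entries it contains, here are a few standard indefinite integrals expressible in terms of φ and Φ; these are well-known identities (each verifiable by differentiating the right-hand side), not formulas recovered from the stripped list.

```latex
% A few standard Gaussian integrals in terms of the normal pdf \varphi
% and cdf \Phi; each can be checked by differentiating the right side.
\begin{align*}
  \int \varphi(x)\,dx &= \Phi(x) + C \\
  \int x\,\varphi(x)\,dx &= -\varphi(x) + C \\
  \int x^{2}\,\varphi(x)\,dx &= \Phi(x) - x\,\varphi(x) + C \\
  \int \varphi(x)^{2}\,dx &= \tfrac{1}{2\sqrt{\pi}}\,\Phi\!\left(x\sqrt{2}\right) + C
\end{align*}
```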
List of integrals of Gaussian functions
Mathematics
127
47,049,248
https://en.wikipedia.org/wiki/Rubroboletus%20pulchrotinctus
Rubroboletus pulchrotinctus is a rare bolete fungus in the genus Rubroboletus, native to central and southern Europe. It was originally described in the genus Boletus by Italian mycologist Carlo Luciano Alessio in 1985, but subsequently transferred to the genus Rubroboletus by Zhao and colleagues (2015) on the basis of molecular evidence. Phylogenetically, R. pulchrotinctus is the sister species of the better-known Rubroboletus satanas, with which it shares several morphological features. Rubroboletus pulchrotinctus forms ectomycorrhizal associations with several members of the Fagaceae, particularly oaks (Quercus). It is known from Spain, France, Italy and Greece, as well as the Balkan and Crimean Peninsulas. In the eastern Mediterranean region, its distribution extends as far south as Israel, where it is found in Mount Carmel National Park and Beit Oren growing under the Palestine oak (Quercus calliprinos), and the island of Cyprus, where it is found under the endemic golden oak (Quercus alnifolia). References pulchrotinctus Fungi described in 1985 Fungi of Europe Fungi of Western Asia Fungus species
Rubroboletus pulchrotinctus
Biology
258
65,682,412
https://en.wikipedia.org/wiki/Laila%20Ohlgren
Ragnhild Laila Lillemor Ohlgren, born Andersson (19 November 1937 – 6 January 2014), was a Swedish telecommunications engineer who is seen as a developer of mobile telephony together with Östen Mäkitalo, both engineers at Telia. In particular, she successfully introduced storage of the telephone number to be dialed in the phone's microprocessor so that connection could be achieved by pressing the call button. This avoided transmission breakages caused by obstacles such as trees during more lengthy traditional dialing. The approach was subsequently adopted worldwide. For her efforts, in 2009 she became the first woman to be awarded the Polhem Prize for technical innovation. Early life Born in Tingshammer just outside Stockholm on 19 November 1937, Ragnhild Laila Lillemor Andersson was the daughter of Johan Arvid Andersson and his wife Sally Elisabeth, born Carlsson. The family name was subsequently changed to Tingshammar. She was brought up by her single mother in difficult conditions. She attended the public school in Kungsholmen. While still a teenager, she met the baritone Bo Viktor Ohlgren (1933–2015), who was active in the Mission Covenant Church of Sweden. They married in 1959. Together they had two children, Magnus and Håkan, who both became engineers. Thanks to her father-in-law, who worked for the Swedish telecoms authority Televerket, she began working there in 1956 while continuing her education at home in the evenings. In this way, she succeeded not only in passing the school matriculation examination but also in graduating as an engineer. At Televerket, where she was the only woman in her department, she was promoted to project leader with involvement in the development of mobile telephone technology. From 1969, she was working with Östen Mäkitalo in connection with the Nordic Mobile Telephone (NMT) project. Inventing the call button When final tests of the NMT system were being conducted in 1979, just a few days before a key meeting in Kalmar, it suddenly occurred to Laila Ohlgren that the frequent breakdowns in dialing caused by objects such as trees when on the move could be overcome by using the phone's microprocessor to store the number to be dialed. A call button could then be used to make the connection by sending all the digits in one go. Although it was Whit weekend, she called Östen and asked him if they should drive around Stockholm and test her idea out. As reported in Ny Teknik, she explained, "One of us drove and the other made calls, and we continued the whole weekend. We perhaps made a thousand connections in order to get a reasonable statistical basis to see if the new solution worked. And it did." The approach proved to represent an important improvement in performance and was adopted as a component of NMT, the first integrated mobile telephony system in the world. Ohlgren's call button innovation became a world standard. Ohlgren continued to be employed at Televerket, later known as Telia, heading 750 employees in their insurance branch in Haninge until her retirement in 2005. In 2009, Laila Ohlgren became the first woman to be honoured with the Polhem Prize from the Swedish Association of Graduate Engineers, which included an award of 250,000 Swedish crowns (around $28,000). She died on 6 January 2014 and is buried in Skogskyrkogården Cemetery in Gamla Enskede.
References Further reading External links Laila Ohlgren utvecklade mobiltelefonin, illustrated biography of Laila Ohlgren in Swedish 1937 births 2014 deaths 20th-century Swedish inventors Women inventors Telecommunications engineers Swedish women engineers 20th-century Swedish women engineers 21st-century Swedish women engineers 20th-century Swedish engineers 21st-century Swedish engineers
Laila Ohlgren
Engineering
791
19,271,448
https://en.wikipedia.org/wiki/Road%20coloring%20theorem
In graph theory the road coloring theorem, known previously as the road coloring conjecture, deals with synchronized instructions. The issue involves whether by using such instructions, one can reach or locate an object or destination from any other point within a network (which might be a representation of city streets or a maze). In the real world, this phenomenon would be as if you called a friend to ask for directions to his house, and he gave you a set of directions that worked no matter where you started from. This theorem also has implications in symbolic dynamics. The theorem was first conjectured by Roy Adler and Benjamin Weiss. It was proved by Avraham Trahtman. Example and intuition The image to the right shows a directed graph on eight vertices in which each vertex has out-degree 2. (Each vertex in this case also has in-degree 2, but that is not necessary for a synchronizing coloring to exist.) The edges of this graph have been colored red and blue to create a synchronizing coloring. For example, consider the vertex marked in yellow. No matter where in the graph you start, if you traverse all nine edges in the walk "blue-red-red—blue-red-red—blue-red-red", you will end up at the yellow vertex. Similarly, if you traverse all nine edges in the walk "blue-blue-red—blue-blue-red—blue-blue-red", you will always end up at the vertex marked in green, no matter where you started. The road coloring theorem states that for a certain category of directed graphs, it is always possible to create such a coloring. Mathematical description Let G be a finite, strongly connected, directed graph where all the vertices have the same out-degree k. Let A be the alphabet containing the letters 1, ..., k. A synchronizing coloring (also known as a collapsible coloring) in G is a labeling of the edges in G with letters from A such that (1) each vertex has exactly one outgoing edge with a given label and (2) for every vertex v in the graph, there exists a word w over A such that all paths in G corresponding to w terminate at v. The terminology synchronizing coloring is due to the relation between this notion and that of a synchronizing word in finite automata theory. For such a coloring to exist at all, it is necessary that G be aperiodic. The road coloring theorem states that aperiodicity is also sufficient for such a coloring to exist. Therefore, the road coloring problem can be stated briefly as: Every finite strongly connected aperiodic graph of uniform out-degree has a synchronizing coloring. Previous partial results Previous partial or special-case results include the following: If G is a finite strongly connected aperiodic directed graph with no multiple edges, and G contains a simple cycle of prime length which is a proper subset of G, then G has a synchronizing coloring. If G is a finite strongly connected aperiodic directed graph (multiple edges allowed) and every vertex has the same in-degree and out-degree k, then G has a synchronizing coloring. See also Four color theorem Graph coloring Notes References . . . . . Combinatorics Automata (computation) Mathematics and culture Graph coloring Topological graph theory Theorems in graph theory
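The "traverse the same word from every start vertex" idea described in the article above is easy to check mechanically. Below is a minimal Python sketch that tests whether a word is synchronizing for a given edge coloring. The 4-vertex graph used here is the classic Černý automaton recast with "blue"/"red" labels, a standard example with a synchronizing coloring; it is not the 8-vertex graph from the article's figure.

```python
# Verify that a word is synchronizing for an edge-colored directed graph
# with uniform out-degree 2.

def follow(edges, start, word):
    """Walk the colored edges from `start`, one step per letter of `word`."""
    v = start
    for letter in word:
        v = edges[v][letter]
    return v

def is_synchronizing(edges, word):
    """True if every start vertex ends at the same vertex after `word`."""
    ends = {follow(edges, v, word) for v in edges}
    return len(ends) == 1

# edges[v][color] = target vertex; every vertex has one 'blue' and one 'red' edge.
# 'blue' steps to the next vertex around a 4-cycle; 'red' is a self-loop
# everywhere except vertex 0, which it sends to vertex 1 (Cerny automaton C4).
edges = {
    0: {"blue": 1, "red": 1},
    1: {"blue": 2, "red": 1},
    2: {"blue": 3, "red": 2},
    3: {"blue": 0, "red": 3},
}

# The shortest reset word for C4 has length (4 - 1)^2 = 9.
word = ["red"] + ["blue"] * 3 + ["red"] + ["blue"] * 3 + ["red"]
print(is_synchronizing(edges, word))            # True
print({follow(edges, v, word) for v in edges})  # {1}: all walks meet at vertex 1
```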
Road coloring theorem
Mathematics
696
12,420,975
https://en.wikipedia.org/wiki/Polyaminopropyl%20biguanide
Polyaminopropyl biguanide (PAPB) is a polymer containing biguanide groups connected by a three-methylene (propyl) linker. The polymer is a propyl analogue of polyhexamethylene biguanide (PHMB). The polymer displays some antibacterial activity, although it is much lower than that of PHMB. As of May 2024, PAPB is not approved as a biocidal active substance under EU regulations. Name controversy In some sources, particularly in lists of cosmetic ingredients (INCI), the name polyaminopropyl biguanide is wrongly associated with polyhexamethylene biguanide (PHMB). References Polymers Biguanides
Polyaminopropyl biguanide
Chemistry,Materials_science
144
73,479,809
https://en.wikipedia.org/wiki/Plutonium%20pentafluoride
Plutonium pentafluoride is a binary inorganic compound of plutonium and fluorine with the chemical formula PuF5. Synthesis It can be prepared by photodissociation of gaseous plutonium hexafluoride, which yields plutonium pentafluoride and fluorine. Physical properties Plutonium pentafluoride forms a white solid. Hazards Plutonium pentafluoride is toxic and radioactive. References Fluorides Plutonium compounds Actinide halides
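Balancing the photodissociation route described above gives the following overall equation; this is a straightforward mass balance, not a stoichiometry quoted in the article.

```latex
% Overall mass balance for the photodissociation route: two PuF6 molecules
% each lose one fluorine atom, which pair up as one F2 molecule.
\[
  2\,\mathrm{PuF_6} \;\xrightarrow{\;h\nu\;}\; 2\,\mathrm{PuF_5} + \mathrm{F_2}
\]
```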
Plutonium pentafluoride
Chemistry
93
52,007,903
https://en.wikipedia.org/wiki/NGC%20282
NGC 282 is an elliptical galaxy in the constellation Pisces. It was discovered on October 13, 1879 by Édouard Stephan. References External links 0282 18791013 Pisces (constellation) Elliptical galaxies Discoveries by Édouard Stephan +05-03-015 003090
NGC 282
Astronomy
58
150,159
https://en.wikipedia.org/wiki/Noether%27s%20theorem
Noether's theorem states that every continuous symmetry of the action of a physical system with conservative forces has a corresponding conservation law. This is the first of two theorems (see Noether's second theorem) published by mathematician Emmy Noether in 1918. The action of a physical system is the integral over time of a Lagrangian function, from which the system's behavior can be determined by the principle of least action. This theorem only applies to continuous and smooth symmetries of physical space. Noether's theorem is used in theoretical physics and the calculus of variations. It reveals the fundamental relation between the symmetries of a physical system and the conservation laws. It also made modern theoretical physicists much more focused on symmetries of physical systems. A generalization of the formulations on constants of motion in Lagrangian and Hamiltonian mechanics (developed in 1788 and 1833, respectively), it does not apply to systems that cannot be modeled with a Lagrangian alone (e.g., systems with a Rayleigh dissipation function). In particular, dissipative systems with continuous symmetries need not have a corresponding conservation law. Basic illustrations and background As an illustration, if a physical system behaves the same regardless of how it is oriented in space (that is, it is invariant under rotations), its Lagrangian is symmetric under continuous rotation: from this symmetry, Noether's theorem dictates that the angular momentum of the system be conserved, as a consequence of its laws of motion. The physical system itself need not be symmetric; a jagged asteroid tumbling in space conserves angular momentum despite its asymmetry. It is the laws of its motion that are symmetric. As another example, if a physical process exhibits the same outcomes regardless of place or time, then its Lagrangian is symmetric under continuous translations in space and time respectively: by Noether's theorem, these symmetries account for the conservation laws of linear momentum and energy within this system, respectively. Noether's theorem is important, both because of the insight it gives into conservation laws, and also as a practical calculational tool. It allows investigators to determine the conserved quantities (invariants) from the observed symmetries of a physical system. Conversely, it allows researchers to consider whole classes of hypothetical Lagrangians with given invariants, to describe a physical system. As an illustration, suppose that a physical theory is proposed which conserves a quantity X. A researcher can calculate the types of Lagrangians that conserve X through a continuous symmetry. Due to Noether's theorem, the properties of these Lagrangians provide further criteria to understand the implications and judge the fitness of the new theory. There are numerous versions of Noether's theorem, with varying degrees of generality. There are natural quantum counterparts of this theorem, expressed in the Ward–Takahashi identities. Generalizations of Noether's theorem to superspaces also exist. Informal statement of the theorem All fine technical points aside, Noether's theorem can be stated informally as: "If a system has a continuous symmetry property, then there are corresponding quantities whose values are conserved in time." A more sophisticated version of the theorem involving fields states that: "To every differentiable symmetry generated by local actions there corresponds a conserved current." The word "symmetry" in the above statements refers more precisely to the covariance of the form that a physical law takes with respect to a one-dimensional Lie group of transformations satisfying certain technical criteria.
The conservation law of a physical quantity is usually expressed as a continuity equation. The formal proof of the theorem utilizes the condition of invariance to derive an expression for a current associated with a conserved physical quantity. In modern terminology, the conserved quantity is called the Noether charge, while the flow carrying that charge is called the Noether current. The Noether current is defined up to a solenoidal (divergenceless) vector field. In the context of gravitation, Felix Klein's statement of Noether's theorem for the action I stipulates for the invariants: "If an integral I is invariant under a continuous group Gρ with ρ parameters, then ρ linearly independent combinations of the Lagrangian expressions are divergences." Brief illustration and overview of the concept The main idea behind Noether's theorem is most easily illustrated by a system with one coordinate q and a continuous symmetry q → q + δq (gray arrows on the diagram). Consider any trajectory q(t) (bold on the diagram) that satisfies the system's laws of motion. That is, the action S governing this system is stationary on this trajectory, i.e. does not change under any local variation of the trajectory. In particular it would not change under a variation that applies the symmetry flow on a time segment [t0, t1] and is motionless outside that segment. To keep the trajectory continuous, we use "buffering" periods of small time τ to transition between the segments gradually. The total change in the action S now comprises changes brought by every interval in play. Parts where the variation itself vanishes, i.e. outside [t0 − τ, t1 + τ], bring no change in S. The middle part does not change the action either, because its transformation is a symmetry and thus preserves the Lagrangian L and the action S = ∫ L dt. The only remaining parts are the "buffering" pieces. In these regions both the coordinate and the velocity change, but the velocity changes by δq/τ, while the change in the coordinate is negligible by comparison since the time span τ of the buffering is small (taken to the limit of 0). So the regions contribute mostly through their "slanting" q̇ → q̇ ± δq/τ. That changes the Lagrangian by ΔL ≈ (∂L/∂q̇) Δq̇, which integrates to approximately ± (∂L/∂q̇) δq. These last terms, evaluated around the endpoints t0 and t1, should cancel each other in order to make the total change in the action be zero, as would be expected if the trajectory is a solution. That is, the value of (∂L/∂q̇) δq at t0 equals its value at t1, meaning the quantity (∂L/∂q̇) δq is conserved, which is the conclusion of Noether's theorem. For instance if pure translations of q by a constant are the symmetry, then the conserved quantity becomes just ∂L/∂q̇, the canonical momentum. More general cases follow the same idea: Historical context A conservation law states that some quantity X in the mathematical description of a system's evolution remains constant throughout its motion – it is an invariant. Mathematically, the rate of change of X (its derivative with respect to time) is zero: dX/dt = 0. Such quantities are said to be conserved; they are often called constants of motion (although motion per se need not be involved, just evolution in time). For example, if the energy of a system is conserved, its energy is invariant at all times, which imposes a constraint on the system's motion and may help in solving for it. Aside from insights that such constants of motion give into the nature of a system, they are a useful calculational tool; for example, an approximate solution can be corrected by finding the nearest state that satisfies the suitable conservation laws. The earliest constants of motion discovered were momentum and kinetic energy, which were proposed in the 17th century by René Descartes and Gottfried Leibniz on the basis of collision experiments, and refined by subsequent researchers.
Isaac Newton was the first to enunciate the conservation of momentum in its modern form, and showed that it was a consequence of Newton's laws of motion. According to general relativity, the conservation laws of linear momentum, energy and angular momentum are only exactly true globally when expressed in terms of the sum of the stress–energy tensor (non-gravitational stress–energy) and the Landau–Lifshitz stress–energy–momentum pseudotensor (gravitational stress–energy). The local conservation of non-gravitational linear momentum and energy in a free-falling reference frame is expressed by the vanishing of the covariant divergence of the stress–energy tensor. Another important conserved quantity, discovered in studies of the celestial mechanics of astronomical bodies, is the Laplace–Runge–Lenz vector. In the late 18th and early 19th centuries, physicists developed more systematic methods for discovering invariants. A major advance came in 1788 with the development of Lagrangian mechanics, which is related to the principle of least action. In this approach, the state of the system can be described by any type of generalized coordinates q; the laws of motion need not be expressed in a Cartesian coordinate system, as was customary in Newtonian mechanics. The action is defined as the time integral I of a function known as the Lagrangian L, I = ∫ L(q, q̇, t) dt, where the dot over q signifies the rate of change of the coordinates q, q̇ = dq/dt. Hamilton's principle states that the physical path q(t)—the one actually taken by the system—is a path for which infinitesimal variations in that path cause no change in I, at least up to first order. This principle results in the Euler–Lagrange equations, d/dt (∂L/∂q̇) = ∂L/∂q. Thus, if one of the coordinates, say qk, does not appear in the Lagrangian, the right-hand side of the equation is zero, and the left-hand side requires that d/dt (∂L/∂q̇k) = dpk/dt = 0, where the momentum pk = ∂L/∂q̇k is conserved throughout the motion (on the physical path). Thus, the absence of the ignorable coordinate qk from the Lagrangian implies that the Lagrangian is unaffected by changes or transformations of qk; the Lagrangian is invariant, and is said to exhibit a symmetry under such transformations. This is the seed idea generalized in Noether's theorem. Several alternative methods for finding conserved quantities were developed in the 19th century, especially by William Rowan Hamilton. For example, he developed a theory of canonical transformations which allowed changing coordinates so that some coordinates disappeared from the Lagrangian, as above, resulting in conserved canonical momenta. Another approach, and perhaps the most efficient for finding conserved quantities, is the Hamilton–Jacobi equation. Emmy Noether's work on the invariance theorem began in 1915 when she was helping Felix Klein and David Hilbert with their work related to Albert Einstein's theory of general relativity. By March 1918 she had most of the key ideas for the paper, which would be published later in the year. Mathematical expression Simple form using perturbations The essence of Noether's theorem is generalizing the notion of ignorable coordinates. One can assume that the Lagrangian L defined above is invariant under small perturbations (warpings) of the time variable t and the generalized coordinates q. One may write t → t′ = t + δt, q → q′ = q + δq, where the perturbations δt and δq are both small, but variable. For generality, assume there are (say) N such symmetry transformations of the action, i.e. transformations leaving the action unchanged, labelled by an index r = 1, 2, 3, ..., N.
Then the resultant perturbation can be written as a linear sum of the individual types of perturbations: δt = Σr εr Tr, δq = Σr εr Qr, where εr are infinitesimal parameter coefficients corresponding to each: generator Tr of time evolution, and generator Qr of the generalized coordinates. For translations, Qr is a constant with units of length; for rotations, it is an expression linear in the components of q, and the parameters make up an angle. Using these definitions, Noether showed that the N quantities (∂L/∂q̇ · q̇ − L) Tr − (∂L/∂q̇) · Qr are conserved (constants of motion). Examples I. Time invariance For illustration, consider a Lagrangian that does not depend on time, i.e., that is invariant (symmetric) under changes t → t + δt, without any change in the coordinates q. In this case, N = 1, T = 1 and Q = 0; the corresponding conserved quantity is the total energy H: H = (∂L/∂q̇) · q̇ − L. II. Translational invariance Consider a Lagrangian which does not depend on an ("ignorable", as above) coordinate qk; so it is invariant (symmetric) under changes qk → qk + δqk. In that case, N = 1, T = 0, and Qk = 1; the conserved quantity is the corresponding linear momentum pk: pk = ∂L/∂q̇k. In special and general relativity, these two conservation laws can be expressed either globally (as it is done above), or locally as a continuity equation. The global versions can be united into a single global conservation law: the conservation of the energy-momentum 4-vector. The local versions of energy and momentum conservation (at any point in space-time) can also be united, into the conservation of a quantity defined locally at the space-time point: the stress–energy tensor (this will be derived in the next section). III. Rotational invariance The conservation of the angular momentum L = r × p is analogous to its linear momentum counterpart. It is assumed that the symmetry of the Lagrangian is rotational, i.e., that the Lagrangian does not depend on the absolute orientation of the physical system in space. For concreteness, assume that the Lagrangian does not change under small rotations of an angle δθ about an axis n; such a rotation transforms the Cartesian coordinates by the equation r → r + δθ n × r. Since time is not being transformed, T = 0, and N = 1. Taking δθ as the ε parameter and the Cartesian coordinates r as the generalized coordinates q, the corresponding Q variables are given by Q = n × r. Then Noether's theorem states that the following quantity is conserved: (∂L/∂ṙ) · Q = p · (n × r) = n · (r × p). In other words, the component of the angular momentum L along the n axis is conserved. And if n is arbitrary, i.e., if the system is insensitive to any rotation, then every component of L is conserved; in short, angular momentum is conserved. Field theory version Although useful in its own right, the version of Noether's theorem just given is a special case of the general version derived in 1915. To give the flavor of the general theorem, a version of Noether's theorem for continuous fields in four-dimensional space–time is now given. Since field theory problems are more common in modern physics than mechanics problems, this field theory version is the most commonly used (or most often implemented) version of Noether's theorem. Let there be a set of differentiable fields φ defined over all space and time; for example, the temperature T(x, t) would be representative of such a field, being a number defined at every place and time.
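To make Examples I and II concrete before turning to fields, here is a routine check for a standard mechanical Lagrangian (a worked instance supplied for clarity; the specific Lagrangian is an illustrative assumption, not taken from the original text):

```latex
% Assume L = (1/2) m \dot{q}^2 - V(q).
% Time invariance (T = 1, Q = 0): the conserved quantity is the total energy
H \;=\; \frac{\partial L}{\partial \dot q}\,\dot q \;-\; L
  \;=\; m\dot q^{2} \;-\; \Bigl(\tfrac12 m\dot q^{2} - V(q)\Bigr)
  \;=\; \tfrac12 m\dot q^{2} + V(q) .
% Translational invariance (T = 0, Q = 1, with V constant): the conserved
% quantity is the canonical momentum
p \;=\; \frac{\partial L}{\partial \dot q} \;=\; m\dot q .
```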
The principle of least action can be applied to such fields, but the action is now an integral over space and time: S = ∫ L(φ, ∂μφ, x^μ) d⁴x (the theorem can be further generalized to the case where the Lagrangian depends on up to the nth derivative, and can also be formulated using jet bundles). A continuous transformation of the fields φ can be written infinitesimally as φ ↦ φ + εΨ, where Ψ is in general a function that may depend on both x^μ and φ. The condition for Ψ to generate a physical symmetry is that the action S is left invariant. This will certainly be true if the Lagrangian density L is left invariant, but it will also be true if the Lagrangian changes by a divergence, L ↦ L + ε ∂μΛ^μ, since the integral of a divergence becomes a boundary term according to the divergence theorem. A system described by a given action might have multiple independent symmetries of this type, indexed by r = 1, 2, ..., N, so the most general symmetry transformation would be written as φ ↦ φ + εr Ψr, with the consequence L ↦ L + εr ∂μΛr^μ. For such systems, Noether's theorem states that there are N conserved current densities jr^ν = (∂L/∂φ,ν) · Ψr − Λr^ν (where the dot product is understood to contract the field indices, not the ν index or r index). In such cases, the conservation law is expressed in a four-dimensional way, ∂ν jr^ν = 0, which expresses the idea that the amount of a conserved quantity within a sphere cannot change unless some of it flows out of the sphere. For example, electric charge is conserved; the amount of charge within a sphere cannot change unless some of the charge leaves the sphere. For illustration, consider a physical system of fields that behaves the same under translations in time and space, as considered above; in other words, L(φ, ∂μφ, x^μ) is constant in its third argument. In that case, N = 4, one for each dimension of space and time. An infinitesimal translation in space, x^μ ↦ x^μ + εr δr^μ (with δ denoting the Kronecker delta), affects the fields as φ(x^μ) ↦ φ(x^μ − εr δr^μ): that is, relabelling the coordinates is equivalent to leaving the coordinates in place while translating the field itself, which in turn is equivalent to transforming the field by replacing its value at each point with the value at the point "behind" it which would be mapped onto it by the infinitesimal displacement under consideration. Since this is infinitesimal, we may write this transformation as Ψr = −δr^μ ∂μφ. The Lagrangian density transforms in the same way, L(x^μ) ↦ L(x^μ − εr δr^μ), so Λr^μ = −δr^μ L, and thus Noether's theorem corresponds to the conservation law for the stress–energy tensor Tμ^ν, where we have used μ in place of r. To wit, by using the expression given earlier, and collecting the four conserved currents (one for each μ) into a tensor T, Noether's theorem gives ∂ν Tμ^ν = 0, with Tμ^ν = (∂L/∂φ,ν) · φ,μ − δμ^ν L (we relabelled the dummy index μ as σ at an intermediate step to avoid a clash of indices). (However, the Tμ^ν obtained in this way may differ from the symmetric tensor used as the source term in general relativity; see Canonical stress–energy tensor.) The conservation of electric charge, by contrast, can be derived by considering Ψ linear in the fields φ rather than in the derivatives. In quantum mechanics, the probability amplitude ψ(x) of finding a particle at a point x is a complex field φ, because it ascribes a complex number to every point in space and time. The probability amplitude itself is physically unmeasurable; only the probability p = |ψ|² can be inferred from a set of measurements. Therefore, the system is invariant under transformations of the ψ field and its complex conjugate field ψ* that leave |ψ|² unchanged, such as a complex rotation ψ ↦ e^{iθ}ψ, ψ* ↦ e^{−iθ}ψ*. In the limit when the phase θ becomes infinitesimally small, δθ, it may be taken as the parameter ε, while the Ψ are equal to iψ and −iψ*, respectively.
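Written out explicitly (a standard first-order expansion, supplied here for clarity), the complex rotation and the generators Ψ mentioned in the last sentence are:

```latex
% Infinitesimal phase rotation of the field and its conjugate:
\psi \;\mapsto\; e^{i\,\delta\theta}\psi \;\approx\; \psi + i\,\delta\theta\,\psi ,
\qquad
\psi^{*} \;\mapsto\; e^{-i\,\delta\theta}\psi^{*} \;\approx\; \psi^{*} - i\,\delta\theta\,\psi^{*} ,
% hence, taking \varepsilon = \delta\theta,
\Psi = i\psi , \qquad \Psi^{*} = -i\psi^{*} .
```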
A specific example is the Klein–Gordon equation, the relativistically correct version of the Schrödinger equation for spinless particles, which has the Lagrangian density L = ∂νψ ∂^νψ* − m²ψψ*. In this case, Noether's theorem states that the conserved (∂ ⋅ j = 0) current equals j^ν = i (ψ ∂^νψ* − ψ* ∂^νψ), which, when multiplied by the charge on that species of particle, equals the electric current density due to that type of particle. This "gauge invariance" was first noted by Hermann Weyl, and is one of the prototype gauge symmetries of physics. Derivations One independent variable Consider the simplest case, a system with one independent variable, time. Suppose the dependent variables q are such that the action integral I = ∫ L(q, q̇, t) dt is invariant under brief infinitesimal variations in the dependent variables. In other words, they satisfy the Euler–Lagrange equations d/dt (∂L/∂q̇) = ∂L/∂q. And suppose that the integral is invariant under a continuous symmetry. Mathematically such a symmetry is represented as a flow, φ, which acts on the variables as follows: t → t′ = t + εT, q(t) → q′(t′) = φ[q(t), ε], where ε is a real variable indicating the amount of flow, and T is a real constant (which could be zero) indicating how much the flow shifts time. The action integral flows to I′(ε) = ∫ L(q′(t′), dq′(t′)/dt′, t′) dt′, which may be regarded as a function of ε. Calculating the derivative at ε = 0 and using Leibniz's rule gives an integrand involving ∂L/∂q, ∂L/∂q̇ and boundary terms proportional to T. The Euler–Lagrange equations allow ∂L/∂q to be replaced by d/dt (∂L/∂q̇); substituting this (the Euler–Lagrange equations are used twice) turns the integrand into a total time derivative, from which one can see that (∂L/∂q̇ · q̇ − L) T − (∂L/∂q̇) · (∂φ/∂ε)|ε=0 is a constant of the motion, i.e., it is a conserved quantity. Since φ[q, 0] = q, the derivative (∂φ/∂ε)|ε=0 is the generator Q of the symmetry, and so the conserved quantity simplifies to (∂L/∂q̇ · q̇ − L) T − (∂L/∂q̇) · Q. To avoid excessive complication of the formulas, this derivation assumed that the flow does not change as time passes. The same result can be obtained in the more general case. Field-theoretic derivation Noether's theorem may also be derived for tensor fields φ^A, where the index A ranges over the various components of the various tensor fields. These field quantities are functions defined over a four-dimensional space whose points are labeled by coordinates x^μ, where the index μ ranges over time (μ = 0) and three spatial dimensions (μ = 1, 2, 3). These four coordinates are the independent variables; and the values of the fields at each event are the dependent variables. Under an infinitesimal transformation, the variation in the coordinates is written x^μ → ξ^μ = x^μ + δx^μ, whereas the transformation of the field variables is expressed as φ^A(x^μ) → α^A(ξ^μ) = φ^A(x^μ) + δφ^A(x^μ). By this definition, the field variations δφ^A result from two factors: intrinsic changes in the field themselves and changes in coordinates, since the transformed field α^A depends on the transformed coordinates ξ^μ. To isolate the intrinsic changes, the field variation at a single point x^μ may be defined: δ̄φ^A(x^μ) = α^A(x^μ) − φ^A(x^μ). If the coordinates are changed, the boundary of the region of space–time over which the Lagrangian is being integrated also changes; the original boundary and its transformed version are denoted as Ω and Ω′, respectively. Noether's theorem begins with the assumption that a specific transformation of the coordinates and field variables does not change the action, which is defined as the integral of the Lagrangian density over the given region of spacetime. Expressed mathematically, this assumption may be written as ∫Ω′ L(α^A, α^A,ν, ξ^μ) d⁴ξ − ∫Ω L(φ^A, φ^A,ν, x^μ) d⁴x = 0, where the comma subscript indicates a partial derivative with respect to the coordinate(s) that follows the comma, e.g. φ^A,σ = ∂φ^A/∂x^σ.
Since ξ is a dummy variable of integration, and since the change in the boundary Ω is infinitesimal by assumption, the two integrals may be combined using the four-dimensional version of the divergence theorem into the following form: ∫Ω { [L(α^A, α^A,ν, x^μ) − L(φ^A, φ^A,ν, x^μ)] + ∂/∂x^σ [L δx^σ] } d⁴x = 0. The difference in Lagrangians can be written to first-order in the infinitesimal variations as [L(α^A, α^A,ν, x^μ) − L(φ^A, φ^A,ν, x^μ)] = (∂L/∂φ^A) δ̄φ^A + (∂L/∂φ^A,σ) δ̄φ^A,σ. However, because the variations are defined at the same point as described above, the variation and the derivative can be done in reverse order; they commute: δ̄φ^A,σ = ∂(δ̄φ^A)/∂x^σ. Using the Euler–Lagrange field equations ∂/∂x^σ (∂L/∂φ^A,σ) = ∂L/∂φ^A, the difference in Lagrangians can be written neatly as a divergence: ∂/∂x^σ [ (∂L/∂φ^A,σ) δ̄φ^A ]. Thus, the change in the action can be written as ∫Ω ∂/∂x^σ { (∂L/∂φ^A,σ) δ̄φ^A + L δx^σ } d⁴x = 0. Since this holds for any region Ω, the integrand must be zero: ∂/∂x^σ { (∂L/∂φ^A,σ) δ̄φ^A + L δx^σ } = 0. For any combination of the various symmetry transformations, the perturbation can be written δx^μ = ε X^μ, δφ^A = ε Ψ^A = δ̄φ^A + ε £X φ^A, where £X φ^A is the Lie derivative of φ^A in the X^μ direction. When φ^A is a scalar or the X^μ are constants, £X φ^A = φ^A,μ X^μ. These equations imply that the field variation taken at one point equals δ̄φ^A = ε Ψ^A − ε £X φ^A. Differentiating the above divergence with respect to ε at ε = 0 and changing the sign yields the conservation law ∂/∂x^σ j^σ = 0, where the conserved current equals j^σ = (∂L/∂φ^A,σ) £X φ^A − X^σ L − (∂L/∂φ^A,σ) Ψ^A. Manifold/fiber bundle derivation Suppose we have an n-dimensional oriented Riemannian manifold, M and a target manifold T. Let C be the configuration space of smooth functions φ from M to T. (More generally, we can have smooth sections of a fiber bundle T over M.) Examples of this M in physics include: In classical mechanics, in the Hamiltonian formulation, M is the one-dimensional manifold ℝ, representing time and the target space is the cotangent bundle of space of generalized positions. In field theory, M is the spacetime manifold and the target space is the set of values the fields can take at any given point. For example, if there are m real-valued scalar fields, φ₁, ..., φm, then the target manifold is ℝ^m. If the field is a real vector field, then the target manifold is isomorphic to ℝ³. Now suppose there is a functional S: C → ℝ, called the action. (It takes values into ℝ, rather than ℂ; this is for physical reasons, and is unimportant for this proof.) To get to the usual version of Noether's theorem, we need additional restrictions on the action. We assume S[φ] is the integral over M of a function L(φ, ∂φ, x) called the Lagrangian density, depending on φ, its derivative and the position. In other words, for φ in C: S[φ] = ∫M L(φ(x), ∂φ(x), x) dⁿx. Suppose we are given boundary conditions, i.e., a specification of the value of φ at the boundary if M is compact, or some limit on φ as x approaches ∞. Then the subspace of C consisting of functions φ such that all functional derivatives of S at φ are zero, that is: δS/δφ(x) ≈ 0, and that φ satisfies the given boundary conditions, is the subspace of on shell solutions. (See principle of stationary action) Now, suppose we have an infinitesimal transformation on C, generated by a functional derivation, Q such that Q[ ∫N L dⁿx ] ≈ ∫∂N f^μ ds_μ for all compact submanifolds N or in other words, Q[L(x)] ≈ ∂μ f^μ(x) for all x, where we set L(x) = L(φ(x), ∂φ(x), x). If this holds on shell and off shell, we say Q generates an off-shell symmetry. If this only holds on shell, we say Q generates an on-shell symmetry. Then, we say Q is a generator of a one parameter symmetry Lie group. Now, for any N, because of the Euler–Lagrange theorem, on shell (and only on-shell), we have Q[ ∫N L dⁿx ] ≈ ∫N ∂μ [ (∂L/∂(∂μφ)) Q[φ] ] dⁿx. Since this is true for any N, we have ∂μ [ (∂L/∂(∂μφ)) Q[φ] − f^μ ] ≈ 0. But this is the continuity equation for the current J^μ defined by: J^μ = (∂L/∂(∂μφ)) Q[φ] − f^μ, which is called the Noether current associated with the symmetry.
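As a quick on-shell check that this current is indeed conserved (a routine verification supplied for clarity; it assumes Q commutes with partial derivatives and Q[L] ≈ ∂μ f^μ as above):

```latex
\partial_\mu J^\mu
  \;=\; \partial_\mu\!\left(\frac{\partial \mathcal{L}}{\partial(\partial_\mu \varphi)}\right) Q[\varphi]
      + \frac{\partial \mathcal{L}}{\partial(\partial_\mu \varphi)}\,\partial_\mu Q[\varphi]
      - \partial_\mu f^\mu
  \;\approx\; \frac{\partial \mathcal{L}}{\partial \varphi}\,Q[\varphi]
      + \frac{\partial \mathcal{L}}{\partial(\partial_\mu \varphi)}\,Q[\partial_\mu \varphi]
      - \partial_\mu f^\mu
  \;=\; Q[\mathcal{L}] - \partial_\mu f^\mu \;\approx\; 0 .
% The first \approx uses the Euler–Lagrange equations (on shell);
% the last uses the symmetry condition Q[\mathcal{L}] \approx \partial_\mu f^\mu.
```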
The continuity equation tells us that if we integrate this current over a space-like slice, we get a conserved quantity called the Noether charge (provided, of course, if M is noncompact, the currents fall off sufficiently fast at infinity). Comments Noether's theorem is an on shell theorem: it relies on use of the equations of motion—the classical path. It reflects the relation between the boundary conditions and the variational principle. Assuming no boundary terms in the action, Noether's theorem implies that ∫∂N J^μ ds_μ ≈ 0. The quantum analogs of Noether's theorem involving expectation values (e.g., ⟨∂μ J^μ⟩ = 0) probing off shell quantities as well are the Ward–Takahashi identities. Generalization to Lie algebras Suppose we have two symmetry derivations Q₁ and Q₂. Then, [Q₁, Q₂] is also a symmetry derivation. Let us see this explicitly. Let us say Q₁[L] ≈ ∂μ f₁^μ and Q₂[L] ≈ ∂μ f₂^μ. Then, [Q₁, Q₂][L] = Q₁[Q₂[L]] − Q₂[Q₁[L]] ≈ ∂μ f₁₂^μ, where f₁₂ = Q₁[f₂μ] − Q₂[f₁μ]. So, j₁₂^μ = (∂L/∂(∂μφ)) [Q₁, Q₂][φ] − f₁₂^μ. This shows we can extend Noether's theorem to larger Lie algebras in a natural way. Generalization of the proof This applies to any local symmetry derivation Q satisfying QS ≈ 0, and also to more general local functional differentiable actions, including ones where the Lagrangian depends on higher derivatives of the fields. Let ε be any arbitrary smooth function of the spacetime (or time) manifold such that the closure of its support is disjoint from the boundary. ε is a test function. Then, because of the variational principle (which does not apply to the boundary, by the way), the derivation distribution q generated by q[ε][Φ(x)] = ε(x)Q[Φ(x)] satisfies q[ε][S] ≈ 0 for every ε, or more compactly, q(x)[S] ≈ 0 for all x not on the boundary (but remember that q(x) is a shorthand for a derivation distribution, not a derivation parametrized by x in general). This is the generalization of Noether's theorem. To see how the generalization is related to the version given above, assume that the action is the spacetime integral of a Lagrangian that only depends on φ and its first derivatives. Also, assume Q[L] ≈ ∂μ f^μ. Then, q[ε][S] ≈ ∫ ε(x) ∂μ [ f^μ − (∂L/∂(∂μφ)) Q[φ] ] dⁿx ≈ 0 for all ε. More generally, if the Lagrangian depends on higher derivatives, then the conserved current acquires additional correction terms involving the derivatives of the Lagrangian with respect to the higher derivatives of the fields. Examples Example 1: Conservation of energy Looking at the specific case of a Newtonian particle of mass m, coordinate x, moving under the influence of a potential V, coordinatized by time t. The action, S, is: S = ∫ [ ½ m ẋ(t)² − V(x(t)) ] dt. The first term in the brackets is the kinetic energy of the particle, while the second is its potential energy. Consider the generator of time translations Q = d/dt. In other words, Q[x(t)] = ẋ(t). The coordinate x has an explicit dependence on time, whilst V does not; consequently: Q[L] = d/dt [ ½ m ẋ² − V(x) ], so we can set f = ½ m ẋ² − V(x). Then, j = (∂L/∂ẋ) Q[x] − f = m ẋ · ẋ − [ ½ m ẋ² − V(x) ] = ½ m ẋ² + V(x). The right hand side is the energy, and Noether's theorem states that dj/dt ≈ 0 (i.e. the principle of conservation of energy is a consequence of invariance under time translations). More generally, if the Lagrangian does not depend explicitly on time, the quantity H = (∂L/∂ẋ) ẋ − L (called the Hamiltonian) is conserved. Example 2: Conservation of center of momentum Still considering 1-dimensional time, let L = Σᵢ ½ mᵢ ẋᵢ² − Σᵢ<ⱼ V(xⱼ − xᵢ) for N Newtonian particles where the potential only depends pairwise upon the relative displacement. For Q, consider the generator of Galilean transformations (i.e. a change in the frame of reference). In other words, Qᵢ[xᵢ(t)] = t. And Q[L] = Σᵢ mᵢ ẋᵢ = d/dt ( Σᵢ mᵢ xᵢ ). This has the form of a total time derivative df/dt, so we can set f = Σᵢ mᵢ xᵢ. Then, j = Σᵢ (∂L/∂ẋᵢ) Q[xᵢ] − f = Σᵢ ( mᵢ ẋᵢ t − mᵢ xᵢ ) = P t − M x_CM, where P is the total momentum, M is the total mass and x_CM is the center of mass. Noether's theorem states: dj/dt ≈ 0, so P t − M x_CM is conserved; since the total momentum P is itself conserved, the center of mass moves at the constant velocity P/M. Example 3: Conformal transformation Both examples 1 and 2 are over a 1-dimensional manifold (time).
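As a quick on-shell check of Example 1 (a routine verification supplied for clarity): with Newton's second law mẍ = −V′(x), which is the Euler–Lagrange equation of this action, the energy found above is constant along solutions.

```latex
\frac{d}{dt}\Bigl(\tfrac12 m\dot x^{2} + V(x)\Bigr)
  \;=\; m\dot x\,\ddot x + V'(x)\,\dot x
  \;=\; \dot x\,\bigl(m\ddot x + V'(x)\bigr)
  \;=\; 0 .
```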
An example involving spacetime is a conformal transformation of a massless real scalar field with a quartic potential in (3 + 1)-Minkowski spacetime, with Lagrangian density L = ½ ∂^μφ ∂μφ − λφ⁴. For Q, consider the generator of a spacetime rescaling. In other words, Q[φ(x)] = x^μ ∂μφ(x) + φ(x). The second term on the right hand side is due to the "conformal weight" of φ. A short computation shows that Q[L] can be written as a total divergence, Q[L] = ∂μ (x^μ L) (where we have performed a change of dummy indices), so set f^μ = x^μ L. Then the Noether current is j^μ = (∂L/∂(∂μφ)) Q[φ] − f^μ = ∂^μφ (x^ν ∂νφ + φ) − x^μ L, and Noether's theorem states that ∂μ j^μ ≈ 0 (as one may explicitly check by substituting the Euler–Lagrange equations into the left hand side). If one tries to find the Ward–Takahashi analog of this equation, one runs into a problem because of anomalies. Applications Application of Noether's theorem allows physicists to gain powerful insights into any general theory in physics, by just analyzing the various transformations that would make the form of the laws involved invariant. For example: Invariance of an isolated system with respect to spatial translation (in other words, that the laws of physics are the same at all locations in space) gives the law of conservation of linear momentum (which states that the total linear momentum of an isolated system is constant) Invariance of an isolated system with respect to time translation (i.e. that the laws of physics are the same at all points in time) gives the law of conservation of energy (which states that the total energy of an isolated system is constant) Invariance of an isolated system with respect to rotation (i.e., that the laws of physics are the same with respect to all angular orientations in space) gives the law of conservation of angular momentum (which states that the total angular momentum of an isolated system is constant) Invariance of an isolated system with respect to Lorentz boosts (i.e., that the laws of physics are the same with respect to all inertial reference frames) gives the center-of-mass theorem (which states that the center-of-mass of an isolated system moves at a constant velocity). In quantum field theory, the analog to Noether's theorem, the Ward–Takahashi identity, yields further conservation laws, such as the conservation of electric charge from the invariance with respect to a change in the phase factor of the complex field of the charged particle and the associated gauge of the electric potential and vector potential. The Noether charge is also used in calculating the entropy of stationary black holes. See also Conservation law Charge (physics) Gauge symmetry Gauge symmetry (mathematics) Invariant (physics) Goldstone boson Symmetry (physics) References Further reading External links (Original in Gott. Nachr. 1918:235–257) Noether's Theorem at MathPages. Articles containing proofs Calculus of variations Conservation laws Concepts in physics Eponymous theorems of physics Partial differential equations Physics theorems Quantum field theory Symmetry
Noether's theorem
Physics,Mathematics
6,437
26,600,306
https://en.wikipedia.org/wiki/Esakia%20space
In mathematics, Esakia spaces are special ordered topological spaces introduced and studied by Leo Esakia in 1974. Esakia spaces play a fundamental role in the study of Heyting algebras, primarily by virtue of the Esakia duality—the dual equivalence between the category of Heyting algebras and the category of Esakia spaces. Definition For a partially ordered set (X, ≤) and for x ∈ X, let ↓x = {y ∈ X : y ≤ x} and let ↑x = {y ∈ X : x ≤ y}. Also, for A ⊆ X, let ↓A = {y ∈ X : y ≤ a for some a ∈ A} and ↑A = {y ∈ X : a ≤ y for some a ∈ A}. An Esakia space is a Priestley space (X, τ, ≤) such that for each clopen subset C of the topological space (X, τ), the set ↓C is also clopen. Equivalent definitions There are several equivalent ways to define Esakia spaces. Theorem: Given that (X, τ) is a Stone space, the following conditions are equivalent: (i) (X, τ, ≤) is an Esakia space. (ii) ↑x is closed for each x ∈ X and ↓C is clopen for each clopen C ⊆ X. (iii) ↑x is closed for each x ∈ X and ↓cl(A) = cl(↓A) for each A ⊆ X (where cl denotes the closure in X). (iv) ↑x is closed for each x ∈ X, the least closed set containing an up-set is an up-set, and the least up-set containing a closed set is closed. Since Priestley spaces can be described in terms of spectral spaces, the Esakia property can be expressed in spectral space terminology as follows: The Priestley space corresponding to a spectral space X is an Esakia space if and only if the closure of every constructible subset of X is constructible. Esakia morphisms Let (X, ≤) and (Y, ≤) be partially ordered sets and let f : X → Y be an order-preserving map. The map f is a bounded morphism (also known as p-morphism) if for each x ∈ X and y ∈ Y, if f(x) ≤ y, then there exists z ∈ X such that x ≤ z and f(z) = y. Theorem: The following conditions are equivalent: (1) f is a bounded morphism. (2) f(↑x) = ↑f(x) for each x ∈ X. (3) f⁻¹(↓y) = ↓f⁻¹(y) for each y ∈ Y. Let (X, τ, ≤) and (Y, τ′, ≤′) be Esakia spaces and let f : X → Y be a map. The map f is called an Esakia morphism if f is a continuous bounded morphism. Notes References Esakia, L. (1974). Topological Kripke models. Soviet Math. Dokl., 15, 147–151. Esakia, L. (1985). Heyting Algebras I. Duality Theory (Russian). Metsniereba, Tbilisi. General topology
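A simple class of examples may help fix the definition (a standard observation supplied for illustration, not taken from the article): every finite poset, equipped with the discrete topology, is an Esakia space, since in a finite discrete space every subset is clopen and the Esakia condition holds automatically.

```latex
% Worked instance (assumptions: finite poset, discrete topology).
X = \{a, b, c\}, \qquad a \le c,\ b \le c \ \text{(plus reflexivity)};
\quad
{\downarrow}\{c\} = \{a, b, c\}, \qquad
{\downarrow}\{a\} = \{a\}, \qquad
{\downarrow}\{a, b\} = \{a, b\} .
% Every subset of X is clopen, so each {\downarrow}C above is clopen
% and (X, \tau, \le) satisfies the Esakia condition trivially.
```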
Esakia space
Mathematics
458
25,527,999
https://en.wikipedia.org/wiki/Solar%20eclipse%20of%20April%201%2C%202098
A partial solar eclipse will occur at the Moon's ascending node of orbit on Tuesday, April 1, 2098, with a magnitude of 0.7984. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. A partial solar eclipse occurs in the polar regions of the Earth when the center of the Moon's shadow misses the Earth. The partial solar eclipse will be visible for parts of Antarctica and southern and central South America. Eclipse details Shown below are two tables displaying details about this particular solar eclipse. The first table outlines times at which the Moon's penumbra or umbra attains a specific parameter, and the second table describes various other parameters pertaining to this eclipse. Eclipse season This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Only two (or occasionally three) eclipse seasons occur each year, and each season lasts about 35 days and repeats just short of six months (173 days) later; thus two full eclipse seasons always occur each year. Either two or three eclipses happen each eclipse season. In the sequence below, each eclipse is separated by a fortnight. Related eclipses Eclipses in 2098 A partial solar eclipse on April 1. A total lunar eclipse on April 15. A partial solar eclipse on September 25. A total lunar eclipse on October 10. A partial solar eclipse on October 24. Metonic Preceded by: Solar eclipse of June 13, 2094 Followed by: Solar eclipse of January 19, 2102 Tzolkinex Preceded by: Solar eclipse of February 18, 2091 Followed by: Solar eclipse of May 14, 2105 Half-Saros Preceded by: Lunar eclipse of March 26, 2089 Followed by: Lunar eclipse of April 7, 2107 Tritos Preceded by: Solar eclipse of May 2, 2087 Followed by: Solar eclipse of March 1, 2109 Solar Saros 121 Preceded by: Solar eclipse of March 21, 2080 Followed by: Solar eclipse of April 13, 2116 Inex Preceded by: Solar eclipse of April 21, 2069 Followed by: Solar eclipse of March 13, 2127 Triad Preceded by: Solar eclipse of June 1, 2011 Followed by: Solar eclipse of January 31, 2185 Solar eclipses of 2098–2101 Saros 121 Metonic series All eclipses in this table occur at the Moon's ascending node. Tritos series Inex series References External links
Solar eclipse of April 1, 2098
Astronomy
532
47,829,290
https://en.wikipedia.org/wiki/Penicillium%20thomii
Penicillium thomii is an anamorph species of fungus in the genus Penicillium which was isolated from spoiled faba beans in Australia. Penicillium thomii produces hadacidin, 6-methoxymellein, and penicillic acid. Further reading References thomii Fungi described in 1917 Fungus species
Penicillium thomii
Biology
70
74,315,578
https://en.wikipedia.org/wiki/DNA%20Valley
DNA Valley (or DNA Alley) is a region in Maryland that serves as a biotechnology hub with a focus on genetic medicine. Roughly traced by Rockville, Frederick, and Baltimore, DNA Valley includes the innovation companies in the Maryland I-270 technology corridor, the various campuses of federal entities such as the FDA and NIH, as well as The University of Maryland, Johns Hopkins University, The Institute for Human Virology, and various laboratories with high biosafety levels such as Fort Detrick. Major DNA Valley cities include: Baltimore, Columbia, Germantown, Silver Spring, Rockville, Bethesda, Gaithersburg, College Park, and Frederick. The counties that make up DNA Valley are Montgomery County, Frederick County, Howard County, Baltimore County, Anne Arundel County, and Carroll County. According to the Bureau of Economic Analysis, these counties contributed a combined GDP of about $310.4 billion in 2021, higher than that of several nations. Local business leaders like Jeff Galvin expect this figure to increase in step with the growth of the biotechnology sector. DNA Valley is home to many of Maryland's biotechnology, pharmaceutical, and life science companies, including AstraZeneca, BioNTech, GeneDx, Qiagen, American Gene Technologies, and GlaxoSmithKline. A defining feature of the region is its high concentration of scientists and doctors. According to New Scientist, "There are more MDs and PhDs per capita in a 10-mile radius of DC than anywhere else in the country". Etymology The name "DNA Valley" is championed by American Gene Technologies CEO Jeff Galvin. Galvin came to Maryland and the life science industry after a successful career in Silicon Valley and immediately saw the similarities between the early days of the tech industry in Silicon Valley and the life science industry in Maryland. The earliest documented use of the name came from an article written by Alison George at New Scientist in 2004, in which she recounted a cab ride where her driver referred to the D.C. area as "DNA Valley" because of the concentration of biotech companies in the area. DNA Valley is not an actual geographical valley; it is named for the similarities between the biotechnology and life science boom in Maryland and the tech boom that occurred in Silicon Valley in the 1970s and 1980s. Prior to the growth of the biotechnology industry, Maryland and the surrounding regions were predominantly focused on the seafood, agriculture, and logistics industries due to the abundant waterways available in the state. History Role of the NIH The National Institutes of Health (NIH) have played a major role in the development of the life science industry boom in Maryland, and thus the creation of DNA Valley. The NIH originally moved its headquarters from the Old Naval Observatory to Bethesda, Maryland in 1938. In 1989, as part of the launch of the Human Genome Project, the National Center for Human Genome Research (now known as The National Human Genome Research Institute) was founded in Bethesda. This made Bethesda the national hub for genetic research as genetic researchers from around the country came to help sequence the human genome. This project, being one of the most influential scientific projects of the last century, planted the seeds for the eventual biotechnology hub that has formed in the area since.
The infrastructure and attention to the industry that the NCHGR and the HGP brought to Maryland are what opened the door to the extensive cell and gene therapy industries that Maryland and DNA Valley are now home to. The NHGRI is not the only NIH subsidiary that has led to DNA Valley becoming such a major life science hub. The NIH as a whole has fueled the biotech industry in Maryland, as the research done at the federally funded facilities has resulted in new fields of research, new tools, and highly trained researchers who often remain in the area and create their own life science companies. For example, the work done by Roscoe Brady, MD, PhD, on viral vectors caught the attention of entrepreneur Jeff Galvin, inspiring him to found American Gene Technologies and pursue potential cures for diseases like HIV, PKU, and certain cancers. The NIH also funds outside research in the area, which further allows the industry to flourish as more companies want to be based near the NIH headquarters in Bethesda. A variety of life science-related conferences are held annually at the NIH headquarters in Bethesda, such as workshops, training sessions, and professional conferences, all of which not only bring attention and prestige to the life science industry in Maryland, but also result in a better trained and educated population in the area, allowing for the further success of the industry. The NIH is not exclusively located in Bethesda and has a variety of campuses in Maryland. The Bayview Campus in Baltimore contains the research programs of the National Institute of Aging and the National Institute of Drug Abuse. The Frederick National Laboratory and Riverside Research Park are home to the National Cancer Institute, which includes the Center for Cancer Research. The widespread footprint of the NIH in Maryland correlates directly with the biotech boom that resulted in DNA Valley, as the highest concentrations of life science companies are located in the same locations of Rockville, Frederick, and Baltimore. Rise of genetic medicine The first speculation about the plausibility of introducing DNA sequences into patients' cells to cure diseases occurred in the 1960s. Then in 1972, Theodore Friedmann and Richard Roblin published a paper in Science titled "Gene Therapy for Human Genetic Diseases?", which detailed the possibility of inserting unmutated or healthy DNA to cure patients with genetic diseases. However, the paper also urged that the technology be developed with caution, given the limited understanding of it and of its potential effects. The authors were primarily worried about the lack of knowledge about genetic recombination and gene regulation, the lack of understanding of the relationship between genetic mutations and diseases, and the lack of understanding of the potential side effects of gene therapy. For 18 years after that paper was published, further research was conducted to help limit the risks it had detailed. Then in 1990, the first successful gene therapy trial was launched. A four-year-old girl named Ashanthi De Silva with severe combined immunodeficiency (SCID) was treated with gene therapy. Ashanthi was lacking the enzyme adenosine deaminase (ADA), which caused her T-cells to die, leaving her with little to no protection against infection. To treat this, Dr. W.
French Anderson from the National Heart, Lung, and Blood Institute in Bethesda, Maryland, delivered the correct ADA gene, using a disabled virus, to white blood cells that had been removed from her body, and then injected the cells back into her body. The rise of gene therapy was not smooth: it suffered a major setback in 1999 with the trials at the University of Pennsylvania. During the trials, an 18-year-old named Jesse Gelsinger, who had the genetic disease ornithine transcarbamylase deficiency, died from an immune response after being treated with a working gene carried by an adenovirus. The early 2010s brought back the evolution of gene therapy as a potential cure for many different diseases. New delivery methods for the gene therapies were discovered, making the techniques significantly safer. Researchers also added enhancers and promoters, which allowed for better control of the gene, as they could decide when and where it would be turned on and to what extent. These discoveries, along with others made during this period, allowed gene therapy to regain its momentum and move to the forefront of medical technology development. There was then a wave of approvals for gene therapy techniques from 2003 to 2012, including therapies for cancer, artery disease, and others. Since then, the rate of development and approval of gene therapies has increased, with the FDA expecting to approve between 10 and 20 gene therapies each year by 2025. Economy The D.C./Maryland area is rated the second-highest life science hub in the United States, with Maryland alone providing 44,260 jobs in life science. Maryland life science businesses generated over $18.6 billion in 2018 and paid over $4.9 billion in wages, with an average salary of $110,690. Maryland also boasted the 5th-highest concentration of doctoral scientists and engineers and the highest STEM concentration in the country in 2022. Between 2017 and 2022, life science research jobs increased by 19%, larger than the national growth rate of 16%, indicating a particular focus on the industry in Maryland. The region has more than double the number of federal research labs of any other state, partly due to the presence of the NIH headquarters in Bethesda, Maryland. Maryland also had the 11th-lowest unemployment rate, at 2.5%, in 2023, which is partly a result of the booming biotech and life science industry in the area. Housing Maryland, and by association DNA Valley, has a severe affordable housing shortage, with only approximately 30 affordable and available rental units for every 100 extremely low-income families and a total housing shortage of 120,000 units. This is possibly due to the boom in life science jobs in the area while the creation of housing units has remained constant, leading to the imbalance. DNA Valley also includes some of the highest cost-of-living areas in the country, with D.C. having the second highest and Maryland the sixth highest. Notable companies Thousands of life science companies are headquartered in DNA Valley. The following are some of the notable companies based in the area: Demographics Depending on what geographic regions (particularly parts of Washington, D.C.) are included in the meaning of the term, the population of DNA Valley is between 2 million and 3.5 million. According to the U.S. Census Bureau, almost a third of DNA Valley's population is Black or of African descent, 11% is of Hispanic descent, and 6.9% is of Asian descent.
Diversity DNA Valley is one of the most diverse areas in the country, with three of the ten most diverse communities in the United States (Gaithersburg, Germantown, and Silver Spring) located in the region. Biotechnology as a whole is not a typically diverse field, being overwhelmingly dominated by white (56%) and Asian (21%) employees. Even greater disparity is seen among executives, with 72% of executives being white and 15% being Asian. The biotech hub in DNA Valley tends to differ from this norm, likely due to the diversity of the area. Gender Similarly to race, gender disparity is quite significant in the field of biotechnology, with men dominating the space, particularly in positions of power: 66% of executives and 79% of CEOs are men. DNA Valley follows this trend; in 2021, women only made up around 22% of the executive positions at biotechnology companies. One possible explanation for this, as proposed by Harvard senior research associate Vivek Wadhwa, is that parents tend not to encourage their daughters to pursue a career in science and engineering as much as they would their sons. Wadhwa also cites the lack of potential role models for women in the science and engineering fields in comparison to men. However, Maryland has the highest average salary for female CEOs, at around $280,000, which may be in part due to the higher average salaries in Maryland in general. Washington D.C. also has the second-highest female CEO percentage in the country, at 47.5%, which would change the DNA Valley numbers depending on whether D.C. is included in the geographical boundaries of the region. There have been concerted efforts to address the underrepresentation of women in Maryland life science fields, including the founding of a Women in Bio (WIB) chapter in the D.C. region in 2011. The focus of this chapter is to promote diversity and inclusion for all women in life science-related fields. WIB also sponsors the Herstory Gala in Rockville, Maryland, every year to celebrate the women trailblazers in life sciences who have had an impact on the field in the DNA Valley area. Statistics Maryland, and thus DNA Valley, is considered one of the most diverse areas in the country, based on both religious and ethnic group diversity. DNA Valley's population is made up of 32% Black, 7% Asian, 12% Hispanic or Latino, and 1% Native American people. In terms of religious affiliations, DNA Valley's population is divided into 69% Christian-based faiths (mostly made up of equal percentages of Evangelical Protestant, Mainline Protestant, Historically Black Protestant, and Catholic), 23% not affiliated with any faith, and 8% having non-Christian-based faiths, primarily Jewish, Muslim, Buddhist, and Hindu. Education The funding for public schools in DNA Valley varies drastically depending on the area, as a result of increased grants from private foundations in wealthier areas such as Montgomery County and particularly Bethesda. Less wealthy areas such as Garrett County rely on state funding. See also List of technology centers List of life sciences References Biotechnology
DNA Valley
Biology
2,674
33,386,159
https://en.wikipedia.org/wiki/Archaeornithes
The Archaeornithes, classically Archæornithes, is an extinct group of the first primitive, reptile-like birds. It is an evolutionary grade of transitional fossils: the primitive birds halfway between their non-avian dinosaur ancestors and the derived modern birds (avian dinosaurs). Fossils of early birds were poorly known until the late 20th century. Of those known, all fell into either the relatively modern-built birds with a fused ribcage and a breastbone extended into a keel, or the "Urvogels" of the Solnhofen Plattenkalk of Late Jurassic age. As the physiological and anatomical difference between the two was so great, the subclass Archaeornithes was erected for the latter. With the unearthing of several well-preserved early bird fossils in the last decades of the 20th century and the early 21st century, our knowledge of the evolution of birds has increased dramatically. Modern avian traits such as the compact body, the clawless wing, and the alula are now known to have appeared over successive stages. Today the Archaeornithes are classified into a series of nested monophyletic groups, and the name is rarely used in modern literature. Classification In traditional classification, it is one of two subclasses of birds, the other subclass being the Neornithes, the birds with a short, modern tail. This classification was erected by Hans Friedrich Gadow in 1893 and followed by Alfred Romer (1933) and subsequent authors through most of the 20th century. Other Mesozoic birds, such as the toothed but otherwise modern Hesperornis, were included under the latter in their own superorder, the Odontognathae. According to Romer, the Archaeornithes are characterised by having clawed wings, a reptilian-style ribcage without a large carina, and the presence of a long, bony tail. The known members of the group by the time of its erection were Archaeopteryx and Archaeornis. The two are now thought to represent a single species, Archaeopteryx lithographica, the Archaeornis being the Berlin specimen of Archaeopteryx. The Confuciusornithidae and Enantiornithes were found a century after Gadow's organization of birds into two subclasses. They fall between Romer's description of Archaeornithes and Neornithes, in that they have clawed wings, but reduced tails with a rod-like pygostyle (as opposed to the ploughshare-shaped one in modern birds) and the presence of a small carina. While rarely used by palaeontologists today, the term was revived by the ornithologists Livezey and Zusi in 2007, for a group comprising the Archaeopterygidae and the Confuciusornithidae. See also Sauriurae Evolution of birds References External links Archaeornithes at the Paleobiology Database Paravians Vertebrate subclasses Paraphyletic groups
Archaeornithes
Biology
643
43,680,801
https://en.wikipedia.org/wiki/Lem%20%28satellite%29
Lem (also called BRITE-PL) is the first Polish governmental artificial satellite. It was launched in November 2013 as part of the Bright-star Target Explorer (BRITE) programme. The spacecraft was launched aboard a Dnepr rocket. Named after the Polish science fiction writer Stanisław Lem, it is an optical astronomy spacecraft built by the Space Research Centre of the Polish Academy of Sciences and operated by Centrum Astronomiczne im. Mikołaja Kopernika PAN; one of two Polish contributions to the BRITE constellation along with the Heweliusz satellite. Features Lem is the first Polish scientific satellite, and the second (after PW-Sat) ever launched. Along with Heweliusz, TUGSAT-1, UniBRITE-1 and BRITE-Toronto, it is one of a constellation of six nanosatellites of the BRIght-star Target Explorer project, operated by a consortium of universities from Canada, Austria and Poland. Lem was developed and manufactured by the Space Research Centre of the Polish Academy of Sciences in 2011, based around the Generic Nanosatellite Bus, and had a mass at launch of (plus another 7 kg for the XPOD separation system). The satellite is used, along with four other operating spacecraft, to conduct photometric observations of stars with an apparent magnitude brighter than 4.0 as seen from Earth. Lem was one of two Polish BRITE satellites launched, along with the Heweliusz spacecraft. Four more satellites—two Austrian and two Canadian—were launched at different dates. Mission Lem observes stars in the blue colour range, whereas Heweliusz observes in the red. The two-colour option allows geometrical and thermal effects to be separated in the analysis of the observed phenomena. None of the much larger satellites, such as MOST and CoRoT, has this colour option; this is crucial in the diagnosis of the internal structure of stars. Lem photometrically measures low-level oscillations and temperature variations in stars brighter than visual magnitude 4.0, with precision and temporal coverage not achievable through ground-based methods. Launch The Lem satellite was launched from the Russian Yasny air base aboard a Dnepr through the BRITE-PL Project satellite launch programme established in 2009 by the Space Research Centre of the Polish Academy of Sciences and The Nicolaus Copernicus Astronomical Centre of the Polish Academy of Sciences in cooperation with the University of Toronto. The launch was subcontracted to the Russian Ministry of Defence, which flew the mission along with 33 other satellites. The launch took place at 07:10 (UTC) on 21 November 2013, and the rocket deployed all of its payloads successfully. See also TUGSAT-1 UniBRITE-1 BRITE-Toronto BRITE-Montreal Heweliusz (BRITE-PL) Explanatory notes References Spacecraft launched in 2013 Satellites of Poland Space telescopes 2013 in Poland Spacecraft launched by Dnepr rockets Commemoration of Stanisław Lem
Lem (satellite)
Astronomy
629
41,219,148
https://en.wikipedia.org/wiki/Donovan%27s%20solution
Donovan's solution is an inorganic compound prepared from arsenic triiodide and mercuric iodide. Despite its name, it is a compound and not a solution. Method Donovan's solution can be prepared by mixing arsenic triiodide, mercuric iodide, and sodium bicarbonate in aqueous solution. Cooley's cyclopædia of practical receipts and ... information on the arts, manufactures, and trades gives a more complex method. Uses The solution has been used in veterinary medicine to treat chronic diseases of the skin and as a folk remedy. It was used during the 19th century to treat lepra vulgaris and psoriasis in humans, taken internally. References Inorganic compounds Arsenic(III) compounds Iodides Mercury compounds
Donovan's solution
Chemistry
161
2,579,330
https://en.wikipedia.org/wiki/Rabbit-proof%20fence
The State Barrier Fence of Western Australia, formerly known as the Rabbit-Proof Fence, the State Vermin Fence, and the Emu Fence, is a pest-exclusion fence constructed between 1901 and 1907 to keep rabbits and other agricultural pests from the east out of Western Australian pastoral areas. There are three fences in Western Australia: the original No. 1 Fence crosses the state from north to south, No. 2 Fence is smaller and further west, and No. 3 Fence is smaller still and runs east–west. The fences took six years to build. When completed, the rabbit-proof fence (including all three fences) stretched . The cost to build each kilometre of fence at the time was about $250. When it was completed in 1907, the No. 1 Fence was the longest unbroken fence in the world. History Rabbits were introduced to Australia by the First Fleet in 1788. They became a problem after October 1859, when Thomas Austin released 24 wild rabbits from England for hunting purposes, believing "The introduction of a few rabbits could do little harm and might provide a touch of home, in addition to a spot of hunting." With virtually no local predators, the rabbits became extremely prolific and spread rapidly across the southern parts of the country. Australia had ideal conditions for an explosion in the rabbit population, which constituted an invasive species. By 1887, agricultural losses from rabbit damage compelled the New South Wales Government to offer a £25,000 reward for "any method of success not previously known in the Colony for the effectual extermination of rabbits". A Royal Commission was held in 1901 to investigate the situation. It resolved that a pest-exclusion fence be built. Construction The fence posts are placed apart and have a minimum diameter of . There were initially three wires of gauge, strung , , and above ground, with a barbed wire added later at and a plain wire at , to make the fence a barrier against dingoes and foxes as well. Wire netting, extending below ground, was attached to the wire. The fence was constructed with a variety of materials, according to the local climate and availability of wood. At first, fence posts were made from salmon gum and gimlet, but they attracted termites (locally known as white ants) and had to be replaced. Split white gum was one of the best types of wood used in the fence. Other timbers used were mulga, wodjil, native pine, and tea-tree, depending on what could be found close to where the fence was to be built. Iron posts were used where there was no wood. Most materials had to be hauled hundreds of kilometres from rail heads and ports by bullock, mule and camel teams. From 1901, the fence was constructed by private contractors. In 1904, the project became the responsibility of the Public Works Department of Western Australia, under the supervision of Richard John Anketell. With a workforce of 120 men, 350 camels, 210 horses and 41 donkeys, Anketell was responsible for the construction of the greater part of No. 1 Fence and the survey of its last . Maintenance Alexander Crawford took over the maintenance of the fence from Anketell as each section was finished; he was in charge until he retired in 1922. The area inside the fence to the west became known as "Crawford's Paddock". The fence was maintained at first by boundary riders riding bicycles and later by riders astride camels. However, fence inspection was difficult from atop the tall animal. In 1910, a car was bought for fence inspection, but it was subject to punctured tyres.
It was found that the best way to inspect the fence was using buckboard buggies, pulled by two camels. The camels were also used as pack animals, especially in the north. In the east, camels were used to pull drays with supplies for the riders. Camels were ideal for this as they could go for a long time without water. They were considered critical to the building and maintenance of the fence. Crawford supervised four sub-inspectors, each responsible for about of fence, and 25 boundary riders, who regularly patrolled sections of fence. Due to frontier violence in the north of the state, a section of No. 1 Fence was patrolled by riders who travelled in pairs. Crawford also was responsible for eliminating rabbits that had breached the fence. In the first year following the fence's completion, rabbit colonies were found, and all their members killed, at several locations inside the fence. These included sites near Coorow, Mullewa, and Northampton. Following the introduction of myxomatosis to control rabbits in the 1950s, the importance of the rabbit-proof fence diminished. Effectiveness By 1902, rabbits had already been found west of the fence line that had been initially constructed. The Number 2 Rabbit Proof Fence was built in 1905 in order to stem their advance. It held back the rabbits for many years, to such an extent that the Government Scheme for supplying rabbit netting, by extending long-term loans to farmers, was never applied to farmers west of that fence. The farmers between the two fences suffered from the ravages of the rabbits for many years, before the rabbits bred to plague proportions and spread out over the agricultural districts to the west of the No. 2 fence. Overall, as a long-term barrier to rabbits, the fences were a failure; even while construction was underway, rabbits were hopping into regions that the fences were intended to protect. Intersection with railway system No. 1 Fence intersected railway lines at: the Eastern Railway near Burracoppin, the Wyalkatchem–Southern Cross railway at Campion, the Sandstone branch railway just west of Anketell, and the Meekatharra–Wiluna railway at Paroo. No. 2 Fence intersected with most of the Wheatbelt railway lines of Western Australia. Elsewhere in Australia The Darling Downs–Moreton Rabbit Board fence is a rabbit fence that extends along part of the Queensland–New South Wales border. Cultural references In 1907, Arthur Upfield, an Australian writer who had previously worked on the construction of No. 1 Fence, began writing a fictional story that explored a way of disposing of a body in the desert. Before the book was published, stockman Snowy Rowles, an acquaintance of the writer, carried out at least two murders and disposed of the bodies using the method described in the book. The 1932 trial that followed the arrest of Rowles for murder was one of the most sensational in the history of Western Australia. Decades later, Terry Walker wrote a book about this called Murder on the Rabbit Proof Fence: The Strange Case of Arthur Upfield and Snowy Rowles (1993). The events are now referred to as the Murchison Murders. Doris Pilkington Garimara's book, Follow the Rabbit-Proof Fence (1996), describes how three Indigenous Australian girls used the fence to guide their route back home from Moore River Native Settlement to Jigalong. The girls, taken from their families in Western Australia as part of the Stolen Generations, escaped from the mission settlement.
Two sisters were successful in walking hundreds of kilometres back to their family at Jigalong by following the rabbit-proof fence. Garimara is the daughter of Molly, one of the girls. The dramatic film Rabbit-Proof Fence (2002) is based on the book. In 2016, Englishwoman Lindsey Cole walked the fence from the Moore River Settlement through to Jigalong. She was met by Doris Garimara's daughter at the end of the walk in September 2016. See also Agricultural fencing Dingo Fence Rabbits in Australia Notes References External links Run Rabbit Run!, Australian Museums and Galleries Online The State Barrier Fence of Western Australia, 1901–2001, National Library of Australia The Rabbit Proof Fence, Library of West Australian History At Australia's Bunny Fence, Variable Cloudiness Prompts Climate Study, The New York Times Animal migration Buildings and structures completed in 1907 Fences Pilbara Mid West (Western Australia) Wheatbelt (Western Australia) 1907 establishments in Australia Separation barriers
Rabbit-proof fence
Engineering,Biology
1,629
1,846,827
https://en.wikipedia.org/wiki/Catastrophe%20modeling
Catastrophe modeling (also known as cat modeling) is the process of using computer-assisted calculations to estimate the losses that could be sustained due to a catastrophic event such as a hurricane or earthquake. Cat modeling is especially applicable to analyzing risks in the insurance industry and is at the confluence of actuarial science, engineering, meteorology, and seismology. Catastrophes/Perils Natural catastrophes (sometimes referred to as "nat cat") that are modeled include: Hurricane (main peril is wind damage; some models can also include storm surge and rainfall) Earthquake (main peril is ground shaking; some models can also include tsunami, fire following earthquakes, liquefaction, landslide, and sprinkler leakage damage) Severe thunderstorm or severe convective storm (main sub-perils are tornado, straight-line winds, and hail) Flood Extratropical cyclone (commonly referred to as European windstorm) Wildfire Winter storm Human catastrophes include: Terrorism events Warfare Casualty/liability events Forced displacement crises Cyber data breaches Lines of business modeled Cat modeling involves many lines of business, including: Personal property Commercial property Workers' compensation Automobile physical damage Limited liabilities Product liability Business interruption Inputs, Outputs, and Use Cases The input into a typical cat modeling software package is information on the exposures being analyzed that are vulnerable to catastrophe risk. The exposure data can be categorized into three basic groups: Information on the site locations, referred to as geocoding data (street address, postal code, county/CRESTA zone, etc.) Information on the physical characteristics of the exposures (construction, occupation/occupancy, year built, number of stories, number of employees, etc.) Information on the financial terms of the insurance coverage (coverage value, limit, deductible, etc.) The output of a cat model is an estimate of the losses that the model predicts would be associated with a particular event or set of events. When running a probabilistic model, the output is either a probabilistic loss distribution or a set of events that could be used to create a loss distribution; probable maximum losses ("PMLs") and average annual losses ("AALs") are calculated from the loss distribution (a brief computational sketch of these two quantities appears at the end of this article). When running a deterministic model, losses caused by a specific event are calculated; for example, Hurricane Katrina or "a magnitude 8.0 earthquake in downtown San Francisco" could be analyzed against the portfolio of exposures. Cat models have a variety of use cases for a number of industries, including: Insurers and risk managers use cat modeling to assess the risk in a portfolio of exposures. This might help guide an insurer's underwriting strategy or help them decide how much reinsurance to purchase. Some state departments of insurance allow insurers to use cat modeling in their rate filings to help determine how much premium their policyholders are charged in catastrophe-prone areas. Insurance rating agencies such as A. M. Best and Standard & Poor's use cat modeling to assess the financial strength of insurers that take on catastrophe risk. Reinsurers and reinsurance brokers use cat modeling in the pricing and structuring of reinsurance treaties. European insurers use cat models to derive the required regulatory capital under the Solvency II regime. Cat models are used to derive catastrophe loss probability distributions which are components of many Solvency II internal capital models.
Likewise, cat bond investors, investment banks, and bond rating agencies use cat modeling in the pricing and structuring of a catastrophe bond. Open catastrophe modeling The Oasis Loss Modelling Framework ("LMF") is an open source catastrophe modeling platform. It was developed by a nonprofit organisation funded and owned by the insurance industry to promote open access to models and to promote transparency. Additionally, some firms within the insurance industry are currently working with the Association for Cooperative Operations Research and Development (ACORD) to develop an industry standard for collecting and sharing exposure data. See also HAZUS Year loss table Catastrophe theory Catastrophe (disambiguation) References External links International Society of Catastrophe Managers Florida Public Hurricane Loss Model Insurance Information Institute LMF source code repository Actuarial science Disaster management tools Natural hazards Environmental modelling
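As flagged in the outputs discussion above, the following is a minimal, self-contained sketch of how an average annual loss (AAL) and a probable maximum loss (PML) might be computed from a simulated year loss table. All event frequencies and severities below are invented for illustration; this is a simplified sketch, not any vendor's actual model or data.

```python
import random

# Hypothetical event catalog: (annual occurrence rate, mean loss in dollars).
# These numbers are made up purely for illustration.
EVENTS = [(0.02, 50e6), (0.05, 20e6), (0.10, 5e6), (0.30, 1e6)]

def poisson_count(rng, rate):
    """Number of events in one year, sampled via Poisson-process arrival times."""
    count, t = 0, rng.expovariate(rate)
    while t < 1.0:
        count += 1
        t += rng.expovariate(rate)
    return count

def simulate_year_losses(n_years=10_000, seed=42):
    """Build a year loss table: total simulated catastrophe loss per year."""
    rng = random.Random(seed)
    years = []
    for _ in range(n_years):
        total = 0.0
        for rate, mean_loss in EVENTS:
            # Exponentially distributed severity for each simulated occurrence.
            for _ in range(poisson_count(rng, rate)):
                total += rng.expovariate(1.0 / mean_loss)
        years.append(total)
    return years

def aal(year_losses):
    """Average annual loss: the mean of the year loss table."""
    return sum(year_losses) / len(year_losses)

def pml(year_losses, return_period=250):
    """PML at a given return period, e.g. the 1-in-250-year annual loss."""
    ordered = sorted(year_losses)
    index = min(len(ordered) - 1, int(len(ordered) * (1 - 1 / return_period)))
    return ordered[index]

if __name__ == "__main__":
    table = simulate_year_losses()
    print(f"AAL:        ${aal(table):,.0f}")
    print(f"250-yr PML: ${pml(table):,.0f}")
```

The design mirrors the standard cat-model pipeline described above: an event set with occurrence rates, sampled losses per simulated year, and summary statistics (AAL, PML) read off the resulting loss distribution.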
Catastrophe modeling
Physics,Mathematics,Environmental_science
860
2,910,904
https://en.wikipedia.org/wiki/Glade%20%28brand%29
Glade (/gleɪd/) is an American brand of household air fresheners first introduced in 1956. It is a worldwide brand owned by S. C. Johnson & Son and is also sold under other names, such as Gleid. Brise was renamed Glade in Germany, France and the Netherlands in 2012. Product list The Glade family of products includes Aerosol, Candles, Car Scented Oil, Carpet & Room, Fragrant Mist, PlugIns (Scented Gel and Scented Oil), Press 'N Fresh, Quick 'N Fresh, Secrets, Scented Oil Candles and Solid Air Fresheners. Aerosol Blue Odyssey Clean Linen Country Garden Crisp Waters French Vanilla Fresh Lemon Fruit Explosion Hawaiian Breeze Jasmine & White Rose Lavender & Vanilla Lilac Spring Neutralizer Powder Fresh Rainshower Tropical Mist White Tea & Lily Blooming Peony & Cherry Candles Cashmere Woods Apple Cinnamon Angel Whisper Clean Linen French Vanilla Glistening Snow - cancelled in 2009 Lavender Meadow Melon Burst Mountain Berry Pumpkin Pie Rainshower Refreshing Spa Strawberries & Cream Fresh Mountain Morning Three in one Baking With Grandma Berry Picking Evening At Home Starlit Garden Lighting Car Scented Oil Jasmine Mist Tropical Moment Outdoor Fresh Ocean Blue Carpet & Room Country Garden French Vanilla Fresh Scent Lilac Spring Melon Burst Neutralizer Rainshower Shake n' Vac Fragrant Mist Alpine Spice Country Garden PlugIns Apple Cinnamon Citrus Zest Clean Linen Country Garden Glade Country Spice Glistening Snow - cancelled in 2009 Island Breeze Lilac Spring Mango Splash Mountain Berry Mountain Meadow Mountain Snow Natural Springs Rainshower Refreshing Spa Strawberry Tropical Garden Vanilla Garden Scented Gel Lavender & Vanilla Scented Oil Apple Cinnamon Butterfly Garden Clean Linen Clear Springs Ferns and Blossoms Floral Escape Glistening Snow - cancelled in 2009 Hawaiian Breeze Jasmine & White Rose Lavender Meadow Mystical Garden Mango Fusion Ocean Blue Refreshing Citrus Rainshower Seaside Garden Sky Breeze Summer Berries Vanilla Breeze Press 'N Fresh Country Garden Just Orange Rainshower Quick 'N Fresh Country Garden Sunny Days Secrets Country Garden Floral Breeze Lavender Meadow Rainshower Summer Cravings Solid Air Fresheners Angel Whispers Apple Cinnamon Clean Linen Clear Springs (Tough Odor Solutions) Crisp Waters French Vanilla Fresh Berries Fresh Scent for Pet Odors (Tough Odor Solutions) Hawaiian Breeze Lavender & Vanilla Holiday products In late 2005, Glade introduced candles inspired by artist Thomas Kinkade. Each candle had a different wintry scene printed on the jar and offered the choice of vanilla, apple cinnamon, or pumpkin pie scent. Competition Glade's two main competitors in the air freshener market are Air Wick and Renuzit. Wizard Scented Oils was formerly a competitor to Glade as well, until it was discontinued due to poor sales. References External links S.C. Johnson Company Cleaning product brands Cleaning products Cleaning product components American brands S. C. Johnson & Son brands Products introduced in 1956
Glade (brand)
Chemistry,Technology
568
20,015,881
https://en.wikipedia.org/wiki/Bharat%20Operating%20System%20Solutions
Bharat Operating System Solutions (BOSS GNU/Linux) is an Indian Linux distribution based on Debian. Its latest stable version is 10.0 (Pragya), which was released in March 2024. Editions BOSS Linux was released in various editions for different purposes: BOSS Desktop: Designed for personal, home, and office use. EduBOSS: Designed for schools and the education community. BOSS Advanced Server: The server-oriented edition. BOSS MOOL: A specialized edition for maintainability by changing how kernel drivers are loaded as modules. History BOSS Linux was developed by the Centre for Development of Advanced Computing with the aim of promoting the adoption of free and open-source software throughout India. As a vital deliverable of the National Resource Centre for Free and Open Source Software, it has an enhanced desktop environment that includes support for various Indian languages and instructional software. The software was endorsed by the Government of India for adoption and implementation in India. BOSS Linux has been certified by the Linux Foundation for compliance with the Linux Standard Base. BOSS Linux supported Intel and AMD IA-32/x86-64 architecture until version 6 ("Anoop"). From version 7 ("Drishti"), the development shifted to x86-64 architecture only. Versions BOSS Linux has had nine major releases: BOSS 5.0 (Anokha) This release came with many new applications focused mainly on enhanced security and user-friendliness. The distribution included over 12,800 new packages, for a total of over . Most of the software in the distribution had been updated as well: over software packages (70% of all packages in Savir). BOSS 5.0 supported Linux Standard Base (LSB) version 4.1. It also featured XBMC to allow users to easily browse and view videos, photos, podcasts, and music from a hard drive, optical disc, local network, and the Internet. BOSS 6.0 (Anoop) There are several significant updates in BOSS Linux 6.0 (Anoop) from 5.0 (Anokha). Notable changes include a kernel update from 3.10 to 3.16, a shift of the system boot process from init to systemd, the full support of GNOME Shell as part of GNOME 3.14, an update to the GRUB version, the Iceweasel browser being replaced by Firefox and the Pidgin messaging client replacing Empathy, as well as several repository versions of available programs being updated as part of the release. BOSS Linux 6.0 also shipped various application and program updates, such as LibreOffice, X.Org, Evolution, GIMP, VLC media player, GTK+, GCC, GNOME Keyring, and Python. Related specifically to localization, language support improved with the replacement of SCIM by IBus, integrated with the system settings. Indic languages enabled under "Region and Languages" are now directly mapped to IBus, and an on-screen keyboard is provided for all layouts. This release is fully compatible with LSB 4.1. BOSS 7.0 (Drishti) The most significant change over previous releases is that support for the x86 version has been dropped, and BOSS is now only available for x86-64. Other notable changes include a kernel update to 4.9.0, a GNOME update from 3.14 to 3.22, and software updates to various applications and programs, with wide Indian language support and packages. This release aims to enhance the user interface with more glossy themes and is coupled with the latest applications from the community. BOSS 8.0 (Unnati) The desktop environment is changed from GNOME to Cinnamon. BOSS 9.0 (Urja) The Linux kernel was updated from 5.2 to 5.10.
BOSS 10.0 (Pragya) BOSS GNU/Linux Version 10, featuring the Cinnamon Desktop Environment, is designed to further efforts in developing an e-Governance stack based on Free and Open Source Software (FOSS). The release aims to foster a robust FOSS community across industries, government, and academia, driving growth and contributing to a sustainable ecosystem in India. BOSS Pragya OS supports this initiative by promoting the adoption of FOSS solutions for a wide range of applications. The recommended system requirements include 2 GB of RAM, 15 GB of hard drive space, and a minimum 1 GHz Pentium processor. See also Debian Comparison of Linux distributions Free culture movement Simputer References External links 2007 software Debian-based distributions Language-specific Linux distributions Operating system distributions bootable from read-only media X86-64 Linux distributions Indic computing Urdu-language computing State-sponsored Linux distributions Linux distributions
Bharat Operating System Solutions
Technology
981
37,024,493
https://en.wikipedia.org/wiki/Alfred%20William%20Bennett
Alfred William Bennett (24 June 1833 – 23 January 1902) was a British botanist and publisher. He was best known for his work on the flora of the Swiss Alps, cryptogams, and the Polygalaceae or milkwort plant family, as well as his years in the publishing industry. Early life Alfred William Bennett was the son of Quakers William Bennett (1804–1873), a successful tea dealer, amateur botanist, and sometime emu breeder, and Elizabeth (Trusted) Bennett (1798–1891), an author of religious books for the Society of Friends. William Bennett also corresponded with biologist Charles Darwin, though he did not accept the latter's theories concerning evolutionary biology. Alfred Bennett, unlike his father a lifelong believer in evolution, would later establish his own correspondence with the noted theorist. William Bennett took great interest in the education of his children, whom he schooled at home. The elder Bennett was influenced in his ideas of education by the writings of the Swiss philosopher and educational reformer Johann Heinrich Pestalozzi, and in the winter of 1841–1842, he took his family to Switzerland so that his children could study at the Pestalozzian School at Appenzell. It was during this trip that Alfred Bennett learned the German language, a skill that would help him in his future writings on Alpine plant life. William Bennett also created an environment conducive to the study of the natural sciences for his children. Between 1851 and 1854, he took Alfred and his brother Edward Trusted Bennett (1831–1908) on several walking tours of Wales and the western regions of England, where the boys studied British flora and took extensive notes on their observations. Their father also introduced them to the noted entomologists and family friends Edward Newman, Henry Doubleday, and Edward Doubleday. Education and publishing Bennett attended University College London, where he received a BA with honours in chemistry and botany in 1853, an MA in biology in 1855, and a BSc in biology in 1868. In 1858, he married Katharine Richardson (1835–1892) and turned to publishing as a career, taking over the business at 5, Bishopsgate Without, formerly run by Charles Gilpin and later by William & Frederick G. Cash. While he spent only the next ten years as a publisher, he worked off and on in various aspects of the industry for the rest of his life. He was the editor and publisher of The Friend, an independent weekly publication for members of the Society of Friends. He was one of the first publishers to use photographic illustrations, and the first sub-editor of the journal Nature. Additionally, he went on to be the editor of the Journal of the Royal Microscopical Society, the main publication of the Royal Microscopical Society, an institution of which he was a fellow and in which he served three terms as vice-president. Botanical career Between 1871 and 1873, Bennett wrote a series of papers on fertilisation in plants that brought him to the attention of Charles Darwin, who encouraged his efforts. In particular, Bennett clarified many of the processes in flower fertilisation and established core terminology for its description, as well as illuminating how flower structure could facilitate cross-fertilisation. Bennett also began to write on Polygalaceae during this time, and he contributed synopses of species within that family for the 1874 publication Flora Brasiliensis and J.D. Hooker's 1872 volume Flora of British India.
During a walking tour of Switzerland in 1875, Bennett's interest in the natural world of the Swiss Alps was rekindled after he found in the field 200 species of flowering plants he had not seen before. This led to his translation of J. Seboth's Alpenpflanzen nach der Natur gemalt as Alpine Plants (1879–84) and work on Austrian scientist K.W. von Dalla Torre's Tourists' Guide to the Flora of the Austrian Alps (1882, 1886), as well as Bennett's own definitive work The Flora of the Alps (1897). He also worked extensively on cryptogams, especially freshwater algae, during his last two decades as a botanist. In 1889, he published A Handbook of Cryptogamic Botany with his coauthor George Robert Milne Murray. His obituary in the Journal of the Royal Microscopical Society calls it his "most valuable original work." Bennett also spent many years as lecturer in botany at St Thomas' Hospital and Bedford College. Higher education of women After his retirement from publishing in 1868, he and his wife opened their house in Park Village East, Regent's Park, to a limited number of ladies coming up to London to study. From this time forward he took a keen interest in the education of women, and a large share of the effort of the ensuing campaign fell upon him personally. On 15 May 1878, University of London Convocation received an address signed by 1,960 women, asking that the university "throw open all its degrees to women." A. W. Bennett was one of the speakers named in the Times report of the ensuing debate. After nearly ten years, the campaign was successful, and the University of London was authorised to award degrees to women. Evolution Bennett accepted that evolution occurred but was a critic of natural selection. In 1870, he wrote a critical paper in the journal Nature entitled The Theory of Natural Selection from a Mathematical Point of View. He argued that small random variations could not accumulate in any single direction, as the incipient steps of a modification of an organ would be useless to the individual. His arguments were rejected by Alfred Russel Wallace. In 1871, Bennett endorsed St. George Jackson Mivart's criticisms of natural selection and wrote a supportive review of his book On the Genesis of Species. Bennett wrote a review of Charles Darwin's On the Origin of Species in 1872. He praised parts of the book but raised objections to natural selection. He held that it was incompetent to account for the initial stages of mimicry. Darwin wrote to Bennett: "I thank you sincerely for your generous review of the last Edit. of the Origin, more especially as we differ so greatly & I quite agree with you that the only way to arrive at the truth is to discuss & freely express all differences of opinion." Despite their differences, Bennett wrote a supportive review of Darwin's book Insectivorous Plants and they exchanged friendly letters. He also wrote a paper that disputed the arguments of Fritz Müller that protective mimicry in Lepidoptera could be explained by natural selection. Death Bennett died suddenly from a heart attack in Oxford Circus while riding home to Regent's Park atop an omnibus. A lifelong Quaker, he is buried in a Quaker burial-ground in Isleworth next to his wife Katharine. The couple was childless.
Selected writings Review of the Genus Hydrolea (1870) The Theory of Natural Selection from a Mathematical Point of View (1870) Review of The Genesis of Species (1871) Review of The Origin of Species by Means of Natural Selection (1872) On the Medicinal Products of the Indian Simarubeae and Burseraceae (1875) Review of the British Species and Subspecies of Polygala (1877) On the Structure and the Affinities of Characeae (1878) Conspectus Polygalarum Europaearum (1878) Polygalae americanae novae vel parum cognitae (1878) Reproduction of the Zygnemaceae (1884) Freshwater Algae (1887) A Handbook of Cryptogamic Botany (1889) The Flora of the Alps (1897) References Bibliography "Alfred William Bennett" (1902). Proceedings of the Linnean Society of London: One Hundred and Fourteenth Session, pp. 26–27. Retrieved 15 September 2012 from Biodiversity Heritage Library. Baker, J.G. (1902). "Obituary: A.W. Bennett." Journal of the Royal Microscopical Society for the Year 1902, pp. 155–157. Retrieved 15 September 2012 from Biodiversity Heritage Library. Cleevely, R.J. (2004). "Bennett, Alfred William." Dictionary of Nineteenth-Century British Scientists, Volume 1: pp. 181–182. Bristol, England: Thoemmes Continuum. Cleevely, R.J. (2004). "Bennett, Alfred William (1833–1902)." Oxford Dictionary of National Biography. Oxford: Oxford University Press. Online edition. Retrieved 15 September 2012 through a subscription account. "Katharine Bennett" (1893). Annual Monitor, No. 51: p. 22. Retrieved 17 September 2012 from Internet Archive. S.A.S. (1902). "A.W. Bennett." Nature, 65: p. 321. Retrieved 14 September 2012 from Nature.com. Stafleu, Frans A. and Erik A. Mennega (1993). "Bennett, Alfred William." Taxonomic Literature: Supplement II, pp. 70–72. Königstein: Koelz Scientific Books. Retrieved 15 September 2012 from Taxonomic Literature II Online. External links Works by Alfred William Bennett at Biodiversity Heritage Library. Works by Alfred William Bennett at Wikisource. For an obituary of A W Bennett, as a Quaker, see Annual Monitor for 1903. 1833 births 1902 deaths 19th-century British botanists 19th-century English businesspeople Academics of Bedford College, London Alumni of University College London Botanists active in Europe English book publishers (people) English Quakers Fellows of the Royal Microscopical Society Non-Darwinian evolution People from Clapham British phycologists Women and education
Alfred William Bennett
Biology
1,936
40,968,065
https://en.wikipedia.org/wiki/LG%20G%20Flex
The LG G Flex is an Android phablet developed and manufactured by LG. First unveiled by the company on October 27, 2013 for a release in South Korea, and carrying similarities to its G2 model, the smartphone is the company's first to incorporate a flexible display, along with a "self-healing" rear cover which can repair minor abrasions on its own. The G Flex was met with mixed reviews by critics, who characterized the device as a proof of concept for bleeding edge flexible screen technology rather than a device targeted towards the mass market. While the G Flex was praised for its durability, performance and the visibility of its screen, it was panned for being too similar in hardware, software, and design to the G2, having a low-resolution display that suffered from noise and image retention issues, and for presenting no compelling justification for the curved display in relation to the device's high price. It was succeeded by the LG G Flex 2 in January 2015. History In May 2013, LG announced that it would unveil prototypes for two OLED flexible displays at an exhibition organized by the Society for Information Display; a 55-inch television, and a 5-inch "unbreakable" display meant for mobile devices. In October 2013, rumors from a "person familiar with the company's launch plans" suggested that LG was planning to release a phablet with a curved, 6-inch OLED display known as the "G Flex". On October 27, 2013, LG officially unveiled the G Flex for a release in South Korea in November 2013, and later announced releases in Europe and the rest of Asia. Although LG had yet to confirm a North American release, a variant supporting United States carriers' networks was approved by the Federal Communications Commission in November 2013. At the Consumer Electronics Show in January 2014, LG announced a U.S. release for the G Flex across several major carriers. In March 2014, LG released an advertisement online to promote the G Flex, which portrayed the device as being "the most human phone ever" by representing a caller as a talking, bearded mouth on the user's hand, complete with an ear on the finger as an earpiece. The user is also seen feeding the mouth cake, and when phoning a woman later in the ad, kissing it. The ad was met with attention from media outlets, who considered it to be bizarre, with a TechCrunch writer going as far as dubbing it the worst commercial for a smartphone ever, and Kate Hutchington of The Guardian declaring it "more disturbing than the act of taking your mum to see Nymphomaniac on Mother's Day." Specifications The G Flex's physical design resembles that of the LG G2, consisting of a polycarbonate shell with a curvature of 700 millimetres (28 in), with volume and power buttons located on the rear of the device directly below the camera—the power button also contains an LED lamp which can be used as a notification light. The rear casing of the G Flex carries a "brushed metal" look and features a "self-healing" coating which can repair minor scratches and abrasions made to it. LG claimed that the curved design would be more "natural" when held to the head for conducting phone calls, and would reduce the level of glare on the display. While the phone can withstand being bent—having been bent a hundred times with 88 pounds (40 kg) of pressure during internal testing without any permanent damage to its form—LG chose to maintain a level of rigidity to the G Flex's design in order to ensure a "premium" feel.
The G Flex's internal hardware is almost identical to the G2, with a 2.26 GHz quad-core Snapdragon 800 processor, 2 GB of RAM, support for LTE or LTE Advanced networks where available, 32 GB of internal storage, and an infrared emitter. Unlike the G2, however, the G Flex's display is a 6-inch (15 cm), 720p, flexible OLED display coated with Gorilla Glass, and it also incorporates a non-removable 3500 mAh battery specifically optimized for the G Flex's curved form factor: it curves around the frame to fill any empty space, and LG claims it is the world's first curved battery. The ability to film in 4K (2160p) resolution was added through a subsequent software update. Software The G Flex ships with Android 4.2.2 "Jelly Bean" with a similar user interface and software to the G2. Several minor new features were added, including a "dual-window" split-screen multitasking mode, and alongside the G2's existing optimization options for one-handed use, the ability to slide all of the on-screen navigation keys to one side of the screen. Aside from the lack of optical image stabilization (which was excluded because it would make the image sensor too tall for the device's body), the G Flex's 13-megapixel camera is similar to the G2's, with the addition of a new "Face Tracking" shooting mode to assist users in taking photos containing themselves with the rear-facing camera, which automatically focuses on the user's face, and uses the power button's notification LED as a status light. In March 2014, LG began rolling out an update to Android 4.4.2 "KitKat". Along with other improvements, the update adds "Knock Code", a security feature introduced by the LG G Pro 2 which allows users to unlock their device by tapping certain quadrants of the screen in a sequence. The LG G Flex will not officially be updated to Android 5.0 "Lollipop". Reception The G Flex was released to mixed reviews. The design of the G Flex was praised for its durability and bendability, with Engadget reporting that they "did plenty of pushing and pulling on the device to test its physical limits, and none of our efforts resulted in cracking or any kind of damage to the chassis." However, The Verge felt that the "self-healing" rear cover was not effective enough after it was unable to recover from a scratch from a key, comparing it to Wolverine only being able to heal from paper cuts. The design of the phone itself was panned for being lackluster, and for being too similar to the G2. Although it was praised for its performance and battery life, the G Flex's hardware and software were also panned for being too similar to the G2, with its software in particular being criticized for not containing any features specifically intended to take advantage of the curved screen (besides the lock screen wallpaper tilting along with the phone), and for the removal of optical image stabilization from the camera. The G Flex's curved display was praised for having good viewing angles and brightness levels, and as promised, having a lower level of glare than most smartphones. However, the display was criticized for having a significantly lower resolution than other flagship phones and a grainy appearance, while Ars Technica also noticed issues with image retention and uneven lighting on screen contents.
LG's decision to introduce its flexible display on an abnormally large phone was also noted; in response to LG billing the display as having a more "immersive" viewing experience for movies, The Verge felt that "it's [immersive] due to the sheer size of the display. And I don't care what it's made of: a 6-inch smartphone is never going to feel comfortable on my face while I make a phone call." Engadget ultimately gave the G Flex an 83 out of 100, noting that the device was "a cross between a status symbol and a proof of concept [in some respects]", but contending that the device was too expensive (costing about US$940 upfront) and recommended that consumers wait for options with more "reasonable" prices before considering buying a curved smartphone. The Verge gave the G Flex a 7 out of 10, arguing that given its high price, there was nothing "compelling" about the curved screen as used by the G Flex, and that it "[felt] like a tech demo, an R&D prototype that was accidentally swapped in a shipping crate with the G2. LG just decided to roll with it, put the G Flex on sale, and see what happens." References External links Mobile phones introduced in 2013 Discontinued smartphones LG Electronics smartphones Android (operating system) devices Phablets Mobile phones with infrared transmitter
LG G Flex
Technology
1,780
49,277,066
https://en.wikipedia.org/wiki/Abies%20minor
Abies minor is a taxonomic synonym that may refer to: Abies minor = Abies balsamea Abies minor = Abies alba References
Abies minor
Biology
31
22,251,394
https://en.wikipedia.org/wiki/Travicom
Travicom was the trading name of Travel Automation Services Ltd, a travel technology company based in the United Kingdom providing a global distribution system between airlines and travel agencies. In 1976, Videcom, together with British Airways, British Caledonian and CCL, launched Travicom, the world's first multi-access reservations system (wholly based on Videcom technology), forming a network providing distribution for initially two and later 49 subscribing international airlines (including British Airways, British Caledonian, TWA, Pan American World Airways, Qantas, Singapore Airlines, Air France, Lufthansa, SAS, Air Canada, KLM, Alitalia, Cathay Pacific and JAL). The initial system supported little more than 100 terminals, but subsequent developments allowed most of the IATA licensed agencies in the UK to access the system. The system allowed agents to use the same entry formats for all the connected airlines' systems. The displays were returned in the format used by each airline system. By 1987 Travicom was handling 97% of UK airline business trade bookings. The system was replicated by Videcom in other areas of the world including the Middle East (DMARS), New Zealand, Kuwait (KMARS), Ireland, the Caribbean, United States and Hong Kong. The Travicom UK multi-access system was closed and replaced by the system known today in the UK as Galileo; in 1988 Travicom changed its trading name to Galileo UK. Later, British Airways sold Galileo UK to Galileo International. British Airways and Sabre controversy In 1987, Sabre's success in selling to European travel agents was inhibited by the refusal of the big European carriers, led by British Airways, to grant the system ticketing authority for their flights, even though Sabre had obtained BSP clearance for the UK in 1986. American Airlines brought a High Court action which alleged that British Airways, after the arrival of Sabre on its doorstep, immediately offered financial incentives to travel agents who continued to use Travicom and would tie any override commissions to use of the Travicom system. British Airways eventually bought out the stakes in Travicom held by Videcom and British Caledonian to become the sole owner, and although Sabre's vice-president in London, David Schwarte, made representations to the US Department of Transportation and the British Monopolies Commission, BA defended the use of Travicom as a truly non-discriminatory system in flight selection because an agent had access to some 50 carriers worldwide, including Sabre, for flight information. References External links Videcom Software companies of the United Kingdom Computer reservation systems Travel technology
Travicom
Technology
533
32,966,352
https://en.wikipedia.org/wiki/Repository%20%28version%20control%29
In version control systems, a repository is a data structure that stores metadata for a set of files or directory structure. Depending on whether the version control system in use is distributed, like Git or Mercurial, or centralized, like Subversion, CVS, or Perforce, the whole set of information in the repository may be duplicated on every user's system or may be maintained on a single server. Some of the metadata that a repository contains includes, among other things, a historical record of changes in the repository, a set of commit objects, and a set of references to commit objects, called heads. The main purpose of a repository is to store a set of files, as well as the history of changes made to those files. Exactly how each version control system handles storing those changes, however, differs greatly. For instance, Subversion in the past relied on a database instance but has since moved to storing its changes directly on the filesystem. These differences in storage techniques have generally led to diverse uses of version control by different groups, depending on their needs. Overview In software engineering, a version control system is used to keep track of versions of a set of files, usually to allow multiple developers to collaborate on a project. The repository keeps track of the files in the project, whose history is represented as a graph. A distributed version control system is made up of central and branch repositories. A central repository exists on the server. To make changes to it, a developer first works on a branch repository, and proceeds to commit the change to the central one. Forges A code forge is a web interface to a version control system. A user can commonly browse repositories and their constituent files on the page itself. Static web hosting While forges are mainly used to perform version control operations, some forges allow users to host static web pages by uploading their source code (such as HTML and JavaScript, but not PHP) to a repository. This is usually done in order to provide documentation or a landing page for a software project. The use of repositories as a place to upload web documents allows version control to be integrated, and additionally allows quick iteration because changes are pushed through the version control system instead of having to upload the files through a protocol like FTP. Examples of this kind of service include GitHub Pages and GitLab Pages. See also Sandbox (software development) Software repository Codebase Git Forge (software) Comparison of source-code-hosting facilities References Version control
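As an illustration of the metadata described above, namely commit objects forming a history graph plus named references ("heads") pointing at commits, here is a toy in-memory model in Python. It is a conceptual sketch only; real systems such as Git store and address these structures quite differently.

```python
# Toy model of a repository's metadata: commit objects in a history graph,
# and heads (branch names) that reference the tip commit of each branch.
import hashlib
import time

class Commit:
    def __init__(self, message, snapshot, parents=()):
        self.message = message
        self.snapshot = snapshot          # {filename: contents} at this version
        self.parents = tuple(parents)     # zero, one (normal), or two (merge) parents
        payload = repr((message, snapshot, [p.id for p in self.parents], time.time()))
        self.id = hashlib.sha1(payload.encode()).hexdigest()

class Repository:
    def __init__(self):
        self.commits = {}                 # id -> Commit: the historical record
        self.heads = {"main": None}       # branch name -> tip commit reference

    def commit(self, branch, message, snapshot):
        parent = self.heads.get(branch)
        c = Commit(message, snapshot, parents=(parent,) if parent else ())
        self.commits[c.id] = c
        self.heads[branch] = c            # advance the branch head to the new commit
        return c

    def log(self, branch):
        c = self.heads[branch]
        while c:                          # walk the parent chain back through history
            yield c
            c = c.parents[0] if c.parents else None

repo = Repository()
repo.commit("main", "initial import", {"README": "v1"})
repo.commit("main", "fix typo", {"README": "v2"})
print([c.message for c in repo.log("main")])   # ['fix typo', 'initial import']
```

In a distributed system, every developer's clone would hold its own full copy of the `commits` map and `heads` table; in a centralized one, only the server would.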
Repository (version control)
Engineering
517
35,528,747
https://en.wikipedia.org/wiki/Glutaurine
Glutaurine is an endogenous dipeptide which is an amide formed from glutamic acid and taurine. Biological role Glutaurine is an antiepileptic with antiamnesia properties. Glutaurine was discovered in the parathyroid in 1980, and later in the mammalian brain. This led to studies on intrinsic and synthetic taurine peptides, and the suggestion that γ-glutamyltransferase (GGT; γ-glutamyl-transpeptidase) in the brain is responsible for its in vivo formation. The versatile molecule mimics the anxiolytic drug diazepam, and is implicated in phenomena from feline aggression to amphibian metamorphosis, radiation protection, and the glutamatergic system in schizophrenic disorders. References Sulfonic acids Alpha-Amino acids Amino acid derivatives
Glutaurine
Chemistry
192
6,662,023
https://en.wikipedia.org/wiki/Formestane
Formestane, formerly sold under the brand name Lentaron among others, is a steroidal, selective aromatase inhibitor which is used in the treatment of estrogen receptor-positive breast cancer in postmenopausal women. The drug is not active orally, and was available only as an intramuscular depot injection. Formestane was not approved by the United States FDA and the injectable form that was used in Europe in the past has been withdrawn from the market. Formestane is an analogue of androstenedione. Formestane is often used to suppress the production of estrogens from anabolic steroids or prohormones. It also acts as a prohormone to 4-hydroxytestosterone, an active steroid which displays weak androgenic activity in addition to acting as a weak aromatase inhibitor. References Enols Anabolic–androgenic steroids Androstanes Aromatase inhibitors Diketones Hormonal antineoplastic drugs Cyclohexenols Enones
Formestane
Chemistry
219
14,071,965
https://en.wikipedia.org/wiki/Robert%20H.%20Gray
Robert Hansen Gray (March 7, 1948 – December 6, 2021) was an American data analyst, astronomer, and author of The Elusive Wow: Searching for Extraterrestrial Intelligence. Education Gray attended Shimer College, a Great Books school then located in Mount Carroll, Illinois, where he received a bachelor's degree in 1970. He went on to obtain a master's in urban planning and policy analysis from the University of Illinois at Chicago in 1980. Career Data analysis In 1984, Gray founded the company Gray Data in Chicago, which provided data analysis research services and published reference cards for microcomputer software. He continued to work as a data analyst through his company Gray Consulting. Search for Extraterrestrial Intelligence (SETI) Gray was best known for his work as an independent SETI researcher. The Atlantic called Gray "the 'Wow!' signal's most devoted seeker and chronicler, having traveled to the very ends of the earth in search of it." The Wow! signal was detected by the Ohio State University Radio Observatory (also known as Big Ear) on August 15, 1977. The signal was so pronounced in the data, and so similar to a radio signal rather than a natural source, that SETI scientist Jerry R. Ehman circled it on the computer printout in red ink and wrote "Wow!" next to it. After hearing about the Wow! signal a few years after its detection, Gray contacted the Ohio team, visited Big Ear, and spoke with Ehman, Robert S. Dixon (director of the SETI project) and John D. Kraus (the telescope's designer). In 1980, Gray began scanning the skies from his backyard in Chicago, using a 12-foot commercial telecommunications dish. He operated his small SETI radio observatory regularly for the next 15 years beginning in 1983, but did not find a trace of the Wow! signal. In 1987 and 1989 he led searches for the signal using the Harvard/Smithsonian META radio telescope at the Oak Ridge Observatory in Harvard, Massachusetts. In September 1995 and again in May 1996, Gray and Kevin B. Marvel reported searches for the signal using the Very Large Array (VLA) radio telescope in New Mexico (which is an array of 27 dishes simulating a single dish with a diameter of up to 22 miles), with Gray becoming the first amateur astronomer to use the VLA, and the first individual to use it to search for extraterrestrial signals. The VLA was, until the end of the twentieth century, the most powerful radio telescope ever built. In 1998, he and University of Tasmania professor Simon Ellingsen conducted searches using the 26-meter dish at the Mount Pleasant Radio Observatory in Hobart, Tasmania. Gray and Ellingsen made six 14-hour observations of the region where the Big Ear had been pointing when it found the Wow! signal, searching for intermittent and possibly periodic signals, rather than a constant signal. No signals resembling the Wow! were detected. Writing Gray and Marvel published a 2001 paper in The Astrophysical Journal detailing his use of the VLA in search of the signal. Gray and Ellingsen published "A Search for Periodic Emissions at the Wow Locale" in the October 2002 issue of The Astrophysical Journal, reporting on searches for the Wow! signal. In 2011, Gray published the book The Elusive Wow: Searching for Extraterrestrial Intelligence, summarizing what is known about the Wow! signal, covering his own search for the signal, and offering an overview of the search for extraterrestrial intelligence.
In 2016, Gray published an article in Scientific American about the Fermi paradox, which claims that if extraterrestrials existed, we would see signs of them on Earth, because they would certainly colonize the galaxy by interstellar travel. Gray argued that the Fermi paradox, named after Nobel Prize-winning physicist Enrico Fermi, does not accurately represent Fermi's views. Gray stated that Fermi questioned the feasibility of interstellar travel, but did not say definitively whether or not he thought extraterrestrials exist. Personal life Gray lived in Chicago, Illinois, with his wife, photographer Sharon A. Hoogstraten. He died on December 6, 2021, from complications from lung cancer in Chicago. Bibliography Books Robert H. Gray, The Elusive Wow: Searching for Extraterrestrial Intelligence (Palmer Square Press, 2012) Articles "A VLA Search for the Ohio State 'Wow'" The Astrophysical Journal, vol. 546, no. 2, January 2001 (with Kevin B. Marvel) "A Search for Periodic Emissions at the Wow Locale" The Astrophysical Journal, vol. 578, no. 2, October 2002 (with Simon Ellingsen) "A VLA Search for Radio Signals from M31 and M33" The Astrophysical Journal, vol. 153, no. 3, February 2017 (with Kunal Mooley) "An ATA Search for a Repetition of the Wow Signal" The Astrophysical Journal, vol. 160, no. 4, September 2020 (with Gerald Harp, Jon Richards, Seth Shostak, and Jill Tarter) "The Fermi Paradox Is Not Fermi's, and It Is Not a Paradox" Scientific American, January 29, 2016 (first appeared in Astrobiology, vol. 15, issue 3, March 2015) "The Extended Kardashev Scale" Astronomical Journal, vol. 159, no. 5, April 2020 "Intermittent Signals and Planetary Days in SETI" Intl. Journal of Astrobiology, vol. 19, April 2020 References Living people 1948 births American science writers Amateur astronomers University of Illinois Chicago alumni Shimer College alumni Search for extraterrestrial intelligence
Robert H. Gray
Astronomy
1,162
68,319,397
https://en.wikipedia.org/wiki/Social%20audio
Social audio is a subclass of social media that designates social media platforms that use audio as their primary channel of communication. This can include text messages, podcasts, and tools for recording and editing audio, in addition to virtual audio rooms. The segment is still evolving, and companies that develop social audio products are still trying to figure out what works for their users and what does not. History In March 2020, Alpha Exploration Co. launched a social audio application called Clubhouse on the iOS platform. The app led to the emergence of a new social media segment known as social audio. Soon realizing the potential of this segment, a handful of companies came out with their own social audio solutions, either as standalone products or as an expansion to their current products. With Clubhouse the pioneer in this segment, competitors eventually adapted its features to their own products. In October 2020, Betty Labs launched their social audio app Locker Room for iOS. In November 2020, Twitter announced that it would develop a social audio feature on its platform. In December 2020, Telegram introduced the social audio feature Voice Chats in its app. That same month, Twitter began beta testing its social audio feature, known as Spaces, with iOS users on its platform. In February 2021, Facebook announced plans to add social audio functionality to its app. In March 2021, Twitter started beta testing Spaces with Android users on its platform. That same month, Telegram released Voice Chat 2.0. On March 26, 2021, Slack CEO Stewart Butterfield was in a Clubhouse session moderated by PressClub when he announced that Slack would soon roll out a social audio feature. Late that month, Spotify purchased Betty Labs and announced its intention to rename the Locker Room app. On March 31, 2021, Discord introduced its social audio feature called Stage Channels. In April 2021, the Facebook NPE team launched a social audio product called Hotline in closed beta, requiring a Twitter account to log in. That same month, Reddit announced a social audio feature called Reddit Talk for their subreddit communities. On May 3, 2021, Twitter Spaces was released globally. On May 9, 2021, Clubhouse launched a beta version of its Android app for users in the US, with worldwide access scheduled for a later date. Later, on May 21, 2021, Clubhouse became available worldwide for Android users. On June 16, 2021, Spotify released its social audio app Spotify Greenroom on Android (early access) and iOS. On June 21, 2021, Facebook released its social audio feature known as Live Audio Rooms to users based in the United States. The company said it would deploy the feature globally in the coming months. On June 30, 2021, Slack started rolling out its social audio solution named Slack Huddles for paid customers. On July 21, 2021, the Clubhouse app released its first gold build. Common features Chat rooms in which users can converse through shared audio recordings, typically in real-time. Traditional text and rich-media-based chat. List of platforms Sound Branch In 2016, Sean Gilligan launched the social audio application Sound Branch. The app supports various platforms, including the web, iOS, Android, Alexa, and Google Assistant, making it accessible for users worldwide. Sound Branch allows users to create and share short audio clips, fostering community interaction through voice. It is available globally at soundbranch.com and for private sites at soundbran.ch.
Sound Branch integrates voice technology into everyday communication, aiming to make it easier for users to connect and share experiences through sound. Telegram Voice Chats In December 2020, Telegram introduced the social audio feature Voice Chats in its app. Later, in March 2021, Telegram released Voice Chat 2.0, which allows unlimited participants and conversation recording. Discord Stages On March 31, 2021, Discord introduced its social audio feature called Stage Channels. Facebook Live Audio Rooms and Podcasts In April 2021, the Facebook NPE team launched a product called Hotline in closed beta, requiring a Twitter account to log in. It offers a room where only some people can speak while others listen; speakers can also turn on video. On June 21, 2021, Facebook released its social audio feature known as Live Audio Rooms and Podcasts to users based in the United States. The company said it would deploy the feature globally in the coming months. It enables users to participate in rooms with a maximum of 51 people (1 host and 50 speakers) onstage and an unlimited number of listeners, and it also provides access to a podcast library. Reddit Talk In April 2021, Reddit announced a social audio feature called Reddit Talk for their subreddit communities. Twitter Spaces On May 3, 2021, Twitter added a social audio feature named Spaces to its platform. It allows users to engage in rooms with a maximum of 13 people (1 host, 2 co-hosts, and 10 speakers) onstage and an unlimited number of listeners. Spaces is much further ahead in development than Clubhouse, with Twitter API support that enables integration into other apps. In July 2021, Twitter announced a 'Voice Transformer' feature that would work in Spaces to change a user's voice. Twitter also took over Clubhouse's exclusive NFL deal, with 20 official NFL Spaces scheduled for the 2021-22 season. Spotify Greenroom On June 16, 2021, Spotify released its social audio app Spotify Greenroom. Greenroom has built-in recording and integrates into both Spotify and Anchor.fm. This integration allows podcasters to record rooms and upload them directly to a podcast for distribution. It also notifies Spotify listeners when a verified artist is live in Greenroom. Greenroom can currently accommodate up to 1000 people in a room. Slack Huddles On June 30, 2021, Slack started rolling out its social audio solution named Slack Huddles for paid customers. It can take up to 50 participants at a time. References Collaborative projects Social networks
Social audio
Technology
1,199
364,820
https://en.wikipedia.org/wiki/Quotient%20module
In algebra, given a module and a submodule, one can construct their quotient module. This construction, described below, is very similar to that of a quotient vector space. It differs from analogous quotient constructions of rings and groups by the fact that in the latter cases, the subspace that is used for defining the quotient is not of the same nature as the ambient space (that is, a quotient ring is the quotient of a ring by an ideal, not a subring, and a quotient group is the quotient of a group by a normal subgroup, not by a general subgroup). Given a module M over a ring R, and a submodule N of M, the quotient space M/N is defined by the equivalence relation a ~ b if and only if b − a ∈ N, for any a, b in M. The elements of M/N are the equivalence classes [a] = a + N = {a + n : n ∈ N}. The function π : M → M/N sending a in M to its equivalence class a + N is called the quotient map or the projection map, and is a module homomorphism. The addition operation on M/N is defined for two equivalence classes as the equivalence class of the sum of two representatives from these classes; and scalar multiplication of elements of M/N by elements of R is defined similarly. Note that it has to be shown that these operations are well-defined. Then M/N becomes itself an R-module, called the quotient module. In symbols, for all a, b in M and r in R: (a + N) + (b + N) = (a + b) + N and r · (a + N) = (r · a) + N. Examples Consider the polynomial ring ℝ[X] with real coefficients, and the ℝ[X]-module M = ℝ[X]. Consider the submodule N = (X² + 1) of M, that is, the submodule of all polynomials divisible by X² + 1. It follows that the equivalence relation determined by this module will be P(X) ~ Q(X) if and only if P(X) and Q(X) give the same remainder when divided by X² + 1. Therefore, in the quotient module M/N, X² + 1 is the same as 0; so one can view M/N as obtained from ℝ[X] by setting X² = −1. This quotient module is isomorphic to the complex numbers, viewed as a module over the real numbers ℝ. See also Quotient group Quotient ring Quotient (universal algebra) References Module theory Module
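The article notes that one must check these operations are well-defined, that is, independent of the chosen representatives. The standard verification is short; written out in LaTeX notation with the symbols used above:

```latex
% Well-definedness of the quotient-module operations: if a + N = a' + N and
% b + N = b' + N, the results of addition and scalar multiplication agree.
\begin{align*}
a - a' \in N,\; b - b' \in N
  &\;\Longrightarrow\; (a + b) - (a' + b') = (a - a') + (b - b') \in N\\
  &\;\Longrightarrow\; (a + b) + N = (a' + b') + N,\\
a - a' \in N,\; r \in R
  &\;\Longrightarrow\; ra - ra' = r(a - a') \in N
   \;\Longrightarrow\; ra + N = ra' + N.
\end{align*}
```

Both implications use only that the submodule N is closed under addition and under multiplication by elements of R.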
Quotient module
Mathematics
413
21,985,449
https://en.wikipedia.org/wiki/Proactive%20learning
Proactive learning is a generalization of active learning designed to relax unrealistic assumptions and thereby reach practical applications. "In real life, it is possible and more general to have multiple sources of information with differing reliabilities or areas of expertise. Active learning also assumes that the single oracle is perfect, always providing a correct answer when requested. In reality, though, an "oracle" (if we generalize the term to mean any source of expert information) may be incorrect (fallible) with a probability that should be a function of the difficulty of the question. Moreover, an oracle may be reluctant – it may refuse to answer if it is too uncertain or too busy. Finally, active learning presumes the oracle is either free or charges uniform cost in label elicitation. Such an assumption is naive since cost is likely to be regulated by difficulty (amount of work required to formulate an answer) or other factors." Proactive learning relaxes all four of these assumptions, relying on a decision-theoretic approach to jointly select the optimal oracle and instance, by casting the problem as a utility optimization problem subject to a budget constraint. References Learning Machine learning
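As a concrete illustration of the decision-theoretic selection described above, the following Python sketch jointly picks (instance, oracle) pairs by greedily maximizing expected utility per unit cost under a budget. The utility model (informativeness discounted by the oracle's answer probability and reliability), the oracle attributes, and all numbers are illustrative assumptions for this sketch, not the exact formulation from the paper.

```python
# Sketch of proactive learning's joint oracle/instance selection, cast as a
# greedy approximation to utility maximization under a budget constraint.

def expected_utility(instance, oracle):
    # V(x): how informative a label for x would be (e.g., current model
    # uncertainty about x); here we just read a precomputed score.
    v = instance["informativeness"]
    # Discount by the chance the oracle answers at all (reluctance) and by
    # its probability of being correct (fallibility).
    return v * oracle["answer_prob"] * oracle["reliability"]

def select(instances, oracles, budget):
    """Greedily pick (instance, oracle) pairs with the best utility per cost."""
    spent, chosen = 0.0, []
    remaining = list(instances)
    while remaining:
        best = max(
            ((x, o) for x in remaining for o in oracles
             if spent + o["cost"] <= budget),
            key=lambda pair: expected_utility(*pair) / pair[1]["cost"],
            default=None,
        )
        if best is None:          # budget exhausted for every oracle
            break
        x, o = best
        chosen.append((x["id"], o["name"]))
        spent += o["cost"]
        remaining.remove(x)
    return chosen

oracles = [
    {"name": "expert", "reliability": 0.99, "answer_prob": 0.7, "cost": 5.0},
    {"name": "crowd",  "reliability": 0.80, "answer_prob": 1.0, "cost": 1.0},
]
instances = [{"id": i, "informativeness": s} for i, s in enumerate([0.9, 0.5, 0.2])]
print(select(instances, oracles, budget=7.0))
```

With these made-up numbers the cheap but fallible "crowd" oracle wins every round on utility per cost; raising its cost or lowering its reliability shifts queries to the expert, which is the trade-off proactive learning is designed to expose.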
Proactive learning
Engineering
239
43,253,837
https://en.wikipedia.org/wiki/Apricoxib
Apricoxib is an experimental anticancer drug and nonsteroidal anti-inflammatory drug (NSAID). It is a COX-2 inhibitor which is intended to improve standard therapy response in molecularly-defined models of pancreatic cancer. It was also studied in clinical trials for non-small-cell lung cancer. Development was abandoned in 2015 due to poor clinical trial results. See also Tilmacoxib Cimicoxib NS-398 Celecoxib References COX-2 inhibitors Sulfonamides Abandoned drugs
Apricoxib
Chemistry
112
50,028,979
https://en.wikipedia.org/wiki/Allyl%20mercaptan
Allyl mercaptan (AM) is a small molecule allyl derivative and an organosulfur compound derived from garlic and a few other genus Allium plants. Its formula is C3H6S. It has been shown to be the most effective HDAC inhibitor of known garlic-derived organosulfur compounds and their metabolites. References Thiols Histone deacetylase inhibitors Allyl compounds Foul-smelling chemicals
Allyl mercaptan
Chemistry
91
29,850,197
https://en.wikipedia.org/wiki/Biomarkers%20of%20Alzheimer%27s%20disease
The biomarkers of Alzheimer's disease are neurochemical indicators used to assess the risk or presence of the disease. The biomarkers can be used to diagnose Alzheimer's disease (AD) at a very early stage, but they also provide objective and reliable measures of disease progress. It is imperative to diagnose AD as soon as possible, because the neuropathologic changes of AD precede the symptoms by years. It is well known that amyloid beta (Aβ) is a good indicator of AD, which has helped doctors to accurately pre-diagnose cases of AD. When Aβ peptide is released by proteolytic cleavage of amyloid-beta precursor protein, some solubilized Aβ peptides are detected in CSF and blood plasma, which makes Aβ peptides promising candidates for biological markers. It has been shown that Aβ biomarkers achieve 80% or above sensitivity and specificity in distinguishing AD from other forms of dementia. It is believed that amyloid beta as a biomarker will provide a future for diagnosis of AD and eventually treatment of AD. Amyloid beta Amyloid beta (Aβ) is composed of a family of peptides produced by proteolytic cleavage of the type I transmembrane spanning glycoprotein amyloid-beta precursor protein (APP). Amyloid plaque Aβ protein species end in residue 40 or 42, but it is suspected that the Aβ42 form is crucial in the pathogenesis of AD. Although Aβ42 makes up less than 10% of total Aβ, it aggregates at much faster rates than Aβ40. Aβ42 is the initial and major component of amyloid plaque deposits. While the most prevalent hypothesis for the mechanism of Aβ-mediated neurotoxicity is structural damage to the synapse, various mechanisms such as oxidative stress, altered calcium homeostasis, induction of apoptosis, structural damage, chronic inflammation, and neuronal formation of amyloid have been proposed. The Aβ42/Aβ40 ratio has been a promising biomarker for AD. However, as Aβ42 fails to be a reliable biomarker in plasma, attention was drawn to alternative biomarkers. Current biomarkers BACE1 Enzymatic digestion by beta-secretase (β-secretase) and gamma-secretase (γ-secretase) cleaves amyloid-beta precursor protein (APP) into various types of amyloid beta (Aβ) protein. Most β-secretase activity originates from an integral membrane aspartyl protease encoded by the β-site APP-cleaving enzyme 1 gene (BACE1). Dr. Zetterberg and his team used a sensitive and specific BACE1 assay to assess CSF BACE1 activity in AD. It was found that those with AD showed increased BACE1 expression and enzymatic activity. It was concluded that elevated BACE1 activity may contribute to the amyloidogenic process in Alzheimer's disease. CSF BACE1 activity could be a potential candidate biomarker to monitor amyloidogenic APP metabolism in the CNS. Soluble Aβ precursor protein (sAPP) APP is an integral membrane protein whose proteolysis generates beta amyloid peptides of 39 to 42 amino acids. Although the biological function of APP is not known, it has been hypothesized that APP may play a role in neuroregeneration and in the regulation of neural activity, connectivity, plasticity, and memory. Recent research has shown that the large soluble APP (sAPP) fragments present in CSF may serve as a novel potential biomarker of Alzheimer's disease. In an article published in Nature, a group led by Lewczuk tested the performance of the soluble forms sAPPα and sAPPβ. A significant increase in sAPPα and sAPPβ was found in people with AD as compared to normal subjects.
However, reported CSF levels of α-sAPP and β-sAPP are contradictory. Although many researchers have found that the CSF level of α-sAPP increases in some people with AD, some report that there is no significant change, while Lannfelt argues that there is a slight decrease. Therefore, more studies using experimental models are needed in order to confirm the validity of sAPP as a biological marker for AD. Autoantibodies Researchers at Indiana University found that titres of anti-beta-amyloid antibodies in cerebrospinal fluid were lower in AD patients compared to healthy patients. Novel approach Recent studies primarily focus on the use of autoantibodies, not only as biological markers but for future treatment. However, there are various arguments about whether the autoantibody method provides a reliable biomarker. A number of reports show that patients with AD have lower levels of serum anti-Aβ antibodies than healthy individuals, while others have argued that the level of anti-Aβ antibody may be higher in AD. To resolve this discrepancy in the existing data, Dr. Gustaw came up with a novel sample-dissociation method. Theory In biological fluids, antibodies and antigens are in a state of dynamic equilibrium between bound and unbound forms that is concentration-dependent. As antigen masks the antibody, it obstructs accurate measurement of antibody-antigen detection. Dr. Gustaw discovered a novel way to enhance antibody-antigen detection. Using a dissociation buffer (1.5% bovine serum albumin (BSA) and 0.2 M glycine-HCl, pH 2.5), he dissociated antigen-antibody complexes. In dissociated samples, the unmasked antibodies reveal a difference between diseased and non-diseased states that is hidden while the complexes remain bound. Method Prepare dissociation buffer: 1.4% bovine serum albumin + 0.2 M glycine-HCl, pH 2.5 Incubate Aβ42 for 20 minutes Dissolve Aβ42 in 500 uL dissociation buffer in a Microcon centrifugal device Incubate for 20 minutes Centrifuge for 20 minutes at 16,000 g Invert the filter and spin for 3 minutes at 2,000 g Bring the sample back to a neutral pH with 15–20 uL of 2.5 M Tris, pH 9 Add ELISA buffer (1.5% BSA and 0.05% Tween 20 in phosphate-buffered saline) Perform ELISA analysis. Result (In the original figure, white bars represent non-dissociated samples and black bars represent dissociated samples.) As the ELISA result shows, the detection of antibody is blocked by the addition of beta-amyloid when the experiment is performed without dissociation. Following dissociation, the level of antibody detected increased to nearly the level of the control. He then used the same methodology to examine sera collected from AD patients. The results, surprisingly, demonstrated a significant increase in antibody titer. This contradicts the majority of studies, which argue that anti-amyloid-beta antibody levels decrease in AD patients. The non-dissociated samples follow that widespread finding, but he had already shown that a non-dissociated sample fails to yield a valid result. The dissociated sample results show significant increases in AD patients, which contradicts the majority of previous studies. Contribution Currently, there are many biomarkers for the diagnosis of Alzheimer's disease. However, most of them do not provide consistent results. The novel autoantibody approach not only explained the discrepancy of results in previous autoantibody studies, but provided a new standard as a biomarker of Alzheimer's disease.
Compared to other biomarkers, whose measurements in the diagnosis of AD are variable, the new autoantibody approach accurately measures anti-Aβ antibody levels with high sensitivity, and has proved itself an excellent biomarker for Alzheimer's disease. It is believed that the new technology will provide not only future early diagnosis of Alzheimer's disease but also possible therapy for Alzheimer's disease. An open international study group (ND.Neuromark.net) has been constituted to organize scientific information and to develop a rational guide for implementing biomarkers into routine practice. See also Autoantibody Amyloid beta Biomarker BACE1 Neuroregeneration Dementia References Alzheimer's disease Biomarkers
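As an aside on the sensitivity and specificity figures quoted earlier in this article (80% or above for Aβ-based markers), the following Python sketch shows how such figures are computed from a decision threshold applied to measured values. All measurements, the cutoff, and the diagnoses below are invented for illustration; the only clinical fact assumed is that lower CSF Aβ42 is the AD-typical finding, so the rule flags values below the cutoff.

```python
# Sensitivity/specificity of a threshold-based biomarker test, on made-up data.
measurements = [  # (hypothetical CSF Aβ42 in pg/mL, has_AD)
    (420, True), (510, True), (460, True), (700, False),
    (650, False), (480, True), (720, False), (530, False),
]
CUTOFF = 550  # hypothetical decision threshold; values below it are flagged

tp = sum(1 for v, ad in measurements if ad and v < CUTOFF)       # true positives
fn = sum(1 for v, ad in measurements if ad and v >= CUTOFF)      # false negatives
tn = sum(1 for v, ad in measurements if not ad and v >= CUTOFF)  # true negatives
fp = sum(1 for v, ad in measurements if not ad and v < CUTOFF)   # false positives

sensitivity = tp / (tp + fn)   # fraction of AD cases correctly flagged
specificity = tn / (tn + fp)   # fraction of non-AD cases correctly passed
print(f"sensitivity={sensitivity:.0%} specificity={specificity:.0%}")
```

Moving the cutoff trades one measure against the other, which is why a biomarker is usually reported with both figures rather than a single accuracy number.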
Biomarkers of Alzheimer's disease
Biology
1,745
26,913,876
https://en.wikipedia.org/wiki/Confederation%20of%20UK%20Coal%20Producers
The Confederation of UK Coal Producers (or CoalPro) was the UK trade association for coal mining companies. Full members included Banks Developments, Celtic Energy, Kier Mining, Miller Argent, Hall Construction, Hargreaves Services, and Land Engineering Services. It represented 90% of the fragmented UK coal industry after the National Coal Board was dissolved in 1987. Status in the 2020s The UK coal industry now employs fewer than 1,000 people directly. The Confederation of UK Coal Producers was dissolved in February 2017, according to Companies House. See also British Coal Utilisation Research Association Coal Authority Institute of Materials, Minerals and Mining National Coal Mining Museum for England Mining Association of the United Kingdom References External links CoalPro Coal industry in Scotland in March 2005 Coal companies of the United Kingdom Coal organizations Organisations based in Wakefield Organizations established in 1991 Mining organizations Trade associations based in the United Kingdom 1991 establishments in the United Kingdom
Confederation of UK Coal Producers
Engineering
181
31,346,928
https://en.wikipedia.org/wiki/Tert-Butylphosphaacetylene
tert-Butylphosphaacetylene is an organophosphorus compound. Abbreviated t-BuCP, it was the first example of an isolable phosphaalkyne. Prior to its synthesis, the double bond rule had suggested that elements of Period 3 and higher were unable to form double or triple bonds with lighter main group elements because of weak orbital overlap. The synthesis of t-BuCP discredited much of the double bond rule and opened new studies into the formation of unsaturated phosphorus compounds. Synthesis and reactions The synthesis of t-BuCP entails the reaction of pivaloyl chloride and P(SiMe3)3. The reaction proceeds via the intermediacy of a bis(trimethylsilyl)pivaloylphosphine, which undergoes a 1,3-silyl shift to form the E- or Z-phosphaalkene isomers. Carrying out the phosphaalkene reaction in diglyme at 20 °C in the presence of catalytic amounts of solid NaOH forms the final t-BuCP product. Me3CC(O)Cl + P(SiMe3)3 → Me3CC(O)P(SiMe3)2 + Me3SiCl Me3CC(O)P(SiMe3)2 → Me3CCP + O(SiMe3)2 Other phosphaalkynes Phosphaalkynes possessing a C≡P unit bonded to bulky aryl groups are also known; e.g., Mes*C≡P and P≡C(Tript)C≡P possess C≡P bond lengths of 1.516 and 1.532 Å, respectively. While t-BuCP possesses a carbon-phosphorus bond length of 1.536 Å and a first ionization potential (π MO) of 9.70 eV, H-C≡P possesses a C≡P bond length of 1.5421 Å and a first ionization potential (π MO) of 10.79 eV. These physical properties produce characteristic reactivity differences between the two species: tert-butylphosphaacetylene is a stable volatile liquid (b.p. 61 °C), whereas phosphaacetylene readily reacts to form elemental phosphorus. It has been proposed that isophosphaalkynes (R-P≡C) are produced as intermediates during the syntheses of phosphaalkynes. Such isomeric species have never been isolated. Reactions With their characteristic C-P triple bonds, the phosphorus atoms of phosphaalkynes such as tert-butylphosphaacetylene exhibit reactivities similar to those of nitriles, despite the significant difference between the radii of P (1.09 Å) and N (0.71 Å). At temperatures above 130 °C, the phosphaalkyne undergoes cyclotetramerization. To some extent its reactivity more closely resembles the reactions of alkynes. tert-Butylphosphaacetylene can bind to metals via various coordination modes to give inorganic and organometallic complexes. These complexes utilize either the triple bond or the nonbonding electrons on P. The higher electronegativity of carbon (2.5) over phosphorus (2.2) leads to polarized Cδ−≡Pδ+ bonds, which directs protonation to the carbon center. Its variety of coordination geometries enables tert-butylphosphaacetylene to participate in several types of reactions, including 1,2-additions of halogenated compounds. Organolithium compounds and enophiles can also react with C-P triple bonds, along with [2+1], [2+2], [2+3], and [2+4] cycloadditions. tert-Butylphosphaacetylene also undergoes a homo Diels-Alder cycloaddition reaction. References Organophosphorus compounds Tert-butyl compounds
Tert-Butylphosphaacetylene
Chemistry
857
14,225,098
https://en.wikipedia.org/wiki/Pecten%20oculi
The pecten or pecten oculi is a comb-like structure of blood vessels belonging to the choroid in the eye of a bird. It is a non-sensory, pigmented structure that projects into the vitreous humor from the point where the optic nerve enters the eyeball. The pecten is believed to both nourish the retina and control the pH of the vitreous body. High levels of alkaline phosphatase activity in the pecten oculi have been linked to the transport of nutrient molecules from the highly vascularized choroid into the vitreous and retinal cells, thus nourishing the eye. It is present in all birds and some reptiles. In the vertebrate eye, blood vessels lie in front of the retina, partially obscuring the image. The pecten helps to solve this problem by greatly reducing the number of blood vessels in the retina, contributing to the extremely sharp eyesight of birds such as hawks. The pigmentation of the pecten is believed to protect the blood vessels against damage from ultraviolet light. Absorption of stray light by the melanin granules of the pecten oculi is also thought to produce small increases in the temperature of the pecten and the eye; this may raise the metabolic rate and so optimize eye physiology at the low temperatures encountered during high-altitude flight. The structure varies across bird species and is conical in the kiwi, vaned in the ostrich and pleated in most other birds. See also Conus papillaris, a similar structure found in reptiles References Birds
Pecten oculi
Biology
335
12,171,649
https://en.wikipedia.org/wiki/Cairo%20spiny%20mouse
The Cairo spiny mouse (Acomys cahirinus), also known as the common spiny mouse, Egyptian spiny mouse, or Arabian spiny mouse, is a nocturnal species of rodent in the family Muridae. It is found in Africa north of the Sahara Desert, where its natural habitats are rocky areas and hot deserts. It is omnivorous, feeding on seeds, desert plants, snails, and insects. It is a gregarious animal and lives in small family groups. It is the first and only rodent species known to exhibit spontaneous decidualization and menstruation. Description The Cairo spiny mouse grows to a head and body length of about with a tail of much the same length. Adults weigh between . The colour of the Cairo spiny mouse is sandy-brown or greyish-brown above and whitish beneath. A line of spine-like bristles runs along the ridge of the back. The snout is slender and pointed, the eyes are large, the ears are large and slightly pointed, and the tail is devoid of hairs. The spiny mouse is known to have relatively weak skin compared to Mus musculus, and to exhibit tail autotomy. Distribution and habitat The Cairo spiny mouse is native to northern Africa, with its range extending from Mauritania, Morocco, and Algeria in the west to Sudan, Ethiopia, Eritrea, and Egypt in the east, at altitudes up to about . It lives in dry stony habitats with sparse vegetation and is often found near human dwellings. It is common around cliffs and canyons and in gravelly plains with shrubby vegetation. It is not usually found in sandy habitats, but may be present among date palms. Behaviour Cairo spiny mice are social animals and live in a group with a dominant male. Breeding mostly takes place in the rainy season, between September and April, when the availability of food is greater. The gestation period is five to six weeks, which is long for a mouse, and the young are well developed when they are born. At this time, they are already covered with short fur and their eyes are open, and they soon start exploring their surroundings. The adults in the group cooperate in caring for the young, with lactating females feeding any of the group's offspring. Females may become pregnant again immediately after giving birth, and have three or four litters of up to five young in a year. The juveniles mature at two to three months of age. Cairo spiny mice live in burrows or rock crevices and are mostly terrestrial, but they can also clamber about in low bushes. They are nocturnal and omnivorous, eating anything edible they can find. Their diet includes seeds, nuts, fruit, green leaves, insects, spiders, molluscs, and carrion. When they live in the vicinity of humans, they consume crops, grain, and stored food. They sometimes enter houses, especially in winter, as they dislike cold weather. The fruit of Ochradenus baccatus (= Reseda baccata) has pleasant-tasting flesh but distasteful seeds. The Cairo spiny mouse consumes the fruits but spits the seeds out intact, and thus acts as an efficient seed dispersal agent for this plant. The Cairo spiny mouse is a host of the acanthocephalan intestinal parasite Moniliformis acomysi. Status The Cairo spiny mouse has a wide distribution and occupies diverse habitats. It is common and the population size is large, so the IUCN, in its Red List of Threatened Species, lists it as being of "Least Concern". Research interest The spiny mouse is used for research in diabetes, development, regeneration, and menstruation.
The spiny mouse is also the first known rodent species to exhibit spontaneous decidualization and menstruation, potentially making it a good candidate model for studying menstruation-related diseases. It exhibits a 9-day cycle, and is the first rodent found to have such a cycle. Gene sequencing has been underway to investigate this and other unique physiological traits displayed by this species. References External links A video demonstrating Acomys cahirinus spitting seed Acomys Fauna of the Sahara Rodents of North Africa Mammals of the Middle East Stored-product pests Mammals described in 1803 Taxonomy articles created by Polbot
Cairo spiny mouse
Biology
878
75,653,738
https://en.wikipedia.org/wiki/Hover%20%28behaviour%29
Hovering is the ability exhibited by some winged animals to remain relatively stationary in midair. Usually this involves rapid downward thrusts of the wings to generate upward lift. Sometimes hovering is maintained by flapping or soaring into a headwind; this form of hovering is called "wind hovering", "windhovering", or "kiting". True hoverers Hummingbirds Hummingbirds hover over flowers to obtain nectar, flapping their wings at up to 70 beats per second. Bats Like hummingbirds, fruit bats and nectar bats hover over flowers while feeding on fruits or nectar. Comparison between bats and hummingbirds has revealed that these animals exert similar amounts of energy relative to body weight during hovering: hummingbirds can twist their wings more easily and are more aerodynamic, but bats have bigger wings and larger strokes. Kingfishers Small kingfishers such as the belted kingfisher may hover over water before diving in to catch fish. Larger species such as the ringed kingfisher are too heavy to hover for more than a few seconds. Moths Sphinx moths Some sphinx moths (family Sphingidae) are known as hummingbird moths for their ability to hover over flowers while nectaring. Moths are relatively heavy insects and sometimes hang on to the flower with their forelegs as they hover. Clearwing moths Some clearwing moths (family Sesiidae) also hover while nectaring or even puddling. Females may also hover to inspect ovipositing sites. Hoverflies Hoverflies are flies that often hover over the plants they visit. This hovering behaviour is unlike that of hummingbirds, since they do not feed in midair. Hovering in general may be a means of finding a food source; in addition, male hovering is often a territorial display while seeking females, while female hovering serves to inspect ovipositing sites. Bee flies Bee flies are parasitoids that can dart about in the air with great agility. Males hover as a courtship display, while females hover over ovipositing sites - usually the entrance of a host insect nest - and shoot eggs into the nest using an ejecting movement of their abdomen. Species that have a long proboscis can hover over flowers while feeding, much as hummingbirds do, though these flies may touch the flower with their legs for balance while hovering. Odonata Odonata is an insect order that includes dragonflies and damselflies. They are strong fliers renowned for their acrobatic flight, including the ability to hover, usually as a short pause during their ceaseless territorial patrols. Dragonflies In addition to short hovers while cruising, female dragonflies may hover over the water before or during oviposition; males may also hover-guard their mate at this time. Damselflies Some male damselflies hover in front of females or over the oviposition site during courtship; sometimes females also hover in response. After mating, males may hover-guard their mate by either circling over her or by hovering while attached to her in tandem. Males hover-guarding in tandem do not need wings at all to remain suspended in the air; they are held aloft by clasping their mate with their abdomen, and can maintain their position even when the head and thorax are removed by predators. Hymenoptera Bees Many bee species, such as bumblebees, hover momentarily as they approach flowers to feed. Males of some species, including carpenter bees and carder bees, also hover while patrolling their territories. Wasps Among the social wasps, Stenogastrinae are known as hover wasps due to their distinctive hovering flight.
Males often hover to display the banding patterns on their abdomen as a territorial display. Among the solitary wasps, parasitoid species such as scoliid wasps exhibit hovering behaviour while hunting for prey to feed their larvae. Males of some parasitoids may hover briefly while they patrol their territories, seeking females and chasing away rivals. Wind hoverers Raptors Many birds of prey, such as kestrels, harriers, and members of the genus Buteo, can "windhover" by facing into the wind. Elanine kites also engage in windhovering; this behaviour is also called "kiting" due to the common names of this genus. Seabirds Certain seabirds can windhover by soaring or flapping into the wind; often this behaviour takes advantage of updrafts whipping off a coastal cliff. Tropicbirds can even fly backwards against a strong headwind; red-tailed tropicbird pairs use this ability to circle each other during courtship displays. Smaller seabirds such as shearwaters and storm petrels feed by hovering low over the water surface, flapping with half-open wings and paddling with their feet in a technique called "pattering" or "sea-anchoring". The waves are accompanied by a slight horizontal wind that enables the birds to soar in place while using their feet to steady themselves. References Ethology Bird behavior Insect behavior
Hover (behaviour)
Biology
1,041
48,528,337
https://en.wikipedia.org/wiki/Idah%20Sithole-Niang
Idah Sithole-Niang (born 1957) is a Zimbabwean biochemist and educator. Her main area of research has been viruses which attack the cowpea, one of the major food crops of Zimbabwe. Biography Idah Sithole was born in Hwange, Zimbabwe, on 2 October 1957. She attended the University of London on scholarship, earning a BSc in biochemistry in 1982. When she was awarded a USAID Fellowship in 1983, Sithole chose to continue her education, studying plant and virus genetics. She earned a PhD in 1988 from Michigan State University (MSU) in East Lansing, Michigan. She completed a post-doctoral fellowship at the Plant Research Laboratory at MSU, researching the genetics of photosynthesis in cyanobacteria on the first William L. Brown Fellowship, awarded by the Resources Development Foundation. Returning to Zimbabwe, she became a lecturer at the University of Zimbabwe in 1992, working on viruses which infect plants. Her chief area of research is the potyvirus, which attacks the cowpea, a legume that is a chief food crop of Zimbabwe. That same year, she married Sheikh Ibrahima Niang, a Senegalese professor of anthropology, whom she met at Michigan State University. They have a commuting marriage, as he works at the University of Cheikh Anta Diop in Dakar, Senegal. She was awarded a Rockefeller Foundation Fellowship for careers in biotechnology between 1992 and 1995. She has continued teaching and researching, publishing numerous papers. In 2006 she was made Associate Professor at the University of Zimbabwe. Sithole-Niang is in favor of genetic modifications to make cowpeas resistant to disease and believes that GMO versions of traditional crops are beneficial to developing nations. Working with the Network for the Genetic Improvement of Cowpea for Africa (NGICA) as a coordinator, she has consulted with other international experts, in part because of the lack of funding available from within Zimbabwe. She has served as a member or board member of numerous organizations, including: Steering Committee Trustee of the African Women in Agricultural Research and Development (AWARD) Programme, the American Association for the Advancement of Science, the American Society for Virology, the Oversight Committee of the Improved Maize for African Soils (IMAS) project, Selection Committee Chair of the Joshua Nkomo Scholarships, the New York Academy of Sciences, Vice Chair of the Research Council of Zimbabwe, the Selection Committee for the Rhodes Scholarships in Zimbabwe, and the Zimbabwe Academy of Sciences. Sithole-Niang is a Technical Advisor to the Program for Biosafety Systems for sub-Saharan Africa. Selected works References Bibliography External links WorldCat Publications 1957 births Living people Women biochemists Zimbabwean women scientists Zimbabwean biologists 21st-century women scientists 20th-century women scientists Fellows of the Zimbabwe Academy of Sciences
Idah Sithole-Niang
Chemistry
567
36,688,650
https://en.wikipedia.org/wiki/Moisture%20expansion
Moisture expansion is the tendency of matter to change its volume in response to a change in moisture content. The macroscopic effect is similar to that of thermal expansion but the microscopic causes are very different. Moisture expansion is caused by hygroscopy. Matter
Moisture expansion
Physics
54
11,763,579
https://en.wikipedia.org/wiki/Microvesicle
Microvesicles (ectosomes, or microparticles) are a type of extracellular vesicle (EV) that is released from the cell membrane. In multicellular organisms, microvesicles and other EVs are found both in tissues (in the interstitial space between cells) and in many types of body fluids. Delimited by a phospholipid bilayer, microvesicles can be as small as the smallest EVs (30 nm in diameter) or as large as 1000 nm. They are considered to be larger, on average, than intracellularly generated EVs known as exosomes. Microvesicles play a role in intercellular communication and can transport molecules such as mRNA, miRNA, and proteins between cells. Though initially dismissed as cellular debris, microvesicles may reflect the antigenic content of the cell of origin and have a role in cell signaling. Like other EVs, they have been implicated in numerous physiologic processes, including anti-tumor effects, tumor immune suppression, metastasis, tumor-stroma interactions, angiogenesis, and tissue regeneration. Microvesicles may also remove misfolded proteins, cytotoxic agents and metabolic waste from the cell. Changes in microvesicle levels may indicate diseases, including cancer. Formation and contents Different cells can release microvesicles from the plasma membrane. Sources of microvesicles include megakaryocytes, blood platelets, monocytes, neutrophils, tumor cells and the placenta. Platelets play an important role in maintaining hemostasis: they promote thrombus growth, and thus they prevent loss of blood. Moreover, they enhance the immune response, since they express the molecule CD154 (CD40L). Platelets are activated by inflammation, infection, or injury, and after their activation microvesicles containing CD154 are released from platelets. CD154 is a crucial molecule in the development of the T cell-dependent humoral immune response. CD154 knockout mice are incapable of producing IgG, IgE, or IgA in response to antigens. Microvesicles can also transfer prions and the molecules CD41 and CXCR4. Endothelial microparticles Endothelial microparticles are small vesicles that are released from endothelial cells and can be found circulating in the blood. The microparticle consists of a plasma membrane surrounding a small amount of cytosol. The membrane of the endothelial microparticle contains receptors and other cell surface molecules which enable the identification of the endothelial origin of the microparticle, and allow it to be distinguished from microparticles from other cells, such as platelets. Although circulating endothelial microparticles can be found in the blood of normal individuals, increased numbers of circulating endothelial microparticles have been identified in individuals with certain diseases, including hypertension, cardiovascular disorders, pre-eclampsia, and various forms of vasculitis. The endothelial microparticles in some of these disease states have been shown to have arrays of cell surface molecules reflecting a state of endothelial dysfunction. Therefore, endothelial microparticles may be useful as an indicator or index of the functional state of the endothelium in disease, and may potentially play key roles in the pathogenesis of certain diseases, including rheumatoid arthritis. Endothelial microparticles have been found to prevent apoptosis in recipient cells by inhibiting the p38 pathway via mitogen-activated protein kinase phosphatase (MKP)-1, which inactivates p38. Uptake of endothelial microparticles is annexin I/phosphatidylserine receptor dependent.
Microparticles are derived from many other cell types. Process of formation Microvesicles and exosomes are formed and released by two slightly different mechanisms. These processes result in the release of intercellular signaling vesicles. Microvesicles are small, plasma membrane-derived particles that are released into the extracellular environment by the outward budding and fission of the plasma membrane. This budding process involves multiple signaling pathways, including the elevation of intracellular calcium and reorganization of the cell's structural scaffolding. The formation and release of microvesicles involve contractile machinery that draws opposing membranes together before pinching off the membrane connection and launching the vesicle into the extracellular space. Microvesicle budding takes place at unique locations on the cell membrane that are enriched with specific lipids and proteins reflecting their cellular origin. At these locations, proteins, lipids, and nucleic acids are selectively incorporated into microvesicles and released into the surrounding environment. Exosomes are membrane-covered vesicles that are formed intracellularly and are considered to be smaller than 100 nm. In contrast to microvesicles, which are formed through a process of membrane budding, or exocytosis, exosomes are initially formed by endocytosis. Exosomes are formed by invagination within a cell to create an intracellular vesicle called an endosome, or an endocytic vesicle. In general, exosomes are formed by segregating the cargo (e.g., lipids, proteins, and nucleic acids) within the endosome. Once formed, the endosome combines with a structure known as a multivesicular body (MVB). The MVB containing segregated endosomes ultimately fuses with the plasma membrane, resulting in exocytosis of the exosomes. Once formed, both microvesicles and exosomes (collectively called extracellular vesicles) circulate in the extracellular space near the site of release, where they can be taken up by other cells or gradually deteriorate. In addition, some vesicles migrate significant distances by diffusion, ultimately appearing in biological fluids such as cerebrospinal fluid, blood, and urine. Mechanism of shedding There are three mechanisms which lead to the release of vesicles into the extracellular space. The first of these mechanisms is exocytosis from multivesicular bodies and the formation of exosomes. Another mechanism is the budding of microvesicles directly from the plasma membrane. The last one is cell death leading to apoptotic blebbing. These are all energy-requiring processes. Under physiologic conditions, the plasma membrane of cells has an asymmetric distribution of phospholipids. The aminophospholipids phosphatidylserine and phosphatidylethanolamine are specifically sequestered in the inner leaflet of the membrane. The transbilayer lipid distribution is under the control of three phospholipidic pumps: an inward-directed pump, or flippase; an outward-directed pump, or floppase; and a lipid scramblase, responsible for non-specific redistribution of lipids across the membrane. After cell stimulation, including apoptosis, a subsequent cytosolic Ca2+ increase promotes the loss of phospholipid asymmetry of the plasma membrane, subsequent phosphatidylserine exposure, and a transient phospholipidic imbalance favoring the external leaflet at the expense of the inner leaflet, leading to budding of the plasma membrane and microvesicle release.
Molecular contents The lipid and protein content of microvesicles has been analyzed using various biochemical techniques. Microvesicles display a spectrum of molecules enclosed within the vesicles and carried on their plasma membranes. Both the membrane molecular pattern and the internal contents of the vesicle depend on the cellular origin and the molecular processes triggering their formation. Because microvesicles are not intact cells, they do not contain mitochondria, Golgi, endoplasmic reticulum, or a nucleus with its associated DNA. Microvesicle membranes consist mainly of membrane lipids and membrane proteins. Regardless of their cell type of origin, nearly all microvesicles contain proteins involved in membrane transport and fusion. They are surrounded by a phospholipid bilayer composed of several different lipid molecules. The protein content of each microvesicle reflects the origin of the cell from which it was released. For example, those released from antigen-presenting cells (APCs), such as B cells and dendritic cells, are enriched in proteins necessary for adaptive immunity, while microvesicles released from tumors contain proapoptotic molecules and oncogenic receptors (e.g. EGFR). In addition to the proteins specific to the cell type of origin, some proteins are common to most microvesicles. For example, nearly all contain the cytoplasmic proteins tubulin, actin and actin-binding proteins, as well as many proteins involved in signal transduction, cell structure and motility, and transcription. Most microvesicles contain the so-called "heat-shock proteins" hsp70 and hsp90, which can facilitate interactions with cells of the immune system. Finally, tetraspanin proteins, including CD9, CD37, CD63 and CD81, are one of the most abundant protein families found in microvesicle membranes. Many of these proteins may be involved in the sorting and selection of specific cargos to be loaded into the lumen of the microvesicle or its membrane. Other than lipids and proteins, microvesicles are enriched with nucleic acids (e.g., messenger RNA (mRNA) and microRNA (miRNA)). The identification of RNA molecules in microvesicles supports the hypothesis that they are a biological vehicle for the transfer of nucleic acids and can subsequently modulate the target cell's protein synthesis. Messenger RNA transported from one cell to another through microvesicles can be translated into proteins, conferring new functions on the target cell. The discovery that microvesicles may shuttle specific mRNA and miRNA suggests that this may be a new mechanism of genetic exchange between cells. Exosomes produced by cells exposed to oxidative stress can mediate protective signals, reducing oxidative stress in recipient cells, a process which is proposed to depend on exosomal RNA transfer. These RNAs are specifically targeted to microvesicles, in some cases containing detectable levels of RNA that are not found in significant amounts in the donor cell. Because the specific proteins, mRNAs, and miRNAs in microvesicles are highly variable, it is likely that these molecules are specifically packaged into vesicles using an active sorting mechanism. At this point, it is unclear exactly which mechanisms are involved in packaging soluble proteins and nucleic acids into microvesicles. Role on target cells Once released from their cell of origin, microvesicles interact specifically with cells they recognize by binding to cell-type-specific, membrane-bound receptors.
Because microvesicles contain a variety of surface molecules, they provide a mechanism for engaging different cell receptors and exchanging material between cells. This interaction ultimately leads to fusion with the target cell and release of the vesicles' components, thereby transferring bioactive molecules, lipids, genetic material, and proteins. The transfer of microvesicle components includes specific mRNAs and proteins, contributing to the proteomic properties of target cells. Microvesicles can also transfer miRNAs that are known to regulate gene expression by altering mRNA turnover. Mechanisms of signaling Degradation In some cases, the degradation of microvesicles is necessary for the release of signaling molecules. During microvesicle production, the cell can concentrate and sort the signaling molecules, which are released into the extracellular space upon microvesicle degradation. Microvesicles derived from dendritic cells, macrophages, and microglia contain proinflammatory cytokines, and neurons and endothelial cells release growth factors using this mechanism of release. Fusion Proteins on the surface of the microvesicle will interact with specific molecules, such as integrins, on the surface of its target cell. Upon binding, the microvesicle can fuse with the plasma membrane. This results in the delivery of nucleotides and soluble proteins into the cytosol of the target cell, as well as the integration of lipids and membrane proteins into its plasma membrane. Internalization Microvesicles can be endocytosed upon binding to their targets, allowing for additional steps of regulation by the target cell. The microvesicle may fuse, integrating lipids and membrane proteins into the endosome while releasing its contents into the cytoplasm. Alternatively, the endosome may mature into a lysosome, causing the degradation of the microvesicle and its contents, in which case the signal is ignored. Transcytosis After internalization of a microvesicle via endocytosis, the endosome may move across the cell and fuse with the plasma membrane, a process called transcytosis. This results in the ejection of the microvesicle back into the extracellular space or may result in the transportation of the microvesicle into a neighboring cell. This mechanism might explain the ability of microvesicles to cross biological barriers, such as the blood-brain barrier, by moving from cell to cell. Contact-dependent signaling In this form of signaling, the microvesicle does not fuse with the plasma membrane, nor is it engulfed by the target cell. Similar to the other mechanisms of signaling, the microvesicle has molecules on its surface that will interact specifically with its target cell. There are additional surface molecules, however, that can interact with receptor molecules engaging various signaling pathways. This mechanism of action can be used in processes such as antigen presentation, where MHC molecules on the surface of the microvesicle can stimulate an immune response. Alternatively, there may be molecules on microvesicle surfaces that can recruit other proteins to form extracellular protein complexes that may be involved in signaling to the target cell. Relevance in disease Cancer Promoting aggressive tumor phenotypes The oncogenic receptor EGFRvIII, which is located in a specific type of aggressive glioma tumor, can be transferred to a non-aggressive population of tumor cells via microvesicles.
After the oncogenic protein is transferred, the recipient cells become transformed and show characteristic changes in the expression levels of target genes. It is possible that the transfer of other mutant oncogenes, such as HER2, may be a general mechanism by which malignant cells cause cancer growth at distant sites. Microvesicles from non-cancer cells can also signal to cancer cells to become more aggressive: upon exposure to microvesicles from tumor-associated macrophages, breast cancer cells become more invasive in vitro. Promoting angiogenesis Angiogenesis, which is essential for tumor survival and growth, occurs when endothelial cells proliferate to create a matrix of blood vessels that infiltrate the tumor, supplying the nutrients and oxygen necessary for tumor growth. A number of reports have demonstrated that tumor-associated microvesicles release proangiogenic factors that promote endothelial cell proliferation, angiogenesis, and tumor growth. Microvesicles shed by tumor cells and taken up by endothelial cells also facilitate angiogenic effects by transferring specific mRNAs and miRNAs. Involvement in multidrug resistance When anticancer drugs such as doxorubicin accumulate in microvesicles, the drug's cellular levels decrease. This can ultimately contribute to the process of drug resistance. Similar processes have been demonstrated in microvesicles released from cisplatin-insensitive cancer cells: vesicles from these tumors contained nearly three times more cisplatin than those released from cisplatin-sensitive cells. Tumor cells can thus accumulate drugs in microvesicles; subsequently, the drug-containing microvesicles are released from the cell into the extracellular environment, thereby mediating resistance to chemotherapeutic agents and resulting in significantly increased tumor growth, survival, and metastasis. Interference with antitumor immunity Microvesicles from various tumor types can express specific cell-surface molecules (e.g. FasL or CD95) that induce T-cell apoptosis and reduce the effectiveness of other immune cells. Microvesicles released from lymphoblastoma cells express the immune-suppressing protein latent membrane protein-1 (LMP1), which inhibits T-cell proliferation and prevents the removal of circulating tumor cells (CTCs). As a consequence, tumor cells can turn off T-cell responses or eliminate the antitumor immune cells altogether by releasing microvesicles. Conversely, the combined use of microvesicles and 5-FU resulted in greater chemosensitivity of squamous cell carcinoma cells than the use of either 5-FU or microvesicles alone. Impact on tumor metastasis Degradation of the extracellular matrix is a critical step in promoting tumor growth and metastasis. Tumor-derived microvesicles often carry protein-degrading enzymes, including matrix metalloproteinase 2 (MMP-2), MMP-9, and urokinase-type plasminogen activator (uPA). By releasing these proteases, tumor cells can degrade the extracellular matrix and invade surrounding tissues. Likewise, inhibiting MMP-2, MMP-9, and uPA prevents microvesicles from facilitating tumor metastasis. Matrix digestion can also facilitate angiogenesis, which is important for tumor growth and is induced by the horizontal transfer of RNAs from microvesicles. Cellular origin of microvesicles The release of microvesicles has been shown from endothelial cells, vascular smooth muscle cells, platelets, white blood cells (e.g. leukocytes and lymphocytes), and red blood cells.
Although some of these microvesicle populations occur in the blood of healthy individuals as well as patients, there are obvious changes in the number, cellular origin, and composition of microvesicles in various disease states. It has become clear that microvesicles play important roles in regulating the cellular processes that lead to disease pathogenesis. Moreover, because microvesicles are released following apoptosis or cell activation, they have the potential to induce or amplify disease processes. Some of the inflammatory and pathological conditions in which microvesicles are involved include cardiovascular disease, hypertension, neurodegenerative disorders, diabetes, and rheumatic diseases. Cardiovascular disease Microvesicles are involved in the initiation and progression of cardiovascular disease. Microparticles derived from monocytes aggravate atherosclerosis by modulating inflammatory cells. Additionally, microvesicles can induce clotting by binding to clotting factors or by inducing the expression of clotting factors in other cells. Circulating microvesicles isolated from cardiac surgery patients were found to be thrombogenic both in in vitro assays and in rats. Microvesicles isolated from healthy individuals did not have the same effects and may actually have a role in reducing clotting. Tissue factor, an initiator of coagulation, is found at high levels within microvesicles, indicating their role in clotting. Renal mesangial cells exposed to high-glucose media release microvesicles containing tissue factor, which have an angiogenic effect on endothelial cells. Inflammation Microvesicles contain cytokines that can induce inflammation via numerous different pathways. The affected cells will then release more microvesicles, which have an additive effect. This can recruit neutrophils and leukocytes to the area, resulting in the aggregation of cells. However, microvesicles also seem to be part of a normal physiological response to disease, as microvesicle levels increase in the presence of pathology. Neurological disorders Microvesicles seem to be involved in a number of neurological diseases. Since they are involved in numerous vascular diseases and in inflammation, strokes and multiple sclerosis are among the diseases in which microvesicles seem to be involved. Circulating microvesicles seem to have an increased level of phosphorylated tau proteins during early-stage Alzheimer's disease. Similarly, increased levels of CD133 are an indicator of epilepsy. Clinical applications Detection of cancer Tumor-associated microvesicles are abundant in the blood, urine, and other body fluids of patients with cancer, and are likely involved in tumor progression. They offer a unique opportunity to noninvasively access the wealth of biological information related to their cells of origin. The quantity and molecular composition of microvesicles released from malignant cells varies considerably compared with those released from normal cells. Thus, the concentration of plasma microvesicles with molecular markers indicative of the disease state may be used as an informative blood-based biosignature for cancer. Microvesicles express many membrane-bound proteins, some of which can be used as tumor biomarkers. Several tumor markers accessible as proteins in blood or urine have been used to screen for and diagnose various types of cancer. In general, tumor markers are produced either by the tumor itself or by the body in response to the presence of cancer or certain inflammatory conditions.
If a tumor marker level is higher than normal, the patient is examined more closely to look for cancer or other conditions. For example, CA19-9, CA-125, and CEA have been used to help diagnose pancreatic, ovarian, and gastrointestinal malignancies, respectively. However, although they have proven clinical utility, none of these tumor markers is highly sensitive or specific. Clinical research data suggest that tumor-specific markers exposed on microvesicles are useful as a clinical tool to diagnose and monitor disease. Research is also ongoing to determine whether tumor-specific markers exposed on microvesicles are predictive of therapeutic response. Evidence produced by independent research groups has demonstrated that microvesicles from the cells of healthy tissues, or selected miRNAs from these microvesicles, can be employed to reverse many tumors in pre-clinical cancer models, and may be used in combination with chemotherapy. Conversely, microvesicles derived from a tumor cell are involved in the transport of cancer proteins and in delivering microRNA to the surrounding healthy tissue. This leads to a change in the healthy cells' phenotype and creates a tumor-friendly environment. Microvesicles play an important role in tumor angiogenesis and in the degradation of the matrix due to the presence of metalloproteases, which facilitate metastasis. They are also involved in intensifying the function of regulatory T-lymphocytes and in inducing apoptosis of cytotoxic T-lymphocytes, because microvesicles released from a tumor cell contain Fas ligand and TRAIL. They prevent the differentiation of monocytes to dendritic cells. Tumor microvesicles also carry tumor antigens, so they can be an instrument for developing tumor vaccines. Circulating miRNA and segments of DNA in all body fluids can be potential markers for tumor diagnostics. Microvesicles and rheumatoid arthritis Rheumatoid arthritis is a chronic systemic autoimmune disease characterized by inflammation of the joints. In the early stage, there are abundant Th17 cells producing the proinflammatory cytokines IL-17A, IL-17F, TNF, IL-21, and IL-22 in the synovial fluid, and regulatory T-lymphocytes have a limited capability to control these cells. In the late stage, the extent of inflammation correlates with the numbers of activated macrophages that contribute to joint inflammation and bone and cartilage destruction, because they have the ability to transform themselves into osteoclasts that destroy bone tissue. The synthesis of reactive oxygen species, proteases, and prostaglandins by neutrophils is increased. Activation of platelets via the collagen receptor GPVI stimulates the release of microvesicles from platelet cytoplasmic membranes. These microparticles are detectable at a high level in synovial fluid, and they promote joint inflammation by transporting the proinflammatory cytokine IL-1. Biological markers for disease In addition to detecting cancer, it is possible to use microvesicles as biological markers to give prognoses for various diseases. Many types of neurological diseases are associated with increased levels of specific types of circulating microvesicles. For example, elevated levels of phosphorylated tau proteins can be used to diagnose patients in the early stages of Alzheimer's disease. Additionally, it is possible to detect increased levels of CD133 in microvesicles of patients with epilepsy. Mechanism for drug delivery Circulating microvesicles may be useful for the delivery of drugs to very specific targets.
Using electroporation or centrifugation to load drugs into microvesicles that target specific cells, it is possible to deliver the drug very efficiently. This targeting can help by reducing the necessary doses as well as preventing off-target side effects. Microvesicles can target anti-inflammatory drugs to specific tissues. Additionally, circulating microvesicles can bypass the blood–brain barrier and deliver their cargo to neurons while not having an effect on muscle cells. The blood–brain barrier is typically a difficult obstacle to overcome when designing drugs, and microvesicles may be a means of overcoming it. Current research is looking into efficiently creating microvesicles synthetically, or isolating them from patient or engineered cell lines. Microvesicles used in therapeutic genome editing approaches are sometimes called "gesicles", especially if used to package and deliver the Cas9 RNP complex. See also International Society for Extracellular Vesicles Journal of Extracellular Vesicles Exocytosis Membrane vesicle trafficking References Further reading External links Vesiclepedia—A database of molecules identified in extracellular vesicles ExoCarta—A database of molecules identified in exosomes International Society for Extracellular Vesicles Resource on the detection of circulating microvesicles Cell biology Vesicles Medical diagnosis Nanotechnology
Microvesicle
Materials_science,Engineering,Biology
5,340
52,728,470
https://en.wikipedia.org/wiki/Zernike%20Institute%20for%20Advanced%20Materials
The Zernike Institute for Advanced Materials is the department of nanoscience and materials science of the University of Groningen in the Netherlands. The institute is named after the Dutch Nobel Prize winner Frits Zernike, famous for his development of phase contrast microscopy. The research of the Zernike Institute is focused on curiosity-driven studies of functional materials, and involves researchers from the fields of physics, chemistry and biology. The aim is to understand how functional materials work at the atomic and molecular level. The Zernike Institute for Advanced Materials is involved in the whole string of research, from design through synthesis, device building, characterisation, and investigation of the theoretical foundations, with feedback to the design process. The institute is responsible for the education of numerous master's and PhD students, among others through the top master nanoscience programme. External links Zernike Institute for Advanced Materials Top master nanoscience programme Research institutes in the Netherlands Nanotechnology institutions University of Groningen
Zernike Institute for Advanced Materials
Materials_science
195
4,845,988
https://en.wikipedia.org/wiki/T-slot%20structural%20framing
T-slot structural framing is a framing system consisting of lengths of square or rectangular extruded aluminium, typically 6105-T5 aluminium alloy, with a T-slot down the centerline of one or more sides. It is also known under several generic names, such as aluminium extrusion, aluminium profile, and 2020 extrusion (if the cross-section is 20 x 20 mm), alongside brand names such as 80/20 framing. While the precise history of the T-slot framing system is not known, advancements in extrusion press technology in the early 1950s allowed for economic production of aluminium profiles, and examples of use can be found from the early 1960s. Although no published standard defines the system, it is produced in a series of conventional sizes, which allows for compatibility between manufacturers. There is a variation on T-slot profiles known as V-slot rails, in which V-slot wheels ride in the V-shaped channels of the framing to provide linear motion in a 3D printer or other CNC machine. Profiles T-slot framing is divided into metric and fractional (imperial) categories. The T-slot is always centered along the long axis of the piece. Pieces are available in each series with a square cross-section. Rectangular cross-sections are also available which measure x by 2x (where x is the defined width), e.g. 40 mm by 80 mm for the 40 series. See also T-slot nut Strut channel References Structural system
T-slot structural framing
Technology,Engineering
299
540,772
https://en.wikipedia.org/wiki/RC%20time%20constant
The RC time constant, denoted τ (lowercase tau), the time constant (in seconds) of a resistor–capacitor circuit (RC circuit), is equal to the product of the circuit resistance (in ohms) and the circuit capacitance (in farads): τ = RC. It is the time required to charge the capacitor, through the resistor, from an initial charge voltage of zero to approximately 63.2% of the value of an applied DC voltage, or to discharge the capacitor through the same resistor to approximately 36.8% of its initial charge voltage. These values are derived from the mathematical constant e, where 63.2% ≈ 1 − e^(−1) and 36.8% ≈ e^(−1). The following formulae use it, assuming a constant voltage V0 applied across the capacitor and resistor in series, to determine the voltage across the capacitor against time: Charging toward applied voltage (initially zero voltage across capacitor, constant V0 across resistor and capacitor together): V(t) = V0 (1 − e^(−t/τ)). Discharging toward zero from initial voltage (initially V0 across capacitor, constant zero voltage across resistor and capacitor together): V(t) = V0 e^(−t/τ). Cutoff frequency The time constant τ is related to the RC circuit's cutoff frequency fc by τ = RC = 1/(2π fc) or, equivalently, fc = 1/(2π RC), where resistance in ohms and capacitance in farads yields the time constant in seconds or the cutoff frequency in hertz (Hz). The cutoff frequency expressed as an angular frequency, ωc = 2π fc, is simply the reciprocal of the time constant: ωc = 1/τ. Short conditional equations using the value 10^6/(2π) ≈ 159155: fc in Hz = 159155 / (τ in μs); τ in μs = 159155 / (fc in Hz). Other useful equations are: rise time (20% to 80%): t = τ ln 4 ≈ 1.4 τ; rise time (10% to 90%): t = τ ln 9 ≈ 2.2 τ. In more complicated circuits consisting of more than one resistor and/or capacitor, the open-circuit time constant method provides a way of approximating the cutoff frequency by computing a sum of several RC time constants. Delay The signal delay of a wire or other circuit, measured as group delay or phase delay or the effective propagation delay of a digital transition, may be dominated by resistive-capacitive effects, depending on the distance and other parameters, or may alternatively be dominated by inductive, wave, and speed-of-light effects in other realms. Resistive-capacitive delay, or RC delay, hinders the further increasing of speed in microelectronic integrated circuits. When the feature size becomes smaller and smaller to increase the clock speed, the RC delay plays an increasingly important role. This delay can be reduced by replacing the aluminum conducting wire with copper, thus reducing the resistance; it can also be reduced by changing the interlayer dielectric (typically silicon dioxide) to low-dielectric-constant materials, thus reducing the capacitance. The typical digital propagation delay of a resistive wire is about half of R times C; since both R and C are proportional to wire length, the delay scales as the square of wire length. Charge spreads by diffusion in such a wire, as explained by Lord Kelvin in the mid nineteenth century. Until Heaviside discovered that Maxwell's equations imply wave propagation when sufficient inductance is in the circuit, this square diffusion relationship was thought to provide a fundamental limit to the improvement of long-distance telegraph cables. That old analysis was superseded in the telegraph domain, but remains relevant for long on-chip interconnects.
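A quick numeric check of these relations is easy with the standard library. The sketch below uses example component values of our choosing (R = 10 kΩ, C = 100 nF; they are illustrative, not taken from the article) to compute the time constant, the cutoff frequency, points on the charging curve, and the 10%–90% rise time.

```python
import math

R = 10e3    # resistance in ohms (illustrative value)
C = 100e-9  # capacitance in farads (illustrative value)

tau = R * C                    # time constant: tau = RC = 1 ms here
fc = 1 / (2 * math.pi * tau)   # cutoff frequency: fc = 1/(2*pi*RC) ~ 159.2 Hz
print(f"tau = {tau*1e6:.0f} us, fc = {fc:.1f} Hz")

# Charging toward an applied voltage V0 (capacitor initially discharged):
# V(t) = V0 * (1 - exp(-t/tau)) -> 63.2%, 86.5%, 99.3% of V0 at 1, 2, 5 tau.
V0 = 5.0
for n in (1, 2, 5):
    v = V0 * (1 - math.exp(-n))
    print(f"t = {n} tau: V = {v:.3f} V ({100*v/V0:.1f}% of V0)")

# 10%-90% rise time: t_r = tau * ln 9 ~ 2.197 tau
print(f"rise time (10-90%) = {tau * math.log(9) * 1e3:.2f} ms")
```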
See also Cutoff frequency and frequency response Emphasis, preemphasis, deemphasis Exponential decay Filter (signal processing) and transfer function High-pass filter, low-pass filter, band-pass filter RL circuit, and RLC circuit Rise time References External links RC Time Constant Calculator Conversion time constant to cutoff frequency fc and back RC time constant Analog circuits Time
RC time constant
Physics,Mathematics,Engineering
778
698,223
https://en.wikipedia.org/wiki/Phycoerythrin
Phycoerythrin (PE) is a red protein-pigment complex from the light-harvesting phycobiliprotein family, present in cyanobacteria, red algae and cryptophytes, accessory to the main chlorophyll pigments responsible for photosynthesis. Its red color is due to the prosthetic group phycoerythrobilin. Like all phycobiliproteins, it is composed of a protein part covalently binding chromophores called phycobilins. In the phycoerythrin family, the best-known phycobilin is phycoerythrobilin, the typical phycoerythrin acceptor chromophore. Phycoerythrobilin is a linear tetrapyrrole molecule found in cyanobacteria, red algae, and cryptomonads. Together with other bilins such as phycocyanobilin, it serves as a light-harvesting pigment in the photosynthetic light-harvesting structures of cyanobacteria called phycobilisomes. Phycoerythrins are composed of (αβ) monomers, usually organised in a disk-shaped trimer (αβ)3 or hexamer (αβ)6 (the latter being the functional unit of the antenna rods). These typical complexes also contain a third type of subunit, the γ chain. Phycobilisomes Phycobiliproteins are part of huge light-harvesting antenna protein complexes called phycobilisomes. In red algae they are anchored to the stromal side of the thylakoid membranes of chloroplasts, whereas in cryptophytes the phycobilisomes are reduced and the phycobiliproteins (phycobiliprotein 545, PE545, in this case) are densely packed inside the lumen of the thylakoids. Phycobiliproteins have many practical applications: properties such as their hepatoprotective, antioxidant, anti-inflammatory and anti-aging activity enable their use in the food, cosmetics, pharmaceutical and biomedical industries. PBPs have also been noted to show beneficial effects in the treatment of some diseases, such as Alzheimer's disease and cancer. Phycoerythrin is an accessory pigment to the main chlorophyll pigments responsible for photosynthesis. The light energy is captured by phycoerythrin and is then passed on to the reaction centre chlorophyll pair, most of the time via the phycobiliproteins phycocyanin and allophycocyanin. Structural characteristics Phycoerythrins, except phycoerythrin 545 (PE545), are composed of (αβ) monomers assembled into disc-shaped (αβ)6 hexamers or (αβ)3 trimers with 32 or 3 symmetry, enclosing a central channel. In phycobilisomes (PBS) each trimer or hexamer contains at least one linker protein located in the central channel. B-phycoerythrin (B-PE) and R-phycoerythrin (R-PE) from red algae, in addition to α and β chains, have a third, γ subunit contributing both linker and light-harvesting functions, because it bears chromophores. R-phycoerythrin is predominantly produced by red algae. The protein is made up of at least three different subunits and varies according to the species of algae that produces it. The subunit structure of the most common R-PE is (αβ)6γ. The α subunit has two phycoerythrobilins (PEB), the β subunit has 2 or 3 PEBs and one phycourobilin (PUB), while the different gamma subunits are reported to have 3 PEB and 2 PUB (γ1) or 1 or 2 PEB and 1 PUB (γ2). The molecular weight of R-PE is 250,000 daltons. Crystal structures available in the Protein Data Bank contain one (αβ)2 or (αβγ)2 asymmetric unit of different phycoerythrins: The assumed biological molecule of phycoerythrin 545 (PE545) is (αβ)2, or rather α2α3β2. The numbers 2 and 3 after the α letters in the second formula are part of the chain names here, not their counts. The synonymous cryptophytan name of the α3 chain is the α1 chain.
The largest assembly of B-phycoerythrin (B-PE) is the (αβ)3 trimer. However, preparations from red algae also yield the (αβ)6 hexamer. In the case of R-phycoerythrin (R-PE), the largest assumed biological molecule is (αβγ)6 or (αβ)6, depending on the publication; for other phycoerythrin types it is (αβ)6. The γ chains in the Protein Data Bank entries are very small, consisting of only three or six recognizable amino acids, whereas the linker γ chain described at the beginning of this section is large (for example, the 277-amino-acid, 33 kDa γ33 from the red alga Aglaothamnion neglectum). This is because the electron density of the gamma-polypeptide is mostly averaged out by its threefold crystallographic symmetry, and only a few amino acids can be modeled. For (αβγ)6 or (αβ)6 assemblies, the values in the original table should simply be multiplied by 3; (αβ)3 assemblies contain intermediate numbers of non-protein molecules. In phycoerythrin PE545 above, one α chain (-2 or -3) binds one molecule of bilin; in other examples it binds two molecules. The β chain always binds three molecules. The small γ chain binds none. Two molecules of N-methyl asparagine are bound to the β chain, one 5-hydroxylysine to α (-3 or -2), one Mg2+ to α-3 and β, one Cl− to β, and one or two further molecules to α or β. (The original article shows a sample crystal structure of R-phycoerythrin from the Protein Data Bank.) Spectral characteristics Absorption peaks in the visible light spectrum are measured at 495 and 545/566 nm, depending on the chromophores bound and the organism considered. A strong emission peak exists at 575 ± 10 nm. (Phycoerythrin absorbs slightly blue-green/yellowish light and emits slightly orange-yellow light.) PEB and DBV bilins in PE545 absorb in the green spectral region too, with maxima at 545 and 569 nm, respectively. The fluorescence emission maximum is at 580 nm. R-Phycoerythrin variations As mentioned above, phycoerythrin can be found in a variety of algal species. As such, there can be variation in the efficiency of absorbance and emission of light required for facilitation of photosynthesis. This could be a result of the depth in the water column at which a specific alga typically resides and a consequent need for greater or lesser efficiency of the accessory pigments. With advances in imaging and detection technology which can avoid rapid photobleaching, protein fluorophores have become a viable and powerful tool for researchers in fields such as microscopy, microarray analysis and Western blotting. In light of this, it may be beneficial for researchers to screen these variable R-phycoerythrins to determine which one is most appropriate for their particular application. Even a small increase in fluorescent efficiency could reduce background noise and lower the rate of false-negative results. Practical applications R-Phycoerythrin (also known as PE or R-PE) is useful in the laboratory as a fluorescence-based indicator for the presence of cyanobacteria and for labeling antibodies, most often for flow cytometry. It is also used in microarray assays, ELISAs, and other applications that require high sensitivity but not photostability. Its use in immunofluorescence microscopy is limited due to its rapid photobleaching characteristics. There are also other types of phycoerythrins, such as B-phycoerythrin, which have slightly different spectral properties. B-Phycoerythrin absorbs strongly at about 545 nm (slightly yellowish green) and emits strongly at 572 nm (yellow) instead, and could be better suited for some instruments.
B-Phycoerythrin may also be less "sticky" than R-phycoerythrin and contributes less to background signal due to nonspecific binding in certain applications. However, R-PE is much more commonly available as an antibody conjugate. Phycoerythrin (PE) has an absorption maximum (λA max) of 540–570 nm and a fluorescence emission maximum (λF max) of 575–590 nm. R-Phycoerythrin and B-phycoerythrin are among the brightest fluorescent dyes ever identified. References External links Proteins Photosynthetic pigments Fluorescent proteins Cyanobacteria proteins Red algae
Phycoerythrin
Chemistry,Biology
1,990
21,575,036
https://en.wikipedia.org/wiki/TB10Cs1H2%20snoRNA
TB10Cs1H2 is a member of the H/ACA-like class of non-coding RNA (ncRNA) molecules that guide the conversion of uridines to pseudouridines at specific sites of substrate RNAs. It is known as a small nucleolar RNA (snoRNA), so named because of its localization in the nucleolus of the eukaryotic cell. TB10Cs1H2 is predicted to guide the pseudouridylation of the LSU5 ribosomal RNA (rRNA) at residue Ψ901. References Non-coding RNA
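H/ACA guides select their target uridine by base-pairing the substrate RNA on either side of it, leaving the U itself unpaired in a "pseudouridylation pocket". The sketch below illustrates that antisense pairing check; the guide and target sequences are invented placeholders, since the actual TB10Cs1H2 and LSU5 sequences are not given here.

```python
# Illustrative check that an H/ACA guide's pocket strands are antisense to
# the rRNA flanking the target uridine. All sequences are HYPOTHETICAL.
PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

def revcomp(rna: str) -> str:
    """Reverse complement of an RNA string (Watson-Crick pairs only)."""
    return "".join(PAIR[b] for b in reversed(rna))

target = "GGACUCAGUUAGGCAUCCGA"   # hypothetical rRNA window; U at index 8 is targeted
u_index = 8
left_flank = target[u_index - 6 : u_index]       # 6 nt 5' of the target U
right_flank = target[u_index + 1 : u_index + 7]  # 6 nt 3' of the target U

# A matching guide pocket would carry the antisense of each flank,
# leaving the target U itself unpaired in the pocket:
guide_strand_a = revcomp(left_flank)
guide_strand_b = revcomp(right_flank)
print(guide_strand_a, guide_strand_b)
assert revcomp(guide_strand_a) == left_flank  # pairing closes the pocket
```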
TB10Cs1H2 snoRNA
Chemistry
123
29,268,538
https://en.wikipedia.org/wiki/Unital%20map
In abstract algebra, a unital map on a C*-algebra is a map φ which preserves the identity element: φ(1) = 1. This condition appears often in the context of completely positive maps, especially when they represent quantum operations. If φ is completely positive, it can always be represented as φ(a) = Σi Vi a Vi*. (The Vi are the Kraus operators associated with φ.) In this case, the unital condition can be expressed as Σi Vi Vi* = 1. References C*-algebras
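As a concrete numerical illustration (an assumption-free check, though the example channel itself is chosen for this sketch and is not from the article), one can verify the unital condition for a single-qubit dephasing channel built from two Kraus operators.

```python
# Verify numerically that a single-qubit dephasing channel is unital:
# phi(a) = sum_i V_i a V_i*, with the unital condition sum_i V_i V_i* = I.
import numpy as np

p = 0.3
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
kraus = [np.sqrt(1 - p) * I2, np.sqrt(p) * Z]   # Kraus operators V_i

def phi(a: np.ndarray) -> np.ndarray:
    """Apply the completely positive map phi(a) = sum_i V_i a V_i*."""
    return sum(v @ a @ v.conj().T for v in kraus)

# Unital condition: phi(I) = I, equivalently sum_i V_i V_i* = I.
assert np.allclose(sum(v @ v.conj().T for v in kraus), I2)
assert np.allclose(phi(I2), I2)
print("channel is unital")
```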
Unital map
Mathematics
84
35,892,078
https://en.wikipedia.org/wiki/Thermoanaerobacter%20italicus
Thermoanaerobacter italicus is a species of thermophilic, anaerobic, spore-forming bacteria. T. italicus was first isolated from hot springs in the north of Italy. The growth range for the organism is 45 to 78°C, with optimal growth at 70°C and pH 7.0. The organism stains Gram-negative, although it has a Gram-positive cell structure. The species epithet italicus refers to the Italian hot springs from which it was first isolated. The organism was originally isolated for its ability to digest pectin and pectate. References External links Type strain of Thermoanaerobacter italicus at BacDive - the Bacterial Diversity Metadatabase Thermoanaerobacterales Thermophiles Anaerobes Bacteria described in 1998
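The cardinal temperatures reported above can be plugged into a standard microbial growth model to sketch how growth rate varies across the organism's range. The snippet below uses the Rosso cardinal temperature model (CTMI) with Tmin = 45 °C, Topt = 70 °C, Tmax = 78 °C; the choice of model and the unit optimal rate are illustrative assumptions, not data from this article.

```python
# Rosso cardinal temperature model (CTMI) for T. italicus growth rate.
# Cardinal temperatures are from the article; MU_OPT = 1.0 is an
# arbitrary relative scale (assumed).
T_MIN, T_OPT, T_MAX = 45.0, 70.0, 78.0  # degrees C
MU_OPT = 1.0

def growth_rate(t: float) -> float:
    """Relative growth rate at temperature t (0 outside the growth range)."""
    if not (T_MIN < t < T_MAX):
        return 0.0
    num = (t - T_MAX) * (t - T_MIN) ** 2
    den = (T_OPT - T_MIN) * (
        (T_OPT - T_MIN) * (t - T_OPT) - (T_OPT - T_MAX) * (T_OPT + T_MIN - 2 * t)
    )
    return MU_OPT * num / den

for t in (50, 60, 70, 75):
    print(f"{t} C -> {growth_rate(t):.3f}")   # rate peaks at T_OPT = 70 C
```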
Thermoanaerobacter italicus
Biology
179
24,760,321
https://en.wikipedia.org/wiki/Integrated%20pulmonary%20index
Integrated pulmonary index (IPI) is a patient pulmonary index which uses information from capnography and pulse oximetry to provide a single value that describes the patient's respiratory status. IPI is used by clinicians to quickly assess the patient's respiratory status and determine the need for additional clinical assessment or intervention. The IPI provides a simple real-time indication of the patient's overall ventilatory status as an integer from 1 to 10. It integrates four major physiological parameters provided by a patient monitor, using this information along with an algorithm to produce the IPI score. The IPI score is not intended to replace current patient respiratory parameters, but to provide an additional integrated score or index of the patient's ventilation status to the caregiver. Mechanism The IPI incorporates four patient parameters (end-tidal CO2 and respiratory rate measured by capnography, as well as pulse rate and blood oxygenation SpO2 as measured by pulse oximetry) into a single index value. The IPI value on the patient monitor indicates the patient's ventilatory status, where a score of 10 is normal, indicating optimal pulmonary status, and a score of 1 or 2 requires immediate intervention. The IPI algorithm was developed from data provided by a group of medical experts (anesthesiologists, nurses, respiratory therapists, and physiologists) who evaluated cases with varying parameter values and assigned an IPI value to each predefined patient status. A mathematical model was built using patient normal ranges for these parameters and the ratings given to various combinations of the parameters by these professionals. Fuzzy logic, a mathematical method which mimics human logical thinking, was used to develop the IPI model. Clinical validation studies indicate that the IPI value produced by the IPI algorithm accurately reflects the patient's ventilatory status. In studies on both adult and pediatric patients, in which experts' ratings of ventilatory status were collected along with IPI data, the IPI scores were found to be highly correlated with the experts' annotated ratings. Studies conducted to validate the index also concluded that the single numeric IPI value, along with its trend, may be valuable for promoting early awareness of changes in patient ventilatory status and for simplifying the monitoring of patients in busy clinical environments. How does IPI help clinicians? IPI is a real-time patient value, updated every second and always available to the caregiver. An IPI trend graph also shows IPI scores over the previous hour (or another set time period), indicating whether the IPI is remaining steady or trending up or down, thus reflecting changes in pulmonary status over time. For example, a changing IPI score indicates changes in the ventilatory status of the patient, such as IPI improving after a stimulus is applied. IPI can promote early awareness of changes in a patient's ventilatory status. The caregiver can view the IPI trend, which indicates changes in IPI over time. A quick view of the IPI trend can show whether the IPI has changed over the previous minutes or hours, helping the clinician ascertain if the patient's overall ventilatory status is worsening, remaining steady, or improving. This information can help determine the next steps in patient care. Thus, IPI can simplify the monitoring of patients in clinical environments.
The caregiver can quickly and easily assess a patient's ventilatory status by following one number, the IPI, before checking the four parameters that make up this number. The four parameters continue to be displayed on the monitor screen. A significant change in the IPI is a "red flag" signaling that the clinician should review other monitored data and assess the patient. In the clinical environment, a quick check of the IPI value and IPI trend is a first indicator of the patient's pulmonary status and may be used to determine if further patient assessment is warranted. IPI can increase patient safety by alerting the caregiver in real time to slow-developing respiratory issues that are not easily identified from individual instantaneous readings. This enables timely decisions and interventions to reduce patient risk and improve outcomes. Since normal values for the physiological parameters differ across age categories, the IPI algorithm differs for different age groups (three pediatric age groups and adult). IPI is not available for neonatal and infant patients (up to the age of 1 year). See also Anesthesia Medical tests References A Novel Integrated Pulmonary Index (IPI) Quantifies Heart Rate, Etco2, Respiratory Rate and SpO2%, Arthur Taft, Ph.D., Michal Ronen, Ph.D., Chad Epps, M.D., Jonathan Waugh, Ph.D., Richard Wales, B.S., presented at the annual meeting of the American Society of Anesthesiologists, 2008. Reliability of the Integrated Pulmonary Index Postoperatively, D. Gozal, MD, Y. Gozal, MD, presented at the Society for Technology in Anesthesia (STA), 2009. The Integrated Pulmonary Index: Validity and Application in the Pediatric Population, D. Gozal, MD, Y. Gozal, MD, presented at the Society for Technology in Anesthesia (STA), 2009. Medical technology
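The published IPI algorithm is proprietary, but the fuzzy-logic idea described in the Mechanism section can be sketched: score each parameter against an assumed normal adult range, then let the worst-scoring parameter dominate the combined index. Everything below (the ranges, the tolerance spans, and the min-combination) is a hypothetical illustration, not the actual algorithm.

```python
# Toy illustration of a fuzzy-logic-style combination of four respiratory
# parameters into a single 1-10 index. Ranges and logic are HYPOTHETICAL,
# loosely inspired by the IPI concept; this is NOT the proprietary algorithm.
def membership(value: float, low: float, high: float, span: float) -> float:
    """1.0 inside [low, high], falling linearly to 0 over 'span' outside."""
    if low <= value <= high:
        return 1.0
    dist = (low - value) if value < low else (value - high)
    return max(0.0, 1.0 - dist / span)

def toy_ipi(etco2: float, resp_rate: float, pulse: float, spo2: float) -> int:
    scores = [
        membership(etco2, 35, 45, 15),      # end-tidal CO2, mmHg (assumed range)
        membership(resp_rate, 10, 20, 10),  # breaths/min (assumed range)
        membership(pulse, 60, 100, 40),     # beats/min (assumed range)
        membership(spo2, 95, 100, 8),       # % saturation (assumed range)
    ]
    # Fuzzy AND: the worst parameter dominates, so a deteriorating patient
    # is not masked by three normal readings.
    return max(1, round(1 + 9 * min(scores)))

print(toy_ipi(40, 14, 72, 98))   # normal-looking patient -> index 10
print(toy_ipi(60, 6, 55, 88))    # hypoventilating patient -> index 1
```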
Integrated pulmonary index
Biology
1,124
41,973,969
https://en.wikipedia.org/wiki/RR%20Caeli
RR Caeli is an eclipsing binary star system, located 69 light-years from Earth in the constellation Caelum. It is made up of a red dwarf star and a white dwarf, which complete an orbit around each other every seven hours. There is evidence of two circumbinary planets orbiting even further away. Properties RR Caeli was first noted to be a high-proper-motion star in 1955 by Willem Luyten, and given the designation LFT 349. This star system consists of a red dwarf of spectral type M6 and a white dwarf that orbit each other every seven hours; the former is 18% as massive as the Sun, while the latter has 44% of the Sun's mass. The red dwarf is tidally locked with the white dwarf, meaning it displays the same side to the heavier star. The system is also a post-common-envelope binary, and the red dwarf star is transferring material onto the white dwarf. In approximately 9–20 billion years, RR Caeli will likely become a cataclysmic variable star due to the gradual shortening of its orbital period, leading to increasing rates of transfer of hydrogen to the surface of the white dwarf. The white dwarf likely has a pure helium core, as its mass is too low for a carbon-oxygen core to have formed. Discovered to be an eclipsing binary in 1979, the system has a baseline magnitude of 14.36, dimming markedly every 7.2 hours for an interval of around 10 minutes, due to the total eclipse of the hotter star by the cooler one. Its variability in brightness led to its being given the variable star designation RR Caeli in 1984. There are also very shallow secondary eclipses where the white dwarf transits across the red dwarf. Planetary system In 2012, analysis of slight variations in the observed light curve of the system showed that there was likely a giant planet about four times as massive as Jupiter orbiting the pair of stars with a period of 11.9 years, and that there was also evidence for a second possible substellar body further out. A two-planet model was presented in 2021. However, a 2022 study found that at least the 2012 model fails to predict recent changes in eclipse timing, suggesting that a different explanation for the eclipse timing variations may be needed; more observations of the light curve are likely to help confirm or reject the presence of one or both planets. Notes References Caelum Eclipsing binaries Caeli, RR M-type main-sequence stars White dwarfs Hypothetical planetary systems J04210556-4839070
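Eclipse-timing claims like the one above rest on the light-travel-time (LTT) effect: the planet swings the binary around the common barycentre, so eclipses arrive early or late by up to the binary's barycentric orbital radius divided by the speed of light. The sketch below recomputes the expected timing amplitude from figures quoted in this article (a 4 Jupiter-mass planet, an 11.9-year period, and stellar masses of 0.44 and 0.18 solar masses); a circular, edge-on planetary orbit is a simplifying assumption.

```python
# Light-travel-time amplitude of eclipse timing variations induced by a
# circumbinary planet, assuming a circular, edge-on planetary orbit.
M_SUN_JUP = 9.546e-4      # Jupiter mass in solar masses
AU_LIGHT_S = 499.005      # light travel time across 1 au, in seconds

m_binary = 0.44 + 0.18            # white dwarf + red dwarf, M_Sun (from article)
m_planet = 4 * M_SUN_JUP          # ~4 Jupiter masses (from article)
period_yr = 11.9                  # planet orbital period in years (from article)

# Kepler's third law in solar units: a^3 [au] = P^2 [yr] * M_total [M_Sun]
a_planet = (period_yr**2 * (m_binary + m_planet)) ** (1 / 3)

# The binary's barycentric orbit is the planet's, scaled by the mass ratio.
a_binary = a_planet * m_planet / (m_binary + m_planet)

print(f"planet semi-major axis ~ {a_planet:.2f} au")
print(f"eclipse timing amplitude ~ {a_binary * AU_LIGHT_S:.1f} s")  # ~14 s
```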
RR Caeli
Astronomy
521
22,553,379
https://en.wikipedia.org/wiki/William%20Robert%20Bousfield
William Robert Bousfield (12 January 1854 – 16 July 1943) was a British lawyer, Conservative politician and scientist. Biography Bousfield was the son of Edward Tenney Bousfield, an engineer, and his wife Charlotte Eliza Collins, who was a noted diarist. He was born at Newark-on-Trent, from which his family moved to Sticklepath in 1856 and then to Bedford, where they arrived in September 1858. He attended Bedford Modern School before serving an apprenticeship as an engineer. In 1872 he was admitted to Gonville and Caius College, Cambridge, winning a scholarship there in 1873. Following graduation as 16th Wrangler in 1876 and a brief period as a lecturer at the University of Bristol, where he delivered the new institution's first ever lecture (on mathematics, at 9 a.m. on 10 October 1876), he decided to study law. In 1880 he was called to the bar at the Inner Temple. His knowledge of engineering led to him becoming a renowned expert on patent law. He became a Queen's Counsel in 1891 (the office became King's Counsel on the accession of a king in 1901). He was elected a bencher of the Inner Temple in 1897, and treasurer in 1920. Politically, Bousfield was a Conservative, and stood unsuccessfully twice for election as Member of Parliament for Mid Lanarkshire in the 1880s. He entered the Commons at a by-election at Hackney North in May 1892. He held the seat at the 1895 and 1900 elections, before being unseated by Thomas Hart-Davies when the Liberals swept to power at the 1906 general election. He did not stand for election again. Bousfield was an enthusiastic scientist, particularly interested in physical chemistry and electrolysis. He worked in collaboration with T. M. Lowry, and their work was published in the Proceedings of the Royal Society, of which Bousfield was made a fellow in 1916. He co-authored an article with his daughter C. Elspeth Bousfield on the specific heat of water in the Transactions of the Royal Society, published in 1919. When his health began to fail in the 1920s, he was no longer able to carry out laboratory experiments, and turned his attention to psychology. He wrote three books on the subject: A Neglected Complex (1924), The Mind and its Mechanism (1927) and The Basis of Memory (1928). The Mind and its Mechanism, co-authored with his son Paul Bousfield, postulated the existence of a "psychoplasm" which, like protoplasm, is an essential part of each cell. The psychoplasm is composed of immaterial "psychons" which interact with the physical brain. Psychons are described as immeasurably smaller than electrons or protons. The book argued for a psycho-physical interaction. The "psychonic substance" is invoked to explain consciousness, ideas, memory, the unconscious mind and evolution. Bousfield favoured Lamarckian evolution, taking the view that habits become ingrained in the "mental structure" of the organism and influence the psychic structure of the germ plasm. Personal life In 1879 he married Florence Kelly of Shanklin, Isle of Wight. His son Paul Bousfield was a specialist in nervous diseases who graduated MRCS, LRCP from St Bartholomew's Hospital. His other son, John Keith Bousfield (1893–1945), was an army officer, businessman and member of the Legislative Council of Hong Kong. Bousfield died in Ottery St Mary in July 1943, aged 89.
Selected publications A Neglected Complex (1924) The Mind and its Mechanism (with Paul Bousfield, 1927) The Basis of Memory (1928) References External links 1854 births 1943 deaths Lamarckism People educated at Bedford Modern School Conservative Party (UK) MPs for English constituencies UK MPs 1892–1895 UK MPs 1895–1900 UK MPs 1900–1906 Fellows of the Royal Society Hackney Members of Parliament British physical chemists Academics of University College Bristol English barristers Alumni of Gonville and Caius College, Cambridge English King's Counsel
William Robert Bousfield
Biology
835
321,017
https://en.wikipedia.org/wiki/Allen%20Brain%20Atlas
The Allen Mouse and Human Brain Atlases are projects within the Allen Institute for Brain Science which seek to combine genomics with neuroanatomy by creating gene expression maps for the mouse and human brain. They were initiated in September 2003 with a $100 million donation from Paul G. Allen, and the first atlas went public in September 2006. To date, seven brain atlases have been published: Mouse Brain Atlas, Human Brain Atlas, Developing Mouse Brain Atlas, Developing Human Brain Atlas, Mouse Connectivity Atlas, Non-Human Primate Atlas, and Mouse Spinal Cord Atlas. There are also three related projects with data banks: Glioblastoma, Mouse Diversity, and Sleep. It is the hope of the Allen Institute that their findings will help advance various fields of science, especially those surrounding the understanding of neurobiological diseases. The atlases are free and available for public use online. History In 2001, Paul Allen gathered a group of scientists, including James Watson and Steven Pinker, to discuss the future of neuroscience and what could be done to enhance neuroscience research (Jones 2009). During these meetings David Anderson from the California Institute of Technology proposed the idea that a three-dimensional atlas of gene expression in the mouse brain would be of great use to the neuroscience community. The project was set in motion in 2003 with the $100 million donation by Allen through the Allen Institute for Brain Science. The project used a technique for mapping gene expression developed by Gregor Eichele and colleagues at the Max Planck Institute for Biophysical Chemistry in Goettingen, Germany. The technique uses colorimetric in situ hybridization to map gene expression. The project set a three-year goal of finishing the atlas and making it available to the public. An initial release of the first atlas, the mouse brain atlas, occurred in December 2004. Subsequently, more data for this atlas was released in stages. The final genome-wide data set was released in September 2006. However, the final release of the atlas was not the end of the project; the atlas is still being improved upon. Other projects, including the human brain atlas, developing mouse brain, developing human brain, mouse connectivity, non-human primate atlas, and the mouse spinal cord atlas, are also being developed through the Allen Institute for Brain Science in conjunction with the Allen Mouse Brain Atlas. Goals for the project The overarching goal and motto for all Allen Institute projects is "fueling discovery". The project strives to fulfill this goal and advance science in a few ways. First, they create brain atlases to better understand the connections between genes and brain functioning. They aim to advance the research and knowledge about neurobiological conditions such as Parkinson's, Alzheimer's, and autism with their mapping of gene expression throughout the brain. The Brain Atlas projects also follow the Allen Institute motto with their open release of data and findings. This policy is also related to another goal of the Institute: collaborative and multidisciplinary research. Thus, any scientist from any discipline is able to look at the findings and take them into account while designing their own experiments. Also available to the public is the Brain Explorer application. Research techniques The Allen Institute for Brain Science uses a project-based philosophy for its research. Each brain atlas focuses on its own project, made up of its own team of researchers.
To complete an atlas, each research team collects and synthesizes brain scans, medical data, genetic information and psychological data. With this information, they are able to construct the 3-D biochemical architecture of the brain and figure out which proteins are expressed in certain parts of the brain. To gather the needed data, scientists at the Allen Institute use various techniques. One technique involves the use of postmortem brains and brain scanning technology to discover where in the brain genes are turned on and off. Another technique, called in situ hybridization, or ISH, is used to view gene expression patterns as in situ hybridization images. Within the brain atlases, these 3-D ISH digital images and graphs reveal, in color, the regions where a given gene is expressed. In the Brain Explorer, any gene can be searched for and selected, and the resulting in situ image can then be easily manipulated and explored. Part of the creation of this anatomy-centred database of gene expression includes aligning ISH data for each gene with a three-dimensional coordinate space through registration with a reference atlas created for the project. Contributions to neuroscience The different types of cells in the central nervous system originate from varying gene expression. A map of gene expression in the brain allows researchers to correlate forms and functions. The Allen Brain Atlas lets researchers view the areas of differing expression in the brain, which enables the viewing of neural connections throughout the brain. Viewing these pathways through differing gene expression as well as functional imaging techniques permits researchers to correlate gene expression, cell types, and pathway function in relation to behaviors or phenotypes. Even though the majority of research has been done in mice, 90% of genes in mice have a counterpart in humans. This makes the atlas particularly useful for modeling neurological diseases. The gene expression patterns in normal individuals provide a standard for comparing and understanding altered phenotypes. Extending information learned from mouse diseases will help improve the understanding of human neurological disorders. The atlas can show which genes and particular areas are affected in neurological disorders; the action of a gene in a disease can be evaluated in conjunction with general expression patterns, and this data could shed light on the role of the particular gene in the disorder. Brain explorer The Allen Brain Atlas website contains a downloadable 3-D interactive Brain Explorer. The explorer is essentially a search engine for locations of gene expression; this is particularly useful in finding regions that express similar genes. Users can delineate networks and pathways using this application by connecting regions that co-express a certain gene. The explorer uses a multicolor scale and contains multiple planes of the brain that let viewers see differences in density and expression level. The images are a composite of many averaged samples, so it is useful when comparing to individuals with abnormally low gene expression. Atlases Mouse Brain The Allen Mouse Brain Atlas is a comprehensive genome-wide map of the adult mouse brain that reveals where each gene is expressed. The mouse brain atlas was the original project of the Allen Brain Atlas and was finished in 2006. The purpose of the atlas is to aid in the development of neuroscience research.
The hope of the project is that it will allow scientists to gain a better understanding of brain diseases and disorders such as autism and depression. Human Brain The Allen Human Brain Atlas was made public in May 2010. It was the first anatomically and genomically comprehensive three-dimensional human brain map. The atlas was created to enhance research in many neuroscience research fields including neuropharmacology, human brain imaging, human genetics, neuroanatomy, genomics and more. The atlas is also geared toward furthering research into mental health disorders and brain injuries such as Alzheimer's disease, autism, schizophrenia and drug addiction. Developing Mouse Brain The Allen Developing Mouse Brain Atlas is an atlas which tracks gene expression throughout the development of a C57BL/6 mouse brain. The project began in 2008 and is currently ongoing. The atlas is based on magnetic resonance imaging (MRI). It traces the growth, white matter, connectivity, and development of the C57BL/6 mouse brain from embryonic day 12 to postnatal day 80. This atlas enhances the ability of neuroscientists to study how pollutants and genetic mutations affect the development of the brain. Thus, the atlas may be used to determine which toxins pose special threats to children and pregnant mothers. Mouse Brain Connectivity The Allen Mouse Brain Connectivity Atlas was launched in November 2011. Unlike other atlases from the Allen Institute, this atlas focuses on identifying the neural circuitry that governs behavior, perception, and other brain functions. This map will allow scientists to further understand how the brain works and what causes brain diseases and disorders, such as Parkinson's disease and depression. Mouse Spinal Cord Unveiled in July 2008, the Allen Mouse Spinal Cord Atlas was the first genome-wide map of the mouse spinal cord ever constructed. The spinal cord atlas is a map of genome-wide gene expression in the spinal cord of adult and juvenile C57 black mice. The initial unveiling included data for 2,000 genes and an anatomical reference section. A plan for the future includes expanding the amount of data to about 20,000 genes spanning the full length of the spinal cord. The aim of the spinal cord atlas is to enhance research into the treatment of spinal cord injury and of diseases and disorders such as Lou Gehrig's disease and spinal muscular atrophy. The project was funded by an array of donors including the Allen Research Institute, Paralyzed Veterans of America Research Foundation, the ALS Association, Wyeth Research, PEMCO Insurance, National Multiple Sclerosis Society, International Spinal Research Trust, and many other organizations, foundations, corporate and private donors. See also List of neuroscience databases EMAGE, the e-Mouse Atlas of Gene Expression References Pawel K. Olszewski, "Analysis of the network of feeding neuroregulators using the Allen Brain Atlas", Neuroscience of Behavior, 1 January 2009. Robert Lee Hotz, "Probing the Brain's Mysteries", The Wall Street Journal, 24 January 2012. Allan Jones, "The Allen Brain Atlas: 5 years and beyond", Nature, 2009. External links Genomics Neuroscience projects Biological databases Open science
Allen Brain Atlas
Biology
1,944
53,659,697
https://en.wikipedia.org/wiki/Infinite%20chess
Infinite chess is any variation of the game of chess played on an unbounded chessboard. Versions of infinite chess have been introduced independently by multiple players, chess theorists, and mathematicians, both as a playable game and as a model for theoretical study. It has been found that even though the board is unbounded, there are ways in which a player can win the game in a finite number of moves. Background Classical (FIDE) chess is played on an 8×8 board (64 squares). However, the history of chess includes variants of the game played on boards of various sizes. A predecessor game called courier chess was played on a slightly larger 12×8 board (96 squares) in the 12th century, and continued to be played for at least six hundred years. Japanese chess (shogi) has been played historically on boards of various sizes; the largest is taikyoku shōgi ("ultimate chess"). This chess-like game, which dates to the mid 16th century, was played on a 36×36 board (1296 squares). Each player starts with 402 pieces of 209 different types, and a well-played game would require several days of play, possibly requiring each player to make over a thousand moves. Chess player Jianying Ji was one of many to propose infinite chess, suggesting a setup with the chess pieces in the same relative positions as in classical chess, with knights replaced by nightriders and a rule preventing pieces from travelling too far from opposing pieces. Numerous other chess players, chess theorists, and mathematicians who study game theory have conceived of variations of infinite chess, often with different objectives in mind. Chess players sometimes use the scheme simply to alter the strategy; since chess pieces, and in particular the king, cannot be trapped in corners on an infinite board, new patterns are required to form a checkmate. Theorists conceive of infinite chess variations to expand the theory of chess in general, or as a model to study other mathematical, economic, or game-playing strategies. Decidability of short mates For infinite chess, it has been found that the mate-in-n problem is decidable; that is, given a natural number n, a player to move, and the positions (specified, for example, by integer coordinates) of a finite number of chess pieces that are uniformly mobile and with constant and linear freedom, there is an algorithm that will answer whether there is a forced checkmate in at most n moves. One such algorithm consists of expressing the instance as a sentence in Presburger arithmetic and using the decision procedure for Presburger arithmetic. The winning-position problem is not known to be decidable. Not only is there no known upper bound on the smallest such n when a mate-in-n exists; there are also positions for which there is a forced mate but no integer n such that there is a mate-in-n. For example, there is a position such that after one rook move by black, the number of moves until black is checkmated will be one more than the distance by which black moved. See also List of chess variants Fairy chess pieces References External links Infinitechess.org: Online implementation that supports play against an opponent in the same room or against an opponent on the internet. Infinite Chess at The Chess Variant Pages Chess variants Combinatorial game theory Abstract strategy games
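Although the board is unbounded, a position with finitely many pieces has a finite description, and properties of it can be computed directly. As a small hedged illustration, the sketch below decides whether a king is attacked along a rank, file, or diagonal by a rook, bishop, or queen, resolving blockers by comparing only the finitely many pieces; it shows finite reasoning about an infinite board and is not the Presburger-arithmetic mate-in-n procedure described above.

```python
# Check detection on an unbounded board. Pieces occupy integer coordinates,
# and blocking is decided by comparing only the finitely many pieces, so no
# square-by-square scan of the infinite board is needed.
from typing import Dict, Tuple

Square = Tuple[int, int]

def sign(v: int) -> int:
    return (v > 0) - (v < 0)

def blocked(pieces: Dict[Square, str], attacker: Square, king: Square) -> bool:
    """Does any piece sit strictly between attacker and king on their line?"""
    px, py = attacker
    kx, ky = king
    sx, sy = sign(kx - px), sign(ky - py)
    dist = max(abs(kx - px), abs(ky - py))
    for (qx, qy) in pieces:
        if (qx, qy) in (attacker, king):
            continue
        n = max(abs(qx - px), abs(qy - py))
        if 0 < n < dist and (qx - px, qy - py) == (sx * n, sy * n):
            return True
    return False

def in_check(pieces: Dict[Square, str], king: Square) -> bool:
    """True if a rook 'R', bishop 'B', or queen 'Q' attacks the king square."""
    kx, ky = king
    for sq, kind in pieces.items():
        dx, dy = kx - sq[0], ky - sq[1]
        if (dx, dy) == (0, 0):
            continue
        on_line = dx == 0 or dy == 0
        on_diag = abs(dx) == abs(dy)
        if ((kind in "RQ" and on_line) or (kind in "BQ" and on_diag)) \
                and not blocked(pieces, sq, king):
            return True
    return False

print(in_check({(10**6, 0): "R", (5, 0): "B", (0, 0): "k"}, (0, 0)))  # False: bishop blocks
print(in_check({(10**6, 0): "R", (0, 0): "k"}, (0, 0)))               # True: empty rank
```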
Infinite chess
Mathematics
678
11,891,750
https://en.wikipedia.org/wiki/Gq%20alpha%20subunit
{{DISPLAYTITLE:Gq alpha subunit}} Gq protein alpha subunit is a family of heterotrimeric G protein alpha subunits. This family is also commonly called the Gq/11 (Gq/G11) family or Gq/11/14/15 family to include closely related family members. The alpha subunits may be referred to as Gq alpha, Gαq, or Gqα. Gq proteins couple to G protein-coupled receptors to activate beta-type phospholipase C (PLC-β) enzymes. PLC-β in turn hydrolyzes phosphatidylinositol 4,5-bisphosphate (PIP2) to diacylglycerol (DAG) and inositol trisphosphate (IP3). IP3 acts as a second messenger to release stored calcium into the cytoplasm, while DAG acts as a second messenger that activates protein kinase C (PKC). Family members In humans, there are four distinct proteins in the Gq alpha subunit family: Gαq is encoded by the gene GNAQ. Gα11 is encoded by the gene GNA11. Gα14 is encoded by the gene GNA14. Gα15 is encoded by the gene GNA15. Function The general function of Gq is to activate intracellular signaling pathways in response to activation of cell surface G protein-coupled receptors (GPCRs). GPCRs function as part of a three-component system of receptor-transducer-effector. The transducer in this system is a heterotrimeric G protein, composed of three subunits: a Gα protein such as Gαq, and a complex of two tightly linked proteins called Gβ and Gγ in a Gβγ complex. When not stimulated by a receptor, Gα is bound to guanosine diphosphate (GDP) and to Gβγ to form the inactive G protein trimer. When the receptor binds an activating ligand outside the cell (such as a hormone or neurotransmitter), the activated receptor acts as a guanine nucleotide exchange factor to promote GDP release from and guanosine triphosphate (GTP) binding to Gα, which drives dissociation of GTP-bound Gα from Gβγ. Recent evidence suggests that Gβγ and Gαq-GTP could maintain partial interaction via the N-α-helix region of Gαq. GTP-bound Gα and Gβγ are then freed to activate their respective downstream signaling enzymes. Gq/11/14/15 proteins all activate beta-type phospholipase C (PLC-β) to signal through calcium and PKC signaling pathways. PLC-β then cleaves a specific plasma membrane phospholipid, phosphatidylinositol 4,5-bisphosphate (PIP2), into diacylglycerol (DAG) and inositol 1,4,5-trisphosphate (IP3). DAG remains bound to the membrane, and IP3 is released as a soluble molecule into the cytoplasm. IP3 diffuses to bind to IP3 receptors, specialized calcium channels in the endoplasmic reticulum (ER). These channels are specific to calcium and only allow the passage of calcium from the ER into the cytoplasm. Since cells actively sequester calcium in the ER to keep cytoplasmic levels low, this release causes the cytosolic concentration of calcium to increase, causing a cascade of intracellular changes and activity through calcium-binding proteins and calcium-sensitive processes. Further reading: Calcium function in vertebrates. DAG works together with released calcium to activate specific isoforms of PKC, which are activated to phosphorylate other molecules, leading to further altered cellular activity. Further reading: function of protein kinase C. The Gαq/Gα11 (Q209L) mutation is associated with the development of uveal melanoma, and its pharmacological inhibition (with the cyclic depsipeptide inhibitor FR900359) decreases tumor growth in preclinical trials.
Receptors The following G protein-coupled receptors couple to Gq subunits: 5-HT2 serotonergic receptors Alpha-1 adrenergic receptor Vasopressin type 1 receptors: 1A and 1B Angiotensin II receptor type 1 Calcitonin receptor Glutamate mGluR1 and mGluR5 receptors Gonadotropin-releasing hormone receptor Histamine H1 receptor M1, M3, and M5 muscarinic receptors Thyrotropin-releasing hormone receptor Trace amine-associated receptor 1 At least some Gq-coupled receptors (e.g., the muscarinic acetylcholine M3 receptor) can be found preassembled (pre-coupled) with Gq. The common polybasic domain in the C-tail of Gq-coupled receptors appears necessary for this receptor–G protein preassembly. Inhibitors The cyclic depsipeptides FR900359 and YM-254890 are strong, highly specific inhibitors of Gq and G11. See also Second messenger system G protein-coupled receptor Heterotrimeric G protein Phospholipase C Calcium signaling Protein kinase C Gs alpha subunit Gi alpha subunit G12/G13 alpha subunits References External links G proteins Peripheral membrane proteins
Gq alpha subunit
Chemistry
1,162
61,617,701
https://en.wikipedia.org/wiki/Ergonomics%20for%20manual%20material%20handling
Manual material handling (MMH) work contributes to a large percentage of the over half a million cases of musculoskeletal disorders reported annually in the United States. Musculoskeletal disorders often involve strains and sprains to the lower back, shoulders, and upper limbs. They can result in protracted pain, disability, medical treatment, and financial stress for those afflicted with them, and employers often find themselves paying the bill, either directly or through workers' compensation insurance, at the same time they must cope with the loss of the full capacity of their workers. Scientific evidence shows that effective ergonomic interventions can lower the physical demands of MMH work tasks, thereby lowering the incidence and severity of the musculoskeletal injuries they can cause. Their potential for reducing injury-related costs alone makes ergonomic interventions a useful tool for improving a company's productivity, product quality, and overall business competitiveness. But very often productivity gets an additional and solid shot in the arm when managers and workers take a fresh look at how best to use energy, equipment, and exertion to get the job done in the most efficient, effective, and effortless way possible. Planning that applies these principles can result in big wins for all concerned. Improving manual material handling in a workplace According to the U.S. Department of Labor, handling is defined as: Seizing, holding, grasping, turning, or otherwise working with the hand or hands. Fingers are involved only to the extent that they are an extension of the hand, such as to turn a switch or to shift automobile gears. Manual handling of containers may expose workers to physical conditions (e.g., force, awkward postures, and repetitive motions) that can lead to injuries, wasted energy, and wasted time. To avoid these problems, your organization can directly benefit from improving the fit between the demands of work tasks and the capabilities of your workers. Remember that workers' abilities to perform work tasks may vary because of differences in age, physical condition, strength, gender, stature, and other factors. In short, improving the fit can benefit your workplace by: Reducing or preventing injuries Reducing workers' efforts by decreasing forces in lifting, handling, pushing, and pulling materials Reducing risk factors for musculoskeletal disorders (e.g., awkward postures from reaching into containers) Increasing productivity, product and service quality, and worker morale Lowering costs by reducing or eliminating production bottlenecks, error rates or rejects, use of medical services because of musculoskeletal disorders, workers' compensation claims, excessive worker turnover, absenteeism, and retraining Manual material handling tasks may expose workers to physical risk factors. If these tasks are performed repeatedly or over long periods of time, they can lead to fatigue and injury.
The main risk factors, or conditions, associated with the development of injuries in manual material handling tasks include: Awkward postures (e.g., bending, twisting) Repetitive motions (e.g., frequent reaching, lifting, carrying) Forceful exertions (e.g., carrying or lifting heavy loads) Pressure points (e.g., grasping [or contact from] loads, leaning against parts or surfaces that are hard or have sharp edges) Static postures (e.g., maintaining fixed positions for a long time) Repeated or continual exposure to one or more of these factors initially may lead to fatigue and discomfort. Over time, injury to the back, shoulders, hands, wrists, or other parts of the body may occur. Injuries may include damage to muscles, tendons, ligaments, nerves, and blood vessels. Injuries of this type are known as musculoskeletal disorders, or MSDs. In addition, poor environmental conditions, such as extreme heat, cold, noise, and poor lighting, may increase workers’ chances of developing other types of problems. Types of ergonomic improvements In general, ergonomic improvements are changes made to improve the fit between the demands of work tasks and the capabilities of your workers. There are usually many options for improving a particular manual handling task. It is up to you to make informed choices about which improvements will work best for particular tasks. There are two types of ergonomic improvements: Engineering improvements Administrative improvements Engineering improvements These include rearranging, modifying, redesigning, providing or replacing tools, equipment, workstations, packaging, parts, processes, products, or materials (see “Improvement Options”). Administrative improvements Observe how different workers perform the same tasks to get ideas for improving work practices or organizing the work. Then consider the following improvements: Alternate heavy tasks with light tasks. Provide variety in jobs to eliminate or reduce repetition (i.e., overuse of the same muscle groups). Adjust work schedules, work pace, or work practices. Provide recovery time (e.g., short rest breaks). Modify work practices so that workers perform work within their power zone (i.e., above the knees, below the shoulders, and close to the body). Rotate workers through jobs that use different muscles, body parts, or postures. Administrative improvements, such as job rotation, can help reduce workers’ exposures to risk factors by limiting the amount of time workers spend on “problem jobs.” However, these measures may still expose workers to risk factors that can lead to injuries. For these reasons, the most effective way to eliminate “problem jobs” is to change them. This can be done by putting into place the appropriate engineering improvements and modifying work practices accordingly. Training Training alone is not an ergonomic improvement. Instead, it should be used together with any workplace changes made. Workers need training and hands-on practice with new tools, equipment, or work practices to make sure they have the skills necessary to work safely. Training is most effective when it is interactive and fully involves workers. Below are some suggestions for training based on adult learning principles: Provide hands-on practice when new tools, equipment, or procedures are introduced to the workforce. Use several types of visual aids (e.g., pictures, charts, videos) of actual tasks in your workplace. Hold small-group discussions and problem-solving sessions. Give workers ample opportunity for questions. 
Improvement options Use team lifting as a temporary measure until a more permanent improvement can be found. If possible, try to find a co-worker of similar height to help with the lift. Use a scissors lift, load lifter, or pneumatic lifter to raise or lower the load so that it is level with the work surface. Then slide the load instead of lifting. Use a turntable. Rotate the turntable to bring the container closer. Always work from the side closest to the load. Use a tool. Raise the worker so that the container is grasped 30"–40" from the surface the worker is standing on. Work within your power zone. Raise or lower the work surface. Store heavier or bulkier containers so that they can be handled within your power zone, where you have the greatest strength and most comfort. Work within your power zone. Tilt the container to improve handling of materials. Use angled shelving to improve access to containers. Hold the container close to the body when lifting and lowering. For easier access, remove or lower the sides of the receptacle. Add extra handles for better grip and control. Support the container on or against a fixed object, rack, or stand while pouring the contents. Use a removable plate or a work surface to support the container while pouring the contents into the receptacle. Use a screen over the opening to support the sack. Pour the contents through the screen. Use a cutout work surface so that you can get closer to the container. NIOSH lifting equation The National Institute for Occupational Safety and Health (NIOSH) lifting equation (1994) provides guidelines for evaluating two-handed manual lifting tasks. It defines a Recommended Weight Limit (RWL) as the weight of the load that nearly all healthy workers can lift over a substantial period of time (e.g., eight hours) without an increased risk of developing lower back pain. The maximum weight to be lifted with two hands, under ideal conditions, is 51 pounds. The RWL is based on six variables that reduce the maximum weight to be lifted to less than 51 pounds; a worked sketch of the calculation appears after the lists below. Easier ways to manually carry containers Redesign the container so it has handles, grips, or handholds. Hold the container close to the body. Don't carry more than you can handle. To reduce the weight of the load, use a smaller container. Wear properly sized gloves. Gloves with rubber dots on the surface can increase grip stability on slippery surfaces. Increase the size of the bucket or pail handle with padding or a clamp-on handle. Get co-worker assistance when necessary. Discuss your plan so you don't have surprise movements. Pad the shoulder. Support the container on one shoulder and alternate between shoulders. Use a tool. Alternatives to manual handling of individual containers Instead of lifting and pouring from the drum, insert a siphon or a pump. Increase the size of the container or the weight of the load so that it is too large to handle manually. Use a hook for light-weight containers to reduce your reach. Use a drum dolly. Use a cart or platform truck. Use a portable scissors lift. Use a hand truck. Use a conveyor, slide, or chute. Use a hand pallet truck. Use a portable hoist or crane. Use a stacker. Use a powered hand truck. Use an airball table. Use a forklift. Use a crane. Use a pallet truck. Use a lifter. Use a carousel. Use a tilter.
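As referenced in the NIOSH section above, the RWL is the 51-pound load constant reduced by six multipliers (horizontal, vertical, distance, asymmetry, frequency, and coupling). The sketch below implements the standard US-unit multiplier formulas; the frequency and coupling multipliers normally come from lookup tables, so fixed example values stand in for them here.

```python
# Recommended Weight Limit (RWL) per the revised NIOSH lifting equation,
# US units. Valid within the equation's stated parameter ranges; FM and CM
# are table lookups in practice and are passed in as fixed values here.
LC = 51.0  # load constant, pounds

def rwl(h: float, v: float, d: float, a: float,
        fm: float = 1.0, cm: float = 1.0) -> float:
    """h: horizontal hand distance (in), v: starting hand height (in),
    d: vertical travel distance (in), a: asymmetry angle (degrees)."""
    hm = 10.0 / max(h, 10.0)          # horizontal multiplier (capped at 1)
    vm = 1 - 0.0075 * abs(v - 30)     # vertical multiplier (ideal at 30 in)
    dm = 0.82 + 1.8 / max(d, 10.0)    # distance multiplier (capped at 1)
    am = 1 - 0.0032 * a               # asymmetry (twisting) multiplier
    return LC * hm * vm * dm * am * fm * cm

# Lift from 15 in away, hands starting 20 in high, raised 30 in, no twisting:
print(f"RWL = {rwl(h=15, v=20, d=30, a=0):.1f} lb")   # about 27.7 lb
```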
See also Occupational Safety and Health Administration National Institute for Occupational Safety and Health Material handling Material handling equipment College-Industry Council on Material Handling Education Caster Sources References Further reading Kulwiec, R.A., Ed., 1985, "Materials Handling Handbook", 2nd Ed., New York: Wiley. Snook, S.H., and Ciriello, V.M., 1991, “The Design of Manual Handling Tasks: Revised Tables of Maximum Acceptable Weights and Forces.” Ergonomics 34(9): 1197–1213. Mulcahy, D.E., 1999, "Materials Handling Handbook", New York: McGraw-Hill. External links National Institute for Occupational Safety and Health (NIOSH) NIOSH Lifting Equation Snook’s Psychophysical Tables Ergonomic Assist Systems and Equipment Council (EASE) Material Handling Industry Material handling Ergonomics
Ergonomics for manual material handling
Physics
2,181
7,455
https://en.wikipedia.org/wiki/Chaparral
Chaparral is a shrubland plant community found primarily in California, in southern Oregon and in the northern portion of the Baja California Peninsula in Mexico. It is shaped by a Mediterranean climate (mild wet winters and hot dry summers) and infrequent, high-intensity crown fires. Many chaparral shrubs have hard sclerophyllous evergreen leaves, as contrasted with the associated soft-leaved, drought-deciduous scrub community of coastal sage scrub, often found on drier, south-facing slopes. Three other closely related chaparral shrubland systems occur in southern Arizona, western Texas, and along the eastern side of central Mexico's mountain chains, all having summer rains in contrast to the Mediterranean climate of other chaparral formations. Chaparral comprises 9% of California's wildland vegetation and contains 20% of its plant species. Etymology The name comes from the Spanish word chaparral, which translates to "place of the scrub oak". Introduction In its natural state, chaparral is characterized by infrequent fires, with natural fire return intervals ranging between 30 years and over 150 years. Mature chaparral (at least 60 years since the time of last fire) is characterized by nearly impenetrable, dense thickets (except the more open desert chaparral). These plants are flammable during the late summer and autumn months when conditions are characteristically hot and dry. They grow as woody shrubs with thick, leathery, and often small leaves, retain green leaves all year (are evergreen), and are typically drought resistant (with some exceptions). After the first rains following a fire, the landscape is dominated by small flowering herbaceous plants, known as fire followers, which die back with the summer dry period. Similar plant communities are found in the four other Mediterranean climate regions around the world, including the Mediterranean Basin (where it is known as maquis), central Chile (where it is called matorral), the South African Cape Region (known there as fynbos), and Western and Southern Australia (as kwongan). According to the California Academy of Sciences, Mediterranean shrubland contains more than 20 percent of the world's plant diversity. The word chaparral is a loanword from Spanish chaparral, meaning place of the scrub oak, which itself comes from a Basque word, txapar, that has the same meaning. Conservation International and other conservation organizations consider chaparral to be a biodiversity hotspot – a biological community with a large number of different species – that is under threat by human activity. California chaparral California chaparral and woodlands ecoregion The California chaparral and woodlands ecoregion, of the Mediterranean forests, woodlands, and scrub biome, has three sub-ecoregions with ecosystem–plant community subdivisions: California coastal sage and chaparral: In coastal Southern California and northwestern coastal Baja California, as well as all of the Channel Islands off California and Guadalupe Island (Mexico). California montane chaparral and woodlands: In southern and central coast adjacent and inland California regions, including covering some of the mountains of the California Coast Ranges, the Transverse Ranges, and the western slopes of the northern Peninsular Ranges. California interior chaparral and woodlands: In central interior California surrounding the Central Valley, covering the foothills and lower slopes of the northeastern Transverse Ranges and the western Sierra Nevada range.
Chaparral and woodlands biota For the numerous individual plant and animal species found within the California chaparral and woodlands ecoregion, see: Flora of the California chaparral and woodlands Fauna of the California chaparral and woodlands. Some of the indicator plants of the California chaparral and woodlands ecoregion include: Quercus species – oaks: Quercus agrifolia – coast live oak Quercus berberidifolia – scrub oak Quercus chrysolepis – canyon live oak Quercus douglasii – blue oak Quercus wislizeni – interior live oak Artemisia species – sagebrush: Artemisia californica – California sagebrush, coastal sage brush Arctostaphylos species – manzanitas: Arctostaphylos glauca – bigberry manzanita Arctostaphylos manzanita – common manzanita Ceanothus species – California lilacs: Ceanothus cuneatus – buckbrush Ceanothus megacarpus – bigpod ceanothus Rhus species – sumacs: Rhus integrifolia – lemonade berry Rhus ovata – sugar bush Eriogonum species – buckwheats: Eriogonum fasciculatum – California buckwheat Salvia species – sages: Salvia mellifera – Californian black sage Chaparral soils and nutrient composition Chaparral characteristically is found in areas with steep topography and shallow stony soils, while adjacent areas with clay soils, even where steep, tend to be colonized by annual plants and grasses. Some chaparral species are adapted to nutrient-poor soils developed over serpentine and other ultramafic rock, with a high ratio of magnesium and iron to calcium and potassium, that are also generally low in essential nutrients such as nitrogen. California cismontane and transmontane chaparral subdivisions Another phytogeography system uses two California chaparral and woodlands subdivisions: the cismontane chaparral and the transmontane (desert) chaparral. California cismontane chaparral Cismontane chaparral ("this side of the mountain") refers to the chaparral ecosystem in the Mediterranean forests, woodlands, and scrub biome in California, growing on the western (and coastal) sides of large mountain range systems, such as the western slopes of the Sierra Nevada in the San Joaquin Valley foothills, western slopes of the Peninsular Ranges and California Coast Ranges, and south-southwest slopes of the Transverse Ranges in the Central Coast and Southern California regions. Cismontane chaparral plant species In Central and Southern California chaparral forms a dominant habitat. Members of the chaparral biota native to California, all of which tend to regrow quickly after fires, include: Adenostoma fasciculatum, chamise Adenostoma sparsifolium, redshanks Arctostaphylos spp., manzanita Ceanothus spp., ceanothus Cercocarpus spp., mountain mahogany Cneoridium dumosum, bush rue Eriogonum fasciculatum, California buckwheat Garrya spp., silk-tassel bush Hesperoyucca whipplei, yucca Heteromeles arbutifolia, toyon Acmispon glaber, deerweed Malosma laurina, laurel sumac Marah macrocarpus, wild cucumber Mimulus aurantiacus, bush monkeyflower Pickeringia montana, chaparral pea Prunus ilicifolia, islay or hollyleaf cherry Quercus berberidifolia, scrub oak Q. dumosa, scrub oak Q. wislizenii var. frutescens Rhamnus californica, California coffeeberry Rhus integrifolia, lemonade berry Rhus ovata, sugar bush Salvia apiana, Californian white sage Salvia mellifera, Californian black sage Xylococcus bicolor, mission manzanita Cismontane chaparral bird species The complex ecology of chaparral habitats supports a very large number of animal species. 
The following is a short list of birds which are an integral part of the cismontane chaparral ecosystems. Characteristic chaparral bird species include: Wrentit (Chamaea fasciata) California thrasher (Toxostoma redivivum) California towhee (Melozone crissalis) Spotted towhee (Pipilo maculatus) California scrub jay (Aphelocoma californica) Other common chaparral bird species include: Anna's hummingbird (Calypte anna) Bewick's wren (Thryomanes bewickii) Bushtit (Psaltriparus minimus) Costa's hummingbird (Calypte costae) Greater roadrunner (Geococcyx californianus) California transmontane (desert) chaparral Transmontane chaparral or desert chaparral—transmontane ("the other side of the mountain") chaparral—refers to the desert shrubland habitat and chaparral plant community growing in the rain shadow of these ranges. Transmontane chaparral features a xeric desert climate, not Mediterranean climate habitats, and is also referred to as desert chaparral. Desert chaparral is a regional ecosystem subset of the deserts and xeric shrublands biome, with some plant species from the California chaparral and woodlands ecoregion. Unlike cismontane chaparral, which forms dense, impenetrable stands of plants, desert chaparral is often open, with only about 50 percent of the ground covered. Individual shrubs can nonetheless reach considerable height. Transmontane chaparral or desert chaparral is found on the eastern slopes of major mountain range systems on the western sides of the deserts of California. The mountain systems include the southeastern Transverse Ranges (the San Bernardino and San Gabriel Mountains) in the Mojave Desert north and northeast of the Los Angeles basin and Inland Empire; and the northern Peninsular Ranges (San Jacinto, Santa Rosa, and Laguna Mountains), which separate the Colorado Desert (western Sonoran Desert) from lower coastal Southern California. It is distinguished from the cismontane chaparral found on the coastal side of the mountains, which experiences higher winter rainfall. Plants in this community are characterized by small, hard (sclerophyllic) evergreen (non-deciduous) leaves. Desert chaparral grows above California's desert cactus scrub plant community and below the pinyon-juniper woodland. It is further distinguished from the deciduous sub-alpine scrub above the pinyon-juniper woodlands on the same side of the Peninsular Ranges. Due to the lower annual rainfall (resulting in slower plant growth rates) when compared to cismontane chaparral, desert chaparral is more vulnerable to biodiversity loss and the invasion of non-native weeds and grasses if disturbed by human activity and frequent fire. Transmontane chaparral distribution Transmontane (desert) chaparral typically grows on the lower northern slopes of the southern Transverse Ranges (running east to west in San Bernardino and Los Angeles counties) and on the lower eastern slopes of the Peninsular Ranges (running south to north from lower Baja California to Riverside and Orange counties and the Transverse Ranges). It can also be found in higher-elevation sky islands in the interior of the deserts, such as in the upper New York Mountains within the Mojave National Preserve in the Mojave Desert.
The California transmontane (desert) chaparral is found in the rain shadow deserts of the following: Sierra Nevada, creating the Great Basin Desert and northern Mojave Desert Transverse Ranges, creating the western through eastern Mojave Desert Peninsular Ranges, creating the Colorado Desert and Yuha Desert. Transmontane chaparral plants Adenostoma fasciculatum, chamise (a low shrub common to most chaparral with clusters of tiny needle-like leaves or fascicles; similar in appearance to coastal Eriogonum fasciculatum) Agave deserti, desert agave Arctostaphylos glauca, bigberry manzanita (smooth red bark with large edible berries; glauca means blue-green, the color of its leaves) Ceanothus greggii, desert ceanothus, California lilac (a nitrogen fixer, has hair on both sides of leaves for heat dissipation) Cercocarpus ledifolius, curl leaf mountain mahogany, a nitrogen fixer and important food source for desert bighorn sheep Dendromecon rigida, bush poppy (a fire follower with four-petaled yellow flowers) Ephedra spp., Mormon teas Fremontodendron californicum, California flannel bush (lobed leaves with fine coating of hair, covered with yellow blossoms in spring) Opuntia acanthocarpa, buckhorn cholla (branches resemble antlers of a deer) Opuntia echinocarpa, silver or golden cholla (depending on color of the spines) Opuntia phaeacantha, desert prickly pear (fruit is an important food source for animals) Purshia tridentata, buckbrush, antelope bitterbrush (Rosaceae family) Prunus fremontii, desert apricot Prunus fasciculata, desert almond (commonly infested with tent caterpillars of Malacosoma spp.) Prunus ilicifolia, holly-leaf cherry Quercus cornelius-mulleri, desert scrub oak or Muller's oak Rhus ovata, sugar bush Simmondsia chinensis, jojoba Yucca schidigera, Mojave yucca Hesperoyucca whipplei (syn. Yucca whipplei), foothill yucca – Our Lord's candle. Transmontane chaparral animals There is overlap of animals with those of the adjacent desert and pinyon-juniper communities. Canis latrans, coyote Lynx rufus, bobcat Neotoma sp., desert pack rat Odocoileus hemionus, mule deer Peromyscus truei, pinyon mouse Puma concolor, mountain lion Stagmomantis californica, California mantis Fire Chaparral is a coastal biome with hot, dry summers and mild, rainy winters. The chaparral area receives only modest annual precipitation, most of it falling in winter. This makes the chaparral most vulnerable to fire in the late summer and fall. The chaparral ecosystem as a whole is adapted to be able to recover from naturally infrequent, high-intensity fire (fires occurring between 30 and 150 years or more apart); indeed, chaparral regions are known culturally and historically for their impressive fires. (This does create a conflict with human development adjacent to and expanding into chaparral systems.) Additionally, Native Americans burned chaparral near villages on the coastal plain to promote plant species for textiles and food. Before a major fire, typical chaparral plant communities are dominated by manzanita, chamise (Adenostoma fasciculatum) and Ceanothus species, toyon (which can sometimes be interspersed with scrub oaks), and other drought-resistant shrubs with hard (sclerophyllous) leaves; these plants resprout (see resprouter) from underground burls after a fire. Plants that are long-lived in the seed bank or serotinous with induced germination after fire include chamise, Ceanothus, and fiddleneck.
Some chaparral plant communities may grow so dense and tall that it becomes difficult for large animals and humans to penetrate them, but they may be teeming with smaller fauna in the understory. The seeds of many chaparral plant species are stimulated to germinate by some fire cue (heat or the chemicals from smoke or charred wood). In the period shortly after a fire, chaparral communities may contain soft-leaved herbaceous, fire-following annual wildflowers and short-lived perennials that dominate the community for the first few years – until the burls resprout and seedlings of chaparral shrub species create a mature, dense overstory. Seeds of annuals and shrubs lie dormant until the next fire creates the conditions needed for germination. Several shrub species such as Ceanothus fix nitrogen, increasing the availability of nitrogen compounds in the soil. Because of the hot, dry conditions that exist in the California summer and fall, chaparral is one of the most fire-prone plant communities in North America. Some fires are caused by lightning, but these are usually during periods of high humidity and low winds and are easily controlled. Nearly all of the very large wildfires are caused by human activity during periods of hot, dry easterly Santa Ana winds. These human-caused fires are commonly ignited by power line failures, vehicle fires and collisions, sparks from machinery, arson, or campfires. Threatened by high fire frequency Though adapted to infrequent fires, chaparral plant communities can be eliminated by frequent fires. A high frequency of fire (less than 10–15 years apart) will result in the loss of obligate seeding shrub species such as manzanita (Arctostaphylos) spp. Such frequent fire prevents seeder plants from reaching their reproductive size before the next fire, and the community shifts to sprouter dominance. If high-frequency fires continue over time, obligate resprouting shrub species can also be eliminated by exhausting their energy reserves below ground. Today, frequent accidental ignitions can convert chaparral from a native shrubland to non-native annual grassland and drastically reduce species diversity, especially under drought brought about by climate change. Wildfire debate There are two older hypotheses relating to California chaparral fire regimes that caused considerable debate in the past within the fields of wildfire ecology and land management. Research over the past two decades has rejected these hypotheses: That older stands of chaparral become "senescent" or "decadent", thus implying that fire is necessary for the plants to remain healthy; That wildfire suppression policies have allowed dead chaparral to accumulate unnaturally, creating ample fuel for large fires. The perspective that older chaparral is unhealthy or unproductive may have originated during the 1940s, when studies were conducted measuring the amount of forage available to deer populations in chaparral stands. However, according to recent studies, California chaparral is extraordinarily resilient to very long periods without fire and continues to maintain productive growth throughout pre-fire conditions. Seeds of many chaparral plants actually require 30 years or more worth of accumulated leaf litter before they will successfully germinate (e.g., scrub oak, Quercus berberidifolia; toyon, Heteromeles arbutifolia; and holly-leafed cherry, Prunus ilicifolia).
When intervals between fires drop below 10 to 15 years, many chaparral species are eliminated and the system is typically replaced by non-native, invasive, weedy grassland. The idea that older chaparral is responsible for causing large fires was originally proposed in the 1980s by comparing wildfires in Baja California and southern California. It was suggested that fire suppression activities in southern California allowed more fuel to accumulate, which in turn led to larger fires. This is similar to the observation that fire suppression and other human-caused disturbances in dry ponderosa pine forests in the Southwest of the United States have unnaturally increased forest density. Historically, mixed-severity fires likely burned through these forests every decade or so, burning understory plants, small trees, and downed logs at low severity, and patches of trees at high severity. However, chaparral has a high-intensity crown-fire regime, meaning that fires consume nearly all the above-ground growth whenever they burn, with a historical frequency of 30 to 150 years or more. A detailed analysis of historical fire data concluded that fire suppression activities have been ineffective at excluding fire from southern California chaparral, unlike in ponderosa pine forests. In addition, the number of fires is increasing in step with population growth, exacerbated by climate change. Chaparral stand age does not have a significant correlation to its tendency to burn. Large, infrequent, high-intensity wildfires are part of the natural fire regime for California chaparral. Extreme weather conditions (low humidity, high temperature, high winds), drought, and low fuel moisture are the primary factors in determining how large a chaparral fire becomes.
See also
California Chaparral Institute
California chaparral and woodlands ecoregion
California coastal sage and chaparral
California montane chaparral and woodlands
California interior chaparral and woodlands
Heath (habitat)
Fire ecology
Keystone species reintroduction: (sufficient) native keystone grazing species in grasslands will promote tree growth, reducing wildfire likelihood
Garrigue
International Association of Wildland Fire
References
Bibliography
Haidinger, T.L., and J.E. Keeley. 1993. Role of high fire frequency in destruction of mixed chaparral. Madrono 40: 141–147.
Halsey, R.W. 2008. Fire, Chaparral, and Survival in Southern California. Second Edition. Sunbelt Publications, San Diego, CA. 232 p.
Hanes, T.L. 1971. Succession after fire in the chaparral of southern California. Ecological Monographs 41: 27–52.
Hubbard, R.F. 1986. Stand age and growth dynamics in chamise chaparral. Master's thesis, San Diego State University, San Diego, California.
Keeley, J.E., C.J. Fotheringham, and M. Morais. 1999. Reexamining fire suppression impacts on brushland fire regimes. Science 284: 1829–1832.
Keeley, J.E. 1995. Future of California floristics and systematics: wildfire threats to the California flora. Madrono 42: 175–179.
Keeley, J.E., A.H. Pfaff, and H.D. Stafford. 2005. Fire suppression impacts on postfire recovery of Sierra Nevada chaparral shrublands. International Journal of Wildland Fire 14: 255–265.
Larigauderie, A., T.W. Hubbard, and J. Kummerow. 1990. Growth dynamics of two chaparral shrub species with time after fire. Madrono 37: 225–236.
Minnich, R.A. 1983. Fire mosaics in southern California and northern Baja California. Science 219: 1287–1294.
Moritz, M.A., J.E. Keeley, E.A. Johnson, and A.A. Schaffner. 2004. Testing a basic assumption of shrubland fire management: How important is fuel age? Frontiers in Ecology and the Environment 2: 67–72.
Pratt, R.B., A.L. Jacobsen, A.R. Ramirez, A.M. Helms, C.A. Traugh, M.F. Tobin, M.S. Heffner, and S.D. Davis. 2013. Mortality of resprouting chaparral shrubs after a fire and during a record drought: physiological mechanisms and demographic consequences. Global Change Biology 20: 893–907.
Syphard, A.D., V.C. Radeloff, J.E. Keeley, T.J. Hawbaker, M.K. Clayton, S.I. Stewart, and R.B. Hammer. 2007. Human influence on California fire regimes. Ecological Applications 17: 1388–1402.
Vale, T.R. 2002. Fire, Native Peoples, and the Natural Landscape. Island Press, Washington, DC, USA.
Venturas, M.D., E.D. MacKinnon, H.L. Dario, A.L. Jacobsen, R.B. Pratt, and S.D. Davis. 2016. Chaparral shrub hydraulic traits, size, and life history types relate to species mortality during California's historic drought of 2014. PLoS ONE 11(7): e0159145.
Zedler, P.H. 1995. Fire frequency in southern California shrublands: biological effects and management options, pp. 101–112 in J.E. Keeley and T. Scott (eds.), Brushfires in California wildlands: ecology and resource management. International Association of Wildland Fire, Fairfield, Wash.
External links
The California Chaparral Institute website
Mediterranean forests, woodlands, and scrub in the United States Plant communities of California Plants by habitat San Bernardino Mountains San Gabriel Mountains Santa Susana Mountains Santa Ana Mountains Ecology of the Sierra Nevada (United States) Wildfire ecology Nearctic ecoregions Sclerophyll forests
Chaparral
Biology
5,017
227,223
https://en.wikipedia.org/wiki/MicroStation
MicroStation is a CAD software platform for two- and three-dimensional design and drafting, developed and sold by Bentley Systems and used in the architectural and engineering industries. It generates 2D/3D vector graphics objects and elements and includes building information modeling (BIM) features. The current version is MicroStation CONNECT Edition.
History
MicroStation was initially developed by three individual developers and was sold and supported by Intergraph in the 1980s. The latest versions of the software are released solely for Microsoft Windows operating systems, but historically MicroStation was available for Macintosh platforms and a number of Unix-like operating systems. From its inception MicroStation was designed as an IGDS (Interactive Graphics Design System) file editor for the PC. Its initial development was a result of the developers' experience developing PseudoStation, released in 1984, a program designed to replace the use of proprietary Intergraph graphics workstations for editing DGN files by substituting the much less expensive Tektronix-compatible graphics terminals. PseudoStation, as well as Intergraph's IGDS program, ran on a modified version of Digital Equipment Corporation's VAX super-minicomputer. In 1985, MicroStation 1.0 was released as a DGN file read-only and plot program designed to run exclusively on the IBM PC-AT personal computer. In 1987, MicroStation 2.0 was released; it was the first version of MicroStation to read and write DGN files. Almost two years later, MicroStation 3.0 was released, which took advantage of the increasing processing power of the PC, particularly with respect to dynamics. Intergraph MicroStation 4.0 was released in late 1990 and added many features: reference file clipping and masking, a DWG translator, fence modes, the ability to name levels, as well as GUI enhancements. The 1992 release of version 4 introduced the ability to write applications using the MicroStation Development Language (MDL). In 1993, MicroStation 5.0 was released. New capabilities included binary raster support, custom line styles, settings manager, and dimension-driven design. V5 for Power Macintosh "provided a comprehensive tool set for both 2-D and 3-D CAD ... added several truly useful features ... the high-end PowerPC-native CAD package runs on steroids." This was the last version to be supported on Unix, and the last to run on Intergraph CLIX. It was branded both Intergraph (on CLIX) and Bentley MicroStation (on PC); later versions were all branded Bentley. All platforms other than the PC used 32-bit processors. In 1995, Windows 95 was released. Bentley soon followed with a release of MicroStation for that operating system. Aside from being the first version of MicroStation not to include the version number in its name (MicroStation 95 was actually MicroStation v5.5), MicroStation 95 included the ability to be driven mostly by graphic icon buttons. This version introduced a host of new features: AccuDraw, dockable dialogs, SmartLine, revised view controls, movie generation, and the ability to use two application windows (similar to previous Unix-driven Intergraph terminals). Many of these features are among the most popular used today. MicroStation 95 was the first version of MicroStation for a PC platform to use 32-bit hardware.
The last multi-platform release, MicroStation SE (SE standing for special edition, though it was actually MicroStation 5.7), was released late in 1997 and was the first MicroStation release to include color button icons. These icons could also be made borderless, just like in Office 97. This version of MicroStation also included several features to enable more work over the internet, along with enhanced precision and a very commonly used tool in MicroStation, PowerSelector. MicroStation/J (a.k.a. MicroStation 7.0, a.k.a. MicroStation V7) was released almost a year after SE. The J in the software title stood for Java, as this version introduced a Java-enhanced version of MDL, called JMDL. Other features included QuickvisionGL and a revised help system. MicroStation/J was the last version to be based upon the IGDS file format; since MicroStation/J was actually Version 7, the file format became known as "V7 DGN". That file format had been used for about 20 years. However, with the advent of MicroStation V8 in 2001 came a new IEEE 754-based 64-bit file format, referred to as V8 DGN. Along with the new file format came many new enhancements, including unlimited levels, a nearly limitless design plane, and no limits on file size. Other features that were added were AccuSnap, Design History, models, unlimited undo, VBA programming, .NET interoperability, True Scale, and standard definitions for working units (the new file format stores everything internally in meters, but can recognize rational unit conversions so that it can know the size of geometry); some of these features were also available in MicroStation 95 through MicroStation/J. It also included the ability to work natively with DWG files. MicroStation V8 2004 Edition (V8.5) followed nearly three years later with support for newer DWG releases, multi-snaps, PDF creation, the Standards Checker, and feature modeling. MicroStation V8 XM Edition (V8.9) was released in May 2006 and builds upon the changes made by V8. The XM edition includes a completely revised Direct3D-based graphics subsystem, PDF references, task navigation, element templates, color books, support for PANTONE and RAL color systems, and keyboard mapping. In MicroStation V8i (V8.11) (November 2008) the task navigation was overhauled and the then-newest DWG format was supported. MicroStation also gained a module for GPS data. MicroStation CONNECT Edition (V10.XX) was first released in September 2015. This version updated the application architecture to 64-bit and changed to a ribbon interface. Subsequent versions are being delivered as (roughly) quarterly updates. MicroStation 2023 (23.00.00.108) was released on June 28, 2023. This is the first major release adopting the new naming convention. New features include improved workflows and several user-experience enhancements, with a focus on new access to geospatial features and maps, improved issue resolution, and increased data reporting.
File format support
Its native format is the DGN format, though it can also read and write a variety of standard CAD formats including DWG, DXF, SKP, and OBJ, and can produce media output in such forms as rendered images (JPEG and BMP), animations (AVI), 3D web pages in Virtual Reality Modeling Language (VRML), and Adobe Systems PDF.
At its inception, MicroStation was used in the engineering and architecture fields primarily for creating construction drawings; however, it has evolved through its various versions to include advanced parametric modeling and rendering features, including Boolean solids, VUE rendering, ray tracing, path tracing, PBR materials, and keyframe animation. It can provide specialized environments for architecture, civil engineering, mapping, or plant design, among others. In 2000, Bentley revised the DGN file format for V8 to add features such as digital rights and Design History (a revision-control capability that allows reinstating previous revisions either globally or by selection), and to better support import/export of Autodesk's DWG format. Additionally, the V8 DGN file format removed many data restrictions of earlier releases, such as limited design levels and drawing area. CONNECT Edition versions continue to use the V8 DGN file format.
Criticism
The software has been criticized many times, as reflected in part by a steep decline in usage. Common issues include, but are not limited to:
Crashes during the rendering process
Blank renders
Arbitrary light disruption
Materials applying unpredictably
See also
ProjectWise
GenerativeComponents
Comparison of computer-aided design editors
Rendering (computer graphics)
References
External links
MicroStation home page at Bentley
CONNECT Edition Book Series at Bentley
Computer-aided design software Computer-aided design software for Windows
MicroStation
Engineering
1,692
3,593,667
https://en.wikipedia.org/wiki/Atomic%20layer%20epitaxy
Atomic layer epitaxy (ALE), more generally known as atomic layer deposition (ALD), is a specialized form of thin film growth (epitaxy) that typically deposits alternating monolayers of two elements onto a substrate. The crystal lattice structure achieved is thin, uniform, and aligned with the structure of the substrate. The reactants are brought to the substrate as alternating pulses with "dead" times in between. ALE makes use of the fact that the incoming material is bound strongly until all sites available for chemisorption are occupied. The dead times are used to flush the excess material. It is mostly used in semiconductor fabrication to grow thin films of thickness in the nanometer scale.
Technique
This technique was invented in 1974 and patented the same year (patent published in 1976) by Dr. Tuomo Suntola at the Instrumentarium company, Finland. Dr. Suntola's purpose was to grow thin films of zinc sulfide to fabricate electroluminescent flat panel displays. The key to this technique is the use of a self-limiting chemical reaction to accurately control the thickness of the deposited film. Since the early days, ALE (ALD) has grown into a global thin-film technology that has enabled the continuation of Moore's law. In 2018, Suntola received the Millennium Technology Prize for ALE (ALD) technology. Compared to basic chemical vapour deposition, in ALE (ALD) chemical reactants are pulsed alternately into a reaction chamber and then chemisorb in a saturating manner on the surface of the substrate, forming a chemisorbed monolayer. ALD introduces two complementary precursors (e.g., Al(CH3)3 and H2O) alternately into the reaction chamber. Typically, one of the precursors will adsorb onto the substrate surface until it saturates the surface, and further growth cannot occur until the second precursor is introduced. Thus the film thickness is controlled by the number of precursor cycles rather than by the deposition time, as is the case for conventional CVD processes. ALD allows for extremely precise control of film thickness and uniformity.
See also
Atomic layer deposition
References
External links
Plasma-assisted Atomic Layer Deposition by the Plasma & Materials Processing group at Eindhoven University of Technology
Atomic layer epitaxy – a valuable tool for nanotechnology?
ALENET – Atomic Layer Epitaxy Network
Surface smoothing of GaAs microstructure by atomic layer epitaxy
Electrochemical characterisation of atomic layer deposition
Thin film deposition Finnish inventions
Atomic layer epitaxy
Chemistry,Materials_science,Mathematics
519
36,854,747
https://en.wikipedia.org/wiki/Problem-Solving%20Group
Problem-Solving Group (PSG) is a team of problem management and technical support staff that is formed to investigate and diagnose a recurring IT problem.
Background
The concept of the Problem-Solving Group was introduced in ITIL Service Operation 2007 but was removed from ITIL Service Operation 2011.
Definition
The ITIL Service Operation manual describes the purpose of a Problem-Solving Group. A paper with a more detailed description of a PSG was presented at a meeting of the British Computer Society at the University of Northampton.
See also
ITIL v3 problem management
References
Information technology management
Problem-Solving Group
Technology
119
47,089,053
https://en.wikipedia.org/wiki/Hypomyces%20completus
Hypomyces completus is a parasitic ascomycete in the order Hypocreales. The fungus grows on boletes, typically Suillus spraguei in North America, although the type collection was found growing on Boletinus oxydabilis in Siberia. The color of its subiculum (a crust-like growth of mycelium) ranges from white initially through yellow-brown, greenish-brown, and brown to black; the fruitbodies (perithecia) range from pale brown to dark brown to black. Spores measure 35–40 by 4–6 μm. The species was described as new to science in 1971 by G. R. W. Arnold, who placed it in Peckiella, a genus segregated from Hypomyces by Pier Andrea Saccardo to contain species having unicellular ascospores. In their 1989 review, Rogerson and Samuels did not accept this genus as valid, stating "variations in these features occur, occasionally in a single perithecium", and they reclassified the fungus in Hypomyces. The anamorph species associated with H. completus is Sepedonium brunneum, first described by Charles Horton Peck in 1887.
References
External links
Fungi described in 1971 Fungi of Europe Fungi of North America Hypocreaceae Inedible fungi Parasitic fungi Fungus species
Hypomyces completus
Biology
290
33,998,310
https://en.wikipedia.org/wiki/Mountain%20car%20problem
Mountain Car, a standard testing domain in reinforcement learning, is a problem in which an under-powered car must drive up a steep hill. Since gravity is stronger than the car's engine, even at full throttle the car cannot simply accelerate up the steep slope. The car is situated in a valley and must learn to leverage potential energy by driving up the opposite hill before it is able to make it to the goal at the top of the rightmost hill. The domain has been used as a test bed in various reinforcement learning papers.
Introduction
The mountain car problem, although fairly simple, is commonly applied because it requires a reinforcement learning agent to learn on two continuous variables: position and velocity. For any given state (position and velocity) of the car, the agent is given the possibility of driving left, driving right, or not using the engine at all. In the standard version of the problem, the agent receives a negative reward at every time step when the goal is not reached; the agent has no information about the goal until an initial success.
History
The mountain car problem first appeared in Andrew Moore's PhD thesis (1990). It was later more strictly defined in Singh and Sutton's paper on reinforcement learning with eligibility traces. The problem became more widely studied when Sutton and Barto added it to their book Reinforcement Learning: An Introduction (1998). Throughout the years many versions of the problem have been used, such as those which modify the reward function, the termination condition, and the start state.
Techniques used to solve mountain car
Q-learning and similar techniques for mapping discrete states to discrete actions need to be extended to be able to deal with the continuous state space of the problem. Approaches often fall into one of two categories: state space discretization or function approximation.
Discretization
In this approach, the two continuous state variables are pushed into discrete states by bucketing each continuous variable into multiple discrete states. This approach works with properly tuned parameters, but a disadvantage is that information gathered about one state is not used to evaluate another state. Tile coding can be used to improve discretization; it maps the continuous variables into several sets of buckets offset from one another. Each step of training then has a wider impact on the value function approximation, because when the offset grids are summed, the information is diffused.
Function approximation
Function approximation is another way to solve the mountain car. By choosing a set of basis functions beforehand, or by generating them as the car drives, the agent can approximate the value function at each state. Unlike the step-wise version of the value function created with discretization, function approximation can more cleanly estimate the true smooth function of the mountain car domain.
Eligibility traces
One aspect of the problem involves the delay of actual reward: the agent is not able to learn about the goal until a successful completion. With a naive approach, each trial can only back up the reward of the goal slightly. This is a problem for naive discretization, because each discrete state will only be backed up once, so a larger number of episodes is needed to learn the problem. This can be alleviated via the mechanism of eligibility traces, which automatically back up the reward to states visited before, dramatically increasing the speed of learning, as in the sketch below.
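As an illustrative sketch (our own, not taken from any of the cited papers), the following Python program combines the standard dynamics given under Technical details below with a simple grid discretization and SARSA with replacing eligibility traces. The bucket count, learning rate, trace decay, and episode budget are arbitrary tuning choices.

```python
import math
import random

def step(pos, vel, action):
    """One step of the standard dynamics; action is -1 (left), 0 (coast), or 1 (right)."""
    vel = min(0.07, max(-0.07, vel + 0.001 * action - 0.0025 * math.cos(3 * pos)))
    pos = min(0.6, max(-1.2, pos + vel))
    if pos <= -1.2:
        vel = 0.0                      # inelastic collision with the left boundary
    return pos, vel, -1.0, pos >= 0.6  # reward is -1 per time step

N = 20                                 # buckets per state variable (a tuning choice)
ACTIONS = (-1, 0, 1)

def bucket(pos, vel):
    """Map the continuous state into a discrete grid cell."""
    i = min(N - 1, int((pos + 1.2) / 1.8 * N))
    j = min(N - 1, int((vel + 0.07) / 0.14 * N))
    return i, j

Q = {}                                 # maps (i, j, action) -> estimated return

def choose(state, eps=0.1):
    """Epsilon-greedy action selection."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((*state, a), 0.0))

alpha, gamma, lam = 0.1, 1.0, 0.9
for episode in range(200):
    pos, vel = -0.5, 0.0               # standard starting condition
    s, a = bucket(pos, vel), choose(bucket(pos, vel))
    traces = {}                        # eligibility traces
    for t in range(5000):
        pos, vel, r, done = step(pos, vel, a)
        s2, a2 = bucket(pos, vel), choose(bucket(pos, vel))
        delta = r + (0.0 if done else gamma * Q.get((*s2, a2), 0.0)) - Q.get((*s, a), 0.0)
        traces[(*s, a)] = 1.0          # replacing traces
        for key in list(traces):       # back the error up along the trace
            Q[key] = Q.get(key, 0.0) + alpha * delta * traces[key]
            traces[key] *= gamma * lam
            if traces[key] < 1e-3:
                del traces[key]
        if done:
            break
        s, a = s2, a2
```

Without the trace dictionary, each step would update only the single cell just visited; the traces spread each temporal-difference error over recently visited cells, which is exactly the speed-up described above.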
Eligibility traces can be viewed as a bridge from temporal difference learning methods to Monte Carlo methods.
Technical details
The mountain car problem has undergone many iterations. This section focuses on the standard well-defined version from Sutton (2008).
State variables
Two-dimensional continuous state space: position ∈ [-1.2, 0.6] and velocity ∈ [-0.07, 0.07].
Actions
One-dimensional discrete action space: motor ∈ {left, neutral, right}, encoded as -1, 0, 1.
Reward
For every time step: reward = -1.
Update function
For every time step:
velocity' = velocity + 0.001 * action - 0.0025 * cos(3 * position)
position' = position + velocity'
with both variables clipped to their allowed ranges.
Starting condition
position = -0.5 and velocity = 0.0. Optionally, many implementations include randomness in both parameters to show better generalized learning.
Termination condition
End the simulation when: position >= 0.6.
Variations
There are many versions of the mountain car which deviate in different ways from the standard model. Variables that vary include, but are not limited to, the constants (gravity and steepness) of the problem, so that specific tuning for specific policies becomes irrelevant, and the reward function, to affect the agent's ability to learn in a different manner. An example is changing the reward to be equal to the distance from the goal, or changing the reward to zero everywhere and one at the goal. Additionally, a 3D mountain car can be used, with a 4D continuous state space.
References
Implementations
C++ Mountain Car Software. Richard S. Sutton.
Java Mountain Car with support for RL Glue
Python, with good discussion (blog post - down page)
Further reading
Mountain Car with Replacing Eligibility Traces
Gaussian Processes with Mountain Car
Machine learning
Mountain car problem
Engineering
920
4,082,119
https://en.wikipedia.org/wiki/D3O
D3O is the namesake ingredient brand of British company D3O Lab, specializing in rate-sensitive impact protection technologies. The brand comprises more than 30 technologies and materials, including set foams, formable foams, set elastomers, and formable elastomers. D3O is sold in more than 50 countries. It is used in sports and motorcycle gear, protective cases for consumer electronics, including phones, industrial workwear, and military protection, including helmet pads and limb protectors.
History
In 1999, the materials scientists Richard Palmer and Philip Green experimented with a dilatant fluid with non-Newtonian properties. Unlike water, this fluid was free-flowing at rest but became instantly hard upon impact. As snowboarders, Palmer and Green drew inspiration from snow and decided to replicate its matrix-like quality to develop a flexible material that incorporated the dilatant fluid. After experimenting with numerous materials and formulas, they invented a flexible, pliable material that locked together and solidified in the event of a collision. When incorporated into clothing, the material moves with the wearer while providing comprehensive protection. Palmer and Green filed a patent application, which they used as the foundation for commercializing their invention and setting up a business in 1999. D3O was used commercially for the first time by the United States Ski Team and the Canadian ski team at the 2006 Olympic Winter Games. D3O first entered the motorcycle market in 2009, when the ingredient was incorporated into CE-certified armour for the apparel brand Firstgear. Philip Green left D3O in 2006, and in 2009 founder Richard Palmer brought in Stuart Sawyer as interim CEO. Palmer took a sabbatical in 2010 and left the business in 2011, at which point executive leadership was officially handed over to Sawyer, who has remained in the position since. In 2014, D3O received one of the Queen’s Awards for Enterprise and was awarded £237,000 by the Technology Strategy Board (now known as Innovate UK) to develop a shock-absorption helmet system prototype for the defence market to reduce the risk of traumatic brain injury. The following year, Sawyer secured £13 million in private equity funding from venture capital investor Beringea, allowing D3O to place more emphasis on product development and international marketing. D3O opened headquarters in London, which include test laboratories and house its global business functions. With exports to North America making up an increasing part of its business, the company set up a new operating base within the Virginia Tech Corporate Research Center (VTCRC), a research park for high-technology companies located in Blacksburg, Virginia. The same year, D3O consumer electronics brand partner Gear4 became the UK’s number 1 phone case brand in volume and value. Gear4 has since become present in consumer electronics retail stores worldwide, including Verizon, AT&T, and T-Mobile. In 2017, D3O became part of the American National Standards Institute (ANSI)/International Safety Equipment Association (ISEA) committee which developed the first standard in North America to address the risk to hands from impact injuries: ANSI/ISEA 138-2019, American National Standard for Performance and Classification for Impact Resistant Hand Protection. D3O was acquired in September 2021 by independent private equity fund Elysian Capital III LP.
The acquisition saw previous owners Beringea US & UK and Entrepreneurs Fund exit the business after six years of year-on-year growth.
D3O applications
D3O has various applications, such as in electronics (low-profile impact protection for phones, laptops, and other electronic devices), sports (protective equipment), motorcycle riding gear, defence (helmet liners and body protection; footwear), and industrial workwear (personal protective equipment such as gloves, knee pads, and metatarsal guards for boots). In 2020, D3O became the specified helmet suspension pad supplier for the US Armed Forces' Integrated Helmet Protection System (IHPS) Suspension System.
Product development
D3O uses patented and proprietary technologies to create both standard and custom products. In-house rapid prototyping and testing laboratories ensure each D3O development is tested to CE standards for sports and motorcycle applications, ANSI/ISEA 138 for industrial applications, and criteria set by government agencies for defence applications.
Sponsorship
D3O sponsors athletes including:
Downhill mountain bike rider Tahnée Seagrave
Seth Jones, ice hockey defenseman and alternate captain for the Columbus Blue Jackets in the NHL
Motorcycle racer Michael Dunlop, 25-time winner of the Isle of Man TT
The Troy Lee Designs team of athletes, including three-time Red Bull Rampage winner Brandon Semenuk
Enduro rider Rémy Absalon, 12-time Megavalanche winner.
Awards and recognition
D3O has received the following awards and recognition:
2014: Queen’s Award for Enterprise
2016: Inclusion in the Sunday Times Tech Track 100 ‘Ones to Watch’ list
2017: T3 Awards, together with Three: Best Mobile Accessory
2018: British Yachting Awards – clothing innovation
2019: ISPO Award – LP2 Pro
2020: Red Dot – Snickers Ergo Craftsmen Kneepads
2022/2023: ISPO Textrends Award – Accessories & Trim
2023: IF Design Award – D3O Ghost Reactiv Body Protection
2023: ISPO Award – D3O Ghost back protector
References
Materials Non-Newtonian fluids Motorcycle apparel
D3O
Physics
1,095
40,152,905
https://en.wikipedia.org/wiki/Max%20Bernhard%20Weinstein
Max Bernhard Weinstein (1 September 1852 in Kaunas, Vilna Governorate – 25 March 1918) was a German physicist and philosopher. He is best known as an opponent of Albert Einstein's theory of relativity, and for having written a broad examination of various theological theories, including extensive discussion of pandeism. Born into a Jewish family in Kovno (then Imperial Russia), Weinstein translated James Clerk Maxwell's Treatise on Electricity and Magnetism into German in 1883, and taught courses on electrodynamics at the University of Berlin. While teaching at the Institute of Physics in the University of Berlin, Weinstein associated with Max Planck, Emil du Bois-Reymond, Hermann von Helmholtz, Ernst Pringsheim Sr., Wilhelm Wien, Carl A. Paalzow of the Technische Hochschule in Berlin Charlottenburg, August Kundt, Werner von Siemens, theologian Adolph von Siemens, historian Theodor Mommsen, and Germanic philologist Wilhelm Scherer.
Criticism of Einstein's theory of relativity
Weinstein was among the first physicists to reject and criticize Albert Einstein's theory of relativity, contending that "general relativity had removed gravity from its earlier isolated position and made it into a 'world power' controlling all laws of nature," and warning that "physics and mathematics would have to be revised." It was Weinstein's writings, and their impact in driving public sentiment against Einstein's theories, that led astronomer Wilhelm Foerster to convince Einstein to write a more accessible explanation of those ideas. But one commentator contends that Weinstein's summaries of relativistic physics were "tedious exercises in algebra." Weinstein argued against relativity in his book Die Physik der bewegten Materie und die Relativitätstheorie, published in 1913.
Philosophical writings
In addition to his work in physics, Weinstein wrote several philosophical works. Welt- und Lebensanschauungen, Hervorgegangen aus Religion, Philosophie und Naturerkenntnis ("World and Life Views, Emerging From Religion, Philosophy and Perception of Nature") (1910) examined the origins and development of a great many philosophical areas, including the broadest and most far-reaching examination of the theological theory of pandeism written up to that point. A critique reviewing Weinstein's work in this field deemed the term pandeism to be an 'unsightly' combination of Greek and Latin, though Weinstein did not coin the term, nor did he claim to have. The reviewer further criticises Weinstein's broad assertions that such historical philosophers as Scotus Erigena, Anselm of Canterbury, Nicholas of Cusa, Giordano Bruno, Mendelssohn, and Lessing all were pandeists or leaned towards pandeism. Philosophically, Weinstein was attracted to what he called a psychical or spiritual monism, which he believed to be comparable to the pantheism of Spinoza, and wherein the essence of all phenomena could be found entirely in the mind. Though he could see no way around the eventual heat death of the Universe, Weinstein suggested that there existed a fundamental 'psychical energy', of which a maximum-entropy world would ultimately consist.
From this premise Weinstein reasoned that the world must have both a beginning and an end, and that a supernatural force must have initiated it and so could bring about its end as well. Though he rejected theistic formulations regarding such things, Weinstein found the origin of the Universe to be so problematic that he wrote: "As far as I can see, only Spinozist pantheism, among all philosophies, can lead to a satisfactory solution."
Works
Handbuch der physikalischen Maassbestimmungen. Zweiter Band. Einheiten und Dimensionen, Messungen für Längen, Massen, Volumina und Dichtigkeiten, Julius Springer, Berlin 1888
Die philosophischen Grundlagen der Wissenschaften. Vorlesungen gehalten an der Universität Berlin …, B. G. Teubner, Leipzig und Berlin 1906
Welt- und Lebensanschauungen hervorgegangen aus Religion, Philosophie und Naturerkenntnis, Johann Ambrosius Barth, Leipzig 1910
Die Physik der bewegten Materie und die Relativitätstheorie, Barth, Leipzig 1913
Kräfte und Spannungen. Das Gravitations- und Strahlenfeld, Friedr. Vieweg & Sohn, Braunschweig 1914
References
External links
1852 births 1918 deaths 19th-century German non-fiction writers 19th-century German philosophers 19th-century German physicists 20th-century German non-fiction writers 20th-century German philosophers 20th-century German physicists German Jews German male non-fiction writers German male writers German physicists Academic staff of the Humboldt University of Berlin Jewish philosophers Jewish German physicists Lithuanian Jews Pantheists German philosophers of religion German philosophers of science Philosophy writers Relativity critics
Max Bernhard Weinstein
Physics
1,062
2,016,219
https://en.wikipedia.org/wiki/National%20Institute%20of%20Building%20Sciences
The National Institute of Building Sciences is a non-profit, non-governmental organization that identifies and resolves problems and potential issues in the built environment throughout the United States. Its creation was authorized by the U.S. Congress in the Housing and Community Development Act of 1974.
Board of directors
The Institute is governed by a board of directors which consists of 21 members. All members serve for terms of three years, with a third of the board up for new terms each year. The President, with the advice and consent of the Senate, appoints six members to represent the public interest. The remaining 15 members are elected from the nation's construction industry, including representatives of construction labor organizations, product manufacturers, and builders; housing management experts; and experts in building standards, codes, and fire safety; as well as public interest representatives including architects, professional engineers, officials of federal, state, and local agencies, and representatives of consumer organizations. The board shall always have a majority of public interest representatives. After the expiration of the term of any member, they may continue to serve until their successor has been elected or has been appointed and confirmed. The board annually elects a chairman from among its members. It also elects one or more vice chairmen. The terms are for one year, and no one can serve as chairman or vice chairman for more than two consecutive terms. Among the board's duties is to appoint a president and CEO, and other executive officers as they see fit. As of September 12, 2024, George K. Guszcza is the President and CEO of the NIBS.
Board members appointed by the President
Six members of the current board are appointed by the President.
Councils and Workgroups
Building Enclosure Technology and Environment Council (BETEC)
Building Information Management (BIM) Council (formerly the buildingSMART alliance)
Building Seismic Safety Council (BSSC)
Consultative Council
Facility Management and Operations Council (FMOC)
Multi-Hazard Mitigation Council (MMC)
Off-Site Construction Council
Whole Building Design Guide (WBDG) Workgroup
Technology programs
HAZUS
ProjNet
Whole Building Design Guide
WBDG News
NIBS Member Quarterly Newsletter
Standards and publications
National BIM Standard - United States
United States National CAD Standard
Former councils include:
Facility Information Council (FIC)
International Alliance for Interoperability (IAI)
Charter members
Mortimer M. Marshall Jr., FAIA
Homer Hurst
See also
National CAD Standard
Whole Building Design Guide
References
External links
Building engineering organizations Professional associations based in the United States Institutes based in the United States
National Institute of Building Sciences
Engineering
518
2,176,160
https://en.wikipedia.org/wiki/Interval%20arithmetic
(Figure: tolerance function (turquoise) and interval-valued approximation (red).)
Interval arithmetic (also known as interval mathematics, interval analysis, or interval computation) is a mathematical technique used to mitigate rounding and measurement errors in mathematical computation by computing function bounds. Numerical methods involving interval arithmetic can guarantee relatively reliable and mathematically correct results. Instead of representing a value as a single number, interval arithmetic or interval mathematics represents each value as a range of possibilities.
Mathematically, instead of working with an uncertain real-valued variable x, interval arithmetic works with an interval [a, b] that defines the range of values that x can have. In other words, any value of the variable x lies in the closed interval between a and b. A function f, when applied to x, produces an interval f([a, b]) which includes all the possible values for f(x) for all x ∈ [a, b].
Interval arithmetic is suitable for a variety of purposes; the most common use is in scientific works, particularly when the calculations are handled by software, where it is used to keep track of rounding errors in calculations and of uncertainties in the knowledge of the exact values of physical and technical parameters. The latter often arise from measurement errors and tolerances for components or due to limits on computational accuracy. Interval arithmetic also helps find guaranteed solutions to equations (such as differential equations) and optimization problems.
Introduction
The main objective of interval arithmetic is to provide a simple way of calculating upper and lower bounds of a function's range in one or more variables. These endpoints are not necessarily the true supremum or infimum of a range, since the precise calculation of those values can be difficult or impossible; the bounds only need to contain the function's range as a subset.
This treatment is typically limited to real intervals, so quantities in the form
[a, b] = {x ∈ ℝ : a ≤ x ≤ b},
where a ≤ b, are allowed. With one of a, b infinite, the interval would be an unbounded interval; with both infinite, the interval would be the extended real number line. Since a real number r can be interpreted as the interval [r, r], intervals and real numbers can be freely combined.
Example
Consider the calculation of a person's body mass index (BMI). BMI is calculated as a person's body weight in kilograms divided by the square of their height in meters. Suppose a person uses a scale that has a precision of one kilogram, where intermediate values cannot be discerned, and the true weight is rounded to the nearest whole number. For example, 79.6 kg and 80.3 kg are indistinguishable, as the scale can only display values to the nearest kilogram. It is unlikely that when the scale reads 80 kg, the person has a weight of exactly 80.0 kg. Thus, the scale displaying 80 kg indicates a weight between 79.5 kg and 80.5 kg, or the interval [79.5, 80.5].
The BMI of a man who weighs 80 kg and is 1.80 m tall is approximately 24.7. A weight of 79.5 kg and the same height yields a BMI of 24.537, while a weight of 80.5 kg yields 24.846. Since the body mass index is continuous and always increasing for all values within the specified weight interval, the true BMI must lie within the interval [24.537, 24.846]. Since the entire interval is less than 25, which is the cutoff between normal and excessive weight, it can be concluded with certainty that the man is of normal weight.
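A minimal sketch of this example in Python (the function name and formatting are our own choices): because BMI increases monotonically with weight at a fixed height, evaluating the endpoints of the weight interval bounds the true value.

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight divided by height squared."""
    return weight_kg / height_m ** 2

height = 1.80
low, high = bmi(79.5, height), bmi(80.5, height)   # endpoints of [79.5, 80.5]
print(f"BMI lies in [{low:.3f}, {high:.3f}]")      # BMI lies in [24.537, 24.846]
print("normal weight" if high < 25 else "inconclusive")
```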
The error in this example does not affect the conclusion (normal weight), but this is not generally true. If the man were slightly heavier, the BMI's range might include the cutoff value of 25. In such a case, the scale's precision would be insufficient to make a definitive conclusion. The range of BMI in this example could be reported as any wider interval, since such an interval is a superset of the calculated interval. The range could not, however, be reported as a narrower interval, as such an interval would not contain all possible BMI values.
Multiple intervals
Height and body weight both affect the value of the BMI. Though the example above only considered variation in weight, height is also subject to uncertainty. Height measurements in meters are usually rounded to the nearest centimeter: a recorded measurement of 1.79 meters represents a height in the interval [1.785, 1.795]. Since the BMI uniformly increases with respect to weight and decreases with respect to height, the error interval can be calculated by substituting the lowest and highest values of each interval, and then selecting the lowest and highest results as boundaries. The BMI must therefore exist in the interval
[79.5/1.795², 80.5/1.785²] ≈ [24.673, 25.266].
In this case, the man may have normal weight or be overweight; the weight and height measurements were insufficiently precise to make a definitive conclusion.
Interval operators
A binary operation ⋆ on two intervals, such as addition or multiplication, is defined by
[x₁, x₂] ⋆ [y₁, y₂] = {x ⋆ y : x ∈ [x₁, x₂], y ∈ [y₁, y₂]}.
In other words, it is the set of all possible values of x ⋆ y, where x and y are in their corresponding intervals. If ⋆ is monotone for each operand on the intervals, which is the case for the four basic arithmetic operations (except division when the denominator contains 0), the extreme values occur at the endpoints of the operand intervals. Writing out all combinations, one way of stating this is
[x₁, x₂] ⋆ [y₁, y₂] = [min{x₁ ⋆ y₁, x₁ ⋆ y₂, x₂ ⋆ y₁, x₂ ⋆ y₂}, max{x₁ ⋆ y₁, x₁ ⋆ y₂, x₂ ⋆ y₁, x₂ ⋆ y₂}],
provided that x ⋆ y is defined for all x ∈ [x₁, x₂] and y ∈ [y₁, y₂]. For practical applications, this can be simplified further:
Addition: [x₁, x₂] + [y₁, y₂] = [x₁ + y₁, x₂ + y₂]
Subtraction: [x₁, x₂] − [y₁, y₂] = [x₁ − y₂, x₂ − y₁]
Multiplication: [x₁, x₂] · [y₁, y₂] = [min{x₁y₁, x₁y₂, x₂y₁, x₂y₂}, max{x₁y₁, x₁y₂, x₂y₁, x₂y₂}]
Division: [x₁, x₂] / [y₁, y₂] = [x₁, x₂] · (1 / [y₁, y₂]), where
1 / [y₁, y₂] = [1/y₂, 1/y₁] if 0 ∉ [y₁, y₂]
1 / [y₁, 0] = [−∞, 1/y₁]
1 / [0, y₂] = [1/y₂, ∞]
1 / [y₁, y₂] = [−∞, 1/y₁] ∪ [1/y₂, ∞] ⊆ [−∞, ∞] if y₁ < 0 < y₂.
The last case loses useful information about the exclusion of (1/y₁, 1/y₂). Thus, it is common to work with [−∞, 1/y₁] and [1/y₂, ∞] as separate intervals. More generally, when working with discontinuous functions, it is sometimes useful to do the calculation with so-called multi-intervals of the form [a₁, b₁] ∪ [a₂, b₂] ∪ ⋯ ∪ [aₖ, bₖ]. The corresponding multi-interval arithmetic maintains a set of (usually disjoint) intervals and also provides for overlapping intervals to unite.
Interval multiplication often only requires two multiplications. If x₁ and y₁ are nonnegative,
[x₁, x₂] · [y₁, y₂] = [x₁ · y₁, x₂ · y₂].
The multiplication can be interpreted as the area of a rectangle with varying edges. The result interval covers all possible areas, from the smallest to the largest.
With the help of these definitions, it is already possible to calculate the range of simple functions, such as f(a, b, x) = a · x + b. For example, if a = [1, 2], b = [5, 7] and x = [2, 3]:
f(a, b, x) = [1, 2] · [2, 3] + [5, 7] = [2, 6] + [5, 7] = [7, 13].
Notation
To shorten the notation of intervals, brackets can be used: [x] = [x₁, x₂] can be used to represent an interval. Note that in such a compact notation, [x] should not be confused between a single-point interval [x, x] and a general interval. For the set of all intervals, we can use [ℝ] as an abbreviation. For a vector of intervals ([x]₁, …, [x]ₙ) we can use a bold font: [x].
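The four basic operators above translate directly into code. The following Python sketch is our own illustration; it omits the outward rounding a real implementation needs (see Rounded interval arithmetic below) and restricts division to divisor intervals that do not contain zero.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # The extremes occur at endpoint combinations.
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

    def __truediv__(self, other):
        if other.lo <= 0.0 <= other.hi:
            raise ZeroDivisionError("divisor interval contains zero")
        return self * Interval(1.0 / other.hi, 1.0 / other.lo)

# The simple function from the text: f(a, b, x) = a*x + b
a, b, x = Interval(1, 2), Interval(5, 7), Interval(2, 3)
print(a * x + b)   # Interval(lo=7, hi=13)
```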
Elementary functions
Interval functions beyond the four basic operators may also be defined. For monotonic functions in one variable, the range of values is simple to compute. If f is monotonically increasing (resp. decreasing) on the interval [x₁, x₂], then f(x) ≤ f(y) (resp. f(y) ≤ f(x)) for all x, y ∈ [x₁, x₂] such that x ≤ y. The range corresponding to the interval [x₁, x₂] can therefore be calculated by applying the function to its endpoints:
f([x₁, x₂]) = [min{f(x₁), f(x₂)}, max{f(x₁), f(x₂)}].
From this, the following basic features for interval functions can easily be defined:
Exponential function: a^[x₁, x₂] = [a^x₁, a^x₂] for a > 1
Logarithm: log_a [x₁, x₂] = [log_a x₁, log_a x₂] for positive intervals [x₁, x₂] and a > 1
Odd powers: [x₁, x₂]ⁿ = [x₁ⁿ, x₂ⁿ], for odd n ∈ ℕ.
For even powers, the range of values being considered is important and needs to be dealt with before doing any multiplication. For example, [−1, 1]² should produce the interval [0, 1]. But if [−1, 1]² is taken by repeating interval multiplication of the form [−1, 1] · [−1, 1], then the result is [−1, 1], wider than necessary. More generally one can say that, for piecewise monotonic functions, it is sufficient to consider the endpoints x₁, x₂ of an interval, together with the so-called critical points within the interval, being those points where the monotonicity of the function changes direction. For the sine and cosine functions, the critical points are at (1/2 + k) · π or k · π for k ∈ ℤ, respectively. Thus, only up to five points within an interval need to be considered, as the resulting interval is [−1, 1] if the interval includes at least two extrema. For sine and cosine, only the endpoints need full evaluation, as the critical points lead to easily pre-calculated values, namely −1, 0, and 1.
Interval extensions of general functions
In general, it may not be easy to find such a simple description of the output interval for many functions. But it may still be possible to extend functions to interval arithmetic. If f : ℝⁿ → ℝ is a function from a real vector to a real number, then [f] : [ℝ]ⁿ → [ℝ] is called an interval extension of f if
[f]([x]) ⊇ {f(x) : x ∈ [x]}.
This definition of the interval extension does not give a precise result. For example, both [f]([x₁, x₂]) = [e^x₁, e^x₂] and [g]([x₁, x₂]) = [−∞, ∞] are allowable extensions of the exponential function. Tighter extensions are desirable, though the relative costs of calculation and imprecision should be considered; in this case, [f] should be chosen as it gives the tightest possible result.
Given a real expression, its natural interval extension is achieved by using the interval extensions of each of its subexpressions, functions, and operators.
The Taylor interval extension (of degree k) is defined for a k + 1 times differentiable function f by
[f]([x]) := f(z) + Σ (from i = 1 to k) (1/i!) · Dⁱf(z) · ([x] − z)ⁱ + [r]([x], z),
for some z ∈ [x], where Dⁱf(z) is the i-th order differential of f at the point z and [r] is an interval extension of the Taylor remainder. The vector ξ appearing in the remainder lies between x and z with x, z ∈ [x], so ξ is contained in [x]. Usually one chooses z to be the midpoint of the interval and uses the natural interval extension to assess the remainder.
The special case of the Taylor interval extension of degree 0 is also referred to as the mean value form.
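The even-power caveat above is easy to demonstrate in code. The following sketch (our own, building on the Interval class from the earlier example) gives tight extensions of exp and of squaring by treating the sign cases explicitly, and contrasts the latter with naive repeated multiplication.

```python
import math

def iexp(x):
    """Tight extension of exp, which is monotonically increasing."""
    return Interval(math.exp(x.lo), math.exp(x.hi))

def isqr(x):
    """Tight extension of x**2: handle the sign before multiplying."""
    if x.lo >= 0.0:
        return Interval(x.lo * x.lo, x.hi * x.hi)
    if x.hi <= 0.0:
        return Interval(x.hi * x.hi, x.lo * x.lo)
    return Interval(0.0, max(x.lo * x.lo, x.hi * x.hi))  # interval straddles 0

x = Interval(-1.0, 1.0)
print(isqr(x))   # Interval(lo=0.0, hi=1.0)   -- the true range of x**2
print(x * x)     # Interval(lo=-1.0, hi=1.0)  -- repeated multiplication, too wide
```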
Complex interval arithmetic
An interval can be defined as a set of points within a specified distance of the center, and this definition can be extended from real numbers to complex numbers. Another extension defines intervals as rectangles in the complex plane. As is the case with computing with real numbers, computing with complex numbers involves uncertain data. So, given the fact that an interval number is a real closed interval and a complex number is an ordered pair of real numbers, there is no reason to limit the application of interval arithmetic to the measure of uncertainties in computations with real numbers. Interval arithmetic can thus be extended, via complex interval numbers, to determine regions of uncertainty in computing with complex numbers. One can either define complex interval arithmetic using rectangles or using disks, both with their respective advantages and disadvantages. The basic algebraic operations for real interval numbers (real closed intervals) can be extended to complex numbers. It is therefore not surprising that complex interval arithmetic is similar to, but not the same as, ordinary complex arithmetic. It can be shown that, as is the case with real interval arithmetic, there is no distributivity between the addition and multiplication of complex interval numbers except for certain special cases, and inverse elements do not always exist for complex interval numbers. Two other useful properties of ordinary complex arithmetic fail to hold in complex interval arithmetic: the additive and multiplicative properties of ordinary complex conjugates do not hold for complex interval conjugates. Interval arithmetic can be extended, in an analogous manner, to other multidimensional number systems such as quaternions and octonions, but at the expense of having to sacrifice other useful properties of ordinary arithmetic.
Interval methods
The methods of classical numerical analysis cannot be transferred one-to-one into interval-valued algorithms, as dependencies between numerical values are usually not taken into account.
Rounded interval arithmetic
To work effectively in a real-life implementation, intervals must be compatible with floating point computing. The earlier operations were based on exact arithmetic, but in general fast numerical solution methods may not be available for it. For example, the range of values of the function f(x, y) = x + y for x ∈ [0.1, 0.8] and y ∈ [0.06, 0.08] is [0.16, 0.88]. Where the same calculation is done with single-digit precision, the result would normally be [0.2, 0.9]. But [0.2, 0.9] is not a superset of [0.16, 0.88], so this approach would contradict the basic principles of interval arithmetic, as a part of the domain of f would be lost. Instead, the outward rounded solution [0.1, 0.9] is used. The standard IEEE 754 for binary floating-point arithmetic also sets out procedures for the implementation of rounding. An IEEE 754 compliant system allows programmers to round to the nearest floating-point number; alternatives are rounding towards 0 (truncating), rounding toward positive infinity (i.e., up), or rounding towards negative infinity (i.e., down). The required outward rounding for interval arithmetic can thus be achieved by changing the rounding settings of the processor in the calculation of the upper limit (up) and lower limit (down). Alternatively, an appropriate small interval can be added.
Dependency problem
The so-called "dependency" problem is a major obstacle to the application of interval arithmetic. Although interval methods can determine the range of elementary arithmetic operations and functions very accurately, this is not always true with more complicated functions. If an interval occurs several times in a calculation using parameters, and each occurrence is taken independently, then this can lead to an unwanted expansion of the resulting intervals.
As an illustration, take the function f defined by f(x) = x² + x. The values of this function over the interval [−1, 1] are [−1/4, 2]. As the natural interval extension, it is calculated as
[−1, 1]² + [−1, 1] = [0, 1] + [−1, 1] = [−1, 2],
which is slightly larger; we have instead calculated the infimum and supremum of the function x² + y over x, y ∈ [−1, 1]. There is a better expression of f in which the variable x only appears once, namely by rewriting f(x) = x² + x as addition and squaring in the quadratic:
f(x) = (x + 1/2)² − 1/4.
So the suitable interval calculation is
([−1, 1] + 1/2)² − 1/4 = [−1/2, 3/2]² − 1/4 = [0, 9/4] − 1/4 = [−1/4, 2]
and gives the correct values.
In general, it can be shown that the exact range of values can be achieved if each variable appears only once and if f is continuous inside the box. However, not every function can be rewritten this way.
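Continuing the illustrative sketches above (reusing Interval and isqr), the dependency problem can be seen directly: the natural extension of f(x) = x² + x over [−1, 1] overestimates, while the equivalent single-occurrence form is tight.

```python
x = Interval(-1.0, 1.0)
half = Interval(0.5, 0.5)              # point intervals for the constants
quarter = Interval(0.25, 0.25)

natural = isqr(x) + x                  # [0, 1] + [-1, 1]  = [-1, 2]
rewritten = isqr(x + half) - quarter   # [0, 9/4] - 1/4    = [-1/4, 2]
print(natural)                         # Interval(lo=-1.0, hi=2.0)
print(rewritten)                       # Interval(lo=-0.25, hi=2.0)
```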
The dependency problem, causing over-estimation of the value range, can go as far as covering a large range, preventing more meaningful conclusions.
An additional increase in the range stems from the solution of areas that do not take the form of an interval vector. The solution set of the linear system
x = p, y = p, for p ∈ [−1, 1],
is precisely the line segment between the points (−1, −1) and (1, 1). Using interval methods results in the unit square [−1, 1] × [−1, 1]. This is known as the wrapping effect.
Linear interval systems
A linear interval system consists of a matrix interval extension [A] and an interval vector [b]. We want the smallest cuboid [x] containing all vectors x for which there is a pair (A, b) with A ∈ [A] and b ∈ [b] satisfying A · x = b. For quadratic systems – in other words, for systems with as many equations as unknowns – there can be such an interval vector [x], which covers all possible solutions, found simply with the interval Gauss method. This replaces the numerical operations, in that the linear algebra method known as Gaussian elimination becomes its interval version. However, since this method uses the interval entities [A] and [b] repeatedly in the calculation, it can produce poor results for some problems. Hence using the result of the interval-valued Gauss only provides first rough estimates, since although it contains the entire solution set, it also has a large area outside it.
A rough solution [x] can often be improved by an interval version of the Gauss–Seidel method. The motivation for this is that the i-th row of the interval extension of the linear equation,
[a_i1] · x_1 + ⋯ + [a_in] · x_n = [b_i],
can be solved for the variable x_i if the division 1/[a_ii] is allowed. It is therefore simultaneously
x_i ∈ [x_i] and x_i ∈ ([b_i] − Σ (over k ≠ i) [a_ik] · [x_k]) / [a_ii].
So we can now replace [x_i] by the intersection of these two enclosures, and so update the vector [x] element by element. Since the procedure is more efficient for a diagonally dominant matrix, instead of the system A · x = b one can often try multiplying it by an appropriate rational matrix M, with the resulting matrix equation
(M · A) · x = M · b
left to solve. If one chooses, for example, M to be an approximate inverse of the central matrix of [A], then M · [A] is an outer extension of the identity matrix.
These methods only work well if the widths of the intervals occurring are sufficiently small. For wider intervals, it can be useful to reduce an interval-linear system to finite (albeit large) real-number equivalent linear systems. If all the matrices A ∈ [A] are invertible, it is sufficient to consider all possible combinations (upper and lower) of the endpoints occurring in the intervals. The resulting problems can be resolved using conventional numerical methods. Interval arithmetic is still used to determine rounding errors. This is only suitable for systems of smaller dimension, since with a fully occupied matrix many real matrices need to be inverted, with many vectors for the right-hand side. This approach was developed by Jiri Rohn and is still being developed.
Interval Newton method
An interval variant of Newton's method for finding the zeros in an interval vector [x] can be derived from the mean value extension. For an unknown vector z ∈ [x], applied to y ∈ [x], it gives
f(z) ∈ f(y) + [J_f]([x]) · (z − y).
For a zero z, that is f(z) = 0, and thus z must satisfy
0 ∈ f(y) + [J_f]([x]) · (z − y).
This is equivalent to z ∈ y − [J_f]([x])⁻¹ · f(y). An outer estimate of the right-hand side can be determined using linear methods.
In each step of the interval Newton method, an approximate starting value [x] is replaced by the intersection of [x] with this outer estimate, and so the result can be improved. In contrast to traditional methods, the interval method approaches the result by containing the zeros. This guarantees that the result produces all zeros in the initial range. Conversely, it proves that no zeros of f were in the initial range [x] if a Newton step produces the empty set. The method converges on all zeros in the starting region.
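A minimal sketch of this iteration in Python, specialized to the concrete example discussed next (f(x) = x² − 2, with derivative enclosure 2·[x]): the range-based pruning test and the bisection fallback are our own implementation choices, and a full implementation would use extended interval division and outward rounding instead.

```python
import math

def f(x):
    return x * x - 2.0

def f_range(lo, hi):
    """Tight enclosure of f(x) = x**2 - 2 over [lo, hi]."""
    squares = (lo * lo, hi * hi)
    smallest = 0.0 if lo <= 0.0 <= hi else min(squares)
    return smallest - 2.0, max(squares) - 2.0

def refine(lo, hi, tol=1e-10):
    """Return small boxes enclosing all zeros of f in [lo, hi]."""
    flo, fhi = f_range(lo, hi)
    if flo > 0.0 or fhi < 0.0:        # f cannot vanish here: discard the box
        return []
    if hi - lo < tol:
        return [(lo, hi)]
    dlo, dhi = 2.0 * lo, 2.0 * hi     # enclosure of f'(x) = 2x on [lo, hi]
    m = 0.5 * (lo + hi)
    if dlo <= 0.0 <= dhi:             # Newton step undefined: bisect instead
        return refine(lo, m, tol) + refine(m, hi, tol)
    q = (f(m) / dlo, f(m) / dhi)      # interval quotient f(m) / f'([x])
    new_lo, new_hi = m - max(q), m - min(q)
    lo2, hi2 = max(lo, new_lo), min(hi, new_hi)   # intersect with current box
    if lo2 > hi2:                     # empty intersection: no zero in this box
        return []
    return refine(lo2, hi2, tol)

print(refine(-2.0, 2.0))              # two tiny boxes, around -sqrt(2) and +sqrt(2)
print(math.sqrt(2))
```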
Division by zero can lead to the separation of distinct zeros, though the separation may not be complete; it can be complemented by the bisection method. As an example, consider the function f(x) = x² − 2, the starting range [x] = [−2, 2], and the point y = 0. We then have J_f(x) = 2x and the first Newton step gives
[−2, 2] ∩ (0 − (0² − 2)/(2 · [−2, 2])) = [−2, 2] ∩ ([−∞, −1/2] ∪ [1/2, ∞]) = [−2, −1/2] ∪ [1/2, 2].
More Newton steps are used separately on [−2, −1/2] and [1/2, 2]. These converge to arbitrarily small intervals around −√2 and +√2. The interval Newton method can also be used with thick functions, whose values are themselves intervals and which would in any case have interval results; the result then produces intervals containing the zeros.
Bisection and covers
The various interval methods deliver conservative results, as dependencies between the sizes of different interval extensions are not taken into account. However, the dependency problem becomes less significant for narrower intervals. Covering an interval vector [x] by smaller boxes [x]⁽¹⁾, …, [x]⁽ᵏ⁾, so that [x] = [x]⁽¹⁾ ∪ ⋯ ∪ [x]⁽ᵏ⁾, is then valid for the range of values:
f([x]) = f([x]⁽¹⁾) ∪ ⋯ ∪ f([x]⁽ᵏ⁾).
So, for the interval extensions described above, the following holds:
[f]([x]) ⊇ [f]([x]⁽¹⁾) ∪ ⋯ ∪ [f]([x]⁽ᵏ⁾).
Since [f]([x]) is often a genuine superset of the right-hand side, this usually leads to an improved estimate. Such a cover can be generated by the bisection method, splitting thick elements [x₁, x₂] of the interval vector in the center into the two intervals [x₁, (x₁ + x₂)/2] and [(x₁ + x₂)/2, x₂]. If the result is still not suitable, then further gradual subdivision is possible. A cover of many intervals results from repeated divisions of vector elements, substantially increasing the computation costs. With very wide intervals, it can be helpful to split all intervals into several subintervals with a constant (and smaller) width, a method known as mincing. This then avoids the calculations for intermediate bisection steps. Both methods are only suitable for problems of low dimension.
Application
Interval arithmetic can be used in various areas (such as set inversion, motion planning, set estimation, or stability analysis) to treat estimates with no exact numerical value.
Rounding error analysis
Interval arithmetic is used with error analysis to control rounding errors arising from each calculation. The advantage of interval arithmetic is that after each operation there is an interval that reliably includes the true result. The distance between the interval boundaries gives the current calculation of rounding errors directly: Error = b − a for a given interval [a, b]. Interval analysis adds to, rather than substitutes for, traditional methods for error reduction, such as pivoting.
Tolerance analysis
Parameters for which no exact figures can be allocated often arise during the simulation of technical and physical processes. The production process of technical components allows certain tolerances, so some parameters fluctuate within intervals. In addition, many fundamental constants are not known precisely. If the behavior of such a system affected by tolerances satisfies, for example, f(x, p) = 0 for an unknown x and parameters p confined to an interval vector [p], then the set of possible solutions
{x : there exists p ∈ [p] with f(x, p) = 0}
can be found by interval methods. This provides an alternative to traditional propagation of error analysis. Unlike point methods, such as Monte Carlo simulation, interval arithmetic methodology ensures that no part of the solution area can be overlooked. However, the result is always a worst-case analysis for the distribution of error, as other probability-based distributions are not considered.
Fuzzy interval arithmetic
Interval arithmetic can also be used with affiliation functions for fuzzy quantities as they are used in fuzzy logic. Apart from the strict statements x ∈ [x] and x ∉ [x], intermediate values are also possible, to which real numbers from the interval [0, 1] are assigned.
Fuzzy interval arithmetic

Interval arithmetic can also be used with affiliation functions for fuzzy quantities as they are used in fuzzy logic. Apart from the strict statements $x \in [x]$ and $x \notin [x]$, intermediate values are also possible, to which real numbers $\mu \in [0, 1]$ are assigned. $\mu = 1$ corresponds to definite membership, while $\mu = 0$ is non-membership. A distribution function assigns uncertainty, which can be understood as a further interval. For fuzzy arithmetic only a finite number of discrete affiliation stages $\mu_i \in [0, 1]$ are considered. The form of such a distribution for an indistinct value can then be represented by a sequence of intervals

$\left[x^{(1)}\right] \supset \left[x^{(2)}\right] \supset \cdots \supset \left[x^{(k)}\right]$.

The interval $\left[x^{(i)}\right]$ corresponds exactly to the fluctuation range for the stage $\mu_i$. The appropriate distribution for a function $f(x_1, \ldots, x_n)$ concerning indistinct values $x_1, \ldots, x_n$ and the corresponding sequences

$\left[x_1^{(1)}\right] \supset \cdots \supset \left[x_1^{(k)}\right], \quad \ldots, \quad \left[x_n^{(1)}\right] \supset \cdots \supset \left[x_n^{(k)}\right]$

can be approximated by the sequence

$\left[y^{(1)}\right] \supset \cdots \supset \left[y^{(k)}\right]$,

where $\left[y^{(i)}\right] = [f]\left(\left[x_1^{(i)}\right], \ldots, \left[x_n^{(i)}\right]\right)$, which can be calculated by interval methods. The value $\left[y^{(1)}\right]$ corresponds to the result of an ordinary interval calculation.
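As a sketch of how the interval sequences above translate into code, the following Python fragment represents a fuzzy value by its intervals at a few discrete membership stages and applies an operation stage by stage. It is a toy illustration under assumed conventions, not code from the fuzzy-arithmetic literature.

```python
def iadd(a, b):
    """Sum of two intervals (lo, hi)."""
    return (a[0] + b[0], a[1] + b[1])

def fuzzy_apply(op, xs, ys):
    """Apply a binary interval operation stage-wise to two fuzzy values,
    each given as a list of nested intervals from mu near 0 (widest)
    up to mu = 1 (narrowest)."""
    return [op(x, y) for x, y in zip(xs, ys)]

# "about 2" and "about 3", each with three membership stages:
about2 = [(1.0, 3.0), (1.5, 2.5), (2.0, 2.0)]
about3 = [(2.0, 4.0), (2.5, 3.5), (3.0, 3.0)]

print(fuzzy_apply(iadd, about2, about3))
# -> [(3.0, 7.0), (4.0, 6.0), (5.0, 5.0)]   i.e. "about 5"
```

The first entry of the result is exactly what a single interval calculation on the widest stage would give, matching the remark about $[y^{(1)}]$ above.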
Computer-assisted proof

Warwick Tucker used interval arithmetic to solve the 14th of Smale's problems, that is, to show that the Lorenz attractor is a strange attractor. Thomas Hales used interval arithmetic to prove the Kepler conjecture.

History

Interval arithmetic is not a completely new phenomenon in mathematics; it has appeared several times under different names in the course of history. For example, Archimedes calculated lower and upper bounds 223/71 < π < 22/7 in the 3rd century BC. Actual calculation with intervals has neither been as popular as other numerical techniques nor been completely forgotten. Rules for calculating with intervals and other subsets of the real numbers were published in a 1931 work by Rosalind Cicely Young. Arithmetic work on range numbers to improve the reliability of digital systems was then published in a 1951 textbook on linear algebra by Paul S. Dwyer; intervals were used to measure rounding errors associated with floating-point numbers. A comprehensive paper on interval algebra in numerical analysis was published by Teruo Sunaga (1958). The birth of modern interval arithmetic was marked by the appearance of the book Interval Analysis by Ramon E. Moore in 1966. He had the idea in spring 1958, and a year later he published an article about computer interval arithmetic. Its merit was that, starting with a simple principle, it provided a general method for automated error analysis, not just errors resulting from rounding. Independently, in 1956, Mieczyslaw Warmus had suggested formulae for calculations with intervals, though Moore found the first non-trivial applications. In the following twenty years, German groups of researchers carried out pioneering work around Ulrich W. Kulisch and Karl Nickel at the University of Karlsruhe and later also at the Bergische University of Wuppertal. For example, Karl Nickel explored more effective implementations, while improved containment procedures for the solution set of systems of equations were due to Arnold Neumaier, among others. In the 1960s, Eldon R. Hansen dealt with interval extensions for linear equations and then provided crucial contributions to global optimization, including what is now known as Hansen's method, perhaps the most widely used interval algorithm. Classical methods in this area often have the problem of determining the largest (or smallest) global value but can only find a local optimum, without being able to find better values; Helmut Ratschek and Jon George Rokne developed branch and bound methods, which until then had only applied to integer values, by using intervals to provide applications for continuous values. In 1988, Rudolf Lohner developed Fortran-based software for reliable solutions for initial value problems using ordinary differential equations.

The journal Reliable Computing (originally Interval Computations) has been published since the 1990s, dedicated to the reliability of computer-aided computations. As lead editor, R. Baker Kearfott, in addition to his work on global optimization, has contributed significantly to the unification of notation and terminology used in interval arithmetic. In recent years, work has concentrated in particular on the estimation of preimages of parameterized functions and on robust control theory by the COPRIN working group of INRIA in Sophia Antipolis in France.

Implementations

There are many software packages that permit the development of numerical applications using interval arithmetic. These are usually provided in the form of program libraries. There are also C++ and Fortran compilers that handle interval data types and suitable operations as a language extension, so interval arithmetic is supported directly.

Since 1967, Extensions for Scientific Computation (XSC) have been developed at the University of Karlsruhe for various programming languages, such as C++, Fortran, and Pascal. The first platform was a Zuse Z23, for which a new interval data type with appropriate elementary operators was made available. There followed in 1976 Pascal-SC, a Pascal variant on a Zilog Z80 that made it possible to create fast, complicated routines for automated result verification. Then came the Fortran 77-based ACRITH-XSC for the System/370 architecture (FORTRAN-SC), which was later delivered by IBM. Starting from 1991 one could produce code for C compilers with Pascal-XSC; a year later the C++ class library C-XSC followed, supporting many different computer systems. In 1997, all XSC variants were made available under the GNU General Public License. At the beginning of 2000, C-XSC 2.0 was released under the leadership of the working group for scientific computation at the Bergische University of Wuppertal to correspond to the improved C++ standard.

Another C++ class library was created in 1993 at the Hamburg University of Technology, called Profil/BIAS (Programmer's Runtime Optimized Fast Interval Library, Basic Interval Arithmetic), which made the usual interval operations more user-friendly. It emphasized the efficient use of hardware, portability, and independence of a particular presentation of intervals.

The Boost collection of C++ libraries contains a template class for intervals. Its authors are aiming to have interval arithmetic in the standard C++ language.

The Frink programming language has an implementation of interval arithmetic that handles arbitrary-precision numbers. Programs written in Frink can use intervals without rewriting or recompilation.

GAOL is another C++ interval arithmetic library that is unique in that it offers the relational interval operators used in interval constraint programming.

The Moore library is an efficient implementation of interval arithmetic in C++. It provides intervals with endpoints of arbitrary precision and is based on the concepts feature of C++.

The Julia programming language has an implementation of interval arithmetic along with high-level features, such as root-finding (for both real- and complex-valued functions) and interval constraint programming, via the ValidatedNumerics.jl package.

In addition, computer algebra systems, such as Euler Mathematical Toolbox, FriCAS, Maple, Mathematica, Maxima and MuPAD, can handle intervals. The MATLAB extension INTLAB builds on BLAS routines, and the toolbox b4m provides a Profil/BIAS interface.
A library for the functional language OCaml was written in assembly language and C.

IEEE 1788 standard

A standard for interval arithmetic, IEEE Std 1788-2015, was approved in June 2015. Two reference implementations are freely available. These have been developed by members of the standard's working group: the libieeep1788 library for C++, and the interval package for GNU Octave. A minimal subset of the standard, IEEE Std 1788.1-2017, was approved in December 2017 and published in February 2018. It should be easier to implement and may speed production of implementations.

Conferences and workshops

Several international conferences and workshops take place around the world every year. The main conference is probably SCAN (International Symposium on Scientific Computing, Computer Arithmetic, and Verified Numerical Computation), but there are also SWIM (Small Workshop on Interval Methods), PPAM (International Conference on Parallel Processing and Applied Mathematics), and REC (International Workshop on Reliable Engineering Computing).

See also

Affine arithmetic
INTLAB (Interval Laboratory)
Automatic differentiation
Multigrid method
Monte-Carlo simulation
Interval finite element
Fuzzy number
Significant figures
Karlsruhe Accurate Arithmetic (KAA)
Unum

External links

Interval arithmetic (Wolfram Mathworld)
Validated Numerics for Pedestrians
Interval Methods from Arnold Neumaier, University of Vienna
SWIM (Summer Workshop on Interval Methods)
International Conference on Parallel Processing and Applied Mathematics
INTLAB, Institute for Reliable Computing, Hamburg University of Technology
Ball arithmetic by Joris van der Hoeven
kv - a C++ Library for Verified Numerical Computation
Arb - a C library for arbitrary-precision ball arithmetic

Arithmetic Computer arithmetic Numerical analysis Data types
Interval arithmetic
Mathematics
5,695
13,256
https://en.wikipedia.org/wiki/Helium
Helium is a chemical element; it has symbol He and atomic number 2. It is a colorless, odorless, non-toxic, inert, monatomic gas and the first in the noble gas group in the periodic table. Its boiling point is the lowest among all the elements, and it does not have a melting point at standard pressures. It is the second-lightest and second most abundant element in the observable universe, after hydrogen. It is present at about 24% of the total elemental mass, which is more than 12 times the mass of all the heavier elements combined. Its abundance is similar in both the Sun and Jupiter, because of the very high nuclear binding energy (per nucleon) of helium-4 with respect to the next three elements after helium. This helium-4 binding energy also accounts for why it is a product of both nuclear fusion and radioactive decay. The most common isotope of helium in the universe is helium-4, the vast majority of which was formed during the Big Bang. Large amounts of new helium are created by nuclear fusion of hydrogen in stars.

Helium was first detected as an unknown, yellow spectral line signature in sunlight during a solar eclipse in 1868 by Georges Rayet, Captain C. T. Haig, Norman R. Pogson, and Lieutenant John Herschel, and was subsequently confirmed by French astronomer Jules Janssen. Janssen is often jointly credited with detecting the element, along with Norman Lockyer. Janssen recorded the helium spectral line during the solar eclipse of 1868, while Lockyer observed it from Britain. However, only Lockyer proposed that the line was due to a new element, which he named after the Sun. The formal discovery of the element was made in 1895 by chemists Sir William Ramsay, Per Teodor Cleve, and Nils Abraham Langlet, who found helium emanating from the uranium ore cleveite, which is now not regarded as a separate mineral species but as a variety of uraninite. In 1903, large reserves of helium were found in natural gas fields in parts of the United States, by far the largest supplier of the gas today.

Liquid helium is used in cryogenics (its largest single use, consuming about a quarter of production), and in the cooling of superconducting magnets, with its main commercial application in MRI scanners. Helium's other industrial uses—as a pressurizing and purge gas, as a protective atmosphere for arc welding, and in processes such as growing crystals to make silicon wafers—account for half of the gas produced. A small but well-known use is as a lifting gas in balloons and airships. As with any gas whose density differs from that of air, inhaling a small volume of helium temporarily changes the timbre and quality of the human voice. In scientific research, the behavior of the two fluid phases of helium-4 (helium I and helium II) is important to researchers studying quantum mechanics (in particular the property of superfluidity) and to those looking at the phenomena, such as superconductivity, produced in matter near absolute zero.

On Earth, helium is relatively rare—5.2 ppm by volume in the atmosphere. Most terrestrial helium present today is created by the natural radioactive decay of heavy radioactive elements (thorium and uranium, although there are other examples), as the alpha particles emitted by such decays consist of helium-4 nuclei. This radiogenic helium is trapped with natural gas in concentrations as great as 7% by volume, from which it is extracted commercially by a low-temperature separation process called fractional distillation.
Terrestrial helium is a non-renewable resource because once released into the atmosphere, it promptly escapes into space. Its supply is thought to be rapidly diminishing. However, some studies suggest that helium produced deep in the Earth by radioactive decay can collect in natural gas reserves in larger-than-expected quantities, in some cases having been released by volcanic activity.

History

Scientific discoveries

The first evidence of helium was observed on August 18, 1868, as a bright yellow line with a wavelength of 587.49 nanometers in the spectrum of the chromosphere of the Sun. The line was detected by French astronomer Jules Janssen during a total solar eclipse in Guntur, India. This line was initially assumed to be sodium. On October 20 of the same year, English astronomer Norman Lockyer observed a yellow line in the solar spectrum, which he named the D3 line because it was near the known D1 and D2 Fraunhofer lines of sodium. He concluded that it was caused by an element in the Sun unknown on Earth. Lockyer named the element with the Greek word for the Sun, ἥλιος (helios). It is sometimes said that English chemist Edward Frankland was also involved in the naming, but this is unlikely as he doubted the existence of this new element. The ending "-ium" is unusual, as it normally applies only to metallic elements; probably Lockyer, being an astronomer, was unaware of the chemical conventions.

In 1881, Italian physicist Luigi Palmieri detected helium on Earth for the first time through its D3 spectral line, when he analyzed a material that had been sublimated during a recent eruption of Mount Vesuvius. On March 26, 1895, Scottish chemist Sir William Ramsay isolated helium on Earth by treating the mineral cleveite (a variety of uraninite with at least 10% rare-earth elements) with mineral acids. Ramsay was looking for argon but, after separating nitrogen and oxygen from the gas liberated by sulfuric acid, he noticed a bright yellow line that matched the D3 line observed in the spectrum of the Sun. These samples were identified as helium by Lockyer and British physicist William Crookes. It was independently isolated from cleveite in the same year by chemists Per Teodor Cleve and Abraham Langlet in Uppsala, Sweden, who collected enough of the gas to accurately determine its atomic weight. Helium was also isolated by American geochemist William Francis Hillebrand prior to Ramsay's discovery, when he noticed unusual spectral lines while testing a sample of the mineral uraninite. Hillebrand, however, attributed the lines to nitrogen. His letter of congratulations to Ramsay offers an interesting case of discovery, and near-discovery, in science.

In 1907, Ernest Rutherford and Thomas Royds demonstrated that alpha particles are helium nuclei by allowing the particles to penetrate the thin glass wall of an evacuated tube, then creating a discharge in the tube to study the spectrum of the new gas inside. In 1908, helium was first liquefied by Dutch physicist Heike Kamerlingh Onnes by cooling the gas to below 4.2 K. He tried to solidify it by further reducing the temperature but failed, because helium does not solidify at atmospheric pressure. Onnes' student Willem Hendrik Keesom was eventually able to solidify 1 cm3 of helium in 1926 by applying additional external pressure.

In 1913, Niels Bohr published his "trilogy" on atomic structure that included a reconsideration of the Pickering–Fowler series as central evidence in support of his model of the atom.
This series is named for Edward Charles Pickering, who in 1896 published observations of previously unknown lines in the spectrum of the star ζ Puppis (these are now known to occur with Wolf–Rayet and other hot stars). Pickering attributed the observation (lines at 4551, 5411, and 10123 Å) to a new form of hydrogen with half-integer transition levels. In 1912, Alfred Fowler managed to produce similar lines from a hydrogen-helium mixture, and supported Pickering's conclusion as to their origin. Bohr's model does not allow for half-integer transitions (nor does quantum mechanics), and Bohr concluded that Pickering and Fowler were wrong, and instead assigned these spectral lines to ionised helium, He+. Fowler was initially skeptical but was ultimately convinced that Bohr was correct, and by 1915 "spectroscopists had transferred [the Pickering–Fowler series] definitively [from hydrogen] to helium." Bohr's theoretical work on the Pickering series had demonstrated the need for "a re-examination of problems that seemed already to have been solved within classical theories" and provided important confirmation for his atomic theory.

In 1938, Russian physicist Pyotr Leonidovich Kapitsa discovered that helium-4 has almost no viscosity at temperatures near absolute zero, a phenomenon now called superfluidity. This phenomenon is related to Bose–Einstein condensation. In 1972, the same phenomenon was observed in helium-3, but at temperatures much closer to absolute zero, by American physicists Douglas D. Osheroff, David M. Lee, and Robert C. Richardson. The phenomenon in helium-3 is thought to be related to pairing of helium-3 fermions to make bosons, in analogy to Cooper pairs of electrons producing superconductivity. In 1961, Vignos and Fairbank reported the existence of a different phase of solid helium-4, designated the gamma-phase. It exists over a narrow range of pressures at temperatures between 1.45 and 1.78 K.

Extraction and use

After an oil drilling operation in 1903 in Dexter, Kansas produced a gas geyser that would not burn, Kansas state geologist Erasmus Haworth collected samples of the escaping gas and took them back to the University of Kansas at Lawrence where, with the help of chemists Hamilton Cady and David McFarland, he discovered that the gas consisted of, by volume, 72% nitrogen, 15% methane (a combustible percentage only with sufficient oxygen), 1% hydrogen, and 12% an unidentifiable gas. With further analysis, Cady and McFarland discovered that 1.84% of the gas sample was helium. This showed that despite its overall rarity on Earth, helium was concentrated in large quantities under the American Great Plains, available for extraction as a byproduct of natural gas.

Following a suggestion by Sir Richard Threlfall, the United States Navy sponsored three small experimental helium plants during World War I. The goal was to supply barrage balloons with the non-flammable, lighter-than-air gas. A substantial quantity of 92% helium was produced in the program, even though less than a cubic meter of the gas had previously been obtained. Some of this gas was used in the world's first helium-filled airship, the U.S. Navy's C-class blimp C-7, which flew its maiden voyage from Hampton Roads, Virginia, to Bolling Field in Washington, D.C., on December 1, 1921, nearly two years before the Navy's first rigid helium-filled airship, the Naval Aircraft Factory-built USS Shenandoah, flew in September 1923.
Although the extraction process using low-temperature gas liquefaction was not developed in time to be significant during World War I, production continued. Helium was primarily used as a lifting gas in lighter-than-air craft. During World War II, demand increased for helium both as a lifting gas and for shielded arc welding. The helium mass spectrometer was also vital in the atomic bomb Manhattan Project.

The government of the United States set up the National Helium Reserve in 1925 at Amarillo, Texas, with the goal of supplying military airships in time of war and commercial airships in peacetime. Because of the Helium Act of 1925, which banned the export of scarce helium on which the US then had a production monopoly, together with the prohibitive cost of the gas, German Zeppelins were forced to use hydrogen as lifting gas, which would gain infamy in the Hindenburg disaster. The helium market after World War II was depressed, but the reserve was expanded in the 1950s to ensure a supply of liquid helium as a coolant to create oxygen/hydrogen rocket fuel (among other uses) during the Space Race and Cold War. Helium use in the United States in 1965 was more than eight times the peak wartime consumption.

After the Helium Acts Amendments of 1960 (Public Law 86–777), the U.S. Bureau of Mines arranged for five private plants to recover helium from natural gas. For this helium conservation program, the Bureau built a pipeline from Bushton, Kansas, to connect those plants with the government's partially depleted Cliffside gas field near Amarillo, Texas. This helium-nitrogen mixture was injected and stored in the Cliffside gas field until needed, at which time it was further purified.

By 1995, a billion cubic meters of the gas had been collected and the reserve was US$1.4 billion in debt, prompting the Congress of the United States in 1996 to discontinue the reserve. The resulting Helium Privatization Act of 1996 (Public Law 104–273) directed the United States Department of the Interior to empty the reserve, with sales starting by 2005.

Helium produced between 1930 and 1945 was about 98.3% pure (2% nitrogen), which was adequate for airships. In 1945, a small amount of 99.9% helium was produced for welding use. By 1949, commercial quantities of Grade A 99.95% helium were available.

For many years, the United States produced more than 90% of commercially usable helium in the world, while extraction plants in Canada, Poland, Russia, and other nations produced the remainder. In the mid-1990s, a new plant in Arzew, Algeria, began operation, with enough production to cover all of Europe's demand. Meanwhile, by 2000, the consumption of helium within the U.S. had risen to more than 15 million kg per year. In 2004–2006, additional plants in Ras Laffan, Qatar, and Skikda, Algeria were built. Algeria quickly became the second leading producer of helium. Through this time, both helium consumption and the costs of producing helium increased. From 2002 to 2007 helium prices doubled.

In the early 2010s, the United States National Helium Reserve accounted for 30 percent of the world's helium. The reserve was expected to run out of helium in 2018. Despite that, a proposed bill in the United States Senate would allow the reserve to continue to sell the gas. Other large reserves were in the Hugoton in Kansas, United States, and nearby gas fields of Kansas and the panhandles of Texas and Oklahoma.
New helium plants were scheduled to open in 2012 in Qatar, Russia, and the US state of Wyoming, but they were not expected to ease the shortage. In 2013, Qatar started up the world's largest helium unit, although the 2017 Qatar diplomatic crisis severely affected helium production there. 2014 was widely acknowledged to be a year of over-supply in the helium business, following years of well-publicized shortages. Nasdaq reported in 2015 that, for Air Products, an international corporation that sells gases for industrial use, helium volumes remained under economic pressure due to feedstock supply constraints.

Characteristics

Atom

In quantum mechanics

From the perspective of quantum mechanics, helium is the second simplest atom to model, following the hydrogen atom. Helium is composed of two electrons in atomic orbitals surrounding a nucleus containing two protons and (usually) two neutrons. As in Newtonian mechanics, no system that consists of more than two particles can be solved with an exact analytical mathematical approach (see 3-body problem), and helium is no exception. Thus, numerical mathematical methods are required, even to solve the system of one nucleus and two electrons. Such computational chemistry methods have been used to create a quantum mechanical picture of helium electron binding which is accurate to within < 2% of the correct value, in a few computational steps. Such models show that each electron in helium partly screens the nucleus from the other, so that the effective nuclear charge Zeff which each electron sees is about 1.69 units, not the 2 charges of a classic "bare" helium nucleus.

Related stability of the helium-4 nucleus and electron shell

The nucleus of the helium-4 atom is identical with an alpha particle. High-energy electron-scattering experiments show its charge to decrease exponentially from a maximum at a central point, exactly as does the charge density of helium's own electron cloud. This symmetry reflects similar underlying physics: the pair of neutrons and the pair of protons in helium's nucleus obey the same quantum mechanical rules as do helium's pair of electrons (although the nuclear particles are subject to a different nuclear binding potential), so that all these fermions fully occupy 1s orbitals in pairs, none of them possessing orbital angular momentum, and each cancelling the other's intrinsic spin. This arrangement is thus energetically extremely stable for all these particles and has astrophysical implications. Namely, adding another particle – proton, neutron, or alpha particle – would consume rather than release energy; all systems with mass number 5, as well as beryllium-8 (comprising two alpha particles), are unbound.

For example, the stability and low energy of the electron cloud state in helium accounts for the element's chemical inertness, and also the lack of interaction of helium atoms with each other, producing the lowest melting and boiling points of all the elements. In a similar way, the particular energetic stability of the helium-4 nucleus, produced by similar effects, accounts for the ease of helium-4 production in atomic reactions that involve either heavy-particle emission or fusion. Some stable helium-3 (two protons and one neutron) is produced in fusion reactions from hydrogen, though its estimated abundance in the universe is several orders of magnitude lower than that of helium-4.
The unusual stability of the helium-4 nucleus is also important cosmologically: it explains the fact that in the first few minutes after the Big Bang, as the "soup" of free protons and neutrons which had initially been created in about 6:1 ratio cooled to the point that nuclear binding was possible, almost all first compound atomic nuclei to form were helium-4 nuclei. Owing to the relatively tight binding of helium-4 nuclei, its production consumed nearly all of the free neutrons in a few minutes, before they could beta-decay, and thus few neutrons were available to form heavier atoms such as lithium, beryllium, or boron. Helium-4 nuclear binding per nucleon is stronger than in any of these elements (see nucleogenesis and binding energy) and thus, once helium had been formed, no energetic drive was available to make elements 3, 4 and 5. It is barely energetically favorable for helium to fuse into the next element with a lower energy per nucleon, carbon. However, due to the short lifetime of the intermediate beryllium-8, this process requires three helium nuclei striking each other nearly simultaneously (see triple-alpha process). There was thus no time for significant carbon to be formed in the few minutes after the Big Bang, before the early expanding universe cooled to the temperature and pressure point where helium fusion to carbon was no longer possible. This left the early universe with a very similar ratio of hydrogen/helium as is observed today (3 parts hydrogen to 1 part helium-4 by mass), with nearly all the neutrons in the universe trapped in helium-4. All heavier elements (including those necessary for rocky planets like the Earth, and for carbon-based or other life) have thus been created since the Big Bang in stars which were hot enough to fuse helium itself. All elements other than hydrogen and helium today account for only 2% of the mass of atomic matter in the universe. Helium-4, by contrast, comprises about 24% of the mass of the universe's ordinary matter—nearly all the ordinary matter that is not hydrogen. Gas and plasma phases Helium is the second least reactive noble gas after neon, and thus the second least reactive of all elements. It is chemically inert and monatomic in all standard conditions. Because of helium's relatively low molar (atomic) mass, its thermal conductivity, specific heat, and sound speed in the gas phase are all greater than any other gas except hydrogen. For these reasons and the small size of helium monatomic molecules, helium diffuses through solids at a rate three times that of air and around 65% that of hydrogen. Helium is the least water-soluble monatomic gas, and one of the least water-soluble of any gas (CF4, SF6, and C4F8 have lower mole fraction solubilities: 0.3802, 0.4394, and 0.2372 x2/10−5, respectively, versus helium's 0.70797 x2/10−5), and helium's index of refraction is closer to unity than that of any other gas. Helium has a negative Joule–Thomson coefficient at normal ambient temperatures, meaning it heats up when allowed to freely expand. Only below its Joule–Thomson inversion temperature (of about 32 to 50 K at 1 atmosphere) does it cool upon free expansion. Once precooled below this temperature, helium can be liquefied through expansion cooling. Most extraterrestrial helium is plasma in stars, with properties quite different from those of atomic helium. In a plasma, helium's electrons are not bound to its nucleus, resulting in very high electrical conductivity, even when the gas is only partially ionized. 
The charged particles are highly influenced by magnetic and electric fields. For example, in the solar wind together with ionized hydrogen, the particles interact with the Earth's magnetosphere, giving rise to Birkeland currents and the aurora.

Liquid phase

Helium liquefies when cooled below 4.2 K at atmospheric pressure. Unlike any other element, however, helium remains liquid down to absolute zero. This is a direct effect of quantum mechanics: specifically, the zero point energy of the system is too high to allow freezing. Pressures above about 25 atmospheres are required to freeze it. There are two liquid phases: Helium I is a conventional liquid, and Helium II, which occurs at a lower temperature, is a superfluid.

Helium I

Below its boiling point of 4.22 K and above the lambda point of 2.17 K, the isotope helium-4 exists in a normal colorless liquid state, called helium I. Like other cryogenic liquids, helium I boils when it is heated and contracts when its temperature is lowered. Below the lambda point, however, helium does not boil, and it expands as the temperature is lowered further. Helium I has a gas-like index of refraction of 1.026, which makes its surface so hard to see that floats of Styrofoam are often used to show where the surface is. This colorless liquid has a very low viscosity and a density of 0.145–0.125 g/mL (between about 0 and 4 K), which is only one-fourth the value expected from classical physics. Quantum mechanics is needed to explain this property, and thus both states of liquid helium (helium I and helium II) are called quantum fluids, meaning they display atomic properties on a macroscopic scale. This may be an effect of its boiling point being so close to absolute zero, preventing random molecular motion (thermal energy) from masking the atomic properties.

Helium II

Liquid helium below its lambda point (called helium II) exhibits very unusual characteristics. Due to its high thermal conductivity, when it boils, it does not bubble but rather evaporates directly from its surface. Helium-3 also has a superfluid phase, but only at much lower temperatures; as a result, less is known about the properties of the isotope.

Helium II is a superfluid, a quantum mechanical state of matter with strange properties. For example, when it flows through capillaries as thin as 10 to 100 nm it has no measurable viscosity. However, when measurements were done between two moving discs, a viscosity comparable to that of gaseous helium was observed. Existing theory explains this using the two-fluid model for helium II. In this model, liquid helium below the lambda point is viewed as containing a proportion of helium atoms in a ground state, which are superfluid and flow with exactly zero viscosity, and a proportion of helium atoms in an excited state, which behave more like an ordinary fluid.

In the fountain effect, a chamber is constructed which is connected to a reservoir of helium II by a sintered disc through which superfluid helium leaks easily but through which non-superfluid helium cannot pass. If the interior of the container is heated, the superfluid helium changes to non-superfluid helium. In order to maintain the equilibrium fraction of superfluid helium, superfluid helium leaks through and increases the pressure, causing liquid to fountain out of the container.

The thermal conductivity of helium II is greater than that of any other known substance, a million times that of helium I and several hundred times that of copper.
This is because heat conduction occurs by an exceptional quantum mechanism. Most materials that conduct heat well have a valence band of free electrons which serve to transfer the heat. Helium II has no such valence band but nevertheless conducts heat well. The flow of heat is governed by equations that are similar to the wave equation used to characterize sound propagation in air. When heat is introduced, it moves at 20 meters per second at 1.8 K through helium II as waves, in a phenomenon known as second sound.

Helium II also exhibits a creeping effect. When a surface extends past the level of helium II, the helium II moves along the surface, against the force of gravity. Helium II will escape from a vessel that is not sealed by creeping along the sides until it reaches a warmer region, where it evaporates. It moves in a 30 nm-thick film regardless of surface material. This film is called a Rollin film and is named after the man who first characterized this trait, Bernard V. Rollin. As a result of this creeping behavior and helium II's ability to leak rapidly through tiny openings, it is very difficult to confine. Unless the container is carefully constructed, the helium II will creep along the surfaces and through valves until it reaches somewhere warmer, where it will evaporate. Waves propagating across a Rollin film are governed by the same equation as gravity waves in shallow water, but rather than gravity, the restoring force is the van der Waals force. These waves are known as third sound.

Solid phases

Helium remains liquid down to absolute zero at atmospheric pressure, but it freezes at high pressure. Solid helium requires a temperature of 1–1.5 K (about −272 °C or −457 °F) at about 25 bar (2.5 MPa) of pressure. It is often hard to distinguish solid from liquid helium since the refractive indices of the two phases are nearly the same. The solid has a sharp melting point and has a crystalline structure, but it is highly compressible; applying pressure in a laboratory can decrease its volume by more than 30%. With a bulk modulus of about 27 MPa it is ~100 times more compressible than water. Solid helium has a density of about 0.21 g/cm3 at 1.15 K and 66 atm; the projected density at 0 K and 25 bar (2.5 MPa) is about 0.19 g/cm3. At higher temperatures, helium will solidify with sufficient pressure. At room temperature, this requires about 114,000 atm.

Helium-4 and helium-3 both form several crystalline solid phases, all requiring at least 25 bar. They both form an α phase, which has a hexagonal close-packed (hcp) crystal structure, a β phase, which is face-centered cubic (fcc), and a γ phase, which is body-centered cubic (bcc).

Isotopes

There are nine known isotopes of helium, of which two, helium-3 and helium-4, are stable. In the Earth's atmosphere, there is one helium-3 atom for every million helium-4 atoms. Unlike most elements, helium's isotopic abundance varies greatly by origin, due to the different formation processes. The most common isotope, helium-4, is produced on Earth by alpha decay of heavier radioactive elements; the alpha particles that emerge are fully ionized helium-4 nuclei. Helium-4 is an unusually stable nucleus because its nucleons are arranged into complete shells. It was also formed in enormous quantities during Big Bang nucleosynthesis.

Helium-3 is present on Earth only in trace amounts. Most of it has been present since Earth's formation, though some falls to Earth trapped in cosmic dust. Trace amounts are also produced by the beta decay of tritium.
Rocks from the Earth's crust have isotope ratios varying by as much as a factor of ten, and these ratios can be used to investigate the origin of rocks and the composition of the Earth's mantle. Helium-3 is much more abundant in stars, as a product of nuclear fusion. Thus in the interstellar medium, the proportion of helium-3 to helium-4 is about 100 times higher than on Earth. Extraplanetary material, such as lunar and asteroid regolith, has trace amounts of helium-3 from being bombarded by solar winds. The Moon's surface contains helium-3 at concentrations on the order of 10 ppb, much higher than the approximately 5 ppt found in the Earth's atmosphere. A number of people, starting with Gerald Kulcinski in 1986, have proposed to explore the Moon, mine lunar regolith, and use the helium-3 for fusion.

Liquid helium-4 can be cooled to about 1 K using evaporative cooling in a 1-K pot. Similar cooling of helium-3, which has a lower boiling point, can achieve about 0.2 K in a helium-3 refrigerator. Equal mixtures of liquid helium-3 and helium-4 below about 0.8 K separate into two immiscible phases due to their dissimilarity (they follow different quantum statistics: helium-4 atoms are bosons while helium-3 atoms are fermions). Dilution refrigerators use this immiscibility to achieve temperatures of a few millikelvins.

It is possible to produce exotic helium isotopes, which rapidly decay into other substances. The shortest-lived heavy helium isotope is the unbound helium-10. Helium-6 decays by emitting a beta particle and has a half-life of 0.8 second. Helium-7 and helium-8 are created in certain nuclear reactions. Helium-6 and helium-8 are known to exhibit a nuclear halo.

Properties

Table of thermal and physical properties of helium gas at atmospheric pressure.

Compounds

Helium has a valence of zero and is chemically unreactive under all normal conditions. It is an electrical insulator unless ionized. As with the other noble gases, helium has metastable energy levels that allow it to remain ionized in an electrical discharge with a voltage below its ionization potential. Helium can form unstable compounds, known as excimers, with tungsten, iodine, fluorine, sulfur, and phosphorus when it is subjected to a glow discharge, to electron bombardment, or reduced to plasma by other means. The molecular compounds HeNe, HgHe10, and WHe2, and molecular ions such as He2+ and HeH+, have been created this way. HeH+ is also stable in its ground state but is extremely reactive—it is the strongest Brønsted acid known, and therefore can exist only in isolation, as it will protonate any molecule or counteranion it contacts. This technique has also produced the neutral molecule He2, which has a large number of band systems, and HgHe, which is apparently held together only by polarization forces.

Van der Waals compounds of helium can also be formed with cryogenic helium gas and atoms of some other substance, such as LiHe and He2. Theoretically, other true compounds may be possible, such as helium fluorohydride (HHeF), which would be analogous to HArF, discovered in 2000. Calculations show that two new compounds containing a helium-oxygen bond could be stable. Two new molecular species, predicted using theory, CsFHeO and N(CH3)4FHeO, are derivatives of a metastable FHeO− anion first theorized in 2005 by a group from Taiwan. Helium atoms have been inserted into the hollow carbon cage molecules (the fullerenes) by heating under high pressure. The endohedral fullerene molecules formed are stable at high temperatures.
When chemical derivatives of these fullerenes are formed, the helium stays inside. If helium-3 is used, it can be readily observed by helium nuclear magnetic resonance spectroscopy. Many fullerenes containing helium-3 have been reported. Although the helium atoms are not attached by covalent or ionic bonds, these substances have distinct properties and a definite composition, like all stoichiometric chemical compounds.

Under high pressures helium can form compounds with various other elements. Helium-nitrogen clathrate (He(N2)11) crystals have been grown at room temperature at pressures of ca. 10 GPa in a diamond anvil cell. The insulating electride Na2He has been shown to be thermodynamically stable at pressures above 113 GPa. It has a fluorite structure.

Occurrence and production

Natural abundance

Although it is rare on Earth, helium is the second most abundant element in the known Universe, constituting 23% of its baryonic mass. Only hydrogen is more abundant. The vast majority of helium was formed by Big Bang nucleosynthesis one to three minutes after the Big Bang. As such, measurements of its abundance contribute to cosmological models. In stars, it is formed by the nuclear fusion of hydrogen in proton–proton chain reactions and the CNO cycle, part of stellar nucleosynthesis.

In the Earth's atmosphere, the concentration of helium by volume is only 5.2 parts per million. The concentration is low and fairly constant despite the continuous production of new helium because most helium in the Earth's atmosphere escapes into space by several processes. In the Earth's heterosphere, a part of the upper atmosphere, helium and hydrogen are the most abundant elements.

Most helium on Earth is a result of radioactive decay. Helium is found in large amounts in minerals of uranium and thorium, including uraninite and its varieties cleveite and pitchblende, carnotite and monazite (a group name; "monazite" usually refers to monazite-(Ce)), because they emit alpha particles (helium nuclei, He2+) with which electrons immediately combine as soon as the particle is stopped by the rock. In this way an estimated 3000 metric tons of helium are generated per year throughout the lithosphere. In the Earth's crust, the concentration of helium is 8 parts per billion. In seawater, the concentration is only 4 parts per trillion. There are also small amounts in mineral springs, volcanic gas, and meteoric iron. Because helium is trapped in the subsurface under conditions that also trap natural gas, the greatest natural concentrations of helium on the planet are found in natural gas, from which most commercial helium is extracted. The concentration varies in a broad range from a few ppm to more than 7% in a small gas field in San Juan County, New Mexico.

The world's helium reserves have been estimated at 31 billion cubic meters, with a third of that in Qatar. In 2015 and 2016 additional probable reserves were announced to be under the Rocky Mountains in North America and in the East African Rift.

The Bureau of Land Management (BLM) has proposed an October 2024 plan for managing natural resources in western Colorado. The plan involves closing 543,000 acres to oil and gas leasing while keeping 692,300 acres open. Among the open areas, 165,700 acres have been identified as suitable for helium recovery. The United States possesses an estimated 306 billion cubic feet of recoverable helium, sufficient to meet current consumption rates of 2.15 billion cubic feet per year for approximately 150 years.
Modern extraction and distribution

For large-scale use, helium is extracted by fractional distillation from natural gas, which can contain as much as 7% helium. Since helium has a lower boiling point than any other element, low temperatures and high pressure are used to liquefy nearly all the other gases (mostly nitrogen and methane). The resulting crude helium gas is purified by successive exposures to lowering temperatures, in which almost all of the remaining nitrogen and other gases are precipitated out of the gaseous mixture. Activated charcoal is used as a final purification step, usually resulting in 99.995% pure Grade-A helium. The principal impurity in Grade-A helium is neon. In a final production step, most of the helium that is produced is liquefied via a cryogenic process. This is necessary for applications requiring liquid helium and also allows helium suppliers to reduce the cost of long-distance transportation, as the largest liquid helium containers have more than five times the capacity of the largest gaseous helium tube trailers.

In 2008, approximately 169 million standard cubic meters (SCM) of helium were extracted from natural gas or withdrawn from helium reserves, with approximately 78% from the United States, 10% from Algeria, and most of the remainder from Russia, Poland, and Qatar. By 2013, increases in helium production in Qatar (under the company Qatargas managed by Air Liquide) had increased Qatar's fraction of world helium production to 25%, making it the second largest exporter after the United States. A large deposit of helium was found in Tanzania in 2016. A large-scale helium plant was opened in Ningxia, China in 2020.

In the United States, most helium is extracted from the natural gas of the Hugoton and nearby gas fields in Kansas, Oklahoma, and the Panhandle Field in Texas. Much of this gas was once sent by pipeline to the National Helium Reserve, but since 2005, this reserve has been drawn down and sold off, and it was expected to be largely depleted by 2021 under the October 2013 Responsible Helium Administration and Stewardship Act (H.R. 527). The helium fields of the western United States are emerging as an alternate source of helium supply, particularly those of the "Four Corners" region (the states of Arizona, Colorado, New Mexico and Utah).

Diffusion of crude natural gas through special semipermeable membranes and other barriers is another method to recover and purify helium. In 1996, the U.S. had proven helium reserves in such gas well complexes of about 147 billion standard cubic feet (4.2 billion SCM). At rates of use at that time (72 million SCM per year in the U.S.), this would have been enough helium for about 58 years of U.S. use, and less than this (perhaps 80% of that time) at world use rates, although factors in saving and processing impact effective reserve numbers.

Helium is generally extracted from natural gas because it is present in air at only a fraction of that of neon, yet the demand for it is far higher. It is estimated that if all neon production were retooled to save helium, 0.1% of the world's helium demands would be satisfied. Similarly, only 1% of the world's helium demands could be satisfied by re-tooling all air distillation plants. Helium can be synthesized by bombardment of lithium or boron with high-velocity protons, or by bombardment of lithium with deuterons, but these processes are completely uneconomical methods of production.

Helium is commercially available in either liquid or gaseous form.
As a liquid, it can be supplied in small insulated containers called dewars, which hold as much as 1,000 liters of helium, or in large ISO containers, which have nominal capacities as large as 42 m3 (around 11,000 U.S. gallons). In gaseous form, small quantities of helium are supplied in high-pressure cylinders holding as much as 8 m3 (approximately 282 standard cubic feet), while large quantities of high-pressure gas are supplied in tube trailers, which have capacities of as much as 4,860 m3 (approximately 172,000 standard cubic feet).

Conservation advocates

According to helium conservationists like Nobel laureate physicist Robert Coleman Richardson, writing in 2010, the free market price of helium has contributed to "wasteful" usage (e.g. for helium balloons). Prices in the 2000s had been lowered by the decision of the U.S. Congress to sell off the country's large helium stockpile by 2015. According to Richardson, the price needed to be multiplied by 20 to eliminate the excessive wasting of helium. In the 2012 Nuttall et al. paper titled "Stop squandering helium", it was also proposed to create an International Helium Agency that would build a sustainable market for "this precious commodity".

Applications

While balloons are perhaps the best-known use of helium, they are a minor part of all helium use. Helium is used for many purposes that require some of its unique properties, such as its low boiling point, low density, low solubility, high thermal conductivity, or inertness. Of the 2014 world total helium production of about 32 million kg (180 million standard cubic meters) per year, the largest use (about 32% of the total in 2014) was in cryogenic applications, most of which involves cooling the superconducting magnets in medical MRI scanners and NMR spectrometers. Other major uses were pressurizing and purging systems, welding, maintenance of controlled atmospheres, and leak detection. Other uses by category were relatively minor fractions.

Controlled atmospheres

Helium is used as a protective gas in growing silicon and germanium crystals, in titanium and zirconium production, and in gas chromatography, because it is inert. Because of its inertness, thermally and calorically perfect nature, high speed of sound, and high value of the heat capacity ratio, it is also useful in supersonic wind tunnels and impulse facilities.

Gas tungsten arc welding

Helium is used as a shielding gas in arc welding processes on materials that, at welding temperatures, are contaminated and weakened by air or nitrogen. A number of inert shielding gases are used in gas tungsten arc welding, but helium is used instead of cheaper argon especially for welding materials that have higher heat conductivity, like aluminium or copper.

Minor uses

Industrial leak detection

One industrial application for helium is leak detection. Because helium diffuses through solids three times faster than air, it is used as a tracer gas to detect leaks in high-vacuum equipment (such as cryogenic tanks) and high-pressure containers. The tested object is placed in a chamber, which is then evacuated and filled with helium. The helium that escapes through the leaks is detected by a sensitive device (helium mass spectrometer), even at leak rates as small as 10−9 mbar·L/s (10−10 Pa·m3/s). The measurement procedure is normally automatic and is called the helium integral test. A simpler procedure is to fill the tested object with helium and to manually search for leaks with a hand-held device.
Helium leaks through cracks should not be confused with gas permeation through a bulk material. While helium has documented permeation constants (thus a calculable permeation rate) through glasses, ceramics, and synthetic materials, inert gases such as helium will not permeate most bulk metals.

Flight

Because it is lighter than air, airships and balloons are inflated with helium for lift. While hydrogen gas is more buoyant and escapes permeating through a membrane at a lower rate, helium has the advantage of being non-flammable, and indeed fire-retardant. Another minor use is in rocketry, where helium is used as an ullage medium to backfill rocket propellant tanks in flight and to condense hydrogen and oxygen to make rocket fuel. It is also used to purge fuel and oxidizer from ground support equipment prior to launch and to pre-cool liquid hydrogen in space vehicles. For example, the Saturn V rocket used in the Apollo program needed large quantities of helium to launch.

Minor commercial and recreational uses

Helium as a breathing gas has no narcotic properties, so helium mixtures such as trimix, heliox and heliair are used for deep diving to reduce the effects of narcosis, which worsen with increasing depth. As pressure increases with depth, the density of the breathing gas also increases, and the low molecular weight of helium is found to considerably reduce the effort of breathing by lowering the density of the mixture. This reduces the Reynolds number of flow, leading to a reduction of turbulent flow and an increase in laminar flow, which requires less work of breathing. At depths below about 150 m, divers breathing helium–oxygen mixtures begin to experience tremors and a decrease in psychomotor function, symptoms of high-pressure nervous syndrome. This effect may be countered to some extent by adding an amount of narcotic gas such as hydrogen or nitrogen to a helium–oxygen mixture.

Helium–neon lasers, a type of low-powered gas laser producing a red beam, had various practical applications which included barcode readers and laser pointers, before they were almost universally replaced by cheaper diode lasers.

For its inertness and high thermal conductivity, neutron transparency, and because it does not form radioactive isotopes under reactor conditions, helium is used as a heat-transfer medium in some gas-cooled nuclear reactors. Helium, mixed with a heavier gas such as xenon, is useful for thermoacoustic refrigeration due to the resulting high heat capacity ratio and low Prandtl number. The inertness of helium has environmental advantages over conventional refrigeration systems which contribute to ozone depletion or global warming. Helium is also used in some hard disk drives.

Scientific uses

The use of helium reduces the distorting effects of temperature variations in the space between lenses in some telescopes, due to its extremely low index of refraction. This method is especially used in solar telescopes, where a vacuum-tight telescope tube would be too heavy. Helium is a commonly used carrier gas for gas chromatography. The age of rocks and minerals that contain uranium and thorium can be estimated by measuring the level of helium with a process known as helium dating. Helium at low temperatures is used in cryogenics and in certain cryogenic applications. As examples of applications, liquid helium is used to cool certain metals to the extremely low temperatures required for superconductivity, such as in superconducting magnets for magnetic resonance imaging.
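As a rough check on the lifting-gas discussion above, the net buoyancy of helium can be estimated from ideal-gas densities. The numbers below are assumed round values for sea-level conditions at about 20 °C, chosen for this sketch rather than taken from this article.

```python
# Net lift per cubic meter = density of displaced air minus density of
# the helium filling the envelope, both from the ideal-gas law.
R, T, P = 8.314, 293.0, 101325.0       # J/(mol K), K, Pa

def density(molar_mass_kg):
    """rho = P * M / (R * T) for an ideal gas."""
    return P * molar_mass_kg / (R * T)

rho_air, rho_he = density(0.0290), density(0.0040)
print(f"net lift: {rho_air - rho_he:.2f} kg per cubic meter")
# -> about 1.04 kg/m^3, i.e. roughly one kilogram of payload per cubic
#    meter of helium; hydrogen (M = 0.002 kg/mol) would add only ~8% more
#    lift, which is why helium's non-flammability usually wins out.
```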
The Large Hadron Collider at CERN uses 96 metric tons of liquid helium to maintain the temperature at 1.9 K.

Medical uses

Helium was approved for medical use in the United States in April 2020 for humans and animals.

As a contaminant

Although helium is chemically inert, helium contamination can impair the operation of microelectromechanical systems (MEMS), to the point that iPhones may fail.

Inhalation and safety

Effects

Neutral helium at standard conditions is non-toxic, plays no biological role and is found in trace amounts in human blood. The speed of sound in helium is nearly three times the speed of sound in air. Because the natural resonance frequency of a gas-filled cavity is proportional to the speed of sound in the gas, when helium is inhaled, a corresponding increase occurs in the resonant frequencies of the vocal tract, which is the amplifier of vocal sound. This increase in the resonant frequency of the amplifier (the vocal tract) gives increased amplification to the high-frequency components of the sound wave produced by the direct vibration of the vocal folds, compared to the case when the voice box is filled with air. When a person speaks after inhaling helium gas, the muscles that control the voice box still move in the same way as when the voice box is filled with air; therefore the fundamental frequency (sometimes called pitch) produced by direct vibration of the vocal folds does not change. However, the high-frequency-preferred amplification causes a change in timbre of the amplified sound, resulting in a reedy, duck-like vocal quality. The opposite effect, lowering resonant frequencies, can be obtained by inhaling a dense gas such as sulfur hexafluoride or xenon.

Hazards

Inhaling helium can be dangerous if done to excess, since helium is a simple asphyxiant and so displaces oxygen needed for normal respiration. Fatalities have been recorded, including a youth who suffocated in Vancouver in 2003 and two adults who suffocated in South Florida in 2006. In 1998, an Australian girl from Victoria fell unconscious and temporarily turned blue after inhaling the entire contents of a party balloon. Inhaling helium directly from pressurized cylinders or even balloon filling valves is extremely dangerous, as high flow rate and pressure can result in barotrauma, fatally rupturing lung tissue.

Death caused by helium is rare. The first media-recorded case was that of a 15-year-old girl from Texas who died in 1998 from helium inhalation at a friend's party; the exact mechanism of death was not identified. In the United States, only two fatalities were reported between 2000 and 2004, including a man who died in North Carolina of barotrauma in 2002. A youth asphyxiated in Vancouver during 2003, and a 27-year-old man in Australia had an embolism after breathing from a cylinder in 2000. Since then, two adults asphyxiated in South Florida in 2006, and there were cases in 2009 and 2010, one of whom was a Californian youth who was found with a bag over his head, attached to a helium tank, and another teenager in Northern Ireland died of asphyxiation. At Eagle Point, Oregon, a teenage girl died in 2012 from barotrauma at a party. A girl from Michigan died from hypoxia later in the year.
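The voice-timbre claim above — that sound travels almost three times faster in helium than in air, shifting the vocal tract's resonances upward by roughly that factor — can be checked with the standard ideal-gas formula c = sqrt(γRT/M). The sketch below uses assumed round values for temperature and molar masses; it is a back-of-the-envelope illustration, not a calculation from this article.

```python
import math

R, T = 8.314, 293.0                          # J/(mol K), room temperature in K

def speed_of_sound(gamma, molar_mass_kg):
    """c = sqrt(gamma * R * T / M) for an ideal gas."""
    return math.sqrt(gamma * R * T / molar_mass_kg)

c_he  = speed_of_sound(5.0 / 3.0, 0.0040)    # monatomic helium
c_air = speed_of_sound(1.4, 0.0290)          # diatomic-dominated air

print(f"helium: {c_he:.0f} m/s, air: {c_air:.0f} m/s, ratio {c_he/c_air:.2f}")
# -> roughly 1007 m/s vs 343 m/s, a ratio of about 2.9; the vocal tract's
#    resonant frequencies scale up by the same factor, which changes the
#    timbre while leaving the vocal folds' fundamental frequency unchanged.
```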
On February 4, 2015, it was revealed that, during the recording of their main TV show on January 28, a 12-year-old member (name withheld) of the Japanese all-girl singing group 3B Junior had suffered an air embolism after inhaling a large quantity of helium as part of a game, losing consciousness and falling into a coma as air bubbles blocked the flow of blood to her brain. The incident was not made public until a week later. The staff of TV Asahi held an emergency press conference to announce that the member had been taken to hospital and was showing signs of recovery, such as moving her eyes and limbs, but that her consciousness had not yet sufficiently returned. Police launched an investigation into a possible neglect of safety measures.
The safety issues for cryogenic helium are similar to those of liquid nitrogen: its extremely low temperatures can result in cold burns, and the liquid-to-gas expansion ratio can cause explosions if no pressure-relief devices are installed. Containers of helium gas at 5 to 10 K should be handled as if they contained liquid helium, owing to the rapid and significant thermal expansion that occurs when helium gas below 10 K is warmed to room temperature.
At high pressures (more than about 20 atm or 2 MPa), a mixture of helium and oxygen (heliox) can lead to high-pressure nervous syndrome, a sort of reverse-anesthetic effect; adding a small amount of nitrogen to the mixture can alleviate the problem.
See also
Abiogenic petroleum origin
Helium-3 propulsion
Leidenfrost effect
Superfluid
Tracer-gas leak testing method
Hamilton Cady
Helium
Physics,Chemistry,Materials_science
10,905
16,391,238
https://en.wikipedia.org/wiki/Siliceous%20ooze
Siliceous ooze is a type of biogenic pelagic sediment located on the deep ocean floor. Siliceous oozes are the least common of the deep-sea sediments, making up approximately 15% of the ocean floor. Oozes are defined as sediments containing at least 30% skeletal remains of pelagic microorganisms. Siliceous oozes are largely composed of the silica-based skeletons of microscopic marine organisms such as diatoms and radiolarians. Other components of siliceous oozes near continental margins may include terrestrially derived silica particles and sponge spicules. Siliceous oozes are composed of skeletons made from opal silica (SiO2·nH2O), as opposed to calcareous oozes, which are made from the calcium carbonate (CaCO3) skeletons of organisms such as coccolithophores. Silicon (Si) is a bioessential element that is efficiently recycled in the marine environment through the silica cycle. Distance from land masses, water depth, and ocean fertility all affect the opal silica content of seawater and the presence of siliceous oozes.
Formation
Biological uptake of marine silica
Siliceous marine organisms, such as diatoms and radiolarians, use silica to form skeletons through a process known as biomineralization. Diatoms and radiolarians have evolved to take up silica in the form of silicic acid, Si(OH)4. Once an organism has sequestered Si(OH)4 molecules in its cytoplasm, the molecules are transported to silica deposition vesicles, where they are transformed into opal silica (bSiO2). Diatoms and radiolarians have specialized proteins called silicon transporters that prevent mineralization during the sequestration and transport of silicic acid within the organism. The overall reaction for biological uptake of silicic acid is:
H4SiO4(aq) ⇌ SiO2·nH2O(s) + (2 − n)H2O(l)
Opal silica saturation state
The opal silica saturation state increases with depth in the ocean, owing to the dissolution of sinking opal particles produced in surface waters, but it still remains low enough that the reaction forming biogenic opal silica is thermodynamically unfavorable. Despite the unfavorable conditions, organisms can use dissolved silicic acid to make opal silica shells through biologically controlled biomineralization. The amount of opal silica that reaches the seafloor is determined by the rates of sinking and dissolution and by the depth of the water column.
Export of silica to the deep ocean
The dissolution rate of sinking opal silica (bSiO2) in the water column affects the formation of siliceous ooze on the ocean floor. The dissolution rate depends on the saturation state of opal silica in the water column and on the re-packaging of opal silica particles within larger particles from the surface ocean. Re-packaging is the formation (and sometimes re-formation) of solid organic matter (usually fecal pellets) around opal silica. The organic matter protects against the immediate dissolution of opal silica into silicic acid, allowing more of it to reach the seafloor. The opal compensation depth, analogous to the carbonate compensation depth, occurs at approximately 6,000 meters. Below this depth, the dissolution of opal silica into silicic acid outpaces the formation of opal silica from silicic acid. Only four percent of the opal silica produced in the surface ocean will, on average, be deposited on the seafloor; the remaining 96% is recycled in the water column.
Accumulation rates
Siliceous oozes accumulate over long timescales.
In the open ocean, siliceous ooze accumulates at a rate of approximately 0.01 mol Si m−2 yr−1. The fastest accumulation rates occur in the deep waters of the Southern Ocean (0.1 mol Si m−2 yr−1), where biogenic silica production and export are greatest. The diatom and radiolarian skeletons that make up Southern Ocean oozes can take 20 to 50 years to sink to the seafloor. Siliceous particles may sink faster if they are encased in the fecal pellets of larger organisms. Once deposited, silica continues to dissolve and cycle, delaying long-term burial of particles until a depth of 10–20 cm within the sediment column is reached.
Marine chert formation
When opal silica accumulates faster than it dissolves, it is buried and can provide a diagenetic environment for marine chert formation. The processes leading to chert formation have been observed in the Southern Ocean, where siliceous ooze accumulation is fastest. Chert formation, however, can take tens of millions of years. Skeleton fragments from siliceous organisms are subject to recrystallization and cementation. Chert is the main fate of buried siliceous ooze and permanently removes silica from the oceanic silica cycle.
Geographic locations
Siliceous oozes form in upwelling areas that supply the nutrients needed for the growth of siliceous organisms living in oceanic surface waters. A notable example is the Southern Ocean, where the consistent upwelling of Indian, Pacific, and Antarctic circumpolar deep water has produced a contiguous belt of siliceous ooze stretching around the globe. A band of siliceous ooze in Pacific Ocean sediments below the North Equatorial Current is the result of enhanced equatorial upwelling. In the subpolar North Pacific, upwelling occurs along the eastern and western sides of the basin from the Alaska Current and the Oyashio Current, and siliceous ooze is present along the seafloor in these subpolar regions. Ocean-basin boundary currents, such as the Humboldt Current and the Somali Current, are other examples of upwelling currents that favor the formation of siliceous ooze.
Siliceous ooze is usually categorized by its composition. Diatomaceous oozes are predominantly formed of diatom skeletons and are typically found along continental margins at higher latitudes; they are present in the Southern Ocean and the North Pacific Ocean. Radiolarian oozes are made mostly of radiolarian skeletons and are located mainly in tropical equatorial and subtropical regions. Examples include the oozes of the equatorial region, the subtropical Pacific, and the subtropical basin of the Indian Ocean. A small area of deep-sea sediment in the equatorial East Atlantic basin is covered by radiolarian ooze.
Role in the oceanic silica cycle
Deep-seafloor deposition in the form of ooze is the largest long-term sink of the oceanic silica cycle (6.3 ± 3.6 Tmol Si yr−1). As noted above, this ooze is diagenetically transformed into lithospheric marine chert. This sink is roughly balanced by silicate weathering and river inputs of silicic acid into the ocean. Biogenic silica production in the photic zone is estimated at 240 ± 40 Tmol Si yr−1. Rapid dissolution in the surface ocean removes roughly 135 Tmol Si yr−1, converting it back to soluble silicic acid that can be used again for biomineralization. The remaining opal silica is exported to the deep ocean in sinking particles.
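Taking the central values just quoted, the implied export flux follows from a simple surface budget (an editorial back-of-the-envelope check, not a figure from the article's sources):

F_{\mathrm{export}} \approx F_{\mathrm{production}} - F_{\mathrm{surface\ dissolution}} \approx (240 - 135)\ \mathrm{Tmol\ Si\ yr^{-1}} = 105\ \mathrm{Tmol\ Si\ yr^{-1}}

Burial in ooze (6.3 Tmol Si yr−1) is thus only about 3% of gross production; nearly all of the silica fixed in the photic zone is redissolved somewhere along the way.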
In the deep ocean, a further 26.2 Tmol Si yr−1 is dissolved before being deposited on the sediments as opal silica. At the sediment–water interface, over 90% of the silica is recycled and upwelled for use again in the photic zone. The residence time, on a biological timescale, is estimated to be about 400 years, with each molecule of silica recycled 25 times before sediment burial.
Siliceous oozes and carbon sequestration
Diatoms are primary producers that convert carbon dioxide into organic carbon via photosynthesis and export that organic carbon from the surface ocean to the deep sea via the biological pump. Diatoms can therefore be a significant sink for carbon dioxide in surface waters. Because diatoms are relatively large compared with other phytoplankton, they are able to take up more total carbon dioxide. Additionally, diatoms do not release carbon dioxide during the formation of their opal silica shells, whereas phytoplankton that build calcium carbonate shells (i.e. coccolithophores) release carbon dioxide as a byproduct of shell formation, making them a less efficient sink for carbon dioxide. The opal silica skeletons also increase the sinking velocity of diatomaceous particles, and thus of carbon, from the surface ocean to the seafloor.
Iron fertilization experiments
Atmospheric carbon dioxide levels have been increasing exponentially since the Industrial Revolution, and researchers are exploring ways to mitigate them by increasing the uptake of carbon dioxide in the surface ocean via photosynthesis. Increased uptake in surface waters may lead to more carbon sequestration in the deep sea through the biological pump. The bloom dynamics of diatoms, their ballasting by opal silica, and their distinctive nutrient requirements have made diatoms a focus of carbon sequestration experiments. Iron fertilization projects, such as the SERIES iron-enrichment experiments, have introduced iron into ocean basins to test whether this increases the rate of carbon dioxide uptake by diatoms and ultimately the export of that carbon to the deep ocean. Iron is a limiting nutrient for diatom photosynthesis in high-nutrient, low-chlorophyll areas of the ocean, so increasing the amount of available iron can lead to a subsequent increase in photosynthesis, sometimes resulting in a diatom bloom. Such a bloom removes more carbon dioxide from the atmosphere. Although more carbon dioxide is taken up, the carbon sequestration rate in deep-sea sediments is generally low: most of the carbon dioxide taken up during photosynthesis is recycled in the surface layer several times before reaching the deep ocean to be sequestered.
Paleo-oozes
Before siliceous organisms
During the Precambrian, oceanic silica concentrations were an order of magnitude higher than in modern oceans. The evolution of biosilicification is thought to have emerged during this period. Siliceous oozes formed once silica-sequestering organisms such as radiolarians and diatoms began to flourish in surface waters.
Evolution of siliceous organisms
Radiolaria
Fossil evidence suggests that radiolarians first emerged during the late Cambrian as free-floating shallow-water organisms, though they did not become prominent in the fossil record until the Ordovician. Radiolarians evolved in upwelling regions of high primary productivity and are among the oldest known organisms capable of secreting shells.
The remains of radiolarians are preserved in chert, a byproduct of siliceous ooze transformation. Major radiolarian speciation events occurred during the Mesozoic, and many of those species are now extinct from the modern ocean. Scientists hypothesize that competition with diatoms for dissolved silica during the Cenozoic is the likely cause of the mass extinction of most radiolarian species.
Diatoms
The oldest well-preserved diatom fossils date to the beginning of the Jurassic period. The molecular record, however, suggests diatoms evolved at least 250 million years ago, during the Triassic. As new species of diatoms evolved and spread, oceanic silica levels began to decrease. Today there are an estimated 100,000 species of diatoms, most of which are microscopic (2–200 μm). Some early diatoms were larger, between 0.2 and 22 mm in diameter. The earliest diatoms were radial centrics and lived in shallow water close to shore. These early diatoms were adapted to life on the benthos, as their outer shells were heavy and prevented them from floating freely. Free-floating diatoms, known as bipolar and multipolar centrics, began evolving approximately 100 million years ago, during the Cretaceous. Fossil diatoms are preserved in diatomite (also known as diatomaceous earth), one of the byproducts of the transformation from ooze to rock. As diatomaceous particles sank to the ocean floor, carbon and silica were sequestered along continental margins; the carbon sequestered there has become the source of major petroleum reserves today. Diatom evolution thus marks a period of Earth's geologic history with significant removal of carbon dioxide from the atmosphere and a simultaneous rise in atmospheric oxygen levels.
How scientists use paleo-ooze
Paleoceanographers study prehistoric oozes to learn about changes in the oceans over time. The sediment distribution and deposition patterns of oozes tell scientists which prehistoric areas of the oceans offered prime conditions for the growth of siliceous organisms. Scientists examine paleo-ooze by taking cores of deep-sea sediments, in which the sediment layers record the ocean's deposition patterns over time. Paleo-oozes serve as tools for inferring the conditions of the paleo-oceans: accretion rates can be used to reconstruct deep-sea circulation, tectonic activity, and climate at a specific point in time. Oozes are also useful for determining the historical abundances of siliceous organisms.
Burubaital Formation
The Burubaital Formation, located in the West Balkhash region of Kazakhstan, is the oldest known abyssal biogenic deposit. It is primarily composed of chert, which formed over a period of 15 million years (late Cambrian to middle Ordovician). These deposits most likely formed in an upwelling region at subequatorial latitudes. The Burubaital Formation is largely composed of radiolarites, as diatoms had yet to evolve at the time of its formation. The Burubaital deposits have led researchers to conclude that radiolaria played a significant role in the late Cambrian silica cycle. The late Cambrian (497–485.4 mya) marks a transitional time for marine biodiversity and the beginning of ooze accumulation on the seafloor.
Distribution shifts during the Miocene
A shift in the geographical distribution of siliceous oozes occurred during the Miocene.
Sixteen million years ago, a gradual decline in siliceous ooze deposits began in the North Atlantic, with a concurrent rise in siliceous ooze deposits in the North Pacific. Scientists speculate that this regime shift was caused by the introduction of Nordic Sea Overflow Water, which contributed to the formation of North Atlantic Deep Water (NADW). The formation of Antarctic Bottom Water (AABW) occurred at approximately the same time. Together, the formation of NADW and AABW dramatically transformed the ocean and led to a spatial population shift among siliceous organisms.
Paleocene plankton blooms
The Cretaceous–Tertiary (K–T) boundary was a time of global mass extinction. While most organisms were disappearing, marine siliceous organisms thrived in the early Paleocene seas. One example comes from the waters near Marlborough, New Zealand, where paleo-ooze deposits indicate rapid growth of both diatoms and radiolarians at this time. Scientists believe this period of high biosiliceous productivity is linked to global climatic changes. The boom in siliceous plankton was greatest during the first one million years of the Tertiary and is thought to have been fueled by enhanced upwelling in response to a cooling climate and by increased nutrient cycling due to a change in sea level.
See also
Diatomaceous earth
Calcareous ooze
Siliceous ooze
Physics,Environmental_science
3,396
4,286,589
https://en.wikipedia.org/wiki/Quarry-faced%20stone
Quarry-faced stone is stone whose exposed face is left rough and unpolished, much as it came from the quarry.
Quarry-faced stone
Physics,Engineering
34
74,473,174
https://en.wikipedia.org/wiki/Chinese%20character%20components
In written Chinese, components are the building blocks of characters and are themselves composed of strokes. In most cases, a component consists of more than one stroke and is smaller than the whole character. For example, the character consists of two components: and . These can be further decomposed: can be analyzed as the sequence of strokes , and as the sequence . There are two methods of component analysis: hierarchical dividing and plane dividing. Hierarchical dividing separates a character layer by layer, from larger to smaller components, until the primitive components are reached; plane dividing separates out the primitive components in a single pass. The structure of a Chinese character is the pattern or rule by which the character is formed from its (first-level) components. Chinese character structures include the single-component structure, left-right structure, up-down structure, and surrounding structure.
Analysis
Chinese characters may be analyzed in terms of smaller components. This analysis is generally based on graphical form, without considering pronunciation or meaning. Component analysis is very helpful for learning Chinese characters. For example:
→+
→+
→+
Through component analysis, one may learn characters more easily. If a student learns first, that knowledge will help with the learning or review of , , and . Learning by component analysis is much more efficient than analyzing each character stroke by stroke.
Component analysis is also used in Chinese character encoding for computer input. As noted above, there are two methods of dividing Chinese characters: hierarchical dividing, which separates layer by layer from large to small components and finally yields the primitive components, and plane dividing, which separates out the primitive components at once. Hierarchical dividing displays the external structure of Chinese characters, while plane dividing can be regarded as omitting the higher dividing levels and directly writing out the final set of primitive components.
Rules for division
The rules for hierarchical dividing include:
The separation ditch/gap is an obvious boundary, at which the character (or a larger component) is split into smaller components.
If there is only one separation ditch, split into two components along it. For example: →+, →+.
When there is more than one separation ditch, divide along the longer one first. For example: → + →+, giving the hierarchical structure ((+)+) with two layers of components.
When several separation ditches are parallel and equal in length, divide along all of them at once. For example: →++.
Intersecting stroke groups are not divided; for example, and are primitive components.
The lower bound of dividing is generally larger than a single stroke, and components with only two strokes, such as "", are not separated further.
Hierarchical analysis should conform to the basic structure of Chinese characters. For example, the outermost layer of "" is a left-right structure, so the left-right separation is applied first (→+), followed by the inside-outside division (→+), even though the latter's L-shaped separation ditch may be longer.
A character containing multi-level components is divided from larger to smaller sizes to generate first-level components, second-level components, third-level components, and so on.
An example
The hierarchical analysis of the character, in bracketed representation: (((+(+))+(⿱)(+(+(+))))+), with 5 layers of components,
or, equivalently, in tree structure: a five-level tree whose leaves are the primitive components (the component labels of the original diagram are not preserved here). The level to which a Chinese character is analyzed depends on the application. In plane analysis, only the components at the leaves of the tree, i.e. the primitive components, are presented.
Analysis data of the Cihai
The Cihai, with a character set of 16,339 traditional and simplified Chinese characters, has been analyzed in this way. In most cases, a component is larger than a stroke and smaller than the whole character (it combines with other components to form the character). The condition for a single stroke to be a component is that it occupies a relatively independent position usually occupied by a multiple-stroke component. For example: the top stroke in the character , the bottom in , the left in , the right ㇟ in , the central ㇔ in , and the outer ㇆ in . In the special case of one-stroke characters, such as and , a single stroke is both a component and a character.
Classification of components
Character components and non-character components
A component that can independently form a character is a character component, or a component of independent character formation. For example, the component independently forms the character and serves as a component in the characters , and ; the component is likewise a character by itself and a component in , and . A component that cannot independently form a character is a non-character component, or a component of dependent character formation. For example, the component in the characters , and , and the component in , and ; neither nor is a character in modern Chinese.
Primitive components and compound components
A component that cannot be further divided into smaller components by the rules is a primitive component, or basic component. Primitive components are the final-level components of hierarchical dividing; examples are the components and in the character , and and in the character . A component composed of two or more primitive components is a compound component. For example, the component in the characters , and , and the component in , and .
Hierarchy of components
A component divided out at the first level is called a level-one component, a component divided out at the second level a level-two component, and so on. A component divided out at the final level is called a final-level component, i.e. a primitive component. In the example of the character above, the divisions yield level-one through level-five components, and the leaf components , , , , , , and are the final-level, i.e. primitive, components.
Single-stroke components and multi-stroke components
A component formed by a single stroke is called a single-stroke component. For example:
the stroke in the character
the stroke ㇑ in the character
the stroke ㇓ in the character
the stroke ㇔ in the character
the stroke ㇆ in the character .
A component formed by more than one stroke is called a multi-stroke component, for example the component in the character , in the character , and of .
Primitive components
Among the 16,339 traditional, simplified and unsimplified characters in the Cihai, there are 675 primitive components; among the 11,834 characters remaining after excluding the simplified forms, there are 648 primitive components. In the Chinese Character Information Dictionary, a total of 623 primitive components have been divided out of the 7,785 China Mainland standard characters. (The Cihai figures above were divided from its 11,834 simplified and unsimplified characters.)
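The distinction between hierarchical and plane dividing can be made concrete in code. The sketch below is an editorial illustration, not drawn from the article or from any standard: a decomposition is represented as a nested structure whose leaves are primitive components, so plane dividing is simply the leaf sequence of the hierarchical tree. The component names are hypothetical placeholders.

# A decomposition tree: a primitive component is a string; a compound
# component is a tuple of sub-components (placeholder names "a".."d").
# Example: a character split into two level-one components, the first
# of which splits again at level two.
tree = (("a", ("b", "c")), "d")

def primitives(node):
    """Plane dividing: flatten the hierarchical tree to its leaves."""
    if isinstance(node, str):          # a primitive component
        return [node]
    out = []
    for child in node:                 # a compound component
        out.extend(primitives(child))
    return out

def depth(node):
    """Number of dividing layers in the hierarchical analysis."""
    if isinstance(node, str):
        return 0
    return 1 + max(depth(child) for child in node)

print(primitives(tree))   # ['a', 'b', 'c', 'd']  (plane dividing)
print(depth(tree))        # 2 layers of components

Hierarchical dividing corresponds to the whole tree; plane dividing keeps only what primitives() returns.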
Component standards
Chinese character components are widely used in keyboard encoding input methods, and different encoding input methods divide components differently. It is therefore necessary to formulate norms or standards for Chinese character components. The "Chinese Character Component Standard of GB13000.1 Character Set for Information Processing" (GB13000.1) is a standard released on February 1, 1997, by the National Language Commission of China. It includes a "List of Chinese Character Primitive Components" containing 560 primitive components, from which all 20,902 CJK Chinese characters in the GB13000.1 character set can be formed. This standard is intended mainly for Chinese information processing. Another important standard is the "Specification of Common Modern Chinese Character Components and Component Names", formulated by the National Language Commission in 2009. It includes a list of 514 primitive components of commonly used characters together with their names, and is intended mainly for Chinese character education and dictionary collation.
Component naming
The rules for component naming include the following:
If the component is a character, call it by that character, for example: (kǒu) and (tǔ). If the character has more than one reading, use the more common one; for example, the component "" is called zhōng, not zhòng.
If the component is not a character but already has a name, use the existing name, for example (tí shǒu) and (bǎo gài). If the component has more than one name, use the one in common use; for example, is called shuāng lì rén rather than shuāngrén páng.
For a component without a name, a colloquial and reasonable name should be chosen. One approach is to refer to the component by its position in common characters, for example "the head of character " or "the frame of character ".
Chinese character structures
The structure of a Chinese character is the pattern or rule by which the character is formed from its (first-level) components. Chinese character structures include the following (a programmatic tabulation of these structure marks appears near the end of this article):
Single-component structure: the character is formed by a single primitive component, such as , and .
Left-right structure (⿰): a component on the left and another on the right, such as , and .
Left-middle-right structure (⿲): components on the left, in the middle, and on the right, such as , and .
Up-down structure (⿱): one component above another, such as , and .
Up-middle-down structure (⿳): components at the top, in the middle, and at the bottom, such as , and .
Complete surrounding (⿴): such as , and .
Left-top-right surrounding (⿵): such as , and .
Top-left-bottom surrounding (⿷): such as , and .
Left-bottom-right surrounding (⿶): such as , and .
Top-left surrounding (⿸): such as , and .
Top-right surrounding (⿹): such as , and .
Left-bottom surrounding (⿺): such as , and .
Overlapping (⿻), or multi-frame surrounding: such as , , , .
The principles of first-level structure analysis can be extended to other levels. For example, the character is in left-right structure, and its left component is in up-down structure.
Deformation of components
Sometimes, to make the glyph more beautiful and better proportioned, a component must change form according to the character environment.
The deformation of components can happen in two ways: the shape of individual strokes may change, or the entire component may be flattened or narrowed.
Stroke deformation within a component
Stroke deformation includes the following situations:
When the bottom stroke of a left-side component is ㇐ (heng, horizontal) or ㇐ intersected with ㇑ (shu, vertical), the ㇐ is usually changed to ㇀ (ti). For example: ; exception: .
When "" is used as the left-side component, the last stroke ㇑ (shu) is changed to ㇓ (pie). For example: "".
If the last stroke of a component is ㇏ (na) and the component is on the left side or in a surrounding structure, the ㇏ often needs to be changed to ㇔ (dian, dot). For example: "".
When adjacent strokes include two or more parallel ㇏ (na), generally only one ㇏ is kept and the rest are changed to ㇔ (dots). For example: "".
When the component "" sits above another component, its hook is removed. For example: "".
When "" is on the left side of other components, the horizontal-bend hook is often changed to a lift. For example: "".
When the last stroke of a left-side component is ㇟ (vertical bend hook), it is often changed to ㇙ (vertical lift). For example: "".
When "" (hand) is used on the left side, the vertical hook may be changed to ㇓ (pie). For example: "".
Narrowing or flattening of components
Components are narrowed or flattened to make the structure of the whole character harmonious and well proportioned. Take "" (dog) as an example: in up-down structures it is flattened, for example ""; in left-right structures it is narrowed, for example "".
Pianpang and radicals
Pianpangs and radicals are both components. Originally, the left side of a compound character was called the pian and the right side the pang; nowadays it is customary to refer to the left and right, upper and lower, and outer and inner parts of compound characters all as pianpangs. The pianpang analysis of compound characters is therefore similar to first-level component analysis. Pianpangs generally carry sound or meaning information and are accordingly called the "sound side" (or "sound symbol") and the "meaning side" (or "meaning symbol"). Radicals are components used for sorting and retrieving Chinese characters: based on the glyph structure, the common components of a group of characters are taken as the basis for sorting and searching, and these components are called radicals. In pictophonetic characters, the radicals are mostly the pianpangs representing meaning.
Component optimization
Hu Qiaomu said: "The (primitive) components of Chinese characters should be reduced in number, and the components should be made independent characters as far as possible; those that cannot be characters should be universal and easy to name. This may be more important than reducing the number of strokes and characters. Some simplified characters have added new components, for example '' and so on. Although the traditional character has more strokes, it is very clear to describe: '+'. When we simplify Chinese characters, we should avoid new unspeakable and uncommon components."
Components are important structural units of Chinese characters. Optimizing them, making them more concise, standardized, and easy to learn and use, is an important task of Chinese character optimization, and there is a long way to go.
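The structure marks listed under "Chinese character structures" above are the Unicode Ideographic Description Characters (U+2FF0 to U+2FFB). As an editorial illustration (not part of any standard cited in this article), the sketch below tabulates them with Unicode's own names, which differ slightly in wording from the labels used above.

# Unicode Ideographic Description Characters (U+2FF0..U+2FFB) and the
# structure each one describes, per the Unicode character names.
IDC = {
    "\u2FF0": "left to right",                 # ⿰
    "\u2FF1": "above to below",                # ⿱
    "\u2FF2": "left to middle and right",      # ⿲
    "\u2FF3": "above to middle and below",     # ⿳
    "\u2FF4": "full surround",                 # ⿴
    "\u2FF5": "surround from above",           # ⿵
    "\u2FF6": "surround from below",           # ⿶
    "\u2FF7": "surround from left",            # ⿷
    "\u2FF8": "surround from upper left",      # ⿸
    "\u2FF9": "surround from upper right",     # ⿹
    "\u2FFA": "surround from lower left",      # ⿺
    "\u2FFB": "overlaid",                      # ⿻
}

def structure_of(ids_string):
    """Return the top-level structure named by the leading character of an
    Ideographic Description Sequence, or 'single component' if none."""
    return IDC.get(ids_string[:1], "single component")

print(structure_of("\u2FF0AB"))   # "left to right" (hypothetical sequence)

A sequence beginning with one of these marks declares the character's first-level structure; nested marks describe the structure of its components, mirroring the hierarchical analysis discussed earlier.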
See also
Chinese character strokes
Chinese whole characters
Chinese character structures
Modern Chinese characters
:zh:漢字部件
Chinese character components
Technology
2,984
32,000,609
https://en.wikipedia.org/wiki/Paradoxes%20of%20the%20Infinite
Paradoxes of the Infinite (German title: Paradoxien des Unendlichen) is a mathematical work by Bernard Bolzano on the theory of sets. It was published in 1851, three years after Bolzano's death, by a friend and student, František Přihonský. The work contains many interesting results in set theory. Bolzano expanded on the theme of Galileo's paradox, giving further examples of one-to-one correspondences between the elements of an infinite set and the elements of one of its proper subsets (see the example below). In the work he also explained the term Menge, rendered in English as "set", which he had coined and used in several works since the 1830s.
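As an editorial illustration of the kind of correspondence at issue (the specific examples Bolzano gives are not reproduced here), Galileo's paradox pairs every natural number with its square:

f \colon \mathbb{N} \to S, \qquad f(n) = n^2, \qquad S = \{\, n^2 : n \in \mathbb{N} \,\} \subsetneq \mathbb{N}

Since f is a bijection, the set of squares, though a proper subset of the natural numbers, is in one-to-one correspondence with all of them.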
Paradoxes of the Infinite
Mathematics
177